diff --git a/docs/chat/index.html b/docs/chat/index.html
index 659ef3b9..e4bd7d4c 100644
--- a/docs/chat/index.html
+++ b/docs/chat/index.html
@@ -1,9 +1,9 @@
-
+
-https://gitter.im/jpetazzo/workshop-20170504-chicago
+https://gitter.im/jpetazzo/workshop-20170508-austin
diff --git a/docs/chat/index.html.sh b/docs/chat/index.html.sh
index 7f8611fc..08b21ba5 100755
--- a/docs/chat/index.html.sh
+++ b/docs/chat/index.html.sh
@@ -1,5 +1,5 @@
#!/bin/sh
-LINK=https://gitter.im/jpetazzo/workshop-20170504-chicago
+LINK=https://gitter.im/jpetazzo/workshop-20170508-austin
#LINK=https://dockercommunity.slack.com/messages/docker-mentor
#LINK=https://usenix-lisa.slack.com/messages/docker
sed "s,@@LINK@@,$LINK,g" >index.html <
+
+- The tutorial will run from 9:00am to 12:30pm
+
+- This will be fast-paced, but DON'T PANIC!
+
+- All the content is publicly available (slides, code samples, scripts)
+
+- There will be a coffee break at 10:30am
+
+ (please remind me if I forget about it!)
+
- Feel free to interrupt for questions at any time
- Live feedback, questions, help on [Gitter](chat)
-- All the content is publicly available (slides, code samples, scripts)
-
???
class: in-person
@@ -249,24 +250,20 @@ class: in-person
- Identifying bottlenecks
-- Introducing SwarmKit
-
---
class: in-person
## Chapter 2: scaling out our app on Swarm
-- Creating our first Swarm
+- Introducing SwarmKit
-- Docker Machine
+- Creating our first Swarm
- Running our first Swarm service
- Deploying a local registry
-- Overlay networks
-
- Global scheduling
- Integration with Compose
@@ -283,20 +280,18 @@ class: in-person
- Rolling updates
-- (Secrets management and encryption at rest)
-
- [Centralized logging](#logging)
- Metrics collection
----
+- Dealing with stateful services
+
+???
class: in-person
## Chapter 4: deeper in Swarm
-- Dealing with stateful services
-
- Controlling Docker from a container
- Node management
@@ -325,7 +320,7 @@ class: in-person
(but that's OK if you're not a Docker expert!)
----
+???
class: in-person
@@ -399,7 +394,7 @@ class: in-person
## You get five VMs
- Each person gets 5 private VMs (not shared with anybody else)
-- They'll remain up until the day after the tutorial
+- They'll remain up until tonight
- You should have a little card with login+password+IP addresses
- You can automatically SSH from one VM to another
@@ -440,13 +435,13 @@ wait
]
----
+???
class: in-person
## If doing or re-doing the workshop on your own ...
----
+???
class: self-paced
@@ -454,7 +449,7 @@ class: self-paced
- Use [Play-With-Docker](http://www.play-with-docker.com/)!
---
+???
- Main differences:
@@ -577,6 +572,22 @@ You are welcome to use the method that you feel the most comfortable with.
--
+- Docker 1.13 = Docker 17.03 (year.month, like Ubuntu)
+
+- Every month, there is a new "edge" release (with new features)
+
+- Every quarter, there is a new "stable" release
+
+- Docker CE releases are maintained for 4+ months
+
+- Docker EE releases are maintained for 12+ months
+
+---
+
+class: extra-details
+
+## Docker CE vs Docker EE
+
- Docker EE:
- $$$
@@ -591,6 +602,8 @@ You are welcome to use the method that you feel the most comfortable with.
---
+class: extra-details
+
## Why?
- More readable for enterprise users
@@ -1075,7 +1088,7 @@ Note: this is a fiction! We have enough entropy. But we need a pretext to scale
class: title
-# Scaling out
+Scaling out
---
@@ -1088,12 +1101,16 @@ class: title
- It is a plumbing part of the Docker ecosystem
+
+
- SwarmKit/swarmd/swarmctl → libcontainer/containerd/container-ctr
---
@@ -1102,22 +1119,26 @@ class: title
- Highly-available, distributed store based on [Raft](
https://en.wikipedia.org/wiki/Raft_%28computer_science%29)
-
(more on next slide)
+
(avoids depending on an external store: easier to deploy; higher performance)
+
+- Dynamic reconfiguration of Raft without interrupting cluster operations
- *Services* managed with a *declarative API*
(implementing *desired state* and *reconciliation loop*)
-- Automatic TLS keying and signing
-
-- Dynamic promotion/demotion of nodes, allowing to change
- how many nodes are actively part of the Raft consensus
-
- Integration with overlay networks and load balancing
-- And much more!
+- Strong emphasis on security:
+
+ - automatic TLS keying and signing; automatic cert rotation
+ - full encryption of the data plane; automatic key rotation
+ - least privilege architecture (single-node compromise ≠ cluster compromise)
+ - on-disk encryption with optional passphrase
---
+class: extra-details
+
## Where is the key/value store?
- Many orchestration systems use a key/value store backed by a consensus algorithm
@@ -1149,14 +1170,16 @@ class: title
- A *node* can be a *manager* or a *worker*
- (Note: in SwarmKit, *managers* are also *workers*)
-
-- A *manager* actively takes part in the Raft consensus
+- A *manager* actively takes part in the Raft consensus, and keeps the Raft log
- You can talk to a *manager* using the SwarmKit API
- One *manager* is elected as the *leader*; other managers merely forward requests to it
+- The *workers* get their instructions from the *managers*
+
+- Both *workers* and *managers* can run containers
+
---
## Illustration
@@ -1188,7 +1211,9 @@ You can refer to the [NOMENCLATURE](https://github.com/docker/swarmkit/blob/mast
- Since version 1.12, Docker Engine embeds SwarmKit
-- The Docker CLI features three new commands:
+- All the SwarmKit features are "asleep" until you enable "Swarm Mode"
+
+- Examples of Swarm Mode commands:
- `docker swarm` (enable Swarm mode; join a Swarm; adjust cluster parameters)
@@ -1196,6 +1221,8 @@ You can refer to the [NOMENCLATURE](https://github.com/docker/swarmkit/blob/mast
- `docker service` (create and manage services)
+???
+
- The Docker API exposes the same concepts
- The SwarmKit API is also exposed (on a separate socket)
@@ -1246,10 +1273,14 @@ Error response from daemon: This node is not a swarm manager. [...]
]
+???
+
If Docker tells you that it `could not choose an IP address to advertise`, see next slide!
---
+class: extra-details
+
## IP address to advertise
- When running in Swarm mode, each node *advertises* its address to the others
@@ -1270,6 +1301,8 @@ If Docker tells you that it `could not choose an IP address to advertise`, see n
---
+class: extra-details
+
## Which IP address should be advertised?
- If your nodes have only one IP address, it's safe to let autodetection do the job
@@ -1320,6 +1353,8 @@ docker swarm init --advertise-addr eth0:7777
---
+class: extra-details
+
## Checking that Swarm mode is enabled
.exercise[
@@ -1450,6 +1485,8 @@ ehb0...4fvx node2 Ready Active
---
+class: docker-machine
+
## Adding nodes using the Docker API
- We don't have to SSH into the other nodes, we can use the Docker API
@@ -1469,6 +1506,8 @@ ehb0...4fvx node2 Ready Active
---
+class: docker-machine
+
# Docker Machine
- Docker Machine has two primary uses:
@@ -1489,7 +1528,7 @@ ehb0...4fvx node2 Ready Active
---
-class: self-paced
+class: self-paced, docker-machine
## If you're using Play-With-Docker ...
@@ -1501,6 +1540,8 @@ class: self-paced
---
+class: docker-machine
+
## Docker Machine basic usage
- We will learn two commands:
@@ -1522,7 +1563,7 @@ You should see your 5 nodes.
---
-class: in-person
+class: in-person, docker-machine
## How did we make our 5 nodes show up there?
@@ -1542,6 +1583,8 @@ class: in-person
---
+class: docker-machine
+
## Using Docker Machine to communicate with a node
- To select a node, use `eval $(docker-machine env nodeX)`
@@ -1563,6 +1606,8 @@ class: in-person
---
+class: docker-machine
+
## Getting the token
- First, let's store the join token in a variable
@@ -1585,6 +1630,8 @@ class: in-person
---
+class: docker-machine
+
## Change the node targeted by the Docker CLI
- We need to set the right environment variables to communicate with `node3`
@@ -1605,6 +1652,8 @@ class: in-person
---
+class: docker-machine
+
## Checking which node we're talking to
- Let's use the Docker API to ask "who are you?" to the remote node
@@ -1626,6 +1675,8 @@ reflecting the `DOCKER_HOST` variable.
---
+class: docker-machine
+
## Adding a node through the Docker API
- We are going to use the same `docker swarm join` command as before
@@ -1641,6 +1692,8 @@ reflecting the `DOCKER_HOST` variable.
---
+class: docker-machine
+
## Going back to the local node
- We need to revert the environment variable(s) that we had set previously
@@ -1663,6 +1716,8 @@ From that point, we are communicating with `node1` again.
---
+class: docker-machine
+
## Checking the composition of our cluster
- Now that we're talking to `node1` again, we can use management commands
@@ -1771,14 +1826,16 @@ Some presentations from the Docker Distributed Systems Summit in Berlin:
- Let's make our cluster highly available
+???
+
- Can you write a tiny script to automatically retrieve the manager token,
and automatically add remaining nodes to the cluster?
---
+???
- Hint: we want to use `for N in $(seq 4 5) ...`
----
+???
## Adding more managers
@@ -1793,7 +1850,7 @@ done
unset DOCKER_HOST
```
----
+???
## Adding more managers
@@ -1810,6 +1867,48 @@ eval $(docker-machine env -u)
---
+## Building our full cluster
+
+- We could SSH to nodes 3, 4, 5 and copy-paste the command
+
+--
+
+- Or we could use the AWESOME POWER OF THE SHELL!
+
+--
+
+
+
+--
+
+- No, not *that* shell
+
+---
+
+## Let's form like Swarm-tron
+
+- Let's get the token, and loop over the remaining nodes with SSH
+
+.exercise[
+
+- Obtain the manager token:
+ ```bash
+ TOKEN=$(docker swarm join-token -q manager)
+ ```
+
+- Loop over the 3 remaining nodes:
+ ```bash
+ for NODE in node3 node4 node5; do
+ ssh $NODE docker swarm join --token $TOKEN node1:2377
+ done
+ ```
+
+]
+
+[That was easy.](https://www.youtube.com/watch?v=3YmMNpbFjp0)
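
If node names follow a `nodeN` pattern, the same loop can be generated with `seq`. A dry-run sketch (illustration only, not part of the exercise: `echo` stands in for `ssh`, and the token is a placeholder):

```shell
# Dry run: print the join command for each remaining node instead of executing it.
# In real life: TOKEN=$(docker swarm join-token -q manager), and `ssh node$N ...`
TOKEN=SWMTKN-placeholder
for N in $(seq 3 5); do
  echo "node$N: docker swarm join --token $TOKEN node1:2377"
done
```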
+
+---
+
## You can control the Swarm from any manager node
.exercise[
@@ -1833,6 +1932,8 @@ As we saw earlier, you can only control the Swarm through a manager node.
---
+class: self-paced
+
## Play-With-Docker node status icon
- If you're using Play-With-Docker, you get node status icons
@@ -1847,12 +1948,14 @@ As we saw earlier, you can only control the Swarm through a manager node.
---
-## Promoting nodes
+## Dynamically changing the role of a node
-- Instead of adding a manager node, we can also promote existing workers
-
-- Nodes can be promoted (and demoted) at any time
+- We can change the role of a node on the fly:
+ `docker node promote XXX` → make XXX a manager
+
+ `docker node demote XXX` → make XXX a worker
+
.exercise[
- See the current list of nodes:
@@ -1860,9 +1963,9 @@ As we saw earlier, you can only control the Swarm through a manager node.
docker node ls
```
-- Promote the two worker nodes to be managers:
+- Promote any worker node to be a manager:
```
- docker node promote XXX YYY
+ docker node promote XXX
```
]
@@ -1890,9 +1993,9 @@ As we saw earlier, you can only control the Swarm through a manager node.
- Intuitively, it's harder to reach consensus in larger groups
-- With Raft, each write needs to be acknowledged by the majority of nodes
+- With Raft, writes are sent to all nodes, but must be acknowledged by a majority
-- More nodes = more chance that we will have to wait for some laggard
+- More nodes = more network traffic
- Bigger network = more latency
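
The arithmetic behind these trade-offs: Raft needs a majority (quorum) of ⌊N/2⌋+1 managers to acknowledge a write, so a cluster of N managers tolerates the loss of N minus quorum of them. A quick sketch (illustration only):

```shell
# Quorum (majority) size for N Raft managers,
# and how many manager failures a cluster of N tolerates
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerance() { echo $(( $1 - ($1 / 2 + 1) )); }

for N in 1 3 5 7; do
  echo "$N manager(s): quorum $(quorum $N), tolerates $(tolerance $N) failure(s)"
done
```

This is why odd numbers of managers are recommended: going from 3 to 4 managers increases the quorum without improving fault tolerance.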
@@ -2010,6 +2113,8 @@ Note: by default, when a container is destroyed (e.g. when scaling down), its lo
---
+class: extra-details
+
## Looking up where our container is running
- The `docker service ps` command told us where our container was scheduled
@@ -2029,6 +2134,8 @@ Note: by default, when a container is destroyed (e.g. when scaling down), its lo
---
+class: extra-details
+
## Viewing the logs of the container
.exercise[
@@ -2149,6 +2256,8 @@ The latest version of the ElasticSearch image won't start without mandatory conf
---
+class: extra-details
+

---
@@ -2257,8 +2366,6 @@ There are many ways to deal with inbound traffic on a Swarm cluster.
---
-name: here
-
## Visualize container placement
- Let's leverage the Docker API!
@@ -2308,6 +2415,24 @@ it to Swarm and maintains it.
---
+## Why this is more important than you think
+
+- The visualizer accesses the Docker API *from within a container*
+
+- This is a common pattern: run container management tools *in containers*
+
+- Instead of merely viewing your cluster, such a tool could take care of logging, metrics, autoscaling ...
+
+- We can run it as a service, too! We won't do that here, but the command would look like:
+
+ ```bash
+ docker service create \
+ --mount source=/var/run/docker.sock,type=bind,target=/var/run/docker.sock \
+ --name viz --constraint node.role==manager ...
+ ```
+
+---
+
## Terminate our services
- Before moving on, we will remove those services
@@ -2331,21 +2456,19 @@ it to Swarm and maintains it.
class: title
-# Our app on Swarm
+Our app on Swarm
---
## What's on the menu?
-In this part, we will cover:
+In this part, we will:
-- building images for our app,
+- **build** images for our app,
-- shipping those images with a registry,
+- **ship** these images with a registry,
-- running them through the services concept,
-
-- enabling inter-container communication with overlay networks.
+- **run** services using these images.
---
@@ -2353,9 +2476,9 @@ In this part, we will cover:
- When we do `docker-compose up`, images are built for our services
-- Those images are present only on the local node
+- These images are present only on the local node
-- We need those images to be distributed on the whole Swarm
+- We need these images to be distributed on the whole Swarm
- The easiest way to achieve that is to use a Docker registry
@@ -2364,6 +2487,8 @@ In this part, we will cover:
---
+class: extra-details
+
## Build, ship, and run, for a single service
If we had only one service (built from a `Dockerfile` in the
@@ -2383,9 +2508,7 @@ We just have to adapt this to our application, which has 4 services!
- Build on our local node (`node1`)
-- Tag images with a version number
-
- (timestamp; git hash; semantic...)
+- Tag images so that they are named `localhost:5000/servicename`
- Upload them to a registry
@@ -2556,22 +2679,24 @@ The curl command should now output:
---
+class: manual-btp
+
## Build, tag, and push our application container images
-- Scriptery to the rescue!
+- Compose has named our images `dockercoins_XXX` for each service
+
+- We need to retag them (to `127.0.0.1:5000/XXX:v1`) and push them
.exercise[
-- Set `DOCKER_REGISTRY` and `TAG` environment variables to use our local registry
-
+- Set `REGISTRY` and `TAG` environment variables to use our local registry
- And run this little for loop:
```bash
- DOCKER_REGISTRY=127.0.0.1:5000
- TAG=v0.1
+ REGISTRY=127.0.0.1:5000
+ TAG=v1
for SERVICE in hasher rng webui worker; do
- docker-compose build $SERVICE
- docker tag dockercoins_$SERVICE $DOCKER_REGISTRY/dockercoins_$SERVICE:$TAG
- docker push $DOCKER_REGISTRY/dockercoins_$SERVICE
+ docker tag dockercoins_$SERVICE $REGISTRY/$SERVICE:$TAG
+ docker push $REGISTRY/$SERVICE
done
```
@@ -2579,12 +2704,16 @@ The curl command should now output:
---
+class: manual-btp
+
# Overlay networks
-- SwarmKit integrates with overlay networks, without requiring
- an extra key/value store
+- SwarmKit integrates with overlay networks
-- Overlay networks are created the same way as before
+- Networks are created with `docker network create`
+
+- Make sure to specify that you want an *overlay* network
+
(otherwise you will get a local *bridge* network by default)
.exercise[
@@ -2593,7 +2722,19 @@ The curl command should now output:
docker network create --driver overlay dockercoins
```
-- Check existing networks:
+]
+
+---
+
+class: manual-btp
+
+## Viewing existing networks
+
+- Let's confirm that our network was created
+
+.exercise[
+
+- List existing networks:
```bash
docker network ls
```
@@ -2602,6 +2743,8 @@ The curl command should now output:
---
+class: manual-btp
+
## Can you spot the differences?
The networks `dockercoins` and `ingress` are different from the other ones.
@@ -2610,6 +2753,8 @@ Can you see how?
--
+class: manual-btp
+
- They are using a different kind of ID, reflecting the fact that they
are SwarmKit objects instead of "classic" Docker Engine objects.
@@ -2619,6 +2764,8 @@ Can you see how?
---
+class: manual-btp, extra-details
+
## Caveats
.warning[In Docker 1.12, you cannot join an overlay network with `docker run --net ...`.]
@@ -2637,6 +2784,8 @@ It alters the code path for `docker run`, so it is allowed only under strict cir
---
+class: manual-btp
+
## Run the application
- First, create the `redis` service; that one is using a Docker Hub image
@@ -2652,6 +2801,8 @@ It alters the code path for `docker run`, so it is allowed only under strict cir
---
+class: manual-btp
+
## Run the other services
- Then, start the other services one by one
@@ -2662,11 +2813,11 @@ It alters the code path for `docker run`, so it is allowed only under strict cir
- Start the other services:
```bash
- DOCKER_REGISTRY=127.0.0.1:5000
- TAG=v0.1
+ REGISTRY=127.0.0.1:5000
+ TAG=v1
for SERVICE in hasher rng webui worker; do
- docker service create --network dockercoins --name $SERVICE \
- $DOCKER_REGISTRY/dockercoins_$SERVICE:$TAG
+ docker service create --network dockercoins --detach=true \
+ --name $SERVICE $REGISTRY/$SERVICE:$TAG
done
```
@@ -2693,6 +2844,8 @@ It alters the code path for `docker run`, so it is allowed only under strict cir
---
+class: manual-btp
+
## Expose our application web UI
- We need to connect to the `webui` service, but it is not publishing any port
@@ -2703,7 +2856,7 @@ It alters the code path for `docker run`, so it is allowed only under strict cir
- Update `webui` so that we can connect to it from outside:
```bash
- docker service update webui --publish-add 8000:80
+ docker service update webui --publish-add 8000:80 --detach=false
```
]
@@ -2713,6 +2866,8 @@ Note: to "de-publish" a port, you would have to specify the container port.
---
+class: manual-btp
+
## What happens when we modify a service?
- Let's find out what happened to our `webui` service
@@ -2727,6 +2882,8 @@ Note: to "de-publish" a port, you would have to specify the container port.
--
+class: manual-btp
+
The first version of the service (the one that was not exposed) has been shutdown.
It has been replaced by the new version, with port 80 accessible from outside.
@@ -2747,10 +2904,6 @@ It has been replaced by the new version, with port 80 accessible from outside.
]
-You might have to wait a bit for the container to be up and running.
-
-Check its status with `docker service ps webui`.
-
---
## Scaling the application
@@ -2763,7 +2916,7 @@ Check its status with `docker service ps webui`.
- Bring up more workers:
```bash
- docker service update worker --replicas 10
+ docker service update worker --replicas 10 --detach=false
```
- Check the result in the web UI
@@ -2801,7 +2954,7 @@ You should see the performance peaking at 10 hashes/s (like before).
- Re-create the `rng` service with *global scheduling*:
```bash
docker service create --name rng --network dockercoins --mode global \
- $DOCKER_REGISTRY/dockercoins_rng:$TAG
+ --detach=false $REGISTRY/rng:$TAG
```
- Look at the result in the web UI
@@ -2810,6 +2963,8 @@ You should see the performance peaking at 10 hashes/s (like before).
---
+class: extra-details
+
## Why do we have to re-create the service to enable global scheduling?
- Enabling it dynamically would make rolling updates semantics very complex
@@ -3084,7 +3239,7 @@ version: "3"
services:
rng:
build: dockercoins/rng
- image: ${REGISTRY-127.0.0.1:5000}/rng:${COLON-latest}
+ image: ${REGISTRY-127.0.0.1:5000}/rng:${TAG-latest}
deploy:
mode: global
...
@@ -3093,30 +3248,12 @@ services:
...
worker:
build: dockercoins/worker
- image: ${REGISTRY-127.0.0.1:5000}/worker:${COLON-latest}
+ image: ${REGISTRY-127.0.0.1:5000}/worker:${TAG-latest}
...
deploy:
replicas: 10
```
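
The `${REGISTRY-127.0.0.1:5000}` and `${TAG-latest}` references rely on shell-style default values, which Compose's variable substitution also honors: if the variable is unset, the text after the `-` is used instead. In plain shell:

```shell
# ${VAR-default} expands to $VAR when it is set, and to "default" otherwise
unset REGISTRY TAG
echo "${REGISTRY-127.0.0.1:5000}/rng:${TAG-latest}"
# -> 127.0.0.1:5000/rng:latest

REGISTRY=myregistry.local:5000 TAG=v1
echo "${REGISTRY-127.0.0.1:5000}/rng:${TAG-latest}"
# -> myregistry.local:5000/rng:v1
```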
-???
-
-## What's this `logging` section?
-
-- This application stack is setup to send logs to a local GELF receiver
-
-- We will use another "ready-to-use" Compose file to deploy an ELK stack
-
-- We won't give much more details about the ELK stack right now
-
- (But there is a chapter dedicated to it in another part!)
-
-- A given container can have only one logging driver at a time (for now)
-
-- As a result, the `gelf` driver is superseding the default `json-file` driver
-
-- ... Which means that the output of these containers won't show up in `docker logs`
-
---
## Deploying the application
@@ -3258,6 +3395,8 @@ See [this documentation page](https://docs.docker.com/compose/extends/) for more
---
+class: extra-details
+
## Good to know ...
- Compose file version 3 adds the `deploy` section
@@ -3355,6 +3494,8 @@ You should now be able to connect to port 8000 and see the DockerCoins web UI.
---
+class: extra-details
+
## Troubleshooting overlay networks
+That's all folks!
+
+.small[.small[
+
+Jérôme ([@jpetazzo](https://twitter.com/jpetazzo)) — [@docker](https://twitter.com/docker)
+
+AJ ([@s0ulshake](https://twitter.com/s0ulshake)) — *For hire!*
+
+`curl cv.soulshake.net`
+
+]]
-
-
@@ -6106,7 +6419,7 @@ class: title
var slideshow = remark.create({
ratio: '16:9',
highlightSpans: true,
- excludedClasses: ["self-paced", "extra-details"]
+ excludedClasses: ["self-paced", "extra-details", "docker-machine", "node-info", "swarmtools", "secrets", "encryption-at-rest", "elk-manual", "prom-manual"]
});