large update to fix many "slide debt" issues

with swarm stacks, service updates, rollbacks, and healthchecks
This commit is contained in:
Bret Fisher
2018-11-26 17:00:35 -05:00
committed by Jérôme Petazzoni
parent 523ca55831
commit cb624755e4
11 changed files with 57 additions and 80 deletions

View File

@@ -5,6 +5,3 @@ RUN gem install thin
 ADD hasher.rb /
 CMD ["ruby", "hasher.rb"]
 EXPOSE 80
-HEALTHCHECK \
-  --interval=1s --timeout=2s --retries=3 --start-period=1s \
-  CMD curl http://localhost/ || exit 1
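The trailing `|| exit 1` in that healthcheck matters: Docker only defines exit status 0 (healthy) and 1 (unhealthy) for healthcheck commands, while curl can fail with other codes (e.g. 7 for connection refused). A minimal shell sketch of that normalization, with subshells standing in for curl:

```bash
# A healthcheck command's exit status is all Docker looks at:
# 0 = healthy, 1 = unhealthy. "|| exit 1" collapses any curl
# failure code into a plain 1. (Subshells stand in for curl here.)
probe() {
  sh -c "$1 || exit 1"
  echo "probe exit code: $?"
}
probe "true"      # prints: probe exit code: 0
probe "(exit 7)"  # prints: probe exit code: 1
```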

View File

@@ -40,10 +40,10 @@ chapters:
 - swarm/testingregistry.md
 - swarm/btp-manual.md
 - swarm/swarmready.md
-- swarm/compose2swarm.md
+- swarm/stacks.md
 - swarm/cicd.md
 - swarm/updatingservices.md
-#- swarm/rollingupdates.md
+- swarm/rollingupdates.md
 - swarm/healthchecks.md
 - swarm/operatingswarm.md
 - swarm/netshoot.md

View File

@@ -40,7 +40,7 @@ chapters:
 #- swarm/testingregistry.md
 #- swarm/btp-manual.md
 #- swarm/swarmready.md
-- swarm/compose2swarm.md
+- swarm/stacks.md
 - swarm/cicd.md
 - swarm/updatingservices.md
 #- swarm/rollingupdates.md

View File

@@ -41,7 +41,7 @@ chapters:
 - swarm/testingregistry.md
 - swarm/btp-manual.md
 - swarm/swarmready.md
-- swarm/compose2swarm.md
+- swarm/stacks.md
 - swarm/cicd.md
 - |
   name: part-2

View File

@@ -41,7 +41,7 @@ chapters:
 - swarm/testingregistry.md
 - swarm/btp-manual.md
 - swarm/swarmready.md
-- swarm/compose2swarm.md
+- swarm/stacks.md
 - |
   name: part-2

View File

@@ -10,9 +10,10 @@
 - And run this little for loop:
   ```bash
   cd ~/container.training/dockercoins
-  REGISTRY=127.0.0.1:5000 TAG=v1
+  export REGISTRY=127.0.0.1:5000
+  export TAG=v0.1
   for SERVICE in hasher rng webui worker; do
-    docker tag dockercoins_$SERVICE $REGISTRY/$SERVICE:$TAG
+    docker build -t $REGISTRY/$SERVICE:$TAG ./$SERVICE
     docker push $REGISTRY/$SERVICE
   done
   ```
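The switch from plain assignments to `export` in this hunk is not cosmetic: `docker-compose` reads `$REGISTRY` and `$TAG` from its environment, and only exported variables reach child processes. A quick sketch:

```bash
# Plain assignment: visible in this shell, invisible to children.
unset TAG
TAG=v0.1
sh -c 'echo "child sees: [$TAG]"'   # prints: child sees: []

# After export, child processes (like docker-compose) see it too.
export TAG
sh -c 'echo "child sees: [$TAG]"'   # prints: child sees: [v0.1]
```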
@@ -119,12 +120,12 @@ It alters the code path for `docker run`, so it is allowed only under strict cir
 - Start the other services:
   ```bash
-  REGISTRY=127.0.0.1:5000
-  TAG=v1
-  for SERVICE in hasher rng webui worker; do
-    docker service create --network dockercoins --detach=true \
-           --name $SERVICE $REGISTRY/$SERVICE:$TAG
-  done
+  export REGISTRY=127.0.0.1:5000
+  export TAG=v0.1
+  for SERVICE in hasher rng webui worker; do
+    docker service create --network dockercoins --detach=true \
+           --name $SERVICE $REGISTRY/$SERVICE:$TAG
+  done
   ```
 ]

View File

@@ -96,6 +96,8 @@ We will use the following Compose file (`stacks/dockercoins+healthcheck.yml`):
   hasher:
     build: dockercoins/hasher
     image: ${REGISTRY-127.0.0.1:5000}/hasher:${TAG-latest}
+    healthcheck:
+      test: curl -f http://localhost/ || exit 1
     deploy:
       replicas: 7
       update_config:
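The `${REGISTRY-127.0.0.1:5000}` syntax in the image references is plain shell parameter expansion, which Compose supports as well: use `$REGISTRY` if it is set, else fall back to the default. A sketch (the `myregistry:5000` value is just an example):

```bash
# ${VAR-default} falls back to "default" when VAR is unset,
# so the same Compose file works with or without an exported REGISTRY/TAG.
unset REGISTRY TAG
echo "${REGISTRY-127.0.0.1:5000}/hasher:${TAG-latest}"
# prints: 127.0.0.1:5000/hasher:latest

export REGISTRY=myregistry:5000 TAG=v0.1   # hypothetical values
echo "${REGISTRY-127.0.0.1:5000}/hasher:${TAG-latest}"
# prints: myregistry:5000/hasher:v0.1
```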
@@ -127,17 +129,15 @@ We need to update our services with a healthcheck.
 ]
 This will also scale the `hasher` service to 7 instances.
 ---
 ## Visualizing an automated rollback
-Here's a good example of why healthchecks are necessary.
+- Here's a good example of why healthchecks are necessary
-This breaking change will prevent the app from listening on the correct port.
+- This breaking change will prevent the app from listening on the correct port
-The container still runs fine, it just won't accept connections on port 80.
+- The container still runs fine; it just won't accept connections on port 80
 .exercise[
@@ -148,11 +148,10 @@ The container still runs fine, it just won't accept connections on port 80.
 - Build, ship, and run the new image:
   ```bash
-  export TAG=v0.5
+  export TAG=v0.3
   docker-compose -f dockercoins+healthcheck.yml build
   docker-compose -f dockercoins+healthcheck.yml push
-  docker service update dockercoins_hasher \
-         --image=127.0.0.1:5000/hasher:$TAG
+  docker service update --image=127.0.0.1:5000/hasher:$TAG dockercoins_hasher
   ```
 ]

View File

@@ -1,40 +1,44 @@
 # Rolling updates
-- Let's force an update on worker to watch it update
+- Let's force an update on hasher to watch it update
 .exercise[
-- First lets scale hasher to 5 replicas:
+- First let's scale up hasher to 7 replicas:
   ```bash
-  docker service scale dockercoins_worker=5
+  docker service scale dockercoins_hasher=7
   ```
-- Force a rolling update (replace containers) without a change:
+- Force a rolling update (replace containers) to a different image:
   ```bash
-  docker service update --force dockercoins_worker
+  docker service update --image 127.0.0.1:5000/hasher:v0.1 dockercoins_hasher
   ```
 ]
+- You can run `docker events` in a separate `node1` shell to see Swarm actions
+- You can use `--force` to replace containers without a config change
 ---
 ## Changing the upgrade policy
-- We can set upgrade parallelism (how many instances to update at the same time)
-- And upgrade delay (how long to wait between two batches of instances)
+- We can change many options on how updates happen
 .exercise[
-- Change the parallelism to 2 and the delay to 5 seconds:
+- Change the parallelism to 2, and the max failed container updates to 25%:
   ```bash
-  docker service update dockercoins_worker \
-         --update-parallelism 2 --update-delay 5s
+  docker service update --update-parallelism 2 \
+         --update-max-failure-ratio .25 dockercoins_hasher
   ```
 ]
-The current upgrade will continue at a faster pace.
+- No containers were replaced; this is called a "no-op" change
+- Service metadata-only changes don't require orchestrator operations
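For concreteness, here is what those update parameters work out to for this demo's 7-replica `hasher` service (a back-of-the-envelope sketch, not Swarm's actual scheduler):

```bash
# With 7 replicas, --update-parallelism 2 and --update-max-failure-ratio .25:
replicas=7
parallelism=2
max_failure_pct=25   # .25 as a percentage, for integer math
batches=$(( (replicas + parallelism - 1) / parallelism ))
allowed_failures=$(( replicas * max_failure_pct / 100 ))
echo "update proceeds in $batches batches of up to $parallelism tasks"
echo "more than $allowed_failures failed task(s) triggers the failure_action"
# prints: update proceeds in 4 batches of up to 2 tasks
# prints: more than 1 failed task(s) triggers the failure_action
```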
---
@@ -58,15 +62,17 @@ The current upgrade will continue at a faster pace.
 - At any time (e.g. before the upgrade is complete), we can rollback:
-  - by editing the Compose file and redeploying;
+  - by editing the Compose file and redeploying
-  - or with the special `--rollback` flag
+  - by using the special `--rollback` flag with `service update`
+  - by using `docker service rollback`
 .exercise[
-- Try to rollback the service:
+- Try to rollback the webui service:
   ```bash
-  docker service update dockercoins_worker --rollback
+  docker service rollback dockercoins_webui
   ```
 ]
@@ -79,6 +85,8 @@ What happens with the web UI graph?
 - Rollback reverts to the previous service definition
   - see `PreviousSpec` in `docker service inspect <servicename>`
+- If we visualize successive updates as a stack:
+  - it doesn't "pop" the latest update
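A minimal model of that behavior (assumed tags, and a deliberate simplification of what Swarm stores in `Spec`/`PreviousSpec`):

```bash
# Rollback swaps the current and previous spec rather than popping a history
# stack, so two successive rollbacks land you back on the newest definition.
spec="hasher:v0.2"
previous_spec="hasher:v0.1"
rollback() {
  tmp=$spec
  spec=$previous_spec
  previous_spec=$tmp
}
rollback; echo "running: $spec"   # prints: running: hasher:v0.1
rollback; echo "running: $spec"   # prints: running: hasher:v0.2
```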

View File

@@ -14,11 +14,12 @@
 ---
-## Updating a single service the hard way
+## Updating a single service with `service update`
 - To update a single service, we could do the following:
   ```bash
-  REGISTRY=localhost:5000 TAG=v0.3
+  export REGISTRY=127.0.0.1:5000
+  export TAG=v0.2
   IMAGE=$REGISTRY/dockercoins_webui:$TAG
   docker build -t $IMAGE webui/
   docker push $IMAGE
@@ -31,11 +32,11 @@
 ---
-## Updating services the easy way
+## Updating services with `stack deploy`
 - With the Compose integration, all we have to do is:
   ```bash
-  export TAG=v0.3
+  export TAG=v0.2
   docker-compose -f composefile.yml build
   docker-compose -f composefile.yml push
   docker stack deploy -c composefile.yml nameofstack
@@ -47,6 +48,8 @@
 - We don't need to learn new commands!
+- It will diff each service and only update the ones that changed
 ---
## Changing the code
@@ -77,7 +80,7 @@
 - Build, ship, and run:
   ```bash
-  export TAG=v0.3
+  export TAG=v0.2
   docker-compose -f dockercoins.yml build
   docker-compose -f dockercoins.yml push
   docker stack deploy -c dockercoins.yml dockercoins
@@ -85,6 +88,8 @@
 ]
+- Because we tag all images `v0.2` in this demo, `stack deploy` will update all of the services
 ---
 ## Viewing our changes

View File

@@ -10,6 +10,8 @@ services:
   hasher:
     build: dockercoins/hasher
     image: ${REGISTRY-127.0.0.1:5000}/hasher:${TAG-latest}
+    healthcheck:
+      test: curl -f http://localhost/ || exit 1
     deploy:
       replicas: 7
       update_config:

View File

@@ -1,35 +0,0 @@
-version: "3"
-services:
-  rng:
-    build: dockercoins/rng
-    image: ${REGISTRY-127.0.0.1:5000}/rng:${TAG-latest}
-    deploy:
-      mode: global
-  hasher:
-    build: dockercoins/hasher
-    image: ${REGISTRY-127.0.0.1:5000}/hasher:${TAG-latest}
-    deploy:
-      replicas: 7
-      update_config:
-        delay: 5s
-        failure_action: rollback
-        max_failure_ratio: .5
-        monitor: 5s
-        parallelism: 1
-  webui:
-    build: dockercoins/webui
-    image: ${REGISTRY-127.0.0.1:5000}/webui:${TAG-latest}
-    ports:
-      - "8000:80"
-  redis:
-    image: redis
-  worker:
-    build: dockercoins/worker
-    image: ${REGISTRY-127.0.0.1:5000}/worker:${TAG-latest}
-    deploy:
-      replicas: 10