Merge pull request #139 from bridgetkromhout/boosterconf2018

Boosterconf2018 fork update
This commit is contained in:
Jérôme Petazzoni
2018-03-06 18:00:44 -08:00
committed by GitHub
4 changed files with 57 additions and 9 deletions

CHECKLIST.md (new file, 24 lines)

@@ -0,0 +1,24 @@
+Checklist to use when delivering a workshop
+Authored by Jérôme; additions by Bridget
+- [ ] Create event-named branch (such as `conferenceYYYY`) in the [main repo](https://github.com/jpetazzo/container.training/)
+- [ ] Create file `slides/_redirects` containing a link to the desired tutorial: `/ /kube-halfday.yml.html 200`
+- [ ] Push local branch to GitHub and merge into main repo
+- [ ] [Netlify setup](https://app.netlify.com/sites/container-training/settings/domain): create subdomain for event-named branch
+- [ ] Add link to event-named branch to [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
+- [ ] Update the slide that says which versions we are using for [kube](https://github.com/jpetazzo/container.training/blob/master/slides/kube/versions-k8s.md) or [swarm](https://github.com/jpetazzo/container.training/blob/master/slides/swarm/versions.md) workshops
+- [ ] Update the version of Compose and Machine in [settings](https://github.com/jpetazzo/container.training/tree/master/prepare-vms/settings)
+- [ ] (optional) Create chatroom
+- [ ] (optional) Set chatroom in YML ([kube half-day example](https://github.com/jpetazzo/container.training/blob/master/slides/kube-halfday.yml#L6-L8)) and deploy
+- [ ] (optional) Put chat link on [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
+- [ ] How many VMs do we need? Check with event organizers ahead of time
+- [ ] Provision VMs (slightly more than we think we'll need)
+- [ ] Change password on presenter's VMs (to forestall any hijinks)
+- [ ] Onsite: walk the room to count seats, check power supplies, lectern, A/V setup
+- [ ] Print cards
+- [ ] Cut cards
+- [ ] Last-minute merge from master
+- [ ] Check that all looks good
+- [ ] DELIVER!
+- [ ] Shut down VMs
+- [ ] Update index.html to remove chat link and move session to past things
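The first few checklist items can be sketched as shell commands. This is a hypothetical walkthrough, not part of the repo's tooling; the branch and tutorial names are this event's examples, so adjust them per conference:

```shell
# Sketch: create the Netlify redirect file for an event branch
# (branch name and redirect target are examples for this event)
mkdir -p slides
printf '/ /kube-halfday.yml.html 200\n' > slides/_redirects
cat slides/_redirects
# then, in a clone of jpetazzo/container.training:
#   git checkout -b boosterconf2018
#   git add slides/_redirects
#   git commit -m "Add Netlify redirect for boosterconf2018"
#   git push origin boosterconf2018
```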


@@ -1,6 +1,5 @@
 title: |
-  Deploying and Scaling Microservices
-  with Kubernetes
+  Kubernetes 101
 #chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
@@ -12,7 +11,7 @@ exclude:
 chapters:
 - common/title.md
-- logistics.md
+- logistics-bridget.md
 - kube/intro.md
 - common/about-slides.md
 - common/toc.md
@@ -36,4 +35,4 @@ chapters:
 - kube/rollout.md
 - kube/whatsnext.md
 - common/thankyou.md
-- kube/links.md
+- kube/links-bridget.md


@@ -185,6 +185,7 @@ The curl command should now output:
 - Build and push the images:
   ```bash
   export REGISTRY
+  export TAG=v0.1
   docker-compose -f dockercoins.yml build
   docker-compose -f dockercoins.yml push
   ```
@@ -220,6 +221,30 @@ services:
+---
+class: extra-details
+## Avoiding the `latest` tag
+.warning[Make sure that you've set the `TAG` variable properly!]
+- If you don't, the tag will default to `latest`
+- The problem with `latest`: nobody knows what it points to!
+  - the latest commit in the repo?
+  - the latest commit in some branch? (Which one?)
+  - the latest tag?
+  - some random version pushed by a random team member?
+- If you keep pushing the `latest` tag, how do you roll back?
+- Image tags should be meaningful, i.e. correspond to code branches, tags, or hashes
 ---
 ## Deploying all the things
 - We can now deploy our code (as well as a redis instance)
@@ -234,7 +259,7 @@ services:
 - Deploy everything else:
   ```bash
   for SERVICE in hasher rng webui worker; do
-    kubectl run $SERVICE --image=$REGISTRY/$SERVICE
+    kubectl run $SERVICE --image=$REGISTRY/$SERVICE:$TAG
   done
   ```
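The "Avoiding the `latest` tag" slide argues that tags should correspond to branches, tags, or hashes. As one hedged illustration (not the workshop's method, which pins `TAG=v0.1` by hand), the tag could be derived from the current git revision:

```shell
# Sketch: derive a meaningful image tag from the current git revision,
# falling back to a fixed version when not inside a git checkout
TAG=$(git rev-parse --short HEAD 2>/dev/null || echo v0.1)
echo "images would be tagged as :$TAG"
```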
@@ -268,7 +293,7 @@ services:
 ---
 # Exposing services internally
 - Three deployments need to be reachable by others: `hasher`, `redis`, `rng`


@@ -149,7 +149,7 @@ Our rollout is stuck. However, the app is not dead (just 10% slower).
 - We want to:
-  - revert to `v0.1` (which we now realize we didn't tag - yikes!)
+  - revert to `v0.1`
   - be conservative on availability (always have desired number of available workers)
   - be aggressive on rollout speed (update more than one pod at a time)
   - give some time to our workers to "warm up" before starting more
@@ -163,7 +163,7 @@ spec:
 spec:
   containers:
   - name: worker
-    image: $REGISTRY/worker:latest
+    image: $REGISTRY/worker:v0.1
 strategy:
   rollingUpdate:
     maxUnavailable: 0
@@ -192,7 +192,7 @@ spec:
 spec:
   containers:
   - name: worker
-    image: $REGISTRY/worker:latest
+    image: $REGISTRY/worker:v0.1
 strategy:
   rollingUpdate:
     maxUnavailable: 0