Get rid of $TAG and $REGISTRY

These variables are useful when deploying images
from a local registry (or from somewhere other than
the Docker Hub), but they turned out to be quite
confusing. After holding on to them for a while,
I think it is time to see the error of my ways
and simplify things.
Jerome Petazzoni
2019-10-31 19:49:35 -05:00
parent ff132fd728
commit 7d4331477a
3 changed files with 33 additions and 76 deletions


@@ -15,26 +15,3 @@
- `dockercoins/webui:v0.1`
- `dockercoins/worker:v0.1`
----
-## Setting `$REGISTRY` and `$TAG`
-- In the upcoming exercises and labs, we use a couple of environment variables:
-  - `$REGISTRY` as a prefix to all image names
-  - `$TAG` as the image version tag
-- For example, the worker image is `$REGISTRY/worker:$TAG`
-- If you copy-paste the commands in these exercises:
-  **make sure that you set `$REGISTRY` and `$TAG` first!**
-- For example:
-  ```bash
-  export REGISTRY=dockercoins TAG=v0.1
-  ```
-  (this will expand `$REGISTRY/worker:$TAG` to `dockercoins/worker:v0.1`)
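For reference, a minimal sketch of how the removed variables were used with a self-hosted registry; the host and port below are hypothetical:

```bash
# Hypothetical self-hosted registry; adjust host and port to your setup.
export REGISTRY=127.0.0.1:5000
export TAG=v0.1
# $REGISTRY/worker:$TAG then expands to 127.0.0.1:5000/worker:v0.1
```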


@@ -11,16 +11,36 @@
- Deploy everything else:
```bash
-set -u
-for SERVICE in hasher rng webui worker; do
-  kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
-done
+kubectl create deployment hasher --image=dockercoins/hasher:v0.1
+kubectl create deployment rng --image=dockercoins/rng:v0.1
+kubectl create deployment webui --image=dockercoins/webui:v0.1
+kubectl create deployment worker --image=dockercoins/worker:v0.1
```
]
+---
+class: extra-details
+## Deploying other images
+- If we wanted to deploy images from another registry ...
+- ... Or with a different tag ...
+- ... We could use the following snippet:
+  ```bash
+  REGISTRY=dockercoins
+  TAG=v0.1
+  for SERVICE in hasher rng webui worker; do
+    kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
+  done
+  ```
---
## Is this working?
- After waiting for the deployment to complete, let's look at the logs!
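A minimal sketch of that step, assuming the Deployments created above (`kubectl logs` accepts a Deployment name and picks one of its Pods):

```bash
# Wait until the worker Deployment is available, then tail its logs.
kubectl wait deployment worker --for=condition=available
kubectl logs deployment/worker --tail=10 --follow
```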


@@ -61,32 +61,6 @@
----
-## Building a new version of the `worker` service
-.warning[
-Only run these commands if you have built and pushed DockerCoins to a local registry.
-<br/>
-If you are using images from the Docker Hub (`dockercoins/worker:v0.1`), skip this.
-]
-.exercise[
-- Go to the `stacks` directory (`~/container.training/stacks`)
-- Edit `dockercoins/worker/worker.py`; update the first `sleep` line to sleep 1 second
-- Build a new tag and push it to the registry:
-  ```bash
-  #export REGISTRY=localhost:3xxxx
-  export TAG=v0.2
-  docker-compose -f dockercoins.yml build
-  docker-compose -f dockercoins.yml push
-  ```
-]
---
## Rolling out the new `worker` service
.exercise[
@@ -105,7 +79,7 @@ If you are using images from the Docker Hub (`dockercoins/worker:v0.1`), skip th
- Update `worker` either with `kubectl edit`, or by running:
```bash
-kubectl set image deploy worker worker=$REGISTRY/worker:$TAG
+kubectl set image deploy worker worker=dockercoins/worker:v0.2
```
]
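To watch that rollout progress, one possibility (not shown in this diff) is:

```bash
# Blocks until the rollout succeeds or fails.
kubectl rollout status deployment worker
```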
@@ -146,8 +120,7 @@ That rollout should be pretty quick. What shows in the web UI?
- Update `worker` by specifying a non-existent image:
```bash
-export TAG=v0.3
-kubectl set image deploy worker worker=$REGISTRY/worker:$TAG
+kubectl set image deploy worker worker=dockercoins/worker:v0.3
```
- Check what's going on:
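The hunk ends here; the deck presumably continues with something like the following (`ImagePullBackOff` is what a nonexistent tag produces):

```bash
# The rollout never completes, since dockercoins/worker:v0.3 doesn't exist.
kubectl rollout status deployment worker
# The new Pods are stuck in ErrImagePull / ImagePullBackOff.
kubectl get pods
```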
@@ -216,27 +189,14 @@ If you didn't deploy the Kubernetes dashboard earlier, just skip this slide.
.exercise[
-- Check which port the dashboard is on:
-  ```bash
-  kubectl -n kube-system get svc socat
-  ```
+- Connect to the dashboard that we deployed earlier
+- Check that we have failures in Deployments, Pods, and Replica Sets
+- Can we see the reason for the failure?
]
-Note the `3xxxx` port.
-.exercise[
-- Connect to http://oneofournodes:3xxxx/
-<!-- ```open https://node1:3xxxx/``` -->
-]
---
-- We have failures in Deployments, Pods, and Replica Sets
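The same failures (and their reason) can also be inspected from the CLI; a minimal sketch, assuming the default `app=worker` label set by `kubectl create deployment`:

```bash
# Deployments, Replica Sets, and Pods all show the stalled rollout.
kubectl get deployments,replicasets,pods
# The Pod events show the actual reason (the image cannot be pulled).
kubectl describe pods -l app=worker
```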
---
## Recovering from a bad rollout
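The recovery step itself is not visible in this hunk; a sketch of the usual approach with `kubectl rollout undo`:

```bash
# Revert the worker Deployment to its previous revision.
kubectl rollout undo deployment worker
# Or pick an explicit revision from the history:
kubectl rollout history deployment worker
kubectl rollout undo deployment worker --to-revision=1
```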
@@ -285,7 +245,7 @@ spec:
    spec:
      containers:
      - name: worker
-       image: $REGISTRY/worker:v0.1
+       image: dockercoins/worker:v0.1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
@@ -316,7 +276,7 @@ class: extra-details
    spec:
      containers:
      - name: worker
-       image: $REGISTRY/worker:v0.1
+       image: dockercoins/worker:v0.1
  strategy:
    rollingUpdate:
      maxUnavailable: 0
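As a follow-up, the effect of that strategy can be checked on the live object; with `maxUnavailable: 0`, a rolling update never terminates an old Pod before its replacement is ready:

```bash
# Show the rollout strategy currently stored in the cluster.
kubectl get deployment worker -o jsonpath='{.spec.strategy}'
```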