Address deprecation of 'kubectl run'

'kubectl run' is being deprecated as a multi-purpose tool.
This PR replaces 'kubectl run' with 'kubectl create deployment'
in most places (except in the very first example, to reduce the
cognitive load; and when we really want a single-shot container).

It also updates the places where we use a 'run' label, since
'kubectl create deployment' uses the 'app' label instead.

NOTE: this hasn't gone through end-to-end testing yet.
Author: Jerome Petazzoni
Date:   2018-11-01 01:25:26 -05:00
Parent: b4bb9e5958
Commit: b9de73d0fd
13 changed files with 99 additions and 69 deletions
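For readers skimming the diff below, the substitution being applied is roughly the following (a sketch; `web` is a hypothetical deployment name, and the kubectl commands are shown as comments rather than run):

```bash
# Deprecated form -- pods get the label run=web:
#   kubectl run web --image=nginx
# Replacement -- pods get the label app=web:
#   kubectl create deployment web --image=nginx
# Selectors change only the label *key*; the value stays the deployment name:
OLD_SELECTOR="run=web"
NEW_SELECTOR="app=web"
echo "${OLD_SELECTOR#*=}"   # -> web (same value either way)
```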


@@ -5,7 +5,7 @@ metadata:
spec:
podSelector:
matchLabels:
-run: testweb
+app: testweb
ingress:
- from:
- podSelector:


@@ -5,6 +5,6 @@ metadata:
spec:
podSelector:
matchLabels:
-run: testweb
+app: testweb
ingress: []


@@ -16,7 +16,7 @@ metadata:
spec:
podSelector:
matchLabels:
-run: webui
+app: webui
ingress:
- from: []


@@ -6,7 +6,7 @@ metadata:
creationTimestamp: null
generation: 1
labels:
-run: socat
+app: socat
name: socat
namespace: kube-system
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
@@ -14,7 +14,7 @@ spec:
replicas: 1
selector:
matchLabels:
-run: socat
+app: socat
strategy:
rollingUpdate:
maxSurge: 1
@@ -24,7 +24,7 @@ spec:
metadata:
creationTimestamp: null
labels:
-run: socat
+app: socat
spec:
containers:
- args:
@@ -49,7 +49,7 @@ kind: Service
metadata:
creationTimestamp: null
labels:
-run: socat
+app: socat
name: socat
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/socat
@@ -60,7 +60,7 @@ spec:
protocol: TCP
targetPort: 80
selector:
-run: socat
+app: socat
sessionAffinity: None
type: NodePort
status:


@@ -538,7 +538,7 @@ It's important to note a couple of details in these flags ...
- But that we can't create things:
```
-./kubectl run tryme --image=nginx
+./kubectl create deployment tryme --image=nginx
```
- Exit the container with `exit` or `^D`


@@ -256,19 +256,19 @@ The master node has [taints](https://kubernetes.io/docs/concepts/configuration/t
- Let's check the logs of all these `rng` pods
-- All these pods have a `run=rng` label:
+- All these pods have the label `app=rng`:
-- the first pod, because that's what `kubectl run` does
+- the first pod, because that's what `kubectl create deployment` does
- the other ones (in the daemon set), because we
*copied the spec from the first one*
-- Therefore, we can query everybody's logs using that `run=rng` selector
+- Therefore, we can query everybody's logs using that `app=rng` selector
.exercise[
-- Check the logs of all the pods having a label `run=rng`:
+- Check the logs of all the pods having a label `app=rng`:
```bash
-kubectl logs -l run=rng --tail 1
+kubectl logs -l app=rng --tail 1
```
]
@@ -283,7 +283,7 @@ It appears that *all the pods* are serving requests at the moment.
- The `rng` *service* is load balancing requests to a set of pods
-- This set of pods is defined as "pods having the label `run=rng`"
+- This set of pods is defined as "pods having the label `app=rng`"
.exercise[
@@ -310,7 +310,7 @@ to the associated load balancer.
--
-- What would happen if we removed the `run=rng` label from that pod?
+- What would happen if we removed the `app=rng` label from that pod?
--
@@ -322,7 +322,7 @@ to the associated load balancer.
--
-- But but but ... Don't we have more than one pod with `run=rng` now?
+- But but but ... Don't we have more than one pod with `app=rng` now?
--
@@ -345,7 +345,7 @@ to the associated load balancer.
<br/>(The second command doesn't require you to get the exact name of the replica set)
```bash
kubectl describe rs rng-yyyyyyyy
-kubectl describe rs -l run=rng
+kubectl describe rs -l app=rng
```
]
@@ -433,11 +433,11 @@ Of course, option 2 offers more learning opportunities. Right?
<!--
```wait Please edit the object below```
-```keys /run: rng```
+```keys /app: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
-```keys /run: rng```
+```keys /app: rng```
```keys ^J```
```keys oisactive: "yes"```
```keys ^[``` ]
@@ -452,7 +452,7 @@ Of course, option 2 offers more learning opportunities. Right?
<!--
```wait Please edit the object below```
-```keys /run: rng```
+```keys /app: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
@@ -468,9 +468,9 @@ Of course, option 2 offers more learning opportunities. Right?
.exercise[
-- Check the most recent log line of all `run=rng` pods to confirm that exactly one per node is now active:
+- Check the most recent log line of all `app=rng` pods to confirm that exactly one per node is now active:
```bash
-kubectl logs -l run=rng --tail 1
+kubectl logs -l app=rng --tail 1
```
]
@@ -496,14 +496,14 @@ The timestamps should give us a hint about how many pods are currently receiving
.exercise[
-- List the pods with `run=rng` but without `isactive=yes`:
+- List the pods with `app=rng` but without `isactive=yes`:
```bash
-kubectl get pods -l run=rng,isactive!=yes
+kubectl get pods -l app=rng,isactive!=yes
```
- Remove these pods:
```bash
-kubectl delete pods -l run=rng,isactive!=yes
+kubectl delete pods -l app=rng,isactive!=yes
```
]
@@ -581,7 +581,7 @@ Ding, dong, the deployment is dead! And the daemon set lives on.
labels:
isactive: "yes"
'
-kubectl get pods -l run=rng -l controller-revision-hash -o name |
+kubectl get pods -l app=rng -l controller-revision-hash -o name |
xargs kubectl patch -p "$PATCH"
```
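The `kubectl get ... -o name | xargs kubectl patch` pipeline above fans a single patch out over every matching pod. The shape of that pipeline can be tried without a cluster by letting `echo` stand in for `kubectl patch` (the pod names below are made up):

```bash
# Stand-in names, formatted like `kubectl get pods -o name` output:
printf 'pod/rng-aaaa\npod/rng-bbbb\n' |
  xargs echo patch
# xargs appends all the names to a single invocation,
# just as the real command patches all the pods at once
```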


@@ -392,9 +392,9 @@ This is normal: we haven't provided any ingress rule yet.
- Run all three deployments:
```bash
-kubectl run cheddar --image=errm/cheese:cheddar
-kubectl run stilton --image=errm/cheese:stilton
-kubectl run wensleydale --image=errm/cheese:wensleydale
+kubectl create deployment cheddar --image=errm/cheese:cheddar
+kubectl create deployment stilton --image=errm/cheese:stilton
+kubectl create deployment wensleydale --image=errm/cheese:wensleydale
```
- Create a service for each of them:


@@ -57,31 +57,49 @@ Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables`
- Since `ping` doesn't have anything to connect to, we'll have to run something else
- We could use the `nginx` official image, but ...
... we wouldn't be able to tell the backends from each other!
- We are going to use `jpetazzo/httpenv`, a tiny HTTP server written in Go
- `jpetazzo/httpenv` listens on port 8888
- It serves its environment variables in JSON format
- The environment variables will include `HOSTNAME`, which will be the pod name
(and therefore, will be different on each backend)
---
## Creating a deployment for our HTTP server
+- We *could* do `kubectl run httpenv --image=jpetazzo/httpenv` ...
+- But since `kubectl run` is being deprecated, let's see how to use `kubectl create` instead
.exercise[
-- Start a bunch of HTTP servers:
-```bash
-kubectl run httpenv --image=jpetazzo/httpenv --replicas=10
-```
-- Watch them being started:
+- In another window, watch the pods (to see when they will be created):
```bash
kubectl get pods -w
```
-<!--
-```wait httpenv-```
-```keys ^C```
--->
+<!-- ```keys ^C``` -->
+- Create a deployment for this very lightweight HTTP server:
+```bash
+kubectl create deployment httpenv --image=jpetazzo/httpenv
+```
+- Scale it to 10 replicas:
+```bash
+kubectl scale deployment httpenv --replicas=10
+```
]
+The `jpetazzo/httpenv` image runs an HTTP server on port 8888.
+<br/>
+It serves its environment variables in JSON format.
The `-w` option "watches" events happening on the specified resources.
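Since `httpenv` serves its environment variables as JSON, the backends can be told apart by the `HOSTNAME` field. A quick sketch of that extraction, using a canned response instead of a live pod (the pod name below is made up):

```bash
# Stand-in for what `curl http://<pod-ip>:8888` might return (hypothetical pod name):
RESPONSE='{"HOSTNAME":"httpenv-5b7f9c4d8-x2x4z","PATH":"/usr/local/bin"}'
# HOSTNAME is the pod name, so it differs on each backend:
echo "$RESPONSE" | jq -r .HOSTNAME
```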
---
## Exposing our deployment
@@ -92,12 +110,12 @@ The `-w` option "watches" events happening on the specified resources.
- Expose the HTTP port of our server:
```bash
-kubectl expose deploy/httpenv --port 8888
+kubectl expose deployment httpenv --port 8888
```
- Look up which IP address was allocated:
```bash
-kubectl get svc
+kubectl get service
```
]
@@ -237,7 +255,7 @@ class: extra-details
- These IP addresses should match the addresses of the corresponding pods:
```bash
-kubectl get pods -l run=httpenv -o wide
+kubectl get pods -l app=httpenv -o wide
```
---


@@ -173,6 +173,11 @@ pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
kubectl scale deploy/pingpong --replicas 8
```
+- Note that this command does exactly the same thing:
+```bash
+kubectl scale deployment pingpong --replicas 8
+```
]
Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?


@@ -130,11 +130,13 @@ Exactly what we need!
- We can use that property to view the logs of all the pods created with `kubectl run`
+- Similarly, everything created with `kubectl create deployment` has a label `app`
.exercise[
-- View the logs for all the things started with `kubectl run`:
+- View the logs for all the things started with `kubectl create deployment`:
```bash
-stern -l run
+stern -l app
```
<!--


@@ -117,13 +117,13 @@ This is our game plan:
- Let's use the `nginx` image:
```bash
-kubectl run testweb --image=nginx
+kubectl create deployment testweb --image=nginx
```
- Find out the IP address of the pod with one of these two commands:
```bash
-kubectl get pods -o wide -l run=testweb
-IP=$(kubectl get pods -l run=testweb -o json | jq -r .items[0].status.podIP)
+kubectl get pods -o wide -l app=testweb
+IP=$(kubectl get pods -l app=testweb -o json | jq -r .items[0].status.podIP)
```
- Check that we can connect to the server:
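The `jq` expression used above can be sanity-checked without a cluster by feeding it a minimal document shaped like `kubectl get pods -o json` output (the IP address below is made up):

```bash
# Minimal stand-in for `kubectl get pods -l app=testweb -o json`:
PODS='{"items":[{"status":{"podIP":"10.244.1.17"}}]}'
# Same filter as above: the pod IP of the first matching pod
IP=$(echo "$PODS" | jq -r '.items[0].status.podIP')
echo "$IP"
```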
@@ -138,7 +138,7 @@ The `curl` command should show us the "Welcome to nginx!" page.
## Adding a very restrictive network policy
-- The policy will select pods with the label `run=testweb`
+- The policy will select pods with the label `app=testweb`
- It will specify an empty list of ingress rules (matching nothing)
@@ -172,7 +172,7 @@ metadata:
spec:
podSelector:
matchLabels:
-run: testweb
+app: testweb
ingress: []
```
@@ -207,7 +207,7 @@ metadata:
spec:
podSelector:
matchLabels:
-run: testweb
+app: testweb
ingress:
- from:
- podSelector:
@@ -325,7 +325,7 @@ spec:
## Allowing traffic to `webui` pods
-This policy selects all pods with label `run=webui`.
+This policy selects all pods with label `app=webui`.
It allows traffic from any source.
@@ -339,7 +339,7 @@ metadata:
spec:
podSelector:
matchLabels:
-run: webui
+app: webui
ingress:
- from: []
```


@@ -74,7 +74,7 @@ In this part, we will:
- Create the registry service:
```bash
-kubectl run registry --image=registry
+kubectl create deployment registry --image=registry
```
- Expose it on a NodePort:
@@ -254,13 +254,13 @@ class: extra-details
- Deploy `redis`:
```bash
-kubectl run redis --image=redis
+kubectl create deployment redis --image=redis
```
- Deploy everything else:
```bash
for SERVICE in hasher rng webui worker; do
-kubectl run $SERVICE --image=$REGISTRY/$SERVICE:$TAG
+kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
```


@@ -22,14 +22,19 @@
.exercise[
-- Let's start a replicated `nginx` deployment:
+- Let's create a deployment running `nginx`:
```bash
-kubectl run yanginx --image=nginx --replicas=3
+kubectl create deployment yanginx --image=nginx
```
+- Scale it to a few replicas:
+```bash
+kubectl scale deployment yanginx --replicas=3
+```
- Once it's up, check the corresponding pods:
```bash
-kubectl get pods -l run=yanginx -o yaml | head -n 25
+kubectl get pods -l app=yanginx -o yaml | head -n 25
```
]
@@ -99,12 +104,12 @@ so the lines should not be indented (otherwise the indentation will insert space
- Delete the Deployment:
```bash
-kubectl delete deployment -l run=yanginx --cascade=false
+kubectl delete deployment -l app=yanginx --cascade=false
```
- Delete the Replica Set:
```bash
-kubectl delete replicaset -l run=yanginx --cascade=false
+kubectl delete replicaset -l app=yanginx --cascade=false
```
- Check that the pods are still here:
@@ -126,7 +131,7 @@ class: extra-details
- If we change the labels on a dependent, so that it's not selected anymore
-(e.g. change the `run: yanginx` in the pods of the previous example)
+(e.g. change the `app: yanginx` in the pods of the previous example)
- If a deployment tool that we're using does these things for us
@@ -174,4 +179,4 @@ class: extra-details
]
As always, the [documentation](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) has useful extra information and pointers.