Mirror of https://github.com/jpetazzo/container.training.git

Comparing branches: gitpod...decembre20 (23 commits)
Commits in this comparison:
a4e21ffe4d, ae140b24cb, ab87de9813, 53a416626e, 8bf71b956f, 2c445ca99b, 2ba244839a, 0b71bc8655, 949fdd7791, 5cf8b42fe9, 781ac48c5c, 1cf3849bbd, c77960d77b, 9fa7b958dc, 8c4914294e, 7b9b9f527d, 3c7f39747c, be67a742ee, 40cd934118, abcc47b563, 2efc29991e, f01bc2a7a9, 3eaa844c55
k8s/consul.yaml

@@ -1,3 +1,37 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: consul
  labels:
    app: consul
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: consul
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: consul
subjects:
- kind: ServiceAccount
  name: consul
  namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul
  labels:
    app: consul
---
apiVersion: v1
kind: Service
metadata:
@@ -24,6 +58,7 @@ spec:
      labels:
        app: consul
    spec:
      serviceAccountName: consul
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
@@ -37,18 +72,11 @@ spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: consul
        image: "consul:1.2.2"
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: "consul:1.4.0"
        args:
        - "agent"
        - "-bootstrap-expect=3"
        - "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=provider=k8s label_selector=\"app=consul\""
        - "-client=0.0.0.0"
        - "-data-dir=/consul/data"
        - "-server"
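A quick way to check the RBAC objects above (a sketch; it assumes the manifest is applied in the `default` namespace) is to ask the API server what the `consul` service account is allowed to do:

```bash
# Should print "yes" once the ClusterRole and ClusterRoleBinding are in place
kubectl auth can-i list pods --as=system:serviceaccount:default:consul
```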
k8s/just-a-pod.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
  name: hello
  namespace: default
spec:
  containers:
  - name: hello
    image: nginx
slides/_redirects (new file, 1 line)
@@ -0,0 +1 @@
/ /kube-twodays.yml.html 200!
@@ -133,6 +133,8 @@ class: extra-details

→ We are user `kubernetes-admin`, in group `system:masters`.

(We will see later how and why this gives us the permissions that we have.)

---

## User certificates in practice

@@ -567,3 +569,45 @@ It's important to note a couple of details in these flags ...
  kubectl auth can-i list nodes \
          --as system:serviceaccount:<namespace>:<name-of-service-account>
  ```

---

class: extra-details

## Where do our permissions come from?

- When interacting with the Kubernetes API, we are using a client certificate

- We saw previously that this client certificate contained:

  `CN=kubernetes-admin` and `O=system:masters`

- Let's look for these in existing ClusterRoleBindings:
  ```bash
  kubectl get clusterrolebindings -o yaml |
          grep -e kubernetes-admin -e system:masters
  ```

  (`system:masters` should show up, but not `kubernetes-admin`.)

- Where does this match come from?

---

class: extra-details

## The `system:masters` group

- If we eyeball the output of `kubectl get clusterrolebindings -o yaml`, we'll find out!

- It is in the `cluster-admin` binding:
  ```bash
  kubectl describe clusterrolebinding cluster-admin
  ```

- This binding associates `system:masters` to the cluster role `cluster-admin`

- And the `cluster-admin` is, basically, `root`:
  ```bash
  kubectl describe clusterrole cluster-admin
  ```
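To double-check the subject of the client certificate mentioned above (a sketch; it assumes the certificate is embedded in the kubeconfig file, as kubeadm does by default):

```bash
# Extract the first user's client certificate and display its subject
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -subject
```

The subject should show `CN=kubernetes-admin` and `O=system:masters`, matching what we saw earlier.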
@@ -252,38 +252,29 @@ The master node has [taints](https://kubernetes.io/docs/concepts/configuration/t

---

## What are all these pods doing?
## Is this working?

- Let's check the logs of all these `rng` pods

- All these pods have the label `app=rng`:

  - the first pod, because that's what `kubectl create deployment` does
  - the other ones (in the daemon set), because we
    *copied the spec from the first one*

- Therefore, we can query everybody's logs using that `app=rng` selector

.exercise[

- Check the logs of all the pods having a label `app=rng`:
  ```bash
  kubectl logs -l app=rng --tail 1
  ```

]

- Look at the web UI

--

It appears that *all the pods* are serving requests at the moment.

- The graph should now go above 10 hashes per second!

--

- It looks like the newly created pods are serving traffic correctly

- How and why did this happen?

  (We didn't do anything special to add them to the `rng` service load balancer!)

---

## The magic of selectors
# Labels and selectors

- The `rng` *service* is load balancing requests to a set of pods

- This set of pods is defined as "pods having the label `app=rng`"
- That set of pods is defined by the *selector* of the `rng` service

.exercise[

@@ -294,19 +285,60 @@ It appears that *all the pods* are serving requests at the moment.

]

When we created additional pods with this label, they were
automatically detected by `svc/rng` and added as *endpoints*
to the associated load balancer.

- The selector is `app=rng`

- It means "all the pods having the label `app=rng`"

  (They can have additional labels as well, that's OK!)
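To see those endpoints directly (a quick check; the service name assumes the DockerCoins app used in this chapter):

```bash
# List the pod addresses currently backing the rng service
kubectl get endpoints rng
kubectl describe service rng
```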
---

## Removing the first pod from the load balancer
## Selector evaluation

- We can use selectors with many `kubectl` commands

- For instance, with `kubectl get`, `kubectl logs`, `kubectl delete` ... and more

.exercise[

- Get the list of pods matching selector `app=rng`:
  ```bash
  kubectl get pods -l app=rng
  kubectl get pods --selector app=rng
  ```

]

But ... why do these pods (in particular, the *new* ones) have this `app=rng` label?

---

## Where do labels come from?

- When we create a deployment with `kubectl create deployment rng`,
  <br/>this deployment gets the label `app=rng`

- The replica sets created by this deployment also get the label `app=rng`

- The pods created by these replica sets also get the label `app=rng`

- When we created the daemon set from the deployment, we re-used the same spec

- Therefore, the pods created by the daemon set get the same labels

.footnote[Note: when we use `kubectl run stuff`, the label is `run=stuff` instead.]
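To see that label propagation end-to-end (a sketch; it assumes the `rng` deployment and daemon set from the previous sections are still around):

```bash
# Show the labels carried by the deployment, replica sets, daemon set, and pods
kubectl get deployments,replicasets,daemonsets,pods -l app=rng --show-labels
```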
---

## Updating load balancer configuration

- We would like to remove a pod from the load balancer

- What would happen if we removed that pod, with `kubectl delete pod ...`?

--

The `replicaset` would re-create it immediately.
It would be re-created immediately (by the replica set or the daemon set)

--

@@ -314,90 +346,272 @@ to the associated load balancer.

--

The `replicaset` would re-create it immediately.
It would *also* be re-created immediately

--

... Because what matters to the `replicaset` is the number of pods *matching that selector.*

--

- But but but ... Don't we have more than one pod with `app=rng` now?

--

The answer lies in the exact selector used by the `replicaset` ...
Why?!?

---

## Deep dive into selectors
## Selectors for replica sets and daemon sets

- Let's look at the selectors for the `rng` *deployment* and the associated *replica set*
- The "mission" of a replica set is:

  "Make sure that there is the right number of pods matching this spec!"

- The "mission" of a daemon set is:

  "Make sure that there is a pod matching this spec on each node!"

--

- *In fact,* replica sets and daemon sets do not check pod specifications

- They merely have a *selector*, and they look for pods matching that selector

- Yes, we can fool them by manually creating pods with the "right" labels

- Bottom line: if we remove our `app=rng` label ...

  ... The pod "disappears" for its parent, which re-creates another pod to replace it

---

class: extra-details

## Isolation of replica sets and daemon sets

- Since both the `rng` daemon set and the `rng` replica set use `app=rng` ...

  ... Why don't they "find" each other's pods?

--

- *Replica sets* have a more specific selector, visible with `kubectl describe`

  (It looks like `app=rng,pod-template-hash=abcd1234`)

- *Daemon sets* also have a more specific selector, but it's invisible

  (It looks like `app=rng,controller-revision-hash=abcd1234`)

- As a result, each controller only "sees" the pods it manages
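To compare those selectors and labels yourself (a quick check; resource names assume the `rng` app from this chapter):

```bash
# The replica set selector includes a pod-template-hash ...
kubectl describe rs -l app=rng | grep Selector
# ... and each pod carries either a pod-template-hash or a controller-revision-hash label
kubectl get pods -l app=rng -L pod-template-hash,controller-revision-hash
```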
---

## Removing a pod from the load balancer

- Currently, the `rng` service is defined by the `app=rng` selector

- The only way to remove a pod is to remove or change the `app` label

- ... But that will cause another pod to be created instead!

- What's the solution?

--

- We need to change the selector of the `rng` service!

- Let's add another label to that selector (e.g. `enabled=yes`)

---

## Complex selectors

- If a selector specifies multiple labels, they are understood as a logical *AND*

  (In other words: the pods must match all the labels)

- Kubernetes has support for advanced, set-based selectors

  (But these cannot be used with services, at least not yet!)
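For reference, here is what both flavors look like on the command line (a sketch; `hasher` is just another DockerCoins component, used to illustrate the syntax):

```bash
# Equality-based selector: logical AND of two labels
kubectl get pods -l app=rng,enabled=yes
# Set-based selector: the "app" label must be one of the listed values
kubectl get pods -l 'app in (rng, hasher)'
```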
---

## The plan

1. Add the label `enabled=yes` to all our `rng` pods

2. Update the selector for the `rng` service to also include `enabled=yes`

3. Toggle traffic to a pod by manually adding/removing the `enabled` label

4. Profit!

*Note: if we swap steps 1 and 2, it will cause a short
service disruption, because there will be a period of time
during which the service selector won't match any pod.
During that time, requests to the service will time out.
By doing things in the order above, we guarantee that there won't
be any interruption.*

---

## Adding labels to pods

- We want to add the label `enabled=yes` to all pods that have `app=rng`

- We could edit each pod one by one with `kubectl edit` ...

- ... Or we could use `kubectl label` to label them all

- `kubectl label` can use selectors itself

.exercise[

- Show detailed information about the `rng` deployment:
- Add `enabled=yes` to all pods that have `app=rng`:
  ```bash
  kubectl describe deploy rng
  kubectl label pods -l app=rng enabled=yes
  ```

- Show detailed information about the `rng` replica:
  <br/>(The second command doesn't require you to get the exact name of the replica set)

]

---

## Updating the service selector

- We need to edit the service specification

- Reminder: in the service definition, we will see `app: rng` in two places

  - the label of the service itself (we don't need to touch that one)

  - the selector of the service (that's the one we want to change)

.exercise[

- Update the service to add `enabled: yes` to its selector:
  ```bash
  kubectl describe rs rng-yyyyyyyy
  kubectl describe rs -l app=rng
  kubectl edit service rng
  ```

<!--
```wait Please edit the object below```
```keys /app: rng```
```keys ^J```
```keys noenabled: yes```
```keys ^[``` ]
```keys :wq```
```keys ^J```
-->

]

--

The replica set selector also has a `pod-template-hash`, unlike the pods in our daemon set.
... And then we get *the weirdest error ever.* Why?

---

# Updating a service through labels and selectors
## When the YAML parser is being too smart

- What if we want to drop the `rng` deployment from the load balancer?
- YAML parsers try to help us:

- Option 1:
- `xyz` is the string `"xyz"`

- destroy it
- `42` is the integer `42`

- Option 2:
- `yes` is the boolean value `true`

- add an extra *label* to the daemon set
- If we want the string `"42"` or the string `"yes"`, we have to quote them

- update the service *selector* to refer to that *label*
- So we have to use `enabled: "yes"`

--

Of course, option 2 offers more learning opportunities. Right?
.footnote[For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!]
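To see this YAML behavior outside of Kubernetes (a side demo; it assumes Python 3 with the PyYAML module is available):

```bash
# Unquoted "yes" is parsed as a boolean ...
python3 -c 'import yaml; print(yaml.safe_load("enabled: yes"))'
# ... while the quoted form stays a string
python3 -c 'import yaml; print(yaml.safe_load("enabled: \"yes\""))'
```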
---

## Add an extra label to the daemon set
## Updating the service selector, take 2

- We will update the daemon set "spec"
.exercise[

- Option 1:
- Update the service to add `enabled: "yes"` to its selector:
  ```bash
  kubectl edit service rng
  ```

- edit the `rng.yml` file that we used earlier
<!--
```wait Please edit the object below```
```keys /app: rng```
```keys ^J```
```keys noenabled: "yes"```
```keys ^[``` ]
```keys :wq```
```keys ^J```
-->

- load the new definition with `kubectl apply`
]

- Option 2:
This time it should work!

- use `kubectl edit`

--

*If you feel like you got this💕🌈, feel free to try directly.*

*We've included a few hints on the next slides for your convenience!*
If we did everything correctly, the web UI shouldn't show any change.

---

## Updating labels

- We want to disable the pod that was created by the deployment

- All we have to do is remove the `enabled` label from that pod

- To identify that pod, we can use its name

- ... Or rely on the fact that it's the only one with a `pod-template-hash` label

- Good to know:

  - `kubectl label ... foo=` doesn't remove a label (it sets it to an empty string)

  - to remove label `foo`, use `kubectl label ... foo-`

  - to change an existing label, we would need to add `--overwrite`
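As a quick reference (a sketch; `POD` is a placeholder for any pod name or `-l` selector):

```bash
# Add, change, and remove a label
kubectl label pod POD foo=bar
kubectl label pod POD foo=baz --overwrite
kubectl label pod POD foo-
```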
---

## Removing a pod from the load balancer

.exercise[

- In one window, check the logs of that pod:
  ```bash
  POD=$(kubectl get pod -l app=rng,pod-template-hash -o name)
  kubectl logs --tail 1 --follow $POD
  ```

  (We should see a steady stream of HTTP logs)

- In another window, remove the label from the pod:
  ```bash
  kubectl label pod -l app=rng,pod-template-hash enabled-
  ```

  (The stream of HTTP logs should stop immediately)

]

There might be a slight change in the web UI (since we removed a bit
of capacity from the `rng` service). If we remove more pods,
the effect should be more visible.

---

class: extra-details

## Updating the daemon set

- If we scale up our cluster by adding new nodes, the daemon set will create more pods

- These pods won't have the `enabled=yes` label

- If we want these pods to have that label, we need to edit the daemon set spec

- We can do that with e.g. `kubectl edit daemonset rng`

---

class: extra-details

## We've put resources in your resources

- Reminder: a daemon set is a resource that creates more resources!

@@ -410,7 +624,9 @@ Of course, option 2 offers more learning opportunities. Right?

  - the label(s) of the resource(s) created by the first resource (in the `template` block)

- You need to update the selector and the template (metadata labels are not mandatory)
- We would need to update the selector and the template

  (metadata labels are not mandatory)

- The template must match the selector

@@ -418,175 +634,6 @@ Of course, option 2 offers more learning opportunities. Right?
---

## Adding our label

- Let's add a label `isactive: yes`

- In YAML, `yes` should be quoted; i.e. `isactive: "yes"`

.exercise[

- Update the daemon set to add `isactive: "yes"` to the selector and template label:
  ```bash
  kubectl edit daemonset rng
  ```

<!--
```wait Please edit the object below```
```keys /app: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
```keys /app: rng```
```keys ^J```
```keys oisactive: "yes"```
```keys ^[``` ]
```keys :wq```
```keys ^J```
-->

- Update the service to add `isactive: "yes"` to its selector:
  ```bash
  kubectl edit service rng
  ```

<!--
```wait Please edit the object below```
```keys /app: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
```keys :wq```
```keys ^J```
-->

]

---

## Checking what we've done

.exercise[

- Check the most recent log line of all `app=rng` pods to confirm that exactly one per node is now active:
  ```bash
  kubectl logs -l app=rng --tail 1
  ```

]

The timestamps should give us a hint about how many pods are currently receiving traffic.

.exercise[

- Look at the pods that we have right now:
  ```bash
  kubectl get pods
  ```

]

---

## Cleaning up

- The pods of the deployment and the "old" daemon set are still running

- We are going to identify them programmatically

.exercise[

- List the pods with `app=rng` but without `isactive=yes`:
  ```bash
  kubectl get pods -l app=rng,isactive!=yes
  ```

- Remove these pods:
  ```bash
  kubectl delete pods -l app=rng,isactive!=yes
  ```

]

---

## Cleaning up stale pods

```
$ kubectl get pods
NAME                   READY   STATUS        RESTARTS   AGE
rng-54f57d4d49-7pt82   1/1     Terminating   0          51m
rng-54f57d4d49-vgz9h   1/1     Running       0          22s
rng-b85tm              1/1     Terminating   0          39m
rng-hfbrr              1/1     Terminating   0          39m
rng-vplmj              1/1     Running       0          7m
rng-xbpvg              1/1     Running       0          7m
[...]
```

- The extra pods (noted `Terminating` above) are going away

- ... But a new one (`rng-54f57d4d49-vgz9h` above) was restarted immediately!

--

- Remember, the *deployment* still exists, and makes sure that one pod is up and running

- If we delete the pod associated to the deployment, it is recreated automatically

---

## Deleting a deployment

.exercise[

- Remove the `rng` deployment:
  ```bash
  kubectl delete deployment rng
  ```

]

--

- The pod that was created by the deployment is now being terminated:

  ```
  $ kubectl get pods
  NAME                   READY   STATUS        RESTARTS   AGE
  rng-54f57d4d49-vgz9h   1/1     Terminating   0          4m
  rng-vplmj              1/1     Running       0          11m
  rng-xbpvg              1/1     Running       0          11m
  [...]
  ```

Ding, dong, the deployment is dead! And the daemon set lives on.

---

## Avoiding extra pods

- When we changed the definition of the daemon set, it immediately created new pods. We had to remove the old ones manually.

- How could we have avoided this?

--

- By adding the `isactive: "yes"` label to the pods before changing the daemon set!

- This can be done programmatically with `kubectl patch`:

  ```bash
  PATCH='
  metadata:
    labels:
      isactive: "yes"
  '
  kubectl get pods -l app=rng -l controller-revision-hash -o name |
    xargs kubectl patch -p "$PATCH"
  ```
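To verify the result of the patch (a quick check, relying on the `isactive` label used in this chapter):

```bash
# Display the isactive label as an extra column
kubectl get pods -l app=rng -L isactive
```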
---

## Labels and debugging

- When a pod is misbehaving, we can delete it: another one will be recreated

@@ -295,6 +295,24 @@ Unfortunately, `--follow` cannot (yet) be used to stream the logs from multiple

---

class: extra-details

## `kubectl logs -l ... --tail N`

- If we run this with Kubernetes 1.12, the last command shows multiple lines

- This is a regression when `--tail` is used together with `-l`/`--selector`

- It always shows the last 10 lines of output for each container

  (instead of the number of lines specified on the command line)

- The problem was fixed in Kubernetes 1.13

*See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.*

---

## Aren't we flooding 1.1.1.1?

- If you're wondering this, good question!

@@ -68,7 +68,7 @@
  kubectl -n blue get svc
  ```

- We can also use *contexts*
- We can also change our current *context*

- A context is a *(user, cluster, namespace)* tuple

@@ -76,9 +76,9 @@

---

## Creating a context
## Viewing existing contexts

- We are going to create a context for the `blue` namespace
- On our training environments, at this point, there should be only one context

.exercise[

@@ -87,29 +87,79 @@
  kubectl config get-contexts
  ```

- Create a new context:
]

- The current context (the only one!) is tagged with a `*`

- What are NAME, CLUSTER, AUTHINFO, and NAMESPACE?

---

## What's in a context

- NAME is an arbitrary string to identify the context

- CLUSTER is a reference to a cluster

  (i.e. API endpoint URL, and optional certificate)

- AUTHINFO is a reference to the authentication information to use

  (i.e. a TLS client certificate, token, or otherwise)

- NAMESPACE is the namespace

  (empty string = `default`)
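To see those four fields for the context currently in use (a quick check; `--minify` keeps only the current context):

```bash
# Show the cluster, user, and namespace referenced by the current context
kubectl config view --minify
```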
---

## Switching contexts

- We want to use a different namespace

- Solution 1: update the current context

  *This is appropriate if we need to change just one thing (e.g. namespace or authentication).*

- Solution 2: create a new context and switch to it

  *This is appropriate if we need to change multiple things and switch back and forth.*

- Let's go with solution 1!

---

## Updating a context

- This is done through `kubectl config set-context`

- We can update a context by passing its name, or the current context with `--current`

.exercise[

- Update the current context to use the `blue` namespace:
  ```bash
  kubectl config set-context blue --namespace=blue \
          --cluster=kubernetes --user=kubernetes-admin
  kubectl config set-context --current --namespace=blue
  ```

- Check the result:
  ```bash
  kubectl config get-contexts
  ```

]

We have created a context; but this is just some configuration values.

The namespace doesn't exist yet.

---

## Using a context
## Using our new namespace

- Let's switch to our new context and deploy the DockerCoins chart
- Let's check that we are in our new namespace, then deploy the DockerCoins chart

.exercise[

- Use the `blue` context:
- Verify that the new context is empty:
  ```bash
  kubectl config use-context blue
  kubectl get all
  ```

- Deploy DockerCoins:

@@ -181,30 +231,19 @@ Note: it might take a minute or two for the app to be up and running.

.exercise[

- View the names of the contexts:
  ```bash
  kubectl config get-contexts
  ```

- Switch back to the original context:
  ```bash
  kubectl config use-context kubernetes-admin@kubernetes
  kubectl config set-context --current --namespace=
  ```

]

Note: we could have used `--namespace=default` for the same result.

---

## Switching namespaces more easily

- Defining a new context for each namespace can be cumbersome

- We can also alter the current context with this one-liner:

  ```bash
  kubectl config set-context --current --namespace=foo
  ```

- We can also use a little helper tool called `kubens`:

  ```bash

@@ -266,7 +266,9 @@ spec:

---

## Stateful sets in action
# Running a Consul cluster

- Here is a good use-case for Stateful sets!

- We are going to deploy a Consul cluster with 3 nodes

@@ -294,42 +296,54 @@ consul agent -data=dir=/consul/data -client=0.0.0.0 -server -ui \
  -retry-join=`Y.Y.Y.Y`
  ```

- We need to replace X.X.X.X and Y.Y.Y.Y with the addresses of other nodes
- Replace X.X.X.X and Y.Y.Y.Y with the addresses of other nodes

- We can specify DNS names, but then they have to be FQDN

- It's OK for a pod to include itself in the list as well

- We can therefore use the same command-line on all nodes (easier!)
- The same command-line can be used on all nodes (convenient!)

---

## Discovering the addresses of other pods
## Cloud Auto-join

- When a service is created for a stateful set, individual DNS entries are created
- Since version 1.4.0, Consul can use the Kubernetes API to find its peers

- These entries are constructed like this:
- This is called [Cloud Auto-join]

  `<name-of-stateful-set>-<n>.<name-of-service>.<namespace>.svc.cluster.local`
- Instead of passing an IP address, we need to pass a parameter like this:

- `<n>` is the number of the pod in the set (starting at zero)
  ```
  consul agent -retry-join "provider=k8s label_selector=\"app=consul\""
  ```

- If we deploy Consul in the default namespace, the names could be:
- Consul needs to be able to talk to the Kubernetes API

  - `consul-0.consul.default.svc.cluster.local`
  - `consul-1.consul.default.svc.cluster.local`
  - `consul-2.consul.default.svc.cluster.local`
- We can provide a `kubeconfig` file

- If Consul runs in a pod, it will use the *service account* of the pod

[Cloud Auto-join]: https://www.consul.io/docs/agent/cloud-auto-join.html#kubernetes-k8s-

---

## Setting up Cloud auto-join

- We need to create a service account for Consul

- We need to create a role that can `list` and `get` pods

- We need to bind that role to the service account

- And of course, we need to make sure that Consul pods use that service account
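The `k8s/consul.yaml` file does all of this declaratively. For reference, an equivalent imperative sketch (assuming the `default` namespace) could look like this:

```bash
# Create the service account, the cluster role, and the binding
kubectl create serviceaccount consul
kubectl create clusterrole consul --verb=get,list --resource=pods
kubectl create clusterrolebinding consul \
        --clusterrole=consul --serviceaccount=default:consul
```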
---

## Putting it all together

- The file `k8s/consul.yaml` defines a service and a stateful set
- The file `k8s/consul.yaml` defines the required resources

  (service account, cluster role, cluster role binding, service, stateful set)

- It has a few extra touches:

  - the name of the namespace is injected through an environment variable

  - a `podAntiAffinity` prevents two pods from running on the same node

  - a `preStop` hook makes the pod leave the cluster when shut down gracefully
slides/k8s/staticpods.md (new file, 239 lines)
@@ -0,0 +1,239 @@
# Static pods

- Pods are usually created indirectly, through another resource:

  Deployment, Daemon Set, Job, Stateful Set ...

- They can also be created directly

- This can be done by writing YAML and using `kubectl apply` or `kubectl create`

- Some resources (not all of them) can be created with `kubectl run`

- Creating a resource with `kubectl` requires the API to be up

- If we want to run the API server (and its dependencies) on Kubernetes itself ...

  ... how can we create API pods (and other resources) when the API is not up yet?

---

## In theory

- Each component of the control plane can be replicated

- We could set up the control plane outside of the cluster

- Then, once the cluster is up, create replicas running on the cluster

- Finally, remove the replicas that are running outside of the cluster

*What could possibly go wrong?*

---

## Sawing off the branch you're sitting on

- What if anything goes wrong?

  (During the setup or at a later point)

- Worst case scenario, we might need to:

  - set up a new control plane (outside of the cluster)

  - restore a backup from the old control plane

  - move the new control plane to the cluster (again)

- This doesn't sound like a great experience

---

## Static pods to the rescue

- Pods are started by kubelet (an agent running on every node)

- To know which pods it should run, the kubelet queries the API server

- The kubelet can also get a list of *static pods* from:

  - a directory containing one (or multiple) *manifests*, and/or

  - a URL (serving a *manifest*)

- These "manifests" are basically YAML definitions

  (As produced by `kubectl get pod my-little-pod -o yaml --export`)
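The directory and URL are part of the kubelet configuration. A quick way to check them (an assumption: a kubeadm-style node where the kubelet config lives in `/var/lib/kubelet/config.yaml`):

```bash
# Show where this node's kubelet looks for static pod manifests
grep -E 'staticPodPath|staticPodURL' /var/lib/kubelet/config.yaml
```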
---

## Static pods are dynamic

- Kubelet will periodically reload the manifests

- It will start/stop pods accordingly

  (i.e. it is not necessary to restart the kubelet after updating the manifests)

- When connected to the Kubernetes API, the kubelet will create *mirror pods*

- Mirror pods are copies of the static pods

  (so they can be seen with e.g. `kubectl get pods`)

---

## Bootstrapping a cluster with static pods

- We can run control plane components with these static pods

- They don't need the API to be up (just the kubelet)

- Once they are up, the API becomes available

- These pods are then visible through the API

  (We cannot upgrade them from the API, though)

*This is how kubeadm has initialized our clusters.*

---

## Static pods vs normal pods

- The API only gives us a read-only access to static pods

- We can `kubectl delete` a static pod ...

  ... But the kubelet will restart it immediately

- Static pods can be selected just like other pods

  (So they can receive service traffic)

- A service can select a mixture of static and other pods

---

## From static pods to normal pods

- Once the control plane is up and running, it can be used to create normal pods

- We can then set up a copy of the control plane in normal pods

- Then the static pods can be removed

- The scheduler and the controller manager use leader election

  (Only one is active at a time; removing an instance is seamless)

- Each instance of the API server adds itself to the `kubernetes` service

- Etcd will typically require more work!

---

## From normal pods back to static pods

- Alright, but what if the control plane is down and we need to fix it?

- We restart it using static pods!

- This can be done automatically with the [Pod Checkpointer]

- The Pod Checkpointer automatically generates manifests of running pods

- The manifests are used to restart these pods if API contact is lost

  (More details in the [Pod Checkpointer] documentation page)

- This technique is used by [bootkube]

[Pod Checkpointer]: https://github.com/kubernetes-incubator/bootkube/blob/master/cmd/checkpoint/README.md
[bootkube]: https://github.com/kubernetes-incubator/bootkube

---

## Where should the control plane be?

*Is it better to run the control plane in static pods, or normal pods?*

- If I'm a *user* of the cluster: I don't care, it makes no difference to me

- What if I'm an *admin*, i.e. the person who installs, upgrades, repairs... the cluster?

- If I'm using a managed Kubernetes cluster (AKS, EKS, GKE...) it's not my problem

  (I'm not the one setting up and managing the control plane)

- If I already picked a tool (kubeadm, kops...) to set up my cluster, the tool decides for me

- What if I haven't picked a tool yet, or if I'm installing from scratch?

  - static pods = easier to set up, easier to troubleshoot, less risk of outage

  - normal pods = easier to upgrade, easier to move (if nodes need to be shut down)

---

## Static pods in action

- On our clusters, the `staticPodPath` is `/etc/kubernetes/manifests`

.exercise[

- Have a look at this directory:
  ```bash
  ls -l /etc/kubernetes/manifests
  ```

]

We should see YAML files corresponding to the pods of the control plane.

---

## Running a static pod

- We are going to add a pod manifest to the directory, and kubelet will run it

.exercise[

- Copy a manifest to the directory:
  ```bash
  sudo cp ~/container.training/k8s/just-a-pod.yaml /etc/kubernetes/manifests
  ```

- Check that it's running:
  ```bash
  kubectl get pods
  ```

]

The output should include a pod named `hello-node1`.

---

## Remarks

In the manifest, the pod was named `hello`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello
  namespace: default
spec:
  containers:
  - name: hello
    image: nginx
```

The `-node1` suffix was added automatically by kubelet.

If we delete the pod (with `kubectl delete`), it will be recreated immediately.

To delete the pod, we need to delete (or move) the manifest file.
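For example (a sketch; it assumes the node is named `node1`, matching the pod shown above):

```bash
# Move the manifest out of the way; the hello-node1 mirror pod should go away shortly
sudo mv /etc/kubernetes/manifests/just-a-pod.yaml /tmp/
kubectl get pods
```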
@@ -1,6 +1,6 @@
## Versions installed

- Kubernetes 1.12.2
- Kubernetes 1.13.0
- Docker Engine 18.09.0
- Docker Compose 1.21.1

@@ -23,7 +23,7 @@ class: extra-details

## Kubernetes and Docker compatibility

- Kubernetes 1.12.x only validates Docker Engine versions [1.11.2 to 1.13.1 and 17.03.x](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#external-dependencies)
- Kubernetes 1.13.x only validates Docker Engine versions [up to 18.06](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#external-dependencies)

--

@@ -35,7 +35,9 @@ class: extra-details

class: extra-details

- "Validates" = continuous integration builds
- No!

- "Validates" = continuous integration builds with very extensive (and expensive) testing

- The Docker API is versioned, and offers strong backward-compatibility

@@ -57,6 +57,7 @@ chapters:
  - - k8s/owners-and-dependents.md
    - k8s/statefulsets.md
    - k8s/portworx.md
    - k8s/staticpods.md
  - - k8s/whatsnext.md
    - k8s/links.md
    - shared/thankyou.md

@@ -1,14 +1,12 @@
title: |
  Deploying and Scaling Microservices
  with Kubernetes
  Déployer ses applications
  avec Kubernetes

#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
chat: "[Gitter](https://gitter.im/enix/formation-kubernetes-20181217)"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/
slides: http://decembre2018.container.training/

exclude:
- self-paced

@@ -57,6 +55,8 @@ chapters:
  - - k8s/owners-and-dependents.md
    - k8s/statefulsets.md
    - k8s/portworx.md
    - k8s/staticpods.md
  - - k8s/whatsnext.md
    - k8s/links.md
    - shared/thankyou.md
@@ -1,26 +1,14 @@
## Intros

- This slide should be customized by the tutorial instructor(s).

- Hello! We are:

  - .emoji[👩🏻‍🏫] Ann O'Nymous ([@...](https://twitter.com/...), Megacorp Inc)

  - .emoji[👨🏾‍🎓] Stu Dent ([@...](https://twitter.com/...), University of Wakanda)

<!-- .dummy[

  - .emoji[👷🏻‍♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)

  - .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)

  - .emoji[⛵] Jérémy ([@jeremygarrouste](twitter.com/jeremygarrouste), Inpiwee)
  - .emoji[🎧] Romain ([@rdegez](https://twitter.com/rdegez), Enix SAS)

] -->
- The workshop will run from 9:15 to 17:30

- The workshop will run from ...

- There will be a lunch break at ...
- There will be a lunch break at noon

  (And coffee breaks!)

slides/override.css (new file, 17 lines)

@@ -0,0 +1,17 @@
.remark-slide-content:not(.pic) {
  background-repeat: no-repeat;
  background-position: 99% 1%;
  background-size: 8%;
  background-image: url(https://enix.io/static/img/logos/logo-domain-cropped.png);
}

div.extra-details:not(.pic) {
  background-image: url("images/extra-details.png"), url(https://enix.io/static/img/logos/logo-domain-cropped.png);
  background-position: 0.5% 1%, 99% 1%;
  background-size: 4%, 8%;
}

.remark-slide-content:not(.pic) div.remark-slide-number {
  top: 16px;
  right: 112px
}

@@ -108,7 +108,7 @@ and displays aggregated logs.

- `worker` invokes web service `rng` to generate random bytes

- `worker` invokes web servie `hasher` to hash these bytes
- `worker` invokes web service `hasher` to hash these bytes

- `worker` does this in an infinite loop

@@ -11,9 +11,9 @@ class: title, in-person
@@TITLE@@<br/></br>

.footnote[
**Be kind to the WiFi!**<br/>
**WiFi: EnixTraining**<br/>
**Password: kubeforever**<br/>
<!-- *Use the 5G network.* -->
*Don't use your hotspot.*<br/>
*Don't stream videos or download big files during the workshop[.](https://www.youtube.com/watch?v=h16zyxiwDLY)*<br/>
*Thank you!*

@@ -4,6 +4,7 @@
<title>@@TITLE@@</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<link rel="stylesheet" href="workshop.css">
<link rel="stylesheet" href="override.css">
</head>
<body>
<!--