Mirror of https://github.com/jpetazzo/container.training.git
Synced 2026-02-15 10:09:56 +00:00

Compare commits: oscon2018 ... devopsdays (4 commits)
| Author | SHA1 | Date |
|---|---|---|
| | bd28742333 | |
| | 1b38a1ff09 | |
| | a280c34a27 | |
| | 47e1ced630 | |
@@ -28,5 +28,5 @@ def rng(how_many_bytes):

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80, threaded=False)
    app.run(host="0.0.0.0", port=80)

@@ -15,13 +15,10 @@ TEMPLATE="""<html>

{% for item in coming_soon %}
<tr>
<td>{{ item.title }}</td>
<td>{{ item.prettydate }}: {{ item.title }} at {{ item.event }} in {{ item.city }}</td>
<td>{% if item.slides %}<a class="slides" href="{{ item.slides }}" />{% endif %}</td>
<td><a class="attend" href="{{ item.attend }}" /></td>
</tr>
<tr>
<td class="details">Scheduled {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}

@@ -30,14 +27,10 @@ TEMPLATE="""<html>

{% for item in past_workshops[:5] %}
<tr>
<td>{{ item.title }}</td>
<td>{{ item.prettydate }}: {{ item.title }} {% if item.event %}at {{ item.event }} {% endif %} {% if item.city %} in {{ item.city }} {% endif %}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td>{% if item.video %}<a class="video" href="{{ item.video }}" />{% endif %}</td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>

{% endfor %}

{% if past_workshops[5:] %}

@@ -1,12 +1,3 @@
- date: 2018-07-12
  city: Minneapolis, MN
  country: us
  event: devopsdays Minneapolis
  title: Kubernetes 101
  speaker: "ashleymcnamara, bketelsen"
  slides: https://devopsdaysmsp2018.container.training
  attend: https://www.devopsdays.org/events/2018-minneapolis/registration/

- date: 2018-10-01
  city: New York, NY
  country: us

@@ -23,22 +14,12 @@
  speaker: jpetazzo
  attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/69875

- date: 2018-09-17
  country: fr
  city: Paris
  event: ENIX SAS
  speaker: jpetazzo
  title: Déployer ses applications avec Kubernetes (in French)
  lang: fr
  attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/

- date: 2018-07-17
  city: Portland, OR
  country: us
  event: OSCON
  title: Kubernetes 101
  speaker: bridgetkromhout
  slides: https://oscon2018.container.training/
  attend: https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/66287

- date: 2018-06-27

@@ -2,8 +2,8 @@ title: |
  Kubernetes 101

#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/workshop-20180717-portland)"
#chat: "In person!"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"

gitrepo: github.com/jpetazzo/container.training

@@ -239,11 +239,7 @@ Yes!
- namespace (more-or-less isolated group of things)
- secret (bundle of sensitive data to be passed to a container)

And much more!

- We can see the full list by running `kubectl api-resources`

(In Kubernetes 1.10 and prior, the command to list API resources was `kubectl get`)
And much more! (We can see the full list by running `kubectl get`)

---

@@ -6,7 +6,7 @@

- If we want to connect to our pod(s), we need to create a *service*

- Once a service is created, CoreDNS will allow us to resolve it by name
- Once a service is created, `kube-dns` will allow us to resolve it by name

(i.e. after creating service `hello`, the name `hello` will resolve to something)
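
For readers following along, here is an illustrative sketch of what a `hello` service could look like; it is not part of the diff above, and the `app: hello` selector and port 80 are assumptions.

```yaml
# Minimal sketch of a ClusterIP service named "hello".
# The selector and ports are illustrative assumptions, not taken from the diff.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: ClusterIP
  selector:
    app: hello        # pods carrying this label receive the traffic
  ports:
    - port: 80        # port exposed on the service's cluster-internal IP
      targetPort: 80  # port the pods listen on
```

Once such a service exists, pods in the same namespace can resolve plain `hello`; pods in other namespaces can use the fully qualified form (typically `hello.<namespace>.svc.cluster.local`).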

@@ -46,7 +46,7 @@ Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables`

- `ExternalName`

- the DNS entry managed by CoreDNS will just be a `CNAME` to a provided record
- the DNS entry managed by `kube-dns` will just be a `CNAME` to a provided record
- no port, no IP address, no nothing else is allocated

The `LoadBalancer` type is currently only available on AWS, Azure, and GCE.
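
As a side note (illustrative, not part of the diff): an `ExternalName` service is essentially a DNS alias, declared roughly as below; the service name and target record are assumptions.

```yaml
# Hypothetical ExternalName service: cluster DNS answers queries for it
# with a CNAME pointing at the external record, and nothing else is allocated.
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  externalName: db.example.com   # the CNAME target; no ClusterIP, no ports required
```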

@@ -179,7 +179,7 @@ class: extra-details

- Since there is no virtual IP address, there is no load balancer either

- CoreDNS will return the pods' IP addresses as multiple `A` records
- `kube-dns` will return the pods' IP addresses as multiple `A` records

- This gives us an easy way to discover all the replicas for a deployment
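
For context (an illustrative sketch, not part of the diff): the headless case described above corresponds to a service with `clusterIP: None`; the name and selector here are assumptions.

```yaml
# Headless service sketch: no virtual IP is allocated, so cluster DNS returns
# one A record per ready pod matched by the selector.
apiVersion: v1
kind: Service
metadata:
  name: workers-headless
spec:
  clusterIP: None     # this is what makes the service "headless"
  selector:
    app: worker
  ports:
    - port: 80
```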

@@ -83,9 +83,7 @@

- `kubectl` has pretty good introspection facilities

- We can list all available resource types by running `kubectl api-resources`
<br/>
(In Kubernetes 1.10 and prior, this command used to be `kubectl get`)
- We can list all available resource types by running `kubectl get`

- We can view details about a resource with:

@@ -226,7 +224,7 @@ The `kube-system` namespace is used for the control plane.

- `kube-controller-manager` and `kube-scheduler` are other master components

- `coredns` provides DNS-based service discovery ([replacing kube-dns as of 1.11](https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/))
- `kube-dns` is an additional component (not mandatory but super useful, so it's there)

- `kube-proxy` is the (per-node) component managing port mappings and such

@@ -151,7 +151,7 @@ Note: it might take a minute or two for the app to be up and running.

- A pod in the `default` namespace can communicate with a pod in the `kube-system` namespace

- CoreDNS uses a different subdomain for each namespace
- `kube-dns` uses a different subdomain for each namespace

- Example: from any pod in the cluster, you can connect to the Kubernetes API with:

@@ -154,29 +154,19 @@ That rollout should be pretty quick. What shows in the web UI?

--

Our rollout is stuck. However, the app is not dead.

(After a minute, it will stabilize to be 20-25% slower.)
Our rollout is stuck. However, the app is not dead (just 10% slower).

---

## What's going on with our rollout?

- Why is our app a bit slower?
- Why is our app 10% slower?

- Because `MaxUnavailable=25%`
- Because `MaxUnavailable=1`, so the rollout terminated 1 replica out of 10 available

... So the rollout terminated 2 replicas out of 10 available
- Okay, but why do we see 2 new replicas being rolled out?

- Okay, but why do we see 5 new replicas being rolled out?

- Because `MaxSurge=25%`

... So in addition to replacing 2 replicas, the rollout is also starting 3 more

- It rounded down the number of MaxUnavailable pods conservatively,
<br/>
but the total number of pods being rolled out is allowed to be 25+25=50%
- Because `MaxSurge=1`, so in addition to replacing the terminated one, the rollout is also starting one more

---

@@ -186,15 +176,15 @@ class: extra-details

- We start with 10 pods running for the `worker` deployment

- Current settings: MaxUnavailable=25% and MaxSurge=25%
- Current settings: MaxUnavailable=1 and MaxSurge=1

- When we start the rollout:

- two replicas are taken down (as per MaxUnavailable=25%)
- two others are created (with the new version) to replace them
- three others are created (with the new version) per MaxSurge=25%)
- one replica is taken down (as per MaxUnavailable=1)
- another is created (with the new version) to replace it
- another is created (with the new version) per MaxSurge=1

- Now we have 8 replicas up and running, and 5 being deployed
- Now we have 9 replicas up and running, and 2 being deployed

- Our rollout is stuck at this point!
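
To make the percentage arithmetic above concrete, here is an illustrative sketch (not part of the diff; the image and labels are assumptions). Kubernetes rounds `maxUnavailable` down and `maxSurge` up, which is how the slide arrives at 2 terminated pods and 3 extra pods.

```yaml
# With 10 replicas, maxUnavailable=25% and maxSurge=25%:
#   25% of 10 = 2.5 -> rounded down to 2 pods allowed to be unavailable
#   25% of 10 = 2.5 -> rounded up to 3 extra pods allowed during the rollout
# So 2 old pods are terminated and 2 + 3 = 5 new pods are created:
# 8 replicas up and running, 5 being deployed, as described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 10
  selector:
    matchLabels:
      app: worker
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: dockercoins/worker:v0.2   # hypothetical image tag
```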

@@ -261,7 +251,7 @@ Note the `3xxxx` port.

- revert to `v0.1`
- be conservative on availability (always have desired number of available workers)
- go slow on rollout speed (update only one pod at a time)
- be aggressive on rollout speed (update more than one pod at a time)
- give some time to our workers to "warm up" before starting more

The corresponding changes can be expressed in the following YAML snippet:

@@ -277,7 +267,7 @@ spec:
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
      maxSurge: 3
  minReadySeconds: 10
```
]

@@ -306,7 +296,7 @@ spec:
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
      maxSurge: 3
  minReadySeconds: 10
"
kubectl rollout status deployment worker

@@ -1,6 +1,6 @@
## Versions installed

- Kubernetes 1.11.0
- Kubernetes 1.10.5
- Docker Engine 18.03.1-ce
- Docker Compose 1.21.1

@@ -28,7 +28,7 @@ And *then* it is time to look at orchestration!

- Each of the two `redis` services has its own `ClusterIP`

- CoreDNS creates two entries, mapping to these two `ClusterIP` addresses:
- `kube-dns` creates two entries, mapping to these two `ClusterIP` addresses:

`redis.blue.svc.cluster.local` and `redis.green.svc.cluster.local`
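
As an illustration (not from the diff): the two DNS entries come from the same `redis` Service manifest applied in the `blue` and `green` namespaces; the selector and port below are assumptions.

```yaml
# Applying this in namespace "blue" (and again with namespace "green")
# produces redis.blue.svc.cluster.local and redis.green.svc.cluster.local,
# each pointing at its own ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: blue
spec:
  selector:
    app: redis
  ports:
    - port: 6379
```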

@@ -2,30 +2,13 @@

- Hello! We are:

.emoji[✨] Bridget Kromhout ([@bridgetkromhout](https://twitter.com/bridgetkromhout))
.emoji[✨] Bridget ([@bridgetkromhout](https://twitter.com/bridgetkromhout))

.emoji[🌟] Joe Laha ([@joelaha](https://twitter.com/joelaha))
.emoji[🌟] Joe ([@joelaha](https://twitter.com/joelaha))

.emoji[💁🏻‍♀️] Karen Chu ([@karenhchu](https://twitter.com/karenhchu))

.emoji[🐳] Jérôme Petazzoni ([@jpetazzo](https://twitter.com/jpetazzo)) (joining us from Berlin in the chat room!)

- The workshop will run from 9:00-12:30

- There will be a break from 10:30-11:00
- The workshop will run from 14:45 - 17:15

- Feel free to interrupt for questions at any time

- *Especially when you see full screen container pictures!*

---

## Say hi!

- We encourage networking at [#oscon](https://twitter.com/hashtag/oscon?f=tweets&vertical=default&src=hash)

- Take a minute to introduce yourself to your neighbors

- Tell them where you're from (where you're based out of & what org you work at)

- Share what you're hoping to learn in this session! .emoji[✨]