Compare commits

...

28 Commits

Author SHA1 Message Date
Jerome Petazzoni
56f67aedb3 fix-redirects.sh: adding forced redirect 2020-04-07 16:56:32 -05:00
Jerome Petazzoni
cb47280632 Merge branch 'master' into oscon2018 2018-07-17 10:55:25 -05:00
Jérôme Petazzoni
28863728c2 Update rollout, new defaults are 25%/25% for MaxSurge and MaxUnavailable (#314) 2018-07-17 10:54:45 -05:00
Bridget Kromhout
05588a86d9 Merge pull request #312 from jpetazzo/master
updates from master
2018-07-16 19:02:47 -05:00
Bridget Kromhout
dc341da813 Merge pull request #309 from bridgetkromhout/slight-updates
Slight updates for 1.11
2018-07-16 18:58:00 -05:00
Bridget Kromhout
1d210ad808 Merge pull request #3 from jpetazzo/slighter-updates
Slighter updates
2018-07-16 18:28:20 -05:00
Jerome Petazzoni
76d9adadf5 'until 1.10' is ambiguous, try to be more explicit 2018-07-16 18:25:30 -05:00
Jerome Petazzoni
065371fa99 Merge branch 'bridgetkromhout-slight-updates' into slighter-updates 2018-07-16 18:12:45 -05:00
Jerome Petazzoni
e45f21454e Update a couple of references to kube-dns; and cosmetic tweaks 2018-07-16 18:09:50 -05:00
Bridget Kromhout
4d8c13b0bf AKS name change 2018-07-16 18:09:50 -05:00
Bridget Kromhout
5e6b38e8d1 Replace kube-dns with CoreDNS 2018-07-16 18:09:50 -05:00
Bridget Kromhout
5dd2b6313e coredns instead of kube-dns 2018-07-16 18:09:50 -05:00
Bridget Kromhout
96bf00c59b Switching from get to use kubectl api-resources 2018-07-16 18:09:50 -05:00
Bridget Kromhout
065310901f This info isn't shown anymore by kubectl get 2018-07-16 18:09:50 -05:00
Jerome Petazzoni
103261ea35 Update a couple of references to kube-dns; and cosmetic tweaks 2018-07-16 18:07:07 -05:00
Jerome Petazzoni
c6fb6f30af Merge branch 'slight-updates' of git://github.com/bridgetkromhout/container.training into bridgetkromhout-slight-updates 2018-07-16 17:48:56 -05:00
Bridget Kromhout
134d24e23b AKS name change 2018-07-16 15:08:07 -07:00
Jerome Petazzoni
8a8e97f6e2 Add Jerome's training, September in Paris 2018-07-16 16:42:25 -05:00
Bridget Kromhout
29c1bc47d4 Replace kube-dns with CoreDNS 2018-07-16 13:53:27 -07:00
Bridget Kromhout
8af5a10407 coredns instead of kube-dns 2018-07-16 13:45:26 -07:00
Bridget Kromhout
8e9991a860 Switching from get to use kubectl api-resources 2018-07-16 13:38:28 -07:00
Bridget Kromhout
8ba5d6d736 This info isn't shown anymore by kubectl get 2018-07-16 13:32:53 -07:00
Bridget Kromhout
b3d1e2133d Merge pull request #308 from bridgetkromhout/add-oscon
Add oscon slides
2018-07-15 13:24:46 -05:00
Bridget Kromhout
b3cf30f804 Add oscon slides 2018-07-15 13:23:33 -05:00
Bridget Kromhout
59ffe6b6c8 Merge pull request #307 from bridgetkromhout/oscon2018
Adding oscon-specific details
2018-07-15 13:18:48 -05:00
Bridget Kromhout
54f1300305 Adding oscon-specific details 2018-07-15 13:16:27 -05:00
Bridget Kromhout
b845543e5f Merge pull request #305 from bridgetkromhout/list-msp-slides
Adding slides link
2018-07-10 18:08:52 -05:00
Bridget Kromhout
1b54470046 Adding slides link 2018-07-10 18:04:35 -05:00
11 changed files with 71 additions and 28 deletions

slides/_redirects (new file)
View File

@@ -0,0 +1 @@
+/ /kube-halfday.yml.html 200!
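For context: Netlify's `_redirects` file takes one rule per line, `from to status`, and a trailing `!` forces the rule even when a file already exists at that path. A minimal sketch reproducing the line above:

```bash
# "200!" is a forced rewrite: always serve kube-halfday.yml.html at the site root
echo '/ /kube-halfday.yml.html 200!' > slides/_redirects
```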

View File

@@ -4,6 +4,7 @@
   event: devopsdays Minneapolis
   title: Kubernetes 101
   speaker: "ashleymcnamara, bketelsen"
+  slides: https://devopsdaysmsp2018.container.training
   attend: https://www.devopsdays.org/events/2018-minneapolis/registration/
 - date: 2018-10-01
@@ -22,12 +23,22 @@
   speaker: jpetazzo
   attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/69875
+- date: 2018-09-17
+  country: fr
+  city: Paris
+  event: ENIX SAS
+  speaker: jpetazzo
+  title: Déployer ses applications avec Kubernetes (in French)
+  lang: fr
+  attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
 - date: 2018-07-17
   city: Portland, OR
   country: us
   event: OSCON
   title: Kubernetes 101
   speaker: bridgetkromhout
+  slides: https://oscon2018.container.training/
   attend: https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/66287
 - date: 2018-06-27

View File

@@ -2,8 +2,8 @@ title: |
   Kubernetes 101
 #chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
 #chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
-chat: "In person!"
+chat: "[Gitter](https://gitter.im/jpetazzo/workshop-20180717-portland)"
+#chat: "In person!"
 gitrepo: github.com/jpetazzo/container.training

View File

@@ -239,7 +239,11 @@ Yes!
 - namespace (more-or-less isolated group of things)
 - secret (bundle of sensitive data to be passed to a container)
-And much more! (We can see the full list by running `kubectl get`)
+And much more!
+
+- We can see the full list by running `kubectl api-resources`
+
+(In Kubernetes 1.10 and prior, the command to list API resources was `kubectl get`)
 ---
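Since `kubectl api-resources` is the point of this change, a quick sketch of how it is used (both flags exist in stock kubectl 1.11+):

```bash
# List every resource type the API server supports
kubectl api-resources

# Namespaced resources only, with API groups and verbs shown
kubectl api-resources --namespaced=true -o wide
```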

View File

@@ -6,7 +6,7 @@
 - If we want to connect to our pod(s), we need to create a *service*
-- Once a service is created, `kube-dns` will allow us to resolve it by name
+- Once a service is created, CoreDNS will allow us to resolve it by name
   (i.e. after creating service `hello`, the name `hello` will resolve to something)
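To see that resolution in action, a hedged sketch (the `hello` service name comes from the slide; the throwaway test pod is made up):

```bash
# Resolve the service name from a temporary pod in the same namespace
kubectl run -it --rm dnstest --image=alpine --restart=Never -- nslookup hello
# The short name "hello" expands to hello.<namespace>.svc.cluster.local
```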
@@ -46,7 +46,7 @@ Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables`
 - `ExternalName`
-  - the DNS entry managed by `kube-dns` will just be a `CNAME` to a provided record
+  - the DNS entry managed by CoreDNS will just be a `CNAME` to a provided record
   - no port, no IP address, nothing else is allocated
 The `LoadBalancer` type is currently only available on AWS, Azure, and GCE.
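A minimal sketch of an `ExternalName` service, with made-up names (`external-db`, `db.example.com`):

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
EOF
# Pods can now use "external-db"; DNS answers with a CNAME to db.example.com
```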
@@ -179,7 +179,7 @@ class: extra-details
 - Since there is no virtual IP address, there is no load balancer either
-- `kube-dns` will return the pods' IP addresses as multiple `A` records
+- CoreDNS will return the pods' IP addresses as multiple `A` records
 - This gives us an easy way to discover all the replicas for a deployment
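A minimal sketch of such a headless service (names are illustrative; `clusterIP: None` is what makes it headless):

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-headless
spec:
  clusterIP: None
  selector:
    app: hello
  ports:
  - port: 80
EOF
# No virtual IP: DNS returns one A record per ready pod, as listed here
kubectl get endpoints hello-headless
```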

View File

@@ -83,7 +83,9 @@
 - `kubectl` has pretty good introspection facilities
-- We can list all available resource types by running `kubectl get`
+- We can list all available resource types by running `kubectl api-resources`
+  <br/>
+  (In Kubernetes 1.10 and prior, this command used to be `kubectl get`)
 - We can view details about a resource with:
   ```bash
@@ -224,7 +226,7 @@ The `kube-system` namespace is used for the control plane.
 - `kube-controller-manager` and `kube-scheduler` are other master components
-- `kube-dns` is an additional component (not mandatory but super useful, so it's there)
+- `coredns` provides DNS-based service discovery ([replacing kube-dns as of 1.11](https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/))
 - `kube-proxy` is the (per-node) component managing port mappings and such
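To see these components on a live cluster:

```bash
# Control plane components (and CoreDNS, from 1.11 on) live in kube-system
kubectl -n kube-system get pods
```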

View File

@@ -6,7 +6,7 @@
 - [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
-- [Azure Container Service](https://docs.microsoft.com/azure/aks/)
+- [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/)
 - [Cloud Developer Advocates](https://developer.microsoft.com/advocates/)

View File

@@ -151,7 +151,7 @@ Note: it might take a minute or two for the app to be up and running.
 - A pod in the `default` namespace can communicate with a pod in the `kube-system` namespace
-- `kube-dns` uses a different subdomain for each namespace
+- CoreDNS uses a different subdomain for each namespace
 - Example: from any pod in the cluster, you can connect to the Kubernetes API with:
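The hunk is cut before the example itself; as a general property of cluster DNS (not necessarily the slide's exact command), services resolve as `<service>.<namespace>.svc.cluster.local`, and the API server is exposed as the `kubernetes` service in `default`:

```bash
# From any pod in the cluster (-k skips TLS verification for brevity)
curl -k https://kubernetes.default.svc.cluster.local
```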

View File

@@ -154,19 +154,29 @@ That rollout should be pretty quick. What shows in the web UI?
 --
-Our rollout is stuck. However, the app is not dead (just 10% slower).
+Our rollout is stuck. However, the app is not dead.
+
+(After a minute, it will stabilize to be 20-25% slower.)
 ---
 ## What's going on with our rollout?
-- Why is our app 10% slower?
+- Why is our app a bit slower?
-- Because `MaxUnavailable=1`, so the rollout terminated 1 replica out of 10 available
+- Because `MaxUnavailable=25%`
+
+  ... So the rollout terminated 2 replicas out of 10 available
-- Okay, but why do we see 2 new replicas being rolled out?
+- Okay, but why do we see 5 new replicas being rolled out?
-- Because `MaxSurge=1`, so in addition to replacing the terminated one, the rollout is also starting one more
+- Because `MaxSurge=25%`
+
+  ... So in addition to replacing 2 replicas, the rollout is also starting 3 more
+
+- It rounded down the number of MaxUnavailable pods conservatively,
+  <br/>
+  but the total number of pods being rolled out is allowed to be 25+25=50%
 ---
---
@@ -176,15 +186,15 @@ class: extra-details
 - We start with 10 pods running for the `worker` deployment
-- Current settings: MaxUnavailable=1 and MaxSurge=1
+- Current settings: MaxUnavailable=25% and MaxSurge=25%
 - When we start the rollout:
-  - one replica is taken down (as per MaxUnavailable=1)
-  - another is created (with the new version) to replace it
-  - another is created (with the new version) per MaxSurge=1
+  - two replicas are taken down (as per MaxUnavailable=25%)
+  - two others are created (with the new version) to replace them
+  - three others are created (with the new version) per MaxSurge=25%
-- Now we have 9 replicas up and running, and 2 being deployed
+- Now we have 8 replicas up and running, and 5 being deployed
 - Our rollout is stuck at this point!
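The arithmetic behind those numbers, for reference (Kubernetes rounds `maxUnavailable` down and `maxSurge` up):

```bash
# With 10 replicas and the 25%/25% defaults:
#   maxUnavailable = floor(10 * 0.25) = 2  -> at least 8 pods stay available
#   maxSurge       = ceil(10 * 0.25)  = 3  -> at most 13 pods may exist at once
# Hence 8 old pods running and 2+3 = 5 new pods being rolled out.

# Inspect the live strategy on a deployment:
kubectl get deployment worker -o jsonpath='{.spec.strategy.rollingUpdate}'
```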
@@ -251,7 +261,7 @@ Note the `3xxxx` port.
 - revert to `v0.1`
 - be conservative on availability (always have desired number of available workers)
-- be aggressive on rollout speed (update more than one pod at a time)
+- go slow on rollout speed (update only one pod at a time)
 - give some time to our workers to "warm up" before starting more
 The corresponding changes can be expressed in the following YAML snippet:
@@ -267,7 +277,7 @@ spec:
   strategy:
     rollingUpdate:
       maxUnavailable: 0
-      maxSurge: 3
+      maxSurge: 1
   minReadySeconds: 10
 ```
 ]
@@ -296,7 +306,7 @@ spec:
   strategy:
     rollingUpdate:
       maxUnavailable: 0
-      maxSurge: 3
+      maxSurge: 1
   minReadySeconds: 10
 "
 kubectl rollout status deployment worker
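The hunk shows only the tail of the command; a complete equivalent, as a hedged sketch (the `worker` deployment name comes from the slides, and `kubectl patch -p` accepts YAML as well as JSON payloads):

```bash
kubectl patch deployment worker -p "
spec:
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 10
"
kubectl rollout status deployment worker
```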

View File

@@ -28,7 +28,7 @@ And *then* it is time to look at orchestration!
 - Each of the two `redis` services has its own `ClusterIP`
-- `kube-dns` creates two entries, mapping to these two `ClusterIP` addresses:
+- CoreDNS creates two entries, mapping to these two `ClusterIP` addresses:
   `redis.blue.svc.cluster.local` and `redis.green.svc.cluster.local`
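A hedged sketch of how those two entries come about (namespace names follow the slide's blue/green convention):

```bash
kubectl create namespace blue
kubectl create namespace green
kubectl create deployment redis --image=redis --namespace=blue
kubectl create deployment redis --image=redis --namespace=green
kubectl expose deployment redis --port=6379 --namespace=blue
kubectl expose deployment redis --port=6379 --namespace=green
# CoreDNS now serves redis.blue.svc.cluster.local and redis.green.svc.cluster.local
```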

View File

@@ -2,15 +2,30 @@
 - Hello! We are:
-  - .emoji[✨] Bridget ([@bridgetkromhout](https://twitter.com/bridgetkromhout))
-  - .emoji[🌟] Joe ([@joelaha](https://twitter.com/joelaha))
+  - .emoji[✨] Bridget Kromhout ([@bridgetkromhout](https://twitter.com/bridgetkromhout))
+  - .emoji[🌟] Joe Laha ([@joelaha](https://twitter.com/joelaha))
+  - .emoji[💁🏻‍♀️] Karen Chu ([@karenhchu](https://twitter.com/karenhchu))
+  - .emoji[🐳] Jérôme Petazzoni ([@jpetazzo](https://twitter.com/jpetazzo)) (joining us from Berlin in the chat room!)
-- The workshop will run from 13:30-16:45
-- There will be a break from 15:00-15:15
+- The workshop will run from 9:00-12:30
+- There will be a break from 10:30-11:00
 - Feel free to interrupt for questions at any time
 - *Especially when you see full screen container pictures!*
+---
+## Say hi!
+- We encourage networking at [#oscon](https://twitter.com/hashtag/oscon?f=tweets&vertical=default&src=hash)
+- Take a minute to introduce yourself to your neighbors
+- Tell them where you're from (where you're based & what org you work at)
+- Share what you're hoping to learn in this session! .emoji[✨]