Compare commits

...

30 Commits

Author SHA1 Message Date
Gerry S | 5af30c64bb | Friday | 2022-09-23 08:06:14 -04:00
Gerry S | 75c5964c30 | Thursday | 2022-09-22 07:53:28 -04:00
Gerry S | b112c1fae6 | Wednesday | 2022-09-21 10:23:23 -04:00
Gerry S | b4d837bbf5 | Tuesday | 2022-09-20 07:57:41 -04:00
Gerry S | dda21fee01 | Day 1 | 2022-09-19 08:10:59 -04:00
Gerry S | da2806ea93 | Day 1 | 2022-09-19 08:04:46 -04:00
Gerry S | d983592ddc | Post-Friday | 2022-09-18 13:43:09 -04:00
Gerry S | d759703f9a | Friday/Day 5 | 2022-08-26 11:05:06 -04:00
Gerry S | ffbecd9e04 | Thurs Day-4 | 2022-08-24 15:40:52 -04:00
Gerry S | 6a235fae44 | Wednesday/Day-3 | 2022-08-24 11:05:11 -04:00
Gerry S | d83a6232c4 | Day 2 TOC | 2022-08-23 10:50:36 -04:00
Jérôme Petazzoni | 7b7c755b95 | Bump up runtime version to fix Netlify deployment | 2022-08-22 17:22:18 +02:00
Gerry S | 6d0849eebb | New Relic Gerry version (day 1) | 2022-08-22 10:28:39 -04:00
Jérôme Petazzoni | b46dcd5157 | 🧭 New Relic August 2022 content (for Gerry) | 2022-08-03 15:10:25 +02:00
Jérôme Petazzoni | 1aaf9b0bd5 | ♻️ Update Linode LKE terraform module | 2022-07-29 14:37:37 +02:00
Jérôme Petazzoni | ce39f97a28 | Bump up versions for cluster upgrade lab | 2022-07-22 11:32:22 +02:00
jonjohnsonjr | 162651bdfd | Typo: sould -> should | 2022-07-18 19:16:47 +02:00
Jérôme Petazzoni | 2958ca3a32 | ♻️ Update CRD content | 2022-07-14 10:32:34 +02:00
    (Rehaul for crd/v1; demonstrate what happens when adding data validation a posteriori.)
Jérôme Petazzoni | 02a15d94a3 | Add nsinjector | 2022-07-06 14:28:24 +02:00
Jérôme Petazzoni | 12d9f06f8a | Add YTT content | 2022-06-23 08:37:50 +02:00
Jérôme Petazzoni | 43caccbdf6 | ♻️ Bump up socket.io versions to address dependabot complaints | 2022-06-20 07:09:36 +02:00
    (The autopilot code isn't exposed to anything; but this will stop dependabot from displaying the annoying warning banners 😅)
Tianon Gravi | a52f642231 | Update links to kube-resource-report | 2022-06-10 21:43:56 +02:00
    (Also, remove links to demos that no longer exist.)
Tianon Gravi | 30b1bfde5b | Fix a few minor typos | 2022-06-10 21:43:56 +02:00
Jérôme Petazzoni | 5b39218593 | Bump up Kapsule k8s version | 2022-06-08 14:35:24 +02:00
Jérôme Petazzoni | f65ca19b44 | 📃 Mention type validation issues for CRDs | 2022-06-06 13:59:13 +02:00
Jérôme Petazzoni | abb0fbe364 | 📃 Update operators intro to be less db-centric | 2022-06-06 13:03:51 +02:00
Jerome Petazzoni | a18af8f4c4 | 🐞 Fix WaitForFirstConsumer with OpenEBS hostpath | 2022-06-01 08:57:42 +02:00
Jerome Petazzoni | 41e9047f3d | Bump up sealed secret controller | 2022-06-01 08:51:31 +02:00
    (quay.io doesn't work anymore, and kubeseal 0.17.4 was using an image on quay. kubeseal 0.17.5 uses an image on the Docker Hub instead.)
Jérôme Petazzoni | 907e769d4e | 📍 Pin containerd version to avoid weave/containerd issue | 2022-05-25 08:59:14 +02:00
    (See https://github.com/containerd/containerd/issues/6921 for details)
Karol Berezicki | 71ba3ec520 | Fixed link to Docker forums in intro.md | 2022-05-23 14:41:59 +02:00
125 changed files with 27224 additions and 984 deletions

.gitignore vendored

@@ -6,13 +6,7 @@ prepare-vms/tags
prepare-vms/infra
prepare-vms/www
prepare-tf/.terraform*
prepare-tf/terraform.*
prepare-tf/stage2/*.tf
prepare-tf/stage2/kubeconfig.*
prepare-tf/stage2/.terraform*
prepare-tf/stage2/terraform.*
prepare-tf/stage2/externalips.*
prepare-tf/tag-*
slides/*.yml.html
slides/autopilot/state.yaml

k8s/affinity-pod.yaml Normal file

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Pod
metadata:
  name: aff-pod
spec:
  terminationGracePeriodSeconds: 30
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cow
            operator: In
            values:
            - elsie
  containers:
  - name: aff-pod
    image: alpine
    command:
    - sleep
    args:
    - "1000"
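The pod above only schedules onto a node labeled `cow=elsie`. A quick way to try it (a sketch; `node1` is a hypothetical node name):

```bash
kubectl label nodes node1 cow=elsie
kubectl apply -f k8s/affinity-pod.yaml
# The pod stays Pending until a node satisfies the required affinity
kubectl get pod aff-pod -o wide
```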

k8s/init-container.yaml Normal file

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Pod
metadata:
  name: initty
spec:
  volumes:
  - name: preFetched
    emptyDir: {}
  containers:
  - name: main
    image: main
    volumeMounts:
    - name: preFetched
      mountPath: /usr/share/nginx/html/
  initContainers:
  - name: git-cloner
    image: alpine
    command: [ "sh", "-c", "apk add git && sleep 5 && git clone https://github.com/octocat/Spoon-Knife /preFetched" ]
    volumeMounts:
    - name: preFetched
      mountPath: /preFetched/
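A sketch of watching the init container run before the main container starts (note that `main` is a placeholder image reference in this manifest, so the main container itself won't actually pull):

```bash
kubectl apply -f k8s/init-container.yaml
kubectl get pod initty --watch      # status shows Init:0/1 while git-cloner runs
kubectl logs initty -c git-cloner
```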


@@ -0,0 +1,18 @@
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURKekNDQWcrZ0F3SUJBZ0lDQm5Vd0RRWUpLb1pJaHZjTkFRRUxCUUF3TXpFVk1CTUdBMVVFQ2hNTVJHbG4KYVhSaGJFOWpaV0Z1TVJvd0dBWURWUVFERXhGck9ITmhZWE1nUTJ4MWMzUmxjaUJEUVRBZUZ3MHlNakE1TVRneQpNekV6TWpGYUZ3MDBNakE1TVRneU16RXpNakZhTURNeEZUQVRCZ05WQkFvVERFUnBaMmwwWVd4UFkyVmhiakVhCk1CZ0dBMVVFQXhNUmF6aHpZV0Z6SUVOc2RYTjBaWElnUTBFd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUIKRHdBd2dnRUtBb0lCQVFEYnVlN1MzRS9hdFpvQVJjSUllRFJNMG5vMThvaDNEL3cyV3VWQmNaQWppZXhmNGw4VQpldEZlWDBWQmZFZGJqUndIWTYva2VHdHVzS0dXUzNZdUN5RHd3WFNhMEV5NS9LM0ZLUHhEUkdyUWJSNXJkUWg5CmI4NW1IbXVIcUYvQXJHMWJVV2JYQmFRVVhBdXNtMVpjMnNtOXdWQm0vRlRJSTJDdEpReTViVXVIQnY3N01BNHEKUzV3b1liMXkwUHo0OXNuVldiY3BXZ1FxR080SE9JelFJc2crakxYR0lhWi96L0lneHR2M0ZYaVJVUlVIZWhERwplTTVuRDErY1JuUkorcDlLQU9VMUdOZzQwVENoN3hjaGo3UHNJMDV1Q0xVQWFhYVJ4M0pVRFBpRXgxWjVjOHQwCll6aTBXTVVTUVpkTjlUc3UrNGZZaXAyTFpkZGxXOW1ma0NYREFnTUJBQUdqUlRCRE1BNEdBMVVkRHdFQi93UUUKQXdJQmhqQVNCZ05WSFJNQkFmOEVDREFHQVFIL0FnRUFNQjBHQTFVZERnUVdCQlNpcEo3SHZQTkRZMWcrcDNEdwp0TUEvNThmUmFEQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFuYkNYSHUvM3YrbXRlU3N4TXFxUndJd1c0T015CkdRdzE0aERtYkFRcmovYVo0WkFvZUJIdFJSMGYxTFFXQnVIQTBtTFJvSTFSenpBQWw3V2lNMDd6VU1ETlV2enUKR0FCVmtwOEV6b2RneTlNclFkN2VtZkNJRFA3SkhZV1FzL1VxcGVVZW4zcHljQ3dXZFFXY3ZDR0FtTEZZSzI3TApKcnFKV1JXNGErWTVDUkhqVytzTGJpeTNNMTdrOHVWM1pzMktNS0FUaVNXWUZTUzUrSkg5Tk5WdXNKd1lUZVZPCmJOZG5PbS9ub1NLejYrbHUvUm1NK0NsUFdXakdXcUlHdHZyNFl6b0puZk52UDNXL01FQXlzY3Zlck9jcXUxWTAKa1dmRkg2azVlY3NsK2k1RTFkaE02U0JRaFZzV1crMjFlN1plbVJwc1htNkNyYUZqek4vSFlaMEMzdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://8f36cb5d-e565-452a-a09c-81760683c1f9.k8s.ondigitalocean.com
  name: do-sfo3-k8s-nr
contexts:
- context:
    cluster: do-sfo3-k8s-nr
    user: do-sfo3-k8s-nr-admin
  name: do-sfo3-k8s-nr
current-context: do-sfo3-k8s-nr
kind: Config
preferences: {}
users:
- name: do-sfo3-k8s-nr-admin
  user:
    token: dop_v1_dc6f141491e1e3447a52ec192c3424c0481622f5430cf219fb38458280e1ff88
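To use a standalone kubeconfig like this one without merging it into `~/.kube/config` (the file name below is an assumption; it isn't shown in this diff):

```bash
KUBECONFIG=./do-sfo3-k8s-nr.kubeconfig kubectl get nodes
```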

k8s/multiLine.yaml Normal file

@@ -0,0 +1,23 @@
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - command:
    - /bin/sh
    - -c
    - |
      echo "running below scripts"
      i=0;
      while true;
      do
        echo "$i: $(date)";
        i=$((i+1));
        sleep 1;
      done
    name: busybox
    image: busybox

k8s/multiLine2.yaml Normal file

@@ -0,0 +1,22 @@
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - command: ["/bin/sh", "-c"]
    args:
    - |
      echo "running below scripts"
      i=0;
      while true;
      do
        echo "$i: $(date)";
        i=$((i+1));
        sleep 1;
      done
    name: busybox
    image: busybox


@@ -3,11 +3,13 @@ kind: Pod
 metadata:
   name: nginx-with-volume
 spec:
-  volumes:
-  - name: www
   containers:
   - name: nginx
     image: nginx
     volumeMounts:
     - name: www
       mountPath: /usr/share/nginx/html/
+  volumes:
+  - name: www
+    emptyDir: {}


@@ -3,8 +3,9 @@ kind: Pod
 metadata:
   name: nginx-with-git
 spec:
-  volumes:
-  - name: www
+  terminationGracePeriodSeconds: 0
+  restartPolicy: OnFailure
   containers:
   - name: nginx
     image: nginx
@@ -17,5 +18,9 @@ spec:
     volumeMounts:
     - name: www
       mountPath: /www/
-  restartPolicy: OnFailure
+  volumes:
+  - name: www
+    emptyDir: {}


@@ -3,14 +3,8 @@ kind: Pod
 metadata:
   name: nginx-with-init
 spec:
-  volumes:
-  - name: www
-  containers:
-  - name: nginx
-    image: nginx
-    volumeMounts:
-    - name: www
-      mountPath: /usr/share/nginx/html/
+  terminationGracePeriodSeconds: 0
   initContainers:
   - name: git
     image: alpine
@@ -18,3 +12,15 @@ spec:
     volumeMounts:
     - name: www
       mountPath: /www/
+  containers:
+  - name: nginx
+    image: nginx
+    volumeMounts:
+    - name: www
+      mountPath: /usr/share/nginx/html/
+  volumes:
+  - name: www
+    emptyDir: {}


@@ -0,0 +1,28 @@
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-nginx
spec:
  terminationGracePeriodSeconds: 30
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: myData
            operator: In
            values:
            - present
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: www
    hostPath:
      path: /home/k8s/myFiles

k8s/nginx-git.yaml Normal file

@@ -0,0 +1,27 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-git
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  - name: git
    image: alpine
    command:
    - /bin/sh
    - -c
    - |
      apk add git &&
      git clone https://github.com/octocat/Spoon-Knife /www
    volumeMounts:
    - name: www
      mountPath: /www/
  volumes:
  - name: www
    emptyDir: {}

k8s/nginx-init.yaml Normal file

@@ -0,0 +1,28 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-git
spec:
  terminationGracePeriodSeconds: 0
  initContainers:
  - name: git
    image: alpine
    command:
    - /bin/sh
    - -c
    - |
      apk add git &&
      git clone https://github.com/octocat/Spoon-Knife /www
    volumeMounts:
    - name: www
      mountPath: /www/
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: www
    emptyDir: {}

k8s/nginx.yaml Normal file

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Pod
metadata:
  name: my-web
spec:
  containers:
  - name: nginx
    image: nginx

k8s/ping.yaml Normal file

@@ -0,0 +1,19 @@
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ping
  name: ping
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - command:
    - ping
    args:
    - 127.0.0.1
    image: alpine
    name: ping
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

k8s/pizza-1.yaml Normal file

@@ -0,0 +1,14 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: pizzas.container.training
spec:
  group: container.training
  version: v1alpha1
  scope: Namespaced
  names:
    plural: pizzas
    singular: pizza
    kind: Pizza
    shortNames:
    - piz

k8s/pizza-2.yaml Normal file

@@ -0,0 +1,20 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: pizzas.container.training
spec:
  group: container.training
  scope: Namespaced
  names:
    plural: pizzas
    singular: pizza
    kind: Pizza
    shortNames:
    - piz
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object

k8s/pizza-3.yaml Normal file

@@ -0,0 +1,32 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: pizzas.container.training
spec:
  group: container.training
  scope: Namespaced
  names:
    plural: pizzas
    singular: pizza
    kind: Pizza
    shortNames:
    - piz
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        required: [ spec ]
        properties:
          spec:
            type: object
            required: [ sauce, toppings ]
            properties:
              sauce:
                type: string
              toppings:
                type: array
                items:
                  type: string

k8s/pizza-4.yaml Normal file

@@ -0,0 +1,39 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: pizzas.container.training
spec:
  group: container.training
  scope: Namespaced
  names:
    plural: pizzas
    singular: pizza
    kind: Pizza
    shortNames:
    - piz
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        required: [ spec ]
        properties:
          spec:
            type: object
            required: [ sauce, toppings ]
            properties:
              sauce:
                type: string
              toppings:
                type: array
                items:
                  type: string
    additionalPrinterColumns:
    - jsonPath: .spec.sauce
      name: Sauce
      type: string
    - jsonPath: .spec.toppings
      name: Toppings
      type: string

k8s/pizza-5.yaml Normal file

@@ -0,0 +1,40 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: pizzas.container.training
spec:
  group: container.training
  scope: Namespaced
  names:
    plural: pizzas
    singular: pizza
    kind: Pizza
    shortNames:
    - piz
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        required: [ spec ]
        properties:
          spec:
            type: object
            required: [ sauce, toppings ]
            properties:
              sauce:
                type: string
                enum: [ red, white ]
              toppings:
                type: array
                items:
                  type: string
    additionalPrinterColumns:
    - jsonPath: .spec.sauce
      name: Sauce
      type: string
    - jsonPath: .spec.toppings
      name: Toppings
      type: string

k8s/pizzas.yaml Normal file

@@ -0,0 +1,45 @@
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
  name: margherita
spec:
  sauce: red
  toppings:
  - mozarella
  - basil
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
  name: quatrostagioni
spec:
  sauce: red
  toppings:
  - artichoke
  - basil
  - mushrooms
  - prosciutto
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
  name: mehl31
spec:
  sauce: white
  toppings:
  - goatcheese
  - pear
  - walnuts
  - mozzarella
  - rosemary
  - honey
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
  name: brownie
spec:
  sauce: chocolate
  toppings:
  - nuts
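How these files fit together, as a sketch: with the CRD from `k8s/pizza-5.yaml` installed, the `enum: [ red, white ]` on `sauce` should cause the last resource above (`brownie`, with `sauce: chocolate`) to be rejected at admission time:

```bash
kubectl apply -f k8s/pizza-5.yaml
kubectl apply -f k8s/pizzas.yaml
# margherita, quatrostagioni, and mehl31 are created; brownie fails validation
kubectl get pizzas
```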

k8s/sampleYaml.yaml Normal file

@@ -0,0 +1,18 @@
name: gerry
citizenship: US
height-in-cm: 197
coder: true
friends:
- Moe
- Larry
- Curly
employees:
- name: Moe
  position: dev
- name: Larry
  position: ops
- name: Curly
  position: devOps
poem: |
  Mary had a little lamb
  It was very cute

k8s/sampleYamlAsJson.json Normal file

@@ -0,0 +1,26 @@
{
  "name": "gerry",
  "citizenship": "US",
  "height-in-cm": 197,
  "coder": true,
  "friends": [
    "Moe",
    "Larry",
    "Curly"
  ],
  "employees": [
    {
      "name": "Moe",
      "position": "dev"
    },
    {
      "name": "Larry",
      "position": "ops"
    },
    {
      "name": "Curly",
      "position": "devOps"
    }
  ],
  "poem": "Mary had a little lamb\nIt was very cute\n"
}
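A quick check that the two sample files really are equivalent (a sketch, assuming Python 3 with PyYAML available):

```bash
python3 -c '
import json, yaml
y = yaml.safe_load(open("k8s/sampleYaml.yaml"))
j = json.load(open("k8s/sampleYamlAsJson.json"))
assert y == j
print("YAML and JSON parse to the same structure")
'
```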


@@ -0,0 +1,164 @@
#! Define and use variables.
---
#@ repository = "dockercoins"
#@ tag = "v0.1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ "{}/hasher:{}".format(repository, tag)
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ "{}/rng:{}".format(repository, tag)
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ "{}/webui:{}".format(repository, tag)
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ "{}/worker:{}".format(repository, tag)
name: worker
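To render this template, `ytt` evaluates the `#@` annotations and substitutes the variables defined at the top (a sketch; the path assumes this file follows the `k8s/ytt/<step>/app.yaml` layout of the later steps):

```bash
ytt -f k8s/ytt/1-variables/app.yaml | kubectl apply -f -
```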


@@ -0,0 +1,167 @@
#! Define and use a function to set the deployment image.
---
#@ repository = "dockercoins"
#@ tag = "v0.1"
#@ def image(component):
#@ return "{}/{}:{}".format(repository, component, tag)
#@ end
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ image("hasher")
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ image("rng")
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ image("webui")
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ image("worker")
name: worker

k8s/ytt/3-labels/app.yaml Normal file

@@ -0,0 +1,164 @@
#! Define and use functions, demonstrating how to generate labels.
---
#@ repository = "dockercoins"
#@ tag = "v0.1"
#@ def image(component):
#@ return "{}/{}:{}".format(repository, component, tag)
#@ end
#@ def labels(component):
#@ return {
#@ "app": component,
#@ "container.training/generated-by": "ytt",
#@ }
#@ end
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ image("hasher")
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("redis")
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("redis")
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("rng")
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ image("rng")
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("rng")
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("webui")
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ image("webui")
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("webui")
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("worker")
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ image("worker")
name: worker

k8s/ytt/4-data/app.yaml Normal file

@@ -0,0 +1,162 @@
---
#@ load("@ytt:data", "data")
#@ def image(component):
#@ return "{}/{}:{}".format(data.values.repository, component, data.values.tag)
#@ end
#@ def labels(component):
#@ return {
#@ "app": component,
#@ "container.training/generated-by": "ytt",
#@ }
#@ end
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ image("hasher")
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("redis")
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("redis")
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("rng")
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ image("rng")
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("rng")
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("webui")
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ image("webui")
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("webui")
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("worker")
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ image("worker")
name: worker


@@ -0,0 +1,4 @@
#@data/values-schema
---
repository: dockercoins
tag: v0.1
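With this schema in place, individual values can be overridden at render time (a sketch; the schema file's actual name isn't shown in this diff):

```bash
ytt -f k8s/ytt/4-data/app.yaml -f k8s/ytt/4-data/values.yaml --data-value tag=v0.2
```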

k8s/ytt/5-factor/app.yaml Normal file

@@ -0,0 +1,54 @@
---
#@ load("@ytt:data", "data")
---
#@ def Deployment(component, repository=data.values.repository, tag=data.values.tag):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: #@ component
    container.training/generated-by: ytt
  name: #@ component
spec:
  replicas: 1
  selector:
    matchLabels:
      app: #@ component
  template:
    metadata:
      labels:
        app: #@ component
    spec:
      containers:
      - image: #@ repository + "/" + component + ":" + tag
        name: #@ component
#@ end
---
#@ def Service(component, port=80, type="ClusterIP"):
apiVersion: v1
kind: Service
metadata:
  labels:
    app: #@ component
    container.training/generated-by: ytt
  name: #@ component
spec:
  ports:
  - port: #@ port
    protocol: TCP
    targetPort: #@ port
  selector:
    app: #@ component
  type: #@ type
#@ end
---
--- #@ Deployment("hasher")
--- #@ Service("hasher")
--- #@ Deployment("redis", repository="library", tag="latest")
--- #@ Service("redis", port=6379)
--- #@ Deployment("rng")
--- #@ Service("rng")
--- #@ Deployment("webui")
--- #@ Service("webui", type="NodePort")
--- #@ Deployment("worker")
---


@@ -0,0 +1,4 @@
#@data/values-schema
---
repository: dockercoins
tag: v0.1


@@ -0,0 +1,56 @@
---
#@ load("@ytt:data", "data")
#@ load("@ytt:template", "template")
---
#@ def component(name, repository=data.values.repository, tag=data.values.tag, port=None, type="ClusterIP"):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: #@ name
    container.training/generated-by: ytt
  name: #@ name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: #@ name
  template:
    metadata:
      labels:
        app: #@ name
    spec:
      containers:
      - image: #@ repository + "/" + name + ":" + tag
        name: #@ name
        #@ if/end port==80:
        readinessProbe:
          httpGet:
            port: #@ port
#@ if port != None:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: #@ name
    container.training/generated-by: ytt
  name: #@ name
spec:
  ports:
  - port: #@ port
    protocol: TCP
    targetPort: #@ port
  selector:
    app: #@ name
  type: #@ type
#@ end
#@ end
---
--- #@ template.replace(component("hasher", port=80))
--- #@ template.replace(component("redis", repository="library", tag="latest", port=6379))
--- #@ template.replace(component("rng", port=80))
--- #@ template.replace(component("webui", port=80, type="NodePort"))
--- #@ template.replace(component("worker"))
---


@@ -0,0 +1,4 @@
#@data/values-schema
---
repository: dockercoins
tag: v0.1


@@ -0,0 +1,65 @@
---
#@ load("@ytt:data", "data")
#@ load("@ytt:template", "template")
---
#@ def component(name, repository, tag, port=None, type="ClusterIP"):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: #@ name
    container.training/generated-by: ytt
  name: #@ name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: #@ name
  template:
    metadata:
      labels:
        app: #@ name
    spec:
      containers:
      - image: #@ repository + "/" + name + ":" + tag
        name: #@ name
        #@ if/end port==80:
        readinessProbe:
          httpGet:
            port: #@ port
#@ if port != None:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: #@ name
    container.training/generated-by: ytt
  name: #@ name
spec:
  ports:
  - port: #@ port
    protocol: TCP
    targetPort: #@ port
  selector:
    app: #@ name
  type: #@ type
#@ end
#@ end
---
#@ defaults = {}
#@ for name in data.values:
#@   if name.startswith("_"):
#@     defaults.update(data.values[name])
#@   end
#@ end
---
#@ for name in data.values:
#@   if not name.startswith("_"):
#@     values = dict(name=name)
#@     values.update(defaults)
#@     values.update(data.values[name])
--- #@ template.replace(component(**values))
#@   end
#@ end


@@ -0,0 +1,19 @@
#@data/values-schema
#! Entries starting with an underscore will hold default values.
#! Entries NOT starting with an underscore will generate a Deployment
#! (and a Service if a port number is set).
---
_default_:
  repository: dockercoins
  tag: v0.1
hasher:
  port: 80
redis:
  repository: library
  tag: latest
rng:
  port: 80
webui:
  port: 80
  type: NodePort
worker: {}


@@ -0,0 +1,26 @@
#@ load("@ytt:data", "data")
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
replicas: 1
selector:
matchLabels:
app: #@ data.values.name
template:
metadata:
labels:
app: #@ data.values.name
spec:
containers:
- image: #@ data.values.repository + "/" + data.values.name + ":" + data.values.tag
name: #@ data.values.name
#@ if/end data.values.port==80:
readinessProbe:
httpGet:
port: #@ data.values.port


@@ -0,0 +1,7 @@
#@data/values-schema
---
name: component
repository: dockercoins
tag: v0.1
port: 0
type: ClusterIP


@@ -0,0 +1,19 @@
#@ load("@ytt:data", "data")
#@ if data.values.port > 0:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
ports:
- port: #@ data.values.port
protocol: TCP
targetPort: #@ data.values.port
selector:
app: #@ data.values.name
type: #@ data.values.type
#@ end


@@ -0,0 +1,20 @@
#@ load("@ytt:data", "data")
#@ load("@ytt:library", "library")
#@ load("@ytt:template", "template")
#@
#@ component = library.get("component")
#@
#@ defaults = {}
#@ for name in data.values:
#@ if name.startswith("_"):
#@ defaults.update(data.values[name])
#@ end
#@ end
#@ for name in data.values:
#@ if not name.startswith("_"):
#@ values = dict(name=name)
#@ values.update(defaults)
#@ values.update(data.values[name])
--- #@ template.replace(component.with_data_values(values).eval())
#@ end
#@ end


@@ -0,0 +1,19 @@
#@data/values-schema
#! Entries starting with an underscore will hold default values.
#! Entries NOT starting with an underscore will generate a Deployment
#! (and a Service if a port number is set).
---
_default_:
  repository: dockercoins
  tag: v0.1
hasher:
  port: 80
redis:
  repository: library
  tag: latest
rng:
  port: 80
webui:
  port: 80
  type: NodePort
worker: {}


@@ -0,0 +1,26 @@
#@ load("@ytt:data", "data")
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
replicas: 1
selector:
matchLabels:
app: #@ data.values.name
template:
metadata:
labels:
app: #@ data.values.name
spec:
containers:
- image: #@ data.values.repository + "/" + data.values.name + ":" + data.values.tag
name: #@ data.values.name
#@ if/end data.values.port==80:
readinessProbe:
httpGet:
port: #@ data.values.port


@@ -0,0 +1,7 @@
#@data/values-schema
---
name: component
repository: dockercoins
tag: v0.1
port: 0
type: ClusterIP


@@ -0,0 +1,19 @@
#@ load("@ytt:data", "data")
#@ if data.values.port > 0:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
ports:
- port: #@ data.values.port
protocol: TCP
targetPort: #@ data.values.port
selector:
app: #@ data.values.name
type: #@ data.values.type
#@ end


@@ -0,0 +1,20 @@
#@ load("@ytt:data", "data")
#@ load("@ytt:library", "library")
#@ load("@ytt:template", "template")
#@
#@ component = library.get("component")
#@
#@ defaults = {}
#@ for name in data.values:
#@ if name.startswith("_"):
#@ defaults.update(data.values[name])
#@ end
#@ end
#@ for name in data.values:
#@ if not name.startswith("_"):
#@ values = dict(name=name)
#@ values.update(defaults)
#@ values.update(data.values[name])
--- #@ template.replace(component.with_data_values(values).eval())
#@ end
#@ end


@@ -0,0 +1,20 @@
#@ load("@ytt:overlay", "overlay")
#@ def match():
kind: Deployment
metadata:
name: rng
#@ end
#@overlay/match by=overlay.subset(match())
---
spec:
template:
spec:
containers:
#@overlay/match by="name"
- name: rng
readinessProbe:
httpGet:
#@overlay/match missing_ok=True
path: /1


@@ -0,0 +1,19 @@
#@data/values-schema
#! Entries starting with an underscore will hold default values.
#! Entries NOT starting with an underscore will generate a Deployment
#! (and a Service if a port number is set).
---
_default_:
  repository: dockercoins
  tag: v0.1
hasher:
  port: 80
redis:
  repository: library
  tag: latest
rng:
  port: 80
webui:
  port: 80
  type: NodePort
worker: {}


@@ -0,0 +1,25 @@
#@ load("@ytt:overlay", "overlay")
#@ def match():
kind: Deployment
metadata:
name: worker
#@ end
#! This removes the number of replicas:
#@overlay/match by=overlay.subset(match())
---
spec:
#@overlay/remove
replicas:
#! This overrides it:
#@overlay/match by=overlay.subset(match())
---
spec:
#@overlay/match missing_ok=True
replicas: 10
#! Note that it's not necessary to remove the number of replicas.
#! We're just presenting both options here (for instance, you might
#! want to remove the number of replicas if you're using an HPA).


@@ -1,6 +1,6 @@
resource "random_string" "_" {
length = 4
number = false
numeric = false
special = false
upper = false
}


@@ -3,7 +3,7 @@ resource "linode_lke_cluster" "_" {
   tags = var.common_tags
   # "region" is mandatory, so let's provide a default value if none was given.
   region = var.location != null ? var.location : "eu-central"
-  k8s_version = var.k8s_version
+  k8s_version = local.k8s_version
   pool {
     type = local.node_type


@@ -51,7 +51,22 @@ variable "location" {
 # To view supported versions, run:
 # linode-cli lke versions-list --json | jq -r .[].id
+data "external" "k8s_version" {
+  program = [
+    "sh",
+    "-c",
+    <<-EOT
+      linode-cli lke versions-list --json |
+      jq -r '{"latest": [.[].id] | sort [-1]}'
+    EOT
+  ]
+}
 variable "k8s_version" {
   type    = string
-  default = "1.22"
+  default = ""
 }
+locals {
+  k8s_version = var.k8s_version != "" ? var.k8s_version : data.external.k8s_version.result.latest
+}


@@ -56,5 +56,5 @@ variable "location" {
 # scw k8s version list -o json | jq -r .[].name
 variable "k8s_version" {
   type    = string
-  default = "1.22.2"
+  default = "1.23.6"
 }


@@ -193,7 +193,6 @@ resource "tls_private_key" "cluster_admin_${index}" {
 }
 resource "tls_cert_request" "cluster_admin_${index}" {
-  key_algorithm   = tls_private_key.cluster_admin_${index}.algorithm
   private_key_pem = tls_private_key.cluster_admin_${index}.private_key_pem
   subject {
     common_name = "cluster-admin"


@@ -239,6 +239,14 @@ _cmd_docker() {
     sudo ln -sfn /mnt/docker /var/lib/docker
   fi
+  # containerd 1.6 breaks Weave.
+  # See https://github.com/containerd/containerd/issues/6921
+  sudo tee /etc/apt/preferences.d/containerd <<EOF
+Package: containerd.io
+Pin: version 1.5.*
+Pin-Priority: 1000
+EOF
   # This will install the latest Docker.
   sudo apt-get -qy install apt-transport-https ca-certificates curl software-properties-common
   curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -


@@ -16,7 +16,7 @@ user_password: training
 # For a list of old versions, check:
 # https://kubernetes.io/releases/patch-releases/#non-active-branch-history
-kubernetes_version: 1.18.20
+kubernetes_version: 1.20.15
 image:


@@ -2,6 +2,7 @@
 #/ /kube-halfday.yml.html 200!
 #/ /kube-fullday.yml.html 200!
 #/ /kube-twodays.yml.html 200!
+/ /kube.yml.html 200!
 # And this allows to do "git clone https://container.training".
 /info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack

(File diff suppressed because it is too large.)


@@ -3,6 +3,7 @@
"version": "0.0.1",
"dependencies": {
"express": "^4.16.2",
"socket.io": "^2.4.0"
"socket.io": "^4.5.1",
"socket.io-client": "^4.5.1"
}
}

slides/b Executable file

@@ -0,0 +1 @@
./markmaker.py kube.yml >out.html


@@ -100,7 +100,11 @@ _We will give more details about namespaces and cgroups later._
 * But it is easier to use `docker exec`.
 ```bash
 $ docker exec -ti ticktock sh
+$ docker ps -lq # Get Last Container ID
+17e4e95e2702
+$ docker exec 17
+$ docker exec -ti $(docker ps -lq) sh # bash-fu version
 ```
 * This creates a new process (running `sh`) _inside_ the container.


@@ -0,0 +1,20 @@
class: title
# High Level Discussion
![image](images/title-understanding-docker-images.png)
---
## White Board Topics
* What is the real problem that containers solve?
* What are the inputs to a Unix Process?
* What is the init Process?
* Userland vs Kernel
* The Root File System
* What is an Overlay File System?
* Wrapping it all up to represent a container image
* Deploying Container images


@@ -0,0 +1,318 @@
class: title
# A Macroscopic View
---
## Macroscopic Items
* The business case for containers
* The problem containers are solving
* What applications need
* What does the OS provide?
---
## What do CIOs worry about?
Who are the CIO's customers?
* Business Units: Need Computers to Run Applications
* Peak Capacity
* CFO: Demanding Budget Justifications
* Spend Less
---
## History of Solutions
For Each Business Application Buy a Machine
* Buy a machine for each application
* Big enough for Peak Load (CPU, Memory, Disk)
The Age of VMs
* Buy bigger machines and chop them up into logical machines
* Distribute your applications as VMs across these machines
* Observe what the application load actually is, and when
* Rebalance as needed, possibly moving VMs between machines
But Maintaining Machines (Bare Metal or VM) is hard (Patches, Packages, Drivers, etc)
---
## What Developers and Ops worry about
* Getting Software deployed
* Mysterious reasons why deployed application doesn't work
* Developer to Ops:
* "Hey it works on my development machine..."
* "I don't know why it isn't working for ***you***"
* "Everything ***looks*** the same"
* "I have no idea what could be different"
---
## The History of Software Deployment
Software Deployment is just a reproducible way to install files:
* Cards
* Tapes
* Floppy Disks
* Zip/Tar Files
* Installation "Files" (rpm/deb/msi)
* VM Images
---
## What is the Problem Containers are Solving?
It depends on who you are:
* For the CIO: Better resource utilization
* For Ops: Software Distribution
* For the Developer & Ops: Reproducible Environment
<BR><BR>
Ummm, but what exactly are containers....
* Wait a few more slides...
---
## Macroscopic view: Applications and the OS
Applications:
* What are the inputs/outputs to a program?
The OS:
* What does the OS provide?
---
## What are the inputs/outputs to a program?
Explicitly:
* Command Line Arguments
* Environment Variables
* Standard In
* Standard Out/Err
Implicitly (via the File System):
* Configuration Files
* Other Installed Applications
* Any other files
Also Implicitly
* Memory
* Network
---
## What does the OS provide?
* OS Kernel
* Kernel loaded at boot time
* Sets up disk drives, network cards, other hardware, etc
* Manages all hardware, processes, memory, etc
* Kernel Space
* Low level innards of Kernel (fluid internal API)
* No direct access by applications of most Kernel functionality
* User Space (userland) Processes
* Code running outside the Kernel
* Very stable shim library access from User Space to Kernel Space (Think "fopen")
* The "init" Process
* User Space Process run after Kernel has booted
* Always PID 1
---
## OS Processes
* Created when an application is launched
* Each has a unique Process ID (PID)
* Provides it its own logical 'view' of all implicit inputs/outputs when launching the app
* File System ( root directory, / )
* Memory
* Network Adaptors
* Other running processes
---
## What do we mean by "The OS"
Different Linux's
* Ubuntu / Debian; Centos / RHEL; Raspberry Pi; etc
What do they have in common?
* They all have a kernel that provides access to Userland (ie fopen)
* They typically have all the commands (bash, sh, ls, grep, ...)
What may be different?
* May use different versions of the Kernel (4.18, 5.4, ...)
* Internally different, but providing same Userland API
* Many other bundled commands, packages and package management tools
* Namely what makes it 'Debian' vs 'Centos'
---
## What might a 'Minimal' Linux be?
You could actually just have:
* A Linux Kernel
* An application (for simplicity a statically linked C program)
* The kernel configured to run that application as its 'init' process
Would you ever do this?
* Why not?
* It certainly would be very secure
---
## So Finally... What are Containers?
A container is just a Linux process that 'thinks' it is its own machine
* With its own 'view' of things like:
* File System ( root directory, / ), Memory, Network Adaptors, Other running processes
* Leverages our understanding that a (logical) Linux Machine is
* A kernel
* A bunch of files ( Maybe a few Environment Variables )
Since it is a process running on a host machine
* It uses the kernel of the host machine
* And of course you need some tools to create the running container process
---
## Container Runtimes and Container Images
The Linux kernel actually has no concept of a container.
* There have been many 'container' technologies
* See [A Brief History of containers: From the 1970's till now](https://blog.aquasec.com/a-brief-history-of-containers-from-1970s-chroot-to-docker-2016)
* Over the years more capabilities have been added to the kernel to make it easier
<BR>
A 'Container technology' is:
* A Container Image Format: the unit of software deployment
  * A bundle of all the files and miscellaneous configuration
* A Container Runtime Engine
  * Software that takes a Container Image and creates a running container
---
## The Container Runtime War is now Over
The Cloud Native Computing Foundation (CNCF) has standardized containers
* A standard container image format
* A standard for building and configuring container runtimes
* A standard REST API for loading/downloading container image to a registries
The primary Container Runtimes are:
* containerd: using the 'docker' Command Line Interface (or Kubernetes)
* CRI-O: using the 'podman' Command Line Interface (or Kubernetes/OpenShift)
* Others exist, for example Singularity, which has a history in HPC
---
## Linux Namespaces Makes Containers Possible
- Provide processes with their own isolated view of the system.
- Namespaces limit what you can see (and therefore, what you can use).
- These namespaces are available in modern kernels:
- pid: processes
- net: network
- mnt: root file system (ie chroot)
- uts: hostname
- ipc
- user: UID/GID mapping
- time: time
- cgroup: Resource Monitoring and Limiting
- Each process belongs to one namespace of each type.
---
## Namespaces are always active
- Namespaces exist even when you don't use containers.
- This is a bit similar to the UID field in UNIX processes:
- all processes have the UID field, even if no user exists on the system
- the field always has a value / the value is always defined
<br/>
(i.e. any process running on the system has some UID)
- the value of the UID field is used when checking permissions
<br/>
(the UID field determines which resources the process can access)
- You can replace "UID field" with "namespace" above and it still works!
- In other words: even when you don't use containers,
<br/>there is one namespace of each type, containing all the processes on the system.


@@ -0,0 +1,224 @@
class: title
# Our training environment
![SSH terminal](images/title-our-training-environment.jpg)
---
class: in-person
## Connecting to your Virtual Machine
You need an SSH client.
* On OS X, Linux, and other UNIX systems, just use `ssh`:
```bash
$ ssh <login>@<ip-address>
```
* On Windows, if you don't have an SSH client, you can download:
* Putty (www.putty.org)
* Git BASH (https://git-for-windows.github.io/)
* MobaXterm (https://mobaxterm.mobatek.net/)
---
class: in-person
## Connecting to our lab environment
.lab[
- Log into your VM with your SSH client:
```bash
ssh `user`@`A.B.C.D`
```
(Replace `user` and `A.B.C.D` with the user and IP address provided to you)
]
You should see a prompt looking like this:
```
[A.B.C.D] (...) user@node1 ~
$
```
If anything goes wrong — ask for help!
---
## Our Docker VM
About the Lab VM
- The VM is created just before the training.
- It will stay up during the whole training.
- It will be destroyed shortly after the training.
- It comes pre-loaded with Docker and some other useful tools.
---
## Why don't we run Docker locally?
- I can log into your VMs to help you with labs
- Installing docker is out of the scope of this class (lots of online docs)
- It's better to spend time learning containers than fiddling with the installer!
---
class: in-person
## `tailhist`
- The shell history of the instructor is available online in real time
- Note the IP address of the instructor's virtual machine (A.B.C.D)
- Open http://A.B.C.D:1088 in your browser and you should see the history
- The history is updated in real time (using a WebSocket connection)
- It should be green when the WebSocket is connected
(if it turns red, reloading the page should fix it)
- If you want to play with it on your lab machine, tailhist is installed
- sudo apt install firewalld
- sudo firewall-cmd --add-port=1088/tcp
---
## Checking your Virtual Machine
Once logged in, make sure that you can run a basic Docker command:
.small[
```bash
$ docker version
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:10:06 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.03.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:08:35 2018
OS/Arch: linux/amd64
Experimental: false
```
]
If this doesn't work, raise your hand so that an instructor can assist you!
???
:EN:Container concepts
:FR:Premier contact avec les conteneurs
:EN:- What's a container engine?
:FR:- Qu'est-ce qu'un *container engine* ?
---
## Doing or re-doing the workshop on your own?
- Use something like
[Play-With-Docker](http://play-with-docker.com/) or
[Play-With-Kubernetes](https://training.play-with-kubernetes.com/)
Zero setup effort; but environments are short-lived and
might have limited resources
- Create your own cluster (local or cloud VMs)
Small setup effort; small cost; flexible environments
- Create a bunch of clusters for you and your friends
([instructions](https://@@GITREPO@@/tree/master/prepare-vms))
Bigger setup effort; ideal for group training
---
class: self-paced
## Get your own Docker nodes
- If you already have some Docker nodes: great!
- If not: let's get some thanks to Play-With-Docker
.lab[
- Go to http://www.play-with-docker.com/
- Log in
- Create your first node
<!-- ```open http://www.play-with-docker.com/``` -->
]
You will need a Docker ID to use Play-With-Docker.
(Creating a Docker ID is free.)
---
## Terminals
Once in a while, the instructions will say:
<br/>"Open a new terminal."
There are multiple ways to do this:
- create a new window or tab on your machine, and SSH into the VM;
- use screen or tmux on the VM and open a new window from there.
You are welcome to use the method that you feel the most comfortable with.
---
## Tmux cheat sheet
[Tmux](https://en.wikipedia.org/wiki/Tmux) is a terminal multiplexer like `screen`.
*You don't have to use it or even know about it to follow along.
<br/>
But some of us like to use it to switch between terminals.
<br/>
It has been preinstalled on your workshop nodes.*
- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
- Ctrl-b " → split window top/bottom
- Ctrl-b % → split window left/right
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b arrows → navigate to other windows
- Ctrl-b d → detach session
- tmux attach → re-attach to session


@@ -0,0 +1,27 @@
```bash
$ docker run -it debian
root@ef22f9437171:/# apt-get update
root@ef22f9437171:/# apt-get install skopeo
root@ef22f9437171:/# apt-get install wget curl jq
root@ef22f9437171:/# skopeo login docker.io -u containertraining -p testaccount
$ docker commit $(docker ps -lq) skop
```
```bash
root@0ab665194c4f:~# skopeo copy docker://docker.io/containertraining/test-image-0 dir:/root/test-image-0
root@0ab665194c4f:~# cd /root/test-image-0
root@0ab665194c4f:~# jq <manifest.json .layers[].digest
```
Stuff in Exploring-images
image-test-0/1/2 + jpg
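Each layer digest printed by `jq` corresponds to a blob file in the copied directory, which can usually be listed as a gzipped tarball (a sketch, with a placeholder digest):

```bash
cd /root/test-image-0
tar tzf <layer-digest> | head   # replace with one of the digests printed above
```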


@@ -0,0 +1,20 @@
FROM busybox
ADD verifyImageFiles.sh /
WORKDIR /play
RUN echo "== LAYER 0 ==" && \
echo "A is for Aardvark" >A && \
echo "B is for Beetle" >B && \
mkdir C/ && \
echo "A is for Cowboy Allan" >C/CA && \
mkdir -p C/CB && \
echo "A is for Cowboy Buffalo Alex" >C/CB/CBA && \
echo "B is for Cowboy Buffalo Bill" >C/CB/CBB && \
echo "Z is for Cowboy Zeke" >> C/CZ && \
mkdir D/ && \
echo "A is for Detective Alisha" >D/DA && \
echo "B is for Detective Betty" >D/DB && \
echo "E is for Elephant" >E && \
find . >../state.layer-0


@@ -0,0 +1,17 @@
FROM test-image-0
WORKDIR /play
RUN echo "== LAYER 1 == Change File B, Create File C/CC, Add Dir C/CD, Remove File E, Create Dir F, Add File G, Create Empty Dir H" && \
echo "B is for Butterfly" >B && \
echo "C is for Cowboy Chuck">C/CC && \
mkdir -p C/CD && \
echo "A is for Cowboy Dandy Austin" >C/CD/CDA && \
rm E && \
mkdir F && \
echo "A is for Ferret Albert" >F/FA && \
echo "G is for Gorilla" >G && \
mkdir H && \
find . >../state.layer-1


@@ -0,0 +1,18 @@
FROM test-image-1
WORKDIR /play
RUN echo "== LAYER 2 == Remove File C/CA, Remove Dir G, Remove Dir D / Replace with new Dir D, Remove Dir C/CB, Remove Dir C/CB, Remove Dir F, Add File G, Remove Dir H / Create File H" && \
rm C/CA && \
rm -rf C/CB && \
echo "Z is for Cowboy Zoe" >> CZ && \
rm -rf D && \
mkdir -p D && \
echo "A is for Duplicitous Albatros" >D/DA && \
rm -rf F && \
rm G && \
echo "G is for Geccos" >G && \
rmdir H && \
echo "H is for Human" >H && \
find . >../state.layer-2


@@ -0,0 +1,87 @@
clear
baseDir=$(pwd)
rm -rf /tmp/exploringImages
mkdir -p /tmp/exploringImages
cd /tmp/exploringImages
echo "== LAYER 0 =="
echo "A is for Aardvark" >A
echo "B is for Beetle" >B
mkdir C/
echo "A is for Cowboy Allan" >C/CA
mkdir -p C/CB
echo "A is for Cowboy Buffalo Alex" >C/CB/CBA
echo "B is for Cowboy Buffalo Bill" >C/CB/CBB
echo "Z is for Cowboy Zeke" >C/CZ
mkdir D/
echo "A is for Detective Alisha" >D/DA
echo "B is for Detective Betty" >D/DB
echo "E is for Elephant" >E
find . >../state.layer-0
tree | grep -v directories | tee ../tree.layer-0
$baseDir/verifyImageFiles.sh 0 $(pwd)
echo "== LAYER 1 == Change File B, Create File C/CC, Add Dir C/CD, Remove File E, Create Dir F, Add File G, Create Empty Dir H"
echo "B is for Butterfly" >B
echo "C is for Cowboy Chuck">C/CC
mkdir -p C/CD
echo "A is for Cowboy Dandy Austin" >C/CD/CDA
rm E
mkdir F
echo "A is for Ferret Albert" >F/FA
echo "G is for Gorilla" >G
mkdir H
find . >../state.layer-1
tree | grep -v directories | tee ../tree.layer-1
$baseDir/verifyImageFiles.sh 1 $(pwd)
echo "== LAYER 2 == Remove File C/CA, Remove Dir G, Remove Dir D Replace with new Dir D, Remove Dir C/CB, Remove Dir C/CB, Add File H/HA, Add File, Create Dir I"
rm C/CA
rm -rf C/CB
echo "Z is for Cowboy Zoe" >C/CZ
rm -rf D
mkdir -p D
echo "A is for Duplicitous Albatros" >D/DA
rm -rf F
rm -rf G
echo "G is for Geccos" >G
rmdir H
echo "H is for Human" >H
find . >../state.layer-2
tree | grep -v directories | tee ../tree.layer-2
$baseDir/verifyImageFiles.sh 2 $(pwd)


@@ -0,0 +1,88 @@
fileContentsCompare() {
layer=$1
text=$2
file=$(pwd)/$3
if [ -f "$file" ]; then
fileContents=$(cat $file)
if [ "$fileContents" != "$text" ]; then
echo In Layer $layer Unexpected contents in file: $file
echo -- Contents: $fileContents
echo -- Expected: $text
fi
else
echo Missing File $file in Layer $layer
fi
}
checkLayer() {
layer=$1
find . >/tmp/state
if [[ $(diff /tmp/state $targetDir/../state.layer-$layer) ]]; then
echo Directory Structure mismatch in layer: $layer
diff /tmp/state $targetDir/../state.layer-$layer
fi
case $layer in
0)
fileContentsCompare $layer "A is for Aardvark" A
fileContentsCompare $layer "B is for Beetle" B
fileContentsCompare $layer "A is for Cowboy Allan" C/CA
fileContentsCompare $layer "A is for Cowboy Buffalo Alex" C/CB/CBA
fileContentsCompare $layer "B is for Cowboy Buffalo Bill" C/CB/CBB
fileContentsCompare $layer "Z is for Cowboy Zeke" C/CZ
fileContentsCompare $layer "A is for Detective Alisha" D/DA
fileContentsCompare $layer "B is for Detective Betty" D/DB
fileContentsCompare $layer "E is for Elephant" E
;;
# echo "== LAYER 1 == Change File B, Create File C/CC, Add Dir C/CD, Remove File E, Create Dir F, Add File G, Create Empty Dir H"
1)
fileContentsCompare $layer "A is for Aardvark" A
fileContentsCompare $layer "B is for Butterfly" B ## CHANGED FILE B
fileContentsCompare $layer "A is for Cowboy Allan" C/CA
fileContentsCompare $layer "A is for Cowboy Buffalo Alex" C/CB/CBA
fileContentsCompare $layer "B is for Cowboy Buffalo Bill" C/CB/CBB
fileContentsCompare $layer "C is for Cowboy Chuck" C/CC ## ADDED FILE C/CC
fileContentsCompare $layer "A is for Cowboy Dandy Austin" C/CD/CDA ## ADDED DIR C/CD, ADDED FILE C/CD/CDA
fileContentsCompare $layer "Z is for Cowboy Zeke" C/CZ
fileContentsCompare $layer "A is for Detective Alisha" D/DA
fileContentsCompare $layer "B is for Detective Betty" D/DB
## REMOVED FILE E
fileContentsCompare $layer "A is for Ferret Albert" F/FA ## ADDED DIR F, ADDED FILE F/A
fileContentsCompare $layer "G is for Gorilla" G ## ADDED G
## CREATED EMPTY DIR H
;;
# echo "== LAYER 2 == Remove File C/CA, Remove Dir C/CB, Remove Dir C/CB, Remove Dir D Replace with new Dir D, Delete and Recreatee File G, Add File H/HA Create Dir I"
2)
fileContentsCompare $layer "A is for Aardvark" A
fileContentsCompare $layer "B is for Butterfly" B
## REMOVED FILE C/CA
## REMOVED DIR C/CB
fileContentsCompare $layer "C is for Cowboy Chuck" C/CC
fileContentsCompare $layer "A is for Cowboy Dandy Austin" C/CD/CDA
fileContentsCompare $layer "Z is for Cowboy Zoe" C/CZ ## CHANGED FILE C/CZ
## REMOVE DIR D
fileContentsCompare $layer "A is for Duplicitous Albatros" D/DA ## RECREATE DIR D, ADD FILE D/DA
fileContentsCompare $layer "G is for Geccos" G ## DELETED FILE G, ADDED FILE G (Implicit CHANGED)
fileContentsCompare $layer "H is for Human" H ## ADDED FILE H
;;
esac
}
layer=$1
targetDir=$2
echo VERIFYING LAYER $layer
checkLayer $layer

(Binary image file added, not shown; 219 KiB.)


@@ -13,7 +13,7 @@
 - ... Or be comfortable spending some time reading the Docker
   [documentation](https://docs.docker.com/) ...
-- ... And looking for answers in the [Docker forums](forums.docker.com),
+- ... And looking for answers in the [Docker forums](https://forums.docker.com),
   [StackOverflow](http://stackoverflow.com/questions/tagged/docker),
   and other outlets


@@ -0,0 +1,120 @@
# Container Based Software Deployment
---
class: pic
![dummmy](containers/software-deployment/slide-1.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-2.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-3.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-4.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-5.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-6.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-7.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-8.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-9.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-10.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-11.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-12.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-13.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-14.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-15.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-16.jpg)
---
class: pic
![dummmy](containers/software-deployment/slide-17.jpg)

(17 binary image files added, not shown: containers/software-deployment/slide-1.jpg through slide-17.jpg, between 51 KiB and 154 KiB each.)


@@ -0,0 +1,46 @@
# External References && kubectl Aliases
Class Slides: https://2022-09-nr1.container.training/
Kubectl Cheat Sheet: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Kubernetes API Object and kubectl Explorers
- https://github.com/GerrySeidman/Kubernetes-Explorer
Gerry's Kubernetes Storage Conference Talks
- Vault '20: https://www.usenix.org/conference/vault20/presentation/seidman
- Data and Dev '21: https://www.youtube.com/watch?v=k_8rWPwJ_38
Gerry Seidman's Info
- gerry.seidman@ardanlabs.com
- https://www.linkedin.com/in/gerryseidman/
---
## Kubectl Aliases
```bash
alias k='kubectl'
alias kg='kubectl get'
alias kl='kubectl logs'
alias ka='kubectl apply -f'
alias kd='kubectl delete'
alias kdf='kubectl delete -f'
alias kb='kubectl describe'
alias kex='kubectl explain'
alias kx='kubectl expose'
alias kr='kubectl run'
alias ke='kubectl edit'
```
Note: the following is only needed because of a quirk in how the lab VMs were installed:
```bash
echo 'kubectl exec -it $1 -- /bin/sh' >kx
chmod +x kx
sudo mv kx /usr/local/bin/kx
```
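With that wrapper in place, opening a shell in a pod becomes (assuming a pod named `mypod`):

```bash
kx mypod   # runs: kubectl exec -it mypod -- /bin/sh
```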


@@ -168,7 +168,7 @@ class: extra-details
 (`O=system:nodes`, `CN=system:node:name-of-the-node`)
-- The Kubernetse API can act as a CA
+- The Kubernetes API can act as a CA
 (by wrapping an X509 CSR into a CertificateSigningRequest resource)


@@ -81,7 +81,7 @@
## What version are we running anyway?
- When I say, "I'm running Kubernetes 1.18", is that the version of:
- When I say, "I'm running Kubernetes 1.20", is that the version of:
- kubectl
@@ -157,15 +157,15 @@
## Kubernetes uses semantic versioning
- Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.18.20:
- Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.20.15:
- MAJOR = 1
- MINOR = 18
- PATCH = 20
- MINOR = 20
- PATCH = 15
- It's always possible to mix and match different PATCH releases
(e.g. 1.18.20 and 1.18.15 are compatible)
(e.g. 1.20.0 and 1.20.15 are compatible)
- It is recommended to run the latest PATCH release
@@ -181,9 +181,9 @@
- All components support a difference of one¹ MINOR version
- This allows live upgrades (since we can mix e.g. 1.18 and 1.19)
- This allows live upgrades (since we can mix e.g. 1.20 and 1.21)
- It also means that going from 1.18 to 1.20 requires going through 1.19
- It also means that going from 1.20 to 1.22 requires going through 1.21
.footnote[¹Except kubelet, which can be up to two MINOR behind API server,
and kubectl, which can be one MINOR ahead or behind API server.]
@@ -254,7 +254,7 @@ and kubectl, which can be one MINOR ahead or behind API server.]
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
```
- Look for the `image:` line, and update it to e.g. `v1.19.0`
- Look for the `image:` line, and update it to e.g. `v1.24.0`
]
@@ -308,11 +308,11 @@ and kubectl, which can be one MINOR ahead or behind API server.]
]
Note 1: kubeadm thinks that our cluster is running 1.19.0.
Note 1: kubeadm thinks that our cluster is running 1.24.0.
<br/>It is confused by our manual upgrade of the API server!
Note 2: kubeadm itself is still version 1.18.20..
<br/>It doesn't know how to upgrade do 1.19.X.
Note 2: kubeadm itself is still version 1.20.15.
<br/>It doesn't know how to upgrade to 1.21.X.
---
@@ -335,28 +335,28 @@ Note 2: kubeadm itself is still version 1.18.20..
]
Problem: kubeadm doesn't know how to handle
upgrades from version 1.18.
upgrades from version 1.20.
This is because we installed version 1.22 (or even later).
This is because we installed version 1.24 (or even later).
We need to install kubeadm version 1.19.X.
We need to install kubeadm version 1.21.X.
---
## Downgrading kubeadm
- We need to go back to version 1.19.X.
- We need to go back to version 1.21.X.
.lab[
- View available versions for package `kubeadm`:
```bash
apt show kubeadm -a | grep ^Version | grep 1.19
apt show kubeadm -a | grep ^Version | grep 1.21
```
- Downgrade kubeadm:
```
sudo apt install kubeadm=1.19.8-00
sudo apt install kubeadm=1.21.0-00
```
- Check what kubeadm tells us:
@@ -366,7 +366,7 @@ We need to install kubeadm version 1.19.X.
]
kubeadm should now agree to upgrade to 1.19.8.
kubeadm should now agree to upgrade to 1.21.X.
---
@@ -464,9 +464,9 @@ kubeadm should now agree to upgrade to 1.19.8.
```bash
for N in 1 2 3; do
ssh oldversion$N "
sudo apt install kubeadm=1.19.8-00 &&
sudo apt install kubeadm=1.21.14-00 &&
sudo kubeadm upgrade node &&
sudo apt install kubelet=1.19.8-00"
sudo apt install kubelet=1.21.14-00"
done
```
]
@@ -475,7 +475,7 @@ kubeadm should now agree to upgrade to 1.19.8.
## Checking what we've done
- All our nodes should now be updated to version 1.19.8
- All our nodes should now be updated to version 1.21.14
.lab[
@@ -492,7 +492,7 @@ class: extra-details
## Skipping versions
- This example worked because we went from 1.18 to 1.19
- This example worked because we went from 1.20 to 1.21
- If you are upgrading from e.g. 1.16, you will have to go through 1.17 first

View File

@@ -0,0 +1,370 @@
# Kubernetes Architecture
- The Kubernetes Architecture is minimal
- Kubernetes runs in Kubernetes (for the most part)
- Orchestration is done by a collection of Software Operators
- You can even write your own operators
---
class: pic
![haha only kidding](images/k8s-arch1.png)
---
## Kubernetes architecture
- Ha ha ha ha ha
- OK, I was trying to scare you, it's much simpler than that ❤️
---
class: pic
![that one is more like the real thing](images/k8s-arch2.png)
---
## Credits
- The first schema is a Kubernetes cluster with storage backed by multi-path iSCSI
(Courtesy of [Yongbok Kim](https://www.yongbok.net/blog/))
- The second one is a simplified representation of a Kubernetes cluster
(Courtesy of [Imesh Gunaratne](https://medium.com/containermind/a-reference-architecture-for-deploying-wso2-middleware-on-kubernetes-d4dee7601e8e))
---
## Kubernetes architecture: the nodes
- The nodes executing our containers run a collection of services:
- a container engine (typically Docker)
- kubelet (the "node agent")
- kube-proxy (a necessary but not sufficient network component)
- Nodes were formerly called "minions"
(You might see that word in older articles or documentation)
---
## Kubernetes architecture: the control plane
- The Kubernetes logic (its "brains") is a collection of services:
- the API server (our point of entry to everything!)
- core services like the scheduler and controller manager
- `etcd` (a highly available key/value store; the "database" of Kubernetes)
- Together, these services form the control plane of our cluster
- The control plane is also called the "master"
---
class: pic
![One of the best Kubernetes architecture diagrams available](images/k8s-arch4-thanks-luxas.png)
---
class: extra-details
## Running the control plane on special nodes
- It is common to reserve a dedicated node for the control plane
(Except for single-node development clusters, like when using minikube)
- This node is then called a "master"
(Yes, this is ambiguous: is the "master" a node, or the whole control plane?)
- Normal applications are restricted from running on this node
(By using a mechanism called ["taints"](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/))
- When high availability is required, each service of the control plane must be resilient
- The control plane is then replicated on multiple nodes
(This is sometimes called a "multi-master" setup)
---
class: extra-details
## Running the control plane outside containers
- The services of the control plane can run in or out of containers
- For instance: since `etcd` is a critical service, some people
deploy it directly on a dedicated cluster (without containers)
(This is illustrated on the first "super complicated" schema)
- In some hosted Kubernetes offerings (e.g. AKS, GKE, EKS), the control plane is invisible
(We only "see" a Kubernetes API endpoint)
- In that case, there is no "master node"
*For this reason, it is more accurate to say "control plane" rather than "master."*
---
class: pic
![](images/control-planes/single-node-dev.svg)
---
class: pic
![](images/control-planes/managed-kubernetes.svg)
---
class: pic
![](images/control-planes/single-control-and-workers.svg)
---
class: pic
![](images/control-planes/stacked-control-plane.svg)
---
class: pic
![](images/control-planes/non-dedicated-stacked-nodes.svg)
---
class: pic
![](images/control-planes/advanced-control-plane.svg)
---
class: pic
![](images/control-planes/advanced-control-plane-split-events.svg)
---
class: extra-details
## How many nodes should a cluster have?
- There is no particular constraint
(no need to have an odd number of nodes for quorum)
- A cluster can have zero nodes
(but then it won't be able to start any pods)
- For testing and development, having a single node is fine
- For production, make sure that you have extra capacity
(so that your workload still fits if you lose a node or a group of nodes)
- Kubernetes is tested with [up to 5000 nodes](https://kubernetes.io/docs/setup/best-practices/cluster-large/)
(however, running a cluster of that size requires a lot of tuning)
---
class: extra-details
## Do we need to run Docker at all?
No!
--
- By default, Kubernetes uses the Docker Engine to run containers
- We can leverage other pluggable runtimes through the *Container Runtime Interface*
- <del>We could also use `rkt` ("Rocket") from CoreOS</del> (deprecated)
---
class: extra-details
## Some runtimes available through CRI
- [containerd](https://github.com/containerd/containerd/blob/master/README.md)
- maintained by Docker, IBM, and community
- used by Docker Engine, microk8s, k3s, GKE; also standalone
- comes with its own CLI, `ctr`
- [CRI-O](https://github.com/cri-o/cri-o/blob/master/README.md):
- maintained by Red Hat, SUSE, and community
- used by OpenShift and Kubic
- designed specifically as a minimal runtime for Kubernetes
- [And more](https://kubernetes.io/docs/setup/production-environment/container-runtimes/)
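As a quick illustration: on a node where the kubelet uses containerd, we can peek at the containers and images directly with `ctr` (a sketch; Kubernetes keeps them in the `k8s.io` containerd namespace):
```bash
sudo ctr --namespace k8s.io containers list
sudo ctr --namespace k8s.io images list
```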
---
class: extra-details
## Do we need to run Docker at all?
Yes!
--
- In this workshop, we run our app on a single node first
- We will need to build images and ship them around
- We can do these things without Docker
<br/>
(and get diagnosed with NIH¹ syndrome)
- Docker is still the most stable container engine today
<br/>
(but other options are maturing very quickly)
.footnote[¹[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)]
---
class: extra-details
## Do we need to run Docker at all?
- On our development environments, CI pipelines ... :
*Yes, almost certainly*
- On our production servers:
*Yes (today)*
*Probably not (in the future)*
.footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)]
---
## Interacting with Kubernetes
- We will interact with our Kubernetes cluster through the Kubernetes API
- The Kubernetes API is (mostly) RESTful
- It allows us to create, read, update, delete *resources*
- A few common resource types are:
- node (a machine — physical or virtual — in our cluster)
- pod (group of containers running together on a node)
- service (stable network endpoint to connect to one or multiple containers)
---
class: pic
![Node, pod, container](images/k8s-arch3-thanks-weave.png)
---
## Scaling
- How would we scale the pod shown on the previous slide?
- **Do** create additional pods
- each pod can be on a different node
- each pod will have its own IP address
- **Do not** add more NGINX containers in the pod
- all the NGINX containers would be on the same node
- they would all have the same IP address
<br/>(resulting in `Address already in use` errors)
---
## Together or separate
- Should we put e.g. a web application server and a cache together?
<br/>
("cache" being something like e.g. Memcached or Redis)
- Putting them **in the same pod** means:
- they have to be scaled together
- they can communicate very efficiently over `localhost`
- Putting them **in different pods** means:
- they can be scaled separately
- they must communicate over remote IP addresses
<br/>(incurring more latency, lower performance)
- Both scenarios can make sense, depending on our goals
---
## Credits
- The first diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha)
- it's one of the best Kubernetes architecture diagrams available!
- The second diagram is courtesy of Weave Works
- a *pod* can have multiple containers working together
- IP addresses are associated with *pods*, not with individual containers
Both diagrams used with permission.
???
:EN:- Kubernetes concepts
:FR:- Kubernetes en théorie

View File

@@ -0,0 +1,101 @@
# Kubernetes concepts
- Kubernetes is a container management system
- It runs and manages containerized applications on a cluster
--
- What does that really mean?
---
## What can we do with Kubernetes?
- Let's imagine that we have a 3-tier e-commerce app:
- web frontend
- API backend
- database (that we will keep out of Kubernetes for now)
- We have built images for our frontend and backend components
(e.g. with Dockerfiles and `docker build`)
- We are running them successfully with a local environment
(e.g. with Docker Compose)
- Let's see how we would deploy our app on Kubernetes!
---
## Basic things we can ask Kubernetes to do
--
- Start 5 containers using image `atseashop/api:v1.3`
--
- Place an internal load balancer in front of these containers
--
- Start 10 containers using image `atseashop/webfront:v1.3`
--
- Place a public load balancer in front of these containers
--
- It's Black Friday (or Christmas), traffic spikes, grow our cluster and add containers
--
- New release! Replace my containers with the new image `atseashop/webfront:v1.4`
--
- Keep processing requests during the upgrade; update my containers one at a time
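As a rough sketch, most of these requests map to one-line `kubectl` commands (flags vary across versions; e.g. `--replicas` on `kubectl create deployment` needs a reasonably recent kubectl):
```bash
kubectl create deployment api --image=atseashop/api:v1.3 --replicas=5
kubectl expose deployment api --port=80
kubectl create deployment webfront --image=atseashop/webfront:v1.3 --replicas=10
kubectl expose deployment webfront --port=80 --type=LoadBalancer
kubectl set image deployment/webfront webfront=atseashop/webfront:v1.4
```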
---
## Other things that Kubernetes can do for us
- Autoscaling
(straightforward on CPU; more complex on other metrics)
- Resource management and scheduling
(reserve CPU/RAM for containers; placement constraints)
- Advanced rollout patterns
(blue/green deployment, canary deployment)
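For example, CPU-based autoscaling is a one-liner (assuming a metrics pipeline such as metrics-server is installed):
```bash
kubectl autoscale deployment webfront --min=10 --max=30 --cpu-percent=80
```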
---
## More things that Kubernetes can do for us
- Batch jobs
(one-off; parallel; also cron-style periodic execution)
- Fine-grained access control
(defining *what* can be done by *whom* on *which* resources)
- Stateful services
(databases, message queues, etc.)
- Automating complex tasks with *operators*
(e.g. database replication, failover, etc.)
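As a small taste of the batch features, one-off and periodic jobs can be created straight from the CLI (a sketch; names and images are arbitrary):
```bash
kubectl create job hello --image=alpine -- echo hello from a one-off job
kubectl create cronjob nightly --image=alpine --schedule="0 2 * * *" -- echo nightly run
```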

View File

@@ -14,22 +14,20 @@
## Creating a CRD
- We will create a CRD to represent the different species of coffee
- We will create a CRD to represent different recipes of pizzas
(arabica, liberica, and robusta)
- We will be able to run `kubectl get pizzas` and it will list the recipes
- We will be able to run `kubectl get coffees` and it will list the species
- Creating/deleting recipes won't do anything else
- Then we can label, edit, etc. the species to attach some information
(e.g. the taste profile of the coffee, or whatever we want)
(because we won't implement a *controller*)
---
## First shot of coffee
## First slice of pizza
```yaml
@@INCLUDE[k8s/coffee-1.yaml]
@@INCLUDE[k8s/pizza-1.yaml]
```
---
@@ -48,9 +46,9 @@
---
## Second shot of coffee
## Second slice of pizza
- The next slide will show file @@LINK[k8s/coffee-2.yaml]
- The next slide will show file @@LINK[k8s/pizza-2.yaml]
- Note the `spec.versions` list
@@ -65,20 +63,20 @@
---
```yaml
@@INCLUDE[k8s/coffee-2.yaml]
@@INCLUDE[k8s/pizza-2.yaml]
```
---
## Creating our Coffee CRD
## Baking some pizza
- Let's create the Custom Resource Definition for our Coffee resource
- Let's create the Custom Resource Definition for our Pizza resource
.lab[
- Load the CRD:
```bash
kubectl apply -f ~/container.training/k8s/coffee-2.yaml
kubectl apply -f ~/container.training/k8s/pizza-2.yaml
```
- Confirm that it shows up:
@@ -95,19 +93,57 @@
The YAML below defines a resource using the CRD that we just created:
```yaml
kind: Coffee
kind: Pizza
apiVersion: container.training/v1alpha1
metadata:
name: arabica
name: napolitana
spec:
taste: strong
toppings: [ mozzarella ]
```
.lab[
- Create a few types of coffee beans:
- Try to create a few pizza recipes:
```bash
kubectl apply -f ~/container.training/k8s/coffees.yaml
kubectl apply -f ~/container.training/k8s/pizzas.yaml
```
]
---
## Type validation
- Older versions of Kubernetes will accept our pizza definition as is
- Newer versions, however, will issue warnings about unknown fields
(and if we use `--validate=false`, these fields will simply be dropped)
- We need to improve our OpenAPI schema
(to add e.g. the `spec.toppings` field used by our pizza resources)
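A minimal sketch of the kind of addition needed, following the conventions used so far (the actual schema in @@LINK[k8s/pizza-3.yaml] may differ):
```yaml
spec:
  type: object
  properties:
    toppings:
      type: array
      items:
        type: string
```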
---
## Third slice of pizza
- Let's add a full OpenAPI v3 schema to our Pizza CRD
- We'll require a field `spec.sauce` which will be a string
- And a field `spec.toppings` which will have to be a list of strings
.lab[
- Update our pizza CRD:
```bash
kubectl apply -f ~/container.training/k8s/pizza-3.yaml
```
- Load our pizza recipes:
```bash
kubectl apply -f ~/container.training/k8s/pizzas.yaml
```
]
@@ -120,91 +156,48 @@ spec:
.lab[
- View the coffee beans that we just created:
- View the pizza recipes that we just created:
```bash
kubectl get coffees
kubectl get pizzas
```
]
- We'll see in a bit how to improve that
---
## What can we do with CRDs?
There are many possibilities!
- *Operators* encapsulate complex sets of resources
(e.g.: a PostgreSQL replicated cluster; an etcd cluster...
<br/>
see [awesome operators](https://github.com/operator-framework/awesome-operators) and
[OperatorHub](https://operatorhub.io/) to find more)
- Custom use-cases like [gitkube](https://gitkube.sh/)
- creates a new custom type, `Remote`, exposing a git+ssh server
- deploy by pushing YAML or Helm charts to that remote
- Replacing built-in types with CRDs
(see [this lightning talk by Tim Hockin](https://www.youtube.com/watch?v=ji0FWzFwNhA))
---
## What's next?
- Creating a basic CRD is quick and easy
- But there is a lot more that we can (and probably should) do:
- improve input with *data validation*
- improve output with *custom columns*
- And of course, we probably need a *controller* to go with our CRD!
(otherwise, we're just using the Kubernetes API as a fancy data store)
- Let's see how we can improve that display!
---
## Additional printer columns
- We can specify `additionalPrinterColumns` in the CRD
- This is similar to `-o custom-columns`
(map a column name to a path in the object, e.g. `.spec.taste`)
```yaml
- We can tell Kubernetes which columns to show:
```yaml
additionalPrinterColumns:
- jsonPath: .spec.taste
description: Subjective taste of that kind of coffee bean
name: Taste
- jsonPath: .spec.sauce
name: Sauce
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
```
- jsonPath: .spec.toppings
name: Toppings
type: string
```
- There is an updated CRD in @@LINK[k8s/pizza-4.yaml]
---
## Using additional printer columns
- Let's update our CRD using @@LINK[k8s/coffee-3.yaml]
- Let's update our CRD!
.lab[
- Update the CRD:
```bash
kubectl apply -f ~/container.training/k8s/coffee-3.yaml
kubectl apply -f ~/container.training/k8s/pizza-4.yaml
```
- Look at our Coffee resources:
- Look at our Pizza resources:
```bash
kubectl get coffees
kubectl get pizzas
```
]
@@ -215,50 +208,26 @@ Note: we can update a CRD without having to re-create the corresponding resource
---
## Data validation
## Better data validation
- CRDs are validated with the OpenAPI v3 schema that we specify
- Let's change the data schema so that the sauce can only be `red` or `white`
(with older versions of the API, when the schema was optional,
<br/>
no schema = no validation at all)
- This will be implemented by @@LINK[k8s/pizza-5.yaml]
- Otherwise, we can put anything we want in the `spec`
.lab[
- More advanced validation can also be done with admission webhooks, e.g.:
- Update the Pizza CRD:
```bash
kubectl apply -f ~/container.training/k8s/pizza-5.yaml
```
- consistency between parameters
- advanced integer filters (e.g. odd number of replicas)
- things that can change in one direction but not the other
---
## OpenAPI v3 schema example
This is what we have in @@LINK[k8s/coffee-3.yaml]:
```yaml
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
properties:
taste:
description: Subjective taste of that kind of coffee bean
type: string
required: [ taste ]
```
]
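For reference, restricting the sauce to two values could look like this in the OpenAPI v3 schema (a sketch; the actual content of @@LINK[k8s/pizza-5.yaml] may differ):
```yaml
sauce:
  type: string
  enum: [ red, white ]
```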
---
## Validation *a posteriori*
- Some of the "coffees" that we defined earlier *do not* pass validation
- Some of the pizzas that we defined earlier *do not* pass validation
- How is that possible?
@@ -326,15 +295,23 @@ This is what we have in @@LINK[k8s/coffee-3.yaml]:
---
## What's next?
## Even better data validation
- Generally, when creating a CRD, we also want to run a *controller*
- If we need more complex data validation, we can use a validating webhook
(otherwise nothing will happen when we create resources of that type)
- Use cases:
- The controller will typically *watch* our custom resources
- validating a "version" field for a database engine
(and take action when they are created/updated)
- validating that the number of e.g. coordination nodes is even
- preventing inconsistent or dangerous changes
<br/>
(e.g. major version downgrades)
- checking a key or certificate format or validity
- and much more!
---
@@ -376,6 +353,24 @@ This is what we have in @@LINK[k8s/coffee-3.yaml]:
(unrelated to containers, clusters, etc.)
---
## What's next?
- Creating a basic CRD is relatively straightforward
- But CRDs generally require a *controller* to do anything useful
- The controller will typically *watch* our custom resources
(and take action when they are created/updated)
- Most serious use-cases will also require *validation webhooks*
- When our CRD data format evolves, we'll also need *conversion webhooks*
- Doing all that work manually is tedious; use a framework!
???
:EN:- Custom Resource Definitions (CRDs)

View File

@@ -157,7 +157,7 @@ class: extra-details
(as opposed to, e.g., installing a new release each time we run it)
- Other example: `kubectl -f some-file.yaml`
- Other example: `kubectl apply -f some-file.yaml`
---

View File

@@ -66,7 +66,7 @@
Where do that `repository` and `version` come from?
We're assuming here that we did our reserach,
We're assuming here that we did our research,
or that our resident Helm expert advised us to
use Bitnami's Redis chart.

View File

@@ -316,6 +316,7 @@ class: extra-details
## How to find charts, the new way
- Go to the [Artifact Hub](https://artifacthub.io/packages/search?kind=0) (https://artifacthub.io)
https://artifacthub.io/packages/helm/securecodebox/juice-shop
- Or use `helm search hub ...` from the CLI
@@ -343,7 +344,8 @@ class: extra-details
]
Then go to → https://artifacthub.io/packages/helm/seccurecodebox/juice-shop
Then go to → https://artifacthub.io/packages/helm/securecodebox/juice-shop
---

View File

@@ -167,7 +167,7 @@ Let's try one more round of decoding!
--
... OK, that was *a lot* of binary data. What sould we do with it?
... OK, that was *a lot* of binary data. What should we do with it?
---

View File

@@ -0,0 +1,51 @@
## Basic Commands
- `run`
- `create`
- `get`
- `delete`
- `logs`
- `explain`
- `describe`
- `exec`
## Modifying Objects
- `apply` (upsert)
- `set`
- `edit`
- `patch`
- `label`
- `annotate`

(See https://blog.atomist.com/kubernetes-apply-replace-patch/)
- `diff`
- `replace`
- `wait`
## Network Commands
- `expose`
- `port-forward`
- `proxy`
## Deploy Commands
- `rollout`
- `scale`
- `autoscale`
## Cluster Management Commands
- `certificate`
- `cluster-info`
- `cordon`
- `uncordon`
- `drain`
- `taint`
## Troubleshooting and Debugging Commands
- `top`
- `attach`
- `cp`
- `auth`
- `debug`
## Settings Commands
- `completion`
## Other Commands
- `alpha`
- `api-resources`
- `api-versions`
- `config`
- `plugin`
- `version`
Please share this API Explorer!

slides/k8s/kubectl-first.md Normal file
View File

@@ -0,0 +1,269 @@
# First contact with `kubectl`
- `kubectl` is (almost) the only tool we'll need to talk to Kubernetes
- It is a rich CLI tool around the Kubernetes API
(Everything you can do with `kubectl`, you can do directly with the API)
- On our machines, there is a `~/.kube/config` file with:
- the Kubernetes API address
- the path to our TLS certificates used to authenticate
- You can also use the `--kubeconfig` flag to pass a config file
- Or directly `--server`, `--user`, etc.
- `kubectl` can be pronounced "Cube C T L", "Cube cuttle", "Cube cuddle"...
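To check which cluster, user, and namespace are currently in effect, we can ask `kubectl` to display its effective configuration:
```bash
kubectl config view --minify
```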
---
class: extra-details
## `kubectl` is the new SSH
- We often start managing servers with SSH
(installing packages, troubleshooting ...)
- At scale, it becomes tedious, repetitive, error-prone
- Instead, we use config management, central logging, etc.
- In many cases, we still need SSH:
- as the underlying access method (e.g. Ansible)
- to debug tricky scenarios
- to inspect and poke at things
---
class: extra-details
## The parallel with `kubectl`
- We often start managing Kubernetes clusters with `kubectl`
(deploying applications, troubleshooting ...)
- At scale (with many applications or clusters), it becomes tedious, repetitive, error-prone
- Instead, we use automated pipelines, observability tooling, etc.
- In many cases, we still need `kubectl`:
- to debug tricky scenarios
- to inspect and poke at things
- The Kubernetes API is always the underlying access method
---
## `kubectl get`
- Let's look at our `Node` resources with `kubectl get`!
.lab[
- Look at the composition of our cluster:
```bash
kubectl get node
```
- These commands are equivalent:
```bash
kubectl get no
kubectl get node
kubectl get nodes
```
]
---
## kubectl is an API Server Client
- kubectl verbose (-v)
- --v=6 Display requested resources.
- --v=7 Display HTTP request headers.
- --v=8 Display HTTP request contents.
- --v=9 Display HTTP request contents without truncation of contents.
```bash
kubectl get nodes --v=8
```
---
## Obtaining machine-readable output
- `kubectl get` can output JSON, YAML, or be directly formatted
.lab[
- Give us more info about the nodes:
```bash
kubectl get nodes -o wide
```
- Let's have some YAML:
```bash
kubectl get no -o yaml
```
See that `kind: List` at the end? It's the type of our result!
]
---
## (Ab)using `kubectl` and `jq`
- It's super easy to build custom reports
.lab[
- Show the capacity of all our nodes as a stream of JSON objects:
```bash
kubectl get nodes -o json |
jq ".items[] | {name:.metadata.name} + .status.capacity"
```
]
---
class: extra-details
## Exploring types and definitions
- We can list all available resource types by running `kubectl api-resources`
<br/>
(In Kubernetes 1.10 and prior, this command used to be `kubectl get`)
- We can view the definition for a resource type with:
```bash
kubectl explain type
```
- We can view the definition of a field in a resource, for instance:
```bash
kubectl explain node.spec
```
- Or get the full definition of all fields and sub-fields:
```bash
kubectl explain node --recursive
```
---
class: extra-details
## Introspection vs. documentation
- We can access the same information by reading the [API documentation](https://kubernetes.io/docs/reference/#api-reference)
- The API documentation is usually easier to read, but:
- it won't show custom types (like Custom Resource Definitions)
- we need to make sure that we look at the correct version
- `kubectl api-resources` and `kubectl explain` perform *introspection*
(they communicate with the API server and obtain the exact type definitions)
---
## Type names
- The most common resource names have three forms:
- singular (e.g. `node`, `service`, `deployment`)
- plural (e.g. `nodes`, `services`, `deployments`)
- short (e.g. `no`, `svc`, `deploy`)
- Some resources do not have a short name
- `Endpoints` only have a plural form
(because even a single `Endpoints` resource is actually a list of endpoints)
---
## Viewing details
- We can use `kubectl get -o yaml` to see all available details
- However, YAML output is often simultaneously too much and not enough
- For instance, `kubectl get node node1 -o yaml` is:
- too much information (e.g.: list of images available on this node)
- not enough information (e.g.: doesn't show pods running on this node)
- difficult to read for a human operator
- For a comprehensive overview, we can use `kubectl describe` instead
---
## `kubectl describe`
- `kubectl describe` needs a resource type and (optionally) a resource name
- It is possible to provide a resource name *prefix*
(all matching objects will be displayed)
- `kubectl describe` will retrieve some extra information about the resource
.lab[
- Look at the information available for `node1` with one of the following commands:
```bash
kubectl describe node/node1
kubectl describe node node1
```
]
(We should notice a bunch of control plane pods.)
---
## Listing running containers
- Containers are manipulated through *pods*
- A pod is a group of containers:
- running together (on the same node)
- sharing resources (RAM, CPU; but also network, volumes)
.lab[
- List pods on our cluster:
```bash
kubectl get pods
```
]
--
*Where are the pods that we saw just a moment earlier?!?*

slides/k8s/kubectl-more.md Normal file
View File

@@ -0,0 +1,340 @@
# More contact with `kubectl`
- Namespaces
- Clusters
- Proxy
---
## Namespaces
- Namespaces allow us to segregate resources
.lab[
- List the namespaces on our cluster with one of these commands:
```bash
kubectl get namespaces
kubectl get namespace
kubectl get ns
```
]
--
*You know what ... This `kube-system` thing looks suspicious.*
*In fact, I'm pretty sure it showed up earlier, when we did:*
`kubectl describe node node1`
---
## Accessing namespaces
- By default, `kubectl` uses the `default` namespace
- We can see resources in all namespaces with `--all-namespaces`
.lab[
- List the pods in all namespaces:
```bash
kubectl get pods --all-namespaces
```
- Since Kubernetes 1.14, we can also use `-A` as a shorter version:
```bash
kubectl get pods -A
```
]
*Here are our system pods!*
---
## What are all these control plane pods?
- `etcd` is our etcd server
- `kube-apiserver` is the API server
- `kube-controller-manager` and `kube-scheduler` are other control plane components
- `coredns` provides DNS-based service discovery ([replacing kube-dns as of 1.11](https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/))
- `kube-proxy` is the (per-node) component managing port mappings and such
- `weave` is the (per-node) component managing the network overlay
- the `READY` column indicates the number of containers in each pod
(1 for most pods, but `weave` has 2, for instance)
---
## Scoping another namespace
- We can also look at a different namespace (other than `default`)
.lab[
- List only the pods in the `kube-system` namespace:
```bash
kubectl get pods --namespace=kube-system
kubectl get pods -n kube-system
```
]
---
## Namespaces and other `kubectl` commands
- We can use `-n`/`--namespace` with almost every `kubectl` command
- Example:
- `kubectl create --namespace=X` to create something in namespace X
- We can use `-A`/`--all-namespaces` with most commands that manipulate multiple objects
- Examples:
- `kubectl delete` can delete resources across multiple namespaces
- `kubectl label` can add/remove/update labels across multiple namespaces
---
class: extra-details
## What about `kube-public`?
.lab[
- List the pods in the `kube-public` namespace:
```bash
kubectl -n kube-public get pods
```
]
Nothing!
`kube-public` is created by kubeadm & [used for security bootstrapping](https://kubernetes.io/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters).
---
class: extra-details
## Exploring `kube-public`
- The only interesting object in `kube-public` is a ConfigMap named `cluster-info`
.lab[
- List ConfigMap objects:
```bash
kubectl -n kube-public get configmaps
```
- Inspect `cluster-info`:
```bash
kubectl -n kube-public get configmap cluster-info -o yaml
```
]
Note the `selfLink` URI: `/api/v1/namespaces/kube-public/configmaps/cluster-info`
We can use that!
---
class: extra-details
## Accessing `cluster-info`
- Earlier, when trying to access the API server, we got a `Forbidden` message
- But `cluster-info` is readable by everyone (even without authentication)
.lab[
- Retrieve `cluster-info`:
```bash
curl -k https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info
```
]
- We were able to access `cluster-info` (without auth)
- It contains a `kubeconfig` file
---
class: extra-details
## Retrieving `kubeconfig`
- We can easily extract the `kubeconfig` file from this ConfigMap
.lab[
- Display the content of `kubeconfig`:
```bash
curl -sk https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info \
| jq -r .data.kubeconfig
```
]
- This file holds the canonical address of the API server, and the public key of the CA
- This file *does not* hold client keys or tokens
- This is not sensitive information, but allows us to establish trust
---
class: extra-details
## What about `kube-node-lease`?
- Starting with Kubernetes 1.14, there is a `kube-node-lease` namespace
(or in Kubernetes 1.13 if the NodeLease feature gate is enabled)
- That namespace contains one Lease object per node
- *Node leases* are a new way to implement node heartbeats
(i.e. node regularly pinging the control plane to say "I'm alive!")
- For more details, see [Efficient Node Heartbeats KEP] or the [node controller documentation]
[Efficient Node Heartbeats KEP]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/589-efficient-node-heartbeats/README.md
[node controller documentation]: https://kubernetes.io/docs/concepts/architecture/nodes/#node-controller
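We can peek at these objects directly (each lease's renew time is refreshed periodically by its node's kubelet):
```bash
kubectl -n kube-node-lease get leases
```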
---
## Services
- A *service* is a stable endpoint to connect to "something"
(In the initial proposal, they were called "portals")
.lab[
- List the services on our cluster with one of these commands:
```bash
kubectl get services
kubectl get svc
```
]
--
There is already one service on our cluster: the Kubernetes API itself.
---
## ClusterIP services
- A `ClusterIP` service is internal, available from the cluster only
- This is useful for introspection from within containers
.lab[
- Try to connect to the API:
```bash
curl -k https://`10.96.0.1`
```
- `-k` is used to skip certificate verification
- Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `kubectl get svc`
]
The command above should either time out, or show an authentication error. Why?
---
## Time out
- Connections to ClusterIP services only work *from within the cluster*
- If we are outside the cluster, the `curl` command will probably time out
(Because the IP address, e.g. 10.96.0.1, isn't routed properly outside the cluster)
- This is the case with most "real" Kubernetes clusters
- To try the connection from within the cluster, we can use [shpod](https://github.com/jpetazzo/shpod)
---
## Authentication error
This is what we should see when connecting from within the cluster:
```json
$ curl -k https://10.96.0.1
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
```
---
## Explanations
- We can see `kind`, `apiVersion`, `metadata`
- These are typical of a Kubernetes API reply
- Because we *are* talking to the Kubernetes API
- The Kubernetes API tells us "Forbidden"
(because it requires authentication)
- The Kubernetes API is reachable from within the cluster
(many apps integrating with Kubernetes will use this)
---
## DNS integration
- Each service also gets a DNS record
- The Kubernetes DNS resolver is available *from within pods*
(and sometimes, from within nodes, depending on configuration)
- Code running in pods can connect to services using their name
(e.g. https://kubernetes/...)
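A quick way to check this, using a throwaway pod (a sketch; any image with DNS tools would do):
```bash
kubectl run dnstest --rm -it --restart=Never --image=alpine -- nslookup kubernetes.default
```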
???
:EN:- Getting started with kubectl
:FR:- Se familiariser avec kubectl

View File

@@ -0,0 +1,399 @@
# Scaling our application
- `kubectl` gives us a simple command to scale a workload:
`kubectl scale TYPE NAME --replicas=HOWMANY`
- Let's try it on our Pod, so that we have more Pods!
.lab[
- Try to scale the Pod:
```bash
kubectl scale pod pingpong --replicas=3
```
]
🤔 We get the following error, what does that mean?
```
Error from server (NotFound): the server could not find the requested resource
```
---
## Scaling a Pod
- We cannot "scale a Pod"
(that's not completely true; we could give it more CPU/RAM)
- If we want more Pods, we need to create more Pods
(i.e. execute `kubectl run` multiple times)
- There must be a better way!
(spoiler alert: yes, there is a better way!)
---
class: extra-details
## `NotFound`
- What's the meaning of that error?
```
Error from server (NotFound): the server could not find the requested resource
```
- When we execute `kubectl scale THAT-RESOURCE --replicas=THAT-MANY`,
<br/>
it is like telling Kubernetes:
*go to THAT-RESOURCE and set the scaling button to position THAT-MANY*
- Pods do not have a "scaling button"
- Try to execute the `kubectl scale pod` command with `-v6`
- We see a `PATCH` request to `/scale`: that's the "scaling button"
(technically it's called a *subresource* of the Pod)
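For example (at this verbosity, the API requests appear on stderr):
```bash
kubectl scale pod pingpong --replicas=3 -v6
```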
---
## Creating more pods
- We are going to create a ReplicaSet
(= set of replicas = set of identical pods)
- In fact, we will create a Deployment, which itself will create a ReplicaSet
- Why so many layers? We'll explain that shortly, don't worry!
---
## Creating a Deployment running `ping`
- Let's create a Deployment instead of a single Pod
.lab[
- Create the Deployment; pay attention to the `--`:
```bash
kubectl create deployment pingpong --image=alpine -- ping 127.0.0.1
```
]
- The `--` is used to separate:
- "options/flags of `kubectl create`
- command to run in the container
---
## What has been created?
.lab[
<!-- ```hide kubectl wait pod --selector=app=pingpong --for condition=ready ``` -->
- Check the resources that were created:
```bash
kubectl get all
```
]
Note: `kubectl get all` is a lie. It doesn't show everything.
(But it shows a lot of "usual suspects", i.e. commonly used resources.)
---
## There's a lot going on here!
```
NAME READY STATUS RESTARTS AGE
pod/pingpong 1/1 Running 0 4m17s
pod/pingpong-6ccbc77f68-kmgfn 1/1 Running 0 11s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h45
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/pingpong 1/1 1 1 11s
NAME DESIRED CURRENT READY AGE
replicaset.apps/pingpong-6ccbc77f68 1 1 1 11s
```
Our new Pod is not named `pingpong`, but `pingpong-xxxxxxxxxxx-yyyyy`.
We have a Deployment named `pingpong`, and an extra ReplicaSet, too. What's going on?
---
## From Deployment to Pod
We have the following resources:
- `deployment.apps/pingpong`
This is the Deployment that we just created.
- `replicaset.apps/pingpong-xxxxxxxxxx`
This is a Replica Set created by this Deployment.
- `pod/pingpong-xxxxxxxxxx-yyyyy`
This is a *pod* created by the Replica Set.
Let's explain what these things are.
---
## Pod
- Can have one or multiple containers
- Runs on a single node
(Pod cannot "straddle" multiple nodes)
- Pods cannot be moved
(e.g. in case of node outage)
- Pods cannot be scaled horizontally
(except by manually creating more Pods)
---
class: extra-details
## Pod details
- A Pod is not a process; it's an environment for containers
- it cannot be "restarted"
- it cannot "crash"
- The containers in a Pod can crash
- They may or may not get restarted
(depending on Pod's restart policy)
- If all containers exit successfully, the Pod ends in "Succeeded" phase
- If some containers fail and don't get restarted, the Pod ends in "Failed" phase
---
## Replica Set
- Set of identical (replicated) Pods
- Defined by a pod template + number of desired replicas
- If there are not enough Pods, the Replica Set creates more
(e.g. in case of node outage; or simply when scaling up)
- If there are too many Pods, the Replica Set deletes some
(e.g. if a node was disconnected and comes back; or when scaling down)
- We can scale up/down a Replica Set
- we update the manifest of the Replica Set
- as a consequence, the Replica Set controller creates/deletes Pods
---
## Deployment
- Replica Sets control *identical* Pods
- Deployments are used to roll out different Pods
(different image, command, environment variables, ...)
- When we update a Deployment with a new Pod definition:
- a new Replica Set is created with the new Pod definition
- that new Replica Set is progressively scaled up
- meanwhile, the old Replica Set(s) is(are) scaled down
- This is a *rolling update*, minimizing application downtime
- When we scale up/down a Deployment, it scales up/down its Replica Set
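For instance, changing the image triggers a rolling update (a sketch; the container is named `alpine` here because `kubectl create deployment` derives the container name from the image):
```bash
kubectl set image deployment/pingpong alpine=alpine:3.16
kubectl rollout status deployment/pingpong
```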
---
## Can we scale now?
- Let's try `kubectl scale` again, but on the Deployment!
.lab[
- Scale our `pingpong` deployment:
```bash
kubectl scale deployment pingpong --replicas 3
```
- Note that we could also write it like this:
```bash
kubectl scale deployment/pingpong --replicas 3
```
- Check that we now have multiple pods:
```bash
kubectl get pods
```
]
---
class: extra-details
## Scaling a Replica Set
- What if we scale the Replica Set instead of the Deployment?
- The Deployment would notice it right away and scale back to the initial level
- The Replica Set makes sure that we have the right numbers of Pods
- The Deployment makes sure that the Replica Set has the right size
(conceptually, it delegates the management of the Pods to the Replica Set)
- This might seem weird (why this extra layer?) but will soon make sense
(when we will look at how rolling updates work!)
---
## Checking Deployment logs
- `kubectl logs` needs a Pod name
- But it can also work with a *type/name*
(e.g. `deployment/pingpong`)
.lab[
- View the result of our `ping` command:
```bash
kubectl logs deploy/pingpong --tail 2
```
]
- It shows us the logs of the first Pod of the Deployment
- We'll see later how to get the logs of *all* the Pods!
---
## Resilience
- The *deployment* `pingpong` watches its *replica set*
- The *replica set* ensures that the right number of *pods* are running
- What happens if pods disappear?
.lab[
- In a separate window, watch the list of pods:
```bash
watch kubectl get pods
```
<!--
```wait Every 2.0s```
```tmux split-pane -v```
-->
- Destroy the pod currently shown by `kubectl logs`:
```
kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
```
<!--
```tmux select-pane -t 0```
```copy pingpong-[^-]*-.....```
```tmux last-pane```
```keys kubectl delete pod ```
```paste```
```key ^J```
```check```
-->
]
---
## What happened?
- `kubectl delete pod` terminates the pod gracefully
(sending it the TERM signal and waiting for it to shut down)
- As soon as the pod is in "Terminating" state, the Replica Set replaces it
- But we can still see the output of the "Terminating" pod in `kubectl logs`
- Until 30 seconds later, when the grace period expires
- The pod is then killed, and `kubectl logs` exits
---
## Deleting a standalone Pod
- What happens if we delete a standalone Pod?
(like the first `pingpong` Pod that we created)
.lab[
- Delete the Pod:
```bash
kubectl delete pod pingpong
```
<!--
```key ^D```
```key ^C```
-->
]
- No replacement Pod gets created because there is no *controller* watching it
- That's why we will rarely use standalone Pods in practice
(except for e.g. one-off debugging, or executing a short supervised task)
???
:EN:- Running pods and deployments
:FR:- Créer un pod et un déploiement

Some files were not shown because too many files have changed in this diff.