Compare commits


31 Commits

Author SHA1 Message Date
Jérôme Petazzoni
2122456ebd ⁂ Wemanity Masterclass Oct 2022 2022-10-10 08:50:43 +02:00
Jérôme Petazzoni
bb8e655f92 🔧 Disable unattended upgrades; add completion for kubeadm 2022-10-09 12:18:42 +02:00
Jérôme Petazzoni
50772ca439 🌍 Switch Scaleway to fr-par-2 (better PUE) 2022-10-09 12:18:07 +02:00
Jérôme Petazzoni
1082204ac7 📃 Add note about .Chart.IsRoot 2022-10-04 17:11:59 +02:00
Jérôme Petazzoni
c9c79c409c Add ytt; fix Weave YAML URL; add completion for a few tools 2022-10-04 16:53:36 +02:00
Jérôme Petazzoni
71daf27237 ⌨️ Add tmux rename window shortcut 2022-10-03 15:28:32 +02:00
Jérôme Petazzoni
986da15a22 🔗 Update kustomize eschewed features link 2022-10-03 15:23:18 +02:00
Jérôme Petazzoni
407a8631ed 🐞 Typo in variable name 2022-10-03 15:15:53 +02:00
Jérôme Petazzoni
b4a81a7054 🔧 Minor tweak to Terraform provisioning wrapper 2022-10-03 15:15:12 +02:00
Jérôme Petazzoni
d0f0d2c87b 🔧 Typo fix 2022-09-27 14:53:14 +02:00
Jérôme Petazzoni
0f77eaa48b 📃 Update info about Docker Desktop and Rancher Desktop 2022-09-26 13:42:20 +02:00
Jérôme Petazzoni
659713a697 Bump up dashboard version 2022-09-26 11:41:28 +02:00
Jérôme Petazzoni
20d21b742a Bump up Compose version to use 2.X everywhere 2022-09-25 17:28:52 +02:00
Jérôme Petazzoni
747605357d 🏭️ Refactor Ingress chapter 2022-09-25 14:20:26 +02:00
Jérôme Petazzoni
17bb84d22e 🏭️ Refactor healthcheck chapter
Add more details for startup probes.
Mention GRPC check.
Better spell out recommendations and gotchas.
2022-09-11 13:11:01 +02:00
Jérôme Petazzoni
d343264b86 📃 Update swap/cgroups v2 section to mention KEP2400 2022-09-10 09:31:39 +02:00
Jérôme Petazzoni
a216aa2034 🐞 Fix install of kube-ps1
The former method was invalid and didn't work with e.g. screen.
2022-08-31 12:42:47 +02:00
Francesco Manzali
64f993ff69 - Update VMs to ubuntu/focal64 20.04 LTS (trusty64 reached EOL on April 25, 2019)
- Update Docker installation task from the
  [official docs](https://docs.docker.com/engine/install/ubuntu/)
2022-08-31 12:06:10 +02:00
Jérôme Petazzoni
73b3cad0b8 🔧 Fix a couple of issues related to OCI images 2022-08-22 17:20:36 +02:00
Naeem Ilyas
26e5459fae typo fix 2022-08-22 10:23:57 +02:00
Jérôme Petazzoni
9c564e6787 Add info about ownerReferences with Kyverno 2022-08-19 14:59:11 +02:00
Jérôme Petazzoni
2724a611a6 📃 Update rolling update intro slide 2022-08-17 14:49:17 +02:00
Jérôme Petazzoni
2ca239ddfc 🔒️ Mention bound service account tokens 2022-08-17 14:18:15 +02:00
Jérôme Petazzoni
e74a158c59 📃 Document dependency on yq 2022-08-17 13:49:15 +02:00
Jérôme Petazzoni
138af3b5d2 ♻️ Upgrade build image to Netlify Focal; bump up Python version 2022-08-17 13:48:55 +02:00
Jérôme Petazzoni
ad6d16bade Add RBAC and NetPol exercises 2022-08-17 13:16:52 +02:00
Jérôme Petazzoni
1aaf9b0bd5 ♻️ Update Linode LKE terraform module 2022-07-29 14:37:37 +02:00
Jérôme Petazzoni
ce39f97a28 Bump up versions for cluster upgrade lab 2022-07-22 11:32:22 +02:00
jonjohnsonjr
162651bdfd Typo: sould -> should 2022-07-18 19:16:47 +02:00
Jérôme Petazzoni
2958ca3a32 ♻️ Update CRD content
Rehaul for crd/v1; demonstrate what happens when adding
data validation a posteriori.
2022-07-14 10:32:34 +02:00
Jérôme Petazzoni
02a15d94a3 Add nsinjector 2022-07-06 14:28:24 +02:00
57 changed files with 1571 additions and 1019 deletions

8
.gitignore vendored
View File

@@ -6,13 +6,7 @@ prepare-vms/tags
prepare-vms/infra
prepare-vms/www
prepare-tf/.terraform*
prepare-tf/terraform.*
prepare-tf/stage2/*.tf
prepare-tf/stage2/kubeconfig.*
prepare-tf/stage2/.terraform*
prepare-tf/stage2/terraform.*
prepare-tf/stage2/externalips.*
prepare-tf/tag-*
slides/*.yml.html
slides/autopilot/state.yaml

View File

@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,8 +253,8 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
spec:
containers:
- args:
@@ -262,7 +262,7 @@ spec:
- --sidecar-host=http://127.0.0.1:8000
- --enable-skip-login
- --enable-insecure-login
image: kubernetesui/dashboard:v2.5.0
image: kubernetesui/dashboard:v2.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -293,7 +293,7 @@ spec:
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- image: kubernetesui/metrics-scraper:v1.0.7
- image: kubernetesui/metrics-scraper:v1.0.8
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:

View File

@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,15 +253,15 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
spec:
containers:
- args:
- --namespace=kubernetes-dashboard
- --auto-generate-certificates
- --sidecar-host=http://127.0.0.1:8000
image: kubernetesui/dashboard:v2.5.0
image: kubernetesui/dashboard:v2.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -292,7 +292,7 @@ spec:
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- image: kubernetesui/metrics-scraper:v1.0.7
- image: kubernetesui/metrics-scraper:v1.0.8
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:

View File

@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,15 +253,15 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
spec:
containers:
- args:
- --namespace=kubernetes-dashboard
- --auto-generate-certificates
- --sidecar-host=http://127.0.0.1:8000
image: kubernetesui/dashboard:v2.5.0
image: kubernetesui/dashboard:v2.6.1
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -292,7 +292,7 @@ spec:
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- image: kubernetesui/metrics-scraper:v1.0.7
- image: kubernetesui/metrics-scraper:v1.0.8
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:

14
k8s/pizza-1.yaml Normal file
View File

@@ -0,0 +1,14 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
version: v1alpha1
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz

20
k8s/pizza-2.yaml Normal file
View File

@@ -0,0 +1,20 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object

32
k8s/pizza-3.yaml Normal file
View File

@@ -0,0 +1,32 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
required: [ sauce, toppings ]
properties:
sauce:
type: string
toppings:
type: array
items:
type: string

39
k8s/pizza-4.yaml Normal file
View File

@@ -0,0 +1,39 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
required: [ sauce, toppings ]
properties:
sauce:
type: string
toppings:
type: array
items:
type: string
additionalPrinterColumns:
- jsonPath: .spec.sauce
name: Sauce
type: string
- jsonPath: .spec.toppings
name: Toppings
type: string

40
k8s/pizza-5.yaml Normal file
View File

@@ -0,0 +1,40 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
required: [ sauce, toppings ]
properties:
sauce:
type: string
enum: [ red, white ]
toppings:
type: array
items:
type: string
additionalPrinterColumns:
- jsonPath: .spec.sauce
name: Sauce
type: string
- jsonPath: .spec.toppings
name: Toppings
type: string

45
k8s/pizzas.yaml Normal file
View File

@@ -0,0 +1,45 @@
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: margherita
spec:
sauce: red
toppings:
- mozarella
- basil
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: quatrostagioni
spec:
sauce: red
toppings:
- artichoke
- basil
- mushrooms
- prosciutto
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: mehl31
spec:
sauce: white
toppings:
- goatcheese
- pear
- walnuts
- mozzarella
- rosemary
- honey
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: brownie
spec:
sauce: chocolate
toppings:
- nuts

View File

@@ -2,4 +2,3 @@
base = "slides"
publish = "slides"
command = "./build.sh once"

View File

@@ -1,11 +1,10 @@
---
- hosts: nodes
sudo: true
become: yes
vars_files:
- vagrant.yml
tasks:
- name: clean up the home folder
file:
path: /home/vagrant/{{ item }}
@@ -24,25 +23,23 @@
- name: installing dependencies
apt:
name: apt-transport-https,ca-certificates,python-pip,tmux
name: apt-transport-https,ca-certificates,python3-pip,tmux
state: present
update_cache: true
- name: fetching docker repo key
apt_key:
keyserver: hkp://p80.pool.sks-keyservers.net:80
id: 58118E89F3A912897C070ADBF76221572C52609D
- name: adding package repos
apt_repository:
repo: "{{ item }}"
url: https://download.docker.com/linux/ubuntu/gpg
state: present
- name: adding docker repo
apt_repository:
repo: deb https://download.docker.com/linux/ubuntu focal stable
state: present
with_items:
- deb https://apt.dockerproject.org/repo ubuntu-trusty main
- name: installing docker
apt:
name: docker-engine
name: docker-ce,docker-ce-cli,containerd.io,docker-compose-plugin
state: present
update_cache: true
@@ -56,7 +53,7 @@
lineinfile:
dest: /etc/default/docker
line: DOCKER_OPTS="--host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:55555"
regexp: '^#?DOCKER_OPTS=.*$'
regexp: "^#?DOCKER_OPTS=.*$"
state: present
register: docker_opts
@@ -66,22 +63,14 @@
state: restarted
when: docker_opts is defined and docker_opts.changed
- name: performing pip autoupgrade
pip:
name: pip
state: latest
- name: installing virtualenv
pip:
name: virtualenv
state: latest
- name: Install Docker Compose via PIP
pip: name=docker-compose
- name: install docker-compose from official github repo
get_url:
url: https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
dest: /usr/local/bin/docker-compose
mode: "u+x,g+x"
- name:
file:
path="/usr/local/bin/docker-compose"
file: path="/usr/local/bin/docker-compose"
state=file
mode=0755
owner=vagrant
@@ -128,5 +117,3 @@
line: "127.0.0.1 localhost {{ inventory_hostname }}"
- regexp: '^127\.0\.1\.1'
line: "127.0.1.1 {{ inventory_hostname }}"

View File

@@ -1,13 +1,12 @@
---
vagrant:
default_box: ubuntu/trusty64
default_box: ubuntu/focal64
default_box_check_update: true
ssh_insert_key: false
min_memory: 256
min_cores: 1
instances:
- hostname: node1
private_ip: 10.10.10.10
memory: 1512
@@ -37,6 +36,3 @@ instances:
private_ip: 10.10.10.50
memory: 512
cores: 1

View File

@@ -53,7 +53,7 @@ The value of the `location` variable is provider-specific. Examples:
| Provider | Example value | How to see possible values
|---------------|-------------------|---------------------------
| Digital Ocean | `ams3` | `doctl compute region list`
| Google Cloud | `europe-north1-a` | `gcloud compute zones list`
| Google Cloud | `europe-north1-a` | `gcloud compute zones list`
| Linode | `eu-central` | `linode-cli regions list`
| Oracle Cloud | `eu-stockholm-1` | `oci iam region list`
@@ -112,7 +112,7 @@ terraform init
See steps above, and add the following extra steps:
- Digital Coean:
- Digital Ocean:
```bash
export DIGITALOCEAN_ACCESS_TOKEN=$(grep ^access-token ~/.config/doctl/config.yaml | cut -d: -f2 | tr -d " ")
```

View File

@@ -3,6 +3,14 @@ set -e
TIME=$(which time)
if [ -f ~/.config/doctl/config.yaml ]; then
export DIGITALOCEAN_ACCESS_TOKEN=$(grep ^access-token ~/.config/doctl/config.yaml | cut -d: -f2 | tr -d " ")
fi
if [ -f ~/.config/linode-cli ]; then
export LINODE_TOKEN=$(grep ^token ~/.config/linode-cli | cut -d= -f2 | tr -d " ")
fi
PROVIDER=$1
[ "$PROVIDER" ] || {
echo "Please specify a provider as first argument, or 'ALL' for parallel mode."

View File

@@ -1,6 +1,6 @@
resource "random_string" "_" {
length = 4
number = false
numeric = false
special = false
upper = false
}

View File

@@ -3,7 +3,7 @@ resource "linode_lke_cluster" "_" {
tags = var.common_tags
# "region" is mandatory, so let's provide a default value if none was given.
region = var.location != null ? var.location : "eu-central"
k8s_version = var.k8s_version
k8s_version = local.k8s_version
pool {
type = local.node_type

View File

@@ -51,7 +51,22 @@ variable "location" {
# To view supported versions, run:
# linode-cli lke versions-list --json | jq -r .[].id
data "external" "k8s_version" {
program = [
"sh",
"-c",
<<-EOT
linode-cli lke versions-list --json |
jq -r '{"latest": [.[].id] | sort [-1]}'
EOT
]
}
variable "k8s_version" {
type = string
default = "1.22"
default = ""
}
locals {
k8s_version = var.k8s_version != "" ? var.k8s_version : data.external.k8s_version.result.latest
}

View File

@@ -193,7 +193,6 @@ resource "tls_private_key" "cluster_admin_${index}" {
}
resource "tls_cert_request" "cluster_admin_${index}" {
key_algorithm = tls_private_key.cluster_admin_${index}.algorithm
private_key_pem = tls_private_key.cluster_admin_${index}.private_key_pem
subject {
common_name = "cluster-admin"

View File

@@ -17,6 +17,7 @@ These tools can help you to create VMs on:
- [Parallel SSH](https://github.com/lilydjwg/pssh)
(should be installable with `pip install git+https://github.com/lilydjwg/pssh`;
on a Mac, try `brew install pssh`)
- [yq](https://github.com/kislyuk/yq)
Depending on the infrastructure that you want to use, you also need to install
the CLI that is specific to that cloud. For OpenStack deployments, you will

View File

@@ -1,3 +1,3 @@
INFRACLASS=scaleway
#SCW_INSTANCE_TYPE=DEV1-L
#SCW_ZONE=fr-par-2
SCW_ZONE=fr-par-2

View File

@@ -157,6 +157,9 @@ _cmd_clusterize() {
TAG=$1
need_tag
# Disable unattended upgrades so that they don't interfere with the subsequent steps
pssh sudo rm -f /etc/apt/apt.conf.d/50unattended-upgrades
# Special case for scaleway since it doesn't come with sudo
if [ "$INFRACLASS" = "scaleway" ]; then
pssh -l root "
@@ -182,9 +185,23 @@ _cmd_clusterize() {
pssh "
if [ -f /etc/iptables/rules.v4 ]; then
sudo sed -i 's/-A INPUT -j REJECT --reject-with icmp-host-prohibited//' /etc/iptables/rules.v4
sudo netfilter-persistent flush
sudo netfilter-persistent start
fi"
# oracle-cloud-agent upgrades packages in the background.
# This breaks our deployment scripts, because when we invoke apt-get, it complains
# that the lock already exists (symptom: random "Exited with error code 100").
# Workaround: if we detect oracle-cloud-agent, remove it.
# But this agent seems to also take care of installing/upgrading
# the unified-monitoring-agent package, so when we stop the snap,
# it can leave dpkg in a broken state. We "fix" it with the 2nd command.
pssh "
if [ -d /snap/oracle-cloud-agent ]; then
sudo snap remove oracle-cloud-agent
sudo dpkg --remove --force-remove-reinstreq unified-monitoring-agent
fi"
# Copy settings and install Python YAML parser
pssh -I tee /tmp/settings.yaml <tags/$TAG/settings.yaml
pssh "
@@ -262,13 +279,14 @@ EOF
"
##VERSION## https://github.com/docker/compose/releases
if [ "$ARCHITECTURE" ]; then
COMPOSE_VERSION=v2.2.3
COMPOSE_PLATFORM='linux-$(uname -m)'
else
COMPOSE_VERSION=1.29.2
COMPOSE_PLATFORM='Linux-$(uname -m)'
fi
COMPOSE_VERSION=v2.11.1
COMPOSE_PLATFORM='linux-$(uname -m)'
# Just in case you need Compose 1.X, you can use the following lines.
# (But it will probably only work for x86_64 machines.)
#COMPOSE_VERSION=1.29.2
#COMPOSE_PLATFORM='Linux-$(uname -m)'
pssh "
set -e
### Install docker-compose.
@@ -346,7 +364,8 @@ EOF"
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet kubeadm kubectl &&
sudo apt-mark hold kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl &&
kubeadm completion bash | sudo tee /etc/bash_completion.d/kubeadm &&
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl &&
echo 'alias k=kubectl' | sudo tee /etc/bash_completion.d/k &&
echo 'complete -F __start_kubectl k' | sudo tee -a /etc/bash_completion.d/k"
@@ -419,8 +438,9 @@ EOF
# Install weave as the pod network
pssh "
if i_am_first_node; then
kubever=\$(kubectl version | base64 | tr -d '\n') &&
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
#kubever=\$(kubectl version | base64 | tr -d '\n') &&
#kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml
fi"
# Join the other nodes to the cluster
@@ -478,12 +498,13 @@ _cmd_kubetools() {
# Install kube-ps1
pssh "
set -e
if ! [ -f /etc/profile.d/kube-ps1.sh ]; then
if ! [ -d /opt/kube-ps1 ]; then
cd /tmp
git clone https://github.com/jonmosco/kube-ps1
sudo cp kube-ps1/kube-ps1.sh /etc/profile.d/kube-ps1.sh
sudo mv kube-ps1 /opt/kube-ps1
sudo -u $USER_LOGIN sed -i s/docker-prompt/kube_ps1/ /home/$USER_LOGIN/.bashrc &&
sudo -u $USER_LOGIN tee -a /home/$USER_LOGIN/.bashrc <<EOF
. /opt/kube-ps1/kube-ps1.sh
KUBE_PS1_PREFIX=""
KUBE_PS1_SUFFIX=""
KUBE_PS1_SYMBOL_ENABLE="false"
@@ -494,13 +515,13 @@ EOF
# Install stern
##VERSION## https://github.com/stern/stern/releases
STERN_VERSION=1.20.1
STERN_VERSION=1.22.0
FILENAME=stern_${STERN_VERSION}_linux_${ARCH}
URL=https://github.com/stern/stern/releases/download/v$STERN_VERSION/$FILENAME.tar.gz
pssh "
if [ ! -x /usr/local/bin/stern ]; then
curl -fsSL $URL |
sudo tar -C /usr/local/bin -zx --strip-components=1 $FILENAME/stern
sudo tar -C /usr/local/bin -zx stern
sudo chmod +x /usr/local/bin/stern
stern --completion bash | sudo tee /etc/bash_completion.d/stern
stern --version
@@ -516,7 +537,7 @@ EOF
# Install kustomize
##VERSION## https://github.com/kubernetes-sigs/kustomize/releases
KUSTOMIZE_VERSION=v4.4.0
KUSTOMIZE_VERSION=v4.5.7
URL=https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_${ARCH}.tar.gz
pssh "
if [ ! -x /usr/local/bin/kustomize ]; then
@@ -535,7 +556,7 @@ EOF
if [ ! -x /usr/local/bin/ship ]; then
##VERSION##
curl -fsSL https://github.com/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_$ARCH.tar.gz |
sudo tar -C /usr/local/bin -zx ship
sudo tar -C /usr/local/bin -zx ship
fi"
# Install the AWS IAM authenticator
@@ -543,8 +564,8 @@ EOF
if [ ! -x /usr/local/bin/aws-iam-authenticator ]; then
##VERSION##
sudo curl -fsSLo /usr/local/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/$ARCH/aws-iam-authenticator
sudo chmod +x /usr/local/bin/aws-iam-authenticator
aws-iam-authenticator version
sudo chmod +x /usr/local/bin/aws-iam-authenticator
aws-iam-authenticator version
fi"
# Install the krew package manager
@@ -586,6 +607,7 @@ EOF
FILENAME=tilt.\$TILT_VERSION.linux.$TILT_ARCH.tar.gz
curl -fsSL https://github.com/tilt-dev/tilt/releases/download/v\$TILT_VERSION/\$FILENAME |
sudo tar -zxvf- -C /usr/local/bin tilt
tilt completion bash | sudo tee /etc/bash_completion.d/tilt
tilt version
fi"
@@ -594,6 +616,7 @@ EOF
if [ ! -x /usr/local/bin/skaffold ]; then
curl -fsSLo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-$ARCH &&
sudo install skaffold /usr/local/bin/
skaffold completion bash | sudo tee /etc/bash_completion.d/skaffold
skaffold version
fi"
@@ -602,9 +625,28 @@ EOF
if [ ! -x /usr/local/bin/kompose ]; then
curl -fsSLo kompose https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-$ARCH &&
sudo install kompose /usr/local/bin
kompose completion bash | sudo tee /etc/bash_completion.d/kompose
kompose version
fi"
# Install KinD
pssh "
if [ ! -x /usr/local/bin/kind ]; then
curl -fsSLo kind https://github.com/kubernetes-sigs/kind/releases/latest/download/kind-linux-$ARCH &&
sudo install kind /usr/local/bin
kind completion bash | sudo tee /etc/bash_completion.d/kind
kind version
fi"
# Install YTT
pssh "
if [ ! -x /usr/local/bin/ytt ]; then
curl -fsSLo ytt https://github.com/vmware-tanzu/carvel-ytt/releases/latest/download/ytt-linux-$ARCH &&
sudo install ytt /usr/local/bin
ytt completion bash | sudo tee /etc/bash_completion.d/ytt
ytt version
fi"
##VERSION## https://github.com/bitnami-labs/sealed-secrets/releases
KUBESEAL_VERSION=0.17.4
#case $ARCH in

View File

@@ -36,7 +36,7 @@ if os.path.isfile(domain_or_domain_file):
clusters = [line.split() for line in lines]
else:
ips = open(f"tags/{ips_file_or_tag}/ips.txt").read().split()
settings_file = f"tags/{tag}/settings.yaml"
settings_file = f"tags/{ips_file_or_tag}/settings.yaml"
clustersize = yaml.safe_load(open(settings_file))["clustersize"]
clusters = []
while ips:

View File

@@ -16,7 +16,7 @@ user_password: training
# For a list of old versions, check:
# https://kubernetes.io/releases/patch-releases/#non-active-branch-history
kubernetes_version: 1.18.20
kubernetes_version: 1.20.15
image:

View File

@@ -1,11 +1,11 @@
title: |
Docker Intensif
Docker
chat: "[Mattermost](https://highfive.container.training/mattermost)"
chat: "[#masterclass-module-docker](https://enixteam.slack.com/archives/C045SR5T2MP) (or GMeet)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2022-06-enix.container.training/
slides: https://2022-10-wemanity.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
@@ -35,34 +35,29 @@ content:
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- # DAY 2
- containers/Multi_Stage_Builds.md
- containers/Container_Networking_Basics.md
- containers/Local_Development_Workflow.md
- containers/Container_Network_Model.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- # DAY 3
- containers/Exercise_Dockerfile_Advanced.md
-
- |
# (Extra content - advanced Dockerfiles)
- containers/Dockerfile_Tips.md
- containers/Advanced_Dockerfiles.md
- containers/Buildkit.md
-
- |
# (Extra content - operations)
- containers/Start_And_Attach.md
- containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- containers/Dockerfile_Tips.md
- containers/Advanced_Dockerfiles.md
- containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Exercise_Dockerfile_Advanced.md
- # DAY 4
- containers/Buildkit.md
- containers/Network_Drivers.md
-
- |
# (Extra content - container internals)
- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
- containers/Orchestration_Overview.md
#- containers/Docker_Machine.md
#- containers/Init_Systems.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Containers_From_Scratch.md
#- containers/Container_Engines.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md
- shared/thankyou.md
#- containers/links.md

View File

@@ -1,11 +1,11 @@
title: |
Fondamentaux Kubernetes
Kubernetes
chat: "[Mattermost](https://highfive.container.training/mattermost)"
chat: "[#masterclass-module-kubernetes](https://enixteam.slack.com/archives/C045GKSEB4L) (or GMeet)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2022-06-enix.container.training/
slides: https://2022-10-wemanity.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
@@ -23,9 +23,6 @@ content:
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- exercises/k8sfundamentals-brief.md
- exercises/localcluster-brief.md
- exercises/healthchecks-brief.md
- shared/toc.md
- # 1
#- k8s/versions-k8s.md
@@ -36,33 +33,34 @@ content:
- k8s/concepts-k8s.md
- k8s/kubectlget.md
- k8s/kubectl-run.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
#- k8s/buildshiprun-dockerhub.md
- exercises/k8sfundamentals-details.md
- k8s/ourapponkube.md
#- k8s/exercise-wordsmith.md
- # 2
- k8s/ourapponkube.md
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- k8s/namespaces.md
- k8s/yamldeploy.md
- k8s/authoring-yaml.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/authoring-yaml.md
- k8s/k9s.md
- # 3
- k8s/kubenet.md
- k8s/setup-overview.md
- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/kubectlproxy.md
- exercises/localcluster-details.md
- # 3
- # 4
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
@@ -70,18 +68,41 @@ content:
- k8s/rollout.md
- k8s/healthchecks.md
#- k8s/healthchecks-more.md
- k8s/dashboard.md
- k8s/k9s.md
#- k8s/dashboard.md
- k8s/tilt.md
- exercises/healthchecks-details.md
- # 4
#- exercises/healthchecks-details.md
- # 5
- k8s/ingress.md
- k8s/ingress-tls.md
- k8s/volumes.md
#- k8s/exercise-configmap.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
- k8s/secrets.md
- #6
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/resource-limits.md
-
- |
# (Extra materials - autoscaling)
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- k8s/batch-jobs.md
- shared/thankyou.md
-
- |
# (Extra materials - stateful apps)
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md
#- k8s/eck.md
#- k8s/portworx.md
- k8s/openebs.md
- k8s/stateful-failover.md
-
- |
# (Extra materials - operators)
- k8s/operators-design.md
- k8s/operators-example.md
- k8s/owners-and-dependents.md
- k8s/events.md
- k8s/finalizers.md

View File

@@ -2,11 +2,11 @@ title: |
Packaging d'applications
pour Kubernetes
chat: "[Mattermost](https://highfive.container.training/mattermost)"
chat: "[#masterclass-module-packaging](https://enixteam.slack.com/archives/C045B7V2M37) (or GMeet)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2022-06-enix.container.training/
slides: https://2022-10-wemanity.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
@@ -15,22 +15,26 @@ exclude:
content:
- shared/title.md
#- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/prereqs.md
- shared/webssh.md
- shared/connecting.md
- logistics.md
#- k8s/intro.md
#- shared/about-slides.md
#- shared/prereqs.md
#- shared/webssh.md
#- shared/connecting.md
#- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/toc.md
-
#- k8s/demo-apps.md
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- exercises/helm-generic-chart-details.md
-
- k8s/helm-dependencies.md
- k8s/helm-values-schema-validation.md
- k8s/helm-secrets.md
- k8s/ytt.md
#- exercises/helm-umbrella-chart-details.md

View File

@@ -1,66 +0,0 @@
title: |
Kubernetes Avancé
chat: "[Mattermost](https://highfive.container.training/mattermost)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2022-06-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/prereqs.md
- shared/webssh.md
- shared/connecting.md
- shared/toc.md
- exercises/sealed-secrets-brief.md
- exercises/kyverno-ingress-domain-name-brief.md
- #1
- k8s/demo-apps.md
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/sealed-secrets.md
- k8s/cert-manager.md
- k8s/cainjector.md
- k8s/ingress-tls.md
- exercises/sealed-secrets-details.md
- #2
- k8s/extending-api.md
- k8s/crd.md
- k8s/operators.md
- k8s/admission.md
- k8s/cainjector.md
- k8s/kyverno.md
- exercises/kyverno-ingress-domain-name-details.md
- #3
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- k8s/apiserver-deepdive.md
- k8s/aggregation-layer.md
- k8s/hpa-v2.md
- #4
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md
#- k8s/eck.md
#- k8s/portworx.md
- k8s/openebs.md
- k8s/stateful-failover.md
- k8s/operators-design.md
- k8s/operators-example.md
- k8s/owners-and-dependents.md
- k8s/events.md
- k8s/finalizers.md

View File

@@ -1,58 +0,0 @@
title: |
Opérer Kubernetes
chat: "[Mattermost](https://highfive.container.training/mattermost)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2022-06-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
# DAY 1
-
- k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
-
- k8s/multinode.md
- k8s/cni.md
- k8s/interco.md
-
- k8s/cni-internals.md
- k8s/apilb.md
- k8s/internal-apis.md
- k8s/staticpods.md
- k8s/cluster-upgrade.md
- k8s/cluster-backup.md
#- k8s/cloud-controller-manager.md
-
- k8s/control-plane-auth.md
- k8s/user-cert.md
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/pod-security-intro.md
- k8s/pod-security-policies.md
- k8s/pod-security-admission.md
- shared/thankyou.md
-
|
# (Extra content)
- k8s/apiserver-deepdive.md
- k8s/setup-overview.md
- k8s/setup-devel.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md

View File

@@ -24,4 +24,4 @@
# Survey form
/please https://docs.google.com/forms/d/e/1FAIpQLSfIYSgrV7tpfBNm1hOaprjnBHgWKn5n-k5vtNXYJkOX1sRxng/viewform
/ /highfive.html 200!
/ /wemanity.html 200!

View File

@@ -19,7 +19,7 @@ They abstract the connection details for this services, and can help with:
* fail over (how do I know to which instance of a replicated service I should connect?)
* load balancing (how to I spread my requests across multiple instances of a service?)
* load balancing (how do I spread my requests across multiple instances of a service?)
* authentication (what if my service requires credentials, certificates, or otherwise?)

View File

@@ -4,6 +4,6 @@
(we will use the `rng` service in the dockercoins app)
- See what happens when the load increses
- See what happens when the load increases
(spoiler alert: it involves timeouts!)

View File

@@ -0,0 +1,7 @@
## Exercise — Network Policies
- Implement a system with 3 levels of security
(private pods, public pods, namespace pods)
- Apply it to the DockerCoins demo app

View File

@@ -0,0 +1,63 @@
# Exercise — Network Policies
We want to implement a generic network security mechanism.
Instead of creating one policy per service, we want to
create a fixed number of policies, and use a single label
to indicate the security level of our pods.
Then, when adding a new service to the stack, instead
of writing a new network policy for that service, we
only need to add the right label to the pods of that service.
---
## Specifications
We will use the label `security` to classify our pods.
- If `security=private`:
*the pod shouldn't accept any traffic*
- If `security=public`:
*the pod should accept all traffic*
- If `security=namespace`:
*the pod should only accept connections coming from the same namespace*
If `security` isn't set, assume it's `private`.
---
## Test setup
- Deploy a copy of the DockerCoins app in a new namespace
- Modify the pod templates so that:
- `webui` has `security=public`
- `worker` has `security=private`
- `hasher`, `redis`, `rng` have `security=namespace`
---
## Implement and test policies
- Write the network policies
(feel free to draw inspiration from the ones we've seen so far)
- Check that:
- you can connect to the `webui` from outside the cluster
- the application works correctly (shows 3-4 hashes/second)
- you cannot connect to the `hasher`, `redis`, `rng` services
- you cannot connect or even ping the `worker` pods
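For reference, one way to express the `security=namespace` level is sketched below (the policy name is made up for the example; the other two levels follow the same pattern, with an allow-all rule for `public` and no ingress rule at all for `private`). This is only a starting point, not the full solution to the exercise:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: security-namespace        # illustrative name
spec:
  # Select the pods opting into the "namespace" security level
  podSelector:
    matchLabels:
      security: namespace
  ingress:
  # An empty podSelector matches all pods in the same namespace
  - from:
    - podSelector: {}
```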

View File

@@ -0,0 +1,9 @@
## Exercise — RBAC
- Create two namespaces for users `alice` and `bob`
- Give each user full access to their own namespace
- Give each user read-only access to the other's namespace
- Let `alice` view the nodes of the cluster as well

View File

@@ -0,0 +1,97 @@
# Exercise — RBAC
We want to:
- Create two namespaces for users `alice` and `bob`
- Give each user full access to their own namespace
- Give each user read-only access to the other's namespace
- Let `alice` view the nodes of the cluster as well
---
## Initial setup
- Create two namespaces named `alice` and `bob`
- Check that if we impersonate Alice, we can't access her namespace yet:
```bash
kubectl --as alice get pods --namespace alice
```
---
## Access for Alice
- Grant Alice full access to her own namespace
(you can use a pre-existing Cluster Role)
- Check that Alice can create stuff in her namespace:
```bash
kubectl --as alice create deployment hello --image nginx --namespace alice
```
- But that she can't create stuff in Bob's namespace:
```bash
kubectl --as alice create deployment hello --image nginx --namespace bob
```
---
## Access for Bob
- Similarly, grant Bob full access to his own namespace
- Check that Bob can create stuff in his namespace:
```bash
kubectl --as bob create deployment hello --image nginx --namespace bob
```
- But that he can't create stuff in Alice's namespace:
```bash
kubectl --as bob create deployment hello --image nginx --namespace alice
```
---
## Read-only access
- Now, give Alice read-only access to Bob's namespace
- Check that Alice can view Bob's stuff:
```bash
kubectl --as alice get pods --namespace bob
```
- But that she can't touch this:
```bash
kubectl --as alice delete pods --namespace bob --all
```
- Likewise, give Bob read-only access to Alice's namespace
---
## Nodes
- Give Alice read-only access to the cluster nodes
(this will require creating a custom Cluster Role)
- Check that Alice can view the nodes:
```bash
kubectl --as alice get nodes
```
- But that Bob cannot:
```bash
kubectl --as bob get nodes
```
- And that Alice can't update nodes:
```bash
kubectl --as alice label nodes --all hello=world
```
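For the last step, a minimal sketch of the node access could look like this (the `view-nodes` and `alice-view-nodes` names are made up for the example); the namespaced permissions in the earlier steps can typically reuse the built-in `admin` and `view` ClusterRoles through RoleBindings:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-nodes                # illustrative name
rules:
- apiGroups: [ "" ]               # nodes are in the core API group
  resources: [ nodes ]
  verbs: [ get, list, watch ]     # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alice-view-nodes          # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view-nodes
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice
```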

View File

@@ -1,111 +0,0 @@
<?xml version="1.0"?>
<html>
<head>
<style>
td {
background: #ccc;
padding: 1em;
}
</style>
</head>
<body>
<table>
<tr>
<td>Mardi 7 juin 2022</td>
<td>
<a href="1.yml.html">Docker Intensif</a>
</td>
</tr>
<tr>
<td>Mercredi 8 juin 2022</td>
<td>
<a href="1.yml.html">Docker Intensif</a>
</td>
</tr>
<tr>
<td>Jeudi 9 juin 2022</td>
<td>
<a href="1.yml.html">Docker Intensif</a>
</td>
</tr>
<tr>
<td>Vendredi 10 juin 2022</td>
<td>
<a href="1.yml.html">Docker Intensif</a>
</td>
</tr>
<tr>
<td>Lundi 13 juin 2022</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Mardi 14 juin 2022</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Mercredi 15 juin 2022</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Jeudi 16 juin 2022</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Lundi 20 juin 2022</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Mardi 21 juin 2022</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Mercredi 22 juin 2022</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Jeudi 23 juin 2022</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Lundi 27 juin 2022</td>
<td>
<a href="3.yml.html">Packaging d'applications pour Kubernetes</a>
</td>
</tr>
<tr>
<td>Mardi 28 juin 2022</td>
<td>
<a href="3.yml.html">Packaging d'applications pour Kubernetes</a>
</td>
</tr>
<tr>
<td>Mercredi 29 juin 2022</td>
<td>
<a href="5.yml.html">Opérer Kubernetes</a>
</td>
</tr>
<tr>
<td>Jeudi 30 juin 2022</td>
<td>
<a href="5.yml.html">Opérer Kubernetes</a>
</td>
</tr>
</table>
</body>
</html>

View File

@@ -246,7 +246,7 @@ class: extra-details
(they don't require hand-editing a file and restarting the API server)
- A service account is associated with a set of secrets
- A service account can be associated with a set of secrets
(the kind that you can view with `kubectl get secrets`)
@@ -256,6 +256,28 @@ class: extra-details
---
## Service account tokens evolution
- In Kubernetes 1.21 and above, pods use *bound service account tokens*:
- these tokens are *bound* to a specific object (e.g. a Pod)
- they are automatically invalidated when the object is deleted
- these tokens also expire quickly (e.g. 1 hour) and get rotated automatically
- In Kubernetes 1.24 and above, unbound tokens aren't created automatically
- before 1.24, we would see unbound tokens with `kubectl get secrets`
- with 1.24 and above, these tokens can be created with `kubectl create token`
- ...or with a Secret with the right [type and annotation][create-token]
[create-token]: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#to-create-additional-api-tokens
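As an illustration, the Secret-based method mentioned above is a manifest along these lines (the Secret name is arbitrary); once it is created, the token controller fills in the `token` data field:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: default-token-manual            # arbitrary name
  annotations:
    # Name of the ServiceAccount this token will belong to
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
```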
---
class: extra-details
## Checking our authentication method
@@ -390,6 +412,10 @@ class: extra-details
It should be named `default-token-XXXXX`.
When running Kubernetes 1.24 and above, this Secret won't exist.
<br/>
Instead, create a token with `kubectl create token default`.
---
class: extra-details

View File

@@ -81,7 +81,7 @@
## What version are we running anyway?
- When I say, "I'm running Kubernetes 1.18", is that the version of:
- When I say, "I'm running Kubernetes 1.20", is that the version of:
- kubectl
@@ -157,15 +157,15 @@
## Kubernetes uses semantic versioning
- Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.18.20:
- Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.20.15:
- MAJOR = 1
- MINOR = 18
- PATCH = 20
- MINOR = 20
- PATCH = 15
- It's always possible to mix and match different PATCH releases
(e.g. 1.18.20 and 1.18.15 are compatible)
(e.g. 1.20.0 and 1.20.15 are compatible)
- It is recommended to run the latest PATCH release
@@ -181,9 +181,9 @@
- All components support a difference of one¹ MINOR version
- This allows live upgrades (since we can mix e.g. 1.18 and 1.19)
- This allows live upgrades (since we can mix e.g. 1.20 and 1.21)
- It also means that going from 1.18 to 1.20 requires going through 1.19
- It also means that going from 1.20 to 1.22 requires going through 1.21
.footnote[¹Except kubelet, which can be up to two MINOR behind API server,
and kubectl, which can be one MINOR ahead or behind API server.]
@@ -254,7 +254,7 @@ and kubectl, which can be one MINOR ahead or behind API server.]
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
```
- Look for the `image:` line, and update it to e.g. `v1.19.0`
- Look for the `image:` line, and update it to e.g. `v1.24.0`
]
@@ -308,11 +308,11 @@ and kubectl, which can be one MINOR ahead or behind API server.]
]
Note 1: kubeadm thinks that our cluster is running 1.19.0.
Note 1: kubeadm thinks that our cluster is running 1.24.0.
<br/>It is confused by our manual upgrade of the API server!
Note 2: kubeadm itself is still version 1.18.20..
<br/>It doesn't know how to upgrade do 1.19.X.
Note 2: kubeadm itself is still version 1.20.15..
<br/>It doesn't know how to upgrade do 1.21.X.
---
@@ -335,28 +335,28 @@ Note 2: kubeadm itself is still version 1.18.20..
]
Problem: kubeadm doesn't know how to handle
upgrades from version 1.18.
upgrades from version 1.20.
This is because we installed version 1.22 (or even later).
This is because we installed version 1.24 (or even later).
We need to install kubeadm version 1.19.X.
We need to install kubeadm version 1.21.X.
---
## Downgrading kubeadm
- We need to go back to version 1.19.X.
- We need to go back to version 1.21.X.
.lab[
- View available versions for package `kubeadm`:
```bash
apt show kubeadm -a | grep ^Version | grep 1.19
apt show kubeadm -a | grep ^Version | grep 1.21
```
- Downgrade kubeadm:
```
sudo apt install kubeadm=1.19.8-00
sudo apt install kubeadm=1.21.0-00
```
- Check what kubeadm tells us:
@@ -366,7 +366,7 @@ We need to install kubeadm version 1.19.X.
]
kubeadm should now agree to upgrade to 1.19.8.
kubeadm should now agree to upgrade to 1.21.X.
---
@@ -464,9 +464,9 @@ kubeadm should now agree to upgrade to 1.19.8.
```bash
for N in 1 2 3; do
ssh oldversion$N "
sudo apt install kubeadm=1.19.8-00 &&
sudo apt install kubeadm=1.21.14-00 &&
sudo kubeadm upgrade node &&
sudo apt install kubelet=1.19.8-00"
sudo apt install kubelet=1.21.14-00"
done
```
]
@@ -475,7 +475,7 @@ kubeadm should now agree to upgrade to 1.19.8.
## Checking what we've done
- All our nodes should now be updated to version 1.19.8
- All our nodes should now be updated to version 1.21.14
.lab[
@@ -492,7 +492,7 @@ class: extra-details
## Skipping versions
- This example worked because we went from 1.18 to 1.19
- This example worked because we went from 1.20 to 1.21
- If you are upgrading from e.g. 1.16, you will have to go through 1.17 first

View File

@@ -14,22 +14,20 @@
## Creating a CRD
- We will create a CRD to represent the different species of coffee
- We will create a CRD to represent different recipes of pizzas
(arabica, liberica, and robusta)
- We will be able to run `kubectl get pizzas` and it will list the recipes
- We will be able to run `kubectl get coffees` and it will list the species
- Creating/deleting recipes won't do anything else
- Then we can label, edit, etc. the species to attach some information
(e.g. the taste profile of the coffee, or whatever we want)
(because we won't implement a *controller*)
---
## First shot of coffee
## First slice of pizza
```yaml
@@INCLUDE[k8s/coffee-1.yaml]
@@INCLUDE[k8s/pizza-1.yaml]
```
---
@@ -48,9 +46,9 @@
---
## Second shot of coffee
## Second slice of pizza
- The next slide will show file @@LINK[k8s/coffee-2.yaml]
- The next slide will show file @@LINK[k8s/pizza-2.yaml]
- Note the `spec.versions` list
@@ -65,20 +63,20 @@
---
```yaml
@@INCLUDE[k8s/coffee-2.yaml]
@@INCLUDE[k8s/pizza-2.yaml]
```
---
## Creating our Coffee CRD
## Baking some pizza
- Let's create the Custom Resource Definition for our Coffee resource
- Let's create the Custom Resource Definition for our Pizza resource
.lab[
- Load the CRD:
```bash
kubectl apply -f ~/container.training/k8s/coffee-2.yaml
kubectl apply -f ~/container.training/k8s/pizza-2.yaml
```
- Confirm that it shows up:
@@ -95,19 +93,19 @@
The YAML below defines a resource using the CRD that we just created:
```yaml
kind: Coffee
kind: Pizza
apiVersion: container.training/v1alpha1
metadata:
name: arabica
name: napolitana
spec:
taste: strong
toppings: [ mozzarella ]
```
.lab[
- Try to create a few types of coffee beans:
- Try to create a few pizza recipes:
```bash
kubectl apply -f ~/container.training/k8s/coffees.yaml
kubectl apply -f ~/container.training/k8s/pizzas.yaml
```
]
@@ -116,15 +114,39 @@ spec:
## Type validation
- Older versions of Kubernetes will accept our coffee beans as is
- Older versions of Kubernetes will accept our pizza definition as is
- Newer versions, however, will issue warnings about unknown fields
(and if we turn off validation, these fields will simply be dropped)
(and if we use `--validate=false`, these fields will simply be dropped)
- We need to improve our OpenAPI schema
(to add e.g. the `spec.taste` field used by our coffee resources)
(to add e.g. the `spec.toppings` field used by our pizza resources)
---
## Third slice of pizza
- Let's add a full OpenAPI v3 schema to our Pizza CRD
- We'll require a field `spec.sauce` which will be a string
- And a field `spec.toppings` which will have to be a list of strings
.lab[
- Update our pizza CRD:
```bash
kubectl apply -f ~/container.training/k8s/pizza-3.yaml
```
- Load our pizza recipes:
```bash
kubectl apply -f ~/container.training/k8s/pizzas.yaml
```
]
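
The file isn't reproduced here, but based on the description above, the validation section of `k8s/pizza-3.yaml` presumably looks something like this sketch:

```yaml
schema:
  openAPIV3Schema:
    type: object
    required: [ spec ]
    properties:
      spec:
        type: object
        required: [ sauce, toppings ]
        properties:
          sauce:
            type: string
          toppings:
            type: array
            items:
              type: string
```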
---
@@ -134,91 +156,48 @@ spec:
.lab[
- View the coffee beans that we just created:
- View the pizza recipes that we just created:
```bash
kubectl get coffees
kubectl get pizzas
```
]
- We'll see in a bit how to improve that
---
## What can we do with CRDs?
There are many possibilities!
- *Operators* encapsulate complex sets of resources
(e.g.: a PostgreSQL replicated cluster; an etcd cluster...
<br/>
see [awesome operators](https://github.com/operator-framework/awesome-operators) and
[OperatorHub](https://operatorhub.io/) to find more)
- Custom use-cases like [gitkube](https://gitkube.sh/)
- creates a new custom type, `Remote`, exposing a git+ssh server
- deploy by pushing YAML or Helm charts to that remote
- Replacing built-in types with CRDs
(see [this lightning talk by Tim Hockin](https://www.youtube.com/watch?v=ji0FWzFwNhA))
---
## What's next?
- Creating a basic CRD is quick and easy
- But there is a lot more that we can (and probably should) do:
- improve input with *data validation*
- improve output with *custom columns*
- And of course, we probably need a *controller* to go with our CRD!
(otherwise, we're just using the Kubernetes API as a fancy data store)
- Let's see how we can improve that display!
---
## Additional printer columns
- We can specify `additionalPrinterColumns` in the CRD
- This is similar to `-o custom-columns`
(map a column name to a path in the object, e.g. `.spec.taste`)
```yaml
- We can tell Kubernetes which columns to show:
```yaml
additionalPrinterColumns:
- jsonPath: .spec.taste
description: Subjective taste of that kind of coffee bean
name: Taste
- jsonPath: .spec.sauce
name: Sauce
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
```
- jsonPath: .spec.toppings
name: Toppings
type: string
```
- There is an updated CRD in @@LINK[k8s/pizza-4.yaml]
---
## Using additional printer columns
- Let's update our CRD using @@LINK[k8s/coffee-3.yaml]
- Let's update our CRD!
.lab[
- Update the CRD:
```bash
kubectl apply -f ~/container.training/k8s/coffee-3.yaml
kubectl apply -f ~/container.training/k8s/pizza-4.yaml
```
- Look at our Coffee resources:
- Look at our Pizza resources:
```bash
kubectl get coffees
kubectl get pizzas
```
]
@@ -229,50 +208,26 @@ Note: we can update a CRD without having to re-create the corresponding resource
---
## Data validation
## Better data validation
- CRDs are validated with the OpenAPI v3 schema that we specify
- Let's change the data schema so that the sauce can only be `red` or `white`
(with older versions of the API, when the schema was optional,
<br/>
no schema = no validation at all)
- This will be implemented by @@LINK[k8s/pizza-5.yaml]
- Otherwise, we can put anything we want in the `spec`
.lab[
- More advanced validation can also be done with admission webhooks, e.g.:
- Update the Pizza CRD:
```bash
kubectl apply -f ~/container.training/k8s/pizza-5.yaml
```
- consistency between parameters
- advanced integer filters (e.g. odd number of replicas)
- things that can change in one direction but not the other
---
## OpenAPI v3 schema example
This is what we have in @@LINK[k8s/coffee-3.yaml]:
```yaml
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
properties:
taste:
description: Subjective taste of that kind of coffee bean
type: string
required: [ taste ]
```
]
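
Presumably, `k8s/pizza-5.yaml` restricts the sauce with an `enum`, along these lines (a sketch inferred from the slide text, not the actual file):

```yaml
properties:
  sauce:
    type: string
    enum: [ red, white ]
  toppings:
    type: array
    items:
      type: string
```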
---
## Validation *a posteriori*
- Some of the "coffees" that we defined earlier *do not* pass validation
- Some of the pizzas that we defined earlier *do not* pass validation
- How is that possible?
@@ -340,15 +295,23 @@ This is what we have in @@LINK[k8s/coffee-3.yaml]:
---
## What's next?
## Even better data validation
- Generally, when creating a CRD, we also want to run a *controller*
- If we need more complex data validation, we can use a validating webhook
(otherwise nothing will happen when we create resources of that type)
- Use cases:
- The controller will typically *watch* our custom resources
- validating a "version" field for a database engine
(and take action when they are created/updated)
- validating that the number of e.g. coordination nodes is even
- preventing inconsistent or dangerous changes
<br/>
(e.g. major version downgrades)
- checking a key or certificate format or validity
- and much more!
---
@@ -390,6 +353,24 @@ This is what we have in @@LINK[k8s/coffee-3.yaml]:
(unrelated to containers, clusters, etc.)
---
## What's next?
- Creating a basic CRD is relatively straightforward
- But CRDs generally require a *controller* to do anything useful
- The controller will typically *watch* our custom resources
(and take action when they are created/updated)
- Most serious use-cases will also require *validation web hooks*
- When our CRD data format evolves, we'll also need *conversion web hooks*
- Doing all that work manually is tedious; use a framework!
???
:EN:- Custom Resource Definitions (CRDs)

View File

@@ -1,46 +1,62 @@
# Healthchecks
- Containers can have *healthchecks*
- Containers can have *healthchecks* (also called "probes")
- There are three kinds of healthchecks, corresponding to very different use-cases:
- There are three kinds of healthchecks, corresponding to different use-cases:
- liveness = detect when a container is "dead" and needs to be restarted
- readiness = detect when a container is ready to serve traffic
- startup = detect if a container has finished booting
`startupProbe`, `readinessProbe`, `livenessProbe`
- These healthchecks are optional (we can use none, all, or some of them)
- Different probes are available (HTTP request, TCP connection, program execution)
- Different probes are available:
- Let's see the difference and how to use them!
HTTP GET, TCP connection, arbitrary program execution, GRPC
- All these probes have a binary result (success/failure)
- Probes that aren't defined will default to a "success" result
---
## Liveness probe
## Use-cases in brief
*My container takes a long time to boot before being able to serve traffic.*
→ use a `startupProbe` (but often a `readinessProbe` can also do the job)
*Sometimes, my container is unavailable or overloaded, and needs to e.g. be taken temporarily out of load balancer rotation.*
→ use a `readinessProbe`
*Sometimes, my container enters a broken state which can only be fixed by a restart.*
→ use a `livenessProbe`
---
## Liveness probes
*This container is dead, we don't know how to fix it, other than restarting it.*
- Indicates if the container is dead or alive
- Check if the container is dead or alive
- A dead container cannot come back to life
- If Kubernetes determines that the container is dead:
- If the liveness probe fails, the container is killed (destroyed)
- it terminates the container gracefully
(to make really sure that it's really dead; no zombies or undeads!)
- it restarts the container (unless the Pod's `restartPolicy` is `Never`)
- What happens next depends on the pod's `restartPolicy`:
- With the default parameters, it takes:
- `Never`: the container is not restarted
- up to 30 seconds to determine that the container is dead
- `OnFailure` or `Always`: the container is restarted
- up to 30 seconds to terminate it
---
## When to use a liveness probe
- To indicate failures that can't be recovered
- To detect failures that can't be recovered
- deadlocks (causing all requests to time out)
@@ -48,47 +64,45 @@
- Anything where our incident response would be "just restart/reboot it"
---
## Liveness probes gotchas
.warning[**Do not** use liveness probes for problems that can't be fixed by a restart]
- Otherwise we just restart our pods for no reason, creating useless load
---
.warning[**Do not** depend on other services within a liveness probe]
## Readiness probe (1)
- Otherwise we can experience cascading failures
*Make sure that a container is ready before continuing a rolling update.*
(example: a web server liveness probe that makes a request to a database)
- Indicates if the container is ready to handle traffic
.warning[**Make sure** that liveness probes respond quickly]
- When doing a rolling update, the Deployment controller waits for Pods to be ready
- The default probe timeout is 1 second (this can be tuned!)
(a Pod is ready when all the containers in the Pod are ready)
- Improves reliability and safety of rolling updates:
- don't roll out a broken version (that doesn't pass readiness checks)
- don't lose processing capacity during a rolling update
- If the probe takes longer than that, it will eventually cause a restart
---
## Readiness probe (2)
## Readiness probes
*Temporarily remove a container (overloaded or otherwise) from a Service load balancer.*
*Sometimes, my container "needs a break".*
- A container can mark itself "not ready" temporarily
- Check if the container is ready or not
(e.g. if it's overloaded or needs to reload/restart/garbage collect...)
- If the container is not ready, its Pod is not ready
- If a container becomes "unready" it might be ready again soon
- If the Pod belongs to a Service, it is removed from its Endpoints
- If the readiness probe fails:
(it stops receiving new connections but existing ones are not affected)
- the container is *not* killed
- If there is a rolling update in progress, it might pause
- if the pod is a member of a service, it is temporarily removed
(Kubernetes will try to respect the MaxUnavailable parameter)
- it is re-added as soon as the readiness probe passes again
- As soon as the readiness probe succeeds again, everything goes back to normal
---
@@ -102,67 +116,31 @@
- To indicate temporary failure or unavailability
- runtime is busy doing garbage collection or (re)loading data
- application can only service *N* parallel connections
- runtime is busy doing garbage collection or initial data load
- To redirect new connections to other Pods
(e.g. fail the readiness probe when the Pod's load is too high)
- new connections will be directed to other Pods
---
## Dependencies
## Startup probes
- If a web server depends on a database to function, and the database is down:
*My container takes a long time to boot before being able to serve traffic.*
- the web server's liveness probe should succeed
- After creating a container, Kubernetes runs its startup probe
- the web server's readiness probe should fail
- The container will be considered "unhealthy" until the probe succeeds
- Same thing for any hard dependency (without which the container can't work)
- As long as the container is "unhealthy", its Pod...:
.warning[**Do not** fail liveness probes for problems that are external to the container]
- is not added to Services' endpoints
---
- is not considered as "available" for rolling update purposes
## Timing and thresholds
- Readiness and liveness probes are enabled *after* startup probe reports success
- Probes are executed at intervals of `periodSeconds` (default: 10)
- The timeout for a probe is set with `timeoutSeconds` (default: 1)
.warning[If a probe takes longer than that, it is considered as a FAIL]
- A probe is considered successful after `successThreshold` successes (default: 1)
- A probe is considered failing after `failureThreshold` failures (default: 3)
- A probe can have an `initialDelaySeconds` parameter (default: 0)
- Kubernetes will wait that amount of time before running the probe for the first time
(this is important to avoid killing services that take a long time to start)
---
## Startup probe
*The container takes too long to start, and is killed by the liveness probe!*
- By default, probes (including liveness) start immediately
- With the default probe interval and failure threshold:
*a container must respond in less than 30 seconds, or it will be killed!*
- There are two ways to avoid that:
- set `initialDelaySeconds` (a fixed, rigid delay)
- use a `startupProbe`
- Kubernetes will run only the startup probe, and when it succeeds, run the other probes
(if there is no startup probe, readiness and liveness probes are enabled right away)
---
@@ -178,121 +156,296 @@
---
## Startup probes gotchas
- When defining a `startupProbe`, we almost always want to adjust its parameters
(specifically, its `failureThreshold` - this is explained on the next slide)
- Otherwise, if the container fails to start within 30 seconds...
*Kubernetes terminates the container and restarts it!*
- Sometimes, it's easier/simpler to use a `readinessProbe` instead
(except when also using a `livenessProbe`)
---
## Timing and thresholds
- Probes are executed at intervals of `periodSeconds` (default: 10)
- The timeout for a probe is set with `timeoutSeconds` (default: 1)
.warning[If a probe takes longer than that, it is considered a FAIL]
.warning[For liveness probes **and startup probes** this terminates and restarts the container]
- A probe is considered successful after `successThreshold` successes (default: 1)
- A probe is considered failing after `failureThreshold` failures (default: 3)
- All these parameters can be set independently for each probe
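
Putting it all together, a probe with explicit timing parameters might look like this (the values shown are arbitrary, for illustration only):

```yaml
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      timeoutSeconds: 2
      successThreshold: 1
      failureThreshold: 3
```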
---
class: extra-details
## `initialDelaySeconds`
- A probe can have an `initialDelaySeconds` parameter (default: 0)
- Kubernetes will wait that amount of time before running the probe for the first time
- It is generally better to use a `startupProbe` instead
(but this parameter did exist before startup probes were implemented)
---
class: extra-details
## `readinessProbe` vs `startupProbe`
- A lot of blog posts / documentations / tutorials recommend readiness probes...
- ...even in scenarios where a startup probe would seem more appropriate!
- This is because startup probes are relatively recent
(they reached GA status in Kubernetes 1.20)
- When there is no `livenessProbe`, using a `readinessProbe` is simpler:
- a `startupProbe` generally requires changing the `failureThreshold`
- a `startupProbe` generally also requires a `readinessProbe`
- a single `readinessProbe` can fulfill both roles
---
## Different types of probes
- HTTP request
- Kubernetes supports the following mechanisms:
- specify URL of the request (and optional headers)
- `exec` (arbitrary program execution)
- any status code between 200 and 399 indicates success
- `httpGet` (HTTP GET request)
- TCP connection
- `tcpSocket` (check if a TCP port is accepting connections)
- the probe succeeds if the TCP port is open
- `grpc` (standard [GRPC Health Checking Protocol][grpc])
- arbitrary exec
- All probes give binary results ("it works" or "it doesn't")
- a command is executed in the container
- Let's see the specific details for each of them!
- exit status of zero indicates success
[grpc]: https://grpc.github.io/grpc/core/md_doc_health-checking.html
---
## Benefits of using probes
## `exec`
- Rolling updates proceed when containers are *actually ready*
- Runs an arbitrary program *inside* the container
(as opposed to merely started)
(like with `kubectl exec` or `docker exec`)
- Containers in a broken state get killed and restarted
- The program must be available in the container image
(instead of serving errors or timeouts)
- Kubernetes uses the exit status of the program
- Unavailable backends get removed from load balancer rotation
(thus improving response times across the board)
- If a probe is not defined, it's as if there was an "always successful" probe
(standard UNIX convention: 0 = success, anything else = failure)
---
## Example: HTTP probe
## `exec` example
Here is a pod template for the `rng` web service of the DockerCoins app:
When the worker is ready, it should create `/tmp/ready`.
<br/>
The following probe will give it 5 minutes to do so.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: healthy-app
name: queueworker
spec:
containers:
- name: myapp
image: myregistry.io/myapp:v1.0
- name: worker
image: myregistry.../worker:v1.0
startupProbe:
exec:
command:
- test
- -f
- /tmp/ready
failureThreshold: 30
```
---
## Using shell constructs
- If we want to use pipes, conditionals, etc. we should invoke a shell
- Example:
```yaml
exec:
command:
- sh
- -c
- "curl http://localhost:5000/status | jq .ready | grep true"
```
---
## `httpGet`
- Make an HTTP GET request to the container
- The request will be made by Kubelet
(doesn't require extra binaries in the container image)
- `port` must be specified
- `path` and extra `httpHeaders` can be specified optionally
- Kubernetes uses HTTP status code of the response:
- 200-399 = success
- anything else = failure
---
## `httpGet` example
The following liveness probe restarts the container if it stops responding on `/healthz`:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: frontend
spec:
containers:
- name: frontend
image: myregistry.../frontend:v1.0
livenessProbe:
httpGet:
path: /health
port: 80
periodSeconds: 5
path: /healthz
```
If the container serves an error, or takes longer than 1s, 3 times in a row, it gets killed.
---
## `tcpSocket`
- Kubernetes checks if the indicated TCP port accepts connections
- There is no additional check
.warning[It's quite possible for a process to be broken, but still accept TCP connections!]
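
A minimal example (the port number is arbitrary):

```yaml
    readinessProbe:
      tcpSocket:
        port: 5432
```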
---
## Example: exec probe
## `grpc`
Here is a pod template for a Redis server:
<!-- ##VERSION## -->
```yaml
apiVersion: v1
kind: Pod
metadata:
name: redis-with-liveness
spec:
containers:
- name: redis
image: redis
livenessProbe:
exec:
command: ["redis-cli", "ping"]
```
- Available in beta since Kubernetes 1.24
If the Redis process becomes unresponsive, it will be killed.
- Leverages standard [GRPC Health Checking Protocol][grpc]
[grpc]: https://grpc.github.io/grpc/core/md_doc_health-checking.html
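
A minimal sketch (assuming the container serves the standard health checking service on port 9000):

```yaml
    livenessProbe:
      grpc:
        port: 9000
```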
---
## Questions to ask before adding healthchecks
## Best practices for healthchecks
- Do we want liveness, readiness, both?
- Readiness probes are almost always beneficial
(sometimes, we can use the same check, but with different failure thresholds)
- don't hesitate to add them early!
- Do we have existing HTTP endpoints that we can use?
- we can even make them *mandatory*
- Do we need to add new endpoints, or perhaps use something else?
- Be more careful with liveness and startup probes
- Are our healthchecks likely to use resources and/or slow down the app?
- they aren't always necessary
- Do they depend on additional services?
(this can be particularly tricky, see next slide)
- they can even cause harm
---
## Healthchecks and dependencies
## Readiness probes
- Liveness checks should not be influenced by the state of external services
- Almost always beneficial
- All checks should reply quickly (by default, less than 1 second)
- Exceptions:
- Otherwise, they are considered to fail
- web service that doesn't have a dedicated "health" or "ping" route
- This might require checking the health of dependencies asynchronously
- ...and all requests are "expensive" (e.g. lots of external calls)
(e.g. if a database or API might be healthy but still take more than
1 second to reply, we should check the status asynchronously and report
a cached status)
---
## Liveness probes
- If we're not careful, we end up restarting containers for no reason
(which can cause additional load on the cluster, cascading failures, data loss, etc.)
- Suggestion:
- don't add liveness probes immediately
- wait until you have a bit of production experience with that code
- then add narrow-scoped healthchecks to detect specific failure modes
- Readiness and liveness probes should be different
(different check *or* different timeouts *or* different thresholds)
---
## Startup probes
- Only beneficial for containers that need a long time to start
(more than 30 seconds)
- If there is no liveness probe, it's simpler to just use a readiness probe
(since we probably want to have a readiness probe anyway)
- In other words, startup probes are useful in one situation:
*we have a liveness probe, AND the container needs a lot of time to start*
- Don't forget to change the `failureThreshold`
(otherwise the container will fail to start and be killed)
---
## Recap of the gotchas
- The default timeout is 1 second
- if a probe takes longer than 1 second to reply, Kubernetes considers that it fails
- this can be changed by setting the `timeoutSeconds` parameter
<br/>(or refactoring the probe)
- Liveness probes should not be influenced by the state of external services
- Liveness probes and readiness probes should have different parameters
- For startup probes, remember to increase the `failureThreshold`
---
@@ -300,21 +453,21 @@ If the Redis process becomes unresponsive, it will be killed.
(In that context, worker = process that doesn't accept connections)
- Readiness is useful mostly for rolling updates
- A relatively easy solution is to use files
(because workers aren't backends for a service)
- For a startup or readiness probe:
- Liveness may help us restart a broken worker, but how can we check it?
- worker creates `/tmp/ready` when it's ready
- probe checks the existence of `/tmp/ready`
- Embedding an HTTP server is a (potentially expensive) option
- For a liveness probe:
- Using a "lease" file can be relatively easy:
- worker touches `/tmp/alive` regularly
<br/>(e.g. just before starting to work on a job)
- probe checks that the timestamp on `/tmp/alive` is recent
- if the timestamp is old, it means that the worker is stuck
- touch a file during each iteration of the main loop
- check the timestamp of that file from an exec probe
- Writing logs (and checking them from the probe) also works
- Sometimes it can also make sense to embed a web server in the worker
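
A rough sketch of that lease-file liveness probe (assuming a shell and a `stat` that supports `-c %Y`, such as GNU coreutils or busybox):

```yaml
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        # fail if /tmp/alive was last touched more than 60 seconds ago
        - test $(( $(date +%s) - $(stat -c %Y /tmp/alive) )) -lt 60
      periodSeconds: 30
```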
???

View File

@@ -317,6 +317,22 @@ class: extra-details
class: extra-details
## Determining if we're in a subchart
- `.Chart.IsRoot` indicates if we're in the top-level chart or in a sub-chart
- Useful in charts that are designed to be used standalone or as dependencies
- Example: generic chart
- when used standalone (`.Chart.IsRoot` is `true`), use `.Release.Name`
- when used as a subchart e.g. with multiple aliases, use `.Chart.Name`
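
A possible sketch of that logic in a template (not taken from an actual chart):

```yaml
{{- if .Chart.IsRoot }}
name: {{ .Release.Name }}
{{- else }}
name: {{ .Chart.Name }}
{{- end }}
```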
---
class: extra-details
## Compatibility with Helm 2
- Chart `apiVersion: v1` is the only version supported by Helm 2

View File

@@ -167,7 +167,7 @@ Let's try one more round of decoding!
--
... OK, that was *a lot* of binary data. What sould we do with it?
... OK, that was *a lot* of binary data. What should we do with it?
---

View File

@@ -0,0 +1,148 @@
## Ingress and canary releases
- Let's see how to implement *canary releases*
- The example here will use Traefik v1
(which is obsolete)
- It won't work on your Kubernetes cluster!
(unless you're running an oooooold version of Kubernetes)
(and an equally oooooooold version of Traefik)
- We've left it here just as an example!
---
## Canary releases
- A *canary release* (or canary launch or canary deployment) is a release that will process only a small fraction of the workload
- After deploying the canary, we compare its metrics to the normal release
- If the metrics look good, the canary will progressively receive more traffic
(until it gets 100% and becomes the new normal release)
- If the metrics aren't good, the canary is automatically removed
- When we deploy a bad release, only a tiny fraction of traffic is affected
---
## Various ways to implement canary
- Example 1: canary for a microservice
- 1% of all requests (sampled randomly) are sent to the canary
- the remaining 99% are sent to the normal release
- Example 2: canary for a web app
- 1% of users are sent to the canary web site
- the remaining 99% are sent to the normal release
- Example 3: canary for shipping physical goods
- 1% of orders are shipped with the canary process
- the remaining 99% are shipped with the normal process
- We're going to implement example 1 (per-request routing)
---
## Canary releases with Traefik v1
- We need to deploy the canary and expose it with a separate service
- Then, in the Ingress resource, we need:
- multiple `paths` entries (one for each service, canary and normal)
- an extra annotation indicating the weight of each service
- If we want, we can send requests to more than 2 services
---
## The Ingress resource
.small[
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: rgb
annotations:
traefik.ingress.kubernetes.io/service-weights: |
red: 50%
green: 25%
blue: 25%
spec:
rules:
- host: rgb.`A.B.C.D`.nip.io
http:
paths:
- path: /
backend:
serviceName: red
servicePort: 80
- path: /
backend:
serviceName: green
servicePort: 80
- path: /
backend:
serviceName: blue
servicePort: 80
```
]
---
class: extra-details
## Other ingress controllers
*Just to illustrate how different things are ...*
- With the NGINX ingress controller:
- define two ingress resources
<br/>
(specifying rules with the same host+path)
- add `nginx.ingress.kubernetes.io/canary` annotations on each
- With Linkerd2:
- define two services
- define an extra service for the weighted aggregate of the two
- define a TrafficSplit (this is a CRD introduced by the SMI spec)
---
class: extra-details
## We need more than that
What we saw is just one of the multiple building blocks that we need to achieve a canary release.
We also need:
- metrics (latency, performance ...) for our releases
- automation to alter canary weights
(increase canary weight if metrics look good; decrease otherwise)
- a mechanism to manage the lifecycle of the canary releases
(create them, promote them, delete them ...)
For inspiration, check [flagger by Weave](https://github.com/weaveworks/flagger).

View File

@@ -1,34 +1,36 @@
# Exposing HTTP services with Ingress resources
- HTTP services are typically exposed on port 80
- Service = layer 4 (TCP, UDP, SCTP)
(and 443 for HTTPS)
- works with every TCP/UDP/SCTP protocol
- `NodePort` services are great, but they are *not* on port 80
- doesn't "see" or interpret HTTP
(by default, they use port range 30000-32767)
- Ingress = layer 7 (HTTP)
- How can we get *many* HTTP services on port 80? 🤔
- only for HTTP
- can route requests depending on URI or host header
- can handle TLS
---
## Various ways to expose something on port 80
## Why should we use Ingress resources?
- Service with `type: LoadBalancer`
A few use-cases:
*costs a little bit of money; not always available*
- URI routing (e.g. for single page apps)
- Service with one (or multiple) `ExternalIP`
`/api` → service `api:5000`
*requires public nodes; limited by number of nodes*
everything else → service `static:80`
- Service with `hostPort` or `hostNetwork`
- Cost optimization
*same limitations as `ExternalIP`; even harder to manage*
(because individual `LoadBalancer` services typically cost money)
- Ingress resources
*addresses all these limitations, yay!*
- Automatic handling of TLS certificates
---
@@ -181,20 +183,70 @@ class: extra-details
---
## Deploying pods listening on port 80
## Accepting connections on port 80 (and 443)
- We want our ingress load balancer to be available on port 80
- Web site users don't want to specify port numbers
- The best way to do that would be with a `LoadBalancer` service
(e.g. "connect to https://blahblah.whatever:31550")
... but it requires support from the underlying infrastructure
- Our ingress controller needs to actually be exposed on port 80
- Instead, we are going to use the `hostNetwork` mode on the Traefik pods
(and 443 if we want to handle HTTPS)
- Let's see what this `hostNetwork` mode is about ...
- Let's see how we can achieve that!
---
## Various ways to expose something on port 80
- Service with `type: LoadBalancer`
*costs a little bit of money; not always available*
- Service with one (or multiple) `ExternalIP`
*requires public nodes; limited by number of nodes*
- Service with `hostPort` or `hostNetwork`
*same limitations as `ExternalIP`; even harder to manage*
---
## Deploying pods listening on port 80
- We are going to run Traefik in Pods with `hostNetwork: true`
(so that our load balancer can use the "real" port 80 of our nodes)
- Traefik Pods will be created by a DaemonSet
(so that we get one instance of Traefik on every node of the cluster)
- This means that we will be able to connect to any node of the cluster on port 80
.warning[This is not typical of a production setup!]
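
Schematically, the relevant part of such a DaemonSet would look like this (a simplified sketch; a real manifest also needs RBAC, arguments, etc.):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      hostNetwork: true
      containers:
      - name: traefik
        image: traefik  # pick an appropriate tag
        ports:
        - containerPort: 80
```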
---
## Doing it in production
- When running "on cloud", the easiest option is a `LoadBalancer` service
- When running "on prem", it depends:
- [MetalLB] is a good option if a pool of public IP addresses is available
- otherwise, using `externalIPs` on a few nodes (2-3 for redundancy)
- Many variations/optimizations are possible depending on our exact scenario!
[MetalLB]: https://metallb.org/
---
class: extra-details
## Without `hostNetwork`
- Normally, each pod gets its own *network namespace*
@@ -211,6 +263,8 @@ class: extra-details
---
class: extra-details
## With `hostNetwork: true`
- No network namespace gets created
@@ -229,26 +283,6 @@ class: extra-details
---
class: extra-details
## Other techniques to expose port 80
- We could use pods specifying `hostPort: 80`
... but with most CNI plugins, this [doesn't work or requires additional setup](https://github.com/kubernetes/kubernetes/issues/23920)
- We could use a `NodePort` service
... but that requires [changing the `--service-node-port-range` flag in the API server](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/)
- We could create a service with an external IP
... this would work, but would require a few extra steps
(figuring out the IP address and adding it to the service)
---
## Running Traefik
- The [Traefik documentation][traefikdoc] recommends using a Helm chart
@@ -270,6 +304,8 @@ class: extra-details
---
class: extra-details
## Taints and tolerations
- A *taint* is an attribute added to a node
@@ -496,10 +532,6 @@ This is normal: we haven't provided any ingress rule yet.
## Creating ingress resources
- Before Kubernetes 1.19, we must use YAML manifests
(see example on next slide)
- Since Kubernetes 1.19, we can use `kubectl create ingress`
```bash
@@ -534,7 +566,21 @@ This is normal: we haven't provided any ingress rule yet.
---
## Ingress resources in YAML
## Before Kubernetes 1.19
- Before Kubernetes 1.19:
- `kubectl create ingress` wasn't available
- `apiVersion: networking.k8s.io/v1` wasn't supported
- It was necessary to use YAML, and `apiVersion: networking.k8s.io/v1beta1`
(see example on next slide)
---
## YAML for old ingress resources
Here is a minimal host-based ingress resource:
@@ -555,23 +601,15 @@ spec:
```
(It is in `k8s/ingress.yaml`.)
---
class: extra-details
## Ingress API version
- The YAML on the previous slide uses `apiVersion: networking.k8s.io/v1beta1`
## YAML for new ingress resources
- Starting with Kubernetes 1.19, `networking.k8s.io/v1` is available
- However, with Kubernetes 1.19 (and later), we can use `kubectl create ingress`
- And we can use `kubectl create ingress` 🎉
- We chose to keep an "old" (deprecated!) YAML example for folks still using older versions of Kubernetes
- If we want to see "modern" YAML, we can use `-o yaml --dry-run=client`:
- We can see "modern" YAML with `-o yaml --dry-run=client`:
```bash
kubectl create ingress red -o yaml --dry-run=client \
@@ -641,157 +679,6 @@ class: extra-details
- It is still in alpha stage
---
## Vendor-specific example
- Let's see how to implement *canary releases*
- The example here will use Traefik v1
(which is obsolete)
- It won't work on your Kubernetes cluster!
(unless you're running an oooooold version of Kubernetes)
(and an equally oooooooold version of Traefik)
- We've left it here just as an example!
---
## Canary releases
- A *canary release* (or canary launch or canary deployment) is a release that will process only a small fraction of the workload
- After deploying the canary, we compare its metrics to the normal release
- If the metrics look good, the canary will progressively receive more traffic
(until it gets 100% and becomes the new normal release)
- If the metrics aren't good, the canary is automatically removed
- When we deploy a bad release, only a tiny fraction of traffic is affected
---
## Various ways to implement canary
- Example 1: canary for a microservice
- 1% of all requests (sampled randomly) are sent to the canary
- the remaining 99% are sent to the normal release
- Example 2: canary for a web app
- 1% of users are sent to the canary web site
- the remaining 99% are sent to the normal release
- Example 3: canary for shipping physical goods
- 1% of orders are shipped with the canary process
- the remaining 99% are shipped with the normal process
- We're going to implement example 1 (per-request routing)
---
## Canary releases with Traefik v1
- We need to deploy the canary and expose it with a separate service
- Then, in the Ingress resource, we need:
- multiple `paths` entries (one for each service, canary and normal)
- an extra annotation indicating the weight of each service
- If we want, we can send requests to more than 2 services
---
## The Ingress resource
.small[
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: rgb
annotations:
traefik.ingress.kubernetes.io/service-weights: |
red: 50%
green: 25%
blue: 25%
spec:
rules:
- host: rgb.`A.B.C.D`.nip.io
http:
paths:
- path: /
backend:
serviceName: red
servicePort: 80
- path: /
backend:
serviceName: green
servicePort: 80
- path: /
backend:
serviceName: blue
servicePort: 80
```
]
---
class: extra-details
## Other ingress controllers
*Just to illustrate how different things are ...*
- With the NGINX ingress controller:
- define two ingress resources
<br/>
(specifying rules with the same host+path)
- add `nginx.ingress.kubernetes.io/canary` annotations on each
- With Linkerd2:
- define two services
- define an extra service for the weighted aggregate of the two
- define a TrafficSplit (this is a CRD introduced by the SMI spec)
---
class: extra-details
## We need more than that
What we saw is just one of the multiple building blocks that we need to achieve a canary release.
We also need:
- metrics (latency, performance ...) for our releases
- automation to alter canary weights
(increase canary weight if metrics look good; decrease otherwise)
- a mechanism to manage the lifecycle of the canary releases
(create them, promote them, delete them ...)
For inspiration, check [flagger by Weave](https://github.com/weaveworks/flagger).
???
:EN:- The Ingress resource

View File

@@ -142,7 +142,7 @@ configMapGenerator:
- overlays can only *add* resources, not *remove* them
- See the full list of [eschewed features](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/eschewedFeatures.md) for more details
- See the full list of [eschewed features](https://kubectl.docs.kubernetes.io/faq/kustomize/eschewedfeatures/) for more details
---

View File

@@ -554,6 +554,28 @@ Note: the `apiVersion` field appears to be optional.
---
class: extra-details
## Managing `ownerReferences`
- By default, the generated object and triggering object have independent lifecycles
(deleting the triggering object doesn't affect the generated object)
- It is possible to associate the generated object with the triggering object
(so that deleting the triggering object also deletes the generated object)
- This is done by adding the triggering object information to `ownerReferences`
(in the generated object `metadata`)
- See [Linking resources with ownerReferences][ownerref] for an example
[ownerref]: https://kyverno.io/docs/writing-policies/generate/#linking-resources-with-ownerreferences
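
Roughly, the `data` section of the `generate` rule would carry something like this (a sketch adapted from the linked documentation, assuming the triggering object is a Namespace):

```yaml
      data:
        metadata:
          ownerReferences:
          - apiVersion: v1
            kind: Namespace
            name: "{{request.object.metadata.name}}"
            uid: "{{request.object.metadata.uid}}"
```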
---
## Asynchronous creation
- Kyverno creates resources asynchronously

View File

@@ -14,32 +14,20 @@
- CPU is a *compressible resource*
(it can be preempted immediately without adverse effect)
- it can be preempted immediately without adverse effect
- if we have N CPU and need 2N, we run at 50% speed
- Memory is an *incompressible resource*
(it needs to be swapped out to be reclaimed; and this is costly)
- it needs to be swapped out to be reclaimed; and this is costly
- if we have N GB RAM and need 2N, we might run at... 0.1% speed!
- As a result, exceeding limits will have different consequences for CPU and memory
---
## Exceeding CPU limits
- CPU can be reclaimed instantaneously
(in fact, it is preempted hundreds of times per second, at each context switch)
- If a container uses too much CPU, it can be throttled
(it will be scheduled less often)
- The processes in that container will run slower
(or rather: they will not run faster)
---
class: extra-details
## CPU limits implementation details
@@ -146,39 +134,59 @@ For more details, check [this blog post](https://erickhun.com/posts/kubernetes-f
---
## Exceeding memory limits
## Running low on memory
- Memory needs to be swapped out before being reclaimed
- When the system runs low on memory, it starts to reclaim used memory
- "Swapping" means writing memory pages to disk, which is very slow
(this is known as "memory pressure")
- On a classic system, a process that swaps can get 1000x slower
- Option 1: free up some buffers and caches
(because disk I/O is 1000x slower than memory I/O)
(fastest option; might affect performance if cache memory runs very low)
- Exceeding the memory limit (even by a small amount) can reduce performance *a lot*
- Option 2: swap, i.e. write to disk some memory of one process to give it to another
- Kubernetes *does not support swap* (more on that later!)
(can have a huge negative impact on performance because disks are slow)
- Exceeding the memory limit will cause the container to be killed
- Option 3: terminate a process and reclaim all its memory
(OOM or Out Of Memory Killer on Linux)
---
## Limits vs requests
## Memory limits on Kubernetes
- Limits are "hard limits" (they can't be exceeded)
- Kubernetes *does not support swap*
(but it may support it in the future, thanks to [KEP 2400])
- If a container exceeds its memory *limit*, it gets killed immediately
- If a node is overcommitted and under memory pressure, it will terminate some pods
(see next slide for some details about what "overcommit" means here!)
[KEP 2400]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md#implementation-history
---
## Overcommitting resources
- *Limits* are "hard limits" (a container *cannot* exceed its limits)
- a container exceeding its memory limit is killed
- a container exceeding its CPU limit is throttled
- Requests are used for scheduling purposes
- On a given node, the sum of pod *limits* can be higher than the node size
- a container using *less* than what it requested will never be killed or throttled
- *Requests* are used for scheduling purposes
- the scheduler uses the requested sizes to determine placement
- a container can use more than its requested CPU or RAM amounts
- the resources requested by all pods on a node will never exceed the node size
- a container using *less* than what it requested should never be killed or throttled
- On a given node, the sum of pod *requests* cannot be higher than the node size
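
As a reminder, this is how requests and limits are expressed in a container spec (arbitrary values):

```yaml
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
```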
---
@@ -222,9 +230,31 @@ Each pod is assigned a QoS class (visible in `status.qosClass`).
---
## Where is my swap?
class: extra-details
- The semantics of memory and swap limits on Linux cgroups are complex
## CPU and RAM reservation
- Kubernetes passes resources requests and limits to the container engine
- The container engine applies these requests and limits with specific mechanisms
- Example: on Linux, this is typically done with control groups aka cgroups
- Most systems use cgroups v1, but cgroups v2 are slowly being rolled out
(e.g. available in Ubuntu 22.04 LTS)
- Cgroups v2 have new, interesting features for memory control:
- ability to set "minimum" memory amounts (to effectively reserve memory)
- better control on the amount of swap used by a container
---
class: extra-details
## What's the deal with swap?
- With cgroups v1, it's not possible to disable swap for a cgroup
@@ -238,6 +268,8 @@ Each pod is assigned a QoS class (visible in `status.qosClass`).
- The simplest solution was to disable swap entirely
- Kubelet will refuse to start if it detects that swap is enabled!
---
## Alternative point of view
@@ -268,7 +300,7 @@ Each pod is assigned a QoS class (visible in `status.qosClass`).
- You will need to add the flag `--fail-swap-on=false` to kubelet
(otherwise, it won't start!)
(remember: it won't otherwise start if it detects that swap is enabled)
---
@@ -666,6 +698,18 @@ class: extra-details
---
## Underutilization
- Remember: when assigning a pod to a node, the scheduler looks at *requests*
(not at current utilization on the node)
- If pods request resources but don't use them, this can lead to underutilization
(because the scheduler will consider that the node is full and can't fit new pods)
---
## Viewing a namespace limits and quotas
- `kubectl describe namespace` will display resource limits and quotas
@@ -701,6 +745,10 @@ class: extra-details
- generates web reports on resource usage
- [nsinjector](https://github.com/blakelead/nsinjector)
- controller to automatically populate a Namespace when it is created
???
:EN:- Setting compute resource limits

View File

@@ -1,14 +1,18 @@
# Rolling updates
- By default (without rolling updates), when a scaled resource is updated:
- How should we update a running application?
- new pods are created
- Strategy 1: delete old version, then deploy new version
- old pods are terminated
(not great, because it obviously causes downtime!)
- ... all at the same time
- Strategy 2: deploy new version, then delete old version
- if something goes wrong, ¯\\\_(ツ)\_/¯
(uses a lot of resources; also how do we shift traffic?)
- Strategy 3: replace running pods one at a time
(sounds interesting; and good news, Kubernetes does it for us!)
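
For reference, this is controlled by the Deployment's update strategy (the values below are the defaults):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
```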
---

View File

@@ -20,13 +20,17 @@
## Docker Desktop
- Available on Mac and Windows
- Available on Linux, Mac, and Windows
- Free for personal use and small businesses
(fewer than 250 employees and less than $10 million in annual revenue)
- Gives you one cluster with one node
- Very easy to use if you are already using Docker Desktop:
- Streamlined installation and user experience
go to Docker Desktop preferences and enable Kubernetes
- Great integration with various network stacks and e.g. corporate VPNs
- Ideal for Docker users who need good integration between both platforms
@@ -40,13 +44,11 @@
- Runs Kubernetes nodes in Docker containers
- Can deploy multiple clusters, with multiple nodes, and multiple master nodes
- Can deploy multiple clusters, with multiple nodes
- As of June 2020, two versions co-exist: stable (1.7) and beta (3.0)
- Runs the control plane on Kubernetes nodes
- They have different syntax and options, this can be confusing
(but don't let that stop you!)
- Control plane can also run on multiple nodes
---
@@ -84,7 +86,7 @@
- More advanced scenarios require writing a short [config file](https://kind.sigs.k8s.io/docs/user/quick-start#configuring-your-kind-cluster)
(to define multiple nodes, multiple master nodes, set Kubernetes versions ...)
(to define multiple nodes, multiple control plane nodes, set Kubernetes versions ...)
- Can deploy multiple clusters
@@ -124,7 +126,9 @@
## [Rancher Desktop](https://rancherdesktop.io/)
- Available on Mac and Windows
- Available on Linux, Mac, and Windows
- Free and open-source
- Runs a single cluster with a single node
@@ -134,7 +138,7 @@
- Emphasis on ease of use (like Docker Desktop)
- Very young product (first release in May 2021)
- Relatively young product (first release in May 2021)
- Based on k3s and other proven components

View File

@@ -62,6 +62,7 @@ content:
- #6
- k8s/ingress-tls.md
- k8s/ingress-advanced.md
#- k8s/ingress-canary.md
- k8s/cert-manager.md
- k8s/cainjector.md
- k8s/eck.md

View File

@@ -82,6 +82,7 @@ content:
-
- k8s/ingress.md
- k8s/ingress-advanced.md
#- k8s/ingress-canary.md
- k8s/ingress-tls.md
- k8s/cert-manager.md
- k8s/cainjector.md

View File

@@ -80,6 +80,7 @@ content:
- k8s/namespaces.md
- k8s/ingress.md
#- k8s/ingress-advanced.md
#- k8s/ingress-canary.md
#- k8s/ingress-tls.md
- k8s/kustomize.md
- k8s/helm-intro.md

View File

@@ -1,37 +1,59 @@
## Introductions
- Hello!
- Hi! I'm Jérôme Petazzoni ([@jpetazzo])
- On stage: Jérôme ([@jpetazzo])
- Work:
- Backstage: Alexandre, Amy, Antoine, Aurélien (x2), Benji, David, Julien, Kostas, Nicolas, Thibault
- 7 years at Docker (2011-2018)
- The training will run from 9:30 to 13:00
- sysadmin → dev → SRE manager → evangelist → trainer → ?
- There will be a break at (approximately) 11:00
- container hipster (ran containers in production before it was cool!)
- You ~~should~~ must ask questions! Lots of questions!
- Non-work:
- Use @@CHAT@@ to ask questions, get help, etc.
- [music], [reading SF], video games
[@alexbuisine]: https://twitter.com/alexbuisine
[EphemeraSearch]: https://ephemerasearch.com/
[@jpetazzo]: https://twitter.com/jpetazzo
[@s0ulshake]: https://twitter.com/s0ulshake
[Quantgene]: https://www.quantgene.com/
[music]: https://github.com/jpetazzo/griode
[reading SF]: https://gist.github.com/jpetazzo/046b8d32218e57d0c081b97aa85c3bb3
---
## Exercises
## Schedule
- At the end of each day, there is a series of exercises
- Monday: Docker
- To make the most out of the training, please try the exercises!
9:00-12:00 + 14:00-18:00
(it will help to practice and memorize the content of the day)
- Tuesday-Wednesday-Thursday: Kubernetes
- We recommend to take at least one hour to work on the exercises
9:30-12:00 + 14:00-17:00
(if you understood the content of the day, it will be much faster)
- Friday: Kubernetes Packaging (Kustomize, Helm...)
- Each day will start with a quick review of the exercises of the previous day
9:30-12:00 + 14:00-17:30
- And breaks! (Lots of breaks!)
---
## Q&A
- Two ways to ask questions:
- Slack
- Google Meet
- Don't hesitate to ask questions at any time!
---
## La matinale
- Every day, I'll be online 15 minutes earlier
(so, 9:15 every day)
- We can use that as extra Q&A or to cover any topic!

View File

@@ -1 +1 @@
3.7
3.8

View File

@@ -167,5 +167,6 @@ It has been preinstalled on your workshop nodes.*
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b arrows → navigate to other windows
- Ctrl-b , → rename window
- Ctrl-b d → detach session
- tmux attach → re-attach to session

45
slides/wemanity.html Normal file
View File

@@ -0,0 +1,45 @@
<?xml version="1.0"?>
<html>
<head>
<style>
td {
background: #ccc;
padding: 1em;
}
</style>
</head>
<body>
<table>
<tr>
<td>Lundi 10 octobre 2022</td>
<td>
<a href="1.yml.html">Docker</a>
</td>
</tr>
<tr>
<td>Mardi 11 octobre 2022</td>
<td>
<a href="2.yml.html">Kubernetes</a>
</td>
</tr>
<tr>
<td>Mercredi 12 octobre 2022</td>
<td>
<a href="2.yml.html">Kubernetes</a>
</td>
</tr>
<tr>
<td>Jeudi 13 octobre 2022</td>
<td>
<a href="2.yml.html">Kubernetes</a>
</td>
</tr>
<tr>
<td>Vendredi 14 octobre 2022</td>
<td>
<a href="2.yml.html">Packaging K8S</a>
</td>
</tr>
</table>
</body>
</html>