Compare commits

..

48 Commits

Author SHA1 Message Date
Jérôme Petazzoni
18c081e395 🚀 Prepare NASA JPL content (2023 version) 2023-02-23 10:07:19 +01:00
Jérôme Petazzoni
29b3185e7e 🐘 Add link to Mastodon profile 2023-02-23 10:06:38 +01:00
Jérôme Petazzoni
0616d74e37 Add gentle intro to YAML 2023-02-22 20:56:46 +01:00
Jérôme Petazzoni
676ebcdd3f ♻️ Replace jpetazzo/httpenv with jpetazzo/color 2023-02-20 14:22:02 +01:00
Jérôme Petazzoni
28f0253242 Add kubectl np-viewer in network policy section 2023-02-20 10:37:53 +01:00
Jérôme Petazzoni
73125b5ffb 🛠️ k9s fixed the file name in their releases 🎉 2023-02-18 15:20:44 +01:00
Jérôme Petazzoni
a90c521b77 🪓 Split tmux instructions across two slides 2023-02-12 18:03:41 +01:00
Jérôme Petazzoni
bd141ddfc5 💡 Add Ctrl-B Ctrl-O tmux shortcut to cheatsheet
Super convenient if you have something on top and would like it to
be on the bottom (and vice versa), or to switch the left and right panes.

Usually not needed during normal use of tmux, but very handy when
streaming, e.g. when a camera view obscures part of the top pane
(or the left/right side) and you want to switch the pane arrangement.
2023-02-12 17:40:00 +01:00
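A quick illustration of the shortcut described above (a sketch: `rotate-window` is the standard tmux command behind the default `prefix + Ctrl-o` binding):

```bash
# Interactively: press Ctrl-B, then Ctrl-O (prefix + C-o).
# Scripted equivalent, e.g. from a shell inside the session:
tmux rotate-window       # rotate panes forward (top <-> bottom, left <-> right)
tmux rotate-window -D    # rotate in the opposite direction
```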
Jérôme Petazzoni
634d101efc Update HPA v2 apiVersion 2023-02-12 15:39:55 +01:00
Jérôme Petazzoni
20347a1417 ♻️ Add script to clean up Linode PVC volumes 2023-02-12 15:38:58 +01:00
Jérôme Petazzoni
893be3b18f 🖼️ Add picture of a canary cage to illustrate canary deployments 2023-02-12 13:56:36 +01:00
Bret Fisher
dd6a1adc63 Apply suggestions from code review
Co-authored-by: Tianon Gravi <admwiggin@gmail.com>
2023-02-07 23:43:40 +01:00
Bret Fisher
4dc60d3250 Check for missing docker dir 2023-02-07 23:43:40 +01:00
Jérôme Petazzoni
1aa0e062d0 ♻️ Add script to clean up Linode nodebalancers 2023-02-04 10:49:04 +01:00
Torounia
cfbe578d4f Helm intro: set value on juice-shop chart 2023-02-03 17:59:54 +01:00
Jérôme Petazzoni
1d692898da ♻️ Bump up versions and improve reliability of wait-for-nodes 2023-01-23 16:08:24 +01:00
Jérôme Petazzoni
9526a94b77 🐚 Improve Terraform-based deployment script
Each time we call that script, we must set a few env vars
beforehand. Let's make these vars optional parameters to
the script instead.

Also add helper scripts to list the locations (zones or
regions) available to each provider.
2023-01-23 16:07:28 +01:00
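As a rough before/after sketch of the interface change this commit describes (values are illustrative; the parameter order matches the updated `run.sh` shown further down in this diff):

```bash
# Before: behavior was controlled through environment variables
export TF_VAR_how_many_clusters=3
export TF_VAR_min_nodes_per_pool=2
export TF_VAR_max_nodes_per_pool=4
./run.sh linode

# After: the same values are optional positional parameters
./run.sh linode eu-central 3 2 4
```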
Jérôme Petazzoni
e6eb157cc6 🪓 Split "kubectl expose" and "service types" 2023-01-13 17:50:22 +01:00
Jérôme Petazzoni
b984049603 📃 Reorganize the deck intro a bit 2023-01-13 16:04:39 +01:00
Jérôme Petazzoni
c200c8e1da ♻️ Refactor script to count slides
For automatic transcription and chaptering, we'll need to know
exactly at which slide each section starts. We already had the
count-slides.py script to count how many slides each section had,
and the number of slides per part. The new script does the same,
but also accurately reports the first slide
of each section.
2023-01-06 23:11:43 +01:00
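A hypothetical invocation of the refactored script (the output columns come from the diff shown later in this compare view; the deck file name is an assumption):

```bash
# Prints one tab-separated line per section: first slide index, slide count, title.
./count-slides.py kube.yml.html
```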
Jérôme Petazzoni
4c30e7db14 ✂️ Remove containerd 1.5 pinning
Kubernetes 1.26 requires CRI v1, which means containerd 1.6.
2023-01-03 09:10:01 +01:00
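To check that a node satisfies this requirement, something like the following works (a sketch; assumes `crictl` is installed on the node):

```bash
containerd --version    # should report 1.6.x or later
crictl version          # RuntimeApiVersion should be v1 (not v1alpha2)
```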
Marco Verleun
9d5a083473 Update Container_Networking_Basics.md 2022-12-12 13:43:01 +01:00
Jérôme Petazzoni
a2be63e4c4 📃 Improve Ingress exercises 2022-12-08 17:28:53 -08:00
Jérôme Petazzoni
584dddd823 🔗 Fix link to create token 2022-12-08 05:53:12 -08:00
Jérôme Petazzoni
3e9307d420 🔑 Update dashboard YAML; add persisting token for the dashboard account 2022-12-08 05:52:41 -08:00
Jérôme Petazzoni
5d3881b7e1 Add CoLiMa and fix microk8s/minikube ordering 2022-12-08 05:44:48 -08:00
Bret Fisher
d57ba24f6f Updating stern link 2022-12-05 21:10:52 -08:00
Jérôme Petazzoni
f046a32567 🐋 Update info about Docker+K8S 2022-12-05 15:29:52 -08:00
Jérôme Petazzoni
c2a169167d ☁️ Add terraform configuration for Azure 2022-12-05 15:29:52 -08:00
dependabot[bot]
961cf34b6f Bump socket.io-parser from 4.0.4 to 4.0.5 in /slides/autopilot
Bumps [socket.io-parser](https://github.com/socketio/socket.io-parser) from 4.0.4 to 4.0.5.
- [Release notes](https://github.com/socketio/socket.io-parser/releases)
- [Changelog](https://github.com/socketio/socket.io-parser/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io-parser/compare/4.0.4...4.0.5)

---
updated-dependencies:
- dependency-name: socket.io-parser
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-22 16:16:26 -08:00
dependabot[bot]
b23cae8f5b Bump engine.io from 6.2.0 to 6.2.1 in /slides/autopilot
Bumps [engine.io](https://github.com/socketio/engine.io) from 6.2.0 to 6.2.1.
- [Release notes](https://github.com/socketio/engine.io/releases)
- [Changelog](https://github.com/socketio/engine.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/engine.io/compare/6.2.0...6.2.1)

---
updated-dependencies:
- dependency-name: engine.io
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-22 16:11:57 -08:00
Jérôme Petazzoni
a09c4ec4f5 Improve netlify-dns script to suggest what to do if config file not found 2022-11-18 21:46:29 +01:00
Jérôme Petazzoni
527c63eee7 📦 Add pic of Catène de Conteneurs 2022-11-09 14:36:25 +01:00
Jérôme Petazzoni
6cfe991375 🐞 Typo fix 2022-11-04 17:23:14 +01:00
Jérôme Petazzoni
c8f90463e0 🌈 Change the tmux status bar to yellow (like a precious metal) 2022-11-02 17:02:44 +01:00
Jérôme Petazzoni
316f5b8fd8 🌈 Change tmux status bar color to blue
To help differentiate between environments
(shpod now defaults to red)
2022-11-01 11:44:32 +01:00
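The corresponding setting (the yellow variant appears verbatim in the provisioning script later in this diff); a sketch for trying it locally:

```bash
# Append the color override to ~/.tmux.conf and reload without restarting tmux
echo 'set -g status-style bg=blue,bold' >> ~/.tmux.conf
tmux source-file ~/.tmux.conf
```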
Jérôme Petazzoni
c86474a539 ♻️ Update kubebuilder workshop 2022-10-28 12:32:05 +02:00
Jérôme Petazzoni
2943ef4e26 Update Kyverno to 1.7 2022-10-26 19:49:23 +02:00
Jérôme Petazzoni
02004317ac 🐞 Fix some ambiguous markdown link references
I thought that the links were local to each slide, but...
apparently not. Whoops.
2022-10-24 20:41:23 +02:00
Jérôme Petazzoni
c9cc659f88 🐞 Typo fix 2022-10-09 23:05:27 +02:00
Jérôme Petazzoni
bb8e655f92 🔧 Disable unattended upgrades; add completion for kubeadm 2022-10-09 12:18:42 +02:00
Jérôme Petazzoni
50772ca439 🌍 Switch Scaleway to fr-par-2 (better PUE) 2022-10-09 12:18:07 +02:00
Jérôme Petazzoni
1082204ac7 📃 Add note about .Chart.IsRoot 2022-10-04 17:11:59 +02:00
Jérôme Petazzoni
c9c79c409c Add ytt; fix Weave YAML URL; add completion for a few tools 2022-10-04 16:53:36 +02:00
Jérôme Petazzoni
71daf27237 ⌨️ Add tmux rename window shortcut 2022-10-03 15:28:32 +02:00
Jérôme Petazzoni
986da15a22 🔗 Update kustomize eschewed features link 2022-10-03 15:23:18 +02:00
Jérôme Petazzoni
407a8631ed 🐞 Typo in variable name 2022-10-03 15:15:53 +02:00
Jérôme Petazzoni
b4a81a7054 🔧 Minor tweak to Terraform provisioning wrapper 2022-10-03 15:15:12 +02:00
67 changed files with 2032 additions and 1051 deletions

View File

@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,8 +253,8 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
spec:
containers:
- args:
@@ -262,7 +262,7 @@ spec:
- --sidecar-host=http://127.0.0.1:8000
- --enable-skip-login
- --enable-insecure-login
image: kubernetesui/dashboard:v2.6.1
image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:

View File

@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,15 +253,15 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
spec:
containers:
- args:
- --namespace=kubernetes-dashboard
- --auto-generate-certificates
- --sidecar-host=http://127.0.0.1:8000
image: kubernetesui/dashboard:v2.6.1
image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:

View File

@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,15 +253,15 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.6.1
helm.sh/chart: kubernetes-dashboard-5.10.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
spec:
containers:
- args:
- --namespace=kubernetes-dashboard
- --auto-generate-certificates
- --sidecar-host=http://127.0.0.1:8000
image: kubernetesui/dashboard:v2.6.1
image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -344,3 +344,12 @@ metadata:
creationTimestamp: null
name: cluster-admin
namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: cluster-admin-token
namespace: kubernetes-dashboard
annotations:
kubernetes.io/service-account.name: cluster-admin
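Once the Secret above exists, the control plane populates it with a token for the `cluster-admin` ServiceAccount. A sketch of reading it back for the dashboard login:

```bash
kubectl --namespace=kubernetes-dashboard get secret cluster-admin-token \
    -o jsonpath='{.data.token}' | base64 -d; echo
```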

View File

@@ -1,5 +1,5 @@
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
metadata:
name: rng
spec:
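For reference, a complete HPA in the new `autoscaling/v2` format (a sketch: the diff only shows the header, so the scale target and metrics below are assumptions):

```bash
kubectl apply -f - <<EOF
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: rng
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rng
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
EOF
```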

View File

@@ -15,10 +15,10 @@ spec:
- key: "{{ request.operation }}"
operator: Equals
value: UPDATE
- key: "{{ request.oldObject.metadata.labels.color }}"
- key: "{{ request.oldObject.metadata.labels.color || '' }}"
operator: NotEquals
value: ""
- key: "{{ request.object.metadata.labels.color }}"
- key: "{{ request.object.metadata.labels.color || '' }}"
operator: NotEquals
value: ""
validate:

View File

@@ -15,10 +15,10 @@ spec:
- key: "{{ request.operation }}"
operator: Equals
value: UPDATE
- key: "{{ request.oldObject.metadata.labels.color }}"
- key: "{{ request.oldObject.metadata.labels.color || '' }}"
operator: NotEquals
value: ""
- key: "{{ request.object.metadata.labels.color }}"
- key: "{{ request.object.metadata.labels.color || '' }}"
operator: Equals
value: ""
validate:
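In both policies, the added `|| ''` coalesces a missing label to an empty string, so the Equals/NotEquals comparison against `""` stays well-defined even when the object has no `color` label (JMESPath evaluates absent keys to null). A quick way to verify the expression (sketch; assumes the `jp` JMESPath CLI is installed):

```bash
echo '{"metadata":{"labels":{}}}' | jp "metadata.labels.color || ''"
# => "" (instead of null without the fallback)
```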

View File

@@ -70,4 +70,15 @@ add_namespace() {
kubectl create serviceaccount -n kubernetes-dashboard cluster-admin \
-o yaml --dry-run=client \
#
echo ---
cat <<EOF
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: cluster-admin-token
namespace: kubernetes-dashboard
annotations:
kubernetes.io/service-account.name: cluster-admin
EOF
) > dashboard-with-token.yaml

View File

@@ -34,28 +34,15 @@ to that directory, then create the clusters using that configuration.
- Scaleway: run `scw init`
2. Optional: set number of clusters, cluster size, and region.
By default, 1 cluster will be configured, with 2 nodes, and auto-scaling up to 5 nodes.
If you want, you can override these parameters, with the following variables.
2. Run!
```bash
export TF_VAR_how_many_clusters=5
export TF_VAR_min_nodes_per_pool=2
export TF_VAR_max_nodes_per_pool=4
export TF_VAR_location=xxx
./run.sh <providername> <location> [number of clusters] [min nodes] [max nodes]
```
The `location` variable is optional. Each provider should have a default value.
The value of the `location` variable is provider-specific. Examples:
If you don't specify a provider name, it will list available providers.
| Provider | Example value | How to see possible values
|---------------|-------------------|---------------------------
| Digital Ocean | `ams3` | `doctl compute region list`
| Google Cloud | `europe-north1-a` | `gcloud compute zones list`
| Linode | `eu-central` | `linode-cli regions list`
| Oracle Cloud | `eu-stockholm-1` | `oci iam region list`
If you don't specify a location, it will list locations available for this provider.
You can also specify multiple locations, and then they will be
used in round-robin fashion.
@@ -66,22 +53,15 @@ my requests to increase that quota were denied) you can do the
following:
```bash
export TF_VAR_location=$(gcloud compute zones list --format=json | jq -r .[].name | grep ^europe)
LOCATIONS=$(gcloud compute zones list --format=json | jq -r .[].name | grep ^europe)
./run.sh googlecloud "$LOCATIONS"
```
Then when you apply, clusters will be created across all available
zones in Europe. (When I write this, there are 20+ zones in Europe,
so even with my quota, I can create 40 clusters.)
3. Run!
```bash
./run.sh <providername>
```
(If you don't specify a provider name, it will list available providers.)
4. Shutting down
3. Shutting down
Go to the directory that was created by the previous step (`tag-YYYY-MM...`)
and run `terraform destroy`.
@@ -112,7 +92,7 @@ terraform init
See steps above, and add the following extra steps:
- Digital Coean:
- Digital Ocean:
```bash
export DIGITALOCEAN_ACCESS_TOKEN=$(grep ^access-token ~/.config/doctl/config.yaml | cut -d: -f2 | tr -d " ")
```
@@ -160,3 +140,30 @@ terraform destroy
```bash
rm stage2/terraform.tfstate*
```
10. Clean up leftovers.
Some providers don't clean up properly the resources created by the CCM.
For instance, when you create a Kubernetes `Service` of type
`LoadBalancer`, it generally provisions a cloud load balancer.
On Linode (and possibly other providers, too!) these cloud load balancers
aren't deleted when the cluster gets deleted, and they keep incurring
charges. You should check for those, to make sure that you don't
get charged for resources that you don't use anymore. As I write this
paragraph, there are:
- `linode-delete-ccm-loadbalancers.sh` to delete the Linode
nodebalancers; but be careful: it deletes **all** the nodebalancers
whose name starts with `ccm-`, which means that if you still have
Kubernetes clusters, their load balancers will be deleted as well!
- `linode-delete-pvc-volumes.sh` to delete Linode persistent disks
that have been created to satisfy Persistent Volume Claims
(these need to be removed manually because the default Storage Class
on Linode has a RETAIN policy). Again, be careful, this will wipe
out any volume whose label starts with `pvc`. (I don't know if it
will remove volumes that are still attached.)
Eventually, I hope to add more scripts for other providers, and make
them more selective and more robust, but for now, that's better than
nothing.

View File

@@ -0,0 +1,4 @@
#!/bin/sh
linode-cli nodebalancers list --json |
jq '.[] | select(.label | startswith("ccm-")) | .id' |
xargs -n1 -P10 linode-cli nodebalancers delete

View File

@@ -0,0 +1,4 @@
#!/bin/sh
linode-cli volumes list --json |
jq '.[] | select(.label | startswith("pvc")) | .id' |
xargs -n1 -P10 linode-cli volumes delete
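Before running the two deletion scripts above, a dry run with the same selection logic shows what would be removed (sketch):

```bash
linode-cli nodebalancers list --json |
  jq -r '.[] | select(.label | startswith("ccm-")) | "\(.id)\t\(.label)"'
linode-cli volumes list --json |
  jq -r '.[] | select(.label | startswith("pvc")) | "\(.id)\t\(.label)"'
```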

View File

@@ -3,11 +3,37 @@ set -e
TIME=$(which time)
PROVIDER=$1
[ "$PROVIDER" ] || {
echo "Please specify a provider as first argument, or 'ALL' for parallel mode."
if [ -f ~/.config/doctl/config.yaml ]; then
export DIGITALOCEAN_ACCESS_TOKEN=$(grep ^access-token ~/.config/doctl/config.yaml | cut -d: -f2 | tr -d " ")
fi
if [ -f ~/.config/linode-cli ]; then
export LINODE_TOKEN=$(grep ^token ~/.config/linode-cli | cut -d= -f2 | tr -d " ")
fi
[ "$1" ] || {
echo "Syntax:"
echo ""
echo "$0 <provider> <region> [how-many-clusters] [min-nodes] [max-nodes]"
echo ""
echo "Available providers:"
ls -1 source/modules
echo ""
echo "Leave the region empty to show available regions for this provider."
echo "You can also specify ALL as a provider to simultaneously provision"
echo "many clusters on *each* provider for benchmarking purposes."
echo ""
exit 1
}
PROVIDER="$1"
export TF_VAR_location="$2"
export TF_VAR_how_many_clusters="${3-1}"
export TF_VAR_min_nodes_per_pool="${4-2}"
export TF_VAR_max_nodes_per_pool="${5-4}"
[ "$TF_VAR_location" ] || {
"./source/modules/$PROVIDER/list_locations.sh"
exit 1
}

View File

@@ -62,9 +62,11 @@ resource "null_resource" "wait_for_nodes" {
KUBECONFIG = local_file.kubeconfig[each.key].filename
}
command = <<-EOT
set -e
kubectl get nodes --watch | grep --silent --line-buffered .
kubectl wait node --for=condition=Ready --all --timeout=10m
while sleep 1; do
kubectl get nodes --watch | grep --silent --line-buffered . &&
kubectl wait node --for=condition=Ready --all --timeout=10m &&
break
done
EOT
}
}
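The retry loop above works because `kubectl get nodes --watch` blocks until at least one node object exists, and `kubectl wait` then blocks until every node is Ready. A standalone equivalent, runnable against a kubeconfig (sketch; the path is an assumption):

```bash
export KUBECONFIG=./kubeconfig
until kubectl get nodes --watch | grep --silent --line-buffered . &&
      kubectl wait node --for=condition=Ready --all --timeout=10m
do
  sleep 1
done
```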

View File

@@ -0,0 +1,2 @@
#!/bin/sh
doctl compute region list

View File

@@ -0,0 +1,2 @@
#!/bin/sh
gcloud compute zones list

View File

@@ -0,0 +1,2 @@
#!/bin/sh
linode-cli regions list

View File

@@ -0,0 +1,2 @@
#!/bin/sh
oci iam region list

View File

@@ -0,0 +1,6 @@
#!/bin/sh
echo "# Note that this is hard-coded in $0.
# I don't know if there is a way to list regions through the Scaleway API.
fr-par
nl-ams
pl-waw"

View File

@@ -56,5 +56,5 @@ variable "location" {
# scw k8s version list -o json | jq -r .[].name
variable "k8s_version" {
type = string
default = "1.23.6"
default = "1.24.7"
}

View File

@@ -2,7 +2,7 @@ terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.7.1"
version = "2.16.1"
}
}
}

View File

@@ -1,3 +1,3 @@
INFRACLASS=scaleway
#SCW_INSTANCE_TYPE=DEV1-L
#SCW_ZONE=fr-par-2
SCW_ZONE=fr-par-2

View File

@@ -131,6 +131,8 @@ set nowrap
SQRL
pssh -I "sudo -u $USER_LOGIN tee /home/$USER_LOGIN/.tmux.conf" <<SQRL
set -g status-style bg=yellow,bold
bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
@@ -157,6 +159,9 @@ _cmd_clusterize() {
TAG=$1
need_tag
# Disable unattended upgrades so that they don't mess up with the subsequent steps
pssh sudo rm -f /etc/apt/apt.conf.d/50unattended-upgrades
# Special case for scaleway since it doesn't come with sudo
if [ "$INFRACLASS" = "scaleway" ]; then
pssh -l root "
@@ -253,14 +258,6 @@ _cmd_docker() {
sudo ln -sfn /mnt/docker /var/lib/docker
fi
# containerd 1.6 breaks Weave.
# See https://github.com/containerd/containerd/issues/6921
sudo tee /etc/apt/preferences.d/containerd <<EOF
Package: containerd.io
Pin: version 1.5.*
Pin-Priority: 1000
EOF
# This will install the latest Docker.
sudo apt-get -qy install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
@@ -270,6 +267,7 @@ EOF
# Add registry mirror configuration.
if ! [ -f /etc/docker/daemon.json ]; then
sudo mkdir -p /etc/docker
echo '{\"registry-mirrors\": [\"https://mirror.gcr.io\"]}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
fi
@@ -361,7 +359,8 @@ EOF"
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet kubeadm kubectl &&
sudo apt-mark hold kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl &&
kubeadm completion bash | sudo tee /etc/bash_completion.d/kubeadm &&
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl &&
echo 'alias k=kubectl' | sudo tee /etc/bash_completion.d/k &&
echo 'complete -F __start_kubectl k' | sudo tee -a /etc/bash_completion.d/k"
@@ -434,8 +433,9 @@ EOF
# Install weave as the pod network
pssh "
if i_am_first_node; then
kubever=\$(kubectl version | base64 | tr -d '\n') &&
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
#kubever=\$(kubectl version | base64 | tr -d '\n') &&
#kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml
fi"
# Join the other nodes to the cluster
@@ -510,13 +510,13 @@ EOF
# Install stern
##VERSION## https://github.com/stern/stern/releases
STERN_VERSION=1.20.1
STERN_VERSION=1.22.0
FILENAME=stern_${STERN_VERSION}_linux_${ARCH}
URL=https://github.com/stern/stern/releases/download/v$STERN_VERSION/$FILENAME.tar.gz
pssh "
if [ ! -x /usr/local/bin/stern ]; then
curl -fsSL $URL |
sudo tar -C /usr/local/bin -zx --strip-components=1 $FILENAME/stern
sudo tar -C /usr/local/bin -zx stern
sudo chmod +x /usr/local/bin/stern
stern --completion bash | sudo tee /etc/bash_completion.d/stern
stern --version
@@ -532,7 +532,7 @@ EOF
# Install kustomize
##VERSION## https://github.com/kubernetes-sigs/kustomize/releases
KUSTOMIZE_VERSION=v4.4.0
KUSTOMIZE_VERSION=v4.5.7
URL=https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_${ARCH}.tar.gz
pssh "
if [ ! -x /usr/local/bin/kustomize ]; then
@@ -551,7 +551,7 @@ EOF
if [ ! -x /usr/local/bin/ship ]; then
##VERSION##
curl -fsSL https://github.com/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_$ARCH.tar.gz |
sudo tar -C /usr/local/bin -zx ship
sudo tar -C /usr/local/bin -zx ship
fi"
# Install the AWS IAM authenticator
@@ -559,8 +559,8 @@ EOF
if [ ! -x /usr/local/bin/aws-iam-authenticator ]; then
##VERSION##
sudo curl -fsSLo /usr/local/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/$ARCH/aws-iam-authenticator
sudo chmod +x /usr/local/bin/aws-iam-authenticator
aws-iam-authenticator version
sudo chmod +x /usr/local/bin/aws-iam-authenticator
aws-iam-authenticator version
fi"
# Install the krew package manager
@@ -577,7 +577,7 @@ EOF
# Install k9s
pssh "
if [ ! -x /usr/local/bin/k9s ]; then
FILENAME=k9s_Linux_$HERP_DERP_ARCH.tar.gz &&
FILENAME=k9s_Linux_$ARCH.tar.gz &&
curl -fsSL https://github.com/derailed/k9s/releases/latest/download/\$FILENAME |
sudo tar -zxvf- -C /usr/local/bin k9s
k9s version
@@ -602,6 +602,7 @@ EOF
FILENAME=tilt.\$TILT_VERSION.linux.$TILT_ARCH.tar.gz
curl -fsSL https://github.com/tilt-dev/tilt/releases/download/v\$TILT_VERSION/\$FILENAME |
sudo tar -zxvf- -C /usr/local/bin tilt
tilt completion bash | sudo tee /etc/bash_completion.d/tilt
tilt version
fi"
@@ -610,6 +611,7 @@ EOF
if [ ! -x /usr/local/bin/skaffold ]; then
curl -fsSLo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-$ARCH &&
sudo install skaffold /usr/local/bin/
skaffold completion bash | sudo tee /etc/bash_completion.d/skaffold
skaffold version
fi"
@@ -618,9 +620,28 @@ EOF
if [ ! -x /usr/local/bin/kompose ]; then
curl -fsSLo kompose https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-$ARCH &&
sudo install kompose /usr/local/bin
kompose completion bash | sudo tee /etc/bash_completion.d/kompose
kompose version
fi"
# Install KinD
pssh "
if [ ! -x /usr/local/bin/kind ]; then
curl -fsSLo kind https://github.com/kubernetes-sigs/kind/releases/latest/download/kind-linux-$ARCH &&
sudo install kind /usr/local/bin
kind completion bash | sudo tee /etc/bash_completion.d/kind
kind version
fi"
# Install YTT
pssh "
if [ ! -x /usr/local/bin/ytt ]; then
curl -fsSLo ytt https://github.com/vmware-tanzu/carvel-ytt/releases/latest/download/ytt-linux-$ARCH &&
sudo install ytt /usr/local/bin
ytt completion bash | sudo tee /etc/bash_completion.d/ytt
ytt version
fi"
##VERSION## https://github.com/bitnami-labs/sealed-secrets/releases
KUBESEAL_VERSION=0.17.4
#case $ARCH in

View File

@@ -36,7 +36,7 @@ if os.path.isfile(domain_or_domain_file):
clusters = [line.split() for line in lines]
else:
ips = open(f"tags/{ips_file_or_tag}/ips.txt").read().split()
settings_file = f"tags/{tag}/settings.yaml"
settings_file = f"tags/{ips_file_or_tag}/settings.yaml"
clustersize = yaml.safe_load(open(settings_file))["clustersize"]
clusters = []
while ips:

View File

@@ -17,8 +17,17 @@
exit 1
}
NETLIFY_USERID=$(jq .userId < ~/.config/netlify/config.json)
NETLIFY_TOKEN=$(jq -r .users[$NETLIFY_USERID].auth.token < ~/.config/netlify/config.json)
NETLIFY_CONFIG_FILE=~/.config/netlify/config.json
if ! [ -f "$NETLIFY_CONFIG_FILE" ]; then
echo "Could not find Netlify configuration file ($NETLIFY_CONFIG_FILE)."
echo "Try to run the following command, and try again:"
echo "npx netlify-cli login"
exit 1
fi
NETLIFY_USERID=$(jq .userId < "$NETLIFY_CONFIG_FILE")
NETLIFY_TOKEN=$(jq -r .users[$NETLIFY_USERID].auth.token < "$NETLIFY_CONFIG_FILE")
netlify() {
URI=$1

View File

@@ -0,0 +1,71 @@
resource "azurerm_resource_group" "_" {
name = var.prefix
location = var.location
}
resource "azurerm_public_ip" "_" {
count = var.how_many_nodes
name = format("%s-%04d", var.prefix, count.index + 1)
location = azurerm_resource_group._.location
resource_group_name = azurerm_resource_group._.name
allocation_method = "Dynamic"
}
resource "azurerm_network_interface" "_" {
count = var.how_many_nodes
name = format("%s-%04d", var.prefix, count.index + 1)
location = azurerm_resource_group._.location
resource_group_name = azurerm_resource_group._.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet._.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip._[count.index].id
}
}
resource "azurerm_linux_virtual_machine" "_" {
count = var.how_many_nodes
name = format("%s-%04d", var.prefix, count.index + 1)
resource_group_name = azurerm_resource_group._.name
location = azurerm_resource_group._.location
size = var.size
admin_username = "ubuntu"
network_interface_ids = [
azurerm_network_interface._[count.index].id,
]
admin_ssh_key {
username = "ubuntu"
public_key = local.authorized_keys
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS" # FIXME
version = "latest"
}
}
# The public IP address only gets allocated when the address actually gets
# attached to the virtual machine. So we need to do this extra indirection
# to retrieve the IP addresses. Otherwise the IP addresses show up as blank.
# See: https://github.com/hashicorp/terraform-provider-azurerm/issues/310#issuecomment-335479735
data "azurerm_public_ip" "_" {
count = var.how_many_nodes
name = format("%s-%04d", var.prefix, count.index + 1)
resource_group_name = azurerm_resource_group._.name
depends_on = [azurerm_linux_virtual_machine._]
}
output "ip_addresses" {
value = join("", formatlist("%s\n", data.azurerm_public_ip._.*.ip_address))
}

View File

@@ -0,0 +1,13 @@
resource "azurerm_virtual_network" "_" {
name = "tf-vnet"
address_space = ["10.10.0.0/16"]
location = azurerm_resource_group._.location
resource_group_name = azurerm_resource_group._.name
}
resource "azurerm_subnet" "_" {
name = "tf-subnet"
resource_group_name = azurerm_resource_group._.name
virtual_network_name = azurerm_virtual_network._.name
address_prefixes = ["10.10.0.0/20"]
}

View File

@@ -0,0 +1,13 @@
terraform {
required_version = ">= 1"
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=3.33.0"
}
}
}
provider "azurerm" {
features {}
}

View File

@@ -0,0 +1,32 @@
variable "prefix" {
type = string
default = "provisioned-with-terraform"
}
variable "how_many_nodes" {
type = number
default = 2
}
locals {
authorized_keys = file("~/.ssh/id_rsa.pub")
}
/*
Available sizes:
"Standard_D11_v2" # CPU=2 RAM=14
"Standard_F4s_v2" # CPU=4 RAM=8
"Standard_D1_v2" # CPU=1 RAM=3.5
"Standard_B1ms" # CPU=1 RAM=2
"Standard_B2s" # CPU=2 RAM=4
*/
variable "size" {
type = string
default = "Standard_F4s_v2"
}
variable "location" {
type = string
default = "South Africa North"
}
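Hypothetical usage of this new Azure configuration (a sketch; the directory name and authentication flow are assumptions not shown in this diff):

```bash
az login                        # authenticate the azurerm provider
cd source/modules/azure         # assumed location of these .tf files
terraform init
terraform apply -var=how_many_nodes=4 -var='location=South Africa North'
terraform output ip_addresses
```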

View File

@@ -2,7 +2,7 @@
#/ /kube-halfday.yml.html 200!
#/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200!
/ /kube-adv.yml.html 200!
/ /kube.yml.html 200!
# And this allows to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack

View File

@@ -194,9 +194,9 @@
}
},
"node_modules/engine.io": {
"version": "6.2.0",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-6.2.0.tgz",
"integrity": "sha512-4KzwW3F3bk+KlzSOY57fj/Jx6LyRQ1nbcyIadehl+AnXjKT7gDO0ORdRi/84ixvMKTym6ZKuxvbzN62HDDU1Lg==",
"version": "6.2.1",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-6.2.1.tgz",
"integrity": "sha512-ECceEFcAaNRybd3lsGQKas3ZlMVjN3cyWwMP25D2i0zWfyiytVbTpRPa34qrr+FHddtpBVOmq4H/DCv1O0lZRA==",
"dependencies": {
"@types/cookie": "^0.4.1",
"@types/cors": "^2.8.12",
@@ -742,9 +742,9 @@
"integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w=="
},
"node_modules/socket.io-client/node_modules/socket.io-parser": {
"version": "4.2.0",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.2.0.tgz",
"integrity": "sha512-tLfmEwcEwnlQTxFB7jibL/q2+q8dlVQzj4JdRLJ/W/G1+Fu9VSxCx1Lo+n1HvXxKnM//dUuD0xgiA7tQf57Vng==",
"version": "4.2.1",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.2.1.tgz",
"integrity": "sha512-V4GrkLy+HeF1F/en3SpUaM+7XxYXpuMUWLGde1kSSh5nQMN4hLrbPIkD+otwh6q9R6NOQBN4AMaOZ2zVjui82g==",
"dependencies": {
"@socket.io/component-emitter": "~3.1.0",
"debug": "~4.3.1"
@@ -754,9 +754,9 @@
}
},
"node_modules/socket.io-parser": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.0.4.tgz",
"integrity": "sha512-t+b0SS+IxG7Rxzda2EVvyBZbvFPBCjJoyHuE0P//7OAsN23GItzDRdWa6ALxZI/8R5ygK7jAR6t028/z+7295g==",
"version": "4.0.5",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.0.5.tgz",
"integrity": "sha512-sNjbT9dX63nqUFIOv95tTVm6elyIU4RvB1m8dOeZt+IgWwcWklFDOdmGcfo3zSiRsnR/3pJkjY5lfoGqEe4Eig==",
"dependencies": {
"@types/component-emitter": "^1.2.10",
"component-emitter": "~1.3.0",
@@ -1033,9 +1033,9 @@
"integrity": "sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w=="
},
"engine.io": {
"version": "6.2.0",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-6.2.0.tgz",
"integrity": "sha512-4KzwW3F3bk+KlzSOY57fj/Jx6LyRQ1nbcyIadehl+AnXjKT7gDO0ORdRi/84ixvMKTym6ZKuxvbzN62HDDU1Lg==",
"version": "6.2.1",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-6.2.1.tgz",
"integrity": "sha512-ECceEFcAaNRybd3lsGQKas3ZlMVjN3cyWwMP25D2i0zWfyiytVbTpRPa34qrr+FHddtpBVOmq4H/DCv1O0lZRA==",
"requires": {
"@types/cookie": "^0.4.1",
"@types/cors": "^2.8.12",
@@ -1456,9 +1456,9 @@
"integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w=="
},
"socket.io-parser": {
"version": "4.2.0",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.2.0.tgz",
"integrity": "sha512-tLfmEwcEwnlQTxFB7jibL/q2+q8dlVQzj4JdRLJ/W/G1+Fu9VSxCx1Lo+n1HvXxKnM//dUuD0xgiA7tQf57Vng==",
"version": "4.2.1",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.2.1.tgz",
"integrity": "sha512-V4GrkLy+HeF1F/en3SpUaM+7XxYXpuMUWLGde1kSSh5nQMN4hLrbPIkD+otwh6q9R6NOQBN4AMaOZ2zVjui82g==",
"requires": {
"@socket.io/component-emitter": "~3.1.0",
"debug": "~4.3.1"
@@ -1467,9 +1467,9 @@
}
},
"socket.io-parser": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.0.4.tgz",
"integrity": "sha512-t+b0SS+IxG7Rxzda2EVvyBZbvFPBCjJoyHuE0P//7OAsN23GItzDRdWa6ALxZI/8R5ygK7jAR6t028/z+7295g==",
"version": "4.0.5",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-4.0.5.tgz",
"integrity": "sha512-sNjbT9dX63nqUFIOv95tTVm6elyIU4RvB1m8dOeZt+IgWwcWklFDOdmGcfo3zSiRsnR/3pJkjY5lfoGqEe4Eig==",
"requires": {
"@types/component-emitter": "^1.2.10",
"component-emitter": "~1.3.0",

View File

@@ -35,7 +35,7 @@ At the end of this section, you will be able to:
---
## Runing an NGINX server
## Running an NGINX server
```bash
$ docker run -d -P nginx

View File

@@ -1,57 +1,75 @@
#!/usr/bin/env python
import re
import sys
import yaml
FIRST_SLIDE_MARKER = "name: toc-"
PART_PREFIX = "part-"
filename = sys.argv[1]
if filename.endswith(".html"):
html_file = filename
yaml_file = filename[: -len(".html")]
else:
html_file = filename + ".html"
yaml_file = filename
excluded_classes = yaml.safe_load(open(yaml_file))["exclude"]
PREFIX = "name: toc-"
EXCLUDED = ["in-person"]
class State(object):
def __init__(self):
self.current_slide = 1
self.section_title = None
self.section_start = 0
self.section_slides = 0
self.current_slide = -1
self.parts = {}
self.sections = {}
def show(self):
if self.section_title.startswith("part-"):
return
print("{0.section_title}\t{0.section_start}\t{0.section_slides}".format(self))
self.sections[self.section_title] = self.section_slides
def end_section(self):
if state.section_title:
print(
"{0.section_start}\t{0.section_slides}\t{0.section_title}".format(self)
)
if self.section_part:
if self.section_part not in self.parts:
self.parts[self.section_part] = 0
self.parts[self.section_part] += self.section_slides
def new_section(self, slide):
# Normally, the title should be prefixed by a space
# (because section titles are first-level titles in markdown,
# e.g. "# Introduction", and markmaker removes the # but leaves
# the leading space).
self.section_title = None
if "\n " in slide:
self.section_title = slide.split("\n ")[1].split("\n")[0]
toc_links = re.findall("\(#toc-(.*)\)", slide)
self.section_part = None
for toc_link in toc_links:
if toc_link.startswith(PART_PREFIX):
self.section_part = toc_link
self.section_start = self.current_slide
self.section_slides = 0
state = State()
state.new_section("")
print("{}\t{}\t{}".format("index", "size", "title"))
title = None
for line in open(sys.argv[1]):
line = line.rstrip()
if line.startswith(PREFIX):
if state.section_title is None:
print("{}\t{}\t{}".format("title", "index", "size"))
else:
state.show()
state.section_title = line[len(PREFIX):].strip()
state.section_start = state.current_slide
state.section_slides = 0
if line == "---":
for slide in open(html_file).read().split("\n---\n"):
excluded = False
for line in slide.split("\n"):
if line.startswith("class:"):
for klass in excluded_classes:
if klass in line.split():
excluded = True
if excluded:
continue
if FIRST_SLIDE_MARKER in slide:
# A new section starts. Show info about the part that just ended.
state.end_section()
state.new_section(slide)
state.section_slides += 1
for sub_slide in slide.split("\n--\n"):
state.current_slide += 1
state.section_slides += 1
if line == "--":
state.current_slide += 1
toc_links = re.findall("\(#toc-(.*)\)", line)
if toc_links and state.section_title.startswith("part-"):
if state.section_title not in state.parts:
state.parts[state.section_title] = []
state.parts[state.section_title].append(toc_links[0])
# This is really hackish
if line.startswith("class:"):
for klass in EXCLUDED:
if klass in line:
state.section_slides -= 1
state.current_slide -= 1
state.show()
else:
state.end_section()
for part in sorted(state.parts, key=lambda f: int(f.split("-")[1])):
part_size = sum(state.sections[s] for s in state.parts[part])
print("{}\t{}\t{}".format("total size for", part, part_size))
print("{}\t{}\t{}".format(0, state.parts[part], "total size for " + part))

View File

@@ -2,7 +2,7 @@
- Add an ingress controller to a Kubernetes cluster
- Create an ingress resource for a web app on that cluster
- Create an ingress resource for a couple of web apps on that cluster
- Challenge: accessing/exposing port 80

View File

@@ -1,49 +1,131 @@
# Exercise — Ingress
- We want to expose a web app through an ingress controller
- We want to expose a couple of web apps through an ingress controller
- This will require:
- the web app itself (dockercoins, NGINX, whatever we want)
- the web apps (e.g. two instances of `jpetazzo/color`)
- an ingress controller
- a domain name (`use \*.nip.io` or `\*.localdev.me`)
- an ingress resource
---
## Goal
## Different scenarios
- We want to be able to access the web app using a URL like:
We will use a different deployment mechanism depending on the cluster that we have:
http://webapp.localdev.me
- Managed cluster with working `LoadBalancer` Services
*or*
- Local development cluster
http://webapp.A.B.C.D.nip.io
- Cluster without `LoadBalancer` Services (e.g. deployed with `kubeadm`)
(where A.B.C.D is the IP address of one of our nodes)
---
## The apps
- The web apps will be deployed similarly, regardless of the scenario
- Let's start by deploying two web apps, e.g.:
a Deployment called `blue` and another called `green`, using image `jpetazzo/color`
- Expose them with two `ClusterIP` Services
---
## Scenario "classic cloud Kubernetes"
*Difficulty: easy*
For this scenario, we need a cluster with working `LoadBalancer` Services.
(For instance, a managed Kubernetes cluster from a cloud provider.)
We suggest to use "Ingress NGINX" with its default settings.
It can be installed with `kubectl apply` or with `helm`.
Both methods are described in [the documentation][ingress-nginx-deploy].
We want our apps to be available on e.g. http://X.X.X.X/blue and http://X.X.X.X/green
<br/>
(where X.X.X.X is the IP address of the `LoadBalancer` allocated by Ingress NGINX).
[ingress-nginx-deploy]: https://kubernetes.github.io/ingress-nginx/deploy/
---
## Scenario "local development cluster"
*Difficulty: easy-hard (depends on the type of cluster!)*
For this scenario, we want to use a local cluster like KinD, minikube, etc.
We suggest to use "Ingress NGINX" again, like for the previous scenario.
Furthermore, we want to use `localdev.me`.
We want our apps to be available on e.g. `blue.localdev.me` and `green.localdev.me`.
The difficulty is to ensure that `localhost:80` will map to the ingress controller.
(See next slide for hints!)
---
## Hints
- For the ingress controller, we can use:
- With clusters like Docker Desktop, the first `LoadBalancer` service uses `localhost`
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/index.md)
(if the ingress controller is the first `LoadBalancer` service, we're all set!)
- the [Traefik Helm chart](https://doc.traefik.io/traefik/getting-started/install-traefik/#use-the-helm-chart)
- With clusters like K3D and KinD, it is possible to define extra port mappings
- the container.training [Traefik DaemonSet](https://raw.githubusercontent.com/jpetazzo/container.training/main/k8s/traefik-v2.yaml)
(and map e.g. `localhost:80` to port 30080 on the node; then use that as a `NodePort`)
- If our cluster supports LoadBalancer Services: easy
---
(nothing special to do)
## Scenario "on premises cluster", take 1
- For local clusters, things can be more difficult; two options:
*Difficulty: easy*
- map localhost:80 to e.g. a NodePort service, and use `\*.localdev.me`
For this scenario, we need a cluster with nodes that are publicly accessible.
- use hostNetwork, or ExternalIP, and use `\*.nip.io`
We want to deploy the ingress controller so that it listens on port 80 on all nodes.
This can be done e.g. with the manifests in @@LINK[k8s/traefik.yaml].
We want our apps to be available on e.g. http://X.X.X.X/blue and http://X.X.X.X/green
<br/>
(where X.X.X.X is the IP address of any of our nodes).
---
## Scenario "on premises cluster", take 2
*Difficulty: medium*
We want to deploy the ingress controller so that it listens on port 80 on all nodes.
But this time, we want to use a Helm chart to install the ingress controller.
We can use either the Ingress NGINX Helm chart, or the Traefik Helm chart.
Test with an untainted node first.
Feel free to make it work on tainted nodes (e.g. control plane nodes) later.
---
## Scenario "on premises cluster", take 3
*Difficulty: hard*
This is similar to the previous scenario, but with two significant changes:
1. We only want to run the ingress controller on nodes that have the role `ingress`.
2. We don't want to use `hostNetwork`, but a list of `externalIPs` instead.

Binary file not shown (new image: images/canary-cage.jpg, 394 KiB).

View File

@@ -1,17 +0,0 @@
# Interlude
- As mentioned earlier:
*the content of this course will be adapted to suit your needs!*
- Please take a look at the form that we just shared in Slack
(you don't need to fill it *right now*)
- If there are parts that you are curious about, ask us now!
- We'll ask you to fill the form after today's session
(before the end of the day, basically, so we can process the results by tomorrow)
- Thank you!

View File

@@ -13,3 +13,4 @@ https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-1.jpg
https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-2.jpg
https://gallant-turing-d0d520.netlify.com/containers/two-containers-on-a-truck.jpg
https://gallant-turing-d0d520.netlify.com/containers/wall-of-containers.jpeg
https://gallant-turing-d0d520.netlify.com/containers/catene-de-conteneurs.jpg

View File

@@ -203,12 +203,12 @@ What does that mean?
## Let's experiment a bit!
- The examples in this section require a Kubernetes cluster
(any local development cluster will suffice)
- For this section, connect to the first node of the `test` cluster
.lab[
- SSH to the first node of the test cluster
- Check that the cluster is operational:
```bash
kubectl get nodes

View File

@@ -274,7 +274,7 @@ class: extra-details
- ...or with a Secret with the right [type and annotation][create-token]
[create-token]: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#to-create-additional-api-tokens
[create-token]: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#create-token
---

View File

@@ -202,7 +202,9 @@ class: extra-details
- These are JWS signatures using HMAC-SHA256
(see [here](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#configmap-signing) for more details)
(see [the reference documentation][configmap-signing] for more details)
[configmap-signing]: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#configmap-signing
---

View File

@@ -48,7 +48,7 @@
- We must run nodes on a supported infrastructure
- See [here] for a non-exhaustive list of supported providers
- Check the [GitHub repo][autoscaler-providers] for a non-exhaustive list of supported providers
- Sometimes, the cluster autoscaler is installed automatically
@@ -58,7 +58,7 @@
(which is often non-trivial and highly provider-specific)
[here]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider
[autoscaler-providers]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider
---

View File

@@ -138,7 +138,7 @@ class: extra-details
- The Cluster Autoscaler only supports a few cloud infrastructures
(see [here](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider) for a list)
(see the [kubernetes/autoscaler repo][kubernetes-autoscaler-repo] for a list)
- The Cluster Autoscaler cannot scale down nodes that have pods using:
@@ -148,6 +148,8 @@ class: extra-details
- a restrictive PodDisruptionBudget
[kubernetes-autoscaler-repo]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider
---
## Other way to do capacity planning

View File

@@ -24,11 +24,11 @@
- Interface parameters (MTU, sysctls) could be tweaked by the `tuning` plugin
The reference plugins are available [here].
The reference plugins are available [here][cni-reference-plugins].
Look in each plugin's directory for its documentation.
[here]: https://github.com/containernetworking/plugins/tree/master/plugins
[cni-reference-plugins]: https://github.com/containernetworking/plugins/tree/master/plugins
---
@@ -404,17 +404,17 @@ class: extra-details
- Create a Deployment running a web server:
```bash
kubectl create deployment web --image=jpetazzo/httpenv
kubectl create deployment blue --image=jpetazzo/color
```
- Scale it so that it spans multiple nodes:
```bash
kubectl scale deployment web --replicas=5
kubectl scale deployment blue --replicas=5
```
- Expose it with a Service:
```bash
kubectl expose deployment web --port=8888
kubectl expose deployment blue --port=8888
```
]

View File

@@ -79,6 +79,20 @@
(blue/green deployment, canary deployment)
--
.footnote[
On the next page: canary cage with an oxygen bottle, designed to keep the canary alive.
<br/>
(See https://post.lurk.org/@zilog/109632335293371919 for details.)
]
---
class: pic
![Canary cage](images/canary-cage.jpg)
---
## More things that Kubernetes can do for us
@@ -287,7 +301,9 @@ No!
--
- By default, Kubernetes uses the Docker Engine to run containers
- The Docker Engine used to be the default option to run containers with Kubernetes
- Support for Docker (specifically: dockershim) was removed in Kubernetes 1.24
- We can leverage other pluggable runtimes through the *Container Runtime Interface*
@@ -329,32 +345,26 @@ Yes!
- We can do these things without Docker
<br/>
(and get diagnosed with NIH¹ syndrome)
(but with some languages/frameworks, it might be much harder)
- Docker is still the most stable container engine today
<br/>
(but other options are maturing very quickly)
.footnote[¹[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)]
---
class: extra-details
## Do we need to run Docker at all?
- On our Kubernetes clusters:
*Not anymore*
- On our development environments, CI pipelines ... :
*Yes, almost certainly*
- On our production servers:
*Yes (today)*
*Probably not (in the future)*
.footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)]
---
## Interacting with Kubernetes


@@ -317,6 +317,22 @@ class: extra-details
class: extra-details
## Determining if we're in a subchart
- `.Chart.IsRoot` indicates if we're in the top-level chart or in a sub-chart
- Useful in charts that are designed to be used standalone or as dependencies
- Example: generic chart
- when used standalone (`.Chart.IsRoot` is `true`), use `.Release.Name`
- when used as a subchart e.g. with multiple aliases, use `.Chart.Name`
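A minimal sketch of what that could look like in a chart helper (the template name `generic.name` and the file layout are hypothetical, not taken from a specific chart):
```yaml
{{- /* _helpers.tpl of a hypothetical "generic" chart */ -}}
{{- define "generic.name" -}}
{{- if .Chart.IsRoot -}}
{{- .Release.Name -}}
{{- else -}}
{{- .Chart.Name -}}
{{- end -}}
{{- end }}
```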
---
class: extra-details
## Compatibility with Helm 2
- Chart `apiVersion: v1` is the only version supported by Helm 2


@@ -504,7 +504,7 @@ The `readme` may or may not have (accurate) explanations for the values.
- Update `my-juice-shop`:
```bash
helm upgrade my-juice-shop juice/my-juice-shop \
helm upgrade my-juice-shop juice/juice-shop \
--set service.type=NodePort
```


@@ -86,8 +86,8 @@
(This is inspired by the
[uselessoperator](https://github.com/tilt-dev/uselessoperator)
written by
[L Körbes](https://twitter.com/ellenkorbes).
written by
[V Körbes](https://twitter.com/veekorbes).
Highly recommend!💯)
---
@@ -160,34 +160,31 @@ type MachineSpec struct {
We can use Go *marker comments* to give `controller-gen` extra details about how to handle our type, for instance:
```go
//+kubebuilder:object:root=true
```
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.switchPosition",name=Position,type=string
→ top-level type exposed through API (as opposed to "member field of another type")
```go
//+kubebuilder:subresource:status
```
→ automatically generate a `status` subresource (very common with many types)
```go
//+kubebuilder:printcolumn:JSONPath=".spec.switchPosition",name=Position,type=string
```
(See
[marker syntax](https://book.kubebuilder.io/reference/markers.html),
[CRD generation](https://book.kubebuilder.io/reference/markers/crd.html),
[CRD validation](https://book.kubebuilder.io/reference/markers/crd-validation.html)
[CRD validation](https://book.kubebuilder.io/reference/markers/crd-validation.html),
[Object/DeepCopy](https://master.book.kubebuilder.io/reference/markers/object.html)
)
---
class: extra-details
## Using CRD v1
- By default, kubebuilder generates v1alpha1 CRDs
- If we want to generate v1 CRDs:
- edit `Makefile`
- update `crd:crdVersions=v1`
---
## Installing the CRD
After making these changes, we can run `make install`.
@@ -208,6 +205,7 @@ Edit `config/samples/useless_v1alpha1_machine.yaml`:
kind: Machine
apiVersion: useless.container.training/v1alpha1
metadata:
labels: # ...
name: machine-1
spec:
# Our useless operator will change that to "down"
@@ -252,20 +250,23 @@ spec:
## Loading an object
Open `controllers/machine_controller.go` and add that code in the `Reconcile` method:
Open `controllers/machine_controller.go`.
Add that code in the `Reconcile` method, at the `TODO(user)` location:
```go
var machine uselessv1alpha1.Machine
logger := log.FromContext(ctx)
if err := r.Get(ctx, req.NamespacedName, &machine); err != nil {
log.Info("error getting object")
return ctrl.Result{}, err
logger.Info("error getting object")
return ctrl.Result{}, err
}
r.Log.Info(
"reconciling",
"machine", req.NamespaceName,
"switchPosition", machine.Spec.SwitchPosition,
logger.Info(
"reconciling",
"machine", req.NamespacedName,
"switchPosition", machine.Spec.SwitchPosition,
)
```
@@ -288,7 +289,7 @@ Then:
--
🤔
We get a bunch of errors and go stack traces! 🤔
---
@@ -324,7 +325,7 @@ Let's try to update the machine like this:
if machine.Spec.SwitchPosition != "down" {
machine.Spec.SwitchPosition = "down"
if err := r.Update(ctx, &machine); err != nil {
log.Info("error updating switch position")
logger.Info("error updating switch position")
return ctrl.Result{}, client.IgnoreNotFound(err)
}
}
@@ -344,9 +345,9 @@ Again - update, `make run`, test.
(maybe with degraded behavior in the meantime)
- Status will almost always be a sub-resource
- Status will almost always be a sub-resource, so that it can be updated separately
(so that it can be updated separately "cheaply")
(and potentially with different permissions)
---
@@ -399,8 +400,8 @@ class: extra-details
## To requeue ...
`return ctrl.Result{RequeueAfter: 1 * time.Second}`
`return ctrl.Result{RequeueAfter: 1 * time.Second}, nil`
- That means: "try again in 1 second, and I will check if progress was made"
- This *does not* guarantee that we will be called exactly 1 second later:
@@ -409,7 +410,9 @@ class: extra-details
- we might be called after (if the controller is busy with other objects)
- If we are waiting for another resource to change, there is an even better way!
- If we are waiting for another Kubernetes resource to change, there is a better way
(explained on next slide)
---
@@ -417,23 +420,41 @@ class: extra-details
`return ctrl.Result{}, nil`
- That means: "no need to set an alarm; we'll be notified some other way"
- That means: "we're done here!"
- Use this if we are waiting for another resource to update
- This is also what we should use if we are waiting for another resource
(e.g. a LoadBalancer to be provisioned, a Pod to be ready...)
- For this to work, we need to set a *watch* (more on that later)
- In that case, we will need to set a *watch* (more on that later)
---
## Keeping track of state
- If we simply requeue the object to examine it 1 second later...
- ...We'll keep examining/requeuing it forever!
- We need to "remember" that we saw it (and when)
- Option 1: keep state in controller
(e.g. an internal `map`)
- Option 2: keep state in the object
(typically in its status field)
- Tradeoffs: concurrency / failover / control plane overhead...
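As a sketch of option 2, the state can live in the object itself; with the `seenAt` status field used on the next slides, a Machine could look like this (the timestamp is made up):
```yaml
apiVersion: useless.container.training/v1alpha1
kind: Machine
metadata:
  name: machine-1
spec:
  switchPosition: up
status:
  # Written by the controller; survives controller restarts
  # (at the cost of extra API server traffic)
  seenAt: "2023-02-23T10:00:00Z"
```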
---
## "Improving" our controller, take 2
- Let's store in the machine status the moment when we saw it
Let's store in the machine status the moment when we saw it:
```go
// +kubebuilder:printcolumn:JSONPath=".status.seenAt",name=Seen,type=date
type MachineStatus struct {
// Time at which the machine was noticed by our controller.
SeenAt *metav1.Time ``json:"seenAt,omitempty"``
@@ -446,6 +467,12 @@ Note: `date` fields don't display timestamps in the future.
(That's why for this example it's simpler to use `seenAt` rather than `changeAt`.)
And for better visibility, add this along with the other `printcolumn` comments:
```go
//+kubebuilder:printcolumn:JSONPath=".status.seenAt",name=Seen,type=date
```
---
## Set `seenAt`
@@ -457,7 +484,7 @@ if machine.Status.SeenAt == nil {
now := metav1.Now()
machine.Status.SeenAt = &now
if err := r.Status().Update(ctx, &machine); err != nil {
log.Info("error updating status.seenAt")
logger.Info("error updating status.seenAt")
return ctrl.Result{}, client.IgnoreNotFound(err)
}
return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
@@ -478,8 +505,9 @@ if machine.Spec.SwitchPosition != "down" {
changeAt := machine.Status.SeenAt.Time.Add(5 * time.Second)
if now.Time.After(changeAt) {
machine.Spec.SwitchPosition = "down"
machine.Status.SeenAt = nil
if err := r.Update(ctx, &machine); err != nil {
log.Info("error updating switch position")
logger.Info("error updating switch position")
return ctrl.Result{}, client.IgnoreNotFound(err)
}
}
@@ -496,15 +524,33 @@ if machine.Spec.SwitchPosition != "down" {
- We will now have two kinds of objects: machines, and switches
- Machines will store the number of switches in their spec
- Machines should have *at least* one switch, possibly *multiple ones*
- The position will now be stored in the switch, not the machine
- Our controller will automatically create switches if needed
- The machine will also expose the combined state of the switches
(a bit like the ReplicaSet controller automatically creates Pods)
- The switches will be tied to their machine through a label
(See next slide for an example)
(let's pick `machine=name-of-the-machine`)
---
## Switch state
- The position of a switch will now be stored in the switch
(not in the machine like in the first scenario)
- The machine will also expose the combined state of the switches
(through its status)
- The machine's status will be automatically updated by the controller
(each time a switch is added/changed/removed)
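Here is a sketch of what the two objects could look like in this scenario (the exact field names are detailed in the next slides; the values are made up):
```yaml
apiVersion: useless.container.training/v1alpha1
kind: Machine
metadata:
  name: machine-cz2vl
spec:
  switches: 3      # how many switches this machine should have
status:
  positions: ddd   # combined state, maintained by the controller
---
apiVersion: useless.container.training/v1alpha1
kind: Switch
metadata:
  name: switch-6wmjw
  labels:
    machine: machine-cz2vl   # ties the switch to its machine
spec:
  position: down
```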
---
@@ -516,7 +562,7 @@ NAME SWITCHES POSITIONS
machine-cz2vl 3 ddd
machine-vf4xk 1 d
[jp@hex ~]$ kubectl get switches --show-labels
[jp@hex ~]$ kubectl get switches --show-labels
NAME POSITION SEEN LABELS
switch-6wmjw down machine=machine-cz2vl
switch-b8csg down machine=machine-cz2vl
@@ -530,39 +576,95 @@ switch-rc59l down machine=machine-vf4xk
## Tasks
Create the new resource type (but don't create a controller):
1. Create the new resource type (but don't create a controller)
2. Update `machine_types.go` and `switch_types.go`
3. Implement logic to display machine status (status of its switches)
4. Implement logic to automatically create switches
5. Implement logic to flip all switches down immediately
6. Then tweak it so that a given machine doesn't flip more than one switch every 5 seconds
*See next slides for detailed steps!*
---
## Creating the new type
```bash
kubebuilder create api --group useless --version v1alpha1 --kind Switch
```
Update `machine_types.go` and `switch_types.go`.
Implement the logic so that the controller flips all switches down immediately.
Then change it so that a given machine doesn't flip more than one switch every 5 seconds.
See next slides for hints!
Note: this time, only create a new custom resource; not a new controller.
---
## Listing objects
We can use the `List` method with filters:
```go
var switches uselessv1alpha1.SwitchList
if err := r.List(ctx, &switches,
client.InNamespace(req.Namespace),
client.MatchingLabels{"machine": req.Name},
); err != nil {
log.Error(err, "unable to list switches of the machine")
return ctrl.Result{}, client.IgnoreNotFound(err)
}
log.Info("Found switches", "switches", switches)
```
## Updating our types
- Move the "switch position" and "seen at" to the new `Switch` type
- Update the `Machine` type to have:
- `spec.switches` (Go type: `int`, JSON type: `integer`)
- `status.positions` of type `string`
- Bonus points for adding [CRD Validation](https://book.kubebuilder.io/reference/markers/crd-validation.html) to the number of switches!
- Then install the new CRDs with `make install`
- Create a Machine, and a Switch linked to the Machine (by setting the `machine` label)
---
## Listing switches
- Switches are associated to Machines with a label
(`kubectl label switch switch-xyz machine=machine-xyz`)
- We can retrieve associated switches like this:
```go
var switches uselessv1alpha1.SwitchList
if err := r.List(ctx, &switches,
client.InNamespace(req.Namespace),
client.MatchingLabels{"machine": req.Name},
); err != nil {
logger.Error(err, "unable to list switches of the machine")
return ctrl.Result{}, client.IgnoreNotFound(err)
}
logger.Info("Found switches", "switches", switches)
```
---
## Updating status
- Each time we reconcile a Machine, let's update its status:
```go
status := ""
for _, sw := range switches.Items {
status += string(sw.Spec.Position[0])
}
machine.Status.Positions = status
if err := r.Status().Update(ctx, &machine); err != nil {
...
```
- Run the controller and check that POSITIONS gets updated
- Add more switches linked to the same machine
- ...The POSITIONS don't get updated, unless we restart the controller
- We'll see later how to fix that!
---
@@ -590,20 +692,28 @@ if err := r.Create(ctx, &sw); err != nil { ...
---
## Create missing switches
- In our reconciler, if a machine doesn't have enough switches, create them!
- Option 1: directly create the number of missing switches
- Option 2: create only one switch (and rely on later requeuing)
- Note: option 2 won't quite work yet, since we haven't set up *watches* yet
---
## Watches
- Our controller will correctly flip switches when it starts
- It will also react to machine updates
- But it won't react if we directly touch the switches!
- By default, it only monitors machines, not switches
- Our controller doesn't react when switches are created/updated/deleted
- We need to tell it to watch switches
- We also need to tell it how to map a switch to its machine
(so that the correct machine gets queued and reconciled when a switch is updated)
---
## Mapping a switch to its machine
@@ -611,16 +721,15 @@ if err := r.Create(ctx, &sw); err != nil { ...
Define the following helper function:
```go
func (r *MachineReconciler) machineOfSwitch(obj handler.MapObject) []ctrl.Request {
r.Log.Debug("mos", "obj", obj)
return []ctrl.Request{
ctrl.Request{
NamespacedName: types.NamespacedName{
Name: obj.Meta.GetLabels()["machine"],
Namespace: obj.Meta.GetNamespace(),
},
},
}
func (r *MachineReconciler) machineOfSwitch(obj client.Object) []ctrl.Request {
return []ctrl.Request{
ctrl.Request{
NamespacedName: types.NamespacedName{
Name: obj.GetLabels()["machine"],
Namespace: obj.GetNamespace(),
},
},
}
}
```
@@ -631,24 +740,46 @@ func (r *MachineReconciler) machineOfSwitch(obj handler.MapObject) []ctrl.Reques
Update the `SetupWithManager` method in the controller:
```go
// SetupWithManager sets up the controller with the Manager.
func (r *MachineReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&uselessv1alpha1.Machine{}).
Owns(&uselessv1alpha1.Switch{}).
Watches(
&source.Kind{Type: &uselessv1alpha1.Switch{}},
&handler.EnqueueRequestsFromMapFunc{
ToRequests: handler.ToRequestsFunc(r.machineOfSwitch),
}).
Complete(r)
return ctrl.NewControllerManagedBy(mgr).
For(&uselessv1alpha1.Machine{}).
Owns(&uselessv1alpha1.Switch{}).
Watches(
&source.Kind{Type: &uselessv1alpha1.Switch{}},
handler.EnqueueRequestsFromMapFunc(r.machineOfSwitch),
).
Complete(r)
}
```
After this, our controller should now react to switch changes.
---
## ...And a few extra imports
Import the following packages referenced by the previous code:
```go
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/source"
"k8s.io/apimachinery/pkg/types"
```
After this, when we update a switch, it should reflect on the machine.
(Try to change switch positions and see the machine status update!)
---
## Bonus points
## Flipping switches
- Now re-add logic to flip switches that are not in "down" position
- Re-add logic to wait a few seconds before flipping a switch
- Change the logic to toggle one switch per machine every few seconds
(i.e. don't change all the switches for a machine; move them one at a time)
- Handle "scale down" of a machine (by deleting extraneous switches)
@@ -660,9 +791,25 @@ After this, our controller should now react to switch changes.
---
## Other possible improvements
- Formalize resource ownership
(by setting `ownerReferences` in the switches; see the sketch after this list)
- This can simplify the watch mechanism a bit
- Allow to define a selector
(instead of using the hard-coded `machine` label)
- And much more!
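For instance, each switch could carry an `ownerReference` pointing to its machine, as in this sketch (the `uid` is a placeholder; it must match the machine's actual UID):
```yaml
apiVersion: useless.container.training/v1alpha1
kind: Switch
metadata:
  name: switch-6wmjw
  labels:
    machine: machine-cz2vl
  ownerReferences:
  - apiVersion: useless.container.training/v1alpha1
    kind: Machine
    name: machine-cz2vl
    uid: cccccccc-1111-2222-3333-444444444444  # placeholder UID
    controller: true
spec:
  position: down
```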
---
## Acknowledgements
- Useless Operator, by [L Körbes](https://twitter.com/ellenkorbes)
- Useless Operator, by [V Körbes](https://twitter.com/veekorbes)
[code](https://github.com/tilt-dev/uselessoperator)


@@ -141,7 +141,7 @@ class: extra-details
- There are external tools to address these shortcomings
(e.g.: [Stern](https://github.com/wercker/stern))
(e.g.: [Stern](https://github.com/stern/stern))
---


@@ -18,6 +18,108 @@
---
## Running containers with open ports
- Since `ping` doesn't have anything to connect to, we'll have to run something else
- We are going to use `jpetazzo/color`, a tiny HTTP server written in Go
- `jpetazzo/color` listens on port 80
- It serves a page showing the pod's name
(this will be useful when checking load balancing behavior)
- We could also use the `nginx` official image instead
(but we wouldn't be able to tell the backends from each other)
---
## Running our HTTP server
- We will create a deployment with `kubectl create deployment`
- This will create a Pod running our HTTP server
.lab[
- Create a deployment named `blue`:
```bash
kubectl create deployment blue --image=jpetazzo/color
```
]
---
## Connecting to the HTTP server
- Let's connect to the HTTP server directly
(just to make sure everything works fine; we'll add the Service later)
.lab[
- Get the IP address of the Pod:
```bash
kubectl get pods -o wide
```
- Send an HTTP request to the Pod:
```bash
curl http://`IP-ADDRESS`
```
]
You should see a response from the Pod.
---
class: extra-details
## Running with a local cluster
If you're running with a local cluster (Docker Desktop, KinD, minikube...),
you might get a connection timeout (or a message like "no route to host")
because the Pod isn't reachable directly from your local machine.
In that case, you can test the connection to the Pod by running a shell
*inside* the cluster:
```bash
kubectl run -it --rm my-test-pod --image=fedora
```
Then run `curl` in that Pod.
---
## The Pod doesn't have a "stable identity"
- The IP address that we used above isn't "stable"
(if the Pod gets deleted, the replacement Pod will have a different address)
.lab[
- Check the IP addresses of running Pods:
```bash
watch kubectl get pods -o wide
```
- Delete the Pod:
```bash
kubectl delete pod `blue-xxxxxxxx-yyyyy`
```
- Check that the replacement Pod has a different IP address
]
---
## Services in a nutshell
- Services give us a *stable endpoint* to connect to a pod or a group of pods
@@ -36,6 +138,164 @@
---
## Exposing our deployment
- Let's create a Service for our Deployment
.lab[
- Expose the HTTP port of our server:
```bash
kubectl expose deployment blue --port=80
```
- Look up which IP address was allocated:
```bash
kubectl get service
```
]
- By default, this created a `ClusterIP` service
(we'll discuss later the different types of services)
---
class: extra-details
## Services are layer 4 constructs
- Services can have IP addresses, but they are still *layer 4*
(i.e. a service is not just an IP address; it's an IP address + protocol + port)
- As a result: you *have to* indicate the port number for your service
(with some exceptions, like `ExternalName` or headless services, covered later)
---
## Testing our service
- We will now send a few HTTP requests to our Pod
.lab[
- Let's obtain the IP address that was allocated for our service, *programmatically:*
```bash
CLUSTER_IP=$(kubectl get svc blue -o go-template='{{ .spec.clusterIP }}')
```
<!--
```hide kubectl wait deploy blue --for condition=available```
```key ^D```
```key ^C```
-->
- Send a few requests:
```bash
for i in $(seq 10); do curl http://$CLUSTER_IP; done
```
]
---
## A *stable* endpoint
- Let's see what happens when the Pod has a problem
.lab[
- Keep sending requests to the Service address:
```bash
while sleep 0.3; do curl http://$CLUSTER_IP; done
```
- Meanwhile, delete the Pod:
```bash
kubectl delete pod `blue-xxxxxxxx-yyyyy`
```
]
- There might be a short interruption when we delete the pod...
- ...But requests will keep flowing after that (without requiring a manual intervention)
---
## Load balancing
- The Service will also act as a load balancer
(if there are multiple Pods in the Deployment)
.lab[
- Scale up the Deployment:
```bash
kubectl scale deployment blue --replicas=3
```
- Send a bunch of requests to the Service:
```bash
for i in $(seq 20); do curl http://$CLUSTER_IP; done
```
]
- Our requests are load balanced across the Pods!
---
## DNS integration
- Kubernetes provides an internal DNS resolver
- The resolver maps service names to their internal addresses
- By default, this only works *inside Pods* (not from the nodes themselves)
.lab[
- Get a shell in a Pod:
```bash
kubectl run --rm -it --image=fedora test-dns-integration
```
- Try to resolve the `blue` Service from the Pod:
```bash
curl blue
```
]
---
class: extra-details
## Under the hood...
- Check the content of `/etc/resolv.conf` inside a Pod
- It will have `nameserver X.X.X.X` (e.g. 10.96.0.10)
- Now check `kubectl get service kube-dns --namespace=kube-system`
- ...It's the same address! 😉
- The FQDN of a service is actually:
`<service-name>.<namespace>.svc.<cluster-domain>`
- `<cluster-domain>` defaults to `cluster.local`
- And the `search` includes `<namespace>.svc.<cluster-domain>`
---
## Advantages of services
- We don't need to look up the IP address of the pod(s)
@@ -54,510 +314,10 @@
(when a pod fails, the service seamlessly sends traffic to its replacement)
---
## Many kinds and flavors of service
- There are different types of services:
`ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName`
- There are also *headless services*
- Services can also have optional *external IPs*
- There is also another resource type called *Ingress*
(specifically for HTTP services)
- Wow, that's a lot! Let's start with the basics ...
---
## `ClusterIP`
- It's the default service type
- A virtual IP address is allocated for the service
(in an internal, private range; e.g. 10.96.0.0/12)
- This IP address is reachable only from within the cluster (nodes and pods)
- Our code can connect to the service using the original port number
- Perfect for internal communication, within the cluster
---
class: pic
![](images/kubernetes-services/11-CIP-by-addr.png)
---
class: pic
![](images/kubernetes-services/12-CIP-by-name.png)
---
class: pic
![](images/kubernetes-services/13-CIP-both.png)
---
class: pic
![](images/kubernetes-services/14-CIP-headless.png)
---
## `LoadBalancer`
- An external load balancer is allocated for the service
(typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...)
- This is available only when the underlying infrastructure provides some kind of
"load balancer as a service"
- Each service of that type will typically cost a little bit of money
(e.g. a few cents per hour on AWS or GCE)
- Ideally, traffic would flow directly from the load balancer to the pods
- In practice, it will often flow through a `NodePort` first
---
class: pic
![](images/kubernetes-services/31-LB-no-service.png)
---
class: pic
![](images/kubernetes-services/32-LB-plus-cip.png)
---
class: pic
![](images/kubernetes-services/33-LB-plus-lb.png)
---
class: pic
![](images/kubernetes-services/34-LB-internal-traffic.png)
---
class: pic
![](images/kubernetes-services/35-LB-pending.png)
---
class: pic
![](images/kubernetes-services/36-LB-ccm.png)
---
class: pic
![](images/kubernetes-services/37-LB-externalip.png)
---
class: pic
![](images/kubernetes-services/38-LB-external-traffic.png)
---
class: pic
![](images/kubernetes-services/39-LB-all-traffic.png)
---
class: pic
![](images/kubernetes-services/41-NP-why.png)
---
class: pic
![](images/kubernetes-services/42-NP-how-1.png)
---
class: pic
![](images/kubernetes-services/43-NP-how-2.png)
---
class: pic
![](images/kubernetes-services/44-NP-how-3.png)
---
class: pic
![](images/kubernetes-services/45-NP-how-4.png)
---
class: pic
![](images/kubernetes-services/46-NP-how-5.png)
---
class: pic
![](images/kubernetes-services/47-NP-only.png)
---
## `NodePort`
- A port number is allocated for the service
(by default, in the 30000-32767 range)
- That port is made available *on all our nodes* and anybody can connect to it
(we can connect to any node on that port to reach the service)
- Our code needs to be changed to connect to that new port number
- Under the hood: `kube-proxy` sets up a bunch of `iptables` rules on our nodes
- Sometimes, it's the only available option for external traffic
(e.g. most clusters deployed with kubeadm or on-premises)
---
## Running containers with open ports
- Since `ping` doesn't have anything to connect to, we'll have to run something else
- We could use the `nginx` official image, but ...
... we wouldn't be able to tell the backends from each other!
- We are going to use `jpetazzo/color`, a tiny HTTP server written in Go
- `jpetazzo/color` listens on port 80
- It serves a page showing the pod's name
(this will be useful when checking load balancing behavior)
---
## Creating a deployment for our HTTP server
- We will create a deployment with `kubectl create deployment`
- Then we will scale it with `kubectl scale`
.lab[
- In another window, watch the pods (to see when they are created):
```bash
kubectl get pods -w
```
<!--
```wait NAME```
```tmux split-pane -h```
-->
- Create a deployment for this very lightweight HTTP server:
```bash
kubectl create deployment blue --image=jpetazzo/color
```
- Scale it to 10 replicas:
```bash
kubectl scale deployment blue --replicas=10
```
]
---
## Exposing our deployment
- We'll create a default `ClusterIP` service
.lab[
- Expose the HTTP port of our server:
```bash
kubectl expose deployment blue --port=80
```
- Look up which IP address was allocated:
```bash
kubectl get service
```
]
---
## Services are layer 4 constructs
- You can assign IP addresses to services, but they are still *layer 4*
(i.e. a service is not an IP address; it's an IP address + protocol + port)
- This is caused by the current implementation of `kube-proxy`
(it relies on mechanisms that don't support layer 3)
- As a result: you *have to* indicate the port number for your service
(with some exceptions, like `ExternalName` or headless services, covered later)
---
## Testing our service
- We will now send a few HTTP requests to our pods
.lab[
- Let's obtain the IP address that was allocated for our service, *programmatically:*
```bash
IP=$(kubectl get svc blue -o go-template --template '{{ .spec.clusterIP }}')
```
<!--
```hide kubectl wait deploy blue --for condition=available```
```key ^D```
```key ^C```
-->
- Send a few requests:
```bash
curl http://$IP:80/
```
]
--
Try it a few times! Our requests are load balanced across multiple pods.
---
class: extra-details
## `ExternalName`
- Services of type `ExternalName` are quite different
- No load balancer (internal or external) is created
- Only a DNS entry gets added to the DNS managed by Kubernetes
- That DNS entry will just be a `CNAME` to a provided record
Example:
```bash
kubectl create service externalname k8s --external-name kubernetes.io
```
*Creates a CNAME `k8s` pointing to `kubernetes.io`*
---
class: extra-details
## External IPs
- We can add an External IP to a service, e.g.:
```bash
kubectl expose deploy my-little-deploy --port=80 --external-ip=1.2.3.4
```
- `1.2.3.4` should be the address of one of our nodes
(it could also be a virtual address, service address, or VIP, shared by multiple nodes)
- Connections to `1.2.3.4:80` will be sent to our service
- External IPs will also show up on services of type `LoadBalancer`
(they will be added automatically by the process provisioning the load balancer)
---
class: extra-details
## Headless services
- Sometimes, we want to access our scaled services directly:
- if we want to save a tiny little bit of latency (typically less than 1ms)
- if we need to connect over arbitrary ports (instead of a few fixed ones)
- if we need to communicate over another protocol than UDP or TCP
- if we want to decide how to balance the requests client-side
- ...
- In that case, we can use a "headless service"
---
class: extra-details
## Creating a headless service
- A headless service is obtained by setting the `clusterIP` field to `None`
(Either with `--cluster-ip=None`, or by providing a custom YAML)
- As a result, the service doesn't have a virtual IP address
- Since there is no virtual IP address, there is no load balancer either
- CoreDNS will return the pods' IP addresses as multiple `A` records
- This gives us an easy way to discover all the replicas for a deployment
---
class: extra-details
## Services and endpoints
- A service has a number of "endpoints"
- Each endpoint is a host + port where the service is available
- The endpoints are maintained and updated automatically by Kubernetes
.lab[
- Check the endpoints that Kubernetes has associated with our `blue` service:
```bash
kubectl describe service blue
```
]
In the output, there will be a line starting with `Endpoints:`.
That line will list a bunch of addresses in `host:port` format.
---
class: extra-details
## Viewing endpoint details
- When we have many endpoints, our display commands truncate the list
```bash
kubectl get endpoints
```
- If we want to see the full list, we can use one of the following commands:
```bash
kubectl describe endpoints blue
kubectl get endpoints blue -o yaml
```
- These commands will show us a list of IP addresses
- These IP addresses should match the addresses of the corresponding pods:
```bash
kubectl get pods -l app=blue -o wide
```
---
class: extra-details
## `endpoints` not `endpoint`
- `endpoints` is the only resource that cannot be singular
```bash
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
```
- This is because the type itself is plural (unlike every other resource)
- There is no `endpoint` object: `type Endpoints struct`
- The type doesn't represent a single endpoint, but a list of endpoints
---
class: extra-details
## The DNS zone
- In the `kube-system` namespace, there should be a service named `kube-dns`
- This is the internal DNS server that can resolve service names
- The default domain name for the service we created is `default.svc.cluster.local`
.lab[
- Get the IP address of the internal DNS server:
```bash
IP=$(kubectl -n kube-system get svc kube-dns -o jsonpath={.spec.clusterIP})
```
- Resolve the cluster IP for the `blue` service:
```bash
host blue.default.svc.cluster.local $IP
```
]
---
class: extra-details
## `Ingress`
- Ingresses are another type (kind) of resource
- They are specifically for HTTP services
(not TCP or UDP)
- They can also handle TLS certificates, URL rewriting ...
- They require an *Ingress Controller* to function
---
class: pic
![](images/kubernetes-services/61-ING.png)
---
class: pic
![](images/kubernetes-services/62-ING-path.png)
---
class: pic
![](images/kubernetes-services/63-ING-policy.png)
---
class: pic
![](images/kubernetes-services/64-ING-nolocal.png)
???
:EN:- Service discovery and load balancing
:EN:- Accessing pods through services
:EN:- Service types: ClusterIP, NodePort, LoadBalancer
:EN:- Service discovery and load balancing
:FR:- Exposer un service
:FR:- Différents types de services : ClusterIP, NodePort, LoadBalancer
:FR:- Utiliser CoreDNS pour la *service discovery*
:FR:- Le DNS interne de Kubernetes et la *service discovery*

View File

@@ -170,9 +170,9 @@ def hash_bytes(data):
headers={"Content-Type": "application/octet-stream"})
```
(Full source code available [here](
https://@@GITREPO@@/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
))
(Feel free to check the [full source code][dockercoins-worker-code] of the worker!)
[dockercoins-worker-code]: https://@@GITREPO@@/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
---

View File

@@ -142,7 +142,7 @@ configMapGenerator:
- overlays can only *add* resources, not *remove* them
- See the full list of [eschewed features](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/eschewedFeatures.md) for more details
- See the full list of [eschewed features](https://kubectl.docs.kubernetes.io/faq/kustomize/eschewedfeatures/) for more details
---

View File

@@ -156,7 +156,7 @@
- Install Kyverno:
```bash
kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/release-1.5/definitions/release/install.yaml
kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/release-1.7/config/release/install.yaml
```
]
@@ -302,23 +302,35 @@
---
## Invalid references
## Comparing "old" and "new"
- The fields of the webhook payload are available through `{{ request }}`
- For UPDATE requests, we can access:
`{{ request.oldObject }}` → the object as it is right now (before the request)
`{{ request.object }}` → the object with the changes made by the request
---
## Missing labels
- We can access the `color` label through `{{ request.object.metadata.labels.color }}`
- We use a *precondition* to make sure the label exists in both "old" and "new" objects
- Then in the *deny* block we can compare the old and new values
(and reject changes)
- "Old" and "new" versions of the pod can be referenced through
`{{ request.oldObject }}` and `{{ request.object }}`
- If we reference a label (or any field) that doesn't exist, the policy fails
(with an error similar to `JMESPath query failed: Unknown key ... in path`)
- Except in *preconditions*: it then evaluates to an empty string
- To work around that, [use an OR expression][non-existence-checks]:
`{{ request.object.metadata.labels.color || '' }}`
- Note that in older versions of Kyverno, this wasn't always necessary
(e.g. in *preconditions*, a missing label would evaluate to an empty string)
[non-existence-checks]: https://kyverno.io/docs/writing-policies/jmespath/#non-existence-checks
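Putting these pieces together, such a policy could look roughly like this (a sketch, not the exact policy used in this section; the policy and rule names are made up):
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-color-changes   # hypothetical name
spec:
  validationFailureAction: enforce
  rules:
  - name: check-color-label
    match:
      any:
      - resources:
          kinds: [ Pod ]
    preconditions:
      all:
      - key: "{{ request.operation }}"
        operator: Equals
        value: UPDATE
    validate:
      message: "The color label cannot be changed."
      deny:
        conditions:
          all:
          # the OR expressions guard against missing labels
          - key: "{{ request.object.metadata.labels.color || '' }}"
            operator: NotEquals
            value: "{{ request.oldObject.metadata.labels.color || '' }}"
```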
---
@@ -594,7 +606,7 @@ class: extra-details
## Footprint
- 7 CRDs
- 8 CRDs
- 5 webhooks


@@ -69,12 +69,14 @@ Exactly what we need!
(no dependencies, extra libraries to install, etc)
- Binary releases are available [here](https://github.com/stern/stern/releases) on GitHub
- Binary releases are available [on GitHub][stern-releases]
- Stern is also available through most package managers
(e.g. on macOS, we can `brew install stern` or `sudo port install stern`)
[stern-releases]: https://github.com/stern/stern/releases
---
## Using Stern


@@ -256,9 +256,9 @@ class: extra-details
- or stored in the node's `spec.podCIDR` field
.footnote[See [here] for more details about this `kubenet` plugin.]
.footnote[See [here][kubenet-plugin] for more details about this `kubenet` plugin.]
[here]: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet
[kubenet-plugin]: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet
---


@@ -413,6 +413,8 @@ troubleshoot easily, without having to poke holes in our firewall.
- [Tufin Network Policy Viewer](https://orca.tufin.io/netpol/)
- [`kubectl np-viewer`](https://github.com/runoncloud/kubectl-np-viewer) (kubectl plugin)
- Two resources by [Ahmet Alp Balkan](https://ahmet.im/):
- a [very good talk about network policies](https://www.youtube.com/watch?list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb&v=3gGpMmYeEO8) at KubeCon North America 2017


@@ -1,4 +1,4 @@
# Writing an tiny operator
# Writing a tiny operator
- Let's look at a simple operator


@@ -40,9 +40,7 @@
- Each person gets their own private set of VMs
- The connection information is on a shared spreadsheet
(URL to be shared through portal and chat)
- Each person should have a printed card with connection information
- We will connect to these VMs with SSH


@@ -330,8 +330,8 @@ This is what the spec of a Pod with resources will look like:
```yaml
containers:
- name: httpenv
image: jpetazzo/httpenv
- name: blue
image: jpetazzo/color
resources:
limits:
memory: "100Mi"

slides/k8s/service-types.md (new file, 359 lines)

@@ -0,0 +1,359 @@
# Service Types
- There are different types of services:
`ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName`
- There are also *headless services*
- Services can also have optional *external IPs*
- There is also another resource type called *Ingress*
(specifically for HTTP services)
- Wow, that's a lot! Let's start with the basics ...
---
## `ClusterIP`
- It's the default service type
- A virtual IP address is allocated for the service
(in an internal, private range; e.g. 10.96.0.0/12)
- This IP address is reachable only from within the cluster (nodes and pods)
- Our code can connect to the service using the original port number
- Perfect for internal communication, within the cluster
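For reference, this is roughly the manifest that `kubectl expose deployment blue --port=80` generates (assuming the `blue` deployment used elsewhere in these slides):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: ClusterIP    # the default type; can be omitted
  selector:
    app: blue        # matches the pods of the "blue" deployment
  ports:
  - port: 80         # the port the service listens on
    targetPort: 80   # the container port (defaults to "port")
```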
---
class: pic
![](images/kubernetes-services/11-CIP-by-addr.png)
---
class: pic
![](images/kubernetes-services/12-CIP-by-name.png)
---
class: pic
![](images/kubernetes-services/13-CIP-both.png)
---
class: pic
![](images/kubernetes-services/14-CIP-headless.png)
---
## `LoadBalancer`
- An external load balancer is allocated for the service
(typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...)
- This is available only when the underlying infrastructure provides some kind of
"load balancer as a service"
- Each service of that type will typically cost a little bit of money
(e.g. a few cents per hour on AWS or GCE)
- Ideally, traffic would flow directly from the load balancer to the pods
- In practice, it will often flow through a `NodePort` first
---
class: pic
![](images/kubernetes-services/31-LB-no-service.png)
---
class: pic
![](images/kubernetes-services/32-LB-plus-cip.png)
---
class: pic
![](images/kubernetes-services/33-LB-plus-lb.png)
---
class: pic
![](images/kubernetes-services/34-LB-internal-traffic.png)
---
class: pic
![](images/kubernetes-services/35-LB-pending.png)
---
class: pic
![](images/kubernetes-services/36-LB-ccm.png)
---
class: pic
![](images/kubernetes-services/37-LB-externalip.png)
---
class: pic
![](images/kubernetes-services/38-LB-external-traffic.png)
---
class: pic
![](images/kubernetes-services/39-LB-all-traffic.png)
---
class: pic
![](images/kubernetes-services/41-NP-why.png)
---
class: pic
![](images/kubernetes-services/42-NP-how-1.png)
---
class: pic
![](images/kubernetes-services/43-NP-how-2.png)
---
class: pic
![](images/kubernetes-services/44-NP-how-3.png)
---
class: pic
![](images/kubernetes-services/45-NP-how-4.png)
---
class: pic
![](images/kubernetes-services/46-NP-how-5.png)
---
class: pic
![](images/kubernetes-services/47-NP-only.png)
---
## `NodePort`
- A port number is allocated for the service
(by default, in the 30000-32767 range)
- That port is made available *on all our nodes* and anybody can connect to it
(we can connect to any node on that port to reach the service)
- Our code needs to be changed to connect to that new port number
- Under the hood: `kube-proxy` sets up a bunch of `iptables` rules on our nodes
- Sometimes, it's the only available option for external traffic
(e.g. most clusters deployed with kubeadm or on-premises)
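A sketch of a `NodePort` service for the same `blue` deployment (the `nodePort` value is arbitrary, as long as it is in the allowed range; omit it to let Kubernetes pick one):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: NodePort
  selector:
    app: blue
  ports:
  - port: 80         # ClusterIP port (still allocated)
    nodePort: 30080  # port opened on every node
```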
---
class: extra-details
## `ExternalName`
- Services of type `ExternalName` are quite different
- No load balancer (internal or external) is created
- Only a DNS entry gets added to the DNS managed by Kubernetes
- That DNS entry will just be a `CNAME` to a provided record
Example:
```bash
kubectl create service externalname k8s --external-name kubernetes.io
```
*Creates a CNAME `k8s` pointing to `kubernetes.io`*
---
class: extra-details
## External IPs
- We can add an External IP to a service, e.g.:
```bash
kubectl expose deploy my-little-deploy --port=80 --external-ip=1.2.3.4
```
- `1.2.3.4` should be the address of one of our nodes
(it could also be a virtual address, service address, or VIP, shared by multiple nodes)
- Connections to `1.2.3.4:80` will be sent to our service
- External IPs will also show up on services of type `LoadBalancer`
(they will be added automatically by the process provisioning the load balancer)
---
class: extra-details
## Headless services
- Sometimes, we want to access our scaled services directly:
- if we want to save a tiny little bit of latency (typically less than 1ms)
- if we need to connect over arbitrary ports (instead of a few fixed ones)
- if we need to communicate over another protocol than UDP or TCP
- if we want to decide how to balance the requests client-side
- ...
- In that case, we can use a "headless service"
---
class: extra-details
## Creating a headless service
- A headless service is obtained by setting the `clusterIP` field to `None`
(Either with `--cluster-ip=None`, or by providing a custom YAML)
- As a result, the service doesn't have a virtual IP address
- Since there is no virtual IP address, there is no load balancer either
- CoreDNS will return the pods' IP addresses as multiple `A` records
- This gives us an easy way to discover all the replicas for a deployment
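A sketch of a headless service for the `blue` deployment (the service name is made up; only `clusterIP: None` differs from a regular `ClusterIP` service):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: blue-headless
spec:
  clusterIP: None    # this is what makes the service "headless"
  selector:
    app: blue
  ports:
  - port: 80
```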
---
class: extra-details
## Services and endpoints
- A service has a number of "endpoints"
- Each endpoint is a host + port where the service is available
- The endpoints are maintained and updated automatically by Kubernetes
.lab[
- Check the endpoints that Kubernetes has associated with our `blue` service:
```bash
kubectl describe service blue
```
]
In the output, there will be a line starting with `Endpoints:`.
That line will list a bunch of addresses in `host:port` format.
---
class: extra-details
## Viewing endpoint details
- When we have many endpoints, our display commands truncate the list
```bash
kubectl get endpoints
```
- If we want to see the full list, we can use one of the following commands:
```bash
kubectl describe endpoints blue
kubectl get endpoints blue -o yaml
```
- These commands will show us a list of IP addresses
- These IP addresses should match the addresses of the corresponding pods:
```bash
kubectl get pods -l app=blue -o wide
```
---
class: extra-details
## `endpoints` not `endpoint`
- `endpoints` is the only resource that cannot be singular
```bash
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
```
- This is because the type itself is plural (unlike every other resource)
- There is no `endpoint` object: `type Endpoints struct`
- The type doesn't represent a single endpoint, but a list of endpoints
---
class: extra-details
## `Ingress`
- Ingresses are another type (kind) of resource
- They are specifically for HTTP services
(not TCP or UDP)
- They can also handle TLS certificates, URL rewriting ...
- They require an *Ingress Controller* to function
---
class: pic
![](images/kubernetes-services/61-ING.png)
---
class: pic
![](images/kubernetes-services/62-ING-path.png)
---
class: pic
![](images/kubernetes-services/63-ING-policy.png)
---
class: pic
![](images/kubernetes-services/64-ING-nolocal.png)
???
:EN:- Service types: ClusterIP, NodePort, LoadBalancer
:FR:- Différents types de services : ClusterIP, NodePort, LoadBalancer


@@ -18,6 +18,24 @@
---
### CoLiMa
- Container runtimes for LiMa
(LiMa = Linux on macOS)
- For macOS only (Intel and ARM architectures)
- CLI-driven (no GUI like Docker/Rancher Desktop)
- Supports containerd, Docker, Kubernetes
- Installable with brew, nix, or ports
- More info: https://github.com/abiosoft/colima
---
## Docker Desktop
- Available on Linux, Mac, and Windows
@@ -79,6 +97,8 @@
- Requires Docker (obviously!)
- Should also work with Podman and Rootless Docker
- Deploying a single node cluster using the latest version is simple:
```bash
kind create cluster
@@ -92,6 +112,20 @@
---
## [MicroK8s](https://microk8s.io/)
- Available on Linux, and since recently, on Mac and Windows as well
- The Linux version is installed through Snap
(which is pre-installed on all recent versions of Ubuntu)
- Also supports clustering (as in, multiple machines running MicroK8s)
- DNS is not enabled by default; enable it with `microk8s enable dns`
---
## [Minikube](https://minikube.sigs.k8s.io/docs/)
- The "legacy" option!
@@ -110,20 +144,6 @@
---
## [MicroK8s](https://microk8s.io/)
- Available on Linux, and since recently, on Mac and Windows as well
- The Linux version is installed through Snap
(which is pre-installed on all recent versions of Ubuntu)
- Also supports clustering (as in, multiple machines running MicroK8s)
- DNS is not enabled by default; enable it with `microk8s enable dns`
---
## [Rancher Desktop](https://rancherdesktop.io/)
- Available on Linux, Mac, and Windows


@@ -389,7 +389,7 @@ class: extra-details
- A replacement Pod is created on another Node
- ... But it doens't start yet!
- ... But it doesn't start yet!
- Why? 🤔


@@ -1,109 +0,0 @@
title: |
Advanced
Kubernetes
chat: "[Slack](https://newrelic.slack.com/archives/C0438EFM97F)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2022-09-nr2.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- #1
- k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
- interlude-form.md
- k8s/multinode.md
- k8s/cni.md
- k8s/interco.md
- k8s/cni-internals.md
- k8s/apilb.md
- #2
- k8s/demo-apps.md
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/user-cert.md
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/pod-security-intro.md
- k8s/pod-security-policies.md
- k8s/pod-security-admission.md
- exercises/netpol-details.md
- exercises/rbac-details.md
- #3
- k8s/helm-intro.md
- k8s/ingress.md
- k8s/cert-manager.md
- k8s/ingress-tls.md
- k8s/ingress-advanced.md
- k8s/kustomize.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- k8s/helm-dependencies.md
- k8s/helm-values-schema-validation.md
- k8s/helm-secrets.md
- exercises/helm-generic-chart-details.md
- exercises/helm-umbrella-chart-details.md
- #4
- k8s/extending-api.md
- k8s/admission.md
- k8s/cainjector.md
- k8s/kyverno.md
- k8s/crd.md
- k8s/operators.md
- k8s/sealed-secrets.md
- k8s/operators-design.md
- k8s/operators-example.md
- k8s/owners-and-dependents.md
- k8s/events.md
- k8s/finalizers.md
- exercises/sealed-secrets-details.md
- #5
- k8s/resource-limits.md
- k8s/cluster-sizing.md
- k8s/cluster-autoscaler.md
- k8s/horizontal-pod-autoscaler.md
- k8s/aggregation-layer.md
- k8s/metrics-server.md
- k8s/hpa-v2.md
- k8s/batch-jobs.md
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md
- k8s/stateful-failover.md
- shared/thankyou.md
-
- |
# (Extra content I)
- k8s/setup-devel.md
- k8s/accessinternal.md
- k8s/kubectlproxy.md
- k8s/k9s.md
- k8s/tilt.md
- k8s/ytt.md
-
- |
# (Extra content II)
- k8s/internal-apis.md
- k8s/staticpods.md
- k8s/cluster-upgrade.md
- k8s/control-plane-auth.md
- k8s/kubebuilder.md

slides/kube.yml (new file, 95 lines)

@@ -0,0 +1,95 @@
title: |
Intermediate Kubernetes
chat: "[Mattermost](https://nasa.container.training/mattermost/)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2023-02-nasa.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- shared/toc.md
- # 1
#- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
- k8s/kubectl-run.md
- k8s/kubectlexpose.md
- k8s/service-types.md
- k8s/kubenet.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- exercises/k8sfundamentals-details.md
- k8s/ourapponkube.md
#- k8s/exercise-wordsmith.md
- # 2
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- k8s/namespaces.md
- k8s/yamldeploy.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- shared/yaml.md
- k8s/authoring-yaml.md
- k8s/setup-overview.md
- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/kubectlproxy.md
- exercises/localcluster-details.md
- # 3
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/ingress.md
#- k8s/healthchecks-more.md
- exercises/healthchecks-details.md
- # 4
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- exercises/netpol-details.md
- exercises/rbac-details.md
- # 5
#- k8s/ingress-tls.md
- k8s/volumes.md
#- k8s/exercise-configmap.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
- k8s/secrets.md
- k8s/batch-jobs.md
- k8s/dashboard.md
- k8s/k9s.md
- k8s/tilt.md
- shared/thankyou.md


@@ -1,21 +1,13 @@
## Intros
## Introductions
- Hello! We are:
- Hello! I'm Jérôme Petazzoni ([@jpetazzo], [@jpetazzo@hachyderm.io])
- Jérôme Petazzoni ([@jpetazzo])
- Dana Engebretson ([@bigdana])
- The training will run from 8am to noon, Monday to Friday
- There will be breaks every hour
- The training will run for 4 hours, with a 10-minute break every hour
- Feel free to interrupt for questions at any time
- *Especially when you see full screen container pictures!*
(I will watch them in awkward silence while I wait for your questions)
- Live feedback, questions, help: @@CHAT@@
<!-- -->
@@ -24,17 +16,22 @@
[@bigdana]: https://twitter.com/bigdana
[EphemeraSearch]: https://ephemerasearch.com/
[@jpetazzo]: https://twitter.com/jpetazzo
[@jpetazzo@hachyderm.io]: https://hachyderm.io/@jpetazzo
[@s0ulshake]: https://twitter.com/s0ulshake
[Quantgene]: https://www.quantgene.com/
---
## Dynamic content
- The content of this course will be adapted to suit your needs!
- We'll share a link to a form so you can tell us what we should work on
- Expect the content of the deck to change between Monday and Tuesday
(we will reorder/reorganize the content accordingly)
## Exercises
- At the end of each day, there is a series of exercises
- To make the most out of the training, please try the exercises!
(it will help to practice and memorize the content of the day)
- We recommend taking at least one hour to work on the exercises
(if you understood the content of the day, it will be much faster)
- Each day will start with a quick review of the exercises of the previous day


@@ -4,6 +4,8 @@
@@SLIDES@@
- This is a public URL, you're welcome to share it with others!
- Use arrows to move to next/previous slide
(up, down, left, right, page up, page down)
@@ -16,6 +18,28 @@
---
## These slides are open source
- The sources of these slides are available in a public GitHub repository:
https://@@GITREPO@@
- These slides are written in Markdown
- You are welcome to share, re-use, re-mix these slides
- Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...
.footnote[👇 Try it! The source file will be shown and you can view it on GitHub and fork and edit it.]
<!--
.lab[
```open https://@@GITREPO@@/tree/master/slides/common/about-slides.md```
]
-->
---
## Accessing these slides later
- Slides will remain online so you can review them later if needed
@@ -28,31 +52,23 @@
(then open the file `@@HTML@@`)
- You will find new versions of these slides on:
https://container.training/
- You can also generate a PDF of the slides
(by printing them to a file; but be patient with your browser!)
---
## These slides are open source
## These slides are constantly updated
- You are welcome to use, re-use, share these slides
- These slides are written in Markdown
- The sources of these slides are available in a public GitHub repository:
- Feel free to check the GitHub repository for updates:
https://@@GITREPO@@
- Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...
- Look for branches named YYYY-MM-...
.footnote[👇 Try it! The source file will be shown and you can view it on GitHub and fork and edit it.]
- You can also find specific decks and other resources on:
https://container.training/
<!--
.lab[
```open https://@@GITREPO@@/tree/master/slides/common/about-slides.md```
]
-->
---


@@ -149,7 +149,7 @@ You are welcome to use the method that you feel the most comfortable with.
---
## Tmux cheat sheet
## Tmux cheat sheet (basic)
[Tmux](https://en.wikipedia.org/wiki/Tmux) is a terminal multiplexer like `screen`.
@@ -159,13 +159,35 @@ But some of us like to use it to switch between terminals.
<br/>
It has been preinstalled on your workshop nodes.*
- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
- Ctrl-b " → split window top/bottom
- Ctrl-b % → split window left/right
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b arrows → navigate to other windows
- You can start a new session with `tmux`
<br/>
(or resume or share an existing session with `tmux attach`)
- Then use these keyboard shortcuts:
- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
- Ctrl-b " → split window top/bottom
- Ctrl-b % → split window left/right
- Ctrl-b arrows → navigate within split windows
---
## Tmux cheat sheet (advanced)
- Ctrl-b d → detach session
- tmux attach → re-attach to session
<br/>
(resume it later with `tmux attach`)
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b , → rename window
- Ctrl-b Ctrl-o → cycle pane position (e.g. switch top/bottom)
- Ctrl-b PageUp → enter scrollback mode
<br/>
(use PageUp/PageDown to scroll; Ctrl-c or Enter to exit scrollback)


@@ -1,4 +1,4 @@
# Pre-requirements
## Pre-requirements
- Be comfortable with the UNIX command line

slides/shared/yaml.md (new file, 310 lines)

@@ -0,0 +1,310 @@
# Gentle introduction to YAML
- YAML Ain't Markup Language (according to [yaml.org][yaml])
- *Almost* required when working with containers:
- Docker Compose files
- Kubernetes manifests
- Many CI pipelines (GitHub, GitLab...)
- If you don't know much about YAML, this is for you!
[yaml]: https://yaml.org/
---
## What is it?
- Data representation language
```yaml
- country: France
capital: Paris
code: fr
population: 68042591
- country: Germany
capital: Berlin
code: de
population: 84270625
- country: Norway
capital: Oslo
code: no # It's a trap!
population: 5425270
```
- Even without knowing YAML, we probably can add a country to that file :)
---
## Trying YAML
- Method 1: in the browser
https://onlineyamltools.com/convert-yaml-to-json
https://onlineyamltools.com/highlight-yaml
- Method 2: in a shell
```bash
yq . foo.yaml
```
- Method 3: in Python
```python
import yaml; yaml.safe_load("""
- country: France
capital: Paris
""")
```
---
## Basic stuff
- Strings, numbers, boolean values, `null`
- Sequences (=arrays, lists)
- Mappings (=objects)
- Superset of JSON
(if you know JSON, you can just write JSON; see the example below)
- Comments start with `#`
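For example, this JSON document is also valid YAML:
```yaml
{ "country": "France", "capital": "Paris", "population": 68042591 }
```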
---
## Sequences
- Example: sequence of strings
```yaml
[ "france", "germany", "norway" ]
```
- Example: the same sequence, without the double-quotes
```yaml
[ france, germany, norway ]
```
- Example: the same sequence, in "block collection style" (=multi-line)
```yaml
- france
- germany
- norway
```
---
## Mappings
- Example: mapping strings to numbers
```yaml
{ "france": 68042591, "germany": 84270625, "norway": 5425270 }
```
- Example: the same mapping, without the double-quotes
```yaml
{ "france": 68042591, "germany": 84270625, "norway": 5425270 }
```
- Example: the same mapping, in "block collection style"
```yaml
france: 68042591
germany: 84270625
norway: 5425270
```
---
## Combining types
- In a sequence (or mapping) we can have different types
(including other sequences or mappings)
- Example:
```yaml
questions: [ name, quest, favorite color ]
answers: [ "Arthur, King of the Britons", Holy Grail, purple, 42 ]
```
- Note that we need to quote "Arthur" because of the comma
- Note that we don't have the same number of elements in questions and answers
---
## More combinations
- Example:
```yaml
- service: nginx
ports: [ 80, 443 ]
- service: bind
ports: [ 53/tcp, 53/udp ]
- service: ssh
ports: 22
```
- Note that `ports` doesn't always have the same type
(the code handling that data will probably have to be smart!)
---
## ⚠️ Automatic booleans
```yaml
codes:
france: fr
germany: de
norway: no
```
--
```json
{
"codes": {
"france": "fr",
"germany": "de",
"norway": false
}
}
```
---
## ⚠️ Automatic booleans
- `no` can become `false`
(it depends on the YAML parser used)
- It should be quoted instead:
```yaml
codes:
france: fr
germany: de
norway: "no"
```
---
## ⚠️ Automatic floats
```yaml
version:
libfoo: 1.10
fooctl: 1.0
```
--
```json
{
"version": {
"libfoo": 1.1,
"fooctl": 1
}
}
```
---
## ⚠️ Automatic floats
- Trailing zeros disappear
- These should also be quoted:
```yaml
version:
libfoo: "1.10"
fooctl: "1.0"
```
---
## ⚠️ Automatic times
```yaml
portmap:
- 80:80
- 22:22
```
--
```json
{
"portmap": [
"80:80",
1342
]
}
```
---
## ⚠️ Automatic times
- `22:22` becomes `1342`
- That's 22 minutes and 22 seconds = 1342 seconds
- Again, it should be quoted
---
class: extra-details
## Advanced features
Anchors let you "memorize" and re-use content:
```yaml
debian: &debian
packages: deb
latest-stable: bullseye
also-debian: *debian
ubuntu:
<<: *debian
latest-stable: jammy
```
---
class: extra-details
## YAML, good or evil?
- Natural progression from XML to JSON to YAML
- There are other data languages out there
(e.g. HCL, domain-specific things crafted with Ruby, CUE...)
- Compromises are made, for instance:
- more user-friendly → more "magic" with side effects
- more powerful → steeper learning curve
- Love it or loathe it but it's a good idea to understand it!
- Interesting tool if you appreciate YAML: https://carvel.dev/ytt/
???
:EN:- Understanding YAML and its gotchas
:FR:- Comprendre le YAML et ses subtilités