Compare commits


118 Commits

Author SHA1 Message Date
Jérôme Petazzoni
8c62ba7b28 🏖️ Highfive May 2025 2025-06-13 08:52:05 +02:00
Jérôme Petazzoni
71ee3012fb Add DMUC advanced exercises 2025-06-13 08:49:59 +02:00
Jérôme Petazzoni
5ed12d6631 🔧 Tweak backup chapter 2025-06-13 08:49:59 +02:00
Jérôme Petazzoni
839b50a7a6 📃 Update chapter on static pods 2025-06-13 08:49:59 +02:00
Jérôme Petazzoni
e0fdbfdb50 📃 Update control plane auth section 2025-06-13 08:49:59 +02:00
Jérôme Petazzoni
d9f53288f2 🔒️ Update section on user key and cert generation 2025-06-13 08:49:59 +02:00
Jérôme Petazzoni
697e9cf9f7 🔗 Links to docs and blog posts about ephemeral storage isolation 2025-06-13 08:49:59 +02:00
Jérôme Petazzoni
6b06fa2b35 🔗 Update Kyverno doc links 2025-06-13 08:49:59 +02:00
Jérôme Petazzoni
240b2a24e2 🐞 Typo fix 2025-06-13 08:49:59 +02:00
Hiranyey Gajbhiye
4bc97aa1b8 Update concepts-k8s.md
Fixed spelling mistake if it was unintentional
2025-06-13 08:49:59 +02:00
Jérôme Petazzoni
798dc2216c 📃 Clarify what needs to be scaled up in healthcheck lab 2025-06-13 08:49:59 +02:00
Jérôme Petazzoni
5117b27386 🔧 Tweak portal VM size to use GP4 (GP2 is deprecated) 2025-06-13 08:49:59 +02:00
Jérôme Petazzoni
d2f736a850 📍 Pin express version in webui 2025-06-13 08:49:59 +02:00
Jérôme Petazzoni
01c374d0a4 Merge pull request #664 from lpiot/main
The missing slides…😅
2025-06-13 08:48:44 +02:00
Ludovic Piot
eee44979c5 📝 Add Kyverno install chapter 2025-06-12 22:13:19 +02:00
Ludovic Piot
4d3bc06e30 📝 Add Kyverno install chapter 2025-06-12 21:50:42 +02:00
Ludovic Piot
229ab045b3 🔥 2025-06-12 21:04:06 +02:00
Ludovic Piot
fe1a61eaeb 🎨 2025-06-12 21:03:49 +02:00
Ludovic Piot
9613589dea 📝 Add small section about SSH keypairs rotation for Flux 2025-06-12 20:23:59 +02:00
Ludovic Piot
ca8865a10b 📝 Change the mermaid scenario diagram 2025-06-12 20:07:11 +02:00
Ludovic Piot
f279bbea11 ✏️ 2025-06-12 20:06:27 +02:00
Ludovic Piot
bc6100301e 📝 Add monitoring stack install 2025-06-12 20:05:14 +02:00
Jérôme Petazzoni
a32751636a Merge pull request #663 from lpiot/main
The deck with a small fix
2025-06-11 20:33:27 +02:00
Ludovic Piot
4a0e23d131 🐛 Sorry Jerome 2025-06-11 19:59:52 +02:00
Ludovic Piot
6e987d1fca Merge branch 'm6' into main 2025-06-11 19:52:03 +02:00
Ludovic Piot
18b888009e 📝 Add an MVP Network policies section 2025-06-11 19:44:17 +02:00
Ludovic Piot
36dd8bb695 📝 Add the new chapters to the M6 stack 2025-06-11 19:33:35 +02:00
Ludovic Piot
395c5a38ab 🎨 Add reference to the chapter title 2025-06-11 19:24:57 +02:00
Ludovic Piot
2b0d3b87ac 📝 Add OpenEBS install chapter 2025-06-11 19:24:13 +02:00
Ludovic Piot
a165e60407 📝 Add k0s install chapter 2025-06-11 19:22:40 +02:00
Ludovic Piot
3c13fd51dd 🎨 Add Mario animation when Flux reconcile 2025-06-11 19:22:04 +02:00
Ludovic Piot
324ad2fdd0 🎨 Update mermaid scenario diagram 2025-06-11 19:21:13 +02:00
Ludovic Piot
269ae79e30 📝 Add k0s install chapter 2025-06-11 17:08:52 +02:00
Ludovic Piot
39a15b3d7d ✏️ Clean up consistency about how we evoke the OPS team 2025-06-11 17:08:52 +02:00
Ludovic Piot
9e7ed8cb49 📝 Add MOVY tenant creation chapter 2025-06-11 17:08:52 +02:00
Ludovic Piot
06e7a47659 📝 Upgrade the mermaid scenario 2025-06-11 17:08:52 +02:00
Ludovic Piot
802e525f57 📝 Add Ingress chapter 2025-06-11 17:08:52 +02:00
Ludovic Piot
0f68f89840 📝 Add Ingress chapter 2025-06-11 17:08:52 +02:00
Ludovic Piot
b275342bd2 ✏️ Fixing TEST emphasis 2025-06-11 17:08:52 +02:00
Ludovic Piot
e11e97ccff 📝 Add k0s install chapter 2025-06-11 15:10:43 +02:00
Ludovic Piot
023a9d0346 ✏️ Clean up consistency about how we evoke the OPS team 2025-06-10 19:20:25 +02:00
Ludovic Piot
3f5eaae6b9 📝 Add MOVY tenant creation chapter 2025-06-10 19:19:19 +02:00
Ludovic Piot
1634d5b5bc 📝 Upgrade the mermaid scenario 2025-06-10 17:15:38 +02:00
Ludovic Piot
40418be55a 📝 Add Ingress chapter 2025-06-10 16:19:06 +02:00
Ludovic Piot
04198b7f91 📝 Add Ingress chapter 2025-06-10 16:05:17 +02:00
Jérôme Petazzoni
150c8fc768 Merge pull request #660 from lpiot/main
Mostly the scenario upgrade with Mermaid schemas
2025-06-10 14:24:18 +02:00
Ludovic Piot
e2af1bb057 ✏️ Fixing TEST emphasis 2025-06-10 12:51:09 +02:00
Ludovic Piot
d4c260aa4a 💄 📝 🎨 Upgrade the mermaid scenario schema 2025-06-09 21:20:57 +02:00
Ludovic Piot
89cd677b09 📝 upgrade R01 chapter 2025-06-09 21:20:57 +02:00
Ludovic Piot
3008680c12 🛂 🐛 fix permissions for persistentVolumes management 2025-06-09 21:20:57 +02:00
Ludovic Piot
f7b8184617 🎨 2025-06-09 21:20:57 +02:00
Jérôme Petazzoni
a565c0979c Merge pull request #659 from lpiot/main
Add R01 chapter and fixes to previous chapters
2025-06-09 20:05:55 +02:00
Jérôme Petazzoni
7a11f03b5e Merge branch 'm6' into main 2025-06-09 20:05:26 +02:00
Ludovic Piot
b0760b99a5 ✏️ 📝 Fix shpod access methods 2025-06-09 17:11:57 +02:00
Ludovic Piot
bcb9c3003f 📝 Add R01 chapter about test-ROCKY tenant config 2025-06-09 17:10:35 +02:00
Ludovic Piot
99ce9b3a8a 🎨 📝 Add missing steps in demo 2025-06-09 16:09:45 +02:00
Ludovic Piot
0ba602b533 🎨 clean up code display 2025-06-09 16:08:58 +02:00
Jérôme Petazzoni
d43c41e11e Proof-read first half of M6-START 2025-06-09 14:46:13 +02:00
Ludovic Piot
331309dc63 🎨 cleanup display of some console results 2025-06-09 14:11:05 +02:00
Ludovic Piot
44146915e0 📝 🍱 add T03 chapter 2025-06-04 23:55:33 +02:00
Ludovic Piot
84996e739b 🍱 📝 rewording and updating pics 2025-06-04 23:54:51 +02:00
Ludovic Piot
2aea1f70b2 📝 Add Flux install 2025-05-29 18:00:18 +02:00
Ludovic Piot
985e2ae42c 📝 add M6 intro slidedeck 2025-05-29 12:25:57 +02:00
Ludovic Piot
ea58428a0c 🐛 Slides now generate! ♻️ Move a slide 2025-05-14 22:05:59 +02:00
Ludovic Piot
59e60786c0 🎨 make personnae and cluster names consistent 2025-05-14 21:49:09 +02:00
Ludovic Piot
af63cf1405 🚨 2025-05-14 21:25:59 +02:00
Ludovic Piot
f9041807f6 🎉 first M6 draft slidedeck 2025-05-14 20:52:32 +02:00
Jérôme Petazzoni
785d704726 🏭️ Rework Kyverno chapter 2025-05-11 18:34:11 +02:00
Jérôme Petazzoni
cd346ecace 📃 Update slides about k8s setup 2025-05-07 22:33:30 +02:00
Jérôme Petazzoni
4de3c303a6 🐞 Don't query when overwriting partial zip download
Thanks @swacquie for that one
2025-05-05 19:04:52 +02:00
Jérôme Petazzoni
121713a6c7 🔧 Tweak devcontainer configuration 2025-05-02 19:43:45 +02:00
Jérôme Petazzoni
4431cfe68a 📦️ Add devcontainer
This is still highly experimental, but hopefully it'll
let us go through the beginning of the class with
github codespaces.
2025-05-02 13:04:14 +02:00
Jérôme Petazzoni
dcf218dbe2 🐞 Fix webssh python version 2025-04-28 10:07:55 +02:00
Jérôme Petazzoni
43ff815d9f 🐞 Fix tabs in logins.jsonl 2025-04-27 14:03:02 +02:00
Jérôme Petazzoni
92e61ef83b ☁️ Add nano instances for scaleway konk usecase 2025-04-27 12:53:41 +02:00
Jérôme Petazzoni
45770cc584 Add monokube exercise 2025-03-25 17:35:01 -05:00
Jérôme Petazzoni
58700396f9 🐞 Fix permissions for injected kubeconfig in mk8s stage2 2025-03-23 18:27:31 -05:00
Jérôme Petazzoni
8783da014c 🐞 Handle dualstack nodes (with multiple ExternalIP) 2025-03-23 18:15:50 -05:00
Jérôme Petazzoni
f780100217 Add kuik and a blue green exercise 2025-03-22 18:46:55 -05:00
Jérôme Petazzoni
555cd058bb 🔗 Fix source link in API deep dive 2025-03-22 18:07:18 -05:00
Jérôme Petazzoni
a05d1f9d4f ♻️ Use a variable for proxmox VM storage 2025-02-17 18:38:18 +01:00
Jérôme Petazzoni
84365d03c6 🔧 Add tags to Proxmox VMs; use linked clones by default 2025-02-17 17:28:53 +00:00
Jérôme Petazzoni
164bc01388 🛜 code-server will now also listen on IPv6 2025-02-17 17:28:01 +00:00
Jérôme Petazzoni
c07116bd29 ♻️ Update etcdctl snapshot commands; mention auger 2025-02-17 18:26:34 +01:00
Jérôme Petazzoni
c4057f9c35 🔧 Minor update to Kyverno chapter and manifests 2025-02-17 14:46:07 +01:00
Jérôme Petazzoni
f57bd9a072 Bump code server version 2025-02-17 12:55:24 +01:00
Jérôme Petazzoni
fca6396540 🐞 Fix Flux link ref 2025-02-12 11:01:00 +01:00
Jérôme Petazzoni
28ee1115ae ️ Add support to deploy kubeadm clusters on Proxmox 2025-02-05 16:28:48 +00:00
Jérôme Petazzoni
2d171594fb 🏭️ Factor out the "terraform" action; use quay for weave-kube 2025-02-05 16:22:22 +00:00
Jérôme Petazzoni
f825f98247 🔧 Adjust Flux command; add resource graph 2025-02-04 19:56:20 +01:00
Jérôme Petazzoni
7a369b4bcd 🐞 Add extra line break for consistency 2025-02-03 16:16:46 +01:00
Jérôme Petazzoni
087a68c06d ♻️ Use shpod Helm chart instead of manifests; enable code-server 2025-01-27 14:59:05 +01:00
Jérôme Petazzoni
b163ad0934 🐞 Don't report an error for non-first nodes codeserver 2025-01-27 11:42:47 +01:00
Jérôme Petazzoni
a46476fb0d 🐞 Remove python-setuptools; bail on errors if packages are missing 2025-01-23 17:24:10 +01:00
Jérôme Petazzoni
37baf22bf2 ♻️ Update Compose section 2025-01-22 18:32:56 +01:00
Jérôme Petazzoni
79631603c5 ️ Add codeserver support
This adds a codeserver action, which installs code-server
and pre-installs a couple of useful extensions. It also
installs a systemd user unit in the user account to run it
automatically.

The 'passwords' action has been tweaked so that it also
creates a code-server configuration file to set the password,
so that the same password can be used for SSH access and
for code-server access.
2025-01-15 19:52:12 +01:00
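The shared-password idea described in that commit (one password for both SSH and code-server) can be sketched as follows. This is an illustrative, hypothetical helper, not the repo's actual action: the real 'passwords' action uses the argon2 CLI for hashing, swapped here for SHA-256 so the sketch stays dependency-free.

```python
import hashlib
import pathlib
import tempfile

def write_code_server_config(home: pathlib.Path, password: str) -> pathlib.Path:
    """Write a code-server config.yaml storing a hash of the shared password.
    (The actual action hashes with argon2; sha256 is a stand-in here.)"""
    hashed = hashlib.sha256(password.encode()).hexdigest()
    cfg = home / ".config" / "code-server" / "config.yaml"
    cfg.parent.mkdir(parents=True, exist_ok=True)
    cfg.write_text(f'hashed-password: "{hashed}"\n')
    return cfg

home = pathlib.Path(tempfile.mkdtemp())
cfg = write_code_server_config(home, "hunter2")
print(cfg.read_text())
```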
Jérôme Petazzoni
52e6569f47 🧹 Remove unused 'cards' action from docker settings 2025-01-15 19:48:47 +01:00
Jérôme Petazzoni
6c71a38ddc 🔧 Modernize Compose file 🙂 2025-01-13 16:39:52 +01:00
Jérôme Petazzoni
c6507c1561 🐞 Fix play-with-docker URL 2024-12-30 17:00:12 +01:00
Jérôme Petazzoni
10a4fff91c 🐞 Minor fix in topology aware routing 2024-12-12 21:36:57 +01:00
Jérôme Petazzoni
91218b2b16 🐞 Typo fix 2024-12-11 12:19:20 +01:00
Jérôme Petazzoni
106912fcf8 🐞 Minor typo fixes 2024-12-01 18:28:34 -06:00
Jérôme Petazzoni
9e712e8a9e 🐛 Add script to detect duplicate markdown links; fix duplicates
When there are multiple reference-style markdown links in the same deck
with the same label, they will silently clash - i.e. one will overwrite
the other. The problem can become very apparent when using many links
like [see the docs][docs] in different slides, where [docs] points to
a different URL each time.

This commit adds a crude script to detect such duplicates and display
them. This script was used to detect a bunch of duplicates and fix them
(by making the label unique). There are still a few duplicates left
but they point to the same places, so we decided to leave them as-is
for now (but might change that later).
2024-11-23 23:46:14 +01:00
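The duplicate-label check described in that commit message can be sketched like this. It is a minimal, hypothetical version (the script actually added by the commit may differ): it flags a label only when it is defined more than once with different targets, matching the rationale above that same-target duplicates were left as-is.

```python
import collections
import re
import sys

# Reference-style link definitions look like: [label]: https://example.com
LINK_DEF = re.compile(r"^\s*\[([^\]]+)\]:\s*(\S+)", re.MULTILINE)

def find_duplicate_labels(markdown: str) -> dict:
    """Return {label: [urls...]} for labels defined more than once with
    different URLs (those silently overwrite each other when rendered)."""
    defs = collections.defaultdict(list)
    for label, url in LINK_DEF.findall(markdown):
        defs[label.lower()].append(url)
    return {
        label: urls
        for label, urls in defs.items()
        if len(urls) > 1 and len(set(urls)) > 1  # same label, different targets
    }

if __name__ == "__main__" and len(sys.argv) > 1:
    for label, urls in find_duplicate_labels(open(sys.argv[1]).read()).items():
        print(f"duplicate label [{label}]: {', '.join(urls)}")
```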
Jérôme Petazzoni
cc4c096558 📛 Update instructor+assistant contact info slide; split QR code slide for reference 2024-11-23 23:46:14 +01:00
Jérôme Petazzoni
908ffe0dd2 🐞 Minor fixes 2024-11-23 23:46:14 +01:00
Jérôme Petazzoni
0e7058214a 🐞 Minor fixes 2024-11-23 21:23:22 +01:00
Jérôme Petazzoni
21dad159de 📝 Many fixes courtesy of @soulshake 2024-11-22 02:11:18 +01:00
Jérôme Petazzoni
3ab190710f 📃 Add half-column style 2024-11-20 02:47:08 +01:00
Jérôme Petazzoni
8ea09e93ee 💳 Refactor printed card generator
Cards are now credit-card sized.
The code aligning front and back should be more robust, too.
2024-11-20 02:20:26 +01:00
Jérôme Petazzoni
88fbb6f629 🏭 Store log/pass information in logins.jsonl 2024-11-20 02:18:59 +01:00
Jérôme Petazzoni
7ee8c00cfa 🔧 Generate login.tsv file for card generation 2024-11-19 00:14:43 -06:00
Jérôme Petazzoni
7d35bacbbe 🔧 Allow setting min and max nodes per pool for mk8s mode 2024-11-19 00:14:43 -06:00
Jérôme Petazzoni
cd81b5287b 🔧 Fix warning for missing tag files 2024-11-19 00:14:43 -06:00
Jérôme Petazzoni
0abc67e974 Add MLops material for QCON SF 2024 2024-11-18 19:21:18 -06:00
Jérôme Petazzoni
7305bcfe12 ♻️ Update connection instructions
These instructions were fine for the good old Docker
and Kubernetes workshops; but they needed to be updated
for managed Kubernetes clusters leveraging shpod.
2024-11-18 19:01:55 -06:00
Jérôme Petazzoni
0d1873145e 🧜‍♀️ Add Mermaid integration for inline diagrams 2024-11-18 19:01:06 -06:00
m-vasseur
6105b57914 Update flux.md
--public is now replaced by --private=false
2024-10-18 14:39:10 +02:00
dependabot[bot]
8724ab2835 Bump cookie, express and socket.io in /slides/autopilot
Bumps [cookie](https://github.com/jshttp/cookie) to 0.7.1 and updates ancestor dependencies [cookie](https://github.com/jshttp/cookie), [express](https://github.com/expressjs/express) and [socket.io](https://github.com/socketio/socket.io). These dependencies need to be updated together.


Updates `cookie` from 0.6.0 to 0.7.1
- [Release notes](https://github.com/jshttp/cookie/releases)
- [Commits](https://github.com/jshttp/cookie/compare/v0.6.0...v0.7.1)

Updates `express` from 4.21.0 to 4.21.1
- [Release notes](https://github.com/expressjs/express/releases)
- [Changelog](https://github.com/expressjs/express/blob/4.21.1/History.md)
- [Commits](https://github.com/expressjs/express/compare/4.21.0...4.21.1)

Updates `socket.io` from 4.7.5 to 4.8.0
- [Release notes](https://github.com/socketio/socket.io/releases)
- [Changelog](https://github.com/socketio/socket.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io/compare/socket.io@4.7.5...socket.io@4.8.0)

---
updated-dependencies:
- dependency-name: cookie
  dependency-type: indirect
- dependency-name: express
  dependency-type: direct:production
- dependency-name: socket.io
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-11 11:16:26 +02:00
158 changed files with 9524 additions and 1080 deletions


@@ -0,0 +1,26 @@
{
    "name": "container.training environment to get started with Docker and/or Kubernetes",
    "image": "ghcr.io/jpetazzo/shpod",
    "features": {
        //"ghcr.io/devcontainers/features/common-utils:2": {}
    },
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    "forwardPorts": [],
    //"postCreateCommand": "... install extra packages...",
    "postStartCommand": "dind.sh",
    // This lets us use "docker-outside-docker".
    // Unfortunately, minikube, kind, etc. don't work very well that way;
    // so for now, we'll likely use "docker-in-docker" instead (with a
    // privileged container). But we're still exposing that socket in case
    // someone wants to do something interesting with it.
    "mounts": ["source=/var/run/docker.sock,target=/var/run/docker-host.sock,type=bind"],
    // This is for docker-in-docker.
    "privileged": true,
    // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
    "remoteUser": "k8s"
}

.gitignore

@@ -9,6 +9,7 @@ prepare-labs/terraform/many-kubernetes/one-kubernetes-config/config.tf
prepare-labs/terraform/many-kubernetes/one-kubernetes-module/*.tf
prepare-labs/terraform/tags
prepare-labs/terraform/virtual-machines/openstack/*.tfvars
prepare-labs/terraform/virtual-machines/proxmox/*.tfvars
prepare-labs/www
slides/*.yml.html


@@ -1,5 +1,5 @@
FROM node:4-slim
RUN npm install express
RUN npm install express@4
RUN npm install redis@3
COPY files/ /files/
COPY webui.js /


@@ -0,0 +1,9 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
  use-proxy-protocol: "true"


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: flux-system
    app.kubernetes.io/part-of: flux
    app.kubernetes.io/version: v2.5.1
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
  name: ingress-nginx


@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - M6-ingress-nginx-components.yaml
  - sync.yaml
patches:
  - path: M6-ingress-nginx-cm-patch.yaml
    target:
      kind: ConfigMap
  - path: M6-ingress-nginx-svc-patch.yaml
    target:
      kind: Service


@@ -0,0 +1,8 @@
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: "true"
    service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: flux-system
    app.kubernetes.io/part-of: flux
    app.kubernetes.io/version: v2.5.1
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
  name: kyverno


@@ -0,0 +1,72 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: flux-multi-tenancy
spec:
  validationFailureAction: enforce
  rules:
    - name: serviceAccountName
      exclude:
        resources:
          namespaces:
            - flux-system
      match:
        resources:
          kinds:
            - Kustomization
            - HelmRelease
      validate:
        message: ".spec.serviceAccountName is required"
        pattern:
          spec:
            serviceAccountName: "?*"
    - name: kustomizationSourceRefNamespace
      exclude:
        resources:
          namespaces:
            - flux-system
            - ingress-nginx
            - kyverno
            - monitoring
            - openebs
      match:
        resources:
          kinds:
            - Kustomization
      preconditions:
        any:
          - key: "{{request.object.spec.sourceRef.namespace}}"
            operator: NotEquals
            value: ""
      validate:
        message: "spec.sourceRef.namespace must be the same as metadata.namespace"
        deny:
          conditions:
            - key: "{{request.object.spec.sourceRef.namespace}}"
              operator: NotEquals
              value: "{{request.object.metadata.namespace}}"
    - name: helmReleaseSourceRefNamespace
      exclude:
        resources:
          namespaces:
            - flux-system
            - ingress-nginx
            - kyverno
            - monitoring
            - openebs
      match:
        resources:
          kinds:
            - HelmRelease
      preconditions:
        any:
          - key: "{{request.object.spec.chart.spec.sourceRef.namespace}}"
            operator: NotEquals
            value: ""
      validate:
        message: "spec.chart.spec.sourceRef.namespace must be the same as metadata.namespace"
        deny:
          conditions:
            - key: "{{request.object.spec.chart.spec.sourceRef.namespace}}"
              operator: NotEquals
              value: "{{request.object.metadata.namespace}}"

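The namespace-equality rule that this policy enforces can be restated in plain Python. This is a hypothetical helper written for illustration only (Kyverno evaluates the JMESPath conditions itself); it mirrors the precondition (rule fires only when sourceRef.namespace is set) and the deny condition (it must equal metadata.namespace).

```python
def violates_source_ref_rule(obj: dict) -> bool:
    """True when spec.sourceRef.namespace is set but differs from metadata.namespace."""
    src_ns = obj.get("spec", {}).get("sourceRef", {}).get("namespace", "")
    if not src_ns:  # precondition: the rule only fires when the field is non-empty
        return False
    return src_ns != obj.get("metadata", {}).get("namespace")

ok = {"metadata": {"namespace": "rocky-test"},
      "spec": {"sourceRef": {"namespace": "rocky-test"}}}
bad = {"metadata": {"namespace": "rocky-test"},
       "spec": {"sourceRef": {"namespace": "flux-system"}}}
print(violates_source_ref_rule(ok), violates_source_ref_rule(bad))  # False True
```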

@@ -0,0 +1,29 @@
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: flux-system
    app.kubernetes.io/part-of: flux
    app.kubernetes.io/version: v2.5.1
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
  name: monitoring
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
spec:
  ingressClassName: nginx
  rules:
    - host: grafana.test.metal.mybestdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kube-prometheus-stack-grafana
                port:
                  number: 80


@@ -0,0 +1,35 @@
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-webui
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
    - from: []
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-db
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web

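The selection semantics these policies rely on — a `matchLabels` selector matches a pod only when every listed label is present with the listed value, and an empty `podSelector` (`{}`) selects every pod — can be sketched as follows. This is an illustrative stand-in, not how Kubernetes implements selectors internally.

```python
def selects(pod_labels: dict, selector: dict) -> bool:
    """matchLabels semantics: every entry must match; {} selects everything."""
    wanted = selector.get("matchLabels", {})
    return all(pod_labels.get(k) == v for k, v in wanted.items())

print(selects({"app": "web", "tier": "front"}, {"matchLabels": {"app": "web"}}))  # True
print(selects({"app": "db"}, {"matchLabels": {"app": "web"}}))                    # False
print(selects({"app": "db"}, {}))                                                 # True
```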

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: flux-system
    app.kubernetes.io/part-of: flux
    app.kubernetes.io/version: v2.5.1
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
  name: openebs


@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openebs
resources:
  - M6-openebs-components.yaml
  - sync.yaml
configMapGenerator:
  - name: openebs-values
    files:
      - values.yaml=M6-openebs-values.yaml
configurations:
  - M6-openebs-kustomizeconfig.yaml


@@ -0,0 +1,6 @@
nameReference:
  - kind: ConfigMap
    version: v1
    fieldSpecs:
      - path: spec/valuesFrom/name
        kind: HelmRelease


@@ -0,0 +1,15 @@
# helm install openebs --namespace openebs openebs/openebs
#   --set engines.replicated.mayastor.enabled=false
#   --set lvm-localpv.lvmNode.kubeletDir=/var/lib/k0s/kubelet/
#   --create-namespace
engines:
  replicated:
    mayastor:
      enabled: false
# Needed for k0s install since kubelet install is slightly divergent from vanilla install >:-(
lvm-localpv:
  lvmNode:
    kubeletDir: /var/lib/k0s/kubelet/
localprovisioner:
  hostpathClass:
    isDefaultClass: true


@@ -0,0 +1,38 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  namespace: rocky-test
  name: rocky-full-access
rules:
  - apiGroups: ["", extensions, apps]
    resources: [deployments, replicasets, pods, services, ingresses, statefulsets]
    verbs: [get, list, watch, create, update, patch, delete] # You can also use [*]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rocky-pv-access
rules:
  - apiGroups: [""]
    resources: [persistentvolumes]
    verbs: [get, list, watch, create, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    toolkit.fluxcd.io/tenant: rocky
  name: rocky-reconciler2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rocky-pv-access
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: gotk:rocky-test:reconciler
  - kind: ServiceAccount
    name: rocky
    namespace: rocky-test

k8s/M6-rocky-ingress.yaml

@@ -0,0 +1,19 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rocky
  namespace: rocky-test
spec:
  ingressClassName: nginx
  rules:
    - host: rocky.test.mybestdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80


@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/rocky
patches:
  - path: M6-rocky-test-patch.yaml
    target:
      kind: Kustomization


@@ -0,0 +1,7 @@
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: rocky
  namespace: rocky-test
spec:
  path: ./k8s/plain


@@ -3,7 +3,6 @@ kind: ClusterPolicy
metadata:
  name: pod-color-policy-1
spec:
  validationFailureAction: enforce
  rules:
    - name: ensure-pod-color-is-valid
      match:
@@ -18,5 +17,6 @@ spec:
          operator: NotIn
          values: [ red, green, blue ]
        validate:
          failureAction: Enforce
          message: "If it exists, the label color must be red, green, or blue."
          deny: {}


@@ -3,7 +3,6 @@ kind: ClusterPolicy
metadata:
  name: pod-color-policy-2
spec:
  validationFailureAction: enforce
  background: false
  rules:
    - name: prevent-color-change
@@ -22,6 +21,7 @@ spec:
          operator: NotEquals
          value: ""
        validate:
          failureAction: Enforce
          message: "Once label color has been added, it cannot be changed."
          deny:
            conditions:


@@ -3,7 +3,6 @@ kind: ClusterPolicy
metadata:
  name: pod-color-policy-3
spec:
  validationFailureAction: enforce
  background: false
  rules:
    - name: prevent-color-change
@@ -22,7 +21,6 @@ spec:
          operator: Equals
          value: ""
        validate:
          failureAction: Enforce
          message: "Once label color has been added, it cannot be removed."
          deny:
            conditions:
          deny: {}


@@ -6,33 +6,44 @@
# (See https://docs.google.com/document/d/1n0lwp6rQKQUIuo_A5LQ1dgCzrmjkDjmDtNj1Jn92UrI)
# PRO2-XS = 4 core, 16 gb
set -e
PROVIDER=scaleway
STUDENTS=30
case "$PROVIDER" in
  linode)
    export TF_VAR_node_size=g6-standard-6
    export TF_VAR_location=eu-west
    export TF_VAR_location=us-east
    ;;
  scaleway)
    export TF_VAR_node_size=PRO2-XS
    # For tiny testing purposes, these are okay too:
    #export TF_VAR_node_size=PLAY2-NANO
    export TF_VAR_location=fr-par-2
    ;;
esac
./labctl create --mode mk8s --settings settings/konk.env --provider $PROVIDER --tag konk
# set kubeconfig file
export KUBECONFIG=~/kubeconfig
cp tags/konk/stage2/kubeconfig.101 $KUBECONFIG
if [ "$PROVIDER" = "kind" ]; then
  kind create cluster --name konk
  ADDRTYPE=InternalIP
else
  ./labctl create --mode mk8s --settings settings/konk.env --provider $PROVIDER --tag konk
  cp tags/konk/stage2/kubeconfig.101 $KUBECONFIG
  ADDRTYPE=ExternalIP
fi
# set external_ip labels
kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name} {.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}' |
while read node address; do
kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name} {.status.addresses[?(@.type=="'$ADDRTYPE'")].address}{"\n"}{end}' |
while read node address ignoredaddresses; do
  kubectl label node $node external_ip=$address
done
# vcluster all the things
./labctl create --settings settings/mk8s.env --provider vcluster --mode mk8s --students 50
./labctl create --settings settings/mk8s.env --provider vcluster --mode mk8s --students $STUDENTS
# install prometheus stack because that's cool
helm upgrade --install --repo https://prometheus-community.github.io/helm-charts \

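The `while read node address ignoredaddresses` trick in the diff above exists because dual-stack nodes can report several ExternalIP addresses on one line; only the first is used as the `external_ip` label. That parsing step can be sketched in Python (hypothetical helper; the sample addresses below are RFC documentation addresses, not real ones):

```python
def parse_node_addresses(lines):
    """Mimic `while read node address ignoredaddresses`: keep the first
    address per node, drop the rest (dual-stack nodes report several)."""
    labels = {}
    for line in lines:
        parts = line.split()
        if len(parts) >= 2:
            node, address = parts[0], parts[1]
            labels[node] = address
    return labels

out = parse_node_addresses([
    "node-1 203.0.113.10",
    "node-2 203.0.113.11 2001:db8::11",  # dual-stack: extra address ignored
])
print(out)
```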

@@ -57,7 +57,7 @@ need_tag() {
if [ ! -d "tags/$TAG" ]; then
die "Tag $TAG not found (directory tags/$TAG does not exist)."
fi
for FILE in settings.env ips.txt; do
for FILE in mode provider settings.env status; do
if [ ! -f "tags/$TAG/$FILE" ]; then
warning "File tags/$TAG/$FILE not found."
fi


@@ -19,20 +19,22 @@ _cmd_cards() {
TAG=$1
need_tag
die FIXME
OPTIONS_FILE=$2
[ -f "$OPTIONS_FILE" ] || die "Please specify a YAML options file as 2nd argument."
OPTIONS_FILE_PATH="$(readlink -f "$OPTIONS_FILE")"
# This will process ips.txt to generate two files: ips.pdf and ips.html
# This will process logins.jsonl to generate two files: cards.pdf and cards.html
(
cd tags/$TAG
../../../lib/ips-txt-to-html.py settings.yaml
../../../lib/make-login-cards.py "$OPTIONS_FILE_PATH"
)
ln -sf ../tags/$TAG/ips.html www/$TAG.html
ln -sf ../tags/$TAG/ips.pdf www/$TAG.pdf
ln -sf ../tags/$TAG/cards.html www/$TAG.html
ln -sf ../tags/$TAG/cards.pdf www/$TAG.pdf
info "Cards created. You can view them with:"
info "xdg-open tags/$TAG/ips.html tags/$TAG/ips.pdf (on Linux)"
info "open tags/$TAG/ips.html (on macOS)"
info "xdg-open tags/$TAG/cards.html tags/$TAG/cards.pdf (on Linux)"
info "open tags/$TAG/cards.html (on macOS)"
info "Or you can start a web server with:"
info "$0 www"
}
@@ -47,6 +49,41 @@ _cmd_clean() {
done
}
_cmd codeserver "Install code-server on the clusters"
_cmd_codeserver() {
TAG=$1
need_tag
ARCH=${ARCHITECTURE-amd64}
CODESERVER_VERSION=4.96.4
CODESERVER_URL=https://github.com/coder/code-server/releases/download/v${CODESERVER_VERSION}/code-server-${CODESERVER_VERSION}-linux-${ARCH}.tar.gz
pssh "
set -e
i_am_first_node || exit 0
if ! [ -x /usr/local/bin/code-server ]; then
curl -fsSL $CODESERVER_URL | sudo tar zx -C /opt
sudo ln -s /opt/code-server-${CODESERVER_VERSION}-linux-${ARCH}/bin/code-server /usr/local/bin/code-server
sudo -u $USER_LOGIN -H code-server --install-extension ms-azuretools.vscode-docker
sudo -u $USER_LOGIN -H code-server --install-extension ms-kubernetes-tools.vscode-kubernetes-tools
sudo -u $USER_LOGIN -H mkdir -p /home/$USER_LOGIN/.local/share/code-server/User
echo '{\"workbench.startupEditor\": \"terminal\"}' | sudo -u $USER_LOGIN tee /home/$USER_LOGIN/.local/share/code-server/User/settings.json
sudo -u $USER_LOGIN mkdir -p /home/$USER_LOGIN/.config/systemd/user
sudo -u $USER_LOGIN tee /home/$USER_LOGIN/.config/systemd/user/code-server.service <<EOF
[Unit]
Description=code-server
[Install]
WantedBy=default.target
[Service]
ExecStart=/usr/local/bin/code-server --bind-addr [::]:1789
Restart=always
EOF
sudo systemctl --user -M $USER_LOGIN@ enable code-server.service --now
sudo loginctl enable-linger $USER_LOGIN
fi"
}
_cmd createuser "Create the user that students will use"
_cmd_createuser() {
TAG=$1
@@ -257,21 +294,12 @@ _cmd_create() {
terraform init
echo tag = \"$TAG\" >> terraform.tfvars
echo how_many_clusters = $STUDENTS >> terraform.tfvars
echo nodes_per_cluster = $CLUSTERSIZE >> terraform.tfvars
for RETRY in 1 2 3; do
if terraform apply -auto-approve; then
touch terraform.ok
break
fi
done
if ! [ -f terraform.ok ]; then
die "Terraform failed."
if [ "$CLUSTERSIZE" ]; then
echo nodes_per_cluster = $CLUSTERSIZE >> terraform.tfvars
fi
)
sep
info "Successfully created $COUNT instances with tag $TAG"
echo create_ok > tags/$TAG/status
# If the settings.env file has a "STEPS" field,
# automatically execute all the actions listed in that field.
@@ -324,8 +352,8 @@ _cmd_clusterize() {
grep KUBECOLOR_ /etc/ssh/sshd_config || echo 'AcceptEnv KUBECOLOR_*' | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart ssh.service"
pssh -I < tags/$TAG/clusters.txt "
grep -w \$PSSH_HOST | tr ' ' '\n' > /tmp/cluster"
pssh -I < tags/$TAG/clusters.tsv "
grep -w \$PSSH_HOST | tr '\t' '\n' > /tmp/cluster"
pssh "
echo \$PSSH_HOST > /tmp/ipv4
head -n 1 /tmp/cluster | sudo tee /etc/ipv4_of_first_node
@@ -346,6 +374,14 @@ _cmd_clusterize() {
done < /tmp/cluster
"
jq --raw-input --compact-output \
--arg USER_LOGIN "$USER_LOGIN" --arg USER_PASSWORD "$USER_PASSWORD" '
{
"login": $USER_LOGIN,
"password": $USER_PASSWORD,
"ipaddrs": .
}' < tags/$TAG/clusters.tsv > tags/$TAG/logins.jsonl
echo cluster_ok > tags/$TAG/status
}
@@ -584,7 +620,9 @@ EOF
# Install weave as the pod network
pssh "
if i_am_first_node; then
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml
curl -fsSL https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml |
sed s,weaveworks/weave,quay.io/rackspace/weave, |
kubectl apply -f-
fi"
# FIXME this is a gross hack to add the deployment key to our SSH agent,
@@ -934,6 +972,15 @@ _cmd_inventory() {
FIXME
}
_cmd logins "Show login information for a group of instances"
_cmd_logins() {
TAG=$1
need_tag $TAG
cat tags/$TAG/logins.jsonl \
| jq -r '"\(if .codeServerPort then "\(.codeServerPort)\t" else "" end )\(.password)\tssh -l \(.login)\(if .port then " -p \(.port)" else "" end)\t\(.ipaddrs)"'
}
_cmd maketag "Generate a quasi-unique tag for a group of instances"
_cmd_maketag() {
if [ -z $USER ]; then
@@ -984,6 +1031,9 @@ _cmd_stage2() {
cd tags/$TAG/stage2
terraform init -upgrade
terraform apply -auto-approve
terraform output -raw logins_jsonl > ../logins.jsonl
terraform output -raw ips_txt > ../ips.txt
echo "stage2_ok" > status
}
_cmd standardize "Deal with non-standard Ubuntu cloud images"
@@ -1070,7 +1120,7 @@ _cmd_tailhist () {
set -e
sudo apt-get install unzip -y
wget -c https://github.com/joewalnes/websocketd/releases/download/v0.3.0/websocketd-0.3.0-linux_$ARCH.zip
unzip websocketd-0.3.0-linux_$ARCH.zip websocketd
unzip -o websocketd-0.3.0-linux_$ARCH.zip websocketd
sudo mv websocketd /usr/local/bin/websocketd
sudo mkdir -p /opt/tailhist
sudo tee /opt/tailhist.service <<EOF
@@ -1093,14 +1143,35 @@ EOF
pssh -I sudo tee /opt/tailhist/index.html <lib/tailhist.html
}
_cmd terraform "Apply Terraform configuration to provision resources."
_cmd_terraform() {
TAG=$1
need_tag
echo terraforming > tags/$TAG/status
(
cd tags/$TAG
terraform apply -auto-approve
# The Terraform provider for Proxmox has a bug; sometimes it fails
# to obtain VM address from the QEMU agent. In that case, we put
# ERROR in the ips.txt file (instead of the VM IP address). Detect
# that so that we run Terraform again (this typically solves the issue).
if grep -q ERROR ips.txt; then
die "Couldn't obtain IP address of some machines. Try to re-run terraform."
fi
)
echo terraformed > tags/$TAG/status
}
_cmd tools "Install a bunch of useful tools (editors, git, jq...)"
_cmd_tools() {
TAG=$1
need_tag
pssh "
set -e
sudo apt-get -q update
sudo apt-get -qy install apache2-utils emacs-nox git httping htop jid joe jq mosh python-setuptools tree unzip
sudo apt-get -qy install apache2-utils argon2 emacs-nox git httping htop jid joe jq mosh tree unzip
# This is for VMs with broken PRNG (symptom: running docker-compose randomly hangs)
sudo apt-get -qy install haveged
"
@@ -1163,8 +1234,8 @@ _cmd_tags() {
cd tags
echo "[#] [Status] [Tag] [Mode] [Provider]"
for tag in *; do
if [ -f $tag/ips.txt ]; then
count="$(wc -l < $tag/ips.txt)"
if [ -f $tag/logins.jsonl ]; then
count="$(wc -l < $tag/logins.jsonl)"
else
count="?"
fi
@@ -1240,7 +1311,13 @@ _cmd_passwords() {
$0 ips "$TAG" | paste "$PASSWORDS_FILE" - | while read password nodes; do
info "Setting password for $nodes..."
for node in $nodes; do
echo $USER_LOGIN:$password | ssh $SSHOPTS -i tags/$TAG/id_rsa ubuntu@$node sudo chpasswd
echo $USER_LOGIN $password | ssh $SSHOPTS -i tags/$TAG/id_rsa ubuntu@$node '
read login password
echo $login:$password | sudo chpasswd
hashedpassword=$(echo -n $password | argon2 saltysalt$RANDOM -e)
sudo -u $login mkdir -p /home/$login/.config/code-server
echo "hashed-password: \"$hashedpassword\"" | sudo -u $login tee /home/$login/.config/code-server/config.yaml >/dev/null
'
done
done
info "Done."
@@ -1272,6 +1349,11 @@ _cmd_wait() {
pssh -l $SSH_USER "
if [ -d /var/lib/cloud ]; then
cloud-init status --wait
case $? in
0) exit 0;; # all is good
2) exit 0;; # recoverable error (happens with proxmox deprecated cloud-init payloads)
*) exit 1;; # all other problems
esac
fi"
}
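The exit-code policy in the `cloud-init status --wait` case statement above (treat the "recoverable error" status 2 as success) boils down to a one-line predicate; a hypothetical Python rendering:

```python
def cloudinit_ok(returncode: int) -> bool:
    # 0 = all good; 2 = recoverable error (observed with deprecated
    # Proxmox cloud-init payloads); anything else is a real failure.
    return returncode in (0, 2)
```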
@@ -1314,7 +1396,7 @@ WantedBy=multi-user.target
[Service]
WorkingDirectory=/opt/webssh
ExecStart=/usr/bin/env python run.py --fbidhttp=false --port=1080 --policy=reject
ExecStart=/usr/bin/env python3 run.py --fbidhttp=false --port=1080 --policy=reject
User=nobody
Group=nogroup
Restart=always
@@ -1327,7 +1409,7 @@ EOF"
_cmd www "Run a web server to access card HTML and PDF"
_cmd_www() {
cd www
IPADDR=$(curl -sL canihazip.com/s)
IPADDR=$(curl -fsSL canihazip.com/s || echo localhost)
info "The following files are available:"
for F in *; do
echo "http://$IPADDR:8000/$F"


@@ -1,32 +1,22 @@
#!/usr/bin/env python3
import json
import os
import sys
import yaml
import jinja2
# Read settings from user-provided settings file
context = yaml.safe_load(open(sys.argv[1]))
ips = list(open("ips.txt"))
clustersize = context["clustersize"]
context["logins"] = []
for line in open("logins.jsonl"):
if line.strip():
context["logins"].append(json.loads(line))
print("---------------------------------------------")
print(" Number of IPs: {}".format(len(ips)))
print(" VMs per cluster: {}".format(clustersize))
print(" Number of cards: {}".format(len(context["logins"])))
print("---------------------------------------------")
assert len(ips)%clustersize == 0
clusters = []
while ips:
cluster = ips[:clustersize]
ips = ips[clustersize:]
clusters.append(cluster)
context["clusters"] = clusters
template_file_name = context["cards_template"]
template_file_path = os.path.join(
os.path.dirname(__file__),
@@ -35,23 +25,23 @@ template_file_path = os.path.join(
template_file_name
)
template = jinja2.Template(open(template_file_path).read())
with open("ips.html", "w") as f:
f.write(template.render(**context))
print("Generated ips.html")
with open("cards.html", "w") as f:
f.write(template.render(**context))
print("Generated cards.html")
try:
import pdfkit
paper_size = context["paper_size"]
margin = {"A4": "0.5cm", "Letter": "0.2in"}[paper_size]
with open("ips.html") as f:
pdfkit.from_file(f, "ips.pdf", options={
with open("cards.html") as f:
pdfkit.from_file(f, "cards.pdf", options={
"page-size": paper_size,
"margin-top": margin,
"margin-bottom": margin,
"margin-left": margin,
"margin-right": margin,
})
print("Generated ips.pdf")
print("Generated cards.pdf")
except ImportError:
print("WARNING: could not import pdfkit; did not generate ips.pdf")
print("WARNING: could not import pdfkit; did not generate cards.pdf")
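The cluster-splitting loop in the script above (cut the flat list of IPs into consecutive fixed-size groups) can be sketched standalone; function and variable names here are illustrative:

```python
def chunk_into_clusters(ips, clustersize):
    """Split a flat list of node IPs into consecutive fixed-size clusters,
    as the while loop in the cards script does."""
    assert len(ips) % clustersize == 0, "IP count must be a multiple of cluster size"
    return [ips[i:i + clustersize] for i in range(0, len(ips), clustersize)]
```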


@@ -7,6 +7,7 @@ USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
terraform
wait
standardize
clusterize


@@ -7,6 +7,7 @@ USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
terraform
wait
standardize
clusterize


@@ -11,6 +11,7 @@ USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
terraform
wait
standardize
clusterize


@@ -10,6 +10,7 @@ USER_PASSWORD=training
KUBEVERSION=1.28.9
STEPS="
terraform
wait
standardize
clusterize


@@ -6,6 +6,7 @@ USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
terraform
wait
standardize
clusterize


@@ -6,6 +6,7 @@ USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
terraform
wait
standardize
clusterize


@@ -6,6 +6,7 @@ USER_LOGIN=docker
USER_PASSWORD=training
STEPS="
terraform
wait
standardize
clusterize
@@ -14,6 +15,5 @@ STEPS="
createuser
webssh
tailhist
cards
ips
"
"


@@ -3,4 +3,4 @@ CLUSTERSIZE=5
USER_LOGIN=k8s
USER_PASSWORD=
STEPS="stage2"
STEPS="terraform stage2"


@@ -6,6 +6,7 @@ USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
terraform
wait
standardize
clusterize


@@ -7,6 +7,7 @@ USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
terraform
wait
standardize
clusterize


@@ -1,6 +1,4 @@
CLUSTERSIZE=2
USER_LOGIN=k8s
USER_PASSWORD=
STEPS="stage2"
STEPS="terraform stage2"


@@ -1,4 +1,4 @@
#export TF_VAR_node_size=GP2.4
#export TF_VAR_node_size=GP4.4
#export TF_VAR_node_size=g6-standard-6
#export TF_VAR_node_size=m7i.xlarge
@@ -11,6 +11,7 @@ USER_LOGIN=portal
USER_PASSWORD=CHANGEME
STEPS="
terraform
wait
standardize
clusterize


@@ -7,7 +7,7 @@
{%- set url = url
| default("http://FIXME.container.training/") -%}
{%- set pagesize = pagesize
| default(9) -%}
| default(10) -%}
{%- set lang = lang
| default("en") -%}
{%- set event = event
@@ -15,79 +15,36 @@
{%- set backside = backside
| default(False) -%}
{%- set image = image
| default("kube") -%}
| default(False) -%}
{%- set clusternumber = clusternumber
| default(None) -%}
{%- if qrcode == True -%}
{%- set qrcode = "https://container.training/q" -%}
{%- elif qrcode -%}
{%- set qrcode = qrcode -%}
{%- endif -%}
{%- set thing = thing
| default("lab environment") -%}
{# You can also set img_bottom_src instead. #}
{%- set img_logo_src = {
"docker": "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png",
"swarm": "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png",
"kube": "https://avatars1.githubusercontent.com/u/13629408",
"enix": "https://enix.io/static/img/logos/logo-domain-cropped.png",
}[image] -%}
{%- if lang == "en" and clustersize == 1 -%}
{%- set intro -%}
Here is the connection information to your very own
machine for this {{ event }}.
You can connect to this VM with any SSH client.
{%- endset -%}
{%- set listhead -%}
Your machine is:
{%- endset -%}
{%- endif -%}
{%- if lang == "en" and clustersize != 1 -%}
{%- set intro -%}
Here is the connection information to your very own
cluster for this {{ event }}.
You can connect to each VM with any SSH client.
{%- endset -%}
{%- set listhead -%}
Your machines are:
{%- endset -%}
{%- endif -%}
{%- if lang == "fr" and clustersize == 1 -%}
{%- set intro -%}
Voici les informations permettant de se connecter à votre
machine pour cette formation.
Vous pouvez vous connecter à cette machine virtuelle
avec n'importe quel client SSH.
{%- endset -%}
{%- set listhead -%}
Adresse IP:
{%- endset -%}
{%- endif -%}
{%- if lang == "en" and clusterprefix != "node" -%}
{%- set intro -%}
Here is the connection information for the
<strong>{{ clusterprefix }}</strong> environment.
{%- endset -%}
{%- endif -%}
{%- if lang == "fr" and clustersize != 1 -%}
{%- set intro -%}
Voici les informations permettant de se connecter à votre
cluster pour cette formation.
Vous pouvez vous connecter à chaque machine virtuelle
avec n'importe quel client SSH.
{%- endset -%}
{%- set listhead -%}
Adresses IP:
{%- endset -%}
{%- endif -%}
{%- if lang == "en" -%}
{%- set slides_are_at -%}
You can find the slides at:
{%- endset -%}
{%- if lang == "en" -%}
{%- set intro -%}
Here is the connection information to your very own
{{ thing }} for this {{ event }}.
You can connect to it with any SSH client.
{%- endset -%}
{%- endif -%}
{%- if lang == "fr" -%}
{%- set slides_are_at -%}
Le support de formation est à l'adresse suivante :
{%- endset -%}
{%- set intro -%}
Voici les informations permettant de se connecter à votre
{{ thing }} pour cette formation.
Vous pouvez vous y connecter
avec n'importe quel client SSH.
{%- endset -%}
{%- endif -%}
{%- if lang == "en" -%}
{%- set slides_are_at -%}
You can find the slides at:
{%- endset -%}
{%- endif -%}
{%- if lang == "fr" -%}
{%- set slides_are_at -%}
Le support de formation est à l'adresse suivante :
{%- endset -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
@@ -102,25 +59,21 @@
}
body {
/* this is A4 minus 0.5cm margins */
width: 20cm;
height: 28.7cm;
width: 20cm;
height: 28.7cm;
}
{% elif paper_size == "Letter" %}
@page {
size: Letter;
margin: 0.2in;
size: Letter; /* 8.5in x 11in */
}
body {
/* this is Letter minus 0.2in margins */
width: 8.6in;
heigth: 10.6in;
width: 6.75in; /* two cards wide */
margin-left: 0.875in; /* (8.5in - 6.75in)/2 */
margin-top: 0.1875in; /* (11in - 5 cards)/2 */
}
{% endif %}
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 15px;
font-family: 'Slabo 27px';
@@ -134,47 +87,45 @@ table {
padding-left: 0.4em;
}
div {
td:first-child {
width: 10.5em;
}
div.card {
float: left;
border: 1px dotted black;
{% if backside %}
height: 33%;
{% endif %}
/* columns * (width+left+right) < 100% */
border: 0.01in dotted black;
/*
width: 24.8%;
columns * (width+left+right) < 100%
height: 33%;
width: 24.8%;
width: 33%;
*/
/**/
width: 33%;
/**/
width: 3.355in; /* 3.375in minus two 0.01in borders */
height: 2.105in; /* 2.125in minus two 0.01in borders */
}
p {
margin: 0.8em;
}
div.back {
border: 1px dotted grey;
div.front {
{% if image %}
background-image: url("{{ image }}");
background-repeat: no-repeat;
background-size: 1in;
background-position-x: 2.8in;
background-position-y: center;
{% endif %}
}
span.scale {
white-space: nowrap;
}
img.logo {
height: 4.5em;
float: right;
}
img.bottom {
height: 2.5em;
display: block;
margin: 0.5em auto;
white-space: nowrap;
}
.qrcode img {
width: 40%;
margin: 1em;
height: 5.8em;
padding: 1em 1em 0.5em 1em;
float: left;
}
.logpass {
@@ -189,101 +140,97 @@ img.bottom {
height: 0;
}
</style>
<script type="text/javascript" src="https://cdn.rawgit.com/davidshimjs/qrcodejs/gh-pages/qrcode.min.js"></script>
<script type="text/javascript" src="qrcode.min.js"></script>
<script type="text/javascript">
function qrcodes() {
[].forEach.call(
document.getElementsByClassName("qrcode"),
(e, index) => {
new QRCode(e, {
text: "{{ qrcode }}",
correctLevel: QRCode.CorrectLevel.L
});
}
);
[].forEach.call(
document.getElementsByClassName("qrcode"),
(e, index) => {
new QRCode(e, {
text: "{{ qrcode }}",
correctLevel: QRCode.CorrectLevel.L
});
}
);
}
function scale() {
[].forEach.call(
document.getElementsByClassName("scale"),
(e, index) => {
var text_width = e.getBoundingClientRect().width;
var box_width = e.parentElement.getBoundingClientRect().width;
var percent = 100 * box_width / text_width + "%";
e.style.fontSize = percent;
}
);
[].forEach.call(
document.getElementsByClassName("scale"),
(e, index) => {
var text_width = e.getBoundingClientRect().width;
var box_width = e.parentElement.getBoundingClientRect().width;
var percent = 100 * box_width / text_width + "%";
e.style.fontSize = percent;
}
);
}
</script>
</head>
<body onload="qrcodes(); scale();">
{% for cluster in clusters %}
<div>
{% for login in logins %}
<div class="card front">
<p>{{ intro }}</p>
<p>
{% if img_logo_src %}
<img class="logo" src="{{ img_logo_src }}" />
{% endif %}
<table>
{% if clusternumber != None %}
<tr><td>cluster:</td></tr>
<tr><td class="logpass">{{ clusternumber + loop.index }}</td></tr>
{% endif %}
<tr><td>login:</td></tr>
<tr><td class="logpass">{{ user_login }}</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">{{ user_password }}</td></tr>
</table>
</p>
<p>
{{ listhead }}
<table>
{% for node in cluster %}
<tr>
<td>{{ clusterprefix }}{{ loop.index }}:</td>
<td>{{ node }}</td>
</tr>
{% endfor %}
<tr>
<td>login:</td>
<td>password:</td>
</tr>
<tr>
<td class="logpass">{{ login.login }}</td>
<td class="logpass">{{ login.password }}</td>
</tr>
<tr>
<td>IP address:</td>
{% if login.port %}
<td>port:</td>
{% endif %}
</tr>
<tr>
<td class="logpass">{{ login.ipaddrs.split("\t")[0] }}</td>
{% if login.port %}
<td class="logpass">{{ login.port }}</td>
{% endif %}
</tr>
</table>
</p>
<p>
{% if url %}
{{ slides_are_at }}
{{ slides_are_at }}
<p>
<span class="scale">{{ url }}</span>
</p>
{% endif %}
{% if img_bottom_src %}
<img class="bottom" src="{{ img_bottom_src }}" />
{% endif %}
</p>
</div>
{% if loop.index%pagesize==0 or loop.last %}
<span class="pagebreak"></span>
{% if backside %}
{% for x in range(pagesize) %}
<div class="back">
<p>Thanks for attending
"Getting Started With Kubernetes and Container Orchestration"
during CONFERENCE in Month YYYY!</p>
<p>If you liked that workshop,
I can train your team, in person or
online, with custom courses of
any length and any level.
</p>
{% if qrcode %}
<p>If you're interested, please scan that QR code to contact me:</p>
<span class="qrcode"></span>
{% for x in range(pagesize) %}
<div class="card back">
{{ backside }}
{#
<p>Thanks for attending
"Getting Started With Kubernetes and Container Orchestration"
during CONFERENCE in Month YYYY!</p>
<p>If you liked that workshop,
I can train your team, in person or
online, with custom courses of
any length and any level.
</p>
{% if qrcode %}
<p>If you're interested, please scan that QR code to contact me:</p>
<span class="qrcode"></span>
{% else %}
<p>If you're interested, you can contact me at:</p>
{% endif %}
<p>jerome.petazzoni@gmail.com</p>
</div>
{% endfor %}
<span class="pagebreak"></span>
{% endif %}
<p>If you're interested, you can contact me at:</p>
{% endif %}
<p>jerome.petazzoni@gmail.com</p>
#}
</div>
{% endfor %}
<span class="pagebreak"></span>
{% endif %}
{% endif %}
{% endfor %}
</body>
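The page-break condition in the template (`{% if loop.index%pagesize==0 or loop.last %}`) is easy to get wrong off by one, since Jinja's `loop.index` is 1-based; a standalone sketch of the same rule (names are illustrative, `pagesize` defaults to the template's new default of 10):

```python
def needs_pagebreak(index, total, pagesize=10):
    # index is 1-based, like Jinja's loop.index: break after every
    # full page of cards, and after the very last card.
    return index % pagesize == 0 or index == total
```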


@@ -0,0 +1,19 @@
cards_template: cards.html
paper_size: Letter
url: https://2024-11-qconsf.container.training
event: workshop
backside: |
<div class="qrcode"></div>
<p>
Thanks for attending the Asynchronous Architecture Patterns workshop at QCON!
</p>
<p>
<b>This QR code will give you my contact info</b> as well as a link to a feedback form.
</p>
<p>
If you liked this workshop, I can train your team, in person or online, with custom
courses of any length and any level, on Docker, Kubernetes, and MLops.
</p>
qrcode: https://2024-11-qconsf.container.training/#contact
thing: Kubernetes cluster
image: logo-kubernetes.png


@@ -8,8 +8,8 @@ resource "random_string" "_" {
resource "time_static" "_" {}
locals {
min_nodes_per_pool = var.nodes_per_cluster
max_nodes_per_pool = var.nodes_per_cluster * 2
min_nodes_per_pool = var.min_nodes_per_cluster
max_nodes_per_pool = var.max_nodes_per_cluster
timestamp = formatdate("YYYY-MM-DD-hh-mm", time_static._.rfc3339)
tag = random_string._.result
# Common tags to be assigned to all resources


@@ -14,6 +14,20 @@ provider "kubernetes" {
config_path = "./kubeconfig.${index}"
}
provider "helm" {
alias = "cluster_${index}"
kubernetes {
config_path = "./kubeconfig.${index}"
}
}
# Password used for SSH and code-server access
resource "random_string" "shpod_${index}" {
length = 6
special = false
upper = false
}
resource "kubernetes_namespace" "shpod_${index}" {
provider = kubernetes.cluster_${index}
metadata {
@@ -21,120 +35,57 @@ resource "kubernetes_namespace" "shpod_${index}" {
}
}
resource "kubernetes_deployment" "shpod_${index}" {
data "kubernetes_service" "shpod_${index}" {
depends_on = [ helm_release.shpod_${index} ]
provider = kubernetes.cluster_${index}
metadata {
name = "shpod"
namespace = kubernetes_namespace.shpod_${index}.metadata.0.name
}
spec {
selector {
match_labels = {
app = "shpod"
}
}
template {
metadata {
labels = {
app = "shpod"
}
}
spec {
service_account_name = "shpod"
container {
image = "jpetazzo/shpod"
name = "shpod"
env {
name = "PASSWORD"
value = random_string.shpod_${index}.result
}
lifecycle {
post_start {
exec {
command = [ "sh", "-c", "curl http://myip.enix.org/REMOTE_ADDR > /etc/HOSTIP || true" ]
}
}
}
resources {
limits = {
cpu = "2"
memory = "500M"
}
requests = {
cpu = "100m"
memory = "250M"
}
}
}
}
}
}
}
resource "kubernetes_service" "shpod_${index}" {
provider = kubernetes.cluster_${index}
lifecycle {
# Folks might alter their shpod Service to expose extra ports.
# Don't reset their changes.
ignore_changes = [ spec ]
}
metadata {
name = "shpod"
namespace = kubernetes_namespace.shpod_${index}.metadata.0.name
}
spec {
selector = {
app = "shpod"
}
port {
name = "ssh"
port = 22
target_port = 22
}
type = "NodePort"
}
}
resource "kubernetes_service_account" "shpod_${index}" {
provider = kubernetes.cluster_${index}
metadata {
name = "shpod"
namespace = kubernetes_namespace.shpod_${index}.metadata.0.name
}
}
resource "kubernetes_cluster_role_binding" "shpod_${index}" {
provider = kubernetes.cluster_${index}
metadata {
name = "shpod"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
kind = "ServiceAccount"
name = "shpod"
namespace = "shpod"
}
subject {
api_group = "rbac.authorization.k8s.io"
kind = "Group"
name = "shpod-cluster-admins"
}
resource "helm_release" "shpod_${index}" {
provider = helm.cluster_${index}
repository = "https://shpod.in"
chart = "shpod"
name = "shpod"
namespace = "shpod"
create_namespace = false
set {
name = "service.type"
value = "NodePort"
}
}
resource "random_string" "shpod_${index}" {
length = 6
special = false
upper = false
}
provider "helm" {
alias = "cluster_${index}"
kubernetes {
config_path = "./kubeconfig.${index}"
set {
name = "resources.requests.cpu"
value = "100m"
}
set {
name = "resources.requests.memory"
value = "500M"
}
set {
name = "resources.limits.cpu"
value = "1"
}
set {
name = "resources.limits.memory"
value = "1000M"
}
set {
name = "persistentVolume.enabled"
value = "true"
}
set {
name = "ssh.password"
value = random_string.shpod_${index}.result
}
set {
name = "rbac.cluster.clusterRoles"
value = "{cluster-admin}"
}
set {
name = "codeServer.enabled"
value = "true"
}
}
@@ -156,6 +107,36 @@ resource "helm_release" "metrics_server_${index}" {
}
}
# This section deserves a little explanation.
#
# When we access a cluster with shpod (either through SSH or code-server)
# there is no kubeconfig file - we simply use "in-cluster" authentication
# with a ServiceAccount token. This is a bit unusual, and ideally, I would
# prefer to have a "normal" kubeconfig file in the students' shell.
#
# So what we're doing here, is that we're populating a ConfigMap with
# a kubeconfig file; and in the initialization scripts (e.g. bashrc) we
# automatically download the kubeconfig file from the ConfigMap and place
# it in ~/.kube/kubeconfig.
#
# But, which kubeconfig file should we use? We could use the "normal"
# kubeconfig file that was generated by the provider; but in some cases,
# that kubeconfig file might use a token instead of a certificate for
# user authentication - and ideally, I would like to have a certificate
# so that in the section about auth and RBAC, we can dissect that TLS
# certificate and explain where our permissions come from.
#
# So we're creating a TLS key pair; using the CSR API to issue a user
# certificate belonging to a special group; and grant the cluster-admin
# role to that group; then we use the kubeconfig file generated by the
# provider but override the user with that TLS key pair.
#
# This is not strictly necessary but it streamlines the lesson on auth.
#
# Lastly - in the ConfigMap we actually put both the original kubeconfig,
# and the one where we injected our new user (just in case we want to
# use or look at the original for any reason).
resource "kubernetes_config_map" "kubeconfig_${index}" {
provider = kubernetes.cluster_${index}
metadata {
@@ -202,6 +183,23 @@ resource "tls_cert_request" "cluster_admin_${index}" {
}
}
resource "kubernetes_cluster_role_binding" "shpod_cluster_admin_${index}" {
provider = kubernetes.cluster_${index}
metadata {
name = "shpod-cluster-admin"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
api_group = "rbac.authorization.k8s.io"
kind = "Group"
name = "shpod-cluster-admins"
}
}
resource "kubernetes_certificate_signing_request_v1" "cluster_admin_${index}" {
provider = kubernetes.cluster_${index}
metadata {
@@ -217,16 +215,28 @@ resource "kubernetes_certificate_signing_request_v1" "cluster_admin_${index}" {
%{ endfor ~}
output "ip_addresses_of_nodes" {
output "ips_txt" {
value = join("\n", [
%{ for index, cluster in clusters ~}
join("\t", concat(
[
random_string.shpod_${index}.result,
"ssh -l k8s -p $${kubernetes_service.shpod_${index}.spec[0].port[0].node_port}"
],
join("\n", concat(
split(" ", file("./externalips.${index}"))
)),
%{ endfor ~}
""
])
}
output "logins_jsonl" {
value = join("\n", [
%{ for index, cluster in clusters ~}
jsonencode({
login = "k8s",
password = random_string.shpod_${index}.result,
port = data.kubernetes_service.shpod_${index}.spec[0].port[0].node_port,
codeServerPort = data.kubernetes_service.shpod_${index}.spec[0].port[1].node_port,
ipaddrs = replace(file("./externalips.${index}"), " ", "\t"),
}),
%{ endfor ~}
""
])
}
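The "override the user with that TLS key pair" step described in the long comment above is, conceptually, just a kubeconfig rewrite. This is not the actual Terraform mechanism (which templates the file directly); it is only an illustrative sketch, with hypothetical names:

```python
import base64
import copy

def inject_cert_user(kubeconfig: dict, cert_pem: bytes, key_pem: bytes) -> dict:
    """Return a copy of a kubeconfig dict whose (first) user authenticates
    with a client certificate, instead of whatever credentials (e.g. a
    token) it had before."""
    cfg = copy.deepcopy(kubeconfig)
    user = cfg["users"][0]["user"]
    user.clear()
    user["client-certificate-data"] = base64.b64encode(cert_pem).decode()
    user["client-key-data"] = base64.b64encode(key_pem).decode()
    return cfg
```

This mirrors the idea of keeping the provider-generated kubeconfig intact while swapping in the CSR-issued certificate for the auth/RBAC lesson.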


@@ -7,11 +7,16 @@ variable "how_many_clusters" {
default = 2
}
variable "nodes_per_cluster" {
variable "min_nodes_per_cluster" {
type = number
default = 2
}
variable "max_nodes_per_cluster" {
type = number
default = 4
}
variable "node_size" {
type = string
default = "M"


@@ -0,0 +1,30 @@
variable "proxmox_endpoint" {
type = string
default = "https://localhost:8006/"
}
variable "proxmox_username" {
type = string
default = null
}
variable "proxmox_password" {
type = string
default = null
}
variable "proxmox_storage" {
type = string
default = "local"
}
variable "proxmox_template_node_name" {
type = string
default = null
}
variable "proxmox_template_vm_id" {
type = number
default = null
}


@@ -0,0 +1,11 @@
# Node size must be a string, so to specify both the number of CPUs and
# the amount of RAM, pass them in a single string separated by a space.
# RAM is in megabytes.
variable "node_sizes" {
type = map(any)
default = {
S = "1 2048"
M = "2 4096"
L = "3 8192"
}
}
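Consumers of the `node_sizes` map split those `"CPUs RAM_MB"` strings back apart; a minimal sketch of that parsing (the function name is mine):

```python
def parse_node_size(size: str) -> tuple[int, int]:
    """Parse a "CPUs RAM_MB" value from the node_sizes map, e.g. "2 4096"."""
    cpus, ram_mb = size.split()
    return int(cpus), int(ram_mb)
```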


@@ -56,6 +56,7 @@ locals {
cluster_name = format("%s-%03d", var.tag, cn[0])
node_name = format("%s-%03d-%03d", var.tag, cn[0], cn[1])
node_size = lookup(var.node_sizes, var.node_size, var.node_size)
node_index = cn[0] * var.nodes_per_cluster + cn[1]
}
}
}
@@ -71,10 +72,10 @@ resource "local_file" "ip_addresses" {
resource "local_file" "clusters" {
content = join("", formatlist("%s\n", [
for cid in range(1, 1 + var.how_many_clusters) :
join(" ",
join("\t",
[for nid in range(1, 1 + var.nodes_per_cluster) :
local.ip_addresses[format("c%03dn%03d", cid, nid)]
])]))
filename = "clusters.txt"
filename = "clusters.tsv"
file_permission = "0600"
}
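The `node_index` added above gives every VM a flat, cluster-ordered index; combined with a modulo, it spreads VMs round-robin across hypervisor nodes. A standalone sketch under the same convention (node names are made up):

```python
def node_index(cluster, node, nodes_per_cluster):
    # Flat index of a VM: clusters first, nodes within a cluster second,
    # mirroring the Terraform expression above.
    return cluster * nodes_per_cluster + node

pve_nodes = ["pve1", "pve2", "pve3"]
# Round-robin placement across hypervisors:
placement = pve_nodes[node_index(1, 2, 5) % len(pve_nodes)]
```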


@@ -0,0 +1 @@
../common.tf


@@ -0,0 +1 @@
../../providers/proxmox/config.tf


@@ -0,0 +1,79 @@
data "proxmox_virtual_environment_nodes" "_" {}
locals {
pve_nodes = data.proxmox_virtual_environment_nodes._.names
}
resource "proxmox_virtual_environment_vm" "_" {
node_name = local.pve_nodes[each.value.node_index % length(local.pve_nodes)]
for_each = local.nodes
name = each.value.node_name
tags = ["container.training", var.tag]
stop_on_destroy = true
cpu {
cores = split(" ", each.value.node_size)[0]
type = "x86-64-v2-AES" # recommended for modern CPUs
}
memory {
dedicated = split(" ", each.value.node_size)[1]
}
#disk {
# datastore_id = var.proxmox_storage
# file_id = proxmox_virtual_environment_file._.id
# interface = "scsi0"
# size = 30
# discard = "on"
#}
clone {
vm_id = var.proxmox_template_vm_id
node_name = var.proxmox_template_node_name
full = false
}
agent {
enabled = true
}
initialization {
datastore_id = var.proxmox_storage
user_account {
username = "ubuntu"
keys = [trimspace(tls_private_key.ssh.public_key_openssh)]
}
ip_config {
ipv4 {
address = "dhcp"
#gateway =
}
}
}
network_device {
bridge = "vmbr0"
}
operating_system {
type = "l26"
}
}
#resource "proxmox_virtual_environment_download_file" "ubuntu_2404_20250115" {
# content_type = "iso"
# datastore_id = "cephfs"
# node_name = "pve-lsd-1"
# url = "https://cloud-images.ubuntu.com/releases/24.04/release-20250115/ubuntu-24.04-server-cloudimg-amd64.img"
# file_name = "ubuntu_2404_20250115.img"
#}
#
#resource "proxmox_virtual_environment_file" "_" {
# datastore_id = "cephfs"
# node_name = "pve-lsd-1"
# source_file {
# path = "/root/noble-server-cloudimg-amd64.img"
# }
#}
locals {
ip_addresses = {
for key, value in local.nodes :
key => [for addr in flatten(concat(proxmox_virtual_environment_vm._[key].ipv4_addresses, ["ERROR"])) :
addr if addr != "127.0.0.1"][0]
}
}
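The `ip_addresses` locals above pick the first non-loopback address reported by the QEMU agent, falling back to an `ERROR` sentinel (which the `terraform` command later greps for). The same logic as a standalone sketch:

```python
def first_external_ip(ipv4_addresses):
    """Mirror the Terraform locals above: flatten the per-interface
    address lists, append an ERROR sentinel, and return the first
    address that isn't loopback."""
    flat = [addr for iface in ipv4_addresses for addr in iface] + ["ERROR"]
    return next(addr for addr in flat if addr != "127.0.0.1")
```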


@@ -0,0 +1,15 @@
terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "~> 0.70.1"
}
}
}
provider "proxmox" {
endpoint = var.proxmox_endpoint
username = var.proxmox_username
password = var.proxmox_password
insecure = true
}


@@ -0,0 +1,17 @@
# If you want to deploy to Proxmox, you need to:
# 1) copy this file to e.g. myproxmoxcluster.tfvars
# 2) make sure you have a VM template with QEMU agent pre-installed
# 3) customize the copy (you need to replace all the CHANGEME values)
# 4) deploy with "labctl create --provider proxmox/myproxmoxcluster ..."
proxmox_endpoint = "https://localhost:8006/"
proxmox_username = "terraform@pve"
proxmox_password = "CHANGEME"
# Which storage to use for VM disks. Defaults to "local".
#proxmox_storage = "ceph"
proxmox_template_node_name = "CHANGEME"
proxmox_template_vm_id = CHANGEME


@@ -0,0 +1 @@
../../providers/proxmox/variables.tf

File diff suppressed because one or more lines are too long

(Two binary image files added: 81 KiB and 31 KiB; contents not shown.)

prepare-labs/www/qrcode.min.js (new vendored file; diff suppressed because one or more lines are too long)

slides/1.yml (new file, 68 lines)

@@ -0,0 +1,68 @@
title: |
Docker Intensif
chat: "[Mattermost](https://training.enix.io/mattermost)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2025-05-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- # DAY 1
#- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- # DAY 2
- containers/Container_Networking_Basics.md
- containers/Local_Development_Workflow.md
- containers/Container_Network_Model.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- # DAY 3
- containers/Start_And_Attach.md
- containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- containers/Dockerfile_Tips.md
- containers/Advanced_Dockerfiles.md
- containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Exercise_Dockerfile_Advanced.md
- # DAY 4
- containers/Buildkit.md
- containers/Network_Drivers.md
- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
- containers/Orchestration_Overview.md
#- containers/Docker_Machine.md
#- containers/Init_Systems.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Containers_From_Scratch.md
#- containers/Container_Engines.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md
- shared/thankyou.md
#- containers/links.md


@@ -1,11 +1,11 @@
title: |
Kubernetes
Fondamentaux Kubernetes
chat: "[Mattermost](https://formintra.enix.io/mattermost)"
chat: "[Mattermost](https://training.enix.io/mattermost)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2024-10-formintra.container.training/
slides: https://2025-05-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
@@ -25,7 +25,7 @@ content:
#- shared/webssh.md
- shared/connecting.md
- exercises/k8sfundamentals-brief.md
- exercises/yaml-brief.md
- exercises/yaml-dockercoins-brief.md
- exercises/localcluster-brief.md
- exercises/healthchecks-brief.md
- shared/toc.md
@@ -64,7 +64,7 @@ content:
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/kubectlproxy.md
- exercises/yaml-details.md
- exercises/yaml-dockercoins-details.md
- exercises/localcluster-details.md
- # 3
#- k8s/kubectlscale.md

slides/3.yml (new file, 47 lines)

@@ -0,0 +1,47 @@
title: |
Packaging d'applications
pour Kubernetes
chat: "[Mattermost](https://training.enix.io/mattermost)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2025-05-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- k8s/prereqs-advanced.md
- shared/handson.md
- shared/webssh.md
- shared/connecting.md
#- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/toc.md
-
- k8s/demo-apps.md
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- exercises/helm-generic-chart-details.md
-
- k8s/helm-create-better-chart.md
- k8s/helm-dependencies.md
- k8s/helm-values-schema-validation.md
- k8s/helm-secrets.md
- exercises/helm-umbrella-chart-details.md
-
- k8s/helmfile.md
- k8s/ytt.md
- k8s/gitworkflows.md
- k8s/flux.md
- k8s/argocd.md
- shared/thankyou.md

slides/4.yml (new file, 74 lines)

@@ -0,0 +1,74 @@
title: |
Kubernetes Avancé
chat: "[Mattermost](https://training.enix.io/mattermost)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2025-05-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- k8s/prereqs-advanced.md
- shared/handson.md
- shared/webssh.md
- shared/connecting.md
- shared/toc.md
- exercises/netpol-brief.md
- exercises/sealed-secrets-brief.md
- exercises/rbac-brief.md
- exercises/kyverno-ingress-domain-name-brief.md
- exercises/reqlim-brief.md
- #1
- k8s/demo-apps.md
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/sealed-secrets.md
- k8s/cert-manager.md
- k8s/cainjector.md
- k8s/ingress-tls.md
- exercises/netpol-details.md
- exercises/sealed-secrets-details.md
- exercises/rbac-details.md
- #2
- k8s/extending-api.md
- k8s/crd.md
- k8s/operators.md
- k8s/admission.md
- k8s/cainjector.md
- k8s/kyverno.md
- exercises/kyverno-ingress-domain-name-details.md
- #3
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- k8s/apiserver-deepdive.md
- k8s/aggregation-layer.md
- k8s/hpa-v2.md
- exercises/reqlim-details.md
- #4
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md
#- k8s/eck.md
#- k8s/portworx.md
- k8s/openebs.md
- k8s/stateful-failover.md
- k8s/operators-design.md
- k8s/operators-example.md
- k8s/owners-and-dependents.md
- k8s/events.md
- k8s/finalizers.md
- shared/thankyou.md

slides/5.yml (new file, 71 lines)

@@ -0,0 +1,71 @@
title: |
Opérer Kubernetes
chat: "[Mattermost](https://training.enix.io/mattermost)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2025-05-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
# DAY 1
-
- k8s/prereqs-advanced.md
- shared/handson.md
- k8s/architecture.md
- k8s/deploymentslideshow.md
- k8s/dmuc-easy.md
- k8s/dmuc-medium.md
- k8s/user-cert.md
- k8s/control-plane-auth.md
- k8s/staticpods.md
- exercises/dmuc-auth-details.md
- exercises/dmuc-networking-details.md
- exercises/dmuc-staticpods-details.md
-
- k8s/dmuc-hard.md
- k8s/apilb.md
- k8s/cni-internals.md
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/pod-security-intro.md
- k8s/pod-security-policies.md
- k8s/pod-security-admission.md
#- k8s/interco.md
#- k8s/internal-apis.md
- k8s/cluster-upgrade.md
- k8s/cluster-backup.md
#- k8s/cloud-controller-manager.md
-
- k8s/M6-START-a-company-scenario.md
- k8s/M6-T02-flux-install.md
- k8s/M6-T03-installing-tenants.md
- k8s/M6-R01-flux_configure-ROCKY-deployment.md
- k8s/M6-T05-ingress-config.md
- k8s/M6-M01-adding-MOVY-tenant.md
- k8s/M6-K01-METAL-install.md
- k8s/M6-K03-openebs-install.md
- k8s/M6-monitoring-stack-install.md
- k8s/M6-kyverno-install.md
- shared/thankyou.md
#-
# |
# # (Extra content)
# - k8s/apiserver-deepdive.md
# - k8s/setup-overview.md
# - k8s/setup-devel.md
# - k8s/setup-managed.md
# - k8s/setup-selfhosted.md

View File

@@ -24,4 +24,4 @@
# Survey form
/please https://docs.google.com/forms/d/e/1FAIpQLSfIYSgrV7tpfBNm1hOaprjnBHgWKn5n-k5vtNXYJkOX1sRxng/viewform
/ /kube.yml.html 200!
/ /highfive.html 200!

View File

@@ -8,8 +8,8 @@
"name": "container-training-pub-sub-server",
"version": "0.0.1",
"dependencies": {
"express": "^4.21.0",
"socket.io": "^4.7.5",
"express": "^4.21.1",
"socket.io": "^4.8.0",
"socket.io-client": "^4.7.5"
}
},
@@ -32,11 +32,11 @@
}
},
"node_modules/@types/node": {
"version": "20.14.6",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.14.6.tgz",
"integrity": "sha512-JbA0XIJPL1IiNnU7PFxDXyfAwcwVVrOoqyzzyQTyMeVhBzkJVMSkC1LlVsRQ2lpqiY4n6Bb9oCS6lzDKVQxbZw==",
"version": "22.7.5",
"resolved": "https://registry.npmjs.org/@types/node/-/node-22.7.5.tgz",
"integrity": "sha512-jML7s2NAzMWc//QSJ1a3prpk78cOPchGvXJsC3C6R6PSMoooztvRVQEz89gmBTBY1SPMaqo5teB4uNHPdetShQ==",
"dependencies": {
"undici-types": "~5.26.4"
"undici-types": "~6.19.2"
}
},
"node_modules/accepts": {
@@ -133,9 +133,9 @@
}
},
"node_modules/cookie": {
"version": "0.6.0",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.6.0.tgz",
"integrity": "sha512-U71cyTamuh1CRNCfpGY6to28lxvNwPG4Guz/EVjgf3Jmzv0vlDp1atT9eS5dDjMYHucpHbWns6Lwf3BKz6svdw==",
"version": "0.7.1",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz",
"integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w==",
"engines": {
"node": ">= 0.6"
}
@@ -212,16 +212,16 @@
}
},
"node_modules/engine.io": {
"version": "6.5.5",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-6.5.5.tgz",
"integrity": "sha512-C5Pn8Wk+1vKBoHghJODM63yk8MvrO9EWZUfkAt5HAqIgPE4/8FF0PEGHXtEd40l223+cE5ABWuPzm38PHFXfMA==",
"version": "6.6.2",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-6.6.2.tgz",
"integrity": "sha512-gmNvsYi9C8iErnZdVcJnvCpSKbWTt1E8+JZo8b+daLninywUWi5NQ5STSHZ9rFjFO7imNcvb8Pc5pe/wMR5xEw==",
"dependencies": {
"@types/cookie": "^0.4.1",
"@types/cors": "^2.8.12",
"@types/node": ">=10.0.0",
"accepts": "~1.3.4",
"base64id": "2.0.0",
"cookie": "~0.4.1",
"cookie": "~0.7.2",
"cors": "~2.8.5",
"debug": "~4.3.1",
"engine.io-parser": "~5.2.1",
@@ -273,19 +273,19 @@
}
},
"node_modules/engine.io/node_modules/cookie": {
"version": "0.4.2",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.4.2.tgz",
"integrity": "sha512-aSWTXFzaKWkvHO1Ny/s+ePFpvKsPnjc551iI41v3ny/ow6tBG5Vd+FuqGNhh1LxOmVzOlGUriIlOaokOvhaStA==",
"version": "0.7.2",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.2.tgz",
"integrity": "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/engine.io/node_modules/debug": {
"version": "4.3.5",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.3.5.tgz",
"integrity": "sha512-pt0bNEmneDIvdL1Xsd9oDQ/wrQRkXDT4AUWlNZNPKvW5x/jyO9VFXkJUP07vQ2upmw5PlaITaPKc31jK13V+jg==",
"version": "4.3.7",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.3.7.tgz",
"integrity": "sha512-Er2nc/H7RrMXZBFCEim6TCmMk02Z8vLC2Rbi1KEBggpo0fS6l0S1nnapwmIi3yW/+GOJap1Krg4w0Hg80oCqgQ==",
"dependencies": {
"ms": "2.1.2"
"ms": "^2.1.3"
},
"engines": {
"node": ">=6.0"
@@ -297,9 +297,9 @@
}
},
"node_modules/engine.io/node_modules/ms": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz",
"integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w=="
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="
},
"node_modules/es-define-property": {
"version": "1.0.0",
@@ -334,16 +334,16 @@
}
},
"node_modules/express": {
"version": "4.21.0",
"resolved": "https://registry.npmjs.org/express/-/express-4.21.0.tgz",
"integrity": "sha512-VqcNGcj/Id5ZT1LZ/cfihi3ttTn+NJmkli2eZADigjq29qTlWi/hAQ43t/VLPq8+UX06FCEx3ByOYet6ZFblng==",
"version": "4.21.1",
"resolved": "https://registry.npmjs.org/express/-/express-4.21.1.tgz",
"integrity": "sha512-YSFlK1Ee0/GC8QaO91tHcDxJiE/X4FbpAyQWkxAvG6AXCuR65YzK8ua6D9hvi/TzUfZMpc+BwuM1IPw8fmQBiQ==",
"dependencies": {
"accepts": "~1.3.8",
"array-flatten": "1.1.1",
"body-parser": "1.20.3",
"content-disposition": "0.5.4",
"content-type": "~1.0.4",
"cookie": "0.6.0",
"cookie": "0.7.1",
"cookie-signature": "1.0.6",
"debug": "2.6.9",
"depd": "2.0.0",
@@ -798,15 +798,15 @@
}
},
"node_modules/socket.io": {
"version": "4.7.5",
"resolved": "https://registry.npmjs.org/socket.io/-/socket.io-4.7.5.tgz",
"integrity": "sha512-DmeAkF6cwM9jSfmp6Dr/5/mfMwb5Z5qRrSXLpo3Fq5SqyU8CMF15jIN4ZhfSwu35ksM1qmHZDQ/DK5XTccSTvA==",
"version": "4.8.0",
"resolved": "https://registry.npmjs.org/socket.io/-/socket.io-4.8.0.tgz",
"integrity": "sha512-8U6BEgGjQOfGz3HHTYaC/L1GaxDCJ/KM0XTkJly0EhZ5U/du9uNEZy4ZgYzEzIqlx2CMm25CrCqr1ck899eLNA==",
"dependencies": {
"accepts": "~1.3.4",
"base64id": "~2.0.0",
"cors": "~2.8.5",
"debug": "~4.3.2",
"engine.io": "~6.5.2",
"engine.io": "~6.6.0",
"socket.io-adapter": "~2.5.2",
"socket.io-parser": "~4.2.4"
},
@@ -962,9 +962,9 @@
}
},
"node_modules/undici-types": {
"version": "5.26.5",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz",
"integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA=="
"version": "6.19.8",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.19.8.tgz",
"integrity": "sha512-ve2KP6f/JnbPBFyobGHuerC9g1FYGn/F8n1LWTwNxCEzd6IfqTwUQcNXgEtmmQ6DlRrC1hrSrBnCZPokRrDHjw=="
},
"node_modules/unpipe": {
"version": "1.0.0",
@@ -1039,11 +1039,11 @@
}
},
"@types/node": {
"version": "20.14.6",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.14.6.tgz",
"integrity": "sha512-JbA0XIJPL1IiNnU7PFxDXyfAwcwVVrOoqyzzyQTyMeVhBzkJVMSkC1LlVsRQ2lpqiY4n6Bb9oCS6lzDKVQxbZw==",
"version": "22.7.5",
"resolved": "https://registry.npmjs.org/@types/node/-/node-22.7.5.tgz",
"integrity": "sha512-jML7s2NAzMWc//QSJ1a3prpk78cOPchGvXJsC3C6R6PSMoooztvRVQEz89gmBTBY1SPMaqo5teB4uNHPdetShQ==",
"requires": {
"undici-types": "~5.26.4"
"undici-types": "~6.19.2"
}
},
"accepts": {
@@ -1115,9 +1115,9 @@
"integrity": "sha512-nTjqfcBFEipKdXCv4YDQWCfmcLZKm81ldF0pAopTvyrFGVbcR6P/VAAd5G7N+0tTr8QqiU0tFadD6FK4NtJwOA=="
},
"cookie": {
"version": "0.6.0",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.6.0.tgz",
"integrity": "sha512-U71cyTamuh1CRNCfpGY6to28lxvNwPG4Guz/EVjgf3Jmzv0vlDp1atT9eS5dDjMYHucpHbWns6Lwf3BKz6svdw=="
"version": "0.7.1",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.1.tgz",
"integrity": "sha512-6DnInpx7SJ2AK3+CTUE/ZM0vWTUboZCegxhC2xiIydHR9jNuTAASBrfEpHhiGOZw/nX51bHt6YQl8jsGo4y/0w=="
},
"cookie-signature": {
"version": "1.0.6",
@@ -1172,16 +1172,16 @@
"integrity": "sha512-Q0n9HRi4m6JuGIV1eFlmvJB7ZEVxu93IrMyiMsGC0lrMJMWzRgx6WGquyfQgZVb31vhGgXnfmPNNXmxnOkRBrg=="
},
"engine.io": {
"version": "6.5.5",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-6.5.5.tgz",
"integrity": "sha512-C5Pn8Wk+1vKBoHghJODM63yk8MvrO9EWZUfkAt5HAqIgPE4/8FF0PEGHXtEd40l223+cE5ABWuPzm38PHFXfMA==",
"version": "6.6.2",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-6.6.2.tgz",
"integrity": "sha512-gmNvsYi9C8iErnZdVcJnvCpSKbWTt1E8+JZo8b+daLninywUWi5NQ5STSHZ9rFjFO7imNcvb8Pc5pe/wMR5xEw==",
"requires": {
"@types/cookie": "^0.4.1",
"@types/cors": "^2.8.12",
"@types/node": ">=10.0.0",
"accepts": "~1.3.4",
"base64id": "2.0.0",
"cookie": "~0.4.1",
"cookie": "~0.7.2",
"cors": "~2.8.5",
"debug": "~4.3.1",
"engine.io-parser": "~5.2.1",
@@ -1189,22 +1189,22 @@
},
"dependencies": {
"cookie": {
"version": "0.4.2",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.4.2.tgz",
"integrity": "sha512-aSWTXFzaKWkvHO1Ny/s+ePFpvKsPnjc551iI41v3ny/ow6tBG5Vd+FuqGNhh1LxOmVzOlGUriIlOaokOvhaStA=="
"version": "0.7.2",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.7.2.tgz",
"integrity": "sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w=="
},
"debug": {
"version": "4.3.5",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.3.5.tgz",
"integrity": "sha512-pt0bNEmneDIvdL1Xsd9oDQ/wrQRkXDT4AUWlNZNPKvW5x/jyO9VFXkJUP07vQ2upmw5PlaITaPKc31jK13V+jg==",
"version": "4.3.7",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.3.7.tgz",
"integrity": "sha512-Er2nc/H7RrMXZBFCEim6TCmMk02Z8vLC2Rbi1KEBggpo0fS6l0S1nnapwmIi3yW/+GOJap1Krg4w0Hg80oCqgQ==",
"requires": {
"ms": "2.1.2"
"ms": "^2.1.3"
}
},
"ms": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz",
"integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w=="
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="
}
}
},
@@ -1264,16 +1264,16 @@
"integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg=="
},
"express": {
"version": "4.21.0",
"resolved": "https://registry.npmjs.org/express/-/express-4.21.0.tgz",
"integrity": "sha512-VqcNGcj/Id5ZT1LZ/cfihi3ttTn+NJmkli2eZADigjq29qTlWi/hAQ43t/VLPq8+UX06FCEx3ByOYet6ZFblng==",
"version": "4.21.1",
"resolved": "https://registry.npmjs.org/express/-/express-4.21.1.tgz",
"integrity": "sha512-YSFlK1Ee0/GC8QaO91tHcDxJiE/X4FbpAyQWkxAvG6AXCuR65YzK8ua6D9hvi/TzUfZMpc+BwuM1IPw8fmQBiQ==",
"requires": {
"accepts": "~1.3.8",
"array-flatten": "1.1.1",
"body-parser": "1.20.3",
"content-disposition": "0.5.4",
"content-type": "~1.0.4",
"cookie": "0.6.0",
"cookie": "0.7.1",
"cookie-signature": "1.0.6",
"debug": "2.6.9",
"depd": "2.0.0",
@@ -1593,15 +1593,15 @@
}
},
"socket.io": {
"version": "4.7.5",
"resolved": "https://registry.npmjs.org/socket.io/-/socket.io-4.7.5.tgz",
"integrity": "sha512-DmeAkF6cwM9jSfmp6Dr/5/mfMwb5Z5qRrSXLpo3Fq5SqyU8CMF15jIN4ZhfSwu35ksM1qmHZDQ/DK5XTccSTvA==",
"version": "4.8.0",
"resolved": "https://registry.npmjs.org/socket.io/-/socket.io-4.8.0.tgz",
"integrity": "sha512-8U6BEgGjQOfGz3HHTYaC/L1GaxDCJ/KM0XTkJly0EhZ5U/du9uNEZy4ZgYzEzIqlx2CMm25CrCqr1ck899eLNA==",
"requires": {
"accepts": "~1.3.4",
"base64id": "~2.0.0",
"cors": "~2.8.5",
"debug": "~4.3.2",
"engine.io": "~6.5.2",
"engine.io": "~6.6.0",
"socket.io-adapter": "~2.5.2",
"socket.io-parser": "~4.2.4"
},
@@ -1715,9 +1715,9 @@
}
},
"undici-types": {
"version": "5.26.5",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz",
"integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA=="
"version": "6.19.8",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.19.8.tgz",
"integrity": "sha512-ve2KP6f/JnbPBFyobGHuerC9g1FYGn/F8n1LWTwNxCEzd6IfqTwUQcNXgEtmmQ6DlRrC1hrSrBnCZPokRrDHjw=="
},
"unpipe": {
"version": "1.0.0",

View File

@@ -2,8 +2,8 @@
"name": "container-training-pub-sub-server",
"version": "0.0.1",
"dependencies": {
"express": "^4.21.0",
"socket.io": "^4.7.5",
"express": "^4.21.1",
"socket.io": "^4.8.0",
"socket.io-client": "^4.7.5"
}
}

View File

@@ -1,5 +1,3 @@
version: "2"
services:
www:
image: nginx

View File

@@ -40,7 +40,7 @@
- In multi-stage builds, all stages can be built in parallel
(example: https://github.com/jpetazzo/shpod; [before] and [after])
(example: https://github.com/jpetazzo/shpod; [before][shpod-before-parallel] and [after][shpod-after-parallel])
- Stages are built only when they are necessary
@@ -50,8 +50,8 @@
- Files are cached in the builder
[before]: https://github.com/jpetazzo/shpod/blob/c6efedad6d6c3dc3120dbc0ae0a6915f85862474/Dockerfile
[after]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile
[shpod-before-parallel]: https://github.com/jpetazzo/shpod/blob/c6efedad6d6c3dc3120dbc0ae0a6915f85862474/Dockerfile
[shpod-after-parallel]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile
---
@@ -121,10 +121,10 @@ docker buildx build … \
- Must not use binary downloads with hard-coded architectures!
(streamlining a Dockerfile for multi-arch: [before], [after])
(streamlining a Dockerfile for multi-arch: [before][shpod-before-multiarch], [after][shpod-after-multiarch])
[before]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile
[after]: https://github.com/jpetazzo/shpod/blob/c50789e662417b34fea6f5e1d893721d66d265b7/Dockerfile
[shpod-before-multiarch]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile
[shpod-after-multiarch]: https://github.com/jpetazzo/shpod/blob/c50789e662417b34fea6f5e1d893721d66d265b7/Dockerfile
---

View File

@@ -32,7 +32,7 @@ Compose enables a simple, powerful onboarding workflow:
1. Checkout our code.
2. Run `docker-compose up`.
2. Run `docker compose up`.
3. Our app is up and running!
@@ -66,19 +66,19 @@ class: pic
1. Write Dockerfiles
2. Describe our stack of containers in a YAML file called `docker-compose.yml`
2. Describe our stack of containers in a YAML file (the "Compose file")
3. `docker-compose up` (or `docker-compose up -d` to run in the background)
3. `docker compose up` (or `docker compose up -d` to run in the background)
4. Compose pulls and builds the required images, and starts the containers
5. Compose shows the combined logs of all the containers
(if running in the background, use `docker-compose logs`)
(if running in the background, use `docker compose logs`)
6. Hit Ctrl-C to stop the whole stack
(if running in the background, use `docker-compose stop`)
(if running in the background, use `docker compose stop`)
---
@@ -86,11 +86,11 @@ class: pic
After making changes to our source code, we can:
1. `docker-compose build` to rebuild container images
1. `docker compose build` to rebuild container images
2. `docker-compose up` to restart the stack with the new images
2. `docker compose up` to restart the stack with the new images
We can also combine both with `docker-compose up --build`
We can also combine both with `docker compose up --build`
Compose will be smart, and only recreate the containers that have changed.
@@ -114,7 +114,7 @@ cd trainingwheels
Second step: start the app.
```bash
docker-compose up
docker compose up
```
Watch Compose build and run the app.
@@ -141,7 +141,17 @@ After ten seconds (or if we press `^C` again) it will forcibly kill them.
---
## The `docker-compose.yml` file
## The Compose file
* Historically: docker-compose.yml or .yaml
* Recently (kind of): can also be named compose.yml or .yaml
(Since [version 1.28.6, March 2021](https://docs.docker.com/compose/releases/release-notes/#1286))
---
## Example
Here is the file used in the demo:
@@ -172,10 +182,10 @@ services:
A Compose file has multiple sections:
* `version` is mandatory. (Typically use "3".)
* `services` is mandatory. Each service corresponds to a container.
* `version` is optional (it used to be mandatory). It can be ignored.
* `networks` is optional and indicates to which networks containers should be connected.
<br/>(By default, containers will be connected on a private, per-compose-file network.)
@@ -183,24 +193,24 @@ A Compose file has multiple sections:
---
class: extra-details
## Compose file versions
* Version 1 is legacy and shouldn't be used.
(If you see a Compose file without `version` and `services`, it's a legacy v1 file.)
(If you see a Compose file without a `services` block, it's a legacy v1 file.)
* Version 2 added support for networks and volumes.
* Version 3 added support for deployment options (scaling, rolling updates, etc).
* Typically use `version: "3"`.
The [Docker documentation](https://docs.docker.com/compose/compose-file/)
has excellent information about the Compose file format if you need to know more about versions.
---
## Containers in `docker-compose.yml`
## Containers in Compose file
Each service in the YAML file must contain either `build`, or `image`.
@@ -278,7 +288,7 @@ For the full list, check: https://docs.docker.com/compose/compose-file/
`frontcopy_www`, `frontcopy_www_1`, `frontcopy_db_1`
- Alternatively, use `docker-compose -p frontcopy`
- Alternatively, use `docker compose -p frontcopy`
(to set the `--project-name` of a stack, which defaults to the directory name)
@@ -288,10 +298,10 @@ For the full list, check: https://docs.docker.com/compose/compose-file/
## Checking stack status
We have `ps`, `docker ps`, and similarly, `docker-compose ps`:
We have `ps`, `docker ps`, and similarly, `docker compose ps`:
```bash
$ docker-compose ps
$ docker compose ps
Name Command State Ports
----------------------------------------------------------------------------
trainingwheels_redis_1 /entrypoint.sh red Up 6379/tcp
@@ -310,13 +320,13 @@ If you have started your application in the background with Compose and
want to stop it easily, you can use the `kill` command:
```bash
$ docker-compose kill
$ docker compose kill
```
Likewise, `docker-compose rm` will let you remove containers (after confirmation):
Likewise, `docker compose rm` will let you remove containers (after confirmation):
```bash
$ docker-compose rm
$ docker compose rm
Going to remove trainingwheels_redis_1, trainingwheels_www_1
Are you sure? [yN] y
Removing trainingwheels_redis_1...
@@ -327,19 +337,19 @@ Removing trainingwheels_www_1...
## Cleaning up (2)
Alternatively, `docker-compose down` will stop and remove containers.
Alternatively, `docker compose down` will stop and remove containers.
It will also remove other resources, like networks that were created for the application.
```bash
$ docker-compose down
$ docker compose down
Stopping trainingwheels_www_1 ... done
Stopping trainingwheels_redis_1 ... done
Removing trainingwheels_www_1 ... done
Removing trainingwheels_redis_1 ... done
```
Use `docker-compose down -v` to remove everything including volumes.
Use `docker compose down -v` to remove everything including volumes.
---
@@ -369,15 +379,15 @@ Use `docker-compose down -v` to remove everything including volumes.
- If the container is deleted, the volume gets orphaned
- Example: `docker-compose down && docker-compose up`
- Example: `docker compose down && docker compose up`
- the old volume still exists, detached from its container
- a new volume gets created
- `docker-compose down -v`/`--volumes` deletes volumes
- `docker compose down -v`/`--volumes` deletes volumes
(but **not** `docker-compose down && docker-compose down -v`!)
(but **not** `docker compose down && docker compose down -v`!)
---
@@ -396,9 +406,9 @@ volumes:
- Volume will be named `<project>_data`
- It won't be orphaned with `docker-compose down`
- It won't be orphaned with `docker compose down`
- It will correctly be removed with `docker-compose down -v`
- It will correctly be removed with `docker compose down -v`
---
@@ -417,7 +427,7 @@ services:
(for migration, backups, disk usage accounting...)
- Won't be removed by `docker-compose down -v`
- Won't be removed by `docker compose down -v`
---
@@ -451,7 +461,7 @@ services:
- This is used when bringing up individual services
(e.g. `docker-compose up blah` or `docker-compose run foo`)
(e.g. `docker compose up blah` or `docker compose run foo`)
⚠️ It doesn't make a service "wait" for another one to be up!
@@ -471,7 +481,9 @@ class: extra-details
- `docker compose` command to deploy Compose stacks to some clouds
- progressively getting feature parity with `docker-compose`
- in Go instead of Python
- progressively getting feature parity with `docker compose`
- also provides numerous improvements (e.g. leverages BuildKit by default)

View File

@@ -120,11 +120,11 @@ class: extra-details
(and won't end up in the resulting image)
- See the [documentation] for the little details
- See the [documentation][dockerignore] for the little details
(exceptions can be made with `!`, multiple directory levels with `**`...)
[documentation]: https://docs.docker.com/engine/reference/builder/#dockerignore-file
[dockerignore]: https://docs.docker.com/engine/reference/builder/#dockerignore-file
???

View File

@@ -0,0 +1,32 @@
# Exercise — enable auth
- We want to enable authentication and authorization
- Checklist:
- non-privileged user can deploy in their namespace
<br/>(and nowhere else)
- each controller uses its own key, certificate, and identity
- each node uses its own key, certificate, and identity
- Service Accounts work properly
- See next slide for help / hints!
---
## Checklist
- Generate keys, certs, and kubeconfig for everything that needs them
(cluster admin, cluster user, controller manager, scheduler, kubelet)
- Reconfigure and restart each component to use its new identity
- Turn on `RBAC` and `Node` authorizers on the API server
- Check that everything works properly
(e.g. that you can create and scale a Deployment using the "cluster user" identity)
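As an illustration, generating a key and certificate for the "cluster user" identity could look like this (file names and subject fields are assumptions, not the course's exact ones; in the exercise, sign with your cluster's actual CA instead of creating a new one):

```bash
# Hedged sketch — names and paths are illustrative.
# 1. A CA (in the exercise, reuse the cluster CA instead):
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -subj "/CN=kubernetes-ca" -days 365 -out ca.crt
# 2. Key + CSR for the user; CN becomes the username, O the group (used by RBAC):
openssl genrsa -out user.key 2048
openssl req -new -key user.key -subj "/CN=cluster-user/O=devs" -out user.csr
# 3. Sign the CSR with the CA:
openssl x509 -req -in user.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out user.crt
# 4. Sanity check:
openssl verify -CAfile ca.crt user.crt
```

The resulting `user.key`/`user.crt` pair then goes in a kubeconfig (e.g. with `kubectl config set-credentials`).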

View File

@@ -0,0 +1,51 @@
# Exercise — networking
- We want to install extra networking components:
- a CNI configuration
- kube-proxy
- CoreDNS
- After doing that, we should be able to deploy a "complex" app
(with multiple containers communicating together + service discovery)
---
## CNI
- Easy option: Weave
https://github.com/weaveworks/weave/releases
- Better option: Cilium
https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#install-the-cilium-cli
or https://docs.cilium.io/en/stable/installation/k8s-install-helm/#installation-using-helm
---
## kube-proxy
- Option 1: author a DaemonSet
- Option 2: leverage the CNI (some CNIs like Cilium can replace kube-proxy)
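If you pick option 1, a minimal kube-proxy DaemonSet could be sketched as follows (image tag, kubeconfig path, and labels are assumptions; tolerations for tainted nodes are omitted):

```yaml
# Hedged sketch — not the canonical kube-proxy manifest.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels: {app: kube-proxy}
  template:
    metadata:
      labels: {app: kube-proxy}
    spec:
      hostNetwork: true
      containers:
      - name: kube-proxy
        image: registry.k8s.io/kube-proxy:v1.33.0   # match your cluster version
        command: ["kube-proxy", "--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig"]
        securityContext:
          privileged: true                          # needed to program iptables/IPVS
        volumeMounts:
        - {name: kubeconfig, mountPath: /etc/kubernetes, readOnly: true}
      volumes:
      - name: kubeconfig
        hostPath: {path: /etc/kubernetes}
```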
---
## CoreDNS
- Suggested method: Helm chart
(available on https://github.com/coredns/helm)
---
## Testing
- Try to deploy DockerCoins and confirm that it works
(for instance with [this YAML file](https://raw.githubusercontent.com/jpetazzo/container.training/refs/heads/main/k8s/dockercoins.yaml))

View File

@@ -0,0 +1,22 @@
# Exercise — static pods
- We want to run the control plane in static pods
(etcd, API server, controller manager, scheduler)
- For Kubernetes components, we can use [these images](https://kubernetes.io/releases/download/#container-images)
- For etcd, we can use [this image](https://quay.io/repository/coreos/etcd?tab=tags)
- If we're using keys, certificates... we can use [hostPath volumes](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)
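For instance, a static pod manifest for etcd might look like this (image tag, data directory, and certificate paths are assumptions; the TLS flags only apply if etcd is configured with TLS):

```yaml
# Hedged sketch of an etcd static pod, to be placed in the kubelet manifest directory.
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: etcd
    image: quay.io/coreos/etcd:v3.5.0
    command:
    - etcd
    - --data-dir=/var/lib/etcd
    - --cert-file=/etc/etcd/pki/server.crt
    - --key-file=/etc/etcd/pki/server.key
    volumeMounts:
    - {name: data, mountPath: /var/lib/etcd}
    - {name: pki, mountPath: /etc/etcd/pki, readOnly: true}
  volumes:
  - name: data
    hostPath: {path: /var/lib/etcd, type: DirectoryOrCreate}
  - name: pki
    hostPath: {path: /etc/etcd/pki}
```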
---
## Testing
After authoring our static pod manifests and placing them in the right directory,
we should be able to start our cluster simply by starting kubelet.
(Assuming that the container engine is already running.)
For bonus points: write and enable a systemd unit for kubelet!
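For the bonus points, the kubelet unit could be sketched like this (binary path, manifest directory, and the Docker dependency are assumptions):

```ini
# /etc/systemd/system/kubelet.service — hedged sketch, adjust paths to your setup
[Unit]
Description=kubelet
After=docker.service
Wants=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet --pod-manifest-path=/etc/kubernetes/manifests
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload && systemctl enable --now kubelet` to start it at boot.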

View File

@@ -26,7 +26,7 @@
- it should initially show a few milliseconds latency
- that will increase when we scale up
- that will increase when we scale up the number of `worker` Pods
- it will also let us detect when the service goes "boom"

View File

@@ -26,8 +26,8 @@ When a Service gets created...
- We want to use a Kyverno `generate` ClusterPolicy
- For step 1, check [Generate Resources](https://kyverno.io/docs/writing-policies/generate/) documentation
- For step 1, check [Generate Resources](https://kyverno.io/docs/policy-types/cluster-policy/generate/) documentation
- For step 2, check [Preconditions](https://kyverno.io/docs/writing-policies/preconditions/) documentation
- For step 2, check [Preconditions](https://kyverno.io/docs/policy-types/cluster-policy/preconditions/) documentation
- For step 3, check [External Data Sources](https://kyverno.io/docs/writing-policies/external-data-sources/) documentation
- For step 3, check [External Data Sources](https://kyverno.io/docs/policy-types/cluster-policy/external-data-sources/) documentation

View File

@@ -0,0 +1,51 @@
# Exercise — Monokube static pods
- We want to run a very basic Kubernetes cluster by starting only:
- kubelet
- a container engine (e.g. Docker)
- The other components (control plane and otherwise) should be started with:
- static pods
- "classic" manifests loaded with e.g. `kubectl apply`
- This should be done with the "monokube" VM
(which has Docker and kubelet 1.19 binaries available)
---
## Images to use
Here are some suggestions of images:
- etcd → `quay.io/coreos/etcd:vX.Y.Z`
- Kubernetes components → `registry.k8s.io/kube-XXX:vX.Y.Z`
(where `XXX` = `apiserver`, `scheduler`, `controller-manager`)
To know which versions to use, check the version of the binaries installed on the `monokube` VM, and use the same ones.
See next slide for more hints!
---
## Inventory
We'll need to run:
- kubelet (with the flag for static pod manifests)
- Docker
- static pods for control plane components
(suggestion: use `hostNetwork`)
- static pod or DaemonSet for `kube-proxy`
(will require a privileged security context)

View File

@@ -0,0 +1,86 @@
# Exercise — Writing blue/green YAML
- We want to author YAML manifests for the "color" app
(use image `jpetazzo/color` or `ghcr.io/jpetazzo/color`)
- That app serves web requests on port 80
- We want to deploy two instances of that app (`blue` and `green`)
- We want to expose the app with a service named `front`, such that:
90% of the requests are sent to `blue`, and 10% to `green`
---
## End goal
- We want to be able to do something like:
```bash
kubectl apply -f blue-green-demo.yaml
```
- Then connect to the `front` service and see responses from `blue` and `green`
- Then measure, e.g. over 100 requests, how many go to `blue` and how many to `green`
(we want a 90/10 traffic split)
- Go ahead, or check the next slides for hints!
---
## Step 1
- Test the app in isolation:
- create a Deployment called `blue`
- expose it with a Service
- connect to the service and see a "blue" reply
- If you use a `ClusterIP` service:
- if you're logged directly into the cluster, you can connect directly
- otherwise you can use `kubectl port-forward`
- Otherwise, you can use a `NodePort` or `LoadBalancer` service
---
## Step 2
- Add the `green` Deployment
- Create the `front` service
- Edit the `front` service to replace its selector with a custom one
- Edit `blue` and `green` to add the label(s) of your custom selector
- Check that traffic hits both green and blue
- Think about how to obtain the 90/10 traffic split
---
## Step 3
- Generate, write, extract, ... YAML manifests for all components
(`blue` and `green` Deployments, `front` Service)
- Check that applying the manifests (e.g. in a brand new namespace) works
- Bonus points: add a one-shot pod to check the traffic split!
---
## Discussion
- Would this be a viable option to obtain, say, a 95% / 5% traffic split?
- What about 99% / 1%?
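For reference, the shared-selector trick can be sketched as follows (the `app-group: front` label and the names are assumptions, not the exercise's canonical solution): a ClusterIP service balances roughly evenly across ready Pods, so replica counts drive the split.

```yaml
# Hedged sketch — 9 blue Pods + 1 green Pod ≈ 90/10 split.
apiVersion: v1
kind: Service
metadata:
  name: front
spec:
  selector:
    app-group: front        # matches Pods of both Deployments
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 9
  selector:
    matchLabels: {app: blue}
  template:
    metadata:
      labels: {app: blue, app-group: front}
    spec:
      containers:
      - name: color
        image: jpetazzo/color
# "green" is identical, with replicas: 1 and labels {app: green, app-group: front}
```

Note that a 99% / 1% split would require 100 Pods with this approach, which is usually impractical.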

View File

@@ -0,0 +1,5 @@
#!/bin/sh
# Find Markdown reference-style link labels ("[label]: url") that are defined
# more than once across all slide decks, and print each duplicate definition.
for LINK in $(cat */*.md | sed -n 's/^\[\(.*\)\]:.*/\1/p' | sort | uniq -d); do
grep '^\['"$LINK"'\]:' */*.md
done

View File

@@ -12,118 +12,124 @@
<table>
<tr>
<td>Mardi 24 septembre 2024</td>
<td>Mardi 13 mai 2025</td>
<td>
<a href="1.yml.html">Docker Intensif</a>
</td>
</tr>
<tr>
<td>Mercredi 25 septembre 2024</td>
<td>Mercredi 14 mai 2025</td>
<td>
<a href="1.yml.html">Docker Intensif</a>
</td>
</tr>
<tr>
<td>Jeudi 26 septembre 2024</td>
<td>Jeudi 15 mai 2025</td>
<td>
<a href="1.yml.html">Docker Intensif</a>
</td>
</tr>
<tr>
<td>Vendredi 27 septembre 2024</td>
<td>Vendredi 16 mai 2025</td>
<td>
<a href="1.yml.html">Docker Intensif</a>
</td>
</tr>
<tr>
<td>Mardi 1er octobre 2024</td>
<td>Mardi 20 mai 2025</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Mercredi 2 octobre 2024</td>
<td>Mercredi 21 mai 2025</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Jeudi 3 octobre 2024</td>
<td>Jeudi 22 mai 2025</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Vendredi 4 octobre 2024</td>
<td>Vendredi 23 mai 2025</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Lundi 7 octobre 2024</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Mardi 8 octobre 2024</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Mercredi 9 octobre 2024</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Jeudi 10 octobre 2024</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Vendredi 11 octobre 2024</td>
<td>
<a href="5.yml.html">Opérer Kubernetes</a>
</td>
</tr>
<tr>
<td>Lundi 14 octobre 2024</td>
<td>
<a href="5.yml.html">Opérer Kubernetes</a>
</td>
</tr>
<tr>
<td>Mardi 15 octobre 2024</td>
<td>
<a href="5.yml.html">Opérer Kubernetes</a>
</td>
</tr>
<tr>
<td>Mercredi 16 octobre 2024</td>
<td>Lundi 26 mai 2025</td>
<td>
<a href="3.yml.html">Packaging d'applications pour Kubernetes</a>
</td>
</tr>
<tr>
<td>Jeudi 17 octobre 2024</td>
<td>Mardi 27 mai 2025</td>
<td>
<a href="3.yml.html">Packaging d'applications pour Kubernetes</a>
</td>
</tr>
<tr>
<td>Vendredi 18 octobre 2024</td>
<td>Mercredi 28 mai 2025</td>
<td>
<a href="3.yml.html">Packaging d'applications pour Kubernetes</a>
</td>
</tr>
<tr>
<td>Lundi 2 juin 2025</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Mardi 3 juin 2025</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Mercredi 4 juin 2025</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Jeudi 5 juin 2025</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Mardi 10 juin 2025</td>
<td>
<a href="5.yml.html">Opérer Kubernetes</a>
</td>
</tr>
<tr>
<td>Mercredi 11 juin 2025</td>
<td>
<a href="5.yml.html">Opérer Kubernetes</a>
</td>
</tr>
<tr>
<td>Jeudi 12 juin 2025</td>
<td>
<a href="5.yml.html">Opérer Kubernetes</a>
</td>
</tr>
<tr>
<td>Vendredi 13 juin 2025</td>
<td>
<a href="5.yml.html">Opérer Kubernetes</a>
</td>
</tr>
</table>
</body>
</html>

Binary file not shown.

After

Width:  |  Height:  |  Size: 74 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 73 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 186 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 34 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 221 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 69 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 162 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 570 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 278 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 347 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 192 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 35 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 71 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 70 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 241 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 189 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 29 KiB

View File

@@ -0,0 +1,349 @@
# K01 - Installing a Kubernetes cluster from scratch
So far, we have operated a managed cluster from **Scaleway**: `Kapsule`.
It's great! Most batteries are included:
- storage classes, with an already configured default one
- a default CNI with `Cilium`
<br/>(`Calico` is supported too)
- an _IaaS_ load balancer that can be managed by `ingress-controllers`
- a management _WebUI_ with the Kubernetes dashboard
- an observability stack with `metrics-server` and the Kubernetes dashboard
But what about _on-premises_ needs?
---
class: extra-details
## On-premises Kubernetes distributions
The [CNCF landscape](https://landscape.cncf.io/?fullscreen=yes&zoom=200&group=certified-partners-and-providers) currently lists **61** Kubernetes distributions!
And that's not counting managed Kubernetes services from cloud providers…
Please refer to the [`Setting up Kubernetes` chapter in the High Five M2 module](./2.yml.html#toc-setting-up-kubernetes) for more information about Kubernetes distributions.
---
## Introducing k0s
Nowadays, some "light" distros are considered good enough to run production clusters.
That's the case for `k0s`.
It's an open source, lightweight Kubernetes distribution.
It is mainly backed by **Mirantis**, a long-time software vendor in the Kubernetes ecosystem.
(The ones who bought `Docker Enterprise` a while ago, remember?)
`k0s` aims to be both
- a lightweight distribution for _edge computing_ and development purposes
- an enterprise-grade HA distribution fully supported by its vendor
<br/>(`MKE4` and `k0rdent` build on `k0s`)
---
### `k0s` package
Its single binary includes:
- a CRI (`containerd`)
- vanilla Kubernetes control plane components (including `etcd`)
- a vanilla network stack
- `kube-router`
- `kube-proxy`
- `coredns`
- `konnectivity`
- `kubectl` CLI
- install / uninstall features
- backup / restore features
---
class: pic
![k0s package](images/M6-k0s-packaging.png)
---
class: extra-details
### Konnectivity
You've seen that the Kubernetes cluster architecture is very versatile.
I'm referring to the [`Kubernetes architecture` chapter in the High Five M5 module](./5.yml.html#toc-kubernetes-architecture)
Network communications between control plane components and worker nodes can be tricky to configure.
`Konnectivity` is a response to this pain. It acts as an RPC proxy for any communication initiated from the control plane to the workers.
These communications are listed in the [`Kubernetes internal APIs` chapter in the High Five M5 module](https://2025-01-enix.container.training/5.yml.html#toc-kubernetes-internal-apis)
The agent deployed on each worker node maintains an RPC tunnel with the server deployed on the control plane side.
---
class: pic
![konnectivity architecture](images/M6-konnectivity-architecture.png)
---
## Installing `k0s`
It installs with a one-liner command
- either with a single-node lightweight footprint
- or with a multi-node HA footprint
.lab[
- Get the binary
```bash
docker@m621: ~$ wget https://github.com/k0sproject/k0sctl/releases/download/v0.25.1/k0sctl-linux-amd64
docker@m621: ~$ chmod +x k0sctl-linux-amd64 && sudo mv k0sctl-linux-amd64 /usr/local/bin/k0sctl
```
]
---
### Prepare the config file
.lab[
- Create the config file
```bash
docker@m621: ~$ k0sctl init \
--controller-count 3 \
--user docker \
--k0s m621 m622 m623 > k0sctl.yaml
```
- change the following field on each host: `spec.hosts[*].role: controller+worker`
- add the following field to each host: `spec.hosts[*].noTaints: true`
]
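After the edits, each host entry in 📄 `k0sctl.yaml` should look roughly like this (a sketch: the exact layout generated by `k0sctl init` may differ slightly between versions):

```yaml
spec:
  hosts:
    - role: controller+worker   # the node runs the control plane AND workloads
      noTaints: true            # no control-plane taint, so regular pods can schedule
      ssh:
        address: m621           # one entry per node: m621, m622, m623
        user: docker
        port: 22
```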
---
### And the famous one-liner
.lab[
```bash
docker@m621: ~$ k0sctl apply --config k0sctl.yaml
```
]
---
### Check that k0s installed correctly
.lab[
```bash
docker@m621 ~$ sudo k0s status
Version: v1.33.1+k0s.1
Process ID: 60183
Role: controller
Workloads: true
SingleNode: false
Kube-api probing successful: true
Kube-api probing last error:
docker@m621 ~$ sudo k0s etcd member-list
{"members":{"m621":"https://10.10.3.190:2380","m622":"https://10.10.2.92:2380","m623":"https://10.10.2.110:2380"}}
```
]
---
### `kubectl` is included
.lab[
```bash
docker@m621 ~$ sudo k0s kubectl get nodes
NAME STATUS ROLES AGE VERSION
m621 Ready control-plane 66m v1.33.1+k0s
m622 Ready control-plane 66m v1.33.1+k0s
m623 Ready control-plane 66m v1.33.1+k0s
docker@m621 ~$ sudo k0s kubectl run shpod --image jpetazzo/shpod
```
]
---
class: extra-details
### Single node install (for info!)
For testing purposes, you may want to use a single-node (yet `etcd`-backed) install…
.lab[
- Install it
```bash
docker@m621 ~$ curl -sSLf https://get.k0s.sh | sudo sh
docker@m621 ~$ sudo k0s install controller --single
docker@m621 ~$ sudo k0s start
```
- Reset it
```bash
docker@m621 ~$ sudo k0s stop
docker@m621 ~$ sudo k0s reset
```
]
---
## Deploying shpod
.lab[
```bash
docker@m621 ~$ sudo k0s kubectl apply -f https://shpod.in/shpod.yaml
```
]
---
## Flux install
We'll install `Flux`.
And replay the whole scenario a second time.
Let's face it: we don't have that much time. 😅
Since all our install and configuration is `GitOps`-based, we might just leverage copy-paste and configuration as code…
Maybe.
Let's copy the 📂 `./clusters/CLOUDY` folder and rename it 📂 `./clusters/METAL`.
---
### Modifying Flux config 📄 files
- In the 📄 file `./clusters/METAL/flux-system/gotk-sync.yaml`
<br/>change the `Kustomization` value `spec.path: ./clusters/METAL`
- ⚠️ We'll have to adapt the `Flux` _CLI_ command line
- And that's pretty much it!
- We'll see if anything goes wrong on that new cluster
---
### Connecting to our dedicated `Github` repo to host Flux config
.lab[
- let's replace the `GITHUB_TOKEN` and `GITHUB_REPO` values
- don't forget to change the path to `clusters/METAL`
```bash
k8s@shpod:~$ export GITHUB_TOKEN="my-token" && \
export GITHUB_USER="container-training-fleet" && \
export GITHUB_REPO="fleet-config-using-flux-XXXXX"
k8s@shpod:~$ flux bootstrap github \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--team=OPS \
--team=ROCKY --team=MOVY \
--path=clusters/METAL
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### Flux deployed our complete stack
Everything seems to be here but…
- one database is in `Pending` state
- our `ingresses` don't work well
```bash
k8s@shpod ~$ curl --header 'Host: rocky.test.enixdomain.com' http://${myIngressControllerSvcIP}
curl: (52) Empty reply from server
```
---
### Fixing the Ingress
The current `ingress-nginx` configuration leverages specific annotations used by Scaleway to bind an _IaaS_ load balancer to the `ingress-controller`.
We don't have anything like that here. 😕
- We could bind our `ingress-controller` to a `NodePort`.
`ingress-nginx` install manifests propose it here:
<br/>https://github.com/kubernetes/ingress-nginx/tree/main/deploy/static/provider/baremetal
- In the 📄 file `./clusters/METAL/ingress-nginx/sync.yaml`,
<br/>change the `Kustomization` value `spec.path: ./deploy/static/provider/baremetal`
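The resulting `Kustomization` might look roughly like this (a hedged sketch: the resource name, source name, and interval are assumptions, not taken from the repository):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: ingress-nginx          # assumed name
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy/static/provider/baremetal   # NodePort-based manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: ingress-nginx        # assumed source name
```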
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### Troubleshooting the database
One of our `db-0` pods is stuck in the `Pending` state.
```bash
k8s@shpod ~$ k get pods db-0 -n *-test -oyaml
()
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2025-06-11T11:15:42Z"
message: '0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims.
preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: Burstable
```
---
### Troubleshooting the PersistentVolumeClaims
```bash
k8s@shpod ~$ k describe pvc postgresql-data-db-0 -n *-test
()
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 9s (x182 over 45m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
```
No `StorageClass` is available on this cluster.
We didn't have this problem on our managed cluster: a default storage class was configured there, and automatically associated with our `PersistentVolumeClaim`.
Why is there no problem with the other database?

View File

@@ -0,0 +1,129 @@
# K03- Installing OpenEBS as our CSI
`OpenEBS` is a _CSI_ solution offering hyperconvergence, synchronous replication, and other extra features.
It installs with `Helm` charts.
- `Flux` is able to watch `Helm` repositories and install `HelmReleases`
- To inject its configuration into the `Helm chart`, `Flux` relies on a `ConfigMap` containing the `values.yaml` file
.lab[
```bash
k8s@shpod ~$ mkdir -p ./clusters/METAL/openebs/ && \
cp -pr ~/container.training/k8s/M6-openebs-*.yaml \
./clusters/METAL/openebs/ && \
cd ./clusters/METAL/openebs/ && \
mv M6-openebs-kustomization.yaml kustomization.yaml && \
cd -
```
]
---
## Creating a `Helm` source in Flux for the OpenEBS Helm chart
.lab[
```bash
k8s@shpod ~$ flux create source helm openebs \
--url=https://openebs.github.io/openebs \
--interval=3m \
--export > ./clusters/METAL/openebs/sync.yaml
```
]
---
## Creating the `HelmRelease` in Flux
.lab[
```bash
k8s@shpod ~$ flux create helmrelease openebs \
--namespace=openebs \
--source=HelmRepository/openebs.flux-system \
--chart=openebs \
--values-from=ConfigMap/openebs-values \
--export >> ./clusters/METAL/openebs/sync.yaml
```
]
---
## 📂 Let's review the files
- `M6-openebs-components.yaml`
<br/>To include the `Flux` resources in the same _namespace_ where `Flux` installs the `OpenEBS` resources, we need to create the _namespace_ **before** the installation occurs
- `sync.yaml`
<br/>The resources `Flux` uses to watch and fetch the `Helm chart`
- `M6-openebs-values.yaml`
<br/>the `values.yaml` file that will be injected into the `Helm chart`
- `kustomization.yaml`
<br/>This one is a bit special: it includes a [ConfigMap generator](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/configmapgenerator/)
- `M6-openebs-kustomizeconfig.yaml`
<br/>This one is tricky: for `Flux` to trigger an upgrade of the `HelmRelease` when the `ConfigMap` is altered, you need to explain to the `Kustomize ConfigMap generator` how the resources relate to each other. 🤯
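Putting the pieces together, the generator part of `kustomization.yaml` presumably looks something like this (a sketch based on the file names above, not the repository contents):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openebs
resources:
  - M6-openebs-components.yaml   # creates the namespace first
  - sync.yaml                    # HelmRepository + HelmRelease
configMapGenerator:
  - name: openebs-values         # referenced by --values-from=ConfigMap/openebs-values
    files:
      - values.yaml=M6-openebs-values.yaml
configurations:
  - M6-openebs-kustomizeconfig.yaml   # teaches the generator the name references
```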
And here we go!
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
## And the result
Now, we have a _cluster_ featuring `OpenEBS`.
But still… the `PersistentVolumeClaim` remains in `Pending` state! 😭
```bash
k8s@shpod ~$ kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-hostpath openebs.io/local Delete WaitForFirstConsumer false 82m
```
We still don't have a default `StorageClass`!😤
---
### Manually enforcing the default `StorageClass`
Even though Flux is constantly reconciling our resources, we can still test changes by hand.
.lab[
```bash
k8s@shpod ~$ flux suspend helmrelease openebs -n openebs
► suspending helmrelease openebs in openebs namespace
✔ helmrelease suspended
k8s@shpod ~$ kubectl patch storageclass openebs-hostpath \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
k8s@shpod ~$ k get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-hostpath (default) openebs.io/local Delete WaitForFirstConsumer false 82m
```
]
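To make this persistent (so that resuming the `HelmRelease` doesn't revert our manual patch), the same setting can presumably be pushed through the Helm values, along these lines (the exact key names depend on the chart version — check `helm show values openebs/openebs` before relying on them):

```yaml
# hypothetical excerpt of M6-openebs-values.yaml
localpv-provisioner:
  hostpathClass:
    enabled: true
    isDefaultClass: true   # make openebs-hostpath the default StorageClass
```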
---
### Now the database is OK
```bash
k8s@shpod ~$ k get pvc,pods -n movy-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/postgresql-data-db-0 Bound pvc-ede1634f-2478-42cd-8ee3-7547cd7cdde2 1Gi RWO openebs-hostpath <unset> 20m
NAME READY STATUS RESTARTS AGE
pod/db-0 1/1 Running 0 5h43m
()
```

View File

@@ -0,0 +1,320 @@
# M01- Configuring **_🎬MOVY_** deployment with Flux
The **_🎸ROCKY_** _tenant_ is now fully usable in the **_⚗TEST_** env; let's do the same for another _dev_ team: **_🎬MOVY_**
😈 We could do it by using the `Flux` _CLI_,
but let's see if we can succeed by just adding manifests to our `Flux` configuration repository.
---
class: pic
![Flux configuration waterfall](images/M6-flux-config-dependencies.png)
---
## Impact study
In our `Flux` configuration repository:
- Creation of the following 📂 folders: `./tenants/[base|test]/MOVY`
- Modification of the following 📄 file: `./clusters/CLOUDY/tenants.yaml`?
- Well, we don't need to: the watched path includes the whole `./tenants/[test]/*` folder
In the app repository:
- Creation of a `movy` branch to deploy another version of the app dedicated to movie soundtracks
---
### Creation of the 📂 folders
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cp -pr tenants/base/rocky tenants/base/movy
cp -pr tenants/test/rocky tenants/test/movy
```
]
---
### Modification of tenants/[base|test]/movy/* 📄 files
- For 📄`M6-rocky-*.yaml`, change the file names…
- and update the 📄`kustomization.yaml` file as a result
- In any file, replace any `rocky` entry with `movy`
- In 📄 `sync.yaml` be aware of what repository and what branch you want `Flux` to watch for **_🎬MOVY_** app deployment.
- for this demo, let's assume we create a `movy` branch
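The renaming steps above can be sketched as a couple of shell one-liners. (This is an illustration run in a throwaway directory with a made-up fixture file, not in the actual repository.)

```shell
# demonstrate the rename in a throwaway directory
mkdir -p /tmp/movy-demo && cd /tmp/movy-demo
printf 'name: rocky\nnamespace: rocky-test\n' > M6-rocky-rbac.yaml
# rename the files…
for f in M6-rocky-*.yaml; do mv "$f" "${f/rocky/movy}"; done
# …and replace every 'rocky' entry with 'movy' in their contents
sed -i 's/rocky/movy/g' *.yaml
cat M6-movy-rbac.yaml
```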
---
class: extra-details
### What about reusing rocky-cluster-roles?
💡 In 📄`M6-movy-cluster-role.yaml` and 📄`rbac.yaml`, we could have reused the already existing `ClusterRoles`: `rocky-full-access` and `rocky-pv-access`
A `ClusterRole` is cluster-wide. It is not dedicated to a namespace.
- Its permissions are restrained to a specific namespace when it is bound to a `ServiceAccount` by a `RoleBinding`.
- Whereas a `ClusterRoleBinding` extends the permissions to the whole cluster scope.
But a _tenant_ is a **_tenant_**, and permissions might evolve separately for **_🎸ROCKY_** and **_🎬MOVY_**.
So [we got to keep'em separated](https://www.youtube.com/watch?v=GHUql3OC_uU).
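For illustration, reusing the existing `rocky-full-access` `ClusterRole` inside the `movy-test` namespace would have looked something like this (a sketch; the `ServiceAccount` name is an assumption):

```yaml
# a ClusterRole scoped down to ONE namespace via a namespaced RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: movy-full-access
  namespace: movy-test          # permissions apply only inside this namespace
subjects:
  - kind: ServiceAccount
    name: movy                  # assumed tenant ServiceAccount
    namespace: movy-test
roleRef:
  kind: ClusterRole             # cluster-wide definition…
  name: rocky-full-access       # …restrained by this namespaced binding
  apiGroup: rbac.authorization.k8s.io
```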
---
### Let-su-go!
The **_⚙OPS_** team pushes this new tenant configuration to `Github` for the `Flux` controllers to watch and catch!
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
git add . && \
git commit -m':wrench: :construction_worker: add MOVY tenant configuration' && \
git push
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
class: extra-details
### Another Flux error?
.lab[
- It seems that our `movy` branch is not present in the app repository
```bash
k8s@shpod:~$ flux get kustomization -A
NAMESPACE NAME REVISION SUSPENDED MESSAGE
()
flux-system tenant-prod False False kustomization path not found: stat /tmp/kustomization-113582828/tenants/prod: no such file or directory
()
movy-test movy False False Source artifact not found, retrying in 30s
```
]
---
### Creating the `movy` branch
- Let's create this new `movy` branch from `rocky` branch
.lab[
- You can force immediate reconciliation by typing this command:
```bash
k8s@shpod:~$ flux reconcile source git movy-app -n movy-test
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### New branch detected
You now have a second app responding at http://movy.test.mybestdomain.com
But as of now, it's just the same as the **_🎸ROCKY_** one.
We want a specific (pink-colored) version with a dataset full of movie soundtracks.
---
## New version of the **_🎬MOVY_** app
In our `movy` branch,
let's modify our `deployment.yaml` file with 2 modifications:
- in `spec.template.spec.containers.image`, change the container image tag to `1.0.3`
- and… let's introduce some evil entropy by changing this line… 😈😈😈
```yaml
value: jdbc:postgresql://db/music
```
with this one
```yaml
value: jdbc:postgresql://db.rocky-test/music
```
And push the modifications…
---
class: pic
![MOVY app has an incorrect dataset](images/M6-incorrect-dataset-in-MOVY-app.png)
---
class: pic
![ROCKY app has an incorrect dataset](images/M6-incorrect-dataset-in-ROCKY-app.png)
---
### MOVY app is connected to ROCKY database
How evil we have been! 😈
We connected the **_🎬MOVY_** app to the **_🎸ROCKY_** database.
Even if our tenants are isolated in how they manage their Kubernetes resources…
the pod network is still a full mesh, and any connection is authorized.
> The **_⚙OPS_** team should fix this!
---
class: extra-details
## Adding NetworkPolicies to **_🎸ROCKY_** and **_🎬MOVY_** namespaces
`Network policies` can be seen as the firewall feature of the pod network.
They rule ingress and egress network connections for a described subset of pods.
Please refer to the [`Network policies` chapter in the High Five M4 module](./4.yml.html#toc-network-policies)
- In our case, we just add the file `~/container.training/k8s/M6-network-policies.yaml`
<br/>to our `./tenants/base/movy` folder
- without forgetting to update our `kustomization.yaml` file
- and without forgetting to commit 😁
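The actual content of `M6-network-policies.yaml` may differ, but a policy blocking our evil cross-namespace connection could look like this illustrative sketch:

```yaml
# only allow ingress from pods of the same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: rocky-test
spec:
  podSelector: {}               # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}       # same-namespace pods only
```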
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'k0s install on METAL cluster' tag:'K01'
commit id:'Flux config. for METAL cluster' tag:'K02'
branch METAL_TEST-PROD order:3
commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for OpenEBS' tag:'K03'
checkout METAL_TEST-PROD
merge OPS id:'openEBS on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Prometheus install'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout OPS
commit id:'Kyverno install'
commit id:'Kyverno rules'
checkout TEST-env
merge OPS type: HIGHLIGHT
</pre>

View File

@@ -0,0 +1,417 @@
# R01- Configuring **_🎸ROCKY_** deployment with Flux
The **_⚙OPS_** team manages 2 distinct envs: **_⚗TEST_** and _**🚜PROD**_
Thanks to _Kustomize_:
1. it creates a **_base_** common config
2. this common config is overridden with a **_⚗TEST_** _tenant_-specific configuration
3. the same applies with a _**🚜PROD**_-specific configuration
> 💡 This seems complex, but no worries: Flux's CLI handles most of it.
---
## Creating the **_🎸ROCKY_**-dedicated _tenant_ in **_⚗TEST_** env
- Using the `flux` _CLI_, we create the file configuring the **_🎸ROCKY_** team's dedicated _tenant_
- … this file lives in the `base` configuration common to both envs
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p ./tenants/base/rocky && \
flux create tenant rocky \
--with-namespace=rocky-test \
--cluster-role=rocky-full-access \
--export > ./tenants/base/rocky/rbac.yaml
```
]
---
class: extra-details
### 📂 ./tenants/base/rocky/rbac.yaml
Let's see our file…
3 resources are created: `Namespace`, `ServiceAccount`, and `ClusterRoleBinding`
`Flux` **impersonates** this `ServiceAccount` when it applies any resource found in this _tenant_'s dedicated source(s)
- By default, the `ServiceAccount` is bound to the `cluster-admin` `ClusterRole`
- The team maintaining the sourced `Github` repository is almighty at cluster scope
Not that isolated a _tenant_ after all! 😕
That's why the **_⚙OPS_** team enforces specific `ClusterRoles` with restricted permissions
Let's create these permissions!
---
## _namespace_ isolation for **_🎸ROCKY_**
.lab[
- Here are the restricted permissions to use in the `rocky-test` `Namespace`
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cp ~/container.training/k8s/M6-rocky-cluster-role.yaml ./tenants/base/rocky/
```
]
> 💡 Note that some resources are managed at cluster scope (like `PersistentVolumes`).
> We need specific permissions, then…
---
## Creating `Github` source in Flux for **_🎸ROCKY_** app repository
A specific _branch_ of the `Github` repository is monitored by the `Flux` source
.lab[
- ⚠️ you may change the **repository URL** to that of your own clone
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create source git rocky-app \
--namespace=rocky-test \
--url=https://github.com/Musk8teers/container.training-spring-music/ \
--branch=rocky --export > ./tenants/base/rocky/sync.yaml
```
]
---
## Creating `kustomization` in Flux for **_🎸ROCKY_** app repository
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization rocky \
--namespace=rocky-test \
--service-account=rocky \
--source=GitRepository/rocky-app \
--path="./k8s/" --export >> ./tenants/base/rocky/sync.yaml
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cd ./tenants/base/rocky/ && \
kustomize create --autodetect && \
cd -
```
]
---
class: extra-details
### 📂 Flux config files
Let's review our `Flux` configuration files
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cat ./tenants/base/rocky/sync.yaml && \
cat ./tenants/base/rocky/kustomization.yaml
```
]
---
## Adding a kustomize patch for **_⚗TEST_** cluster deployment
💡 Remember the DRY strategy!
- The `Flux` tenant-dedicated configuration is looking for this file: `./tenants/test/rocky/kustomization.yaml`
- It has been configured here: `clusters/CLOUDY/tenants.yaml`
- All the files we just created are located in `./tenants/base/rocky`
- So we have to create a specific kustomization in the right location
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p ./tenants/test/rocky && \
cp ~/container.training/k8s/M6-rocky-test-patch.yaml ./tenants/test/rocky/ && \
cp ~/container.training/k8s/M6-rocky-test-kustomization.yaml ./tenants/test/rocky/kustomization.yaml
```
---
### Synchronizing Flux config with its Github repo
Locally, our `Flux` config repo is ready
The **_⚙OPS_** team has to push it to `Github` for `Flux` controllers to watch and catch it!
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
git add . && \
git commit -m':wrench: :construction_worker: add ROCKY tenant configuration' && \
git push
```
]
---
class: pic
![Running Mario](images/M6-running-Mario.gif)
---
class: pic
![rocky config files](images/M6-R01-config-files.png)
---
class: extra-details
### Flux resources for ROCKY tenant 1/2
.lab[
```bash
k8s@shpod:~$ flux get all -A
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system gitrepository/flux-system main@sha1:8ffd72cf False
True stored artifact for revision 'main@sha1:8ffd72cf'
rocky-test gitrepository/rocky-app rocky@sha1:ffe9f3fe False
True stored artifact for revision 'rocky@sha1:ffe9f3fe'
()
```
]
---
class: extra-details
### Flux resources for ROCKY _tenant_ 2/2
.lab[
```bash
k8s@shpod:~$ flux get all -A
()
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system kustomization/flux-system main@sha1:8ffd72cf False
True Applied revision: main@sha1:8ffd72cf
flux-system kustomization/tenant-prod False
False kustomization path not found: stat /tmp/kustomization-1164119282/tenants/prod: no such file or directory
flux-system kustomization/tenant-test main@sha1:8ffd72cf False
True Applied revision: main@sha1:8ffd72cf
rocky-test kustomization/rocky False
False StatefulSet/db dry-run failed (Forbidden): statefulsets.apps "db" is forbidden: User "system:serviceaccount:rocky-test:rocky" cannot patch resource "statefulsets" in API group "apps" at the cluster scope
```
]
And here is our second batch of Flux errors! 😅
---
class: extra-details
### Flux Kustomization, mutability, …
🔍 Notice that none of the expected resources is created:
the whole kustomization is rejected, even though the `StatefulSet` is the only resource that fails!
🔍 A Flux Kustomization uses the dry-run feature to template the resources and then apply patches onto them
That's fine, but some resources are not completely mutable, such as `StatefulSets`
We have to fix this by applying the change without having to patch the resource.
🔍 Simply add `spec.targetNamespace: rocky-test` to the `Kustomization` named `rocky`
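After the fix, the `Kustomization` in 📄 `./tenants/base/rocky/sync.yaml` should look roughly like this (a sketch assembled from the `flux create kustomization` flags used earlier; the `interval` value is an assumption):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: rocky
  namespace: rocky-test
spec:
  interval: 10m                 # assumed value
  path: ./k8s/
  prune: true
  serviceAccountName: rocky     # Flux impersonates the tenant's ServiceAccount
  targetNamespace: rocky-test   # apply resources in this namespace, not at cluster scope
  sourceRef:
    kind: GitRepository
    name: rocky-app
```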
---
class: extra-details
## And then it's deployed 1/2
You should see the following resources in the `rocky-test` namespace
.lab[
```bash
k8s@shpod-578d64468-tp7r2 ~/$ k get pods,svc,deployments -n rocky-test
NAME READY STATUS RESTARTS AGE
pod/db-0 1/1 Running 0 47s
pod/web-6c677bf97f-c7pkv 0/1 Running 1 (22s ago) 47s
pod/web-6c677bf97f-p7b4r 0/1 Running 1 (19s ago) 47s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/db ClusterIP 10.32.6.128 <none> 5432/TCP 48s
service/web ClusterIP 10.32.2.202 <none> 80/TCP 48s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web 0/2 2 0 47s
```
]
---
class: extra-details
## And then it's deployed 2/2
You should see the following resources in the `rocky-test` namespace
.lab[
```bash
k8s@shpod-578d64468-tp7r2 ~/$ k get statefulsets,pvc,pv -n rocky-test
NAME READY AGE
statefulset.apps/db 1/1 47s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/postgresql-data-db-0 Bound pvc-c1963a2b-4fc9-4c74-9c5a-b0870b23e59a 1Gi RWO sbs-default <unset> 47s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
persistentvolume/postgresql-data 1Gi RWO,RWX Retain Available <unset> 47s
persistentvolume/pvc-150fcef5-ebba-458e-951f-68a7e214c635 1G RWO Delete Bound shpod/shpod sbs-default <unset> 4h46m
persistentvolume/pvc-c1963a2b-4fc9-4c74-9c5a-b0870b23e59a 1Gi RWO Delete Bound rocky-test/postgresql-data-db-0 sbs-default <unset> 47s
```
]
---
class: extra-details
### PersistentVolumes are using a default `StorageClass`
💡 This managed cluster comes with custom `StorageClasses` leveraging Cloud _IaaS_ capabilities (i.e. block devices)
![Flux configuration waterfall](images/M6-persistentvolumes.png)
- a default `StorageClass` is applied if none is specified (like here)
- for **_🏭PROD_** purposes, the ops team might enforce a more performant `StorageClass`
- on a bare-metal cluster, the **_🏭PROD_** team has to configure and provide `StorageClasses` on its own
---
class: pic
![Flux configuration waterfall](images/M6-flux-config-dependencies.png)
---
## Upgrading ROCKY app
The Git source named `rocky-app` is pointing at
- a Github repository named [Musk8teers/container.training-spring-music](https://github.com/Musk8teers/container.training-spring-music/)
- on its branch named `rocky`
This branch deploys v1.0.0 of the _Web_ app:
`spec.template.spec.containers.image: ghcr.io/musk8teers/container.training-spring-music:1.0.0`
What happens if the **_🎸ROCKY_** team upgrades its branch to deploy `v1.0.1` of the _Web_ app?
---
## _tenant_ **_🏭PROD_**
💡 The **_🏭PROD_** _tenant_ is still waiting for its `Flux` configuration, but don't worry about it right now.
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT
</pre>

Some files were not shown because too many files have changed in this diff.