Compare commits

...

71 Commits

Author SHA1 Message Date
Jérôme Petazzoni
8ba96380b7 🔧 Disable threading in flask debug server
For educational purposes, the RNG service is meant to
process only one request at a time (without concurrency).
But the Flask development server now defaults to a multi-threaded
implementation, which defeats our original purpose.
So here we disable threading to restore the original
behavior.
2026-01-30 13:00:01 +01:00
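
For reference, a minimal sketch of the single-threaded setup described above (the route and handler here are illustrative, not the actual rng.py): `threaded=False` in `app.run()` and `--without-threads` with the `flask` CLI produce the same one-request-at-a-time behavior.

```python
import os
from flask import Flask

app = Flask(__name__)

@app.route("/<int:how_many_bytes>")
def rng(how_many_bytes):
    # Illustrative handler: return N random bytes.
    return os.urandom(how_many_bytes)

if __name__ == "__main__":
    # threaded=False restores the one-request-at-a-time behavior;
    # with the flask CLI, the equivalent is `flask run --without-threads`.
    app.run(host="::", port=80, threaded=False)
```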
Olivier Delhomme
4311a09ccd 🔧 updates documentation links that changed 2026-01-28 15:31:00 +01:00
Jérôme Petazzoni
feb0a8cdb9 Use multiple # in included files' comments
...otherwise that causes side effects with the TOC generator 🙈
2026-01-27 08:50:23 +01:00
Jérôme Petazzoni
302924db40 🔧 Bump up vcluster version to work around weird bug
(Probably due to K8S version mismatch; vcluster was on 1.33 and the
host cluster was on 1.35. Symptoms: some pods start, all their
containers are ready, the pod shows up as ready, and yet, it's not
considered ready so the deployment says 0/1 and Helm never completes.)
2026-01-27 08:49:04 +01:00
Jérôme Petazzoni
4c2a7c6696 ⚙️ Remove academy builder script 2026-01-14 19:37:58 +01:00
Jérôme Petazzoni
a1f75a4e74 🔗 Add link to color source code 2026-01-14 18:07:55 +01:00
Jérôme Petazzoni
8dd674ec4b 🏭️ Refactor Kyverno chapter
- split out the kyverno 'colors' policies
- add a concrete example about conflicting ingress resources
2026-01-14 16:42:14 +01:00
Jérôme Petazzoni
93ad45da9b 🏭️ Refactor Services sections
Make the content suitable to both live classes and recorded content
2025-12-14 19:22:42 -06:00
Jérôme Petazzoni
01b2456e03 Add detailed section about taints and tolerations 2025-12-14 19:21:27 -06:00
Jérôme Petazzoni
942f20812b 🏭️ Refactor content about Ingress Controllers
The section about Ingress has been both simplified (separating
the content about taints and tolerations) and made somewhat
deeper, to make it more compatible with both live classes and
recorded videos.

A new section about setting up Ingress Controllers has been
added.
2025-12-14 19:19:16 -06:00
Jérôme Petazzoni
a44701960c Add ExternalDNS chapter
Based on what I did with Linode a few years ago,
but updated as ExternalDNS conventions have evolved.
2025-12-11 16:58:33 -06:00
Jérôme Petazzoni
34f3976777 🔧 Labs chapter shouldn't get its TOC entry 2025-12-11 12:41:12 -06:00
Jérôme Petazzoni
ba376feb10 🏭️ Big refactoring of December 2025
The structure of each deck should now be:
- title slide
- logistics (for live classes)
- chat room info (for live classes)
- shared/about-slides
- */prereqs* (when relevant; mostly k8s classes)
- shared/handson
- */labs-live (for live classes)
- shared/connecting (for live classes)
- */labs-async
- toc

This is more uniform across the different courses
(live and async; containers and K8S).
2025-12-10 19:46:14 -06:00
Jérôme Petazzoni
e8e2123457 📃 Make it easier to serve single markdown files 2025-12-04 12:58:01 -06:00
dependabot[bot]
f9d73c0a1e Bump path-to-regexp and express in /slides/autopilot
Bumps [path-to-regexp](https://github.com/pillarjs/path-to-regexp) to 0.1.12 and updates ancestor dependency [express](https://github.com/expressjs/express). These dependencies need to be updated together.


Updates `path-to-regexp` from 0.1.10 to 0.1.12
- [Release notes](https://github.com/pillarjs/path-to-regexp/releases)
- [Changelog](https://github.com/pillarjs/path-to-regexp/blob/master/History.md)
- [Commits](https://github.com/pillarjs/path-to-regexp/compare/v0.1.10...v0.1.12)

Updates `express` from 4.21.1 to 4.21.2
- [Release notes](https://github.com/expressjs/express/releases)
- [Changelog](https://github.com/expressjs/express/blob/4.21.2/History.md)
- [Commits](https://github.com/expressjs/express/compare/4.21.1...4.21.2)

---
updated-dependencies:
- dependency-name: path-to-regexp
  dependency-type: indirect
- dependency-name: express
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-03 17:34:10 +01:00
Jérôme Petazzoni
5ec84efa50 Add small CNPG section 2025-11-19 19:27:33 +01:00
Zefiro Anthragon
bd36e965ee Fix typo in Training_Environment.md 2025-11-18 13:26:19 +01:00
Jérôme Petazzoni
17eb4efa3b 🐞 Refer to correct Traefik manifest in cert-manager chapter 2025-11-17 17:23:45 +01:00
Jérôme Petazzoni
c5c0f80b01 🔧 Tweak info about Gateway API 2025-11-17 17:20:14 +01:00
Jérôme Petazzoni
aa815a53fc 🔧 Tweak Grafana chapter 2025-11-17 17:05:46 +01:00
Jérôme Petazzoni
0beaf2f1f2 🛜 Generate HAProxy configuration for Proxmox IPv6 scenario 2025-11-17 15:31:25 +01:00
Jérôme Petazzoni
cf3ce21eec Add Dockerfile example before starting to write our own 2025-11-12 17:11:22 +01:00
Jérôme Petazzoni
66dadf3c60 🔎 Clarify use of local Docker 2025-11-12 16:40:15 +01:00
Jérôme Petazzoni
80476c8323 🖼️ Add Docker architecture diagram 2025-11-12 16:34:54 +01:00
Jérôme Petazzoni
a8797b1f80 ♻️ Update instructions about lab environments
The link to Play With Docker was broken. Also, since PWD was
out of capacity, I added a link to KodeKloud.
2025-11-12 16:19:04 +01:00
Jérôme Petazzoni
890b76e119 🚢 Add small hands-on chapter about Harbor 2025-11-11 18:13:05 +01:00
Jérôme Petazzoni
570ec8b25e 🛜 Make it work for hosts without IPv4 connectivity
Note that we install a TON of things from GitHub.
Since GitHub isn't available over IPv6, we use a
custom solution based on cachttps, a caching proxy
that forwards requests to GitHub. Our deployment
scripts try to detect a cachttps instance (assumed
to be reachable via DNS at cachttps.internal);
if they find one, they use it. Otherwise, they
access GitHub directly - which won't work on IPv6-only
hosts, but works fine on IPv4 and dual-stack hosts.
2025-11-11 18:10:32 +01:00
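
A hedged sketch of that detection logic (the real scripts are shell-based and substitute a $GITHUB base URL; the proxy URL layout below is an assumption):

```python
import socket

def github_base_url() -> str:
    # If a cachttps instance resolves at cachttps.internal, use it as the
    # GitHub entry point; otherwise fall back to direct access (IPv4 only).
    try:
        socket.getaddrinfo("cachttps.internal", 443)
        return "https://cachttps.internal"
    except socket.gaierror:
        return "https://github.com"

print(github_base_url())
```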
Jérôme Petazzoni
de1d7430fd 🔧 Enable hostPort support in Cilium install 2025-11-11 11:08:43 +01:00
Jérôme Petazzoni
bc97f8c38c 🛜 Support AAAA records in cloudflare DNS scripts 2025-11-11 11:07:47 +01:00
Jérôme Petazzoni
1dea1acaa0 🛠️ Improve Proxmox support
The first iteration of Proxmox support relied on a single
template image hosted on shared storage. This new iteration
relies on template images hosted on local storage. It will
detect which template VM to use on each node based on its tags.

Note: later, we'll need to expose an easy way to switch
between shared-store and local-store template images.
2025-11-09 19:50:07 +01:00
Jérôme Petazzoni
7e891faadd 🛜 Bring IPv6 support to kubeadm deployments
Multiple small changes to allow deployment in IPv6-only environments.
What we do:
- detect if we are in an IPv6-only environment
- if yes, specify a service CIDR and listening address
  (kubeadm will otherwise pick the IPv4 address for the API server)
- switch to Cilium
Also minor changes to pssh and terraform to handle pinging and
connecting to IPv6 addresses.
2025-11-09 19:50:07 +01:00
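
The detection step, restated as a hedged Python sketch (the actual script uses `ip -json a` piped through jq, as visible in the labctl diff further down; treating "no global IPv4 address" as the IPv6-only signal is an assumption here):

```python
import json
import subprocess

def global_addresses(family: str) -> list[str]:
    # Parse `ip -json addr` and keep globally-scoped addresses of one family.
    ifaces = json.loads(subprocess.check_output(["ip", "-json", "addr"], text=True))
    return [
        info["local"]
        for iface in ifaces
        for info in iface.get("addr_info", [])
        if info.get("family") == family and info.get("scope") == "global"
    ]

ipv6 = global_addresses("inet6")
ipv6_only = bool(ipv6) and not global_addresses("inet")
print("IPv6-only:", ipv6_only, "global IPv6 addresses:", ipv6)
```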
Jérôme Petazzoni
a1fa6221d8 ♻️ Update dockercoins for IPv6 support
We want to be able to run on IPv6-only clusters
(as well as legacy IPv4 clusters and dual-stack
clusters). This requires minor changes in the code,
because in multiple places, we were binding listening
sockets explicitly to 0.0.0.0. We change this to ::
instead, and in some cases, we make it easy to
override (e.g. through environment variables).
2025-11-09 19:50:07 +01:00
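
A minimal sketch of that dual-stack-friendly bind (the environment variable names are assumptions, not necessarily the ones used in the repo):

```python
import os
import socket

bind_addr = os.environ.get("BIND_ADDR", "::")  # "::" accepts IPv6 clients, and
port = int(os.environ.get("PORT", "80"))       # usually IPv4-mapped ones too

sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((bind_addr, port))
sock.listen()
print(f"listening on [{bind_addr}]:{port}")
```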
Arnaud Bienvenu
c42c7db516 Grammatical fix in slides 2025-11-08 10:43:30 +01:00
Ludovic Piot
96ecb86f23 📝 🎨 lpiot-issue-8: Add the Flux bootstrap without relying on an organization 2025-11-05 18:59:42 +01:00
Ludovic Piot
58255d47fa 📝 lpiot-issue-10: Add a "delete PAT" step during the Flux install process 2025-11-05 18:59:42 +01:00
Ludovic Piot
8ca2d2a4fb ✏️ 2025-11-05 18:59:42 +01:00
Ludovic Piot
641e0ea98b 📝 lpiot-issue-12: Flux only needs REPO permissions in the GitHub PAT 2025-11-05 18:59:42 +01:00
Ludovic Piot
356a0e814f 🎨 Change the name of the k0s servers 2025-11-05 18:59:42 +01:00
Ludovic Piot
2effd41ff0 📝 🐛 lpiot-issue-25: broken link 2025-11-05 18:59:42 +01:00
Ludovic Piot
af448c4540 🐛 add the YAML files needed by the M5/M6 section 2025-11-05 18:59:42 +01:00
Jérôme Petazzoni
9f0224bb26 🖼️ Re-add images for flux/M6 chapter 2025-11-04 08:19:09 +01:00
Jérôme Petazzoni
39a71565a0 🔧 Replace hyperkube with kube-apiserver
Hyperkube isn't available anymore, so the previous version of
the script kept redownloading the tarball over and over.
2025-11-04 07:46:27 +01:00
Jérôme Petazzoni
cbea696d2c Invoke kind script to automatically start a k8s cluster 2025-10-29 16:09:42 +01:00
Jérôme Petazzoni
46b56b90e2 🐞 Typo fix 2025-10-29 13:40:00 +01:00
Jérôme Petazzoni
6d0d394948 ⚙️ Add academy builder script 2025-10-29 13:37:02 +01:00
Jérôme Petazzoni
d6017b5d40 Add chapter about codespaces and dev clusters 2025-10-28 21:44:09 +01:00
Jérôme Petazzoni
8b91bd6ef0 🔗 Add link to FluxCD Kustomization 2025-10-28 17:59:55 +01:00
Jérôme Petazzoni
078e799666 Update Kustomize content 2025-10-28 16:22:54 +01:00
Jérôme Petazzoni
f25abf663b 🛠️ Improve AWS EKS support
- detect which EKS version to use
  (instead of hard-coding it in the TF config)
- do not issue a CSR on EKS
  (because EKS is broken and doesn't support it)
- automatically install a StorageClass on EKS
  (because the EBS CSI addon doesn't install one by default)
- put EKS clusters in the default VPC
  (instead of creating one VPC per cluster,
  since there is a default limit of 5 VPC per region)
2025-10-25 11:26:13 +02:00
Jérôme Petazzoni
6d8ae7132d Improve googlecloud support
- add support to provision VMs on googlecloud
- refactor the way we define the project used by Terraform
  (we'll now use the GOOGLE_PROJECT environment variable,
  and if it's not set, we'll set it automatically by getting
  the default project from the gcloud CLI)
2025-10-24 10:46:54 +02:00
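
A hedged sketch of that fallback (the real logic is a shell snippet in labctl, shown further down, which calls `gcloud config get project`):

```python
import os
import subprocess

project = os.environ.get("GOOGLE_PROJECT")
if not project:
    # Fall back to the default project configured in the gcloud CLI.
    project = subprocess.check_output(
        ["gcloud", "config", "get", "project"], text=True
    ).strip()
    os.environ["GOOGLE_PROJECT"] = project

print(f"Using Google Cloud project: {project}")
```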
Jérôme Petazzoni
404f816de6 Add a couple of slides about sidecars 2025-10-23 10:06:13 +02:00
Jérôme Petazzoni
b0a3460efa 🛜 Add details about Traffic Distribution
KEP4444 hit GA in 1.33, so I've updated the relevant slide
2025-10-22 17:05:54 +02:00
Jérôme Petazzoni
944db5f8ea Add chapter on Gateway API 2025-10-22 16:48:49 +02:00
Ludovic Piot
e820ca466f 🆕 Add Flux (M5B/M6) content 2025-10-21 13:21:16 +02:00
Jérôme Petazzoni
d3c5bde6de ✏️ Mutating CEL is coming 2025-10-14 17:45:55 +02:00
Jérôme Petazzoni
b56e7bdb52 Add content about Extended Resources and Dynamic Resource Allocation 2025-10-14 17:42:27 +02:00
Jérôme Petazzoni
f98c77564f 📃 Update information about swap 2025-10-13 17:30:32 +02:00
Jérôme Petazzoni
3d98d56bf8 🔗 Fix a couple of Helm URLs 2025-10-08 08:33:29 +02:00
Jérôme Petazzoni
25576a570f ♻️ Update vcluster Helm chart; improve konk script
It is now possible to have multiple konk clusters in parallel,
thanks to the KONKTAG environment variable.
2025-10-01 16:44:11 +02:00
Jérôme Petazzoni
47fc74a21a 🔗 Add a bunch of links to CNPG and ZFS talks in concept slides 2025-09-29 15:23:22 +02:00
Jérôme Petazzoni
d524cd73fa Add a mention of kl and gonzo 2025-09-22 16:13:48 +02:00
Jérôme Petazzoni
6b1fa88887 Compile some cloud native security recs 2025-09-11 16:48:13 +02:00
Jérôme Petazzoni
f37d8112f8 🔧 Mention container engine levels 2025-09-11 16:21:27 +02:00
Jérôme Petazzoni
5005de823d Merge container security content 2025-09-11 16:01:33 +02:00
Jérôme Petazzoni
de60cdbc7e ✏️ Tweak container from scratch exercise 2025-09-08 15:31:47 +02:00
Jérôme Petazzoni
605ee21b83 Add BuildKit exercise 2025-09-07 10:52:42 +02:00
Jérôme Petazzoni
fd06364ab0 ♻️ Update notes about overlay support 2025-09-06 13:16:39 +02:00
Jérôme Petazzoni
1be66f3513 Add image deep dive + exercise 2025-09-06 13:08:01 +02:00
Jérôme Petazzoni
3c142ad06d Add logistics file for Enix 2025-09-04 17:00:39 +02:00
Jérôme Petazzoni
b291243472 Add container from scratch exercise; update cgroup to v2 2025-09-04 15:01:11 +02:00
emanulato
ef7d4fcdaa fix PuTTY link in handson.md
The link to PuTTY was pointing to putty.org. This domain has no relation to the PuTTY project! The website run by the actual PuTTY team is at https://putty.software (see https://hachyderm.io/@simontatham/115025974777386803).
2025-08-29 14:51:59 +02:00
172 changed files with 11464 additions and 2833 deletions

View File

@@ -9,7 +9,7 @@
"forwardPorts": [],
//"postCreateCommand": "... install extra packages...",
"postStartCommand": "dind.sh",
"postStartCommand": "dind.sh ; kind.sh",
// This lets us use "docker-outside-docker".
// Unfortunately, minikube, kind, etc. don't work very well that way;

2
.gitignore vendored
View File

@@ -17,6 +17,8 @@ slides/autopilot/state.yaml
slides/index.html
slides/past.html
slides/slides.zip
slides/_academy_*
slides/fragments
node_modules
### macOS ###

View File

@@ -1,26 +1,24 @@
version: "2"
services:
rng:
build: rng
ports:
- "8001:80"
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
- "8002:80"
webui:
build: webui
ports:
- "8000:80"
- "8000:80"
volumes:
- "./webui/files/:/files/"
- "./webui/files/:/files/"
redis:
image: redis
worker:
build: worker

View File

@@ -1,7 +1,8 @@
FROM ruby:alpine
WORKDIR /app
RUN apk add --update build-base curl
RUN gem install sinatra --version '~> 3'
RUN gem install thin --version '~> 1'
ADD hasher.rb /
CMD ["ruby", "hasher.rb"]
RUN gem install thin
COPY hasher.rb .
CMD ["ruby", "hasher.rb", "-o", "::"]
EXPOSE 80

View File

@@ -2,7 +2,6 @@ require 'digest'
require 'sinatra'
require 'socket'
set :bind, '0.0.0.0'
set :port, 80
post '/' do

View File

@@ -1,5 +1,7 @@
FROM python:alpine
WORKDIR /app
RUN pip install Flask
COPY rng.py /
CMD ["python", "rng.py"]
COPY rng.py .
ENV FLASK_APP=rng FLASK_RUN_HOST=:: FLASK_RUN_PORT=80
CMD ["flask", "run", "--without-threads"]
EXPOSE 80

View File

@@ -28,5 +28,5 @@ def rng(how_many_bytes):
if __name__ == "__main__":
app.run(host="0.0.0.0", port=80, threaded=False)
app.run(port=80)

View File

@@ -1,7 +1,8 @@
FROM node:4-slim
RUN npm install express@4
RUN npm install redis@3
COPY files/ /files/
COPY webui.js /
FROM node:23-alpine
WORKDIR /app
RUN npm install express
RUN npm install morgan
RUN npm install redis@5
COPY . .
CMD ["node", "webui.js"]
EXPOSE 80

View File

@@ -1,26 +1,34 @@
var express = require('express');
var app = express();
var redis = require('redis');
import express from 'express';
import morgan from 'morgan';
import { createClient } from 'redis';
var client = redis.createClient(6379, 'redis');
client.on("error", function (err) {
console.error("Redis error", err);
});
var client = await createClient({
url: "redis://redis",
socket: {
family: 0
}
})
.on("error", function (err) {
console.error("Redis error", err);
})
.connect();
var app = express();
app.use(morgan('common'));
app.get('/', function (req, res) {
res.redirect('/index.html');
});
app.get('/json', function (req, res) {
client.hlen('wallet', function (err, coins) {
client.get('hashes', function (err, hashes) {
var now = Date.now() / 1000;
res.json( {
coins: coins,
hashes: hashes,
now: now
});
});
app.get('/json', async(req, res) => {
var coins = await client.hLen('wallet');
var hashes = await client.get('hashes');
var now = Date.now() / 1000;
res.json({
coins: coins,
hashes: hashes,
now: now
});
});

View File

@@ -1,5 +1,6 @@
FROM python:alpine
WORKDIR /app
RUN pip install redis
RUN pip install requests
COPY worker.py /
COPY worker.py .
CMD ["python", "worker.py"]

View File

@@ -0,0 +1,9 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
data:
use-forwarded-headers: "true"
compute-full-forwarded-for: "true"
use-proxy-protocol: "true"

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: ingress-nginx

View File

@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- M6-ingress-nginx-components.yaml
- sync.yaml
patches:
- path: M6-ingress-nginx-cm-patch.yaml
target:
kind: ConfigMap
- path: M6-ingress-nginx-svc-patch.yaml
target:
kind: Service

View File

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
annotations:
service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: "true"
service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: kyverno

View File

@@ -0,0 +1,72 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: flux-multi-tenancy
spec:
validationFailureAction: enforce
rules:
- name: serviceAccountName
exclude:
resources:
namespaces:
- flux-system
match:
resources:
kinds:
- Kustomization
- HelmRelease
validate:
message: ".spec.serviceAccountName is required"
pattern:
spec:
serviceAccountName: "?*"
- name: kustomizationSourceRefNamespace
exclude:
resources:
namespaces:
- flux-system
- ingress-nginx
- kyverno
- monitoring
- openebs
match:
resources:
kinds:
- Kustomization
preconditions:
any:
- key: "{{request.object.spec.sourceRef.namespace}}"
operator: NotEquals
value: ""
validate:
message: "spec.sourceRef.namespace must be the same as metadata.namespace"
deny:
conditions:
- key: "{{request.object.spec.sourceRef.namespace}}"
operator: NotEquals
value: "{{request.object.metadata.namespace}}"
- name: helmReleaseSourceRefNamespace
exclude:
resources:
namespaces:
- flux-system
- ingress-nginx
- kyverno
- monitoring
- openebs
match:
resources:
kinds:
- HelmRelease
preconditions:
any:
- key: "{{request.object.spec.chart.spec.sourceRef.namespace}}"
operator: NotEquals
value: ""
validate:
message: "spec.chart.spec.sourceRef.namespace must be the same as metadata.namespace"
deny:
conditions:
- key: "{{request.object.spec.chart.spec.sourceRef.namespace}}"
operator: NotEquals
value: "{{request.object.metadata.namespace}}"

View File

@@ -0,0 +1,29 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: monitoring
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana
namespace: monitoring
spec:
ingressClassName: nginx
rules:
- host: grafana.test.metal.mybestdomain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kube-prometheus-stack-grafana
port:
number: 80

View File

@@ -0,0 +1,35 @@
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-from-other-namespaces
spec:
podSelector: {}
ingress:
- from:
- podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-webui
spec:
podSelector:
matchLabels:
app: web
ingress:
- from: []
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-db
spec:
podSelector:
matchLabels:
app: db
ingress:
- from:
- podSelector:
matchLabels:
app: web

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: openebs

View File

@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openebs
resources:
- M6-openebs-components.yaml
- sync.yaml
configMapGenerator:
- name: openebs-values
files:
- values.yaml=M6-openebs-values.yaml
configurations:
- M6-openebs-kustomizeconfig.yaml

View File

@@ -0,0 +1,6 @@
nameReference:
- kind: ConfigMap
version: v1
fieldSpecs:
- path: spec/valuesFrom/name
kind: HelmRelease

View File

@@ -0,0 +1,15 @@
# helm install openebs --namespace openebs openebs/openebs
# --set engines.replicated.mayastor.enabled=false
# --set lvm-localpv.lvmNode.kubeletDir=/var/lib/k0s/kubelet/
# --create-namespace
engines:
replicated:
mayastor:
enabled: false
# Needed for k0s install since kubelet install is slightly divergent from vanilla install >:-(
lvm-localpv:
lvmNode:
kubeletDir: /var/lib/k0s/kubelet/
localprovisioner:
hostpathClass:
isDefaultClass: true

View File

@@ -0,0 +1,38 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
namespace: rocky-test
name: rocky-full-access
rules:
- apiGroups: ["", extensions, apps]
resources: [deployments, replicasets, pods, services, ingresses, statefulsets]
verbs: [get, list, watch, create, update, patch, delete] # You can also use [*]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: rocky-pv-access
rules:
- apiGroups: [""]
resources: [persistentvolumes]
verbs: [get, list, watch, create, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
toolkit.fluxcd.io/tenant: rocky
name: rocky-reconciler2
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: rocky-pv-access
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: gotk:rocky-test:reconciler
- kind: ServiceAccount
name: rocky
namespace: rocky-test

19
k8s/M6-rocky-ingress.yaml Normal file
View File

@@ -0,0 +1,19 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rocky
namespace: rocky-test
spec:
ingressClassName: nginx
rules:
- host: rocky.test.mybestdomain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 80

View File

@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base/rocky
patches:
- path: M6-rocky-test-patch.yaml
target:
kind: Kustomization

View File

@@ -0,0 +1,7 @@
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
name: rocky
namespace: rocky-test
spec:
path: ./k8s/plain

View File

@@ -12,5 +12,5 @@ listen very-basic-load-balancer
server blue color.blue.svc:80
server green color.green.svc:80
# Note: the services above must exist,
# otherwise HAproxy won't start.
### Note: the services above must exist,
### otherwise HAproxy won't start.

View File

@@ -0,0 +1,12 @@
# This removes the haproxy Deployment.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- patch: |-
$patch: delete
kind: Deployment
apiVersion: apps/v1
metadata:
name: haproxy

View File

@@ -0,0 +1,14 @@
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
# Within a Kustomization, it is not possible to specify in which
# order transformations (patches, replacements, etc) should be
# executed. If we want to execute transformations in a specific
# order, one possibility is to put them in individual components,
# and then invoke these components in the order we want.
# It works, but it creates an extra level of indirection, which
# reduces readability and complicates maintenance.
components:
- setup
- cleanup

View File

@@ -0,0 +1,20 @@
global
#log stdout format raw local0
#daemon
maxconn 32
defaults
#log global
timeout client 1h
timeout connect 1h
timeout server 1h
mode http
option abortonclose
frontend metrics
bind :9000
http-request use-service prometheus-exporter
frontend ollama_frontend
bind :8000
default_backend ollama_backend
maxconn 16
backend ollama_backend
server ollama_server localhost:11434 check

View File

@@ -0,0 +1,39 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: haproxy
name: haproxy
spec:
selector:
matchLabels:
app: haproxy
template:
metadata:
labels:
app: haproxy
spec:
volumes:
- name: haproxy
configMap:
name: haproxy
containers:
- image: haproxy:3.0
name: haproxy
volumeMounts:
- name: haproxy
mountPath: /usr/local/etc/haproxy
readinessProbe:
httpGet:
port: 9000
ports:
- name: haproxy
containerPort: 8000
- name: metrics
containerPort: 9000
resources:
requests:
cpu: 0.05
limits:
cpu: 1

View File

@@ -0,0 +1,75 @@
# This adds a sidecar to the ollama Deployment, by taking
# the pod template and volumes from the haproxy Deployment.
# The idea is to allow running ollama+haproxy in two modes:
# - separately (each with their own Deployment),
# - together in the same Pod, sidecar-style.
# The YAML files define how to run them separately, and this
# "replacements" directive fetches a specific volume and
# a specific container from the haproxy Deployment, to add
# them to the ollama Deployment.
#
# This would be simpler if kustomize allowed appending or
# merging lists in "replacements"; but that doesn't seem to be
# possible at the moment.
#
# It would be even better if kustomize allowed performing
# a strategic merge using a fieldPath as the source, because
# we could merge both the containers and the volumes in a
# single operation.
#
# Note that technically, it might be possible to layer
# multiple kustomizations so that one generates the patch
# to be used in another; but it wouldn't be very readable
# or maintainable so we decided to not do that right now.
#
# However, the current approach (fetching fields one by one)
# has an advantage: it could let us transform the haproxy
# container into a real sidecar (i.e. an initContainer with
# a restartPolicy=Always).
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
- haproxy.yaml
configMapGenerator:
- name: haproxy
files:
- haproxy.cfg
replacements:
- source:
kind: Deployment
name: haproxy
fieldPath: spec.template.spec.volumes.[name=haproxy]
targets:
- select:
kind: Deployment
name: ollama
fieldPaths:
- spec.template.spec.volumes.[name=haproxy]
options:
create: true
- source:
kind: Deployment
name: haproxy
fieldPath: spec.template.spec.containers.[name=haproxy]
targets:
- select:
kind: Deployment
name: ollama
fieldPaths:
- spec.template.spec.containers.[name=haproxy]
options:
create: true
- source:
kind: Deployment
name: haproxy
fieldPath: spec.template.spec.containers.[name=haproxy].ports.[name=haproxy].containerPort
targets:
- select:
kind: Service
name: ollama
fieldPaths:
- spec.ports.[name=11434].targetPort

View File

@@ -0,0 +1,34 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: blue
name: blue
spec:
replicas: 2
selector:
matchLabels:
app: blue
template:
metadata:
labels:
app: blue
spec:
containers:
- image: jpetazzo/color
name: color
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
labels:
app: blue
name: blue
spec:
ports:
- port: 80
selector:
app: blue

View File

@@ -0,0 +1,94 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Each of these YAML files contains a Deployment and a Service.
# The blue.yaml file is here just to demonstrate that the rest
# of this Kustomization can be precisely scoped to the ollama
# Deployment (and Service): the blue Deployment and Service
# shouldn't be affected by our kustomize transformers.
resources:
- ollama.yaml
- blue.yaml
buildMetadata:
# Add a label app.kubernetes.io/managed-by=kustomize-vX.Y.Z
- managedByLabel
# Add an annotation config.kubernetes.io/origin, indicating:
# - which file defined that resource;
# - if it comes from a git repository, which one, and which
# ref (tag, branch...) it was.
- originAnnotations
# Add an annotation alpha.config.kubernetes.io/transformations
# indicating which patches and other transformers have changed
# each resource.
- transformerAnnotations
# Let's generate a ConfigMap with literal values.
# Note that this will actually add a suffix to the name of the
# ConfigMaps (e.g.: ollama-8bk8bd8m76) and it will update all
# references to the ConfigMap (e.g. in Deployment manifests)
# accordingly. The suffix is a hash of the ConfigMap contents,
# so that basically, if the ConfigMap is edited, any workload
# using that ConfigMap will automatically do a rolling update.
configMapGenerator:
- name: ollama
literals:
- "model=gemma3:270m"
- "prompt=If you visit Paris, I suggest that you"
- "queue=4"
name: ollama
patches:
# The Deployment manifest in ollama.yaml doesn't specify
# resource requests and limits, so that it can run on any
# cluster (including resource-constrained local clusters
# like KiND or minikube). The example below adds CPU
# requests and limits using a strategic merge patch.
# The patch is inlined here, but it could also be put
# in a file and referenced with "path: xxxxxx.yaml".
- patch: |
apiVersion: apps/v1
kind: Deployment
metadata:
name: ollama
spec:
template:
spec:
containers:
- name: ollama
resources:
requests:
cpu: 1
limits:
cpu: 2
# This will have the same effect, with one little detail:
# JSON patches cannot specify containers by name, so this
# assumes that the ollama container is the first one in
# the pod template (whereas the strategic merge patch can
# use "merge keys" and identify containers by their name).
#- target:
# kind: Deployment
# name: ollama
# patch: |
# - op: add
# path: /spec/template/spec/containers/0/resources
# value:
# requests:
# cpu: 1
# limits:
# cpu: 2
# A "component" is a bit like a "base", in the sense that
# it lets us define some reusable resources and behaviors.
# There is a key difference, though:
# - a "base" will be evaluated in isolation: it will
# generate+transform some resources, then these resources
# will be included in the main Kustomization;
# - a "component" has access to all the resources that
# have been generated by the main Kustomization, which
# means that it can transform them (with patches etc).
components:
- add-haproxy-sidecar

View File

@@ -0,0 +1,73 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ollama
name: ollama
spec:
selector:
matchLabels:
app: ollama
template:
metadata:
labels:
app: ollama
spec:
volumes:
- name: ollama
hostPath:
path: /opt/ollama
type: DirectoryOrCreate
containers:
- image: ollama/ollama
name: ollama
env:
- name: OLLAMA_MAX_QUEUE
valueFrom:
configMapKeyRef:
name: ollama
key: queue
- name: MODEL
valueFrom:
configMapKeyRef:
name: ollama
key: model
volumeMounts:
- name: ollama
mountPath: /root/.ollama
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- ollama pull $MODEL
livenessProbe:
httpGet:
port: 11434
readinessProbe:
exec:
command:
- /bin/sh
- -c
- ollama show $MODEL
ports:
- name: ollama
containerPort: 11434
---
apiVersion: v1
kind: Service
metadata:
labels:
app: ollama
name: ollama
spec:
ports:
- name: "11434"
port: 11434
protocol: TCP
targetPort: 11434
selector:
app: ollama
type: ClusterIP

View File

@@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- microservices
- redis

View File

@@ -0,0 +1,13 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- microservices.yaml
transformers:
- |
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
name: use-ghcr-io
prefix: ghcr.io/
fieldSpecs:
- path: spec/template/spec/containers/image

View File

@@ -0,0 +1,125 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: dockercoins/hasher:v0.1
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: dockercoins/rng:v0.1
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: dockercoins/webui:v0.1
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: dockercoins/worker:v0.1
name: worker

View File

@@ -0,0 +1,4 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- redis.yaml

View File

@@ -0,0 +1,35 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP

View File

@@ -0,0 +1,160 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: dockercoins/hasher:v0.1
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: dockercoins/rng:v0.1
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: dockercoins/webui:v0.1
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: dockercoins/worker:v0.1
name: worker

View File

@@ -0,0 +1,30 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- dockercoins.yaml
replacements:
- sourceValue: ghcr.io/dockercoins
targets:
- select:
kind: Deployment
labelSelector: "app in (hasher,rng,webui,worker)"
# It will soon be possible to use regexes in replacement selectors,
# meaning that the "labelSelector:" above can be replaced with the
# following "name:" selector which is a tiny bit simpler:
#name: hasher|rng|webui|worker
# Regex support in replacement selectors was added by this PR:
# https://github.com/kubernetes-sigs/kustomize/pull/5863
# This PR was merged in August 2025, but as of October 2025, the
# latest release of Kustomize is 5.7.1, which was released in July.
# Hopefully the feature will be available in the next release :)
# Another possibility would be to select all Deployments, and then
# reject the one(s) for which we don't want to update the registry;
# for instance:
#reject:
# kind: Deployment
# name: redis
fieldPaths:
- spec.template.spec.containers.*.image
options:
delimiter: "/"
index: 0

View File

@@ -1,87 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:1.7
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system

View File

@@ -1,114 +0,0 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik
namespace: traefik
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik
namespace: traefik
labels:
app: traefik
spec:
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
name: traefik
spec:
tolerations:
- effect: NoSchedule
operator: Exists
# If, for some reason, our CNI plugin doesn't support hostPort,
# we can enable hostNetwork instead. That should work everywhere
# but it doesn't provide the same isolation.
#hostNetwork: true
serviceAccountName: traefik
terminationGracePeriodSeconds: 60
containers:
- image: traefik:v2.10
name: traefik
ports:
- name: http
containerPort: 80
hostPort: 80
- name: https
containerPort: 443
hostPort: 443
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --accesslog
- --api
- --api.insecure
- --log.level=INFO
- --metrics.prometheus
- --providers.kubernetesingress
- --entrypoints.http.Address=:80
- --entrypoints.https.Address=:443
- --entrypoints.https.http.tls.certResolver=default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
- ingressclasses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik
subjects:
- kind: ServiceAccount
name: traefik
namespace: traefik
---
kind: IngressClass
apiVersion: networking.k8s.io/v1
metadata:
name: traefik
annotations:
ingressclass.kubernetes.io/is-default-class: "true"
spec:
controller: traefik.io/ingress-controller

View File

@@ -1 +0,0 @@
traefik-v2.yaml

123
k8s/traefik.yaml Normal file
View File

@@ -0,0 +1,123 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik
namespace: traefik
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik
namespace: traefik
labels:
app: traefik
spec:
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
name: traefik
spec:
tolerations:
- effect: NoSchedule
operator: Exists
# If, for some reason, our CNI plugin doesn't support hostPort,
# we can enable hostNetwork instead. That should work everywhere
# but it doesn't provide the same isolation.
#hostNetwork: true
serviceAccountName: traefik
terminationGracePeriodSeconds: 60
containers:
- image: traefik:v3.5
name: traefik
ports:
- name: http
containerPort: 80
hostPort: 80
- name: https
containerPort: 443
hostPort: 443
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --accesslog
- --api
- --api.insecure
- --entrypoints.http.Address=:80
- --entrypoints.https.Address=:443
- --global.sendAnonymousUsage=true
- --log.level=INFO
- --metrics.prometheus
- --providers.kubernetesingress
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik
subjects:
- kind: ServiceAccount
name: traefik
namespace: traefik
---
kind: IngressClass
apiVersion: networking.k8s.io/v1
metadata:
name: traefik
annotations:
ingressclass.kubernetes.io/is-default-class: "true"
spec:
controller: traefik.io/ingress-controller

View File

@@ -66,7 +66,7 @@ Here is where we look for credentials for each provider:
- Civo: CLI configuration file (`~/.civo.json`)
- Digital Ocean: CLI configuration file (`~/.config/doctl/config.yaml`)
- Exoscale: CLI configuration file (`~/.config/exoscale/exoscale.toml`)
- Google Cloud: FIXME, note that the project name is currently hard-coded to `prepare-tf`
- Google Cloud: we're using "Application Default Credentials (ADC)"; run `gcloud auth application-default login`; note that we'll use the default "project" set in `gcloud` unless you set the `GOOGLE_PROJECT` environment variable
- Hetzner: CLI configuration file (`~/.config/hcloud/cli.toml`)
- Linode: CLI configuration file (`~/.config/linode-cli`)
- OpenStack: you will need to write a tfvars file (check [this example](terraform/virtual-machines/openstack/tfvars.example))

View File

@@ -36,8 +36,12 @@ _populate_zone() {
ZONE_ID=$(_get_zone_id $1)
shift
for IPADDR in $*; do
cloudflare zones/$ZONE_ID/dns_records "name=*" "type=A" "content=$IPADDR"
cloudflare zones/$ZONE_ID/dns_records "name=\@" "type=A" "content=$IPADDR"
case "$IPADDR" in
*.*) TYPE=A;;
*:*) TYPE=AAAA;;
esac
cloudflare zones/$ZONE_ID/dns_records "name=*" "type=$TYPE" "content=$IPADDR"
cloudflare zones/$ZONE_ID/dns_records "name=\@" "type=$TYPE" "content=$IPADDR"
done
}

View File

@@ -5,16 +5,25 @@
# 10% CPU
# (See https://docs.google.com/document/d/1n0lwp6rQKQUIuo_A5LQ1dgCzrmjkDjmDtNj1Jn92UrI)
# PRO2-XS = 4 core, 16 gb
# Note that we also need 2 volumes per vcluster (one for vcluster itself, one for shpod),
# so we might hit the maximum number of volumes per node!
# (TODO: check what that limit is on Scaleway and Linode)
#
# With vspod:
# 800 MB RAM
# 33% CPU
#
set -e
PROVIDER=scaleway
STUDENTS=30
KONKTAG=konk
PROVIDER=linode
STUDENTS=2
case "$PROVIDER" in
linode)
export TF_VAR_node_size=g6-standard-6
export TF_VAR_location=us-east
export TF_VAR_location=fr-par
;;
scaleway)
export TF_VAR_node_size=PRO2-XS
@@ -28,11 +37,13 @@ esac
export KUBECONFIG=~/kubeconfig
if [ "$PROVIDER" = "kind" ]; then
kind create cluster --name konk
kind create cluster --name $KONKTAG
ADDRTYPE=InternalIP
else
./labctl create --mode mk8s --settings settings/konk.env --provider $PROVIDER --tag konk
cp tags/konk/stage2/kubeconfig.101 $KUBECONFIG
if ! [ -f tags/$KONKTAG/stage2/kubeconfig.101 ]; then
./labctl create --mode mk8s --settings settings/konk.env --provider $PROVIDER --tag $KONKTAG
fi
cp tags/$KONKTAG/stage2/kubeconfig.101 $KUBECONFIG
ADDRTYPE=ExternalIP
fi

View File

@@ -56,7 +56,7 @@ _cmd_codeserver() {
ARCH=${ARCHITECTURE-amd64}
CODESERVER_VERSION=4.96.4
CODESERVER_URL=https://github.com/coder/code-server/releases/download/v${CODESERVER_VERSION}/code-server-${CODESERVER_VERSION}-linux-${ARCH}.tar.gz
CODESERVER_URL=\$GITHUB/coder/code-server/releases/download/v${CODESERVER_VERSION}/code-server-${CODESERVER_VERSION}-linux-${ARCH}.tar.gz
pssh "
set -e
i_am_first_node || exit 0
@@ -230,7 +230,7 @@ _cmd_create() {
;;
*) die "Invalid mode: $MODE (supported modes: mk8s, pssh)." ;;
esac
if ! [ -f "$SETTINGS" ]; then
die "Settings file ($SETTINGS) not found."
fi
@@ -270,7 +270,27 @@ _cmd_create() {
ln -s ../../$SETTINGS tags/$TAG/settings.env.orig
cp $SETTINGS tags/$TAG/settings.env
. $SETTINGS
# For Google Cloud, it is necessary to specify which "project" to use.
# Unfortunately, the Terraform provider doesn't seem to have a way
# to detect which Google Cloud project you want to use; it has to be
# specified one way or another. Let's decide that it should be set with
# the GOOGLE_PROJECT env var; and if that var is not set, we'll try to
# figure it out from gcloud.
# (See https://github.com/hashicorp/terraform-provider-google/issues/10907#issuecomment-1015721600)
# Since we need that variable to be set each time we'll call Terraform
# (e.g. when destroying the environment), let's save it to the settings.env
# file.
if [ "$PROVIDER" = "googlecloud" ]; then
if ! [ "$GOOGLE_PROJECT" ]; then
info "PROVIDER=googlecloud but GOOGLE_PROJECT is not set. Detecting it."
GOOGLE_PROJECT=$(gcloud config get project)
info "GOOGLE_PROJECT will be set to '$GOOGLE_PROJECT'."
fi
echo "export GOOGLE_PROJECT=$GOOGLE_PROJECT" >> tags/$TAG/settings.env
fi
. tags/$TAG/settings.env
echo $MODE > tags/$TAG/mode
echo $PROVIDER > tags/$TAG/provider
@@ -355,8 +375,8 @@ _cmd_clusterize() {
pssh -I < tags/$TAG/clusters.tsv "
grep -w \$PSSH_HOST | tr '\t' '\n' > /tmp/cluster"
pssh "
echo \$PSSH_HOST > /tmp/ipv4
head -n 1 /tmp/cluster | sudo tee /etc/ipv4_of_first_node
echo \$PSSH_HOST > /tmp/ip_address
head -n 1 /tmp/cluster | sudo tee /etc/ip_address_of_first_node
echo ${CLUSTERPREFIX}1 | sudo tee /etc/name_of_first_node
echo HOSTIP=\$PSSH_HOST | sudo tee -a /etc/environment
NODEINDEX=\$((\$PSSH_NODENUM%$CLUSTERSIZE+1))
@@ -439,7 +459,7 @@ _cmd_docker() {
set -e
### Install docker-compose.
sudo curl -fsSL -o /usr/local/bin/docker-compose \
https://github.com/docker/compose/releases/download/$COMPOSE_VERSION/docker-compose-$COMPOSE_PLATFORM
\$GITHUB/docker/compose/releases/download/$COMPOSE_VERSION/docker-compose-$COMPOSE_PLATFORM
sudo chmod +x /usr/local/bin/docker-compose
docker-compose version
@@ -447,7 +467,7 @@ _cmd_docker() {
##VERSION## https://github.com/docker/machine/releases
MACHINE_VERSION=v0.16.2
sudo curl -fsSL -o /usr/local/bin/docker-machine \
https://github.com/docker/machine/releases/download/\$MACHINE_VERSION/docker-machine-\$(uname -s)-\$(uname -m)
\$GITHUB/docker/machine/releases/download/\$MACHINE_VERSION/docker-machine-\$(uname -s)-\$(uname -m)
sudo chmod +x /usr/local/bin/docker-machine
docker-machine version
"
@@ -480,10 +500,10 @@ _cmd_kubebins() {
set -e
cd /usr/local/bin
if ! [ -x etcd ]; then
curl -L https://github.com/etcd-io/etcd/releases/download/$ETCD_VERSION/etcd-$ETCD_VERSION-linux-$ARCH.tar.gz \
curl -L \$GITHUB/etcd-io/etcd/releases/download/$ETCD_VERSION/etcd-$ETCD_VERSION-linux-$ARCH.tar.gz \
| sudo tar --strip-components=1 --wildcards -zx '*/etcd' '*/etcdctl'
fi
if ! [ -x hyperkube ]; then
if ! [ -x kube-apiserver ]; then
##VERSION##
curl -L https://dl.k8s.io/$K8SBIN_VERSION/kubernetes-server-linux-$ARCH.tar.gz \
| sudo tar --strip-components=3 -zx \
@@ -492,7 +512,7 @@ _cmd_kubebins() {
sudo mkdir -p /opt/cni/bin
cd /opt/cni/bin
if ! [ -x bridge ]; then
curl -L https://github.com/containernetworking/plugins/releases/download/$CNI_VERSION/cni-plugins-linux-$ARCH-$CNI_VERSION.tgz \
curl -L \$GITHUB/containernetworking/plugins/releases/download/$CNI_VERSION/cni-plugins-linux-$ARCH-$CNI_VERSION.tgz \
| sudo tar -zx
fi
"
@@ -542,6 +562,18 @@ EOF"
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl &&
echo 'alias k=kubecolor' | sudo tee /etc/bash_completion.d/k &&
echo 'complete -F __start_kubectl k' | sudo tee -a /etc/bash_completion.d/k"
# Install helm early
# (so that we can use it to install e.g. Cilium etc.)
ARCH=${ARCHITECTURE-amd64}
HELM_VERSION=3.19.1
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl -fsSL https://get.helm.sh/helm-v${HELM_VERSION}-linux-${ARCH}.tar.gz |
sudo tar --strip-components=1 --wildcards -zx -C /usr/local/bin '*/helm'
helm completion bash | sudo tee /etc/bash_completion.d/helm
helm version
fi"
}
_cmd kubeadm "Setup kubernetes clusters with kubeadm"
@@ -565,6 +597,18 @@ _cmd_kubeadm() {
# Initialize kube control plane
pssh --timeout 200 "
IPV6=\$(ip -json a | jq -r '.[].addr_info[] | select(.scope==\"global\" and .family==\"inet6\") | .local' | head -n1)
if [ \"\$IPV6\" ]; then
ADVERTISE=\"advertiseAddress: \$IPV6\"
SERVICE_SUBNET=\"serviceSubnet: fdff::/112\"
touch /tmp/install-cilium-ipv6-only
touch /tmp/ipv6-only
else
ADVERTISE=
SERVICE_SUBNET=
touch /tmp/install-weave
fi
echo IPV6=\$IPV6 ADVERTISE=\$ADVERTISE
if i_am_first_node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token &&
cat >/tmp/kubeadm-config.yaml <<EOF
@@ -572,9 +616,12 @@ kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: \$(cat /tmp/token)
localAPIEndpoint:
\$ADVERTISE
nodeRegistration:
ignorePreflightErrors:
- NumCPU
- FileContent--proc-sys-net-ipv6-conf-default-forwarding
$IGNORE_SYSTEMVERIFICATION
$IGNORE_SWAP
$IGNORE_IPTABLES
@@ -601,7 +648,9 @@ kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
apiServer:
certSANs:
- \$(cat /tmp/ipv4)
- \$(cat /tmp/ip_address)
networking:
\$SERVICE_SUBNET
$CLUSTER_CONFIGURATION_KUBERNETESVERSION
EOF
sudo kubeadm init --config=/tmp/kubeadm-config.yaml
@@ -620,9 +669,20 @@ EOF
# Install weave as the pod network
pssh "
if i_am_first_node; then
curl -fsSL https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml |
sed s,weaveworks/weave,quay.io/rackspace/weave, |
kubectl apply -f-
if [ -f /tmp/install-weave ]; then
curl -fsSL \$GITHUB/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml |
sed s,weaveworks/weave,quay.io/rackspace/weave, |
kubectl apply -f-
fi
if [ -f /tmp/install-cilium-ipv6-only ]; then
helm upgrade -i cilium cilium --repo https://helm.cilium.io/ \
--namespace kube-system \
--set cni.chainingMode=portmap \
--set ipv6.enabled=true \
--set ipv4.enabled=false \
--set underlayProtocol=ipv6 \
--version 1.18.3
fi
fi"
# FIXME this is a gross hack to add the deployment key to our SSH agent,
@@ -645,13 +705,16 @@ EOF
fi
# Install metrics server
pssh "
pssh -I <../k8s/metrics-server.yaml "
if i_am_first_node; then
kubectl apply -f https://raw.githubusercontent.com/jpetazzo/container.training/master/k8s/metrics-server.yaml
kubectl apply -f-
fi"
# It would be nice to be able to use that helm chart for metrics-server.
# Unfortunately, the charts themselves are on github.com and we want to
# avoid that due to their lack of IPv6 support.
#helm upgrade --install metrics-server \
# --repo https://kubernetes-sigs.github.io/metrics-server/ metrics-server \
# --namespace kube-system --set args={--kubelet-insecure-tls}
fi"
}
_cmd kubetools "Install a bunch of CLI tools for Kubernetes"
@@ -678,7 +741,7 @@ _cmd_kubetools() {
# Install ArgoCD CLI
##VERSION## https://github.com/argoproj/argo-cd/releases/latest
URL=https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-${ARCH}
URL=\$GITHUB/argoproj/argo-cd/releases/latest/download/argocd-linux-${ARCH}
pssh "
if [ ! -x /usr/local/bin/argocd ]; then
sudo curl -o /usr/local/bin/argocd -fsSL $URL
@@ -691,7 +754,7 @@ _cmd_kubetools() {
##VERSION## https://github.com/fluxcd/flux2/releases
FLUX_VERSION=2.3.0
FILENAME=flux_${FLUX_VERSION}_linux_${ARCH}
URL=https://github.com/fluxcd/flux2/releases/download/v$FLUX_VERSION/$FILENAME.tar.gz
URL=\$GITHUB/fluxcd/flux2/releases/download/v$FLUX_VERSION/$FILENAME.tar.gz
pssh "
if [ ! -x /usr/local/bin/flux ]; then
curl -fsSL $URL |
@@ -706,7 +769,7 @@ _cmd_kubetools() {
set -e
if ! [ -x /usr/local/bin/kctx ]; then
cd /tmp
git clone https://github.com/ahmetb/kubectx
git clone \$GITHUB/ahmetb/kubectx
sudo cp kubectx/kubectx /usr/local/bin/kctx
sudo cp kubectx/kubens /usr/local/bin/kns
sudo cp kubectx/completion/*.bash /etc/bash_completion.d
@@ -717,7 +780,7 @@ _cmd_kubetools() {
set -e
if ! [ -d /opt/kube-ps1 ]; then
cd /tmp
git clone https://github.com/jonmosco/kube-ps1
git clone \$GITHUB/jonmosco/kube-ps1
sudo mv kube-ps1 /opt/kube-ps1
sudo -u $USER_LOGIN sed -i s/docker-prompt/kube_ps1/ /home/$USER_LOGIN/.bashrc &&
sudo -u $USER_LOGIN tee -a /home/$USER_LOGIN/.bashrc <<EOF
@@ -734,7 +797,7 @@ EOF
##VERSION## https://github.com/stern/stern/releases
STERN_VERSION=1.29.0
FILENAME=stern_${STERN_VERSION}_linux_${ARCH}
URL=https://github.com/stern/stern/releases/download/v$STERN_VERSION/$FILENAME.tar.gz
URL=\$GITHUB/stern/stern/releases/download/v$STERN_VERSION/$FILENAME.tar.gz
pssh "
if [ ! -x /usr/local/bin/stern ]; then
curl -fsSL $URL |
@@ -745,9 +808,11 @@ EOF
fi"
# Install helm
HELM_VERSION=3.19.1
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 | sudo bash &&
curl -fsSL https://get.helm.sh/helm-v${HELM_VERSION}-linux-${ARCH}.tar.gz |
sudo tar --strip-components=1 --wildcards -zx -C /usr/local/bin '*/helm'
helm completion bash | sudo tee /etc/bash_completion.d/helm
helm version
fi"
@@ -755,7 +820,7 @@ EOF
# Install kustomize
##VERSION## https://github.com/kubernetes-sigs/kustomize/releases
KUSTOMIZE_VERSION=v5.4.1
URL=https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_${ARCH}.tar.gz
URL=\$GITHUB/kubernetes-sigs/kustomize/releases/download/kustomize/${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_${ARCH}.tar.gz
pssh "
if [ ! -x /usr/local/bin/kustomize ]; then
curl -fsSL $URL |
@@ -772,15 +837,17 @@ EOF
pssh "
if [ ! -x /usr/local/bin/ship ]; then
##VERSION##
curl -fsSL https://github.com/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_$ARCH.tar.gz |
curl -fsSL \$GITHUB/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_$ARCH.tar.gz |
sudo tar -C /usr/local/bin -zx ship
fi"
# Install the AWS IAM authenticator
AWSIAMAUTH_VERSION=0.7.8
URL=\$GITHUB/kubernetes-sigs/aws-iam-authenticator/releases/download/v${AWSIAMAUTH_VERSION}/aws-iam-authenticator_${AWSIAMAUTH_VERSION}_linux_${ARCH}
pssh "
if [ ! -x /usr/local/bin/aws-iam-authenticator ]; then
##VERSION##
sudo curl -fsSLo /usr/local/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/$ARCH/aws-iam-authenticator
sudo curl -fsSLo /usr/local/bin/aws-iam-authenticator $URL
sudo chmod +x /usr/local/bin/aws-iam-authenticator
aws-iam-authenticator version
fi"
@@ -790,17 +857,17 @@ EOF
if [ ! -x /usr/local/bin/jless ]; then
##VERSION##
sudo apt-get install -y libxcb-render0 libxcb-shape0 libxcb-xfixes0
wget https://github.com/PaulJuliusMartinez/jless/releases/download/v0.9.0/jless-v0.9.0-x86_64-unknown-linux-gnu.zip
wget \$GITHUB/PaulJuliusMartinez/jless/releases/download/v0.9.0/jless-v0.9.0-x86_64-unknown-linux-gnu.zip
unzip jless-v0.9.0-x86_64-unknown-linux-gnu
sudo mv jless /usr/local/bin
fi"
# Install the krew package manager
pssh "
if [ ! -d /home/$USER_LOGIN/.krew ]; then
if [ ! -d /home/$USER_LOGIN/.krew ] && [ ! -f /tmp/ipv6-only ]; then
cd /tmp &&
KREW=krew-linux_$ARCH
curl -fsSL https://github.com/kubernetes-sigs/krew/releases/latest/download/\$KREW.tar.gz |
curl -fsSL \$GITHUB/kubernetes-sigs/krew/releases/latest/download/\$KREW.tar.gz |
tar -zxf- &&
sudo -u $USER_LOGIN -H ./\$KREW install krew &&
echo export PATH=/home/$USER_LOGIN/.krew/bin:\\\$PATH | sudo -u $USER_LOGIN tee -a /home/$USER_LOGIN/.bashrc
@@ -808,7 +875,7 @@ EOF
# Install kubecolor
KUBECOLOR_VERSION=0.4.0
URL=https://github.com/kubecolor/kubecolor/releases/download/v${KUBECOLOR_VERSION}/kubecolor_${KUBECOLOR_VERSION}_linux_${ARCH}.tar.gz
URL=\$GITHUB/kubecolor/kubecolor/releases/download/v${KUBECOLOR_VERSION}/kubecolor_${KUBECOLOR_VERSION}_linux_${ARCH}.tar.gz
pssh "
if [ ! -x /usr/local/bin/kubecolor ]; then
##VERSION##
@@ -820,7 +887,7 @@ EOF
pssh "
if [ ! -x /usr/local/bin/k9s ]; then
FILENAME=k9s_Linux_$ARCH.tar.gz &&
curl -fsSL https://github.com/derailed/k9s/releases/latest/download/\$FILENAME |
curl -fsSL \$GITHUB/derailed/k9s/releases/latest/download/\$FILENAME |
sudo tar -C /usr/local/bin -zx k9s
k9s version
fi"
@@ -829,7 +896,7 @@ EOF
pssh "
if [ ! -x /usr/local/bin/popeye ]; then
FILENAME=popeye_Linux_$ARCH.tar.gz &&
curl -fsSL https://github.com/derailed/popeye/releases/latest/download/\$FILENAME |
curl -fsSL \$GITHUB/derailed/popeye/releases/latest/download/\$FILENAME |
sudo tar -C /usr/local/bin -zx popeye
popeye version
fi"
@@ -842,7 +909,7 @@ EOF
if [ ! -x /usr/local/bin/tilt ]; then
TILT_VERSION=0.33.13
FILENAME=tilt.\$TILT_VERSION.linux.$TILT_ARCH.tar.gz
curl -fsSL https://github.com/tilt-dev/tilt/releases/download/v\$TILT_VERSION/\$FILENAME |
curl -fsSL \$GITHUB/tilt-dev/tilt/releases/download/v\$TILT_VERSION/\$FILENAME |
sudo tar -C /usr/local/bin -zx tilt
tilt completion bash | sudo tee /etc/bash_completion.d/tilt
tilt version
@@ -860,7 +927,7 @@ EOF
# Install Kompose
pssh "
if [ ! -x /usr/local/bin/kompose ]; then
curl -fsSLo kompose https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-$ARCH &&
curl -fsSLo kompose \$GITHUB/kubernetes/kompose/releases/latest/download/kompose-linux-$ARCH &&
sudo install kompose /usr/local/bin
kompose completion bash | sudo tee /etc/bash_completion.d/kompose
kompose version
@@ -869,7 +936,7 @@ EOF
# Install KinD
pssh "
if [ ! -x /usr/local/bin/kind ]; then
curl -fsSLo kind https://github.com/kubernetes-sigs/kind/releases/latest/download/kind-linux-$ARCH &&
curl -fsSLo kind \$GITHUB/kubernetes-sigs/kind/releases/latest/download/kind-linux-$ARCH &&
sudo install kind /usr/local/bin
kind completion bash | sudo tee /etc/bash_completion.d/kind
kind version
@@ -878,7 +945,7 @@ EOF
# Install YTT
pssh "
if [ ! -x /usr/local/bin/ytt ]; then
curl -fsSLo ytt https://github.com/vmware-tanzu/carvel-ytt/releases/latest/download/ytt-linux-$ARCH &&
curl -fsSLo ytt \$GITHUB/vmware-tanzu/carvel-ytt/releases/latest/download/ytt-linux-$ARCH &&
sudo install ytt /usr/local/bin
ytt completion bash | sudo tee /etc/bash_completion.d/ytt
ytt version
@@ -886,7 +953,7 @@ EOF
##VERSION## https://github.com/bitnami-labs/sealed-secrets/releases
KUBESEAL_VERSION=0.26.2
URL=https://github.com/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION}/kubeseal-${KUBESEAL_VERSION}-linux-${ARCH}.tar.gz
URL=\$GITHUB/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION}/kubeseal-${KUBESEAL_VERSION}-linux-${ARCH}.tar.gz
#case $ARCH in
#amd64) FILENAME=kubeseal-linux-amd64;;
#arm64) FILENAME=kubeseal-arm64;;
@@ -903,7 +970,7 @@ EOF
VELERO_VERSION=1.13.2
pssh "
if [ ! -x /usr/local/bin/velero ]; then
curl -fsSL https://github.com/vmware-tanzu/velero/releases/download/v$VELERO_VERSION/velero-v$VELERO_VERSION-linux-$ARCH.tar.gz |
curl -fsSL \$GITHUB/vmware-tanzu/velero/releases/download/v$VELERO_VERSION/velero-v$VELERO_VERSION-linux-$ARCH.tar.gz |
sudo tar --strip-components=1 --wildcards -zx -C /usr/local/bin '*/velero'
velero completion bash | sudo tee /etc/bash_completion.d/velero
velero version --client-only
@@ -913,7 +980,7 @@ EOF
KUBENT_VERSION=0.7.2
pssh "
if [ ! -x /usr/local/bin/kubent ]; then
curl -fsSL https://github.com/doitintl/kube-no-trouble/releases/download/${KUBENT_VERSION}/kubent-${KUBENT_VERSION}-linux-$ARCH.tar.gz |
curl -fsSL \$GITHUB/doitintl/kube-no-trouble/releases/download/${KUBENT_VERSION}/kubent-${KUBENT_VERSION}-linux-$ARCH.tar.gz |
sudo tar -zxvf- -C /usr/local/bin kubent
kubent --version
fi"
@@ -921,7 +988,7 @@ EOF
# Ngrok. Note that unfortunately, this is the x86_64 binary.
# We might have to rethink how to handle this for multi-arch environments.
pssh "
if [ ! -x /usr/local/bin/ngrok ]; then
if [ ! -x /usr/local/bin/ngrok ] && [ ! -f /tmp/ipv6-only ]; then
curl -fsSL https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz |
sudo tar -zxvf- -C /usr/local/bin ngrok
fi"
@@ -1020,7 +1087,9 @@ _cmd_ping() {
TAG=$1
need_tag
fping < tags/$TAG/ips.txt
# If we connect to our VMs over IPv6, the IP address is between brackets.
# Unfortunately, fping doesn't support that; so let's strip brackets here.
tr -d [] < tags/$TAG/ips.txt | fping
}
_cmd stage2 "Finalize the setup of managed Kubernetes clusters"
@@ -1092,7 +1161,7 @@ _cmd_standardize() {
sudo netfilter-persistent start
fi"
# oracle-cloud-agent upgrades pacakges in the background.
# oracle-cloud-agent upgrades packages in the background.
# This breaks our deployment scripts, because when we invoke apt-get, it complains
# that the lock already exists (symptom: random "Exited with error code 100").
# Workaround: if we detect oracle-cloud-agent, remove it.
@@ -1104,6 +1173,15 @@ _cmd_standardize() {
sudo snap remove oracle-cloud-agent
sudo dpkg --remove --force-remove-reinstreq unified-monitoring-agent
fi"
# Check if a cachttps instance is available.
# (This is used to access GitHub on IPv6-only hosts.)
pssh "
if curl -fsSLI http://cachttps.internal:3131/https://github.com/ >/dev/null; then
echo GITHUB=http://cachttps.internal:3131/https://github.com
else
echo GITHUB=https://github.com
fi | sudo tee -a /etc/environment"
}
_cmd tailhist "Install history viewer on port 1088"
@@ -1119,7 +1197,7 @@ _cmd_tailhist () {
pssh "
set -e
sudo apt-get install unzip -y
wget -c https://github.com/joewalnes/websocketd/releases/download/v0.3.0/websocketd-0.3.0-linux_$ARCH.zip
wget -c \$GITHUB/joewalnes/websocketd/releases/download/v0.3.0/websocketd-0.3.0-linux_$ARCH.zip
unzip -o websocketd-0.3.0-linux_$ARCH.zip websocketd
sudo mv websocketd /usr/local/bin/websocketd
sudo mkdir -p /opt/tailhist
@@ -1218,14 +1296,17 @@ fi
"
}
_cmd ssh "Open an SSH session to the first node of a tag"
_cmd ssh "Open an SSH session to a node (first one by default)"
_cmd_ssh() {
TAG=$1
need_tag
IP=$(head -1 tags/$TAG/ips.txt)
info "Logging into $IP (default password: $USER_PASSWORD)"
ssh $SSHOPTS $USER_LOGIN@$IP
if [ "$2" ]; then
ssh -l ubuntu -i tags/$TAG/id_rsa $2
else
IP=$(head -1 tags/$TAG/ips.txt)
info "Logging into $IP (default password: $USER_PASSWORD)"
ssh $SSHOPTS $USER_LOGIN@$IP
fi
}
_cmd tags "List groups of VMs known locally"
@@ -1382,7 +1463,7 @@ _cmd_webssh() {
sudo apt-get install python3-tornado python3-paramiko -y"
pssh "
cd /opt
[ -d webssh ] || sudo git clone https://github.com/jpetazzo/webssh"
[ -d webssh ] || sudo git clone \$GITHUB/jpetazzo/webssh"
pssh "
for KEYFILE in /etc/ssh/*.pub; do
read a b c < \$KEYFILE; echo localhost \$a \$b
@@ -1467,7 +1548,7 @@ test_vm() {
"whoami" \
"hostname -i" \
"ls -l /usr/local/bin/i_am_first_node" \
"grep . /etc/name_of_first_node /etc/ipv4_of_first_node" \
"grep . /etc/name_of_first_node /etc/ip_addres_of_first_node" \
"cat /etc/hosts" \
"hostnamectl status" \
"docker version | grep Version -B1" \

View File

@@ -23,6 +23,14 @@ pssh() {
# necessary - or down to zero, too.
sleep ${PSSH_DELAY_PRE-1}
# When things go wrong, it's convenient to ask pssh to show the output
# of the failed command. Let's make that easy with a DEBUG env var.
if [ "$DEBUG" ]; then
PSSH_I=-i
else
PSSH_I=""
fi
$(which pssh || which parallel-ssh) -h $HOSTFILE -l ubuntu \
--par ${PSSH_PARALLEL_CONNECTIONS-100} \
--timeout 300 \
@@ -31,5 +39,6 @@ pssh() {
-O UserKnownHostsFile=/dev/null \
-O StrictHostKeyChecking=no \
-O ForwardAgent=yes \
$PSSH_I \
"$@"
}

View File

@@ -2,7 +2,7 @@ terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.16.1"
version = "~> 2.38.0"
}
helm = {
source = "hashicorp/helm"
@@ -107,6 +107,31 @@ resource "helm_release" "metrics_server_${index}" {
]
}
# As of October 2025, the ebs-csi-driver addon (which is used on EKS
# to provision persistent volumes) doesn't automatically create a
# StorageClass. Here, we're trying to detect the DaemonSet created
# by the ebs-csi-driver; and if we find it, we create the corresponding
# StorageClass.
data "kubernetes_resources" "ebs_csi_node_${index}" {
provider = kubernetes.cluster_${index}
api_version = "apps/v1"
kind = "DaemonSet"
label_selector = "app.kubernetes.io/name=aws-ebs-csi-driver"
namespace = "kube-system"
}
resource "kubernetes_storage_class" "ebs_csi_${index}" {
count = (length(data.kubernetes_resources.ebs_csi_node_${index}.objects) > 0) ? 1 : 0
provider = kubernetes.cluster_${index}
metadata {
name = "ebs-csi"
annotations = {
"storageclass.kubernetes.io/is-default-class" = "true"
}
}
storage_provisioner = "ebs.csi.aws.com"
}
# This section here deserves a little explanation.
#
# When we access a cluster with shpod (either through SSH or code-server)
@@ -136,8 +161,14 @@ resource "helm_release" "metrics_server_${index}" {
# Lastly - in the ConfigMap we actually put both the original kubeconfig,
# and the one where we injected our new user (just in case we want to
# use or look at the original for any reason).
#
# One more thing: the kubernetes.io/kube-apiserver-client signer is
# disabled on EKS, so... we don't generate that ConfigMap on EKS.
# To detect if we're on EKS, we're looking for the ebs-csi-node DaemonSet.
# (Which means that the detection will break if the ebs-csi addon is missing.)
resource "kubernetes_config_map" "kubeconfig_${index}" {
count = (length(data.kubernetes_resources.ebs_csi_node_${index}.objects) > 0) ? 0 : 1
provider = kubernetes.cluster_${index}
metadata {
name = "kubeconfig"
@@ -163,7 +194,7 @@ resource "kubernetes_config_map" "kubeconfig_${index}" {
- name: cluster-admin
user:
client-key-data: $${base64encode(tls_private_key.cluster_admin_${index}.private_key_pem)}
client-certificate-data: $${base64encode(kubernetes_certificate_signing_request_v1.cluster_admin_${index}.certificate)}
client-certificate-data: $${base64encode(kubernetes_certificate_signing_request_v1.cluster_admin_${index}[0].certificate)}
EOT
}
}
@@ -201,6 +232,7 @@ resource "kubernetes_cluster_role_binding" "shpod_cluster_admin_${index}" {
}
resource "kubernetes_certificate_signing_request_v1" "cluster_admin_${index}" {
count = (length(data.kubernetes_resources.ebs_csi_node_${index}.objects) > 0) ? 0 : 1
provider = kubernetes.cluster_${index}
metadata {
name = "cluster-admin"

View File

@@ -1,60 +1,45 @@
# Taken from:
# https://github.com/hashicorp/learn-terraform-provision-eks-cluster/blob/main/main.tf
data "aws_availability_zones" "available" {}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.19.0"
name = var.cluster_name
cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
data "aws_eks_cluster_versions" "_" {
default_only = true
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.5.1"
cluster_name = var.cluster_name
cluster_version = "1.24"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
cluster_endpoint_public_access = true
eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
source = "terraform-aws-modules/eks/aws"
version = "~> 21.0"
name = var.cluster_name
kubernetes_version = data.aws_eks_cluster_versions._.cluster_versions[0].cluster_version
vpc_id = local.vpc_id
subnet_ids = local.subnet_ids
endpoint_public_access = true
enable_cluster_creator_admin_permissions = true
upgrade_policy = {
# The default policy is EXTENDED, which incurs additional costs
# when running an old control plane. We don't advise to run old
# control planes, but we also don't want to incur costs if an
# old version is chosen accidentally.
support_type = "STANDARD"
}
addons = {
coredns = {}
eks-pod-identity-agent = {
before_compute = true
}
kube-proxy = {}
vpc-cni = {
before_compute = true
}
aws-ebs-csi-driver = {
service_account_role_arn = module.irsa-ebs-csi.iam_role_arn
}
}
eks_managed_node_groups = {
one = {
name = "node-group-one"
x86 = {
name = "x86"
instance_types = [local.node_size]
min_size = var.min_nodes_per_pool
max_size = var.max_nodes_per_pool
desired_size = var.min_nodes_per_pool
min_size = var.min_nodes_per_pool
max_size = var.max_nodes_per_pool
desired_size = var.min_nodes_per_pool
}
}
}
@@ -66,7 +51,7 @@ data "aws_iam_policy" "ebs_csi_policy" {
module "irsa-ebs-csi" {
source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
version = "4.7.0"
version = "~> 5.39.0"
create_role = true
role_name = "AmazonEKSTFEBSCSIRole-${module.eks.cluster_name}"
@@ -75,13 +60,9 @@ module "irsa-ebs-csi" {
oidc_fully_qualified_subjects = ["system:serviceaccount:kube-system:ebs-csi-controller-sa"]
}
resource "aws_eks_addon" "ebs-csi" {
cluster_name = module.eks.cluster_name
addon_name = "aws-ebs-csi-driver"
addon_version = "v1.5.2-eksbuild.1"
service_account_role_arn = module.irsa-ebs-csi.iam_role_arn
tags = {
"eks_addon" = "ebs-csi"
"terraform" = "true"
}
resource "aws_vpc_security_group_ingress_rule" "_" {
security_group_id = module.eks.node_security_group_id
cidr_ipv4 = "0.0.0.0/0"
ip_protocol = -1
description = "Allow all traffic to Kubernetes nodes (so that we can use NodePorts, hostPorts, etc.)"
}

View File

@@ -2,7 +2,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.47.0"
version = "~> 6.17.0"
}
}
}

View File

@@ -0,0 +1,61 @@
# OK, we have two options here.
# 1. Create our own VPC
# - Pros: provides good isolation from other stuff deployed in the
# AWS account; makes sure that we don't interact with
# existing security groups, subnets, etc.
# - Cons: by default, there is a quota of 5 VPC per region, so
# we can only deploy 5 clusters
# 2. Use the default VPC
# - Pros/cons: the opposite :)
variable "use_default_vpc" {
type = bool
default = true
}
data "aws_vpc" "default" {
default = true
}
data "aws_subnets" "default" {
filter {
name = "vpc-id"
values = [data.aws_vpc.default.id]
}
}
data "aws_availability_zones" "available" {}
module "vpc" {
count = var.use_default_vpc ? 0 : 1
source = "terraform-aws-modules/vpc/aws"
version = "~> 6.0"
name = var.cluster_name
cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)
private_subnets = ["10.0.11.0/24", "10.0.12.0/24", "10.0.13.0/24"]
public_subnets = ["10.0.21.0/24", "10.0.22.0/24", "10.0.23.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
map_public_ip_on_launch = true
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
}
locals {
vpc_id = var.use_default_vpc ? data.aws_vpc.default.id : module.vpc[0].vpc_id
subnet_ids = var.use_default_vpc ? data.aws_subnets.default.ids : module.vpc[0].public_subnets
}

View File

@@ -1,12 +0,0 @@
locals {
location = var.location != null ? var.location : "europe-north1-a"
region = replace(local.location, "/-[a-z]$/", "")
# Unfortunately, the following line doesn't work
# (that attribute just returns an empty string)
# so we have to hard-code the project name.
#project = data.google_client_config._.project
project = "prepare-tf"
}
data "google_client_config" "_" {}

View File

@@ -1,7 +1,7 @@
resource "google_container_cluster" "_" {
name = var.cluster_name
project = local.project
location = local.location
name = var.cluster_name
location = local.location
deletion_protection = false
#min_master_version = var.k8s_version
# To deploy private clusters, uncomment the section below,
@@ -42,7 +42,7 @@ resource "google_container_cluster" "_" {
node_pool {
name = "x86"
node_config {
tags = var.common_tags
tags = ["lab-${var.cluster_name}"]
machine_type = local.node_size
}
initial_node_count = var.min_nodes_per_pool
@@ -62,3 +62,25 @@ resource "google_container_cluster" "_" {
}
}
}
resource "google_compute_firewall" "_" {
name = "lab-${var.cluster_name}"
network = "default"
allow {
protocol = "tcp"
ports = ["0-65535"]
}
allow {
protocol = "udp"
ports = ["0-65535"]
}
allow {
protocol = "icmp"
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["lab-${var.cluster_name}"]
}

View File

@@ -6,6 +6,8 @@ output "has_metrics_server" {
value = true
}
data "google_client_config" "_" {}
output "kubeconfig" {
sensitive = true
value = <<-EOT

View File

@@ -1,8 +0,0 @@
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "4.5.0"
}
}
}

View File

@@ -0,0 +1 @@
../../providers/googlecloud/provider.tf

View File

@@ -4,23 +4,35 @@ resource "helm_release" "_" {
create_namespace = true
repository = "https://charts.loft.sh"
chart = "vcluster"
version = "0.19.7"
version = "0.30.4"
values = [
yamlencode({
service = {
type = "NodePort"
}
storage = {
persistence = false
}
sync = {
nodes = {
enabled = true
syncAllNodes = true
controlPlane = {
proxy = {
extraSANs = [ local.guest_api_server_host ]
}
service = {
spec = {
type = "NodePort"
}
}
statefulSet = {
persistence = {
volumeClaim = {
enabled = true
}
}
}
}
syncer = {
extraArgs = ["--tls-san=${local.guest_api_server_host}"]
sync = {
fromHost = {
nodes = {
enabled = true
selector = {
all = true
}
}
}
}
})
]

View File

@@ -0,0 +1,8 @@
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 7.0"
}
}
}

View File

@@ -9,5 +9,9 @@ variable "node_sizes" {
variable "location" {
type = string
default = null
default = "europe-north1-a"
}
locals {
location = (var.location != "" && var.location != null) ? var.location : "europe-north1-a"
}

View File

@@ -63,7 +63,8 @@ locals {
resource "local_file" "ip_addresses" {
content = join("", formatlist("%s\n", [
for key, value in local.ip_addresses : value
for key, value in local.ip_addresses :
strcontains(value, ".") ? value : "[${value}]"
]))
filename = "ips.txt"
file_permission = "0600"

View File

@@ -0,0 +1 @@
../common.tf

View File

@@ -0,0 +1 @@
../../providers/googlecloud/config.tf

View File

@@ -0,0 +1,54 @@
# Note: names and tags on GCP have to match a specific regex:
# (?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)
# In other words, they must start with a letter; but our names generally
# start with a number (year-month-day-etc, e.g. 2025-...),
# so we prefix names and tags with "lab-" in this configuration.
resource "google_compute_instance" "_" {
for_each = local.nodes
zone = var.location
name = "lab-${each.value.node_name}"
tags = ["lab-${var.tag}"]
machine_type = each.value.node_size
boot_disk {
initialize_params {
image = "ubuntu-os-cloud/ubuntu-2404-lts-amd64"
}
}
network_interface {
network = "default"
access_config {}
}
metadata = {
"ssh-keys" = "ubuntu:${tls_private_key.ssh.public_key_openssh}"
}
}
locals {
ip_addresses = {
for key, value in local.nodes :
key => google_compute_instance._[key].network_interface[0].access_config[0].nat_ip
}
}
resource "google_compute_firewall" "_" {
name = "lab-${var.tag}"
network = "default"
allow {
protocol = "tcp"
ports = ["0-65535"]
}
allow {
protocol = "udp"
ports = ["0-65535"]
}
allow {
protocol = "icmp"
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["lab-${var.tag}"]
}

View File

@@ -0,0 +1 @@
../../providers/googlecloud/provider.tf

View File

@@ -0,0 +1 @@
../../providers/googlecloud/variables.tf

View File

@@ -0,0 +1,37 @@
# If we deploy in IPv6-only environments, and the students don't have IPv6
# connectivity, we want to offer a way to connect anyway. Our solution is
# to generate an HAProxy configuration snippet, that can be copied to a
# DualStack machine which will act as a proxy to our IPv6 machines.
# Note that the snippet still has to be copied, so this is not a 100%
# streamlined solution!
locals {
portmaps = {
for key, value in local.nodes :
(10000 + proxmox_virtual_environment_vm._[key].vm_id) => local.ip_addresses[key]
}
}
resource "local_file" "haproxy" {
filename = "./${var.tag}.cfg"
file_permission = "0644"
content = join("\n", [for port, address in local.portmaps : <<-EOT
frontend f${port}
bind *:${port}
default_backend b${port}
backend b${port}
mode tcp
server s${port} [${address}]:22 maxconn 16
EOT
])
}
resource "local_file" "sshproxy" {
filename = "sshproxy.txt"
file_permission = "0644"
content = join("", [
for cid in range(1, 1 + var.how_many_clusters) :
format("ssh -l k8s -p %d\n", proxmox_virtual_environment_vm._[format("c%03dn%03d", cid, 1)].vm_id + 10000)
])
}

View File

@@ -1,12 +1,34 @@
data "proxmox_virtual_environment_nodes" "_" {}
data "proxmox_virtual_environment_vms" "_" {
filter {
name = "template"
values = [true]
}
}
data "proxmox_virtual_environment_vms" "templates" {
for_each = toset(data.proxmox_virtual_environment_nodes._.names)
tags = ["ubuntu"]
filter {
name = "node_name"
values = [each.value]
}
filter {
name = "template"
values = [true]
}
}
locals {
pve_nodes = data.proxmox_virtual_environment_nodes._.names
pve_nodes = data.proxmox_virtual_environment_nodes._.names
pve_node = { for k, v in local.nodes : k => local.pve_nodes[v.node_index % length(local.pve_nodes)] }
pve_template_id = { for k, v in local.nodes : k => data.proxmox_virtual_environment_vms.templates[local.pve_node[k]].vms[0].vm_id }
}
resource "proxmox_virtual_environment_vm" "_" {
node_name = local.pve_nodes[each.value.node_index % length(local.pve_nodes)]
for_each = local.nodes
node_name = local.pve_node[each.key]
name = each.value.node_name
tags = ["container.training", var.tag]
stop_on_destroy = true
@@ -24,9 +46,17 @@ resource "proxmox_virtual_environment_vm" "_" {
# size = 30
# discard = "on"
#}
### Strategy 1: clone from shared storage
#clone {
# vm_id = var.proxmox_template_vm_id
# node_name = var.proxmox_template_node_name
# full = false
#}
### Strategy 2: clone from local storage
### (requires that the template exists on each node)
clone {
vm_id = var.proxmox_template_vm_id
node_name = var.proxmox_template_node_name
vm_id = local.pve_template_id[each.key]
node_name = local.pve_node[each.key]
full = false
}
agent {
@@ -41,7 +71,9 @@ resource "proxmox_virtual_environment_vm" "_" {
ip_config {
ipv4 {
address = "dhcp"
#gateway =
}
ipv6 {
address = "dhcp"
}
}
}
@@ -72,8 +104,10 @@ resource "proxmox_virtual_environment_vm" "_" {
locals {
ip_addresses = {
for key, value in local.nodes :
key => [for addr in flatten(concat(proxmox_virtual_environment_vm._[key].ipv4_addresses, ["ERROR"])) :
addr if addr != "127.0.0.1"][0]
key => [for addr in flatten(concat(
proxmox_virtual_environment_vm._[key].ipv6_addresses,
proxmox_virtual_environment_vm._[key].ipv4_addresses,
["ERROR"])) :
addr if addr != "127.0.0.1" && addr != "::1"][0]
}
}

View File

@@ -2,7 +2,7 @@ terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "~> 0.70.1"
version = "~> 0.86.0"
}
}
}

View File

@@ -10,8 +10,11 @@ proxmox_password = "CHANGEME"
# Which storage to use for VM disks. Defaults to "local".
#proxmox_storage = "ceph"
#proxmox_storage = "local-zfs"
proxmox_template_node_name = "CHANGEME"
proxmox_template_vm_id = CHANGEME
# We recently rewrote the Proxmox configurations to automatically
# detect which template to use; so these variables aren't used anymore.
#proxmox_template_node_name = "CHANGEME"
#proxmox_template_vm_id = CHANGEME

View File

@@ -23,3 +23,10 @@
# Survey form
/please https://docs.google.com/forms/d/e/1FAIpQLSfIYSgrV7tpfBNm1hOaprjnBHgWKn5n-k5vtNXYJkOX1sRxng/viewform
# Serve individual lessons with special URLs:
# - access http://container.training/k8s/ingress
# - ...redirects to http://container.training/view?k8s/ingress
# - ...proxies to http://container.training/workshop.html?k8s/ingress
/view /workshop.html 200
/* /view?:splat

View File

@@ -8,7 +8,7 @@
"name": "container-training-pub-sub-server",
"version": "0.0.1",
"dependencies": {
"express": "^4.21.1",
"express": "^4.21.2",
"socket.io": "^4.8.0",
"socket.io-client": "^4.7.5"
}
@@ -334,9 +334,9 @@
}
},
"node_modules/express": {
"version": "4.21.1",
"resolved": "https://registry.npmjs.org/express/-/express-4.21.1.tgz",
"integrity": "sha512-YSFlK1Ee0/GC8QaO91tHcDxJiE/X4FbpAyQWkxAvG6AXCuR65YzK8ua6D9hvi/TzUfZMpc+BwuM1IPw8fmQBiQ==",
"version": "4.21.2",
"resolved": "https://registry.npmjs.org/express/-/express-4.21.2.tgz",
"integrity": "sha512-28HqgMZAmih1Czt9ny7qr6ek2qddF4FclbMzwhCREB6OFfH+rXAnuNCwo1/wFvrtbgsQDb4kSbX9de9lFbrXnA==",
"dependencies": {
"accepts": "~1.3.8",
"array-flatten": "1.1.1",
@@ -357,7 +357,7 @@
"methods": "~1.1.2",
"on-finished": "2.4.1",
"parseurl": "~1.3.3",
"path-to-regexp": "0.1.10",
"path-to-regexp": "0.1.12",
"proxy-addr": "~2.0.7",
"qs": "6.13.0",
"range-parser": "~1.2.1",
@@ -372,6 +372,10 @@
},
"engines": {
"node": ">= 0.10.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/express"
}
},
"node_modules/finalhandler": {
@@ -633,9 +637,9 @@
}
},
"node_modules/path-to-regexp": {
"version": "0.1.10",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.10.tgz",
"integrity": "sha512-7lf7qcQidTku0Gu3YDPc8DJ1q7OOucfa/BSsIwjuh56VU7katFvuM8hULfkwB3Fns/rsVF7PwPKVw1sl5KQS9w=="
"version": "0.1.12",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.12.tgz",
"integrity": "sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ=="
},
"node_modules/proxy-addr": {
"version": "2.0.7",
@@ -1264,9 +1268,9 @@
"integrity": "sha512-aIL5Fx7mawVa300al2BnEE4iNvo1qETxLrPI/o05L7z6go7fCw1J6EQmbK4FmJ2AS7kgVF/KEZWufBfdClMcPg=="
},
"express": {
"version": "4.21.1",
"resolved": "https://registry.npmjs.org/express/-/express-4.21.1.tgz",
"integrity": "sha512-YSFlK1Ee0/GC8QaO91tHcDxJiE/X4FbpAyQWkxAvG6AXCuR65YzK8ua6D9hvi/TzUfZMpc+BwuM1IPw8fmQBiQ==",
"version": "4.21.2",
"resolved": "https://registry.npmjs.org/express/-/express-4.21.2.tgz",
"integrity": "sha512-28HqgMZAmih1Czt9ny7qr6ek2qddF4FclbMzwhCREB6OFfH+rXAnuNCwo1/wFvrtbgsQDb4kSbX9de9lFbrXnA==",
"requires": {
"accepts": "~1.3.8",
"array-flatten": "1.1.1",
@@ -1287,7 +1291,7 @@
"methods": "~1.1.2",
"on-finished": "2.4.1",
"parseurl": "~1.3.3",
"path-to-regexp": "0.1.10",
"path-to-regexp": "0.1.12",
"proxy-addr": "~2.0.7",
"qs": "6.13.0",
"range-parser": "~1.2.1",
@@ -1473,9 +1477,9 @@
"integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ=="
},
"path-to-regexp": {
"version": "0.1.10",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.10.tgz",
"integrity": "sha512-7lf7qcQidTku0Gu3YDPc8DJ1q7OOucfa/BSsIwjuh56VU7katFvuM8hULfkwB3Fns/rsVF7PwPKVw1sl5KQS9w=="
"version": "0.1.12",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.12.tgz",
"integrity": "sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ=="
},
"proxy-addr": {
"version": "2.0.7",

View File

@@ -2,7 +2,7 @@
"name": "container-training-pub-sub-server",
"version": "0.0.1",
"dependencies": {
"express": "^4.21.1",
"express": "^4.21.2",
"socket.io": "^4.8.0",
"socket.io-client": "^4.7.5"
}

View File

@@ -29,6 +29,20 @@ At the end of this lesson, you will be able to:
---
## `Dockerfile` example
```
FROM python:alpine
WORKDIR /app
RUN pip install Flask
COPY rng.py .
ENV FLASK_APP=rng FLASK_RUN_HOST=:: FLASK_RUN_PORT=80
CMD ["flask", "run"]
EXPOSE 80
```
---
## Writing our first `Dockerfile`
Our Dockerfile must be in a **new, empty directory**.

View File

@@ -84,9 +84,9 @@ like Windows, macOS, Solaris, FreeBSD ...
* Each `lxc-start` process exposes a custom API over a local UNIX socket, allowing interaction with the container.
* No notion of image (container filesystems have to be managed manually).
* No notion of image (container filesystems had to be managed manually).
* Networking has to be set up manually.
* Networking had to be set up manually.
---
@@ -98,10 +98,22 @@ like Windows, macOS, Solaris, FreeBSD ...
* Daemon exposing a REST API.
* Can run containers and virtual machines.
* Can manage images, snapshots, migrations, networking, storage.
* "offers a user experience similar to virtual machines but using Linux containers instead."
* Driven by Canonical.
---
## Incus
* Community-driven fork of LXD.
* Relatively recent ([announced in August 2023](https://linuxcontainers.org/incus/announcement/)), so time will tell what the notable differences will be.
---
## CRI-O
@@ -140,7 +152,7 @@ We're not aware of anyone using it directly (i.e. outside of Kubernetes).
---
## Kata containers
## [Kata containers](https://katacontainers.io/)
* OCI-compliant runtime.
@@ -152,7 +164,7 @@ We're not aware of anyone using it directly (i.e. outside of Kubernetes).
---
## gVisor
## [gVisor](https://gvisor.dev/)
* OCI-compliant runtime.
@@ -170,7 +182,17 @@ We're not aware of anyone using it directly (i.e. outside of Kubernetes).
---
## Overall ...
## Others
- Micro VMs: Firecracker, Edera...
- [crun](https://github.com/containers/crun) (runc rewritten in C)
- [youki](https://youki-dev.github.io/youki/) (runc rewritten in Rust)
---
## To Docker Or Not To Docker
* The Docker Engine is very developer-centric:
@@ -184,8 +206,26 @@ We're not aware of anyone using it directly (i.e. outside of Kubernetes).
* As a result, it is a fantastic tool in development environments.
* On servers:
* On Kubernetes clusters, containerd or CRI-O are better choices.
- Docker is a good default choice
* On Kubernetes clusters, the container engine is an implementation detail.
- If you use Kubernetes, the engine doesn't matter
---
## Different levels
- Directly use namespaces, cgroups, capabilities with custom code or scripts
*useful for troubleshooting/debugging and for educative purposes; e.g. pipework*
- Use low-level engines like runc, crun, youki
*useful when building custom architectures; e.g. a brand new orchestrator*
- Use low-level APIs like CRI or containerd grpc API
*useful to achieve high-level features like Docker, but without Docker; e.g. ctr, nerdctl*
- Use high-level APIs like Docker and Kubernetes
*that's what most people will do*
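To make these levels concrete, here is a rough sketch showing the same one-off container at each level (flags and image names are illustrative, and the low-level case assumes a prepared `./rootfs` directory):
```bash
# Low level: namespaces by hand (assumes a root filesystem in ./rootfs)
sudo unshare --mount --uts --ipc --net --pid --fork \
     chroot ./rootfs /bin/sh -c 'echo hello from a hand-made container'

# containerd API level: ctr talks directly to containerd
sudo ctr images pull docker.io/library/alpine:latest
sudo ctr run --rm docker.io/library/alpine:latest demo echo hello from containerd

# High level: Docker
docker run --rm alpine echo hello from docker
```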

View File

@@ -327,9 +327,7 @@ class: extra-details
## Which one is the best?
- Eventually, overlay2 should be the best option.
- It is available on all modern systems.
- In modern (2015+) systems, overlay2 should be the best option.
- Its memory usage is better than Device Mapper, BTRFS, or ZFS.
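If you're curious which driver your own engine uses, a quick check:
```bash
# shows e.g. "overlay2" on most modern installations
docker info --format '{{.Driver}}'
```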

View File

@@ -141,3 +141,13 @@ class: pic
* etc.
* Docker Inc. launches commercial offers.
---
## Standardization of container runtimes
- Docker 1.11 (2016) introduces containerd and runc
- [Kubernetes 1.5 (2016)](https://kubernetes.io/blog/2016/12/kubernetes-1-5-supporting-production-workloads/) introduces the CRI
- First releases of CRI-O (2017), kata containers...

View File

@@ -0,0 +1,5 @@
# Exercise — BuildKit cache mounts
We want to make our builds faster by leveraging BuildKit cache mounts.
Of course, if we don't make any changes to the code, the build should be instantaneous. Therefore, to benchmark our changes, we will make trivial changes to the code (e.g. change the message in a "print" statement) and measure (e.g. with `time`) how long it takes to rebuild the image.
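For reference, here is the general shape of a cache mount and of the benchmark loop (the pip cache path and the file being edited are just examples; adapt them to your own Dockerfile and code):
```bash
# In the Dockerfile, a cache mount keeps pip's download cache across builds:
#   RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt

# Benchmark: make a trivial change, then time the rebuild
sed -i 's/Hello/Hello!/' app.py
time docker build -t myapp .
```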

View File

@@ -1,4 +1,4 @@
# Exercise — writing better Dockerfiles
# Exercise — multi-stage builds
Let's update our Dockerfiles to leverage multi-stage builds!

View File

@@ -0,0 +1,249 @@
# Deep Dive Into Images
- Image = files (layers) + metadata (configuration)
- Layers = regular tar archives
(potentially with *whiteouts*)
- Configuration = everything needed to run the container
(e.g. Cmd, Env, WorkingDir...)
---
## Image formats
- Docker image [v1] (no longer used, except in `docker save` and `docker load`)
- Docker image v1.1 (IDs are now hashes instead of random values)
- Docker image [v2] (multi-arch support; content-addressable images)
- [OCI image format][oci] (almost the same, except for media types)
[v1]: https://github.com/moby/docker-image-spec?tab=readme-ov-file
[v2]: https://github.com/distribution/distribution/blob/main/docs/content/spec/manifest-v2-2.md
[oci]: https://github.com/opencontainers/image-spec/blob/main/spec.md
---
## OCI images
- Manifest = JSON document
- Used by container engines to know "what should I download to unpack this image?"
- Contains references to blobs, identified by their sha256 digest + size
- config (single sha256 digest)
- layers (list of sha256 digests)
- Also annotations (key/values)
- It's also possible to have a manifest list, or "fat manifest"
(which lists multiple manifests; this is used for multi-arch support)
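A quick way to look at a raw manifest (or manifest list), assuming `skopeo` and `jq` are available:
```bash
# for a multi-arch image, this returns the manifest list ("fat manifest")
skopeo inspect --raw docker://alpine:latest | jq .
```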
---
## Config blob
- Also a JSON document
- `architecture` string (e.g. `amd64`)
- `config` object
Cmd, Entrypoint, Env, ExposedPorts, StopSignal, User, Volumes, WorkingDir
- `history` list
purely informative; shown with e.g. `docker history`
- `rootfs` object
`type` (always `layers`) + list of "diff ids"
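With a locally pulled image, the engine exposes most of this configuration; a small sketch, assuming `jq` is installed:
```bash
docker pull alpine
# Cmd, Env, WorkingDir... come from the image's config blob
docker image inspect alpine --format '{{json .Config}}' | jq .
# the rootfs section lists the diff ids of the (uncompressed) layers
docker image inspect alpine --format '{{json .RootFS}}' | jq .
```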
---
class: extra-details
## Layers vs layers
- The image configuration contains digests of *uncompressed layers*
- The image manifest contains digests of *compressed layers*
(layer blobs in the registry can be tar, tar+gzip, tar+zstd)
---
## Layer format
- Layer = completely normal tar archive
- When a file is added or modified, it is added to the archive
(note: trivial changes, e.g. permissions, require re-adding the whole file!)
- When a file is deleted, a *whiteout* file is created
e.g. `rm hello.txt` results in a file named `.wh.hello.txt`
- Files starting with `.wh.` are forbidden in containers
- There is a special file, `.wh..wh..opq`, which means "remove all siblings"
(optimization to completely empty a directory)
- See [layer specification](https://github.com/opencontainers/image-spec/blob/main/layer.md) for details
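To see a whiteout with your own eyes, one approach (a sketch; the exact layout of `docker save` output varies between engine versions) is to build an image that deletes a file, export it, and grep the layer archives:
```bash
mkdir whiteout-demo && cd whiteout-demo
cat > Dockerfile <<'EOF'
FROM alpine
RUN touch /hello.txt
RUN rm /hello.txt
EOF
docker build -t whiteout-demo .
docker save whiteout-demo -o demo.tar
mkdir extracted && tar -C extracted -xf demo.tar
# each layer blob is itself a tar archive; look for .wh. entries
for blob in $(find extracted -type f); do
  tar -tf "$blob" 2>/dev/null | grep '\.wh\.' && echo "  ...found in $blob"
done
```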
---
class: extra-details
## Origin of layer format
- The initial storage driver for Docker was AUFS
- AUFS is out-of-tree but Debian and Ubuntu included it
(they used it for live CD / live USB boot)
- It meant that Docker could work out of the box on these distros
- Later, Docker added support for other systems
(devicemapper thin provisioning, btrfs, overlay...)
- Today, overlay is the best compromise for most use-cases
---
## Inspecting images
- `skopeo` can copy images between different places
(registries, Docker Engine, local storage as used by podman...)
- Example:
```bash
skopeo copy docker://alpine oci:/tmp/alpine.oci
```
- The image manifest will be in `/tmp/alpine.oci/index.json`
- Blobs (image configuration and layers) will be in `/tmp/alpine.oci/blobs/sha256`
- Note: as of version 1.20, `skopeo` doesn't handle extensions like stargz yet
(copying stargz images won't transfer the special index blobs)
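Going back to the `skopeo copy` example above, a couple of `jq` one-liners to walk from the index to the manifest and config (shown for the first manifest only):
```bash
jq . /tmp/alpine.oci/index.json
MANIFEST=$(jq -r '.manifests[0].digest' /tmp/alpine.oci/index.json | cut -d: -f2)
jq . /tmp/alpine.oci/blobs/sha256/$MANIFEST
CONFIG=$(jq -r '.config.digest' /tmp/alpine.oci/blobs/sha256/$MANIFEST | cut -d: -f2)
jq . /tmp/alpine.oci/blobs/sha256/$CONFIG
```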
---
## Layer surgery
Here is an example of how to manually edit an image.
https://github.com/jpetazzo/layeremove
It removes a specific layer from an image.
Note: it would be better to use a buildkit cache mount instead.
(This is just an educative example!)
---
## Stargz
- [Stargz] = Seekable Tar Gz, or "stargazer"
- Goal: start a container *before* its image has been fully downloaded
- Particularly useful for huge images that take minutes to download
- Also known as "streamable images" or "lazy loading"
- Alternative: [SOCI]
[stargz]: https://github.com/containerd/stargz-snapshotter
[SOCI]: https://github.com/awslabs/soci-snapshotter
---
## Stargz architecture
- Combination of:
- a backward-compatible extension to the OCI image format
- a containerd *snapshotter*
(=containerd component responsible for managing container and image storage)
- tooling to create, convert, optimize images
- Installation requires:
- running the snapshotter daemon
- configuring containerd
- building new images or converting the existing ones
---
## Stargz principle
- Normal image layer = tar.gz = gzip(tar(file1, file2, ...))
- Can't access fileN without uncompressing everything before it
- Seekable Tar Gz = gzip(tar(file1)) + gzip(tar(file2)) + ... + index
(big files can also be chunked)
- Can access individual files
(and even individual chunks, if needed)
- Downside: lower compression ratio
(less compression context; extra gzip headers)
---
## Stargz format
- The index mentioned above is stored in separate registry blobs
(one index for each layer)
- The digest of the index blobs is stored in annotations in normal OCI images
- Fully compatible with existing registries
- Existing container engines will load images transparently
(without leveraging stargz capabilities)
---
## Stargz limitations
- Tools like `skopeo` will ignore index blobs
(=copying images across registries will discard stargz capabilities)
- Indexes need to be downloaded before container can be started
(=still significant start time when there are many files in images)
- Significant latency when accessing a file lazily
(need to hit the registry, typically with a range header, uncompress file)
- Images can be optimized to pre-load important files

File diff suppressed because it is too large

View File

@@ -0,0 +1,75 @@
# Rootless Networking
The "classic" approach for container networking is `veth` + bridge.
Pros:
- good performance
- easy to manage and understand
- flexible (possibility to use multiple, isolated bridges)
Cons:
- requires root access on the host to set up networking
---
## Rootless options
- Locked down helpers
- daemon, scripts started through sudo...
- used by some desktop virtualization platforms
- still requires root access at some point
- Userland networking stacks
- true solution that does not require root privileges
- lower performance
---
## Userland stacks
- [SLiRP](https://en.wikipedia.org/wiki/Slirp)
*the OG project that inspired the other ones!*
- [VPNKit](https://github.com/moby/vpnkit)
*introduced by Docker Desktop to play nice with enterprise VPNs*
- [slirp4netns](https://github.com/rootless-containers/slirp4netns)
*slirp adapted for network namespaces, and therefore, containers; better performance*
- [passt and pasta](https://passt.top/)
*more modern approach; better support for inbound traffic; IPv6...)*
---
## Passt/Pasta
- No dependencies
- NAT (like slirp4netns) or no-NAT (for e.g. KubeVirt)
- Can handle inbound traffic dynamically
- No dynamic memory allocation
- Good security posture
- IPv6 support
- Reasonable performance
---
## Demo?
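One possible minimal demo, assuming a recent Podman (pasta is the default backend in Podman 5.x; slirp4netns in older versions):
```bash
# same container, two different userland network stacks
podman run --rm --network=pasta alpine wget -qO- http://example.com >/dev/null && echo pasta ok
podman run --rm --network=slirp4netns alpine wget -qO- http://example.com >/dev/null && echo slirp4netns ok
```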

View File

@@ -0,0 +1,162 @@
# Security models
In this section, we want to address a few security-related questions:
- What permissions do we need to run containers or a container engine?
- Can we use containers to escalate permissions?
- Can we break out of a container (move from container to host)?
- Is it safe to run untrusted code in containers?
- What about Kubernetes?
---
## Running Docker, containerd, podman...
- In the early days, running containers required root permissions
(to set up namespaces, cgroups, networking, mount filesystems...)
- Eventually, new kernel features were developed to allow "rootless" operation
(user namespaces and associated tweaks)
- Rootless requires a little bit of additional setup on the system (e.g. subuid)
(although this is increasingly often automated in modern distros)
- Docker runs as root by default; Podman runs rootless by default
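The subuid/subgid setup mentioned above typically looks like this (values are illustrative; many distros create these entries automatically):
```bash
grep "^$USER:" /etc/subuid /etc/subgid
# /etc/subuid:alice:100000:65536
# /etc/subgid:alice:100000:65536
```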
---
## Advantages of rootless
- Containers can run without any intervention from root
(no package install, no daemon running as root...)
- Containerized processes run with non-privileged UID
- Container escape doesn't automatically result in full host compromise
- Can isolate workloads by using different UID
---
## Downsides of rootless
- *Relatively* newer (rootless Docker was introduced in 2019)
- many quirks/issues/limitations in the initial implementations
- kernel features and other mechanisms were introduced over time
- they're not always very well documented
- I/O performance (disk, network) is typically lower
(due to using special mechanisms instead of more direct access)
- Rootless and rootful engines must use different image storage
(due to UID mapping)
---
## Why not rootless everywhere?
- Not very useful on clusters
- users shouldn't log into cluster nodes
- questionable security improvement
- lower I/O performance
- Not very useful with Docker Desktop / Podman Desktop
- container workloads are already inside a VM
- could arguably provide a layer of inter-workload isolation
- would require new APIs and concepts
---
## Permission escalation
- Access to the Docker socket = root access to the machine
```bash
docker run --privileged -v /:/hostfs -ti alpine
```
- That's why by default, the Docker socket access is locked down
(only accessible by `root` and group `docker`)
- If user `alice` has access to the Docker socket:
*compromising user `alice` leads to whole host compromise!*
- Doesn't fundamentally change the threat model
(if `alice` gets compromised in the first place, we're in trouble!)
- Enables new threats (persistence, kernel access...)
---
## Avoiding the problem
- Rootless containers
- Container VM (Docker Desktop, Podman Desktop, Orbstack...)
- Unfortunately: no fine-grained access to the Docker API
(no way to e.g. disable privileged containers, volume mounts...)
---
## Escaping containers
- Very easy with some features
(privileged containers, volume mounts, device access)
- Otherwise impossible in theory
(but of course, vulnerabilities do exist!)
- **Be careful with scripts invoking `docker run`, or Compose files!**
---
## Untrusted code
- Should be safe as long as we're not enabling dangerous features
(privileged containers, volume mounts, device access, capabilities...)
- Remember that by default, containers can make network calls
(but see: `--net none` and also `docker network create --internal`)
- And of course, again: vulnerabilities do exist!
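A quick sanity check of the `--net none` behavior mentioned above (a sketch; the ping target is arbitrary):
```bash
# with the default bridge network, this succeeds;
# with --net none, the container has no external connectivity
docker run --rm alpine ping -c1 -W2 1.1.1.1
docker run --rm --net none alpine ping -c1 -W2 1.1.1.1 || echo "no network, as expected"
```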
---
## What about Kubernetes?
- Ability to run arbitrary pods = dangerous
- But there are multiple safety mechanisms available:
- Pod Security Settings (limit "dangerous" features)
- RBAC (control who can do what)
- webhooks and policy engines for even finer grained control

View File

@@ -1,153 +0,0 @@
class: title
# Our training environment
![SSH terminal](images/title-our-training-environment.jpg)
---
## Our training environment
- If you are attending a tutorial or workshop:
- a VM has been provisioned for each student
- If you are doing or re-doing this course on your own, you can:
- install Docker locally (as explained in the chapter "Installing Docker")
- install Docker on e.g. a cloud VM
- use https://www.play-with-docker.com/ to instantly get a training environment
---
## Our Docker VM
*This section assumes that you are following this course as part of
a tutorial, training or workshop, where each student is given an
individual Docker VM.*
- The VM is created just before the training.
- It will stay up during the whole training.
- It will be destroyed shortly after the training.
- It comes pre-loaded with Docker and some other useful tools.
---
## What *is* Docker?
- "Installing Docker" really means "Installing the Docker Engine and CLI".
- The Docker Engine is a daemon (a service running in the background).
- This daemon manages containers, the same way that a hypervisor manages VMs.
- We interact with the Docker Engine by using the Docker CLI.
- The Docker CLI and the Docker Engine communicate through an API.
- There are many other programs and client libraries which use that API.
---
## Why don't we run Docker locally?
- We are going to download container images and distribution packages.
- This could put a bit of stress on the local WiFi and slow us down.
- Instead, we use a remote VM that has a good connectivity
- In some rare cases, installing Docker locally is challenging:
- no administrator/root access (computer managed by strict corp IT)
- 32-bit CPU or OS
- old OS version (e.g. CentOS 6, OSX pre-Yosemite, Windows 7)
- It's better to spend time learning containers than fiddling with the installer!
---
## Connecting to your Virtual Machine
You need an SSH client.
* On OS X, Linux, and other UNIX systems, just use `ssh`:
```bash
$ ssh <login>@<ip-address>
```
* On Windows, if you don't have an SSH client, you can download:
* Putty (www.putty.org)
* Git BASH (https://git-for-windows.github.io/)
* MobaXterm (https://mobaxterm.mobatek.net/)
---
class: in-person
## `tailhist`
The shell history of the instructor is available online in real time.
Note the IP address of the instructor's virtual machine (A.B.C.D).
Open http://A.B.C.D:1088 in your browser and you should see the history.
The history is updated in real time (using a WebSocket connection).
It should be green when the WebSocket is connected.
If it turns red, reloading the page should fix it.
---
## Checking your Virtual Machine
Once logged in, make sure that you can run a basic Docker command:
.small[
```bash
$ docker version
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:10:06 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.03.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:08:35 2018
OS/Arch: linux/amd64
Experimental: false
```
]
If this doesn't work, raise your hand so that an instructor can assist you!
???
:EN:Container concepts
:FR:Premier contact avec les conteneurs
:EN:- What's a container engine?
:FR:- Qu'est-ce qu'un *container engine* ?

View File

@@ -1,39 +0,0 @@
## A brief introduction
- This was initially written to support in-person, instructor-led workshops and tutorials
- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://@@GITREPO@@/graphs/contributors)
- You can also follow along on your own, at your own pace
- We included as much information as possible in these slides
- We recommend having a mentor to help you ...
- ... Or be comfortable spending some time reading the Docker
[documentation](https://docs.docker.com/) ...
- ... And looking for answers in the [Docker forums](https://forums.docker.com),
[StackOverflow](http://stackoverflow.com/questions/tagged/docker),
and other outlets
---
class: self-paced
## Hands on, you shall practice
- Nobody ever became a Jedi by spending their lives reading Wookiepedia
- Likewise, it will take more than merely *reading* these slides
to make you an expert
- These slides include *tons* of demos, exercises, and examples
- They assume that you have access to a machine running Docker
- If you are attending a workshop or tutorial:
<br/>you will be given specific instructions to access a cloud VM
- If you are doing this on your own:
<br/>we will tell you how to install Docker or access a Docker environment

View File

@@ -0,0 +1,19 @@
## Running your own lab environments
- If you are doing or re-doing this course on your own, you can:
- install [Docker Desktop][docker-desktop] or [Podman Desktop][podman-desktop]
<br/>(available for Linux, Mac, Windows; provides a nice GUI)
- install [Docker CE][docker-ce] or [Podman][podman]
<br/>(for intermediate/advanced users who prefer the CLI)
- try platforms like [Play With Docker][pwd] or [KodeKloud]
<br/>(if you can't/won't install anything locally)
[docker-desktop]: https://docs.docker.com/desktop/
[podman-desktop]: https://podman-desktop.io/downloads
[docker-ce]: https://docs.docker.com/engine/install/
[podman]: https://podman.io/docs/installation#installing-on-linux
[pwd]: https://labs.play-with-docker.com/
[KodeKloud]: https://kodekloud.com/free-labs/docker/

View File

@@ -0,0 +1,35 @@
## Our training environment
- If you are attending a live class, a VM has been provisioned for you!
- Each student gets an individual VM.
- The VM is created just before the training.
- It will stay up during the whole training.
- It will be destroyed shortly after the training.
- It comes pre-loaded with Docker and some other useful tools.
---
## Can we run Docker locally?
- If you already have Docker (or Podman) installed, you can use it!
- The VMs can be convenient if:
- you can't/won't install Docker or Podman on your machine,
- your local internet connection is slow.
- We're going to download many container images and distribution packages.
- If the class takes place in a venue with slow WiFi, this can slow us down.
- The remote VMs have good connectivity and downloads will be fast there.
(Initially, we provided VMs to make sure that nobody would waste time
with installers, or because they didn't have the right permissions
on their machine, etc.)

View File

@@ -0,0 +1,159 @@
# Exercise — Build a container from scratch
Our goal will be to execute a container running a simple web server.
(Example: NGINX, or https://github.com/jpetazzo/color.)
We want the web server to be isolated:
- it shouldn't be able to access the outside world,
- but we should be able to connect to it from our machine.
Make sure to automate / script things as much as possible!
---
## Steps
1. Prepare the filesystem
2. Run it with chroot
3. Isolation with namespaces
4. Network configuration
5. Cgroups
6. Non-root
---
## Prepare the filesystem
- Obtain a root filesystem with one of the following methods:
- download an Alpine mini root fs
- export an Alpine or NGINX container image with Docker
- download and convert a container image with Skopeo
- make it from scratch with busybox + a static [jpetazzo/color](https://github.com/jpetazzo/color)
- ...anything you want! (Nix, anyone?)
- Enter the root filesystem with `chroot`
---
## Help, network does not work!
- Check that you have external connectivity from the chroot:
```bash
ping 1.1.1.1
```
(that *should* work; if it doesn't, we have a serious problem!)
- Check that DNS resolution works:
```bash
ping enix.io
```
- If you're having a DNS resolution error, configure DNS in the container:
```bash
echo nameserver 1.1.1.1 > /etc/resolv.conf
```
---
## Running a web server
Here are a few possibilities...
- Install the NGINX package and run it with `nginx`
(note: by default it will start in the background)
- Run NGINX in the foreground with `nginx -g "daemon off;"`
- Install the package Caddy and run `caddy file-server -ab`
(it will remain in the foreground and show logs; **RECOMMENDED**)
- Download and/or build https://github.com/jpetazzo/color
(if you're familiar with the Go ecosystem!)
---
## Run with chroot
- Start the web server from within the chroot
- Confirm that you can connect to it from outside
- Write a script to start our "proto-container"
---
## Isolation with namespaces
- Now, enter the root filesystem with `unshare`
(enable all the namespaces you want; maybe not `user` yet, though!)
- Start the web server
(you might need to configure at least the loopback network interface!)
- Confirm that we *cannot* connect from outside
- Update our start script to use unshare
- Automate network configuration
(pay attention to the fact that network tools *may not* exist in the container)
---
## Network configuration
- While our "container" is running, create a `veth` pair
- Move one `veth` to the container
- Assign addresses to both `veth`
- Confirm that we can connect to the web server from outside
(using the address assigned to the container's `veth`)
- Update our start script to automate the setup of the `veth` pair
- Bonus points: update the script so that it can start *multiple* containers
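A possible sketch of the veth setup, assuming the container's main process has PID `$CPID` (interface names and addresses are arbitrary):
```bash
sudo ip link add veth-host type veth peer name veth-ctr
sudo ip link set veth-ctr netns $CPID
sudo ip addr add 10.99.0.1/24 dev veth-host
sudo ip link set veth-host up
sudo nsenter -t $CPID -n ip addr add 10.99.0.2/24 dev veth-ctr
sudo nsenter -t $CPID -n ip link set veth-ctr up
sudo nsenter -t $CPID -n ip link set lo up
curl http://10.99.0.2  # assuming the web server listens on port 80
```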
---
## Cgroups
- Create a cgroup for our container
- Move the container to the cgroup
- Set a very low CPU limit and confirm that it slows down the server
(but doesn't affect the rest of the system)
- Update the script to automate this
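A cgroup v2 sketch, assuming a unified hierarchy mounted at `/sys/fs/cgroup` (with the cpu controller enabled) and the container's main process in `$CPID`:
```bash
sudo mkdir /sys/fs/cgroup/protocontainer
# allow roughly 10% of one CPU (quota / period, in microseconds)
echo "10000 100000" | sudo tee /sys/fs/cgroup/protocontainer/cpu.max
# move the container's main process into the cgroup (future children will follow)
echo $CPID | sudo tee /sys/fs/cgroup/protocontainer/cgroup.procs
```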
---
## Non-root
- Switch to a non-privileged user when starting the container
- Adjust the web server configuration so that it starts
(non-privileged users cannot bind to ports below 1024)

View File

@@ -0,0 +1,35 @@
# Exercise — Images from scratch
There are two parts in this exercise:
1. Obtaining and unpacking an image from scratch
2. Adding overlay mounts to the "container from scratch" lab
---
## Pulling from scratch, easy mode
- Download manifest and layers with `skopeo`
- Parse manifest and configuration with e.g. `jq`
- Uncompress the layers in a directory
- Check that the result works (using `chroot`)
---
## Pulling from scratch, medium mode
- Don't use `skopeo`
- Hints: if pulling from the Docker Hub, you'll need a token
(there are examples in Docker's documentation)
---
## Pulling from scratch, hard mode
- Handle whiteouts!

126
slides/flux/add-cluster.md Normal file
View File

@@ -0,0 +1,126 @@
## Flux install
We'll install `Flux`.
And replay the whole scenario a second time.
Let's face it: we don't have that much time. 😅
Since all our installation and configuration is `GitOps`-based, we might just rely on copy-paste and configuration-as-code…
Maybe.
Let's copy the 📂 `./clusters/CLOUDY` folder and rename it 📂 `./clusters/METAL`.
---
### Modifying Flux config 📄 files
- In 📄 file `./clusters/METAL/flux-system/gotk-sync.yaml`
<br/>change the `Kustomization` value `spec.path: ./clusters/METAL`
- ⚠️ We'll have to adapt the `Flux` _CLI_ command line
- And that's pretty much it!
- We'll see if anything goes wrong on that new cluster
---
### Connecting to our dedicated `Github` repo to host Flux config
.lab[
- let's replace `GITHUB_TOKEN` and `GITHUB_REPO` values
- don't forget to change the path to `clusters/METAL`
```bash
k8s@shpod:~$ export GITHUB_TOKEN="my-token" && \
export GITHUB_USER="container-training-fleet" && \
export GITHUB_REPO="fleet-config-using-flux-XXXXX"
k8s@shpod:~$ flux bootstrap github \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--team=OPS \
--team=ROCKY --team=MOVY \
--path=clusters/METAL
```
]
---
class: pic
![Running Mario](images/running-mario.gif)
---
### Flux deployed our complete stack
Everything seems to be here but…
- one database is in `Pending` state
- our `ingresses` don't work well
```bash
k8s@shpod ~$ curl --header 'Host: rocky.test.enixdomain.com' http://${myIngressControllerSvcIP}
curl: (52) Empty reply from server
```
---
### Fixing the Ingress
The current `ingress-nginx` configuration relies on specific annotations used by Scaleway to bind an _IaaS_ load balancer to the `ingress-controller`.
We don't have anything like that here. 😕
- We could bind our `ingress-controller` to a `NodePort`.
The `ingress-nginx` install manifests provide such a configuration here:
<br/>https://github.com/kubernetes/ingress-nginx/tree/release-1.14/deploy/static/provider/baremetal
- In the 📄file `./clusters/METAL/ingress-nginx/sync.yaml`,
<br/>change the `Kustomization` value `spec.path: ./deploy/static/provider/baremetal`
---
class: pic
![Running Mario](images/running-mario.gif)
---
### Troubleshooting the database
One of our `db-0` pod is in `Pending` state.
```bash
k8s@shpod ~$ k get pods db-0 -n *-test -oyaml
()
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2025-06-11T11:15:42Z"
message: '0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims.
preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: Burstable
```
---
### Troubleshooting the PersistentVolumeClaims
```bash
k8s@shpod ~$ k get pvc postgresql-data-db-0 -n *-test -o yaml
()
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 9s (x182 over 45m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
```
No `storage class` is available on this cluster.
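We can confirm this (output shown is the typical result when no StorageClass exists):
```bash
k8s@shpod ~$ k get storageclass
No resources found
```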
We didn't have this problem on our managed cluster, since a default storage class was configured and then associated with our `PersistentVolumeClaim`.
Why is there no problem with the other database?

View File

@@ -0,0 +1,417 @@
# R01- Configuring **_🎸ROCKY_** deployment with Flux
The **_⚙OPS_** team manages 2 distinct envs: **_⚗TEST_** and _**🚜PROD**_
Thanks to _Kustomize_
1. it creates a **_base_** common config
2. this common config is overridden by a **_⚗TEST_** _tenant_-specific configuration
3. the same goes for a _**🚜PROD**_-specific configuration
> 💡 This seems complex, but no worries: Flux's CLI handles most of it.
---
## Creating the **_🎸ROCKY_**-dedicated _tenant_ in **_⚗TEST_** env
- Using the `flux` _CLI_, we create the file configuring the **_🎸ROCKY_** team's dedicated _tenant_
- … this file lives in the `base` common configuration shared by both envs
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p ./tenants/base/rocky && \
flux create tenant rocky \
--with-namespace=rocky-test \
--cluster-role=rocky-full-access \
--export > ./tenants/base/rocky/rbac.yaml
```
]
---
class: extra-details
### 📂 ./tenants/base/rocky/rbac.yaml
Let's see our file…
3 resources are created: `Namespace`, `ServiceAccount`, and `ClusterRoleBinding`
`Flux` **impersonates** this `ServiceAccount` when it applies the resources found in this _tenant_'s dedicated source(s)
- By default, the `ServiceAccount` is bound to the `cluster-admin` `ClusterRole`
- The team maintaining the sourced `Github` repository is almighty at cluster scope
Not a very isolated _tenant_! 😕
That's why the **_⚙OPS_** team enforces specific `ClusterRoles` with restricted permissions
Let's create these permissions!
---
## _namespace_ isolation for **_🎸ROCKY_**
.lab[
- Here are the restricted permissions to use in the `rocky-test` `Namespace`
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cp ~/container.training/k8s/M6-rocky-cluster-role.yaml ./tenants/base/rocky/
```
]
> 💡 Note that some resources are managed at cluster scope (like `PersistentVolumes`).
> We need specific permissions, then…
---
## Creating `Github` source in Flux for **_🎸ROCKY_** app repository
A specific _branch_ of the `Github` repository is monitored by the `Flux` source
.lab[
- ⚠️ you may want to change the **repository URL** to that of your own clone
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create source git rocky-app \
--namespace=rocky-test \
--url=https://github.com/Musk8teers/container.training-spring-music/ \
--branch=rocky --export > ./tenants/base/rocky/sync.yaml
```
]
---
## Creating `kustomization` in Flux for **_🎸ROCKY_** app repository
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization rocky \
--namespace=rocky-test \
--service-account=rocky \
--source=GitRepository/rocky-app \
--path="./k8s/" --export >> ./tenants/base/rocky/sync.yaml
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cd ./tenants/base/rocky/ && \
kustomize create --autodetect && \
cd -
```
]
---
class: extra-details
### 📂 Flux config files
Let's review our `Flux` configuration files
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cat ./tenants/base/rocky/sync.yaml && \
cat ./tenants/base/rocky/kustomization.yaml
```
]
---
## Adding a kustomize patch for **_⚗TEST_** cluster deployment
💡 Remember the DRY strategy!
- The `Flux` tenant-dedicated configuration is looking for this file: `./tenants/test/rocky/kustomization.yaml`
- It has been configured here: `clusters/CLOUDY/tenants.yaml`
- All the files we just created are located in `./tenants/base/rocky`
- So we have to create a specific kustomization in the right location
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p ./tenants/test/rocky && \
cp ~/container.training/k8s/M6-rocky-test-patch.yaml ./tenants/test/rocky/ && \
cp ~/container.training/k8s/M6-rocky-test-kustomization.yaml ./tenants/test/rocky/kustomization.yaml
```
---
### Synchronizing Flux config with its Github repo
Locally, our `Flux` config repo is ready
The **_⚙OPS_** team has to push it to `Github` for `Flux` controllers to watch and catch it!
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
git add . && \
git commit -m':wrench: :construction_worker: add ROCKY tenant configuration' && \
git push
```
]
---
class: pic
![Running Mario](images/running-mario.gif)
---
class: pic
![rocky config files](images/flux/R01-config-files.png)
---
class: extra-details
### Flux resources for ROCKY tenant 1/2
.lab[
```bash
k8s@shpod:~$ flux get all -A
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system gitrepository/flux-system main@sha1:8ffd72cf False
True stored artifact for revision 'main@sha1:8ffd72cf'
rocky-test gitrepository/rocky-app rocky@sha1:ffe9f3fe False
True stored artifact for revision 'rocky@sha1:ffe9f3fe'
()
```
]
---
class: extra-details
### Flux resources for ROCKY _tenant_ 2/2
.lab[
```bash
k8s@shpod:~$ flux get all -A
()
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system kustomization/flux-system main@sha1:8ffd72cf False
True Applied revision: main@sha1:8ffd72cf
flux-system kustomization/tenant-prod False
False kustomization path not found: stat /tmp/kustomization-1164119282/tenants/prod: no such file or directory
flux-system kustomization/tenant-test main@sha1:8ffd72cf False
True Applied revision: main@sha1:8ffd72cf
rocky-test kustomization/rocky False
False StatefulSet/db dry-run failed (Forbidden): statefulsets.apps "db" is forbidden: User "system:serviceaccount:rocky-test:rocky" cannot patch resource "statefulsets" in API group "apps" at the cluster scope
```
]
And here is our 2nd Flux error! 😅
---
class: extra-details
### Flux Kustomization, mutability, …
🔍 Notice that none of the expected resources is created:
the whole kustomization is rejected, even though the `StatefulSet` is the only resource that fails!
🔍 A Flux Kustomization uses a dry run to template the resources and then apply patches onto them
That works, but some resources are not fully mutable, such as `StatefulSets`
We have to apply the change without patching the existing resource.
🔍 Simply add `spec.targetNamespace: rocky-test` to the `Kustomization` named `rocky` (see the sketch below)
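For reference, a minimal sketch of what the patched `Kustomization` could look like (only `targetNamespace` is the new line; the other values come from the earlier `flux create kustomization` command and may differ in your repo):
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: rocky
  namespace: rocky-test
spec:
  interval: 1m0s                 # assumption: whatever interval your flux create used
  path: ./k8s/
  prune: false                   # assumption: no --prune flag was passed earlier
  serviceAccountName: rocky
  sourceRef:
    kind: GitRepository
    name: rocky-app
  targetNamespace: rocky-test    # the line we add to avoid patching the StatefulSet
```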
---
class: extra-details
## And then it's deployed 1/2
You should see the following resources in the `rocky-test` namespace
.lab[
```bash
k8s@shpod-578d64468-tp7r2 ~/$ k get pods,svc,deployments -n rocky-test
NAME READY STATUS RESTARTS AGE
pod/db-0 1/1 Running 0 47s
pod/web-6c677bf97f-c7pkv 0/1 Running 1 (22s ago) 47s
pod/web-6c677bf97f-p7b4r 0/1 Running 1 (19s ago) 47s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/db ClusterIP 10.32.6.128 <none> 5432/TCP 48s
service/web ClusterIP 10.32.2.202 <none> 80/TCP 48s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web 0/2 2 0 47s
```
]
---
class: extra-details
## And then it's deployed 2/2
You should see the following resources in the `rocky-test` namespace
.lab[
```bash
k8s@shpod-578d64468-tp7r2 ~/$ k get statefulsets,pvc,pv -n rocky-test
NAME READY AGE
statefulset.apps/db 1/1 47s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/postgresql-data-db-0 Bound pvc-c1963a2b-4fc9-4c74-9c5a-b0870b23e59a 1Gi RWO sbs-default <unset> 47s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
persistentvolume/postgresql-data 1Gi RWO,RWX Retain Available <unset> 47s
persistentvolume/pvc-150fcef5-ebba-458e-951f-68a7e214c635 1G RWO Delete Bound shpod/shpod sbs-default <unset> 4h46m
persistentvolume/pvc-c1963a2b-4fc9-4c74-9c5a-b0870b23e59a 1Gi RWO Delete Bound rocky-test/postgresql-data-db-0 sbs-default <unset> 47s
```
]
---
class: extra-details
### PersistentVolumes are using a default `StorageClass`
💡 This managed cluster comes with custom `StorageClasses` leveraging Cloud _IaaS_ capabilities (e.g. block devices)
![PersistentVolumes and StorageClasses](images/flux/persistentvolumes.png)
- a default `StorageClass` is applied if none is specified (like here)
- for **_🏭PROD_** purposes, the ops team might enforce a more performant `StorageClass`
- on a bare-metal cluster, the **_🏭PROD_** team has to configure and provide `StorageClasses` on its own
---
class: pic
![Flux configuration waterfall](images/flux/flux-config-dependencies.png)
---
## Upgrading ROCKY app
The Git source named `rocky-app` is pointing at
- a Github repository named [Musk8teers/container.training-spring-music](https://github.com/Musk8teers/container.training-spring-music/)
- on its branch named `rocky`
This branch deploys v1.0.0 of the _Web_ app:
`spec.template.spec.containers.image: ghcr.io/musk8teers/container.training-spring-music:1.0.0`
What happens if the **_🎸ROCKY_** team upgrades its branch to deploy `v1.0.1` of the _Web_ app?
---
## _tenant_ **_🏭PROD_**
💡 The **_🏭PROD_** _tenant_ is still waiting for its `Flux` configuration, but don't worry about it right now.
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT
</pre>

# M01- Configuring **_🎬MOVY_** deployment with Flux
The **_🎸ROCKY_** _tenant_ is now fully usable in the **_⚗TEST_** env; let's do the same for another _dev_ team: **_🎬MOVY_**
😈 We could do it by using the `Flux` _CLI_,
but let's see if we can succeed by just adding manifests to our `Flux` configuration repository.
---
class: pic
![Flux configuration waterfall](images/flux/flux-config-dependencies.png)
---
## Impact study
In our `Flux` configuration repository:
- Creation of the following 📂 folders: `./tenants/[base|test]/movy`
- Modification of the following 📄 file: `./clusters/CLOUDY/tenants.yaml`?
- Well, we don't need to: the watched path includes the whole `./tenants/test/*` folder
In the app repository:
- Creation of a `movy` branch to deploy another version of the app dedicated to movie soundtracks
---
### Creation of the 📂 folders
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cp -pr tenants/base/rocky tenants/base/movy
cp -pr tenants/test/rocky tenants/test/movy
```
]
---
### Modification of tenants/[base|test]/movy/* 📄 files
- For the 📄`M6-rocky-*.yaml` files, change the file names…
- and update the 📄`kustomization.yaml` file accordingly
- In every file, replace every `rocky` entry with `movy` (see the sketch below)
- In 📄`sync.yaml`, be aware of which repository and which branch you want `Flux` to watch for the **_🎬MOVY_** app deployment.
- for this demo, let's assume we create a `movy` branch
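Here is one possible way to script these renames and substitutions (a hedged sketch, assuming a Bash shell with GNU `sed` as in the `shpod` container; adapt the paths if your layout differs):
```bash
## rename the copied M6-rocky-*.yaml files to M6-movy-*.yaml
for f in tenants/{base,test}/movy/M6-rocky-*.yaml; do
  mv "$f" "${f/rocky/movy}"
done
## replace every remaining 'rocky' occurrence with 'movy' in the movy folders
grep -rl rocky tenants/base/movy tenants/test/movy | xargs sed -i 's/rocky/movy/g'
```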
---
class: extra-details
### What about reusing rocky-cluster-roles?
💡 In 📄`M6-movy-cluster-role.yaml` and 📄`rbac.yaml`, we could have reused the already existing `ClusterRoles`: `rocky-full-access`, and `rocky-pv-access`
A `ClusterRole` is cluster-wide; it is not dedicated to a namespace.
- Its permissions are restricted to a specific namespace when it is bound to a `ServiceAccount` by a `RoleBinding`.
- Whereas a `ClusterRoleBinding` extends the permissions to the whole cluster scope.
But a _tenant_ is a **_tenant_**, and permissions might evolve separately for **_🎸ROCKY_** and **_🎬MOVY_**.
So [we got to keep'em separated](https://www.youtube.com/watch?v=GHUql3OC_uU).
---
### Let-su-go!
The **_⚙OPS_** team pushes this new tenant configuration to `Github` for the `Flux` controllers to watch and catch it!
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
git add . && \
git commit -m':wrench: :construction_worker: add MOVY tenant configuration' && \
git push
```
]
---
class: pic
![Running Mario](images/running-mario.gif)
---
class: extra-details
### Another Flux error?
.lab[
- It seems that our `movy` branch is not present in the app repository
```bash
k8s@shpod:~$ flux get kustomization -A
NAMESPACE NAME REVISION SUSPENDED MESSAGE
()
flux-system tenant-prod False False kustomization path not found: stat /tmp/kustomization-113582828/tenants/prod: no such file or directory
()
movy-test movy False False Source artifact not found, retrying in 30s
```
]
---
### Creating the `movy` branch
- Let's create this new `movy` branch from the `rocky` branch
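For instance, one possible way to do it from a workstation clone (a sketch; assuming you have push rights on the app repository, otherwise use your own fork):
```bash
## clone the app repository and create the movy branch from rocky
git clone https://github.com/Musk8teers/container.training-spring-music/
cd container.training-spring-music
git checkout rocky          # start from the existing rocky branch
git checkout -b movy        # create the new movy branch
git push -u origin movy     # publish it so Flux can fetch it
```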
.lab[
- You can force immediate reconciliation by typing this command:
```bash
k8s@shpod:~$ flux reconcile source git movy-app -n movy-test
```
]
---
class: pic
![Running Mario](images/running-mario.gif)
---
### New branch detected
You now have a second app responding on [http://movy.test.mybestdomain.com]
But as of now, it's just the same as the **_🎸ROCKY_** one.
We want a specific (pink-colored) version with a dataset full of movie soundtracks.
---
## New version of the **_🎬MOVY_** app
In our `movy` branch,
let's make 2 modifications to our `deployment.yaml` file:
- in `spec.template.spec.containers.image`, change the container image tag to `1.0.3`
- and… let's introduce some evil entropy by changing this line… 😈😈😈
```yaml
value: jdbc:postgresql://db/music
```
with this one
```yaml
value: jdbc:postgresql://db.rocky-test/music
```
And push the modifications…
---
class: pic
![MOVY app has an incorrect dataset](images/flux/incorrect-dataset-in-MOVY-app.png)
---
class: pic
![ROCKY app has an incorrect dataset](images/flux/incorrect-dataset-in-ROCKY-app.png)
---
### MOVY app is connected to ROCKY database
How evil have we been! 😈
We connected the **_🎬MOVY_** app to the **_🎸ROCKY_** database.
Even if our tenants are isolated in how they manage their Kubernetes resources…
the pod network is still a full mesh, and any connection is allowed.
> The **_⚙OPS_** team should fix this!
---
class: extra-details
## Adding NetworkPolicies to **_🎸ROCKY_** and **_🎬MOVY_** namespaces
`Network policies` may be seen as the firewall feature of the pod network.
They govern ingress and egress network connections for a selected subset of pods (see the sketch below).
Please, refer to the [`Network policies` chapter in the High Five M4 module](./4.yml.html#toc-network-policies)
- In our case, we just add the file `~/container.training/k8s/M6-network-policies.yaml`
</br>in our `./tenants/base/movy` folder
- without forgetting to update our `kustomization.yaml` file
- and without forgetting to commit 😁
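For illustration, a minimal sketch of the kind of policy such a file could contain (the actual `M6-network-policies.yaml` may differ; the `app: db` label is an assumption about how the database pods are labeled):
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-same-namespace-only
spec:
  podSelector:
    matchLabels:
      app: db               # assumption: label carried by the database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # only allow traffic from pods in the same namespace
```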
---
class: pic
![Running Mario](images/running-mario.gif)
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'k0s install on METAL cluster' tag:'K01'
commit id:'Flux config. for METAL cluster' tag:'K02'
branch METAL_TEST-PROD order:3
commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for OpenEBS' tag:'K03'
checkout METAL_TEST-PROD
merge OPS id:'openEBS on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Prometheus install'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout OPS
commit id:'Kyverno install'
commit id:'Kyverno rules'
checkout TEST-env
merge OPS type: HIGHLIGHT
</pre>

<!-- slides/flux/bootstrap.md -->
# T02- Creating the **_⚗TEST_** env on our **_☁CLOUDY_** cluster
Let's take a look at our **_☁CLOUDY_** cluster!
**_☁CLOUDY_** is a Kubernetes cluster created with [Scaleway Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/) managed service
This managed cluster comes preinstalled with specific features:
- Kubernetes dashboard
- specific _Storage Classes_ based on Scaleway _IaaS_ block storage offerings
- a `Cilium` _CNI_ stack already set up
---
## Accessing the managed Kubernetes cluster
To access our cluster, we'll connect via [`shpod`](https://github.com/jpetazzo/shpod)
.lab[
- If you already have `kubectl` on your desktop computer
```bash
kubectl -n shpod run shpod --image=jpetazzo/shpod
kubectl -n shpod exec -it shpod -- bash
```
- or directly via ssh
```bash
ssh -p myPort k8s@mySHPODSvcIpAddress
```
]
---
## Flux installation
Once `Flux` is installed,
the **_⚙OPS_** team exclusively operates its clusters by updating a code base in a `Github` repository
_GitOps_ and `Flux` enable the **_⚙OPS_** team to rely on the pattern that is a first-class citizen in the Kubernetes world:
- describe the **desired target state**
- and let the **automated convergence** happen
---
### Checking prerequisites
The `Flux` _CLI_ is available in our `shpod` pod
Before installation, we need to check that:
- `Flux` _CLI_ is correctly installed
- it can connect to the `API server`
- our versions of `Flux` and Kubernetes are compatible
.lab[
```bash
k8s@shpod:~$ flux --version
flux version 2.5.1
k8s@shpod:~$ flux check --pre
► checking prerequisites
✔ Kubernetes 1.32.3 >=1.30.0-0
✔ prerequisites checks passed
```
]
---
### Git repository for Flux configuration
The **_⚙OPS_** team uses the `Flux` _CLI_
- to create a `git` repository named `fleet-config-using-flux-XXXXX` (⚠ replace `XXXXX` with a personal suffix)
- in our `Github` organization named `container-training-fleet`
Prerequisites are:
- the `Flux` _CLI_ needs a `Github` personal access token (_PAT_)
- to create and/or access the `Github` repository
- to give permissions to existing teams in our `Github` organization
- the _PAT_ needs _CRUD_ permissions on our `Github` organization's repositories
- As the **_⚙OPS_** team, let's create a `Github` personal access token…
---
class: pic
![Generating a Github personal access token](images/flux/github-add-token.jpg)
---
### Creating dedicated `Github` repo to host Flux config
.lab[
- let's replace the `GITHUB_TOKEN` value by our _Personal Access Token_
- and the `GITHUB_REPO` value by our specific repository name
```bash
k8s@shpod:~$ export GITHUB_TOKEN="my-token" && \
export GITHUB_USER="container-training-fleet" && \
export GITHUB_REPO="fleet-config-using-flux-XXXXX"
k8s@shpod:~$ flux bootstrap github \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--team=OPS \
--team=ROCKY --team=MOVY \
--path=clusters/CLOUDY
```
]
---
class: extra-details
### Creating a personal dedicated `Github` repo
You don't need to rely on a `Github` organization: any personal `Github` repository is OK.
.lab[
- let's replace the `GITHUB_TOKEN` value by our _Personal Access Token_
- and the `GITHUB_REPO` value by our specific repository name
```bash
k8s@shpod:~$ export GITHUB_TOKEN="my-token" && \
export GITHUB_USER="lpiot" && \
export GITHUB_REPO="fleet-config-using-flux-XXXXX"
k8s@shpod:~$ flux bootstrap github \
--owner=${GITHUB_USER} \
--personal \
--repository=${GITHUB_REPO} \
--path=clusters/CLOUDY
```
]
---
class: extra-details
### Here is the result
```bash
✔ repository "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX" created
► reconciling repository permissions
✔ granted "maintain" permissions to "OPS"
✔ granted "maintain" permissions to "ROCKY"
✔ granted "maintain" permissions to "MOVY"
► reconciling repository permissions
✔ reconciled repository permissions
► cloning branch "main" from Git repository "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX.git"
✔ cloned repository
► generating component manifests
✔ generated component manifests
✔ committed component manifests to "main" ("7c97bdeb5b932040fd8d8a65fe1dc84c66664cbf")
► pushing component manifests to "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX.git"
✔ component manifests are up to date
► installing components in "flux-system" namespace
✔ installed components
✔ reconciled components
► determining if source secret "flux-system/flux-system" exists
► generating source secret
✔ public key: ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFqaT8B8SezU92qoE+bhnv9xONv9oIGuy7yVAznAZfyoWWEVkgP2dYDye5lMbgl6MorG/yjfkyo75ETieAE49/m9D2xvL4esnSx9zsOLdnfS9W99XSfFpC2n6soL+Exodw==
✔ configured deploy key "flux-system-main-flux-system-./clusters/CLOUDY" for "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX"
► applying source secret "flux-system/flux-system"
✔ reconciled source secret
► generating sync manifests
✔ generated sync manifests
✔ committed sync manifests to "main" ("11035e19cabd9fd2c7c94f6e93707f22d69a5ff2")
► pushing sync manifests to "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX.git"
► applying sync manifests
✔ reconciled sync configuration
◎ waiting for GitRepository "flux-system/flux-system" to be reconciled
✔ GitRepository reconciled successfully
◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
✔ Kustomization reconciled successfully
► confirming components are healthy
✔ helm-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ all components are healthy
```
---
### Flux configures Github repository access for teams
- `Flux` sets up permissions that allow teams within our organization to **access** the `Github` repository as maintainers
- Teams need to exist before `Flux` proceeds to this configuration
![Teams in Github](images/flux/github-teams.png)
---
### ⚠️ Disclaimer
- In this lab, adding these teams as maintainers was merely a demonstration of how `Flux` _CLI_ sets up permissions in Github
- But there is no need for dev teams to have access to this `Github` repository
- One advantage of _GitOps_ lies in its ability to easily set up 💪🏼 **Separation of concerns** by using multiple `Flux` sources
---
### The PAT is not needed anymore!
- During the install process, `Flux` creates an `ssh` key pair so that it is able to contribute to the `Github` repository.
```bash
► generating source secret
✔ public key: ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFqaT8B8SezU92qoE+bhnv9xONv9oIGuy7yVAznAZfyoWWEVkgP2dYDye5lMbgl6MorG/yjfkyo75ETieAE49/m9D2xvL4esnSx9zsOLdnfS9W99XSfFpC2n6soL+Exodw==
✔ configured deploy key "flux-system-main-flux-system-./clusters/CLOUDY" for "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX"
► applying source secret "flux-system/flux-system"
✔ reconciled source secret
```
- You can now delete the formerly created _Personal Access Token_: `Flux` won't use it anymore.
---
### 📂 Flux config files
`Flux` has been successfully installed onto our **_☁CLOUDY_** Kubernetes cluster!
Its configuration is managed through a _Gitops_ workflow sourced directly from our `Github` repository
Let's review the `Flux` configuration files we've created and pushed to the `Github` repository…
… as well as the corresponding components running in our Kubernetes cluster
![Flux config files](images/flux/flux-config-files.png)
---
class: pic
<!-- FIXME: wrong schema -->
![Flux architecture](images/flux/flux-controllers.png)
---
class: extra-details
### Flux resources 1/2
.lab[
```bash
k8s@shpod:~$ kubectl get all --namespace flux-system
NAME READY STATUS RESTARTS AGE
pod/helm-controller-b6767d66-h6qhk 1/1 Running 0 5m
pod/kustomize-controller-57c7ff5596-94rnd 1/1 Running 0 5m
pod/notification-controller-58ffd586f7-zxfvk 1/1 Running 0 5m
pod/source-controller-6ff87cb475-g6gn6 1/1 Running 0 5m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/notification-controller ClusterIP 10.104.139.156 <none> 80/TCP 5m1s
service/source-controller ClusterIP 10.106.120.137 <none> 80/TCP 5m
service/webhook-receiver ClusterIP 10.96.28.236 <none> 80/TCP 5m
()
```
]
---
class: extra-details
### Flux resources 2/2
.lab[
```bash
k8s@shpod:~$ kubectl get all --namespace flux-system
()
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/helm-controller 1/1 1 1 5m
deployment.apps/kustomize-controller 1/1 1 1 5m
deployment.apps/notification-controller 1/1 1 1 5m
deployment.apps/source-controller 1/1 1 1 5m
NAME DESIRED CURRENT READY AGE
replicaset.apps/helm-controller-b6767d66 1 1 1 5m
replicaset.apps/kustomize-controller-57c7ff5596 1 1 1 5m
replicaset.apps/notification-controller-58ffd586f7 1 1 1 5m
replicaset.apps/source-controller-6ff87cb475 1 1 1 5m
```
]
---
### Flux components
- the `source controller` monitors `Git` repositories and fetches their contents, so that other controllers can apply the Kubernetes resources they contain
- the `Helm controller` checks for new `Helm` _chart_ releases in `Helm` repositories and installs updates as needed
- custom resources (defined by `Flux` _CRDs_) store the `Flux` configuration within the Kubernetes control plane
---
class: extra-details
### Flux resources that have been created
.lab[
```bash
k8s@shpod:~$ flux get all --all-namespaces
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system gitrepository/flux-system main@sha1:d48291a8 False
True stored artifact for revision 'main@sha1:d48291a8'
NAMESPACE NAME REVISION SUSPENDED
READY MESSAGE
flux-system kustomization/flux-system main@sha1:d48291a8 False
True Applied revision: main@sha1:d48291a8
```
]
---
### Flux CLI
`Flux` Command-Line Interface fulfills 3 primary functions:
1. It installs and configures the first mandatory `Flux` resources in a _GitOps_ `git` repository
- ensuring proper access and permissions
2. It locally generates `YAML` files for the desired `Flux` resources, so that we just need to `git push` them
- _tenants_
- _sources_
- _kustomizations_, _Helm releases_, …
3. It requests the API server to manage `Flux`-related resources
- _operators_
- _CRDs_
- logs
---
class: extra-details
### Flux -- for more info
Please, refer to the [`Flux` chapter in the High Five M3 module](./3.yml.html#toc-helm-chart-format)
---
### Flux relies on Kustomize
The `Flux` component named `kustomize-controller` looks for `Kustomize` resources in the `Flux` code-based sources
1. `Kustomize` looks for the `YAML` manifests listed in the `kustomization.yaml` file
2. and aggregates, hydrates, and patches them according to the `kustomization` configuration
---
class: extra-details
### 2 different kustomization resources
⚠️ `Flux` manipulates 2 distinct resources named `Kustomization`, distinguished by their API group
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
```
describes how Kustomize (the _CLI_ tool) aggregates and transforms `YAML` manifests into a single set of `YAML`-described resources
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
```
describes where the `Flux` kustomize-controller looks for a `kustomization.yaml` file in a given `Flux` code-based source (see the sketch below)
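To make the distinction concrete, here is a minimal, hedged sketch of each (file names and values are illustrative, not taken from the training repo):
```yaml
## kustomization.yaml — consumed by the Kustomize CLI / kustomize-controller
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - rbac.yaml
  - sync.yaml
---
## a Flux Kustomization — tells the kustomize-controller where to find the above
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: example
  namespace: flux-system
spec:
  interval: 10m
  path: ./tenants/base/rocky    # folder containing a kustomization.yaml
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```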
---
class: extra-details
### Kustomize -- for more info
Please, refer to the [`Kustomize` chapter in the High Five M3 module](./3.yml.html#toc-kustomize)
---
class: extra-details
### Group / Version / Kind -- for more info
For more info about how Kubernetes resource natures are identified by their `Group / Version / Kind` triplet…
… please, refer to the [`Kubernetes API` chapter in the High Five M5 module](./5.yml.html#toc-the-kubernetes-api)
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
</pre>

<!-- slides/flux/ingress.md -->
# T05- Configuring ingress for **_🎸ROCKY_** app
🍾 The **_🎸ROCKY_** team has just deployed its `v1.0.0`
We would like to reach it from our workstations
The standard way to do this in Kubernetes is to configure an `Ingress` resource (see the sketch below).
- `Ingress` is an abstract resource that manages how services are exposed outside of the Kubernetes cluster (Layer 7).
- It relies on an `ingress-controller`, the concrete component that implements all the rules related to ingress.
- Available features vary depending on the `ingress-controller`: load-balancing, networking, firewalling, API management, throttling, TLS encryption, etc.
- An `ingress-controller` may provision Cloud _IaaS_ network resources such as load balancers, persistent IPs, etc.
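As an illustration, a minimal sketch of such an `Ingress` for the ROCKY app (assumptions: the `nginx` ingress class, the host name, and the `web` Service on port 80 match what we deploy in this module; the actual `M6-rocky-ingress.yaml` may differ):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rocky
spec:
  ingressClassName: nginx          # assumption: class exposed by ingress-nginx
  rules:
    - host: rocky.test.mybestdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web          # the ROCKY web Service
                port:
                  number: 80
```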
---
class: extra-details
## Ingress -- for more info
Please, refer to the [`Ingress` chapter in the High Five M2 module](./2.yml.html#toc-exposing-http-services-with-ingress-resources)
---
## Installing `ingress-nginx` as our `ingress-controller`
We'll use `ingress-nginx` (based on `NGINX`), quite a popular choice.
- It is able to provision an _IaaS_ load balancer in Scaleway Cloud services
- As a reverse proxy, it is also able to balance HTTP connections on an on-premises cluster
The **_⚙OPS_** team adds this new install to its `Flux` config repo
---
### Creating a `Github` source in Flux for `ingress-nginx`
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p ./clusters/CLOUDY/ingress-nginx && \
flux create source git ingress-nginx \
--namespace=ingress-nginx \
--url=https://github.com/kubernetes/ingress-nginx/ \
--branch=release-1.12 \
--export > ./clusters/CLOUDY/ingress-nginx/sync.yaml
```
]
---
### Creating `kustomization` in Flux for `ingress-nginx`
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization ingress-nginx \
--namespace=ingress-nginx \
--source=GitRepository/ingress-nginx \
--path="./deploy/static/provider/scw/" \
--export >> ./clusters/CLOUDY/ingress-nginx/sync.yaml
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cp -p ~/container.training/k8s/M6-ingress-nginx-kustomization.yaml \
./clusters/CLOUDY/ingress-nginx/kustomization.yaml && \
cp -p ~/container.training/k8s/M6-ingress-nginx-components.yaml \
~/container.training/k8s/M6-ingress-nginx-*-patch.yaml \
./clusters/CLOUDY/ingress-nginx/
```
]
---
### Applying the new config
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
git add ./clusters/CLOUDY/ingress-nginx && \
git commit -m':wrench: :rocket: add Ingress-controller' && \
git push
```
]
---
class: pic
![Running Mario](images/running-mario.gif)
---
class: pic
![Ingress-nginx provisionned a IaaS load-balancer in Scaleway Cloud services](images/flux/ingress-nginx-scaleway-lb.png)
---
class: extra-details
### Using external Git source
💡 Note that you can directly use a public `Github` repository (not maintained by your company).
- If you have to alter the configuration, `Kustomize` patching capabilities might help.
- Depending on the _gitflow_ this repository uses, updates will be deployed automatically to your cluster (here we're using a `release` branch).
- This repo exposes a `kustomization.yaml`. Well done!
---
## Adding the `ingress` resource to ROCKY app
.lab[
- Add the new manifest to our kustomization bunch
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
cp -pr ~/container.training/k8s/M6-rocky-ingress.yaml ./tenants/base/rocky && \
echo '- M6-rocky-ingress.yaml' >> ./tenants/base/rocky/kustomization.yaml
```
- Commit and it's done
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
git add . && \
git commit -m':wrench: :rocket: add Ingress' && \
git push
```
]
---
class: pic
![Running Mario](images/running-mario.gif)
---
### Here is the result
After Flux has reconciled all the sources and kustomizations, you should see:
- the `ingress-nginx` controller components in the `ingress-nginx` namespace
- a new `Ingress` in the `rocky-test` namespace
.lab[
```bash
k8s@shpod:~$ kubectl get all -n ingress-nginx && \
kubectl get ingress -n rocky-test
k8s@shpod:~$ \
PublicIP=$(kubectl get ingress rocky -n rocky-test \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
k8s@shpod:~$ \
curl --header 'Host: rocky.test.mybestdomain.com' http://$PublicIP/
```
]
---
class: pic
![Rocky application screenshot](images/flux/rocky-app-screenshot.png)
---
## Upgrading **_🎸ROCKY_** app
**_🎸ROCKY_** team is now fully able to upgrade and deploy its app autonomously.
Just give it a try!
- In the `deployment.yaml` file
- in the app repo ([https://github.com/Musk8teers/container.training-spring-music/])
- you can change the `spec.template.spec.containers.image` to `1.0.1` and then to `1.0.2`
Don't forget which branch is watched by the `Flux` Git source named `rocky-app`
Don't forget to commit!
---
## A few considerations
- The **_⚙OPS_** team has to decide how to manage name resolution for the public IPs
- Scaleway offers to expose a wildcard domain for its Kubernetes clusters
- Here, we chose to have the `Ingress-controller` (which makes sense) as well as the `Ingress` resources managed by the **_⚙OPS_** team.
- It could have been done in many different ways!
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:3
branch MOVY order:4
branch YouRHere order:5
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT
</pre>

<!-- slides/flux/kyverno.md -->
## Introducing Kyverno
Kyverno is a policy engine that extends Kubernetes to express complex policies…
</br>… and to override manifests delivered by client teams.
---
class: extra-details
### Kyverno -- for more info
Please, refer to the [`Setting up Kubernetes` chapter in the High Five M4 module](./4.yml.html#toc-policy-management-with-kyverno) for more infos about `Kyverno`.
---
## Creating a `Helm` source in Flux for the Kyverno Helm chart
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p clusters/CLOUDY/kyverno && \
cp -pr ~/container.training/k8s/
k8s@shpod ~$ flux create source helm kyverno \
--namespace=kyverno \
--url=https://kyverno.github.io/kyverno/ \
--interval=3m \
--export > ./clusters/CLOUDY/kyverno/sync.yaml
```
]
---
## Creating the `HelmRelease` in Flux
.lab[
```bash
k8s@shpod ~$ flux create helmrelease kyverno \
--namespace=kyverno \
--source=HelmRepository/kyverno.flux-system \
--target-namespace=kyverno \
--create-target-namespace=true \
--chart-version=">=3.4.2" \
--chart=kyverno \
--export >> ./clusters/CLOUDY/kyverno/sync.yaml
```
]
---
## Add a Kyverno policy
This policy is just an example (see the sketch below).
It enforces the use of a `ServiceAccount` in `Flux` configurations
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
mkdir -p clusters/CLOUDY/kyverno-policies && \
cp -pr ~/container.training/k8s/M6-kyverno-enforce-service-account.yaml \
./clusters/CLOUDY/kyverno-policies/
```
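For reference, a policy of this kind could look roughly like the sketch below (a hedged example; the actual `M6-kyverno-enforce-service-account.yaml` shipped with the training may differ):
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-flux-service-account
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-service-account
      match:
        any:
          - resources:
              kinds:
                # the Flux Kustomization kind, not the Kustomize one
                - kustomize.toolkit.fluxcd.io/v1/Kustomization
      validate:
        message: "Flux Kustomizations must set spec.serviceAccountName"
        pattern:
          spec:
            serviceAccountName: "?*"
```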
---
### Creating `kustomization` in Flux for Kyverno policies
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
flux create kustomization kyverno-policies \
--namespace=kyverno \
--source=GitRepository/flux-system \
--path="./clusters/CLOUDY/kyverno-policies/" \
--prune true --interval 5m \
--depends-on kyverno \
--export >> ./clusters/CLOUDY/kyverno-policies/sync.yaml
```
]
---
## Apply the Kyverno policy
As usual, the new files only take effect once they are pushed to the `Flux` config repository:
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
git add ./clusters/CLOUDY/kyverno ./clusters/CLOUDY/kyverno-policies && \
git commit -m':wrench: :rocket: add Kyverno and its policies' && \
git push
```
---
## Add Kyverno dependency for **_⚗TEST_** cluster
- Now that we've got `Kyverno` policies,
- the ops team wants every upgrade of any kustomization in our dev team tenants
- to wait for the `kyverno` policies to be reconciled (from a `Flux` perspective)
- Update the file `./clusters/CLOUDY/tenants.yaml`
- by adding this property: `spec.dependsOn: [{name: kyverno-policies}]` (see the sketch below)
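A hedged sketch of what the relevant part of `./clusters/CLOUDY/tenants.yaml` could look like after the change (only the `dependsOn` block is new; the other field values are illustrative and depend on your repo):
```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenant-test
  namespace: flux-system
spec:
  dependsOn:
    - name: kyverno-policies   # wait for the Kyverno policies to reconcile first
      # namespace: kyverno     # assumption: needed if it lives outside flux-system
  interval: 10m                # existing fields, unchanged (values illustrative)
  path: ./tenants/test
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```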
---
class: pic
![Running Mario](images/running-mario.gif)
---
### Debugging
The `kyverno-policies` `Kustomization` failed because the `spec.dependsOn` property can only target a resource of the same `Kind` (it cannot depend on the `kyverno` `HelmRelease`).
- Let's remove the `spec.dependsOn` property.
Now the `Kustomizations` for the **_🎸ROCKY_** and **_🎬MOVY_** tenants fail because of our policies.
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:4
branch MOVY order:5
branch YouRHere order:6
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT tag:'T07'
checkout OPS
commit id:'k0s install on METAL cluster' tag:'K01'
commit id:'Flux config. for METAL cluster' tag:'K02'
branch METAL_TEST-PROD order:3
commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for OpenEBS' tag:'K03'
checkout METAL_TEST-PROD
merge OPS id:'openEBS on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Prometheus install'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout OPS
commit id:'Kyverno install'
commit id:'Kyverno rules'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'Flux config. for PROD tenant' tag:'P01'
branch PROD-env order:2
commit id:'ROCKY tenant on PROD'
checkout OPS
commit id:'ROCKY patch for PROD' tag:'R04'
checkout PROD-env
merge OPS id:'PROD ready to deploy ROCKY' type: HIGHLIGHT
checkout PROD-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout MOVY
commit id:'MOVY HELM chart' tag:'M03'
checkout TEST-env
merge MOVY tag:'MOVY v1.0'
</pre>

# Install monitoring stack
The **_⚙OPS_** team wants to have a real monitoring stack for its clusters.
Let's deploy `Prometheus` and `Grafana` onto the clusters.
---
## Creating a `Github` source in Flux for the monitoring components repository
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ mkdir -p clusters/CLOUDY/kube-prometheus-stack
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create source git monitoring \
--namespace=monitoring \
--url=https://github.com/fluxcd/flux2-monitoring-example.git \
--branch=main --export > ./clusters/CLOUDY/kube-prometheus-stack/sync.yaml
```
]
---
### Creating `kustomization` in Flux for monitoring stack
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization monitoring \
--namespace=monitoring \
--source=GitRepository/monitoring \
--path="./monitoring/controllers/kube-prometheus-stack/" \
--export >> ./clusters/CLOUDY/kube-prometheus-stack/sync.yaml
```
]
---
### Install Flux Grafana dashboards
.lab[
```bash
k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization dashboards \
--namespace=monitoring \
--source=GitRepository/monitoring \
--path="./monitoring/configs/" \
--export >> ./clusters/CLOUDY/kube-prometheus-stack/sync.yaml
```
]
---
class: pic
![Running Mario](images/running-mario.gif)
---
## Flux repository synchronization is broken 😅
It seems that `Flux` on the **_☁CLOUDY_** cluster is no longer able to authenticate over `ssh` to its `Github` config repository!
What happened?
When we installed `Flux` on the **_🤘METAL_** cluster, it generated a new `ssh` keypair and overrode the one used by **_☁CLOUDY_** among the "deploy keys" of the `Github` repository.
⚠️ Beware of the `flux bootstrap` command!
We have to
- generate a new keypair (or reuse an already existing one)
- add the private key to the Flux-dedicated secret in the **_☁CLOUDY_** cluster
- add the public key to the "deploy keys" of the `Github` repository
---
### The command
.lab[
- `Flux` _CLI_ helps to recreate the secret holding the `ssh` **private** key.
```bash
k8s@shpod:~$ flux create secret git flux-system \
--url=ssh://git@github.com/container-training-fleet/fleet-config-using-flux-XXXXX \
--private-key-file=/home/k8s/.ssh/id_ed25519
```
- copy the **public** key into the deploy keys of the `Github` repository
]
---
class: pic
![Running Mario](images/running-mario.gif)
---
## Access the Grafana dashboard
.lab[
- Get the `Host` and `IP` address to request
```bash
k8s@shpod:~$ kubectl -n monitoring get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
grafana nginx grafana.test.metal.mybestdomain.com 62.210.39.83 80 6m30s
```
- Get the `Grafana` admin password
```bash
k8s@shpod:~$ k get secret kube-prometheus-stack-grafana -n monitoring \
-o jsonpath='{.data.admin-password}' | base64 -d
```
]
## And browse…
---
class: pic
![Grafana dashboard screenshot](images/flux/grafana-dashboard.png)
---
### 🗺️ Where are we in our scenario?
<pre class="mermaid">
%%{init:
{
"theme": "default",
"gitGraph": {
"mainBranchName": "OPS",
"mainBranchOrder": 0
}
}
}%%
gitGraph
commit id:"0" tag:"start"
branch ROCKY order:4
branch MOVY order:5
branch YouRHere order:6
checkout OPS
commit id:'Flux install on CLOUDY cluster' tag:'T01'
branch TEST-env order:1
commit id:'FLUX install on TEST' tag:'T02' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for TEST tenant' tag:'T03'
commit id:'namespace isolation by RBAC'
checkout TEST-env
merge OPS id:'ROCKY tenant creation' tag:'T04'
checkout OPS
commit id:'ROCKY deploy. config.' tag:'R01'
checkout TEST-env
merge OPS id:'TEST ready to deploy ROCKY' type: HIGHLIGHT tag:'R02'
checkout ROCKY
commit id:'ROCKY' tag:'v1.0.0'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.0'
checkout OPS
commit id:'Ingress-controller config.' tag:'T05'
checkout TEST-env
merge OPS id:'Ingress-controller install' type: HIGHLIGHT tag:'T06'
checkout OPS
commit id:'ROCKY patch for ingress config.' tag:'R03'
checkout TEST-env
merge OPS id:'ingress config. for ROCKY app'
checkout ROCKY
commit id:'blue color' tag:'v1.0.1'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.1'
checkout ROCKY
commit id:'pink color' tag:'v1.0.2'
checkout TEST-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout OPS
commit id:'FLUX config for MOVY deployment' tag:'M01'
checkout TEST-env
merge OPS id:'FLUX ready to deploy MOVY' type: HIGHLIGHT tag:'M02'
checkout MOVY
commit id:'MOVY' tag:'v1.0.3'
checkout TEST-env
merge MOVY tag:'MOVY v1.0.3' type: REVERSE
checkout OPS
commit id:'Network policies'
checkout TEST-env
merge OPS type: HIGHLIGHT tag:'T07'
checkout OPS
commit id:'k0s install on METAL cluster' tag:'K01'
commit id:'Flux config. for METAL cluster' tag:'K02'
branch METAL_TEST-PROD order:3
commit id:'ROCKY/MOVY tenants on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for OpenEBS' tag:'K03'
checkout METAL_TEST-PROD
merge OPS id:'openEBS on METAL' type: HIGHLIGHT
checkout OPS
commit id:'Prometheus install'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout YouRHere
commit id:'x'
checkout OPS
merge YouRHere id:'YOU ARE HERE'
checkout OPS
commit id:'Kyverno install'
commit id:'Kyverno rules'
checkout TEST-env
merge OPS type: HIGHLIGHT
checkout OPS
commit id:'Flux config. for PROD tenant' tag:'P01'
branch PROD-env order:2
commit id:'ROCKY tenant on PROD'
checkout OPS
commit id:'ROCKY patch for PROD' tag:'R04'
checkout PROD-env
merge OPS id:'PROD ready to deploy ROCKY' type: HIGHLIGHT
checkout PROD-env
merge ROCKY tag:'ROCKY v1.0.2'
checkout MOVY
commit id:'MOVY HELM chart' tag:'M03'
checkout TEST-env
merge MOVY tag:'MOVY v1.0'
</pre>

<!-- slides/flux/openebs.md -->
# K03- Installing OpenEBS as our CSI
`OpenEBS` is a _CSI_ solution capable of hyperconvergence, synchronous replication, and other extra features.
It is installed with `Helm` charts.
- `Flux` is able to watch `Helm` repositories and install `HelmReleases`
- To inject its configuration into the `Helm` chart, `Flux` relies on a `ConfigMap` containing the `values.yaml` file
.lab[
```bash
k8s@shpod ~$ mkdir -p ./clusters/METAL/openebs/ && \
cp -pr ~/container.training/k8s/M6-openebs-*.yaml \
./clusters/METAL/openebs/ && \
cd ./clusters/METAL/openebs/ && \
mv M6-openebs-kustomization.yaml kustomization.yaml && \
cd -
```
]
---
## Creating a `Helm` source in Flux for the OpenEBS Helm chart
.lab[
```bash
k8s@shpod ~$ flux create source helm openebs \
--url=https://openebs.github.io/openebs \
--interval=3m \
--export > ./clusters/METAL/openebs/sync.yaml
```
]
---
## Creating the `HelmRelease` in Flux
.lab[
```bash
k8s@shpod ~$ flux create helmrelease openebs \
--namespace=openebs \
--source=HelmRepository/openebs.flux-system \
--chart=openebs \
--values-from=ConfigMap/openebs-values \
--export >> ./clusters/METAL/openebs/sync.yaml
```
]
---
## 📂 Let's review the files
- `M6-openebs-components.yaml`
</br>To include the `Flux` resources in the same _namespace_ where `Flux` installs the `OpenEBS` resources, we need to create the _namespace_ **before** the installation occurs
- `sync.yaml`
</br>The resources `Flux` uses to watch and get the `Helm chart`
- `M6-openebs-values.yaml`
</br> the `values.yaml` file that will be injected into the `Helm chart`
- `kustomization.yaml`
</br>This one is a bit special: it includes a [ConfigMap generator](https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/configmapgenerator/)
- `M6-openebs-kustomizeconfig.yaml`
</br>This one is tricky: in order for `Flux` to trigger an upgrade of the `HelmRelease` when the `ConfigMap` is altered, you need to explain to the `Kustomize ConfigMap generator` how the resources relate to each other (see the sketch below). 🤯
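To illustrate, a hedged sketch of how these pieces could fit together (the actual training files may differ; the point is the `configurations:` entry that teaches the generator to rewrite the `HelmRelease`'s `valuesFrom` reference):
```yaml
## kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - M6-openebs-components.yaml
  - sync.yaml
configMapGenerator:
  - name: openebs-values
    files:
      - values.yaml=M6-openebs-values.yaml
configurations:
  - M6-openebs-kustomizeconfig.yaml
---
## M6-openebs-kustomizeconfig.yaml (sketch)
nameReference:
  - kind: ConfigMap
    version: v1
    fieldSpecs:
      - path: spec/valuesFrom/name
        kind: HelmRelease
```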
And here we go!
---
class: pic
![Running Mario](images/running-mario.gif)
---
## And the result
Now, we have a _cluster_ featuring `openEBS`.
But still… the `PersistentVolumeClaim` remains in `Pending` state! 😭
```bash
k8s@shpod ~$ kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-hostpath openebs.io/local Delete WaitForFirstConsumer false 82m
```
We still don't have a default `StorageClass`!😤
---
### Manually enforcing the default `StorageClass`
Even if Flux constantly reconciles our resources, we are still able to test changes by hand.
.lab[
```bash
k8s@shpod ~$ flux suspend helmrelease openebs -n openebs
► suspending helmrelease openebs in openebs namespace
✔ helmrelease suspended
k8s@shpod ~$ kubectl patch storageclass openebs-hostpath \
-p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
k8s@shpod ~$ k get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-hostpath (default) openebs.io/local Delete WaitForFirstConsumer false 82m
```
]
---
### Now the database is OK
```bash
k8s@shpod ~$ k get pvc,pods -n movy-test
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/postgresql-data-db-0 Bound pvc-ede1634f-2478-42cd-8ee3-7547cd7cdde2 1Gi RWO openebs-hostpath <unset> 20m
NAME READY STATUS RESTARTS AGE
pod/db-0 1/1 Running 0 5h43m
()
```
