Compare commits

...

43 Commits

Author SHA1 Message Date
Jérôme Petazzoni
4eef466f84 Add link to portal info 2025-11-15 12:05:53 +01:00
Marinka
eff4074687 Remove slides from this section 2025-11-15 07:57:14 +01:00
Marinka
772102f385 Add slides and change order, add diagram 2025-11-15 07:57:14 +01:00
Marinka
9fb94be33b Change order of slides 2025-11-15 07:57:14 +01:00
Marinka
e9f8ac8865 Update title 2025-11-13 17:09:17 +01:00
Marinka
3f2cf4d10c Fix typo in Training_Environment.md 2025-11-13 17:09:17 +01:00
Jérôme Petazzoni
94d629639c 🖼️ Add logos 2025-11-12 17:30:44 +01:00
Jérôme Petazzoni
723ac82dfe Add Dockerfile example before starting to write our own 2025-11-12 17:13:14 +01:00
Jérôme Petazzoni
7b71884477 📛 Add contact info slide 2025-11-12 17:09:21 +01:00
Jérôme Petazzoni
fcf8e85bd5 Add short intro to Docker 2025-11-12 16:54:41 +01:00
Jérôme Petazzoni
1164abdbbd ✂️ Remove tailhist, we won't use it 2025-11-12 16:54:41 +01:00
Jérôme Petazzoni
490a0249ed 🔎 Clarify use of local Docker 2025-11-12 16:54:41 +01:00
Jérôme Petazzoni
14192d5121 🖼️ Add Docker architecture diagram 2025-11-12 16:54:41 +01:00
Jérôme Petazzoni
738e69bf07 ♻️ Update instructions about lab environments
The link to Play With Docker was broken. Also, since PWD was
out of capacity, I added a link to KodeKloud.
2025-11-12 16:54:41 +01:00
Jérôme Petazzoni
4019ff321c 🚢 Add small hands-on chapter about Harbor 2025-11-12 16:54:40 +01:00
Jérôme Petazzoni
6e4bc0264c 🛜 Make it work for hosts without IPv4 connectivity
Note that we install a TON of things from GitHub.
Since GitHub isn't available over IPv6, we are using
a custom solution based on cachttps, a caching
proxy that forwards requests to GitHub. Our deployment
scripts try to detect a cachttps instance (assuming
it is reachable at the DNS name cachttps.internal)
and if they find one, they use it. Otherwise they
access GitHub directly - which won't work on IPv6-only
hosts, but will of course work fine on IPv4 and
dual-stack hosts.
2025-11-12 16:54:29 +01:00
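
A minimal sketch of the detection described above; the same logic appears
verbatim in the deployment-script diff further down, and download URLs are
then built from $GITHUB instead of a hard-coded https://github.com:

    if curl -fsSLI http://cachttps.internal:3131/https://github.com/ >/dev/null; then
        echo GITHUB=http://cachttps.internal:3131/https://github.com
    else
        echo GITHUB=https://github.com
    fi | sudo tee -a /etc/environment
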
Jérôme Petazzoni
09ff8a65d8 🔧 Enable hostPort support in Cilium install 2025-11-12 16:54:29 +01:00
Jérôme Petazzoni
312b225d89 🛜 Support AAAA records in cloudflare DNS scripts 2025-11-12 16:54:29 +01:00
Jérôme Petazzoni
4957a1b561 🛠️ Improve Proxmox support
The first iteration on Proxmox support relied on a single
template image hosted on shared storage. This new iteration
relies on template images hosted on local storage. It will
detect the template VM to use on each node thanks to its tags.

Note: later, we'll need to expose an easy way to switch
between shared-store and local-store template images.
2025-11-12 16:54:29 +01:00
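
The tag lookup itself is not part of this diff; as a rough sketch of the idea
(hypothetical: the pvesh invocation, the "template" tag name, and the jq filter
are assumptions, not taken from the repository):

    # On each node, list local VMs and keep the first one tagged "template".
    # (Hypothetical sketch; tag name and filter are assumptions.)
    TEMPLATE_VMID=$(pvesh get /nodes/$(hostname)/qemu --output-format json |
        jq -r '.[] | select(.tags != null and (.tags | contains("template"))) | .vmid' |
        head -n1)
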
Jérôme Petazzoni
153a5b8e59 🛜 Bring IPv6 support to kubeadm deployments
Multiple small changes to allow deployment in IPv6-only environments.
What we do:
- detect if we are in an IPv6-only environment
- if yes, specify a service CIDR and listening address
  (kubeadm will otherwise pick the IPv4 address for the API server)
- switch to Cilium
Also minor changes to pssh and terraform to handle pinging and
connecting to IPv6 addresses.
2025-11-12 16:54:29 +01:00
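
The detection boils down to looking for a global IPv6 address, as shown in the
script diff below; marker files then tell the later steps which CNI to install:

    IPV6=$(ip -json a | jq -r '.[].addr_info[] |
        select(.scope=="global" and .family=="inet6") | .local' | head -n1)
    if [ "$IPV6" ]; then
        ADVERTISE="advertiseAddress: $IPV6"        # goes into kubeadm's localAPIEndpoint
        SERVICE_SUBNET="serviceSubnet: fdff::/112" # IPv6 service CIDR
        touch /tmp/install-cilium-ipv6-only
    else
        ADVERTISE=
        SERVICE_SUBNET=
        touch /tmp/install-weave
    fi
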
Jérôme Petazzoni
1e0ca12225 ♻️ Update dockercoins for IPv6 support
We want to be able to run on IPv6-only clusters
(as well as legacy IPv4 clusters and DualStack
clusters). This requires minor changes
in the code, because in multiple places, we were
binding listening sockets explicitly to 0.0.0.0.
We change this to :: instead, and in some cases,
we make it easier to change that if needed (e.g.
through environment variables).
2025-11-12 16:54:29 +01:00
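
For instance, the rng service (Dockerfile diff below) now takes its bind
address from the environment; a quick way to check the effect, assuming a host
where IPv6 sockets also accept IPv4 (the kernel default unless bindv6only is set):

    # "::" is the IPv6 wildcard address (the counterpart of 0.0.0.0).
    FLASK_APP=rng FLASK_RUN_HOST=:: FLASK_RUN_PORT=80 flask run
    # In another terminal, confirm the listening socket:
    ss -lnt 'sport = :80'
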
Arnaud Bienvenu
3d98f35e95 Grammatical fix in slides 2025-11-12 16:54:29 +01:00
Ludovic Piot
3a9e74d4ad 📝 🎨 lpiot-issue-8: Add the Flux bootstrap without relying on an organization 2025-11-12 16:54:29 +01:00
Ludovic Piot
c4290f7eaa 📝 lpiot-issue-10: Add a "delete PAT" step during the Flux install process 2025-11-12 16:54:29 +01:00
Ludovic Piot
402c02b2a3 ✏️ 2025-11-12 16:54:29 +01:00
Ludovic Piot
f7d5f7c546 📝 lpiot-issue-12: Flux only needs REPO permissions in the GitHub PAT 2025-11-12 16:54:29 +01:00
Ludovic Piot
20d830f107 🎨 Change the name of the k0s servers 2025-11-12 16:54:29 +01:00
Ludovic Piot
efe243e260 📝 🐛 lpiot-issue-25: broken link 2025-11-12 16:54:29 +01:00
Ludovic Piot
96eafff7cc 🐛 add the YAML files needed by the M5/M6 section 2025-11-12 16:54:29 +01:00
Jérôme Petazzoni
bca8059c42 🖼️ Re-add images for flux/M6 chapter 2025-11-12 16:54:29 +01:00
Jérôme Petazzoni
c87e6328d2 🔧 Replace hyperkube with kube-apiserver
Hyperkube isn't available anymore, so the previous version of
the script would redownload the tarball on every run
2025-11-12 16:54:29 +01:00
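
In other words, the file the script tested for no longer exists in release
tarballs, so the condition was always true; the fix (visible in the script diff
below) tests for kube-apiserver instead. A condensed sketch (the tar member
path is an assumption, since the hunk is truncated here):

    cd /usr/local/bin
    # Before: if ! [ -x hyperkube ]; then ...  (hyperkube is gone, so this always ran)
    if ! [ -x kube-apiserver ]; then
        curl -L https://dl.k8s.io/$K8SBIN_VERSION/kubernetes-server-linux-$ARCH.tar.gz |
            sudo tar --strip-components=3 -zx kubernetes/server/bin/kube-apiserver
    fi
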
Jérôme Petazzoni
c0a1f05cfc Invoke kind script to automatically start a k8s cluster 2025-11-12 16:54:29 +01:00
Jérôme Petazzoni
3c9c6a80a9 🐞 Typo fix 2025-11-12 16:54:29 +01:00
Jérôme Petazzoni
b27f5d2226 ⚙️ Add academy builder script 2025-11-12 16:54:29 +01:00
Jérôme Petazzoni
644de69b1a Add chapter about codespaces and dev clusters 2025-11-12 16:54:29 +01:00
Jérôme Petazzoni
d639b68d92 🔗 Add link to FluxCD Kustomization 2025-11-12 16:54:29 +01:00
Jérôme Petazzoni
07f6351582 Update Kustomize content 2025-11-12 16:54:17 +01:00
Jérôme Petazzoni
d3532bb99d 🛠️ Improve AWS EKS support
- detect which EKS version to use
  (instead of hard-coding it in the TF config)
- do not issue a CSR on EKS
  (because EKS is broken and doesn't support it)
- automatically install a StorageClass on EKS
  (because the EBS CSI addon doesn't install one by default)
- put EKS clusters in the default VPC
  (instead of creating one VPC per cluster,
  since there is a default limit of 5 VPCs per region)
2025-11-12 16:53:30 +01:00
Jérôme Petazzoni
e1f63e66f9 Improve googlecloud support
- add support to provision VMs on googlecloud
- refactor the way we define the project used by Terraform
  (we'll now use the GOOGLE_PROJECT environment variable,
  and if it's not set, we'll set it automatically by getting
  the default project from the gcloud CLI)
2025-11-12 16:53:30 +01:00
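
The fallback is a thin wrapper around the gcloud CLI (the full version,
including persisting the variable to settings.env, appears in the script diff
below):

    if ! [ "$GOOGLE_PROJECT" ]; then
        GOOGLE_PROJECT=$(gcloud config get project)
    fi
    export GOOGLE_PROJECT   # read by the Terraform google provider
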
Marinka
031ea237c8 Remove 'version' from compose file 2025-11-12 16:22:53 +01:00
Marinka
d2918c61a0 Remove unit testing 2025-11-12 16:22:53 +01:00
Marinka
fa4ab4f377 Add contact info 2025-11-12 16:22:53 +01:00
Jérôme Petazzoni
f9001262a8 👩‍🏫 Docker Workshop (PyLadies x Empowered In Tech) 2025-10-24 17:09:12 +02:00
126 changed files with 3048 additions and 2703 deletions

View File

@@ -9,7 +9,7 @@
"forwardPorts": [],
//"postCreateCommand": "... install extra packages...",
"postStartCommand": "dind.sh",
"postStartCommand": "dind.sh ; kind.sh",
// This lets us use "docker-outside-docker".
// Unfortunately, minikube, kind, etc. don't work very well that way;

.gitignore
View File

@@ -17,6 +17,7 @@ slides/autopilot/state.yaml
slides/index.html
slides/past.html
slides/slides.zip
slides/_academy_*
node_modules
### macOS ###

View File

@@ -1,26 +1,24 @@
version: "2"
services:
rng:
build: rng
ports:
- "8001:80"
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
- "8002:80"
webui:
build: webui
ports:
- "8000:80"
- "8000:80"
volumes:
- "./webui/files/:/files/"
- "./webui/files/:/files/"
redis:
image: redis
worker:
build: worker

View File

@@ -1,7 +1,8 @@
FROM ruby:alpine
WORKDIR /app
RUN apk add --update build-base curl
RUN gem install sinatra --version '~> 3'
RUN gem install thin --version '~> 1'
ADD hasher.rb /
CMD ["ruby", "hasher.rb"]
RUN gem install thin
COPY hasher.rb .
CMD ["ruby", "hasher.rb", "-o", "::"]
EXPOSE 80

View File

@@ -2,7 +2,6 @@ require 'digest'
require 'sinatra'
require 'socket'
set :bind, '0.0.0.0'
set :port, 80
post '/' do

View File

@@ -1,5 +1,7 @@
FROM python:alpine
WORKDIR /app
RUN pip install Flask
COPY rng.py /
CMD ["python", "rng.py"]
COPY rng.py .
ENV FLASK_APP=rng FLASK_RUN_HOST=:: FLASK_RUN_PORT=80
CMD ["flask", "run"]
EXPOSE 80

View File

@@ -28,5 +28,5 @@ def rng(how_many_bytes):
if __name__ == "__main__":
app.run(host="0.0.0.0", port=80, threaded=False)
app.run(port=80)

View File

@@ -1,7 +1,8 @@
FROM node:4-slim
RUN npm install express@4
RUN npm install redis@3
COPY files/ /files/
COPY webui.js /
FROM node:23-alpine
WORKDIR /app
RUN npm install express
RUN npm install morgan
RUN npm install redis@5
COPY . .
CMD ["node", "webui.js"]
EXPOSE 80

View File

@@ -1,26 +1,34 @@
var express = require('express');
var app = express();
var redis = require('redis');
import express from 'express';
import morgan from 'morgan';
import { createClient } from 'redis';
var client = redis.createClient(6379, 'redis');
client.on("error", function (err) {
console.error("Redis error", err);
});
var client = await createClient({
url: "redis://redis",
socket: {
family: 0
}
})
.on("error", function (err) {
console.error("Redis error", err);
})
.connect();
var app = express();
app.use(morgan('common'));
app.get('/', function (req, res) {
res.redirect('/index.html');
});
app.get('/json', function (req, res) {
client.hlen('wallet', function (err, coins) {
client.get('hashes', function (err, hashes) {
var now = Date.now() / 1000;
res.json( {
coins: coins,
hashes: hashes,
now: now
});
});
app.get('/json', async(req, res) => {
var coins = await client.hLen('wallet');
var hashes = await client.get('hashes');
var now = Date.now() / 1000;
res.json({
coins: coins,
hashes: hashes,
now: now
});
});

View File

@@ -1,5 +1,6 @@
FROM python:alpine
WORKDIR /app
RUN pip install redis
RUN pip install requests
COPY worker.py /
COPY worker.py .
CMD ["python", "worker.py"]

View File

@@ -0,0 +1,9 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
data:
use-forwarded-headers: "true"
compute-full-forwarded-for: "true"
use-proxy-protocol: "true"

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: ingress-nginx

View File

@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- M6-ingress-nginx-components.yaml
- sync.yaml
patches:
- path: M6-ingress-nginx-cm-patch.yaml
target:
kind: ConfigMap
- path: M6-ingress-nginx-svc-patch.yaml
target:
kind: Service

View File

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
annotations:
service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: "true"
service.beta.kubernetes.io/scw-loadbalancer-use-hostname: "true"

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: kyverno

View File

@@ -0,0 +1,72 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: flux-multi-tenancy
spec:
validationFailureAction: enforce
rules:
- name: serviceAccountName
exclude:
resources:
namespaces:
- flux-system
match:
resources:
kinds:
- Kustomization
- HelmRelease
validate:
message: ".spec.serviceAccountName is required"
pattern:
spec:
serviceAccountName: "?*"
- name: kustomizationSourceRefNamespace
exclude:
resources:
namespaces:
- flux-system
- ingress-nginx
- kyverno
- monitoring
- openebs
match:
resources:
kinds:
- Kustomization
preconditions:
any:
- key: "{{request.object.spec.sourceRef.namespace}}"
operator: NotEquals
value: ""
validate:
message: "spec.sourceRef.namespace must be the same as metadata.namespace"
deny:
conditions:
- key: "{{request.object.spec.sourceRef.namespace}}"
operator: NotEquals
value: "{{request.object.metadata.namespace}}"
- name: helmReleaseSourceRefNamespace
exclude:
resources:
namespaces:
- flux-system
- ingress-nginx
- kyverno
- monitoring
- openebs
match:
resources:
kinds:
- HelmRelease
preconditions:
any:
- key: "{{request.object.spec.chart.spec.sourceRef.namespace}}"
operator: NotEquals
value: ""
validate:
message: "spec.chart.spec.sourceRef.namespace must be the same as metadata.namespace"
deny:
conditions:
- key: "{{request.object.spec.chart.spec.sourceRef.namespace}}"
operator: NotEquals
value: "{{request.object.metadata.namespace}}"

View File

@@ -0,0 +1,29 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: monitoring
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana
namespace: monitoring
spec:
ingressClassName: nginx
rules:
- host: grafana.test.metal.mybestdomain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kube-prometheus-stack-grafana
port:
number: 80

View File

@@ -0,0 +1,35 @@
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-from-other-namespaces
spec:
podSelector: {}
ingress:
- from:
- podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-webui
spec:
podSelector:
matchLabels:
app: web
ingress:
- from: []
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-db
spec:
podSelector:
matchLabels:
app: db
ingress:
- from:
- podSelector:
matchLabels:
app: web

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: openebs

View File

@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openebs
resources:
- M6-openebs-components.yaml
- sync.yaml
configMapGenerator:
- name: openebs-values
files:
- values.yaml=M6-openebs-values.yaml
configurations:
- M6-openebs-kustomizeconfig.yaml

View File

@@ -0,0 +1,6 @@
nameReference:
- kind: ConfigMap
version: v1
fieldSpecs:
- path: spec/valuesFrom/name
kind: HelmRelease

View File

@@ -0,0 +1,15 @@
# helm install openebs --namespace openebs openebs/openebs
# --set engines.replicated.mayastor.enabled=false
# --set lvm-localpv.lvmNode.kubeletDir=/var/lib/k0s/kubelet/
# --create-namespace
engines:
replicated:
mayastor:
enabled: false
# Needed for k0s install since kubelet install is slightly divergent from vanilla install >:-(
lvm-localpv:
lvmNode:
kubeletDir: /var/lib/k0s/kubelet/
localprovisioner:
hostpathClass:
isDefaultClass: true

View File

@@ -0,0 +1,38 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
namespace: rocky-test
name: rocky-full-access
rules:
- apiGroups: ["", extensions, apps]
resources: [deployments, replicasets, pods, services, ingresses, statefulsets]
verbs: [get, list, watch, create, update, patch, delete] # You can also use [*]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: rocky-pv-access
rules:
- apiGroups: [""]
resources: [persistentvolumes]
verbs: [get, list, watch, create, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
toolkit.fluxcd.io/tenant: rocky
name: rocky-reconciler2
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: rocky-pv-access
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: gotk:rocky-test:reconciler
- kind: ServiceAccount
name: rocky
namespace: rocky-test

k8s/M6-rocky-ingress.yaml
View File

@@ -0,0 +1,19 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rocky
namespace: rocky-test
spec:
ingressClassName: nginx
rules:
- host: rocky.test.mybestdomain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 80

View File

@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base/rocky
patches:
- path: M6-rocky-test-patch.yaml
target:
kind: Kustomization

View File

@@ -0,0 +1,7 @@
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
name: rocky
namespace: rocky-test
spec:
path: ./k8s/plain

View File

@@ -0,0 +1,12 @@
# This removes the haproxy Deployment.
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patches:
- patch: |-
$patch: delete
kind: Deployment
apiVersion: apps/v1
metadata:
name: haproxy

View File

@@ -0,0 +1,14 @@
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
# Within a Kustomization, it is not possible to specify in which
# order transformations (patches, replacements, etc) should be
# executed. If we want to execute transformations in a specific
# order, one possibility is to put them in individual components,
# and then invoke these components in the order we want.
# It works, but it creates an extra level of indirection, which
# reduces readability and complicates maintenance.
components:
- setup
- cleanup
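
A minimal way to check the resulting order, assuming kustomize is run from the
directory containing this file and the setup/ and cleanup/ components:

    # Transformations from "setup" are applied before those from "cleanup".
    kustomize build .    # or: kubectl kustomize .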

View File

@@ -0,0 +1,20 @@
global
#log stdout format raw local0
#daemon
maxconn 32
defaults
#log global
timeout client 1h
timeout connect 1h
timeout server 1h
mode http
option abortonclose
frontend metrics
bind :9000
http-request use-service prometheus-exporter
frontend ollama_frontend
bind :8000
default_backend ollama_backend
maxconn 16
backend ollama_backend
server ollama_server localhost:11434 check

View File

@@ -0,0 +1,39 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: haproxy
name: haproxy
spec:
selector:
matchLabels:
app: haproxy
template:
metadata:
labels:
app: haproxy
spec:
volumes:
- name: haproxy
configMap:
name: haproxy
containers:
- image: haproxy:3.0
name: haproxy
volumeMounts:
- name: haproxy
mountPath: /usr/local/etc/haproxy
readinessProbe:
httpGet:
port: 9000
ports:
- name: haproxy
containerPort: 8000
- name: metrics
containerPort: 9000
resources:
requests:
cpu: 0.05
limits:
cpu: 1

View File

@@ -0,0 +1,75 @@
# This adds a sidecar to the ollama Deployment, by taking
# the pod template and volumes from the haproxy Deployment.
# The idea is to allow running ollama+haproxy in two modes:
# - separately (each with their own Deployment),
# - together in the same Pod, sidecar-style.
# The YAML files define how to run them separately, and this
# "replacements" directive fetches a specific volume and
# a specific container from the haproxy Deployment, to add
# them to the ollama Deployment.
#
# This would be simpler if kustomize allowed to append or
# merge lists in "replacements"; but it doesn't seem to be
# possible at the moment.
#
# It would be even better if kustomize allowed to perform
# a strategic merge using a fieldPath as the source, because
# we could merge both the containers and the volumes in a
# single operation.
#
# Note that technically, it might be possible to layer
# multiple kustomizations so that one generates the patch
# to be used in another; but it wouldn't be very readable
# or maintainable so we decided to not do that right now.
#
# However, the current approach (fetching fields one by one)
# has an advantage: it could let us transform the haproxy
# container into a real sidecar (i.e. an initContainer with
# a restartPolicy=Always).
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
resources:
- haproxy.yaml
configMapGenerator:
- name: haproxy
files:
- haproxy.cfg
replacements:
- source:
kind: Deployment
name: haproxy
fieldPath: spec.template.spec.volumes.[name=haproxy]
targets:
- select:
kind: Deployment
name: ollama
fieldPaths:
- spec.template.spec.volumes.[name=haproxy]
options:
create: true
- source:
kind: Deployment
name: haproxy
fieldPath: spec.template.spec.containers.[name=haproxy]
targets:
- select:
kind: Deployment
name: ollama
fieldPaths:
- spec.template.spec.containers.[name=haproxy]
options:
create: true
- source:
kind: Deployment
name: haproxy
fieldPath: spec.template.spec.containers.[name=haproxy].ports.[name=haproxy].containerPort
targets:
- select:
kind: Service
name: ollama
fieldPaths:
- spec.ports.[name=11434].targetPort

View File

@@ -0,0 +1,34 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: blue
name: blue
spec:
replicas: 2
selector:
matchLabels:
app: blue
template:
metadata:
labels:
app: blue
spec:
containers:
- image: jpetazzo/color
name: color
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
labels:
app: blue
name: blue
spec:
ports:
- port: 80
selector:
app: blue

View File

@@ -0,0 +1,94 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Each of these YAML files contains a Deployment and a Service.
# The blue.yaml file is here just to demonstrate that the rest
# of this Kustomization can be precisely scoped to the ollama
# Deployment (and Service): the blue Deployment and Service
# shouldn't be affected by our kustomize transformers.
resources:
- ollama.yaml
- blue.yaml
buildMetadata:
# Add a label app.kubernetes.io/managed-by=kustomize-vX.Y.Z
- managedByLabel
# Add an annotation config.kubernetes.io/origin, indicating:
# - which file defined that resource;
# - if it comes from a git repository, which one, and which
# ref (tag, branch...) it was.
- originAnnotations
# Add an annotation alpha.config.kubernetes.io/transformations
# indicating which patches and other transformers have changed
# each resource.
- transformerAnnotations
# Let's generate a ConfigMap with literal values.
# Note that this will actually add a suffix to the name of the
# ConfigMaps (e.g.: ollama-8bk8bd8m76) and it will update all
# references to the ConfigMap (e.g. in Deployment manifests)
# accordingly. The suffix is a hash of the ConfigMap contents,
# so if the ConfigMap is edited, any workload using that ConfigMap
# will automatically do a rolling update. (A quick way to observe
# the suffix is sketched after this file.)
configMapGenerator:
- name: ollama
literals:
- "model=gemma3:270m"
- "prompt=If you visit Paris, I suggest that you"
- "queue=4"
name: ollama
patches:
# The Deployment manifest in ollama.yaml doesn't specify
# resource requests and limits, so that it can run on any
# cluster (including resource-constrained local clusters
# like KiND or minikube). The example below adds CPU
# requests and limits using a strategic merge patch.
# The patch is inlined here, but it could also be put
# in a file and referenced with "path: xxxxxx.yaml".
- patch: |
apiVersion: apps/v1
kind: Deployment
metadata:
name: ollama
spec:
template:
spec:
containers:
- name: ollama
resources:
requests:
cpu: 1
limits:
cpu: 2
# This will have the same effect, with one little detail:
# JSON patches cannot specify containers by name, so this
# assumes that the ollama container is the first one in
# the pod template (whereas the strategic merge patch can
# use "merge keys" and identify containers by their name).
#- target:
# kind: Deployment
# name: ollama
# patch: |
# - op: add
# path: /spec/template/spec/containers/0/resources
# value:
# requests:
# cpu: 1
# limits:
# cpu: 2
# A "component" is a bit like a "base", in the sense that
# it lets us define some reusable resources and behaviors.
# There is a key difference, though:
# - a "base" will be evaluated in isolation: it will
# generate+transform some resources, then these resources
# will be included in the main Kustomization;
# - a "component" has access to all the resources that
# have been generated by the main Kustomization, which
# means that it can transform them (with patches etc).
components:
- add-haproxy-sidecar
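
A quick way to observe the generated name suffix mentioned in the comments
above (the exact hash will differ whenever the literals change):

    kustomize build . | grep 'name: ollama-'
    # e.g.:  name: ollama-8bk8bd8m76
    # (appears both on the ConfigMap and in the Deployment's configMapKeyRef entries)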

View File

@@ -0,0 +1,73 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: ollama
name: ollama
spec:
selector:
matchLabels:
app: ollama
template:
metadata:
labels:
app: ollama
spec:
volumes:
- name: ollama
hostPath:
path: /opt/ollama
type: DirectoryOrCreate
containers:
- image: ollama/ollama
name: ollama
env:
- name: OLLAMA_MAX_QUEUE
valueFrom:
configMapKeyRef:
name: ollama
key: queue
- name: MODEL
valueFrom:
configMapKeyRef:
name: ollama
key: model
volumeMounts:
- name: ollama
mountPath: /root/.ollama
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- ollama pull $MODEL
livenessProbe:
httpGet:
port: 11434
readinessProbe:
exec:
command:
- /bin/sh
- -c
- ollama show $MODEL
ports:
- name: ollama
containerPort: 11434
---
apiVersion: v1
kind: Service
metadata:
labels:
app: ollama
name: ollama
spec:
ports:
- name: "11434"
port: 11434
protocol: TCP
targetPort: 11434
selector:
app: ollama
type: ClusterIP

View File

@@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- microservices
- redis

View File

@@ -0,0 +1,13 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- microservices.yaml
transformers:
- |
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
name: use-ghcr-io
prefix: ghcr.io/
fieldSpecs:
- path: spec/template/spec/containers/image

View File

@@ -0,0 +1,125 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: dockercoins/hasher:v0.1
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: dockercoins/rng:v0.1
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: dockercoins/webui:v0.1
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: dockercoins/worker:v0.1
name: worker

View File

@@ -0,0 +1,4 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- redis.yaml

View File

@@ -0,0 +1,35 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP

View File

@@ -0,0 +1,160 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: dockercoins/hasher:v0.1
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: dockercoins/rng:v0.1
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: dockercoins/webui:v0.1
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: dockercoins/worker:v0.1
name: worker

View File

@@ -0,0 +1,30 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- dockercoins.yaml
replacements:
- sourceValue: ghcr.io/dockercoins
targets:
- select:
kind: Deployment
labelSelector: "app in (hasher,rng,webui,worker)"
# It will soon be possible to use regexes in replacement selectors,
# meaning that the "labelSelector:" above can be replaced with the
# following "name:" selector which is a tiny bit simpler:
#name: hasher|rng|webui|worker
# Regex support in replacement selectors was added by this PR:
# https://github.com/kubernetes-sigs/kustomize/pull/5863
# This PR was merged in August 2025, but as of October 2025, the
# latest release of Kustomize is 5.7.1, which was released in July.
# Hopefully the feature will be available in the next release :)
# Another possibility would be to select all Deployments, and then
# reject the one(s) for which we don't want to update the registry;
# for instance:
#reject:
# kind: Deployment
# name: redis
fieldPaths:
- spec.template.spec.containers.*.image
options:
delimiter: "/"
index: 0
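
A quick check of the rewrite, assuming a kustomize release that supports the
sourceValue form used above:

    kustomize build . | grep 'image:'
    # image: ghcr.io/dockercoins/hasher:v0.1   (first path component replaced)
    # image: redis                             (excluded by the label selector)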

View File

@@ -66,7 +66,7 @@ Here is where we look for credentials for each provider:
- Civo: CLI configuration file (`~/.civo.json`)
- Digital Ocean: CLI configuration file (`~/.config/doctl/config.yaml`)
- Exoscale: CLI configuration file (`~/.config/exoscale/exoscale.toml`)
- Google Cloud: FIXME, note that the project name is currently hard-coded to `prepare-tf`
- Google Cloud: we're using "Application Default Credentials (ADC)"; run `gcloud auth application-default login`; note that we'll use the default "project" set in `gcloud` unless you set the `GOOGLE_PROJECT` environment variable
- Hetzner: CLI configuration file (`~/.config/hcloud/cli.toml`)
- Linode: CLI configuration file (`~/.config/linode-cli`)
- OpenStack: you will need to write a tfvars file (check [that example](terraform/virtual-machines/openstack/tfvars.example))

View File

@@ -36,8 +36,12 @@ _populate_zone() {
ZONE_ID=$(_get_zone_id $1)
shift
for IPADDR in $*; do
cloudflare zones/$ZONE_ID/dns_records "name=*" "type=A" "content=$IPADDR"
cloudflare zones/$ZONE_ID/dns_records "name=\@" "type=A" "content=$IPADDR"
case "$IPADDR" in
*.*) TYPE=A;;
*:*) TYPE=AAAA;;
esac
cloudflare zones/$ZONE_ID/dns_records "name=*" "type=$TYPE" "content=$IPADDR"
cloudflare zones/$ZONE_ID/dns_records "name=\@" "type=$TYPE" "content=$IPADDR"
done
}

View File

@@ -56,7 +56,7 @@ _cmd_codeserver() {
ARCH=${ARCHITECTURE-amd64}
CODESERVER_VERSION=4.96.4
CODESERVER_URL=https://github.com/coder/code-server/releases/download/v${CODESERVER_VERSION}/code-server-${CODESERVER_VERSION}-linux-${ARCH}.tar.gz
CODESERVER_URL=\$GITHUB/coder/code-server/releases/download/v${CODESERVER_VERSION}/code-server-${CODESERVER_VERSION}-linux-${ARCH}.tar.gz
pssh "
set -e
i_am_first_node || exit 0
@@ -230,7 +230,7 @@ _cmd_create() {
;;
*) die "Invalid mode: $MODE (supported modes: mk8s, pssh)." ;;
esac
if ! [ -f "$SETTINGS" ]; then
die "Settings file ($SETTINGS) not found."
fi
@@ -270,7 +270,27 @@ _cmd_create() {
ln -s ../../$SETTINGS tags/$TAG/settings.env.orig
cp $SETTINGS tags/$TAG/settings.env
. $SETTINGS
# For Google Cloud, it is necessary to specify which "project" to use.
# Unfortunately, the Terraform provider doesn't seem to have a way
# to detect which Google Cloud project you want to use; it has to be
# specified one way or another. Let's decide that it should be set with
# the GOOGLE_PROJECT env var; and if that var is not set, we'll try to
# figure it out from gcloud.
# (See https://github.com/hashicorp/terraform-provider-google/issues/10907#issuecomment-1015721600)
# Since we need that variable to be set each time we'll call Terraform
# (e.g. when destroying the environment), let's save it to the settings.env
# file.
if [ "$PROVIDER" = "googlecloud" ]; then
if ! [ "$GOOGLE_PROJECT" ]; then
info "PROVIDER=googlecloud but GOOGLE_PROJECT is not set. Detecting it."
GOOGLE_PROJECT=$(gcloud config get project)
info "GOOGLE_PROJECT will be set to '$GOOGLE_PROJECT'."
fi
echo "export GOOGLE_PROJECT=$GOOGLE_PROJECT" >> tags/$TAG/settings.env
fi
. tags/$TAG/settings.env
echo $MODE > tags/$TAG/mode
echo $PROVIDER > tags/$TAG/provider
@@ -355,8 +375,8 @@ _cmd_clusterize() {
pssh -I < tags/$TAG/clusters.tsv "
grep -w \$PSSH_HOST | tr '\t' '\n' > /tmp/cluster"
pssh "
echo \$PSSH_HOST > /tmp/ipv4
head -n 1 /tmp/cluster | sudo tee /etc/ipv4_of_first_node
echo \$PSSH_HOST > /tmp/ip_address
head -n 1 /tmp/cluster | sudo tee /etc/ip_address_of_first_node
echo ${CLUSTERPREFIX}1 | sudo tee /etc/name_of_first_node
echo HOSTIP=\$PSSH_HOST | sudo tee -a /etc/environment
NODEINDEX=\$((\$PSSH_NODENUM%$CLUSTERSIZE+1))
@@ -439,7 +459,7 @@ _cmd_docker() {
set -e
### Install docker-compose.
sudo curl -fsSL -o /usr/local/bin/docker-compose \
https://github.com/docker/compose/releases/download/$COMPOSE_VERSION/docker-compose-$COMPOSE_PLATFORM
\$GITHUB/docker/compose/releases/download/$COMPOSE_VERSION/docker-compose-$COMPOSE_PLATFORM
sudo chmod +x /usr/local/bin/docker-compose
docker-compose version
@@ -447,7 +467,7 @@ _cmd_docker() {
##VERSION## https://github.com/docker/machine/releases
MACHINE_VERSION=v0.16.2
sudo curl -fsSL -o /usr/local/bin/docker-machine \
https://github.com/docker/machine/releases/download/\$MACHINE_VERSION/docker-machine-\$(uname -s)-\$(uname -m)
\$GITHUB/docker/machine/releases/download/\$MACHINE_VERSION/docker-machine-\$(uname -s)-\$(uname -m)
sudo chmod +x /usr/local/bin/docker-machine
docker-machine version
"
@@ -480,10 +500,10 @@ _cmd_kubebins() {
set -e
cd /usr/local/bin
if ! [ -x etcd ]; then
curl -L https://github.com/etcd-io/etcd/releases/download/$ETCD_VERSION/etcd-$ETCD_VERSION-linux-$ARCH.tar.gz \
curl -L \$GITHUB/etcd-io/etcd/releases/download/$ETCD_VERSION/etcd-$ETCD_VERSION-linux-$ARCH.tar.gz \
| sudo tar --strip-components=1 --wildcards -zx '*/etcd' '*/etcdctl'
fi
if ! [ -x hyperkube ]; then
if ! [ -x kube-apiserver ]; then
##VERSION##
curl -L https://dl.k8s.io/$K8SBIN_VERSION/kubernetes-server-linux-$ARCH.tar.gz \
| sudo tar --strip-components=3 -zx \
@@ -492,7 +512,7 @@ _cmd_kubebins() {
sudo mkdir -p /opt/cni/bin
cd /opt/cni/bin
if ! [ -x bridge ]; then
curl -L https://github.com/containernetworking/plugins/releases/download/$CNI_VERSION/cni-plugins-linux-$ARCH-$CNI_VERSION.tgz \
curl -L \$GITHUB/containernetworking/plugins/releases/download/$CNI_VERSION/cni-plugins-linux-$ARCH-$CNI_VERSION.tgz \
| sudo tar -zx
fi
"
@@ -542,6 +562,18 @@ EOF"
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl &&
echo 'alias k=kubecolor' | sudo tee /etc/bash_completion.d/k &&
echo 'complete -F __start_kubectl k' | sudo tee -a /etc/bash_completion.d/k"
# Install helm early
# (so that we can use it to install e.g. Cilium etc.)
ARCH=${ARCHITECTURE-amd64}
HELM_VERSION=3.19.1
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl -fsSL https://get.helm.sh/helm-v${HELM_VERSION}-linux-${ARCH}.tar.gz |
sudo tar --strip-components=1 --wildcards -zx -C /usr/local/bin '*/helm'
helm completion bash | sudo tee /etc/bash_completion.d/helm
helm version
fi"
}
_cmd kubeadm "Setup kubernetes clusters with kubeadm"
@@ -565,6 +597,18 @@ _cmd_kubeadm() {
# Initialize kube control plane
pssh --timeout 200 "
IPV6=\$(ip -json a | jq -r '.[].addr_info[] | select(.scope==\"global\" and .family==\"inet6\") | .local' | head -n1)
if [ \"\$IPV6\" ]; then
ADVERTISE=\"advertiseAddress: \$IPV6\"
SERVICE_SUBNET=\"serviceSubnet: fdff::/112\"
touch /tmp/install-cilium-ipv6-only
touch /tmp/ipv6-only
else
ADVERTISE=
SERVICE_SUBNET=
touch /tmp/install-weave
fi
echo IPV6=\$IPV6 ADVERTISE=\$ADVERTISE
if i_am_first_node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token &&
cat >/tmp/kubeadm-config.yaml <<EOF
@@ -572,9 +616,12 @@ kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: \$(cat /tmp/token)
localAPIEndpoint:
\$ADVERTISE
nodeRegistration:
ignorePreflightErrors:
- NumCPU
- FileContent--proc-sys-net-ipv6-conf-default-forwarding
$IGNORE_SYSTEMVERIFICATION
$IGNORE_SWAP
$IGNORE_IPTABLES
@@ -601,7 +648,9 @@ kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
apiServer:
certSANs:
- \$(cat /tmp/ipv4)
- \$(cat /tmp/ip_address)
networking:
\$SERVICE_SUBNET
$CLUSTER_CONFIGURATION_KUBERNETESVERSION
EOF
sudo kubeadm init --config=/tmp/kubeadm-config.yaml
@@ -620,9 +669,20 @@ EOF
# Install weave as the pod network
pssh "
if i_am_first_node; then
curl -fsSL https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml |
sed s,weaveworks/weave,quay.io/rackspace/weave, |
kubectl apply -f-
if [ -f /tmp/install-weave ]; then
curl -fsSL \$GITHUB/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml |
sed s,weaveworks/weave,quay.io/rackspace/weave, |
kubectl apply -f-
fi
if [ -f /tmp/install-cilium-ipv6-only ]; then
helm upgrade -i cilium cilium --repo https://helm.cilium.io/ \
--namespace kube-system \
--set cni.chainingMode=portmap \
--set ipv6.enabled=true \
--set ipv4.enabled=false \
--set underlayProtocol=ipv6 \
--version 1.18.3
fi
fi"
# FIXME this is a gross hack to add the deployment key to our SSH agent,
@@ -645,13 +705,16 @@ EOF
fi
# Install metrics server
pssh "
pssh -I <../k8s/metrics-server.yaml "
if i_am_first_node; then
kubectl apply -f https://raw.githubusercontent.com/jpetazzo/container.training/master/k8s/metrics-server.yaml
kubectl apply -f-
fi"
# It would be nice to be able to use that helm chart for metrics-server.
# Unfortunately, the charts themselves are on github.com and we want to
# avoid that due to their lack of IPv6 support.
#helm upgrade --install metrics-server \
# --repo https://kubernetes-sigs.github.io/metrics-server/ metrics-server \
# --namespace kube-system --set args={--kubelet-insecure-tls}
fi"
}
_cmd kubetools "Install a bunch of CLI tools for Kubernetes"
@@ -678,7 +741,7 @@ _cmd_kubetools() {
# Install ArgoCD CLI
##VERSION## https://github.com/argoproj/argo-cd/releases/latest
URL=https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-${ARCH}
URL=\$GITHUB/argoproj/argo-cd/releases/latest/download/argocd-linux-${ARCH}
pssh "
if [ ! -x /usr/local/bin/argocd ]; then
sudo curl -o /usr/local/bin/argocd -fsSL $URL
@@ -691,7 +754,7 @@ _cmd_kubetools() {
##VERSION## https://github.com/fluxcd/flux2/releases
FLUX_VERSION=2.3.0
FILENAME=flux_${FLUX_VERSION}_linux_${ARCH}
URL=https://github.com/fluxcd/flux2/releases/download/v$FLUX_VERSION/$FILENAME.tar.gz
URL=\$GITHUB/fluxcd/flux2/releases/download/v$FLUX_VERSION/$FILENAME.tar.gz
pssh "
if [ ! -x /usr/local/bin/flux ]; then
curl -fsSL $URL |
@@ -706,7 +769,7 @@ _cmd_kubetools() {
set -e
if ! [ -x /usr/local/bin/kctx ]; then
cd /tmp
git clone https://github.com/ahmetb/kubectx
git clone \$GITHUB/ahmetb/kubectx
sudo cp kubectx/kubectx /usr/local/bin/kctx
sudo cp kubectx/kubens /usr/local/bin/kns
sudo cp kubectx/completion/*.bash /etc/bash_completion.d
@@ -717,7 +780,7 @@ _cmd_kubetools() {
set -e
if ! [ -d /opt/kube-ps1 ]; then
cd /tmp
git clone https://github.com/jonmosco/kube-ps1
git clone \$GITHUB/jonmosco/kube-ps1
sudo mv kube-ps1 /opt/kube-ps1
sudo -u $USER_LOGIN sed -i s/docker-prompt/kube_ps1/ /home/$USER_LOGIN/.bashrc &&
sudo -u $USER_LOGIN tee -a /home/$USER_LOGIN/.bashrc <<EOF
@@ -734,7 +797,7 @@ EOF
##VERSION## https://github.com/stern/stern/releases
STERN_VERSION=1.29.0
FILENAME=stern_${STERN_VERSION}_linux_${ARCH}
URL=https://github.com/stern/stern/releases/download/v$STERN_VERSION/$FILENAME.tar.gz
URL=\$GITHUB/stern/stern/releases/download/v$STERN_VERSION/$FILENAME.tar.gz
pssh "
if [ ! -x /usr/local/bin/stern ]; then
curl -fsSL $URL |
@@ -745,9 +808,11 @@ EOF
fi"
# Install helm
HELM_VERSION=3.19.1
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 | sudo bash &&
curl -fsSL https://get.helm.sh/helm-v${HELM_VERSION}-linux-${ARCH}.tar.gz |
sudo tar --strip-components=1 --wildcards -zx -C /usr/local/bin '*/helm'
helm completion bash | sudo tee /etc/bash_completion.d/helm
helm version
fi"
@@ -755,7 +820,7 @@ EOF
# Install kustomize
##VERSION## https://github.com/kubernetes-sigs/kustomize/releases
KUSTOMIZE_VERSION=v5.4.1
URL=https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_${ARCH}.tar.gz
URL=\$GITHUB/kubernetes-sigs/kustomize/releases/download/kustomize/${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_${ARCH}.tar.gz
pssh "
if [ ! -x /usr/local/bin/kustomize ]; then
curl -fsSL $URL |
@@ -772,15 +837,17 @@ EOF
pssh "
if [ ! -x /usr/local/bin/ship ]; then
##VERSION##
curl -fsSL https://github.com/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_$ARCH.tar.gz |
curl -fsSL \$GITHUB/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_$ARCH.tar.gz |
sudo tar -C /usr/local/bin -zx ship
fi"
# Install the AWS IAM authenticator
AWSIAMAUTH_VERSION=0.7.8
URL=\$GITHUB/kubernetes-sigs/aws-iam-authenticator/releases/download/v${AWSIAMAUTH_VERSION}/aws-iam-authenticator_${AWSIAMAUTH_VERSION}_linux_${ARCH}
pssh "
if [ ! -x /usr/local/bin/aws-iam-authenticator ]; then
##VERSION##
sudo curl -fsSLo /usr/local/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/$ARCH/aws-iam-authenticator
sudo curl -fsSLo /usr/local/bin/aws-iam-authenticator $URL
sudo chmod +x /usr/local/bin/aws-iam-authenticator
aws-iam-authenticator version
fi"
@@ -790,17 +857,17 @@ EOF
if [ ! -x /usr/local/bin/jless ]; then
##VERSION##
sudo apt-get install -y libxcb-render0 libxcb-shape0 libxcb-xfixes0
wget https://github.com/PaulJuliusMartinez/jless/releases/download/v0.9.0/jless-v0.9.0-x86_64-unknown-linux-gnu.zip
wget \$GITHUB/PaulJuliusMartinez/jless/releases/download/v0.9.0/jless-v0.9.0-x86_64-unknown-linux-gnu.zip
unzip jless-v0.9.0-x86_64-unknown-linux-gnu
sudo mv jless /usr/local/bin
fi"
# Install the krew package manager
pssh "
if [ ! -d /home/$USER_LOGIN/.krew ]; then
if [ ! -d /home/$USER_LOGIN/.krew ] && [ ! -f /tmp/ipv6-only ]; then
cd /tmp &&
KREW=krew-linux_$ARCH
curl -fsSL https://github.com/kubernetes-sigs/krew/releases/latest/download/\$KREW.tar.gz |
curl -fsSL \$GITHUB/kubernetes-sigs/krew/releases/latest/download/\$KREW.tar.gz |
tar -zxf- &&
sudo -u $USER_LOGIN -H ./\$KREW install krew &&
echo export PATH=/home/$USER_LOGIN/.krew/bin:\\\$PATH | sudo -u $USER_LOGIN tee -a /home/$USER_LOGIN/.bashrc
@@ -808,7 +875,7 @@ EOF
# Install kubecolor
KUBECOLOR_VERSION=0.4.0
URL=https://github.com/kubecolor/kubecolor/releases/download/v${KUBECOLOR_VERSION}/kubecolor_${KUBECOLOR_VERSION}_linux_${ARCH}.tar.gz
URL=\$GITHUB/kubecolor/kubecolor/releases/download/v${KUBECOLOR_VERSION}/kubecolor_${KUBECOLOR_VERSION}_linux_${ARCH}.tar.gz
pssh "
if [ ! -x /usr/local/bin/kubecolor ]; then
##VERSION##
@@ -820,7 +887,7 @@ EOF
pssh "
if [ ! -x /usr/local/bin/k9s ]; then
FILENAME=k9s_Linux_$ARCH.tar.gz &&
curl -fsSL https://github.com/derailed/k9s/releases/latest/download/\$FILENAME |
curl -fsSL \$GITHUB/derailed/k9s/releases/latest/download/\$FILENAME |
sudo tar -C /usr/local/bin -zx k9s
k9s version
fi"
@@ -829,7 +896,7 @@ EOF
pssh "
if [ ! -x /usr/local/bin/popeye ]; then
FILENAME=popeye_Linux_$ARCH.tar.gz &&
curl -fsSL https://github.com/derailed/popeye/releases/latest/download/\$FILENAME |
curl -fsSL \$GITHUB/derailed/popeye/releases/latest/download/\$FILENAME |
sudo tar -C /usr/local/bin -zx popeye
popeye version
fi"
@@ -842,7 +909,7 @@ EOF
if [ ! -x /usr/local/bin/tilt ]; then
TILT_VERSION=0.33.13
FILENAME=tilt.\$TILT_VERSION.linux.$TILT_ARCH.tar.gz
curl -fsSL https://github.com/tilt-dev/tilt/releases/download/v\$TILT_VERSION/\$FILENAME |
curl -fsSL \$GITHUB/tilt-dev/tilt/releases/download/v\$TILT_VERSION/\$FILENAME |
sudo tar -C /usr/local/bin -zx tilt
tilt completion bash | sudo tee /etc/bash_completion.d/tilt
tilt version
@@ -860,7 +927,7 @@ EOF
# Install Kompose
pssh "
if [ ! -x /usr/local/bin/kompose ]; then
curl -fsSLo kompose https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-$ARCH &&
curl -fsSLo kompose \$GITHUB/kubernetes/kompose/releases/latest/download/kompose-linux-$ARCH &&
sudo install kompose /usr/local/bin
kompose completion bash | sudo tee /etc/bash_completion.d/kompose
kompose version
@@ -869,7 +936,7 @@ EOF
# Install KinD
pssh "
if [ ! -x /usr/local/bin/kind ]; then
curl -fsSLo kind https://github.com/kubernetes-sigs/kind/releases/latest/download/kind-linux-$ARCH &&
curl -fsSLo kind \$GITHUB/kubernetes-sigs/kind/releases/latest/download/kind-linux-$ARCH &&
sudo install kind /usr/local/bin
kind completion bash | sudo tee /etc/bash_completion.d/kind
kind version
@@ -878,7 +945,7 @@ EOF
# Install YTT
pssh "
if [ ! -x /usr/local/bin/ytt ]; then
curl -fsSLo ytt https://github.com/vmware-tanzu/carvel-ytt/releases/latest/download/ytt-linux-$ARCH &&
curl -fsSLo ytt \$GITHUB/vmware-tanzu/carvel-ytt/releases/latest/download/ytt-linux-$ARCH &&
sudo install ytt /usr/local/bin
ytt completion bash | sudo tee /etc/bash_completion.d/ytt
ytt version
@@ -886,7 +953,7 @@ EOF
##VERSION## https://github.com/bitnami-labs/sealed-secrets/releases
KUBESEAL_VERSION=0.26.2
URL=https://github.com/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION}/kubeseal-${KUBESEAL_VERSION}-linux-${ARCH}.tar.gz
URL=\$GITHUB/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION}/kubeseal-${KUBESEAL_VERSION}-linux-${ARCH}.tar.gz
#case $ARCH in
#amd64) FILENAME=kubeseal-linux-amd64;;
#arm64) FILENAME=kubeseal-arm64;;
@@ -903,7 +970,7 @@ EOF
VELERO_VERSION=1.13.2
pssh "
if [ ! -x /usr/local/bin/velero ]; then
curl -fsSL https://github.com/vmware-tanzu/velero/releases/download/v$VELERO_VERSION/velero-v$VELERO_VERSION-linux-$ARCH.tar.gz |
curl -fsSL \$GITHUB/vmware-tanzu/velero/releases/download/v$VELERO_VERSION/velero-v$VELERO_VERSION-linux-$ARCH.tar.gz |
sudo tar --strip-components=1 --wildcards -zx -C /usr/local/bin '*/velero'
velero completion bash | sudo tee /etc/bash_completion.d/velero
velero version --client-only
@@ -913,7 +980,7 @@ EOF
KUBENT_VERSION=0.7.2
pssh "
if [ ! -x /usr/local/bin/kubent ]; then
curl -fsSL https://github.com/doitintl/kube-no-trouble/releases/download/${KUBENT_VERSION}/kubent-${KUBENT_VERSION}-linux-$ARCH.tar.gz |
curl -fsSL \$GITHUB/doitintl/kube-no-trouble/releases/download/${KUBENT_VERSION}/kubent-${KUBENT_VERSION}-linux-$ARCH.tar.gz |
sudo tar -zxvf- -C /usr/local/bin kubent
kubent --version
fi"
@@ -921,7 +988,7 @@ EOF
# Ngrok. Note that unfortunately, this is the x86_64 binary.
# We might have to rethink how to handle this for multi-arch environments.
pssh "
if [ ! -x /usr/local/bin/ngrok ]; then
if [ ! -x /usr/local/bin/ngrok ] && [ ! -f /tmp/ipv6-only ]; then
curl -fsSL https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz |
sudo tar -zxvf- -C /usr/local/bin ngrok
fi"
@@ -1020,7 +1087,9 @@ _cmd_ping() {
TAG=$1
need_tag
fping < tags/$TAG/ips.txt
# If we connect to our VMs over IPv6, the IP address is between brackets.
# Unfortunately, fping doesn't support that; so let's strip brackets here.
tr -d [] < tags/$TAG/ips.txt | fping
}
_cmd stage2 "Finalize the setup of managed Kubernetes clusters"
@@ -1092,7 +1161,7 @@ _cmd_standardize() {
sudo netfilter-persistent start
fi"
# oracle-cloud-agent upgrades pacakges in the background.
# oracle-cloud-agent upgrades packages in the background.
# This breaks our deployment scripts, because when we invoke apt-get, it complains
# that the lock already exists (symptom: random "Exited with error code 100").
# Workaround: if we detect oracle-cloud-agent, remove it.
@@ -1104,6 +1173,15 @@ _cmd_standardize() {
sudo snap remove oracle-cloud-agent
sudo dpkg --remove --force-remove-reinstreq unified-monitoring-agent
fi"
# Check if a cachttps instance is available.
# (This is used to access GitHub on IPv6-only hosts.)
pssh "
if curl -fsSLI http://cachttps.internal:3131/https://github.com/ >/dev/null; then
echo GITHUB=http://cachttps.internal:3131/https://github.com
else
echo GITHUB=https://github.com
fi | sudo tee -a /etc/environment"
}
_cmd tailhist "Install history viewer on port 1088"
@@ -1119,7 +1197,7 @@ _cmd_tailhist () {
pssh "
set -e
sudo apt-get install unzip -y
wget -c https://github.com/joewalnes/websocketd/releases/download/v0.3.0/websocketd-0.3.0-linux_$ARCH.zip
wget -c \$GITHUB/joewalnes/websocketd/releases/download/v0.3.0/websocketd-0.3.0-linux_$ARCH.zip
unzip -o websocketd-0.3.0-linux_$ARCH.zip websocketd
sudo mv websocketd /usr/local/bin/websocketd
sudo mkdir -p /opt/tailhist
@@ -1218,14 +1296,17 @@ fi
"
}
_cmd ssh "Open an SSH session to the first node of a tag"
_cmd ssh "Open an SSH session to a node (first one by default)"
_cmd_ssh() {
TAG=$1
need_tag
IP=$(head -1 tags/$TAG/ips.txt)
info "Logging into $IP (default password: $USER_PASSWORD)"
ssh $SSHOPTS $USER_LOGIN@$IP
if [ "$2" ]; then
ssh -l ubuntu -i tags/$TAG/id_rsa $2
else
IP=$(head -1 tags/$TAG/ips.txt)
info "Logging into $IP (default password: $USER_PASSWORD)"
ssh $SSHOPTS $USER_LOGIN@$IP
fi
}
_cmd tags "List groups of VMs known locally"
@@ -1382,7 +1463,7 @@ _cmd_webssh() {
sudo apt-get install python3-tornado python3-paramiko -y"
pssh "
cd /opt
[ -d webssh ] || sudo git clone https://github.com/jpetazzo/webssh"
[ -d webssh ] || sudo git clone \$GITHUB/jpetazzo/webssh"
pssh "
for KEYFILE in /etc/ssh/*.pub; do
read a b c < \$KEYFILE; echo localhost \$a \$b
@@ -1467,7 +1548,7 @@ test_vm() {
"whoami" \
"hostname -i" \
"ls -l /usr/local/bin/i_am_first_node" \
"grep . /etc/name_of_first_node /etc/ipv4_of_first_node" \
"grep . /etc/name_of_first_node /etc/ip_addres_of_first_node" \
"cat /etc/hosts" \
"hostnamectl status" \
"docker version | grep Version -B1" \

View File

@@ -23,6 +23,14 @@ pssh() {
# necessary - or down to zero, too.
sleep ${PSSH_DELAY_PRE-1}
# When things go wrong, it's convenient to ask pssh to show the output
# of the failed command. Let's make that easy with a DEBUG env var.
if [ "$DEBUG" ]; then
PSSH_I=-i
else
PSSH_I=""
fi
$(which pssh || which parallel-ssh) -h $HOSTFILE -l ubuntu \
--par ${PSSH_PARALLEL_CONNECTIONS-100} \
--timeout 300 \
@@ -31,5 +39,6 @@ pssh() {
-O UserKnownHostsFile=/dev/null \
-O StrictHostKeyChecking=no \
-O ForwardAgent=yes \
$PSSH_I \
"$@"
}

View File

@@ -2,7 +2,7 @@ terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.16.1"
version = "~> 2.38.0"
}
helm = {
source = "hashicorp/helm"
@@ -107,6 +107,31 @@ resource "helm_release" "metrics_server_${index}" {
]
}
# As of October 2025, the ebs-csi-driver addon (which is used on EKS
# to provision persistent volumes) doesn't automatically create a
# StorageClass. Here, we're trying to detect the DaemonSet created
# by the ebs-csi-driver; and if we find it, we create the corresponding
# StorageClass.
data "kubernetes_resources" "ebs_csi_node_${index}" {
provider = kubernetes.cluster_${index}
api_version = "apps/v1"
kind = "DaemonSet"
label_selector = "app.kubernetes.io/name=aws-ebs-csi-driver"
namespace = "kube-system"
}
resource "kubernetes_storage_class" "ebs_csi_${index}" {
count = (length(data.kubernetes_resources.ebs_csi_node_${index}.objects) > 0) ? 1 : 0
provider = kubernetes.cluster_${index}
metadata {
name = "ebs-csi"
annotations = {
"storageclass.kubernetes.io/is-default-class" = "true"
}
}
storage_provisioner = "ebs.csi.aws.com"
}
# This section here deserves a little explanation.
#
# When we access a cluster with shpod (either through SSH or code-server)
@@ -136,8 +161,14 @@ resource "helm_release" "metrics_server_${index}" {
# Lastly - in the ConfigMap we actually put both the original kubeconfig,
# and the one where we injected our new user (just in case we want to
# use or look at the original for any reason).
#
# One more thing: the kubernetes.io/kube-apiserver-client signer is
# disabled on EKS, so... we don't generate that ConfigMap on EKS.
# To detect if we're on EKS, we're looking for the ebs-csi-node DaemonSet.
# (Which means that the detection will break if the ebs-csi addon is missing.)
resource "kubernetes_config_map" "kubeconfig_${index}" {
count = (length(data.kubernetes_resources.ebs_csi_node_${index}.objects) > 0) ? 0 : 1
provider = kubernetes.cluster_${index}
metadata {
name = "kubeconfig"
@@ -163,7 +194,7 @@ resource "kubernetes_config_map" "kubeconfig_${index}" {
- name: cluster-admin
user:
client-key-data: $${base64encode(tls_private_key.cluster_admin_${index}.private_key_pem)}
client-certificate-data: $${base64encode(kubernetes_certificate_signing_request_v1.cluster_admin_${index}.certificate)}
client-certificate-data: $${base64encode(kubernetes_certificate_signing_request_v1.cluster_admin_${index}[0].certificate)}
EOT
}
}
@@ -201,6 +232,7 @@ resource "kubernetes_cluster_role_binding" "shpod_cluster_admin_${index}" {
}
resource "kubernetes_certificate_signing_request_v1" "cluster_admin_${index}" {
count = (length(data.kubernetes_resources.ebs_csi_node_${index}.objects) > 0) ? 0 : 1
provider = kubernetes.cluster_${index}
metadata {
name = "cluster-admin"

View File

@@ -1,60 +1,45 @@
# Taken from:
# https://github.com/hashicorp/learn-terraform-provision-eks-cluster/blob/main/main.tf
data "aws_availability_zones" "available" {}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.19.0"
name = var.cluster_name
cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
data "aws_eks_cluster_versions" "_" {
default_only = true
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.5.1"
cluster_name = var.cluster_name
cluster_version = "1.24"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
cluster_endpoint_public_access = true
eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
source = "terraform-aws-modules/eks/aws"
version = "~> 21.0"
name = var.cluster_name
kubernetes_version = data.aws_eks_cluster_versions._.cluster_versions[0].cluster_version
vpc_id = local.vpc_id
subnet_ids = local.subnet_ids
endpoint_public_access = true
enable_cluster_creator_admin_permissions = true
upgrade_policy = {
# The default policy is EXTENDED, which incurs additional costs
# when running an old control plane. We don't advise running old
# control planes, but we also don't want to incur costs if an
# old version is chosen accidentally.
support_type = "STANDARD"
}
addons = {
coredns = {}
eks-pod-identity-agent = {
before_compute = true
}
kube-proxy = {}
vpc-cni = {
before_compute = true
}
aws-ebs-csi-driver = {
service_account_role_arn = module.irsa-ebs-csi.iam_role_arn
}
}
eks_managed_node_groups = {
one = {
name = "node-group-one"
x86 = {
name = "x86"
instance_types = [local.node_size]
min_size = var.min_nodes_per_pool
max_size = var.max_nodes_per_pool
desired_size = var.min_nodes_per_pool
min_size = var.min_nodes_per_pool
max_size = var.max_nodes_per_pool
desired_size = var.min_nodes_per_pool
}
}
}
@@ -66,7 +51,7 @@ data "aws_iam_policy" "ebs_csi_policy" {
module "irsa-ebs-csi" {
source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
version = "4.7.0"
version = "~> 5.39.0"
create_role = true
role_name = "AmazonEKSTFEBSCSIRole-${module.eks.cluster_name}"
@@ -75,13 +60,9 @@ module "irsa-ebs-csi" {
oidc_fully_qualified_subjects = ["system:serviceaccount:kube-system:ebs-csi-controller-sa"]
}
resource "aws_eks_addon" "ebs-csi" {
cluster_name = module.eks.cluster_name
addon_name = "aws-ebs-csi-driver"
addon_version = "v1.5.2-eksbuild.1"
service_account_role_arn = module.irsa-ebs-csi.iam_role_arn
tags = {
"eks_addon" = "ebs-csi"
"terraform" = "true"
}
resource "aws_vpc_security_group_ingress_rule" "_" {
security_group_id = module.eks.node_security_group_id
cidr_ipv4 = "0.0.0.0/0"
ip_protocol = -1
description = "Allow all traffic to Kubernetes nodes (so that we can use NodePorts, hostPorts, etc.)"
}

View File

@@ -2,7 +2,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.47.0"
version = "~> 6.17.0"
}
}
}

View File

@@ -0,0 +1,61 @@
# OK, we have two options here.
# 1. Create our own VPC
# - Pros: provides good isolation from other stuff deployed in the
# AWS account; makes sure that we don't interact with
# existing security groups, subnets, etc.
# - Cons: by default, there is a quota of 5 VPCs per region, so
# we can only deploy 5 clusters
# 2. Use the default VPC
# - Pros/cons: the opposite :)
variable "use_default_vpc" {
type = bool
default = true
}
data "aws_vpc" "default" {
default = true
}
data "aws_subnets" "default" {
filter {
name = "vpc-id"
values = [data.aws_vpc.default.id]
}
}
data "aws_availability_zones" "available" {}
module "vpc" {
count = var.use_default_vpc ? 0 : 1
source = "terraform-aws-modules/vpc/aws"
version = "~> 6.0"
name = var.cluster_name
cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)
private_subnets = ["10.0.11.0/24", "10.0.12.0/24", "10.0.13.0/24"]
public_subnets = ["10.0.21.0/24", "10.0.22.0/24", "10.0.23.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
map_public_ip_on_launch = true
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
}
locals {
vpc_id = var.use_default_vpc ? data.aws_vpc.default.id : module.vpc[0].vpc_id
subnet_ids = var.use_default_vpc ? data.aws_subnets.default.ids : module.vpc[0].public_subnets
}

View File

@@ -1,12 +0,0 @@
locals {
location = var.location != null ? var.location : "europe-north1-a"
region = replace(local.location, "/-[a-z]$/", "")
# Unfortunately, the following line doesn't work
# (that attribute just returns an empty string)
# so we have to hard-code the project name.
#project = data.google_client_config._.project
project = "prepare-tf"
}
data "google_client_config" "_" {}

View File

@@ -1,7 +1,7 @@
resource "google_container_cluster" "_" {
name = var.cluster_name
project = local.project
location = local.location
name = var.cluster_name
location = local.location
deletion_protection = false
#min_master_version = var.k8s_version
# To deploy private clusters, uncomment the section below,
@@ -42,7 +42,7 @@ resource "google_container_cluster" "_" {
node_pool {
name = "x86"
node_config {
tags = var.common_tags
tags = ["lab-${var.cluster_name}"]
machine_type = local.node_size
}
initial_node_count = var.min_nodes_per_pool
@@ -62,3 +62,25 @@ resource "google_container_cluster" "_" {
}
}
}
resource "google_compute_firewall" "_" {
name = "lab-${var.cluster_name}"
network = "default"
allow {
protocol = "tcp"
ports = ["0-65535"]
}
allow {
protocol = "udp"
ports = ["0-65535"]
}
allow {
protocol = "icmp"
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["lab-${var.cluster_name}"]
}

View File

@@ -6,6 +6,8 @@ output "has_metrics_server" {
value = true
}
data "google_client_config" "_" {}
output "kubeconfig" {
sensitive = true
value = <<-EOT

View File

@@ -1,8 +0,0 @@
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "4.5.0"
}
}
}

View File

@@ -0,0 +1 @@
../../providers/googlecloud/provider.tf

View File

@@ -0,0 +1,8 @@
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 7.0"
}
}
}

View File

@@ -9,5 +9,9 @@ variable "node_sizes" {
variable "location" {
type = string
default = null
default = "europe-north1-a"
}
locals {
location = (var.location != "" && var.location != null) ? var.location : "europe-north1-a"
}

View File

@@ -63,7 +63,8 @@ locals {
resource "local_file" "ip_addresses" {
content = join("", formatlist("%s\n", [
for key, value in local.ip_addresses : value
for key, value in local.ip_addresses :
strcontains(value, ".") ? value : "[${value}]"
]))
filename = "ips.txt"
file_permission = "0600"
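With that change, IPv6 entries come out bracketed while IPv4 entries are left as-is; a hypothetical `ips.txt` (addresses are illustrative) would look like:

```
192.0.2.10
[2001:db8::42]
```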

View File

@@ -0,0 +1 @@
../common.tf

View File

@@ -0,0 +1 @@
../../providers/googlecloud/config.tf

View File

@@ -0,0 +1,54 @@
# Note: names and tags on GCP have to match a specific regex:
# (?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?)
# In other words, they must start with a letter; but our generated
# names typically start with a number (year-month-day, e.g. 2025-...),
# so we prefix names and tags with "lab-" in this configuration.
resource "google_compute_instance" "_" {
for_each = local.nodes
zone = var.location
name = "lab-${each.value.node_name}"
tags = ["lab-${var.tag}"]
machine_type = each.value.node_size
boot_disk {
initialize_params {
image = "ubuntu-os-cloud/ubuntu-2404-lts-amd64"
}
}
network_interface {
network = "default"
access_config {}
}
metadata = {
"ssh-keys" = "ubuntu:${tls_private_key.ssh.public_key_openssh}"
}
}
locals {
ip_addresses = {
for key, value in local.nodes :
key => google_compute_instance._[key].network_interface[0].access_config[0].nat_ip
}
}
resource "google_compute_firewall" "_" {
name = "lab-${var.tag}"
network = "default"
allow {
protocol = "tcp"
ports = ["0-65535"]
}
allow {
protocol = "udp"
ports = ["0-65535"]
}
allow {
protocol = "icmp"
}
source_ranges = ["0.0.0.0/0"]
target_tags = ["lab-${var.tag}"]
}

View File

@@ -0,0 +1 @@
../../providers/googlecloud/provider.tf

View File

@@ -0,0 +1 @@
../../providers/googlecloud/variables.tf

View File

@@ -1,12 +1,34 @@
data "proxmox_virtual_environment_nodes" "_" {}
data "proxmox_virtual_environment_vms" "_" {
filter {
name = "template"
values = [true]
}
}
data "proxmox_virtual_environment_vms" "templates" {
for_each = toset(data.proxmox_virtual_environment_nodes._.names)
tags = ["ubuntu"]
filter {
name = "node_name"
values = [each.value]
}
filter {
name = "template"
values = [true]
}
}
locals {
pve_nodes = data.proxmox_virtual_environment_nodes._.names
pve_nodes = data.proxmox_virtual_environment_nodes._.names
pve_node = { for k, v in local.nodes : k => local.pve_nodes[v.node_index % length(local.pve_nodes)] }
pve_template_id = { for k, v in local.nodes : k => data.proxmox_virtual_environment_vms.templates[local.pve_node[k]].vms[0].vm_id }
}
resource "proxmox_virtual_environment_vm" "_" {
node_name = local.pve_nodes[each.value.node_index % length(local.pve_nodes)]
for_each = local.nodes
node_name = local.pve_node[each.key]
name = each.value.node_name
tags = ["container.training", var.tag]
stop_on_destroy = true
@@ -24,9 +46,17 @@ resource "proxmox_virtual_environment_vm" "_" {
# size = 30
# discard = "on"
#}
### Strategy 1: clone from shared storage
#clone {
# vm_id = var.proxmox_template_vm_id
# node_name = var.proxmox_template_node_name
# full = false
#}
### Strategy 2: clone from local storage
### (requires that the template exists on each node)
clone {
vm_id = var.proxmox_template_vm_id
node_name = var.proxmox_template_node_name
vm_id = local.pve_template_id[each.key]
node_name = local.pve_node[each.key]
full = false
}
agent {
@@ -41,7 +71,9 @@ resource "proxmox_virtual_environment_vm" "_" {
ip_config {
ipv4 {
address = "dhcp"
#gateway =
}
ipv6 {
address = "dhcp"
}
}
}
@@ -72,8 +104,11 @@ resource "proxmox_virtual_environment_vm" "_" {
locals {
ip_addresses = {
for key, value in local.nodes :
key => [for addr in flatten(concat(proxmox_virtual_environment_vm._[key].ipv4_addresses, ["ERROR"])) :
addr if addr != "127.0.0.1"][0]
key => [for addr in flatten(concat(
proxmox_virtual_environment_vm._[key].ipv6_addresses,
proxmox_virtual_environment_vm._[key].ipv4_addresses,
["ERROR"])) :
addr if addr != "127.0.0.1" && addr != "::1"][0]
}
}

View File

@@ -2,7 +2,7 @@ terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "~> 0.70.1"
version = "~> 0.86.0"
}
}
}

View File

@@ -2,6 +2,7 @@
#/ /kube-halfday.yml.html 200!
#/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200!
/ /docker.yml.html 200!
# And this makes it possible to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack

slides/academy-build.py Executable file
View File

@@ -0,0 +1,31 @@
#!/usr/bin/env python
# Split a single-page slide deck into one HTML file per "toc" section,
# and emit a Netlify-style _redirects file mapping short links to them.
import os
import re
import sys
html_file = sys.argv[1]
output_file_template = "_academy_{}.html"
title_regex = "name: toc-(.*)"
redirects = open("_redirects", "w")
# Splitting on the "name: toc-<link>" anchors yields an alternating
# sequence of (link, markdown) pairs.
sections = re.split(title_regex, open(html_file).read())[1:]
while sections:
    link, markdown = sections[0], sections[1]
    sections = sections[2:]
    output_file_name = output_file_template.format(link)
    with open(output_file_name, "w") as f:
        # Inject this section's markdown into the slide template.
        html = open("workshop.html").read()
        html = html.replace("@@MARKDOWN@@", markdown)
        titles = re.findall("# (.*)", markdown) + [""]
        html = html.replace("@@TITLE@@", "{} — Kubernetes Academy".format(titles[0]))
        html = html.replace("@@SLIDENUMBERPREFIX@@", "")
        html = html.replace("@@EXCLUDE@@", "")
        # Hide the prev/next navigation in the per-section decks.
        html = html.replace(".nav[", ".hide[")
        f.write(html)
    redirects.write("/{} /{} 200!\n".format(link, output_file_name))
# Rewrite the full deck so that TOC links point to the section files.
html = open(html_file).read()
html = re.sub("#toc-([^)]*)", "_academy_\\1.html", html)
sys.stdout.write(html)
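A plausible invocation (file names are illustrative): the script writes the per-section `_academy_*.html` files and the `_redirects` file as side effects, and prints the rewritten full deck on stdout.

```bash
./academy-build.py kube-fullday.yml.html > index.html
```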

View File

@@ -29,6 +29,20 @@ At the end of this lesson, you will be able to:
---
## `Dockerfile` example
```
FROM python:alpine
WORKDIR /app
RUN pip install Flask
COPY rng.py .
ENV FLASK_APP=rng FLASK_RUN_HOST=:: FLASK_RUN_PORT=80
CMD ["flask", "run"]
EXPOSE 80
```
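To try it out, a minimal sketch (it assumes the directory contains the `rng.py` file that the Dockerfile copies):

```bash
docker build -t rng .            # build the image from the Dockerfile above
docker run -d -p 8080:80 rng     # map container port 80 to host port 8080
curl localhost:8080              # talk to the Flask app
```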
---
## Writing our first `Dockerfile`
Our Dockerfile must be in a **new, empty directory**.
@@ -87,119 +101,7 @@ To keep things simple for now: this is the directory where our Dockerfile is loc
---
## What happens when we build the image?
It depends if we're using BuildKit or not!
If there are lots of blue lines and the first line looks like this:
```
[+] Building 1.8s (4/6)
```
... then we're using BuildKit.
If the output is mostly black-and-white and the first line looks like this:
```
Sending build context to Docker daemon 2.048kB
```
... then we're using the "classic" or "old-style" builder.
---
## To BuildKit or Not To BuildKit
Classic builder:
- copies the whole "build context" to the Docker Engine
- linear (processes lines one after the other)
- requires a full Docker Engine
BuildKit:
- only transfers parts of the "build context" when needed
- will parallelize operations (when possible)
- can run in non-privileged containers (e.g. on Kubernetes)
---
## With the classic builder
The output of `docker build` looks like this:
.small[
```bash
docker build -t figlet .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu
---> f975c5035748
Step 2/3 : RUN apt-get update
---> Running in e01b294dbffd
(...output of the RUN command...)
Removing intermediate container e01b294dbffd
---> eb8d9b561b37
Step 3/3 : RUN apt-get install figlet
---> Running in c29230d70f9b
(...output of the RUN command...)
Removing intermediate container c29230d70f9b
---> 0dfd7a253f21
Successfully built 0dfd7a253f21
Successfully tagged figlet:latest
```
]
* The output of the `RUN` commands has been omitted.
* Let's explain what this output means.
---
## Sending the build context to Docker
```bash
Sending build context to Docker daemon 2.048 kB
```
* The build context is the `.` directory given to `docker build`.
* It is sent (as an archive) by the Docker client to the Docker daemon.
* This makes it possible to build on a remote machine, using local files.
* Be careful (or patient) if that directory is big and your link is slow.
* You can speed up the process with a [`.dockerignore`](https://docs.docker.com/engine/reference/builder/#dockerignore-file) file
* It tells Docker to ignore specific files in the directory
* Only ignore files that you won't need in the build context! (See the example below.)
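For example, a minimal `.dockerignore` (the entries are illustrative):

```
.git
node_modules
*.log
```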
---
## Executing each step
```bash
Step 2/3 : RUN apt-get update
---> Running in e01b294dbffd
(...output of the RUN command...)
Removing intermediate container e01b294dbffd
---> eb8d9b561b37
```
* A container (`e01b294dbffd`) is created from the base image.
* The `RUN` command is executed in this container.
* The container is committed into an image (`eb8d9b561b37`).
* The build container (`e01b294dbffd`) is removed.
* The output of this step will be the base image for the next one.
---
## With BuildKit
## Build output
.small[
```bash
@@ -231,7 +133,7 @@ Removing intermediate container e01b294dbffd
---
## Understanding BuildKit output
## Understanding builder output
- BuildKit transfers the Dockerfile and the *build context*
@@ -249,9 +151,9 @@ Removing intermediate container e01b294dbffd
class: extra-details
## BuildKit plain output
## Builder plain output
- When running BuildKit in e.g. a CI pipeline, its output will be different
- When running builds in e.g. a CI pipeline, its output will be different
- We can see the same output format by using `--progress=plain`
@@ -360,6 +262,8 @@ class: extra-details
---
class: extra-details
## Shell syntax vs exec syntax
Dockerfile commands that execute something can have two forms:
@@ -374,6 +278,8 @@ We are going to change our Dockerfile to see how it affects the resulting image.
---
class: extra-details
## Using exec syntax in our Dockerfile
Let's change our Dockerfile as follows!
@@ -392,6 +298,8 @@ $ docker build -t figlet .
---
class: extra-details
## History with exec syntax
Compare the new history:
@@ -413,6 +321,8 @@ IMAGE CREATED CREATED BY SIZE
---
class: extra-details
## When to use exec syntax and shell syntax
* shell syntax:
@@ -431,6 +341,8 @@ IMAGE CREATED CREATED BY SIZE
---
class: extra-details
## Pro-tip: the `exec` shell built-in
POSIX shells have a built-in command named `exec`.
@@ -447,6 +359,8 @@ From a user perspective:
---
class: extra-details
## Example using `exec`
```dockerfile

View File

@@ -42,7 +42,7 @@ Our new Dockerfile will look like this:
```dockerfile
FROM ubuntu
RUN apt-get update
RUN ["apt-get", "install", "figlet"]
RUN apt-get install figlet
CMD figlet -f script hello
```
@@ -96,6 +96,8 @@ root@7ac86a641116:/#
---
class: extra-details
## Using `ENTRYPOINT`
We want to be able to specify a different message on the command line,
@@ -117,6 +119,8 @@ We will use the `ENTRYPOINT` verb in Dockerfile.
---
class: extra-details
## Adding `ENTRYPOINT` to our Dockerfile
Our new Dockerfile will look like this:
@@ -124,7 +128,7 @@ Our new Dockerfile will look like this:
```dockerfile
FROM ubuntu
RUN apt-get update
RUN ["apt-get", "install", "figlet"]
RUN apt-get install figlet
ENTRYPOINT ["figlet", "-f", "script"]
```
@@ -138,6 +142,8 @@ Why did we use JSON syntax for our `ENTRYPOINT`?
---
class: extra-details
## Implications of JSON vs string syntax
* When CMD or ENTRYPOINT use string syntax, they get wrapped in `sh -c`.
@@ -158,6 +164,8 @@ sh -c "figlet -f script" salut
---
class: extra-details
## Build and test our image
Let's build it:
@@ -182,6 +190,8 @@ $ docker run figlet salut
---
class: extra-details
## Using `CMD` and `ENTRYPOINT` together
What if we want to define a default message for our container?
@@ -196,6 +206,8 @@ Then we will use `ENTRYPOINT` and `CMD` together.
---
class: extra-details
## `CMD` and `ENTRYPOINT` together
Our new Dockerfile will look like this:
@@ -203,7 +215,7 @@ Our new Dockerfile will look like this:
```dockerfile
FROM ubuntu
RUN apt-get update
RUN ["apt-get", "install", "figlet"]
RUN apt-get install figlet
ENTRYPOINT ["figlet", "-f", "script"]
CMD ["hello world"]
```
@@ -217,6 +229,8 @@ CMD ["hello world"]
---
class: extra-details
## Build and test our image
Let's build it:
@@ -241,6 +255,8 @@ $ docker run myfiglet
---
class: extra-details
## Overriding the image default parameters
Now let's pass extra arguments to the image.
@@ -258,6 +274,8 @@ We overrode `CMD` but still used `ENTRYPOINT`.
---
class: extra-details
## Overriding `ENTRYPOINT`
What if we want to run a shell in our container?
@@ -274,6 +292,8 @@ root@6027e44e2955:/#
---
class: extra-details
## `CMD` and `ENTRYPOINT` recap
- `docker run myimage` executes `ENTRYPOINT` + `CMD`
@@ -297,6 +317,8 @@ root@6027e44e2955:/#
---
class: extra-details
## When to use `ENTRYPOINT` vs `CMD`
`ENTRYPOINT` is great for "containerized binaries".

View File

@@ -157,8 +157,6 @@ Here is the file used in the demo:
.small[
```yaml
version: "3"
services:
www:
build: www
@@ -278,6 +276,8 @@ For the full list, check: https://docs.docker.com/compose/compose-file/
---
class: extra-details
## Running multiple copies of a stack
- Copy the stack in two different directories, e.g. `front` and `frontcopy`
@@ -353,6 +353,8 @@ Use `docker compose down -v` to remove everything including volumes.
---
class: extra-details
## Special handling of volumes
- When an image gets updated, Compose automatically creates a new container
@@ -371,6 +373,8 @@ Use `docker compose down -v` to remove everything including volumes.
---
class: extra-details
## Gotchas with volumes
- Unfortunately, Docker volumes don't have labels or metadata
@@ -391,6 +395,8 @@ Use `docker compose down -v` to remove everything including volumes.
---
class: extra-details
## Managing volumes explicitly
Option 1: *named volumes*
@@ -412,6 +418,8 @@ volumes:
---
class: extra-details
## Managing volumes explicitly
Option 2: *relative paths*
@@ -431,6 +439,8 @@ services:
---
class: extra-details
## Managing complex stacks
- Compose provides multiple features to manage complex stacks
@@ -453,6 +463,8 @@ services:
---
class: extra-details
## Dependencies
- A service can have a `depends_on` section
@@ -465,28 +477,6 @@ services:
⚠️ It doesn't make a service "wait" for another one to be up!
---
class: extra-details
## A bit of history and trivia
- Compose was initially named "Fig"
- Compose is one of the few components of Docker written in Python
(almost everything else is in Go)
- In 2020, Docker introduced "Compose CLI":
- `docker compose` command to deploy Compose stacks to some clouds
- in Go instead of Python
- progressively getting feature parity with `docker-compose`
- also provides numerous improvements (e.g. leverages BuildKit by default)
???
:EN:- Using compose to describe an environment

View File

@@ -235,6 +235,8 @@ communication across hosts, and publishing/load balancing for inbound traffic.
---
class: extra-details
## Finding the container's IP address
We can use the `docker inspect` command to find the IP address of the
@@ -253,6 +255,8 @@ $ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <yourContainerID>
---
class: extra-details
## Pinging our container
Let's try to ping our container *from another container.*

View File

@@ -1,140 +1,44 @@
# Docker 30,000ft overview
# Docker? Containers?
In this lesson, we will learn about:
- **Docker:** open-source platform that runs containers.
* Why containers (non-technical elevator pitch)
- **Container:** unit of software/deployment that contains everything needed for the code to run.
* Why containers (technical elevator pitch)
- Docker containers can run (almost) everywhere.
* How Docker helps us to build, ship, and run
- Containers typically use less resources than VMs.
* The history of containers
- Can be easily copied and deployed. Make development faster.
We won't actually run Docker or containers in this chapter (yet!).
- Isolated from each other and from the host.
Don't worry, we will get to that fast enough!
---
## Elevator pitch
## Container vs VM
### (for your manager, your boss...)
**Virtual Machine**
---
- Heavier and slower to boot.
- Includes a full guest OS.
- Better for running multiple OS types on one host.
## OK... Why the buzz around containers?
**Container**
- Lightweight and fast to start.
- Shares the host OS kernel.
- Uses fewer resources (CPU, RAM, storage).
- Ideal for microservices and scalable applications.
* The software industry has changed
* Before:
* monolithic applications
* long development cycles
* single environment
* slowly scaling up
* Now:
* decoupled services
* fast, iterative improvements
* multiple environments
* quickly scaling out
---
## Deployment becomes very complex
* Many different stacks:
* languages
* frameworks
* databases
* Many different targets:
* individual development environments
* pre-production, QA, staging...
* production: on prem, cloud, hybrid
---
class: pic
## The deployment problem
![problem](images/shipping-software-problem.png)
![Container vs VM](images/cont_vs_vm.png)
---
class: pic
## The matrix from hell
![matrix](images/shipping-matrix-from-hell.png)
---
class: pic
## The parallel with the shipping industry
![history](images/shipping-industry-problem.png)
---
class: pic
## Intermodal shipping containers
![shipping](images/shipping-industry-solution.png)
---
class: pic
## A new shipping ecosystem
![shipeco](images/shipping-indsutry-results.png)
---
class: pic
## A shipping container system for applications
![shipapp](images/shipping-software-solution.png)
---
class: pic
## Eliminate the matrix from hell
![elimatrix](images/shipping-matrix-solved.png)
---
## Results
* [Dev-to-prod reduced from 9 months to 15 minutes (ING)](
https://gallant-turing-d0d520.netlify.com/docker-case-studies/CS_ING_01.25.2015_1.pdf)
* [Continuous integration job time reduced by more than 60% (BBC)](
https://gallant-turing-d0d520.netlify.com/docker-case-studies/CS_BBCNews_01.25.2015_1.pdf)
* [Deploy 100 times a day instead of once a week (GILT)](
https://gallant-turing-d0d520.netlify.com/docker-case-studies/CS_Gilt_Groupe_03.18.2015_0.pdf)
* [70% infrastructure consolidation (MetLife)](
https://www.youtube.com/watch?v=Bwt3xigvlj0)
* etc.
---
## Elevator pitch
### (for your fellow devs and ops)
---
## Escape dependency hell
## Basic workflow
1. Write installation instructions into an `INSTALL.txt` file
@@ -162,7 +66,7 @@ Never again "worked in dev - ops problem now!"
```bash
git clone ...
docker-compose up
docker compose up
```
With this, you can create development, integration, QA environments in minutes!
@@ -209,109 +113,6 @@ Images contain all the libraries, dependencies, etc. needed to run the app.
class: extra-details
## Decouple "plumbing" from application logic
1. Write your code to connect to named services ("db", "api"...)
2. Use Compose to start your stack
3. Docker will set up a per-container DNS resolver for those names
4. You can now scale, add load balancers, replication ... without changing your code
Note: this is not covered in this intro level workshop!
---
class: extra-details
## What did Docker bring to the table?
### Docker before/after
---
class: extra-details
## Formats and APIs, before Docker
* No standardized exchange format.
<br/>(No, a rootfs tarball is *not* a format!)
* Containers are hard to use for developers.
<br/>(Where's the equivalent of `docker run debian`?)
* As a result, they are *hidden* from the end users.
* No re-usable components, APIs, tools.
<br/>(At best: VM abstractions, e.g. libvirt.)
Analogy:
* Shipping containers are not just steel boxes.
* They are steel boxes that are a standard size, with the same hooks and holes.
---
class: extra-details
## Formats and APIs, after Docker
* Standardize the container format, because containers were not portable.
* Make containers easy to use for developers.
* Emphasis on re-usable components, APIs, ecosystem of standard tools.
* Improvement over ad-hoc, in-house, specific tools.
---
class: extra-details
## Shipping, before Docker
* Ship packages: deb, rpm, gem, jar, homebrew...
* Dependency hell.
* "Works on my machine."
* Base deployment often done from scratch (debootstrap...) and unreliable.
---
class: extra-details
## Shipping, after Docker
* Ship container images with all their dependencies.
* Images are bigger, but they are broken down into layers.
* Only ship layers that have changed.
* Save disk, network, memory usage.
---
class: extra-details
## Example
Layers:
* CentOS
* JRE
* Tomcat
* Dependencies
* Application JAR
* Configuration
---
class: extra-details
## Devs vs Ops, before Docker
* Drop a tarball (or a commit hash) with instructions.
@@ -348,3 +149,71 @@ class: extra-details
* Devs can be empowered to make releases themselves
more easily.
---
## Pets vs. Cattle
* In the "pets vs. cattle" metaphor, there are two kinds of servers.
* Pets:
* have distinctive names and unique configurations
* when they have an outage, we do everything we can to fix them
* Cattle:
* have generic names (e.g. with numbers) and generic configuration
* configuration is enforced by configuration management, golden images ...
* when they have an outage, we can replace them immediately with a new server
* What's the connection with Docker and containers?
---
## Local development environments
* When we use local VMs (with e.g. VirtualBox or VMware), our workflow looks like this:
* create VM from base template (Ubuntu, CentOS...)
* install packages, set up environment
* work on project
* when done, shut down VM
* next time we need to work on project, restart VM as we left it
* if we need to tweak the environment, we do it live
* Over time, the VM configuration evolves, diverges.
* We don't have a clean, reliable, deterministic way to provision that environment.
---
## Local development with Docker
* With Docker, the workflow looks like this:
* create container image with our dev environment
* run container with that image
* work on project
* when done, shut down container
* next time we need to work on project, start a new container
* if we need to tweak the environment, we create a new image
* We have a clear definition of our environment, and can share it reliably with others (see the sketch below).
* Let's see in the next chapters how to bake a custom image with `figlet`!
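In practice, that workflow could look like this (image name and paths are illustrative):

```bash
docker build -t devenv .                                   # bake the dev environment image
docker run -it --rm -v "$(pwd)":/src -w /src devenv bash   # work on the project inside a container
```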

View File

@@ -6,8 +6,6 @@ We will see how to:
* Leverage the build cache so that builds can be faster.
* Embed unit testing in the build process.
---
## Reducing the number of layers
@@ -76,6 +74,8 @@ CMD ["python", "app.py"]
---
class: extra-details
## Be careful with `chown`, `chmod`, `mv`
* Layers cannot store efficiently changes in permissions or ownership.
@@ -117,6 +117,8 @@ CMD ["python", "app.py"]
---
class: extra-details
## Use `COPY --chown`
* The Dockerfile instruction `COPY` can take a `--chown` parameter.
@@ -140,6 +142,8 @@ CMD ["python", "app.py"]
---
class: extra-details
## Set correct permissions locally
* Instead of using `chmod`, set the right file permissions locally.
@@ -148,29 +152,6 @@ CMD ["python", "app.py"]
---
## Embedding unit tests in the build process
```dockerfile
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
RUN <install test dependencies>
COPY <test data sets and fixtures>
RUN <unit tests>
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
CMD, EXPOSE ...
```
* The build fails as soon as an instruction fails
* If `RUN <unit tests>` fails, the build doesn't produce an image
* If it succeeds, it produces a clean image (without test libraries and data); see the sketch below
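The practical consequence, as a quick sketch (`myapp` is an illustrative tag):

```bash
# If the RUN <unit tests> step fails, the whole build aborts,
# and no new image gets tagged
docker build -t myapp . || echo "tests failed; myapp was not updated"
```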
---
# Dockerfile examples
There are a number of tips, tricks, and techniques that we can use in Dockerfiles.
@@ -286,6 +267,8 @@ ENV PIP=9.0.3 \
---
class: extra-details
## Entrypoints and wrappers
It is very common to define a custom entrypoint.
@@ -303,6 +286,8 @@ That entrypoint will generally be a script, performing any combination of:
---
class: extra-details
## A typical entrypoint script
```dockerfile
@@ -357,67 +342,6 @@ RUN ...
---
## Overrides
In theory, development and production images should be the same.
In practice, we often need to enable specific behaviors in development (e.g. debug statements).
One way to reconcile both needs is to use Compose to enable these behaviors.
Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.
---
## Production image
This Dockerfile builds an image leveraging gunicorn:
```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```
(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))
---
## Development Compose file
This Compose file uses the same image, but with a few overrides for development:
- the Flask development server is used (overriding `CMD`),
- the `DEBUG` environment variable is set,
- a volume is used to provide a faster local development workflow.
.small[
```yaml
services:
www:
build: www
ports:
- 8000:5000
user: nobody
environment:
DEBUG: 1
command: python counter.py
volumes:
- ./www:/src
```
]
(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))
---
## How to know which best practices are better?
- The main goal of containers is to make our lives easier.

View File

@@ -147,6 +147,9 @@ Now, try to:
* run `figlet`. Does that work?
???
On macOS: brew list | wc -l
---
class: self-paced
@@ -225,73 +228,3 @@ bash: figlet: command not found
*This puts a strong emphasis on automation and repeatability. Let's see why ...*
---
## Pets vs. Cattle
* In the "pets vs. cattle" metaphor, there are two kinds of servers.
* Pets:
* have distinctive names and unique configurations
* when they have an outage, we do everything we can to fix them
* Cattle:
* have generic names (e.g. with numbers) and generic configuration
* configuration is enforced by configuration management, golden images ...
* when they have an outage, we can replace them immediately with a new server
* What's the connection with Docker and containers?
---
## Local development environments
* When we use local VMs (with e.g. VirtualBox or VMware), our workflow looks like this:
* create VM from base template (Ubuntu, CentOS...)
* install packages, set up environment
* work on project
* when done, shut down VM
* next time we need to work on project, restart VM as we left it
* if we need to tweak the environment, we do it live
* Over time, the VM configuration evolves, diverges.
* We don't have a clean, reliable, deterministic way to provision that environment.
---
## Local development with Docker
* With Docker, the workflow looks like this:
* create container image with our dev environment
* run container with that image
* work on project
* when done, shut down container
* next time we need to work on project, start a new container
* if we need to tweak the environment, we create a new image
* We have a clear definition of our environment, and can share it reliably with others.
* Let's see in the next chapters how to bake a custom image with `figlet`!
???
:EN:- Running our first container
:FR:- Lancer nos premiers conteneurs

View File

@@ -115,46 +115,7 @@ If an image is read-only, how do we change it?
* A new image is created by stacking the new layer on top of the old image.
---
## A chicken-and-egg problem
* The only way to create an image is by "freezing" a container.
* The only way to create a container is by instantiating an image.
* Help!
---
## Creating the first images
There is a special empty image called `scratch`.
* It allows you to *build from scratch*.
The `docker import` command loads a tarball into Docker.
* The imported tarball becomes a standalone image.
* That new image has a single layer.
Note: you will probably never have to do this yourself.
---
## Creating other images
`docker commit`
* Saves all the changes made to a container into a new layer.
* Creates a new image (effectively a copy of the container).
`docker build` **(used 99% of the time)**
* Performs a repeatable build sequence.
* This is the preferred method!
We will explain both methods in a moment.
* This can be automated by writing a `Dockerfile` and then running `docker build`.
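Side by side, with placeholder names:

```bash
docker commit <yourContainerId> myimage   # freeze a container's changes by hand
docker build -t myimage .                 # repeatable build driven by a Dockerfile
```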
---
@@ -162,15 +123,15 @@ We will explain both methods in a moment.
There are three namespaces:
* Official images
* Official images on the Docker Hub
e.g. `ubuntu`, `busybox` ...
* User (and organizations) images
* User (and organizations) images on the Docker Hub
e.g. `jpetazzo/clock`
* Self-hosted images
* Images on registries that are NOT the Docker Hub
e.g. `registry.example.com:5000/my-private/image`
@@ -283,30 +244,6 @@ jpetazzo/clock latest 12068b93616f 12 months ago 2.433 MB
---
## Searching for images
We cannot list *all* images on a remote registry, but
we can search for a specific keyword:
```bash
$ docker search marathon
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
mesosphere/marathon A cluster-wide init and co... 105 [OK]
mesoscloud/marathon Marathon 31 [OK]
mesosphere/marathon-lb Script to update haproxy b... 22 [OK]
tobilg/mongodb-marathon A Docker image to start a ... 4 [OK]
```
* "Stars" indicate the popularity of the image.
* "Official" images are those in the root namespace.
* "Automated" images are built automatically by the Docker Hub.
<br/>(This means that their build recipe is always available.)
---
## Downloading images
There are two ways to download images.

View File

@@ -314,52 +314,6 @@ class: extra-details
---
## Trash your servers and burn your code
*(This is the title of a
[2013 blog post][immutable-deployments]
by Chad Fowler, where he explains the concept of immutable infrastructure.)*
[immutable-deployments]: https://web.archive.org/web/20160305073617/http://chadfowler.com/blog/2013/06/23/immutable-deployments/
--
* Let's majorly mess up our container.
(Remove files or whatever.)
* Now, how can we fix this?
--
* Our old container (with the blue version of the code) is still running.
* See on which port it is exposed:
```bash
docker ps
```
* Point our browser to it to confirm that it still works fine.
---
## Immutable infrastructure in a nutshell
* Instead of *updating* a server, we deploy a new one.
* This might be challenging with classical servers, but it's trivial with containers.
* In fact, with Docker, the most logical workflow is to build a new image and run it.
* If something goes wrong with the new image, we can always restart the old one.
* We can even keep both versions running side by side.
If this pattern sounds interesting, you might want to read about *blue/green deployment*
and *canary deployments*.
---
## Recap of the development workflow
1. Write a Dockerfile to build an image containing our development environment.
@@ -387,35 +341,6 @@ and *canary deployments*.
class: extra-details
## Debugging inside the container
Docker has a command called `docker exec`.
It allows users to run a new process in a container which is already running.
If you sometimes find yourself wishing you could SSH into a container, you can use `docker exec` instead.
You can get a shell prompt inside an existing container this way, or run an arbitrary process for automation.
---
class: extra-details
## `docker exec` example
```bash
$ # You can run ruby commands in the area the app is running and more!
$ docker exec -it <yourContainerId> bash
root@5ca27cf74c2e:/opt/namer# irb
irb(main):001:0> [0, 1, 2, 3, 4].map {|x| x ** 2}.compact
=> [0, 1, 4, 9, 16]
irb(main):002:0> exit
```
---
class: extra-details
## Stopping the container
Now that we're done let's stop our container.

View File

@@ -0,0 +1,140 @@
class: title
# More Dockerfile Instructions
![construction](images/title-advanced-dockerfiles.jpg)
---
## `Dockerfile` usage summary
* `Dockerfile` instructions are executed in order.
* Each instruction creates a new layer in the image.
* Docker maintains a cache with the layers of previous builds.
* When the instruction and the files it uses haven't changed since the previous build,
the builder re-uses the cached layer instead of executing the instruction again.
* The `FROM` instruction MUST be the first non-comment instruction.
* Lines starting with `#` are treated as comments.
* Some instructions (like `CMD` or `ENTRYPOINT`) update a piece of metadata.
(As a result, each call to these instructions makes the previous one useless.)
---
## The `EXPOSE` instruction
The `EXPOSE` instruction tells Docker what ports are to be published
in this image.
```dockerfile
EXPOSE 8080
EXPOSE 80 443
EXPOSE 53/tcp 53/udp
```
* All ports are private by default.
* Declaring a port with `EXPOSE` is not enough to make it public.
* The `Dockerfile` doesn't control on which port a service gets exposed.
---
## Exposing ports
* When you `docker run -p <port> ...`, that port becomes public.
(Even if it was not declared with `EXPOSE`.)
* When you `docker run -P ...` (without port number), all ports
declared with `EXPOSE` become public.
A *public port* is reachable from other containers and from outside the host.
A *private port* is not reachable from outside.
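For instance (using `nginx` purely as an illustration):

```bash
docker run -d -p 8080:80 nginx   # publish container port 80 on host port 8080
docker run -d -P nginx           # publish all EXPOSEd ports on random host ports
docker ps                        # the PORTS column shows the resulting mappings
```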
---
## `VOLUME`
The `VOLUME` instruction tells Docker that a specific directory
should be a *volume*.
```dockerfile
VOLUME /var/lib/mysql
```
Filesystem access in volumes bypasses the copy-on-write layer,
offering native performance to I/O done in those directories.
Volumes can be attached to multiple containers, allowing data to be
"ported" over from one container to another, e.g. to
upgrade a database to a newer version.
It is possible to start a container in "read-only" mode.
The container filesystem will be made read-only, but volumes
can still have read/write access if necessary.
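A quick sketch of read-only mode combined with a writable volume:

```bash
# The root filesystem is read-only, but the volume stays writable
docker run --rm --read-only -v mydata:/data alpine \
  sh -c 'touch /data/ok && echo volume writable; touch /tmp/ko || echo rootfs read-only'
```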
---
## The `WORKDIR` instruction
The `WORKDIR` instruction sets the working directory for subsequent
instructions.
It also affects `CMD` and `ENTRYPOINT`, since it sets the working
directory used when starting the container.
```dockerfile
WORKDIR /src
```
You can specify `WORKDIR` again to change the working directory for
further operations.
---
## The `ENV` instruction
The `ENV` instruction specifies environment variables that should be
set in any container launched from the image.
```dockerfile
ENV WEBAPP_PORT 8080
```
This will result in the following environment variable being set in any
container created from this image:
```bash
WEBAPP_PORT=8080
```
You can also specify environment variables when you use `docker run`.
```bash
$ docker run -e WEBAPP_PORT=8000 -e WEBAPP_HOST=www.example.com ...
```
---
class: extra-details
## The `USER` instruction
The `USER` instruction sets the user name or UID to use when running
the image.
It can be used multiple times to change back to root or to another user.
???
:EN:- Advanced Dockerfile syntax
:FR:- Dockerfile niveau expert

View File

@@ -48,161 +48,6 @@ Therefore, `RUN rm` does not reduce the size of the image or free up disk space.
---
## Removing unnecessary files
Various techniques are available to obtain smaller images:
- collapsing layers,
- adding binaries that are built outside of the Dockerfile,
- squashing the final image,
- multi-stage builds.
Let's review them quickly.
---
## Collapsing layers
You will frequently see Dockerfiles like this:
```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
```
Or the (more readable) variant:
```dockerfile
FROM ubuntu
RUN apt-get update \
&& apt-get install xxx \
&& ... \
&& apt-get remove xxx \
&& ...
```
This `RUN` command gives us a single layer.
The files that are added, then removed in the same layer, do not grow the layer size.
---
## Collapsing layers: pros and cons
Pros:
- works on all versions of Docker
- doesn't require extra tools
Cons:
- not very readable
- some unnecessary files might still remain if the cleanup is not thorough
- that layer is expensive (slow to build)
---
## Building binaries outside of the Dockerfile
This results in a Dockerfile looking like this:
```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```
Of course, this implies that the file `xxx` exists in the build context.
That file has to exist before you can run `docker build`.
For instance, it can:
- exist in the code repository,
- be created by another tool (script, Makefile...),
- be created by another container image and extracted from the image.
See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).
---
## Building binaries outside: pros and cons
Pros:
- final image can be very small
Cons:
- requires an extra build tool
- we're back in dependency hell and "works on my machine"
Cons, if binary is added to code repository:
- breaks portability across different platforms
- grows repository size a lot if the binary is updated frequently
---
## Squashing the final image
The idea is to transform the final image into a single-layer image.
This can be done in (at least) two ways.
- Activate experimental features and squash the final image:
```bash
docker image build --squash ...
```
- Export/import the final image.
```bash
docker build -t temp-image .
docker run --entrypoint true --name temp-container temp-image
docker export temp-container | docker import - final-image
docker rm temp-container
docker rmi temp-image
```
---
## Squashing the image: pros and cons
Pros:
- single-layer images are smaller and faster to download
- removed files no longer take up storage and network resources
Cons:
- we still need to actively remove unnecessary files
- squash operation can take a lot of time (on big images)
- squash operation does not benefit from cache
<br/>
(even if we change just a tiny file, the whole image needs to be re-squashed)
---
## Multi-stage builds
Multi-stage builds allow us to have multiple *stages*.
Each stage is a separate image, and can copy files from previous stages.
We're going to see how they work in more detail.
---
# Multi-stage builds
* At any point in our `Dockerfile`, we can add a new `FROM` line.
@@ -315,7 +160,7 @@ class: extra-details
(instead of using multiple Dockerfiles, which could go out of sync)
--
---
class: extra-details

View File

@@ -15,11 +15,21 @@ class: title
- If you are doing or re-doing this course on your own, you can:
- install Docker locally (as explained in the chapter "Installing Docker")
- install [Docker Desktop][docker-desktop] or [Podman Desktop][podman-desktop]
<br/>(available for Linux, Mac, Windows; provides a nice GUI)
- install Docker on e.g. a cloud VM
- install [Docker CE][docker-ce] or [Podman][podman]
<br/>(for intermediate/advanced users who prefer the CLI)
- use https://www.play-with-docker.com/ to instantly get a training environment
- try platforms like [Play With Docker][pwd] or [KodeKloud]
<br/>(if you can't/won't install anything locally)
[docker-desktop]: https://docs.docker.com/desktop/
[podman-desktop]: https://podman-desktop.io/downloads
[docker-ce]: https://docs.docker.com/engine/install/
[podman]: https://podman.io/docs/installation#installing-on-linux
[pwd]: https://labs.play-with-docker.com/
[KodeKloud]: https://kodekloud.com/free-labs/docker/
---
@@ -39,42 +49,6 @@ individual Docker VM.*
---
## What *is* Docker?
- "Installing Docker" really means "Installing the Docker Engine and CLI".
- The Docker Engine is a daemon (a service running in the background).
- This daemon manages containers, the same way that a hypervisor manages VMs.
- We interact with the Docker Engine by using the Docker CLI.
- The Docker CLI and the Docker Engine communicate through an API.
- There are many other programs and client libraries which use that API.
---
## Why don't we run Docker locally?
- We are going to download container images and distribution packages.
- This could put a bit of stress on the local WiFi and slow us down.
- Instead, we use a remote VM that has good connectivity.
- In some rare cases, installing Docker locally is challenging:
- no administrator/root access (computer managed by strict corp IT)
- 32-bit CPU or OS
- old OS version (e.g. CentOS 6, OSX pre-Yosemite, Windows 7)
- It's better to spend time learning containers than fiddling with the installer!
---
## Connecting to your Virtual Machine
You need an SSH client.
@@ -93,23 +67,6 @@ $ ssh <login>@<ip-address>
* MobaXterm (https://mobaxterm.mobatek.net/)
---
class: in-person
## `tailhist`
The shell history of the instructor is available online in real time.
Note the IP address of the instructor's virtual machine (A.B.C.D).
Open http://A.B.C.D:1088 in your browser and you should see the history.
The history is updated in real time (using a WebSocket connection).
It should be green when the WebSocket is connected.
If it turns red, reloading the page should fix it.
---
@@ -144,10 +101,47 @@ Server:
If this doesn't work, raise your hand so that an instructor can assist you!
???
---
:EN:Container concepts
:FR:Premier contact avec les conteneurs
## Installing Docker
- "Installing Docker" really means "Installing the **Docker Engine** and **CLI**".
- The Docker Engine is a **daemon** (a service running in the background); it manages containers, the same way that a hypervisor manages VMs.
- We interact with the Docker Engine by using the Docker CLI.
- The Docker CLI and the Docker Engine communicate through an API.
- There are many other programs and client libraries which use that API.
---
class: pic
![Docker Architecture](images/docker-engine-architecture.svg)
---
## Can we run Docker locally?
- If you already have Docker (or Podman) installed, you can use it!
- The VMs can be convenient if:
- you can't/won't install Docker or Podman on your machine,
- your local internet connection is slow.
- We're going to download many container images and distribution packages.
- If the class takes place in a venue with slow WiFi, this can slow us down.
- The remote VMs have good connectivity and downloads will be fast there.
(Initially, we provided VMs to make sure that nobody would waste time
with installers, or because they didn't have the right permissions
on their machine, etc.)
:EN:- What's a container engine?
:FR:- Qu'est-ce qu'un *container engine* ?

slides/docker.yml Normal file
View File

@@ -0,0 +1,62 @@
title: |
Docker Fundamentals
& Optimizations
<div style="display:flex; justify-content:center; align-items:center; gap:70px;">
<img src="https://images.seeklogo.com/logo-png/44/1/ecosia-logo-png_seeklogo-440094.png" width="250">
<img src="https://gist.githubusercontent.com/jpetazzo/dcecd53a111f1fbe65c29ee15b9143e4/raw/fe18ea3aa66d1dc16964d4223bf6cf8f6a51d40a/empowered.png" width="200">
<img src="https://gist.githubusercontent.com/jpetazzo/dcecd53a111f1fbe65c29ee15b9143e4/raw/fe18ea3aa66d1dc16964d4223bf6cf8f6a51d40a/pyladies.png" width="300">
</div>
#chat: "[Mattermost](https://training.enix.io/mattermost)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2025-11-docker.container.training/
slidenumberprefix: "workshop.container.training &mdash; login = firstname@ &mdash; password = where we are :) &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- shared/contact.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- # MORNING
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/Docker_Overview.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Initial_Images.md
#- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Dockerfile_Tips.md
- containers/More_Dockerfile_Instructions.md
- containers/Multi_Stage_Builds.md
- containers/Exercise_Dockerfile_Basic.md
- containers/Exercise_Dockerfile_Multistage.md
- # AFTERNOON
- containers/Container_Networking_Basics.md
- containers/Local_Development_Workflow.md
#- containers/Container_Network_Model.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
#- containers/Start_And_Attach.md
#- containers/Naming_And_Inspecting.md
#- containers/Labels.md
#- containers/Getting_Inside.md
#- containers/Publishing_To_Docker_Hub.md
#- containers/Buildkit.md
- shared/thankyou.md

View File

@@ -48,7 +48,7 @@ k8s@shpod:~$ flux bootstrap github \
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---
@@ -74,7 +74,7 @@ We don't have such kind of things here.😕
- We could bind our `ingress-controller` to a `NodePort`.
`ingress-nginx` install manifests propose it here:
</br>https://github.com/kubernetes/ingress-nginx/deploy/static/provider/baremetal
</br>https://github.com/kubernetes/ingress-nginx/tree/release-1.14/deploy/static/provider/baremetal
- In the 📄file `./clusters/METAL/ingress-nginx/sync.yaml`,
</br>change the `Kustomization` value `spec.path: ./deploy/static/provider/baremetal`
@@ -83,7 +83,7 @@ We don't have such kind of things here.😕
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---

View File

@@ -167,13 +167,13 @@ k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---
class: pic
![rocky config files](images/M6-R01-config-files.png)
![rocky config files](images/flux/R01-config-files.png)
---
@@ -300,7 +300,7 @@ class: extra-details
💡 This managed cluster comes with custom `StorageClasses` leveraging Cloud _IaaS_ capabilities (i.e. block devices)
![Flux configuration waterfall](images/M6-persistentvolumes.png)
![Flux configuration waterfall](images/flux/persistentvolumes.png)
- a default `StorageClass` is applied if none is specified (like here)
- for **_🏭PROD_** purposes, the ops team might enforce a more performant `StorageClass`
@@ -310,7 +310,7 @@ class: extra-details
class: pic
![Flux configuration waterfall](images/M6-flux-config-dependencies.png)
![Flux configuration waterfall](images/flux/flux-config-dependencies.png)
---

View File

@@ -9,7 +9,7 @@ but let's see if we can succeed by just adding manifests in our `Flux` configura
class: pic
![Flux configuration waterfall](images/M6-flux-config-dependencies.png)
![Flux configuration waterfall](images/flux/flux-config-dependencies.png)
---
@@ -89,7 +89,7 @@ k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---
@@ -132,7 +132,7 @@ k8s@shpod:~$ flux reconcile source git movy-app -n movy-test
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---
@@ -170,13 +170,13 @@ And push the modifications…
class: pic
![MOVY app has an incorrect dataset](images/M6-incorrect-dataset-in-MOVY-app.png)
![MOVY app has an incorrect dataset](images/flux/incorrect-dataset-in-MOVY-app.png)
---
class: pic
![ROCKY app has an incorrect dataset](images/M6-incorrect-dataset-in-ROCKY-app.png)
![ROCKY app has an incorrect dataset](images/flux/incorrect-dataset-in-ROCKY-app.png)
---
@@ -212,7 +212,7 @@ Please, refer to the [`Network policies` chapter in the High Five M4 module](./4
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---

View File

@@ -39,7 +39,7 @@ the **_⚙OPS_** team exclusively operates its clusters by updating a code ba
_GitOps_ and `Flux` enable the **_⚙OPS_** team to rely on the _first-class citizen pattern_ in Kubernetes' world through these steps:
- describe the **desired target state**
- and let the **automated convergence** happens
- and let the **automated convergence** happen
---
@@ -78,10 +78,8 @@ Prerequisites are:
- `Flux` _CLI_ needs a `Github` personal access token (_PAT_)
- to create and/or access the `Github` repository
- to give permissions to existing teams in our `Github` organization
- The PAT needs _CRUD_ permissions on our `Github` organization
- The _PAT_ needs _CRUD_ permissions on our `Github` organization
- repositories
- admin:public_key
- users
- As the **_⚙OPS_** team, let's create a `Github` personal access token…
@@ -89,7 +87,7 @@ Prerequisites are:
class: pic
![Generating a Github personal access token](images/M6-github-add-token.jpg)
![Generating a Github personal access token](images/flux/github-add-token.jpg)
---
@@ -118,6 +116,32 @@ k8s@shpod:~$ flux bootstrap github \
class: extra-details
### Creating a personal dedicated `Github` repo
You don't need to rely on a Github organization: any `Github` personal repository is OK.
.lab[
- let's replace the `GITHUB_TOKEN` value by our _Personal Access Token_
- and the `GITHUB_REPO` value by our specific repository name
```bash
k8s@shpod:~$ export GITHUB_TOKEN="my-token" && \
export GITHUB_USER="lpiot" && \
export GITHUB_REPO="fleet-config-using-flux-XXXXX"
k8s@shpod:~$ flux bootstrap github \
--owner=${GITHUB_USER} \
--personal \
--repository=${GITHUB_REPO} \
--path=clusters/CLOUDY
```
]
---
class: extra-details
Here is the result
```bash
@@ -169,7 +193,7 @@ Here is the result
- `Flux` sets up permissions that allow teams within our organization to **access** the `Github` repository as maintainers
- Teams need to exist before `Flux` applies this configuration
![Teams in Github](images/M6-github-teams.png)
![Teams in Github](images/flux/github-teams.png)
---
@@ -183,6 +207,22 @@ Here is the result
---
### The PAT is not needed anymore!
- During the install process, `Flux` creates an `ssh` key pair so that it is able to contribute to the `Github` repository.
```bash
► generating source secret
✔ public key: ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFqaT8B8SezU92qoE+bhnv9xONv9oIGuy7yVAznAZfyoWWEVkgP2dYDye5lMbgl6MorG/yjfkyo75ETieAE49/m9D2xvL4esnSx9zsOLdnfS9W99XSfFpC2n6soL+Exodw==
✔ configured deploy key "flux-system-main-flux-system-./clusters/CLOUDY" for "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX"
► applying source secret "flux-system/flux-system"
✔ reconciled source secret
```
- You can now delete the formerly created _Personal Access Token_: `Flux` won't use it anymore.
---
### 📂 Flux config files
`Flux` has been successfully installed onto our **_☁CLOUDY_** Kubernetes cluster!
@@ -192,13 +232,13 @@ Its configuration is managed through a _Gitops_ workflow sourced directly from o
Let's review our `Flux` configuration files we've created and pushed into the `Github` repository…
… as well as the corresponding components running in our Kubernetes cluster
![Flux config files](images/M6-flux-config-files.png)
![Flux config files](images/flux/flux-config-files.png)
---
class: pic
<!-- FIXME: wrong schema -->
![Flux architecture](images/M6-flux-controllers.png)
![Flux architecture](images/flux/flux-controllers.png)
---

View File

@@ -90,13 +90,13 @@ k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---
class: pic
![Ingress-nginx provisioned an IaaS load-balancer in Scaleway Cloud services](images/M6-ingress-nginx-scaleway-lb.png)
![Ingress-nginx provisioned an IaaS load-balancer in Scaleway Cloud services](images/flux/ingress-nginx-scaleway-lb.png)
---
@@ -141,7 +141,7 @@ k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---
@@ -172,7 +172,7 @@ k8s@shpod:~$ \
class: pic
![Rocky application screenshot](images/M6-rocky-app-screenshot.png)
![Rocky application screenshot](images/flux/rocky-app-screenshot.png)
---

View File

@@ -13,7 +13,7 @@ Please, refer to the [`Setting up Kubernetes` chapter in the High Five M4 module
---
## Creating an `Helm` source in Flux for OpenEBS Helm chart
## Creating an `Helm` source in Flux for Kyverno Helm chart
.lab[
@@ -107,7 +107,7 @@ flux create kustomization
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---

View File

@@ -58,7 +58,7 @@ k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization dashboards
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---
@@ -98,7 +98,7 @@ k8s@shpod:~$ flux create secret git flux-system \
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---
@@ -127,7 +127,7 @@ k8s@shpod:~$ k get secret kube-prometheus-stack-grafana -n monitoring \
class: pic
![Grafana dashboard screenshot](images/M6-grafana-dashboard.png)
![Grafana dashboard screenshot](images/flux/grafana-dashboard.png)
---

View File

@@ -76,7 +76,7 @@ And here we go!
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---

View File

@@ -34,7 +34,7 @@ Several _tenants_ are created
class: pic
![Multi-tenant clusters](images/M6-cluster-multi-tenants.png)
![Multi-tenant clusters](images/flux/cluster-multi-tenants.png)
---
@@ -105,7 +105,7 @@ Let's review the `fleet-config-using-flux-XXXXX/clusters/CLOUDY/tenants.yaml` fi
class: pic
![Running Mario](images/M6-running-Mario.gif)
![Running Mario](images/running-mario.gif)
---

(12 binary image files were added in this diff; their contents are not shown. Sizes range from 34 KiB to 570 KiB. Some files were omitted because too many files changed in this diff.)