Compare commits


27 Commits

Author SHA1 Message Date
Jérôme Petazzoni
964a325fcd Add chapter about codespaces and dev clusters 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
b9bf015c50 🔗 Add link to FluxCD Kustomization 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
21b8ac6085 Update Kustomize content 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
38562fe788 🛠️ Improve AWS EKS support
- detect which EKS version to use
  (instead of hard-coding it in the TF config)
- do not issue a CSR on EKS
  (because EKS is broken and doesn't support it)
- automatically install a StorageClass on EKS
  (because the EBS CSI addon doesn't install one by default)
- put EKS clusters in the default VPC
  (instead of creating one VPC per cluster,
  since there is a default limit of 5 VPCs per region)
2025-10-28 21:45:45 +01:00
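The first bullet of that commit (detect the EKS version instead of hard-coding it) can be sketched roughly like this. This is not the actual Terraform config from the commit; the `aws eks describe-cluster-versions` call and the `1.31` fallback are assumptions for illustration.

```shell
# Sketch: pick the EKS Kubernetes version at deploy time instead of
# hard-coding it. CLI subcommand and fallback version are assumptions.
EKS_VERSION=$(aws eks describe-cluster-versions \
  --query 'clusterVersions[0].clusterVersion' --output text 2>/dev/null)
EKS_VERSION="${EKS_VERSION:-1.31}"  # placeholder fallback if the AWS CLI is unavailable
echo "Using EKS version ${EKS_VERSION}"
```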
Jérôme Petazzoni
6ab0aa11ae Improve googlecloud support
- add support for provisioning VMs on googlecloud
- refactor the way we define the project used by Terraform
  (we'll now use the GOOGLE_PROJECT environment variable,
  and if it's not set, we'll set it automatically by getting
  the default project from the gcloud CLI)
2025-10-28 21:45:45 +01:00
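The GOOGLE_PROJECT fallback described in that commit message amounts to something like the following sketch (the actual wrapper in the repo may differ):

```shell
# Use GOOGLE_PROJECT if it is already set; otherwise fall back to the
# default project configured in the gcloud CLI, as the commit describes.
if [ -z "${GOOGLE_PROJECT}" ]; then
  GOOGLE_PROJECT=$(gcloud config get-value project 2>/dev/null)
fi
export GOOGLE_PROJECT
echo "Terraform will use project: ${GOOGLE_PROJECT}"
```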
Jérôme Petazzoni
62237556b1 Add a couple of slides about sidecars 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
f7da1ae656 🛜 Add details about Traffic Distribution
KEP-4444 hit GA in 1.33, so I've updated the relevant slide
2025-10-28 21:45:45 +01:00
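For reference, the feature that slide covers (KEP-4444, the `spec.trafficDistribution` field on Services, GA in Kubernetes 1.33) looks like this; the Service name and selector are illustrative:

```shell
# Write a Service manifest using the trafficDistribution field (GA in 1.33).
# "PreferClose" asks the dataplane to favor topologically closer endpoints.
cat > service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: rng
spec:
  selector:
    app: rng
  ports:
  - port: 80
  trafficDistribution: PreferClose
EOF
```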
Jérôme Petazzoni
adbd10506a Add chapter on Gateway API 2025-10-28 21:45:45 +01:00
Ludovic Piot
487968dee5 🆕 Add Flux (M5B/M6) content 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
093f31c25f ✏️ Mutating CEL is coming 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
eaec3e6148 Add content about Extended Resources and Dynamic Resource Allocation 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
84a1124461 📃 Update information about swap 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
dd747ac726 🔗 Fix a couple of Helm URLs 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
4a8725fde4 ♻️ Update vcluster Helm chart; improve konk script
It is now possible to have multiple konk clusters in parallel,
thanks to the KONKTAG environment variable.
2025-10-28 21:45:45 +01:00
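The KONKTAG mechanism mentioned above can be pictured like this (variable and resource names are hypothetical; the real konk script differs):

```shell
# Derive per-cluster resource names from KONKTAG so that several konk
# clusters can run in parallel without colliding. Names are hypothetical.
KONKTAG="${KONKTAG:-default}"
CLUSTER_NAME="konk-${KONKTAG}"
NAMESPACE="konk-${KONKTAG}"
echo "Cluster: ${CLUSTER_NAME} (namespace: ${NAMESPACE})"
```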
Jérôme Petazzoni
55f9b2e21d 🔗 Add a bunch of links to CNPG and ZFS talks in concept slides 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
4f84fab763 Add mention of kl and gonzo 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
b88218a9a1 Compile some cloud native security recs 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
01e46bfa37 🔧 Mention container engine levels 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
86efafeb85 Merge container security content 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
4d17cab888 ✏️ Tweak container from scratch exercise 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
c7a2b7a12d Add BuildKit exercise 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
305dbe24ed ♻️ Update notes about overlay support 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
6cffc0e2e7 Add image deep dive + exercise 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
2b8298c0c2 Add logistics file for Enix 2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
26e1309218 Add container from scratch exercise; update cgroup to v2 2025-10-28 21:45:45 +01:00
emanulato
38714b4e2b fix PuTTY link in handson.md
The link to PuTTY was pointing to putty.org. This domain has no relation to the PuTTY project! Instead, the website run by the actual PuTTY team can be found at https://putty.software (see https://hachyderm.io/@simontatham/115025974777386803).
2025-10-28 21:45:45 +01:00
Jérôme Petazzoni
2b0fae3c94 🚧 WIP Ardan Live Oct 2025 2025-06-30 22:34:18 +02:00
93 changed files with 1712 additions and 1556 deletions

View File

@@ -9,7 +9,7 @@
"forwardPorts": [],
//"postCreateCommand": "... install extra packages...",
"postStartCommand": "dind.sh ; kind.sh",
"postStartCommand": "dind.sh",
// This lets us use "docker-outside-docker".
// Unfortunately, minikube, kind, etc. don't work very well that way;

1
.gitignore vendored
View File

@@ -17,7 +17,6 @@ slides/autopilot/state.yaml
slides/index.html
slides/past.html
slides/slides.zip
slides/_academy_*
node_modules
### macOS ###

View File

@@ -1,24 +1,26 @@
services:
version: "2"
services:
rng:
build: rng
ports:
- "8001:80"
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
- "8002:80"
webui:
build: webui
ports:
- "8000:80"
- "8000:80"
volumes:
- "./webui/files/:/files/"
- "./webui/files/:/files/"
redis:
image: redis
worker:
build: worker

View File

@@ -1,8 +1,7 @@
FROM ruby:alpine
WORKDIR /app
RUN apk add --update build-base curl
RUN gem install sinatra --version '~> 3'
RUN gem install thin
COPY hasher.rb .
CMD ["ruby", "hasher.rb", "-o", "::"]
RUN gem install thin --version '~> 1'
ADD hasher.rb /
CMD ["ruby", "hasher.rb"]
EXPOSE 80

View File

@@ -2,6 +2,7 @@ require 'digest'
require 'sinatra'
require 'socket'
set :bind, '0.0.0.0'
set :port, 80
post '/' do

View File

@@ -1,7 +1,5 @@
FROM python:alpine
WORKDIR /app
RUN pip install Flask
COPY rng.py .
ENV FLASK_APP=rng FLASK_RUN_HOST=:: FLASK_RUN_PORT=80
CMD ["flask", "run"]
COPY rng.py /
CMD ["python", "rng.py"]
EXPOSE 80

View File

@@ -28,5 +28,5 @@ def rng(how_many_bytes):
if __name__ == "__main__":
app.run(port=80)
app.run(host="0.0.0.0", port=80, threaded=False)

View File

@@ -1,8 +1,7 @@
FROM node:23-alpine
WORKDIR /app
RUN npm install express
RUN npm install morgan
RUN npm install redis@5
COPY . .
FROM node:4-slim
RUN npm install express@4
RUN npm install redis@3
COPY files/ /files/
COPY webui.js /
CMD ["node", "webui.js"]
EXPOSE 80

View File

@@ -1,34 +1,26 @@
import express from 'express';
import morgan from 'morgan';
import { createClient } from 'redis';
var client = await createClient({
url: "redis://redis",
socket: {
family: 0
}
})
.on("error", function (err) {
console.error("Redis error", err);
})
.connect();
var express = require('express');
var app = express();
var redis = require('redis');
app.use(morgan('common'));
var client = redis.createClient(6379, 'redis');
client.on("error", function (err) {
console.error("Redis error", err);
});
app.get('/', function (req, res) {
res.redirect('/index.html');
});
app.get('/json', async(req, res) => {
var coins = await client.hLen('wallet');
var hashes = await client.get('hashes');
var now = Date.now() / 1000;
res.json({
coins: coins,
hashes: hashes,
now: now
app.get('/json', function (req, res) {
client.hlen('wallet', function (err, coins) {
client.get('hashes', function (err, hashes) {
var now = Date.now() / 1000;
res.json( {
coins: coins,
hashes: hashes,
now: now
});
});
});
});

View File

@@ -1,6 +1,5 @@
FROM python:alpine
WORKDIR /app
RUN pip install redis
RUN pip install requests
COPY worker.py .
COPY worker.py /
CMD ["python", "worker.py"]

View File

@@ -1,9 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
data:
use-forwarded-headers: true
compute-full-forwarded-for: true
use-proxy-protocol: true

View File

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: ingress-nginx

View File

@@ -1,12 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- M6-ingress-nginx-components.yaml
- sync.yaml
patches:
- path: M6-ingress-nginx-cm-patch.yaml
target:
kind: ConfigMap
- path: M6-ingress-nginx-svc-patch.yaml
target:
kind: Service

View File

@@ -1,8 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
annotations:
service.beta.kubernetes.io/scw-loadbalancer-proxy-protocol-v2: true
service.beta.kubernetes.io/scw-loadbalancer-use-hostname: true

View File

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: kyverno

View File

@@ -1,72 +0,0 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: flux-multi-tenancy
spec:
validationFailureAction: enforce
rules:
- name: serviceAccountName
exclude:
resources:
namespaces:
- flux-system
match:
resources:
kinds:
- Kustomization
- HelmRelease
validate:
message: ".spec.serviceAccountName is required"
pattern:
spec:
serviceAccountName: "?*"
- name: kustomizationSourceRefNamespace
exclude:
resources:
namespaces:
- flux-system
- ingress-nginx
- kyverno
- monitoring
- openebs
match:
resources:
kinds:
- Kustomization
preconditions:
any:
- key: "{{request.object.spec.sourceRef.namespace}}"
operator: NotEquals
value: ""
validate:
message: "spec.sourceRef.namespace must be the same as metadata.namespace"
deny:
conditions:
- key: "{{request.object.spec.sourceRef.namespace}}"
operator: NotEquals
value: "{{request.object.metadata.namespace}}"
- name: helmReleaseSourceRefNamespace
exclude:
resources:
namespaces:
- flux-system
- ingress-nginx
- kyverno
- monitoring
- openebs
match:
resources:
kinds:
- HelmRelease
preconditions:
any:
- key: "{{request.object.spec.chart.spec.sourceRef.namespace}}"
operator: NotEquals
value: ""
validate:
message: "spec.chart.spec.sourceRef.namespace must be the same as metadata.namespace"
deny:
conditions:
- key: "{{request.object.spec.chart.spec.sourceRef.namespace}}"
operator: NotEquals
value: "{{request.object.metadata.namespace}}"

View File

@@ -1,29 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: monitoring
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana
namespace: monitoring
spec:
ingressClassName: nginx
rules:
- host: grafana.test.metal.mybestdomain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kube-prometheus-stack-grafana
port:
number: 80

View File

@@ -1,35 +0,0 @@
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-from-other-namespaces
spec:
podSelector: {}
ingress:
- from:
- podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-webui
spec:
podSelector:
matchLabels:
app: web
ingress:
- from: []
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-db
spec:
podSelector:
matchLabels:
app: db
ingress:
- from:
- podSelector:
matchLabels:
app: web

View File

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: flux-system
app.kubernetes.io/part-of: flux
app.kubernetes.io/version: v2.5.1
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/warn-version: latest
name: openebs

View File

@@ -1,12 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openebs
resources:
- M6-openebs-components.yaml
- sync.yaml
configMapGenerator:
- name: openebs-values
files:
- values.yaml=M6-openebs-values.yaml
configurations:
- M6-openebs-kustomizeconfig.yaml

View File

@@ -1,6 +0,0 @@
nameReference:
- kind: ConfigMap
version: v1
fieldSpecs:
- path: spec/valuesFrom/name
kind: HelmRelease

View File

@@ -1,15 +0,0 @@
# helm install openebs --namespace openebs openebs/openebs
# --set engines.replicated.mayastor.enabled=false
# --set lvm-localpv.lvmNode.kubeletDir=/var/lib/k0s/kubelet/
# --create-namespace
engines:
replicated:
mayastor:
enabled: false
# Needed for k0s install since kubelet install is slightly divergent from vanilla install >:-(
lvm-localpv:
lvmNode:
kubeletDir: /var/lib/k0s/kubelet/
localprovisioner:
hostpathClass:
isDefaultClass: true

View File

@@ -1,38 +0,0 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
namespace: rocky-test
name: rocky-full-access
rules:
- apiGroups: ["", extensions, apps]
resources: [deployments, replicasets, pods, services, ingresses, statefulsets]
verbs: [get, list, watch, create, update, patch, delete] # You can also use [*]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: rocky-pv-access
rules:
- apiGroups: [""]
resources: [persistentvolumes]
verbs: [get, list, watch, create, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
toolkit.fluxcd.io/tenant: rocky
name: rocky-reconciler2
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: rocky-pv-access
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: gotk:rocky-test:reconciler
- kind: ServiceAccount
name: rocky
namespace: rocky-test

View File

@@ -1,19 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rocky
namespace: rocky-test
spec:
ingressClassName: nginx
rules:
- host: rocky.test.mybestdomain.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 80

View File

@@ -1,8 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base/rocky
patches:
- path: M6-rocky-test-patch.yaml
target:
kind: Kustomization

View File

@@ -1,7 +0,0 @@
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
name: rocky
namespace: rocky-test
spec:
path: ./k8s/plain

View File

@@ -36,12 +36,8 @@ _populate_zone() {
ZONE_ID=$(_get_zone_id $1)
shift
for IPADDR in $*; do
case "$IPADDR" in
*.*) TYPE=A;;
*:*) TYPE=AAAA;;
esac
cloudflare zones/$ZONE_ID/dns_records "name=*" "type=$TYPE" "content=$IPADDR"
cloudflare zones/$ZONE_ID/dns_records "name=\@" "type=$TYPE" "content=$IPADDR"
cloudflare zones/$ZONE_ID/dns_records "name=*" "type=A" "content=$IPADDR"
cloudflare zones/$ZONE_ID/dns_records "name=\@" "type=A" "content=$IPADDR"
done
}

View File

@@ -56,7 +56,7 @@ _cmd_codeserver() {
ARCH=${ARCHITECTURE-amd64}
CODESERVER_VERSION=4.96.4
CODESERVER_URL=\$GITHUB/coder/code-server/releases/download/v${CODESERVER_VERSION}/code-server-${CODESERVER_VERSION}-linux-${ARCH}.tar.gz
CODESERVER_URL=https://github.com/coder/code-server/releases/download/v${CODESERVER_VERSION}/code-server-${CODESERVER_VERSION}-linux-${ARCH}.tar.gz
pssh "
set -e
i_am_first_node || exit 0
@@ -230,7 +230,7 @@ _cmd_create() {
;;
*) die "Invalid mode: $MODE (supported modes: mk8s, pssh)." ;;
esac
if ! [ -f "$SETTINGS" ]; then
die "Settings file ($SETTINGS) not found."
fi
@@ -375,8 +375,8 @@ _cmd_clusterize() {
pssh -I < tags/$TAG/clusters.tsv "
grep -w \$PSSH_HOST | tr '\t' '\n' > /tmp/cluster"
pssh "
echo \$PSSH_HOST > /tmp/ip_address
head -n 1 /tmp/cluster | sudo tee /etc/ip_address_of_first_node
echo \$PSSH_HOST > /tmp/ipv4
head -n 1 /tmp/cluster | sudo tee /etc/ipv4_of_first_node
echo ${CLUSTERPREFIX}1 | sudo tee /etc/name_of_first_node
echo HOSTIP=\$PSSH_HOST | sudo tee -a /etc/environment
NODEINDEX=\$((\$PSSH_NODENUM%$CLUSTERSIZE+1))
@@ -459,7 +459,7 @@ _cmd_docker() {
set -e
### Install docker-compose.
sudo curl -fsSL -o /usr/local/bin/docker-compose \
\$GITHUB/docker/compose/releases/download/$COMPOSE_VERSION/docker-compose-$COMPOSE_PLATFORM
https://github.com/docker/compose/releases/download/$COMPOSE_VERSION/docker-compose-$COMPOSE_PLATFORM
sudo chmod +x /usr/local/bin/docker-compose
docker-compose version
@@ -467,7 +467,7 @@ _cmd_docker() {
##VERSION## https://github.com/docker/machine/releases
MACHINE_VERSION=v0.16.2
sudo curl -fsSL -o /usr/local/bin/docker-machine \
\$GITHUB/docker/machine/releases/download/\$MACHINE_VERSION/docker-machine-\$(uname -s)-\$(uname -m)
https://github.com/docker/machine/releases/download/\$MACHINE_VERSION/docker-machine-\$(uname -s)-\$(uname -m)
sudo chmod +x /usr/local/bin/docker-machine
docker-machine version
"
@@ -500,10 +500,10 @@ _cmd_kubebins() {
set -e
cd /usr/local/bin
if ! [ -x etcd ]; then
curl -L \$GITHUB/etcd-io/etcd/releases/download/$ETCD_VERSION/etcd-$ETCD_VERSION-linux-$ARCH.tar.gz \
curl -L https://github.com/etcd-io/etcd/releases/download/$ETCD_VERSION/etcd-$ETCD_VERSION-linux-$ARCH.tar.gz \
| sudo tar --strip-components=1 --wildcards -zx '*/etcd' '*/etcdctl'
fi
if ! [ -x kube-apiserver ]; then
if ! [ -x hyperkube ]; then
##VERSION##
curl -L https://dl.k8s.io/$K8SBIN_VERSION/kubernetes-server-linux-$ARCH.tar.gz \
| sudo tar --strip-components=3 -zx \
@@ -512,7 +512,7 @@ _cmd_kubebins() {
sudo mkdir -p /opt/cni/bin
cd /opt/cni/bin
if ! [ -x bridge ]; then
curl -L \$GITHUB/containernetworking/plugins/releases/download/$CNI_VERSION/cni-plugins-linux-$ARCH-$CNI_VERSION.tgz \
curl -L https://github.com/containernetworking/plugins/releases/download/$CNI_VERSION/cni-plugins-linux-$ARCH-$CNI_VERSION.tgz \
| sudo tar -zx
fi
"
@@ -562,18 +562,6 @@ EOF"
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl &&
echo 'alias k=kubecolor' | sudo tee /etc/bash_completion.d/k &&
echo 'complete -F __start_kubectl k' | sudo tee -a /etc/bash_completion.d/k"
# Install helm early
# (so that we can use it to install e.g. Cilium etc.)
ARCH=${ARCHITECTURE-amd64}
HELM_VERSION=3.19.1
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl -fsSL https://get.helm.sh/helm-v${HELM_VERSION}-linux-${ARCH}.tar.gz |
sudo tar --strip-components=1 --wildcards -zx -C /usr/local/bin '*/helm'
helm completion bash | sudo tee /etc/bash_completion.d/helm
helm version
fi"
}
_cmd kubeadm "Setup kubernetes clusters with kubeadm"
@@ -597,18 +585,6 @@ _cmd_kubeadm() {
# Initialize kube control plane
pssh --timeout 200 "
IPV6=\$(ip -json a | jq -r '.[].addr_info[] | select(.scope==\"global\" and .family==\"inet6\") | .local' | head -n1)
if [ \"\$IPV6\" ]; then
ADVERTISE=\"advertiseAddress: \$IPV6\"
SERVICE_SUBNET=\"serviceSubnet: fdff::/112\"
touch /tmp/install-cilium-ipv6-only
touch /tmp/ipv6-only
else
ADVERTISE=
SERVICE_SUBNET=
touch /tmp/install-weave
fi
echo IPV6=\$IPV6 ADVERTISE=\$ADVERTISE
if i_am_first_node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token &&
cat >/tmp/kubeadm-config.yaml <<EOF
@@ -616,12 +592,9 @@ kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: \$(cat /tmp/token)
localAPIEndpoint:
\$ADVERTISE
nodeRegistration:
ignorePreflightErrors:
- NumCPU
- FileContent--proc-sys-net-ipv6-conf-default-forwarding
$IGNORE_SYSTEMVERIFICATION
$IGNORE_SWAP
$IGNORE_IPTABLES
@@ -648,9 +621,7 @@ kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
apiServer:
certSANs:
- \$(cat /tmp/ip_address)
networking:
\$SERVICE_SUBNET
- \$(cat /tmp/ipv4)
$CLUSTER_CONFIGURATION_KUBERNETESVERSION
EOF
sudo kubeadm init --config=/tmp/kubeadm-config.yaml
@@ -669,20 +640,9 @@ EOF
# Install weave as the pod network
pssh "
if i_am_first_node; then
if [ -f /tmp/install-weave ]; then
curl -fsSL \$GITHUB/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml |
sed s,weaveworks/weave,quay.io/rackspace/weave, |
kubectl apply -f-
fi
if [ -f /tmp/install-cilium-ipv6-only ]; then
helm upgrade -i cilium cilium --repo https://helm.cilium.io/ \
--namespace kube-system \
--set cni.chainingMode=portmap \
--set ipv6.enabled=true \
--set ipv4.enabled=false \
--set underlayProtocol=ipv6 \
--version 1.18.3
fi
curl -fsSL https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml |
sed s,weaveworks/weave,quay.io/rackspace/weave, |
kubectl apply -f-
fi"
# FIXME this is a gross hack to add the deployment key to our SSH agent,
@@ -705,16 +665,13 @@ EOF
fi
# Install metrics server
pssh -I <../k8s/metrics-server.yaml "
pssh "
if i_am_first_node; then
kubectl apply -f-
fi"
# It would be nice to be able to use that helm chart for metrics-server.
# Unfortunately, the charts themselves are on github.com and we want to
# avoid that due to their lack of IPv6 support.
kubectl apply -f https://raw.githubusercontent.com/jpetazzo/container.training/master/k8s/metrics-server.yaml
#helm upgrade --install metrics-server \
# --repo https://kubernetes-sigs.github.io/metrics-server/ metrics-server \
# --namespace kube-system --set args={--kubelet-insecure-tls}
fi"
}
_cmd kubetools "Install a bunch of CLI tools for Kubernetes"
@@ -741,7 +698,7 @@ _cmd_kubetools() {
# Install ArgoCD CLI
##VERSION## https://github.com/argoproj/argo-cd/releases/latest
URL=\$GITHUB/argoproj/argo-cd/releases/latest/download/argocd-linux-${ARCH}
URL=https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-${ARCH}
pssh "
if [ ! -x /usr/local/bin/argocd ]; then
sudo curl -o /usr/local/bin/argocd -fsSL $URL
@@ -754,7 +711,7 @@ _cmd_kubetools() {
##VERSION## https://github.com/fluxcd/flux2/releases
FLUX_VERSION=2.3.0
FILENAME=flux_${FLUX_VERSION}_linux_${ARCH}
URL=\$GITHUB/fluxcd/flux2/releases/download/v$FLUX_VERSION/$FILENAME.tar.gz
URL=https://github.com/fluxcd/flux2/releases/download/v$FLUX_VERSION/$FILENAME.tar.gz
pssh "
if [ ! -x /usr/local/bin/flux ]; then
curl -fsSL $URL |
@@ -769,7 +726,7 @@ _cmd_kubetools() {
set -e
if ! [ -x /usr/local/bin/kctx ]; then
cd /tmp
git clone \$GITHUB/ahmetb/kubectx
git clone https://github.com/ahmetb/kubectx
sudo cp kubectx/kubectx /usr/local/bin/kctx
sudo cp kubectx/kubens /usr/local/bin/kns
sudo cp kubectx/completion/*.bash /etc/bash_completion.d
@@ -780,7 +737,7 @@ _cmd_kubetools() {
set -e
if ! [ -d /opt/kube-ps1 ]; then
cd /tmp
git clone \$GITHUB/jonmosco/kube-ps1
git clone https://github.com/jonmosco/kube-ps1
sudo mv kube-ps1 /opt/kube-ps1
sudo -u $USER_LOGIN sed -i s/docker-prompt/kube_ps1/ /home/$USER_LOGIN/.bashrc &&
sudo -u $USER_LOGIN tee -a /home/$USER_LOGIN/.bashrc <<EOF
@@ -797,7 +754,7 @@ EOF
##VERSION## https://github.com/stern/stern/releases
STERN_VERSION=1.29.0
FILENAME=stern_${STERN_VERSION}_linux_${ARCH}
URL=\$GITHUB/stern/stern/releases/download/v$STERN_VERSION/$FILENAME.tar.gz
URL=https://github.com/stern/stern/releases/download/v$STERN_VERSION/$FILENAME.tar.gz
pssh "
if [ ! -x /usr/local/bin/stern ]; then
curl -fsSL $URL |
@@ -808,11 +765,9 @@ EOF
fi"
# Install helm
HELM_VERSION=3.19.1
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl -fsSL https://get.helm.sh/helm-v${HELM_VERSION}-linux-${ARCH}.tar.gz |
sudo tar --strip-components=1 --wildcards -zx -C /usr/local/bin '*/helm'
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 | sudo bash &&
helm completion bash | sudo tee /etc/bash_completion.d/helm
helm version
fi"
@@ -820,7 +775,7 @@ EOF
# Install kustomize
##VERSION## https://github.com/kubernetes-sigs/kustomize/releases
KUSTOMIZE_VERSION=v5.4.1
URL=\$GITHUB/kubernetes-sigs/kustomize/releases/download/kustomize/${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_${ARCH}.tar.gz
URL=https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_${ARCH}.tar.gz
pssh "
if [ ! -x /usr/local/bin/kustomize ]; then
curl -fsSL $URL |
@@ -837,17 +792,15 @@ EOF
pssh "
if [ ! -x /usr/local/bin/ship ]; then
##VERSION##
curl -fsSL \$GITHUB/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_$ARCH.tar.gz |
curl -fsSL https://github.com/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_$ARCH.tar.gz |
sudo tar -C /usr/local/bin -zx ship
fi"
# Install the AWS IAM authenticator
AWSIAMAUTH_VERSION=0.7.8
URL=\$GITHUB/kubernetes-sigs/aws-iam-authenticator/releases/download/v${AWSIAMAUTH_VERSION}/aws-iam-authenticator_${AWSIAMAUTH_VERSION}_linux_${ARCH}
pssh "
if [ ! -x /usr/local/bin/aws-iam-authenticator ]; then
##VERSION##
sudo curl -fsSLo /usr/local/bin/aws-iam-authenticator $URL
sudo curl -fsSLo /usr/local/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/$ARCH/aws-iam-authenticator
sudo chmod +x /usr/local/bin/aws-iam-authenticator
aws-iam-authenticator version
fi"
@@ -857,17 +810,17 @@ EOF
if [ ! -x /usr/local/bin/jless ]; then
##VERSION##
sudo apt-get install -y libxcb-render0 libxcb-shape0 libxcb-xfixes0
wget \$GITHUB/PaulJuliusMartinez/jless/releases/download/v0.9.0/jless-v0.9.0-x86_64-unknown-linux-gnu.zip
wget https://github.com/PaulJuliusMartinez/jless/releases/download/v0.9.0/jless-v0.9.0-x86_64-unknown-linux-gnu.zip
unzip jless-v0.9.0-x86_64-unknown-linux-gnu
sudo mv jless /usr/local/bin
fi"
# Install the krew package manager
pssh "
if [ ! -d /home/$USER_LOGIN/.krew ] && [ ! -f /tmp/ipv6-only ]; then
if [ ! -d /home/$USER_LOGIN/.krew ]; then
cd /tmp &&
KREW=krew-linux_$ARCH
curl -fsSL \$GITHUB/kubernetes-sigs/krew/releases/latest/download/\$KREW.tar.gz |
curl -fsSL https://github.com/kubernetes-sigs/krew/releases/latest/download/\$KREW.tar.gz |
tar -zxf- &&
sudo -u $USER_LOGIN -H ./\$KREW install krew &&
echo export PATH=/home/$USER_LOGIN/.krew/bin:\\\$PATH | sudo -u $USER_LOGIN tee -a /home/$USER_LOGIN/.bashrc
@@ -875,7 +828,7 @@ EOF
# Install kubecolor
KUBECOLOR_VERSION=0.4.0
URL=\$GITHUB/kubecolor/kubecolor/releases/download/v${KUBECOLOR_VERSION}/kubecolor_${KUBECOLOR_VERSION}_linux_${ARCH}.tar.gz
URL=https://github.com/kubecolor/kubecolor/releases/download/v${KUBECOLOR_VERSION}/kubecolor_${KUBECOLOR_VERSION}_linux_${ARCH}.tar.gz
pssh "
if [ ! -x /usr/local/bin/kubecolor ]; then
##VERSION##
@@ -887,7 +840,7 @@ EOF
pssh "
if [ ! -x /usr/local/bin/k9s ]; then
FILENAME=k9s_Linux_$ARCH.tar.gz &&
curl -fsSL \$GITHUB/derailed/k9s/releases/latest/download/\$FILENAME |
curl -fsSL https://github.com/derailed/k9s/releases/latest/download/\$FILENAME |
sudo tar -C /usr/local/bin -zx k9s
k9s version
fi"
@@ -896,7 +849,7 @@ EOF
pssh "
if [ ! -x /usr/local/bin/popeye ]; then
FILENAME=popeye_Linux_$ARCH.tar.gz &&
curl -fsSL \$GITHUB/derailed/popeye/releases/latest/download/\$FILENAME |
curl -fsSL https://github.com/derailed/popeye/releases/latest/download/\$FILENAME |
sudo tar -C /usr/local/bin -zx popeye
popeye version
fi"
@@ -909,7 +862,7 @@ EOF
if [ ! -x /usr/local/bin/tilt ]; then
TILT_VERSION=0.33.13
FILENAME=tilt.\$TILT_VERSION.linux.$TILT_ARCH.tar.gz
curl -fsSL \$GITHUB/tilt-dev/tilt/releases/download/v\$TILT_VERSION/\$FILENAME |
curl -fsSL https://github.com/tilt-dev/tilt/releases/download/v\$TILT_VERSION/\$FILENAME |
sudo tar -C /usr/local/bin -zx tilt
tilt completion bash | sudo tee /etc/bash_completion.d/tilt
tilt version
@@ -927,7 +880,7 @@ EOF
# Install Kompose
pssh "
if [ ! -x /usr/local/bin/kompose ]; then
curl -fsSLo kompose \$GITHUB/kubernetes/kompose/releases/latest/download/kompose-linux-$ARCH &&
curl -fsSLo kompose https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-$ARCH &&
sudo install kompose /usr/local/bin
kompose completion bash | sudo tee /etc/bash_completion.d/kompose
kompose version
@@ -936,7 +889,7 @@ EOF
# Install KinD
pssh "
if [ ! -x /usr/local/bin/kind ]; then
curl -fsSLo kind \$GITHUB/kubernetes-sigs/kind/releases/latest/download/kind-linux-$ARCH &&
curl -fsSLo kind https://github.com/kubernetes-sigs/kind/releases/latest/download/kind-linux-$ARCH &&
sudo install kind /usr/local/bin
kind completion bash | sudo tee /etc/bash_completion.d/kind
kind version
@@ -945,7 +898,7 @@ EOF
# Install YTT
pssh "
if [ ! -x /usr/local/bin/ytt ]; then
curl -fsSLo ytt \$GITHUB/vmware-tanzu/carvel-ytt/releases/latest/download/ytt-linux-$ARCH &&
curl -fsSLo ytt https://github.com/vmware-tanzu/carvel-ytt/releases/latest/download/ytt-linux-$ARCH &&
sudo install ytt /usr/local/bin
ytt completion bash | sudo tee /etc/bash_completion.d/ytt
ytt version
@@ -953,7 +906,7 @@ EOF
##VERSION## https://github.com/bitnami-labs/sealed-secrets/releases
KUBESEAL_VERSION=0.26.2
URL=\$GITHUB/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION}/kubeseal-${KUBESEAL_VERSION}-linux-${ARCH}.tar.gz
URL=https://github.com/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION}/kubeseal-${KUBESEAL_VERSION}-linux-${ARCH}.tar.gz
#case $ARCH in
#amd64) FILENAME=kubeseal-linux-amd64;;
#arm64) FILENAME=kubeseal-arm64;;
@@ -970,7 +923,7 @@ EOF
VELERO_VERSION=1.13.2
pssh "
if [ ! -x /usr/local/bin/velero ]; then
curl -fsSL \$GITHUB/vmware-tanzu/velero/releases/download/v$VELERO_VERSION/velero-v$VELERO_VERSION-linux-$ARCH.tar.gz |
curl -fsSL https://github.com/vmware-tanzu/velero/releases/download/v$VELERO_VERSION/velero-v$VELERO_VERSION-linux-$ARCH.tar.gz |
sudo tar --strip-components=1 --wildcards -zx -C /usr/local/bin '*/velero'
velero completion bash | sudo tee /etc/bash_completion.d/velero
velero version --client-only
@@ -980,7 +933,7 @@ EOF
KUBENT_VERSION=0.7.2
pssh "
if [ ! -x /usr/local/bin/kubent ]; then
curl -fsSL \$GITHUB/doitintl/kube-no-trouble/releases/download/${KUBENT_VERSION}/kubent-${KUBENT_VERSION}-linux-$ARCH.tar.gz |
curl -fsSL https://github.com/doitintl/kube-no-trouble/releases/download/${KUBENT_VERSION}/kubent-${KUBENT_VERSION}-linux-$ARCH.tar.gz |
sudo tar -zxvf- -C /usr/local/bin kubent
kubent --version
fi"
@@ -988,7 +941,7 @@ EOF
# Ngrok. Note that unfortunately, this is the x86_64 binary.
# We might have to rethink how to handle this for multi-arch environments.
pssh "
if [ ! -x /usr/local/bin/ngrok ] && [ ! -f /tmp/ipv6-only ]; then
if [ ! -x /usr/local/bin/ngrok ]; then
curl -fsSL https://bin.equinox.io/c/bNyj1mQVY4c/ngrok-v3-stable-linux-amd64.tgz |
sudo tar -zxvf- -C /usr/local/bin ngrok
fi"
@@ -1087,9 +1040,7 @@ _cmd_ping() {
TAG=$1
need_tag
# If we connect to our VMs over IPv6, the IP address is between brackets.
# Unfortunately, fping doesn't support that; so let's strip brackets here.
tr -d [] < tags/$TAG/ips.txt | fping
fping < tags/$TAG/ips.txt
}
_cmd stage2 "Finalize the setup of managed Kubernetes clusters"
@@ -1161,7 +1112,7 @@ _cmd_standardize() {
sudo netfilter-persistent start
fi"
# oracle-cloud-agent upgrades packages in the background.
# oracle-cloud-agent upgrades pacakges in the background.
# This breaks our deployment scripts, because when we invoke apt-get, it complains
# that the lock already exists (symptom: random "Exited with error code 100").
# Workaround: if we detect oracle-cloud-agent, remove it.
@@ -1173,15 +1124,6 @@ _cmd_standardize() {
sudo snap remove oracle-cloud-agent
sudo dpkg --remove --force-remove-reinstreq unified-monitoring-agent
fi"
# Check if a cachttps instance is available.
# (This is used to access GitHub on IPv6-only hosts.)
pssh "
if curl -fsSLI http://cachttps.internal:3131/https://github.com/ >/dev/null; then
echo GITHUB=http://cachttps.internal:3131/https://github.com
else
echo GITHUB=https://github.com
fi | sudo tee -a /etc/environment"
}
_cmd tailhist "Install history viewer on port 1088"
@@ -1197,7 +1139,7 @@ _cmd_tailhist () {
pssh "
set -e
sudo apt-get install unzip -y
wget -c \$GITHUB/joewalnes/websocketd/releases/download/v0.3.0/websocketd-0.3.0-linux_$ARCH.zip
wget -c https://github.com/joewalnes/websocketd/releases/download/v0.3.0/websocketd-0.3.0-linux_$ARCH.zip
unzip -o websocketd-0.3.0-linux_$ARCH.zip websocketd
sudo mv websocketd /usr/local/bin/websocketd
sudo mkdir -p /opt/tailhist
@@ -1463,7 +1405,7 @@ _cmd_webssh() {
sudo apt-get install python3-tornado python3-paramiko -y"
pssh "
cd /opt
[ -d webssh ] || sudo git clone \$GITHUB/jpetazzo/webssh"
[ -d webssh ] || sudo git clone https://github.com/jpetazzo/webssh"
pssh "
for KEYFILE in /etc/ssh/*.pub; do
read a b c < \$KEYFILE; echo localhost \$a \$b
@@ -1548,7 +1490,7 @@ test_vm() {
"whoami" \
"hostname -i" \
"ls -l /usr/local/bin/i_am_first_node" \
"grep . /etc/name_of_first_node /etc/ip_addres_of_first_node" \
"grep . /etc/name_of_first_node /etc/ipv4_of_first_node" \
"cat /etc/hosts" \
"hostnamectl status" \
"docker version | grep Version -B1" \

View File

@@ -63,8 +63,7 @@ locals {
resource "local_file" "ip_addresses" {
content = join("", formatlist("%s\n", [
for key, value in local.ip_addresses :
strcontains(value, ".") ? value : "[${value}]"
for key, value in local.ip_addresses : value
]))
filename = "ips.txt"
file_permission = "0600"

View File

@@ -1,37 +0,0 @@
# If we deploy in IPv6-only environments, and the students don't have IPv6
# connectivity, we want to offer a way to connect anyway. Our solution is
# to generate an HAProxy configuration snippet, that can be copied to a
# DualStack machine which will act as a proxy to our IPv6 machines.
# Note that the snippet still has to be copied, so this is not a 100%
# streamlined solution!
locals {
portmaps = {
for key, value in local.nodes :
(10000 + proxmox_virtual_environment_vm._[key].vm_id) => local.ip_addresses[key]
}
}
resource "local_file" "haproxy" {
filename = "./${var.tag}.cfg"
file_permission = "0644"
content = join("\n", [for port, address in local.portmaps : <<-EOT
frontend f${port}
bind *:${port}
default_backend b${port}
backend b${port}
mode tcp
server s${port} [${address}]:22 maxconn 16
EOT
])
}
resource "local_file" "sshproxy" {
filename = "sshproxy.txt"
file_permission = "0644"
content = join("", [
for cid in range(1, 1 + var.how_many_clusters) :
format("ssh -l k8s -p %d\n", proxmox_virtual_environment_vm._[format("c%03dn%03d", cid, 1)].vm_id + 10000)
])
}
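
The removed Terraform above templates one TCP passthrough frontend/backend pair per VM, mapping port `10000 + vm_id` to the VM's IPv6 SSH endpoint. A minimal Python sketch of the same rendering logic (the port and address values below are hypothetical, not taken from a real deployment):

```python
# Render HAProxy TCP passthrough stanzas, one per (port, IPv6 address) pair,
# mirroring the heredoc template in the removed Terraform resource.
def render_haproxy(portmaps):
    stanzas = []
    for port, address in sorted(portmaps.items()):
        stanzas.append(
            f"frontend f{port}\n"
            f"  bind *:{port}\n"
            f"  default_backend b{port}\n"
            f"backend b{port}\n"
            f"  mode tcp\n"
            f"  server s{port} [{address}]:22 maxconn 16\n"
        )
    return "\n".join(stanzas)

if __name__ == "__main__":
    # Hypothetical VM: Proxmox VM ID 105 -> proxy port 10105
    print(render_haproxy({10105: "2001:db8::105"}))
```

The `mode tcp` lines matter: the proxy forwards raw SSH traffic rather than terminating anything, so host key verification still happens end to end.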


@@ -1,34 +1,12 @@
data "proxmox_virtual_environment_nodes" "_" {}
data "proxmox_virtual_environment_vms" "_" {
filter {
name = "template"
values = [true]
}
}
data "proxmox_virtual_environment_vms" "templates" {
for_each = toset(data.proxmox_virtual_environment_nodes._.names)
tags = ["ubuntu"]
filter {
name = "node_name"
values = [each.value]
}
filter {
name = "template"
values = [true]
}
}
locals {
pve_nodes = data.proxmox_virtual_environment_nodes._.names
pve_node = { for k, v in local.nodes : k => local.pve_nodes[v.node_index % length(local.pve_nodes)] }
pve_template_id = { for k, v in local.nodes : k => data.proxmox_virtual_environment_vms.templates[local.pve_node[k]].vms[0].vm_id }
pve_nodes = data.proxmox_virtual_environment_nodes._.names
}
resource "proxmox_virtual_environment_vm" "_" {
node_name = local.pve_nodes[each.value.node_index % length(local.pve_nodes)]
for_each = local.nodes
node_name = local.pve_node[each.key]
name = each.value.node_name
tags = ["container.training", var.tag]
stop_on_destroy = true
@@ -46,17 +24,9 @@ resource "proxmox_virtual_environment_vm" "_" {
# size = 30
# discard = "on"
#}
### Strategy 1: clone from shared storage
#clone {
# vm_id = var.proxmox_template_vm_id
# node_name = var.proxmox_template_node_name
# full = false
#}
### Strategy 2: clone from local storage
### (requires that the template exists on each node)
clone {
vm_id = local.pve_template_id[each.key]
node_name = local.pve_node[each.key]
vm_id = var.proxmox_template_vm_id
node_name = var.proxmox_template_node_name
full = false
}
agent {
@@ -71,9 +41,7 @@ resource "proxmox_virtual_environment_vm" "_" {
ip_config {
ipv4 {
address = "dhcp"
}
ipv6 {
address = "dhcp"
#gateway =
}
}
}
@@ -104,10 +72,8 @@ resource "proxmox_virtual_environment_vm" "_" {
locals {
ip_addresses = {
for key, value in local.nodes :
key => [for addr in flatten(concat(
proxmox_virtual_environment_vm._[key].ipv6_addresses,
proxmox_virtual_environment_vm._[key].ipv4_addresses,
["ERROR"])) :
addr if addr != "127.0.0.1" && addr != "::1"][0]
key => [for addr in flatten(concat(proxmox_virtual_environment_vm._[key].ipv4_addresses, ["ERROR"])) :
addr if addr != "127.0.0.1"][0]
}
}
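
The removed version of this `locals` block picked the first non-loopback address reported by the guest agent, trying IPv6 before IPv4 and falling back to the sentinel `"ERROR"`. A small Python sketch of that selection logic (the addresses below are illustrative):

```python
# Pick the first reported address that isn't loopback, preferring IPv6,
# then IPv4, falling back to the sentinel "ERROR" -- mirroring the
# removed Terraform expression.
def pick_address(ipv6_addresses, ipv4_addresses):
    candidates = list(ipv6_addresses) + list(ipv4_addresses) + ["ERROR"]
    return next(a for a in candidates if a not in ("::1", "127.0.0.1"))

print(pick_address(["::1", "2001:db8::42"], ["127.0.0.1", "10.0.0.42"]))
# -> 2001:db8::42
```

The `"ERROR"` fallback keeps the Terraform expression total: `[...][0]` would otherwise fail on a VM whose agent reported only loopback addresses.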


@@ -2,7 +2,7 @@ terraform {
required_providers {
proxmox = {
source = "bpg/proxmox"
version = "~> 0.86.0"
version = "~> 0.70.1"
}
}
}


@@ -10,11 +10,8 @@ proxmox_password = "CHANGEME"
# Which storage to use for VM disks. Defaults to "local".
#proxmox_storage = "ceph"
#proxmox_storage = "local-zfs"
# We recently rewrote the Proxmox configurations to automatically
# detect which template to use; so these variables aren't used anymore.
#proxmox_template_node_name = "CHANGEME"
#proxmox_template_vm_id = CHANGEME
proxmox_template_node_name = "CHANGEME"
proxmox_template_vm_id = CHANGEME


@@ -2,7 +2,7 @@
#/ /kube-halfday.yml.html 200!
#/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200!
/ /menu.html 200!
/ /kube.yml.html 200!
# And this makes it possible to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack


@@ -1,31 +0,0 @@
#!/usr/bin/env python
import os
import re
import sys
html_file = sys.argv[1]
output_file_template = "_academy_{}.html"
title_regex = "name: toc-(.*)"
redirects = open("_redirects", "w")
sections = re.split(title_regex, open(html_file).read())[1:]
while sections:
link, markdown = sections[0], sections[1]
sections = sections[2:]
output_file_name = output_file_template.format(link)
with open(output_file_name, "w") as f:
html = open("workshop.html").read()
html = html.replace("@@MARKDOWN@@", markdown)
titles = re.findall("# (.*)", markdown) + [""]
html = html.replace("@@TITLE@@", "{} — Kubernetes Academy".format(titles[0]))
html = html.replace("@@SLIDENUMBERPREFIX@@", "")
html = html.replace("@@EXCLUDE@@", "")
html = html.replace(".nav[", ".hide[")
f.write(html)
redirects.write("/{} /{} 200!\n".format(link, output_file_name))
html = open(html_file).read()
html = re.sub("#toc-([^)]*)", "_academy_\\1.html", html)
sys.stdout.write(html)


@@ -1,81 +0,0 @@
title: |
Containers
chat: "[Mattermost](https://training.enix.io/)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2025-06-dila.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- # DAY 1
#- containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Namespaces_Cgroups.md
- containers/Init_Systems.md
- exercises/container-from-scratch-details.md
- # DAY 2
- containers/Initial_Images.md
- containers/Copy_On_Write.md
- containers/Images_Deep_Dive.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Multi_Stage_Builds.md
- containers/Buildkit.md
- containers/Exercise_Dockerfile_Basic.md
- containers/Exercise_Dockerfile_Multistage.md
- containers/Exercise_Dockerfile_Buildkit.md
- exercises/image-from-scratch-details.md
- # DAY 3
- |
# Security mechanisms
[🔗 Overall security features](#toc-security-features)
[🔗 User namespaces](#96)
- containers/Security.md
- containers/Rootless_Networking.md
- containers/Container_Engines.md
- shared/cloud-native-security.md
- shared/thankyou.md
#- containers/links.md
#- containers/Orchestration_Overview.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Training_Environment.md
#- containers/Installing_Docker.md
#- containers/First_Containers.md
#- containers/Background_Containers.md
#- containers/Building_Images_Interactively.md
#- containers/Building_Images_With_Dockerfiles.md
#- containers/Cmd_And_Entrypoint.md
#- containers/Copying_Files_During_Build.md
#- containers/Container_Networking_Basics.md
#- containers/Local_Development_Workflow.md
#- containers/Container_Network_Model.md
#- containers/Compose_For_Dev_Stacks.md
#- containers/Exercise_Composefile.md
#- containers/Start_And_Attach.md
#- containers/Naming_And_Inspecting.md
#- containers/Labels.md
#- containers/Getting_Inside.md
#- containers/Dockerfile_Tips.md
#- containers/Advanced_Dockerfiles.md
#- containers/Publishing_To_Docker_Hub.md
#- containers/Network_Drivers.md
#- containers/Docker_Machine.md


@@ -29,20 +29,6 @@ At the end of this lesson, you will be able to:
---
## `Dockerfile` example
```
FROM python:alpine
WORKDIR /app
RUN pip install Flask
COPY rng.py .
ENV FLASK_APP=rng FLASK_RUN_HOST=:: FLASK_RUN_PORT=80
CMD ["flask", "run"]
EXPOSE 80
```
---
## Writing our first `Dockerfile`
Our Dockerfile must be in a **new, empty directory**.


@@ -1 +1,3 @@
# Building containers from scratch (TODO)
# Building containers from scratch
(This is a "bonus section" done if time permits.)


@@ -15,21 +15,11 @@ class: title
- If you are doing or re-doing this course on your own, you can:
- install [Docker Desktop][docker-desktop] or [Podman Desktop][podman-desktop]
<br/>(available for Linux, Mac, Windows; provides a nice GUI)
- install Docker locally (as explained in the chapter "Installing Docker")
- install [Docker CE][docker-ce] or [Podman][podman]
<br/>(for intermediate/advanced users who prefer the CLI)
- install Docker on e.g. a cloud VM
- try platforms like [Play With Docker][pwd] or [KodeKloud]
<br/>(if you can't/won't install anything locally)
[docker-desktop]: https://docs.docker.com/desktop/
[podman-desktop]: https://podman-desktop.io/downloads
[docker-ce]: https://docs.docker.com/engine/install/
[podman]: https://podman.io/docs/installation#installing-on-linux
[pwd]: https://labs.play-with-docker.com/
[KodeKloud]: https://kodekloud.com/free-labs/docker/
- use https://www.play-with-docker.com/ to instantly get a training environment
---
@@ -65,31 +55,23 @@ individual Docker VM.*
---
class: pic
## Why don't we run Docker locally?
![Docker Architecture](images/docker-engine-architecture.svg)
- We are going to download container images and distribution packages.
---
- This could put a bit of stress on the local WiFi and slow us down.
## Can we run Docker locally?
- Instead, we use a remote VM that has a good connectivity
- If you already have Docker (or Podman) installed, you can use it!
- In some rare cases, installing Docker locally is challenging:
- The VMs can be convenient if:
- no administrator/root access (computer managed by strict corp IT)
- you can't/won't install Docker or Podman on your machine,
- 32-bit CPU or OS
- your local internet connection is slow.
- old OS version (e.g. CentOS 6, OSX pre-Yosemite, Windows 7)
- We're going to download many container images and distribution packages.
- If the class takes place in a venue with slow WiFi, this can slow us down.
- The remote VMs have good connectivity and downloads will be fast there.
(Initially, we provided VMs to make sure that nobody would waste time
with installers, or because they didn't have the right permissions
on their machine, etc.)
- It's better to spend time learning containers than fiddling with the installer!
---


@@ -48,7 +48,7 @@ k8s@shpod:~$ flux bootstrap github \
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---
@@ -74,7 +74,7 @@ We don't have such kind of things here.😕
- We could bind our `ingress-controller` to a `NodePort`.
`ingress-nginx` install manifests provide it here:
</br>https://github.com/kubernetes/ingress-nginx/tree/release-1.14/deploy/static/provider/baremetal
</br>https://github.com/kubernetes/ingress-nginx/deploy/static/provider/baremetal
- In the 📄file `./clusters/METAL/ingress-nginx/sync.yaml`,
</br>change the `Kustomization` value `spec.path: ./deploy/static/provider/baremetal`
@@ -83,7 +83,7 @@ We don't have such kind of things here.😕
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---


@@ -167,13 +167,13 @@ k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---
class: pic
![rocky config files](images/flux/R01-config-files.png)
![rocky config files](images/M6-R01-config-files.png)
---
@@ -300,7 +300,7 @@ class: extra-details
💡 This managed cluster comes with custom `StorageClasses` leveraging Cloud _IaaS_ capabilities (i.e. block devices)
![Flux configuration waterfall](images/flux/persistentvolumes.png)
![Flux configuration waterfall](images/M6-persistentvolumes.png)
- a default `StorageClass` is applied if none is specified (like here)
- for **_🏭PROD_** purpose, ops team might enforce a more performant `StorageClass`
@@ -310,7 +310,7 @@ class: extra-details
class: pic
![Flux configuration waterfall](images/flux/flux-config-dependencies.png)
![Flux configuration waterfall](images/M6-flux-config-dependencies.png)
---


@@ -9,7 +9,7 @@ but let's see if we can succeed by just adding manifests in our `Flux` configura
class: pic
![Flux configuration waterfall](images/flux/flux-config-dependencies.png)
![Flux configuration waterfall](images/M6-flux-config-dependencies.png)
---
@@ -89,7 +89,7 @@ k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---
@@ -132,7 +132,7 @@ k8s@shpod:~$ flux reconcile source git movy-app -n movy-test
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---
@@ -170,13 +170,13 @@ And push the modifications…
class: pic
![MOVY app has an incorrect dataset](images/flux/incorrect-dataset-in-MOVY-app.png)
![MOVY app has an incorrect dataset](images/M6-incorrect-dataset-in-MOVY-app.png)
---
class: pic
![ROCKY app has an incorrect dataset](images/flux/incorrect-dataset-in-ROCKY-app.png)
![ROCKY app has an incorrect dataset](images/M6-incorrect-dataset-in-ROCKY-app.png)
---
@@ -212,7 +212,7 @@ Please, refer to the [`Network policies` chapter in the High Five M4 module](./4
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---


@@ -39,7 +39,7 @@ the **_⚙OPS_** team exclusively operates its clusters by updating a code ba
_GitOps_ and `Flux` enable the **_⚙OPS_** team to rely on the _first-class citizen pattern_ in Kubernetes' world through these steps:
- describe the **desired target state**
- and let the **automated convergence** happen
- and let the **automated convergence** happens
---
@@ -78,8 +78,10 @@ Prerequisites are:
- `Flux` _CLI_ needs a `Github` personal access token (_PAT_)
- to create and/or access the `Github` repository
- to give permissions to existing teams in our `Github` organization
- The _PAT_ needs _CRUD_ permissions on our `Github` organization
- The PAT needs _CRUD_ permissions on our `Github` organization
- repositories
- admin:public_key
- users
- As the **_⚙OPS_** team, let's create a `Github` personal access token…
@@ -87,7 +89,7 @@ Prerequisites are:
class: pic
![Generating a Github personal access token](images/flux/github-add-token.jpg)
![Generating a Github personal access token](images/M6-github-add-token.jpg)
---
@@ -116,32 +118,6 @@ k8s@shpod:~$ flux bootstrap github \
class: extra-details
### Creating a personal dedicated `Github` repo
You don't need to rely on a `Github` organization: any `Github` personal repository is OK.
.lab[
- let's replace the `GITHUB_TOKEN` value by our _Personal Access Token_
- and the `GITHUB_REPO` value by our specific repository name
```bash
k8s@shpod:~$ export GITHUB_TOKEN="my-token" && \
export GITHUB_USER="lpiot" && \
export GITHUB_REPO="fleet-config-using-flux-XXXXX"
k8s@shpod:~$ flux bootstrap github \
--owner=${GITHUB_USER} \
--personal \
--repository=${GITHUB_REPO} \
--path=clusters/CLOUDY
```
]
---
class: extra-details
Here is the result
```bash
@@ -193,7 +169,7 @@ Here is the result
- `Flux` sets up permissions that allow teams within our organization to **access** the `Github` repository as maintainers
- Teams need to exist before `Flux` proceeds to this configuration
![Teams in Github](images/flux/github-teams.png)
![Teams in Github](images/M6-github-teams.png)
---
@@ -207,22 +183,6 @@ Here is the result
---
### The PAT is not needed anymore!
- During the install process, `Flux` creates an `ssh` key pair so that it is able to contribute to the `Github` repository.
```bash
► generating source secret
✔ public key: ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFqaT8B8SezU92qoE+bhnv9xONv9oIGuy7yVAznAZfyoWWEVkgP2dYDye5lMbgl6MorG/yjfkyo75ETieAE49/m9D2xvL4esnSx9zsOLdnfS9W99XSfFpC2n6soL+Exodw==
✔ configured deploy key "flux-system-main-flux-system-./clusters/CLOUDY" for "https://github.com/container-training-fleet/fleet-config-using-flux-XXXXX"
► applying source secret "flux-system/flux-system"
✔ reconciled source secret
```
- You can now delete the formerly created _Personal Access Token_: `Flux` won't use it anymore.
---
### 📂 Flux config files
`Flux` has been successfully installed onto our **_☁CLOUDY_** Kubernetes cluster!
@@ -232,13 +192,13 @@ Its configuration is managed through a _Gitops_ workflow sourced directly from o
Let's review the `Flux` configuration files we've created and pushed to the `Github` repository…
… as well as the corresponding components running in our Kubernetes cluster
![Flux config files](images/flux/flux-config-files.png)
![Flux config files](images/M6-flux-config-files.png)
---
class: pic
<!-- FIXME: wrong schema -->
![Flux architecture](images/flux/flux-controllers.png)
![Flux architecture](images/M6-flux-controllers.png)
---


@@ -90,13 +90,13 @@ k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---
class: pic
![Ingress-nginx provisioned an IaaS load-balancer in Scaleway Cloud services](images/flux/ingress-nginx-scaleway-lb.png)
![Ingress-nginx provisioned an IaaS load-balancer in Scaleway Cloud services](images/M6-ingress-nginx-scaleway-lb.png)
---
@@ -141,7 +141,7 @@ k8s@shpod:~/fleet-config-using-flux-XXXXX$ \
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---
@@ -172,7 +172,7 @@ k8s@shpod:~$ \
class: pic
![Rocky application screenshot](images/flux/rocky-app-screenshot.png)
![Rocky application screenshot](images/M6-rocky-app-screenshot.png)
---


@@ -13,7 +13,7 @@ Please, refer to the [`Setting up Kubernetes` chapter in the High Five M4 module
---
## Creating a `Helm` source in Flux for Kyverno Helm chart
## Creating a `Helm` source in Flux for OpenEBS Helm chart
.lab[
@@ -107,7 +107,7 @@ flux create kustomization
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---


@@ -58,7 +58,7 @@ k8s@shpod:~/fleet-config-using-flux-XXXXX$ flux create kustomization dashboards
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---
@@ -98,7 +98,7 @@ k8s@shpod:~$ flux create secret git flux-system \
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---
@@ -127,7 +127,7 @@ k8s@shpod:~$ k get secret kube-prometheus-stack-grafana -n monitoring \
class: pic
![Grafana dashboard screenshot](images/flux/grafana-dashboard.png)
![Grafana dashboard screenshot](images/M6-grafana-dashboard.png)
---


@@ -76,7 +76,7 @@ And here we go!
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---


@@ -34,7 +34,7 @@ Several _tenants_ are created
class: pic
![Multi-tenant clusters](images/flux/cluster-multi-tenants.png)
![Multi-tenant clusters](images/M6-cluster-multi-tenants.png)
---
@@ -105,7 +105,7 @@ Let's review the `fleet-config-using-flux-XXXXX/clusters/CLOUDY/tenants.yaml` fi
class: pic
![Running Mario](images/running-mario.gif)
![Running Mario](images/M6-running-Mario.gif)
---

(14 deleted binary image files, not shown)
slides/intro-fullday.yml Normal file

@@ -0,0 +1,74 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
#- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
#- containers/Start_And_Attach.md
- containers/Naming_And_Inspecting.md
#- containers/Labels.md
- containers/Getting_Inside.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Container_Networking_Basics.md
#- containers/Network_Drivers.md
- containers/Local_Development_Workflow.md
- containers/Container_Network_Model.md
- shared/yaml.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
- containers/Multi_Stage_Builds.md
#- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Multistage.md
#- containers/Docker_Machine.md
#- containers/Advanced_Dockerfiles.md
#- containers/Buildkit.md
#- containers/Exercise_Dockerfile_Buildkit.md
#- containers/Init_Systems.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Container_Engines.md
#- containers/Security.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md
#- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md


@@ -0,0 +1,77 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
content:
- shared/title.md
# - shared/logistics.md
- containers/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- - containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- - containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Multistage.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- shared/yaml.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- containers/Docker_Machine.md
- - containers/Advanced_Dockerfiles.md
- containers/Buildkit.md
- containers/Exercise_Dockerfile_Buildkit.md
- containers/Init_Systems.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Resource_Limits.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
- containers/Containers_From_Scratch.md
- containers/Security.md
- containers/Rootless_Networking.md
- containers/Images_Deep_Dive.md
- - containers/Container_Engines.md
- containers/Pods_Anatomy.md
- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md

slides/intro-twodays.yml Normal file

@@ -0,0 +1,84 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- # DAY 1
- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Dockerfile_Tips.md
- containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Exercise_Dockerfile_Multistage.md
-
- containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Start_And_Attach.md
- containers/Getting_Inside.md
- containers/Resource_Limits.md
- # DAY 2
- containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
-
- containers/Local_Development_Workflow.md
- containers/Working_With_Volumes.md
- shared/yaml.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
- containers/Installing_Docker.md
- containers/Container_Engines.md
- containers/Init_Systems.md
- containers/Advanced_Dockerfiles.md
- containers/Buildkit.md
- containers/Exercise_Dockerfile_Buildkit.md
-
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Orchestration_Overview.md
-
- shared/thankyou.md
- containers/links.md
#-
#- containers/Docker_Machine.md
#- containers/Ambassadors.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Rootless_Networking.md
#- containers/Security.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md


@@ -180,7 +180,7 @@ An Ingress Controller! 😅
- Install an Ingress Controller:
```bash
kubectl apply -f ~/container.training/k8s/traefik.yaml
kubectl apply -f ~/container.training/k8s/traefik-v2.yaml
```
- Wait a little bit, and check that we now have a `kubernetes.io/tls` Secret:


@@ -1,216 +0,0 @@
# CloudNativePG
- CloudNativePG (CNPG) is an operator to run PostgreSQL on Kubernetes
- Makes it easy to run production Postgres on K8S
- Supports streaming replication, backups, PITR, TLS, monitoring...
- Open source
- Accepted to CNCF on January 21, 2025 at the Sandbox maturity level
(https://www.cncf.io/projects/cloudnativepg/)
---
## A few examples
- [EphemeraSearch](https://www.ephemerasearch.com/)
*personal project, ~200 GB database, tiny budget*
- [Sellsy](https://enix.io/en/clients/)
*40,000 databases across 50 clusters, Talos, Proxmox VE*
- MistralAI
*30 production clusters, each from a few GB to a few TB size*
→ CNPG works for environments with small, big, and many clusters!
---
## Typical operation
- Decide what kind of storage we want to use
(cloud, local, distributed, hyperconverged...)
- Decide on backup strategy
(typically object store, e.g. S3-compatible)
- Set up `StorageClass` if needed
- Install CNPG
- Deploy Postgres cluster(s) with YAML manifests
- Profit!
---
## Local vs remote storage
- Local storage can feel less safe
(compared to a SAN, cloud block device, distributed volume...)
- However, it can be much faster
(much lower latency, much higher throughput)
- If we're using replication, losing a local volume is no problem
- Distributed storage can also fail
(or be unavailable for a while)
---
## CNPG installation
Example with Helm:
```bash
helm upgrade --install --namespace cnpg-system --create-namespace \
--repo https://cloudnative-pg.io/charts/ \
cloudnative-pg cloudnative-pg \
--version 1.25.1
```
Interesting options to add, to integrate with Prometheus Operator:
```bash
--set monitoring.podMonitorEnabled=true
--set monitoring.grafanaDashboard.create=true
--set monitoring.grafanaDashboard.namespace=prom-system
```
---
## Minimal Postgres cluster
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: minimal
spec:
instances: 2
storage:
size: 10G
```
Note: this is missing (notably) resource requests and backups!
---
## `kubectl` plugin
- There is a `kubectl-cnpg` plugin
- Install it (e.g. with `krew`)
- Check commands like:
`k cnpg status`
`k cnpg psql`
`k cnpg backup`
`k cnpg promote`
---
## Production clusters
Check the following YAML manifest:
https://github.com/jpetazzo/pozok/blob/main/cluster-production.yaml
If you want to test this, you need an S3-compatible object store.
- Set the required variables:
`$CLUSTER_NAME`, `$AWS_ACCESS_KEY_ID`, `$AWS_SECRET_ACCESS_KEY`, `$AWS_DEFAULT_REGION`, `$BUCKET_NAME`, `$AWS_ENDPOINT_URL`
- Then `envsubst < cluster-production.yaml | kubectl apply -f-`
- Cluster comes up; backups and WAL segments land in the S3 bucket!
---
## Automated switchover
- CNPG detects when we `kubectl cordon` a node
- It assumes "cordon = maintenance"
- If the node hosts a primary server, it initiates a switchover
- It also uses Pod Disruption Budgets (PDB) to collaborate with evictions
(the PDB prevents the eviction of the primary until it gets demoted)
---
## Benchmarking
- Postgres has `pgbench`
- Step 1: execute e.g. `pgbench -i -s 10` to prepare the database
(`-s` is an optional "scaling factor" for a bigger dataset)
- Step 2: execute `pgbench -P1 -T10` to run the benchmark
(`-P1` = report progress every second, `-T10` = run for 10 seconds)
- These commands can be executed in the pod running the primary, e.g.:
`kubectl exec minimal-1 -- pgbench app -i -s 10`
`kubectl exec minimal-1 -- pgbench app -P1 -T60`
---
## CNPG lab 1
- Install CNPG on a managed cluster with a default `StorageClass`
- Provision a CNPG cluster (primary+replica)
- Run a `pgbench` (e.g. 60 seconds)
- Note the number of transactions / second
- Install another `StorageClass` (e.g. `rancher/local-path-provisioner`)
- Provision another CNPG cluster with that storage class
- Run a benchmark and compare the numbers
- Discuss!
---
## CNPG lab 2
- This one requires access to an S3-compatible object store
- Deploy a cluster sending backups to the object store
- Run a benchmark (to populate the database)
- Trigger a backup (e.g. with `k cnpg backup`)
- Create a new cluster from the backup
- Confirm that the number of rows (e.g. in `pgbench_history`) is the same
???
:EN:- Deploying Postgres clusters with CloudNativePG
:FR:- Déployer des clusters Postgres avec CloudNativePG


@@ -1,10 +0,0 @@
# CSI (FIXME)
---
---
---
---
---
---
---


@@ -1,253 +0,0 @@
# Running a Harbor registry
- There are many open source registries available out there
- We're going to show an end-to-end example using a very popular one:
[Harbor](https://goharbor.io) (https://goharbor.io)
- We will:
- install Harbor
- create a private registry on Harbor
- set up an automated build pipeline pushing images to Harbor
- configure an app to use images from the private registry
---
## Requirements
- Virtually all registry clients **require** TLS when communicating with registries
(one exception: when the registry is on `localhost`)
- This means that we'll need a valid TLS certificate for our registry
- We can easily get one with cert-manager and e.g. Let's Encrypt
(as long as we can associate a domain with our ingress controller)
- To run the demos in this chapter, **we need a domain name!**
---
## Alternatives
- We could configure our build pipeline to ignore certificates
(so that it can push images without complaining)
- We could hack something so that the registry is available over `localhost`
- Or we could add the registry's certificate everywhere
(in our build pipeline, on our container engines...)
- These extra steps are out of scope for this chapter
- **We need a domain name!**
---
## Automating TLS certificates
- Let's install Traefik:
```bash
kubectl apply -f ~/container.training/k8s/traefik.yml
```
- And cert-manager:
```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml
```
- Edit the `ClusterIssuer` manifest and apply it:
```bash
vim ~/container.training/k8s/cm-clusterissuer.yaml
kubectl apply -f ~/container.training/k8s/cm-clusterissuer.yaml
```
⚠️ Make sure to update the cluster issuer name to `letsencrypt-production`!
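- For reference, the resulting manifest should look roughly like this (a sketch; the email address and the Traefik ingress class are assumptions to adapt):
  ```yaml
  apiVersion: cert-manager.io/v1
  kind: ClusterIssuer
  metadata:
    name: letsencrypt-production
  spec:
    acme:
      server: https://acme-v02.api.letsencrypt.org/directory
      email: you@example.com          # put a real address here
      privateKeySecretRef:
        name: letsencrypt-production  # Secret storing the ACME account key
      solvers:
        - http01:
            ingress:
              ingressClassName: traefik
  ```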
---
## Checking that it works
- Deploy a simple application and expose it with TLS:
```bash
kubectl create deployment blue --image jpetazzo/color --replicas 2 --port 80
kubectl expose deployment blue
kubectl create ingress blue --rule=blue.`$DOMAIN`/*=blue:80,tls \
--annotation cert-manager.io/cluster-issuer=letsencrypt-production
```
- Check that the certificate was correctly issued:
```bash
kubectl get cert
curl https://blue.`$DOMAIN`/
```
---
## Deploying Harbor
- There is a Helm chart (https://artifacthub.io/packages/helm/harbor/harbor)
- Let's install it:
```bash
helm upgrade --install --repo https://helm.goharbor.io \
--namespace harbor --create-namespace harbor harbor \
--set persistence.enabled=false \
--set expose.ingress.hosts.core=harbor.`$DOMAIN` \
--set expose.ingress.annotations."cert-manager\.io/cluster-issuer"=letsencrypt-production \
--version 1.18.0
```
- Wait until all pods are `Running` in the `harbor` namespace
---
## Logging into Harbor
- Go to https://harbor.$DOMAIN/
- The default login is `admin`
- The default password is `Harbor12345`
(yes, it would be a good idea to change that in production😁)
---
## Creating a new repository
- In Harbor, repositories are associated with "projects"
- Create a new project named `dockercoins`
---
## Creating Harbor users
- We will create two "robot accounts":
- one with `push` permission (for the build pipeline)
- one with `pull` permission (for our Kubernetes workloads)
- Create a first robot account, `dockercoins-push`
- don't give it any system permissions
- for project permissions, check `dockercoins`
- then select permissions, check `push` and `pull`
- Write down the user and password!
---
## Setting up the build pipeline
- This part requires a GitHub account
- On GitHub, fork https://github.com/jpetazzo/dockercoins
(it has a GitHub Actions workflow that is almost ready to use!)
- In your fork, go to settings / secrets and variables / actions
- Create the following secrets:
`REGISTRY_ADDRESS` = `harbor.$DOMAIN` (make sure to enter the real domain of course!)
`REGISTRY_USERNAME` = the user name generated by Harbor
`REGISTRY_PASSWORD` = the password generated by Harbor
---
## Setting up the build pipeline
- Edit `.github/workflows/automated-build.yaml`
- Comment out the steps related to GitHub Container Registry and Docker Hub
- Uncomment the steps related to the custom external registry
- Commit
- In your fork, click on the "Actions" button on top
- You should see the workflow running
- After a couple of minutes, it should (hopefully) report success
---
## Creating the `pull` robot account
- In Harbor, create another robot account
- Let's name it `dockercoins-pull`
- Again, don't give it any system permissions
- Give it `pull` permissions for the `dockercoins` project
- Write down the user and password
---
## Create a Secret for the `pull` account
- Let's create a Kubernetes Secret holding the registry credentials:
```bash
kubectl create secret docker-registry dockercoins-pull \
--docker-username '`robot$dockercoins-pull`' \
--docker-password `abcdefghijKLMNOPQRST` \
--docker-server harbor.`$DOMAIN`
```
- Make sure to quote the username (the `$` will cause problems otherwise)
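- The quoting issue can be checked without touching the cluster: with double quotes, the shell tries to expand `$dockercoins` (yielding an empty string), while single quotes keep the `$` literal:
  ```bash
  user='robot$dockercoins-pull'   # single quotes: the $ stays literal
  echo "$user"                    # → robot$dockercoins-pull

  user="robot$dockercoins-pull"   # double quotes: $dockercoins expands (to nothing here)
  echo "$user"                    # → robot-pull
  ```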
---
## Use the Secret
- We have two possibilities here:
- add `imagePullSecrets` to every Pod template that needs them
- add `imagePullSecrets` to the ServiceAccount used by the Pods
- Let's patch the default ServiceAccount:
```bash
kubectl patch serviceaccount default \
--patch 'imagePullSecrets: [ name: dockercoins-pull ]'
```
---
## Use the private registry
- Make a copy of `~/container.training/k8s/dockercoins.yml`
- In that copy, replace every `dockercoins/*` image with `harbor.$DOMAIN/dockercoins/*`
(put the actual domain, not `$DOMAIN`!)
- Apply that YAML
- Check that the application is up and running
- Check that the number of pulls has increased in the Harbor web UI
- Congratulations, you've deployed an image from a self-hosted private registry! 🎉
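- By the way, the image substitution in the steps above can be scripted (a sketch; `harbor.example.com` stands for your real domain):
  ```bash
  # Rewrite every dockercoins/* image reference to point at the private registry
  # (harbor.example.com is a placeholder for your actual domain)
  sed 's,image: dockercoins/,image: harbor.example.com/dockercoins/,' \
      dockercoins.yml > dockercoins-private.yml
  ```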
???
:EN:- Hosting private images with Harbor
:FR:- Héberger des images privées avec Harbor

View File

@@ -704,19 +704,13 @@ class: extra-details
## Ingress in the future
- The [Gateway API SIG](https://gateway-api.sigs.k8s.io/) is probably be the future of Ingress
- The [Gateway API SIG](https://gateway-api.sigs.k8s.io/) might be the future of Ingress
- It proposes new resources:
GatewayClass, Gateway, HTTPRoute, TCPRoute...
- It now has feature parity with Ingress
(=it can be used instead of Ingress resources; or in addition to them)
- It is, however, more complex to set up and operate
(at least for now!)
- It is now in beta (since v0.5.0, released in 2022)
???

View File

@@ -228,7 +228,7 @@ class: pic
k0sctl init \
--controller-count 3 \
--user docker \
--k0s flusk1 flusk2 flusk3 > k0sctl.yaml
--k0s m621 m622 m623 > k0sctl.yaml
```
- Edit the following field so that controller nodes also run kubelet:
@@ -304,8 +304,8 @@ class: pic
- The result should look like this:
```
{"members":{"flusk1":"https://10.10.3.190:2380","flusk2":"https://10.10.2.92:2380",
"flusk3":"https://10.10.2.110:2380"}}
{"members":{"m621":"https://10.10.3.190:2380","m622":"https://10.10.2.92:2380",
"m623":"https://10.10.2.110:2380"}}
```
---
@@ -326,9 +326,9 @@ class: pic
- The result should look like this:
```
NAME STATUS ROLES AGE VERSION
flusk1 Ready control-plane 66m v1.33.1+k0s
flusk2 Ready control-plane 66m v1.33.1+k0s
flusk3 Ready control-plane 66m v1.33.1+k0s
m621 Ready control-plane 66m v1.33.1+k0s
m622 Ready control-plane 66m v1.33.1+k0s
m623 Ready control-plane 66m v1.33.1+k0s
```
---
@@ -362,7 +362,7 @@ class: extra-details
To stop the running cluster:
```bash
sudo k0s stop
sudo k0s start
```
Reset and wipe its state:

View File

@@ -189,18 +189,6 @@ class: extra-details
(https://codeberg.org/hjacobs/kube-resource-report)
---
## Going back to Grafana...
- We installed kube-prometheus-stack earlier
- Let's check Grafana again!
- How much resource is Harbor using?
- Can we compare the resource usage in Grafana vs `kubectl top pods`?
???
:EN:- The resource metrics pipeline

View File

@@ -38,7 +38,7 @@
(without doing `helm repo add` first)
- Let's use the same name for the release, the namespace...:
- Otherwise, keep the same naming strategy:
```bash
helm upgrade --install kube-prometheus-stack kube-prometheus-stack \
--namespace kube-prometheus-stack --create-namespace \
@@ -56,39 +56,17 @@
## Exposing Grafana
- Let's do this only if we have an ingress controller and a domain name!
(we can also skip this and come back to it later)
- Create an Ingress for Grafana:
- Let's create an Ingress for Grafana
```bash
kubectl create ingress --namespace kube-prometheus-stack grafana \
--rule=grafana.`cloudnative.party`/*=kube-prometheus-stack-grafana:80
```
(make sure to use *your* domain name above)
(as usual, make sure to use *your* domain name above)
- Connect to Grafana
---
## Exposing Grafana without Ingress
- What if we don't have an ingress controller?
- We can use a `NodePort` service instead
- Option 1: `kubectl edit` or `kubectl patch` the service
(it's `kube-prometheus-stack-grafana`)
- Option 2: pass the correct value to `helm upgrade --install`
(check the [chart values][kps-values] to find the right one!)
- We can also use `kubectl port-forward`, or a `LoadBalancer` service!
[kps-values]: https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack?modal=values
(remember that the DNS record might take a few minutes to come up)
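- For option 2, the Grafana service type is configured through the Grafana subchart; the invocation would look something like this (an assumption based on the chart's values, so double-check the exact key name there):
  ```bash
  helm upgrade --install kube-prometheus-stack kube-prometheus-stack \
      --repo https://prometheus-community.github.io/helm-charts \
      --namespace kube-prometheus-stack --create-namespace \
      --set grafana.service.type=NodePort
  ```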
---
@@ -131,8 +109,6 @@
- Feel free to explore the other dashboards!
- There won't be much data right now, but there will be more later
???
:EN:- Installing Prometheus and Grafana

View File

@@ -1,12 +0,0 @@
# Deploying a registry (FIXME)
---
---
---
---
---
---
---
---
---

65
slides/kadm-fullday.yml Normal file
View File

@@ -0,0 +1,65 @@
title: |
Kubernetes
for Admins and Ops
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
- static-pods-exercise
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
- k8s/prereqs-advanced.md
- shared/handson.md
- k8s/architecture.md
#- k8s/internal-apis.md
- k8s/deploymentslideshow.md
- k8s/dmuc-easy.md
-
- k8s/dmuc-medium.md
- k8s/dmuc-hard.md
#- k8s/multinode.md
#- k8s/cni.md
- k8s/cni-internals.md
#- k8s/interco.md
-
- k8s/apilb.md
#- k8s/setup-overview.md
#- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md
- k8s/cluster-backup.md
- k8s/staticpods.md
-
#- k8s/cloud-controller-manager.md
#- k8s/bootstrap.md
- k8s/control-plane-auth.md
- k8s/pod-security-intro.md
- k8s/pod-security-policies.md
- k8s/pod-security-admission.md
- k8s/user-cert.md
- k8s/csr-api.md
- k8s/openid-connect.md
-
#- k8s/lastwords-admin.md
- k8s/links.md
- shared/thankyou.md

96
slides/kadm-twodays.yml Normal file
View File

@@ -0,0 +1,96 @@
title: |
Kubernetes
for administrators
and operators
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
# DAY 1
- - k8s/prereqs-advanced.md
- shared/handson.md
- k8s/architecture.md
- k8s/internal-apis.md
- k8s/deploymentslideshow.md
- k8s/dmuc-easy.md
- - k8s/dmuc-medium.md
- k8s/dmuc-hard.md
#- k8s/multinode.md
#- k8s/cni.md
- k8s/cni-internals.md
#- k8s/interco.md
- - k8s/apilb.md
- k8s/setup-overview.md
#- k8s/setup-devel.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md
- k8s/staticpods.md
- - k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
# DAY 2
- - k8s/kubercoins.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
- k8s/authn-authz.md
- k8s/user-cert.md
- k8s/csr-api.md
- - k8s/openid-connect.md
- k8s/control-plane-auth.md
###- k8s/bootstrap.md
- k8s/netpol.md
- k8s/pod-security-intro.md
- k8s/pod-security-policies.md
- k8s/pod-security-admission.md
- - k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/disruptions.md
- k8s/horizontal-pod-autoscaler.md
- - k8s/prometheus.md
#- k8s/prometheus-stack.md
- k8s/extending-api.md
- k8s/crd.md
- k8s/operators.md
- k8s/eck.md
###- k8s/operators-design.md
###- k8s/operators-example.md
# CONCLUSION
- - k8s/lastwords.md
- k8s/links.md
- shared/thankyou.md
- |
# (All content after this slide is bonus material)
# EXTRA
- - k8s/volumes.md
- k8s/configuration.md
- k8s/secrets.md
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md
#- k8s/portworx.md
- k8s/openebs.md
- k8s/stateful-failover.md

95
slides/kube-adv.yml Normal file
View File

@@ -0,0 +1,95 @@
title: |
Advanced
Kubernetes
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- #1
- k8s/prereqs-advanced.md
- shared/handson.md
- k8s/architecture.md
- k8s/internal-apis.md
- k8s/deploymentslideshow.md
- k8s/dmuc-easy.md
- #2
- k8s/dmuc-medium.md
- k8s/dmuc-hard.md
#- k8s/multinode.md
#- k8s/cni.md
#- k8s/interco.md
- k8s/cni-internals.md
- #3
- k8s/apilb.md
- k8s/control-plane-auth.md
- |
# (Extra content)
- k8s/staticpods.md
- k8s/cluster-upgrade.md
- #4
- k8s/templating.md
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- |
# (Extra content)
- k8s/helm-create-better-chart.md
- k8s/helm-dependencies.md
- k8s/helm-values-schema-validation.md
- k8s/helm-secrets.md
- k8s/ytt.md
- #5
- k8s/extending-api.md
- k8s/operators.md
- k8s/sealed-secrets.md
- k8s/crd.md
- #6
- k8s/ingress-tls.md
#- k8s/ingress-advanced.md
#- k8s/ingress-canary.md
- k8s/gateway-api.md
- k8s/cert-manager.md
- k8s/cainjector.md
- k8s/eck.md
- #7
- k8s/admission.md
- k8s/kyverno.md
- #8
- k8s/aggregation-layer.md
- k8s/metrics-server.md
- k8s/prometheus.md
- k8s/prometheus-stack.md
- k8s/hpa-v2.md
- #9
- k8s/operators-design.md
- k8s/operators-example.md
- k8s/kubebuilder.md
- k8s/events.md
- k8s/finalizers.md
- |
# (Extra content)
- k8s/owners-and-dependents.md
- k8s/apiserver-deepdive.md
#- k8s/record.md
- shared/thankyou.md

138
slides/kube-fullday.yml Normal file
View File

@@ -0,0 +1,138 @@
title: |
Deploying and Scaling Microservices
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
- shared/prereqs.md
- shared/handson.md
#- shared/webssh.md
- shared/connecting.md
#- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectl-run.md
#- k8s/batch-jobs.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubectlexpose.md
- k8s/service-types.md
- k8s/kubenet.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
#- k8s/exercise-wordsmith.md
-
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- k8s/yamldeploy.md
- k8s/namespaces.md
- k8s/setup-overview.md
- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
-
- k8s/dashboard.md
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/ingress.md
#- k8s/gateway-api.md
#- k8s/volumes.md
- k8s/configuration.md
- k8s/secrets.md
- k8s/openebs.md
#- k8s/k9s.md
#- k8s/tilt.md
#- k8s/kubectlscale.md
#- k8s/scalingdockercoins.md
#- shared/hastyconclusions.md
#- k8s/daemonset.md
#- shared/yaml.md
#- k8s/exercise-yaml.md
#- k8s/localkubeconfig.md
#- k8s/access-eks-cluster.md
#- k8s/accessinternal.md
#- k8s/kubectlproxy.md
#- k8s/healthchecks-more.md
#- k8s/record.md
#- k8s/ingress-tls.md
#- k8s/templating.md
#- k8s/kustomize.md
#- k8s/helm-intro.md
#- k8s/helm-chart-format.md
#- k8s/helm-create-basic-chart.md
#- k8s/helm-create-better-chart.md
#- k8s/helm-dependencies.md
#- k8s/helm-values-schema-validation.md
#- k8s/helm-secrets.md
#- k8s/exercise-helm.md
#- k8s/ytt.md
#- k8s/gitlab.md
#- k8s/create-chart.md
#- k8s/create-more-charts.md
#- k8s/netpol.md
#- k8s/authn-authz.md
#- k8s/user-cert.md
#- k8s/csr-api.md
#- k8s/openid-connect.md
#- k8s/pod-security-intro.md
#- k8s/pod-security-policies.md
#- k8s/pod-security-admission.md
#- k8s/exercise-configmap.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
#- k8s/logs-centralized.md
#- k8s/prometheus.md
#- k8s/prometheus-stack.md
#- k8s/statefulsets.md
#- k8s/consul.md
#- k8s/pv-pvc-sc.md
#- k8s/volume-claim-templates.md
#- k8s/portworx.md
#- k8s/openebs.md
#- k8s/stateful-failover.md
#- k8s/extending-api.md
#- k8s/crd.md
#- k8s/admission.md
#- k8s/operators.md
#- k8s/operators-design.md
#- k8s/operators-example.md
#- k8s/staticpods.md
#- k8s/finalizers.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
-
#- k8s/whatsnext.md
- k8s/lastwords.md
#- k8s/links.md
- shared/thankyou.md

92
slides/kube-halfday.yml Normal file
View File

@@ -0,0 +1,92 @@
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
#- logistics.md
# Bridget-specific; others use logistics.md
- logistics-bridget.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- - shared/prereqs.md
- shared/handson.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
#- k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-overview.md
#- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- - k8s/kubectl-run.md
#- k8s/batch-jobs.md
#- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/deploymentslideshow.md
- k8s/kubectlexpose.md
#- k8s/service-types.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
#- k8s/localkubeconfig.md
#- k8s/access-eks-cluster.md
#- k8s/accessinternal.md
#- k8s/kubectlproxy.md
- - k8s/dashboard.md
#- k8s/k9s.md
#- k8s/tilt.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/rollout.md
#- k8s/record.md
- - k8s/logs-cli.md
# Bridget hasn't added EFK yet
#- k8s/logs-centralized.md
- k8s/namespaces.md
- k8s/helm-intro.md
#- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
#- k8s/helm-create-better-chart.md
#- k8s/helm-dependencies.md
#- k8s/helm-values-schema-validation.md
#- k8s/helm-secrets.md
#- k8s/templating.md
#- k8s/kustomize.md
#- k8s/ytt.md
#- k8s/netpol.md
- k8s/whatsnext.md
# - k8s/links.md
# Bridget-specific
- k8s/links-bridget.md
- shared/thankyou.md

177
slides/kube-selfpaced.yml Normal file
View File

@@ -0,0 +1,177 @@
title: |
Deploying and Scaling Microservices
with Docker and Kubernetes
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
content:
- shared/title.md
#- logistics.md
- k8s/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
- shared/prereqs.md
- shared/handson.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
-
- k8s/kubectlget.md
- k8s/kubectl-run.md
- k8s/batch-jobs.md
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
-
- k8s/kubectlexpose.md
- k8s/service-types.md
- k8s/kubenet.md
- k8s/shippingimages.md
- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
#- k8s/exercise-wordsmith.md
- shared/yaml.md
- k8s/yamldeploy.md
- k8s/namespaces.md
-
- k8s/setup-overview.md
- k8s/setup-devel.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/dashboard.md
- k8s/k9s.md
- k8s/tilt.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
#- k8s/exercise-yaml.md
-
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
- k8s/record.md
-
- k8s/localkubeconfig.md
#- k8s/access-eks-cluster.md
- k8s/accessinternal.md
- k8s/kubectlproxy.md
-
- k8s/ingress.md
- k8s/ingress-advanced.md
#- k8s/ingress-canary.md
- k8s/ingress-tls.md
- k8s/gateway-api.md
- k8s/cert-manager.md
- k8s/cainjector.md
- k8s/templating.md
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- k8s/helm-dependencies.md
- k8s/helm-values-schema-validation.md
- k8s/helm-secrets.md
#- k8s/exercise-helm.md
- k8s/gitlab.md
- k8s/ytt.md
-
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/pod-security-intro.md
- k8s/pod-security-policies.md
- k8s/pod-security-admission.md
- k8s/user-cert.md
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/control-plane-auth.md
-
- k8s/volumes.md
#- k8s/exercise-configmap.md
- k8s/build-with-docker.md
- k8s/build-with-kaniko.md
-
- k8s/configuration.md
- k8s/secrets.md
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md
- k8s/portworx.md
- k8s/openebs.md
- k8s/stateful-failover.md
-
- k8s/gitworkflows.md
- k8s/flux.md
- k8s/argocd.md
-
- k8s/logs-centralized.md
- k8s/prometheus.md
- k8s/prometheus-stack.md
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/disruptions.md
- k8s/cluster-autoscaler.md
- k8s/horizontal-pod-autoscaler.md
- k8s/hpa-v2.md
-
- k8s/extending-api.md
- k8s/apiserver-deepdive.md
- k8s/crd.md
- k8s/aggregation-layer.md
- k8s/admission.md
- k8s/operators.md
- k8s/operators-design.md
- k8s/operators-example.md
- k8s/kubebuilder.md
- k8s/kuik.md
- k8s/sealed-secrets.md
- k8s/kyverno.md
- k8s/eck.md
- k8s/finalizers.md
- k8s/owners-and-dependents.md
- k8s/events.md
-
- k8s/dmuc-easy.md
- k8s/dmuc-medium.md
- k8s/dmuc-hard.md
#- k8s/multinode.md
#- k8s/cni.md
- k8s/cni-internals.md
- k8s/apilb.md
- k8s/staticpods.md
-
- k8s/cluster-upgrade.md
- k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
-
- k8s/lastwords.md
- k8s/links.md
- shared/thankyou.md

138
slides/kube-twodays.yml Normal file
View File

@@ -0,0 +1,138 @@
title: |
Deploying and Scaling Microservices
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
- shared/prereqs.md
- shared/handson.md
#- shared/webssh.md
- shared/connecting.md
#- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectl-run.md
- k8s/batch-jobs.md
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubectlexpose.md
- k8s/service-types.md
- k8s/kubenet.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
#- k8s/exercise-wordsmith.md
-
- k8s/yamldeploy.md
- k8s/setup-overview.md
- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- k8s/dashboard.md
- k8s/k9s.md
#- k8s/tilt.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- shared/yaml.md
#- k8s/exercise-yaml.md
-
- k8s/localkubeconfig.md
#- k8s/access-eks-cluster.md
- k8s/accessinternal.md
#- k8s/kubectlproxy.md
- k8s/rollout.md
- k8s/healthchecks.md
#- k8s/healthchecks-more.md
- k8s/record.md
-
- k8s/namespaces.md
- k8s/ingress.md
#- k8s/ingress-advanced.md
#- k8s/ingress-canary.md
#- k8s/ingress-tls.md
#- k8s/gateway-api.md
- k8s/templating.md
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- k8s/helm-dependencies.md
- k8s/helm-values-schema-validation.md
- k8s/helm-secrets.md
#- k8s/exercise-helm.md
#- k8s/ytt.md
- k8s/gitlab.md
-
- k8s/netpol.md
- k8s/authn-authz.md
#- k8s/csr-api.md
#- k8s/openid-connect.md
#- k8s/pod-security-intro.md
#- k8s/pod-security-policies.md
#- k8s/pod-security-admission.md
-
- k8s/volumes.md
#- k8s/exercise-configmap.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
- k8s/secrets.md
- k8s/logs-centralized.md
#- k8s/prometheus.md
#- k8s/prometheus-stack.md
-
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md
#- k8s/portworx.md
- k8s/openebs.md
- k8s/stateful-failover.md
#- k8s/extending-api.md
#- k8s/admission.md
#- k8s/operators.md
#- k8s/operators-design.md
#- k8s/operators-example.md
#- k8s/staticpods.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
-
- k8s/whatsnext.md
- k8s/lastwords.md
- k8s/links.md
- shared/thankyou.md

139
slides/kube.yml Normal file
View File

@@ -0,0 +1,139 @@
title: |
Intensive Docker
and Kubernetes Bootcamp
#chat: "[Mattermost](https://training.enix.io/mattermost)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
#- shared/prereqs.md
#- shared/handson.md
#- shared/webssh.md
#- shared/connecting.md
- shared/toc.md
-
- # DAY 1
#- containers/Docker_Overview.md
#- containers/Docker_History.md
#- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- # DAY 2
- containers/Container_Networking_Basics.md
- containers/Local_Development_Workflow.md
- containers/Container_Network_Model.md
- containers/Naming_And_Inspecting.md
- containers/Getting_Inside.md
- containers/Dockerfile_Tips.md
#- containers/Advanced_Dockerfiles.md
- containers/Multi_Stage_Builds.md
- containers/Exercise_Dockerfile_Advanced.md
- # DAY 3
#- containers/Start_And_Attach.md
#- containers/Labels.md
#- containers/Publishing_To_Docker_Hub.md
#- containers/Exercise_Composefile.md
- containers/Compose_For_Dev_Stacks.md
- shared/sampleapp.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
- k8s/kubectl-run.md
- k8s/kubectlexpose.md
#- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
#- k8s/buildshiprun-dockerhub.md
- exercises/k8sfundamentals-details.md
- # DAY 4
- k8s/ourapponkube.md
- k8s/service-types.md
- k8s/kubenet.md
- shared/yaml.md
- k8s/yamldeploy.md
- k8s/namespaces.md
#- k8s/setup-overview.md
- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
#- k8s/localkubeconfig.md
- k8s/accessinternal.md
#- k8s/kubectlproxy.md
- exercises/yaml-dockercoins-details.md
- exercises/localcluster-details.md
- # DAY 5
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/rollout.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/k9s.md
- # DAY 6
- k8s/demo-apps.md
#- k8s/kubectlscale.md
- k8s/volumes.md
#- k8s/exercise-configmap.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
- k8s/secrets.md
#- k8s/ingress-tls.md
#- k8s/ingress-advanced.md
- k8s/healthchecks.md
#- k8s/healthchecks-more.md
- exercises/healthchecks-details.md
- # DAY 7
- k8s/ingress.md
- k8s/cert-manager.md
- k8s/ingress-tls.md
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- exercises/reqlim-details.md
- # DAY 8
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/sealed-secrets.md
- exercises/netpol-details.md
- exercises/sealed-secrets-details.md
- exercises/rbac-details.md
- # DAY 9
- k8s/batch-jobs.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- exercises/helm-generic-chart-details.md
- k8s/tilt.md
- k8s/helm-create-better-chart.md
- # DAY 10
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md
- k8s/helmfile.md
- k8s/dashboard.md
- shared/thankyou.md

View File

@@ -1,222 +0,0 @@
title: |
Kubernetes
chat: "Mattermost"
gitrepo: github.com/jpetazzo/container.training
slides: https://2025-06-dila.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- shared/prereqs.md
- shared/handson.md
- shared/connecting.md
- shared/sampleapp.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
- k8s/kubectl-run.md
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- k8s/kubectlexpose.md
- k8s/service-types.md
- k8s/kubenet.md
- exercises/k8sfundamentals-details.md
-
- shared/yaml.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/yamldeploy.md
- k8s/namespaces.md
- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/dashboard.md
- k8s/k9s.md
- exercises/yaml-dockercoins-details.md
-
- |
# Overview for today
- We'll learn how to install Helm charts
- We'll install Prometheus+Grafana
<br/>(so that we start collecting metrics for later)
- We'll install an ingress controller and cert-manager
- We'll use that to deploy a private Harbor registry
- We'll use that registry to store private Dockercoins images
- We'll deploy Dockercoins with these images
- Finally we'll go back to the metrics to see what has been collected!
- k8s/templating.md
- k8s/helm-intro.md
- k8s/prometheus-stack.md
- k8s/ingress.md
#- k8s/ingress-advanced.md
#- k8s/ingress-canary.md
#- k8s/ingress-tls.md
- k8s/cert-manager.md
- k8s/harbor.md
- k8s/metrics-server.md
- k8s/cni-internals.md
- |
# Exercise — Harbor
- Deploy an ingress controller
- Deploy cert-manager
- Configure it with Let's Encrypt
- Deploy Harbor
- Create a "dockercoins" project in Harbor
- Fork https://github.com/jpetazzo/dockercoins
- Configure GitHub Actions to automatically push images to Harbor
- Deploy DockerCoins using the images hosted on Harbor
-
- k8s/authn-authz.md
- k8s/control-plane-auth.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- k8s/gitworkflows.md
#- k8s/flux.md
- k8s/argocd.md
#- k8s/dmuc-easy.md
#- k8s/dmuc-medium.md
#- k8s/dmuc-hard.md
#- k8s/multinode.md
#- k8s/apilb.md
#- k8s/staticpods.md
#- k8s/cluster-upgrade.md
#- k8s/cluster-backup.md
#- k8s/cloud-controller-manager.md
#- k8s/lastwords.md
#- k8s/links.md
- |
# Exercise — ArgoCD
- Deploy ArgoCD
- Set up ArgoCD to deploy DockerCoins with a plain YAML manifest
<br/>
(e.g. copy [that manifest][dockercoins-yaml] to a repo of your own)
- Set up ArgoCD to deploy DockerCoins with a basic Helm chart
<br/>
(e.g. just put the YAML manifest in the `templates` directory)
- Set up ArgoCD to deploy DockerCoins with a generic chart
<br/>
(i.e. deploy the same chart 5 times, but with different values)
- Set up ArgoCD to deploy DockerCoins using the images on Harbor
- Confirm that you can make changes to the DockerCoins code and deploy them
- Bonus: how to automatically roll out new images?
[dockercoins-yaml]: https://github.com/jpetazzo/container.training/blob/main/k8s/dockercoins.yaml
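For the plain-YAML step, a minimal ArgoCD `Application` manifest might look like this (the repo URL, path, and namespaces are placeholders to adapt to your own repo):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dockercoins
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/YOUR_USER/YOUR_REPO
    targetRevision: main
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: dockercoins
  syncPolicy:
    automated: {}
```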
-
- k8s/volumes.md
#- k8s/exercise-configmap.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
- k8s/secrets.md
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md
- k8s/stateful-failover.md
#- k8s/csi.md
- k8s/cnpg.md
#- k8s/logs-centralized.md
#- k8s/resource-limits.md
#- k8s/cluster-sizing.md
#- k8s/disruptions.md
#- k8s/cluster-autoscaler.md
#- k8s/horizontal-pod-autoscaler.md
#- k8s/hpa-v2.md
- shared/thankyou.md
#- shared/composescale.md
#- shared/hastyconclusions.md
#- k8s/batch-jobs.md
#- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
#- k8s/buildshiprun-dockerhub.md
#- k8s/ourapponkube.md
#- k8s/exercise-wordsmith.md
#- k8s/setup-overview.md
#- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
#- k8s/tilt.md
#- k8s/exercise-yaml.md
#- k8s/rollout.md
#- k8s/healthchecks.md
#- k8s/healthchecks-more.md
#- k8s/record.md
#- k8s/localkubeconfig.md
#- k8s/access-eks-cluster.md
#- k8s/accessinternal.md
#- k8s/kubectlproxy.md
#- k8s/cainjector.md
#- k8s/kustomize.md
#- k8s/pod-security-intro.md
#- k8s/pod-security-policies.md
#- k8s/pod-security-admission.md
#- k8s/user-cert.md
#- k8s/csr-api.md
#- k8s/openid-connect.md
#- k8s/extending-api.md
#- k8s/apiserver-deepdive.md
#- k8s/crd.md
#- k8s/aggregation-layer.md
#- k8s/admission.md
#- k8s/operators.md
#- k8s/operators-design.md
#- k8s/operators-example.md
#- k8s/kubebuilder.md
#- k8s/kuik.md
#- k8s/sealed-secrets.md
#- k8s/kyverno.md
#- k8s/eck.md
#- k8s/finalizers.md
#- k8s/owners-and-dependents.md
#- k8s/events.md
#- shared/webssh.md
#- k8s/versions-k8s.md
#- k8s/helm-dependencies.md
#- k8s/helm-values-schema-validation.md
#- k8s/helm-secrets.md
#- k8s/exercise-helm.md
#- k8s/gitlab.md
#- k8s/ytt.md
#- k8s/netpol.md
#- k8s/portworx.md
#- k8s/openebs.md


@@ -2,9 +2,17 @@
- Hello!
- On stage: Jérôme ([@jpetazzo@hachyderm.io]) + Alexandre ([@alexbuisine])
- On stage: Jérôme ([@jpetazzo@hachyderm.io])
- Schedule: 9am to 1pm
<!--
- On stage Monday+Tuesday: Jérôme ([@jpetazzo@hachyderm.io])
- On stage Wednesday+Thursday: Ludovic
-->
- Backstage: Alexandre, Antoine, Aurélien (x2), Baptiste, Benjamin, David, Hadrien, Kostas, Louis, Magalie, Nicolas, Paul, Sébastien, Thibault, Yoann...
- Schedule: every day from 9am to 1pm
- We'll take a break around 11am
@@ -21,6 +29,20 @@
---
## The morning 15 minutes
- Each day, we'll start at 9am with a 15-minute mini-presentation
(on a topic chosen together, not necessarily related to the training!)
- A chance to warm up the neurons with 🥐/☕️/🍊
(before tackling the serious stuff)
- Then at 9:15am we get down to business
---
## Hands-on exercises
- At the end of each morning, there is a concrete hands-on exercise
@@ -37,12 +59,18 @@
- We're here to help if you get stuck on an exercise!
---
???
## The hotline
- Wednesday afternoon: one hour of open Q&A!
- Every afternoon: one hour of open Q&A!
- Time: 3:00pm-4:00pm
(except on the last day)
- On Teams (?)
- Tuesday: 4:00pm-5:00pm
- Wednesday: 3:30pm-4:30pm
- Thursday: 3:00pm-4:00pm
- On Jitsi ("visioconf" link on the training portal)


@@ -1 +1 @@
logistics-enix.md
logistics-template.md


@@ -1,3 +0,0 @@
<a href="containers.yml.html">Containers</a>
|
<a href="kubernetes.yml.html">Kubernetes</a>

slides/mlops.yml Normal file

@@ -0,0 +1,44 @@
title: |
Asynchronous Architecture Patterns To Scale ML and Other High Latency Workloads on Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: https://FIXME.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- k8s/prereqs-advanced.md
- k8s/handson-mlops.md
- shared/connecting.md
- k8s/mlops-headsup.md
- shared/toc.md
-
- k8s/ollama-intro.md
- k8s/ollama-metrics.md
- k8s/queue-architecture.md
- k8s/bento-intro.md
-
- k8s/resource-limits.md
- k8s/cluster-autoscaler.md
- k8s/ollama-reqlim.md
- k8s/bento-hpa.md
- k8s/bento-rmq.md
- k8s/bento-cnpg.md
- k8s/helmfile.md
- shared/thankyou.md
- shared/contact.md


@@ -50,6 +50,8 @@
.lab[
- Create a Kubernetes cluster with KinD:
```bash
kind create cluster

slides/swarm-fullday.yml Normal file

@@ -0,0 +1,72 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
- snap
- btp-auto
- benchmarking
- elk-manual
- prom-manual
content:
- shared/title.md
- logistics.md
- swarm/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- - shared/prereqs.md
- shared/handson.md
- shared/connecting.md
- swarm/versions.md
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/stacks.md
- swarm/cicd.md
- swarm/updatingservices.md
- swarm/rollingupdates.md
- swarm/healthchecks.md
- - swarm/operatingswarm.md
- swarm/netshoot.md
- swarm/ipsec.md
- swarm/swarmtools.md
- swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- - swarm/logging.md
- swarm/metrics.md
- swarm/gui.md
- swarm/stateful.md
- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md

slides/swarm-halfday.yml Normal file

@@ -0,0 +1,71 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
- snap
- btp-manual
- benchmarking
- elk-manual
- prom-manual
content:
- shared/title.md
- logistics.md
- swarm/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- - shared/prereqs.md
- shared/handson.md
- shared/connecting.md
- swarm/versions.md
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
#- swarm/hostingregistry.md
#- swarm/testingregistry.md
#- swarm/btp-manual.md
#- swarm/swarmready.md
- swarm/stacks.md
- swarm/cicd.md
- swarm/updatingservices.md
#- swarm/rollingupdates.md
#- swarm/healthchecks.md
- - swarm/operatingswarm.md
#- swarm/netshoot.md
#- swarm/ipsec.md
#- swarm/swarmtools.md
- swarm/security.md
#- swarm/secrets.md
#- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- swarm/logging.md
- swarm/metrics.md
#- swarm/stateful.md
#- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md


@@ -0,0 +1,80 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
- btp-auto
content:
- shared/title.md
#- shared/logistics.md
- swarm/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- - shared/prereqs.md
- shared/handson.md
- shared/connecting.md
- swarm/versions.md
- |
name: part-1
class: title, self-paced
Part 1
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/stacks.md
- swarm/cicd.md
- |
name: part-2
class: title, self-paced
Part 2
- - swarm/operatingswarm.md
- swarm/netshoot.md
- swarm/swarmnbt.md
- swarm/ipsec.md
- swarm/updatingservices.md
- swarm/rollingupdates.md
- swarm/healthchecks.md
- swarm/nodeinfo.md
- swarm/swarmtools.md
- - swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- swarm/logging.md
- swarm/metrics.md
- swarm/stateful.md
- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md

slides/swarm-video.yml Normal file

@@ -0,0 +1,75 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
- btp-auto
content:
- shared/title.md
#- shared/logistics.md
- swarm/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/handson.md
- shared/connecting.md
- swarm/versions.md
- |
name: part-1
class: title, self-paced
Part 1
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/stacks.md
- |
name: part-2
class: title, self-paced
Part 2
- - swarm/operatingswarm.md
#- swarm/netshoot.md
#- swarm/swarmnbt.md
- swarm/ipsec.md
- swarm/updatingservices.md
- swarm/rollingupdates.md
#- swarm/healthchecks.md
- swarm/nodeinfo.md
- swarm/swarmtools.md
- - swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
#- swarm/logging.md
#- swarm/metrics.md
- swarm/stateful.md
- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md


@@ -228,6 +228,3 @@ td {
width: 50%;
}
.hide {
display: none;
}