Compare commits


35 Commits

Author SHA1 Message Date
Jérôme Petazzoni
ea3178327a Prepare content for Thoughtworks Infrastructure 2022-08-17 14:51:57 +02:00
Jérôme Petazzoni
2724a611a6 📃 Update rolling update intro slide 2022-08-17 14:49:17 +02:00
Jérôme Petazzoni
2ca239ddfc 🔒️ Mention bound service account tokens 2022-08-17 14:18:15 +02:00
Jérôme Petazzoni
e74a158c59 📃 Document dependency on yq 2022-08-17 13:49:15 +02:00
Jérôme Petazzoni
138af3b5d2 ♻️ Upgrade build image to Netlify Focal; bump up Python version 2022-08-17 13:48:55 +02:00
Jérôme Petazzoni
ad6d16bade Add RBAC and NetPol exercises 2022-08-17 13:16:52 +02:00
Jérôme Petazzoni
1aaf9b0bd5 ♻️ Update Linode LKE terraform module 2022-07-29 14:37:37 +02:00
Jérôme Petazzoni
ce39f97a28 Bump up versions for cluster upgrade lab 2022-07-22 11:32:22 +02:00
jonjohnsonjr
162651bdfd Typo: sould -> should 2022-07-18 19:16:47 +02:00
Jérôme Petazzoni
2958ca3a32 ♻️ Update CRD content
Rehaul for crd/v1; demonstrate what happens when adding
data validation a posteriori.
2022-07-14 10:32:34 +02:00
Jérôme Petazzoni
02a15d94a3 Add nsinjector 2022-07-06 14:28:24 +02:00
Jérôme Petazzoni
12d9f06f8a Add YTT content 2022-06-23 08:37:50 +02:00
Jérôme Petazzoni
43caccbdf6 ♻️ Bump up socket.io versions to address dependabot complaints
The autopilot code isn't exposed to anything; but this will stop dependabot
from displaying the annoying warning banners 😅
2022-06-20 07:09:36 +02:00
Tianon Gravi
a52f642231 Update links to kube-resource-report
Also, remove links to demos that no longer exist.
2022-06-10 21:43:56 +02:00
Tianon Gravi
30b1bfde5b Fix a few minor typos 2022-06-10 21:43:56 +02:00
Jérôme Petazzoni
5b39218593 Bump up Kapsule k8s version 2022-06-08 14:35:24 +02:00
Jérôme Petazzoni
f65ca19b44 📃 Mention type validation issues for CRDs 2022-06-06 13:59:13 +02:00
Jérôme Petazzoni
abb0fbe364 📃 Update operators intro to be less db-centric 2022-06-06 13:03:51 +02:00
Jerome Petazzoni
a18af8f4c4 🐞 Fix WaitForFirstConsumer with OpenEBS hostpath 2022-06-01 08:57:42 +02:00
Jerome Petazzoni
41e9047f3d Bump up sealed secret controller
quay.io doesn't work anymore, and kubeseal 0.17.4 was using
an image on quay. kubeseal 0.17.5 uses an image on the Docker
Hub instead.
2022-06-01 08:51:31 +02:00
Jérôme Petazzoni
907e769d4e 📍 Pin containerd version to avoid weave/containerd issue
See https://github.com/containerd/containerd/issues/6921 for details
2022-05-25 08:59:14 +02:00
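The pin itself is a small apt preferences file, written to /etc/apt/preferences.d/containerd (it mirrors the provisioning-script change further down in this diff):

```
Package: containerd.io
Pin: version 1.5.*
Pin-Priority: 1000
```

A priority of 1000 makes apt hold the 1.5 series even when newer containerd.io packages are available.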
Karol Berezicki
71ba3ec520 Fixed link to Docker forums in intro.md 2022-05-23 14:41:59 +02:00
Jérôme Petazzoni
cc6c0d5db8 🐞 Minor bug fixes 2022-05-12 19:37:05 +02:00
Jérôme Petazzoni
9ed00c5da1 Update DOKS version 2022-05-07 11:36:01 +02:00
Jérôme Petazzoni
b4b67536e9 Add retry logic for linode provisioning
It looks like Linode now enforces something like 10 requests / 10 seconds.
We need to add some retry logic when provisioning more than 10 VMs.
2022-05-03 11:33:12 +02:00
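The retry pattern used here can be sketched as a small shell helper (a simplified sketch; `flaky` and the specific MAX_TRY and doubling-backoff values are illustrative, not the exact provisioning script):

```shell
#!/bin/sh
# Retry a command up to MAX_TRY times, doubling the wait between attempts.
retry() {
  MAX_TRY=5
  TRY=1
  WAIT=1
  while ! "$@"; do
    echo "Failed (attempt $TRY/$MAX_TRY)." >&2
    if [ "$TRY" -ge "$MAX_TRY" ]; then
      echo "Giving up." >&2
      return 1
    fi
    sleep "$WAIT"
    TRY=$((TRY+1))
    WAIT=$((WAIT*2))
  done
}

# Example: a command that fails twice, then succeeds on the third attempt.
N=0
flaky() {
  N=$((N+1))
  [ "$N" -ge 3 ]
}
retry flaky && echo "succeeded after $N attempts"
```

With a 10-requests-per-10-seconds rate limit, an exponential backoff like this spaces out the provisioning calls enough to get past the throttle without hammering the API.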
Jérôme Petazzoni
52ce402803 ♻️ Switch to official FRR images; disable NHT
We're now using an official image for FRR.
Also, by default, BGPD will accept routes only if their
next-hop is reachable. This relies on a mechanism called
NHT (Next Hop Tracking). However, when we receive routes
from Kubernetes clusters, the peers usually advertise
addresses that we are not directly connected to. This
causes these addresses to be filtered out (unless the
route reflector is running on the same VPC or Layer 2
network as the Kubernetes nodes). To accept these routes
anyway, we basically disable NHT, by considering that
nodes are reachable if we can reach them through our
default route.
2022-04-12 22:17:27 +02:00
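In FRR configuration terms, relaxing strict next-hop tracking boils down to a single line (it appears in the zebra.conf change further down in this diff):

```
ip nht resolve-via-default
```

This tells FRR to consider a BGP next-hop resolvable if it can be reached through the default route, instead of requiring a more specific route to it.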
Jérôme Petazzoni
7076152bb9 ♻️ Update sealed-secrets version and install instructions 2022-04-12 20:46:01 +02:00
Jérôme Petazzoni
39eebe320f Add CA injector content 2022-04-12 18:24:41 +02:00
Jérôme Petazzoni
97c563e76a ♻️ Don't use ngrok for Tilt
ngrok now requires an account to serve HTML content.
We won't use ngrok anymore for the Tilt UI
(and we'll suggest using a NodePort service instead,
when running in a Pod).
2022-04-11 21:08:54 +02:00
Jérôme Petazzoni
4a7b04dd01 ♻️ Add helm install command for metrics-server
Don't use it yet, but have it handy in case we want to switch.
2022-04-08 21:06:19 +02:00
Jérôme Petazzoni
8b3f7a9aba ♻️ Switch to SIG metrics-server chart 2022-04-08 20:36:07 +02:00
Jérôme Petazzoni
f9bb780f80 Bump up DOK version 2022-04-08 20:35:53 +02:00
Jérôme Petazzoni
94545f800a 📃 Add TOC item to nsplease 2022-04-06 22:01:22 +02:00
Jérôme Petazzoni
5896ad577b Bump up k8s version on Linode 2022-03-31 10:59:09 +02:00
Denis Laxalde
030f3728f7 Update link to "Efficient Node Heartbeats" KEP
Previous file was moved in commit 7eef794bb5
2022-03-28 16:52:32 +02:00
79 changed files with 3382 additions and 1128 deletions

8
.gitignore vendored

@@ -6,13 +6,7 @@ prepare-vms/tags
prepare-vms/infra
prepare-vms/www
prepare-tf/.terraform*
prepare-tf/terraform.*
prepare-tf/stage2/*.tf
prepare-tf/stage2/kubeconfig.*
prepare-tf/stage2/.terraform*
prepare-tf/stage2/terraform.*
prepare-tf/stage2/externalips.*
prepare-tf/tag-*
slides/*.yml.html
slides/autopilot/state.yaml


@@ -1,2 +1,3 @@
hostname frr
ip nht resolve-via-default
log stdout


@@ -2,30 +2,36 @@ version: "3"
services:
bgpd:
image: ajones17/frr:662
image: frrouting/frr:v8.2.2
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
entrypoint: /usr/lib/frr/bgpd -f /etc/frr/bgpd.conf --log=stdout --log-level=debug --no_kernel
cap_add:
- NET_ADMIN
- SYS_ADMIN
entrypoint: /usr/lib/frr/bgpd -f /etc/frr/bgpd.conf --log=stdout --log-level=debug --no_kernel --no_zebra
restart: always
zebra:
image: ajones17/frr:662
image: frrouting/frr:v8.2.2
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
cap_add:
- NET_ADMIN
- SYS_ADMIN
entrypoint: /usr/lib/frr/zebra -f /etc/frr/zebra.conf --log=stdout --log-level=debug
restart: always
vtysh:
image: ajones17/frr:662
image: frrouting/frr:v8.2.2
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
entrypoint: vtysh -c "show ip bgp"
entrypoint: vtysh
chmod:
image: alpine


@@ -48,20 +48,25 @@ k8s_yaml('../k8s/dockercoins.yaml')
# The following line lets Tilt run with the default kubeadm cluster-admin context.
allow_k8s_contexts('kubernetes-admin@kubernetes')
# This will run an ngrok tunnel to expose Tilt to the outside world.
# This is intended to be used when Tilt runs on a remote machine.
local_resource(name='ngrok:tunnel', serve_cmd='ngrok http 10350')
# Note: the whole section below (to set up ngrok tunnels) is disabled,
# because ngrok now requires to set up an account to serve HTML
# content. So we can still use ngrok for e.g. webhooks and "raw" APIs,
# but not to serve web pages like the Tilt UI.
# This will wait until the ngrok tunnel is up, and show its URL to the user.
# We send the output to /dev/tty so that it doesn't get intercepted by
# Tilt, and gets displayed to the user's terminal instead.
# Note: this assumes that the ngrok instance will be running on port 4040.
# If you have other ngrok instances running on the machine, this might not work.
local_resource(name='ngrok:showurl', cmd='''
while sleep 1; do
TUNNELS=$(curl -fsSL http://localhost:4040/api/tunnels | jq -r .tunnels[].public_url)
[ "$TUNNELS" ] && break
done
printf "\nYou should be able to connect to the Tilt UI with the following URL(s): %s\n" "$TUNNELS" >/dev/tty
'''
)
# # This will run an ngrok tunnel to expose Tilt to the outside world.
# # This is intended to be used when Tilt runs on a remote machine.
# local_resource(name='ngrok:tunnel', serve_cmd='ngrok http 10350')
# # This will wait until the ngrok tunnel is up, and show its URL to the user.
# # We send the output to /dev/tty so that it doesn't get intercepted by
# # Tilt, and gets displayed to the user's terminal instead.
# # Note: this assumes that the ngrok instance will be running on port 4040.
# # If you have other ngrok instances running on the machine, this might not work.
# local_resource(name='ngrok:showurl', cmd='''
# while sleep 1; do
# TUNNELS=$(curl -fsSL http://localhost:4040/api/tunnels | jq -r .tunnels[].public_url)
# [ "$TUNNELS" ] && break
# done
# printf "\nYou should be able to connect to the Tilt UI with the following URL(s): %s\n" "$TUNNELS" >/dev/tty
# '''
# )

14
k8s/pizza-1.yaml Normal file

@@ -0,0 +1,14 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
version: v1alpha1
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz

20
k8s/pizza-2.yaml Normal file

@@ -0,0 +1,20 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object

32
k8s/pizza-3.yaml Normal file

@@ -0,0 +1,32 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
required: [ sauce, toppings ]
properties:
sauce:
type: string
toppings:
type: array
items:
type: string

39
k8s/pizza-4.yaml Normal file

@@ -0,0 +1,39 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
required: [ sauce, toppings ]
properties:
sauce:
type: string
toppings:
type: array
items:
type: string
additionalPrinterColumns:
- jsonPath: .spec.sauce
name: Sauce
type: string
- jsonPath: .spec.toppings
name: Toppings
type: string

40
k8s/pizza-5.yaml Normal file

@@ -0,0 +1,40 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
required: [ sauce, toppings ]
properties:
sauce:
type: string
enum: [ red, white ]
toppings:
type: array
items:
type: string
additionalPrinterColumns:
- jsonPath: .spec.sauce
name: Sauce
type: string
- jsonPath: .spec.toppings
name: Toppings
type: string

45
k8s/pizzas.yaml Normal file

@@ -0,0 +1,45 @@
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: margherita
spec:
sauce: red
toppings:
- mozarella
- basil
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: quatrostagioni
spec:
sauce: red
toppings:
- artichoke
- basil
- mushrooms
- prosciutto
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: mehl31
spec:
sauce: white
toppings:
- goatcheese
- pear
- walnuts
- mozzarella
- rosemary
- honey
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: brownie
spec:
sauce: chocolate
toppings:
- nuts


@@ -0,0 +1,164 @@
#! Define and use variables.
---
#@ repository = "dockercoins"
#@ tag = "v0.1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ "{}/hasher:{}".format(repository, tag)
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ "{}/rng:{}".format(repository, tag)
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ "{}/webui:{}".format(repository, tag)
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ "{}/worker:{}".format(repository, tag)
name: worker


@@ -0,0 +1,167 @@
#! Define and use a function to set the deployment image.
---
#@ repository = "dockercoins"
#@ tag = "v0.1"
#@ def image(component):
#@ return "{}/{}:{}".format(repository, component, tag)
#@ end
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ image("hasher")
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ image("rng")
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ image("webui")
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ image("worker")
name: worker

164
k8s/ytt/3-labels/app.yaml Normal file

@@ -0,0 +1,164 @@
#! Define and use functions, demonstrating how to generate labels.
---
#@ repository = "dockercoins"
#@ tag = "v0.1"
#@ def image(component):
#@ return "{}/{}:{}".format(repository, component, tag)
#@ end
#@ def labels(component):
#@ return {
#@ "app": component,
#@ "container.training/generated-by": "ytt",
#@ }
#@ end
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ image("hasher")
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("redis")
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("redis")
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("rng")
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ image("rng")
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("rng")
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("webui")
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ image("webui")
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("webui")
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("worker")
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ image("worker")
name: worker

162
k8s/ytt/4-data/app.yaml Normal file

@@ -0,0 +1,162 @@
---
#@ load("@ytt:data", "data")
#@ def image(component):
#@ return "{}/{}:{}".format(data.values.repository, component, data.values.tag)
#@ end
#@ def labels(component):
#@ return {
#@ "app": component,
#@ "container.training/generated-by": "ytt",
#@ }
#@ end
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ image("hasher")
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("redis")
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("redis")
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("rng")
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ image("rng")
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("rng")
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("webui")
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ image("webui")
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("webui")
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("worker")
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ image("worker")
name: worker


@@ -0,0 +1,4 @@
#@data/values-schema
---
repository: dockercoins
tag: v0.1

54
k8s/ytt/5-factor/app.yaml Normal file

@@ -0,0 +1,54 @@
---
#@ load("@ytt:data", "data")
---
#@ def Deployment(component, repository=data.values.repository, tag=data.values.tag):
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ component
container.training/generated-by: ytt
name: #@ component
spec:
replicas: 1
selector:
matchLabels:
app: #@ component
template:
metadata:
labels:
app: #@ component
spec:
containers:
- image: #@ repository + "/" + component + ":" + tag
name: #@ component
#@ end
---
#@ def Service(component, port=80, type="ClusterIP"):
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ component
container.training/generated-by: ytt
name: #@ component
spec:
ports:
- port: #@ port
protocol: TCP
targetPort: #@ port
selector:
app: #@ component
type: #@ type
#@ end
---
--- #@ Deployment("hasher")
--- #@ Service("hasher")
--- #@ Deployment("redis", repository="library", tag="latest")
--- #@ Service("redis", port=6379)
--- #@ Deployment("rng")
--- #@ Service("rng")
--- #@ Deployment("webui")
--- #@ Service("webui", type="NodePort")
--- #@ Deployment("worker")
---


@@ -0,0 +1,4 @@
#@data/values-schema
---
repository: dockercoins
tag: v0.1


@@ -0,0 +1,56 @@
---
#@ load("@ytt:data", "data")
#@ load("@ytt:template", "template")
---
#@ def component(name, repository=data.values.repository, tag=data.values.tag, port=None, type="ClusterIP"):
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ name
container.training/generated-by: ytt
name: #@ name
spec:
replicas: 1
selector:
matchLabels:
app: #@ name
template:
metadata:
labels:
app: #@ name
spec:
containers:
- image: #@ repository + "/" + name + ":" + tag
name: #@ name
#@ if/end port==80:
readinessProbe:
httpGet:
port: #@ port
#@ if port != None:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ name
container.training/generated-by: ytt
name: #@ name
spec:
ports:
- port: #@ port
protocol: TCP
targetPort: #@ port
selector:
app: #@ name
type: #@ type
#@ end
#@ end
---
--- #@ template.replace(component("hasher", port=80))
--- #@ template.replace(component("redis", repository="library", tag="latest", port=6379))
--- #@ template.replace(component("rng", port=80))
--- #@ template.replace(component("webui", port=80, type="NodePort"))
--- #@ template.replace(component("worker"))
---


@@ -0,0 +1,4 @@
#@data/values-schema
---
repository: dockercoins
tag: v0.1


@@ -0,0 +1,65 @@
---
#@ load("@ytt:data", "data")
#@ load("@ytt:template", "template")
---
#@ def component(name, repository, tag, port=None, type="ClusterIP"):
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ name
container.training/generated-by: ytt
name: #@ name
spec:
replicas: 1
selector:
matchLabels:
app: #@ name
template:
metadata:
labels:
app: #@ name
spec:
containers:
- image: #@ repository + "/" + name + ":" + tag
name: #@ name
#@ if/end port==80:
readinessProbe:
httpGet:
port: #@ port
#@ if port != None:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ name
container.training/generated-by: ytt
name: #@ name
spec:
ports:
- port: #@ port
protocol: TCP
targetPort: #@ port
selector:
app: #@ name
type: #@ type
#@ end
#@ end
---
#@ defaults = {}
#@ for name in data.values:
#@ if name.startswith("_"):
#@ defaults.update(data.values[name])
#@ end
#@ end
---
#@ for name in data.values:
#@ if not name.startswith("_"):
#@ values = dict(name=name)
#@ values.update(defaults)
#@ values.update(data.values[name])
--- #@ template.replace(component(**values))
#@ end
#@ end


@@ -0,0 +1,19 @@
#@data/values-schema
#! Entries starting with an underscore will hold default values.
#! Entries NOT starting with an underscore will generate a Deployment
#! (and a Service if a port number is set).
---
_default_:
repository: dockercoins
tag: v0.1
hasher:
port: 80
redis:
repository: library
tag: latest
rng:
port: 80
webui:
port: 80
type: NodePort
worker: {}


@@ -0,0 +1,26 @@
#@ load("@ytt:data", "data")
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
replicas: 1
selector:
matchLabels:
app: #@ data.values.name
template:
metadata:
labels:
app: #@ data.values.name
spec:
containers:
- image: #@ data.values.repository + "/" + data.values.name + ":" + data.values.tag
name: #@ data.values.name
#@ if/end data.values.port==80:
readinessProbe:
httpGet:
port: #@ data.values.port


@@ -0,0 +1,7 @@
#@data/values-schema
---
name: component
repository: dockercoins
tag: v0.1
port: 0
type: ClusterIP


@@ -0,0 +1,19 @@
#@ load("@ytt:data", "data")
#@ if data.values.port > 0:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
ports:
- port: #@ data.values.port
protocol: TCP
targetPort: #@ data.values.port
selector:
app: #@ data.values.name
type: #@ data.values.type
#@ end


@@ -0,0 +1,20 @@
#@ load("@ytt:data", "data")
#@ load("@ytt:library", "library")
#@ load("@ytt:template", "template")
#@
#@ component = library.get("component")
#@
#@ defaults = {}
#@ for name in data.values:
#@ if name.startswith("_"):
#@ defaults.update(data.values[name])
#@ end
#@ end
#@ for name in data.values:
#@ if not name.startswith("_"):
#@ values = dict(name=name)
#@ values.update(defaults)
#@ values.update(data.values[name])
--- #@ template.replace(component.with_data_values(values).eval())
#@ end
#@ end


@@ -0,0 +1,19 @@
#@data/values-schema
#! Entries starting with an underscore will hold default values.
#! Entries NOT starting with an underscore will generate a Deployment
#! (and a Service if a port number is set).
---
_default_:
repository: dockercoins
tag: v0.1
hasher:
port: 80
redis:
repository: library
tag: latest
rng:
port: 80
webui:
port: 80
type: NodePort
worker: {}


@@ -0,0 +1,26 @@
#@ load("@ytt:data", "data")
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
replicas: 1
selector:
matchLabels:
app: #@ data.values.name
template:
metadata:
labels:
app: #@ data.values.name
spec:
containers:
- image: #@ data.values.repository + "/" + data.values.name + ":" + data.values.tag
name: #@ data.values.name
#@ if/end data.values.port==80:
readinessProbe:
httpGet:
port: #@ data.values.port


@@ -0,0 +1,7 @@
#@data/values-schema
---
name: component
repository: dockercoins
tag: v0.1
port: 0
type: ClusterIP


@@ -0,0 +1,19 @@
#@ load("@ytt:data", "data")
#@ if data.values.port > 0:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
ports:
- port: #@ data.values.port
protocol: TCP
targetPort: #@ data.values.port
selector:
app: #@ data.values.name
type: #@ data.values.type
#@ end


@@ -0,0 +1,20 @@
#@ load("@ytt:data", "data")
#@ load("@ytt:library", "library")
#@ load("@ytt:template", "template")
#@
#@ component = library.get("component")
#@
#@ defaults = {}
#@ for name in data.values:
#@ if name.startswith("_"):
#@ defaults.update(data.values[name])
#@ end
#@ end
#@ for name in data.values:
#@ if not name.startswith("_"):
#@ values = dict(name=name)
#@ values.update(defaults)
#@ values.update(data.values[name])
--- #@ template.replace(component.with_data_values(values).eval())
#@ end
#@ end


@@ -0,0 +1,20 @@
#@ load("@ytt:overlay", "overlay")
#@ def match():
kind: Deployment
metadata:
name: rng
#@ end
#@overlay/match by=overlay.subset(match())
---
spec:
template:
spec:
containers:
#@overlay/match by="name"
- name: rng
readinessProbe:
httpGet:
#@overlay/match missing_ok=True
path: /1


@@ -0,0 +1,19 @@
#@data/values-schema
#! Entries starting with an underscore will hold default values.
#! Entries NOT starting with an underscore will generate a Deployment
#! (and a Service if a port number is set).
---
_default_:
repository: dockercoins
tag: v0.1
hasher:
port: 80
redis:
repository: library
tag: latest
rng:
port: 80
webui:
port: 80
type: NodePort
worker: {}


@@ -0,0 +1,25 @@
#@ load("@ytt:overlay", "overlay")
#@ def match():
kind: Deployment
metadata:
name: worker
#@ end
#! This removes the number of replicas:
#@overlay/match by=overlay.subset(match())
---
spec:
#@overlay/remove
replicas:
#! This overrides it:
#@overlay/match by=overlay.subset(match())
---
spec:
#@overlay/match missing_ok=True
replicas: 10
#! Note that it's not necessary to remove the number of replicas.
#! We're just presenting both options here (for instance, you might
#! want to remove the number of replicas if you're using an HPA).


@@ -2,4 +2,3 @@
base = "slides"
publish = "slides"
command = "./build.sh once"


@@ -1,6 +1,6 @@
resource "random_string" "_" {
length = 4
number = false
numeric = false
special = false
upper = false
}


@@ -53,5 +53,5 @@ variable "location" {
# doctl kubernetes options versions -o json | jq -r .[].slug
variable "k8s_version" {
type = string
default = "1.21.5-do.0"
default = "1.22.8-do.1"
}


@@ -3,7 +3,7 @@ resource "linode_lke_cluster" "_" {
tags = var.common_tags
# "region" is mandatory, so let's provide a default value if none was given.
region = var.location != null ? var.location : "eu-central"
k8s_version = var.k8s_version
k8s_version = local.k8s_version
pool {
type = local.node_type


@@ -51,7 +51,22 @@ variable "location" {
# To view supported versions, run:
# linode-cli lke versions-list --json | jq -r .[].id
data "external" "k8s_version" {
program = [
"sh",
"-c",
<<-EOT
linode-cli lke versions-list --json |
jq -r '{"latest": [.[].id] | sort [-1]}'
EOT
]
}
variable "k8s_version" {
type = string
default = "1.21"
default = ""
}
locals {
k8s_version = var.k8s_version != "" ? var.k8s_version : data.external.k8s_version.result.latest
}


@@ -56,5 +56,5 @@ variable "location" {
# scw k8s version list -o json | jq -r .[].name
variable "k8s_version" {
type = string
default = "1.22.2"
default = "1.23.6"
}


@@ -145,23 +145,15 @@ resource "helm_release" "metrics_server_${index}" {
# but only if it's not already installed.
count = yamldecode(file("./flags.${index}"))["has_metrics_server"] ? 0 : 1
provider = helm.cluster_${index}
repository = "https://charts.bitnami.com/bitnami"
repository = "https://kubernetes-sigs.github.io/metrics-server/"
chart = "metrics-server"
version = "5.8.8"
version = "3.8.2"
name = "metrics-server"
namespace = "metrics-server"
create_namespace = true
set {
name = "apiService.create"
value = "true"
}
set {
name = "extraArgs.kubelet-insecure-tls"
value = "true"
}
set {
name = "extraArgs.kubelet-preferred-address-types"
value = "InternalIP"
name = "args"
value = "{--kubelet-insecure-tls}"
}
}
@@ -201,7 +193,6 @@ resource "tls_private_key" "cluster_admin_${index}" {
}
resource "tls_cert_request" "cluster_admin_${index}" {
key_algorithm = tls_private_key.cluster_admin_${index}.algorithm
private_key_pem = tls_private_key.cluster_admin_${index}.private_key_pem
subject {
common_name = "cluster-admin"


@@ -17,6 +17,7 @@ These tools can help you to create VMs on:
- [Parallel SSH](https://github.com/lilydjwg/pssh)
(should be installable with `pip install git+https://github.com/lilydjwg/pssh`;
on a Mac, try `brew install pssh`)
- [yq](https://github.com/kislyuk/yq)
Depending on the infrastructure that you want to use, you also need to install
the CLI that is specific to that cloud. For OpenStack deployments, you will


@@ -239,6 +239,14 @@ _cmd_docker() {
sudo ln -sfn /mnt/docker /var/lib/docker
fi
# containerd 1.6 breaks Weave.
# See https://github.com/containerd/containerd/issues/6921
sudo tee /etc/apt/preferences.d/containerd <<EOF
Package: containerd.io
Pin: version 1.5.*
Pin-Priority: 1000
EOF
# This will install the latest Docker.
sudo apt-get -qy install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
@@ -427,6 +435,9 @@ EOF
pssh "
if i_am_first_node; then
kubectl apply -f https://raw.githubusercontent.com/jpetazzo/container.training/master/k8s/metrics-server.yaml
#helm upgrade --install metrics-server \
# --repo https://kubernetes-sigs.github.io/metrics-server/ metrics-server \
# --namespace kube-system --set args={--kubelet-insecure-tls}
fi"
}
@@ -595,16 +606,16 @@ EOF
fi"
##VERSION## https://github.com/bitnami-labs/sealed-secrets/releases
KUBESEAL_VERSION=v0.16.0
case $ARCH in
amd64) FILENAME=kubeseal-linux-amd64;;
arm64) FILENAME=kubeseal-arm64;;
*) FILENAME=nope;;
esac
[ "$FILENAME" = "nope" ] || pssh "
KUBESEAL_VERSION=0.17.4
#case $ARCH in
#amd64) FILENAME=kubeseal-linux-amd64;;
#arm64) FILENAME=kubeseal-arm64;;
#*) FILENAME=nope;;
#esac
pssh "
if [ ! -x /usr/local/bin/kubeseal ]; then
curl -fsSLo kubeseal https://github.com/bitnami-labs/sealed-secrets/releases/download/$KUBESEAL_VERSION/$FILENAME &&
sudo install kubeseal /usr/local/bin
curl -fsSL https://github.com/bitnami-labs/sealed-secrets/releases/download/v$KUBESEAL_VERSION/kubeseal-$KUBESEAL_VERSION-linux-$ARCH.tar.gz |
sudo tar -zxvf- -C /usr/local/bin kubeseal
kubeseal --version
fi"
}

View File

@@ -26,12 +26,24 @@ infra_start() {
info " Name: $NAME"
info " Instance type: $LINODE_TYPE"
ROOT_PASS="$(base64 /dev/urandom | cut -c1-20 | head -n 1)"
linode-cli linodes create \
MAX_TRY=5
TRY=1
WAIT=1
while ! linode-cli linodes create \
--type=${LINODE_TYPE} --region=${LINODE_REGION} \
--image=linode/ubuntu18.04 \
--authorized_keys="${LINODE_SSHKEY}" \
--root_pass="${ROOT_PASS}" \
--tags=${TAG} --label=${NAME}
--tags=${TAG} --label=${NAME}; do
warning "Failed to create VM (attempt $TRY/$MAX_TRY)."
if [ $TRY -ge $MAX_TRY ]; then
die "Giving up."
fi
info "Waiting $WAIT seconds and retrying."
sleep $WAIT
TRY=$(($TRY+1))
WAIT=$(($WAIT*2))
done
done
sep

View File

@@ -16,7 +16,7 @@ user_password: training
# For a list of old versions, check:
# https://kubernetes.io/releases/patch-releases/#non-active-branch-history
kubernetes_version: 1.18.20
kubernetes_version: 1.20.15
image:

View File

@@ -1,7 +1,6 @@
# Uncomment and/or edit one of the following lines if necessary.
#/ /kube-halfday.yml.html 200!
#/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200!
/ /kube.yml.html 200!
# And this allows to do "git clone https://container.training".

File diff suppressed because it is too large.

View File

@@ -3,6 +3,7 @@
"version": "0.0.1",
"dependencies": {
"express": "^4.16.2",
"socket.io": "^2.4.0"
"socket.io": "^4.5.1",
"socket.io-client": "^4.5.1"
}
}

View File

@@ -58,7 +58,7 @@ class: pic
- it uses different concepts (Compose services ≠ Kubernetes services)
- it needs a Docker Engine (althought containerd support might be coming)
- it needs a Docker Engine (although containerd support might be coming)
---

View File

@@ -111,7 +111,7 @@ CMD ["python", "app.py"]
RUN wget http://.../foo.tar.gz \
&& tar -zxf foo.tar.gz \
&& mv foo/fooctl /usr/local/bin \
&& rm -rf foo
&& rm -rf foo foo.tar.gz
...
```

View File

@@ -317,9 +317,11 @@ class: extra-details
## Trash your servers and burn your code
*(This is the title of a
[2013 blog post](http://chadfowler.com/2013/06/23/immutable-deployments.html)
[2013 blog post][immutable-deployments]
by Chad Fowler, where he explains the concept of immutable infrastructure.)*
[immutable-deployments]: https://web.archive.org/web/20160305073617/http://chadfowler.com/blog/2013/06/23/immutable-deployments/
--
* Let's majorly mess up our container.

View File

@@ -13,7 +13,7 @@
- ... Or be comfortable spending some time reading the Docker
[documentation](https://docs.docker.com/) ...
- ... And looking for answers in the [Docker forums](forums.docker.com),
- ... And looking for answers in the [Docker forums](https://forums.docker.com),
[StackOverflow](http://stackoverflow.com/questions/tagged/docker),
and other outlets

View File

@@ -0,0 +1,7 @@
## Exercise — Network Policies
- Implement a system with 3 levels of security
(private pods, public pods, namespace pods)
- Apply it to the DockerCoins demo app

View File

@@ -0,0 +1,63 @@
# Exercise — Network Policies
We want to implement a generic network security mechanism.
Instead of creating one policy per service, we want to
create a fixed number of policies, and use a single label
to indicate the security level of our pods.
Then, when adding a new service to the stack, instead
of writing a new network policy for that service, we
only need to add the right label to the pods of that service.
---
## Specifications
We will use the label `security` to classify our pods.
- If `security=private`:
*the pod shouldn't accept any traffic*
- If `security=public`:
*the pod should accept all traffic*
- If `security=namespace`:
*the pod should only accept connections coming from the same namespace*
If `security` isn't set, assume it's `private`.
---
## Test setup
- Deploy a copy of the DockerCoins app in a new namespace
- Modify the pod templates so that:
- `webui` has `security=public`
- `worker` has `security=private`
- `hasher`, `redis`, `rng` have `security=namespace`
---
## Implement and test policies
- Write the network policies
(feel free to draw inspiration from the ones we've seen so far)
- Check that:
- you can connect to the `webui` from outside the cluster
- the application works correctly (shows 3-4 hashes/second)
- you cannot connect to the `hasher`, `redis`, `rng` services
- you cannot connect or even ping the `worker` pods
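One possible solution sketch for these specifications, assuming pods carry the `security` label described above (policy names are illustrative, not part of the exercise):

```yaml
# Default-deny: pods without a security label are treated as "private"
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes: [ Ingress ]
---
# Pods labeled security=public accept traffic from anywhere
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-public
spec:
  podSelector:
    matchLabels:
      security: public
  ingress:
  - {}
---
# Pods labeled security=namespace accept traffic from pods in the same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector:
    matchLabels:
      security: namespace
  ingress:
  - from:
    - podSelector: {}
```

Since network policies are additive, the `allow-public` and `allow-same-namespace` policies open up traffic on top of the default-deny one; `security=private` pods simply match no allow policy.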

View File

@@ -0,0 +1,9 @@
## Exercise — RBAC
- Create two namespaces for users `alice` and `bob`
- Give each user full access to their own namespace
- Give each user read-only access to the other's namespace
- Let `alice` view the nodes of the cluster as well

View File

@@ -0,0 +1,97 @@
# Exercise — RBAC
We want to:
- Create two namespaces for users `alice` and `bob`
- Give each user full access to their own namespace
- Give each user read-only access to the other's namespace
- Let `alice` view the nodes of the cluster as well
---
## Initial setup
- Create two namespaces named `alice` and `bob`
- Check that if we impersonate Alice, we can't access her namespace yet:
```bash
kubectl --as alice get pods --namespace alice
```
---
## Access for Alice
- Grant Alice full access to her own namespace
(you can use a pre-existing Cluster Role)
- Check that Alice can create stuff in her namespace:
```bash
kubectl --as alice create deployment hello --image nginx --namespace alice
```
- But that she can't create stuff in Bob's namespace:
```bash
kubectl --as alice create deployment hello --image nginx --namespace bob
```
---
## Access for Bob
- Similarly, grant Bob full access to his own namespace
- Check that Bob can create stuff in his namespace:
```bash
kubectl --as bob create deployment hello --image nginx --namespace bob
```
- But that he can't create stuff in Alice's namespace:
```bash
kubectl --as bob create deployment hello --image nginx --namespace alice
```
---
## Read-only access
- Now, give Alice read-only access to Bob's namespace
- Check that Alice can view Bob's stuff:
```bash
kubectl --as alice get pods --namespace bob
```
- But that she can't touch this:
```bash
kubectl --as alice delete pods --namespace bob --all
```
- Likewise, give Bob read-only access to Alice's namespace
---
## Nodes
- Give Alice read-only access to the cluster nodes
(this will require creating a custom Cluster Role)
- Check that Alice can view the nodes:
```bash
kubectl --as alice get nodes
```
- But that Bob cannot:
```bash
kubectl --as bob get nodes
```
- And that Alice can't update nodes:
```bash
kubectl --as alice label nodes --all hello=world
```

View File

@@ -168,7 +168,7 @@ class: extra-details
(`O=system:nodes`, `CN=system:node:name-of-the-node`)
- The Kubernetse API can act as a CA
- The Kubernetes API can act as a CA
(by wrapping an X509 CSR into a CertificateSigningRequest resource)
@@ -246,7 +246,7 @@ class: extra-details
(they don't require hand-editing a file and restarting the API server)
- A service account is associated with a set of secrets
- A service account can be associated with a set of secrets
(the kind that you can view with `kubectl get secrets`)
@@ -256,6 +256,28 @@ class: extra-details
---
## Service account tokens evolution
- In Kubernetes 1.21 and above, pods use *bound service account tokens*:
- these tokens are *bound* to a specific object (e.g. a Pod)
- they are automatically invalidated when the object is deleted
- these tokens also expire quickly (e.g. 1 hour) and get rotated automatically
- In Kubernetes 1.24 and above, unbound tokens aren't created automatically
- before 1.24, we would see unbound tokens with `kubectl get secrets`
- with 1.24 and above, these tokens can be created with `kubectl create token`
- ...or with a Secret with the right [type and annotation][create-token]
[create-token]: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#to-create-additional-api-tokens
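Per the Kubernetes documentation linked above, such a Secret looks roughly like this (the Secret name is illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: default-token
  annotations:
    # name of the ServiceAccount to issue a token for
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
```

The token controller then populates the Secret's `data.token` field with a long-lived token for that ServiceAccount.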
---
class: extra-details
## Checking our authentication method
@@ -390,6 +412,10 @@ class: extra-details
It should be named `default-token-XXXXX`.
When running Kubernetes 1.24 and above, this Secret won't exist.
<br/>
Instead, create a token with `kubectl create token default`.
---
class: extra-details

slides/k8s/cainjector.md (new file, 60 lines)
View File

@@ -0,0 +1,60 @@
## CA injector - overview
- The Kubernetes API server can invoke various webhooks:
- conversion webhooks (registered in CustomResourceDefinitions)
- mutation webhooks (registered in MutatingWebhookConfigurations)
- validation webhooks (registered in ValidatingWebhookConfigurations)
- These webhooks must be served over TLS
- These webhooks must use valid TLS certificates
---
## Webhook certificates
- Option 1: certificate issued by a global CA
- doesn't work with internal services
<br/>
(their CN must be `<servicename>.<namespace>.svc`)
- Option 2: certificate issued by private CA + CA certificate in system store
- requires access to the API server certificate store
- generally not doable on managed Kubernetes clusters
- Option 3: certificate issued by private CA + CA certificate in `caBundle`
- pass the CA certificate in `caBundle` field
<br/>
(in CRD or webhook manifests)
- can be managed automatically by cert-manager
---
## CA injector - details
- Add annotation to *injectable* resource
(CustomResourceDefinition, MutatingWebhookConfiguration, ValidatingWebhookConfiguration)
- Annotation refers to the thing holding the certificate:
- `cert-manager.io/inject-ca-from: <namespace>/<certificate>`
- `cert-manager.io/inject-ca-from-secret: <namespace>/<secret>`
- `cert-manager.io/inject-apiserver-ca: true` (use API server CA)
- When injecting from a Secret, the Secret must have a special annotation:
`cert-manager.io/allow-direct-injection: "true"`
- See [cert-manager documentation][docs] for details
[docs]: https://cert-manager.io/docs/concepts/ca-injector/
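For example, an injectable webhook configuration could look like this (resource names and the certificate reference are illustrative; the annotation format is the one documented by cert-manager):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: my-webhook
  annotations:
    # <namespace>/<certificate> of a cert-manager Certificate resource
    cert-manager.io/inject-ca-from: my-namespace/my-webhook-cert
webhooks:
- name: validate.example.com
  admissionReviewVersions: [ v1 ]
  sideEffects: None
  clientConfig:
    # caBundle is filled in automatically by the CA injector
    service:
      name: my-webhook
      namespace: my-namespace
      path: /validate
  rules:
  - apiGroups: [ "" ]
    apiVersions: [ v1 ]
    operations: [ CREATE ]
    resources: [ pods ]
```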

View File

@@ -81,7 +81,7 @@
## What version are we running anyway?
- When I say, "I'm running Kubernetes 1.18", is that the version of:
- When I say, "I'm running Kubernetes 1.20", is that the version of:
- kubectl
@@ -157,15 +157,15 @@
## Kubernetes uses semantic versioning
- Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.18.20:
- Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.20.15:
- MAJOR = 1
- MINOR = 18
- PATCH = 20
- MINOR = 20
- PATCH = 15
- It's always possible to mix and match different PATCH releases
(e.g. 1.18.20 and 1.18.15 are compatible)
(e.g. 1.20.0 and 1.20.15 are compatible)
- It is recommended to run the latest PATCH release
@@ -181,9 +181,9 @@
- All components support a difference of one¹ MINOR version
- This allows live upgrades (since we can mix e.g. 1.18 and 1.19)
- This allows live upgrades (since we can mix e.g. 1.20 and 1.21)
- It also means that going from 1.18 to 1.20 requires going through 1.19
- It also means that going from 1.20 to 1.22 requires going through 1.21
.footnote[¹Except kubelet, which can be up to two MINOR behind API server,
and kubectl, which can be one MINOR ahead or behind API server.]
@@ -254,7 +254,7 @@ and kubectl, which can be one MINOR ahead or behind API server.]
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
```
- Look for the `image:` line, and update it to e.g. `v1.19.0`
- Look for the `image:` line, and update it to e.g. `v1.24.0`
]
@@ -308,11 +308,11 @@ and kubectl, which can be one MINOR ahead or behind API server.]
]
Note 1: kubeadm thinks that our cluster is running 1.19.0.
Note 1: kubeadm thinks that our cluster is running 1.24.0.
<br/>It is confused by our manual upgrade of the API server!
Note 2: kubeadm itself is still version 1.18.20..
<br/>It doesn't know how to upgrade do 1.19.X.
Note 2: kubeadm itself is still version 1.20.15.
<br/>It doesn't know how to upgrade to 1.21.X.
---
@@ -335,28 +335,28 @@ Note 2: kubeadm itself is still version 1.18.20..
]
Problem: kubeadm doesn't know how to handle
upgrades from version 1.18.
upgrades from version 1.20.
This is because we installed version 1.22 (or even later).
This is because we installed version 1.24 (or even later).
We need to install kubeadm version 1.19.X.
We need to install kubeadm version 1.21.X.
---
## Downgrading kubeadm
- We need to go back to version 1.19.X.
- We need to go back to version 1.21.X.
.lab[
- View available versions for package `kubeadm`:
```bash
apt show kubeadm -a | grep ^Version | grep 1.19
apt show kubeadm -a | grep ^Version | grep 1.21
```
- Downgrade kubeadm:
```
sudo apt install kubeadm=1.19.8-00
sudo apt install kubeadm=1.21.0-00
```
- Check what kubeadm tells us:
@@ -366,7 +366,7 @@ We need to install kubeadm version 1.19.X.
]
kubeadm should now agree to upgrade to 1.19.8.
kubeadm should now agree to upgrade to 1.21.X.
---
@@ -464,9 +464,9 @@ kubeadm should now agree to upgrade to 1.19.8.
```bash
for N in 1 2 3; do
ssh oldversion$N "
sudo apt install kubeadm=1.19.8-00 &&
sudo apt install kubeadm=1.21.14-00 &&
sudo kubeadm upgrade node &&
sudo apt install kubelet=1.19.8-00"
sudo apt install kubelet=1.21.14-00"
done
```
]
@@ -475,7 +475,7 @@ kubeadm should now agree to upgrade to 1.19.8.
## Checking what we've done
- All our nodes should now be updated to version 1.19.8
- All our nodes should now be updated to version 1.21.14
.lab[
@@ -492,7 +492,7 @@ class: extra-details
## Skipping versions
- This example worked because we went from 1.18 to 1.19
- This example worked because we went from 1.20 to 1.21
- If you are upgrading from e.g. 1.16, you will have to go through 1.17 first

View File

@@ -14,22 +14,20 @@
## Creating a CRD
- We will create a CRD to represent the different species of coffee
- We will create a CRD to represent different recipes of pizzas
(arabica, liberica, and robusta)
- We will be able to run `kubectl get pizzas` and it will list the recipes
- We will be able to run `kubectl get coffees` and it will list the species
- Creating/deleting recipes won't do anything else
- Then we can label, edit, etc. the species to attach some information
(e.g. the taste profile of the coffee, or whatever we want)
(because we won't implement a *controller*)
---
## First shot of coffee
## First slice of pizza
```yaml
@@INCLUDE[k8s/coffee-1.yaml]
@@INCLUDE[k8s/pizza-1.yaml]
```
---
@@ -48,9 +46,9 @@
---
## Second shot of coffee
## Second slice of pizza
- The next slide will show file @@LINK[k8s/coffee-2.yaml]
- The next slide will show file @@LINK[k8s/pizza-2.yaml]
- Note the `spec.versions` list
@@ -65,20 +63,20 @@
---
```yaml
@@INCLUDE[k8s/coffee-2.yaml]
@@INCLUDE[k8s/pizza-2.yaml]
```
---
## Creating our Coffee CRD
## Baking some pizza
- Let's create the Custom Resource Definition for our Coffee resource
- Let's create the Custom Resource Definition for our Pizza resource
.lab[
- Load the CRD:
```bash
kubectl apply -f ~/container.training/k8s/coffee-2.yaml
kubectl apply -f ~/container.training/k8s/pizza-2.yaml
```
- Confirm that it shows up:
@@ -95,19 +93,57 @@
The YAML below defines a resource using the CRD that we just created:
```yaml
kind: Coffee
kind: Pizza
apiVersion: container.training/v1alpha1
metadata:
name: arabica
name: napolitana
spec:
taste: strong
toppings: [ mozzarella ]
```
.lab[
- Create a few types of coffee beans:
- Try to create a few pizza recipes:
```bash
kubectl apply -f ~/container.training/k8s/coffees.yaml
kubectl apply -f ~/container.training/k8s/pizzas.yaml
```
]
---
## Type validation
- Older versions of Kubernetes will accept our pizza definition as is
- Newer versions, however, will issue warnings about unknown fields
(and if we use `--validate=false`, these fields will simply be dropped)
- We need to improve our OpenAPI schema
(to add e.g. the `spec.toppings` field used by our pizza resources)
---
## Third slice of pizza
- Let's add a full OpenAPI v3 schema to our Pizza CRD
- We'll require a field `spec.sauce` which will be a string
- And a field `spec.toppings` which will have to be a list of strings
.lab[
- Update our pizza CRD:
```bash
kubectl apply -f ~/container.training/k8s/pizza-3.yaml
```
- Load our pizza recipes:
```bash
kubectl apply -f ~/container.training/k8s/pizzas.yaml
```
]
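The schema in that updated CRD could look roughly like this (a sketch based on the requirements above; the actual contents of `k8s/pizza-3.yaml` may differ):

```yaml
schema:
  openAPIV3Schema:
    type: object
    required: [ spec ]
    properties:
      spec:
        type: object
        required: [ sauce ]
        properties:
          sauce:
            type: string
          toppings:
            # must be a list of strings
            type: array
            items:
              type: string
```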
@@ -120,91 +156,48 @@ spec:
.lab[
- View the coffee beans that we just created:
- View the pizza recipes that we just created:
```bash
kubectl get coffees
kubectl get pizzas
```
]
- We'll see in a bit how to improve that
---
## What can we do with CRDs?
There are many possibilities!
- *Operators* encapsulate complex sets of resources
(e.g.: a PostgreSQL replicated cluster; an etcd cluster...
<br/>
see [awesome operators](https://github.com/operator-framework/awesome-operators) and
[OperatorHub](https://operatorhub.io/) to find more)
- Custom use-cases like [gitkube](https://gitkube.sh/)
- creates a new custom type, `Remote`, exposing a git+ssh server
- deploy by pushing YAML or Helm charts to that remote
- Replacing built-in types with CRDs
(see [this lightning talk by Tim Hockin](https://www.youtube.com/watch?v=ji0FWzFwNhA))
---
## What's next?
- Creating a basic CRD is quick and easy
- But there is a lot more that we can (and probably should) do:
- improve input with *data validation*
- improve output with *custom columns*
- And of course, we probably need a *controller* to go with our CRD!
(otherwise, we're just using the Kubernetes API as a fancy data store)
- Let's see how we can improve that display!
---
## Additional printer columns
- We can specify `additionalPrinterColumns` in the CRD
- This is similar to `-o custom-columns`
(map a column name to a path in the object, e.g. `.spec.taste`)
```yaml
- We can tell Kubernetes which columns to show:
```yaml
additionalPrinterColumns:
- jsonPath: .spec.taste
description: Subjective taste of that kind of coffee bean
name: Taste
- jsonPath: .spec.sauce
name: Sauce
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
```
- jsonPath: .spec.toppings
name: Toppings
type: string
```
- There is an updated CRD in @@LINK[k8s/pizza-4.yaml]
---
## Using additional printer columns
- Let's update our CRD using @@LINK[k8s/coffee-3.yaml]
- Let's update our CRD!
.lab[
- Update the CRD:
```bash
kubectl apply -f ~/container.training/k8s/coffee-3.yaml
kubectl apply -f ~/container.training/k8s/pizza-4.yaml
```
- Look at our Coffee resources:
- Look at our Pizza resources:
```bash
kubectl get coffees
kubectl get pizzas
```
]
@@ -215,50 +208,26 @@ Note: we can update a CRD without having to re-create the corresponding resource
---
## Data validation
## Better data validation
- CRDs are validated with the OpenAPI v3 schema that we specify
- Let's change the data schema so that the sauce can only be `red` or `white`
(with older versions of the API, when the schema was optional,
<br/>
no schema = no validation at all)
- This will be implemented by @@LINK[k8s/pizza-5.yaml]
- Otherwise, we can put anything we want in the `spec`
.lab[
- More advanced validation can also be done with admission webhooks, e.g.:
- Update the Pizza CRD:
```bash
kubectl apply -f ~/container.training/k8s/pizza-5.yaml
```
- consistency between parameters
- advanced integer filters (e.g. odd number of replicas)
- things that can change in one direction but not the other
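Restricting the sauce to `red` or `white` can be as small a change as adding an `enum` to the `sauce` property (a sketch; the actual file may differ):

```yaml
sauce:
  type: string
  enum: [ red, white ]
```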
---
## OpenAPI v3 schema example
This is what we have in @@LINK[k8s/coffee-3.yaml]:
```yaml
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
properties:
taste:
description: Subjective taste of that kind of coffee bean
type: string
required: [ taste ]
```
]
---
## Validation *a posteriori*
- Some of the "coffees" that we defined earlier *do not* pass validation
- Some of the pizzas that we defined earlier *do not* pass validation
- How is that possible?
@@ -326,15 +295,23 @@ This is what we have in @@LINK[k8s/coffee-3.yaml]:
---
## What's next?
## Even better data validation
- Generally, when creating a CRD, we also want to run a *controller*
- If we need more complex data validation, we can use a validating webhook
(otherwise nothing will happen when we create resources of that type)
- Use cases:
- The controller will typically *watch* our custom resources
- validating a "version" field for a database engine
(and take action when they are created/updated)
- validating that the number of e.g. coordination nodes is even
- preventing inconsistent or dangerous changes
<br/>
(e.g. major version downgrades)
- checking a key or certificate format or validity
- and much more!
---
@@ -376,6 +353,24 @@ This is what we have in @@LINK[k8s/coffee-3.yaml]:
(unrelated to containers, clusters, etc.)
---
## What's next?
- Creating a basic CRD is relatively straightforward
- But CRDs generally require a *controller* to do anything useful
- The controller will typically *watch* our custom resources
(and take action when they are created/updated)
- Most serious use-cases will also require *validation web hooks*
- When our CRD data format evolves, we'll also need *conversion web hooks*
- Doing all that work manually is tedious; use a framework!
???
:EN:- Custom Resource Definitions (CRDs)

View File

@@ -157,7 +157,7 @@ class: extra-details
(as opposed to, e.g., installing a new release each time we run it)
- Other example: `kubectl -f some-file.yaml`
- Other example: `kubectl apply -f some-file.yaml`
---

View File

@@ -66,7 +66,7 @@
Where do that `repository` and `version` come from?
We're assuming here that we did our reserach,
We're assuming here that we did our research,
or that our resident Helm expert advised us to
use Bitnami's Redis chart.

View File

@@ -167,7 +167,7 @@ Let's try one more round of decoding!
--
... OK, that was *a lot* of binary data. What sould we do with it?
... OK, that was *a lot* of binary data. What should we do with it?
---

View File

@@ -460,9 +460,9 @@ class: extra-details
(i.e. node regularly pinging the control plane to say "I'm alive!")
- For more details, see [KEP-0009] or the [node controller documentation]
- For more details, see [Efficient Node Heartbeats KEP] or the [node controller documentation]
[KEP-0009]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/0009-node-heartbeat.md
[Efficient Node Heartbeats KEP]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/589-efficient-node-heartbeats/README.md
[node controller documentation]: https://kubernetes.io/docs/concepts/architecture/nodes/#node-controller
---

View File

@@ -99,9 +99,9 @@ Pros:
- That Pod will fetch metrics from all our Nodes
- It will expose them through the Kubernetes API agregation layer
- It will expose them through the Kubernetes API aggregation layer
(we won't say much more about that agregation layer; that's fairly advanced stuff!)
(we won't say much more about that aggregation layer; that's fairly advanced stuff!)
---
@@ -128,7 +128,7 @@ Pros:
- `apiService.create=true`
register `metrics-server` with the Kubernetes agregation layer
register `metrics-server` with the Kubernetes aggregation layer
(create an entry that will show up in `kubectl get apiservices`)
@@ -192,7 +192,7 @@ Pros:
- kube-resource-report can generate HTML reports
(https://github.com/hjacobs/kube-resource-report)
(https://codeberg.org/hjacobs/kube-resource-report)
???

View File

@@ -190,19 +190,25 @@ EOF
---
## Making sure that a PV was created for our PVC
## `WaitForFirstConsumer`
- Normally, the `openebs-hostpath` StorageClass created a PV for our PVC
- Did OpenEBS create a PV for our PVC?
.lab[
- Look at the PV and PVC:
- Find out:
```bash
kubectl get pv,pvc
```
]
--
- No!
- This is because that StorageClass uses `WaitForFirstConsumer` instead of `Immediate`
---
## Create a Pod to consume the PV
@@ -231,6 +237,21 @@ EOF
---
## Making sure that a PV was created for our PVC
- At that point, the `openebs-hostpath` StorageClass created a PV for our PVC
.lab[
- Look at the PV and PVC:
```bash
kubectl get pv,pvc
```
]
---
## Verify that data is written on the node
- Let's find the file written by the Pod on the node where the Pod is running
@@ -335,4 +356,4 @@ EOF
:EN:- Deploying stateful apps with OpenEBS
:FR:- Comprendre le "Container Attached Storage" (CAS)
:FR:- Déployer une application "stateful" avec OpenEBS
:FR:- Déployer une application "stateful" avec OpenEBS

View File

@@ -127,7 +127,7 @@ class: extra-details
- either directly
- or by extending the API server
<br/>(for instance by using the agregation layer, like [metrics server](https://github.com/kubernetes-incubator/metrics-server) does)
<br/>(for instance by using the aggregation layer, like [metrics server](https://github.com/kubernetes-incubator/metrics-server) does)
---

View File

@@ -151,3 +151,8 @@ on my needs) to be deployed into its specific Kubernetes Namespace.*
- Improvement idea: this operator could generate *events*
(visible with `kubectl get events` and `kubectl describe`)
???
:EN:- How to write a simple operator with shell scripts
:FR:- Comment écrire un opérateur simple en shell script

View File

@@ -1,19 +1,58 @@
# Operators
The Kubernetes documentation describes the [Operator pattern] as follows:
*Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. Operators follow Kubernetes principles, notably the control loop.*
Another good definition from [CoreOS](https://coreos.com/blog/introducing-operators.html):
*An operator represents **human operational knowledge in software,**
<br/>
to reliably manage an application.
— [CoreOS](https://coreos.com/blog/introducing-operators.html)*
to reliably manage an application.*
Examples:
There are many different use cases spanning different domains; but the general idea is:
- Deploying and configuring replication with MySQL, PostgreSQL ...
*Manage some resources (that reside inside or outside the cluster),
<br/>
using Kubernetes manifests and tooling.*
- Setting up Elasticsearch, Kafka, RabbitMQ, Zookeeper ...
[Operator pattern]: https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
- Reacting to failures when intervention is needed
---
- Scaling up and down these systems
## Some use cases
- Managing external resources ([AWS], [GCP], [KubeVirt]...)
- Setting up database replication or distributed systems
<br/>
(Cassandra, Consul, CouchDB, ElasticSearch, etcd, Kafka, MongoDB, MySQL, PostgreSQL, RabbitMQ, Redis, ZooKeeper...)
- Running and configuring CI/CD
<br/>
([ArgoCD], [Flux]), backups ([Velero]), policies ([Gatekeeper], [Kyverno])...
- Automating management of certificates ([cert-manager])
<br/>
and secrets ([External Secrets Operator], [Sealed Secrets]...)
- Configuration of cluster components ([Istio], [Prometheus])
- etc.
[ArgoCD]: https://github.com/argoproj/argo-cd
[AWS]: https://aws-controllers-k8s.github.io/community/docs/community/services/
[cert-manager]: https://cert-manager.io/
[External Secrets Operator]: https://external-secrets.io/
[Flux]: https://fluxcd.io/
[Gatekeeper]: https://open-policy-agent.github.io/gatekeeper/website/docs/
[GCP]: https://github.com/paulczar/gcp-cloud-compute-operator
[Istio]: https://istio.io/latest/docs/setup/install/operator/
[KubeVirt]: https://kubevirt.io/
[Kyverno]: https://kyverno.io/
[Prometheus]: https://prometheus-operator.dev/
[Sealed Secrets]: https://github.com/bitnami-labs/sealed-secrets
[Velero]: https://velero.io/
---
@@ -37,7 +76,7 @@ Examples:
---
## Why use operators?
## Operators for e.g. replicated databases
- Kubernetes gives us Deployments, StatefulSets, Services ...
@@ -59,38 +98,6 @@ Examples:
---
## Use-cases for operators
- Systems with primary/secondary replication
Examples: MariaDB, MySQL, PostgreSQL, Redis ...
- Systems where different groups of nodes have different roles
Examples: ElasticSearch, MongoDB ...
- Systems with complex dependencies (that are themselves managed with operators)
Examples: Flink or Kafka, which both depend on Zookeeper
---
## More use-cases
- Representing and managing external resources
(Example: [AWS S3 Operator](https://operatorhub.io/operator/awss3-operator-registry))
- Managing complex cluster add-ons
(Example: [Istio operator](https://operatorhub.io/operator/istio))
- Deploying and managing our applications' lifecycles
(more on that later)
---
## How operators work
- An operator creates one or more CRDs
@@ -105,38 +112,6 @@ Examples:
---
## Deploying our apps with operators
- It is very simple to deploy with `kubectl create deployment` / `kubectl expose`
- We can unlock more features by writing YAML and using `kubectl apply`
- Kustomize or Helm let us deploy in multiple environments
(and adjust/tweak parameters in each environment)
- We can also use an operator to deploy our application
---
## Pros and cons of deploying with operators
- The app definition and configuration is persisted in the Kubernetes API
- Multiple instances of the app can be manipulated with `kubectl get`
- We can add labels, annotations to the app instances
- Our controller can execute custom code for any lifecycle event
- However, we need to write this controller
- We need to be careful about changes
(what happens when the resource `spec` is updated?)
---
## Operators are not magic
- Look at this ElasticSearch resource definition:

View File

@@ -6,7 +6,7 @@
- Easier to use
(doesn't require complex interaction bewteen policies and RBAC)
(doesn't require complex interaction between policies and RBAC)
---
@@ -206,7 +206,7 @@ class: extra-details
- If new namespaces are created, they will get default permissions
- We can change that be using an *admission configuration*
- We can change that by using an *admission configuration*
- Step 1: write an "admission configuration file"
@@ -232,7 +232,7 @@ Let's use @@LINK[k8s/admission-configuration.yaml]:
- For convenience, let's copy it to `/etc/kubernetes/pki`
(it's definitely where it *should* be, but that'll do!)
(it's definitely not where it *should* be, but that'll do!)
.lab[

View File

@@ -697,13 +697,13 @@ class: extra-details
- gives PromQL expressions to compute good values
<br/>(our app needs to be running for a while)
- [Kube Resource Report](https://github.com/hjacobs/kube-resource-report/)
- [Kube Resource Report](https://codeberg.org/hjacobs/kube-resource-report)
- generates web reports on resource usage
- [static demo](https://hjacobs.github.io/kube-resource-report/sample-report/output/index.html)
|
[live demo](https://kube-resource-report.demo.j-serv.de/applications.html)
- [nsinjector](https://github.com/blakelead/nsinjector)
- controller to automatically populate a Namespace when it is created
???

View File

@@ -1,14 +1,18 @@
# Rolling updates
- By default (without rolling updates), when a scaled resource is updated:
- How should we update a running application?
- new pods are created
- Strategy 1: delete old version, then deploy new version
- old pods are terminated
(not great, because it obviously causes downtime!)
- ... all at the same time
- Strategy 2: deploy new version, then delete old version
- if something goes wrong, ¯\\\_(ツ)\_/¯
(uses a lot of resources; also how do we shift traffic?)
- Strategy 3: replace running pods one at a time
(sounds interesting; and good news, Kubernetes does it for us!)
---

View File

@@ -54,9 +54,7 @@
- The official installation is done through a single YAML file
- There is also a Helm chart if you prefer that
(if you're using Kubernetes 1.22+, see next slide!)
- There is also a Helm chart if you prefer that (see next slide!)
<!-- #VERSION# -->
@@ -66,7 +64,7 @@
.small[
```bash
kubectl apply -f \
https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.16.0/controller.yaml
https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.17.5/controller.yaml
```
]
@@ -80,15 +78,9 @@ If you change that, you will also need to inform `kubeseal` later on.
class: extra-details
## Sealed Secrets on Kubernetes 1.22
## Installing with Helm
- As of version 0.16, Sealed Secrets manifests uses RBAC v1beta1
- RBAC v1beta1 isn't supported anymore in Kubernetes 1.22
- Sealed Secerets Helm chart provides manifests using RBAC v1
- Conclusion: to install Sealed Secrets on Kubernetes 1.22, use the Helm chart:
- The Sealed Secrets controller can be installed like this:
```bash
helm install --repo https://bitnami-labs.github.io/sealed-secrets/ \
@@ -336,4 +328,4 @@ class: extra-details
???
:EN:- The Sealed Secrets Operator
:FR:- L'opérateur *Sealed Secrets*
:FR:- L'opérateur *Sealed Secrets*

View File

@@ -72,7 +72,7 @@
## Accessing private repositories
- Let's see how to access an image on private registry!
- Let's see how to access an image on a private registry!
- These images are protected by a username + password
@@ -243,7 +243,7 @@ class: extra-details
## Encryption at rest
- It is possible to [encrypted secrets at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/)
- It is possible to [encrypt secrets at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/)
- This means that secrets will be safe if someone ...

View File

@@ -210,25 +210,54 @@ Ah, right ...
## Running Tilt on a remote machine
- If Tilt runs remotely, we can't access http://localhost:10350
- If Tilt runs remotely, we can't access `http://localhost:10350`
- Our Tiltfile includes an ngrok tunnel, let's use that
- We'll need to tell Tilt to listen to `0.0.0.0`
- Start Tilt:
```bash
tilt up
```
(instead of just `localhost`)
- The ngrok URL should appear in the Tilt output
- If we run Tilt in a Pod, we need to expose port 10350 somehow
(something like `https://xxxx-aa-bb-cc-dd.ngrok.io/`)
- Open that URL in your browser
*Note: it's also possible to run `tilt up --host=0.0.0.0`.*
(and Tilt needs to listen on `0.0.0.0`, too)
---
## Telling Tilt to listen on `0.0.0.0`
- This can be done with the `--host` flag:
```bash
tilt up --host=0.0.0.0
```
- Or by setting the `TILT_HOST` environment variable:
```bash
export TILT_HOST=0.0.0.0
tilt up
```
---
## Running Tilt in a Pod
If you use `shpod`, you can use the following command:
```bash
kubectl patch service shpod --namespace shpod -p "
spec:
  ports:
  - name: tilt
    port: 10350
    targetPort: 10350
    nodePort: 30150
    protocol: TCP
"
```
Then connect to port 30150 on any of your nodes.
If you use something other than `shpod`, adapt these instructions!
---
class: extra-details
## Kubernetes contexts

slides/k8s/ytt.md Normal file
View File

@@ -0,0 +1,635 @@
# YTT
- YAML Templating Tool
- Part of [Carvel]
(a set of tools for Kubernetes application building, configuration, and deployment)
- Can be used for any YAML
(Kubernetes, Compose, CI pipelines...)
[Carvel]: https://carvel.dev/
---
## Features
- Manipulate data structures, not text (≠ Helm)
- Deterministic, hermetic execution
- Define variables, blocks, functions
- Write code in Starlark (dialect of Python)
- Define and override values (Helm-style)
- Patch resources arbitrarily (Kustomize-style)
---
## Getting started
- Install `ytt` ([binary download][download])
- Start with one (or multiple) Kubernetes YAML files
*(without comments; no `#` allowed at this point!)*
- `ytt -f one.yaml -f two.yaml | kubectl apply -f-`
- `ytt -f. | kubectl apply -f-`
[download]: https://github.com/vmware-tanzu/carvel-ytt/releases/latest
---
## No comments?!?
- Replace `#` with `#!`
- `#@` is used by ytt
- It's a kind of template tag, for instance:
```yaml
#! This is a comment
#@ a = 42
#@ b = "*"
a: #@ a
b: #@ b
operation: multiply
result: #@ a*b
```
- `#@` at the beginning of a line = instruction
- `#@` somewhere else = value
---
## Building strings
- Concatenation:
```yaml
#@ repository = "dockercoins"
#@ tag = "v0.1"
containers:
- name: worker
  image: #@ repository + "/worker:" + tag
```
- Formatting:
```yaml
#@ repository = "dockercoins"
#@ tag = "v0.1"
containers:
- name: worker
  image: #@ "{}/worker:{}".format(repository, tag)
```
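Starlark strings behave like Python strings, so both snippets can be sanity-checked in plain Python (a quick sketch, not run through ytt itself):

```python
# Both slide snippets compute the same image reference;
# Starlark's string concatenation and .format() mirror Python's.
repository = "dockercoins"
tag = "v0.1"

concatenated = repository + "/worker:" + tag
formatted = "{}/worker:{}".format(repository, tag)

assert concatenated == formatted == "dockercoins/worker:v0.1"
```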
---
## Defining functions
- Reusable functions can be written in Starlark (=Python)
- Blocks (`def`, `if`, `for`...) must be terminated with `#@ end`
- Example:
```yaml
#@ def image(component, repository="dockercoins", tag="v0.1"):
#@   return "{}/{}:{}".format(repository, component, tag)
#@ end
containers:
- name: worker
  image: #@ image("worker")
- name: hasher
  image: #@ image("hasher")
```
---
## Structured data
- Functions can return complex types
- Example: defining a common set of labels
```yaml
#@ name = "worker"
#@ def labels(component):
#@   return {
#@     "app": component,
#@     "container.training/generated-by": "ytt",
#@   }
#@ end
kind: Pod
apiVersion: v1
metadata:
  name: #@ name
  labels: #@ labels(name)
```
---
## YAML functions
- Function body can also be straight YAML:
```yaml
#@ name = "worker"
#@ def labels(component):
app: #@ component
container.training/generated-by: ytt
#@ end
kind: Pod
apiVersion: v1
metadata:
  name: #@ name
  labels: #@ labels(name)
```
- The return type of the function is then a [YAML fragment][fragment]
[fragment]: https://carvel.dev/ytt/docs/v0.41.0/
---
## More YAML functions
- We can load library functions:
```yaml
#@ load("@ytt:sha256", "sha256")
```
- This is (sort of) equivalent to `from ytt.sha256 import sha256`
- Functions can contain a mix of code and YAML fragment:
```yaml
#@ load("@ytt:sha256", "sha256")
#@ def annotations():
#@   author = "Jérôme Petazzoni"
author: #@ author
author_hash: #@ sha256.sum(author)[:8]
#@ end
annotations: #@ annotations()
```
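ytt's `sha256.sum` returns the hex digest of its input string, so the truncated hash above can be reproduced with Python's standard library (a sketch for illustration; verify against your ytt version):

```python
import hashlib

author = "Jérôme Petazzoni"

# sha256.sum() yields the hex digest; the slide keeps the first 8 characters
author_hash = hashlib.sha256(author.encode("utf-8")).hexdigest()[:8]

# the result is always 8 lowercase hex characters
assert len(author_hash) == 8
assert all(c in "0123456789abcdef" for c in author_hash)
```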
---
## Data values
- We can define a *schema* in a separate file:
```yaml
#@data/values-schema
--- #! there must be a "---" here!
repository: dockercoins
tag: v0.1
```
- This defines the data values (=customizable parameters),
as well as their *types* and *default values*
- Technically, `#@data/values-schema` is an annotation,
and it applies to a YAML document; so the following
element must be a YAML document
- This is conceptually similar to Helm's *values* file
<br/>
(but with type enforcement as a bonus)
---
## Using data values
- Requires loading `@ytt:data`
- Values are then available in `data.values`
- Example:
```yaml
#@ load("@ytt:data", "data")
#@ def image(component):
#@   return "{}/{}:{}".format(data.values.repository, component, data.values.tag)
#@ end
#@ name = "worker"
containers:
- name: #@ name
  image: #@ image(name)
```
---
## Overriding data values
- There are many ways to set and override data values:
- plain YAML files
- data value overlays
- environment variables
- command-line flags
- Precedence of the different methods is defined in the [docs]
[docs]: https://carvel.dev/ytt/docs/v0.41.0/ytt-data-values/#data-values-merge-order
---
## Values in plain YAML files
- Content of `values.yaml`:
```yaml
tag: latest
```
- Values get merged with `--data-values-file`:
```bash
ytt -f config/ --data-values-file values.yaml
```
- Multiple files can be specified
- These files can also be URLs!
---
## Data value overlay
- Content of `values.yaml`:
```yaml
#@data/values
--- #! must have --- here
tag: latest
```
- Values get merged by being specified like "normal" files:
```bash
ytt -f config/ -f values.yaml
```
- Multiple files can be specified
---
## Set a value with a flag
- Set a string value:
```bash
ytt -f config/ --data-value tag=latest
```
- Set a YAML value (useful to parse it as e.g. integer, boolean...):
```bash
ytt -f config/ --data-value-yaml replicas=10
```
- Read a string value from a file:
```bash
ytt -f config/ --data-value-file ca_cert=cert.pem
```
---
## Set values from environment variables
- Set environment variables with a prefix:
```bash
export VAL_tag=latest
export VAL_repository=ghcr.io/dockercoins
```
- Use the variables as strings:
```bash
ytt -f config/ --data-values-env VAL
```
- Or parse them as YAML:
```bash
ytt -f config/ --data-values-env-yaml VAL
```
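A rough Python sketch of the prefix convention: every variable starting with `VAL_` becomes a data value named after the rest of the variable name (simplified; real ytt also handles nested keys, and `--data-values-env-yaml` additionally parses the values as YAML):

```python
import os

os.environ["VAL_tag"] = "latest"
os.environ["VAL_repository"] = "ghcr.io/dockercoins"
os.environ["UNRELATED"] = "ignored"

def data_values_env(prefix):
    # keep variables matching "<prefix>_", strip the prefix to get the key
    marker = prefix + "_"
    return {name[len(marker):]: value
            for name, value in os.environ.items()
            if name.startswith(marker)}

values = data_values_env("VAL")
assert values == {"tag": "latest", "repository": "ghcr.io/dockercoins"}
```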
---
## Lines starting with `#@`
- This generates an empty document:
```yaml
#@ def hello():
hello: world
#@ end
#@ hello()
```
- Do this instead:
```yaml
#@ def hello():
hello: world
#@ end
--- #@ hello()
```
---
## Generating multiple documents, take 1
- This won't work:
```yaml
#@ def app():
kind: Deployment
apiVersion: apps/v1
--- #! separate from next document
kind: Service
apiVersion: v1
#@ end
--- #@ app()
```
---
## Generating multiple documents, take 2
- This won't work either:
```yaml
#@ def app():
--- #! the initial separator indicates "this is a Document Set"
kind: Deployment
apiVersion: apps/v1
--- #! separate from next document
kind: Service
apiVersion: v1
#@ end
--- #@ app()
```
---
## Generating multiple documents, take 3
- We must use the `template` module:
```yaml
#@ load("@ytt:template", "template")
#@ def app():
--- #! the initial separator indicates "this is a Document Set"
kind: Deployment
apiVersion: apps/v1
--- #! separate from next document
kind: Service
apiVersion: v1
#@ end
--- #@ template.replace(app())
```
- `template.replace(...)` is the only way (?) to replace one element with many
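A loose Python analogy for what `template.replace(...)` does: it splices the returned document set into the enclosing stream, instead of nesting it as a single value (this mimics the semantics only, not ytt's implementation):

```python
def app():
    # a "document set": several documents returned together
    return [{"kind": "Deployment"}, {"kind": "Service"}]

# with replace: one element becomes many (like list.extend)
documents = []
documents.extend(app())
assert documents == [{"kind": "Deployment"}, {"kind": "Service"}]

# without replace: one element containing a nested list (like list.append)
nested = []
nested.append(app())
assert nested == [[{"kind": "Deployment"}, {"kind": "Service"}]]
```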
---
## Libraries
- A reusable ytt configuration can be transformed into a library
- Put it in a subdirectory named `_ytt_lib/whatever`, then:
```yaml
#@ load("@ytt:library", "library")
#@ load("@ytt:template", "template")
#@ whatever = library.get("whatever")
#@ my_values = {"tag": "latest", "registry": "..."}
#@ output = whatever.with_data_values(my_values).eval()
--- #@ template.replace(output)
```
- The `with_data_values()` step is optional, but useful to "configure" the library
- Note the whole combo:
```yaml
template.replace(library.get("...").with_data_values(...).eval())
```
---
## Overlays
- Powerful, but complex, but powerful! 💥
- Define transformations that are applied after generating the whole document set
- General idea:
- select YAML nodes to be transformed with an `#@overlay/match` decorator
- write a YAML snippet with the modifications to be applied
<br/>
(a bit like a strategic merge patch)
---
## Example
```yaml
#@ load("@ytt:overlay", "overlay")
#@ selector = {"kind": "Deployment", "metadata": {"name": "worker"}}
#@overlay/match by=overlay.subset(selector)
---
spec:
  replicas: 10
```
- By default, `#@overlay/match` must find *exactly* one match
(that can be changed by specifying `expects=...`, `missing_ok=True`... see [docs])
- By default, the specified fields (here, `spec.replicas`) must exist
(that can also be changed by annotating the optional fields)
[docs]: https://carvel.dev/ytt/docs/v0.41.0/lang-ref-ytt-overlay/#overlaymatch
---
## Matching using a YAML document
```yaml
#@ load("@ytt:overlay", "overlay")
#@ def match():
kind: Deployment
metadata:
  name: worker
#@ end
#@overlay/match by=overlay.subset(match())
---
spec:
  replicas: 10
```
- This is equivalent to the subset match of the previous slide
- It will find YAML nodes having all the listed fields
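Subset matching can be pictured as a recursive "does this node contain all the selector's fields?" check; a hypothetical Python version (for intuition only, not ytt's actual code):

```python
def subset_match(node, selector):
    # True if every field in the selector exists in the node with the
    # same value, recursing into nested mappings
    if isinstance(selector, dict):
        return (isinstance(node, dict) and
                all(key in node and subset_match(node[key], want)
                    for key, want in selector.items()))
    return node == selector

deployment = {"kind": "Deployment",
              "metadata": {"name": "worker", "labels": {"app": "worker"}},
              "spec": {"replicas": 3}}

# matches: all listed fields are present with the right values
assert subset_match(deployment, {"kind": "Deployment",
                                 "metadata": {"name": "worker"}})
# doesn't match: the name differs
assert not subset_match(deployment, {"metadata": {"name": "hasher"}})
```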
---
## Removing a field
```yaml
#@ load("@ytt:overlay", "overlay")
#@ def match():
kind: Deployment
metadata:
  name: worker
#@ end
#@overlay/match by=overlay.subset(match())
---
spec:
  #@overlay/remove
  replicas:
```
- This would remove the `replicas:` field from a specific Deployment spec
- This could be used e.g. when enabling autoscaling
---
## Selecting multiple nodes
```yaml
#@ load("@ytt:overlay", "overlay")
#@ def match():
kind: Deployment
#@ end
#@overlay/match by=overlay.subset(match()), expects="1+"
---
spec:
  #@overlay/remove
  replicas:
```
- This would match all Deployments
<br/>
(assuming that *at least one* exists)
- It would remove the `replicas:` field from their spec
<br/>
(the field must exist!)
---
## Adding a field
```yaml
#@ load("@ytt:overlay", "overlay")
#@overlay/match by=overlay.all, expects="1+"
---
metadata:
  #@overlay/match missing_ok=True
  annotations:
    #@overlay/match expects=0
    rainbow: 🌈
```
- `#@overlay/match missing_ok=True`
<br/>
*will match whether our resources already have annotations or not*
- `#@overlay/match expects=0`
<br/>
*will only match if the `rainbow` annotation doesn't exist*
<br/>
*(to make sure that we don't override/replace an existing annotation)*
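Together, these two annotations behave like an "add only if absent" merge; a rough Python analogy (hypothetical helper, not part of ytt):

```python
def add_annotation(resource, key, value):
    # missing_ok=True: create the annotations mapping if it's absent
    annotations = (resource.setdefault("metadata", {})
                           .setdefault("annotations", {}))
    # expects=0: only add the key if it doesn't exist yet
    if key not in annotations:
        annotations[key] = value
    return resource

bare = {"metadata": {}}
add_annotation(bare, "rainbow", "🌈")
assert bare["metadata"]["annotations"]["rainbow"] == "🌈"

# an existing annotation is left untouched
existing = {"metadata": {"annotations": {"rainbow": "already-here"}}}
add_annotation(existing, "rainbow", "🌈")
assert existing["metadata"]["annotations"]["rainbow"] == "already-here"
```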
---
## Overlays vs data values
- The documentation has a [detailed discussion][docs] about this question
- In short:
- values = for parameters that are exposed to the user
- overlays = for arbitrary extra modifications
- Values are easier to use (use them when possible!)
- Fall back to overlays when values don't expose what you need
(keeping in mind that overlays are harder to write/understand/maintain)
[docs]: https://carvel.dev/ytt/docs/v0.41.0/data-values-vs-overlays/
---
## Gotchas
- Reminder: put your `#@` at the right place!
```yaml
#! This will generate "hello, world!"
--- #@ "{}, {}!".format("hello", "world")
```
```yaml
#! But this will generate an empty document
---
#@ "{}, {}!".format("hello", "world")
```
- Also, don't use YAML anchors (`*foo` and `&foo`)
- They don't mix well with ytt
- Remember to use `template.replace(...)` when generating multiple nodes
(or to update lists or arrays without replacing them entirely)
---
## Next steps with ytt
- Read this documentation page about [injecting secrets][secrets]
- Check the [FAQ], it gives some insights about what's possible with ytt
- Exercise idea: write an overlay that will find all ConfigMaps mounted in Pods...
...and annotate the Pod with a hash of the ConfigMap
[FAQ]: https://carvel.dev/ytt/docs/v0.41.0/faq/
[secrets]: https://carvel.dev/ytt/docs/v0.41.0/injecting-secrets/
???
:EN:- YTT
:FR:- YTT

View File

@@ -1,13 +1,12 @@
title: |
Kubernetes
Thoughtworks Infrastructure
(Starring: Kubernetes!)
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "[Mattermost](https://live.container.training/mattermost)"
chat: "[thoughtworks-infrastructure Slack](https://skillsmatter.slack.com/archives/C03E90W6Z6U)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2022-03-live.container.training/
slides: https://2022-08-thoughtworks.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
@@ -19,113 +18,28 @@ content:
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
#- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- shared/toc.md
- # DAY 1
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
#- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
- k8s/kubectl-run.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- exercises/k8sfundamentals-details.md
- k8s/ourapponkube.md
- # DAY 2
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- k8s/namespaces.md
- k8s/yamldeploy.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/authoring-yaml.md
#- k8s/setup-overview.md
- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
#- k8s/kubectlproxy.md
- exercises/localcluster-details.md
- # DAY 3
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/ingress.md
- exercises/healthchecks-details.md
- exercises/ingress-details.md
#- k8s/ingress-advanced.md
#- k8s/ingress-tls.md
- # DAY 4
- #1
- k8s/demo-apps.md
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/volumes.md
- k8s/configuration.md
- k8s/secrets.md
- exercises/appconfig-details.md
- # DAY 5
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
#- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
#- k8s/helm-dependencies.md
#- k8s/helm-values-schema-validation.md
#- k8s/helm-secrets.md
#- k8s/exercise-helm.md
#- k8s/gitlab.md
#- k8s/portworx.md
#- k8s/openebs.md
#- k8s/stateful-failover.md
#- k8s/extending-api.md
#- k8s/admission.md
#- k8s/operators.md
#- k8s/operators-design.md
#- k8s/operators-example.md
#- k8s/staticpods.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
#- k8s/dashboard.md
#- k8s/kubectlscale.md
#- k8s/healthchecks-more.md
#- k8s/record.md
#- k8s/csr-api.md
#- k8s/openid-connect.md
#- k8s/pod-security-intro.md
#- k8s/pod-security-policies.md
#- k8s/pod-security-admission.md
#- k8s/logs-centralized.md
#- k8s/prometheus.md
#- k8s/prometheus-stack.md
- shared/thankyou.md
-
- exercises/netpol-details.md
- exercises/rbac-details.md
- #2
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/kubectlproxy.md
- k8s/setup-devel.md
- exercises/localcluster-details.md
- exercises/healthchecks-details.md
- #3
- |
# (Extra content)
- k8s/k9s.md
- k8s/tilt.md
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md
- k8s/batch-jobs.md
- shared/thankyou.md
# (Extra material)
- k8s/deploymentslideshow.md

View File

@@ -2,32 +2,24 @@
- Hello! I'm Jérôme Petazzoni ([@jpetazzo])
- The training will run for 4 hours, with a break in the middle
- We'll have two 2-hour workshops
- Feel free to interrupt for questions at any time! 💬
(August 17th and 24th)
- Live feedback, questions, help: @@CHAT@@
- We'll do a short 5-minute break in the middle of each workshop
- Feel free to interrupt for questions at any time!
- Live feedback, questions, help, useful links:
@@CHAT@@
- I'll be available on that Slack channel after the workshop, too!
<!-- -->
[@alexbuisine]: https://twitter.com/alexbuisine
[EphemeraSearch]: https://ephemerasearch.com/
[@jpetazzo]: https://twitter.com/jpetazzo
[@s0ulshake]: https://twitter.com/s0ulshake
[Quantgene]: https://www.quantgene.com/
---
## Exercises
- At the end of each day, there is a series of exercises
- To make the most out of the training, please try the exercises!
(it will help to practice and memorize the content of the day)
- We recommend taking at least one hour to work on the exercises
(if you understood the content of the day, it will be much faster)
- Each day will start with a quick review of the exercises of the previous day
(note: that review will happen *before* the start of the training!)

View File

@@ -1 +1 @@
3.7
3.8

View File

@@ -1,4 +1,4 @@
# Pre-requirements
## Pre-requirements
- Be comfortable with the UNIX command line