Compare commits


111 Commits

Author SHA1 Message Date
Jerome Petazzoni
870f27eb81 📃 Last updates 2021-02-28 21:34:18 +01:00
Jerome Petazzoni
25d216ee06 Merge branch 'main' into 2021-02-enix 2021-02-28 21:19:14 +01:00
Jerome Petazzoni
6303b67b86 🐞 Fix missing resource name in Kyverno examples 2021-02-27 19:52:07 +01:00
Jerome Petazzoni
4f3bb9beb2 🔑 Add Cilium and Tufin web tools to generate and view network policies 2021-02-27 19:48:38 +01:00
Jerome Petazzoni
1f34da55b3 🔑 Add details about etcd security 2021-02-27 19:13:50 +01:00
Jerome Petazzoni
12747106ea Merge branch 'main' into 2021-02-enix 2021-02-24 22:35:35 +01:00
Jerome Petazzoni
f30792027f 🔧 Minor tweaks and improvements 2021-02-24 22:35:25 +01:00
Jerome Petazzoni
95a9dfd215 🔗 Update wrong slides and gitter links 2021-02-24 21:43:02 +01:00
Jerome Petazzoni
74679ab77e 💻️ Update Scaleway deployment scripts 2021-02-24 21:41:30 +01:00
Jerome Petazzoni
71ce2eb31a 🔧 Fix args example 2021-02-24 18:22:47 +01:00
Jerome Petazzoni
eb96dd21bb ✂️ Remove ctr.run 2021-02-24 14:20:09 +01:00
Jerome Petazzoni
e82d2812aa 🔑 Explain how to use imagePullSecrets 2021-02-23 21:44:57 +01:00
Jerome Petazzoni
9c8c3ef537 📊 Update Helm stable chart and add deprecation warning 2021-02-22 22:30:19 +01:00
Jerome Petazzoni
2f2948142a ↔️ Update DNS map script 2021-02-22 21:35:02 +01:00
Jerome Petazzoni
3590b55af2 📃 Add M3 and M4 to TOC 2021-02-21 22:09:45 +01:00
Jerome Petazzoni
1de30d4c73 Merge branch 'main' into 2021-02-enix 2021-02-21 16:29:58 +01:00
Jerome Petazzoni
2516b2d32b 🐞 Fix Helm command in Prom deploy 2021-02-21 16:29:49 +01:00
Jerome Petazzoni
adf8e25835 📓 Update M3 and M4 content 2021-02-21 16:29:20 +01:00
Jerome Petazzoni
d77d48e57d Merge branch 'main' into 2021-02-enix 2021-02-21 15:12:10 +01:00
Jerome Petazzoni
42f4b65c87 🧪 Add GitLab chapter 2021-02-21 15:12:00 +01:00
Jerome Petazzoni
9992b9f402 Merge branch 'main' into 2021-02-enix 2021-02-20 11:51:56 +01:00
Jerome Petazzoni
989a62b5ff 🔎 Extra details about CPU limits 2021-02-20 11:51:45 +01:00
Jerome Petazzoni
4b155c397c Merge branch 'main' into 2021-02-enix 2021-02-18 09:18:34 +01:00
Jerome Petazzoni
b5eb59ab80 🐞 Fix missing closing triple-backquote 2021-02-18 09:18:23 +01:00
Jerome Petazzoni
197e5637c3 Merge branch 'main' into 2021-02-enix 2021-02-15 22:19:56 +01:00
Jerome Petazzoni
10920509c3 Add diagrams showing the different k8s network layers 2021-02-15 22:19:45 +01:00
Jerome Petazzoni
c952b8098f Remove links to sections that haven't been finalized yet 2021-02-07 21:54:01 +01:00
Jerome Petazzoni
79819ef7c0 Merge branch 'main' into 2021-02-enix 2021-02-07 21:52:25 +01:00
Jerome Petazzoni
955149e019 Add Tilt section 2021-02-07 21:44:38 +01:00
Jerome Petazzoni
111ff30c38 Add k9s section 2021-02-07 21:41:08 +01:00
Jerome Petazzoni
fb1cd06d63 Commit Enix Feb 2021 content, part 1 2021-02-07 21:38:02 +01:00
Jérôme Petazzoni
6c038a5d33 Merge pull request #578 from otomato-gh/volumeSnapshotsInfo
Update volumeSnapshot link and status
2021-02-05 09:35:39 +01:00
Anton Weiss
6737a20840 Update volumeSnapshot link and status 2021-01-31 12:18:09 +02:00
Jérôme Petazzoni
1d1060a319 Merge pull request #577 from jpetazzo/dependabot/npm_and_yarn/slides/autopilot/socket.io-2.4.0
Bump socket.io from 2.0.4 to 2.4.0 in /slides/autopilot
2021-01-26 08:01:45 -06:00
dependabot[bot]
93e9a60634 Bump socket.io from 2.0.4 to 2.4.0 in /slides/autopilot
Bumps [socket.io](https://github.com/socketio/socket.io) from 2.0.4 to 2.4.0.
- [Release notes](https://github.com/socketio/socket.io/releases)
- [Changelog](https://github.com/socketio/socket.io/blob/2.4.0/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io/compare/2.0.4...2.4.0)

Signed-off-by: dependabot[bot] <support@github.com>
2021-01-20 23:13:24 +00:00
Jerome Petazzoni
de2c0e72c3 Add 2021 high five sessions 2021-01-13 00:41:59 -06:00
Jerome Petazzoni
41204c948b 📃 Add Kubernetes internal APIs 2021-01-05 16:12:36 -06:00
Jerome Petazzoni
553b1f7871 Expand secrets section 2021-01-04 21:14:23 -06:00
Jerome Petazzoni
bd168f7676 Diametrally doesn't seem to be an English word
Thanks Peter Uys for letting me know :)
2020-12-11 17:07:42 +01:00
Jérôme Petazzoni
3a527649d1 Merge pull request #576 from hvariant/patch-1
fix typo
2020-12-08 23:05:26 +01:00
hvariant
ecbbcf8b51 fix typo 2020-12-05 12:26:43 +11:00
Jerome Petazzoni
29edb1aefe Minor tweaks after 1st NR session 2020-11-30 00:29:05 +01:00
Jerome Petazzoni
bd3c91f342 Update udemy promo codes 2020-11-23 12:26:04 +01:00
jsubirat
fa709f0cb4 Update kyverno.md
Adds missing `pod`s in the commands
2020-11-19 17:29:12 +01:00
jsubirat
543b44fb29 Update kyverno.md
Adds missing `pod` in the command
2020-11-19 17:28:54 +01:00
Jerome Petazzoni
536a9cc44b Update advanced TOC 2020-11-15 22:06:49 +01:00
Jerome Petazzoni
2ff3d88bab typo 2020-11-15 22:06:38 +01:00
Jerome Petazzoni
295ee9b6b4 Add warning about using CSR API for user certs 2020-11-15 19:29:45 +01:00
Jerome Petazzoni
17c5f6de01 Add cert-manager section 2020-11-15 19:29:35 +01:00
Jerome Petazzoni
556dbb965c Add networking.k8s.io permissions to Traefik v2 2020-11-15 18:44:17 +01:00
Jerome Petazzoni
32250f8053 Update section about swap with cgroups v2 info 2020-11-15 16:44:18 +01:00
Jerome Petazzoni
bdede6de07 Add aggregation layer details 2020-11-14 20:57:27 +01:00
Jerome Petazzoni
eefdc21488 Add details about /status 2020-11-14 19:10:04 +01:00
Jerome Petazzoni
e145428910 Add notes about backups 2020-11-14 14:39:43 +01:00
Jerome Petazzoni
76789b6113 Add Sealed Secrets 2020-11-14 14:35:49 +01:00
Jerome Petazzoni
f9660ba9dc Add kubebuilder tutorial 2020-11-13 18:46:16 +01:00
Jerome Petazzoni
c2497508f8 Add API server deep dive 2020-11-13 15:08:15 +01:00
Jerome Petazzoni
b5d3b213b1 Update CRD section 2020-11-13 12:50:55 +01:00
Jerome Petazzoni
b4c76ad11d Add CNI deep dive 2020-11-12 13:37:33 +01:00
Jerome Petazzoni
b251ff3812 --output-watch-events 2020-11-11 22:46:20 +01:00
Jerome Petazzoni
ede4ea0dd5 Add note about GVK 2020-11-11 21:17:54 +01:00
Jerome Petazzoni
2ab06c6dfd Add events section 2020-11-11 20:51:33 +01:00
Jerome Petazzoni
3a01deb039 Add section on finalizers 2020-11-11 15:05:33 +01:00
Jerome Petazzoni
b88f63e1f7 Update Docker Desktop and k3d instructions
Fixes #572
2020-11-10 17:55:02 +01:00
Jerome Petazzoni
918311ac51 Separate CRD and ECK; reorganize API extension chapter 2020-11-10 17:43:08 +01:00
Jerome Petazzoni
73e8110f09 Tweak 2020-11-10 17:43:08 +01:00
Jerome Petazzoni
ecb5106d59 Add provenance of default RBAC rules 2020-11-10 17:43:08 +01:00
Jérôme Petazzoni
e4d8cd4952 Merge pull request #573 from wrekone/master
Update ingress.md
2020-11-05 06:51:09 +01:00
Ben
c4aedbd327 Update ingress.md
fix typo
2020-11-04 20:19:34 -08:00
Jerome Petazzoni
2fb3584b1b Small update about selectors 2020-11-03 21:59:04 +01:00
Jerome Petazzoni
cb90cc9a1e Rename images 2020-10-31 11:32:16 +01:00
Jerome Petazzoni
bf28dff816 Add HPA v2 content using Prometheus Adapter 2020-10-30 17:55:46 +01:00
Jerome Petazzoni
b5cb871c69 Update Prometheus chart location 2020-10-29 17:39:14 +01:00
Jerome Petazzoni
aa8f538574 Add example to generate certs with local CA 2020-10-29 14:53:42 +01:00
Jerome Petazzoni
ebf2e23785 Add info about advanced label selectors 2020-10-29 12:32:01 +01:00
Jerome Petazzoni
0553a1ba8b Add chapter on Kyverno 2020-10-28 00:00:32 +01:00
Jerome Petazzoni
9d47177028 Add activeDeadlineSeconds explanation 2020-10-27 11:11:29 +01:00
Jerome Petazzoni
9d4a035497 Add Kompose, Skaffold, and Tilt. Move tools to a separate kubetools action. 2020-10-27 10:58:31 +01:00
Jerome Petazzoni
6fe74cb35c Add note about 'kubectl describe ns' 2020-10-24 16:23:36 +02:00
Jerome Petazzoni
43aa41ed51 Add note to remap_nodeports command 2020-10-24 16:23:21 +02:00
Jerome Petazzoni
f6e810f648 Add k9s and popeye 2020-10-24 11:27:33 +02:00
Jerome Petazzoni
4c710d6826 Add Krew support 2020-10-23 21:19:27 +02:00
Jerome Petazzoni
410c98399e Use empty values by default
This allows content rendering with an almost-empty YAML file
2020-10-22 14:13:11 +02:00
Jerome Petazzoni
19c9843a81 Add admission webhook content 2020-10-22 14:12:32 +02:00
Jerome Petazzoni
69d084e04a Update PSP (runtime/default instead of docker/default) 2020-10-20 22:11:26 +02:00
Jerome Petazzoni
1300d76890 Update dashboard content 2020-10-20 21:19:08 +02:00
Jerome Petazzoni
0040313371 Bump up admin clusters scripts 2020-10-20 16:53:24 +02:00
Jerome Petazzoni
c9e04b906d Bump up k8s bins; add 'k' alias and completion 2020-10-20 16:53:24 +02:00
Jérôme Petazzoni
41f66f4144 Merge pull request #571 from bbaassssiiee/bugfix/typo
typo: should read: characters
2020-10-20 11:29:32 +02:00
Bas Meijer
aced587fd0 characters 2020-10-20 11:03:59 +02:00
Jerome Petazzoni
749b3d1648 Add survey form 2020-10-13 16:05:33 +02:00
Jérôme Petazzoni
c40cc71bbc Merge pull request #570 from fc92/patch-2
update server-side dry run for recent kubectl
2020-10-11 23:22:28 +02:00
Jérôme Petazzoni
69b775ef27 Merge pull request #569 from fc92/patch-1
Update dashboard.md
2020-10-11 23:20:51 +02:00
fc92
3bfc14c5f7 update server-side dry run for recent kubectl
Error message:
$ kubectl apply -f web.yaml --server-dry-run --validate=false -o yaml
Error: unknown flag: --server-dry-run
See 'kubectl apply --help' for usage.

Doc:
      --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be
sent, without sending it. If server strategy, submit server-side request without persisting the resource.
2020-10-10 23:07:45 +02:00
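The commit above replaces the removed `--server-dry-run` flag with the `--dry-run=server` form documented in the quoted help text. A sketch of the updated invocation (reusing the same hypothetical `web.yaml`):

```shell
# Old form (flag removed in recent kubectl):
#   kubectl apply -f web.yaml --server-dry-run --validate=false -o yaml
# New form, per the kubectl documentation quoted above:
kubectl apply -f web.yaml --dry-run=server --validate=false -o yaml
```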
fc92
97984af8a2 Update dashboard.md
Kube Ops View URL changed to
2020-10-10 22:12:21 +02:00
Jérôme Petazzoni
9b31c45899 Merge pull request #567 from christianbumann/patch-1
Add description for the -f flag
2020-10-08 08:37:26 +02:00
Jérôme Petazzoni
c0db28d439 Merge pull request #568 from christianbumann/patch-2
Fix typo
2020-10-08 08:36:38 +02:00
Jérôme Petazzoni
0e49bfa837 Merge pull request #566 from tullo/master
fix backend svc name in cheeseplate ingress
2020-10-08 08:36:11 +02:00
Christian Bumann
fc9c0a6285 Update Container_Network_Model.md 2020-10-08 08:16:53 +02:00
Christian Bumann
d4914fa168 Fix typo 2020-10-08 08:14:59 +02:00
Christian Bumann
e4edd9445c Add description for the -f flag 2020-10-07 14:00:19 +02:00
Andreas Amstutz
ba7deefce5 fix k8s version 2020-10-05 12:06:26 +02:00
Andreas
be104f1b44 fix backend svc name in cheeseplate ingress 2020-10-05 12:02:31 +02:00
Jerome Petazzoni
5c329b0b79 Bump versions 2020-10-04 20:59:36 +02:00
Jerome Petazzoni
78ffd22499 Typo fix 2020-10-04 15:53:40 +02:00
Jerome Petazzoni
33174a1682 Add clean command 2020-09-27 16:25:37 +02:00
Jerome Petazzoni
d402a2ea93 Add tailhist 2020-09-24 17:00:52 +02:00
Jerome Petazzoni
1fc3abcffd Add jid (JSON explorer tool) 2020-09-24 11:52:03 +02:00
Jerome Petazzoni
c1020f24b1 Add Ingress TLS chapter 2020-09-15 17:44:05 +02:00
Jerome Petazzoni
4fc81209d4 Skip comments in domain file 2020-09-14 17:43:11 +02:00
Jerome Petazzoni
ed841711c5 Fix 'list' command 2020-09-14 16:58:55 +02:00
114 changed files with 13520 additions and 1045 deletions

dockercoins/Tiltfile

@@ -0,0 +1,49 @@
k8s_yaml(blob('''
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: registry
  name: registry
spec:
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - image: registry
        name: registry
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: registry
  name: registry
spec:
  ports:
  - port: 5000
    protocol: TCP
    targetPort: 5000
    nodePort: 30555
  selector:
    app: registry
  type: NodePort
'''))
default_registry('localhost:30555')
docker_build('dockercoins/hasher', 'hasher')
docker_build('dockercoins/rng', 'rng')
docker_build('dockercoins/webui', 'webui')
docker_build('dockercoins/worker', 'worker')
k8s_yaml('../k8s/dockercoins.yaml')
# Uncomment the following line to let tilt run with the default kubeadm cluster-admin context.
#allow_k8s_contexts('kubernetes-admin@kubernetes')
# While we're here: if you're controlling a remote cluster, uncomment that line.
# It will create a port forward so that you can access the remote registry.
#k8s_resource(workload='registry', port_forwards='30555:5000')
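A minimal way to exercise this Tiltfile (assuming Tilt is installed and kubectl points at a suitable cluster) is to run Tilt from the dockercoins directory:

```shell
# Start Tilt; it deploys the in-cluster registry defined above,
# builds the four dockercoins images, pushes them via localhost:30555,
# applies ../k8s/dockercoins.yaml, and rebuilds on file changes.
tilt up
```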

k8s/certbot.yaml

@@ -0,0 +1,33 @@
kind: Service
apiVersion: v1
metadata:
  name: certbot
spec:
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: certbot
spec:
  rules:
  - http:
      paths:
      - path: /.well-known/acme-challenge/
        backend:
          serviceName: certbot
          servicePort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: certbot
subsets:
- addresses:
  - ip: A.B.C.D
  ports:
  - port: 8000
    protocol: TCP

k8s/cm-certificate.yaml

@@ -0,0 +1,11 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: xyz.A.B.C.D.nip.io
spec:
  secretName: xyz.A.B.C.D.nip.io
  dnsNames:
  - xyz.A.B.C.D.nip.io
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer

k8s/cm-clusterissuer.yaml

@@ -0,0 +1,18 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Remember to update this if you use this manifest to obtain real certificates :)
    email: hello@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # To use the production environment, use the following line instead:
    #server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: issuer-letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: traefik
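After applying the ClusterIssuer and Certificate manifests above, one way to watch cert-manager do its work (resource names taken from those manifests; `A.B.C.D` remains a placeholder address) is:

```shell
# Check issuance progress; the READY column flips to True once the
# staging certificate has been obtained.
kubectl get certificate xyz.A.B.C.D.nip.io
kubectl describe certificate xyz.A.B.C.D.nip.io
# The issued key pair lands in the Secret named by spec.secretName.
kubectl get secret xyz.A.B.C.D.nip.io
```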


@@ -0,0 +1,336 @@
# This file is based on the following manifest:
# https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended.yaml
# It adds a ServiceAccount that has cluster-admin privileges on the cluster,
# and exposes the dashboard on a NodePort. It makes it easier to do quick demos
# of the Kubernetes dashboard, without compromising the security too much.
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster", "dashboard-metrics-scraper"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
  verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: kubernetesui/dashboard:v2.0.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --auto-generate-certificates
        - --namespace=kubernetes-dashboard
        # Uncomment the following line to manually specify Kubernetes API server Host
        # If not specified, Dashboard will attempt to auto discover the API server and connect
        # to it. Uncomment only if the default does not work.
        # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsUser: 1001
          runAsGroup: 2001
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
      - name: dashboard-metrics-scraper
        image: kubernetesui/metrics-scraper:v1.0.4
        ports:
        - containerPort: 8000
          protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 8000
          initialDelaySeconds: 30
          timeoutSeconds: 30
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsUser: 1001
          runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      volumes:
      - name: tmp-volume
        emptyDir: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: cluster-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: cluster-admin
  namespace: kubernetes-dashboard

k8s/event-node.yaml

@@ -0,0 +1,30 @@
kind: Event
apiVersion: v1
metadata:
  generateName: hello-
  labels:
    container.training/test: ""
#eventTime: "2020-07-04T00:00:00.000000Z"
#firstTimestamp: "2020-01-01T00:00:00.000000Z"
#lastTimestamp: "2020-12-31T00:00:00.000000Z"
#count: 42
involvedObject:
  kind: Node
  apiVersion: v1
  name: kind-control-plane
  # Note: the uid should be the Node name (not the uid of the Node).
  # This might be specific to global objects.
  uid: kind-control-plane
type: Warning
reason: NodeOverheat
message: "Node temperature exceeds critical threshold"
action: Hello
source:
  component: thermal-probe
  #host: node1
#reportingComponent: ""
#reportingInstance: ""

k8s/event-pod.yaml

@@ -0,0 +1,36 @@
kind: Event
apiVersion: v1
metadata:
  # One convention is to use <objectname>.<timestamp>,
  # where the timestamp is taken with a nanosecond
  # precision and expressed in hexadecimal.
  # Example: web-5dcb957ccc-fjvzc.164689730a36ec3d
  name: hello.1234567890
  # The label doesn't serve any purpose, except making
  # it easier to identify or delete that specific event.
  labels:
    container.training/test: ""
#eventTime: "2020-07-04T00:00:00.000000Z"
#firstTimestamp: "2020-01-01T00:00:00.000000Z"
#lastTimestamp: "2020-12-31T00:00:00.000000Z"
#count: 42
involvedObject:
  ### These 5 lines should be updated to refer to an object.
  ### Make sure to put the correct "uid", because it is what
  ### "kubectl describe" is using to gather relevant events.
  #apiVersion: v1
  #kind: Pod
  #name: magic-bean
  #namespace: blue
  #uid: 7f28fda8-6ef4-4580-8d87-b55721fcfc30
type: Normal
reason: BackupSuccessful
message: "Object successfully dumped to gitops repository"
source:
  component: gitops-sync
#reportingComponent: ""
#reportingInstance: ""
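A sketch of how the event manifests above might be exercised (using `k8s/event-node.yaml`, since it is complete as-is; `kubectl create` is used rather than `apply` because the manifest relies on `generateName`):

```shell
# Create a synthetic Event attached to the kind-control-plane Node:
kubectl create -f k8s/event-node.yaml
# The event then shows up in listings and in "kubectl describe node":
kubectl get events --field-selector reason=NodeOverheat
```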


@@ -0,0 +1,29 @@
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta2
metadata:
  name: rng
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: rng
  minReplicas: 1
  maxReplicas: 20
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 180
  metrics:
  - type: Object
    object:
      describedObject:
        apiVersion: v1
        kind: Service
        name: httplat
      metric:
        name: httplat_latency_seconds
      target:
        type: Value
        value: 0.1
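Once this HPA exists and the custom `httplat_latency_seconds` metric is served (via Prometheus Adapter, per the "Add HPA v2 content" commit above), scaling activity can be observed with standard kubectl commands (resource names taken from the manifest):

```shell
# Watch current metric value, target, and replica count change over time:
kubectl get hpa rng --watch
# Show scaling events and the behavior/stabilization settings in effect:
kubectl describe hpa rng
```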


@@ -3,6 +3,10 @@ kind: Ingress
 metadata:
   name: whatever
 spec:
+  #tls:
+  #- secretName: whatever.A.B.C.D.nip.io
+  #  hosts:
+  #  - whatever.A.B.C.D.nip.io
   rules:
   - host: whatever.A.B.C.D.nip.io
     http:


@@ -0,0 +1,63 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: setup-namespace
spec:
  rules:
  - name: setup-limitrange
    match:
      resources:
        kinds:
        - Namespace
    generate:
      kind: LimitRange
      name: default-limitrange
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          limits:
          - type: Container
            min:
              cpu: 0.1
              memory: 0.1
            max:
              cpu: 2
              memory: 2Gi
            default:
              cpu: 0.25
              memory: 500Mi
            defaultRequest:
              cpu: 0.25
              memory: 250Mi
  - name: setup-resourcequota
    match:
      resources:
        kinds:
        - Namespace
    generate:
      kind: ResourceQuota
      name: default-resourcequota
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          hard:
            requests.cpu: "10"
            requests.memory: 10Gi
            limits.cpu: "20"
            limits.memory: 20Gi
  - name: setup-networkpolicy
    match:
      resources:
        kinds:
        - Namespace
    generate:
      kind: NetworkPolicy
      name: default-networkpolicy
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          podSelector: {}
          ingress:
          - from:
            - podSelector: {}


@@ -0,0 +1,22 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: pod-color-policy-1
spec:
  validationFailureAction: enforce
  rules:
  - name: ensure-pod-color-is-valid
    match:
      resources:
        kinds:
        - Pod
        selector:
          matchExpressions:
          - key: color
            operator: Exists
          - key: color
            operator: NotIn
            values: [ red, green, blue ]
    validate:
      message: "If it exists, the label color must be red, green, or blue."
      deny: {}


@@ -0,0 +1,21 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: pod-color-policy-2
spec:
  validationFailureAction: enforce
  background: false
  rules:
  - name: prevent-color-change
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Once label color has been added, it cannot be changed."
      deny:
        conditions:
        - key: "{{ request.oldObject.metadata.labels.color }}"
          operator: NotEqual
          value: "{{ request.object.metadata.labels.color }}"


@@ -0,0 +1,25 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: pod-color-policy-3
spec:
  validationFailureAction: enforce
  background: false
  rules:
  - name: prevent-color-removal
    match:
      resources:
        kinds:
        - Pod
        selector:
          matchExpressions:
          - key: color
            operator: DoesNotExist
    validate:
      message: "Once label color has been added, it cannot be removed."
      deny:
        conditions:
        - key: "{{ request.oldObject.metadata.labels.color }}"
          operator: NotIn
          value: []
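With the three pod-color policies above loaded, one way to check enforcement (the pod name `testpod` is just an example) is to create a labeled pod and then try to change the label:

```shell
# Valid color, accepted by pod-color-policy-1:
kubectl run testpod --image=nginx --labels=color=blue
# Changing the color should be denied by pod-color-policy-2
# ("Once label color has been added, it cannot be changed."):
kubectl label pod testpod color=red --overwrite
# Removing it should be denied by pod-color-policy-3:
kubectl label pod testpod color-
```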


@@ -5,8 +5,8 @@ metadata:
   annotations:
     apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
     apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
-    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
-    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
+    seccomp.security.alpha.kubernetes.io/allowedProfileNames: runtime/default
+    seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
   name: restricted
 spec:
   allowPrivilegeEscalation: false

k8s/test.yaml

@@ -0,0 +1,17 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whatever
spec:
  #tls:
  #- secretName: whatever.A.B.C.D.nip.io
  #  hosts:
  #  - whatever.A.B.C.D.nip.io
  rules:
  - host: whatever.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: whatever
          servicePort: 1234


@@ -50,8 +50,10 @@ spec:
        - --api.insecure
        - --log.level=INFO
        - --metrics.prometheus
        - --providers.kubernetescrd
        - --providers.kubernetesingress
        - --entrypoints.http.Address=:80
        - --entrypoints.https.Address=:443
        - --entrypoints.https.http.tls.certResolver=default
---
kind: Service
apiVersion: v1
@@ -96,6 +98,15 @@ rules:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  - ingressclasses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1


@@ -1 +1 @@
-traefik-v1.yaml
+traefik-v2.yaml


@@ -1 +1,3 @@
 INFRACLASS=scaleway
+#SCW_INSTANCE_TYPE=DEV1-L
+#SCW_ZONE=fr-par-2


@@ -43,6 +43,16 @@ _cmd_cards() {
    info "$0 www"
}

_cmd clean "Remove information about stopped clusters"
_cmd_clean() {
    for TAG in tags/*; do
        if grep -q ^stopped$ "$TAG/status"; then
            info "Removing $TAG..."
            rm -rf "$TAG"
        fi
    done
}

_cmd deploy "Install Docker on a bunch of running VMs"
_cmd_deploy() {
    TAG=$1
@@ -157,7 +167,7 @@ _cmd_kubebins() {
    fi
    if ! [ -x hyperkube ]; then
        ##VERSION##
-        curl -L https://dl.k8s.io/v1.18.8/kubernetes-server-linux-amd64.tar.gz \
+        curl -L https://dl.k8s.io/v1.18.10/kubernetes-server-linux-amd64.tar.gz \
        | sudo tar --strip-components=3 -zx \
          kubernetes/server/bin/kube{ctl,let,-proxy,-apiserver,-scheduler,-controller-manager}
    fi
@@ -194,7 +204,9 @@ _cmd_kube() {
    pssh --timeout 200 "
    sudo apt-get update -q &&
    sudo apt-get install -qy kubelet$EXTRA_APTGET kubeadm$EXTRA_APTGET kubectl$EXTRA_APTGET &&
-    kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
+    kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl &&
+    echo 'alias k=kubectl' | sudo tee /etc/bash_completion.d/k &&
+    echo 'complete -F __start_kubectl k' | sudo tee -a /etc/bash_completion.d/k"

    # Initialize kube master
    pssh --timeout 200 "
@@ -233,6 +245,12 @@ _cmd_kube() {
    if i_am_first_node; then
        kubectl apply -f https://raw.githubusercontent.com/jpetazzo/container.training/master/k8s/metrics-server.yaml
    fi"
}

_cmd kubetools "Install a bunch of CLI tools for Kubernetes"
_cmd_kubetools() {
    TAG=$1
    need_tag
    # Install kubectx and kubens
    pssh "
@@ -294,7 +312,54 @@ EOF"
        sudo chmod +x /usr/local/bin/aws-iam-authenticator
    fi"
    sep "Done"

    # Install the krew package manager
    pssh "
    if [ ! -d /home/docker/.krew ]; then
        cd /tmp &&
        curl -fsSL https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.tar.gz |
        tar -zxf- &&
        sudo -u docker -H ./krew-linux_amd64 install krew &&
        echo export PATH=/home/docker/.krew/bin:\\\$PATH | sudo -u docker tee -a /home/docker/.bashrc
    fi"

    # Install k9s and popeye
    pssh "
    if [ ! -x /usr/local/bin/k9s ]; then
        FILENAME=k9s_\$(uname -s)_\$(uname -m).tar.gz &&
        curl -sSL https://github.com/derailed/k9s/releases/latest/download/\$FILENAME |
        sudo tar -zxvf- -C /usr/local/bin k9s
    fi
    if [ ! -x /usr/local/bin/popeye ]; then
        FILENAME=popeye_\$(uname -s)_\$(uname -m).tar.gz &&
        curl -sSL https://github.com/derailed/popeye/releases/latest/download/\$FILENAME |
        sudo tar -zxvf- -C /usr/local/bin popeye
    fi"

    # Install Tilt
    pssh "
    if [ ! -x /usr/local/bin/tilt ]; then
        curl -fsSL https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.sh | bash
    fi"

    # Install Skaffold
    pssh "
    if [ ! -x /usr/local/bin/skaffold ]; then
        curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 &&
        sudo install skaffold /usr/local/bin/
    fi"

    # Install Kompose
    pssh "
    if [ ! -x /usr/local/bin/kompose ]; then
        curl -Lo kompose https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-amd64 &&
        sudo install kompose /usr/local/bin
    fi"

    pssh "
    if [ ! -x /usr/local/bin/kubeseal ]; then
        curl -Lo kubeseal https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.13.1/kubeseal-linux-amd64 &&
        sudo install kubeseal /usr/local/bin
    fi"
}

_cmd kubereset "Wipe out Kubernetes configuration on all nodes"
@@ -338,10 +403,22 @@ _cmd_ips() {
    done < tags/$TAG/ips.txt
}

-_cmd list "List available groups for a given infrastructure"
+_cmd list "List all VMs on a given infrastructure (or all infras if no arg given)"
_cmd_list() {
-    need_infra $1
-    infra_list
+    case "$1" in
+    "")
+        for INFRA in infra/*; do
+            $0 list $INFRA
+        done
+        ;;
+    */example.*)
+        ;;
+    *)
+        need_infra $1
+        sep "Listing instances for $1"
+        infra_list
+        ;;
+    esac
}

_cmd listall "List VMs running on all configured infrastructures"
@@ -471,6 +548,17 @@ _cmd_remap_nodeports() {
    if i_am_first_node && ! grep -q '$ADD_LINE' $MANIFEST_FILE; then
        sudo sed -i 's/\($FIND_LINE\)\$/\1\n$ADD_LINE/' $MANIFEST_FILE
    fi"
    info "If you have manifests hard-coding nodePort values,"
    info "you might want to patch them with a command like:"
    info "
    if i_am_first_node; then
        kubectl -n kube-system patch svc prometheus-server \\
        -p 'spec: { ports: [ {port: 80, nodePort: 10101} ]}'
    fi
    "
}

_cmd quotas "Check our infrastructure quotas (max instances)"
@@ -568,6 +656,8 @@ _cmd_start() {
    done
    sep
    info "Deployment successful."
+    info "To log into the first machine of that batch, you can run:"
+    info "$0 ssh $TAG"
    info "To terminate these instances, you can run:"
    info "$0 stop $TAG"
}
@@ -631,8 +721,8 @@ _cmd_helmprom() {
    need_tag
    pssh "
    if i_am_first_node; then
-        sudo -u docker -H helm repo add stable https://kubernetes-charts.storage.googleapis.com/
-        sudo -u docker -H helm install prometheus stable/prometheus \
+        sudo -u docker -H helm repo add prometheus-community https://prometheus-community.github.io/helm-charts/
+        sudo -u docker -H helm install prometheus prometheus-community/prometheus \
        --namespace kube-system \
        --set server.service.type=NodePort \
        --set server.service.nodePort=30090 \


@@ -3,7 +3,8 @@ if ! command -v aws >/dev/null; then
fi
infra_list() {
aws_display_tags
aws ec2 describe-instances --output json |
jq -r '.Reservations[].Instances[] | [.InstanceId, .ClientToken, .State.Name, .InstanceType ] | @tsv'
}
infra_quotas() {


@@ -5,6 +5,13 @@ if ! [ -f ~/.config/hcloud/cli.toml ]; then
warn "~/.config/hcloud/cli.toml not found."
fi
infra_list() {
[ "$(hcloud server list -o json)" = "null" ] && return
hcloud server list -o json |
jq -r '.[] | [.id, .name , .status, .server_type.name] | @tsv'
}
infra_start() {
COUNT=$1


@@ -1,3 +1,8 @@
infra_list() {
openstack server list -f json |
jq -r '.[] | [.ID, .Name , .Status, .Flavor] | @tsv'
}
infra_start() {
COUNT=$1
@@ -44,5 +49,5 @@ oscli_get_instances_json() {
oscli_get_ips_by_tag() {
TAG=$1
oscli_get_instances_json $TAG |
jq -r .[].Networks | cut -d= -f2 | cut -d, -f1 | grep . || true
jq -r .[].Networks | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' || true
}
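The updated `oscli_get_ips_by_tag` above replaces the `cut`-based parsing with a plain IPv4 regex applied to the `Networks` field. A minimal Python sketch of the same extraction (the sample `Networks` values are made up for illustration):

```python
import re

# Hypothetical "Networks" fields as returned by `openstack server list`;
# the shell version extracts addresses with grep -oE and the same pattern.
networks = ["internal=10.0.0.4, 203.0.113.7", "ext-net=198.51.100.23"]

ipv4 = re.compile(r"[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+")
ips = [ip for field in networks for ip in ipv4.findall(field)]
print(ips)  # ['10.0.0.4', '203.0.113.7', '198.51.100.23']
```

Note that, unlike the old `cut -d= -f2 | cut -d, -f1` pipeline (which kept only the first address of the first network), the regex approach collects every IPv4 address in the field.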


@@ -5,12 +5,17 @@ if ! [ -f ~/.config/scw/config.yaml ]; then
warn "~/.config/scw/config.yaml not found."
fi
SCW_INSTANCE_TYPE=${SCW_INSTANCE_TYPE-DEV1-M}
SCW_ZONE=${SCW_ZONE-fr-par-1}
infra_list() {
scw instance server list -o json |
jq -r '.[] | [.id, .name, .state, .commercial_type] | @tsv'
}
infra_start() {
COUNT=$1
SCW_INSTANCE_TYPE=${SCW_INSTANCE_TYPE-DEV1-M}
SCW_ZONE=${SCW_ZONE-fr-par-1}
for I in $(seq 1 $COUNT); do
NAME=$(printf "%s-%03d" $TAG $I)
sep "Starting instance $I/$COUNT"
@@ -31,16 +36,16 @@ infra_stop() {
scw_get_ids_by_tag $TAG | wc -l
info "Deleting instances..."
scw_get_ids_by_tag $TAG |
xargs -n1 -P10 -I@@ \
scw instance server delete force-shutdown=true server-id=@@
xargs -n1 -P10 \
scw instance server delete zone=${SCW_ZONE} force-shutdown=true with-ip=true
}
scw_get_ids_by_tag() {
TAG=$1
scw instance server list name=$TAG -o json | jq -r .[].id
scw instance server list zone=${SCW_ZONE} name=$TAG -o json | jq -r .[].id
}
scw_get_ips_by_tag() {
TAG=$1
scw instance server list name=$TAG -o json | jq -r .[].public_ip.address
scw instance server list zone=${SCW_ZONE} name=$TAG -o json | jq -r .[].public_ip.address
}


@@ -114,7 +114,7 @@ system("sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /e
system("sudo service ssh restart")
system("sudo apt-get -q update")
system("sudo apt-get -qy install git jq")
system("sudo apt-get -qy install git jid jq")
system("sudo apt-get -qy install emacs-nox joe")
#######################


@@ -2,11 +2,11 @@
"""
There are two ways to use this script:
1. Pass a tag name as a single argument.
It will then take the clusters corresponding to that tag, and assign one
domain name per cluster. Currently it gets the domains from a hard-coded
path. There should be more domains than clusters.
Example: ./map-dns.py 2020-08-15-jp
1. Pass a file name and a tag name as two arguments.
It will load a list of domains from the given file (one per line),
and assign them to the clusters corresponding to that tag.
There should be more domains than clusters.
Example: ./map-dns.py domains.txt 2020-08-15-jp
2. Pass a domain as the 1st argument, and IP addresses then.
It will configure the domain with the listed IP addresses.
@@ -19,54 +19,53 @@ import requests
import sys
import yaml
# configurable stuff
domains_file = "../../plentydomains/domains.txt"
# This can be tweaked if necessary.
config_file = os.path.join(
os.environ["HOME"], ".config/gandi/config.yaml")
tag = None
os.environ["HOME"], ".config/gandi/config.yaml")
apiurl = "https://dns.api.gandi.net/api/v5/domains"
if len(sys.argv) == 2:
tag = sys.argv[1]
domains = open(domains_file).read().split()
ips = open(f"tags/{tag}/ips.txt").read().split()
settings_file = f"tags/{tag}/settings.yaml"
clustersize = yaml.safe_load(open(settings_file))["clustersize"]
else:
domains = [sys.argv[1]]
ips = sys.argv[2:]
clustersize = len(ips)
# inferred stuff
apikey = yaml.safe_load(open(config_file))["apirest"]["key"]
# now do the fucking work
while domains and ips:
domain = domains[0]
domains = domains[1:]
cluster = ips[:clustersize]
ips = ips[clustersize:]
print(f"{domain} => {cluster}")
zone = ""
node = 0
for ip in cluster:
node += 1
zone += f"@ 300 IN A {ip}\n"
zone += f"* 300 IN A {ip}\n"
zone += f"node{node} 300 IN A {ip}\n"
r = requests.put(
f"{apiurl}/{domain}/records",
headers={"x-api-key": apikey},
data=zone)
print(r.text)
# Figure out if we're called for a bunch of domains, or just one.
first_arg = sys.argv[1]
if os.path.isfile(first_arg):
domains = open(first_arg).read().split()
domains = [ d for d in domains if not d.startswith('#') ]
tag = sys.argv[2]
ips = open(f"tags/{tag}/ips.txt").read().split()
settings_file = f"tags/{tag}/settings.yaml"
clustersize = yaml.safe_load(open(settings_file))["clustersize"]
else:
domains = [first_arg]
ips = sys.argv[2:]
clustersize = len(ips)
#r = requests.get(
# f"{apiurl}/{domain}/records",
# headers={"x-api-key": apikey},
# )
# Now, do the work.
while domains and ips:
domain = domains[0]
domains = domains[1:]
cluster = ips[:clustersize]
ips = ips[clustersize:]
print(f"{domain} => {cluster}")
zone = ""
node = 0
for ip in cluster:
node += 1
zone += f"@ 300 IN A {ip}\n"
zone += f"* 300 IN A {ip}\n"
zone += f"node{node} 300 IN A {ip}\n"
r = requests.put(
f"{apiurl}/{domain}/records",
headers={"x-api-key": apikey},
data=zone)
print(r.text)
#r = requests.get(
# f"{apiurl}/{domain}/records",
# headers={"x-api-key": apikey},
# )
if domains:
print(f"Good, we have {len(domains)} domains left.")
print(f"Good, we have {len(domains)} domains left.")
if ips:
print(f"Crap, we have {len(ips)} IP addresses left.")
print(f"Crap, we have {len(ips)} IP addresses left.")
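The `while` loop in map-dns.py consumes domains and IP addresses in lock-step, giving each domain the next `clustersize` addresses. A self-contained sketch of that chunking logic (the domain names and addresses below are made up):

```python
# Standalone sketch of map-dns.py's assignment loop: each domain is
# paired with the next `clustersize` IP addresses until a list runs out.
domains = ["alpha.example", "beta.example"]
ips = ["192.0.2.1", "192.0.2.2", "192.0.2.3", "192.0.2.4"]
clustersize = 2

mapping = {}
while domains and ips:
    domain, domains = domains[0], domains[1:]
    mapping[domain], ips = ips[:clustersize], ips[clustersize:]

print(mapping)
# {'alpha.example': ['192.0.2.1', '192.0.2.2'],
#  'beta.example': ['192.0.2.3', '192.0.2.4']}
```

This also shows why the script warns afterwards: leftover domains are harmless, but leftover IPs mean some clusters never got a domain.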


@@ -25,5 +25,6 @@ steps:
- webssh
- tailhist
- kube
- kubetools
- cards
- kubetest


@@ -35,6 +35,8 @@ TAG=$PREFIX-$SETTINGS
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl disabledocker $TAG
retry 5 ./workshopctl kubebins $TAG
retry 5 ./workshopctl webssh $TAG
retry 5 ./workshopctl tailhist $TAG
./workshopctl cards $TAG
SETTINGS=admin-kubenet
@@ -48,6 +50,8 @@ TAG=$PREFIX-$SETTINGS
retry 5 ./workshopctl disableaddrchecks $TAG
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl kubebins $TAG
retry 5 ./workshopctl webssh $TAG
retry 5 ./workshopctl tailhist $TAG
./workshopctl cards $TAG
SETTINGS=admin-kuberouter
@@ -61,6 +65,8 @@ TAG=$PREFIX-$SETTINGS
retry 5 ./workshopctl disableaddrchecks $TAG
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl kubebins $TAG
retry 5 ./workshopctl webssh $TAG
retry 5 ./workshopctl tailhist $TAG
./workshopctl cards $TAG
#INFRA=infra/aws-us-west-1
@@ -76,5 +82,7 @@ TAG=$PREFIX-$SETTINGS
--count $((3*$STUDENTS))
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl kube $TAG 1.15.9
retry 5 ./workshopctl kube $TAG 1.17.13
retry 5 ./workshopctl webssh $TAG
retry 5 ./workshopctl tailhist $TAG
./workshopctl cards $TAG

slides/1.yml Normal file

@@ -0,0 +1,69 @@
title: |
Docker Intensif
chat: "[Gitter](https://gitter.im/jpetazzo/training-202102-online)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2021-02-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- # DAY 1
#- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- # DAY 2
- containers/Container_Networking_Basics.md
- containers/Local_Development_Workflow.md
- containers/Start_And_Attach.md
- containers/Naming_And_Inspecting.md
- containers/Labels.md
-
- containers/Container_Network_Model.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- # DAY 3
- containers/Getting_Inside.md
- containers/Network_Drivers.md
- containers/Dockerfile_Tips.md
- containers/Advanced_Dockerfiles.md
-
- containers/Orchestration_Overview.md
- containers/Multi_Stage_Builds.md
#- containers/Publishing_To_Docker_Hub.md
- containers/Exercise_Dockerfile_Advanced.md
#- containers/Docker_Machine.md
#- containers/Init_Systems.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Container_Engines.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md
- shared/thankyou.md
#- containers/links.md

slides/2.yml Normal file

@@ -0,0 +1,103 @@
title: |
Fondamentaux Kubernetes
chat: "[Gitter](https://gitter.im/jpetazzo/training-202102-online)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2021-02-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- shared/toc.md
- # 1
#- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
- # 2
- k8s/kubectl-run.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
#- k8s/exercise-wordsmith.md
- # 3
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- k8s/namespaces.md
- k8s/yamldeploy.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- # 4
- k8s/daemonset.md
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
- # 5
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/kubectlproxy.md
- k8s/dashboard.md
- k8s/k9s.md
- k8s/tilt.md
- # 6
- k8s/setup-overview.md
- k8s/setup-devel.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md
- # 7
- k8s/ingress.md
- k8s/ingress-tls.md
- # 8
- k8s/volumes.md
#- k8s/exercise-configmap.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
- k8s/secrets.md
- k8s/batch-jobs.md
#- k8s/logs-centralized.md
#- k8s/prometheus.md
#- k8s/statefulsets.md
#- k8s/local-persistent-volumes.md
#- k8s/portworx.md
#- k8s/extending-api.md
#- k8s/operators.md
#- k8s/operators-design.md
#- k8s/staticpods.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
#- k8s/whatsnext.md
#- k8s/lastwords.md
- shared/thankyou.md
- k8s/links.md
#-
# - |
# # (Bonus)
# - k8s/record.md
# - k8s/dryrun.md

slides/3.yml Normal file

@@ -0,0 +1,39 @@
title: |
Packaging d'applications
et CI/CD pour Kubernetes
chat: "[Gitter](https://gitter.im/jpetazzo/training-202102-online)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2021-02-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
#- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/prereqs.md
- shared/webssh.md
- shared/connecting.md
#- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/toc.md
-
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- k8s/helm-secrets.md
-
- k8s/cert-manager.md
- k8s/gitlab.md
- |
# (Extra content)
- k8s/prometheus.md

slides/4.yml Normal file

@@ -0,0 +1,50 @@
title: |
Kubernetes Avancé
chat: "[Gitter](https://gitter.im/jpetazzo/training-202102-online)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2021-02-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/prereqs.md
- shared/webssh.md
- shared/connecting.md
- shared/toc.md
- #1
- k8s/netpol.md
- k8s/authn-authz.md
- #2
- k8s/extending-api.md
- k8s/operators.md
- k8s/sealed-secrets.md
- k8s/crd.md
- #3
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- #4
- k8s/aggregation-layer.md
- k8s/prometheus.md
- k8s/hpa-v2.md
- #5
- k8s/admission.md
- k8s/kyverno.md
- #6
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/eck.md
- k8s/portworx.md

slides/5.yml Normal file

@@ -0,0 +1,56 @@
title: |
Opérer Kubernetes
chat: "[Gitter](https://gitter.im/jpetazzo/training-202102-online)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2021-02-enix.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
# DAY 1
-
- k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
-
- k8s/multinode.md
- k8s/cni.md
- k8s/interco.md
-
- k8s/cni-internals.md
- k8s/apilb.md
- k8s/internal-apis.md
- k8s/staticpods.md
- k8s/cluster-upgrade.md
- k8s/cluster-backup.md
#- k8s/cloud-controller-manager.md
-
- k8s/control-plane-auth.md
- k8s/user-cert.md
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/podsecuritypolicy.md
- shared/thankyou.md
- |
# (Extra content)
- k8s/apiserver-deepdive.md
- k8s/setup-overview.md
- k8s/setup-devel.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md


@@ -18,3 +18,5 @@
#/next https://www.eventbrite.com/e/livestream-intensive-kubernetes-bootcamp-tickets-103262336428
/next https://skillsmatter.com/courses/700-advanced-kubernetes-concepts-workshop-jerome-petazzoni
/hi5 https://enix.io/fr/services/formation/online/
/ /highfive.html 200!


@@ -24,14 +24,9 @@
"integrity": "sha1-ml9pkFGx5wczKPKgCJaLZOopVdI="
},
"arraybuffer.slice": {
"version": "0.0.6",
"resolved": "https://registry.npmjs.org/arraybuffer.slice/-/arraybuffer.slice-0.0.6.tgz",
"integrity": "sha1-8zshWfBTKj8xB6JywMz70a0peco="
},
"async-limiter": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/async-limiter/-/async-limiter-1.0.0.tgz",
"integrity": "sha512-jp/uFnooOiO+L211eZOoSyzpOITMXx1rBITauYykG3BRYPu8h0UcxsPNB04RR5vo4Tyz3+ay17tR6JVf9qzYWg=="
"version": "0.0.7",
"resolved": "https://registry.npmjs.org/arraybuffer.slice/-/arraybuffer.slice-0.0.7.tgz",
"integrity": "sha512-wGUIVQXuehL5TCqQun8OW81jGzAWycqzFF8lFp+GOM5BXLYj3bKNsYC4daB7n6XjCqxQA/qgTJ+8ANR3acjrog=="
},
"backo2": {
"version": "1.0.2",
@@ -39,27 +34,19 @@
"integrity": "sha1-MasayLEpNjRj41s+u2n038+6eUc="
},
"base64-arraybuffer": {
"version": "0.1.5",
"resolved": "https://registry.npmjs.org/base64-arraybuffer/-/base64-arraybuffer-0.1.5.tgz",
"integrity": "sha1-c5JncZI7Whl0etZmqlzUv5xunOg="
"version": "0.1.4",
"resolved": "https://registry.npmjs.org/base64-arraybuffer/-/base64-arraybuffer-0.1.4.tgz",
"integrity": "sha1-mBjHngWbE1X5fgQooBfIOOkLqBI="
},
"base64id": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/base64id/-/base64id-1.0.0.tgz",
"integrity": "sha1-R2iMuZu2gE8OBtPnY7HDLlfY5rY="
},
"better-assert": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/better-assert/-/better-assert-1.0.2.tgz",
"integrity": "sha1-QIZrnhueC1W0gYlDEeaPr/rrxSI=",
"requires": {
"callsite": "1.0.0"
}
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/base64id/-/base64id-2.0.0.tgz",
"integrity": "sha512-lGe34o6EHj9y3Kts9R4ZYs/Gr+6N7MCaMlIFA3F1R2O5/m7K06AxfSeO5530PEERE6/WyEg3lsuyw4GHlPZHog=="
},
"blob": {
"version": "0.0.4",
"resolved": "https://registry.npmjs.org/blob/-/blob-0.0.4.tgz",
"integrity": "sha1-vPEwUspURj8w+fx+lbmkdjCpSSE="
"version": "0.0.5",
"resolved": "https://registry.npmjs.org/blob/-/blob-0.0.5.tgz",
"integrity": "sha512-gaqbzQPqOoamawKg0LGVd7SzLgXS+JH61oWprSLH+P+abTczqJbhTR8CmJ2u9/bUYNmHTGJx/UEmn6doAvvuig=="
},
"body-parser": {
"version": "1.18.2",
@@ -83,20 +70,15 @@
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.0.0.tgz",
"integrity": "sha1-0ygVQE1olpn4Wk6k+odV3ROpYEg="
},
"callsite": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/callsite/-/callsite-1.0.0.tgz",
"integrity": "sha1-KAOY5dZkvXQDi28JBRU+borxvCA="
},
"component-bind": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/component-bind/-/component-bind-1.0.0.tgz",
"integrity": "sha1-AMYIq33Nk4l8AAllGx06jh5zu9E="
},
"component-emitter": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/component-emitter/-/component-emitter-1.2.1.tgz",
"integrity": "sha1-E3kY1teCg/ffemt8WmPhQOaUJeY="
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/component-emitter/-/component-emitter-1.3.0.tgz",
"integrity": "sha512-Rd3se6QB+sO1TwqZjscQrurpEPIfO0/yYnSin6Q/rD3mOutHvUrCAhJub3r90uNb+SESBuE0QYoB90YdfatsRg=="
},
"component-inherit": {
"version": "0.0.3",
@@ -152,58 +134,76 @@
"integrity": "sha1-eePVhlU0aQn+bw9Fpd5oEDspTSA="
},
"engine.io": {
"version": "3.1.4",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-3.1.4.tgz",
"integrity": "sha1-PQIRtwpVLOhB/8fahiezAamkFi4=",
"version": "3.5.0",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-3.5.0.tgz",
"integrity": "sha512-21HlvPUKaitDGE4GXNtQ7PLP0Sz4aWLddMPw2VTyFz1FVZqu/kZsJUO8WNpKuE/OCL7nkfRaOui2ZCJloGznGA==",
"requires": {
"accepts": "1.3.3",
"base64id": "1.0.0",
"cookie": "0.3.1",
"debug": "2.6.9",
"engine.io-parser": "2.1.1",
"uws": "0.14.5",
"ws": "3.3.3"
"accepts": "~1.3.4",
"base64id": "2.0.0",
"cookie": "~0.4.1",
"debug": "~4.1.0",
"engine.io-parser": "~2.2.0",
"ws": "~7.4.2"
},
"dependencies": {
"accepts": {
"version": "1.3.3",
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.3.tgz",
"integrity": "sha1-w8p0NJOGSMPg2cHjKN1otiLChMo=",
"cookie": {
"version": "0.4.1",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.4.1.tgz",
"integrity": "sha512-ZwrFkGJxUR3EIoXtO+yVE69Eb7KlixbaeAWfBQB9vVsNn/o+Yw69gBWSSDK825hQNdN+wF8zELf3dFNl/kxkUA=="
},
"debug": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
"integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
"requires": {
"mime-types": "2.1.17",
"negotiator": "0.6.1"
"ms": "^2.1.1"
}
},
"ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="
}
}
},
"engine.io-client": {
"version": "3.1.4",
"resolved": "https://registry.npmjs.org/engine.io-client/-/engine.io-client-3.1.4.tgz",
"integrity": "sha1-T88TcLRxY70s6b4nM5ckMDUNTqE=",
"version": "3.5.0",
"resolved": "https://registry.npmjs.org/engine.io-client/-/engine.io-client-3.5.0.tgz",
"integrity": "sha512-12wPRfMrugVw/DNyJk34GQ5vIVArEcVMXWugQGGuw2XxUSztFNmJggZmv8IZlLyEdnpO1QB9LkcjeWewO2vxtA==",
"requires": {
"component-emitter": "1.2.1",
"component-emitter": "~1.3.0",
"component-inherit": "0.0.3",
"debug": "2.6.9",
"engine.io-parser": "2.1.1",
"debug": "~3.1.0",
"engine.io-parser": "~2.2.0",
"has-cors": "1.1.0",
"indexof": "0.0.1",
"parseqs": "0.0.5",
"parseuri": "0.0.5",
"ws": "3.3.3",
"xmlhttprequest-ssl": "1.5.4",
"parseqs": "0.0.6",
"parseuri": "0.0.6",
"ws": "~7.4.2",
"xmlhttprequest-ssl": "~1.5.4",
"yeast": "0.1.2"
},
"dependencies": {
"debug": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/debug/-/debug-3.1.0.tgz",
"integrity": "sha512-OX8XqP7/1a9cqkxYw2yXss15f26NKWBpDXQd0/uK/KPqdQhxbPa994hnzjcE2VqQpDslf55723cKPUOGSmMY3g==",
"requires": {
"ms": "2.0.0"
}
}
}
},
"engine.io-parser": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/engine.io-parser/-/engine.io-parser-2.1.1.tgz",
"integrity": "sha1-4Ps/DgRi9/WLt3waUun1p+JuRmg=",
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/engine.io-parser/-/engine.io-parser-2.2.1.tgz",
"integrity": "sha512-x+dN/fBH8Ro8TFwJ+rkB2AmuVw9Yu2mockR/p3W8f8YtExwFgDvBDi0GWyb4ZLkpahtDGZgtr3zLovanJghPqg==",
"requires": {
"after": "0.8.2",
"arraybuffer.slice": "0.0.6",
"base64-arraybuffer": "0.1.5",
"blob": "0.0.4",
"has-binary2": "1.0.2"
"arraybuffer.slice": "~0.0.7",
"base64-arraybuffer": "0.1.4",
"blob": "0.0.5",
"has-binary2": "~1.0.2"
}
},
"escape-html": {
@@ -278,9 +278,9 @@
"integrity": "sha1-PYyt2Q2XZWn6g1qx+OSyOhBWBac="
},
"has-binary2": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/has-binary2/-/has-binary2-1.0.2.tgz",
"integrity": "sha1-6D26SfC5vk0CbSc2U1DZ8D9Uvpg=",
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/has-binary2/-/has-binary2-1.0.3.tgz",
"integrity": "sha512-G1LWKhDSvhGeAQ8mPVQlqNcOB2sJdwATtZKl2pDKKHfpf/rYj24lkinxf69blJbnsvtqqNU+L3SL50vzZhXOnw==",
"requires": {
"isarray": "2.0.1"
}
@@ -376,11 +376,6 @@
"resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.1.tgz",
"integrity": "sha1-KzJxhOiZIQEXeyhWP7XnECrNDKk="
},
"object-component": {
"version": "0.0.3",
"resolved": "https://registry.npmjs.org/object-component/-/object-component-0.0.3.tgz",
"integrity": "sha1-8MaapQ78lbhmwYb0AKM3acsvEpE="
},
"on-finished": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
@@ -390,20 +385,14 @@
}
},
"parseqs": {
"version": "0.0.5",
"resolved": "https://registry.npmjs.org/parseqs/-/parseqs-0.0.5.tgz",
"integrity": "sha1-1SCKNzjkZ2bikbouoXNoSSGouJ0=",
"requires": {
"better-assert": "1.0.2"
}
"version": "0.0.6",
"resolved": "https://registry.npmjs.org/parseqs/-/parseqs-0.0.6.tgz",
"integrity": "sha512-jeAGzMDbfSHHA091hr0r31eYfTig+29g3GKKE/PPbEQ65X0lmMwlEoqmhzu0iztID5uJpZsFlUPDP8ThPL7M8w=="
},
"parseuri": {
"version": "0.0.5",
"resolved": "https://registry.npmjs.org/parseuri/-/parseuri-0.0.5.tgz",
"integrity": "sha1-gCBKUNTbt3m/3G6+J3jZDkvOMgo=",
"requires": {
"better-assert": "1.0.2"
}
"version": "0.0.6",
"resolved": "https://registry.npmjs.org/parseuri/-/parseuri-0.0.6.tgz",
"integrity": "sha512-AUjen8sAkGgao7UyCX6Ahv0gIK2fABKmYjvP4xmy5JaKvcbTRueIqIPHLAfq30xJddqSE033IOMUSOMCcK3Sow=="
},
"parseurl": {
"version": "1.3.2",
@@ -487,51 +476,104 @@
"integrity": "sha512-BvE/TwpZX4FXExxOxZyRGQQv651MSwmWKZGqvmPcRIjDqWub67kTKuIMx43cZZrS/cBBzwBcNDWoFxt2XEFIpQ=="
},
"socket.io": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/socket.io/-/socket.io-2.0.4.tgz",
"integrity": "sha1-waRZDO/4fs8TxyZS8Eb3FrKeYBQ=",
"version": "2.4.0",
"resolved": "https://registry.npmjs.org/socket.io/-/socket.io-2.4.0.tgz",
"integrity": "sha512-9UPJ1UTvKayuQfVv2IQ3k7tCQC/fboDyIK62i99dAQIyHKaBsNdTpwHLgKJ6guRWxRtC9H+138UwpaGuQO9uWQ==",
"requires": {
"debug": "2.6.9",
"engine.io": "3.1.4",
"socket.io-adapter": "1.1.1",
"socket.io-client": "2.0.4",
"socket.io-parser": "3.1.2"
"debug": "~4.1.0",
"engine.io": "~3.5.0",
"has-binary2": "~1.0.2",
"socket.io-adapter": "~1.1.0",
"socket.io-client": "2.4.0",
"socket.io-parser": "~3.4.0"
},
"dependencies": {
"debug": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
"integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
"requires": {
"ms": "^2.1.1"
}
},
"ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="
}
}
},
"socket.io-adapter": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/socket.io-adapter/-/socket.io-adapter-1.1.1.tgz",
"integrity": "sha1-KoBeihTWNyEk3ZFZrUUC+MsH8Gs="
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/socket.io-adapter/-/socket.io-adapter-1.1.2.tgz",
"integrity": "sha512-WzZRUj1kUjrTIrUKpZLEzFZ1OLj5FwLlAFQs9kuZJzJi5DKdU7FsWc36SNmA8iDOtwBQyT8FkrriRM8vXLYz8g=="
},
"socket.io-client": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/socket.io-client/-/socket.io-client-2.0.4.tgz",
"integrity": "sha1-CRilUkBtxeVAs4Dc2Xr8SmQzL44=",
"version": "2.4.0",
"resolved": "https://registry.npmjs.org/socket.io-client/-/socket.io-client-2.4.0.tgz",
"integrity": "sha512-M6xhnKQHuuZd4Ba9vltCLT9oa+YvTsP8j9NcEiLElfIg8KeYPyhWOes6x4t+LTAC8enQbE/995AdTem2uNyKKQ==",
"requires": {
"backo2": "1.0.2",
"base64-arraybuffer": "0.1.5",
"component-bind": "1.0.0",
"component-emitter": "1.2.1",
"debug": "2.6.9",
"engine.io-client": "3.1.4",
"has-cors": "1.1.0",
"component-emitter": "~1.3.0",
"debug": "~3.1.0",
"engine.io-client": "~3.5.0",
"has-binary2": "~1.0.2",
"indexof": "0.0.1",
"object-component": "0.0.3",
"parseqs": "0.0.5",
"parseuri": "0.0.5",
"socket.io-parser": "3.1.2",
"parseqs": "0.0.6",
"parseuri": "0.0.6",
"socket.io-parser": "~3.3.0",
"to-array": "0.1.4"
},
"dependencies": {
"debug": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/debug/-/debug-3.1.0.tgz",
"integrity": "sha512-OX8XqP7/1a9cqkxYw2yXss15f26NKWBpDXQd0/uK/KPqdQhxbPa994hnzjcE2VqQpDslf55723cKPUOGSmMY3g==",
"requires": {
"ms": "2.0.0"
}
},
"socket.io-parser": {
"version": "3.3.2",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-3.3.2.tgz",
"integrity": "sha512-FJvDBuOALxdCI9qwRrO/Rfp9yfndRtc1jSgVgV8FDraihmSP/MLGD5PEuJrNfjALvcQ+vMDM/33AWOYP/JSjDg==",
"requires": {
"component-emitter": "~1.3.0",
"debug": "~3.1.0",
"isarray": "2.0.1"
}
}
}
},
"socket.io-parser": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-3.1.2.tgz",
"integrity": "sha1-28IoIVH8T6675Aru3Ady66YZ9/I=",
"version": "3.4.1",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-3.4.1.tgz",
"integrity": "sha512-11hMgzL+WCLWf1uFtHSNvliI++tcRUWdoeYuwIl+Axvwy9z2gQM+7nJyN3STj1tLj5JyIUH8/gpDGxzAlDdi0A==",
"requires": {
"component-emitter": "1.2.1",
"debug": "2.6.9",
"has-binary2": "1.0.2",
"debug": "~4.1.0",
"isarray": "2.0.1"
},
"dependencies": {
"component-emitter": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/component-emitter/-/component-emitter-1.2.1.tgz",
"integrity": "sha1-E3kY1teCg/ffemt8WmPhQOaUJeY="
},
"debug": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.1.1.tgz",
"integrity": "sha512-pYAIzeRo8J6KPEaJ0VWOh5Pzkbw/RetuzehGM7QRRX5he4fPHx2rdKMB256ehJCkX+XRQm16eZLqLNS8RSZXZw==",
"requires": {
"ms": "^2.1.1"
}
},
"ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="
}
}
},
"statuses": {
@@ -553,11 +595,6 @@
"mime-types": "2.1.17"
}
},
"ultron": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/ultron/-/ultron-1.1.1.tgz",
"integrity": "sha512-UIEXBNeYmKptWH6z8ZnqTeS8fV74zG0/eRU9VGkpzz+LIJNs8W/zM/L+7ctCkRrgbNnnR0xxw4bKOr0cW0N0Og=="
},
"unpipe": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz",
@@ -568,31 +605,20 @@
"resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz",
"integrity": "sha1-n5VxD1CiZ5R7LMwSR0HBAoQn5xM="
},
"uws": {
"version": "0.14.5",
"resolved": "https://registry.npmjs.org/uws/-/uws-0.14.5.tgz",
"integrity": "sha1-Z6rzPEaypYel9mZtAPdpEyjxSdw=",
"optional": true
},
"vary": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz",
"integrity": "sha1-IpnwLG3tMNSllhsLn3RSShj2NPw="
},
"ws": {
"version": "3.3.3",
"resolved": "https://registry.npmjs.org/ws/-/ws-3.3.3.tgz",
"integrity": "sha512-nnWLa/NwZSt4KQJu51MYlCcSQ5g7INpOrOMt4XV8j4dqTXdmlUmSHQ8/oLC069ckre0fRsgfvsKwbTdtKLCDkA==",
"requires": {
"async-limiter": "1.0.0",
"safe-buffer": "5.1.1",
"ultron": "1.1.1"
}
"version": "7.4.2",
"resolved": "https://registry.npmjs.org/ws/-/ws-7.4.2.tgz",
"integrity": "sha512-T4tewALS3+qsrpGI/8dqNMLIVdq/g/85U98HPMa6F0m6xTbvhXU6RCQLqPH3+SlomNV/LdY6RXEbBpMH6EOJnA=="
},
"xmlhttprequest-ssl": {
"version": "1.5.4",
"resolved": "https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.4.tgz",
"integrity": "sha1-BPVgkVcks4kIhxXMDteBPpZ3v1c="
"version": "1.5.5",
"resolved": "https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz",
"integrity": "sha1-wodrBhaKrcQOV9l+gRkayPQ5iz4="
},
"yeast": {
"version": "0.1.2",


@@ -3,6 +3,6 @@
"version": "0.0.1",
"dependencies": {
"express": "^4.16.2",
"socket.io": "^2.0.4"
"socket.io": "^2.4.0"
}
}


@@ -307,6 +307,8 @@ Let's remove the `redis` container:
$ docker rm -f redis
```
* `-f`: Force the removal of a running container (uses SIGKILL)
And create one that doesn't block the `redis` name:
```bash


@@ -424,7 +424,7 @@ services:
- In this chapter, we showed many ways to write Dockerfiles.
- These Dockerfiles use sometimes diametrally opposed techniques.
- These Dockerfiles use sometimes diametrically opposed techniques.
- Yet, they were the "right" ones *for a specific situation.*


@@ -95,6 +95,24 @@ $ ssh <login>@<ip-address>
---
class: in-person
## `tailhist`
The shell history of the instructor is available online in real time.
Note the IP address of the instructor's virtual machine (A.B.C.D).
Open http://A.B.C.D:1088 in your browser and you should see the history.
The history is updated in real time (using a WebSocket connection).
It should be green when the WebSocket is connected.
If it turns red, reloading the page should fix it.
---
## Checking your Virtual Machine
Once logged in, make sure that you can run a basic Docker command:


@@ -119,7 +119,7 @@ Nano and LinuxKit VMs in Hyper-V!)
- golang, mongo, python, redis, hello-world ... and more being added
- you should still use `--plaform` with multi-os images to be certain
- you should still use `--platform` with multi-os images to be certain
- Windows Containers now support `localhost` accessible containers (July 2018)

slides/highfive.html Normal file

@@ -0,0 +1,98 @@
<?xml version="1.0"?>
<html>
<head>
<style>
td {
background: #ccc;
padding: 1em;
}
</style>
</head>
<body>
<table>
<tr>
<td>Lundi 8 février 2021</td>
<td>
<a href="1.yml.html">Docker Intensif</a>
</td>
</tr>
<tr>
<td>Mardi 9 février 2021</td>
<td>
<a href="1.yml.html">Docker Intensif</a>
</td>
</tr>
<tr>
<td>Mercredi 10 février 2021</td>
<td>
<a href="1.yml.html">Docker Intensif</a>
</td>
</tr>
<tr>
<td>Lundi 15 février 2021</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Mardi 16 février 2021</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Mercredi 17 février 2021</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Jeudi 18 février 2021</td>
<td>
<a href="2.yml.html">Fondamentaux Kubernetes</a>
</td>
</tr>
<tr>
<td>Lundi 22 février 2021</td>
<td>
<a href="3.yml.html">Packaging d'applications et CI/CD pour Kubernetes</a>
</td>
</tr>
<tr>
<td>Mardi 23 février 2021</td>
<td>
<a href="3.yml.html">Packaging d'applications et CI/CD pour Kubernetes</a>
</td>
</tr>
<tr>
<td>Mercredi 24 février 2021</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Jeudi 25 février 2021</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Vendredi 26 février 2021</td>
<td>
<a href="4.yml.html">Kubernetes Avancé</a>
</td>
</tr>
<tr>
<td>Lundi 1er mars 2021</td>
<td>
<a href="5.yml.html">Opérer Kubernetes</a>
</td>
</tr>
<tr>
<td>Mardi 2 mars 2021</td>
<td>
<a href="5.yml.html">Opérer Kubernetes</a>
</td>
</tr>
</table>
</body>
</html>

Binary file not shown. (After: 66 KiB)
Binary file not shown. (After: 53 KiB)
File diff suppressed because one or more lines are too long. (After: 99 KiB)


@@ -0,0 +1,519 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
version="1.1"
viewBox="16 32 880 495"
fill="none"
stroke="none"
stroke-linecap="square"
stroke-miterlimit="10"
id="svg464"
sodipodi:docname="k8s-net-1-pod-to-pod.svg"
width="1600"
height="900"
inkscape:version="1.0.1 (3bc2e813f5, 2020-09-07, custom)">
<metadata
id="metadata470">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
</cc:Work>
</rdf:RDF>
</metadata>
<defs
id="defs468" />
<sodipodi:namedview
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1"
objecttolerance="10"
gridtolerance="10"
guidetolerance="10"
inkscape:pageopacity="0"
inkscape:pageshadow="2"
inkscape:window-width="476"
inkscape:window-height="1032"
id="namedview466"
showgrid="false"
inkscape:zoom="1.0208333"
inkscape:cx="480"
inkscape:cy="360"
inkscape:window-x="480"
inkscape:window-y="18"
inkscape:window-maximized="0"
inkscape:current-layer="svg464" />
<clipPath
id="p.0">
<path
d="M 0,0 H 960 V 720 H 0 Z"
clip-rule="nonzero"
id="path317" />
</clipPath>
<g
clip-path="url(#p.0)"
id="g462">
<path
fill="#000000"
fill-opacity="0"
d="M 0,0 H 960 V 720 H 0 Z"
fill-rule="evenodd"
id="path320" />
<path
fill="#d9d9d9"
d="m 66.944885,154.29968 v 0 c 0,-13.08115 10.604363,-23.68552 23.685509,-23.68552 H 296.55071 c 6.28177,0 12.30628,2.49544 16.74817,6.93734 4.4419,4.44189 6.93732,10.4664 6.93732,16.74818 v 94.73921 c 0,13.08116 -10.60434,23.6855 -23.68549,23.6855 H 90.630394 c -13.081146,0 -23.685509,-10.60434 -23.685509,-23.6855 z"
fill-rule="evenodd"
id="path322" />
<path
stroke="#434343"
stroke-width="2"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 66.944885,154.29968 v 0 c 0,-13.08115 10.604363,-23.68552 23.685509,-23.68552 H 296.55071 c 6.28177,0 12.30628,2.49544 16.74817,6.93734 4.4419,4.44189 6.93732,10.4664 6.93732,16.74818 v 94.73921 c 0,13.08116 -10.60434,23.6855 -23.68549,23.6855 H 90.630394 c -13.081146,0 -23.685509,-10.60434 -23.685509,-23.6855 z"
fill-rule="evenodd"
id="path324" />
<path
fill="#000000"
d="m 85.75713,154.61205 0.04687,1.23437 q 1.125,-1.40625 2.953125,-1.40625 3.125,0 3.15625,3.51563 v 6.51562 h -1.6875 v -6.51562 q -0.01563,-1.07813 -0.5,-1.57813 -0.46875,-0.51562 -1.484375,-0.51562 -0.8125,0 -1.4375,0.4375 -0.609375,0.4375 -0.96875,1.15625 v 7.01562 h -1.67187 v -9.85937 z m 8.246857,4.84375 q 0,-1.45313 0.5625,-2.60938 0.578125,-1.15625 1.59375,-1.78125 1.015625,-0.625 2.3125,-0.625 2.015623,0 3.250003,1.39063 1.25,1.39062 1.25,3.70312 v 0.125 q 0,1.4375 -0.54688,2.57813 -0.54687,1.14062 -1.57812,1.78125 -1.015628,0.64062 -2.359378,0.64062 -2,0 -3.25,-1.39062 -1.234375,-1.40625 -1.234375,-3.70313 z m 1.6875,0.20312 q 0,1.64063 0.765625,2.64063 0.765625,0.98437 2.03125,0.98437 1.296875,0 2.046878,-1 0.75,-1.01562 0.75,-2.82812 0,-1.625 -0.76563,-2.625 -0.765623,-1.01563 -2.046873,-1.01563 -1.25,0 -2.015625,1 -0.765625,0.98438 -0.765625,2.84375 z m 8.983643,-0.20312 q 0,-2.26563 1.07813,-3.64063 1.07812,-1.375 2.8125,-1.375 1.73437,0 2.75,1.17188 v -5.14063 h 1.6875 v 14 h -1.54688 l -0.0937,-1.0625 q -1,1.25 -2.8125,1.25 -1.70312,0 -2.79687,-1.40625 -1.07813,-1.40625 -1.07813,-3.65625 z m 1.6875,0.20312 q 0,1.67188 0.6875,2.625 0.70313,0.9375 1.92188,0.9375 1.60937,0 2.34375,-1.4375 v -4.53125 q -0.76563,-1.39062 -2.32813,-1.39062 -1.23437,0 -1.9375,0.95312 -0.6875,0.95313 -0.6875,2.84375 z m 13.33397,5 q -2,0 -3.26563,-1.3125 -1.25,-1.32812 -1.25,-3.53125 v -0.3125 q 0,-1.46875 0.5625,-2.60937 0.5625,-1.15625 1.5625,-1.79688 1.01563,-0.65625 2.1875,-0.65625 1.92188,0 2.98438,1.26563 1.0625,1.26562 1.0625,3.625 v 0.6875 h -6.67188 q 0.0312,1.46875 0.84375,2.375 0.82813,0.89062 2.07813,0.89062 0.89062,0 1.51562,-0.35937 0.625,-0.375 1.07813,-0.96875 l 1.03125,0.79687 q -1.23438,1.90625 -3.71875,1.90625 z m -0.20313,-8.84375 q -1.01562,0 -1.71875,0.75 -0.6875,0.73438 -0.84375,2.07813 h 4.9375 v -0.125 q -0.0781,-1.28125 -0.70312,-1.98438 -0.60938,-0.71875 -1.67188,-0.71875 z"
fill-rule="nonzero"
id="path326" />
<path
fill="#000000"
fill-opacity="0"
d="M 741.52496,67.91863 H 819.2572"
fill-rule="evenodd"
id="path328" />
<path
stroke="#0000ff"
stroke-width="4"
stroke-linejoin="round"
stroke-linecap="butt"
d="M 741.52496,67.91863 H 819.2572"
fill-rule="evenodd"
id="path330" />
<path
fill="#000000"
fill-opacity="0"
d="M 692.19946,70.157394 H 868.57745 V 116.01565 H 692.19946 Z"
fill-rule="evenodd"
id="path332" />
<path
fill="#000000"
d="m 747.46643,91.32864 q 0,2.078125 -0.96875,3.359375 -0.95313,1.28125 -2.57813,1.28125 -1.67187,0 -2.625,-1.0625 v 4.40625 h -1.5625 V 86.64114 h 1.42188 l 0.0781,1.015625 q 0.95313,-1.1875 2.65625,-1.1875 1.65625,0 2.60938,1.25 0.96875,1.234375 0.96875,3.453125 z m -1.5625,-0.1875 q 0,-1.546875 -0.67188,-2.4375 -0.65625,-0.90625 -1.8125,-0.90625 -1.42187,0 -2.125,1.265625 v 4.375 q 0.70313,1.25 2.15625,1.25 1.125,0 1.78125,-0.890625 0.67188,-0.890625 0.67188,-2.65625 z m 3.12799,0 q 0,-1.34375 0.53125,-2.421875 0.53125,-1.078125 1.46875,-1.65625 0.95313,-0.59375 2.15625,-0.59375 1.875,0 3.03125,1.296875 1.15625,1.296875 1.15625,3.453125 v 0.109375 q 0,1.328125 -0.51562,2.390625 -0.51563,1.0625 -1.46875,1.65625 -0.95313,0.59375 -2.1875,0.59375 -1.85938,0 -3.01563,-1.296875 -1.15625,-1.296875 -1.15625,-3.421875 z m 1.57813,0.1875 q 0,1.515625 0.70312,2.4375 0.70313,0.921875 1.89063,0.921875 1.20312,0 1.89062,-0.9375 0.70313,-0.9375 0.70313,-2.609375 0,-1.515625 -0.71875,-2.4375 -0.70313,-0.9375 -1.89063,-0.9375 -1.15625,0 -1.875,0.921875 -0.70312,0.921875 -0.70312,2.640625 z m 8.33557,-0.1875 q 0,-2.109375 1,-3.390625 1,-1.28125 2.625,-1.28125 1.60937,0 2.54687,1.109375 v -4.78125 h 1.5625 v 13 h -1.4375 l -0.0781,-0.984375 q -0.9375,1.15625 -2.60938,1.15625 -1.59375,0 -2.60937,-1.296875 -1,-1.3125 -1,-3.40625 z m 1.57812,0.1875 q 0,1.546875 0.64063,2.4375 0.64062,0.875 1.76562,0.875 1.5,0 2.1875,-1.34375 v -4.203125 q -0.70312,-1.296875 -2.15625,-1.296875 -1.15625,0 -1.79687,0.890625 -0.64063,0.890625 -0.64063,2.640625 z m 11.83496,-0.125 h -4.125 v -1.28125 h 4.125 z m 3.65546,-6.78125 v 2.21875 h 1.70312 v 1.21875 h -1.70312 v 5.671875 q 0,0.546875 0.21875,0.828125 0.23437,0.265625 0.78125,0.265625 0.26562,0 0.75,-0.09375 v 1.265625 q -0.625,0.171875 -1.20313,0.171875 -1.04687,0 -1.57812,-0.625 -0.53125,-0.640625 -0.53125,-1.8125 V 87.85989 h -1.67188 v -1.21875 h 1.67188 v -2.21875 z m 2.94427,6.71875 q 0,-1.34375 0.53125,-2.421875 0.53125,-1.078125 
1.46875,-1.65625 0.95313,-0.59375 2.15625,-0.59375 1.875,0 3.03125,1.296875 1.15625,1.296875 1.15625,3.453125 v 0.109375 q 0,1.328125 -0.51562,2.390625 -0.51563,1.0625 -1.46875,1.65625 -0.95313,0.59375 -2.1875,0.59375 -1.85938,0 -3.01563,-1.296875 -1.15625,-1.296875 -1.15625,-3.421875 z m 1.57813,0.1875 q 0,1.515625 0.70312,2.4375 0.70313,0.921875 1.89063,0.921875 1.20312,0 1.89062,-0.9375 0.70313,-0.9375 0.70313,-2.609375 0,-1.515625 -0.71875,-2.4375 -0.70313,-0.9375 -1.89063,-0.9375 -1.15625,0 -1.875,0.921875 -0.70312,0.921875 -0.70312,2.640625 z m 11.9762,-0.125 h -4.125 v -1.28125 h 4.125 z m 9.26483,0.125 q 0,2.078125 -0.96875,3.359375 -0.95313,1.28125 -2.57813,1.28125 -1.67187,0 -2.625,-1.0625 v 4.40625 h -1.5625 V 86.64114 h 1.42188 l 0.0781,1.015625 q 0.95313,-1.1875 2.65625,-1.1875 1.65625,0 2.60938,1.25 0.96875,1.234375 0.96875,3.453125 z m -1.5625,-0.1875 q 0,-1.546875 -0.67188,-2.4375 -0.65625,-0.90625 -1.8125,-0.90625 -1.42187,0 -2.125,1.265625 v 4.375 q 0.70313,1.25 2.15625,1.25 1.125,0 1.78125,-0.890625 0.67188,-0.890625 0.67188,-2.65625 z m 3.12793,0 q 0,-1.34375 0.53125,-2.421875 0.53125,-1.078125 1.46875,-1.65625 0.95312,-0.59375 2.15625,-0.59375 1.875,0 3.03125,1.296875 1.15625,1.296875 1.15625,3.453125 v 0.109375 q 0,1.328125 -0.51563,2.390625 -0.51562,1.0625 -1.46875,1.65625 -0.95312,0.59375 -2.1875,0.59375 -1.85937,0 -3.01562,-1.296875 -1.15625,-1.296875 -1.15625,-3.421875 z m 1.57812,0.1875 q 0,1.515625 0.70313,2.4375 0.70312,0.921875 1.89062,0.921875 1.20313,0 1.89063,-0.9375 0.70312,-0.9375 0.70312,-2.609375 0,-1.515625 -0.71875,-2.4375 -0.70312,-0.9375 -1.89062,-0.9375 -1.15625,0 -1.875,0.921875 -0.70313,0.921875 -0.70313,2.640625 z m 8.33557,-0.1875 q 0,-2.109375 1,-3.390625 1,-1.28125 2.625,-1.28125 1.60938,0 2.54688,1.109375 v -4.78125 h 1.5625 v 13 h -1.4375 l -0.0781,-0.984375 q -0.9375,1.15625 -2.60937,1.15625 -1.59375,0 -2.60938,-1.296875 -1,-1.3125 -1,-3.40625 z m 1.57813,0.1875 q 0,1.546875 0.64062,2.4375 0.64063,0.875 
1.76563,0.875 1.5,0 2.1875,-1.34375 v -4.203125 q -0.70313,-1.296875 -2.15625,-1.296875 -1.15625,0 -1.79688,0.890625 -0.64062,0.890625 -0.64062,2.640625 z"
fill-rule="nonzero"
id="path334" />
<path
fill="#f4cccc"
d="m 228.27559,151.61942 h 63.53264 l 8.34137,8.34138 v 41.70587 h -71.87401 z"
fill-rule="evenodd"
id="path336" />
<path
stroke="#434343"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 228.27559,151.61942 h 63.53264 l 8.34137,8.34138 v 41.70587 h -71.87401 z"
fill-rule="evenodd"
id="path338" />
<path
fill="#000000"
d="m 246.86934,177.89761 q 0,2.25 -1.03125,3.625 -1.01563,1.375 -2.78125,1.375 -1.79688,0 -2.82813,-1.14063 v 4.75 h -1.67187 v -13.65625 h 1.53125 l 0.0781,1.09375 q 1.03125,-1.26562 2.85938,-1.26562 1.78125,0 2.8125,1.34375 1.03125,1.32812 1.03125,3.71875 z m -1.67188,-0.20313 q 0,-1.65625 -0.71875,-2.625 -0.70312,-0.96875 -1.95312,-0.96875 -1.53125,0 -2.29688,1.35938 v 4.70312 q 0.76563,1.35938 2.32813,1.35938 1.20312,0 1.92187,-0.95313 0.71875,-0.96875 0.71875,-2.875 z m 3.37307,0 q 0,-1.45312 0.5625,-2.60937 0.57812,-1.15625 1.59375,-1.78125 1.01562,-0.625 2.3125,-0.625 2.01562,0 3.25,1.39062 1.25,1.39063 1.25,3.70313 v 0.125 q 0,1.4375 -0.54688,2.57812 -0.54687,1.14063 -1.57812,1.78125 -1.01563,0.64063 -2.35938,0.64063 -2,0 -3.25,-1.39063 -1.23437,-1.40625 -1.23437,-3.70312 z m 1.6875,0.20313 q 0,1.64062 0.76562,2.64062 0.76563,0.98438 2.03125,0.98438 1.29688,0 2.04688,-1 0.75,-1.01563 0.75,-2.82813 0,-1.625 -0.76563,-2.625 -0.76562,-1.01562 -2.04687,-1.01562 -1.25,0 -2.01563,1 -0.76562,0.98437 -0.76562,2.84375 z m 8.98364,-0.20313 q 0,-2.26562 1.07813,-3.64062 1.07812,-1.375 2.8125,-1.375 1.73437,0 2.75,1.17187 v -5.14062 h 1.6875 v 14 h -1.54688 l -0.0937,-1.0625 q -1,1.25 -2.8125,1.25 -1.70312,0 -2.79687,-1.40625 -1.07813,-1.40625 -1.07813,-3.65625 z m 1.6875,0.20313 q 0,1.67187 0.6875,2.625 0.70313,0.9375 1.92188,0.9375 1.60937,0 2.34375,-1.4375 v -4.53125 q -0.76563,-1.39063 -2.32813,-1.39063 -1.23437,0 -1.9375,0.95313 -0.6875,0.95312 -0.6875,2.84375 z"
fill-rule="nonzero"
id="path340" />
<path
fill="#f4cccc"
d="m 180.86351,212.08398 h 63.53265 l 8.34137,8.34138 v 41.70586 h -71.87402 z"
fill-rule="evenodd"
id="path342" />
<path
stroke="#434343"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 180.86351,212.08398 h 63.53265 l 8.34137,8.34138 v 41.70586 h -71.87402 z"
fill-rule="evenodd"
id="path344" />
<path
fill="#000000"
d="m 199.45726,238.36217 q 0,2.25 -1.03125,3.625 -1.01563,1.375 -2.78125,1.375 -1.79688,0 -2.82813,-1.14063 v 4.75 h -1.67187 v -13.65625 h 1.53125 l 0.0781,1.09375 q 1.03125,-1.26562 2.85938,-1.26562 1.78125,0 2.8125,1.34375 1.03125,1.32812 1.03125,3.71875 z m -1.67188,-0.20313 q 0,-1.65625 -0.71875,-2.625 -0.70312,-0.96875 -1.95312,-0.96875 -1.53125,0 -2.29688,1.35938 v 4.70312 q 0.76563,1.35938 2.32813,1.35938 1.20312,0 1.92187,-0.95313 0.71875,-0.96875 0.71875,-2.875 z m 3.37307,0 q 0,-1.45312 0.5625,-2.60937 0.57812,-1.15625 1.59375,-1.78125 1.01562,-0.625 2.3125,-0.625 2.01562,0 3.25,1.39062 1.25,1.39063 1.25,3.70313 v 0.125 q 0,1.4375 -0.54688,2.57812 -0.54687,1.14063 -1.57812,1.78125 -1.01563,0.64063 -2.35938,0.64063 -2,0 -3.25,-1.39063 -1.23437,-1.40625 -1.23437,-3.70312 z m 1.6875,0.20313 q 0,1.64062 0.76562,2.64062 0.76563,0.98438 2.03125,0.98438 1.29688,0 2.04688,-1 0.75,-1.01563 0.75,-2.82813 0,-1.625 -0.76563,-2.625 -0.76562,-1.01562 -2.04687,-1.01562 -1.25,0 -2.01563,1 -0.76562,0.98437 -0.76562,2.84375 z m 8.98364,-0.20313 q 0,-2.26562 1.07813,-3.64062 1.07812,-1.375 2.8125,-1.375 1.73437,0 2.75,1.17187 v -5.14062 h 1.6875 v 14 h -1.54688 l -0.0937,-1.0625 q -1,1.25 -2.8125,1.25 -1.70312,0 -2.79687,-1.40625 -1.07813,-1.40625 -1.07813,-3.65625 z m 1.6875,0.20313 q 0,1.67187 0.6875,2.625 0.70313,0.9375 1.92188,0.9375 1.60937,0 2.34375,-1.4375 v -4.53125 q -0.76563,-1.39063 -2.32813,-1.39063 -1.23437,0 -1.9375,0.95313 -0.6875,0.95312 -0.6875,2.84375 z"
fill-rule="nonzero"
id="path346" />
<path
fill="#d9d9d9"
d="m 398.08398,209.82724 v 0 c 0,-13.08115 10.60437,-23.6855 23.68552,-23.6855 h 205.92032 c 6.2818,0 12.30627,2.49542 16.74817,6.93732 4.44189,4.44189 6.93731,10.4664 6.93731,16.74818 v 94.73923 c 0,13.08115 -10.60437,23.68552 -23.68548,23.68552 H 421.7695 c -13.08115,0 -23.68552,-10.60437 -23.68552,-23.68552 z"
fill-rule="evenodd"
id="path348" />
<path
stroke="#434343"
stroke-width="2"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 398.08398,209.82724 v 0 c 0,-13.08115 10.60437,-23.6855 23.68552,-23.6855 h 205.92032 c 6.2818,0 12.30627,2.49542 16.74817,6.93732 4.44189,4.44189 6.93731,10.4664 6.93731,16.74818 v 94.73923 c 0,13.08115 -10.60437,23.68552 -23.68548,23.68552 H 421.7695 c -13.08115,0 -23.68552,-10.60437 -23.68552,-23.68552 z"
fill-rule="evenodd"
id="path350" />
<path
fill="#000000"
d="m 416.89624,210.1396 0.0469,1.23437 q 1.125,-1.40625 2.95313,-1.40625 3.125,0 3.15625,3.51563 v 6.51562 h -1.6875 v -6.51562 q -0.0156,-1.07813 -0.5,-1.57813 -0.46875,-0.51562 -1.48438,-0.51562 -0.8125,0 -1.4375,0.4375 -0.60937,0.4375 -0.96875,1.15625 v 7.01562 h -1.67187 v -9.85937 z m 8.24686,4.84375 q 0,-1.45313 0.5625,-2.60938 0.57812,-1.15625 1.59375,-1.78125 1.01562,-0.625 2.3125,-0.625 2.01562,0 3.25,1.39063 1.25,1.39062 1.25,3.70312 v 0.125 q 0,1.4375 -0.54688,2.57813 -0.54687,1.14062 -1.57812,1.78125 -1.01563,0.64062 -2.35938,0.64062 -2,0 -3.25,-1.39062 -1.23437,-1.40625 -1.23437,-3.70313 z m 1.6875,0.20312 q 0,1.64063 0.76562,2.64063 0.76563,0.98437 2.03125,0.98437 1.29688,0 2.04688,-1 0.75,-1.01562 0.75,-2.82812 0,-1.625 -0.76563,-2.625 -0.76562,-1.01563 -2.04687,-1.01563 -1.25,0 -2.01563,1 -0.76562,0.98438 -0.76562,2.84375 z m 8.98364,-0.20312 q 0,-2.26563 1.07812,-3.64063 1.07813,-1.375 2.8125,-1.375 1.73438,0 2.75,1.17188 v -5.14063 h 1.6875 v 14 h -1.54687 l -0.0937,-1.0625 q -1,1.25 -2.8125,1.25 -1.70313,0 -2.79688,-1.40625 -1.07812,-1.40625 -1.07812,-3.65625 z m 1.6875,0.20312 q 0,1.67188 0.6875,2.625 0.70312,0.9375 1.92187,0.9375 1.60938,0 2.34375,-1.4375 v -4.53125 q -0.76562,-1.39062 -2.32812,-1.39062 -1.23438,0 -1.9375,0.95312 -0.6875,0.95313 -0.6875,2.84375 z m 13.33395,5 q -2,0 -3.26562,-1.3125 -1.25,-1.32812 -1.25,-3.53125 v -0.3125 q 0,-1.46875 0.5625,-2.60937 0.5625,-1.15625 1.5625,-1.79688 1.01562,-0.65625 2.1875,-0.65625 1.92187,0 2.98437,1.26563 1.0625,1.26562 1.0625,3.625 v 0.6875 h -6.67187 q 0.0312,1.46875 0.84375,2.375 0.82812,0.89062 2.07812,0.89062 0.89063,0 1.51563,-0.35937 0.625,-0.375 1.07812,-0.96875 l 1.03125,0.79687 q -1.23437,1.90625 -3.71875,1.90625 z m -0.20312,-8.84375 q -1.01563,0 -1.71875,0.75 -0.6875,0.73438 -0.84375,2.07813 h 4.9375 v -0.125 q -0.0781,-1.28125 -0.70313,-1.98438 -0.60937,-0.71875 -1.67187,-0.71875 z"
fill-rule="nonzero"
id="path352" />
<path
fill="#f4cccc"
d="m 436.5315,244.97638 h 63.53265 l 8.34137,8.34137 v 41.70587 H 436.5315 Z"
fill-rule="evenodd"
id="path354" />
<path
stroke="#434343"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 436.5315,244.97638 h 63.53265 l 8.34137,8.34137 v 41.70587 H 436.5315 Z"
fill-rule="evenodd"
id="path356" />
<path
fill="#000000"
d="m 455.12524,271.25458 q 0,2.25 -1.03125,3.625 -1.01562,1.375 -2.78125,1.375 -1.79687,0 -2.82812,-1.14063 v 4.75 h -1.67188 V 266.2077 h 1.53125 l 0.0781,1.09375 q 1.03125,-1.26562 2.85937,-1.26562 1.78125,0 2.8125,1.34375 1.03125,1.32812 1.03125,3.71875 z m -1.67187,-0.20313 q 0,-1.65625 -0.71875,-2.625 -0.70313,-0.96875 -1.95313,-0.96875 -1.53125,0 -2.29687,1.35938 v 4.70312 q 0.76562,1.35938 2.32812,1.35938 1.20313,0 1.92188,-0.95313 0.71875,-0.96875 0.71875,-2.875 z m 3.37307,0 q 0,-1.45312 0.5625,-2.60937 0.57813,-1.15625 1.59375,-1.78125 1.01563,-0.625 2.3125,-0.625 2.01563,0 3.25,1.39062 1.25,1.39063 1.25,3.70313 v 0.125 q 0,1.4375 -0.54687,2.57812 -0.54688,1.14063 -1.57813,1.78125 -1.01562,0.64063 -2.35937,0.64063 -2,0 -3.25,-1.39063 -1.23438,-1.40625 -1.23438,-3.70312 z m 1.6875,0.20313 q 0,1.64062 0.76563,2.64062 0.76562,0.98438 2.03125,0.98438 1.29687,0 2.04687,-1 0.75,-1.01563 0.75,-2.82813 0,-1.625 -0.76562,-2.625 -0.76563,-1.01562 -2.04688,-1.01562 -1.25,0 -2.01562,1 -0.76563,0.98437 -0.76563,2.84375 z m 8.98365,-0.20313 q 0,-2.26562 1.07812,-3.64062 1.07813,-1.375 2.8125,-1.375 1.73438,0 2.75,1.17187 v -5.14062 h 1.6875 v 14 h -1.54687 l -0.0937,-1.0625 q -1,1.25 -2.8125,1.25 -1.70313,0 -2.79688,-1.40625 -1.07812,-1.40625 -1.07812,-3.65625 z m 1.6875,0.20313 q 0,1.67187 0.6875,2.625 0.70312,0.9375 1.92187,0.9375 1.60938,0 2.34375,-1.4375 v -4.53125 q -0.76562,-1.39063 -2.32812,-1.39063 -1.23438,0 -1.9375,0.95313 -0.6875,0.95312 -0.6875,2.84375 z"
fill-rule="nonzero"
id="path358" />
<path
fill="#d9d9d9"
d="m 95.60105,380.54904 v 0 c 0,-13.08115 10.60436,-23.68552 23.68551,-23.68552 h 205.92032 c 6.2818,0 12.30627,2.49543 16.74817,6.93732 4.44192,4.4419 6.93735,10.4664 6.93735,16.7482 v 94.7392 c 0,13.08115 -10.60437,23.68552 -23.68552,23.68552 H 119.28656 c -13.08115,0 -23.68551,-10.60437 -23.68551,-23.68552 z"
fill-rule="evenodd"
id="path360" />
<path
stroke="#434343"
stroke-width="2"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 95.60105,380.54904 v 0 c 0,-13.08115 10.60436,-23.68552 23.68551,-23.68552 h 205.92032 c 6.2818,0 12.30627,2.49543 16.74817,6.93732 4.44192,4.4419 6.93735,10.4664 6.93735,16.7482 v 94.7392 c 0,13.08115 -10.60437,23.68552 -23.68552,23.68552 H 119.28656 c -13.08115,0 -23.68551,-10.60437 -23.68551,-23.68552 z"
fill-rule="evenodd"
id="path362" />
<path
fill="#000000"
d="m 114.4133,380.8614 0.0469,1.23437 q 1.125,-1.40625 2.95312,-1.40625 3.125,0 3.15625,3.51563 v 6.51562 h -1.6875 v -6.51562 q -0.0156,-1.07813 -0.5,-1.57813 -0.46875,-0.51562 -1.48437,-0.51562 -0.8125,0 -1.4375,0.4375 -0.60938,0.4375 -0.96875,1.15625 v 7.01562 h -1.67188 v -9.85937 z m 8.24686,4.84375 q 0,-1.45313 0.5625,-2.60938 0.57812,-1.15625 1.59375,-1.78125 1.01562,-0.625 2.3125,-0.625 2.01562,0 3.25,1.39063 1.25,1.39062 1.25,3.70312 v 0.125 q 0,1.4375 -0.54688,2.57813 -0.54687,1.14062 -1.57812,1.78125 -1.01563,0.64062 -2.35938,0.64062 -2,0 -3.25,-1.39062 -1.23437,-1.40625 -1.23437,-3.70313 z m 1.6875,0.20312 q 0,1.64063 0.76562,2.64063 0.76563,0.98437 2.03125,0.98437 1.29688,0 2.04688,-1 0.75,-1.01562 0.75,-2.82812 0,-1.625 -0.76563,-2.625 -0.76562,-1.01563 -2.04687,-1.01563 -1.25,0 -2.01563,1 -0.76562,0.98438 -0.76562,2.84375 z m 8.98364,-0.20312 q 0,-2.26563 1.07812,-3.64063 1.07813,-1.375 2.8125,-1.375 1.73438,0 2.75,1.17188 v -5.14063 h 1.6875 v 14 h -1.54687 l -0.0937,-1.0625 q -1,1.25 -2.8125,1.25 -1.70313,0 -2.79688,-1.40625 -1.07812,-1.40625 -1.07812,-3.65625 z m 1.6875,0.20312 q 0,1.67188 0.6875,2.625 0.70312,0.9375 1.92187,0.9375 1.60938,0 2.34375,-1.4375 v -4.53125 q -0.76562,-1.39062 -2.32812,-1.39062 -1.23438,0 -1.9375,0.95312 -0.6875,0.95313 -0.6875,2.84375 z m 13.33397,5 q -2,0 -3.26563,-1.3125 -1.25,-1.32812 -1.25,-3.53125 v -0.3125 q 0,-1.46875 0.5625,-2.60937 0.5625,-1.15625 1.5625,-1.79688 1.01563,-0.65625 2.1875,-0.65625 1.92188,0 2.98438,1.26563 1.0625,1.26562 1.0625,3.625 v 0.6875 h -6.67188 q 0.0312,1.46875 0.84375,2.375 0.82813,0.89062 2.07813,0.89062 0.89062,0 1.51562,-0.35937 0.625,-0.375 1.07813,-0.96875 l 1.03125,0.79687 q -1.23438,1.90625 -3.71875,1.90625 z m -0.20313,-8.84375 q -1.01562,0 -1.71875,0.75 -0.6875,0.73438 -0.84375,2.07813 h 4.9375 v -0.125 q -0.0781,-1.28125 -0.70312,-1.98438 -0.60938,-0.71875 -1.67188,-0.71875 z"
fill-rule="nonzero"
id="path364" />
<path
fill="#f4cccc"
d="m 200.92651,377.43307 h 63.53262 l 8.3414,8.34137 v 41.70587 h -71.87402 z"
fill-rule="evenodd"
id="path366" />
<path
stroke="#434343"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 200.92651,377.43307 h 63.53262 l 8.3414,8.34137 v 41.70587 h -71.87402 z"
fill-rule="evenodd"
id="path368" />
<path
fill="#000000"
d="m 219.52026,403.71124 q 0,2.25 -1.03125,3.625 -1.01563,1.375 -2.78125,1.375 -1.79688,0 -2.82813,-1.14063 v 4.75 h -1.67187 v -13.65625 h 1.53125 l 0.0781,1.09375 q 1.03125,-1.26562 2.85938,-1.26562 1.78125,0 2.8125,1.34375 1.03125,1.32812 1.03125,3.71875 z m -1.67188,-0.20313 q 0,-1.65625 -0.71875,-2.625 -0.70312,-0.96875 -1.95312,-0.96875 -1.53125,0 -2.29688,1.35938 v 4.70312 q 0.76563,1.35938 2.32813,1.35938 1.20312,0 1.92187,-0.95313 0.71875,-0.96875 0.71875,-2.875 z m 3.37307,0 q 0,-1.45312 0.5625,-2.60937 0.57812,-1.15625 1.59375,-1.78125 1.01562,-0.625 2.3125,-0.625 2.01562,0 3.25,1.39062 1.25,1.39063 1.25,3.70313 v 0.125 q 0,1.4375 -0.54688,2.57812 -0.54687,1.14063 -1.57812,1.78125 -1.01563,0.64063 -2.35938,0.64063 -2,0 -3.25,-1.39063 -1.23437,-1.40625 -1.23437,-3.70312 z m 1.6875,0.20313 q 0,1.64062 0.76562,2.64062 0.76563,0.98438 2.03125,0.98438 1.29688,0 2.04688,-1 0.75,-1.01563 0.75,-2.82813 0,-1.625 -0.76563,-2.625 -0.76562,-1.01562 -2.04687,-1.01562 -1.25,0 -2.01563,1 -0.76562,0.98437 -0.76562,2.84375 z m 8.98364,-0.20313 q 0,-2.26562 1.07813,-3.64062 1.07812,-1.375 2.8125,-1.375 1.73437,0 2.75,1.17187 v -5.14062 h 1.6875 v 14 h -1.54688 l -0.0937,-1.0625 q -1,1.25 -2.8125,1.25 -1.70312,0 -2.79687,-1.40625 -1.07813,-1.40625 -1.07813,-3.65625 z m 1.6875,0.20313 q 0,1.67187 0.6875,2.625 0.70313,0.9375 1.92188,0.9375 1.60937,0 2.34375,-1.4375 v -4.53125 q -0.76563,-1.39063 -2.32813,-1.39063 -1.23437,0 -1.9375,0.95313 -0.6875,0.95312 -0.6875,2.84375 z"
fill-rule="nonzero"
id="path370" />
<path
fill="#000000"
fill-opacity="0"
d="m 145.48819,305.3176 v 0 c 0,-108.95232 87.37144,-197.2756 195.1496,-197.2756 v 0 c 51.7569,0 101.39395,20.78433 137.99161,57.78069 36.59766,36.99635 57.15802,87.17416 57.15802,139.49492 v 0 c 0,108.9523 -87.37146,197.27557 -195.14963,197.27557 v 0 c -107.77815,0 -195.1496,-88.32327 -195.1496,-197.27557 z"
fill-rule="evenodd"
id="path372" />
<path
stroke="#0000ff"
stroke-width="4"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 145.48819,305.3176 v 0 c 0,-108.95232 87.37144,-197.2756 195.1496,-197.2756 v 0 c 51.7569,0 101.39395,20.78433 137.99161,57.78069 36.59766,36.99635 57.15802,87.17416 57.15802,139.49492 v 0 c 0,108.9523 -87.37146,197.27557 -195.14963,197.27557 v 0 c -107.77815,0 -195.1496,-88.32327 -195.1496,-197.27557 z"
fill-rule="evenodd"
id="path374" />
<path
fill="#f4cccc"
d="m 108.9895,417.0105 h 63.53264 l 8.34137,8.34137 v 41.70587 H 108.9895 Z"
fill-rule="evenodd"
id="path376" />
<path
stroke="#434343"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 108.9895,417.0105 h 63.53264 l 8.34137,8.34137 v 41.70587 H 108.9895 Z"
fill-rule="evenodd"
id="path378" />
<path
fill="#000000"
d="m 127.58325,443.2887 q 0,2.25 -1.03125,3.625 -1.01562,1.375 -2.78125,1.375 -1.79687,0 -2.82812,-1.14063 v 4.75 h -1.67188 v -13.65625 h 1.53125 l 0.0781,1.09375 q 1.03125,-1.26562 2.85937,-1.26562 1.78125,0 2.8125,1.34375 1.03125,1.32812 1.03125,3.71875 z m -1.67187,-0.20313 q 0,-1.65625 -0.71875,-2.625 -0.70313,-0.96875 -1.95313,-0.96875 -1.53125,0 -2.29687,1.35938 v 4.70312 q 0.76562,1.35938 2.32812,1.35938 1.20313,0 1.92188,-0.95313 0.71875,-0.96875 0.71875,-2.875 z m 3.37306,0 q 0,-1.45312 0.5625,-2.60937 0.57812,-1.15625 1.59375,-1.78125 1.01562,-0.625 2.3125,-0.625 2.01562,0 3.25,1.39062 1.25,1.39063 1.25,3.70313 v 0.125 q 0,1.4375 -0.54688,2.57812 -0.54687,1.14063 -1.57812,1.78125 -1.01563,0.64063 -2.35938,0.64063 -2,0 -3.25,-1.39063 -1.23437,-1.40625 -1.23437,-3.70312 z m 1.6875,0.20313 q 0,1.64062 0.76562,2.64062 0.76563,0.98438 2.03125,0.98438 1.29688,0 2.04688,-1 0.75,-1.01563 0.75,-2.82813 0,-1.625 -0.76563,-2.625 -0.76562,-1.01562 -2.04687,-1.01562 -1.25,0 -2.01563,1 -0.76562,0.98437 -0.76562,2.84375 z m 8.98364,-0.20313 q 0,-2.26562 1.07813,-3.64062 1.07812,-1.375 2.8125,-1.375 1.73437,0 2.75,1.17187 v -5.14062 h 1.6875 v 14 h -1.54688 l -0.0937,-1.0625 q -1,1.25 -2.8125,1.25 -1.70312,0 -2.79687,-1.40625 -1.07813,-1.40625 -1.07813,-3.65625 z m 1.6875,0.20313 q 0,1.67187 0.6875,2.625 0.70313,0.9375 1.92188,0.9375 1.60937,0 2.34375,-1.4375 v -4.53125 q -0.76563,-1.39063 -2.32813,-1.39063 -1.23437,0 -1.9375,0.95313 -0.6875,0.95312 -0.6875,2.84375 z"
fill-rule="nonzero"
id="path380" />
<path
fill="#f4cccc"
d="m 87.96063,209.9895 h 63.53264 l 8.34137,8.34137 v 41.70587 H 87.96063 Z"
fill-rule="evenodd"
id="path382" />
<path
stroke="#434343"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 87.96063,209.9895 h 63.53264 l 8.34137,8.34137 v 41.70587 H 87.96063 Z"
fill-rule="evenodd"
id="path384" />
<path
fill="#000000"
d="m 106.55438,236.26768 q 0,2.25 -1.03125,3.625 -1.01563,1.375 -2.78125,1.375 -1.79688,0 -2.828125,-1.14062 v 4.75 H 98.24188 v -13.65625 h 1.53125 l 0.07813,1.09375 q 1.03124,-1.26563 2.85937,-1.26563 1.78125,0 2.8125,1.34375 1.03125,1.32813 1.03125,3.71875 z m -1.67188,-0.20312 q 0,-1.65625 -0.71875,-2.625 -0.70312,-0.96875 -1.95312,-0.96875 -1.53125,0 -2.296875,1.35937 v 4.70313 q 0.765625,1.35937 2.328125,1.35937 1.20312,0 1.92187,-0.95312 0.71875,-0.96875 0.71875,-2.875 z m 3.37307,0 q 0,-1.45313 0.5625,-2.60938 0.57812,-1.15625 1.59375,-1.78125 1.01562,-0.625 2.3125,-0.625 2.01562,0 3.25,1.39063 1.25,1.39062 1.25,3.70312 v 0.125 q 0,1.4375 -0.54688,2.57813 -0.54687,1.14062 -1.57812,1.78125 -1.01563,0.64062 -2.35938,0.64062 -2,0 -3.25,-1.39062 -1.23437,-1.40625 -1.23437,-3.70313 z m 1.6875,0.20312 q 0,1.64063 0.76562,2.64063 0.76563,0.98437 2.03125,0.98437 1.29688,0 2.04688,-1 0.75,-1.01562 0.75,-2.82812 0,-1.625 -0.76563,-2.625 -0.76562,-1.01563 -2.04687,-1.01563 -1.25,0 -2.01563,1 -0.76562,0.98438 -0.76562,2.84375 z m 8.98364,-0.20312 q 0,-2.26563 1.07813,-3.64063 1.07812,-1.375 2.8125,-1.375 1.73437,0 2.75,1.17188 v -5.14063 h 1.6875 v 14 h -1.54688 l -0.0937,-1.0625 q -1,1.25 -2.8125,1.25 -1.70312,0 -2.79687,-1.40625 -1.07813,-1.40625 -1.07813,-3.65625 z m 1.6875,0.20312 q 0,1.67188 0.6875,2.625 0.70313,0.9375 1.92188,0.9375 1.60937,0 2.34375,-1.4375 v -4.53125 q -0.76563,-1.39062 -2.32813,-1.39062 -1.23437,0 -1.9375,0.95312 -0.6875,0.95313 -0.6875,2.84375 z"
fill-rule="nonzero"
id="path386" />
<path
fill="#000000"
fill-opacity="0"
d="m 204.99213,171.03674 -81.10236,38.96063"
fill-rule="evenodd"
id="path388" />
<path
stroke="#0000ff"
stroke-width="4"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 204.99213,171.03674 -81.10236,38.96063"
fill-rule="evenodd"
id="path390" />
<path
fill="#000000"
fill-opacity="0"
d="m 185.52843,162.97461 31.27559,49.10237"
fill-rule="evenodd"
id="path392" />
<path
stroke="#0000ff"
stroke-width="4"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 185.52843,162.97461 31.27559,49.10237"
fill-rule="evenodd"
id="path394" />
<path
fill="#000000"
fill-opacity="0"
d="m 182.18898,171.03674 46.07873,5.60631"
fill-rule="evenodd"
id="path396" />
<path
stroke="#0000ff"
stroke-width="4"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 182.18898,171.03674 46.07873,5.60631"
fill-rule="evenodd"
id="path398" />
<path
fill="#000000"
fill-opacity="0"
d="m 180.86351,442.03412 44,27.2756"
fill-rule="evenodd"
id="path400" />
<path
stroke="#0000ff"
stroke-width="4"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 180.86351,442.03412 44,27.2756"
fill-rule="evenodd"
id="path402" />
<path
fill="#000000"
fill-opacity="0"
d="M 236.86351,427.48032 208.73753,469.3386"
fill-rule="evenodd"
id="path404" />
<path
stroke="#0000ff"
stroke-width="4"
stroke-linejoin="round"
stroke-linecap="butt"
d="M 236.86351,427.48032 208.73753,469.3386"
fill-rule="evenodd"
id="path406" />
<path
fill="#000000"
fill-opacity="0"
d="m 228.2021,461.26248 h 24.53543"
fill-rule="evenodd"
id="path408" />
<path
stroke="#0000ff"
stroke-width="4"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 228.2021,461.26248 h 24.53543"
fill-rule="evenodd"
id="path410" />
<path
fill="#000000"
fill-opacity="0"
d="m 521.6842,219.98076 35.21259,20.59843"
fill-rule="evenodd"
id="path412" />
<path
stroke="#0000ff"
stroke-width="4"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 521.6842,219.98076 35.21259,20.59843"
fill-rule="evenodd"
id="path414" />
<path
fill="#000000"
fill-opacity="0"
d="m 505.5599,219.98076 -33.10236,25.00789"
fill-rule="evenodd"
id="path416" />
<path
stroke="#0000ff"
stroke-width="4"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 505.5599,219.98076 -33.10236,25.00789"
fill-rule="evenodd"
id="path418" />
<path
fill="#0000ff"
d="m 205.39896,461.26248 v 0 c 0,-6.29694 5.10466,-11.40158 11.40157,-11.40158 v 0 c 3.02389,0 5.92392,1.20123 8.06213,3.33945 2.13821,2.13818 3.33945,5.03823 3.33945,8.06213 v 0 c 0,6.29691 -5.10466,11.40155 -11.40158,11.40155 v 0 c -6.29691,0 -11.40157,-5.10465 -11.40157,-11.40155 z"
fill-rule="evenodd"
id="path420" />
<path
fill="#000000"
fill-opacity="0"
d="m 208.73839,453.20035 16.12427,16.12424 m 0,-16.12424 -16.12427,16.12424"
fill-rule="evenodd"
id="path422" />
<path
fill="#000000"
fill-opacity="0"
d="m 205.39896,461.26248 v 0 c 0,-6.29694 5.10466,-11.40158 11.40157,-11.40158 v 0 c 3.02389,0 5.92392,1.20123 8.06213,3.33945 2.13821,2.13818 3.33945,5.03823 3.33945,8.06213 v 0 c 0,6.29691 -5.10466,11.40155 -11.40158,11.40155 v 0 c -6.29691,0 -11.40157,-5.10465 -11.40157,-11.40155 z"
fill-rule="evenodd"
id="path424" />
<path
stroke="#ffffff"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 208.73839,453.20035 16.12427,16.12424 m 0,-16.12424 -16.12427,16.12424"
fill-rule="evenodd"
id="path426" />
<path
stroke="#ffffff"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 205.39896,461.26248 v 0 c 0,-6.29694 5.10466,-11.40158 11.40157,-11.40158 v 0 c 3.02389,0 5.92392,1.20123 8.06213,3.33945 2.13821,2.13818 3.33945,5.03823 3.33945,8.06213 v 0 c 0,6.29691 -5.10466,11.40155 -11.40158,11.40155 v 0 c -6.29691,0 -11.40157,-5.10465 -11.40157,-11.40155 z"
fill-rule="evenodd"
id="path428" />
<path
fill="#0000ff"
d="m 182.18898,171.03674 v 0 c 0,-6.29692 5.10466,-11.40157 11.40157,-11.40157 v 0 c 3.02389,0 5.92392,1.20124 8.06213,3.33944 2.13821,2.13821 3.33945,5.03825 3.33945,8.06213 v 0 c 0,6.29692 -5.10466,11.40158 -11.40158,11.40158 v 0 c -6.29691,0 -11.40157,-5.10466 -11.40157,-11.40158 z"
fill-rule="evenodd"
id="path430" />
<path
fill="#000000"
fill-opacity="0"
d="m 185.52843,162.97461 16.12425,16.12427 m 0,-16.12427 -16.12425,16.12427"
fill-rule="evenodd"
id="path432" />
<path
fill="#000000"
fill-opacity="0"
d="m 182.18898,171.03674 v 0 c 0,-6.29692 5.10466,-11.40157 11.40157,-11.40157 v 0 c 3.02389,0 5.92392,1.20124 8.06213,3.33944 2.13821,2.13821 3.33945,5.03825 3.33945,8.06213 v 0 c 0,6.29692 -5.10466,11.40158 -11.40158,11.40158 v 0 c -6.29691,0 -11.40157,-5.10466 -11.40157,-11.40158 z"
fill-rule="evenodd"
id="path434" />
<path
stroke="#ffffff"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 185.52843,162.97461 16.12425,16.12427 m 0,-16.12427 -16.12425,16.12427"
fill-rule="evenodd"
id="path436" />
<path
stroke="#ffffff"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 182.18898,171.03674 v 0 c 0,-6.29692 5.10466,-11.40157 11.40157,-11.40157 v 0 c 3.02389,0 5.92392,1.20124 8.06213,3.33944 2.13821,2.13821 3.33945,5.03825 3.33945,8.06213 v 0 c 0,6.29692 -5.10466,11.40158 -11.40158,11.40158 v 0 c -6.29691,0 -11.40157,-5.10466 -11.40157,-11.40158 z"
fill-rule="evenodd"
id="path438" />
<path
fill="#0000ff"
d="m 502.22046,211.91864 v 0 c 0,-6.29692 5.10468,-11.40158 11.40161,-11.40158 v 0 c 3.02387,0 5.92389,1.20123 8.06214,3.33945 2.13818,2.13821 3.33941,5.03823 3.33941,8.06213 v 0 c 0,6.29691 -5.10467,11.40157 -11.40155,11.40157 v 0 c -6.29693,0 -11.40161,-5.10466 -11.40161,-11.40157 z"
fill-rule="evenodd"
id="path440" />
<path
fill="#000000"
fill-opacity="0"
d="m 505.5599,203.8565 16.1243,16.12425 m 0,-16.12425 -16.1243,16.12425"
fill-rule="evenodd"
id="path442" />
<path
fill="#000000"
fill-opacity="0"
d="m 502.22046,211.91864 v 0 c 0,-6.29692 5.10468,-11.40158 11.40161,-11.40158 v 0 c 3.02387,0 5.92389,1.20123 8.06214,3.33945 2.13818,2.13821 3.33941,5.03823 3.33941,8.06213 v 0 c 0,6.29691 -5.10467,11.40157 -11.40155,11.40157 v 0 c -6.29693,0 -11.40161,-5.10466 -11.40161,-11.40157 z"
fill-rule="evenodd"
id="path444" />
<path
stroke="#ffffff"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 505.5599,203.8565 16.1243,16.12425 m 0,-16.12425 -16.1243,16.12425"
fill-rule="evenodd"
id="path446" />
<path
stroke="#ffffff"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 502.22046,211.91864 v 0 c 0,-6.29692 5.10468,-11.40158 11.40161,-11.40158 v 0 c 3.02387,0 5.92389,1.20123 8.06214,3.33945 2.13818,2.13821 3.33941,5.03823 3.33941,8.06213 v 0 c 0,6.29691 -5.10467,11.40157 -11.40155,11.40157 v 0 c -6.29693,0 -11.40161,-5.10466 -11.40161,-11.40157 z"
fill-rule="evenodd"
id="path448" />
<path
fill="#f4cccc"
d="m 520.96063,240.58267 h 63.53265 l 8.34137,8.34138 v 41.70586 h -71.87402 z"
fill-rule="evenodd"
id="path450" />
<path
stroke="#434343"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 520.96063,240.58267 h 63.53265 l 8.34137,8.34138 v 41.70586 h -71.87402 z"
fill-rule="evenodd"
id="path452" />
<path
fill="#000000"
d="m 539.5544,266.86087 q 0,2.25 -1.03125,3.625 -1.01563,1.375 -2.78125,1.375 -1.79688,0 -2.82813,-1.14063 v 4.75 h -1.67187 v -13.65625 h 1.53125 l 0.0781,1.09375 q 1.03125,-1.26562 2.85938,-1.26562 1.78125,0 2.8125,1.34375 1.03125,1.32812 1.03125,3.71875 z m -1.67188,-0.20313 q 0,-1.65625 -0.71875,-2.625 -0.70312,-0.96875 -1.95312,-0.96875 -1.53125,0 -2.29688,1.35938 v 4.70312 q 0.76563,1.35938 2.32813,1.35938 1.20312,0 1.92187,-0.95313 0.71875,-0.96875 0.71875,-2.875 z m 3.37305,0 q 0,-1.45312 0.5625,-2.60937 0.57813,-1.15625 1.59375,-1.78125 1.01563,-0.625 2.3125,-0.625 2.01563,0 3.25,1.39062 1.25,1.39063 1.25,3.70313 v 0.125 q 0,1.4375 -0.54687,2.57812 -0.54688,1.14063 -1.57813,1.78125 -1.01562,0.64063 -2.35937,0.64063 -2,0 -3.25,-1.39063 -1.23438,-1.40625 -1.23438,-3.70312 z m 1.6875,0.20313 q 0,1.64062 0.76563,2.64062 0.76562,0.98438 2.03125,0.98438 1.29687,0 2.04687,-1 0.75,-1.01563 0.75,-2.82813 0,-1.625 -0.76562,-2.625 -0.76563,-1.01562 -2.04688,-1.01562 -1.25,0 -2.01562,1 -0.76563,0.98437 -0.76563,2.84375 z m 8.98364,-0.20313 q 0,-2.26562 1.07813,-3.64062 1.07812,-1.375 2.8125,-1.375 1.73437,0 2.75,1.17187 v -5.14062 h 1.6875 v 14 h -1.54688 l -0.0937,-1.0625 q -1,1.25 -2.8125,1.25 -1.70312,0 -2.79687,-1.40625 -1.07813,-1.40625 -1.07813,-3.65625 z m 1.6875,0.20313 q 0,1.67187 0.6875,2.625 0.70313,0.9375 1.92188,0.9375 1.60937,0 2.34375,-1.4375 v -4.53125 q -0.76563,-1.39063 -2.32813,-1.39063 -1.23437,0 -1.9375,0.95313 -0.6875,0.95312 -0.6875,2.84375 z"
fill-rule="nonzero"
id="path454" />
<path
fill="#f4cccc"
d="m 252.73753,436.23883 h 63.53264 l 8.34137,8.3414 v 41.70587 h -71.87401 z"
fill-rule="evenodd"
id="path456" />
<path
stroke="#434343"
stroke-width="1"
stroke-linejoin="round"
stroke-linecap="butt"
d="m 252.73753,436.23883 h 63.53264 l 8.34137,8.3414 v 41.70587 h -71.87401 z"
fill-rule="evenodd"
id="path458" />
<path
fill="#000000"
d="m 271.33127,462.51703 q 0,2.25 -1.03125,3.625 -1.01562,1.375 -2.78125,1.375 -1.79687,0 -2.82812,-1.14063 v 4.75 h -1.67188 v -13.65625 h 1.53125 l 0.0781,1.09375 q 1.03125,-1.26562 2.85937,-1.26562 1.78125,0 2.8125,1.34375 1.03125,1.32812 1.03125,3.71875 z m -1.67187,-0.20313 q 0,-1.65625 -0.71875,-2.625 -0.70313,-0.96875 -1.95313,-0.96875 -1.53125,0 -2.29687,1.35938 v 4.70312 q 0.76562,1.35938 2.32812,1.35938 1.20313,0 1.92188,-0.95313 0.71875,-0.96875 0.71875,-2.875 z m 3.37307,0 q 0,-1.45312 0.5625,-2.60937 0.57813,-1.15625 1.59375,-1.78125 1.01563,-0.625 2.3125,-0.625 2.01563,0 3.25,1.39062 1.25,1.39063 1.25,3.70313 v 0.125 q 0,1.4375 -0.54687,2.57812 -0.54688,1.14063 -1.57813,1.78125 -1.01562,0.64063 -2.35937,0.64063 -2,0 -3.25,-1.39063 -1.23438,-1.40625 -1.23438,-3.70312 z m 1.6875,0.20313 q 0,1.64062 0.76563,2.64062 0.76562,0.98438 2.03125,0.98438 1.29687,0 2.04687,-1 0.75,-1.01563 0.75,-2.82813 0,-1.625 -0.76562,-2.625 -0.76563,-1.01562 -2.04688,-1.01562 -1.25,0 -2.01562,1 -0.76563,0.98437 -0.76563,2.84375 z m 8.98365,-0.20313 q 0,-2.26562 1.07812,-3.64062 1.07813,-1.375 2.8125,-1.375 1.73438,0 2.75,1.17187 v -5.14062 h 1.6875 v 14 h -1.54687 l -0.0937,-1.0625 q -1,1.25 -2.8125,1.25 -1.70313,0 -2.79688,-1.40625 -1.07812,-1.40625 -1.07812,-3.65625 z m 1.6875,0.20313 q 0,1.67187 0.6875,2.625 0.70312,0.9375 1.92187,0.9375 1.60938,0 2.34375,-1.4375 v -4.53125 q -0.76562,-1.39063 -2.32812,-1.39063 -1.23438,0 -1.9375,0.95313 -0.6875,0.95312 -0.6875,2.84375 z"
fill-rule="nonzero"
id="path460" />
</g>
</svg>

After

Width:  |  Height:  |  Size: 39 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 57 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 55 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 110 KiB

View File

@@ -1,3 +1,48 @@
- date: [2021-02-08, 2021-02-10]
  country: www
  city: streaming
  event: ENIX SAS
  speaker: jpetazzo
  title: Docker intensif (en français)
  lang: fr
  attend: https://enix.io/fr/services/formation/online/
- date: [2021-02-15, 2021-02-18]
  country: www
  city: streaming
  event: ENIX SAS
  speaker: jpetazzo
  title: Fondamentaux Kubernetes (en français)
  lang: fr
  attend: https://enix.io/fr/services/formation/online/
- date: [2021-02-22, 2021-02-23]
  country: www
  city: streaming
  event: ENIX SAS
  speaker: jpetazzo
  title: Packaging et CI/CD pour Kubernetes (en français)
  lang: fr
  attend: https://enix.io/fr/services/formation/online/
- date: [2021-02-24, 2021-02-26]
  country: www
  city: streaming
  event: ENIX SAS
  speaker: jpetazzo
  title: Kubernetes avancé (en français)
  lang: fr
  attend: https://enix.io/fr/services/formation/online/
- date: [2021-03-01, 2021-03-02]
  country: www
  city: streaming
  event: ENIX SAS
  speaker: jpetazzo
  title: Opérer Kubernetes (en français)
  lang: fr
  attend: https://enix.io/fr/services/formation/online/
- date: [2020-10-05, 2020-10-06]
  country: www
  city: streaming

549
slides/k8s/admission.md Normal file
View File

@@ -0,0 +1,549 @@
# Dynamic Admission Control
- This is one of the many ways to extend the Kubernetes API
- High level summary: dynamic admission control relies on webhooks that are ...
- dynamic (can be added/removed on the fly)
- running inside or outside the cluster
- *validating* (yay/nay) or *mutating* (can change objects that are created/updated)
- selective (can be configured to apply only to some kinds, some selectors...)
- mandatory or optional (should it block operations when webhook is down?)
- Used on their own (e.g. policy enforcement) or as part of operators
---
## Use cases
Some examples ...
- Stand-alone admission controllers
*validating:* policy enforcement (e.g. quotas, naming conventions ...)
*mutating:* inject or provide default values (e.g. pod presets)
- Admission controllers part of a greater system
*validating:* advanced typing for operators
*mutating:* inject sidecars for service meshes
---
## You said *dynamic?*
- Some admission controllers are built in the API server
- They are enabled/disabled through Kubernetes API server configuration
(e.g. `--enable-admission-plugins`/`--disable-admission-plugins` flags)
- Here, we're talking about *dynamic* admission controllers
- They can be added/removed while the API server is running
(without touching the configuration files or even having access to them)
- This is done through two kinds of cluster-scope resources:
ValidatingWebhookConfiguration and MutatingWebhookConfiguration
---
## You said *webhooks?*
- A ValidatingWebhookConfiguration or MutatingWebhookConfiguration contains:
- a resource filter
<br/>
(e.g. "all pods", "deployments in namespace xyz", "everything"...)
- an operations filter
<br/>
(e.g. CREATE, UPDATE, DELETE)
- the address of the webhook server
- Each time an operation matches the filters, it is sent to the webhook server
---
## What gets sent exactly?
- The API server will `POST` a JSON object to the webhook
- That object will be a Kubernetes API message with `kind` `AdmissionReview`
- It will contain a `request` field, with, notably:
- `request.uid` (to be used when replying)
- `request.object` (the object created/deleted/changed)
- `request.oldObject` (when an object is modified)
- `request.userInfo` (who was making the request to the API in the first place)
(See [the documentation](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#request) for a detailed example showing more fields.)
---
## How should the webhook respond?
- By replying with another `AdmissionReview` in JSON
- It should have a `response` field, with, notably:
- `response.uid` (matching the `request.uid`)
- `response.allowed` (`true`/`false`)
- `response.status.message` (optional string; useful when denying requests)
- `response.patchType` (when a mutating webhook changes the object; must be `JSONPatch`)
- `response.patch` (the patch, encoded in base64)
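As a sketch, here is how a webhook could build that reply in Python (the language of the Flask app used later in this chapter); the function name and structure are illustrative, only the `AdmissionReview` fields come from the Kubernetes API:

```python
import base64
import json

def make_admission_response(review, allowed, message=None, patch=None):
    """Build the AdmissionReview reply for an incoming review object.

    `review` is the decoded AdmissionReview request;
    `patch`, if given, is a JSONPatch (a list of operations) for mutating webhooks.
    """
    response = {
        "uid": review["request"]["uid"],  # must echo the request uid
        "allowed": allowed,
    }
    if message:
        response["status"] = {"message": message}
    if patch is not None:
        response["patchType"] = "JSONPatch"
        # the patch itself is JSON, encoded in base64
        response["patch"] = base64.b64encode(json.dumps(patch).encode()).decode()
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```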
---
## What if the webhook *does not* respond?
- If "something bad" happens, the API server follows the `failurePolicy` option
- this is a per-webhook option (specified in the webhook configuration)
- it can be `Fail` (the default) or `Ignore` ("allow all, unmodified")
- What's "something bad"?
- webhook responds with something invalid
- webhook takes more than 10 seconds to respond
<br/>
(this can be changed with `timeoutSeconds` field in the webhook config)
- webhook is down or has invalid certificates
<br/>
(TLS! It's not just a good idea; for admission control, it's the law!)
---
## What did you say about TLS?
- The webhook configuration can indicate:
- either `url` of the webhook server (has to begin with `https://`)
- or `service.name` and `service.namespace` of a Service on the cluster
- In the latter case, the Service has to accept TLS connections on port 443
- It has to use a certificate with CN `<name>.<namespace>.svc`
(**and** a `subjectAltName` extension with `DNS:<name>.<namespace>.svc`)
- The certificate needs to be valid (signed by a CA trusted by the API server)
... alternatively, we can pass a `caBundle` in the webhook configuration
---
## Webhook server inside or outside
- "Outside" webhook server is defined with `url` option
- convenient for external webhooks (e.g. tamper-resistant audit trail)
- also great for initial development (e.g. with ngrok)
- requires outbound connectivity (duh) and can become a SPOF
- "Inside" webhook server is defined with `service` option
- convenient when the webhook needs to be deployed and managed on the cluster
- also great for air gapped clusters
- development can be harder (but tools like [Tilt](https://tilt.dev) can help)
---
## Developing a simple admission webhook
- We're going to register a custom webhook!
- First, we'll just dump the `AdmissionRequest` object
(using a little Node app)
- Then, we'll implement a strict policy on a specific label
(using a little Flask app)
- Development will happen in local containers, plumbed with ngrok
- Then we will deploy to the cluster 🔥
---
## Running the webhook locally
- We prepared a Docker Compose file to start the whole stack
(the Node "echo" app, the Flask app, and one ngrok tunnel for each of them)
.exercise[
- Go to the webhook directory:
```bash
cd ~/container.training/webhooks/admission
```
- Start the webhook in Docker containers:
```bash
docker-compose up
```
]
*Note the URL in `ngrok-echo_1` looking like `url=https://xxxx.ngrok.io`.*
---
class: extra-details
## What's ngrok?
- Ngrok provides secure tunnels to access local services
- Example: run `ngrok http 1234`
- `ngrok` will display a publicly-available URL (e.g. https://xxxxyyyyzzzz.ngrok.io)
- Connections to https://xxxxyyyyzzzz.ngrok.io will terminate at `localhost:1234`
- Basic product is free; extra features (vanity domains, end-to-end TLS...) for $$$
- Perfect to develop our webhook!
- Probably not for production, though
(webhook requests and responses now pass through the ngrok platform)
---
## Update the webhook configuration
- We have a webhook configuration in `k8s/webhook-configuration.yaml`
- We need to update the configuration with the correct `url`
.exercise[
- Edit the webhook configuration manifest:
```bash
vim k8s/webhook-configuration.yaml
```
- **Uncomment** the `url:` line
- **Update** the `.ngrok.io` URL with the URL shown by Compose
- Save and quit
]
---
## Register the webhook configuration
- Just after we register the webhook, it will be called for each matching request
(CREATE and UPDATE on Pods in all namespaces)
- The `failurePolicy` is `Ignore`
(so if the webhook server is down, we can still create pods)
.exercise[
- Register the webhook:
```bash
kubectl apply -f k8s/webhook-configuration.yaml
```
]
It is strongly recommended to tail the logs of the API server while doing that.
---
## Create a pod
- Let's create a pod and try to set a `color` label
.exercise[
- Create a pod named `chroma`:
```bash
kubectl run --restart=Never chroma --image=nginx
```
- Add a label `color` set to `pink`:
```bash
kubectl label pod chroma color=pink
```
]
We should see the `AdmissionReview` objects in the Compose logs.
Note: the webhook doesn't do anything (other than printing the request payload).
---
## Use the "real" admission webhook
- We have a small Flask app implementing a particular policy on pod labels:
- if a pod sets a label `color`, it must be `blue`, `green`, `red`
- once that `color` label is set, it cannot be removed or changed
- That Flask app was started when we did `docker-compose up` earlier
- It is exposed through its own ngrok tunnel
- We are going to use that webhook instead of the other one
(by changing only the `url` field in the ValidatingWebhookConfiguration)
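The policy itself boils down to a pure function over the old and new labels; a minimal Python sketch (hypothetical names, not the actual code of the Flask app):

```python
ALLOWED_COLORS = {"blue", "green", "red"}

def check_color(old_labels, new_labels):
    """Return (allowed, message) for the `color` label policy:
    - if a pod sets a `color` label, it must be one of ALLOWED_COLORS;
    - once set, `color` cannot be changed or removed.
    """
    old_color = (old_labels or {}).get("color")
    new_color = (new_labels or {}).get("color")
    if old_color is not None and new_color != old_color:
        return False, "color label cannot be changed or removed"
    if new_color is not None and new_color not in ALLOWED_COLORS:
        return False, "color label must be one of blue, green, red"
    return True, None
```

The webhook would extract `old_labels` from `request.oldObject` and `new_labels` from `request.object`, then wrap the result in an `AdmissionReview` response.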
---
## Update the webhook configuration
.exercise[
- First, check the ngrok URL of the tunnel for the Flask app:
```bash
docker-compose logs ngrok-flask
```
- Then, edit the webhook configuration:
```bash
kubectl edit validatingwebhookconfiguration admission.container.training
```
- Find the `url:` field with the `.ngrok.io` URL and update it
- Save and quit; the new configuration is applied immediately
]
---
## Verify the behavior of the webhook
- Try to create a few pods and/or change labels on existing pods
- What happens if we try to make changes to the earlier pod?
(the one that has the label `color=pink`)
---
## Deploying the webhook on the cluster
- Let's see what's needed to self-host the webhook server!
- The webhook needs to be reachable through a Service on our cluster
- The Service needs to accept TLS connections on port 443
- We need a proper TLS certificate:
- with the right `CN` and `subjectAltName` (`<servicename>.<namespace>.svc`)
- signed by a trusted CA
- We can either use a "real" CA, or use the `caBundle` option to specify the CA cert
(the latter makes it easy to use self-signed certs)
---
## In practice
- We're going to generate a key pair and a self-signed certificate
- We will store them in a Secret
- We will run the webhook in a Deployment, exposed with a Service
- We will update the webhook configuration to use that Service
- The Service will be named `admission`, in Namespace `webhooks`
(keep in mind that the ValidatingWebhookConfiguration itself is at cluster scope)
---
## Let's get to work!
.exercise[
- Make sure we're in the right directory:
```bash
cd ~/container.training/webhooks/admission
```
- Create the namespace:
```bash
kubectl create namespace webhooks
```
- Switch to the namespace:
```bash
kubectl config set-context --current --namespace=webhooks
```
]
---
## Deploying the webhook
- *Normally,* we would author an image for this
- Since our webhook is just *one* Python source file ...
... we'll store it in a ConfigMap, and install dependencies on the fly
.exercise[
- Load the webhook source in a ConfigMap:
```bash
kubectl create configmap admission --from-file=flask/webhook.py
```
- Create the Deployment and Service:
```bash
kubectl apply -f k8s/webhook-server.yaml
```
]
---
## Generating the key pair and certificate
- Let's call OpenSSL to the rescue!
(of course, there are plenty of other options; e.g. `cfssl`)
.exercise[
- Generate a self-signed certificate:
```bash
NAMESPACE=webhooks
SERVICE=admission
CN=$SERVICE.$NAMESPACE.svc
openssl req -x509 -newkey rsa:4096 -nodes -keyout key.pem -out cert.pem \
-days 30 -subj /CN=$CN -addext subjectAltName=DNS:$CN
```
- Load up the key and cert in a Secret:
```bash
kubectl create secret tls admission --cert=cert.pem --key=key.pem
```
]
---
## Update the webhook configuration
- Let's reconfigure the webhook to use our Service instead of ngrok
.exercise[
- Edit the webhook configuration manifest:
```bash
vim k8s/webhook-configuration.yaml
```
- Comment out the `url:` line
- Uncomment the `service:` section
- Save, quit
- Update the webhook configuration:
```bash
kubectl apply -f k8s/webhook-configuration.yaml
```
]
---
## Add our self-signed cert to the `caBundle`
- The API server won't accept our self-signed certificate
- We need to add it to the `caBundle` field in the webhook configuration
- The `caBundle` will be our `cert.pem` file, encoded in base64
---
Shell to the rescue!
.exercise[
- Load up our cert and encode it in base64:
```bash
CA=$(base64 -w0 < cert.pem)
```
- Define a patch operation to update the `caBundle`:
```bash
PATCH='[{
"op": "replace",
"path": "/webhooks/0/clientConfig/caBundle",
"value":"'$CA'"
}]'
```
- Patch the webhook configuration:
```bash
kubectl patch validatingwebhookconfiguration \
admission.webhook.container.training \
--type='json' -p="$PATCH"
```
]
---
## Try it out!
- Keep an eye on the API server logs
- Tail the logs of the pod running the webhook server
- Create a few pods; we should see requests in the webhook server logs
- Check that the label `color` is enforced correctly
(it should only allow values of `red`, `green`, `blue`)
???
:EN:- Dynamic admission control with webhooks
:FR:- Contrôle d'admission dynamique (webhooks)

View File

@@ -0,0 +1,394 @@
# The Aggregation Layer
- The aggregation layer is a way to extend the Kubernetes API
- It is similar to CRDs
- it lets us define new resource types
- these resources can then be used with `kubectl` and other clients
- The implementation is very different
- CRDs are handled within the API server
- the aggregation layer offloads requests to another process
- They are designed for very different use-cases
---
## CRDs vs aggregation layer
- The Kubernetes API is a REST-ish API with a hierarchical structure
- It can be extended with Custom Resource Definitions (CRDs)
- Custom resources are managed by the Kubernetes API server
- we don't need to write code
- the API server does all the heavy lifting
- these resources are persisted in Kubernetes' "standard" database
<br/>
(for most installations, that's `etcd`)
- We can also define resources that are *not* managed by the API server
(the API server merely proxies the requests to another server)
---
## Which one is best?
- For things that "map" well to objects stored in a traditional database:
*probably CRDs*
- For things that "exist" only in Kubernetes and don't represent external resources:
*probably CRDs*
- For things that are read-only, at least from Kubernetes' perspective:
*probably aggregation layer*
- For things that can't be stored in etcd because of size or access patterns:
*probably aggregation layer*
---
## How are resources organized?
- Let's have a look at the Kubernetes API hierarchical structure
- We'll ask `kubectl` to show us the exact requests that it's making
.exercise[
- Check the URI for a cluster-scope, "core" resource, e.g. a Node:
```bash
kubectl -v6 get node node1
```
- Check the URI for a cluster-scope, "non-core" resource, e.g. a ClusterRole:
```bash
kubectl -v6 get clusterrole view
```
]
---
## Core vs non-core
- This is the structure of the URIs that we just checked:
```
/api/v1/nodes/node1
↑ ↑ ↑
`version` `kind` `name`
/apis/rbac.authorization.k8s.io/v1/clusterroles/view
↑ ↑ ↑ ↑
`group` `version` `kind` `name`
```
- There is no group for "core" resources
- Or, we could say that the group, `core`, is implied
---
## Group-Version-Kind
- In the API server, the Group-Version-Kind triple maps to a Go type
(look for all the "GVK" occurrences in the source code!)
- In the API server URI router, the GVK is parsed "relatively early"
(so that the server can know which resource we're talking about)
- "Well, actually ..." Things are a bit more complicated, see next slides!
---
class: extra-details
## Namespaced resources
- What about namespaced resources?
.exercise[
- Check the URI for a namespaced, "core" resource, e.g. a Service:
```bash
kubectl -v6 get service kubernetes --namespace default
```
]
- Here are what namespaced resources URIs look like:
```
/api/v1/namespaces/default/services/kubernetes
↑ ↑ ↑ ↑
`version` `namespace` `kind` `name`
/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy
↑ ↑ ↑ ↑ ↑
`group` `version` `namespace` `kind` `name`
```
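The four path shapes above (core vs non-core group, cluster-scoped vs namespaced) can be split apart with a small helper; this Python sketch is purely illustrative (it is not something `kubectl` provides, and it ignores subresources):

```python
def parse_api_path(path):
    """Split a Kubernetes API path into group, version, namespace, kind, name.

    Core resources live under /api/<version>/..., other groups under
    /apis/<group>/<version>/...; namespaced resources insert
    namespaces/<namespace> before the kind.
    """
    parts = path.strip("/").split("/")
    if parts[0] == "api":           # "core" group is implied
        group, rest = "core", parts[1:]
    else:                           # /apis/<group>/...
        group, rest = parts[1], parts[2:]
    version, rest = rest[0], rest[1:]
    namespace = None
    # len(rest) > 2 distinguishes "services in a namespace" from
    # "the Namespace object itself" (/api/v1/namespaces/default)
    if rest[0] == "namespaces" and len(rest) > 2:
        namespace, rest = rest[1], rest[2:]
    kind = rest[0]
    name = rest[1] if len(rest) > 1 else None
    return {"group": group, "version": version,
            "namespace": namespace, "kind": kind, "name": name}
```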
---
class: extra-details
## Subresources
- Many resources have *subresources*, for instance:
- `/status` (decouples status updates from other updates)
- `/scale` (exposes a consistent interface for autoscalers)
- `/proxy` (allows access to HTTP resources)
- `/portforward` (used by `kubectl port-forward`)
- `/logs` (access pod logs)
- These are added at the end of the URI
---
class: extra-details
## Accessing a subresource
.exercise[
- List `kube-proxy` pods:
```bash
kubectl get pods --namespace=kube-system --selector=k8s-app=kube-proxy
PODNAME=$(
kubectl get pods --namespace=kube-system --selector=k8s-app=kube-proxy \
-o json | jq -r .items[0].metadata.name)
```
- Execute a command in a pod, showing the API requests:
```bash
kubectl -v6 exec --namespace=kube-system $PODNAME -- echo hello world
```
]
--
The full request looks like:
```
POST https://.../api/v1/namespaces/kube-system/pods/kube-proxy-c7rlw/exec?
command=echo&command=hello&command=world&container=kube-proxy&stderr=true&stdout=true
```
---
## Listing what's supported on the server
- There are at least three useful commands to introspect the API server
.exercise[
- List resources types, their group, kind, short names, and scope:
```bash
kubectl api-resources
```
- List API groups + versions:
```bash
kubectl api-versions
```
- List APIServices:
```bash
kubectl get apiservices
```
]
--
🤔 What's the difference between the last two?
---
## API registration
- `kubectl api-versions` shows all API groups, including `apiregistration.k8s.io`
- `kubectl get apiservices` shows the "routing table" for API requests
- The latter doesn't show `apiregistration.k8s.io`
(APIServices belong to `apiregistration.k8s.io`)
- Most API groups are `Local` (handled internally by the API server)
- If we're running the `metrics-server`, it should handle `metrics.k8s.io`
- This is an API group handled *outside* of the API server
- This is the *aggregation layer!*
---
## Finding resources
The following assumes that `metrics-server` is deployed on your cluster.
.exercise[
- Check that the metrics.k8s.io group is registered with `metrics-server`:
```bash
kubectl get apiservices | grep metrics.k8s.io
```
- Check the resource kinds registered in the metrics.k8s.io group:
```bash
kubectl api-resources --api-group=metrics.k8s.io
```
]
(If the output of either command is empty, install `metrics-server` first.)
---
## `nodes` vs `nodes`
- We can have multiple resources with the same name
.exercise[
- Look for resources named `node`:
```bash
kubectl api-resources | grep -w nodes
```
- Compare the output of both commands:
```bash
kubectl get nodes
kubectl get nodes.metrics.k8s.io
```
]
--
🤔 What is the second kind of `nodes`? How can we see what's really in them?
---
## Node vs NodeMetrics
- `nodes.metrics.k8s.io` (aka NodeMetrics) don't have fancy *printer columns*
- But we can look at the raw data (with `-o json` or `-o yaml`)
.exercise[
- Look at NodeMetrics objects with one of these commands:
```bash
kubectl get -o yaml nodes.metrics.k8s.io
kubectl get -o yaml NodeMetrics
```
]
--
💡 Alright, these are the live metrics (CPU, RAM) for our nodes.
---
## An easier way to consume metrics
- We might have seen these metrics before ... With an easier command!
--
.exercise[
- Display node metrics:
```bash
kubectl top nodes
```
- Check which API requests happen behind the scenes:
```bash
kubectl top nodes -v6
```
]
---
## Aggregation layer in practice
- We can write an API server to handle a subset of the Kubernetes API
- Then we can register that server by creating an APIService resource
.exercise[
- Check the definition used for the `metrics-server`:
```bash
kubectl describe apiservices v1beta1.metrics.k8s.io
```
]
- Group priority is used when multiple API groups provide similar kinds
(e.g. `nodes` and `nodes.metrics.k8s.io` as seen earlier)
---
## Authentication flow
- We have two Kubernetes API servers:
- "aggregator" (the main one; clients connect to it)
- "aggregated" (the one providing the extra API; aggregator connects to it)
- Aggregator deals with client authentication
- Aggregator authenticates with aggregated using mutual TLS
- Aggregator passes (/forwards/proxies/...) requests to aggregated
- Aggregated performs authorization by calling back aggregator
("can subject X perform action Y on resource Z?")
[This doc page](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/#authentication-flow) has very nice swim lanes showing that flow.
---
## Discussion
- Aggregation layer is great for metrics
(fast-changing, ephemeral data, that would be outrageously bad for etcd)
- It *could* be a good fit to expose other REST APIs as a pass-thru
(but it's more common to see CRDs instead)
???
:EN:- The aggregation layer
:FR:- Étendre l'API avec le *aggregation layer*

View File

@@ -0,0 +1,179 @@
# API server internals
- Understanding the internals of the API server is useful.red[¹]:
- when extending the Kubernetes API server (CRDs, webhooks...)
- when running Kubernetes at scale
- Let's dive into a bit of code!
.footnote[.red[¹]And by *useful*, we mean *strongly recommended or else...*]
---
## The main handler
- The API server parses its configuration, and builds a `GenericAPIServer`
- ... which contains an `APIServerHandler` ([src](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/server/handler.go#L37))
- ... which contains a couple of `http.Handler` fields
- Requests go through:
- `FullHandlerChain` (a series of HTTP filters, see next slide)
- `Director` (switches the request to `GoRestfulContainer` or `NonGoRestfulMux`)
- `GoRestfulContainer` is for "normal" APIs; integrates nicely with OpenAPI
- `NonGoRestfulMux` is for everything else (e.g. proxy, delegation)
---
## The chain of handlers
- API requests go through a complex chain of filters ([src](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/server/config.go#L671))
(note when reading that code: requests start at the bottom and go up)
- This is where authentication, authorization, and admission happen
(as well as a few other things!)
- Let's review an arbitrary selection of some of these handlers!
*In the following slides, the handlers are in chronological order.*
*Note: handlers are nested; so they can act at the beginning and end of a request.*
---
## `WithPanicRecovery`
- Reminder about Go: there is no exception handling in Go; instead:
- functions typically return a composite `(SomeType, error)` type
- when things go really bad, the code can call `panic()`
- `panic()` can be caught with `recover()`
<br/>
(but this is almost never used like an exception handler!)
- The API server code is not supposed to `panic()`
- But just in case, we have that handler to prevent (some) crashes
---
## `WithRequestInfo` ([src](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/request/requestinfo.go#L163))
- Parses out essential information:
API group, version, namespace, resource, subresource, verb ...
- Maps HTTP verbs (GET, PUT, ...) to Kubernetes verbs (list, get, watch, ...)
---
class: extra-details
## HTTP verb mapping
- POST → create
- PUT → update
- PATCH → patch
- DELETE
<br/> → delete (if a resource name is specified)
<br/> → deletecollection (otherwise)
- GET, HEAD
<br/> → get (if a resource name is specified)
<br/> → list (otherwise)
<br/> → watch (if the `?watch=true` option is specified)
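This mapping is easy to express as a small lookup function; the sketch below is a simplified illustration of the rules above, not the actual API server code:

```python
def kubernetes_verb(method, has_name, watch=False):
    """Map an HTTP method to a Kubernetes API verb (simplified).

    `has_name` is True when the URI names a specific resource;
    `watch` is True when the `?watch=true` option is present.
    """
    if method == "POST":
        return "create"
    if method == "PUT":
        return "update"
    if method == "PATCH":
        return "patch"
    if method == "DELETE":
        return "delete" if has_name else "deletecollection"
    if method in ("GET", "HEAD"):
        if watch:
            return "watch"
        return "get" if has_name else "list"
    return None
```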
---
## `WithWaitGroup`
- When the server shuts down, this handler tells clients (with in-flight requests) to retry
- only for "short" requests
- for long-running requests, the client needs to do more
- Long-running requests include the `watch` verb and the `proxy` subresource
(See also `WithTimeoutForNonLongRunningRequests`)
---
## AuthN and AuthZ
- `WithAuthentication`:
the request goes through a *chain* of authenticators
([src](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/filters/authentication.go#L38))
- WithAudit
- WithImpersonation: used for e.g. `kubectl ... --as another.user`
- WithPriorityAndFairness or WithMaxInFlightLimit
(`system:masters` can bypass these)
- WithAuthorization
---
## After all these handlers ...
- We get to the "director" mentioned above
- API groups get installed into the `GoRestfulContainer`
([src](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/server/genericapiserver.go#L423))
- REST-ish resources are managed by various handlers
(in [this directory](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/))
- These files show us the code path for each type of request
---
class: extra-details
## Request code path
- [create.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/create.go):
decode to HubGroupVersion; admission; mutating admission; store
- [delete.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/delete.go):
validating admission only; deletion
- [get.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/get.go) (get, list):
directly fetch from rest storage abstraction
- [patch.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/patch.go):
admission; mutating admission; patch
- [update.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/update.go):
decode to HubGroupVersion; admission; mutating admission; store
- [watch.go](https://github.com/kubernetes/apiserver/blob/release-1.19/pkg/endpoints/handlers/watch.go):
similar to get.go, but with watch logic
(HubGroupVersion = in-memory, "canonical" version.)
???
:EN:- Kubernetes API server internals
:FR:- Fonctionnement interne du serveur API

View File

@@ -273,6 +273,26 @@ class: extra-details
---
class: extra-details
## Group-Version-Kind, or GVK
- A particular type will be identified by the combination of:
- the API group it belongs to (core, `apps`, `metrics.k8s.io`, ...)
- the version of this API group (`v1`, `v1beta1`, ...)
- the "Kind" itself (Pod, Role, Job, ...)
- "GVK" appears a lot in the API machinery code
- Conversions are possible between different versions and even between API groups
(e.g. when Deployments moved from `extensions` to `apps`)
---
## Update
- Let's update our namespace object
@@ -334,6 +354,34 @@ We demonstrated *update* and *watch* semantics.
---
class: extra-details
## Watch events
- `kubectl get --watch` shows changes
- If we add `--output-watch-events`, we can also see:
- the difference between ADDED and MODIFIED resources
- DELETED resources
.exercise[
- In one terminal, watch pods, displaying full events:
```bash
kubectl get pods --watch --output-watch-events
```
- In another, run a short-lived pod:
```bash
kubectl run pause --image=alpine --rm -ti --restart=Never -- sleep 5
```
]
---
# Other control plane components
- API server ✔️


@@ -10,7 +10,7 @@
- Jobs are great for "long" background work
("long" being at least minutes or hours)
- CronJobs are great to schedule Jobs at regular intervals
@@ -148,6 +148,28 @@ class: extra-details
class: extra-details
## Setting a time limit
- It is possible to set a time limit (or deadline) for a job
- This is done with the field `spec.activeDeadlineSeconds`
(by default, it is unlimited)
- When the job is older than this time limit, all its pods are terminated
- Note that there can also be a `spec.activeDeadlineSeconds` field in pods!
- They can be set independently, and have different effects:
- the deadline of the job will stop the entire job
- the deadline of the pod will only stop an individual pod
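As a sketch (using the `batch/v1` API; the image and timings are just examples), a Job with a one-minute deadline could look like:

```yaml
# 60 seconds after this Job starts, all its pods are terminated
# and the Job is marked as failed (reason: DeadlineExceeded)
apiVersion: batch/v1
kind: Job
metadata:
  name: time-limited
spec:
  activeDeadlineSeconds: 60
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: work
        image: alpine
        command: [ "sleep", "600" ]
```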
---
class: extra-details
## What about `kubectl run` before v1.18?
- Creating a Deployment:

slides/k8s/cert-manager.md

@@ -0,0 +1,244 @@
# cert-manager
- cert-manager¹ facilitates certificate signing through the Kubernetes API:
- we create a Certificate object (that's a CRD)
- cert-manager creates a private key
- it signs that key ...
- ... or interacts with a certificate authority to obtain the signature
- it stores the resulting key+cert in a Secret resource
- These Secret resources can be used in many places (Ingress, mTLS, ...)
.footnote[.red[¹]Always lower case, words separated with a dash; see the [style guide](https://cert-manager.io/docs/faq/style/).]
---
## Getting signatures
- cert-manager can use multiple *Issuers* (another CRD), including:
- self-signed
- cert-manager acting as a CA
- the [ACME protocol](https://en.wikipedia.org/wiki/Automated_Certificate_Management_Environment) (notably used by Let's Encrypt)
- [HashiCorp Vault](https://www.vaultproject.io/)
- Multiple issuers can be configured simultaneously
- Issuers can be available in a single namespace, or in the whole cluster
(then we use the *ClusterIssuer* CRD)
---
## cert-manager in action
- We will install cert-manager
- We will create a ClusterIssuer to obtain certificates with Let's Encrypt
(this will involve setting up an Ingress Controller)
- We will create a Certificate request
- cert-manager will honor that request and create a TLS Secret
---
## Installing cert-manager
- It can be installed with a YAML manifest, or with Helm
.exercise[
- Create the namespace for cert-manager:
```bash
kubectl create ns cert-manager
```
- Add the Jetstack repository:
```bash
helm repo add jetstack https://charts.jetstack.io
```
- Install cert-manager:
```bash
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--set installCRDs=true
```
]
---
## ClusterIssuer manifest
```yaml
@@INCLUDE[k8s/cm-clusterissuer.yaml]
```
---
## Creating the ClusterIssuer
- The manifest shown on the previous slide is in @@LINK[k8s/cm-clusterissuer.yaml]
.exercise[
- Create the ClusterIssuer:
```bash
kubectl apply -f ~/container.training/k8s/cm-clusterissuer.yaml
```
]
---
## Certificate manifest
```yaml
@@INCLUDE[k8s/cm-certificate.yaml]
```
- The `name`, `secretName`, and `dnsNames` don't have to match
- There can be multiple `dnsNames`
- The `issuerRef` must match the ClusterIssuer that we created earlier
---
## Creating the Certificate
- The manifest shown on the previous slide is in @@LINK[k8s/cm-certificate.yaml]
.exercise[
- Edit the Certificate to update the domain name
(make sure to replace A.B.C.D with the IP address of one of your nodes!)
- Create the Certificate:
```bash
kubectl apply -f ~/container.training/k8s/cm-certificate.yaml
```
]
---
## What's happening?
- cert-manager will create:
- the secret key
- a Pod, a Service, and an Ingress to complete the HTTP challenge
- then it waits for the challenge to complete
.exercise[
- View the resources created by cert-manager:
```bash
kubectl get pods,services,ingresses \
--selector=acme.cert-manager.io/http01-solver=true
```
]
---
## HTTP challenge
- The CA (in this case, Let's Encrypt) will fetch a particular URL:
`http://<our-domain>/.well-known/acme-challenge/<token>`
.exercise[
- Check the *path* of the Ingress in particular:
```bash
kubectl describe ingress \
--selector=acme.cert-manager.io/http01-solver=true
```
]
---
## What's missing?
--
An Ingress Controller! 😅
.exercise[
- Install an Ingress Controller:
```bash
kubectl apply -f ~/container.training/k8s/traefik-v2.yaml
```
- Wait a little bit, and check that we now have a `kubernetes.io/tls` Secret:
```bash
kubectl get secrets
```
]
---
class: extra-details
## Using the secret
- For bonus points, try to use the secret in an Ingress!
- This is what the manifest would look like:
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: xyz
spec:
tls:
- secretName: xyz.A.B.C.D.nip.io
hosts:
- xyz.A.B.C.D.nip.io
rules:
...
```
---
class: extra-details
## Let's Encrypt and nip.io
- Let's Encrypt has [rate limits](https://letsencrypt.org/docs/rate-limits/) per domain
(the limits only apply to the production environment, not staging)
- There is a limit of 50 certificates per registered domain
- If we try to use the production environment, we will probably hit the limit
- It's fine to use the staging environment for these experiments
(our certs won't validate in a browser, but we can always check
the details of the cert to verify that it was issued by Let's Encrypt!)
???
:EN:- Obtaining certificates with cert-manager
:FR:- Obtenir des certificats avec cert-manager


@@ -338,9 +338,9 @@ docker run --rm --net host -v $PWD:/vol \
(e.g. [Portworx](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-snapshots/) can [create snapshots through annotations](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-snapshots/snaps-annotations/#taking-periodic-snapshots-on-a-running-pod))
- Option 3: [snapshots through Kubernetes API](https://kubernetes.io/docs/concepts/storage/volume-snapshots/)
(Generally available since Kubernetes 1.20 for a number of [CSI](https://kubernetes.io/blog/2019/01/15/container-storage-interface-ga/) volume plugins: GCE, OpenSDS, Ceph, Portworx, etc.)
---

slides/k8s/cni-internals.md

@@ -0,0 +1,193 @@
# CNI internals
- Kubelet looks for a CNI configuration file
(by default, in `/etc/cni/net.d`)
- Note: if we have multiple files, the first one will be used
(in lexicographic order)
- If no configuration can be found, kubelet holds off on creating containers
(except if they are using `hostNetwork`)
- Let's see how exactly plugins are invoked!
---
## General principle
- A plugin is an executable program
- It is invoked by kubelet to set up / tear down networking for a container
- It doesn't take any command-line argument
- However, it uses environment variables to know what to do, which container, etc.
- It reads JSON on stdin, and writes back JSON on stdout
- There will generally be multiple plugins invoked in a row
(at least IPAM + network setup; possibly more)
---
## Environment variables
- `CNI_COMMAND`: `ADD`, `DEL`, `CHECK`, or `VERSION`
- `CNI_CONTAINERID`: opaque identifier
(container ID of the "sandbox", i.e. the container running the `pause` image)
- `CNI_NETNS`: path to network namespace pseudo-file
(e.g. `/var/run/netns/cni-0376f625-29b5-7a21-6c45-6a973b3224e5`)
- `CNI_IFNAME`: interface name, usually `eth0`
- `CNI_PATH`: path(s) with plugin executables (e.g. `/opt/cni/bin`)
- `CNI_ARGS`: "extra arguments" (see next slide)
---
## `CNI_ARGS`
- Extra key/value pair arguments passed by "the user"
- "The user", here, is "kubelet" (or in an abstract way, "Kubernetes")
- This is used to pass the pod name and namespace to the CNI plugin
- Example:
```
IgnoreUnknown=1
K8S_POD_NAMESPACE=default
K8S_POD_NAME=web-96d5df5c8-jcn72
K8S_POD_INFRA_CONTAINER_ID=016493dbff152641d334d9828dab6136c1ff...
```
Note that technically, it's a `;`-separated list, so it really looks like this:
```
CNI_ARGS=IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=web-96d...
```
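Since it's just a `;`-separated list of `KEY=VALUE` pairs, a plugin (or a debugging script) can split it with plain shell. A quick sketch (the pod name below is a made-up example):

```shell
# A made-up CNI_ARGS value, shaped like what kubelet passes
CNI_ARGS='IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=web-96d5df5c8-jcn72'

# Split on ';' and print one KEY=VALUE pair per line
IFS=';'
for kv in $CNI_ARGS; do
  echo "$kv"
done
unset IFS
```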
---
## JSON in, JSON out
- The plugin reads its configuration on stdin
- It writes back results in JSON
(e.g. allocated address, routes, DNS...)
⚠️ "Plugin configuration" is not always the same as "CNI configuration"!
---
## Conf vs Conflist
- The CNI configuration can be a single plugin configuration
- it will then contain a `type` field in the top-most structure
- it will be passed "as-is"
- It can also be a "conflist", containing a chain of plugins
(it will then contain a `plugins` field in the top-most structure)
- Plugins are then invoked in order (reverse order for `DEL` action)
- In that case, the input of the plugin is not the whole configuration
(see details on next slide)
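As an illustration, a conflist could look like this (plugin names and values here are examples, not taken from any particular cluster):

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "ptp",
      "ipam": { "type": "host-local", "subnet": "10.1.0.0/24" }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

Note the `plugins` field (instead of a top-level `type` field), which is what marks this as a conflist.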
---
## List of plugins
- When invoking a plugin in a list, the JSON input will be:
- the configuration of the plugin
- augmented with `name` (matching the conf list `name`)
- augmented with `prevResult` (which will be the output of the previous plugin)
- Conceptually, a plugin (generally the first one) will do the "main setup"
- Other plugins can do tuning / refinement (firewalling, traffic shaping...)
---
## Analyzing plugins
- Let's see what goes in and out of our CNI plugins!
- We will create a fake plugin that:
- saves its environment and input
- executes the real plugin with the saved input
- saves the plugin output
- passes the saved output
---
## Our fake plugin
```bash
#!/bin/sh
PLUGIN=$(basename $0)
cat > /tmp/cni.$$.$PLUGIN.in
env | sort > /tmp/cni.$$.$PLUGIN.env
echo "PPID=$PPID, $(readlink /proc/$PPID/exe)" > /tmp/cni.$$.$PLUGIN.parent
$0.real < /tmp/cni.$$.$PLUGIN.in > /tmp/cni.$$.$PLUGIN.out
EXITSTATUS=$?
cat /tmp/cni.$$.$PLUGIN.out
exit $EXITSTATUS
```
Save this script as `/opt/cni/bin/debug` and make it executable.
---
## Substituting the fake plugin
- For each plugin that we want to instrument:
- rename the plugin from e.g. `ptp` to `ptp.real`
- symlink `ptp` to our `debug` plugin
- There is no need to change the CNI configuration or restart kubelet
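The rename-and-symlink trick can be sketched like this (shown here in a scratch directory with stand-in scripts; on a real node, this would happen in `/opt/cni/bin`, with `debug` being the wrapper from the previous slide):

```shell
BIN=$(mktemp -d)   # stand-in for /opt/cni/bin
printf '#!/bin/sh\necho real plugin\n' > "$BIN/ptp"     # pretend this is the real plugin
printf '#!/bin/sh\necho debug wrapper\n' > "$BIN/debug" # our wrapper script
chmod +x "$BIN/ptp" "$BIN/debug"

mv "$BIN/ptp" "$BIN/ptp.real"   # rename the real plugin ...
ln -s debug "$BIN/ptp"          # ... and put the wrapper in its place

"$BIN/ptp"                      # invoking "ptp" now runs the wrapper
```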
---
## Create some pods and look at the results
- Create a pod
- For each instrumented plugin, there will be files in `/tmp`:
`cni.PID.pluginname.in` (JSON input)
`cni.PID.pluginname.env` (environment variables)
`cni.PID.pluginname.parent` (parent process information)
`cni.PID.pluginname.out` (JSON output)
❓️ What is calling our plugins?
???
:EN:- Deep dive into CNI internals
:FR:- La Container Network Interface (CNI) en détails


@@ -60,21 +60,41 @@
## Command-line arguments
- Pass `command` and/or `args` in the container options in a Pod's template
- Both `command` and `args` are arrays
- Example ([source](https://github.com/jpetazzo/container.training/blob/main/k8s/consul-1.yaml#L70)):
```yaml
args:
- "agent"
- "-bootstrap-expect=3"
- "-retry-join=provider=k8s label_selector=\"app=consul\" namespace=\"$(NS)\""
- "-client=0.0.0.0"
- "-data-dir=/consul/data"
- "-server"
- "-ui"
```
- The options can be passed directly to the program that we run ...
... or to a wrapper script that will use them to e.g. generate a config file
---
## `args` or `command`?
- Use `command` to override the `ENTRYPOINT` defined in the image
- Use `args` to keep the `ENTRYPOINT` defined in the image
(the parameters specified in `args` are added to the `ENTRYPOINT`)
- When in doubt, use `command`
- It is also possible to use *both* `command` and `args`
(they will be strung together, just like `ENTRYPOINT` and `CMD`)
- See the [docs](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes) to see how they interact together
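For instance, with a hypothetical pod spec fragment (not one of the manifests above) using both fields together:

```yaml
# "command" replaces the image's ENTRYPOINT;
# "args" is appended to it (just like CMD after ENTRYPOINT)
containers:
- name: hello
  image: alpine
  command: [ "echo" ]
  args: [ "hello", "world" ]
# The container runs: echo hello world
```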
---
@@ -514,73 +534,12 @@ spec:
]
---
## Passwords, tokens, sensitive information
- For sensitive information, there is another special resource: *Secrets*
- Secrets and Configmaps work almost the same way
(we'll expose the differences on the next slide)
- The *intent* is different, though:
*"You should use secrets for things which are actually secret like API keys,
credentials, etc., and use config map for not-secret configuration data."*
*"In the future there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc."*
(Source: [the author of both features](https://stackoverflow.com/a/36925553/580281))
---
class: extra-details
## Differences between configmaps and secrets
- Secrets are base64-encoded when shown with `kubectl get secrets -o yaml`
- keep in mind that this is just *encoding*, not *encryption*
- it is very easy to [automatically extract and decode secrets](https://medium.com/@mveritym/decoding-kubernetes-secrets-60deed7a96a3)
- [Secrets can be encrypted at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/)
- With RBAC, we can authorize a user to access configmaps, but not secrets
(since they are two different kinds of resources)
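The "encoding, not encryption" point is easy to check without a cluster; this is exactly what secret values look like in `kubectl get secrets -o yaml` (the password below is obviously just an example):

```shell
# Encode a value the way it appears in a Secret manifest ...
ENCODED=$(printf 'hunter2' | base64)
echo "$ENCODED"

# ... and decode it back: no key or passphrase needed
DECODED=$(echo "$ENCODED" | base64 -d)
echo "$DECODED"
```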
---
class: extra-details
## Immutable ConfigMaps and Secrets
- Since Kubernetes 1.19, it is possible to mark a ConfigMap or Secret as *immutable*
```bash
kubectl patch configmap xyz --patch='{"immutable": true}'
```
- This brings performance improvements when using lots of ConfigMaps and Secrets
(lots = tens of thousands)
- Once a ConfigMap or Secret has been marked as immutable:
- its content cannot be changed anymore
- the `immutable` field can't be changed back either
- the only way to change it is to delete and re-create it
- Pods using it will have to be re-created as well
???
:EN:- Managing application configuration
:EN:- Exposing configuration with the downward API
:EN:- Exposing configuration with Config Maps and Secrets
:EN:- Exposing configuration with Config Maps
:FR:- Gérer la configuration des applications
:FR:- Configuration au travers de la *downward API*
:FR:- Configuration via les *Config Maps* et *Secrets*
:FR:- Configurer les applications avec des *Config Maps*


@@ -92,6 +92,29 @@
---
## etcd authorization
- etcd supports RBAC, but Kubernetes doesn't use it by default
(note: etcd RBAC is completely different from Kubernetes RBAC!)
- By default, etcd access is "all or nothing"
(if you have a valid certificate, you get in)
- Be very careful if you use the same root CA for etcd and other things
(if etcd trusts the root CA, then anyone with a valid cert gets full etcd access)
- For more details, check the following resources:
- [etcd documentation on authentication](https://etcd.io/docs/current/op-guide/authentication/)
- [PKI The Wrong Way](https://www.youtube.com/watch?v=gcOLDEzsVHI) at KubeCon NA 2020
---
## API server clients
- The API server has a sophisticated authentication and authorization system
@@ -190,6 +213,24 @@
---
class: extra-details
## How are these permissions set up?
- A bunch of roles and bindings are defined as constants in the API server code:
[auth/authorizer/rbac/bootstrappolicy/policy.go](https://github.com/kubernetes/kubernetes/blob/release-1.19/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/policy.go#L188)
- They are created automatically when the API server starts:
[registry/rbac/rest/storage_rbac.go](https://github.com/kubernetes/kubernetes/blob/release-1.19/pkg/registry/rbac/rest/storage_rbac.go#L140)
- We must use the correct Common Names (`CN`) for the control plane certificates
(since the bindings defined above refer to these common names)
---
## Service account tokens
- Each time we create a service account, the controller manager generates a token

slides/k8s/crd.md

@@ -0,0 +1,334 @@
# Custom Resource Definitions
- CRDs are one of the (many) ways to extend the API
- CRDs can be defined dynamically
(no need to recompile or reload the API server)
- A CRD is defined with a CustomResourceDefinition resource
(CustomResourceDefinition is conceptually similar to a *metaclass*)
---
## A very simple CRD
The file @@LINK[k8s/coffee-1.yaml] describes a very simple CRD representing different kinds of coffee:
```yaml
@@INCLUDE[k8s/coffee-1.yaml]
```
---
## Creating a CRD
- Let's create the Custom Resource Definition for our Coffee resource
.exercise[
- Load the CRD:
```bash
kubectl apply -f ~/container.training/k8s/coffee-1.yaml
```
- Confirm that it shows up:
```bash
kubectl get crds
```
]
---
## Creating custom resources
The YAML below defines a resource using the CRD that we just created:
```yaml
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: arabica
spec:
taste: strong
```
.exercise[
- Create a few types of coffee beans:
```bash
kubectl apply -f ~/container.training/k8s/coffees.yaml
```
]
---
## Viewing custom resources
- By default, `kubectl get` only shows name and age of custom resources
.exercise[
- View the coffee beans that we just created:
```bash
kubectl get coffees
```
]
- We'll see in a bit how to improve that
---
## What can we do with CRDs?
There are many possibilities!
- *Operators* encapsulate complex sets of resources
(e.g.: a PostgreSQL replicated cluster; an etcd cluster...
<br/>
see [awesome operators](https://github.com/operator-framework/awesome-operators) and
[OperatorHub](https://operatorhub.io/) to find more)
- Custom use-cases like [gitkube](https://gitkube.sh/)
- creates a new custom type, `Remote`, exposing a git+ssh server
- deploy by pushing YAML or Helm charts to that remote
- Replacing built-in types with CRDs
(see [this lightning talk by Tim Hockin](https://www.youtube.com/watch?v=ji0FWzFwNhA))
---
## What's next?
- Creating a basic CRD is quick and easy
- But there is a lot more that we can (and probably should) do:
- improve input with *data validation*
- improve output with *custom columns*
- And of course, we probably need a *controller* to go with our CRD!
(otherwise, we're just using the Kubernetes API as a fancy data store)
---
## Additional printer columns
- We can specify `additionalPrinterColumns` in the CRD
- This is similar to `-o custom-columns`
(map a column name to a path in the object, e.g. `.spec.taste`)
```yaml
additionalPrinterColumns:
- jsonPath: .spec.taste
description: Subjective taste of that kind of coffee bean
name: Taste
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date
```
---
## Using additional printer columns
- Let's update our CRD using @@LINK[k8s/coffee-3.yaml]
.exercise[
- Update the CRD:
```bash
kubectl apply -f ~/container.training/k8s/coffee-3.yaml
```
- Look at our Coffee resources:
```bash
kubectl get coffees
```
]
Note: we can update a CRD without having to re-create the corresponding resources.
(Good news, right?)
---
## Data validation
- By default, CRDs are not *validated*
(we can put anything we want in the `spec`)
- When creating a CRD, we can pass an OpenAPI v3 schema
(which will then be used to validate resources)
- More advanced validation can also be done with admission webhooks, e.g.:
- consistency between parameters
- advanced integer filters (e.g. odd number of replicas)
- things that can change in one direction but not the other
---
## OpenAPI v3 schema example
This is what we have in @@LINK[k8s/coffee-3.yaml]:
```yaml
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
properties:
taste:
description: Subjective taste of that kind of coffee bean
type: string
required: [ taste ]
```
---
## Validation *a posteriori*
- Some of the "coffees" that we defined earlier *do not* pass validation
- How is that possible?
--
- Validation happens at *admission*
(when resources get written into the database)
- Therefore, we can have "invalid" resources in etcd
(they are invalid from the CRD perspective, but the CRD can be changed)
🤔 How should we handle that?
---
## Versions
- If the data format changes, we can roll out a new version of the CRD
(e.g. go from `v1alpha1` to `v1alpha2`)
- In a CRD we can specify the versions that exist, that are *served*, and *stored*
- multiple versions can be *served*
- only one can be *stored*
- Kubernetes doesn't automatically migrate the content of the database
- However, it can convert between versions when resources are read/written
---
## Conversion
- When *creating* a new resource, the *stored* version is used
(if we create it with another version, it gets converted)
- When *getting* or *watching* resources, the *requested* version is used
(if it is stored with another version, it gets converted)
- By default, "conversion" only changes the `apiVersion` field
- ... But we can register *conversion webhooks*
(see [that doc page](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#webhook-conversion) for details)
---
## Migrating database content
- We need to *serve* a version as long as we *store* objects in that version
(=as long as the database has at least one object with that version)
- If we want to "retire" a version, we need to migrate these objects first
- All we have to do is to read and re-write them
(the [kube-storage-version-migrator](https://github.com/kubernetes-sigs/kube-storage-version-migrator) tool can help)
---
## What's next?
- Generally, when creating a CRD, we also want to run a *controller*
(otherwise nothing will happen when we create resources of that type)
- The controller will typically *watch* our custom resources
(and take action when they are created/updated)
---
## CRDs in the wild
- [gitkube](https://storage.googleapis.com/gitkube/gitkube-setup-stable.yaml)
- [A redis operator](https://github.com/amaizfinance/redis-operator/blob/master/deploy/crds/k8s_v1alpha1_redis_crd.yaml)
- [cert-manager](https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.yaml)
*How big are these YAML files?*
*What's the size (e.g. in lines) of each resource?*
---
## CRDs in practice
- Production-grade CRDs can be extremely verbose
(because of the openAPI schema validation)
- This can (and usually will) be managed by a framework
---
## (Ab)using the API server
- If we need to store something "safely" (as in: in etcd), we can use CRDs
- This gives us primitives to read/write/list objects (and optionally validate them)
- The Kubernetes API server can run on its own
(without the scheduler, controller manager, and kubelets)
- By loading CRDs, we can have it manage totally different objects
(unrelated to containers, clusters, etc.)
???
:EN:- Custom Resource Definitions (CRDs)
:FR:- Les CRDs *(Custom Resource Definitions)*


@@ -108,6 +108,26 @@ The CA (or anyone else) never needs to know my private key.
---
## Warning
- The CSR API isn't really suited to issue user certificates
- It is primarily intended to issue control plane certificates
(for instance, deal with kubelet certificates renewal)
- The API was expanded a bit in Kubernetes 1.19 to encompass broader usage
- There are still lots of gaps in the spec
(e.g. how to specify expiration in a standard way)
- ... And no other implementation to this date
(but [cert-manager](https://cert-manager.io/docs/faq/#kubernetes-has-a-builtin-certificatesigningrequest-api-why-not-use-that) might eventually get there!)
---
## General idea
- We will create a Namespace named "users"


@@ -431,15 +431,23 @@ class: extra-details
---
## Selectors with multiple labels
- If a selector specifies multiple labels, they are understood as a logical *AND*
(in other words: the pods must match all the labels)
- We cannot have a logical *OR*
(e.g. `app=api AND (release=prod OR release=preprod)`)
- We can, however, apply as many extra labels as we want to our pods:
- use selector `app=api AND prod-or-preprod=yes`
- add `prod-or-preprod=yes` to both sets of pods
- We will see later that in other places, we can use more advanced selectors
---
@@ -689,6 +697,95 @@ class: extra-details
- This gives us building blocks for canary and blue/green deployments
---
class: extra-details
## Advanced label selectors
- As indicated earlier, service selectors are limited to a logical *AND*
- But in many other places in the Kubernetes API, we can use complex selectors
(e.g. Deployment, ReplicaSet, DaemonSet, NetworkPolicy ...)
- These allow extra operations; specifically:
- checking for presence (or absence) of a label
- checking if a label is (or is not) in a given set
- Relevant documentation:
[Service spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#servicespec-v1-core),
[LabelSelector spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#labelselector-v1-meta),
[label selector doc](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors)
---
class: extra-details
## Example of advanced selector
```yaml
theSelector:
matchLabels:
app: portal
component: api
matchExpressions:
- key: release
operator: In
values: [ production, preproduction ]
- key: signed-off-by
operator: Exists
```
This selector matches pods that meet *all* the indicated conditions.
`operator` can be `In`, `NotIn`, `Exists`, `DoesNotExist`.
A `nil` selector matches *nothing*, a `{}` selector matches *everything*.
<br/>
(Because that means "match all pods that meet at least zero condition".)
---
class: extra-details
## Services and Endpoints
- Each Service has a corresponding Endpoints resource
(see `kubectl get endpoints` or `kubectl get ep`)
- That Endpoints resource is used by various controllers
(e.g. `kube-proxy` when setting up `iptables` rules for ClusterIP services)
- These Endpoints are populated (and updated) with the Service selector
- We can update the Endpoints manually, but our changes will get overwritten
- ... Except if the Service selector is empty!
---
class: extra-details
## Empty Service selector
- If a service selector is empty, Endpoints don't get updated automatically
(but we can still set them manually)
- This lets us create Services pointing to arbitrary destinations
(potentially outside the cluster; or things that are not in pods)
- Another use-case: the `kubernetes` service in the `default` namespace
(its Endpoints are maintained automatically by the API server)
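A sketch of that pattern (the name is arbitrary, and `192.0.2.10` is a placeholder from the documentation IP range):

```yaml
# Service without a selector: its Endpoints are not managed automatically
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 5432
---
# Endpoints maintained by hand; must have the same name as the Service
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
- addresses:
  - ip: 192.0.2.10   # e.g. a database living outside the cluster
  ports:
  - port: 5432
```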
???
:EN:- Scaling with Daemon Sets


@@ -2,47 +2,65 @@
- Kubernetes resources can also be viewed with a web dashboard
- Dashboard users need to authenticate
(typically with a token)
- The dashboard should be exposed over HTTPS
(to prevent interception of the aforementioned token)
- Ideally, this requires obtaining a proper TLS certificate
(for instance, with Let's Encrypt)
---
## Three ways to install the dashboard
- Our `k8s` directory has no less than three manifests!
- `dashboard-recommended.yaml`
(purely internal dashboard; user must be created manually)
- `dashboard-with-token.yaml`
(dashboard exposed with NodePort; creates an admin user for us)
- `dashboard-insecure.yaml` aka *YOLO*
(dashboard exposed over HTTP; gives root access to anonymous users)
---
## `dashboard-insecure.yaml`
- This will allow anyone to deploy anything on your cluster
(without any authentication whatsoever)
- **Do not** use this, except maybe on a local cluster
(or a cluster that you will destroy a few minutes later)
- On "normal" clusters, use `dashboard-with-token.yaml` instead!
---
## What's in the manifest?
- The dashboard itself
- An HTTP/HTTPS unwrapper (using `socat`)
- The guest/admin account
.exercise[
- Create all the dashboard resources, with the following command:
```bash
kubectl apply -f ~/container.training/k8s/dashboard-insecure.yaml
```
]
@@ -89,11 +107,26 @@ The dashboard will then ask you which authentication you want to use.
--
.warning[Remember, we just added a backdoor to our Kubernetes cluster!]
---
## Closing the backdoor
- Seriously, don't leave that thing running!
.exercise[
- Remove what we just created:
```bash
kubectl delete -f ~/container.training/k8s/dashboard-insecure.yaml
```
]
---
## The risks
- The steps that we just showed you are *for educational purposes only!*
@@ -105,6 +138,99 @@ The dashboard will then ask you which authentication you want to use.
---
## `dashboard-with-token.yaml`
- This is a less risky way to deploy the dashboard
- It's not completely secure, either:
- we're using a self-signed certificate
- this is subject to eavesdropping attacks
- Using `kubectl port-forward` or `kubectl proxy` is even better
---
## What's in the manifest?
- The dashboard itself (but exposed with a `NodePort`)
- A ServiceAccount with `cluster-admin` privileges
(named `kubernetes-dashboard:cluster-admin`)
.exercise[
- Create all the dashboard resources, with the following command:
```bash
kubectl apply -f ~/container.training/k8s/dashboard-with-token.yaml
```
]
---
## Obtaining the token
- The manifest creates a ServiceAccount
- Kubernetes will automatically generate a token for that ServiceAccount
.exercise[
- Display the token:
```bash
kubectl --namespace=kubernetes-dashboard \
describe secret cluster-admin-token
```
]
The token should start with `eyJ...` (it's a JSON Web Token).
Note that the secret name will actually be `cluster-admin-token-xxxxx`.
<br/>
(But `kubectl` prefix matches are great!)
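A JWT is three dot-separated segments; the first two decode to JSON. This sketch builds a fake, JWT-shaped token rather than reading a real one (real ServiceAccount tokens use unpadded base64url rather than plain base64):

```shell
# Build a fake token: header.payload.signature
HEADER=$(printf '{"alg":"RS256"}' | base64)
PAYLOAD=$(printf '{"sub":"dashboard"}' | base64)
TOKEN="$HEADER.$PAYLOAD.fake-signature"

# base64 of any string starting with '{"' begins with "eyJ" —
# that's why these tokens all start with eyJ...
echo "$TOKEN" | cut -d. -f1 | base64 -d; echo
echo "$TOKEN" | cut -d. -f2 | base64 -d; echo
```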
---
## Connecting to the dashboard
.exercise[
- Check which port the dashboard is on:
```bash
kubectl get svc --namespace=kubernetes-dashboard
```
]
You'll want the `3xxxx` port.
.exercise[
- Connect to http://oneofournodes:3xxxx/
<!-- ```open http://node1:3xxxx/``` -->
]
The dashboard will then ask you which authentication you want to use.
---
## Dashboard authentication
- Select "token" authentication
- Copy paste the token (starting with `eyJ...`) obtained earlier
- We're logged in!
---
## Other dashboards
- [Kube Web View](https://codeberg.org/hjacobs/kube-web-view)
@@ -115,7 +241,7 @@ The dashboard will then ask you which authentication you want to use.
- see [vision and goals](https://kube-web-view.readthedocs.io/en/latest/vision.html#vision) for details
- [Kube Ops View](https://codeberg.org/hjacobs/kube-ops-view)
- "provides a common operational picture for multiple Kubernetes clusters"


@@ -124,7 +124,7 @@ The resulting YAML doesn't represent a valid DaemonSet.
- Try the same YAML file as earlier, with server-side dry run:
```bash
kubectl apply -f web.yaml --dry-run=server --validate=false -o yaml
```
]

slides/k8s/eck.md
# An ElasticSearch Operator
- We will install [Elastic Cloud on Kubernetes](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html), an ElasticSearch operator
- This operator requires PersistentVolumes
- We will install Rancher's [local path storage provisioner](https://github.com/rancher/local-path-provisioner) to automatically create these
- Then, we will create an ElasticSearch resource
- The operator will detect that resource and provision the cluster
- We will integrate that ElasticSearch cluster with other resources
(Kibana, Filebeat, Cerebro ...)
---
## Installing a Persistent Volume provisioner
(This step can be skipped if you already have a dynamic volume provisioner.)
- This provisioner creates Persistent Volumes backed by `hostPath`
(local directories on our nodes)
- It doesn't require anything special ...
- ... But losing a node = losing the volumes on that node!
.exercise[
- Install the local path storage provisioner:
```bash
kubectl apply -f ~/container.training/k8s/local-path-storage.yaml
```
]
---
## Making sure we have a default StorageClass
- The ElasticSearch operator will create StatefulSets
- These StatefulSets will instantiate PersistentVolumeClaims
- These PVCs need to be explicitly associated with a StorageClass
- Or we need to tag a StorageClass to be used as the default one
.exercise[
- List StorageClasses:
```bash
kubectl get storageclasses
```
]
We should see the `local-path` StorageClass.
---
## Setting a default StorageClass
- This is done by adding an annotation to the StorageClass:
`storageclass.kubernetes.io/is-default-class: true`
.exercise[
- Tag the StorageClass so that it's the default one:
```bash
kubectl annotate storageclass local-path \
storageclass.kubernetes.io/is-default-class=true
```
- Check the result:
```bash
kubectl get storageclasses
```
]
Now, the StorageClass should have `(default)` next to its name.
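For reference, the annotated StorageClass then looks roughly like this (a sketch based on the local path provisioner; note that the annotation value is the *string* `"true"`):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```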
---
## Install the ElasticSearch operator
- The operator provides:
- a few CustomResourceDefinitions
- a Namespace for its other resources
- a ValidatingWebhookConfiguration for type checking
- a StatefulSet for its controller and webhook code
- a ServiceAccount, ClusterRole, ClusterRoleBinding for permissions
- All these resources are grouped in a convenient YAML file
.exercise[
- Install the operator:
```bash
kubectl apply -f ~/container.training/k8s/eck-operator.yaml
```
]
---
## Check our new custom resources
- Let's see which CRDs were created
.exercise[
- List all CRDs:
```bash
kubectl get crds
```
]
This operator supports not only ElasticSearch, but also Kibana and APM. Cool!
---
## Create the `eck-demo` namespace
- For clarity, we will create everything in a new namespace, `eck-demo`
- This namespace is hard-coded in the YAML files that we are going to use
- We need to create that namespace
.exercise[
- Create the `eck-demo` namespace:
```bash
kubectl create namespace eck-demo
```
- Switch to that namespace:
```bash
kns eck-demo
```
]
---
class: extra-details
## Can we use a different namespace?
Yes, but then we need to update all the YAML manifests that we
are going to apply in the next slides.
The `eck-demo` namespace is hard-coded in these YAML manifests.
Why?
Because when defining a ClusterRoleBinding that references a
ServiceAccount, we have to indicate in which namespace the
ServiceAccount is located.
---
## Create an ElasticSearch resource
- We can now create a resource with `kind: Elasticsearch`
- The YAML for that resource will specify all the desired parameters:
- how many nodes we want
- image to use
- add-ons (kibana, cerebro, ...)
- whether to use TLS or not
- etc.
.exercise[
- Create our ElasticSearch cluster:
```bash
kubectl apply -f ~/container.training/k8s/eck-elasticsearch.yaml
```
]
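For reference, the manifest we just applied looks roughly like this (a sketch; the exact version and settings in `eck-elasticsearch.yaml` may differ):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: demo
spec:
  version: 7.10.1
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
```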
---
## Operator in action
- Over the next minutes, the operator will create our ES cluster
- It will report our cluster status through the CRD
.exercise[
- Check the logs of the operator:
```bash
stern --namespace=elastic-system operator
```
<!--
```wait elastic-operator-0```
```tmux split-pane -v```
-->
- Watch the status of the cluster through the CRD:
```bash
kubectl get es -w
```
<!--
```longwait green```
```key ^C```
```key ^D```
```key ^C```
-->
]
---
## Connecting to our cluster
- It's not easy to use the ElasticSearch API from the shell
- But let's check at least if ElasticSearch is up!
.exercise[
- Get the ClusterIP of our ES instance:
```bash
kubectl get services
```
- Issue a request with `curl`:
```bash
curl http://`CLUSTERIP`:9200
```
]
We get an authentication error. Our cluster is protected!
---
## Obtaining the credentials
- The operator creates a user named `elastic`
- It generates a random password and stores it in a Secret
.exercise[
- Extract the password:
```bash
kubectl get secret demo-es-elastic-user \
-o go-template="{{ .data.elastic | base64decode }} "
```
- Use it to connect to the API:
```bash
curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200
```
]
We should see a JSON payload with the `"You Know, for Search"` tagline.
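The `base64decode` in that go-template is ordinary base64 decoding; outside of `kubectl`, the same can be done with `base64 -d`. A quick sketch with a made-up encoded value (not a real generated password):

```shell
# Hypothetical base64-encoded password (as stored in the Secret's data field)
ENCODED="c3VwZXJzZWNyZXQ="
echo "$ENCODED" | base64 -d
# → supersecret
```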
---
## Sending data to the cluster
- Let's send some data to our brand new ElasticSearch cluster!
- We'll deploy a filebeat DaemonSet to collect node logs
.exercise[
- Deploy filebeat:
```bash
kubectl apply -f ~/container.training/k8s/eck-filebeat.yaml
```
- Wait until some pods are up:
```bash
watch kubectl get pods -l k8s-app=filebeat
```
<!--
```wait Running```
```key ^C```
-->
- Check that a filebeat index was created:
```bash
curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200/_cat/indices
```
]
---
## Deploying an instance of Kibana
- Kibana can visualize the logs injected by filebeat
- The ECK operator can also manage Kibana
- Let's give it a try!
.exercise[
- Deploy a Kibana instance:
```bash
kubectl apply -f ~/container.training/k8s/eck-kibana.yaml
```
- Wait for it to be ready:
```bash
kubectl get kibana -w
```
<!--
```longwait green```
```key ^C```
-->
]
---
## Connecting to Kibana
- Kibana is automatically set up to connect to ElasticSearch
(this is arranged by the YAML that we're using)
- However, it will ask for authentication
- It's using the same user/password as ElasticSearch
.exercise[
- Get the NodePort allocated to Kibana:
```bash
kubectl get services
```
- Connect to it with a web browser
- Use the same user/password as before
]
---
## Setting up Kibana
After the Kibana UI loads, we need to click around a bit
.exercise[
- Pick "explore on my own"
- Click on "Use Elasticsearch data / Connect to your Elasticsearch index"
- Enter `filebeat-*` for the index pattern and click "Next step"
- Select `@timestamp` as time filter field name
- Click on "discover" (the small icon looking like a compass on the left bar)
- Play around!
]
---
## Scaling up the cluster
- At this point, we have only one node
- We are going to scale up
- But first, we'll deploy Cerebro, a UI for ElasticSearch
- This will let us see the state of the cluster, how indexes are sharded, etc.
---
## Deploying Cerebro
- Cerebro is stateless, so it's fairly easy to deploy
(one Deployment + one Service)
- However, it needs the address and credentials for ElasticSearch
- We prepared yet another manifest for that!
.exercise[
- Deploy Cerebro:
```bash
kubectl apply -f ~/container.training/k8s/eck-cerebro.yaml
```
- Lookup the NodePort number and connect to it:
```bash
kubectl get services
```
]
---
## Scaling up the cluster
- We can see on Cerebro that the cluster is "yellow"
(because our index is not replicated)
- Let's change that!
.exercise[
- Edit the ElasticSearch cluster manifest:
```bash
kubectl edit es demo
```
- Find the field `count: 1` and change it to 3
- Save and quit
<!--
```wait Please edit```
```keys /count:```
```key ^J```
```keys $r3:x```
```key ^J```
-->
]
???
:EN:- Deploying ElasticSearch with ECK
:FR:- Déployer ElasticSearch avec ECK

slides/k8s/events.md
# Events
- Kubernetes has an internal structured log of *events*
- These events are ordinary resources:
- we can view them with `kubectl get events`
- they can be viewed and created through the Kubernetes API
- they are stored in Kubernetes' default database (typically etcd)
- Most components will generate events to let us know what's going on
- Events can be *related* to other resources
---
## Reading events
- `kubectl get events` (or `kubectl get ev`)
- Can use `--watch`
⚠️ Looks like `tail -f`, but events aren't necessarily sorted!
- Can use `--all-namespaces`
- Cluster events (e.g. related to nodes) are in the `default` namespace
- Viewing all "non-normal" events:
```bash
kubectl get ev -A --field-selector=type!=Normal
```
(as of Kubernetes 1.19, `type` can be either `Normal` or `Warning`)
---
## Reading events (take 2)
- When we use `kubectl describe` on an object, `kubectl` retrieves the associated events
.exercise[
- See the API requests happening when we use `kubectl describe`:
```bash
kubectl describe service kubernetes --namespace=default -v6 >/dev/null
```
]
---
## Generating events
- This is rarely (if ever) done manually
(i.e. by crafting some YAML)
- But controllers (e.g. operators) need this!
- It's not mandatory, but it helps with *operability*
(e.g. when we `kubectl describe` a CRD, we will see associated events)
---
## ⚠️ Work in progress
- "Events" can be:
- "old-style" events (in core API group, aka `v1`)
- "new-style" events (in API group `events.k8s.io`)
- See [KEP 383](https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/383-new-event-api-ga-graduation/README.md) in particular this [comparison between old and new APIs](https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/383-new-event-api-ga-graduation/README.md#comparison-between-old-and-new-apis)
---
## Experimenting with events
- Let's create an event related to a Node, based on @@LINK[k8s/event-node.yaml]
.exercise[
- Edit `k8s/event-node.yaml`
- Update the `name` and `uid` of the `involvedObject`
- Create the event with `kubectl create -f`
- Look at the Node with `kubectl describe`
]
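A minimal sketch of what such an event manifest could contain (field values are placeholders; the actual `k8s/event-node.yaml` in the repository may differ):

```yaml
apiVersion: v1
kind: Event
metadata:
  name: hello-node
  namespace: default
type: Normal
reason: Hello
message: Hello, node!
involvedObject:
  apiVersion: v1
  kind: Node
  name: node1                    # replace with an actual node name
  uid: REPLACE-WITH-THE-NODE-UID # from: kubectl get node node1 -o jsonpath={.metadata.uid}
```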
---
## Experimenting with events
- Let's create an event related to a Pod, based on @@LINK[k8s/event-pod.yaml]
.exercise[
- Create a pod
- Edit `k8s/event-pod.yaml`
- Edit the `involvedObject` section (don't forget the `uid`)
- Create the event with `kubectl create -f`
- Look at the Pod with `kubectl describe`
]
---
## Generating events in practice
- In Go, use an `EventRecorder` provided by the `kubernetes/client-go` library
- [EventRecorder interface](https://github.com/kubernetes/client-go/blob/release-1.19/tools/record/event.go#L87)
- [kubebuilder book example](https://book-v1.book.kubebuilder.io/beyond_basics/creating_events.html)
- It will take care of formatting / aggregating events
- To get an idea of what to put in the `reason` field, check [kubelet events](
https://github.com/kubernetes/kubernetes/blob/release-1.19/pkg/kubelet/events/event.go)
---
## Cluster operator perspective
- Events are kept 1 hour by default
- This can be changed with the `--event-ttl` flag on the API server
- On very busy clusters, events can be kept on a separate etcd cluster
- This is done with the `--etcd-servers-overrides` flag on the API server
- Example:
```
--etcd-servers-overrides=/events#http://127.0.0.1:12379
```
???
:EN:- Consuming and generating cluster events
:FR:- Suivre l'activité du cluster avec les *events*


Level 2: make it so that the number of replicas can be set with `--set replicas=N`
Level 3: change the colors of the lego bricks.
(For level 3, you'll have to build/push your own images.)
See next slide if you need hints!
Also add `replicas: 5` to `values.yaml` to provide a default value.
## Changing the color
- Fork the repository
- Create an account on e.g. Docker Hub (e.g. `janedoe`)
- Create an image repository (e.g. `janedoe/web`)
- Change the images and/or CSS in `web/static`
- Build and push
(`imagePullPolicy` should be `Always`, which is the default when using the `latest` tag)
- Trigger a rolling update using the image you just pushed


There are multiple ways to extend the Kubernetes API.
We are going to cover:
- Controllers
- Dynamic Admission Webhooks
- Custom Resource Definitions (CRDs)
- The Aggregation Layer
But first, let's re(re)visit the API server ...
---
## Revisiting the API server
- The Kubernetes API server is a central point of the control plane
- Everything connects to the API server:
  - users (that's us, but also automation like CI/CD)
  - kubelets
  - network components (e.g. `kube-proxy`, pod network, NPC)
  - controllers; lots of controllers
---
## Some controllers
- `kube-controller-manager` runs built-in controllers
(watching Deployments, Nodes, ReplicaSets, and much more)
- `kube-scheduler` runs the scheduler
(it's conceptually not different from another controller)
- `cloud-controller-manager` takes care of "cloud stuff"
(e.g. provisioning load balancers, persistent volumes...)
- Some components mentioned above are also controllers
(e.g. Network Policy Controller)
---
## More controllers
- Cloud resources can also be managed by additional controllers
(e.g. the [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller))
- Leveraging Ingress resources requires an Ingress Controller
(many options available here; we can even install multiple ones!)
- Many add-ons (including CRDs and operators) have controllers as well
🤔 *What's even a controller ?!?*
---
## What's a controller?
According to the [documentation](https://kubernetes.io/docs/concepts/architecture/controller/):
*Controllers are **control loops** that<br/>
**watch** the state of your cluster,<br/>
then make or request changes where needed.*
*Each controller tries to move the current cluster state closer to the desired state.*
---
## What controllers do
- Watch resources
- Make changes:
  - purely at the API level (e.g. Deployment, ReplicaSet controllers)
  - and/or configure resources (e.g. `kube-proxy`)
  - and/or provision resources (e.g. load balancer controller)
---
## Extending Kubernetes with controllers
- Random example:
  - watch resources like Deployments, Services ...
  - read annotations to configure monitoring
- Technically, this is not extending the API
(but it can still be very useful!)
---
## Other ways to extend Kubernetes
- Prevent or alter API requests before resources are committed to storage:
*Admission Control*
- Create new resource types leveraging Kubernetes storage facilities:
*Custom Resource Definitions*
- Create new resource types with different storage or different semantics:
*Aggregation Layer*
- Replacing built-in types with CRDs
(see [this lightning talk by Tim Hockin](https://www.youtube.com/watch?v=ji0FWzFwNhA))
- Spoiler alert: often, we will combine multiple techniques
---
## Admission controllers
- Admission controllers are another way to extend the Kubernetes API
- Admission controllers can vet or transform API requests
- The diagram on the next slide shows the path of an API request
---
## Dynamic Admission Control
- We can set up *admission webhooks* to extend the behavior of the API server
- The API server will submit incoming API requests to these webhooks
(to avoid e.g. triggering webhooks when setting up webhooks)
- The webhook server can be hosted in or out of the cluster
---
## Dynamic Admission Examples
- Policy control
([Kyverno](https://kyverno.io/),
[Open Policy Agent](https://www.openpolicyagent.org/docs/latest/))
- Sidecar injection
(Used by some service meshes)
- Type validation
(More on this later, in the CRD section)
---
## Kubernetes API types
- Almost everything in Kubernetes is materialized by a resource
- Resources have a type (or "kind")
(similar to strongly typed languages)
- We can see existing types with `kubectl api-resources`
- We can list resources of a given type with `kubectl get <type>`
---
## Creating new types
- We can create new types with Custom Resource Definitions (CRDs)
- CRDs are created dynamically
(without recompiling or restarting the API server)
- CRDs themselves are resources:
- we can create a new type with `kubectl create` and some YAML
- we can see all our custom types with `kubectl get crds`
- After we create a CRD, the new type works just like built-in types
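As an illustration, a minimal CRD manifest could look like this (a hypothetical `Coffee` type; the group and names are arbitrary, and the open-ended schema is just a sketch):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: coffees.container.training
spec:
  group: container.training
  scope: Namespaced
  names:
    plural: coffees
    singular: coffee
    kind: Coffee
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
```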
---
## Examples
- Representing composite resources
(e.g. clusters like databases, message queues ...)
- Representing external resources
(e.g. virtual machines, object store buckets, domain names ...)
- Representing configuration for controllers and operators
(e.g. custom Ingress resources, certificate issuers, backups ...)
- Alternate representations of other objects; services and service instances
(e.g. encrypted secret, git endpoints ...)
---
## The aggregation layer
- Example: `metrics-server`
(storing live metrics in etcd would be extremely inefficient)
---
## Why?
- Using a CRD for live metrics would be extremely inefficient
(etcd **is not** a metrics store; write performance is way too slow)
- Instead, `metrics-server`:
- collects metrics from kubelets
- stores them in memory
- exposes them as PodMetrics and NodeMetrics (in API group metrics.k8s.io)
- is registered as an APIService
---
## Drawbacks
- Requires a server
- ... that implements a non-trivial API (aka the Kubernetes API semantics)
- If we need REST semantics, CRDs are probably way simpler
- *Sometimes* synchronizing external state with CRDs might do the trick
(unless we want the external state to be our single source of truth)
---
## Service catalog
- *Service catalog* is another extension mechanism
- It's not extending the Kubernetes API strictly speaking
(but it still provides new features!)
- It doesn't create new types; it uses:
- ClusterServiceBroker
- ClusterServiceClass
- ClusterServicePlan
- ServiceInstance
- ServiceBinding
- It uses the Open Service Broker API
---
???
:EN:- Overview of Kubernetes API extensions
:FR:- Comment étendre l'API Kubernetes

slides/k8s/finalizers.md
# Finalizers
- Sometimes, we.red[¹] want to prevent a resource from being deleted:
- perhaps it's "precious" (holds important data)
- perhaps other resources depend on it (and should be deleted first)
- perhaps we need to perform some clean up before it's deleted
- *Finalizers* are a way to do that!
.footnote[.red[¹]The "we" in that sentence generally stands for a controller.
<br/>(We can also use finalizers directly ourselves, but it's not very common.)]
---
## Examples
- Prevent deletion of a PersistentVolumeClaim which is used by a Pod
- Prevent deletion of a PersistentVolume which is bound to a PersistentVolumeClaim
- Prevent deletion of a Namespace that still contains objects
- When a LoadBalancer Service is deleted, make sure that the corresponding external resource (e.g. NLB, GLB, etc.) gets deleted.red[¹]
- When a CRD gets deleted, make sure that all the associated resources get deleted.red[²]
.footnote[.red[¹²]Finalizers are not the only solution for these use-cases.]
---
## How do they work?
- Each resource can have a list of `finalizers` in its `metadata`, e.g.:
  ```yaml
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: my-pvc
    annotations: ...
    finalizers:
    - kubernetes.io/pvc-protection
  ```
- If we try to delete a resource that has at least one finalizer:
- the resource is *not* deleted
- instead, its `deletionTimestamp` is set to the current time
- we are merely *marking the resource for deletion*
---
## What happens next?
- The controller that added the finalizer is supposed to:
- watch for resources with a `deletionTimestamp`
- execute necessary clean-up actions
- then remove the finalizer
- The resource is deleted once all the finalizers have been removed
(there is no timeout, so this could take forever)
- Until then, the resource can be used normally
(but no further finalizer can be *added* to the resource)
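Concretely, a resource marked for deletion looks like this (a sketch; the timestamp is made up, and the API server sets it, not us):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
  deletionTimestamp: "2021-02-28T20:00:00Z"  # set by the API server
  finalizers:
  - kubernetes.io/pvc-protection             # deletion blocked until this is removed
```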
---
## Finalizers in review
Let's review the examples mentioned earlier.
For each of them, we'll see if there are other (perhaps better) options.
---
## Volume finalizer
- Kubernetes applies the following finalizers:
- `kubernetes.io/pvc-protection` on PersistentVolumeClaims
- `kubernetes.io/pv-protection` on PersistentVolumes
- This prevents removing them when they are in use
- Implementation detail: the finalizer is present *even when the resource is not in use*
- When the resource is ~~deleted~~ marked for deletion, the controller will check if the finalizer can be removed
(Perhaps to avoid race conditions?)
---
## Namespace finalizer
- Kubernetes applies a finalizer named `kubernetes`
- It prevents removing the namespace if it still contains objects
- *Can we remove the namespace anyway?*
- remove the finalizer
- delete the namespace
- force deletion
- It *seems to work* but, in fact, the objects in the namespace still exist
(and they will re-appear if we re-create the namespace)
See [this blog post](https://www.openshift.com/blog/the-hidden-dangers-of-terminating-namespaces) for more details about this.
---
## LoadBalancer finalizer
- Scenario:
We run a custom controller to implement provisioning of LoadBalancer Services.
When a Service with type=LoadBalancer is deleted, we want to make sure
that the corresponding external resources are properly deleted.
- Rationale for using a finalizer:
Normally, we would watch and observe the deletion of the Service;
but if the Service is deleted while our controller is down,
we could "miss" the deletion and forget to clean up the external resource.
The finalizer ensures that we will "see" the deletion
and clean up the external resource.
---
## Counterpoint
- We could also:
- Tag the external resources
<br/>(to indicate which Kubernetes Service they correspond to)
- Periodically reconcile them against Kubernetes resources
- If a Kubernetes resource no longer exists, delete the external resource
- This doesn't have to be a *pre-delete* hook
(unless we store important information in the Service, e.g. as annotations)
---
## CRD finalizer
- Scenario:
We have a CRD that represents a PostgreSQL cluster.
It provisions StatefulSets, Deployments, Services, Secrets, ConfigMaps.
When the CRD is deleted, we want to delete all these resources.
- Rationale for using a finalizer:
Same as previously; we could observe the CRD, but if it is deleted
while the controller isn't running, we would miss the deletion,
and the other resources would keep running.
---
## Counterpoint
- We could use the same technique as described before
(tag the resources with e.g. annotations, to associate them with the CRD)
- Even better: we could use `ownerReferences`
(this feature is *specifically* designed for that use-case!)
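For illustration, here is a sketch of what such an owner reference looks like (the owner's API group, kind, name, and uid are hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  ownerReferences:
  - apiVersion: example.com/v1alpha1
    kind: PostgresCluster
    name: my-db
    uid: d9607e19-f88f-11e6-a518-42010a800195  # must match the owner's metadata.uid
```

When the owning `PostgresCluster` is deleted, Kubernetes garbage collection deletes this Secret automatically.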
---
## CRD finalizer (take two)
- Scenario:
We have a CRD that represents a PostgreSQL cluster.
It provisions StatefulSets, Deployments, Services, Secrets, ConfigMaps.
When the CRD is deleted, we want to delete all these resources.
We also want to store a final backup of the database.
We also want to update final usage metrics (e.g. for billing purposes).
- Rationale for using a finalizer:
We need to take some actions *before* the resources get deleted, not *after*.
---
## Wrapping up
- Finalizers are a great way to:
- prevent deletion of a resource that is still in use
- have a "guaranteed" pre-delete hook
- They can also be (ab)used for other purposes
- Code spelunking exercise:
*check where finalizers are used in the Kubernetes code base and why!*
???
:EN:- Using "finalizers" to manage resource lifecycle
:FR:- Gérer le cycle de vie des ressources avec les *finalizers*

slides/k8s/gitlab.md
# CI/CD with GitLab
- In this section, we will see how to set up a CI/CD pipeline with GitLab
(using a "self-hosted" GitLab; i.e. running on our Kubernetes cluster)
- The big picture:
- each time we push code to GitLab, it will be deployed in a staging environment
- each time we push the `production` tag, it will be deployed in production
---
## Disclaimers
- We'll use GitLab here as an example, but there are many other options
(e.g. some combination of Argo, Harbor, Tekton ...)
- There are also hosted options
(e.g. GitHub Actions and many others)
- We'll use a specific pipeline and workflow, but it's purely arbitrary
(treat it as a source of inspiration, not a model to be copied!)
---
## Workflow overview
- Push code to GitLab's git server
- GitLab notices the `.gitlab-ci.yml` file, which defines our pipeline
- Our pipeline can have multiple *stages* executed sequentially
(e.g. lint, build, test, deploy ...)
- Each stage can have multiple *jobs* executed in parallel
(e.g. build images in parallel)
- Each job will be executed in an independent *runner* pod
---
## Pipeline overview
- Our repository holds source code, Dockerfiles, and a Helm chart
- *Lint* stage will check the Helm chart validity
- *Build* stage will build container images
(and push them to GitLab's integrated registry)
- *Deploy* stage will deploy the Helm chart, using these images
- Pushes to `production` will deploy to "the" production namespace
- Pushes to other tags/branches will deploy to a namespace created on the fly
- We will discuss shortcomings and alternatives at the end of this chapter!
---
## Lots of requirements
- We need *a lot* of components to pull this off:
- a domain name
- a storage class
- a TLS-capable ingress controller
- the cert-manager operator
- GitLab itself
- the GitLab pipeline
- Wow, why?!?
---
## I find your lack of TLS disturbing
- We need a container registry (obviously!)
- Docker (and other container engines) *require* TLS on the registry
(with valid certificates)
- A few options:
- use a "real" TLS certificate (e.g. obtained with Let's Encrypt)
- use a self-signed TLS certificate
- communicate with the registry over localhost (TLS isn't required then)
---
class: extra-details
## Why not self-signed certs?
- When using self-signed certs, we need to either:
- add the cert (or CA) to trusted certs
- disable cert validation
- This needs to be done on *every client* connecting to the registry:
- CI/CD pipeline (building and pushing images)
- container engine (deploying the images)
- other tools (e.g. container security scanner)
- It's doable, but it's a lot of hacks (especially when adding more tools!)
---
class: extra-details
## Why not localhost?
- TLS is usually not required when the registry is on localhost
- We could expose the registry e.g. on a `NodePort`
- ... And then tweak the CI/CD pipeline to use that instead
- This is great when obtaining valid certs is difficult:
- air-gapped or internal environments (that can't use Let's Encrypt)
- no domain name available
- Downside: the registry isn't easily or safely available from outside
(the `NodePort` essentially defeats TLS)
---
class: extra-details
## Can we use `nip.io`?
- We will use Let's Encrypt
- Let's Encrypt has a quota of certificates per domain
(in 2020, that was [50 certificates per week per domain](https://letsencrypt.org/docs/rate-limits/))
- So if we all use `nip.io`, we will probably run into that limit
- But you can try and see if it works!
---
## Ingress
- We will assume that we have a domain name pointing to our cluster
(i.e. with a wildcard record pointing to at least one node of the cluster)
- We will get traffic in the cluster by leveraging `ExternalIPs` services
(but it would be easy to use `LoadBalancer` services instead)
- We will use Traefik as the ingress controller
(but any other one should work too)
- We will use cert-manager to obtain certificates with Let's Encrypt
---
## Other details
- We will deploy GitLab with its official Helm chart
- It will still require a bunch of parameters and customization
- We also need a Storage Class
(unless our cluster already has one, of course)
- We suggest the [Rancher local path provisioner](https://github.com/rancher/local-path-provisioner)
---
## Setting everything up
1. `git clone https://github.com/jpetazzo/kubecoin`
2. `export EMAIL=xxx@example.com DOMAIN=awesome-kube-ci.io`
(we need a real email address and a domain pointing to the cluster!)
3. `. setup-gitlab-on-k8s.rc`
(this doesn't do anything, but defines a number of helper functions)
4. Execute each helper function, one after another
(try `do_[TAB]` to see these functions)
---
## Local Storage
`do_1_localstorage`
Applies the YAML directly from Rancher's repository.
Annotate the Storage Class so that it becomes the default one.
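The annotation in question is `storageclass.kubernetes.io/is-default-class`; as a sketch, the resulting Storage Class looks like this (assuming the class is named `local-path`, as in Rancher's YAML):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    # This makes it the default class for PVCs without storageClassName
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
```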
---
## Traefik
`do_2_traefik_with_externalips`
Install the official Traefik Helm chart.
Instead of a `LoadBalancer` service, use a `ClusterIP` with `ExternalIPs`.
Automatically infer the `ExternalIPs` from `kubectl get nodes`.
Enable TLS.
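The resulting service is shaped roughly like this (a sketch; the real names and labels come from the Traefik chart, and the addresses from our actual nodes):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: ClusterIP
  externalIPs:          # node addresses, as shown by `kubectl get nodes -o wide`
  - 10.0.0.1
  - 10.0.0.2
  selector:
    app.kubernetes.io/name: traefik
  ports:
  - name: web
    port: 80
  - name: websecure
    port: 443
```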
---
## cert-manager
`do_3_certmanager`
Install cert-manager using their official YAML.
Easy-peasy.
---
## Certificate issuers
`do_4_issuers`
Create a couple of `ClusterIssuer` resources for cert-manager.
(One for the staging Let's Encrypt environment, one for production.)
Note: this requires specifying a valid `$EMAIL` address!
Note: if this fails, wait a bit and try again (cert-manager needs to be up).
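A staging `ClusterIssuer` looks roughly like this (a sketch; the production issuer differs only by its name and ACME server URL, and the solver's ingress class is assumed to be `traefik`):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint; production uses acme-v02.api.letsencrypt.org
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: xxx@example.com   # must be a valid address (our $EMAIL)
    privateKeySecretRef:
      name: letsencrypt-staging   # Secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: traefik
```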
---
## GitLab
`do_5_gitlab`
Deploy GitLab using their official Helm chart.
We pass a lot of parameters to this chart:
- the domain name to use
- disable GitLab's own ingress and cert-manager
- annotate the ingress resources so that cert-manager kicks in
- bind the shell service (git over SSH) to port 222 to avoid conflict
- use ExternalIPs for that shell service
Note: on modest cloud instances, it can take 10 minutes for GitLab to come up.
We can check the status with `kubectl get pods --namespace=gitlab`
---
## Log into GitLab and configure it
`do_6_showlogin`
This will get the GitLab root password (stored in a Secret).
Then we need to:
- log into GitLab
- add our SSH key (top-right user menu → settings, then SSH keys on the left)
- create a project (using the + menu next to the search bar on top)
- go to project configuration (on the left, settings → CI/CD)
- add a `KUBECONFIG` file variable with the content of our `.kube/config` file
- go to settings → access tokens to create a read-only registry token
- add variables `REGISTRY_USER` and `REGISTRY_PASSWORD` with that token
- push our repo (`git remote add gitlab ...` then `git push gitlab ...`)
---
## Monitoring progress and troubleshooting
- Click on "CI/CD" in the left bar to view pipelines
- If you see a permission issue mentioning `system:serviceaccount:gitlab:...`:
*make sure you did set `KUBECONFIG` correctly!*
- GitLab will create namespaces named `gl-<user>-<project>`
- At the end of the deployment, the web UI will be available on some unique URL
(`http://<user>-<project>-<githash>-gitlab.<domain>`)
---
## Production
- `git tag -f production && git push -f --tags`
- Our CI/CD pipeline will deploy on the production URL
(`http://<user>-<project>-gitlab.<domain>`)
- It will do it *only* if that same git commit was pushed to staging first
(look in the pipeline configuration file to see how it's done!)
---
## Let's talk about build
- There are many ways to build container images on Kubernetes
- ~~And they all suck~~ Many of them have inconvenient drawbacks
- Let's do a quick review!
---
## Docker-based approaches
- Bind-mount the Docker socket
- very easy, but requires Docker Engine
- build resource usage "evades" Kubernetes scheduler
- insecure
- Docker-in-Docker in a pod
- requires privileged pod
- insecure
- approaches like rootless or sysbox might help in the future
- External build host
- more secure
- requires resources outside of the Kubernetes cluster
---
## Non-privileged builders
- Kaniko
- each build runs in its own container or pod
- no caching by default
- registry-based caching is possible
- BuildKit / `docker buildx`
- can leverage Docker Engine or long-running Kubernetes worker pod
- supports distributed, multi-arch build farms
- basic caching out of the box
- can also leverage registry-based caching
---
## Other approaches
- Ditch the Dockerfile!
- bazel
- jib
- ko
- etc.
---
## Discussion
- Our CI/CD workflow is just *one* of the many possibilities
- It would be nice to add some actual unit or e2e tests
- Map the production namespace to a "real" domain name
- Automatically remove older staging environments
(see e.g. [kube-janitor](https://codeberg.org/hjacobs/kube-janitor))
- Deploy production to a separate cluster
- Better segregate permissions
(don't give `cluster-admin` to the GitLab pipeline)
---
## Pros
- GitLab is an amazing, open source, all-in-one platform
- Available as hosted, community, or enterprise editions
- Rich ecosystem, very customizable
- Can run on Kubernetes, or somewhere else
---
## Cons
- It can be difficult to use components separately
(e.g. use a different registry, or a different job runner)
- More than one way to configure it
(it's not an opinionated platform)
- Not "Kubernetes-native"
(for instance, jobs are not Kubernetes jobs)
- Job latency could be improved
*Note: most of these drawbacks are the flip side of the "pros" on the previous slide!*
???
:EN:- CI/CD with GitLab
:FR:- CI/CD avec GitLab

---
- Add the `stable` repo:
```bash
helm repo add stable https://charts.helm.sh/stable
```
]
It's OK to add a repo that already exists (it will merely update it).
---
class: extra-details
## Deprecation warning
- The "stable" repository is being deprecated, in favor of a more decentralized approach
(each community / company / group / project hosting their own repository)
- We're going to use it here for educational purposes
- But if you're looking for production-grade charts, look elsewhere!
(namely, on the Helm Hub)
---
## Search available charts
- We can search available charts with `helm search`

slides/k8s/hpa-v2.md (new file)
# Scaling with custom metrics
- The HorizontalPodAutoscaler v1 can only scale on Pod CPU usage
- Sometimes, we need to scale using other metrics:
- memory
- requests per second
- latency
- active sessions
- items in a work queue
- ...
- The HorizontalPodAutoscaler v2 can do it!
---
## Requirements
⚠️ Autoscaling on custom metrics is fairly complex!
- We need some metrics system
(Prometheus is a popular option, but others are possible too)
- We need our metrics (latency, traffic...) to be fed in the system
(with Prometheus, this might require a custom exporter)
- We need to expose these metrics to Kubernetes
(Kubernetes doesn't "speak" the Prometheus API)
- Then we can set up autoscaling!
---
## The plan
- We will deploy the DockerCoins demo app
(one of its components has a bottleneck; its latency will increase under load)
- We will use Prometheus to collect and store metrics
- We will deploy a tiny HTTP latency monitor (a Prometheus *exporter*)
- We will deploy the "Prometheus adapter"
(mapping Prometheus metrics to Kubernetes-compatible metrics)
- We will create a HorizontalPodAutoscaler 🎉
---
## Deploying DockerCoins
- That's the easy part!
.exercise[
- Create a new namespace and switch to it:
```bash
kubectl create namespace customscaling
kns customscaling
```
- Deploy DockerCoins, and scale up the `worker` Deployment:
```bash
kubectl apply -f ~/container.training/k8s/dockercoins.yaml
kubectl scale deployment worker --replicas=10
```
]
---
## Current state of affairs
- The `rng` service is a bottleneck
(it cannot handle more than 10 requests/second)
- With enough traffic, its latency increases
(by about 100ms per `worker` Pod after the 3rd worker)
.exercise[
- Check the `webui` port and open it in your browser:
```bash
kubectl get service webui
```
- Check the `rng` ClusterIP and test it with e.g. `httping`:
```bash
kubectl get service rng
```
]
---
## Measuring latency
- We will use a tiny custom Prometheus exporter, [httplat](https://github.com/jpetazzo/httplat)
- `httplat` exposes Prometheus metrics on port 9080 (by default)
- It monitors exactly one URL, that must be passed as a command-line argument
.exercise[
- Deploy `httplat`:
```bash
kubectl create deployment httplat --image=jpetazzo/httplat -- httplat http://rng/
```
- Expose it:
```bash
kubectl expose deployment httplat --port=9080
```
]
---
class: extra-details
## Measuring latency in the real world
- We are using this tiny custom exporter for simplicity
- A more common method to collect latency is to use a service mesh
- A service mesh can usually collect latency for *all* services automatically
---
## Install Prometheus
- We will use the Prometheus community Helm chart
(because we can configure it dynamically with annotations)
.exercise[
- If it's not installed yet on the cluster, install Prometheus:
```bash
helm repo add prometheus-community \
     https://prometheus-community.github.io/helm-charts
helm upgrade prometheus prometheus-community/prometheus \
     --install \
     --namespace kube-system \
     --set server.service.type=NodePort \
     --set server.service.nodePort=30090 \
     --set server.persistentVolume.enabled=false \
     --set alertmanager.enabled=false
```
]
---
## Configure Prometheus
- We can use annotations to tell Prometheus to collect the metrics
.exercise[
- Tell Prometheus to "scrape" our latency exporter:
```bash
kubectl annotate service httplat \
prometheus.io/scrape=true \
prometheus.io/port=9080 \
prometheus.io/path=/metrics
```
]
If you deployed Prometheus differently, you might have to configure it manually.
You'll need to instruct it to scrape http://httplat.customscaling.svc:9080/metrics.
---
## Make sure that metrics get collected
- Before moving on, confirm that Prometheus has our metrics
.exercise[
- Connect to Prometheus
(if you installed it as instructed above, it is exposed as a NodePort on port 30090)
- Check that `httplat` metrics are available
- You can try to graph the following PromQL expression:
```
rate(httplat_latency_seconds_sum[2m])/rate(httplat_latency_seconds_count[2m])
```
]
---
## Troubleshooting
- Make sure that the exporter works:
- get the ClusterIP of the exporter with `kubectl get svc httplat`
- `curl http://<ClusterIP>:9080/metrics`
- check that the result includes the `httplat` histogram
- Make sure that Prometheus is scraping the exporter:
- go to `Status` / `Targets` in Prometheus
- make sure that `httplat` shows up in there
---
## Creating the autoscaling policy
- We need custom YAML (we can't use the `kubectl autoscale` command)
- It must specify `scaleTargetRef`, the resource to scale
- any resource with a `scale` sub-resource will do
- this includes Deployment, ReplicaSet, StatefulSet...
- It must specify one or more `metrics` to look at
- if multiple metrics are given, the autoscaler will "do the math" for each one
- it will then keep the largest result
---
## Details about the `metrics` list
- Each item will look like this:
```yaml
- type: <TYPE-OF-METRIC>
  <TYPE-OF-METRIC>:
    metric:
      name: <NAME-OF-METRIC>
      <...optional selector (mandatory for External metrics)...>
    target:
      type: <TYPE-OF-TARGET>
      <TYPE-OF-TARGET>: <VALUE>
    <describedObject field, for Object metrics>
```
`<TYPE-OF-METRIC>` can be `Resource`, `Pods`, `Object`, or `External`.
`<TYPE-OF-TARGET>` can be `Utilization`, `Value`, or `AverageValue`.
Let's explain the 4 different `<TYPE-OF-METRIC>` values!
---
## `Resource`
Use "classic" metrics served by `metrics-server` (`cpu` and `memory`).
```yaml
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 50
```
Compute average *utilization* (usage/requests) across pods.
It's also possible to specify `Value` or `AverageValue` instead of `Utilization`.
(To scale according to "raw" CPU or memory usage.)
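For instance, a hypothetical sketch scaling on raw memory usage (instead of utilization) would look like this:

```yaml
- type: Resource
  resource:
    name: memory
    target:
      type: AverageValue
      averageValue: 500Mi   # illustrative threshold, averaged across pods
```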
---
## `Pods`
Use custom metrics. These are still "per-Pod" metrics.
```yaml
- type: Pods
  pods:
    metric:
      name: packets-per-second
    target:
      type: AverageValue
      averageValue: 1k
```
`type:` *must* be `AverageValue`.
(It cannot be `Utilization`, since custom metrics can't be expressed as Pod `requests`.)
---
## `Object`
Use custom metrics. These metrics are "linked" to any arbitrary resource.
(E.g. a Deployment, Service, Ingress, ...)
```yaml
- type: Object
  object:
    metric:
      name: requests-per-second
    describedObject:
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      name: main-route
    target:
      type: AverageValue
      averageValue: 100
```
`type:` can be `Value` or `AverageValue` (see next slide for details).
---
## `Value` vs `AverageValue`
- `Value`
- use the value as-is
- useful to pace a client or producer
- "target a specific total load on a specific endpoint or queue"
- `AverageValue`
- divide the value by the number of pods
- useful to scale a server or consumer
- "scale our systems to meet a given SLA/SLO"
---
## `External`
Use arbitrary metrics. The series to use is specified with a label selector.
```yaml
- type: External
  external:
    metric:
      name: queue_messages_ready
      selector:
        matchLabels:
          queue: worker_tasks
    target:
      type: AverageValue
      averageValue: 30
```
The `selector` will be passed along when querying the metrics API.
Its meaning is implementation-dependent.
It may or may not correspond to Kubernetes labels.
---
## One more thing ...
- We can give a `behavior` set of options
- Indicates:
- how much to scale up/down in a single step
- a *stabilization window* to avoid hysteresis effects
- The default stabilization window is 0 seconds for `scaleUp`, and 300 seconds for `scaleDown`
(we might want to change that!)
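As a sketch, here is what a `behavior` section could look like (the values are hypothetical, chosen to dampen the oscillations discussed later):

```yaml
behavior:
  scaleUp:
    stabilizationWindowSeconds: 60
    policies:
    - type: Percent
      value: 100          # at most double the number of pods ...
      periodSeconds: 60   # ... per minute
  scaleDown:
    stabilizationWindowSeconds: 300
```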
---
Putting together @@LINK[k8s/hpa-v2-pa-httplat.yaml]:
.small[
```yaml
@@INCLUDE[k8s/hpa-v2-pa-httplat.yaml]
```
]
---
## Creating the autoscaling policy
- We will register the policy
- Of course, it won't quite work yet (we're missing the *Prometheus adapter*)
.exercise[
- Create the HorizontalPodAutoscaler:
```bash
kubectl apply -f ~/container.training/k8s/hpa-v2-pa-httplat.yaml
```
- Check the logs of the `controller-manager`:
```bash
stern --namespace=kube-system --tail=10 controller-manager
```
]
After a little while we should see messages like this:
```
no custom metrics API (custom.metrics.k8s.io) registered
```
---
## `custom.metrics.k8s.io`
- The HorizontalPodAutoscaler will get the metrics *from the Kubernetes API itself*
- In our specific case, it will access a resource like this one:
.small[
```
/apis/custom.metrics.k8s.io/v1beta1/namespaces/customscaling/services/httplat/httplat_latency_seconds
```
]
- By default, the Kubernetes API server doesn't implement `custom.metrics.k8s.io`
(we can have a look at `kubectl get apiservices`)
- We need to:
- start an API service implementing this API group
- register it with our API server
---
## The Prometheus adapter
- The Prometheus adapter is an open source project:
https://github.com/DirectXMan12/k8s-prometheus-adapter
- It's a Kubernetes API service implementing API group `custom.metrics.k8s.io`
- It maps the requests it receives to Prometheus metrics
- Exactly what we need!
---
## Deploying the Prometheus adapter
- There is ~~an app~~ a Helm chart for that
.exercise[
- Install the Prometheus adapter:
```bash
helm upgrade prometheus-adapter prometheus-community/prometheus-adapter \
--install --namespace=kube-system \
--set prometheus.url=http://prometheus-server.kube-system.svc \
--set prometheus.port=80
```
]
- It comes with some default mappings
- But we will need to add `httplat` to these mappings
---
## Configuring the Prometheus adapter
- The Prometheus adapter can be configured/customized through a ConfigMap
- We are going to edit that ConfigMap, then restart the adapter
- We need to add a rule that will say:
- all the metrics series named `httplat_latency_seconds_sum` ...
- ... belong to *Services* ...
- ... the name of the Service and its Namespace are indicated by the `kubernetes_name` and `kubernetes_namespace` Prometheus tags respectively ...
- ... and the exact value to use should be the following PromQL expression
---
## The mapping rule
Here is the rule that we need to add to the configuration:
```yaml
- seriesQuery: |
    httplat_latency_seconds_sum{kubernetes_namespace!="",kubernetes_name!=""}
  resources:
    overrides:
      kubernetes_namespace:
        resource: namespace
      kubernetes_name:
        resource: service
  name:
    matches: "httplat_latency_seconds_sum"
    as: "httplat_latency_seconds"
  metricsQuery: |
    rate(httplat_latency_seconds_sum{<<.LabelMatchers>>}[2m])
    / rate(httplat_latency_seconds_count{<<.LabelMatchers>>}[2m])
```
(I built it following the [walkthrough](https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config-walkthrough.md) in the Prometheus adapter documentation.)
---
## Editing the adapter's configuration
.exercise[
- Edit the adapter's ConfigMap:
```bash
kubectl edit configmap prometheus-adapter --namespace=kube-system
```
- Add the new rule in the `rules` section, at the end of the configuration file
- Save, quit
- Restart the Prometheus adapter:
```bash
kubectl rollout restart deployment --namespace=kube-system prometheus-adapter
```
]
---
## Witness the marvel of custom autoscaling
(Sort of)
- After a short while, the `rng` Deployment will scale up
- It should scale up until the latency drops below 100ms
(and continue to scale up a little bit more after that)
- Then, since the latency will be well below 100ms, it will scale down
- ... and back up again, etc.
(See pictures on next slides!)
---
class: pic
![Latency over time](images/hpa-v2-pa-latency.png)
---
class: pic
![Number of pods over time](images/hpa-v2-pa-pods.png)
---
## What's going on?
- The autoscaler's information is slightly out of date
(not by much; probably between 1 and 2 minutes)
- It's enough to cause the oscillations to happen
- One possible fix is to tell the autoscaler to wait a bit after each action
- It will reduce oscillations, but will also slow down its reaction time
(and therefore, how fast it reacts to a peak of traffic)
---
## What's going on? Take 2
- As soon as the measured latency is *significantly* below our target (100ms) ...
the autoscaler tries to scale down
- If the latency is measured at 20ms ...
the autoscaler will try to *divide the number of pods by five!*
- One possible solution: apply a formula to the measured latency,
so that values between e.g. 10 and 100ms get very close to 100ms.
- Another solution: instead of targeting a specific latency,
target a 95th percentile latency or something similar, using
a more advanced PromQL expression (and leveraging the fact that
we have histograms instead of raw values).
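As a hypothetical sketch, the adapter's `metricsQuery` could target the 95th percentile instead of the mean, by leveraging the histogram buckets (note that the rule's `seriesQuery` and `name` would then have to match the `_bucket` series instead of `_sum`):

```yaml
metricsQuery: |
  histogram_quantile(0.95,
    sum(rate(httplat_latency_seconds_bucket{<<.LabelMatchers>>}[2m])) by (le))
```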
---
## Troubleshooting
Check that the adapter registered itself correctly:
```bash
kubectl get apiservices | grep metrics
```
Check that the adapter correctly serves metrics:
```bash
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
```
Check that our `httplat` metrics are available:
```bash
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1\
/namespaces/customscaling/services/httplat/httplat_latency_seconds
```
Also check the logs of the `prometheus-adapter` and the `kube-controller-manager`.
---
## Useful links
- [Horizontal Pod Autoscaler walkthrough](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) in the Kubernetes documentation
- [Autoscaling design proposal](https://github.com/kubernetes/community/tree/master/contributors/design-proposals/autoscaling)
- [Kubernetes custom metrics API alternative implementations](https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md)
- [Prometheus adapter configuration walkthrough](https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/docs/config-walkthrough.md)
???
:EN:- Autoscaling with custom metrics
:FR:- Suivi de charge avancé (HPAv2)

slides/k8s/ingress-tls.md (new file)
# Ingress and TLS certificates
- Most ingress controllers support TLS connections
(in a way that is standard across controllers)
- The TLS key and certificate are stored in a Secret
- The Secret is then referenced in the Ingress resource:
```yaml
spec:
  tls:
  - secretName: XXX
    hosts:
    - YYY
  rules:
  - ZZZ
```
---
## Obtaining a certificate
- In the next section, we will need a TLS key and certificate
- These usually come in [PEM](https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail) format:
```
-----BEGIN CERTIFICATE-----
MIIDATCCAemg...
...
-----END CERTIFICATE-----
```
- We will see how to generate a self-signed certificate
(easy, fast, but won't be recognized by web browsers)
- We will also see how to obtain a certificate from [Let's Encrypt](https://letsencrypt.org/)
(requires the cluster to be reachable through a domain name)
---
class: extra-details
## In production ...
- A very popular option is to use the [cert-manager](https://cert-manager.io/docs/) operator
- It's a flexible, modular approach to automated certificate management
- For simplicity, in this section, we will use [certbot](https://certbot.eff.org/)
- The method shown here works well for one-time certs, but lacks:
- automation
- renewal
---
## Which domain to use
- If you're doing this in a training:
*the instructor will tell you what to use*
- If you're doing this on your own Kubernetes cluster:
*you should use a domain that points to your cluster*
- More precisely:
*you should use a domain that points to your ingress controller*
- If you don't have a domain name, you can use [nip.io](https://nip.io/)
(if your ingress controller is on 1.2.3.4, you can use `whatever.1.2.3.4.nip.io`)
---
## Setting `$DOMAIN`
- We will use `$DOMAIN` in the following section
- Let's set it now
.exercise[
- Set the `DOMAIN` environment variable:
```bash
export DOMAIN=...
```
]
---
## Method 1, self-signed certificate
- Thanks to `openssl`, generating a self-signed cert is just one command away!
.exercise[
- Generate a key and certificate:
```bash
openssl req \
-newkey rsa -nodes -keyout privkey.pem \
-x509 -days 30 -subj /CN=$DOMAIN/ -out cert.pem
```
]
This will create two files, `privkey.pem` and `cert.pem`.
---
## Method 2, Let's Encrypt with certbot
- `certbot` is an [ACME](https://tools.ietf.org/html/rfc8555) client
(Automatic Certificate Management Environment)
- We can use it to obtain certificates from Let's Encrypt
- It needs to listen to port 80
(to complete the [HTTP-01 challenge](https://letsencrypt.org/docs/challenge-types/))
- If port 80 is already taken by our ingress controller, see method 3
---
class: extra-details
## HTTP-01 challenge
- `certbot` contacts Let's Encrypt, asking for a cert for `$DOMAIN`
- Let's Encrypt gives a token to `certbot`
- Let's Encrypt then tries to access the following URL:
`http://$DOMAIN/.well-known/acme-challenge/<token>`
- That URL needs to be routed to `certbot`
- Once Let's Encrypt gets the response from `certbot`, it issues the certificate
---
## Running certbot
- There is a very convenient container image, `certbot/certbot`
- Let's use a volume to get easy access to the generated key and certificate
.exercise[
- Obtain a certificate from Let's Encrypt:
```bash
EMAIL=your.address@example.com
docker run --rm -p 80:80 -v $PWD/letsencrypt:/etc/letsencrypt \
certbot/certbot certonly \
-m $EMAIL \
--standalone --agree-tos -n \
--domain $DOMAIN \
--test-cert
```
]
This will get us a "staging" certificate.
Remove `--test-cert` to obtain a *real* certificate.
---
## Copying the key and certificate
- If everything went fine:
- the key and certificate files are in `letsencrypt/live/$DOMAIN`
- they are owned by `root`
.exercise[
- Grant ourselves permissions on these files:
```bash
sudo chown -R $USER letsencrypt
```
- Copy the certificate and key to the current directory:
```bash
cp letsencrypt/live/$DOMAIN/{cert,privkey}.pem .
```
]
---
## Method 3, certbot with Ingress
- Sometimes, we can't simply listen to port 80:
- we might already have an ingress controller there
- our nodes might be on an internal network
- But we can define an Ingress to route the HTTP-01 challenge to `certbot`!
- Our Ingress needs to route all requests to `/.well-known/acme-challenge` to `certbot`
- There are at least two ways to do that:
- run `certbot` in a Pod (and extract the cert+key when it's done)
- run `certbot` in a container on a node (and manually route traffic to it)
- We're going to use the second option
(mostly because it will give us an excuse to tinker with Endpoints resources!)
---
## The plan
- We need the following resources:
- an Endpoints¹ listing a hard-coded IP address and port
<br/>(where our `certbot` container will be listening)
- a Service corresponding to that Endpoints
- an Ingress sending requests to `/.well-known/acme-challenge/*` to that Service
<br/>(we don't even need to include a domain name in it)
- Then we need to start `certbot` so that it's listening on the right address+port
.footnote[¹Endpoints is always plural, because even a single resource is a list of endpoints.]
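The three resources fit together roughly like this (a simplified sketch of the actual `certbot.yaml`; the names and the port 8000 are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: certbot
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: certbot       # must match the Service name
subsets:
- addresses:
  - ip: A.B.C.D       # the node where the certbot container will run
  ports:
  - port: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: certbot
spec:
  rules:
  - http:             # no host: matches requests for any domain
      paths:
      - path: /.well-known/acme-challenge
        pathType: Prefix
        backend:
          service:
            name: certbot
            port:
              number: 80
```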
---
## Creating resources
- We prepared a YAML file to create the three resources
- However, the Endpoints needs to be adapted to put the current node's address
.exercise[
- Edit `~/container.training/k8s/certbot.yaml`
(replace `A.B.C.D` with the current node's address)
- Create the resources:
```bash
kubectl apply -f ~/container.training/k8s/certbot.yaml
```
]
---
## Obtaining the certificate
- Now we can run `certbot`, listening on the port listed in the Endpoints
(i.e. 8000)
.exercise[
- Run `certbot`:
```bash
EMAIL=your.address@example.com
docker run --rm -p 8000:80 -v $PWD/letsencrypt:/etc/letsencrypt \
certbot/certbot certonly \
-m $EMAIL \
--standalone --agree-tos -n \
--domain $DOMAIN \
--test-cert
```
]
This is using the staging environment.
Remove `--test-cert` to get a production certificate.
---
## Copying the certificate
- Just like in the previous method, the certificate is in `letsencrypt/live/$DOMAIN`
(and owned by root)
.exercise[
- Grant ourselves permissions on these files:
```bash
sudo chown -R $USER letsencrypt
```
- Copy the certificate and key to the current directory:
```bash
cp letsencrypt/live/$DOMAIN/{cert,privkey}.pem .
```
]
---
## Creating the Secret
- We now have two files:
- `privkey.pem` (the private key)
- `cert.pem` (the certificate)
- We can create a Secret to hold them
.exercise[
- Create the Secret:
```bash
kubectl create secret tls $DOMAIN --cert=cert.pem --key=privkey.pem
```
]
---
## Ingress with TLS
- To enable TLS for an Ingress, we need to add a `tls` section to the Ingress:
```yaml
spec:
  tls:
  - secretName: DOMAIN
    hosts:
    - DOMAIN
  rules: ...
```
- The list of hosts will be used by the ingress controller
(to know which certificate to use with [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication))
- Of course, the name of the secret can be different
(here, for clarity and convenience, we set it to match the domain)
---
class: extra-details
## About the ingress controller
- Many ingress controllers can use different "stores" for keys and certificates
- Our ingress controller needs to be configured to use secrets
(as opposed to, e.g., obtain certificates directly with Let's Encrypt)
---
## Using the certificate
.exercise[
- Edit the Ingress manifest, `~/container.training/k8s/ingress.yaml`
- Uncomment the `tls` section
- Update the `secretName` and `hosts` list
- Create or update the Ingress:
```bash
kubectl apply -f ~/container.training/k8s/ingress.yaml
```
- Check that the URL now works over `https`
(it might take a minute to be picked up by the ingress controller)
]
---
## Discussion
*To repeat something mentioned earlier ...*
- The methods presented here are for *educational purpose only*
- In most production scenarios, the certificates will be obtained automatically
- A very popular option is to use the [cert-manager](https://cert-manager.io/docs/) operator
???
:EN:- Ingress and TLS
:FR:- Certificats TLS et *ingress*

---
class: extra-details
- Example 3: canary for shipping physical goods
- 1% of orders are shipped with the canary process
- the remaining 99% are shipped with the normal process
- We're going to implement example 1 (per-request routing)

slides/k8s/internal-apis.md (new file)
# Kubernetes Internal APIs
- Almost every Kubernetes component has some kind of internal API
(some components even have multiple APIs on different ports!)
- At the very least, these can be used for healthchecks
(you *should* leverage this if you are deploying and operating Kubernetes yourself!)
- Sometimes, they are used internally by Kubernetes
(e.g. when the API server retrieves logs from kubelet)
- Let's review some of these APIs!
---
## API hunting guide
This is how we found and investigated these APIs:
- look for open ports on Kubernetes nodes
(worker nodes or control plane nodes)
- check which process owns that port
- probe the port (with `curl` or other tools)
- read the source code of that process
(in particular when looking for API routes)
OK, now let's see the results!
---
## etcd
- 2379/tcp → etcd clients
- should be HTTPS and require mTLS authentication
- 2380/tcp → etcd peers
- should be HTTPS and require mTLS authentication
- 2381/tcp → etcd healthcheck
- HTTP without authentication
- exposes two API routes: `/health` and `/metrics`
---
## kubelet
- 10248/tcp → healthcheck
- HTTP without authentication
- exposes a single API route, `/healthz`, that just returns `ok`
- 10250/tcp → internal API
- should be HTTPS and require mTLS authentication
- used by the API server to obtain logs, `kubectl exec`, etc.
---
class: extra-details
## kubelet API
- We can authenticate with e.g. our TLS admin certificate
- The following routes should be available:
- `/healthz`
- `/configz` (serves kubelet configuration)
- `/metrics`
- `/pods` (returns *desired state*)
- `/runningpods` (returns *current state* from the container runtime)
- `/logs` (serves files from `/var/log`)
- `/containerLogs/<namespace>/<podname>/<containername>` (can add e.g. `?tail=10`)
- `/run`, `/exec`, `/attach`, `/portForward`
- See [kubelet source code](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go) for details!
---
class: extra-details
## Trying the kubelet API
The following example should work on a cluster deployed with `kubeadm`.
1. Obtain the key and certificate for the `cluster-admin` user.
2. Log into a node.
3. Copy the key and certificate on the node.
4. Find out the name of the `kube-proxy` pod running on that node.
5. Run the following command, updating the pod name:
```bash
curl -d cmd=ls -k --cert admin.crt --key admin.key \
     https://localhost:10250/run/kube-system/kube-proxy-xy123/kube-proxy
```
... This should show the content of the root directory in the pod.
---
## kube-proxy
- 10249/tcp → healthcheck
- HTTP, without authentication
- exposes a few API routes: `/healthz` (just returns `ok`), `/configz`, `/metrics`
- 10256/tcp → another healthcheck
- HTTP, without authentication
- also exposes a `/healthz` API route (but this one shows a timestamp)
---
## kube-controller and kube-scheduler
- 10257/tcp → kube-controller
- HTTPS, with optional mTLS authentication
- `/healthz` doesn't require authentication
- ... but `/configz` and `/metrics` do (use e.g. admin key and certificate)
- 10259/tcp → kube-scheduler
- similar to kube-controller, with the same routes
???
:EN:- Kubernetes internal APIs
:FR:- Les APIs internes de Kubernetes

141
slides/k8s/k9s.md Normal file
View File

@@ -0,0 +1,141 @@
# k9s
- Somewhere in between CLI and GUI (or web UI), we can find the magic land of TUI
- [Text-based user interfaces](https://en.wikipedia.org/wiki/Text-based_user_interface)
- often using libraries like [curses](https://en.wikipedia.org/wiki/Curses_%28programming_library%29) and its successors
- Some folks love them, some folks hate them, some are indifferent ...
- But it's nice to have different options!
- Let's see one particular TUI for Kubernetes: [k9s](https://k9scli.io/)
---
## Installing k9s
- If you are using a training cluster or the [shpod](https://github.com/jpetazzo/shpod) image, k9s is pre-installed
- Otherwise, it can be installed easily:
- with [various package managers](https://k9scli.io/topics/install/)
- or by fetching a [binary release](https://github.com/derailed/k9s/releases)
- We don't need to set up or configure anything
(it will use the same configuration as `kubectl` and other well-behaved clients)
- Just run `k9s` to fire it up!
---
## What kind do we want to see?
- Press `:` to change the type of resource to view
- Then type, for instance, `ns` or `namespace` or `nam[TAB]`, then `[ENTER]`
- Use the arrows to move down to e.g. `kube-system`, and press `[ENTER]`
- Or, type `/kub` or `/sys` to filter the output, and press `[ENTER]` twice
(once to exit the filter, once to enter the namespace)
- We now see the pods in `kube-system`!
---
## Interacting with pods
- `l` to view logs
- `d` to describe
- `s` to get a shell (won't work if `sh` isn't available in the container image)
- `e` to edit
- `shift-f` to define port forwarding
- `ctrl-k` to kill
- `[ESC]` to get out or get back
---
## Quick navigation between namespaces
- On top of the screen, we should see shortcuts like this:
```
<0> all
<1> kube-system
<2> default
```
- Pressing the corresponding number switches to that namespace
(or shows resources across all namespaces with `0`)
- Locate a namespace with a copy of DockerCoins, and go there!
---
## Interacting with Deployments
- View Deployments (type `:` `deploy` `[ENTER]`)
- Select e.g. `worker`
- Scale it with `s`
- View its aggregated logs with `l`
---
## Exit
- Exit at any time with `Ctrl-C`
- k9s will "remember" where you were
(and go back there next time you run it)
---
## Pros
- Very convenient to navigate through resources
(hopping from a deployment, to its pod, to another namespace, etc.)
- Very convenient to quickly view logs of e.g. init containers
- Very convenient to get a (quasi) realtime view of resources
(if we use `watch kubectl get` a lot, we will probably like k9s)
---
## Cons
- Doesn't promote automation / scripting
(if you repeat the same things over and over, there is a scripting opportunity)
- Not all features are available
(e.g. executing arbitrary commands in containers)
---
## Conclusion
Try it out, and see if it makes you more productive!
???
:EN:- The k9s TUI
:FR:- L'interface texte k9s

686
slides/k8s/kubebuilder.md Normal file
View File

@@ -0,0 +1,686 @@
# Kubebuilder
- Writing a quick and dirty operator is (relatively) easy
- Doing it right, however ...
--
- We need:
- proper CRD with schema validation
- controller performing a reconciliation loop
- manage errors, retries, dependencies between resources
- maybe webhooks for admission and/or conversion
😱
---
## Frameworks
- There are a few frameworks available out there:
- [kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
([book](https://book.kubebuilder.io/)):
go-centric, very close to Kubernetes' core types
- [operator-framework](https://operatorframework.io/):
higher level; also supports Ansible and Helm
- [KUDO](https://kudo.dev/):
declarative operators written in YAML
- [KOPF](https://kopf.readthedocs.io/en/latest/):
operators in Python
- ...
---
## Kubebuilder workflow
- Kubebuilder will create scaffolding for us
(Go stubs for types and controllers)
- Then we edit these types and controllers files
- Kubebuilder generates CRD manifests from our type definitions
(and regenerates the manifests whenever we update the types)
- It also gives us tools to quickly run the controller against a cluster
(not necessarily *on* the cluster)
---
## Our objective
- We're going to implement a *useless machine*
[basic example](https://www.youtube.com/watch?v=aqAUmgE3WyM)
|
[playful example](https://www.youtube.com/watch?v=kproPsch7i0)
|
[advanced example](https://www.youtube.com/watch?v=Nqk_nWAjBus)
|
[another advanced example](https://www.youtube.com/watch?v=eLtUB8ncEnA)
- A machine manifest will look like this:
```yaml
kind: Machine
apiVersion: useless.container.training/v1alpha1
metadata:
name: machine-1
spec:
# Our useless operator will change that to "down"
switchPosition: up
```
- Each time we change the `switchPosition`, the operator will move it back to `down`
(This is inspired by the
[uselessoperator](https://github.com/tilt-dev/uselessoperator)
written by
[L Körbes](https://twitter.com/ellenkorbes).
Highly recommend!💯)
---
class: extra-details
## Local vs remote
- Building Go code can be a little bit slow on our modest lab VMs
- It will typically be *much* faster on a local machine
- All the demos and labs in this section will run fine either way!
---
## Preparation
- Install Go
(on our VMs: `sudo snap install go --classic`)
- Install kubebuilder
([get a release](https://github.com/kubernetes-sigs/kubebuilder/releases/), untar, move the `kubebuilder` binary to the `$PATH`)
- Initialize our workspace:
```bash
mkdir useless
cd useless
go mod init container.training/useless
kubebuilder init --domain container.training
```
---
## Create scaffolding
- Create a type and corresponding controller:
```bash
kubebuilder create api --group useless --version v1alpha1 --kind Machine
```
- Answer `y` to both questions
- Then we need to edit the type that just got created!
---
## Edit type
Edit `api/v1alpha1/machine_types.go`.
Add the `switchPosition` field in the `spec` structure:
```go
// MachineSpec defines the desired state of Machine
type MachineSpec struct {
	// Position of the switch on the machine, for instance up or down.
	SwitchPosition string `json:"switchPosition,omitempty"`
}
```
---
## Go markers
We can use Go *marker comments* to give `controller-gen` extra details about how to handle our type, for instance:
```
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.switchPosition",name=Position,type=string
```
(See
[marker syntax](https://book.kubebuilder.io/reference/markers.html),
[CRD generation](https://book.kubebuilder.io/reference/markers/crd.html),
[CRD validation](https://book.kubebuilder.io/reference/markers/crd-validation.html)
)
---
class: extra-details
## Using CRD v1
- By default, kubebuilder generates v1alpha1 CRDs
- If we want to generate v1 CRDs:
- edit `Makefile`
- update `crd:crdVersions=v1`
---
## Installing the CRD
After making these changes, we can run `make install`.
This will build the Go code, but also:
- generate the CRD manifest
- and apply the manifest to the cluster
---
## Creating a machine
Edit `config/samples/useless_v1alpha1_machine.yaml`:
```yaml
kind: Machine
apiVersion: useless.container.training/v1alpha1
metadata:
name: machine-1
spec:
# Our useless operator will change that to "down"
switchPosition: up
```
... and apply it to the cluster.
---
## Designing the controller
- Our controller needs to:
- notice when a `switchPosition` is not `down`
- move it to `down` when that happens
- Later, we can add fancy improvements (wait a bit before moving it, etc.)
---
## Reconciler logic
- Kubebuilder will call our *reconciler* when necessary
- When necessary = when changes happen ...
- on our resource
- or resources that it *watches* (related resources)
- After "doing stuff", the reconciler can return ...
- `ctrl.Result{},nil` = all is good
- `ctrl.Result{Requeue...},nil` = all is good, but call us back in a bit
- `ctrl.Result{},err` = something's wrong, try again later
---
## Loading an object
Open `controllers/machine_controller.go` and add that code in the `Reconcile` method:
```go
var machine uselessv1alpha1.Machine
if err := r.Get(ctx, req.NamespacedName, &machine); err != nil {
log.Info("error getting object")
return ctrl.Result{}, err
}
r.Log.Info(
"reconciling",
"machine", req.NamespacedName,
"switchPosition", machine.Spec.SwitchPosition,
)
```
---
## Running the controller
Our controller is not done yet, but let's try what we have right now!
This will compile the controller and run it:
```
make run
```
Then:
- create a machine
- change the `switchPosition`
- delete the machine
--
🤔
---
## `IgnoreNotFound`
When we are called for object deletion, the object has *already* been deleted.
(Unless we're using finalizers, but that's another story.)
When we return `err`, the controller will try to access the object ...
... We need to tell it to *not* do that.
Don't just return `err`; instead, pass it through `client.IgnoreNotFound`:
```go
return ctrl.Result{}, client.IgnoreNotFound(err)
```
Update the code, `make run` again, create/change/delete again.
--
🎉
---
## Updating the machine
Let's try to update the machine like this:
```go
if machine.Spec.SwitchPosition != "down" {
machine.Spec.SwitchPosition = "down"
if err := r.Update(ctx, &machine); err != nil {
log.Info("error updating switch position")
return ctrl.Result{}, client.IgnoreNotFound(err)
}
}
```
Again - update, `make run`, test.
---
## Spec vs Status
- Spec = desired state
- Status = observed state
- If Status is lost, the controller should be able to reconstruct it
(maybe with degraded behavior in the meantime)
- Status will almost always be a sub-resource
(so that it can be updated separately "cheaply")
---
class: extra-details
## Spec vs Status (in depth)
- The `/status` subresource is handled differently by the API server
- Updates to `/status` don't alter the rest of the object
- Conversely, updates to the object ignore changes in the status
(See [the docs](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#status-subresource) for the fine print.)
---
## "Improving" our controller
- We want to wait a few seconds before flipping the switch
- Let's add the following line of code to the controller:
```go
time.Sleep(5 * time.Second)
```
- `make run`, create a few machines, observe what happens
--
💡 Concurrency!
---
## Controller logic
- Our controller shouldn't block (think "event loop")
- There is a queue of objects that need to be reconciled
- We can ask to be put back on the queue for later processing
- When we need to block (wait for something to happen), two options:
- ask for a *requeue* ("call me back later")
- yield because we know we will be notified by another resource
---
## To requeue ...
`return ctrl.Result{RequeueAfter: 1 * time.Second}`
- That means: "try again in 1 second, and I will check if progress was made"
- This *does not* guarantee that we will be called exactly 1 second later:
- we might be called before (if other changes happen)
- we might be called after (if the controller is busy with other objects)
- If we are waiting for another resource to change, there is an even better way!
---
## ... or not to requeue
`return ctrl.Result{}, nil`
- That means: "no need to set an alarm; we'll be notified some other way"
- Use this if we are waiting for another resource to update
(e.g. a LoadBalancer to be provisioned, a Pod to be ready...)
- For this to work, we need to set a *watch* (more on that later)
---
## "Improving" our controller, take 2
- Let's store in the machine status the moment when we saw it
```go
// +kubebuilder:printcolumn:JSONPath=".status.seenAt",name=Seen,type=date
type MachineStatus struct {
	// Time at which the machine was noticed by our controller.
	SeenAt *metav1.Time `json:"seenAt,omitempty"`
}
```
Note: `date` fields don't display timestamps in the future.
(That's why for this example it's simpler to use `seenAt` rather than `changeAt`.)
---
## Set `seenAt`
Let's add the following block in our reconciler:
```go
if machine.Status.SeenAt == nil {
now := metav1.Now()
machine.Status.SeenAt = &now
if err := r.Status().Update(ctx, &machine); err != nil {
log.Info("error updating status.seenAt")
return ctrl.Result{}, client.IgnoreNotFound(err)
}
return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
}
```
(If needed, add `metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"` to our imports.)
---
## Use `seenAt`
Our switch-position-changing code can now become:
```go
if machine.Spec.SwitchPosition != "down" {
now := metav1.Now()
changeAt := machine.Status.SeenAt.Time.Add(5 * time.Second)
if now.Time.After(changeAt) {
machine.Spec.SwitchPosition = "down"
if err := r.Update(ctx, &machine); err != nil {
log.Info("error updating switch position")
return ctrl.Result{}, client.IgnoreNotFound(err)
}
}
}
```
`make run`, create a few machines, tweak their switches.
---
## Owner and dependents
- Next, let's see how to have relationships between objects!
- We will now have two kinds of objects: machines, and switches
- Machines should have *at least* one switch, possibly *multiple ones*
- The position will now be stored in the switch, not the machine
- The machine will also expose the combined state of the switches
- The switches will be tied to their machine through a label
(See next slide for an example)
---
## Switches and machines
```
[jp@hex ~]$ kubectl get machines
NAME SWITCHES POSITIONS
machine-cz2vl 3 ddd
machine-vf4xk 1 d
[jp@hex ~]$ kubectl get switches --show-labels
NAME POSITION SEEN LABELS
switch-6wmjw down machine=machine-cz2vl
switch-b8csg down machine=machine-cz2vl
switch-fl8dq down machine=machine-cz2vl
switch-rc59l down machine=machine-vf4xk
```
(The field `status.positions` shows the first letter of the `position` of each switch.)
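One possible way to compute that field (this helper is hypothetical; the exercise leaves the implementation up to you):

```go
package main

import "fmt"

// positionsSummary builds the status.positions string shown by
// `kubectl get machines`: the first letter of each switch position.
func positionsSummary(positions []string) string {
	out := ""
	for _, p := range positions {
		if p != "" {
			out += p[:1]
		}
	}
	return out
}

func main() {
	fmt.Println(positionsSummary([]string{"down", "down", "down"})) // ddd
	fmt.Println(positionsSummary([]string{"down"}))                 // d
}
```

In the real controller, the input slice would come from listing the machine's switches (see the `List` example a few slides down) and reading each `Spec.Position`.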
---
## Tasks
Create the new resource type (but don't create a controller):
```bash
kubebuilder create api --group useless --version v1alpha1 --kind Switch
```
Update `machine_types.go` and `switch_types.go`.
Implement the logic so that the controller flips all switches down immediately.
Then change it so that a given machine doesn't flip more than one switch every 5 seconds.
See next slides for hints!
---
## Listing objects
We can use the `List` method with filters:
```go
var switches uselessv1alpha1.SwitchList
if err := r.List(ctx, &switches,
client.InNamespace(req.Namespace),
client.MatchingLabels{"machine": req.Name},
); err != nil {
log.Error(err, "unable to list switches of the machine")
return ctrl.Result{}, client.IgnoreNotFound(err)
}
log.Info("Found switches", "switches", switches)
```
---
## Creating objects
We can use the `Create` method to create a new object:
```go
sw := uselessv1alpha1.Switch{
TypeMeta: metav1.TypeMeta{
APIVersion: uselessv1alpha1.GroupVersion.String(),
Kind: "Switch",
},
ObjectMeta: metav1.ObjectMeta{
GenerateName: "switch-",
Namespace: machine.Namespace,
Labels: map[string]string{"machine": machine.Name},
},
Spec: uselessv1alpha1.SwitchSpec{
Position: "down",
},
}
if err := r.Create(ctx, &sw); err != nil { ...
```
---
## Watches
- Our controller will correctly flip switches when it starts
- It will also react to machine updates
- But it won't react if we directly touch the switches!
- By default, it only monitors machines, not switches
- We need to tell it to watch switches
- We also need to tell it how to map a switch to its machine
---
## Mapping a switch to its machine
Define the following helper function:
```go
func (r *MachineReconciler) machineOfSwitch(obj handler.MapObject) []ctrl.Request {
r.Log.Info("mos", "obj", obj)
return []ctrl.Request{
ctrl.Request{
NamespacedName: types.NamespacedName{
Name: obj.Meta.GetLabels()["machine"],
Namespace: obj.Meta.GetNamespace(),
},
},
}
}
```
---
## Telling the controller to watch switches
Update the `SetupWithManager` method in the controller:
```go
func (r *MachineReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&uselessv1alpha1.Machine{}).
Owns(&uselessv1alpha1.Switch{}).
Watches(
&source.Kind{Type: &uselessv1alpha1.Switch{}},
&handler.EnqueueRequestsFromMapFunc{
ToRequests: handler.ToRequestsFunc(r.machineOfSwitch),
}).
Complete(r)
}
```
After this, our controller should now react to switch changes.
---
## Bonus points
- Handle "scale down" of a machine (by deleting extraneous switches)
- Automatically delete switches when a machine is deleted
(ideally, using ownership information)
- Test corner cases (e.g. changing a switch label)
---
## Acknowledgements
- Useless Operator, by [L Körbes](https://twitter.com/ellenkorbes)
[code](https://github.com/tilt-dev/uselessoperator)
|
[video (EN)](https://www.youtube.com/watch?v=85dKpsFFju4)
|
[video (PT)](https://www.youtube.com/watch?v=Vt7Eg4wWNDw)
- Zero To Operator, by [Solly Ross](https://twitter.com/directxman12)
[code](https://pres.metamagical.dev/kubecon-us-2019/code)
|
[video](https://www.youtube.com/watch?v=KBTXBUVNF2I)
|
[slides](https://pres.metamagical.dev/kubecon-us-2019/)
- The [kubebuilder book](https://book.kubebuilder.io/)
???
:EN:- Implementing an operator with kubebuilder
:FR:- Implémenter un opérateur avec kubebuilder

View File

@@ -128,6 +128,36 @@ class: extra-details
---
class: pic
![Overview of the three Kubernetes network layers](images/k8s-net-0-overview.svg)
---
class: pic
![Pod-to-pod network](images/k8s-net-1-pod-to-pod.svg)
---
class: pic
![Pod-to-service network](images/k8s-net-2-pod-to-svc.svg)
---
class: pic
![Network policies](images/k8s-net-3-netpol.svg)
---
class: pic
![View with all the layers again](images/k8s-net-4-overview.svg)
---
class: extra-details
## Even more moving parts

637
slides/k8s/kyverno.md Normal file
View File

@@ -0,0 +1,637 @@
# Policy Management with Kyverno
- The Kubernetes permission management system is very flexible ...
- ... But it can't express *everything!*
- Examples:
- forbid using `:latest` image tag
- enforce that each Deployment, Service, etc. has an `owner` label
<br/>(except in e.g. `kube-system`)
- enforce that each container has at least a `readinessProbe` healthcheck
- How can we address that, and express these more complex *policies?*
---
## Admission control
- The Kubernetes API server provides a generic mechanism called *admission control*
- Admission controllers will examine each write request, and can:
- approve/deny it (for *validating* admission controllers)
- additionally *update* the object (for *mutating* admission controllers)
- These admission controllers can be:
- plug-ins built into the Kubernetes API server
<br/>(selectively enabled/disabled by e.g. command-line flags)
- webhooks registered dynamically with the Kubernetes API server
---
## What's Kyverno?
- Policy management solution for Kubernetes
- Open source (https://github.com/kyverno/kyverno/)
- Compatible with all clusters
(it doesn't require reconfiguring the control plane, enabling feature gates...)
- We don't endorse / support it in a particular way, but we think it's cool
- It's not the only solution!
(see e.g. [Open Policy Agent](https://www.openpolicyagent.org/docs/v0.12.2/kubernetes-admission-control/))
---
## What can Kyverno do?
- *Validate* resource manifests
(accept/deny depending on whether they conform to our policies)
- *Mutate* resources when they get created or updated
(to add/remove/change fields on the fly)
- *Generate* additional resources when a resource gets created
(e.g. when namespace is created, automatically add quotas and limits)
- *Audit* existing resources
(warn about resources that violate certain policies)
---
## How does it do it?
- Kyverno is implemented as a *controller* or *operator*
- It typically runs as a Deployment on our cluster
- Policies are defined as *custom resource definitions*
- They are implemented with a set of *dynamic admission control webhooks*
--
🤔
--
- Let's unpack that!
---
## Custom resource definitions
- When we install Kyverno, it will register new resource types:
- Policy and ClusterPolicy (per-namespace and cluster-scope policies)
- PolicyViolation and ClusterPolicyViolation (used in audit mode)
- GenerateRequest (used internally when generating resources asynchronously)
- We will be able to do e.g. `kubectl get policyviolations --all-namespaces`
(to see policy violations across all namespaces)
- Policies will be defined in YAML and registered/updated with e.g. `kubectl apply`
---
## Dynamic admission control webhooks
- When we install Kyverno, it will register a few webhooks for its use
(by creating ValidatingWebhookConfiguration and MutatingWebhookConfiguration resources)
- All subsequent resource modifications are submitted to these webhooks
(creations, updates, deletions)
---
## Controller
- When we install Kyverno, it creates a Deployment (and therefore, a Pod)
- That Pod runs the server used by the webhooks
- It also runs a controller that will:
- run optional checks in the background (and generate PolicyViolation objects)
- process GenerateRequest objects asynchronously
---
## Kyverno in action
- We're going to install Kyverno on our cluster
- Then, we will use it to implement a few policies
---
class: extra-details
## Kyverno versions
- We're going to use version 1.2
- Version 1.3.0-rc came out in November 2020
- It introduces a few changes
(e.g. PolicyViolations are now PolicyReports)
- Expect this to change in the near future!
---
## Installing Kyverno
- Kyverno can be installed with a (big) YAML manifest
- ... or with Helm charts (which allow customizing a few things)
.exercise[
- Install Kyverno:
```bash
kubectl apply -f https://raw.githubusercontent.com/kyverno/kyverno\
/v1.2.1/definitions/release/install.yaml
```
]
---
## Kyverno policies in a nutshell
- Which resources does it *select?*
- can specify resources to *match* and/or *exclude*
- can specify *kinds* and/or *selector* and/or users/roles doing the action
- Which operation should be done?
- validate, mutate, or generate
- For validation, whether it should *enforce* or *audit* failures
- Operation details (what exactly to validate, mutate, or generate)
---
## Immutable primary colors, take 1
- Our pods can have an optional `color` label
- If the label exists, it *must* be `red`, `green`, or `blue`
- One possible approach:
- *match* all pods that have a `color` label that is not `red`, `green`, or `blue`
- *deny* these pods
- We could also *match* all pods, then *deny* with a condition
---
## Testing without the policy
- First, let's create a pod with an "invalid" label
(while we still can!)
- We will use this later
.exercise[
- Create a pod:
```bash
kubectl run test-color-0 --image=nginx
```
- Apply a color label:
```bash
kubectl label pod test-color-0 color=purple
```
]
---
## Our first Kyverno policy
.small[
```yaml
@@INCLUDE[k8s/kyverno-pod-color-1.yaml]
```
]
---
## Load and try the policy
.exercise[
- Load the policy:
```bash
kubectl apply -f ~/container.training/k8s/kyverno-pod-color-1.yaml
```
- Create a pod:
```bash
kubectl run test-color-1 --image=nginx
```
- Try to apply a few color labels:
```bash
kubectl label pod test-color-1 color=purple
kubectl label pod test-color-1 color=red
kubectl label pod test-color-1 color-
```
]
---
## Immutable primary colors, take 2
- New rule: once a `color` label has been added, it cannot be changed
(i.e. if `color=red`, we can't change it to `color=blue`)
- Our approach:
- *match* all pods
- *deny* these pods if their `color` label has changed
- "Old" and "new" versions of the pod can be referenced through
`{{ request.oldObject }}` and `{{ request.object }}`
- Our label is available through `{{ request.object.metadata.labels.color }}`
- Again, other approaches are possible!
---
## Our second Kyverno policy
.small[
```yaml
@@INCLUDE[k8s/kyverno-pod-color-2.yaml]
```
]
---
## Load and try the policy
.exercise[
- Load the policy:
```bash
kubectl apply -f ~/container.training/k8s/kyverno-pod-color-2.yaml
```
- Create a pod:
```bash
kubectl run test-color-2 --image=nginx
```
- Try to apply a few color labels:
```bash
kubectl label pod test-color-2 color=purple
kubectl label pod test-color-2 color=red
kubectl label pod test-color-2 color=blue --overwrite
```
]
---
## `background`
- What is this `background: false` option, and why do we need it?
--
- Admission controllers are only invoked when we change an object
- Existing objects are not affected
(e.g. if we have a pod with `color=pink` *before* installing our policy)
- Kyverno can also run checks in the background, and report violations
(we'll see later how they are reported)
- `background: false` disables that
--
- Alright, but ... *why* do we need it?
---
## Accessing `AdmissionRequest` context
- In this specific policy, we want to prevent an *update*
(as opposed to a mere *create* operation)
- We want to compare the *old* and *new* version
(to check if a specific label was removed)
- The `AdmissionRequest` object has `object` and `oldObject` fields
(the `AdmissionRequest` object is the thing that gets submitted to the webhook)
- Kyverno lets us access the `AdmissionRequest` object
(and in particular, `{{ request.object }}` and `{{ request.oldObject }}`)
--
- Alright, but ... what's the link with `background: false`?
---
## `{{ request }}`
- The `{{ request }}` context is only available when there is an `AdmissionRequest`
- When a resource is "at rest", there is no `{{ request }}` (and no old/new)
- Therefore, a policy that uses `{{ request }}` cannot validate existing objects
(it can only be used when an object is actually created/updated/deleted)
---
## Immutable primary colors, take 3
- New rule: once a `color` label has been added, it cannot be removed
- Our approach:
- *match* all pods that *do not* have a `color` label
- *deny* these pods if they had a `color` label before
- "before" can be referenced through `{{ request.oldObject }}`
- Again, other approaches are possible!
---
## Our third Kyverno policy
.small[
```yaml
@@INCLUDE[k8s/kyverno-pod-color-3.yaml]
```
]
---
## Load and try the policy
.exercise[
- Load the policy:
```bash
kubectl apply -f ~/container.training/k8s/kyverno-pod-color-3.yaml
```
- Create a pod:
```bash
kubectl run test-color-3 --image=nginx
```
- Try to apply a few color labels:
```bash
kubectl label pod test-color-3 color=purple
kubectl label pod test-color-3 color=red
kubectl label pod test-color-3 color-
```
]
---
## Background checks
- What about the `test-color-0` pod that we created initially?
(remember: we did set `color=purple`)
- Kyverno generated a ClusterPolicyViolation to indicate it
.exercise[
- Check that the pod still has an "invalid" color:
```bash
kubectl get pods -L color
```
- List ClusterPolicyViolations:
```bash
kubectl get clusterpolicyviolations
kubectl get cpolv
```
]
---
## Generating objects
- When we create a Namespace, we also want to automatically create:
- a LimitRange (to set default CPU and RAM requests and limits)
- a ResourceQuota (to limit the resources used by the namespace)
- a NetworkPolicy (to isolate the namespace)
- We can do that with a Kyverno policy with a *generate* action
(it is mutually exclusive with the *validate* action)
---
## Overview
- The *generate* action must specify:
- the `kind` of resource to generate
- the `name` of the resource to generate
- its `namespace`, when applicable
- *either* a `data` structure, to be used to populate the resource
- *or* a `clone` reference, to copy an existing resource
Note: the `apiVersion` field appears to be optional.
---
## In practice
- We will use the policy @@LINK[k8s/kyverno-namespace-setup.yaml]
- We need to generate 3 resources, so we have 3 rules in the policy
- Excerpt:
```yaml
generate:
kind: LimitRange
name: default-limitrange
namespace: "{{request.object.metadata.name}}"
data:
spec:
limits:
```
- Note that we have to specify the `namespace`
(and we infer it from the name of the resource being created, i.e. the Namespace)
---
## Lifecycle
- After generated objects have been created, we can change them
(Kyverno won't update them)
- Except if we use `clone` together with the `synchronize` flag
(in that case, Kyverno will watch the cloned resource)
- This is convenient for e.g. ConfigMaps shared between Namespaces
- Objects are generated only at *creation* (not when updating an old object)
---
## Asynchronous creation
- Kyverno creates resources asynchronously
(by creating a GenerateRequest resource first)
- This is useful when the resource cannot be created
(because of permissions or dependency issues)
- Kyverno will periodically loop through the pending GenerateRequests
- Once the resource is created, the GenerateRequest is marked as Completed
---
## Footprint
- 5 CRDs: 4 user-facing, 1 internal (GenerateRequest)
- 5 webhooks
- 1 Service, 1 Deployment, 1 ConfigMap
- Internal resources (GenerateRequest) "parked" in a Namespace
- Kyverno packs a lot of features in a small footprint
---
## Strengths
- Kyverno is very easy to install
(it's hard to get easier than one `kubectl apply -f`)
- The setup of the webhooks is fully automated
(including certificate generation)
- It offers both namespaced and cluster-scope policies
(same thing for the policy violations)
- The policy language leverages existing constructs
(e.g. `matchExpressions`)
---
## Caveats
- By default, the webhook failure policy is `Ignore`
(meaning that policies can potentially be evaded if we can DoS the webhook)
- Advanced policies (with conditionals) have unique, exotic syntax:
```yaml
spec:
=(volumes):
=(hostPath):
path: "!/var/run/docker.sock"
```
- The `{{ request }}` context is powerful, but difficult to validate
(Kyverno can't know ahead of time how it will be populated)
- Policy validation is difficult
---
class: extra-details
## Pods created by controllers
- When e.g. a ReplicaSet or DaemonSet creates a pod, it "owns" it
(the ReplicaSet or DaemonSet is listed in the Pod's `.metadata.ownerReferences`)
- Kyverno treats these Pods differently
- If my understanding of the code is correct (big *if*):
- it skips validation for "owned" Pods
- instead, it validates their controllers
- this way, Kyverno can report errors on the controller instead of the pod
- This can be a bit confusing when testing policies on such pods!
???
:EN:- Policy Management with Kyverno
:FR:- Gestion de *policies* avec Kyverno

View File

@@ -214,7 +214,7 @@ class: extra-details
- Label *values* are up to 63 characters, with the same restrictions
- Annotations *values* can have arbitrary characters (yes, even binary)
- Maximum length isn't defined

View File

@@ -34,11 +34,11 @@
- Download the `kubectl` binary from one of these links:
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.19.2/bin/linux/amd64/kubectl)
|
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.19.2/bin/darwin/amd64/kubectl)
|
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.19.2/bin/windows/amd64/kubectl.exe)
- On Linux and macOS, make the binary executable with `chmod +x kubectl`

View File

@@ -14,7 +14,7 @@
- The Docker Engine is installed (and running) on these machines
- The Kubernetes binaries are installed, but nothing is running
- We will use `kubenet1` to run the control plane

View File

@@ -427,26 +427,34 @@ troubleshoot easily, without having to poke holes in our firewall.
---
## Tools and resources
- As always, the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) is a good starting point
- [Cilium Network Policy Editor](https://editor.cilium.io/)
- [Tufin Network Policy Viewer](https://orca.tufin.io/netpol/)
- Two resources by [Ahmet Alp Balkan](https://ahmet.im/):
- a [very good talk about network policies](https://www.youtube.com/watch?list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb&v=3gGpMmYeEO8) at KubeCon North America 2017
- a repository of [ready-to-use recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for network policies
---
## Documentation
- As always, the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) is a good starting point
- The API documentation has a lot of detail about the format of various objects: <!-- ##VERSION## -->
- [NetworkPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#networkpolicy-v1-networking-k8s-io)
- [NetworkPolicySpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#networkpolicyspec-v1-networking-k8s-io)
- [NetworkPolicyIngressRule](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#networkpolicyingressrule-v1-networking-k8s-io)
- etc.
???
:EN:- Isolating workloads with Network Policies


@@ -222,9 +222,9 @@ class: extra-details
|
[Simple example](https://medium.com/faun/writing-your-first-kubernetes-operator-8f3df4453234)
- Kubernetes Operator Pythonic Framework (KOPF)
[GitHub](https://github.com/nolar/kopf)
|
[Docs](https://kopf.readthedocs.io/)
|
@@ -240,6 +240,12 @@ class: extra-details
|
[Zookeeper example](https://github.com/kudobuilder/frameworks/tree/master/repo/stable/zookeeper)
- Kubebuilder (Go, very close to the Kubernetes API codebase)
[GitHub](https://github.com/kubernetes-sigs/kubebuilder)
|
[Book](https://book.kubebuilder.io/)
---
## Validation


@@ -1,19 +1,5 @@
# Operators
- Operators are one of the many ways to extend Kubernetes
- We will define operators
- We will see how they work
- We will install a specific operator (for ElasticSearch)
- We will use it to provision an ElasticSearch cluster
---
## What are operators?
*An operator represents **human operational knowledge in software,**
<br/>
to reliably manage an application.
@@ -119,455 +105,6 @@ Examples:
---
## One operator in action
- We will install [Elastic Cloud on Kubernetes](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html), an ElasticSearch operator
- This operator requires PersistentVolumes
- We will install Rancher's [local path storage provisioner](https://github.com/rancher/local-path-provisioner) to automatically create these
- Then, we will create an ElasticSearch resource
- The operator will detect that resource and provision the cluster
---
## Installing a Persistent Volume provisioner
(This step can be skipped if you already have a dynamic volume provisioner.)
- This provisioner creates Persistent Volumes backed by `hostPath`
(local directories on our nodes)
- It doesn't require anything special ...
- ... But losing a node = losing the volumes on that node!
.exercise[
- Install the local path storage provisioner:
```bash
kubectl apply -f ~/container.training/k8s/local-path-storage.yaml
```
]
---
## Making sure we have a default StorageClass
- The ElasticSearch operator will create StatefulSets
- These StatefulSets will instantiate PersistentVolumeClaims
- These PVCs need to be explicitly associated with a StorageClass
- Or we need to tag a StorageClass to be used as the default one
.exercise[
- List StorageClasses:
```bash
kubectl get storageclasses
```
]
We should see the `local-path` StorageClass.
---
## Setting a default StorageClass
- This is done by adding an annotation to the StorageClass:
`storageclass.kubernetes.io/is-default-class: true`
.exercise[
- Tag the StorageClass so that it's the default one:
```bash
kubectl annotate storageclass local-path \
storageclass.kubernetes.io/is-default-class=true
```
- Check the result:
```bash
kubectl get storageclasses
```
]
Now, the StorageClass should have `(default)` next to its name.
---
## Install the ElasticSearch operator
- The operator provides:
- a few CustomResourceDefinitions
- a Namespace for its other resources
- a ValidatingWebhookConfiguration for type checking
- a StatefulSet for its controller and webhook code
- a ServiceAccount, ClusterRole, ClusterRoleBinding for permissions
- All these resources are grouped in a convenient YAML file
.exercise[
- Install the operator:
```bash
kubectl apply -f ~/container.training/k8s/eck-operator.yaml
```
]
---
## Check our new custom resources
- Let's see which CRDs were created
.exercise[
- List all CRDs:
```bash
kubectl get crds
```
]
This operator supports not only ElasticSearch, but also Kibana and APM. Cool!
---
## Create the `eck-demo` namespace
- For clarity, we will create everything in a new namespace, `eck-demo`
- This namespace is hard-coded in the YAML files that we are going to use
- We need to create that namespace
.exercise[
- Create the `eck-demo` namespace:
```bash
kubectl create namespace eck-demo
```
- Switch to that namespace:
```bash
kns eck-demo
```
]
---
class: extra-details
## Can we use a different namespace?
Yes, but then we need to update all the YAML manifests that we
are going to apply in the next slides.
The `eck-demo` namespace is hard-coded in these YAML manifests.
Why?
Because when defining a ClusterRoleBinding that references a
ServiceAccount, we have to indicate in which namespace the
ServiceAccount is located.
---
## Create an ElasticSearch resource
- We can now create a resource with `kind: ElasticSearch`
- The YAML for that resource will specify all the desired parameters:
- how many nodes we want
- image to use
- add-ons (kibana, cerebro, ...)
- whether to use TLS or not
- etc.
.exercise[
- Create our ElasticSearch cluster:
```bash
kubectl apply -f ~/container.training/k8s/eck-elasticsearch.yaml
```
]
---
## Operator in action
- Over the next minutes, the operator will create our ES cluster
- It will report our cluster status through the CRD
.exercise[
- Check the logs of the operator:
```bash
stern --namespace=elastic-system operator
```
<!--
```wait elastic-operator-0```
```tmux split-pane -v```
-->
- Watch the status of the cluster through the CRD:
```bash
kubectl get es -w
```
<!--
```longwait green```
```key ^C```
```key ^D```
```key ^C```
-->
]
---
## Connecting to our cluster
- It's not easy to use the ElasticSearch API from the shell
- But let's check at least if ElasticSearch is up!
.exercise[
- Get the ClusterIP of our ES instance:
```bash
kubectl get services
```
- Issue a request with `curl`:
```bash
curl http://`CLUSTERIP`:9200
```
]
We get an authentication error. Our cluster is protected!
---
## Obtaining the credentials
- The operator creates a user named `elastic`
- It generates a random password and stores it in a Secret
.exercise[
- Extract the password:
```bash
kubectl get secret demo-es-elastic-user \
-o go-template="{{ .data.elastic | base64decode }} "
```
- Use it to connect to the API:
```bash
curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200
```
]
We should see a JSON payload with the `"You Know, for Search"` tagline.
---
## Sending data to the cluster
- Let's send some data to our brand new ElasticSearch cluster!
- We'll deploy a filebeat DaemonSet to collect node logs
.exercise[
- Deploy filebeat:
```bash
kubectl apply -f ~/container.training/k8s/eck-filebeat.yaml
```
- Wait until some pods are up:
```bash
watch kubectl get pods -l k8s-app=filebeat
```
<!--
```wait Running```
```key ^C```
-->
- Check that a filebeat index was created:
```bash
curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200/_cat/indices
```
]
---
## Deploying an instance of Kibana
- Kibana can visualize the logs injected by filebeat
- The ECK operator can also manage Kibana
- Let's give it a try!
.exercise[
- Deploy a Kibana instance:
```bash
kubectl apply -f ~/container.training/k8s/eck-kibana.yaml
```
- Wait for it to be ready:
```bash
kubectl get kibana -w
```
<!--
```longwait green```
```key ^C```
-->
]
---
## Connecting to Kibana
- Kibana is automatically set up to connect to ElasticSearch
(this is arranged by the YAML that we're using)
- However, it will ask for authentication
- It's using the same user/password as ElasticSearch
.exercise[
- Get the NodePort allocated to Kibana:
```bash
kubectl get services
```
- Connect to it with a web browser
- Use the same user/password as before
]
---
## Setting up Kibana
After the Kibana UI loads, we need to click around a bit
.exercise[
- Pick "explore on my own"
- Click on "Use Elasticsearch data / Connect to your Elasticsearch index"
- Enter `filebeat-*` for the index pattern and click "Next step"
- Select `@timestamp` as time filter field name
- Click on "discover" (the small icon looking like a compass on the left bar)
- Play around!
]
---
## Scaling up the cluster
- At this point, we have only one node
- We are going to scale up
- But first, we'll deploy Cerebro, a UI for ElasticSearch
- This will let us see the state of the cluster, how indexes are sharded, etc.
---
## Deploying Cerebro
- Cerebro is stateless, so it's fairly easy to deploy
(one Deployment + one Service)
- However, it needs the address and credentials for ElasticSearch
- We prepared yet another manifest for that!
.exercise[
- Deploy Cerebro:
```bash
kubectl apply -f ~/container.training/k8s/eck-cerebro.yaml
```
- Lookup the NodePort number and connect to it:
```bash
kubectl get services
```
]
---
## Scaling up the cluster
- We can see on Cerebro that the cluster is "yellow"
(because our index is not replicated)
- Let's change that!
.exercise[
- Edit the ElasticSearch cluster manifest:
```bash
kubectl edit es demo
```
- Find the field `count: 1` and change it to 3
- Save and quit
<!--
```wait Please edit```
```keys /count:```
```key ^J```
```keys $r3:x```
```key ^J```
-->
]
---
## Deploying our apps with operators
- It is very simple to deploy with `kubectl create deployment` / `kubectl expose`
@@ -602,9 +139,9 @@ After the Kibana UI loads, we need to click around a bit
## Operators are not magic
- Look at this ElasticSearch resource definition:
@@LINK[k8s/eck-elasticsearch.yaml]
- What should happen if we flip the TLS flag? Twice?
@@ -619,7 +156,4 @@ But we need to know exactly the scenarios that they can handle.*
???
:EN:- Kubernetes operators
:EN:- Deploying ElasticSearch with ECK
:FR:- Les opérateurs
:FR:- Déployer ElasticSearch avec ECK


@@ -218,7 +218,7 @@ We need to:
---
## Step 2: add the `prometheus-community` repo
- This will add the repository containing the chart for Prometheus
@@ -230,7 +230,8 @@ We need to:
- Add the repository:
```bash
helm repo add prometheus-community \
https://prometheus-community.github.io/helm-charts
```
]
@@ -247,7 +248,7 @@ We need to:
- Install Prometheus on our cluster:
```bash
helm upgrade prometheus prometheus-community/prometheus \
--install \
--namespace kube-system \
--set server.service.type=NodePort \
@@ -266,17 +267,19 @@ class: extra-details
## Explaining all the Helm flags
- `helm upgrade prometheus` → upgrade the release named `prometheus` ...
<br/>
(a "release" is a unique name given to an app deployed with Helm)
- `prometheus-community/...` → of a chart located in the `prometheus-community` repo ...
- `.../prometheus` → in that repo, get the chart named `prometheus` ...
- `--install` → if the app doesn't exist, create it ...
- `--namespace kube-system` → put it in that specific namespace ...
- ... and set the following *values* when rendering the chart's templates:
- `server.service.type=NodePort` → expose the Prometheus server with a NodePort
- `server.service.nodePort=30090` → set the specific NodePort number to use


@@ -40,6 +40,112 @@
---
class: extra-details
## CPU limits implementation details
- A container with a CPU limit will be "rationed" by the kernel
- Every `cfs_period_us`, it will receive a CPU quota, like an "allowance"
(that interval defaults to 100ms)
- Once it has used its quota, it will be stalled until the next period
- This can easily result in throttling for bursty workloads
(see details on next slide)
---
class: extra-details
## A bursty example
- Web service receives one request per minute
- Each request takes 1 second of CPU
- Average load: 1.7%
- Let's say we set a CPU limit of 10%
- This means CPU quotas of 10ms every 100ms
- Obtaining the quota for 1 second of CPU will take 10 seconds
- Observed latency will be 10 seconds (... actually 9.9s) instead of 1 second
(real-life scenarios will of course be less extreme, but they do happen!)
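The arithmetic above can be double-checked with a tiny shell computation (a sketch of the reasoning, not a measurement):

```shell
# 1 second of CPU work, with a 10% limit: a quota of 10ms per 100ms period
QUOTA_MS=10
PERIOD_MS=100
WORK_MS=1000
PERIODS=$(( WORK_MS / QUOTA_MS ))                       # 100 quota grants needed
LATENCY_MS=$(( (PERIODS - 1) * PERIOD_MS + QUOTA_MS ))  # work finishes 10ms into the last period
echo "observed latency: ${LATENCY_MS} ms"               # → observed latency: 9910 ms
```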
---
class: extra-details
## Multi-core scheduling details
- Each core gets a small share of the container's CPU quota
(this avoids locking and contention on the "global" quota for the container)
- By default, the kernel distributes that quota to CPUs in 5ms increments
(tunable with `kernel.sched_cfs_bandwidth_slice_us`)
- If a containerized process (or thread) uses up its local CPU quota:
*it gets more from the "global" container quota (if there's some left)*
- If it "yields" (e.g. sleeps for I/O) before using its local CPU quota:
*the quota is **soon** returned to the "global" container quota, **minus** 1ms*
---
class: extra-details
## Low quotas on machines with many cores
- The local CPU quota is not immediately returned to the global quota
- this reduces locking and contention on the global quota
- but this can cause starvation when many threads/processes become runnable
- That 1ms that "stays" on the local CPU quota is often useful
- if the thread/process becomes runnable, it can be scheduled immediately
- again, this reduces locking and contention on the global quota
- but if the thread/process doesn't become runnable, it is wasted!
- this can become a huge problem on machines with many cores
---
class: extra-details
## CPU limits in a nutshell
- Beware if you run small bursty workloads on machines with many cores!
("highly-threaded, user-interactive, non-cpu bound applications")
- Check the `nr_throttled` and `throttled_time` metrics in `cpu.stat`
- Possible solutions/workarounds:
- be generous with the limits
- make sure your kernel has the [appropriate patch](https://lkml.org/lkml/2019/5/17/581)
- use [static CPU manager policy](https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy)
For more details, check [this blog post](https://erickhun.com/posts/kubernetes-faster-services-no-cpu-limits/) or these ones ([part 1](https://engineering.indeedblog.com/blog/2019/12/unthrottled-fixing-cpu-limits-in-the-cloud/), [part 2](https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/)).
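To look for throttling on a node, something like this can help (a sketch; the cgroup mount point and layout vary across distros, runtimes, and cgroup versions):

```shell
# cgroups v1: per-cgroup throttling counters live in cpu.stat
CG=/sys/fs/cgroup/cpu
if [ -f "$CG/cpu.stat" ]; then
  grep -E 'nr_throttled|throttled_time' "$CG/cpu.stat"
else
  echo "no cgroups v1 cpu controller mounted at $CG (maybe cgroups v2?)"
fi
```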
---
## Exceeding memory limits
- Memory needs to be swapped out before being reclaimed
@@ -56,8 +162,6 @@
- Exceeding the memory limit will cause the container to be killed
---
## Limits vs requests
@@ -122,13 +226,17 @@ Each pod is assigned a QoS class (visible in `status.qosClass`).
- The semantics of memory and swap limits on Linux cgroups are complex
- With cgroups v1, it's not possible to disable swap for a cgroup
(the closest option is to [reduce "swappiness"](https://unix.stackexchange.com/questions/77939/turning-off-swapping-for-only-one-process-with-cgroups))
- It is possible with cgroups v2 (see the [kernel docs](https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html) and the [fbatx docs](https://facebookmicrosites.github.io/cgroup2/docs/memory-controller.html#using-swap))
- Cgroups v2 aren't widely deployed yet
- The architects of Kubernetes wanted to ensure that Guaranteed pods never swap
- The simplest solution was to disable swap entirely
---
@@ -518,6 +626,26 @@ services.nodeports 0 0
---
## Viewing a namespace limits and quotas
- `kubectl describe namespace` will display resource limits and quotas
.exercise[
- Try it out:
```bash
kubectl describe namespace default
```
- View limits and quotas for *all* namespaces:
```bash
kubectl describe namespace
```
]
---
## Additional resources
- [A Practical Guide to Setting Kubernetes Requests and Limits](http://blog.kubecost.com/blog/requests-and-limits/)


@@ -0,0 +1,310 @@
# Sealed Secrets
- Kubernetes provides the "Secret" resource to store credentials, keys, passwords ...
- Secrets can be protected with RBAC
(e.g. "you can write secrets, but only the app's service account can read them")
- [Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets) is an operator that lets us store secrets in code repositories
- It uses asymmetric cryptography:
- anyone can *encrypt* a secret
- only the cluster can *decrypt* a secret
---
## Principle
- The Sealed Secrets operator uses a *public* and a *private* key
- The public key is available publicly (duh!)
- We use the public key to encrypt secrets into a SealedSecret resource
- the SealedSecret resource can be stored in a code repo (even a public one)
- The SealedSecret resource is `kubectl apply`'d to the cluster
- The Sealed Secrets controller decrypts the SealedSecret with the private key
(this creates a classic Secret resource)
- Nobody else can decrypt secrets, since only the controller has the private key
---
## In action
- We will install the Sealed Secrets operator
- We will generate a Secret
- We will "seal" that Secret (generate a SealedSecret)
- We will load that SealedSecret on the cluster
- We will check that we now have a Secret
---
## Installing the operator
- The official installation is done through a single YAML file
- There is also a Helm chart if you prefer that
.exercise[
- Install the operator:
.small[
```bash
kubectl apply -f \
https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.13.1/controller.yaml
```
]
]
Note: it installs into `kube-system` by default.
If you change that, you will also need to inform `kubeseal` later on.
---
## Creating a Secret
- Let's create a normal (unencrypted) secret
.exercise[
- Create a Secret with a couple of API tokens:
```bash
kubectl create secret generic awskey \
--from-literal=AWS_ACCESS_KEY_ID=AKI... \
--from-literal=AWS_SECRET_ACCESS_KEY=abc123xyz... \
--dry-run=client -o yaml > secret-aws.yaml
```
]
- Note the `--dry-run` and `-o yaml`
(we're just generating YAML, not sending the secrets to our Kubernetes cluster)
- We could also write the YAML from scratch or generate it with other tools
---
## Creating a Sealed Secret
- This is done with the `kubeseal` tool
- It will obtain the public key from the cluster
.exercise[
- Create the Sealed Secret:
```bash
kubeseal < secret-aws.yaml > sealed-secret-aws.json
```
]
- The file `sealed-secret-aws.json` can be committed to your public repo
(if you prefer YAML output, you can add `-o yaml`)
---
## Using a Sealed Secret
- Now let's `kubectl apply` that Sealed Secret to the cluster
- The Sealed Secret controller will "unseal" it for us
.exercise[
- Check that our Secret doesn't exist (yet):
```bash
kubectl get secrets
```
- Load the Sealed Secret into the cluster:
```bash
kubectl create -f sealed-secret-aws.json
```
- Check that the secret is now available:
```bash
kubectl get secrets
```
]
---
## Tweaking secrets
- Let's see what happens if we try to rename the Secret
(or use it in a different namespace)
.exercise[
- Delete both the Secret and the SealedSecret
- Edit `sealed-secret-aws.json`
- Change the name of the secret, or its namespace
(both in the SealedSecret metadata and in the Secret template)
- `kubectl apply -f` the new JSON file and observe the results 🤔
]
---
## Sealed Secrets are *scoped*
- A SealedSecret cannot be renamed or moved to another namespace
(at least, not by default!)
- Otherwise, it would allow us to evade RBAC rules:
- if I can view Secrets in namespace `myapp` but not in namespace `yourapp`
- I could take a SealedSecret belonging to namespace `yourapp`
- ... and deploy it in `myapp`
- ... and view the resulting decrypted Secret!
- This can be changed with `--scope namespace-wide` or `--scope cluster-wide`
---
## Working offline
- We can obtain the public key from the server
(technically, as a PEM certificate)
- Then we can use that public key offline
(without contacting the server)
- Relevant commands:
`kubeseal --fetch-cert > seal.pem`
`kubeseal --cert seal.pem < secret.yaml > sealedsecret.json`
---
## Key rotation
- The controller generates new keys every month by default
- The keys are kept as TLS Secrets in the `kube-system` namespace
(named `sealed-secrets-keyXXXXX`)
- When keys are "rotated", old decryption keys are kept
(otherwise we can't decrypt previously-generated SealedSecrets)
---
## Key compromise
- If the *sealing* key (obtained with `--fetch-cert`) is compromised:
*we don't need to do anything (it's a public key!)*
- However, if the *unsealing* key (the TLS secret in `kube-system`) is compromised ...
*we need to:*
- rotate the key
- rotate the SealedSecrets that were encrypted with that key
<br/>
(as they are compromised)
---
## Rotating the key
- By default, new keys are generated every 30 days
- To force the generation of a new key "right now":
- obtain an RFC1123 timestamp with `date -R`
- edit Deployment `sealed-secrets-controller` (in `kube-system`)
- add `--key-cutoff-time=TIMESTAMP` to the command-line
- *Then*, rotate the SealedSecrets that were encrypted with it
(generate new Secrets, then encrypt them with the new key)
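The steps above could be sketched as follows (`--key-cutoff-time` is the project's flag; the patch path for the container arguments is an assumption, so verify it on your cluster before applying):

```shell
TIMESTAMP=$(date -R)    # RFC1123-style timestamp, e.g. "Sun, 28 Feb 2021 21:34:18 +0100"
echo "key cutoff: $TIMESTAMP"
# Append the flag to the controller's command line (requires a cluster; hypothetical args path):
# kubectl -n kube-system patch deployment sealed-secrets-controller --type=json \
#   -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--key-cutoff-time='"$TIMESTAMP"'"}]'
```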
---
## Discussion (the good)
- The footprint of the operator is rather small:
- only one CRD
- one Deployment, one Service
- a few RBAC-related objects
---
## Discussion (the less good)
- Events could be improved
- `no key to decrypt secret` when there is a name/namespace mismatch
- no event indicating that a SealedSecret was successfully unsealed
- Key rotation could be improved (how to find secrets corresponding to a key?)
- If the sealing keys are lost, it's impossible to unseal the SealedSecrets
(e.g. cluster reinstall)
- ... Which means that we need to back up the sealing keys
- ... Which means that we need to be super careful with these backups!
---
## Other approaches
- [Kamus](https://kamus.soluto.io/) ([git](https://github.com/Soluto/kamus)) offers "zero-trust" secrets
(the cluster cannot decrypt secrets; only the application can decrypt them)
- [Vault](https://learn.hashicorp.com/tutorials/vault/kubernetes-sidecar?in=vault/kubernetes) can do ... a lot
- dynamic secrets (generated on the fly for a consumer)
- certificate management
- integration outside of Kubernetes
- and much more!
???
:EN:- The Sealed Secrets Operator
:FR:- L'opérateur *Sealed Secrets*

slides/k8s/secrets.md Normal file

@@ -0,0 +1,289 @@
# Managing secrets
- Sometimes our code needs sensitive information:
- passwords
- API tokens
- TLS keys
- ...
- *Secrets* can be used for that purpose
- Secrets and ConfigMaps are very similar
---
## Similarities between ConfigMap and Secrets
- ConfigMap and Secrets are key-value maps
(a Secret can contain zero, one, or many key-value pairs)
- They can both be exposed with the downward API or volumes
- They can both be created with YAML or with a CLI command
(`kubectl create configmap` / `kubectl create secret`)
---
## ConfigMap and Secrets are different resources
- They can have different RBAC permissions
(e.g. the default `view` role can read ConfigMaps but not Secrets)
- They indicate a different *intent*:
*"You should use secrets for things which are actually secret like API keys,
credentials, etc., and use config map for not-secret configuration data."*
*"In the future there will likely be some differentiators for secrets like rotation or support for backing the secret API w/ HSMs, etc."*
(Source: [the author of both features](https://stackoverflow.com/a/36925553/580281))
---
## Secrets have an optional *type*
- The type indicates which keys must exist in the secrets, for instance:
`kubernetes.io/tls` requires `tls.crt` and `tls.key`
`kubernetes.io/basic-auth` requires `username` and `password`
`kubernetes.io/ssh-auth` requires `ssh-privatekey`
`kubernetes.io/dockerconfigjson` requires `.dockerconfigjson`
`kubernetes.io/service-account-token` requires `token`, `namespace`, `ca.crt`
(the whole list is in [the documentation](https://kubernetes.io/docs/concepts/configuration/secret/#secret-types))
- This is merely for our (human) convenience:
“Ah yes, this secret is a ...”
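For instance, a `kubernetes.io/basic-auth` Secret could look like this (a sketch with made-up name and values):

```yaml
kind: Secret
apiVersion: v1
metadata:
  name: db-credentials
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: "t0p-secret"
```

Per the rule above, the `username` and `password` keys must exist for this type.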
---
## Accessing private repositories
- Let's see how to access an image on a private registry!
- These images are protected by a username + password
(on some registries, it's token + password, but it's the same thing)
- To access a private image, we need to:
- create a secret
- reference that secret in a Pod template
- or reference that secret in a ServiceAccount used by a Pod
---
## In practice
- Let's try to access an image on a private registry!
- image = docker-registry.enix.io/jpetazzo/private:latest
- user = reader
- password = VmQvqdtXFwXfyy4Jb5DR
.exercise[
- Create a Deployment using that image:
```bash
kubectl create deployment priv \
--image=docker-registry.enix.io/jpetazzo/private
```
- Check that the Pod won't start:
```bash
kubectl get pods --selector=app=priv
```
]
---
## Creating a secret
- Let's create a secret with the information provided earlier
.exercise[
- Create the registry secret:
```bash
kubectl create secret docker-registry enix \
--docker-server=docker-registry.enix.io \
--docker-username=reader \
--docker-password=VmQvqdtXFwXfyy4Jb5DR
```
]
Why do we have to specify the registry address?
If we use multiple sets of credentials for different registries, it prevents leaking the credentials of one registry to *another* registry.
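Under the hood, the credentials end up in a `.dockerconfigjson` payload keyed by registry address, which is what prevents that leakage. A sketch of the decoded content (layout based on the Docker config file format):

```shell
# The "auth" field is base64("username:password")
AUTH=$(printf '%s' 'reader:VmQvqdtXFwXfyy4Jb5DR' | base64)
printf '{"auths":{"docker-registry.enix.io":{"username":"reader","auth":"%s"}}}\n' "$AUTH"
# The kubelet looks up credentials by the image's registry address,
# so these credentials are never presented to another registry.
```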
---
## Using the secret
- The first way to use a secret is to add it to `imagePullSecrets`
(in the `spec` section of a Pod template)
.exercise[
- Patch the `priv` Deployment that we created earlier:
```bash
kubectl patch deploy priv --patch='
spec:
template:
spec:
imagePullSecrets:
- name: enix
'
```
]
---
## Checking the results
.exercise[
- Confirm that our Pod can now start correctly:
```bash
kubectl get pods --selector=app=priv
```
]
---
## Another way to use the secret
- We can add the secret to the ServiceAccount
- This is convenient to automatically use credentials for *all* pods
(as long as they're using a specific ServiceAccount, of course)
.exercise[
- Add the secret to the ServiceAccount:
```bash
kubectl patch serviceaccount default --patch='
imagePullSecrets:
- name: enix
'
```
]
---
## Secrets are displayed with base64 encoding
- When shown with e.g. `kubectl get secrets -o yaml`, secrets are base64-encoded
- Likewise, when defining it with YAML, `data` values are base64-encoded
- Example:
```yaml
kind: Secret
apiVersion: v1
metadata:
name: pin-codes
data:
onetwothreefour: MTIzNA==
zerozerozerozero: MDAwMA==
```
- Keep in mind that this is just *encoding*, not *encryption*
- It is very easy to [automatically extract and decode secrets](https://medium.com/@mveritym/decoding-kubernetes-secrets-60deed7a96a3)
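For instance, the values from the example above can be decoded with nothing but a shell:

```shell
echo "MTIzNA==" | base64 -d   # → 1234
echo "MDAwMA==" | base64 -d   # → 0000
# Or straight from the cluster (needs kubectl and the Secret above):
# kubectl get secret pin-codes -o jsonpath='{.data.onetwothreefour}' | base64 -d
```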
---
class: extra-details
## Using `stringData`
- When creating a Secret, it is possible to bypass base64
- Just use `stringData` instead of `data`:
```yaml
kind: Secret
apiVersion: v1
metadata:
name: pin-codes
stringData:
onetwothreefour: "1234"
zerozerozerozero: "0000"
```
- It will show up as base64 if you `kubectl get -o yaml`
- No `type` was specified, so it defaults to `Opaque`
---
class: extra-details
## Encryption at rest
- It is possible to [encrypt secrets at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/)
- This means that secrets will be safe if someone ...
- steals our etcd servers
- steals our backups
- snoops e.g. the iSCSI link between our etcd servers and SAN
- However, starting the API server will now require human intervention
(to provide the decryption keys)
- This is only for extremely regulated environments (military, nation states...)
---
class: extra-details
## Immutable ConfigMaps and Secrets
- Since Kubernetes 1.19, it is possible to mark a ConfigMap or Secret as *immutable*
```bash
kubectl patch configmap xyz --patch='{"immutable": true}'
```
- This brings performance improvements when using lots of ConfigMaps and Secrets
(lots = tens of thousands)
- Once a ConfigMap or Secret has been marked as immutable:
- its content cannot be changed anymore
- the `immutable` field can't be changed back either
- the only way to change it is to delete and re-create it
- Pods using it will have to be re-created as well
???
:EN:- Handling passwords and tokens safely
:FR:- Manipulation de mots de passe, clés API etc.


@@ -24,8 +24,6 @@
- Gives you one cluster with one node
- Rather old version of Kubernetes
- Very easy to use if you are already using Docker Desktop:
go to Docker Desktop preferences and enable Kubernetes
@@ -54,25 +52,22 @@
## k3d in action
- Install `k3d` (e.g. get the binary from https://github.com/rancher/k3d/releases)
- Create a simple cluster:
```bash
k3d cluster create petitcluster
```
- Create a more complex cluster with a custom version:
```bash
k3d cluster create groscluster \
    --image rancher/k3s:v1.18.9-k3s1 --servers 3 --agents 5
```
(3 nodes for the control plane + 5 worker nodes)
- Clusters are automatically added to `.kube/config` file
---


@@ -94,28 +94,20 @@
## Building on the fly
- Conceptually, it is possible to build images on the fly from a repository
- Example: [ctr.run](https://ctr.run/)
(deprecated in August 2020, after being acquired by Datadog)
- It did allow something like this:
```bash
docker run ctr.run/github.com/jpetazzo/container.training/dockercoins/hasher
```
- No alternative yet
(free startup idea, anyone?)
???


@@ -30,7 +30,7 @@
- They are stopped in reverse order (from *R-1* to 0)
- Each pod knows its identity (i.e. which number it is in the set)
- Each pod can discover the IP address of the others easily

slides/k8s/tilt.md Normal file

@@ -0,0 +1,302 @@
# Tilt
- What does a development workflow look like?
- make changes
- test / see these changes
- repeat!
- What does it look like, with containers?
🤔
---
## Basic Docker workflow
- Preparation
- write Dockerfiles
- Iteration
- edit code
- `docker build`
- `docker run`
- test
- `docker stop`
Straightforward when we have a single container.
---
## Docker workflow with volumes
- Preparation
- write Dockerfiles
- `docker build` + `docker run`
- Iteration
- edit code
- test
Note: only works with interpreted languages.
<br/>
(Compiled languages require extra work.)
---
## Docker workflow with Compose
- Preparation
- write Dockerfiles + Compose file
- `docker-compose up`
- Iteration
- edit code
- test
- `docker-compose up` (as needed)
Simplifies complex scenarios (multiple containers).
<br/>
Facilitates updating images.
---
## Basic Kubernetes workflow
- Preparation
- write Dockerfiles
- write Kubernetes YAML
- set up container registry
- Iteration
- edit code
- build images
- push images
- update Kubernetes resources
Seems simple enough, right?
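
One iteration of that loop might look like this (hypothetical image name, registry, and Deployment):

```bash
# build and push a new version of the image
docker build -t registry.example.com/myapp:v2 .
docker push registry.example.com/myapp:v2
# point the Deployment at the new tag
kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2
```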
---
## Basic Kubernetes workflow
- Preparation
- write Dockerfiles
- write Kubernetes YAML
- **set up container registry**
- Iteration
- edit code
- build images
- **push images**
- update Kubernetes resources
Ah, right ...
---
## We need a registry
- Remember "build, ship, and run"
- Registries are involved in the "ship" phase
- With Docker, we were building and running on the same node
- We didn't need a registry!
- With Kubernetes, though ...
---
## Special case of single node clusters
- If our Kubernetes has only one node ...
- ... We can build directly on that node ...
- ... We don't need to push images ...
- ... We don't need to run a registry!
- Examples: Docker Desktop, Minikube ...
---
## When we have more than one node
- Which registry should we use?
(Docker Hub, Quay, cloud-based, self-hosted ...)
- Should we use a single registry, or one per cluster or environment?
- Which tags and credentials should we use?
(in particular when using a shared registry!)
- How do we provision that registry and its users?
- How do we adjust our Kubernetes YAML manifests?
(e.g. to inject image names and tags)
---
## More questions
- The whole cycle (build+push+update) is expensive
- If we have many services, how do we update only the ones we need?
- Can we take shortcuts?
(e.g. synchronized files without going through a whole build+push+update cycle)
---
## Tilt
- Tilt is a tool to address all these questions
- There are other similar tools (e.g. Skaffold)
- We arbitrarily decided to focus on that one
---
## Tilt in practice
- The `dockercoins` directory in our repository has a `Tiltfile`
- Go to that directory and try `tilt up`
- Tilt should refuse to start, but it will explain why
- Edit the `Tiltfile` accordingly and try again
- Open the Tilt web UI
(if running Tilt on a remote machine, you will need `tilt up --host 0.0.0.0`)
- Watch as the Dockercoins app is built, pushed, started
---
## What's in our Tiltfile?
- Kubernetes manifests for a local registry
- Kubernetes manifests for DockerCoins
- Instructions indicating how to build DockerCoins' images
- A tiny bit of sugar
(telling Tilt which registry to use)
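
Putting these pieces together, a minimal Tiltfile might look like this (the registry manifest path is hypothetical; the other values appear later in this chapter):

```python
# Deploy a local registry and the DockerCoins app
k8s_yaml('registry.yaml')            # hypothetical path for the registry manifests
k8s_yaml('../k8s/dockercoins.yaml')

# Tell Tilt how to build the images (one line per image)
docker_build('dockercoins/worker', 'worker')

# The "sugar": rewrite image names to use the local registry
default_registry('localhost:30555')
```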
---
## How does it work?
- Tilt keeps track of dependencies between files and resources
(a bit like a `make` that would run continuously)
- It automatically alters some resources
(for instance, it updates the images used in our Kubernetes manifests)
- That's it!
(And of course, it provides a great web UI, lots of libraries, etc.)
---
## What happens when we edit a file (1/2)
- Let's change e.g. `worker/worker.py`
- Thanks to this line,
```python
docker_build('dockercoins/worker', 'worker')
```
... Tilt watches the `worker` directory and uses it to build `dockercoins/worker`
- Thanks to this line,
```python
default_registry('localhost:30555')
```
... Tilt actually renames `dockercoins/worker` to `localhost:30555/dockercoins_worker`
- Tilt will tag the image with something like `tilt-xxxxxxxxxx`
---
## What happens when we edit a file (2/2)
- Thanks to this line,
```python
k8s_yaml('../k8s/dockercoins.yaml')
```
... Tilt is aware of our Kubernetes resources
- The `worker` Deployment uses `dockercoins/worker`, so it must be updated
- `dockercoins/worker` becomes `localhost:30555/dockercoins_worker:tilt-xxx`
- The `worker` Deployment gets updated on the Kubernetes cluster
- All these operations (and their log output) are visible in the Tilt UI
---
## Configuration file format
- The Tiltfile is written in [Starlark](https://github.com/bazelbuild/starlark)
(essentially a subset of Python)
- Tilt monitors the Tiltfile too
(so it reloads it immediately when we change it)
---
## Tilt "killer features"
- Dependency engine
(build or run only what's necessary)
- Ability to watch resources
(execute actions immediately, without explicitly running a command)
- Rich library of functions and helpers
(build container images, manipulate YAML manifests...)
- Convenient UI (web; TUI also available)
(provides immediate feedback and logs)
- Extensibility!
???
:EN:- Development workflow with Tilt
:FR:- Développer avec Tilt

slides/k8s/user-cert.md Normal file

@@ -0,0 +1,218 @@
# Generating user certificates
- The most popular ways to authenticate users with Kubernetes are:
- TLS certificates
- JSON Web Tokens (OIDC or ServiceAccount tokens)
- We're going to see how to use TLS certificates
- We will generate a certificate for a user and give them some permissions
- Then we will use that certificate to access the cluster
---
## Heads up!
- The demos in this section require that we have access to our cluster's CA
- This is easy if we are using a cluster deployed with `kubeadm`
- Otherwise, we may or may not have access to the cluster's CA
- We may or may not be able to use the CSR API instead
---
## Check that we have access to the CA
- Make sure that you are logged into the node hosting the control plane
(if a cluster has been provisioned for you for a training, it's `node1`)
.exercise[
- Check that the CA key is here:
```bash
sudo ls -l /etc/kubernetes/pki
```
]
The output should include `ca.key` and `ca.crt`.
---
## How it works
- The API server is configured to accept all certificates signed by a given CA
- The certificate contains:
- the user name (in the `CN` field)
- the groups the user belongs to (as multiple `O` fields)
.exercise[
- Check which CA is used by the Kubernetes API server:
```bash
sudo grep crt /etc/kubernetes/manifests/kube-apiserver.yaml
```
]
This is the flag that we're looking for:
```
--client-ca-file=/etc/kubernetes/pki/ca.crt
```
---
## Generating a key and CSR for our user
- These operations could be done on a separate machine
- We only need to transfer the CSR (Certificate Signing Request) to the CA
(we never need to expose the private key)
.exercise[
- Generate a private key:
```bash
openssl genrsa 4096 > user.key
```
- Generate a CSR:
```bash
openssl req -new -key user.key -subj /CN=jerome/O=devs/O=ops > user.csr
```
]
---
## Generating a signed certificate
- This has to be done on the machine holding the CA private key
(copy the `user.csr` file if needed)
.exercise[
- Verify the CSR parameters:
```bash
openssl req -in user.csr -text | head
```
- Generate the certificate:
```bash
sudo openssl x509 -req \
-CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
-in user.csr -days 1 -set_serial 1234 > user.crt
```
]
If you are using two separate machines, transfer `user.crt` to the other machine.
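
The whole flow can be rehearsed locally with a throwaway CA (this sketch does not touch the cluster's real CA in `/etc/kubernetes/pki`):

```shell
# create a throwaway CA (stand-in for the cluster CA)
openssl genrsa -out demo-ca.key 2048
openssl req -x509 -new -key demo-ca.key -subj /CN=demo-ca -days 1 -out demo-ca.crt
# generate the user's key and CSR, as above
openssl genrsa -out user.key 2048
openssl req -new -key user.key -subj /CN=jerome/O=devs/O=ops -out user.csr
# sign the CSR with the throwaway CA
openssl x509 -req -CA demo-ca.crt -CAkey demo-ca.key \
        -in user.csr -days 1 -set_serial 1234 -out user.crt
# check the identity baked into the certificate
openssl x509 -in user.crt -noout -subject
```

The last command should show `CN = jerome` plus the two `O` (group) fields.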
---
## Adding the key and certificate to kubeconfig
- We have to edit our `.kube/config` file
- This can be done relatively easily with `kubectl config`
.exercise[
- Create a new `user` entry in our `.kube/config` file:
```bash
kubectl config set-credentials jerome \
--client-key=user.key --client-certificate=user.crt
```
]
The configuration file now points to our local files.
We could also embed the key and certs with the `--embed-certs` option.
(So that the kubeconfig file can be used without `user.key` and `user.crt`.)
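
For instance, the embedded variant would be:

```bash
kubectl config set-credentials jerome \
        --client-key=user.key --client-certificate=user.crt \
        --embed-certs=true
```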
---
## Using the new identity
- At the moment, we probably use the admin certificate generated by `kubeadm`
(with `CN=kubernetes-admin` and `O=system:masters`)
- Let's edit our *context* to use our new certificate instead!
.exercise[
- Edit the context:
```bash
kubectl config set-context --current --user=jerome
```
- Try any command:
```bash
kubectl get pods
```
]
Access will be denied, but we should see that we were correctly *authenticated* as `jerome`.
---
## Granting permissions
- Let's add some read-only permissions to the `devs` group (for instance)
.exercise[
- Switch back to our admin identity:
```bash
kubectl config set-context --current --user=kubernetes-admin
```
- Grant permissions:
```bash
kubectl create clusterrolebinding devs-can-view \
--clusterrole=view --group=devs
```
]
---
## Testing the new permissions
- As soon as we create the ClusterRoleBinding, all users in the `devs` group get access
- Let's verify that we can e.g. list pods!
.exercise[
- Switch to our user identity again:
```bash
kubectl config set-context --current --user=jerome
```
- Test the permissions:
```bash
kubectl get pods
```
]
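
Alternatively, an admin can check permissions without switching identities, using impersonation (a sketch):

```bash
kubectl auth can-i list pods --as=jerome --as-group=devs
kubectl auth can-i delete pods --as=jerome --as-group=devs
```

The first command should answer `yes`, the second `no` (the `view` ClusterRole is read-only).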
???
:EN:- Authentication with user certificates
:FR:- Identification par certificat TLS


@@ -1,7 +1,7 @@
## Versions installed
- Kubernetes 1.18.0
- Docker Engine 19.03.8
- Kubernetes 1.19.2
- Docker Engine 19.03.13
- Docker Compose 1.25.4
<!-- ##VERSION## -->


@@ -28,11 +28,13 @@ content:
-
- k8s/prereqs-admin.md
- k8s/architecture.md
#- k8s/internal-apis.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
-
- k8s/multinode.md
- k8s/cni.md
- k8s/cni-internals.md
- k8s/interco.md
-
- k8s/apilb.md
@@ -48,6 +50,7 @@ content:
#- k8s/bootstrap.md
- k8s/control-plane-auth.md
- k8s/podsecuritypolicy.md
- k8s/user-cert.md
- k8s/csr-api.md
- k8s/openid-connect.md
-


@@ -28,10 +28,12 @@ content:
# DAY 1
- - k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/internal-apis.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
- - k8s/multinode.md
- k8s/cni.md
- k8s/cni-internals.md
- k8s/interco.md
- - k8s/apilb.md
- k8s/setup-overview.md
@@ -49,6 +51,7 @@ content:
- k8s/logs-cli.md
- k8s/logs-centralized.md
- k8s/authn-authz.md
- k8s/user-cert.md
- k8s/csr-api.md
- - k8s/openid-connect.md
- k8s/control-plane-auth.md
@@ -61,7 +64,9 @@ content:
- k8s/horizontal-pod-autoscaler.md
- - k8s/prometheus.md
- k8s/extending-api.md
- k8s/crd.md
- k8s/operators.md
- k8s/eck.md
###- k8s/operators-design.md
# CONCLUSION
- - k8s/lastwords.md
@@ -72,6 +77,7 @@ content:
# EXTRA
- - k8s/volumes.md
- k8s/configuration.md
- k8s/secrets.md
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md

slides/kube-adv.yml Normal file

@@ -0,0 +1,81 @@
title: |
Advanced
Kubernetes
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- #1
- k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/internal-apis.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
- #2
- k8s/multinode.md
- k8s/cni.md
- k8s/interco.md
- #3
- k8s/cni-internals.md
- k8s/apilb.md
- k8s/control-plane-auth.md
- |
# (Extra content)
- k8s/staticpods.md
- k8s/cluster-upgrade.md
- #4
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- |
# (Extra content)
- k8s/helm-create-better-chart.md
- k8s/helm-secrets.md
- #5
- k8s/extending-api.md
- k8s/operators.md
- k8s/sealed-secrets.md
- k8s/crd.md
- #6
- k8s/ingress-tls.md
- k8s/cert-manager.md
- k8s/eck.md
- #7
- k8s/admission.md
- k8s/kyverno.md
- #8
- k8s/aggregation-layer.md
- k8s/metrics-server.md
- k8s/prometheus.md
- k8s/hpa-v2.md
- #9
- k8s/operators-design.md
- k8s/kubebuilder.md
- k8s/events.md
- k8s/finalizers.md
- |
# (Extra content)
- k8s/owners-and-dependents.md
- k8s/apiserver-deepdive.md
#- k8s/record.md
- shared/thankyou.md


@@ -58,6 +58,8 @@ content:
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
#- k8s/dashboard.md
#- k8s/k9s.md
#- k8s/tilt.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
@@ -74,6 +76,7 @@ content:
-
- k8s/namespaces.md
- k8s/ingress.md
#- k8s/ingress-tls.md
#- k8s/kustomize.md
#- k8s/helm-intro.md
#- k8s/helm-chart-format.md
@@ -81,10 +84,12 @@ content:
#- k8s/helm-create-better-chart.md
#- k8s/helm-secrets.md
#- k8s/exercise-helm.md
#- k8s/gitlab.md
#- k8s/create-chart.md
#- k8s/create-more-charts.md
#- k8s/netpol.md
#- k8s/authn-authz.md
#- k8s/user-cert.md
#- k8s/csr-api.md
#- k8s/openid-connect.md
#- k8s/podsecuritypolicy.md
@@ -93,15 +98,19 @@ content:
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
- k8s/secrets.md
#- k8s/logs-centralized.md
#- k8s/prometheus.md
#- k8s/statefulsets.md
#- k8s/local-persistent-volumes.md
#- k8s/portworx.md
#- k8s/extending-api.md
#- k8s/crd.md
#- k8s/admission.md
#- k8s/operators.md
#- k8s/operators-design.md
#- k8s/staticpods.md
#- k8s/finalizers.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
-

Some files were not shown because too many files have changed in this diff.