Compare commits


54 Commits

Author SHA1 Message Date
Jerome Petazzoni
de074e9da5 Remove cards mention 2020-05-07 14:55:09 +02:00
Jerome Petazzoni
ba826cbb99 Fix Smaïne twitter link 2020-05-07 14:53:16 +02:00
Jerome Petazzoni
b62afded35 end slide with link to form and training 2020-05-07 14:50:26 +02:00
Jerome Petazzoni
e29f24453a Add Smaïne 2020-05-07 14:04:49 +02:00
Jerome Petazzoni
db4f73ab84 Merge branch 'master' into 2020-05-helm 2020-05-07 13:24:53 +02:00
Jerome Petazzoni
eb9052ae9a Add twitch chat info 2020-05-07 13:24:35 +02:00
Jerome Petazzoni
c8f04c9b3b merge + add chat link 2020-05-07 13:13:08 +02:00
Jerome Petazzoni
8f85332d8a Advanced Dockerfiles -> Advanced Dockerfile Syntax 2020-05-06 17:25:03 +02:00
Jerome Petazzoni
0479ad2285 Add force redirects 2020-05-06 17:22:13 +02:00
Jerome Petazzoni
986d7eb9c2 Add foreword to operators design section 2020-05-05 17:24:05 +02:00
Jerome Petazzoni
3fafbb8d4e Add kustomize CLI and completion 2020-05-04 16:47:26 +02:00
Jerome Petazzoni
5a24df3fd4 Add details on Kustomize 2020-05-04 16:25:35 +02:00
Jerome Petazzoni
1bbfba0531 Add definition of idempotent 2020-05-04 02:18:05 +02:00
Jerome Petazzoni
8d98431ba0 Add Helm graduation status 2020-05-04 02:09:00 +02:00
Jerome Petazzoni
c31c81a286 Allow overriding YAML desc through env vars 2020-05-04 00:54:34 +02:00
Jerome Petazzoni
a0314fc5f5 Keep --restart=Never for folks running 1.17- 2020-05-03 17:08:32 +02:00
Jérôme Petazzoni
3f088236a4 Merge pull request #557 from barpilot/psp
psp: update deprecated parts
2020-05-03 17:07:41 +02:00
Jerome Petazzoni
ce4e2ffe46 Add sleep command in init container example
It can be tricky to illustrate what's going on here, since installing
git and cloning the repo can be so fast. So we're sleeping a few seconds
to help with this demo and make it easier to show the race condition.
2020-05-03 17:01:59 +02:00
Jérôme Petazzoni
c3a05a6393 Merge pull request #558 from barpilot/vol-init
volume: add missing pod nginx-with-init creating
2020-05-03 16:57:46 +02:00
Jerome Petazzoni
40b2b8e62e Fix deployment name in labels/selector intro
(Fixes #552)
2020-05-03 16:53:25 +02:00
Jerome Petazzoni
efdcf4905d Bump up Kubernetes dashboard to 2.0.0 2020-05-03 16:01:19 +02:00
Jérôme Petazzoni
bdb57c05b4 Merge pull request #550 from BretFisher/patch-20
update k8s dashboard versions
2020-05-03 15:55:15 +02:00
Jerome Petazzoni
af0762a0a2 Remove ':' from file names
Colons are not allowed in file names on Windows. Let's use
something else instead.

(Initially reported by @DenisBalan. This closes #549.)
2020-05-03 15:49:37 +02:00
Jerome Petazzoni
0d6c364a95 Add MacPorts instructions for stern 2020-05-03 13:40:01 +02:00
Jerome Petazzoni
690a1eb75c Move Ardan Live 2020-05-01 15:37:57 -05:00
Jerome Petazzoni
8c503f1e69 Prep helm slides 2020-05-01 10:42:36 -05:00
Jérôme Petazzoni
c796a6bfc1 Merge pull request #556 from barpilot/healthcheck
healthcheck: fix rng manifest filename
2020-04-30 22:51:37 +02:00
Jerome Petazzoni
0b10d3d40d Add a bunch of other managed offerings 2020-04-30 15:50:24 -05:00
Jérôme Petazzoni
cdb50925da Merge pull request #554 from barpilot/installer
separate managed options from deployment
2020-04-30 22:47:22 +02:00
Jérôme Petazzoni
ca1f8ec828 Merge pull request #553 from barpilot/kubeadm
Remove experimental status on kubeadm HA
2020-04-30 22:46:33 +02:00
Jerome Petazzoni
7302d3533f Use built-in dockercoins manifest instead of separate kubercoins repo 2020-04-30 15:45:12 -05:00
Jerome Petazzoni
d3c931e602 Add separate instructions for Zoom webinar 2020-04-30 15:42:41 -05:00
Guilhem Lettron
7402c8e6a8 psp: update psp apiVersion to policy/v1beta1 2020-04-29 22:46:33 +02:00
Guilhem Lettron
1de539bff8 healthcheck: fix rng manifest filename 2020-04-29 22:41:15 +02:00
Guilhem Lettron
a6c7d69986 volume: add missing pod nginx-with-init creating 2020-04-29 22:37:49 +02:00
Guilhem Lettron
b0bff595cf psp: update generator helpers
kubectl run →  kubectl create deployment
kubectl run --restart=Never → kubectl run
2020-04-29 22:33:34 +02:00
Jerome Petazzoni
6f806ed200 typo 2020-04-28 14:23:52 -05:00
Jerome Petazzoni
0c8b20f6b6 typo 2020-04-28 14:21:31 -05:00
Jerome Petazzoni
2ba35e1f8d typo 2020-04-28 14:20:22 -05:00
Jerome Petazzoni
eb0d9bed2a Update descriptions 2020-04-28 06:18:59 -05:00
Jerome Petazzoni
bab493a926 Update descriptions 2020-04-28 06:17:21 -05:00
Guilhem Lettron
f4f2d83fa4 separate managed options from deployment 2020-04-27 20:55:23 +02:00
Guilhem Lettron
9f049951ab Remove experimental status on kubeadm HA 2020-04-27 20:47:30 +02:00
Jerome Petazzoni
7257a5c594 Add outline tags to Kubernetes course 2020-04-27 07:35:14 -05:00
Jerome Petazzoni
102aef5ac5 Add outline tags to Docker short course 2020-04-26 11:36:50 -05:00
Jerome Petazzoni
d2b3a1d663 Add Ardan Live 2020-04-23 08:46:56 -05:00
Jerome Petazzoni
d84ada0927 Fix slides counter 2020-04-23 07:33:46 -05:00
Jerome Petazzoni
0e04b4a07d Modularize logistics file and add logistics-online file 2020-04-20 15:51:02 -05:00
Jerome Petazzoni
aef910b4b7 Do not show 'Module 1' if there is only one module 2020-04-20 13:01:06 -05:00
Jerome Petazzoni
298b6db20c Rename 'chapter' into 'module' 2020-04-20 11:49:35 -05:00
Jerome Petazzoni
7ec6e871c9 Add shortlink container.training/next 2020-04-15 13:17:03 -05:00
Jerome Petazzoni
a0558e4ee5 Rework kubectl run section, break it down
We now have better explanations on labels and selectors.
The kubectl run section was getting very long, so now
it is different parts: kubectl run basics; how to create
other resources like batch jobs; first contact with
labels and annotations; and showing the limitations
of kubectl logs.
2020-04-08 18:29:59 -05:00
Jerome Petazzoni
16a62f9f84 Really dirty script to add force redirects 2020-04-07 17:00:53 -05:00
Bret Fisher
2ce50007d2 update k8s dashboard versions 2020-03-16 17:57:41 -04:00
120 changed files with 2085 additions and 1681 deletions

View File

@@ -1 +0,0 @@
image: jpetazzo/shpod

View File

@@ -1,3 +1,10 @@
# This file is based on the following manifest:
# https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended.yaml
# It adds the "skip login" flag, as well as an insecure hack to defeat SSL.
# As its name implies, it is INSECURE and you should not use it in production,
# or on clusters that contain any kind of important or sensitive data, or on
# clusters that have a life span of more than a few hours.
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -187,7 +194,7 @@ spec:
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.0-rc2
image: kubernetesui/dashboard:v2.0.0
imagePullPolicy: Always
ports:
- containerPort: 8443
@@ -226,7 +233,7 @@ spec:
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
@@ -272,7 +279,7 @@ spec:
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.2
image: kubernetesui/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
@@ -293,7 +300,7 @@ spec:
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master

View File

@@ -1,3 +1,6 @@
# This is a copy of the following file:
# https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,19 +15,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
kind: Namespace
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
name: kubernetes-dashboard
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
@@ -32,62 +28,147 @@ metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
@@ -95,7 +176,7 @@ metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
@@ -108,55 +189,117 @@ spec:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.0
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
- port: 8000
targetPort: 8000
selector:
k8s-app: kubernetes-dashboard
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}

View File

@@ -14,7 +14,7 @@ spec:
initContainers:
- name: git
image: alpine
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
command: [ "sh", "-c", "apk add git && sleep 5 && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/

View File

@@ -1,5 +1,5 @@
---
apiVersion: extensions/v1beta1
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
annotations:

View File

@@ -8,24 +8,24 @@ metadata:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: users:jean.doe
name: user=jean.doe
rules:
- apiGroups: [ certificates.k8s.io ]
resources: [ certificatesigningrequests ]
verbs: [ create ]
- apiGroups: [ certificates.k8s.io ]
resourceNames: [ users:jean.doe ]
resourceNames: [ user=jean.doe ]
resources: [ certificatesigningrequests ]
verbs: [ get, create, delete, watch ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: users:jean.doe
name: user=jean.doe
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: users:jean.doe
name: user=jean.doe
subjects:
- kind: ServiceAccount
name: jean.doe

View File

@@ -246,6 +246,14 @@ EOF"
helm completion bash | sudo tee /etc/bash_completion.d/helm
fi"
# Install kustomize
pssh "
if [ ! -x /usr/local/bin/kustomize ]; then
curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.1_linux_amd64.tar.gz |
sudo tar -C /usr/local/bin -zx kustomize
echo complete -C /usr/local/bin/kustomize kustomize | sudo tee /etc/bash_completion.d/kustomize
fi"
# Install ship
pssh "
if [ ! -x /usr/local/bin/ship ]; then

View File

@@ -1,7 +1,8 @@
# Uncomment and/or edit one of the following lines if necessary.
#/ /kube-halfday.yml.html 200
#/ /kube-fullday.yml.html 200
#/ /kube-twodays.yml.html 200
#/ /kube-halfday.yml.html 200!
#/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200!
/ /helm.yml.html 200!
# And this allows to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack
@@ -13,3 +14,6 @@
# Shortlink for the QRCode
/q /qrcode.html 200
/next https://www.eventbrite.com/e/intensive-kubernetes-advanced-concepts-live-stream-tickets-102358725704
/chat https://gitter.im/jpetazzo/workshop-20200507-online

View File

@@ -1,7 +1,7 @@
class: title
# Advanced Dockerfiles
# Advanced Dockerfile Syntax
![construction](images/title-advanced-dockerfiles.jpg)
@@ -12,7 +12,10 @@ class: title
We have seen simple Dockerfiles to illustrate how Docker builds
container images.
In this section, we will see more Dockerfile commands.
In this section, we will give a recap of the Dockerfile syntax,
and introduce advanced Dockerfile commands that we might
come across sometimes; or that we might want to use in some
specific scenarios.
---
@@ -420,3 +423,8 @@ ONBUILD COPY . /src
* You can't chain `ONBUILD` instructions with `ONBUILD`.
* `ONBUILD` can't be used to trigger `FROM` instructions.
???
:EN:- Advanced Dockerfile syntax
:FR:- Dockerfile niveau expert

View File

@@ -280,3 +280,8 @@ CONTAINER ID IMAGE ... CREATED STATUS
5c1dfd4d81f1 jpetazzo/clock ... 40 min. ago Exited (0) 40 min. ago
b13c164401fb ubuntu ... 55 min. ago Exited (130) 53 min. ago
```
???
:EN:- Foreground and background containers
:FR:- Exécution interactive ou en arrière-plan

View File

@@ -167,3 +167,8 @@ Automated process = good.
In the next chapter, we will learn how to automate the build
process by writing a `Dockerfile`.
???
:EN:- Building our first images interactively
:FR:- Fabriquer nos premières images à la main

View File

@@ -363,3 +363,10 @@ In this example, `sh -c` will still be used, but
The shell gets replaced by `figlet` when `figlet` starts execution.
This allows running processes as PID 1 without using JSON.
???
:EN:- Towards automated, reproducible builds
:EN:- Writing our first Dockerfile
:FR:- Rendre le processus automatique et reproductible
:FR:- Écrire son premier Dockerfile

View File

@@ -272,3 +272,7 @@ $ docker run -it --entrypoint bash myfiglet
root@6027e44e2955:/#
```
???
:EN:- CMD and ENTRYPOINT
:FR:- CMD et ENTRYPOINT

View File

@@ -322,3 +322,11 @@ You can:
Each copy will run in a different network, totally isolated from the other.
This is ideal to debug regressions, do side-by-side comparisons, etc.
???
:EN:- Using compose to describe an environment
:EN:- Connecting services together with a *Compose file*
:FR:- Utiliser Compose pour décrire son environnement
:FR:- Écrire un *Compose file* pour connecter les services entre eux

View File

@@ -226,3 +226,13 @@ We've learned how to:
In the next chapter, we will see how to connect
containers together without exposing their ports.
???
:EN:Connecting containers
:EN:- Container networking basics
:EN:- Exposing a container
:FR:Connecter les conteneurs
:FR:- Description du modèle réseau des conteneurs
:FR:- Exposer un conteneur

View File

@@ -98,3 +98,8 @@ Success!
* Place it in a different directory, with the `WORKDIR` instruction.
* Even better, use the `gcc` official image.
???
:EN:- The build cache
:FR:- Tirer parti du cache afin d'optimiser la vitesse de *build*

View File

@@ -431,3 +431,8 @@ services:
- It's OK (and even encouraged) to start simple and evolve as needed.
- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!
???
:EN:- Dockerfile tips, tricks, and best practices
:FR:- Bonnes pratiques pour la construction des images

View File

@@ -290,3 +290,8 @@ bash: figlet: command not found
* We have a clear definition of our environment, and can share it reliably with others.
* Let's see in the next chapters how to bake a custom image with `figlet`!
???
:EN:- Running our first container
:FR:- Lancer nos premiers conteneurs

View File

@@ -226,3 +226,8 @@ docker export <container_id> | tar tv
```
This will give a detailed listing of the content of the container.
???
:EN:- Troubleshooting and getting inside a container
:FR:- Inspecter un conteneur en détail, en *live* ou *post-mortem*

View File

@@ -375,3 +375,13 @@ We've learned how to:
* Understand Docker image namespacing.
* Search and download images.
???
:EN:Building images
:EN:- Containers, images, and layers
:EN:- Image addresses and tags
:EN:- Finding and transferring images
:FR:Construire des images
:FR:- La différence entre un conteneur et une image
:FR:- La notion de *layer* partagé entre images

View File

@@ -80,3 +80,8 @@ $ docker ps --filter label=owner=alice
(To determine internal cross-billing, or who to page in case of outage.)
* etc.
???
:EN:- Using labels to identify containers
:FR:- Étiqueter ses conteneurs avec des méta-données

View File

@@ -391,3 +391,10 @@ We've learned how to:
* Use a simple local development workflow.
???
:EN:Developing with containers
:EN:- “Containerize” a development environment
:FR:Développer au jour le jour
:FR:- « Containeriser » son environnement de développement

View File

@@ -313,3 +313,11 @@ virtually "free."
* Sometimes, we want to inspect a specific intermediary build stage.
* Or, we want to describe multiple images using a single Dockerfile.
???
:EN:Optimizing our images and their build process
:EN:- Leveraging multi-stage builds
:FR:Optimiser les images et leur construction
:FR:- Utilisation d'un *multi-stage build*

View File

@@ -130,3 +130,12 @@ $ docker inspect --format '{{ json .Created }}' <containerID>
* The optional `json` keyword asks for valid JSON output.
<br/>(e.g. here it adds the surrounding double-quotes.)
???
:EN:Managing container lifecycle
:EN:- Naming and inspecting containers
:FR:Suivre ses conteneurs à la loupe
:FR:- Obtenir des informations détaillées sur un conteneur
:FR:- Associer un identifiant unique à un conteneur

View File

@@ -175,3 +175,10 @@ class: extra-details
* This will cause some CLI and TUI programs to redraw the screen.
* But not all of them.
???
:EN:- Restarting old containers
:EN:- Detaching and reattaching to container
:FR:- Redémarrer des anciens conteneurs
:FR:- Se détacher et rattacher à des conteneurs

View File

@@ -125,3 +125,11 @@ Server:
]
If this doesn't work, raise your hand so that an instructor can assist you!
???
:EN:Container concepts
:FR:Premier contact avec les conteneurs
:EN:- What's a container engine?
:FR:- Qu'est-ce qu'un *container engine* ?

View File

@@ -11,10 +11,10 @@ class State(object):
self.section_title = None
self.section_start = 0
self.section_slides = 0
self.chapters = {}
self.modules = {}
self.sections = {}
def show(self):
if self.section_title.startswith("chapter-"):
if self.section_title.startswith("module-"):
return
print("{0.section_title}\t{0.section_start}\t{0.section_slides}".format(self))
self.sections[self.section_title] = self.section_slides
@@ -38,10 +38,10 @@ for line in open(sys.argv[1]):
if line == "--":
state.current_slide += 1
toc_links = re.findall("\(#toc-(.*)\)", line)
if toc_links and state.section_title.startswith("chapter-"):
if state.section_title not in state.chapters:
state.chapters[state.section_title] = []
state.chapters[state.section_title].append(toc_links[0])
if toc_links and state.section_title.startswith("module-"):
if state.section_title not in state.modules:
state.modules[state.section_title] = []
state.modules[state.section_title].append(toc_links[0])
# This is really hackish
if line.startswith("class:"):
for klass in EXCLUDED:
@@ -51,7 +51,7 @@ for line in open(sys.argv[1]):
state.show()
for chapter in sorted(state.chapters, key=lambda f: int(f.split("-")[1])):
chapter_size = sum(state.sections[s] for s in state.chapters[chapter])
print("{}\t{}\t{}".format("total size for", chapter, chapter_size))
for module in sorted(state.modules, key=lambda f: int(f.split("-")[1])):
module_size = sum(state.sections[s] for s in state.modules[module])
print("{}\t{}\t{}".format("total size for", module, module_size))

slides/fix-redirects.sh Executable file (118 additions)
View File

@@ -0,0 +1,118 @@
#!/bin/sh
# This script helps to add "force-redirects" where needed.
# This might replace your entire git repos with Vogon poetry.
# Use at your own peril!
set -eu
# The easiest way to set this env var is by copy-pasting from
# the netlify web dashboard, then doctoring the output a bit.
# Yeah, that's gross, but after spending 10 minutes with the
# API and the CLI and OAuth, it took about 10 seconds to do it
# with le copier-coller, so ... :)
SITES="
2020-01-caen
2020-01-zr
2020-02-caen
2020-02-enix
2020-02-outreach
2020-02-vmware
2020-03-ardan
2020-03-qcon
alfun-2019-06
boosterconf2018
clt-2019-10
dc17eu
decembre2018
devopsdaysams2018
devopsdaysmsp2018
gotochgo2018
gotochgo2019
indexconf2018
intro-2019-01
intro-2019-04
intro-2019-06
intro-2019-08
intro-2019-09
intro-2019-11
intro-2019-12
k8s2d
kadm-2019-04
kadm-2019-06
kube
kube-2019-01
kube-2019-02
kube-2019-03
kube-2019-04
kube-2019-06
kube-2019-08
kube-2019-09
kube-2019-10
kube-2019-11
lisa-2019-10
lisa16t1
lisa17m7
lisa17t9
maersk-2019-07
maersk-2019-08
ndcminnesota2018
nr-2019-08
oscon2018
oscon2019
osseu17
pycon2019
qconsf18wkshp
qconsf2017intro
qconsf2017swarm
qconsf2018
qconuk2019
septembre2018
sfsf-2019-06
srecon2018
swarm2017
velny-k8s101-2018
velocity-2019-11
velocityeu2018
velocitysj2018
vmware-2019-11
weka
wwc-2019-10
wwrk-2019-05
wwrk-2019-06
"
for SITE in $SITES; do
echo "##### $SITE"
git checkout -q origin/$SITE
# No _redirects? No problem.
if ! [ -f _redirects ]; then
continue
fi
# If there is already a force redirect on /, we're good.
if grep '^/ .* 200!' _redirects; then
continue
fi
# If there is a redirect on / ... and it's not forced ... do something.
if grep "^/ .* 200$" _redirects; then
echo "##### $SITE needs to be patched"
sed -i 's,^/ \(.*\) 200$,/ \1 200!,' _redirects
git add _redirects
git commit -m "fix-redirects.sh: adding forced redirect"
git push origin HEAD:$SITE
continue
fi
if grep "^/ " _redirects; then
echo "##### $SITE with / but no status code"
echo "##### Should I add '200!' ?"
read foo
sed -i 's,^/ \(.*\)$,/ \1 200!,' _redirects
git add _redirects
git commit -m "fix-redirects.sh: adding status code and forced redirect"
git push origin HEAD:$SITE
continue
fi
echo "##### $SITE without / ?"
cat _redirects
done

slides/helm.yml Normal file (36 additions)
View File

@@ -0,0 +1,36 @@
title: |
Helm Workshop
CNCF Paris
chat: "[Gitter](https://gitter.im/jpetazzo/workshop-20200507-online)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2020-05-helm.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-twitch.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
- shared/prereqs.md
- shared/webssh.md
- shared/connecting.md
- k8s/kubercoins.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- k8s/helm-secrets.md
#- k8s/exercise-helm.md
- shared/thankyou.md

View File

@@ -7,6 +7,7 @@ FLAGS=dict(
fr=u"🇫🇷",
uk=u"🇬🇧",
us=u"🇺🇸",
www=u"🌐",
)
TEMPLATE="""<html>
@@ -19,7 +20,7 @@ TEMPLATE="""<html>
<div class="main">
<table>
<tr><td class="header" colspan="3">{{ title }}</td></tr>
<tr><td class="details" colspan="3">Note: while some workshops are delivered in French, slides are always in English.</td></tr>
<tr><td class="details" colspan="3">Note: while some workshops are delivered in other languages, slides are always in English.</td></tr>
<tr><td class="title" colspan="3">Free video of our latest workshop</td></tr>
@@ -35,7 +36,7 @@ TEMPLATE="""<html>
<td class="details">If you're interested, we can deliver that workshop (or longer courses) to your team or organization.</td>
</tr>
<tr>
<td class="details">Contact <a href="mailto:jerome.petazzoni@gmail.com">Jérôme Petazzoni</a> to make that happen!</a></td>
<td class="details">Contact <a href="mailto:jerome.petazzoni@gmail.com">Jérôme Petazzoni</a> to make that happen!</td>
</tr>
{% if coming_soon %}

View File

@@ -1,3 +1,36 @@
- date: [2020-06-09, 2020-06-11]
country: www
city: streaming
event: Ardan Live
speaker: jpetazzo
title: Intensive Kubernetes Bootcamp
attend: https://www.eventbrite.com/e/livestream-intensive-kubernetes-bootcamp-tickets-103262336428
- date: [2020-05-19, 2020-05-21]
country: www
city: streaming
event: Ardan Live
speaker: jpetazzo
title: Intensive Docker Bootcamp
attend: https://www.eventbrite.com/e/livestream-intensive-docker-bootcamp-tickets-103258886108
- date: [2020-05-04, 2020-05-08]
country: www
city: streaming
event: Ardan Live
speaker: jpetazzo
title: Intensive Kubernetes - Advanced Concepts
attend: https://www.eventbrite.com/e/livestream-intensive-kubernetes-advanced-concepts-tickets-102358725704
- date: [2020-03-30, 2020-04-02]
country: www
city: streaming
event: Ardan Live
speaker: jpetazzo
title: Intensive Docker and Kubernetes
attend: https://www.eventbrite.com/e/ardan-labs-live-worldwide-march-30-april-2-2020-tickets-100331129108#
slides: https://2020-03-ardan.container.training/
- date: 2020-03-06
country: uk
city: London

View File

@@ -1,69 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/toc.md
-
#- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
#- containers/Start_And_Attach.md
- containers/Naming_And_Inspecting.md
#- containers/Labels.md
- containers/Getting_Inside.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Container_Networking_Basics.md
#- containers/Network_Drivers.md
#- containers/Container_Network_Model.md
- containers/Local_Development_Workflow.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
- containers/Multi_Stage_Builds.md
#- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
#- containers/Docker_Machine.md
#- containers/Advanced_Dockerfiles.md
#- containers/Init_Systems.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Container_Engines.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md
#- containers/Orchestration_Overview.md
-
- shared/thankyou.md
- containers/links.md

View File

@@ -1,69 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
chapters:
- shared/title.md
# - shared/logistics.md
- containers/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/toc.md
- - containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- - containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- containers/Docker_Machine.md
- - containers/Advanced_Dockerfiles.md
- containers/Init_Systems.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Resource_Limits.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
- - containers/Container_Engines.md
- containers/Pods_Anatomy.md
- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md

View File

@@ -1,77 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/toc.md
- # DAY 1
- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Dockerfile_Tips.md
- containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Exercise_Dockerfile_Advanced.md
-
- containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Start_And_Attach.md
- containers/Getting_Inside.md
- containers/Resource_Limits.md
- # DAY 2
- containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
-
- containers/Local_Development_Workflow.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
- containers/Installing_Docker.md
- containers/Container_Engines.md
- containers/Init_Systems.md
- containers/Advanced_Dockerfiles.md
-
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Orchestration_Overview.md
-
- shared/thankyou.md
- containers/links.md
#-
#- containers/Docker_Machine.md
#- containers/Ambassadors.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md

View File

@@ -129,3 +129,8 @@ installed and set up `kubectl` to communicate with your cluster.
```
]
???
:EN:- Securely accessing internal services
:FR:- Accès sécurisé aux services internes

View File

@@ -87,3 +87,8 @@
- Tunnels are also fine
(e.g. [k3s](https://k3s.io/) uses a tunnel to allow each node to contact the API server)
???
:EN:- Ensuring API server availability
:FR:- Assurer la disponibilité du serveur API

View File

@@ -381,3 +381,8 @@ We demonstrated *update* and *watch* semantics.
- if the pod has special constraints that can't be met
- if the scheduler is not running (!)
???
:EN:- Kubernetes architecture review
:FR:- Passage en revue de l'architecture de Kubernetes

View File

@@ -676,3 +676,17 @@ class: extra-details
- Both are available as standalone programs, or as plugins for `kubectl`
(`kubectl` plugins can be installed and managed with `krew`)
???
:EN:- Authentication and authorization in Kubernetes
:EN:- Authentication with tokens and certificates
:EN:- Authorization with RBAC (Role-Based Access Control)
:EN:- Restricting permissions with Service Accounts
:EN:- Working with Roles, Cluster Roles, Role Bindings, etc.
:FR:- Identification et droits d'accès dans Kubernetes
:FR:- Mécanismes d'identification par jetons et certificats
:FR:- Le modèle RBAC *(Role-Based Access Control)*
:FR:- Restreindre les permissions grâce aux *Service Accounts*
:FR:- Comprendre les *Roles*, *Cluster Roles*, *Role Bindings*, etc.

slides/k8s/batch-jobs.md Normal file (194 additions)
View File

@@ -0,0 +1,194 @@
# Executing batch jobs
- Deployments are great for stateless web apps
(as well as workers that keep running forever)
- Pods are great for one-off execution that we don't care about
(because they don't get automatically restarted if something goes wrong)
- Jobs are great for "long" background work
("long" being at least minutes our hours)
- CronJobs are great to schedule Jobs at regular intervals
(just like the classic UNIX `cron` daemon with its `crontab` files)
---
## Creating a Job
- A Job will create a Pod
- If the Pod fails, the Job will create another one
- The Job will keep trying until:
- either a Pod succeeds,
- or we hit the *backoff limit* of the Job (default=6)
.exercise[
- Create a Job that has a 50% chance of success:
```bash
kubectl create job flipcoin --image=alpine -- sh -c 'exit $(($RANDOM%2))'
```
]
---
## Our Job in action
- Our Job will create a Pod named `flipcoin-xxxxx`
- If the Pod succeeds, the Job stops
- If the Pod fails, the Job creates another Pod
.exercise[
- Check the status of the Pod(s) created by the Job:
```bash
kubectl get pods --selector=job-name=flipcoin
```
]
---
class: extra-details
## More advanced jobs
- We can specify a number of "completions" (default=1)
- This indicates how many times the Job must be executed
- We can specify the "parallelism" (default=1)
- This indicates how many Pods should be running in parallel
- These options cannot be specified with `kubectl create job`
(we have to write our own YAML manifest to use them)
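For reference, here is a minimal sketch of such a manifest (the `flipcoin-parallel` name and the values below are just examples), applied with a heredoc:
```bash
kubectl apply -f - <<"EOF"
apiVersion: batch/v1
kind: Job
metadata:
  name: flipcoin-parallel
spec:
  completions: 5     # stop once 5 Pods have succeeded
  parallelism: 2     # run at most 2 Pods at the same time
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: flipcoin
        image: alpine
        command: ["sh", "-c", "exit $(($RANDOM%2))"]
EOF
```
(The quoted `"EOF"` keeps the local shell from expanding `$RANDOM` before the manifest reaches the cluster.)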
---
## Scheduling periodic background work
- A Cron Job is a Job that will be executed at specific intervals
(the name comes from the traditional cronjobs executed by the UNIX crond)
- It requires a *schedule*, represented as five space-separated fields:
- minute [0,59]
- hour [0,23]
- day of the month [1,31]
- month of the year [1,12]
- day of the week ([0,6] with 0=Sunday)
- `*` means "all valid values"; `/N` means "every N"
- Example: `*/3 * * * *` means "every three minutes"
---
## Creating a Cron Job
- Let's create a simple job to be executed every three minutes
- Careful: make sure that the job terminates!
(By default, the Cron Job will not wait for a previous Job to finish before starting a new one)
.exercise[
- Create the Cron Job:
```bash
kubectl create cronjob every3mins --schedule="*/3 * * * *" \
--image=alpine -- sleep 10
```
- Check the resource that was created:
```bash
kubectl get cronjobs
```
]
---
## Cron Jobs in action
- At the specified schedule, the Cron Job will create a Job
- The Job will create a Pod
- The Job will make sure that the Pod completes
(re-creating another one if it fails, for instance if its node fails)
.exercise[
- Check the Jobs that are created:
```bash
kubectl get jobs
```
]
(It will take a few minutes before the first job is scheduled.)
---
class: extra-details
## What about `kubectl run` before v1.18?
- Creating a Deployment:
`kubectl run`
- Creating a Pod:
`kubectl run --restart=Never`
- Creating a Job:
`kubectl run --restart=OnFailure`
- Creating a Cron Job:
`kubectl run --restart=OnFailure --schedule=...`
*Avoid using these forms, as they are deprecated since Kubernetes 1.18!*
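For instance, the old and new forms map roughly like this (`sleepy` is just an example name; note that the Job's command should terminate):
```bash
# Deprecated generator-based forms:
kubectl run sleepy --image=alpine --restart=OnFailure -- sleep 10
kubectl run sleepy --image=alpine --restart=OnFailure --schedule="*/3 * * * *" -- sleep 10

# Preferred since Kubernetes 1.18:
kubectl create job sleepy --image=alpine -- sleep 10
kubectl create cronjob sleepy --image=alpine --schedule="*/3 * * * *" -- sleep 10
```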
---
## Beyond `kubectl create`
- As hinted earlier, `kubectl create` doesn't always expose all options
- can't express parallelism or completions of Jobs
- can't express Pods with multiple containers
- can't express healthchecks, resource limits
- etc.
- `kubectl create` and `kubectl run` are *helpers* that generate YAML manifests
- If we write these manifests ourselves, we can use all features and options
- We'll see later how to do that!
???
:EN:- Running batch and cron jobs
:FR:- Tâches périodiques *(cron)* et traitement par lots *(batch)*

View File

@@ -257,3 +257,8 @@ This is the TLS bootstrap mechanism, step by step.
- [kubeadm token](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-token/) command
- [kubeadm join](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/) command (has details about [the join workflow](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/#join-workflow))
???
:EN:- Leveraging TLS bootstrap to join nodes
:FR:- Ajout de nœuds grâce au *TLS bootstrap*

View File

@@ -142,3 +142,8 @@ The list includes the following providers:
- [configuration](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/) (mainly for OpenStack)
- [deployment](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/)
???
:EN:- The Cloud Controller Manager
:FR:- Le *Cloud Controller Manager*

View File

@@ -364,3 +364,8 @@ docker run --rm --net host -v $PWD:/vol \
- [bivac](https://github.com/camptocamp/bivac)
Backup Interface for Volumes Attached to Containers
???
:EN:- Backing up clusters
:FR:- Politiques de sauvegarde

View File

@@ -165,3 +165,12 @@ class: extra-details
- Security advantage (stronger isolation between pods)
Check [this blog post](http://jpetazzo.github.io/2019/02/13/running-kubernetes-without-nodes-with-kiyot/) for more details.
???
:EN:- What happens when the cluster is at, or over, capacity
:EN:- Cluster sizing and scaling
:FR:- Ce qui se passe quand il n'y a plus assez de ressources
:FR:- Dimensionner et redimensionner ses clusters

View File

@@ -501,3 +501,11 @@ class: extra-details
- Then upgrading kubeadm to 1.16.X, etc.
- **Make sure to read the release notes before upgrading!**
???
:EN:- Best practices for cluster upgrades
:EN:- Example: upgrading a kubeadm cluster
:FR:- Bonnes pratiques pour la mise à jour des clusters
:FR:- Exemple : mettre à jour un cluster kubeadm

View File

@@ -574,3 +574,8 @@ done
- This could be useful for embedded platforms with very limited resources
(or lab environments for learning purposes)
???
:EN:- Configuring CNI plugins
:FR:- Configurer des plugins CNI

View File

@@ -401,3 +401,8 @@ class: pic
- IP addresses are associated with *pods*, not with individual containers
Both diagrams used with permission.
???
:EN:- Kubernetes concepts
:FR:- Kubernetes en théorie

View File

@@ -547,3 +547,13 @@ spec:
- With RBAC, we can authorize a user to access configmaps, but not secrets
(since they are two different kinds of resources)
???
:EN:- Managing application configuration
:EN:- Exposing configuration with the downward API
:EN:- Exposing configuration with Config Maps and Secrets
:FR:- Gérer la configuration des applications
:FR:- Configuration au travers de la *downward API*
:FR:- Configuration via les *Config Maps* et *Secrets*

View File

@@ -263,3 +263,8 @@ spec:
#name: web-xyz1234567-pqr89
EOF
```
???
:EN:- Control plane authentication
:FR:- Sécurisation du plan de contrôle

View File

@@ -132,11 +132,33 @@ For a user named `jean.doe`, we will have:
- ServiceAccount `jean.doe` in Namespace `users`
- CertificateSigningRequest `users:jean.doe`
- CertificateSigningRequest `user=jean.doe`
- ClusterRole `users:jean.doe` giving read/write access to that CSR
- ClusterRole `user=jean.doe` giving read/write access to that CSR
- ClusterRoleBinding `users:jean.doe` binding ClusterRole and ServiceAccount
- ClusterRoleBinding `user=jean.doe` binding ClusterRole and ServiceAccount
---
class: extra-details
## About resource name constraints
- Most Kubernetes identifiers and names are fairly restricted
- They generally are DNS-1123 *labels* or *subdomains* (from [RFC 1123](https://tools.ietf.org/html/rfc1123))
- A label is lowercase letters, numbers, dashes; can't start or finish with a dash
- A subdomain is one or multiple labels separated by dots
- Some resources have more relaxed constraints, and can be "path segment names"
(uppercase are allowed, as well as some characters like `#:?!,_`)
- This includes RBAC objects (like Roles, RoleBindings...) and CSRs
- See the [Identifiers and Names](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/identifiers.md) design document and the [Object Names and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#path-segment-names) documentation page for more details
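A quick, illustrative way to see the difference (exact error messages vary across versions):
```bash
# ConfigMap names must be DNS subdomains, so the API server rejects this name:
kubectl create configmap user=jean.doe --from-literal=foo=bar

# ClusterRole names are "path segment names", so '=' is accepted here:
kubectl create clusterrole user=jean.doe --verb=get --resource=certificatesigningrequests
kubectl delete clusterrole user=jean.doe
```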
---
@@ -153,7 +175,7 @@ For a user named `jean.doe`, we will have:
- Create the ServiceAccount, ClusterRole, ClusterRoleBinding for `jean.doe`:
```bash
kubectl apply -f ~/container.training/k8s/users:jean.doe.yaml
kubectl apply -f ~/container.training/k8s/user=jean.doe.yaml
```
]
@@ -195,7 +217,13 @@ For a user named `jean.doe`, we will have:
- Add a new context using that identity:
```bash
kubectl config set-context jean.doe --user=token:jean.doe --cluster=kubernetes
kubectl config set-context jean.doe --user=token:jean.doe --cluster=`kubernetes`
```
(Make sure to adapt the cluster name if yours is different!)
- Use that context:
```bash
kubectl config use-context jean.doe
```
]
@@ -216,7 +244,7 @@ For a user named `jean.doe`, we will have:
- Try to access "our" CertificateSigningRequest:
```bash
kubectl get csr users:jean.doe
kubectl get csr user=jean.doe
```
(This should tell us "NotFound")
@@ -273,7 +301,7 @@ The command above generates:
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: users:jean.doe
name: user=jean.doe
spec:
request: $(base64 -w0 < csr.pem)
usages:
@@ -324,12 +352,12 @@ The command above generates:
- Inspect the CSR:
```bash
kubectl describe csr users:jean.doe
kubectl describe csr user=jean.doe
```
- Approve it:
```bash
kubectl certificate approve users:jean.doe
kubectl certificate approve user=jean.doe
```
]
@@ -347,7 +375,7 @@ The command above generates:
- Retrieve the updated CSR object and extract the certificate:
```bash
kubectl get csr users:jean.doe \
kubectl get csr user=jean.doe \
-o jsonpath={.status.certificate} \
| base64 -d > cert.pem
```
@@ -424,3 +452,8 @@ To be usable in real environments, we would need to add:
- we get strong security *and* convenience
- Systems like Vault also have certificate issuance mechanisms
???
:EN:- Generating user certificates with the CSR API
:FR:- Génération de certificats utilisateur avec la CSR API

View File

@@ -688,3 +688,8 @@ class: extra-details
(by setting their label accordingly)
- This gives us building blocks for canary and blue/green deployments
???
:EN:- Scaling with Daemon Sets
:FR:- Utilisation de Daemon Sets

View File

@@ -172,3 +172,8 @@ The dashboard will then ask you which authentication you want to use.
- It introduces new failure modes
(for instance, if you try to apply YAML from a link that's no longer valid)
???
:EN:- The Kubernetes dashboard
:FR:- Le *dashboard* Kubernetes

View File

@@ -26,3 +26,8 @@
- When we want to change some resource, we update the *spec*
- Kubernetes will then *converge* that resource
???
:EN:- Declarative vs imperative models
:FR:- Modèles déclaratifs et impératifs

View File

@@ -823,3 +823,8 @@ class: extra-details
(it could be as a bare process, or in a container/pod using the host network)
- ... And it expects to be listening on port 6443 with TLS
???
:EN:- Building our own cluster from scratch
:FR:- Construire son cluster à la main

View File

@@ -344,3 +344,14 @@ class: extra-details
- [Dynamic Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/)
- [Aggregation Layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
???
:EN:- Extending the Kubernetes API
:EN:- Custom Resource Definitions (CRDs)
:EN:- The aggregation layer
:EN:- Admission control and webhooks
:FR:- Comment étendre l'API Kubernetes
:FR:- Les CRDs *(Custom Resource Definitions)*
:FR:- Extension via *aggregation layer*, *admission control*, *webhooks*

View File

@@ -237,3 +237,8 @@
- Gitkube can also deploy Helm charts
(instead of raw YAML files)
???
:EN:- GitOps
:FR:- GitOps

View File

@@ -154,9 +154,9 @@ It will use the default success threshold (1 successful attempt = alive).
.exercise[
- Edit `rng-daemonset.yaml` and add the liveness probe
- Edit `rng-deployment.yaml` and add the liveness probe
```bash
vim rng-daemonset.yaml
vim rng-deployment.yaml
```
- Load the YAML for all the resources of DockerCoins:
@@ -333,3 +333,8 @@ class: extra-details
(and have gcr.io/pause take care of the reaping)
- Discussion of this in [Video - 10 Ways to Shoot Yourself in the Foot with Kubernetes, #9 Will Surprise You](https://www.youtube.com/watch?v=QKI-JRs2RIE)
???
:EN:- Adding healthchecks to an app
:FR:- Ajouter des *healthchecks* à une application

View File

@@ -282,3 +282,8 @@ If the Redis process becomes unresponsive, it will be killed.
- check the timestamp of that file from an exec probe
- Writing logs (and checking them from the probe) also works
???
:EN:- Using healthchecks to improve availability
:FR:- Utiliser des *healthchecks* pour améliorer la disponibilité

View File

@@ -237,3 +237,8 @@ We see the components mentioned above: `Chart.yaml`, `templates/`, `values.yaml`
- This can be used for database migrations, backups, notifications, smoke tests ...
- Hooks named `test` are executed only when running `helm test RELEASE-NAME`
???
:EN:- Helm charts format
:FR:- Le format des *Helm charts*

View File

@@ -218,3 +218,8 @@ have details about recommended annotations and labels.
```
]
???
:EN:- Writing a basic Helm chart for the whole app
:FR:- Écriture d'un *chart* Helm simplifié

View File

@@ -121,7 +121,7 @@ This creates a basic chart in the directory `helmcoins`.
helm install COMPONENT-NAME CHART-DIRECTORY
```
- We can also use the following command, which is idempotent:
- We can also use the following command, which is *idempotent*:
```bash
helm upgrade COMPONENT-NAME CHART-DIRECTORY --install
```
@@ -139,6 +139,28 @@ This creates a basic chart in the directory `helmcoins`.
---
class: extra-details
## "Idempotent"
- Idempotent = that can be applied multiple times without changing the result
(the word is commonly used in maths and computer science)
- In this context, this means:
- if the action (installing the chart) wasn't done, do it
- if the action was already done, don't do anything
- Ideally, when such an action fails, it can be retried safely
(as opposed to, e.g., installing a new release each time we run it)
- Another example: `kubectl apply -f some-file.yaml`
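For example, with the `helmcoins` chart from earlier (the `worker` release name is chosen arbitrarily), running the same command twice is safe:
```bash
helm upgrade worker helmcoins --install   # first run: installs the release
helm upgrade worker helmcoins --install   # second run: upgrades the same release in place
```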
---
## Checking what we've done
- Let's see if DockerCoins is working!
@@ -577,3 +599,8 @@ We can look at the definition, but it's fairly complex ...
- We can change the number of workers with `replicaCount`
- And much more!
???
:EN:- Writing better Helm charts for app components
:FR:- Écriture de *charts* composant par composant

View File

@@ -18,6 +18,25 @@
---
## CNCF graduation status
- On April 30th 2020, Helm was the 10th project to *graduate* within the CNCF
.emoji[🎉]
(alongside Containerd, Prometheus, and Kubernetes itself)
- This is an acknowledgement by the CNCF for projects that
*demonstrate thriving adoption, an open governance process,
<br/>
and a strong commitment to community, sustainability, and inclusivity.*
- See [CNCF announcement](https://www.cncf.io/announcement/2020/04/30/cloud-native-computing-foundation-announces-helm-graduation/)
and [Helm announcement](https://helm.sh/blog/celebrating-helms-cncf-graduation/)
---
## Helm concepts
- `helm` is a CLI tool
@@ -417,3 +436,13 @@ All unspecified values will take the default values defined in the chart.
```
]
???
:EN:- Helm concepts
:EN:- Installing software with Helm
:EN:- Helm 2, Helm 3, and the Helm Hub
:FR:- Fonctionnement général de Helm
:FR:- Installer des composants via Helm
:FR:- Helm 2, Helm 3, et le *Helm Hub*

View File

@@ -232,3 +232,8 @@ The chart is in a structured format, but it's entirely captured in this JSON.
(including the full source of the chart, and the values used)
- This allows arbitrary rollbacks, as well as tweaking values even without having access to the source of the chart (or the chart repo) used for deployment
???
:EN:- Deep dive into Helm internals
:FR:- Fonctionnement interne de Helm

View File

@@ -306,3 +306,8 @@ This can also be set with `--cpu-percent=`.
-->
]
???
:EN:- Auto-scaling resources
:FR:- *Auto-scaling* (dimensionnement automatique) des ressources

View File

@@ -718,3 +718,8 @@ We also need:
(create them, promote them, delete them ...)
For inspiration, check [flagger by Weave](https://github.com/weaveworks/flagger).
???
:EN:- The Ingress resource
:FR:- La ressource *ingress*

View File

@@ -155,3 +155,8 @@ For critical services, we might want to precisely control the update process.
- Even better if it's combined with DNS integration
(to facilitate name → ClusterIP resolution)
???
:EN:- Interconnecting clusters
:FR:- Interconnexion de clusters

slides/k8s/kubectl-logs.md Normal file (162 additions)
View File

@@ -0,0 +1,162 @@
# Revisiting `kubectl logs`
- In this section, we assume that we have a Deployment with multiple Pods
(e.g. `pingpong` that we scaled to at least 3 pods)
- We will highlight some of the limitations of `kubectl logs`
---
## Streaming logs of multiple pods
- By default, `kubectl logs` shows us the output of a single Pod
.exercise[
- Try to check the output of the Pods related to a Deployment:
```bash
kubectl logs deploy/pingpong --tail 1 --follow
```
<!--
```wait using pod/pingpong-```
```keys ^C```
-->
]
`kubectl logs` only shows us the logs of one of the Pods.
---
## Viewing logs of multiple pods
- When we specify a deployment name, only one single pod's logs are shown
- We can view the logs of multiple pods by specifying a *selector*
- If we check the pods created by the deployment, they all have the label `app=pingpong`
(this is just a default label that gets added when using `kubectl create deployment`)
.exercise[
- View the last line of log from all pods with the `app=pingpong` label:
```bash
kubectl logs -l app=pingpong --tail 1
```
]
---
## Streaming logs of multiple pods
- Can we stream the logs of all our `pingpong` pods?
.exercise[
- Combine `-l` and `-f` flags:
```bash
kubectl logs -l app=pingpong --tail 1 -f
```
<!--
```wait seq=```
```key ^C```
-->
]
*Note: combining `-l` and `-f` is only possible since Kubernetes 1.14!*
*Let's try to understand why ...*
---
class: extra-details
## Streaming logs of many pods
- Let's see what happens if we try to stream the logs for more than 5 pods
.exercise[
- Scale up our deployment:
```bash
kubectl scale deployment pingpong --replicas=8
```
- Stream the logs:
```bash
kubectl logs -l app=pingpong --tail 1 -f
```
<!-- ```wait error:``` -->
]
We see a message like the following one:
```
error: you are attempting to follow 8 log streams,
but maximum allowed concurency is 5,
use --max-log-requests to increase the limit
```
---
class: extra-details
## Why can't we stream the logs of many pods?
- `kubectl` opens one connection to the API server per pod
- For each pod, the API server opens one extra connection to the corresponding kubelet
- If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server
- This could easily put a lot of stress on the API server
- Prior to Kubernetes 1.14, it was decided to *not* allow multiple connections
- From Kubernetes 1.14, it is allowed, but limited to 5 connections
(this can be changed with `--max-log-requests`)
- For more details about the rationale, see
[PR #67573](https://github.com/kubernetes/kubernetes/pull/67573)
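If we really need to follow more than 5 streams, the limit mentioned above can be raised explicitly, at the cost of more concurrent connections on the API server:
```bash
kubectl logs -l app=pingpong --tail 1 -f --max-log-requests 10
```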
---
## Shortcomings of `kubectl logs`
- We don't see which pod sent which log line
- If pods are restarted / replaced, the log stream stops
- If new pods are added, we don't see their logs
- To stream the logs of multiple pods, we need to write a selector
- There are external tools to address these shortcomings
(e.g.: [Stern](https://github.com/wercker/stern))
---
class: extra-details
## `kubectl logs -l ... --tail N`
- If we run this with Kubernetes 1.12, the last command shows multiple lines
- This is a regression when `--tail` is used together with `-l`/`--selector`
- It always shows the last 10 lines of output for each container
(instead of the number of lines specified on the command line)
- The problem was fixed in Kubernetes 1.13
*See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.*

View File

@@ -384,11 +384,11 @@ class: extra-details
kubectl logs deploy/pingpong --tail 1 --follow
```
- Leave that command running, so that we can keep an eye on these logs
- Stop it with Ctrl-C
<!--
```wait seq=3```
```tmux split-pane -h```
```keys ^C```
-->
]
@@ -411,62 +411,55 @@ class: extra-details
kubectl scale deployment pingpong --replicas 3
```
- Check that we now have multiple pods:
```bash
kubectl get pods
```
]
Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?
We could! But the *deployment* would notice it right away, and scale back to the initial level.
---
## Log streaming
class: extra-details
- Let's look again at the output of `kubectl logs`
## Scaling a Replica Set
(the one we started before scaling up)
- What if we scale the Replica Set instead of the Deployment?
- `kubectl logs` shows us one line per second
- The Deployment would notice it right away and scale back to the initial level
- We could expect 3 lines per second
- The Replica Set makes sure that we have the right number of Pods
(since we should now have 3 pods running `ping`)
- The Deployment makes sure that the Replica Set has the right size
- Let's try to figure out what's happening!
(conceptually, it delegates the management of the Pods to the Replica Set)
- This might seem weird (why this extra layer?) but will soon make sense
(when we look at how rolling updates work!)
---
## Streaming logs of multiple pods
- What happens if we restart `kubectl logs`?
- What happens if we try `kubectl logs` now that we have multiple pods?
.exercise[
- Interrupt `kubectl logs` (with Ctrl-C)
<!--
```tmux last-pane```
```key ^C```
-->
- Restart it:
```bash
kubectl logs deploy/pingpong --tail 1 --follow
kubectl logs deploy/pingpong --tail 3
```
<!--
```wait using pod/pingpong-```
```tmux last-pane```
-->
]
`kubectl logs` will warn us that multiple pods were found, and that it's showing us only one of them.
`kubectl logs` will warn us that multiple pods were found.
Let's leave `kubectl logs` running while we keep exploring.
It is showing us only one of them.
We'll see later how to address that shortcoming.
---
## Resilience
- The *deployment* `pingpong` watches its *replica set*
@@ -524,365 +517,7 @@ Let's leave `kubectl logs` running while we keep exploring.
- The pod is then killed, and `kubectl logs` exits
---
???
## Viewing logs of multiple pods
- When we specify a deployment name, only a single pod's logs are shown
- We can view the logs of multiple pods by specifying a *selector*
- A selector is a logic expression using *labels*
- If we check the pods created by the deployment, they all have the label `app=pingpong`
(this is just a default label that gets added when using `kubectl create deployment`)
.exercise[
- View the last line of log from all pods with the `app=pingpong` label:
```bash
kubectl logs -l app=pingpong --tail 1
```
]
---
### Streaming logs of multiple pods
- Can we stream the logs of all our `pingpong` pods?
.exercise[
- Combine `-l` and `-f` flags:
```bash
kubectl logs -l app=pingpong --tail 1 -f
```
<!--
```wait seq=```
```key ^C```
-->
]
*Note: combining `-l` and `-f` is only possible since Kubernetes 1.14!*
*Let's try to understand why ...*
---
class: extra-details
### Streaming logs of many pods
- Let's see what happens if we try to stream the logs for more than 5 pods
.exercise[
- Scale up our deployment:
```bash
kubectl scale deployment pingpong --replicas=8
```
- Stream the logs:
```bash
kubectl logs -l app=pingpong --tail 1 -f
```
<!-- ```wait error:``` -->
]
We see a message like the following one:
```
error: you are attempting to follow 8 log streams,
but maximum allowed concurency is 5,
use --max-log-requests to increase the limit
```
---
class: extra-details
## Why can't we stream the logs of many pods?
- `kubectl` opens one connection to the API server per pod
- For each pod, the API server opens one extra connection to the corresponding kubelet
- If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server
- This could easily put a lot of stress on the API server
- Prior to Kubernetes 1.14, it was decided to *not* allow multiple connections
- From Kubernetes 1.14, it is allowed, but limited to 5 connections
(this can be changed with `--max-log-requests`)
- For more details about the rationale, see
[PR #67573](https://github.com/kubernetes/kubernetes/pull/67573)
---
## Shortcomings of `kubectl logs`
- We don't see which pod sent which log line
- If pods are restarted / replaced, the log stream stops
- If new pods are added, we don't see their logs
- To stream the logs of multiple pods, we need to write a selector
- There are external tools to address these shortcomings
(e.g.: [Stern](https://github.com/wercker/stern))
---
class: extra-details
## `kubectl logs -l ... --tail N`
- If we run this with Kubernetes 1.12, the last command shows multiple lines
- This is a regression when `--tail` is used together with `-l`/`--selector`
- It always shows the last 10 lines of output for each container
(instead of the number of lines specified on the command line)
- The problem was fixed in Kubernetes 1.13
*See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.*
---
class: extra-details
## Party tricks involving IP addresses
- It is possible to specify an IP address with less than 4 bytes
(example: `127.1`)
- Zeroes are then inserted in the middle
- As a result, `127.1` expands to `127.0.0.1`
- So we can `ping 127.1` to ping `localhost`!
(See [this blog post](https://ma.ttias.be/theres-more-than-one-way-to-write-an-ip-address/
) for more details.)
---
class: extra-details
## More party tricks with IP addresses
- We can also ping `1.1`
- `1.1` will expand to `1.0.0.1`
- This is one of the addresses of Cloudflare's
[public DNS resolver](https://blog.cloudflare.com/announcing-1111/)
- This is a quick way to check connectivity
(if we can reach 1.1, we probably have internet access)
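As a quick sanity check (assuming outbound ICMP traffic is allowed), we can try:
```bash
# If this gets replies, we very probably have internet access
ping -c 3 1.1
```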
---
## Creating other kinds of resources
- Deployments are great for stateless web apps
(as well as workers that keep running forever)
- Jobs are great for "long" background work
("long" being at least minutes our hours)
- CronJobs are great to schedule Jobs at regular intervals
(just like the classic UNIX `cron` daemon with its `crontab` files)
- Pods are great for one-off execution that we don't care about
(because they don't get automatically restarted if something goes wrong)
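For instance, here is a minimal sketch of a one-off Pod (the name `oneoff` is arbitrary; `--restart=Never` is only needed with kubectl 1.17 and older, where `kubectl run` would otherwise create a Deployment):
```bash
kubectl run oneoff --image=alpine --restart=Never -- sh -c 'echo hello && sleep 5'
```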
---
## Creating a Job
- A Job will create a Pod
- If the Pod fails, the Job will create another one
- The Job will keep trying until:
- either a Pod succeeds,
- or we hit the *backoff limit* of the Job (default=6)
.exercise[
- Create a Job that has a 50% chance of success:
```bash
kubectl create job flipcoin --image=alpine -- sh -c 'exit $(($RANDOM%2))'
```
]
---
## Our Job in action
- Our Job will create a Pod named `flipcoin-xxxxx`
- If the Pod succeeds, the Job stops
- If the Pod fails, the Job creates another Pod
.exercise[
- Check the status of the Pod(s) created by the Job:
```bash
kubectl get pods --selector=job-name=flipcoin
```
]
---
class: extra-details
## More advanced jobs
- We can specify a number of "completions" (default=1)
- This indicates how many Pods must run to successful completion
- We can specify the "parallelism" (default=1)
- This indicates how many Pods should be running in parallel
- These options cannot be specified with `kubectl create job`
(we have to write our own YAML manifest to use them)
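Here is a minimal sketch of such a manifest, reusing the coin-flipping example (the name and values are just for illustration):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: flipcoin-batch
spec:
  completions: 5     # we want 5 successful Pods overall
  parallelism: 2     # run at most 2 Pods at the same time
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: flipcoin
        image: alpine
        command: ["sh", "-c", "exit $(($RANDOM%2))"]
```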
---
## Scheduling periodic background work
- A Cron Job is a Job that will be executed at specific intervals
(the name comes from the traditional cronjobs executed by the UNIX crond)
- It requires a *schedule*, represented as five space-separated fields:
- minute [0,59]
- hour [0,23]
- day of the month [1,31]
- month of the year [1,12]
- day of the week ([0,6] with 0=Sunday)
- `*` means "all valid values"; `/N` means "every N"
- Example: `*/3 * * * *` means "every three minutes"
---
## Creating a Cron Job
- Let's create a simple job to be executed every three minutes
- Careful: make sure that the job terminates!
(By default, the Cron Job won't wait for a previous Job to finish before starting a new one)
.exercise[
- Create the Cron Job:
```bash
kubectl create cronjob every3mins --schedule="*/3 * * * *" \
--image=alpine -- sleep 10
```
- Check the resource that was created:
```bash
kubectl get cronjobs
```
]
---
## Cron Jobs in action
- At the specified schedule, the Cron Job will create a Job
- The Job will create a Pod
- The Job will make sure that the Pod completes
(re-creating another one if it fails, for instance if its node fails)
.exercise[
- Check the Jobs that are created:
```bash
kubectl get jobs
```
]
(It will take a few minutes before the first job is scheduled.)
---
class: extra-details
## What about `kubectl run` before v1.18?
- Creating a Deployment:
`kubectl run`
- Creating a Pod:
`kubectl run --restart=Never`
- Creating a Job:
`kubectl run --restart=OnFailure`
- Creating a Cron Job:
`kubectl run --restart=OnFailure --schedule=...`
*Avoid using these forms, as they are deprecated since Kubernetes 1.18!*
---
## Beyond `kubectl create`
- As hinted earlier, `kubectl create` doesn't always expose all options
- can't express parallelism or completions of Jobs
- can't express Pods with multiple containers
- can't express healthchecks, resource limits
- etc.
- `kubectl create` and `kubectl run` are *helpers* that generate YAML manifests
- If we write these manifests ourselves, we can use all features and options
- We'll see later how to do that!
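In the meantime, a convenient middle ground is to let these helpers generate the manifest, then edit it by hand (a sketch; with kubectl 1.18+ the flag is `--dry-run=client`, older versions use `--dry-run`):
```bash
# Print the generated manifest without creating anything
kubectl create job flipcoin2 --image=alpine --dry-run=client -o yaml \
    -- sh -c 'exit $(($RANDOM%2))' > job.yaml
# Edit job.yaml (e.g. add completions or parallelism), then apply it
kubectl apply -f job.yaml
```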
:EN:- Running pods and deployments
:FR:- Créer un pod et un déploiement

View File

@@ -438,3 +438,13 @@ class: extra-details
- They can also handle TLS certificates, URL rewriting ...
- They require an *Ingress Controller* to function
???
:EN:- Service discovery and load balancing
:EN:- Accessing pods through services
:EN:- Service types: ClusterIP, NodePort, LoadBalancer
:FR:- Exposer un service
:FR:- Différents types de services : ClusterIP, NodePort, LoadBalancer
:FR:- Utiliser CoreDNS pour la *service discovery*

View File

@@ -578,3 +578,8 @@ $ curl -k https://10.96.0.1
- Code running in pods can connect to services using their name
(e.g. https://kubernetes/...)
???
:EN:- Getting started with kubectl
:FR:- Se familiariser avec kubectl

View File

@@ -145,3 +145,8 @@ class: extra-details
- Some solutions can fill multiple roles
(e.g. kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy)
???
:EN:- The Kubernetes network model
:FR:- Le modèle réseau de Kubernetes

View File

@@ -31,23 +31,17 @@
---
## Cloning some repos
## Cloning the repository
- We will need two repositories:
- We will need to clone the training repository
- the first one has the "DockerCoins" demo app
- It has the DockerCoins demo app ...
- the second one has these slides, some scripts, more manifests ...
- ... as well as these slides, some scripts, more manifests
.exercise[
- Clone the kubercoins repository on `node1`:
```bash
git clone https://github.com/jpetazzo/kubercoins
```
- Clone the container.training repository as well:
- Clone the repository on `node1`:
```bash
git clone https://@@GITREPO@@
```
@@ -62,9 +56,9 @@ Without further ado, let's start this application!
.exercise[
- Apply all the manifests from the kubercoins repository:
- Apply the manifest for dockercoins:
```bash
kubectl apply -f kubercoins/
kubectl apply -f ~/container.training/k8s/dockercoins.yaml
```
]
@@ -242,3 +236,8 @@ https://@@GITREPO@@/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/wo
A drawing area should show up, and after a few seconds, a blue
graph will appear.
???
:EN:- Deploying a sample app with YAML manifests
:FR:- Lancer une application de démo avec du YAML

View File

@@ -8,45 +8,164 @@
- They are left untouched by Kustomize
- Kustomize lets us define *overlays* that extend or change the resource files
- Kustomize lets us define *kustomizations*
- A *kustomization* is conceptually similar to a *layer*
- Technically, a *kustomization* is a file named `kustomization.yaml`
(or a directory containing that file + additional files)
---
## Differences with Helm
## What's in a kustomization
- Helm charts use placeholders `{{ like.this }}`
- A kustomization can do any combination of the following:
- Kustomize "bases" are standard Kubernetes YAML
- include other kustomizations
- It is possible to use an existing set of YAML as a Kustomize base
- include Kubernetes resources defined in YAML files
- As a result, writing a Helm chart is more work ...
- patch Kubernetes resources (change values)
- ... But Helm charts are also more powerful; e.g. they can:
- add labels or annotations to all resources
- use flags to conditionally include resources or blocks
- specify ConfigMaps and Secrets from literal values or local files
- check if a given Kubernetes API group is supported
- [and much more](https://helm.sh/docs/chart_template_guide/)
(... And a few more advanced features that we won't cover today!)
---
## Kustomize concepts
## A simple kustomization
- Kustomize needs a `kustomization.yaml` file
This features a Deployment, Service, and Ingress (in separate files),
and a couple of patches (to change the number of replicas and the hostname
used in the Ingress).
- That file can be a *base* or a *variant*
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesStrategicMerge:
- scale-deployment.yaml
- ingress-hostname.yaml
resources:
- deployment.yaml
- service.yaml
- ingress.yaml
```
- If it's a *base*:
On the next slide, let's see a more complex example ...
- it lists YAML resource files to use
---
- If it's a *variant* (or *overlay*):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
add-this-to-all-my-resources: please
patchesStrategicMerge:
- prod-scaling.yaml
- prod-healthchecks.yaml
bases:
- api/
- frontend/
- db/
- github.com/example/app?ref=tag-or-branch
resources:
- ingress.yaml
- permissions.yaml
configMapGenerator:
- name: appconfig
files:
- global.conf
- local.conf=prod.conf
```
- it refers to (at least) one *base*
---
- and some *patches*
## Glossary
- A *base* is a kustomization that is referred to by other kustomizations
- An *overlay* is a kustomization that refers to other kustomizations
- A kustomization can be both a base and an overlay at the same time
(a kustomization can refer to another, which can refer to a third)
- A *patch* describes how to alter an existing resource
(e.g. to change the image in a Deployment; or scaling parameters; etc.)
- A *variant* is the final outcome of applying bases + overlays
(See the [kustomize glossary](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md) for more definitions!)
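For instance, a *patch* like the `scale-deployment.yaml` shown earlier could be as small as this (a sketch; the Deployment name is hypothetical):
```yaml
# scale-deployment.yaml: a strategic merge patch only needs the fields
# that identify the resource, plus the fields that should change
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 10
```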
---
## What Kustomize *cannot* do
- By design, there are a number of things that Kustomize won't do
- For instance:
- using command-line arguments or environment variables to generate a variant
- overlays can only *add* resources, not *remove* them
- See the full list of [eschewed features](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/eschewedFeatures.md) for more details
---
## Kustomize workflows
- The Kustomize documentation proposes two different workflows
- *Bespoke configuration*
- base and overlays managed by the same team
- *Off-the-shelf configuration* (OTS)
- base and overlays managed by different teams
- base is regularly updated by "upstream" (e.g. a vendor)
- our overlays and patches should (hopefully!) apply cleanly
- we may regularly update the base, or use a remote base
---
## Remote bases
- Kustomize can fetch remote bases using the Hashicorp go-getter library
- Examples:
github.com/jpetazzo/kubercoins (remote git repository)
github.com/jpetazzo/kubercoins?ref=kustomize (specific tag or branch)
https://releases.hello.io/k/1.0.zip (remote archive)
https://releases.hello.io/k/1.0.zip//some-subdir (subdirectory in archive)
- See [hashicorp/go-getter URL format docs](https://github.com/hashicorp/go-getter#url-format) for more examples
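As a sketch, a kustomization using one of these remote bases together with a local patch could look like this:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- github.com/jpetazzo/kubercoins?ref=kustomize
patchesStrategicMerge:
- scale-deployment.yaml
```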
---
## Managing `kustomization.yaml`
- There are many ways to manage `kustomization.yaml` files, including:
- web wizards like [Replicated Ship](https://www.replicated.com/ship/)
- the `kustomize` CLI
- opening the file with our favorite text editor
- Let's see these in action!
---
@@ -199,3 +318,63 @@
]
Note: it might take a minute or two for the worker to start.
---
## Working with the `kustomize` CLI
- This is another way to get started
- General workflow:
`kustomize create` to generate an empty `kustomization.yaml` file
`kustomize edit add resource` to add Kubernetes YAML files to it
`kustomize edit add patch` to add patches to said resources
`kustomize build | kubectl apply -f-` or `kubectl apply -k .`
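A minimal session could look like this (assuming the YAML files already exist in the current directory):
```bash
kustomize create                             # writes an empty kustomization.yaml
kustomize edit add resource deployment.yaml  # register our resources
kustomize edit add resource service.yaml
kustomize build | kubectl apply -f-          # render and apply the result
```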
---
## `kubectl apply -k`
- Kustomize has been integrated in `kubectl`
- The `kustomize` tool is still needed if we want to use `create`, `edit`, ...
- Also, warning: `kubectl apply -k` embeds a slightly older version of `kustomize`!
- In recent versions of `kustomize`, bases can be listed in `resources`
(and `kustomize edit add base` will add its arguments to `resources`)
- `kubectl apply -k` requires bases to be listed in `bases`
(so after using `kustomize edit add base`, we need to fix `kustomization.yaml`)
---
## Differences with Helm
- Helm charts use placeholders `{{ like.this }}`
- Kustomize "bases" are standard Kubernetes YAML
- It is possible to use an existing set of YAML as a Kustomize base
- As a result, writing a Helm chart is more work ...
- ... But Helm charts are also more powerful; e.g. they can:
- use flags to conditionally include resources or blocks
- check if a given Kubernetes API group is supported
- [and much more](https://helm.sh/docs/chart_template_guide/)
???
:EN:- Packaging and running apps with Kustomize
:FR:- *Packaging* d'applications avec Kustomize

View File

@@ -0,0 +1,202 @@
# Labels and annotations
- Most Kubernetes resources can have *labels* and *annotations*
- Both labels and annotations are arbitrary strings
(with some limitations that we'll explain in a minute)
- Both labels and annotations can be added, removed, or changed dynamically
- This can be done with:
- the `kubectl edit` command
- the `kubectl label` and `kubectl annotate` commands
- ... many other ways! (`kubectl apply -f`, `kubectl patch`, ...)
---
## Viewing labels and annotations
- Let's see what we get when we create a Deployment
.exercise[
- Create a Deployment:
```bash
kubectl create deployment clock --image=jpetazzo/clock
```
- Look at its annotations and labels:
```bash
kubectl describe deployment clock
```
]
So, what do we get?
---
## Labels and annotations for our Deployment
- We see one label:
```
Labels: app=clock
```
- This is added by `kubectl create deployment`
- And one annotation:
```
Annotations: deployment.kubernetes.io/revision: 1
```
- This is to keep track of successive versions when doing rolling updates
---
## And for the related Pod?
- Let's look up the Pod that was created and check it too
.exercise[
- Find the name of the Pod:
```bash
kubectl get pods
```
- Display its information:
```bash
kubectl describe pod clock-xxxxxxxxxx-yyyyy
```
]
So, what do we get?
---
## Labels and annotations for our Pod
- We see two labels:
```
Labels: app=clock
pod-template-hash=xxxxxxxxxx
```
- `app=clock` comes from `kubectl create deployment` too
- `pod-template-hash` was assigned by the Replica Set
(when we do rolling updates, each set of Pods will have a different hash)
- There are no annotations:
```
Annotations: <none>
```
---
## Selectors
- A *selector* is an expression matching labels
- It will restrict a command to the objects matching *at least* all these labels
.exercise[
- List all the pods with at least `app=clock`:
```bash
kubectl get pods --selector=app=clock
```
- List all the pods with a label `app`, regardless of its value:
```bash
kubectl get pods --selector=app
```
]
---
## Setting labels and annotations
- The easiest method is to use `kubectl label` and `kubectl annotate`
.exercise[
- Set a label on the `clock` Deployment:
```bash
kubectl label deployment clock color=blue
```
- Check it out:
```bash
kubectl describe deployment clock
```
]
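Annotations work the same way; for example (the key and value below are made up):
```bash
kubectl annotate deployment clock what-this-app-does="print the time every second"
```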
---
class: extra-details
## More on selectors
- If a selector has multiple labels, it means "match at least these labels"
Example: `--selector=app=frontend,release=prod`
- `--selector` can be abbreviated as `-l` (for **l**abels)
We can also use negative selectors
Example: `--selector=app!=clock`
- Selectors can be used with most `kubectl` commands
Examples: `kubectl delete`, `kubectl label`, ...
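A few examples combining these (the labels other than `app=clock` are hypothetical):
```bash
kubectl get pods -l app=frontend,release=prod   # both labels must match
kubectl get pods -l 'app!=clock'                # negative selector
kubectl delete pods -l app=clock                # selectors work with delete, label, etc.
```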
---
## Other ways to view labels
- We can use the `--show-labels` flag with `kubectl get`
.exercise[
- Show labels for a bunch of objects:
```bash
kubectl get --show-labels po,rs,deploy,svc,no
```
]
---
## Differences between labels and annotations
- The *key* for both labels and annotations:
- must start and end with a letter or digit
- can also have `.` `-` `_` (but not in first or last position)
- can be up to 63 characters, or 253 + `/` + 63
- Label *values* are up to 63 characters, with the same restrictions
- Annotation *values* can have arbitrary characters (yes, even binary)
- Maximum length isn't defined
(dozens of kilobytes is fine, hundreds maybe not so much)
???
:EN:- Labels and annotations
:FR:- *Labels* et annotations

View File

@@ -246,3 +246,10 @@
(when we can't or won't dedicate a whole disk to a volume)
- It's possible to mix both (using distinct Storage Classes)
???
:EN:- Static vs dynamic volume provisioning
:EN:- Example: local persistent volume provisioner
:FR:- Création statique ou dynamique de volumes
:FR:- Exemple : création de volumes locaux

View File

@@ -193,3 +193,8 @@ class: extra-details
]
We can now utilize the cluster exactly as if we're logged into a node, except that it's remote.
???
:EN:- Working with remote Kubernetes clusters
:FR:- Travailler avec des *clusters* distants

View File

@@ -145,3 +145,8 @@ But this is outside of the scope of this chapter.
The YAML file that we used creates all the resources in the
`default` namespace, for simplicity. In a real scenario, you will
create the resources in the `kube-system` namespace or in a dedicated namespace.
???
:EN:- Centralizing logs
:FR:- Centraliser les logs

View File

@@ -45,7 +45,7 @@ Exactly what we need!
---
## Installing Stern
## Checking if Stern is installed
- Run `stern` (without arguments) to check if it's installed:
@@ -57,7 +57,17 @@ Exactly what we need!
stern pod-query [flags]
```
- If it is not installed, the easiest method is to download a [binary release](https://github.com/wercker/stern/releases)
- If it's missing, let's see how to install it
---
## Installing Stern
- Stern is written in Go, and Go programs are usually shipped as a single binary
- We just need to download that binary and put it in our `PATH`!
- Binary releases are available [here](https://github.com/wercker/stern/releases) on GitHub
- The following commands will install Stern on a Linux Intel 64 bit machine:
```bash
@@ -66,7 +76,7 @@ Exactly what we need!
sudo chmod +x /usr/local/bin/stern
```
- On OS X, just `brew install stern`
- On macOS, we can also `brew install stern` or `port install stern`
<!-- ##VERSION## -->
@@ -149,3 +159,8 @@ Exactly what we need!
-->
]
???
:EN:- Viewing pod logs from the CLI
:FR:- Consulter les logs des pods depuis la CLI

View File

@@ -80,3 +80,8 @@ If it shows our nodes and their CPU and memory load, we're good!
- kube-resource-report can generate HTML reports
(https://github.com/hjacobs/kube-resource-report)
???
:EN:- The *core metrics pipeline*
:FR:- Le *core metrics pipeline*

View File

@@ -532,3 +532,8 @@ Sometimes it works, sometimes it doesn't. Why?
- We want to automate all these steps
- We want something that works on all networks
???
:EN:- Connecting nodes and pods
:FR:- Interconnecter les nœuds et les pods

View File

@@ -365,3 +365,7 @@ Note: we could have used `--namespace=default` for the same result.
- Pro-tip: install it on your machine during the next break!
???
:EN:- Organizing resources with Namespaces
:FR:- Organiser les ressources avec des *namespaces*

View File

@@ -446,3 +446,8 @@ troubleshoot easily, without having to poke holes in our firewall.
- a [very good talk about network policies](https://www.youtube.com/watch?list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb&v=3gGpMmYeEO8) at KubeCon North America 2017
- a repository of [ready-to-use recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for network policies
???
:EN:- Isolating workloads with Network Policies
:FR:- Isolation réseau avec les *network policies*

View File

@@ -377,3 +377,8 @@ class: extra-details
- It should now say "Signature Verified"
]
???
:EN:- Authenticating with OIDC
:FR:- S'identifier avec OIDC

View File

@@ -1,3 +1,35 @@
# Designing an operator
- Once we understand CRDs and operators, it's tempting to use them everywhere
- Yes, we can do (almost) everything with operators ...
- ... But *should we?*
- Very often, the answer is **“no!”**
- Operators are powerful, but significantly more complex than other solutions
---
## When should we (not) use operators?
- Operators are great if our app needs to react to cluster events
(nodes or pods going down, and requiring extensive reconfiguration)
- Operators *might* be helpful to encapsulate complexity
(manipulate one single custom resource for an entire stack)
- Operators are probably overkill if a Helm chart would suffice
- That being said, if we really want to write an operator ...
Read on!
---
## What does it take to write an operator?
- Writing a quick-and-dirty operator, or a POC/MVP, is easy
@@ -356,3 +388,8 @@ class: extra-details
(this is used e.g. by the metrics server)
- [This documentation page](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#choosing-a-method-for-adding-custom-resources) compares the features of CRDs and API aggregation
???
:EN:- Guidelines to design our own operators
:FR:- Comment concevoir nos propres opérateurs

View File

@@ -615,3 +615,11 @@ After the Kibana UI loads, we need to click around a bit
*Operators can be very powerful.
<br/>
But we need to know exactly the scenarios that they can handle.*
???
:EN:- Kubernetes operators
:EN:- Deploying ElasticSearch with ECK
:FR:- Les opérateurs
:FR:- Déployer ElasticSearch avec ECK

View File

@@ -162,3 +162,8 @@ Yes, this may take a little while to update. *(Narrator: it was DNS.)*
--
*Alright, we're back to where we started, when we were running on a single node!*
???
:EN:- Running our demo app on Kubernetes
:FR:- Faire tourner l'application de démo sur Kubernetes

View File

@@ -180,3 +180,8 @@ class: extra-details
]
As always, the [documentation](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) has useful extra information and pointers.
???
:EN:- Owners and dependents
:FR:- Liens de parenté entre les ressources

View File

@@ -287,7 +287,7 @@
- Try to create a Deployment:
```bash
kubectl run testpsp2 --image=nginx
kubectl create deployment testpsp2 --image=nginx
```
- Look at existing resources:
@@ -350,7 +350,7 @@ We can get hints at what's happening by looking at the ReplicaSet and Events.
- Create a Deployment as well:
```bash
kubectl run testpsp4 --image=nginx
kubectl create deployment testpsp4 --image=nginx
```
- Confirm that the Deployment is *not* creating any Pods:
@@ -531,3 +531,8 @@ class: extra-details
```
]
???
:EN:- Preventing privilege escalation with Pod Security Policies
:FR:- Limiter les droits des conteneurs avec les *Pod Security Policies*

View File

@@ -678,3 +678,11 @@ were inspired by [Portworx examples on Katacoda](https://katacoda.com/portworx/s
- [HA PostgreSQL on Kubernetes with Portworx](https://www.katacoda.com/portworx/scenarios/px-k8s-postgres-all-in-one)
(with adaptations to use a Stateful Set and simplify PostgreSQL's setup)
???
:EN:- Using highly available persistent volumes
:EN:- Example: deploying a database that can withstand node outages
:FR:- Utilisation de volumes à haute disponibilité
:FR:- Exemple : déployer une base de données survivant à la défaillance d'un nœud

View File

@@ -562,3 +562,8 @@ class: extra-details
Don't panic if you don't know these tools!
...But make sure at least one person in your team is on it 💯
???
:EN:- Collecting metrics with Prometheus
:FR:- Collecter des métriques avec Prometheus

View File

@@ -536,3 +536,15 @@ services.nodeports 0 0
- [static demo](https://hjacobs.github.io/kube-resource-report/sample-report/output/index.html)
|
[live demo](https://kube-resource-report.demo.j-serv.de/applications.html)
???
:EN:- Setting compute resource limits
:EN:- Defining default policies for resource usage
:EN:- Managing cluster allocation and quotas
:EN:- Resource management in practice
:FR:- Allouer et limiter les ressources des conteneurs
:FR:- Définir des ressources par défaut
:FR:- Gérer les quotas de ressources au niveau du cluster
:FR:- Conseils pratiques

View File

@@ -437,3 +437,12 @@ class: extra-details
]
]
???
:EN:- Rolling updates
:EN:- Rolling back a bad deployment
:FR:- Mettre à jour un déploiement
:FR:- Concept de *rolling update* et *rollback*
:FR:- Paramétrer la vitesse de déploiement

View File

@@ -200,3 +200,8 @@ Now we can access the IP addresses of our services through `$HASHER` and `$RNG`.
- `rng` is not (it should take about 700 milliseconds if there are 10 workers)
- Something is wrong with `rng`, but ... what?
???
:EN:- Scaling up our demo app
:FR:- *Scale up* de l'application de démo

View File

@@ -32,11 +32,7 @@
- Doesn't set up the overlay network
- Doesn't set up multi-master (no high availability)
--
(At least ... not yet! Though it's [experimental in 1.12](https://kubernetes.io/docs/setup/independent/high-availability/).)
- [Some extra steps](https://kubernetes.io/docs/setup/independent/high-availability/) to support HA control plane
--
@@ -44,18 +40,29 @@
---
## Managed options
- On AWS: [EKS](https://aws.amazon.com/eks/),
[eksctl](https://eksctl.io/)
- On Azure: [AKS](https://azure.microsoft.com/services/kubernetes-service/)
- On DigitalOcean: [DOKS](https://www.digitalocean.com/products/kubernetes/)
- On Google Cloud: [GKE](https://cloud.google.com/kubernetes-engine/)
- On Linode: [LKE](https://www.linode.com/products/kubernetes/)
- On OVHcloud: [Managed Kubernetes Service](https://www.ovhcloud.com/en/public-cloud/kubernetes/)
- On Scaleway: [Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/)
- and much more!
---
## Other deployment options
- [AKS](https://azure.microsoft.com/services/kubernetes-service/):
managed Kubernetes on Azure
- [GKE](https://cloud.google.com/kubernetes-engine/):
managed Kubernetes on Google Cloud
- [EKS](https://aws.amazon.com/eks/),
[eksctl](https://eksctl.io/):
managed Kubernetes on AWS
- [kops](https://github.com/kubernetes/kops):
customizable deployments on AWS, Digital Ocean, GCE (beta), vSphere (alpha)
@@ -92,3 +99,8 @@
- For a longer list, check the Kubernetes documentation:
<br/>
it has a great guide to [pick the right solution](https://kubernetes.io/docs/setup/#production-environment) to set up Kubernetes.
???
:EN:- Overview of the kubeadm installer
:FR:- Survol de kubeadm

View File

@@ -250,6 +250,11 @@ with a cloud provider
- OVH
- Scaleway (private beta)
- Scaleway
- ...
???
:EN:- Installing a managed cluster
:FR:- Installer un cluster infogéré

View File

@@ -108,3 +108,8 @@
<br/>(do they need training?)
- etc.
???
:EN:- Various ways to set up Kubernetes
:FR:- Différentes méthodes pour installer Kubernetes

View File

@@ -115,4 +115,9 @@
There might be a long pause before the first layer is pulled,
because the API behind `docker pull` doesn't allow streaming build logs, and there is no feedback during the build.
It is possible to view the build logs by setting up an account on [ctr.run](https://ctr.run/).
???
:EN:- Shipping images to Kubernetes
:FR:- Déployer des images sur notre cluster

View File

@@ -600,3 +600,13 @@ This will trigger the following actions.
5. The PersistentVolumeClaimBinder associates the PVs and the PVCs together.
6. PVCs are now bound, the Pods can start.
???
:EN:- Deploying apps with Stateful Sets
:EN:- Example: deploying a Consul cluster
:EN:- Understanding Persistent Volume Claims and Storage Classes
:FR:- Déployer une application avec un *Stateful Set*
:FR:- Example : lancer un cluster Consul
:FR:- Comprendre les *Persistent Volume Claims* et *Storage Classes*

View File

@@ -239,3 +239,8 @@ The `-node1` suffix was added automatically by kubelet.
If we delete the pod (with `kubectl delete`), it will be recreated immediately.
To delete the pod, we need to delete (or move) the manifest file.
???
:EN:- Static pods
:FR:- Les *static pods*

View File

@@ -88,3 +88,8 @@ class: extra-details
```
- Check [the documentation](https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl) for the whole story about compatibility
???
:EN:- Kubernetes versioning and compatibility
:FR:- Les versions de Kubernetes et leur compatibilité

View File

@@ -404,7 +404,7 @@ spec:
initContainers:
- name: git
image: alpine
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/
@@ -417,9 +417,12 @@ spec:
.exercise[
- Repeat the same operation as earlier
- Create the pod:
```bash
kubectl create -f ~/container.training/k8s/nginx-4-with-init.yaml
```
(try to send HTTP requests as soon as the pod comes up)
- Try to send HTTP requests as soon as the pod comes up
<!--
```key ^D```
@@ -467,3 +470,11 @@ spec:
- A volume survives across container restarts
- A volume is destroyed (or, for remote storage, detached) when the pod is destroyed
???
:EN:- Sharing data between containers with volumes
:EN:- When and how to use Init Containers
:FR:- Partager des données grâce aux volumes
:FR:- Quand et comment utiliser un *Init Container*

Some files were not shown because too many files have changed in this diff Show More