Compare commits

...

87 Commits

Author SHA1 Message Date
Jerome Petazzoni
a12117ab80 index-redirect 2020-08-26 13:43:27 +02:00
Jerome Petazzoni
63e37e8c16 slides 2020-08-26 13:29:41 +02:00
Jerome Petazzoni
efdc4fcfa9 bump versions 2020-08-26 12:38:51 +02:00
Jerome Petazzoni
c32fcc81bb Tweak 1-day content 2020-08-26 09:10:15 +02:00
Jerome Petazzoni
f6930042bd Mention downward API fields 2020-08-26 09:05:24 +02:00
Jerome Petazzoni
2e2767b090 Bump up kubectl versions in remote section 2020-08-19 13:38:49 +02:00
Jerome Petazzoni
115cc5e0c0 Add support for Scaleway Cloud instances 2020-08-15 14:02:24 +02:00
Jerome Petazzoni
d252fe254b Update DNS script 2020-08-15 12:34:08 +02:00
Jerome Petazzoni
7d96562042 Minor updates after LKE testing 2020-08-12 19:22:57 +02:00
Jerome Petazzoni
4ded8c699d typo 2020-08-05 18:23:37 +02:00
Jérôme Petazzoni
620a3df798 Merge pull request #563 from lucas-foodles/patch-1
Fix typo
2020-08-05 17:28:34 +02:00
Jerome Petazzoni
d28723f07a Add fwdays workshops 2020-08-04 17:21:31 +02:00
Jerome Petazzoni
f2334d2d1b Add skillsmatter dates 2020-07-30 19:11:43 +02:00
Jerome Petazzoni
ddf79eebc7 Add skillsmatter 2020-07-30 19:09:42 +02:00
Jerome Petazzoni
6467264ff5 Add Bret coupon codes; high five online october 2020-07-30 12:11:29 +02:00
lucas-foodles
55fcff9333 Fix typo 2020-07-29 10:46:17 +02:00
Jerome Petazzoni
8fb7ea3908 Use 'sudo port', as per #529 2020-07-09 15:32:21 +02:00
Jérôme Petazzoni
7dd72f123f Merge pull request #562 from guilhem/patch-1
mismatch requests/limits
2020-07-07 15:35:46 +02:00
Guilhem Lettron
ff95066006 mismatch requests/limits
Burstable are killed when node is overloaded and exceeded requests
2020-07-07 13:55:28 +02:00
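For context on that fix: when a container's requests are lower than its limits, its pod lands in the "Burstable" QoS class, and on an overloaded node it becomes an eviction candidate as soon as it uses more than it requested. A minimal shell sketch (deployment name and label are illustrative, not from the slides):

# Create a throwaway deployment and give it mismatched requests/limits,
# which puts its pods in the Burstable QoS class.
kubectl create deployment web --image=nginx
kubectl set resources deployment web \
  --requests=cpu=100m,memory=64Mi \
  --limits=cpu=500m,memory=256Mi
# Check the resulting QoS class on one of the pods.
kubectl get pods -l app=web -o jsonpath='{.items[0].status.qosClass}{"\n"}'
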
Jerome Petazzoni
8146c4dabe Add CRD that I had forgotten 2020-07-01 18:15:33 +02:00
Jerome Petazzoni
17aea33beb Add config for Traefik v2 2020-07-01 18:15:23 +02:00
Jerome Petazzoni
9770f81a1c Update DaemonSet in filebeat example to apps/v1 2020-07-01 16:55:48 +02:00
Jerome Petazzoni
0cb9095303 Fix up CRDs and add better openapiv3 schema validation 2020-07-01 16:53:51 +02:00
Jerome Petazzoni
ffded8469b Clean up socat deployment (even if we don't use it anymore) 2020-07-01 16:10:40 +02:00
Jerome Petazzoni
0e892cf8b4 Fix indentation in volume example 2020-06-28 12:10:01 +02:00
Jerome Petazzoni
b87efbd6e9 Update etcd slide 2020-06-26 07:32:53 +02:00
Jerome Petazzoni
1a24b530d6 Update Kustomize version 2020-06-22 08:33:21 +02:00
Jerome Petazzoni
122ffec5c2 kubectl get --show-labels and -L 2020-06-16 22:50:38 +02:00
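A quick sketch of the two flags named in that commit subject (resource type and label key assumed for illustration):

# Show every label of each pod in an extra LABELS column.
kubectl get pods --show-labels
# Show only chosen label keys, one column per key (-L is short for --label-columns).
kubectl get pods -L app
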
Jerome Petazzoni
276a2dbdda Fix titles 2020-06-04 12:55:42 +02:00
Jerome Petazzoni
2836b58078 Add ENIX high five sessions 2020-06-04 12:53:25 +02:00
Jerome Petazzoni
0d065788a4 Improve how we display dates (sounds silly but with longer online events it becomes necessary) 2020-06-04 12:42:44 +02:00
Jerome Petazzoni
14271a4df0 Rehaul 'setup k8s' sections 2020-06-03 16:54:41 +02:00
Jerome Petazzoni
412d029d0c Tweak self-hosted options 2020-06-02 17:45:51 +02:00
Jerome Petazzoni
f960230f8e Reorganize managed options; add Scaleway 2020-06-02 17:28:23 +02:00
Jerome Petazzoni
774c8a0e31 Rewrite intro to the authn/authz module 2020-06-01 23:43:33 +02:00
Jerome Petazzoni
4671a981a7 Add deployment automation steps
The settings file can now specify an optional list of steps.
After creating a bunch of instances, the steps are then
automatically executed. This helps since virtually all
deployments will be a sequence of 'start + deploy + otheractions'.

It also helps to automatically execute steps like webssh
and tailhist (since I tend to forget them often).
2020-06-01 20:58:23 +02:00
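A sketch of the mechanism described above; the file path is illustrative, and the step names are taken from the settings-file diffs further down. Appending an optional "steps" list makes the start command run each step automatically after the instances are created, retrying a failing step up to 10 times before giving up:

# Append a "steps" list to a settings file (path illustrative).
cat >> settings/kube.yaml <<'EOF'
steps:
  - deploy
  - webssh
  - tailhist
  - kube
  - cards
  - kubetest
EOF
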
Jerome Petazzoni
b9743a5f8c Simplify Portworx setup and update it for k8s 1.18 2020-06-01 14:41:25 +02:00
Jerome Petazzoni
df4980750c Bump up ship version 2020-05-27 17:41:22 +02:00
Jerome Petazzoni
9467c7309e Update shortlinks 2020-05-17 20:21:15 +02:00
Jerome Petazzoni
86b0380a77 Update operator links 2020-05-13 20:29:59 +02:00
Jerome Petazzoni
eb9052ae9a Add twitch chat info 2020-05-07 13:24:35 +02:00
Jerome Petazzoni
8f85332d8a Advanced Dockerfiles -> Advanced Dockerfile Syntax 2020-05-06 17:25:03 +02:00
Jerome Petazzoni
0479ad2285 Add force redirects 2020-05-06 17:22:13 +02:00
Jerome Petazzoni
986d7eb9c2 Add foreword to operators design section 2020-05-05 17:24:05 +02:00
Jerome Petazzoni
3fafbb8d4e Add kustomize CLI and completion 2020-05-04 16:47:26 +02:00
Jerome Petazzoni
5a24df3fd4 Add details on Kustomize 2020-05-04 16:25:35 +02:00
Jerome Petazzoni
1bbfba0531 Add definition of idempotent 2020-05-04 02:18:05 +02:00
Jerome Petazzoni
8d98431ba0 Add Helm graduation status 2020-05-04 02:09:00 +02:00
Jerome Petazzoni
c31c81a286 Allow overriding YAML desc through env vars 2020-05-04 00:54:34 +02:00
Jerome Petazzoni
a0314fc5f5 Keep --restart=Never for folks running 1.17- 2020-05-03 17:08:32 +02:00
Jérôme Petazzoni
3f088236a4 Merge pull request #557 from barpilot/psp
psp: update deprecated parts
2020-05-03 17:07:41 +02:00
Jerome Petazzoni
ce4e2ffe46 Add sleep command in init container example
It can be tricky to illustrate what's going on here, since installing
git and cloning the repo can be so fast. So we're sleeping a few seconds
to help with this demo and make it easier to show the race condition.
2020-05-03 17:01:59 +02:00
Jérôme Petazzoni
c3a05a6393 Merge pull request #558 from barpilot/vol-init
volume: add missing pod nginx-with-init creating
2020-05-03 16:57:46 +02:00
Jerome Petazzoni
40b2b8e62e Fix deployment name in labels/selector intro
(Fixes #552)
2020-05-03 16:53:25 +02:00
Jerome Petazzoni
efdcf4905d Bump up Kubernetes dashboard to 2.0.0 2020-05-03 16:01:19 +02:00
Jérôme Petazzoni
bdb57c05b4 Merge pull request #550 from BretFisher/patch-20
update k8s dashboard versions
2020-05-03 15:55:15 +02:00
Jerome Petazzoni
af0762a0a2 Remove ':' from file names
Colons are not allowed in file names on Windows. Let's use
something else instead.

(Initially reported by @DenisBalan. This closes #549.)
2020-05-03 15:49:37 +02:00
Jerome Petazzoni
0d6c364a95 Add MacPorts instructions for stern 2020-05-03 13:40:01 +02:00
Jerome Petazzoni
690a1eb75c Move Ardan Live 2020-05-01 15:37:57 -05:00
Jérôme Petazzoni
c796a6bfc1 Merge pull request #556 from barpilot/healthcheck
healthcheck: fix rng manifest filename
2020-04-30 22:51:37 +02:00
Jerome Petazzoni
0b10d3d40d Add a bunch of other managed offerings 2020-04-30 15:50:24 -05:00
Jérôme Petazzoni
cdb50925da Merge pull request #554 from barpilot/installer
separate managed options from deployment
2020-04-30 22:47:22 +02:00
Jérôme Petazzoni
ca1f8ec828 Merge pull request #553 from barpilot/kubeadm
Remove experimental status on kubeadm HA
2020-04-30 22:46:33 +02:00
Jerome Petazzoni
7302d3533f Use built-in dockercoins manifest instead of separate kubercoins repo 2020-04-30 15:45:12 -05:00
Jerome Petazzoni
d3c931e602 Add separate instructions for Zoom webinar 2020-04-30 15:42:41 -05:00
Guilhem Lettron
7402c8e6a8 psp: update psp apiVersion to policy/v1beta1 2020-04-29 22:46:33 +02:00
Guilhem Lettron
1de539bff8 healthcheck: fix rng manifest filename 2020-04-29 22:41:15 +02:00
Guilhem Lettron
a6c7d69986 volume: add missing pod nginx-with-init creating 2020-04-29 22:37:49 +02:00
Guilhem Lettron
b0bff595cf psp: update generator helpers
kubectl run →  kubectl create deployment
kubectl run --restart=Never → kubectl run
2020-04-29 22:33:34 +02:00
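A before/after sketch of the generator change summarized in that commit message (names and image are illustrative). It also gives context for the "Keep --restart=Never for folks running 1.17-" commit above: on pre-1.18 clients the extra flag is still what produces a bare Pod.

# kubectl <= 1.17: "kubectl run" created a Deployment by default,
# and --restart=Never made it create a bare Pod instead.
kubectl run web --image=nginx
kubectl run web --image=nginx --restart=Never
# kubectl >= 1.18: Deployments are created explicitly,
# and "kubectl run" creates a bare Pod.
kubectl create deployment web --image=nginx
kubectl run web --image=nginx
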
Jerome Petazzoni
6f806ed200 typo 2020-04-28 14:23:52 -05:00
Jerome Petazzoni
0c8b20f6b6 typo 2020-04-28 14:21:31 -05:00
Jerome Petazzoni
2ba35e1f8d typo 2020-04-28 14:20:22 -05:00
Jerome Petazzoni
eb0d9bed2a Update descriptions 2020-04-28 06:18:59 -05:00
Jerome Petazzoni
bab493a926 Update descriptions 2020-04-28 06:17:21 -05:00
Guilhem Lettron
f4f2d83fa4 separate managed options from deployment 2020-04-27 20:55:23 +02:00
Guilhem Lettron
9f049951ab Remove experimental status on kubeadm HA 2020-04-27 20:47:30 +02:00
Jerome Petazzoni
7257a5c594 Add outline tags to Kubernetes course 2020-04-27 07:35:14 -05:00
Jerome Petazzoni
102aef5ac5 Add outline tags to Docker short course 2020-04-26 11:36:50 -05:00
Jerome Petazzoni
d2b3a1d663 Add Ardan Live 2020-04-23 08:46:56 -05:00
Jerome Petazzoni
d84ada0927 Fix slides counter 2020-04-23 07:33:46 -05:00
Jerome Petazzoni
0e04b4a07d Modularize logistics file and add logistics-online file 2020-04-20 15:51:02 -05:00
Jerome Petazzoni
aef910b4b7 Do not show 'Module 1' if there is only one module 2020-04-20 13:01:06 -05:00
Jerome Petazzoni
298b6db20c Rename 'chapter' into 'module' 2020-04-20 11:49:35 -05:00
Jerome Petazzoni
7ec6e871c9 Add shortlink container.training/next 2020-04-15 13:17:03 -05:00
Jerome Petazzoni
a0558e4ee5 Rework kubectl run section, break it down
We now have better explanations on labels and selectors.
The kubectl run section was getting very long, so now
it is different parts: kubectl run basics; how to create
other resources like batch jobs; first contact with
labels and annotations; and showing the limitations
of kubectl logs.
2020-04-08 18:29:59 -05:00
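To make that new breakdown concrete, a hedged sketch of the kinds of commands those parts revolve around (job name, image, and annotation are illustrative, not taken from the slides):

# Creating another resource type: a one-off batch Job.
kubectl create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(200)'
# First contact with labels and annotations.
kubectl get pods --show-labels
kubectl annotate pods -l job-name=pi note=hello
# kubectl logs with a selector only streams from a limited number of pods
# by default, one of the limitations the last part is about.
kubectl logs -l job-name=pi --tail=5
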
Jerome Petazzoni
16a62f9f84 Really dirty script to add force redirects 2020-04-07 17:00:53 -05:00
Bret Fisher
2ce50007d2 update k8s dashboard versions 2020-03-16 17:57:41 -04:00
138 changed files with 4282 additions and 2169 deletions


@@ -9,21 +9,21 @@ services:
etcd:
network_mode: "service:pause"
image: k8s.gcr.io/etcd:3.4.3
image: k8s.gcr.io/etcd:3.4.9
command: etcd
kube-apiserver:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount --allow-privileged
kube-controller-manager:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-controller-manager --master http://localhost:8080 --allocate-node-cidrs --cluster-cidr=10.CLUSTER.0.0/16
"Edit the CLUSTER placeholder first. Then, remove this line.":
kube-scheduler:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-scheduler --master http://localhost:8080


@@ -9,20 +9,20 @@ services:
etcd:
network_mode: "service:pause"
image: k8s.gcr.io/etcd:3.4.3
image: k8s.gcr.io/etcd:3.4.9
command: etcd
kube-apiserver:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount
kube-controller-manager:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-controller-manager --master http://localhost:8080
kube-scheduler:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-scheduler --master http://localhost:8080


@@ -4,6 +4,10 @@ metadata:
name: coffees.container.training
spec:
group: container.training
versions:
- name: v1alpha1
served: true
storage: true
scope: Namespaced
names:
plural: coffees
@@ -11,25 +15,4 @@ spec:
kind: Coffee
shortNames:
- cof
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
properties:
spec:
required:
- taste
properties:
taste:
description: Subjective taste of that kind of coffee bean
type: string
additionalPrinterColumns:
- jsonPath: .spec.taste
description: Subjective taste of that kind of coffee bean
name: Taste
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date

k8s/coffee-3.yaml (new file, 37 lines)

@@ -0,0 +1,37 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: coffees.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: coffees
singular: coffee
kind: Coffee
shortNames:
- cof
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
properties:
taste:
description: Subjective taste of that kind of coffee bean
type: string
required: [ taste ]
additionalPrinterColumns:
- jsonPath: .spec.taste
description: Subjective taste of that kind of coffee bean
name: Taste
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date


@@ -9,9 +9,9 @@ spec:
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: robusta
name: excelsa
spec:
taste: stronger
taste: fruity
---
kind: Coffee
apiVersion: container.training/v1alpha1
@@ -23,7 +23,12 @@ spec:
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: excelsa
name: robusta
spec:
taste: fruity
taste: stronger
bitterness: high
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: java


@@ -52,7 +52,7 @@ data:
- add_kubernetes_metadata:
in_cluster: true
---
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
@@ -60,6 +60,9 @@ metadata:
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:


@@ -27,7 +27,7 @@ spec:
command:
- sh
- -c
- "apk update && apk add curl && curl https://github.com/jpetazzo.keys > /root/.ssh/authorized_keys"
- "mkdir -p /root/.ssh && apk update && apk add curl && curl https://github.com/jpetazzo.keys > /root/.ssh/authorized_keys"
containers:
- name: web
image: nginx


@@ -1,3 +1,10 @@
# This file is based on the following manifest:
# https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended.yaml
# It adds the "skip login" flag, as well as an insecure hack to defeat SSL.
# As its name implies, it is INSECURE and you should not use it in production,
# or on clusters that contain any kind of important or sensitive data, or on
# clusters that have a life span of more than a few hours.
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -187,7 +194,7 @@ spec:
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.0-rc2
image: kubernetesui/dashboard:v2.0.0
imagePullPolicy: Always
ports:
- containerPort: 8443
@@ -226,7 +233,7 @@ spec:
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
@@ -272,7 +279,7 @@ spec:
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.2
image: kubernetesui/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
@@ -293,7 +300,7 @@ spec:
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master


@@ -1,3 +1,6 @@
# This is a copy of the following file:
# https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,19 +15,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
kind: Namespace
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
name: kubernetes-dashboard
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
@@ -32,62 +28,147 @@ metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
@@ -95,7 +176,7 @@ metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
@@ -108,55 +189,117 @@ spec:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.0
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
- port: 8000
targetPort: 8000
selector:
k8s-app: kubernetes-dashboard
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}


@@ -14,7 +14,7 @@ spec:
initContainers:
- name: git
image: alpine
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
command: [ "sh", "-c", "apk add git && sleep 5 && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/

File diff suppressed because it is too large.


@@ -22,7 +22,10 @@ spec:
command: ["sh", "-c", "if [ -d /vol/lost+found ]; then rmdir /vol/lost+found; fi"]
containers:
- name: postgres
image: postgres:11
image: postgres:12
env:
- name: POSTGRES_HOST_AUTH_METHOD
value: trust
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres


@@ -1,5 +1,5 @@
---
apiVersion: extensions/v1beta1
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
annotations:


@@ -1,28 +1,17 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
creationTimestamp: null
generation: 1
labels:
app: socat
name: socat
namespace: kube-system
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
spec:
replicas: 1
selector:
matchLabels:
app: socat
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: socat
spec:
@@ -34,34 +23,19 @@ spec:
image: alpine
imagePullPolicy: Always
name: socat
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: socat
name: socat
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/socat
spec:
externalTrafficPolicy: Cluster
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: socat
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}

k8s/traefik-v1.yaml (new file, 103 lines)

@@ -0,0 +1,103 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:1.7
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system

k8s/traefik-v2.yaml (new file, 111 lines)

@@ -0,0 +1,111 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --accesslog
- --api
- --api.insecure
- --log.level=INFO
- --metrics.prometheus
- --providers.kubernetescrd
- --providers.kubernetesingress
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8080"
prometheus.io/path: "/metrics"
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system


@@ -1,103 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:1.7
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system

k8s/traefik.yaml (new symbolic link, 1 line)

@@ -0,0 +1 @@
traefik-v1.yaml


@@ -8,24 +8,24 @@ metadata:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: users:jean.doe
name: user=jean.doe
rules:
- apiGroups: [ certificates.k8s.io ]
resources: [ certificatesigningrequests ]
verbs: [ create ]
- apiGroups: [ certificates.k8s.io ]
resourceNames: [ users:jean.doe ]
resourceNames: [ user=jean.doe ]
resources: [ certificatesigningrequests ]
verbs: [ get, create, delete, watch ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: users:jean.doe
name: user=jean.doe
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: users:jean.doe
name: user=jean.doe
subjects:
- kind: ServiceAccount
name: jean.doe


@@ -0,0 +1,5 @@
INFRACLASS=scaleway
if ! [ -f ~/.config/scw/config.yaml ]; then
warn "~/.config/scw/config.yaml not found."
warn "Make sure that the scaleway CLI is installed and configured."
fi


@@ -65,6 +65,14 @@ _cmd_deploy() {
sleep 1
done"
# Special case for scaleway since it doesn't come with sudo
if [ "$INFRACLASS" = "scaleway" ]; then
pssh -l root "
grep DEBIAN_FRONTEND /etc/environment || echo DEBIAN_FRONTEND=noninteractive >> /etc/environment
grep cloud-init /etc/sudoers && rm /etc/sudoers
apt-get update && apt-get install sudo -y"
fi
# Copy settings and install Python YAML parser
pssh -I tee /tmp/settings.yaml <tags/$TAG/settings.yaml
pssh "
@@ -131,19 +139,19 @@ _cmd_kubebins() {
cd /usr/local/bin
if ! [ -x etcd ]; then
##VERSION##
curl -L https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz \
curl -L https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz \
| sudo tar --strip-components=1 --wildcards -zx '*/etcd' '*/etcdctl'
fi
if ! [ -x hyperkube ]; then
##VERSION##
curl -L https://dl.k8s.io/v1.17.2/kubernetes-server-linux-amd64.tar.gz \
curl -L https://dl.k8s.io/v1.18.8/kubernetes-server-linux-amd64.tar.gz \
| sudo tar --strip-components=3 -zx \
kubernetes/server/bin/kube{ctl,let,-proxy,-apiserver,-scheduler,-controller-manager}
fi
sudo mkdir -p /opt/cni/bin
cd /opt/cni/bin
if ! [ -x bridge ]; then
curl -L https://github.com/containernetworking/plugins/releases/download/v0.7.6/cni-plugins-amd64-v0.7.6.tgz \
curl -L https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz \
| sudo tar -zx
fi
"
@@ -246,11 +254,22 @@ EOF"
helm completion bash | sudo tee /etc/bash_completion.d/helm
fi"
# Install kustomize
pssh "
if [ ! -x /usr/local/bin/kustomize ]; then
##VERSION##
curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.6.1/kustomize_v3.6.1_linux_amd64.tar.gz |
sudo tar -C /usr/local/bin -zx kustomize
echo complete -C /usr/local/bin/kustomize kustomize | sudo tee /etc/bash_completion.d/kustomize
fi"
# Install ship
# Note: 0.51.3 is the last version that doesn't display GIN-debug messages
# (don't want to get folks confused by that!)
pssh "
if [ ! -x /usr/local/bin/ship ]; then
##VERSION##
curl -L https://github.com/replicatedhq/ship/releases/download/v0.40.0/ship_0.40.0_linux_amd64.tar.gz |
curl -L https://github.com/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_amd64.tar.gz |
sudo tar -C /usr/local/bin -zx ship
fi"
@@ -329,7 +348,7 @@ _cmd_maketag() {
if [ -z $USER ]; then
export USER=anonymous
fi
MS=$(($(date +%N)/1000000))
MS=$(($(date +%N | tr -d 0)/1000000))
date +%Y-%m-%d-%H-%M-$MS-$USER
}
@@ -483,6 +502,7 @@ _cmd_start() {
--settings) SETTINGS=$2; shift 2;;
--count) COUNT=$2; shift 2;;
--tag) TAG=$2; shift 2;;
--students) STUDENTS=$2; shift 2;;
*) die "Unrecognized parameter: $1."
esac
done
@@ -494,8 +514,14 @@ _cmd_start() {
die "Please add --settings flag to specify which settings file to use."
fi
if [ -z "$COUNT" ]; then
COUNT=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
warning "No --count option was specified. Using value from settings file ($COUNT)."
CLUSTERSIZE=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
if [ -z "$STUDENTS" ]; then
warning "Neither --count nor --students was specified."
warning "According to the settings file, the cluster size is $CLUSTERSIZE."
warning "Deploying one cluster of $CLUSTERSIZE nodes."
STUDENTS=1
fi
COUNT=$(($STUDENTS*$CLUSTERSIZE))
fi
# Check that the specified settings and infrastructure are valid.
@@ -513,11 +539,41 @@ _cmd_start() {
infra_start $COUNT
sep
info "Successfully created $COUNT instances with tag $TAG"
sep
echo created > tags/$TAG/status
info "To deploy Docker on these instances, you can run:"
info "$0 deploy $TAG"
# If the settings.yaml file has a "steps" field,
# automatically execute all the actions listed in that field.
# If an action fails, retry it up to 10 times.
python -c 'if True: # hack to deal with indentation
import sys, yaml
settings = yaml.safe_load(sys.stdin)
print ("\n".join(settings.get("steps", [])))
' < tags/$TAG/settings.yaml \
| while read step; do
if [ -z "$step" ]; then
break
fi
sep
info "Automatically executing step '$step'."
TRY=1
MAXTRY=10
while ! $0 $step $TAG ; do
TRY=$(($TRY+1))
if [ $TRY -gt $MAXTRY ]; then
error "This step ($step) failed after $MAXTRY attempts."
info "You can troubleshoot the situation manually, or terminate these instances with:"
info "$0 stop $TAG"
die "Giving up."
else
sep
info "Step '$step' failed. Let's wait 10 seconds and try again."
info "(Attempt $TRY out of $MAXTRY.)"
sleep 10
fi
done
done
sep
info "Deployment successful."
info "To terminate these instances, you can run:"
info "$0 stop $TAG"
}


@@ -0,0 +1,56 @@
infra_list() {
die "unimplemented"
}
infra_quotas() {
die "unimplemented"
}
infra_start() {
COUNT=$1
AWS_KEY_NAME=$(make_key_name)
SCW_INSTANCE_TYPE=${SCW_INSTANCE_TYPE-DEV1-M}
SCW_ZONE=${SCW_ZONE-fr-par-1}
for I in $(seq 1 $COUNT); do
NAME=$(printf "%s-%03d" $TAG $I)
sep "Starting instance $I/$COUNT"
info " Zone: $SCW_ZONE"
info " Name: $NAME"
info " Instance type: $SCW_INSTANCE_TYPE"
scw instance server create \
type=${SCW_INSTANCE_TYPE} zone=${SCW_ZONE} \
image=ubuntu_bionic name=${NAME}
done
sep
scw_get_ips_by_tag $TAG > tags/$TAG/ips.txt
}
infra_stop() {
for ID in $(scw_get_ids_by_tag $TAG); do
info "Scheduling deletion of instance $ID..."
scw instance server delete force-shutdown=true server-id=$ID &
done
info "Waiting for deletion to complete..."
wait
}
scw_get_ids_by_tag() {
TAG=$1
scw instance server list name=$TAG -o json | jq -r .[].id
}
scw_get_ips_by_tag() {
TAG=$1
scw instance server list name=$TAG -o json | jq -r .[].public_ip.address
}
infra_opensg() {
die "unimplemented"
}
infra_disableaddrchecks() {
die "unimplemented"
}


@@ -1,21 +1,44 @@
#!/usr/bin/env python
"""
There are two ways to use this script:
1. Pass a tag name as a single argument.
It will then take the clusters corresponding to that tag, and assign one
domain name per cluster. Currently it gets the domains from a hard-coded
path. There should be more domains than clusters.
Example: ./map-dns.py 2020-08-15-jp
2. Pass a domain as the 1st argument, and IP addresses then.
It will configure the domain with the listed IP addresses.
Example: ./map-dns.py open-duck.site 1.2.3.4 2.3.4.5 3.4.5.6
In both cases, the domains should be configured to use GANDI LiveDNS.
"""
import os
import requests
import sys
import yaml
# configurable stuff
domains_file = "../../plentydomains/domains.txt"
config_file = os.path.join(
os.environ["HOME"], ".config/gandi/config.yaml")
tag = "test"
tag = None
apiurl = "https://dns.api.gandi.net/api/v5/domains"
if len(sys.argv) == 2:
tag = sys.argv[1]
domains = open(domains_file).read().split()
ips = open(f"tags/{tag}/ips.txt").read().split()
settings_file = f"tags/{tag}/settings.yaml"
clustersize = yaml.safe_load(open(settings_file))["clustersize"]
else:
domains = [sys.argv[1]]
ips = sys.argv[2:]
clustersize = len(ips)
# inferred stuff
domains = open(domains_file).read().split()
apikey = yaml.safe_load(open(config_file))["apirest"]["key"]
ips = open(f"tags/{tag}/ips.txt").read().split()
settings_file = f"tags/{tag}/settings.yaml"
clustersize = yaml.safe_load(open(settings_file))["clustersize"]
# now do the fucking work
while domains and ips:


@@ -21,3 +21,9 @@ machine_version: 0.15.0
# Password used to connect with the "docker user"
docker_user_password: training
steps:
- deploy
- webssh
- tailhist
- cards


@@ -20,3 +20,10 @@ machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training
steps:
- deploy
- webssh
- tailhist
- kube
- cards
- kubetest


@@ -1,15 +1,21 @@
# Uncomment and/or edit one of the the following lines if necessary.
#/ /kube-halfday.yml.html 200
#/ /kube-fullday.yml.html 200
#/ /kube-twodays.yml.html 200
#/ /kube-halfday.yml.html 200!
#/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200!
/ /kadm-fullday.yml.html 200!
# And this allows to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack
#/dockermastery https://www.udemy.com/course/docker-mastery/?referralCode=1410924A733D33635CCB
#/kubernetesmastery https://www.udemy.com/course/kubernetesmastery/?referralCode=7E09090AF9B79E6C283F
/dockermastery https://www.udemy.com/course/docker-mastery/?couponCode=SWEETFEBSALEC1
/kubernetesmastery https://www.udemy.com/course/kubernetesmastery/?couponCode=SWEETFEBSALEC4
/dockermastery https://www.udemy.com/course/docker-mastery/?couponCode=DOCKERALLDAY
/kubernetesmastery https://www.udemy.com/course/kubernetesmastery/?couponCode=DOCKERALLDAY
# Shortlink for the QRCode
/q /qrcode.html 200
# Shortlinks for next training in English and French
#/next https://www.eventbrite.com/e/livestream-intensive-kubernetes-bootcamp-tickets-103262336428
/next https://skillsmatter.com/courses/700-advanced-kubernetes-concepts-workshop-jerome-petazzoni
/hi5 https://enix.io/fr/services/formation/online/


@@ -233,7 +233,7 @@ def setup_tmux_and_ssh():
ipaddr = "$IPADDR"
uid = os.getuid()
raise Exception("""
raise Exception(r"""
1. If you're running this directly from a node:
tmux
@@ -247,6 +247,16 @@ rm -f /tmp/tmux-{uid}/default && ssh -t -L /tmp/tmux-{uid}/default:/tmp/tmux-100
3. If you cannot control a remote tmux:
tmux new-session ssh docker@{ipaddr}
4. If you are running this locally with a remote cluster, make sure your prompt has the expected format:
tmux
IPADDR=$(
kubectl get nodes -o json |
jq -r '.items[0].status.addresses[] | select(.type=="ExternalIP") | .address'
)
export PS1="\n[{ipaddr}] \u@\h:\w\n\$ "
""".format(uid=uid, ipaddr=ipaddr))
else:
logging.info("Found tmux session. Trying to acquire shell prompt.")


@@ -1,7 +1,7 @@
class: title
# Advanced Dockerfiles
# Advanced Dockerfile Syntax
![construction](images/title-advanced-dockerfiles.jpg)
@@ -12,7 +12,10 @@ class: title
We have seen simple Dockerfiles to illustrate how Docker build
container images.
In this section, we will see more Dockerfile commands.
In this section, we will give a recap of the Dockerfile syntax,
and introduce advanced Dockerfile commands that we might
come across sometimes; or that we might want to use in some
specific scenarios.
---
@@ -420,3 +423,8 @@ ONBUILD COPY . /src
* You can't chain `ONBUILD` instructions with `ONBUILD`.
* `ONBUILD` can't be used to trigger `FROM` instructions.
???
:EN:- Advanced Dockerfile syntax
:FR:- Dockerfile niveau expert


@@ -280,3 +280,8 @@ CONTAINER ID IMAGE ... CREATED STATUS
5c1dfd4d81f1 jpetazzo/clock ... 40 min. ago Exited (0) 40 min. ago
b13c164401fb ubuntu ... 55 min. ago Exited (130) 53 min. ago
```
???
:EN:- Foreground and background containers
:FR:- Exécution interactive ou en arrière-plan


@@ -167,3 +167,8 @@ Automated process = good.
In the next chapter, we will learn how to automate the build
process by writing a `Dockerfile`.
???
:EN:- Building our first images interactively
:FR:- Fabriquer nos premières images à la main


@@ -363,3 +363,10 @@ In this example, `sh -c` will still be used, but
The shell gets replaced by `figlet` when `figlet` starts execution.
This allows to run processes as PID 1 without using JSON.
???
:EN:- Towards automated, reproducible builds
:EN:- Writing our first Dockerfile
:FR:- Rendre le processus automatique et reproductible
:FR:- Écrire son premier Dockerfile


@@ -272,3 +272,7 @@ $ docker run -it --entrypoint bash myfiglet
root@6027e44e2955:/#
```
???
:EN:- CMD and ENTRYPOINT
:FR:- CMD et ENTRYPOINT


@@ -322,3 +322,11 @@ You can:
Each copy will run in a different network, totally isolated from the other.
This is ideal to debug regressions, do side-by-side comparisons, etc.
???
:EN:- Using compose to describe an environment
:EN:- Connecting services together with a *Compose file*
:FR:- Utiliser Compose pour décrire son environnement
:FR:- Écrire un *Compose file* pour connecter les services entre eux


@@ -226,3 +226,13 @@ We've learned how to:
In the next chapter, we will see how to connect
containers together without exposing their ports.
???
:EN:Connecting containers
:EN:- Container networking basics
:EN:- Exposing a container
:FR:Connecter les conteneurs
:FR:- Description du modèle réseau des conteneurs
:FR:- Exposer un conteneur


@@ -98,3 +98,8 @@ Success!
* Place it in a different directory, with the `WORKDIR` instruction.
* Even better, use the `gcc` official image.
???
:EN:- The build cache
:FR:- Tirer parti du cache afin d'optimiser la vitesse de *build*


@@ -431,3 +431,8 @@ services:
- It's OK (and even encouraged) to start simple and evolve as needed.
- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!
???
:EN:- Dockerfile tips, tricks, and best practices
:FR:- Bonnes pratiques pour la construction des images


@@ -290,3 +290,8 @@ bash: figlet: command not found
* We have a clear definition of our environment, and can share it reliably with others.
* Let's see in the next chapters how to bake a custom image with `figlet`!
???
:EN:- Running our first container
:FR:- Lancer nos premiers conteneurs


@@ -226,3 +226,8 @@ docker export <container_id> | tar tv
```
This will give a detailed listing of the content of the container.
???
:EN:- Troubleshooting and getting inside a container
:FR:- Inspecter un conteneur en détail, en *live* ou *post-mortem*


@@ -375,3 +375,13 @@ We've learned how to:
* Understand Docker image namespacing.
* Search and download images.
???
:EN:Building images
:EN:- Containers, images, and layers
:EN:- Image addresses and tags
:EN:- Finding and transferring images
:FR:Construire des images
:FR:- La différence entre un conteneur et une image
:FR:- La notion de *layer* partagé entre images


@@ -80,3 +80,8 @@ $ docker ps --filter label=owner=alice
(To determine internal cross-billing, or who to page in case of outage.)
* etc.
???
:EN:- Using labels to identify containers
:FR:- Étiqueter ses conteneurs avec des méta-données


@@ -391,3 +391,10 @@ We've learned how to:
* Use a simple local development workflow.
???
:EN:Developing with containers
:EN:- “Containerize” a development environment
:FR:Développer au jour le jour
:FR:- « Containeriser » son environnement de développement


@@ -313,3 +313,11 @@ virtually "free."
* Sometimes, we want to inspect a specific intermediary build stage.
* Or, we want to describe multiple images using a single Dockerfile.
???
:EN:Optimizing our images and their build process
:EN:- Leveraging multi-stage builds
:FR:Optimiser les images et leur construction
:FR:- Utilisation d'un *multi-stage build*


@@ -130,3 +130,12 @@ $ docker inspect --format '{{ json .Created }}' <containerID>
* The optional `json` keyword asks for valid JSON output.
<br/>(e.g. here it adds the surrounding double-quotes.)
???
:EN:Managing container lifecycle
:EN:- Naming and inspecting containers
:FR:Suivre ses conteneurs à la loupe
:FR:- Obtenir des informations détaillées sur un conteneur
:FR:- Associer un identifiant unique à un conteneur


@@ -175,3 +175,10 @@ class: extra-details
* This will cause some CLI and TUI programs to redraw the screen.
* But not all of them.
???
:EN:- Restarting old containers
:EN:- Detaching and reattaching to container
:FR:- Redémarrer des anciens conteneurs
:FR:- Se détacher et rattacher à des conteneurs


@@ -125,3 +125,11 @@ Server:
]
If this doesn't work, raise your hand so that an instructor can assist you!
???
:EN:Container concepts
:FR:Premier contact avec les conteneurs
:EN:- What's a container engine?
:FR:- Qu'est-ce qu'un *container engine* ?


@@ -11,10 +11,10 @@ class State(object):
self.section_title = None
self.section_start = 0
self.section_slides = 0
self.chapters = {}
self.modules = {}
self.sections = {}
def show(self):
if self.section_title.startswith("chapter-"):
if self.section_title.startswith("module-"):
return
print("{0.section_title}\t{0.section_start}\t{0.section_slides}".format(self))
self.sections[self.section_title] = self.section_slides
@@ -38,10 +38,10 @@ for line in open(sys.argv[1]):
if line == "--":
state.current_slide += 1
toc_links = re.findall("\(#toc-(.*)\)", line)
if toc_links and state.section_title.startswith("chapter-"):
if state.section_title not in state.chapters:
state.chapters[state.section_title] = []
state.chapters[state.section_title].append(toc_links[0])
if toc_links and state.section_title.startswith("module-"):
if state.section_title not in state.modules:
state.modules[state.section_title] = []
state.modules[state.section_title].append(toc_links[0])
# This is really hackish
if line.startswith("class:"):
for klass in EXCLUDED:
@@ -51,7 +51,7 @@ for line in open(sys.argv[1]):
state.show()
for chapter in sorted(state.chapters, key=lambda f: int(f.split("-")[1])):
chapter_size = sum(state.sections[s] for s in state.chapters[chapter])
print("{}\t{}\t{}".format("total size for", chapter, chapter_size))
for module in sorted(state.modules, key=lambda f: int(f.split("-")[1])):
module_size = sum(state.sections[s] for s in state.modules[module])
print("{}\t{}\t{}".format("total size for", module, module_size))

slides/fix-redirects.sh (new executable file, 118 lines)

@@ -0,0 +1,118 @@
#!/bin/sh
# This script helps to add "force-redirects" where needed.
# This might replace your entire git repos with Vogon poetry.
# Use at your own peril!
set -eu
# The easiest way to set this env var is by copy-pasting from
# the netlify web dashboard, then doctoring the output a bit.
# Yeah, that's gross, but after spending 10 minutes with the
# API and the CLI and OAuth, it took about 10 seconds to do it
# with le copier-coller, so ... :)
SITES="
2020-01-caen
2020-01-zr
2020-02-caen
2020-02-enix
2020-02-outreach
2020-02-vmware
2020-03-ardan
2020-03-qcon
alfun-2019-06
boosterconf2018
clt-2019-10
dc17eu
decembre2018
devopsdaysams2018
devopsdaysmsp2018
gotochgo2018
gotochgo2019
indexconf2018
intro-2019-01
intro-2019-04
intro-2019-06
intro-2019-08
intro-2019-09
intro-2019-11
intro-2019-12
k8s2d
kadm-2019-04
kadm-2019-06
kube
kube-2019-01
kube-2019-02
kube-2019-03
kube-2019-04
kube-2019-06
kube-2019-08
kube-2019-09
kube-2019-10
kube-2019-11
lisa-2019-10
lisa16t1
lisa17m7
lisa17t9
maersk-2019-07
maersk-2019-08
ndcminnesota2018
nr-2019-08
oscon2018
oscon2019
osseu17
pycon2019
qconsf18wkshp
qconsf2017intro
qconsf2017swarm
qconsf2018
qconuk2019
septembre2018
sfsf-2019-06
srecon2018
swarm2017
velny-k8s101-2018
velocity-2019-11
velocityeu2018
velocitysj2018
vmware-2019-11
weka
wwc-2019-10
wwrk-2019-05
wwrk-2019-06
"
for SITE in $SITES; do
echo "##### $SITE"
git checkout -q origin/$SITE
# No _redirects? No problem.
if ! [ -f _redirects ]; then
continue
fi
# If there is already a force redirect on /, we're good.
if grep '^/ .* 200!' _redirects; then
continue
fi
# If there is a redirect on / ... and it's not forced ... do something.
if grep "^/ .* 200$" _redirects; then
echo "##### $SITE needs to be patched"
sed -i 's,^/ \(.*\) 200$,/ \1 200!,' _redirects
git add _redirects
git commit -m "fix-redirects.sh: adding forced redirect"
git push origin HEAD:$SITE
continue
fi
if grep "^/ " _redirects; then
echo "##### $SITE with / but no status code"
echo "##### Should I add '200!' ?"
read foo
sed -i 's,^/ \(.*\)$,/ \1 200!,' _redirects
git add _redirects
git commit -m "fix-redirects.sh: adding status code and forced redirect"
git push origin HEAD:$SITE
continue
fi
echo "##### $SITE without / ?"
cat _redirects
done


@@ -7,6 +7,7 @@ FLAGS=dict(
fr=u"🇫🇷",
uk=u"🇬🇧",
us=u"🇺🇸",
www=u"🌐",
)
TEMPLATE="""<html>
@@ -19,9 +20,9 @@ TEMPLATE="""<html>
<div class="main">
<table>
<tr><td class="header" colspan="3">{{ title }}</td></tr>
<tr><td class="details" colspan="3">Note: while some workshops are delivered in French, slides are always in English.</td></tr>
<tr><td class="details" colspan="3">Note: while some workshops are delivered in other languages, slides are always in English.</td></tr>
<tr><td class="title" colspan="3">Free video of our latest workshop</td></tr>
<tr><td class="title" colspan="3">Free Kubernetes intro course</td></tr>
<tr>
<td>Getting Started With Kubernetes and Container Orchestration</td>
@@ -35,11 +36,11 @@ TEMPLATE="""<html>
<td class="details">If you're interested, we can deliver that workshop (or longer courses) to your team or organization.</td>
</tr>
<tr>
<td class="details">Contact <a href="mailto:jerome.petazzoni@gmail.com">Jérôme Petazzoni</a> to make that happen!</a></td>
<td class="details">Contact <a href="mailto:jerome.petazzoni@gmail.com">Jérôme Petazzoni</a> to make that happen!</td>
</tr>
{% if coming_soon %}
<tr><td class="title" colspan="3">Coming soon near you</td></tr>
<tr><td class="title" colspan="3">Coming soon</td></tr>
{% for item in coming_soon %}
<tr>
@@ -140,13 +141,26 @@ import yaml
items = yaml.safe_load(open("index.yaml"))
def prettyparse(date):
months = [
"January", "February", "March", "April", "May", "June",
"July", "August", "September", "October", "November", "December"
]
month = months[date.month-1]
suffix = {
1: "st", 2: "nd", 3: "rd",
21: "st", 22: "nd", 23: "rd",
31: "st"}.get(date.day, "th")
return date.year, month, "{}{}".format(date.day, suffix)
# Items with a date correspond to scheduled sessions.
# Items without a date correspond to self-paced content.
# The date should be specified as a string (e.g. 2018-11-26).
# It can also be a list of two elements (e.g. [2018-11-26, 2018-11-28]).
# The latter indicates an event spanning multiple dates.
# The first date will be used in the generated page, but the event
# will be considered "current" (and therefore, shown in the list of
# The event will be considered "current" (shown in the list of
# upcoming events) until the second date.
for item in items:
@@ -156,19 +170,23 @@ for item in items:
date_begin, date_end = date
else:
date_begin, date_end = date, date
suffix = {
1: "st", 2: "nd", 3: "rd",
21: "st", 22: "nd", 23: "rd",
31: "st"}.get(date_begin.day, "th")
# %e is a non-standard extension (it displays the day, but without a
# leading zero). If strftime fails with ValueError, try to fall back
# on %d (which displays the day but with a leading zero when needed).
try:
item["prettydate"] = date_begin.strftime("%B %e{}, %Y").format(suffix)
except ValueError:
item["prettydate"] = date_begin.strftime("%B %d{}, %Y").format(suffix)
y1, m1, d1 = prettyparse(date_begin)
y2, m2, d2 = prettyparse(date_end)
if (y1, m1, d1) == (y2, m2, d2):
# Single day event
pretty_date = "{} {}, {}".format(m1, d1, y1)
elif (y1, m1) == (y2, m2):
# Multi-day event within a single month
pretty_date = "{} {}-{}, {}".format(m1, d1, d2, y1)
elif y1 == y2:
# Multi-day event spanning more than a month
pretty_date = "{} {}-{} {}, {}".format(m1, d1, m2, d2, y1)
else:
# Event spanning the turn of the year (REALLY???)
pretty_date = "{} {}, {}-{} {}, {}".format(m1, d1, y1, m2, d2, y2)
item["begin"] = date_begin
item["end"] = date_end
item["prettydate"] = pretty_date
item["flag"] = FLAGS.get(item.get("country"),"")
today = datetime.date.today()


@@ -1,3 +1,150 @@
- date: [2020-10-05, 2020-10-06]
country: www
city: streaming
event: ENIX SAS
speaker: jpetazzo
title: Docker intensif (en français)
lang: fr
attend: https://enix.io/fr/services/formation/online/
- date: [2020-10-07, 2020-10-09]
country: www
city: streaming
event: ENIX SAS
speaker: jpetazzo
title: Fondamentaux Kubernetes (en français)
lang: fr
attend: https://enix.io/fr/services/formation/online/
- date: 2020-10-12
country: www
city: streaming
event: ENIX SAS
speaker: jpetazzo
title: Packaging pour Kubernetes (en français)
lang: fr
attend: https://enix.io/fr/services/formation/online/
- date: [2020-10-13, 2020-10-14]
country: www
city: streaming
event: ENIX SAS
speaker: jpetazzo
title: Kubernetes avancé (en français)
lang: fr
attend: https://enix.io/fr/services/formation/online/
- date: [2020-10-19, 2020-10-20]
country: www
city: streaming
event: ENIX SAS
speaker: jpetazzo
title: Opérer Kubernetes (en français)
lang: fr
attend: https://enix.io/fr/services/formation/online/
- date: [2020-09-28, 2020-10-01]
country: www
city: streaming
event: Skills Matter
speaker: jpetazzo
title: Advanced Kubernetes Concepts
attend: https://skillsmatter.com/courses/700-advanced-kubernetes-concepts-workshop-jerome-petazzoni
- date: [2020-08-29, 2020-08-30]
country: www
city: streaming
event: fwdays
speaker: jpetazzo
title: Intensive Docker Online Workshop
attend: https://fwdays.com/en/event/intensive-docker-workshop
- date: [2020-09-12, 2020-09-13]
country: www
city: streaming
event: fwdays
speaker: jpetazzo
title: Kubernetes Intensive Online Workshop
attend: https://fwdays.com/en/event/kubernetes-intensive-workshop
- date: [2020-07-07, 2020-07-09]
country: www
city: streaming
event: Ardan Live
speaker: jpetazzo
title: Intensive Docker Bootcamp
attend: https://www.eventbrite.com/e/livestream-intensive-docker-bootcamp-tickets-103258886108
- date: [2020-06-15, 2020-06-16]
country: www
city: streaming
event: ENIX SAS
speaker: jpetazzo
title: Docker intensif (en français)
lang: fr
attend: https://enix.io/fr/services/formation/online/
- date: [2020-06-17, 2020-06-19]
country: www
city: streaming
event: ENIX SAS
speaker: jpetazzo
title: Fondamentaux Kubernetes (en français)
lang: fr
attend: https://enix.io/fr/services/formation/online/
- date: 2020-06-22
country: www
city: streaming
event: ENIX SAS
speaker: jpetazzo
title: Packaging pour Kubernetes (en français)
lang: fr
attend: https://enix.io/fr/services/formation/online/
- date: [2020-06-23, 2020-06-24]
country: www
city: streaming
event: ENIX SAS
speaker: jpetazzo
title: Kubernetes avancé (en français)
lang: fr
attend: https://enix.io/fr/services/formation/online/
- date: [2020-06-25, 2020-06-26]
country: www
city: streaming
event: ENIX SAS
speaker: jpetazzo
title: Opérer Kubernetes (en français)
lang: fr
attend: https://enix.io/fr/services/formation/online/
- date: [2020-06-09, 2020-06-11]
country: www
city: streaming
event: Ardan Live
speaker: jpetazzo
title: Intensive Kubernetes Bootcamp
attend: https://www.eventbrite.com/e/livestream-intensive-kubernetes-bootcamp-tickets-103262336428
- date: [2020-05-04, 2020-05-08]
country: www
city: streaming
event: Ardan Live
speaker: jpetazzo
title: Intensive Kubernetes - Advanced Concepts
attend: https://www.eventbrite.com/e/livestream-intensive-kubernetes-advanced-concepts-tickets-102358725704
- date: [2020-03-30, 2020-04-02]
country: www
city: streaming
event: Ardan Live
speaker: jpetazzo
title: Intensive Docker and Kubernetes
attend: https://www.eventbrite.com/e/ardan-labs-live-worldwide-march-30-april-2-2020-tickets-100331129108#
slides: https://2020-03-ardan.container.training/
- date: 2020-03-06
country: uk
city: London

View File

@@ -1,69 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/toc.md
-
#- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
#- containers/Start_And_Attach.md
- containers/Naming_And_Inspecting.md
#- containers/Labels.md
- containers/Getting_Inside.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Container_Networking_Basics.md
#- containers/Network_Drivers.md
#- containers/Container_Network_Model.md
- containers/Local_Development_Workflow.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
- containers/Multi_Stage_Builds.md
#- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
#- containers/Docker_Machine.md
#- containers/Advanced_Dockerfiles.md
#- containers/Init_Systems.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Container_Engines.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md
#- containers/Orchestration_Overview.md
-
- shared/thankyou.md
- containers/links.md

View File

@@ -1,69 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
chapters:
- shared/title.md
# - shared/logistics.md
- containers/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/toc.md
- - containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- - containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- containers/Docker_Machine.md
- - containers/Advanced_Dockerfiles.md
- containers/Init_Systems.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Resource_Limits.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
- - containers/Container_Engines.md
- containers/Pods_Anatomy.md
- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md

View File

@@ -1,77 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom.md
- shared/toc.md
- # DAY 1
- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Dockerfile_Tips.md
- containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Exercise_Dockerfile_Advanced.md
-
- containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Start_And_Attach.md
- containers/Getting_Inside.md
- containers/Resource_Limits.md
- # DAY 2
- containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
-
- containers/Local_Development_Workflow.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
- containers/Installing_Docker.md
- containers/Container_Engines.md
- containers/Init_Systems.md
- containers/Advanced_Dockerfiles.md
-
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Orchestration_Overview.md
-
- shared/thankyou.md
- containers/links.md
#-
#- containers/Docker_Machine.md
#- containers/Ambassadors.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md

View File

@@ -129,3 +129,8 @@ installed and set up `kubectl` to communicate with your cluster.
```
]
???
:EN:- Securely accessing internal services
:FR:- Accès sécurisé aux services internes

View File

@@ -87,3 +87,8 @@
- Tunnels are also fine
(e.g. [k3s](https://k3s.io/) uses a tunnel to allow each node to contact the API server)
???
:EN:- Ensuring API server availability
:FR:- Assurer la disponibilité du serveur API

View File

@@ -381,3 +381,8 @@ We demonstrated *update* and *watch* semantics.
- if the pod has special constraints that can't be met
- if the scheduler is not running (!)
???
:EN:- Kubernetes architecture review
:FR:- Passage en revue de l'architecture de Kubernetes

View File

@@ -1,6 +1,74 @@
# Authentication and authorization
*And first, a little refresher!*
- In this section, we will:
- define authentication and authorization
- explain how they are implemented in Kubernetes
- talk about tokens, certificates, service accounts, RBAC ...
- But first: why do we need all this?
---
## The need for fine-grained security
- The Kubernetes API should only be available for identified users
- we don't want "guest access" (except in very rare scenarios)
- we don't want strangers to use our compute resources, delete our apps ...
- our keys and passwords should not be exposed to the public
- Users will often have different access rights
- cluster admin (similar to UNIX "root") can do everything
- developer might access specific resources, or a specific namespace
- supervision might have read only access to *most* resources
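As a sneak peek of what we will build later in this section, the "read only" profile above could be expressed with a ClusterRole roughly like this (a minimal sketch; the name and resource list are assumptions):
```yaml
# Hypothetical read-only ClusterRole (illustration only)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-mostly
rules:
- apiGroups: ["", "apps"]          # "" is the core API group
  resources: ["pods", "services", "deployments", "replicasets"]
  verbs: ["get", "list", "watch"]  # read-only verbs
```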
---
## Example: custom HTTP load balancer
- Let's imagine that we have a custom HTTP load balancer for multiple apps
- Each app has its own *Deployment* resource
- By default, the apps are "sleeping" and scaled to zero
- When a request comes in, the corresponding app gets woken up
- After some inactivity, the app is scaled down again
- This HTTP load balancer needs API access (to scale up/down)
- What if *a wild vulnerability appears*?
---
## Consequences of vulnerability
- If the HTTP load balancer has the same API access as we do:
*full cluster compromise (easy data leak, cryptojacking...)*
- If the HTTP load balancer has `update` permissions on the Deployments:
*defacement (easy), MITM / impersonation (medium to hard)*
- If the HTTP load balancer only has permission to `scale` the Deployments:
*denial-of-service*
- All these outcomes are bad, but some are worse than others
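For instance, the "scale only" option could be granted with a Role limited to the `scale` subresource, roughly like this sketch (the name and namespace are assumptions):
```yaml
# Hypothetical Role allowing only scaling of Deployments
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-scaler
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments/scale"]   # only the scale subresource
  verbs: ["get", "update", "patch"]
```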
---
## Definitions
- Authentication = verifying the identity of a person
@@ -147,7 +215,7 @@ class: extra-details
(if their key is compromised, or they leave the organization)
- Option 1: re-create a new CA and re-issue everyone's certificates
<br/>
→ Maybe OK if we only have a few users; no way otherwise
@@ -631,7 +699,7 @@ class: extra-details
- Let's look for these in existing ClusterRoleBindings:
```bash
kubectl get clusterrolebindings -o yaml |
grep -e kubernetes-admin -e system:masters
```
@@ -676,3 +744,17 @@ class: extra-details
- Both are available as standalone programs, or as plugins for `kubectl`
(`kubectl` plugins can be installed and managed with `krew`)
???
:EN:- Authentication and authorization in Kubernetes
:EN:- Authentication with tokens and certificates
:EN:- Authorization with RBAC (Role-Based Access Control)
:EN:- Restricting permissions with Service Accounts
:EN:- Working with Roles, Cluster Roles, Role Bindings, etc.
:FR:- Identification et droits d'accès dans Kubernetes
:FR:- Mécanismes d'identification par jetons et certificats
:FR:- Le modèle RBAC *(Role-Based Access Control)*
:FR:- Restreindre les permissions grâce aux *Service Accounts*
:FR:- Comprendre les *Roles*, *Cluster Roles*, *Role Bindings*, etc.

194
slides/k8s/batch-jobs.md Normal file
View File

@@ -0,0 +1,194 @@
# Executing batch jobs
- Deployments are great for stateless web apps
(as well as workers that keep running forever)
- Pods are great for one-off execution that we don't care about
(because they don't get automatically restarted if something goes wrong)
- Jobs are great for "long" background work
("long" being at least minutes our hours)
- CronJobs are great to schedule Jobs at regular intervals
(just like the classic UNIX `cron` daemon with its `crontab` files)
---
## Creating a Job
- A Job will create a Pod
- If the Pod fails, the Job will create another one
- The Job will keep trying until:
- either a Pod succeeds,
- or we hit the *backoff limit* of the Job (default=6)
.exercise[
- Create a Job that has a 50% chance of success:
```bash
kubectl create job flipcoin --image=alpine -- sh -c 'exit $(($RANDOM%2))'
```
]
---
## Our Job in action
- Our Job will create a Pod named `flipcoin-xxxxx`
- If the Pod succeeds, the Job stops
- If the Pod fails, the Job creates another Pod
.exercise[
- Check the status of the Pod(s) created by the Job:
```bash
kubectl get pods --selector=job-name=flipcoin
```
]
---
class: extra-details
## More advanced jobs
- We can specify a number of "completions" (default=1)
- This indicates how many times the Job must be executed
- We can specify the "parallelism" (default=1)
- This indicates how many Pods should be running in parallel
- These options cannot be specified with `kubectl create job`
(we have to write our own YAML manifest to use them)
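For instance, a hedged sketch of a manifest running the same coin-flip payload 5 times, at most 2 Pods at a time (the Job name is made up):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: flipcoin-batch
spec:
  completions: 5        # the Job succeeds after 5 successful Pods
  parallelism: 2        # run at most 2 Pods at the same time
  backoffLimit: 6       # default value, shown here for clarity
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: flipcoin
        image: alpine
        command: ["sh", "-c", "exit $(($RANDOM%2))"]
```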
---
## Scheduling periodic background work
- A Cron Job is a Job that will be executed at specific intervals
(the name comes from the traditional cronjobs executed by the UNIX crond)
- It requires a *schedule*, represented as five space-separated fields:
- minute [0,59]
- hour [0,23]
- day of the month [1,31]
- month of the year [1,12]
- day of the week ([0,6] with 0=Sunday)
- `*` means "all valid values"; `/N` means "every N"
- Example: `*/3 * * * *` means "every three minutes"
---
## Creating a Cron Job
- Let's create a simple job to be executed every three minutes
- Careful: make sure that the job terminates!
(the Cron Job will not wait for a previous Job to finish before starting a new one)
.exercise[
- Create the Cron Job:
```bash
kubectl create cronjob every3mins --schedule="*/3 * * * *" \
--image=alpine -- sleep 10
```
- Check the resource that was created:
```bash
kubectl get cronjobs
```
]
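For reference, the command above is roughly equivalent to applying a manifest like this sketch (using the `batch/v1beta1` API version current at the time; exact defaults may differ):
```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: every3mins
spec:
  schedule: "*/3 * * * *"      # minute hour day-of-month month day-of-week
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: every3mins
            image: alpine
            command: ["sleep", "10"]
```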
---
## Cron Jobs in action
- At the specified schedule, the Cron Job will create a Job
- The Job will create a Pod
- The Job will make sure that the Pod completes
(re-creating another one if it fails, for instance if its node fails)
.exercise[
- Check the Jobs that are created:
```bash
kubectl get jobs
```
]
(It will take a few minutes before the first job is scheduled.)
---
class: extra-details
## What about `kubectl run` before v1.18?
- Creating a Deployment:
`kubectl run`
- Creating a Pod:
`kubectl run --restart=Never`
- Creating a Job:
`kubectl run --restart=OnFailure`
- Creating a Cron Job:
`kubectl run --restart=OnFailure --schedule=...`
*Avoid using these forms, as they are deprecated since Kubernetes 1.18!*
---
## Beyond `kubectl create`
- As hinted earlier, `kubectl create` doesn't always expose all options
- can't express parallelism or completions of Jobs
- can't express Pods with multiple containers
- can't express healthchecks, resource limits
- etc.
- `kubectl create` and `kubectl run` are *helpers* that generate YAML manifests
- If we write these manifests ourselves, we can use all features and options
- We'll see later how to do that!
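To make this concrete, here is a hedged sketch of a Pod that the generators above cannot produce (two containers, a healthcheck, resource limits; the names and images are illustrative):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:            # healthcheck (not expressible with kubectl run/create)
      httpGet:
        path: /
        port: 80
    resources:
      limits:                  # resource limits (not expressible either)
        cpu: 500m
        memory: 256Mi
  - name: sidecar              # second container (also requires a manifest)
    image: alpine
    command: ["sleep", "3600"]
```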
???
:EN:- Running batch and cron jobs
:FR:- Tâches périodiques *(cron)* et traitement par lots *(batch)*

View File

@@ -257,3 +257,8 @@ This is the TLS bootstrap mechanism, step by step.
- [kubeadm token](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-token/) command
- [kubeadm join](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/) command (has details about [the join workflow](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/#join-workflow))
???
:EN:- Leveraging TLS bootstrap to join nodes
:FR:- Ajout de nœuds grâce au *TLS bootstrap*

View File

@@ -142,3 +142,8 @@ The list includes the following providers:
- [configuration](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/) (mainly for OpenStack)
- [deployment](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/)
???
:EN:- The Cloud Controller Manager
:FR:- Le *Cloud Controller Manager*

View File

@@ -217,15 +217,16 @@ docker run --rm --net host -v $PWD:/vol \
## How can we remember all these flags?
- Look at the static pod manifest for etcd
- Older versions of kubeadm did add a healthcheck probe with all these flags
(in `/etc/kubernetes/manifests`)
- That healthcheck probe was calling `etcdctl` with all the right flags
- The healthcheck probe is calling `etcdctl` with all the right flags
😉👍✌️
- With recent versions of kubeadm, we're on our own!
- Exercise: write the YAML for a batch job to perform the backup
(how will you access the key and certificate required to connect?)
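One possible sketch for that exercise (the image tag, PKI paths, and node selection are assumptions matching a typical kubeadm setup):
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: etcd-backup
spec:
  template:
    spec:
      hostNetwork: true                      # reach etcd on 127.0.0.1:2379
      nodeSelector:
        node-role.kubernetes.io/master: ""   # run next to etcd, on a control plane node
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      restartPolicy: OnFailure
      containers:
      - name: etcd-backup
        image: k8s.gcr.io/etcd:3.4.3-0       # assumed image providing etcdctl
        env:
        - name: ETCDCTL_API
          value: "3"
        command:
        - etcdctl
        - --endpoints=https://127.0.0.1:2379
        - --cacert=/etc/kubernetes/pki/etcd/ca.crt
        - --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt
        - --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
        - snapshot
        - save
        - /backup/snapshot.db
        volumeMounts:
        - name: etcd-pki
          mountPath: /etc/kubernetes/pki/etcd
          readOnly: true
        - name: backup
          mountPath: /backup
      volumes:
      - name: etcd-pki
        hostPath:
          path: /etc/kubernetes/pki/etcd
      - name: backup
        hostPath:
          path: /var/backups/etcd            # where the snapshot lands on the node
```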
---
## Restoring an etcd snapshot
@@ -364,3 +365,8 @@ docker run --rm --net host -v $PWD:/vol \
- [bivac](https://github.com/camptocamp/bivac)
Backup Interface for Volumes Attached to Containers
???
:EN:- Backing up clusters
:FR:- Politiques de sauvegarde

View File

@@ -165,3 +165,12 @@ class: extra-details
- Security advantage (stronger isolation between pods)
Check [this blog post](http://jpetazzo.github.io/2019/02/13/running-kubernetes-without-nodes-with-kiyot/) for more details.
???
:EN:- What happens when the cluster is at, or over, capacity
:EN:- Cluster sizing and scaling
:FR:- Ce qui se passe quand il n'y a plus assez de ressources
:FR:- Dimensionner et redimensionner ses clusters

View File

@@ -501,3 +501,11 @@ class: extra-details
- Then upgrading kubeadm to 1.16.X, etc.
- **Make sure to read the release notes before upgrading!**
???
:EN:- Best practices for cluster upgrades
:EN:- Example: upgrading a kubeadm cluster
:FR:- Bonnes pratiques pour la mise à jour des clusters
:FR:- Exemple : mettre à jour un cluster kubeadm

View File

@@ -574,3 +574,8 @@ done
- This could be useful for embedded platforms with very limited resources
(or lab environments for learning purposes)
???
:EN:- Configuring CNI plugins
:FR:- Configurer des plugins CNI

View File

@@ -401,3 +401,8 @@ class: pic
- IP addresses are associated with *pods*, not with individual containers
Both diagrams used with permission.
???
:EN:- Kubernetes concepts
:FR:- Kubernetes en théorie

View File

@@ -210,6 +210,8 @@
(through files that get created in the container filesystem)
- That second link also includes a list of all the fields that can be used with the downward API
---
## Environment variables, pros and cons
@@ -547,3 +549,13 @@ spec:
- With RBAC, we can authorize a user to access configmaps, but not secrets
(since they are two different kinds of resources)
???
:EN:- Managing application configuration
:EN:- Exposing configuration with the downward API
:EN:- Exposing configuration with Config Maps and Secrets
:FR:- Gérer la configuration des applications
:FR:- Configuration au travers de la *downward API*
:FR:- Configuration via les *Config Maps* et *Secrets*

View File

@@ -263,3 +263,8 @@ spec:
#name: web-xyz1234567-pqr89
EOF
```
???
:EN:- Control plane authentication
:FR:- Sécurisation du plan de contrôle

View File

@@ -132,11 +132,33 @@ For a user named `jean.doe`, we will have:
- ServiceAccount `jean.doe` in Namespace `users`
- CertificateSigningRequest `users:jean.doe`
- CertificateSigningRequest `user=jean.doe`
- ClusterRole `users:jean.doe` giving read/write access to that CSR
- ClusterRole `user=jean.doe` giving read/write access to that CSR
- ClusterRoleBinding `users:jean.doe` binding ClusterRole and ServiceAccount
- ClusterRoleBinding `user=jean.doe` binding ClusterRole and ServiceAccount
---
class: extra-details
## About resource name constraints
- Most Kubernetes identifiers and names are fairly restricted
- They generally are DNS-1123 *labels* or *subdomains* (from [RFC 1123](https://tools.ietf.org/html/rfc1123))
- A label is lowercase letters, numbers, dashes; can't start or finish with a dash
- A subdomain is one or multiple labels separated by dots
- Some resources have more relaxed constraints, and can be "path segment names"
(uppercase letters are allowed, as well as some characters like `#:?!,_`)
- This includes RBAC objects (like Roles, RoleBindings...) and CSRs
- See the [Identifiers and Names](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/identifiers.md) design document and the [Object Names and IDs](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#path-segment-names) documentation page for more details
---
@@ -153,7 +175,7 @@ For a user named `jean.doe`, we will have:
- Create the ServiceAccount, ClusterRole, ClusterRoleBinding for `jean.doe`:
```bash
kubectl apply -f ~/container.training/k8s/users:jean.doe.yaml
kubectl apply -f ~/container.training/k8s/user=jean.doe.yaml
```
]
@@ -195,7 +217,13 @@ For a user named `jean.doe`, we will have:
- Add a new context using that identity:
```bash
kubectl config set-context jean.doe --user=token:jean.doe --cluster=kubernetes
kubectl config set-context jean.doe --user=token:jean.doe --cluster=`kubernetes`
```
(Make sure to adapt the cluster name if yours is different!)
- Use that context:
```bash
kubectl config use-context jean.doe
```
]
@@ -216,7 +244,7 @@ For a user named `jean.doe`, we will have:
- Try to access "our" CertificateSigningRequest:
```bash
kubectl get csr users:jean.doe
kubectl get csr user=jean.doe
```
(This should tell us "NotFound")
@@ -273,7 +301,7 @@ The command above generates:
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: users:jean.doe
name: user=jean.doe
spec:
request: $(base64 -w0 < csr.pem)
usages:
@@ -324,12 +352,12 @@ The command above generates:
- Inspect the CSR:
```bash
kubectl describe csr users:jean.doe
kubectl describe csr user=jean.doe
```
- Approve it:
```bash
kubectl certificate approve users:jean.doe
kubectl certificate approve user=jean.doe
```
]
@@ -347,7 +375,7 @@ The command above generates:
- Retrieve the updated CSR object and extract the certificate:
```bash
kubectl get csr users:jean.doe \
kubectl get csr user=jean.doe \
-o jsonpath={.status.certificate} \
| base64 -d > cert.pem
```
@@ -424,3 +452,8 @@ To be usable in real environments, we would need to add:
- we get strong security *and* convenience
- Systems like Vault also have certificate issuance mechanisms
???
:EN:- Generating user certificates with the CSR API
:FR:- Génération de certificats utilisateur avec la CSR API

View File

@@ -688,3 +688,8 @@ class: extra-details
(by setting their label accordingly)
- This gives us building blocks for canary and blue/green deployments
???
:EN:- Scaling with Daemon Sets
:FR:- Utilisation de Daemon Sets

View File

@@ -172,3 +172,8 @@ The dashboard will then ask you which authentication you want to use.
- It introduces new failure modes
(for instance, if you try to apply YAML from a link that's no longer valid)
???
:EN:- The Kubernetes dashboard
:FR:- Le *dashboard* Kubernetes

View File

@@ -26,3 +26,8 @@
- When we want to change some resource, we update the *spec*
- Kubernetes will then *converge* that resource
???
:EN:- Declarative vs imperative models
:FR:- Modèles déclaratifs et impératifs

View File

@@ -823,3 +823,8 @@ class: extra-details
(it could be as a bare process, or in a container/pod using the host network)
- ... And it expects to be listening on port 6443 with TLS
???
:EN:- Building our own cluster from scratch
:FR:- Construire son cluster à la main

View File

@@ -344,3 +344,14 @@ class: extra-details
- [Dynamic Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/)
- [Aggregation Layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)
???
:EN:- Extending the Kubernetes API
:EN:- Custom Resource Definitions (CRDs)
:EN:- The aggregation layer
:EN:- Admission control and webhooks
:FR:- Comment étendre l'API Kubernetes
:FR:- Les CRDs *(Custom Resource Definitions)*
:FR:- Extension via *aggregation layer*, *admission control*, *webhooks*

View File

@@ -237,3 +237,8 @@
- Gitkube can also deploy Helm charts
(instead of raw YAML files)
???
:EN:- GitOps
:FR:- GitOps

View File

@@ -154,9 +154,9 @@ It will use the default success threshold (1 successful attempt = alive).
.exercise[
- Edit `rng-daemonset.yaml` and add the liveness probe
- Edit `rng-deployment.yaml` and add the liveness probe
```bash
vim rng-daemonset.yaml
vim rng-deployment.yaml
```
- Load the YAML for all the resources of DockerCoins:
@@ -333,3 +333,8 @@ class: extra-details
(and have gcr.io/pause take care of the reaping)
- Discussion of this in [Video - 10 Ways to Shoot Yourself in the Foot with Kubernetes, #9 Will Surprise You](https://www.youtube.com/watch?v=QKI-JRs2RIE)
???
:EN:- Adding healthchecks to an app
:FR:- Ajouter des *healthchecks* à une application

View File

@@ -282,3 +282,8 @@ If the Redis process becomes unresponsive, it will be killed.
- check the timestamp of that file from an exec probe
- Writing logs (and checking them from the probe) also works
???
:EN:- Using healthchecks to improve availability
:FR:- Utiliser des *healthchecks* pour améliorer la disponibilité

View File

@@ -237,3 +237,8 @@ We see the components mentioned above: `Chart.yaml`, `templates/`, `values.yaml`
- This can be used for database migrations, backups, notifications, smoke tests ...
- Hooks named `test` are executed only when running `helm test RELEASE-NAME`
???
:EN:- Helm charts format
:FR:- Le format des *Helm charts*

View File

@@ -218,3 +218,8 @@ have details about recommended annotations and labels.
```
]
???
:EN:- Writing a basic Helm chart for the whole app
:FR:- Écriture d'un *chart* Helm simplifié

View File

@@ -121,7 +121,7 @@ This creates a basic chart in the directory `helmcoins`.
helm install COMPONENT-NAME CHART-DIRECTORY
```
- We can also use the following command, which is idempotent:
- We can also use the following command, which is *idempotent*:
```bash
helm upgrade COMPONENT-NAME CHART-DIRECTORY --install
```
@@ -139,6 +139,28 @@ This creates a basic chart in the directory `helmcoins`.
---
class: extra-details
## "Idempotent"
- Idempotent = that can be applied multiple times without changing the result
(the word is commonly used in maths and computer science)
- In this context, this means:
- if the action (installing the chart) wasn't done, do it
- if the action was already done, don't do anything
- Ideally, when such an action fails, it can be retried safely
(as opposed to, e.g., installing a new release each time we run it)
- Other example: `kubectl apply -f some-file.yaml`
---
## Checking what we've done
- Let's see if DockerCoins is working!
@@ -577,3 +599,8 @@ We can look at the definition, but it's fairly complex ...
- We can change the number of workers with `replicaCount`
- And much more!
???
:EN:- Writing better Helm charts for app components
:FR:- Écriture de *charts* composant par composant

View File

@@ -18,6 +18,25 @@
---
## CNCF graduation status
- On April 30th 2020, Helm was the 10th project to *graduate* within the CNCF
.emoji[🎉]
(alongside Containerd, Prometheus, and Kubernetes itself)
- This is an acknowledgement by the CNCF for projects that
*demonstrate thriving adoption, an open governance process,
<br/>
and a strong commitment to community, sustainability, and inclusivity.*
- See [CNCF announcement](https://www.cncf.io/announcement/2020/04/30/cloud-native-computing-foundation-announces-helm-graduation/)
and [Helm announcement](https://helm.sh/blog/celebrating-helms-cncf-graduation/)
---
## Helm concepts
- `helm` is a CLI tool
@@ -417,3 +436,13 @@ All unspecified values will take the default values defined in the chart.
```
]
???
:EN:- Helm concepts
:EN:- Installing software with Helm
:EN:- Helm 2, Helm 3, and the Helm Hub
:FR:- Fonctionnement général de Helm
:FR:- Installer des composants via Helm
:FR:- Helm 2, Helm 3, et le *Helm Hub*

View File

@@ -232,3 +232,8 @@ The chart is in a structured format, but it's entirely captured in this JSON.
(including the full source of the chart, and the values used)
- This allows arbitrary rollbacks, as well as tweaking values even without having access to the source of the chart (or the chart repo) used for deployment
???
:EN:- Deep dive into Helm internals
:FR:- Fonctionnement interne de Helm

View File

@@ -306,3 +306,8 @@ This can also be set with `--cpu-percent=`.
-->
]
???
:EN:- Auto-scaling resources
:FR:- *Auto-scaling* (dimensionnement automatique) des ressources

View File

@@ -718,3 +718,8 @@ We also need:
(create them, promote them, delete them ...)
For inspiration, check [flagger by Weave](https://github.com/weaveworks/flagger).
???
:EN:- The Ingress resource
:FR:- La ressource *ingress*

View File

@@ -155,3 +155,8 @@ For critical services, we might want to precisely control the update process.
- Even better if it's combined with DNS integration
(to facilitate name → ClusterIP resolution)
???
:EN:- Interconnecting clusters
:FR:- Interconnexion de clusters

162
slides/k8s/kubectl-logs.md Normal file
View File

@@ -0,0 +1,162 @@
# Revisiting `kubectl logs`
- In this section, we assume that we have a Deployment with multiple Pods
(e.g. `pingpong` that we scaled to at least 3 pods)
- We will highlight some of the limitations of `kubectl logs`
---
## Streaming logs of multiple pods
- By default, `kubectl logs` shows us the output of a single Pod
.exercise[
- Try to check the output of the Pods related to a Deployment:
```bash
kubectl logs deploy/pingpong --tail 1 --follow
```
<!--
```wait using pod/pingpong-```
```keys ^C```
-->
]
`kubectl logs` only shows us the logs of one of the Pods.
---
## Viewing logs of multiple pods
- When we specify a deployment name, only one single pod's logs are shown
- We can view the logs of multiple pods by specifying a *selector*
- If we check the pods created by the deployment, they all have the label `app=pingpong`
(this is just a default label that gets added when using `kubectl create deployment`)
.exercise[
- View the last line of log from all pods with the `app=pingpong` label:
```bash
kubectl logs -l app=pingpong --tail 1
```
]
---
## Streaming logs of multiple pods
- Can we stream the logs of all our `pingpong` pods?
.exercise[
- Combine `-l` and `-f` flags:
```bash
kubectl logs -l app=pingpong --tail 1 -f
```
<!--
```wait seq=```
```key ^C```
-->
]
*Note: combining `-l` and `-f` is only possible since Kubernetes 1.14!*
*Let's try to understand why ...*
---
class: extra-details
## Streaming logs of many pods
- Let's see what happens if we try to stream the logs for more than 5 pods
.exercise[
- Scale up our deployment:
```bash
kubectl scale deployment pingpong --replicas=8
```
- Stream the logs:
```bash
kubectl logs -l app=pingpong --tail 1 -f
```
<!-- ```wait error:``` -->
]
We see a message like the following one:
```
error: you are attempting to follow 8 log streams,
but maximum allowed concurency is 5,
use --max-log-requests to increase the limit
```
---
class: extra-details
## Why can't we stream the logs of many pods?
- `kubectl` opens one connection to the API server per pod
- For each pod, the API server opens one extra connection to the corresponding kubelet
- If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server
- This could easily put a lot of stress on the API server
- Prior to Kubernetes 1.14, it was decided to *not* allow multiple connections
- From Kubernetes 1.14, it is allowed, but limited to 5 connections
(this can be changed with `--max-log-requests`)
- For more details about the rationale, see
[PR #67573](https://github.com/kubernetes/kubernetes/pull/67573)
---
## Shortcomings of `kubectl logs`
- We don't see which pod sent which log line
- If pods are restarted / replaced, the log stream stops
- If new pods are added, we don't see their logs
- To stream the logs of multiple pods, we need to write a selector
- There are external tools to address these shortcomings
(e.g.: [Stern](https://github.com/wercker/stern))
---
class: extra-details
## `kubectl logs -l ... --tail N`
- If we run this with Kubernetes 1.12, the last command shows multiple lines
- This is a regression when `--tail` is used together with `-l`/`--selector`
- It always shows the last 10 lines of output for each container
(instead of the number of lines specified on the command line)
- The problem was fixed in Kubernetes 1.13
*See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.*

View File

@@ -338,7 +338,7 @@ class: extra-details
kubectl get all
```
<!-- ```hide kubectl wait pod --selector=run=pingpong --for condition=ready ``` -->
<!-- ```hide kubectl wait pod --selector=app=pingpong --for condition=ready ``` -->
]
@@ -384,11 +384,11 @@ class: extra-details
kubectl logs deploy/pingpong --tail 1 --follow
```
- Leave that command running, so that we can keep an eye on these logs
- Stop it with Ctrl-C
<!--
```wait seq=3```
```tmux split-pane -h```
```keys ^C```
-->
]
@@ -411,62 +411,55 @@ class: extra-details
kubectl scale deployment pingpong --replicas 3
```
- Check that we now have multiple pods:
```bash
kubectl get pods
```
]
Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?
We could! But the *deployment* would notice it right away, and scale back to the initial level.
---
## Log streaming
class: extra-details
- Let's look again at the output of `kubectl logs`
## Scaling a Replica Set
(the one we started before scaling up)
- What if we scale the Replica Set instead of the Deployment?
- `kubectl logs` shows us one line per second
- The Deployment would notice it right away and scale back to the initial level
- We could expect 3 lines per second
- The Replica Set makes sure that we have the right number of Pods
(since we should now have 3 pods running `ping`)
- The Deployment makes sure that the Replica Set has the right size
- Let's try to figure out what's happening!
(conceptually, it delegates the management of the Pods to the Replica Set)
- This might seem weird (why this extra layer?) but will soon make sense
(when we look at how rolling updates work!)
---
## Streaming logs of multiple pods
- What happens if we restart `kubectl logs`?
- What happens if we try `kubectl logs` now that we have multiple pods?
.exercise[
- Interrupt `kubectl logs` (with Ctrl-C)
<!--
```tmux last-pane```
```key ^C```
-->
- Restart it:
```bash
kubectl logs deploy/pingpong --tail 1 --follow
kubectl logs deploy/pingpong --tail 3
```
<!--
```wait using pod/pingpong-```
```tmux last-pane```
-->
]
`kubectl logs` will warn us that multiple pods were found, and that it's showing us only one of them.
`kubectl logs` will warn us that multiple pods were found.
Let's leave `kubectl logs` running while we keep exploring.
It is showing us only one of them.
We'll see later how to address that shortcoming.
---
## Resilience
- The *deployment* `pingpong` watches its *replica set*
@@ -501,9 +494,7 @@ Let's leave `kubectl logs` running while we keep exploring.
```key ^J```
```check```
```key ^D```
```tmux select-pane -t 1```
```key ^C```
```key ^D```
-->
]
@@ -524,365 +515,7 @@ Let's leave `kubectl logs` running while we keep exploring.
- The pod is then killed, and `kubectl logs` exits
---
???
## Viewing logs of multiple pods
- When we specify a deployment name, only one single pod's logs are shown
- We can view the logs of multiple pods by specifying a *selector*
- A selector is a logic expression using *labels*
- If we check the pods created by the deployment, they all have the label `app=pingpong`
(this is just a default label that gets added when using `kubectl create deployment`)
.exercise[
- View the last line of log from all pods with the `app=pingpong` label:
```bash
kubectl logs -l app=pingpong --tail 1
```
]
---
### Streaming logs of multiple pods
- Can we stream the logs of all our `pingpong` pods?
.exercise[
- Combine `-l` and `-f` flags:
```bash
kubectl logs -l app=pingpong --tail 1 -f
```
<!--
```wait seq=```
```key ^C```
-->
]
*Note: combining `-l` and `-f` is only possible since Kubernetes 1.14!*
*Let's try to understand why ...*
---
class: extra-details
### Streaming logs of many pods
- Let's see what happens if we try to stream the logs for more than 5 pods
.exercise[
- Scale up our deployment:
```bash
kubectl scale deployment pingpong --replicas=8
```
- Stream the logs:
```bash
kubectl logs -l app=pingpong --tail 1 -f
```
<!-- ```wait error:``` -->
]
We see a message like the following one:
```
error: you are attempting to follow 8 log streams,
but maximum allowed concurency is 5,
use --max-log-requests to increase the limit
```
---
class: extra-details
## Why can't we stream the logs of many pods?
- `kubectl` opens one connection to the API server per pod
- For each pod, the API server opens one extra connection to the corresponding kubelet
- If there are 1000 pods in our deployment, that's 1000 inbound + 1000 outbound connections on the API server
- This could easily put a lot of stress on the API server
- Prior to Kubernetes 1.14, it was decided to *not* allow multiple connections
- From Kubernetes 1.14, it is allowed, but limited to 5 connections
(this can be changed with `--max-log-requests`)
- For more details about the rationale, see
[PR #67573](https://github.com/kubernetes/kubernetes/pull/67573)
---
## Shortcomings of `kubectl logs`
- We don't see which pod sent which log line
- If pods are restarted / replaced, the log stream stops
- If new pods are added, we don't see their logs
- To stream the logs of multiple pods, we need to write a selector
- There are external tools to address these shortcomings
(e.g.: [Stern](https://github.com/wercker/stern))
---
class: extra-details
## `kubectl logs -l ... --tail N`
- If we run this with Kubernetes 1.12, the last command shows multiple lines
- This is a regression when `--tail` is used together with `-l`/`--selector`
- It always shows the last 10 lines of output for each container
(instead of the number of lines specified on the command line)
- The problem was fixed in Kubernetes 1.13
*See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.*
---
class: extra-details
## Party tricks involving IP addresses
- It is possible to specify an IP address with less than 4 bytes
(example: `127.1`)
- Zeroes are then inserted in the middle
- As a result, `127.1` expands to `127.0.0.1`
- So we can `ping 127.1` to ping `localhost`!
(See [this blog post](https://ma.ttias.be/theres-more-than-one-way-to-write-an-ip-address/) for more details.)
---
class: extra-details
## More party tricks with IP addresses
- We can also ping `1.1`
- `1.1` will expand to `1.0.0.1`
- This is one of the addresses of Cloudflare's
[public DNS resolver](https://blog.cloudflare.com/announcing-1111/)
- This is a quick way to check connectivity
(if we can reach 1.1, we probably have internet access)
---
## Creating other kinds of resources
- Deployments are great for stateless web apps
(as well as workers that keep running forever)
- Jobs are great for "long" background work
("long" being at least minutes our hours)
- CronJobs are great to schedule Jobs at regular intervals
(just like the classic UNIX `cron` daemon with its `crontab` files)
- Pods are great for one-off execution that we don't care about
(because they don't get automatically restarted if something goes wrong)
---
## Creating a Job
- A Job will create a Pod
- If the Pod fails, the Job will create another one
- The Job will keep trying until:
- either a Pod succeeds,
- or we hit the *backoff limit* of the Job (default=6)
.exercise[
- Create a Job that has a 50% chance of success:
```bash
kubectl create job flipcoin --image=alpine -- sh -c 'exit $(($RANDOM%2))'
```
]
---
## Our Job in action
- Our Job will create a Pod named `flipcoin-xxxxx`
- If the Pod succeeds, the Job stops
- If the Pod fails, the Job creates another Pod
.exercise[
- Check the status of the Pod(s) created by the Job:
```bash
kubectl get pods --selector=job-name=flipcoin
```
]
---
class: extra-details
## More advanced jobs
- We can specify a number of "completions" (default=1)
- This indicates how many times the Job must be executed
- We can specify the "parallelism" (default=1)
- This indicates how many Pods should be running in parallel
- These options cannot be specified with `kubectl create job`
(we have to write our own YAML manifest to use them)
---
## Scheduling periodic background work
- A Cron Job is a Job that will be executed at specific intervals
(the name comes from the traditional cronjobs executed by the UNIX crond)
- It requires a *schedule*, represented as five space-separated fields:
- minute [0,59]
- hour [0,23]
- day of the month [1,31]
- month of the year [1,12]
- day of the week ([0,6] with 0=Sunday)
- `*` means "all valid values"; `/N` means "every N"
- Example: `*/3 * * * *` means "every three minutes"
---
## Creating a Cron Job
- Let's create a simple job to be executed every three minutes
- Careful: make sure that the job terminates!
(the Cron Job will not wait for a previous Job to finish before starting a new one)
.exercise[
- Create the Cron Job:
```bash
kubectl create cronjob every3mins --schedule="*/3 * * * *" \
--image=alpine -- sleep 10
```
- Check the resource that was created:
```bash
kubectl get cronjobs
```
]
---
## Cron Jobs in action
- At the specified schedule, the Cron Job will create a Job
- The Job will create a Pod
- The Job will make sure that the Pod completes
(re-creating another one if it fails, for instance if its node fails)
.exercise[
- Check the Jobs that are created:
```bash
kubectl get jobs
```
]
(It will take a few minutes before the first job is scheduled.)
---
class: extra-details
## What about `kubectl run` before v1.18?
- Creating a Deployment:
`kubectl run`
- Creating a Pod:
`kubectl run --restart=Never`
- Creating a Job:
`kubectl run --restart=OnFailure`
- Creating a Cron Job:
`kubectl run --restart=OnFailure --schedule=...`
*Avoid using these forms, as they are deprecated since Kubernetes 1.18!*
---
## Beyond `kubectl create`
- As hinted earlier, `kubectl create` doesn't always expose all options
- can't express parallelism or completions of Jobs
- can't express Pods with multiple containers
- can't express healthchecks, resource limits
- etc.
- `kubectl create` and `kubectl run` are *helpers* that generate YAML manifests
- If we write these manifests ourselves, we can use all features and options
- We'll see later how to do that!
:EN:- Running pods and deployments
:FR:- Créer un pod et un déploiement

View File

@@ -438,3 +438,13 @@ class: extra-details
- They can also handle TLS certificates, URL rewriting ...
- They require an *Ingress Controller* to function
???
:EN:- Service discovery and load balancing
:EN:- Accessing pods through services
:EN:- Service types: ClusterIP, NodePort, LoadBalancer
:FR:- Exposer un service
:FR:- Différents types de services : ClusterIP, NodePort, LoadBalancer
:FR:- Utiliser CoreDNS pour la *service discovery*

View File

@@ -578,3 +578,8 @@ $ curl -k https://10.96.0.1
- Code running in pods can connect to services using their name
(e.g. https://kubernetes/...)
???
:EN:- Getting started with kubectl
:FR:- Se familiariser avec kubectl

View File

@@ -145,3 +145,8 @@ class: extra-details
- Some solutions can fill multiple roles
(e.g. kube-router can be set up to provide the pod network and/or network policies and/or replace kube-proxy)
???
:EN:- The Kubernetes network model
:FR:- Le modèle réseau de Kubernetes

View File

@@ -31,23 +31,17 @@
---
## Cloning some repos
## Cloning the repository
- We will need two repositories:
- We will need to clone the training repository
- the first one has the "DockerCoins" demo app
- It has the DockerCoins demo app ...
- the second one has these slides, some scripts, more manifests ...
- ... as well as these slides, some scripts, more manifests
.exercise[
- Clone the kubercoins repository on `node1`:
```bash
git clone https://github.com/jpetazzo/kubercoins
```
- Clone the container.training repository as well:
- Clone the repository on `node1`:
```bash
git clone https://@@GITREPO@@
```
@@ -62,9 +56,9 @@ Without further ado, let's start this application!
.exercise[
- Apply all the manifests from the kubercoins repository:
- Apply the manifest for dockercoins:
```bash
kubectl apply -f kubercoins/
kubectl apply -f ~/container.training/k8s/dockercoins.yaml
```
]
@@ -242,3 +236,8 @@ https://@@GITREPO@@/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/wo
A drawing area should show up, and after a few seconds, a blue
graph will appear.
???
:EN:- Deploying a sample app with YAML manifests
:FR:- Lancer une application de démo avec du YAML

View File

@@ -8,45 +8,164 @@
- They are left untouched by Kustomize
- Kustomize lets us define *overlays* that extend or change the resource files
- Kustomize lets us define *kustomizations*
- A *kustomization* is conceptually similar to a *layer*
- Technically, a *kustomization* is a file named `kustomization.yaml`
(or a directory containing that file + additional files)
---
## Differences with Helm
## What's in a kustomization
- Helm charts use placeholders `{{ like.this }}`
- A kustomization can do any combination of the following:
- Kustomize "bases" are standard Kubernetes YAML
- include other kustomizations
- It is possible to use an existing set of YAML as a Kustomize base
- include Kubernetes resources defined in YAML files
- As a result, writing a Helm chart is more work ...
- patch Kubernetes resources (change values)
- ... But Helm charts are also more powerful; e.g. they can:
- add labels or annotations to all resources
- use flags to conditionally include resources or blocks
- specify ConfigMaps and Secrets from literal values or local files
- check if a given Kubernetes API group is supported
- [and much more](https://helm.sh/docs/chart_template_guide/)
(... And a few more advanced features that we won't cover today!)
---
## Kustomize concepts
## A simple kustomization
- Kustomize needs a `kustomization.yaml` file
This features a Deployment, Service, and Ingress (in separate files),
and a couple of patches (to change the number of replicas and the hostname
used in the Ingress).
- That file can be a *base* or a *variant*
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesStrategicMerge:
- scale-deployment.yaml
- ingress-hostname.yaml
resources:
- deployment.yaml
- service.yaml
- ingress.yaml
```
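The patches referenced above are not shown on the slide; as a hedged illustration, `scale-deployment.yaml` could be a strategic merge patch like this (the Deployment name `web` is an assumption):
```yaml
# scale-deployment.yaml (hypothetical content)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web          # must match the name used in deployment.yaml
spec:
  replicas: 3        # the only field we want to override
```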
- If it's a *base*:
On the next slide, let's see a more complex example ...
- it lists YAML resource files to use
---
- If it's a *variant* (or *overlay*):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
add-this-to-all-my-resources: please
patchesStrategicMerge:
- prod-scaling.yaml
- prod-healthchecks.yaml
bases:
- api/
- frontend/
- db/
- github.com/example/app?ref=tag-or-branch
resources:
- ingress.yaml
- permissions.yaml
configMapGenerator:
- name: appconfig
files:
- global.conf
- local.conf=prod.conf
```
- it refers to (at least) one *base*
---
- and some *patches*
## Glossary
- A *base* is a kustomization that is referred to by other kustomizations
- An *overlay* is a kustomization that refers to other kustomizations
- A kustomization can be both a base and an overlay at the same time
(a kustomization can refer to another, which can refer to a third)
- A *patch* describes how to alter an existing resource
(e.g. to change the image in a Deployment; or scaling parameters; etc.)
- A *variant* is the final outcome of applying bases + overlays
(See the [kustomize glossary](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/glossary.md) for more definitions!)
---
## What Kustomize *cannot* do
- By design, there are a number of things that Kustomize won't do
- For instance:
- using command-line arguments or environment variables to generate a variant
- overlays can only *add* resources, not *remove* them
- See the full list of [eschewed features](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/eschewedFeatures.md) for more details
---
## Kustomize workflows
- The Kustomize documentation proposes two different workflows
- *Bespoke configuration*
- base and overlays managed by the same team
- *Off-the-shelf configuration* (OTS)
- base and overlays managed by different teams
- base is regularly updated by "upstream" (e.g. a vendor)
- our overlays and patches should (hopefully!) apply cleanly
- we may regularly update the base, or use a remote base
---
## Remote bases
- Kustomize can fetch remote bases using the Hashicorp go-getter library
- Examples:
github.com/jpetazzo/kubercoins (remote git repository)
github.com/jpetazzo/kubercoins?ref=kustomize (specific tag or branch)
https://releases.hello.io/k/1.0.zip (remote archive)
https://releases.hello.io/k/1.0.zip//some-subdir (subdirectory in archive)
- See [hashicorp/go-getter URL format docs](https://github.com/hashicorp/go-getter#url-format) for more examples
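As a minimal sketch, a kustomization using one of the remote bases above could look like this (whether it belongs under `bases` or `resources` depends on the Kustomize version, as discussed a bit later):
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- github.com/jpetazzo/kubercoins?ref=kustomize   # remote base, pinned to a branch/tag
patchesStrategicMerge:
- scale-worker.yaml                              # hypothetical local patch
```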
---
## Managing `kustomization.yaml`
- There are many ways to manage `kustomization.yaml` files, including:
- web wizards like [Replicated Ship](https://www.replicated.com/ship/)
- the `kustomize` CLI
- opening the file with our favorite text editor
- Let's see these in action!
---
@@ -199,3 +318,63 @@
]
Note: it might take a minute or two for the worker to start.
---
## Working with the `kustomize` CLI
- This is another way to get started
- General workflow:
`kustomize create` to generate an empty `kustomization.yaml` file
`kustomize edit add resource` to add Kubernetes YAML files to it
`kustomize edit add patch` to add patches to said resources
`kustomize build | kubectl apply -f-` or `kubectl apply -k .`
---
## `kubectl apply -k`
- Kustomize has been integrated in `kubectl`
- The `kustomize` tool is still needed if we want to use `create`, `edit`, ...
- Also, warning: `kubectl apply -k` embeds a slightly older version of Kustomize than the standalone `kustomize` tool!
- In recent versions of `kustomize`, bases can be listed in `resources`
(and `kustomize edit add base` will add its arguments to `resources`)
- `kubectl apply -k` requires bases to be listed in `bases`
(so after using `kustomize edit add base`, we need to fix `kustomization.yaml`)
---
## Differences with Helm
- Helm charts use placeholders `{{ like.this }}`
- Kustomize "bases" are standard Kubernetes YAML
- It is possible to use an existing set of YAML as a Kustomize base
- As a result, writing a Helm chart is more work ...
- ... But Helm charts are also more powerful; e.g. they can:
- use flags to conditionally include resources or blocks
- check if a given Kubernetes API group is supported
- [and much more](https://helm.sh/docs/chart_template_guide/)
???
:EN:- Packaging and running apps with Kustomize
:FR:- *Packaging* d'applications avec Kustomize

View File

@@ -0,0 +1,226 @@
# Labels and annotations
- Most Kubernetes resources can have *labels* and *annotations*
- Both labels and annotations are arbitrary strings
(with some limitations that we'll explain in a minute)
- Both labels and annotations can be added, removed, changed, dynamically
- This can be done with:
- the `kubectl edit` command
- the `kubectl label` and `kubectl annotate` commands
- ... many other ways! (`kubectl apply -f`, `kubectl patch`, ...)
---
## Viewing labels and annotations
- Let's see what we get when we create a Deployment
.exercise[
- Create a Deployment:
```bash
kubectl create deployment clock --image=jpetazzo/clock
```
- Look at its annotations and labels:
```bash
kubectl describe deployment clock
```
]
So, what do we get?
---
## Labels and annotations for our Deployment
- We see one label:
```
Labels: app=clock
```
- This is added by `kubectl create deployment`
- And one annotation:
```
Annotations: deployment.kubernetes.io/revision: 1
```
- This is to keep track of successive versions when doing rolling updates
---
## And for the related Pod?
- Let's look up the Pod that was created and check it too
.exercise[
- Find the name of the Pod:
```bash
kubectl get pods
```
- Display its information:
```bash
kubectl describe pod clock-xxxxxxxxxx-yyyyy
```
]
So, what do we get?
---
## Labels and annotations for our Pod
- We see two labels:
```
Labels: app=clock
pod-template-hash=xxxxxxxxxx
```
- `app=clock` comes from `kubectl create deployment` too
- `pod-template-hash` was assigned by the Replica Set
(when we will do rolling updates, each set of Pods will have a different hash)
- There are no annotations:
```
Annotations: <none>
```
---
## Selectors
- A *selector* is an expression matching labels
- It will restrict a command to the objects matching *at least* all these labels
.exercise[
- List all the pods with at least `app=clock`:
```bash
kubectl get pods --selector=app=clock
```
- List all the pods with a label `app`, regardless of its value:
```bash
kubectl get pods --selector=app
```
]
---
## Setting labels and annotations
- The easiest method is to use `kubectl label` and `kubectl annotate`
.exercise[
- Set a label on the `clock` Deployment:
```bash
kubectl label deployment clock color=blue
```
- Check it out:
```bash
kubectl describe deployment clock
```
]
---
## Other ways to view labels
- `kubectl get` gives us a couple of useful flags to check labels
- `kubectl get --show-labels` shows all labels
- `kubectl get -L xyz` shows the value of label `xyz`
.exercise[
- List all the labels that we have on pods:
```bash
kubectl get pods --show-labels
```
- List the value of label `app` on these pods:
```bash
kubectl get pods -L app
```
]
---
class: extra-details
## More on selectors
- If a selector has multiple labels, it means "match at least these labels"
Example: `--selector=app=frontend,release=prod`
- `--selector` can be abbreviated as `-l` (for **l**abels)
We can also use negative selectors
Example: `--selector=app!=clock`
- Selectors can be used with most `kubectl` commands
Examples: `kubectl delete`, `kubectl label`, ...
---
## Other ways to view labels
- We can use the `--show-labels` flag with `kubectl get`
.exercise[
- Show labels for a bunch of objects:
```bash
kubectl get --show-labels po,rs,deploy,svc,no
```
]
---
## Differences between labels and annotations
- The *key* for both labels and annotations:
- must start and end with a letter or digit
- can also have `.` `-` `_` (but not in first or last position)
- can be up to 63 characters, or 253 + `/` + 63
- Label *values* are up to 63 characters, with the same restrictions
- Annotation *values* can have arbitrary characters (yes, even binary)
- Maximum length isn't defined
(dozens of kilobytes is fine, hundreds maybe not so much)
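Putting it all together, here is a hedged sketch of a Pod manifest carrying both labels and annotations (the annotation key and value are made up for illustration):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: clock
  labels:
    app: clock                     # short strings, usable in selectors
    color: blue
  annotations:
    example.com/build-info: "built by CI pipeline 1234"   # free-form metadata
spec:
  containers:
  - name: clock
    image: jpetazzo/clock
```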
???
:EN:- Labels and annotations
:FR:- *Labels* et annotations

View File

@@ -160,7 +160,7 @@
- Check that our Consul cluster indeed has 3 members:
```bash
kubectl exec persistentconsul-0 consul members
kubectl exec persistentconsul-0 -- consul members
```
]
@@ -246,3 +246,10 @@
(when we can't or won't dedicate a whole disk to a volume)
- It's possible to mix both (using distinct Storage Classes)
???
:EN:- Static vs dynamic volume provisioning
:EN:- Example: local persistent volume provisioner
:FR:- Création statique ou dynamique de volumes
:FR:- Exemple : création de volumes locaux

View File

@@ -34,11 +34,11 @@
- Download the `kubectl` binary from one of these links:
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl)
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.18.8/bin/linux/amd64/kubectl)
|
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/darwin/amd64/kubectl)
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.18.8/bin/darwin/amd64/kubectl)
|
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/windows/amd64/kubectl.exe)
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.18.8/bin/windows/amd64/kubectl.exe)
- On Linux and macOS, make the binary executable with `chmod +x kubectl`
@@ -193,3 +193,8 @@ class: extra-details
]
We can now utilize the cluster exactly as if we're logged into a node, except that it's remote.
???
:EN:- Working with remote Kubernetes clusters
:FR:- Travailler avec des *clusters* distants

View File

@@ -145,3 +145,8 @@ But this is outside of the scope of this chapter.
The YAML file that we used creates all the resources in the
`default` namespace, for simplicity. In a real scenario, you will
create the resources in the `kube-system` namespace or in a dedicated namespace.
???
:EN:- Centralizing logs
:FR:- Centraliser les logs

View File

@@ -45,7 +45,7 @@ Exactly what we need!
---
## Installing Stern
## Checking if Stern is installed
- Run `stern` (without arguments) to check if it's installed:
@@ -57,7 +57,17 @@ Exactly what we need!
stern pod-query [flags]
```
- If it is not installed, the easiest method is to download a [binary release](https://github.com/wercker/stern/releases)
- If it's missing, let's see how to install it
---
## Installing Stern
- Stern is written in Go, and Go programs are usually shipped as a single binary
- We just need to download that binary and put it in our `PATH`!
- Binary releases are available [here](https://github.com/wercker/stern/releases) on GitHub
- The following commands will install Stern on a Linux Intel 64 bit machine:
```bash
@@ -66,7 +76,7 @@ Exactly what we need!
sudo chmod +x /usr/local/bin/stern
```
- On OS X, just `brew install stern`
- On macOS, we can also `brew install stern` or `sudo port install stern`
<!-- ##VERSION## -->
@@ -149,3 +159,8 @@ Exactly what we need!
-->
]
???
:EN:- Viewing pod logs from the CLI
:FR:- Consulter les logs des pods depuis la CLI

View File

@@ -80,3 +80,8 @@ If it shows our nodes and their CPU and memory load, we're good!
- kube-resource-report can generate HTML reports
(https://github.com/hjacobs/kube-resource-report)
???
:EN:- The *core metrics pipeline*
:FR:- Le *core metrics pipeline*

View File

@@ -532,3 +532,8 @@ Sometimes it works, sometimes it doesn't. Why?
- We want to automate all these steps
- We want something that works on all networks
???
:EN:- Connecting nodes and pods
:FR:- Interconnecter les nœuds et les pods

View File

@@ -365,3 +365,7 @@ Note: we could have used `--namespace=default` for the same result.
- Pro-tip: install it on your machine during the next break!
???
:EN:- Organizing resources with Namespaces
:FR:- Organiser les ressources avec des *namespaces*

View File

@@ -446,3 +446,8 @@ troubleshoot easily, without having to poke holes in our firewall.
- a [very good talk about network policies](https://www.youtube.com/watch?list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb&v=3gGpMmYeEO8) at KubeCon North America 2017
- a repository of [ready-to-use recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for network policies
???
:EN:- Isolating workloads with Network Policies
:FR:- Isolation réseau avec les *network policies*

View File

@@ -377,3 +377,8 @@ class: extra-details
- It should now say "Signature Verified"
]
???
:EN:- Authenticating with OIDC
:FR:- S'identifier avec OIDC

Some files were not shown because too many files have changed in this diff.