Compare commits


29 Commits

Author SHA1 Message Date
Jerome Petazzoni
5447d8caf7 Add VMs and chat URL 2020-07-07 17:29:17 +02:00
Jerome Petazzoni
424d32ad47 Final updates to content 2020-07-07 17:14:43 +02:00
Jérôme Petazzoni
7dd72f123f Merge pull request #562 from guilhem/patch-1
mismatch requests/limits
2020-07-07 15:35:46 +02:00
Guilhem Lettron
ff95066006 mismatch requests/limits
Burstable pods are killed when the node is overloaded and they have exceeded their requests
2020-07-07 13:55:28 +02:00
Jerome Petazzoni
8146c4dabe Add CRD that I had forgotten 2020-07-01 18:15:33 +02:00
Jerome Petazzoni
17aea33beb Add config for Traefik v2 2020-07-01 18:15:23 +02:00
Jerome Petazzoni
9770f81a1c Update DaemonSet in filebeat example to apps/v1 2020-07-01 16:55:48 +02:00
Jerome Petazzoni
0cb9095303 Fix up CRDs and add better openapiv3 schema validation 2020-07-01 16:53:51 +02:00
Jerome Petazzoni
ffded8469b Clean up socat deployment (even if we don't use it anymore) 2020-07-01 16:10:40 +02:00
Jerome Petazzoni
0e892cf8b4 Fix indentation in volume example 2020-06-28 12:10:01 +02:00
Jerome Petazzoni
b87efbd6e9 Update etcd slide 2020-06-26 07:32:53 +02:00
Jerome Petazzoni
1a24b530d6 Update Kustomize version 2020-06-22 08:33:21 +02:00
Jerome Petazzoni
122ffec5c2 kubectl get --show-labels and -L 2020-06-16 22:50:38 +02:00
Jerome Petazzoni
276a2dbdda Fix titles 2020-06-04 12:55:42 +02:00
Jerome Petazzoni
2836b58078 Add ENIX high five sessions 2020-06-04 12:53:25 +02:00
Jerome Petazzoni
0d065788a4 Improve how we display dates (sounds silly but with longer online events it becomes necessary) 2020-06-04 12:42:44 +02:00
Jerome Petazzoni
14271a4df0 Rehaul 'setup k8s' sections 2020-06-03 16:54:41 +02:00
Jerome Petazzoni
412d029d0c Tweak self-hosted options 2020-06-02 17:45:51 +02:00
Jerome Petazzoni
f960230f8e Reorganize managed options; add Scaleway 2020-06-02 17:28:23 +02:00
Jerome Petazzoni
774c8a0e31 Rewrite intro to the authn/authz module 2020-06-01 23:43:33 +02:00
Jerome Petazzoni
4671a981a7 Add deployment automation steps
The settings file can now specify an optional list of steps.
After creating a bunch of instances, the steps are then
automatically executed. This helps since virtually all
deployments will be a sequence of 'start + deploy + otheractions'.

It also helps to automatically execute steps like webssh
and tailhist (since I tend to forget them often).
2020-06-01 20:58:23 +02:00
Jerome Petazzoni
b9743a5f8c Simplify Portworx setup and update it for k8s 1.18 2020-06-01 14:41:25 +02:00
Jerome Petazzoni
df4980750c Bump up ship version 2020-05-27 17:41:22 +02:00
Jerome Petazzoni
9467c7309e Update shortlinks 2020-05-17 20:21:15 +02:00
Jerome Petazzoni
86b0380a77 Update operator links 2020-05-13 20:29:59 +02:00
Jerome Petazzoni
eb9052ae9a Add twitch chat info 2020-05-07 13:24:35 +02:00
Jerome Petazzoni
8f85332d8a Advanced Dockerfiles -> Advanced Dockerfile Syntax 2020-05-06 17:25:03 +02:00
Jerome Petazzoni
0479ad2285 Add force redirects 2020-05-06 17:22:13 +02:00
Jerome Petazzoni
986d7eb9c2 Add foreword to operators design section 2020-05-05 17:24:05 +02:00
44 changed files with 2327 additions and 938 deletions


@@ -4,6 +4,10 @@ metadata:
   name: coffees.container.training
 spec:
   group: container.training
+  versions:
+  - name: v1alpha1
+    served: true
+    storage: true
   scope: Namespaced
   names:
     plural: coffees
@@ -11,25 +15,4 @@ spec:
     kind: Coffee
     shortNames:
     - cof
-  versions:
-  - name: v1alpha1
-    served: true
-    storage: true
-  schema:
-    openAPIV3Schema:
-      properties:
-        spec:
-          required:
-          - taste
-          properties:
-            taste:
-              description: Subjective taste of that kind of coffee bean
-              type: string
-  additionalPrinterColumns:
-  - jsonPath: .spec.taste
-    description: Subjective taste of that kind of coffee bean
-    name: Taste
-    type: string
-  - jsonPath: .metadata.creationTimestamp
-    name: Age
-    type: date

k8s/coffee-3.yaml Normal file

@@ -0,0 +1,37 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: coffees.container.training
spec:
  group: container.training
  scope: Namespaced
  names:
    plural: coffees
    singular: coffee
    kind: Coffee
    shortNames:
    - cof
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        required: [ spec ]
        properties:
          spec:
            type: object
            properties:
              taste:
                description: Subjective taste of that kind of coffee bean
                type: string
            required: [ taste ]
    additionalPrinterColumns:
    - jsonPath: .spec.taste
      description: Subjective taste of that kind of coffee bean
      name: Taste
      type: string
    - jsonPath: .metadata.creationTimestamp
      name: Age
      type: date
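For reference, a custom resource conforming to this new schema could look like the following (hypothetical example: the resource name and taste value are made up; only the group, version, kind, and required `spec.taste` field come from the CRD above):

```yaml
apiVersion: container.training/v1alpha1
kind: Coffee
metadata:
  name: arabica
spec:
  taste: mild
```

Thanks to `additionalPrinterColumns`, `kubectl get coffees` would then show TASTE and AGE columns in addition to NAME.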


@@ -9,9 +9,9 @@ spec:
 kind: Coffee
 apiVersion: container.training/v1alpha1
 metadata:
-  name: robusta
+  name: excelsa
 spec:
-  taste: stronger
+  taste: fruity
 ---
kind: Coffee
apiVersion: container.training/v1alpha1
@@ -23,7 +23,12 @@ spec:
 kind: Coffee
 apiVersion: container.training/v1alpha1
 metadata:
-  name: excelsa
+  name: robusta
 spec:
-  taste: fruity
+  taste: stronger
+  bitterness: high
 ---
+kind: Coffee
+apiVersion: container.training/v1alpha1
+metadata:
+  name: java


@@ -52,7 +52,7 @@ data:
       - add_kubernetes_metadata:
           in_cluster: true
 ---
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: filebeat
@@ -60,6 +60,9 @@ metadata:
   labels:
     k8s-app: filebeat
 spec:
+  selector:
+    matchLabels:
+      k8s-app: filebeat
   template:
     metadata:
       labels:

File diff suppressed because it is too large


@@ -22,7 +22,10 @@ spec:
         command: ["sh", "-c", "if [ -d /vol/lost+found ]; then rmdir /vol/lost+found; fi"]
       containers:
       - name: postgres
-        image: postgres:11
+        image: postgres:12
+        env:
+        - name: POSTGRES_HOST_AUTH_METHOD
+          value: trust
         volumeMounts:
         - mountPath: /var/lib/postgresql/data
           name: postgres


@@ -1,28 +1,17 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
-  annotations:
-    deployment.kubernetes.io/revision: "2"
-  creationTimestamp: null
-  generation: 1
   labels:
     app: socat
   name: socat
   namespace: kube-system
-  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
 spec:
   replicas: 1
   selector:
     matchLabels:
       app: socat
-  strategy:
-    rollingUpdate:
-      maxSurge: 1
-      maxUnavailable: 1
-    type: RollingUpdate
   template:
     metadata:
-      creationTimestamp: null
       labels:
         app: socat
     spec:
@@ -34,34 +23,19 @@ spec:
         image: alpine
         imagePullPolicy: Always
         name: socat
-        resources: {}
-        terminationMessagePath: /dev/termination-log
-        terminationMessagePolicy: File
-      dnsPolicy: ClusterFirst
-      restartPolicy: Always
-      schedulerName: default-scheduler
-      securityContext: {}
-      terminationGracePeriodSeconds: 30
-status: {}
 ---
 apiVersion: v1
 kind: Service
 metadata:
-  creationTimestamp: null
   labels:
     app: socat
   name: socat
   namespace: kube-system
-  selfLink: /api/v1/namespaces/kube-system/services/socat
 spec:
-  externalTrafficPolicy: Cluster
   ports:
   - port: 80
     protocol: TCP
     targetPort: 80
   selector:
     app: socat
-  sessionAffinity: None
   type: NodePort
-status:
-  loadBalancer: {}

k8s/traefik-v1.yaml Normal file

@@ -0,0 +1,103 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      tolerations:
      - effect: NoSchedule
        operator: Exists
      hostNetwork: true
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system

k8s/traefik-v2.yaml Normal file

@@ -0,0 +1,111 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      tolerations:
      - effect: NoSchedule
        operator: Exists
      hostNetwork: true
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --accesslog
        - --api
        - --api.insecure
        - --log.level=INFO
        - --metrics.prometheus
        - --providers.kubernetescrd
        - --providers.kubernetesingress
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system


@@ -1,103 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      tolerations:
      - effect: NoSchedule
        operator: Exists
      hostNetwork: true
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system

k8s/traefik.yaml Symbolic link

@@ -0,0 +1 @@
traefik-v1.yaml


@@ -249,16 +249,19 @@ EOF"
 # Install kustomize
 pssh "
 if [ ! -x /usr/local/bin/kustomize ]; then
-  curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.5.4/kustomize_v3.5.1_linux_amd64.tar.gz |
+  ##VERSION##
+  curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.6.1/kustomize_v3.6.1_linux_amd64.tar.gz |
   sudo tar -C /usr/local/bin -zx kustomize
   echo complete -C /usr/local/bin/kustomize kustomize | sudo tee /etc/bash_completion.d/kustomize
 fi"

 # Install ship
+# Note: 0.51.3 is the last version that doesn't display GIN-debug messages
+# (don't want to get folks confused by that!)
 pssh "
 if [ ! -x /usr/local/bin/ship ]; then
   ##VERSION##
-  curl -L https://github.com/replicatedhq/ship/releases/download/v0.40.0/ship_0.40.0_linux_amd64.tar.gz |
+  curl -L https://github.com/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_amd64.tar.gz |
   sudo tar -C /usr/local/bin -zx ship
 fi"
@@ -337,7 +340,7 @@ _cmd_maketag() {
   if [ -z $USER ]; then
     export USER=anonymous
   fi
-  MS=$(($(date +%N)/1000000))
+  MS=$(($(date +%N | tr -d 0)/1000000))
   date +%Y-%m-%d-%H-%M-$MS-$USER
 }
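The `tr -d 0` change in this hunk works around a shell arithmetic quirk: `date +%N` pads nanoseconds with leading zeros, and `$(( ))` treats a number with a leading zero as octal, so something like `$((098765432/1000000))` aborts with "value too great for base". A small sketch of the failure mode and the workaround, plus an alternative (the `10#` base prefix is not what the script uses, just another way out):

```shell
# bash: a leading zero means octal, so $((098765432/1000000)) aborts.
n=098765432

# Workaround used in the script: delete the character "0" entirely.
# Note this strips *all* zeros, not just leading ones; that changes the
# value, which is acceptable here because the result only needs to make
# the generated tag reasonably unique.
ms1=$(( $(echo $n | tr -d 0) / 1000000 ))

# Alternative: force base-10 interpretation with the 10# prefix.
ms2=$(( 10#$n / 1000000 ))

echo $ms1 $ms2   # prints: 98 98
```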
@@ -491,6 +494,7 @@ _cmd_start() {
     --settings) SETTINGS=$2; shift 2;;
     --count) COUNT=$2; shift 2;;
     --tag) TAG=$2; shift 2;;
+    --students) STUDENTS=$2; shift 2;;
     *) die "Unrecognized parameter: $1."
     esac
   done
@@ -502,8 +506,14 @@ _cmd_start() {
     die "Please add --settings flag to specify which settings file to use."
   fi
   if [ -z "$COUNT" ]; then
-    COUNT=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
-    warning "No --count option was specified. Using value from settings file ($COUNT)."
+    CLUSTERSIZE=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
+    if [ -z "$STUDENTS" ]; then
+      warning "Neither --count nor --students was specified."
+      warning "According to the settings file, the cluster size is $CLUSTERSIZE."
+      warning "Deploying one cluster of $CLUSTERSIZE nodes."
+      STUDENTS=1
+    fi
+    COUNT=$(($STUDENTS*$CLUSTERSIZE))
   fi

   # Check that the specified settings and infrastructure are valid.
@@ -521,11 +531,41 @@ _cmd_start() {
   infra_start $COUNT
   sep
   info "Successfully created $COUNT instances with tag $TAG"
   sep
   echo created > tags/$TAG/status
-  info "To deploy Docker on these instances, you can run:"
-  info "$0 deploy $TAG"
+  # If the settings.yaml file has a "steps" field,
+  # automatically execute all the actions listed in that field.
+  # If an action fails, retry it up to 10 times.
+  python -c 'if True: # hack to deal with indentation
+      import sys, yaml
+      settings = yaml.safe_load(sys.stdin)
+      print ("\n".join(settings.get("steps", [])))
+      ' < tags/$TAG/settings.yaml \
+  | while read step; do
+      if [ -z "$step" ]; then
+        break
+      fi
+      sep
+      info "Automatically executing step '$step'."
+      TRY=1
+      MAXTRY=10
+      while ! $0 $step $TAG ; do
+        TRY=$(($TRY+1))
+        if [ $TRY -gt $MAXTRY ]; then
+          error "This step ($step) failed after $MAXTRY attempts."
+          info "You can troubleshoot the situation manually, or terminate these instances with:"
+          info "$0 stop $TAG"
+          die "Giving up."
+        else
+          sep
+          info "Step '$step' failed. Let's wait 10 seconds and try again."
+          info "(Attempt $TRY out of $MAXTRY.)"
+          sleep 10
+        fi
+      done
+    done
+  sep
+  info "Deployment successful."
   info "To terminate these instances, you can run:"
   info "$0 stop $TAG"
 }
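The retry logic in the `while ! $0 $step $TAG` loop above can be exercised in isolation; this is a hypothetical Python sketch of the same pattern (the actual implementation is the shell loop in the diff):

```python
import time

def run_with_retries(step, attempt_fn, max_try=10, delay=0):
    """Run attempt_fn until it succeeds, up to max_try times.

    Mirrors the shell loop: on failure, wait and retry; after max_try
    failed attempts, give up with an error. Returns the number of the
    attempt that succeeded.
    """
    for attempt in range(1, max_try + 1):
        if attempt_fn():
            return attempt
        if attempt == max_try:
            raise RuntimeError("step %s failed after %d attempts" % (step, max_try))
        time.sleep(delay)  # the shell script sleeps 10 seconds here

# A fake step that succeeds on the third attempt:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    return calls["n"] >= 3

print(run_with_retries("deploy", flaky))  # -> 3
```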


@@ -6,8 +6,8 @@ clustersize: 1
 # The hostname of each node will be clusterprefix + a number
 clusterprefix: node

-# Jinja2 template to use to generate ready-to-cut cards
-cards_template: cards.html
+# Jinja2 template to use to generate ready-to-cut
+cards_template: clusters.csv

 # Use "Letter" in the US, and "A4" everywhere else
 paper_size: Letter
@@ -21,3 +21,9 @@ machine_version: 0.15.0

 # Password used to connect with the "docker user"
 docker_user_password: training
+
+steps:
+- deploy
+- webssh
+- tailhist
+- cards


@@ -20,3 +20,10 @@ machine_version: 0.14.0
 # Password used to connect with the "docker user"
 docker_user_password: training
+
+steps:
+- deploy
+- webssh
+- tailhist
+- kube
+- cards
+- kubetest


@@ -1,8 +1,7 @@
 # Uncomment and/or edit one of the the following lines if necessary.
-#/ /kube-halfday.yml.html 200
-#/ /kube-fullday.yml.html 200
-#/ /kube-twodays.yml.html 200
-/ /ardan.html 200!
+#/ /kube-halfday.yml.html 200!
+#/ /kube-fullday.yml.html 200!
+#/ /kube-twodays.yml.html 200!

 # And this allows to do "git clone https://container.training".
 /info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack
@@ -14,5 +13,11 @@
 # Shortlink for the QRCode
 /q /qrcode.html 200

-/next https://www.eventbrite.com/e/intensive-kubernetes-advanced-concepts-live-stream-tickets-102358725704
+# Shortlinks for next training in English and French
+/next https://www.eventbrite.com/e/livestream-intensive-kubernetes-bootcamp-tickets-103262336428
+/hi5 https://enix.io/fr/services/formation/online/
+
+/ /intro.yml.html 200!
+/vms https://docs.google.com/spreadsheets/d/1u91MzRvUiZiI55x_sto1kk9LP4QoxqOBjvjhZCeyjU4/edit
+/chat https://gitter.im/jpetazzo/training-20200707-online
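For context on the "force redirects" commit: in Netlify's `_redirects` format, a status code with a trailing `!` (such as `200!`) makes the rule a forced ("shadowing") rewrite that applies even when a file already exists at the requested path; without the `!`, an existing file takes precedence. A minimal hypothetical example:

```
# Serve intro.yml.html for the root URL even though other files exist there:
/ /intro.yml.html 200!
# Plain shortlink redirect (302 by default):
/chat https://gitter.im/example/room
```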


@@ -1,48 +0,0 @@
<h1>
Ardan Live - May and June training materials
</h1>
<h2>
Intensive Kubernetes: Advanced Concepts
</h2>
<ul>
<li>
<a href="k8s-adv-1.yml.html">
Day 1: packaging applications with Kustomize and Helm
</a>
</li>
<li>
<a href="k8s-adv-2.yml.html">
Day 2: capacity management and Kubernetes operators
</a>
</li>
<li>
<a href="k8s-adv-3.yml.html">
Day 3: security focus
</a>
</li>
<li>
<a href="k8s-adv-4.yml.html">
Day 4: application configuration and stateful apps
</a>
</li>
</ul>
<h2>
Docker Bootcamp
</h2>
<ul>
<li>
<a href="ctr-bootcamp.yml.html">
Day 1, 2, and 3
</a>
</li>
</ul>
<h2>
Kubernetes Bootcamp
</h2>
<ul>
<li>
<a href="k8s-bootcamp.yml.html">
Day 1, 2, and 3
</a>
</li>
</ul>


@@ -1,7 +1,7 @@
 class: title

-# Advanced Dockerfiles
+# Advanced Dockerfile Syntax

 ![construction](images/title-advanced-dockerfiles.jpg)
@@ -12,7 +12,10 @@ class: title
 We have seen simple Dockerfiles to illustrate how Docker build
 container images.

-In this section, we will see more Dockerfile commands.
+In this section, we will give a recap of the Dockerfile syntax,
+and introduce advanced Dockerfile commands that we might
+come across sometimes; or that we might want to use in some
+specific scenarios.

 ---
@@ -420,3 +423,8 @@ ONBUILD COPY . /src
 * You can't chain `ONBUILD` instructions with `ONBUILD`.

 * `ONBUILD` can't be used to trigger `FROM` instructions.
+
+???
+
+:EN:- Advanced Dockerfile syntax
+:FR:- Dockerfile niveau expert


@@ -1,4 +1,4 @@
-# Logging
+# Logging (extra material)

 In this chapter, we will explain the different ways to send logs from containers.


@@ -22,7 +22,7 @@ TEMPLATE="""<html>
 <tr><td class="header" colspan="3">{{ title }}</td></tr>
 <tr><td class="details" colspan="3">Note: while some workshops are delivered in other languages, slides are always in English.</td></tr>

-<tr><td class="title" colspan="3">Free video of our latest workshop</td></tr>
+<tr><td class="title" colspan="3">Free Kubernetes intro course</td></tr>
 <tr>
 <td>Getting Started With Kubernetes and Container Orchestration</td>
@@ -40,7 +40,7 @@ TEMPLATE="""<html>
 </tr>

 {% if coming_soon %}
-<tr><td class="title" colspan="3">Coming soon near you</td></tr>
+<tr><td class="title" colspan="3">Coming soon</td></tr>
 {% for item in coming_soon %}
 <tr>
@@ -141,13 +141,26 @@ import yaml
 items = yaml.safe_load(open("index.yaml"))

+def prettyparse(date):
+    months = [
+        "January", "February", "March", "April", "May", "June",
+        "July", "August", "September", "October", "November", "December"
+    ]
+    month = months[date.month-1]
+    suffix = {
+        1: "st", 2: "nd", 3: "rd",
+        21: "st", 22: "nd", 23: "rd",
+        31: "st"}.get(date.day, "th")
+    return date.year, month, "{}{}".format(date.day, suffix)
+
 # Items with a date correspond to scheduled sessions.
 # Items without a date correspond to self-paced content.
 # The date should be specified as a string (e.g. 2018-11-26).
 # It can also be a list of two elements (e.g. [2018-11-26, 2018-11-28]).
 # The latter indicates an event spanning multiple dates.
-# The first date will be used in the generated page, but the event
-# will be considered "current" (and therefore, shown in the list of
+# The event will be considered "current" (shown in the list of
 # upcoming events) until the second date.

 for item in items:
@@ -157,19 +170,23 @@ for item in items:
         date_begin, date_end = date
     else:
         date_begin, date_end = date, date
-    suffix = {
-        1: "st", 2: "nd", 3: "rd",
-        21: "st", 22: "nd", 23: "rd",
-        31: "st"}.get(date_begin.day, "th")
-    # %e is a non-standard extension (it displays the day, but without a
-    # leading zero). If strftime fails with ValueError, try to fall back
-    # on %d (which displays the day but with a leading zero when needed).
-    try:
-        item["prettydate"] = date_begin.strftime("%B %e{}, %Y").format(suffix)
-    except ValueError:
-        item["prettydate"] = date_begin.strftime("%B %d{}, %Y").format(suffix)
+    y1, m1, d1 = prettyparse(date_begin)
+    y2, m2, d2 = prettyparse(date_end)
+    if (y1, m1, d1) == (y2, m2, d2):
+        # Single day event
+        pretty_date = "{} {}, {}".format(m1, d1, y1)
+    elif (y1, m1) == (y2, m2):
+        # Multi-day event within a single month
+        pretty_date = "{} {}-{}, {}".format(m1, d1, d2, y1)
+    elif y1 == y2:
+        # Multi-day event spanning more than a month
+        pretty_date = "{} {}-{} {}, {}".format(m1, d1, m2, d2, y1)
+    else:
+        # Event spanning the turn of the year (REALLY???)
+        pretty_date = "{} {}, {}-{} {}, {}".format(m1, d1, y1, m2, d2, y2)
     item["begin"] = date_begin
     item["end"] = date_end
+    item["prettydate"] = pretty_date
     item["flag"] = FLAGS.get(item.get("country"),"")

 today = datetime.date.today()
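The date-formatting rules introduced in this commit (`prettyparse` plus the four-way `pretty_date` branch) can be checked in isolation; this standalone sketch reproduces the same logic:

```python
import datetime

MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def prettyparse(date):
    # Month name plus day with an English ordinal suffix (1st, 2nd, 3rd, 4th...).
    month = MONTHS[date.month - 1]
    suffix = {1: "st", 2: "nd", 3: "rd",
              21: "st", 22: "nd", 23: "rd",
              31: "st"}.get(date.day, "th")
    return date.year, month, "{}{}".format(date.day, suffix)

def pretty_range(begin, end):
    # Four cases: single day, same month, same year, or spanning two years.
    y1, m1, d1 = prettyparse(begin)
    y2, m2, d2 = prettyparse(end)
    if (y1, m1, d1) == (y2, m2, d2):
        return "{} {}, {}".format(m1, d1, y1)
    elif (y1, m1) == (y2, m2):
        return "{} {}-{}, {}".format(m1, d1, d2, y1)
    elif y1 == y2:
        return "{} {}-{} {}, {}".format(m1, d1, m2, d2, y1)
    return "{} {}, {}-{} {}, {}".format(m1, d1, y1, m2, d2, y2)

print(pretty_range(datetime.date(2020, 7, 7), datetime.date(2020, 7, 9)))
# -> July 7th-9th, 2020
```

This is exactly why the commit message says longer online events made the change necessary: the old `strftime`-based code could only print the first day of an event.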


@@ -1,3 +1,56 @@
+- date: [2020-07-07, 2020-07-09]
+  country: www
+  city: streaming
+  event: Ardan Live
+  speaker: jpetazzo
+  title: Intensive Docker Bootcamp
+  attend: https://www.eventbrite.com/e/livestream-intensive-docker-bootcamp-tickets-103258886108
+- date: [2020-06-15, 2020-06-16]
+  country: www
+  city: streaming
+  event: ENIX SAS
+  speaker: jpetazzo
+  title: Docker intensif (en français)
+  lang: fr
+  attend: https://enix.io/fr/services/formation/online/
+- date: [2020-06-17, 2020-06-19]
+  country: www
+  city: streaming
+  event: ENIX SAS
+  speaker: jpetazzo
+  title: Fondamentaux Kubernetes (en français)
+  lang: fr
+  attend: https://enix.io/fr/services/formation/online/
+- date: 2020-06-22
+  country: www
+  city: streaming
+  event: ENIX SAS
+  speaker: jpetazzo
+  title: Packaging pour Kubernetes (en français)
+  lang: fr
+  attend: https://enix.io/fr/services/formation/online/
+- date: [2020-06-23, 2020-06-24]
+  country: www
+  city: streaming
+  event: ENIX SAS
+  speaker: jpetazzo
+  title: Kubernetes avancé (en français)
+  lang: fr
+  attend: https://enix.io/fr/services/formation/online/
+- date: [2020-06-25, 2020-06-26]
+  country: www
+  city: streaming
+  event: ENIX SAS
+  speaker: jpetazzo
+  title: Opérer Kubernetes (en français)
+  lang: fr
+  attend: https://enix.io/fr/services/formation/online/
 - date: [2020-06-09, 2020-06-11]
   country: www
   city: streaming
@@ -6,14 +59,6 @@
   title: Intensive Kubernetes Bootcamp
   attend: https://www.eventbrite.com/e/livestream-intensive-kubernetes-bootcamp-tickets-103262336428
-- date: [2020-05-19, 2020-05-21]
-  country: www
-  city: streaming
-  event: Ardan Live
-  speaker: jpetazzo
-  title: Intensive Docker Bootcamp
-  attend: https://www.eventbrite.com/e/livestream-intensive-docker-bootcamp-tickets-103258886108
 - date: [2020-05-04, 2020-05-08]
   country: www
   city: streaming
@@ -29,7 +74,7 @@
   speaker: jpetazzo
   title: Intensive Docker and Kubernetes
   attend: https://www.eventbrite.com/e/ardan-labs-live-worldwide-march-30-april-2-2020-tickets-100331129108#
-  slides: https://https://2020-03-ardan.container.training/
+  slides: https://2020-03-ardan.container.training/
 - date: 2020-03-06
   country: uk

slides/intro-fullday.yml Normal file

@@ -0,0 +1,70 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
#- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
#- containers/Start_And_Attach.md
- containers/Naming_And_Inspecting.md
#- containers/Labels.md
- containers/Getting_Inside.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Container_Networking_Basics.md
#- containers/Network_Drivers.md
#- containers/Container_Network_Model.md
- containers/Local_Development_Workflow.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
- containers/Multi_Stage_Builds.md
#- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
#- containers/Docker_Machine.md
#- containers/Advanced_Dockerfiles.md
#- containers/Init_Systems.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Container_Engines.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md
#- containers/Orchestration_Overview.md
-
- shared/thankyou.md
- containers/links.md


@@ -0,0 +1,70 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
content:
- shared/title.md
# - shared/logistics.md
- containers/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- - containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- - containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- containers/Docker_Machine.md
- - containers/Advanced_Dockerfiles.md
- containers/Init_Systems.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Resource_Limits.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
- - containers/Container_Engines.md
- containers/Pods_Anatomy.md
- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md


@@ -3,11 +3,12 @@ title: |
Docker
Bootcamp
chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/training-20200707-online)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2020-05-ardan.container.training/
slides: https://2020-07-ardan.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
@@ -19,42 +20,52 @@ content:
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-zoom-webinar.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
- # DAY 1
#- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- # DAY 2
- containers/Publishing_To_Docker_Hub.md
- containers/Naming_And_Inspecting.md
#- containers/Labels.md
- containers/Labels.md
- containers/Start_And_Attach.md
- containers/Getting_Inside.md
- containers/Resource_Limits.md
-
- containers/Dockerfile_Tips.md
- containers/Multi_Stage_Builds.md
- containers/Advanced_Dockerfiles.md
- containers/Exercise_Dockerfile_Advanced.md
- # DAY 3
- containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
-
- containers/Local_Development_Workflow.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
#- containers/Resource_Limits.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Getting_Inside.md
- containers/Multi_Stage_Builds.md
- containers/Dockerfile_Tips.md
- containers/Advanced_Dockerfiles.md
- containers/Publishing_To_Docker_Hub.md
- containers/Orchestration_Overview.md
- containers/Init_Systems.md
-
#- containers/Working_With_Volumes.md
#- containers/Application_Configuration.md
- shared/thankyou.md
- containers/links.md
#-
#- containers/Docker_Machine.md
#- containers/Ambassadors.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md


@@ -1,38 +0,0 @@
title: |
Intensive Kubernetes
Advanced Concepts
Packaging Applications
with Kustomize and Helm
chat: "[Gitter](https://gitter.im/jpetazzo/training-20200504-online)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2020-05-ardan.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-zoom-webinar.md
- shared/toc.md
#- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
-
- k8s/kubercoins.md
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- k8s/helm-secrets.md
#- k8s/exercise-helm.md
- shared/thankyou.md


@@ -1,38 +0,0 @@
title: |
Intensive Kubernetes
Advanced Concepts
Capacity Management
and Kubernetes Operators
chat: "[Gitter](https://gitter.im/jpetazzo/training-20200504-online)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2020-05-ardan.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-zoom-webinar.md
- shared/toc.md
#- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
-
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
-
- k8s/extending-api.md
- k8s/operators.md
- k8s/operators-design.md
- shared/thankyou.md


@@ -1,36 +0,0 @@
title: |
Intensive Kubernetes
Advanced Concepts
Security Focus
chat: "[Gitter](https://gitter.im/jpetazzo/training-20200504-online)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2020-05-ardan.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-zoom-webinar.md
- shared/toc.md
#- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
-
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/staticpods.md
- k8s/podsecuritypolicy.md
- k8s/openid-connect.md
- k8s/csr-api.md
#- k8s/control-plane-auth.md
- shared/thankyou.md

View File

@@ -1,34 +0,0 @@
title: |
Intensive Kubernetes
Advanced Concepts
Application Configuration
and Stateful Applications
chat: "[Gitter](https://gitter.im/jpetazzo/training-20200504-online)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2020-05-ardan.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-zoom-webinar.md
- shared/toc.md
#- shared/prereqs.md
#- shared/webssh.md
-
- k8s/volumes.md
- k8s/configuration.md
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
- shared/thankyou.md

View File

@@ -1,79 +0,0 @@
title: |
Intensive
Kubernetes
Bootcamp
chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2020-05-ardan.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
- k8s/kubectl-run.md
- k8s/batch-jobs.md
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
-
#- k8s/exercise-wordsmith.md
- k8s/yamldeploy.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
#- k8s/dryrun.md
#- k8s/exercise-yaml.md
- k8s/rollout.md
- k8s/record.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
- k8s/setup-k8s.md
- k8s/versions-k8s.md
-
- k8s/namespaces.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/kubectlproxy.md
- k8s/dashboard.md
- k8s/ingress.md
- k8s/volumes.md
#- k8s/exercise-configmap.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
-
- k8s/whatsnext.md
- k8s/lastwords.md
- k8s/links.md
- shared/thankyou.md

View File

@@ -1,6 +1,74 @@
# Authentication and authorization
*And first, a little refresher!*
- In this section, we will:
- define authentication and authorization
- explain how they are implemented in Kubernetes
- talk about tokens, certificates, service accounts, RBAC ...
- But first: why do we need all this?
---
## The need for fine-grained security
- The Kubernetes API should only be available for identified users
- we don't want "guest access" (except in very rare scenarios)
- we don't want strangers to use our compute resources, delete our apps ...
- our keys and passwords should not be exposed to the public
- Users will often have different access rights
- cluster admin (similar to UNIX "root") can do everything
- developer might access specific resources, or a specific namespace
- supervision might have read only access to *most* resources
---
## Example: custom HTTP load balancer
- Let's imagine that we have a custom HTTP load balancer for multiple apps
- Each app has its own *Deployment* resource
- By default, the apps are "sleeping" and scaled to zero
- When a request comes in, the corresponding app gets woken up
- After some inactivity, the app is scaled down again
- This HTTP load balancer needs API access (to scale up/down)
- What if *a wild vulnerability appears*?
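In terms of API calls, the load balancer's job boils down to something like this (sketched with `kubectl` for clarity; a real implementation would call the API directly, and the deployment name is hypothetical):

```bash
# A request comes in for an app that is asleep: wake it up
kubectl scale deployment/blog --replicas=1

# After some inactivity: put it back to sleep
kubectl scale deployment/blog --replicas=0
```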
---
## Consequences of vulnerability
- If the HTTP load balancer has the same API access as we do:
*full cluster compromise (easy data leak, cryptojacking...)*
- If the HTTP load balancer has `update` permissions on the Deployments:
*defacement (easy), MITM / impersonation (medium to hard)*
- If the HTTP load balancer only has permission to `scale` the Deployments:
*denial-of-service*
- All these outcomes are bad, but some are worse than others
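The least-privileged option above maps to RBAC like this (a sketch; the role name is illustrative, and RBAC is covered in detail later):

```yaml
# Role granting only read/update on the "scale" subresource of Deployments
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-scaler
rules:
- apiGroups: ["apps"]
  resources: ["deployments/scale"]
  verbs: ["get", "update"]
```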
---
## Definitions
- Authentication = verifying the identity of a person
@@ -147,7 +215,7 @@ class: extra-details
(if their key is compromised, or they leave the organization)
- Option 1: re-create a new CA and re-issue everyone's certificates
<br/>
→ Maybe OK if we only have a few users; no way otherwise
@@ -631,7 +699,7 @@ class: extra-details
- Let's look for these in existing ClusterRoleBindings:
```bash
kubectl get clusterrolebindings -o yaml |
grep -e kubernetes-admin -e system:masters
```

View File

@@ -217,15 +217,16 @@ docker run --rm --net host -v $PWD:/vol \
## How can we remember all these flags?
- Look at the static pod manifest for etcd
- Older versions of kubeadm did add a healthcheck probe with all these flags
(in `/etc/kubernetes/manifests`)
- That healthcheck probe was calling `etcdctl` with all the right flags
- The healthcheck probe is calling `etcdctl` with all the right flags
😉👍✌️
- With recent versions of kubeadm, we're on our own!
- Exercise: write the YAML for a batch job to perform the backup
(how will you access the key and certificate required to connect?)
---
## Restoring an etcd snapshot

View File

@@ -244,7 +244,7 @@ fine for personal and development clusters.)
- Add the `stable` repo:
```bash
helm repo add stable https://charts.helm.sh/stable
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
```
]

View File

@@ -143,6 +143,30 @@ So, what do we get?
---
## Other ways to view labels
- `kubectl get` gives us a couple of useful flags to check labels
- `kubectl get --show-labels` shows all labels
- `kubectl get -L xyz` shows the value of label `xyz`
.exercise[
- List all the labels that we have on pods:
```bash
kubectl get pods --show-labels
```
- List the value of label `app` on these pods:
```bash
kubectl get pods -L app
```
]
---
class: extra-details
## More on selectors

View File

@@ -93,11 +93,11 @@ Examples:
- Representing and managing external resources
(Example: [AWS Service Operator](https://operatorhub.io/operator/alpha/aws-service-operator.v0.0.1))
(Example: [AWS S3 Operator](https://operatorhub.io/operator/awss3-operator-registry))
- Managing complex cluster add-ons
(Example: [Istio operator](https://operatorhub.io/operator/beta/istio-operator.0.1.6))
(Example: [Istio operator](https://operatorhub.io/operator/istio))
- Deploying and managing our applications' lifecycles

View File

@@ -74,29 +74,78 @@
---
## Portworx requirements
## Installing Portworx
- Kubernetes cluster ✔️
- Portworx installation is relatively simple
- Optional key/value store (etcd or Consul) ❌
- ... But we made it *even simpler!*
- At least one available block device ❌
- We are going to use a YAML manifest that will take care of everything
- Warning: this manifest is customized for a very specific setup
(like the VMs that we provide during workshops and training sessions)
- It will probably *not work* if you are using a different setup
(like Docker Desktop, k3s, MicroK8S, Minikube ...)
---
## The key-value store
## The simplified Portworx installer
- In the current version of Portworx (1.4) it is recommended to use etcd or Consul
- The Portworx installation will take a few minutes
- But Portworx also has beta support for an embedded key/value store
- Let's start it, then we'll explain what happens behind the scenes
- For simplicity, we are going to use the latter option
.exercise[
(but if we have deployed Consul or etcd, we can use that, too)
- Install Portworx:
```bash
kubectl apply -f ~/container.training/k8s/portworx.yaml
```
]
<!-- ##VERSION ## -->
*Note: this was tested with Kubernetes 1.18. Newer versions may or may not work.*
---
## One available block device
class: extra-details
## What's in this YAML manifest?
- Portworx installation itself, pre-configured for our setup
- A default *Storage Class* using Portworx
- A *Daemon Set* to create loop devices on each node of the cluster
---
class: extra-details
## Portworx installation
- The official way to install Portworx is to use [PX-Central](https://central.portworx.com/)
(this requires a free account)
- PX-Central will ask us a few questions about our cluster
(Kubernetes version, on-prem/cloud deployment, etc.)
- Using our answers, it will generate a YAML manifest that we can use
---
class: extra-details
## Portworx storage configuration
- Portworx needs at least one *block device*
- Block device = disk or partition on a disk
@@ -112,71 +161,41 @@
---
class: extra-details
## Setting up a loop device
- We are going to create a 10 GB (empty) file on each node
- Our `portworx.yaml` manifest includes a *Daemon Set* that will:
- Then make a loop device from it, to be used by Portworx
- create a 10 GB (empty) file on each node
.exercise[
- load the `loop` module (if it's not already loaded)
- Create a 10 GB file on each node:
```bash
for N in $(seq 1 4); do ssh node$N sudo truncate --size 10G /portworx.blk; done
```
(If SSH asks to confirm host keys, enter `yes` each time.)
- associate a loop device with the 10 GB file
- Associate the file to a loop device on each node:
```bash
for N in $(seq 1 4); do ssh node$N sudo losetup /dev/loop4 /portworx.blk; done
```
]
---
## Installing Portworx
- To install Portworx, we need to go to https://install.portworx.com/
- This website will ask us a bunch of questions about our cluster
- Then, it will generate a YAML file that we should apply to our cluster
--
- Or, we can just apply that YAML file directly (it's in `k8s/portworx.yaml`)
.exercise[
- Install Portworx:
```bash
kubectl apply -f ~/container.training/k8s/portworx.yaml
```
]
- After these steps, we have a block device that Portworx can use
---
class: extra-details
## Generating a custom YAML file
## Implementation details
If you want to generate a YAML file tailored to your own needs, the easiest
way is to use https://install.portworx.com/.
- The file is `/portworx.blk`
FYI, this is how we obtained the YAML file used earlier:
```
KBVER=$(kubectl version -o json | jq -r .serverVersion.gitVersion)
BLKDEV=/dev/loop4
curl "https://install.portworx.com/1.4/?kbver=$KBVER&b=true&s=$BLKDEV&c=px-workshop&stork=true&lh=true"
```
If you want to use an external key/value store, add one of the following:
```
&k=etcd://`XXX`:2379
&k=consul://`XXX`:8500
```
... where `XXX` is the name or address of your etcd or Consul server.
(it is a [sparse file](https://en.wikipedia.org/wiki/Sparse_file) created with `truncate`)
- The loop device is `/dev/loop4`
- This can be verified by running `sudo losetup`
- The *Daemon Set* uses a privileged *Init Container*
- We can check the logs of that container with:
```bash
kubectl logs --selector=app=setup-loop4-for-portworx \
-c setup-loop4-for-portworx
```
---
@@ -276,11 +295,9 @@ parameters:
priority_io: "high"
```
- It says "use Portworx to create volumes"
- It says "use Portworx to create volumes and keep 2 replicas of these volumes"
- It tells Portworx to "keep 2 replicas of these volumes"
- It marks the Storage Class as being the default one
- The annotation makes this Storage Class the default one
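Putting those pieces together, the Storage Class looks roughly like this (a sketch consistent with the parameters shown above; the name is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: portworx-replicated
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
  priority_io: "high"
```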
---
@@ -323,7 +340,10 @@ spec:
schedulerName: stork
containers:
- name: postgres
image: postgres:11
image: postgres:12
env:
- name: POSTGRES_HOST_AUTH_METHOD
value: trust
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres
@@ -401,14 +421,14 @@ autopilot prompt detection expects $ or # at the beginning of the line.
- Populate it with `pgbench`:
```bash
pgbench -i -s 10 demo
pgbench -i demo
```
]
- The `-i` flag means "create tables"
- The `-s 10` flag means "create 10 x 100,000 rows"
- If you want more data in the test tables, add e.g. `-s 10` (to get 10x more rows)
---
@@ -428,11 +448,55 @@ autopilot prompt detection expects $ or # at the beginning of the line.
psql demo -c "select count(*) from pgbench_accounts"
```
<!-- ```key ^D``` -->
- Check that `pgbench_history` is currently empty:
```bash
psql demo -c "select count(*) from pgbench_history"
```
]
(We should see a count of 100,000 rows.)
---
## Testing the load generator
- Let's use `pgbench` to generate a few transactions
.exercise[
- Run `pgbench` for 10 seconds, reporting progress every second:
```bash
pgbench -P 1 -T 10 demo
```
- Check the size of the history table now:
```bash
psql demo -c "select count(*) from pgbench_history"
```
]
Note: on small cloud instances, a typical speed is about 100 transactions/second.
---
## Generating transactions
- Now let's use `pgbench` to generate more transactions
- While it's running, we will disrupt the database server
.exercise[
- Run `pgbench` for 10 minutes, reporting progress every second:
```bash
pgbench -P 1 -T 600 demo
```
- You can use a longer time period if you need more time to run the next steps
<!-- ```tmux split-pane -h``` -->
]
---
@@ -522,15 +586,18 @@ By "disrupt" we mean: "disconnect it from the network".
```key ^J```
-->
- Check the number of rows in the `pgbench_accounts` table:
- Check how many transactions are now in the `pgbench_history` table:
```bash
psql demo -c "select count(*) from pgbench_accounts"
psql demo -c "select count(*) from pgbench_history"
```
<!-- ```key ^D``` -->
]
If the 10-second test that we ran earlier gave e.g. 80 transactions per second,
and we failed the node after 30 seconds, we should have about 2400 rows in that table.
---
## Double-check that the pod has really moved
@@ -598,7 +665,7 @@ class: extra-details
- If we need to see what's going on with Portworx:
```
PXPOD=$(kubectl -n kube-system get pod -l name=portworx -o json |
jq -r .items[0].metadata.name)
kubectl -n kube-system exec $PXPOD -- /opt/pwx/bin/pxctl status
```

View File

@@ -104,9 +104,9 @@ Each pod is assigned a QoS class (visible in `status.qosClass`).
- When a node is overloaded, BestEffort pods are killed first
- Then, Burstable pods that exceed their limits
- Then, Burstable pods that exceed their requests
- Burstable and Guaranteed pods below their limits are never killed
- Burstable and Guaranteed pods below their requests are never killed
(except if their node fails)
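As a reminder, the QoS class follows from requests and limits; a container spec fragment like this one yields a Burstable pod (values are just examples):

```yaml
# requests set and lower than limits → Burstable
# requests == limits for all containers → Guaranteed
# no requests, no limits → BestEffort
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 200m
    memory: 256Mi
```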

slides/k8s/setup-devel.md Normal file
View File

@@ -0,0 +1,145 @@
# Running a local development cluster
- Let's review some options to run Kubernetes locally
- There is no "best option"; it depends on what you value:
- ability to run on all platforms (Linux, Mac, Windows, other?)
- ability to run clusters with multiple nodes
- ability to run multiple clusters side by side
- ability to run recent (or even, unreleased) versions of Kubernetes
- availability of plugins
- etc.
---
## Docker Desktop
- Available on Mac and Windows
- Gives you one cluster with one node
- Rather old version of Kubernetes
- Very easy to use if you are already using Docker Desktop:
go to Docker Desktop preferences and enable Kubernetes
- Ideal for Docker users who need good integration between both platforms
---
## [k3d](https://k3d.io/)
- Based on [K3s](https://k3s.io/) by Rancher Labs
- Requires Docker
- Runs Kubernetes nodes in Docker containers
- Can deploy multiple clusters, with multiple nodes, and multiple master nodes
- As of June 2020, two versions co-exist: stable (1.7) and beta (3.0)
- They have different syntax and options, this can be confusing
(but don't let that stop you!)
---
## k3d in action
- Get `k3d` beta 3 binary on https://github.com/rancher/k3d/releases
- Create a simple cluster:
```bash
k3d create cluster petitcluster --update-kubeconfig
```
- Use it:
```bash
kubectl config use-context k3d-petitcluster
```
- Create a more complex cluster with a custom version:
```bash
k3d create cluster groscluster --update-kubeconfig \
--image rancher/k3s:v1.18.3-k3s1 --masters 3 --workers 5 --api-port 6444
```
(note: API port seems to be necessary when running multiple clusters)
---
## [KinD](https://kind.sigs.k8s.io/)
- Kubernetes-in-Docker
- Requires Docker (obviously!)
- Deploying a single node cluster using the latest version is simple:
```bash
kind create cluster
```
- More advanced scenarios require writing a short [config file](https://kind.sigs.k8s.io/docs/user/quick-start#configuring-your-kind-cluster)
(to define multiple nodes, multiple master nodes, set Kubernetes versions ...)
- Can deploy multiple clusters
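For instance, a config file for a cluster with one control plane node and two workers might look like this (a sketch based on the kind config format of that era):

```yaml
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

Then: `kind create cluster --config kind-config.yaml`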
---
## [Minikube](https://minikube.sigs.k8s.io/docs/)
- The "legacy" option!
(note: this is not a bad thing, it means that it's very stable, has lots of plugins, etc.)
- Supports many [drivers](https://minikube.sigs.k8s.io/docs/drivers/)
(HyperKit, Hyper-V, KVM, VirtualBox, but also Docker and many others)
- Can deploy a single cluster; recent versions can deploy multiple nodes
- Great option if you want a "Kubernetes first" experience
(i.e. if you don't already have Docker and/or don't want/need it)
---
## [MicroK8s](https://microk8s.io/)
- Available on Linux, and since recently, on Mac and Windows as well
- The Linux version is installed through Snap
(which is pre-installed on all recent versions of Ubuntu)
- Also supports clustering (as in, multiple machines running MicroK8s)
- DNS is not enabled by default; enable it with `microk8s enable dns`
---
## VM with custom install
- Choose your own adventure!
- Pick any Linux distribution!
- Build your cluster from scratch or use a Kubernetes installer!
- Discover exotic CNI plugins and container runtimes!
- The only limit is yourself, and the time you are willing to sink in!
???
:EN:- Kubernetes options for local development
:FR:- Installation de Kubernetes pour travailler en local

View File

@@ -1,106 +0,0 @@
# Setting up Kubernetes
- How did we set up these Kubernetes clusters that we're using?
--
<!-- ##VERSION## -->
- We used `kubeadm` on freshly installed VM instances running Ubuntu LTS
1. Install Docker
2. Install Kubernetes packages
3. Run `kubeadm init` on the first node (it deploys the control plane on that node)
4. Set up Weave (the overlay network)
<br/>
(that step is just one `kubectl apply` command; discussed later)
5. Run `kubeadm join` on the other nodes (with the token produced by `kubeadm init`)
6. Copy the configuration file generated by `kubeadm init`
- Check the [prepare VMs README](https://@@GITREPO@@/blob/master/prepare-vms/README.md) for more details
---
## `kubeadm` drawbacks
- Doesn't set up Docker or any other container engine
- Doesn't set up the overlay network
- [Some extra steps](https://kubernetes.io/docs/setup/independent/high-availability/) to support HA control plane
--
- "It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme
---
## Managed options
- On AWS: [EKS](https://aws.amazon.com/eks/),
[eksctl](https://eksctl.io/)
- On Azure: [AKS](https://azure.microsoft.com/services/kubernetes-service/)
- On DigitalOcean: [DOKS](https://www.digitalocean.com/products/kubernetes/)
- On Google Cloud: [GKE](https://cloud.google.com/kubernetes-engine/)
- On Linode: [LKE](https://www.linode.com/products/kubernetes/)
- On OVHcloud: [Managed Kubernetes Service](https://www.ovhcloud.com/en/public-cloud/kubernetes/)
- On Scaleway: [Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/)
- and much more!
---
## Other deployment options
- [kops](https://github.com/kubernetes/kops):
customizable deployments on AWS, Digital Ocean, GCE (beta), vSphere (alpha)
- [minikube](https://kubernetes.io/docs/setup/minikube/),
[kubespawn](https://github.com/kinvolk/kube-spawn),
[Docker Desktop](https://docs.docker.com/docker-for-mac/kubernetes/),
[kind](https://kind.sigs.k8s.io):
for local development
- [kubicorn](https://github.com/kubicorn/kubicorn),
the [Cluster API](https://blogs.vmware.com/cloudnative/2019/03/14/what-and-why-of-cluster-api/):
deploy your clusters declaratively, "the Kubernetes way"
---
## Even more deployment options
- If you like Ansible:
[kubespray](https://github.com/kubernetes-incubator/kubespray)
- If you like Terraform:
[typhoon](https://github.com/poseidon/typhoon)
- If you like Terraform and Puppet:
[tarmak](https://github.com/jetstack/tarmak)
- You can also learn how to install every component manually, with
the excellent tutorial [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way)
*Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.*
- There are also many commercial options available!
- For a longer list, check the Kubernetes documentation:
<br/>
it has a great guide to [pick the right solution](https://kubernetes.io/docs/setup/#production-environment) to set up Kubernetes.
???
:EN:- Overview of the kubeadm installer
:FR:- Survol de kubeadm

View File

@@ -1,4 +1,4 @@
# Installing a managed cluster
# Deploying a managed cluster
*"The easiest way to install Kubernetes is to get someone
else to do it for you."
@@ -11,6 +11,8 @@ else to do it for you."
(the goal is to show the actual steps to get started)
- The list is sorted alphabetically
- All the options mentioned here require an account
with a cloud provider
@@ -18,123 +20,6 @@ with a cloud provider
---
## EKS (the old way)
- [Read the doc](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html)
- Create service roles, VPCs, and a bunch of other oddities
- Try to figure out why it doesn't work
- Start over, following an [official AWS blog post](https://aws.amazon.com/blogs/aws/amazon-eks-now-generally-available/)
- Try to find the missing Cloud Formation template
--
.footnote[(╯°□°)╯︵ ┻━┻]
---
## EKS (the new way)
- Install `eksctl`
- Set the usual environment variables
([AWS_DEFAULT_REGION](https://docs.aws.amazon.com/general/latest/gr/rande.html#eks_region), AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY)
- Create the cluster:
```bash
eksctl create cluster
```
- Wait 15-20 minutes (yes, it's sloooooooooooooooooow)
- Add cluster add-ons
(by default, it doesn't come with metrics-server, logging, etc.)
---
## EKS (cleanup)
- Delete the cluster:
```bash
eksctl delete cluster <clustername>
```
- If you need to find the name of the cluster:
```bash
eksctl get clusters
```
.footnote[Note: the AWS documentation has been updated and now includes [eksctl instructions](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html).]
---
## GKE (initial setup)
- Install `gcloud`
- Login:
```bash
gcloud auth init
```
- Create a "project":
```bash
gcloud projects create my-gke-project
gcloud config set project my-gke-project
```
- Pick a [region](https://cloud.google.com/compute/docs/regions-zones/)
(example: `europe-west1`, `us-west1`, ...)
---
## GKE (create cluster)
- Create the cluster:
```bash
gcloud container clusters create my-gke-cluster --region us-west1 --num-nodes=2
```
(without `--num-nodes` you might exhaust your IP address quota!)
- The first time you try to create a cluster in a given project, you get an error
- you need to enable the Kubernetes Engine API
- the error message gives you a link
- follow the link and enable the API (and billing)
<br/>(it's just a couple of clicks and it's instantaneous)
- Wait a couple of minutes (yes, it's faaaaaaaaast)
- The cluster comes with many add-ons
---
## GKE (cleanup)
- List clusters (if you forgot its name):
```bash
gcloud container clusters list
```
- Delete the cluster:
```bash
gcloud container clusters delete my-gke-cluster --region us-west1
```
- Delete the project (optional):
```bash
gcloud projects delete my-gke-project
```
---
## AKS (initial setup)
- Install the Azure CLI
@@ -168,8 +53,6 @@ with a cloud provider
az aks get-credentials --resource-group my-aks-group --name my-aks-cluster
```
- The cluster has useful components pre-installed, such as the metrics server
---
## AKS (cleanup)
@@ -190,6 +73,95 @@ with a cloud provider
---
## AKS (notes)
- The cluster has useful components pre-installed, such as the metrics server
- There is also a product called [AKS Engine](https://github.com/Azure/aks-engine):
- leverages ARM (Azure Resource Manager) templates to deploy Kubernetes
- it's "the library used by AKS"
- fully customizable
- think of it as a "half-managed" Kubernetes option
---
## Amazon EKS (the old way)
- [Read the doc](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html)
- Create service roles, VPCs, and a bunch of other oddities
- Try to figure out why it doesn't work
- Start over, following an [official AWS blog post](https://aws.amazon.com/blogs/aws/amazon-eks-now-generally-available/)
- Try to find the missing Cloud Formation template
--
.footnote[(╯°□°)╯︵ ┻━┻]
---
## Amazon EKS (the new way)
- Install `eksctl`
- Set the usual environment variables
([AWS_DEFAULT_REGION](https://docs.aws.amazon.com/general/latest/gr/rande.html#eks_region), AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY)
- Create the cluster:
```bash
eksctl create cluster
```
- Cluster can take a long time to be ready (15-20 minutes is typical)
- Add cluster add-ons
(by default, it doesn't come with metrics-server, logging, etc.)
---
## Amazon EKS (cleanup)
- Delete the cluster:
```bash
eksctl delete cluster <clustername>
```
- If you need to find the name of the cluster:
```bash
eksctl get clusters
```
.footnote[Note: the AWS documentation has been updated and now includes [eksctl instructions](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html).]
---
## Amazon EKS (notes)
- Convenient if you *have to* use AWS
- Needs extra steps to be truly production-ready
- [Versions tend to be outdated](https://twitter.com/jpetazzo/status/1252948707680686081)
- The only officially supported pod network is the [Amazon VPC CNI plugin](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html)
- integrates tightly with security groups and VPC networking
- not suitable for high density clusters (with many small pods on big nodes)
- other plugins [should still work](https://docs.aws.amazon.com/eks/latest/userguide/alternate-cni-plugins.html) but will require extra work
---
## Digital Ocean (initial setup)
- Install `doctl`
@@ -242,15 +214,181 @@ with a cloud provider
---
## GKE (initial setup)
- Install `gcloud`
- Login:
```bash
gcloud auth init
```
- Create a "project":
```bash
gcloud projects create my-gke-project
gcloud config set project my-gke-project
```
- Pick a [region](https://cloud.google.com/compute/docs/regions-zones/)
(example: `europe-west1`, `us-west1`, ...)
---
## GKE (create cluster)
- Create the cluster:
```bash
gcloud container clusters create my-gke-cluster --region us-west1 --num-nodes=2
```
(without `--num-nodes` you might exhaust your IP address quota!)
- The first time you try to create a cluster in a given project, you get an error
- you need to enable the Kubernetes Engine API
- the error message gives you a link
- follow the link and enable the API (and billing)
<br/>(it's just a couple of clicks and it's instantaneous)
- Cluster should be ready in a couple of minutes
---
## GKE (cleanup)
- List clusters (if you forgot its name):
```bash
gcloud container clusters list
```
- Delete the cluster:
```bash
gcloud container clusters delete my-gke-cluster --region us-west1
```
- Delete the project (optional):
```bash
gcloud projects delete my-gke-project
```
---
## GKE (notes)
- Well-rounded product overall
(it used to be one of the best managed Kubernetes offerings available;
now that many other providers entered the game, that title is debatable)
- The cluster comes with many add-ons
- Versions lag a bit:
- latest minor version (e.g. 1.18) tends to be unsupported
- previous minor version (e.g. 1.17) supported through alpha channel
- previous versions (e.g. 1.14-1.16) supported
---
## Scaleway (initial setup)
- After creating your account, make sure you set a password or get an API key
(by default, it uses email "magic links" to sign in)
- Install `scw`
(you need [CLI v2](https://github.com/scaleway/scaleway-cli/tree/v2#Installation), which is in beta as of May 2020)
- Generate the CLI configuration with `scw init`
(it will prompt for your API key, or email + password)
---
## Scaleway (create cluster)
- Create the cluster:
```bash
scw k8s cluster create name=my-kapsule-cluster version=1.18.3 cni=cilium \
default-pool-config.node-type=DEV1-M default-pool-config.size=3
```
- After less than 5 minutes, the cluster state will be `ready`
(check cluster status with e.g. `scw k8s cluster list` on a wide terminal)
- Add connection information to your `.kube/config` file:
```bash
scw k8s kubeconfig install `CLUSTERID`
```
(the cluster ID is shown by `scw k8s cluster list`)
---
class: extra-details
## Scaleway (automation)
- If you want to obtain the cluster ID programmatically, this will do it:
```bash
scw k8s cluster list
# or
CLUSTERID=$(scw k8s cluster list -o json | \
jq -r '.[] | select(.name=="my-kapsule-cluster") | .id')
```
---
## Scaleway (cleanup)
- Get cluster ID (e.g. with `scw k8s cluster list`)
- Delete the cluster:
```bash
scw k8s cluster delete cluster-id=$CLUSTERID
```
- Warning: as of May 2020, load balancers have to be deleted separately!
---
## Scaleway (notes)
- The `create` command is a bit more complex than with other providers
(you must specify the Kubernetes version, CNI plugin, and node type)
- To see available versions and CNI plugins, run `scw k8s version list`
- As of May 2020, Kapsule supports:
- multiple CNI plugins, including: cilium, calico, weave, flannel
- Kubernetes versions 1.15 to 1.18
- multiple container runtimes, including: Docker, containerd, CRI-O
- To see available node types and their price, check their [pricing page](
https://www.scaleway.com/en/pricing/)
---
## More options
- Alibaba Cloud
- [IBM Cloud](https://console.bluemix.net/docs/containers/cs_cli_install.html#cs_cli_install)
- OVH
- [Linode Kubernetes Engine (LKE)](https://www.linode.com/products/kubernetes/)
- Scaleway
- OVHcloud [Managed Kubernetes Service](https://www.ovhcloud.com/en/public-cloud/kubernetes/)
- ...

View File

@@ -0,0 +1,192 @@
# Setting up Kubernetes
- Kubernetes is made of many components that require careful configuration
- Secure operation typically requires TLS certificates and a local CA
(certificate authority)
- Setting up everything manually is possible, but rarely done
(except for learning purposes)
- Let's do a quick overview of available options!
---
## Local development
- Are you writing code that will eventually run on Kubernetes?
- Then it's a good idea to have a development cluster!
- Development clusters only need one node
- This simplifies their setup a lot:
- pod networking doesn't even need CNI plugins, overlay networks, etc.
- they can be fully contained (no pun intended) in an easy-to-ship VM image
- some of the security aspects may be simplified (different threat model)
- Examples: Docker Desktop, k3d, KinD, MicroK8s, Minikube
(some of these also support clusters with multiple nodes)
---
## Managed clusters
- Many cloud providers and hosting providers offer "managed Kubernetes"
- The deployment and maintenance of the cluster is entirely managed by the provider
(ideally, clusters can be spun up automatically through an API, CLI, or web interface)
- Given the complexity of Kubernetes, this approach is *strongly recommended*
(at least for your first production clusters)
- After working for a while with Kubernetes, you will be better equipped to decide:
- whether to operate it yourself or use a managed offering
- which offering or which distribution works best for you and your needs
---
## Managed clusters details
- Pricing models differ from one provider to another
- nodes are generally charged at their usual price
- control plane may be free or incur a small nominal fee
- Beyond pricing, there are *huge* differences in features between providers
- The "major" providers are not always the best ones!
---
## Managed clusters differences
- Most providers let you pick which Kubernetes version you want
- some providers offer up-to-date versions
- others lag significantly (sometimes by 2 or 3 minor versions)
- Some providers offer multiple networking or storage options
- Others will only support one, tied to their infrastructure
(changing that is in theory possible, but might be complex or unsupported)
- Some providers let you configure or customize the control plane
(generally through Kubernetes "feature gates")
---
## Kubernetes distributions and installers
- If you want to run Kubernetes yourself, there are many options
(free, commercial, proprietary, open source ...)
- Some of them are installers, while some are complete platforms
- Some of them leverage other well-known deployment tools
(like Puppet, Terraform ...)
- A good starting point to explore these options is this [guide](https://v1-16.docs.kubernetes.io/docs/setup/#production-environment)
(it defines categories like "managed", "turnkey" ...)
---
## kubeadm
- kubeadm is a tool, part of Kubernetes, that facilitates cluster setup
- Many other installers and distributions use it (but not all of them)
- It can also be used by itself
- Excellent starting point to install Kubernetes on your own machines
(virtual, physical, it doesn't matter)
- It even supports highly available control planes, or "multi-master"
(this is more complex, though, because it introduces the need for an API load balancer)
---
## Manual setup
- The resources below are mainly for educational purposes!
- [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) by Kelsey Hightower
- step by step guide to install Kubernetes on Google Cloud
- covers certificates, high availability ...
- *“Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.”*
- [Deep Dive into Kubernetes Internals for Builders and Operators](https://www.youtube.com/watch?v=3KtEAa7_duA)
- conference presentation showing step-by-step control plane setup
- emphasis on simplicity, not on security and availability
---
## About our training clusters
- How did we set up these Kubernetes clusters that we're using?
--
- We used `kubeadm` on freshly installed VM instances running Ubuntu LTS
1. Install Docker
2. Install Kubernetes packages
3. Run `kubeadm init` on the first node (it deploys the control plane on that node)
4. Set up Weave (the overlay network) with a single `kubectl apply` command
5. Run `kubeadm join` on the other nodes (with the token produced by `kubeadm init`)
6. Copy the configuration file generated by `kubeadm init`
- Check the [prepare VMs README](https://@@GITREPO@@/blob/master/prepare-vms/README.md) for more details
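The steps above roughly translate to the following commands (a simplified sketch; the exact package installation steps and the Weave manifest URL may vary with versions):

```shell
# On the first node: deploy the control plane
sudo kubeadm init

# Make the admin kubeconfig available to our regular user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Set up the Weave overlay network
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# On each other node: join the cluster
# (kubeadm init prints the exact token and hash to use)
sudo kubeadm join <control-plane-IP>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```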
---
## `kubeadm` "drawbacks"
- Doesn't set up Docker or any other container engine
(this is by design, to give us choice)
- Doesn't set up the overlay network
(this is also by design, for the same reasons)
- HA control plane requires [some extra steps](https://kubernetes.io/docs/setup/independent/high-availability/)
- Note that HA control plane also requires setting up a specific API load balancer
(which is beyond the scope of kubeadm)
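For reference, bootstrapping a highly available control plane with kubeadm could look like this (a sketch only; it assumes a load balancer has already been set up in front of the API servers, and requires Kubernetes 1.15 or later for `--upload-certs`):

```shell
# On the first control plane node:
sudo kubeadm init \
    --control-plane-endpoint "<load-balancer-address>:6443" \
    --upload-certs

# On each additional control plane node
# (using the token, hash, and certificate key printed by kubeadm init):
sudo kubeadm join <load-balancer-address>:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>
```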
???
:EN:- Various ways to install Kubernetes
:FR:- Survol des techniques d'installation de Kubernetes


@@ -1,5 +1,15 @@
# Kubernetes distributions and installers
- Sometimes, we need to run Kubernetes ourselves
(as opposed to "use a managed offering")
- Beware: it takes *a lot of work* to set up and maintain Kubernetes
- It might be necessary if you have specific security or compliance requirements
(e.g. national security for states that don't have a suitable domestic cloud)
- There are [countless](https://kubernetes.io/docs/setup/pick-right-solution/) distributions available
- We can't review them all
@@ -8,7 +18,7 @@
---
## [kops](https://github.com/kubernetes/kops)
- Deploys Kubernetes using cloud infrastructure
@@ -32,7 +42,7 @@
---
## [kubespray](https://github.com/kubernetes-incubator/kubespray)
- Based on Ansible
@@ -78,15 +88,21 @@
## And many more ...
- [AKS Engine](https://github.com/Azure/aks-engine)
- Docker Enterprise Edition
- [Lokomotive](https://github.com/kinvolk/lokomotive), leveraging Terraform and [Flatcar Linux](https://www.flatcar-linux.org/)
- Pivotal Container Service (PKS)
- [Tarmak](https://github.com/jetstack/tarmak), leveraging Puppet and Terraform
- Tectonic by CoreOS (now being integrated into Red Hat OpenShift)
- [Typhoon](https://typhoon.psdn.io/), leveraging Terraform
- VMware Tanzu Kubernetes Grid (TKG)
---
@@ -111,5 +127,5 @@
???
:EN:- Kubernetes distributions and installers
:FR:- L'offre Kubernetes "on premises"


@@ -117,9 +117,9 @@ spec:
        name: my-nfs-volume
  volumes:
  - name: my-nfs-volume
    nfs:
      server: 192.168.0.55
      path: "/exports/assets"
```
---


@@ -4,7 +4,7 @@
- .emoji[👷🏻‍♀️] AJ ([@s0ulshake], [EphemeraSearch])
- .emoji[🐳] Jérôme ([@jpetazzo], Enix SAS)
- The training will run for 4 hours, with a 10-minute break every hour
@@ -18,4 +18,4 @@
[EphemeraSearch]: https://ephemerasearch.com/
[@s0ulshake]: https://twitter.com/s0ulshake
[@jpetazzo]: https://twitter.com/jpetazzo


@@ -1,32 +1,16 @@
## Intros
- This slide should be customized by the tutorial instructor(s).
- Hello! We are:
  - .emoji[👩🏻‍🏫] Ann O'Nymous ([@...](https://twitter.com/...), Megacorp Inc)
  - .emoji[👨🏾‍🎓] Stu Dent ([@...](https://twitter.com/...), University of Wakanda)
<!-- .dummy[
- .emoji[👷🏻‍♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)
- .emoji[🚁] Alexandre ([@alexbuisine](https://twitter.com/alexbuisine), Enix SAS)
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- .emoji[⛵] Jérémy ([@jeremygarrouste](https://twitter.com/jeremygarrouste), Inpiwee)
- .emoji[🎧] Romain ([@rdegez](https://twitter.com/rdegez), Enix SAS)
] -->
- The workshop will run from ...
- There will be a lunch break at ...
(And coffee breaks!)
- Feel free to interrupt for questions at any time


@@ -1 +1 @@
logistics-template.md


@@ -0,0 +1,37 @@
## Use the chat!
- We have set up a chat room on @@CHAT@@
(clicking the link above will take you to the chat room)
- Don't hesitate to use it to ask questions, get help, or share feedback
- We will *not* use the Twitch chat room for Q&A
(nothing wrong with it, but Gitter is more convenient for code snippets etc.)
- Feel free to ask questions at any time
- Sometimes we will wait a bit to answer ...
... but don't worry, we'll make sure to address all your questions!
---
## Use non-verbal communication cues
- ... wait, what?!?
--
- In the chat room, you are welcome (even encouraged!) to use emojis!
- Some of our favorites:
.emoji[🤔✔️👍🏻👍🏼👍🏽👍🏾👍🏿⚠️🛑]
- During the session, we'll often ask audience participation questions
- Feel free to answer in the chat room, any way you like!
(short message, emoji reaction ...)