Compare commits

..

12 Commits

Author SHA1 Message Date
Jerome Petazzoni
ae9780feea fix-redirects.sh: adding forced redirect 2020-04-07 16:50:33 -05:00
Jerome Petazzoni
bb7d03751f Ready to deploy 2019-10-30 11:54:28 -05:00
Jerome Petazzoni
efe491c05d cbr0 woes 2019-10-29 20:34:50 -05:00
Jerome Petazzoni
a150f53fa7 Merge branch 'master' into lisa-2019-10 2019-10-29 20:31:31 -05:00
Jerome Petazzoni
1d8353b7e2 settings 2019-10-29 20:29:50 -05:00
Jerome Petazzoni
93b1cc5e6e Fixes 2019-10-29 20:29:26 -05:00
Jerome Petazzoni
4ad56bd8e7 3 nodes are enough 2019-10-29 19:49:39 -05:00
Jerome Petazzoni
aefa0576a7 Merge branch 'master' into lisa-2019-10 2019-10-29 19:48:35 -05:00
Jerome Petazzoni
52a7434e70 fixes 2019-10-29 19:43:39 -05:00
Jerome Petazzoni
0e4ed4fa5a Tutorial 2019-10-29 19:37:28 -05:00
Jerome Petazzoni
d01635b5fb Last minute fixes 2019-10-28 13:14:08 -05:00
Jerome Petazzoni
e9e650ee48 Push 2019-10-28 11:40:06 -05:00
48 changed files with 1761 additions and 1066 deletions

View File

@@ -1,160 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: dockercoins/hasher:v0.1
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: dockercoins/rng:v0.1
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: dockercoins/webui:v0.1
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: dockercoins/worker:v0.1
name: worker

View File

@@ -7,8 +7,8 @@ workshop.
## 1. Prerequisites
Virtualbox, Vagrant and Ansible
Virtualbox, Vagrant and Ansible
- Virtualbox: https://www.virtualbox.org/wiki/Downloads
@@ -25,7 +25,7 @@ Virtualbox, Vagrant and Ansible
$ git clone --recursive https://github.com/ansible/ansible.git
$ cd ansible
$ git checkout stable-{{ getStableVersionFromAnsibleProject }}
$ git checkout stable-2.0.0.1
$ git submodule update
- source the setup script to make Ansible available on this terminal session:
@@ -38,7 +38,6 @@ Virtualbox, Vagrant and Ansible
## 2. Preparing the environment
Change into directory that has your Vagrantfile
Run the following commands:
@@ -67,14 +66,6 @@ will reflect inside the instance.
- Depending on the Vagrant version, `sudo apt-get install bsdtar` may be needed
- If you get an error like "no Vagrant file found" or you have a file but "cannot open base box" when running `vagrant up`,
chances are good you are not in the correct directory.
Make sure you are in the subdirectory named "prepare-local". It has all the config files required by Ansible, Vagrant, and VirtualBox
- If you are using Python 3.7 and, when running the ansible-playbook provisioning, you see an error like "SyntaxError: invalid syntax"
mentioning the word "async", upgrade your Ansible version to 2.6 or higher to resolve the keyword conflict.
https://github.com/ansible/ansible/issues/42105
- If you get strange Ansible errors about dependencies, try to check your pip
version with `pip --version`. The current version is 8.1.1. If your pip is
older than this, upgrade it with `sudo pip install --upgrade pip`, restart

View File

@@ -10,21 +10,15 @@ These tools can help you to create VMs on:
- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this
Depending on the infrastructure that you want to use, you also need to install
the Azure CLI, the AWS CLI, or terraform (for OpenStack deployment).
And if you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML)
- [jinja2](https://pypi.python.org/pypi/Jinja2)
You can install them with pip (perhaps with `pip install --user`, or even use `virtualenv` if that's your thing).
These require Python 3. If you are on a Mac, see below for specific instructions on setting up
Python 3 to be the default Python on a Mac. In particular, if you installed `mosh`, Homebrew
may have changed your default Python to Python 2.
- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
- [jinja2](https://pypi.python.org/pypi/Jinja2) (on a Mac: `brew install jinja2`)
## General Workflow
@@ -262,32 +256,3 @@ If you don't have `wkhtmltopdf` installed, you will get a warning that it is a m
- Don't write to bash history in system() in postprep
- compose, etc version inconsistent (int vs str)
## Making sure Python3 is the default (Mac only)
Check the `/usr/local/bin/python` symlink. It should be pointing to
`/usr/local/Cellar/python/3`-something. If it isn't, follow these
instructions.
1) Verify that Python 3 is installed.
```
ls -la /usr/local/Cellar/Python
```
You should see one or more versions of Python 3. If you don't,
install it with `brew install python`.
2) Verify that `python` points to Python3.
```
ls -la /usr/local/bin/python
```
If this points to `/usr/local/Cellar/python@2`, then we'll need to change it.
```
rm /usr/local/bin/python
ln -s /usr/local/Cellar/Python/xxxx /usr/local/bin/python
# where xxxx is the most recent Python 3 version you saw above
```

View File

@@ -1,5 +1,5 @@
# Number of VMs per cluster
clustersize: 4
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
@@ -26,3 +26,8 @@ machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training
url: https://lisa-2019-10.container.training
event: tutorial
backside: true
clusternumber: 10

25
prepare-vms/setup-lisa.sh Executable file
View File

@@ -0,0 +1,25 @@
#!/bin/sh
set -e
export AWS_INSTANCE_TYPE=t3a.small
INFRA=infra/aws-us-west-2
STUDENTS=120
PREFIX=$(date +%Y-%m-%d-%H-%M)
SETTINGS=jerome
TAG=$PREFIX-$SETTINGS
./workshopctl start \
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $((3*$STUDENTS))
./workshopctl deploy $TAG
./workshopctl disabledocker $TAG
./workshopctl kubebins $TAG
./workshopctl disableaddrchecks $TAG
./workshopctl cards $TAG

View File

@@ -5,15 +5,15 @@
#}
{%- set url = url
| default("http://velocity-2019-11.container.training/") -%}
| default("http://FIXME.container.training/") -%}
{%- set pagesize = pagesize
| default(9) -%}
{%- set lang = lang
| default("en") -%}
{%- set event = event
| default("tutorial") -%}
| default("training session") -%}
{%- set backside = backside
| default(True) -%}
| default(False) -%}
{%- set image = image
| default("kube") -%}
{%- set clusternumber = clusternumber
@@ -212,14 +212,15 @@ img.kube {
{% for x in range(pagesize) %}
<div class="back">
<br/>
<p>You got this at the tutorial
"Deploying and Scaling Applications
with Kubernetes"
during Velocity Berlin (November 2019).</p>
<p>You got this at the tutorial:<br/>
"Deep Dive into Kubernetes Internals
for Builders and Operators" during LISA
in Portland (October 2019).
</p>
<p>If you liked that tutorial,
I can train your team or organization
on Docker, container, and Kubernetes,
with curriculums of 1 to 5 days.
with courses of 1 to 5 days.
</p>
<p>Interested? Contact me at:</p>
<p><strong>jerome.petazzoni@gmail.com</strong></p>

View File

@@ -1,7 +1,9 @@
# Uncomment and/or edit one of the following lines if necessary.
#/ /kube-halfday.yml.html 200
#/ /kube-fullday.yml.html 200
/ /kube-twodays.yml.html 200!
#/ /kube-twodays.yml.html 200
# And this allows us to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack
/ /lisa.html 200!

View File

@@ -1,26 +0,0 @@
# End of day 1!
---
class: pic
![Electricity map](images/electricity-map.png)
---
## Our VMs are low-carbon
- The closest EC2 region to Berlin is eu-central-1
(Frankfurt; Germany electricity mix: wind, coal, nuclear, gas)
- Instead, we deployed these VMs in eu-north-1
(Stockholm; Sweden electricity mix: hydro, nuclear, wind)
- According to [Electricity Map](https://electricitymap.org/), they produce ~5x less carbon
(at least Monday morning, when the VMs were deployed)
- The latency was a bit higher; let me know if you saw any difference!

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.3 MiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 227 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 3.3 KiB

View File

@@ -5,7 +5,6 @@
speaker: jpetazzo
title: Deploying and scaling applications with Kubernetes
attend: https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/79109
slides: https://velocity-2019-11.container.training/
- date: 2019-11-13
country: fr

View File

@@ -667,12 +667,17 @@ class: extra-details
- For auditing purposes, sometimes we want to know who can perform an action
- There are a few tools to help us with that
- There is a proof-of-concept tool by Aqua Security which does exactly that:
- [kubectl-who-can](https://github.com/aquasecurity/kubectl-who-can) by Aqua Security
https://github.com/aquasecurity/kubectl-who-can
- [Review Access (aka Rakkess)](https://github.com/corneliusweig/rakkess)
- This is one way to install it:
```bash
docker run --rm -v /usr/local/bin:/go/bin golang \
go get -v github.com/aquasecurity/kubectl-who-can
```
- Both are available as standalone programs, or as plugins for `kubectl`
(`kubectl` plugins can be installed and managed with `krew`)
- This is one way to use it:
```bash
kubectl-who-can create pods
```

View File

@@ -15,3 +15,26 @@
- `dockercoins/webui:v0.1`
- `dockercoins/worker:v0.1`
---
## Setting `$REGISTRY` and `$TAG`
- In the upcoming exercises and labs, we use a couple of environment variables:
- `$REGISTRY` as a prefix to all image names
- `$TAG` as the image version tag
- For example, the worker image is `$REGISTRY/worker:$TAG`
- If you copy-paste the commands in these exercises:
**make sure that you set `$REGISTRY` and `$TAG` first!**
- For example:
```bash
export REGISTRY=dockercoins TAG=v0.1
```
(this will expand `$REGISTRY/worker:$TAG` to `dockercoins/worker:v0.1`)

View File

@@ -110,9 +110,9 @@ class: extra-details
## In practice: kube-router
- We are going to set up a new cluster
- We are going to reconfigure our cluster
- For this new cluster, we will use kube-router
(control plane and kubelets)
- kube-router will provide the "pod network"
@@ -184,73 +184,79 @@ class: extra-details
## The plan
- We'll work in a new cluster (named `kuberouter`)
- We'll update the control plane's configuration
- We will run a simple control plane (like before)
- the controller manager will allocate `podCIDR` subnets
- ... But this time, the controller manager will allocate `podCIDR` subnets
(so that we don't have to manually assign subnets to individual nodes)
- we will allow privileged containers
- We will create a DaemonSet for kube-router
- We will join nodes to the cluster
- We will restart kubelets in CNI mode
- The DaemonSet will automatically start a kube-router pod on each node
---
## Logging into the new cluster
## Getting the files
.exercise[
- Log into node `kuberouter1`
- Clone the workshop repository:
- If you haven't cloned the training repo yet, do it:
```bash
git clone https://@@GITREPO@@
cd ~
git clone https://container.training
```
- Move to this directory:
- Then move to this directory:
```bash
cd container.training/compose/kube-router-k8s-control-plane
cd ~/container.training/compose/kube-router-k8s-control-plane
```
]
---
## Our control plane
## Changes to the control plane
- We will use a Compose file to start the control plane
- It is similar to the one we used with the `kubenet` cluster
- The API server is started with `--allow-privileged`
- The API server must be started with `--allow-privileged`
(because we will start kube-router in privileged pods)
- The controller manager is started with extra flags too:
- The controller manager must be started with extra flags too:
`--allocate-node-cidrs` and `--cluster-cidr`
- We need to edit the Compose file to set the Cluster CIDR
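After those edits, the controller manager service in the Compose file might look roughly like this (a sketch only; the service name, image, and remaining flags depend on the actual Compose file provided in the repo):

```yaml
kube-controller-manager:
  image: kubebins   # placeholder; use the image from the provided Compose file
  command:
    - kube-controller-manager
    - --master=http://localhost:8080
    - --allocate-node-cidrs
    - --cluster-cidr=10.10.0.0/16   # i.e. 10.C.0.0/16, with C = your cluster number
```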
.exercise[
- Make these changes!
(You might have to restart scheduler and controller manager, too.)
]
.footnote[If your control plane is broken, don't worry!
<br/>We provide a Compose file to catch up.]
---
## Starting the control plane
## Catching up
- Our cluster CIDR will be `10.C.0.0/16`
(where `C` is our cluster number)
- If your control plane is broken, here is how to start a new one
.exercise[
- Edit the Compose file to set the Cluster CIDR:
- Make sure the Docker Engine is running, or start it with:
```bash
vim docker-compose.yaml
dockerd
```
- Edit the Compose file to change the `--cluster-cidr`
- Our cluster CIDR will be `10.C.0.0/16`
<br/>
(where `C` is our cluster number)
- Start the control plane:
```bash
docker-compose up
@@ -278,7 +284,7 @@ class: extra-details
- The address of the API server will be `http://A.B.C.D:8080`
(where `A.B.C.D` is the public address of `kuberouter1`, running the control plane)
(where `A.B.C.D` is the public address of `node1`, running the control plane)
.exercise[
@@ -294,46 +300,9 @@ class: extra-details
]
Note: the DaemonSet won't create any pods (yet) since there are no nodes (yet).
---
## Generating the kubeconfig for kubelet
- This is similar to what we did for the `kubenet` cluster
.exercise[
- Generate the kubeconfig file (replacing `X.X.X.X` with the address of `kuberouter1`):
```bash
kubectl config set-cluster cni --server http://`X.X.X.X`:8080
kubectl config set-context cni --cluster cni
kubectl config use-context cni
cp ~/.kube/config ~/kubeconfig
```
]
---
## Distributing kubeconfig
- We need to copy that kubeconfig file to the other nodes
.exercise[
- Copy `kubeconfig` to the other nodes:
```bash
for N in 2 3; do
scp ~/kubeconfig kuberouter$N:
done
```
]
---
## Starting kubelet
## Restarting kubelets
- We don't need the `--pod-cidr` option anymore
@@ -350,37 +319,59 @@ Note: the DaemonSet won't create any pods (yet) since there are no nodes (yet).
- Open more terminals and join the other nodes:
```bash
ssh kuberouter2 sudo kubelet --kubeconfig ~/kubeconfig --network-plugin=cni
ssh kuberouter3 sudo kubelet --kubeconfig ~/kubeconfig --network-plugin=cni
ssh node2 sudo kubelet --kubeconfig ~/kubeconfig --network-plugin=cni
ssh node3 sudo kubelet --kubeconfig ~/kubeconfig --network-plugin=cni
```
]
---
## Setting up a test
## Check kuberouter pods
- Let's create a Deployment and expose it with a Service
- Make sure that kuberouter pods are running
.exercise[
- Create a Deployment running a web server:
- List pods in the `kube-system` namespace:
```bash
kubectl create deployment web --image=jpetazzo/httpenv
```
- Scale it so that it spans multiple nodes:
```bash
kubectl scale deployment web --replicas=5
```
- Expose it with a Service:
```bash
kubectl expose deployment web --port=8888
kubectl get pods --namespace=kube-system
```
]
If the pods aren't running, possible causes include:
- privileged containers aren't enabled
<br/>(add `--allow-privileged` flag to the API server)
- missing service account token
<br/>(add `--disable-admission-plugins=ServiceAccount` flag)
---
## Testing
- Let's delete all pods
- They should be re-created with new, correct addresses
.exercise[
- Delete all pods:
```bash
kubectl delete pods --all
```
- Check the new pods:
```bash
kubectl get pods -o wide
```
]
Note: if you provisioned a new control plane, re-create and re-expose the deployment.
---
## Checking that everything works
@@ -403,6 +394,20 @@ Note that if you send multiple requests, they are load-balanced in a round robin
This shows that we are using IPVS (vs. iptables, which picked random endpoints).
Problems? Check next slide!
---
## If it doesn't quite work ...
- If we used kubenet before, we now have a `cbr0` bridge
- This bridge (and its subnet) might conflict with what we're using now
- To see if it's the case, check if you have duplicate routes with `ip ro`
- To fix it, delete the old bridge with `ip link del cbr0`
---
## Troubleshooting
@@ -462,7 +467,7 @@ We should see the local pod CIDR connected to `kube-bridge`, and the other nodes
These commands will give an error message that includes:
```
dial tcp: lookup kuberouterX on 127.0.0.11:53: no such host
dial tcp: lookup nodeX on 127.0.0.11:53: no such host
```
What does that mean?
@@ -475,7 +480,7 @@ What does that mean?
- By default, it creates a connection using the kubelet's name
(e.g. `http://kuberouter1:...`)
(e.g. `http://node1:...`)
- This requires our nodes names to be in DNS

View File

@@ -31,31 +31,12 @@
---
## Our environment
- We will use the machine indicated as `dmuc1`
(this stands for "Dessine Moi Un Cluster" or "Draw Me A Cluster",
<br/>in homage to Saint-Exupery's "The Little Prince")
- This machine:
- runs Ubuntu LTS
- has Kubernetes, Docker, and etcd binaries installed
- but nothing is running
---
## Checking our environment
- Let's make sure we have everything we need first
.exercise[
- Log into the `dmuc1` machine
- Get root:
```bash
sudo -i
@@ -547,7 +528,7 @@ Success!
Our node should show up.
Its name will be its hostname (it should be `dmuc1`).
Its name will be its hostname (it should be `node1`).
---

View File

@@ -1,3 +1,41 @@
## Questions to ask before adding healthchecks
- Do we want liveness, readiness, both?
(sometimes, we can use the same check, but with different failure thresholds)
- Do we have existing HTTP endpoints that we can use?
- Do we need to add new endpoints, or perhaps use something else?
- Are our healthchecks likely to use resources and/or slow down the app?
- Do they depend on additional services?
(this can be particularly tricky, see next slide)
---
## Healthchecks and dependencies
- A good healthcheck should always indicate the health of the service itself
- It should not be affected by the state of the service's dependencies
- Example: a web server requiring a database connection to operate
(make sure that the healthcheck can report "OK" even if the database is down;
<br/>
because it won't help us to restart the web server if the issue is with the DB!)
- Example: a microservice calling other microservices
- Example: a worker process
(these will generally require minor code changes to report health)
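As a sketch of the web server example above, a healthcheck handler can report the service as alive even when the database is down (`db_ping` here is a hypothetical dependency probe, not a real API):

```python
def healthcheck(db_ping):
    """Liveness-style healthcheck that is not affected by dependencies.

    db_ping is a hypothetical callable that raises if the database is
    unreachable. We return HTTP 200 either way (so a DB outage does not
    get the web server restarted), but still surface the DB state in
    the response body for observability.
    """
    try:
        db_ping()
        db_status = "up"
    except Exception:
        db_status = "down"
    return 200, {"service": "ok", "database": db_status}
```

A readiness check, on the other hand, could legitimately return a failure when the database is down, so the pod stops receiving traffic without being restarted.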
---
## Adding healthchecks to an app
- Let's add healthchecks to DockerCoins!
@@ -333,3 +371,25 @@ class: extra-details
(and have gcr.io/pause take care of the reaping)
- Discussion of this in [Video - 10 Ways to Shoot Yourself in the Foot with Kubernetes, #9 Will Surprise You](https://www.youtube.com/watch?v=QKI-JRs2RIE)
---
## Healthchecks for worker
- Readiness isn't useful
(because worker isn't a backend for a service)
- Liveness may help us restart a broken worker, but how can we check it?
- Embedding an HTTP server is an option
(but it has a high potential for unwanted side effects and false positives)
- Using a "lease" file can be relatively easy:
- touch a file during each iteration of the main loop
- check the timestamp of that file from an exec probe
- Writing logs (and checking them from the probe) also works
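The lease-file technique above could be sketched like this (the file path and age threshold are assumptions; this uses GNU/BusyBox `stat -c %Y`, while macOS would need `stat -f %m`):

```shell
# Sketch of a "lease file" liveness check for a worker.
# The worker's main loop is expected to run `touch` on the lease file
# at each iteration; this function fails if the file is missing or stale.
lease_fresh() {
  lease=$1     # path to the lease file
  max_age=$2   # maximum acceptable age, in seconds
  [ -f "$lease" ] || return 1
  now=$(date +%s)
  mtime=$(stat -c %Y "$lease")
  [ $((now - mtime)) -lt "$max_age" ]
}
```

In a pod spec, this could be wired up as an exec probe, e.g. `sh -c 'lease_fresh /tmp/lease 60'` after sourcing the function (names illustrative).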

View File

@@ -42,11 +42,9 @@
- internal corruption (causing all requests to error)
- Anything where our incident response would be "just restart/reboot it"
- If the liveness probe fails *N* consecutive times, the container is killed
.warning[**Do not** use liveness probes for problems that can't be fixed by a restart]
- Otherwise we just restart our pods for no reason, creating useless load
- *N* is the `failureThreshold` (3 by default)
---
@@ -54,7 +52,7 @@
- Indicates if the container is ready to serve traffic
- If a container becomes "unready" it might be ready again soon
- If a container becomes "unready" (let's say busy!) it might be ready again soon
- If the readiness probe fails:
@@ -68,79 +66,19 @@
## When to use a readiness probe
- To indicate failure due to an external cause
- To indicate temporary failures
- database is down or unreachable
- the application can only service *N* parallel connections
- mandatory auth or other backend service unavailable
- the runtime is busy doing garbage collection or initial data load
- To indicate temporary failure or unavailability
- The container is marked as "not ready" after `failureThreshold` failed attempts
- application can only service *N* parallel connections
(3 by default)
- runtime is busy doing garbage collection or initial data load
- It is marked again as "ready" after `successThreshold` successful attempts
- For processes that take a long time to start
(more on that later)
---
## Dependencies
- If a web server depends on a database to function, and the database is down:
- the web server's liveness probe should succeed
- the web server's readiness probe should fail
- Same thing for any hard dependency (without which the container can't work)
.warning[**Do not** fail liveness probes for problems that are external to the container]
---
## Timing and thresholds
- Probes are executed at intervals of `periodSeconds` (default: 10)
- The timeout for a probe is set with `timeoutSeconds` (default: 1)
.warning[If a probe takes longer than that, it is considered as a FAIL]
- A probe is considered successful after `successThreshold` successes (default: 1)
- A probe is considered failing after `failureThreshold` failures (default: 3)
- A probe can have an `initialDelaySeconds` parameter (default: 0)
- Kubernetes will wait that amount of time before running the probe for the first time
(this is important to avoid killing services that take a long time to start)
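For reference, these timing parameters can all appear together in a probe spec; the values below are illustrative, not recommendations:

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30   # wait before running the probe for the first time
  periodSeconds: 10         # run the probe every 10 seconds
  timeoutSeconds: 1         # longer than this counts as a FAIL
  successThreshold: 1       # successes needed to be considered successful
  failureThreshold: 3       # 3 consecutive failures => container killed
```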
---
class: extra-details
## Startup probe
- Kubernetes 1.16 introduces a third type of probe: `startupProbe`
(it is in `alpha` in Kubernetes 1.16)
- It can be used to indicate "container not ready *yet*"
- process is still starting
- loading external data, priming caches
- Before Kubernetes 1.16, we had to use the `initialDelaySeconds` parameter
(available for both liveness and readiness probes)
- `initialDelaySeconds` is a rigid delay (always wait X before running probes)
- `startupProbe` works better when a container start time can vary a lot
(1 by default)
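A startup probe could be declared like this (a sketch; the endpoint and thresholds are assumptions). With `periodSeconds: 10` and `failureThreshold: 30`, the container gets up to 300 seconds to start before being killed, and the liveness/readiness probes only take over once the startup probe has succeeded:

```yaml
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
  failureThreshold: 30   # up to 30 x 10 = 300 seconds to start
```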
---
@@ -174,12 +112,10 @@ class: extra-details
(instead of serving errors or timeouts)
- Unavailable backends get removed from load balancer rotation
- Overloaded backends get removed from load balancer rotation
(thus improving response times across the board)
- If a probe is not defined, it's as if there was an "always successful" probe
---
## Example: HTTP probe
@@ -229,56 +165,14 @@ If the Redis process becomes unresponsive, it will be killed.
---
## Questions to ask before adding healthchecks
## Details about liveness and readiness probes
- Do we want liveness, readiness, both?
- Probes are executed at intervals of `periodSeconds` (default: 10)
(sometimes, we can use the same check, but with different failure thresholds)
- The timeout for a probe is set with `timeoutSeconds` (default: 1)
- Do we have existing HTTP endpoints that we can use?
- A probe is considered successful after `successThreshold` successes (default: 1)
- Do we need to add new endpoints, or perhaps use something else?
- A probe is considered failing after `failureThreshold` failures (default: 3)
- Are our healthchecks likely to use resources and/or slow down the app?
- Do they depend on additional services?
(this can be particularly tricky, see next slide)
---
## Healthchecks and dependencies
- Liveness checks should not be influenced by the state of external services
- All checks should reply quickly (by default, less than 1 second)
- Otherwise, they are considered to fail
- This might require to check the health of dependencies asynchronously
(e.g. if a database or API might be healthy but still take more than
1 second to reply, we should check the status asynchronously and report
a cached status)
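One way to sketch that asynchronous, cached dependency check in Python (the check function and interval are placeholders):

```python
import threading
import time

class CachedHealth:
    """Check a slow dependency in the background; probes read the cache."""

    def __init__(self, check, interval=5.0):
        self._check = check        # e.g. a function that pings the database
        self._interval = interval
        self._status = "unknown"
        self._lock = threading.Lock()
        t = threading.Thread(target=self._loop, daemon=True)
        t.start()

    def _loop(self):
        while True:
            try:
                ok = self._check()   # may be slow; runs outside the probe path
            except Exception:
                ok = False
            with self._lock:
                self._status = "ok" if ok else "degraded"
            time.sleep(self._interval)

    def status(self):
        # Always returns immediately, even if the dependency is slow
        with self._lock:
            return self._status
```

The healthcheck endpoint then calls `status()` and replies instantly, well within the 1-second default timeout, regardless of how long the dependency takes to answer.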
---
## Healthchecks for workers
(In that context, worker = process that doesn't accept connections)
- Readiness isn't useful
(because workers aren't backends for a service)
- Liveness may help us restart a broken worker, but how can we check it?
- Embedding an HTTP server is a (potentially expensive) option
- Using a "lease" file can be relatively easy:
- touch a file during each iteration of the main loop
- check the timestamp of that file from an exec probe
- Writing logs (and checking them from the probe) also works
- If a probe is not defined, it's as if there was an "always successful" probe

View File

@@ -153,7 +153,10 @@ pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
kubectl logs deploy/pingpong --tail 1 --follow
```
- Leave that command running, so that we can keep an eye on these logs
<!--
```wait seq=3```
```keys ^C```
-->
]
@@ -183,44 +186,6 @@ We could! But the *deployment* would notice it right away, and scale back to the
---
## Log streaming
- Let's look again at the output of `kubectl logs`
(the one we started before scaling up)
- `kubectl logs` shows us one line per second
- We could expect 3 lines per second
(since we should now have 3 pods running `ping`)
- Let's try to figure out what's happening!
---
## Streaming logs of multiple pods
- What happens if we restart `kubectl logs`?
.exercise[
- Interrupt `kubectl logs` (with Ctrl-C)
- Restart it:
```bash
kubectl logs deploy/pingpong --tail 1 --follow
```
]
`kubectl logs` will warn us that multiple pods were found, and that it's showing us only one of them.
Let's leave `kubectl logs` running while we keep exploring.
---
## Resilience
- The *deployment* `pingpong` watches its *replica set*
@@ -231,12 +196,20 @@ Let's leave `kubectl logs` running while we keep exploring.
.exercise[
- In a separate window, watch the list of pods:
- In a separate window, list pods, and keep watching them:
```bash
watch kubectl get pods
kubectl get pods -w
```
- Destroy the pod currently shown by `kubectl logs`:
<!--
```wait Running```
```keys ^C```
```hide kubectl wait deploy pingpong --for condition=available```
```keys kubectl delete pod ping```
```copypaste pong-..........-.....```
-->
- Destroy a pod:
```
kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
```
@@ -244,23 +217,6 @@ Let's leave `kubectl logs` running while we keep exploring.
---
## What happened?
- `kubectl delete pod` terminates the pod gracefully
(sending it the TERM signal and waiting for it to shut down)
- As soon as the pod is in "Terminating" state, the Replica Set replaces it
- But we can still see the output of the "Terminating" pod in `kubectl logs`
- Until 30 seconds later, when the grace period expires
- The pod is then killed, and `kubectl logs` exits
---
## What if we wanted something different?
- What if we wanted to start a "one-shot" container that *doesn't* get restarted?
@@ -278,72 +234,6 @@ Let's leave `kubectl logs` running while we keep exploring.
---
## Scheduling periodic background work
- A Cron Job is a job that will be executed at specific intervals
(the name comes from the traditional cronjobs executed by the UNIX crond)
- It requires a *schedule*, represented as five space-separated fields:
- minute [0,59]
- hour [0,23]
- day of the month [1,31]
- month of the year [1,12]
- day of the week ([0,6] with 0=Sunday)
- `*` means "all valid values"; `/N` means "every N"
- Example: `*/3 * * * *` means "every three minutes"
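Such a schedule goes in the `schedule` field of a CronJob manifest; here is a minimal sketch (name and image are illustrative; `batch/v1beta1` matches Kubernetes versions of that era, while newer clusters use `batch/v1`):

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: sleepy
spec:
  schedule: "*/3 * * * *"       # every three minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: sleepy
            image: alpine
            args: ["sleep", "10"]
          restartPolicy: OnFailure
```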
---
## Creating a Cron Job
- Let's create a simple job to be executed every three minutes
- Cron Jobs need to terminate, otherwise they'd run forever
.exercise[
- Create the Cron Job:
```bash
kubectl run --schedule="*/3 * * * *" --restart=OnFailure --image=alpine sleep 10
```
- Check the resource that was created:
```bash
kubectl get cronjobs
```
]
---
## Cron Jobs in action
- At the specified schedule, the Cron Job will create a Job
- The Job will create a Pod
- The Job will make sure that the Pod completes
(re-creating another one if it fails, for instance if its node fails)
.exercise[
- Check the Jobs that are created:
```bash
kubectl get jobs
```
]
(It will take a few minutes before the first job is scheduled.)
---
## What about that deprecation warning?
- As we can see from the previous slide, `kubectl run` can do many things

View File

@@ -66,8 +66,6 @@ Exactly what we need!
sudo chmod +x /usr/local/bin/stern
```
- On OS X, just `brew install stern`
<!-- ##VERSION## -->
---

View File

@@ -4,41 +4,11 @@
- Let's see what it takes to add more nodes
- We are going to use another set of machines: `kubenet`
---
## The environment
## Next steps
- We have 3 identical machines: `kubenet1`, `kubenet2`, `kubenet3`
- The Docker Engine is installed (and running) on these machines
- The Kubernetes packages are installed, but nothing is running
- We will use `kubenet1` to run the control plane
---
## The plan
- Start the control plane on `kubenet1`
- Join the 3 nodes to the cluster
- Deploy and scale a simple web server
.exercise[
- Log into `kubenet1`
]
---
## Running the control plane
- We will use a Compose file to start the control plane components
- We will need some files that are on the tutorial GitHub repo
.exercise[
@@ -47,6 +17,56 @@
git clone https://@@GITREPO@@
```
]
---
## Control plane
- We can use the control plane that we deployed on node1
- If that didn't quite work, don't panic!
- We provide a way to catch up and get a control plane in a pinch
---
## Cleaning up
- Only do this if your control plane doesn't work and you want to start over
.exercise[
- Reboot the node to make sure nothing else is running:
```bash
sudo reboot
```
- Log in again:
```bash
ssh docker@`A.B.C.D`
```
- Get root:
```
sudo -i
```
]
---
## Catching up
- We will use a Compose file to start the control plane components
.exercise[
- Start the Docker Engine:
```bash
dockerd
```
- Go to the `compose/simple-k8s-control-plane` directory:
```bash
cd container.training/compose/simple-k8s-control-plane
@@ -84,7 +104,7 @@
class: extra-details
## Differences from `dmuc`
## Differences with the other control plane
- Our new control plane listens on `0.0.0.0` instead of the default `127.0.0.1`
@@ -120,12 +140,9 @@ class: extra-details
.exercise[
- Copy `kubeconfig` to the other nodes:
```bash
for N in 2 3; do
scp ~/kubeconfig kubenet$N:
done
```
- Copy `~/.kube/config` to the other nodes
(Given the size of the file, you can copy-paste it!)
]
@@ -133,28 +150,49 @@ class: extra-details
## Starting kubelet
- Reminder: kubelet needs to run as root; don't forget `sudo`!
*The following assumes that you copied the kubeconfig file to /tmp/kubeconfig.*
.exercise[
- Join the first node:
```bash
sudo kubelet --kubeconfig ~/kubeconfig
```
- Log into node2
- Open more terminals and join the other nodes to the cluster:
- Start the Docker Engine:
```bash
ssh kubenet2 sudo kubelet --kubeconfig ~/kubeconfig
ssh kubenet3 sudo kubelet --kubeconfig ~/kubeconfig
sudo dockerd &
```
- Start kubelet:
```bash
sudo kubelet --kubeconfig /tmp/kubeconfig
```
]
Repeat on more nodes if desired.
---
## If we're running the "old" control plane
- By default, the API server only listens on localhost
- The other nodes will not be able to connect
(symptom: a flood of `node "nodeX" not found` messages)
- We need to add `--address 0.0.0.0` to the API server
(yes, [this will expose our API server to all kinds of shenanigans](https://twitter.com/TabbySable/status/1188901099446554624))
- Restarting API server might cause scheduler and controller manager to quit
(you might have to restart them)
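As a hedged sketch, restarting the API server with that flag could look like this (the etcd address is illustrative, and other flags are omitted; adapt to however your control plane was started):

```bash
# Illustrative only: restart the API server listening on all interfaces
# (assumes etcd is reachable on localhost; other flags elided)
kube-apiserver --etcd-servers=http://localhost:2379 --address 0.0.0.0
```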
---
## Checking cluster status
- We should now see all 3 nodes
- We should now see all the nodes
- At first, their `STATUS` will be `NotReady`
@@ -179,14 +217,14 @@ class: extra-details
.exercise[
- Create a Deployment running NGINX:
- Create a Deployment running httpenv:
```bash
kubectl create deployment web --image=nginx
kubectl create deployment httpenv --image=jpetazzo/httpenv
```
- Scale it:
```bash
kubectl scale deployment web --replicas=5
kubectl scale deployment httpenv --replicas=5
```
]
@@ -197,7 +235,7 @@ class: extra-details
- The pods will be scheduled on the nodes
- The nodes will pull the `nginx` image, and start the pods
- The nodes will pull the `jpetazzo/httpenv` image, and start the pods
- What are the IP addresses of our pods?
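One way to answer that, assuming `kubectl` is pointed at our cluster:

```bash
# Show each pod's IP address and the node it was scheduled on
kubectl get pods -o wide
```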
@@ -403,7 +441,7 @@ class: extra-details
- Expose our Deployment:
```bash
kubectl expose deployment web --port=80
kubectl expose deployment httpenv --port=8888
```
]
@@ -416,7 +454,7 @@ class: extra-details
- Retrieve the ClusterIP address:
```bash
kubectl get svc web
kubectl get svc httpenv
```
- Send a few requests to the ClusterIP address (with `curl`)
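A sketch of those two steps combined (the ClusterIP itself is cluster-specific; `httpenv` listens on port 8888):

```bash
# Grab the ClusterIP programmatically, then send a request to it
IP=$(kubectl get svc httpenv -o jsonpath={.spec.clusterIP})
curl http://$IP:8888
```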


@@ -11,36 +11,16 @@
- Deploy everything else:
```bash
kubectl create deployment hasher --image=dockercoins/hasher:v0.1
kubectl create deployment rng --image=dockercoins/rng:v0.1
kubectl create deployment webui --image=dockercoins/webui:v0.1
kubectl create deployment worker --image=dockercoins/worker:v0.1
set -u
for SERVICE in hasher rng webui worker; do
kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
```
]
---
class: extra-details
## Deploying other images
- If we wanted to deploy images from another registry ...
- ... Or with a different tag ...
- ... We could use the following snippet:
```bash
REGISTRY=dockercoins
TAG=v0.1
for SERVICE in hasher rng webui worker; do
kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
```
---
## Is this working?
- After waiting for the deployment to complete, let's look at the logs!


@@ -61,6 +61,32 @@
---
## Building a new version of the `worker` service
.warning[
Only run these commands if you have built and pushed DockerCoins to a local registry.
<br/>
If you are using images from the Docker Hub (`dockercoins/worker:v0.1`), skip this.
]
.exercise[
- Go to the `stacks` directory (`~/container.training/stacks`)
- Edit `dockercoins/worker/worker.py`; update the first `sleep` line to sleep 1 second
- Build a new tag and push it to the registry:
```bash
#export REGISTRY=localhost:3xxxx
export TAG=v0.2
docker-compose -f dockercoins.yml build
docker-compose -f dockercoins.yml push
```
]
---
## Rolling out the new `worker` service
.exercise[
@@ -79,7 +105,7 @@
- Update `worker` either with `kubectl edit`, or by running:
```bash
kubectl set image deploy worker worker=dockercoins/worker:v0.2
kubectl set image deploy worker worker=$REGISTRY/worker:$TAG
```
]
@@ -120,7 +146,8 @@ That rollout should be pretty quick. What shows in the web UI?
- Update `worker` by specifying a non-existent image:
```bash
kubectl set image deploy worker worker=dockercoins/worker:v0.3
export TAG=v0.3
kubectl set image deploy worker worker=$REGISTRY/worker:$TAG
```
- Check what's going on:
@@ -189,14 +216,27 @@ If you didn't deploy the Kubernetes dashboard earlier, just skip this slide.
.exercise[
- Connect to the dashboard that we deployed earlier
- Check that we have failures in Deployments, Pods, and Replica Sets
- Can we see the reason for the failure?
- Check which port the dashboard is on:
```bash
kubectl -n kube-system get svc socat
```
]
Note the `3xxxx` port.
.exercise[
- Connect to http://oneofournodes:3xxxx/
<!-- ```open https://node1:3xxxx/``` -->
]
--
- We have failures in Deployments, Pods, and Replica Sets
---
## Recovering from a bad rollout
@@ -245,7 +285,7 @@ spec:
spec:
containers:
- name: worker
image: dockercoins/worker:v0.1
image: $REGISTRY/worker:v0.1
strategy:
rollingUpdate:
maxUnavailable: 0
@@ -276,7 +316,7 @@ class: extra-details
spec:
containers:
- name: worker
image: dockercoins/worker:v0.1
image: $REGISTRY/worker:v0.1
strategy:
rollingUpdate:
maxUnavailable: 0


@@ -136,227 +136,141 @@ And *then* it is time to look at orchestration!
---
## Congratulations!
## HTTP traffic handling
- We learned a lot about Kubernetes, its internals, its advanced concepts
- *Services* are layer 4 constructs
- HTTP is a layer 7 protocol
- It is handled by *ingresses* (a different resource kind)
- *Ingresses* allow:
- virtual host routing
- session stickiness
- URI mapping
- and much more!
- [This section](kube-selfpaced.yml.html#toc-exposing-http-services-with-ingress-resources) shows how to expose multiple HTTP apps using [Træfik](https://docs.traefik.io/user-guide/kubernetes/)
---
## Logging
- Logging is delegated to the container engine
- Logs are exposed through the API
- Logs are also accessible through local files (`/var/log/containers`)
- Log shipping to a central platform is usually done through these files
(e.g. with an agent bind-mounting the log directory)
- [This section](kube-selfpaced.yml.html#toc-centralized-logging) shows how to do that with [Fluentd](https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd) and the EFK stack
---
## Metrics
- The kubelet embeds [cAdvisor](https://github.com/google/cadvisor), which exposes container metrics
(cAdvisor might be separated in the future for more flexibility)
- It is a good idea to start with [Prometheus](https://prometheus.io/)
(even if you end up using something else)
- Starting from Kubernetes 1.8, we can use the [Metrics API](https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/)
- [Heapster](https://github.com/kubernetes/heapster) was a popular add-on
(but is being [deprecated](https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md) starting with Kubernetes 1.11)
---
## Managing the configuration of our applications
- Two constructs are particularly useful: secrets and config maps
- They allow us to expose arbitrary information to our containers
- **Avoid** storing configuration in container images
(There are some exceptions to that rule, but it's generally a Bad Idea)
- **Never** store sensitive information in container images
(It's the container equivalent of the password on a post-it note on your screen)
- [This section](kube-selfpaced.yml.html#toc-managing-configuration) shows how to manage app config with config maps (among others)
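For instance, a config map can be created and inspected like this (the name and key are made up for illustration):

```bash
# Store a configuration value outside of the container image
kubectl create configmap app-config --from-literal=http.port=8888
# Inspect the resulting resource
kubectl get configmap app-config -o yaml
```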
---
## Managing stack deployments
- The best deployment tool will vary, depending on:
- the size and complexity of your stack(s)
- how often you change it (i.e. add/remove components)
- the size and skills of your team
- A few examples:
- shell scripts invoking `kubectl`
- YAML resources descriptions committed to a repo
- [Helm](https://github.com/kubernetes/helm) (~package manager)
- [Spinnaker](https://www.spinnaker.io/) (Netflix's CD platform)
- [Brigade](https://brigade.sh/) (event-driven scripting; no YAML)
---
## Cluster federation
--
- That was just the easy part
- The hard challenges will revolve around *culture* and *people*
![Star Trek Federation](images/startrek-federation.jpg)
--
- ... What does that mean?
Sorry Star Trek fans, this is not the federation you're looking for!
--
(If I add "Your cluster is in another federation" I might get a 3rd fandom wincing!)
---
## Running an app involves many steps
## Cluster federation
- Write the app
- Kubernetes master operation relies on etcd
- Tests, QA ...
- etcd uses the [Raft](https://raft.github.io/) protocol
- Ship *something* (more on that later)
- Raft recommends low latency between nodes
- Provision resources (e.g. VMs, clusters)
- What if our cluster spreads to multiple regions?
- Deploy the *something* on the resources
--
- Manage, maintain, monitor the resources
- Break it down in local clusters
- Manage, maintain, monitor the app
- Regroup them in a *cluster federation*
- And much more
- Synchronize resources across clusters
- Discover resources across clusters
---
## Who does what?
## Developer experience
- The old "devs vs ops" division has changed
*We've put this last, but it's pretty important!*
- In some organizations, "ops" are now called "SRE" or "platform" teams
- How do you on-board a new developer?
(and they have very different sets of skills)
- What do they need to install to get a dev stack?
- Do you know which team is responsible for each item on the list on the previous page?
- How does a code change make it from dev to prod?
- Acknowledge that a lot of tasks are outsourced
(e.g. if we add "buy/rack/provision machines" in that list)
---
## What do we ship?
- Some organizations embrace "you build it, you run it"
- When "build" and "run" are owned by different teams, where's the line?
- What does the "build" team ship to the "run" team?
- Let's see a few options, and what they imply
---
## Shipping code
- Team "build" ships code
(hopefully in a repository, identified by a commit hash)
- Team "run" containerizes that code
✔️ no extra work for developers
❌ very little advantage of using containers
---
## Shipping container images
- Team "build" ships container images
(hopefully built automatically from a source repository)
- Team "run" uses these images to create e.g. Kubernetes resources
✔️ universal artefact (supports all languages uniformly)
✔️ easy to start a single component (good for monoliths)
❌ complex applications will require a lot of extra work
❌ adding/removing components in the stack also requires extra work
❌ complex applications will run very differently between dev and prod
---
## Shipping Compose files
(Or another kind of dev-centric manifest)
- Team "build" ships a manifest that works on a single node
(as well as images, or ways to build them)
- Team "run" adapts that manifest to work on a cluster
✔️ all teams can start the stack in a reliable, deterministic manner
❌ adding/removing components still requires *some* work (but less than before)
❌ there will be *some* differences between dev and prod
---
## Shipping Kubernetes manifests
- Team "build" ships ready-to-run manifests
(YAML, Helm charts, Kustomize ...)
- Team "run" adjusts some parameters and monitors the application
✔️ parity between dev and prod environments
✔️ "run" team can focus on SLAs, SLOs, and overall quality
❌ requires *a lot* of extra work (and new skills) from the "build" team
❌ Kubernetes is not a very convenient development platform (at least, not yet)
---
## What's the right answer?
- It depends on our teams
- existing skills (do they know how to do it?)
- availability (do they have the time to do it?)
- potential skills (can they learn to do it?)
- It depends on our culture
- owning "run" often implies being on call
- do we reward on-call duty without encouraging hero syndrome?
- do we give people resources (time, money) to learn?
---
class: extra-details
## Tools to develop on Kubernetes
*If we decide to make Kubernetes the primary development platform, here
are a few tools that can help us.*
- Docker Desktop
- Draft
- Minikube
- Skaffold
- Tilt
- ...
---
## Where do we run?
- Managed vs. self-hosted
- Cloud vs. on-premises
- If cloud: public vs. private
- Which vendor/distribution to pick?
- Which versions/features to enable?
---
## Some guidelines
- Start small
- Outsource what we don't know
- Start simple, and stay simple as long as possible
(try to stay away from complex features that we don't need)
- Automate
(regularly check that we can successfully redeploy by following scripts)
- Transfer knowledge
(make sure everyone is on the same page/level)
- Iterate!
---
## Recommended sessions
Dev?
**The state of Kubernetes development tooling**<br/>
by Ellen Korbes (Garden)<br/>
13:25-14:05 Wednesday, Hall A1
Ops?
**Kubernetes the very hard way**<br/>
by Laurent Bernaille (Datadog)<br/>
11:35-12:15 Wednesday, Hall A1
- How does someone add a component to a stack?


@@ -1,93 +0,0 @@
# Deploying with YAML
- So far, we created resources with the following commands:
- `kubectl run`
- `kubectl create deployment`
- `kubectl expose`
- We can also create resources directly with YAML manifests
---
## `kubectl apply` vs `create`
- `kubectl create -f whatever.yaml`
- creates resources if they don't exist
- if resources already exist, don't alter them
<br/>(and display error message)
- `kubectl apply -f whatever.yaml`
- creates resources if they don't exist
- if resources already exist, update them
<br/>(to match the definition provided by the YAML file)
- stores the manifest as an *annotation* in the resource
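To make the difference concrete, here is a sketch using a hypothetical manifest `web.yaml`:

```bash
kubectl create -f web.yaml   # first run: resources are created
kubectl create -f web.yaml   # second run: "AlreadyExists" errors
kubectl apply -f web.yaml    # updates existing resources to match the file
```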
---
## Creating multiple resources
- The manifest can contain multiple resources separated by `---`
```yaml
kind: ...
apiVersion: ...
metadata: ...
name: ...
...
---
kind: ...
apiVersion: ...
metadata: ...
name: ...
...
```
---
## Creating multiple resources
- The manifest can also contain a list of resources
```yaml
apiVersion: v1
kind: List
items:
- kind: ...
apiVersion: ...
...
- kind: ...
apiVersion: ...
...
```
---
## Deploying dockercoins with YAML
- We provide a YAML manifest with all the resources for Dockercoins
(Deployments and Services)
- We can use it if we need to deploy or redeploy Dockercoins
.exercise[
- Deploy or redeploy Dockercoins:
```bash
kubectl apply -f ~/container.training/k8s/dockercoins.yaml
```
]
(If we deployed Dockercoins earlier, we will see warning messages,
because the resources that we created lack the necessary annotation.
We can safely ignore them.)

slides/kadm-fullday.yml (new file)

@@ -0,0 +1,43 @@
title: |
Kubernetes
for Admins and Ops
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
- static-pods-exercise
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/dmuc.md
- - k8s/multinode.md
- k8s/cni.md
- k8s/apilb.md
- k8s/control-plane-auth.md
- - k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md
- k8s/staticpods.md
- k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
- k8s/bootstrap.md
- - k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- - k8s/lastwords-admin.md
- k8s/links.md
- shared/thankyou.md

slides/kadm-twodays.yml (new file)

@@ -0,0 +1,69 @@
title: |
Kubernetes
for administrators
and operators
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
# DAY 1
- - k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
- - k8s/multinode.md
- k8s/cni.md
- - k8s/apilb.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md
- k8s/staticpods.md
- - k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
# DAY 2
- - k8s/kubercoins.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
- k8s/authn-authz.md
- k8s/csr-api.md
- - k8s/openid-connect.md
- k8s/control-plane-auth.md
###- k8s/bootstrap.md
- k8s/netpol.md
- k8s/podsecuritypolicy.md
- - k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- - k8s/prometheus.md
- k8s/extending-api.md
- k8s/operators.md
###- k8s/operators-design.md
# CONCLUSION
- - k8s/lastwords-admin.md
- k8s/links.md
- shared/thankyou.md
- |
# (All content after this slide is bonus material)
# EXTRA
- - k8s/volumes.md
- k8s/configuration.md
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md

slides/kube-fullday.yml (new file)

@@ -0,0 +1,89 @@
title: |
Deploying and Scaling Microservices
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectlrun.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
#- k8s/kubectlproxy.md
#- k8s/localkubeconfig.md
#- k8s/accessinternal.md
-
- k8s/setup-k8s.md
- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
#- k8s/dryrun.md
- k8s/rollout.md
#- k8s/healthchecks.md
#- k8s/healthchecks-more.md
#- k8s/record.md
-
- k8s/namespaces.md
- k8s/ingress.md
#- k8s/kustomize.md
#- k8s/helm.md
#- k8s/create-chart.md
#- k8s/netpol.md
#- k8s/authn-authz.md
#- k8s/csr-api.md
#- k8s/openid-connect.md
#- k8s/podsecuritypolicy.md
- k8s/volumes.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
#- k8s/logs-centralized.md
#- k8s/prometheus.md
#- k8s/statefulsets.md
#- k8s/local-persistent-volumes.md
#- k8s/portworx.md
#- k8s/extending-api.md
#- k8s/operators.md
#- k8s/operators-design.md
#- k8s/staticpods.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
-
- k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md

slides/kube-halfday.yml (new file)

@@ -0,0 +1,68 @@
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- shared/title.md
#- logistics.md
# Bridget-specific; others use logistics.md
- logistics-bridget.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-k8s.md
- - k8s/kubectlrun.md
- k8s/deploymentslideshow.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
#- k8s/kubectlproxy.md
#- k8s/localkubeconfig.md
#- k8s/accessinternal.md
- - k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/rollout.md
#- k8s/record.md
- - k8s/logs-cli.md
# Bridget hasn't added EFK yet
#- k8s/logs-centralized.md
- k8s/namespaces.md
- k8s/helm.md
- k8s/create-chart.md
#- k8s/kustomize.md
#- k8s/netpol.md
- k8s/whatsnext.md
# - k8s/links.md
# Bridget-specific
- k8s/links-bridget.md
- shared/thankyou.md

slides/kube-selfpaced.yml (new file)

@@ -0,0 +1,96 @@
title: |
Deploying and Scaling Microservices
with Docker and Kubernetes
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
chapters:
- shared/title.md
#- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectlrun.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
-
- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/setup-k8s.md
- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/dryrun.md
-
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
- k8s/record.md
-
- k8s/namespaces.md
- k8s/ingress.md
- k8s/kustomize.md
- k8s/helm.md
- k8s/create-chart.md
-
- k8s/netpol.md
- k8s/authn-authz.md
-
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/podsecuritypolicy.md
-
- k8s/volumes.md
- k8s/build-with-docker.md
- k8s/build-with-kaniko.md
- k8s/configuration.md
-
- k8s/logs-centralized.md
- k8s/prometheus.md
-
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
-
- k8s/extending-api.md
- k8s/operators.md
- k8s/operators-design.md
- k8s/staticpods.md
- k8s/owners-and-dependents.md
- k8s/gitworkflows.md
-
- k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md


@@ -1,17 +1,14 @@
title: |
Deploying and Scaling
applications
Deploying and Scaling Microservices
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/workshop-20191104-berlin)"
#chat: "In person!"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://velocity-2019-11.container.training/
slidenumberprefix: "#VelocityConf &mdash; "
slides: http://container.training/
exclude:
- self-paced
@@ -24,9 +21,9 @@ chapters:
- shared/toc.md
-
- shared/prereqs.md
- shared/webssh.md
#- shared/webssh.md
- shared/connecting.md
#- k8s/versions-k8s.md
- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
@@ -46,23 +43,21 @@ chapters:
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
-
- k8s/yamldeploy.md
#- k8s/setup-k8s.md
- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/setup-k8s.md
- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/dryrun.md
-
#- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/dryrun.md
- k8s/rollout.md
- k8s/healthchecks.md
#- k8s/healthchecks-more.md
- k8s/healthchecks-more.md
- k8s/record.md
- electricity.md
-
- k8s/namespaces.md
- k8s/ingress.md

slides/lisa.html (new file)

@@ -0,0 +1,3 @@
<a href="talk.yml.html">Slides for the talk (Monday)</a>
|
<a href="tutorial.yml.html">Slides for the tutorial (Wednesday)</a>

slides/lisa/begin.md (new file)

@@ -0,0 +1,272 @@
class: title
@@TITLE@@
.footnote[![QR Code to the slides](images/qrcode-lisa.png)☝🏻 Slides!]
---
## Outline
- Introductions
- Kubernetes anatomy
- Building a 1-node cluster
- Connecting to services
- Adding more nodes
- What's missing
---
class: title
Introductions
---
class: tutorial-only
## Viewer advisory
- Have you attended my talk on Monday?
--
- Then you may experience *déjà-vu* during the next few minutes
(Sorry!)
--
- But I promise we'll soon build (and break) some clusters!
---
## Hi!
- Jérôme Petazzoni ([@jpetazzo](https://twitter.com/jpetazzo))
- 🇫🇷🇺🇸🇩🇪
- 📦🧔🏻
- 🐋(📅📅📅📅📅📅📅)
- 🔥🧠😢💊 ([1], [2], [3])
- 👨🏻‍🏫✨☸️💰
- 😄👍🏻
[1]: http://jpetazzo.github.io/2018/09/06/the-depression-gnomes/
[2]: http://jpetazzo.github.io/2018/02/17/seven-years-at-docker/
[3]: http://jpetazzo.github.io/2017/12/24/productivity-depression-kanban-emoji/
???
I'm French, living in the US, with also a foot in Berlin (Germany).
I'm a container hipster: I was running containers in production,
before it was cool.
I worked 7 years at Docker, which according to Corey Quinn,
is "long enough to be legally declared dead".
I also struggled for a few years with depression and burn-out.
It's not what I'll discuss today, but it's a topic that matters
a lot to me, and I wrote a bit about it, check my blog if you'd like.
After a break, I decided to do something I love:
teaching witchcraft. I deliver Kubernetes training.
As you can see, I love emojis, but if you don't, it's OK.
(There will be far fewer emojis on the following slides.)
---
## Why this talk?
- One of my goals in 2018: pass the CKA exam
--
- Things I knew:
- kubeadm
- kubectl run, expose, YAML, Helm
- ancient container lore
--
- Things I didn't:
- how Kubernetes *really* works
- deploy Kubernetes The Hard Way
---
## Scope
- Goals:
- learn enough about Kubernetes to ace that exam
- learn enough to teach that stuff
- Non-goals:
- set up a *production* cluster from scratch
- build everything from source
---
## Why are *you* here?
--
- Need/want/must build Kubernetes clusters
--
- Just curious about Kubernetes internals
--
- The Zelda theme
--
- (Other, please specify)
--
class: tutorial-only
.footnote[*Damn. Jérôme is even using the same jokes for his talk and his tutorial!<br/>This guy really has no shame. Tsk.*]
---
class: title
TL,DR
---
class: title
*The easiest way to install Kubernetes
is to get someone else to do it for you.*
(Me, after extensive research.)
???
Which means that if any point, you decide to leave,
I will not take it personally, but assume that you
eventually saw the light, and that you would like to
hire me or some of my colleagues to build your
Kubernetes clusters. It's all good.
---
class: talk-only
## This talk is also available as a tutorial
- Wednesday, October 30, 2019 - 11:00 am to 12:30 pm
- Salon ABCD
- Same content
- Everyone will get a cluster of VMs
- Everyone will be able to do the stuff that I'll demo today!
---
class: title
The Truth¹ About Kubernetes
.footnote[¹Some of it]
---
## What we want to do
```bash
kubectl run web --image=nginx --replicas=3
```
*or*
```bash
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3
```
*then*
```bash
kubectl expose deployment web --port=80
curl http://...
```
???
Kubernetes might feel like an imperative system,
because we can say "run this; do that."
---
## What really happens
- `kubectl` generates a manifest describing a Deployment
- That manifest is sent to the Kubernetes API server
- The Kubernetes API server validates the manifest
- ... then persists it to etcd
- Some *controllers* wake up and do a bunch of stuff
.footnote[*The amazing diagram on the next slide is courtesy of [Lucas Käldström](https://twitter.com/kubernetesonarm).*]
???
In reality, it is a declarative system.
We write manifests, descriptions of what we want, and Kubernetes tries to make it happen.
---
class: pic
![Diagram showing Kubernetes architecture](images/k8s-arch4-thanks-luxas.png)
???
What we're really doing, is storing a bunch of objects in etcd.
But etcd, unlike a SQL database, doesn't have schemas or types.
So to prevent us from dumping any kind of trash data in etcd,
we have to read/write to it through the API server.
The API server will enforce typing and consistency.
Etcd doesn't have schemas or types, but it has the ability to
watch a key or set of keys, meaning that it's possible to subscribe
to updates of objects.
The controller manager is a process that has a bunch of loops,
each one responsible for a specific type of object.
So there is one that will watch the deployments, and as soon
as we create, updated, delete a deployment, it will wake up
and do something about it.

slides/lisa/cni.md (new file)

@@ -0,0 +1,167 @@
class: title
Beyond kubenet
---
## When kubenet is not enough (1/2)
- IP address allocation is rigid
(one subnet per node)
- What about DHCP?
- What about e.g. ENI on AWS?
(allocating Elastic Network Interfaces to containers)
---
## When kubenet is not enough (2/2)
- Containers are connected to a Linux bridge
- What about:
- Open vSwitch
- VXLAN
- skipping layer 2
- using directly a network interface (macvlan, SR-IOV...)
---
## The Container Network Interface
- Allows us to decouple network configuration from Kubernetes
- Implemented by plugins
- Plugins are executables that will be invoked by kubelet
- Plugins are responsible for:
- allocating IP addresses for containers
- configuring the network for containers
- Plugins can be combined and chained when it makes sense
---
## Combining plugins
- Interface could be created by e.g. `vlan` or `bridge` plugin
- IP address could be allocated by e.g. `dhcp` or `host-local` plugin
- Interface parameters (MTU, sysctls) could be tweaked by the `tuning` plugin
The reference plugins are available [here].
Look in each plugin's directory for its documentation.
[here]: https://github.com/containernetworking/plugins/tree/master/plugins
---
## How plugins are invoked
- Parameters are given through environment variables, including:
- CNI_COMMAND: desired operation (ADD, DEL, CHECK, or VERSION)
- CNI_CONTAINERID: container ID
- CNI_NETNS: path to network namespace file
- CNI_IFNAME: what the network interface should be named
- The network configuration must be provided to the plugin on stdin
(this avoids race conditions that could happen by passing a file path)
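As an illustration, here is roughly how one could invoke the reference `bridge` plugin by hand (the paths, container ID, netns, and config values are assumptions for the sketch):

```bash
# Parameters are passed through environment variables...
export CNI_COMMAND=ADD
export CNI_CONTAINERID=demo
export CNI_NETNS=/var/run/netns/demo
export CNI_IFNAME=eth0
export CNI_PATH=/opt/cni/bin
# ...and the network configuration is provided on stdin
/opt/cni/bin/bridge <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "demo",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": { "type": "host-local", "subnet": "10.1.1.0/24" }
}
EOF
```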
---
## Setting up CNI
- We are going to use kube-router
- kube-router will provide the "pod network"
(connectivity with pods)
- kube-router will also provide internal service connectivity
(replacing kube-proxy)
- kube-router can also function as a Network Policy Controller
(implementing firewalling between pods)
---
## How kube-router works
- Very simple architecture
- Does not introduce new CNI plugins
(uses the `bridge` plugin, with `host-local` for IPAM)
- Pod traffic is routed between nodes
(no tunnel, no new protocol)
- Internal service connectivity is implemented with IPVS
- kube-router daemon runs on every node
---
## What kube-router does
- Connect to the API server
- Obtain the local node's `podCIDR`
- Inject it into the CNI configuration file
(we'll use `/etc/cni/net.d/10-kuberouter.conflist`)
- Obtain the addresses of all nodes
- Establish a *full mesh* BGP peering with the other nodes
- Exchange routes over BGP
- Add routes to the Linux kernel
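Once kube-router is running, we can check its work on any node (the file path matches the one above; route contents are cluster-specific):

```bash
# Routes to the other nodes' pod subnets, learned over BGP
ip route
# CNI configuration generated by kube-router
cat /etc/cni/net.d/10-kuberouter.conflist
```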
---
## What's BGP?
- BGP (Border Gateway Protocol) is the protocol used between internet routers
- It [scales](https://www.cidr-report.org/as2.0/)
pretty [well](https://www.cidr-report.org/cgi-bin/plota?file=%2fvar%2fdata%2fbgp%2fas2.0%2fbgp-active%2etxt&descr=Active%20BGP%20entries%20%28FIB%29&ylabel=Active%20BGP%20entries%20%28FIB%29&with=step)
(it is used to announce the 700k CIDR prefixes of the internet)
- It is spoken by many hardware routers from many vendors
- It also has many software implementations (Quagga, Bird, FRR...)
- Experienced network folks generally know it (and appreciate it)
- It is also used by Calico (another popular network system for Kubernetes)
- Using BGP allows us to interconnect our "pod network" with other systems
---
class: pic
![Demo time!](images/demo-with-kht.png)

slides/lisa/dmuc.md (new file)

@@ -0,0 +1,56 @@
class: title
Building a 1-node cluster
---
## Requirements
- Linux machine (x86_64)
2 GB RAM, 1 CPU is OK
- Root (for Docker and Kubelet)
- Binaries:
- etcd
- Kubernetes
- Docker
---
## What we will do
- Create a deployment
(with `kubectl create deployment`)
- Look for our pods
- If pods are created: victory
- Else: troubleshoot, try again
.footnote[*Note: the exact commands that I run will be available
in the slides of the tutorial.*]
---
class: pic
![Demo time!](images/demo-with-kht.png)
---
## What have we done?
- Started a basic Kubernetes control plane
(no authentication; many features are missing)
- Deployed a few pods

slides/lisa/end.md Normal file
@@ -0,0 +1,128 @@
class: title, talk-only
What's missing?
---
## What's missing?
- Mostly: security
- Notably: RBAC
- Also: availability
---
## TLS! TLS everywhere!
- Create certs for the control plane:
- etcd
- API server
- controller manager
- scheduler
- Create individual certs for nodes
- Create the service account key pair
---
## Service accounts
- The controller manager will generate tokens for service accounts
(these tokens are JWT, JSON Web Tokens, signed with a specific key)
- The API server will validate these tokens (with the matching key)
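The sign-then-validate round trip can be sketched in a few lines of Python. Note the simplification: this sketch uses HS256 (a shared secret) because it fits in the standard library, whereas Kubernetes signs with the *private* half of the service-account key pair (RS256) and the API server validates with the public half:

```python
import base64, hashlib, hmac, json

# Stand-in shared secret; real Kubernetes uses an RSA/ECDSA key pair.
SECRET = b"service-account-signing-key"

def b64url(data: bytes) -> str:
    # JWT uses base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(claims: dict) -> str:
    """Build header.payload.signature, like the controller manager does."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    mac = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256)
    return f"{header}.{payload}.{b64url(mac.digest())}"

def validate_token(token: str) -> bool:
    """Recompute the signature and compare, like the API server does."""
    header, payload, signature = token.split(".")
    mac = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256)
    return hmac.compare_digest(signature, b64url(mac.digest()))

token = sign_token({"sub": "system:serviceaccount:default:default"})
print(validate_token(token))  # → True
```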
---
## Nodes
- Enable NodeRestriction admission controller
  - authorizes kubelets to update only their own node and pod data
- Enable Node Authorizer
- prevents kubelets from accessing data that they shouldn't
  - e.g. only authorizes access to a ConfigMap if a pod on that node is using it
- Bootstrap tokens
  - add nodes to the cluster safely and dynamically
---
## Consequences of API server outage
- What happens if the API server goes down?
- kubelet will try to reconnect (as long as necessary)
- our apps will be just fine (but autoscaling will be broken)
- How can we improve the API server availability?
- redundancy (the API server is stateless)
- achieve a low MTTR
---
## Improving API server availability
- Redundancy implies adding an extra layer
(between API clients and servers)
- Multiple options available:
- external load balancer
- local load balancer (NGINX, HAProxy... on each node)
- DNS Round-Robin
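Because the API server is stateless, even a naive client-side scheme works: try each endpoint in turn until one answers. Here is a minimal sketch (the endpoints and the `flaky` backend are hypothetical):

```python
import itertools

class HAClient:
    """Round-robin failover across redundant, stateless API servers."""
    def __init__(self, endpoints):
        self._ring = itertools.cycle(endpoints)

    def request(self, call, attempts=3):
        last_error = None
        for _ in range(attempts):
            endpoint = next(self._ring)
            try:
                return call(endpoint)  # e.g. an HTTPS request to that server
            except ConnectionError as exc:
                last_error = exc       # try the next endpoint in the ring
        raise last_error

def flaky(endpoint):
    # Simulated backend: the first API server is down.
    if endpoint == "10.0.0.1:6443":
        raise ConnectionError(f"{endpoint} is down")
    return f"served by {endpoint}"

client = HAClient(["10.0.0.1:6443", "10.0.0.2:6443"])
print(client.request(flaky))  # → served by 10.0.0.2:6443
```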
---
## Achieving a low MTTR
- Run the control plane in highly available VMs
(e.g. many hypervisors can do that, with shared or mirrored storage)
- Run the control plane in highly available containers
(e.g. on another Kubernetes cluster)
---
class: title
Thank you!
---
## A word from my sponsor
- If you liked this presentation and would like me to train your team ...
Contact me: jerome.petazzoni@gmail.com
- Thank you! ♥️
- Also, slides👇🏻
![QR code to the slides](images/qrcode-lisa.png)

slides/lisa/env.md Normal file
@@ -0,0 +1,77 @@
class: title
Let's get this party started!
---
class: pic
![Oprah's "you get a car" picture](images/you-get-a-cluster.jpg)
---
## Everyone gets their own cluster
- Everyone should have a little printed card
- That card has IP address / login / password for a personal cluster
- That cluster will be up for the duration of the tutorial
(but not much longer, alas, because these cost $$$)
---
## How these clusters are deployed
- Create a bunch of cloud VMs
(today: Ubuntu 18.04 on AWS EC2)
- Install binaries, create user account
(with parallel-ssh because it's *fast*)
- Generate the little cards with a Jinja2 template
- If you want to do it for your own tutorial:
check the [prepare-vms](https://github.com/jpetazzo/container.training/tree/master/prepare-vms) directory in the training repo!
---
## Exercises
- Labs and exercises are clearly identified
.exercise[
- This indicates something that you are invited to do
- First, let's log into the first node of the cluster:
```bash
ssh docker@`A.B.C.D`
```
(Replace A.B.C.D with the IP address of the first node)
]
---
## Slides
- These slides are available online
.exercise[
- Open this slide deck in a local browser:
```open
@@SLIDES@@
```
- Select the tutorial link
- Type the number of that slide + ENTER
]

slides/lisa/kubenet.md Normal file
@@ -0,0 +1,39 @@
class: title
Adding more nodes
---
## What do we need to do?
- More machines!
- Can we "just" start kubelet on these machines?
--
- We need to update the kubeconfig file used by kubelet
- It currently uses `localhost:8080` for the API server
- We need to change that!
---
## What we will do
- Get more nodes
- Generate a new kubeconfig file
(pointing to the node running the API server)
- Start more kubelets
- Scale up our deployment
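The new kubeconfig only needs the standard clusters/users/contexts structure, pointing at the API server's address instead of `localhost:8080`. A minimal sketch (the address is a placeholder, and a real cluster would add TLS data; kubeconfig files are usually YAML, but JSON is shown here because it fits in the standard library and is also accepted):

```python
import json

def make_kubeconfig(api_server, user="kubelet"):
    """Build a minimal kubeconfig pointing at a remote API server."""
    return {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{"name": "local", "cluster": {"server": api_server}}],
        "users": [{"name": user, "user": {}}],
        "contexts": [{"name": "local",
                      "context": {"cluster": "local", "user": user}}],
        "current-context": "local",
    }

cfg = make_kubeconfig("http://10.0.0.1:8080")
print(json.dumps(cfg, indent=2))
```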
---
class: pic
![Demo time!](images/demo-with-kht.png)

slides/lisa/kubeproxy.md Normal file
@@ -0,0 +1,34 @@
class: title
Pod-to-service networking
---
## What we will do
- Create a service to connect to our pods
(with `kubectl expose deployment`)
- Try to connect to the service's ClusterIP
- If it works: victory
- Else: troubleshoot, try again
.footnote[*Note: the exact commands that I run will be available
in the slides of the tutorial.*]
---
class: pic
![Demo time!](images/demo-with-kht.png)
---
## What have we done?
- Started kube-proxy
- ... which created a bunch of iptables rules
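Those rules load-balance with the iptables `statistic` match: for N endpoints, rule i matches with probability 1/(N−i), which works out to a uniform 1/N per endpoint. A small Python model of that math (illustrative, not kube-proxy's code):

```python
import random

def iptables_probabilities(n):
    """Match probability of each rule in kube-proxy's per-service
    chain: rule i matches with probability 1/(n-i); the last rule
    (probability 1.0) catches whatever is left."""
    return [1 / (n - i) for i in range(n)]

def pick_endpoint(endpoints, rng=random):
    """Walk the rules in order, like a packet traversing the chain."""
    for endpoint, p in zip(endpoints, iptables_probabilities(len(endpoints))):
        if rng.random() < p:
            return endpoint

print([round(p, 2) for p in iptables_probabilities(3)])  # → [0.33, 0.5, 1.0]
```

For 3 endpoints: the first is picked with probability 1/3, the second with (2/3)×(1/2) = 1/3, the third with (2/3)×(1/2)×1 = 1/3.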


@@ -1,16 +1,32 @@
## Intros
- This slide should be customized by the tutorial instructor(s).
- Hello! We are:
- AJ ([@s0ulshake](https://twitter.com/s0ulshake))
- Jérôme ([@jpetazzo](https://twitter.com/jpetazzo))
- Sean ([@someara](https://twitter.com/someara))
- .emoji[👩🏻‍🏫] Ann O'Nymous ([@...](https://twitter.com/...), Megacorp Inc)
- The workshop will run from 9am to 5pm
- .emoji[👨🏾‍🎓] Stu Dent ([@...](https://twitter.com/...), University of Wakanda)
- There will be a lunch break at 12:30pm
<!-- .dummy[
(And coffee breaks at 10:30am and 3pm)
- .emoji[👷🏻‍♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)
- .emoji[🚁] Alexandre ([@alexbuisine](https://twitter.com/alexbuisine), Enix SAS)
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- .emoji[⛵] Jérémy ([@jeremygarrouste](twitter.com/jeremygarrouste), Inpiwee)
- .emoji[🎧] Romain ([@rdegez](https://twitter.com/rdegez), Enix SAS)
] -->
- The workshop will run from ...
- There will be a lunch break at ...
(And coffee breaks!)
- Feel free to interrupt for questions at any time


@@ -80,7 +80,7 @@ def flatten(titles):
def generatefromyaml(manifest, filename):
manifest = yaml.safe_load(manifest)
manifest = yaml.load(manifest)
markdown, titles = processchapter(manifest["chapters"], filename)
logging.debug("Found {} titles.".format(len(titles)))
@@ -111,7 +111,6 @@ def generatefromyaml(manifest, filename):
html = html.replace("@@GITREPO@@", manifest["gitrepo"])
html = html.replace("@@SLIDES@@", manifest["slides"])
html = html.replace("@@TITLE@@", manifest["title"].replace("\n", " "))
html = html.replace("@@SLIDENUMBERPREFIX@@", manifest.get("slidenumberprefix", ""))
return html
@@ -158,7 +157,7 @@ def processchapter(chapter, filename):
return processchapter(chapter.encode("utf-8"), filename)
if isinstance(chapter, str):
if "\n" in chapter:
titles = re.findall("^# (.*)", chapter, re.MULTILINE)
titles = [] # re.findall("^# (.*)", chapter, re.MULTILINE)
slidefooter = ".debug[{}]".format(makelink(filename))
chapter = chapter.replace("\n---\n", "\n{}\n---\n".format(slidefooter))
chapter += "\n" + slidefooter


@@ -4,12 +4,7 @@ class: in-person
.exercise[
- Log into the first VM (`node1`) with your SSH client:
```bash
ssh `user`@`A.B.C.D`
```
(Replace `user` and `A.B.C.D` with the user and IP address provided to you)
- Log into the first VM (`node1`) with your SSH client
<!--
```bash
@@ -23,13 +18,16 @@ done
```
-->
- Check that you can SSH (without password) to `node2`:
```bash
ssh node2
```
- Type `exit` or `^D` to come back to `node1`
<!-- ```bash exit``` -->
]
You should see a prompt looking like this:
```
[A.B.C.D] (...) user@node1 ~
$
```
If anything goes wrong — ask for help!
---
@@ -54,20 +52,6 @@ If anything goes wrong — ask for help!
---
## For a consistent Kubernetes experience ...
- If you are using your own Kubernetes cluster, you can use [shpod](https://github.com/jpetazzo/shpod)
- `shpod` provides a shell running in a pod on your own cluster
- It comes with many tools pre-installed (helm, stern...)
- These tools are used in many exercises in these slides
- `shpod` also gives you completion and a fancy prompt
---
class: self-paced
## Get your own Docker nodes


@@ -9,25 +9,3 @@ class: title, in-person
That's all, folks! <br/> Questions?
![end](images/end.jpg)
---
## Final words
Did you like that tutorial? Then:
1. Please [rate](https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/79109) it on the O'Reilly website
(your feedback is important to the conference organizers!)
2. Feel free to use, re-use, and share these slides
(they will remain online for at least a year)
3. Hire me to train your team, anywhere in the world
(contact me: **jerome.petazzoni@gmail.com**)
*Keep the little cards with the VM IP addresses.
The VMs will be shut down shortly, but the URL
of the slides and my e-mail address are on the cards.*


@@ -11,11 +11,11 @@ class: title, in-person
@@TITLE@@<br/></br>
.footnote[
**WiFi: OReillyCon** —
**Password: oreillycon19**
**Be kind to the WiFi!**<br/>
<!-- *Use the 5G network.* -->
*Don't use your hotspot.*<br/>
*Don't stream videos or download big files during the workshop[.](https://www.youtube.com/watch?v=h16zyxiwDLY)*<br/>
*Don't use your hotspot. Thank you!*
*Thank you!*
**Slides: @@SLIDES@@**
]

slides/talk.yml Normal file
@@ -0,0 +1,22 @@
title: |
Deep Dive into
Kubernetes Internals
for Builders and Operators
(LISA2019 talk)
chat: ""
gitrepo: ""
slides: ""
exclude:
- tutorial-only
chapters:
- lisa/begin.md
- k8s/deploymentslideshow.md
- lisa/dmuc.md
- lisa/kubeproxy.md
- lisa/kubenet.md
- lisa/cni.md
- lisa/end.md

slides/tutorial.yml Normal file
@@ -0,0 +1,22 @@
title: |
Deep Dive into
Kubernetes Internals
for Builders and Operators
(LISA2019 tutorial)
chat: ""
gitrepo: container.training
slides: https://lisa-2019-10.container.training
exclude:
- talk-only
chapters:
- lisa/begin.md
- k8s/deploymentslideshow.md
- lisa/env.md
- k8s/dmuc.md
- k8s/multinode.md
- k8s/cni.md
- lisa/end.md


@@ -109,8 +109,8 @@ div.pic p {
div.pic img {
display: block;
margin: auto;
max-width: 1210px;
max-height: 550px;
max-width: 100%;
max-height: 100%;
}
div.pic h1, div.pic h2, div.title h1, div.title h2 {
text-align: center;


@@ -28,7 +28,8 @@
var slideshow = remark.create({
ratio: '16:9',
highlightSpans: true,
slideNumberFormat: '@@SLIDENUMBERPREFIX@@%current%/%total%',
slideNumberFormat: '#LISA19 — @jpetazzo — %current%/%total%',
countIncrementalSlides: false,
excludedClasses: [@@EXCLUDE@@]
});
</script>