Compare commits

..

12 Commits

Author SHA1 Message Date
Jerome Petazzoni bf88262ceb fix-redirects.sh: adding forced redirect 2020-04-07 16:46:17 -05:00
Jerome Petazzoni e65b4df0d8 Reorg content 2020-01-21 14:56:25 -06:00
Jerome Petazzoni c0aa09d5b5 Add timing info 2020-01-21 13:56:52 -06:00
Jerome Petazzoni a265f32650 WiFi 2020-01-21 01:07:02 -06:00
Jerome Petazzoni a751ee2360 Merge branch 'master' into 2020-01-zr 2020-01-20 14:24:08 -06:00
Jerome Petazzoni 578331f850 Update logistics + chatroom 2020-01-20 03:32:15 -06:00
Jerome Petazzoni cbb12c06bf Merge branch 'master' into 2020-01-zr 2020-01-20 02:52:31 -06:00
Jerome Petazzoni 9c2535861a Update TOC 2020-01-16 13:28:10 -06:00
Jerome Petazzoni 7ea7c0400b Merge branch 'master' into 2020-01-zr 2020-01-16 07:28:22 -06:00
Jerome Petazzoni 9fe97b8792 Prep content according to Ran's instructions 2020-01-13 17:29:24 -06:00
Jerome Petazzoni db86d79de1 Merge master 2020-01-13 16:38:16 -06:00
Jerome Petazzoni e925b52827 Prep content 2019-11-28 10:20:37 -06:00
64 changed files with 1293 additions and 3287 deletions

View File

@@ -1,15 +0,0 @@
apiVersion: apiextensions.k8s.io/v1alpha1
kind: CustomResourceDefinition
metadata:
name: coffees.container.training
spec:
group: container.training
version: v1alpha1
scope: Namespaced
names:
plural: coffees
singular: coffee
kind: Coffee
shortNames:
- cof

View File

@@ -1,32 +0,0 @@
apiVersion: apiextensions.k8s.io/v1alpha1
kind: CustomResourceDefinition
metadata:
name: coffees.container.training
spec:
group: container.training
version: v1alpha1
scope: Namespaced
names:
plural: coffees
singular: coffee
kind: Coffee
shortNames:
- cof
additionalPrinterColumns:
- JSONPath: .spec.taste
description: Subjective taste of that kind of coffee bean
name: Taste
type: string
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
validation:
openAPIV3Schema:
properties:
spec:
required:
- taste
properties:
taste:
description: Subjective taste of that kind of coffee bean
type: string

View File

@@ -1,29 +0,0 @@
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: arabica
spec:
taste: strong
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: robusta
spec:
taste: stronger
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: liberica
spec:
taste: smoky
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: excelsa
spec:
taste: fruity

View File

@@ -13,7 +13,7 @@ spec:
mountPath: /usr/share/nginx/html/
- name: git
image: alpine
command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ]
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/

View File

@@ -61,6 +61,6 @@ TAG=$PREFIX-$SETTINGS
--count $((3*$STUDENTS))
./workshopctl deploy $TAG
./workshopctl kube $TAG 1.16.6
./workshopctl kube $TAG 1.14.6
./workshopctl cards $TAG

View File

@@ -1,69 +0,0 @@
title: |
Jour 1
Fondamentaux
Conteneurs & Docker
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/enix/formation-highfive-202002)"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-02-enix.container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/toc.md
-
- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Naming_And_Inspecting.md
#- containers/Labels.md
- containers/Getting_Inside.md
#- containers/Resource_Limits.md
- containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
-
- containers/Container_Networking_Basics.md
#- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
#- containers/Ambassadors.md
- containers/Local_Development_Workflow.md
#- containers/Windows_Containers.md
#- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
#- containers/Docker_Machine.md
#- containers/Advanced_Dockerfiles.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Container_Engines.md
#- containers/Ecosystem.md
#- containers/Orchestration_Overview.md
-
- shared/thankyou.md
- containers/links.md

View File

@@ -1,57 +0,0 @@
title: |
Jour 2
Fondamentaux
Orchestration
& Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/enix/formation-highfive-202002)"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-02-enix.container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectlrun.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
-
- k8s/shippingimages.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
- k8s/yamldeploy.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
-
- k8s/rollout.md
#- k8s/dryrun.md
- k8s/healthchecks.md
#- k8s/healthchecks-more.md
#- k8s/record.md
#- k8s/dashboard.md
- k8s/ingress.md
-
- shared/thankyou.md

View File

@@ -1,81 +0,0 @@
title: |
Jour 3
Méthodologies DevOps
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/enix/formation-highfive-202002)"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-02-enix.container.training/
exclude:
- self-paced
- hide-exercise
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- shared/prereqs.md
- shared/connecting.md
# Bien démarrer en local (minikube, kind)
- shared/sampleapp.md
- k8s/software-dev-banalities.md
- k8s/on-desktop.md
- k8s/volumes.md
- k8s/namespaces.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/testing.md
-
- k8s/configuration.md
- k8s/sealed-secrets.md
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-secrets.md
-
- k8s/shippingimages.md
- k8s/registries.md
- k8s/stop-manual.md
- k8s/ci-cd.md
- k8s/exercise-ci-build.md
- k8s/kaniko.md
- k8s/exercise-ci-kaniko.md
- k8s/rollout.md
- k8s/advanced-rollout.md
- k8s/devs-and-ops-joined-topics.md
-
- k8s/prometheus-endpoint.md
- k8s/exercise-prometheus.md
- k8s/opentelemetry.md
- k8s/exercise-opentelemetry.md
- k8s/kubernetes-security.md
#- |
# # (Automatiser)
#- |
# # Fabrication d'image
#- |
# # Skaffold
#- |
# # Registries
#- |
# # Gitlab, CI
#- |
# # ROllout avancé, blue green, canary
#- |
# # Monitoring applicatif
#- |
# # Prometheus Grafana
#- |
# # Telemetry
-
- shared/thankyou.md

View File

@@ -1,40 +0,0 @@
title: |
Jour 4
Kubernetes Avancé
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/enix/formation-highfive-202002)"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-02-enix.container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- k8s/netpol.md
- k8s/authn-authz.md
-
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
-
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
-
- k8s/prometheus.md
- k8s/logs-centralized.md
- k8s/extending-api.md
- k8s/operators.md
#- k8s/operators-design.md
-
- shared/thankyou.md

View File

@@ -1,42 +0,0 @@
title: |
Jour 5
Opérer Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/enix/formation-highfive-202002)"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-02-enix.container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
-
- k8s/multinode.md
- k8s/cni.md
-
- k8s/apilb.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md
- k8s/cluster-backup.md
- k8s/staticpods.md
-
- k8s/control-plane-auth.md
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/podsecuritypolicy.md
-
- shared/thankyou.md

View File

@@ -2,7 +2,7 @@
#/ /kube-halfday.yml.html 200
#/ /kube-fullday.yml.html 200
#/ /kube-twodays.yml.html 200
/ /menu.html 200!
/ /kube.yml.html 200!
# And this allows us to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack

View File

@@ -1,5 +0,0 @@
# Exercise -- write a simple pipeline
Let's create a simple pipeline with GitLab.
The code is at: https://github.com/enix/kubecoin-build
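For orientation, a minimal GitLab pipeline tends to look like the sketch below. This is a generic illustration, not taken from the kubecoin-build repository: the stage layout, job names, and the `deployment/web` target are assumptions (only `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHORT_SHA` are standard GitLab CI variables).

```yaml
# Hypothetical minimal .gitlab-ci.yml: build and push an image, then deploy it.
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:stable
  services:
    - docker:dind            # Docker-in-Docker, so we can run `docker build`
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  image: bitnami/kubectl     # assumes the runner has cluster credentials
  script:
    - kubectl set image deployment/web web=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
```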

View File

@@ -1,76 +0,0 @@
# Advanced Rollout
- In some cases, the built-in mechanisms of Kubernetes are not enough
- We may want more control over the rollout: feedback from monitoring, deployment
on multiple clusters, etc.
- Two "main" strategies exist here:
  - canary deployment
  - blue/green deployment
---
## Canary deployment
- focus on one component of the stack
- deploy a new version of the component next to the production one
- redirect a portion of production traffic to the new version
- scale up the new version and redirect more traffic, checking that everything is OK
- scale down the old version
- move from component to component with the same procedure
- That's what Kubernetes does by default, but it does it for every component at the same time
- Could be paired with `kubectl wait --for` and applying components sequentially,
for a hand-made canary deployment
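The hand-made approach can be sketched with two Deployments behind one Service; the names, labels, images, and replica counts below are illustrative assumptions, not part of the exercises.

```yaml
# Hypothetical sketch: stable and canary tracks for one component.
# Both Deployments carry the label app=web, so the Service sends traffic
# to both; the replica ratio (9:1) controls the approximate traffic split.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9
  selector:
    matchLabels: { app: web, track: stable }
  template:
    metadata:
      labels: { app: web, track: stable }
    spec:
      containers:
      - name: web
        image: web:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels: { app: web, track: canary }
  template:
    metadata:
      labels: { app: web, track: canary }
    spec:
      containers:
      - name: web
        image: web:v2
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web   # matches both tracks; scale the Deployments to shift traffic
  ports:
  - port: 80
```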
---
## Blue/Green deployment
- focus on the entire stack
- deploy a new stack
- check that the new stack works as expected
- put traffic on the new stack, and roll back if anything goes wrong
- garbage collect the previous infrastructure
- there is nothing like that by default in Kubernetes
- a Helm chart with multiple releases is the closest equivalent
- could be paired with ingress features like `nginx.ingress.kubernetes.io/canary-*`
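With ingress-nginx, those annotations can split traffic at the ingress instead of by replica count. The sketch below is a hypothetical example (hostname and service names are placeholders), assuming the ingress-nginx controller is installed:

```yaml
# Hypothetical sketch: a second Ingress marked as "canary" receives
# 10% of the traffic for the same host; the main Ingress (not shown)
# keeps serving the remaining 90% from the stable service.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-v2
          servicePort: 80
```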
---
## Not hand-made?
There are a few additional controllers that help achieve these kinds of rollout behaviors.
They leverage the Kubernetes API at different levels to achieve this goal.
---
## Spinnaker
- https://www.spinnaker.io
- Helps deploy the same app on multiple clusters
- Can analyze rollout status (canary analysis) and correlate it with monitoring
- Rolls back if anything goes wrong
- Also supports blue/green
- Configuration is done via the UI
---
## Argo Rollouts
- https://github.com/argoproj/argo-rollouts
- Replaces your Deployments with a "deployment-like" CRD (Custom Resource Definition)
- Full control via CRDs
- Blue/green and canary deployments
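As an illustration (not from this repo), a canary strategy with Argo Rollouts is declared in the Rollout resource itself; the names, image, and weights below are placeholders:

```yaml
# Hypothetical Argo Rollouts resource: send 20% of traffic to the new
# version, pause for a manual check, then let the controller finish.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: web:v2
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause: {}      # waits until the rollout is manually promoted
```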

View File

@@ -1,51 +0,0 @@
## Jenkins / Jenkins-X
- Multi-purpose CI
- Self-hosted CI for Kubernetes
- creates a namespace per commit and applies manifests in that namespace
<br/>
"A deployment per feature branch"
.small[
```shell
curl -L "https://github.com/jenkins-x/jx/releases/download/v2.0.1103/jx-darwin-amd64.tar.gz" | tar xzv jx
./jx boot
```
]
---
## GitLab
- Repository + registry + CI/CD integrated all-in-one
```shell
helm repo add gitlab https://charts.gitlab.io/
helm install gitlab gitlab/gitlab
```
---
## ArgoCD / flux
- Watch a git repository and apply changes to kubernetes
- provide UI to see changes, rollback
.small[
```shell
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
]
---
## Tekton / knative
- Knative is a serverless project from Google
- Tekton leverages Knative to run pipelines
- not really user-friendly today, but stay tuned for wrappers/products

View File

@@ -360,7 +360,3 @@ docker run --rm --net host -v $PWD:/vol \
- [kube-backup](https://github.com/pieterlange/kube-backup)
simple scripts to save resource YAML to a git repository
- [bivac](https://github.com/camptocamp/bivac)
Backup Interface for Volumes Attached to Containers

View File

@@ -154,7 +154,7 @@ class: extra-details
- "Running Kubernetes without nodes"
- Systems like [Virtual Kubelet](https://virtual-kubelet.io/) or [Kiyot](https://static.elotl.co/docs/latest/kiyot/kiyot.html) can run pods using on-demand resources
- Systems like [Virtual Kubelet](https://virtual-kubelet.io/) or Kiyot can run pods using on-demand resources
- Virtual Kubelet can leverage e.g. ACI or Fargate to run pods

View File

@@ -81,7 +81,7 @@
## What version are we running anyway?
- When I say, "I'm running Kubernetes 1.16", is that the version of:
- When I say, "I'm running Kubernetes 1.11", is that the version of:
- kubectl
@@ -139,73 +139,6 @@
---
## Important questions
- Should we upgrade the control plane before or after the kubelets?
- Within the control plane, should we upgrade the API server first or last?
- How often should we upgrade?
- How long are versions maintained?
- All the answers are in [the documentation about version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/)!
- Let's review the key elements together ...
---
## Kubernetes uses semantic versioning
- Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.17.2:
- MAJOR = 1
- MINOR = 17
- PATCH = 2
- It's always possible to mix and match different PATCH releases
(e.g. 1.16.1 and 1.16.6 are compatible)
- It is recommended to run the latest PATCH release
(but it's mandatory only when there is a security advisory)
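The MAJOR.MINOR.PATCH decomposition can be checked with plain shell parameter expansion; the version string below is just an example value:

```bash
# Split a Kubernetes version string into its semantic-versioning parts.
VERSION=1.17.2
MAJOR=${VERSION%%.*}   # strip everything after the first dot  -> 1
REST=${VERSION#*.}     # strip up to the first dot             -> 17.2
MINOR=${REST%%.*}      # strip everything after the next dot   -> 17
PATCH=${VERSION##*.}   # strip everything up to the last dot   -> 2
echo "MAJOR=$MAJOR MINOR=$MINOR PATCH=$PATCH"
```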
---
## Version skew
- API server must be more recent than its clients (kubelet and control plane)
- ... Which means it must always be upgraded first
- All components support a difference of one¹ MINOR version
- This allows live upgrades (since we can mix e.g. 1.15 and 1.16)
- It also means that going from 1.14 to 1.16 requires going through 1.15
.footnote[¹Except kubelet, which can be up to two MINOR behind API server,
and kubectl, which can be one MINOR ahead or behind API server.]
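The kubelet rule above can be summed up in a tiny shell check; the version numbers here are arbitrary examples (kubelet may be zero to two MINOR versions behind the API server):

```bash
# Is this kubelet within the supported skew of this API server?
API_MINOR=17
KUBELET_MINOR=15
SKEW=$((API_MINOR - KUBELET_MINOR))
if [ "$SKEW" -ge 0 ] && [ "$SKEW" -le 2 ]; then
  echo "supported (skew=$SKEW)"
else
  echo "unsupported (skew=$SKEW)"
fi
```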
---
## Release cycle
- There is a new PATCH release whenever necessary
(every few weeks, or "ASAP" when there is a security vulnerability)
- There is a new MINOR release every 3 months (approximately)
- At any given time, three MINOR releases are maintained
- ... Which means that MINOR releases are maintained for approximately 9 months
- We should expect to upgrade at least every 3 months (on average)
---
## In practice
- We are going to update a few cluster components
@@ -218,6 +151,47 @@ and kubectl, which can be one MINOR ahead or behind API server.]
---
## Updating kubelet
- These nodes have been installed using the official Kubernetes packages
- We can therefore use `apt` or `apt-get`
.exercise[
- Log into node `test3`
- View available versions for package `kubelet`:
```bash
apt show kubelet -a | grep ^Version
```
- Upgrade kubelet:
```bash
sudo apt install kubelet=1.15.3-00
```
]
---
## Checking what we've done
.exercise[
- Log into node `test1`
- Check node versions:
```bash
kubectl get nodes -o wide
```
- Create a deployment and scale it to make sure that the node still works
]
---
## Updating the API server
- This cluster has been deployed with kubeadm
@@ -254,7 +228,7 @@ and kubectl, which can be one MINOR ahead or behind API server.]
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
```
- Look for the `image:` line, and update it to e.g. `v1.17.0`
- Look for the `image:` line, and update it to e.g. `v1.15.0`
]
@@ -275,27 +249,9 @@ and kubectl, which can be one MINOR ahead or behind API server.]
---
## Was that a good idea?
--
**No!**
--
- Remember the guideline we gave earlier:
*To update a component, use whatever was used to install it.*
- This control plane was deployed with kubeadm
- We should use kubeadm to upgrade it!
---
## Updating the whole control plane
- Let's make it right, and use kubeadm to upgrade the entire control plane
- As an example, we'll use kubeadm to upgrade the entire control plane
(note: this is possible only because the cluster was installed with kubeadm)
@@ -308,11 +264,11 @@ and kubectl, which can be one MINOR ahead or behind API server.]
]
Note 1: kubeadm thinks that our cluster is running 1.17.0.
Note 1: kubeadm thinks that our cluster is running 1.15.0.
<br/>It is confused by our manual upgrade of the API server!
Note 2: kubeadm itself is still version 1.16.6.
<br/>It doesn't know how to upgrade to 1.17.X.
Note 2: kubeadm itself is still version 1.14.6.
<br/>It doesn't know how to upgrade to 1.15.X.
---
@@ -334,8 +290,8 @@ Note 2: kubeadm itself is still version 1.16.6.
]
Note: kubeadm still thinks that our cluster is running 1.17.0.
<br/>But at least it knows about version 1.17.X now.
Note: kubeadm still thinks that our cluster is running 1.15.0.
<br/>But at least it knows about version 1.15.X now.
---
@@ -351,89 +307,28 @@ Note: kubeadm still thinks that our cluster is running 1.17.0.
- Perform the upgrade:
```bash
sudo kubeadm upgrade apply v1.17.2
sudo kubeadm upgrade apply v1.15.3
```
]
---
## Updating kubelet
## Updating kubelets
- These nodes have been installed using the official Kubernetes packages
- After updating the control plane, we need to update each kubelet
- We can therefore use `apt` or `apt-get`
- This requires running a special command on each node, to download the config
.exercise[
- Log into node `test3`
- View available versions for package `kubelet`:
```bash
apt show kubelet -a | grep ^Version
```
- Upgrade kubelet:
```bash
sudo apt install kubelet=1.17.2-00
```
]
---
## Checking what we've done
.exercise[
- Log into node `test1`
- Check node versions:
```bash
kubectl get nodes -o wide
```
- Create a deployment and scale it to make sure that the node still works
]
---
## Was that a good idea?
--
**Almost!**
--
- Yes, kubelet was installed with distribution packages
- However, kubeadm took care of configuring kubelet
(when doing `kubeadm join ...`)
- We were supposed to run a special command *before* upgrading kubelet!
- That command should be executed on each node
- It will download the kubelet configuration generated by kubeadm
---
## Upgrading kubelet the right way
- The command that we need to run was shown by kubeadm
(after upgrading the control plane)
(this config is generated by kubeadm)
.exercise[
- Download the configuration on each node, and upgrade kubelet:
```bash
for N in 1 2 3; do
ssh test$N sudo kubeadm upgrade node config --kubelet-version v1.17.2
ssh test$N sudo apt install kubelet=1.17.2-00
ssh test$N sudo kubeadm upgrade node config --kubelet-version v1.15.3
ssh test$N sudo apt install kubelet=1.15.3-00
done
```
]
@@ -442,7 +337,7 @@ Note: kubeadm still thinks that our cluster is running 1.17.0.
## Checking what we've done
- All our nodes should now be updated to version 1.17.2
- All our nodes should now be updated to version 1.15.3
.exercise[
@@ -459,12 +354,12 @@ class: extra-details
## Skipping versions
- This example worked because we went from 1.16 to 1.17
- This example worked because we went from 1.14 to 1.15
- If you are upgrading from e.g. 1.14, you will have to go through 1.15 first
- If you are upgrading from e.g. 1.13, you will generally have to go through 1.14 first
- This means upgrading kubeadm to 1.15.X, then using it to upgrade the cluster
- This means upgrading kubeadm to 1.14.X, then using it to upgrade the cluster
- Then upgrading kubeadm to 1.16.X, etc.
- Then upgrading kubeadm to 1.15.X, etc.
- **Make sure to read the release notes before upgrading!**

View File

@@ -28,7 +28,7 @@ The reference plugins are available [here].
Look in each plugin's directory for its documentation.
[here]: https://github.com/containernetworking/plugins
[here]: https://github.com/containernetworking/plugins/tree/master/plugins
---

View File

@@ -10,29 +10,6 @@
---
## What can we do with Kubernetes?
- Let's imagine that we have a 3-tier e-commerce app:
- web frontend
- API backend
- database (that we will keep out of Kubernetes for now)
- We have built images for our frontend and backend components
(e.g. with Dockerfiles and `docker build`)
- We are running them successfully with a local environment
(e.g. with Docker Compose)
- Let's see how we would deploy our app on Kubernetes!
---
## Basic things we can ask Kubernetes to do
--

114
slides/k8s/create-chart.md Normal file
View File

@@ -0,0 +1,114 @@
## Creating a chart
- We are going to show a way to create a *very simplified* chart
- In a real chart, *lots of things* would be templatized
(Resource names, service types, number of replicas...)
.exercise[
- Create a sample chart:
```bash
helm create dockercoins
```
- Move away the sample templates and create an empty template directory:
```bash
mv dockercoins/templates dockercoins/default-templates
mkdir dockercoins/templates
```
]
---
## Exporting the YAML for our application
- The following section assumes that DockerCoins is currently running
.exercise[
- Create one YAML file for each resource that we need:
.small[
```bash
while read kind name; do
kubectl get -o yaml $kind $name > dockercoins/templates/$name-$kind.yaml
done <<EOF
deployment worker
deployment hasher
daemonset rng
deployment webui
deployment redis
service hasher
service rng
service webui
service redis
EOF
```
]
]
---
## Testing our helm chart
.exercise[
- Let's install our helm chart! (`dockercoins` is the path to the chart)
```
helm install dockercoins
```
]
--
- Since the application is already deployed, this will fail:<br>
`Error: release loitering-otter failed: services "hasher" already exists`
- To avoid naming conflicts, we will deploy the application in another *namespace*
---
## Switching to another namespace
- We can create a new namespace and switch to it
(Helm will automatically use the namespace specified in our context)
- We can also tell Helm which namespace to use
.exercise[
- Tell Helm to use a specific namespace:
```bash
helm install dockercoins --namespace=magenta
```
]
---
## Checking our new copy of DockerCoins
- We can check the worker logs, or the web UI
.exercise[
- Retrieve the NodePort number of the web UI:
```bash
kubectl get service webui --namespace=magenta
```
- Open it in a web browser
- Look at the worker logs:
```bash
kubectl logs deploy/worker --tail=10 --follow --namespace=magenta
```
]
Note: it might take a minute or two for the worker to start.

View File

@@ -0,0 +1,367 @@
# Creating Helm charts
- We are going to create a generic Helm chart
- We will use that Helm chart to deploy DockerCoins
- Each component of DockerCoins will have its own *release*
- In other words, we will "install" that Helm chart multiple times
(one time per component of DockerCoins)
---
## Creating a generic chart
- Rather than starting from scratch, we will use `helm create`
- This will give us a basic chart that we will customize
.exercise[
- Create a basic chart:
```bash
cd ~
helm create helmcoins
```
]
This creates a basic chart in the directory `helmcoins`.
---
## What's in the basic chart?
- The basic chart will create a Deployment and a Service
- Optionally, it will also include an Ingress
- If we don't pass any values, it will deploy the `nginx` image
- We can override many things in that chart
- Let's try to deploy DockerCoins components with that chart!
---
## Writing `values.yaml` for our components
- We need to write one `values.yaml` file for each component
(hasher, redis, rng, webui, worker)
- We will start with the `values.yaml` of the chart, and remove what we don't need
- We will create 5 files:
hasher.yaml, redis.yaml, rng.yaml, webui.yaml, worker.yaml
---
## Getting started
- For component X, we want to use the image dockercoins/X:v0.1
(for instance, for rng, we want to use the image dockercoins/rng:v0.1)
- Exception: for redis, we want to use the official image redis:latest
.exercise[
- Write minimal YAML files for the 5 components, specifying only the image
]
--
*Hint: our YAML files should look like this.*
```yaml
### rng.yaml
image:
repository: dockercoins/`rng`
tag: v0.1
```
---
## Deploying DockerCoins components
- For convenience, let's work in a separate namespace
.exercise[
- Create a new namespace:
```bash
kubectl create namespace helmcoins
```
- Switch to that namespace:
```bash
kns helmcoins
```
]
---
## Deploying the chart
- To install a chart, we can use the following command:
```bash
helm install [--name `X`] <chart>
```
- We can also use the following command, which is idempotent:
```bash
helm upgrade --install `X` chart
```
.exercise[
- Install the 5 components of DockerCoins:
```bash
for COMPONENT in hasher redis rng webui worker; do
helm upgrade --install $COMPONENT helmcoins/ --values=$COMPONENT.yaml
done
```
]
---
## Checking what we've done
- Let's see if DockerCoins is working!
.exercise[
- Check the logs of the worker:
```bash
stern worker
```
- Look at the resources that were created:
```bash
kubectl get all
```
]
There are *many* issues to fix!
---
## Service names
- Our services should be named `rng`, `hasher`, etc., but they are named differently
- Look at the YAML template used for the services
- Does it look like we can override the name of the services?
--
- *Yes*, we can use `.Values.nameOverride`
- This means setting `nameOverride` in the values YAML file
---
## Setting service names
- Let's add `nameOverride: X` in each values YAML file!
(where X is hasher, redis, rng, etc.)
.exercise[
- Edit the 5 YAML files to add `nameOverride: X`
- Deploy the updated Chart:
```bash
for COMPONENT in hasher redis rng webui worker; do
helm upgrade --install $COMPONENT helmcoins/ --values=$COMPONENT.yaml
done
```
(Yes, this is exactly the same command as before!)
]
---
## Checking what we've done
.exercise[
- Check the service names:
```bash
kubectl get services
```
Great! (We have a useless service for `worker`, but let's ignore it for now.)
- Check the state of the pods:
```bash
kubectl get pods
```
Not so great... Some pods are *not ready.*
]
---
## Troubleshooting pods
- The easiest way to troubleshoot pods is to look at *events*
- We can look at all the events on the cluster (with `kubectl get events`)
- Or we can use `kubectl describe` on the objects that have problems
(`kubectl describe` will retrieve the events related to the object)
.exercise[
- Check the events for the redis pods:
```bash
kubectl describe pod -l app.kubernetes.io/name=redis
```
]
What's going on?
---
## Healthchecks
- The default chart defines healthchecks doing HTTP requests on port 80
- That won't work for redis and worker
(redis is not HTTP, and not on port 80; worker doesn't even listen)
--
- We could comment out the healthchecks
- We could also make them conditional
- This sounds more interesting, let's do that!
---
## Conditionals
- We need to enclose the healthcheck block with:
`{{ if CONDITION }}` at the beginning
`{{ end }}` at the end
- For the condition, we will use `.Values.healthcheck`
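As a sketch, the relevant part of `templates/deployment.yaml` would end up looking roughly like this; the probe details come from the chart generated by `helm create` and may differ slightly:

```yaml
# Excerpt: probes are only rendered when the release sets
# healthcheck=true in its values file.
{{ if .Values.healthcheck }}
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
{{ end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
```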
---
## Updating the deployment template
.exercise[
- Edit `helmcoins/templates/deployment.yaml`
- Before the healthchecks section (it starts with `livenessProbe:`), add:
`{{ if .Values.healthcheck }}`
- After the healthchecks section (just before `resources:`), add:
`{{ end }}`
- Edit `hasher.yaml`, `rng.yaml`, `webui.yaml` to add:
`healthcheck: true`
]
---
## Update the deployed charts
- We can now apply the new templates (and the new values)
.exercise[
- Use the same command as earlier to upgrade all five components
- Use `kubectl describe` to confirm that `redis` starts correctly
- Use `kubectl describe` to confirm that `hasher` still has healthchecks
]
---
## Is it working now?
- If we look at the worker logs, it appears that the worker is still stuck
- What could be happening?
--
- The redis service is not on port 80!
- We need to update the port number in redis.yaml
- We also need to update the port number in deployment.yaml
(it is hard-coded to 80 there)
---
## Setting the redis port
.exercise[
- Edit `redis.yaml` to add:
```yaml
service:
port: 6379
```
- Edit `helmcoins/templates/deployment.yaml`
- The line with `containerPort` should be:
```yaml
containerPort: {{ .Values.service.port }}
```
]
---
## Apply changes
- Re-run the for loop to execute `helm upgrade` one more time
- Check the worker logs
- This time, it should be working!
---
## Extra steps
- We don't need to create a service for the worker
- We can put the whole service block in a conditional
(this will require additional changes in other files referencing the service)
- We can set the webui to be a NodePort service
- We can change the number of workers with `replicaCount`
- And much more!

View File

@@ -52,7 +52,7 @@
<!-- ##VERSION## -->
- Unfortunately, as of Kubernetes 1.17, the CLI cannot create daemon sets
- Unfortunately, as of Kubernetes 1.15, the CLI cannot create daemon sets
--
@@ -427,7 +427,7 @@ class: extra-details
- We need to change the selector of the `rng` service!
- Let's add another label to that selector (e.g. `active=yes`)
- Let's add another label to that selector (e.g. `enabled=yes`)
---
@@ -445,11 +445,11 @@ class: extra-details
## The plan
1. Add the label `active=yes` to all our `rng` pods
1. Add the label `enabled=yes` to all our `rng` pods
2. Update the selector for the `rng` service to also include `active=yes`
2. Update the selector for the `rng` service to also include `enabled=yes`
3. Toggle traffic to a pod by manually adding/removing the `active` label
3. Toggle traffic to a pod by manually adding/removing the `enabled` label
4. Profit!
@@ -464,7 +464,7 @@ be any interruption.*
## Adding labels to pods
- We want to add the label `active=yes` to all pods that have `app=rng`
- We want to add the label `enabled=yes` to all pods that have `app=rng`
- We could edit each pod one by one with `kubectl edit` ...
@@ -474,9 +474,9 @@ be any interruption.*
.exercise[
- Add `active=yes` to all pods that have `app=rng`:
- Add `enabled=yes` to all pods that have `app=rng`:
```bash
kubectl label pods -l app=rng active=yes
kubectl label pods -l app=rng enabled=yes
```
]
@@ -495,7 +495,7 @@ be any interruption.*
.exercise[
- Update the service to add `active: yes` to its selector:
- Update the service to add `enabled: yes` to its selector:
```bash
kubectl edit service rng
```
@@ -504,7 +504,7 @@ be any interruption.*
```wait Please edit the object below```
```keys /app: rng```
```key ^J```
```keys noactive: yes```
```keys noenabled: yes```
```key ^[``` ]
```keys :wq```
```key ^J```
@@ -530,7 +530,7 @@ be any interruption.*
- If we want the string `"42"` or the string `"yes"`, we have to quote them
- So we have to use `active: "yes"`
- So we have to use `enabled: "yes"`
.footnote[For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!]
@@ -542,7 +542,7 @@ be any interruption.*
- Update the YAML manifest of the service
- Add `active: "yes"` to its selector
- Add `enabled: "yes"` to its selector
<!--
```wait Please edit the object below```
@@ -566,7 +566,7 @@ If we did everything correctly, the web UI shouldn't show any change.
- We want to disable the pod that was created by the deployment
- All we have to do, is remove the `active` label from that pod
- All we have to do, is remove the `enabled` label from that pod
- To identify that pod, we can use its name
@@ -600,7 +600,7 @@ If we did everything correctly, the web UI shouldn't show any change.
- In another window, remove the label from the pod:
```bash
kubectl label pod -l app=rng,pod-template-hash active-
kubectl label pod -l app=rng,pod-template-hash enabled-
```
(The stream of HTTP logs should stop immediately)
@@ -623,7 +623,7 @@ class: extra-details
- If we scale up our cluster by adding new nodes, the daemon set will create more pods
- These pods won't have the `active=yes` label
- These pods won't have the `enabled=yes` label
- If we want these pods to have that label, we need to edit the daemon set spec

View File

@@ -1,10 +0,0 @@
## We are done, what else?
We have seen what it means to develop an application on Kubernetes.
There are still a few subjects to tackle that are not purely developer topics.
They have *some relevance* for developers nonetheless:
- Monitoring
- Security

View File

@@ -1,5 +0,0 @@
## Exercise - building with Kubernetes
- Let's go to https://github.com/enix/kubecoin
- Our goal is to follow the instructions and complete exercise #1

View File

@@ -1,3 +0,0 @@
## Exercise - building with kaniko
Complete exercise #2 (again, code at: https://github.com/enix/kubecoin)

View File

@@ -1,5 +0,0 @@
## Exercise - monitoring with OpenTelemetry
Complete exercise #5 (again, code at: https://github.com/enix/kubecoin)
*Note: not all daemons are "ready" for OpenTelemetry; only `rng` and `worker` are.*

View File

@@ -1,5 +0,0 @@
## Exercise - monitoring with Prometheus
Complete exercise #4 (again, code at: https://github.com/enix/kubecoin)
*Note: not all daemons are "ready" for Prometheus; only `hasher` and `redis` are.*

View File

@@ -8,8 +8,6 @@ We are going to cover:
- Admission Webhooks
- The Aggregation Layer
---
## Revisiting the API server
@@ -48,90 +46,6 @@ We are going to cover:
---
## A very simple CRD
The YAML below describes a very simple CRD representing different kinds of coffee:
```yaml
apiVersion: apiextensions.k8s.io/v1alpha1
kind: CustomResourceDefinition
metadata:
name: coffees.container.training
spec:
group: container.training
version: v1alpha1
scope: Namespaced
names:
plural: coffees
singular: coffee
kind: Coffee
shortNames:
- cof
```
---
## Creating a CRD
- Let's create the Custom Resource Definition for our Coffee resource
.exercise[
- Load the CRD:
```bash
kubectl apply -f ~/container.training/k8s/coffee-1.yaml
```
- Confirm that it shows up:
```bash
kubectl get crds
```
]
---
## Creating custom resources
The YAML below defines a resource using the CRD that we just created:
```yaml
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: arabica
spec:
taste: strong
```
.exercise[
- Create a few types of coffee beans:
```bash
kubectl apply -f ~/container.training/k8s/coffees.yaml
```
]
---
## Viewing custom resources
- By default, `kubectl get` only shows name and age of custom resources
.exercise[
- View the coffee beans that we just created:
```bash
kubectl get coffees
```
]
- We can improve that, but it's outside the scope of this section!
---
## What can we do with CRDs?
There are many possibilities!
@@ -151,7 +65,7 @@ There are many possibilities!
- Replacing built-in types with CRDs
(see [this lightning talk by Tim Hockin](https://www.youtube.com/watch?v=ji0FWzFwNhA))
(see [this lightning talk by Tim Hockin](https://www.youtube.com/watch?v=ji0FWzFwNhA&index=2&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU))
---
@@ -167,7 +81,7 @@ There are many possibilities!
- Generally, when creating a CRD, we also want to run a *controller*
(otherwise nothing will happen when we create resources of that type)
(otherwise nothing will happen when we create resources of that type)
- The controller will typically *watch* our custom resources
@@ -181,22 +95,6 @@ Examples:
---
## (Ab)using the API server
- If we need to store something "safely" (as in: in etcd), we can use CRDs
- This gives us primitives to read/write/list objects (and optionally validate them)
- The Kubernetes API server can run on its own
(without the scheduler, controller manager, and kubelets)
- By loading CRDs, we can have it manage totally different objects
(unrelated to containers, clusters, etc.)
---
## Service catalog
- *Service catalog* is another extension mechanism
@@ -211,7 +109,7 @@ Examples:
- ClusterServiceClass
- ClusterServicePlan
- ServiceInstance
- ServiceBinding
- ServiceBinding
- It uses the Open service broker API
@@ -219,13 +117,17 @@ Examples:
## Admission controllers
- Admission controllers are another way to extend the Kubernetes API
- When a Pod is created, it is associated with a ServiceAccount
- Instead of creating new types, admission controllers can transform or vet API requests
(even if we did not specify one explicitly)
- The diagram on the next slide shows the path of an API request
- That ServiceAccount was added on the fly by an *admission controller*
(courtesy of Banzai Cloud)
(specifically, a *mutating admission controller*)
- Admission controllers sit on the API request path
(see the cool diagram on next slide, courtesy of Banzai Cloud)
---
@@ -235,7 +137,7 @@ class: pic
---
## Types of admission controllers
## Admission controllers
- *Validating* admission controllers can accept/reject the API call
@@ -249,27 +151,7 @@ class: pic
(see [documentation](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#what-does-each-admission-controller-do) for a list)
- We can also dynamically define and register our own
---
class: extra-details
## Some built-in admission controllers
- ServiceAccount:
automatically adds a ServiceAccount to Pods that don't explicitly specify one
- LimitRanger:
applies resource constraints specified by LimitRange objects when Pods are created
- NamespaceAutoProvision:
automatically creates namespaces when an object is created in a non-existent namespace
*Note: #1 and #2 are enabled by default; #3 is not.*
- But we can also define our own!
---
@@ -309,25 +191,19 @@ class: extra-details
---
## The aggregation layer
## (Ab)using the API server
- We can delegate entire parts of the Kubernetes API to external servers
- If we need to store something "safely" (as in: in etcd), we can use CRDs
- This is done by creating APIService resources
- This gives us primitives to read/write/list objects (and optionally validate them)
(check them with `kubectl get apiservices`!)
- The Kubernetes API server can run on its own
- The APIService resource maps a type (kind) and version to an external service
(without the scheduler, controller manager, and kubelets)
- All requests concerning that type are sent (proxied) to the external service
- By loading CRDs, we can have it manage totally different objects
- This allows us to have resources similar to CRDs, but that aren't stored in etcd
- Example: `metrics-server`
(storing live metrics in etcd would be extremely inefficient)
- Requires significantly more work than CRDs!
(unrelated to containers, clusters, etc.)
---
@@ -342,5 +218,3 @@ class: extra-details
- [Built-in Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/)
- [Dynamic Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/)
- [Aggregation Layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)

View File

@@ -1,239 +0,0 @@
# Helm chart format
- What exactly is a chart?
- What's in it?
- What would be involved in creating a chart?
(we won't create a chart, but we'll see the required steps)
---
## What is a chart
- A chart is a set of files
- Some of these files are mandatory for the chart to be viable
(more on that later)
- These files are typically packed in a tarball
- These tarballs are stored in "repos"
(which can be static HTTP servers)
- We can install from a repo, from a local tarball, or an unpacked tarball
(the latter option is preferred when developing a chart)
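To make this concrete, here is a hand-made sketch of packing a chart directory into a repo-style tarball (normally `helm package` does this; all names below are made up for illustration):

```bash
# A chart tarball is just a gzipped tar of the chart directory.
# (Normally produced with `helm package`; done by hand here to show
# that there is no magic involved.)
mkdir -p mychart/templates
cat > mychart/Chart.yaml <<'EOF'
apiVersion: v2
name: mychart
version: 0.1.0
EOF
echo 'replicaCount: 1' > mychart/values.yaml
tar czf mychart-0.1.0.tgz mychart
tar tzf mychart-0.1.0.tgz
```

Serving a directory of such tarballs (plus an `index.yaml`, generated with `helm repo index`) over static HTTP is all it takes to make a repo.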
---
## What's in a chart
- A chart must have at least:
- a `templates` directory, with YAML manifests for Kubernetes resources
- a `values.yaml` file, containing (tunable) parameters for the chart
- a `Chart.yaml` file, containing metadata (name, version, description ...)
- Let's look at a simple chart, `stable/tomcat`
---
## Downloading a chart
- We can use `helm pull` to download a chart from a repo
.exercise[
- Download the tarball for `stable/tomcat`:
```bash
helm pull stable/tomcat
```
(This will create a file named `tomcat-X.Y.Z.tgz`.)
- Or, download + untar `stable/tomcat`:
```bash
helm pull stable/tomcat --untar
```
(This will create a directory named `tomcat`.)
]
---
## Looking at the chart's content
- Let's look at the files and directories in the `tomcat` chart
.exercise[
- Display the tree structure of the chart we just downloaded:
```bash
tree tomcat
```
]
We see the components mentioned above: `Chart.yaml`, `templates/`, `values.yaml`.
---
## Templates
- The `templates/` directory contains YAML manifests for Kubernetes resources
(Deployments, Services, etc.)
- These manifests can contain template tags
(using the standard Go template library)
.exercise[
- Look at the template file for the tomcat Service resource:
```bash
cat tomcat/templates/appsrv-svc.yaml
```
]
---
## Analyzing the template file
- Tags are identified by `{{ ... }}`
- `{{ template "x.y" }}` expands a [named template](https://helm.sh/docs/chart_template_guide/named_templates/#declaring-and-using-templates-with-define-and-template)
(previously defined with `{{ define "x.y "}}...stuff...{{ end }}`)
- The `.` in `{{ template "x.y" . }}` is the *context* for that named template
(so that the named template block can access variables from the local context)
- `{{ .Release.xyz }}` refers to [built-in variables](https://helm.sh/docs/chart_template_guide/builtin_objects/) initialized by Helm
(indicating the chart name, version, whether we are installing or upgrading ...)
- `{{ .Values.xyz }}` refers to tunable/settable [values](https://helm.sh/docs/chart_template_guide/values_files/)
(more on that in a minute)
---
## Values
- Each chart comes with a
[values file](https://helm.sh/docs/chart_template_guide/values_files/)
- It's a YAML file containing a set of default parameters for the chart
- The values can be accessed in templates with e.g. `{{ .Values.x.y }}`
(corresponding to field `y` in map `x` in the values file)
- The values can be set or overridden when installing or upgrading a chart:
- with `--set x.y=z` (can be used multiple times to set multiple values)
- with `--values some-yaml-file.yaml` (set a bunch of values from a file)
- Charts following best practices will have values following specific patterns
(e.g. having a `service` map that allows setting `service.type`, etc.)
---
## Other useful tags
- `{{ if x }} y {{ end }}` allows including `y` if `x` evaluates to `true`
(can be used for e.g. healthchecks, annotations, or even an entire resource)
- `{{ range x }} y {{ end }}` iterates over `x`, evaluating `y` each time
(the elements of `x` are assigned to `.` in the range scope)
- `{{- x }}`/`{{ x -}}` will remove whitespace on the left/right
- The whole [Sprig](http://masterminds.github.io/sprig/) library, with additions:
`lower` `upper` `quote` `trim` `default` `b64enc` `b64dec` `sha256sum` `indent` `toYaml` ...
---
## Pipelines
- `{{ quote blah }}` can also be expressed as `{{ blah | quote }}`
- With multiple arguments, `{{ x y z }}` can be expressed as `{{ z | x y }}`
- Example: `{{ .Values.annotations | toYaml | indent 4 }}`
- transforms the map under `annotations` into a YAML string
- indents it with 4 spaces (to match the surrounding context)
- Pipelines are not specific to Helm, but a feature of Go templates
(check the [Go text/template documentation](https://golang.org/pkg/text/template/) for more details and examples)
---
## README and NOTES.txt
- At the top-level of the chart, it's a good idea to have a README
- It will be viewable with e.g. `helm show readme stable/tomcat`
- In the `templates/` directory, we can also have a `NOTES.txt` file
- When the template is installed (or upgraded), `NOTES.txt` is processed too
(i.e. its `{{ ... }}` tags are evaluated)
- It gets displayed after the install or upgrade
- It's a great place to generate messages to tell the user:
- how to connect to the release they just deployed
- any passwords or other thing that we generated for them
---
## Additional files
- We can place arbitrary files in the chart (outside of the `templates/` directory)
- They can be accessed in templates with `.Files`
- They can be transformed into ConfigMaps or Secrets with `AsConfig` and `AsSecrets`
(see [this example](https://helm.sh/docs/chart_template_guide/accessing_files/#configmap-and-secrets-utility-functions) in the Helm docs)
---
## Hooks and tests
- We can define *hooks* in our templates
- Hooks are resources annotated with `"helm.sh/hook": NAME-OF-HOOK`
- Hook names include `pre-install`, `post-install`, `test`, [and much more](https://helm.sh/docs/topics/charts_hooks/#the-available-hooks)
- The resources defined in hooks are loaded at a specific time
- Hook execution is *synchronous*
(if the resource is a Job or Pod, Helm will wait for its completion)
- This can be used for database migrations, backups, notifications, smoke tests ...
- Hooks named `test` are executed only when running `helm test RELEASE-NAME`

View File

@@ -1,220 +0,0 @@
# Creating a basic chart
- We are going to show a way to create a *very simplified* chart
- In a real chart, *lots of things* would be templatized
(Resource names, service types, number of replicas...)
.exercise[
- Create a sample chart:
```bash
helm create dockercoins
```
- Move away the sample templates and create an empty template directory:
```bash
mv dockercoins/templates dockercoins/default-templates
mkdir dockercoins/templates
```
]
---
## Exporting the YAML for our application
- The following section assumes that DockerCoins is currently running
- If DockerCoins is not running, see next slide
.exercise[
- Create one YAML file for each resource that we need:
.small[
```bash
while read kind name; do
kubectl get -o yaml $kind $name > dockercoins/templates/$name-$kind.yaml
done <<EOF
deployment worker
deployment hasher
daemonset rng
deployment webui
deployment redis
service hasher
service rng
service webui
service redis
EOF
```
]
]
---
## Obtaining DockerCoins YAML
- If DockerCoins is not running, we can also obtain the YAML from a public repository
.exercise[
- Clone the kubercoins repository:
```bash
git clone https://github.com/jpetazzo/kubercoins
```
- Copy the YAML files to the `templates/` directory:
```bash
cp kubercoins/*.yaml dockercoins/templates/
```
]
---
## Testing our helm chart
.exercise[
- Let's install our helm chart!
```
helm install helmcoins dockercoins
```
(`helmcoins` is the name of the release; `dockercoins` is the local path of the chart)
]
--
- Since the application is already deployed, this will fail:
```
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: existing resource conflict:
kind: Service, namespace: default, name: hasher
```
- To avoid naming conflicts, we will deploy the application in another *namespace*
---
## Switching to another namespace
- We need to create a new namespace
(Helm 2 creates namespaces automatically; Helm 3 doesn't anymore)
- We need to tell Helm which namespace to use
.exercise[
- Create a new namespace:
```bash
kubectl create namespace helmcoins
```
- Deploy our chart in that namespace:
```bash
helm install helmcoins dockercoins --namespace=helmcoins
```
]
---
## Helm releases are namespaced
- Let's try to see the release that we just deployed
.exercise[
- List Helm releases:
```bash
helm list
```
]
Our release doesn't show up!
We have to specify its namespace (or switch to that namespace).
---
## Specifying the namespace
- Try again, with the correct namespace
.exercise[
- List Helm releases in `helmcoins`:
```bash
helm list --namespace=helmcoins
```
]
---
## Checking our new copy of DockerCoins
- We can check the worker logs, or the web UI
.exercise[
- Retrieve the NodePort number of the web UI:
```bash
kubectl get service webui --namespace=helmcoins
```
- Open it in a web browser
- Look at the worker logs:
```bash
kubectl logs deploy/worker --tail=10 --follow --namespace=helmcoins
```
]
Note: it might take a minute or two for the worker to start.
---
## Discussion, shortcomings
- Helm (and Kubernetes) best practices recommend adding a number of annotations
(e.g. `app.kubernetes.io/name`, `helm.sh/chart`, `app.kubernetes.io/instance` ...)
- Our basic chart doesn't have any of these
- Our basic chart doesn't use any template tag
- Does it make sense to use Helm in that case?
- *Yes,* because Helm will:
- track the resources created by the chart
- save successive revisions, allowing us to rollback
[Helm docs](https://helm.sh/docs/topics/chart_best_practices/labels/)
and [Kubernetes docs](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/)
have details about recommended annotations and labels.
---
## Cleaning up
- Let's remove that chart before moving on
.exercise[
- Delete the release (don't forget to specify the namespace):
```bash
helm delete helmcoins --namespace=helmcoins
```
]

View File

@@ -1,579 +0,0 @@
# Creating better Helm charts
- We are going to create a chart with the helper `helm create`
- This will give us a chart implementing lots of Helm best practices
(labels, annotations, structure of the `values.yaml` file ...)
- We will use that chart as a generic Helm chart
- We will use it to deploy DockerCoins
- Each component of DockerCoins will have its own *release*
- In other words, we will "install" that Helm chart multiple times
(one time per component of DockerCoins)
---
## Creating a generic chart
- Rather than starting from scratch, we will use `helm create`
- This will give us a basic chart that we will customize
.exercise[
- Create a basic chart:
```bash
cd ~
helm create helmcoins
```
]
This creates a basic chart in the directory `helmcoins`.
---
## What's in the basic chart?
- The basic chart will create a Deployment and a Service
- Optionally, it will also include an Ingress
- If we don't pass any values, it will deploy the `nginx` image
- We can override many things in that chart
- Let's try to deploy DockerCoins components with that chart!
---
## Writing `values.yaml` for our components
- We need to write one `values.yaml` file for each component
(hasher, redis, rng, webui, worker)
- We will start with the `values.yaml` of the chart, and remove what we don't need
- We will create 5 files:
hasher.yaml, redis.yaml, rng.yaml, webui.yaml, worker.yaml
- In each file, we want to have:
```yaml
image:
repository: IMAGE-REPOSITORY-NAME
tag: IMAGE-TAG
```
---
## Getting started
- For component X, we want to use the image dockercoins/X:v0.1
(for instance, for rng, we want to use the image dockercoins/rng:v0.1)
- Exception: for redis, we want to use the official image redis:latest
.exercise[
- Write YAML files for the 5 components, with the following model:
```yaml
image:
repository: `IMAGE-REPOSITORY-NAME` (e.g. dockercoins/worker)
tag: `IMAGE-TAG` (e.g. v0.1)
```
]
---
## Deploying DockerCoins components
- For convenience, let's work in a separate namespace
.exercise[
- Create a new namespace (if it doesn't already exist):
```bash
kubectl create namespace helmcoins
```
- Switch to that namespace:
```bash
kns helmcoins
```
]
---
## Deploying the chart
- To install a chart, we can use the following command:
```bash
helm install COMPONENT-NAME CHART-DIRECTORY
```
- We can also use the following command, which is idempotent:
```bash
helm upgrade COMPONENT-NAME CHART-DIRECTORY --install
```
.exercise[
- Install the 5 components of DockerCoins:
```bash
for COMPONENT in hasher redis rng webui worker; do
helm upgrade $COMPONENT helmcoins --install --values=$COMPONENT.yaml
done
```
]
---
## Checking what we've done
- Let's see if DockerCoins is working!
.exercise[
- Check the logs of the worker:
```bash
stern worker
```
- Look at the resources that were created:
```bash
kubectl get all
```
]
There are *many* issues to fix!
---
## Can't pull image
- It looks like our images can't be found
.exercise[
- Use `kubectl describe` on any of the pods in error
]
- We're trying to pull `rng:1.16.0` instead of `rng:v0.1`!
- Where does that `1.16.0` tag come from?
---
## Inspecting our template
- Let's look at the `templates/` directory
(and try to find the one generating the Deployment resource)
.exercise[
- Show the structure of the `helmcoins` chart that Helm generated:
```bash
tree helmcoins
```
- Check the file `helmcoins/templates/deployment.yaml`
- Look for the `image:` parameter
]
*The image tag references `{{ .Chart.AppVersion }}`. Where does that come from?*
---
## The `.Chart` variable
- `.Chart` is a map corresponding to the values in `Chart.yaml`
- Let's look for `AppVersion` there!
.exercise[
- Check the file `helmcoins/Chart.yaml`
- Look for the `appVersion:` parameter
]
(Yes, the case is different between the template and the Chart file.)
---
## Using the correct tags
- If we change `AppVersion` to `v0.1`, it will change for *all* deployments
(including redis)
- Instead, let's change the *template* to use `{{ .Values.image.tag }}`
(to match what we've specified in our values YAML files)
.exercise[
- Edit `helmcoins/templates/deployment.yaml`
- Replace `{{ .Chart.AppVersion }}` with `{{ .Values.image.tag }}`
]
---
## Upgrading to use the new template
- Technically, we just made a new version of the *chart*
- To use the new template, we need to *upgrade* the release to use that chart
.exercise[
- Upgrade all components:
```bash
for COMPONENT in hasher redis rng webui worker; do
helm upgrade $COMPONENT helmcoins
done
```
- Check how our pods are doing:
```bash
kubectl get pods
```
]
We should see all pods "Running". But ... not all of them are READY.
---
## Troubleshooting readiness
- `hasher`, `rng`, `webui` should show up as `1/1 READY`
- But `redis` and `worker` should show up as `0/1 READY`
- Why?
---
## Troubleshooting pods
- The easiest way to troubleshoot pods is to look at *events*
- We can look at all the events on the cluster (with `kubectl get events`)
- Or we can use `kubectl describe` on the objects that have problems
(`kubectl describe` will retrieve the events related to the object)
.exercise[
- Check the events for the redis pods:
```bash
kubectl describe pod -l app.kubernetes.io/name=redis
```
]
It's failing both its liveness and readiness probes!
---
## Healthchecks
- The default chart defines healthchecks doing HTTP requests on port 80
- That won't work for redis and worker
(redis is not HTTP, and not on port 80; worker doesn't even listen)
--
- We could remove or comment out the healthchecks
- We could also make them conditional
- This sounds more interesting, let's do that!
---
## Conditionals
- We need to enclose the healthcheck block with:
`{{ if false }}` at the beginning (we can change the condition later)
`{{ end }}` at the end
.exercise[
- Edit `helmcoins/templates/deployment.yaml`
- Add `{{ if false }}` on the line before `livenessProbe`
- Add `{{ end }}` after the `readinessProbe` section
(see next slide for details)
]
---
This is what the new YAML should look like (added lines in yellow):
```yaml
ports:
- name: http
containerPort: 80
protocol: TCP
`{{ if false }}`
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
`{{ end }}`
resources:
{{- toYaml .Values.resources | nindent 12 }}
```
---
## Testing the new chart
- We need to upgrade all the services again to use the new chart
.exercise[
- Upgrade all components:
```bash
for COMPONENT in hasher redis rng webui worker; do
helm upgrade $COMPONENT helmcoins
done
```
- Check how our pods are doing:
```bash
kubectl get pods
```
]
Everything should now be running!
---
## What's next?
- Is this working now?
.exercise[
- Let's check the logs of the worker:
```bash
stern worker
```
]
This error might look familiar ... The worker can't resolve `redis`.
Typically, that error means that the `redis` service doesn't exist.
---
## Checking services
- What about the services created by our chart?
.exercise[
- Check the list of services:
```bash
kubectl get services
```
]
They are named `COMPONENT-helmcoins` instead of just `COMPONENT`.
We need to change that!
---
## Where do the service names come from?
- Look at the YAML template used for the services
- It should be using `{{ include "helmcoins.fullname" }}`
- `include` indicates a *template block* defined somewhere else
.exercise[
- Find where that `fullname` thing is defined:
```bash
grep define.*fullname helmcoins/templates/*
```
]
It should be in `_helpers.tpl`.
We can look at the definition, but it's fairly complex ...
---
## Changing service names
- Instead of that `{{ include }}` tag, let's use the name of the release
- The name of the release is available as `{{ .Release.Name }}`
.exercise[
- Edit `helmcoins/templates/service.yaml`
- Replace the service name with `{{ .Release.Name }}`
- Upgrade all the releases to use the new chart
- Confirm that the services now have the right names
]
---
## Is it working now?
- If we look at the worker logs, it appears that the worker is still stuck
- What could be happening?
--
- The redis service is not on port 80!
- Let's see how the port number is set
- We need to look at both the *deployment* template and the *service* template
---
## Service template
- In the service template, we have the following section:
```yaml
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
```
- `port` is the port on which the service is "listening"
(i.e. to which our code needs to connect)
- `targetPort` is the port on which the pods are listening
- The `name` is not important (it's OK if it's `http` even for non-HTTP traffic)
---
## Setting the redis port
- Let's add a `service.port` value to the redis release
.exercise[
- Edit `redis.yaml` to add:
```yaml
service:
port: 6379
```
- Apply the new values file:
```bash
helm upgrade redis helmcoins --values=redis.yaml
```
]
---
## Deployment template
- If we look at the deployment template, we see this section:
```yaml
ports:
- name: http
containerPort: 80
protocol: TCP
```
- The container port is hard-coded to 80
- We'll change it to use the port number specified in the values
---
## Changing the deployment template
.exercise[
- Edit `helmcoins/templates/deployment.yaml`
- The line with `containerPort` should be:
```yaml
containerPort: {{ .Values.service.port }}
```
]
---
## Apply changes
- Re-run the for loop to execute `helm upgrade` one more time
- Check the worker logs
- This time, it should be working!
---
## Extra steps
- We don't need to create a service for the worker
- We can put the whole service block in a conditional
(this will require additional changes in other files referencing the service)
- We can set the webui to be a NodePort service
- We can change the number of workers with `replicaCount`
- And much more!

View File

@@ -1,234 +0,0 @@
# Helm secrets
- Helm can do *rollbacks*:
- to previously installed charts
- to previous sets of values
- How and where does it store the data needed to do that?
- Let's investigate!
---
## We need a release
- We need to install something with Helm
- Let's use the `stable/tomcat` chart as an example
.exercise[
- Install a release called `tomcat` with the chart `stable/tomcat`:
```bash
helm upgrade tomcat stable/tomcat --install
```
- Let's upgrade that release, and change a value:
```bash
helm upgrade tomcat stable/tomcat --set ingress.enabled=true
```
]
---
## Release history
- Helm stores successive revisions of each release
.exercise[
- View the history for that release:
```bash
helm history tomcat
```
]
Where does that come from?
---
## Investigate
- Possible options:
- local filesystem (no, because history is visible from other machines)
- persistent volumes (no, Helm works even without them)
- ConfigMaps, Secrets?
.exercise[
- Look for ConfigMaps and Secrets:
```bash
kubectl get configmaps,secrets
```
]
--
We should see a number of secrets with TYPE `helm.sh/release.v1`.
---
## Unpacking a secret
- Let's find out what is in these Helm secrets
.exercise[
- Examine the secret corresponding to the second release of `tomcat`:
```bash
kubectl describe secret sh.helm.release.v1.tomcat.v2
```
(`v1` is the secret format; `v2` means revision 2 of the `tomcat` release)
]
There is a key named `release`.
---
## Unpacking the release data
- Let's see what's in this `release` thing!
.exercise[
- Dump the secret:
```bash
kubectl get secret sh.helm.release.v1.tomcat.v2 \
-o go-template='{{ .data.release }}'
```
]
Secrets are encoded in base64. We need to decode that!
---
## Decoding base64
- We can pipe the output through `base64 -d` or use go-template's `base64decode`
.exercise[
- Decode the secret:
```bash
kubectl get secret sh.helm.release.v1.tomcat.v2 \
-o go-template='{{ .data.release | base64decode }}'
```
]
--
... Wait, this *still* looks like base64. What's going on?
--
Let's try one more round of decoding!
---
## Decoding harder
- Just add one more base64 decode filter
.exercise[
- Decode it twice:
```bash
kubectl get secret sh.helm.release.v1.tomcat.v2 \
-o go-template='{{ .data.release | base64decode | base64decode }}'
```
]
--
... OK, that was *a lot* of binary data. What should we do with it?
---
## Guessing data type
- We could use `file` to figure out the data type
.exercise[
- Pipe the decoded release through `file -`:
```bash
kubectl get secret sh.helm.release.v1.tomcat.v2 \
-o go-template='{{ .data.release | base64decode | base64decode }}' \
| file -
```
]
--
Gzipped data! It can be decoded with `gunzip -c`.
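The whole encoding stack can be reproduced locally, without a cluster (the payload below is a made-up stand-in, not real release data):

```bash
# Round-trip through the same layers Helm uses: the release data is
# gzipped and base64-encoded by Helm, and the Secret adds another
# base64 layer on top.
payload='{"name":"tomcat","version":2}'
encoded=$(printf '%s' "$payload" | gzip -c | base64 | base64)
# Decoding reverses the three layers:
printf '%s' "$encoded" | base64 -d | base64 -d | gunzip -c
```

The last command should print the original payload back.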
---
## Uncompressing the data
- Let's uncompress the data and save it to a file
.exercise[
- Rerun the previous command, but with `| gunzip -c > release-info` :
```bash
kubectl get secret sh.helm.release.v1.tomcat.v2 \
-o go-template='{{ .data.release | base64decode | base64decode }}' \
| gunzip -c > release-info
```
- Look at `release-info`:
```bash
cat release-info
```
]
--
It's a bundle of ~~YAML~~ JSON.
---
## Looking at the JSON
If we inspect that JSON (e.g. with `jq keys release-info`), we see:
- `chart` (contains the entire chart used for that release)
- `config` (contains the values that we've set)
- `info` (date of deployment, status messages)
- `manifest` (YAML generated from the templates)
- `name` (name of the release, so `tomcat`)
- `namespace` (namespace where we deployed the release)
- `version` (revision number within that release; starts at 1)
The chart is in a structured format, but it's entirely captured in this JSON.
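If `jq` is not available, Python's standard library can list those keys as well (the `release-info` file below is fabricated so that the example is self-contained):

```bash
# List the top-level keys of the release JSON without jq.
# (This fabricates a minimal release-info; on a real cluster, use the
# file extracted from the Helm secret instead.)
printf '%s' '{"chart":{},"config":{},"info":{},"manifest":"","name":"tomcat","namespace":"default","version":2}' > release-info
python3 -c 'import json; print(sorted(json.load(open("release-info"))))'
```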
---
## Conclusions
- Helm stores each release information in a Secret in the namespace of the release
- The secret is a JSON object (gzipped and encoded in base64)
- It contains the manifests generated for that release
- ... And everything needed to rebuild these manifests
(including the full source of the chart, and the values used)
- This allows arbitrary rollbacks, as well as tweaking values even without having access to the source of the chart (or the chart repo) used for deployment

View File

@@ -314,7 +314,7 @@ class: extra-details
- List all the resources created by this release:
```bash
kubectl get all --selector=release=java4ever
kuectl get all --selector=release=java4ever
```
]
@@ -416,4 +416,4 @@ All unspecified values will take the default values defined in the chart.
curl localhost:$PORT/sample/
```
]
]

View File

@@ -120,13 +120,19 @@
- We want our ingress load balancer to be available on port 80
- The best way to do that would be with a `LoadBalancer` service
- We could do that with a `LoadBalancer` service
... but it requires support from the underlying infrastructure
- Instead, we are going to use the `hostNetwork` mode on the Traefik pods
- We could use pods specifying `hostPort: 80`
- Let's see what this `hostNetwork` mode is about ...
... but with most CNI plugins, this [doesn't work or requires additional setup](https://github.com/kubernetes/kubernetes/issues/23920)
- We could use a `NodePort` service
... but that requires [changing the `--service-node-port-range` flag in the API server](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/)
- Last resort: the `hostNetwork` mode
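As a sketch of what that last option looks like (container name and image tag are illustrative, not taken from the actual Traefik manifests), the relevant part of the pod spec would be:

```yaml
spec:
  hostNetwork: true        # the pod shares the node's network namespace
  containers:
  - name: traefik
    image: traefik:1.7
    ports:
    - name: http
      containerPort: 80    # bound directly on the node's port 80
```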
---
@@ -164,26 +170,6 @@
---
class: extra-details
## Other techniques to expose port 80
- We could use pods specifying `hostPort: 80`
... but with most CNI plugins, this [doesn't work or requires additional setup](https://github.com/kubernetes/kubernetes/issues/23920)
- We could use a `NodePort` service
... but that requires [changing the `--service-node-port-range` flag in the API server](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/)
- We could create a service with an external IP
... this would work, but would require a few extra steps
(figuring out the IP address and adding it to the service)
---
## Running Traefik
- The [Traefik documentation](https://docs.traefik.io/user-guide/kubernetes/#deploy-trfik-using-a-deployment-or-daemonset) tells us to pick between Deployment and Daemon Set

View File

@@ -1,34 +0,0 @@
## Privileged container
- Running a privileged container can be really harmful for the node it runs on.
- Getting control of a node could expose other containers in the cluster, and the cluster itself
- It's even worse when it is Docker that runs in this privileged container
- `docker build` doesn't allow running privileged containers to build layers
- but nothing forbids running `docker run --privileged`
---
## Kaniko
- https://github.com/GoogleContainerTools/kaniko
- *kaniko doesn't depend on a Docker daemon and executes each command
within a Dockerfile completely in userspace*
- Kaniko is only a build system; unlike Docker, it doesn't include a runtime
- it generates OCI-compatible images, which can run on Docker or any other CRI runtime
- it uses a different caching system than Docker
---
## Rootless docker and rootless buildkit
- This is experimental
- It has a lot of requirements (kernel parameters and options to set)
- But it exists

View File

@@ -1,76 +1,20 @@
# Exposing containers
- We can connect to our pods using their IP address
- `kubectl expose` creates a *service* for existing pods
- Then we need to figure out a lot of things:
- A *service* is a stable address for a pod (or a bunch of pods)
- how do we look up the IP address of the pod(s)?
- If we want to connect to our pod(s), we need to create a *service*
- how do we connect from outside the cluster?
- Once a service is created, CoreDNS will allow us to resolve it by name
- how do we load balance traffic?
(i.e. after creating service `hello`, the name `hello` will resolve to something)
- what if a pod fails?
- Kubernetes has a resource type named *Service*
- Services address all these questions!
---
## Services in a nutshell
- Services give us a *stable endpoint* to connect to a pod or a group of pods
- An easy way to create a service is to use `kubectl expose`
- If we have a deployment named `my-little-deploy`, we can run:
`kubectl expose deployment my-little-deploy --port=80`
... and this will create a service with the same name (`my-little-deploy`)
- Services are automatically added to an internal DNS zone
(in the example above, our code can now connect to http://my-little-deploy/)
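Under the hood, `kubectl expose` generates a Service object roughly like the one below (assuming the deployment's pods carry the `app=my-little-deploy` label, which is what `kubectl create deployment` sets):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-little-deploy
spec:
  selector:
    app: my-little-deploy   # traffic goes to pods carrying this label
  ports:
  - port: 80                # port exposed by the service
    targetPort: 80          # port the container listens on
```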
---
## Advantages of services
- We don't need to look up the IP address of the pod(s)
(we resolve the IP address of the service using DNS)
- There are multiple service types; some of them allow external traffic
(e.g. `LoadBalancer` and `NodePort`)
- Services provide load balancing
(for both internal and external traffic)
- Service addresses are independent from pods' addresses
(when a pod fails, the service seamlessly sends traffic to its replacement)
---
## Many kinds and flavors of service
- There are different types of services:
- There are different types of services, detailed on the following slides:
`ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName`
- There are also *headless services*
- Services can also have optional *external IPs*
- There is also another resource type called *Ingress*
(specifically for HTTP services)
- Wow, that's a lot! Let's start with the basics ...
- HTTP services can also use `Ingress` resources (more on that later)
---
@@ -129,6 +73,24 @@
---
class: extra-details
## `ExternalName`
- No load balancer (internal or external) is created
- Only a DNS entry gets added to the DNS managed by Kubernetes
- That DNS entry will just be a `CNAME` to a provided record
Example:
```bash
kubectl create service externalname k8s --external-name kubernetes.io
```
*Creates a CNAME `k8s` pointing to `kubernetes.io`*
---
## Running containers with open ports
- Since `ping` doesn't have anything to connect to, we'll have to run something else
@@ -213,7 +175,9 @@
- As a result: you *have to* indicate the port number for your service
(with some exceptions, like `ExternalName` or headless services, covered later)
- Running services with arbitrary port (or port ranges) requires hacks
(e.g. host networking mode)
---
@@ -254,48 +218,7 @@ Try it a few times! Our requests are load balanced across multiple pods.
class: extra-details
## `ExternalName`
- Services of type `ExternalName` are quite different
- No load balancer (internal or external) is created
- Only a DNS entry gets added to the DNS managed by Kubernetes
- That DNS entry will just be a `CNAME` to a provided record
Example:
```bash
kubectl create service externalname k8s --external-name kubernetes.io
```
*Creates a CNAME `k8s` pointing to `kubernetes.io`*
---
class: extra-details
## External IPs
- We can add an External IP to a service, e.g.:
```bash
kubectl expose deploy my-little-deploy --port=80 --external-ip=1.2.3.4
```
- `1.2.3.4` should be the address of one of our nodes
(it could also be a virtual address, service address, or VIP, shared by multiple nodes)
- Connections to `1.2.3.4:80` will be sent to our service
- External IPs will also show up on services of type `LoadBalancer`
(they will be added automatically by the process provisioning the load balancer)
---
class: extra-details
## Headless services
## If we don't need a load balancer
- Sometimes, we want to access our scaled services directly:
@@ -315,7 +238,7 @@ class: extra-details
class: extra-details
## Creating a headless services
## Headless services
- A headless service is obtained by setting the `clusterIP` field to `None`
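In YAML, that would look like this sketch (service name, label, and port number are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpenv-headless
spec:
  clusterIP: None      # makes the service headless: no virtual IP is allocated
  selector:
    app: httpenv
  ports:
  - port: 8888
```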
@@ -401,32 +324,18 @@ error: the server doesn't have a resource type "endpoint"
class: extra-details
## The DNS zone
## `ExternalIP`
- In the `kube-system` namespace, there should be a service named `kube-dns`
- When creating a service, we can also specify an `ExternalIP`
- This is the internal DNS server that can resolve service names
(this is not a type, but an extra attribute to the service)
- The default domain name for the service we created is `default.svc.cluster.local`
- It will make the service available on this IP address
.exercise[
- Get the IP address of the internal DNS server:
```bash
IP=$(kubectl -n kube-system get svc kube-dns -o jsonpath={.spec.clusterIP})
```
- Resolve the cluster IP for the `httpenv` service:
```bash
host httpenv.default.svc.cluster.local $IP
```
]
(if the IP address belongs to a node of the cluster)
---
class: extra-details
## `Ingress`
- Ingresses are another type (kind) of resource

View File

@@ -1,78 +0,0 @@
# Security and Kubernetes
There are many mechanisms in Kubernetes to ensure security.
Obviously, the more you constrain your app, the better.
There are also mechanisms to forbid "unsafe" applications from being launched on
Kubernetes, but that's more for the ops folks 😈 (more on that in the next days)
Let's focus on what we can do on the developer laptop, to make apps
compatible with a secured system, whether it is enforced or not (it's always a good practice)
---
## No container in privileged mode
- risks:
  - If a privileged container gets compromised,
  we basically get full access to the node from within a container
  (no need to tamper with auth logs or alter binaries).
  - Sniffing the network often allows getting access to the entire cluster.
- how to avoid:
```
[...]
spec:
containers:
- name: foo
securityContext:
privileged: false
```
Luckily, that's the default!
---
## No container running as "root"
- risks:
  - bind-mounting a directory like /usr/bin allows changing core system files on the node
  <br/>e.g.: copy a tampered version of "ping", wait for an admin to log in
  and issue a ping command, and bingo!
- how to avoid:
```
[...]
spec:
containers:
- name: foo
securityContext:
runAsUser: 1000
runAsGroup: 100
```
- The default is to use the image default
- If you're writing your own Dockerfile, don't forget about the `USER` instruction
---
## Capabilities
- You can give capabilities one-by-one to a container
- It's useful if you need a few extra capabilities (for some reason), without granting full 'root' privileges
- risks: virtually none, unless you grant a long list of capabilities
- how to use:
```
[...]
spec:
containers:
- name: foo
securityContext:
capabilities:
add: ["NET_ADMIN", "SYS_TIME"]
drop: []
```
The default is to use the container runtime defaults
- and we can also drop default capabilities granted by the container runtime!

View File

@@ -102,6 +102,8 @@
]
- Some tools like Helm will create namespaces automatically when needed
---
## Using namespaces
@@ -339,29 +341,12 @@ Note: we could have used `--namespace=default` for the same result.
- `kube-ps1` makes it easy to track these, by showing them in our shell prompt
- It is installed on our training clusters, and when using [shpod](https://github.com/jpetazzo/shpod)
- It's a simple shell script available from https://github.com/jonmosco/kube-ps1
- It gives us a prompt looking like this one:
- On our clusters, `kube-ps1` is installed and included in `PS1`:
```
[123.45.67.89] `(kubernetes-admin@kubernetes:default)` docker@node1 ~
```
(The highlighted part is `context:namespace`, managed by `kube-ps1`)
- Highly recommended if you work across multiple contexts or namespaces!
---
## Installing `kube-ps1`
- It's a simple shell script available from https://github.com/jonmosco/kube-ps1
- It needs to be [installed in our profile/rc files](https://github.com/jonmosco/kube-ps1#installing)
(instructions differ depending on platform, shell, etc.)
- Once installed, it defines aliases called `kube_ps1`, `kubeon`, `kubeoff`
(to selectively enable/disable it when needed)
- Pro-tip: install it on your machine during the next break!

View File

@@ -1,179 +0,0 @@
# Development Workflow
In this section we will see how to set up a local development workflow.
We will list multiple options.
Keep in mind that we don't have to use *all* these tools!
It's up to the developer to find what best suits them.
---
## What does it mean to develop on Kubernetes?
In theory, the generic workflow is:
1. Make changes to our code or edit a Dockerfile
2. Build a new Docker image with a new tag
3. Push that Docker image to a registry
4. Update the YAML or templates referencing that Docker image
<br/>(e.g. of the corresponding Deployment, StatefulSet, Job ...)
5. Apply the YAML or templates
6. Are we satisfied with the result?
<br/>No → go back to step 1 (or step 4 if the image is OK)
<br/>Yes → commit and push our changes to source control
---
## A few quirks
In practice, there are some details that make this workflow more complex.
- We need a Docker container registry to store our images
<br/>
(for Open Source projects, a free Docker Hub account works fine)
- We need to set image tags properly, hopefully automatically
- If we decide to use a fixed tag (like `:latest`) instead:
- we need to specify `imagePullPolicy=Always` to force image pull
- we need to trigger a rollout when we want to deploy a new image
<br/>(with `kubectl rollout restart` or by killing the running pods)
- We need a fast internet connection to push the images
- We need to regularly clean up the registry to avoid accumulating old images
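If we go the fixed-tag route mentioned above, the relevant fragment of the Deployment's pod template might look like this sketch (image name is illustrative):

```yaml
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:latest
    imagePullPolicy: Always   # pull on each pod start, even if the tag already exists locally
```

Then `kubectl rollout restart deployment myapp` re-creates the pods, which causes the new image to be pulled.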
---
## When developing locally
- If we work with a local cluster, pushes and pulls are much faster
- Even better, with a one-node cluster, most of these problems disappear
- If we build and run the images on the same node, ...
- we don't need to push images
- we don't need a fast internet connection
- we don't need a registry
- we can use bind mounts to edit code locally and make changes available immediately in running containers
- This means that it is much simpler to deploy to a local development environment (like Minikube, Docker Desktop ...) than to a "real" cluster
---
## Minikube
- Start a VM with the hypervisor of your choice: VirtualBox, kvm, Hyper-V ...
- Well supported by the Kubernetes community
- Lots of addons
- Easy cleanup: delete the VM with `minikube delete`
- Bind mounts depend on the underlying hypervisor
(they may require additional setup)
---
## Docker Desktop
- Available for Mac and Windows
- Start a VM with the appropriate hypervisor (even better!)
- Bind mounts work out of the box
```yaml
volumes:
- name: repo_dir
hostPath:
path: /C/Users/Enix/my_code_repository
```
- Ingress and other addons need to be installed manually
---
## Kind
- Kubernetes-in-Docker
- Uses Docker-in-Docker to run Kubernetes
<br/>
(technically, it's more like Containerd-in-Docker)
- We don't get a real Docker Engine (and cannot build Dockerfiles)
- Single-node by default, but multi-node clusters are possible
- Very convenient to test Kubernetes deployments when only Docker is available
<br/>
(e.g. on public CI services like Travis, Circle, GitHub Actions ...)
- Bind mounts require extra configuration
- Extra configuration needed for a couple of addons; totally custom for others
- Doesn't work with BTRFS (sorry, BTRFS users 😢)
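For reference, a multi-node kind cluster is described with a small config file like this one, using the `kind.x-k8s.io/v1alpha4` config format (pass it with `kind create cluster --config`; the mount paths are illustrative):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraMounts:                # the "extra configuration" needed for bind mounts
  - hostPath: /home/user/my_code_repository
    containerPath: /code
- role: worker
```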
---
## microk8s
- Distribution of Kubernetes using Snap
(Snap is a container-like method to install software)
- Available on Ubuntu and derivatives
- Bind mounts work natively (but require extra setup if we run in a VM)
- Big list of addons; easy to install
---
## Proper tooling
The simple workflow seems to be:
- set up a one-node cluster with one of the methods mentioned previously,
- find the remote Docker endpoint,
- configure the `DOCKER_HOST` variable to use that endpoint,
- follow the generic workflow described earlier.
Can we do better?
---
## Helpers
- Skaffold (https://skaffold.dev/):
  - builds with Docker, Kaniko, or Google Cloud Build
  - deploys with plain YAML manifests, Kustomize, or Helm
- Tilt (https://tilt.dev/)
  - the Tiltfile is a programmatic format (based on a Python dialect)
  - primitives for building with Docker
  - primitives for deploying with plain YAML manifests, Kustomize, or Helm
- Garden (https://garden.io/)
- Forge (https://forge.sh/)

View File

@@ -1,84 +0,0 @@
# OpenTelemetry
*OpenTelemetry* is a "tracing" framework.
It's a fusion of two other frameworks:
*OpenTracing* and *OpenCensus*.
Its goal is to provide deep integration with programming languages and
application frameworks, to enable deep-dive tracing of different events across different components.
---
## Span! Span! Span!
- A unit of tracing is called a *span*
- A span has: a start time, a stop time, and an ID
- It represents an action that took some time to complete
(e.g.: function call, database transaction, REST API call ...)
- A span can have a parent span, and can have multiple child spans
(e.g.: when calling function `B`, sub-calls to `C` and `D` were issued)
- Think of it as a "tree" of calls
---
## Distributed tracing
- When two components interact, their spans can be connected together
- Example: microservice `A` sends a REST API call to microservice `B`
- `A` will have a span for the call to `B`
- `B` will have a span for the call from `A`
<br/>(that normally starts shortly after, and finishes shortly before)
- the span of `A` will be the parent of the span of `B`
- they join the same "tree" of calls
<!-- FIXME the thing below? -->
Details: `A` sends headers (depending on the protocol used) carrying the span ID,
so that `B` can generate a child span and join the same tree of calls
---
## Centrally stored
- What do we do with all these spans?
- We store them!
- In the previous example:
- `A` will send trace information to its local agent
- `B` will do the same
- every span will end up in the same DB
  - at a later point, we can reconstruct the "tree" of calls and analyze it
- There are multiple implementations of this stack (agent + DB + web UI)
(the most famous open source ones are Zipkin and Jaeger)
---
## Data sampling
- Do we store *all* the spans?
(it looks like this could need a lot of storage!)
- No, we can use *sampling*, to reduce storage and network requirements
- Smart sampling is applied directly in the application, to save CPU when a span is not needed
- It also ensures that if a span is marked as sampled, all of its child spans are sampled as well
(so that the tree of calls is complete)

View File

@@ -530,7 +530,7 @@ After the Kibana UI loads, we need to click around a bit
- Lookup the NodePort number and connect to it:
```bash
kubectl get services
kuebctl get services
```
]

View File

@@ -1,150 +0,0 @@
# Prometheus
Prometheus is a monitoring system with a small storage I/O footprint.
It's quite ubiquitous in the Kubernetes world.
This section is not an in-depth description of Prometheus.
*Note: More on Prometheus next day!*
<!--
FIXME maybe just use prometheus.md and add this file after it?
This way there is not need to write a Prom intro.
-->
---
## Prometheus exporter
- Prometheus *scrapes* (pulls) metrics from *exporters*
- A Prometheus exporter is an HTTP endpoint serving a response like this one:
```
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027 1395066363000
http_requests_total{method="post",code="400"} 3 1395066363000
# Minimalistic line:
metric_without_timestamp_and_labels 12.47
```
- Our goal, as a developer, will be to expose such an endpoint to Prometheus
---
## Implementing a Prometheus exporter
Multiple strategies can be used:
- Implement the exporter in the application itself
(especially if it's already an HTTP server)
- Use building blocks that may already expose such an endpoint
(e.g. Puma, uWSGI)
- Add a sidecar exporter that leverages and adapts an existing monitoring channel
(e.g. JMX for Java applications)
---
## Implementing a Prometheus exporter
- The Prometheus client libraries are often the easiest solution
- They offer multiple ways of integration, including:
- "I'm already running a web server, just add a monitoring route"
- "I don't have a web server (or I want another one), please run one in a thread"
- Client libraries for various languages:
- https://github.com/prometheus/client_python
- https://github.com/prometheus/client_ruby
- https://github.com/prometheus/client_golang
(Can you see the pattern?)
---
## Adding a sidecar exporter
- There are many exporters available already:
https://prometheus.io/docs/instrumenting/exporters/
- These are "translators" from one monitoring channel to another
- Writing your own is not complicated
(using the client libraries mentioned previously)
- Avoid exposing the internal monitoring channel more than necessary
(the app and its sidecars run in the same network namespace,
<br/>so they can communicate over `localhost`)
---
## Configuring the Prometheus server
- We need to tell the Prometheus server to *scrape* our exporter
- Prometheus has a very flexible "service discovery" mechanism
(to discover and enumerate the targets that it should scrape)
- Depending on how we installed Prometheus, various methods might be available
---
## Configuring Prometheus, option 1
- Edit `prometheus.yml`
- Always possible
(we should always have a Prometheus configuration file somewhere!)
- Dangerous and error-prone
(if we get it wrong, it is very easy to break Prometheus)
- Hard to maintain
(the file will grow over time, and might accumulate obsolete information)
---
## Configuring Prometheus, option 2
- Add *annotations* to the pods or services to monitor
- We can do that if Prometheus is installed with the official Helm chart
- Prometheus will detect these annotations and automatically start scraping
- Example:
```yaml
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
    prometheus.io/path: /metrics
```
---
## Configuring Prometheus, option 3
- Create a ServiceMonitor custom resource
- We can do that if we are using the CoreOS Prometheus operator
- See the [Prometheus operator documentation](https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#servicemonitor) for more details
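A minimal ServiceMonitor might look like this sketch (the label and port names are assumptions; they must match the Service that exposes the metrics endpoint):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp        # must match the labels of the Service to scrape
  endpoints:
  - port: metrics       # name of the port in the Service definition
    path: /metrics
```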

View File

@@ -1,99 +0,0 @@
# Registries
- There are lots of options to ship our container images to a registry
- We can group them depending on some characteristics:
- SaaS or self-hosted
- with or without a build system
---
## Docker registry
- Self-hosted and [open source](https://github.com/docker/distribution)
- Runs in a single Docker container
- Supports multiple storage backends
- Supports basic authentication out of the box
- [Other authentication schemes](https://docs.docker.com/registry/deploying/#more-advanced-authentication) through proxy or delegation
- No build system
- To run it with the Docker engine:
```shell
docker run -d -p 5000:5000 --name registry registry:2
```
- Or use the dedicated plugin in minikube, microk8s, etc.
---
## Harbor
- Self-hosted and [open source](https://github.com/goharbor/harbor)
- Supports both Docker images and Helm charts
- Advanced authentication mechanisms
- Multi-site synchronisation
- Vulnerability scanning
- No build system
- To run it with Helm:
```shell
helm repo add harbor https://helm.goharbor.io
helm install my-release harbor/harbor
```
---
## Gitlab
- Available both as a SaaS product and self-hosted
- SaaS product is free for open source projects; paid subscription otherwise
- Some parts are [open source](https://gitlab.com/gitlab-org/gitlab-foss/)
- Integrated CI
- No build system (but a custom build system can be hooked to the CI)
- To run it with Helm:
```shell
helm repo add gitlab https://charts.gitlab.io/
helm install gitlab gitlab/gitlab
```
---
## Docker Hub
- SaaS product: [hub.docker.com](https://hub.docker.com)
- Free for public images; paid subscription for private ones
- Build system included
---
## Quay
- Available both as a SaaS product ([quay.io](https://quay.io)) and self-hosted (Quay)
- SaaS product is free for public repositories; paid subscription otherwise
- Some components of Quay and quay.io are open source
(see [Project Quay](https://www.projectquay.io/) and the [announcement](https://www.redhat.com/en/blog/red-hat-introduces-open-source-project-quay-container-registry))
- Build system included

View File

@@ -80,7 +80,6 @@
- Rolling updates can be monitored with the `kubectl rollout` subcommand
---
class: hide-exercise
## Rolling out the new `worker` service
@@ -110,7 +109,6 @@ class: hide-exercise
That rollout should be pretty quick. What shows in the web UI?
---
class: hide-exercise
## Give it some time
@@ -133,7 +131,6 @@ class: hide-exercise
(The grace period is 30 seconds, but [can be changed](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) if needed)
---
class: hide-exercise
## Rolling out something invalid
@@ -151,10 +148,10 @@ class: hide-exercise
kubectl rollout status deploy worker
```
/<!--
<!--
```wait Waiting for deployment```
```key ^C```
/-->
-->
]
@@ -165,7 +162,6 @@ Our rollout is stuck. However, the app is not dead.
(After a minute, it will stabilize to be 20-25% slower.)
---
class: hide-exercise
## What's going on with our rollout?
@@ -206,7 +202,6 @@ class: extra-details
- Our rollout is stuck at this point!
---
class: hide-exercise
## Checking the dashboard during the bad rollout
@@ -223,7 +218,6 @@ If you didn't deploy the Kubernetes dashboard earlier, just skip this slide.
]
---
class: hide-exercise
## Recovering from a bad rollout
@@ -246,7 +240,6 @@ class: hide-exercise
]
---
class: hide-exercise
## Rolling back to an older version
@@ -257,7 +250,6 @@ class: hide-exercise
- How can we get back to the previous version?
---
class: hide-exercise
## Multiple "undos"
@@ -277,7 +269,6 @@ class: hide-exercise
🤔 That didn't work.
---
class: hide-exercise
## Multiple "undos" don't work
@@ -300,8 +291,6 @@ class: hide-exercise
---
class: hide-exercise
## In this specific scenario
- Our version numbers are easy to guess
@@ -312,8 +301,6 @@ class: hide-exercise
---
class: hide-exercise
## Listing versions
- We can list successive versions of a Deployment with `kubectl rollout history`
@@ -334,7 +321,6 @@ We might see something like 1, 4, 5.
(Depending on how many "undos" we did before.)
---
class: hide-exercise
## Explaining deployment revisions
@@ -354,7 +340,6 @@ class: hide-exercise
---
class: extra-details
class: hide-exercise
## What about the missing revisions?
@@ -369,7 +354,6 @@ class: hide-exercise
(if we wanted to!)
---
class: hide-exercise
## Rolling back to an older version
@@ -389,7 +373,6 @@ class: hide-exercise
---
class: extra-details
class: hide-exercise
## Changing rollout parameters
@@ -397,7 +380,7 @@ class: hide-exercise
- revert to `v0.1`
- be conservative on availability (always have desired number of available workers)
- go slow on rollout speed (update only one pod at a time)
- go slow on rollout speed (update only one pod at a time)
- give some time to our workers to "warm up" before starting more
The corresponding changes can be expressed in the following YAML snippet:
@@ -421,7 +404,6 @@ spec:
---
class: extra-details
class: hide-exercise
## Applying changes through a YAML patch
@@ -452,6 +434,6 @@ class: hide-exercise
kubectl get deploy -o json worker |
jq "{name:.metadata.name} + .spec.strategy.rollingUpdate"
```
]
]
]

View File

@@ -1,72 +0,0 @@
# sealed-secrets
- https://github.com/bitnami-labs/sealed-secrets
- has a server side (a standard Kubernetes deployment) and a client-side *kubeseal* binary
- the server side starts by generating a key pair; it keeps the private key and exposes the public key
- To create a sealed secret, you only need access to the public key
- Access to that key can be restricted with Kubernetes RBAC rules
---
## sealed-secrets how to
- adding a secret: *kubeseal* encrypts it with the public key
- the server-side controller re-creates the original Secret when the encrypted one is added to the cluster
- this makes it "safe" to add those secrets to your source tree
- since version 0.9, key rotation is enabled by default, so remember to back up the private keys regularly
<br/> (or you won't be able to decrypt all your secrets, in case of *disaster recovery*)
---
## First "sealed-secret"
.exercise[
- Install *kubeseal*
```bash
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.7/kubeseal-linux-amd64 -O kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
```
- Install controller
```bash
helm install -n kube-system sealed-secrets-controller stable/sealed-secrets
```
- Create a secret you don't want to leak
```bash
kubectl create secret generic --from-literal=foo=bar my-secret -o yaml --dry-run \
| kubeseal > mysecret.yaml
kubectl apply -f mysecret.yaml
```
]
---
## Alternative: sops / git crypt
- You can work at the VCS level (i.e. totally abstracted from Kubernetes)
- sops (https://github.com/mozilla/sops) is VCS-agnostic and encrypts portions of files
- git-crypt works with git to transparently encrypt (some) files in the repository
---
## Other alternatives
- You can delegate secret management to another component like *HashiCorp Vault*
- It can work in multiple ways:
  - encrypting secrets in the API server (instead of the "very secure" *base64*)
  - encrypting secrets before sending them to Kubernetes (avoiding plain text in git)
  - managing secrets entirely in Vault and exposing them to containers via a volume
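About that base64 remark: base64 is an encoding, not encryption, as two lines of shell will confirm (the secret value is illustrative):

```shell
# Encode, then trivially decode - no key involved anywhere
echo -n "hunter2" | base64
# → aHVudGVyMg==
echo -n "aHVudGVyMg==" | base64 -d
# → hunter2
```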

View File

@@ -1,15 +0,0 @@
## Software development
For years, decades (centuries!), software development has followed the same principles:
- Development
- Testing
- Packaging
- Shipping
- Deployment
We will see how this maps to the Kubernetes world.

View File

@@ -1,17 +0,0 @@
# Automation && CI/CD
What we've done so far:
- development of our application
- manual testing, and exploration of automated testing strategies
- packaging in a container image
- shipping that image to a registry
What still needs to be done:
- deployment of our application
- automation of the whole build / ship / run cycle

View File

@@ -1,82 +0,0 @@
# Testing
There are multiple levels of testing:
- unit testing (many small tests that run in isolation),
- integration testing (bigger tests involving multiple components),
- functional or end-to-end testing (even bigger tests involving the whole app).
In this section, we will focus on *unit testing*, where each test case
should (ideally) be completely isolated from other components and system
interaction: no real database, no real backend, *mocks* everywhere.
(For a good discussion on the merits of unit testing, we can read
[Just Say No to More End-to-End Tests](https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html).)
Unfortunately, this ideal scenario is easier said than done ...
---
## Multi-stage build
```dockerfile
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
RUN <install test dependencies>
COPY <test data sets and fixtures>
RUN <unit tests>
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
CMD, EXPOSE ...
```
- This leverages the Docker cache: if the code doesn't change, the tests don't need to run
- If the tests require a database or other backend, we can use `docker build --network`
- If the tests fail, the build fails; and no image is generated
---
## Docker Compose
```yaml
version: "3"
services:
project:
image: my_image_name
build:
context: .
target: dev
database:
image: redis
backend:
image: backend
```
+
```shell
docker-compose build && docker-compose run project pytest -v
```
---
## Skaffold/Container-structure-test
- The `test` field of `skaffold.yaml` instructs Skaffold to run tests against your image.
- It uses the [container-structure-test](https://github.com/GoogleContainerTools/container-structure-test)
- It also allows running custom commands
- Unfortunately, there is no way to run other Docker images
(to start a database or a backend that we need to run tests)
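As a sketch, the corresponding `skaffold.yaml` fragment might look like this (image and file names are assumptions):

```yaml
test:
- image: my_image_name
  structureTests:
  - ./structure-test.yaml
```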

View File

@@ -1,6 +1,6 @@
## Versions installed
- Kubernetes 1.17.2
- Kubernetes 1.17.1
- Docker Engine 19.03.5
- Docker Compose 1.24.1

View File

@@ -50,7 +50,7 @@ class: extra-details
- *Volumes*:
- appear in Pod specifications (we'll see that in a few slides)
- appear in Pod specifications (see next slide)
- do not exist as API resources (**cannot** do `kubectl get volumes`)
@@ -232,7 +232,7 @@ spec:
mountPath: /usr/share/nginx/html/
- name: git
image: alpine
command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ]
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/
@@ -298,14 +298,14 @@ spec:
- As soon as we see its IP address, access it:
```bash
curl `$IP`
curl $IP
```
<!-- ```bash /bin/sleep 5``` -->
- A few seconds later, the state of the pod will change; access it again:
```bash
curl `$IP`
curl $IP
```
]

View File

@@ -91,52 +91,3 @@
because the resources that we created lack the necessary annotation.
We can safely ignore them.)
---
## Deleting resources
- We can also use a YAML file to *delete* resources
- `kubectl delete -f ...` will delete all the resources mentioned in a YAML file
(useful to clean up everything that was created by `kubectl apply -f ...`)
- The definitions of the resources don't matter
(just their `kind`, `apiVersion`, and `name`)
---
## Pruning¹ resources
- We can also tell `kubectl` to remove old resources
- This is done with `kubectl apply -f ... --prune`
- It will remove resources that don't exist in the YAML file(s)
- But only if they were created with `kubectl apply` in the first place
(technically, if they have an annotation `kubectl.kubernetes.io/last-applied-configuration`)
.footnote[¹If English is not your first language: *to prune* means to remove dead or overgrown branches in a tree, to help it to grow.]
---
## YAML as source of truth
- Imagine the following workflow:
- do not use `kubectl run`, `kubectl create deployment`, `kubectl expose` ...
- define everything with YAML
- `kubectl apply -f ... --prune --all` that YAML
- keep that YAML under version control
- enforce all changes to go through that YAML (e.g. with pull requests)
- Our version control system now has a full history of what we deploy
- Comparable to "Infrastructure-as-Code", but for app deployments
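That workflow could look like this (a sketch; the `manifests/` directory and commit message are hypothetical):
```bash
# All resources are defined in YAML files under manifests/,
# kept under version control
git add manifests/ && git commit -m "Update app manifests"

# Apply the whole directory; --prune removes resources that were
# previously created with kubectl apply but no longer appear in the YAML
kubectl apply -f manifests/ --prune --all
```
(Use `--prune --all` with care: it will delete anything carrying the `last-applied-configuration` annotation that isn't in the YAML files.)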

slides/kadm-fullday.yml Normal file
View File

@@ -0,0 +1,45 @@
title: |
Kubernetes
for Admins and Ops
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
- static-pods-exercise
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/dmuc.md
- - k8s/multinode.md
- k8s/cni.md
- k8s/apilb.md
- k8s/control-plane-auth.md
- - k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md
- k8s/staticpods.md
- k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
- k8s/bootstrap.md
- - k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- - k8s/lastwords-admin.md
- k8s/links.md
- shared/thankyou.md

slides/kadm-twodays.yml Normal file
View File

@@ -0,0 +1,71 @@
title: |
Kubernetes
for administrators
and operators
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
# DAY 1
- - k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
- - k8s/multinode.md
- k8s/cni.md
- - k8s/apilb.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md
- k8s/staticpods.md
- - k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
# DAY 2
- - k8s/kubercoins.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
- k8s/authn-authz.md
- k8s/csr-api.md
- - k8s/openid-connect.md
- k8s/control-plane-auth.md
###- k8s/bootstrap.md
- k8s/netpol.md
- k8s/podsecuritypolicy.md
- - k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- - k8s/prometheus.md
- k8s/extending-api.md
- k8s/operators.md
###- k8s/operators-design.md
# CONCLUSION
- - k8s/lastwords-admin.md
- k8s/links.md
- shared/thankyou.md
- |
# (All content after this slide is bonus material)
# EXTRA
- - k8s/volumes.md
- k8s/configuration.md
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md

slides/kube-fullday.yml Normal file
View File

@@ -0,0 +1,93 @@
title: |
Deploying and Scaling Microservices
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
#- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectlrun.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
-
- k8s/yamldeploy.md
- k8s/setup-k8s.md
#- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
#- k8s/dryrun.md
#- k8s/kubectlproxy.md
#- k8s/localkubeconfig.md
#- k8s/accessinternal.md
- k8s/rollout.md
#- k8s/healthchecks.md
#- k8s/healthchecks-more.md
#- k8s/record.md
-
- k8s/namespaces.md
- k8s/ingress.md
#- k8s/kustomize.md
#- k8s/helm.md
#- k8s/create-chart.md
#- k8s/create-more-charts.md
#- k8s/netpol.md
#- k8s/authn-authz.md
#- k8s/csr-api.md
#- k8s/openid-connect.md
#- k8s/podsecuritypolicy.md
- k8s/volumes.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
#- k8s/logs-centralized.md
#- k8s/prometheus.md
#- k8s/statefulsets.md
#- k8s/local-persistent-volumes.md
#- k8s/portworx.md
#- k8s/extending-api.md
#- k8s/operators.md
#- k8s/operators-design.md
#- k8s/staticpods.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
-
- k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md

slides/kube-halfday.yml Normal file
View File

@@ -0,0 +1,71 @@
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
chapters:
- shared/title.md
#- logistics.md
# Bridget-specific; others use logistics.md
- logistics-bridget.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-k8s.md
- - k8s/kubectlrun.md
- k8s/deploymentslideshow.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
#- k8s/kubectlproxy.md
#- k8s/localkubeconfig.md
#- k8s/accessinternal.md
- - k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/rollout.md
#- k8s/record.md
- - k8s/logs-cli.md
# Bridget hasn't added EFK yet
#- k8s/logs-centralized.md
- k8s/namespaces.md
- k8s/helm.md
- k8s/create-chart.md
#- k8s/create-more-charts.md
#- k8s/kustomize.md
#- k8s/netpol.md
- k8s/whatsnext.md
# - k8s/links.md
# Bridget-specific
- k8s/links-bridget.md
- shared/thankyou.md

slides/kube-selfpaced.yml Normal file
View File

@@ -0,0 +1,115 @@
title: |
Deploying and Scaling Microservices
with Docker and Kubernetes
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
chapters:
- shared/title.md
#- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
-
- k8s/kubectlget.md
- k8s/kubectlrun.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
-
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
- k8s/yamldeploy.md
-
- k8s/setup-k8s.md
- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/dryrun.md
-
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
- k8s/record.md
-
- k8s/namespaces.md
- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
-
- k8s/ingress.md
- k8s/kustomize.md
- k8s/helm.md
- k8s/create-chart.md
- k8s/create-more-charts.md
-
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/podsecuritypolicy.md
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/control-plane-auth.md
-
- k8s/volumes.md
- k8s/build-with-docker.md
- k8s/build-with-kaniko.md
-
- k8s/configuration.md
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
-
- k8s/logs-centralized.md
- k8s/prometheus.md
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
-
- k8s/extending-api.md
- k8s/operators.md
- k8s/operators-design.md
- k8s/owners-and-dependents.md
-
- k8s/dmuc.md
- k8s/multinode.md
- k8s/cni.md
- k8s/apilb.md
- k8s/staticpods.md
-
- k8s/cluster-upgrade.md
- k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
- k8s/gitworkflows.md
-
- k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md

slides/kube-twodays.yml Normal file
View File

@@ -0,0 +1,97 @@
title: |
Deploying and Scaling Microservices
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
#- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectlrun.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
-
- k8s/yamldeploy.md
#- k8s/setup-k8s.md
- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/dryrun.md
-
#- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/rollout.md
- k8s/healthchecks.md
#- k8s/healthchecks-more.md
- k8s/record.md
-
- k8s/namespaces.md
- k8s/ingress.md
- k8s/kustomize.md
- k8s/helm.md
- k8s/create-chart.md
#- k8s/create-more-charts.md
-
- k8s/netpol.md
- k8s/authn-authz.md
#- k8s/csr-api.md
#- k8s/openid-connect.md
#- k8s/podsecuritypolicy.md
-
- k8s/volumes.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
- k8s/logs-centralized.md
- k8s/prometheus.md
-
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
#- k8s/extending-api.md
#- k8s/operators.md
#- k8s/operators-design.md
#- k8s/staticpods.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
-
- k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md

slides/kube.yml Normal file
View File

@@ -0,0 +1,110 @@
title: |
Advanced
Kubernetes
Training
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/training-20200121-telaviv)"
#chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-01-zr.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- # DAY 1
- shared/prereqs.md
- shared/webssh.md
- shared/connecting.md
- shared/sampleapp.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectlrun.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
-
- k8s/shippingimages.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
- k8s/yamldeploy.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
#- k8s/dryrun.md
- # CH4 / DAY 2 (ish)
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
- k8s/record.md
- # CH5
- k8s/namespaces.md
- k8s/ingress.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- # CH6
- k8s/logs-centralized.md
- k8s/prometheus.md
- # CH7
- k8s/volumes.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
- # CH8 / DAY 3
- k8s/extending-api.md
- k8s/operators.md
- k8s/operators-design.md
- # CH9
#- k8s/prereqs-admin.md
#- k8s/architecture.md
#- k8s/deploymentslideshow.md
- k8s/dmuc.md
- # CH10
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- # CH11
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
- # CH12 / EXTRA CONTENT
- |
# (Extra content: security)
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/control-plane-auth.md
- # CH13 / NICE TO HAVE
- |
# (Extra content: templatization & packaging)
- k8s/kustomize.md
- k8s/helm.md
- k8s/create-chart.md
- k8s/create-more-charts.md
- # CH14 / MEH
- |
# (Extra content: user management)
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/podsecuritypolicy.md
- # CH15 / END
- k8s/lastwords-admin.md
- k8s/links.md
- shared/thankyou.md

View File

@@ -2,16 +2,18 @@
- Hello! We are:
- .emoji[🐳] Jérôme Petazzoni ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- .emoji[👷🏻‍♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Tiny Shell Script LLC)
- .emoji[☸️] Julien Girardin ([Zempashi](https://github.com/zempashi), Enix SAS)
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Ardan Labs LLC)
- The training will run from 9am to 5:30pm (with lunch and coffee breaks)
- The workshop will run from 9:15 to 16:30
- For lunch, we'll invite you to [Chameleon, 70 Rue René Boulanger](https://goo.gl/maps/h2XjmJN5weDSUios8)
- There will be a lunch break around noon
(please let us know if you'll eat on your own)
(And coffee breaks!)
- Feel free to interrupt for questions at any time
- *Especially when you see full screen container pictures!*
- Live feedback, questions, help: @@CHAT@@

View File

@@ -28,7 +28,7 @@ class Interstitials(object):
def next(self):
index = self.index % len(self.images)
self.index += 1
index += 1
return self.images[index]

View File

@@ -1,7 +0,0 @@
<ul>
<li><a href="1.yml.html">Jour 1</a></li>
<li><a href="2.yml.html">Jour 2</a></li>
<li><a href="3.yml.html">Jour 3</a></li>
<li><a href="4.yml.html">Jour 4</a></li>
<li><a href="5.yml.html">Jour 5</a></li>
</ul>

View File

@@ -1,49 +1,22 @@
## Accessing these slides now
## About these slides
- We recommend that you open these slides in your browser:
@@SLIDES@@
- Use arrows to move to next/previous slide
(up, down, left, right, page up, page down)
- Type a slide number + ENTER to go to that slide
- The slide number is also visible in the URL bar
(e.g. .../#123 for slide 123)
---
## Accessing these slides later
- Slides will remain online so you can review them later if needed
(let's say we'll keep them online at least 1 year, how about that?)
- You can download the slides using that URL:
@@ZIP@@
(then open the file `@@HTML@@`)
- You will find new versions of these slides on:
https://container.training/
---
## These slides are open source
- You are welcome to use, re-use, share these slides
- These slides are written in markdown
- The sources of these slides are available in a public GitHub repository:
- All the content is available in a public GitHub repository:
https://@@GITREPO@@
- You can get updated "builds" of the slides there:
http://container.training/
<!--
.exercise[
```open https://@@GITREPO@@```
```open http://container.training/```
]
-->
--
- Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...
.footnote[.emoji[👇] Try it! The source file will be shown, and you can view it on GitHub, fork it, and edit it.]
@@ -73,19 +46,3 @@ class: extra-details
- you want only the most essential information
- You can review these slides another time if you want, they'll be waiting for you ☺
---
class: in-person, chat-room
## Chat room
- We've set up a chat room that we will monitor during the workshop
- Don't hesitate to use it to ask questions, or get help, or share feedback
- The chat room will also be available after the workshop
- Join the chat room: @@CHAT@@
- Say hi in the chat room!

View File

@@ -58,6 +58,28 @@ Misattributed to Benjamin Franklin
---
## Navigating slides
- Use arrows to move to next/previous slide
(up, down, left, right, page up, page down)
- Type a slide number + ENTER to go to that slide
- The slide number is also visible in the URL bar
(e.g. .../#123 for slide 123)
- Slides will remain online so you can review them later if needed
- You can download the slides using that URL:
@@ZIP@@
(then open the file `@@HTML@@`)
---
class: in-person
## Where are we going to run our containers?

View File

@@ -11,8 +11,9 @@ class: title, in-person
@@TITLE@@<br/></br>
.footnote[
**WiFi: CONFERENCE**<br/>
**Mot de passe: 123conference**
**WiFi: shpalter2.4**<br/>
**Password: 987654321**<br/>
(Then follow the prompts.)
**Slides[:](https://www.youtube.com/watch?v=h16zyxiwDLY) @@SLIDES@@**
**Slides: @@SLIDES@@**
]