Compare commits

..

3 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Jerome Petazzoni | 0b1b942b21 | fix-redirects.sh: adding forced redirect | 2020-04-07 16:45:23 -05:00 |
| Jerome Petazzoni | 0f046ed78c | Merge branch 'master' into 2020-01-caen | 2020-01-30 01:11:22 -06:00 |
| Jerome Petazzoni | c5ed86c92b | Set up slides for Caen K8S 3-day course | 2020-01-28 03:04:23 -06:00 |
53 changed files with 423 additions and 2020 deletions

View File

@@ -1,15 +0,0 @@
apiVersion: apiextensions.k8s.io/v1alpha1
kind: CustomResourceDefinition
metadata:
name: coffees.container.training
spec:
group: container.training
version: v1alpha1
scope: Namespaced
names:
plural: coffees
singular: coffee
kind: Coffee
shortNames:
- cof

View File

@@ -1,32 +0,0 @@
apiVersion: apiextensions.k8s.io/v1alpha1
kind: CustomResourceDefinition
metadata:
name: coffees.container.training
spec:
group: container.training
version: v1alpha1
scope: Namespaced
names:
plural: coffees
singular: coffee
kind: Coffee
shortNames:
- cof
additionalPrinterColumns:
- JSONPath: .spec.taste
description: Subjective taste of that kind of coffee bean
name: Taste
type: string
- JSONPath: .metadata.creationTimestamp
name: Age
type: date
validation:
openAPIV3Schema:
properties:
spec:
required:
- taste
properties:
taste:
description: Subjective taste of that kind of coffee bean
type: string

View File

@@ -1,29 +0,0 @@
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: arabica
spec:
taste: strong
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: robusta
spec:
taste: stronger
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: liberica
spec:
taste: smoky
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: excelsa
spec:
taste: fruity

View File

@@ -13,7 +13,7 @@ spec:
mountPath: /usr/share/nginx/html/
- name: git
image: alpine
command: [ "sh", "-c", "apk add git && git clone https://github.com/octocat/Spoon-Knife /www" ]
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/

View File

@@ -61,6 +61,6 @@ TAG=$PREFIX-$SETTINGS
--count $((3*$STUDENTS))
./workshopctl deploy $TAG
./workshopctl kube $TAG 1.16.6
./workshopctl kube $TAG 1.14.6
./workshopctl cards $TAG

View File

@@ -1,69 +0,0 @@
title: |
Jour 1
Fondamentaux
Conteneurs & Docker
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/enix/formation-highfive-202002)"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-02-enix.container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/toc.md
-
- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Naming_And_Inspecting.md
#- containers/Labels.md
- containers/Getting_Inside.md
#- containers/Resource_Limits.md
- containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
-
- containers/Container_Networking_Basics.md
#- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
#- containers/Ambassadors.md
- containers/Local_Development_Workflow.md
#- containers/Windows_Containers.md
#- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
#- containers/Docker_Machine.md
#- containers/Advanced_Dockerfiles.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Container_Engines.md
#- containers/Ecosystem.md
#- containers/Orchestration_Overview.md
-
- shared/thankyou.md
- containers/links.md

View File

@@ -1,57 +0,0 @@
title: |
Jour 2
Fondamentaux
Orchestration
& Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/enix/formation-highfive-202002)"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-02-enix.container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectlrun.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
-
- k8s/shippingimages.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
- k8s/yamldeploy.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
-
- k8s/rollout.md
#- k8s/dryrun.md
- k8s/healthchecks.md
#- k8s/healthchecks-more.md
#- k8s/record.md
#- k8s/dashboard.md
- k8s/ingress.md
-
- shared/thankyou.md

View File

@@ -1,81 +0,0 @@
title: |
Jour 3
Méthodologies DevOps
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/enix/formation-highfive-202002)"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-02-enix.container.training/
exclude:
- self-paced
- hide-exercise
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- shared/prereqs.md
- shared/connecting.md
# Bien démarrer en local (minikube, kind)
- shared/sampleapp.md
- k8s/software-dev-banalities.md
- k8s/on-desktop.md
- k8s/volumes.md
- k8s/namespaces.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/testing.md
-
- k8s/configuration.md
- k8s/sealed-secrets.md
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-secrets.md
-
- k8s/shippingimages.md
- k8s/registries.md
- k8s/stop-manual.md
- k8s/ci-cd.md
- k8s/exercise-ci-build.md
- k8s/kaniko.md
- k8s/exercise-ci-kaniko.md
- k8s/rollout.md
- k8s/advanced-rollout.md
- k8s/devs-and-ops-joined-topics.md
-
- k8s/prometheus-endpoint.md
- k8s/exercise-prometheus.md
- k8s/opentelemetry.md
- k8s/exercise-opentelemetry.md
- k8s/kubernetes-security.md
#- |
# # (Automatiser)
#- |
# # Fabrication d'image
#- |
# # Skaffold
#- |
# # Registries
#- |
# # Gitlab, CI
#- |
# # ROllout avancé, blue green, canary
#- |
# # Monitoring applicatif
#- |
# # Prometheus Grafana
#- |
# # Telemetry
-
- shared/thankyou.md

View File

@@ -1,40 +0,0 @@
title: |
Jour 4
Kubernetes Avancé
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/enix/formation-highfive-202002)"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-02-enix.container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- k8s/netpol.md
- k8s/authn-authz.md
-
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
-
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
-
- k8s/prometheus.md
- k8s/logs-centralized.md
- k8s/extending-api.md
- k8s/operators.md
#- k8s/operators-design.md
-
- shared/thankyou.md

View File

@@ -1,42 +0,0 @@
title: |
Jour 5
Opérer Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/enix/formation-highfive-202002)"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-02-enix.container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
-
- k8s/multinode.md
- k8s/cni.md
-
- k8s/apilb.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md
- k8s/cluster-backup.md
- k8s/staticpods.md
-
- k8s/control-plane-auth.md
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/podsecuritypolicy.md
-
- shared/thankyou.md

View File

@@ -1,8 +1,7 @@
# Uncomment and/or edit one of the following lines if necessary.
#/ /kube-halfday.yml.html 200
#/ /kube-fullday.yml.html 200
#/ /kube-twodays.yml.html 200
/ /menu.html 200!
/ /kube.yml.html 200!
# And this allows to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack

View File

@@ -1,5 +0,0 @@
# Exercise -- write a simple pipeline
Let's create a simple pipeline with GitLab.
The code is at: https://github.com/enix/kubecoin-build

View File

@@ -1,76 +0,0 @@
# Advanced Rollout
- In some cases, the built-in mechanisms of Kubernetes are not enough.
- We may want more control over the rollout: incorporating monitoring feedback, deploying
on multiple clusters, etc.
- Two "main" strategies exist here:
- canary deployment
- blue/green deployment
---
## Canary deployment
- focus on one component of the stack
- deploy a new version of the component alongside production
- redirect a portion of prod traffic to the new version
- scale up the new version and redirect more traffic, checking that everything is OK
- scale down the old version
- repeat the procedure, component by component
- That's what Kubernetes does by default, but it updates every component at the same time
- Could be paired with `kubectl wait --for` and applying components sequentially,
for a hand-made canary deployment
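A hand-made canary can be sketched with two Deployments behind one Service whose selector matches both; the names (`web`, `web-canary`) and image tags below are hypothetical placeholders:

```yaml
# Stable version: 9 replicas receive roughly 90% of the traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 9
  selector:
    matchLabels: { app: web, track: stable }
  template:
    metadata:
      labels: { app: web, track: stable }
    spec:
      containers:
      - name: web
        image: myapp:1.0
---
# Canary: 1 replica receives roughly 10% of the traffic,
# because the Service below selects only on "app: web".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels: { app: web, track: canary }
  template:
    metadata:
      labels: { app: web, track: canary }
    spec:
      containers:
      - name: web
        image: myapp:2.0
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web   # matches both stable and canary pods
  ports:
  - port: 80
```

Shifting traffic is then a matter of scaling the two Deployments up and down, checking metrics between each step.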
---
## Blue/Green deployment
- focus on the entire stack
- deploy a new stack
- check that the new stack works as expected
- switch traffic to the new stack; roll back if anything goes wrong
- garbage-collect the previous infrastructure
- there is nothing like that by default in Kubernetes
- a Helm chart with multiple releases is the closest equivalent
- could be paired with ingress features like `nginx.ingress.kubernetes.io/canary-*`
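A common approximation of blue/green on Kubernetes is to run two parallel stacks and flip a Service selector between them (the `release: blue/green` labels are a convention, not a Kubernetes feature):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    release: blue    # change to "green" to switch all traffic at once
  ports:
  - port: 80
```

The switch itself can be done with a one-line patch, e.g. `kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","release":"green"}}}'`; rolling back is the same command with `blue`.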
---
## Not hand-made?
A few additional controllers help achieve these kinds of rollout behaviors.
They leverage the Kubernetes API at different levels to achieve this goal.
---
## Spinnaker
- https://www.spinnaker.io
- Helps deploy the same app on multiple clusters
- Can analyze rollout status (canary analysis) and correlate it with monitoring
- Rolls back if anything goes wrong
- Also supports blue/green
- Configuration is done via the UI
---
## Argo Rollouts
- https://github.com/argoproj/argo-rollouts
- Replaces your Deployments with a "deployment-like" CRD (Custom Resource Definition)
- Full control via CRDs
- Supports blue/green and canary deployments
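For illustration, a minimal Argo Rollouts manifest might look like this (a sketch only; field names follow the project's documentation and should be checked against the installed version):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout          # drop-in replacement for a Deployment
metadata:
  name: web
spec:
  replicas: 10
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
      - name: web
        image: myapp:2.0
  strategy:
    canary:
      steps:
      - setWeight: 20   # send 20% of traffic to the new version
      - pause: {}       # wait for manual promotion
      - setWeight: 60
      - pause: {}       # pause again before full rollout
```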

View File

@@ -1,51 +0,0 @@
## Jenkins / Jenkins X
- Multi-purpose CI
- Jenkins X is a self-hosted CI for Kubernetes
- Creates a namespace per commit and applies manifests in that namespace
<br/>
"A deploy per feature-branch"
.small[
```shell
curl -L "https://github.com/jenkins-x/jx/releases/download/v2.0.1103/jx-darwin-amd64.tar.gz" | tar xzv jx
./jx boot
```
]
---
## GitLab
- Repository + registry + CI/CD integrated all-in-one
```shell
helm repo add gitlab https://charts.gitlab.io/
helm install gitlab gitlab/gitlab
```
---
## ArgoCD / Flux
- Watch a Git repository and apply changes to Kubernetes
- Provide a UI to view changes and roll back
.small[
```shell
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
]
---
## Tekton / Knative
- Knative is a serverless project from Google
- Tekton leverages Knative to run pipelines
- Not really user-friendly today, but stay tuned for wrappers/products

View File

@@ -360,7 +360,3 @@ docker run --rm --net host -v $PWD:/vol \
- [kube-backup](https://github.com/pieterlange/kube-backup)
simple scripts to save resource YAML to a git repository
- [bivac](https://github.com/camptocamp/bivac)
Backup Interface for Volumes Attached to Containers

View File

@@ -154,7 +154,7 @@ class: extra-details
- "Running Kubernetes without nodes"
- Systems like [Virtual Kubelet](https://virtual-kubelet.io/) or [Kiyot](https://static.elotl.co/docs/latest/kiyot/kiyot.html) can run pods using on-demand resources
- Systems like [Virtual Kubelet](https://virtual-kubelet.io/) or Kiyot can run pods using on-demand resources
- Virtual Kubelet can leverage e.g. ACI or Fargate to run pods

View File

@@ -81,7 +81,7 @@
## What version are we running anyway?
- When I say, "I'm running Kubernetes 1.16", is that the version of:
- When I say, "I'm running Kubernetes 1.11", is that the version of:
- kubectl
@@ -139,73 +139,6 @@
---
## Important questions
- Should we upgrade the control plane before or after the kubelets?
- Within the control plane, should we upgrade the API server first or last?
- How often should we upgrade?
- How long are versions maintained?
- All the answers are in [the documentation about version skew policy](https://kubernetes.io/docs/setup/release/version-skew-policy/)!
- Let's review the key elements together ...
---
## Kubernetes uses semantic versioning
- Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.17.2:
- MAJOR = 1
- MINOR = 17
- PATCH = 2
- It's always possible to mix and match different PATCH releases
(e.g. 1.16.1 and 1.16.6 are compatible)
- It is recommended to run the latest PATCH release
(but it's mandatory only when there is a security advisory)
---
## Version skew
- API server must be more recent than its clients (kubelet and control plane)
- ... Which means it must always be upgraded first
- All components support a difference of one¹ MINOR version
- This allows live upgrades (since we can mix e.g. 1.15 and 1.16)
- It also means that going from 1.14 to 1.16 requires going through 1.15
.footnote[¹Except kubelet, which can be up to two MINOR behind API server,
and kubectl, which can be one MINOR ahead or behind API server.]
---
## Release cycle
- There is a new PATCH release whenever necessary
(every few weeks, or "ASAP" when there is a security vulnerability)
- There is a new MINOR release every 3 months (approximately)
- At any given time, three MINOR releases are maintained
- ... Which means that MINOR releases are maintained for approximately 9 months
- We should expect to upgrade at least every 3 months (on average)
---
## In practice
- We are going to update a few cluster components
@@ -218,6 +151,47 @@ and kubectl, which can be one MINOR ahead or behind API server.]
---
## Updating kubelet
- These nodes have been installed using the official Kubernetes packages
- We can therefore use `apt` or `apt-get`
.exercise[
- Log into node `test3`
- View available versions for package `kubelet`:
```bash
apt show kubelet -a | grep ^Version
```
- Upgrade kubelet:
```bash
sudo apt install kubelet=1.15.3-00
```
]
---
## Checking what we've done
.exercise[
- Log into node `test1`
- Check node versions:
```bash
kubectl get nodes -o wide
```
- Create a deployment and scale it to make sure that the node still works
]
---
## Updating the API server
- This cluster has been deployed with kubeadm
@@ -254,7 +228,7 @@ and kubectl, which can be one MINOR ahead or behind API server.]
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
```
- Look for the `image:` line, and update it to e.g. `v1.17.0`
- Look for the `image:` line, and update it to e.g. `v1.15.0`
]
@@ -275,27 +249,9 @@ and kubectl, which can be one MINOR ahead or behind API server.]
---
## Was that a good idea?
--
**No!**
--
- Remember the guideline we gave earlier:
*To update a component, use whatever was used to install it.*
- This control plane was deployed with kubeadm
- We should use kubeadm to upgrade it!
---
## Updating the whole control plane
- Let's make it right, and use kubeadm to upgrade the entire control plane
- As an example, we'll use kubeadm to upgrade the entire control plane
(note: this is possible only because the cluster was installed with kubeadm)
@@ -308,11 +264,11 @@ and kubectl, which can be one MINOR ahead or behind API server.]
]
Note 1: kubeadm thinks that our cluster is running 1.17.0.
Note 1: kubeadm thinks that our cluster is running 1.15.0.
<br/>It is confused by our manual upgrade of the API server!
Note 2: kubeadm itself is still version 1.16.6.
<br/>It doesn't know how to upgrade to 1.17.X.
Note 2: kubeadm itself is still version 1.14.6.
<br/>It doesn't know how to upgrade to 1.15.X.
---
@@ -334,8 +290,8 @@ Note 2: kubeadm itself is still version 1.16.6.
]
Note: kubeadm still thinks that our cluster is running 1.17.0.
<br/>But at least it knows about version 1.17.X now.
Note: kubeadm still thinks that our cluster is running 1.15.0.
<br/>But at least it knows about version 1.15.X now.
---
@@ -351,89 +307,28 @@ Note: kubeadm still thinks that our cluster is running 1.17.0.
- Perform the upgrade:
```bash
sudo kubeadm upgrade apply v1.17.2
sudo kubeadm upgrade apply v1.15.3
```
]
---
## Updating kubelet
## Updating kubelets
- These nodes have been installed using the official Kubernetes packages
- After updating the control plane, we need to update each kubelet
- We can therefore use `apt` or `apt-get`
- This requires running a special command on each node, to download the config
.exercise[
- Log into node `test3`
- View available versions for package `kubelet`:
```bash
apt show kubelet -a | grep ^Version
```
- Upgrade kubelet:
```bash
sudo apt install kubelet=1.17.2-00
```
]
---
## Checking what we've done
.exercise[
- Log into node `test1`
- Check node versions:
```bash
kubectl get nodes -o wide
```
- Create a deployment and scale it to make sure that the node still works
]
---
## Was that a good idea?
--
**Almost!**
--
- Yes, kubelet was installed with distribution packages
- However, kubeadm took care of configuring kubelet
(when doing `kubeadm join ...`)
- We were supposed to run a special command *before* upgrading kubelet!
- That command should be executed on each node
- It will download the kubelet configuration generated by kubeadm
---
## Upgrading kubelet the right way
- The command that we need to run was shown by kubeadm
(after upgrading the control plane)
(this config is generated by kubeadm)
.exercise[
- Download the configuration on each node, and upgrade kubelet:
```bash
for N in 1 2 3; do
ssh test$N sudo kubeadm upgrade node config --kubelet-version v1.17.2
ssh test$N sudo apt install kubelet=1.17.2-00
ssh test$N sudo kubeadm upgrade node config --kubelet-version v1.15.3
ssh test$N sudo apt install kubelet=1.15.3-00
done
```
]
@@ -442,7 +337,7 @@ Note: kubeadm still thinks that our cluster is running 1.17.0.
## Checking what we've done
- All our nodes should now be updated to version 1.17.2
- All our nodes should now be updated to version 1.15.3
.exercise[
@@ -459,12 +354,12 @@ class: extra-details
## Skipping versions
- This example worked because we went from 1.16 to 1.17
- This example worked because we went from 1.14 to 1.15
- If you are upgrading from e.g. 1.14, you will have to go through 1.15 first
- If you are upgrading from e.g. 1.13, you will generally have to go through 1.14 first
- This means upgrading kubeadm to 1.15.X, then using it to upgrade the cluster
- This means upgrading kubeadm to 1.14.X, then using it to upgrade the cluster
- Then upgrading kubeadm to 1.16.X, etc.
- Then upgrading kubeadm to 1.15.X, etc.
- **Make sure to read the release notes before upgrading!**

View File

@@ -28,7 +28,7 @@ The reference plugins are available [here].
Look in each plugin's directory for its documentation.
[here]: https://github.com/containernetworking/plugins
[here]: https://github.com/containernetworking/plugins/tree/master/plugins
---

View File

@@ -10,29 +10,6 @@
---
## What can we do with Kubernetes?
- Let's imagine that we have a 3-tier e-commerce app:
- web frontend
- API backend
- database (that we will keep out of Kubernetes for now)
- We have built images for our frontend and backend components
(e.g. with Dockerfiles and `docker build`)
- We are running them successfully with a local environment
(e.g. with Docker Compose)
- Let's see how we would deploy our app on Kubernetes!
---
## Basic things we can ask Kubernetes to do
--

View File

@@ -52,7 +52,7 @@
<!-- ##VERSION## -->
- Unfortunately, as of Kubernetes 1.17, the CLI cannot create daemon sets
- Unfortunately, as of Kubernetes 1.15, the CLI cannot create daemon sets
--
@@ -427,7 +427,7 @@ class: extra-details
- We need to change the selector of the `rng` service!
- Let's add another label to that selector (e.g. `active=yes`)
- Let's add another label to that selector (e.g. `enabled=yes`)
---
@@ -445,11 +445,11 @@ class: extra-details
## The plan
1. Add the label `active=yes` to all our `rng` pods
1. Add the label `enabled=yes` to all our `rng` pods
2. Update the selector for the `rng` service to also include `active=yes`
2. Update the selector for the `rng` service to also include `enabled=yes`
3. Toggle traffic to a pod by manually adding/removing the `active` label
3. Toggle traffic to a pod by manually adding/removing the `enabled` label
4. Profit!
@@ -464,7 +464,7 @@ be any interruption.*
## Adding labels to pods
- We want to add the label `active=yes` to all pods that have `app=rng`
- We want to add the label `enabled=yes` to all pods that have `app=rng`
- We could edit each pod one by one with `kubectl edit` ...
@@ -474,9 +474,9 @@ be any interruption.*
.exercise[
- Add `active=yes` to all pods that have `app=rng`:
- Add `enabled=yes` to all pods that have `app=rng`:
```bash
kubectl label pods -l app=rng active=yes
kubectl label pods -l app=rng enabled=yes
```
]
@@ -495,7 +495,7 @@ be any interruption.*
.exercise[
- Update the service to add `active: yes` to its selector:
- Update the service to add `enabled: yes` to its selector:
```bash
kubectl edit service rng
```
@@ -504,7 +504,7 @@ be any interruption.*
```wait Please edit the object below```
```keys /app: rng```
```key ^J```
```keys noactive: yes```
```keys noenabled: yes```
```key ^[``` ]
```keys :wq```
```key ^J```
@@ -530,7 +530,7 @@ be any interruption.*
- If we want the string `"42"` or the string `"yes"`, we have to quote them
- So we have to use `active: "yes"`
- So we have to use `enabled: "yes"`
.footnote[For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!]
@@ -542,7 +542,7 @@ be any interruption.*
- Update the YAML manifest of the service
- Add `active: "yes"` to its selector
- Add `enabled: "yes"` to its selector
<!--
```wait Please edit the object below```
@@ -566,7 +566,7 @@ If we did everything correctly, the web UI shouldn't show any change.
- We want to disable the pod that was created by the deployment
- All we have to do, is remove the `active` label from that pod
- All we have to do, is remove the `enabled` label from that pod
- To identify that pod, we can use its name
@@ -600,7 +600,7 @@ If we did everything correctly, the web UI shouldn't show any change.
- In another window, remove the label from the pod:
```bash
kubectl label pod -l app=rng,pod-template-hash active-
kubectl label pod -l app=rng,pod-template-hash enabled-
```
(The stream of HTTP logs should stop immediately)
@@ -623,7 +623,7 @@ class: extra-details
- If we scale up our cluster by adding new nodes, the daemon set will create more pods
- These pods won't have the `active=yes` label
- These pods won't have the `enabled=yes` label
- If we want these pods to have that label, we need to edit the daemon set spec

View File

@@ -1,10 +0,0 @@
## We are done, what else?
We have seen what it means to develop an application on Kubernetes.
There are still a few subjects to tackle that are not purely developer topics.
They still have *some relevance* for developers:
- Monitoring
- Security

View File

@@ -1,5 +0,0 @@
## Exercise - building with Kubernetes
- Let's go to https://github.com/enix/kubecoin
- Our goal is to follow the instructions and complete exercise #1

View File

@@ -1,3 +0,0 @@
## Exercise - build with kaniko
Complete exercise #2 (again, code at: https://github.com/enix/kubecoin )

View File

@@ -1,5 +0,0 @@
## Exercise - monitor with OpenTelemetry
Complete exercise #5 (again, code at: https://github.com/enix/kubecoin )
*Note: not all daemons are "ready" for OpenTelemetry; only `rng` and `worker`*

View File

@@ -1,5 +0,0 @@
## Exercise - monitor with Prometheus
Complete exercise #4 (again, code at: https://github.com/enix/kubecoin )
*Note: not all daemons are "ready" for Prometheus; only `hasher` and `redis`*

View File

@@ -8,8 +8,6 @@ We are going to cover:
- Admission Webhooks
- The Aggregation Layer
---
## Revisiting the API server
@@ -48,90 +46,6 @@ We are going to cover:
---
## A very simple CRD
The YAML below describes a very simple CRD representing different kinds of coffee:
```yaml
apiVersion: apiextensions.k8s.io/v1alpha1
kind: CustomResourceDefinition
metadata:
name: coffees.container.training
spec:
group: container.training
version: v1alpha1
scope: Namespaced
names:
plural: coffees
singular: coffee
kind: Coffee
shortNames:
- cof
```
---
## Creating a CRD
- Let's create the Custom Resource Definition for our Coffee resource
.exercise[
- Load the CRD:
```bash
kubectl apply -f ~/container.training/k8s/coffee-1.yaml
```
- Confirm that it shows up:
```bash
kubectl get crds
```
]
---
## Creating custom resources
The YAML below defines a resource using the CRD that we just created:
```yaml
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: arabica
spec:
taste: strong
```
.exercise[
- Create a few types of coffee beans:
```bash
kubectl apply -f ~/container.training/k8s/coffees.yaml
```
]
---
## Viewing custom resources
- By default, `kubectl get` only shows name and age of custom resources
.exercise[
- View the coffee beans that we just created:
```bash
kubectl get coffees
```
]
- We can improve that, but it's outside the scope of this section!
---
## What can we do with CRDs?
There are many possibilities!
@@ -151,7 +65,7 @@ There are many possibilities!
- Replacing built-in types with CRDs
(see [this lightning talk by Tim Hockin](https://www.youtube.com/watch?v=ji0FWzFwNhA))
(see [this lightning talk by Tim Hockin](https://www.youtube.com/watch?v=ji0FWzFwNhA&index=2&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU))
---
@@ -167,7 +81,7 @@ There are many possibilities!
- Generally, when creating a CRD, we also want to run a *controller*
(otherwise nothing will happen when we create resources of that type)
(otherwise nothing will happen when we create resources of that type)
- The controller will typically *watch* our custom resources
@@ -181,22 +95,6 @@ Examples:
---
## (Ab)using the API server
- If we need to store something "safely" (as in: in etcd), we can use CRDs
- This gives us primitives to read/write/list objects (and optionally validate them)
- The Kubernetes API server can run on its own
(without the scheduler, controller manager, and kubelets)
- By loading CRDs, we can have it manage totally different objects
(unrelated to containers, clusters, etc.)
---
## Service catalog
- *Service catalog* is another extension mechanism
@@ -211,7 +109,7 @@ Examples:
- ClusterServiceClass
- ClusterServicePlan
- ServiceInstance
- ServiceBinding
- ServiceBinding
- It uses the Open service broker API
@@ -219,13 +117,17 @@ Examples:
## Admission controllers
- Admission controllers are another way to extend the Kubernetes API
- When a Pod is created, it is associated with a ServiceAccount
- Instead of creating new types, admission controllers can transform or vet API requests
(even if we did not specify one explicitly)
- The diagram on the next slide shows the path of an API request
- That ServiceAccount was added on the fly by an *admission controller*
(courtesy of Banzai Cloud)
(specifically, a *mutating admission controller*)
- Admission controllers sit on the API request path
(see the cool diagram on next slide, courtesy of Banzai Cloud)
---
@@ -235,7 +137,7 @@ class: pic
---
## Types of admission controllers
## Admission controllers
- *Validating* admission controllers can accept/reject the API call
@@ -249,27 +151,7 @@ class: pic
(see [documentation](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#what-does-each-admission-controller-do) for a list)
- We can also dynamically define and register our own
---
class: extra-details
## Some built-in admission controllers
- ServiceAccount:
automatically adds a ServiceAccount to Pods that don't explicitly specify one
- LimitRanger:
applies resource constraints specified by LimitRange objects when Pods are created
- NamespaceAutoProvision:
automatically creates namespaces when an object is created in a non-existent namespace
*Note: #1 and #2 are enabled by default; #3 is not.*
- But we can also define our own!
---
@@ -309,25 +191,19 @@ class: extra-details
---
## The aggregation layer
## (Ab)using the API server
- We can delegate entire parts of the Kubernetes API to external servers
- If we need to store something "safely" (as in: in etcd), we can use CRDs
- This is done by creating APIService resources
- This gives us primitives to read/write/list objects (and optionally validate them)
(check them with `kubectl get apiservices`!)
- The Kubernetes API server can run on its own
- The APIService resource maps a type (kind) and version to an external service
(without the scheduler, controller manager, and kubelets)
- All requests concerning that type are sent (proxied) to the external service
- By loading CRDs, we can have it manage totally different objects
- This allows to have resources like CRDs, but that aren't stored in etcd
- Example: `metrics-server`
(storing live metrics in etcd would be extremely inefficient)
- Requires significantly more work than CRDs!
(unrelated to containers, clusters, etc.)
---
@@ -342,5 +218,3 @@ class: extra-details
- [Built-in Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/)
- [Dynamic Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/)
- [Aggregation Layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/)

View File

@@ -314,7 +314,7 @@ class: extra-details
- List all the resources created by this release:
```bash
kubectl get all --selector=release=java4ever
kuectl get all --selector=release=java4ever
```
]
@@ -416,4 +416,4 @@ All unspecified values will take the default values defined in the chart.
curl localhost:$PORT/sample/
```
]
]

View File

@@ -65,7 +65,7 @@ Where does that come from?
- Look for ConfigMaps and Secrets:
```bash
kubectl get configmaps,secrets
kuebectl get configmaps,secrets
```
]

View File

@@ -120,13 +120,19 @@
- We want our ingress load balancer to be available on port 80
- The best way to do that would be with a `LoadBalancer` service
- We could do that with a `LoadBalancer` service
... but it requires support from the underlying infrastructure
- Instead, we are going to use the `hostNetwork` mode on the Traefik pods
- We could use pods specifying `hostPort: 80`
- Let's see what this `hostNetwork` mode is about ...
... but with most CNI plugins, this [doesn't work or requires additional setup](https://github.com/kubernetes/kubernetes/issues/23920)
- We could use a `NodePort` service
... but that requires [changing the `--service-node-port-range` flag in the API server](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/)
- Last resort: the `hostNetwork` mode
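A minimal sketch of that last-resort `hostNetwork` option (the ingress-controller name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-controller
spec:
  selector:
    matchLabels: { app: ingress-controller }
  template:
    metadata:
      labels: { app: ingress-controller }
    spec:
      hostNetwork: true   # the pod shares the node's network namespace,
                          # so listening on :80 binds the node's port 80 directly
      containers:
      - name: traefik
        image: traefik:1.7
        ports:
        - containerPort: 80
```

With `hostNetwork: true` there is no NAT or port mapping involved, which is why it works regardless of the CNI plugin; the trade-off is that the pod sees (and can bind) all of the node's network interfaces.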
---
@@ -164,26 +170,6 @@
---
class: extra-details
## Other techniques to expose port 80
- We could use pods specifying `hostPort: 80`
... but with most CNI plugins, this [doesn't work or requires additional setup](https://github.com/kubernetes/kubernetes/issues/23920)
- We could use a `NodePort` service
... but that requires [changing the `--service-node-port-range` flag in the API server](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/)
- We could create a service with an external IP
... this would work, but would require a few extra steps
(figuring out the IP address and adding it to the service)
---
## Running Traefik
- The [Traefik documentation](https://docs.traefik.io/user-guide/kubernetes/#deploy-trfik-using-a-deployment-or-daemonset) tells us to pick between Deployment and Daemon Set

View File

@@ -1,34 +0,0 @@
## Privileged container
- Running a privileged container can be really harmful for the node it runs on.
- Getting control of a node could expose other containers in the cluster, and the cluster itself
- It's even worse when it is Docker that runs in this privileged container
- `docker build` doesn't allow running privileged containers to build layers
- but nothing forbids running `docker run --privileged`
---
## Kaniko
- https://github.com/GoogleContainerTools/kaniko
- *kaniko doesn't depend on a Docker daemon and executes each command
within a Dockerfile completely in userspace*
- Kaniko is only a build system; it doesn't provide a runtime like Docker does
- it generates OCI-compatible images, which can run on Docker or any other CRI runtime
- it uses a different cache system than Docker
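As a hedged sketch (the pod name, image tag, and git context URL are illustrative), a one-shot Kaniko build can run as a plain pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=Dockerfile
    - --context=git://github.com/octocat/Spoon-Knife
    - --no-push        # build only; use --destination=... to push to a registry
```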
---
## Rootless docker and rootless buildkit
- This is experimental
- It requires many kernel parameters and options to be set
- But it exists

View File

@@ -1,76 +1,20 @@
# Exposing containers
- We can connect to our pods using their IP address
- Then we need to figure out a lot of things:
  - how do we look up the IP address of the pod(s)?
  - how do we connect from outside the cluster?
  - how do we load balance traffic?
  - what if a pod fails?
- Kubernetes has a resource type named *Service*
- Services address all these questions!
---
## Services in a nutshell
- Services give us a *stable endpoint* to connect to a pod or a group of pods
- An easy way to create a service is to use `kubectl expose`
- If we have a deployment named `my-little-deploy`, we can run:
`kubectl expose deployment my-little-deploy --port=80`
... and this will create a service with the same name (`my-little-deploy`)
- Services are automatically added to an internal DNS zone
(in the example above, our code can now connect to http://my-little-deploy/)
---
## Advantages of services
- We don't need to look up the IP address of the pod(s)
(we resolve the IP address of the service using DNS)
- There are multiple service types; some of them allow external traffic
(e.g. `LoadBalancer` and `NodePort`)
- Services provide load balancing
(for both internal and external traffic)
- Service addresses are independent from pods' addresses
(when a pod fails, the service seamlessly sends traffic to its replacement)
---
## Many kinds and flavors of service
- There are different types of services, detailed on the following slides:
`ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName`
- There are also *headless services*
- Services can also have optional *external IPs*
- HTTP services can also use `Ingress` resources (more on that later)
---
@@ -129,6 +73,24 @@
---
class: extra-details
## `ExternalName`
- No load balancer (internal or external) is created
- Only a DNS entry gets added to the DNS managed by Kubernetes
- That DNS entry will just be a `CNAME` to a provided record
Example:
```bash
kubectl create service externalname k8s --external-name kubernetes.io
```
*Creates a CNAME `k8s` pointing to `kubernetes.io`*
---
## Running containers with open ports
- Since `ping` doesn't have anything to connect to, we'll have to run something else
@@ -213,7 +175,9 @@
- As a result: you *have to* indicate the port number for your service
(with some exceptions, like `ExternalName` or headless services, covered later)
- Running services with arbitrary port (or port ranges) requires hacks
(e.g. host networking mode)
---
@@ -254,48 +218,7 @@ Try it a few times! Our requests are load balanced across multiple pods.
class: extra-details
## External IPs
- We can add an External IP to a service, e.g.:
```bash
kubectl expose deploy my-little-deploy --port=80 --external-ip=1.2.3.4
```
- `1.2.3.4` should be the address of one of our nodes
(it could also be a virtual address, service address, or VIP, shared by multiple nodes)
- Connections to `1.2.3.4:80` will be sent to our service
- External IPs will also show up on services of type `LoadBalancer`
(they will be added automatically by the process provisioning the load balancer)
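As a sketch, the YAML equivalent of the service above (the `app` label selector is an assumption) would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-little-deploy
spec:
  selector:
    app: my-little-deploy    # assumes the pods carry this label
  ports:
  - port: 80
  externalIPs:
  - 1.2.3.4                  # must be an address reachable on one of our nodes
```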
---
class: extra-details
## If we don't need a load balancer
- Sometimes, we want to access our scaled services directly:
@@ -315,7 +238,7 @@ class: extra-details
class: extra-details
## Headless services
- A headless service is obtained by setting the `clusterIP` field to `None`
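For instance, a minimal headless service for the `httpenv` pods (the label selector and port are assumptions) could look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpenv-headless
spec:
  clusterIP: None        # headless: DNS returns the pod IPs directly
  selector:
    app: httpenv
  ports:
  - port: 8888
```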
@@ -401,32 +324,18 @@ error: the server doesn't have a resource type "endpoint"
class: extra-details
## `ExternalIP`
- When creating a service, we can also specify an `ExternalIP`
(this is not a type, but an extra attribute to the service)
- It will make the service available on this IP address
(if the IP address belongs to a node of the cluster)
---
class: extra-details
## `Ingress`
- Ingresses are another type (kind) of resource

View File

@@ -1,78 +0,0 @@
# Security and kubernetes
There are many mechanisms in Kubernetes to ensure security.
Obviously, the more you constrain your app, the better.
There are also mechanisms to prevent "unsafe" applications from being launched on
Kubernetes, but that's more for the ops folks 😈 (more on that in the next days)
Let's focus on what we can do on the developer laptop to make apps
compatible with a secured system, whether enforced or not (it's always a good practice)
---
## No container in privileged mode
- risks:
- If a privileged container gets compromised,
we basically get full access to the node from within a container
(no need to tamper with auth logs or alter binaries).
- Sniffing the network often allows getting access to the entire cluster.
- how to avoid:
```
[...]
spec:
containers:
- name: foo
securityContext:
privileged: false
```
Luckily, that's the default!
---
## No container running as "root"
- risks:
- bind-mounting a directory like /usr/bin allows changing core system files on the node
<br/>ex: copy a tampered version of "ping", wait for an admin to log in
and issue a ping command, and bingo!
- how to avoid:
```
[...]
spec:
containers:
- name: foo
securityContext:
runAsUser: 1000
runAsGroup: 100
```
- The default is to use the image default
- If you're writing your own Dockerfile, don't forget about the `USER` instruction
---
## Capabilities
- You can give capabilities one-by-one to a container
- It's useful if you need extra capabilities (for some reason) without granting full 'root' privileges
- risks: none, unless you grant a long list of capabilities
- how to use:
```
[...]
spec:
containers:
- name: foo
securityContext:
capabilities:
add: ["NET_ADMIN", "SYS_TIME"]
drop: []
```
The default is to use the container runtime's defaults
- and we can also drop default capabilities granted by the container runtime!

View File

@@ -102,6 +102,8 @@
]
- Some tools like Helm will create namespaces automatically when needed
---
## Using namespaces
@@ -339,29 +341,12 @@ Note: we could have used `--namespace=default` for the same result.
- `kube-ps1` makes it easy to track these, by showing them in our shell prompt
- It is installed on our training clusters, and when using [shpod](https://github.com/jpetazzo/shpod)
- It gives us a prompt looking like this one:
```
[123.45.67.89] `(kubernetes-admin@kubernetes:default)` docker@node1 ~
```
(The highlighted part is `context:namespace`, managed by `kube-ps1`)
- Highly recommended if you work across multiple contexts or namespaces!
---
## Installing `kube-ps1`
- It's a simple shell script available from https://github.com/jonmosco/kube-ps1
- It needs to be [installed in our profile/rc files](https://github.com/jonmosco/kube-ps1#installing)
(instructions differ depending on platform, shell, etc.)
- Once installed, it defines aliases called `kube_ps1`, `kubeon`, `kubeoff`
(to selectively enable/disable it when needed)
- Pro-tip: install it on your machine during the next break!

View File

@@ -1,179 +0,0 @@
# Development Workflow
In this section we will see how to set up a local development workflow.
We will list multiple options.
Keep in mind that we don't have to use *all* these tools!
It's up to the developer to find what best suits them.
---
## What does it mean to develop on Kubernetes?
In theory, the generic workflow is:
1. Make changes to our code or edit a Dockerfile
2. Build a new Docker image with a new tag
3. Push that Docker image to a registry
4. Update the YAML or templates referencing that Docker image
<br/>(e.g. of the corresponding Deployment, StatefulSet, Job ...)
5. Apply the YAML or templates
6. Are we satisfied with the result?
<br/>No → go back to step 1 (or step 4 if the image is OK)
<br/>Yes → commit and push our changes to source control
---
## A few quirks
In practice, there are some details that make this workflow more complex.
- We need a Docker container registry to store our images
<br/>
(for Open Source projects, a free Docker Hub account works fine)
- We need to set image tags properly, hopefully automatically
- If we decide to use a fixed tag (like `:latest`) instead:
- we need to specify `imagePullPolicy=Always` to force image pull
- we need to trigger a rollout when we want to deploy a new image
<br/>(with `kubectl rollout restart` or by killing the running pods)
- We need a fast internet connection to push the images
- We need to regularly clean up the registry to avoid accumulating old images
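For the fixed-tag case mentioned above, the container spec fragment could look like this sketch (registry and image names are illustrative):

```yaml
# Illustrative fragment of a Deployment's pod template
containers:
- name: app
  image: registry.example.com/myapp:latest
  imagePullPolicy: Always    # force a pull on each pod start, even for a fixed tag
```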
---
## When developing locally
- If we work with a local cluster, pushes and pulls are much faster
- Even better, with a one-node cluster, most of these problems disappear
- If we build and run the images on the same node, ...
- we don't need to push images
- we don't need a fast internet connection
- we don't need a registry
- we can use bind mounts to edit code locally and make changes available immediately in running containers
- This means that it is much simpler to deploy to a local development environment (like Minikube, Docker Desktop ...) than to a "real" cluster
---
## Minikube
- Start a VM with the hypervisor of your choice: VirtualBox, kvm, Hyper-V ...
- Well supported by the Kubernetes community
- Lots of addons
- Easy cleanup: delete the VM with `minikube delete`
- Bind mounts depend on the underlying hypervisor
(they may require additional setup)
---
## Docker Desktop
- Available for Mac and Windows
- Start a VM with the appropriate hypervisor (even better!)
- Bind mounts work out of the box
```yaml
volumes:
- name: repo_dir
hostPath:
path: /C/Users/Enix/my_code_repository
```
- Ingress and other addons need to be installed manually
---
## Kind
- Kubernetes-in-Docker
- Uses Docker-in-Docker to run Kubernetes
<br/>
(technically, it's more like Containerd-in-Docker)
- We don't get a real Docker Engine (and cannot build Dockerfiles)
- Single-node by default, but multi-node clusters are possible
- Very convenient to test Kubernetes deployments when only Docker is available
<br/>
(e.g. on public CI services like Travis, Circle, GitHub Actions ...)
- Bind mounts require extra configuration
- Extra configuration for a couple of addons; totally custom for others
- Doesn't work with BTRFS (sorry BTRFS users😢)
---
## microk8s
- Distribution of Kubernetes using Snap
(Snap is a container-like method to install software)
- Available on Ubuntu and derivatives
- Bind mounts work natively (but require extra setup if we run in a VM)
- Big list of addons; easy to install
---
## Proper tooling
The simple workflow seems to be:
- set up a one-node cluster with one of the methods mentioned previously,
- find the remote Docker endpoint,
- configure the `DOCKER_HOST` variable to use that endpoint,
- follow the previous six-step workflow.
Can we do better?
---
## Helpers
- Skaffold (https://skaffold.dev/):
- builds with Docker, Kaniko, or Google Cloud Build
- deploys with plain YAML manifests, Kustomize, or Helm
- Tilt (https://tilt.dev/)
- Tiltfiles are written in Starlark (a Python dialect)
- primitives for building with Docker
- primitives for deploying with plain YAML manifests, Kustomize, or Helm
- Garden (https://garden.io/)
- Forge (https://forge.sh/)
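As a rough sketch of what these tools look like (the artifact name and manifest path are illustrative), a minimal `skaffold.yaml` might be:

```yaml
apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
  - image: myapp             # built from the Dockerfile in the current directory
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml             # plain YAML manifests applied after each build
```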

View File

@@ -1,84 +0,0 @@
# OpenTelemetry
*OpenTelemetry* is a "tracing" framework.
It's a fusion of two other frameworks:
*OpenTracing* and *OpenCensus*.
Its goal is to provide deep integration with programming languages and
application frameworks to enable deep-dive tracing of events across different components.
---
## Span! Span! Span!
- A unit of tracing is called a *span*
- A span has: a start time, a stop time, and an ID
- It represents an action that took some time to complete
(e.g.: function call, database transaction, REST API call ...)
- A span can have a parent span, and can have multiple child spans
(e.g.: when calling function `B`, sub-calls to `C` and `D` were issued)
- Think of it as a "tree" of calls
---
## Distributed tracing
- When two components interact, their spans can be connected together
- Example: microservice `A` sends a REST API call to microservice `B`
- `A` will have a span for the call to `B`
- `B` will have a span for the call from `A`
<br/>(that normally starts shortly after, and finishes shortly before)
- the span of `A` will be the parent of the span of `B`
- they join the same "tree" of calls
<!-- FIXME the thing below? -->
Details: `A` sends headers (depending on the protocol used) carrying the span ID,
so that `B` can generate a child span and join the same tree of calls
---
## Centrally stored
- What do we do with all these spans?
- We store them!
- In the previous example:
- `A` will send trace information to its local agent
- `B` will do the same
- every span will end up in the same DB
- at a later point, we can reconstruct the "tree" of calls and analyze it
- There are multiple implementations of this stack (agent + DB + web UI)
(the most famous open source ones are Zipkin and Jaeger)
---
## Data sampling
- Do we store *all* the spans?
(it looks like this could need a lot of storage!)
- No, we can use *sampling*, to reduce storage and network requirements
- Smart sampling is applied directly in the application, to save CPU when a span is not needed
- It also ensures that if a span is marked as sampled, all child spans are sampled as well
(so that the tree of call is complete)

View File

@@ -530,7 +530,7 @@ After the Kibana UI loads, we need to click around a bit
- Look up the NodePort number and connect to it:
```bash
kubectl get services
```
]

View File

@@ -1,150 +0,0 @@
# Prometheus
Prometheus is a monitoring system with a small storage I/O footprint.
It's quite ubiquitous in the Kubernetes world.
This section is not an in-depth description of Prometheus.
*Note: More on Prometheus next day!*
<!--
FIXME maybe just use prometheus.md and add this file after it?
This way there is not need to write a Prom intro.
-->
---
## Prometheus exporter
- Prometheus *scrapes* (pulls) metrics from *exporters*
- A Prometheus exporter is an HTTP endpoint serving a response like this one:
```
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027 1395066363000
http_requests_total{method="post",code="400"} 3 1395066363000
# Minimalistic line:
metric_without_timestamp_and_labels 12.47
```
- Our goal, as a developer, will be to expose such an endpoint to Prometheus
---
## Implementing a Prometheus exporter
Multiple strategies can be used:
- Implement the exporter in the application itself
(especially if it's already an HTTP server)
- Use building blocks that may already expose such an endpoint
(puma, uwsgi)
- Add a sidecar exporter that leverages and adapts an existing monitoring channel
(e.g. JMX for Java applications)
---
## Implementing a Prometheus exporter
- The Prometheus client libraries are often the easiest solution
- They offer multiple ways of integration, including:
- "I'm already running a web server, just add a monitoring route"
- "I don't have a web server (or I want another one), please run one in a thread"
- Client libraries for various languages:
- https://github.com/prometheus/client_python
- https://github.com/prometheus/client_ruby
- https://github.com/prometheus/client_golang
(Can you see the pattern?)
---
## Adding a sidecar exporter
- There are many exporters available already:
https://prometheus.io/docs/instrumenting/exporters/
- These are "translators" from one monitoring channel to another
- Writing your own is not complicated
(using the client libraries mentioned previously)
- Avoid exposing the internal monitoring channel more than necessary
(the app and its sidecars run in the same network namespace,
<br/>so they can communicate over `localhost`)
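A sidecar exporter setup could be sketched like this (the exporter image, its flag, and its port are illustrative; adjust to the exporter you pick):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-exporter
spec:
  containers:
  - name: web
    image: nginx
  - name: exporter                         # scrapes nginx over localhost ...
    image: nginx/nginx-prometheus-exporter
    args: [ "-nginx.scrape-uri=http://localhost/stub_status" ]
    ports:
    - containerPort: 9113                  # ... and serves Prometheus metrics here
```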
---
## Configuring the Prometheus server
- We need to tell the Prometheus server to *scrape* our exporter
- Prometheus has a very flexible "service discovery" mechanism
(to discover and enumerate the targets that it should scrape)
- Depending on how we installed Prometheus, various methods might be available
---
## Configuring Prometheus, option 1
- Edit `prometheus.conf`
- Always possible
(we should always have a Prometheus configuration file somewhere!)
- Dangerous and error-prone
(if we get it wrong, it is very easy to break Prometheus)
- Hard to maintain
(the file will grow over time, and might accumulate obsolete information)
---
## Configuring Prometheus, option 2
- Add *annotations* to the pods or services to monitor
- We can do that if Prometheus is installed with the official Helm chart
- Prometheus will detect these annotations and automatically start scraping
- Example:
```yaml
annotations:
prometheus.io/port: "9090"
prometheus.io/path: /metrics
```
---
## Configuring Prometheus, option 3
- Create a ServiceMonitor custom resource
- We can do that if we are using the CoreOS Prometheus operator
- See the [Prometheus operator documentation](https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#servicemonitor) for more details

View File

@@ -1,99 +0,0 @@
# Registries
- There are lots of options to ship our container images to a registry
- We can group them depending on some characteristics:
- SaaS or self-hosted
- with or without a build system
---
## Docker registry
- Self-hosted and [open source](https://github.com/docker/distribution)
- Runs in a single Docker container
- Supports multiple storage backends
- Supports basic authentication out of the box
- [Other authentication schemes](https://docs.docker.com/registry/deploying/#more-advanced-authentication) through proxy or delegation
- No build system
- To run it with the Docker engine:
```shell
docker run -d -p 5000:5000 --name registry registry:2
```
- Or use the dedicated plugin in minikube, microk8s, etc.
---
## Harbor
- Self-hosted and [open source](https://github.com/goharbor/harbor)
- Supports both Docker images and Helm charts
- Advanced authentication mechanisms
- Multi-site synchronisation
- Vulnerability scanning
- No build system
- To run it with Helm:
```shell
helm repo add harbor https://helm.goharbor.io
helm install my-release harbor/harbor
```
---
## Gitlab
- Available both as a SaaS product and self-hosted
- SaaS product is free for open source projects; paid subscription otherwise
- Some parts are [open source](https://gitlab.com/gitlab-org/gitlab-foss/)
- Integrated CI
- No build system (but a custom build system can be hooked to the CI)
- To run it with Helm:
```shell
helm repo add gitlab https://charts.gitlab.io/
helm install gitlab gitlab/gitlab
```
---
## Docker Hub
- SaaS product: [hub.docker.com](https://hub.docker.com)
- Free for public images; paid subscription for private ones
- Build system included
---
## Quay
- Available both as a SaaS product ([quay.io](https://quay.io)) and self-hosted (Quay)
- SaaS product is free for public repositories; paid subscription otherwise
- Some components of Quay and quay.io are open source
(see [Project Quay](https://www.projectquay.io/) and the [announcement](https://www.redhat.com/en/blog/red-hat-introduces-open-source-project-quay-container-registry))
- Build system included

View File

@@ -80,7 +80,6 @@
- Rolling updates can be monitored with the `kubectl rollout` subcommand
---
class: hide-exercise
## Rolling out the new `worker` service
@@ -110,7 +109,6 @@ class: hide-exercise
That rollout should be pretty quick. What shows in the web UI?
---
class: hide-exercise
## Give it some time
@@ -133,7 +131,6 @@ class: hide-exercise
(The grace period is 30 seconds, but [can be changed](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) if needed)
---
class: hide-exercise
## Rolling out something invalid
@@ -151,10 +148,10 @@ class: hide-exercise
kubectl rollout status deploy worker
```
<!--
```wait Waiting for deployment```
```key ^C```
-->
]
@@ -165,7 +162,6 @@ Our rollout is stuck. However, the app is not dead.
(After a minute, it will stabilize to be 20-25% slower.)
---
class: hide-exercise
## What's going on with our rollout?
@@ -206,7 +202,6 @@ class: extra-details
- Our rollout is stuck at this point!
---
class: hide-exercise
## Checking the dashboard during the bad rollout
@@ -223,7 +218,6 @@ If you didn't deploy the Kubernetes dashboard earlier, just skip this slide.
]
---
class: hide-exercise
## Recovering from a bad rollout
@@ -246,7 +240,6 @@ class: hide-exercise
]
---
class: hide-exercise
## Rolling back to an older version
@@ -257,7 +250,6 @@ class: hide-exercise
- How can we get back to the previous version?
---
class: hide-exercise
## Multiple "undos"
@@ -277,7 +269,6 @@ class: hide-exercise
🤔 That didn't work.
---
class: hide-exercise
## Multiple "undos" don't work
@@ -300,8 +291,6 @@ class: hide-exercise
---
class: hide-exercise
## In this specific scenario
- Our version numbers are easy to guess
@@ -312,8 +301,6 @@ class: hide-exercise
---
class: hide-exercise
## Listing versions
- We can list successive versions of a Deployment with `kubectl rollout history`
@@ -334,7 +321,6 @@ We might see something like 1, 4, 5.
(Depending on how many "undos" we did before.)
---
class: hide-exercise
## Explaining deployment revisions
@@ -354,7 +340,6 @@ class: hide-exercise
---
class: extra-details
class: hide-exercise
## What about the missing revisions?
@@ -369,7 +354,6 @@ class: hide-exercise
(if we wanted to!)
---
class: hide-exercise
## Rolling back to an older version
@@ -389,7 +373,6 @@ class: hide-exercise
---
class: extra-details
class: hide-exercise
## Changing rollout parameters
@@ -397,7 +380,7 @@ class: hide-exercise
- revert to `v0.1`
- be conservative on availability (always have desired number of available workers)
- go slow on rollout speed (update only one pod at a time)
- give some time to our workers to "warm up" before starting more
The corresponding changes can be expressed in the following YAML snippet:
@@ -421,7 +404,6 @@ spec:
---
class: extra-details
class: hide-exercise
## Applying changes through a YAML patch
@@ -452,6 +434,6 @@ class: hide-exercise
kubectl get deploy -o json worker |
jq "{name:.metadata.name} + .spec.strategy.rollingUpdate"
```
]
]
]

View File

@@ -1,72 +0,0 @@
# sealed-secrets
- https://github.com/bitnami-labs/sealed-secrets
- has a server-side component (a standard Kubernetes deployment) and a client-side *kubeseal* binary
- the server side starts by generating a key pair; it keeps the private key and exposes the public key
- To create a sealed secret, you only need access to the public key
- You can enforce access with Kubernetes RBAC rules
---
## sealed-secrets how to
- adding a secret: *kubeseal* encrypts it with the public key
- the server-side controller re-creates the original secret when the encrypted one is added to the cluster
- this makes it "safe" to add these secrets to your source tree
- since version 0.9, key rotation is enabled by default, so remember to back up the private keys regularly
<br/> (or you won't be able to decrypt all your secrets, in case of *disaster recovery*)
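The object stored in the cluster (and in the source tree) looks roughly like this; the ciphertext below is shortened and purely illustrative:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: my-secret
  namespace: default
spec:
  encryptedData:
    foo: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq...   # ciphertext produced by kubeseal
```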
---
## First "sealed-secret"
.exercise[
- Install *kubeseal*
```bash
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.7/kubeseal-linux-amd64 -O kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
```
- Install controller
```bash
helm install -n kube-system sealed-secrets-controller stable/sealed-secrets
```
- Create a secret you don't want to leak
```bash
kubectl create secret generic --from-literal=foo=bar my-secret -o yaml --dry-run \
| kubeseal > mysecret.yaml
kubectl apply -f mysecret.yaml
```
]
---
## Alternative: sops / git crypt
- You can work at the VCS level (i.e. totally abstracted from Kubernetes)
- sops (https://github.com/mozilla/sops) is VCS-agnostic and encrypts portions of files
- git-crypt works with Git to transparently encrypt (some) files in a repository
---
## Other alternatives
- You can delegate secret management to another component, like *HashiCorp Vault*
- It can work in multiple ways:
- encrypting secrets in the API server (instead of the oh-so-secure *base64*)
- encrypting secrets before sending them to Kubernetes (avoiding plain text in Git)
- managing secrets entirely in Vault and exposing them to containers via volumes

View File

@@ -1,15 +0,0 @@
## Software development
For years, decades (centuries!), software development has followed the same principles:
- Development
- Testing
- Packaging
- Shipping
- Deployment
We will see how this maps to the Kubernetes world.

View File

@@ -1,17 +0,0 @@
# Automation && CI/CD
What we've done so far:
- development of our application
- manual testing, and exploration of automated testing strategies
- packaging in a container image
- shipping that image to a registry
What still needs to be done:
- deployment of our application
- automation of the whole build / ship / run cycle

View File

@@ -1,82 +0,0 @@
# Testing
There are multiple levels of testing:
- unit testing (many small tests that run in isolation),
- integration testing (bigger tests involving multiple components),
- functional or end-to-end testing (even bigger tests involving the whole app).
In this section, we will focus on *unit testing*, where each test case
should (ideally) be completely isolated from other components and system
interaction: no real database, no real backend, *mocks* everywhere.
(For a good discussion on the merits of unit testing, we can read
[Just Say No to More End-to-End Tests](https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html).)
Unfortunately, this ideal scenario is easier said than done ...
---
## Multi-stage build
```dockerfile
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
RUN <install test dependencies>
COPY <test data sets and fixtures>
RUN <unit tests>
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
CMD, EXPOSE ...
```
- This leverages the Docker cache: if the code doesn't change, the tests don't need to run
- If the tests require a database or other backend, we can use `docker build --network`
- If the tests fail, the build fails; and no image is generated
---
## Docker Compose
```yaml
version: "3"
services:
project:
image: my_image_name
build:
context: .
target: dev
database:
image: redis
backend:
image: backend
```
+
```shell
docker-compose build && docker-compose run project pytest -v
```
---
## Skaffold/Container-structure-test
- The `test` field of `skaffold.yaml` instructs Skaffold to run tests against your image.
- It uses the [container-structure-test](https://github.com/GoogleContainerTools/container-structure-test)
- It allows running custom commands
- Unfortunately, there is no way to run other Docker images
(to start a database or a backend that we need to run tests)
(to start a database or a backend that we need to run tests)
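A container-structure-test configuration is itself YAML; a sketch (the command, paths, and expected output are illustrative) could be:

```yaml
schemaVersion: 2.0.0
commandTests:
- name: "python is installed"
  command: "python3"
  args: ["--version"]
  expectedOutput: ["Python 3.*"]
fileExistenceTests:
- name: "application code is present"
  path: "/app/main.py"
  shouldExist: true
```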

View File

@@ -1,6 +1,6 @@
## Versions installed
- Kubernetes 1.17.2
- Docker Engine 19.03.5
- Docker Compose 1.24.1

View File

@@ -50,7 +50,7 @@ class: extra-details
- *Volumes*:
- appear in Pod specifications (we'll see that in a few slides)
- do not exist as API resources (**cannot** do `kubectl get volumes`)
@@ -232,7 +232,7 @@ spec:
mountPath: /usr/share/nginx/html/
- name: git
image: alpine
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/
@@ -298,14 +298,14 @@ spec:
- As soon as we see its IP address, access it:
```bash
curl $IP
```
<!-- ```bash /bin/sleep 5``` -->
- A few seconds later, the state of the pod will change; access it again:
```bash
curl $IP
```
]

View File

@@ -91,52 +91,3 @@
because the resources that we created lack the necessary annotation.
We can safely ignore them.)
---
## Deleting resources
- We can also use a YAML file to *delete* resources
- `kubectl delete -f ...` will delete all the resources mentioned in a YAML file
(useful to clean up everything that was created by `kubectl apply -f ...`)
- The definitions of the resources don't matter
(just their `kind`, `apiVersion`, and `name`)
---
## Pruning¹ resources
- We can also tell `kubectl` to remove old resources
- This is done with `kubectl apply -f ... --prune`
- It will remove resources that don't exist in the YAML file(s)
- But only if they were created with `kubectl apply` in the first place
(technically, if they have an annotation `kubectl.kubernetes.io/last-applied-configuration`)
.footnote[¹If English is not your first language: *to prune* means to remove dead or overgrown branches in a tree, to help it to grow.]
---
## YAML as source of truth
- Imagine the following workflow:
- do not use `kubectl run`, `kubectl create deployment`, `kubectl expose` ...
- define everything with YAML
- `kubectl apply -f ... --prune --all` that YAML
- keep that YAML under version control
- enforce all changes to go through that YAML (e.g. with pull requests)
- Our version control system now has a full history of what we deploy
- Comparable to "Infrastructure-as-Code", but for app deployments

117
slides/kube-selfpaced.yml Normal file
View File

@@ -0,0 +1,117 @@
title: |
Deploying and Scaling Microservices
with Docker and Kubernetes
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
chapters:
- shared/title.md
#- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
-
- k8s/kubectlget.md
- k8s/kubectlrun.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
-
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
- k8s/yamldeploy.md
-
- k8s/setup-k8s.md
- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/dryrun.md
-
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
- k8s/record.md
-
- k8s/namespaces.md
- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
-
- k8s/ingress.md
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- k8s/helm-secrets.md
-
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/podsecuritypolicy.md
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/control-plane-auth.md
-
- k8s/volumes.md
- k8s/build-with-docker.md
- k8s/build-with-kaniko.md
-
- k8s/configuration.md
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
-
- k8s/logs-centralized.md
- k8s/prometheus.md
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
-
- k8s/extending-api.md
- k8s/operators.md
- k8s/operators-design.md
- k8s/owners-and-dependents.md
-
- k8s/dmuc.md
- k8s/multinode.md
- k8s/cni.md
- k8s/apilb.md
- k8s/staticpods.md
-
- k8s/cluster-upgrade.md
- k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
- k8s/gitworkflows.md
-
- k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md

slides/kube.yml (new file)

@@ -0,0 +1,101 @@
title: |
Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://2020-01-caen.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- # DAY 1
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
#- k8s/versions-k8s.md
- shared/sampleapp.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectlrun.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
-
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
-
- k8s/daemonset.md
- k8s/rollout.md
- k8s/healthchecks.md
#- k8s/healthchecks-more.md
- k8s/record.md
- # DAY 2
- k8s/namespaces.md
- k8s/yamldeploy.md
#- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/ingress.md
-
- k8s/volumes.md
- k8s/configuration.md
-
- k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
-
- k8s/kustomize.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- k8s/helm-secrets.md
- # DAY 3
- k8s/netpol.md
- k8s/authn-authz.md
-
- k8s/dashboard.md
- k8s/logs-centralized.md
- k8s/prometheus.md
-
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
-
- k8s/extending-api.md
- k8s/operators.md
- k8s/operators-design.md
- # END
- k8s/lastwords-admin.md
- k8s/links.md
- shared/thankyou.md
# EXTRA
#- k8s/staticpods.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
#- k8s/csr-api.md
#- k8s/openid-connect.md
#- k8s/podsecuritypolicy.md
#- k8s/setup-k8s.md
#- k8s/dryrun.md


@@ -1,17 +1,14 @@
## Intros
- Hello! We are:
- Hello! I'm Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- .emoji[🐳] Jérôme Petazzoni ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- The training will run from 9am to 5pm
- .emoji[☸️] Julien Girardin ([Zempashi](https://github.com/zempashi), Enix SAS)
- There will be a lunch break at 12:30pm
- The training will run from 9am to 5:30pm (with lunch and coffee breaks)
- For lunch, we'll invite you at [Chameleon, 70 Rue René Boulanger](https://goo.gl/maps/h2XjmJN5weDSUios8)
(please let us know if you'll eat on your own)
(And coffee breaks!)
- Feel free to interrupt for questions at any time
- *Especially when you see full screen container pictures!*


@@ -1,7 +0,0 @@
<ul>
<li><a href="1.yml.html">Day 1</a></li>
<li><a href="2.yml.html">Day 2</a></li>
<li><a href="3.yml.html">Day 3</a></li>
<li><a href="4.yml.html">Day 4</a></li>
<li><a href="5.yml.html">Day 5</a></li>
</ul>


@@ -1,49 +1,22 @@
-## Accessing these slides now
+## About these slides
- We recommend that you open these slides in your browser:
@@SLIDES@@
- Use arrows to move to next/previous slide
(up, down, left, right, page up, page down)
- Type a slide number + ENTER to go to that slide
- The slide number is also visible in the URL bar
(e.g. .../#123 for slide 123)
---
## Accessing these slides later
- Slides will remain online so you can review them later if needed
(let's say we'll keep them online at least 1 year, how about that?)
- You can download the slides using that URL:
@@ZIP@@
(then open the file `@@HTML@@`)
- You will find new versions of these slides at:
https://container.training/
---
## These slides are open source
- You are welcome to use, re-use, share these slides
- These slides are written in markdown
-- The sources of these slides are available in a public GitHub repository:
+- All the content is available in a public GitHub repository:
https://@@GITREPO@@
- You can get updated "builds" of the slides there:
http://container.training/
<!--
.exercise[
```open https://@@GITREPO@@```
```open http://container.training/```
]
-->
--
- Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...
.footnote[.emoji[👇] Try it! The source file will be shown and you can view it on GitHub and fork and edit it.]
@@ -73,19 +46,3 @@ class: extra-details
- you want only the most essential information
- You can review these slides another time if you want, they'll be waiting for you ☺
---
class: in-person, chat-room
## Chat room
- We've set up a chat room that we will monitor during the workshop
- Don't hesitate to use it to ask questions, or get help, or share feedback
- The chat room will also be available after the workshop
- Join the chat room: @@CHAT@@
- Say hi in the chat room!


@@ -58,6 +58,28 @@ Misattributed to Benjamin Franklin
---
## Navigating slides
- Use arrows to move to next/previous slide
(up, down, left, right, page up, page down)
- Type a slide number + ENTER to go to that slide
- The slide number is also visible in the URL bar
(e.g. .../#123 for slide 123)
- Slides will remain online so you can review them later if needed
- You can download the slides using that URL:
@@ZIP@@
(then open the file `@@HTML@@`)
---
class: in-person
## Where are we going to run our containers?


@@ -11,8 +11,5 @@ class: title, in-person
@@TITLE@@<br/><br/>
.footnote[
**WiFi: CONFERENCE**<br/>
**Password: 123conference**
**Slides[:](https://www.youtube.com/watch?v=h16zyxiwDLY) @@SLIDES@@**
**Slides: @@SLIDES@@**
]