Compare commits

12 Commits

Author SHA1 Message Date
Jerome Petazzoni
99b8886c3e fix-redirects.sh: adding forced redirect 2020-04-07 16:49:49 -05:00
Jerome Petazzoni
13164971ca Re-add builder chapters 2019-01-19 03:58:20 -06:00
Jerome Petazzoni
c55ce66751 Merge branch 'master' into kube-2019-01 2019-01-19 03:57:47 -06:00
Jerome Petazzoni
e1ccad3ee2 Update WiFi information 2019-01-17 01:36:42 -06:00
Jerome Petazzoni
83535c3f69 Update gitter link; opt out of builder demos 2019-01-17 00:47:43 -06:00
Jerome Petazzoni
9ece59e821 Merge branch 'master' into kube-2019-01 2019-01-14 12:01:47 -06:00
Jerome Petazzoni
cd7a5520fb Add WiFi information 2019-01-14 02:13:20 -06:00
Jerome Petazzoni
c1757160d7 Merge branch 'master' into kube-2019-01 2019-01-13 15:15:08 -06:00
Jerome Petazzoni
4f8ffcdd97 Merge branch 'master' into kube-2019-01 2019-01-13 15:13:56 -06:00
Jerome Petazzoni
a66b295d06 Use the 2-day deck 2019-01-13 15:08:01 -06:00
Jerome Petazzoni
ae04e02519 Merge branch 'enixlogo' into kube-2019-01 2019-01-13 15:03:29 -06:00
Jerome Petazzoni
7b03961182 Customization 2019-01-13 15:02:20 -06:00
11 changed files with 19 additions and 528 deletions

@@ -1,10 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
name: hello
namespace: default
spec:
containers:
- name: hello
image: nginx

Binary file not shown (image, 203 KiB).

@@ -6,24 +6,6 @@
title: Getting Started With Kubernetes and Container Orchestration
attend: https://gotochgo.com/2019/workshops/148
- date: [2019-04-23, 2019-04-24]
country: fr
city: Paris
event: ENIX SAS
speaker: "jpetazzo, rdegez"
title: Déployer ses applications avec Kubernetes (in French)
lang: fr
attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
- date: [2019-04-15, 2019-04-16]
country: fr
city: Paris
event: ENIX SAS
speaker: "jpetazzo, alexbuisine"
title: Bien démarrer avec les conteneurs (in French)
lang: fr
attend: https://enix.io/fr/services/formation/bien-demarrer-avec-les-conteneurs/
- date: 2019-03-07
country: uk
city: London
@@ -32,40 +14,6 @@
title: Getting Started With Kubernetes and Container Orchestration
attend: https://qconlondon.com/london2019/workshop/getting-started-kubernetes-and-container-orchestration
- date: 2019-02-25
country: ca
city: Montréal
event: Elapse Technologies
speaker: jpetazzo
title: Getting Started With Docker And Containers
attend: http://elapsetech.com/formation/docker-101
- date: 2019-02-26
country: ca
city: Montréal
event: Elapse Technologies
speaker: jpetazzo
title: Getting Started With Kubernetes And Orchestration
attend: http://elapsetech.com/formation/kubernetes-101
- date: 2019-02-28
country: ca
city: Québec
lang: fr
event: Elapse Technologies
speaker: jpetazzo
title: Bien démarrer avec Docker et les conteneurs (in French)
attend: http://elapsetech.com/formation/docker-101
- date: 2019-03-01
country: ca
city: Québec
lang: fr
event: Elapse Technologies
speaker: jpetazzo
title: Bien démarrer avec Docker et l'orchestration (in French)
attend: http://elapsetech.com/formation/kubernetes-101
- date: [2019-01-07, 2019-01-08]
country: fr
city: Paris
@@ -74,7 +22,6 @@
title: Bien démarrer avec les conteneurs (in French)
lang: fr
attend: https://enix.io/fr/services/formation/bien-demarrer-avec-les-conteneurs/
slides: https://intro-2019-01.container.training
- date: [2018-12-17, 2018-12-18]
country: fr
@@ -84,7 +31,6 @@
title: Déployer ses applications avec Kubernetes (in French)
lang: fr
attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
slides: http://decembre2018.container.training
- date: 2018-11-08
city: San Francisco, CA

@@ -1,214 +0,0 @@
# Extending the Kubernetes API
There are multiple ways to extend the Kubernetes API.
We are going to cover:
- Custom Resource Definitions (CRDs)
- Admission Webhooks
---
## Revisiting the API server
- The Kubernetes API server is a central point of the control plane
(everything connects to it: controller manager, scheduler, kubelets)
- Almost everything in Kubernetes is materialized by a resource
- Resources have a type (or "kind")
(similar to strongly typed languages)
- We can see existing types with `kubectl api-resources`
- We can list resources of a given type with `kubectl get <type>`
---
## Creating new types
- We can create new types with Custom Resource Definitions (CRDs)
- CRDs are created dynamically
(without recompiling or restarting the API server)
- CRDs themselves are resources:
- we can create a new type with `kubectl create` and some YAML
- we can see all our custom types with `kubectl get crds`
- After we create a CRD, the new type works just like built-in types
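As a sketch, a minimal CRD manifest might look like this (the `Backup` type and `example.com` group are made up for illustration; the syntax shown is the `apiextensions.k8s.io/v1beta1` form current at the time):
```yaml
# Hypothetical CRD defining a new "Backup" type
# (all names here are invented for illustration)
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # must be <plural>.<group>
  name: backups.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
```
After applying this with `kubectl create -f`, we could run e.g. `kubectl get backups` just like for a built-in type.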
---
## What can we do with CRDs?
There are many possibilities!
- *Operators* encapsulate complex sets of resources
(e.g.: a PostgreSQL replicated cluster; an etcd cluster;
[and more...](https://github.com/operator-framework/awesome-operators))
- Custom use-cases like [gitkube](https://gitkube.sh/)
- creates a new custom type, `Remote`, exposing a git+ssh server
- deploy by pushing YAML or Helm Charts to that remote
- Replacing built-in types with CRDs
(see [this lightning talk by Tim Hockin](https://www.youtube.com/watch?v=ji0FWzFwNhA&index=2&list=PLj6h78yzYM2PZf9eA7bhWnIh_mK1vyOfU))
---
## Little details
- By default, CRDs are not *validated*
(we can put anything we want in the `spec`)
- When creating a CRD, we can pass an OpenAPI v3 schema (BETA!)
(which will then be used to validate resources)
- Generally, when creating a CRD, we also want to run a *controller*
(otherwise nothing will happen when we create resources of that type)
- The controller will typically *watch* our custom resources
(and take action when they are created/updated)
*Example: [YAML to install the gitkube CRD](https://storage.googleapis.com/gitkube/gitkube-setup-stable.yaml)*
---
## Service catalog
- *Service catalog* is another extension mechanism
- Strictly speaking, it doesn't extend the Kubernetes API
(but it still provides new features!)
- It doesn't create new types; it uses:
- ClusterServiceBroker
- ClusterServiceClass
- ClusterServicePlan
- ServiceInstance
- ServiceBinding
- It uses the Open Service Broker API
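For instance, provisioning a service through the catalog might look like the following sketch (the `mysql` class and `small` plan are hypothetical names that would come from whatever broker is registered):
```yaml
# Hypothetical ServiceInstance; class and plan names
# depend on the broker that has been registered
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-db
spec:
  clusterServiceClassExternalName: mysql
  clusterServicePlanExternalName: small
```
A ServiceBinding referencing `my-db` would then materialize the credentials (typically as a Secret).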
---
## Admission controllers
- When a Pod is created, it is associated with a ServiceAccount
(even if we did not specify one explicitly)
- That ServiceAccount was added on the fly by an *admission controller*
(specifically, a *mutating admission controller*)
- Admission controllers sit on the API request path
(see the cool diagram on next slide, courtesy of Banzai Cloud)
---
class: pic
![API request lifecycle](images/api-request-lifecycle.png)
---
## Admission controllers
- *Validating* admission controllers can accept/reject the API call
- *Mutating* admission controllers can modify the API request payload
- Both types can also trigger additional actions
(e.g. automatically create a Namespace if it doesn't exist)
- There are a number of built-in admission controllers
(see [documentation](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#what-does-each-admission-controller-do) for a list)
- But we can also define our own!
---
## Admission Webhooks
- We can set up *admission webhooks* to extend the behavior of the API server
- The API server will submit incoming API requests to these webhooks
- These webhooks can be *validating* or *mutating*
- Webhooks can be set up dynamically (without restarting the API server)
- To set up a dynamic admission webhook, we create a special resource:
a `ValidatingWebhookConfiguration` or a `MutatingWebhookConfiguration`
- These resources are created and managed like other resources
(i.e. `kubectl create`, `kubectl get` ...)
---
## Webhook Configuration
- A ValidatingWebhookConfiguration or MutatingWebhookConfiguration contains:
- the address of the webhook
- the authentication information to use with the webhook
- a list of rules
- The rules indicate for which objects and actions the webhook is triggered
(to avoid e.g. triggering webhooks when setting up webhooks)
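As a hedged sketch, a ValidatingWebhookConfiguration could look like this (the service name, namespace, path, and webhook name are placeholders; the CA bundle would be the base64-encoded certificate used to verify the webhook's TLS endpoint):
```yaml
# Hypothetical webhook configuration; all names are placeholders
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: my-validating-webhook
webhooks:
- name: check-pods.example.com
  clientConfig:
    service:
      name: my-webhook          # Service fronting the webhook server
      namespace: default
      path: /validate
    caBundle: <base64-encoded CA certificate>
  rules:
  # only trigger on Pod creation, per the rules mechanism above
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```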
---
## (Ab)using the API server
- If we need to store something "safely" (as in: in etcd), we can use CRDs
- This gives us primitives to read/write/list objects (and optionally validate them)
- The Kubernetes API server can run on its own
(without the scheduler, controller manager, and kubelets)
- By loading CRDs, we can have it manage totally different objects
(unrelated to containers, clusters, etc.)
---
## Documentation
- [Custom Resource Definitions: when to use them](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
- [Custom Resources Definitions: how to use them](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/)
- [Service Catalog](https://kubernetes.io/docs/concepts/extend-kubernetes/service-catalog/)
- [Built-in Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/)
- [Dynamic Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/)

@@ -53,8 +53,8 @@
[GKE](https://cloud.google.com/kubernetes-engine/)
- If you are on AWS:
[EKS](https://aws.amazon.com/eks/),
[eksctl](https://eksctl.io/),
[EKS](https://aws.amazon.com/eks/)
or
[kops](https://github.com/kubernetes/kops)
- On a local machine:

@@ -1,237 +0,0 @@
# Static pods
- Hosting the Kubernetes control plane on Kubernetes has advantages:
- we can use Kubernetes' replication and scaling features for the control plane
- we can leverage rolling updates to upgrade the control plane
- However, there is a catch:
- deploying on Kubernetes requires the API to be available
- the API won't be available until the control plane is deployed
- How can we get out of that chicken-and-egg problem?
---
## A possible approach
- Since each component of the control plane can be replicated ...
- We could set up the control plane outside of the cluster
- Then, once the cluster is fully operational, create replicas running on the cluster
- Finally, remove the replicas that are running outside of the cluster
*What could possibly go wrong?*
---
## Sawing off the branch you're sitting on
- What if anything goes wrong?
(During the setup or at a later point)
- Worst case scenario, we might need to:
- set up a new control plane (outside of the cluster)
- restore a backup from the old control plane
- move the new control plane to the cluster (again)
- This doesn't sound like a great experience
---
## Static pods to the rescue
- Pods are started by kubelet (an agent running on every node)
- To know which pods it should run, the kubelet queries the API server
- The kubelet can also get a list of *static pods* from:
- a directory containing one (or multiple) *manifests*, and/or
- a URL (serving a *manifest*)
- These "manifests" are basically YAML definitions
(As produced by `kubectl get pod my-little-pod -o yaml --export`)
---
## Static pods are dynamic
- Kubelet will periodically reload the manifests
- It will start/stop pods accordingly
(i.e. it is not necessary to restart the kubelet after updating the manifests)
- When connected to the Kubernetes API, the kubelet will create *mirror pods*
- Mirror pods are copies of the static pods
(so they can be seen with e.g. `kubectl get pods`)
---
## Bootstrapping a cluster with static pods
- We can run control plane components with these static pods
- They can start without requiring access to the API server
- Once they are up and running, the API becomes available
- These pods are then visible through the API
(We cannot upgrade them from the API, though)
*This is how kubeadm initialized our clusters.*
---
## Static pods vs normal pods
- The API only gives us read-only access to static pods
- We can `kubectl delete` a static pod ...
... But the kubelet will restart it immediately
- Static pods can be selected just like other pods
(So they can receive service traffic)
- A service can select a mixture of static and other pods
---
## From static pods to normal pods
- Once the control plane is up and running, it can be used to create normal pods
- We can then set up a copy of the control plane in normal pods
- Then the static pods can be removed
- The scheduler and the controller manager use leader election
(Only one is active at a time; removing an instance is seamless)
- Each instance of the API server adds itself to the `kubernetes` service
- Etcd will typically require more work!
---
## From normal pods back to static pods
- Alright, but what if the control plane is down and we need to fix it?
- We restart it using static pods!
- This can be done automatically with the [Pod Checkpointer]
- The Pod Checkpointer automatically generates manifests of running pods
- The manifests are used to restart these pods if API contact is lost
(More details in the [Pod Checkpointer] documentation page)
- This technique is used by [bootkube]
[Pod Checkpointer]: https://github.com/kubernetes-incubator/bootkube/blob/master/cmd/checkpoint/README.md
[bootkube]: https://github.com/kubernetes-incubator/bootkube
---
## Where should the control plane run?
*Is it better to run the control plane in static pods, or normal pods?*
- If I'm a *user* of the cluster: I don't care, it makes no difference to me
- What if I'm an *admin*, i.e. the person who installs, upgrades, repairs... the cluster?
- If I'm using a managed Kubernetes cluster (AKS, EKS, GKE...) it's not my problem
(I'm not the one setting up and managing the control plane)
- If I already picked a tool (kubeadm, kops...) to set up my cluster, the tool decides for me
- What if I haven't picked a tool yet, or if I'm installing from scratch?
- static pods = easier to set up, easier to troubleshoot, less risk of outage
- normal pods = easier to upgrade, easier to move (if nodes need to be shut down)
---
## Static pods in action
- On our clusters, the `staticPodPath` is `/etc/kubernetes/manifests`
.exercise[
- Have a look at this directory:
```bash
ls -l /etc/kubernetes/manifests
```
]
We should see YAML files corresponding to the pods of the control plane.
---
## Running a static pod
- We are going to add a pod manifest to the directory, and kubelet will run it
.exercise[
- Copy a manifest to the directory:
```bash
sudo cp ~/container.training/k8s/just-a-pod.yaml /etc/kubernetes/manifests
```
- Check that it's running:
```bash
kubectl get pods
```
]
The output should include a pod named `hello-node1`.
---
## Remarks
In the manifest, the pod was named `hello`.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: hello
namespace: default
spec:
containers:
- name: hello
image: nginx
```
The `-node1` suffix was added automatically by kubelet.
If we delete the pod (with `kubectl delete`), it will be recreated immediately.
To delete the pod, we need to delete (or move) the manifest file.

@@ -55,7 +55,6 @@ chapters:
# - k8s/build-with-kaniko.md
# - k8s/configuration.md
#- - k8s/owners-and-dependents.md
# - k8s/extending-api.md
# - k8s/statefulsets.md
# - k8s/portworx.md
- - k8s/whatsnext.md

@@ -55,10 +55,8 @@ chapters:
- k8s/build-with-kaniko.md
- k8s/configuration.md
- - k8s/owners-and-dependents.md
- k8s/extending-api.md
- k8s/statefulsets.md
- k8s/portworx.md
- k8s/staticpods.md
- - k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md

@@ -2,11 +2,12 @@ title: |
Deploying and Scaling Applications
with Kubernetes
chat: "[Gitter](https://gitter.im/enix/formation-kubernetes-20190128)"
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/enix/formation-kubernetes-20190117)"
gitrepo: github.com/jpetazzo/container.training
slides: http://kube-2019-02.container.training/
slides: http://kube-2019-01.container.training/
exclude:
- self-paced
@@ -46,14 +47,15 @@ chapters:
- k8s/netpol.md
- k8s/authn-authz.md
- - k8s/ingress.md
#- k8s/gitworkflows.md
- k8s/prometheus.md
- - k8s/volumes.md
- k8s/build-with-docker.md
- k8s/build-with-kaniko.md
- k8s/configuration.md
- k8s/owners-and-dependents.md
- - k8s/extending-api.md
- - k8s/owners-and-dependents.md
- k8s/statefulsets.md
- k8s/portworx.md
- k8s/staticpods.md
- - k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md

@@ -1,8 +1,12 @@
## Intros
- Hello! I'm Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- Hello! We are:
- The workshop will run from 9am to 5pm
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- .emoji[🎧] Romain ([@rdegez](https://twitter.com/rdegez), Enix SAS)
- The training will run from 9am to 5pm
- There will be a lunch break around noon

@@ -11,7 +11,10 @@ class: title, in-person
@@TITLE@@<br/></br>
.footnote[
[.](https://www.youtube.com/watch?v=h16zyxiwDLY)
<!--
**WiFi: ENIX**</br>
**Password: AIRBUS2019**<br/>
-->
**Slides: @@SLIDES@@**
]