Rehaul 'setup k8s' sections

Jerome Petazzoni
2020-06-03 16:54:41 +02:00
parent 412d029d0c
commit 14271a4df0
11 changed files with 391 additions and 128 deletions

slides/k8s/setup-devel.md (new file)

@@ -0,0 +1,145 @@
# Running a local development cluster
- Let's review some options to run Kubernetes locally
- There is no "best option"; it depends on what you value:
- ability to run on all platforms (Linux, Mac, Windows, other?)
- ability to run clusters with multiple nodes
- ability to run multiple clusters side by side
- ability to run recent (or even unreleased) versions of Kubernetes
- availability of plugins
- etc.
---
## Docker Desktop
- Available on Mac and Windows
- Gives you one cluster with one node
- Rather old version of Kubernetes
- Very easy to use if you are already using Docker Desktop:
go to Docker Desktop preferences and enable Kubernetes
- Ideal for Docker users who need good integration between both platforms
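Once Kubernetes is enabled in the preferences, Docker Desktop registers a kubectl context for its cluster (the context name below is an assumption based on recent versions):

```bash
# Switch kubectl to the Docker Desktop cluster
# (context name may vary; check with `kubectl config get-contexts`)
kubectl config use-context docker-desktop
# The cluster should show a single node
kubectl get nodes
```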
---
## [k3d](https://k3d.io/)
- Based on [K3s](https://k3s.io/) by Rancher Labs
- Requires Docker
- Runs Kubernetes nodes in Docker containers
- Can deploy multiple clusters, with multiple nodes, and multiple master nodes
- As of June 2020, two versions co-exist: stable (1.7) and beta (3.0)
- They have different syntaxes and options, which can be confusing
(but don't let that stop you!)
---
## k3d in action
- Get the `k3d` v3 beta binary from https://github.com/rancher/k3d/releases
- Create a simple cluster:
```bash
k3d create cluster petitcluster --update-kubeconfig
```
- Use it:
```bash
kubectl config use-context k3d-petitcluster
```
- Create a more complex cluster with a custom version:
```bash
k3d create cluster groscluster --update-kubeconfig \
--image rancher/k3s:v1.18.3-k3s1 --masters 3 --workers 5 --api-port 6444
```
(note: API port seems to be necessary when running multiple clusters)
---
## [KinD](https://kind.sigs.k8s.io/)
- Kubernetes-in-Docker
- Requires Docker (obviously!)
- Deploying a single node cluster using the latest version is simple:
```bash
kind create cluster
```
- More advanced scenarios require writing a short [config file](https://kind.sigs.k8s.io/docs/user/quick-start#configuring-your-kind-cluster)
(to define multiple nodes, multiple master nodes, set Kubernetes versions ...)
- Can deploy multiple clusters
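For example, a cluster with one control plane node and two workers might be described like this (a sketch based on the quick-start; the cluster name is made up, and the config API version matches KinD releases current in mid-2020):

```bash
# Feed a multi-node config to kind on stdin
cat <<EOF | kind create cluster --name trio --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
```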
---
## [Minikube](https://minikube.sigs.k8s.io/docs/)
- The "legacy" option!
(note: this is not a bad thing; it means that it's very stable, has lots of plugins, etc.)
- Supports many [drivers](https://minikube.sigs.k8s.io/docs/drivers/)
(HyperKit, Hyper-V, KVM, VirtualBox, but also Docker and many others)
- Can deploy a single cluster; recent versions can deploy multiple nodes
- Great option if you want a "Kubernetes first" experience
(i.e. if you don't already have Docker and/or don't want/need it)
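As a sketch (flag names as per the Minikube docs; `--nodes` requires a fairly recent version):

```bash
# Two-node cluster with a pinned Kubernetes version, using the Docker driver
minikube start --driver=docker --nodes=2 --kubernetes-version=v1.18.3
# Minikube ships its own kubectl if you don't have one
minikube kubectl -- get nodes
```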
---
## [MicroK8s](https://microk8s.io/)
- Available on Linux and, more recently, on Mac and Windows as well
- The Linux version is installed through Snap
(which is pre-installed on all recent versions of Ubuntu)
- Also supports clustering (as in, multiple machines running MicroK8s)
- DNS is not enabled by default; enable it with `microk8s enable dns`
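A minimal sketch of getting started on Ubuntu (install method and add-on name as per the MicroK8s docs):

```bash
# Install MicroK8s through Snap, enable DNS, and check the node
sudo snap install microk8s --classic
microk8s enable dns
microk8s kubectl get nodes
```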
---
## VM with custom install
- Choose your own adventure!
- Pick any Linux distribution!
- Build your cluster from scratch or use a Kubernetes installer!
- Discover exotic CNI plugins and container runtimes!
- The only limit is yourself, and the time you are willing to sink in!
???
:EN:- Kubernetes options for local development
:FR:- Installation de Kubernetes pour travailler en local


@@ -1,106 +0,0 @@
# Setting up Kubernetes
- How did we set up these Kubernetes clusters that we're using?
--
<!-- ##VERSION## -->
- We used `kubeadm` on freshly installed VM instances running Ubuntu LTS
1. Install Docker
2. Install Kubernetes packages
3. Run `kubeadm init` on the first node (it deploys the control plane on that node)
4. Set up Weave (the overlay network)
<br/>
(that step is just one `kubectl apply` command; discussed later)
5. Run `kubeadm join` on the other nodes (with the token produced by `kubeadm init`)
6. Copy the configuration file generated by `kubeadm init`
- Check the [prepare VMs README](https://@@GITREPO@@/blob/master/prepare-vms/README.md) for more details
---
## `kubeadm` drawbacks
- Doesn't set up Docker or any other container engine
- Doesn't set up the overlay network
- [Some extra steps](https://kubernetes.io/docs/setup/independent/high-availability/) to support HA control plane
--
- "It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme
---
## Managed options
- On AWS: [EKS](https://aws.amazon.com/eks/),
[eksctl](https://eksctl.io/)
- On Azure: [AKS](https://azure.microsoft.com/services/kubernetes-service/)
- On DigitalOcean: [DOKS](https://www.digitalocean.com/products/kubernetes/)
- On Google Cloud: [GKE](https://cloud.google.com/kubernetes-engine/)
- On Linode: [LKE](https://www.linode.com/products/kubernetes/)
- On OVHcloud: [Managed Kubernetes Service](https://www.ovhcloud.com/en/public-cloud/kubernetes/)
- On Scaleway: [Kapsule](https://www.scaleway.com/en/kubernetes-kapsule/)
- and much more!
---
## Other deployment options
- [kops](https://github.com/kubernetes/kops):
customizable deployments on AWS, DigitalOcean, GCE (beta), vSphere (alpha)
- [minikube](https://kubernetes.io/docs/setup/minikube/),
[kubespawn](https://github.com/kinvolk/kube-spawn),
[Docker Desktop](https://docs.docker.com/docker-for-mac/kubernetes/),
[kind](https://kind.sigs.k8s.io):
for local development
- [kubicorn](https://github.com/kubicorn/kubicorn),
the [Cluster API](https://blogs.vmware.com/cloudnative/2019/03/14/what-and-why-of-cluster-api/):
deploy your clusters declaratively, "the Kubernetes way"
---
## Even more deployment options
- If you like Ansible:
[kubespray](https://github.com/kubernetes-incubator/kubespray)
- If you like Terraform:
[typhoon](https://github.com/poseidon/typhoon)
- If you like Terraform and Puppet:
[tarmak](https://github.com/jetstack/tarmak)
- You can also learn how to install every component manually, with
the excellent tutorial [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way)
*Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.*
- There are also many commercial options available!
- For a longer list, check the Kubernetes documentation:
<br/>
it has a great guide to [pick the right solution](https://kubernetes.io/docs/setup/#production-environment) to set up Kubernetes.
???
:EN:- Overview of the kubeadm installer
:FR:- Survol de kubeadm


@@ -1,4 +1,4 @@
# Installing a managed cluster
# Deploying a managed cluster
*"The easiest way to install Kubernetes is to get someone
else to do it for you."
@@ -317,7 +317,26 @@ with a cloud provider
default-pool-config.node-type=DEV1-M default-pool-config.size=3
```
- Get cluster ID:
- After less than 5 minutes, cluster state will be `ready`
(check cluster status with e.g. `scw k8s cluster list` on a wide terminal)
- Add connection information to your `.kube/config` file:
```bash
scw k8s kubeconfig install $CLUSTERID
```
(the cluster ID is shown by `scw k8s cluster list`)
---
class: extra-details
## Scaleway (automation)
- If you want to obtain the cluster ID programmatically, this will do it:
```bash
scw k8s cluster list
# or
@@ -325,15 +344,6 @@ with a cloud provider
jq -r '.[] | select(.name=="my-kapsule-cluster") | .id')
```
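Put together, the lookup and kubeconfig installation could look like this (the `-o json` flag and the cluster name are assumptions):

```bash
# Retrieve the cluster ID by name, then install the kubeconfig
CLUSTERID=$(scw k8s cluster list -o json |
            jq -r '.[] | select(.name=="my-kapsule-cluster") | .id')
scw k8s kubeconfig install $CLUSTERID
```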
- Check cluster status with e.g. `scw k8s cluster list` on a wide terminal
- After less than 5 minutes, status should be `ready`
- Add connection information to your `.kube/config` file:
```bash
scw k8s kubeconfig install $CLUSTERID
```
---
## Scaleway (cleanup)
@@ -376,7 +386,9 @@ https://www.scaleway.com/en/pricing/)
- [IBM Cloud](https://console.bluemix.net/docs/containers/cs_cli_install.html#cs_cli_install)
- OVH
- [Linode Kubernetes Engine (LKE)](https://www.linode.com/products/kubernetes/)
- OVHcloud [Managed Kubernetes Service](https://www.ovhcloud.com/en/public-cloud/kubernetes/)
- ...


@@ -0,0 +1,192 @@
# Setting up Kubernetes
- Kubernetes is made of many components that require careful configuration
- Secure operation typically requires TLS certificates and a local CA
(certificate authority)
- Setting up everything manually is possible, but rarely done
(except for learning purposes)
- Let's do a quick overview of available options!
---
## Local development
- Are you writing code that will eventually run on Kubernetes?
- Then it's a good idea to have a development cluster!
- Development clusters only need one node
- This simplifies their setup a lot:
- pod networking doesn't even need CNI plugins, overlay networks, etc.
- they can be fully contained (no pun intended) in an easy-to-ship VM image
- some of the security aspects may be simplified (different threat model)
- Examples: Docker Desktop, k3d, KinD, MicroK8s, Minikube
(some of these also support clusters with multiple nodes)
---
## Managed clusters
- Many cloud providers and hosting providers offer "managed Kubernetes"
- The deployment and maintenance of the cluster is entirely managed by the provider
(ideally, clusters can be spun up automatically through an API, CLI, or web interface)
- Given the complexity of Kubernetes, this approach is *strongly recommended*
(at least for your first production clusters)
- After working for a while with Kubernetes, you will be better equipped to decide:
- whether to operate it yourself or use a managed offering
- which offering or which distribution works best for you and your needs
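For illustration, spinning up a managed cluster from the CLI is typically a one-liner (cluster names, regions, and resource groups below are made up):

```bash
# EKS (via eksctl)
eksctl create cluster --name mycluster --region eu-west-1
# GKE
gcloud container clusters create mycluster --zone europe-west1-b
# AKS
az aks create --resource-group mygroup --name mycluster
```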
---
## Managed clusters details
- Pricing models differ from one provider to another
- nodes are generally charged at their usual price
- control plane may be free or incur a small nominal fee
- Beyond pricing, there are *huge* differences in features between providers
- The "major" providers are not always the best ones!
---
## Managed clusters differences
- Most providers let you pick which Kubernetes version you want
- some providers offer up-to-date versions
- others lag significantly (sometimes by 2 or 3 minor versions)
- Some providers offer multiple networking or storage options
- Others will only support one, tied to their infrastructure
(changing that is in theory possible, but might be complex or unsupported)
- Some providers let you configure or customize the control plane
(generally through Kubernetes "feature gates")
---
## Kubernetes distributions and installers
- If you want to run Kubernetes yourself, there are many options
(free, commercial, proprietary, open source ...)
- Some of them are installers, while some are complete platforms
- Some of them leverage other well-known deployment tools
(like Puppet, Terraform ...)
- A good starting point to explore these options is this [guide](https://v1-16.docs.kubernetes.io/docs/setup/#production-environment)
(it defines categories like "managed", "turnkey" ...)
---
## kubeadm
- kubeadm is a tool, part of Kubernetes, that facilitates cluster setup
- Many other installers and distributions use it (but not all of them)
- It can also be used by itself
- Excellent starting point to install Kubernetes on your own machines
(virtual, physical, it doesn't matter)
- It even supports highly available control planes, or "multi-master"
(this is more complex, though, because it introduces the need for an API load balancer)
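The HA scenario roughly boils down to pointing every node at that load balancer (flags as per the kubeadm docs of that era; addresses, tokens, and keys are placeholders):

```bash
# First control plane node: advertise the load balancer as the API endpoint
sudo kubeadm init --control-plane-endpoint "LB_ADDRESS:6444" --upload-certs
# Additional control plane nodes join with --control-plane
sudo kubeadm join LB_ADDRESS:6444 --token TOKEN \
    --discovery-token-ca-cert-hash sha256:HASH \
    --control-plane --certificate-key KEY
```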
---
## Manual setup
- The resources below are mainly for educational purposes!
- [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) by Kelsey Hightower
- step by step guide to install Kubernetes on Google Cloud
- covers certificates, high availability ...
- *“Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.”*
- [Deep Dive into Kubernetes Internals for Builders and Operators](https://www.youtube.com/watch?v=3KtEAa7_duA)
- conference presentation showing step-by-step control plane setup
- emphasis on simplicity, not on security and availability
---
## About our training clusters
- How did we set up these Kubernetes clusters that we're using?
--
- We used `kubeadm` on freshly installed VM instances running Ubuntu LTS
1. Install Docker
2. Install Kubernetes packages
3. Run `kubeadm init` on the first node (it deploys the control plane on that node)
4. Set up Weave (the overlay network) with a single `kubectl apply` command
5. Run `kubeadm join` on the other nodes (with the token produced by `kubeadm init`)
6. Copy the configuration file generated by `kubeadm init`
- Check the [prepare VMs README](https://@@GITREPO@@/blob/master/prepare-vms/README.md) for more details
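The six steps above might translate to commands like these (package names and the Weave manifest URL follow the documentation of that period; placeholders in caps):

```bash
# On every node: install Docker and the Kubernetes packages
# (assumes the Kubernetes apt repository has already been added)
sudo apt-get install -y docker.io kubeadm kubelet kubectl
# On the first node only: deploy the control plane
sudo kubeadm init
# Still on the first node: set up Weave with a single kubectl apply
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# On each other node: join, using the token shown by kubeadm init
sudo kubeadm join API_SERVER_IP:6443 --token TOKEN \
    --discovery-token-ca-cert-hash sha256:HASH
```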
---
## `kubeadm` "drawbacks"
- Doesn't set up Docker or any other container engine
(this is by design, to give us choice)
- Doesn't set up the overlay network
(this is also by design, for the same reasons)
- HA control plane requires [some extra steps](https://kubernetes.io/docs/setup/independent/high-availability/)
- Note that an HA control plane also requires setting up a specific API load balancer
(which is beyond the scope of kubeadm)
???
:EN:- Various ways to install Kubernetes
:FR:- Survol des techniques d'installation de Kubernetes


@@ -18,7 +18,7 @@
---
## kops
## [kops](https://github.com/kubernetes/kops)
- Deploys Kubernetes using cloud infrastructure
@@ -42,7 +42,7 @@
---
## Kubespray
## [kubespray](https://github.com/kubernetes-incubator/kubespray)
- Based on Ansible
@@ -92,13 +92,17 @@
- Docker Enterprise Edition
- [Lokomotive](https://github.com/kinvolk/lokomotive), leveraging Terraform and [Flatcar Linux](https://www.flatcar-linux.org/)
- Pivotal Container Service (PKS)
- [Tarmak](https://github.com/jetstack/tarmak), leveraging Puppet and Terraform
- Tectonic by CoreOS (now being integrated into Red Hat OpenShift)
- VMware Tanzu Kubernetes Grid (TKG)
- [Typhoon](https://typhoon.psdn.io/), leveraging Terraform
- etc.
- VMware Tanzu Kubernetes Grid (TKG)
---
@@ -123,5 +127,5 @@
???
:EN:- Various ways to set up Kubernetes
:FR:- Différentes méthodes pour installer Kubernetes
:EN:- Kubernetes distributions and installers
:FR:- L'offre Kubernetes "on premises"


@@ -36,6 +36,8 @@ content:
- k8s/interco.md
-
- k8s/apilb.md
#- k8s/setup-overview.md
#- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md


@@ -34,6 +34,8 @@ content:
- k8s/cni.md
- k8s/interco.md
- - k8s/apilb.md
- k8s/setup-overview.md
#- k8s/setup-devel.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md


@@ -53,7 +53,10 @@ content:
#- k8s/exercise-wordsmith.md
-
- k8s/yamldeploy.md
- k8s/setup-k8s.md
- k8s/setup-overview.md
#- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
#- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md


@@ -40,7 +40,10 @@ content:
- k8s/declarative.md
- k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-k8s.md
- k8s/setup-overview.md
#- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- - k8s/kubectl-run.md
#- k8s/batch-jobs.md
#- k8s/labels-annotations.md


@@ -54,7 +54,10 @@ content:
#- k8s/exercise-wordsmith.md
- k8s/yamldeploy.md
-
- k8s/setup-k8s.md
- k8s/setup-overview.md
- k8s/setup-devel.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md


@@ -53,7 +53,10 @@ content:
#- k8s/exercise-wordsmith.md
-
- k8s/yamldeploy.md
#- k8s/setup-k8s.md
- k8s/setup-overview.md
- k8s/setup-devel.md
#- k8s/setup-managed.md
#- k8s/setup-selfhosted.md
- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md