Compare commits


114 Commits

Author SHA1 Message Date
Julien Girardin
6a8e00fc7d Change last day schedule of Allo Docker for Julien 2023-05-30 15:44:33 +02:00
Jérôme Petazzoni
e8c2b29c8f ⚛️ HighFive 2023Q2 content update 2023-05-29 14:54:07 +02:00
Jérôme Petazzoni
ccb73fc872 Add CloudFlare script (WIP) 2023-05-29 12:24:54 +02:00
Jérôme Petazzoni
bb302a25de ✂️ Split prereqs/handson instructions 2023-05-29 09:05:57 +02:00
Julien Girardin
e66b90eb4e Replace ship lab by kustomize lab 2023-05-26 17:33:38 +02:00
dependabot[bot]
74add4d435 Bump socket.io-parser from 4.2.2 to 4.2.3 in /slides/autopilot
Bumps [socket.io-parser](https://github.com/socketio/socket.io-parser) from 4.2.2 to 4.2.3.
- [Release notes](https://github.com/socketio/socket.io-parser/releases)
- [Changelog](https://github.com/socketio/socket.io-parser/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io-parser/compare/4.2.2...4.2.3)

---
updated-dependencies:
- dependency-name: socket.io-parser
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-25 16:25:15 +02:00
Jérôme Petazzoni
5ee1367e79 🖼️ Use ngrok/ngrok image instead of building it from scratch 2023-05-25 16:09:47 +02:00
Jérôme Petazzoni
c1f8177f4e 🔧 Pass kubernetesVersion: in kubeadm config file 2023-05-17 19:04:32 +02:00
Jérôme Petazzoni
d4a9ea2461 🪆 Fix vcluster deployment and add konk.sh script 2023-05-16 19:16:19 +02:00
Jérôme Petazzoni
dd0f6d00fa 🏭️ Refactor the DaemonSet section 2023-05-14 20:10:23 +02:00
Jérôme Petazzoni
79359e2abc 🏭️ Refactor YAML and Namespace chapters 2023-05-14 19:58:45 +02:00
Jérôme Petazzoni
9cd812de75 Update ingress chapter and manifest 2023-05-13 12:06:47 +02:00
Jérôme Petazzoni
e29bfe7921 🔧 Improve mk8s Terraform configuration
- instead of using 'kubectl wait nodes', we now use a simpler
  'kubectl get nodes -o name' and check if there is anything
  in the output. This seems to work better (as the previous
  method would sometimes remain stuck because the kubectl
  process would never get stopped by SIGPIPE).
- the shpod SSH NodePort is no longer hard-coded to 32222,
  which allows us to use e.g. vcluster to deploy multiple
  Kubernetes labs on a single 'home' (or 'outer') Kubernetes
  cluster.
2023-05-13 08:19:19 +02:00
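The node-readiness check described in the first bullet can be sketched as follows (a hedged sketch: the function name and polling interval are assumptions, not taken from the actual Terraform configuration):

```shell
# Poll 'kubectl get nodes -o name' until it prints at least one node,
# instead of relying on 'kubectl wait', which could hang as described above.
wait_for_nodes() {
  until [ -n "$(kubectl get nodes -o name 2>/dev/null)" ]; do
    sleep 5
  done
}
```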
Jérôme Petazzoni
11bc78851b Add Scaleway and Hetzner to ARM providers 2023-05-12 18:13:19 +02:00
Jérôme Petazzoni
c611f55dca Update cluster upgrade section
We now go from 1.22 to 1.23.

Updating to 1.22 was necessary because Kubernetes 1.27
deprecated kubeadm config v1beta2, which forced us to
upgrade to v1beta3, which was only introduced in 1.22.
In other words, our scripts can only install Kubernetes
1.22+ now.
2023-05-12 07:23:36 +02:00
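For illustration, a minimal kubeadm configuration using the v1beta3 apiVersion mentioned above might look like this (the exact fields and values are assumptions, not the repository's actual config):

```shell
# Write a minimal v1beta3 kubeadm config pinning the cluster version.
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.0
EOF
# It would then be passed to kubeadm with:
#   kubeadm init --config kubeadm-config.yaml
```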
Jérôme Petazzoni
980bc66c3a 🔧 Improve output of 'labctl tags' 2023-05-12 07:03:49 +02:00
Jérôme Petazzoni
fd0bc97a7a 🔓️ Disable port protection on AWS and OpenStack
This is required for the kubenet and kuberouter labs, for
'operating kubernetes' training classes.
2023-05-12 06:57:54 +02:00
Jérôme Petazzoni
8f6c32e94a 🔧 Tweak history limit to keep 1 million lines 2023-05-11 14:43:04 +02:00
Jérôme Petazzoni
1a711f8c2c Add kubent
Kube No Trouble (kubent) is a simple tool to check whether you're using any deprecated API versions in your cluster, and therefore should upgrade your workloads first, before upgrading your Kubernetes cluster.
2023-05-10 19:12:55 +02:00
Jérôme Petazzoni
0080f21817 Add velero CLI 2023-05-10 18:45:34 +02:00
ENIX NOC
f937456232 Fixed executable name for pssh on ubuntu 2023-05-09 15:28:37 +00:00
ENIX NOC
8376aba5fd Fixed ssh key usage when setting password 2023-05-09 15:28:20 +00:00
Jérôme Petazzoni
6d13122a4d Add BuildKit RUN --mount=type=cache... 2023-05-09 07:50:40 +02:00
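As an illustration of the BuildKit cache-mount syntax referenced in this commit (the base image, target path, and file names are arbitrary example values, not from the repository):

```shell
# Sketch of a Dockerfile using RUN --mount=type=cache.
cat > Dockerfile.example <<'EOF'
# syntax=docker/dockerfile:1
FROM python:3.11-slim
COPY requirements.txt .
# The cache mount persists pip's download cache across builds
# without baking it into the image layer.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
EOF
# Build with BuildKit enabled, e.g.:
#   DOCKER_BUILDKIT=1 docker build -f Dockerfile.example .
```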
Jérôme Petazzoni
8184c46ed3 Upgrade metrics-server install instructions 2023-05-09 07:25:48 +02:00
Jérôme Petazzoni
0b900f9e5c Add example file for OpenStack tfvars 2023-05-09 07:25:11 +02:00
Jérôme Petazzoni
e14d0d4ca4 🔧 Tweak netlify DNS script to take domain as env var
Now the script can be used for container.training, but also for our
other properties at Netlify (e.g. tinyshellscript.com).
2023-05-08 21:50:17 +02:00
dependabot[bot]
cdb1e41524 Bump engine.io and socket.io in /slides/autopilot
Bumps [engine.io](https://github.com/socketio/engine.io) to 6.4.2 and updates ancestor dependency [socket.io](https://github.com/socketio/socket.io). These dependencies need to be updated together.


Updates `engine.io` from 6.2.1 to 6.4.2
- [Release notes](https://github.com/socketio/engine.io/releases)
- [Changelog](https://github.com/socketio/engine.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/engine.io/compare/6.2.1...6.4.2)

Updates `socket.io` from 4.5.1 to 4.6.1
- [Release notes](https://github.com/socketio/socket.io/releases)
- [Changelog](https://github.com/socketio/socket.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io/compare/4.5.1...4.6.1)

---
updated-dependencies:
- dependency-name: engine.io
  dependency-type: indirect
- dependency-name: socket.io
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-04 10:25:18 +02:00
Jérôme Petazzoni
600e7c441c Bump up kubeadm configuration version
v1beta2 support was removed in Kubernetes 1.27.
Warning, v1beta3 was introduced in Kubernetes 1.22
(I think?) which means that the minimum version for
"old cluster" deployments is now 1.22.
2023-04-24 06:58:06 +02:00
Jérôme Petazzoni
81913d88a0 Add script to list civo locations 2023-04-23 16:13:51 +02:00
Jérôme Petazzoni
17d3d9a92a ♻️ Add clean up script to remove stray LBs and PVs 2023-04-12 08:25:47 +02:00
Jérôme Petazzoni
dd026b3db2 📃 Update healthchecks section 2023-04-11 12:42:51 +02:00
Jérôme Petazzoni
b9426af9cd ✂️ Remove Dockerfile and Compose file
They're not valid anymore, and fixing them would require quite a lot of
work, since we drastically changed the way we provision things. I'm
removing them rather than leaving a completely broken thing.
2023-04-11 10:19:20 +02:00
MrUtkarsh
aa4c0846ca Update Dockerfile_Tips.md
Updated the chown to chmod, as it was repeated.
2023-04-10 16:18:34 +02:00
Jérôme Petazzoni
abca33af29 🏭️ Second pass of Terraform refactoring
Break down provider-specific configuration into two files:
- config.tf (actual configuration, e.g. credentials, that cannot be
  included in submodules)
- variables.tf (per-provider knobs and settings, e.g. mapping logical
  VM size like S/M/L to actual cloud SKUs)
2023-04-09 09:45:05 +02:00
Jérôme Petazzoni
f69a9d3eb8 🔧 Update .gitignore to get some Terraform stuff out of the way 2023-04-04 19:34:51 +02:00
Jérôme Petazzoni
bc10c5a5ca 📔 A bit of doc 😅 2023-04-04 19:32:49 +02:00
Jérôme Petazzoni
b6340acb6e ⚛️ Huge refactoring of lab environment deployment system
Summary of changes:
- "workshopctl" is now "labctl"
- it can handle deployment of VMs but also of managed
  Kubernetes clusters (and therefore, it replaces
  the "prepare-tf" directory)
- support for many more providers has been added

Check the README.md, in particular the "directory structure";
it has the most important information.
2023-03-29 18:36:48 +02:00
Jérôme Petazzoni
f8ab4adfb7 ⚙️ Make it possible to change number of parallel SSH connections with env var 2023-03-21 17:54:29 +01:00
Jérôme Petazzoni
dc8bd21062 📃 Add YAML exercise 2023-03-20 12:56:06 +01:00
Jérôme Petazzoni
c9710a9f70 📃 Update YAML section
- fix mapping example
- fix indentation
- add information about multi-documents
- add information about multi-line strings
2023-03-20 12:46:16 +01:00
ENIX NOC
bc1ba942c0 🔧 Retry 'terraform apply' 3 times if it fails
Some platforms (looking at you OpenStack) can exhibit random
transient failures. This helps to work around them.
2023-03-11 19:42:57 +01:00
ENIX NOC
fa0a894ebc 🔧 OpenStack pool and external_network_id are now variables 2023-03-11 19:42:57 +01:00
ENIX NOC
e78e0de377 🐞 Fix bug in 'passwords' action
It was still hard-coded to user 'docker' instead of using
the USER_LOGIN environment variable.

Also add download-retry when wgetting the websocketd deb.
2023-03-11 19:42:57 +01:00
Jérôme Petazzoni
cba2ff5ff7 🔧 Check for httpie in netlify DNS script 2023-03-08 17:57:17 +01:00
Jérôme Petazzoni
d8f8bf6d87 ♻️ Switch Hetzner to the new Terraform system 2023-03-04 15:24:51 +01:00
Jérôme Petazzoni
84f131cdc5 🏭️ Refactor Digital Ocean and Linode authentication in prepare-tf
Fetch credentials from CLI configuration files instead of environment variables.
2023-03-04 14:35:09 +01:00
Jérôme Petazzoni
8738f68a72 🏭️ Small refactorings to prepare Terraform migration
- add support for Digital Ocean (through Terraform)
- add support for per-cluster SSH key (hackish for now)
- pre-load Kubernetes APT GPG key (because of GCS outage)
2023-03-04 13:40:43 +01:00
Jérôme Petazzoni
e130884184 Bump up DOK version 2023-03-04 10:18:53 +01:00
Jérôme Petazzoni
74cb1aec85 ⚙️ Store terraform variables (# of nodes...) in tfvars file
Using environment variables was a mistake, because they must be set again
manually each time we want to re-apply the Terraform configurations.
Instead, put the variables in a tfvars file.
2023-03-04 10:18:35 +01:00
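The change can be sketched as follows (the variable name how_many_nodes is a hypothetical example, not necessarily one of the repository's actual Terraform variables):

```shell
# Before: the value had to be re-exported for every 'terraform apply':
#   TF_VAR_how_many_nodes=4 terraform apply
# After: persist it in terraform.tfvars, which 'terraform apply'
# loads automatically, with no extra flags.
cat > terraform.tfvars <<'EOF'
how_many_nodes = 4
EOF
```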
Jérôme Petazzoni
70e60d7f4e 🏭️ Big refactoring to move to Ubuntu 22.04
Instead of Ubuntu 18.04, we should use 22.04 (especially as
18.04 will be EOL soon). This moves a few providers to 22.04
(and more will follow).

We now ship a small containerd configuration file (instead
of defaulting to an empty configuration like we did before)
since it looks like recent versions of containerd cause
infinite crashloops if the cgroups driver isn't set properly.

Also, Linode is now provisioned using Terraform (instead of
the old-style system relying on linode-cli) which should make
instance provisioning faster (thanks to Terraform parallelism).

The "wait" command now tries to log in with both "ubuntu" and
"root", and if it fails with "ubuntu" but succeeds with "root",
it will create the "ubuntu" user and give it full sudo rights.

Finally, a "standardize" action has been created to gather all
the commands that deal with non-standard Ubuntu images.

Note that for completeness, we should check that all providers
work correctly; currently only Linode has been validated.
2023-02-23 16:32:10 +01:00
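The small containerd configuration mentioned above presumably sets the cgroups driver explicitly; a minimal sketch (using standard containerd option names, but assumed rather than copied from the repository) could be:

```shell
# Minimal containerd config enabling the systemd cgroup driver,
# which avoids the crashloops described in the commit message.
cat > containerd-config.toml <<'EOF'
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
```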
Jérôme Petazzoni
29b3185e7e 🐘 Add link to Mastodon profile 2023-02-23 10:06:38 +01:00
Jérôme Petazzoni
0616d74e37 Add gentle intro to YAML 2023-02-22 20:56:46 +01:00
Jérôme Petazzoni
676ebcdd3f ♻️ Replace jpetazzo/httpenv with jpetazzo/color 2023-02-20 14:22:02 +01:00
Jérôme Petazzoni
28f0253242 Add kubectl np-viewer in network policy section 2023-02-20 10:37:53 +01:00
Jérôme Petazzoni
73125b5ffb 🛠️ k9s fixed the file name in their releases 🎉 2023-02-18 15:20:44 +01:00
Jérôme Petazzoni
a90c521b77 🪓 Split tmux instructions across two slides 2023-02-12 18:03:41 +01:00
Jérôme Petazzoni
bd141ddfc5 💡 Add Ctrl-B Ctrl-O tmux shortcut to cheatsheet
Super convenient if you have something on top and would like it to
be on bottom and vice versa; or to switch left and right panes.

Usually not super helpful during normal use of tmux, but very
handy when streaming, e.g. when you have a camera view obscuring
part of the top panel (or on the left/right side) and you want
to switch panel arrangement.
2023-02-12 17:40:00 +01:00
Jérôme Petazzoni
634d101efc Update HPA v2 apiVersion 2023-02-12 15:39:55 +01:00
Jérôme Petazzoni
20347a1417 ♻️ Add script to clean up Linode PVC volumes 2023-02-12 15:38:58 +01:00
Jérôme Petazzoni
893be3b18f 🖼️ Add picture of a canary cage to illustrate canary deployments 2023-02-12 13:56:36 +01:00
Bret Fisher
dd6a1adc63 Apply suggestions from code review
Co-authored-by: Tianon Gravi <admwiggin@gmail.com>
2023-02-07 23:43:40 +01:00
Bret Fisher
4dc60d3250 Check for missing docker dir 2023-02-07 23:43:40 +01:00
Jérôme Petazzoni
1aa0e062d0 ♻️ Add script to clean up Linode nodebalancers 2023-02-04 10:49:04 +01:00
Torounia
cfbe578d4f helm intro set value to juice-shop chart 2023-02-03 17:59:54 +01:00
Jérôme Petazzoni
1d692898da ♻️ Bump up versions and improve reliability of wait-for-nodes 2023-01-23 16:08:24 +01:00
Jérôme Petazzoni
9526a94b77 🐚 Improve Terraform-based deployment script
Each time we call that script, we must set a few env vars
beforehand. Let's make these vars optional parameters to
the script instead.

Also add helper scripts to list the locations (zones or
regions) available to each provider.
2023-01-23 16:07:28 +01:00
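The "env vars become optional parameters" pattern can be sketched like this (the function, variable, and default names are hypothetical):

```shell
# First positional parameter wins; fall back to the env var,
# then to a hard-coded default.
deploy() {
  provider="${1:-${PROVIDER:-openstack}}"
  echo "provider=$provider"
}
```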
Jérôme Petazzoni
e6eb157cc6 🪓 Split "kubectl expose" and "service types" 2023-01-13 17:50:22 +01:00
Jérôme Petazzoni
b984049603 📃 Reorganize a bit the deck intro 2023-01-13 16:04:39 +01:00
Jérôme Petazzoni
c200c8e1da ♻️ Refactor script to count slides
For automatic transcription and chaptering, we'll need to know
exactly at which slide each section starts. This we already
had the count-slides.py script to count how many slides each
section had, and count the number of slides per part. The new
script does the same but also gives accurately the first slide
of each section.
2023-01-06 23:11:43 +01:00
Jérôme Petazzoni
4c30e7db14 ✂️ Remove containerd 1.5 pinning
Kubernetes 1.26 requires CRI v1, which means containerd 1.6.
2023-01-03 09:10:01 +01:00
Marco Verleun
9d5a083473 Update Container_Networking_Basics.md 2022-12-12 13:43:01 +01:00
Jérôme Petazzoni
a2be63e4c4 📃 Improve Ingress exercises 2022-12-08 17:28:53 -08:00
Jérôme Petazzoni
584dddd823 🔗 Fix link to create token 2022-12-08 05:53:12 -08:00
Jérôme Petazzoni
3e9307d420 🔑 Update dashboard YAML; add persisting token for the dashboard account 2022-12-08 05:52:41 -08:00
Jérôme Petazzoni
5d3881b7e1 Add CoLiMa and fix microk8s/minikube ordering 2022-12-08 05:44:48 -08:00
Bret Fisher
d57ba24f6f Updating stern link 2022-12-05 21:10:52 -08:00
Jérôme Petazzoni
f046a32567 🐋 Update info about Docker+K8S 2022-12-05 15:29:52 -08:00
Jérôme Petazzoni
c2a169167d ☁️ Add terraform configuration for Azure 2022-12-05 15:29:52 -08:00
dependabot[bot]
961cf34b6f Bump socket.io-parser from 4.0.4 to 4.0.5 in /slides/autopilot
Bumps [socket.io-parser](https://github.com/socketio/socket.io-parser) from 4.0.4 to 4.0.5.
- [Release notes](https://github.com/socketio/socket.io-parser/releases)
- [Changelog](https://github.com/socketio/socket.io-parser/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io-parser/compare/4.0.4...4.0.5)

---
updated-dependencies:
- dependency-name: socket.io-parser
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-22 16:16:26 -08:00
dependabot[bot]
b23cae8f5b Bump engine.io from 6.2.0 to 6.2.1 in /slides/autopilot
Bumps [engine.io](https://github.com/socketio/engine.io) from 6.2.0 to 6.2.1.
- [Release notes](https://github.com/socketio/engine.io/releases)
- [Changelog](https://github.com/socketio/engine.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/engine.io/compare/6.2.0...6.2.1)

---
updated-dependencies:
- dependency-name: engine.io
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-22 16:11:57 -08:00
Jérôme Petazzoni
a09c4ec4f5 Improve netlify-dns script to suggest what to do if config file not found 2022-11-18 21:46:29 +01:00
Jérôme Petazzoni
527c63eee7 📦 Add pic of Catène de Conteneurs 2022-11-09 14:36:25 +01:00
Jérôme Petazzoni
6cfe991375 🐞 Typo fix 2022-11-04 17:23:14 +01:00
Jérôme Petazzoni
c8f90463e0 🌈 Change the tmux status bar to yellow (like a precious metal) 2022-11-02 17:02:44 +01:00
Jérôme Petazzoni
316f5b8fd8 🌈 Change tmux status bar color to blue
To help differentiate between environments
(shpod now defaults to red)
2022-11-01 11:44:32 +01:00
Jérôme Petazzoni
c86474a539 ♻️ Update kubebuilder workshop 2022-10-28 12:32:05 +02:00
Jérôme Petazzoni
2943ef4e26 Update Kyverno to 1.7 2022-10-26 19:49:23 +02:00
Jérôme Petazzoni
02004317ac 🐞 Fix some ambiguous markdown link references
I thought that the links were local to each slide, but...
apparently not. Whoops.
2022-10-24 20:41:23 +02:00
Jérôme Petazzoni
c9cc659f88 🐞 Typo fix 2022-10-09 23:05:27 +02:00
Jérôme Petazzoni
bb8e655f92 🔧 Disable unattended upgrades; add completion for kubeadm 2022-10-09 12:18:42 +02:00
Jérôme Petazzoni
50772ca439 🌍 Switch Scaleway to fr-par-2 (better PUE) 2022-10-09 12:18:07 +02:00
Jérôme Petazzoni
1082204ac7 📃 Add note about .Chart.IsRoot 2022-10-04 17:11:59 +02:00
Jérôme Petazzoni
c9c79c409c Add ytt; fix Weave YAML URL; add completion for a few tools 2022-10-04 16:53:36 +02:00
Jérôme Petazzoni
71daf27237 ⌨️ Add tmux rename window shortcut 2022-10-03 15:28:32 +02:00
Jérôme Petazzoni
986da15a22 🔗 Update kustomize eschewed features link 2022-10-03 15:23:18 +02:00
Jérôme Petazzoni
407a8631ed 🐞 Typo in variable name 2022-10-03 15:15:53 +02:00
Jérôme Petazzoni
b4a81a7054 🔧 Minor tweak to Terraform provisioning wrapper 2022-10-03 15:15:12 +02:00
Jérôme Petazzoni
d0f0d2c87b 🔧 Typo fix 2022-09-27 14:53:14 +02:00
Jérôme Petazzoni
0f77eaa48b 📃 Update info about Docker Desktop and Rancher Desktop 2022-09-26 13:42:20 +02:00
Jérôme Petazzoni
659713a697 Bump up dashboard version 2022-09-26 11:41:28 +02:00
Jérôme Petazzoni
20d21b742a Bump up Compose version to use 2.X everywhere 2022-09-25 17:28:52 +02:00
Jérôme Petazzoni
747605357d 🏭️ Refactor Ingress chapter 2022-09-25 14:20:26 +02:00
Jérôme Petazzoni
17bb84d22e 🏭️ Refactor healthcheck chapter
Add more details for startup probes.
Mention GRPC check.
Better spell out recommendations and gotchas.
2022-09-11 13:11:01 +02:00
Jérôme Petazzoni
d343264b86 📃 Update swap/cgroups v2 section to mention KEP2400 2022-09-10 09:31:39 +02:00
Jérôme Petazzoni
a216aa2034 🐞 Fix install of kube-ps1
The former method was invalid and didn't work with e.g. screen.
2022-08-31 12:42:47 +02:00
Francesco Manzali
64f993ff69 - Update VMs to ubuntu/focal64 20.04 LTS (trusty64 reached EOL on April 25 2019)
- Update Docker installation task from the
  [official docs](https://docs.docker.com/engine/install/ubuntu/)
2022-08-31 12:06:10 +02:00
Jérôme Petazzoni
73b3cad0b8 🔧 Fix a couple of issues related to OCI images 2022-08-22 17:20:36 +02:00
Naeem Ilyas
26e5459fae typo fix 2022-08-22 10:23:57 +02:00
Jérôme Petazzoni
9c564e6787 Add info about ownerReferences with Kyverno 2022-08-19 14:59:11 +02:00
Jérôme Petazzoni
2724a611a6 📃 Update rolling update intro slide 2022-08-17 14:49:17 +02:00
Jérôme Petazzoni
2ca239ddfc 🔒️ Mention bound service account tokens 2022-08-17 14:18:15 +02:00
Jérôme Petazzoni
e74a158c59 📃 Document dependency on yq 2022-08-17 13:49:15 +02:00
Jérôme Petazzoni
138af3b5d2 ♻️ Upgrade build image to Netlify Focal; bump up Python version 2022-08-17 13:48:55 +02:00
Jérôme Petazzoni
ad6d16bade Add RBAC and NetPol exercises 2022-08-17 13:16:52 +02:00
332 changed files with 7034 additions and 5484 deletions

.gitignore vendored

@@ -2,11 +2,14 @@
*.swp
*~
prepare-vms/tags
prepare-vms/infra
prepare-vms/www
prepare-tf/tag-*
**/terraform.tfstate
**/terraform.tfstate.backup
prepare-labs/terraform/lab-environments
prepare-labs/terraform/many-kubernetes/one-kubernetes-config/config.tf
prepare-labs/terraform/many-kubernetes/one-kubernetes-module/*.tf
prepare-labs/terraform/tags
prepare-labs/terraform/virtual-machines/openstack/*.tfvars
prepare-labs/www
slides/*.yml.html
slides/autopilot/state.yaml
@@ -26,3 +29,4 @@ node_modules
Thumbs.db
ehthumbs.db
ehthumbs_vista.db


@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,8 +253,8 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
spec:
containers:
- args:
@@ -262,7 +262,7 @@ spec:
- --sidecar-host=http://127.0.0.1:8000
- --enable-skip-login
- --enable-insecure-login
- image: kubernetesui/dashboard:v2.5.0
+ image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -293,7 +293,7 @@ spec:
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- - image: kubernetesui/metrics-scraper:v1.0.7
+ - image: kubernetesui/metrics-scraper:v1.0.8
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:


@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,15 +253,15 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
spec:
containers:
- args:
- --namespace=kubernetes-dashboard
- --auto-generate-certificates
- --sidecar-host=http://127.0.0.1:8000
- image: kubernetesui/dashboard:v2.5.0
+ image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -292,7 +292,7 @@ spec:
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- - image: kubernetesui/metrics-scraper:v1.0.7
+ - image: kubernetesui/metrics-scraper:v1.0.8
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:


@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
- app.kubernetes.io/version: 2.5.0
- helm.sh/chart: kubernetes-dashboard-5.2.0
+ app.kubernetes.io/version: 2.7.0
+ helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,15 +253,15 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
spec:
containers:
- args:
- --namespace=kubernetes-dashboard
- --auto-generate-certificates
- --sidecar-host=http://127.0.0.1:8000
image: kubernetesui/dashboard:v2.5.0
image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -292,7 +292,7 @@ spec:
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- image: kubernetesui/metrics-scraper:v1.0.7
- image: kubernetesui/metrics-scraper:v1.0.8
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -344,3 +344,12 @@ metadata:
creationTimestamp: null
name: cluster-admin
namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: cluster-admin-token
namespace: kubernetes-dashboard
annotations:
kubernetes.io/service-account.name: cluster-admin


@@ -1,5 +1,5 @@
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
metadata:
name: rng
spec:


@@ -15,10 +15,10 @@ spec:
- key: "{{ request.operation }}"
operator: Equals
value: UPDATE
- key: "{{ request.oldObject.metadata.labels.color }}"
- key: "{{ request.oldObject.metadata.labels.color || '' }}"
operator: NotEquals
value: ""
- key: "{{ request.object.metadata.labels.color }}"
- key: "{{ request.object.metadata.labels.color || '' }}"
operator: NotEquals
value: ""
validate:


@@ -15,10 +15,10 @@ spec:
- key: "{{ request.operation }}"
operator: Equals
value: UPDATE
- key: "{{ request.oldObject.metadata.labels.color }}"
- key: "{{ request.oldObject.metadata.labels.color || '' }}"
operator: NotEquals
value: ""
- key: "{{ request.object.metadata.labels.color }}"
- key: "{{ request.object.metadata.labels.color || '' }}"
operator: Equals
value: ""
validate:


@@ -1,36 +1,44 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
name: traefik
namespace: traefik
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
name: traefik
namespace: traefik
labels:
k8s-app: traefik-ingress-lb
app: traefik
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
app: traefik
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
app: traefik
name: traefik
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
# If, for some reason, our CNI plugin doesn't support hostPort,
# we can enable hostNetwork instead. That should work everywhere
# but it doesn't provide the same isolation.
#hostNetwork: true
serviceAccountName: traefik
terminationGracePeriodSeconds: 60
containers:
- image: traefik:v2.5
name: traefik-ingress-lb
- image: traefik:v2.10
name: traefik
ports:
- name: http
containerPort: 80
@@ -61,7 +69,7 @@ spec:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
name: traefik
rules:
- apiGroups:
- ""
@@ -73,14 +81,6 @@ rules:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
@@ -94,15 +94,15 @@ rules:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
name: traefik
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
name: traefik
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system
name: traefik
namespace: traefik
---
kind: IngressClass
apiVersion: networking.k8s.io/v1


@@ -70,4 +70,15 @@ add_namespace() {
kubectl create serviceaccount -n kubernetes-dashboard cluster-admin \
-o yaml --dry-run=client \
#
echo ---
cat <<EOF
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: cluster-admin-token
namespace: kubernetes-dashboard
annotations:
kubernetes.io/service-account.name: cluster-admin
EOF
) > dashboard-with-token.yaml


@@ -2,4 +2,3 @@
base = "slides"
publish = "slides"
command = "./build.sh once"

prepare-labs/README.md Normal file

@@ -0,0 +1,201 @@
# Tools to create lab environments
This directory contains tools to create lab environments for Docker and Kubernetes courses and workshops.
It also contains Terraform configurations that can be used stand-alone to create simple Kubernetes clusters.
Assuming that you have installed all the necessary dependencies, and placed cloud provider access tokens in the right locations, you could do, for instance:
```bash
# For a Docker course with 50 students,
# create 50 VMs on Digital Ocean.
./labctl create --students 50 --settings settings/docker.env --provider digitalocean
# For a Kubernetes training with 20 students,
# create 20 clusters of 4 VMs each using kubeadm,
# on a private Openstack cluster.
./labctl create --students 20 --settings settings/kubernetes.env --provider openstack/enix
# For a Kubernetes workshop with 80 students,
# create 80 clusters with 2 VMs each,
# using Scaleway Kapsule (managed Kubernetes).
./labctl create --students 80 --settings settings/mk8s.env --provider scaleway --mode mk8s
```
Interested? Read on!
## Software requirements
For Docker labs and Kubernetes labs based on kubeadm:
- [Parallel SSH](https://github.com/lilydjwg/pssh)
(should be installable with `pip install git+https://github.com/lilydjwg/pssh`;
on a Mac, try `brew install pssh`)
For all labs:
- Terraform
If you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML)
- [jinja2](https://pypi.python.org/pypi/Jinja2)
These require Python 3. If you are on a Mac, see below for instructions on making Python 3 the default. In particular, if you installed `mosh`, Homebrew
may have changed your default Python to Python 2.
You will also need an account with the cloud provider(s) that you want to use to deploy the lab environments.
## Cloud provider account(s) and credentials
These scripts create VMs or Kubernetes clusters on cloud providers, so you will need cloud provider account(s) and credentials.
Generally, we try to use the credentials stored in the configuration files used by the cloud providers' CLI tools.
This means, for instance, that for Linode, if you install `linode-cli` and configure it properly, it will place your credentials in `~/.config/linode-cli`, and our Terraform configurations will try to read that file and use the credentials in it.
You don't **have to** install the CLI tools of the cloud provider(s) that you want to use; but we recommend that you do.
If you want to provide your cloud credentials through other means, you will have to adjust the Terraform configuration files in `terraform/providers` accordingly.
## General Workflow
- fork/clone repo
- make sure your cloud credentials have been configured properly
- run `./labctl create ...` to create lab environments
- run `./labctl destroy ...` when you don't need the environments anymore
## Customizing things
You can edit the `settings/*.env` files, for instance to change the size of the clusters, the login or password used for the students...
Note that these files are sourced before executing any operation on a specific set of lab environments, which means that you can set Terraform variables by adding lines like the following one in the `*.env` files:
```bash
export TF_VAR_node_size=GP1.L
export TF_VAR_location=eu-north
```
## `./labctl` Usage
If you run `./labctl` without arguments, it will show a list of available commands.
### Summary of What `./labctl` Does For You
The script will create a Terraform configuration using a provider-specific template.
There are two modes: `pssh` and `mk8s`.
In `pssh` mode, students connect directly to the virtual machines using SSH.
The Terraform configuration creates a bunch of virtual machines, then the provisioning and configuration are done with `pssh`. There are a number of "steps" that are executed on the VMs, to install Docker, install a number of convenient tools, install and set up Kubernetes (if needed)... The list of "steps" to be executed is configured in the `settings/*.env` file.
In `mk8s` mode, students don't connect directly to the virtual machines. Instead, they connect to an SSH server running in a Pod (using the `jpetazzo/shpod` image), itself running on a Kubernetes cluster. The Kubernetes cluster is a managed cluster created by the Terraform configuration.
## `terraform` directory structure and principles
Legend:
- `📁` directory
- `📄` file
- `📄📄📄` multiple files
- `🌍` Terraform configuration that can be used "as-is"
```
📁terraform
├── 📁list-locations
│ └── 📄📄📄 helper scripts
│ (to list available locations for each provider)
├── 📁many-kubernetes
│ └── 📄📄📄 Terraform configuration template
│ (used in mk8s mode)
├── 📁one-kubernetes
│ │ (contains Terraform configurations that can spawn
│ │ a single Kubernetes cluster on a given provider)
│ ├── 📁🌍aws
│ ├── 📁🌍civo
│ ├── 📄common.tf
│ ├── 📁🌍digitalocean
│ └── ...
├── 📁providers
│ ├── 📁aws
│ │ ├── 📄config.tf
│ │ └── 📄variables.tf
│ ├── 📁azure
│ │ ├── 📄config.tf
│ │ └── 📄variables.tf
│ ├── 📁civo
│ │ ├── 📄config.tf
│ │ └── 📄variables.tf
│ ├── 📁digitalocean
│ │ ├── 📄config.tf
│ │ └── 📄variables.tf
│ └── ...
├── 📁tags
│ │ (contains Terraform configurations + other files
│ │ for a specific set of VMs or K8S clusters; these
│ │ are created by labctl)
│ ├── 📁2023-03-27-10-04-79-jp
│ ├── 📁2023-03-27-10-07-41-jp
│ ├── 📁2023-03-27-10-16-418-jp
│ └── ...
└── 📁virtual-machines
│ (contains Terraform configurations that can spawn
│ a bunch of virtual machines on a given provider)
├── 📁🌍aws
├── 📁🌍azure
├── 📄common.tf
├── 📁🌍digitalocean
└── ...
```
The directory structure can feel a bit overwhelming at first, but it's built with specific goals in mind.
**Consistent input/output between providers.** The per-provider configurations in `one-kubernetes` all take the same input variables, and provide the same output variables. Same thing for the per-provider configurations in `virtual-machines`.
**Don't repeat yourself.** As much as possible, common variables, definitions, and logic have been factored into the `common.tf` file that you can see in `one-kubernetes` and `virtual-machines`. That file is then symlinked into each provider-specific directory, to make sure that all providers use the same version of the `common.tf` file.
**Don't repeat yourself (again).** The things that are specific to each provider have been placed in the `providers` directory, and are shared between the `one-kubernetes` and the `virtual-machines` configurations. Specifically, for each provider, there is `config.tf` (which contains provider configuration, e.g. how to obtain the credentials for that provider) and `variables.tf` (which contains default values like which location and which VM size to use).
**Terraform configurations should work in `labctl` or standalone, without extra work.** The Terraform configurations (identified by 🌍 in the directory tree above) can be used directly. Just go to one of these directories, `terraform init`, `terraform apply`, and you're good to go. But they can also be used from `labctl`. `labctl` shouldn't barf out if you did a `terraform apply` in one of these directories (because it will only copy the `*.tf` files, and leave alone the other files, like the Terraform state).
The latter means that it should be easy to tweak these configurations, or create a new one, without having to use `labctl` to test it. It also means that if you want to use these configurations but don't care about `labctl`, you absolutely can!
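For instance, assuming your provider credentials are already in place, a standalone run could look like this (the provider directory is just an example; any of the 🌍 directories works):

```bash
# Use one of the standalone Terraform configurations directly, without labctl.
cd terraform/one-kubernetes/digitalocean
terraform init
terraform apply
# ...and when you're done with the cluster:
terraform destroy
```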
## Miscellaneous info
### Making sure Python3 is the default (Mac only)
Check the `/usr/local/bin/python` symlink. It should be pointing to
`/usr/local/Cellar/python/3`-something. If it isn't, follow these
instructions.
1) Verify that Python 3 is installed.
```
ls -la /usr/local/Cellar/Python
```
You should see one or more versions of Python 3. If you don't,
install it with `brew install python`.
2) Verify that `python` points to Python3.
```
ls -la /usr/local/bin/python
```
If this points to `/usr/local/Cellar/python@2`, then we'll need to change it.
```
rm /usr/local/bin/python
ln -s /usr/local/Cellar/Python/xxxx /usr/local/bin/python
# where xxxx is the most recent Python 3 version you saw above
```
### AWS specific notes
These instructions assume that you're using a root account. If you'd like to use an IAM user instead, it will need the right permissions. For `pssh` mode, that includes at least `AmazonEC2FullAccess` and `IAMReadOnlyAccess`.
In `pssh` mode, the Terraform configuration currently uses the default VPC and Security Group. If you want to use another one, you'll have to make changes to `terraform/virtual-machines/aws`.
The default VPC Security Group does not open any ports from the Internet by default. So you'll need to add inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`.
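Assuming the AWS CLI is configured, those two inbound rules could be added like this (the security group ID below is a placeholder; substitute your default VPC's security group):

```bash
SG_ID=sg-0123456789abcdef0   # placeholder: your default VPC security group ID
# Allow SSH from anywhere
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
# Allow the lab web services (ports 8000-8002) from anywhere
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol tcp --port 8000-8002 --cidr 0.0.0.0/0
```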

prepare-labs/cleanup.sh Executable file

@@ -0,0 +1,28 @@
#!/bin/sh
case "$1-$2" in
linode-lb)
linode-cli nodebalancers list --json |
jq '.[] | select(.label | startswith("ccm-")) | .id' |
xargs -n1 -P10 linode-cli nodebalancers delete
;;
linode-pvc)
linode-cli volumes list --json |
jq '.[] | select(.label | startswith("pvc")) | .id' |
xargs -n1 -P10 linode-cli volumes delete
;;
digitalocean-lb)
doctl compute load-balancer list --output json |
jq .[].id |
xargs -n1 -P10 doctl compute load-balancer delete --force
;;
digitalocean-pvc)
doctl compute volume list --output json |
jq '.[] | select(.name | startswith("pvc-")) | .id' |
xargs -n1 -P10 doctl compute volume delete --force
;;
*)
echo "Unknown combination of provider ('$1') and resource ('$2')."
;;
esac

prepare-labs/dns-cloudflare.sh Executable file

@@ -0,0 +1,41 @@
#!/bin/sh
#set -eu
if ! command -v http >/dev/null; then
echo "Could not find the 'http' command line tool."
echo "Please install it (the package name might be 'httpie')."
exit 1
fi
. ~/creds/creds.cloudflare.dns
cloudflare() {
URI=$1
shift
http https://api.cloudflare.com/client/v4/$URI "$@" "Authorization:Bearer $CLOUDFLARE_TOKEN"
}
_list_zones() {
cloudflare zones | jq -r .result[].name
}
_get_zone_id() {
cloudflare zones?name=$1 | jq -r .result[0].id
}
_populate_zone() {
ZONE_ID=$(_get_zone_id $1)
shift
for IPADDR in $*; do
cloudflare zones/$ZONE_ID/dns_records "name=*" "type=A" "content=$IPADDR"
cloudflare zones/$ZONE_ID/dns_records "name=\@" "type=A" "content=$IPADDR"
done
}
_add_zone() {
cloudflare zones "name=$1"
}
echo "This script is still work in progress."
echo "You can source it and then use its individual functions."


@@ -2,16 +2,16 @@
"""
There are two ways to use this script:
1. Pass a file name and a tag name as a single argument.
It will load a list of domains from the given file (one per line),
and assign them to the clusters corresponding to that tag.
There should be more domains than clusters.
Example: ./map-dns.py domains.txt 2020-08-15-jp
2. Pass a domain as the 1st argument, and IP addresses then.
1. Pass a domain as the 1st argument, and IP addresses then.
It will configure the domain with the listed IP addresses.
Example: ./map-dns.py open-duck.site 1.2.3.4 2.3.4.5 3.4.5.6
2. Pass two file names as arguments, in which case the first
file should contain a list of domains, and the second a list of
groups of IP addresses, with one group per line.
There should be more domains than groups of addresses.
Example: ./map-dns.py domains.txt tags/2020-08-15-jp/clusters.txt
In both cases, the domains should be configured to use GANDI LiveDNS.
"""
import os
@@ -30,18 +30,9 @@ domain_or_domain_file = sys.argv[1]
if os.path.isfile(domain_or_domain_file):
domains = open(domain_or_domain_file).read().split()
domains = [ d for d in domains if not d.startswith('#') ]
ips_file_or_tag = sys.argv[2]
if os.path.isfile(ips_file_or_tag):
lines = open(ips_file_or_tag).read().split('\n')
clusters = [line.split() for line in lines]
else:
ips = open(f"tags/{ips_file_or_tag}/ips.txt").read().split()
settings_file = f"tags/{tag}/settings.yaml"
clustersize = yaml.safe_load(open(settings_file))["clustersize"]
clusters = []
while ips:
clusters.append(ips[:clustersize])
ips = ips[clustersize:]
clusters_file = sys.argv[2]
lines = open(clusters_file).read().split('\n')
clusters = [line.split() for line in lines]
else:
domains = [domain_or_domain_file]
clusters = [sys.argv[2:]]
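The chunking logic that this change removes (splitting a flat list of IP addresses into groups of `clustersize`) can be sketched as follows; the function name is just for illustration:

```python
def chunk_ips(ips, clustersize):
    """Split a flat list of IP addresses into clusters of `clustersize` each."""
    clusters = []
    while ips:
        clusters.append(ips[:clustersize])
        ips = ips[clustersize:]
    return clusters

print(chunk_ips(["1.2.3.4", "2.3.4.5", "3.4.5.6", "4.5.6.7"], 2))
# → [['1.2.3.4', '2.3.4.5'], ['3.4.5.6', '4.5.6.7']]
```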


@@ -17,8 +17,26 @@
exit 1
}
NETLIFY_USERID=$(jq .userId < ~/.config/netlify/config.json)
NETLIFY_TOKEN=$(jq -r .users[$NETLIFY_USERID].auth.token < ~/.config/netlify/config.json)
NETLIFY_CONFIG_FILE=~/.config/netlify/config.json
if ! [ "$DOMAIN" ]; then
DOMAIN=container.training
fi
if ! [ -f "$NETLIFY_CONFIG_FILE" ]; then
echo "Could not find Netlify configuration file ($NETLIFY_CONFIG_FILE)."
echo "Try to run the following command, and try again:"
echo "npx netlify-cli login"
exit 1
fi
if ! command -v http >/dev/null; then
echo "Could not find the 'http' command line tool."
echo "Please install it (the package name might be 'httpie')."
exit 1
fi
NETLIFY_USERID=$(jq .userId < "$NETLIFY_CONFIG_FILE")
NETLIFY_TOKEN=$(jq -r .users[$NETLIFY_USERID].auth.token < "$NETLIFY_CONFIG_FILE")
netlify() {
URI=$1
@@ -27,7 +45,7 @@ netlify() {
}
ZONE_ID=$(netlify dns_zones |
jq -r '.[] | select ( .name == "container.training" ) | .id')
jq -r '.[] | select ( .name == "'$DOMAIN'" ) | .id')
_list() {
netlify dns_zones/$ZONE_ID/dns_records |
@@ -35,7 +53,7 @@ _list() {
}
_add() {
NAME=$1.container.training
NAME=$1.$DOMAIN
ADDR=$2


(binary image changed; 127 KiB before and after)

prepare-labs/konk.sh Executable file

@@ -0,0 +1,19 @@
#!/bin/sh
# deploy big cluster
TF_VAR_node_size=g6-standard-6 \
TF_VAR_nodes_per_cluster=5 \
TF_VAR_location=eu-west \
./labctl create --mode mk8s --settings settings/mk8s.env --provider linode --tag konk
# set kubeconfig file
cp tags/konk/stage2/kubeconfig.101 ~/kubeconfig
# set external_ip labels
kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name} {.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}' |
while read node address; do
kubectl label node $node external_ip=$address
done
# vcluster all the things
./labctl create --settings settings/mk8s.env --provider vcluster --mode mk8s --students 27


@@ -21,10 +21,13 @@ DEPENDENCIES="
man
pssh
ssh
wkhtmltopdf
yq
"
UNUSED_DEPENDENCIES="
wkhtmltopdf
"
# Check for missing dependencies, and issue a warning if necessary.
missing=0
for dependency in $DEPENDENCIES; do


@@ -50,20 +50,6 @@ sep() {
fi
}
need_infra() {
if [ -z "$1" ]; then
die "Please specify infrastructure file. (e.g.: infra/aws)"
fi
if [ "$1" = "--infra" ]; then
die "The infrastructure file should be passed directly to this command. Remove '--infra' and try again."
fi
if [ ! -f "$1" ]; then
die "Infrastructure file $1 doesn't exist."
fi
. "$1"
. "lib/infra/$INFRACLASS.sh"
}
need_tag() {
if [ -z "$TAG" ]; then
die "Please specify a tag. To see available tags, run: $0 tags"
@@ -71,25 +57,12 @@ need_tag() {
if [ ! -d "tags/$TAG" ]; then
die "Tag $TAG not found (directory tags/$TAG does not exist)."
fi
for FILE in settings.yaml ips.txt infra.sh; do
for FILE in settings.env ips.txt; do
if [ ! -f "tags/$TAG/$FILE" ]; then
warning "File tags/$TAG/$FILE not found."
fi
done
. "tags/$TAG/infra.sh"
. "lib/infra/$INFRACLASS.sh"
}
need_settings() {
if [ -z "$1" ]; then
die "Please specify a settings file. (e.g.: settings/kube101.yaml)"
fi
if [ ! -f "$1" ]; then
die "Settings file $1 doesn't exist."
if [ -f "tags/$TAG/settings.env" ]; then
. tags/$TAG/settings.env
fi
}
need_login_password() {
USER_LOGIN=$(yq -r .user_login < tags/$TAG/settings.yaml)
USER_PASSWORD=$(yq -r .user_password < tags/$TAG/settings.yaml)
}

File diff suppressed because it is too large.


@@ -0,0 +1,7 @@
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true

Binary file not shown.


@@ -16,18 +16,12 @@ pssh() {
}
echo "[parallel-ssh] $@"
export PSSH=$(which pssh || which parallel-ssh)
case "$INFRACLASS" in
hetzner) LOGIN=root ;;
linode) LOGIN=root ;;
*) LOGIN=ubuntu ;;
esac
$PSSH -h $HOSTFILE -l $LOGIN \
--par 100 \
$(which pssh || which parallel-ssh) -h $HOSTFILE -l ubuntu \
--par ${PSSH_PARALLEL_CONNECTIONS-100} \
--timeout 300 \
-O LogLevel=ERROR \
-O IdentityFile=tags/$TAG/id_rsa \
-O UserKnownHostsFile=/dev/null \
-O StrictHostKeyChecking=no \
-O ForwardAgent=yes \


@@ -0,0 +1,21 @@
CLUSTERSIZE=1
CLUSTERPREFIX=dmuc
USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
wait
standardize
clusterize
tools
docker
disabledocker
createuser
webssh
tailhist
kubebins
kubetools
ips
"


@@ -0,0 +1,21 @@
CLUSTERSIZE=3
CLUSTERPREFIX=kubenet
CLUSTERNUMBER=100
USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
wait
standardize
clusterize
tools
docker
createuser
webssh
tailhist
kubebins
kubetools
ips
"


@@ -0,0 +1,21 @@
CLUSTERSIZE=3
CLUSTERPREFIX=kuberouter
CLUSTERNUMBER=200
USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
wait
standardize
clusterize
tools
docker
createuser
webssh
tailhist
kubebins
kubetools
ips
"


@@ -0,0 +1,24 @@
CLUSTERSIZE=3
CLUSTERPREFIX=oldversion
USER_LOGIN=k8s
USER_PASSWORD=training
# For a list of old versions, check:
# https://kubernetes.io/releases/patch-releases/#non-active-branch-history
KUBEVERSION=1.22.5
STEPS="
wait
standardize
clusterize
tools
docker
createuser
webssh
tailhist
kube
kubetools
kubetest
"


@@ -0,0 +1,20 @@
CLUSTERSIZE=3
CLUSTERPREFIX=test
USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
wait
standardize
clusterize
tools
docker
createuser
webssh
tailhist
kube
kubetools
kubetest
"


@@ -0,0 +1,19 @@
CLUSTERSIZE=1
CLUSTERPREFIX=moby
USER_LOGIN=docker
USER_PASSWORD=training
STEPS="
wait
standardize
clusterize
tools
docker
createuser
webssh
tailhist
cards
ips
"


@@ -0,0 +1,20 @@
CLUSTERSIZE=4
CLUSTERPREFIX=node
USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
wait
standardize
clusterize
tools
docker
createuser
webssh
tailhist
kube
kubetools
kubetest
"


@@ -0,0 +1,21 @@
CLUSTERSIZE=10
export TF_VAR_node_size=GP1.M
CLUSTERPREFIX=node
USER_LOGIN=k8s
USER_PASSWORD=training
STEPS="
wait
standardize
clusterize
tools
docker
createuser
webssh
tailhist
kube
kubetools
kubetest
"


@@ -0,0 +1,6 @@
CLUSTERSIZE=2
USER_LOGIN=k8s
USER_PASSWORD=
STEPS="stage2"


@@ -0,0 +1,16 @@
CLUSTERSIZE=1
CLUSTERPREFIX=CHANGEME
USER_LOGIN=portal
USER_PASSWORD=CHANGEME
STEPS="
wait
standardize
clusterize
tools
docker
createuser
ips
"


@@ -0,0 +1,48 @@
#!/bin/sh
set -e
PREFIX=$(date +%Y-%m-%d-%H-%M)
PROVIDER=openstack/enix # aws also works
STUDENTS=2
#export TF_VAR_location=eu-north-1
export TF_VAR_node_size=S
SETTINGS=admin-dmuc
TAG=$PREFIX-$SETTINGS
./labctl create \
--tag $TAG \
--provider $PROVIDER \
--settings settings/$SETTINGS.env \
--students $STUDENTS
SETTINGS=admin-kubenet
TAG=$PREFIX-$SETTINGS
./labctl create \
--tag $TAG \
--provider $PROVIDER \
--settings settings/$SETTINGS.env \
--students $STUDENTS
SETTINGS=admin-kuberouter
TAG=$PREFIX-$SETTINGS
./labctl create \
--tag $TAG \
--provider $PROVIDER \
--settings settings/$SETTINGS.env \
--students $STUDENTS
SETTINGS=admin-oldversion
TAG=$PREFIX-$SETTINGS
./labctl create \
--tag $TAG \
--provider $PROVIDER \
--settings settings/$SETTINGS.env \
--students $STUDENTS
SETTINGS=admin-test
TAG=$PREFIX-$SETTINGS
./labctl create \
--tag $TAG \
--provider $PROVIDER \
--settings settings/$SETTINGS.env \
--students $STUDENTS

prepare-labs/tags Symbolic link

@@ -0,0 +1 @@
terraform/tags




@@ -0,0 +1,4 @@
#!/bin/sh
az account list-locations -o table \
--query "sort_by([?metadata.regionType == 'Physical'], &regionalDisplayName)[]
.{ displayName: displayName, regionalDisplayName: regionalDisplayName }"


@@ -0,0 +1,2 @@
#!/bin/sh
civo region ls


@@ -0,0 +1,2 @@
#!/bin/sh
doctl compute region list


@@ -0,0 +1,2 @@
#!/bin/sh
gcloud compute zones list


@@ -0,0 +1,2 @@
#!/bin/sh
linode-cli regions list


@@ -0,0 +1,2 @@
#!/bin/sh
oci iam region list


@@ -0,0 +1,6 @@
#!/bin/sh
echo "# Note that this is hard-coded in $0.
# I don't know if there is a way to list regions through the Scaleway API.
fr-par
nl-ams
pl-waw"


@@ -8,8 +8,10 @@ resource "random_string" "_" {
resource "time_static" "_" {}
locals {
timestamp = formatdate("YYYY-MM-DD-hh-mm", time_static._.rfc3339)
tag = random_string._.result
min_nodes_per_pool = var.nodes_per_cluster
max_nodes_per_pool = var.nodes_per_cluster * 2
timestamp = formatdate("YYYY-MM-DD-hh-mm", time_static._.rfc3339)
tag = random_string._.result
# Common tags to be assigned to all resources
common_tags = [
"created-by-terraform",


@@ -1,10 +1,9 @@
module "clusters" {
source = "./modules/PROVIDER"
source = "./one-kubernetes-module"
for_each = local.clusters
cluster_name = each.value.cluster_name
min_nodes_per_pool = var.min_nodes_per_pool
max_nodes_per_pool = var.max_nodes_per_pool
enable_arm_pool = var.enable_arm_pool
min_nodes_per_pool = local.min_nodes_per_pool
max_nodes_per_pool = local.max_nodes_per_pool
node_size = var.node_size
common_tags = local.common_tags
location = each.value.location
@@ -62,9 +61,11 @@ resource "null_resource" "wait_for_nodes" {
KUBECONFIG = local_file.kubeconfig[each.key].filename
}
command = <<-EOT
set -e
kubectl get nodes --watch | grep --silent --line-buffered .
kubectl wait node --for=condition=Ready --all --timeout=10m
while sleep 1; do
kubectl get nodes -o name | grep --silent . &&
kubectl wait node --for=condition=Ready --all --timeout=10m &&
break
done
EOT
}
}


@@ -0,0 +1 @@
one-kubernetes-config/config.tf


@@ -0,0 +1,3 @@
This directory should contain a config.tf file, even if it's empty.
(Because if the file doesn't exist, then the Terraform configuration
in the parent directory will fail.)


@@ -0,0 +1,8 @@
This directory should contain a copy of one of the "one-kubernetes" modules.
For instance, when located in this directory, you can do:
cp ../../one-kubernetes/linode/* .
Then, move the config.tf file to ../one-kubernetes-config:
mv config.tf ../one-kubernetes-config


@@ -0,0 +1 @@
one-kubernetes-module/provider.tf


@@ -0,0 +1,3 @@
terraform {
required_version = ">= 1.4"
}


@@ -2,7 +2,7 @@ terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.7.1"
version = "2.16.1"
}
}
}
@@ -90,7 +90,6 @@ resource "kubernetes_service" "shpod_${index}" {
name = "ssh"
port = 22
target_port = 22
node_port = 32222
}
type = "NodePort"
}
@@ -222,7 +221,10 @@ output "ip_addresses_of_nodes" {
value = join("\n", [
%{ for index, cluster in clusters ~}
join("\t", concat(
[ random_string.shpod_${index}.result, "ssh -l k8s -p 32222" ],
[
random_string.shpod_${index}.result,
"ssh -l k8s -p $${kubernetes_service.shpod_${index}.spec[0].port[0].node_port}"
],
split(" ", file("./externalips.${index}"))
)),
%{ endfor ~}


@@ -0,0 +1,28 @@
variable "tag" {
type = string
}
variable "how_many_clusters" {
type = number
default = 2
}
variable "nodes_per_cluster" {
type = number
default = 2
}
variable "node_size" {
type = string
default = "M"
}
variable "location" {
type = string
default = null
}
# TODO: perhaps handle if it's space-separated instead of newline?
locals {
locations = var.location == null ? [null] : split("\n", var.location)
}


@@ -0,0 +1 @@
../common.tf


@@ -0,0 +1 @@
../../providers/aws/config.tf


@@ -0,0 +1,87 @@
# Taken from:
# https://github.com/hashicorp/learn-terraform-provision-eks-cluster/blob/main/main.tf
data "aws_availability_zones" "available" {}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.19.0"
name = var.cluster_name
cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.5.1"
cluster_name = var.cluster_name
cluster_version = "1.24"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
cluster_endpoint_public_access = true
eks_managed_node_group_defaults = {
ami_type = "AL2_x86_64"
}
eks_managed_node_groups = {
one = {
name = "node-group-one"
instance_types = [local.node_size]
min_size = var.min_nodes_per_pool
max_size = var.max_nodes_per_pool
desired_size = var.min_nodes_per_pool
}
}
}
# https://aws.amazon.com/blogs/containers/amazon-ebs-csi-driver-is-now-generally-available-in-amazon-eks-add-ons/
data "aws_iam_policy" "ebs_csi_policy" {
arn = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
}
module "irsa-ebs-csi" {
source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
version = "4.7.0"
create_role = true
role_name = "AmazonEKSTFEBSCSIRole-${module.eks.cluster_name}"
provider_url = module.eks.oidc_provider
role_policy_arns = [data.aws_iam_policy.ebs_csi_policy.arn]
oidc_fully_qualified_subjects = ["system:serviceaccount:kube-system:ebs-csi-controller-sa"]
}
resource "aws_eks_addon" "ebs-csi" {
cluster_name = module.eks.cluster_name
addon_name = "aws-ebs-csi-driver"
addon_version = "v1.5.2-eksbuild.1"
service_account_role_arn = module.irsa-ebs-csi.iam_role_arn
tags = {
"eks_addon" = "ebs-csi"
"terraform" = "true"
}
}


@@ -0,0 +1,44 @@
output "cluster_id" {
  value = module.eks.cluster_arn
}
output "has_metrics_server" {
  value = false
}
output "kubeconfig" {
  sensitive = true
  value = yamlencode({
    apiVersion = "v1"
    kind       = "Config"
    clusters = [{
      name = var.cluster_name
      cluster = {
        "certificate-authority-data" = module.eks.cluster_certificate_authority_data
        server                       = module.eks.cluster_endpoint
      }
    }]
    contexts = [{
      name = var.cluster_name
      context = {
        cluster = var.cluster_name
        user    = var.cluster_name
      }
    }]
    users = [{
      name = var.cluster_name
      user = {
        exec = {
          apiVersion = "client.authentication.k8s.io/v1beta1"
          command    = "aws"
          args       = ["eks", "get-token", "--cluster-name", var.cluster_name]
        }
      }
    }]
    "current-context" = var.cluster_name
  })
}
data "aws_eks_cluster_auth" "_" {
  name = module.eks.cluster_name
}
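Note that the `aws_eks_cluster_auth` data source above is not referenced by the kubeconfig output, which relies on `exec` plus the aws CLI instead. A hedged alternative `users` entry that embeds the short-lived token directly (no aws CLI needed on the client, but the kubeconfig expires with the token):

```hcl
# Sketch only: swap the exec-based user for a static token from the
# aws_eks_cluster_auth data source declared above.
users = [{
  name = var.cluster_name
  user = {
    token = data.aws_eks_cluster_auth._.token
  }
}]
```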


@@ -0,0 +1,7 @@
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}


@@ -0,0 +1 @@
../../providers/aws/variables.tf


@@ -0,0 +1 @@
../common.tf


@@ -0,0 +1 @@
../../providers/civo/config.tf


@@ -0,0 +1,17 @@
# As of March 2023, the default type ("k3s") only supports up
# to Kubernetes 1.23, which belongs to a museum.
# So let's use Talos, which supports up to 1.25.
resource "civo_kubernetes_cluster" "_" {
  name         = var.cluster_name
  firewall_id  = civo_firewall._.id
  cluster_type = "talos"
  pools {
    size       = local.node_size
    node_count = var.min_nodes_per_pool
  }
}
resource "civo_firewall" "_" {
  name = var.cluster_name
}
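To control which Talos release the cluster gets (rather than whatever the provider defaults to), the Civo provider exposes a `kubernetes_version` argument and a `civo_kubernetes_version` data source. A heavily hedged sketch (the filter key and attribute shape are assumptions; check the provider docs):

```hcl
# Hypothetical: look up available Talos versions and pin the cluster to one.
data "civo_kubernetes_version" "talos" {
  filter {
    key    = "type"
    values = ["talos"]
  }
}
# Then, in civo_kubernetes_cluster._:
#   kubernetes_version = data.civo_kubernetes_version.talos.versions[0].version
```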


@@ -0,0 +1,12 @@
output "cluster_id" {
  value = civo_kubernetes_cluster._.id
}
output "has_metrics_server" {
  value = false
}
output "kubeconfig" {
  value     = civo_kubernetes_cluster._.kubeconfig
  sensitive = true
}


@@ -0,0 +1,7 @@
terraform {
  required_providers {
    civo = {
      source = "civo/civo"
    }
  }
}


@@ -0,0 +1 @@
../../providers/civo/variables.tf


@@ -0,0 +1,28 @@
variable "cluster_name" {
  type    = string
  default = "deployed-with-terraform"
}
variable "common_tags" {
  type    = list(string)
  default = []
}
variable "node_size" {
  type    = string
  default = "M"
}
variable "min_nodes_per_pool" {
  type    = number
  default = 2
}
variable "max_nodes_per_pool" {
  type    = number
  default = 4
}
locals {
  node_size = lookup(var.node_sizes, var.node_size, var.node_size)
}


@@ -0,0 +1 @@
../common.tf


@@ -0,0 +1 @@
../../providers/digitalocean/config.tf


@@ -3,15 +3,18 @@ resource "digitalocean_kubernetes_cluster" "_" {
   tags = var.common_tags
   # Region is mandatory, so let's provide a default value.
   region = var.location != null ? var.location : "nyc1"
-  version = var.k8s_version
+  version = data.digitalocean_kubernetes_versions._.latest_version
   node_pool {
     name = "x86"
     tags = var.common_tags
-    size = local.node_type
-    auto_scale = true
+    size = local.node_size
+    auto_scale = var.max_nodes_per_pool > var.min_nodes_per_pool
     min_nodes = var.min_nodes_per_pool
     max_nodes = max(var.min_nodes_per_pool, var.max_nodes_per_pool)
   }
 }
+data "digitalocean_kubernetes_versions" "_" {
+}
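`latest_version` tracks whatever DigitalOcean currently ships, which may be more aggressive than desired. The same data source also accepts a `version_prefix` filter to stay on one minor series (the prefix below is just an example):

```hcl
# Hedged variant: constrain the auto-selected version to a minor series.
data "digitalocean_kubernetes_versions" "pinned" {
  version_prefix = "1.27."  # hypothetical series
}
# Then: version = data.digitalocean_kubernetes_versions.pinned.latest_version
```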


@@ -1,7 +1,3 @@
-output "kubeconfig" {
-  value = digitalocean_kubernetes_cluster._.kube_config.0.raw_config
-}
 output "cluster_id" {
   value = digitalocean_kubernetes_cluster._.id
 }
@@ -9,3 +5,8 @@ output "cluster_id" {
 output "has_metrics_server" {
   value = false
 }
+output "kubeconfig" {
+  value = digitalocean_kubernetes_cluster._.kube_config.0.raw_config
+  sensitive = true
+}


@@ -0,0 +1 @@
../../providers/digitalocean/variables.tf


@@ -0,0 +1 @@
../common.tf


@@ -0,0 +1 @@
../../providers/exoscale/config.tf


@@ -0,0 +1,20 @@
resource "exoscale_sks_cluster" "_" {
  zone          = var.location
  name          = var.cluster_name
  service_level = "starter"
}
resource "exoscale_sks_nodepool" "_" {
  cluster_id    = exoscale_sks_cluster._.id
  zone          = exoscale_sks_cluster._.zone
  name          = var.cluster_name
  instance_type = local.node_size
  size          = var.min_nodes_per_pool
}
resource "exoscale_sks_kubeconfig" "_" {
  cluster_id = exoscale_sks_cluster._.id
  zone       = exoscale_sks_cluster._.zone
  user       = "kubernetes-admin"
  groups     = ["system:masters"]
}


@@ -0,0 +1,12 @@
output "cluster_id" {
  value = exoscale_sks_cluster._.id
}
output "has_metrics_server" {
  value = true
}
output "kubeconfig" {
  value     = exoscale_sks_kubeconfig._.kubeconfig
  sensitive = true
}


@@ -0,0 +1,7 @@
terraform {
  required_providers {
    exoscale = {
      source = "exoscale/exoscale"
    }
  }
}


@@ -0,0 +1 @@
../../providers/exoscale/variables.tf


@@ -0,0 +1 @@
../common.tf


@@ -0,0 +1 @@
../../providers/googlecloud/config.tf


@@ -0,0 +1,12 @@
locals {
  location = var.location != null ? var.location : "europe-north1-a"
  region   = replace(local.location, "/-[a-z]$/", "")
  # Unfortunately, the following line doesn't work
  # (that attribute just returns an empty string)
  # so we have to hard-code the project name.
  #project = data.google_client_config._.project
  project = "prepare-tf"
}
data "google_client_config" "_" {}
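Since the `google_client_config` attribute comes back empty, one workaround sketch is to accept the project as an input variable, keeping the hard-coded name only as a default (the variable name here is an assumption, mirroring the repo's style):

```hcl
# Hypothetical: make the hard-coded project overridable per deployment.
variable "project" {
  type    = string
  default = "prepare-tf"
}
# Then, in locals:
#   project = var.project
```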


@@ -1,8 +1,8 @@
 resource "google_container_cluster" "_" {
-  name = var.cluster_name
-  project = local.project
-  location = local.location
-  min_master_version = var.k8s_version
+  name = var.cluster_name
+  project = local.project
+  location = local.location
+  #min_master_version = var.k8s_version
   # To deploy private clusters, uncomment the section below,
   # and uncomment the block in network.tf.
@@ -43,12 +43,12 @@ resource "google_container_cluster" "_" {
     name = "x86"
     node_config {
       tags = var.common_tags
-      machine_type = local.node_type
+      machine_type = local.node_size
     }
     initial_node_count = var.min_nodes_per_pool
     autoscaling {
       min_node_count = var.min_nodes_per_pool
-      max_node_count = max(var.min_nodes_per_pool, var.max_nodes_per_pool)
+      max_node_count = var.max_nodes_per_pool
     }
   }
@@ -62,4 +62,3 @@ resource "google_container_cluster" "_" {
     }
   }
 }


@@ -1,7 +1,14 @@
 data "google_client_config" "_" {}
+output "cluster_id" {
+  value = google_container_cluster._.id
+}
+output "has_metrics_server" {
+  value = true
+}
 output "kubeconfig" {
-  value = <<-EOT
+  sensitive = true
+  value = <<-EOT
     apiVersion: v1
     kind: Config
     current-context: ${google_container_cluster._.name}
@@ -25,11 +32,3 @@ output "kubeconfig" {
     token: ${data.google_client_config._.access_token}
   EOT
 }
-output "cluster_id" {
-  value = google_container_cluster._.id
-}
-output "has_metrics_server" {
-  value = true
-}


@@ -0,0 +1 @@
../../providers/googlecloud/variables.tf


@@ -0,0 +1 @@
../common.tf


@@ -0,0 +1 @@
../../providers/linode/config.tf


@@ -3,10 +3,10 @@ resource "linode_lke_cluster" "_" {
tags = var.common_tags
# "region" is mandatory, so let's provide a default value if none was given.
region = var.location != null ? var.location : "eu-central"
k8s_version = local.k8s_version
k8s_version = data.linode_lke_versions._.versions[0].id
pool {
type = local.node_type
type = local.node_size
count = var.min_nodes_per_pool
autoscaler {
min = var.min_nodes_per_pool
@@ -15,3 +15,9 @@ resource "linode_lke_cluster" "_" {
}
}
data "linode_lke_versions" "_" {
}
# FIXME: sort the versions to be sure that we get the most recent one?
# (We don't know in which order they are returned by the provider.)
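The FIXME above can be addressed in pure HCL: zero-pad each version component so that a lexicographic `sort()` matches numeric order ("1.9" must not sort above "1.10"). A hedged sketch (all local names below are made up):

```hcl
# Sketch: pick the highest LKE version regardless of provider ordering.
locals {
  lke_version_ids = [for v in data.linode_lke_versions._.versions : v.id]
  # "1.9" -> "01.09" and "1.10" -> "01.10", so string sort == numeric sort.
  lke_padded = { for id in local.lke_version_ids :
    join(".", [for p in split(".", id) : format("%02d", tonumber(p))]) => id
  }
  latest_lke_version = local.lke_padded[reverse(sort(keys(local.lke_padded)))[0]]
}
# Then: k8s_version = local.latest_lke_version
```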


@@ -1,7 +1,3 @@
output "kubeconfig" {
value = base64decode(linode_lke_cluster._.kubeconfig)
}
output "cluster_id" {
value = linode_lke_cluster._.id
}
@@ -9,3 +5,8 @@ output "cluster_id" {
output "has_metrics_server" {
value = false
}
output "kubeconfig" {
value = base64decode(linode_lke_cluster._.kubeconfig)
sensitive = true
}


@@ -2,7 +2,7 @@ terraform {
required_providers {
linode = {
source = "linode/linode"
version = "1.22.0"
version = "1.30.0"
}
}
}


@@ -0,0 +1 @@
../../providers/linode/variables.tf


@@ -0,0 +1 @@
../common.tf
