Compare commits

...

94 Commits

Author SHA1 Message Date
Jérôme Petazzoni
18c081e395 🚀 Prepare NASA JPL content (2023 version) 2023-02-23 10:07:19 +01:00
Jérôme Petazzoni
29b3185e7e 🐘 Add link to Mastodon profile 2023-02-23 10:06:38 +01:00
Jérôme Petazzoni
0616d74e37 Add gentle intro to YAML 2023-02-22 20:56:46 +01:00
Jérôme Petazzoni
676ebcdd3f ♻️ Replace jpetazzo/httpenv with jpetazzo/color 2023-02-20 14:22:02 +01:00
Jérôme Petazzoni
28f0253242 Add kubectl np-viewer in network policy section 2023-02-20 10:37:53 +01:00
Jérôme Petazzoni
73125b5ffb 🛠️ k9s fixed the file name in their releases 🎉 2023-02-18 15:20:44 +01:00
Jérôme Petazzoni
a90c521b77 🪓 Split tmux instructions across two slides 2023-02-12 18:03:41 +01:00
Jérôme Petazzoni
bd141ddfc5 💡 Add Ctrl-B Ctrl-O tmux shortcut to cheatsheet
Super convenient if you have something on top and would like it to
be at the bottom and vice versa, or to switch left and right panes.

Usually not super helpful during normal use of tmux, but very
handy when streaming, e.g. when you have a camera view obscuring
part of the top pane (or the left/right side) and you want
to swap the pane arrangement.
2023-02-12 17:40:00 +01:00
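For reference, the shortcut described above maps to tmux's built-in rotate-window command, so the same pane swap can also be scripted (a quick illustration, not part of the commit):

```bash
# Inside tmux, press Ctrl-B then Ctrl-O to rotate the panes of the
# current window (swapping top/bottom, or left/right, arrangements).
# The equivalent command, usable from a shell or the tmux prompt:
tmux rotate-window
```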
Jérôme Petazzoni
634d101efc Update HPA v2 apiVersion 2023-02-12 15:39:55 +01:00
Jérôme Petazzoni
20347a1417 ♻️ Add script to clean up Linode PVC volumes 2023-02-12 15:38:58 +01:00
Jérôme Petazzoni
893be3b18f 🖼️ Add picture of a canary cage to illustrate canary deployments 2023-02-12 13:56:36 +01:00
Bret Fisher
dd6a1adc63 Apply suggestions from code review
Co-authored-by: Tianon Gravi <admwiggin@gmail.com>
2023-02-07 23:43:40 +01:00
Bret Fisher
4dc60d3250 Check for missing docker dir 2023-02-07 23:43:40 +01:00
Jérôme Petazzoni
1aa0e062d0 ♻️ Add script to clean up Linode nodebalancers 2023-02-04 10:49:04 +01:00
Torounia
cfbe578d4f Helm intro: set a value on the juice-shop chart 2023-02-03 17:59:54 +01:00
Jérôme Petazzoni
1d692898da ♻️ Bump up versions and improve reliability of wait-for-nodes 2023-01-23 16:08:24 +01:00
Jérôme Petazzoni
9526a94b77 🐚 Improve Terraform-based deployment script
Each time we call that script, we must set a few env vars
beforehand. Let's make these vars optional parameters to
the script instead.

Also add helper scripts to list the locations (zones or
regions) available to each provider.
2023-01-23 16:07:28 +01:00
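A minimal sketch of the pattern this commit describes, with illustrative names rather than the actual script contents: positional parameters take precedence, and the old environment variables remain as fallbacks.

```bash
#!/bin/sh
# Hypothetical wrapper: accept provider/location as optional arguments,
# falling back to the env vars that previously had to be set beforehand.
PROVIDER=${1:-$TF_VAR_provider}
LOCATION=${2:-$TF_VAR_location}
[ "$PROVIDER" ] || { echo "Usage: $0 <provider> [location]" >&2; exit 1; }
echo "Deploying provider=$PROVIDER location=${LOCATION:-(provider default)}"
```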
Jérôme Petazzoni
e6eb157cc6 🪓 Split "kubectl expose" and "service types" 2023-01-13 17:50:22 +01:00
Jérôme Petazzoni
b984049603 📃 Reorganize the deck intro a bit 2023-01-13 16:04:39 +01:00
Jérôme Petazzoni
c200c8e1da ♻️ Refactor script to count slides
For automatic transcription and chaptering, we'll need to know
exactly at which slide each section starts. We already had the
count-slides.py script to count how many slides each section
had, and the number of slides per part. The new script does the
same, but also accurately gives the first slide of each section.
2023-01-06 23:11:43 +01:00
Jérôme Petazzoni
4c30e7db14 ✂️ Remove containerd 1.5 pinning
Kubernetes 1.26 requires CRI v1, which means containerd 1.6.
2023-01-03 09:10:01 +01:00
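To check which containerd a node runs before or after unpinning (standard CLI flag, nothing specific to this repo):

```bash
containerd --version
# containerd 1.6+ serves CRI v1, which the Kubernetes 1.26 kubelet requires;
# containerd 1.5 only serves CRI v1alpha2, hence the removal of the pin.
```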
Marco Verleun
9d5a083473 Update Container_Networking_Basics.md 2022-12-12 13:43:01 +01:00
Jérôme Petazzoni
a2be63e4c4 📃 Improve Ingress exercises 2022-12-08 17:28:53 -08:00
Jérôme Petazzoni
584dddd823 🔗 Fix link to create token 2022-12-08 05:53:12 -08:00
Jérôme Petazzoni
3e9307d420 🔑 Update dashboard YAML; add persisting token for the dashboard account 2022-12-08 05:52:41 -08:00
Jérôme Petazzoni
5d3881b7e1 Add CoLiMa and fix microk8s/minikube ordering 2022-12-08 05:44:48 -08:00
Bret Fisher
d57ba24f6f Updating stern link 2022-12-05 21:10:52 -08:00
Jérôme Petazzoni
f046a32567 🐋 Update info about Docker+K8S 2022-12-05 15:29:52 -08:00
Jérôme Petazzoni
c2a169167d ☁️ Add terraform configuration for Azure 2022-12-05 15:29:52 -08:00
dependabot[bot]
961cf34b6f Bump socket.io-parser from 4.0.4 to 4.0.5 in /slides/autopilot
Bumps [socket.io-parser](https://github.com/socketio/socket.io-parser) from 4.0.4 to 4.0.5.
- [Release notes](https://github.com/socketio/socket.io-parser/releases)
- [Changelog](https://github.com/socketio/socket.io-parser/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io-parser/compare/4.0.4...4.0.5)

---
updated-dependencies:
- dependency-name: socket.io-parser
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-22 16:16:26 -08:00
dependabot[bot]
b23cae8f5b Bump engine.io from 6.2.0 to 6.2.1 in /slides/autopilot
Bumps [engine.io](https://github.com/socketio/engine.io) from 6.2.0 to 6.2.1.
- [Release notes](https://github.com/socketio/engine.io/releases)
- [Changelog](https://github.com/socketio/engine.io/blob/main/CHANGELOG.md)
- [Commits](https://github.com/socketio/engine.io/compare/6.2.0...6.2.1)

---
updated-dependencies:
- dependency-name: engine.io
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-11-22 16:11:57 -08:00
Jérôme Petazzoni
a09c4ec4f5 Improve netlify-dns script to suggest what to do if config file not found 2022-11-18 21:46:29 +01:00
Jérôme Petazzoni
527c63eee7 📦 Add pic of Catène de Conteneurs 2022-11-09 14:36:25 +01:00
Jérôme Petazzoni
6cfe991375 🐞 Typo fix 2022-11-04 17:23:14 +01:00
Jérôme Petazzoni
c8f90463e0 🌈 Change the tmux status bar to yellow (like a precious metal) 2022-11-02 17:02:44 +01:00
Jérôme Petazzoni
316f5b8fd8 🌈 Change tmux status bar color to blue
To help differentiate between environments
(shpod now defaults to red)
2022-11-01 11:44:32 +01:00
Jérôme Petazzoni
c86474a539 ♻️ Update kubebuilder workshop 2022-10-28 12:32:05 +02:00
Jérôme Petazzoni
2943ef4e26 Update Kyverno to 1.7 2022-10-26 19:49:23 +02:00
Jérôme Petazzoni
02004317ac 🐞 Fix some ambiguous markdown link references
I thought that the links were local to each slide, but...
apparently not. Whoops.
2022-10-24 20:41:23 +02:00
Jérôme Petazzoni
c9cc659f88 🐞 Typo fix 2022-10-09 23:05:27 +02:00
Jérôme Petazzoni
bb8e655f92 🔧 Disable unattended upgrades; add completion for kubeadm 2022-10-09 12:18:42 +02:00
Jérôme Petazzoni
50772ca439 🌍 Switch Scaleway to fr-par-2 (better PUE) 2022-10-09 12:18:07 +02:00
Jérôme Petazzoni
1082204ac7 📃 Add note about .Chart.IsRoot 2022-10-04 17:11:59 +02:00
Jérôme Petazzoni
c9c79c409c Add ytt; fix Weave YAML URL; add completion for a few tools 2022-10-04 16:53:36 +02:00
Jérôme Petazzoni
71daf27237 ⌨️ Add tmux rename window shortcut 2022-10-03 15:28:32 +02:00
Jérôme Petazzoni
986da15a22 🔗 Update kustomize eschewed features link 2022-10-03 15:23:18 +02:00
Jérôme Petazzoni
407a8631ed 🐞 Typo in variable name 2022-10-03 15:15:53 +02:00
Jérôme Petazzoni
b4a81a7054 🔧 Minor tweak to Terraform provisioning wrapper 2022-10-03 15:15:12 +02:00
Jérôme Petazzoni
d0f0d2c87b 🔧 Typo fix 2022-09-27 14:53:14 +02:00
Jérôme Petazzoni
0f77eaa48b 📃 Update info about Docker Desktop and Rancher Desktop 2022-09-26 13:42:20 +02:00
Jérôme Petazzoni
659713a697 Bump up dashboard version 2022-09-26 11:41:28 +02:00
Jérôme Petazzoni
20d21b742a Bump up Compose version to use 2.X everywhere 2022-09-25 17:28:52 +02:00
Jérôme Petazzoni
747605357d 🏭️ Refactor Ingress chapter 2022-09-25 14:20:26 +02:00
Jérôme Petazzoni
17bb84d22e 🏭️ Refactor healthcheck chapter
Add more details for startup probes.
Mention GRPC check.
Better spell out recommendations and gotchas.
2022-09-11 13:11:01 +02:00
Jérôme Petazzoni
d343264b86 📃 Update swap/cgroups v2 section to mention KEP2400 2022-09-10 09:31:39 +02:00
Jérôme Petazzoni
a216aa2034 🐞 Fix install of kube-ps1
The former method was invalid and didn't work with e.g. screen.
2022-08-31 12:42:47 +02:00
Francesco Manzali
64f993ff69 - Update VMs to ubuntu/focal64 20.04 LTS (trusty64 reached EOL on April 25, 2019)
- Update Docker installation task from the
  [official docs](https://docs.docker.com/engine/install/ubuntu/)
2022-08-31 12:06:10 +02:00
Jérôme Petazzoni
73b3cad0b8 🔧 Fix a couple of issues related to OCI images 2022-08-22 17:20:36 +02:00
Naeem Ilyas
26e5459fae Typo fix 2022-08-22 10:23:57 +02:00
Jérôme Petazzoni
9c564e6787 Add info about ownerReferences with Kyverno 2022-08-19 14:59:11 +02:00
Jérôme Petazzoni
2724a611a6 📃 Update rolling update intro slide 2022-08-17 14:49:17 +02:00
Jérôme Petazzoni
2ca239ddfc 🔒️ Mention bound service account tokens 2022-08-17 14:18:15 +02:00
Jérôme Petazzoni
e74a158c59 📃 Document dependency on yq 2022-08-17 13:49:15 +02:00
Jérôme Petazzoni
138af3b5d2 ♻️ Upgrade build image to Netlify Focal; bump up Python version 2022-08-17 13:48:55 +02:00
Jérôme Petazzoni
ad6d16bade Add RBAC and NetPol exercises 2022-08-17 13:16:52 +02:00
Jérôme Petazzoni
1aaf9b0bd5 ♻️ Update Linode LKE terraform module 2022-07-29 14:37:37 +02:00
Jérôme Petazzoni
ce39f97a28 Bump up versions for cluster upgrade lab 2022-07-22 11:32:22 +02:00
jonjohnsonjr
162651bdfd Typo: sould -> should 2022-07-18 19:16:47 +02:00
Jérôme Petazzoni
2958ca3a32 ♻️ Update CRD content
Overhaul for crd/v1; demonstrate what happens when adding
data validation a posteriori.
2022-07-14 10:32:34 +02:00
Jérôme Petazzoni
02a15d94a3 Add nsinjector 2022-07-06 14:28:24 +02:00
Jérôme Petazzoni
12d9f06f8a Add YTT content 2022-06-23 08:37:50 +02:00
Jérôme Petazzoni
43caccbdf6 ♻️ Bump up socket.io versions to address dependabot complaints
The autopilot code isn't exposed to anything, but this will stop dependabot
from displaying the annoying warning banners 😅
2022-06-20 07:09:36 +02:00
Tianon Gravi
a52f642231 Update links to kube-resource-report
Also, remove links to demos that no longer exist.
2022-06-10 21:43:56 +02:00
Tianon Gravi
30b1bfde5b Fix a few minor typos 2022-06-10 21:43:56 +02:00
Jérôme Petazzoni
5b39218593 Bump up Kapsule k8s version 2022-06-08 14:35:24 +02:00
Jérôme Petazzoni
f65ca19b44 📃 Mention type validation issues for CRDs 2022-06-06 13:59:13 +02:00
Jérôme Petazzoni
abb0fbe364 📃 Update operators intro to be less db-centric 2022-06-06 13:03:51 +02:00
Jerome Petazzoni
a18af8f4c4 🐞 Fix WaitForFirstConsumer with OpenEBS hostpath 2022-06-01 08:57:42 +02:00
Jerome Petazzoni
41e9047f3d Bump up sealed secret controller
quay.io doesn't work anymore, and kubeseal 0.17.4 was using
an image on quay. kubeseal 0.17.5 uses an image on the
Docker Hub instead.
2022-06-01 08:51:31 +02:00
Jérôme Petazzoni
907e769d4e 📍 Pin containerd version to avoid weave/containerd issue
See https://github.com/containerd/containerd/issues/6921 for details
2022-05-25 08:59:14 +02:00
Karol Berezicki
71ba3ec520 Fixed link to Docker forums in intro.md 2022-05-23 14:41:59 +02:00
Jérôme Petazzoni
cc6c0d5db8 🐞 Minor bug fixes 2022-05-12 19:37:05 +02:00
Jérôme Petazzoni
9ed00c5da1 Update DOKS version 2022-05-07 11:36:01 +02:00
Jérôme Petazzoni
b4b67536e9 Add retry logic for Linode provisioning
It looks like Linode now enforces something like 10 requests / 10 seconds.
We need to add some retry logic when provisioning more than 10 VMs.
2022-05-03 11:33:12 +02:00
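A sketch of the kind of retry logic this refers to (hypothetical helper, not the actual provisioning code):

```bash
# Retry a command with increasing delays, to stay under a rate limit
# on the order of 10 requests per 10 seconds.
retry() {
  for delay in 2 5 10 20 30; do
    "$@" && return 0
    echo "Request failed (rate limited?); retrying in ${delay}s..." >&2
    sleep "$delay"
  done
  return 1
}
retry linode-cli linodes list
```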
Jérôme Petazzoni
52ce402803 ♻️ Switch to official FRR images; disable NHT
We're now using an official image for FRR.
Also, by default, BGPD will accept routes only if their
next-hop is reachable. This relies on a mechanism called
NHT (Next Hop Tracking). However, when we receive routes
from Kubernetes clusters, the peers usually advertise
addresses that we are not directly connected to. This
causes these addresses to be filtered out (unless the
route reflector is running on the same VPC or Layer 2
network as the Kubernetes nodes). To accept these routes
anyway, we basically disable NHT, by considering that
nodes are reachable if we can reach them through our
default route.
2022-04-12 22:17:27 +02:00
Jérôme Petazzoni
7076152bb9 ♻️ Update sealed-secrets version and install instructions 2022-04-12 20:46:01 +02:00
Jérôme Petazzoni
39eebe320f Add CA injector content 2022-04-12 18:24:41 +02:00
Jérôme Petazzoni
97c563e76a ♻️ Don't use ngrok for Tilt
ngrok now requires an account to serve HTML content.
We won't use ngrok anymore for the Tilt UI
(and we'll suggest using a NodePort service instead,
when running in a Pod).
2022-04-11 21:08:54 +02:00
Jérôme Petazzoni
4a7b04dd01 ♻️ Add helm install command for metrics-server
Don't use it yet, but have it handy in case we want to switch.
2022-04-08 21:06:19 +02:00
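The commit doesn't reproduce the command here, but installing metrics-server from the SIG chart generally looks like this (repo URL and chart name per the upstream metrics-server project):

```bash
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system
```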
Jérôme Petazzoni
8b3f7a9aba ♻️ Switch to SIG metrics-server chart 2022-04-08 20:36:07 +02:00
Jérôme Petazzoni
f9bb780f80 Bump up DOK version 2022-04-08 20:35:53 +02:00
Jérôme Petazzoni
94545f800a 📃 Add TOC item to nsplease 2022-04-06 22:01:22 +02:00
Jérôme Petazzoni
5896ad577b Bump up k8s version on Linode 2022-03-31 10:59:09 +02:00
Denis Laxalde
030f3728f7 Update link to "Efficient Node Heartbeats" KEP
Previous file was moved in commit 7eef794bb5
2022-03-28 16:52:32 +02:00
150 changed files with 5993 additions and 3592 deletions

.gitignore (vendored, 8 changed lines)

@@ -6,13 +6,7 @@ prepare-vms/tags
prepare-vms/infra
prepare-vms/www
prepare-tf/.terraform*
prepare-tf/terraform.*
prepare-tf/stage2/*.tf
prepare-tf/stage2/kubeconfig.*
prepare-tf/stage2/.terraform*
prepare-tf/stage2/terraform.*
prepare-tf/stage2/externalips.*
prepare-tf/tag-*
slides/*.yml.html
slides/autopilot/state.yaml


@@ -1,2 +1,3 @@
hostname frr
ip nht resolve-via-default
log stdout


@@ -2,30 +2,36 @@ version: "3"
services:
bgpd:
image: ajones17/frr:662
image: frrouting/frr:v8.2.2
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
entrypoint: /usr/lib/frr/bgpd -f /etc/frr/bgpd.conf --log=stdout --log-level=debug --no_kernel
cap_add:
- NET_ADMIN
- SYS_ADMIN
entrypoint: /usr/lib/frr/bgpd -f /etc/frr/bgpd.conf --log=stdout --log-level=debug --no_kernel --no_zebra
restart: always
zebra:
image: ajones17/frr:662
image: frrouting/frr:v8.2.2
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
cap_add:
- NET_ADMIN
- SYS_ADMIN
entrypoint: /usr/lib/frr/zebra -f /etc/frr/zebra.conf --log=stdout --log-level=debug
restart: always
vtysh:
image: ajones17/frr:662
image: frrouting/frr:v8.2.2
volumes:
- ./conf:/etc/frr
- ./run:/var/run/frr
network_mode: host
entrypoint: vtysh -c "show ip bgp"
entrypoint: vtysh
chmod:
image: alpine


@@ -48,20 +48,25 @@ k8s_yaml('../k8s/dockercoins.yaml')
# The following line lets Tilt run with the default kubeadm cluster-admin context.
allow_k8s_contexts('kubernetes-admin@kubernetes')
# This will run an ngrok tunnel to expose Tilt to the outside world.
# This is intended to be used when Tilt runs on a remote machine.
local_resource(name='ngrok:tunnel', serve_cmd='ngrok http 10350')
# Note: the whole section below (to set up ngrok tunnels) is disabled,
# because ngrok now requires to set up an account to serve HTML
# content. So we can still use ngrok for e.g. webhooks and "raw" APIs,
# but not to serve web pages like the Tilt UI.
# This will wait until the ngrok tunnel is up, and show its URL to the user.
# We send the output to /dev/tty so that it doesn't get intercepted by
# Tilt, and gets displayed to the user's terminal instead.
# Note: this assumes that the ngrok instance will be running on port 4040.
# If you have other ngrok instances running on the machine, this might not work.
local_resource(name='ngrok:showurl', cmd='''
while sleep 1; do
TUNNELS=$(curl -fsSL http://localhost:4040/api/tunnels | jq -r .tunnels[].public_url)
[ "$TUNNELS" ] && break
done
printf "\nYou should be able to connect to the Tilt UI with the following URL(s): %s\n" "$TUNNELS" >/dev/tty
'''
)
# # This will run an ngrok tunnel to expose Tilt to the outside world.
# # This is intended to be used when Tilt runs on a remote machine.
# local_resource(name='ngrok:tunnel', serve_cmd='ngrok http 10350')
# # This will wait until the ngrok tunnel is up, and show its URL to the user.
# # We send the output to /dev/tty so that it doesn't get intercepted by
# # Tilt, and gets displayed to the user's terminal instead.
# # Note: this assumes that the ngrok instance will be running on port 4040.
# # If you have other ngrok instances running on the machine, this might not work.
# local_resource(name='ngrok:showurl', cmd='''
# while sleep 1; do
# TUNNELS=$(curl -fsSL http://localhost:4040/api/tunnels | jq -r .tunnels[].public_url)
# [ "$TUNNELS" ] && break
# done
# printf "\nYou should be able to connect to the Tilt UI with the following URL(s): %s\n" "$TUNNELS" >/dev/tty
# '''
# )


@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,8 +253,8 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
spec:
containers:
- args:
@@ -262,7 +262,7 @@ spec:
- --sidecar-host=http://127.0.0.1:8000
- --enable-skip-login
- --enable-insecure-login
image: kubernetesui/dashboard:v2.5.0
image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -293,7 +293,7 @@ spec:
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- image: kubernetesui/metrics-scraper:v1.0.7
- image: kubernetesui/metrics-scraper:v1.0.8
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:


@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,15 +253,15 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
spec:
containers:
- args:
- --namespace=kubernetes-dashboard
- --auto-generate-certificates
- --sidecar-host=http://127.0.0.1:8000
image: kubernetesui/dashboard:v2.5.0
image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -292,7 +292,7 @@ spec:
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- image: kubernetesui/metrics-scraper:v1.0.7
- image: kubernetesui/metrics-scraper:v1.0.8
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:


@@ -17,8 +17,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
@@ -30,8 +30,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
@@ -43,8 +43,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
@@ -56,8 +56,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
@@ -71,8 +71,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
@@ -84,8 +84,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
@@ -106,8 +106,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -126,8 +126,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
@@ -182,8 +182,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
@@ -204,8 +204,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
@@ -229,8 +229,8 @@ metadata:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
@@ -253,15 +253,15 @@ spec:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
app.kubernetes.io/version: 2.7.0
helm.sh/chart: kubernetes-dashboard-6.0.0
spec:
containers:
- args:
- --namespace=kubernetes-dashboard
- --auto-generate-certificates
- --sidecar-host=http://127.0.0.1:8000
image: kubernetesui/dashboard:v2.5.0
image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -292,7 +292,7 @@ spec:
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- image: kubernetesui/metrics-scraper:v1.0.7
- image: kubernetesui/metrics-scraper:v1.0.8
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -344,3 +344,12 @@ metadata:
creationTimestamp: null
name: cluster-admin
namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: cluster-admin-token
namespace: kubernetes-dashboard
annotations:
kubernetes.io/service-account.name: cluster-admin


@@ -1,5 +1,5 @@
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta2
apiVersion: autoscaling/v2
metadata:
name: rng
spec:

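To check which autoscaling API versions a cluster serves before applying a manifest like the one above (plain kubectl, no assumptions beyond cluster access):

```bash
kubectl api-versions | grep ^autoscaling
# autoscaling/v2 is GA since Kubernetes 1.23;
# autoscaling/v2beta2 is removed in Kubernetes 1.26.
```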

@@ -15,10 +15,10 @@ spec:
- key: "{{ request.operation }}"
operator: Equals
value: UPDATE
- key: "{{ request.oldObject.metadata.labels.color }}"
- key: "{{ request.oldObject.metadata.labels.color || '' }}"
operator: NotEquals
value: ""
- key: "{{ request.object.metadata.labels.color }}"
- key: "{{ request.object.metadata.labels.color || '' }}"
operator: NotEquals
value: ""
validate:


@@ -15,10 +15,10 @@ spec:
- key: "{{ request.operation }}"
operator: Equals
value: UPDATE
- key: "{{ request.oldObject.metadata.labels.color }}"
- key: "{{ request.oldObject.metadata.labels.color || '' }}"
operator: NotEquals
value: ""
- key: "{{ request.object.metadata.labels.color }}"
- key: "{{ request.object.metadata.labels.color || '' }}"
operator: Equals
value: ""
validate:

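The `|| ''` guard in both policies above matters because JMESPath yields null, not an empty string, when a label is missing, which would defeat the NotEquals/Equals comparisons against "". This can be verified with any JMESPath evaluator; for example, assuming the jp CLI (github.com/jmespath/jp) is installed:

```bash
# Without the guard, a missing label evaluates to null:
echo '{"metadata":{"labels":{}}}' | jp 'metadata.labels.color'
# With the guard, it evaluates to an empty string instead:
echo '{"metadata":{"labels":{}}}' | jp "metadata.labels.color || ''"
```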
k8s/pizza-1.yaml (new file, 14 lines)

@@ -0,0 +1,14 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
version: v1alpha1
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz

k8s/pizza-2.yaml (new file, 20 lines)

@@ -0,0 +1,20 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object

k8s/pizza-3.yaml (new file, 32 lines)

@@ -0,0 +1,32 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
required: [ sauce, toppings ]
properties:
sauce:
type: string
toppings:
type: array
items:
type: string

k8s/pizza-4.yaml (new file, 39 lines)

@@ -0,0 +1,39 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
required: [ sauce, toppings ]
properties:
sauce:
type: string
toppings:
type: array
items:
type: string
additionalPrinterColumns:
- jsonPath: .spec.sauce
name: Sauce
type: string
- jsonPath: .spec.toppings
name: Toppings
type: string

k8s/pizza-5.yaml (new file, 40 lines)

@@ -0,0 +1,40 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: pizzas.container.training
spec:
group: container.training
scope: Namespaced
names:
plural: pizzas
singular: pizza
kind: Pizza
shortNames:
- piz
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
required: [ spec ]
properties:
spec:
type: object
required: [ sauce, toppings ]
properties:
sauce:
type: string
enum: [ red, white ]
toppings:
type: array
items:
type: string
additionalPrinterColumns:
- jsonPath: .spec.sauce
name: Sauce
type: string
- jsonPath: .spec.toppings
name: Toppings
type: string

k8s/pizzas.yaml (new file, 45 lines)

@@ -0,0 +1,45 @@
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: margherita
spec:
sauce: red
toppings:
- mozarella
- basil
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: quatrostagioni
spec:
sauce: red
toppings:
- artichoke
- basil
- mushrooms
- prosciutto
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: mehl31
spec:
sauce: white
toppings:
- goatcheese
- pear
- walnuts
- mozzarella
- rosemary
- honey
---
apiVersion: container.training/v1alpha1
kind: Pizza
metadata:
name: brownie
spec:
sauce: chocolate
toppings:
- nuts

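A quick way to see the validation added in the later CRD versions in action (assuming a test cluster; file names as above): with pizza-5.yaml applied, the brownie pizza should be rejected, since chocolate isn't in the sauce enum.

```bash
kubectl apply -f k8s/pizza-5.yaml
kubectl apply -f k8s/pizzas.yaml
# The first three pizzas are created; "brownie" should fail with an
# error along the lines of: spec.sauce: Unsupported value: "chocolate"
```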

@@ -70,4 +70,15 @@ add_namespace() {
kubectl create serviceaccount -n kubernetes-dashboard cluster-admin \
-o yaml --dry-run=client \
#
echo ---
cat <<EOF
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: cluster-admin-token
namespace: kubernetes-dashboard
annotations:
kubernetes.io/service-account.name: cluster-admin
EOF
) > dashboard-with-token.yaml

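Once the Secret added above exists, the long-lived token can be read back with plain kubectl (names taken from the manifest above):

```bash
kubectl -n kubernetes-dashboard get secret cluster-admin-token \
  -o jsonpath='{.data.token}' | base64 -d
```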

@@ -0,0 +1,164 @@
#! Define and use variables.
---
#@ repository = "dockercoins"
#@ tag = "v0.1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ "{}/hasher:{}".format(repository, tag)
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ "{}/rng:{}".format(repository, tag)
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ "{}/webui:{}".format(repository, tag)
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ "{}/worker:{}".format(repository, tag)
name: worker


@@ -0,0 +1,167 @@
#! Define and use a function to set the deployment image.
---
#@ repository = "dockercoins"
#@ tag = "v0.1"
#@ def image(component):
#@ return "{}/{}:{}".format(repository, component, tag)
#@ end
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ image("hasher")
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ image("rng")
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ image("webui")
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ image("worker")
name: worker

k8s/ytt/3-labels/app.yaml (new file, 164 lines)

@@ -0,0 +1,164 @@
#! Define and use functions, demonstrating how to generate labels.
---
#@ repository = "dockercoins"
#@ tag = "v0.1"
#@ def image(component):
#@ return "{}/{}:{}".format(repository, component, tag)
#@ end
#@ def labels(component):
#@ return {
#@ "app": component,
#@ "container.training/generated-by": "ytt",
#@ }
#@ end
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ image("hasher")
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("redis")
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("redis")
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("rng")
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ image("rng")
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("rng")
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("webui")
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ image("webui")
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("webui")
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("worker")
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ image("worker")
name: worker

k8s/ytt/4-data/app.yaml (new file, 162 lines)

@@ -0,0 +1,162 @@
---
#@ load("@ytt:data", "data")
#@ def image(component):
#@ return "{}/{}:{}".format(data.values.repository, component, data.values.tag)
#@ end
#@ def labels(component):
#@ return {
#@ "app": component,
#@ "container.training/generated-by": "ytt",
#@ }
#@ end
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: #@ image("hasher")
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("hasher")
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("redis")
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("redis")
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("rng")
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: #@ image("rng")
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("rng")
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("webui")
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: #@ image("webui")
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels: #@ labels("webui")
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels: #@ labels("worker")
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: #@ image("worker")
name: worker


@@ -0,0 +1,4 @@
#@data/values-schema
---
repository: dockercoins
tag: v0.1

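For context, these templates are typically rendered along these lines (standard ytt flags; assuming the values schema above sits alongside app.yaml in k8s/ytt/4-data/):

```bash
# Render with the schema defaults:
ytt -f k8s/ytt/4-data/
# Override a data value at render time, then apply the result:
ytt -f k8s/ytt/4-data/ --data-value tag=v0.2 | kubectl apply -f -
```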
k8s/ytt/5-factor/app.yaml (new file, 54 lines)

@@ -0,0 +1,54 @@
---
#@ load("@ytt:data", "data")
---
#@ def Deployment(component, repository=data.values.repository, tag=data.values.tag):
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ component
container.training/generated-by: ytt
name: #@ component
spec:
replicas: 1
selector:
matchLabels:
app: #@ component
template:
metadata:
labels:
app: #@ component
spec:
containers:
- image: #@ repository + "/" + component + ":" + tag
name: #@ component
#@ end
---
#@ def Service(component, port=80, type="ClusterIP"):
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ component
container.training/generated-by: ytt
name: #@ component
spec:
ports:
- port: #@ port
protocol: TCP
targetPort: #@ port
selector:
app: #@ component
type: #@ type
#@ end
---
--- #@ Deployment("hasher")
--- #@ Service("hasher")
--- #@ Deployment("redis", repository="library", tag="latest")
--- #@ Service("redis", port=6379)
--- #@ Deployment("rng")
--- #@ Service("rng")
--- #@ Deployment("webui")
--- #@ Service("webui", type="NodePort")
--- #@ Deployment("worker")
---


@@ -0,0 +1,4 @@
#@data/values-schema
---
repository: dockercoins
tag: v0.1


@@ -0,0 +1,56 @@
---
#@ load("@ytt:data", "data")
#@ load("@ytt:template", "template")
---
#@ def component(name, repository=data.values.repository, tag=data.values.tag, port=None, type="ClusterIP"):
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ name
container.training/generated-by: ytt
name: #@ name
spec:
replicas: 1
selector:
matchLabels:
app: #@ name
template:
metadata:
labels:
app: #@ name
spec:
containers:
- image: #@ repository + "/" + name + ":" + tag
name: #@ name
#@ if/end port==80:
readinessProbe:
httpGet:
port: #@ port
#@ if port != None:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ name
container.training/generated-by: ytt
name: #@ name
spec:
ports:
- port: #@ port
protocol: TCP
targetPort: #@ port
selector:
app: #@ name
type: #@ type
#@ end
#@ end
---
--- #@ template.replace(component("hasher", port=80))
--- #@ template.replace(component("redis", repository="library", tag="latest", port=6379))
--- #@ template.replace(component("rng", port=80))
--- #@ template.replace(component("webui", port=80, type="NodePort"))
--- #@ template.replace(component("worker"))
---


@@ -0,0 +1,4 @@
#@data/values-schema
---
repository: dockercoins
tag: v0.1


@@ -0,0 +1,65 @@
---
#@ load("@ytt:data", "data")
#@ load("@ytt:template", "template")
---
#@ def component(name, repository, tag, port=None, type="ClusterIP"):
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ name
container.training/generated-by: ytt
name: #@ name
spec:
replicas: 1
selector:
matchLabels:
app: #@ name
template:
metadata:
labels:
app: #@ name
spec:
containers:
- image: #@ repository + "/" + name + ":" + tag
name: #@ name
#@ if/end port==80:
readinessProbe:
httpGet:
port: #@ port
#@ if port != None:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ name
container.training/generated-by: ytt
name: #@ name
spec:
ports:
- port: #@ port
protocol: TCP
targetPort: #@ port
selector:
app: #@ name
type: #@ type
#@ end
#@ end
---
#@ defaults = {}
#@ for name in data.values:
#@ if name.startswith("_"):
#@ defaults.update(data.values[name])
#@ end
#@ end
---
#@ for name in data.values:
#@ if not name.startswith("_"):
#@ values = dict(name=name)
#@ values.update(defaults)
#@ values.update(data.values[name])
--- #@ template.replace(component(**values))
#@ end
#@ end


@@ -0,0 +1,19 @@
#@data/values-schema
#! Entries starting with an underscore will hold default values.
#! Entries NOT starting with an underscore will generate a Deployment
#! (and a Service if a port number is set).
---
_default_:
repository: dockercoins
tag: v0.1
hasher:
port: 80
redis:
repository: library
tag: latest
rng:
port: 80
webui:
port: 80
type: NodePort
worker: {}


@@ -0,0 +1,26 @@
#@ load("@ytt:data", "data")
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
replicas: 1
selector:
matchLabels:
app: #@ data.values.name
template:
metadata:
labels:
app: #@ data.values.name
spec:
containers:
- image: #@ data.values.repository + "/" + data.values.name + ":" + data.values.tag
name: #@ data.values.name
#@ if/end data.values.port==80:
readinessProbe:
httpGet:
port: #@ data.values.port


@@ -0,0 +1,7 @@
#@data/values-schema
---
name: component
repository: dockercoins
tag: v0.1
port: 0
type: ClusterIP


@@ -0,0 +1,19 @@
#@ load("@ytt:data", "data")
#@ if data.values.port > 0:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
ports:
- port: #@ data.values.port
protocol: TCP
targetPort: #@ data.values.port
selector:
app: #@ data.values.name
type: #@ data.values.type
#@ end


@@ -0,0 +1,20 @@
#@ load("@ytt:data", "data")
#@ load("@ytt:library", "library")
#@ load("@ytt:template", "template")
#@
#@ component = library.get("component")
#@
#@ defaults = {}
#@ for name in data.values:
#@ if name.startswith("_"):
#@ defaults.update(data.values[name])
#@ end
#@ end
#@ for name in data.values:
#@ if not name.startswith("_"):
#@ values = dict(name=name)
#@ values.update(defaults)
#@ values.update(data.values[name])
--- #@ template.replace(component.with_data_values(values).eval())
#@ end
#@ end


@@ -0,0 +1,19 @@
#@data/values-schema
#! Entries starting with an underscore will hold default values.
#! Entries NOT starting with an underscore will generate a Deployment
#! (and a Service if a port number is set).
---
_default_:
repository: dockercoins
tag: v0.1
hasher:
port: 80
redis:
repository: library
tag: latest
rng:
port: 80
webui:
port: 80
type: NodePort
worker: {}


@@ -0,0 +1,26 @@
#@ load("@ytt:data", "data")
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
replicas: 1
selector:
matchLabels:
app: #@ data.values.name
template:
metadata:
labels:
app: #@ data.values.name
spec:
containers:
- image: #@ data.values.repository + "/" + data.values.name + ":" + data.values.tag
name: #@ data.values.name
#@ if/end data.values.port==80:
readinessProbe:
httpGet:
port: #@ data.values.port


@@ -0,0 +1,7 @@
#@data/values-schema
---
name: component
repository: dockercoins
tag: v0.1
port: 0
type: ClusterIP


@@ -0,0 +1,19 @@
#@ load("@ytt:data", "data")
#@ if data.values.port > 0:
---
apiVersion: v1
kind: Service
metadata:
labels:
app: #@ data.values.name
container.training/generated-by: ytt
name: #@ data.values.name
spec:
ports:
- port: #@ data.values.port
protocol: TCP
targetPort: #@ data.values.port
selector:
app: #@ data.values.name
type: #@ data.values.type
#@ end


@@ -0,0 +1,20 @@
#@ load("@ytt:data", "data")
#@ load("@ytt:library", "library")
#@ load("@ytt:template", "template")
#@
#@ component = library.get("component")
#@
#@ defaults = {}
#@ for name in data.values:
#@ if name.startswith("_"):
#@ defaults.update(data.values[name])
#@ end
#@ end
#@ for name in data.values:
#@ if not name.startswith("_"):
#@ values = dict(name=name)
#@ values.update(defaults)
#@ values.update(data.values[name])
--- #@ template.replace(component.with_data_values(values).eval())
#@ end
#@ end


@@ -0,0 +1,20 @@
#@ load("@ytt:overlay", "overlay")
#@ def match():
kind: Deployment
metadata:
name: rng
#@ end
#@overlay/match by=overlay.subset(match())
---
spec:
template:
spec:
containers:
#@overlay/match by="name"
- name: rng
readinessProbe:
httpGet:
#@overlay/match missing_ok=True
path: /1


@@ -0,0 +1,19 @@
#@data/values-schema
#! Entries starting with an underscore will hold default values.
#! Entries NOT starting with an underscore will generate a Deployment
#! (and a Service if a port number is set).
---
_default_:
repository: dockercoins
tag: v0.1
hasher:
port: 80
redis:
repository: library
tag: latest
rng:
port: 80
webui:
port: 80
type: NodePort
worker: {}


@@ -0,0 +1,25 @@
#@ load("@ytt:overlay", "overlay")
#@ def match():
kind: Deployment
metadata:
name: worker
#@ end
#! This removes the number of replicas:
#@overlay/match by=overlay.subset(match())
---
spec:
#@overlay/remove
replicas:
#! This overrides it:
#@overlay/match by=overlay.subset(match())
---
spec:
#@overlay/match missing_ok=True
replicas: 10
#! Note that it's not necessary to remove the number of replicas.
#! We're just presenting both options here (for instance, you might
#! want to remove the number of replicas if you're using an HPA).


@@ -2,4 +2,3 @@
base = "slides"
publish = "slides"
command = "./build.sh once"


@@ -1,11 +1,10 @@
---
- hosts: nodes
sudo: true
become: yes
vars_files:
- vagrant.yml
tasks:
- name: clean up the home folder
file:
path: /home/vagrant/{{ item }}
@@ -24,25 +23,23 @@
- name: installing dependencies
apt:
name: apt-transport-https,ca-certificates,python-pip,tmux
name: apt-transport-https,ca-certificates,python3-pip,tmux
state: present
update_cache: true
- name: fetching docker repo key
apt_key:
keyserver: hkp://p80.pool.sks-keyservers.net:80
id: 58118E89F3A912897C070ADBF76221572C52609D
- name: adding package repos
apt_repository:
repo: "{{ item }}"
url: https://download.docker.com/linux/ubuntu/gpg
state: present
- name: adding docker repo
apt_repository:
repo: deb https://download.docker.com/linux/ubuntu focal stable
state: present
with_items:
- deb https://apt.dockerproject.org/repo ubuntu-trusty main
- name: installing docker
apt:
name: docker-engine
name: docker-ce,docker-ce-cli,containerd.io,docker-compose-plugin
state: present
update_cache: true
@@ -56,7 +53,7 @@
lineinfile:
dest: /etc/default/docker
line: DOCKER_OPTS="--host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:55555"
regexp: '^#?DOCKER_OPTS=.*$'
regexp: "^#?DOCKER_OPTS=.*$"
state: present
register: docker_opts
@@ -66,22 +63,14 @@
state: restarted
when: docker_opts is defined and docker_opts.changed
- name: performing pip autoupgrade
pip:
name: pip
state: latest
- name: installing virtualenv
pip:
name: virtualenv
state: latest
- name: Install Docker Compose via PIP
pip: name=docker-compose
- name: install docker-compose from official github repo
get_url:
url: https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
dest: /usr/local/bin/docker-compose
mode: "u+x,g+x"
- name:
file:
path="/usr/local/bin/docker-compose"
file: path="/usr/local/bin/docker-compose"
state=file
mode=0755
owner=vagrant
@@ -128,5 +117,3 @@
line: "127.0.0.1 localhost {{ inventory_hostname }}"
- regexp: '^127\.0\.1\.1'
line: "127.0.1.1 {{ inventory_hostname }}"


@@ -1,13 +1,12 @@
---
vagrant:
default_box: ubuntu/trusty64
default_box: ubuntu/focal64
default_box_check_update: true
ssh_insert_key: false
min_memory: 256
min_cores: 1
instances:
- hostname: node1
private_ip: 10.10.10.10
memory: 1512
@@ -37,6 +36,3 @@ instances:
private_ip: 10.10.10.50
memory: 512
cores: 1


@@ -34,28 +34,15 @@ to that directory, then create the clusters using that configuration.
- Scaleway: run `scw init`
2. Optional: set number of clusters, cluster size, and region.
By default, 1 cluster will be configured, with 2 nodes, and auto-scaling up to 5 nodes.
If you want, you can override these parameters, with the following variables.
2. Run!
```bash
export TF_VAR_how_many_clusters=5
export TF_VAR_min_nodes_per_pool=2
export TF_VAR_max_nodes_per_pool=4
export TF_VAR_location=xxx
./run.sh <providername> <location> [number of clusters] [min nodes] [max nodes]
```
The `location` variable is optional. Each provider should have a default value.
The value of the `location` variable is provider-specific. Examples:
If you don't specify a provider name, it will list available providers.
| Provider | Example value | How to see possible values
|---------------|-------------------|---------------------------
| Digital Ocean | `ams3` | `doctl compute region list`
| Google Cloud | `europe-north1-a` | `gcloud compute zones list`
| Linode | `eu-central` | `linode-cli regions list`
| Oracle Cloud | `eu-stockholm-1` | `oci iam region list`
If you don't specify a location, it will list locations available for this provider.
You can also specify multiple locations, and then they will be
used in round-robin fashion.
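For example (the values below are illustrative; check your provider's location list first), this spreads four clusters across two Linode regions in round-robin fashion:
```bash
./run.sh linode "eu-central eu-west" 4
```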
@@ -66,22 +53,15 @@ my requests to increase that quota were denied) you can do the
following:
```bash
export TF_VAR_location=$(gcloud compute zones list --format=json | jq -r .[].name | grep ^europe)
LOCATIONS=$(gcloud compute zones list --format=json | jq -r .[].name | grep ^europe)
./run.sh googlecloud "$LOCATIONS"
```
Then when you apply, clusters will be created across all available
zones in Europe. (When I write this, there are 20+ zones in Europe,
so even with my quota, I can create 40 clusters.)
3. Run!
```bash
./run.sh <providername>
```
(If you don't specify a provider name, it will list available providers.)
4. Shutting down
3. Shutting down
Go to the directory that was created by the previous step (`tag-YYYY-MM...`)
and run `terraform destroy`.
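For example (the directory name below is hypothetical; the actual one follows the `tag-YYYY-MM...` pattern with the real date):
```bash
cd tag-2023-02-23-10-00
terraform destroy
```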
@@ -112,7 +92,7 @@ terraform init
See steps above, and add the following extra steps:
- Digital Coean:
- Digital Ocean:
```bash
export DIGITALOCEAN_ACCESS_TOKEN=$(grep ^access-token ~/.config/doctl/config.yaml | cut -d: -f2 | tr -d " ")
```
@@ -160,3 +140,30 @@ terraform destroy
```bash
rm stage2/terraform.tfstate*
```
10. Clean up leftovers.
Some providers don't properly clean up the resources created by the CCM.
For instance, when you create a Kubernetes `Service` of type
`LoadBalancer`, it generally provisions a cloud load balancer.
On Linode (and possibly other providers, too!) these cloud load balancers
aren't deleted when the cluster gets deleted, and they keep incurring
charges. You should check for those, to make sure that you don't
get charged for resources that you don't use anymore. As I write this
paragraph, there are:
- `linode-delete-ccm-loadbalancers.sh` to delete the Linode
nodebalancers; but be careful: it deletes **all** the nodebalancers
whose name starts with `ccm-`, which means that if you still have
Kubernetes clusters, their load balancers will be deleted as well!
- `linode-delete-pvc-volumes.sh` to delete Linode persistent disks
that have been created to satisfy Persistent Volume Claims
(these need to be removed manually because the default Storage Class
on Linode has a RETAIN policy). Again, be careful, this will wipe
out any volume whose label starts with `pvc`. (I don't know if it
will remove volumes that are still attached.)
Eventually, I hope to add more scripts for other providers, and make
them more selective and more robust, but for now, that's better than
nothing.

View File

@@ -0,0 +1,4 @@
#!/bin/sh
linode-cli nodebalancers list --json |
jq '.[] | select(.label | startswith("ccm-")) | .id' |
xargs -n1 -P10 linode-cli nodebalancers delete
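Before running the script above, it can be worth previewing what would be deleted; a possible dry run using the same `jq` filter:
```bash
linode-cli nodebalancers list --json |
jq -r '.[] | select(.label | startswith("ccm-")) | [.id, .label] | @tsv'
```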

View File

@@ -0,0 +1,4 @@
#!/bin/sh
linode-cli volumes list --json |
jq '.[] | select(.label | startswith("pvc")) | .id' |
xargs -n1 -P10 linode-cli volumes delete

View File

@@ -3,11 +3,37 @@ set -e
TIME=$(which time)
PROVIDER=$1
[ "$PROVIDER" ] || {
echo "Please specify a provider as first argument, or 'ALL' for parallel mode."
if [ -f ~/.config/doctl/config.yaml ]; then
export DIGITALOCEAN_ACCESS_TOKEN=$(grep ^access-token ~/.config/doctl/config.yaml | cut -d: -f2 | tr -d " ")
fi
if [ -f ~/.config/linode-cli ]; then
export LINODE_TOKEN=$(grep ^token ~/.config/linode-cli | cut -d= -f2 | tr -d " ")
fi
[ "$1" ] || {
echo "Syntax:"
echo ""
echo "$0 <provider> <region> [how-many-clusters] [min-nodes] [max-nodes]"
echo ""
echo "Available providers:"
ls -1 source/modules
echo ""
echo "Leave the region empty to show available regions for this provider."
echo "You can also specify ALL as a provider to simultaneously provision"
echo "many clusters on *each* provider for benchmarking purposes."
echo ""
exit 1
}
PROVIDER="$1"
export TF_VAR_location="$2"
export TF_VAR_how_many_clusters="${3-1}"
export TF_VAR_min_nodes_per_pool="${4-2}"
export TF_VAR_max_nodes_per_pool="${5-4}"
[ "$TF_VAR_location" ] || {
"./source/modules/$PROVIDER/list_locations.sh"
exit 1
}

View File

@@ -1,6 +1,6 @@
resource "random_string" "_" {
length = 4
number = false
numeric = false
special = false
upper = false
}

View File

@@ -62,9 +62,11 @@ resource "null_resource" "wait_for_nodes" {
KUBECONFIG = local_file.kubeconfig[each.key].filename
}
command = <<-EOT
set -e
kubectl get nodes --watch | grep --silent --line-buffered .
kubectl wait node --for=condition=Ready --all --timeout=10m
while sleep 1; do
kubectl get nodes --watch | grep --silent --line-buffered . &&
kubectl wait node --for=condition=Ready --all --timeout=10m &&
break
done
EOT
}
}

View File

@@ -0,0 +1,2 @@
#!/bin/sh
doctl compute region list

View File

@@ -53,5 +53,5 @@ variable "location" {
# doctl kubernetes options versions -o json | jq -r .[].slug
variable "k8s_version" {
type = string
default = "1.21.5-do.0"
default = "1.22.8-do.1"
}

View File

@@ -0,0 +1,2 @@
#!/bin/sh
gcloud compute zones list

View File

@@ -0,0 +1,2 @@
#!/bin/sh
linode-cli regions list

View File

@@ -3,7 +3,7 @@ resource "linode_lke_cluster" "_" {
tags = var.common_tags
# "region" is mandatory, so let's provide a default value if none was given.
region = var.location != null ? var.location : "eu-central"
k8s_version = var.k8s_version
k8s_version = local.k8s_version
pool {
type = local.node_type

View File

@@ -51,7 +51,22 @@ variable "location" {
# To view supported versions, run:
# linode-cli lke versions-list --json | jq -r .[].id
data "external" "k8s_version" {
program = [
"sh",
"-c",
<<-EOT
linode-cli lke versions-list --json |
jq -r '{"latest": [.[].id] | sort [-1]}'
EOT
]
}
variable "k8s_version" {
type = string
default = "1.21"
default = ""
}
locals {
k8s_version = var.k8s_version != "" ? var.k8s_version : data.external.k8s_version.result.latest
}
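To sanity-check what this data source will return, a roughly equivalent query can be run by hand (a sketch using jq's `sort` and `last` builtins):
```bash
linode-cli lke versions-list --json | jq -r '[.[].id] | sort | last'
```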

View File

@@ -0,0 +1,2 @@
#!/bin/sh
oci iam region list

View File

@@ -0,0 +1,6 @@
#!/bin/sh
echo "# Note that this is hard-coded in $0.
# I don't know if there is a way to list regions through the Scaleway API.
fr-par
nl-ams
pl-waw"

View File

@@ -56,5 +56,5 @@ variable "location" {
# scw k8s version list -o json | jq -r .[].name
variable "k8s_version" {
type = string
default = "1.22.2"
default = "1.24.7"
}

View File

@@ -2,7 +2,7 @@ terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.7.1"
version = "2.16.1"
}
}
}
@@ -145,23 +145,15 @@ resource "helm_release" "metrics_server_${index}" {
# but only if it's not already installed.
count = yamldecode(file("./flags.${index}"))["has_metrics_server"] ? 0 : 1
provider = helm.cluster_${index}
repository = "https://charts.bitnami.com/bitnami"
repository = "https://kubernetes-sigs.github.io/metrics-server/"
chart = "metrics-server"
version = "5.8.8"
version = "3.8.2"
name = "metrics-server"
namespace = "metrics-server"
create_namespace = true
set {
name = "apiService.create"
value = "true"
}
set {
name = "extraArgs.kubelet-insecure-tls"
value = "true"
}
set {
name = "extraArgs.kubelet-preferred-address-types"
value = "InternalIP"
name = "args"
value = "{--kubelet-insecure-tls}"
}
}
@@ -201,7 +193,6 @@ resource "tls_private_key" "cluster_admin_${index}" {
}
resource "tls_cert_request" "cluster_admin_${index}" {
key_algorithm = tls_private_key.cluster_admin_${index}.algorithm
private_key_pem = tls_private_key.cluster_admin_${index}.private_key_pem
subject {
common_name = "cluster-admin"

View File

@@ -17,6 +17,7 @@ These tools can help you to create VMs on:
- [Parallel SSH](https://github.com/lilydjwg/pssh)
(should be installable with `pip install git+https://github.com/lilydjwg/pssh`;
on a Mac, try `brew install pssh`)
- [yq](https://github.com/kislyuk/yq)
Depending on the infrastructure that you want to use, you also need to install
the CLI that is specific to that cloud. For OpenStack deployments, you will

View File

@@ -1,3 +1,3 @@
INFRACLASS=scaleway
#SCW_INSTANCE_TYPE=DEV1-L
#SCW_ZONE=fr-par-2
SCW_ZONE=fr-par-2

View File

@@ -131,6 +131,8 @@ set nowrap
SQRL
pssh -I "sudo -u $USER_LOGIN tee /home/$USER_LOGIN/.tmux.conf" <<SQRL
set -g status-style bg=yellow,bold
bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
@@ -157,6 +159,9 @@ _cmd_clusterize() {
TAG=$1
need_tag
# Disable unattended upgrades so that they don't interfere with the subsequent steps
pssh sudo rm -f /etc/apt/apt.conf.d/50unattended-upgrades
# Special case for scaleway since it doesn't come with sudo
if [ "$INFRACLASS" = "scaleway" ]; then
pssh -l root "
@@ -182,9 +187,23 @@ _cmd_clusterize() {
pssh "
if [ -f /etc/iptables/rules.v4 ]; then
sudo sed -i 's/-A INPUT -j REJECT --reject-with icmp-host-prohibited//' /etc/iptables/rules.v4
sudo netfilter-persistent flush
sudo netfilter-persistent start
fi"
# oracle-cloud-agent upgrades packages in the background.
# This breaks our deployment scripts, because when we invoke apt-get, it complains
# that the lock already exists (symptom: random "Exited with error code 100").
# Workaround: if we detect oracle-cloud-agent, remove it.
# But this agent seems to also take care of installing/upgrading
# the unified-monitoring-agent package, so when we stop the snap,
# it can leave dpkg in a broken state. We "fix" it with the 2nd command.
pssh "
if [ -d /snap/oracle-cloud-agent ]; then
sudo snap remove oracle-cloud-agent
sudo dpkg --remove --force-remove-reinstreq unified-monitoring-agent
fi"
# Copy settings and install Python YAML parser
pssh -I tee /tmp/settings.yaml <tags/$TAG/settings.yaml
pssh "
@@ -248,19 +267,21 @@ _cmd_docker() {
# Add registry mirror configuration.
if ! [ -f /etc/docker/daemon.json ]; then
sudo mkdir -p /etc/docker
echo '{\"registry-mirrors\": [\"https://mirror.gcr.io\"]}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
fi
"
##VERSION## https://github.com/docker/compose/releases
if [ "$ARCHITECTURE" ]; then
COMPOSE_VERSION=v2.2.3
COMPOSE_PLATFORM='linux-$(uname -m)'
else
COMPOSE_VERSION=1.29.2
COMPOSE_PLATFORM='Linux-$(uname -m)'
fi
COMPOSE_VERSION=v2.11.1
COMPOSE_PLATFORM='linux-$(uname -m)'
# Just in case you need Compose 1.X, you can use the following lines.
# (But it will probably only work for x86_64 machines.)
#COMPOSE_VERSION=1.29.2
#COMPOSE_PLATFORM='Linux-$(uname -m)'
pssh "
set -e
### Install docker-compose.
@@ -338,7 +359,8 @@ EOF"
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet kubeadm kubectl &&
sudo apt-mark hold kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl &&
kubeadm completion bash | sudo tee /etc/bash_completion.d/kubeadm &&
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl &&
echo 'alias k=kubectl' | sudo tee /etc/bash_completion.d/k &&
echo 'complete -F __start_kubectl k' | sudo tee -a /etc/bash_completion.d/k"
@@ -411,8 +433,9 @@ EOF
# Install weave as the pod network
pssh "
if i_am_first_node; then
kubever=\$(kubectl version | base64 | tr -d '\n') &&
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
#kubever=\$(kubectl version | base64 | tr -d '\n') &&
#kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml
fi"
# Join the other nodes to the cluster
@@ -427,6 +450,9 @@ EOF
pssh "
if i_am_first_node; then
kubectl apply -f https://raw.githubusercontent.com/jpetazzo/container.training/master/k8s/metrics-server.yaml
#helm upgrade --install metrics-server \
# --repo https://kubernetes-sigs.github.io/metrics-server/ metrics-server \
# --namespace kube-system --set args={--kubelet-insecure-tls}
fi"
}
@@ -467,12 +493,13 @@ _cmd_kubetools() {
# Install kube-ps1
pssh "
set -e
if ! [ -f /etc/profile.d/kube-ps1.sh ]; then
if ! [ -d /opt/kube-ps1 ]; then
cd /tmp
git clone https://github.com/jonmosco/kube-ps1
sudo cp kube-ps1/kube-ps1.sh /etc/profile.d/kube-ps1.sh
sudo mv kube-ps1 /opt/kube-ps1
sudo -u $USER_LOGIN sed -i s/docker-prompt/kube_ps1/ /home/$USER_LOGIN/.bashrc &&
sudo -u $USER_LOGIN tee -a /home/$USER_LOGIN/.bashrc <<EOF
. /opt/kube-ps1/kube-ps1.sh
KUBE_PS1_PREFIX=""
KUBE_PS1_SUFFIX=""
KUBE_PS1_SYMBOL_ENABLE="false"
@@ -483,13 +510,13 @@ EOF
# Install stern
##VERSION## https://github.com/stern/stern/releases
STERN_VERSION=1.20.1
STERN_VERSION=1.22.0
FILENAME=stern_${STERN_VERSION}_linux_${ARCH}
URL=https://github.com/stern/stern/releases/download/v$STERN_VERSION/$FILENAME.tar.gz
pssh "
if [ ! -x /usr/local/bin/stern ]; then
curl -fsSL $URL |
sudo tar -C /usr/local/bin -zx --strip-components=1 $FILENAME/stern
sudo tar -C /usr/local/bin -zx stern
sudo chmod +x /usr/local/bin/stern
stern --completion bash | sudo tee /etc/bash_completion.d/stern
stern --version
@@ -505,7 +532,7 @@ EOF
# Install kustomize
##VERSION## https://github.com/kubernetes-sigs/kustomize/releases
KUSTOMIZE_VERSION=v4.4.0
KUSTOMIZE_VERSION=v4.5.7
URL=https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_${ARCH}.tar.gz
pssh "
if [ ! -x /usr/local/bin/kustomize ]; then
@@ -524,7 +551,7 @@ EOF
if [ ! -x /usr/local/bin/ship ]; then
##VERSION##
curl -fsSL https://github.com/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_$ARCH.tar.gz |
sudo tar -C /usr/local/bin -zx ship
sudo tar -C /usr/local/bin -zx ship
fi"
# Install the AWS IAM authenticator
@@ -532,8 +559,8 @@ EOF
if [ ! -x /usr/local/bin/aws-iam-authenticator ]; then
##VERSION##
sudo curl -fsSLo /usr/local/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/$ARCH/aws-iam-authenticator
sudo chmod +x /usr/local/bin/aws-iam-authenticator
aws-iam-authenticator version
sudo chmod +x /usr/local/bin/aws-iam-authenticator
aws-iam-authenticator version
fi"
# Install the krew package manager
@@ -550,7 +577,7 @@ EOF
# Install k9s
pssh "
if [ ! -x /usr/local/bin/k9s ]; then
FILENAME=k9s_Linux_$HERP_DERP_ARCH.tar.gz &&
FILENAME=k9s_Linux_$ARCH.tar.gz &&
curl -fsSL https://github.com/derailed/k9s/releases/latest/download/\$FILENAME |
sudo tar -zxvf- -C /usr/local/bin k9s
k9s version
@@ -575,6 +602,7 @@ EOF
FILENAME=tilt.\$TILT_VERSION.linux.$TILT_ARCH.tar.gz
curl -fsSL https://github.com/tilt-dev/tilt/releases/download/v\$TILT_VERSION/\$FILENAME |
sudo tar -zxvf- -C /usr/local/bin tilt
tilt completion bash | sudo tee /etc/bash_completion.d/tilt
tilt version
fi"
@@ -583,6 +611,7 @@ EOF
if [ ! -x /usr/local/bin/skaffold ]; then
curl -fsSLo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-$ARCH &&
sudo install skaffold /usr/local/bin/
skaffold completion bash | sudo tee /etc/bash_completion.d/skaffold
skaffold version
fi"
@@ -591,20 +620,39 @@ EOF
if [ ! -x /usr/local/bin/kompose ]; then
curl -fsSLo kompose https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-$ARCH &&
sudo install kompose /usr/local/bin
kompose completion bash | sudo tee /etc/bash_completion.d/kompose
kompose version
fi"
# Install KinD
pssh "
if [ ! -x /usr/local/bin/kind ]; then
curl -fsSLo kind https://github.com/kubernetes-sigs/kind/releases/latest/download/kind-linux-$ARCH &&
sudo install kind /usr/local/bin
kind completion bash | sudo tee /etc/bash_completion.d/kind
kind version
fi"
# Install YTT
pssh "
if [ ! -x /usr/local/bin/ytt ]; then
curl -fsSLo ytt https://github.com/vmware-tanzu/carvel-ytt/releases/latest/download/ytt-linux-$ARCH &&
sudo install ytt /usr/local/bin
ytt completion bash | sudo tee /etc/bash_completion.d/ytt
ytt version
fi"
##VERSION## https://github.com/bitnami-labs/sealed-secrets/releases
KUBESEAL_VERSION=v0.16.0
case $ARCH in
amd64) FILENAME=kubeseal-linux-amd64;;
arm64) FILENAME=kubeseal-arm64;;
*) FILENAME=nope;;
esac
[ "$FILENAME" = "nope" ] || pssh "
KUBESEAL_VERSION=0.17.4
#case $ARCH in
#amd64) FILENAME=kubeseal-linux-amd64;;
#arm64) FILENAME=kubeseal-arm64;;
#*) FILENAME=nope;;
#esac
pssh "
if [ ! -x /usr/local/bin/kubeseal ]; then
curl -fsSLo kubeseal https://github.com/bitnami-labs/sealed-secrets/releases/download/$KUBESEAL_VERSION/$FILENAME &&
sudo install kubeseal /usr/local/bin
curl -fsSL https://github.com/bitnami-labs/sealed-secrets/releases/download/v$KUBESEAL_VERSION/kubeseal-$KUBESEAL_VERSION-linux-$ARCH.tar.gz |
sudo tar -zxvf- -C /usr/local/bin kubeseal
kubeseal --version
fi"
}

View File

@@ -26,12 +26,24 @@ infra_start() {
info " Name: $NAME"
info " Instance type: $LINODE_TYPE"
ROOT_PASS="$(base64 /dev/urandom | cut -c1-20 | head -n 1)"
linode-cli linodes create \
MAX_TRY=5
TRY=1
WAIT=1
while ! linode-cli linodes create \
--type=${LINODE_TYPE} --region=${LINODE_REGION} \
--image=linode/ubuntu18.04 \
--authorized_keys="${LINODE_SSHKEY}" \
--root_pass="${ROOT_PASS}" \
--tags=${TAG} --label=${NAME}
--tags=${TAG} --label=${NAME}; do
warning "Failed to create VM (attempt $TRY/$MAX_TRY)."
if [ $TRY -ge $MAX_TRY ]; then
die "Giving up."
fi
info "Waiting $WAIT seconds and retrying."
sleep $WAIT
TRY=$(($TRY+1))
WAIT=$(($WAIT*2))
done
done
sep

View File

@@ -36,7 +36,7 @@ if os.path.isfile(domain_or_domain_file):
clusters = [line.split() for line in lines]
else:
ips = open(f"tags/{ips_file_or_tag}/ips.txt").read().split()
settings_file = f"tags/{tag}/settings.yaml"
settings_file = f"tags/{ips_file_or_tag}/settings.yaml"
clustersize = yaml.safe_load(open(settings_file))["clustersize"]
clusters = []
while ips:

View File

@@ -17,8 +17,17 @@
exit 1
}
NETLIFY_USERID=$(jq .userId < ~/.config/netlify/config.json)
NETLIFY_TOKEN=$(jq -r .users[$NETLIFY_USERID].auth.token < ~/.config/netlify/config.json)
NETLIFY_CONFIG_FILE=~/.config/netlify/config.json
if ! [ -f "$NETLIFY_CONFIG_FILE" ]; then
echo "Could not find Netlify configuration file ($NETLIFY_CONFIG_FILE)."
echo "Try to run the following command, and try again:"
echo "npx netlify-cli login"
exit 1
fi
NETLIFY_USERID=$(jq .userId < "$NETLIFY_CONFIG_FILE")
NETLIFY_TOKEN=$(jq -r .users[$NETLIFY_USERID].auth.token < "$NETLIFY_CONFIG_FILE")
netlify() {
URI=$1

View File

@@ -16,7 +16,7 @@ user_password: training
# For a list of old versions, check:
# https://kubernetes.io/releases/patch-releases/#non-active-branch-history
kubernetes_version: 1.18.20
kubernetes_version: 1.20.15
image:

View File

@@ -0,0 +1,71 @@
resource "azurerm_resource_group" "_" {
name = var.prefix
location = var.location
}
resource "azurerm_public_ip" "_" {
count = var.how_many_nodes
name = format("%s-%04d", var.prefix, count.index + 1)
location = azurerm_resource_group._.location
resource_group_name = azurerm_resource_group._.name
allocation_method = "Dynamic"
}
resource "azurerm_network_interface" "_" {
count = var.how_many_nodes
name = format("%s-%04d", var.prefix, count.index + 1)
location = azurerm_resource_group._.location
resource_group_name = azurerm_resource_group._.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet._.id
private_ip_address_allocation = "Dynamic"
public_ip_address_id = azurerm_public_ip._[count.index].id
}
}
resource "azurerm_linux_virtual_machine" "_" {
count = var.how_many_nodes
name = format("%s-%04d", var.prefix, count.index + 1)
resource_group_name = azurerm_resource_group._.name
location = azurerm_resource_group._.location
size = var.size
admin_username = "ubuntu"
network_interface_ids = [
azurerm_network_interface._[count.index].id,
]
admin_ssh_key {
username = "ubuntu"
public_key = local.authorized_keys
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "18.04-LTS" # FIXME
version = "latest"
}
}
# The public IP address only gets allocated when the address actually gets
# attached to the virtual machine. So we need to do this extra indirection
# to retrieve the IP addresses. Otherwise the IP addresses show up as blank.
# See: https://github.com/hashicorp/terraform-provider-azurerm/issues/310#issuecomment-335479735
data "azurerm_public_ip" "_" {
count = var.how_many_nodes
name = format("%s-%04d", var.prefix, count.index + 1)
resource_group_name = azurerm_resource_group._.name
depends_on = [azurerm_linux_virtual_machine._]
}
output "ip_addresses" {
value = join("", formatlist("%s\n", data.azurerm_public_ip._.*.ip_address))
}

View File

@@ -0,0 +1,13 @@
resource "azurerm_virtual_network" "_" {
name = "tf-vnet"
address_space = ["10.10.0.0/16"]
location = azurerm_resource_group._.location
resource_group_name = azurerm_resource_group._.name
}
resource "azurerm_subnet" "_" {
name = "tf-subnet"
resource_group_name = azurerm_resource_group._.name
virtual_network_name = azurerm_virtual_network._.name
address_prefixes = ["10.10.0.0/20"]
}

View File

@@ -0,0 +1,13 @@
terraform {
required_version = ">= 1"
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "=3.33.0"
}
}
}
provider "azurerm" {
features {}
}

View File

@@ -0,0 +1,32 @@
variable "prefix" {
type = string
default = "provisioned-with-terraform"
}
variable "how_many_nodes" {
type = number
default = 2
}
locals {
authorized_keys = file("~/.ssh/id_rsa.pub")
}
/*
Available sizes:
"Standard_D11_v2" # CPU=2 RAM=14
"Standard_F4s_v2" # CPU=4 RAM=8
"Standard_D1_v2" # CPU=1 RAM=3.5
"Standard_B1ms" # CPU=1 RAM=2
"Standard_B2s" # CPU=2 RAM=4
*/
variable "size" {
type = string
default = "Standard_F4s_v2"
}
variable "location" {
type = string
default = "South Africa North"
}

View File

@@ -2,6 +2,7 @@
#/ /kube-halfday.yml.html 200!
#/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200!
/ /kube.yml.html 200!
# And this allows to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack

File diff suppressed because it is too large.

View File

@@ -3,6 +3,7 @@
"version": "0.0.1",
"dependencies": {
"express": "^4.16.2",
"socket.io": "^2.4.0"
"socket.io": "^4.5.1",
"socket.io-client": "^4.5.1"
}
}

View File

@@ -19,7 +19,7 @@ They abstract the connection details for this services, and can help with:
* fail over (how do I know to which instance of a replicated service I should connect?)
* load balancing (how to I spread my requests across multiple instances of a service?)
* load balancing (how do I spread my requests across multiple instances of a service?)
* authentication (what if my service requires credentials, certificates, or otherwise?)

View File

@@ -58,7 +58,7 @@ class: pic
- it uses different concepts (Compose services ≠ Kubernetes services)
- it needs a Docker Engine (althought containerd support might be coming)
- it needs a Docker Engine (although containerd support might be coming)
---

View File

@@ -35,7 +35,7 @@ At the end of this section, you will be able to:
---
## Runing an NGINX server
## Running an NGINX server
```bash
$ docker run -d -P nginx

View File

@@ -111,7 +111,7 @@ CMD ["python", "app.py"]
RUN wget http://.../foo.tar.gz \
&& tar -zxf foo.tar.gz \
&& mv foo/fooctl /usr/local/bin \
&& rm -rf foo
&& rm -rf foo foo.tar.gz
...
```

View File

@@ -317,9 +317,11 @@ class: extra-details
## Trash your servers and burn your code
*(This is the title of a
[2013 blog post](http://chadfowler.com/2013/06/23/immutable-deployments.html)
[2013 blog post][immutable-deployments]
by Chad Fowler, where he explains the concept of immutable infrastructure.)*
[immutable-deployments]: https://web.archive.org/web/20160305073617/http://chadfowler.com/blog/2013/06/23/immutable-deployments/
--
* Let's majorly mess up our container.

View File

@@ -13,7 +13,7 @@
- ... Or be comfortable spending some time reading the Docker
[documentation](https://docs.docker.com/) ...
- ... And looking for answers in the [Docker forums](forums.docker.com),
- ... And looking for answers in the [Docker forums](https://forums.docker.com),
[StackOverflow](http://stackoverflow.com/questions/tagged/docker),
and other outlets

View File

@@ -1,57 +1,75 @@
#!/usr/bin/env python
import re
import sys
import yaml
FIRST_SLIDE_MARKER = "name: toc-"
PART_PREFIX = "part-"
filename = sys.argv[1]
if filename.endswith(".html"):
html_file = filename
yaml_file = filename[: -len(".html")]
else:
html_file = filename + ".html"
yaml_file = filename
excluded_classes = yaml.safe_load(open(yaml_file))["exclude"]
PREFIX = "name: toc-"
EXCLUDED = ["in-person"]
class State(object):
def __init__(self):
self.current_slide = 1
self.section_title = None
self.section_start = 0
self.section_slides = 0
self.current_slide = -1
self.parts = {}
self.sections = {}
def show(self):
if self.section_title.startswith("part-"):
return
print("{0.section_title}\t{0.section_start}\t{0.section_slides}".format(self))
self.sections[self.section_title] = self.section_slides
def end_section(self):
if state.section_title:
print(
"{0.section_start}\t{0.section_slides}\t{0.section_title}".format(self)
)
if self.section_part:
if self.section_part not in self.parts:
self.parts[self.section_part] = 0
self.parts[self.section_part] += self.section_slides
def new_section(self, slide):
# Normally, the title should be prefixed by a space
# (because section titles are first-level titles in markdown,
# e.g. "# Introduction", and markmaker removes the # but leaves
# the leading space).
self.section_title = None
if "\n " in slide:
self.section_title = slide.split("\n ")[1].split("\n")[0]
toc_links = re.findall("\(#toc-(.*)\)", slide)
self.section_part = None
for toc_link in toc_links:
if toc_link.startswith(PART_PREFIX):
self.section_part = toc_link
self.section_start = self.current_slide
self.section_slides = 0
state = State()
state.new_section("")
print("{}\t{}\t{}".format("index", "size", "title"))
title = None
for line in open(sys.argv[1]):
line = line.rstrip()
if line.startswith(PREFIX):
if state.section_title is None:
print("{}\t{}\t{}".format("title", "index", "size"))
else:
state.show()
state.section_title = line[len(PREFIX):].strip()
state.section_start = state.current_slide
state.section_slides = 0
if line == "---":
for slide in open(html_file).read().split("\n---\n"):
excluded = False
for line in slide.split("\n"):
if line.startswith("class:"):
for klass in excluded_classes:
if klass in line.split():
excluded = True
if excluded:
continue
if FIRST_SLIDE_MARKER in slide:
# A new section starts. Show info about the part that just ended.
state.end_section()
state.new_section(slide)
state.section_slides += 1
for sub_slide in slide.split("\n--\n"):
state.current_slide += 1
state.section_slides += 1
if line == "--":
state.current_slide += 1
toc_links = re.findall("\(#toc-(.*)\)", line)
if toc_links and state.section_title.startswith("part-"):
if state.section_title not in state.parts:
state.parts[state.section_title] = []
state.parts[state.section_title].append(toc_links[0])
# This is really hackish
if line.startswith("class:"):
for klass in EXCLUDED:
if klass in line:
state.section_slides -= 1
state.current_slide -= 1
state.show()
else:
state.end_section()
for part in sorted(state.parts, key=lambda f: int(f.split("-")[1])):
part_size = sum(state.sections[s] for s in state.parts[part])
print("{}\t{}\t{}".format("total size for", part, part_size))
print("{}\t{}\t{}".format(0, state.parts[part], "total size for " + part))

View File

@@ -4,6 +4,6 @@
(we will use the `rng` service in the dockercoins app)
- See what happens when the load increses
- See what happens when the load increases
(spoiler alert: it involves timeouts!)

View File

@@ -2,7 +2,7 @@
- Add an ingress controller to a Kubernetes cluster
- Create an ingress resource for a web app on that cluster
- Create an ingress resource for a couple of web apps on that cluster
- Challenge: accessing/exposing port 80

View File

@@ -1,49 +1,131 @@
# Exercise — Ingress
- We want to expose a web app through an ingress controller
- We want to expose a couple of web apps through an ingress controller
- This will require:
- the web app itself (dockercoins, NGINX, whatever we want)
- the web apps (e.g. two instances of `jpetazzo/color`)
- an ingress controller
- a domain name (`use \*.nip.io` or `\*.localdev.me`)
- an ingress resource
---
## Goal
## Different scenarios
- We want to be able to access the web app using a URL like:
We will use a different deployment mechanism depending on the cluster that we have:
http://webapp.localdev.me
- Managed cluster with working `LoadBalancer` Services
*or*
- Local development cluster
http://webapp.A.B.C.D.nip.io
- Cluster without `LoadBalancer` Services (e.g. deployed with `kubeadm`)
(where A.B.C.D is the IP address of one of our nodes)
---
## The apps
- The web apps will be deployed similarly, regardless of the scenario
- Let's start by deploying two web apps, e.g.:
a Deployment called `blue` and another called `green`, using image `jpetazzo/color`
- Expose them with two `ClusterIP` Services
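A minimal sketch of that starting point (assuming, as elsewhere in these materials, that the image listens on port 8888):
```bash
kubectl create deployment blue --image=jpetazzo/color
kubectl create deployment green --image=jpetazzo/color
kubectl expose deployment blue --port=8888
kubectl expose deployment green --port=8888
```
(`kubectl expose` creates `ClusterIP` Services by default.)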
---
## Scenario "classic cloud Kubernetes"
*Difficulty: easy*
For this scenario, we need a cluster with working `LoadBalancer` Services.
(For instance, a managed Kubernetes cluster from a cloud provider.)
We suggest to use "Ingress NGINX" with its default settings.
It can be installed with `kubectl apply` or with `helm`.
Both methods are described in [the documentation][ingress-nginx-deploy].
We want our apps to be available on e.g. http://X.X.X.X/blue and http://X.X.X.X/green
<br/>
(where X.X.X.X is the IP address of the `LoadBalancer` allocated by Ingress NGINX).
[ingress-nginx-deploy]: https://kubernetes.github.io/ingress-nginx/deploy/
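Once the controller is running, one possible way to create the ingress resource (assuming the `blue` and `green` Services listen on port 8888):
```bash
kubectl create ingress blue-green \
  --rule="/blue*=blue:8888" \
  --rule="/green*=green:8888"
```
(The trailing `*` gives the rules a `Prefix` path type.)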
---
## Scenario "local development cluster"
*Difficulty: easy-hard (depends on the type of cluster!)*
For this scenario, we want to use a local cluster like KinD, minikube, etc.
We suggest to use "Ingress NGINX" again, like for the previous scenario.
Furthermore, we want to use `localdev.me`.
We want our apps to be available on e.g. `blue.localdev.me` and `green.localdev.me`.
The difficulty is to ensure that `localhost:80` will map to the ingress controller.
(See next slide for hints!)
---
## Hints
- For the ingress controller, we can use:
- With clusters like Docker Desktop, the first `LoadBalancer` service uses `localhost`
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/index.md)
(if the ingress controller is the first `LoadBalancer` service, we're all set!)
- the [Traefik Helm chart](https://doc.traefik.io/traefik/getting-started/install-traefik/#use-the-helm-chart)
- With clusters like K3D and KinD, it is possible to define extra port mappings
- the container.training [Traefik DaemonSet](https://raw.githubusercontent.com/jpetazzo/container.training/main/k8s/traefik-v2.yaml)
(and map e.g. `localhost:80` to port 30080 on the node; then use that as a `NodePort`)
- If our cluster supports LoadBalancer Services: easy
---
(nothing special to do)
## Scenario "on premises cluster", take 1
- For local clusters, things can be more difficult; two options:
*Difficulty: easy*
- map localhost:80 to e.g. a NodePort service, and use `\*.localdev.me`
For this scenario, we need a cluster with nodes that are publicly accessible.
- use hostNetwork, or ExternalIP, and use `\*.nip.io`
We want to deploy the ingress controller so that it listens on port 80 on all nodes.
This can be done e.g. with the manifests in @@LINK[k8s/traefik.yaml].
We want our apps to be available on e.g. http://X.X.X.X/blue and http://X.X.X.X/green
<br/>
(where X.X.X.X is the IP address of any of our nodes).
---
## Scenario "on premises cluster", take 2
*Difficulty: medium*
We want to deploy the ingress controller so that it listens on port 80 on all nodes.
But this time, we want to use a Helm chart to install the ingress controller.
We can use either the Ingress NGINX Helm chart, or the Traefik Helm chart.
Test with an untainted node first.
Feel free to make it work on tainted nodes (e.g. control plane nodes) later.
---
## Scenario "on premises cluster", take 3
*Difficulty: hard*
This is similar to the previous scenario, but with two significant changes:
1. We only want to run the ingress controller on nodes that have the role `ingress`.
2. We don't want to use `hostNetwork`, but a list of `externalIPs` instead.
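As a hint, here is one hypothetical shape for that setup (the Service and namespace names assume a default Ingress NGINX install; `node1` and `A.B.C.D` are placeholders for a node carrying the `ingress` role and its address):
```bash
# Label the chosen node(s) so the controller's nodeSelector can target them:
kubectl label node node1 node-role.kubernetes.io/ingress=""
# Expose the controller through externalIPs instead of hostNetwork:
kubectl patch service ingress-nginx-controller --namespace ingress-nginx \
  --patch '{"spec": {"type": "ClusterIP", "externalIPs": ["A.B.C.D"]}}'
```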

View File

@@ -0,0 +1,7 @@
## Exercise — Network Policies
- Implement a system with 3 levels of security
(private pods, public pods, namespace pods)
- Apply it to the DockerCoins demo app

View File

@@ -0,0 +1,63 @@
# Exercise — Network Policies
We want to implement a generic network security mechanism.
Instead of creating one policy per service, we want to
create a fixed number of policies, and use a single label
to indicate the security level of our pods.
Then, when adding a new service to the stack, instead
of writing a new network policy for that service, we
only need to add the right label to the pods of that service.
---
## Specifications
We will use the label `security` to classify our pods.
- If `security=private`:
*the pod shouldn't accept any traffic*
- If `security=public`:
*the pod should accept all traffic*
- If `security=namespace`:
*the pod should only accept connections coming from the same namespace*
If `security` isn't set, assume it's `private`.
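As a starting point, here is a sketch of one of the three policies (names and exact rules are up to you):
```bash
# Pods labeled security=namespace accept traffic only from the same namespace.
# (A separate default-deny policy matching all pods covers security=private and
# unlabeled pods; an allow-all policy covers security=public.)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector:
    matchLabels:
      security: namespace
  ingress:
  - from:
    - podSelector: {}
EOF
```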
---
## Test setup
- Deploy a copy of the DockerCoins app in a new namespace
- Modify the pod templates so that:
- `webui` has `security=public`
- `worker` has `security=private`
- `hasher`, `redis`, `rng` have `security=namespace`
---
## Implement and test policies
- Write the network policies
(feel free to draw inspiration from the ones we've seen so far)
- Check that:
- you can connect to the `webui` from outside the cluster
- the application works correctly (shows 3-4 hashes/second)
- you cannot connect to the `hasher`, `redis`, `rng` services
- you cannot connect or even ping the `worker` pods

View File

@@ -0,0 +1,9 @@
## Exercise — RBAC
- Create two namespaces for users `alice` and `bob`
- Give each user full access to their own namespace
- Give each user read-only access to the other's namespace
- Let `alice` view the nodes of the cluster as well

View File

@@ -0,0 +1,97 @@
# Exercise — RBAC
We want to:
- Create two namespaces for users `alice` and `bob`
- Give each user full access to their own namespace
- Give each user read-only access to the other's namespace
- Let `alice` view the nodes of the cluster as well
---
## Initial setup
- Create two namespaces named `alice` and `bob`
- Check that if we impersonate Alice, we can't access her namespace yet:
```bash
kubectl --as alice get pods --namespace alice
```
---
## Access for Alice
- Grant Alice full access to her own namespace
(you can use a pre-existing Cluster Role)
- Check that Alice can create stuff in her namespace:
```bash
kubectl --as alice create deployment hello --image nginx --namespace alice
```
- But that she can't create stuff in Bob's namespace:
```bash
kubectl --as alice create deployment hello --image nginx --namespace bob
```
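One possible way to grant that access, assuming the built-in `admin` ClusterRole fits the bill:
```bash
kubectl create rolebinding admin-alice \
  --clusterrole=admin --user=alice --namespace=alice
```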
---
## Access for Bob
- Similarly, grant Bob full access to his own namespace
- Check that Bob can create stuff in his namespace:
```bash
kubectl --as bob create deployment hello --image nginx --namespace bob
```
- But that he can't create stuff in Alice's namespace:
```bash
kubectl --as bob create deployment hello --image nginx --namespace alice
```
---
## Read-only access
- Now, give Alice read-only access to Bob's namespace
- Check that Alice can view Bob's stuff:
```bash
kubectl --as alice get pods --namespace bob
```
- But that she can't touch this:
```bash
kubectl --as alice delete pods --namespace bob --all
```
- Likewise, give Bob read-only access to Alice's namespace
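Here too, a built-in ClusterRole can do the job; for instance (a sketch, with an illustrative binding name):
```bash
kubectl create rolebinding view-bob-for-alice \
  --clusterrole=view --user=alice --namespace=bob
```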
---
## Nodes
- Give Alice read-only access to the cluster nodes
(this will require creating a custom Cluster Role)
- Check that Alice can view the nodes:
```bash
kubectl --as alice get nodes
```
- But that Bob cannot:
```bash
kubectl --as bob get nodes
```
- And that Alice can't update nodes:
```bash
kubectl --as alice label nodes --all hello=world
```
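A possible shape for that custom role and its binding (names are illustrative):
```bash
kubectl create clusterrole node-viewer \
  --verb=get,list,watch --resource=nodes
kubectl create clusterrolebinding node-viewer-alice \
  --clusterrole=node-viewer --user=alice
```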

Binary file not shown (new image, 394 KiB).

View File

@@ -13,3 +13,4 @@ https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-1.jpg
https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-2.jpg
https://gallant-turing-d0d520.netlify.com/containers/two-containers-on-a-truck.jpg
https://gallant-turing-d0d520.netlify.com/containers/wall-of-containers.jpeg
https://gallant-turing-d0d520.netlify.com/containers/catene-de-conteneurs.jpg

View File

@@ -1,71 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
#- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
#- containers/Start_And_Attach.md
- containers/Naming_And_Inspecting.md
#- containers/Labels.md
- containers/Getting_Inside.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Container_Networking_Basics.md
#- containers/Network_Drivers.md
- containers/Local_Development_Workflow.md
- containers/Container_Network_Model.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
- containers/Multi_Stage_Builds.md
#- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
#- containers/Docker_Machine.md
#- containers/Advanced_Dockerfiles.md
#- containers/Buildkit.md
#- containers/Init_Systems.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Container_Engines.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md
#- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md

View File

@@ -1,72 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
content:
- shared/title.md
# - shared/logistics.md
- containers/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- - containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- - containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- containers/Docker_Machine.md
- - containers/Advanced_Dockerfiles.md
- containers/Buildkit.md
- containers/Init_Systems.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Resource_Limits.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
- - containers/Container_Engines.md
- containers/Pods_Anatomy.md
- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md

View File

@@ -1,80 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: https://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- # DAY 1
- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Dockerfile_Tips.md
- containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Exercise_Dockerfile_Advanced.md
-
- containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Start_And_Attach.md
- containers/Getting_Inside.md
- containers/Resource_Limits.md
- # DAY 2
- containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
-
- containers/Local_Development_Workflow.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
- containers/Installing_Docker.md
- containers/Container_Engines.md
- containers/Init_Systems.md
- containers/Advanced_Dockerfiles.md
- containers/Buildkit.md
-
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Orchestration_Overview.md
-
- shared/thankyou.md
- containers/links.md
#-
#- containers/Docker_Machine.md
#- containers/Ambassadors.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md

View File

@@ -168,7 +168,7 @@ class: extra-details
(`O=system:nodes`, `CN=system:node:name-of-the-node`)
- The Kubernetse API can act as a CA
- The Kubernetes API can act as a CA
(by wrapping an X509 CSR into a CertificateSigningRequest resource)
@@ -246,7 +246,7 @@ class: extra-details
(they don't require hand-editing a file and restarting the API server)
- A service account is associated with a set of secrets
- A service account can be associated with a set of secrets
(the kind that you can view with `kubectl get secrets`)
@@ -256,6 +256,28 @@ class: extra-details
---
## Service account tokens evolution
- In Kubernetes 1.21 and above, pods use *bound service account tokens*:
- these tokens are *bound* to a specific object (e.g. a Pod)
- they are automatically invalidated when the object is deleted
- these tokens also expire quickly (e.g. 1 hour) and get rotated automatically
- In Kubernetes 1.24 and above, unbound tokens aren't created automatically
- before 1.24, we would see unbound tokens with `kubectl get secrets`
- with 1.24 and above, these tokens can be created with `kubectl create token`
- ...or with a Secret with the right [type and annotation][create-token]
[create-token]: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#create-token
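For example, on a 1.24+ cluster, this requests a short-lived token for the `default` ServiceAccount:
```bash
kubectl create token default --duration=1h
```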
---
class: extra-details
## Checking our authentication method
@@ -390,6 +412,10 @@ class: extra-details
It should be named `default-token-XXXXX`.
When running Kubernetes 1.24 and above, this Secret won't exist.
<br/>
Instead, create a token with `kubectl create token default`.
---
class: extra-details

View File

@@ -202,7 +202,9 @@ class: extra-details
- These are JWS signatures using HMAC-SHA256
(see [here](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#configmap-signing) for more details)
(see [the reference documentation][configmap-signing] for more details)
[configmap-signing]: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#configmap-signing
---

slides/k8s/cainjector.md (new file, 60 lines)
View File

@@ -0,0 +1,60 @@
## CA injector - overview
- The Kubernetes API server can invoke various webhooks:
- conversion webhooks (registered in CustomResourceDefinitions)
- mutation webhooks (registered in MutatingWebhookConfigurations)
- validation webhooks (registered in ValidatingWebhookConfiguration)
- These webhooks must be served over TLS
- These webhooks must use valid TLS certificates
---
## Webhook certificates
- Option 1: certificate issued by a global CA
- doesn't work with internal services
<br/>
(their CN must be `<servicename>.<namespace>.svc`)
- Option 2: certificate issued by private CA + CA certificate in system store
- requires access to the API server's certificate store
- generally not doable on managed Kubernetes clusters
- Option 3: certificate issued by private CA + CA certificate in `caBundle`
- pass the CA certificate in `caBundle` field
<br/>
(in CRD or webhook manifests)
- can be managed automatically by cert-manager
---
## CA injector - details
- Add annotation to *injectable* resource
(CustomResourceDefinition, MutatingWebhookConfiguration, ValidatingWebhookConfiguration)
- Annotation refers to the thing holding the certificate:
- `cert-manager.io/inject-ca-from: <namespace>/<certificate>`
- `cert-manager.io/inject-ca-from-secret: <namespace>/<secret>`
- `cert-manager.io/inject-apiserver-ca: true` (use API server CA)
- When injecting from a Secret, the Secret must have a special annotation:
`cert-manager.io/allow-direct-injection: "true"`
- See [cert-manager documentation][docs] for details
[docs]: https://cert-manager.io/docs/concepts/ca-injector/
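As an illustration (the webhook and Certificate names below are hypothetical), the first annotation could be applied like this:
```bash
kubectl annotate validatingwebhookconfiguration my-webhook \
  cert-manager.io/inject-ca-from=my-namespace/my-certificate
```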

View File

@@ -48,7 +48,7 @@
- We must run nodes on a supported infrastructure
- See [here] for a non-exhaustive list of supported providers
- Check the [GitHub repo][autoscaler-providers] for a non-exhaustive list of supported providers
- Sometimes, the cluster autoscaler is installed automatically
@@ -58,7 +58,7 @@
(which is often non-trivial and highly provider-specific)
[here]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider
[autoscaler-providers]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider
---

View File

@@ -138,7 +138,7 @@ class: extra-details
- The Cluster Autoscaler only supports a few cloud infrastructures
(see [here](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider) for a list)
(see the [kubernetes/autoscaler repo][kubernetes-autoscaler-repo] for a list)
- The Cluster Autoscaler cannot scale down nodes that have pods using:
@@ -148,6 +148,8 @@ class: extra-details
- a restrictive PodDisruptionBudget
[kubernetes-autoscaler-repo]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider
---
## Other way to do capacity planning

View File

@@ -81,7 +81,7 @@
## What version are we running anyway?
- When I say, "I'm running Kubernetes 1.18", is that the version of:
- When I say, "I'm running Kubernetes 1.20", is that the version of:
- kubectl
@@ -157,15 +157,15 @@
## Kubernetes uses semantic versioning
- Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.18.20:
- Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.20.15:
- MAJOR = 1
- MINOR = 18
- PATCH = 20
- MINOR = 20
- PATCH = 15
- It's always possible to mix and match different PATCH releases
(e.g. 1.18.20 and 1.18.15 are compatible)
(e.g. 1.20.0 and 1.20.15 are compatible)
- It is recommended to run the latest PATCH release
@@ -181,9 +181,9 @@
- All components support a difference of one¹ MINOR version
- This allows live upgrades (since we can mix e.g. 1.18 and 1.19)
- This allows live upgrades (since we can mix e.g. 1.20 and 1.21)
- It also means that going from 1.18 to 1.20 requires going through 1.19
- It also means that going from 1.20 to 1.22 requires going through 1.21
.footnote[¹Except kubelet, which can be up to two MINOR behind API server,
and kubectl, which can be one MINOR ahead or behind API server.]
@@ -254,7 +254,7 @@ and kubectl, which can be one MINOR ahead or behind API server.]
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
```
- Look for the `image:` line, and update it to e.g. `v1.19.0`
- Look for the `image:` line, and update it to e.g. `v1.24.0`
]
@@ -308,11 +308,11 @@ and kubectl, which can be one MINOR ahead or behind API server.]
]
Note 1: kubeadm thinks that our cluster is running 1.19.0.
Note 1: kubeadm thinks that our cluster is running 1.24.0.
<br/>It is confused by our manual upgrade of the API server!
Note 2: kubeadm itself is still version 1.18.20..
<br/>It doesn't know how to upgrade do 1.19.X.
Note 2: kubeadm itself is still version 1.20.15.
<br/>It doesn't know how to upgrade to 1.21.X.
---
@@ -335,28 +335,28 @@ Note 2: kubeadm itself is still version 1.18.20..
]
Problem: kubeadm doesn't know how to handle
upgrades from version 1.18.
upgrades from version 1.20.
This is because we installed version 1.22 (or even later).
This is because we installed version 1.24 (or even later).
We need to install kubeadm version 1.19.X.
We need to install kubeadm version 1.21.X.
---
## Downgrading kubeadm
- We need to go back to version 1.19.X.
- We need to go back to version 1.21.X.
.lab[
- View available versions for package `kubeadm`:
```bash
apt show kubeadm -a | grep ^Version | grep 1.19
apt show kubeadm -a | grep ^Version | grep 1.21
```
- Downgrade kubeadm:
```
sudo apt install kubeadm=1.19.8-00
sudo apt install kubeadm=1.21.0-00
```
- Check what kubeadm tells us:
@@ -366,7 +366,7 @@ We need to install kubeadm version 1.19.X.
]
kubeadm should now agree to upgrade to 1.19.8.
kubeadm should now agree to upgrade to 1.21.X.
---
@@ -464,9 +464,9 @@ kubeadm should now agree to upgrade to 1.19.8.
```bash
for N in 1 2 3; do
ssh oldversion$N "
sudo apt install kubeadm=1.19.8-00 &&
sudo apt install kubeadm=1.21.14-00 &&
sudo kubeadm upgrade node &&
sudo apt install kubelet=1.19.8-00"
sudo apt install kubelet=1.21.14-00"
done
```
]
@@ -475,7 +475,7 @@ kubeadm should now agree to upgrade to 1.19.8.
## Checking what we've done
- All our nodes should now be updated to version 1.19.8
- All our nodes should now be updated to version 1.21.14
.lab[
@@ -492,7 +492,7 @@ class: extra-details
## Skipping versions
- This example worked because we went from 1.18 to 1.19
- This example worked because we went from 1.20 to 1.21
- If you are upgrading from e.g. 1.16, you will have to go through 1.17 first

View File

@@ -24,11 +24,11 @@
- Interface parameters (MTU, sysctls) could be tweaked by the `tuning` plugin
The reference plugins are available [here].
The reference plugins are available [here][cni-reference-plugins].
Look in each plugin's directory for its documentation.
[here]: https://github.com/containernetworking/plugins/tree/master/plugins
[cni-reference-plugins]: https://github.com/containernetworking/plugins/tree/master/plugins
---
@@ -404,17 +404,17 @@ class: extra-details
- Create a Deployment running a web server:
```bash
kubectl create deployment web --image=jpetazzo/httpenv
kubectl create deployment blue --image=jpetazzo/color
```
- Scale it so that it spans multiple nodes:
```bash
kubectl scale deployment web --replicas=5
kubectl scale deployment blue --replicas=5
```
- Expose it with a Service:
```bash
kubectl expose deployment web --port=8888
kubectl expose deployment blue --port=8888
```
]

View File

@@ -79,6 +79,20 @@
(blue/green deployment, canary deployment)
--
.footnote[
On the next page: canary cage with an oxygen bottle, designed to keep the canary alive.
<br/>
(See https://post.lurk.org/@zilog/109632335293371919 for details.)
]
---
class: pic
![Canary cage](images/canary-cage.jpg)
---
## More things that Kubernetes can do for us
@@ -287,7 +301,9 @@ No!
--
- By default, Kubernetes uses the Docker Engine to run containers
- The Docker Engine used to be the default option to run containers with Kubernetes
- Support for Docker (specifically: dockershim) was removed in Kubernetes 1.24
- We can leverage other pluggable runtimes through the *Container Runtime Interface*
@@ -329,32 +345,26 @@ Yes!
- We can do these things without Docker
<br/>
(and get diagnosed with NIH¹ syndrome)
(but with some languages/frameworks, it might be much harder)
- Docker is still the most stable container engine today
<br/>
(but other options are maturing very quickly)
.footnote[¹[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)]
---
class: extra-details
## Do we need to run Docker at all?
- On our Kubernetes clusters:
*Not anymore*
- On our development environments, CI pipelines ... :
*Yes, almost certainly*
- On our production servers:
*Yes (today)*
*Probably not (in the future)*
.footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)]
---
## Interacting with Kubernetes

Some files were not shown because too many files have changed in this diff.