Compare commits


81 Commits

Author SHA1 Message Date
Jérôme Petazzoni
935d2c68b2 ♻️ Prepare content for DerivCo May sessions 2022-05-23 09:13:20 +02:00
Jérôme Petazzoni
cc6c0d5db8 🐞 Minor bug fixes 2022-05-12 19:37:05 +02:00
Jérôme Petazzoni
9ed00c5da1 Update DOKS version 2022-05-07 11:36:01 +02:00
Jérôme Petazzoni
b4b67536e9 Add retry logic for Linode provisioning
It looks like Linode now enforces something like 10 requests / 10 seconds.
We need to add some retry logic when provisioning more than 10 VMs.
2022-05-03 11:33:12 +02:00
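A generic retry wrapper along these lines can absorb that kind of rate limit. This is a sketch, not the repo's actual implementation; the `retry` function name and its arguments are invented for illustration:

```shell
#!/bin/sh
# retry MAX DELAY CMD...: run CMD until it succeeds, at most MAX times,
# sleeping DELAY seconds between attempts.
retry() {
  max=$1; shift
  delay=$1; shift
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "retry: giving up after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep "$delay"
  done
}
```

Wrapping each provisioning call (e.g. `retry 5 10 <provisioning command>`) would then re-issue a failed request every 10 seconds, up to 5 times, which is enough to ride out a 10-requests-per-10-seconds limit.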
Jérôme Petazzoni
52ce402803 ♻️ Switch to official FRR images; disable NHT
We're now using an official image for FRR.
Also, by default, BGPD will accept routes only if their
next-hop is reachable. This relies on a mechanism called
NHT (Next Hop Tracking). However, when we receive routes
from Kubernetes clusters, the peers usually advertise
addresses that we are not directly connected to. This
causes these addresses to be filtered out (unless the
route reflector is running on the same VPC or Layer 2
network as the Kubernetes nodes). To accept these routes
anyway, we basically disable NHT, by considering that
nodes are reachable if we can reach them through our
default route.
2022-04-12 22:17:27 +02:00
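The NHT workaround described above amounts to a single directive in the BGP daemon's configuration. A minimal sketch of the relevant FRR config (everything else assumed to be defaults):

```
hostname frr
! Accept BGP routes even when the next hop is not directly connected,
! as long as it is resolvable via the default route
! (this effectively disables strict Next Hop Tracking).
ip nht resolve-via-default
log stdout
```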
Jérôme Petazzoni
7076152bb9 ♻️ Update sealed-secrets version and install instructions 2022-04-12 20:46:01 +02:00
Jérôme Petazzoni
39eebe320f Add CA injector content 2022-04-12 18:24:41 +02:00
Jérôme Petazzoni
97c563e76a ♻️ Don't use ngrok for Tilt
ngrok now requires an account to serve HTML content.
We won't use ngrok anymore for the Tilt UI
(and we'll suggest using a NodePort service instead,
when running in a Pod).
2022-04-11 21:08:54 +02:00
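The NodePort alternative mentioned above could look like the sketch below, assuming Tilt listens on its default port 10350; the service name, label selector, and node port are invented for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tilt-ui            # hypothetical name
spec:
  type: NodePort
  selector:
    app: tilt              # assumes the Tilt pod carries this label
  ports:
    - port: 10350
      targetPort: 10350
      nodePort: 30350      # arbitrary port in the default NodePort range (30000-32767)
```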
Jérôme Petazzoni
4a7b04dd01 ♻️ Add helm install command for metrics-server
Don't use it yet, but have it handy in case we want to switch.
2022-04-08 21:06:19 +02:00
Jérôme Petazzoni
8b3f7a9aba ♻️ Switch to SIG metrics-server chart 2022-04-08 20:36:07 +02:00
Jérôme Petazzoni
f9bb780f80 Bump up DOKS version 2022-04-08 20:35:53 +02:00
Jérôme Petazzoni
94545f800a 📃 Add TOC item to nsplease 2022-04-06 22:01:22 +02:00
Jérôme Petazzoni
5896ad577b Bump up k8s version on Linode 2022-03-31 10:59:09 +02:00
Denis Laxalde
030f3728f7 Update link to "Efficient Node Heartbeats" KEP
Previous file was moved in commit 7eef794bb5
2022-03-28 16:52:32 +02:00
Jérôme Petazzoni
913c934dbb 🔗 Add shortlinks to March 2022 training 2022-03-22 08:25:24 +01:00
Jérôme Petazzoni
b6b718635a ♻️ Switch diagram around 2022-03-21 08:20:02 +01:00
Jérôme Petazzoni
a830d51e5e Add a couple more Kyverno policies with fancy preconditions 2022-03-16 19:14:45 +01:00
Cyril Mizzi
7af1a4cfbc fix(slides.k8s.hpa-v2): update prometheus-adapter mapping rule 2022-03-16 17:50:57 +01:00
Cyril Mizzi
4f6b4b0306 fix(slides.k8s.hpa-v2): update namespace for prometheus-adapter 2022-03-16 17:50:57 +01:00
Jérôme Petazzoni
888aad583e ♻️ Update YAML manifests for dashboard
Include namespace (to work around 'helm template' bug).
Enable metrics scraper (because metrics are fun).
2022-03-08 18:14:42 +01:00
Jérôme Petazzoni
f7c1e87a89 🐛 Add missing content-type header in livedns API call 2022-03-08 16:42:58 +01:00
Jérôme Petazzoni
2e4e6bc787 Merge pull request #608 from nchauvat/patch-1
fix typo in definition of access modes
2022-02-10 16:14:39 +01:00
nchauvat
1b704316c8 fix typo in definition of access modes
IIRC https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes it is the PVClaim that lists the access modes it requires and the PV that lists the access modes it supports.
2022-02-10 12:12:36 +01:00
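As the commit message notes, the PersistentVolumeClaim lists the access modes it requires, while the PersistentVolume lists the modes it supports. A minimal hypothetical pair (names, storage backend, and sizes invented for illustration):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv         # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:             # modes this volume SUPPORTS
    - ReadWriteOnce
    - ReadOnlyMany
  hostPath:
    path: /tmp/example-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc        # hypothetical name
spec:
  accessModes:             # modes this claim REQUIRES
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```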
Jérôme Petazzoni
2e6e5425d0 Add platform check Dockerfile 2022-02-04 08:30:54 +01:00
Jérôme Petazzoni
5e2aac701e ♻️ Add cgroup v2 content 2022-02-03 18:58:21 +01:00
Jérôme Petazzoni
bb19d525e9 Merge Buildkit content 2022-02-03 17:57:35 +01:00
Jérôme Petazzoni
8ca6c5ba40 🏭️ Support multiple Terraform configurations
Historically, we only supported one Terraform configuration,
through the "openstack-tf" infraclass. With these changes,
we support multiple Terraform configurations, including
(at this point) "openstack" and "oci" (Oracle Cloud).

Existing infra files that use INFRACLASS=openstack-tf
should be changed as follows:

INFRACLASS=terraform
TERRAFORM=openstack
2022-02-03 07:59:56 +01:00
Jérôme Petazzoni
e1290c5b84 Add some info about profiles and .env 2022-01-31 19:48:12 +01:00
Jérôme Petazzoni
2c2574fece ♻️ Improve PriorityClass slides 2022-01-27 13:14:26 +01:00
Jérôme Petazzoni
5c96b40bbd 🐞 Fix kustomize completion 2022-01-27 13:14:16 +01:00
Jérôme Petazzoni
5aa20362eb ♻️ Update healthcheck content 2022-01-27 11:23:43 +01:00
Jérôme Petazzoni
a01fecf679 ♻️ Bump Consul version and move SA at the beginning of the YAML
It's a tiny bit easier to run through the YAML when it starts with
the ServiceAccount, I find.
2022-01-27 10:40:37 +01:00
Jérôme Petazzoni
b75d6562b5 🏭️ Rewrite kubectl-run chapter 2022-01-27 10:36:52 +01:00
Jérôme Petazzoni
7f5944b157 📍 Correctly pin+hold package versions with APT preferences 2022-01-27 08:59:12 +01:00
Jérôme Petazzoni
21287d16bf ♻️ Switch to containerd 2022-01-26 21:05:01 +01:00
Jérôme Petazzoni
9434b40b58 🐞 Fix a couple of search-and-replace mistakes 2022-01-23 10:39:54 +01:00
Jérôme Petazzoni
b59f5dd00d Merge pull request #606 from sebgl/fix-pvc-link
Update link to the PersistentVolumeClaimBinder design doc
2022-01-23 09:08:11 +01:00
sebgl
d8ad0021cc Update link to the PersistentVolumeClaimBinder design doc
It looks like that doc has been moved elsewhere. This commit updates the link to (what I think is) the intended page.
2022-01-21 10:34:35 +01:00
Jérôme Petazzoni
8dbd6d54a0 🐞 Add warning about initial_node_count 2022-01-20 11:49:28 +01:00
Jérôme Petazzoni
b454749e92 🐞 Add info about Terraform provider version pinning 2022-01-20 09:29:11 +01:00
Jérôme Petazzoni
9a71d0e260 📃 Add gcloud auth application-default login 2022-01-19 11:24:00 +01:00
Jérôme Petazzoni
25e844fdf4 Bump up version numbers in upgrade labs 2022-01-18 12:16:46 +01:00
Jérôme Petazzoni
c40f4f5f2a 📝 Update ingress chapter
Replace cheese images with jpetazz/color.
Add details on GKE Ingress and clarify cost for cloud ingress.
Mention that Traefik canary v1 is obsolete.
2022-01-18 12:09:33 +01:00
Jérôme Petazzoni
cfa89b3ab5 📃 Update AJ's affiliation 2022-01-17 19:18:09 +01:00
Jérôme Petazzoni
a10cf8d9c3 Add GKE networking; kubernetes resource creation in TF 2022-01-17 18:18:49 +01:00
Jérôme Petazzoni
749e5da20b Add command to remove a DNS record 2022-01-17 11:08:11 +01:00
Jérôme Petazzoni
69c7ac2371 Add Terraform workshop with GKE and node pools 2022-01-17 00:00:49 +01:00
Jérôme Petazzoni
de0ad83686 Add quick intro to demo apps 2022-01-16 16:01:58 +01:00
Jérôme Petazzoni
f630f08713 🔧 Uniformize labels in rainbow demo app 2022-01-16 16:01:03 +01:00
Jérôme Petazzoni
920a075afe 🔧 Pin old cluster to an even older version 2022-01-15 18:36:16 +01:00
Jérôme Petazzoni
a47c51618b 🔧 Improve GKE config to spread across multiple locations
GCP quotas are fairly limited (on my account, I can only
use 8 public IP addresses per zone, which means that I cannot
deploy many public clusters in a single zone). I tried to
use private clusters, but that causes other problems.
This refactoring makes it possible to spread clusters
across multiple zones. Since I have access to 20+ zones
in Europe and 20+ zones in the US, this lets me create a
lot of public clusters and simplifies the module quite a bit.
2022-01-14 12:30:55 +01:00
Jérôme Petazzoni
f3156513b8 🏭️ Add wrapper script for 'prepare-tf'
This should make it easy to start a bunch of clusters
(using the new Terraform provisioning method) on various
providers.
2022-01-11 10:11:42 +01:00
Jérôme Petazzoni
96de30ca78 🐞 Minor typo fix in help line 2022-01-10 21:05:34 +01:00
Jérôme Petazzoni
8de9e6e868 🏭️ Refactor prepare-tf
- fix tags so that they don't contain '='
- install metrics-server only if necessary
- set a maximum size to GKE node pool
- change tags to be shorter
2022-01-09 20:51:58 +01:00
Jérôme Petazzoni
7eb90b9d6f Merge pull request #555 from barpilot/gitops
update gitops slides
2022-01-09 17:31:22 +01:00
Jérôme Petazzoni
931455ba31 📃 Add GCP to docs and tweak them a bit 2022-01-07 15:40:56 +01:00
Jérôme Petazzoni
f02cef0351 Add content about externalTrafficPolicy
Describe impact of extra hops when using an ingress controller.
Also discuss how to preserve the HTTP client IP address.
2022-01-06 20:44:36 +01:00
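Preserving the client IP as discussed here is typically done with `externalTrafficPolicy: Local`, which avoids the extra hop (and the source NAT that comes with it) at the cost of only sending traffic to nodes that actually run a ready endpoint. A hedged sketch, with the service name and selector invented:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller   # hypothetical name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP; no inter-node hop
  selector:
    app: ingress-controller
  ports:
    - port: 80
      targetPort: 80
```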
Jérôme Petazzoni
9054fd58ea 🙏🏻 Add acknowledgements+thanks to @soulshake 2022-01-06 13:32:04 +01:00
Jérôme Petazzoni
24aa1ae9f7 More tweaks on the cluster autoscaler content 2022-01-06 12:52:28 +01:00
Jérôme Petazzoni
c1c4e48457 Tweaks on the cluster autoscaler content 2022-01-06 12:05:12 +01:00
Jérôme Petazzoni
0614087b2f Update CSR API to v1 in Terraform deployment configs 2022-01-06 11:54:43 +01:00
Jérôme Petazzoni
3745d0e12a Add cluster autoscaler section 2022-01-06 11:49:36 +01:00
Jérôme Petazzoni
90885e49cf Add Terraform configurations for GKE 2022-01-04 18:51:35 +01:00
Jérôme Petazzoni
07d02e345e 🛠️ Add script to find unmerged changes 2022-01-04 12:50:20 +01:00
Jérôme Petazzoni
f2311545cd 🔙 Backport EKS section from flatiron training 2022-01-04 11:30:46 +01:00
Jérôme Petazzoni
e902962f3a 🩺 Update healthcheck exercise 2022-01-03 19:36:16 +01:00
Jérôme Petazzoni
ee7547999c ♻️ Update pssh install instructions 2022-01-03 18:06:11 +01:00
Jérôme Petazzoni
34fd6c0393 🔒️ Move slides links to HTTPS 2022-01-03 13:20:55 +01:00
Jérôme Petazzoni
e67fca695e 🛠️ Add 'list' function to Netlify helper script 2022-01-03 13:18:31 +01:00
Jérôme Petazzoni
b56e54eaec ♻️ s/exercise/lab/
Now that we have a good number of longer exercises, it makes
sense to rename the shorter demos/exercises to 'labs' to avoid
confusion between the two.
2021-12-29 17:18:07 +01:00
Jérôme Petazzoni
2669eae49b Merge pull request #599 from soulshake/patch-1
Fix typo "an URL"
2021-12-15 16:21:51 +01:00
AJ Bowen
c26e51d69c Fix typo "an URL" 2021-12-15 05:44:09 -06:00
Jérôme Petazzoni
c9518631e5 🧹 Delete OCI compartments 2021-12-14 17:35:36 +01:00
Jérôme Petazzoni
164651c461 Add new Kyverno exercise 2021-12-14 16:39:06 +01:00
Jérôme Petazzoni
1d8062f1dc 📃 Improve README to show how to set token variables 2021-12-14 15:46:00 +01:00
Guilhem Lettron
3d724d87db gitops: update create branch method 2020-04-29 22:09:52 +02:00
Guilhem Lettron
8c04154430 gitops: update Flux log for identity.pub 2020-04-29 22:07:02 +02:00
Guilhem Lettron
66b7d118ba gitops: add Flux helm install method 2020-04-29 22:04:41 +02:00
Guilhem Lettron
a772fff88e gitops: flux use kustomize 2020-04-29 21:57:54 +02:00
Guilhem Lettron
57af933c2d gitops: add missing cd 2020-04-29 21:55:56 +02:00
Guilhem Lettron
4888ec1f5b gitops: add bash highlight 2020-04-29 21:54:27 +02:00
237 changed files with 7066 additions and 4159 deletions

View File

@@ -1,2 +1,3 @@
 hostname frr
+ip nht resolve-via-default
 log stdout

View File

@@ -2,30 +2,36 @@ version: "3"
 services:
   bgpd:
-    image: ajones17/frr:662
+    image: frrouting/frr:v8.2.2
     volumes:
     - ./conf:/etc/frr
     - ./run:/var/run/frr
     network_mode: host
-    entrypoint: /usr/lib/frr/bgpd -f /etc/frr/bgpd.conf --log=stdout --log-level=debug --no_kernel
+    cap_add:
+    - NET_ADMIN
+    - SYS_ADMIN
+    entrypoint: /usr/lib/frr/bgpd -f /etc/frr/bgpd.conf --log=stdout --log-level=debug --no_kernel --no_zebra
     restart: always
   zebra:
-    image: ajones17/frr:662
+    image: frrouting/frr:v8.2.2
     volumes:
     - ./conf:/etc/frr
     - ./run:/var/run/frr
     network_mode: host
+    cap_add:
+    - NET_ADMIN
+    - SYS_ADMIN
     entrypoint: /usr/lib/frr/zebra -f /etc/frr/zebra.conf --log=stdout --log-level=debug
     restart: always
   vtysh:
-    image: ajones17/frr:662
+    image: frrouting/frr:v8.2.2
     volumes:
     - ./conf:/etc/frr
     - ./run:/var/run/frr
     network_mode: host
-    entrypoint: vtysh -c "show ip bgp"
+    entrypoint: vtysh
   chmod:
     image: alpine

View File

@@ -48,20 +48,25 @@ k8s_yaml('../k8s/dockercoins.yaml')
 # The following line lets Tilt run with the default kubeadm cluster-admin context.
 allow_k8s_contexts('kubernetes-admin@kubernetes')
-# This will run an ngrok tunnel to expose Tilt to the outside world.
-# This is intended to be used when Tilt runs on a remote machine.
-local_resource(name='ngrok:tunnel', serve_cmd='ngrok http 10350')
+# Note: the whole section below (to set up ngrok tunnels) is disabled,
+# because ngrok now requires to set up an account to serve HTML
+# content. So we can still use ngrok for e.g. webhooks and "raw" APIs,
+# but not to serve web pages like the Tilt UI.
-# This will wait until the ngrok tunnel is up, and show its URL to the user.
-# We send the output to /dev/tty so that it doesn't get intercepted by
-# Tilt, and gets displayed to the user's terminal instead.
-# Note: this assumes that the ngrok instance will be running on port 4040.
-# If you have other ngrok instances running on the machine, this might not work.
-local_resource(name='ngrok:showurl', cmd='''
-while sleep 1; do
-  TUNNELS=$(curl -fsSL http://localhost:4040/api/tunnels | jq -r .tunnels[].public_url)
-  [ "$TUNNELS" ] && break
-done
-printf "\nYou should be able to connect to the Tilt UI with the following URL(s): %s\n" "$TUNNELS" >/dev/tty
-'''
-)
+# # This will run an ngrok tunnel to expose Tilt to the outside world.
+# # This is intended to be used when Tilt runs on a remote machine.
+# local_resource(name='ngrok:tunnel', serve_cmd='ngrok http 10350')
+# # This will wait until the ngrok tunnel is up, and show its URL to the user.
+# # We send the output to /dev/tty so that it doesn't get intercepted by
+# # Tilt, and gets displayed to the user's terminal instead.
+# # Note: this assumes that the ngrok instance will be running on port 4040.
+# # If you have other ngrok instances running on the machine, this might not work.
+# local_resource(name='ngrok:showurl', cmd='''
+# while sleep 1; do
+#   TUNNELS=$(curl -fsSL http://localhost:4040/api/tunnels | jq -r .tunnels[].public_url)
+#   [ "$TUNNELS" ] && break
+# done
+# printf "\nYou should be able to connect to the Tilt UI with the following URL(s): %s\n" "$TUNNELS" >/dev/tty
+# '''
+# )

View File

@@ -3,6 +3,12 @@
 # - no actual persistence
 # - scaling down to 1 will break the cluster
 # - pods may be colocated
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: consul
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
@@ -28,11 +34,6 @@ subjects:
   name: consul
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: consul
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: consul
@@ -61,7 +62,7 @@ spec:
   serviceAccountName: consul
   containers:
   - name: consul
-    image: "consul:1.8"
+    image: "consul:1.11"
     env:
     - name: NAMESPACE
      valueFrom:

View File

@@ -2,6 +2,12 @@
 # There is still no actual persistence, but:
 # - podAntiaffinity prevents pod colocation
 # - clusters works when scaling down to 1 (thanks to lifecycle hook)
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: consul
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
@@ -27,11 +33,6 @@ subjects:
   name: consul
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: consul
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: consul
@@ -68,7 +69,7 @@ spec:
   terminationGracePeriodSeconds: 10
   containers:
   - name: consul
-    image: "consul:1.8"
+    image: "consul:1.11"
     env:
     - name: NAMESPACE
      valueFrom:

View File

@@ -1,5 +1,11 @@
 # Even better Consul cluster.
 # That one uses a volumeClaimTemplate to achieve true persistence.
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: consul
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
@@ -25,11 +31,6 @@ subjects:
   name: consul
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: consul
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: consul
@@ -75,7 +76,7 @@ spec:
   terminationGracePeriodSeconds: 10
   containers:
   - name: consul
-    image: "consul:1.8"
+    image: "consul:1.11"
     volumeMounts:
     - name: data
      mountPath: /consul/data

View File

@@ -9,377 +9,273 @@ metadata:
spec: {}
status: {}
---
---
# Source: kubernetes-dashboard/templates/serviceaccount.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: ServiceAccount
metadata:
annotations: null
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/secret.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# kubernetes-dashboard-certs
apiVersion: v1
kind: Secret
metadata:
annotations: null
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
# Source: kubernetes-dashboard/templates/secret.yaml
# kubernetes-dashboard-csrf
apiVersion: v1
kind: Secret
metadata:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
---
# Source: kubernetes-dashboard/templates/secret.yaml
# kubernetes-dashboard-key-holder
apiVersion: v1
kind: Secret
metadata:
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
# Source: kubernetes-dashboard/templates/configmap.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
data: null
kind: ConfigMap
metadata:
annotations: null
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-settings
data:
namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/clusterrole-metrics.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: "kubernetes-dashboard-metrics"
annotations: null
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-metrics
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
# Source: kubernetes-dashboard/templates/clusterrolebinding-metrics.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: "kubernetes-dashboard-metrics"
annotations: null
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard-metrics
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/role.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: kubernetes-dashboard
annotations: null
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
- apiGroups:
- ""
resourceNames:
- kubernetes-dashboard-key-holder
- kubernetes-dashboard-certs
- kubernetes-dashboard-csrf
resources:
- secrets
verbs:
- get
- update
- delete
- apiGroups:
- ""
resourceNames:
- kubernetes-dashboard-settings
resources:
- configmaps
verbs:
- get
- update
- apiGroups:
- ""
resourceNames:
- heapster
- dashboard-metrics-scraper
resources:
- services
verbs:
- proxy
- apiGroups:
- ""
resourceNames:
- heapster
- 'http:heapster:'
- 'https:heapster:'
- dashboard-metrics-scraper
- http:dashboard-metrics-scraper
resources:
- services/proxy
verbs:
- get
---
# Source: kubernetes-dashboard/templates/rolebinding.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard
annotations: null
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/service.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
annotations: null
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/component: kubernetes-dashboard
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
type: NodePort
ports:
- port: 443
targetPort: http
name: http
selector:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- name: http
port: 443
targetPort: http
selector:
app.kubernetes.io/component: kubernetes-dashboard
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/name: kubernetes-dashboard
type: NodePort
---
# Source: kubernetes-dashboard/templates/deployment.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: apps/v1
kind: Deployment
metadata:
name: kubernetes-dashboard
annotations: null
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/component: kubernetes-dashboard
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/name: kubernetes-dashboard
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/component: kubernetes-dashboard
template:
metadata:
annotations: null
labels:
app.kubernetes.io/name: kubernetes-dashboard
helm.sh/chart: kubernetes-dashboard-5.0.2
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/version: "2.3.1"
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: kubernetes-dashboard
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: kubernetes-dashboard
containers:
- name: kubernetes-dashboard
image: "kubernetesui/dashboard:v2.3.1"
- args:
- --namespace=kubernetes-dashboard
- --sidecar-host=http://127.0.0.1:8000
- --enable-skip-login
- --enable-insecure-login
image: kubernetesui/dashboard:v2.5.0
imagePullPolicy: IfNotPresent
args:
- --namespace=kubernetes-dashboard
- --metrics-provider=none
- --enable-skip-login
- --enable-insecure-login
ports:
- name: http
containerPort: 9090
protocol: TCP
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 9090
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 30
name: kubernetes-dashboard
ports:
- containerPort: 9090
name: http
protocol: TCP
resources:
limits:
cpu: 2
@@ -392,102 +288,42 @@ spec:
readOnlyRootFilesystem: true
runAsGroup: 2001
runAsUser: 1001
volumeMounts:
- mountPath: /certs
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- image: kubernetesui/metrics-scraper:v1.0.7
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /
port: 8000
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 30
name: dashboard-metrics-scraper
ports:
- containerPort: 8000
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsGroup: 2001
runAsUser: 1001
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: kubernetes-dashboard
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
---
# Source: kubernetes-dashboard/templates/clusterrole-readonly.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
# Source: kubernetes-dashboard/templates/clusterrolebinding-readonly.yaml
---
# Source: kubernetes-dashboard/templates/ingress.yaml
---
# Source: kubernetes-dashboard/templates/networkpolicy.yaml
---
# Source: kubernetes-dashboard/templates/pdb.yaml
---
# Source: kubernetes-dashboard/templates/psp.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding


spec: {}
status: {}
---
# Source: kubernetes-dashboard/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
annotations: null
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/secret.yaml
# kubernetes-dashboard-certs
apiVersion: v1
kind: Secret
metadata:
annotations: null
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
# Source: kubernetes-dashboard/templates/secret.yaml
# kubernetes-dashboard-csrf
apiVersion: v1
kind: Secret
metadata:
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
---
# Source: kubernetes-dashboard/templates/secret.yaml
# kubernetes-dashboard-key-holder
apiVersion: v1
kind: Secret
metadata:
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
# Source: kubernetes-dashboard/templates/configmap.yaml
apiVersion: v1
data: null
kind: ConfigMap
metadata:
annotations: null
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/clusterrole-metrics.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations: null
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
# Source: kubernetes-dashboard/templates/clusterrolebinding-metrics.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations: null
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard-metrics
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
annotations: null
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
- apiGroups:
- ""
resourceNames:
- kubernetes-dashboard-key-holder
- kubernetes-dashboard-certs
- kubernetes-dashboard-csrf
resources:
- secrets
verbs:
- get
- update
- delete
- apiGroups:
- ""
resourceNames:
- kubernetes-dashboard-settings
resources:
- configmaps
verbs:
- get
- update
- apiGroups:
- ""
resourceNames:
- heapster
- dashboard-metrics-scraper
resources:
- services
verbs:
- proxy
- apiGroups:
- ""
resourceNames:
- heapster
- 'http:heapster:'
- 'https:heapster:'
- dashboard-metrics-scraper
- http:dashboard-metrics-scraper
resources:
- services/proxy
verbs:
- get
---
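The Role above grants the Dashboard access only to specific named Secrets, ConfigMaps, and Service proxies. As an illustrative sketch (simplified semantics for this manifest, not the API server's actual authorizer code), RBAC evaluation is a permissive OR over rules: a request is allowed if any rule matches its verb, its resource, and, when `resourceNames` is set, the object name.

```python
# Illustrative RBAC rule matcher (simplified; not the real authorizer).
def allowed(rules, verb, resource, name):
    """Return True if any rule permits (verb, resource, name)."""
    for rule in rules:
        if verb not in rule.get("verbs", []):
            continue
        if resource not in rule.get("resources", []):
            continue
        # An empty/absent resourceNames list matches every object name.
        names = rule.get("resourceNames")
        if names and name not in names:
            continue
        return True
    return False

# Two of the rules from the Role above, as plain dicts.
rules = [
    {"resources": ["secrets"],
     "resourceNames": ["kubernetes-dashboard-key-holder",
                       "kubernetes-dashboard-certs",
                       "kubernetes-dashboard-csrf"],
     "verbs": ["get", "update", "delete"]},
    {"resources": ["configmaps"],
     "resourceNames": ["kubernetes-dashboard-settings"],
     "verbs": ["get", "update"]},
]

print(allowed(rules, "update", "configmaps", "kubernetes-dashboard-settings"))  # True
print(allowed(rules, "delete", "configmaps", "kubernetes-dashboard-settings"))  # False
```

Because rules only add permissions, the Dashboard cannot touch any Secret outside the three listed names, even in its own namespace.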
# Source: kubernetes-dashboard/templates/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
annotations: null
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations: null
  labels:
    app.kubernetes.io/component: kubernetes-dashboard
    app.kubernetes.io/instance: kubernetes-dashboard
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubernetes-dashboard
    app.kubernetes.io/version: 2.5.0
    helm.sh/chart: kubernetes-dashboard-5.2.0
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
  - name: https
    port: 443
    targetPort: https
  selector:
    app.kubernetes.io/component: kubernetes-dashboard
    app.kubernetes.io/instance: kubernetes-dashboard
    app.kubernetes.io/name: kubernetes-dashboard
  type: ClusterIP
---
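The Service above routes traffic to any pod whose labels are a superset of its `selector` (the Deployment's pod template carries extra labels such as `app.kubernetes.io/version`, which do not affect matching). A minimal sketch of that subset-matching rule, using the label values from this manifest:

```python
# Illustrative sketch of equality-based label selection: a Service selects
# pods whose labels contain every key/value pair of the selector.
def matches(selector, pod_labels):
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {
    "app.kubernetes.io/component": "kubernetes-dashboard",
    "app.kubernetes.io/instance": "kubernetes-dashboard",
    "app.kubernetes.io/name": "kubernetes-dashboard",
}

# Pod template labels: the selector's pairs plus extra, non-selecting ones.
pod_labels = {**selector,
              "app.kubernetes.io/managed-by": "Helm",
              "app.kubernetes.io/version": "2.5.0"}

print(matches(selector, pod_labels))        # True
print(matches(selector, {"app": "other"}))  # False
```

This is why the selector must stay in sync with the Deployment's template labels, while version and chart labels can change freely between upgrades.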
# Source: kubernetes-dashboard/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: null
labels:
app.kubernetes.io/component: kubernetes-dashboard
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/component: kubernetes-dashboard
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/name: kubernetes-dashboard
strategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations: null
labels:
app.kubernetes.io/component: kubernetes-dashboard
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
spec:
containers:
      - args:
        - --namespace=kubernetes-dashboard
        - --auto-generate-certificates
        - --metrics-provider=none
        image: kubernetesui/dashboard:v2.5.0
        imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /
port: 8443
scheme: HTTPS
initialDelaySeconds: 30
timeoutSeconds: 30
name: kubernetes-dashboard
ports:
- containerPort: 8443
name: https
protocol: TCP
resources:
limits:
cpu: 2
readOnlyRootFilesystem: true
runAsGroup: 2001
runAsUser: 1001
volumeMounts:
- mountPath: /certs
name: kubernetes-dashboard-certs
- mountPath: /tmp
name: tmp-volume
- image: kubernetesui/metrics-scraper:v1.0.7
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /
port: 8000
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 30
name: dashboard-metrics-scraper
ports:
- containerPort: 8000
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsGroup: 2001
runAsUser: 1001
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccountName: kubernetes-dashboard
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
---
# Source: kubernetes-dashboard/templates/clusterrole-readonly.yaml
---
# Source: kubernetes-dashboard/templates/clusterrolebinding-readonly.yaml
---
# Source: kubernetes-dashboard/templates/ingress.yaml
---
# Source: kubernetes-dashboard/templates/networkpolicy.yaml
---
# Source: kubernetes-dashboard/templates/pdb.yaml
---
# Source: kubernetes-dashboard/templates/psp.yaml


spec: {}
status: {}
---
# Source: kubernetes-dashboard/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
annotations: null
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/secret.yaml
# kubernetes-dashboard-certs
apiVersion: v1
kind: Secret
metadata:
annotations: null
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
# Source: kubernetes-dashboard/templates/secret.yaml
# kubernetes-dashboard-csrf
apiVersion: v1
kind: Secret
metadata:
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
---
# Source: kubernetes-dashboard/templates/secret.yaml
# kubernetes-dashboard-key-holder
apiVersion: v1
kind: Secret
metadata:
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
# Source: kubernetes-dashboard/templates/configmap.yaml
apiVersion: v1
data: null
kind: ConfigMap
metadata:
annotations: null
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/clusterrole-metrics.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations: null
labels:
app.kubernetes.io/instance: kubernetes-dashboard
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: kubernetes-dashboard
app.kubernetes.io/version: 2.5.0
helm.sh/chart: kubernetes-dashboard-5.2.0
name: kubernetes-dashboard-metrics
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
# Source: kubernetes-dashboard/templates/clusterrolebinding-metrics.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations: null
  labels:
    app.kubernetes.io/instance: kubernetes-dashboard
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubernetes-dashboard
    app.kubernetes.io/version: 2.5.0
    helm.sh/chart: kubernetes-dashboard-5.2.0
  name: kubernetes-dashboard-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard-metrics
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/role.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations: null
  labels:
    app.kubernetes.io/instance: kubernetes-dashboard
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubernetes-dashboard
    app.kubernetes.io/version: 2.5.0
    helm.sh/chart: kubernetes-dashboard-5.2.0
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
# Source: kubernetes-dashboard/templates/rolebinding.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations: null
  labels:
    app.kubernetes.io/instance: kubernetes-dashboard
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubernetes-dashboard
    app.kubernetes.io/version: 2.5.0
    helm.sh/chart: kubernetes-dashboard-5.2.0
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
# Source: kubernetes-dashboard/templates/service.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Service
metadata:
  annotations: null
  labels:
    app.kubernetes.io/component: kubernetes-dashboard
    app.kubernetes.io/instance: kubernetes-dashboard
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubernetes-dashboard
    app.kubernetes.io/version: 2.5.0
    helm.sh/chart: kubernetes-dashboard-5.2.0
    kubernetes.io/cluster-service: "true"
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - name: https
      port: 443
      targetPort: https
  selector:
    app.kubernetes.io/component: kubernetes-dashboard
    app.kubernetes.io/instance: kubernetes-dashboard
    app.kubernetes.io/name: kubernetes-dashboard
  type: NodePort
---
# Source: kubernetes-dashboard/templates/deployment.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: null
  labels:
    app.kubernetes.io/component: kubernetes-dashboard
    app.kubernetes.io/instance: kubernetes-dashboard
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kubernetes-dashboard
    app.kubernetes.io/version: 2.5.0
    helm.sh/chart: kubernetes-dashboard-5.2.0
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: kubernetes-dashboard
      app.kubernetes.io/instance: kubernetes-dashboard
      app.kubernetes.io/name: kubernetes-dashboard
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations: null
      labels:
        app.kubernetes.io/component: kubernetes-dashboard
        app.kubernetes.io/instance: kubernetes-dashboard
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: kubernetes-dashboard
        app.kubernetes.io/version: 2.5.0
        helm.sh/chart: kubernetes-dashboard-5.2.0
    spec:
      containers:
        - args:
            - --namespace=kubernetes-dashboard
            - --auto-generate-certificates
            - --sidecar-host=http://127.0.0.1:8000
          image: kubernetesui/dashboard:v2.5.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /
              port: 8443
              scheme: HTTPS
            initialDelaySeconds: 30
            timeoutSeconds: 30
          name: kubernetes-dashboard
          ports:
            - containerPort: 8443
              name: https
              protocol: TCP
          resources:
            limits:
              cpu: 2
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsGroup: 2001
            runAsUser: 1001
          volumeMounts:
            - mountPath: /certs
              name: kubernetes-dashboard-certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
        - image: kubernetesui/metrics-scraper:v1.0.7
          imagePullPolicy: IfNotPresent
          livenessProbe:
            httpGet:
              path: /
              port: 8000
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 30
          name: dashboard-metrics-scraper
          ports:
            - containerPort: 8000
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsGroup: 2001
            runAsUser: 1001
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: kubernetes-dashboard
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - emptyDir: {}
          name: tmp-volume
---
# Source: kubernetes-dashboard/templates/clusterrole-readonly.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
# Source: kubernetes-dashboard/templates/clusterrolebinding-readonly.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
# Source: kubernetes-dashboard/templates/ingress.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
# Source: kubernetes-dashboard/templates/networkpolicy.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
# Source: kubernetes-dashboard/templates/pdb.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
# Source: kubernetes-dashboard/templates/psp.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding


@@ -0,0 +1,28 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: ingress-domain-name
spec:
rules:
- name: create-ingress
match:
resources:
kinds:
- Service
generate:
kind: Ingress
name: "{{request.object.metadata.name}}"
namespace: "{{request.object.metadata.namespace}}"
data:
spec:
rules:
- host: "{{request.object.metadata.name}}.{{request.object.metadata.namespace}}.A.B.C.D.nip.io"
http:
paths:
- backend:
service:
name: "{{request.object.metadata.name}}"
port:
number: 80
path: /
pathType: Prefix
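
For illustration (all names here are hypothetical): creating a Service called `web` in namespace `demo` would make this policy generate an Ingress routing `web.demo.A.B.C.D.nip.io` to that Service, where `A.B.C.D` stands for the address you substitute into the policy. Since this first version has no preconditions, any Service triggers the rule:

```yaml
# Hypothetical Service used to trigger the policy above.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```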


@@ -0,0 +1,32 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: ingress-domain-name
spec:
rules:
- name: create-ingress
match:
resources:
kinds:
- Service
preconditions:
- key: "{{request.object.spec.ports[0].name}}"
operator: Equals
value: http
generate:
kind: Ingress
name: "{{request.object.metadata.name}}"
namespace: "{{request.object.metadata.namespace}}"
data:
spec:
rules:
- host: "{{request.object.metadata.name}}.{{request.object.metadata.namespace}}.A.B.C.D.nip.io"
http:
paths:
- backend:
service:
name: "{{request.object.metadata.name}}"
port:
name: http
path: /
pathType: Prefix


@@ -0,0 +1,32 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: ingress-domain-name
spec:
rules:
- name: create-ingress
match:
resources:
kinds:
- Service
preconditions:
- key: http
operator: In
value: "{{request.object.spec.ports[*].name}}"
generate:
kind: Ingress
name: "{{request.object.metadata.name}}"
namespace: "{{request.object.metadata.namespace}}"
data:
spec:
rules:
- host: "{{request.object.metadata.name}}.{{request.object.metadata.namespace}}.A.B.C.D.nip.io"
http:
paths:
- backend:
service:
name: "{{request.object.metadata.name}}"
port:
name: http
path: /
pathType: Prefix


@@ -0,0 +1,34 @@
# Note: this policy uses the operator "AnyIn", which was introduced in Kyverno 1.6.
# (This policy won't work with Kyverno 1.5!)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: ingress-domain-name
spec:
rules:
- name: create-ingress
match:
resources:
kinds:
- Service
preconditions:
- key: "{{request.object.spec.ports[*].port}}"
operator: AnyIn
value: [ 80 ]
generate:
kind: Ingress
name: "{{request.object.metadata.name}}"
namespace: "{{request.object.metadata.namespace}}"
data:
spec:
rules:
- host: "{{request.object.metadata.name}}.{{request.object.metadata.namespace}}.A.B.C.D.nip.io"
http:
paths:
- backend:
service:
name: "{{request.object.metadata.name}}"
port:
name: http
path: /
pathType: Prefix


@@ -0,0 +1,37 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: ingress-domain-name
spec:
rules:
- name: create-ingress
context:
- name: configmap
configMap:
name: ingress-domain-name
namespace: "{{request.object.metadata.namespace}}"
match:
resources:
kinds:
- Service
preconditions:
- key: "{{request.object.spec.ports[0].name}}"
operator: Equals
value: http
generate:
kind: Ingress
name: "{{request.object.metadata.name}}"
namespace: "{{request.object.metadata.namespace}}"
data:
spec:
rules:
- host: "{{request.object.metadata.name}}.{{request.object.metadata.namespace}}.{{configmap.data.domain}}"
http:
paths:
- backend:
service:
name: "{{request.object.metadata.name}}"
port:
name: http
path: /
pathType: Prefix
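
This last variant reads the domain from a ConfigMap instead of hard-coding it. A sketch of the ConfigMap it expects — note that the policy looks it up in the namespace of the Service being admitted, so one such ConfigMap is needed per namespace (the namespace and `domain` value shown are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-domain-name
  namespace: demo          # hypothetical namespace
data:
  domain: A.B.C.D.nip.io   # placeholder, as in the policies above
```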


@@ -17,12 +17,12 @@ metadata:
spec:
selector:
matchLabels:
app: rainbow
color: blue
template:
metadata:
labels:
app: rainbow
color: blue
spec:
containers:
@@ -33,7 +33,7 @@ apiVersion: v1
kind: Service
metadata:
labels:
app: rainbow
color: blue
name: color
namespace: blue
@@ -44,7 +44,7 @@ spec:
protocol: TCP
targetPort: 80
selector:
app: color
app: rainbow
color: blue
type: ClusterIP
---
@@ -66,12 +66,12 @@ metadata:
spec:
selector:
matchLabels:
app: rainbow
color: green
template:
metadata:
labels:
app: rainbow
color: green
spec:
containers:
@@ -82,7 +82,7 @@ apiVersion: v1
kind: Service
metadata:
labels:
app: rainbow
color: green
name: color
namespace: green
@@ -93,7 +93,7 @@ spec:
protocol: TCP
targetPort: 80
selector:
app: rainbow
color: green
type: ClusterIP
---
@@ -115,12 +115,12 @@ metadata:
spec:
selector:
matchLabels:
app: rainbow
color: red
template:
metadata:
labels:
app: rainbow
color: red
spec:
containers:
@@ -131,7 +131,7 @@ apiVersion: v1
kind: Service
metadata:
labels:
app: rainbow
color: red
name: color
namespace: red
@@ -142,6 +142,6 @@ spec:
protocol: TCP
targetPort: 80
selector:
app: rainbow
color: red
type: ClusterIP


@@ -5,25 +5,34 @@ banner() {
echo "#"
}
create_namespace() {
# 'helm template --namespace ... --create-namespace'
# doesn't create the namespace, so we need to create it.
# https://github.com/helm/helm/issues/9813
echo ---
kubectl create namespace kubernetes-dashboard \
-o yaml --dry-run=client
echo ---
}
add_namespace() {
# 'helm template --namespace ...' doesn't add namespace information,
# so we do it with this convenient filter instead.
# https://github.com/helm/helm/issues/10737
kubectl create -f- -o yaml --dry-run=client --namespace kubernetes-dashboard
}
(
banner
create_namespace
helm template kubernetes-dashboard kubernetes-dashboard \
--repo https://kubernetes.github.io/dashboard/ \
--create-namespace --namespace kubernetes-dashboard \
--set "extraArgs={--enable-skip-login,--enable-insecure-login}" \
--set metricsScraper.enabled=true \
--set protocolHttp=true \
--set service.type=NodePort \
#
| add_namespace
echo ---
kubectl create clusterrolebinding kubernetes-dashboard:insecure \
--clusterrole=cluster-admin \
@@ -34,21 +43,23 @@ namespace() {
(
banner
create_namespace
helm template kubernetes-dashboard kubernetes-dashboard \
--repo https://kubernetes.github.io/dashboard/ \
--create-namespace --namespace kubernetes-dashboard \
--set metricsScraper.enabled=true \
#
| add_namespace
) > dashboard-recommended.yaml
(
banner
create_namespace
helm template kubernetes-dashboard kubernetes-dashboard \
--repo https://kubernetes.github.io/dashboard/ \
--create-namespace --namespace kubernetes-dashboard \
--set metricsScraper.enabled=true \
--set service.type=NodePort \
#
| add_namespace
echo ---
kubectl create clusterrolebinding kubernetes-dashboard:cluster-admin \
--clusterrole=cluster-admin \


@@ -1,16 +1,106 @@
This directory contains a Terraform configuration to deploy
a bunch of Kubernetes clusters on various cloud providers,
using their respective managed Kubernetes products.

## With shell wrapper
This is the recommended way to use this configuration. It makes it easy
to start N clusters on any provider. It will create a directory with a name
like `tag-YYYY-MM-DD-HH-MM-SS-SEED-PROVIDER`, copy the Terraform configuration
to that directory, then create the clusters using that configuration.
1. One-time setup: configure provider authentication for the provider(s) that you wish to use.
- Digital Ocean:
```bash
doctl auth init
```
- Google Cloud Platform: you will need to create a project named `prepare-tf`
and enable the relevant APIs for this project. (Sorry, this sounds vague if
you're new to GCP; if you're familiar with it, you'll know what to do. If you
want to change the project name, you can edit the Terraform configuration.)
- Linode:
```bash
linode-cli configure
```
- Oracle Cloud: FIXME
(set up `oci` through the `oci-cli` Python package)
- Scaleway: run `scw init`
2. Optional: set number of clusters, cluster size, and region.
By default, 1 cluster will be configured, with 2 nodes, and auto-scaling up to 5 nodes.
If you want, you can override these parameters, with the following variables.
```bash
export TF_VAR_how_many_clusters=5
export TF_VAR_min_nodes_per_pool=2
export TF_VAR_max_nodes_per_pool=4
export TF_VAR_location=xxx
```
The `location` variable is optional. Each provider should have a default value.
The value of the `location` variable is provider-specific. Examples:
| Provider | Example value | How to see possible values
|---------------|-------------------|---------------------------
| Digital Ocean | `ams3` | `doctl compute region list`
| Google Cloud | `europe-north1-a` | `gcloud compute zones list`
| Linode | `eu-central` | `linode-cli regions list`
| Oracle Cloud | `eu-stockholm-1` | `oci iam region list`
You can also specify multiple locations, and then they will be
used in round-robin fashion.
For example, with Google Cloud, since the default quotas are very
low (my account is limited to 8 public IP addresses per zone, and
my requests to increase that quota were denied) you can do the
following:
```bash
export TF_VAR_location=$(gcloud compute zones list --format=json | jq -r .[].name | grep ^europe)
```
Then when you apply, clusters will be created across all available
zones in Europe. (When I write this, there are 20+ zones in Europe,
so even with my quota, I can create 40 clusters.)
3. Run!
```bash
./run.sh <providername>
```
(If you don't specify a provider name, it will list available providers.)
4. Shutting down
Go to the directory that was created by the previous step (`tag-YYYY-MM...`)
and run `terraform destroy`.
You can also run `./cleanup.sh`, which will destroy ALL the clusters deployed by previous runs of the script.
## Without shell wrapper
Expert mode.
Useful to run steps separately, and/or when working on the Terraform configurations.
1. Select the provider you wish to use.
Go to the `source` directory and edit `main.tf`.
```bash
vim main.tf
```
Change the `source` attribute of the `module "clusters"` section.
Check the content of the `modules` directory to see available choices.
2. Initialize the provider.
@@ -20,24 +110,20 @@ terraform init
3. Configure provider authentication.
See steps above, and add the following extra steps:
- Digital Ocean:
```bash
export DIGITALOCEAN_ACCESS_TOKEN=$(grep ^access-token ~/.config/doctl/config.yaml | cut -d: -f2 | tr -d " ")
```
- Linode:
```bash
export LINODE_TOKEN=$(grep ^token ~/.config/linode-cli | cut -d= -f2 | tr -d " ")
```
4. Decide how many clusters and how many nodes per clusters you want.
```bash
export TF_VAR_how_many_clusters=5
export TF_VAR_min_nodes_per_pool=2
# Optional (will enable autoscaler when available)
export TF_VAR_max_nodes_per_pool=4
# Optional (will only work on some providers)
export TF_VAR_enable_arm_pool=true
```
5. Provision clusters.
```bash
@@ -46,7 +132,7 @@ terraform apply
6. Perform second stage provisioning.
This will install an SSH server on the clusters.
```bash
cd stage2
@@ -72,5 +158,5 @@ terraform destroy
9. Clean up stage2.
```bash
rm stage2/terraform.tfstate*
```

prepare-tf/cleanup.sh Executable file

@@ -0,0 +1,9 @@
#!/bin/sh
export LINODE_TOKEN=$(grep ^token ~/.config/linode-cli | cut -d= -f2 | tr -d " ")
export DIGITALOCEAN_ACCESS_TOKEN=$(grep ^access-token ~/.config/doctl/config.yaml | cut -d: -f2 | tr -d " ")
for T in tag-*; do
(
cd $T
terraform apply -destroy -auto-approve && mv ../$T ../deleted$T
)
done
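
The `&& mv` above means a tag directory is renamed to `deleted...` only when `terraform apply -destroy` succeeds, so failed destroys stay visible for a retry. A standalone sketch of that pattern, with `true`/`false` standing in for a successful/failed destroy (directory names are made up):

```shell
cd "$(mktemp -d)"
mkdir tag-2022-01-01-ok tag-2022-01-02-ko
for T in tag-*; do
  # 'true'/'false' stand in for a successful/failed 'terraform apply -destroy'
  case $T in *-ok) CMD=true;; *) CMD=false;; esac
  ( cd "$T" && $CMD ) && mv "$T" "deleted$T"
done
ls   # only the successfully destroyed directory has been renamed
```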


@@ -1,16 +0,0 @@
resource "random_string" "_" {
length = 5
special = false
upper = false
}
resource "time_static" "_" {}
locals {
tag = format("tf-%s-%s", formatdate("YYYY-MM-DD-hh-mm", time_static._.rfc3339), random_string._.result)
# Common tags to be assigned to all resources
common_tags = [
"created-by=terraform",
"tag=${local.tag}"
]
}

prepare-tf/run.sh Executable file

@@ -0,0 +1,49 @@
#!/bin/sh
set -e
TIME=$(which time)
PROVIDER=$1
[ "$PROVIDER" ] || {
echo "Please specify a provider as first argument, or 'ALL' for parallel mode."
echo "Available providers:"
ls -1 source/modules
exit 1
}
[ "$TAG" ] || {
TIMESTAMP=$(date +%Y-%m-%d-%H-%M-%S)
RANDOMTAG=$(base64 /dev/urandom | tr A-Z a-z | tr -d /+ | head -c5)
export TAG=tag-$TIMESTAMP-$RANDOMTAG
}
[ "$PROVIDER" = "ALL" ] && {
for PROVIDER in $(ls -1 source/modules); do
$TERMINAL -T $TAG-$PROVIDER -e sh -c "
export TAG=$TAG-$PROVIDER
$0 $PROVIDER
cd $TAG-$PROVIDER
bash
" &
done
exit 0
}
[ -d "source/modules/$PROVIDER" ] || {
echo "Provider '$PROVIDER' not found."
echo "Available providers:"
ls -1 source/modules
exit 1
}
export LINODE_TOKEN=$(grep ^token ~/.config/linode-cli | cut -d= -f2 | tr -d " ")
export DIGITALOCEAN_ACCESS_TOKEN=$(grep ^access-token ~/.config/doctl/config.yaml | cut -d: -f2 | tr -d " ")
cp -a source $TAG
cd $TAG
cp -r modules/$PROVIDER modules/PROVIDER
$TIME -o time.1.init terraform init
$TIME -o time.2.stage1 terraform apply -auto-approve
cd stage2
$TIME -o ../time.3.init terraform init
$TIME -o ../time.4.stage2 terraform apply -auto-approve


@@ -0,0 +1,19 @@
resource "random_string" "_" {
length = 4
number = false
special = false
upper = false
}
resource "time_static" "_" {}
locals {
timestamp = formatdate("YYYY-MM-DD-hh-mm", time_static._.rfc3339)
tag = random_string._.result
# Common tags to be assigned to all resources
common_tags = [
"created-by-terraform",
format("created-at-%s", local.timestamp),
format("created-for-%s", local.tag)
]
}


@@ -1,5 +1,5 @@
module "clusters" {
source = "./modules/PROVIDER"
for_each = local.clusters
cluster_name = each.value.cluster_name
min_nodes_per_pool = var.min_nodes_per_pool
@@ -7,22 +7,24 @@ module "clusters" {
enable_arm_pool = var.enable_arm_pool
node_size = var.node_size
common_tags = local.common_tags
location = each.value.location
}
locals {
clusters = {
for i in range(101, 101 + var.how_many_clusters) :
i => {
cluster_name     = format("%s-%03d", local.tag, i)
kubeconfig_path  = format("./stage2/kubeconfig.%03d", i)
externalips_path = format("./stage2/externalips.%03d", i)
flags_path       = format("./stage2/flags.%03d", i)
location = local.locations[i % length(local.locations)]
}
}
}
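
The `i % length(local.locations)` index above hands locations out to clusters round-robin. The same logic, sketched in shell with a hypothetical location list:

```shell
# Locations as a whitespace-separated list, mirroring local.locations.
set -- ams3 fra1 lon1
n=$#
for i in 101 102 103 104; do
  # Terraform indexes lists from 0; shell positionals start at 1.
  idx=$(( i % n + 1 ))
  eval "loc=\$$idx"
  echo "cluster-$i -> $loc"
done
```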
resource "local_file" "stage2" {
filename        = "./stage2/main.tf"
file_permission = "0644"
content = templatefile(
"./stage2.tmpl",
@@ -30,6 +32,15 @@ resource "local_file" "stage2" {
)
}
resource "local_file" "flags" {
for_each = local.clusters
filename = each.value.flags_path
file_permission = "0600"
content = <<-EOT
has_metrics_server: ${module.clusters[each.key].has_metrics_server}
EOT
}
resource "local_file" "kubeconfig" {
for_each = local.clusters
filename = each.value.kubeconfig_path
@@ -59,8 +70,8 @@ resource "null_resource" "wait_for_nodes" {
}
data "external" "externalips" {
for_each   = local.clusters
depends_on = [null_resource.wait_for_nodes]
program = [
"sh",
"-c",


@@ -1,12 +1,13 @@
resource "digitalocean_kubernetes_cluster" "_" {
name = var.cluster_name
tags = var.common_tags
# Region is mandatory, so let's provide a default value.
region  = var.location != null ? var.location : "nyc1"
version = var.k8s_version
node_pool {
name = "x86"
tags = var.common_tags
size = local.node_type
auto_scale = true
min_nodes = var.min_nodes_per_pool


@@ -5,3 +5,7 @@ output "kubeconfig" {
output "cluster_id" {
value = digitalocean_kubernetes_cluster._.id
}
output "has_metrics_server" {
value = false
}


@@ -8,10 +8,6 @@ variable "common_tags" {
default = []
}
locals {
common_tags = [for tag in var.common_tags : replace(tag, "=", "-")]
}
variable "node_size" {
type = string
default = "M"
@@ -48,14 +44,14 @@ locals {
# To view supported regions, run:
# doctl compute region list
variable "location" {
type = string
default = null
}
# To view supported versions, run:
# doctl kubernetes options versions -o json | jq -r .[].slug
variable "k8s_version" {
type = string
default = "1.22.8-do.1"
}


@@ -0,0 +1,65 @@
resource "google_container_cluster" "_" {
name = var.cluster_name
project = local.project
location = local.location
min_master_version = var.k8s_version
# To deploy private clusters, uncomment the section below,
# and uncomment the block in network.tf.
# Private clusters require extra resources (Cloud NAT,
# router, network, subnet) and the quota for some of these
# resources is fairly low on GCP; so if you want to deploy
# a lot of private clusters (more than 10), you can use these
# blocks as a base but you will probably have to refactor
# things quite a bit (you will at least need to define a single
# shared router and use it across all the clusters).
/*
network = google_compute_network._.name
subnetwork = google_compute_subnetwork._.name
private_cluster_config {
enable_private_nodes = true
# This must be set to "false".
# (Otherwise, access to the public endpoint is disabled.)
enable_private_endpoint = false
# This must be set to a /28.
# I think it shouldn't collide with the pod network subnet.
master_ipv4_cidr_block = "10.255.255.0/28"
}
# Private clusters require "VPC_NATIVE" networking mode
# (as opposed to the legacy "ROUTES").
networking_mode = "VPC_NATIVE"
# ip_allocation_policy is required for VPC_NATIVE clusters.
ip_allocation_policy {
# This is the block that will be used for pods.
cluster_ipv4_cidr_block = "10.0.0.0/12"
# The services block is optional
# (GKE will pick one automatically).
#services_ipv4_cidr_block = ""
}
*/
node_pool {
name = "x86"
node_config {
tags = var.common_tags
machine_type = local.node_type
}
initial_node_count = var.min_nodes_per_pool
autoscaling {
min_node_count = var.min_nodes_per_pool
max_node_count = max(var.min_nodes_per_pool, var.max_nodes_per_pool)
}
}
# This is not strictly necessary.
# We'll see if we end up using it.
# (If it is removed, make sure to also remove the corresponding
# key+cert variables from outputs.tf!)
master_auth {
client_certificate_config {
issue_client_certificate = true
}
}
}

View File

@@ -0,0 +1,38 @@
/*
resource "google_compute_network" "_" {
name = var.cluster_name
project = local.project
# The default is to create subnets automatically.
# However, this creates one subnet per zone in all regions,
# which causes a quick exhaustion of the subnet quota.
auto_create_subnetworks = false
}
resource "google_compute_subnetwork" "_" {
name = var.cluster_name
ip_cidr_range = "10.254.0.0/16"
region = local.region
network = google_compute_network._.id
project = local.project
}
resource "google_compute_router" "_" {
name = var.cluster_name
region = local.region
network = google_compute_network._.name
project = local.project
}
resource "google_compute_router_nat" "_" {
name = var.cluster_name
router = google_compute_router._.name
region = local.region
project = local.project
# Everyone in the network is allowed to NAT out.
# (We would change this if we only wanted to allow specific subnets to NAT out.)
source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
# Pick NAT addresses automatically.
# (We would change this if we wanted to use specific addresses to NAT out.)
nat_ip_allocate_option = "AUTO_ONLY"
}
*/

View File

@@ -0,0 +1,35 @@
data "google_client_config" "_" {}
output "kubeconfig" {
value = <<-EOT
apiVersion: v1
kind: Config
current-context: ${google_container_cluster._.name}
clusters:
- name: ${google_container_cluster._.name}
cluster:
server: https://${google_container_cluster._.endpoint}
certificate-authority-data: ${google_container_cluster._.master_auth[0].cluster_ca_certificate}
contexts:
- name: ${google_container_cluster._.name}
context:
cluster: ${google_container_cluster._.name}
user: client-token
users:
- name: client-cert
user:
client-key-data: ${google_container_cluster._.master_auth[0].client_key}
client-certificate-data: ${google_container_cluster._.master_auth[0].client_certificate}
- name: client-token
user:
token: ${data.google_client_config._.access_token}
EOT
}
output "cluster_id" {
value = google_container_cluster._.id
}
output "has_metrics_server" {
value = true
}

View File

@@ -0,0 +1,8 @@
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "4.5.0"
}
}
}

View File

@@ -0,0 +1,68 @@
variable "cluster_name" {
type = string
default = "deployed-with-terraform"
}
variable "common_tags" {
type = list(string)
default = []
}
variable "node_size" {
type = string
default = "M"
}
variable "min_nodes_per_pool" {
type = number
default = 2
}
variable "max_nodes_per_pool" {
type = number
default = 5
}
# FIXME
variable "enable_arm_pool" {
type = bool
default = false
}
variable "node_types" {
type = map(string)
default = {
"S" = "e2-small"
"M" = "e2-medium"
"L" = "e2-standard-2"
}
}
locals {
node_type = var.node_types[var.node_size]
}
# To view supported locations, run:
# gcloud compute zones list
variable "location" {
type = string
default = null
}
# To view supported versions, run:
# gcloud container get-server-config --region=europe-north1 '--format=flattened(channels)'
# It's also possible to specify just a minor version (e.g. "1.20") and GKE will pick a matching patch release.
variable "k8s_version" {
type = string
default = "1.21"
}
locals {
location = var.location != null ? var.location : "europe-north1-a"
region = replace(local.location, "/-[a-z]$/", "")
# Unfortunately, the following line doesn't work
# (that attribute just returns an empty string)
# so we have to hard-code the project name.
#project = data.google_client_config._.project
project = "prepare-tf"
}
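The `region` local above strips the trailing zone letter from a zone name with a regex. The same transformation can be sketched in plain shell (a standalone illustration, not part of this repo):

```shell
# Zone -> region, mirroring replace(local.location, "/-[a-z]$/", "")
location="europe-north1-a"
region="${location%-[a-z]}"   # drop the trailing "-<zone letter>" suffix
echo "$region"                # prints: europe-north1
```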

View File

@@ -1,7 +1,8 @@
resource "linode_lke_cluster" "_" {
label = var.cluster_name
tags = var.common_tags
region = var.region
label = var.cluster_name
tags = var.common_tags
# "region" is mandatory, so let's provide a default value if none was given.
region = var.location != null ? var.location : "eu-central"
k8s_version = var.k8s_version
pool {

View File

@@ -5,3 +5,7 @@ output "kubeconfig" {
output "cluster_id" {
value = linode_lke_cluster._.id
}
output "has_metrics_server" {
value = false
}

View File

@@ -42,16 +42,16 @@ locals {
node_type = var.node_types[var.node_size]
}
# To view supported versions, run:
# To view supported regions, run:
# linode-cli regions list
variable "region" {
variable "location" {
type = string
default = "us-east"
default = null
}
# To view supported versions, run:
# linode-cli lke versions-list --json | jq -r .[].id
variable "k8s_version" {
type = string
default = "1.21"
default = "1.22"
}

View File

@@ -1,6 +1,7 @@
resource "oci_identity_compartment" "_" {
name = var.cluster_name
description = var.cluster_name
name = var.cluster_name
description = var.cluster_name
enable_delete = true
}
locals {

View File

@@ -9,3 +9,7 @@ output "kubeconfig" {
output "cluster_id" {
value = oci_containerengine_cluster._.id
}
output "has_metrics_server" {
value = false
}

View File

@@ -70,6 +70,13 @@ locals {
node_type = var.node_types[var.node_size]
}
# To view supported regions, run:
# oci iam region list | jq .data[].name
variable "location" {
type = string
default = null
}
# To view supported versions, run:
# oci ce cluster-options get --cluster-option-id all | jq -r '.data["kubernetes-versions"][]'
variable "k8s_version" {

View File

@@ -1,5 +1,6 @@
resource "scaleway_k8s_cluster" "_" {
name = var.cluster_name
region = var.location
tags = var.common_tags
version = var.k8s_version
cni = var.cni
@@ -8,7 +9,7 @@ resource "scaleway_k8s_cluster" "_" {
resource "scaleway_k8s_pool" "_" {
cluster_id = scaleway_k8s_cluster._.id
name = "scw-x86"
name = "x86"
tags = var.common_tags
node_type = local.node_type
size = var.min_nodes_per_pool

View File

@@ -5,3 +5,7 @@ output "kubeconfig" {
output "cluster_id" {
value = scaleway_k8s_cluster._.id
}
output "has_metrics_server" {
value = sort([var.k8s_version, "1.22"])[0] == "1.22"
}
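The `sort(...)[0]` trick above compares versions lexicographically, which works for recent releases but would misorder e.g. "1.9" vs "1.22". A shell sketch of a more robust comparison, using the version-aware `sort -V` from GNU coreutils (illustration only, not part of the repo):

```shell
# True if k8s_version >= 1.22, using version-aware sorting
k8s_version="1.23.4"
if [ "$(printf '%s\n' "$k8s_version" 1.22 | sort -V | head -n1)" = "1.22" ]; then
  echo "metrics-server is pre-installed"   # prints for any version >= 1.22
fi
```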

View File

@@ -47,7 +47,12 @@ variable "cni" {
default = "cilium"
}
# See supported versions with:
variable "location" {
type = string
default = null
}
# To view supported versions, run:
# scw k8s version list -o json | jq -r .[].name
variable "k8s_version" {
type = string

View File

@@ -2,7 +2,7 @@ terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.0.3"
version = "2.7.1"
}
}
}
@@ -119,6 +119,11 @@ resource "kubernetes_cluster_role_binding" "shpod_${index}" {
name = "shpod"
namespace = "shpod"
}
subject {
api_group = "rbac.authorization.k8s.io"
kind = "Group"
name = "shpod-cluster-admins"
}
}
resource "random_string" "shpod_${index}" {
@@ -135,24 +140,20 @@ provider "helm" {
}
resource "helm_release" "metrics_server_${index}" {
# Some providers pre-install metrics-server.
# Some don't. Let's install metrics-server,
# but only if it's not already installed.
count = yamldecode(file("./flags.${index}"))["has_metrics_server"] ? 0 : 1
provider = helm.cluster_${index}
repository = "https://charts.bitnami.com/bitnami"
repository = "https://kubernetes-sigs.github.io/metrics-server/"
chart = "metrics-server"
version = "5.8.8"
version = "3.8.2"
name = "metrics-server"
namespace = "metrics-server"
create_namespace = true
set {
name = "apiService.create"
value = "true"
}
set {
name = "extraArgs.kubelet-insecure-tls"
value = "true"
}
set {
name = "extraArgs.kubelet-preferred-address-types"
value = "InternalIP"
name = "args"
value = "{--kubelet-insecure-tls}"
}
}
@@ -182,7 +183,7 @@ resource "kubernetes_config_map" "kubeconfig_${index}" {
- name: cluster-admin
user:
client-key-data: $${base64encode(tls_private_key.cluster_admin_${index}.private_key_pem)}
client-certificate-data: $${base64encode(kubernetes_certificate_signing_request.cluster_admin_${index}.certificate)}
client-certificate-data: $${base64encode(kubernetes_certificate_signing_request_v1.cluster_admin_${index}.certificate)}
EOT
}
}
@@ -196,11 +197,14 @@ resource "tls_cert_request" "cluster_admin_${index}" {
private_key_pem = tls_private_key.cluster_admin_${index}.private_key_pem
subject {
common_name = "cluster-admin"
organization = "system:masters"
# Note: CSR API v1 doesn't allow issuing certs with "system:masters" anymore.
#organization = "system:masters"
# We'll use this custom group name instead.
organization = "shpod-cluster-admins"
}
}
resource "kubernetes_certificate_signing_request" "cluster_admin_${index}" {
resource "kubernetes_certificate_signing_request_v1" "cluster_admin_${index}" {
provider = kubernetes.cluster_${index}
metadata {
name = "cluster-admin"
@@ -208,6 +212,7 @@ resource "kubernetes_certificate_signing_request" "cluster_admin_${index}" {
spec {
usages = ["client auth"]
request = tls_cert_request.cluster_admin_${index}.cert_request_pem
signer_name = "kubernetes.io/kube-apiserver-client"
}
auto_approve = true
}

View File

@@ -0,0 +1,40 @@
variable "how_many_clusters" {
type = number
default = 1
}
variable "node_size" {
type = string
default = "M"
# Can be S, M, L.
# We map these values to different specific instance types for each provider,
# but the idea is that they should correspond to the following sizes:
# S = 2 GB RAM
# M = 4 GB RAM
# L = 8 GB RAM
}
variable "min_nodes_per_pool" {
type = number
default = 1
}
variable "max_nodes_per_pool" {
type = number
default = 0
}
variable "enable_arm_pool" {
type = bool
default = false
}
variable "location" {
type = string
default = null
}
# TODO: perhaps also handle space-separated lists (not just newline-separated)?
locals {
locations = var.location == null ? [null] : split("\n", var.location)
}

View File

@@ -1,28 +0,0 @@
variable "how_many_clusters" {
type = number
default = 2
}
variable "node_size" {
type = string
default = "M"
# Can be S, M, L.
# S = 2 GB RAM
# M = 4 GB RAM
# L = 8 GB RAM
}
variable "min_nodes_per_pool" {
type = number
default = 1
}
variable "max_nodes_per_pool" {
type = number
default = 0
}
variable "enable_arm_pool" {
type = bool
default = true
}

View File

@@ -14,7 +14,9 @@ These tools can help you to create VMs on:
- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`)
- [Parallel SSH](https://github.com/lilydjwg/pssh)
(should be installable with `pip install git+https://github.com/lilydjwg/pssh`;
on a Mac, try `brew install pssh`)
Depending on the infrastructure that you want to use, you also need to install
the CLI that is specific to that cloud. For OpenStack deployments, you will

View File

@@ -1,4 +1,5 @@
INFRACLASS=openstack-tf
INFRACLASS=terraform
TERRAFORM=openstack
# If you are using OpenStack, copy this file (e.g. to "openstack" or "enix")
# and customize the variables below.

View File

@@ -178,6 +178,13 @@ _cmd_clusterize() {
# install --owner=ubuntu --mode=600 /root/.ssh/authorized_keys --target-directory /home/ubuntu/.ssh"
#fi
# Special case for oracle since their iptables blocks everything but SSH
pssh "
if [ -f /etc/iptables/rules.v4 ]; then
sudo sed -i 's/-A INPUT -j REJECT --reject-with icmp-host-prohibited//' /etc/iptables/rules.v4
sudo netfilter-persistent start
fi"
# Copy settings and install Python YAML parser
pssh -I tee /tmp/settings.yaml <tags/$TAG/settings.yaml
pssh "
@@ -185,10 +192,10 @@ _cmd_clusterize() {
sudo apt-get install -y python-yaml"
# If there is no "python" binary, symlink to python3
#pssh "
#if ! which python; then
# ln -s $(which python3) /usr/local/bin/python
#fi"
pssh "
if ! which python; then
sudo ln -s $(which python3) /usr/local/bin/python
fi"
# Copy postprep.py to the remote machines, and execute it, feeding it the list of IP addresses
pssh -I tee /tmp/clusterize.py <lib/clusterize.py
@@ -248,7 +255,7 @@ _cmd_docker() {
##VERSION## https://github.com/docker/compose/releases
if [ "$ARCHITECTURE" ]; then
COMPOSE_VERSION=v2.0.1
COMPOSE_VERSION=v2.2.3
COMPOSE_PLATFORM='linux-$(uname -m)'
else
COMPOSE_VERSION=1.29.2
@@ -314,11 +321,12 @@ _cmd_kube() {
SETTINGS=tags/$TAG/settings.yaml
KUBEVERSION=$(awk '/^kubernetes_version:/ {print $2}' $SETTINGS)
if [ "$KUBEVERSION" ]; then
EXTRA_APTGET="=$KUBEVERSION-00"
EXTRA_KUBEADM="kubernetesVersion: v$KUBEVERSION"
else
EXTRA_APTGET=""
EXTRA_KUBEADM=""
pssh "
sudo tee /etc/apt/preferences.d/kubernetes <<EOF
Package: kubectl kubeadm kubelet
Pin: version $KUBEVERSION*
Pin-Priority: 1000
EOF"
fi
# Install packages
@@ -329,7 +337,8 @@ _cmd_kube() {
sudo tee /etc/apt/sources.list.d/kubernetes.list"
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet$EXTRA_APTGET kubeadm$EXTRA_APTGET kubectl$EXTRA_APTGET &&
sudo apt-get install -qy kubelet kubeadm kubectl &&
sudo apt-mark hold kubelet kubeadm kubectl
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl &&
echo 'alias k=kubectl' | sudo tee /etc/bash_completion.d/k &&
echo 'complete -F __start_kubectl k' | sudo tee -a /etc/bash_completion.d/k"
@@ -341,6 +350,11 @@ _cmd_kube() {
sudo swapoff -a"
fi
# Re-enable CRI interface in containerd
pssh "
echo '# Use default parameters for containerd.' | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd"
# Initialize kube control plane
pssh --timeout 200 "
if i_am_first_node && [ ! -f /etc/kubernetes/admin.conf ]; then
@@ -350,19 +364,38 @@ kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: \$(cat /tmp/token)
nodeRegistration:
# Comment out the next line to switch back to Docker.
criSocket: /run/containerd/containerd.sock
ignorePreflightErrors:
- NumCPU
---
kind: JoinConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
bootstrapToken:
apiServerEndpoint: \$(cat /etc/name_of_first_node):6443
token: \$(cat /tmp/token)
unsafeSkipCAVerification: true
nodeRegistration:
# Comment out the next line to switch back to Docker.
criSocket: /run/containerd/containerd.sock
ignorePreflightErrors:
- NumCPU
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: cgroupfs
# The following line is necessary when using Docker.
# It doesn't seem necessary when using containerd.
#cgroupDriver: cgroupfs
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
apiServer:
certSANs:
- \$(cat /tmp/ipv4)
$EXTRA_KUBEADM
EOF
sudo kubeadm init --config=/tmp/kubeadm-config.yaml --ignore-preflight-errors=NumCPU
sudo kubeadm init --config=/tmp/kubeadm-config.yaml
fi"
# Put kubeconfig in ubuntu's and $USER_LOGIN's accounts
@@ -386,14 +419,17 @@ EOF
pssh --timeout 200 "
if ! i_am_first_node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
FIRSTNODE=\$(cat /etc/name_of_first_node) &&
TOKEN=\$(ssh $SSHOPTS \$FIRSTNODE cat /tmp/token) &&
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN \$FIRSTNODE:6443
ssh $SSHOPTS \$FIRSTNODE cat /tmp/kubeadm-config.yaml > /tmp/kubeadm-config.yaml &&
sudo kubeadm join --config /tmp/kubeadm-config.yaml
fi"
# Install metrics server
pssh "
if i_am_first_node; then
kubectl apply -f https://raw.githubusercontent.com/jpetazzo/container.training/master/k8s/metrics-server.yaml
#helm upgrade --install metrics-server \
# --repo https://kubernetes-sigs.github.io/metrics-server/ metrics-server \
# --namespace kube-system --set args={--kubelet-insecure-tls}
fi"
}
@@ -478,7 +514,7 @@ EOF
if [ ! -x /usr/local/bin/kustomize ]; then
curl -fsSL $URL |
sudo tar -C /usr/local/bin -zx kustomize
echo complete -C /usr/local/bin/kustomize kustomize | sudo tee /etc/bash_completion.d/kustomize
kustomize completion bash | sudo tee /etc/bash_completion.d/kustomize
kustomize version
fi"
@@ -562,16 +598,16 @@ EOF
fi"
##VERSION## https://github.com/bitnami-labs/sealed-secrets/releases
KUBESEAL_VERSION=v0.16.0
case $ARCH in
amd64) FILENAME=kubeseal-linux-amd64;;
arm64) FILENAME=kubeseal-arm64;;
*) FILENAME=nope;;
esac
[ "$FILENAME" = "nope" ] || pssh "
KUBESEAL_VERSION=0.17.4
#case $ARCH in
#amd64) FILENAME=kubeseal-linux-amd64;;
#arm64) FILENAME=kubeseal-arm64;;
#*) FILENAME=nope;;
#esac
pssh "
if [ ! -x /usr/local/bin/kubeseal ]; then
curl -fsSLo kubeseal https://github.com/bitnami-labs/sealed-secrets/releases/download/$KUBESEAL_VERSION/$FILENAME &&
sudo install kubeseal /usr/local/bin
curl -fsSL https://github.com/bitnami-labs/sealed-secrets/releases/download/v$KUBESEAL_VERSION/kubeseal-$KUBESEAL_VERSION-linux-$ARCH.tar.gz |
sudo tar -zxvf- -C /usr/local/bin kubeseal
kubeseal --version
fi"
}
@@ -1025,7 +1061,8 @@ _cmd_webssh() {
need_tag
pssh "
sudo apt-get update &&
sudo apt-get install python-tornado python-paramiko -y"
sudo apt-get install python-tornado python-paramiko -y ||
sudo apt-get install python3-tornado python3-paramiko -y"
pssh "
cd /opt
[ -d webssh ] || sudo git clone https://github.com/jpetazzo/webssh"

View File

@@ -26,12 +26,24 @@ infra_start() {
info " Name: $NAME"
info " Instance type: $LINODE_TYPE"
ROOT_PASS="$(base64 /dev/urandom | cut -c1-20 | head -n 1)"
linode-cli linodes create \
MAX_TRY=5
TRY=1
WAIT=1
while ! linode-cli linodes create \
--type=${LINODE_TYPE} --region=${LINODE_REGION} \
--image=linode/ubuntu18.04 \
--authorized_keys="${LINODE_SSHKEY}" \
--root_pass="${ROOT_PASS}" \
--tags=${TAG} --label=${NAME}
--tags=${TAG} --label=${NAME}; do
warning "Failed to create VM (attempt $TRY/$MAX_TRY)."
if [ $TRY -ge $MAX_TRY ]; then
die "Giving up."
fi
info "Waiting $WAIT seconds and retrying."
sleep $WAIT
TRY=$(($TRY+1))
WAIT=$(($WAIT*2))
done
done
sep
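The retry loop above backs off exponentially to stay under Linode's rate limit (roughly 10 requests per 10 seconds, per the commit message). The pattern can be sketched as a standalone shell function (a hypothetical `retry` helper, not part of the repo; `"$@"` stands in for the real `linode-cli` invocation):

```shell
# Retry a command up to MAX_TRY times, doubling the wait between attempts.
retry() {
  MAX_TRY=5
  TRY=1
  WAIT=1
  while ! "$@"; do
    echo "Failed (attempt $TRY/$MAX_TRY)." >&2
    if [ $TRY -ge $MAX_TRY ]; then
      echo "Giving up." >&2
      return 1
    fi
    sleep $WAIT
    TRY=$((TRY+1))
    WAIT=$((WAIT*2))
  done
}

retry true && echo "succeeded"   # prints: succeeded
```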

View File

@@ -1,7 +1,26 @@
error_terraform_configuration() {
error "When using the terraform infraclass, the TERRAFORM"
error "environment variable must be set to one of the available"
error "terraform configurations. These configurations are in"
error "the prepare-vm/terraform subdirectory. You should probably"
error "update your infra file and set the variable."
error "(e.g. with TERRAFORM=openstack)"
}
if [ "$TERRAFORM" = "" ]; then
error_terraform_configuration
die "Aborting because TERRAFORM variable is not set."
fi
if [ ! -d terraform/$TERRAFORM ]; then
error_terraform_configuration
die "Aborting because no terraform configuration was found in 'terraform/$TERRAFORM'."
fi
infra_start() {
COUNT=$1
cp terraform-openstack/*.tf tags/$TAG
cp terraform/$TERRAFORM/*.tf tags/$TAG
(
cd tags/$TAG
if ! terraform init; then

View File

@@ -60,7 +60,10 @@ while domains and clusters:
zone += f"node{node} 300 IN A {ip}\n"
r = requests.put(
f"{apiurl}/{domain}/records",
headers={"x-api-key": apikey},
headers={
"x-api-key": apikey,
"content-type": "text/plain",
},
data=zone)
print(r.text)

View File

@@ -1,22 +1,22 @@
#!/bin/sh
# https://open-api.netlify.com/#tag/dnsZone
[ "$1" ] || {
echo ""
echo "Add a record in Netlify DNS."
echo "This script is hardcoded to add a record to container.training."
echo ""
echo "Syntax:"
echo "$0 <name> <ipaddr>"
echo "$0 list"
echo "$0 add <name> <ipaddr>"
echo "$0 del <recordid>"
echo ""
echo "Example to create an A record for eu.container.training:"
echo "$0 eu 185.145.250.0"
echo "$0 add eu 185.145.250.0"
echo ""
exit 1
}
NAME=$1.container.training
ADDR=$2
NETLIFY_USERID=$(jq .userId < ~/.config/netlify/config.json)
NETLIFY_TOKEN=$(jq -r .users[$NETLIFY_USERID].auth.token < ~/.config/netlify/config.json)
@@ -29,19 +29,54 @@ netlify() {
ZONE_ID=$(netlify dns_zones |
jq -r '.[] | select ( .name == "container.training" ) | .id')
# It looks like if we create two identical records, then delete one of them,
# Netlify DNS ends up in a weird state (the name doesn't resolve anymore even
# though it's still visible through the API and the website?)
_list() {
netlify dns_zones/$ZONE_ID/dns_records |
jq -r '.[] | select(.type=="A") | [.hostname, .type, .value, .id] | @tsv'
}
if netlify dns_zones/$ZONE_ID/dns_records |
jq '.[] | select(.hostname=="'$NAME'" and .type=="A" and .value=="'$ADDR'")' |
grep .
then
echo "It looks like that record already exists. Refusing to create it."
exit 1
fi
_add() {
NAME=$1.container.training
ADDR=$2
netlify dns_zones/$ZONE_ID/dns_records type=A hostname=$NAME value=$ADDR ttl=300
netlify dns_zones/$ZONE_ID/dns_records |
jq '.[] | select(.hostname=="'$NAME'")'
# It looks like if we create two identical records, then delete one of them,
# Netlify DNS ends up in a weird state (the name doesn't resolve anymore even
# though it's still visible through the API and the website?)
if netlify dns_zones/$ZONE_ID/dns_records |
jq '.[] | select(.hostname=="'$NAME'" and .type=="A" and .value=="'$ADDR'")' |
grep .
then
echo "It looks like that record already exists. Refusing to create it."
exit 1
fi
netlify dns_zones/$ZONE_ID/dns_records type=A hostname=$NAME value=$ADDR ttl=300
netlify dns_zones/$ZONE_ID/dns_records |
jq '.[] | select(.hostname=="'$NAME'")'
}
_del() {
RECORD_ID=$1
# OK, since that one is dangerous, I'm putting the whole request explicitly here
http DELETE \
https://api.netlify.com/api/v1/dns_zones/$ZONE_ID/dns_records/$RECORD_ID \
"Authorization:Bearer $NETLIFY_TOKEN"
}
case "$1" in
list)
_list
;;
add)
_add $2 $3
;;
del)
_del $2
;;
*)
echo "Unknown command '$1'."
exit 1
;;
esac

View File

@@ -14,7 +14,9 @@ paper_size: A4
user_login: k8s
user_password: training
kubernetes_version: 1.19.16
# For a list of old versions, check:
# https://kubernetes.io/releases/patch-releases/#non-active-branch-history
kubernetes_version: 1.18.20
image:

View File

@@ -0,0 +1,48 @@
resource "oci_identity_compartment" "_" {
name = var.prefix
description = var.prefix
enable_delete = true
}
locals {
compartment_id = oci_identity_compartment._.id
}
data "oci_identity_availability_domains" "_" {
compartment_id = local.compartment_id
}
data "oci_core_images" "_" {
compartment_id = local.compartment_id
shape = var.shape
operating_system = "Canonical Ubuntu"
operating_system_version = "20.04"
#operating_system = "Oracle Linux"
#operating_system_version = "7.9"
}
resource "oci_core_instance" "_" {
count = var.how_many_nodes
display_name = format("%s-%04d", var.prefix, count.index + 1)
availability_domain = data.oci_identity_availability_domains._.availability_domains[var.availability_domain].name
compartment_id = local.compartment_id
shape = var.shape
shape_config {
memory_in_gbs = var.memory_in_gbs_per_node
ocpus = var.ocpus_per_node
}
source_details {
source_id = data.oci_core_images._.images[0].id
source_type = "image"
}
create_vnic_details {
subnet_id = oci_core_subnet._.id
}
metadata = {
ssh_authorized_keys = local.authorized_keys
}
}
output "ip_addresses" {
value = join("", formatlist("%s\n", oci_core_instance._.*.public_ip))
}

View File

@@ -0,0 +1,63 @@
resource "oci_core_vcn" "_" {
compartment_id = local.compartment_id
cidr_block = "10.0.0.0/16"
display_name = "tf-vcn"
}
#
# On OCI, you can have either "public" or "private" subnets.
# In both cases, instances get addresses in the VCN CIDR block;
# but instances in "public" subnets also get a public address.
#
# Then, to enable communication to the outside world, you need:
# - for public subnets, an "internet gateway"
# (will allow inbound and outbound traffic)
# - for private subnets, a "NAT gateway"
# (will only allow outbound traffic)
# - optionally, for private subnets, a "service gateway"
# (to access other OCI services, e.g. object store)
#
# In this configuration, we use public subnets, and since we
# need outside access, we add an internet gateway.
#
# Note that the default routing table in a VCN is empty, so we
# add the internet gateway to the default routing table.
# Similarly, the default security group in a VCN blocks almost
# everything, so we add a blanket rule in that security group.
#
resource "oci_core_internet_gateway" "_" {
compartment_id = local.compartment_id
display_name = "tf-igw"
vcn_id = oci_core_vcn._.id
}
resource "oci_core_default_route_table" "_" {
manage_default_resource_id = oci_core_vcn._.default_route_table_id
route_rules {
destination = "0.0.0.0/0"
destination_type = "CIDR_BLOCK"
network_entity_id = oci_core_internet_gateway._.id
}
}
resource "oci_core_default_security_list" "_" {
manage_default_resource_id = oci_core_vcn._.default_security_list_id
ingress_security_rules {
protocol = "all"
source = "0.0.0.0/0"
}
egress_security_rules {
protocol = "all"
destination = "0.0.0.0/0"
}
}
resource "oci_core_subnet" "_" {
compartment_id = local.compartment_id
cidr_block = "10.0.0.0/20"
vcn_id = oci_core_vcn._.id
display_name = "tf-subnet"
route_table_id = oci_core_default_route_table._.id
security_list_ids = [oci_core_default_security_list._.id]
}

View File

@@ -0,0 +1,8 @@
terraform {
required_version = ">= 1"
required_providers {
oci = {
source = "oracle/oci"
version = "4.48.0"
}
}
}

View File

@@ -0,0 +1,42 @@
variable "prefix" {
type = string
default = "provisioned-with-terraform"
}
variable "how_many_nodes" {
type = number
default = 2
}
locals {
authorized_keys = file("~/.ssh/id_rsa.pub")
}
/*
Available flex shapes:
"VM.Optimized3.Flex" # Intel Ice Lake
"VM.Standard3.Flex" # Intel Ice Lake
"VM.Standard.A1.Flex" # Ampere Altra
"VM.Standard.E3.Flex" # AMD Rome
"VM.Standard.E4.Flex" # AMD Milan
*/
variable "shape" {
type = string
default = "VM.Standard.A1.Flex"
}
variable "availability_domain" {
type = number
default = 0
}
variable "ocpus_per_node" {
type = number
default = 1
}
variable "memory_in_gbs_per_node" {
type = number
default = 4
}

slides/03.yml Normal file
View File

@@ -0,0 +1,114 @@
title: |
Docker & Kubernetes
chat: "[Teams](https://teams.microsoft.com/l/team/19%3aMhQUes73UU8qA8zZDA7b7ZAbQRZUxEdanl5bbN4A1EM1%40thread.tacv2/conversations?groupId=44c2561d-82c8-4db0-9269-0aa802fa85d8&tenantId=72aa0d83-624a-4ebf-a683-1b9b45548610)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2022-05-derivco.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics-03.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- # DAY 1
#- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Container_Networking_Basics.md
- containers/Container_Network_Model.md
- containers/Getting_Inside.md
- containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- # DAY 2
- containers/Dockerfile_Tips.md
- containers/Local_Development_Workflow.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- |
# Kubernetes
- shared/connecting.md
#- shared/webssh.md
#- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
- k8s/kubectl-run.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- exercises/k8sfundamentals-details.md
- # DAY 3
- k8s/ourapponkube.md
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- k8s/namespaces.md
- k8s/yamldeploy.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/authoring-yaml.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/setup-devel.md
- k8s/localkubeconfig.md
#- k8s/access-eks-cluster.md
- k8s/accessinternal.md
#- k8s/kubectlproxy.md
- exercises/localcluster-details.md
- containers/Multi_Stage_Builds.md
- containers/Exercise_Dockerfile_Advanced.md
- # DAY 4
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/volumes.md
- k8s/configuration.md
- k8s/secrets.md
- k8s/resource-limits.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- shared/thankyou.md
- # EXTRA
- |
# (Extra Docker content)
- containers/Start_And_Attach.md
- containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Advanced_Dockerfiles.md
- containers/Network_Drivers.md
- # EXTRA
- |
# (Extra Kubernetes content)
- k8s/batch-jobs.md
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md

slides/23.yml Normal file
View File

@@ -0,0 +1,115 @@
title: |
Docker & Kubernetes
chat: "[Teams](https://teams.microsoft.com/_?tenantId=72aa0d83-624a-4ebf-a683-1b9b45548610#/conversations/General?threadId=19:LMcv0F_ghgBoyTCo1J8a1Q1VfrPmcUGj-luuWQwSJjY1@thread.tacv2&ctx=channel)"
gitrepo: github.com/jpetazzo/container.training
slides: https://2022-05-derivco.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics-23.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- # DAY 1
#- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- # DAY 2
- containers/Dockerfile_Tips.md
- containers/Multi_Stage_Builds.md
- containers/Container_Networking_Basics.md
- containers/Local_Development_Workflow.md
- containers/Getting_Inside.md
- containers/Container_Network_Model.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- containers/Exercise_Dockerfile_Advanced.md
- # DAY 3
- |
# Kubernetes
- shared/connecting.md
#- shared/webssh.md
#- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
- k8s/kubectl-run.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- exercises/k8sfundamentals-details.md
- k8s/ourapponkube.md
- # DAY 4
- k8s/labels-annotations.md
- k8s/kubectl-logs.md
- k8s/logs-cli.md
- k8s/namespaces.md
- k8s/yamldeploy.md
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/authoring-yaml.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/setup-devel.md
- k8s/localkubeconfig.md
#- k8s/access-eks-cluster.md
- k8s/accessinternal.md
#- k8s/kubectlproxy.md
- exercises/localcluster-details.md
- # DAY 5
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/volumes.md
- k8s/configuration.md
- k8s/secrets.md
- k8s/resource-limits.md
- k8s/helm-intro.md
- k8s/helm-chart-format.md
- k8s/helm-create-basic-chart.md
- k8s/helm-create-better-chart.md
- shared/thankyou.md
- # EXTRA
- |
# (Extra Docker content)
- containers/Start_And_Attach.md
- containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Advanced_Dockerfiles.md
- containers/Network_Drivers.md
- # EXTRA
- |
# (Extra Kubernetes content)
- k8s/batch-jobs.md
- k8s/netpol.md
- k8s/authn-authz.md
- k8s/statefulsets.md
- k8s/consul.md
- k8s/pv-pvc-sc.md
- k8s/volume-claim-templates.md

View File

@@ -2,6 +2,7 @@
#/ /kube-halfday.yml.html 200!
#/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200!
/ /derivco.html 200!
# And this allows to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack
@@ -18,6 +19,8 @@
#/next https://www.eventbrite.com/e/livestream-intensive-kubernetes-bootcamp-tickets-103262336428
/next https://skillsmatter.com/courses/700-advanced-kubernetes-concepts-workshop-jerome-petazzoni
/hi5 https://enix.io/fr/services/formation/online/
/us https://www.ardanlabs.com/live-training-events/deploying-microservices-and-traditional-applications-with-kubernetes-march-28-2022.html
/uk https://skillsmatter.com/workshops/827-deploying-microservices-and-traditional-applications-with-kubernetes-with-jerome-petazzoni
# Survey form
/please https://docs.google.com/forms/d/e/1FAIpQLSfIYSgrV7tpfBNm1hOaprjnBHgWKn5n-k5vtNXYJkOX1sRxng/viewform


@@ -0,0 +1,362 @@
# Buildkit
- "New" backend for Docker builds
- announced in 2017
- ships with Docker Engine 18.09
- enabled by default on Docker Desktop in 2021
- Huge improvements in build efficiency
- 100% compatible with existing Dockerfiles
- New features for multi-arch
- Not just for building container images
---
## Old vs New
- Classic `docker build`:
- copy whole build context
- linear execution
- `docker run` + `docker commit` + `docker run` + `docker commit`...
- Buildkit:
- copy files only when they are needed; cache them
- compute dependency graph (dependencies are expressed by `COPY`)
- parallel execution
- doesn't rely on Docker, but on internal runner/snapshotter
- can run in "normal" containers (including in Kubernetes pods)
---
## Parallel execution
- In multi-stage builds, all stages can be built in parallel
(example: https://github.com/jpetazzo/shpod; [before] and [after])
- Stages are built only when they are necessary
(i.e. if their output is tagged or used in another necessary stage)
- Files are copied from context only when needed
- Files are cached in the builder
[before]: https://github.com/jpetazzo/shpod/blob/c6efedad6d6c3dc3120dbc0ae0a6915f85862474/Dockerfile
[after]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile
---
## Turning it on and off
- On recent versions of Docker Desktop (since 2021):
*enabled by default*
- On older versions, or on Docker CE (Linux):
`export DOCKER_BUILDKIT=1`
- Turning it off:
`export DOCKER_BUILDKIT=0`
---
## Multi-arch support
- Historically, Docker only ran on x86_64 / amd64
(Intel/AMD 64 bits architecture)
- Folks have been running it on 32-bit ARM for ages
(e.g. Raspberry Pi)
- This required a Go compiler and appropriate base images
(which means changing/adapting Dockerfiles to use these base images)
- Docker [image manifest v2 schema 2][manifest] introduces multi-arch images
(`FROM alpine` automatically gets the right image for your architecture)
[manifest]: https://docs.docker.com/registry/spec/manifest-v2-2/
---
## Why?
- Raspberry Pi (32-bit and 64-bit ARM)
- Other ARM-based embedded systems (ODROID, NVIDIA Jetson...)
- Apple M1
- AWS Graviton
- Ampere Altra (e.g. on Oracle Cloud)
- ...
---
## Multi-arch builds in a nutshell
Use the `docker buildx build` command:
```bash
docker buildx build … \
--platform linux/amd64,linux/arm64,linux/arm/v7,linux/386 \
[--tag jpetazzo/hello --push]
```
- Requires all base images to be available for these platforms
- Must not use binary downloads with hard-coded architectures!
(streamlining a Dockerfile for multi-arch: [before], [after])
[before]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile
[after]: https://github.com/jpetazzo/shpod/blob/c50789e662417b34fea6f5e1d893721d66d265b7/Dockerfile
---
## Native vs emulated vs cross
- Native builds:
*aarch64 machine running aarch64 programs building aarch64 images/binaries*
- Emulated builds:
*x86_64 machine running aarch64 programs building aarch64 images/binaries*
- Cross builds:
*x86_64 machine running x86_64 programs building aarch64 images/binaries*
---
## Native
- Dockerfiles are (relatively) simple to write
(nothing special to do to handle multi-arch; just avoid hard-coded archs)
- Best performance
- Requires "exotic" machines
- Requires setting up a build farm
---
## Emulated
- Dockerfiles are (relatively) simple to write
- Emulation performance can vary
(from "OK" to "ouch this is slow")
- Emulation isn't always perfect
(weird bugs/crashes are rare but can happen)
- Doesn't require special machines
- Supports arbitrary architectures thanks to QEMU
---
## Cross
- Dockerfiles are more complicated to write
- Requires cross-compilation toolchains
- Performance is good
- Doesn't require special machines
---
## Native builds
- Requires base images to be available
- To view available architectures for an image:
```bash
regctl manifest get --list <imagename>
docker manifest inspect <imagename>
```
- Nothing special to do, *except* when downloading binaries!
```
https://releases.hashicorp.com/terraform/1.1.5/terraform_1.1.5_linux_`amd64`.zip
```
---
## Finding the right architecture
`uname -m` → armv7l, aarch64, i686, x86_64
`GOARCH` (from `go env`) → arm, arm64, 386, amd64
In Dockerfile, add `ARG TARGETARCH` (or `ARG TARGETPLATFORM`)
- `TARGETARCH` matches `GOARCH`
- `TARGETPLATFORM` → linux/arm/v7, linux/arm64, linux/386, linux/amd64
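As a sketch, here is how `TARGETARCH` could drive the Terraform download shown earlier (version and paths are illustrative; `TARGETARCH` is only set automatically when building with buildx):

```dockerfile
FROM alpine
# TARGETARCH is provided by buildx (e.g. amd64, arm64)
ARG TARGETARCH
RUN apk add --no-cache unzip
# Terraform release names use GOARCH-style names, so TARGETARCH fits directly
RUN wget https://releases.hashicorp.com/terraform/1.1.5/terraform_1.1.5_linux_${TARGETARCH}.zip \
 && unzip terraform_1.1.5_linux_${TARGETARCH}.zip -d /usr/local/bin \
 && rm terraform_1.1.5_linux_${TARGETARCH}.zip
```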
---
class: extra-details
## Welp
Sometimes, binary releases be like:
```
Linux_arm64.tar.gz
Linux_ppc64le.tar.gz
Linux_s390x.tar.gz
Linux_x86_64.tar.gz
```
This needs a bit of custom mapping.
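A hedged sketch of such a mapping, as a small shell helper (the function name and asset names are ours, not from any particular project):

```bash
# Map Docker's TARGETARCH values to the uname-style release names above.
# (Illustrative only; extend the case statement for other architectures.)
arch_to_asset() {
  case "$1" in
    amd64)   echo "Linux_x86_64.tar.gz" ;;
    arm64)   echo "Linux_arm64.tar.gz" ;;
    ppc64le) echo "Linux_ppc64le.tar.gz" ;;
    s390x)   echo "Linux_s390x.tar.gz" ;;
    *) echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}
```

It could then be used in a `RUN` step, e.g. `wget .../$(arch_to_asset $TARGETARCH)`.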
---
## Emulation
- Leverages `binfmt_misc` and QEMU on Linux
- Enabling:
```bash
docker run --rm --privileged aptman/qus -s -- -p
```
- Disabling:
```bash
docker run --rm --privileged aptman/qus -- -r
```
- Checking status:
```bash
ls -l /proc/sys/fs/binfmt_misc
```
---
class: extra-details
## How it works
- `binfmt_misc` lets us register _interpreters_ for binaries, e.g.:
- [DOSBox][dosbox] for DOS programs
- [Wine][wine] for Windows programs
- [QEMU][qemu] for Linux programs for other architectures
- When we try to execute e.g. a SPARC binary on our x86_64 machine:
- `binfmt_misc` detects the binary format and invokes `qemu-<arch> the-binary ...`
- QEMU translates SPARC instructions to x86_64 instructions
- system calls go straight to the kernel
[dosbox]: https://www.dosbox.com/
[QEMU]: https://www.qemu.org/
[wine]: https://www.winehq.org/
---
class: extra-details
## QEMU registration
- The `aptman/qus` image mentioned earlier contains static QEMU builds
- It registers all these interpreters with the kernel
- For more details, check:
- https://github.com/dbhi/qus
- https://dbhi.github.io/qus/
---
## Cross-compilation
- Cross-compilation is about 10x faster than emulation
(non-scientific benchmarks!)
- In Dockerfile, add:
`ARG BUILDARCH BUILDPLATFORM TARGETARCH TARGETPLATFORM`
- Can use `FROM --platform=$BUILDPLATFORM <image>`
- Then use `$TARGETARCH` or `$TARGETPLATFORM`
(e.g. for Go, `export GOARCH=$TARGETARCH`)
- Check [tonistiigi/xx][xx] and [Toni's blog][toni] for some amazing cross tools!
[xx]: https://github.com/tonistiigi/xx
[toni]: https://medium.com/@tonistiigi/faster-multi-platform-builds-dockerfile-cross-compilation-guide-part-1-ec087c719eaf
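A minimal sketch of a cross-compiling Dockerfile for a Go program (image tag and paths are illustrative):

```dockerfile
# The build stage runs natively on the build machine...
FROM --platform=$BUILDPLATFORM golang:1.18 AS build
ARG TARGETARCH
WORKDIR /src
COPY . .
# ...but compiles for the target architecture.
RUN CGO_ENABLED=0 GOOS=linux GOARCH=$TARGETARCH go build -o /hello .

# The final stage uses the target-architecture base image.
FROM alpine
COPY --from=build /hello /usr/local/bin/hello
CMD ["hello"]
```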
---
## Checking runtime capabilities
Build and run the following Dockerfile:
```dockerfile
FROM --platform=linux/amd64 busybox AS amd64
FROM --platform=linux/arm64 busybox AS arm64
FROM --platform=linux/arm/v7 busybox AS arm32
FROM --platform=linux/386 busybox AS ia32
FROM alpine
RUN apk add file
WORKDIR /root
COPY --from=amd64 /bin/busybox /root/amd64/busybox
COPY --from=arm64 /bin/busybox /root/arm64/busybox
COPY --from=arm32 /bin/busybox /root/arm32/busybox
COPY --from=ia32 /bin/busybox /root/ia32/busybox
CMD for A in *; do echo "$A => $($A/busybox uname -a)"; done
```
It will indicate which executables can be run on your engine.
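For instance (assuming emulation has been set up as described earlier), it could be built and run with something like:

```bash
docker buildx build --load -t archtest .
docker run --rm archtest
```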
---
## More than builds
- Buildkit is also used in other systems:
- [Earthly] - generic repeatable build pipelines
- [Dagger] - CICD pipelines that run anywhere
- and more!
[Earthly]: https://earthly.dev/
[Dagger]: https://dagger.io/


@@ -58,7 +58,7 @@ class: pic
- it uses different concepts (Compose services ≠ Kubernetes services)
- it needs a Docker Engine (although containerd support might be coming)
---
@@ -96,7 +96,7 @@ Compose will be smart, and only recreate the containers that have changed.
When working with interpreted languages:
- don't rebuild each time
- leverage a `volumes` section instead
@@ -250,6 +250,24 @@ For the full list, check: https://docs.docker.com/compose/compose-file/
---
## Configuring a Compose stack
- Follow [12-factor app configuration principles][12factorconfig]
(configure the app through environment variables)
- Provide (in the repo) a default environment file suitable for development
(no secret or sensitive value)
- Copy the default environment file to `.env` and tweak it
(or: provide a script to generate `.env` from a template)
[12factorconfig]: https://12factor.net/config
---
## Running multiple copies of a stack
- Copy the stack in two different directories, e.g. `front` and `frontcopy`
@@ -331,7 +349,7 @@ Use `docker-compose down -v` to remove everything including volumes.
- The data in the old container is lost...
- ...Except if the container is using a *volume*
- Compose will then re-attach that volume to the new container
@@ -343,6 +361,102 @@ Use `docker-compose down -v` to remove everything including volumes.
---
## Gotchas with volumes
- Unfortunately, Docker volumes don't have labels or metadata
- Compose tracks volumes thanks to their associated container
- If the container is deleted, the volume gets orphaned
- Example: `docker-compose down && docker-compose up`
- the old volume still exists, detached from its container
- a new volume gets created
- `docker-compose down -v`/`--volumes` deletes volumes
(but **not** `docker-compose down && docker-compose down -v`!)
---
## Managing volumes explicitly
Option 1: *named volumes*
```yaml
services:
app:
volumes:
- data:/some/path
volumes:
data:
```
- Volume will be named `<project>_data`
- It won't be orphaned with `docker-compose down`
- It will correctly be removed with `docker-compose down -v`
---
## Managing volumes explicitly
Option 2: *relative paths*
```yaml
services:
app:
volumes:
- ./data:/some/path
```
- Makes it easy to colocate the app and its data
(for migration, backups, disk usage accounting...)
- Won't be removed by `docker-compose down -v`
---
## Managing complex stacks
- Compose provides multiple features to manage complex stacks
(with many containers)
- `-f`/`--file`/`$COMPOSE_FILE` can be a list of Compose files
(separated by `:` and merged together)
- Services can be assigned to one or more *profiles*
- `--profile`/`$COMPOSE_PROFILES` can be a list of comma-separated profiles
(see [Using service profiles][profiles] in the Compose documentation)
- These variables can be set in `.env`
[profiles]: https://docs.docker.com/compose/profiles/
---
## Dependencies
- A service can have a `depends_on` section
(listing one or more other services)
- This is used when bringing up individual services
(e.g. `docker-compose up blah` or `docker-compose run foo`)
⚠️ It doesn't make a service "wait" for another one to be up!
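To actually wait, a `healthcheck` can be combined with the long-form `depends_on` (supported in recent Compose versions; the service names below are made up):

```yaml
services:
  web:
    build: .
    depends_on:
      db:
        # Don't start "web" until "db" reports healthy
        condition: service_healthy
  db:
    image: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
```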
---
class: extra-details
## A bit of history and trivia


@@ -111,7 +111,7 @@ CMD ["python", "app.py"]
RUN wget http://.../foo.tar.gz \
&& tar -zxf foo.tar.gz \
&& mv foo/fooctl /usr/local/bin \
 && rm -rf foo foo.tar.gz
...
```


@@ -317,9 +317,11 @@ class: extra-details
## Trash your servers and burn your code
*(This is the title of a
[2013 blog post][immutable-deployments]
by Chad Fowler, where he explains the concept of immutable infrastructure.)*
[immutable-deployments]: https://web.archive.org/web/20160305073617/http://chadfowler.com/blog/2013/06/23/immutable-deployments/
--
* Let's majorly mess up our container.


@@ -32,6 +32,432 @@ The last item should be done for educational purposes only!
---
# Control groups
- Control groups provide resource *metering* and *limiting*.
- This covers a number of "usual suspects" like:
- memory
- CPU
- block I/O
- network (with cooperation from iptables/tc)
- And a few exotic ones:
- huge pages (a special way to allocate memory)
- RDMA (resources specific to InfiniBand / remote memory transfer)
---
## Crowd control
- Control groups also make it possible to group processes for special operations:
- freezer (conceptually similar to a "mass-SIGSTOP/SIGCONT")
- perf_event (gather performance events on multiple processes)
- cpuset (limit or pin processes to specific CPUs)
- There is a "pids" cgroup to limit the number of processes in a given group.
- There is also a "devices" cgroup to control access to device nodes.
(i.e. everything in `/dev`.)
---
## Generalities
- Cgroups form a hierarchy (a tree).
- We can create nodes in that hierarchy.
- We can associate limits to a node.
- We can move a process (or multiple processes) to a node.
- The process (or processes) will then respect these limits.
- We can check the current usage of each node.
- In other words: limits are optional (if we only want accounting).
- When a process is created, it is placed in its parent's groups.
---
## Example
The numbers are PIDs.
The names are the names of our nodes (arbitrarily chosen).
.small[
```bash
cpu memory
├── batch ├── stateless
│ ├── cryptoscam │ ├── 25
│ │ └── 52 │ ├── 26
│ └── ffmpeg │ ├── 27
│ ├── 109 │ ├── 52
│ └── 88 │ ├── 109
└── realtime │ └── 88
├── nginx └── databases
│ ├── 25 ├── 1008
│ ├── 26 └── 524
│ └── 27
├── postgres
│ └── 524
└── redis
└── 1008
```
]
---
class: extra-details, deep-dive
## Cgroups v1 vs v2
- Cgroups v1 are available on all systems (and widely used).
- Cgroups v2 are a huge refactor.
(Development started in Linux 3.10, released in 4.5.)
- Cgroups v2 have a number of differences:
- single hierarchy (instead of one tree per controller),
- processes can only be on leaf nodes (not inner nodes),
- and of course many improvements / refactorings.
- Cgroups v2 enabled by default on Fedora 31 (2019), Ubuntu 21.10...
---
## Memory cgroup: accounting
- Keeps track of pages used by each group:
- file (read/write/mmap from block devices),
- anonymous (stack, heap, anonymous mmap),
- active (recently accessed),
- inactive (candidate for eviction).
- Each page is "charged" to a group.
- Pages can be shared across multiple groups.
(Example: multiple processes reading from the same files.)
- To view all the counters kept by this cgroup:
```bash
$ cat /sys/fs/cgroup/memory/memory.stat
```
---
## Memory cgroup v1: limits
- Each group can have (optional) hard and soft limits.
- Limits can be set for different kinds of memory:
- physical memory,
- kernel memory,
- total memory (including swap).
---
## Soft limits and hard limits
- Soft limits are not enforced.
(But they influence reclaim under memory pressure.)
- Hard limits *cannot* be exceeded:
- if a group of processes exceeds a hard limit,
- and if the kernel cannot reclaim any memory,
- then the OOM (out-of-memory) killer is triggered,
- and processes are killed until memory gets below the limit again.
---
class: extra-details, deep-dive
## Avoiding the OOM killer
- For some workloads (databases and stateful systems), killing
processes because we run out of memory is not acceptable.
- The "oom-notifier" mechanism helps with that.
- When "oom-notifier" is enabled and a hard limit is exceeded:
- all processes in the cgroup are frozen,
- a notification is sent to user space (instead of killing processes),
- user space can then raise limits, migrate containers, etc.,
- once the memory usage is below the hard limit, unfreeze the cgroup.
---
class: extra-details, deep-dive
## Overhead of the memory cgroup
- Each time a process grabs or releases a page, the kernel updates counters.
- This adds some overhead.
- Unfortunately, this cannot be enabled/disabled per process.
- It has to be done system-wide, at boot time.
- Also, when multiple groups use the same page:
- only the first group gets "charged",
- but if it stops using it, the "charge" is moved to another group.
---
class: extra-details, deep-dive
## Setting up a limit with the memory cgroup
Create a new memory cgroup:
```bash
$ CG=/sys/fs/cgroup/memory/onehundredmegs
$ sudo mkdir $CG
```
Limit it to approximately 100MB of memory usage:
```bash
$ sudo tee $CG/memory.memsw.limit_in_bytes <<< 100000000
```
Move the current process to that cgroup:
```bash
$ sudo tee $CG/tasks <<< $$
```
The current process *and all its future children* are now limited.
(Confused about `<<<`? Look at the next slide!)
---
class: extra-details, deep-dive
## What's `<<<`?
- This is a "here string". (It is a non-POSIX shell extension.)
- The following commands are equivalent:
```bash
foo <<< hello
```
```bash
echo hello | foo
```
```bash
foo <<EOF
hello
EOF
```
- Why did we use that?
---
class: extra-details, deep-dive
## Writing to cgroups pseudo-files requires root
Instead of:
```bash
sudo tee $CG/tasks <<< $$
```
We could have done:
```bash
sudo sh -c "echo $$ > $CG/tasks"
```
The following commands, however, would be invalid:
```bash
sudo echo $$ > $CG/tasks
```
```bash
sudo -i # (or su)
echo $$ > $CG/tasks
```
---
class: extra-details, deep-dive
## Testing the memory limit
Start the Python interpreter:
```bash
$ python
Python 3.6.4 (default, Jan 5 2018, 02:35:40)
[GCC 7.2.1 20171224] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
Allocate 80 megabytes:
```python
>>> s = "!" * 1000000 * 80
```
Add 20 megabytes more:
```python
>>> t = "!" * 1000000 * 20
Killed
```
---
## Memory cgroup v2: limits
- `memory.min` = hard reservation (guaranteed memory for this cgroup)
- `memory.low` = soft reservation ("*try* not to reclaim memory if we're below this")
- `memory.high` = soft limit (aggressively reclaim memory; don't trigger OOMK)
- `memory.max` = hard limit (triggers OOMK)
- `memory.swap.high` = aggressively reclaim memory when using that much swap
- `memory.swap.max` = prevent using more swap than this
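A hedged sketch of the v2 equivalent of the earlier v1 example (using the pseudo-files listed above; assumes a v2 hierarchy mounted at `/sys/fs/cgroup`):

```bash
CG=/sys/fs/cgroup/onehundredmegs
sudo mkdir $CG
# Hard limit (triggers the OOM killer when exceeded):
sudo tee $CG/memory.max <<< 100000000
# Move the current shell into the new cgroup:
sudo tee $CG/cgroup.procs <<< $$
```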
---
## CPU cgroup
- Keeps track of CPU time used by a group of processes.
(This is easier and more accurate than `getrusage` and `/proc`.)
- Keeps track of usage per CPU as well.
(i.e., "this group of processes used X seconds of CPU0 and Y seconds of CPU1".)
- Allows setting relative weights used by the scheduler.
---
## Cpuset cgroup
- Pin groups to specific CPU(s).
- Use-case: reserve CPUs for specific apps.
- Warning: make sure that "default" processes aren't using all CPUs!
- CPU pinning can also avoid performance loss due to cache flushes.
- This is also relevant for NUMA systems.
- Provides extra dials and knobs.
(Per zone memory pressure, process migration costs...)
---
## Blkio cgroup
- Keeps track of I/Os for each group:
- per block device
- read vs write
- sync vs async
- Set throttle (limits) for each group:
- per block device
- read vs write
- ops vs bytes
- Set relative weights for each group.
- Note: most writes go through the page cache.
<br/>(So classic writes will appear to be unthrottled at first.)
---
## Net_cls and net_prio cgroup
- Only works for egress (outgoing) traffic.
- Automatically set traffic class or priority
for traffic generated by processes in the group.
- Net_cls will assign traffic to a class.
- Classes have to be matched with tc or iptables, otherwise traffic just flows normally.
- Net_prio will assign traffic to a priority.
- Priorities are used by queuing disciplines.
---
## Devices cgroup
- Controls what the group can do on device nodes
- Permissions include read/write/mknod
- Typical use:
- allow `/dev/{tty,zero,random,null}` ...
- deny everything else
- A few interesting nodes:
- `/dev/net/tun` (network interface manipulation)
- `/dev/fuse` (filesystems in user space)
- `/dev/kvm` (VMs in containers, yay inception!)
- `/dev/dri` (GPU)
---
# Namespaces
- Provide processes with their own view of the system.
@@ -46,6 +472,8 @@ The last item should be done for educational purposes only!
- uts
- ipc
- user
- time
- cgroup
(We are going to detail them individually.)
@@ -619,411 +1047,25 @@ class: extra-details, deep-dive
---
## Time namespace
- Virtualize time
- Expose a slower/faster clock to some processes
  (for e.g. simulation purposes)
- Expose a clock offset to some processes
  (simulation, suspend/restore...)
---
## Cgroup namespace
- Virtualize access to `/proc/<PID>/cgroup`
- Lets containerized processes view their relative cgroup tree
---
@@ -1126,8 +1168,8 @@ See `man capabilities` for the full list and details.
???
:EN:Containers internals
:EN:- Control groups (cgroups)
:EN:- Linux kernel namespaces
:FR:Fonctionnement interne des conteneurs
:FR:- Les "control groups" (cgroups)
:FR:- Les namespaces du noyau Linux


@@ -109,7 +109,7 @@ class: extra-details
- Example: [ctr.run](https://ctr.run/)
.lab[
- Use ctr.run to automatically build a container image and run it:
```bash


@@ -28,7 +28,7 @@ class: self-paced
- Likewise, it will take more than merely *reading* these slides
to make you an expert
- These slides include *tons* of demos, exercises, and examples
- They assume that you have access to a machine running Docker

slides/derivco.html Normal file

@@ -0,0 +1,27 @@
<?xml version="1.0"?>
<html>
<head>
<style>
td {
background: #ccc;
padding: 1em;
}
</style>
</head>
<body>
<table>
<tr>
<td>Session starting May 3rd (4 days)</td>
<td>
<a href="03.yml.html">Docker and Kubernetes</a>
</td>
</tr>
<tr>
<td>Session starting May 23rd (5 days)</td>
<td>
<a href="23.yml.html">Docker and Kubernetes</a>
</td>
</tr>
</table>
</body>
</html>


@@ -4,8 +4,6 @@
(we will use the `rng` service in the dockercoins app)
- Observe the correct behavior of the readiness probe
- See what happens when the load increases
- Observe the behavior of the liveness probe
(spoiler alert: it involves timeouts!)


@@ -2,34 +2,85 @@
- We want to add healthchecks to the `rng` service in dockercoins
- The `rng` service exhibits an interesting behavior under load:
  *its latency increases (which will cause probes to time out!)*
- We want to see:
  - what happens when the readiness probe fails
  - what happens when the liveness probe fails
  - how to set "appropriate" probes and probe parameters
---
## Setup
- First, deploy a new copy of dockercoins
  (for instance, in a brand new namespace)
- Pro tip #1: ping (e.g. with `httping`) the `rng` service at all times
  - it should initially show a few milliseconds latency
  - that will increase when we scale up
  - it will also let us detect when the service goes "boom"
- Pro tip #2: also keep an eye on the web UI
---
## Readiness
- Add a readiness probe to `rng`
- this requires editing the pod template in the Deployment manifest
- use a simple HTTP check on the `/` route of the service
- keep all other parameters (timeouts, thresholds...) at their default values
- Check what happens when deploying an invalid image for `rng` (e.g. `alpine`)
*(If the probe was set up correctly, the app will continue to work,
because Kubernetes won't switch over the traffic to the `alpine` containers,
because they don't pass the readiness probe.)*
---
## Readiness under load
- Then roll back `rng` to the original image
- Check what happens when we scale up the `worker` Deployment to 15+ workers
(get the latency above 1 second)
*(We should now observe intermittent unavailability of the service, i.e. every
30 seconds it will be unreachable for a bit, then come back, then go away again, etc.)*
---
## Liveness
- Now replace the readiness probe with a liveness probe
- What happens now?
*(At first the behavior looks the same as with the readiness probe:
service becomes unreachable, then reachable again, etc.; but there is
a significant difference behind the scenes. What is it?)*
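
As a sketch (same assumptions about port and path as before), the probe definition itself barely changes; only the key differs:

```yaml
# Hypothetical sketch: same HTTP check, but as a liveness probe.
# When a liveness probe fails, the kubelet restarts the container
# (instead of merely removing the pod from the Service endpoints).
livenessProbe:
  httpGet:
    path: /
    port: 80
```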
---
## Readiness and liveness
- Bonus questions!
- What happens if we enable both probes at the same time?
- What strategies can we use so that both probes are useful?


@@ -16,7 +16,7 @@
## Goal
- We want to be able to access the web app using an URL like:
- We want to be able to access the web app using a URL like:
http://webapp.localdev.me


@@ -1,3 +1,5 @@
⚠️ BROKEN EXERCISE - DO NOT USE
## Exercise — Ingress Secret Policy
*Implement policy to limit impact of ingress controller vulnerabilities.*


@@ -1,3 +1,5 @@
⚠️ BROKEN EXERCISE - DO NOT USE
# Exercise — Ingress Secret Policy
- Most ingress controllers have access to all Secrets


@@ -0,0 +1,9 @@
## Exercise — Generating Ingress With Kyverno
- When a Service gets created, automatically generate an Ingress
- Step 1: expose all services with a hard-coded domain name
- Step 2: only expose services that have a port named `http`
- Step 3: configure the domain name with a per-namespace ConfigMap


@@ -0,0 +1,33 @@
# Exercise — Generating Ingress With Kyverno
When a Service gets created...
*(for instance, Service `blue` in Namespace `rainbow`)*
...Automatically generate an Ingress.
*(for instance, with host name `blue.rainbow.MYDOMAIN.COM`)*
---
## Goals
- Step 1: expose all services with a hard-coded domain name
- Step 2: only expose services that have a port named `http`
- Step 3: configure the domain name with a per-namespace ConfigMap
(e.g. `kubectl create configmap ingress-domain-name --from-literal=domain=1.2.3.4.nip.io`)
---
## Hints
- We want to use a Kyverno `generate` ClusterPolicy
- For step 1, check [Generate Resources](https://kyverno.io/docs/writing-policies/generate/) documentation
- For step 2, check [Preconditions](https://kyverno.io/docs/writing-policies/preconditions/) documentation
- For step 3, check [External Data Sources](https://kyverno.io/docs/writing-policies/external-data-sources/) documentation
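
A rough, untested sketch of what a step-1 policy might look like (the policy name, host pattern, and `MYDOMAIN.COM` placeholder are all illustrative; the variable syntax follows Kyverno's `generate` documentation):

```yaml
# Hypothetical sketch: generate an Ingress for every new Service,
# with a hard-coded domain (step 1 only; no precondition on port names).
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-ingress
spec:
  rules:
  - name: ingress-for-service
    match:
      any:
      - resources:
          kinds: [ Service ]
    generate:
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      name: "{{request.object.metadata.name}}"
      namespace: "{{request.object.metadata.namespace}}"
      data:
        spec:
          rules:
          - host: "{{request.object.metadata.name}}.{{request.object.metadata.namespace}}.MYDOMAIN.COM"
            http:
              paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: "{{request.object.metadata.name}}"
                    port:
                      name: http
```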


@@ -0,0 +1,9 @@
## Exercise — Terraform Node Pools
- Write a Terraform configuration to deploy a cluster
- The cluster should have two node pools with autoscaling
- Deploy two apps, each using exclusively one node pool
- Bonus: deploy an app balanced across both node pools


@@ -0,0 +1,69 @@
# Exercise — Terraform Node Pools
- Write a Terraform configuration to deploy a cluster
- The cluster should have two node pools with autoscaling
- Deploy two apps, each using exclusively one node pool
- Bonus: deploy an app balanced across both node pools
---
## Cluster deployment
- Write a Terraform configuration to deploy a cluster
- We want to have two node pools with autoscaling
- Example for sizing:
- 4 GB / 1 CPU per node
- pools of 1 to 4 nodes
---
## Cluster autoscaling
- Deploy an app on the cluster
(you can use `nginx`, `jpetazzo/color`...)
- Set a resource request (e.g. 1 GB RAM)
- Scale up and verify that the autoscaler kicks in
---
## Pool isolation
- We want to deploy two apps
- The first app should be deployed exclusively on the first pool
- The second app should be deployed exclusively on the second pool
- Check the next slide for hints!
---
## Hints
- One solution involves adding a `nodeSelector` to the pod templates
- Another solution involves adding:
- `taints` to the node pools
- matching `tolerations` to the pod templates
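
Both hints can be sketched in a pod template like this (the `pool=first` label and taint are assumptions; use whatever labels/taints your Terraform configuration puts on the node pools):

```yaml
# Hypothetical sketch: pinning a pod template to one node pool.
spec:
  # Solution 1: nodeSelector on a label carried by the pool's nodes
  nodeSelector:
    pool: first
  # Solution 2 (if the pool is tainted with pool=first:NoSchedule):
  tolerations:
  - key: pool
    operator: Equal
    value: first
    effect: NoSchedule
```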
---
## Balancing
- Step 1: make sure that the pools are not balanced
- Step 2: deploy a new app, check that it goes to the emptiest pool
- Step 3: update the app so that it balances (as much as possible) between pools
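
One way to sketch step 3 is with topology spread constraints (assuming the nodes carry a `pool` label and the pods a matching `app` label; both names are illustrative):

```yaml
# Hypothetical sketch: spread the app's pods as evenly as possible
# across node pools, identified here by a "pool" node label.
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: pool
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: myapp
```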

slides/find-unmerged-changes.sh Executable file

@@ -0,0 +1,60 @@
#!/bin/sh
# The materials for a given training live in their own branch.
# Sometimes, we write custom content (or simply new content) for a training,
# and that content doesn't get merged back to main. This script tries to
# detect that with the following heuristics:
# - list all remote branches
# - for each remote branch, list the changes that weren't merged into main
# (using "diff main...$BRANCH", three dots)
# - ignore a bunch of training-specific files that change all the time anyway
# - for the remaining files, compute the diff between main and the branch
# (using "diff main..$BRANCH", two dots)
# - ignore changes of less than 10 lines
# - also ignore a few red herrings
# - display whatever is left
# For "git diff" (in the filter function) to work correctly, we must be
# at the root of the repo.
cd "$(git rev-parse --show-toplevel)"
BRANCHES=$(git branch -r | grep -v origin/HEAD | grep origin/2)
filter() {
threshold=10
while read filename; do
case $filename in
# Generic training-specific files
slides/*.html) continue;;
slides/*.yml) continue;;
slides/logistics*.md) continue;;
# Specific content that can be ignored
#slides/containers/Local_Environment.md) threshold=100;;
# Content that was moved/refactored enough to confuse us
slides/containers/Local_Environment.md) threshold=100;;
slides/exercises.md) continue;;
slides/k8s/batch-jobs) threshold=20;;
# Renames
*/{*}*) continue;;
esac
git diff --find-renames --numstat main..$BRANCH -- "$filename" | {
# If the files are identical, the diff will be empty, and "read" will fail.
read plus minus filename || return
# Ignore binary files (FIXME though?)
if [ "$plus" = "-" ]; then
return
fi
diff=$((plus-minus))
if [ "$diff" -gt "$threshold" ]; then
echo git diff main..$BRANCH -- $filename
fi
}
done
}
for BRANCH in $BRANCHES; do
if FILES=$(git diff --find-renames --name-only main...$BRANCH | filter | grep .); then
echo "🌳 $BRANCH:"
echo "$FILES"
fi
done


@@ -1,70 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
-
#- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
#- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
#- containers/Start_And_Attach.md
- containers/Naming_And_Inspecting.md
#- containers/Labels.md
- containers/Getting_Inside.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Container_Networking_Basics.md
#- containers/Network_Drivers.md
- containers/Local_Development_Workflow.md
- containers/Container_Network_Model.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
- containers/Multi_Stage_Builds.md
#- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
#- containers/Docker_Machine.md
#- containers/Advanced_Dockerfiles.md
#- containers/Init_Systems.md
#- containers/Application_Configuration.md
#- containers/Logging.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Container_Engines.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md
#- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md


@@ -1,71 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
content:
- shared/title.md
# - shared/logistics.md
- containers/intro.md
- shared/about-slides.md
#- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- - containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- - containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- containers/Docker_Machine.md
- - containers/Advanced_Dockerfiles.md
- containers/Init_Systems.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Resource_Limits.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
- - containers/Container_Engines.md
- containers/Pods_Anatomy.md
- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md


@@ -1,79 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
content:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/chat-room-im.md
#- shared/chat-room-slack.md
#- shared/chat-room-zoom-meeting.md
#- shared/chat-room-zoom-webinar.md
- shared/toc.md
- # DAY 1
- containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Initial_Images.md
-
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
-
- containers/Dockerfile_Tips.md
- containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Exercise_Dockerfile_Advanced.md
-
- containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Start_And_Attach.md
- containers/Getting_Inside.md
- containers/Resource_Limits.md
- # DAY 2
- containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
-
- containers/Local_Development_Workflow.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
-
- containers/Installing_Docker.md
- containers/Container_Engines.md
- containers/Init_Systems.md
- containers/Advanced_Dockerfiles.md
-
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Orchestration_Overview.md
-
- shared/thankyou.md
- containers/links.md
#-
#- containers/Docker_Machine.md
#- containers/Ambassadors.md
#- containers/Namespaces_Cgroups.md
#- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
#- containers/Pods_Anatomy.md
#- containers/Ecosystem.md


@@ -32,7 +32,7 @@
- You're welcome to use whatever you like (e.g. AWS profiles)
.exercise[
.lab[
- Set the AWS region, API access key, and secret key:
```bash
@@ -58,7 +58,7 @@
- register it in our kubeconfig file
.exercise[
.lab[
- Update our kubeconfig file:
```bash


@@ -20,13 +20,13 @@
## Suspension of disbelief
The exercises in this section assume that we have set up `kubectl` on our
The labs and demos in this section assume that we have set up `kubectl` on our
local machine in order to access a remote cluster.
We will therefore show how to access services and pods of the remote cluster,
from our local machine.
You can also run these exercises directly on the cluster (if you haven't
You can also run these commands directly on the cluster (if you haven't
installed and set up `kubectl` locally).
Running commands locally will be less useful
@@ -58,7 +58,7 @@ installed and set up `kubectl` to communicate with your cluster.
- Let's access the `webui` service through `kubectl proxy`
.exercise[
.lab[
- Run an API proxy in the background:
```bash
@@ -101,7 +101,7 @@ installed and set up `kubectl` to communicate with your cluster.
- Let's access our remote Redis server
.exercise[
.lab[
- Forward connections from local port 10000 to remote port 6379:
```bash


@@ -198,7 +198,7 @@ Some examples ...
(the Node "echo" app, the Flask app, and one ngrok tunnel for each of them)
.exercise[
.lab[
- Go to the webhook directory:
```bash
@@ -244,7 +244,7 @@ class: extra-details
- We need to update the configuration with the correct `url`
.exercise[
.lab[
- Edit the webhook configuration manifest:
```bash
@@ -271,7 +271,7 @@ class: extra-details
(so if the webhook server is down, we can still create pods)
.exercise[
.lab[
- Register the webhook:
```bash
@@ -288,7 +288,7 @@ It is strongly recommended to tail the logs of the API server while doing that.
- Let's create a pod and try to set a `color` label
.exercise[
.lab[
- Create a pod named `chroma`:
```bash
@@ -328,7 +328,7 @@ Note: the webhook doesn't do anything (other than printing the request payload).
## Update the webhook configuration
.exercise[
.lab[
- First, check the ngrok URL of the tunnel for the Flask app:
```bash
@@ -395,7 +395,7 @@ Note: the webhook doesn't do anything (other than printing the request payload).
## Let's get to work!
.exercise[
.lab[
- Make sure we're in the right directory:
```bash
@@ -424,7 +424,7 @@ Note: the webhook doesn't do anything (other than printing the request payload).
... we'll store it in a ConfigMap, and install dependencies on the fly
.exercise[
.lab[
- Load the webhook source in a ConfigMap:
```bash
@@ -446,7 +446,7 @@ Note: the webhook doesn't do anything (other than printing the request payload).
(of course, there are plenty others options; e.g. `cfssl`)
.exercise[
.lab[
- Generate a self-signed certificate:
```bash
@@ -470,7 +470,7 @@ Note: the webhook doesn't do anything (other than printing the request payload).
- Let's reconfigure the webhook to use our Service instead of ngrok
.exercise[
.lab[
- Edit the webhook configuration manifest:
```bash
@@ -504,7 +504,7 @@ Note: the webhook doesn't do anything (other than printing the request payload).
Shell to the rescue!
.exercise[
.lab[
- Load up our cert and encode it in base64:
```bash


@@ -66,7 +66,7 @@
- We'll ask `kubectl` to show us the exact requests that it's making
.exercise[
.lab[
- Check the URI for a cluster-scope, "core" resource, e.g. a Node:
```bash
@@ -122,7 +122,7 @@ class: extra-details
- What about namespaced resources?
.exercise[
.lab[
- Check the URI for a namespaced, "core" resource, e.g. a Service:
```bash
@@ -169,7 +169,7 @@ class: extra-details
## Accessing a subresource
.exercise[
.lab[
- List `kube-proxy` pods:
```bash
@@ -200,7 +200,7 @@ command=echo&command=hello&command=world&container=kube-proxy&stderr=true&stdout
- There are at least three useful commands to introspect the API server
.exercise[
.lab[
- List resource types, their group, kind, short names, and scope:
```bash
@@ -249,7 +249,7 @@ command=echo&command=hello&command=world&container=kube-proxy&stderr=true&stdout
The following assumes that `metrics-server` is deployed on your cluster.
.exercise[
.lab[
- Check that the metrics.k8s.io is registered with `metrics-server`:
```bash
@@ -271,7 +271,7 @@ The following assumes that `metrics-server` is deployed on your cluster.
- We can have multiple resources with the same name
.exercise[
.lab[
- Look for resources named `node`:
```bash
@@ -298,7 +298,7 @@ The following assumes that `metrics-server` is deployed on your cluster.
- But we can look at the raw data (with `-o json` or `-o yaml`)
.exercise[
.lab[
- Look at NodeMetrics objects with one of these commands:
```bash
@@ -320,7 +320,7 @@ The following assumes that `metrics-server` is deployed on your cluster.
--
.exercise[
.lab[
- Display node metrics:
```bash
@@ -342,7 +342,7 @@ The following assumes that `metrics-server` is deployed on your cluster.
- Then we can register that server by creating an APIService resource
.exercise[
.lab[
- Check the definition used for the `metrics-server`:
```bash


@@ -103,7 +103,7 @@ class: extra-details
---
## `WithWaitGroup`,
## `WithWaitGroup`
- When we shutdown, tells clients (with in-flight requests) to retry


@@ -14,70 +14,6 @@ Kubernetes also relies on underlying infrastructure:
---
## Control plane location
The control plane can run:
- in containers, on the same nodes that run other application workloads
(default behavior for local clusters like [Minikube](https://github.com/kubernetes/minikube), [kind](https://kind.sigs.k8s.io/)...)
- on a dedicated node
(default behavior when deploying with kubeadm)
- on a dedicated set of nodes
([Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way); [kops](https://github.com/kubernetes/kops); also kubeadm)
- outside of the cluster
(most managed clusters like AKS, DOK, EKS, GKE, Kapsule, LKE, OKE...)
---
class: pic
![](images/control-planes/single-node-dev.svg)
---
class: pic
![](images/control-planes/managed-kubernetes.svg)
---
class: pic
![](images/control-planes/single-control-and-workers.svg)
---
class: pic
![](images/control-planes/stacked-control-plane.svg)
---
class: pic
![](images/control-planes/non-dedicated-stacked-nodes.svg)
---
class: pic
![](images/control-planes/advanced-control-plane.svg)
---
class: pic
![](images/control-planes/advanced-control-plane-split-events.svg)
---
class: pic
![Kubernetes architecture diagram: communication between components](images/k8s-arch4-thanks-luxas.png)
@@ -157,6 +93,70 @@ The kubelet agent uses a number of special-purpose protocols and interfaces, inc
---
## Control plane location
The control plane can run:
- in containers, on the same nodes that run other application workloads
(default behavior for local clusters like [Minikube](https://github.com/kubernetes/minikube), [kind](https://kind.sigs.k8s.io/)...)
- on a dedicated node
(default behavior when deploying with kubeadm)
- on a dedicated set of nodes
([Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way); [kops](https://github.com/kubernetes/kops); also kubeadm)
- outside of the cluster
(most managed clusters like AKS, DOK, EKS, GKE, Kapsule, LKE, OKE...)
---
class: pic
![](images/control-planes/single-node-dev.svg)
---
class: pic
![](images/control-planes/managed-kubernetes.svg)
---
class: pic
![](images/control-planes/single-control-and-workers.svg)
---
class: pic
![](images/control-planes/stacked-control-plane.svg)
---
class: pic
![](images/control-planes/non-dedicated-stacked-nodes.svg)
---
class: pic
![](images/control-planes/advanced-control-plane.svg)
---
class: pic
![](images/control-planes/advanced-control-plane-split-events.svg)
---
# The Kubernetes API
@@ -203,9 +203,9 @@ What does that mean?
## Let's experiment a bit!
- For the exercises in this section, connect to the first node of the `test` cluster
- For this section, connect to the first node of the `test` cluster
.exercise[
.lab[
- SSH to the first node of the test cluster
@@ -224,7 +224,7 @@ What does that mean?
- Let's create a simple object
.exercise[
.lab[
- Create a namespace with the following command:
```bash
@@ -246,7 +246,7 @@ This is equivalent to `kubectl create namespace hello`.
- Let's retrieve the object we just created
.exercise[
.lab[
- Read back our object:
```bash
@@ -354,7 +354,7 @@ class: extra-details
- The easiest way is to use `kubectl label`
.exercise[
.lab[
- In one terminal, watch namespaces:
```bash
@@ -402,7 +402,7 @@ class: extra-details
- DELETED resources
.exercise[
.lab[
- In one terminal, watch pods, displaying full events:
```bash


@@ -361,7 +361,7 @@ class: extra-details
## Listing service accounts
.exercise[
.lab[
- The resource name is `serviceaccount` or `sa` for short:
```bash
@@ -378,7 +378,7 @@ class: extra-details
## Finding the secret
.exercise[
.lab[
- List the secrets for the `default` service account:
```bash
@@ -398,7 +398,7 @@ class: extra-details
- The token is stored in the secret, wrapped with base64 encoding
.exercise[
.lab[
- View the secret:
```bash
@@ -421,7 +421,7 @@ class: extra-details
- Let's send a request to the API, without and with the token
.exercise[
.lab[
- Find the ClusterIP for the `kubernetes` service:
```bash
@@ -616,7 +616,7 @@ class: extra-details
- Nixery automatically generates images with the requested packages
.exercise[
.lab[
- Run our pod:
```bash
@@ -632,7 +632,7 @@ class: extra-details
- Normally, at this point, we don't have any API permission
.exercise[
.lab[
- Check our permissions with `kubectl`:
```bash
@@ -658,7 +658,7 @@ class: extra-details
(but again, we could call it `view` or whatever we like)
.exercise[
.lab[
- Create the new role binding:
```bash
@@ -716,7 +716,7 @@ It's important to note a couple of details in these flags...
- We should be able to *view* things, but not to *edit* them
.exercise[
.lab[
- Check our permissions with `kubectl`:
```bash


@@ -93,7 +93,7 @@
- We can use the `--dry-run=client` option
.exercise[
.lab[
- Generate the YAML for a Deployment without creating it:
```bash
@@ -128,7 +128,7 @@ class: extra-details
## The limits of `kubectl apply --dry-run=client`
.exercise[
.lab[
- Generate the YAML for a deployment:
```bash
@@ -161,7 +161,7 @@ class: extra-details
(all validation and mutation hooks will be executed)
.exercise[
.lab[
- Try the same YAML file as earlier, with server-side dry run:
```bash
@@ -200,7 +200,7 @@ class: extra-details
- `kubectl diff` does a server-side dry run, *and* shows differences
.exercise[
.lab[
- Try `kubectl diff` on the YAML that we tweaked earlier:
```bash

slides/k8s/aws-eks.md Normal file

@@ -0,0 +1,693 @@
# Amazon EKS
- Elastic Kubernetes Service
- AWS runs the Kubernetes control plane
(all we see is an API server endpoint)
- Pods can run on any combination of:
- EKS-managed nodes
- self-managed nodes
- Fargate
- Leverages and integrates with AWS services and APIs
---
## Some integrations
- Authenticate with IAM users and roles
- Associate IAM roles to Kubernetes ServiceAccounts
- Load balance traffic with ALB/ELB/NLB
- Persist data with EBS/EFS
- Label nodes with instance ID, instance type, region, AZ ...
- Pods can be "first class citizens" of VPC
---
## Pros/cons
- Fully managed control plane
- Handles deployment, upgrade, scaling of the control plane
- Available versions and features tend to lag a bit
- Doesn't fit the most demanding users
("demanding" starts somewhere between 100 and 1000 nodes)
---
## Good to know ...
- Some integrations are specific to EKS
(some authentication models)
- Many integrations are *not* specific to EKS
- The Cloud Controller Manager can run outside of EKS
(and provide LoadBalancer services, EBS volumes, and more)
---
# Provisioning clusters
- AWS console, API, CLI
- `eksctl`
- Infrastructure-as-Code
---
## AWS "native" provisioning
- AWS web console
- click-click-click!
- difficulty: low
- AWS API or CLI
- must provide subnets, ARNs
- difficulty: medium
---
## `eksctl`
- Originally developed by Weave
(back when AWS "native" provisioning wasn't very good)
- `eksctl create cluster` just works™
- Has been "adopted" by AWS
(is listed in official documentations)
---
## Infrastructure-as-Code
- Cloud Formation
- Terraform
  - [terraform-aws-eks](https://github.com/terraform-aws-modules/terraform-aws-eks) by the community
    ([example](https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/examples/basic))
  - [terraform-provider-aws](https://github.com/hashicorp/terraform-provider-aws) by Hashicorp
    ([example](https://github.com/hashicorp/terraform-provider-aws/tree/main/examples/eks-getting-started))
- [Kubestack](https://www.kubestack.com/)
---
## Node groups
- Virtually all provisioning models have a concept of "node group"
- Node group = group of similar nodes in an ASG
- can span multiple AZ
- can have instances of different types¹
- A cluster will need at least one node group
.footnote[¹As I understand it, to specify fallbacks if one instance type is unavailable or out of capacity.]
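
As a sketch, this is roughly what two node groups look like in an `eksctl` ClusterConfig (cluster name, region, instance types, and sizes are all illustrative):

```yaml
# Hypothetical eksctl ClusterConfig sketch with two node groups.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo
  region: eu-west-1
nodeGroups:
- name: pool-1
  instanceType: t3.large
  minSize: 1
  maxSize: 4
- name: pool-2
  instanceType: c5.large
  minSize: 1
  maxSize: 4
```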
---
# IAM → EKS authentication
- Access EKS clusters using IAM users and roles
- No special role, permission, or policy is needed in IAM
(but the `eks:DescribeCluster` permission can be useful, see later)
- Users and roles need to be explicitly listed in the cluster
- Configuration is done through a ConfigMap in the cluster
---
## Setting it up
- Nothing to do when creating the cluster
(feature is always enabled)
- Users and roles are *mapped* to Kubernetes users and groups
(through the `aws-auth` ConfigMap in `kube-system`)
- That's it!
---
## Mapping
- The `aws-auth` ConfigMap can contain two entries:
- `mapRoles` (map IAM roles)
- `mapUsers` (map IAM users)
- Each entry is a YAML file
- Each entry includes:
- `rolearn` or `userarn` to map
- `username` (as a string)
- `groups` (as a list; can be empty)
---
## Example
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
namespace: kube-system
name: aws-auth
data:
mapRoles: `|`
- rolearn: arn:aws:iam::111122223333:role/blah
username: blah
groups: [ devs, ops ]
mapUsers: `|`
- userarn: arn:aws:iam::111122223333:user/alice
username: alice
groups: [ system:masters ]
- userarn: arn:aws:iam::111122223333:user/bob
username: bob
groups: [ system:masters ]
```
---
## Client setup
- We need either the `aws` CLI or the `aws-iam-authenticator`
- We use them as `exec` plugins in `~/.kube/config`
- Done automatically by `eksctl`
- Or manually with `aws eks update-kubeconfig`
- Discovering the address of the API server requires one IAM permission
```json
"Action": [
"eks:DescribeCluster"
],
"Resource": "arn:aws:eks:<region>:<account>:cluster/<cluster-name>"
```
(wildcards can be used when specifying the resource)
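
For reference, this is roughly the kubeconfig "user" entry that these tools set up (a sketch; the exact `apiVersion` and arguments depend on the CLI version, and `my-eks-cluster` is a placeholder):

```yaml
# Hypothetical sketch of the exec plugin entry in ~/.kube/config,
# as generated by `aws eks update-kubeconfig`.
users:
- name: my-eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args: [ eks, get-token, --cluster-name, my-eks-cluster ]
```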
---
class: extra-details
## How it works
- The helper generates a token
(with `aws eks get-token` or `aws-iam-authenticator token`)
- Note: these calls will always succeed!
(even if AWS API keys are invalid)
- The token is used to authenticate with the Kubernetes API
- AWS' Kubernetes API server will decode and validate the token
(and map the underlying user or role accordingly)
---
## Read The Fine Manual
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
---
# EKS → IAM authentication
- Access AWS services from workloads running on EKS
(e.g.: access S3 bucket from code running in a Pod)
- This works by associating an IAM role to a K8S ServiceAccount
- There are also a few specific roles used internally by EKS
(e.g. to let the nodes establish network configurations)
- ... We won't talk about these
---
## The big picture
- One-time setup task
([create an OIDC provider associated to our EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html))
- Create (or update) a role with an appropriate *trust policy*
(more on that later)
- Annotate service accounts to map them to that role
`eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/some-iam-role`
- Create (or re-create) pods using that ServiceAccount
- The pods can now use that role!
---
## Trust policies
- IAM roles have a *trust policy* (aka *assume role policy*)
(cf `aws iam create-role ... --assume-role-policy-document ...`)
- That policy contains a *statement* list
- This list indicates who/what is allowed to assume (use) the role
- In the current scenario, that policy will contain something saying:
*ServiceAccount S on EKS cluster C is allowed to use this role*
---
## Trust policy for a single ServiceAccount
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"${OIDC_PROVIDER}:sub":
"system:serviceaccount:<namespace>:<service-account>"
}
}
}
]
}
```
---
## Trust policy for multiple ServiceAccounts
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringLike": {
"${OIDC_PROVIDER}:sub":
["system:serviceaccount:container-training:*"]
}
}
}
]
}
```
---
## The little details
- When pods are created, they are processed by a mutating webhook
(typically named `pod-identity-webhook`)
- Pods using a ServiceAccount with the right annotation get:
- an extra token
<br/>
(mounted in `/var/run/secrets/eks.amazonaws.com/serviceaccount/token`)
- a few env vars
<br/>
(including `AWS_WEB_IDENTITY_TOKEN_FILE` and `AWS_ROLE_ARN`)
- AWS client libraries and tooling will work with that
(see [this list](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html) for supported versions)
---
# CNI
- EKS is a compliant Kubernetes implementation
(which means we can use a wide range of CNI plugins)
- However, the recommended CNI plugin is the "AWS VPC CNI"
(https://github.com/aws/amazon-vpc-cni-k8s)
- Pods are then "first class citizens" of AWS VPC
---
## AWS VPC CNI
- Each Pod gets an address in a VPC subnet
- No overlay network, no encapsulation, no overhead
(other than AWS network fabric, obviously)
- Probably the fastest network option when running on AWS
- Allows "direct" load balancing (more on that later)
- Can use security groups with Pod traffic
- But: limits the number of Pods per Node
- But: more complex configuration (more on that later)
---
## Number of Pods per Node
- Each Pod gets an IP address on an ENI
(Elastic Network Interface)
- EC2 instances can only have a limited number of ENIs
(the exact limit depends on the instance type)
- ENIs can only have a limited number of IP addresses
(with variations here as well)
- This gives limits of e.g. 35 pods on `t3.large`, 29 on `c5.large` ...
(see the
[full list of limits per instance type](https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt)
and the
[ENI/IP details](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/pkg/awsutils/vpc_ip_resource_limit.go))
---
## Limits?
- These limits might seem low
- They're not *that* low if you compute e.g. the RAM/Pod ratio
- Except if you're running lots of tiny pods
- Bottom line: do the math!
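
Doing the math is simple; this sketch uses the usual formula for the AWS VPC CNI (ENI and IP counts per instance type are taken from the AWS limits lists linked earlier):

```python
# Back-of-the-envelope max-pods math for the AWS VPC CNI:
#   max pods = ENIs x (IPs per ENI - 1) + 2
# (one address per ENI is the ENI's primary address; the +2 accounts
# for pods that use host networking and don't consume an ENI address)
def max_pods(enis: int, ips_per_eni: int) -> int:
    return enis * (ips_per_eni - 1) + 2

print(max_pods(3, 12))  # t3.large: 3 ENIs, 12 IPs each -> 35
print(max_pods(3, 10))  # c5.large: 3 ENIs, 10 IPs each -> 29
```

These match the limits quoted above (35 pods on `t3.large`, 29 on `c5.large`).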
---
class: extra-details
## Pre-loading
- It can take a little while to allocate/attach an ENI
- The AWS VPC CNI can keep a few extra addresses on each Node
(by default, one ENI worth of IP addresses)
- This is tunable if needed
(see [the docs](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/eni-and-ip-target.md
) for details)
---
## Better load balancing
- The default path for inbound traffic is:
Load balancer → NodePort → Pod
- With the AWS VPC CNI, it becomes possible to do:
Load balancer → Pod
- More on that in the load balancing section!
---
## Configuration complexity
- The AWS VPC CNI is a very good solution when running EKS
- It brings optimized solutions to various use-cases:
- direct load balancing
- user authentication
- interconnection with other infrastructure
- etc.
- Keep in mind that all these solutions are AWS-specific
- They can require a non-trivial amount of specific configuration
- Especially when moving from a simple PoC to an IaC (Infrastructure as Code) deployment!
---
# Load Balancers
- Here be dragons!
- Multiple options, each with different pros/cons
- It's necessary to know both AWS products and K8S concepts
---
## AWS load balancers
- CLB / Classic Load Balancer (formerly known as ELB)
- can work in L4 (TCP) or L7 (HTTP) mode
  - can do TLS termination
- can't do websockets, HTTP/2, content-based routing ...
- NLB / Network Load Balancer
- high-performance L4 load balancer with TLS support
- ALB / Application Load Balancer
- HTTP load balancer
  - can do TLS termination
- can do websockets, HTTP/2, content-based routing ...
---
## Load balancing modes
- "IP targets"
- send traffic directly from LB to Pods
- Pods must use the AWS VPC CNI
- compatible with Fargate Pods
- "Instance targets"
- send traffic to a NodePort (generally incurs an extra hop)
- Pods can use any CNI
- not compatible with Fargate Pods
- Each LB (Service) can use a different mode, if necessary
---
## Kubernetes load balancers
- Service (L4)
- ClusterIP: internal load balancing
  - NodePort: external load balancing on a port in the 30000-32767 range (by default)
- LoadBalancer: external load balancing on the port you want
- ExternalIP: external load balancing directly on nodes
- Ingress (L7 HTTP)
- partial content-based routing (`Host` header, request path)
- requires an Ingress Controller (in front)
- works with Services (in back)
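To make the Service types concrete, here is a minimal sketch (assuming a Deployment named `web` serving on port 80; names are placeholders):

```bash
# Internal load balancing only (cluster-internal virtual IP)
kubectl expose deployment web --port=80 --type=ClusterIP --name=web-internal
# External load balancing on a port in the 30000-32767 range
kubectl expose deployment web --port=80 --type=NodePort --name=web-nodeport
# External load balancing through a cloud load balancer (CLB by default on EKS)
kubectl expose deployment web --port=80 --type=LoadBalancer --name=web-lb
```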
---
## Two controllers are available
- Kubernetes "in-tree" load balancer controller
- always available
- used by default for LoadBalancer Services
- creates CLB by default; can also do NLB
- can only do "instance targets"
- can use extra CLB features (TLS, HTTP)
- AWS Load Balancer Controller (fka AWS ALB Ingress Controller)
- optional add-on (requires additional config)
- primarily meant to be an Ingress Controller
- creates NLB and ALB
- can do "instance targets" and "IP targets"
- can also be used for LoadBalancer Services with type `nlb-ip`
- They can run side by side
---
## Which one should we use?
- AWS Load Balancer Controller supports "IP targets"
(which means direct routing of traffic to Pods)
- It can be used as an Ingress controller
- It *seems* to be the perfect solution for EKS!
- However ...
---
## Caveats
- AWS Load Balancer Controller requires extensive configuration
- a few hours to a few days to get it to work in a POC ...
- a few days to a few weeks to industrialize that process?
- It's AWS-specific
- It still introduces an extra hop, even if that hop is invisible
- Other ingress controllers can have interesting features
(canary deployment, A/B testing ...)
---
## Noteworthy annotations and docs
- `service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip`
- LoadBalancer Service with "IP targets" ([docs](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/nlb_ip_mode/))
- requires AWS Load Balancer Controller
- `service.beta.kubernetes.io/aws-load-balancer-internal: "true"`
- internal load balancer (for private VPC)
- `service.beta.kubernetes.io/aws-load-balancer-type: nlb`
- opt for NLB instead of CLB with in-tree controller
- `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"`
- use HAProxy [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt)
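Putting a couple of these annotations together, a LoadBalancer Service asking for an internal NLB could look like this (a sketch; the name and selector are placeholders):

```bash
kubectl apply -f- <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web            # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: web           # placeholder selector
  ports:
  - port: 80
    targetPort: 80
EOF
```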
---
## TLS-related annotations
- `service.beta.kubernetes.io/aws-load-balancer-ssl-cert`
- enable TLS and use that certificate
- example value: `arn:aws:acm:<region>:<account>:certificate/<cert-id>`
- `service.beta.kubernetes.io/aws-load-balancer-ssl-ports`
- enable TLS *only* on the specified ports (when multiple ports are exposed)
- example value: `"443,8443"`
- `service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`
- specify ciphers and other TLS parameters to use (see [that list](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html))
- example value: `"ELBSecurityPolicy-TLS-1-2-2017-01"`
---
## To HTTP(S) or not to HTTP(S)
- `service.beta.kubernetes.io/aws-load-balancer-backend-protocol`
- can be either `http`, `https`, `ssl`, or `tcp`
- if `https` or `ssl`: enable TLS to the backend
  - if `http` or `https`: add `X-Forwarded-For` headers to requests
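Combining the TLS annotations above, a Service terminating TLS on port 443 with an ACM certificate could include this excerpt (the ARN is a placeholder, as before):

```yaml
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account>:certificate/<cert-id>"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
```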
???
## Cluster autoscaling
## Logging
https://docs.aws.amazon.com/eks/latest/userguide/logging-using-cloudtrail.html
:EN:- Working with EKS
:EN:- Cluster and user provisioning
:EN:- Networking and load balancing
:FR:- Travailler avec EKS
:FR:- Outils de déploiement
:FR:- Intégration avec IAM
:FR:- Fonctionnalités réseau
