Compare commits: weka...qconuk2019 (206 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | 787ed190b0 |  |
|  | 745ebefc3d |  |
|  | 53907d82b4 |  |
|  | b8f11b3c72 |  |
|  | 9b4413f332 |  |
|  | e5a7e15ef8 |  |
|  | 52be1aa464 |  |
|  | 6a644e53e0 |  |
|  | ff91c26976 |  |
|  | ff40b79775 |  |
|  | 3f8ec37225 |  |
|  | 1dfdec413a |  |
|  | cf3fae6db1 |  |
|  | c9b85650cb |  |
|  | 964057cd52 |  |
|  | da13946ba0 |  |
|  | f6d154cb84 |  |
|  | 1657503da1 |  |
|  | af8441912e |  |
|  | e16c1d982a |  |
|  | 1fb0ec7580 |  |
|  | ad80914000 |  |
|  | d877844a5e |  |
|  | 195c08cb91 |  |
|  | 8a3dad3206 |  |
|  | 4f59e293ee |  |
|  | 8753279603 |  |
|  | d84c585fdc |  |
|  | b8f8ffa07d |  |
|  | 4f2ecb0f4a |  |
|  | 662b3a47a0 |  |
|  | 8325dcc6a0 |  |
|  | 42c1a93d5f |  |
|  | 8d1737c2b3 |  |
|  | 8045215c63 |  |
|  | ad20e1efe6 |  |
|  | f0f3d70521 |  |
|  | 53cf52f05c |  |
|  | e280cec60f |  |
|  | c8047897e7 |  |
|  | cc071b79c3 |  |
|  | 869f46060a |  |
|  | 258c134421 |  |
|  | c6d9edbf12 |  |
|  | 5fc62e8fd7 |  |
|  | f207adfe13 |  |
|  | 8c2107fba9 |  |
|  | d4096e9c21 |  |
|  | 5c89738ab6 |  |
|  | 893a84feb7 |  |
|  | f807964416 |  |
|  | 2ea9cbb00f |  |
|  | 8cd9a314d3 |  |
|  | ede085cf48 |  |
|  | bc349d6c4d |  |
|  | 80d6b57697 |  |
|  | 5c2599a2b9 |  |
|  | a6f6ff161d |  |
|  | 6aaa8fab75 |  |
|  | 01042101a2 |  |
|  | 5afb37a3b9 |  |
|  | 995ea626db |  |
|  | a1adbb66c8 |  |
|  | 3212561c89 |  |
|  | 003a232b79 |  |
|  | 2770da68cd |  |
|  | c502d019ff |  |
|  | a07e50ecf8 |  |
|  | 46c6866ce9 |  |
|  | fe95318108 |  |
|  | 65232f93ba |  |
|  | 9fa7b958dc |  |
|  | a95e5c960e |  |
|  | 5b87162e95 |  |
|  | 8c4914294e |  |
|  | 7b9b9f527d |  |
|  | 3c7f39747c |  |
|  | be67a742ee |  |
|  | 40cd934118 |  |
|  | 556db65251 |  |
|  | ff781a3065 |  |
|  | 8348d750df |  |
|  | 9afa0acbf9 |  |
|  | cb624755e4 |  |
|  | 523ca55831 |  |
|  | f0b48935fa |  |
|  | abcc47b563 |  |
|  | 33e1bfd8be |  |
|  | 2efc29991e |  |
|  | 11387f1330 |  |
|  | fe93dccbac |  |
|  | 5fad84a7cf |  |
|  | 22dd6b4e70 |  |
|  | a3594e7e1e |  |
|  | 7f74e5ce32 |  |
|  | 9e051abb32 |  |
|  | 3ebcfd142b |  |
|  | 6c5d049c4c |  |
|  | 072ba44cba |  |
|  | bc8a9dc4e7 |  |
|  | b1ba881eee |  |
|  | 337a5d94ed |  |
|  | 43acccc0af |  |
|  | 4a447c7bf5 |  |
|  | b9de73d0fd |  |
|  | 6b9b83a7ae |  |
|  | 3f7675be04 |  |
|  | b4bb9e5958 |  |
|  | 9a6160ba1f |  |
|  | 1d243b72ec |  |
|  | c5c1ccaa25 |  |
|  | b68afe502b |  |
|  | d18cacab4c |  |
|  | 2faca4a507 |  |
|  | d797ec62ed |  |
|  | a475d63789 |  |
|  | dd3f2d054f |  |
|  | 73594fd505 |  |
|  | 16a1b5c6b5 |  |
|  | ff7a257844 |  |
|  | 77046a8ddf |  |
|  | 3ca696f059 |  |
|  | 305db76340 |  |
|  | b1672704e8 |  |
|  | c058f67a1f |  |
|  | ab56c63901 |  |
|  | a5341f9403 |  |
|  | b2bdac3384 |  |
|  | a2531a0c63 |  |
|  | 84e2b90375 |  |
|  | 9639dfb9cc |  |
|  | 8722de6da2 |  |
|  | f2f87e52b0 |  |
|  | 56ad2845e7 |  |
|  | f23272d154 |  |
|  | 86e35480a4 |  |
|  | 1020a8ff86 |  |
|  | 20b1079a22 |  |
|  | f090172413 |  |
|  | e4251cfa8f |  |
|  | b6dd55b21c |  |
|  | 53d1a68765 |  |
|  | 156ce67413 |  |
|  | e372850b06 |  |
|  | f543b54426 |  |
|  | 35614714c8 |  |
|  | 100c6b46cf |  |
|  | 36ccaf7ea4 |  |
|  | 4a655db1ba |  |
|  | 2a80586504 |  |
|  | 0a942118c1 |  |
|  | 2f1ad67fb3 |  |
|  | 4b0ac6d0e3 |  |
|  | ac273da46c |  |
|  | 7a6594c96d |  |
|  | 657b7465c6 |  |
|  | 08059a845f |  |
|  | 24e2042c9d |  |
|  | 9771f054ea |  |
|  | 5db4e2adfa |  |
|  | bde5db49a7 |  |
|  | 7c6b2730f5 |  |
|  | 7f6a15fbb7 |  |
|  | d97b1e5944 |  |
|  | 1519196c95 |  |
|  | f8629a2689 |  |
|  | fadecd52ee |  |
|  | 524d6e4fc1 |  |
|  | 51f5f5393c |  |
|  | f574afa9d2 |  |
|  | 4f49015a6e |  |
|  | f25d12b53d |  |
|  | 78259c3eb6 |  |
|  | adc922e4cd |  |
|  | f68194227c |  |
|  | 29a3ce0ba2 |  |
|  | e5fe27dd54 |  |
|  | 6016ffe7d7 |  |
|  | 7c94a6f689 |  |
|  | 5953ffe10b |  |
|  | 3016019560 |  |
|  | 0d5da73c74 |  |
|  | 91c835fcb4 |  |
|  | d01ae0ff39 |  |
|  | 63b85da4f6 |  |
|  | 2406e72210 |  |
|  | 32e1edc2a2 |  |
|  | 84225e982f |  |
|  | e76a06e942 |  |
|  | 0519682c30 |  |
|  | 91f7a81964 |  |
|  | a66fcaf04c |  |
|  | 9a0649e671 |  |
|  | d23ad0cd8f |  |
|  | 63755c1cd3 |  |
|  | 149cf79615 |  |
|  | a627128570 |  |
|  | 91e3078d2e |  |
|  | 31dd943141 |  |
|  | 3866701475 |  |
|  | 521f8e9889 |  |
|  | 49c3fdd3b2 |  |
|  | 4bb6a49ee0 |  |
|  | cb407e75ab |  |
|  | 27d4612449 |  |
|  | 43ab5f79b6 |  |
.gitignore (vendored, 18 changes)

@@ -1,13 +1,23 @@
*.pyc
*.swp
*~
prepare-vms/ips.txt
prepare-vms/ips.html
prepare-vms/ips.pdf
prepare-vms/settings.yaml
prepare-vms/tags
prepare-vms/infra
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
slides/slides.zip
node_modules

### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride

### Windows ###
# Windows thumbnail cache files
Thumbs.db
ehthumbs.db
ehthumbs_vista.db
@@ -199,7 +199,7 @@ this section is for you!
  locked-down computer, host firewall, etc.
- Horrible wifi, or ssh port TCP/22 not open on network! If wifi sucks you
  can try using MOSH https://mosh.org which handles SSH over UDP. TMUX can also
  prevent you from loosing your place if you get disconnected from servers.
  prevent you from losing your place if you get disconnected from servers.
  https://tmux.github.io
- Forget to print "cards" and cut them up for handing out IP's.
- Forget to have fun and focus on your students!
@@ -5,6 +5,3 @@ RUN gem install thin
ADD hasher.rb /
CMD ["ruby", "hasher.rb"]
EXPOSE 80
HEALTHCHECK \
  --interval=1s --timeout=2s --retries=3 --start-period=1s \
  CMD curl http://localhost/ || exit 1
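The Dockerfile hunk above removes the image's HEALTHCHECK. For reference, when such an instruction is present, Docker runs the probe and exposes its verdict in the container state; a minimal way to observe it (the container name `hasher` is illustrative):

```bash
# Build and run the image, then read the health status that the
# HEALTHCHECK probe reports (starting, healthy, or unhealthy).
docker build -t hasher .
docker run -d --name hasher -p 8080:80 hasher
docker inspect --format '{{.State.Health.Status}}' hasher
```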
@@ -2,14 +2,14 @@ version: "2"

services:
  elasticsearch:
    image: elasticsearch
    image: elasticsearch:2
    # If you need to access ES directly, just uncomment those lines.
    #ports:
    #  - "9200:9200"
    #  - "9300:9300"

  logstash:
    image: logstash
    image: logstash:2
    command: |
      -e '
      input {
@@ -47,7 +47,7 @@ services:
      - "12201:12201/udp"

  kibana:
    image: kibana
    image: kibana:4
    ports:
      - "5601:5601"
    environment:
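The compose file above publishes 12201/udp for Logstash; assuming the truncated `input` block declares a GELF listener on that port, container logs can be routed into the stack with Docker's gelf logging driver:

```bash
# Send a container's stdout into the ELK stack over GELF; the
# address assumes the stack runs on the local host.
docker run --rm \
    --log-driver gelf \
    --log-opt gelf-address=udp://127.0.0.1:12201 \
    alpine echo "hello ELK"
```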
@@ -1,3 +1,37 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: consul
  labels:
    app: consul
rules:
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: consul
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: consul
subjects:
  - kind: ServiceAccount
    name: consul
    namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul
  labels:
    app: consul
---
apiVersion: v1
kind: Service
metadata:
@@ -24,6 +58,7 @@ spec:
      labels:
        app: consul
    spec:
      serviceAccountName: consul
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
@@ -37,18 +72,11 @@ spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: consul
          image: "consul:1.2.2"
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          image: "consul:1.4.0"
          args:
            - "agent"
            - "-bootstrap-expect=3"
            - "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
            - "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
            - "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
            - "-retry-join=provider=k8s label_selector=\"app=consul\""
            - "-client=0.0.0.0"
            - "-data-dir=/consul/data"
            - "-server"
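The RBAC objects added above grant the `consul` ServiceAccount read access to pods, which the new `-retry-join=provider=k8s` flag needs for peer discovery. A quick sanity check of such a binding, assuming the manifests were applied in the default namespace:

```bash
# Impersonate the ServiceAccount and ask the API server whether the
# ClusterRoleBinding actually grants the pod permissions.
kubectl auth can-i list pods --as=system:serviceaccount:default:consul
kubectl auth can-i get pods --as=system:serviceaccount:default:consul
```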
@@ -72,6 +72,8 @@ spec:
          value: "elastic"
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          value: "changeme"
        - name: FLUENT_UID
          value: "0"
      resources:
        limits:
          memory: 200Mi
@@ -130,6 +132,9 @@ spec:
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        env:
          - name: ES_JAVA_OPTS
            value: "-Xms1g -Xmx1g"
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
@@ -14,5 +14,5 @@ frontend the-frontend

backend the-backend
    server google.com-80 google.com:80 maxconn 32 check
    server bing.com-80 bing.com:80 maxconn 32 check
    server ibm.fr-80 ibm.fr:80 maxconn 32 check
k8s/just-a-pod.yaml (new file, 10 changes)

@@ -0,0 +1,10 @@
apiVersion: v1
Kind: Pod
metadata:
  name: hello
  namespace: default
spec:
  containers:
  - name: hello
    image: nginx
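A manifest like the new file above is created with `kubectl apply`. One caveat: if the capitalized `Kind:` is not a transcription artifact, the API server will reject the manifest, since the field name is lowercase `kind`:

```bash
# Create the pod from the new manifest and watch it start; the
# manifest pins the namespace to default, so no -n flag is needed.
kubectl apply -f k8s/just-a-pod.yaml
kubectl get pod hello --namespace default -w
```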
@@ -19,7 +19,7 @@ spec:
    image: gcr.io/kaniko-project/executor:latest
    args:
      - "--context=/workspace/dockercoins/rng"
      - "--skip-tls-verify"
      - "--insecure"
      - "--destination=registry:5000/rng-kaniko:latest"
    volumeMounts:
      - name: workspace
@@ -5,7 +5,7 @@ metadata:
spec:
  podSelector:
    matchLabels:
      run: testweb
      app: testweb
  ingress:
    - from:
      - podSelector:

@@ -5,6 +5,6 @@ metadata:
spec:
  podSelector:
    matchLabels:
      run: testweb
      app: testweb
  ingress: []

@@ -16,7 +16,7 @@ metadata:
spec:
  podSelector:
    matchLabels:
      run: webui
      app: webui
  ingress:
    - from: []
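These hunks repoint the policies' `podSelector` from the legacy `run` label (the label that old `kubectl run` used to set) to an `app` label. Whether the selector still matches anything is easy to verify; an empty result means the policy guards no pods:

```bash
# List the pods selected by matchLabels {app: testweb} / {app: webui};
# the pods themselves must carry the new label for the policies to apply.
kubectl get pods -l app=testweb
kubectl get pods -l app=webui
```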
@@ -1,4 +1,4 @@
# SOURCE: https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop0&c=px-workshop&stork=true&lh=true
# SOURCE: https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true
apiVersion: v1
kind: ConfigMap
metadata:
@@ -372,7 +372,7 @@ metadata:
  name: portworx
  namespace: kube-system
  annotations:
    portworx.com/install-source: "https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop0&c=px-workshop&stork=true&lh=true"
    portworx.com/install-source: "https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true"
spec:
  minReadySeconds: 0
  updateStrategy:
@@ -402,7 +402,7 @@ spec:
        image: portworx/oci-monitor:1.4.2.2
        imagePullPolicy: Always
        args:
          ["-c", "px-workshop", "-s", "/dev/loop0", "-b",
          ["-c", "px-workshop", "-s", "/dev/loop4", "-b",
           "-x", "kubernetes"]
        env:
          - name: "PX_TEMPLATE_VERSION"
@@ -17,7 +17,7 @@ spec:
      - name: postgres
        image: postgres:10.5
        volumeMounts:
        - mountPath: /var/lib/postgresql
        - mountPath: /var/lib/postgresql/data
          name: postgres
  volumeClaimTemplates:
  - metadata:
@@ -6,7 +6,7 @@ metadata:
  creationTimestamp: null
  generation: 1
  labels:
    run: socat
    app: socat
  name: socat
  namespace: kube-system
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
@@ -14,7 +14,7 @@ spec:
  replicas: 1
  selector:
    matchLabels:
      run: socat
      app: socat
  strategy:
    rollingUpdate:
      maxSurge: 1
@@ -24,7 +24,7 @@ spec:
    metadata:
      creationTimestamp: null
      labels:
        run: socat
        app: socat
    spec:
      containers:
      - args:
@@ -49,7 +49,7 @@ kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: socat
    app: socat
  name: socat
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/services/socat
@@ -60,7 +60,7 @@ spec:
    protocol: TCP
    targetPort: 80
  selector:
    run: socat
    app: socat
  sessionAffinity: None
  type: NodePort
status:
|
||||
|
||||
$ source path/to/your-ansible-clone/hacking/env-setup
|
||||
|
||||
- you need to repeat the last step everytime you open a new terminal session
|
||||
- you need to repeat the last step every time you open a new terminal session
|
||||
and want to use any Ansible command (but you'll probably only need to run
|
||||
it once).
|
||||
|
||||
|
||||
@@ -1,4 +1,10 @@
# Trainer tools to create and prepare VMs for Docker workshops on AWS or Azure
# Trainer tools to create and prepare VMs for Docker workshops

These tools can help you to create VMs on:

- Azure
- EC2
- OpenStack

## Prerequisites

@@ -6,6 +12,9 @@
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this

Depending on the infrastructure that you want to use, you also need to install
the Azure CLI, the AWS CLI, or terraform (for OpenStack deployment).

And if you want to generate printable cards:

- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
@@ -14,20 +23,25 @@ And if you want to generate printable cards:
## General Workflow

- fork/clone repo
- set required environment variables
- create an infrastructure configuration in the `prepare-vms/infra` directory
  (using one of the example files in that directory)
- create your own setting file from `settings/example.yaml`
- if necessary, increase allowed open files: `ulimit -Sn 10000`
- run `./workshopctl` commands to create instances, install docker, setup each users environment in node1, other management tasks
- run `./workshopctl cards` command to generate PDF for printing handouts of each users host IP's and login info
- run `./workshopctl start` to create instances
- run `./workshopctl deploy` to install Docker and setup environment
- run `./workshopctl kube` (if you want to install and setup Kubernetes)
- run `./workshopctl cards` (if you want to generate PDF for printing handouts of each users host IP's and login info)
- run `./workshopctl stop` at the end of the workshop to terminate instances
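Taken end to end, the new workflow in this list amounts to roughly the following (file names and instance count are illustrative):

```bash
# Hypothetical full run of the workflow above.
cp infra/example.aws infra/aws-us-west-2          # infrastructure config
cp settings/example.yaml settings/myworkshop.yaml # workshop settings
ulimit -Sn 10000                                  # raise open-file limit if needed
./workshopctl start --infra infra/aws-us-west-2 \
    --settings settings/myworkshop.yaml --count 10
./workshopctl deploy TAG       # install Docker, set up environment
./workshopctl kube TAG         # optional: Kubernetes clusters
./workshopctl cards TAG        # optional: printable handouts
./workshopctl stop TAG         # terminate instances afterwards
```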
## Clone/Fork the Repo, and Build the Tools Image

The Docker Compose file here is used to build an image with all the dependencies to run the `./workshopctl` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](workshopctl#L5).

    $ git clone https://github.com/jpetazzo/orchestration-workshop.git
    $ cd orchestration-workshop/prepare-vms
    $ git clone https://github.com/jpetazzo/container.training
    $ cd container.training/prepare-vms
    $ docker-compose build
## Preparing to Run `./workshopctl`

### Required AWS Permissions/Info

@@ -36,27 +50,37 @@ The Docker Compose file here is used to build an image with all the dependencies
- Using a non-default VPC or Security Group isn't supported out of box yet, so you will have to customize `lib/commands.sh` if you want to change that.
- These instances will assign the default VPC Security Group, which does not open any ports from Internet by default. So you'll need to add Inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./workshopctl opensg` which opens up all ports.

### Required Environment Variables
### Create your `infra` file

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
You need to do this only once. (On AWS, you can create one `infra`
file per region.)

If you're not using AWS, set these to placeholder values:
Make a copy of one of the example files in the `infra` directory.

For instance:

```bash
cp infra/example.aws infra/aws-us-west-2
```

```
export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="foo"
export AWS_DEFAULT_REGION="foo"
```

Edit your infrastructure file to customize it.
You will probably need to put your cloud provider credentials,
select region...

If you don't have the `aws` CLI installed, you will get a warning that it's a missing dependency. If you're not using AWS you can ignore this.

### Update/copy `settings/example.yaml`
### Create your `settings` file

Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.
Similarly, pick one of the files in `settings` and copy it
to customize it.

    ./workshopctl cards 2016-09-28-00-33-bret settings/orchestration.yaml

For instance:

```bash
cp settings/example.yaml settings/myworkshop.yaml
```

You're all set!
## `./workshopctl` Usage

@@ -66,7 +90,7 @@ Commands:
ami            Show the AMI that will be used for deployment
amis           List Ubuntu AMIs in the current region
build          Build the Docker image to run this program in a container
cards          Generate ready-to-print cards for a batch of VMs
cards          Generate ready-to-print cards for a group of VMs
deploy         Install Docker on a bunch of running VMs
ec2quotas      Check our EC2 quotas (max instances)
help           Show available commands
@@ -74,14 +98,14 @@ ids            List the instance IDs belonging to a given tag or token
ips            List the IP addresses of the VMs for a given tag or token
kube           Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest       Check that all notes are reporting as Ready
list           List available batches in the current region
list           List available groups in the current region
opensg         Open the default security group to ALL ingress traffic
pull_images    Pre-pull a bunch of Docker images
retag          Apply a new tag to a batch of VMs
start          Start a batch of VMs
status         List instance status for a given batch
retag          Apply a new tag to a group of VMs
start          Start a group of VMs
status         List instance status for a given group
stop           Stop (terminate, shutdown, kill, remove, destroy...) instances
test           Run tests (pre-flight checks) on a batch of VMs
test           Run tests (pre-flight checks) on a group of VMs
wrap           Run this program in a container
```
@@ -95,22 +119,22 @@ wrap Run this program in a container

- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. This can be configured with the `docker_user_password` property in the settings file.

### Example Steps to Launch a Batch of AWS Instances for a Workshop
### Example Steps to Launch a group of AWS Instances for a Workshop

- Run `./workshopctl start N` Creates `N` EC2 instances
- Run `./workshopctl start --infra infra/aws-us-east-2 --settings/myworkshop.yaml --count 60` to create 60 EC2 instances
  - Your local SSH key will be synced to instances under `ubuntu` user
  - AWS instances will be created and tagged based on date, and IP's stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- Run `./workshopctl deploy TAG` to run `lib/postprep.py` via parallel-ssh
  - If it errors or times out, you should be able to rerun
  - Requires good connection to run all the parallel SSH connections, up to 100 parallel (ProTip: create dedicated management instance in same AWS region where you run all these utils from)
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` generates PDF/HTML files to print and cut and hand out to students
- Run `./workshopctl cards TAG` generates PDF/HTML files to print and cut and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.

### Example Steps to Launch Azure Instances

- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account (`az login`)
- Customize `azuredeploy.parameters.json`
  - Required:
    - Provide the SSH public key you plan to use for instance configuration
@@ -155,27 +179,16 @@ az group delete --resource-group workshop

### Example Steps to Configure Instances from a non-AWS Source

- Launch instances via your preferred method. You'll need to get the instance IPs and be able to ssh into them.
- Set placeholder values for [AWS environment variable settings](#required-environment-variables).
- Choose a tag. It could be an event name, datestamp, etc. Ensure you have created a directory for your tag: `prepare-vms/tags/<tag>/`
- If you have not already generated a file with the IPs to be configured:
  - The file should be named `prepare-vms/tags/<tag>/ips.txt`
  - Format is one IP per line, no other info needed.
- Ensure the settings file is as desired (especially the number of nodes): `prepare-vms/settings/kube101.yaml`
- For a tag called `myworkshop`, configure instances: `workshopctl deploy myworkshop settings/kube101.yaml`
- Optionally, configure Kubernetes clusters of the size in the settings: `workshopctl kube myworkshop`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: `workshopctl kubetest myworkshop`
- Generate cards to print and hand out: `workshopctl cards myworkshop settings/kube101.yaml`
- Print the cards file: `prepare-vms/tags/myworkshop/ips.html`

## Other Tools

### Deploying your SSH key to all the machines

- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
- Copy `infra/example.generic` to `infra/generic`
- Run `./workshopctl start --infra infra/generic --settings settings/...yaml`
- Note the `prepare-vms/tags/TAG/` path that has been auto-created.
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to SSH into them.
- Edit the file `prepare-vms/tags/TAG/ips.txt`; it should list the IP addresses of the VMs (one per line, without any comments or other info)
- Continue deployment of cluster configuration with `./workshopctl deploy TAG`
- Optionally, configure Kubernetes clusters of the size in the settings: workshopctl kube `TAG`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: workshopctl kubetest `TAG`
- Generate cards to print and hand out: workshopctl cards `TAG`
- Print the cards file: prepare-vms/tags/`TAG`/ips.html

## Even More Details

@@ -188,7 +201,7 @@ To see which local key will be uploaded, run `ssh-add -l | grep RSA`.

#### Instance + tag creation

10 VMs will be started, with an automatically generated tag (timestamp + your username).
The VMs will be started, with an automatically generated tag (timestamp + your username).

Your SSH key will be added to the `authorized_keys` of the ubuntu user.

@@ -196,15 +209,11 @@ Your SSH key will be added to the `authorized_keys` of the ubuntu user.

Following the creation of the VMs, a text file will be created containing a list of their IPs.

This ips.txt file will be created in the $TAG/ directory and a symlink will be placed in the working directory of the script.

If you create new VMs, the symlinked file will be overwritten.

#### Deployment

Instances can be deployed manually using the `deploy` command:

    $ ./workshopctl deploy TAG settings/somefile.yaml
    $ ./workshopctl deploy TAG

The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.

@@ -214,7 +223,7 @@ The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.

#### Generate cards

    $ ./workshopctl cards TAG settings/somefile.yaml
    $ ./workshopctl cards TAG

If you want to generate both HTML and PDF cards, install [wkhtmltopdf](https://wkhtmltopdf.org/downloads.html); without that installed, only HTML cards will be generated.

@@ -222,13 +231,11 @@ If you don't have `wkhtmltopdf` installed, you will get a warning that it is a missing dependency.

#### List tags

    $ ./workshopctl list
    $ ./workshopctl list infra/some-infra-file

#### List VMs
    $ ./workshopctl listall

    $ ./workshopctl list TAG

This will print a human-friendly list containing some information about each instance.
    $ ./workshopctl tags

#### Stop and destroy VMs
@@ -7,15 +7,6 @@ fi
if id docker; then
  sudo userdel -r docker
fi
pip install --user awscli jinja2 pdfkit
sudo apt-get install -y wkhtmltopdf xvfb
tmux new-session \; send-keys "
[ -f ~/.ssh/id_rsa ] || ssh-keygen

eval \$(ssh-agent)
ssh-add
Xvfb :0 &
export DISPLAY=:0
mkdir -p ~/www
sudo docker run -d -p 80:80 -v \$HOME/www:/usr/share/nginx/html nginx
"
sudo apt-get update -q
sudo apt-get install -qy jq python-pip wkhtmltopdf xvfb
pip install --user awscli jinja2 pdfkit pssh
prepare-vms/infra/example.aws (new file, 6 changes)

@@ -0,0 +1,6 @@
INFRACLASS=aws
# If you are using AWS to deploy, copy this file (e.g. to "aws", or "us-east-1")
# and customize the variables below.
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=AKI...
export AWS_SECRET_ACCESS_KEY=...
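As the comments in the new file indicate, a copy of it is customized and then handed to `workshopctl` with `--infra`; for instance (file names illustrative):

```bash
# One infra file per AWS region; INFRACLASS=aws makes workshopctl
# source lib/infra/aws.sh for this configuration.
cp prepare-vms/infra/example.aws prepare-vms/infra/aws-us-east-1
vi prepare-vms/infra/aws-us-east-1   # fill in region and credentials
./workshopctl start --infra infra/aws-us-east-1 \
    --settings settings/myworkshop.yaml --count 10
```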
prepare-vms/infra/example.generic (new file, 2 changes)

@@ -0,0 +1,2 @@
INFRACLASS=generic
# This is for manual provisioning. No other variable or configuration is needed.
prepare-vms/infra/example.openstack (new file, 9 changes)

@@ -0,0 +1,9 @@
INFRACLASS=openstack
# If you are using OpenStack, copy this file (e.g. to "openstack" or "enix")
# and customize the variables below.
export TF_VAR_user="jpetazzo"
export TF_VAR_tenant="training"
export TF_VAR_domain="Default"
export TF_VAR_password="..."
export TF_VAR_auth_url="https://api.r1.nxs.enix.io/v3"
export TF_VAR_flavor="GP1.S"
@@ -1,105 +0,0 @@
aws_display_tags() {
    # Print all "Name" tags in our region with their instance count
    echo "[#] [Status] [Token] [Tag]" \
        | awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
    aws ec2 describe-instances \
        --query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
        | tr -d "\r" \
        | uniq -c \
        | sort -k 3 \
        | awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}

aws_get_tokens() {
    aws ec2 describe-instances --output text \
        --query 'Reservations[*].Instances[*].[ClientToken]' \
        | sort -u
}

aws_display_instance_statuses_by_tag() {
    TAG=$1
    need_tag $TAG

    IDS=$(aws ec2 describe-instances \
        --filters "Name=tag:Name,Values=$TAG" \
        --query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')

    aws ec2 describe-instance-status \
        --instance-ids $IDS \
        --query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
        --output table
}

aws_display_instances_by_tag() {
    TAG=$1
    need_tag $TAG
    result=$(aws ec2 describe-instances --output table \
        --filter "Name=tag:Name,Values=$TAG" \
        --query "Reservations[*].Instances[*].[ \
            InstanceId, \
            State.Name, \
            Tags[0].Value, \
            PublicIpAddress, \
            InstanceType \
            ]"
        )
    if [[ -z $result ]]; then
        die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
    else
        echo "$result"
    fi
}

aws_get_instance_ids_by_filter() {
    FILTER=$1
    aws ec2 describe-instances --filters $FILTER \
        --query Reservations[*].Instances[*].InstanceId \
        --output text | tr "\t" "\n" | tr -d "\r"
}

aws_get_instance_ids_by_client_token() {
    TOKEN=$1
    need_tag $TOKEN
    aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}

aws_get_instance_ids_by_tag() {
    TAG=$1
    need_tag $TAG
    aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}

aws_get_instance_ips_by_tag() {
    TAG=$1
    need_tag $TAG
    aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
        --output text \
        --query "Reservations[*].Instances[*].PublicIpAddress" \
        | tr "\t" "\n" \
        | sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}

aws_kill_instances_by_tag() {
    TAG=$1
    need_tag $TAG
    IDS=$(aws_get_instance_ids_by_tag $TAG)
    if [ -z "$IDS" ]; then
        die "Invalid tag."
    fi

    info "Deleting instances with tag $TAG."

    aws ec2 terminate-instances --instance-ids $IDS \
        | grep ^TERMINATINGINSTANCES

    info "Deleted instances with tag $TAG."
}

aws_tag_instances() {
    OLD_TAG_OR_TOKEN=$1
    NEW_TAG=$2
    IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
    [[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
    IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
    [[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}
@@ -50,27 +50,41 @@ sep() {
    fi
}

need_tag() {
need_infra() {
    if [ -z "$1" ]; then
        die "Please specify infrastructure file. (e.g.: infra/aws)"
    fi
    if [ "$1" = "--infra" ]; then
        die "The infrastructure file should be passed directly to this command. Remove '--infra' and try again."
    fi
    if [ ! -f "$1" ]; then
        die "Infrastructure file $1 doesn't exist."
    fi
    . "$1"
    . "lib/infra/$INFRACLASS.sh"
}

need_tag() {
    if [ -z "$TAG" ]; then
        die "Please specify a tag or token. To see available tags and tokens, run: $0 list"
    fi
    if [ ! -d "tags/$TAG" ]; then
        die "Tag $TAG not found (directory tags/$TAG does not exist)."
    fi
    for FILE in settings.yaml ips.txt infra.sh; do
        if [ ! -f "tags/$TAG/$FILE" ]; then
            warning "File tags/$TAG/$FILE not found."
        fi
    done
    . "tags/$TAG/infra.sh"
    . "lib/infra/$INFRACLASS.sh"
}

need_settings() {
    if [ -z "$1" ]; then
        die "Please specify a settings file."
    elif [ ! -f "$1" ]; then
        die "Please specify a settings file. (e.g.: settings/kube101.yaml)"
    fi
    if [ ! -f "$1" ]; then
        die "Settings file $1 doesn't exist."
    fi
}

need_ips_file() {
    IPS_FILE=$1
    if [ -z "$IPS_FILE" ]; then
        die "IPS_FILE not set."
    fi

    if [ ! -s "$IPS_FILE" ]; then
        die "IPS_FILE $IPS_FILE not found. Please run: $0 ips <TAG>"
    fi
}
@@ -7,21 +7,11 @@ _cmd() {

_cmd help "Show available commands"
_cmd_help() {
    printf "$(basename $0) - the orchestration workshop swiss army knife\n"
    printf "$(basename $0) - the container training swiss army knife\n"
    printf "Commands:"
    printf "%s" "$HELP" | sort
}

_cmd amis "List Ubuntu AMIs in the current region"
_cmd_amis() {
    find_ubuntu_ami -r $AWS_DEFAULT_REGION "$@"
}

_cmd ami "Show the AMI that will be used for deployment"
_cmd_ami() {
    find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}

_cmd build "Build the Docker image to run this program in a container"
_cmd_build() {
    docker-compose build
@@ -32,64 +22,53 @@ _cmd_wrap() {
    docker-compose run --rm workshopctl "$@"
}

_cmd cards "Generate ready-to-print cards for a batch of VMs"
_cmd cards "Generate ready-to-print cards for a group of VMs"
_cmd_cards() {
    TAG=$1
    SETTINGS=$2
    need_tag $TAG
    need_settings $SETTINGS
    need_tag

    # If you're not using AWS, populate the ips.txt file manually
    if [ ! -f tags/$TAG/ips.txt ]; then
        aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
    fi

    # Remove symlinks to old cards
    rm -f ips.html ips.pdf

    # This will generate two files in the base dir: ips.pdf and ips.html
    lib/ips-txt-to-html.py $SETTINGS

    for f in ips.html ips.pdf; do
        # Remove old versions of cards if they exist
        rm -f tags/$TAG/$f

        # Move the generated file and replace it with a symlink
        mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
    done
    # This will process ips.txt to generate two files: ips.pdf and ips.html
    (
        cd tags/$TAG
        ../../lib/ips-txt-to-html.py settings.yaml
    )

    info "Cards created. You can view them with:"
    info "xdg-open ips.html ips.pdf (on Linux)"
    info "open ips.html ips.pdf (on MacOS)"
    info "xdg-open tags/$TAG/ips.html tags/$TAG/ips.pdf (on Linux)"
    info "open tags/$TAG/ips.html (on macOS)"
}

_cmd deploy "Install Docker on a bunch of running VMs"
_cmd_deploy() {
    TAG=$1
    SETTINGS=$2
    need_tag $TAG
    need_settings $SETTINGS
    link_tag $TAG
    count=$(wc -l ips.txt)
    need_tag

    # wait until all hosts are reachable before trying to deploy
    info "Trying to reach $TAG instances..."
    while ! tag_is_reachable $TAG; do
    while ! tag_is_reachable; do
        >/dev/stderr echo -n "."
        sleep 2
    done
    >/dev/stderr echo ""

    echo deploying > tags/$TAG/status
    sep "Deploying tag $TAG"
    pssh -I tee /tmp/settings.yaml <$SETTINGS

    # Wait for cloudinit to be done
    pssh "
    while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
        sleep 1
    done"

    # Copy settings and install Python YAML parser
    pssh -I tee /tmp/settings.yaml <tags/$TAG/settings.yaml
    pssh "
    sudo apt-get update &&
    sudo apt-get install -y python-setuptools &&
    sudo easy_install pyyaml"
    sudo apt-get install -y python-yaml"

    # Copy postprep.py to the remote machines, and execute it, feeding it the list of IP addresses
    pssh -I tee /tmp/postprep.py <lib/postprep.py
    pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <ips.txt
    pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <tags/$TAG/ips.txt

    # Install docker-prompt script
    pssh -I sudo tee /usr/local/bin/docker-prompt <lib/docker-prompt
@@ -117,14 +96,17 @@ _cmd_deploy() {
    fi"

    sep "Deployed tag $TAG"
    echo deployed > tags/$TAG/status
    info "You may want to run one of the following commands:"
    info "$0 kube $TAG"
    info "$0 pull_images $TAG"
    info "$0 cards $TAG $SETTINGS"
    info "$0 cards $TAG"
}

_cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"
_cmd_kube() {
    TAG=$1
    need_tag

    # Install packages
    pssh --timeout 200 "
@@ -134,13 +116,13 @@ _cmd_kube() {
    sudo tee /etc/apt/sources.list.d/kubernetes.list"
    pssh --timeout 200 "
    sudo apt-get update -q &&
    sudo apt-get install -qy kubelet kubeadm kubectl
    sudo apt-get install -qy kubelet kubeadm kubectl &&
    kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"

    # Initialize kube master
    pssh --timeout 200 "
    if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
        kubeadm token generate > /tmp/token
        kubeadm token generate > /tmp/token &&
        sudo kubeadm init --token \$(cat /tmp/token)
    fi"
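A recurring change in this hunk and the ones that follow is chaining the remote steps with `&&`, so that an early failure surfaces in the command's exit status instead of being masked by whatever runs last; schematically:

```bash
# Before: the exit status is that of kubeadm init alone, even if the
# token generation failed. After: a failed first step aborts the chain
# and pssh reports the host as failed.
pssh --timeout 200 "
kubeadm token generate > /tmp/token &&
sudo kubeadm init --token \$(cat /tmp/token)"
```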
@@ -157,38 +139,66 @@ _cmd_kube() {
    # Install weave as the pod network
    pssh "
    if grep -q node1 /tmp/node; then
        kubever=\$(kubectl version | base64 | tr -d '\n')
        kubever=\$(kubectl version | base64 | tr -d '\n') &&
        kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
    fi"

    # Join the other nodes to the cluster
    pssh --timeout 200 "
    if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
        TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
        TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token) &&
        sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
    fi"

    # Install kubectx and kubens
    pssh "
    [ -d kubectx ] || git clone https://github.com/ahmetb/kubectx &&
    sudo ln -sf /home/ubuntu/kubectx/kubectx /usr/local/bin/kctx &&
    sudo ln -sf /home/ubuntu/kubectx/kubens /usr/local/bin/kns &&
    sudo cp /home/ubuntu/kubectx/completion/*.bash /etc/bash_completion.d &&
    [ -d kube-ps1 ] || git clone https://github.com/jonmosco/kube-ps1 &&
    sudo -u docker sed -i s/docker-prompt/kube_ps1/ /home/docker/.bashrc &&
    sudo -u docker tee -a /home/docker/.bashrc <<EOF
. /home/ubuntu/kube-ps1/kube-ps1.sh
KUBE_PS1_PREFIX=""
KUBE_PS1_SUFFIX=""
KUBE_PS1_SYMBOL_ENABLE="false"
KUBE_PS1_CTX_COLOR="green"
KUBE_PS1_NS_COLOR="green"
EOF"

    # Install stern
    pssh "
    if [ ! -x /usr/local/bin/stern ]; then
        sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64
        sudo chmod +x /usr/local/bin/stern
        ##VERSION##
        sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.10.0/stern_linux_amd64 &&
        sudo chmod +x /usr/local/bin/stern &&
        stern --completion bash | sudo tee /etc/bash_completion.d/stern
    fi"

    # Install helm
    pssh "
    if [ ! -x /usr/local/bin/helm ]; then
        curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | sudo bash
        curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | sudo bash &&
        helm completion bash | sudo tee /etc/bash_completion.d/helm
    fi"

    sep "Done"
}

_cmd kubetest "Check that all notes are reporting as Ready"
_cmd kubereset "Wipe out Kubernetes configuration on all nodes"
_cmd_kubereset() {
    TAG=$1
    need_tag

    pssh "sudo kubeadm reset --force"
}

_cmd kubetest "Check that all nodes are reporting as Ready"
_cmd_kubetest() {
    TAG=$1
    need_tag

    # There are way too many backslashes in the command below.
    # Feel free to make that better ♥
    pssh "
@@ -202,7 +212,7 @@ _cmd_kubetest() {
    fi"
}

_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd ids "(FIXME) List the instance IDs belonging to a given tag or token"
_cmd_ids() {
    TAG=$1
    need_tag $TAG
@@ -215,262 +225,264 @@ _cmd_ids() {
    aws_get_instance_ids_by_client_token $TAG
}

_cmd ips "List the IP addresses of the VMs for a given tag or token"
_cmd_ips() {
    TAG=$1
    need_tag $TAG
    mkdir -p tags/$TAG
    aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
    link_tag $TAG
}

_cmd list "List available batches in the current region"
_cmd list "List available groups for a given infrastructure"
_cmd_list() {
    info "Listing batches in region $AWS_DEFAULT_REGION:"
    aws_display_tags
    need_infra $1
    infra_list
}

_cmd status "List instance status for a given batch"
_cmd_status() {
    info "Using region $AWS_DEFAULT_REGION."
_cmd listall "List VMs running on all configured infrastructures"
_cmd_listall() {
    for infra in infra/*; do
        case $infra in
        infra/example.*)
            ;;
        *)
            info "Listing infrastructure $infra:"
            need_infra $infra
            infra_list
            ;;
        esac
    done
}

_cmd netfix "Disable GRO and run a pinger job on the VMs"
_cmd_netfix () {
    TAG=$1
    need_tag $TAG
    describe_tag $TAG
    tag_is_reachable $TAG
    info "You may be interested in running one of the following commands:"
    info "$0 ips $TAG"
    info "$0 deploy $TAG <settings/somefile.yaml>"
    need_tag

    pssh "
    sudo ethtool -K ens3 gro off
    sudo tee /root/pinger.service <<EOF
[Unit]
Description=pinger

[Install]
WantedBy=multi-user.target

[Service]
WorkingDirectory=/
ExecStart=/bin/ping -w60 1.1
User=nobody
Group=nogroup
Restart=always
EOF
    sudo systemctl enable /root/pinger.service
    sudo systemctl start pinger"
}

_cmd opensg "Open the default security group to ALL ingress traffic"
_cmd_opensg() {
    aws ec2 authorize-security-group-ingress \
        --group-name default \
        --protocol icmp \
        --port -1 \
        --cidr 0.0.0.0/0
    need_infra $1
    infra_opensg
}

    aws ec2 authorize-security-group-ingress \
        --group-name default \
        --protocol udp \
        --port 0-65535 \
        --cidr 0.0.0.0/0
_cmd pssh "Run an arbitrary command on all nodes"
_cmd_pssh() {
    TAG=$1
    need_tag
    shift

    aws ec2 authorize-security-group-ingress \
        --group-name default \
        --protocol tcp \
        --port 0-65535 \
        --cidr 0.0.0.0/0
    pssh "$@"
}

_cmd pull_images "Pre-pull a bunch of Docker images"
_cmd_pull_images() {
    TAG=$1
    need_tag $TAG
    pull_tag $TAG
    need_tag
    pull_tag
}
_cmd retag "Apply a new tag to a batch of VMs"
_cmd quotas "Check our infrastructure quotas (max instances)"
_cmd_quotas() {
    need_infra $1
    infra_quotas
}

_cmd retag "(FIXME) Apply a new tag to a group of VMs"
_cmd_retag() {
    OLDTAG=$1
    NEWTAG=$2
    need_tag $OLDTAG
    TAG=$OLDTAG
    need_tag
    if [[ -z "$NEWTAG" ]]; then
        die "You must specify a new tag to apply."
    fi
    aws_tag_instances $OLDTAG $NEWTAG
}

_cmd start "Start a batch of VMs"
_cmd start "Start a group of VMs"
_cmd_start() {
    # Number of instances to create
    COUNT=$1
    # Optional settings file (to carry on with deployment)
    SETTINGS=$2

    while [ ! -z "$*" ]; do
        case "$1" in
        --infra) INFRA=$2; shift 2;;
        --settings) SETTINGS=$2; shift 2;;
        --count) COUNT=$2; shift 2;;
        --tag) TAG=$2; shift 2;;
        *) die "Unrecognized parameter: $1."
        esac
    done

    if [ -z "$INFRA" ]; then
        die "Please add --infra flag to specify which infrastructure file to use."
    fi
    if [ -z "$SETTINGS" ]; then
        die "Please add --settings flag to specify which settings file to use."
    fi
    if [ -z "$COUNT" ]; then
        die "Indicate number of instances to start."
        COUNT=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
        warning "No --count option was specified. Using value from settings file ($COUNT)."
    fi

    # Check that the specified settings and infrastructure are valid.
    need_settings $SETTINGS
    need_infra $INFRA

    # Print our AWS username, to ease the pain of credential-juggling
    greet

    # Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
    key_name=$(sync_keys)

    AMI=$(_cmd_ami)    # Retrieve the AWS image ID
    if [ -z "$AMI" ]; then
        die "I could not find which AMI to use in this region. Try another region?"
    if [ -z "$TAG" ]; then
        TAG=$(make_tag)
    fi
    TOKEN=$(get_token)    # generate a timestamp token for this batch of VMs
    AWS_KEY_NAME=$(make_key_name)

    sep "Starting instances"
    info "  Count:     $COUNT"
    info "  Region:    $AWS_DEFAULT_REGION"
    info "  Token/tag: $TOKEN"
    info "  AMI:       $AMI"
    info "  Key name:  $AWS_KEY_NAME"
    result=$(aws ec2 run-instances \
        --key-name $AWS_KEY_NAME \
        --count $COUNT \
        --instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
        --client-token $TOKEN \
        --image-id $AMI)
    reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
    info "Reservation ID: $reservation_id"
    sep

    # if instance creation succeeded, we should have some IDs
    IDS=$(aws_get_instance_ids_by_client_token $TOKEN)
    if [ -z "$IDS" ]; then
        die "Instance creation failed."
    fi

    # Tag these new instances with a tag that is the same as the token
    TAG=$TOKEN
    aws_tag_instances $TOKEN $TAG

    wait_until_tag_is_running $TAG $COUNT
    mkdir -p tags/$TAG
    ln -s ../../$INFRA tags/$TAG/infra.sh
    ln -s ../../$SETTINGS tags/$TAG/settings.yaml
    echo creating > tags/$TAG/status

    infra_start $COUNT
    sep
    info "Successfully created $COUNT instances with tag $TAG"
    sep
    echo created > tags/$TAG/status

    mkdir -p tags/$TAG
    IPS=$(aws_get_instance_ips_by_tag $TAG)
    echo "$IPS" >tags/$TAG/ips.txt
    link_tag $TAG
    if [ -n "$SETTINGS" ]; then
        _cmd_deploy $TAG $SETTINGS
    else
        info "To deploy or kill these instances, run one of the following:"
        info "$0 deploy $TAG <settings/somefile.yaml>"
        info "$0 stop $TAG"
    fi
}

_cmd ec2quotas "Check our EC2 quotas (max instances)"
_cmd_ec2quotas() {
    greet

    max_instances=$(aws ec2 describe-account-attributes \
        --attribute-names max-instances \
        --query 'AccountAttributes[*][AttributeValues]')
    info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."

    # Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
    # If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
    # Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
    info "Available regions:"
    aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
    info "To deploy Docker on these instances, you can run:"
    info "$0 deploy $TAG"
    info "To terminate these instances, you can run:"
    info "$0 stop $TAG"
}
_cmd stop "Stop (terminate, shutdown, kill, remove, destroy...) instances"
_cmd_stop() {
    TAG=$1
    need_tag $TAG
    aws_kill_instances_by_tag $TAG
    need_tag
    infra_stop
    echo stopped > tags/$TAG/status
}

_cmd test "Run tests (pre-flight checks) on a batch of VMs"
_cmd tags "List groups of VMs known locally"
_cmd_tags() {
    (
    cd tags
    echo "[#] [Status] [Tag] [Infra]" \
        | awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
    for tag in *; do
        if [ -f $tag/ips.txt ]; then
            count="$(wc -l < $tag/ips.txt)"
        else
            count="?"
        fi
        if [ -f $tag/status ]; then
            status="$(cat $tag/status)"
        else
            status="?"
        fi
        if [ -f $tag/infra.sh ]; then
            infra="$(basename $(readlink $tag/infra.sh))"
        else
            infra="?"
        fi
        echo "$count $status $tag $infra" \
            | awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
    done
    )
}

_cmd test "Run tests (pre-flight checks) on a group of VMs"
_cmd_test() {
    TAG=$1
    need_tag $TAG
    test_tag $TAG
    need_tag
    test_tag
}
###
_cmd helmprom "Install Helm and Prometheus"
_cmd_helmprom() {
    TAG=$1
    need_tag
    pssh "
    if grep -q node1 /tmp/node; then
        kubectl -n kube-system get serviceaccount helm ||
        kubectl -n kube-system create serviceaccount helm
        helm init --service-account helm
        kubectl get clusterrolebinding helm-can-do-everything ||
        kubectl create clusterrolebinding helm-can-do-everything \
            --clusterrole=cluster-admin \
            --serviceaccount=kube-system:helm
        helm upgrade --install prometheus stable/prometheus \
            --namespace kube-system \
            --set server.service.type=NodePort \
            --set server.service.nodePort=30090 \
            --set server.persistentVolume.enabled=false \
            --set alertmanager.enabled=false
    fi"
}

# Sometimes, weave fails to come up on some nodes.
# Symptom: the pods on a node are unreachable (they don't even ping).
# Remedy: wipe out Weave state and delete weave pod on that node.
# Specifically, identify the weave pod that is defective, then:
# kubectl -n kube-system exec weave-net-XXXXX -c weave rm /weavedb/weave-netdata.db
# kubectl -n kube-system delete pod weave-net-XXXXX
_cmd weavetest "Check that weave seems properly setup"
_cmd_weavetest() {
    TAG=$1
    need_tag
    pssh "
    kubectl -n kube-system get pods -o name | grep weave | cut -d/ -f2 |
        xargs -I POD kubectl -n kube-system exec POD -c weave -- \
        sh -c \"./weave --local status | grep Connections | grep -q ' 1 failed' || ! echo POD \""
}

greet() {
    IAMUSER=$(aws iam get-user --query 'User.UserName')
    info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
}
link_tag() {
    TAG=$1
    need_tag $TAG
    IPS_FILE=tags/$TAG/ips.txt
    need_ips_file $IPS_FILE
    ln -sf $IPS_FILE ips.txt
}

pull_tag() {
    TAG=$1
    need_tag $TAG
    link_tag $TAG
    if [ ! -s $IPS_FILE ]; then
        die "Nonexistent or empty IPs file $IPS_FILE."
    fi

    # Pre-pull a bunch of images
    pssh --timeout 900 'for I in \
        debian:latest \
        ubuntu:latest \
        fedora:latest \
        centos:latest \
        elasticsearch:2 \
        postgres \
        redis \
        alpine \
        registry \
        nicolaka/netshoot \
        jpetazzo/trainingwheels \
        golang \
        training/namer \
        dockercoins/hasher \
        dockercoins/rng \
        dockercoins/webui \
        dockercoins/worker \
        logstash \
        prom/node-exporter \
        google/cadvisor \
        dockersamples/visualizer \
        nathanleclaire/redisonrails; do
        sudo -u docker docker pull $I
    done'

    info "Finished pulling images for $TAG."
    info "You may now want to run:"
    info "$0 cards $TAG <settings/somefile.yaml>"
}

wait_until_tag_is_running() {
    max_retry=50
    TAG=$1
    COUNT=$2
    i=0
    done_count=0
    while [[ $done_count -lt $COUNT ]]; do
        let "i += 1"
        info "$(printf "%d/%d instances online" $done_count $COUNT)"
        done_count=$(aws ec2 describe-instances \
            --filters "Name=instance-state-name,Values=running" \
            "Name=tag:Name,Values=$TAG" \
            --query "Reservations[*].Instances[*].State.Name" \
            | tr "\t" "\n" \
            | wc -l)

        if [[ $i -gt $max_retry ]]; then
            die "Timed out while waiting for instance creation (after $max_retry retries)"
        fi
        sleep 1
    done
}

tag_is_reachable() {
    TAG=$1
    need_tag $TAG
    link_tag $TAG
    pssh -t 5 true 2>&1 >/dev/null
}

test_tag() {
    TAG=$1
    ips_file=tags/$TAG/ips.txt
    info "Picking a random IP address in $ips_file to run tests."
    n=$((1 + $RANDOM % $(wc -l <$ips_file)))
    ip=$(head -n $n $ips_file | tail -n 1)
    ip=$(shuf -n1 $ips_file)
    test_vm $ip
    info "Tests complete."
}

@@ -546,17 +558,9 @@ sync_keys() {
    fi
}

get_token() {
make_tag() {
    if [ -z $USER ]; then
        export USER=anonymous
    fi
    date +%Y-%m-%d-%H-%M-$USER
}

describe_tag() {
    # Display instance details and reachability/status information
    TAG=$1
    need_tag $TAG
    aws_display_instances_by_tag $TAG
    aws_display_instance_statuses_by_tag $TAG
}
prepare-vms/lib/infra.sh (new file, 26 changes)

@@ -0,0 +1,26 @@
# Default stub functions for infrastructure libraries.
# When loading an infrastructure library, these functions will be overridden.

infra_list() {
    warning "infra_list is unsupported on $INFRACLASS."
}

infra_quotas() {
    warning "infra_quotas is unsupported on $INFRACLASS."
}

infra_start() {
    warning "infra_start is unsupported on $INFRACLASS."
}

infra_stop() {
    warning "infra_stop is unsupported on $INFRACLASS."
}

infra_quotas() {
    warning "infra_quotas is unsupported on $INFRACLASS."
}

infra_opensg() {
    warning "infra_opensg is unsupported on $INFRACLASS."
}
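Combined with `need_infra` and `need_tag` sourcing `lib/infra/$INFRACLASS.sh` afterwards, the stub file above is plain bash function overriding: the definition sourced last wins. A minimal sketch of the mechanism (assuming the stub file is sourced before the provider library):

```bash
# Stubs define a warning fallback; the provider file redefines it.
. lib/infra.sh                  # infra_list() { warning "...unsupported..."; }
. "lib/infra/$INFRACLASS.sh"    # e.g. aws.sh: infra_list() { aws_display_tags; }
infra_list                      # dispatches to the provider implementation
```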
206
prepare-vms/lib/infra/aws.sh
Normal file
@@ -0,0 +1,206 @@
|
||||
infra_list() {
|
||||
aws_display_tags
|
||||
}
|
||||
|
||||
infra_quotas() {
|
||||
greet
|
||||
|
||||
max_instances=$(aws ec2 describe-account-attributes \
|
||||
--attribute-names max-instances \
|
||||
--query 'AccountAttributes[*][AttributeValues]')
|
||||
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
|
||||
|
||||
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
|
||||
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
|
||||
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
|
||||
info "Available regions:"
|
||||
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
|
||||
}
|
||||
|
||||
infra_start() {
|
||||
COUNT=$1
|
||||
|
||||
# Print our AWS username, to ease the pain of credential-juggling
|
||||
greet
|
||||
|
||||
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
|
||||
key_name=$(sync_keys)
|
||||
|
||||
AMI=$(aws_get_ami) # Retrieve the AWS image ID
|
||||
if [ -z "$AMI" ]; then
|
||||
die "I could not find which AMI to use in this region. Try another region?"
|
||||
fi
|
||||
AWS_KEY_NAME=$(make_key_name)
|
||||
|
||||
sep "Starting instances"
|
||||
info " Count: $COUNT"
|
||||
info " Region: $AWS_DEFAULT_REGION"
|
||||
info " Token/tag: $TAG"
|
||||
info " AMI: $AMI"
|
||||
info " Key name: $AWS_KEY_NAME"
|
||||
result=$(aws ec2 run-instances \
|
||||
--key-name $AWS_KEY_NAME \
|
||||
--count $COUNT \
|
||||
--instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
|
||||
--client-token $TAG \
|
||||
--block-device-mapping 'DeviceName=/dev/sda1,Ebs={VolumeSize=20}' \
|
||||
--image-id $AMI)
|
||||
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
|
||||
info "Reservation ID: $reservation_id"
|
||||
sep
|
||||
|
||||
# if instance creation succeeded, we should have some IDs
|
||||
IDS=$(aws_get_instance_ids_by_client_token $TAG)
|
||||
if [ -z "$IDS" ]; then
|
||||
die "Instance creation failed."
|
||||
fi
|
||||
|
||||
# Tag these new instances with a tag that is the same as the token
|
||||
aws_tag_instances $TAG $TAG
|
||||
|
||||
# Wait until EC2 API tells us that the instances are running
|
||||
wait_until_tag_is_running $TAG $COUNT
|
||||
|
||||
aws_get_instance_ips_by_tag $TAG > tags/$TAG/ips.txt
|
||||
}
|
||||
|
||||
infra_stop() {
|
||||
aws_kill_instances_by_tag
|
||||
}
|
||||
|
||||
infra_opensg() {
|
||||
aws ec2 authorize-security-group-ingress \
|
||||
--group-name default \
|
||||
--protocol icmp \
|
||||
--port -1 \
|
||||
--cidr 0.0.0.0/0
|
||||
|
||||
aws ec2 authorize-security-group-ingress \
|
||||
--group-name default \
|
||||
--protocol udp \
|
||||
--port 0-65535 \
|
||||
--cidr 0.0.0.0/0
|
||||
|
||||
aws ec2 authorize-security-group-ingress \
|
||||
--group-name default \
|
||||
--protocol tcp \
|
||||
--port 0-65535 \
|
||||
--cidr 0.0.0.0/0
|
||||
}
|
||||
|
||||
wait_until_tag_is_running() {
|
||||
max_retry=50
|
||||
i=0
|
||||
done_count=0
|
||||
while [[ $done_count -lt $COUNT ]]; do
|
||||
let "i += 1"
|
||||
info "$(printf "%d/%d instances online" $done_count $COUNT)"
|
||||
done_count=$(aws ec2 describe-instances \
|
||||
--filters "Name=tag:Name,Values=$TAG" \
|
||||
"Name=instance-state-name,Values=running" \
|
||||
--query "length(Reservations[].Instances[])")
|
||||
if [[ $i -gt $max_retry ]]; then
|
||||
die "Timed out while waiting for instance creation (after $max_retry retries)"
|
||||
fi
|
||||
sleep 1
|
||||
done
|
||||
}
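
# Note: the JMESPath length() query above makes the CLI print a bare integer,
# avoiding the tr/wc pipeline used by the older version of this function.
# Illustrative run (tag and output are made up):
#   $ aws ec2 describe-instances \
#       --filters "Name=tag:Name,Values=2019-03-04-14-30-jerome" \
#                 "Name=instance-state-name,Values=running" \
#       --query "length(Reservations[].Instances[])"
#   4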

aws_display_tags() {
    # Print all "Name" tags in our region with their instance count
    echo "[#] [Status] [Token] [Tag]" \
        | awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
    aws ec2 describe-instances \
        --query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
        | tr -d "\r" \
        | uniq -c \
        | sort -k 3 \
        | awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}

aws_get_tokens() {
    aws ec2 describe-instances --output text \
        --query 'Reservations[*].Instances[*].[ClientToken]' \
        | sort -u
}

aws_display_instance_statuses_by_tag() {
    IDS=$(aws ec2 describe-instances \
        --filters "Name=tag:Name,Values=$TAG" \
        --query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')

    aws ec2 describe-instance-status \
        --instance-ids $IDS \
        --query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
        --output table
}

aws_display_instances_by_tag() {
    result=$(aws ec2 describe-instances --output table \
        --filter "Name=tag:Name,Values=$TAG" \
        --query "Reservations[*].Instances[*].[ \
            InstanceId, \
            State.Name, \
            Tags[0].Value, \
            PublicIpAddress, \
            InstanceType \
            ]"
    )
    if [[ -z $result ]]; then
        die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
    else
        echo "$result"
    fi
}

aws_get_instance_ids_by_filter() {
    FILTER=$1
    aws ec2 describe-instances --filters $FILTER \
        --query Reservations[*].Instances[*].InstanceId \
        --output text | tr "\t" "\n" | tr -d "\r"
}

aws_get_instance_ids_by_client_token() {
    TOKEN=$1
    aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}

aws_get_instance_ids_by_tag() {
    aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}

aws_get_instance_ips_by_tag() {
    aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
        --output text \
        --query "Reservations[*].Instances[*].PublicIpAddress" \
        | tr "\t" "\n" \
        | sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}

aws_kill_instances_by_tag() {
    IDS=$(aws_get_instance_ids_by_tag $TAG)
    if [ -z "$IDS" ]; then
        die "Invalid tag."
    fi

    info "Deleting instances with tag $TAG."

    aws ec2 terminate-instances --instance-ids $IDS \
        | grep ^TERMINATINGINSTANCES

    info "Deleted instances with tag $TAG."
}

aws_tag_instances() {
    OLD_TAG_OR_TOKEN=$1
    NEW_TAG=$2
    IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
    [[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
    IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
    [[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}

aws_get_ami() {
    ##VERSION##
    find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 18.04 -t hvm:ebs -N -q
}

8
prepare-vms/lib/infra/generic.sh
Normal file
@@ -0,0 +1,8 @@
infra_start() {
    COUNT=$1
    info "You should now run your provisioning commands for $COUNT machines."
    info "Note: no machines have been automatically created!"
    info "Once done, put the list of IP addresses in tags/$TAG/ips.txt"
    info "(one IP address per line, without any comments or extra lines)."
    touch tags/$TAG/ips.txt
}
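
# Illustrative tags/$TAG/ips.txt matching the format described above
# (the addresses are made up):
#   203.0.113.10
#   203.0.113.11
#   203.0.113.12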

20
prepare-vms/lib/infra/openstack.sh
Normal file
@@ -0,0 +1,20 @@
infra_start() {
    COUNT=$1

    cp terraform/*.tf tags/$TAG
    (
        cd tags/$TAG
        terraform init
        echo prefix = \"$TAG\" >> terraform.tfvars
        echo count = \"$COUNT\" >> terraform.tfvars
        terraform apply -auto-approve
        terraform output ip_addresses > ips.txt
    )
}
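
# Illustrative terraform.tfvars produced by the two "echo" lines above
# (the values are examples):
#   prefix = "2019-03-04-14-30-jerome"
#   count = "4"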

infra_stop() {
    (
        cd tags/$TAG
        terraform destroy -auto-approve
    )
}

@@ -31,7 +31,13 @@ while ips:
    clusters.append(cluster)

template_file_name = SETTINGS["cards_template"]
-template = jinja2.Template(open(template_file_name).read())
+template_file_path = os.path.join(
+    os.path.dirname(__file__),
+    "..",
+    "templates",
+    template_file_name
+    )
+template = jinja2.Template(open(template_file_path).read())
with open("ips.html", "w") as f:
    f.write(template.render(clusters=clusters, **SETTINGS))
print("Generated ips.html")

@@ -83,7 +83,7 @@ system("sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /e

system("sudo service ssh restart")
system("sudo apt-get -q update")
-system("sudo apt-get -qy install git jq python-pip")
+system("sudo apt-get -qy install git jq")

#######################
### DOCKER INSTALLS ###
@@ -98,7 +98,6 @@ system("sudo apt-get -q update")
system("sudo apt-get -qy install docker-ce")

### Install docker-compose
-#system("sudo pip install -U docker-compose=={}".format(COMPOSE_VERSION))
system("sudo curl -sSL -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/{}/docker-compose-{}-{}".format(COMPOSE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-compose")
system("docker-compose version")

@@ -1,12 +1,17 @@
# This file can be sourced in order to directly run commands on
-# a batch of VMs whose IPs are located in ips.txt of the directory in which
+# a group of VMs whose IPs are located in ips.txt of the directory in which
# the command is run.

pssh() {
-    HOSTFILE="ips.txt"
+    if [ -z "$TAG" ]; then
+        >/dev/stderr echo "Variable \$TAG is not set."
+        return
+    fi
+
+    HOSTFILE="tags/$TAG/ips.txt"
+
    [ -f $HOSTFILE ] || {
-        >/dev/stderr echo "No hostfile found at $HOSTFILE"
+        >/dev/stderr echo "Hostfile $HOSTFILE not found."
        return
    }

26
prepare-vms/settings/enix.yaml
Normal file
@@ -0,0 +1,26 @@
# Number of VMs per cluster
clustersize: 1

# Jinja2 template to use to generate ready-to-cut cards
cards_template: enix.html

# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4

# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in

# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)

# This can be "test" or "stable"
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0

# Password used to connect with the "docker user"
docker_user_password: training

26
prepare-vms/settings/jerome.yaml
Normal file
@@ -0,0 +1,26 @@
# Number of VMs per cluster
clustersize: 4

# Jinja2 template to use to generate ready-to-cut cards
cards_template: jerome.html

# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4

# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in

# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)

# This can be "test" or "stable"
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0

# Password used to connect with the "docker user"
docker_user_password: training

@@ -4,7 +4,7 @@
clustersize: 3

# Jinja2 template to use to generate ready-to-cut cards
-cards_template: settings/kube101.html
+cards_template: kube101.html

# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
@@ -24,4 +24,5 @@ compose_version: 1.21.1
machine_version: 0.14.0

# Password used to connect with the "docker user"
-docker_user_password: training
+docker_user_password: training

@@ -20,8 +20,8 @@ paper_margin: 0.2in
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.21.1
-machine_version: 0.14.0
+compose_version: 1.22.0
+machine_version: 0.15.0

# Password used to connect with the "docker user"
-docker_user_password: training
+docker_user_password: training

121
prepare-vms/templates/enix.html
Normal file
@@ -0,0 +1,121 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://FIXME.container.training" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine virtuelle" -%}
{%- set this_or_each = "cette" -%}
{%- set plural = "" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "chaque" -%}
{%- set plural = "s" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
@import url('https://fonts.googleapis.com/css?family=Slabo+27px');

body, table {
  margin: 0;
  padding: 0;
  line-height: 1em;
  font-size: 15px;
  font-family: 'Slabo 27px';
}

table {
  border-spacing: 0;
  margin-top: 0.4em;
  margin-bottom: 0.4em;
  border-left: 0.8em double grey;
  padding-left: 0.4em;
}

div {
  float: left;
  border: 1px dotted black;
  padding-top: 1%;
  padding-bottom: 1%;
  /* columns * (width+left+right) < 100% */
  width: 30%;
  padding-left: 1.5%;
  padding-right: 1.5%;
}

p {
  margin: 0.4em 0 0.4em 0;
}

img {
  height: 4em;
  float: right;
  margin-right: -0.3em;
}

img.enix {
  height: 4.0em;
  margin-top: 0.4em;
}

img.kube {
  height: 4.2em;
  margin-top: 1.7em;
}

.logpass {
  font-family: monospace;
  font-weight: bold;
}

.pagebreak {
  page-break-after: always;
  clear: both;
  display: block;
  height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>

<p>
Voici les informations permettant de se connecter à votre
{{ cluster_or_machine }} pour cette formation.
Vous pouvez vous connecter à {{ this_or_each }} machine virtuelle
avec n'importe quel client SSH.
</p>
<p>
<img class="enix" src="https://enix.io/static/img/logos/logo-domain-cropped.png" />
<table>
<tr><td>identifiant:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>mot de passe:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>

</p>
<p>
Adresse{{ plural }} IP :
<!--<img class="kube" src="{{ image_src }}" />-->
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>Le support de formation est à l'adresse suivante :
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>
134
prepare-vms/templates/jerome.html
Normal file
@@ -0,0 +1,134 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://qconuk2019.container.training/" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
@import url('https://fonts.googleapis.com/css?family=Slabo+27px');
body, table {
  margin: 0;
  padding: 0;
  line-height: 1.0em;
  font-size: 15px;
  font-family: 'Slabo 27px';
}

table {
  border-spacing: 0;
  margin-top: 0.4em;
  margin-bottom: 0.4em;
  border-left: 0.8em double grey;
  padding-left: 0.4em;
}

div {
  float: left;
  border: 1px dotted black;
  height: 31%;
  padding-top: 1%;
  padding-bottom: 1%;
  /* columns * (width+left+right) < 100% */
  width: 30%;
  padding-left: 1.5%;
  padding-right: 1.5%;
}

div.back {
  border: 1px dotted white;
}

div.back p {
  margin: 0.5em 1em 0 1em;
}

p {
  margin: 0.4em 0 0.8em 0;
}

img {
  height: 5em;
  float: right;
  margin-right: 1em;
}

.logpass {
  font-family: monospace;
  font-weight: bold;
}

.pagebreak {
  page-break-after: always;
  clear: both;
  display: block;
  height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
<div>

<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>

</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% if loop.index%pagesize==0 or loop.last %}
<span class="pagebreak"></span>
{% for x in range(pagesize) %}
<div class="back">
<br/>
<p>You got this at the workshop
"Getting Started With Kubernetes and Container Orchestration"
during QCON London (March 2019).</p>
<p>If you liked that workshop,
I can train your team or organization
on Docker, containers, and Kubernetes,
with curriculums of 1 to 5 days.
</p>
<p>Interested? Contact me at:</p>
<p>jerome.petazzoni@gmail.com</p>
<p>Thank you!</p>
</div>
{% endfor %}
<span class="pagebreak"></span>
{% endif %}
{% endfor %}
</body>
</html>
5
prepare-vms/terraform/keypair.tf
Normal file
@@ -0,0 +1,5 @@
resource "openstack_compute_keypair_v2" "ssh_deploy_key" {
    name = "${var.prefix}"
    public_key = "${file("~/.ssh/id_rsa.pub")}"
}

32
prepare-vms/terraform/machines.tf
Normal file
@@ -0,0 +1,32 @@
resource "openstack_compute_instance_v2" "machine" {
    count = "${var.count}"
    name = "${format("%s-%04d", "${var.prefix}", count.index+1)}"
    image_name = "Ubuntu 16.04.5 (Xenial Xerus)"
    flavor_name = "${var.flavor}"
    security_groups = ["${openstack_networking_secgroup_v2.full_access.name}"]
    key_pair = "${openstack_compute_keypair_v2.ssh_deploy_key.name}"

    network {
        name = "${openstack_networking_network_v2.internal.name}"
        fixed_ip_v4 = "${cidrhost("${openstack_networking_subnet_v2.internal.cidr}", count.index+10)}"
    }
}

resource "openstack_compute_floatingip_v2" "machine" {
    count = "${var.count}"
    # This is something provided to us by Enix when our tenant was provisioned.
    pool = "Public Floating"
}

resource "openstack_compute_floatingip_associate_v2" "machine" {
    count = "${var.count}"
    floating_ip = "${openstack_compute_floatingip_v2.machine.*.address[count.index]}"
    instance_id = "${openstack_compute_instance_v2.machine.*.id[count.index]}"
    fixed_ip = "${cidrhost("${openstack_networking_subnet_v2.internal.cidr}", count.index+10)}"
}

output "ip_addresses" {
    value = "${join("\n", openstack_compute_floatingip_v2.machine.*.address)}"
}

variable "flavor" {}
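
# Sanity check (hypothetical): the fixed IPs computed with cidrhost() above
# can be previewed with "terraform console":
#   $ echo 'cidrhost("10.10.0.0/16", 10)' | terraform console
#   10.10.0.10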

23
prepare-vms/terraform/network.tf
Normal file
@@ -0,0 +1,23 @@
resource "openstack_networking_network_v2" "internal" {
    name = "${var.prefix}"
}

resource "openstack_networking_subnet_v2" "internal" {
    name = "${var.prefix}"
    network_id = "${openstack_networking_network_v2.internal.id}"
    cidr = "10.10.0.0/16"
    ip_version = 4
    dns_nameservers = ["1.1.1.1"]
}

resource "openstack_networking_router_v2" "router" {
    name = "${var.prefix}"
    external_network_id = "15f0c299-1f50-42a6-9aff-63ea5b75f3fc"
}

resource "openstack_networking_router_interface_v2" "router_internal" {
    router_id = "${openstack_networking_router_v2.router.id}"
    subnet_id = "${openstack_networking_subnet_v2.internal.id}"
}

13
prepare-vms/terraform/provider.tf
Normal file
@@ -0,0 +1,13 @@
provider "openstack" {
    user_name = "${var.user}"
    tenant_name = "${var.tenant}"
    domain_name = "${var.domain}"
    password = "${var.password}"
    auth_url = "${var.auth_url}"
}

variable "user" {}
variable "tenant" {}
variable "domain" {}
variable "password" {}
variable "auth_url" {}

12
prepare-vms/terraform/secgroup.tf
Normal file
@@ -0,0 +1,12 @@
resource "openstack_networking_secgroup_v2" "full_access" {
    name = "${var.prefix} - full access"
}

resource "openstack_networking_secgroup_rule_v2" "full_access" {
    direction = "ingress"
    ethertype = "IPv4"
    protocol = ""
    remote_ip_prefix = "0.0.0.0/0"
    security_group_id = "${openstack_networking_secgroup_v2.full_access.id}"
}

8
prepare-vms/terraform/vars.tf
Normal file
@@ -0,0 +1,8 @@
variable "prefix" {
    type = "string"
}

variable "count" {
    type = "string"
}

@@ -1,20 +1,19 @@
#!/bin/bash

-# Get the script's real directory, whether we're being called directly or via a symlink
+# Get the script's real directory.
+# This should work whether we're being called directly or via a symlink.
if [ -L "$0" ]; then
    export SCRIPT_DIR=$(dirname $(readlink "$0"))
else
    export SCRIPT_DIR=$(dirname "$0")
fi

-# Load all scriptlets
+# Load all scriptlets.
cd "$SCRIPT_DIR"
for lib in lib/*.sh; do
    . $lib
done

TRAINER_IMAGE="preparevms_prepare-vms"

DEPENDENCIES="
aws
ssh
@@ -25,49 +24,26 @@ DEPENDENCIES="
man
"

ENVVARS="
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
SSH_AUTH_SOCK
"
# Check for missing dependencies, and issue a warning if necessary.
missing=0
for dependency in $DEPENDENCIES; do
    if ! command -v $dependency >/dev/null; then
        warning "Dependency $dependency could not be found."
        missing=1
    fi
done
if [ $missing = 1 ]; then
    warning "At least one dependency is missing. Install it or try the image wrapper."
fi

check_envvars() {
    status=0
    for envvar in $ENVVARS; do
        if [ -z "${!envvar}" ]; then
            error "Environment variable $envvar is not set."
            if [ "$envvar" = "SSH_AUTH_SOCK" ]; then
                error "Hint: run 'eval \$(ssh-agent) ; ssh-add' and try again?"
            fi
            status=1
        fi
    done
    return $status
}
# Check if SSH_AUTH_SOCK is set.
# (If it's not, deployment will almost certainly fail.)
if [ -z "${SSH_AUTH_SOCK}" ]; then
    warning "Environment variable SSH_AUTH_SOCK is not set."
    warning "Hint: run 'eval \$(ssh-agent) ; ssh-add' and try again?"
fi

check_dependencies() {
    status=0
    for dependency in $DEPENDENCIES; do
        if ! command -v $dependency >/dev/null; then
            warning "Dependency $dependency could not be found."
            status=1
        fi
    done
    return $status
}

check_image() {
    docker inspect $TRAINER_IMAGE >/dev/null 2>&1
}

check_envvars \
    || die "Please set all required environment variables."

check_dependencies \
    || warning "At least one dependency is missing. Install it or try the image wrapper."

-# Now check which command was invoked and execute it
+# Now check which command was invoked and execute it.
if [ "$1" ]; then
    cmd="$1"
    shift
@@ -77,6 +53,3 @@ fi
fun=_cmd_$cmd
type -t $fun | grep -q function || die "Invalid command: $cmd"
$fun "$@"
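
# Hypothetical invocations (the available commands depend on the _cmd_*
# functions defined by the scriptlets; "cards" is referenced earlier):
#   ./workshopctl start 10
#   ./workshopctl cards 2019-03-04-14-30-jerome settings/jerome.yaml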

# export SSH_AUTH_DIRNAME=$(dirname $SSH_AUTH_SOCK)
# docker-compose run prepare-vms "$@"

4
slides/Dockerfile
Normal file
@@ -0,0 +1,4 @@
FROM alpine:3.11
RUN apk add --no-cache entr py3-pip git zip
COPY requirements.txt .
RUN pip3 install -r requirements.txt

@@ -34,6 +34,14 @@ compile each `foo.yml` file into `foo.yml.html`.
You can also run `./build.sh forever`: it will monitor the current
directory and rebuild slides automatically when files are modified.

If you have problems running `./build.sh` (because of
Python dependencies or whatever),
you can also run `docker-compose up` in this directory.
It will start the `./build.sh forever` script in a container.
It will also start a web server exposing the slides
(but the slides should also work if you load them from your
local filesystem).

## Publishing pipeline

@@ -53,4 +61,4 @@ You can run `./slidechecker foo.yml.html` to check for
missing images and show the number of slides in that deck.
It requires `phantomjs` to be installed. It takes some
time to run so it is not yet integrated with the publishing
-pipeline.
+pipeline.

1
slides/_redirects
Normal file
@@ -0,0 +1 @@
/ /kube-fullday.yml.html 200!

@@ -223,7 +223,7 @@ def check_exit_status():
def setup_tmux_and_ssh():
    if subprocess.call(["tmux", "has-session"]):
        logging.error("Couldn't connect to tmux. Please setup tmux first.")
-        ipaddr = open("../../prepare-vms/ips.txt").read().split("\n")[0]
+        ipaddr = "$IPADDR"
        uid = os.getuid()

        raise Exception("""

@@ -14,6 +14,7 @@ once)
    ./appendcheck.py $YAML.html
done
fi
zip -qr slides.zip . && echo "Created slides.zip archive."
;;

forever)

@@ -1,3 +1,6 @@

class: title

# Advanced Dockerfiles

![construction](images/title-advanced-dockerfiles.jpg)

@@ -156,6 +156,36 @@ Different deployments will use different underlying technologies.

---

## Service meshes

* A service mesh is a configurable network layer.

* It can provide service discovery, high availability, load balancing, observability...

* Service meshes are particularly useful for microservices applications.

* Service meshes are often implemented as proxies.

* Applications connect to the service mesh, which relays the connection where needed.

*Does that sound familiar?*

---

## Ambassadors and service meshes

* When using a service mesh, a "sidecar container" is often used as a proxy

* Our services connect (transparently) to that sidecar container

* That sidecar container figures out where to forward the traffic

... Does that sound familiar?

(It should, because service meshes are essentially app-wide or cluster-wide ambassadors!)

---

## Section summary

We've learned how to:
@@ -168,3 +198,10 @@ For more information about the ambassador pattern, including demos on Swarm and

* [SwarmWeek video about Swarm+Compose](https://youtube.com/watch?v=qbIvUvwa6As)

Some service meshes and related projects:

* [Istio](https://istio.io/)

* [Linkerd](https://linkerd.io/)

* [Gloo](https://gloo.solo.io/)
@@ -36,7 +36,7 @@ docker run jpetazzo/hamba 80 www1:80 www2:80

* Appropriate for mandatory parameters (without which the service cannot start).

-* Convenient for "toolbelt" services instanciated many times.
+* Convenient for "toolbelt" services instantiated many times.

(Because there is no extra step: just run it!)

@@ -63,7 +63,7 @@ docker run -e ELASTICSEARCH_URL=http://es42:9201/ kibana

* Appropriate for optional parameters (since the image can provide default values).

-* Also convenient for services instanciated many times.
+* Also convenient for services instantiated many times.

(It's as easy as command-line parameters.)

@@ -144,6 +144,10 @@ At a first glance, it looks like this would be particularly useful in scripts.
However, if we want to start a container and get its ID in a reliable way,
it is better to use `docker run -d`, which we will cover in a bit.

(Using `docker ps -lq` is prone to race conditions: what happens if someone
else, or another program or script, starts another container just before
we run `docker ps -lq`?)

---

## View the logs of a container

@@ -131,6 +131,12 @@ Sending build context to Docker daemon 2.048 kB

* Be careful (or patient) if that directory is big and your link is slow.

* You can speed up the process with a [`.dockerignore`](https://docs.docker.com/engine/reference/builder/#dockerignore-file) file

* It tells docker to ignore specific files in the directory

* Only ignore files that you won't need in the build context!

---

## Executing each step

@@ -78,7 +78,7 @@ First step: clone the source code for the app we will be working on.

```bash
$ cd
-$ git clone git://github.com/jpetazzo/trainingwheels
+$ git clone https://github.com/jpetazzo/trainingwheels
...
$ cd trainingwheels
```

@@ -67,7 +67,8 @@ The following list is not exhaustive.

Furthermore, we limited the scope to Linux containers.

-Containers also exist (sometimes with other names) on Windows, macOS, Solaris, FreeBSD ...
+We can also find containers (or things that look like containers) on other platforms
+like Windows, macOS, Solaris, FreeBSD ...

---

@@ -155,6 +156,36 @@ We're not aware of anyone using it directly (i.e. outside of Kubernetes).

---

## Kata containers

* OCI-compliant runtime.

* Fusion of two projects: Intel Clear Containers and Hyper runV.

* Runs each container in a lightweight virtual machine.

* Requires running on bare metal *or* with nested virtualization.

---

## gVisor

* OCI-compliant runtime.

* Implements a subset of the Linux kernel system calls.

* Written in Go, uses a smaller subset of system calls.

* Can be heavily sandboxed.

* Can run in two modes:

  * KVM (requires bare metal or nested virtualization),

  * ptrace (no requirement, but slower).

---

## Overall ...

* The Docker Engine is very developer-centric:
@@ -174,4 +205,3 @@ We're not aware of anyone using it directly (i.e. outside of Kubernetes).
- Docker is a good default choice

- If you use Kubernetes, the engine doesn't matter

@@ -107,9 +107,17 @@ class: pic

class: pic

## Two containers on a single Docker network

![bridge1](images/bridge1.png)

---

class: pic

## Two containers on two Docker networks

-![bridge2](images/bridge2.png)
+![bridge3](images/bridge3.png)

---

@@ -520,7 +528,7 @@ Very short instructions:
- `docker network create mynet --driver overlay`
- `docker service create --network mynet myimage`

-See http://jpetazzo.github.io/container.training for all the deets about clustering!
+See https://jpetazzo.github.io/container.training for all the deets about clustering!

---

@@ -713,3 +721,20 @@ eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03
...
```
]

---

class: extra-details

## Building with a custom network

* We can build a Dockerfile with a custom network with `docker build --network NAME`.

* This can be used to check that a build doesn't access the network.

(But keep in mind that most Dockerfiles will fail,
<br/>because they need to install remote packages and dependencies!)

* This may be used to access an internal package repository.

(But try to use a multi-stage build instead, if possible!)
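
For instance, a quick way to check that a build is hermetic is to use the built-in `none` network (any step that tries to reach the network will then fail):

```bash
docker build --network none -t myimage .
```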

@@ -169,5 +169,5 @@ Would we give the same answers to the questions on the previous slide?

class: pic

-![end](images/containers-by-the-numbers.png)
+![end](images/container-usage.png)

@@ -1,3 +1,4 @@

class: title

# Getting inside a container

@@ -66,14 +66,6 @@ class: pic

---

-class: pic
-
-## Multiple containers sharing the same image
-
-![sharing](images/multiple-containers-sharing-same-image.svg)
-
----
-
## Differences between containers and images

* An image is a read-only filesystem.
@@ -88,6 +80,14 @@ class: pic

---

+class: pic
+
+## Multiple containers sharing the same image
+
+![sharing](images/multiple-containers-sharing-same-image.svg)
+
+---
+
## Comparison with object-oriented programming

* Images are conceptually similar to *classes*.
@@ -118,7 +118,7 @@ If an image is read-only, how do we change it?

* The only way to create an image is by "freezing" a container.

-* The only way to create a container is by instanciating an image.
+* The only way to create a container is by instantiating an image.

* Help!

@@ -216,7 +216,7 @@ clock

---

-## Self-Hosted namespace
+## Self-hosted namespace

This namespace holds images which are not hosted on Docker Hub, but on third
party registries.
@@ -233,6 +233,13 @@ localhost:5000/wordpress
* `localhost:5000` is the host and port of the registry
* `wordpress` is the name of the image

Other examples:

```bash
quay.io/coreos/etcd
gcr.io/google-containers/hugo
```

---

## How do you store and manage images?

@@ -352,6 +359,8 @@ Do specify tags:

* To ensure that the same version will be used everywhere.
* To ensure repeatability later.

This is similar to what we would do with `pip install`, `npm install`, etc.
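
For example (the version shown is illustrative):

```bash
# Vague: whatever "latest" happens to point to today
docker pull nginx
# Better: pin the version that you tested with
docker pull nginx:1.15
```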

---

## Section summary

@@ -1,3 +1,4 @@

class: title

# Installing Docker
@@ -81,11 +82,11 @@ class: extra-details

## Installing Docker on macOS and Windows

-* On macOS, the recommended method is to use Docker for Mac:
+* On macOS, the recommended method is to use Docker Desktop for Mac:

  https://docs.docker.com/docker-for-mac/install/

-* On Windows 10 Pro, Enterprise, and Education, you can use Docker for Windows:
+* On Windows 10 Pro, Enterprise, and Education, you can use Docker Desktop for Windows:

  https://docs.docker.com/docker-for-windows/install/

@@ -99,7 +100,7 @@ class: extra-details

---

-## Docker for Mac and Docker for Windows
+## Docker Desktop for Mac and Docker Desktop for Windows

* Special Docker Editions that integrate well with their respective host OS

@@ -309,54 +309,6 @@ and *canary deployments*.

---

## Improving the workflow

The workflow that we showed is nice, but it requires us to:

* keep track of all the `docker run` flags required to run the container,

* inspect the `Dockerfile` to know which path(s) to mount,

* write scripts to hide that complexity.

There has to be a better way!

---

## Docker Compose to the rescue

* Docker Compose allows us to "encode" `docker run` parameters in a YAML file.

* Here is the `docker-compose.yml` file that we can use for our "namer" app:

```yaml
www:
  build: .
  volumes:
    - .:/src
  ports:
    - 80:9292
```

* Try it:
```bash
$ docker-compose up -d
```

---

## Working with Docker Compose

* When you see a `docker-compose.yml` file, you can use `docker-compose up`.

* It can build images and run them with the required parameters.

* Compose can also deal with complex, multi-container apps.

(More on this later!)

---

## Recap of the development workflow

1. Write a Dockerfile to build an image containing our development environment.

@@ -194,9 +194,13 @@ will have equal success with Fluent or other logging stacks!*

- We are going to use a Compose file describing the ELK stack.

- The Compose file is in the container.training repository on GitHub.

```bash
-$ cd ~/container.training/stacks
-$ docker-compose -f elk.yml up -d
+$ git clone https://github.com/jpetazzo/container.training
+$ cd container.training
+$ cd elk
+$ docker-compose up
```

- Let's have a look at the Compose file while it's deploying.
@@ -291,4 +295,4 @@ that you don't drop messages on the floor. Good luck.

If you want to learn more about the GELF driver,
have a look at [this blog post](
-http://jpetazzo.github.io/2017/01/20/docker-logging-gelf/).
+https://jpetazzo.github.io/2017/01/20/docker-logging-gelf/).

@@ -293,3 +293,23 @@ We can achieve even smaller images if we use smaller base images.
However, if we use common base images (e.g. if we standardize on `ubuntu`),
these common images will be pulled only once per node, so they are
virtually "free."

---

## Build targets

* We can also tag an intermediary stage with `docker build --target STAGE --tag NAME`

* This will create an image (named `NAME`) corresponding to stage `STAGE`

* This can be used to easily access an intermediary stage for inspection

  (Instead of parsing the output of `docker build` to find out the image ID)

* This can also be used to describe multiple images from a single Dockerfile

  (Instead of using multiple Dockerfiles, which could go out of sync)

* Sometimes, we want to inspect a specific intermediary build stage.

* Or, we want to describe multiple images using a single Dockerfile.

@@ -155,7 +155,7 @@ processes or data flows are given access to system resources.*

The scheduler is concerned mainly with:

-- throughput (total amount or work done per time unit);
+- throughput (total amount of work done per time unit);
- turnaround time (between submission and completion);
- response time (between submission and start);
- waiting time (between job readiness and execution);
@@ -243,58 +243,76 @@ Scheduling = deciding which hypervisor to use for each VM.

---

class: pic

## Scheduling with one resource

.center[![Tetris](images/scheduling-1D.gif)]

-Can we do better?
+## Can we do better?

---

class: pic

## Scheduling with one resource

.center[![Better Tetris](images/scheduling-1D-better.gif)]

-Yup!
+## Yup!

---

class: pic

## Scheduling with two resources

.center[![2D Tetris](images/scheduling-2D.gif)]

---

class: pic

## Scheduling with three resources

.center[![3D Tetris](images/scheduling-3D.gif)]

---

class: pic

## You need to be good at this

.center[![Tetris expert](images/scheduling-master.gif)]

---

class: pic

## But also, you must be quick!

.center[![Tetris grand master](images/scheduling-speed.gif)]

---

class: pic

## And be web scale!

.center[![Big Tetris](images/scheduling-bigscale.gif)]

---

class: pic

## And think outside (?) of the box!

.center[![3D Tetris](images/scheduling-inception.gif)]

---

class: pic

## Good luck!

.center[![FAIL](images/scheduling-fail.gif)]
@@ -372,7 +390,7 @@ It depends on:

(Marathon = long running processes; Chronos = run at intervals; ...)

-- Commercial offering through DC/OS my Mesosphere.
+- Commercial offering through DC/OS by Mesosphere.

---

@@ -91,12 +91,12 @@ class: extra-details

* We need a Dockerized repository!
* Let's go to https://github.com/jpetazzo/trainingwheels and fork it.
-* Go to the Docker Hub (https://hub.docker.com/).
-* Select "Create" in the top-right bar, and select "Create Automated Build."
+* Go to the Docker Hub (https://hub.docker.com/) and sign in. Select "Repositories" in the blue navigation menu.
+* Select "Create" in the top-right bar, and select "Create Repository+".
* Connect your Docker Hub account to your GitHub account.
-* Select your user and the repository that we just forked.
-* Create.
-* Then go to "Build Settings."
-* Put `/www` in "Dockerfile Location" (or whichever directory the Dockerfile is in).
-* Click "Trigger" to build the repository immediately (without waiting for a git push).
+* Click the "Create" button.
+* Then go to the "Builds" folder.
+* Click on the GitHub icon and select your user and the repository that we just forked.
+* In the "Build rules" block near the bottom of the page, put `/www` in the "Build Context" column (or whichever directory the Dockerfile is in).
+* Click "Save and Build" to build the repository immediately (without waiting for a git push).
* Subsequent builds will happen automatically, thanks to GitHub hooks.

@@ -24,7 +24,7 @@ Analogy: attaching to a container is like plugging a keyboard and screen to a ph

---

-## Detaching from a container
+## Detaching from a container (Linux/macOS)

* If you have started an *interactive* container (with option `-it`), you can detach from it.

@@ -41,6 +41,20 @@ What does `-it` stand for?

---

## Detaching cont. (Win PowerShell and cmd.exe)

* Docker for Windows has a different detach experience due to shell features.

* `^P^Q` does not work.

* `^C` will detach, rather than stop, the container.

* Using Bash, Subsystem for Linux, etc. on Windows behaves like Linux/macOS shells.

* Both PowerShell and Bash work well in Win 10; just be aware of differences.

---

class: extra-details

## Specifying a custom detach sequence

@@ -1,3 +1,4 @@

class: title

# Our training environment
@@ -18,7 +19,7 @@ class: title

- install Docker on e.g. a cloud VM

-- use http://www.play-with-docker.com/ to instantly get a training environment
+- use https://www.play-with-docker.com/ to instantly get a training environment

---

@@ -90,7 +91,7 @@ $ ssh <login>@<ip-address>

* Git BASH (https://git-for-windows.github.io/)

-* MobaXterm (http://moabaxterm.mobatek.net)
+* MobaXterm (https://mobaxterm.mobatek.net/)

---

164
slides/containers/Windows_Containers.md
Normal file
@@ -0,0 +1,164 @@
class: title

# Windows Containers

![Container with Windows](images/windows-containers.jpg)

---

## Objectives

At the end of this section, you will be able to:

* Understand Windows Containers vs. Linux Containers.

* Know about the features of Docker for Windows for choosing architecture.

* Run other container architectures via QEMU emulation.

---

## Are containers *just* for Linux?

Remember that a container must run on the kernel of the OS it's on.

- This is both a benefit and a limitation.

  (It makes containers lightweight, but limits them to a specific kernel.)

- At its launch in 2013, Docker only supported Linux, and only on amd64 CPUs.

- Since then, many platforms and OSes have been added.

  (Windows, ARM, i386, IBM mainframes ... But no macOS or iOS yet!)

--

- Docker Desktop (macOS and Windows) can run containers for other architectures

  (Check the docs to see how to [run a Raspberry Pi (ARM) or PPC container](https://docs.docker.com/docker-for-mac/multi-arch/)!)

---

## History of Windows containers

- Early 2016, Windows 10 gained support for running Windows binaries in containers.

- These are known as "Windows Containers".

- Win 10 expects Docker for Windows to be installed for full features.

- These must run in Hyper-V mini-VMs with a Windows Server x64 kernel.

- No "scratch" containers, so use "Core" and "Nano" Server OS base layers.

- Since Hyper-V is required, Windows 10 Home won't work (yet...)

--

- Late 2016, Windows Server 2016 ships with native Docker support.

- Installed via PowerShell, it doesn't need Docker for Windows.

- Can run native (without VM), or with [Hyper-V Isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container).

---

## LCOW (Linux Containers On Windows)

While Docker on Windows is largely playing catch-up with Docker on Linux,
it's moving fast; and this is one thing that you *cannot* do on Linux!

- LCOW came with the [2017 Fall Creators Update](https://blog.docker.com/2018/02/docker-for-windows-18-02-with-windows-10-fall-creators-update/).

- It can run Linux and Windows containers side-by-side on Win 10.

- It is no longer necessary to switch the Engine to "Linux Containers".

  (In fact, if you want to run both Linux and Windows containers at the same time,
  make sure that your Engine is set to "Windows Containers" mode!)

--

If you are a Docker for Windows user, start your engine and try this:

```bash
docker pull microsoft/nanoserver:1803
```

(Make sure to switch to "Windows Containers mode" if necessary.)

---

## Run both Windows and Linux containers

- Run a Windows Nano Server (minimal CLI-only server)

  ```bash
  docker run --rm -it microsoft/nanoserver:1803 powershell
  Get-Process
  exit
  ```

- Run busybox on Linux in LCOW

  ```bash
  docker run --rm --platform linux busybox echo hello
  ```

(Although you will not be able to see them, this will create hidden
Nano and LinuxKit VMs in Hyper-V!)

---

## Did we say things move fast?

- Things keep improving.

- Now `--platform` defaults to `windows`; some images support both:

  - golang, mongo, python, redis, hello-world ... and more being added

  - you should still use `--platform` with multi-OS images to be certain

- Windows Containers now support `localhost`-accessible containers (July 2018).

- Microsoft (April 2018) added Hyper-V support to Windows 10 Home ...

  ... so stay tuned for Docker support, maybe?!?

---

## Other Windows container options

Most "official" Docker images don't run on Windows yet.

Places to look:

- Hub official images: https://hub.docker.com/u/winamd64/

- Microsoft: https://hub.docker.com/r/microsoft/

---

## SQL Server? Choice of Linux or Windows

- Microsoft [SQL Server for Linux 2017](https://hub.docker.com/r/microsoft/mssql-server-linux/) (amd64/linux)

- Microsoft [SQL Server Express 2017](https://hub.docker.com/r/microsoft/mssql-server-windows-express/) (amd64/windows)

---

## Windows tools and tips

- PowerShell [tab completion: DockerCompletion](https://github.com/matt9ucci/DockerCompletion)

- Best shell GUI: [Cmder.net](https://cmder.net/)

- Good Windows container blogs and how-tos:

  - Docker DevRel [Elton Stoneman, Microsoft MVP](https://blog.sixeyed.com/)

  - Docker Captain [Nicholas Dille](https://dille.name/blog/)

  - Docker Captain [Stefan Scherer](https://stefanscherer.github.io/)

@@ -401,7 +401,7 @@ or providing extra features. For instance:

* [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g.
SAN or NAS), or by cloud block stores (e.g. EBS, EFS).

-* [Portworx](http://portworx.com/) - provides a distributed block store for containers.
+* [Portworx](https://portworx.com/) - provides a distributed block store for containers.

* [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale
to several petabytes. It provides interfaces for object, block and file storage.

@@ -30,7 +30,7 @@ class: self-paced

- These slides include *tons* of exercises and examples

-- They assume that you have acccess to a machine running Docker
+- They assume that you have access to a machine running Docker

- If you are attending a workshop or tutorial:
  <br/>you will be given specific instructions to access a cloud VM

16
slides/docker-compose.yaml
Normal file
@@ -0,0 +1,16 @@
version: "2"

services:
  www:
    image: nginx
    volumes:
      - .:/usr/share/nginx/html
    ports:
      - 80
  builder:
    build: .
    volumes:
      - ..:/repo
    working_dir: /repo/slides
    command: ./build.sh forever

BIN
slides/images/bridge1.png
Normal file → Executable file (30 KiB → 97 KiB)

BIN
slides/images/bridge2.png
Normal file → Executable file (30 KiB → 119 KiB)

BIN
slides/images/bridge3.png
Executable file (137 KiB)

BIN
slides/images/ci-cd-with-docker.png
Normal file (85 KiB)

2
slides/images/dockercoins-diagram.svg
Normal file (14 KiB)

1
slides/images/dockercoins-diagram.xml
Normal file
@@ -0,0 +1 @@
<mxfile userAgent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" version="9.3.0" editor="www.draw.io" type="device"><diagram id="cb13f823-9e55-f92e-d17e-d0d789fca2e0" name="Page-1">7Vnfb9sgEP5rLG0vlQ3+kTyuabs9bFq1Vtr2iO2LjUqMhfGS9q8fxNgxJtXSqmmmrVEeuAMOuO874LCHF6vNR0Hq8gvPgXnIzzcevvAQCkKEPP338/tOk0RxpygEzU2jneKGPoBR+kbb0hwaq6HknEla28qMVxVk0tIRIfjabrbkzB61JgU4ipuMMFf7neay7LSzyN/pPwEtyn7kwDc1KcnuCsHbyoznIbzc/rrqFeltmfZNSXK+HqnwpYcXgnPZlVabBTDt295tXb+rR2qHeQuo5EEdDC6/CGuhn/J2YvK+d8Z2OaA7+B4+X5dUwk1NMl27VvArXSlXTEmBKi4pYwvOuNj2xTmB2TJT+kYKfgejmjibQbpUNQUjTWOMZ3xFM1MeXKOFJa/kFVlRpgn1jadccj0uF/RB1ZBhdCUYNqFQyWZxICRsHvVQMPhd8Rn4CqS4V01Mhx4pw+RgZuT1jhdJrytHnMC9khguFoPpHR6qYCDZDw920FlzcQfCwUg5q9bFrE3hzyClHaKf00Ex0PZrKxmtYOTPZ7h9QgJFf5TtJUEep7HaGvqaPtbwS0FnY4cSF7sA7cHuJaALHehEVbzhdghusQ1bGL4ibJEDW0ma8i3iDow4dELo3KNMQE6bN+QOQW44rk6BXOIec5C29A25g3ZLfELk+gv7CLqPl7cOcFDlH/S1XGOnr3v6kmfdGp+MQDBz/KkxYSQFdj7gPALh6mqhfoPLIXcygInD1QJ4KzKwbmKSiALk6IR3YRm5Pdrj9V4ngBFJf9mT2AeFGeGaUzW9AfVgutPO57aJbvKm1zgDmBpKpvSZGOqW7BjaMmNY9mFkCRyyXH+9+U/YEp2SLWge2SDHk9g/lC04nBgKJoZekC3IYcvt4vr/IEt8UrIk4VniR7MZwhGahz0OBnH8XOqgWXSmzQVJHG7N20SKjkckN4v+N4mU/GVEwoF9tDybOuEkkT8mWdy8vW32pH+qF60b6N6puksp421+wAPZyV6ykokX4+g1L4puYn3SiyJsqPyhyv5ZL/3cSoGRrkFQtUjQkekfM2h7wo2jNjll1E5eX5LnBm0wif7kiFcFN4G84NkdiIUy3ugh65rRTHmFVx6KmdTJoIrpuNCld0vtrO3HBElUViia924jh6kqDqXNTTvvq7jOL60k0agIo0WlRAZLbUHHtJob+2DUkteP7CL2Q7z1Pv7YI/oRtpFJuhnM3V0kjvfQM8BP30aUuPsW0hFj98EJX/4G</diagram></mxfile>

BIN
slides/images/windows-containers.jpg
Normal file (426 KiB)

@@ -1,14 +1,16 @@
-#!/usr/bin/env python2
+#!/usr/bin/env python3
# coding: utf-8
TEMPLATE="""<html>
<head>
<title>{{ title }}</title>
<link rel="stylesheet" href="index.css">
<meta charset="UTF-8">
</head>
<body>
<div class="main">
<table>
<tr><td class="header" colspan="3">{{ title }}</td></tr>
<tr><td class="details" colspan="3">Note: while some workshops are delivered in French, slides are always in English.</td></tr>

{% if coming_soon %}
<tr><td class="title" colspan="3">Coming soon near you</td></tr>
@@ -17,7 +19,10 @@ TEMPLATE="""<html>
<tr>
<td>{{ item.title }}</td>
<td>{% if item.slides %}<a class="slides" href="{{ item.slides }}" />{% endif %}</td>
-<td><a class="attend" href="{{ item.attend }}" /></td>
+<td>{% if item.attend %}<a class="attend" href="{{ item.attend }}" />
+{% else %}
+<p class="details">{{ item.status }}</p>
+{% endif %}</td>
</tr>
<tr>
<td class="details">Scheduled {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
@@ -31,7 +36,10 @@ TEMPLATE="""<html>
{% for item in past_workshops[:5] %}
<tr>
<td>{{ item.title }}</td>
-<td><a class="slides" href="{{ item.slides }}" /></td>
+<td>{% if item.slides %}<a class="slides" href="{{ item.slides }}" />
+{% else %}
+<p class="details">{{ item.status }}</p>
+{% endif %}</td>
<td>{% if item.video %}<a class="video" href="{{ item.video }}" />{% endif %}</td>
</tr>
<tr>
@@ -98,34 +106,49 @@ TEMPLATE="""<html>
</table>
</div>
</body>
-</html>""".decode("utf-8")
+</html>"""

import datetime
import jinja2
import yaml

-items = yaml.load(open("index.yaml"))
+items = yaml.safe_load(open("index.yaml"))

# Items with a date correspond to scheduled sessions.
# Items without a date correspond to self-paced content.
# The date should be specified as a string (e.g. 2018-11-26).
# It can also be a list of two elements (e.g. [2018-11-26, 2018-11-28]).
# The latter indicates an event spanning multiple dates.
# The first date will be used in the generated page, but the event
# will be considered "current" (and therefore, shown in the list of
# upcoming events) until the second date.

for item in items:
    if "date" in item:
        date = item["date"]
+        if type(date) == list:
+            date_begin, date_end = date
+        else:
+            date_begin, date_end = date, date
        suffix = {
            1: "st", 2: "nd", 3: "rd",
            21: "st", 22: "nd", 23: "rd",
-            31: "st"}.get(date.day, "th")
+            31: "st"}.get(date_begin.day, "th")
        # %e is a non-standard extension (it displays the day, but without a
        # leading zero). If strftime fails with ValueError, try to fall back
        # on %d (which displays the day but with a leading zero when needed).
        try:
-            item["prettydate"] = date.strftime("%B %e{}, %Y").format(suffix)
+            item["prettydate"] = date_begin.strftime("%B %e{}, %Y").format(suffix)
        except ValueError:
-            item["prettydate"] = date.strftime("%B %d{}, %Y").format(suffix)
+            item["prettydate"] = date_begin.strftime("%B %d{}, %Y").format(suffix)
+        item["begin"] = date_begin
+        item["end"] = date_end

today = datetime.date.today()
-coming_soon = [i for i in items if i.get("date") and i["date"] >= today]
-coming_soon.sort(key=lambda i: i["date"])
-past_workshops = [i for i in items if i.get("date") and i["date"] < today]
-past_workshops.sort(key=lambda i: i["date"], reverse=True)
+coming_soon = [i for i in items if i.get("date") and i["end"] >= today]
+coming_soon.sort(key=lambda i: i["begin"])
+past_workshops = [i for i in items if i.get("date") and i["end"] < today]
+past_workshops.sort(key=lambda i: i["begin"], reverse=True)
self_paced = [i for i in items if not i.get("date")]
recorded_workshops = [i for i in items if i.get("video")]

@@ -137,10 +160,10 @@ with open("index.html", "w") as f:
    past_workshops=past_workshops,
    self_paced=self_paced,
    recorded_workshops=recorded_workshops
-).encode("utf-8"))
+))

with open("past.html", "w") as f:
    f.write(template.render(
        title="Container Training",
        all_past_workshops=past_workshops
-).encode("utf-8"))
+))

@@ -1,26 +1,143 @@
- date: 2018-11-23
  city: Copenhagen
|
||||
country: dk
|
||||
- date: 2019-06-18
|
||||
country: ca
|
||||
city: Montréal
|
||||
event: Elapse Technologies
|
||||
title: Getting Started With Kubernetes And Orchestration
|
||||
speaker: jpetazzo
|
||||
status: coming soon
|
||||
hidden: http://elapsetech.com/formation/kubernetes-101
|
||||
|
||||
- date: 2019-06-17
|
||||
country: ca
|
||||
city: Montréal
|
||||
event: Elapse Technologies
|
||||
title: Getting Started With Docker And Containers
|
||||
speaker: jpetazzo
|
||||
status: coming soon
|
||||
hidden: http://elapsetech.com/formation/docker-101
|
||||
|
||||
- date: 2019-05-01
|
||||
country: us
|
||||
city: Cleveland, OH
|
||||
event: PyCon
|
||||
speaker: jpetazzo, s0ulshake
|
||||
title: Getting started with Kubernetes and container orchestration
|
||||
attend: https://us.pycon.org/2019/schedule/presentation/74/
|
||||
|
||||
- date: 2019-04-28
|
||||
country: us
|
||||
city: Chicago, IL
|
||||
event: GOTO
|
||||
title: Build Container Orchestration with Docker Swarm
|
||||
speaker: bretfisher
|
||||
attend: https://gotocph.com/2018/workshops/121
|
||||
speaker: jpetazzo
|
||||
title: Getting Started With Kubernetes and Container Orchestration
|
||||
attend: https://gotochgo.com/2019/workshops/148
|
||||
|
||||
- date: [2019-04-23, 2019-04-24]
|
||||
country: fr
|
||||
city: Paris
|
||||
event: ENIX SAS
|
||||
speaker: "jpetazzo, rdegez"
|
||||
title: Déployer ses applications avec Kubernetes (in French)
|
||||
lang: fr
|
||||
attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
|
||||
|
||||
- date: [2019-04-15, 2019-04-16]
|
||||
country: fr
|
||||
city: Paris
|
||||
event: ENIX SAS
|
||||
speaker: "jpetazzo, alexbuisine"
|
||||
title: Bien démarrer avec les conteneurs (in French)
|
||||
lang: fr
|
||||
attend: https://enix.io/fr/services/formation/bien-demarrer-avec-les-conteneurs/
|
||||
|
||||
- date: 2019-03-08
|
||||
country: uk
|
||||
city: London
|
||||
event: QCON
|
||||
speaker: jpetazzo
|
||||
title: Getting Started With Kubernetes and Container Orchestration
|
||||
attend: https://qconlondon.com/london2019/workshop/getting-started-kubernetes-and-container-orchestration
|
||||
slides: https://qconuk2019.container.training/
|
||||
|
||||
- date: 2019-02-25
|
||||
country: ca
|
||||
city: Montréal
|
||||
event: Elapse Technologies
|
||||
speaker: jpetazzo
|
||||
title: <strike>Getting Started With Docker And Containers</strike> (rescheduled for June 2019)
|
||||
status: rescheduled
|
||||
|
||||
- date: 2019-02-26
|
||||
country: ca
|
||||
city: Montréal
|
||||
event: Elapse Technologies
|
||||
speaker: jpetazzo
|
||||
title: <strike>Getting Started With Kubernetes And Orchestration</strike> (rescheduled for June 2019)
|
||||
status: rescheduled
|
||||
|
||||
- date: 2019-02-28
|
||||
country: ca
|
||||
city: Québec
|
||||
lang: fr
|
||||
event: Elapse Technologies
|
||||
speaker: jpetazzo
|
||||
title: <strike>Bien démarrer avec Docker et les conteneurs (in French)</strike>
|
||||
status: cancelled
|
||||
|
||||
- date: 2019-03-01
|
||||
country: ca
|
||||
city: Québec
|
||||
lang: fr
|
||||
event: Elapse Technologies
|
||||
speaker: jpetazzo
|
||||
title: <strike>Bien démarrer avec Docker et l'orchestration (in French)</strike>
|
||||
status: cancelled
|
||||
|
||||
- date: [2019-01-07, 2019-01-08]
|
||||
country: fr
|
||||
city: Paris
|
||||
event: ENIX SAS
|
||||
speaker: "jpetazzo, alexbuisine"
|
||||
title: Bien démarrer avec les conteneurs (in French)
|
||||
lang: fr
|
||||
attend: https://enix.io/fr/services/formation/bien-demarrer-avec-les-conteneurs/
|
||||
slides: https://intro-2019-01.container.training
|
||||
|
||||
- date: [2018-12-17, 2018-12-18]
|
||||
country: fr
|
||||
city: Paris
|
||||
event: ENIX SAS
|
||||
speaker: "jpetazzo, rdegez"
|
||||
title: Déployer ses applications avec Kubernetes
|
||||
lang: fr
|
||||
attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
|
||||
slides: http://decembre2018.container.training
|
||||
|
||||
- date: 2018-11-08
|
||||
city: San Francisco, CA
|
||||
country: us
|
||||
event: QCON
|
||||
title: Introduction to Docker and Containers
|
||||
speaker: jpetazzo
|
||||
speaker: zeroasterisk
|
||||
attend: https://qconsf.com/sf2018/workshop/introduction-docker-and-containers
|
||||
|
||||
- date: 2018-11-08
|
||||
city: San Francisco, CA
|
||||
country: us
|
||||
event: QCON
|
||||
title: Getting Started With Kubernetes and Container Orchestration
|
||||
speaker: jpetazzo
|
||||
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration-thursday-section
|
||||
slides: http://qconsf2018.container.training/
|
||||
|
||||
- date: 2018-11-09
|
||||
city: San Francisco, CA
|
||||
country: us
|
||||
event: QCON
|
||||
title: Getting Started With Kubernetes and Container Orchestration
|
||||
speaker: jpetazzo
|
||||
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration
|
||||
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration-friday-section
|
||||
slides: http://qconsf2018.container.training/
|
||||
|
||||
- date: 2018-10-31
|
||||
city: London, UK
|
||||
@@ -28,6 +145,7 @@
|
||||
event: Velocity EU
|
||||
title: Kubernetes 101
|
||||
speaker: bridgetkromhout
|
||||
slides: https://velocityeu2018.container.training
|
||||
attend: https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/71149
|
||||
|
||||
- date: 2018-10-30
|
||||
@@ -54,16 +172,18 @@
|
||||
title: Kubernetes 101
|
||||
speaker: bridgetkromhout
|
||||
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70102
|
||||
slides: https://velny-k8s101-2018.container.training
|
||||
|
||||
- date: 2018-09-30
|
||||
- date: 2018-10-01
|
||||
city: New York, NY
|
||||
country: us
|
||||
event: Velocity
|
||||
title: Kubernetes Bootcamp - Deploying and Scaling Microservices
|
||||
speaker: jpetazzo
|
||||
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/69875
|
||||
slides: https://k8s2d.container.training
|
||||
|
||||
- date: 2018-09-30
|
||||
- date: 2018-10-01
|
||||
city: New York, NY
|
||||
country: us
|
||||
event: Velocity
|
||||
@@ -76,9 +196,10 @@
|
||||
city: Paris
|
||||
event: ENIX SAS
|
||||
speaker: jpetazzo
|
||||
title: Déployer ses applications avec Kubernetes (in French)
|
||||
title: Déployer ses applications avec Kubernetes
|
||||
lang: fr
|
||||
attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
|
||||
slides: https://septembre2018.container.training
|
||||
|
||||
- date: 2018-07-17
|
||||
city: Portland, OR
|
||||
|
||||
@@ -42,6 +42,7 @@ chapters:
|
||||
#- containers/Connecting_Containers_With_Links.md
|
||||
- containers/Ambassadors.md
|
||||
- - containers/Local_Development_Workflow.md
|
||||
- containers/Windows_Containers.md
|
||||
- containers/Working_With_Volumes.md
|
||||
- containers/Compose_For_Dev_Stacks.md
|
||||
- containers/Docker_Machine.md
|
||||
|
||||
@@ -42,6 +42,7 @@ chapters:
|
||||
#- containers/Connecting_Containers_With_Links.md
|
||||
- containers/Ambassadors.md
|
||||
- - containers/Local_Development_Workflow.md
|
||||
- containers/Windows_Containers.md
|
||||
- containers/Working_With_Volumes.md
|
||||
- containers/Compose_For_Dev_Stacks.md
|
||||
- containers/Docker_Machine.md
|
||||
|
||||
@@ -24,15 +24,9 @@
|
||||
|
||||
(it examines headers, certificates ... anything available)
|
||||
|
||||
- Many authentication methods can be used simultaneously:
|
||||
- Many authentication methods are available and can be used simultaneously
|
||||
|
||||
- TLS client certificates (that's what we've been doing with `kubectl` so far)
|
||||
|
||||
- bearer tokens (a secret token in the HTTP headers of the request)
|
||||
|
||||
- [HTTP basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication) (carrying user and password in a HTTP header)
|
||||
|
||||
- authentication proxy (sitting in front of the API and setting trusted headers)
|
||||
(we will see them on the next slide)
|
||||
|
||||
- It's the job of the authentication method to produce:
|
||||
|
||||
@@ -44,6 +38,26 @@
|
||||
|
||||
---
|
||||
|
||||
## Authentication methods
|
||||
|
||||
- TLS client certificates
|
||||
|
||||
(that's what we've been doing with `kubectl` so far)
|
||||
|
||||
- Bearer tokens
|
||||
|
||||
(a secret token in the HTTP headers of the request)
|
||||
|
||||
- [HTTP basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication)
|
||||
|
||||
(carrying user and password in a HTTP header)
|
||||
|
||||
- Authentication proxy
|
||||
|
||||
(sitting in front of the API and setting trusted headers)
|
||||
|
||||
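For illustration, here is what a bearer token request could look like if we talked to the API server directly (a minimal sketch; the token value and the `6443` port are assumptions, not something we set up here):

```bash
# Sketch: call the API with a bearer token instead of a client certificate
curl -k -H "Authorization: Bearer $TOKEN" \
     https://localhost:6443/api/v1/namespaces/default/pods
```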
---
|
||||
|
||||
## Anonymous requests
|
||||
|
||||
- If any authentication method *rejects* a request, it's denied
|
||||
@@ -119,6 +133,30 @@ class: extra-details
|
||||
|
||||
→ We are user `kubernetes-admin`, in group `system:masters`.
|
||||
|
||||
(We will see later how and why this gives us the permissions that we have.)
|
||||
|
||||
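For reference, here is one way we could decode that certificate straight from our kubeconfig (a sketch assuming `jq`, `base64`, and `openssl` are available, and that the first `users` entry is ours):

```bash
# Extract the client certificate and print its subject (CN and O fields)
kubectl config view --raw -o json |
  jq -r '.users[0].user["client-certificate-data"]' |
  base64 -d | openssl x509 -subject -noout
```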
---
|
||||
|
||||
## User certificates in practice
|
||||
|
||||
- The Kubernetes API server does not support certificate revocation
|
||||
|
||||
(see issue [#18982](https://github.com/kubernetes/kubernetes/issues/18982))
|
||||
|
||||
- As a result, we cannot easily suspend a user's access
|
||||
|
||||
- There are workarounds, but they are very inconvenient:
|
||||
|
||||
- issue short-lived certificates (e.g. 24 hours) and regenerate them often
|
||||
|
||||
- re-create the CA and re-issue all certificates in case of compromise
|
||||
|
||||
- grant permissions to individual users, not groups
|
||||
<br/>
|
||||
(and remove all permissions from a compromised user)
|
||||
|
||||
- Until this is fixed, we probably want to use other methods
|
||||
|
||||
---
|
||||
|
||||
## Authentication with tokens
|
||||
@@ -182,23 +220,23 @@ class: extra-details
|
||||
kubectl get sa
|
||||
```
|
||||
|
||||
]

There should be just one service account in the default namespace: `default`.

---

class: extra-details

## Finding the secret

.exercise[

- List the secrets for the `default` service account:
  ```bash
  kubectl get sa default -o yaml
  SECRET=$(kubectl get sa default -o json | jq -r .secrets[0].name)
  ```
|
||||
|
||||
]
|
||||
|
||||
@@ -502,7 +540,7 @@ It's important to note a couple of details in these flags ...
|
||||
|
||||
- But that we can't create things:
|
||||
```
|
||||
./kubectl run tryme --image=nginx
|
||||
./kubectl create deployment testrbac --image=nginx
|
||||
```
|
||||
|
||||
- Exit the container with `exit` or `^D`
|
||||
@@ -531,3 +569,45 @@ It's important to note a couple of details in these flags ...
|
||||
kubectl auth can-i list nodes \
|
||||
--as system:serviceaccount:<namespace>:<name-of-service-account>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Where do our permissions come from?
|
||||
|
||||
- When interacting with the Kubernetes API, we are using a client certificate
|
||||
|
||||
- We saw previously that this client certificate contained:
|
||||
|
||||
`CN=kubernetes-admin` and `O=system:masters`
|
||||
|
||||
- Let's look for these in existing ClusterRoleBindings:
|
||||
```bash
|
||||
kubectl get clusterrolebindings -o yaml |
|
||||
grep -e kubernetes-admin -e system:masters
|
||||
```
|
||||
|
||||
(`system:masters` should show up, but not `kubernetes-admin`.)
|
||||
|
||||
- Where does this match come from?
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## The `system:masters` group
|
||||
|
||||
- If we eyeball the output of `kubectl get clusterrolebindings -o yaml`, we'll find out!
|
||||
|
||||
- It is in the `cluster-admin` binding:
|
||||
```bash
|
||||
kubectl describe clusterrolebinding cluster-admin
|
||||
```
|
||||
|
||||
- This binding associates `system:masters` to the cluster role `cluster-admin`
|
||||
|
||||
- And the `cluster-admin` is, basically, `root`:
|
||||
```bash
|
||||
kubectl describe clusterrole cluster-admin
|
||||
```
|
||||
|
||||
@@ -327,7 +327,7 @@ We'll cover them just after!*
|
||||
|
||||
- We will provide a simple HAproxy configuration, `k8s/haproxy.cfg`
|
||||
|
||||
- It listens on port 80, and load balances connections between Google and Bing
|
||||
- It listens on port 80, and load balances connections between IBM and Google
|
||||
|
||||
---
|
||||
|
||||
@@ -407,20 +407,22 @@ spec:
|
||||
|
||||
- half of the connections to Google
|
||||
|
||||
- the other half to Bing
|
||||
- the other half to IBM
|
||||
|
||||
.exercise[
|
||||
|
||||
- Access the load balancer a few times:
|
||||
```bash
|
||||
curl -I $IP
|
||||
curl -I $IP
|
||||
curl -I $IP
|
||||
curl $IP
|
||||
curl $IP
|
||||
curl $IP
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
We should see connections served by Google (look for the `Location` header) and others served by Bing (indicated by the `X-MSEdge-Ref` header).
|
||||
We should see connections served by Google, and others served by IBM.
|
||||
<br/>
|
||||
(Each server sends us a redirect page. Look at the URL that they send us to!)
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -36,7 +36,9 @@
|
||||
|
||||
## Creating a daemon set
|
||||
|
||||
- Unfortunately, as of Kubernetes 1.10, the CLI cannot create daemon sets
|
||||
<!-- ##VERSION## -->
|
||||
|
||||
- Unfortunately, as of Kubernetes 1.13, the CLI cannot create daemon sets
|
||||
|
||||
--
|
||||
|
||||
@@ -252,38 +254,29 @@ The master node has [taints](https://kubernetes.io/docs/concepts/configuration/t
|
||||
|
||||
---
|
||||
|
||||
## What are all these pods doing?
|
||||
## Is this working?
|
||||
|
||||
- Let's check the logs of all these `rng` pods
|
||||
|
||||
- All these pods have a `run=rng` label:
|
||||
|
||||
- the first pod, because that's what `kubectl run` does
|
||||
- the other ones (in the daemon set), because we
|
||||
*copied the spec from the first one*
|
||||
|
||||
- Therefore, we can query everybody's logs using that `run=rng` selector
|
||||
|
||||
.exercise[
|
||||
|
||||
- Check the logs of all the pods having a label `run=rng`:
|
||||
```bash
|
||||
kubectl logs -l run=rng --tail 1
|
||||
```
|
||||
|
||||
]
|
||||
- Look at the web UI
|
||||
|
||||
--
|
||||
|
||||
It appears that *all the pods* are serving requests at the moment.
|
||||
- The graph should now go above 10 hashes per second!
|
||||
|
||||
--
|
||||
|
||||
- It looks like the newly created pods are serving traffic correctly
|
||||
|
||||
- How and why did this happen?
|
||||
|
||||
(We didn't do anything special to add them to the `rng` service load balancer!)
|
||||
|
||||
---
|
||||
|
||||
## The magic of selectors
|
||||
# Labels and selectors
|
||||
|
||||
- The `rng` *service* is load balancing requests to a set of pods
|
||||
|
||||
- This set of pods is defined as "pods having the label `run=rng`"
|
||||
- That set of pods is defined by the *selector* of the `rng` service
|
||||
|
||||
.exercise[
|
||||
|
||||
@@ -294,110 +287,333 @@ It appears that *all the pods* are serving requests at the moment.
|
||||
|
||||
]
|
||||
|
||||
When we created additional pods with this label, they were
|
||||
automatically detected by `svc/rng` and added as *endpoints*
|
||||
to the associated load balancer.
|
||||
- The selector is `app=rng`
|
||||
|
||||
- It means "all the pods having the label `app=rng`"
|
||||
|
||||
(They can have additional labels as well, that's OK!)
|
||||
|
||||
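To see that mapping live, we can compare the service's selector with its endpoints (assuming the `rng` service is in our current namespace):

```bash
# The Selector line and the Endpoints list should line up with the rng pods
kubectl describe service rng | grep -e Selector -e Endpoints
kubectl get endpoints rng
```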
---
|
||||
|
||||
## Removing the first pod from the load balancer
|
||||
## Selector evaluation
|
||||
|
||||
- We can use selectors with many `kubectl` commands
|
||||
|
||||
- For instance, with `kubectl get`, `kubectl logs`, `kubectl delete` ... and more
|
||||
|
||||
.exercise[
|
||||
|
||||
- Get the list of pods matching selector `app=rng`:
|
||||
```bash
|
||||
kubectl get pods -l app=rng
|
||||
kubectl get pods --selector app=rng
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
But ... why do these pods (in particular, the *new* ones) have this `app=rng` label?
|
||||
|
||||
---
|
||||
|
||||
## Where do labels come from?
|
||||
|
||||
- When we create a deployment with `kubectl create deployment rng`,
|
||||
<br/>this deployment gets the label `app=rng`
|
||||
|
||||
- The replica sets created by this deployment also get the label `app=rng`
|
||||
|
||||
- The pods created by these replica sets also get the label `app=rng`
|
||||
|
||||
- When we created the daemon set from the deployment, we re-used the same spec
|
||||
|
||||
- Therefore, the pods created by the daemon set get the same labels
|
||||
|
||||
.footnote[Note: when we use `kubectl run stuff`, the label is `run=stuff` instead.]
|
||||
|
||||
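We can verify that inheritance with a single command (a sketch assuming the `rng` deployment from this chapter):

```bash
# The deployment, its replica set, and their pods should all carry app=rng
kubectl get deployments,replicasets,pods -l app=rng --show-labels
```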
---
|
||||
|
||||
## Updating load balancer configuration
|
||||
|
||||
- We would like to remove a pod from the load balancer
|
||||
|
||||
- What would happen if we removed that pod, with `kubectl delete pod ...`?
|
||||
|
||||
--
|
||||
|
||||
The `replicaset` would re-create it immediately.
|
||||
It would be re-created immediately (by the replica set or the daemon set)
|
||||
|
||||
--
|
||||
|
||||
- What would happen if we removed the `run=rng` label from that pod?
|
||||
- What would happen if we removed the `app=rng` label from that pod?
|
||||
|
||||
--
|
||||
|
||||
The `replicaset` would re-create it immediately.
|
||||
It would *also* be re-created immediately
|
||||
|
||||
--
|
||||
|
||||
... Because what matters to the `replicaset` is the number of pods *matching that selector.*
|
||||
|
||||
--
|
||||
|
||||
- But but but ... Don't we have more than one pod with `run=rng` now?
|
||||
|
||||
--
|
||||
|
||||
The answer lies in the exact selector used by the `replicaset` ...
|
||||
Why?!?
|
||||
|
||||
---
|
||||
|
||||
## Deep dive into selectors
|
||||
## Selectors for replica sets and daemon sets
|
||||
|
||||
- Let's look at the selectors for the `rng` *deployment* and the associated *replica set*
|
||||
- The "mission" of a replica set is:
|
||||
|
||||
"Make sure that there is the right number of pods matching this spec!"
|
||||
|
||||
- The "mission" of a daemon set is:
|
||||
|
||||
"Make sure that there is a pod matching this spec on each node!"
|
||||
|
||||
--
|
||||
|
||||
- *In fact,* replica sets and daemon sets do not check pod specifications
|
||||
|
||||
- They merely have a *selector*, and they look for pods matching that selector
|
||||
|
||||
- Yes, we can fool them by manually creating pods with the "right" labels
|
||||
|
||||
- Bottom line: if we remove our `app=rng` label ...
|
||||
|
||||
... The pod "disappears" for its parent, which re-creates another pod to replace it
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Isolation of replica sets and daemon sets
|
||||
|
||||
- Since both the `rng` daemon set and the `rng` replica set use `app=rng` ...
|
||||
|
||||
... Why don't they "find" each other's pods?
|
||||
|
||||
--
|
||||
|
||||
- *Replica sets* have a more specific selector, visible with `kubectl describe`
|
||||
|
||||
(It looks like `app=rng,pod-template-hash=abcd1234`)
|
||||
|
||||
- *Daemon sets* also have a more specific selector, but it's invisible
|
||||
|
||||
(It looks like `app=rng,controller-revision-hash=abcd1234`)
|
||||
|
||||
- As a result, each controller only "sees" the pods it manages
|
||||
|
||||
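We can see those extra labels on the pods themselves (the hash values will differ on every cluster):

```bash
# Pods from the replica set carry pod-template-hash;
# pods from the daemon set carry controller-revision-hash
kubectl get pods -l app=rng --show-labels
```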
---
|
||||
|
||||
## Removing a pod from the load balancer
|
||||
|
||||
- Currently, the `rng` service is defined by the `app=rng` selector
|
||||
|
||||
- The only way to remove a pod is to remove or change the `app` label
|
||||
|
||||
- ... But that will cause another pod to be created instead!
|
||||
|
||||
- What's the solution?
|
||||
|
||||
--
|
||||
|
||||
- We need to change the selector of the `rng` service!
|
||||
|
||||
- Let's add another label to that selector (e.g. `enabled=yes`)
|
||||
|
||||
---
|
||||
|
||||
## Complex selectors
|
||||
|
||||
- If a selector specifies multiple labels, they are understood as a logical *AND*
|
||||
|
||||
(In other words: the pods must match all the labels)
|
||||
|
||||
- Kubernetes has support for advanced, set-based selectors
|
||||
|
||||
(But these cannot be used with services, at least not yet!)
|
||||
|
||||
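For instance, a set-based query from the CLI could look like this (a sketch; `hasher` is just another label value from the DockerCoins app):

```bash
# Match pods whose app label is either rng or hasher
kubectl get pods -l 'app in (rng, hasher)'
```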
---
|
||||
|
||||
## The plan
|
||||
|
||||
1. Add the label `enabled=yes` to all our `rng` pods
|
||||
|
||||
2. Update the selector for the `rng` service to also include `enabled=yes`
|
||||
|
||||
3. Toggle traffic to a pod by manually adding/removing the `enabled` label
|
||||
|
||||
4. Profit!
|
||||
|
||||
*Note: if we swap steps 1 and 2, it will cause a short service disruption, because there will be a period of time during which the service selector won't match any pod. During that time, requests to the service will time out. By doing things in the order above, we guarantee that there won't be any interruption.*
|
||||
|
||||
---
|
||||
|
||||
## Adding labels to pods
|
||||
|
||||
- We want to add the label `enabled=yes` to all pods that have `app=rng`
|
||||
|
||||
- We could edit each pod one by one with `kubectl edit` ...
|
||||
|
||||
- ... Or we could use `kubectl label` to label them all
|
||||
|
||||
- `kubectl label` can use selectors itself
|
||||
|
||||
.exercise[
|
||||
|
||||
- Show detailed information about the `rng` deployment:
|
||||
- Add `enabled=yes` to all pods that have `app=rng`:
|
||||
```bash
|
||||
kubectl describe deploy rng
|
||||
kubectl label pods -l app=rng enabled=yes
|
||||
```
|
||||
|
||||
- Show detailed information about the `rng` replica:
|
||||
<br/>(The second command doesn't require you to get the exact name of the replica set)
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Updating the service selector
|
||||
|
||||
- We need to edit the service specification
|
||||
|
||||
- Reminder: in the service definition, we will see `app: rng` in two places
|
||||
|
||||
- the label of the service itself (we don't need to touch that one)
|
||||
|
||||
- the selector of the service (that's the one we want to change)
|
||||
|
||||
.exercise[
|
||||
|
||||
- Update the service to add `enabled: yes` to its selector:
|
||||
```bash
|
||||
kubectl describe rs rng-yyyy
|
||||
kubectl describe rs -l run=rng
|
||||
kubectl edit service rng
|
||||
```
|
||||
|
||||
<!--
|
||||
```wait Please edit the object below```
|
||||
```keys /app: rng```
|
||||
```keys ^J```
|
||||
```keys noenabled: yes```
|
||||
```keys ^[``` ]
|
||||
```keys :wq```
|
||||
```keys ^J```
|
||||
-->
|
||||
|
||||
]
|
||||
|
||||
--
|
||||
|
||||
The replica set selector also has a `pod-template-hash`, unlike the pods in our daemon set.
|
||||
... And then we get *the weirdest error ever.* Why?
|
||||
|
||||
---
|
||||
|
||||
# Updating a service through labels and selectors
|
||||
## When the YAML parser is being too smart
|
||||
|
||||
- What if we want to drop the `rng` deployment from the load balancer?
|
||||
- YAML parsers try to help us:
|
||||
|
||||
- Option 1:
|
||||
- `xyz` is the string `"xyz"`
|
||||
|
||||
- destroy it
|
||||
- `42` is the integer `42`
|
||||
|
||||
- Option 2:
|
||||
- `yes` is the boolean value `true`
|
||||
|
||||
- add an extra *label* to the daemon set
|
||||
- If we want the string `"42"` or the string `"yes"`, we have to quote them
|
||||
|
||||
- update the service *selector* to refer to that *label*
|
||||
- So we have to use `enabled: "yes"`
|
||||
|
||||
--
|
||||
|
||||
Of course, option 2 offers more learning opportunities. Right?
|
||||
.footnote[For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!]
|
||||
|
||||
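We can check how a YAML parser reads these scalars with a quick one-liner (a sketch assuming `python3` with PyYAML, which this repo already uses):

```bash
python3 -c 'import yaml; print(yaml.safe_load("a: yes\nb: \"yes\"\nc: 42"))'
# {'a': True, 'b': 'yes', 'c': 42}
```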
---
|
||||
|
||||
## Add an extra label to the daemon set
|
||||
## Updating the service selector, take 2
|
||||
|
||||
- We will update the daemon set "spec"
|
||||
.exercise[
|
||||
|
||||
- Option 1:
|
||||
- Update the service to add `enabled: "yes"` to its selector:
|
||||
```bash
|
||||
kubectl edit service rng
|
||||
```
|
||||
|
||||
- edit the `rng.yml` file that we used earlier
|
||||
<!--
|
||||
```wait Please edit the object below```
|
||||
```keys /app: rng```
|
||||
```keys ^J```
|
||||
```keys noenabled: "yes"```
|
||||
```keys ^[``` ]
|
||||
```keys :wq```
|
||||
```keys ^J```
|
||||
-->
|
||||
|
||||
- load the new definition with `kubectl apply`
|
||||
]
|
||||
|
||||
- Option 2:
|
||||
This time it should work!
|
||||
|
||||
- use `kubectl edit`
|
||||
|
||||
--
|
||||
|
||||
*If you feel like you got this💕🌈, feel free to try directly.*
|
||||
|
||||
*We've included a few hints on the next slides for your convenience!*
|
||||
If we did everything correctly, the web UI shouldn't show any change.
|
||||
|
||||
---
|
||||
|
||||
## Updating labels
|
||||
|
||||
- We want to disable the pod that was created by the deployment
|
||||
|
||||
- All we have to do, is remove the `enabled` label from that pod
|
||||
|
||||
- To identify that pod, we can use its name
|
||||
|
||||
- ... Or rely on the fact that it's the only one with a `pod-template-hash` label
|
||||
|
||||
- Good to know:
|
||||
|
||||
- `kubectl label ... foo=` doesn't remove a label (it sets it to an empty string)
|
||||
|
||||
- to remove label `foo`, use `kubectl label ... foo-`
|
||||
|
||||
- to change an existing label, we would need to add `--overwrite`
|
||||
|
||||
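Putting those three behaviors side by side (a sketch; `mypod` is a hypothetical pod name):

```bash
kubectl label pod mypod foo=bar              # set a new label
kubectl label pod mypod foo=baz --overwrite  # change an existing label
kubectl label pod mypod foo-                 # remove the label entirely
```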
---
|
||||
|
||||
## Removing a pod from the load balancer
|
||||
|
||||
.exercise[
|
||||
|
||||
- In one window, check the logs of that pod:
|
||||
```bash
|
||||
POD=$(kubectl get pod -l app=rng,pod-template-hash -o name)
|
||||
kubectl logs --tail 1 --follow $POD
|
||||
|
||||
```
|
||||
(We should see a steady stream of HTTP logs)
|
||||
|
||||
- In another window, remove the label from the pod:
|
||||
```bash
|
||||
kubectl label pod -l app=rng,pod-template-hash enabled-
|
||||
```
|
||||
(The stream of HTTP logs should stop immediately)
|
||||
|
||||
]
|
||||
|
||||
There might be a slight change in the web UI (since we removed a bit
|
||||
of capacity from the `rng` service). If we remove more pods,
|
||||
the effect should be more visible.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Updating the daemon set
|
||||
|
||||
- If we scale up our cluster by adding new nodes, the daemon set will create more pods
|
||||
|
||||
- These pods won't have the `enabled=yes` label
|
||||
|
||||
- If we want these pods to have that label, we need to edit the daemon set spec
|
||||
|
||||
- We can do that with e.g. `kubectl edit daemonset rng`
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## We've put resources in your resources
|
||||
|
||||
- Reminder: a daemon set is a resource that creates more resources!
|
||||
@@ -410,7 +626,9 @@ Of course, option 2 offers more learning opportunities. Right?
|
||||
|
||||
- the label(s) of the resource(s) created by the first resource (in the `template` block)
|
||||
|
||||
- You need to update the selector and the template (metadata labels are not mandatory)
|
||||
- We would need to update the selector and the template
|
||||
|
||||
(metadata labels are not mandatory)
|
||||
|
||||
- The template must match the selector
|
||||
|
||||
@@ -418,175 +636,6 @@ Of course, option 2 offers more learning opportunities. Right?
|
||||
|
||||
---
|
||||
|
||||
## Adding our label
|
||||
|
||||
- Let's add a label `isactive: yes`
|
||||
|
||||
- In YAML, `yes` should be quoted; i.e. `isactive: "yes"`
|
||||
|
||||
.exercise[
|
||||
|
||||
- Update the daemon set to add `isactive: "yes"` to the selector and template label:
|
||||
```bash
|
||||
kubectl edit daemonset rng
|
||||
```
|
||||
|
||||
<!--
|
||||
```wait Please edit the object below```
|
||||
```keys /run: rng```
|
||||
```keys ^J```
|
||||
```keys noisactive: "yes"```
|
||||
```keys ^[``` ]
|
||||
```keys /run: rng```
|
||||
```keys ^J```
|
||||
```keys oisactive: "yes"```
|
||||
```keys ^[``` ]
|
||||
```keys :wq```
|
||||
```keys ^J```
|
||||
-->
|
||||
|
||||
- Update the service to add `isactive: "yes"` to its selector:
|
||||
```bash
|
||||
kubectl edit service rng
|
||||
```
|
||||
|
||||
<!--
|
||||
```wait Please edit the object below```
|
||||
```keys /run: rng```
|
||||
```keys ^J```
|
||||
```keys noisactive: "yes"```
|
||||
```keys ^[``` ]
|
||||
```keys :wq```
|
||||
```keys ^J```
|
||||
-->
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Checking what we've done
|
||||
|
||||
.exercise[
|
||||
|
||||
- Check the most recent log line of all `run=rng` pods to confirm that exactly one per node is now active:
|
||||
```bash
|
||||
kubectl logs -l run=rng --tail 1
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
The timestamps should give us a hint about how many pods are currently receiving traffic.
|
||||
|
||||
.exercise[
|
||||
|
||||
- Look at the pods that we have right now:
|
||||
```bash
|
||||
kubectl get pods
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Cleaning up
|
||||
|
||||
- The pods of the deployment and the "old" daemon set are still running
|
||||
|
||||
- We are going to identify them programmatically
|
||||
|
||||
.exercise[
|
||||
|
||||
- List the pods with `run=rng` but without `isactive=yes`:
|
||||
```bash
|
||||
kubectl get pods -l run=rng,isactive!=yes
|
||||
```
|
||||
|
||||
- Remove these pods:
|
||||
```bash
|
||||
kubectl delete pods -l run=rng,isactive!=yes
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Cleaning up stale pods
|
||||
|
||||
```
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
rng-54f57d4d49-7pt82 1/1 Terminating 0 51m
|
||||
rng-54f57d4d49-vgz9h 1/1 Running 0 22s
|
||||
rng-b85tm 1/1 Terminating 0 39m
|
||||
rng-hfbrr 1/1 Terminating 0 39m
|
||||
rng-vplmj 1/1 Running 0 7m
|
||||
rng-xbpvg 1/1 Running 0 7m
|
||||
[...]
|
||||
```
|
||||
|
||||
- The extra pods (noted `Terminating` above) are going away
|
||||
|
||||
- ... But a new one (`rng-54f57d4d49-vgz9h` above) was created immediately!
|
||||
|
||||
--
|
||||
|
||||
- Remember, the *deployment* still exists, and makes sure that one pod is up and running
|
||||
|
||||
- If we delete the pod associated to the deployment, it is recreated automatically
|
||||
|
||||
---
|
||||
|
||||
## Deleting a deployment
|
||||
|
||||
.exercise[
|
||||
|
||||
- Remove the `rng` deployment:
|
||||
```bash
|
||||
kubectl delete deployment rng
|
||||
```
|
||||
]
|
||||
|
||||
--
|
||||
|
||||
- The pod that was created by the deployment is now being terminated:
|
||||
|
||||
```
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
rng-54f57d4d49-vgz9h 1/1 Terminating 0 4m
|
||||
rng-vplmj 1/1 Running 0 11m
|
||||
rng-xbpvg 1/1 Running 0 11m
|
||||
[...]
|
||||
```
|
||||
|
||||
Ding, dong, the deployment is dead! And the daemon set lives on.
|
||||
|
||||
---
|
||||
|
||||
## Avoiding extra pods
|
||||
|
||||
- When we changed the definition of the daemon set, it immediately created new pods. We had to remove the old ones manually.
|
||||
|
||||
- How could we have avoided this?
|
||||
|
||||
--
|
||||
|
||||
- By adding the `isactive: "yes"` label to the pods before changing the daemon set!
|
||||
|
||||
- This can be done programmatically with `kubectl patch`:
|
||||
|
||||
```bash
|
||||
PATCH='
|
||||
metadata:
|
||||
labels:
|
||||
isactive: "yes"
|
||||
'
|
||||
kubectl get pods -l run=rng -l controller-revision-hash -o name |
|
||||
xargs kubectl patch -p "$PATCH"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Labels and debugging
|
||||
|
||||
- When a pod is misbehaving, we can delete it: another one will be recreated
|
||||
|
||||
@@ -182,7 +182,7 @@ The dashboard will then ask you which authentication you want to use.
|
||||
kubectl -n kube-system edit service kubernetes-dashboard
|
||||
```
|
||||
|
||||
- Change `ClusterIP` to `NodePort`, save, and exit
|
||||
- Change `type:` from `ClusterIP` to `NodePort`, save, and exit
|
||||
|
||||
<!--
|
||||
```wait Please edit the object below```
|
||||
|
||||
@@ -111,7 +111,7 @@
|
||||
|
||||
- Display that key:
|
||||
```
|
||||
kubectl get logs deployment flux | grep identity
|
||||
kubectl logs deployment flux | grep identity
|
||||
```
|
||||
|
||||
- Then add that key to the repository, giving it **write** access
|
||||
|
||||
@@ -164,6 +164,21 @@ The chart's metadata includes an URL to the project's home page.
|
||||
|
||||
---
|
||||
|
||||
## Viewing installed charts
|
||||
|
||||
- Helm keeps track of what we've installed
|
||||
|
||||
.exercise[
|
||||
|
||||
- List installed Helm charts:
|
||||
```bash
|
||||
helm list
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Creating a chart
|
||||
|
||||
- We are going to show a way to create a *very simplified* chart
|
||||
|
||||
@@ -344,7 +344,7 @@ This is normal: we haven't provided any ingress rule yet.
|
||||
|
||||
- To make our lives easier, we will use [nip.io](http://nip.io)
|
||||
|
||||
- Check out `http://cheddar.A.B.C.D.mip.io`
|
||||
- Check out `http://cheddar.A.B.C.D.nip.io`
|
||||
|
||||
(replacing A.B.C.D with the IP address of `node1`)
|
||||
|
||||
@@ -392,9 +392,9 @@ This is normal: we haven't provided any ingress rule yet.
|
||||
|
||||
- Run all three deployments:
|
||||
```bash
|
||||
kubectl run cheddar --image=errm/cheese:cheddar
|
||||
kubectl run stilton --image=errm/cheese:stilton
|
||||
kubectl run wensleydale --image=errm/cheese:wensleydale
|
||||
kubectl create deployment cheddar --image=errm/cheese:cheddar
|
||||
kubectl create deployment stilton --image=errm/cheese:stilton
|
||||
kubectl create deployment wensleydale --image=errm/cheese:wensleydale
|
||||
```
|
||||
|
||||
- Create a service for each of them:
|
||||
|
||||
@@ -43,45 +43,63 @@ Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables`
|
||||
- an external load balancer is allocated for the service
|
||||
- the load balancer is configured accordingly
|
||||
<br/>(e.g.: a `NodePort` service is created, and the load balancer sends traffic to that port)
|
||||
- available only when the underlying infrastructure provides some "load balancer as a service"
|
||||
<br/>(e.g. AWS, Azure, GCE, OpenStack...)
|
||||
|
||||
- `ExternalName`
|
||||
|
||||
- the DNS entry managed by CoreDNS will just be a `CNAME` to a provided record
|
||||
- no port, no IP address, nothing else is allocated
|
||||
|
||||
The `LoadBalancer` type is currently only available on AWS, Azure, and GCE.
|
||||
|
||||
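As an aside, such a service can be created directly from the CLI; a minimal sketch (the service name and target hostname are made up):

```bash
kubectl create service externalname my-db --external-name db.example.com
```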
---
|
||||
|
||||
## Running containers with open ports
|
||||
|
||||
- Since `ping` doesn't have anything to connect to, we'll have to run something else
|
||||
|
||||
- We could use the `nginx` official image, but ...
|
||||
|
||||
... we wouldn't be able to tell the backends from each other!
|
||||
|
||||
- We are going to use `jpetazzo/httpenv`, a tiny HTTP server written in Go
|
||||
|
||||
- `jpetazzo/httpenv` listens on port 8888
|
||||
|
||||
- It serves its environment variables in JSON format
|
||||
|
||||
- The environment variables will include `HOSTNAME`, which will be the pod name
|
||||
|
||||
(and therefore, will be different on each backend)
|
||||
|
||||
---
|
||||
|
||||
## Creating a deployment for our HTTP server
|
||||
|
||||
- We *could* do `kubectl run httpenv --image=jpetazzo/httpenv` ...
|
||||
|
||||
- But since `kubectl run` is being deprecated, let's see how to use `kubectl create` instead
|
||||
|
||||
.exercise[
|
||||
|
||||
- Start a bunch of HTTP servers:
|
||||
```bash
|
||||
kubectl run httpenv --image=jpetazzo/httpenv --replicas=10
|
||||
```
|
||||
|
||||
- Watch them being started:
|
||||
- In another window, watch the pods (to see when they will be created):
|
||||
```bash
|
||||
kubectl get pods -w
|
||||
```
|
||||
|
||||
<!--
|
||||
```wait httpenv-```
|
||||
```keys ^C```
|
||||
-->
|
||||
<!-- ```keys ^C``` -->
|
||||
|
||||
- Create a deployment for this very lightweight HTTP server:
|
||||
```bash
|
||||
kubectl create deployment httpenv --image=jpetazzo/httpenv
|
||||
```
|
||||
|
||||
- Scale it to 10 replicas:
|
||||
```bash
|
||||
kubectl scale deployment httpenv --replicas=10
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
The `jpetazzo/httpenv` image runs an HTTP server on port 8888.
|
||||
<br/>
|
||||
It serves its environment variables in JSON format.
|
||||
|
||||
The `-w` option "watches" events happening on the specified resources.
|
||||
|
||||
---
|
||||
|
||||
## Exposing our deployment
|
||||
@@ -92,12 +110,12 @@ The `-w` option "watches" events happening on the specified resources.
|
||||
|
||||
- Expose the HTTP port of our server:
|
||||
```bash
|
||||
kubectl expose deploy/httpenv --port 8888
|
||||
kubectl expose deployment httpenv --port 8888
|
||||
```
|
||||
|
||||
- Look up which IP address was allocated:
|
||||
```bash
|
||||
kubectl get svc
|
||||
kubectl get service
|
||||
```
|
||||
|
||||
]
|
||||
@@ -151,7 +169,7 @@ The `-w` option "watches" events happening on the specified resources.
|
||||
|
||||
--
|
||||
|
||||
Our requests are load balanced across multiple pods.
|
||||
Try it a few times! Our requests are load balanced across multiple pods.
|
||||
|
||||
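One way to watch the load balancing in action (a sketch assuming `$IP` holds the ClusterIP of the `httpenv` service and `jq` is installed):

```bash
# Each response should come from a different pod (HOSTNAME is the pod name)
for i in $(seq 5); do curl -s http://$IP:8888/ | jq -r .HOSTNAME; done
```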
---
|
||||
|
||||
@@ -237,7 +255,7 @@ class: extra-details
|
||||
|
||||
- These IP addresses should match the addresses of the corresponding pods:
|
||||
```bash
|
||||
kubectl get pods -l run=httpenv -o wide
|
||||
kubectl get pods -l app=httpenv -o wide
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
@@ -32,7 +32,8 @@
|
||||
|
||||
--
|
||||
|
||||
OK, what just happened?
|
||||
(Starting with Kubernetes 1.12, we get a message telling us that
|
||||
`kubectl run` is deprecated. Let's ignore it for now.)
|
||||
|
||||
---
|
||||
|
||||
@@ -172,6 +173,11 @@ pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
|
||||
kubectl scale deploy/pingpong --replicas 8
|
||||
```
|
||||
|
||||
- Note that this command does exactly the same thing:
|
||||
```bash
|
||||
kubectl scale deployment pingpong --replicas 8
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?
|
||||
@@ -228,6 +234,44 @@ We could! But the *deployment* would notice it right away, and scale back to the
|
||||
|
||||
---
|
||||
|
||||
## What about that deprecation warning?
|
||||
|
||||
- As we can see from the previous slide, `kubectl run` can do many things
|
||||
|
||||
- The exact type of resource created is not obvious
|
||||
|
||||
- To make things more explicit, it is better to use `kubectl create`:
|
||||
|
||||
- `kubectl create deployment` to create a deployment
|
||||
|
||||
- `kubectl create job` to create a job
|
||||
|
||||
- Eventually, `kubectl run` will be used only to start one-shot pods
|
||||
|
||||
(see https://github.com/kubernetes/kubernetes/pull/68132)
|
||||
|
||||
---
|
||||
|
||||
## Various ways of creating resources
|
||||
|
||||
- `kubectl run`
|
||||
|
||||
- easy way to get started
|
||||
- versatile
|
||||
|
||||
- `kubectl create <resource>`
|
||||
|
||||
- explicit, but lacks some features
|
||||
- can't create a CronJob
|
||||
- can't pass command-line arguments to deployments
|
||||
|
||||
- `kubectl create -f foo.yaml` or `kubectl apply -f foo.yaml`
|
||||
|
||||
- all features are available
|
||||
- requires writing YAML
|
||||
|
||||
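To illustrate that last option, here is a self-contained sketch creating the same `httpenv` deployment from YAML (the field values merely mirror the earlier commands):

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpenv
spec:
  replicas: 10
  selector:
    matchLabels:
      app: httpenv
  template:
    metadata:
      labels:
        app: httpenv
    spec:
      containers:
      - name: httpenv
        image: jpetazzo/httpenv
EOF
```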
---
|
||||
|
||||
## Viewing logs of multiple pods
|
||||
|
||||
- When we specify a deployment name, only one single pod's logs are shown
|
||||
@@ -248,6 +292,26 @@ We could! But the *deployment* would notice it right away, and scale back to the
|
||||
]
|
||||
|
||||
Unfortunately, `--follow` cannot (yet) be used to stream the logs from multiple containers.
|
||||
<br/>
|
||||
(But this will change in the future; see [PR #67573](https://github.com/kubernetes/kubernetes/pull/67573).)
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## `kubectl logs -l ... --tail N`
|
||||
|
||||
- If we run this with Kubernetes 1.12, the last command shows multiple lines
|
||||
|
||||
- This is a regression when `--tail` is used together with `-l`/`--selector`
|
||||
|
||||
- It always shows the last 10 lines of output for each container
|
||||
|
||||
(instead of the number of lines specified on the command line)
|
||||
|
||||
- The problem was fixed in Kubernetes 1.13
|
||||
|
||||
*See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.*
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -1,15 +1,13 @@
|
||||
# Links and resources
|
||||
|
||||
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
|
||||
|
||||
- [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)
|
||||
|
||||
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
|
||||
- [Microsoft Learn](https://docs.microsoft.com/learn/)
|
||||
|
||||
- [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/)
|
||||
|
||||
- [Cloud Developer Advocates](https://developer.microsoft.com/advocates/)
|
||||
|
||||
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
|
||||
|
||||
- [Local meetups](https://www.meetup.com/)
|
||||
|
||||
- [devopsdays](https://www.devopsdays.org/)
|
||||
|
||||
@@ -12,13 +12,15 @@
|
||||
|
||||
.exercise[
|
||||
|
||||
<!-- ##VERSION## -->
|
||||
|
||||
- Download the `kubectl` binary from one of these links:
|
||||
|
||||
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.11.2/bin/linux/amd64/kubectl)
|
||||
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.13.4/bin/linux/amd64/kubectl)
|
||||
|
|
||||
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.11.2/bin/darwin/amd64/kubectl)
|
||||
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.13.4/bin/darwin/amd64/kubectl)
|
||||
|
|
||||
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.11.2/bin/windows/amd64/kubectl.exe)
|
||||
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.13.4/bin/windows/amd64/kubectl.exe)
|
||||
|
||||
- On Linux and macOS, make the binary executable with `chmod +x kubectl`
|
||||
|
||||
|
||||
@@ -59,13 +59,15 @@ Exactly what we need!
|
||||
|
||||
- If it is not installed, the easiest method is to download a [binary release](https://github.com/wercker/stern/releases)
|
||||
|
||||
- The following commands will install Stern on a Linux Intel 64 bits machine:
|
||||
- The following commands will install Stern on a Linux Intel 64 bit machine:
|
||||
```bash
|
||||
sudo curl -L -o /usr/local/bin/stern \
|
||||
https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64
|
||||
https://github.com/wercker/stern/releases/download/1.10.0/stern_linux_amd64
|
||||
sudo chmod +x /usr/local/bin/stern
|
||||
```
|
||||
|
||||
<!-- ##VERSION## -->
|
||||
|
||||
---
|
||||
|
||||
## Using Stern
|
||||
@@ -130,11 +132,13 @@ Exactly what we need!
|
||||
|
||||
- We can use that property to view the logs of all the pods created with `kubectl run`
|
||||
|
||||
- Similarly, everything created with `kubectl create deployment` has a label `app`
|
||||
|
||||
.exercise[
|
||||
|
||||
- View the logs for all the things started with `kubectl run`:
|
||||
- View the logs for all the things started with `kubectl create deployment`:
|
||||
```bash
|
||||
stern -l run
|
||||
stern -l app
|
||||
```
|
||||
|
||||
<!--
|
||||
|
||||
@@ -68,7 +68,7 @@
|
||||
kubectl -n blue get svc
|
||||
```
|
||||
|
||||
- We can also use *contexts*
|
||||
- We can also change our current *context*
|
||||
|
||||
- A context is a *(user, cluster, namespace)* tuple
|
||||
|
||||
@@ -76,9 +76,9 @@
|
||||
|
||||
---
|
||||
|
||||
## Creating a context
|
||||
## Viewing existing contexts
|
||||
|
||||
- We are going to create a context for the `blue` namespace
|
||||
- On our training environments, at this point, there should be only one context
|
||||
|
||||
.exercise[
|
||||
|
||||
@@ -87,29 +87,79 @@
|
||||
kubectl config get-contexts
|
||||
```
|
||||
|
||||
- Create a new context:
|
||||
]
|
||||
|
||||
- The current context (the only one!) is tagged with a `*`
|
||||
|
||||
- What are NAME, CLUSTER, AUTHINFO, and NAMESPACE?
|
||||
|
||||
---
|
||||
|
||||
## What's in a context
|
||||
|
||||
- NAME is an arbitrary string to identify the context
|
||||
|
||||
- CLUSTER is a reference to a cluster
|
||||
|
||||
(i.e. API endpoint URL, and optional certificate)
|
||||
|
||||
- AUTHINFO is a reference to the authentication information to use
|
||||
|
||||
(i.e. a TLS client certificate, token, or otherwise)
|
||||
|
||||
- NAMESPACE is the namespace
|
||||
|
||||
(empty string = `default`)
|
||||
|
||||
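To peek at the kubeconfig entries behind these columns, restricted to the current context:

```bash
kubectl config view --minify
```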
---
|
||||
|
||||
## Switching contexts
|
||||
|
||||
- We want to use a different namespace
|
||||
|
||||
- Solution 1: update the current context
|
||||
|
||||
*This is appropriate if we need to change just one thing (e.g. namespace or authentication).*
|
||||
|
||||
- Solution 2: create a new context and switch to it
|
||||
|
||||
*This is appropriate if we need to change multiple things and switch back and forth.*
|
||||
|
||||
- Let's go with solution 1!
|
||||
|
||||
---
|
||||
|
||||
## Updating a context
|
||||
|
||||
- This is done through `kubectl config set-context`
|
||||
|
||||
- We can update a context by passing its name, or the current context with `--current`
|
||||
|
||||
.exercise[
|
||||
|
||||
- Update the current context to use the `blue` namespace:
|
||||
```bash
|
||||
kubectl config set-context blue --namespace=blue \
|
||||
--cluster=kubernetes --user=kubernetes-admin
|
||||
kubectl config set-context --current --namespace=blue
|
||||
```
|
||||
|
||||
- Check the result:
|
||||
```bash
|
||||
kubectl config get-contexts
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
We have updated our context; but this is just a configuration change.
|
||||
|
||||
The namespace doesn't exist yet.
|
||||
|
||||
---
|
||||
|
||||
## Using a context
|
||||
## Using our new namespace
|
||||
|
||||
- Let's switch to our new context and deploy the DockerCoins chart
|
||||
- Let's check that we are in our new namespace, then deploy the DockerCoins chart
|
||||
|
||||
.exercise[
|
||||
|
||||
- Use the `blue` context:
|
||||
- Verify that the new context is empty:
|
||||
```bash
|
||||
kubectl config use-context blue
|
||||
kubectl get all
|
||||
```
|
||||
|
||||
- Deploy DockerCoins:
|
||||
@@ -139,7 +189,46 @@ we created our Helm chart before.
|
||||
|
||||
]
|
||||
|
||||
Note: it might take a minute or two for the app to be up and running.
|
||||
If the graph shows up but stays at zero, check the next slide!
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
If you did the exercises from the chapter about labels and selectors, the app that you just created may not work, because the `rng` service selector has `enabled=yes` but the pods created by the `rng` daemon set do not have that label.
|
||||
|
||||
How can we troubleshoot that?
|
||||
|
||||
- Query individual services manually
|
||||
|
||||
→ the `rng` service will time out
|
||||
|
||||
- Inspect the services with `kubectl describe service`
|
||||
|
||||
→ the `rng` service will have an empty list of backends
|
||||
|
||||
---
|
||||
|
||||
## Fixing the broken service
|
||||
|
||||
The easiest option is to add the `enabled=yes` label to the relevant pods.
|
||||
|
||||
.exercise[
|
||||
|
||||
- Add the `enabled` label to the pods of the `rng` daemon set:
|
||||
```bash
|
||||
kubectl label pods -l app=rng enabled=yes
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
The *best* option is to change either the service definition, or the
|
||||
daemon set definition, so that their respective selectors match correctly.
|
||||
|
||||
*This is left as an exercise for the reader!*
|
||||
|
||||
---
|
||||
|
||||
@@ -175,48 +264,66 @@ Note: it might take a minute or two for the app to be up and running.
|
||||
|
||||
---
|
||||
|
||||
## Network policies overview
|
||||
|
||||
- We can create as many network policies as we want
|
||||
|
||||
- Each network policy has:
|
||||
|
||||
- a *pod selector*: "which pods are targeted by the policy?"
|
||||
|
||||
- lists of ingress and/or egress rules: "which peers and ports are allowed or blocked?"
|
||||
|
||||
- If a pod is not targeted by any policy, traffic is allowed by default
|
||||
|
||||
- If a pod is targeted by at least one policy, traffic must be allowed explicitly
|
||||
|
||||
---
|
||||
|
||||
## More about network policies
|
||||
|
||||
- This remains a high level overview of network policies
|
||||
|
||||
- For more details, check:
|
||||
|
||||
- the [Kubernetes documentation about network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
|
||||
|
||||
- this [talk about network policies at KubeCon 2017 US](https://www.youtube.com/watch?v=3gGpMmYeEO8) by [@ahmetb](https://twitter.com/ahmetb)
|
||||
|
||||
---
|
||||
|
||||
## Switch back to the default namespace
|
||||
|
||||
- Let's make sure that we don't run future exercises in the `blue` namespace
|
||||
|
||||
.exercise[
|
||||
|
||||
- View the names of the contexts:
|
||||
```bash
|
||||
kubectl config get-contexts
|
||||
```
|
||||
|
||||
- Switch back to the original context:
|
||||
```bash
|
||||
kubectl config use-context kubernetes-admin@kubernetes
|
||||
kubectl config set-context --current --namespace=
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
Note: we could have used `--namespace=default` for the same result.
|
||||
|
||||
---
|
||||
|
||||
## Switching namespaces more easily
|
||||
|
||||
- We can also use a little helper tool called `kubens`:
|
||||
|
||||
```bash
|
||||
# Switch to namespace foo
|
||||
kubens foo
|
||||
# Switch back to the previous namespace
|
||||
kubens -
|
||||
```
|
||||
|
||||
- On our clusters, `kubens` is called `kns` instead
|
||||
|
||||
(so that it's even fewer keystrokes to switch namespaces)
|
||||
|
||||
---
|
||||
|
||||
## `kubens` and `kubectx`
|
||||
|
||||
- With `kubens`, we can switch quickly between namespaces
|
||||
|
||||
- With `kubectx`, we can switch quickly between contexts
|
||||
|
||||
- Both tools are simple shell scripts available from https://github.com/ahmetb/kubectx
|
||||
|
||||
- On our clusters, they are installed as `kns` and `kctx`
|
||||
|
||||
(for brevity and to avoid completion clashes between `kubectx` and `kubectl`)
|
||||
|
||||
---
|
||||
|
||||
## `kube-ps1`
|
||||
|
||||
- It's easy to lose track of our current cluster / context / namespace
|
||||
|
||||
- `kube-ps1` makes it easy to track these, by showing them in our shell prompt
|
||||
|
||||
- It's a simple shell script available from https://github.com/jonmosco/kube-ps1
|
||||
|
||||
- On our clusters, `kube-ps1` is installed and included in `PS1`:
|
||||
```
|
||||
[123.45.67.89] `(kubernetes-admin@kubernetes:default)` docker@node1 ~
|
||||
```
|
||||
(The highlighted part is `context:namespace`, managed by `kube-ps1`)
|
||||
|
||||
- Highly recommended if you work across multiple contexts or namespaces!
|
||||
|
||||
@@ -117,13 +117,13 @@ This is our game plan:
|
||||
|
||||
- Let's use the `nginx` image:
|
||||
```bash
|
||||
kubectl run testweb --image=nginx
|
||||
kubectl create deployment testweb --image=nginx
|
||||
```
|
||||
|
||||
- Find out the IP address of the pod with one of these two commands:
|
||||
```bash
|
||||
kubectl get pods -o wide -l run=testweb
|
||||
IP=$(kubectl get pods -l run=testweb -o json | jq -r .items[0].status.podIP)
|
||||
kubectl get pods -o wide -l app=testweb
|
||||
IP=$(kubectl get pods -l app=testweb -o json | jq -r .items[0].status.podIP)
|
||||
```
|
||||
|
||||
- Check that we can connect to the server:
|
||||
@@ -138,7 +138,7 @@ The `curl` command should show us the "Welcome to nginx!" page.
|
||||
|
||||
## Adding a very restrictive network policy
|
||||
|
||||
- The policy will select pods with the label `run=testweb`
|
||||
- The policy will select pods with the label `app=testweb`
|
||||
|
||||
- It will specify an empty list of ingress rules (matching nothing)
|
||||
|
||||
@@ -172,7 +172,7 @@ metadata:
|
||||
spec:
|
||||
podSelector:
|
||||
matchLabels:
|
||||
run: testweb
|
||||
app: testweb
|
||||
ingress: []
|
||||
```
|
||||
|
||||
@@ -207,7 +207,7 @@ metadata:
|
||||
spec:
|
||||
podSelector:
|
||||
matchLabels:
|
||||
run: testweb
|
||||
app: testweb
|
||||
ingress:
|
||||
- from:
|
||||
- podSelector:
|
||||
@@ -247,9 +247,9 @@ The second command will fail and time out after 3 seconds.
|
||||
|
||||
- Some network plugins only have partial support for network policies
|
||||
|
||||
- For instance, Weave [doesn't support ipBlock (yet)](https://github.com/weaveworks/weave/issues/3168)
|
||||
- For instance, Weave added support for egress rules [in version 2.4](https://github.com/weaveworks/weave/pull/3313) (released in July 2018)
|
||||
|
||||
- Weave added support for egress rules [in version 2.4](https://github.com/weaveworks/weave/pull/3313) (released in July 2018)
|
||||
- But only recently added support for ipBlock [in version 2.5](https://github.com/weaveworks/weave/pull/3367) (released in Nov 2018)
|
||||
|
||||
- Unsupported features might be silently ignored
|
||||
|
||||
@@ -325,7 +325,7 @@ spec:
|
||||
|
||||
## Allowing traffic to `webui` pods
|
||||
|
||||
This policy selects all pods with label `run=webui`.
|
||||
This policy selects all pods with label `app=webui`.
|
||||
|
||||
It allows traffic from any source.
|
||||
|
||||
@@ -339,7 +339,7 @@ metadata:
|
||||
spec:
|
||||
podSelector:
|
||||
matchLabels:
|
||||
run: webui
|
||||
app: webui
|
||||
ingress:
|
||||
- from: []
|
||||
```
|
||||
@@ -371,6 +371,23 @@ troubleshoot easily, without having to poke holes in our firewall.
|
||||
|
||||
---
|
||||
|
||||
## Cleaning up our network policies
|
||||
|
||||
- The network policies that we have installed block all traffic to the default namespace
|
||||
|
||||
- We should remove them, otherwise further exercises will fail!
|
||||
|
||||
.exercise[
|
||||
|
||||
- Remove all network policies:
|
||||
```bash
|
||||
kubectl delete networkpolicies --all
|
||||
```
|
||||
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Protecting the control plane
|
||||
|
||||
- Should we add network policies to block unauthorized access to the control plane?
|
||||
@@ -405,11 +422,11 @@ troubleshoot easily, without having to poke holes in our firewall.
|
||||
|
||||
- The API documentation has a lot of detail about the format of various objects:
|
||||
|
||||
- [NetworkPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#networkpolicy-v1-networking-k8s-io)
|
||||
- [NetworkPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicy-v1-networking-k8s-io)
|
||||
|
||||
- [NetworkPolicySpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#networkpolicyspec-v1-networking-k8s-io)
|
||||
- [NetworkPolicySpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicyspec-v1-networking-k8s-io)
|
||||
|
||||
- [NetworkPolicyIngressRule](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#networkpolicyingressrule-v1-networking-k8s-io)
|
||||
- [NetworkPolicyIngressRule](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicyingressrule-v1-networking-k8s-io)
|
||||
|
||||
- etc.
|
||||
|
||||
|
||||