Compare commits: `paris`...`devopsdays` (377 commits)
*(Commit list: 377 commits, from `8ef6219295` through `5438fca35a`; the Author, Date, and message columns were empty in the capture.)*
.gitignore (2 additions)

```diff
@@ -8,4 +8,6 @@ prepare-vms/settings.yaml
 prepare-vms/tags
 slides/*.yml.html
 slides/autopilot/state.yaml
+slides/index.html
+slides/past.html
 node_modules
```
CHECKLIST.md (31 changes)

```diff
@@ -1,19 +1,24 @@
-This is the checklist that I (Jérôme) use when delivering a workshop.
+Checklist to use when delivering a workshop
+
+Authored by Jérôme; additions by Bridget
 
-- [ ] Create branch + `_redirects` + push to GitHub + Netlify setup
-- [ ] Add branch to index.html
-- [ ] Update the slides that says which versions we are using
-- [ ] Update the version of Compose and Machine in settings
-- [ ] Create chatroom
-- [ ] Set chatroom in YML and deploy
-- [ ] Put chat room in index.html
-- [ ] Walk the room to count seats, check power supplies, lectern, A/V setup
-- [ ] How many VMs do we need?
-- [ ] Provision VMs
+- [ ] Create event-named branch (such as `conferenceYYYY`) in the [main repo](https://github.com/jpetazzo/container.training/)
+- [ ] Create file `slides/_redirects` containing a link to the desired tutorial: `/ /kube-halfday.yml.html 200`
+- [ ] Push local branch to GitHub and merge into main repo
+- [ ] [Netlify setup](https://app.netlify.com/sites/container-training/settings/domain): create subdomain for event-named branch
+- [ ] Add link to event-named branch to [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
+- [ ] Update the slides that says which versions we are using for [kube](https://github.com/jpetazzo/container.training/blob/master/slides/kube/versions-k8s.md) or [swarm](https://github.com/jpetazzo/container.training/blob/master/slides/swarm/versions.md) workshops
+- [ ] Update the version of Compose and Machine in [settings](https://github.com/jpetazzo/container.training/tree/master/prepare-vms/settings)
+- [ ] (optional) Create chatroom
+- [ ] (optional) Set chatroom in YML ([kube half-day example](https://github.com/jpetazzo/container.training/blob/master/slides/kube-halfday.yml#L6-L8)) and deploy
+- [ ] (optional) Put chat link on [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
+- [ ] How many VMs do we need? Check with event organizers ahead of time
+- [ ] Provision VMs (slightly more than we think we'll need)
+- [ ] Change password on presenter's VMs (to forestall any hijinx)
+- [ ] Onsite: walk the room to count seats, check power supplies, lectern, A/V setup
 - [ ] Print cards
 - [ ] Cut cards
-- [ ] Last minute merge from master
+- [ ] Last-minute merge from master
 - [ ] Check that all looks good
 - [ ] DELIVER!
-- [ ] Shutdown VMs
+- [ ] Shut down VMs
 - [ ] Update index.html to remove chat link and move session to past things
```
README.md (28 changes)

```diff
@@ -292,15 +292,31 @@ If there is a bug and you can't even reproduce it:
 sorry. It is probably an Heisenbug. We can't act on it
 until it's reproducible, alas.
 
-If you have attended this workshop and have feedback,
-or if you want somebody to deliver that workshop at your
-conference or for your company: you can contact one of us!
+# “Please teach us!”
 
-- jerome at docker dot com
+If you have attended one of these workshops, and want
+your team or organization to attend a similar one, you
+can look at the list of upcoming events on
+http://container.training/.
+
+You are also welcome to reuse these materials to run
+your own workshop, for your team or even at a meetup
+or conference. In that case, you might enjoy watching
+[Bridget Kromhout's talk at KubeCon 2018 Europe](
+https://www.youtube.com/watch?v=mYsp_cGY2O0), explaining
+precisely how to run such a workshop yourself.
+
+Finally, you can also contact the following persons,
+who are experienced speakers, are familiar with the
+material, and are available to deliver these workshops
+at your conference or for your company:
+
+- jerome dot petazzoni at gmail dot com
 - bret at bretfisher dot com
 
-If you are willing and able to deliver such workshops,
-feel free to submit a PR to add your name to that list!
+(If you are willing and able to deliver such workshops,
+feel free to submit a PR to add your name to that list!)
 
 **Thank you!**
```
```diff
@@ -28,5 +28,5 @@ def rng(how_many_bytes):
 
 
 if __name__ == "__main__":
-    app.run(host="0.0.0.0", port=80)
+    app.run(host="0.0.0.0", port=80, threaded=False)
```
```diff
@@ -1,4 +1,4 @@
-# Trainer tools to create and prepare VMs for Docker workshops on AWS
+# Trainer tools to create and prepare VMs for Docker workshops on AWS or Azure
 
 ## Prerequisites
 
@@ -14,8 +14,9 @@ And if you want to generate printable cards:
 ## General Workflow
 
 - fork/clone repo
-- set required environment variables for AWS
+- set required environment variables
 - create your own setting file from `settings/example.yaml`
+- if necessary, increase allowed open files: `ulimit -Sn 10000`
 - run `./workshopctl` commands to create instances, install docker, setup each users environment in node1, other management tasks
 - run `./workshopctl cards` command to generate PDF for printing handouts of each users host IP's and login info
 
@@ -102,7 +103,7 @@ wrap Run this program in a container
 - Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
 - If it errors or times out, you should be able to rerun
 - Requires good connection to run all the parallel SSH connections, up to 100 parallel (ProTip: create dedicated management instance in same AWS region where you run all these utils from)
-- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
+- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
 - Run `./workshopctl cards TAG settings/somefile.yaml` generates PDF/HTML files to print and cut and hand out to students
 - *Have a great workshop*
 - Run `./workshopctl stop TAG` to terminate instances.
@@ -209,7 +210,7 @@ The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed
 
 #### Pre-pull images
 
-    $ ./workshopctl pull-images TAG
+    $ ./workshopctl pull_images TAG
 
 #### Generate cards
 
```
```diff
@@ -7,7 +7,6 @@ services:
     working_dir: /root/prepare-vms
     volumes:
     - $HOME/.aws/:/root/.aws/
-    - /etc/localtime:/etc/localtime:ro
     - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
     - $PWD/:/root/prepare-vms/
     environment:
```
```diff
@@ -48,7 +48,7 @@ _cmd_cards() {
     rm -f ips.html ips.pdf
 
     # This will generate two files in the base dir: ips.pdf and ips.html
-    python lib/ips-txt-to-html.py $SETTINGS
+    lib/ips-txt-to-html.py $SETTINGS
 
     for f in ips.html ips.pdf; do
         # Remove old versions of cards if they exist
@@ -132,7 +132,7 @@ _cmd_kube() {
         sudo apt-key add - &&
         echo deb http://apt.kubernetes.io/ kubernetes-xenial main |
         sudo tee /etc/apt/sources.list.d/kubernetes.list"
-    pssh "
+    pssh --timeout 200 "
         sudo apt-get update -q &&
         sudo apt-get install -qy kubelet kubeadm kubectl
         kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
@@ -177,7 +177,9 @@ _cmd_kubetest() {
     # Feel free to make that better ♥
     pssh "
     set -e
+    [ -f /tmp/node ]
     if grep -q node1 /tmp/node; then
+      which kubectl
       for NODE in \$(awk /\ node/\ {print\ \\\$2} /etc/hosts); do
         echo \$NODE ; kubectl get nodes | grep -w \$NODE | grep -w Ready
       done
```
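The kubetest hunk above gates the cluster checks on a marker file that records the node's name, and fails fast (via `set -e` and the new `[ -f /tmp/node ]` test) if the file is missing. A standalone sketch of that gating pattern, using a temp file rather than the real `/tmp/node`:

```shell
#!/bin/sh
set -e

# Simulated marker file (the workshop VMs use /tmp/node).
marker=$(mktemp)
echo "node1" > "$marker"

[ -f "$marker" ]                  # fail fast if the marker is missing
if grep -q node1 "$marker"; then
    echo "this is node1: run the kubectl checks here"
else
    echo "not node1: nothing to do"
fi
rm -f "$marker"
```

With `set -e`, a missing marker file aborts the whole script before any kubectl call runs, which is exactly what the added `[ -f /tmp/node ]` line buys.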
```diff
@@ -391,9 +393,23 @@ pull_tag() {
         ubuntu:latest \
         fedora:latest \
         centos:latest \
+        elasticsearch:2 \
         postgres \
         redis \
+        alpine \
+        registry \
+        nicolaka/netshoot \
+        jpetazzo/trainingwheels \
+        golang \
         training/namer \
+        dockercoins/hasher \
+        dockercoins/rng \
+        dockercoins/webui \
+        dockercoins/worker \
+        logstash \
+        prom/node-exporter \
+        google/cadvisor \
+        dockersamples/visualizer \
         nathanleclaire/redisonrails; do
         sudo -u docker docker pull $I
     done'
```
```diff
@@ -45,7 +45,7 @@ def system(cmd):
 
 # On EC2, the ephemeral disk might be mounted on /mnt.
 # If /mnt is a mountpoint, place Docker workspace on it.
-system("if mountpoint -q /mnt; then sudo mkdir /mnt/docker && sudo ln -s /mnt/docker /var/lib/docker; fi")
+system("if mountpoint -q /mnt; then sudo mkdir -p /mnt/docker && sudo ln -sfn /mnt/docker /var/lib/docker; fi")
 
 # Put our public IP in /tmp/ipv4
 # ipv4_retrieval_endpoint = "http://169.254.169.254/latest/meta-data/public-ipv4"
@@ -108,7 +108,7 @@ system("sudo chmod +x /usr/local/bin/docker-machine")
 system("docker-machine version")
 
 system("sudo apt-get remove -y --purge dnsmasq-base")
-system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
+system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh tree")
 
 ### Wait for Docker to be up.
 ### (If we don't do this, Docker will not be responsive during the next step.)
```
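The switch from `mkdir`/`ln -s` to `mkdir -p`/`ln -sfn` makes the postprep step idempotent: re-running it no longer fails when the directory or symlink already exists. A minimal sketch of the difference, run in a temp directory rather than the real `/mnt`:

```shell
#!/bin/sh
set -e

work=$(mktemp -d)
cd "$work"

mkdir -p docker-workspace
mkdir -p docker-workspace                  # second run: no error, unlike plain mkdir

ln -sfn docker-workspace var-lib-docker
ln -sfn docker-workspace var-lib-docker    # second run: link replaced in place, no error

readlink var-lib-docker                    # prints: docker-workspace

cd / && rm -rf "$work"
```

The `-n` flag matters when the link target is a directory: without it, a second `ln -sf` would create a new link *inside* the existing symlinked directory instead of replacing the link.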
```diff
@@ -17,8 +17,8 @@ paper_margin: 0.2in
 # (The equivalent parameters must be set from the browser's print dialog.)
 
 # This can be "test" or "stable"
-engine_version: test
+engine_version: stable
 
 # These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.17.1
-machine_version: 0.13.0
+compose_version: 1.21.1
+machine_version: 0.14.0
```
```diff
@@ -17,8 +17,8 @@ paper_margin: 0.2in
 # (The equivalent parameters must be set from the browser's print dialog.)
 
 # This can be "test" or "stable"
-engine_version: test
+engine_version: stable
 
 # These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.18.0
-machine_version: 0.13.0
+compose_version: 1.21.1
+machine_version: 0.14.0
```
```diff
@@ -1,7 +1,7 @@
 # This file is passed by trainer-cli to scripts/ips-txt-to-html.py
 
 # Number of VMs per cluster
-clustersize: 5
+clustersize: 3
 
 # Jinja2 template to use to generate ready-to-cut cards
 cards_template: cards.html
@@ -17,8 +17,8 @@ paper_margin: 0.2in
 # (The equivalent parameters must be set from the browser's print dialog.)
 
 # This can be "test" or "stable"
-engine_version: test
+engine_version: stable
 
 # These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.17.1
-machine_version: 0.13.0
+compose_version: 1.21.1
+machine_version: 0.14.0
```
```diff
@@ -1 +1,2 @@
-/* http://paris-container-training.netlify.com/:splat 200!
+/ /kube-90min.yml.html 200!
+
```
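For context, each line of a Netlify `_redirects` file is `<from> <to> <status>`; status `200` makes it a rewrite (the browser URL does not change), and a trailing `!` forces the rule even if a file exists at the source path. The new rule therefore serves `kube-90min.yml.html` at the site root. A small sketch of the format (the second rule is a made-up example, not from this repo):

```
# _redirects: one rule per line
/           /kube-90min.yml.html   200!
/old-slides /new-slides            301
```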
```diff
@@ -19,6 +19,9 @@ logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"))
 
 TIMEOUT = 60 # 1 minute
 
+# This one is not a constant. It's an ugly global.
+IPADDR = None
+
 
 class State(object):
 
@@ -163,6 +166,9 @@ def wait_for_prompt():
         last_line = output.split('\n')[-1]
         # Our custom prompt on the VMs has two lines; the 2nd line is just '$'
         if last_line == "$":
+            # This is a perfect opportunity to grab the node's IP address
+            global IPADDR
+            IPADDR = re.findall("^\[(.*)\]", output, re.MULTILINE)[-1]
             return
         # When we are in an alpine container, the prompt will be "/ #"
         if last_line == "/ #":
@@ -397,8 +403,7 @@ while True:
     elif method == "open":
         # Cheap way to get node1's IP address
         screen = capture_pane()
-        ipaddr = re.findall("^\[(.*)\]", screen, re.MULTILINE)[-1]
-        url = data.replace("/node1", "/{}".format(ipaddr))
+        url = data.replace("/node1", "/{}".format(IPADDR))
         # This should probably be adapted to run on different OS
         subprocess.check_output(["xdg-open", url])
         focus_browser()
```
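The `IPADDR` capture above works because the custom VM prompt prints the node's address in square brackets at the start of a line, and the script keeps the *last* such match (the most recent prompt). The same idea with POSIX tools, on made-up sample terminal output:

```shell
#!/bin/sh
set -e

# Simulated terminal capture; the bracketed addresses are invented.
output='[10.0.0.4] node1 ~
$ echo hello
hello
[52.10.99.7] node1 ~
$'

# Extract every leading "[...]" group, keep the last one.
ipaddr=$(printf '%s\n' "$output" | sed -n 's/^\[\(.*\)\].*/\1/p' | tail -n 1)
echo "$ipaddr"   # prints: 52.10.99.7
```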
```diff
@@ -1,6 +1,8 @@
 #!/bin/sh
+set -e
 case "$1" in
 once)
+    ./index.py
     for YAML in *.yml; do
         ./markmaker.py $YAML > $YAML.html || {
             rm $YAML.html
@@ -15,6 +17,13 @@ once)
     ;;
 
 forever)
+    set +e
+    # check if entr is installed
+    if ! command -v entr >/dev/null; then
+        echo >&2 "First install 'entr' with apt, brew, etc."
+        exit
+    fi
+
     # There is a weird bug in entr, at least on MacOS,
     # where it doesn't restore the terminal to a clean
     # state when exitting. So let's try to work around
```
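The dependency check added to the build script relies on `command -v`, which succeeds only when the named tool is on `$PATH`. A generic sketch of that guard (the `require` helper is a name invented for this example, not part of the repo):

```shell
#!/bin/sh
# Bail out early with a helpful message if a required tool is missing.
require() {
    if ! command -v "$1" >/dev/null; then
        echo >&2 "First install '$1' with apt, brew, etc."
        exit 1
    fi
}

require sh        # `sh` is always present, so this passes
echo "all required tools found"
```

`command -v` is the portable POSIX spelling; `which` (used elsewhere in this compare) behaves similarly but is not standardized.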
````diff
@@ -2,7 +2,7 @@
 
 - All the content is available in a public GitHub repository:
 
-  https://github.com/jpetazzo/container.training
+  https://@@GITREPO@@
 
 - You can get updated "builds" of the slides there:
 
@@ -10,7 +10,7 @@
 
 <!--
 .exercise[
-```open https://github.com/jpetazzo/container.training```
+```open https://@@GITREPO@@```
 ```open http://container.training/```
 ]
 -->
@@ -23,6 +23,26 @@
 
 <!--
 .exercise[
-```open https://github.com/jpetazzo/container.training/tree/master/slides/common/about-slides.md```
+```open https://@@GITREPO@@/tree/master/slides/common/about-slides.md```
 ]
 -->
 
+---
+
+class: extra-details
+
+## Extra details
+
+- This slide has a little magnifying glass in the top left corner
+
+- This magnifying glass indicates slides that provide extra details
+
+- Feel free to skip them if:
+
+  - you are in a hurry
+
+  - you are new to this and want to avoid cognitive overload
+
+  - you want only the most essential information
+
+- You can review these slides another time if you want, they'll be waiting for you ☺
````
````diff
@@ -49,26 +49,6 @@ Tip: use `^S` and `^Q` to pause/resume log output.
 
 ---
 
-class: extra-details
-
-## Upgrading from Compose 1.6
-
-.warning[The `logs` command has changed between Compose 1.6 and 1.7!]
-
-- Up to 1.6
-
-  - `docker-compose logs` is the equivalent of `logs --follow`
-
-  - `docker-compose logs` must be restarted if containers are added
-
-- Since 1.7
-
-  - `--follow` must be specified explicitly
-
-  - new containers are automatically picked up by `docker-compose logs`
-
----
-
 ## Scaling up the application
 
 - Our goal is to make that performance graph go up (without changing a line of code!)
@@ -126,7 +106,7 @@ We have available resources.
 
 - Start one more `worker` container:
   ```bash
-  docker-compose scale worker=2
+  docker-compose up -d --scale worker=2
   ```
 
 - Look at the performance graph (it should show a x2 improvement)
@@ -147,7 +127,7 @@ We have available resources.
 
 - Start eight more `worker` containers:
   ```bash
-  docker-compose scale worker=10
+  docker-compose up -d --scale worker=10
  ```
 
 - Look at the performance graph: does it show a x10 improvement?
````
```diff
@@ -8,7 +8,7 @@
 
 - Imperative:
 
-  *Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in cup.*
+  *Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.*
 
 --
 
```
```diff
@@ -1,66 +1,4 @@
-# Pre-requirements
+## Hands-on
 
-- Be comfortable with the UNIX command line
-
-  - navigating directories
-
-  - editing files
-
-  - a little bit of bash-fu (environment variables, loops)
-
-- Some Docker knowledge
-
-  - `docker run`, `docker ps`, `docker build`
-
-  - ideally, you know how to write a Dockerfile and build it
-    <br/>
-    (even if it's a `FROM` line and a couple of `RUN` commands)
-
-- It's totally OK if you are not a Docker expert!
-
----
-
-class: extra-details
-
-## Extra details
-
-- This slide has a little magnifying glass in the top left corner
-
-- This magnifiying glass indicates slides that provide extra details
-
-- Feel free to skip them if:
-
-  - you are in a hurry
-
-  - you are new to this and want to avoid cognitive overload
-
-  - you want only the most essential information
-
-- You can review these slides another time if you want, they'll be waiting for you ☺
-
----
-
-class: title
-
-*Tell me and I forget.*
-<br/>
-*Teach me and I remember.*
-<br/>
-*Involve me and I learn.*
-
-Misattributed to Benjamin Franklin
-
-[(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/)
-
----
-
-## Hands-on sections
-
-- The whole workshop is hands-on
-
-- We are going to build, ship, and run containers!
-
-- You are invited to reproduce all the demos
-
 - All hands-on sections are clearly identified, like the gray rectangle below
```
@@ -68,55 +6,12 @@ Misattributed to Benjamin Franklin
|
|||||||
|
|
||||||
- This is the stuff you're supposed to do!
|
- This is the stuff you're supposed to do!
|
||||||
|
|
||||||
- Go to [container.training](http://container.training/) to view these slides
|
- Go to @@SLIDES@@ to view these slides
|
||||||
|
|
||||||
- Join the chat room: @@CHAT@@
|
|
||||||
|
|
||||||
<!-- ```open http://container.training/``` -->
|
|
||||||
|
|
||||||
]
|
]
|
||||||
|
|
||||||
---

class: in-person

## Where are we going to run our containers?

---

class: in-person, pic

![You get five VMs](images/you-get-five-vms.jpg)

---

class: in-person

## You get a cluster of cloud VMs

- Each person gets a private cluster of cloud VMs (not shared with anybody else)

- They'll remain up for the duration of the workshop

- You should have a little card with login+password+IP addresses

- You can automatically SSH from one VM to another

- The nodes have aliases: `node1`, `node2`, etc.

---

class: in-person

## Why don't we run containers locally?

- Installing that stuff can be hard on some machines

  (32-bit CPU or OS... Laptops without administrator access... etc.)

- *"The whole team downloaded all these container images from the WiFi!
  <br/>... and it went great!"* (Literally no-one ever)

- All you need is a computer (or even a phone or tablet!), with:

  - an internet connection

---

class: in-person

## SSH clients

- On Linux, OS X, FreeBSD... you are probably all set

- On Windows, get one of these:

  - [putty](http://www.putty.org/)
  - Microsoft [Win32 OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH)
  - [Git BASH](https://git-for-windows.github.io/)
  - [MobaXterm](http://mobaxterm.mobatek.net/)

- On Android, [JuiceSSH](https://juicessh.com/)
  ([Play Store](https://play.google.com/store/apps/details?id=com.sonelli.juicessh))
  works pretty well

- Nice-to-have: [Mosh](https://mosh.org/) instead of SSH, if your internet connection tends to lose packets
  <br/>(available with `(apt|yum|brew) install mosh`; then connect with `mosh user@host`)

---

class: in-person

## Connecting to our lab environment

.exercise[

- Log into the first VM (`node1`) with your SSH client

<!--
```bash
for N in $(awk '/node/{print $2}' /etc/hosts); do
  ssh -o StrictHostKeyChecking=no node$N true
done
```

```bash
if which kubectl; then
  kubectl get all -o name | grep -v services/kubernetes | xargs -n1 kubectl delete
fi
```
-->

- Check that you can SSH (without password) to `node2`:
  ```bash
  ssh node2
  ```

- Type `exit` or `^D` to come back to `node1`

<!-- ```bash exit``` -->

]

If anything goes wrong — ask for help!

---

## Doing or re-doing the workshop on your own?

- Use something like
  [Play-With-Docker](http://play-with-docker.com/) or
  [Play-With-Kubernetes](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)

  Zero setup effort; but environments are short-lived and
  might have limited resources

- Create your own cluster (local or cloud VMs)

  Small setup effort; small cost; flexible environments

- Create a bunch of clusters for you and your friends
  ([instructions](https://github.com/jpetazzo/container.training/tree/master/prepare-vms))

  Bigger setup effort; ideal for group training

---

class: self-paced

## Get your own Docker nodes

- If you already have some Docker nodes: great!

- If not: let's get some, thanks to Play-With-Docker

.exercise[

- Go to http://www.play-with-docker.com/

- Log in

- Create your first node

<!-- ```open http://www.play-with-docker.com/``` -->

]

You will need a Docker ID to use Play-With-Docker.

(Creating a Docker ID is free.)

---

## We will (mostly) interact with node1 only

*These remarks apply only when using multiple nodes, of course.*

- Unless instructed, **all commands must be run from the first VM, `node1`**

- We will only checkout/copy the code on `node1`

- During normal operations, we do not need access to the other nodes

- If we had to troubleshoot issues, we would use a combination of:

  - SSH (to access system logs, daemon status...)

  - Docker API (to check running containers and container engine status)

---

## Terminals

Once in a while, the instructions will say:
<br/>"Open a new terminal."

There are multiple ways to do this:

- create a new window or tab on your machine, and SSH into the VM;

- use screen or tmux on the VM and open a new window from there.

You are welcome to use the method that you feel the most comfortable with.

---

## Tmux cheatsheet

- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
- Ctrl-b " → split window top/bottom
- Ctrl-b % → split window left/right
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b arrows → navigate to other windows
- Ctrl-b d → detach session
- tmux attach → reattach to session

# Our sample application

- We will clone the GitHub repository onto our `node1`

- The repository also contains scripts and tools that we will use through the workshop

.exercise[

<!--
```bash
if [ -d container.training ]; then
  mv container.training container.training.$$
fi
```
-->

- Clone the repository on `node1`:
  ```bash
  git clone git://@@GITREPO@@
  ```

]

(You can also fork the repository on GitHub and clone your fork if you prefer that.)

---

## Downloading and running the application

Let's start this before we look around, as downloading will take a little time...

.exercise[

- Go to the `dockercoins` directory, in the cloned repo:
  ```bash
  cd ~/container.training/dockercoins
  ```

- Use Compose to build and run all containers:
  ```bash
  docker-compose up
  ```

<!--
```longwait units of work done```
-->

]

Compose tells Docker to build all container images (pulling
the corresponding base images), then starts all containers,
and displays aggregated logs.

---

## More detail on our sample application

- Visit the GitHub repository with all the materials of this workshop:
  <br/>https://@@GITREPO@@

- The application is in the [dockercoins](
  https://@@GITREPO@@/tree/master/dockercoins)
  subdirectory

- Let's look at the general layout of the source code:

  there is a Compose file [docker-compose.yml](
  https://@@GITREPO@@/blob/master/dockercoins/docker-compose.yml) ...

  ... and 4 other services, each in its own directory:
---

## Service discovery in container-land

- We do not hard-code IP addresses in the code

- We do not hard-code FQDNs in the code, either

- We just connect to a service name, and container-magic does the rest

(And by container-magic, we mean "a crafty, dynamic, embedded DNS server")
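
Under the hood this is an ordinary hostname lookup; a minimal sketch (using `localhost` as a stand-in, since a name like `redis` only resolves inside the container network):

```python
import socket

# A Compose service name such as "redis" resolves through the embedded
# DNS server exactly like any other hostname. We resolve "localhost"
# here as a stand-in so the snippet runs outside a container too;
# inside a container you would call socket.gethostbyname("redis").
ip = socket.gethostbyname("localhost")
print(ip)
```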
---

## Example in `worker/worker.py`

```python
redis = Redis("`redis`")


def get_random_bytes():
    r = requests.get("http://`rng`/32")
    return r.content


def hash_bytes(data):
    r = requests.post("http://`hasher`/",
                      data=data,
                      headers={"Content-Type": "application/octet-stream"})
```

(Full source code available [here](
https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
))

---

class: extra-details

## Links, naming, and service discovery

- Containers can have network aliases (resolvable through DNS)

- Compose file version 2+ makes each container reachable through its service name

- Compose file version 1 did require "links" sections

- Network aliases are automatically namespaced

  - you can have multiple apps declaring and using a service named `database`

  - containers in the blue app will resolve `database` to the IP of the blue database

  - containers in the green app will resolve `database` to the IP of the green database
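
For instance, two Compose projects could each declare a service named `database`; a hypothetical sketch (not one of the workshop's files):

```yaml
# docker-compose.yml for the "blue" project (hypothetical example);
# the "green" project can use an identical file.
version: "2"
services:
  database:
    image: redis
  web:
    image: nginx
    # From this container, "database" resolves to *this* project's
    # database, because each Compose project gets its own network
    # (and therefore its own DNS namespace).
```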
---

## What's this application?

--


---

## Our application at work

- On the left-hand side, the "rainbow strip" shows the container names
Some containers exit immediately, others take longer.

The containers that do not handle `SIGTERM` end up being killed after a 10s timeout. If we are very impatient, we can hit `^C` a second time!

---

class: title, in-person

*Don't stream videos or download big files during the workshop.*<br/>
*Thank you!*

**Slides: @@SLIDES@@**

]

slides/count-slides.py (new executable file, 57 lines):

```python
#!/usr/bin/env python

import re
import sys

PREFIX = "name: toc-"
EXCLUDED = ["in-person"]


class State(object):
    def __init__(self):
        self.current_slide = 1
        self.section_title = None
        self.section_start = 0
        self.section_slides = 0
        self.chapters = {}
        self.sections = {}

    def show(self):
        if self.section_title.startswith("chapter-"):
            return
        print("{0.section_title}\t{0.section_start}\t{0.section_slides}".format(self))
        self.sections[self.section_title] = self.section_slides


state = State()

for line in open(sys.argv[1]):
    line = line.rstrip()
    if line.startswith(PREFIX):
        if state.section_title is None:
            print("{}\t{}\t{}".format("title", "index", "size"))
        else:
            state.show()
        state.section_title = line[len(PREFIX):].strip()
        state.section_start = state.current_slide
        state.section_slides = 0
    if line == "---":
        state.current_slide += 1
        state.section_slides += 1
    if line == "--":
        state.current_slide += 1
    toc_links = re.findall(r"\(#toc-(.*)\)", line)
    if toc_links and state.section_title and state.section_title.startswith("chapter-"):
        if state.section_title not in state.chapters:
            state.chapters[state.section_title] = []
        state.chapters[state.section_title].append(toc_links[0])
    # This is really hackish
    if line.startswith("class:"):
        for klass in EXCLUDED:
            if klass in line:
                state.section_slides -= 1
                state.current_slide -= 1

state.show()

for chapter in sorted(state.chapters):
    chapter_size = sum(state.sections[s] for s in state.chapters[chapter])
    print("{}\t{}\t{}".format("total size for", chapter, chapter_size))
```
New image files under slides/images/: binpacking-1d-1.gif (9.4 KiB), binpacking-1d-2.gif (7.8 KiB), binpacking-2d.gif (11 KiB), binpacking-3d.gif (15 KiB), bridge1.png (30 KiB), bridge2.png (30 KiB), conductor.jpg (53 KiB), container-layers.jpg (45 KiB), demo.jpg (178 KiB).
213
slides/images/docker-con-15-logo.svg
Normal file
@@ -0,0 +1,213 @@
|
|||||||
|
<?xml version="1.0" encoding="utf-8"?>
|
||||||
|
<!-- Generator: Adobe Illustrator 18.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
|
||||||
|
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
|
||||||
|
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
|
||||||
|
viewBox="0 0 445 390" enable-background="new 0 0 445 390" xml:space="preserve">
|
||||||
|
<g>
|
||||||
|
<path fill="#3A4D54" d="M158.8,352.2h-25.9c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9h-19c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9
|
||||||
|
h25.3c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9h-15.9c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9h6.8c3.2,0,5.8-2.6,5.8-5.9
|
||||||
|
c0-3.2-2.6-5.9-5.8-5.9H64.9c-0.1,0-0.3,0-0.4,0c3,0.2,5.4,2.7,5.4,5.9c0,3.1-2.4,5.7-5.4,5.9c0.1,0,0.3,0,0.4,0h-0.8h-6.1
|
||||||
|
c-3.2,0-5.8,2.6-5.8,5.9s2.6,5.9,5.8,5.9H74h3.7c3.2,0,5.8,2.6,5.8,5.9c0,3.2-2.6,5.9-5.8,5.9H74H47.9c-3.2,0-5.8,2.6-5.8,5.9
|
||||||
|
s2.6,5.9,5.8,5.9h44.8H93c0,0-0.1,0-0.1,0c3.1,0.1,5.6,2.7,5.6,5.9c0,3.2-2.5,5.8-5.6,5.9c0,0,0.1,0,0.1,0h-0.2
|
||||||
|
c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9h66c3.2,0,5.8-2.6,5.8-5.9C164.6,354.8,162,352.2,158.8,352.2z"/>
|
||||||
|
<circle fill="#FBBF45" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" cx="214.6" cy="124.2" r="68.7"/>
|
||||||
|
<circle fill="#3A4D54" cx="367.5" cy="335.5" r="5.9"/>
|
||||||
|
<g>
|
||||||
|
<polygon fill="#E8593A" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" points="116.1,199.1 116.1,214.6 302.9,214.5
|
||||||
|
302.9,199.1 "/>
|
||||||
|
<rect x="159.4" y="78.6" fill="#3A4D54" width="4.2" height="50.4"/>
|
||||||
|
<rect x="174.5" y="93.8" fill="#3A4D54" width="4.2" height="35.1"/>
|
||||||
|
<rect x="280.2" y="108.2" fill="#3A4D54" width="4.2" height="20.8"/>
|
||||||
|
<rect x="190.2" y="106.9" fill="#3A4D54" width="4.2" height="22"/>
|
||||||
|
<rect x="143.3" y="59.8" fill="#3A4D54" width="4.2" height="69.1"/>
|
||||||
|
<path fill="#3A4D54" d="M294.3,107.9c3.5-2.3,6.9-4.8,10.4-7.4V87.7c-5.2,4.3-10.6,8.2-15.9,11.6c-7.8,4.9-15.1,8.5-22.4,11
|
||||||
|
c-7.9,2.8-15.7,4.3-23.4,4.7c-7.6,0.3-15.3-0.5-22.8-2.6c-6.9-1.9-13.7-4.7-20.4-8.6C188.8,97.5,178.4,89,168,77.6
|
||||||
|
c-7.7-8.4-14.7-17.7-21.6-28.2c-5-7.8-9.6-15.8-13.6-23.9c-4-8.1-6.1-13.5-6.9-16c-0.7-1.8-1-3.1-1.2-3.8l0-0.1l0.1-2.7l-0.5,0
|
||||||
|
l0-0.1H123l-8.1-0.6l-3.1-0.1l-0.1,3.4l0,0.4c0,1.2,0.2,1.9,0.3,2.5l0,0.1c0.3,1.4,0.9,3.2,1.7,5.3c1.2,3.4,3.6,9.1,7.7,17.2
|
||||||
|
c4.3,8.4,9.2,16.8,14.6,25c7.3,11.1,14.9,20.8,23.2,29.6c11.4,12.1,22.9,21.3,35.1,28.1c7.6,4.2,15.4,7.4,23.2,9.4
|
||||||
|
c7,1.8,14.2,2.7,21.4,2.7c0,0,0,0,0,0c1.6,0,3.2,0,4.7-0.1c8.7-0.5,17.6-2.4,26.4-5.6 M141.1,52.8c-5.2-7.9-10-16.1-14.2-24.4
|
||||||
|
c-4-7.9-6.3-13.4-7.5-16.6c-0.5-1.3-0.8-2.4-1.1-3.3l1,0.1c0.3,0.9,0.6,1.9,1,2.9c1.6,4.5,4.2,10.4,7.2,16.6
|
||||||
|
c4.1,8.3,8.8,16.5,13.9,24.5c5.5,8.5,11.1,16.2,17.1,23.3C152.4,68.9,146.7,61.3,141.1,52.8z"/>
|
||||||
|
<path fill="#E8593A" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" d="M340.9,53h-7.9h-4.3v8.2h-19.4V53h-4.3h-7.9
|
||||||
|
h-4.3v8.2v2.7v186.7c0,0.8,0.6,1.4,1.3,1.4h3h42.4h4.3c0.7,0,1.3-0.6,1.3-1.4V62v-0.8V53H340.9z M334.8,206.6h-31.5V152
|
||||||
|
c0-0.4,0.3-0.7,0.6-0.7h30.2c0.4,0,0.6,0.3,0.6,0.7V206.6z M334.8,142.1h-31.5V125c0-0.4,0.3-0.7,0.6-0.7h30.2
|
||||||
|
c0.4,0,0.6,0.3,0.6,0.7V142.1z M334.8,115.1h-31.5V97.9c0-0.4,0.3-0.7,0.6-0.7h30.2c0.4,0,0.6,0.3,0.6,0.7V115.1z M334.8,88h-31.5
|
||||||
|
V70.9c0-0.4,0.3-0.7,0.6-0.7h30.2c0.4,0,0.6,0.3,0.6,0.7V88z"/>
|
||||||
|
<polygon fill="#E8593A" points="272.2,203 286.7,201.1 297.2,201.1 297.2,214.6 271.7,214.6 "/>
|
||||||
|
<path fill="#E8593A" d="M298.7,96.2c-2.7,2-5.5,3.9-8.3,5.7c-7.3,4.6-15,8.5-23,11.3c-7.9,2.8-16.1,4.5-24.3,4.8
|
||||||
|
c-8.1,0.4-16.1-0.6-23.7-2.7c-7.6-2-14.6-5.1-21.1-8.9c-13-7.5-23.7-17.1-32.6-26.8c-8.9-9.8-16-19.6-21.9-28.6
|
||||||
|
c-5.8-9-10.3-17.3-13.7-24.2c-3.4-6.9-5.7-12.5-7.1-16.3c-0.7-1.9-1.1-3.3-1.3-4.2c-0.1-0.4-0.1-0.7-0.1-0.4l0,0.1
|
||||||
|
c0,0,0-0.1,0-0.1c0-0.1,0-0.1,0-0.1c0-0.1,0-0.1,0-0.1l-7-0.5c0,0,0,0,0,0.1c0,0,0,0.1,0,0.1c0,0,0,0.1,0,0.1c0,0.1,0,0.2,0,0.3
|
||||||
|
c0,0.9,0.1,1.4,0.3,2.1c0.3,1.3,0.8,2.9,1.6,5c1.5,4.1,4,9.8,7.6,16.9c3.6,7.1,8.3,15.5,14.4,24.7c6.1,9.2,13.5,19.2,22.9,29.2
|
||||||
|
c9.3,9.9,20.5,19.8,34.3,27.5c6.9,3.8,14.4,7,22.5,9.1c8,2.1,16.6,3,25.2,2.5c8.6-0.5,17.3-2.4,25.5-5.4c8.3-3,16.2-7.2,23.7-12
|
||||||
|
c2-1.3,4.1-2.7,6-4.2V96.2z"/>
|
||||||
|
<path fill="#E8593A" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" d="M122.9,4.2h-3.2h-6.6v11.7H66.1V4.2h-4.6h-6.2
|
||||||
|
h-6.6v11.7v3.8v265.1c0,1.1,0.9,2,2,2h4.6h65.7h6.6c1.1,0,2-0.9,2-2V17v-1.1V4.2H122.9z M113.5,204.2H64.7v-59.4c0-0.6,0.4-1,1-1
|
||||||
|
h46.7c0.6,0,1,0.4,1,1V204.2z M113.5,130.8H64.7v-24.3c0-0.6,0.4-1,1-1h46.7c0.6,0,1,0.4,1,1V130.8z M113.5,92.4H64.7V68.1
|
||||||
|
c0-0.6,0.4-1,1-1h46.7c0.6,0,1,0.4,1,1V92.4z M113.5,54H64.7V29.7c0-0.6,0.4-1,1-1h46.7c0.6,0,1,0.4,1,1V54z"/>
|
||||||
|
<g>
|
||||||
|
<g>
|
||||||
|
<path fill="#2BB8EB" stroke="#3A4D54" stroke-width="5" stroke-miterlimit="10" d="M435.8,132.9H364c-1.4,0-2.6,1.3-2.6,3v44.2
|
||||||
|
c0,1.7,1.2,3,2.6,3h71.8c2.5,0,3.6-3.7,1.5-5.4l-11.4-13.5c-3.2-3.3-3.2-9,0-12.3l11.4-13.5
|
||||||
|
C439.3,136.6,438.3,132.9,435.8,132.9z"/>
|
||||||
|
<path fill="#FFFFFF" stroke="#3A4D54" stroke-width="5" stroke-miterlimit="10" d="M9.8,183.1h129.7c1.4,0,2.6-1.3,2.6-3v-44.2
|
||||||
|
c0-1.7-1.2-3-2.6-3H9.8c-2.5,0-3.6,3.7-1.5,5.4l11.4,13.5c3.2,3.3,3.2,9,0,12.3L8.3,177.7C6.2,179.4,7.3,183.1,9.8,183.1z"/>
|
||||||
|
<path fill="#FFFFFF" stroke="#3A4E55" stroke-width="5" stroke-miterlimit="10" d="M402.5,190H42.1c-3.6,0-6.5-1.1-6.5-4.6
|
||||||
|
v-54.7c0-3.6,2.9-6.5,6.5-6.5h360.4c3.6,0,6.5,2.9,6.5,6.5v52.9C409,187.1,406.1,190,402.5,190z"/>
|
||||||
|
<path fill="#2BB8EB" d="M402.5,124.2h-46.3V190h46.3c3.6,0,6.5-2.9,6.5-6.5v-52.9C409,127.1,406.1,124.2,402.5,124.2z"/>
|
||||||
|
<g>
|
||||||
|
<path fill="#FFFFFF" d="M376.2,144.3v21.3c0,1.1-0.9,2-2,2c-1.1,0-2-0.9-2-2v-17.8l-1.4,0.8c-0.3,0.2-0.7,0.3-1,0.3
|
||||||
|
c-0.7,0-1.3-0.4-1.7-1c-0.6-0.9-0.3-2.2,0.7-2.7l4.4-2.6c0,0,0.1,0,0.1-0.1c0.1,0,0.1-0.1,0.2-0.1c0.1,0,0.1,0,0.2,0
|
||||||
|
c0,0,0.1,0,0.1,0c0.1,0,0.2,0,0.3,0c0,0,0.1,0,0.1,0h0c0.1,0,0.2,0,0.3,0c0,0,0.1,0,0.1,0c0.1,0,0.1,0,0.2,0.1c0,0,0.1,0,0.1,0
|
||||||
|
c0.1,0.1,0.1,0.1,0.2,0.1c0,0,0.1,0.1,0.1,0.1c0,0,0.1,0.1,0.1,0.1c0.1,0,0.1,0.1,0.1,0.1c0,0,0.1,0.1,0.1,0.1
|
||||||
|
c0,0,0.1,0.1,0.1,0.1l0,0.1c0,0,0,0.1,0,0.1c0,0.1,0.1,0.1,0.1,0.2c0,0.1,0,0.1,0.1,0.2c0,0.1,0,0.1,0,0.2c0,0.1,0,0.2,0.1,0.3
|
||||||
|
C376.2,144.3,376.2,144.3,376.2,144.3z"/>
|
||||||
|
<path fill="#FFFFFF" d="M393.4,152.3c1.8,1.7,2.6,4.1,2.6,6.4c0,2.3-0.9,4.6-2.6,6.3c-1.7,1.8-4.1,2.6-6.3,2.6
|
||||||
|
c-0.1,0-0.1,0-0.1,0c-2.2,0-4.6-0.9-6.3-2.6c-0.8-0.8-0.8-2.1,0-2.9c0.8-0.8,2.1-0.8,2.9,0c0.9,1,2.2,1.4,3.5,1.4
|
||||||
|
c1.2,0,2.5-0.5,3.4-1.4c0.9-0.9,1.4-2.2,1.4-3.4c0-1.3-0.5-2.5-1.4-3.5c-0.9-1-2.2-1.4-3.4-1.4c-1.2,0-2.5,0.4-3.5,1.4
|
||||||
|
c-0.8,0.8-2.1,0.8-2.9,0c-0.1-0.1-0.3-0.3-0.4-0.5c0-0.1,0-0.1,0-0.1c0-0.1,0-0.1-0.1-0.2c0-0.1,0-0.2,0-0.3c0,0,0,0,0-0.1
|
||||||
|
c0-0.2,0-0.4,0-0.6l1.1-9.4c0.1-0.6,0.4-1.1,0.9-1.4c0.1,0,0.1,0,0.1-0.1c0,0,0.1,0,0.1-0.1c0.3-0.1,0.6-0.2,0.9-0.2h9.2
|
||||||
|
c1.2,0,2.1,0.9,2.1,2.1c0,1.1-0.9,2-2.1,2h-7.4l-0.4,3.6c0.8-0.2,1.6-0.3,2.4-0.3C389.4,149.7,391.7,150.6,393.4,152.3z"/>
|
||||||
|
</g>
|
||||||
|
<g>
|
||||||
|
<path fill="#3A4D54" d="M157.8,142.1L157.8,142.1l-0.9,0c-0.7,0-2.6,2-3,2.5c-1.7,1.7-3.5,3.4-5.2,5.1v-13.7
|
||||||
|
c0-1.2-0.8-2.2-2-2.2h-0.3c-1.3,0-2,1-2,2.2v29.9c0,1.2,0.8,2.2,2,2.2h0.3c1.3,0,2-1,2-2.2v-5.3l3.4,3.3c1,1,2,2,3,3
|
||||||
|
c0.5,0.5,1.3,1.3,2.1,1.3h0.4c1.1,0,1.8-0.8,2-1.8l0-0.1v-0.5c0-0.4-0.1-0.7-0.3-1c-0.2-0.3-0.5-0.6-0.7-0.8
|
||||||
|
c-0.6-0.7-1.2-1.3-1.9-1.9c-2.3-2.3-4.6-4.6-6.9-6.9l5.3-5.4c1-1.1,2.1-2.1,3.1-3.2c0.5-0.5,1.3-1.4,1.3-2.1V144
|
||||||
|
C159.6,142.9,158.9,142.3,157.8,142.1z"/>
|
||||||
|
<path fill="#3A4D54" d="M138.9,143.9l-0.2-0.1c-1.9-1.3-4.1-2-6.5-2h-0.9c-2.2,0-4.3,0.6-6.2,1.7c-4.1,2.4-6.5,6.2-6.5,11v0.9
|
||||||
|
c0,1.1,0.1,2.2,0.5,3.3c1.9,6.3,6.8,9.9,13.4,9.5c1.9-0.1,6.8-0.7,6.8-3.4v-0.4c0-1.1-0.8-1.7-1.8-1.9l-0.1,0h-0.8l-0.2,0.1
|
||||||
|
c-1.1,0.5-2.7,1.2-3.9,1.2c-1.3,0-2.9-0.1-4.2-0.7c-3.4-1.6-5.4-4.3-5.4-8c0-1.2,0.2-2.4,0.8-3.6c1.6-3.3,4.2-5.3,7.9-5.2
|
||||||
|
c0.7,0,2,0.1,2.6,0.4c0.6,0.3,2.1,1,2.7,1h0.3l0.1,0c1-0.2,1.9-0.8,1.9-1.9v-0.4c0-0.4-0.2-0.8-0.4-1.2L138.9,143.9z"/>
|
||||||
|
<path fill="#3A4D54" d="M85.2,133.7h-0.4c-1.3,0-2,1-2,2.2v9.3c-2.3-2-5.1-3.3-8.3-3.3h-0.9c-2.2,0-4.3,0.6-6.2,1.7
|
||||||
|
c-4.1,2.4-6.5,6.2-6.5,11v0.9c0,2.2,0.6,4.3,1.7,6.2c2.4,4.1,6.2,6.5,11,6.5h0.9c2.2,0,4.3-0.6,6.2-1.7c4.1-2.4,6.5-6.2,6.5-11
|
||||||
|
v-19.6C87.2,134.6,86.5,133.7,85.2,133.7z M81.6,159.3c-1.7,2.9-4.2,4.5-7.6,4.5c-1.4,0-2.7-0.4-3.9-1c-3-1.7-4.7-4.3-4.7-7.7
|
||||||
|
c0-1.2,0.2-2.4,0.8-3.6c1.6-3.3,4.3-5.2,8-5.2c1.8,0,3.4,0.5,4.9,1.6c2.4,1.7,3.8,4.1,3.8,7.1C82.8,156.5,82.4,158,81.6,159.3z
|
||||||
|
"/>
|
||||||
|
<path fill="#3A4D54" d="M103.1,141.9h-0.6c-2.2,0-4.3,0.6-6.2,1.7c-4.1,2.4-6.5,6.2-6.5,11v0.9c0,2.2,0.6,4.3,1.7,6.2
|
||||||
|
c2.4,4.1,6.2,6.5,11,6.5h0.9c2.2,0,4.3-0.6,6.2-1.7c4.1-2.4,6.5-6.2,6.5-11v-0.9c0-2-0.5-4-1.5-5.8
|
||||||
|
C112.1,144.4,108.2,141.9,103.1,141.9z M110.5,159.3c-1.7,2.8-4.2,4.5-7.5,4.5c-1.6,0-3-0.4-4.3-1.2c-2.8-1.7-4.5-4.2-4.5-7.6
|
||||||
|
c0-1.2,0.2-2.4,0.8-3.6c1.6-3.3,4.3-5.2,8-5.2c1.7,0,3.3,0.5,4.7,1.4c2.6,1.7,4.1,4.1,4.1,7.2
|
||||||
|
C111.7,156.5,111.3,158,110.5,159.3z"/>
|
||||||
|
<path fill="#3A4D54" d="M186.4,148c-1.2-2.1-3-3.7-5.2-4.8c-4-2-8.3-2.2-12.2,0.1l-0.6,0.3c-1.6,0.9-3,2.1-4,3.6
|
||||||
|
c-3,4.4-3.4,9.3-0.7,14l0.3,0.5c1.1,2,2.7,3.6,4.6,4.6c4.2,2.3,8.6,2.6,12.8,0.2l0.4-0.2c1.1-0.7,1.4-1.8,0.8-3
|
||||||
|
c-0.2-0.5-0.7-0.8-1.2-1.1l-0.1-0.1l-0.1,0c-0.8-0.1-2.9,0.8-3.8,1.2c-1.6,0.3-3.5,0.4-5.1-0.2c2.9-2.5,5.8-5.1,8.8-7.6
|
||||||
|
c1.3-1.1,2.7-2.4,4.1-3.5c1.2-0.9,2.3-2.2,1.4-3.8L186.4,148z M178.4,152.1c-3.3,2.8-6.5,5.6-9.8,8.4c-0.3-0.4-0.6-0.8-0.9-1.2
|
||||||
|
c-0.7-1.2-1.1-2.5-1.1-3.9c-0.1-3.5,1.2-6.3,4.2-8.1c2.3-1.3,4.8-1.7,7.4-0.7c1.3,0.5,2.7,1.3,3.6,2.4
|
||||||
|
C180.7,150.2,179.5,151.2,178.4,152.1z"/>
|
||||||
|
<path fill="#3A4D54" d="M204.2,142.1h-0.4c-2.6,0-5,0.8-7.1,2.3c-3.5,2.5-5.6,6-5.6,10.4V166c0,1.2,0.8,2.2,2,2.2h0.3
|
||||||
|
c1.3,0,2-1,2-2.2v-10.7c0-2.4,0.7-4.5,2.4-6.2c1.4-1.3,3.3-2.5,5.2-2.5c1.5,0,3.3-0.5,3.3-2.3
|
||||||
|
C206.4,142.9,205.5,142.1,204.2,142.1z"/>
|
||||||
|
</g>
|
||||||
|
<g>
|
||||||
|
<path fill-rule="evenodd" clip-rule="evenodd" fill="#2BB8EB" d="M281.3,146.6c-0.7-0.3-1.9-0.4-2.6-0.4
|
||||||
|
c-3.7-0.1-6.4,1.9-7.9,5.2c-0.5,1.1-0.8,2.3-0.8,3.6c0,3.8,2,6.4,5.4,8c1.2,0.6,2.8,0.7,4.2,0.7c1.2,0,2.9-0.7,3.9-1.2l0.2-0.1
|
||||||
|
h0.8l0.1,0c1,0.2,1.8,0.8,1.8,1.9v0.4c0,2.7-4.9,3.3-6.8,3.4c-6.6,0.5-11.6-3.2-13.4-9.5c-0.3-1.1-0.5-2.2-0.5-3.3v-0.9
|
||||||
|
c0-4.8,2.4-8.6,6.5-11c1.9-1.1,4-1.7,6.2-1.7h0.9c2.4,0,4.5,0.7,6.5,2l0.2,0.1l0.1,0.2c0.2,0.3,0.4,0.7,0.4,1.2v0.4
|
||||||
|
c0,1.1-0.8,1.7-1.9,1.9l-0.1,0H284C283.4,147.6,281.9,146.9,281.3,146.6z"/>
|
||||||
|
<path fill-rule="evenodd" clip-rule="evenodd" fill="#2BB8EB" d="M301.3,141.9h0.6c5.1,0,9,2.5,11.5,6.9c1,1.8,1.5,3.7,1.5,5.8
|
||||||
|
v0.9c0,4.8-2.4,8.6-6.5,11c-1.9,1.1-4,1.7-6.2,1.7h-0.9c-4.8,0-8.6-2.4-11-6.5c-1.1-1.9-1.7-4-1.7-6.2v-0.9
|
||||||
|
c0-4.8,2.4-8.6,6.5-11C297,142.4,299.1,141.9,301.3,141.9z M293,155c0,3.4,1.6,5.8,4.5,7.6c1.3,0.8,2.8,1.2,4.3,1.2
|
||||||
|
c3.3,0,5.8-1.7,7.5-4.5c0.8-1.3,1.2-2.8,1.2-4.4c0-3.1-1.5-5.5-4.1-7.2c-1.4-0.9-3-1.4-4.7-1.4c-3.7,0-6.4,1.9-8,5.2
|
||||||
|
C293.3,152.6,293,153.8,293,155z"/>
|
||||||
|
<path fill="#2BB8EB" d="M344,148.8c-2.5-4.5-6.4-6.9-11.5-6.9h-0.6c-2.2,0-4.3,0.6-6.2,1.7c-4.1,2.4-6.5,6.2-6.5,11v0.3v11
c0,1.2,0.8,2.2,2,2.2h0.3c1.3,0,2-1,2-2.2v-11h0c0-1.2,0.3-2.4,0.8-3.5c1.6-3.3,4.3-5.2,8-5.2c1.7,0,3.3,0.5,4.7,1.4
c2.6,1.7,4.1,4.1,4.1,7.2v11c0,1.2,0.8,2.2,2,2.2h0.3c1.3,0,2-1,2-2.2v-11v-0.3C345.5,152.6,345,150.6,344,148.8z"/>
</g>
</g>
<path fill="none" stroke="#3A4D54" stroke-width="5" stroke-miterlimit="10" d="M402.5,190H42.1c-3.6,0-6.5-2.9-6.5-6.5v-52.9
c0-3.6,2.9-6.5,6.5-6.5h360.4c3.6,0,6.5,2.9,6.5,6.5v52.9C409,187.1,406.1,190,402.5,190z"/>
</g>
<polygon fill="#E8593A" points="147.8,203 133.3,201.1 122.8,201.1 122.8,214.6 148.3,214.6 "/>
<rect x="353.6" y="124.2" fill="#3A4D54" width="5.1" height="55.2"/>
</g>
<g>
<path fill="#3A4D54" d="M91.8,293.4H20.2c-3.2,0-5.8-2.6-5.8-5.9s2.6-5.9,5.8-5.9h71.6c3.2,0,5.8,2.6,5.8,5.9S95,293.4,91.8,293.4
z"/>
</g>
<path fill="#3A4D54" d="M428.9,282.7h-83c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9h-54.7c-3.2,0-5.8,2.6-5.8,5.9
c0,3.2,2.6,5.9,5.8,5.9H308c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9h-28.9c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9H262
c-3.2,0-5.8,2.6-5.8,5.9s2.6,5.9,5.8,5.9h13.7c-3.2,0-5.8,2.6-5.8,5.9s2.6,5.9,5.8,5.9h-37.8c-3.2,0-5.8,2.6-5.8,5.9
c0,3,2.2,5.5,5.1,5.8h-48.8c-0.9-0.6-2-1-3.2-1h-47.1c3.2,0,5.8,2.6,5.8,5.9c0,3.2-2.6,5.9-5.8,5.9h-2.8c-3.2,0-5.8,2.9-5.8,6.4
c0,3.5,2.6,6.4,5.8,6.4h58.5h7.5H286c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9H286h-2.7c-3.2,0-5.8-2.6-5.8-5.9
c0-3.2,2.6-5.9,5.8-5.9h66c0.2,0,0.4,0,0.6,0h6.7c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9h-27.2c0,0,0,0,0,0h-0.7
c-3.2,0-5.8-2.6-5.8-5.9c0-3.2,2.6-5.9,5.8-5.9h0.7h14.1c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9h0.2c-3.2,0-5.8-2.6-5.8-5.9
c0-3.2,2.6-5.9,5.8-5.9h0.7h28.9c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9h-16.1h-0.8c0.1,0,0.3,0,0.4,0
c-3-0.2-5.4-2.7-5.4-5.9c0-3.1,2.4-5.7,5.4-5.9c-0.1,0-0.3,0-0.4,0h0.8h65.2h6.5c3.2,0,5.8-2.6,5.8-5.9
C434.6,285.3,432.1,282.7,428.9,282.7z"/>
<g>
<path id="outline_3_" fill-rule="evenodd" clip-rule="evenodd" fill="#3A4D54" d="M258,210.8h37v37.8h18.7
c8.6,0,17.5-1.5,25.7-4.3c4-1.4,8.5-3.3,12.5-5.6c-5.2-6.8-7.9-15.4-8.7-23.9c-1.1-11.5,1.3-26.5,9.1-35.6l3.9-4.5l4.6,3.7
c11.7,9.4,21.5,22.5,23.2,37.4c14-4.1,30.5-3.2,42.9,4l5.1,2.9l-2.7,5.2c-10.5,20.4-32.3,26.7-53.7,25.6
C343.5,333.3,273.8,371,189.4,371c-43.6,0-83.7-16.3-106.5-55l-0.4-0.6l-3.3-6.8c-7.7-17-10.3-35.7-8.5-54.4l0.5-5.6h31.6v-37.8
h37v-37h73.9v-37H258V210.8z"/>
<g id="body_colors_3_">
<path fill="#08AADA" d="M377.8,224.8c2.5-19.3-11.9-34.4-20.9-41.6c-10.3,11.9-11.9,43.1,4.3,56.3c-9,8-28,15.3-47.5,15.3H76.8
c-1.9,20.3,1.7,39,9.8,55l2.7,4.9c1.7,2.9,3.6,5.7,5.6,8.4h0c9.7,0.6,18.7,0.8,26.9,0.7c0,0,0,0,0,0c16.1-0.4,29.3-2.3,39.3-5.7
c1.5-0.5,3.1,0.3,3.6,1.8c0.5,1.5-0.3,3.1-1.8,3.6c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0c-7.9,2.2-16.3,3.8-27.2,4.4
c0.6,0-0.7,0.1-0.7,0.1c-0.4,0-0.8,0.1-1.2,0.1c-4.3,0.2-8.9,0.3-13.6,0.3c-5.2,0-10.3-0.1-15.9-0.4l-0.1,0.1
c19.7,22.2,50.6,35.5,89.3,35.5c81.9,0,151.3-36.3,182.1-117.8c21.8,2.2,42.8-3.3,52.3-21.9C408.6,216.4,389,219.2,377.8,224.8z"
/>
<path fill="#2BB8EB" d="M377.8,224.8c2.5-19.3-11.9-34.4-20.9-41.6c-10.3,11.9-11.9,43.1,4.3,56.3c-9,8-28,15.3-47.5,15.3H90.8
c-1,31.1,10.6,54.7,31,69c0,0,0,0,0,0c16.1-0.4,29.3-2.3,39.3-5.7c1.5-0.5,3.1,0.3,3.6,1.8c0.5,1.5-0.3,3.1-1.8,3.6
c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0c-7.9,2.2-17,3.9-27.9,4.6c0,0-0.3-0.3-0.3-0.3c27.9,14.3,68.3,14.2,114.6-3.6
c51.9-20,100.3-58,134-101.5C378.8,224.3,378.3,224.6,377.8,224.8z"/>
<path fill="#088CB9" d="M76.6,279.5c1.5,10.9,4.7,21.1,9.4,30.4l2.7,4.9c1.7,2.9,3.6,5.7,5.6,8.4c9.7,0.6,18.7,0.8,26.9,0.7
c16.1-0.4,29.3-2.3,39.3-5.7c1.5-0.5,3.1,0.3,3.6,1.8c0.5,1.5-0.3,3.1-1.8,3.6c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0
c-7.9,2.2-17,3.9-27.8,4.5c-0.4,0-1,0-1.4,0c-4.3,0.2-8.9,0.4-13.6,0.4c-5.2,0-10.4-0.1-16.1-0.4c19.7,22.2,50.8,35.5,89.5,35.5
c70.1,0,131.1-26.6,166.5-85.4H76.6z"/>
<path fill="#069BC6" d="M92.9,279.5c4.2,19.1,14.3,34.1,28.9,44.3c16.1-0.4,29.3-2.3,39.3-5.7c1.5-0.5,3.1,0.3,3.6,1.8
c0.5,1.5-0.3,3.1-1.8,3.6c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0c-7.9,2.2-17.2,3.9-28,4.5c27.9,14.3,68.2,14.1,114.5-3.7
c28-10.8,55-26.8,79.2-46.1H92.9z"/>
</g>
<g id="Containers_3_">
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M135.8,219.7h2.5v26.7h-2.5V219.7z M130.9,219.7h2.6v26.7h-2.6
V219.7z M126.1,219.7h2.6v26.7h-2.6V219.7z M121.2,219.7h2.6v26.7h-2.6V219.7z M116.3,219.7h2.6v26.7h-2.6V219.7z M111.6,219.7
h2.5v26.7h-2.5V219.7z M108.9,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M172.7,182.7h2.5v26.7h-2.5V182.7z M167.9,182.7h2.6v26.7h-2.6
V182.7z M163,182.7h2.6v26.7H163V182.7z M158.2,182.7h2.6v26.7h-2.6V182.7z M153.3,182.7h2.6v26.7h-2.6V182.7z M148.6,182.7h2.5
v26.7h-2.5V182.7z M145.9,180h32v32h-32V180z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M172.7,219.7h2.5v26.7h-2.5V219.7z M167.9,219.7h2.6v26.7h-2.6
V219.7z M163,219.7h2.6v26.7H163V219.7z M158.2,219.7h2.6v26.7h-2.6V219.7z M153.3,219.7h2.6v26.7h-2.6V219.7z M148.6,219.7h2.5
v26.7h-2.5V219.7z M145.9,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M209.7,219.7h2.5v26.7h-2.5V219.7z M204.8,219.7h2.6v26.7h-2.6
V219.7z M200,219.7h2.6v26.7H200V219.7z M195.1,219.7h2.6v26.7h-2.6V219.7z M190.3,219.7h2.6v26.7h-2.6V219.7z M185.5,219.7h2.5
v26.7h-2.5V219.7z M182.9,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M209.7,182.7h2.5v26.7h-2.5V182.7z M204.8,182.7h2.6v26.7h-2.6
V182.7z M200,182.7h2.6v26.7H200V182.7z M195.1,182.7h2.6v26.7h-2.6V182.7z M190.3,182.7h2.6v26.7h-2.6V182.7z M185.5,182.7h2.5
v26.7h-2.5V182.7z M182.9,180h32v32h-32V180z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M246.7,219.7h2.5v26.7h-2.5V219.7z M241.8,219.7h2.6v26.7h-2.6
V219.7z M237,219.7h2.6v26.7H237V219.7z M232.1,219.7h2.6v26.7h-2.6V219.7z M227.3,219.7h2.6v26.7h-2.6V219.7z M222.5,219.7h2.5
v26.7h-2.5V219.7z M219.8,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M246.7,182.7h2.5v26.7h-2.5V182.7z M241.8,182.7h2.6v26.7h-2.6
V182.7z M237,182.7h2.6v26.7H237V182.7z M232.1,182.7h2.6v26.7h-2.6V182.7z M227.3,182.7h2.6v26.7h-2.6V182.7z M222.5,182.7h2.5
v26.7h-2.5V182.7z M219.8,180h32v32h-32V180z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M246.7,145.7h2.5v26.7h-2.5V145.7z M241.8,145.7h2.6v26.7h-2.6
V145.7z M237,145.7h2.6v26.7H237V145.7z M232.1,145.7h2.6v26.7h-2.6V145.7z M227.3,145.7h2.6v26.7h-2.6V145.7z M222.5,145.7h2.5
v26.7h-2.5V145.7z M219.8,143.1h32v32h-32V143.1z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M283.6,219.7h2.5v26.7h-2.5V219.7z M278.8,219.7h2.6v26.7h-2.6
V219.7z M273.9,219.7h2.6v26.7h-2.6V219.7z M269.1,219.7h2.6v26.7h-2.6V219.7z M264.2,219.7h2.6v26.7h-2.6V219.7z M259.5,219.7
h2.5v26.7h-2.5V219.7z M256.8,217h32v32h-32V217z"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#D4EDF1" d="M175.9,301c4.9,0,8.8,4,8.8,8.8s-4,8.8-8.8,8.8
c-4.9,0-8.8-4-8.8-8.8S171,301,175.9,301"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#3A4D54" d="M175.9,303.5c0.8,0,1.6,0.2,2.3,0.4c-0.8,0.4-1.3,1.3-1.3,2.2
c0,1.4,1.2,2.6,2.6,2.6c1,0,1.8-0.5,2.3-1.3c0.3,0.7,0.5,1.6,0.5,2.4c0,3.5-2.8,6.3-6.3,6.3c-3.5,0-6.3-2.8-6.3-6.3
C169.6,306.3,172.4,303.5,175.9,303.5"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#3A4D54" d="M19.6,282.7h193.6h23.9h190.5c0.4,0,1.6,0.1,1.2,0
c-9.2-2.2-24.9-6.2-23.5-15.8c0.1-0.7-0.2-0.8-0.6-0.3c-16.6,17.5-54.1,12.2-64.3,3.2c-0.2-0.1-0.4-0.1-0.5,0.1
c-11.5,15.4-73.3,9.7-79.3-2.3c-0.1-0.2-0.4-0.3-0.6-0.1c-14.1,15.7-55.7,15.7-69.8,0c-0.2-0.2-0.5-0.1-0.6,0.1
c-6,12-67.8,17.7-79.3,2.3c-0.1-0.2-0.3-0.2-0.5-0.1c-10.1,8.9-44.5,14.3-61.2-3c-0.3-0.3-0.8-0.1-0.8,0.4
C48.9,277.6,28.1,280.5,19.6,282.7"/>
<path fill="#C0DBE0" d="M199.4,364.7c-21.9-10.4-33.9-24.5-40.6-39.9c-8.1,2.3-17.9,3.8-29.3,4.4c-4.3,0.2-8.8,0.4-13.5,0.4
c-5.4,0-11.2-0.2-17.2-0.5c20.1,20.1,44.8,35.5,90.5,35.8C192.7,364.9,196.1,364.8,199.4,364.7z"/>
<path fill="#D4EDF1" d="M167,339c-3-4.1-6-9.3-8.1-14.2c-8.1,2.3-17.9,3.8-29.3,4.4C137.4,333.4,148.5,337.4,167,339z"/>
</g>
<circle fill="#3A4D54" cx="34.8" cy="311" r="5.9"/>
<path fill="#3A4D54" d="M346.8,297.2l-1-2.8c0,0,5.3-11.7-7.4-11.7c-12.7,0,3.5-4.7,3.5-4.7l21.8,2.8l9.6,6.8l-16.1,4.1
L346.8,297.2z"/>
<path fill="#3A4D54" d="M78.7,297.2l1-2.8c0,0-5.3-11.7,7.4-11.7s-3.5-4.7-3.5-4.7l-21.8,2.8l-9.6,6.8l16.1,4.1L78.7,297.2z"/>
<path fill="#3A4D54" d="M361.7,279.5v4.4l15.6,6.7l45.5-4.1l7.3-3.7c0,0-3.8-0.6-7.3-1.7c-3.6-1.1-15.2-1.6-15.2-1.6h-28.3
l-13.6,1.8L361.7,279.5z"/>
</g>
</svg>
After Width: | Height: | Size: 20 KiB |
BIN
slides/images/docker-ecosystem-2015.png
Normal file
After Width: | Height: | Size: 1.0 MiB
2597
slides/images/docker-engine-architecture.svg
Normal file
After Width: | Height: | Size: 183 KiB
BIN
slides/images/dockerd-and-containerd.png
Normal file
After Width: | Height: | Size: 12 KiB
BIN
slides/images/fu-face.jpg
Normal file
After Width: | Height: | Size: 150 KiB
BIN
slides/images/getting-inside.png
Normal file
After Width: | Height: | Size: 301 KiB
BIN
slides/images/ingress-lb.png
Normal file
After Width: | Height: | Size: 70 KiB
BIN
slides/images/ingress-routing-mesh.png
Normal file
After Width: | Height: | Size: 60 KiB
BIN
slides/images/sharing-layers.jpg
Normal file
After Width: | Height: | Size: 55 KiB
Before Width: | Height: | Size: 22 KiB
BIN
slides/images/tangram.gif
Normal file
After Width: | Height: | Size: 12 KiB
BIN
slides/images/tesla.jpg
Normal file
After Width: | Height: | Size: 484 KiB
BIN
slides/images/tetris-1.png
Normal file
After Width: | Height: | Size: 8.8 KiB
BIN
slides/images/tetris-2.gif
Normal file
After Width: | Height: | Size: 730 KiB
BIN
slides/images/tetris-3.png
Normal file
After Width: | Height: | Size: 24 KiB
BIN
slides/images/traffic-graph.png
Normal file
After Width: | Height: | Size: 21 KiB
BIN
slides/images/trollface.png
Normal file
After Width: | Height: | Size: 2.9 KiB
59
slides/index.css
Normal file
@@ -0,0 +1,59 @@
body {
  background-image: url("images/container-background.jpg");
  max-width: 1024px;
  margin: 0 auto;
}
table {
  font-size: 20px;
  font-family: sans-serif;
  background: white;
  width: 100%;
  height: 100%;
  padding: 20px;
}
.header {
  font-size: 300%;
  font-weight: bold;
}
.title {
  font-size: 150%;
  font-weight: bold;
}
.details {
  font-size: 80%;
  font-style: italic;
}
td {
  padding: 1px;
  height: 1em;
}
td.spacer {
  height: unset;
}
td.footer {
  padding-top: 80px;
  height: 100px;
}
td.title {
  border-bottom: thick solid black;
  padding-bottom: 2px;
  padding-top: 20px;
}
a {
  text-decoration: none;
}
a:hover {
  background: yellow;
}
a.attend:after {
  content: "📅 attend";
}
a.slides:after {
  content: "📚 slides";
}
a.chat:after {
  content: "💬 chat";
}
a.video:after {
  content: "📺 video";
}
@@ -1,178 +0,0 @@
<html>
<head>
<title>Container Training</title>
<style type="text/css">
body {
  background-image: url("images/container-background.jpg");
  max-width: 1024px;
  margin: 0 auto;
}
table {
  font-size: 20px;
  font-family: sans-serif;
  background: white;
  width: 100%;
  height: 100%;
  padding: 20px;
}
.header {
  font-size: 300%;
  font-weight: bold;
}
.title {
  font-size: 150%;
  font-weight: bold;
}
td {
  padding: 1px;
  height: 1em;
}
td.spacer {
  height: unset;
}
td.footer {
  padding-top: 80px;
  height: 100px;
}
td.title {
  border-bottom: thick solid black;
  padding-bottom: 2px;
  padding-top: 20px;
}
a {
  text-decoration: none;
}
a:hover {
  background: yellow;
}
a.attend:after {
  content: "📅 attend";
}
a.slides:after {
  content: "📚 slides";
}
a.chat:after {
  content: "💬 chat";
}
a.video:after {
  content: "📺 video";
}
</style>
</head>
<body>
<div class="main">
<table>
<tr><td class="header" colspan="4">Container Training</td></tr>

<tr><td class="title" colspan="4">Coming soon at a conference near you</td></tr>

<tr>
<!--
<td>Nothing for now (stay tuned...)</td>
thing for now (stay tuned...)</td>
-->
<td>March 14, 2018: Boosterconf — Kubernetes 101</td>
<td>&nbsp;</td>
<td><a class="attend" href="https://2018.boosterconf.no/talks/1179" />
</tr>

<tr>
<td>March 27, 2018: SREcon Americas — Kubernetes 101</td>
<td>&nbsp;</td>
<td><a class="attend" href="https://www.usenix.org/conference/srecon18americas/presentation/kromhout" />
</tr>

<tr><td class="title" colspan="4">Past workshops</td></tr>

<tr>
<!-- February 22, 2018 -->
<td>IndexConf: Kubernetes 101</td>
<td><a class="slides" href="http://indexconf2018.container.training/" /></td>
<!--
<td><a class="attend" href="https://developer.ibm.com/indexconf/sessions/#!?id=5474" />
-->
</tr>

<tr>
<td>Kubernetes enablement at Docker</td>
<td><a class="slides" href="http://kube.container.training/" /></td>
</tr>

<tr>
<td>QCON SF: Orchestrating Microservices with Docker Swarm</td>
<td><a class="slides" href="http://qconsf2017swarm.container.training/" /></td>
</tr>

<tr>
<td>QCON SF: Introduction to Docker and Containers</td>
<td><a class="slides" href="http://qconsf2017intro.container.training/" /></td>
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviLgqTum8MkspG_8VzGl6C07" /></td>
</tr>

<tr>
<td>LISA17 M7: Getting Started with Docker and Containers</td>
<td><a class="slides" href="http://lisa17m7.container.training/" /></td>
</tr>

<tr>
<td>LISA17 T9: Build, Ship, and Run Microservices on a Docker Swarm Cluster</td>
<td><a class="slides" href="http://lisa17t9.container.training/" /></td>
</tr>

<tr>
<td>Deploying and scaling microservices with Docker and Kubernetes</td>
<td><a class="slides" href="http://osseu17.container.training/" /></td>
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviLrsyydCzxWrIP_1-wkcSHS" /></td>
</tr>

<tr>
<td>DockerCon Workshop: from Zero to Hero (full day, B3 M1-2)</td>
<td><a class="slides" href="http://dc17eu.container.training/" /></td>
</tr>

<tr>
<td>DockerCon Workshop: Orchestration for Advanced Users (afternoon, B4 M5-6)</td>
<td><a class="slides" href="https://www.bretfisher.com/dockercon17eu/" /></td>
</tr>

<tr>
<td>LISA16 T1: Deploying and Scaling Applications with Docker Swarm</td>
<td><a class="slides" href="http://lisa16t1.container.training/" /></td>
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviIDDhr8vIwCN1wkyNGXjbbc" /></td>
</tr>

<tr>
<td>PyCon2016: Introduction to Docker and containers</td>
<td><a class="slides" href="https://us.pycon.org/2016/site_media/media/tutorial_handouts/DockerSlides.pdf" /></td>
<td><a class="video" href="https://www.youtube.com/watch?v=ZVaRK10HBjo" /></td>
</tr>

<tr><td class="title" colspan="4">Self-paced tutorials</td></tr>

<tr>
<td>Introduction to Docker and Containers</td>
<td><a class="slides" href="intro-fullday.yml.html" /></td>
</tr>

<tr>
<td>Container Orchestration with Docker and Swarm</td>
<td><a class="slides" href="swarm-selfpaced.yml.html" /></td>
</tr>

<tr>
<td>Deploying and Scaling Microservices with Docker and Kubernetes</td>
<td><a class="slides" href="kube-halfday.yml.html" /></td>
</tr>

<tr><td class="spacer"></td></tr>

<tr>
<td class="footer">
Maintained by Jérôme Petazzoni (<a href="https://twitter.com/jpetazzo">@jpetazzo</a>)
</td>
</tr>
</table>
</div>
</body>
</html>
140
slides/index.py
Executable file
@@ -0,0 +1,140 @@
#!/usr/bin/env python2
# coding: utf-8
TEMPLATE="""<html>
<head>
<title>{{ title }}</title>
<link rel="stylesheet" href="index.css">
</head>
<body>
<div class="main">
<table>
<tr><td class="header" colspan="3">{{ title }}</td></tr>

{% if coming_soon %}
<tr><td class="title" colspan="3">Coming soon near you</td></tr>

{% for item in coming_soon %}
<tr>
<td>{{ item.title }}</td>
<td>{% if item.slides %}<a class="slides" href="{{ item.slides }}" />{% endif %}</td>
<td><a class="attend" href="{{ item.attend }}" /></td>
</tr>
<tr>
<td class="details">Scheduled {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}

{% if past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>

{% for item in past_workshops[:5] %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td>{% if item.video %}<a class="video" href="{{ item.video }}" />{% endif %}</td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>

{% endfor %}

{% if past_workshops[5:] %}
<tr>
<td>... and at least <a href="past.html">{{ past_workshops[5:] | length }} more</a>.</td>
</tr>
{% endif %}
{% endif %}

{% if recorded_workshops %}
<tr><td class="title" colspan="3">Recorded workshops</td></tr>

{% for item in recorded_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td><a class="video" href="{{ item.video }}" /></td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}

{% if self_paced %}
<tr><td class="title" colspan="3">Self-paced tutorials</td></tr>
{% for item in self_paced %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
</tr>
{% endfor %}
{% endif %}

{% if all_past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>
{% for item in all_past_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
{% if item.video %}
<td><a class="video" href="{{ item.video }}" /></td>
{% endif %}
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}

<tr><td class="spacer"></td></tr>

<tr>
<td class="footer">
Maintained by Jérôme Petazzoni (<a href="https://twitter.com/jpetazzo">@jpetazzo</a>) and <a href="https://github.com/jpetazzo/container.training/graphs/contributors">contributors</a>.
</td>
</tr>
</table>
</div>
</body>
</html>""".decode("utf-8")

import datetime
import jinja2
import yaml

items = yaml.load(open("index.yaml"))

for item in items:
    if "date" in item:
        date = item["date"]
        suffix = {
            1: "st", 2: "nd", 3: "rd",
            21: "st", 22: "nd", 23: "rd",
            31: "st"}.get(date.day, "th")
        item["prettydate"] = date.strftime("%B %e{}, %Y").format(suffix)

today = datetime.date.today()
coming_soon = [i for i in items if i.get("date") and i["date"] >= today]
coming_soon.sort(key=lambda i: i["date"])
past_workshops = [i for i in items if i.get("date") and i["date"] < today]
past_workshops.sort(key=lambda i: i["date"], reverse=True)
self_paced = [i for i in items if not i.get("date")]
recorded_workshops = [i for i in items if i.get("video")]

template = jinja2.Template(TEMPLATE)
with open("index.html", "w") as f:
    f.write(template.render(
        title="Container Training",
        coming_soon=coming_soon,
        past_workshops=past_workshops,
        self_paced=self_paced,
        recorded_workshops=recorded_workshops
    ).encode("utf-8"))

with open("past.html", "w") as f:
    f.write(template.render(
        title="Container Training",
        all_past_workshops=past_workshops
    ).encode("utf-8"))
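The ordinal-suffix trick in `index.py` above (mapping days 1/21/31 to "st", 2/22 to "nd", 3/23 to "rd", and defaulting to "th" — which also handles 11th–13th correctly, since those days are simply absent from the dict) relies on the platform-specific `%e` strftime code. A minimal, locale- and platform-independent Python 3 sketch of the same logic (the `prettydate` name is mine, not from the repo):

```python
import datetime

def prettydate(date):
    # Same lookup as in index.py: only 1/2/3, 21/22/23, and 31 get
    # special suffixes; everything else (including 11-13) falls back to "th".
    suffix = {1: "st", 2: "nd", 3: "rd",
              21: "st", 22: "nd", 23: "rd",
              31: "st"}.get(date.day, "th")
    # Build the string directly instead of strftime's "%e" (a glibc
    # extension), so the output is identical on all platforms.
    return "{} {}{}, {}".format(date.strftime("%B"), date.day, suffix, date.year)
```

For example, `prettydate(datetime.date(2018, 7, 12))` yields `"July 12th, 2018"`.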
361
slides/index.yaml
Normal file
@@ -0,0 +1,361 @@
- date: 2018-07-12
  city: Minneapolis, MN
  country: us
  event: devopsdays Minneapolis
  title: Kubernetes 101
  speaker: "ashleymcnamara, bketelsen"
  attend: https://www.devopsdays.org/events/2018-minneapolis/registration/

- date: 2018-10-01
  city: New York, NY
  country: us
  event: Velocity
  title: Kubernetes 101
  speaker: bridgetkromhout
  attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70102

- date: 2018-09-30
  city: New York, NY
  country: us
  event: Velocity
  title: Kubernetes Bootcamp - Deploying and Scaling Microservices
  speaker: jpetazzo
  attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/69875

- date: 2018-07-17
  city: Portland, OR
  country: us
  event: OSCON
  title: Kubernetes 101
  speaker: bridgetkromhout
  attend: https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/66287

- date: 2018-06-27
  city: Amsterdam
  country: nl
  event: devopsdays
  title: Kubernetes 101
  speaker: bridgetkromhout
  slides: https://devopsdaysams2018.container.training
  attend: https://www.devopsdays.org/events/2018-amsterdam/registration/

- date: 2018-06-12
  city: San Jose, CA
  country: us
  event: Velocity
  title: Kubernetes 101
  speaker: bridgetkromhout
  slides: https://velocitysj2018.container.training
  attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66286

- date: 2018-06-12
  city: San Jose, CA
  country: us
  event: Velocity
  title: "Kubernetes two-day kickstart: Deploying and Scaling Microservices with Kubernetes"
  speaker: "bketelsen, erikstmartin"
  slides: http://kubernetes.academy/kube-fullday.yml.html#1
  attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932

- date: 2018-06-11
  city: San Jose, CA
  country: us
  event: Velocity
  title: "Kubernetes two-day kickstart: Introduction to Docker and Containers"
  speaker: "bketelsen, erikstmartin"
  slides: http://kubernetes.academy/intro-fullday.yml.html#1
  attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932

- date: 2018-05-17
  city: Virginia Beach, VA
  country: us
  event: Revolution Conf
  title: Docker 101
  speaker: bretfisher
  slides: https://revconf18.bretfisher.com

- date: 2018-05-10
  city: Saint Paul, MN
  country: us
  event: NDC Minnesota
  title: Kubernetes 101
  slides: https://ndcminnesota2018.container.training

- date: 2018-05-08
  city: Budapest
  country: hu
  event: CRAFT
  title: Swarm Orchestration
  slides: https://craftconf18.bretfisher.com

- date: 2018-04-27
  city: Chicago, IL
  country: us
  event: GOTO
  title: Swarm Orchestration
  slides: https://gotochgo18.bretfisher.com

- date: 2018-04-24
  city: Chicago, IL
  country: us
  event: GOTO
  title: Kubernetes 101
  slides: http://gotochgo2018.container.training/

- date: 2018-04-11
  city: Paris
  country: fr
  title: Introduction aux conteneurs
  lang: fr
  slides: https://avril2018.container.training/intro.yml.html

- date: 2018-04-13
  city: Paris
  country: fr
  lang: fr
  title: Introduction à l'orchestration
  slides: https://avril2018.container.training/kube.yml.html

- date: 2018-04-06
  city: Sacramento, CA
  country: us
  event: MuraCon
  title: Docker 101
  slides: https://muracon18.bretfisher.com

- date: 2018-03-27
  city: Santa Clara, CA
  country: us
  event: SREcon Americas
  title: Kubernetes 101
  slides: http://srecon2018.container.training/

- date: 2018-03-27
  city: Bergen
  country: no
  event: Boosterconf
  title: Kubernetes 101
  slides: http://boosterconf2018.container.training/

- date: 2018-02-22
  city: San Francisco, CA
  country: us
  event: IndexConf
  title: Kubernetes 101
  slides: http://indexconf2018.container.training/
  #attend: https://developer.ibm.com/indexconf/sessions/#!?id=5474

- date: 2017-11-17
  city: San Francisco, CA
  country: us
  event: QCON SF
  title: Orchestrating Microservices with Docker Swarm
  slides: http://qconsf2017swarm.container.training/

- date: 2017-11-16
  city: San Francisco, CA
  country: us
  event: QCON SF
  title: Introduction to Docker and Containers
  slides: http://qconsf2017intro.container.training/
  video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLgqTum8MkspG_8VzGl6C07

- date: 2017-10-30
  city: San Francisco, CA
  country: us
  event: LISA
  title: (M7) Getting Started with Docker and Containers
  slides: http://lisa17m7.container.training/

- date: 2017-10-31
  city: San Francisco, CA
  country: us
  event: LISA
  title: (T9) Build, Ship, and Run Microservices on a Docker Swarm Cluster
  slides: http://lisa17t9.container.training/

- date: 2017-10-26
  city: Prague
  country: cz
  event: Open Source Summit Europe
  title: Deploying and scaling microservices with Docker and Kubernetes
  slides: http://osseu17.container.training/
  video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLrsyydCzxWrIP_1-wkcSHS

- date: 2017-10-16
  city: Copenhagen
  country: dk
  event: DockerCon
  title: Swarm from Zero to Hero
  slides: http://dc17eu.container.training/

- date: 2017-10-16
  city: Copenhagen
  country: dk
  event: DockerCon
  title: Orchestration for Advanced Users
  slides: https://www.bretfisher.com/dockercon17eu

- date: 2017-07-25
  city: Minneapolis, MN
  country: us
  event: devopsdays
  title: Deploying & Scaling microservices with Docker Swarm
  video: https://www.youtube.com/watch?v=DABbqyJeG_E

- date: 2017-06-12
  city: Berlin
  country: de
  event: DevOpsCon
  title: Deploying and scaling containerized Microservices with Docker and Swarm

- date: 2017-05-18
  city: Portland, OR
  country: us
  event: PyCon
  title: Deploy and scale containers with Docker native, open source orchestration
  video: https://www.youtube.com/watch?v=EuzoEaE6Cqs

- date: 2017-05-08
  city: Austin, TX
  country: us
  event: OSCON
  title: Deploying and scaling applications in containers with Docker

- date: 2017-05-04
  city: Chicago, IL
  country: us
  event: GOTO
  title: Container deployment, scaling, and orchestration with Docker Swarm

- date: 2017-04-17
  city: Austin, TX
  country: us
  event: DockerCon
  title: Orchestration Workshop

- date: 2017-03-22
  city: San Jose, CA
  country: us
  event: Devoxx
  title: Container deployment, scaling, and orchestration with Docker Swarm

- date: 2017-03-03
  city: Pasadena, CA
  country: us
  event: SCALE
  title: Container deployment, scaling, and orchestration with Docker Swarm

- date: 2016-12-06
  city: Boston, MA
  country: us
  event: LISA
  title: Deploying and Scaling Applications with Docker Swarm
  slides: http://lisa16t1.container.training/
  video: https://www.youtube.com/playlist?list=PLBAFXs0YjviIDDhr8vIwCN1wkyNGXjbbc

- date: 2016-10-07
  city: Berlin
  country: de
  event: LinuxCon
  title: Orchestrating Containers in Production at Scale with Docker Swarm

- date: 2016-09-20
  city: New York, NY
  country: us
  event: Velocity
  title: Deployment and orchestration at scale with Docker

- date: 2016-08-25
  city: Toronto
  country: ca
  event: LinuxCon
  title: Orchestrating Containers in Production at Scale with Docker Swarm

- date: 2016-06-22
  city: Seattle, WA
  country: us
  event: DockerCon
  title: Orchestration Workshop

- date: 2016-05-29
  city: Portland, OR
  country: us
  event: PyCon
  title: Introduction to Docker and containers
  slides: https://us.pycon.org/2016/site_media/media/tutorial_handouts/DockerSlides.pdf
  video: https://www.youtube.com/watch?v=ZVaRK10HBjo

- date: 2016-05-17
  city: Austin, TX
  country: us
  event: OSCON
  title: Deployment and orchestration at scale with Docker Swarm

- date: 2016-04-27
  city: Budapest
  country: hu
  event: CRAFT
  title: Advanced Docker concepts and container orchestration

- date: 2016-04-22
  city: Berlin
  country: de
  event: Neofonie
  title: Orchestration Workshop

- date: 2016-04-05
  city: Stockholm
  country: se
  event: Praqma
  title: Orchestration Workshop

- date: 2016-03-22
  city: Munich
  country: de
  event: Stylight
  title: Orchestration Workshop

- date: 2016-03-11
  city: London
  country: uk
  event: QCON
  title: Containers in production with Docker Swarm

- date: 2016-02-19
  city: Amsterdam
  country: nl
  event: Container Solutions
  title: Orchestration Workshop

- date: 2016-02-15
  city: Paris
  country: fr
  event: Zenika
  title: Orchestration Workshop

- date: 2016-01-22
  city: Pasadena, CA
  country: us
  event: SCALE
  title: Advanced Docker concepts and container orchestration

#- date: 2015-11-10
#  city: Washington DC
#  country: us
|
||||||
|
# event: LISA
|
||||||
|
# title: Deploying and Scaling Applications with Docker Swarm
|
||||||
|
|
||||||
|
#2015-09-24-strangeloop
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
- title: Introduction to Docker and Containers
|
||||||
|
slides: intro-selfpaced.yml.html
|
||||||
|
|
||||||
|
- title: Container Orchestration with Docker and Swarm
|
||||||
|
slides: swarm-selfpaced.yml.html
|
||||||
|
|
||||||
|
- title: Deploying and Scaling Microservices with Docker and Kubernetes
|
||||||
|
slides: kube-selfpaced.yml.html
|
||||||
|
|
||||||
@@ -1,11 +1,14 @@
|
|||||||
title: |
|
title: |
|
||||||
Introduction
|
Introduction
|
||||||
to Docker and
|
to Containers
|
||||||
Containers
|
|
||||||
|
|
||||||
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
|
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
|
||||||
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
|
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
|
||||||
|
|
||||||
|
gitrepo: github.com/jpetazzo/container.training
|
||||||
|
|
||||||
|
slides: http://container.training/
|
||||||
|
|
||||||
exclude:
|
exclude:
|
||||||
- self-paced
|
- self-paced
|
||||||
|
|
||||||
@@ -16,7 +19,7 @@ chapters:
|
|||||||
- common/about-slides.md
|
- common/about-slides.md
|
||||||
- common/toc.md
|
- common/toc.md
|
||||||
- - intro/Docker_Overview.md
|
- - intro/Docker_Overview.md
|
||||||
#- intro/Docker_History.md
|
- intro/Docker_History.md
|
||||||
- intro/Training_Environment.md
|
- intro/Training_Environment.md
|
||||||
- intro/Installing_Docker.md
|
- intro/Installing_Docker.md
|
||||||
- intro/First_Containers.md
|
- intro/First_Containers.md
|
||||||
@@ -27,11 +30,13 @@ chapters:
|
|||||||
- intro/Building_Images_With_Dockerfiles.md
|
- intro/Building_Images_With_Dockerfiles.md
|
||||||
- intro/Cmd_And_Entrypoint.md
|
- intro/Cmd_And_Entrypoint.md
|
||||||
- intro/Copying_Files_During_Build.md
|
- intro/Copying_Files_During_Build.md
|
||||||
- intro/Multi_Stage_Builds.md
|
- - intro/Multi_Stage_Builds.md
|
||||||
- intro/Publishing_To_Docker_Hub.md
|
- intro/Publishing_To_Docker_Hub.md
|
||||||
- intro/Dockerfile_Tips.md
|
- intro/Dockerfile_Tips.md
|
||||||
- - intro/Naming_And_Inspecting.md
|
- - intro/Naming_And_Inspecting.md
|
||||||
- intro/Container_Networking_Basics.md
|
- intro/Labels.md
|
||||||
|
- intro/Getting_Inside.md
|
||||||
|
- - intro/Container_Networking_Basics.md
|
||||||
- intro/Network_Drivers.md
|
- intro/Network_Drivers.md
|
||||||
- intro/Container_Network_Model.md
|
- intro/Container_Network_Model.md
|
||||||
#- intro/Connecting_Containers_With_Links.md
|
#- intro/Connecting_Containers_With_Links.md
|
||||||
@@ -39,6 +44,16 @@ chapters:
|
|||||||
- - intro/Local_Development_Workflow.md
|
- - intro/Local_Development_Workflow.md
|
||||||
- intro/Working_With_Volumes.md
|
- intro/Working_With_Volumes.md
|
||||||
- intro/Compose_For_Dev_Stacks.md
|
- intro/Compose_For_Dev_Stacks.md
|
||||||
- intro/Advanced_Dockerfiles.md
|
- intro/Docker_Machine.md
|
||||||
|
- - intro/Advanced_Dockerfiles.md
|
||||||
|
- intro/Application_Configuration.md
|
||||||
|
- intro/Logging.md
|
||||||
|
- intro/Resource_Limits.md
|
||||||
|
- - intro/Namespaces_Cgroups.md
|
||||||
|
- intro/Copy_On_Write.md
|
||||||
|
#- intro/Containers_From_Scratch.md
|
||||||
|
- - intro/Container_Engines.md
|
||||||
|
- intro/Ecosystem.md
|
||||||
|
- intro/Orchestration_Overview.md
|
||||||
- common/thankyou.md
|
- common/thankyou.md
|
||||||
- intro/links.md
|
- intro/links.md
|
||||||
|
|||||||
@@ -1,11 +1,14 @@
|
|||||||
title: |
|
title: |
|
||||||
Introduction
|
Introduction
|
||||||
to Docker and
|
to Containers
|
||||||
Containers
|
|
||||||
|
|
||||||
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
|
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
|
||||||
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
|
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
|
||||||
|
|
||||||
|
gitrepo: github.com/jpetazzo/container.training
|
||||||
|
|
||||||
|
slides: http://container.training/
|
||||||
|
|
||||||
exclude:
|
exclude:
|
||||||
- in-person
|
- in-person
|
||||||
|
|
||||||
@@ -16,7 +19,7 @@ chapters:
|
|||||||
- common/about-slides.md
|
- common/about-slides.md
|
||||||
- common/toc.md
|
- common/toc.md
|
||||||
- - intro/Docker_Overview.md
|
- - intro/Docker_Overview.md
|
||||||
#- intro/Docker_History.md
|
- intro/Docker_History.md
|
||||||
- intro/Training_Environment.md
|
- intro/Training_Environment.md
|
||||||
- intro/Installing_Docker.md
|
- intro/Installing_Docker.md
|
||||||
- intro/First_Containers.md
|
- intro/First_Containers.md
|
||||||
@@ -27,11 +30,13 @@ chapters:
|
|||||||
- intro/Building_Images_With_Dockerfiles.md
|
- intro/Building_Images_With_Dockerfiles.md
|
||||||
- intro/Cmd_And_Entrypoint.md
|
- intro/Cmd_And_Entrypoint.md
|
||||||
- intro/Copying_Files_During_Build.md
|
- intro/Copying_Files_During_Build.md
|
||||||
- intro/Multi_Stage_Builds.md
|
- - intro/Multi_Stage_Builds.md
|
||||||
- intro/Publishing_To_Docker_Hub.md
|
- intro/Publishing_To_Docker_Hub.md
|
||||||
- intro/Dockerfile_Tips.md
|
- intro/Dockerfile_Tips.md
|
||||||
- - intro/Naming_And_Inspecting.md
|
- - intro/Naming_And_Inspecting.md
|
||||||
- intro/Container_Networking_Basics.md
|
- intro/Labels.md
|
||||||
|
- intro/Getting_Inside.md
|
||||||
|
- - intro/Container_Networking_Basics.md
|
||||||
- intro/Network_Drivers.md
|
- intro/Network_Drivers.md
|
||||||
- intro/Container_Network_Model.md
|
- intro/Container_Network_Model.md
|
||||||
#- intro/Connecting_Containers_With_Links.md
|
#- intro/Connecting_Containers_With_Links.md
|
||||||
@@ -39,6 +44,16 @@ chapters:
|
|||||||
- - intro/Local_Development_Workflow.md
|
- - intro/Local_Development_Workflow.md
|
||||||
- intro/Working_With_Volumes.md
|
- intro/Working_With_Volumes.md
|
||||||
- intro/Compose_For_Dev_Stacks.md
|
- intro/Compose_For_Dev_Stacks.md
|
||||||
- intro/Advanced_Dockerfiles.md
|
- intro/Docker_Machine.md
|
||||||
|
- - intro/Advanced_Dockerfiles.md
|
||||||
|
- intro/Application_Configuration.md
|
||||||
|
- intro/Logging.md
|
||||||
|
- intro/Resource_Limits.md
|
||||||
|
- - intro/Namespaces_Cgroups.md
|
||||||
|
- intro/Copy_On_Write.md
|
||||||
|
#- intro/Containers_From_Scratch.md
|
||||||
|
- - intro/Container_Engines.md
|
||||||
|
- intro/Ecosystem.md
|
||||||
|
- intro/Orchestration_Overview.md
|
||||||
- common/thankyou.md
|
- common/thankyou.md
|
||||||
- intro/links.md
|
- intro/links.md
|
||||||
|
|||||||
@@ -34,18 +34,6 @@ In this section, we will see more Dockerfile commands.
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## The `MAINTAINER` instruction
|
|
||||||
|
|
||||||
The `MAINTAINER` instruction tells you who wrote the `Dockerfile`.
|
|
||||||
|
|
||||||
```dockerfile
|
|
||||||
MAINTAINER Docker Education Team <education@docker.com>
|
|
||||||
```
|
|
||||||
|
|
||||||
It's optional but recommended.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## The `RUN` instruction
|
## The `RUN` instruction
|
||||||
|
|
||||||
The `RUN` instruction can be specified in two ways.
|
The `RUN` instruction can be specified in two ways.
|
||||||
@@ -94,8 +82,6 @@ RUN apt-get update && apt-get install -y wget && apt-get clean
|
|||||||
|
|
||||||
It is also possible to break a command onto multiple lines:
|
It is also possible to break a command onto multiple lines:
|
||||||
|
|
||||||
It is possible to execute multiple commands in a single step:
|
|
||||||
|
|
||||||
```dockerfile
|
```dockerfile
|
||||||
RUN apt-get update \
|
RUN apt-get update \
|
||||||
&& apt-get install -y wget \
|
&& apt-get install -y wget \
|
||||||
@@ -369,7 +355,7 @@ class: extra-details
|
|||||||
|
|
||||||
## Overriding the `ENTRYPOINT` instruction
|
## Overriding the `ENTRYPOINT` instruction
|
||||||
|
|
||||||
The entry point can be overriden as well.
|
The entry point can be overridden as well.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
$ docker run -it training/ls
|
$ docker run -it training/ls
|
||||||
@@ -430,5 +416,4 @@ ONBUILD COPY . /src
|
|||||||
```
|
```
|
||||||
|
|
||||||
* You can't chain `ONBUILD` instructions with `ONBUILD`.
|
* You can't chain `ONBUILD` instructions with `ONBUILD`.
|
||||||
* `ONBUILD` can't be used to trigger `FROM` and `MAINTAINER`
|
* `ONBUILD` can't be used to trigger `FROM` instructions.
|
||||||
instructions.
|
|
||||||
|
|||||||
@@ -40,6 +40,8 @@ ambassador containers.
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
|
class: pic
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
---
|
---
|
||||||
|
|||||||
201
slides/intro/Application_Configuration.md
Normal file
@@ -0,0 +1,201 @@
|
|||||||
|
# Application Configuration
|
||||||
|
|
||||||
|
There are many ways to provide configuration to containerized applications.
|
||||||
|
|
||||||
|
There is no "best way" — it depends on factors like:
|
||||||
|
|
||||||
|
* configuration size,
|
||||||
|
|
||||||
|
* mandatory and optional parameters,
|
||||||
|
|
||||||
|
* scope of configuration (per container, per app, per customer, per site, etc),
|
||||||
|
|
||||||
|
* frequency of changes in the configuration.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Command-line parameters
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker run jpetazzo/hamba 80 www1:80 www2:80
|
||||||
|
```
|
||||||
|
|
||||||
|
* Configuration is provided through command-line parameters.
|
||||||
|
|
||||||
|
* In the above example, the `ENTRYPOINT` is a script that will:
|
||||||
|
|
||||||
|
- parse the parameters,
|
||||||
|
|
||||||
|
- generate a configuration file,
|
||||||
|
|
||||||
|
- start the actual service.
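As a sketch of what such an entrypoint might look like (the file path and the final `exec` hand-off are illustrative, not the actual `jpetazzo/hamba` script):

```shell
# Hypothetical entrypoint sketch: turn positional parameters
# (frontend port, then backend addresses) into a configuration file,
# then hand off to the real service.
generate_config() {
  port=$1; shift
  echo "listen :$port"
  for backend in "$@"; do
    echo "  server $backend"
  done
}

generate_config 80 www1:80 www2:80 > /tmp/lb.cfg
cat /tmp/lb.cfg
# exec haproxy -f /tmp/lb.cfg   # (the script would end by exec'ing the service)
```

The `exec` at the end matters: it lets the service replace the script as PID 1, so it receives signals (e.g. from `docker stop`) directly.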
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Command-line parameters pros and cons
|
||||||
|
|
||||||
|
* Appropriate for mandatory parameters (without which the service cannot start).
|
||||||
|
|
||||||
|
* Convenient for "toolbelt" services instantiated many times.
|
||||||
|
|
||||||
|
(Because there is no extra step: just run it!)
|
||||||
|
|
||||||
|
* Not great for dynamic configurations or bigger configurations.
|
||||||
|
|
||||||
|
(These things are still possible, but more cumbersome.)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Environment variables
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker run -e ELASTICSEARCH_URL=http://es42:9201/ kibana
|
||||||
|
```
|
||||||
|
|
||||||
|
* Configuration is provided through environment variables.
|
||||||
|
|
||||||
|
* The environment variable can be used straight by the program,
|
||||||
|
<br/>or by a script generating a configuration file.
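For instance, a wrapper script could render the variable into a configuration file before starting the service (a minimal sketch; the variable name echoes the Kibana example above, and the file path is illustrative):

```shell
# Use the environment variable if set; otherwise fall back to a default.
: "${ELASTICSEARCH_URL:=http://localhost:9200/}"

# Render a configuration file from it...
printf 'elasticsearch.url: %s\n' "$ELASTICSEARCH_URL" > /tmp/kibana.yml
cat /tmp/kibana.yml

# ...then the script would hand off to the real service:
# exec kibana --config /tmp/kibana.yml
```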
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Environment variables pros and cons
|
||||||
|
|
||||||
|
* Appropriate for optional parameters (since the image can provide default values).
|
||||||
|
|
||||||
|
* Also convenient for services instantiated many times.
|
||||||
|
|
||||||
|
(It's as easy as command-line parameters.)
|
||||||
|
|
||||||
|
* Great for services with lots of parameters, of which you only want to specify a few.
|
||||||
|
|
||||||
|
(And use default values for everything else.)
|
||||||
|
|
||||||
|
* Ability to introspect possible parameters and their default values.
|
||||||
|
|
||||||
|
* Not great for dynamic configurations.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Baked-in configuration
|
||||||
|
|
||||||
|
```
|
||||||
|
FROM prometheus
|
||||||
|
COPY prometheus.conf /etc
|
||||||
|
```
|
||||||
|
|
||||||
|
* The configuration is added to the image.
|
||||||
|
|
||||||
|
* The image may have a default configuration; the new configuration can:
|
||||||
|
|
||||||
|
- replace the default configuration,
|
||||||
|
|
||||||
|
- extend it (if the code can read multiple configuration files).
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Baked-in configuration pros and cons
|
||||||
|
|
||||||
|
* Allows arbitrary customization and complex configuration files.
|
||||||
|
|
||||||
|
* Requires writing a configuration file. (Obviously!)
|
||||||
|
|
||||||
|
* Requires building an image to start the service.
|
||||||
|
|
||||||
|
* Requires rebuilding the image to reconfigure the service.
|
||||||
|
|
||||||
|
* Requires rebuilding the image to upgrade the service.
|
||||||
|
|
||||||
|
* Configured images can be stored in registries.
|
||||||
|
|
||||||
|
(Which is great, but requires a registry.)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Configuration volume
|
||||||
|
|
||||||
|
```bash
|
||||||
|
docker run -v appconfig:/etc/appconfig myapp
|
||||||
|
```
|
||||||
|
|
||||||
|
* The configuration is stored in a volume.
|
||||||
|
|
||||||
|
* The volume is attached to the container.
|
||||||
|
|
||||||
|
* The image may have a default configuration.
|
||||||
|
|
||||||
|
(But this results in a less "obvious" setup, that needs more documentation.)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Configuration volume pros and cons
|
||||||
|
|
||||||
|
* Allows arbitrary customization and complex configuration files.
|
||||||
|
|
||||||
|
* Requires creating a volume for each different configuration.
|
||||||
|
|
||||||
|
* Services with identical configurations can use the same volume.
|
||||||
|
|
||||||
|
* Doesn't require building / rebuilding an image when upgrading / reconfiguring.
|
||||||
|
|
||||||
|
* Configuration can be generated or edited through another container.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Dynamic configuration volume
|
||||||
|
|
||||||
|
* This is a powerful pattern for dynamic, complex configurations.
|
||||||
|
|
||||||
|
* The configuration is stored in a volume.
|
||||||
|
|
||||||
|
* The configuration is generated / updated by a special container.
|
||||||
|
|
||||||
|
* The application container detects when the configuration is changed.
|
||||||
|
|
||||||
|
(And automatically reloads the configuration when necessary.)
|
||||||
|
|
||||||
|
* The configuration can be shared between multiple services if needed.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Dynamic configuration volume example
|
||||||
|
|
||||||
|
In a first terminal, start a load balancer with an initial configuration:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
$ docker run --name loadbalancer jpetazzo/hamba \
|
||||||
|
80 goo.gl:80
|
||||||
|
```
|
||||||
|
|
||||||
|
In another terminal, reconfigure that load balancer:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
$ docker run --rm --volumes-from loadbalancer jpetazzo/hamba reconfigure \
|
||||||
|
80 google.com:80
|
||||||
|
```
|
||||||
|
|
||||||
|
The configuration could also be updated through e.g. a REST API.
|
||||||
|
|
||||||
|
(The REST API being itself served from another container.)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Keeping secrets
|
||||||
|
|
||||||
|
.warning[Ideally, you should not put secrets (passwords, tokens...) in:]
|
||||||
|
|
||||||
|
* command-line or environment variables (anyone with Docker API access can get them),
|
||||||
|
|
||||||
|
* images, especially stored in a registry.
|
||||||
|
|
||||||
|
Secrets management is better handled with an orchestrator (like Swarm or Kubernetes).
|
||||||
|
|
||||||
|
Orchestrators allow passing secrets in a "one-way" manner.
|
||||||
|
|
||||||
|
Managing secrets securely without an orchestrator can be cumbersome.
|
||||||
|
|
||||||
|
E.g.:
|
||||||
|
|
||||||
|
- read the secret on stdin when the service starts,
|
||||||
|
|
||||||
|
- pass the secret using an API endpoint.
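A minimal sketch of the stdin approach (helper and file names are hypothetical): the deployment tool pipes the secret into the container's entrypoint, so the secret never lands in the environment or in `docker inspect` output.

```shell
# Hypothetical entrypoint helper: read the secret from stdin.
load_secret() {
  IFS= read -r APP_SECRET || return 1
  # The secret now lives only in this process; never log its value.
  printf 'secret of length %s loaded\n' "${#APP_SECRET}" > /tmp/secret-demo.out
}

# Simulated usage; a real deployment would pipe from a secret store, e.g.:
#   some-secret-store get db-password | docker run -i myapp
printf 'hunter2\n' | load_secret
cat /tmp/secret-demo.out
```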
|
||||||
@@ -117,7 +117,7 @@ CONTAINER ID IMAGE ... CREATED STATUS ...
|
|||||||
|
|
||||||
Many Docker commands will work on container IDs: `docker stop`, `docker rm`...
|
Many Docker commands will work on container IDs: `docker stop`, `docker rm`...
|
||||||
|
|
||||||
If we want to list only the IDs of our containers (without the other colums
|
If we want to list only the IDs of our containers (without the other columns
|
||||||
or the header line),
|
or the header line),
|
||||||
we can use the `-q` ("Quiet", "Quick") flag:
|
we can use the `-q` ("Quiet", "Quick") flag:
|
||||||
|
|
||||||
|
|||||||
@@ -93,20 +93,22 @@ The output of `docker build` looks like this:
|
|||||||
|
|
||||||
.small[
|
.small[
|
||||||
```bash
|
```bash
|
||||||
$ docker build -t figlet .
|
docker build -t figlet .
|
||||||
Sending build context to Docker daemon 2.048 kB
|
Sending build context to Docker daemon 2.048kB
|
||||||
Sending build context to Docker daemon
|
Step 1/3 : FROM ubuntu
|
||||||
Step 0 : FROM ubuntu
|
---> f975c5035748
|
||||||
---> e54ca5efa2e9
|
Step 2/3 : RUN apt-get update
|
||||||
Step 1 : RUN apt-get update
|
---> Running in e01b294dbffd
|
||||||
---> Running in 840cb3533193
|
(...output of the RUN command...)
|
||||||
---> 7257c37726a1
|
Removing intermediate container e01b294dbffd
|
||||||
Removing intermediate container 840cb3533193
|
---> eb8d9b561b37
|
||||||
Step 2 : RUN apt-get install figlet
|
Step 3/3 : RUN apt-get install figlet
|
||||||
---> Running in 2b44df762a2f
|
---> Running in c29230d70f9b
|
||||||
---> f9e8f1642759
|
(...output of the RUN command...)
|
||||||
Removing intermediate container 2b44df762a2f
|
Removing intermediate container c29230d70f9b
|
||||||
Successfully built f9e8f1642759
|
---> 0dfd7a253f21
|
||||||
|
Successfully built 0dfd7a253f21
|
||||||
|
Successfully tagged figlet:latest
|
||||||
```
|
```
|
||||||
]
|
]
|
||||||
|
|
||||||
@@ -134,20 +136,20 @@ Sending build context to Docker daemon 2.048 kB
|
|||||||
## Executing each step
|
## Executing each step
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
Step 1 : RUN apt-get update
|
Step 2/3 : RUN apt-get update
|
||||||
---> Running in 840cb3533193
|
---> Running in e01b294dbffd
|
||||||
(...output of the RUN command...)
|
(...output of the RUN command...)
|
||||||
---> 7257c37726a1
|
Removing intermediate container e01b294dbffd
|
||||||
Removing intermediate container 840cb3533193
|
---> eb8d9b561b37
|
||||||
```
|
```
|
||||||
|
|
||||||
* A container (`840cb3533193`) is created from the base image.
|
* A container (`e01b294dbffd`) is created from the base image.
|
||||||
|
|
||||||
* The `RUN` command is executed in this container.
|
* The `RUN` command is executed in this container.
|
||||||
|
|
||||||
* The container is committed into an image (`7257c37726a1`).
|
* The container is committed into an image (`eb8d9b561b37`).
|
||||||
|
|
||||||
* The build container (`840cb3533193`) is removed.
|
* The build container (`e01b294dbffd`) is removed.
|
||||||
|
|
||||||
* The output of this step will be the base image for the next one.
|
* The output of this step will be the base image for the next one.
|
||||||
|
|
||||||
|
|||||||
@@ -64,6 +64,7 @@ Let's build it:
|
|||||||
$ docker build -t figlet .
|
$ docker build -t figlet .
|
||||||
...
|
...
|
||||||
Successfully built 042dff3b4a8d
|
Successfully built 042dff3b4a8d
|
||||||
|
Successfully tagged figlet:latest
|
||||||
```
|
```
|
||||||
|
|
||||||
And run it:
|
And run it:
|
||||||
@@ -165,6 +166,7 @@ Let's build it:
|
|||||||
$ docker build -t figlet .
|
$ docker build -t figlet .
|
||||||
...
|
...
|
||||||
Successfully built 36f588918d73
|
Successfully built 36f588918d73
|
||||||
|
Successfully tagged figlet:latest
|
||||||
```
|
```
|
||||||
|
|
||||||
And run it:
|
And run it:
|
||||||
@@ -223,6 +225,7 @@ Let's build it:
|
|||||||
$ docker build -t figlet .
|
$ docker build -t figlet .
|
||||||
...
|
...
|
||||||
Successfully built 6e0b6a048a07
|
Successfully built 6e0b6a048a07
|
||||||
|
Successfully tagged figlet:latest
|
||||||
```
|
```
|
||||||
|
|
||||||
Run it without parameters:
|
Run it without parameters:
|
||||||
|
|||||||
@@ -49,7 +49,7 @@ Before diving in, let's see a small example of Compose in action.
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Compose in action
|
class: pic
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
@@ -60,6 +60,10 @@ Before diving in, let's see a small example of Compose in action.
|
|||||||
If you are using the official training virtual machines, Compose has been
|
If you are using the official training virtual machines, Compose has been
|
||||||
pre-installed.
|
pre-installed.
|
||||||
|
|
||||||
|
If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them.
|
||||||
|
|
||||||
|
If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`.
|
||||||
|
|
||||||
You can always check that it is installed by running:
|
You can always check that it is installed by running:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
@@ -135,22 +139,33 @@ services:
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Compose file versions
|
## Compose file structure
|
||||||
|
|
||||||
Version 1 directly has the various containers (`www`, `redis`...) at the top level of the file.
|
A Compose file has multiple sections:
|
||||||
|
|
||||||
Version 2 has multiple sections:
|
* `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.)
|
||||||
|
|
||||||
* `version` is mandatory and should be `"2"`.
|
* `services` is mandatory. A service is one or more replicas of the same image running as containers.
|
||||||
|
|
||||||
* `services` is mandatory and corresponds to the content of the version 1 format.
|
|
||||||
|
|
||||||
* `networks` is optional and indicates to which networks containers should be connected.
|
* `networks` is optional and indicates to which networks containers should be connected.
|
||||||
<br/>(By default, containers will be connected on a private, per-app network.)
|
<br/>(By default, containers will be connected on a private, per-compose-file network.)
|
||||||
|
|
||||||
* `volumes` is optional and can define volumes to be used and/or shared by the containers.
|
* `volumes` is optional and can define volumes to be used and/or shared by the containers.
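Putting these sections together, a minimal version 2 Compose file might look like this (service and image names are illustrative):

```yaml
version: "2"

services:
  www:
    image: myapp            # illustrative application image
    ports:
      - "8000:5000"
    volumes:
      - appdata:/var/lib/myapp

  redis:
    image: redis

volumes:
  appdata: {}
```

The optional `networks` section is omitted here, so Compose connects both services to a default per-app network.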
|
||||||
|
|
||||||
Version 3 adds support for deployment options (scaling, rolling updates, etc.)
|
---
|
||||||
|
|
||||||
|
## Compose file versions
|
||||||
|
|
||||||
|
* Version 1 is legacy and shouldn't be used.
|
||||||
|
|
||||||
|
(If you see a Compose file without `version` and `services`, it's a legacy v1 file.)
|
||||||
|
|
||||||
|
* Version 2 added support for networks and volumes.
|
||||||
|
|
||||||
|
* Version 3 added support for deployment options (scaling, rolling updates, etc).
|
||||||
|
|
||||||
|
The [Docker documentation](https://docs.docker.com/compose/compose-file/)
|
||||||
|
has excellent information about the Compose file format if you need to know more about versions.
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -260,6 +275,8 @@ Removing trainingwheels_www_1 ... done
|
|||||||
Removing trainingwheels_redis_1 ... done
|
Removing trainingwheels_redis_1 ... done
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Use `docker-compose down -v` to remove everything including volumes.
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Special handling of volumes
|
## Special handling of volumes
|
||||||
|
|||||||
177
slides/intro/Container_Engines.md
Normal file
@@ -0,0 +1,177 @@
|
|||||||
|
# Docker Engine and other container engines
|
||||||
|
|
||||||
|
* We are going to cover the architecture of the Docker Engine.
|
||||||
|
|
||||||
|
* We will also present other container engines.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
class: pic
|
||||||
|
|
||||||
|
## Docker Engine external architecture
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Docker Engine external architecture
|
||||||
|
|
||||||
|
* The Engine is a daemon (service running in the background).
|
||||||
|
|
||||||
|
* All interaction is done through a REST API exposed over a socket.
|
||||||
|
|
||||||
|
* On Linux, the default socket is a UNIX socket: `/var/run/docker.sock`.
|
||||||
|
|
||||||
|
* We can also use a TCP socket, with optional mutual TLS authentication.
|
||||||
|
|
||||||
|
* The `docker` CLI communicates with the Engine over the socket.
|
||||||
|
|
||||||
|
Note: strictly speaking, the Docker API is not fully REST.
|
||||||
|
|
||||||
|
Some operations (e.g. dealing with interactive containers
|
||||||
|
and log streaming) don't fit the REST model.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
class: pic
|
||||||
|
|
||||||
|
## Docker Engine internal architecture
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Docker Engine internal architecture
|
||||||
|
|
||||||
|
* Up to Docker 1.10, the Docker Engine was one single monolithic binary.
|
||||||
|
|
||||||
|
* Starting with Docker 1.11, the Engine is split into multiple parts:
|
||||||
|
|
||||||
|
- `dockerd` (REST API, auth, networking, storage)
|
||||||
|
|
||||||
|
- `containerd` (container lifecycle, controlled over a gRPC API)
|
||||||
|
|
||||||
|
- `containerd-shim` (per-container; does almost nothing, but makes it possible to restart the Engine without restarting the containers)
|
||||||
|
|
||||||
|
- `runc` (per-container; does the actual heavy lifting to start the container)
|
||||||
|
|
||||||
|
* Some features (like image and snapshot management) are progressively being pushed from `dockerd` to `containerd`.
|
||||||
|
|
||||||
|
For more details, check [this short presentation by Phil Estes](https://www.slideshare.net/PhilEstes/diving-through-the-layers-investigating-runc-containerd-and-the-docker-engine-architecture).
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Other container engines
|
||||||
|
|
||||||
|
The following list is not exhaustive.
|
||||||
|
|
||||||
|
Furthermore, we limited the scope to Linux containers.
|
||||||
|
|
||||||
|
Containers also exist (sometimes with other names) on Windows, macOS, Solaris, FreeBSD ...
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## LXC
|
||||||
|
|
||||||
|
* The venerable ancestor (first released in 2008).
|
||||||
|
|
||||||
|
* Docker initially relied on it to execute containers.
|
||||||
|
|
||||||
|
* No daemon; no central API.
|
||||||
|
|
||||||
|
* Each container is managed by a `lxc-start` process.
|
||||||
|
|
||||||
|
* Each `lxc-start` process exposes a custom API over a local UNIX socket, allowing interaction with the container.
|
||||||
|
|
||||||
|
* No notion of image (container filesystems have to be managed manually).
|
||||||
|
|
||||||
|
* Networking has to be set up manually.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## LXD
|
||||||
|
|
||||||
|
* Re-uses LXC code (through liblxc).
|
||||||
|
|
||||||
|
* Builds on top of LXC to offer a more modern experience.
|
||||||
|
|
||||||
|
* Daemon exposing a REST API.
|
||||||
|
|
||||||
|
* Can manage images, snapshots, migrations, networking, storage.
|
||||||
|
|
||||||
|
* "offers a user experience similar to virtual machines but using Linux containers instead."
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## rkt
|
||||||
|
|
||||||
|
* Compares to `runc`.
|
||||||
|
|
||||||
|
* No daemon or API.
|
||||||
|
|
||||||
|
* Strong emphasis on security (through privilege separation).
|
||||||
|
|
||||||
|
* Networking has to be set up separately (e.g. through CNI plugins).
|
||||||
|
|
||||||
|
* Partial image management (pull, but no push).
|
||||||
|
|
||||||
|
(Image build is handled by separate tools.)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## CRI-O
|
||||||
|
|
||||||
|
* Designed to be used with Kubernetes as a simple, basic runtime.
|
||||||
|
|
||||||
|
* Compares to `containerd`.
|
||||||
|
|
||||||
|
* Daemon exposing a gRPC interface.
|
||||||
|
|
||||||
|
* Controlled using the CRI API (Container Runtime Interface defined by Kubernetes).
|
||||||
|
|
||||||
|
* Needs an underlying OCI runtime (e.g. runc).
|
||||||
|
|
||||||
|
* Handles storage, images, networking (through CNI plugins).
|
||||||
|
|
||||||
|
We're not aware of anyone using it directly (i.e. outside of Kubernetes).
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## systemd
|
||||||
|
|
||||||
|
* "init" system (PID 1) in most modern Linux distributions.
|
||||||
|
|
||||||
|
* Offers tools like `systemd-nspawn` and `machinectl` to manage containers.
|
||||||
|
|
||||||
|
* According to its manual page, `systemd-nspawn` is "in many ways similar to chroot(1), but more powerful".
|
||||||
|
|
||||||
|
* `machinectl` can interact with VMs and containers managed by systemd.
|
||||||
|
|
||||||
|
* Exposes a DBUS API.
|
||||||
|
|
||||||
|
* Basic image support (tar archives and raw disk images).
|
||||||
|
|
||||||
|
* Networking has to be set up manually.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Overall ...
|
||||||
|
|
||||||
|
* The Docker Engine is very developer-centric:
|
||||||
|
|
||||||
|
- easy to install
|
||||||
|
|
||||||
|
- easy to use
|
||||||
|
|
||||||
|
- no manual setup
|
||||||
|
|
||||||
|
- first-class image build and transfer
|
||||||
|
|
||||||
|
* As a result, it is a fantastic tool in development environments.
|
||||||
|
|
||||||
|
* On servers:
|
||||||
|
|
||||||
|
- Docker is a good default choice
|
||||||
|
|
||||||
|
- If you use Kubernetes, the engine doesn't matter
|
||||||
|
|
||||||
@@ -65,9 +65,17 @@ eb0eeab782f4 host host
|
|||||||
|
|
||||||
* A network is managed by a *driver*.
|
* A network is managed by a *driver*.
|
||||||
|
|
||||||
* All the drivers that we have seen before are available.
|
* The built-in drivers include:
|
||||||
|
|
||||||
* A new multi-host driver, *overlay*, is available out of the box.
|
* `bridge` (default)
|
||||||
|
|
||||||
|
* `none`
|
||||||
|
|
||||||
|
* `host`
|
||||||
|
|
||||||
|
* `macvlan`
|
||||||
|
|
||||||
|
* A multi-host driver, *overlay*, is available out of the box (for Swarm clusters).
|
||||||
|
|
||||||
* More drivers can be provided by plugins (OVS, VLAN...)
|
* More drivers can be provided by plugins (OVS, VLAN...)
|
||||||
|
|
||||||
---

class: extra-details

## Differences with the CNI

* CNI = Container Network Interface
---

class: pic

## Single container in a Docker network

![bridge0](images/bridge1.png)

---

class: pic

## Two containers on two Docker networks

![bridge2](images/bridge2.png)

---

## Creating a network

Let's create a network called `dev`.
---

class: extra-details

## Names are *local* to each network
Create the `prod` network.

```bash
$ docker network create prod
5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c
```
When creating a network, extra options can be provided.

* `--internal` disables outbound traffic (the network won't have a default gateway).

* `--gateway` indicates which address to use for the gateway (when outbound traffic is allowed).

* `--subnet` (in CIDR notation) indicates the subnet to use.
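Since `--subnet` takes CIDR notation, it can help to sanity-check how many addresses a given prefix length covers. A small illustrative shell helper (not a Docker command; the function name is ours):

```shell
# Illustrative helper: number of IPv4 addresses covered by a CIDR prefix,
# e.g. when sizing a --subnet for a network.
cidr_size() {
  prefix=${1#*/}                 # keep what follows the "/"
  echo $(( 1 << (32 - prefix) ))
}

cidr_size 192.168.0.0/24   # 256 addresses
cidr_size 10.0.0.0/16      # 65536 addresses
```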
* If containers span multiple hosts, we need an *overlay* network to connect them together.

* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN, *enabled with Swarm Mode*.

* Other plugins (Weave, Calico...) can provide overlay networks as well.

* Once you have an overlay network, *all the features that we've used in this chapter work identically across multiple hosts.*
---

## Connecting and disconnecting dynamically

* So far, we have specified which network to use when starting the container.

* The Docker Engine also allows us to connect and disconnect while the container is running.

* This feature is exposed through the Docker API, and through two Docker CLI commands:

  * `docker network connect <network> <container>`

  * `docker network disconnect <network> <container>`
---

## Dynamically connecting to a network

* We have a container named `es` connected to a network named `dev`.

* Let's start a simple alpine container on the default network:

  ```bash
  $ docker run -ti alpine sh
  / #
  ```

* In this container, try to ping the `es` container:

  ```bash
  / # ping es
  ping: bad address 'es'
  ```

  This doesn't work, but we will change that by connecting the container.
---

## Finding the container ID and connecting it

* Figure out the ID of our alpine container; here are two methods:

  * looking at `/etc/hostname` in the container,

  * running `docker ps -lq` on the host.

* Run the following command on the host:

  ```bash
  $ docker network connect dev `<container_id>`
  ```
---

## Checking what we did

* Try again to `ping es` from the container.

* It should now work correctly:

  ```bash
  / # ping es
  PING es (172.20.0.3): 56 data bytes
  64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms
  64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms
  ^C
  ```

* Interrupt it with Ctrl-C.
---

## Looking at the network setup in the container

We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`:

.small[
```bash
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1
       valid_lft forever preferred_lft forever
/ #
```
]

Each network connection is materialized with a virtual network interface.

As we can see, we can be connected to multiple networks at the same time.
---

## Disconnecting from a network

* Let's try the symmetrical command to disconnect the container:

  ```bash
  $ docker network disconnect dev <container_id>
  ```

* From now on, if we try to ping `es`, it will not resolve:

  ```bash
  / # ping es
  ping: bad address 'es'
  ```

* Trying to ping the IP address directly won't work either:

  ```bash
  / # ping 172.20.0.3
  ... (nothing happens until we interrupt it with Ctrl-C)
  ```
---

class: extra-details

## Network aliases are scoped per network

* Each network has its own set of network aliases.

* We saw this earlier: `es` resolves to different addresses in `dev` and `prod`.

* If we are connected to multiple networks, the resolver looks up names in each of them (as of Docker Engine 18.03, in the connection order) and stops as soon as the name is found.

* Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not** give us the addresses of all the `es` services; only the ones in `dev` or `prod`.

* However, we can look up `es.dev` or `es.prod` if we need to.
---

class: extra-details

## Finding out about our networks and names

* We can do reverse DNS lookups on containers' IP addresses.

* If the IP address belongs to a network (other than the default bridge), the result will be:

  ```
  name-or-first-alias-or-container-id.network-name
  ```

* Example:

.small[
```bash
$ docker run -ti --net prod --net-alias hello alpine
/ # apk add --no-cache drill
...
OK: 5 MiB in 13 packages
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:15:00:03
          inet addr:`172.21.0.3`  Bcast:172.21.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
...
/ # drill -t ptr `3.0.21.172`.in-addr.arpa
...
;; ANSWER SECTION:
3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`.
...
```
]
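The reversed-octets name passed to `drill` can be derived mechanically from the IP address. Here is a small shell sketch (not a Docker feature; the helper name is ours) that builds the PTR query name for any IPv4 address:

```shell
# Build the reverse-DNS (PTR) query name for an IPv4 address,
# like the 3.0.21.172.in-addr.arpa used with drill above:
# the four octets are reversed and ".in-addr.arpa" is appended.
ptr_name() {
  echo "$1" | awk -F. '{ printf "%s.%s.%s.%s.in-addr.arpa\n", $4, $3, $2, $1 }'
}

ptr_name 172.21.0.3   # 3.0.21.172.in-addr.arpa
```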
We will use `docker ps`:

```bash
$ docker ps
CONTAINER ID        IMAGE   ...   PORTS                   ...
e40ffb406c9e        nginx   ...   0.0.0.0:32768->80/tcp   ...
```

* The web server is running on port 80 inside the container.

* This port is mapped to port 32768 on our Docker host.

We will explain the whys and hows of this port mapping.
Make sure to use the right port number if it is different from the example below:

```bash
$ curl localhost:32768
<!DOCTYPE html>
<html>
<head>
...
```
---

## How does Docker know which port to map?

* There is metadata in the image saying "this image has something on port 80".

* We can see that metadata with `docker inspect`:

  ```bash
  $ docker inspect --format '{{.Config.ExposedPorts}}' nginx
  map[80/tcp:{}]
  ```

* This metadata was set in the Dockerfile, with the `EXPOSE` keyword.

* We can see that with `docker history`:

  ```bash
  $ docker history nginx
  IMAGE               CREATED             CREATED BY
  7f70b30f2cc6        11 days ago         /bin/sh -c #(nop)  CMD ["nginx" "-g" "…
  <missing>           11 days ago         /bin/sh -c #(nop)  STOPSIGNAL [SIGTERM]
  <missing>           11 days ago         /bin/sh -c #(nop)  EXPOSE 80/tcp
  ```
---

## Why are we mapping ports?

* We are out of IPv4 addresses.
There is a command to help us:

```bash
$ docker port <containerID> 80
32768
```

---
```bash
...
$ docker run -d -p 8000:80 nginx
$ docker run -d -p 8080:80 -p 8888:80 nginx
```

* We are running three NGINX web servers.
* The first one is exposed on port 80.
* The second one is exposed on port 8000.
* The third one is exposed on ports 8080 and 8888.
slides/intro/Containers_From_Scratch.md (new file):

# Building containers from scratch

(This is a "bonus section" done if time permits.)

slides/intro/Copy_On_Write.md (new file):
# Copy-on-write filesystems

Container engines rely on copy-on-write to be able to start containers quickly, regardless of their size.

We will explain how that works, and review some of the copy-on-write storage systems available on Linux.

---

## What is copy-on-write?

- Copy-on-write is a mechanism for sharing data.

- The data appears to be a copy, but is only a link (or reference) to the original data.

- The actual copy happens only when someone tries to change the shared data.

- Whoever changes the shared data ends up using their own copy instead of the shared data.
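A rough feel for "the copy is only a reference" can be had with plain hard links. This is only a sketch, not real copy-on-write: hard links share data, but lack the "make a private copy on change" step that actual COW systems add.

```shell
# Sketch only: a hard-link "copy" is instant and shares the same inodes.
# (Uses GNU cp's -l flag to link instead of copying file contents.)
dir=$(mktemp -d)
mkdir "$dir/base"
echo hello > "$dir/base/file"
cp -al "$dir/base" "$dir/clone"              # instant "copy" of the tree
[ "$dir/base/file" -ef "$dir/clone/file" ] && echo "same inode"
rm -rf "$dir"
```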
---

## A few metaphors

--

- First metaphor:
  <br/>white board and tracing paper

--

- Second metaphor:
  <br/>magic books with shadowy pages

--

- Third metaphor:
  <br/>just-in-time house building

---

## Copy-on-write is *everywhere*

- Process creation with `fork()`.

- Consistent disk snapshots.

- Efficient VM provisioning.

- And, of course, containers.

---

## Copy-on-write and containers

Copy-on-write is essential to give us "convenient" containers.

- Creating a new container (from an existing image) is "free".

  (Otherwise, we would have to copy the image first.)

- Customizing a container (by tweaking a few files) is cheap.

  (Adding a 1 KB configuration file to a 1 GB container takes 1 KB, not 1 GB.)

- We can take snapshots, i.e. have "checkpoints" or "save points" when building images.
---

## AUFS overview

- The original (legacy) copy-on-write filesystem, used by the first versions of Docker.

- Combines multiple *branches* in a specific order.

- Each branch is just a normal directory.

- You generally have:

  - at least one read-only branch (at the bottom),

  - exactly one read-write branch (at the top).

  (But other fun combinations are possible too!)
---

## AUFS operations: opening a file

- With `O_RDONLY` - read-only access:

  - look it up in each branch, starting from the top

  - open the first one we find

- With `O_WRONLY` or `O_RDWR` - write access:

  - if the file exists on the top branch: open it

  - if the file exists on another branch: "copy up"
    <br/>
    (i.e. copy the file to the top branch and open the copy)

  - if the file doesn't exist on any branch: create it on the top branch

That "copy-up" operation can take a while if the file is big!
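The read-only lookup above can be sketched with plain directories standing in for branches. This is a toy illustration of the top-down search, not actual AUFS code:

```shell
# Toy model of the O_RDONLY lookup: search branches top-down,
# return the path in the first branch that contains the file.
lookup() {
  name=$1; shift
  for branch in "$@"; do            # branches listed top branch first
    if [ -e "$branch/$name" ]; then
      echo "$branch/$name"
      return 0
    fi
  done
  return 1                          # not found in any branch
}

mkdir -p top bottom
echo "from bottom" > bottom/motd
echo "from top"    > top/motd
lookup motd top bottom    # top/motd (the top branch shadows the bottom one)
```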
---

## AUFS operations: deleting a file

- A *whiteout* file is created.

- This is similar to the concept of "tombstones" used in some data systems.

```
# docker run ubuntu rm /etc/shadow

# ls -la /var/lib/docker/aufs/diff/$(docker ps --no-trunc -lq)/etc
total 8
drwxr-xr-x 2 root root 4096 Jan 27 15:36 .
drwxr-xr-x 5 root root 4096 Jan 27 15:36 ..
-r--r--r-- 2 root root    0 Jan 27 15:36 .wh.shadow
```
---

## AUFS performance

- AUFS `mount()` is fast, so creation of containers is quick.

- Read/write access has native speeds.

- But initial `open()` is expensive in two scenarios:

  - when writing big files (log files, databases ...),

  - when searching many directories (PATH, classpath, etc.) over many layers.

- Protip: when we built dotCloud, we ended up putting all important data on *volumes*.

- When starting the same container multiple times:

  - the data is loaded only once from disk, and cached only once in memory;

  - but `dentries` will be duplicated.
---

## Device Mapper

Device Mapper is a rich subsystem with many features.

It can be used for: RAID, encrypted devices, snapshots, and more.

In the context of containers (and Docker in particular), "Device Mapper" means:

"the Device Mapper system + its *thin provisioning target*"

If you see the abbreviation "thinp", it stands for "thin provisioning".
---

## Device Mapper principles

- Copy-on-write happens at the *block* level (instead of the *file* level).

- Each container and each image get their own block device.

- At any given time, it is possible to take a snapshot:

  - of an existing container (to create a frozen image),

  - of an existing image (to create a container from it).

- If a block has never been written to:

  - it's assumed to be all zeros,

  - it's not allocated on disk.

(That last property is the reason for the name "thin" provisioning.)
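The "not allocated until written" behavior is the same trick that sparse files use. This shell sketch (on any filesystem with sparse-file support) creates a file whose apparent size far exceeds its allocated blocks:

```shell
# Sketch of thin provisioning with a sparse file: the apparent size
# is 1 GB, but (almost) no blocks are allocated until data is written.
f=$(mktemp)
truncate -s 1G "$f"
stat -c 'apparent size: %s bytes, allocated blocks: %b' "$f"
rm -f "$f"
```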
---

## Device Mapper operational details

- Two storage areas are needed: one for *data*, another for *metadata*.

- "data" is also called the "pool"; it's just a big pool of blocks.

  (Docker uses the smallest possible block size, 64 KB.)

- "metadata" contains the mappings between virtual offsets (in the snapshots) and physical offsets (in the pool).

- Each time a new block (or a copy-on-write block) is written, a block is allocated from the pool.

- When there are no more blocks in the pool, attempts to write will stall until the pool is increased (or the write operation is aborted).

- In other words: when running out of space, containers are frozen, but operations will resume as soon as space is available.
---

## Device Mapper performance

- By default, Docker puts data and metadata on a loop device backed by a sparse file.

- This is great from a usability point of view, since zero configuration is needed.

- But it is terrible from a performance point of view:

  - each time a container writes to a new block,
  - a block has to be allocated from the pool,
  - and when it's written to,
  - a block has to be allocated from the sparse file,
  - and sparse file performance isn't great anyway.

- If you use Device Mapper, make sure to put data (and metadata) on devices!
---

## BTRFS principles

- BTRFS is a filesystem (like EXT4, XFS, NTFS...) with built-in snapshots.

- The "copy-on-write" happens at the filesystem level.

- BTRFS integrates the snapshot and block pool management features at the filesystem level.

  (Instead of at the block level for Device Mapper.)

- In practice, we create a "subvolume" and later take a "snapshot" of that subvolume.

  Imagine: `mkdir` with Super Powers and `cp -a` with Super Powers.

- These operations can be executed with the `btrfs` CLI tool.
---

## BTRFS in practice with Docker

- Docker can use BTRFS and its snapshotting features to store container images.

- The only requirement is that `/var/lib/docker` is on a BTRFS filesystem.

  (Or the directory specified with the `--data-root` flag when starting the engine.)
---

class: extra-details

## BTRFS quirks

- BTRFS works by dividing its storage into *chunks*.

- A chunk can contain data or metadata.

- You can run out of chunks (and get `No space left on device`) even though `df` shows space available.

  (Because chunks are only partially allocated.)

- Quick fix:

  ```
  # btrfs filesys balance start -dusage=1 /var/lib/docker
  ```
---

## Overlay2

- Overlay2 is very similar to AUFS.

- However, it has been merged into the "upstream" kernel.

- It is therefore available on all modern kernels.

  (AUFS was available on Debian and Ubuntu, but required custom kernels on other distros.)

- It is simpler than AUFS (it can only have two branches, called "layers").

- The container engine abstracts this detail, so this is not a concern.

- Overlay2 storage drivers generally use hard links between layers.

- This improves `stat()` and `open()` performance, at the expense of inode usage.
---

## ZFS

- ZFS is similar to BTRFS (at least from a container user's perspective).

- Pros:

  - high performance
  - high reliability (with e.g. data checksums)
  - optional data compression and deduplication

- Cons:

  - high memory usage
  - not in the upstream kernel

- It is available as a kernel module or through FUSE.
---

## Which one is the best?

- Eventually, overlay2 should be the best option.

- It is available on all modern systems.

- Its memory usage is better than Device Mapper, BTRFS, or ZFS.

- The remarks about *write performance* shouldn't bother you:
  <br/>
  data should always be stored in volumes anyway!
## Testing our C program

* Create `hello.c` and `Dockerfile` in the same directory.

* Run `docker build -t hello .` in this directory.

* Older Dockerfiles also have the `ADD` instruction.
  <br/>It is similar but can automatically extract archives.

* If we really wanted to compile C code in a container, we would:

  * Place it in a different directory, with the `WORKDIR` instruction.
* [Solaris Containers (2004)](https://en.wikipedia.org/wiki/Solaris_Containers)

* [FreeBSD jails (1999-2000)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)

Containers have been around for a *very long time* indeed.

(See [this excellent blog post by Serge Hallyn](https://s3hh.wordpress.com/2018/03/22/history-of-containers/) for more historic details.)

---

class: pic
slides/intro/Docker_Machine.md (new file):

# Managing hosts with Docker Machine

- Docker Machine is a tool to provision and manage Docker hosts.

- It automates the creation of a virtual machine:

  - locally, with a tool like VirtualBox or VMware;

  - on a public cloud like AWS EC2, Azure, Digital Ocean, GCP, etc.;

  - on a private cloud like OpenStack.

- It can also configure existing machines through an SSH connection.

- It can manage as many hosts as you want, with as many "drivers" as you want.

---

## Docker Machine workflow

1) Prepare the environment: set up VirtualBox, obtain cloud credentials ...

2) Create hosts with `docker-machine create -d drivername machinename`.

3) Use a specific machine with `eval $(docker-machine env machinename)`.

4) Profit!
---

## Environment variables

- Most of the tools (CLI, libraries...) connecting to the Docker API can use environment variables.

- These variables are:

  - `DOCKER_HOST` (indicates the address+port to connect to, or the path of a UNIX socket)

  - `DOCKER_TLS_VERIFY` (indicates that TLS mutual auth should be used)

  - `DOCKER_CERT_PATH` (path to the keypair and certificate to use for auth)

- `docker-machine env ...` will generate the variables needed to connect to a host.

- `eval $(docker-machine env ...)` sets these variables in the current shell.
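For illustration, the output of `docker-machine env` is a series of `export` lines along these lines (the machine name, address, and path below are made-up example values):

```shell
# Illustrative output of `docker-machine env node1` (values are examples):
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/node1"
export DOCKER_MACHINE_NAME="node1"
# Run this command to configure your shell:
# eval $(docker-machine env node1)
```

Evaluating these exports is what makes subsequent `docker` commands talk to the selected host.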
---

## Host management features

With `docker-machine`, we can:

- upgrade a host to the latest version of the Docker Engine,

- start/stop/restart hosts,

- get a shell on a remote machine (with SSH),

- copy files to/from remote machines (with SCP),

- mount a remote host's directory on the local machine (with SSHFS),

- ...

---

## The `generic` driver

When provisioning a new host, `docker-machine` executes these steps:

1) Create the host using a cloud or hypervisor API.

2) Connect to the host over SSH.

3) Install and configure Docker on the host.

With the `generic` driver, we provide the IP address of an existing host (instead of e.g. cloud credentials) and we omit the first step.

This allows us to provision physical machines, or VMs provided by a 3rd party, or to use a cloud for which we don't have a provisioning API.
class: pic

## The parallel with the shipping industry

![shipping](images/shipping-industry.png)
```bash
FROM python
WORKDIR /src
COPY . .
RUN pip install -qr requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
```
Adding the dependencies as a separate step means that Docker can cache more efficiently.

```bash
FROM python
COPY requirements.txt /tmp/requirements.txt
RUN pip install -qr /tmp/requirements.txt
WORKDIR /src
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```
* The build fails as soon as an instruction fails

* If `RUN <unit tests>` fails, the build doesn't produce an image

* If it succeeds, it produces a clean image (without test libraries and data)
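As a sketch of what the points above could look like in practice (file names, the test runner, and the two-stage layout here are our own illustrative choices, not from the original deck):

```dockerfile
# Hypothetical sketch: run tests in a build stage; if `pytest` fails,
# the build aborts and no final image is produced.
FROM python AS build
WORKDIR /src
COPY requirements.txt .
RUN pip install -qr requirements.txt
COPY . .
RUN pip install -q pytest && pytest tests/

# The final stage starts fresh, so test libraries installed above
# are not part of the image that gets shipped.
FROM python
WORKDIR /src
COPY --from=build /src .
RUN pip install -qr requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
```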
---

# Dockerfile examples

There are a number of tips, tricks, and techniques that we can use in Dockerfiles.

But sometimes, we have to use different (and even opposed) practices depending on:

- the complexity of our project,

- the programming language or framework that we are using,

- the stage of our project (early MVP vs. super-stable production),

- whether we're building a final image or a base for further images,

- etc.

We are going to show a few examples using very different techniques.
---

## When to optimize an image

When authoring official images, it is a good idea to reduce as much as possible:

- the number of layers,

- the size of the final image.

This is often done at the expense of build time and convenience for the image maintainer; but when an image is downloaded millions of times, saving even a few seconds of pull time can be worth it.

.small[
```dockerfile
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
  && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
  && docker-php-ext-install gd
...
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
  && echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
  && tar -xzf wordpress.tar.gz -C /usr/src/ \
  && rm wordpress.tar.gz \
  && chown -R www-data:www-data /usr/src/wordpress
```
]

(Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile))
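The `sha1sum -c -` line above checks the downloaded tarball against a pinned checksum before unpacking it. The idiom is easy to try outside of a build; the file name and contents below are just for illustration:

```shell
# Create a file and record its SHA-1, playing the role of $WORDPRESS_SHA1.
echo "hello world" > demo.txt
sha1=$(sha1sum demo.txt | cut -d' ' -f1)

# Verify it the same way: "<checksum> *<filename>" piped into sha1sum -c -
echo "$sha1 *demo.txt" | sha1sum -c -     # prints "demo.txt: OK"

# A corrupted download makes the check (and thus the whole RUN line) fail.
echo "tampered" > demo.txt
echo "$sha1 *demo.txt" | sha1sum -c - || echo "checksum mismatch detected"
```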
---

## When to *not* optimize an image

Sometimes, it is better to prioritize *maintainer convenience*.

In particular, if:

- the image changes a lot,

- the image has very few users (e.g. only 1, the maintainer!),

- the image is built and run on the same machine,

- the image is built and run on machines with a very fast link ...

In these cases, just keep things simple!

(Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.)
---

```dockerfile
FROM debian:sid

RUN apt-get update -q
RUN apt-get install -yq build-essential make
RUN apt-get install -yq zlib1g-dev
RUN apt-get install -yq ruby ruby-dev
RUN apt-get install -yq python-pygments
RUN apt-get install -yq nodejs
RUN apt-get install -yq cmake
RUN gem install --no-rdoc --no-ri github-pages

COPY . /blog
WORKDIR /blog

VOLUME /blog/_site

EXPOSE 4000
CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"]
```
---

## Multi-dimensional versioning systems

Images can have a tag, indicating the version of the image.

But sometimes, there are multiple important components, and we need to indicate the versions for all of them.

This can be done with environment variables:

```dockerfile
ENV PIP=9.0.3 \
    ZC_BUILDOUT=2.11.2 \
    SETUPTOOLS=38.7.0 \
    PLONE_MAJOR=5.1 \
    PLONE_VERSION=5.1.0 \
    PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d
```

(Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile))
---

## Entrypoints and wrappers

It is very common to define a custom entrypoint.

That entrypoint will generally be a script, performing any combination of:

- pre-flight checks (if a required dependency is not available, display a nice error message early instead of an obscure one in a deep log file),

- generation or validation of configuration files,

- dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`),

- and more.
---

## A typical entrypoint script

```bash
#!/bin/sh
set -e

# first arg is '-f' or '--some-option'
# or first arg is 'something.conf'
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
    set -- redis-server "$@"
fi

# allow the container to be started with '--user'
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
    chown -R redis .
    exec su-exec redis "$0" "$@"
fi

exec "$@"
```

(Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh))
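The two tests at the top of that script rely on POSIX parameter expansion: `${1#-}` strips a leading `-` and `${1%.conf}` strips a trailing `.conf`, so each differs from `$1` only when that prefix or suffix is present. A standalone sketch of the same classification logic:

```shell
classify() {
  if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
    echo "prepend redis-server"    # looks like an option or a config file
  else
    echo "run as-is"               # looks like an arbitrary command
  fi
}
classify --appendonly     # → prepend redis-server
classify /etc/redis.conf  # → prepend redis-server
classify sh               # → run as-is
```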
---

## Factoring information

To facilitate maintenance (and avoid human errors), avoid repeating information like:

- version numbers,

- remote asset URLs (e.g. source tarballs) ...

Instead, use environment variables.

.small[
```dockerfile
ENV NODE_VERSION 10.2.1
...
RUN ...
  && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \
  && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
  && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
  && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
  && tar -xf "node-v$NODE_VERSION.tar.xz" \
  && cd "node-v$NODE_VERSION" \
...
```
]

(Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile))
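The benefit is that bumping one variable updates every derived string at once; a quick illustration in plain shell:

```shell
NODE_VERSION=10.2.1

# Both the tarball name and the URL are derived from the single variable.
tarball="node-v$NODE_VERSION.tar.xz"
url="https://nodejs.org/dist/v$NODE_VERSION/$tarball"
echo "$url"   # → https://nodejs.org/dist/v10.2.1/node-v10.2.1.tar.xz

# Upgrading means changing exactly one line:
NODE_VERSION=10.3.0
tarball="node-v$NODE_VERSION.tar.xz"
url="https://nodejs.org/dist/v$NODE_VERSION/$tarball"
echo "$url"   # → https://nodejs.org/dist/v10.3.0/node-v10.3.0.tar.xz
```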
---

## Overrides

In theory, development and production images should be the same.

In practice, we often need to enable specific behaviors in development (e.g. debug statements).

One way to reconcile both needs is to use Compose to enable these behaviors.

Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.
---

## Production image

This Dockerfile builds an image leveraging gunicorn:

```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```

(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))
---

## Development Compose file

This Compose file uses the same image, but with a few overrides for development:

- the Flask development server is used (overriding `CMD`),

- the `DEBUG` environment variable is set,

- a volume is used to provide a faster local development workflow.

.small[
```yaml
services:
  www:
    build: www
    ports:
      - 8000:5000
    user: nobody
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
```
]

(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))
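Such overrides can also live in a separate file: by default, `docker-compose` reads `docker-compose.yml` and then merges `docker-compose.override.yml` on top of it. A sketch of the same development tweaks expressed that way (assuming the production settings stay in `docker-compose.yml`):

```yaml
# docker-compose.override.yml (merged on top of docker-compose.yml)
services:
  www:
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
```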
---

## How to know which best practices are better?

- The main goal of containers is to make our lives easier.

- In this chapter, we showed many ways to write Dockerfiles.

- These Dockerfiles sometimes use diametrically opposed techniques.

- Yet, they were the "right" ones *for a specific situation.*

- It's OK (and even encouraged) to start simple and evolve as needed.

- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!
---
# The container ecosystem

In this chapter, we will talk about a few actors of the container ecosystem.

We have (arbitrarily) decided to focus on two groups:

- the Docker ecosystem,

- the Cloud Native Computing Foundation (CNCF) and its projects.
---

class: pic

## The Docker ecosystem

![](images/docker-ecosystem-2015.png)
---

## Moby vs. Docker

- Docker Inc. (the company) started Docker (the open source project).

- At some point, it became necessary to differentiate between:

  - the open source project (code base, contributors...),

  - the product that we use to run containers (the engine),

  - the platform that we use to manage containerized applications,

  - the brand.
---

class: pic

![](images/docker-moby.png)
---

## Exercise in brand management

Questions:

--

- What is the brand of the car on the previous slide?

--

- What kind of engine does it have?

--

- Would you say that it's a safe or unsafe car?

--

- Harder question: can you drive from the US West coast to the East coast with it?

--

The answers to these questions are part of the Tesla brand.
---

## What if ...

- The blueprints for Tesla cars were available for free.

- You could legally build your own Tesla.

- You were allowed to customize it entirely.

  (Put a combustion engine, drive it with a game pad ...)

- You could even sell the customized versions.

--

- ... And call your customized version "Tesla".

--

Would we give the same answers to the questions on the previous slide?
---

## From Docker to Moby

- Docker Inc. decided to split the brand.

- Moby is the open source project.

  (= Components and libraries that you can use, reuse, customize, sell ...)

- Docker is the product.

  (= Software that you can use, buy support contracts ...)

- Docker is made with Moby.

- When Docker Inc. improves the Docker products, it improves Moby.

  (And vice versa.)
---

## Other examples

- *Read the Docs* is an open source project to generate and host documentation.

- You can host it yourself (on your own servers).

- You can also get hosted on readthedocs.org.

- The maintainers of the open source project often receive support requests from users of the hosted product ...

- ... And the maintainers of the hosted product often receive support requests from users of self-hosted instances.

- Another example:

  *WordPress.com is a blogging platform that is owned and hosted online by Automattic. It is run on WordPress, an open source piece of software used by bloggers. (Wikipedia)*
---

## Docker CE vs Docker EE

- Docker CE = Community Edition.

  - Available on most Linux distros, Mac, Windows.

  - Optimized for developers and ease of use.

- Docker EE = Enterprise Edition.

  - Available only on a subset of Linux distros + Windows servers.

    (Only available when there is a strong partnership to offer enterprise-class support.)

  - Optimized for production use.

  - Comes with additional components: security scanning, RBAC ...
---

## The CNCF

- Non-profit, part of the Linux Foundation; founded in December 2015.

  *The Cloud Native Computing Foundation builds sustainable ecosystems and fosters a community around a constellation of high-quality projects that orchestrate containers as part of a microservices architecture.*

  *CNCF is an open source software foundation dedicated to making cloud-native computing universal and sustainable.*

- Home of Kubernetes (and many other projects now).

- Funded by corporate memberships.

---

class: pic

![](https://landscape.cncf.io/images/landscape.png)
---

class: in-person

## Counting packages in the container

Let's check how many packages are installed there.
---

class: in-person

## Counting packages on the host

Exit the container by logging out of the shell, like you would usually do.
---

class: self-paced

## Comparing the container and the host

Exit the container by logging out of the shell, with `^D` or `exit`.

Now try to run `figlet`. Does that work?

(It shouldn't; except if, by coincidence, you are running on a machine where figlet was installed before.)

---

## Host and containers are independent things

* We ran an `ubuntu` container on a Linux/Windows/macOS host.

* They have different, independent packages.

* Installing something on the host doesn't expose it to the container.

* And vice-versa.

* Even if both the host and the container have the same Linux distro!

* We can run *any container* on *any host*.

  (One exception: Windows containers cannot run on Linux machines; at least not yet.)

---

## Where's our container?
---
class: title

# Getting inside a container

![](images/getting-inside.png)

---

## Objectives

On a traditional server or VM, we sometimes need to:

* log into the machine (with SSH or on the console),

* analyze the disks (by removing them or rebooting with a rescue system).

In this chapter, we will see how to do that with containers.
---

## Getting a shell

Every once in a while, we want to log into a machine.

In a perfect world, this shouldn't be necessary.

* You need to install or update packages (and their configuration)?

  Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...)

* You need to view logs and metrics?

  Collect and access them through a centralized platform.

In the real world, though ... we often need shell access!
---

## Not getting a shell

Even without a perfect deployment system, we can do many operations without getting a shell.

* Installing packages can (and should) be done in the container image.

* Configuration can be done at the image level, or when the container starts.

* Dynamic configuration can be stored in a volume (shared with another container).

* Logs written to stdout are automatically collected by the Docker Engine.

* Other logs can be written to a shared volume.

* Process information and metrics are visible from the host.

_Let's save logging, volumes ... for later, but let's have a look at process information!_
---

## Viewing container processes from the host

If you run Docker on Linux, container processes are visible on the host.

```bash
$ ps faux | less
```

* Scroll around the output of this command.

* You should see the `jpetazzo/clock` container.

* A containerized process is just like any other process on the host.

* We can use tools like `lsof`, `strace`, `gdb` ... to analyze them.
---

class: extra-details

## What's the difference between a container process and a host process?

* Each process (containerized or not) belongs to *namespaces* and *cgroups*.

* The namespaces and cgroups determine what a process can "see" and "do".

* Analogy: each process (containerized or not) runs with a specific UID (user ID).

* UID=0 is root, and has elevated privileges. Other UIDs are normal users.

_We will give more details about namespaces and cgroups later._
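On a Linux host, you can already peek at a process's namespaces under `/proc`: each entry there is a symlink whose target identifies the namespace, so two processes in the same namespace show the same target:

```shell
# List the namespaces of the current shell (mnt, net, pid, uts, ...).
ls /proc/self/ns

# Compare the mount namespace of two processes: same target = same namespace.
readlink /proc/self/ns/mnt
readlink /proc/$$/ns/mnt
```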
---

## Getting a shell in a running container

* Sometimes, we need to get a shell anyway.

* We _could_ run some SSH server in the container ...

* But it is easier to use `docker exec`.

```bash
$ docker exec -ti ticktock sh
```

* This creates a new process (running `sh`) _inside_ the container.

* This can also be done "manually" with the tool `nsenter`.
---

## Caveats

* The tool that you want to run needs to exist in the container.

* Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time.

  (This lets you e.g. set up network interfaces, even if you don't have `ifconfig` or `ip` in the container.)

* Most importantly: the container needs to be running.

* What if the container is stopped or crashed?
---

## Getting a shell in a stopped container

* A stopped container is only _storage_ (like a disk drive).

* We cannot SSH into a disk drive or USB stick!

* We need to connect the disk to a running machine.

* How does that translate into the container world?
---

## Analyzing a stopped container

As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`.

```bash
docker run jpetazzo/crashtest
```

The container starts, but then stops immediately, without any output.

What would MacGyver™ do?

First, let's check the status of that container.

```bash
docker ps -l
```
---

## Viewing filesystem changes

* We can use `docker diff` to see files that were added / changed / removed.

```bash
docker diff <container_id>
```

* The container ID was shown by `docker ps -l`.

* We can also see it with `docker ps -lq`.

* The output of `docker diff` shows some interesting log files!
---

## Accessing files

* We can extract files with `docker cp`.

```bash
docker cp <container_id>:/var/log/nginx/error.log .
```

* Then we can look at that log file.

```bash
cat error.log
```

(The directory `/run/nginx` doesn't exist.)
---

## Exploring a crashed container

* We can restart a container with `docker start` ...

* ... But it will probably crash again immediately!

* We cannot specify a different program to run with `docker start`.

* But we can create a new image from the crashed container.

```bash
docker commit <container_id> debugimage
```

* Then we can run a new container from that image, with a custom entrypoint.

```bash
docker run -ti --entrypoint sh debugimage
```
---

class: extra-details

## Obtaining a complete dump

* We can also dump the entire filesystem of a container.

* This is done with `docker export`.

* It generates a tar archive.

```bash
docker export <container_id> | tar tv
```

This will give a detailed listing of the content of the container.
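`tar tv` reads the archive from standard input and prints a detailed (`-v`) listing without extracting anything. The pipeline shape can be tried without Docker by substituting a plain directory for `docker export`; the directory below is made up:

```shell
# Build a tiny fake root filesystem.
mkdir -p demo-rootfs/etc
echo "root:x:0:0:root:/root:/bin/sh" > demo-rootfs/etc/passwd

# Stream it as a tar archive and list its contents, like "docker export | tar tv".
tar -cf - -C demo-rootfs . | tar tv
```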
---

## Example for a Java webapp

Each of the following items will correspond to one layer:

* CentOS base layer
* Packages and configuration files added by our local IT
* JRE
---

class: pic

## The read-write layer

![](images/container-layers.jpg)

---

class: pic

## Multiple containers sharing the same image

![](images/sharing-layers.jpg)

---

## Differences between containers and images

* An image is a read-only filesystem.
* A container is an encapsulated set of processes running in a read-write copy of that filesystem.

* To optimize container boot time, *copy-on-write* is used instead of regular copy.

* `docker run` starts a container from a given image.

---

## Comparison with object-oriented programming

* Images are conceptually similar to *classes*.
* We create a new container from that image.

* Then we make changes to that container.

* When we are satisfied with those changes, we transform them into a new layer.

* A new image is created by stacking the new layer on top of the old image.

---

## Creating the first images

There is a special empty image called `scratch`.

* It allows us to *build from scratch*.
* Saves all the changes made to a container into a new layer.
* Creates a new image (effectively a copy of the container).

`docker build` **(used 99% of the time)**

* Performs a repeatable build sequence.
* This is the preferred method!

---

* Ready-to-use components and services, like redis, postgresql...

* Over 130 at this point!

---

## User namespace
```bash
$ docker pull debian:jessie
Pulling repository debian
b164861940b8: Download complete
b164861940b8: Pulling image (jessie) from debian
d1881793a057: Download complete
```

* As seen previously, images are made up of layers.
---

* Installing Docker on an existing Linux machine (physical or VM)

* Installing Docker on macOS or Windows

* Installing Docker on a fleet of cloud VMs
---

## Installing Docker on Linux

* The recommended method is to install the packages supplied by Docker Inc.:

  https://store.docker.com

* The general method is:
---

class: extra-details

## Docker Inc. packages vs distribution packages

* Docker Inc. releases new versions monthly (edge) and quarterly (stable)

* Releases are immediately available on Docker Inc.'s package repositories

* Linux distros don't always update to the latest Docker version

  (Sometimes, updating would break their guidelines for major/minor upgrades)

* Sometimes, some distros have carried packages with custom patches

* Sometimes, these patches added critical security bugs ☹

* Installing through Docker Inc.'s repositories is a bit of extra work …

  … but it is generally worth it!

---

## Installing Docker on macOS and Windows

* On macOS, the recommended method is to use Docker for Mac:

  https://docs.docker.com/docker-for-mac/install/

* On Windows 10 Pro, Enterprise, and Education, you can use Docker for Windows:

  https://docs.docker.com/docker-for-windows/install/
  https://docs.docker.com/toolbox/toolbox_install_windows/

* On Windows Server 2016, you can also install the native engine:

  https://docs.docker.com/install/windows/docker-ee/

---

## Docker for Mac and Docker for Windows

* Special Docker Editions that integrate well with their respective host OS

* Provide user-friendly GUI to edit Docker configuration and settings

* Leverage the host OS virtualization subsystem (e.g. the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS)

* Installed like normal user applications on the host

* Under the hood, they both run a tiny VM (transparent to our daily use)

* Access network resources like normal applications

  <br/>(and therefore, play better with enterprise VPNs and firewalls)

* Support filesystem sharing through volumes (we'll talk about this later)

* They only support running one Docker VM at a time ...

  <br/>... but we can use `docker-machine`, the Docker Toolbox, VirtualBox, etc. to get a cluster.

---

## Running Docker on macOS and Windows

When you execute `docker version` from the terminal:

This will also allow us to use remote Engines exactly as if they were local.

---

## Important PSA about security

* If you have access to the Docker control socket, you can take over the machine

slides/intro/Labels.md (new file, 82 lines)

# Labels

* Labels allow us to attach arbitrary metadata to containers.

* Labels are key/value pairs.

* They are specified at container creation.

* You can query them with `docker inspect`.

* They can also be used as filters with some commands (e.g. `docker ps`).

---

## Using labels

Let's create a few containers with a label `owner`.

```bash
docker run -d -l owner=alice nginx
docker run -d -l owner=bob nginx
docker run -d -l owner nginx
```

We didn't specify a value for the `owner` label in the last example.

This is equivalent to setting the value to be an empty string.

---

## Querying labels

We can view the labels with `docker inspect`.

```bash
$ docker inspect $(docker ps -lq) | grep -A3 Labels
            "Labels": {
                "maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>",
                "owner": ""
            },
```

We can use the `--format` flag to list the value of a label.

```bash
$ docker inspect $(docker ps -q) --format 'OWNER={{.Config.Labels.owner}}'
```

---

## Using labels to select containers

We can list containers having a specific label.

```bash
$ docker ps --filter label=owner
```

Or we can list containers having a specific label with a specific value.

```bash
$ docker ps --filter label=owner=alice
```
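
Since `--filter` composes with `-q` (quiet mode, printing only container IDs), labels can also drive bulk operations; a quick sketch (assuming a working Docker Engine):

```shell
# List only the IDs of the containers labeled owner=alice ...
docker ps -q --filter label=owner=alice

# ... and feed that list to another command to act on them in bulk:
docker stop $(docker ps -q --filter label=owner=alice)
```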

---

## Use-cases for labels

* HTTP vhost of a web app or web service.

  (The label is used to generate the configuration for NGINX, HAProxy, etc.)

* Backup schedule for a stateful service.

  (The label is used by a cron job to determine if/when to backup container data.)

* Service ownership.

  (To determine internal cross-billing, or who to page in case of outage.)

* etc.

---

## Local development in a container

We want to solve the following issues:

Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe?

```dockerfile
FROM ruby
COPY . /src
WORKDIR /src
```

```bash
$ docker run -d -v $(pwd):/src -P namer
```

* `namer` is the name of the image we will run.

* We don't specify a command to run because it is already set in the Dockerfile.

Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell).

---

slides/intro/Logging.md (new file, 294 lines)

# Logging

In this chapter, we will explain the different ways to send logs from containers.

We will then show one particular method in action, using ELK and Docker's logging drivers.

---

## There are many ways to send logs

- The simplest method is to write on the standard output and error.

- Applications can write their logs to local files.

  (The files are usually periodically rotated and compressed.)

- It is also very common (on UNIX systems) to use syslog.

  (The logs are collected by syslogd or an equivalent like journald.)

- In large applications with many components, it is common to use a logging service.

  (The code uses a library to send messages to the logging service.)

*All these methods are available with containers.*

---

## Writing on stdout/stderr

- The standard output and error of containers are managed by the container engine.

- This means that each line written by the container is received by the engine.

- The engine can then do "whatever" with these log lines.

- With Docker, the default configuration is to write the logs to local files.

- The files can then be queried with e.g. `docker logs` (and the equivalent API request).

- This can be customized, as we will see later.

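
For instance (assuming a working Docker Engine and the `alpine` image), we can watch the engine capture a container's stdout:

```shell
# Start a container that writes a single line to stdout...
CID=$(docker run -d alpine echo hello)

# ...then retrieve that line through the engine:
docker logs $CID
# prints "hello"
```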
---

## Writing to local files

- If we write to files, it is possible to access them, but it is cumbersome.

  (We have to use `docker exec` or `docker cp`.)

- Furthermore, if the container is stopped, we cannot use `docker exec`.

- If the container is deleted, the logs disappear.

- What should we do for programs that can only log to local files?

--

- There are multiple solutions.


---

## Using a volume or bind mount

- Instead of writing logs to a normal directory, we can place them on a volume.

- The volume can be accessed by other containers.

- We can run a program like `filebeat` in another container accessing the same volume.

  (`filebeat` reads local log files continuously, like `tail -f`, and sends them to a centralized system like ElasticSearch.)

- We can also use a bind mount, e.g. `-v /var/log/containers/www:/var/log/tomcat`.

- The container will write log files to a directory mapped to a host directory.

- The log files will appear on the host and be consumable directly from the host.

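
A minimal sketch of the bind mount approach (the paths and file names here are illustrative):

```shell
# Map a host directory onto the directory where the app writes its logs.
mkdir -p /tmp/demo-logs
docker run --rm -v /tmp/demo-logs:/var/log/myapp alpine \
  sh -c 'echo "app started" > /var/log/myapp/app.log'

# The log file is now directly readable on the host:
cat /tmp/demo-logs/app.log
```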
---
|
||||||
|
|
||||||
|
## Using logging services
|
||||||
|
|
||||||
|
- We can use logging frameworks (like log4j or the Python `logging` package).
|
||||||
|
|
||||||
|
- These frameworks require some code and/or configuration in our application code.
|
||||||
|
|
||||||
|
- These mechanisms can be used identically inside or outside of containers.
|
||||||
|
|
||||||
|
- Sometimes, we can leverage containerized networking to simplify their setup.
|
||||||
|
|
||||||
|
- For instance, our code can send log messages to a server named `log`.
|
||||||
|
|
||||||
|
- The name `log` will resolve to different addresses in development, production, etc.
|
||||||
|
|
||||||
|
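
One way to get such an environment-dependent name is a network alias (the network and container names below are made up for the example):

```shell
# Create a network for the "dev" environment.
docker network create dev

# Run a stand-in log collector, reachable under the name "log".
docker run -d --net dev --net-alias log --name dev-collector alpine sleep 1d

# Any container on the same network can now resolve "log":
docker run --rm --net dev alpine ping -c1 log
```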
---

## Using syslog

- What if our code (or the program we are running in containers) uses syslog?

- One possibility is to run a syslog daemon in the container.

- That daemon can then be set up to write to local files or forward to the network.

- Under the hood, syslog clients connect to a local UNIX socket, `/dev/log`.

- We can expose a syslog socket to the container (by using a volume or bind-mount).

- Then we just create a symlink from `/dev/log` to the syslog socket.

- Voilà!

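
For example, if the host itself runs a syslog daemon, its socket can be shared directly (whether this is desirable depends on your setup; the message text is illustrative):

```shell
# Bind-mount the host's syslog socket into the container, then use the
# standard `logger` utility to emit a message through it.
# The message ends up in the *host's* syslog.
docker run --rm -v /dev/log:/dev/log alpine logger "hello from a container"
```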
---

## Using logging drivers

- If we log to stdout and stderr, the container engine receives the log messages.

- The Docker Engine has a modular logging system with many plugins, including:

  - json-file (the default one)
  - syslog
  - journald
  - gelf
  - fluentd
  - splunk
  - etc.

- Each plugin can process and forward the logs to another process or system.

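
To check which driver is in effect, `docker info` and `docker inspect` both expose it (the container name below is a placeholder):

```shell
# Default logging driver configured on this engine:
docker info --format '{{.LoggingDriver}}'

# Driver actually used by a given container:
docker inspect --format '{{.HostConfig.LogConfig.Type}}' some-container
```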
---

## A word of warning about `json-file`

- By default, log file size is unlimited.

- This means that a very verbose container *will* use up all your disk space.

  (Or a less verbose container, but running for a very long time.)

- Log rotation can be enabled by setting a `max-size` option.

- Older log files can be removed by setting a `max-file` option.

- Just like other logging options, these can be set per container, or globally.

Example:

```bash
$ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch
```

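
To make these the engine-wide defaults, the same options can go in `/etc/docker/daemon.json` (followed by a daemon restart); a minimal sketch:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```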
---

## Demo: sending logs to ELK

- We are going to deploy an ELK stack.

- It will accept logs over a GELF socket.

- We will run a few containers with the `gelf` logging driver.

- We will then see our logs in Kibana, the web interface provided by ELK.

*Important foreword: this is not an "official" or "recommended"
setup; it is just an example. We used ELK in this demo because
it's a popular setup and we keep being asked about it; but you
will have equal success with Fluentd or other logging stacks!*

---

## What's in an ELK stack?

- ELK is three components:

  - ElasticSearch (to store and index log entries)

  - Logstash (to receive log entries from various sources, process them, and forward them to various destinations)

  - Kibana (to view/search log entries with a nice UI)

- The only component that we will configure is Logstash.

- We will accept log entries using the GELF protocol.

- Log entries will be stored in ElasticSearch,
  <br/>and displayed on Logstash's stdout for debugging.

---

## Running ELK

- We are going to use a Compose file describing the ELK stack.

```bash
$ cd ~/container.training/stacks
$ docker-compose -f elk.yml up -d
```

- Let's have a look at the Compose file while it's deploying.

---

## Our basic ELK deployment

- We are using images from the Docker Hub: `elasticsearch`, `logstash`, `kibana`.

- We don't need to change the configuration of ElasticSearch.

- We need to tell Kibana the address of ElasticSearch:

  - it is set with the `ELASTICSEARCH_URL` environment variable,

  - by default it is `localhost:9200`; we change it to `elasticsearch:9200`.

- We need to configure Logstash:

  - we pass the entire configuration file through command-line arguments,

  - this is a hack so that we don't have to create an image just for the config.

---

## Sending logs to ELK

- The ELK stack accepts log messages through a GELF socket.

- The GELF socket listens on UDP port 12201.

- To send a message, we need to change the logging driver used by Docker.

- This can be done globally (by reconfiguring the Engine) or on a per-container basis.

- Let's override the logging driver for a single container:

```bash
$ docker run --log-driver=gelf --log-opt=gelf-address=udp://localhost:12201 \
    alpine echo hello world
```

---

## Viewing the logs in ELK

- Connect to the Kibana interface.

- It is exposed on port 5601.

- Browse http://X.X.X.X:5601.

---

## "Configuring" Kibana

- Kibana should prompt you to "Configure an index pattern":
  <br/>in the "Time-field name" drop-down, select "@timestamp", and hit the "Create" button.

- Then:

  - click "Discover" (in the top-left corner),
  - click "Last 15 minutes" (in the top-right corner),
  - click "Last 1 hour" (in the list in the middle),
  - click "Auto-refresh" (top-right corner),
  - click "5 seconds" (top-left of the list).

- You should see a series of green bars (with one new green bar every minute).

- Our 'hello world' message should be visible there.

---

## Important afterword

**This is not a "production-grade" setup.**

It is just an educational example. Since we have only one node, we set up a single ElasticSearch instance and a single Logstash instance.

In a production setup, you need an ElasticSearch cluster (both for capacity and availability reasons). You also need multiple Logstash instances.

And if you want to withstand bursts of logs, you need some kind of message queue: Redis if you're cheap, Kafka if you want to make sure that you don't drop messages on the floor. Good luck.

If you want to learn more about the GELF driver, have a look at [this blog post](http://jpetazzo.github.io/2017/01/20/docker-logging-gelf/).

# Reducing image size

* In the previous example, our final image contained:

  * our `hello` program

---

## Can't we remove superfluous files with `RUN`?

What happens if we do one of the following commands?

- `RUN rm -rf ...`

- `RUN apt-get remove ...`

- `RUN make clean ...`

--

This adds a layer which removes a bunch of files.

But the previous layers (which added the files) still exist.

---

## Removing files with an extra layer

When downloading an image, all the layers must be downloaded.

| Dockerfile instruction | Layer size | Image size |
| ---------------------- | ---------- | ---------- |
| `FROM ubuntu` | Size of base image | Size of base image |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get install somepackage` | Size of files added <br/>(e.g. a few MB) | Sum of this layer <br/>+ all previous ones |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get remove somepackage` | Almost zero <br/>(just metadata) | Same as previous one |

Therefore, `RUN rm` does not reduce the size of the image or free up disk space.
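
You can verify this with `docker history`, which lists the size of every layer in an image (the image name below is a placeholder):

```shell
# Each line is one layer, with its size. A layer created by a
# `RUN rm ...` step shows up as ~0B, while the earlier layers that
# added the files keep their full size.
docker history some-image
```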

---

## Removing unnecessary files

Various techniques are available to obtain smaller images:

- collapsing layers,

- adding binaries that are built outside of the Dockerfile,

- squashing the final image,

- multi-stage builds.

Let's review them quickly.

---

## Collapsing layers

You will frequently see Dockerfiles like this:

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
```

Or the (more readable) variant:

```dockerfile
FROM ubuntu
RUN apt-get update \
 && apt-get install xxx \
 && ... \
 && apt-get remove xxx \
 && ...
```

This `RUN` command gives us a single layer.

The files that are added, then removed in the same layer, do not grow the layer size.

---

## Collapsing layers: pros and cons

Pros:

- works on all versions of Docker

- doesn't require extra tools

Cons:

- not very readable

- some unnecessary files might still remain if the cleanup is not thorough

- that layer is expensive (slow to build)

---

## Building binaries outside of the Dockerfile

This results in a Dockerfile looking like this:

```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```

Of course, this implies that the file `xxx` exists in the build context.

That file has to exist before you can run `docker build`.

For instance, it can:

- exist in the code repository,
- be created by another tool (script, Makefile...),
- be created by another container image and extracted from the image.

See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).
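
The "extracted from an image" option can be sketched like this (the image name and paths are illustrative):

```shell
# Create (but don't start) a container from the builder image...
docker create --name extract builder-image

# ...copy the built binary out of it into the build context...
docker cp extract:/usr/local/bin/xxx ./xxx

# ...and clean up the temporary container.
docker rm extract
```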

---

## Building binaries outside: pros and cons

Pros:

- final image can be very small

Cons:

- requires an extra build tool

- we're back in dependency hell and "works on my machine"

Cons, if the binary is added to the code repository:

- breaks portability across different platforms

- grows repository size a lot if the binary is updated frequently

---

## Squashing the final image

The idea is to transform the final image into a single-layer image.

This can be done in (at least) two ways.

- Activate experimental features and squash the final image:

  ```bash
  docker image build --squash ...
  ```

- Export/import the final image.

  ```bash
  docker build -t temp-image .
  docker run --entrypoint true --name temp-container temp-image
  docker export temp-container | docker import - final-image
  docker rm temp-container
  docker rmi temp-image
  ```

---

## Squashing the image: pros and cons

Pros:

- single-layer images are smaller and faster to download

- removed files no longer take up storage and network resources

Cons:

- we still need to actively remove unnecessary files

- the squash operation can take a lot of time (on big images)

- the squash operation does not benefit from the cache
  <br/>
  (even if we change just a tiny file, the whole image needs to be re-squashed)

---

## Multi-stage builds

Multi-stage builds allow us to have multiple *stages*.

Each stage is a separate image, and can copy files from previous stages.

We're going to see how they work in more detail.

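
As a teaser, a minimal two-stage sketch could look like this (the file and stage names are illustrative):

```dockerfile
# Stage 1: build the binary with a full compiler image.
FROM gcc AS builder
COPY hello.c .
RUN gcc -o hello hello.c

# Stage 2: start from a small image and copy in only the binary.
FROM ubuntu
COPY --from=builder /hello /hello
CMD /hello
```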

---

# Multi-stage builds

* At any point in our `Dockerfile`, we can add a new `FROM` line.

slides/intro/Namespaces_Cgroups.md (new file, 1124 lines)

slides/intro/Orchestration_Overview.md (new file, 422 lines)

# Orchestration, an overview

In this chapter, we will:

* Explain what orchestration is and why we would need it.

* Present (from a high-level perspective) some orchestrators.

* Show one orchestrator (Kubernetes) in action.

---

class: pic

## What's orchestration?



---

## What's orchestration?

According to Wikipedia:

*Orchestration describes the __automated__ arrangement,
coordination, and management of complex computer systems,
middleware, and services.*

--

*[...] orchestration is often discussed in the context of
__service-oriented architecture__, __virtualization__, provisioning,
Converged Infrastructure and __dynamic datacenter__ topics.*

--

What does that really mean?

---

## Example 1: dynamic cloud instances

--

- Q: do we always use 100% of our servers?

--

- A: obviously not!

.center[]

---

## Example 1: dynamic cloud instances

- Every night, scale down

  (by shutting down extraneous replicated instances)

- Every morning, scale up

  (by deploying new copies)

- "Pay for what you use"

  (i.e. save big $$$ here)

---

## Example 1: dynamic cloud instances

How do we implement this?

- Crontab

- Autoscaling (save even bigger $$$)

That's *relatively* easy.

Now, how are things for our IAAS provider?

---

## Example 2: dynamic datacenter

- Q: what's the #1 cost in a datacenter?

--

- A: electricity!

--

- Q: what uses electricity?

--

- A: servers, obviously

- A: ... and associated cooling

--

- Q: do we always use 100% of our servers?

--

- A: obviously not!

---

## Example 2: dynamic datacenter

- If only we could turn off unused servers during the night...

- Problem: we can only turn off a server if it's totally empty!

  (i.e. all VMs on it are stopped/moved)

- Solution: *migrate* VMs and shutdown empty servers

  (e.g. combine two hypervisors with 40% load into 80%+0%,
  <br/>and shutdown the one at 0%)

---

## Example 2: dynamic datacenter

How do we implement this?

- Shutdown empty hosts (but keep some spare capacity)

- Start hosts again when capacity gets low

- Ability to "live migrate" VMs

  (Xen already did this 10+ years ago)

- Rebalance VMs on a regular basis

  - what if a VM is stopped while we move it?
  - should we allow provisioning on hosts involved in a migration?

*Scheduling* becomes more complex.

---

## What is scheduling?

According to Wikipedia (again):

*In computing, scheduling is the method by which threads,
processes or data flows are given access to system resources.*

The scheduler is concerned mainly with:

- throughput (total amount of work done per time unit);
- turnaround time (between submission and completion);
- response time (between submission and start);
- waiting time (between job readiness and execution);
- fairness (appropriate times according to priorities).

In practice, these goals often conflict.

**"Scheduling" = decide which resources to use.**

---

## Exercise 1

- You have:

  - 5 hypervisors (physical machines)

- Each server has:

  - 16 GB RAM, 8 cores, 1 TB disk

- Each week, your team asks:

  - one VM with X RAM, Y CPU, Z disk

Scheduling = deciding which hypervisor to use for each VM.

Difficulty: easy!

---

<!-- Warning, two almost identical slides (for img effect) -->

## Exercise 2

- You have:

  - 1000+ hypervisors (and counting!)

- Each server has different resources:

  - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk

- Multiple times a day, a different team asks for:

  - up to 50 VMs with different characteristics

Scheduling = deciding which hypervisor to use for each VM.

Difficulty: ???

---

<!-- Warning, two almost identical slides (for img effect) -->

## Exercise 2

- You have:

  - 1000+ hypervisors (and counting!)

- Each server has different resources:

  - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk

- Multiple times a day, a different team asks for:

  - up to 50 VMs with different characteristics

Scheduling = deciding which hypervisor to use for each VM.



---

## Exercise 3

- You have machines (physical and/or virtual)

- You have containers

- You are trying to put the containers on the machines

- Sounds familiar?

---

## Scheduling with one resource

.center[]

Can we do better?

---

## Scheduling with one resource

.center[]

Yup!

---

## Scheduling with two resources

.center[]

---

## Scheduling with three resources

.center[]

---

## You need to be good at this

.center[]

---

## But also, you must be quick!

.center[]

---

## And be web scale!

.center[]

---

## And think outside (?) of the box!

.center[]

---

## Good luck!

.center[]

---

## TL;DR

* Scheduling with multiple resources (dimensions) is hard.

* Don't expect to solve the problem with a Tiny Shell Script.

* There are literally tons of research papers written on this.

---

## But our orchestrator also needs to manage ...

* Network connectivity (or filtering) between containers.

* Load balancing (external and internal).

* Failure recovery (if a node or a whole datacenter fails).

* Rolling out new versions of our applications.

  (Canary deployments, blue/green deployments...)

---

## Some orchestrators

We are going to briefly present a few orchestrators.

There is no "absolute best" orchestrator.

It depends on:

- your applications,

- your requirements,

- your pre-existing skills...

---

## Nomad

- Open Source project by Hashicorp.

- Arbitrary scheduler (not just for containers).

- Great if you want to schedule mixed workloads.

  (VMs, containers, processes...)

- Less integration with the rest of the container ecosystem.
---
|
||||||
|
|
||||||
|
## Mesos
|
||||||
|
|
||||||
|
- Open Source project in the Apache Foundation.
|
||||||
|
|
||||||
|
- Arbitrary scheduler (not just for containers).
|
||||||
|
|
||||||
|
- Two-level scheduler.
|
||||||
|
|
||||||
|
- Top-level scheduler acts as a resource broker.
|
||||||
|
|
||||||
|
- Second-level schedulers (aka "frameworks") obtain resources from top-level.
|
||||||
|
|
||||||
|
- Frameworks implement various strategies.
|
||||||
|
|
||||||
|
(Marathon = long running processes; Chronos = run at intervals; ...)
|
||||||
|
|
||||||
|
- Commercial offering through DC/OS my Mesosphere.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Rancher

- Rancher 1 offered a simple interface for Docker hosts.

- Rancher 2 is a complete management platform for Docker and Kubernetes.

- Technically not an orchestrator, but it's a popular option.

---

## Swarm

- Tightly integrated with the Docker Engine.

- Extremely simple to deploy and set up, even in multi-manager (HA) mode.

- Secure by default.

- Strongly opinionated:

  - smaller set of features,

  - easier to operate.

---
## Kubernetes

- Open Source project initiated by Google.

- Contributions from many other actors.

- *De facto* standard for container orchestration.

- Many deployment options; some of them very complex.

- Reputation: steep learning curve.

- Reality:

  - true, if we try to understand *everything*;

  - false, if we focus on what matters.
docker login
```

.warning[When running Docker for Mac/Windows, or
Docker on a Linux workstation, it can (and will when
possible) integrate with your system's keyring to
store your credentials securely. However, on most Linux

---

slides/intro/Resource_Limits.md (new file)

# Limiting resources

- So far, we have used containers as convenient units of deployment.

- What happens when a container tries to use more resources than available?

  (RAM, CPU, disk usage, disk and network I/O...)

- What happens when multiple containers compete for the same resource?

- Can we limit resources available to a container?

  (Spoiler alert: yes!)

---

## Container processes are normal processes

- Containers are closer to "fancy processes" than to "lightweight VMs".

- A process running in a container is, in fact, a process running on the host.

- Let's look at the output of `ps` on a container host running 3 containers:
```
0 2662 0.2 0.3 /usr/bin/dockerd -H fd://
0 2766 0.1 0.1 \_ docker-containerd --config /var/run/docker/containe
0 23479 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23497 0.0 0.0 | \_ `nginx`: master process nginx -g daemon off;
101 23543 0.0 0.0 | \_ `nginx`: worker process
0 23565 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
102 23584 9.4 11.3 | \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2
0 23707 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23725 0.0 0.0 \_ `/bin/sh`
```

- The highlighted processes are containerized processes.
  <br/>
  (That host is running nginx, elasticsearch, and alpine.)

---
## By default: nothing changes

- What happens when a process uses too much memory on a Linux system?

--

- Simplified answer:

  - swap is used (if available);

  - if there is not enough swap space, eventually, the out-of-memory killer is invoked;

  - the OOM killer uses heuristics to kill processes;

  - sometimes, it kills an unrelated process.

--

- What happens when a container uses too much memory?

- The same thing!

  (i.e., a process eventually gets killed, possibly in another container.)

---
## Limiting container resources

- The Linux kernel offers rich mechanisms to limit container resources.

- For memory usage, the mechanism is part of the *cgroup* subsystem.

- This subsystem allows us to limit the memory available to a process or a group of processes.

- A container engine leverages these mechanisms to limit memory for a container.

- The out-of-memory killer has a new behavior:

  - it runs when a container exceeds its allowed memory usage,

  - in that case, it only kills processes in that container.

---
## Limiting memory in practice

- The Docker Engine offers multiple flags to limit memory usage.

- The two most useful ones are `--memory` and `--memory-swap`.

- `--memory` limits the amount of physical RAM used by a container.

- `--memory-swap` limits the total amount (RAM+swap) used by a container.

- The memory limit can be expressed in bytes, or with a unit suffix.

  (e.g.: `--memory 100m` = 100 megabytes.)

- We will see two strategies: limiting RAM usage, or limiting both RAM and swap usage.
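To make the unit suffixes concrete, here is a small illustrative parser (not Docker's actual code; the function name is ours). Note that Docker treats the `b`/`k`/`m`/`g` suffixes as binary multiples, so `100m` is 100 × 1024 × 1024 bytes:

```python
def parse_memory(value: str) -> int:
    """Convert a Docker-style memory string (e.g. '100m') to bytes.

    Docker interprets the suffixes as binary multiples
    (so '100m' means 100 * 1024 * 1024 bytes).
    """
    units = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3}
    value = value.strip().lower()
    if value and value[-1] in units:
        return int(float(value[:-1]) * units[value[-1]])
    return int(value)  # no suffix: plain bytes

print(parse_memory("100m"))  # 104857600
```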
---

## Limiting RAM usage

Example:

```bash
docker run -ti --memory 100m python
```

If the container tries to use more than 100 MB of RAM, *and* swap is available:

- the container will not be killed,

- memory above 100 MB will be swapped out,

- in most cases, the app in the container will be slowed down (a lot).

If we run out of swap, the global OOM killer still intervenes.

---
## Limiting both RAM and swap usage

Example:

```bash
docker run -ti --memory 100m --memory-swap 100m python
```

If the container tries to use more than 100 MB of memory, it is killed.

On the other hand, the application will never be slowed down because of swap.
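The arithmetic behind these two flags can be sketched as follows (a sketch based on Docker's documented semantics, where `--memory-swap` is the *total* of RAM + swap; the helper name is ours):

```python
def allowed_swap(memory_mb: int, memory_swap_mb=None) -> float:
    """Swap available to a container, per Docker's --memory/--memory-swap
    semantics: --memory-swap is the TOTAL (RAM + swap) limit."""
    if memory_swap_mb is None:   # unset: Docker defaults the total to 2x memory
        return memory_mb
    if memory_swap_mb == -1:     # -1 means unlimited swap
        return float("inf")
    return memory_swap_mb - memory_mb

print(allowed_swap(100, 100))  # 0 -> the example above allows no swap at all
print(allowed_swap(100))       # 100 -> by default, as much swap as RAM
```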
---

## When to pick which strategy?

- Stateful services (like databases) will lose or corrupt data when killed.

  - Allow them to use swap space, but monitor swap usage.

- Stateless services can usually be killed with little impact.

  - Limit their mem+swap usage, but monitor if they get killed.

- Ultimately, this is no different from "do I want swap, and how much?"

---
## Limiting CPU usage

- There are no less than 3 ways to limit CPU usage:

  - setting a relative priority with `--cpu-shares`,

  - setting a CPU% limit with `--cpus`,

  - pinning a container to specific CPUs with `--cpuset-cpus`.

- They can be used separately or together.

---
## Setting relative priority

- Each container has a relative priority used by the Linux scheduler.

- By default, this priority is 1024.

- As long as CPU usage is not maxed out, this has no effect.

- When CPU usage is maxed out, each container receives CPU cycles in proportion to its relative priority.

- In other words: a container with `--cpu-shares 2048` will receive twice as many CPU cycles as one with the default priority.
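The proportional-share arithmetic can be sketched in a few lines (illustrative only; the actual allocation is done by the kernel's scheduler, and the function name is ours):

```python
def cpu_allocation(shares: dict) -> dict:
    """Given each container's --cpu-shares value, return the fraction of
    CPU time each one gets when the CPU is fully contended."""
    total = sum(shares.values())
    return {name: value / total for name, value in shares.items()}

# One container with the default shares (1024), one with 2048:
alloc = cpu_allocation({"default": 1024, "boosted": 2048})
print(alloc["boosted"] / alloc["default"])  # 2.0
```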
---

## Setting a CPU% limit

- This setting will make sure that a container doesn't use more than a given % of CPU.

- The value is expressed in CPUs; therefore:

  `--cpus 0.1` means 10% of one CPU,

  `--cpus 1.0` means 100% of one whole CPU,

  `--cpus 10.0` means 10 entire CPUs.
---

## Pinning containers to CPUs

- On multi-core machines, it is possible to restrict execution to a set of CPUs.

- Examples:

  `--cpuset-cpus 0` forces the container to run on CPU 0;

  `--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7;

  `--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11.

- This will not reserve the corresponding CPUs!

  (They might still be used by other containers, or uncontainerized processes.)
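Expanding a `--cpuset-cpus` specification into the CPU numbers it designates can be sketched like this (a small illustrative parser, not Docker's actual code):

```python
def parse_cpuset(spec: str) -> list:
    """Expand a --cpuset-cpus specification (e.g. '0-3,8-11')
    into the list of CPU numbers it designates."""
    cpus = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

print(parse_cpuset("0-3,8-11"))  # [0, 1, 2, 3, 8, 9, 10, 11]
```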
---

## Limiting disk usage

- Most storage drivers do not support limiting the disk usage of containers.

  (With the exception of devicemapper, but the limit cannot be set easily.)

- This means that a single container could exhaust disk space for everyone.

- In practice, however, this is not a concern, because:

  - data files (for stateful services) should reside on volumes,

  - assets (e.g. images, user-generated content...) should reside on object stores or on volumes,

  - logs are written on standard output and gathered by the container engine.

- Container disk usage can be audited with `docker ps -s` and `docker diff`.

---
## What *is* Docker?

- "Installing Docker" really means "Installing the Docker Engine and CLI".

- The Docker Engine is a daemon (a service running in the background).

- This daemon manages containers, the same way that a hypervisor manages VMs.

- We interact with the Docker Engine by using the Docker CLI.

- The Docker CLI and the Docker Engine communicate through an API.

- There are many other programs, and many client libraries, to use that API.

---

## Why don't we run Docker locally?

- We are going to download container images and distribution packages.

- This could put a bit of stress on the local WiFi and slow us down.

- Instead, we use a remote VM that has good connectivity.

- In some rare cases, installing Docker locally is challenging:

  - no administrator/root access (computer managed by strict corp IT)

  - 32-bit CPU or OS

  - old OS version (e.g. CentOS 6, OSX pre-Yosemite, Windows 7)

- It's better to spend time learning containers than fiddling with the installer!

---
## Connecting to your Virtual Machine

You need an SSH client.
Once logged in, make sure that you can run a basic Docker command:

```bash
$ docker version
Client:
 Version:       18.03.0-ce
 API version:   1.37
 Go version:    go1.9.4
 Git commit:    0520e24
 Built: Wed Mar 21 23:10:06 2018
 OS/Arch:       linux/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      18.03.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   0520e24
  Built:        Wed Mar 21 23:08:35 2018
  OS/Arch:      linux/amd64
  Experimental: false
```
]
Docker volumes can be used to achieve many things, including:

* Sharing a *single file* between the host and a container.

* Using remote storage and custom storage with "volume drivers".

---

## Volumes are special directories in a container

---

## Volumes exist independently of containers

If a container is stopped or removed, its volumes still exist and are available.

Volumes can be listed and manipulated with `docker volume` subcommands:
Then run `curl localhost:1234` again to see your changes.

---

## Using custom "bind-mounts"

In some cases, you want a specific directory on the host to be mapped
inside the container:
of an existing container.

* Newer containers can use `--volumes-from` too.

* Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes).

---

class: extra-details
Connect to the Redis container and set some data.

```bash
$ docker run -ti --link redis28:redis busybox telnet redis 6379
```

Issue the following commands:
class: extra-details

Connect to the Redis container and see our data.

```bash
docker run -ti --link redis30:redis busybox telnet redis 6379
```

Issue a few commands.
has root-like access to the host.]

You can install plugins to manage volumes backed by particular storage systems,
or providing extra features. For instance:

* [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g.
  SAN or NAS), or by cloud block stores (e.g. EBS, EFS).

* [Portworx](http://portworx.com/) - provides distributed block store for containers.

* [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale
  to several petabytes. It provides interfaces for object, block and file storage.

* and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)!

---
## Volumes vs. Mounts

* Since Docker 17.06, a new option is available: `--mount`.

* It offers a new, richer syntax to manipulate data in containers.

* It makes an explicit difference between:

  - volumes (identified with a unique name, managed by a storage plugin),

  - bind mounts (identified with a host path, not managed).

* The former `-v` / `--volume` option is still usable.

---
## `--mount` syntax

Binding a host path to a container path:

```bash
$ docker run \
  --mount type=bind,source=/path/on/host,target=/path/in/container alpine
```

Mounting a volume to a container path:

```bash
$ docker run \
  --mount source=myvolume,target=/path/in/container alpine
```

Mounting a tmpfs (in-memory, for temporary files):

```bash
$ docker run \
  --mount type=tmpfs,destination=/path/in/container,tmpfs-size=1000000 alpine
```
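The comma-separated `key=value` format used by `--mount` can be parsed like this (a sketch, not Docker's actual parser; the function name is ours):

```python
def parse_mount(spec: str) -> dict:
    """Parse a --mount option string into a dict of its key=value pairs.
    Keys without a value (like 'readonly') map to True."""
    options = {}
    for field in spec.split(","):
        key, sep, value = field.partition("=")
        options[key] = value if sep else True
    return options

m = parse_mount("type=bind,source=/path/on/host,target=/path/in/container")
print(m["type"], m["source"])  # bind /path/on/host
```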
---

## A brief introduction

- This was initially written to support in-person, instructor-led workshops and tutorials

- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://@@GITREPO@@/graphs/contributors)

- You can also follow along on your own, at your own pace
slides/kube-90min.yml (new file)

title: |
  Kubernetes 101

#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- self-paced
- extra-details

chapters:
- common/title.md
- logistics.md
#- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- common/composescale.md
- common/composedown.md
- kube/concepts-k8s.md
# - common/declarative.md
- kube/declarative.md
# - kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- - kube/kubectlrun.md
- kube/kubectlexpose.md
- kube/ourapponkube.md
#- kube/kubectlproxy.md
- - kube/dashboard.md
- kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
# Stern is interesting but can be skipped
#- - kube/logs-cli.md
# Bridget hasn't added EFK yet
#- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
# Bridget-specific
# - kube/links-bridget.md
- common/thankyou.md
slides/kube-fullday.yml (new file)

title: |
  Deploying and Scaling Microservices
  with Kubernetes

#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- self-paced

chapters:
- common/title.md
- logistics.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
#- common/composescale.md
- common/composedown.md
- - kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- kube/kubectlrun.md
- - kube/kubectlexpose.md
- kube/ourapponkube.md
- kube/kubectlproxy.md
- kube/dashboard.md
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
#- kube/logs-cli.md
#- kube/logs-centralized.md
#- kube/helm.md
#- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
- common/thankyou.md
title: |
  Kubernetes 101

#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- self-paced
chapters:
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- common/composescale.md
- common/composedown.md
- kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- - kube/kubectlrun.md
- kube/kubectlexpose.md
- kube/ourapponkube.md
#- kube/kubectlproxy.md
- - kube/dashboard.md
- kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
# Bridget hasn't added EFK yet
#- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
# - kube/links.md
# Bridget-specific
- kube/links-bridget.md
- common/thankyou.md
title: |

chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- in-person

chapters:
- kube/kubectlrun.md
- - kube/kubectlexpose.md
- kube/ourapponkube.md
- kube/kubectlproxy.md
- kube/dashboard.md
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
- common/thankyou.md
class: pic

---

## Kubernetes architecture: the nodes

- The nodes executing our containers run a collection of services:

  - a container Engine (typically Docker)

  - kubelet (the "node agent")

  - kube-proxy (a necessary but not sufficient network component)

- Nodes were formerly called "minions"

  (You might see that word in older articles or documentation)

---

## Kubernetes architecture: the control plane

- The Kubernetes logic (its "brains") is a collection of services:

  - the API server (our point of entry to everything!)

  - core services like the scheduler and controller manager

  - `etcd` (a highly available key/value store; the "database" of Kubernetes)

- Together, these services form the control plane of our cluster

- The control plane is also called the "master"

---

## Running the control plane on special nodes

- It is common to reserve a dedicated node for the control plane

  (Except for single-node development clusters, like when using minikube)

- This node is then called a "master"

  (Yes, this is ambiguous: is the "master" a node, or the whole control plane?)

- Normal applications are restricted from running on this node

  (By using a mechanism called ["taints"](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/))

- When high availability is required, each service of the control plane must be resilient

- The control plane is then replicated on multiple nodes

  (This is sometimes called a "multi-master" setup)

---

## Running the control plane outside containers

- The services of the control plane can run in or out of containers

- For instance: since `etcd` is a critical service, some people
  deploy it directly on a dedicated cluster (without containers)

  (This is illustrated on the first "super complicated" schema)

- In some hosted Kubernetes offerings (e.g. GKE), the control plane is invisible

  (We only "see" a Kubernetes API endpoint)

- In that case, there is no "master node"

*For this reason, it is more accurate to say "control plane" rather than "master".*

---

## Default container runtime

- By default, Kubernetes uses the Docker Engine to run containers

(like CRI-O, or containerd)

.footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)]

---
- node (a machine — physical or virtual — in our cluster)

- pod (group of containers running together on a node)

- IP addresses are associated with *pods*, not with individual containers

- service (stable network endpoint to connect to one or multiple containers)

- namespace (more-or-less isolated group of things)

- secret (bundle of sensitive data to be passed to a container)
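Most of these objects are described to Kubernetes as YAML manifests. As a minimal sketch (the pod name, label, and image below are illustrative, not taken from this workshop):

```bash
# Illustrative pod manifest: a single container, created in the current namespace.
kubectl apply -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
  labels:
    app: example           # hypothetical label
spec:
  containers:
  - name: main
    image: alpine
    command: ["sleep", "3600"]
EOF
```

Higher-level objects (deployments, daemon sets...) embed a pod template just like the `spec` above.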
class: pic
## Creating a daemon set

- Unfortunately, as of Kubernetes 1.10, the CLI cannot create daemon sets

--

- option 1: [read the docs](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset)

--
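For reference, a minimal daemon set manifest could look like this (a sketch only; the `rng` name and `run=rng` label mirror this section, and the image reference is illustrative):

```bash
# Sketch of a minimal DaemonSet: runs one pod per node matching the selector.
kubectl apply -f- <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rng
spec:
  selector:
    matchLabels:
      run: rng
  template:
    metadata:
      labels:
        run: rng
    spec:
      containers:
      - name: rng
        image: dockercoins/rng:v0.1    # illustrative image reference
EOF
```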
We all knew this couldn't be that easy, right!

- We could also tell Kubernetes to ignore these errors and try anyway

- The `--force` flag's actual name is `--validate=false`

.exercise[
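- One possible way to do it (a sketch; `rng.yml` matches the file name used in this section, and the `sed` one-liner is illustrative):
  ```bash
  # Export the deployment definition, change its kind,
  # then load it back while skipping validation.
  kubectl get deployment rng -o yaml --export > rng.yml
  sed -i "s/kind: Deployment/kind: DaemonSet/" rng.yml
  kubectl apply -f rng.yml --validate=false
  ```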
Wait ... Now, can it be *that* easy?

--

We have two resources called `rng`:

- the *deployment* that existed before

- the *daemon set* that we just created

We also have one too many pods.
<br/>
(The pod corresponding to the *deployment* still exists.)

---
## `deploy/rng` and `ds/rng`

- You can have different resource types with the same name

  (i.e. a *deployment* and a *daemon set* both named `rng`)

- We still have the old `rng` *deployment*

  ```
  NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/rng   1         1         1            1           18m
  ```

- But now we have the new `rng` *daemon set* as well

  ```
  NAME                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
  daemonset.apps/rng   2         2         2         2            2           <none>          9s
  ```
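Both listings can be obtained in one go (a sketch; this uses the plural resource type names, but `kubectl` also accepts the short names `deploy` and `ds`):

```bash
kubectl get deployments,daemonsets
```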
---
## Deleting a deployment

.exercise[

- Remove the `rng` deployment:
  ```bash
  kubectl delete deployment rng
  ```

]

--

- The pod that was created by the deployment is now being terminated:

  ```
  $ kubectl get pods
  NAME                   READY     STATUS        RESTARTS   AGE
  rng-54f57d4d49-vgz9h   1/1       Terminating   0          4m
  rng-vplmj              1/1       Running       0          11m
  rng-xbpvg              1/1       Running       0          11m
  [...]
  ```

Ding, dong, the deployment is dead! And the daemon set lives on.
3) bypass authentication for the dashboard

--
## Connecting to the dashboard

.exercise[

- Check which port the dashboard is on:
  ```bash
  kubectl -n kube-system get svc socat
  ```

]

You'll want the `3xxxx` port.

.exercise[
The dashboard will then ask you which authentication you want to use.

---

## Running the Kubernetes dashboard securely

- The steps that we just showed you are *for educational purposes only!*

- It's safe if you use HTTPS URLs from trusted sources

- Example: the official setup instructions for most pod networks

--

- It introduces new failure modes (like if you try to apply YAML from a link that's no longer valid)
# Managing stacks with Helm

- We created our first resources with `kubectl run`, `kubectl expose` ...

- We have also created resources by loading YAML files with `kubectl apply -f`

- For larger stacks, managing thousands of lines of YAML is unreasonable

- These YAML bundles need to be customized with variable parameters

  (E.g.: number of replicas, image version to use ...)

- It would be nice to have an organized, versioned collection of bundles

- It would be nice to be able to upgrade/rollback these bundles carefully

- [Helm](https://helm.sh/) is an open source project offering all these things!

---

## Helm concepts

- `helm` is a CLI tool

- `tiller` is its companion server-side component

- A "chart" is an archive containing templatized YAML bundles

- Charts are versioned

- Charts can be stored on private or public repositories

---

## Installing Helm

- We need to install the `helm` CLI; then use it to deploy `tiller`

.exercise[

- Install the `helm` CLI:
  ```bash
  curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
  ```

- Deploy `tiller`:
  ```bash
  helm init
  ```

- Add the `helm` completion:
  ```bash
  . <(helm completion $(basename $SHELL))
  ```

]

---

## Fix account permissions

- Helm permission model requires us to tweak permissions

- In a more realistic deployment, you might create per-user or per-team
  service accounts, roles, and role bindings

.exercise[

- Grant `cluster-admin` role to `kube-system:default` service account:
  ```bash
  kubectl create clusterrolebinding add-on-cluster-admin \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
  ```

]

(Defining the exact roles and permissions on your cluster requires
a deeper knowledge of Kubernetes' RBAC model. The command above is
fine for personal and development clusters.)

---

## View available charts

- A public repo is pre-configured when installing Helm

- We can view available charts with `helm search` (and an optional keyword)

.exercise[

- View all available charts:
  ```bash
  helm search
  ```

- View charts related to `prometheus`:
  ```bash
  helm search prometheus
  ```

]

---

## Install a chart

- Most charts use `LoadBalancer` service types by default

- Most charts require persistent volumes to store data

- We need to relax these requirements a bit

.exercise[

- Install the Prometheus metrics collector on our cluster:
  ```bash
  helm install stable/prometheus \
      --set server.service.type=NodePort \
      --set server.persistentVolume.enabled=false
  ```

]

Where do these `--set` options come from?

---

## Inspecting a chart

- `helm inspect` shows details about a chart (including available options)

.exercise[

- See the metadata and all available options for `stable/prometheus`:
  ```bash
  helm inspect stable/prometheus
  ```

]

The chart's metadata includes a URL to the project's home page.

(Sometimes it conveniently points to the documentation for the chart.)

---

## Creating a chart

- We are going to show a way to create a *very simplified* chart

- In a real chart, *lots of things* would be templatized

  (Resource names, service types, number of replicas...)

.exercise[

- Create a sample chart:
  ```bash
  helm create dockercoins
  ```

- Move away the sample templates and create an empty template directory:
  ```bash
  mv dockercoins/templates dockercoins/default-templates
  mkdir dockercoins/templates
  ```

]

---

## Exporting the YAML for our application

- The following section assumes that DockerCoins is currently running

.exercise[

- Create one YAML file for each resource that we need:
  .small[
  ```bash
  while read kind name; do
    kubectl get -o yaml --export $kind $name > dockercoins/templates/$name-$kind.yaml
  done <<EOF
  deployment worker
  deployment hasher
  daemonset rng
  deployment webui
  deployment redis
  service hasher
  service rng
  service webui
  service redis
  EOF
  ```
  ]

]

---

## Testing our helm chart

.exercise[

- Let's install our helm chart! (`dockercoins` is the path to the chart)
  ```bash
  helm install dockercoins
  ```

]

--

- Since the application is already deployed, this will fail:<br>
  `Error: release loitering-otter failed: services "hasher" already exists`

- To avoid naming conflicts, we will deploy the application in another *namespace*
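With Helm 2 (the version used here, since it relies on `tiller`), the release name and target namespace can be passed at install time. A sketch (the release and namespace names are illustrative):

```bash
helm install dockercoins --name dockercoins --namespace dockercoins-test
```

The namespace will be created if it doesn't exist yet, which sidesteps the naming conflict above.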
## A brief introduction

- This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person,
  instructor-led workshops and tutorials

- Credit is also due to [multiple contributors](https://@@GITREPO@@/graphs/contributors) — thank you!

- You can also follow along on your own, at your own pace
Note: please DO NOT call the service `search`. It would collide with the TLD.

.exercise[

- Let's obtain the IP address that was allocated for our service, *programmatically:*
  ```bash
  IP=$(kubectl get svc elastic -o go-template --template '{{ .spec.clusterIP }}')
  ```
--

We may see `curl: (7) Failed to connect to _IP_ port 9200: Connection refused`.

This is normal while the service starts up.

--

Once it's running, our requests are load balanced across multiple pods.

---
class: extra-details

## If we don't need a load balancer

- Sometimes, we want to access our scaled services directly:

  - if we want to save a tiny little bit of latency (typically less than 1ms)

  - if we need to connect over arbitrary ports (instead of a few fixed ones)

  - if we need to communicate over another protocol than UDP or TCP

  - if we want to decide how to balance the requests client-side

  - ...

- In that case, we can use a "headless service"

---

class: extra-details

## Headless services
- A headless service is obtained by setting the `clusterIP` field to `None`

  (Either with `--cluster-ip=None`, or by providing a custom YAML)

- As a result, the service doesn't have a virtual IP address

- Since there is no virtual IP address, there is no load balancer either

- `kube-dns` will return the pods' IP addresses as multiple `A` records

- This gives us an easy way to discover all the replicas for a deployment
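A headless service can be declared in YAML like this (a sketch; the `run=elastic` selector and port 9200 come from this section's example, while the service name is hypothetical, chosen to avoid clashing with the existing `elastic` service):

```bash
kubectl apply -f- <<EOF
apiVersion: v1
kind: Service
metadata:
  name: elastic-headless   # hypothetical name
spec:
  clusterIP: None          # this is what makes the service "headless"
  selector:
    run: elastic
  ports:
  - port: 9200
EOF
```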
---

class: extra-details

## Services and endpoints

- A service has a number of "endpoints"

- Each endpoint is a host + port where the service is available

- The endpoints are maintained and updated automatically by Kubernetes

.exercise[

- Check the endpoints that Kubernetes has associated with our `elastic` service:
  ```bash
  kubectl describe service elastic
  ```

]

In the output, there will be a line starting with `Endpoints:`.
That line will list a bunch of addresses in `host:port` format.

---

class: extra-details

## Viewing endpoint details

- When we have many endpoints, our display commands truncate the list:
  ```bash
  kubectl get endpoints
  ```

- If we want to see the full list, we can use one of the following commands:
  ```bash
  kubectl describe endpoints elastic
  kubectl get endpoints elastic -o yaml
  ```

- These commands will show us a list of IP addresses

- These IP addresses should match the addresses of the corresponding pods:
  ```bash
  kubectl get pods -l run=elastic -o wide
  ```

---

class: extra-details
## `endpoints` not `endpoint`

- `endpoints` is the only resource type whose name cannot be used in the singular:

  ```
  $ kubectl get endpoint
  error: the server doesn't have a resource type "endpoint"
  ```

- This is because the type itself is plural (unlike every other resource)

- There is no `endpoint` object: `type Endpoints struct`

- The type doesn't represent a single endpoint, but a list of endpoints
class: extra-details

# First contact with `kubectl`

- `kubectl` is (almost) the only tool we'll need to talk to Kubernetes

---

class: extra-details

## What's available?

- `kubectl` has pretty good introspection facilities

The `kube-system` namespace is used for the control plane.

]

--

- `kube-public` is created by kubeadm & [used for security bootstrapping](https://kubernetes.io/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters)
# Accessing internal services with `kubectl proxy`

- `kubectl proxy` runs a proxy in the foreground

- This proxy lets us access the Kubernetes API without authentication

  (`kubectl proxy` adds our credentials on the fly to the requests)

- This proxy lets us access the Kubernetes API over plain HTTP

- This is a great tool to learn and experiment with the Kubernetes API

- The Kubernetes API also gives us a proxy to HTTP and HTTPS services

- Therefore, we can use `kubectl proxy` to access internal services

  (Without using a `NodePort` or similar service)

---

## Secure by default

- By default, the proxy listens on port 8001

  (But this can be changed, or we can tell `kubectl proxy` to pick a port)

- By default, the proxy binds to `127.0.0.1`

  (Making it unreachable from other machines, for security reasons)

- By default, the proxy only accepts connections from:

  `^localhost$,^127\.0\.0\.1$,^\[::1\]$`

- This is great when running `kubectl proxy` locally

- Not-so-great when running it on a remote machine

---

## Running `kubectl proxy` on a remote machine

- We are going to bind to `INADDR_ANY` instead of `127.0.0.1`

- We are going to accept connections from any address

.exercise[

- Run an open proxy to the Kubernetes API:
  ```bash
  kubectl proxy --port=8888 --address=0.0.0.0 --accept-hosts=.*
  ```

]

.warning[Anyone can now do whatever they want with our Kubernetes cluster!
<br/>
(Don't do this on a real cluster!)]

---

## Viewing available API routes

- The default route (i.e. `/`) shows a list of available API endpoints

.exercise[

- Point your browser to the IP address of the node running `kubectl proxy`, port 8888

]

The result should look like this:
```json
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
…
```

---

## Connecting to a service through the proxy

- The API can proxy HTTP and HTTPS requests by accessing a special route:
  ```
  /api/v1/namespaces/`name_of_namespace`/services/`name_of_service`/proxy
  ```

- Since we now have access to the API, we can use this special route

.exercise[

- Access the `hasher` service through the special proxy route:
  ```open
  http://`X.X.X.X`:8888/api/v1/namespaces/default/services/hasher/proxy
  ```

]
You should see the banner of the hasher service: `HASHER running on ...`
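The same route also works from the command line; for instance, on the machine running the proxy (port and service path as above, assuming the `hasher` service is in the `default` namespace):

```bash
curl http://localhost:8888/api/v1/namespaces/default/services/hasher/proxy/
```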
---

## Stopping the proxy

- Remember: as it is running right now, `kubectl proxy` gives open access to our cluster

.exercise[

- Stop the `kubectl proxy` process with Ctrl-C

]
.exercise[

- Let's ping `1.1.1.1`, Cloudflare's
  [public DNS resolver](https://blog.cloudflare.com/announcing-1111/):
  ```bash
  kubectl run pingpong --image alpine ping 1.1.1.1
  ```

]
OK, what just happened?

--

We should see the following things:
- `deployment.apps/pingpong` (the *deployment* that we just created)
- `replicaset.apps/pingpong-xxxxxxxxxx` (a *replica set* created by the deployment)
- `pod/pingpong-xxxxxxxxxx-yyyyy` (a *pod* created by the replica set)

Note: as of 1.10.1, resource types are displayed in more detail.

---
---

class: extra-details

## Our `pingpong` deployment

- `kubectl run` created a *deployment*, `deployment.apps/pingpong`

  ```
  NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/pingpong   1         1         1            1           10m
  ```

- That deployment created a *replica set*, `replicaset.apps/pingpong-xxxxxxxxxx`

  ```
  NAME                                  DESIRED   CURRENT   READY     AGE
  replicaset.apps/pingpong-7c8bbcd9bc   1         1         1         10m
  ```

- That replica set created a *pod*, `pod/pingpong-xxxxxxxxxx-yyyyy`

  ```
  NAME                            READY     STATUS    RESTARTS   AGE
  pod/pingpong-7c8bbcd9bc-6c9qz   1/1       Running   0          10m
  ```

- We'll see later how these folks play together for:

  - scaling, high availability, rolling updates
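The three listings above can be obtained with individual `kubectl get` commands, or all at once (the combined output will also include services):

```bash
kubectl get all
```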
---
---

class: extra-details

## Streaming logs in real time

- Just like `docker logs`, `kubectl logs` supports convenient options:

<!--
```wait seq=3```
```keys ^C```
-->

]
@@ -159,7 +176,7 @@ We should see the following things:
]

Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?

We could! But the *deployment* would notice it right away, and scale back to the initial level.
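To see this reconciliation for yourself (the replica set name is a placeholder, as elsewhere in this section):

```bash
# Scale the replica set directly, behind the deployment's back...
kubectl scale replicaset pingpong-xxxxxxxxxx --replicas=5

# ...then watch the deployment's controller bring it back:
kubectl get pods --watch
```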
@@ -181,14 +198,13 @@ We could! But the *deployment* would notice it right away, and scale back to the
```

<!--
```wait Running```
```keys ^C```
-->

- Destroy a pod:
  ```bash
  kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
  ```

]
@@ -211,6 +227,8 @@ We could! But the *deployment* would notice it right away, and scale back to the
---

class: extra-details

## Viewing logs of multiple pods

- When we specify a deployment name, only one single pod's logs are shown
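One workaround is a label selector; `kubectl run` labels the pods it creates with `run=<name>`, so assuming our `pingpong` example:

```bash
# Show the last log line from every pod carrying the label:
kubectl logs -l run=pingpong --tail=1
```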
@@ -234,15 +252,17 @@ Unfortunately, `--follow` cannot (yet) be used to stream the logs from multiple
---

class: extra-details

## Aren't we flooding 1.1.1.1?

- If you're wondering this, good question!

- Don't worry, though:

  *APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.*

  (Source: https://blog.cloudflare.com/announcing-1111/)

- It's very unlikely that our concerted pings manage to produce
  even a modest blip at Cloudflare's NOC!

@@ -52,9 +52,9 @@
(15 are listed in the Kubernetes documentation)

- Pods have level 3 (IP) connectivity, but *services* are level 4

  (Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets)

- `kube-proxy` is on the data path when connecting to a pod or container,
  <br/>and it's not particularly fast (relies on userland proxying or iptables)
@@ -63,7 +63,7 @@
## Kubernetes network model: in practice

- The nodes that we are using have been set up to use [Weave](https://github.com/weaveworks/weave)

- We don't endorse Weave in a particular way, it just Works For Us
@@ -72,10 +72,32 @@
- Unless you:

  - routinely saturate 10G network interfaces

  - count packet rates in millions per second

  - run high-traffic VOIP or gaming platforms

  - do weird things that involve millions of simultaneous connections
    <br/>(in which case you're already familiar with kernel tuning)

- If necessary, there are alternatives to `kube-proxy`; e.g.
  [`kube-router`](https://www.kube-router.io)

---

## The Container Network Interface (CNI)

- The CNI has a well-defined [specification](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration) for network plugins

- When a pod is created, Kubernetes delegates the network setup to CNI plugins

- Typically, a CNI plugin will:

  - allocate an IP address (by calling an IPAM plugin)

  - add a network interface into the pod's network namespace

  - configure the interface as well as required routes etc.

- Using multiple plugins can be done with "meta-plugins" like CNI-Genie or Multus

- Not all CNI plugins are equal

  (e.g. they don't all implement network policies, which are required to isolate pods)
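As a concrete illustration, here is a minimal CNI network configuration, of the kind the kubelet loads from `/etc/cni/net.d/`, using the reference `bridge` plugin with `host-local` IPAM; the network name, bridge name, and addresses are made up for this sketch:

```json
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
```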