Compare commits: avril2018...devopsdays (212 commits)

Commit SHA1s:

8ef6219295, 346ce0e15c, 964d936435, 546d9a2986, 8e5d27b185, e8d9e94b72, ca980de2fd, 4b2b5ff7e4,
ee2b20926c, 96a76d2a19, 78ac91fcd5, 971b5b0e6d, 3393563498, 64fb407e8c, ea4f46599d, 94483ebfec,
db5d5878f5, 2585daac9b, 21043108b3, 65faa4507c, 644f2b9c7a, dab9d9fb7e, 139757613b, 10eed2c1c7,
c4fa75a1da, 847140560f, 1dc07c33ab, 4fc73d95c0, 690ed55953, 16a5809518, 0fed34600b, 2d95f4177a,
e9d1db56fa, a076a766a9, be3c78bf54, 5bb6b8e2ab, f79193681d, 379ae69db5, cde89f50a2, 98563ba1ce,
99bf8cc39f, ea642cf90e, a7d89062cf, 564e4856b4, 011cd08af3, e294a4726c, a21e8b0849, cc6f36b50f,
6e35162788, 30ca940eeb, 14eb19a42b, da053ecde2, c86ef7de45, c5572020b9, 3d7ed3a3f7, 138163056f,
5e78e00bc9, 2cb06edc2d, 8915bfb443, 24017ad83f, 3edebe3747, 636a2d5c87, 4213aba76e, 3e822bad82,
cd5b06b9c7, b0841562ea, 06f70e8246, 9614f8761a, 92f9ab9001, ad554f89fc, 5bb37dff49, 0d52dc2290,
c575cb9cd5, 9cdccd40c7, fdd10c5a98, 8a617fdbc7, a058a74d8f, 4896a3265e, 131947275c, 1b7e8cec5e,
c17c0ea9aa, 7b378d2425, 47da7d8278, 3c69941fcd, beb188facf, dfea8f6535, 3b89149bf0, c8d73caacd,
290185f16b, 05e9d36eed, 05815fcbf3, bce900a4ca, bf7ba49013, 323aa075b3, f526014dc8, dec546fa65,
36390a7921, 313d705778, ca34efa2d7, 25e92cfe39, 999359e81a, 3a74248746, cb828ecbd3, e1e984e02d,
d6e19fe350, 1f91c748b5, 38356acb4e, 7b2d598c38, c276eb0cfa, 571de591ca, e49a197fd5, a30eabc23a,
73c4cddba5, 6e341f770a, 527145ec81, c93edceffe, 6f9eac7c8e, 522420ef34, 927bf052b0, 1e44689b79,
b967865faa, 054c0cafb2, 29e37c8e2b, 44fc2afdc7, 7776c8ee38, 9ee7e1873f, e21fcbd1bd, 5852ab513d,
3fe33e4e9e, c44b90b5a4, f06dc6548c, e13552c306, 0305c3783f, 5158ac3d98, 25c08b0885, f8131c97e9,
3de1fab66a, ab664128b7, 91de693b80, a64606fb32, 58d9103bd2, 61ab5be12d, 030900b602, 476d689c7d,
4aedbb69c2, db2a68709c, f114a89136, 96eda76391, e7d9a8fa2d, 1cca8db828, 2cde665d2f, d660c6342f,
7e8bb0e51f, c87f4cc088, 05c50349a8, e985952816, 19f0ef9c86, cc8e13a85f, 6475a05794, cc9840afe5,
b7a2cde458, 453992b55d, 0b1067f95e, 21777cd95b, 827ad3bdf2, 7818157cd0, d547241714, c41e0e9286,
c2d4784895, 11163965cf, e9df065820, 101ab0c11a, 25f081c0b7, 700baef094, 3faa586b16, 8ca77fe8a4,
019829cc4d, a7f6bb223a, eb77a8f328, 5a484b2667, 982c35f8e7, adffe5f47f, f90a194b86, 99e9356e5d,
860840a4c1, ab63b76ae0, 29bca726b3, 91297a68f8, 2bea8ade63, ec486cf78c, 63ac378866, 35db387fc2,
a0f9baf5e7, 4e54a79abc, 37bea7158f, 618fe4e959, 0c73144977, ff8c3b1595, b756d0d0dc, 23147fafd1,
b036b5f24b, 3b9014f750, de87743c6a, 74f980437f, 6711ba06d9, f97bd2b357, 3f54f23535, 827d10dd49,
1b7a072f25, eb1b3c8729, 40e4678a45, 38a40d56a0
.gitignore (vendored, 2 changes)

@@ -8,4 +8,6 @@ prepare-vms/settings.yaml
prepare-vms/tags
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
node_modules
README.md (26 changes)

@@ -292,15 +292,31 @@ If there is a bug and you can't even reproduce it:
sorry. It is probably a Heisenbug. We can't act on it
until it's reproducible, alas.

If you have attended this workshop and have feedback,
or if you want somebody to deliver that workshop at your
conference or for your company: you can contact one of us!

# “Please teach us!”

If you have attended one of these workshops, and want
your team or organization to attend a similar one, you
can look at the list of upcoming events on
http://container.training/.

You are also welcome to reuse these materials to run
your own workshop, for your team or even at a meetup
or conference. In that case, you might enjoy watching
[Bridget Kromhout's talk at KubeCon 2018 Europe](
https://www.youtube.com/watch?v=mYsp_cGY2O0), explaining
precisely how to run such a workshop yourself.

Finally, you can also contact the following persons,
who are experienced speakers, are familiar with the
material, and are available to deliver these workshops
at your conference or for your company:

- jerome dot petazzoni at gmail dot com
- bret at bretfisher dot com

If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!
(If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!)

**Thank you!**
@@ -28,5 +28,5 @@ def rng(how_many_bytes):


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
    app.run(host="0.0.0.0", port=80, threaded=False)
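The `threaded=False` flag pins Flask's built-in server to one request at a time, presumably so the `rng` service stays a measurable bottleneck in the scaling exercises later on. A way to observe the effect (the published port is an assumption; adjust it to your Compose file):

```bash
# Fire two concurrent requests; with threaded=False one of them queues
# behind the other, so its total time is roughly doubled.
# (localhost:8001 is a guess at the published port for rng.)
curl -s -o /dev/null -w "%{time_total}s\n" localhost:8001/32 &
curl -s -o /dev/null -w "%{time_total}s\n" localhost:8001/32 &
wait
```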
@@ -103,7 +103,7 @@ wrap Run this program in a container
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires a good connection to run all the parallel SSH connections, up to 100 parallel (ProTip: create a dedicated management instance in the same AWS region where you run all these utils from)
- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
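Taken together, the steps above form a small pipeline. A sketch using only the commands listed (TAG and the settings file are placeholders):

```bash
./workshopctl deploy TAG settings/somefile.yaml   # configure the VMs via parallel-ssh
./workshopctl pull_images TAG                     # pre-pull Docker images on each VM
./workshopctl cards TAG settings/somefile.yaml    # generate printable login cards
# ...run the workshop...
./workshopctl stop TAG                            # terminate the instances
```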
@@ -210,7 +210,7 @@ The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed

#### Pre-pull images

    $ ./workshopctl pull-images TAG
    $ ./workshopctl pull_images TAG

#### Generate cards
@@ -7,7 +7,6 @@ services:
    working_dir: /root/prepare-vms
    volumes:
    - $HOME/.aws/:/root/.aws/
    - /etc/localtime:/etc/localtime:ro
    - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
    - $PWD/:/root/prepare-vms/
    environment:
@@ -48,7 +48,7 @@ _cmd_cards() {
    rm -f ips.html ips.pdf

    # This will generate two files in the base dir: ips.pdf and ips.html
    python lib/ips-txt-to-html.py $SETTINGS
    lib/ips-txt-to-html.py $SETTINGS

    for f in ips.html ips.pdf; do
        # Remove old versions of cards if they exist
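Dropping the explicit `python` means the script now runs through its own shebang, so it must be executable. A quick sanity check (assuming the script carries a `#!/usr/bin/env python`-style first line):

```bash
head -1 lib/ips-txt-to-html.py    # should print the shebang line
chmod +x lib/ips-txt-to-html.py   # required once for direct execution
```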
@@ -393,9 +393,23 @@ pull_tag() {
        ubuntu:latest \
        fedora:latest \
        centos:latest \
        elasticsearch:2 \
        postgres \
        redis \
        alpine \
        registry \
        nicolaka/netshoot \
        jpetazzo/trainingwheels \
        golang \
        training/namer \
        dockercoins/hasher \
        dockercoins/rng \
        dockercoins/webui \
        dockercoins/worker \
        logstash \
        prom/node-exporter \
        google/cadvisor \
        dockersamples/visualizer \
        nathanleclaire/redisonrails; do
        sudo -u docker docker pull $I
    done'
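Each VM pulls that list sequentially. To warm a local cache with a few of the same images, a rough equivalent (not part of workshopctl):

```bash
for I in debian:latest ubuntu:latest alpine registry; do
    docker pull "$I" &    # pull a few images concurrently
done
wait
```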
@@ -108,7 +108,7 @@ system("sudo chmod +x /usr/local/bin/docker-machine")
system("docker-machine version")

system("sudo apt-get remove -y --purge dnsmasq-base")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh tree")

### Wait for Docker to be up.
### (If we don't do this, Docker will not be responsive during the next step.)
@@ -17,8 +17,8 @@ paper_margin: 0.2in
# (The equivalent parameters must be set from the browser's print dialog.)

# This can be "test" or "stable"
engine_version: test
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.17.1
machine_version: 0.13.0
compose_version: 1.21.1
machine_version: 0.14.0

@@ -20,5 +20,5 @@ paper_margin: 0.2in
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.20.1
compose_version: 1.21.1
machine_version: 0.14.0
@@ -1,7 +1,7 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py

# Number of VMs per cluster
clustersize: 5
clustersize: 3

# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html

@@ -20,5 +20,5 @@ paper_margin: 0.2in
engine_version: stable

# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.20.1
compose_version: 1.21.1
machine_version: 0.14.0
slides/_redirects (new file, 2 lines)

@@ -0,0 +1,2 @@
/ /kube-90min.yml.html 200!
@@ -1,6 +1,8 @@
#!/bin/sh
set -e
case "$1" in
once)
    ./index.py
    for YAML in *.yml; do
        ./markmaker.py $YAML > $YAML.html || {
            rm $YAML.html

@@ -15,6 +17,13 @@ once)
    ;;

forever)
    set +e
    # check if entr is installed
    if ! command -v entr >/dev/null; then
        echo >&2 "First install 'entr' with apt, brew, etc."
        exit
    fi

    # There is a weird bug in entr, at least on MacOS,
    # where it doesn't restore the terminal to a clean
    # state when exiting. So let's try to work around
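The "forever" mode presumably pipes the slide sources into entr so the deck rebuilds on every save; the actual invocation is truncated in this hunk. A sketch of the usual pattern:

```bash
# Re-run the one-shot build whenever a source file changes.
# (The file list and -d flag are assumptions, not the script's exact command.)
ls *.yml */*.md | entr -d ./build.sh once
```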
@@ -2,7 +2,7 @@

- All the content is available in a public GitHub repository:

  https://github.com/jpetazzo/container.training
  https://@@GITREPO@@

- You can get updated "builds" of the slides there:

@@ -10,7 +10,7 @@

<!--
.exercise[
```open https://github.com/jpetazzo/container.training```
```open https://@@GITREPO@@```
```open http://container.training/```
]
-->

@@ -23,7 +23,7 @@

<!--
.exercise[
```open https://github.com/jpetazzo/container.training/tree/master/slides/common/about-slides.md```
```open https://@@GITREPO@@/tree/master/slides/common/about-slides.md```
]
-->

@@ -35,7 +35,7 @@ class: extra-details

- This slide has a little magnifying glass in the top left corner

- This magnifiying glass indicates slides that provide extra details
- This magnifying glass indicates slides that provide extra details

- Feel free to skip them if:
@@ -49,26 +49,6 @@ Tip: use `^S` and `^Q` to pause/resume log output.

---

class: extra-details

## Upgrading from Compose 1.6

.warning[The `logs` command has changed between Compose 1.6 and 1.7!]

- Up to 1.6

  - `docker-compose logs` is the equivalent of `logs --follow`

  - `docker-compose logs` must be restarted if containers are added

- Since 1.7

  - `--follow` must be specified explicitly

  - new containers are automatically picked up by `docker-compose logs`

---

## Scaling up the application

- Our goal is to make that performance graph go up (without changing a line of code!)

@@ -126,7 +106,7 @@ We have available resources.

- Start one more `worker` container:
  ```bash
  docker-compose scale worker=2
  docker-compose up -d --scale worker=2
  ```

- Look at the performance graph (it should show a x2 improvement)

@@ -147,7 +127,7 @@ We have available resources.

- Start eight more `worker` containers:
  ```bash
  docker-compose scale worker=10
  docker-compose up -d --scale worker=10
  ```

- Look at the performance graph: does it show a x10 improvement?
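Before reading the graph, it can help to confirm how many replicas are actually running. A quick check (assuming the service is named `worker` as above):

```bash
docker-compose up -d --scale worker=10
docker-compose ps worker   # lists the worker replicas and their state
```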
@@ -1,46 +1,4 @@
# Pre-requirements

- Be comfortable with the UNIX command line

  - navigating directories

  - editing files

  - a little bit of bash-fu (environment variables, loops)

- Some Docker knowledge

  - `docker run`, `docker ps`, `docker build`

  - ideally, you know how to write a Dockerfile and build it
    <br/>
    (even if it's a `FROM` line and a couple of `RUN` commands)

- It's totally OK if you are not a Docker expert!

---

class: title

*Tell me and I forget.*
<br/>
*Teach me and I remember.*
<br/>
*Involve me and I learn.*

Misattributed to Benjamin Franklin

[(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/)

---

## Hands-on sections

- The whole workshop is hands-on

- We are going to build, ship, and run containers!

- You are invited to reproduce all the demos
## Hands-on

- All hands-on sections are clearly identified, like the gray rectangle below

@@ -48,55 +6,12 @@ Misattributed to Benjamin Franklin

- This is the stuff you're supposed to do!

- Go to [container.training](http://container.training/) to view these slides

- Join the chat room: @@CHAT@@

<!-- ```open http://container.training/``` -->
- Go to @@SLIDES@@ to view these slides

]

---

class: in-person

## Where are we going to run our containers?

---

class: in-person, pic

![]()

---

class: in-person

## You get a cluster of cloud VMs

- Each person gets a private cluster of cloud VMs (not shared with anybody else)

- They'll remain up for the duration of the workshop

- You should have a little card with login+password+IP addresses

- You can automatically SSH from one VM to another

- The nodes have aliases: `node1`, `node2`, etc.

---

class: in-person

## Why don't we run containers locally?

- Installing that stuff can be hard on some machines

  (32 bits CPU or OS... Laptops without administrator access... etc.)

- *"The whole team downloaded all these container images from the WiFi!
  <br/>... and it went great!"* (Literally no-one ever)

- All you need is a computer (or even a phone or tablet!), with:

  - an internet connection

@@ -109,201 +24,18 @@ class: in-person

class: in-person

## SSH clients

- On Linux, OS X, FreeBSD... you are probably all set

- On Windows, get one of these:

  - [putty](http://www.putty.org/)
  - Microsoft [Win32 OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH)
  - [Git BASH](https://git-for-windows.github.io/)
  - [MobaXterm](http://mobaxterm.mobatek.net/)

- On Android, [JuiceSSH](https://juicessh.com/)
  ([Play Store](https://play.google.com/store/apps/details?id=com.sonelli.juicessh))
  works pretty well

- Nice-to-have: [Mosh](https://mosh.org/) instead of SSH, if your internet connection tends to lose packets

---

class: in-person, extra-details

## What is this Mosh thing?

*You don't have to use Mosh or even know about it to follow along.
<br/>
We're just telling you about it because some of us think it's cool!*

- Mosh is "the mobile shell"

- It is essentially SSH over UDP, with roaming features

- It retransmits packets quickly, so it works great even on lossy connections

  (Like hotel or conference WiFi)

- It has intelligent local echo, so it works great even in high-latency connections

  (Like hotel or conference WiFi)

- It supports transparent roaming when your client IP address changes

  (Like when you hop from hotel to conference WiFi)

---

class: in-person, extra-details

## Using Mosh

- To install it: `(apt|yum|brew) install mosh`

- It has been pre-installed on the VMs that we are using

- To connect to a remote machine: `mosh user@host`

  (It is going to establish an SSH connection, then hand off to UDP)

- It requires UDP ports to be open

  (By default, it uses a UDP port between 60000 and 61000)
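Putting the two bullets above together, a minimal session looks like this (user and host are placeholders for what is printed on your card):

```bash
sudo apt-get install -y mosh   # or: yum install mosh / brew install mosh
mosh user@node1                # SSH handshake first, then traffic moves to UDP
```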
---

class: in-person

## Connecting to our lab environment

.exercise[

- Log into the first VM (`node1`) with your SSH client

<!--
```bash
for N in $(awk '/\Wnode/{print $2}' /etc/hosts); do
  ssh -o StrictHostKeyChecking=no $N true
done
```

```bash
if which kubectl; then
  kubectl get all -o name | grep -v service/kubernetes | xargs -n1 kubectl delete
fi
```
-->

- Check that you can SSH (without password) to `node2`:
  ```bash
  ssh node2
  ```
- Type `exit` or `^D` to come back to `node1`

<!-- ```bash exit``` -->

]

If anything goes wrong — ask for help!

---

## Doing or re-doing the workshop on your own?

- Use something like
  [Play-With-Docker](http://play-with-docker.com/) or
  [Play-With-Kubernetes](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)

  Zero setup effort; but environments are short-lived and
  might have limited resources

- Create your own cluster (local or cloud VMs)

  Small setup effort; small cost; flexible environments

- Create a bunch of clusters for you and your friends
  ([instructions](https://github.com/jpetazzo/container.training/tree/master/prepare-vms))

  Bigger setup effort; ideal for group training

---

class: self-paced

## Get your own Docker nodes

- If you already have some Docker nodes: great!

- If not: let's get some thanks to Play-With-Docker

.exercise[

- Go to http://www.play-with-docker.com/

- Log in

- Create your first node

<!-- ```open http://www.play-with-docker.com/``` -->

]

You will need a Docker ID to use Play-With-Docker.

(Creating a Docker ID is free.)

---

## We will (mostly) interact with node1 only

*These remarks apply only when using multiple nodes, of course.*

- Unless instructed, **all commands must be run from the first VM, `node1`**

- We will only checkout/copy the code on `node1`

- During normal operations, we do not need access to the other nodes

- If we had to troubleshoot issues, we would use a combination of:

  - SSH (to access system logs, daemon status...)

  - Docker API (to check running containers and container engine status)

---

## Terminals

Once in a while, the instructions will say:
<br/>"Open a new terminal."

There are multiple ways to do this:

- create a new window or tab on your machine, and SSH into the VM;

- use screen or tmux on the VM and open a new window from there.

You are welcome to use the method that you feel the most comfortable with.

---

## Tmux cheatsheet

[Tmux](https://en.wikipedia.org/wiki/Tmux) is a terminal multiplexer like `screen`.

*You don't have to use it or even know about it to follow along.
<br/>
But some of us like to use it to switch between terminals.
<br/>
It has been preinstalled on your workshop nodes.*

- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
- Ctrl-b " → split window top/bottom
- Ctrl-b % → split window left/right
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b arrows → navigate to other windows
- Ctrl-b d → detach session
- tmux attach → reattach to session
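A minimal tmux round-trip matching the cheatsheet above:

```bash
tmux new -s workshop     # start a named session
# ...work, then press Ctrl-b d to detach...
tmux attach -t workshop  # reattach to the same session later
```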
@@ -16,7 +16,7 @@ fi

- Clone the repository on `node1`:
  ```bash
  git clone git://github.com/jpetazzo/container.training
  git clone git://@@GITREPO@@
  ```

]
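@@GITREPO@@ is expanded by the slide tooling to github.com/jpetazzo/container.training (see the gitrepo key in the YAML manifests further down), so for self-paced readers the plain HTTPS equivalent is:

```bash
git clone https://github.com/jpetazzo/container.training
cd container.training
```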
@@ -56,16 +56,16 @@ and displays aggregated logs.
## More detail on our sample application

- Visit the GitHub repository with all the materials of this workshop:
  <br/>https://github.com/jpetazzo/container.training
  <br/>https://@@GITREPO@@

- The application is in the [dockercoins](
  https://github.com/jpetazzo/container.training/tree/master/dockercoins)
  https://@@GITREPO@@/tree/master/dockercoins)
  subdirectory

- Let's look at the general layout of the source code:

  there is a Compose file [docker-compose.yml](
  https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml) ...
  https://@@GITREPO@@/blob/master/dockercoins/docker-compose.yml) ...

  ... and 4 other services, each in its own directory:

@@ -94,61 +94,6 @@ class: extra-details

---

## Service discovery in container-land

- We do not hard-code IP addresses in the code

- We do not hard-code FQDNs in the code, either

- We just connect to a service name, and container-magic does the rest

  (And by container-magic, we mean "a crafty, dynamic, embedded DNS server")

---

## Example in `worker/worker.py`

```python
redis = Redis("`redis`")


def get_random_bytes():
    r = requests.get("http://`rng`/32")
    return r.content


def hash_bytes(data):
    r = requests.post("http://`hasher`/",
                      data=data,
                      headers={"Content-Type": "application/octet-stream"})
```

(Full source code available [here](
https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
))
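To watch that embedded DNS server at work, you can resolve the service names from inside a running container (a sketch, assuming the app is up under Compose and the image ships getent):

```bash
# Each service name resolves to a container IP on the app's network.
docker-compose exec worker getent hosts redis rng hasher
```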
---

class: extra-details

## Links, naming, and service discovery

- Containers can have network aliases (resolvable through DNS)

- Compose file version 2+ makes each container reachable through its service name

- Compose file version 1 did require "links" sections

- Network aliases are automatically namespaced

  - you can have multiple apps declaring and using a service named `database`

  - containers in the blue app will resolve `database` to the IP of the blue database

  - containers in the green app will resolve `database` to the IP of the green database

---

## What's this application?

--

@@ -17,5 +17,5 @@ class: title, in-person
*Don't stream videos or download big files during the workshop.*<br/>
*Thank you!*

**Slides: http://container.training/**
]
**Slides: @@SLIDES@@**
]
slides/count-slides.py (new executable file, 57 lines)

@@ -0,0 +1,57 @@
#!/usr/bin/env python
import re
import sys

PREFIX = "name: toc-"
EXCLUDED = ["in-person"]

class State(object):
    def __init__(self):
        self.current_slide = 1
        self.section_title = None
        self.section_start = 0
        self.section_slides = 0
        self.chapters = {}
        self.sections = {}
    def show(self):
        if self.section_title.startswith("chapter-"):
            return
        print("{0.section_title}\t{0.section_start}\t{0.section_slides}".format(self))
        self.sections[self.section_title] = self.section_slides

state = State()

title = None
for line in open(sys.argv[1]):
    line = line.rstrip()
    if line.startswith(PREFIX):
        if state.section_title is None:
            print("{}\t{}\t{}".format("title", "index", "size"))
        else:
            state.show()
        state.section_title = line[len(PREFIX):].strip()
        state.section_start = state.current_slide
        state.section_slides = 0
    if line == "---":
        state.current_slide += 1
        state.section_slides += 1
    if line == "--":
        state.current_slide += 1
    toc_links = re.findall("\(#toc-(.*)\)", line)
    if toc_links and state.section_title.startswith("chapter-"):
        if state.section_title not in state.chapters:
            state.chapters[state.section_title] = []
        state.chapters[state.section_title].append(toc_links[0])
    # This is really hackish
    if line.startswith("class:"):
        for klass in EXCLUDED:
            if klass in line:
                state.section_slides -= 1
                state.current_slide -= 1

state.show()

for chapter in sorted(state.chapters):
    chapter_size = sum(state.sections[s] for s in state.chapters[chapter])
    print("{}\t{}\t{}".format("total size for", chapter, chapter_size))
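Presumed usage, given that the script reads `sys.argv[1]` and prints tab-separated "title, index, size" rows (the file name is a guess):

```bash
./count-slides.py kube-selfpaced.yml.html
```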
@@ -1,10 +0,0 @@
#!/bin/sh
INPUT=$1

{
    echo "# Front matter"
    cat "$INPUT"
} |
grep -e "^# " -e ^---$ | uniq -c |
sed "s/^ *//" | sed s/---// |
paste -d "\t" - -
BIN  slides/images/bridge1.png (new file, 30 KiB)
BIN  slides/images/bridge2.png (new file, 30 KiB)
BIN  slides/images/container-layers.jpg (new file, 45 KiB)
BIN  slides/images/ingress-lb.png (new file, 70 KiB)
BIN  slides/images/ingress-routing-mesh.png (new file, 60 KiB)
BIN  slides/images/sharing-layers.jpg (new file, 55 KiB)
BIN  (removed image, 22 KiB)
slides/index.css (new file, 59 lines)

@@ -0,0 +1,59 @@
body {
  background-image: url("images/container-background.jpg");
  max-width: 1024px;
  margin: 0 auto;
}
table {
  font-size: 20px;
  font-family: sans-serif;
  background: white;
  width: 100%;
  height: 100%;
  padding: 20px;
}
.header {
  font-size: 300%;
  font-weight: bold;
}
.title {
  font-size: 150%;
  font-weight: bold;
}
.details {
  font-size: 80%;
  font-style: italic;
}
td {
  padding: 1px;
  height: 1em;
}
td.spacer {
  height: unset;
}
td.footer {
  padding-top: 80px;
  height: 100px;
}
td.title {
  border-bottom: thick solid black;
  padding-bottom: 2px;
  padding-top: 20px;
}
a {
  text-decoration: none;
}
a:hover {
  background: yellow;
}
a.attend:after {
  content: "📅 attend";
}
a.slides:after {
  content: "📚 slides";
}
a.chat:after {
  content: "💬 chat";
}
a.video:after {
  content: "📺 video";
}
@@ -1,236 +0,0 @@
<html>
<head>
<title>Container Training</title>
<style type="text/css">
body {
  background-image: url("images/container-background.jpg");
  max-width: 1024px;
  margin: 0 auto;
}
table {
  font-size: 20px;
  font-family: sans-serif;
  background: white;
  width: 100%;
  height: 100%;
  padding: 20px;
}
.header {
  font-size: 300%;
  font-weight: bold;
}
.title {
  font-size: 150%;
  font-weight: bold;
}
td {
  padding: 1px;
  height: 1em;
}
td.spacer {
  height: unset;
}
td.footer {
  padding-top: 80px;
  height: 100px;
}
td.title {
  border-bottom: thick solid black;
  padding-bottom: 2px;
  padding-top: 20px;
}
a {
  text-decoration: none;
}
a:hover {
  background: yellow;
}
a.attend:after {
  content: "📅 attend";
}
a.slides:after {
  content: "📚 slides";
}
a.chat:after {
  content: "💬 chat";
}
a.video:after {
  content: "📺 video";
}
</style>
</head>
<body>
<div class="main">
<table>
<tr><td class="header" colspan="4">Container Training</td></tr>

<tr><td class="title" colspan="4">Coming soon near you</td></tr>

<!--
<td>Nothing for now (stay tuned...)</td>
thing for now (stay tuned...)</td>
-->

<tr>
<td>April 11-12, 2018: Introduction aux conteneurs (in French)</td>
<td> </td>
<td><a class="attend" href="http://paris.container.training/intro.html" /></td>
</tr>

<tr>
<td>April 13, 2018: Introduction à l'orchestration (in French)</td>
<td> </td>
<td><a class="attend" href="http://paris.container.training/kube.html" /></td>
</tr>

<tr>
<td>April 24th, 2018: GOTO Chicago - Kubernetes 101</td>
<td> </td>
<td><a class="attend" href="https://gotochgo.com/2018/workshops/88#k8s101" /></td>
</tr>

<tr>
<td>April 27th, 2018: GOTO Chicago - Swarm Orchestration</td>
<td> </td>
<td><a class="attend" href="https://gotochgo.com/2018/workshops/85" /></td>
</tr>

<tr>
<td>May 8th, 2018: CRAFT Budapest - Swarm Orchestration</td>
<td> </td>
<td><a class="attend" href="https://craft-conf.com/speaker/BretFisher" /></td>
</tr>

<tr>
<td>May 17th, 2018: Revolution Conf Virginia Beach - Docker 101</td>
<td> </td>
<td><a class="attend" href="https://revolutionconf.com/" /></td>
</tr>

<tr>
<td>June 12th, 2018: Velocity San Jose - Kubernetes 101</td>
<td> </td>
<td><a class="attend" href="https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66286" /></td>
</tr>

<tr>
<td>July 17th, 2018: OSCON - Kubernetes 101</td>
<td> </td>
<td><a class="attend" href="https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/66287" /></td>
</tr>

<tr><td class="title" colspan="4">Past workshops</td></tr>

<tr>
<td>April 6th, 2018: MuraCon Sacramento, CA - Docker 101</td>
<td><a class="slides" href="https://muracon18.bretfisher.com" /></td>
</tr>

<tr>
<td>March 27, 2018: SREcon Americas — Kubernetes 101</td>
<td><a class="slides" href="http://srecon2018.container.training/" /></td>
</tr>

<tr>
<td>March 27, 2018: Boosterconf: Kubernetes 101</td>
<td><a class="slides" href="http://boosterconf2018.container.training/" /></td>
</tr>
<tr>
<td>February 22, 2018: IndexConf: Kubernetes 101</td>
<td><a class="slides" href="http://indexconf2018.container.training/" /></td>
<!--
<td><a class="attend" href="https://developer.ibm.com/indexconf/sessions/#!?id=5474" />
-->
</tr>

<!--
<tr>
<td>Kubernetes enablement at Docker</td>
<td><a class="slides" href="http://kube.container.training/" /></td>
</tr>
-->

<tr>
<td>QCON SF: Orchestrating Microservices with Docker Swarm</td>
<td><a class="slides" href="http://qconsf2017swarm.container.training/" /></td>
</tr>

<tr>
<td>QCON SF: Introduction to Docker and Containers</td>
<td><a class="slides" href="http://qconsf2017intro.container.training/" /></td>
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviLgqTum8MkspG_8VzGl6C07" /></td>
</tr>

<!--
<tr>
<td>LISA17 M7: Getting Started with Docker and Containers</td>
<td><a class="slides" href="http://lisa17m7.container.training/" /></td>
</tr>

<tr>
<td>LISA17 T9: Build, Ship, and Run Microservices on a Docker Swarm Cluster</td>
<td><a class="slides" href="http://lisa17t9.container.training/" /></td>
</tr>
-->

<tr>
<td>Deploying and scaling microservices with Docker and Kubernetes</td>
<td><a class="slides" href="http://osseu17.container.training/" /></td>
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviLrsyydCzxWrIP_1-wkcSHS" /></td>
</tr>

<!--
<tr>
<td>DockerCon Workshop: from Zero to Hero (full day, B3 M1-2)</td>
<td><a class="slides" href="http://dc17eu.container.training/" /></td>
</tr>

<tr>
<td>DockerCon Workshop: Orchestration for Advanced Users (afternoon, B4 M5-6)</td>
<td><a class="slides" href="https://www.bretfisher.com/dockercon17eu/" /></td>
</tr>
-->

<tr>
<td>LISA16 T1: Deploying and Scaling Applications with Docker Swarm</td>
<td><a class="slides" href="http://lisa16t1.container.training/" /></td>
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviIDDhr8vIwCN1wkyNGXjbbc" /></td>
</tr>

<tr>
<td>PyCon2016: Introduction to Docker and containers</td>
<td><a class="slides" href="https://us.pycon.org/2016/site_media/media/tutorial_handouts/DockerSlides.pdf" /></td>
<td><a class="video" href="https://www.youtube.com/watch?v=ZVaRK10HBjo" /></td>
</tr>

<tr><td class="title" colspan="4">Self-paced tutorials</td></tr>

<tr>
<td>Introduction to Docker and Containers</td>
<td><a class="slides" href="intro-selfpaced.yml.html" /></td>
</tr>

<tr>
<td>Container Orchestration with Docker and Swarm</td>
<td><a class="slides" href="swarm-selfpaced.yml.html" /></td>
</tr>

<tr>
<td>Deploying and Scaling Microservices with Docker and Kubernetes</td>
<td><a class="slides" href="kube-selfpaced.yml.html" /></td>
</tr>

<tr><td class="spacer"></td></tr>

<tr>
<td class="footer">
Maintained by Jérôme Petazzoni (<a href="https://twitter.com/jpetazzo">@jpetazzo</a>) and <a href="https://github.com/jpetazzo/container.training/graphs/contributors">contributors</a>.
</td>
</tr>
</table>
</div>
</body>
</html>
slides/index.py (new executable file, 140 lines)

@@ -0,0 +1,140 @@
#!/usr/bin/env python2
# coding: utf-8
TEMPLATE="""<html>
<head>
<title>{{ title }}</title>
<link rel="stylesheet" href="index.css">
</head>
<body>
<div class="main">
<table>
<tr><td class="header" colspan="3">{{ title }}</td></tr>

{% if coming_soon %}
<tr><td class="title" colspan="3">Coming soon near you</td></tr>

{% for item in coming_soon %}
<tr>
<td>{{ item.title }}</td>
<td>{% if item.slides %}<a class="slides" href="{{ item.slides }}" />{% endif %}</td>
<td><a class="attend" href="{{ item.attend }}" /></td>
</tr>
<tr>
<td class="details">Scheduled {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}

{% if past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>

{% for item in past_workshops[:5] %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td>{% if item.video %}<a class="video" href="{{ item.video }}" />{% endif %}</td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>

{% endfor %}

{% if past_workshops[5:] %}
<tr>
<td>... and at least <a href="past.html">{{ past_workshops[5:] | length }} more</a>.</td>
</tr>
{% endif %}
{% endif %}

{% if recorded_workshops %}
<tr><td class="title" colspan="3">Recorded workshops</td></tr>

{% for item in recorded_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td><a class="video" href="{{ item.video }}" /></td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}

{% if self_paced %}
<tr><td class="title" colspan="3">Self-paced tutorials</td></tr>
{% for item in self_paced %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
</tr>
{% endfor %}
{% endif %}

{% if all_past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>
{% for item in all_past_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
{% if item.video %}
<td><a class="video" href="{{ item.video }}" /></td>
{% endif %}
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}

<tr><td class="spacer"></td></tr>

<tr>
<td class="footer">
Maintained by Jérôme Petazzoni (<a href="https://twitter.com/jpetazzo">@jpetazzo</a>) and <a href="https://github.com/jpetazzo/container.training/graphs/contributors">contributors</a>.
</td>
</tr>
</table>
</div>
</body>
</html>""".decode("utf-8")

import datetime
import jinja2
import yaml

items = yaml.load(open("index.yaml"))

for item in items:
    if "date" in item:
        date = item["date"]
        suffix = {
            1: "st", 2: "nd", 3: "rd",
            21: "st", 22: "nd", 23: "rd",
            31: "st"}.get(date.day, "th")
        item["prettydate"] = date.strftime("%B %e{}, %Y").format(suffix)

today = datetime.date.today()
coming_soon = [i for i in items if i.get("date") and i["date"] >= today]
coming_soon.sort(key=lambda i: i["date"])
past_workshops = [i for i in items if i.get("date") and i["date"] < today]
past_workshops.sort(key=lambda i: i["date"], reverse=True)
self_paced = [i for i in items if not i.get("date")]
recorded_workshops = [i for i in items if i.get("video")]

template = jinja2.Template(TEMPLATE)
with open("index.html", "w") as f:
    f.write(template.render(
        title="Container Training",
        coming_soon=coming_soon,
        past_workshops=past_workshops,
        self_paced=self_paced,
        recorded_workshops=recorded_workshops
    ).encode("utf-8"))

with open("past.html", "w") as f:
    f.write(template.render(
        title="Container Training",
        all_past_workshops=past_workshops
    ).encode("utf-8"))
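To regenerate the site locally (the script reads index.yaml and writes index.html and past.html; note the python2 shebang, and pip2 is an assumption about your environment):

```bash
cd slides
pip2 install jinja2 pyyaml   # the two third-party imports the script needs
./index.py
```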
slides/index.yaml (new file, 361 lines)

@@ -0,0 +1,361 @@
- date: 2018-07-12
  city: Minneapolis, MN
  country: us
  event: devopsdays Minneapolis
  title: Kubernetes 101
  speaker: "ashleymcnamara, bketelsen"
  attend: https://www.devopsdays.org/events/2018-minneapolis/registration/

- date: 2018-10-01
  city: New York, NY
  country: us
  event: Velocity
  title: Kubernetes 101
  speaker: bridgetkromhout
  attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70102

- date: 2018-09-30
  city: New York, NY
  country: us
  event: Velocity
  title: Kubernetes Bootcamp - Deploying and Scaling Microservices
  speaker: jpetazzo
  attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/69875

- date: 2018-07-17
  city: Portland, OR
  country: us
  event: OSCON
  title: Kubernetes 101
  speaker: bridgetkromhout
  attend: https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/66287

- date: 2018-06-27
  city: Amsterdam
  country: nl
  event: devopsdays
  title: Kubernetes 101
  speaker: bridgetkromhout
  slides: https://devopsdaysams2018.container.training
  attend: https://www.devopsdays.org/events/2018-amsterdam/registration/

- date: 2018-06-12
  city: San Jose, CA
  country: us
  event: Velocity
  title: Kubernetes 101
  speaker: bridgetkromhout
  slides: https://velocitysj2018.container.training
  attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66286

- date: 2018-06-12
  city: San Jose, CA
  country: us
  event: Velocity
  title: "Kubernetes two-day kickstart: Deploying and Scaling Microservices with Kubernetes"
  speaker: "bketelsen, erikstmartin"
  slides: http://kubernetes.academy/kube-fullday.yml.html#1
  attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932

- date: 2018-06-11
  city: San Jose, CA
  country: us
  event: Velocity
  title: "Kubernetes two-day kickstart: Introduction to Docker and Containers"
  speaker: "bketelsen, erikstmartin"
  slides: http://kubernetes.academy/intro-fullday.yml.html#1
  attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932

- date: 2018-05-17
  city: Virginia Beach, VA
  country: us
  event: Revolution Conf
  title: Docker 101
  speaker: bretfisher
  slides: https://revconf18.bretfisher.com

- date: 2018-05-10
  city: Saint Paul, MN
  country: us
  event: NDC Minnesota
  title: Kubernetes 101
  slides: https://ndcminnesota2018.container.training

- date: 2018-05-08
  city: Budapest
  country: hu
  event: CRAFT
  title: Swarm Orchestration
  slides: https://craftconf18.bretfisher.com

- date: 2018-04-27
  city: Chicago, IL
  country: us
  event: GOTO
  title: Swarm Orchestration
  slides: https://gotochgo18.bretfisher.com

- date: 2018-04-24
  city: Chicago, IL
  country: us
  event: GOTO
  title: Kubernetes 101
  slides: http://gotochgo2018.container.training/

- date: 2018-04-11
  city: Paris
  country: fr
  title: Introduction aux conteneurs
  lang: fr
  slides: https://avril2018.container.training/intro.yml.html

- date: 2018-04-13
  city: Paris
  country: fr
  lang: fr
  title: Introduction à l'orchestration
  slides: https://avril2018.container.training/kube.yml.html

- date: 2018-04-06
  city: Sacramento, CA
  country: us
  event: MuraCon
  title: Docker 101
  slides: https://muracon18.bretfisher.com

- date: 2018-03-27
  city: Santa Clara, CA
  country: us
  event: SREcon Americas
  title: Kubernetes 101
  slides: http://srecon2018.container.training/

- date: 2018-03-27
  city: Bergen
  country: no
  event: Boosterconf
  title: Kubernetes 101
  slides: http://boosterconf2018.container.training/

- date: 2018-02-22
  city: San Francisco, CA
  country: us
  event: IndexConf
  title: Kubernetes 101
  slides: http://indexconf2018.container.training/
  #attend: https://developer.ibm.com/indexconf/sessions/#!?id=5474

- date: 2017-11-17
  city: San Francisco, CA
  country: us
  event: QCON SF
  title: Orchestrating Microservices with Docker Swarm
  slides: http://qconsf2017swarm.container.training/

- date: 2017-11-16
  city: San Francisco, CA
  country: us
  event: QCON SF
  title: Introduction to Docker and Containers
  slides: http://qconsf2017intro.container.training/
  video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLgqTum8MkspG_8VzGl6C07

- date: 2017-10-30
  city: San Francisco, CA
  country: us
  event: LISA
  title: (M7) Getting Started with Docker and Containers
  slides: http://lisa17m7.container.training/

- date: 2017-10-31
  city: San Francisco, CA
  country: us
  event: LISA
  title: (T9) Build, Ship, and Run Microservices on a Docker Swarm Cluster
  slides: http://lisa17t9.container.training/

- date: 2017-10-26
  city: Prague
  country: cz
  event: Open Source Summit Europe
  title: Deploying and scaling microservices with Docker and Kubernetes
  slides: http://osseu17.container.training/
  video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLrsyydCzxWrIP_1-wkcSHS

- date: 2017-10-16
  city: Copenhagen
  country: dk
  event: DockerCon
  title: Swarm from Zero to Hero
  slides: http://dc17eu.container.training/

- date: 2017-10-16
  city: Copenhagen
  country: dk
  event: DockerCon
  title: Orchestration for Advanced Users
  slides: https://www.bretfisher.com/dockercon17eu

- date: 2017-07-25
  city: Minneapolis, MN
  country: us
  event: devopsdays
  title: Deploying & Scaling microservices with Docker Swarm
  video: https://www.youtube.com/watch?v=DABbqyJeG_E

- date: 2017-06-12
  city: Berlin
  country: de
  event: DevOpsCon
  title: Deploying and scaling containerized Microservices with Docker and Swarm

- date: 2017-05-18
  city: Portland, OR
  country: us
  event: PyCon
  title: Deploy and scale containers with Docker native, open source orchestration
  video: https://www.youtube.com/watch?v=EuzoEaE6Cqs

- date: 2017-05-08
  city: Austin, TX
  country: us
  event: OSCON
  title: Deploying and scaling applications in containers with Docker

- date: 2017-05-04
  city: Chicago, IL
  country: us
  event: GOTO
  title: Container deployment, scaling, and orchestration with Docker Swarm

- date: 2017-04-17
  city: Austin, TX
  country: us
  event: DockerCon
  title: Orchestration Workshop

- date: 2017-03-22
  city: San Jose, CA
  country: us
  event: Devoxx
  title: Container deployment, scaling, and orchestration with Docker Swarm

- date: 2017-03-03
  city: Pasadena, CA
  country: us
  event: SCALE
  title: Container deployment, scaling, and orchestration with Docker Swarm

- date: 2016-12-06
  city: Boston, MA
  country: us
  event: LISA
  title: Deploying and Scaling Applications with Docker Swarm
  slides: http://lisa16t1.container.training/
  video: https://www.youtube.com/playlist?list=PLBAFXs0YjviIDDhr8vIwCN1wkyNGXjbbc

- date: 2016-10-07
  city: Berlin
  country: de
  event: LinuxCon
  title: Orchestrating Containers in Production at Scale with Docker Swarm

- date: 2016-09-20
  city: New York, NY
  country: us
  event: Velocity
  title: Deployment and orchestration at scale with Docker

- date: 2016-08-25
  city: Toronto
  country: ca
  event: LinuxCon
  title: Orchestrating Containers in Production at Scale with Docker Swarm

- date: 2016-06-22
  city: Seattle, WA
  country: us
  event: DockerCon
  title: Orchestration Workshop

- date: 2016-05-29
  city: Portland, OR
  country: us
  event: PyCon
  title: Introduction to Docker and containers
  slides: https://us.pycon.org/2016/site_media/media/tutorial_handouts/DockerSlides.pdf
  video: https://www.youtube.com/watch?v=ZVaRK10HBjo

- date: 2016-05-17
  city: Austin, TX
  country: us
  event: OSCON
  title: Deployment and orchestration at scale with Docker Swarm

- date: 2016-04-27
  city: Budapest
  country: hu
  event: CRAFT
  title: Advanced Docker concepts and container orchestration

- date: 2016-04-22
  city: Berlin
  country: de
  event: Neofonie
  title: Orchestration Workshop

- date: 2016-04-05
  city: Stockholm
  country: se
  event: Praqma
  title: Orchestration Workshop

- date: 2016-03-22
  city: Munich
  country: de
  event: Stylight
  title: Orchestration Workshop

- date: 2016-03-11
  city: London
  country: uk
  event: QCON
  title: Containers in production with Docker Swarm

- date: 2016-02-19
  city: Amsterdam
  country: nl
  event: Container Solutions
  title: Orchestration Workshop

- date: 2016-02-15
  city: Paris
  country: fr
  event: Zenika
  title: Orchestration Workshop

- date: 2016-01-22
  city: Pasadena, CA
  country: us
  event: SCALE
  title: Advanced Docker concepts and container orchestration

#- date: 2015-11-10
#  city: Washington DC
#  country: us
#  event: LISA
#  title: Deploying and Scaling Applications with Docker Swarm

#2015-09-24-strangeloop


- title: Introduction to Docker and Containers
  slides: intro-selfpaced.yml.html

- title: Container Orchestration with Docker and Swarm
  slides: swarm-selfpaced.yml.html

- title: Deploying and Scaling Microservices with Docker and Kubernetes
  slides: kube-selfpaced.yml.html
@@ -1,11 +1,14 @@
title: |
  Introduction
  to Docker and
  Containers
  to Containers

chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- self-paced

@@ -27,13 +30,13 @@ chapters:
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- intro/Multi_Stage_Builds.md
- - intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- intro/Container_Networking_Basics.md
- - intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md

@@ -42,13 +45,14 @@ chapters:
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Docker_Machine.md
- intro/Advanced_Dockerfiles.md
- - intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Logging.md
- intro/Resource_Limits.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- intro/Container_Engines.md
- - intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md
@@ -1,11 +1,14 @@
title: |
  Introduction
  to Docker and
  Containers
  to Containers

chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- in-person

@@ -27,13 +30,13 @@ chapters:
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- intro/Multi_Stage_Builds.md
- - intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- intro/Container_Networking_Basics.md
- - intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md

@@ -42,13 +45,14 @@ chapters:
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Docker_Machine.md
- intro/Advanced_Dockerfiles.md
- - intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Logging.md
- intro/Resource_Limits.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- intro/Container_Engines.md
- - intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md
@@ -34,18 +34,6 @@ In this section, we will see more Dockerfile commands.

---

-## The `MAINTAINER` instruction
-
-The `MAINTAINER` instruction tells you who wrote the `Dockerfile`.
-
-```dockerfile
-MAINTAINER Docker Education Team <education@docker.com>
-```
-
-It's optional but recommended.
-
----
-
## The `RUN` instruction

The `RUN` instruction can be specified in two ways.
@@ -367,7 +355,7 @@ class: extra-details

## Overriding the `ENTRYPOINT` instruction

-The entry point can be overriden as well.
+The entry point can be overridden as well.

```bash
$ docker run -it training/ls
@@ -428,5 +416,4 @@ ONBUILD COPY . /src
```

* You can't chain `ONBUILD` instructions with `ONBUILD`.
-* `ONBUILD` can't be used to trigger `FROM` and `MAINTAINER`
-instructions.
+* `ONBUILD` can't be used to trigger `FROM` instructions.

@@ -40,6 +40,8 @@ ambassador containers.

---

class: pic

![ambassador](images/ambassador-diagram.png)

---

@@ -117,7 +117,7 @@ CONTAINER ID IMAGE ... CREATED STATUS ...

Many Docker commands will work on container IDs: `docker stop`, `docker rm`...

-If we want to list only the IDs of our containers (without the other colums
+If we want to list only the IDs of our containers (without the other columns
or the header line),
we can use the `-q` ("Quiet", "Quick") flag:


@@ -49,7 +49,7 @@ Before diving in, let's see a small example of Compose in action.

---

## Compose in action
class: pic

![composeanim](images/composeanim.gif)

@@ -60,6 +60,10 @@ Before diving in, let's see a small example of Compose in action.
If you are using the official training virtual machines, Compose has been
pre-installed.

If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them.

If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`.

You can always check that it is installed by running:

```bash
@@ -135,22 +139,33 @@ services:

---

-## Compose file versions
+## Compose file structure

-Version 1 directly has the various containers (`www`, `redis`...) at the top level of the file.
+A Compose file has multiple sections:

-Version 2 has multiple sections:
+* `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.)

-* `version` is mandatory and should be `"2"`.

-* `services` is mandatory and corresponds to the content of the version 1 format.
+* `services` is mandatory. A service is one or more replicas of the same image running as containers.

* `networks` is optional and indicates to which networks containers should be connected.
-<br/>(By default, containers will be connected on a private, per-app network.)
+<br/>(By default, containers will be connected on a private, per-compose-file network.)

* `volumes` is optional and can define volumes to be used and/or shared by the containers.

-Version 3 adds support for deployment options (scaling, rolling updates, etc.)
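To make this concrete, here is a hedged sketch of a minimal version 2 Compose file (the service names, the `./www` build context, and the port mapping are illustrative assumptions, not part of the original example):

```bash
# Sketch: write a minimal Compose file with a heredoc, then start the app.
# Assumes a ./www directory containing a Dockerfile.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  www:
    build: www
    ports:
      - 8000:5000
  redis:
    image: redis
EOF
docker-compose up -d
```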

---

## Compose file versions

* Version 1 is legacy and shouldn't be used.

  (If you see a Compose file without `version` and `services`, it's a legacy v1 file.)

* Version 2 added support for networks and volumes.

* Version 3 added support for deployment options (scaling, rolling updates, etc).

The [Docker documentation](https://docs.docker.com/compose/compose-file/)
has excellent information about the Compose file format if you need to know more about versions.

---

@@ -260,6 +275,8 @@ Removing trainingwheels_www_1 ... done
Removing trainingwheels_redis_1 ... done
```

Use `docker-compose down -v` to remove everything including volumes.

---

## Special handling of volumes

@@ -73,7 +73,7 @@ Containers also exist (sometimes with other names) on Windows, macOS, Solaris, F

## LXC

-* The venerable ancestor (first realeased in 2008).
+* The venerable ancestor (first released in 2008).

* Docker initially relied on it to execute containers.

@@ -65,9 +65,17 @@ eb0eeab782f4 host host

* A network is managed by a *driver*.

-* All the drivers that we have seen before are available.
+* The built-in drivers include:

-* A new multi-host driver, *overlay*, is available out of the box.
+  * `bridge` (default)

+  * `none`

+  * `host`

+  * `macvlan`

+* A multi-host driver, *overlay*, is available out of the box (for Swarm clusters).

* More drivers can be provided by plugins (OVS, VLAN...)

@@ -75,6 +83,8 @@ eb0eeab782f4 host host

---

class: extra-details

## Differences with the CNI

* CNI = Container Network Interface

@@ -87,6 +97,22 @@ eb0eeab782f4 host host

---

class: pic

## Single container in a Docker network

![bridge0](images/bridge1.png)

---

class: pic

## Two containers on two Docker networks

![bridge2](images/bridge2.png)

---

## Creating a network

Let's create a network called `dev`.

@@ -284,7 +310,7 @@ since we wiped out the old Redis container).

---

-class: x-extra-details
+class: extra-details

## Names are *local* to each network

@@ -324,7 +350,7 @@ class: extra-details
Create the `prod` network.

```bash
-$ docker create network prod
+$ docker network create prod
5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c
```

@@ -472,11 +498,13 @@ b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09

* If containers span multiple hosts, we need an *overlay* network to connect them together.

-* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN.
+* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging
+  VXLAN, *enabled with Swarm Mode*.

* Other plugins (Weave, Calico...) can provide overlay networks as well.

-* Once you have an overlay network, *all the features that we've used in this chapter work identically.*
+* Once you have an overlay network, *all the features that we've used in this chapter work identically
+  across multiple hosts.*

---

@@ -514,13 +542,174 @@ General idea:

---

-## Section summary
+## Connecting and disconnecting dynamically

-We've learned how to:
+* So far, we have specified which network to use when starting the container.

-* Create private networks for groups of containers.
+* The Docker Engine also allows to connect and disconnect while the container runs.

-* Assign IP addresses to containers.
+* This feature is exposed through the Docker API, and through two Docker CLI commands:

-* Use container naming to implement service discovery.
+  * `docker network connect <network> <container>`

+  * `docker network disconnect <network> <container>`

---

## Dynamically connecting to a network

* We have a container named `es` connected to a network named `dev`.

* Let's start a simple alpine container on the default network:

```bash
$ docker run -ti alpine sh
/ #
```

* In this container, try to ping the `es` container:

```bash
/ # ping es
ping: bad address 'es'
```

This doesn't work, but we will change that by connecting the container.

---

## Finding the container ID and connecting it

* Figure out the ID of our alpine container; here are two methods:

  * looking at `/etc/hostname` in the container,

  * running `docker ps -lq` on the host.

* Run the following command on the host:

```bash
$ docker network connect dev `<container_id>`
```

---

## Checking what we did

* Try again to `ping es` from the container.

* It should now work correctly:

```bash
/ # ping es
PING es (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms
^C
```

* Interrupt it with Ctrl-C.

---

## Looking at the network setup in the container

We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`:

.small[
```bash
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1
       valid_lft forever preferred_lft forever
/ #
```
]

Each network connection is materialized with a virtual network interface.

As we can see, we can be connected to multiple networks at the same time.

---

## Disconnecting from a network

* Let's try the symmetrical command to disconnect the container:
```bash
$ docker network disconnect dev <container_id>
```

* From now on, if we try to ping `es`, it will not resolve:
```bash
/ # ping es
ping: bad address 'es'
```

* Trying to ping the IP address directly won't work either:
```bash
/ # ping 172.20.0.3
... (nothing happens until we interrupt it with Ctrl-C)
```

---

class: extra-details

## Network aliases are scoped per network

* Each network has its own set of network aliases.

* We saw this earlier: `es` resolves to different addresses in `dev` and `prod`.

* If we are connected to multiple networks, the resolver looks up names in each of them
(as of Docker Engine 18.03, it is the connection order) and stops as soon as the name
is found.

* Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not**
give us the addresses of all the `es` services, but only the ones in `dev` or `prod`.

* However, we can look up `es.dev` or `es.prod` if we need to.
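As a hedged sketch (reusing the `dev` and `prod` networks and the `es` containers from the earlier examples), we can check the scoped aliases from a container connected to both networks:

```bash
# Sketch: start a container on dev, then also connect it to prod.
CID=$(docker run -d --net dev alpine sleep 1d)
docker network connect prod $CID
docker exec $CID nslookup es.dev     # resolves to the es container in dev
docker exec $CID nslookup es.prod    # resolves to the es container in prod
```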

---

class: extra-details

## Finding out about our networks and names

* We can do reverse DNS lookups on containers' IP addresses.

* If the IP address belongs to a network (other than the default bridge), the result will be:

```
name-or-first-alias-or-container-id.network-name
```

* Example:

.small[
```bash
$ docker run -ti --net prod --net-alias hello alpine
/ # apk add --no-cache drill
...
OK: 5 MiB in 13 packages
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:15:00:03
          inet addr:`172.21.0.3`  Bcast:172.21.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
...
/ # drill -t ptr `3.0.21.172`.in-addr.arpa
...
;; ANSWER SECTION:
3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`.
...
```
]

@@ -98,7 +98,7 @@ $ curl localhost:32768
* We can see that metadata with `docker inspect`:

```bash
-$ docker inspect nginx --format {{.Config.ExposedPorts}}
+$ docker inspect --format '{{.Config.ExposedPorts}}' nginx
map[80/tcp:{}]
```


@@ -64,7 +64,7 @@ Create this Dockerfile.

## Testing our C program

-* Create `hello.c` and `Dockerfile` in the same direcotry.
+* Create `hello.c` and `Dockerfile` in the same directory.

* Run `docker build -t hello .` in this directory.

@@ -10,10 +10,12 @@

* [Solaris Containers (2004)](https://en.wikipedia.org/wiki/Solaris_Containers)

-* [FreeBSD jails (1999)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)
+* [FreeBSD jails (1999-2000)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)

Containers have been around for a *very long time* indeed.

(See [this excellent blog post by Serge Hallyn](https://s3hh.wordpress.com/2018/03/22/history-of-containers/) for more historic details.)

---

class: pic

@@ -30,7 +30,7 @@

## Environment variables

-- Most of the tools (CLI, libraries...) connecting to the Docker API can use ennvironment variables.
+- Most of the tools (CLI, libraries...) connecting to the Docker API can use environment variables.

- These variables are:

@@ -40,7 +40,7 @@

- `DOCKER_CERT_PATH` (path to the keypair and certificate to use for auth)

-- `docker-machine env ...` will generate the variables needed to connect to an host.
+- `docker-machine env ...` will generate the variables needed to connect to a host.

- `eval $(docker-machine env ...)` sets these variables in the current shell.
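For instance (a sketch assuming a machine named `node1` was created earlier):

```bash
docker-machine env node1            # prints the DOCKER_* export commands
eval $(docker-machine env node1)    # applies them to the current shell
docker ps                           # now talks to the engine on node1
eval $(docker-machine env -u)       # --unset reverts to the local engine
```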

@@ -50,7 +50,7 @@

With `docker-machine`, we can:

-- upgrade an host to the latest version of the Docker Engine,
+- upgrade a host to the latest version of the Docker Engine,

- start/stop/restart hosts,

@@ -51,9 +51,8 @@ The dependencies are reinstalled every time, because the build system does not k

```bash
FROM python
-MAINTAINER Docker Education Team <education@docker.com>
-COPY . /src/
WORKDIR /src
+COPY . .
RUN pip install -qr requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
@@ -67,11 +66,10 @@ Adding the dependencies as a separate step means that Docker can cache more effi

```bash
FROM python
-MAINTAINER Docker Education Team <education@docker.com>
-COPY ./requirements.txt /tmp/requirements.txt
+COPY requirements.txt /tmp/requirements.txt
RUN pip install -qr /tmp/requirements.txt
-COPY . /src/
WORKDIR /src
+COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```

@@ -98,3 +96,266 @@ CMD, EXPOSE ...
* The build fails as soon as an instruction fails
* If `RUN <unit tests>` fails, the build doesn't produce an image
* If it succeeds, it produces a clean image (without test libraries and data)

---

# Dockerfile examples

There are a number of tips, tricks, and techniques that we can use in Dockerfiles.

But sometimes, we have to use different (and even opposed) practices depending on:

- the complexity of our project,

- the programming language or framework that we are using,

- the stage of our project (early MVP vs. super-stable production),

- whether we're building a final image or a base for further images,

- etc.

We are going to show a few examples using very different techniques.

---

## When to optimize an image

When authoring official images, it is a good idea to reduce as much as possible:

- the number of layers,

- the size of the final image.

This is often done at the expense of build time and convenience for the image maintainer;
but when an image is downloaded millions of times, saving even a few seconds of pull time
can be worth it.

.small[
```dockerfile
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
 && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
 && docker-php-ext-install gd
...
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
 && echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
 && tar -xzf wordpress.tar.gz -C /usr/src/ \
 && rm wordpress.tar.gz \
 && chown -R www-data:www-data /usr/src/wordpress
```
]

(Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile))

---

## When to *not* optimize an image

Sometimes, it is better to prioritize *maintainer convenience*.

In particular, if:

- the image changes a lot,

- the image has very few users (e.g. only 1, the maintainer!),

- the image is built and run on the same machine,

- the image is built and run on machines with a very fast link ...

In these cases, just keep things simple!

(Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.)

---

```dockerfile
FROM debian:sid

RUN apt-get update -q
RUN apt-get install -yq build-essential make
RUN apt-get install -yq zlib1g-dev
RUN apt-get install -yq ruby ruby-dev
RUN apt-get install -yq python-pygments
RUN apt-get install -yq nodejs
RUN apt-get install -yq cmake
RUN gem install --no-rdoc --no-ri github-pages

COPY . /blog
WORKDIR /blog

VOLUME /blog/_site

EXPOSE 4000
CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"]
```

---

## Multi-dimensional versioning systems

Images can have a tag, indicating the version of the image.

But sometimes, there are multiple important components, and we need to indicate the versions
for all of them.

This can be done with environment variables:

```dockerfile
ENV PIP=9.0.3 \
    ZC_BUILDOUT=2.11.2 \
    SETUPTOOLS=38.7.0 \
    PLONE_MAJOR=5.1 \
    PLONE_VERSION=5.1.0 \
    PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d
```

(Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile))

---

## Entrypoints and wrappers

It is very common to define a custom entrypoint.

That entrypoint will generally be a script, performing any combination of:

- pre-flight checks (if a required dependency is not available, display
a nice error message early instead of an obscure one in a deep log file),

- generation or validation of configuration files,

- dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`),

- and more.

---

## A typical entrypoint script

```bash
#!/bin/sh
set -e

# first arg is '-f' or '--some-option'
# or first arg is 'something.conf'
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
    set -- redis-server "$@"
fi

# allow the container to be started with '--user'
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
    chown -R redis .
    exec su-exec redis "$0" "$@"
fi

exec "$@"
```

(Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh))

---

## Factoring information

To facilitate maintenance (and avoid human errors), avoid repeating information like:

- version numbers,

- remote asset URLs (e.g. source tarballs) ...

Instead, use environment variables.

.small[
```dockerfile
ENV NODE_VERSION 10.2.1
...
RUN ...
 && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \
 && curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
 && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
 && grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
 && tar -xf "node-v$NODE_VERSION.tar.xz" \
 && cd "node-v$NODE_VERSION" \
 ...
```
]

(Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile))

---

## Overrides

In theory, development and production images should be the same.

In practice, we often need to enable specific behaviors in development (e.g. debug statements).

One way to reconcile both needs is to use Compose to enable these behaviors.

Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.

---

## Production image

This Dockerfile builds an image leveraging gunicorn:

```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```

(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))

---

## Development Compose file

This Compose file uses the same image, but with a few overrides for development:

- the Flask development server is used (overriding `CMD`),

- the `DEBUG` environment variable is set,

- a volume is used to provide a faster local development workflow.

.small[
```yaml
services:
  www:
    build: www
    ports:
      - 8000:5000
    user: nobody
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
```
]

(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))

---

## How to know which best practices are better?

- The main goal of containers is to make our lives easier.

- In this chapter, we showed many ways to write Dockerfiles.

- These Dockerfiles sometimes use diametrically opposed techniques.

- Yet, they were the "right" ones *for a specific situation.*

- It's OK (and even encouraged) to start simple and evolve as needed.

- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!

@@ -110,6 +110,8 @@ Beautiful! .emoji[😍]

---

class: in-person

## Counting packages in the container

Let's check how many packages are installed there.
@@ -127,6 +129,8 @@ How many packages do we have on our host?

---

class: in-person

## Counting packages on the host

Exit the container by logging out of the shell, like you would usually do.
@@ -145,18 +149,34 @@ Now, try to:

---

class: self-paced

## Comparing the container and the host

Exit the container by logging out of the shell, with `^D` or `exit`.

Now try to run `figlet`. Does that work?

(It shouldn't; except if, by coincidence, you are running on a machine where figlet was installed before.)

---

## Host and containers are independent things

-* We ran an `ubuntu` container on an `ubuntu` host.
+* We ran an `ubuntu` container on a Linux/Windows/macOS host.

-* But they have different, independent packages.
+* They have different, independent packages.

* Installing something on the host doesn't expose it to the container.

* And vice-versa.

* Even if both the host and the container have the same Linux distro!

* We can run *any container* on *any host*.

  (One exception: Windows containers cannot run on Linux machines; at least not yet.)

---

## Where's our container?

@@ -144,7 +144,7 @@ docker run jpetazzo/crashtest

The container starts, but then stops immediately, without any output.

-What would McGyver do?
+What would MacGyver™ do?

First, let's check the status of that container.

@@ -46,6 +46,8 @@ In this section, we will explain:

## Example for a Java webapp

Each of the following items will correspond to one layer:

* CentOS base layer
* Packages and configuration files added by our local IT
* JRE
@@ -56,6 +58,22 @@ In this section, we will explain:

---

class: pic

## The read-write layer

![layers](images/container-layers.jpg)

---

class: pic

## Multiple containers sharing the same image

![sharing-layers](images/sharing-layers.jpg)

---

## Differences between containers and images

* An image is a read-only filesystem.
@@ -63,24 +81,14 @@ In this section, we will explain:
* A container is an encapsulated set of processes running in a
read-write copy of that filesystem.

* To optimize container boot time, *copy-on-write* is used
instead of regular copy.

* `docker run` starts a container from a given image.

-Let's give a couple of metaphors to illustrate those concepts.
-
----
-
-## Image as stencils
-
-Images are like templates or stencils that you can create containers from.
-
-![stencil](images/stencil.png)

---

-## Object-oriented programming
+## Comparison with object-oriented programming

* Images are conceptually similar to *classes*.

@@ -99,7 +107,7 @@ If an image is read-only, how do we change it?
* We create a new container from that image.

* Then we make changes to that container.

* When we are satisfied with those changes, we transform them into a new layer.

* A new image is created by stacking the new layer on top of the old image.
@@ -118,7 +126,7 @@ If an image is read-only, how do we change it?

## Creating the first images

There is a special empty image called `scratch`.

* It allows to *build from scratch*.

@@ -138,7 +146,7 @@ Note: you will probably never have to do this yourself.
* Saves all the changes made to a container into a new layer.
* Creates a new image (effectively a copy of the container).

-`docker build`
+`docker build` **(used 99% of the time)**

* Performs a repeatable build sequence.
* This is the preferred method!
@@ -180,6 +188,8 @@ Those images include:

* Ready-to-use components and services, like redis, postgresql...

* Over 130 at this point!

---

## User namespace
@@ -299,9 +309,9 @@ There are two ways to download images.
```bash
$ docker pull debian:jessie
-Pulling repository debian
b164861940b8: Download complete
b164861940b8: Pulling image (jessie) from debian
d1881793a057: Download complete
```

* As seen previously, images are made up of layers.

@@ -37,7 +37,9 @@ We can arbitrarily distinguish:

## Installing Docker on Linux

-* The recommended method is to install the packages supplied by Docker Inc.
+* The recommended method is to install the packages supplied by Docker Inc.:

  https://store.docker.com

* The general method is:

@@ -79,11 +81,11 @@ class: extra-details

## Installing Docker on macOS and Windows

-* On macOS, the recommended method is to use Docker4Mac:
+* On macOS, the recommended method is to use Docker for Mac:

  https://docs.docker.com/docker-for-mac/install/

-* On Windows 10 Pro, Enterprise, and Eduction, you can use Docker4Windows:
+* On Windows 10 Pro, Enterprise, and Education, you can use Docker for Windows:

  https://docs.docker.com/docker-for-windows/install/

@@ -91,6 +93,33 @@ class: extra-details

  https://docs.docker.com/toolbox/toolbox_install_windows/

* On Windows Server 2016, you can also install the native engine:

  https://docs.docker.com/install/windows/docker-ee/

---

## Docker for Mac and Docker for Windows

* Special Docker Editions that integrate well with their respective host OS

* Provide user-friendly GUI to edit Docker configuration and settings

* Leverage the host OS virtualization subsystem (e.g. the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS)

* Installed like normal user applications on the host

* Under the hood, they both run a tiny VM (transparent to our daily use)

* Access network resources like normal applications
<br/>(and therefore, play better with enterprise VPNs and firewalls)

* Support filesystem sharing through volumes (we'll talk about this later)

* They only support running one Docker VM at a time ...
<br/>
... but we can use `docker-machine`, the Docker Toolbox, VirtualBox, etc. to get a cluster.

---

## Running Docker on macOS and Windows

@@ -110,25 +139,6 @@ This will also allow to use remote Engines exactly as if they were local.

---

-## Docker4Mac and Docker4Windows
-
-* They let you run Docker without VirtualBox
-
-* They are installed like normal applications (think QEMU, but faster)
-
-* They access network resources like normal applications
-<br/>(and therefore, play well with enterprise VPNs and firewalls)
-
-* They support filesystem sharing through volumes (we'll talk about this later)
-
-* They only support running one Docker VM at a time ...
-
-... so if you want to run a full cluster locally, install e.g. the Docker Toolbox
-
-* They can co-exist with the Docker Toolbox
-
----

## Important PSA about security

* If you have access to the Docker control socket, you can take over the machine

@@ -17,7 +17,7 @@ At the end of this section, you will be able to:

---

-## Containerized local development environments
+## Local development in a container

We want to solve the following issues:

@@ -69,7 +69,6 @@ Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe?

```dockerfile
FROM ruby
-MAINTAINER Education Team at Docker <education@docker.com>

COPY . /src
WORKDIR /src
@@ -177,7 +176,9 @@ $ docker run -d -v $(pwd):/src -P namer

* `namer` is the name of the image we will run.

-* We don't specify a command to run because is is already set in the Dockerfile.
+* We don't specify a command to run because it is already set in the Dockerfile.

Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell).

---

@@ -131,6 +131,27 @@ We will then show one particular method in action, using ELK and Docker's loggin

---

## A word of warning about `json-file`

- By default, log file size is unlimited.

- This means that a very verbose container *will* use up all your disk space.

  (Or a less verbose container, but running for a very long time.)

- Log rotation can be enabled by setting a `max-size` option.

- Older log files can be removed by setting a `max-file` option.

- Just like other logging options, these can be set per container, or globally.

Example:
```bash
$ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch
```
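To apply the same options globally, one option is the daemon configuration file (a sketch; the path and the restart command are typical for Linux distributions, but may differ on your system):

```bash
# Sketch: enable log rotation for all new containers via daemon.json.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}
EOF
sudo systemctl restart docker    # only affects containers created afterwards
```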

---

## Demo: sending logs to ELK

- We are going to deploy an ELK stack.
@@ -192,7 +213,7 @@ $ docker-compose -f elk.yml up -d

- it is set with the `ELASTICSEARCH_URL` environment variable,

-- by default it is `localhost:9200`, we change it to `elastichsearch:9200`.
+- by default it is `localhost:9200`, we change it to `elasticsearch:9200`.

- We need to configure Logstash:

@@ -1,6 +1,6 @@
-# Multi-stage builds
+# Reducing image size

-* In the previous example, our final image contain:
+* In the previous example, our final image contained:

* our `hello` program

@@ -14,7 +14,196 @@

---

-## Multi-stage builds principles
+## Can't we remove superfluous files with `RUN`?

What happens if we do one of the following commands?

- `RUN rm -rf ...`

- `RUN apt-get remove ...`

- `RUN make clean ...`

--

This adds a layer which removes a bunch of files.

But the previous layers (which added the files) still exist.

---

## Removing files with an extra layer

When downloading an image, all the layers must be downloaded.

| Dockerfile instruction | Layer size | Image size |
| ---------------------- | ---------- | ---------- |
| `FROM ubuntu` | Size of base image | Size of base image |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get install somepackage` | Size of files added <br/>(e.g. a few MB) | Sum of this layer <br/>+ all previous ones |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get remove somepackage` | Almost zero <br/>(just metadata) | Same as previous one |

Therefore, `RUN rm` does not reduce the size of the image or free up disk space.
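We can verify this with `docker history`, which shows the size of each layer (a hedged sketch; the image name and packages are illustrative):

```bash
# Sketch: the "remove" layer is ~0 bytes, but the "install" layer remains.
docker build -t demo - <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y wget
RUN apt-get remove -y wget && rm -rf /var/lib/apt/lists/*
EOF
docker history demo    # each RUN instruction appears with its own layer size
```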

---

## Removing unnecessary files

Various techniques are available to obtain smaller images:

- collapsing layers,

- adding binaries that are built outside of the Dockerfile,

- squashing the final image,

- multi-stage builds.

Let's review them quickly.

---

## Collapsing layers

You will frequently see Dockerfiles like this:

```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
```

Or the (more readable) variant:

```dockerfile
FROM ubuntu
RUN apt-get update \
 && apt-get install xxx \
 && ... \
 && apt-get remove xxx \
 && ...
```

This `RUN` command gives us a single layer.

The files that are added, then removed in the same layer, do not grow the layer size.

---

## Collapsing layers: pros and cons

Pros:

- works on all versions of Docker

- doesn't require extra tools

Cons:

- not very readable

- some unnecessary files might still remain if the cleanup is not thorough

- that layer is expensive (slow to build)

---

## Building binaries outside of the Dockerfile

This results in a Dockerfile looking like this:

```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```

Of course, this implies that the file `xxx` exists in the build context.

That file has to exist before you can run `docker build`.

For instance, it can:

- exist in the code repository,
- be created by another tool (script, Makefile...),
- be created by another container image and extracted from the image.

See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).
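One hedged way to extract a binary from an image without running it (all names below are illustrative placeholders):

```bash
# Sketch: copy a file out of an image by creating (not running) a container.
docker create --name extract some-builder-image    # hypothetical image name
docker cp extract:/usr/local/bin/xxx ./xxx         # extract the binary
docker rm extract                                  # clean up the container
# ./xxx can now be COPYed by a minimal Dockerfile like the one above.
```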

---

## Building binaries outside: pros and cons

Pros:

- final image can be very small

Cons:

- requires an extra build tool

- we're back in dependency hell and "works on my machine"

Cons, if the binary is added to the code repository:

- breaks portability across different platforms

- grows repository size a lot if the binary is updated frequently

---

## Squashing the final image

The idea is to transform the final image into a single-layer image.

This can be done in (at least) two ways.

- Activate experimental features and squash the final image:
```bash
docker image build --squash ...
```

- Export/import the final image.
```bash
docker build -t temp-image .
docker run --entrypoint true --name temp-container temp-image
docker export temp-container | docker import - final-image
docker rm temp-container
docker rmi temp-image
```

---

## Squashing the image: pros and cons

Pros:

- single-layer images are smaller and faster to download

- removed files no longer take up storage and network resources

Cons:

- we still need to actively remove unnecessary files

- squash operation can take a lot of time (on big images)

- squash operation does not benefit from cache
<br/>
(even if we change just a tiny file, the whole image needs to be re-squashed)

---

## Multi-stage builds

Multi-stage builds allow us to have multiple *stages*.

Each stage is a separate image, and can copy files from previous stages.

We're going to see how they work in more detail.
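As a teaser, here is a hedged sketch of what this could look like for the `hello.c` program from earlier (a minimal two-stage Dockerfile; details follow in the next slides):

```bash
# Sketch: build with gcc in the first stage, ship only the binary.
cat > Dockerfile <<'EOF'
FROM gcc AS builder
COPY hello.c .
RUN gcc -o /hello hello.c

FROM ubuntu
COPY --from=builder /hello /hello
CMD ["/hello"]
EOF
docker build -t hello .    # the final image does not contain gcc
```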

---

# Multi-stage builds

* At any point in our `Dockerfile`, we can add a new `FROM` line.

@@ -76,6 +76,8 @@ The last item should be done for educational purposes only!

---

class: extra-details, deep-dive

## Manipulating namespaces

- Namespaces are created with two methods:
@@ -94,6 +96,8 @@ The last item should be done for educational purposes only!

---

class: extra-details, deep-dive

## Namespaces lifecycle

- When the last process of a namespace exits, the namespace is destroyed.
@@ -114,6 +118,8 @@ The last item should be done for educational purposes only!

---

class: extra-details, deep-dive

## Namespaces can be used independently

- As mentioned in the previous slides:
@@ -138,7 +144,7 @@ The last item should be done for educational purposes only!

- Also allows to set the NIS domain.

-(If you dont' know what a NIS domain is, you don't have to worry about it!)
+(If you don't know what a NIS domain is, you don't have to worry about it!)

- If you're wondering: UTS = UNIX time sharing.

@@ -150,6 +156,8 @@ The last item should be done for educational purposes only!

---

class: extra-details, deep-dive

## Creating our first namespace

Let's use `unshare` to create a new process that will have its own UTS namespace:
@@ -166,6 +174,8 @@ $ sudo unshare --uts

---

class: extra-details, deep-dive

## Demonstrating our uts namespace

In our new "container", check the hostname, change it, and check it:
@@ -398,6 +408,8 @@ class: extra-details

---

class: extra-details, deep-dive

## Setting up a private `/tmp`

Create a new mount namespace:
@@ -435,6 +447,8 @@ The mount is automatically cleaned up when you exit the process.

---

class: extra-details, deep-dive

## PID namespace in action

Create a new PID namespace:
@@ -453,10 +467,14 @@ Check the process tree in the new namespace:

--

class: extra-details, deep-dive

🤔 Why do we see all the processes?!?

---

class: extra-details, deep-dive

## PID namespaces and `/proc`

- Tools like `ps` rely on the `/proc` pseudo-filesystem.
@@ -471,6 +489,8 @@ Check the process tree in the new namespace:

---

class: extra-details, deep-dive

## PID namespaces, take 2

- This can be solved by mounting `/proc` in the namespace.
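With util-linux's `unshare`, this can be sketched in one line (`--mount-proc` also creates a mount namespace and mounts a fresh `/proc` in it):

```bash
# Sketch: a new PID namespace with its own /proc, so ps only sees our processes.
sudo unshare --pid --fork --mount-proc sh -c 'ps faux'
```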

@@ -570,6 +590,8 @@ Check `man 2 unshare` and `man pid_namespaces` if you want more details.

---

class: extra-details, deep-dive

## User namespace challenges

- UID needs to be mapped when passed between processes or kernel subsystems.
@@ -686,6 +708,8 @@ cpu memory

---

class: extra-details, deep-dive

## Cgroups v1 vs v2

- Cgroups v1 are available on all systems (and widely used).
@@ -759,6 +783,8 @@ cpu memory

---

class: extra-details, deep-dive

## Avoiding the OOM killer

- For some workloads (databases and stateful systems), killing
@@ -778,6 +804,8 @@ cpu memory

---

class: extra-details, deep-dive

## Overhead of the memory cgroup

- Each time a process grabs or releases a page, the kernel updates counters.
@@ -796,6 +824,8 @@ cpu memory

---

class: extra-details, deep-dive

## Setting up a limit with the memory cgroup

Create a new memory cgroup:
@@ -808,7 +838,7 @@ $ sudo mkdir $CG
Limit it to approximately 100MB of memory usage:

```bash
-$ sudo tee $CG/memory.memsw.limit_in_bytes <<<100000000
+$ sudo tee $CG/memory.memsw.limit_in_bytes <<< 100000000
```

Move the current process to that cgroup:
@@ -819,8 +849,67 @@ $ sudo tee $CG/tasks <<< $$

The current process *and all its future children* are now limited.

(Confused about `<<<`? Look at the next slide!)

---

class: extra-details, deep-dive

## What's `<<<`?

- This is a "here string". (It is a non-POSIX shell extension.)

- The following commands are equivalent:

  ```bash
  foo <<< hello
  ```

  ```bash
  echo hello | foo
  ```

  ```bash
  foo <<EOF
  hello
  EOF
  ```

- Why did we use that?

---

class: extra-details, deep-dive

## Writing to cgroups pseudo-files requires root

Instead of:

```bash
sudo tee $CG/tasks <<< $$
```

We could have done:

```bash
sudo sh -c "echo $$ > $CG/tasks"
```

The following commands, however, would be invalid:

```bash
sudo echo $$ > $CG/tasks
```

```bash
sudo -i # (or su)
echo $$ > $CG/tasks
```

---

class: extra-details, deep-dive

## Testing the memory limit

Start the Python interpreter:
@@ -860,8 +949,6 @@ Killed

- Allows to set relative weights used by the scheduler.

-- We cannot set CPU limits (like, "don't use more than 10% of CPU").
-
---

## Cpuset cgroup

@@ -420,8 +420,3 @@ It depends on:

- false, if we focus on what matters.

----
-
-## Kubernetes in action
-
-.center[![Kubernetes in action](images/kubernetes.png)]
@@ -21,7 +21,7 @@ public images is free as well.*
docker login
```

-.warning[When running Docker4Mac, Docker4Windows, or
+.warning[When running Docker for Mac/Windows, or
Docker on a Linux workstation, it can (and will when
possible) integrate with your system's keyring to
store your credentials securely. However, on most Linux
||||
slides/intro/Resource_Limits.md (new file, 229 lines)
@@ -0,0 +1,229 @@
# Limiting resources

- So far, we have used containers as convenient units of deployment.

- What happens when a container tries to use more resources than available?

  (RAM, CPU, disk usage, disk and network I/O...)

- What happens when multiple containers compete for the same resource?

- Can we limit resources available to a container?

  (Spoiler alert: yes!)

---

## Container processes are normal processes

- Containers are closer to "fancy processes" than to "lightweight VMs".

- A process running in a container is, in fact, a process running on the host.

- Let's look at the output of `ps` on a container host running 3 containers:

  ```
  0 2662 0.2 0.3 /usr/bin/dockerd -H fd://
  0 2766 0.1 0.1 \_ docker-containerd --config /var/run/docker/containe
  0 23479 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
  0 23497 0.0 0.0 | \_ `nginx`: master process nginx -g daemon off;
  101 23543 0.0 0.0 | \_ `nginx`: worker process
  0 23565 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
  102 23584 9.4 11.3 | \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2
  0 23707 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
  0 23725 0.0 0.0 \_ `/bin/sh`
  ```

- The highlighted processes are containerized processes.
<br/>
(That host is running nginx, elasticsearch, and alpine.)

---

## By default: nothing changes

- What happens when a process uses too much memory on a Linux system?

--

- Simplified answer:

  - swap is used (if available);

  - if there is not enough swap space, eventually, the out-of-memory killer is invoked;

  - the OOM killer uses heuristics to kill processes;

  - sometimes, it kills an unrelated process.

--

- What happens when a container uses too much memory?

- The same thing!

  (i.e., a process eventually gets killed, possibly in another container.)

---

## Limiting container resources

- The Linux kernel offers rich mechanisms to limit container resources.

- For memory usage, the mechanism is part of the *cgroup* subsystem.

- This subsystem allows to limit the memory for a process or a group of processes.

- A container engine leverages these mechanisms to limit memory for a container.

- The out-of-memory killer has a new behavior:

  - it runs when a container exceeds its allowed memory usage,

  - in that case, it only kills processes in that container.

---

## Limiting memory in practice

- The Docker Engine offers multiple flags to limit memory usage.

- The two most useful ones are `--memory` and `--memory-swap`.

- `--memory` limits the amount of physical RAM used by a container.

- `--memory-swap` limits the total amount (RAM+swap) used by a container.

- The memory limit can be expressed in bytes, or with a unit suffix.

  (e.g.: `--memory 100m` = 100 megabytes.)

- We will see two strategies: limiting RAM usage, or limiting both.

---

## Limiting RAM usage

Example:

```bash
docker run -ti --memory 100m python
```

If the container tries to use more than 100 MB of RAM, *and* swap is available:

- the container will not be killed,

- memory above 100 MB will be swapped out,

- in most cases, the app in the container will be slowed down (a lot).

If we run out of swap, the global OOM killer still intervenes.

---

## Limiting both RAM and swap usage

Example:

```bash
docker run -ti --memory 100m --memory-swap 100m python
```

If the container tries to use more than 100 MB of memory, it is killed.

On the other hand, the application will never be slowed down because of swap.

---

## When to pick which strategy?

- Stateful services (like databases) will lose or corrupt data when killed

- Allow them to use swap space, but monitor swap usage

- Stateless services can usually be killed with little impact

- Limit their mem+swap usage, but monitor if they get killed

- Ultimately, this is no different from "do I want swap, and how much?"

---

## Limiting CPU usage

- There are no less than 3 ways to limit CPU usage:

  - setting a relative priority with `--cpu-shares`,

  - setting a CPU% limit with `--cpus`,

  - pinning a container to specific CPUs with `--cpuset-cpus`.

- They can be used separately or together.
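For instance, the three flags can be combined in a single `docker run` (the values below are arbitrary, for illustration only: half the default priority, at most 1.5 CPUs worth of cycles, pinned to the first four CPUs):

```bash
# Sketch: relative priority, hard CPU cap, and CPU pinning together.
docker run -ti \
  --cpu-shares 512 \
  --cpus 1.5 \
  --cpuset-cpus 0-3 \
  python
```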

---

## Setting relative priority

- Each container has a relative priority used by the Linux scheduler.

- By default, this priority is 1024.

- As long as CPU usage is not maxed out, this has no effect.

- When CPU usage is maxed out, each container receives CPU cycles in proportion of its relative priority.

- In other words: a container with `--cpu-shares 2048` will receive twice as much as the default.

---

## Setting a CPU% limit

- This setting will make sure that a container doesn't use more than a given % of CPU.

- The value is expressed in CPUs; therefore:

  `--cpus 0.1` means 10% of one CPU,

  `--cpus 1.0` means 100% of one whole CPU,

  `--cpus 10.0` means 10 entire CPUs.

---

## Pinning containers to CPUs

- On multi-core machines, it is possible to restrict the execution on a set of CPUs.

- Examples:

  `--cpuset-cpus 0` forces the container to run on CPU 0;

  `--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7;

  `--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11.

- This will not reserve the corresponding CPUs!

  (They might still be used by other containers, or uncontainerized processes.)

---

## Limiting disk usage

- Most storage drivers do not support limiting the disk usage of containers.

  (With the exception of devicemapper, but the limit cannot be set easily.)

- This means that a single container could exhaust disk space for everyone.

- In practice, however, this is not a concern, because:

  - data files (for stateful services) should reside on volumes,

  - assets (e.g. images, user-generated content...) should reside on object stores or on volumes,

  - logs are written on standard output and gathered by the container engine.

- Container disk usage can be audited with `docker ps -s` and `docker diff`.
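For instance (a short sketch; the container ID is a placeholder):

```bash
docker ps -s                  # adds a SIZE column (writable layer / virtual size)
docker diff <container_id>    # lists files Added, Changed, Deleted in the container
```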

@@ -33,6 +33,8 @@ Docker volumes can be used to achieve many things, including:

* Sharing a *single file* between the host and a container.

* Using remote storage and custom storage with "volume drivers".

---

## Volumes are special directories in a container
@@ -118,7 +120,7 @@ $ curl localhost:8080

## Volumes exist independently of containers

-If a container is stopped, its volumes still exist and are available.
+If a container is stopped or removed, its volumes still exist and are available.

Volumes can be listed and manipulated with `docker volume` subcommands:
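For instance (a hedged sketch of the most common subcommands; the volume name is illustrative):

```bash
docker volume ls                  # list volumes
docker volume create myvolume     # create a named volume
docker volume inspect myvolume    # show details (e.g. mountpoint on the host)
docker volume rm myvolume         # remove it
```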

@@ -201,7 +203,7 @@ Then run `curl localhost:1234` again to see your changes.

---

-## Managing volumes explicitly
+## Using custom "bind-mounts"

In some cases, you want a specific directory on the host to be mapped
inside the container:
@@ -244,6 +246,8 @@ of an existing container.

* Newer containers can use `--volumes-from` too.

* Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes).

---

class: extra-details
@@ -259,7 +263,7 @@ $ docker run -d --name redis28 redis:2.8
Connect to the Redis container and set some data.

```bash
-$ docker run -ti --link redis28:redis alpine telnet redis 6379
+$ docker run -ti --link redis28:redis busybox telnet redis 6379
```

Issue the following commands:
@@ -298,7 +302,7 @@ class: extra-details
Connect to the Redis container and see our data.

```bash
-docker run -ti --link redis30:redis alpine telnet redis 6379
+docker run -ti --link redis30:redis busybox telnet redis 6379
```

Issue a few commands.

@@ -394,10 +398,15 @@ has root-like access to the host.]
You can install plugins to manage volumes backed by particular storage systems,
or providing extra features. For instance:

-* [dvol](https://github.com/ClusterHQ/dvol) - allows to commit/branch/rollback volumes;
-* [Flocker](https://clusterhq.com/flocker/introduction/), [REX-Ray](https://github.com/emccode/rexray) - create and manage volumes backed by an enterprise storage system (e.g. SAN or NAS), or by cloud block stores (e.g. EBS);
-* [Blockbridge](http://www.blockbridge.com/), [Portworx](http://portworx.com/) - provide distributed block store for containers;
-* and much more!
+* [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g.
+  SAN or NAS), or by cloud block stores (e.g. EBS, EFS).
+
+* [Portworx](http://portworx.com/) - provides distributed block store for containers.
+
+* [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale
+  to several petabytes. It provides interfaces for object, block and file storage.
+
+* and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)!

---

@@ -2,7 +2,7 @@

- This was initially written to support in-person, instructor-led workshops and tutorials

- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors)
- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://@@GITREPO@@/graphs/contributors)

- You can also follow along on your own, at your own pace

52
slides/kube-90min.yml
Normal file
@@ -0,0 +1,52 @@
title: |
  Kubernetes 101

#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- self-paced
- extra-details

chapters:
- common/title.md
- logistics.md
#- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
  - kube/versions-k8s.md
  - common/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- common/composescale.md
  - common/composedown.md
  - kube/concepts-k8s.md
# - common/declarative.md
  - kube/declarative.md
# - kube/kubenet.md
  - kube/kubectlget.md
  - kube/setup-k8s.md
- - kube/kubectlrun.md
  - kube/kubectlexpose.md
  - kube/ourapponkube.md
#- kube/kubectlproxy.md
- - kube/dashboard.md
  - kube/kubectlscale.md
  - kube/daemonset.md
  - kube/rollout.md
# Stern is interesting but can be skipped
#- - kube/logs-cli.md
# Bridget hasn't added EFK yet
#- kube/logs-centralized.md
  - kube/helm.md
  - kube/namespaces.md
  - kube/whatsnext.md
  - kube/links.md
# Bridget-specific
# - kube/links-bridget.md
- common/thankyou.md
@@ -6,6 +6,10 @@ title: |

#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- self-paced

50
slides/kube-halfday.yml
Normal file
@@ -0,0 +1,50 @@
title: |
  Kubernetes 101

#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- self-paced

chapters:
- common/title.md
- logistics.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
  - kube/versions-k8s.md
  - common/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- common/composescale.md
  - common/composedown.md
  - kube/concepts-k8s.md
  - common/declarative.md
  - kube/declarative.md
  - kube/kubenet.md
  - kube/kubectlget.md
  - kube/setup-k8s.md
- - kube/kubectlrun.md
  - kube/kubectlexpose.md
  - kube/ourapponkube.md
#- kube/kubectlproxy.md
- - kube/dashboard.md
  - kube/kubectlscale.md
  - kube/daemonset.md
  - kube/rollout.md
- - kube/logs-cli.md
# Bridget hasn't added EFK yet
#- kube/logs-centralized.md
  - kube/helm.md
  - kube/namespaces.md
  - kube/whatsnext.md
# - kube/links.md
# Bridget-specific
  - kube/links-bridget.md
- common/thankyou.md
@@ -5,6 +5,10 @@ title: |
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- in-person

@@ -28,11 +32,12 @@ chapters:
- kube/kubectlrun.md
- - kube/kubectlexpose.md
- kube/ourapponkube.md
- kube/kubectlproxy.md
- kube/dashboard.md
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- kube/logs-cli.md
- - kube/logs-cli.md
- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
@@ -171,11 +171,7 @@ class: pic

---

## Do we need to run Docker at all?

No!

--
## Default container runtime

- By default, Kubernetes uses the Docker Engine to run containers

@@ -185,42 +181,6 @@ No!

(like CRI-O, or containerd)

---

## Do we need to run Docker at all?

Yes!

--

- In this workshop, we run our app on a single node first

- We will need to build images and ship them around

- We can do these things without Docker
<br/>
(and get diagnosed with NIH¹ syndrome)

- Docker is still the most stable container engine today
<br/>
(but other options are maturing very quickly)

.footnote[¹[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)]

---

## Do we need to run Docker at all?

- On our development environments, CI pipelines ... :

*Yes, almost certainly*

- On our production servers:

*Yes (today)*

*Probably not (in the future)*

.footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)]

---
@@ -235,6 +195,7 @@ Yes!

- node (a machine — physical or virtual — in our cluster)
- pod (group of containers running together on a node)
- IP addresses are associated with *pods*, not with individual containers
- service (stable network endpoint to connect to one or multiple containers)
- namespace (more-or-less isolated group of things)
- secret (bundle of sensitive data to be passed to a container)
@@ -246,25 +207,3 @@ Yes!

class: pic

*(diagram: Kubernetes concepts, courtesy of Weave Works)*

---

class: pic

*(diagram: Kubernetes architecture, courtesy of Lucas Käldström)*

---

## Credits

- The first diagram is courtesy of Weave Works

- a *pod* can have multiple containers working together

- IP addresses are associated with *pods*, not with individual containers

- The second diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha)

- it's one of the best Kubernetes architecture diagrams available!

Both diagrams used with permission.
@@ -36,7 +36,7 @@

## Creating a daemon set

- Unfortunately, as of Kubernetes 1.9, the CLI cannot create daemon sets
- Unfortunately, as of Kubernetes 1.10, the CLI cannot create daemon sets

--

@@ -55,7 +55,7 @@

--

- option 1: read the docs
- option 1: [read the docs](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset)

--
@@ -178,29 +178,37 @@ Wait ... Now, can it be *that* easy?

--

We have both `deploy/rng` and `ds/rng` now!
We have two resources called `rng`:

--
- the *deployment* that existed before

And one too many pods...
- the *daemon set* that we just created

We also have one too many pods.
<br/>
(The pod corresponding to the *deployment* still exists.)

---

## Explanation
## `deploy/rng` and `ds/rng`

- You can have different resource types with the same name

(i.e. a *deployment* and a *daemonset* both named `rng`)
(i.e. a *deployment* and a *daemon set* both named `rng`)

- We still have the old `rng` *deployment*

- But now we have the new `rng` *daemonset* as well
```
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rng   1         1         1            1           18m
```

- If we look at the pods, we have:
- But now we have the new `rng` *daemon set* as well

- *one pod* for the deployment

- *one pod per node* for the daemonset
```
NAME                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/rng   2         2         2       2            2           <none>          9s
```

---
@@ -308,155 +316,27 @@ The replica set selector also has a `pod-template-hash`, unlike the pods in our

---

# Updating a service through labels and selectors
## Deleting a deployment

- What if we want to drop the `rng` deployment from the load balancer?
.exercise[

- Option 1:

- destroy it

- Option 2:

- add an extra *label* to the daemon set

- update the service *selector* to refer to that *label*
- Remove the `rng` deployment:
  ```bash
  kubectl delete deployment rng
  ```
]

--

Of course, option 2 offers more learning opportunities. Right?
- The pod that was created by the deployment is now being terminated:

---
```
$ kubectl get pods
NAME                   READY   STATUS        RESTARTS   AGE
rng-54f57d4d49-vgz9h   1/1     Terminating   0          4m
rng-vplmj              1/1     Running       0          11m
rng-xbpvg              1/1     Running       0          11m
[...]
```

## Add an extra label to the daemon set

- We will update the daemon set "spec"

- Option 1:

- edit the `rng.yml` file that we used earlier

- load the new definition with `kubectl apply`

- Option 2:

- use `kubectl edit`

--

*If you feel like you got this💕🌈, feel free to try directly.*

*We've included a few hints on the next slides for your convenience!*

---
## We've put resources in your resources

- Reminder: a daemon set is a resource that creates more resources!

- There is a difference between:

- the label(s) of a resource (in the `metadata` block in the beginning)

- the selector of a resource (in the `spec` block)

- the label(s) of the resource(s) created by the first resource (in the `template` block)

- You need to update the selector and the template (metadata labels are not mandatory)

- The template must match the selector

(i.e. the resource will refuse to create resources that it will not select)
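To make those three locations concrete, here is a minimal sketch of the relevant parts of a daemon set manifest (field values are illustrative, not the exact `rng.yml` used in the workshop):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:          # 1) labels of the daemon set itself (not mandatory)
    app: rng
spec:
  selector:
    matchLabels:   # 2) the selector: which pods this daemon set owns
      app: rng
  template:
    metadata:
      labels:      # 3) labels stamped on the pods it creates
        app: rng   # must match the selector above
```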
---

## Adding our label

- Let's add a label `isactive: yes`

- In YAML, `yes` should be quoted; i.e. `isactive: "yes"`

.exercise[

- Update the daemon set to add `isactive: "yes"` to the selector and template label:
  ```bash
  kubectl edit daemonset rng
  ```

- Update the service to add `isactive: "yes"` to its selector:
  ```bash
  kubectl edit service rng
  ```

]

---

## Checking what we've done

.exercise[

- Check the logs of all `run=rng` pods to confirm that exactly one per node is now active:
  ```bash
  kubectl logs -l run=rng
  ```

]

The timestamps should give us a hint about how many pods are currently receiving traffic.

.exercise[

- Look at the pods that we have right now:
  ```bash
  kubectl get pods
  ```

]

---

## Cleaning up

- The pods of the "old" daemon set are still running

- We are going to identify them programmatically

.exercise[

- List the pods with `run=rng` but without `isactive=yes`:
  ```bash
  kubectl get pods -l run=rng,isactive!=yes
  ```

- Remove these pods:
  ```bash
  kubectl get pods -l run=rng,isactive!=yes -o name |
    xargs kubectl delete
  ```

]

---

## Avoiding extra pods

- When we changed the definition of the daemon set, it immediately created new pods

- How could we have avoided this?

--

- By adding the `isactive: "yes"` label to the pods before changing the daemon set!

- This can be done programmatically with `kubectl patch`:

```bash
PATCH='
metadata:
  labels:
    isactive: "yes"
'
kubectl get pods -l run=rng -o name |
  xargs kubectl patch -p "$PATCH"
```
Ding, dong, the deployment is dead! And the daemon set lives on.
@@ -10,9 +10,6 @@

3) bypass authentication for the dashboard

--

There is an additional step to make the dashboard available from outside (we'll get to that)

--

@@ -148,58 +145,6 @@ The dashboard will then ask you which authentication you want to use.

---

## Exposing the dashboard over HTTPS

- We took a shortcut by forwarding HTTP to HTTPS inside the cluster

- Let's expose the dashboard over HTTPS!

- The dashboard is exposed through a `ClusterIP` service (internal traffic only)

- We will change that into a `NodePort` service (accepting outside traffic)

.exercise[

- Edit the service:
  ```bash
  kubectl edit service kubernetes-dashboard
  ```

]

--

`NotFound`?!? Y U NO WORK?!?

---

## Editing the `kubernetes-dashboard` service

- If we look at the [YAML](https://goo.gl/Qamqab) that we loaded before, we'll get a hint

--

- The dashboard was created in the `kube-system` namespace

--

.exercise[

- Edit the service:
  ```bash
  kubectl -n kube-system edit service kubernetes-dashboard
  ```

- Change `ClusterIP` to `NodePort`, save, and exit

- Check the port that was assigned with `kubectl -n kube-system get services`

- Connect to https://oneofournodes:3xxxx/ (yes, https)

]

---

## Running the Kubernetes dashboard securely

- The steps that we just showed you are *for educational purposes only!*
@@ -256,9 +201,9 @@ The dashboard will then ask you which authentication you want to use.

- It's safe if you use HTTPS URLs from trusted sources

--

- It introduces new failure modes

- Example: the official setup instructions for most pod networks

--

- It introduces new failure modes (like if you try to apply yaml from a link that's no longer valid)
@@ -48,6 +48,11 @@
  helm init
  ```

- Add the `helm` completion:
  ```bash
  . <(helm completion $(basename $SHELL))
  ```

]

---

@@ -3,7 +3,7 @@

- This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person,
instructor-led workshops and tutorials

- Credit is also due to [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) — thank you!
- Credit is also due to [multiple contributors](https://@@GITREPO@@/graphs/contributors) — thank you!

- You can also follow along on your own, at your own pace

@@ -123,7 +123,7 @@ Note: please DO NOT call the service `search`. It would collide with the TLD.

.exercise[

- Let's obtain the IP address that was allocated for our service, *programatically:*
- Let's obtain the IP address that was allocated for our service, *programmatically:*
  ```bash
  IP=$(kubectl get svc elastic -o go-template --template '{{ .spec.clusterIP }}')
  ```

@@ -1,3 +1,5 @@
class: extra-details

# First contact with `kubectl`

- `kubectl` is (almost) the only tool we'll need to talk to Kubernetes
@@ -79,6 +81,8 @@

---

class: extra-details

## What's available?

- `kubectl` has pretty good introspection facilities

117
slides/kube/kubectlproxy.md
Normal file
@@ -0,0 +1,117 @@
# Accessing internal services with `kubectl proxy`

- `kubectl proxy` runs a proxy in the foreground

- This proxy lets us access the Kubernetes API without authentication

(`kubectl proxy` adds our credentials on the fly to the requests)

- This proxy lets us access the Kubernetes API over plain HTTP

- This is a great tool to learn and experiment with the Kubernetes API

- The Kubernetes API also gives us a proxy to HTTP and HTTPS services

- Therefore, we can use `kubectl proxy` to access internal services

(Without using a `NodePort` or similar service)

---

## Secure by default

- By default, the proxy listens on port 8001

(But this can be changed, or we can tell `kubectl proxy` to pick a port)

- By default, the proxy binds to `127.0.0.1`

(Making it unreachable from other machines, for security reasons)

- By default, the proxy only accepts connections from:

`^localhost$,^127\.0\.0\.1$,^\[::1\]$`

- This is great when running `kubectl proxy` locally

- Not-so-great when running it on a remote machine

---

## Running `kubectl proxy` on a remote machine

- We are going to bind to `INADDR_ANY` instead of `127.0.0.1`

- We are going to accept connections from any address

.exercise[

- Run an open proxy to the Kubernetes API:
  ```bash
  kubectl proxy --port=8888 --address=0.0.0.0 --accept-hosts=.*
  ```

]

.warning[Anyone can now do whatever they want with our Kubernetes cluster!
<br/>
(Don't do this on a real cluster!)]

---

## Viewing available API routes

- The default route (i.e. `/`) shows a list of available API endpoints

.exercise[

- Point your browser to the IP address of the node running `kubectl proxy`, port 8888

]

The result should look like this:
```json
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
…
```

---

## Connecting to a service through the proxy

- The API can proxy HTTP and HTTPS requests by accessing a special route:
  ```
  /api/v1/namespaces/`name_of_namespace`/services/`name_of_service`/proxy
  ```

- Since we now have access to the API, we can use this special route

.exercise[

- Access the `hasher` service through the special proxy route:
  ```open
  http://`X.X.X.X`:8888/api/v1/namespaces/default/services/hasher/proxy
  ```

]

You should see the banner of the hasher service: `HASHER running on ...`
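The same route also works from the command line; for instance (a sketch; replace `X.X.X.X` with the address of the node running the proxy, and note the trailing slash):

```bash
curl http://X.X.X.X:8888/api/v1/namespaces/default/services/hasher/proxy/
```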
---

## Stopping the proxy

- Remember: as it is running right now, `kubectl proxy` gives open access to our cluster

.exercise[

- Stop the `kubectl proxy` process with Ctrl-C

]
@@ -20,9 +20,10 @@

.exercise[

- Let's ping `goo.gl`:
- Let's ping `1.1.1.1`, Cloudflare's
[public DNS resolver](https://blog.cloudflare.com/announcing-1111/):
  ```bash
  kubectl run pingpong --image alpine ping goo.gl
  kubectl run pingpong --image alpine ping 1.1.1.1
  ```

]
@@ -49,9 +50,11 @@ OK, what just happened?
--

We should see the following things:
- `deploy/pingpong` (the *deployment* that we just created)
- `rs/pingpong-xxxx` (a *replica set* created by the deployment)
- `po/pingpong-yyyy` (a *pod* created by the replica set)
- `deployment.apps/pingpong` (the *deployment* that we just created)
- `replicaset.apps/pingpong-xxxxxxxxxx` (a *replica set* created by the deployment)
- `pod/pingpong-xxxxxxxxxx-yyyyy` (a *pod* created by the replica set)

Note: as of 1.10.1, resource types are displayed in more detail.

---
@@ -78,21 +81,34 @@ We should see the following things:

---

class: extra-details

## Our `pingpong` deployment

- `kubectl run` created a *deployment*, `deploy/pingpong`
- `kubectl run` created a *deployment*, `deployment.apps/pingpong`

- That deployment created a *replica set*, `rs/pingpong-xxxx`
  ```
  NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/pingpong   1         1         1            1           10m
  ```

- That replica set created a *pod*, `po/pingpong-yyyy`
- That deployment created a *replica set*, `replicaset.apps/pingpong-xxxxxxxxxx`

  ```
  NAME                                  DESIRED   CURRENT   READY   AGE
  replicaset.apps/pingpong-7c8bbcd9bc   1         1         1       10m
  ```

- That replica set created a *pod*, `pod/pingpong-xxxxxxxxxx-yyyyy`

  ```
  NAME                            READY   STATUS    RESTARTS   AGE
  pod/pingpong-7c8bbcd9bc-6c9qz   1/1     Running   0          10m
  ```

- We'll see later how these folks play together for:

- scaling

- high availability

- rolling updates
- scaling, high availability, rolling updates

---
@@ -119,6 +135,8 @@ We should see the following things:

---

class: extra-details

## Streaming logs in real time

- Just like `docker logs`, `kubectl logs` supports convenient options:
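As a quick illustration of such options (these are standard `kubectl logs` flags; the deployment name matches the example above):

```bash
kubectl logs deploy/pingpong --tail 1 --follow   # print the last line, then stream
kubectl logs deploy/pingpong --since 10s         # only show the last 10 seconds
```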
@@ -158,7 +176,7 @@ We should see the following things:

]

Note: what if we tried to scale `rs/pingpong-xxxx`?
Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?

We could! But the *deployment* would notice it right away, and scale back to the initial level.

@@ -186,7 +204,7 @@ We could! But the *deployment* would notice it right away, and scale back to the

- Destroy a pod:
  ```bash
  kubectl delete pod pingpong-yyyy
  kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
  ```
]
@@ -209,6 +227,8 @@ We could! But the *deployment* would notice it right away, and scale back to the

---

class: extra-details

## Viewing logs of multiple pods

- When we specify a deployment name, only one single pod's logs are shown
@@ -232,15 +252,17 @@ Unfortunately, `--follow` cannot (yet) be used to stream the logs from multiple

---

class: title
class: extra-details

Meanwhile,
<br/>
at the Google NOC ...
<br/>
<br/>
.small[“Why the hell]
<br/>
.small[are we getting 1000 packets per second]
<br/>
.small[of ICMP ECHO traffic from these IPs?!?”]
## Aren't we flooding 1.1.1.1?

- If you're wondering this, good question!

- Don't worry, though:

*APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.*

(Source: https://blog.cloudflare.com/announcing-1111/)

- It's very unlikely that our concerted pings manage to produce
even a modest blip at Cloudflare's NOC!
@@ -63,7 +63,7 @@

## Kubernetes network model: in practice

- The nodes that we are using have been set up to use Weave
- The nodes that we are using have been set up to use [Weave](https://github.com/weaveworks/weave)

- We don't endorse Weave in a particular way, it just Works For Us

@@ -1,19 +1,10 @@
# Links and resources

All things Kubernetes:

- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)

All things Docker:
- [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/)

- [Docker documentation](http://docs.docker.com/)
- [Docker Hub](https://hub.docker.com)
- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker)
- [Play With Docker Hands-On Labs](http://training.play-with-docker.com/)

Everything else:
- [Cloud Developer Advocates](https://developer.microsoft.com/advocates/)

- [Local meetups](https://www.meetup.com/)
@@ -40,7 +40,12 @@

## Creating namespaces

- We can create namespaces with a very minimal YAML, e.g.:
- Creating a namespace is done with the `kubectl create namespace` command:
  ```bash
  kubectl create namespace blue
  ```

- We can also get fancy and use a very minimal YAML snippet, e.g.:
  ```bash
  kubectl apply -f- <<EOF
  apiVersion: v1
@@ -50,6 +55,8 @@
  EOF
  ```

- The two methods above are identical

- If we are using a tool like Helm, it will create namespaces automatically

---
@@ -4,6 +4,8 @@ Our app on Kube

---

class: extra-details

## What's on the menu?

In this part, we will:
@@ -130,6 +132,8 @@ We should see:

---

class: extra-details

## Testing our local registry

- We can retag a small image, and push it to the registry
@@ -151,6 +155,8 @@ We should see:

---

class: extra-details

## Checking again what's on our local registry

- Let's use the same endpoint as before
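As an illustration of that retag-and-push flow (a sketch; it assumes the local registry is published on `127.0.0.1:5000`, as in this section):

```bash
docker pull busybox
docker tag busybox 127.0.0.1:5000/busybox
docker push 127.0.0.1:5000/busybox
curl 127.0.0.1:5000/v2/_catalog   # the catalog should now list "busybox"
```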
@@ -33,6 +33,23 @@

---

## Checking current rollout parameters

- Recall how we build custom reports with `kubectl` and `jq`:

.exercise[

- Show the rollout plan for our deployments:
  ```bash
  kubectl get deploy -o json |
    jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate"
  ```

]

---

## Rolling updates in practice

- As of Kubernetes 1.8, we can do rolling updates with:
@@ -94,7 +111,29 @@ That rollout should be pretty quick. What shows in the web UI?

---

## Rolling out a boo-boo
## Give it some time

- At first, it looks like nothing is happening (the graph remains at the same level)

- According to `kubectl get deploy -w`, the `deployment` was updated really quickly

- But `kubectl get pods -w` tells a different story

- The old `pods` are still here, and they stay in `Terminating` state for a while

- Eventually, they are terminated; and then the graph decreases significantly

- This delay is due to the fact that our worker doesn't handle signals

- Kubernetes sends a "polite" shutdown request to the worker, which ignores it

- After a grace period, Kubernetes gets impatient and kills the container

(The grace period is 30 seconds, but [can be changed](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) if needed)
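For reference, the grace period is set in the pod template; a sketch (the 5-second value is arbitrary):

```yaml
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 5
```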
---

## Rolling out something invalid

- What happens if we make a mistake?

@@ -119,6 +158,66 @@ Our rollout is stuck. However, the app is not dead (just 10% slower).

---

## What's going on with our rollout?

- Why is our app 10% slower?

- Because `MaxUnavailable=1`, so the rollout terminated 1 replica out of 10 available

- Okay, but why do we see 2 new replicas being rolled out?

- Because `MaxSurge=1`, so in addition to replacing the terminated one, the rollout is also starting one more

---

class: extra-details

## The nitty-gritty details

- We start with 10 pods running for the `worker` deployment

- Current settings: MaxUnavailable=1 and MaxSurge=1

- When we start the rollout:

- one replica is taken down (as per MaxUnavailable=1)
- another is created (with the new version) to replace it
- another is created (with the new version) per MaxSurge=1

- Now we have 9 replicas up and running, and 2 being deployed

- Our rollout is stuck at this point!

---

## Checking the dashboard during the bad rollout

.exercise[

- Check which port the dashboard is on:
  ```bash
  kubectl -n kube-system get svc socat
  ```

]

Note the `3xxxx` port.

.exercise[

- Connect to http://oneofournodes:3xxxx/

<!-- ```open https://node1:3xxxx/``` -->

]

--

- We have failures in Deployments, Pods, and Replica Sets
---

## Recovering from a bad rollout

- We could push some `v0.3` image

@@ -200,6 +299,8 @@ spec:
  minReadySeconds: 10
  "
  kubectl rollout status deployment worker
  kubectl get deploy -o json worker |
    jq "{name:.metadata.name} + .spec.strategy.rollingUpdate"
  ```
]
@@ -20,27 +20,10 @@

6) Copy the configuration file generated by `kubeadm init`

- Check the [prepare VMs README](https://github.com/jpetazzo/container.training/blob/master/prepare-vms/README.md) for more details
- Check the [prepare VMs README](https://@@GITREPO@@/blob/master/prepare-vms/README.md) for more details

---

## `kubeadm` drawbacks

- Doesn't set up Docker or any other container engine

- Doesn't set up the overlay network

- Scripting is complex
<br/>
(because extracting the token requires advanced `kubectl` commands)

- Doesn't set up multi-master (no high availability)

--

- "It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme

---

## Other deployment options
@@ -65,4 +48,23 @@

Probably the closest to a multi-cloud/hybrid solution so far, but in development

- Also, many commercial options!
---

## Even more deployment options

- If you like Ansible:
[kubespray](https://github.com/kubernetes-incubator/kubespray)

- If you like Terraform:
[typhoon](https://github.com/poseidon/typhoon/)

- You can also learn how to install every component manually, with
the excellent tutorial [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way)

*Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.*

- There are also many commercial options available!

- For a longer list, check the Kubernetes documentation:
<br/>
it has a great guide to [pick the right solution](https://kubernetes.io/docs/setup/pick-right-solution/) to set up Kubernetes.
@@ -1,8 +1,8 @@
## Versions installed

- Kubernetes 1.10.0
- Docker Engine 18.03.0-ce
- Docker Compose 1.20.1
- Kubernetes 1.11.0
- Docker Engine 18.03.1-ce
- Docker Compose 1.21.1

.exercise[
@@ -22,7 +22,7 @@ class: extra-details

## Kubernetes and Docker compatibility

- Kubernetes 1.10 only validates Docker Engine versions [1.11.2 to 1.13.1 and 17.03.x](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies)
- Kubernetes 1.10.x only validates Docker Engine versions [1.11.2 to 1.13.1 and 17.03.x](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies)

--
@@ -1,45 +1,3 @@
# Next steps

*Alright, how do I get started and containerize my apps?*

--

Suggested containerization checklist:

.checklist[
- write a Dockerfile for one service in one app
- write Dockerfiles for the other (buildable) services
- write a Compose file for that whole app
- make sure that devs are empowered to run the app in containers
- set up automated builds of container images from the code repo
- set up a CI pipeline using these container images
- set up a CD pipeline (for staging/QA) using these images
]

And *then* it is time to look at orchestration!

---

## Namespaces

- Namespaces let you run multiple identical stacks side by side

- Two namespaces (e.g. `blue` and `green`) can each have their own `redis` service

- Each of the two `redis` services has its own `ClusterIP`

- `kube-dns` creates two entries, mapping to these two `ClusterIP` addresses:

`redis.blue.svc.cluster.local` and `redis.green.svc.cluster.local`

- Pods in the `blue` namespace get a *search suffix* of `blue.svc.cluster.local`

- As a result, resolving `redis` from a pod in the `blue` namespace yields the "local" `redis`

.warning[This does not provide *isolation*! That would be the job of network policies.]
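A quick way to observe this resolution behavior (a sketch; it assumes the `blue` and `green` namespaces and their `redis` services exist):

```bash
# From a throwaway pod in the "blue" namespace, "redis" resolves locally,
# while the "green" copy is reachable via its fully qualified name
kubectl run -ti --rm --restart=Never dnstest --namespace=blue \
        --image=alpine -- nslookup redis.green.svc.cluster.local
```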
---

## Stateful services (databases etc.)

- As a first step, it is wiser to keep stateful services *outside* of the cluster

@@ -172,17 +130,3 @@ Sorry Star Trek fans, this is not the federation you're looking for!

- Synchronize resources across clusters

- Discover resources across clusters

---

## Developer experience

*I've put this last, but it's pretty important!*

- How do you on-board a new developer?

- What do they need to install to get a dev stack?

- How does a code change make it from dev to prod?

- How does someone add a component to a stack?
@@ -1,31 +1,14 @@
## Intros

- This slide should be customized by the tutorial instructor(s).
- Hello! We are:

- Hello! We are:
- .emoji[✨] Ashley ([@ashleymcnamara](https://twitter.com/ashleymcnamara))

- .emoji[👩🏻‍🏫] Ann O'Nymous ([@...](https://twitter.com/...), Megacorp Inc)
- .emoji[🌟] Brian ([@bketelsen](https://twitter.com/bketelsen))

- .emoji[👨🏾‍🎓] Stu Dent ([@...](https://twitter.com/...), University of Wakanda)

<!-- .dummy[

- .emoji[👷🏻‍♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)

- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)

- .emoji[⛵] Jérémy ([@jeremygarrouste](twitter.com/jeremygarrouste), Inpiwee)

] -->

- The workshop will run from ...

- There will be a lunch break at ...

(And coffee breaks!)
- The workshop will run from 13:30-15:00

- Feel free to interrupt for questions at any time

- *Especially when you see full screen container pictures!*

- Live feedback, questions, help: @@CHAT@@
@@ -114,6 +114,8 @@ def generatefromyaml(manifest, filename):
    html = html.replace("@@MARKDOWN@@", markdown)
    html = html.replace("@@EXCLUDE@@", exclude)
    html = html.replace("@@CHAT@@", manifest["chat"])
    html = html.replace("@@GITREPO@@", manifest["gitrepo"])
    html = html.replace("@@SLIDES@@", manifest["slides"])
    html = html.replace("@@TITLE@@", manifest["title"].replace("\n", " "))
    return html
@@ -1,2 +1,3 @@
# This is for netlify
PyYAML
jinja2
@@ -5,6 +5,10 @@ title: |
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- self-paced
- snap

@@ -5,6 +5,10 @@ title: |
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- self-paced
- snap

@@ -4,6 +4,10 @@ title: |

chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- in-person
- btp-auto

@@ -4,6 +4,10 @@ title: |

chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"

gitrepo: github.com/jpetazzo/container.training

slides: http://container.training/

exclude:
- in-person
- btp-auto
@@ -141,7 +141,7 @@ It alters the code path for `docker run`, so it is allowed only under strict cir

- Update `webui` so that we can connect to it from outside:
  ```bash
  docker service update webui --publish-add 8000:80 --detach=false
  docker service update webui --publish-add 8000:80
  ```

]
@@ -197,7 +197,7 @@ It has been replaced by the new version, with port 80 accessible from outside.

- Bring up more workers:
  ```bash
  docker service update worker --replicas 10 --detach=false
  docker service update worker --replicas 10
  ```

- Check the result in the web UI
@@ -235,7 +235,7 @@ You should see the performance peaking at 10 hashes/s (like before).
- Re-create the `rng` service with *global scheduling*:
  ```bash
  docker service create --name rng --network dockercoins --mode global \
    --detach=false $REGISTRY/rng:$TAG
    $REGISTRY/rng:$TAG
  ```

- Look at the result in the web UI
@@ -258,14 +258,12 @@ class: extra-details

- This might change in the future (after all, it was possible in 1.12 RC!)

- As of Docker Engine 17.05, other parameters requiring to `rm`/`create` the service are:
- As of Docker Engine 18.03, other parameters requiring to `rm`/`create` the service are:

- service name

- hostname

- network

---

## Removing everything
@@ -114,7 +114,7 @@ services:

- Deploy our local registry:
  ```bash
  docker stack deploy registry --compose-file registry.yml
  docker stack deploy --compose-file registry.yml registry
  ```

]
@@ -304,7 +304,7 @@ services:

- Create the application stack:
  ```bash
  docker stack deploy dockercoins --compose-file dockercoins.yml
  docker stack deploy --compose-file dockercoins.yml dockercoins
  ```

]

@@ -49,7 +49,7 @@ This will display the unlock key. Copy-paste it somewhere safe.
]

Note: if you are doing the workshop on your own, using nodes
that you [provisioned yourself](https://github.com/jpetazzo/container.training/tree/master/prepare-machine) or with [Play-With-Docker](http://play-with-docker.com/), you might have to use a different method to restart the Engine.
that you [provisioned yourself](https://@@GITREPO@@/tree/master/prepare-machine) or with [Play-With-Docker](http://play-with-docker.com/), you might have to use a different method to restart the Engine.

---
@@ -77,7 +77,7 @@ More resources on this topic:

- It won't be scheduled automatically when constraints are satisfiable again

- You will have to update the service; you can do a no-op udate with:
- You will have to update the service; you can do a no-op update with:
  ```bash
  docker service update ... --force
  ```
@@ -20,23 +20,6 @@

---

class: extra-details

## `--detach` for service creation

(New in Docker Engine 17.05)

If you are running Docker 17.05 to 17.09, you will see the following message:

```
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
```

You can ignore that for now; but we'll come back to it in just a few minutes!

---

## Checking service logs

(New in Docker Engine 17.05)
@@ -62,20 +45,6 @@ Note: by default, when a container is destroyed (e.g. when scaling down), its lo

class: extra-details

## Before Docker Engine 17.05

- Docker 1.13/17.03/17.04 have `docker service logs` as an experimental feature
<br/>(available only when enabling the experimental feature flag)

- We have to use `docker logs`, which only works on local containers

- We will have to connect to the node running our container
<br/>(unless it was scheduled locally, of course)

---

class: extra-details

## Looking up where our container is running

- The `docker service ps` command told us where our container was scheduled
@@ -127,7 +96,7 @@ class: extra-details

- Scale the service to ensure 2 copies per node:
  ```bash
  docker service update pingpong --replicas 10
  docker service update pingpong --replicas 6
  ```

- Check that we have two containers on the current node:
@@ -141,15 +110,16 @@ class: extra-details

## Monitoring deployment progress with `--detach`

(New in Docker Engine 17.05)
(New in Docker Engine 17.10)

- The CLI can monitor commands that create/update/delete services
- The CLI monitors commands that create/update/delete services

- `--detach=false`
- In effect, `--detach=false` is the default

- synchronous operation
- the CLI will monitor and display the progress of our request
- it exits only when the operation is complete
- Ctrl-C to detach at any time

- `--detach=true`
@@ -198,12 +168,12 @@ class: extra-details

- Scale the service to ensure 3 copies per node:
  ```bash
  docker service update pingpong --replicas 15 --detach=false
  docker service update pingpong --replicas 9 --detach=false
  ```

- And then to 4 copies per node:
  ```bash
  docker service update pingpong --replicas 20 --detach=true
  docker service update pingpong --replicas 12 --detach=true
  ```

]
@@ -237,7 +207,7 @@ class: extra-details

- Create an ElasticSearch service (and give it a name while we're at it):
  ```bash
  docker service create --name search --publish 9200:9200 --replicas 7 \
  docker service create --name search --publish 9200:9200 --replicas 5 \
    elasticsearch`:2`
  ```
@@ -267,7 +237,7 @@ The latest version of the ElasticSearch image won't start without mandatory conf

---

class: extra-details
class: extra-details, pic
@@ -321,10 +291,10 @@ apk add --no-cache jq

## Load balancing results

Traffic is handled by our cluster's [TCP routing mesh](
Traffic is handled by our cluster's [routing mesh](
https://docs.docker.com/engine/swarm/ingress/).

Each request is served by one of the 7 instances, in rotation.
Each request is served by one of the instances, in rotation.

Note: if you try to access the service from your browser,
you will probably see the same
@@ -333,7 +303,13 @@ to re-use the same connection.

---

## Under the hood of the TCP routing mesh
class: pic

*(diagram)*

---

## Under the hood of the routing mesh

- Load balancing is done by IPVS

@@ -352,9 +328,9 @@ to re-use the same connection.

There are many ways to deal with inbound traffic on a Swarm cluster.

- Put all (or a subset) of your nodes in a DNS `A` record
- Put all (or a subset) of your nodes in a DNS `A` record (good for web clients)

- Assign your nodes (or a subset) to an ELB
- Assign your nodes (or a subset) to an external load balancer (ELB, etc.)

- Use a virtual IP and make sure that it is assigned to an "alive" node
@@ -362,22 +338,37 @@ There are many ways to deal with inbound traffic on a Swarm cluster.

---

class: btw-labels
class: pic

*(diagram)*

---

## Managing HTTP traffic

- The TCP routing mesh doesn't parse HTTP headers

- If you want to place multiple HTTP services on port 80, you need something more
- If you want to place multiple HTTP services on port 80/443, you need something more

- You can set up NGINX or HAProxy on port 80 to do the virtual host switching
- You can set up NGINX or HAProxy on port 80/443 to route connections to the correct
Service, but they need to be "Swarm aware" to dynamically update configs

- Docker Universal Control Plane provides its own [HTTP routing mesh](
https://docs.docker.com/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services/)
--

- add a specific label starting with `com.docker.ucp.mesh.http` to your services
- Docker EE provides its own [Layer 7 routing](https://docs.docker.com/ee/ucp/interlock/)

- labels are detected automatically and dynamically update the configuration
- Service labels like `com.docker.lb.hosts=<FQDN>` are detected automatically via Docker
API and dynamically update the configuration

--

- Two common open source options:

- [Traefik](https://traefik.io/) - popular, many features, requires running on managers,
needs key/value for HA

- [Docker Flow Proxy](http://proxy.dockerflow.com/) - uses HAProxy, made for
Swarm by Docker Captain [@vfarcic](https://twitter.com/vfarcic)

---
@@ -395,7 +386,7 @@ class: btw-labels

- owner of a service (for billing, paging...)

- etc.
- correlate Swarm objects together (services, volumes, configs, secrets, etc.)

---

@@ -448,16 +439,10 @@ class: extra-details

.exercise[

- Get the source code of this simple-yet-beautiful visualization app:
- Run this simple-yet-beautiful visualization app:
  ```bash
  cd ~
  git clone git://github.com/dockersamples/docker-swarm-visualizer
  ```

- Build and run the Swarm visualizer:
  ```bash
  cd docker-swarm-visualizer
  docker-compose up -d
  cd ~/container.training/stacks
  docker-compose -f visualizer.yml up -d
  ```

<!-- ```longwait Creating dockerswarmvisualizer_viz_1``` -->
@@ -498,7 +483,7 @@ class: extra-details

- Instead of viewing your cluster, this could take care of logging, metrics, autoscaling ...

- We can run it within a service, too! We won't do it, but the command would look like:
- We can run it within a service, too! We won't do it yet, but the command would look like:

```bash
docker service create \
@@ -506,12 +491,16 @@ class: extra-details
  --name viz --constraint node.role==manager ...
```

.footnote[

Credits: the visualization code was written by
[Francisco Miranda](https://github.com/maroshii).
<br/>

[Mano Marks](https://twitter.com/manomarks) adapted
it to Swarm and maintains it.

]

---
## Terminate our services

@@ -120,7 +120,7 @@ We will use the following Compose file (`stacks/dockercoins+healthcheck.yml`):

- Deploy the updated stack:
  ```bash
  docker stack deploy dockercoins --compose-file dockercoins+healthcheck.yml
  docker stack deploy --compose-file dockercoins+healthcheck.yml dockercoins
  ```

]
@@ -146,7 +146,7 @@ First, let's make an "innocent" change and deploy it.
docker-compose -f dockercoins+healthcheck.yml build
docker-compose -f dockercoins+healthcheck.yml push
docker service update dockercoins_hasher \
  --detach=false --image=127.0.0.1:5000/hasher:$TAG
  --image=127.0.0.1:5000/hasher:$TAG
```

]
@@ -170,7 +170,7 @@ And now, a breaking change that will cause the health check to fail:
docker-compose -f dockercoins+healthcheck.yml build
docker-compose -f dockercoins+healthcheck.yml push
docker service update dockercoins_hasher \
  --detach=false --image=127.0.0.1:5000/hasher:$TAG
  --image=127.0.0.1:5000/hasher:$TAG
```

]
@@ -3,7 +3,7 @@
- This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person,
instructor-led workshops and tutorials

- Over time, [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) also helped to improve these materials — thank you!
- Over time, [multiple contributors](https://@@GITREPO@@/graphs/contributors) also helped to improve these materials — thank you!

- You can also follow along on your own, at your own pace

@@ -113,7 +113,7 @@ class: elk-manual

- We could author a custom image bundling this configuration

- We can also pass the [configuration](https://github.com/jpetazzo/container.training/blob/master/elk/logstash.conf) on the command line
- We can also pass the [configuration](https://@@GITREPO@@/blob/master/elk/logstash.conf) on the command line

.exercise[

@@ -187,7 +187,7 @@ class: elk-auto
  ```bash
  docker-compose -f elk.yml build
  docker-compose -f elk.yml push
  docker stack deploy elk -c elk.yml
  docker stack deploy -c elk.yml elk
  ```

]
@@ -195,7 +195,7 @@ class: elk-auto
Note: the *build* and *push* steps are not strictly necessary, but they don't hurt!

Let's have a look at the [Compose file](
https://github.com/jpetazzo/container.training/blob/master/stacks/elk.yml).
https://@@GITREPO@@/blob/master/stacks/elk.yml).

---
@@ -169,7 +169,7 @@ class: in-person

This should tell us that we are talking to `node3`.

Note: it can be useful to use a [custom shell prompt](
https://github.com/jpetazzo/container.training/blob/master/prepare-vms/scripts/postprep.rc#L68)
https://@@GITREPO@@/blob/master/prepare-vms/scripts/postprep.rc#L68)
reflecting the `DOCKER_HOST` variable.

---

@@ -554,7 +554,7 @@ class: snap

## Instruct all nodes to join the agreement

- We dont need another fancy global service!
- We don't need another fancy global service!

- We can join nodes from any existing node of the cluster
@@ -1183,7 +1183,7 @@ class: prom

]

You should see 11 endpoints (5 cadvisor, 5 node, 1 prometheus).
You should see 7 endpoints (3 cadvisor, 3 node, 1 prometheus).

Their state should be "UP".
@@ -35,33 +35,7 @@ class: in-person

## Building our full cluster

- We could SSH to nodes 3, 4, 5; and copy-paste the command

--

class: in-person

- Or we could use the AWESOME POWER OF THE SHELL!

--

class: in-person

*(image)*

--

class: in-person

- No, not *that* shell

---

class: in-person

## Let's form like Swarm-tron

- Let's get the token, and loop over the remaining nodes with SSH
- Let's get the token, and use a one-liner for the remaining node with SSH

.exercise[
@@ -70,11 +44,9 @@ class: in-person
TOKEN=$(docker swarm join-token -q manager)
```

- Loop over the 3 remaining nodes:
- Add the remaining node:
  ```bash
  for NODE in node3 node4 node5; do
    ssh $NODE docker swarm join --token $TOKEN node1:2377
  done
  ssh node3 docker swarm join --token $TOKEN node1:2377
  ```

]
@@ -207,7 +179,7 @@ class: self-paced

- one of the main take-aways was *"you're gonna need a bigger manager"*

- Testing by the community: [4700 heterogenous nodes all over the 'net](https://sematext.com/blog/2016/11/14/docker-swarm-lessons-from-swarm3k/)
- Testing by the community: [4700 heterogeneous nodes all over the 'net](https://sematext.com/blog/2016/11/14/docker-swarm-lessons-from-swarm3k/)

- it just works
@@ -75,7 +75,7 @@ When enabling user namespaces:

For practical reasons, when enabling user namespaces, the Docker Engine places containers and images (and everything else) in a different directory.

As a resut, if you enable user namespaces on an existing installation:
As a result, if you enable user namespaces on an existing installation:

- all containers and images (and e.g. Swarm data) disappear

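Aside: a minimal sketch of turning this on (assumes a systemd-based host with no existing `/etc/docker/daemon.json`; `userns-remap` is the relevant daemon option):

```bash
# Enable user namespace remapping with the default "dockremap" user.
# After the restart, the engine stores its data in a new subdirectory
# of /var/lib/docker, so existing containers/images "disappear".
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```
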
@@ -302,7 +302,7 @@ class: extra-details, benchmarking

- Requests are a bit slower in the parallel benchmark

- It looks like `hasher` is better equiped to deal with concurrency than `rng`
- It looks like `hasher` is better equipped to deal with concurrency than `rng`

---

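Aside: a parallel benchmark like the one discussed could be reproduced with ApacheBench (a sketch; the port is hypothetical and depends on where `rng` happens to be published):

```bash
# 100 requests total, 10 in flight at a time, against the rng service.
ab -c 10 -n 100 http://localhost:8002/1
```
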
@@ -33,7 +33,7 @@ TOKEN=$(docker swarm join-token -q manager)
for N in $(seq 2 5); do
  DOCKER_HOST=tcp://node$N:2375 docker swarm join --token $TOKEN node1:2377
done
git clone git://github.com/jpetazzo/container.training
git clone git://@@GITREPO@@
cd container.training/stacks
docker stack deploy --compose-file registry.yml registry
docker-compose -f dockercoins.yml build

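Aside: once the registry stack is up, a quick smoke test is to hit its catalog endpoint (assuming the registry is published on port 5000):

```bash
# An empty registry returns {"repositories":[]}.
curl 127.0.0.1:5000/v2/_catalog
```
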
@@ -79,6 +79,11 @@ We just have to adapt this to our application, which has 4 services!
- doesn't come with anything either
- located wherever you want

- **Lots of 3rd party cloud or self-hosted options**

- AWS/Azure/Google Container Registry
- GitLab, Quay, JFrog

]

---

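Aside: whichever registry is chosen, the push workflow is the same; a minimal sketch against a local self-hosted registry (the address assumes the registry deployed in the earlier exercise):

```bash
# Tag an image with the registry's address, push it, pull it back.
docker tag busybox 127.0.0.1:5000/busybox
docker push 127.0.0.1:5000/busybox
docker pull 127.0.0.1:5000/busybox
```
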
@@ -107,7 +112,7 @@ class: extra-details

- Make sure we have a Docker Hub account

- [Activate a Docker Datacenter subscription](
- [Activate a Docker EE subscription](
https://hub.docker.com/enterprise/trial/)

- Install DTR on our machines

@@ -1,8 +1,8 @@
## Brand new versions!

- Engine 17.12
- Compose 1.17
- Machine 0.13
- Engine 18.03
- Compose 1.21
- Machine 0.14

.exercise[

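Aside: the installed versions can be checked with a few one-liners (a sketch; the exercise that follows presumably does something similar):

```bash
# Print the versions of the three tools listed above.
docker version --format 'Engine {{.Server.Version}}'
docker-compose version --short
docker-machine version
```
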
@@ -69,7 +69,7 @@ class: extra-details

(containerd, libcontainer, SwarmKit...)

- More predictible release schedule (see next slide)
- More predictable release schedule (see next slide)

---

@@ -89,8 +89,11 @@ class: pic
| 2016 | 1.12 | Swarm mode, routing mesh, encrypted networking, healthchecks
| 2017 | 1.13 | Stacks, attachable overlays, image squash and compress
| 2017 | 1.13 | Windows Server 2016 Swarm mode
| 2017 | 17.03 | Secrets
| 2017 | 17.03 | Secrets, encrypted Raft
| 2017 | 17.04 | Update rollback, placement preferences (soft constraints)
| 2017 | 17.05 | Multi-stage image builds, service logs
| 2017 | 17.06 | Swarm configs, node/service events
| 2017 | 17.06 | Swarm configs, node/service events, multi-stage build, service logs
| 2017 | 17.06 | Windows Server 2016 Swarm overlay networks, secrets
| 2017 | 17.09 | ADD/COPY chown, start\_period, stop-signal, overlay2 default
| 2017 | 17.12 | containerd, Hyper-V isolation, Windows routing mesh
| 2018 | 18.03 | Templates for secrets/configs, multi-yaml stacks, LCOW
| 2018 | 18.03 | Stack deploy to Kubernetes, `docker trust`, tmpfs, manifest CLI
