Compare commits


28 Commits

Author SHA1 Message Date
Diego Quintana
051dd13c21 Enforce alpine version that includes telnet (#292)
* Enforce alpine version that contains telnet

alpine 3.7 does not contain `telnet` by default https://github.com/gliderlabs/docker-alpine/issues/397#issuecomment-375415746

* bump fix

enforce alpine 3.6 in another slide that mentions `telnet`
2018-06-28 08:26:30 -05:00
Diego Quintana
8c3d4c2c56 Add short inline explanation for -w (#291)
I don't know, but maybe having this short explanation saves a `docker run --help` for someone. 

Tell me if it's too much :D
2018-06-28 08:25:34 -05:00
Jerome Petazzoni
817e17a3a8 Merge branch 'master' into avril2018 2018-04-13 08:13:10 +02:00
Jérôme Petazzoni
e48016a0de Merge pull request #203 from jpetazzo/master
Typo fix, thanks Bridget! ♥
2018-04-12 15:55:36 -05:00
Jerome Petazzoni
39765c9ad0 Add food menu 2018-04-12 15:54:20 -05:00
Jerome Petazzoni
ca06269f00 Merge branch 'master' into avril2018 2018-04-12 12:06:44 -05:00
Jerome Petazzoni
9876a9aaa6 Add dockerfile samples 2018-04-12 09:04:47 +02:00
Jerome Petazzoni
853ba7ec39 Add dockerfile samples 2018-04-12 09:04:36 +02:00
Jerome Petazzoni
3d5c89774c Merge branch 'master' into avril2018 2018-04-11 12:17:03 +02:00
Jerome Petazzoni
21bb5fa9e1 Clarify wifi 2018-04-11 01:40:35 -05:00
Jerome Petazzoni
3fe4d730e7 merge master 2018-04-11 01:13:24 -05:00
Jerome Petazzoni
056b3a7127 hotfix for kubectl get all 2018-04-10 17:21:29 -05:00
Jerome Petazzoni
292885566d Merge branch 'master' into avril2018 2018-04-10 17:12:21 -05:00
Jerome Petazzoni
a54287a6bb Setup chapters appropriately 2018-04-10 09:13:25 -05:00
Jerome Petazzoni
e1fe41b7d7 Merge branch 'master' into avril2018 2018-04-10 08:41:34 -05:00
Jerome Petazzoni
817e3f9217 Fix @jgarrouste's Twitter link 2018-04-10 06:31:42 -05:00
Jerome Petazzoni
bb94c6fe76 Cards for Paris 2018-04-10 06:31:13 -05:00
Jerome Petazzoni
fd05530fff Merge branch 'more-info-on-labels-and-rollouts' into avril2018 2018-04-10 06:05:33 -05:00
Jerome Petazzoni
86f2395b2c Merge branch 'master' into avril2018 2018-04-10 05:31:47 -05:00
Jerome Petazzoni
60f68351c6 Add demos by @jgarrouste 2018-04-10 04:45:41 -05:00
Jerome Petazzoni
035d015a61 Merge branch 'master' into avril2018 2018-04-10 04:25:22 -05:00
Jerome Petazzoni
83efd145b8 Merge branch 'master' into avril2018 2018-04-09 17:07:02 -05:00
Jerome Petazzoni
c6c1a942e7 Update WiFi password and schedule 2018-04-09 15:44:32 -05:00
Jerome Petazzoni
59f5ff7788 Customize outline and title 2018-04-09 15:32:52 -05:00
Jerome Petazzoni
1fbf7b7dbd herp derp symlinks and stuff 2018-04-09 15:32:41 -05:00
Jerome Petazzoni
249947b0dd Setup links to slide decks 2018-04-09 15:26:47 -05:00
Jerome Petazzoni
e9af03e976 On a second thought, let's have relative links 2018-04-09 15:22:12 -05:00
Jerome Petazzoni
ab583e2670 Custom index for avril2018.container.training 2018-04-09 15:21:35 -05:00
107 changed files with 1256 additions and 2724 deletions

`.gitignore` — 2 lines changed

@@ -8,6 +8,4 @@ prepare-vms/settings.yaml
prepare-vms/tags
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
node_modules


@@ -292,31 +292,15 @@ If there is a bug and you can't even reproduce it:
sorry. It is probably a Heisenbug. We can't act on it
until it's reproducible, alas.
# “Please teach us!”
If you have attended one of these workshops, and want
your team or organization to attend a similar one, you
can look at the list of upcoming events on
http://container.training/.
You are also welcome to reuse these materials to run
your own workshop, for your team or even at a meetup
or conference. In that case, you might enjoy watching
[Bridget Kromhout's talk at KubeCon 2018 Europe](
https://www.youtube.com/watch?v=mYsp_cGY2O0), explaining
precisely how to run such a workshop yourself.
Finally, you can also contact the following persons,
who are experienced speakers, are familiar with the
material, and are available to deliver these workshops
at your conference or for your company:
If you have attended this workshop and have feedback,
or if you want somebody to deliver that workshop at your
conference or for your company: you can contact one of us!
- jerome dot petazzoni at gmail dot com
- bret at bretfisher dot com
(If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!)
If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!
**Thank you!**


@@ -28,5 +28,5 @@ def rng(how_many_bytes):
if __name__ == "__main__":
app.run(host="0.0.0.0", port=80, threaded=False)
app.run(host="0.0.0.0", port=80)


@@ -103,7 +103,7 @@ wrap Run this program in a container
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires good connection to run all the parallel SSH connections, up to 100 parallel (ProTip: create dedicated management instance in same AWS region where you run all these utils from)
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` generates PDF/HTML files to print and cut and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
@@ -210,7 +210,7 @@ The `postprep.py` file will be copied via parallel-ssh to all of the VMs and exe
#### Pre-pull images
$ ./workshopctl pull_images TAG
$ ./workshopctl pull-images TAG
#### Generate cards


@@ -1,20 +1,18 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set url = "avril2018.container.training" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set workshop_name = "formation" -%}
{%- set cluster_or_machine = "votre VM" -%}
{%- set machine_is_or_machines_are = "Votre VM" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "orchestration workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set workshop_name = "formation" -%}
{%- set cluster_or_machine = "votre cluster" -%}
{%- set machine_is_or_machines_are = "Votre cluster" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_swarm -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
@@ -75,9 +73,9 @@ img {
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
Voici les informations pour vous connecter à
{{ cluster_or_machine }} pour cette formation.
Vous pouvez vous connecter avec n'importe quel client SSH.
</p>
<p>
<img src="{{ image_src }}" />
@@ -90,14 +88,14 @@ img {
</p>
<p>
Your {{ machine_is_or_machines_are }}:
{{ machine_is_or_machines_are }} :
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<p>Les slides sont à l'adresse suivante :
<center>{{ url }}</center>
</p>
</div>


@@ -7,6 +7,7 @@ services:
working_dir: /root/prepare-vms
volumes:
- $HOME/.aws/:/root/.aws/
- /etc/localtime:/etc/localtime:ro
- $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
- $PWD/:/root/prepare-vms/
environment:


@@ -48,7 +48,7 @@ _cmd_cards() {
rm -f ips.html ips.pdf
# This will generate two files in the base dir: ips.pdf and ips.html
lib/ips-txt-to-html.py $SETTINGS
python lib/ips-txt-to-html.py $SETTINGS
for f in ips.html ips.pdf; do
# Remove old versions of cards if they exist
@@ -393,23 +393,9 @@ pull_tag() {
ubuntu:latest \
fedora:latest \
centos:latest \
elasticsearch:2 \
postgres \
redis \
alpine \
registry \
nicolaka/netshoot \
jpetazzo/trainingwheels \
golang \
training/namer \
dockercoins/hasher \
dockercoins/rng \
dockercoins/webui \
dockercoins/worker \
logstash \
prom/node-exporter \
google/cadvisor \
dockersamples/visualizer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'


@@ -108,7 +108,7 @@ system("sudo chmod +x /usr/local/bin/docker-machine")
system("docker-machine version")
system("sudo apt-get remove -y --purge dnsmasq-base")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh tree")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
### Wait for Docker to be up.
### (If we don't do this, Docker will not be responsive during the next step.)


@@ -7,7 +7,7 @@ clustersize: 1
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
@@ -20,5 +20,5 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
compose_version: 1.20.1
machine_version: 0.14.0


@@ -20,5 +20,5 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
compose_version: 1.20.1
machine_version: 0.14.0


@@ -1,13 +1,13 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
# Number of VMs per cluster
clustersize: 3
clustersize: 5
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
@@ -20,5 +20,5 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
compose_version: 1.20.1
machine_version: 0.14.0


@@ -1,2 +0,0 @@
/ /kube-90min.yml.html 200!


@@ -1,8 +1,6 @@
#!/bin/sh
set -e
case "$1" in
once)
./index.py
for YAML in *.yml; do
./markmaker.py $YAML > $YAML.html || {
rm $YAML.html
@@ -17,13 +15,6 @@ once)
;;
forever)
set +e
# check if entr is installed
if ! command -v entr >/dev/null; then
echo >&2 "First install 'entr' with apt, brew, etc."
exit
fi
# There is a weird bug in entr, at least on MacOS,
# where it doesn't restore the terminal to a clean
state when exiting. So let's try to work around


@@ -2,7 +2,7 @@
- All the content is available in a public GitHub repository:
https://@@GITREPO@@
https://github.com/jpetazzo/container.training
- You can get updated "builds" of the slides there:
@@ -10,7 +10,7 @@
<!--
.exercise[
```open https://@@GITREPO@@```
```open https://github.com/jpetazzo/container.training```
```open http://container.training/```
]
-->
@@ -23,7 +23,7 @@
<!--
.exercise[
```open https://@@GITREPO@@/tree/master/slides/common/about-slides.md```
```open https://github.com/jpetazzo/container.training/tree/master/slides/common/about-slides.md```
]
-->
@@ -35,7 +35,7 @@ class: extra-details
- This slide has a little magnifying glass in the top left corner
- This magnifying glass indicates slides that provide extra details
- This magnifiying glass indicates slides that provide extra details
- Feel free to skip them if:


@@ -49,6 +49,26 @@ Tip: use `^S` and `^Q` to pause/resume log output.
---
class: extra-details
## Upgrading from Compose 1.6
.warning[The `logs` command has changed between Compose 1.6 and 1.7!]
- Up to 1.6
- `docker-compose logs` is the equivalent of `logs --follow`
- `docker-compose logs` must be restarted if containers are added
- Since 1.7
- `--follow` must be specified explicitly
- new containers are automatically picked up by `docker-compose logs`
---
## Scaling up the application
- Our goal is to make that performance graph go up (without changing a line of code!)
@@ -106,7 +126,7 @@ We have available resources.
- Start one more `worker` container:
```bash
docker-compose up -d --scale worker=2
docker-compose scale worker=2
```
- Look at the performance graph (it should show a x2 improvement)
@@ -127,7 +147,7 @@ We have available resources.
- Start eight more `worker` containers:
```bash
docker-compose up -d --scale worker=10
docker-compose scale worker=10
```
- Look at the performance graph: does it show a x10 improvement?


@@ -1,4 +1,46 @@
## Hands-on
# Pre-requirements
- Be comfortable with the UNIX command line
- navigating directories
- editing files
- a little bit of bash-fu (environment variables, loops)
- Some Docker knowledge
- `docker run`, `docker ps`, `docker build`
- ideally, you know how to write a Dockerfile and build it
<br/>
(even if it's a `FROM` line and a couple of `RUN` commands)
- It's totally OK if you are not a Docker expert!
---
class: title
*Tell me and I forget.*
<br/>
*Teach me and I remember.*
<br/>
*Involve me and I learn.*
Misattributed to Benjamin Franklin
[(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/)
---
## Hands-on sections
- The whole workshop is hands-on
- We are going to build, ship, and run containers!
- You are invited to reproduce all the demos
- All hands-on sections are clearly identified, like the gray rectangle below
@@ -6,12 +48,55 @@
- This is the stuff you're supposed to do!
- Go to @@SLIDES@@ to view these slides
- Go to [container.training](http://container.training/) to view these slides
- Join the chat room: @@CHAT@@
<!-- ```open http://container.training/``` -->
]
---
class: in-person
## Where are we going to run our containers?
---
class: in-person, pic
![You get a cluster](images/you-get-a-cluster.jpg)
---
class: in-person
## You get a cluster of cloud VMs
- Each person gets a private cluster of cloud VMs (not shared with anybody else)
- They'll remain up for the duration of the workshop
- You should have a little card with login+password+IP addresses
- You can automatically SSH from one VM to another
- The nodes have aliases: `node1`, `node2`, etc.
---
class: in-person
## Why don't we run containers locally?
- Installing that stuff can be hard on some machines
(32 bits CPU or OS... Laptops without administrator access... etc.)
- *"The whole team downloaded all these container images from the WiFi!
<br/>... and it went great!"* (Literally no-one ever)
- All you need is a computer (or even a phone or tablet!), with:
- an internet connection
@@ -24,18 +109,201 @@
class: in-person
## SSH clients
- On Linux, OS X, FreeBSD... you are probably all set
- On Windows, get one of these:
- [putty](http://www.putty.org/)
- Microsoft [Win32 OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH)
- [Git BASH](https://git-for-windows.github.io/)
- [MobaXterm](http://mobaxterm.mobatek.net/)
- On Android, [JuiceSSH](https://juicessh.com/)
([Play Store](https://play.google.com/store/apps/details?id=com.sonelli.juicessh))
works pretty well
- Nice-to-have: [Mosh](https://mosh.org/) instead of SSH, if your internet connection tends to lose packets
---
class: in-person, extra-details
## What is this Mosh thing?
*You don't have to use Mosh or even know about it to follow along.
<br/>
We're just telling you about it because some of us think it's cool!*
- Mosh is "the mobile shell"
- It is essentially SSH over UDP, with roaming features
- It retransmits packets quickly, so it works great even on lossy connections
(Like hotel or conference WiFi)
- It has intelligent local echo, so it works great even in high-latency connections
(Like hotel or conference WiFi)
- It supports transparent roaming when your client IP address changes
(Like when you hop from hotel to conference WiFi)
---
class: in-person, extra-details
## Using Mosh
- To install it: `(apt|yum|brew) install mosh`
- It has been pre-installed on the VMs that we are using
- To connect to a remote machine: `mosh user@host`
(It is going to establish an SSH connection, then hand off to UDP)
- It requires UDP ports to be open
(By default, it uses a UDP port between 60000 and 61000)
---
class: in-person
## Connecting to our lab environment
.exercise[
- Log into the first VM (`node1`) with your SSH client
<!--
```bash
for N in $(awk '/\Wnode/{print $2}' /etc/hosts); do
ssh -o StrictHostKeyChecking=no $N true
done
```
```bash
if which kubectl; then
kubectl get all -o name | grep -v service/kubernetes | xargs -n1 kubectl delete
fi
```
-->
- Check that you can SSH (without password) to `node2`:
```bash
ssh node2
```
- Type `exit` or `^D` to come back to `node1`
<!-- ```bash exit``` -->
]
If anything goes wrong — ask for help!
---
## Doing or re-doing the workshop on your own?
- Use something like
[Play-With-Docker](http://play-with-docker.com/) or
[Play-With-Kubernetes](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
Zero setup effort; but environments are short-lived and
might have limited resources
- Create your own cluster (local or cloud VMs)
Small setup effort; small cost; flexible environments
- Create a bunch of clusters for you and your friends
([instructions](https://github.com/jpetazzo/container.training/tree/master/prepare-vms))
Bigger setup effort; ideal for group training
---
class: self-paced
## Get your own Docker nodes
- If you already have some Docker nodes: great!
- If not: let's get some thanks to Play-With-Docker
.exercise[
- Go to http://www.play-with-docker.com/
- Log in
- Create your first node
<!-- ```open http://www.play-with-docker.com/``` -->
]
You will need a Docker ID to use Play-With-Docker.
(Creating a Docker ID is free.)
---
## We will (mostly) interact with node1 only
*These remarks apply only when using multiple nodes, of course.*
- Unless instructed, **all commands must be run from the first VM, `node1`**
- We will only checkout/copy the code on `node1`
- During normal operations, we do not need access to the other nodes
- If we had to troubleshoot issues, we would use a combination of:
- SSH (to access system logs, daemon status...)
- Docker API (to check running containers and container engine status)
---
## Terminals
Once in a while, the instructions will say:
<br/>"Open a new terminal."
There are multiple ways to do this:
- create a new window or tab on your machine, and SSH into the VM;
- use screen or tmux on the VM and open a new window from there.
You are welcome to use the method that you feel the most comfortable with.
---
## Tmux cheatsheet
[Tmux](https://en.wikipedia.org/wiki/Tmux) is a terminal multiplexer like `screen`.
*You don't have to use it or even know about it to follow along.
<br/>
But some of us like to use it to switch between terminals.
<br/>
It has been preinstalled on your workshop nodes.*
- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
- Ctrl-b " → split window top/bottom
- Ctrl-b % → split window left/right
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b arrows → navigate to other windows
- Ctrl-b d → detach session
- tmux attach → reattach to session


@@ -16,7 +16,7 @@ fi
- Clone the repository on `node1`:
```bash
git clone git://@@GITREPO@@
git clone git://github.com/jpetazzo/container.training
```
]
@@ -56,16 +56,16 @@ and displays aggregated logs.
## More detail on our sample application
- Visit the GitHub repository with all the materials of this workshop:
<br/>https://@@GITREPO@@
<br/>https://github.com/jpetazzo/container.training
- The application is in the [dockercoins](
https://@@GITREPO@@/tree/master/dockercoins)
https://github.com/jpetazzo/container.training/tree/master/dockercoins)
subdirectory
- Let's look at the general layout of the source code:
there is a Compose file [docker-compose.yml](
https://@@GITREPO@@/blob/master/dockercoins/docker-compose.yml) ...
https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml) ...
... and 4 other services, each in its own directory:
@@ -94,6 +94,61 @@ class: extra-details
---
## Service discovery in container-land
- We do not hard-code IP addresses in the code
- We do not hard-code FQDN in the code, either
- We just connect to a service name, and container-magic does the rest
(And by container-magic, we mean "a crafty, dynamic, embedded DNS server")
---
## Example in `worker/worker.py`
```python
redis = Redis("`redis`")
def get_random_bytes():
r = requests.get("http://`rng`/32")
return r.content
def hash_bytes(data):
r = requests.post("http://`hasher`/",
data=data,
headers={"Content-Type": "application/octet-stream"})
```
(Full source code available [here](
https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
))
---
class: extra-details
## Links, naming, and service discovery
- Containers can have network aliases (resolvable through DNS)
- Compose file version 2+ makes each container reachable through its service name
- Compose file version 1 did require "links" sections
- Network aliases are automatically namespaced
- you can have multiple apps declaring and using a service named `database`
- containers in the blue app will resolve `database` to the IP of the blue database
- containers in the green app will resolve `database` to the IP of the green database
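The namespacing described above can be modeled with a toy resolver (purely illustrative; in reality Docker's embedded DNS server performs this per-network lookup, and the addresses below are made up):

```python
# Toy model of per-network DNS namespacing: each app (i.e. each Compose
# project / network) gets its own view of the same service name.
networks = {
    "blue":  {"database": "10.0.1.2"},
    "green": {"database": "10.0.2.2"},
}

def resolve(network, name):
    """Return the address a container attached to `network` would get
    when it looks up `name`."""
    return networks[network][name]
```

Both apps connect to plain `database`; which backend they reach depends only on the network the container is attached to.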
---
## What's this application?
--


@@ -11,11 +11,9 @@ class: title, in-person
@@TITLE@@<br/></br>
.footnote[
**Be kind to the WiFi!**<br/>
<!-- *Use the 5G network.* -->
*Don't use your hotspot.*<br/>
*Don't stream videos or download big files during the workshop.*<br/>
*Thank you!*
**WiFI: `ArtyLoft`** ou **`ArtyLoft 5 GHz`**
<br/>
**Mot de passe: `TFLEVENT5`**
**Slides: @@SLIDES@@**
**Slides: http://avril2018.container.training/**
]


@@ -1,57 +0,0 @@
#!/usr/bin/env python
import re
import sys
PREFIX = "name: toc-"
EXCLUDED = ["in-person"]
class State(object):
def __init__(self):
self.current_slide = 1
self.section_title = None
self.section_start = 0
self.section_slides = 0
self.chapters = {}
self.sections = {}
def show(self):
if self.section_title.startswith("chapter-"):
return
print("{0.section_title}\t{0.section_start}\t{0.section_slides}".format(self))
self.sections[self.section_title] = self.section_slides
state = State()
title = None
for line in open(sys.argv[1]):
line = line.rstrip()
if line.startswith(PREFIX):
if state.section_title is None:
print("{}\t{}\t{}".format("title", "index", "size"))
else:
state.show()
state.section_title = line[len(PREFIX):].strip()
state.section_start = state.current_slide
state.section_slides = 0
if line == "---":
state.current_slide += 1
state.section_slides += 1
if line == "--":
state.current_slide += 1
toc_links = re.findall("\(#toc-(.*)\)", line)
if toc_links and state.section_title.startswith("chapter-"):
if state.section_title not in state.chapters:
state.chapters[state.section_title] = []
state.chapters[state.section_title].append(toc_links[0])
# This is really hackish
if line.startswith("class:"):
for klass in EXCLUDED:
if klass in line:
state.section_slides -= 1
state.current_slide -= 1
state.show()
for chapter in sorted(state.chapters):
chapter_size = sum(state.sections[s] for s in state.chapters[chapter])
print("{}\t{}\t{}".format("total size for", chapter, chapter_size))


@@ -0,0 +1,10 @@
#!/bin/sh
INPUT=$1
{
echo "# Front matter"
cat "$INPUT"
} |
grep -e "^# " -e ^---$ | uniq -c |
sed "s/^ *//" | sed s/---// |
paste -d "\t" - -
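The shell pipeline above can be approximated in Python to check what it computes (a rough sketch of the same counting logic, not part of the repo):

```python
import itertools

def slide_counts(markdown_text):
    """Pair each top-level '# ' heading with the number of '---' slide
    separators that follow it (same idea as the grep/uniq/paste pipeline)."""
    lines = ["# Front matter"] + markdown_text.splitlines()
    # Keep only headings and slide separators, like the grep step.
    kept = [l for l in lines if l.startswith("# ") or l == "---"]
    counts = []
    for is_sep, run in itertools.groupby(kept, key=lambda l: l == "---"):
        run = list(run)
        if is_sep:
            # uniq -c collapses a run of adjacent '---' lines into one
            # count; attach it to the most recent heading.
            counts[-1] = (counts[-1][0], len(run))
        else:
            counts.extend((heading[2:], 0) for heading in run)
    return counts
```

Like the shell version, the grep-style filtering step makes all the separators between two headings adjacent, so they collapse into a single per-heading count.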

7 binary image files changed (contents not shown): 6 removed (30–70 KiB each), 1 added (22 KiB).


@@ -1,59 +0,0 @@
body {
background-image: url("images/container-background.jpg");
max-width: 1024px;
margin: 0 auto;
}
table {
font-size: 20px;
font-family: sans-serif;
background: white;
width: 100%;
height: 100%;
padding: 20px;
}
.header {
font-size: 300%;
font-weight: bold;
}
.title {
font-size: 150%;
font-weight: bold;
}
.details {
font-size: 80%;
font-style: italic;
}
td {
padding: 1px;
height: 1em;
}
td.spacer {
height: unset;
}
td.footer {
padding-top: 80px;
height: 100px;
}
td.title {
border-bottom: thick solid black;
padding-bottom: 2px;
padding-top: 20px;
}
a {
text-decoration: none;
}
a:hover {
background: yellow;
}
a.attend:after {
content: "📅 attend";
}
a.slides:after {
content: "📚 slides";
}
a.chat:after {
content: "💬 chat";
}
a.video:after {
content: "📺 video";
}

`slides/index.html` — new file, 29 lines

@@ -0,0 +1,29 @@
<html>
<head>
<link rel="stylesheet" type="text/css" href="theme.css">
<title>Formation/workshop containers, orchestration, et Kubernetes à Paris en avril</title>
</head>
<body>
<div class="index">
<div class="block">
<h4>Introduction aux conteneurs</h4>
<h5>De la pratique … aux bonnes pratiques</h5>
<h6>(11-12 avril 2018)</h6>
<p>
<a href="intro.yml.html">SLIDES</a>
<a href="https://gitter.im/jpetazzo/training-20180411-paris">CHATROOM</a>
</p>
</div>
<div class="block">
<h4>Introduction à l'orchestration</h4>
<h5>Kubernetes par l'exemple</h5>
<h6>(13 avril 2018)</h6>
<p>
<a href="kube.yml.html">SLIDES</a>
<a href="https://gitter.im/jpetazzo/training-20180413-paris">CHATROOM</a>
<a href="https://docs.google.com/spreadsheets/d/1KiuCVduTf3wf-4-vSmcK96I61WYdDP0BppkOx_XZcjM/edit?ts=5acfc2ef#gid=0">FOODMENU</a>
</p>
</div>
</div>
</body>
</html>


@@ -1,140 +0,0 @@
#!/usr/bin/env python2
# coding: utf-8
TEMPLATE="""<html>
<head>
<title>{{ title }}</title>
<link rel="stylesheet" href="index.css">
</head>
<body>
<div class="main">
<table>
<tr><td class="header" colspan="3">{{ title }}</td></tr>
{% if coming_soon %}
<tr><td class="title" colspan="3">Coming soon near you</td></tr>
{% for item in coming_soon %}
<tr>
<td>{{ item.title }}</td>
<td>{% if item.slides %}<a class="slides" href="{{ item.slides }}" />{% endif %}</td>
<td><a class="attend" href="{{ item.attend }}" /></td>
</tr>
<tr>
<td class="details">Scheduled {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
{% if past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>
{% for item in past_workshops[:5] %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td>{% if item.video %}<a class="video" href="{{ item.video }}" />{% endif %}</td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% if past_workshops[5:] %}
<tr>
<td>... and at least <a href="past.html">{{ past_workshops[5:] | length }} more</a>.</td>
</tr>
{% endif %}
{% endif %}
{% if recorded_workshops %}
<tr><td class="title" colspan="3">Recorded workshops</td></tr>
{% for item in recorded_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td><a class="video" href="{{ item.video }}" /></td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
{% if self_paced %}
<tr><td class="title" colspan="3">Self-paced tutorials</td></tr>
{% for item in self_paced %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
</tr>
{% endfor %}
{% endif %}
{% if all_past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>
{% for item in all_past_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
{% if item.video %}
<td><a class="video" href="{{ item.video }}" /></td>
{% endif %}
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
<tr><td class="spacer"></td></tr>
<tr>
<td class="footer">
Maintained by Jérôme Petazzoni (<a href="https://twitter.com/jpetazzo">@jpetazzo</a>) and <a href="https://github.com/jpetazzo/container.training/graphs/contributors">contributors</a>.
</td>
</tr>
</table>
</div>
</body>
</html>""".decode("utf-8")
import datetime
import jinja2
import yaml
items = yaml.load(open("index.yaml"))
for item in items:
if "date" in item:
date = item["date"]
suffix = {
1: "st", 2: "nd", 3: "rd",
21: "st", 22: "nd", 23: "rd",
31: "st"}.get(date.day, "th")
item["prettydate"] = date.strftime("%B %e{}, %Y").format(suffix)
today = datetime.date.today()
coming_soon = [i for i in items if i.get("date") and i["date"] >= today]
coming_soon.sort(key=lambda i: i["date"])
past_workshops = [i for i in items if i.get("date") and i["date"] < today]
past_workshops.sort(key=lambda i: i["date"], reverse=True)
self_paced = [i for i in items if not i.get("date")]
recorded_workshops = [i for i in items if i.get("video")]
template = jinja2.Template(TEMPLATE)
with open("index.html", "w") as f:
f.write(template.render(
title="Container Training",
coming_soon=coming_soon,
past_workshops=past_workshops,
self_paced=self_paced,
recorded_workshops=recorded_workshops
).encode("utf-8"))
with open("past.html", "w") as f:
f.write(template.render(
title="Container Training",
all_past_workshops=past_workshops
).encode("utf-8"))
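The date-formatting logic above relies on a day→suffix table; it can be factored into a small helper for sanity-checking (a sketch; note that the script's `%e` format code is glibc-specific, so this version formats the day number directly):

```python
import datetime

# Same suffix table as the generator above: 1st/2nd/3rd, 21st/22nd/23rd,
# 31st, and "th" for everything else.
DAY_SUFFIX = {1: "st", 2: "nd", 3: "rd",
              21: "st", 22: "nd", 23: "rd",
              31: "st"}

def pretty_date(date):
    """Render a date like 'April 13th, 2018'."""
    suffix = DAY_SUFFIX.get(date.day, "th")
    return "{} {}{}, {}".format(date.strftime("%B"), date.day, suffix, date.year)
```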


@@ -1,361 +0,0 @@
- date: 2018-07-12
city: Minneapolis, MN
country: us
event: devopsdays Minneapolis
title: Kubernetes 101
speaker: "ashleymcnamara, bketelsen"
attend: https://www.devopsdays.org/events/2018-minneapolis/registration/
- date: 2018-10-01
city: New York, NY
country: us
event: Velocity
title: Kubernetes 101
speaker: bridgetkromhout
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70102
- date: 2018-09-30
city: New York, NY
country: us
event: Velocity
title: Kubernetes Bootcamp - Deploying and Scaling Microservices
speaker: jpetazzo
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/69875
- date: 2018-07-17
city: Portland, OR
country: us
event: OSCON
title: Kubernetes 101
speaker: bridgetkromhout
attend: https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/66287
- date: 2018-06-27
city: Amsterdam
country: nl
event: devopsdays
title: Kubernetes 101
speaker: bridgetkromhout
slides: https://devopsdaysams2018.container.training
attend: https://www.devopsdays.org/events/2018-amsterdam/registration/
- date: 2018-06-12
city: San Jose, CA
country: us
event: Velocity
title: Kubernetes 101
speaker: bridgetkromhout
slides: https://velocitysj2018.container.training
attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66286
- date: 2018-06-12
city: San Jose, CA
country: us
event: Velocity
title: "Kubernetes two-day kickstart: Deploying and Scaling Microservices with Kubernetes"
speaker: "bketelsen, erikstmartin"
slides: http://kubernetes.academy/kube-fullday.yml.html#1
attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932
- date: 2018-06-11
city: San Jose, CA
country: us
event: Velocity
title: "Kubernetes two-day kickstart: Introduction to Docker and Containers"
speaker: "bketelsen, erikstmartin"
slides: http://kubernetes.academy/intro-fullday.yml.html#1
attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932
- date: 2018-05-17
city: Virginia Beach, VA
country: us
event: Revolution Conf
title: Docker 101
speaker: bretfisher
slides: https://revconf18.bretfisher.com
- date: 2018-05-10
city: Saint Paul, MN
country: us
event: NDC Minnesota
title: Kubernetes 101
slides: https://ndcminnesota2018.container.training
- date: 2018-05-08
city: Budapest
country: hu
event: CRAFT
title: Swarm Orchestration
slides: https://craftconf18.bretfisher.com
- date: 2018-04-27
city: Chicago, IL
country: us
event: GOTO
title: Swarm Orchestration
slides: https://gotochgo18.bretfisher.com
- date: 2018-04-24
city: Chicago, IL
country: us
event: GOTO
title: Kubernetes 101
slides: http://gotochgo2018.container.training/
- date: 2018-04-11
city: Paris
country: fr
title: Introduction aux conteneurs
lang: fr
slides: https://avril2018.container.training/intro.yml.html
- date: 2018-04-13
city: Paris
country: fr
lang: fr
title: Introduction à l'orchestration
slides: https://avril2018.container.training/kube.yml.html
- date: 2018-04-06
city: Sacramento, CA
country: us
event: MuraCon
title: Docker 101
slides: https://muracon18.bretfisher.com
- date: 2018-03-27
city: Santa Clara, CA
country: us
event: SREcon Americas
title: Kubernetes 101
slides: http://srecon2018.container.training/
- date: 2018-03-27
city: Bergen
country: no
event: Boosterconf
title: Kubernetes 101
slides: http://boosterconf2018.container.training/
- date: 2018-02-22
city: San Francisco, CA
country: us
event: IndexConf
title: Kubernetes 101
slides: http://indexconf2018.container.training/
#attend: https://developer.ibm.com/indexconf/sessions/#!?id=5474
- date: 2017-11-17
city: San Francisco, CA
country: us
event: QCON SF
title: Orchestrating Microservices with Docker Swarm
slides: http://qconsf2017swarm.container.training/
- date: 2017-11-16
city: San Francisco, CA
country: us
event: QCON SF
title: Introduction to Docker and Containers
slides: http://qconsf2017intro.container.training/
video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLgqTum8MkspG_8VzGl6C07
- date: 2017-10-30
city: San Francisco, CA
country: us
event: LISA
title: (M7) Getting Started with Docker and Containers
slides: http://lisa17m7.container.training/
- date: 2017-10-31
city: San Francisco, CA
country: us
event: LISA
title: (T9) Build, Ship, and Run Microservices on a Docker Swarm Cluster
slides: http://lisa17t9.container.training/
- date: 2017-10-26
city: Prague
country: cz
event: Open Source Summit Europe
title: Deploying and scaling microservices with Docker and Kubernetes
slides: http://osseu17.container.training/
video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLrsyydCzxWrIP_1-wkcSHS
- date: 2017-10-16
city: Copenhagen
country: dk
event: DockerCon
title: Swarm from Zero to Hero
slides: http://dc17eu.container.training/
- date: 2017-10-16
city: Copenhagen
country: dk
event: DockerCon
title: Orchestration for Advanced Users
slides: https://www.bretfisher.com/dockercon17eu
- date: 2017-07-25
city: Minneapolis, MN
country: us
event: devopsdays
title: Deploying & Scaling microservices with Docker Swarm
video: https://www.youtube.com/watch?v=DABbqyJeG_E
- date: 2017-06-12
city: Berlin
country: de
event: DevOpsCon
title: Deploying and scaling containerized Microservices with Docker and Swarm
- date: 2017-05-18
city: Portland, OR
country: us
event: PyCon
title: Deploy and scale containers with Docker native, open source orchestration
video: https://www.youtube.com/watch?v=EuzoEaE6Cqs
- date: 2017-05-08
city: Austin, TX
country: us
event: OSCON
title: Deploying and scaling applications in containers with Docker
- date: 2017-05-04
city: Chicago, IL
country: us
event: GOTO
title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2017-04-17
city: Austin, TX
country: us
event: DockerCon
title: Orchestration Workshop
- date: 2017-03-22
city: San Jose, CA
country: us
event: Devoxx
title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2017-03-03
city: Pasadena, CA
country: us
event: SCALE
title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2016-12-06
city: Boston, MA
country: us
event: LISA
title: Deploying and Scaling Applications with Docker Swarm
slides: http://lisa16t1.container.training/
video: https://www.youtube.com/playlist?list=PLBAFXs0YjviIDDhr8vIwCN1wkyNGXjbbc
- date: 2016-10-07
city: Berlin
country: de
event: LinuxCon
title: Orchestrating Containers in Production at Scale with Docker Swarm
- date: 2016-09-20
city: New York, NY
country: us
event: Velocity
title: Deployment and orchestration at scale with Docker
- date: 2016-08-25
city: Toronto
country: ca
event: LinuxCon
title: Orchestrating Containers in Production at Scale with Docker Swarm
- date: 2016-06-22
city: Seattle, WA
country: us
event: DockerCon
title: Orchestration Workshop
- date: 2016-05-29
city: Portland, OR
country: us
event: PyCon
title: Introduction to Docker and containers
slides: https://us.pycon.org/2016/site_media/media/tutorial_handouts/DockerSlides.pdf
video: https://www.youtube.com/watch?v=ZVaRK10HBjo
- date: 2016-05-17
city: Austin, TX
country: us
event: OSCON
title: Deployment and orchestration at scale with Docker Swarm
- date: 2016-04-27
city: Budapest
country: hu
event: CRAFT
title: Advanced Docker concepts and container orchestration
- date: 2016-04-22
city: Berlin
country: de
event: Neofonie
title: Orchestration Workshop
- date: 2016-04-05
city: Stockholm
country: se
event: Praqma
title: Orchestration Workshop
- date: 2016-03-22
city: Munich
country: de
event: Stylight
title: Orchestration Workshop
- date: 2016-03-11
city: London
country: uk
event: QCON
title: Containers in production with Docker Swarm
- date: 2016-02-19
city: Amsterdam
country: nl
event: Container Solutions
title: Orchestration Workshop
- date: 2016-02-15
city: Paris
country: fr
event: Zenika
title: Orchestration Workshop
- date: 2016-01-22
city: Pasadena, CA
country: us
event: SCALE
title: Advanced Docker concepts and container orchestration
#- date: 2015-11-10
# city: Washington DC
# country: us
# event: LISA
# title: Deploying and Scaling Applications with Docker Swarm
#2015-09-24-strangeloop
- title: Introduction to Docker and Containers
slides: intro-selfpaced.yml.html
- title: Container Orchestration with Docker and Swarm
slides: swarm-selfpaced.yml.html
- title: Deploying and Scaling Microservices with Docker and Kubernetes
slides: kube-selfpaced.yml.html

View File

@@ -2,12 +2,8 @@ title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/training-20180411-paris)"
exclude:
- self-paced
@@ -22,21 +18,21 @@ chapters:
- intro/Docker_History.md
- intro/Training_Environment.md
- intro/Installing_Docker.md
- intro/First_Containers.md
- - intro/First_Containers.md
- intro/Background_Containers.md
- intro/Start_And_Attach.md
- - intro/Initial_Images.md
- intro/Building_Images_Interactively.md
- intro/Initial_Images.md
- - intro/Building_Images_Interactively.md
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- - intro/Multi_Stage_Builds.md
- intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- - intro/Container_Networking_Basics.md
- intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md
@@ -45,10 +41,11 @@ chapters:
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Docker_Machine.md
- - intro/Advanced_Dockerfiles.md
- - intro/CI_Pipeline.md
- intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Dockerfile_Samples.md
- intro/Logging.md
- intro/Resource_Limits.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md

View File

@@ -1,14 +1,11 @@
title: |
Introduction
to Containers
to Docker and
Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
@@ -30,13 +27,13 @@ chapters:
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- - intro/Multi_Stage_Builds.md
- intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- - intro/Container_Networking_Basics.md
- intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md
@@ -45,14 +42,13 @@ chapters:
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Docker_Machine.md
- - intro/Advanced_Dockerfiles.md
- intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Logging.md
- intro/Resource_Limits.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- - intro/Container_Engines.md
- intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md

slides/intro.yml Symbolic link
View File

@@ -0,0 +1 @@
intro-fullday.yml

View File

@@ -34,6 +34,18 @@ In this section, we will see more Dockerfile commands.
---
## The `MAINTAINER` instruction
The `MAINTAINER` instruction tells you who wrote the `Dockerfile`.
```dockerfile
MAINTAINER Docker Education Team <education@docker.com>
```
It's optional but recommended.
---
## The `RUN` instruction
The `RUN` instruction can be specified in two ways.
@@ -355,7 +367,7 @@ class: extra-details
## Overriding the `ENTRYPOINT` instruction
The entry point can be overridden as well.
The entry point can be overriden as well.
```bash
$ docker run -it training/ls
@@ -416,4 +428,5 @@ ONBUILD COPY . /src
```
* You can't chain `ONBUILD` instructions with `ONBUILD`.
* `ONBUILD` can't be used to trigger `FROM` instructions.
* `ONBUILD` can't be used to trigger `FROM` and `MAINTAINER`
instructions.
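As a sketch, a hypothetical base image for Python apps could use `ONBUILD` like this (image contents and file paths are invented for illustration):

```dockerfile
FROM python
# These instructions do nothing in *this* build; they fire when a
# downstream image is built FROM this one:
ONBUILD COPY . /src
ONBUILD RUN pip install -qr /src/requirements.txt
```

A child `Dockerfile` then only needs a `FROM` line pointing at this base image, and the copy and install steps run automatically at the child's build time.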

View File

@@ -40,8 +40,6 @@ ambassador containers.
---
class: pic
![ambassador](images/ambassador-diagram.png)
---

View File

@@ -117,7 +117,7 @@ CONTAINER ID IMAGE ... CREATED STATUS ...
Many Docker commands will work on container IDs: `docker stop`, `docker rm`...
If we want to list only the IDs of our containers (without the other columns
If we want to list only the IDs of our containers (without the other colums
or the header line),
we can use the `-q` ("Quiet", "Quick") flag:
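For example (an illustrative transcript; the IDs will differ on your machine):

```bash
# IDs only, one per line
$ docker ps -q
068cc0a91168
57ad9bde6339
# Handy for feeding other commands, e.g. stopping all running containers:
$ docker stop $(docker ps -q)
```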

View File

@@ -0,0 +1,3 @@
# Building a CI pipeline
.center[![Demo](images/demo.jpg)]

View File

@@ -49,7 +49,7 @@ Before diving in, let's see a small example of Compose in action.
---
class: pic
## Compose in action
![composeup](images/composeup.gif)
@@ -60,10 +60,6 @@ class: pic
If you are using the official training virtual machines, Compose has been
pre-installed.
If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them.
If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`.
You can always check that it is installed by running:
```bash
@@ -139,33 +135,22 @@ services:
---
## Compose file structure
## Compose file versions
A Compose file has multiple sections:
Version 1 directly has the various containers (`www`, `redis`...) at the top level of the file.
* `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.)
Version 2 has multiple sections:
* `services` is mandatory. A service is one or more replicas of the same image running as containers.
* `version` is mandatory and should be `"2"`.
* `services` is mandatory and corresponds to the content of the version 1 format.
* `networks` is optional and indicates to which networks containers should be connected.
<br/>(By default, containers will be connected on a private, per-compose-file network.)
<br/>(By default, containers will be connected on a private, per-app network.)
* `volumes` is optional and can define volumes to be used and/or shared by the containers.
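Putting it together, a minimal version 2 file could look like this (service names and ports are invented for illustration):

```yaml
version: "2"
services:
  www:
    build: www
    ports:
      - "8000:5000"
  redis:
    image: redis
```

Since `networks` and `volumes` are optional, they are omitted here; both services end up on the default per-app network.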
---
## Compose file versions
* Version 1 is legacy and shouldn't be used.
(If you see a Compose file without `version` and `services`, it's a legacy v1 file.)
* Version 2 added support for networks and volumes.
* Version 3 added support for deployment options (scaling, rolling updates, etc).
The [Docker documentation](https://docs.docker.com/compose/compose-file/)
has excellent information about the Compose file format if you need to know more about versions.
Version 3 adds support for deployment options (scaling, rolling updates, etc.)
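As a sketch, the version 3 deployment options look like this (the values are arbitrary examples):

```yaml
version: "3"
services:
  www:
    image: nginx
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
```

The `deploy` section is only honored when the file is deployed to a Swarm cluster (`docker stack deploy`); plain `docker-compose up` ignores it.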
---
@@ -275,8 +260,6 @@ Removing trainingwheels_www_1 ... done
Removing trainingwheels_redis_1 ... done
```
Use `docker-compose down -v` to remove everything including volumes.
---
## Special handling of volumes

View File

@@ -73,7 +73,7 @@ Containers also exist (sometimes with other names) on Windows, macOS, Solaris, F
## LXC
* The venerable ancestor (first released in 2008).
* The venerable ancestor (first realeased in 2008).
* Docker initially relied on it to execute containers.

View File

@@ -65,17 +65,9 @@ eb0eeab782f4 host host
* A network is managed by a *driver*.
* The built-in drivers include:
* All the drivers that we have seen before are available.
* `bridge` (default)
* `none`
* `host`
* `macvlan`
* A multi-host driver, *overlay*, is available out of the box (for Swarm clusters).
* A new multi-host driver, *overlay*, is available out of the box.
* More drivers can be provided by plugins (OVS, VLAN...)
@@ -83,8 +75,6 @@ eb0eeab782f4 host host
---
class: extra-details
## Differences with the CNI
* CNI = Container Network Interface
@@ -97,22 +87,6 @@ class: extra-details
---
class: pic
## Single container in a Docker network
![bridge0](images/bridge1.png)
---
class: pic
## Two containers on two Docker networks
![bridge3](images/bridge2.png)
---
## Creating a network
Let's create a network called `dev`.
@@ -310,7 +284,7 @@ since we wiped out the old Redis container).
---
class: extra-details
class: x-extra-details
## Names are *local* to each network
@@ -350,7 +324,7 @@ class: extra-details
Create the `prod` network.
```bash
$ docker network create prod
$ docker create network prod
5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c
```
@@ -498,13 +472,11 @@ b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09
* If containers span multiple hosts, we need an *overlay* network to connect them together.
* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging
VXLAN, *enabled with Swarm Mode*.
* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN.
* Other plugins (Weave, Calico...) can provide overlay networks as well.
* Once you have an overlay network, *all the features that we've used in this chapter work identically
across multiple hosts.*
* Once you have an overlay network, *all the features that we've used in this chapter work identically.*
---
@@ -542,174 +514,13 @@ General idea:
---
## Connecting and disconnecting dynamically
## Section summary
* So far, we have specified which network to use when starting the container.
We've learned how to:
* The Docker Engine also allows us to connect and disconnect while the container runs.
* Create private networks for groups of containers.
* This feature is exposed through the Docker API, and through two Docker CLI commands:
* Assign IP addresses to containers.
* `docker network connect <network> <container>`
* Use container naming to implement service discovery.
* `docker network disconnect <network> <container>`
---
## Dynamically connecting to a network
* We have a container named `es` connected to a network named `dev`.
* Let's start a simple alpine container on the default network:
```bash
$ docker run -ti alpine sh
/ #
```
* In this container, try to ping the `es` container:
```bash
/ # ping es
ping: bad address 'es'
```
This doesn't work, but we will change that by connecting the container.
---
## Finding the container ID and connecting it
* Figure out the ID of our alpine container; here are two methods:
* looking at `/etc/hostname` in the container,
* running `docker ps -lq` on the host.
* Run the following command on the host:
```bash
$ docker network connect dev `<container_id>`
```
---
## Checking what we did
* Try again to `ping es` from the container.
* It should now work correctly:
```bash
/ # ping es
PING es (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms
^C
```
* Interrupt it with Ctrl-C.
---
## Looking at the network setup in the container
We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`:
.small[
```bash
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1
valid_lft forever preferred_lft forever
/ #
```
]
Each network connection is materialized with a virtual network interface.
As we can see, we can be connected to multiple networks at the same time.
---
## Disconnecting from a network
* Let's try the symmetrical command to disconnect the container:
```bash
$ docker network disconnect dev <container_id>
```
* From now on, if we try to ping `es`, it will not resolve:
```bash
/ # ping es
ping: bad address 'es'
```
* Trying to ping the IP address directly won't work either:
```bash
/ # ping 172.20.0.3
... (nothing happens until we interrupt it with Ctrl-C)
```
---
class: extra-details
## Network aliases are scoped per network
* Each network has its own set of network aliases.
* We saw this earlier: `es` resolves to different addresses in `dev` and `prod`.
* If we are connected to multiple networks, the resolver looks up names in each of them
(as of Docker Engine 18.03, in the order in which the container was connected) and stops
as soon as the name is found.
* Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not**
give us the addresses of all the `es` services; but only the ones in `dev` or `prod`.
* However, we can lookup `es.dev` or `es.prod` if we need to.
---
class: extra-details
## Finding out about our networks and names
* We can do reverse DNS lookups on containers' IP addresses.
* If the IP address belongs to a network (other than the default bridge), the result will be:
```
name-or-first-alias-or-container-id.network-name
```
* Example:
.small[
```bash
$ docker run -ti --net prod --net-alias hello alpine
/ # apk add --no-cache drill
...
OK: 5 MiB in 13 packages
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03
inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
...
/ # drill -t ptr `3.0.21.172`.in-addr.arpa
...
;; ANSWER SECTION:
3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`.
...
```
]

View File

@@ -98,7 +98,7 @@ $ curl localhost:32768
* We can see that metadata with `docker inspect`:
```bash
$ docker inspect --format '{{.Config.ExposedPorts}}' nginx
$ docker inspect nginx --format {{.Config.ExposedPorts}}
map[80/tcp:{}]
```

View File

@@ -64,7 +64,7 @@ Create this Dockerfile.
## Testing our C program
* Create `hello.c` and `Dockerfile` in the same directory.
* Create `hello.c` and `Dockerfile` in the same direcotry.
* Run `docker build -t hello .` in this directory.

View File

@@ -10,12 +10,10 @@
* [Solaris Containers (2004)](https://en.wikipedia.org/wiki/Solaris_Containers)
* [FreeBSD jails (1999-2000)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)
* [FreeBSD jails (1999)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)
Containers have been around for a *very long time* indeed.
(See [this excellent blog post by Serge Hallyn](https://s3hh.wordpress.com/2018/03/22/history-of-containers/) for more historic details.)
---
class: pic

View File

@@ -30,7 +30,7 @@
## Environment variables
- Most of the tools (CLI, libraries...) connecting to the Docker API can use environment variables.
- Most of the tools (CLI, libraries...) connecting to the Docker API can use ennvironment variables.
- These variables are:
@@ -40,7 +40,7 @@
- `DOCKER_CERT_PATH` (path to the keypair and certificate to use for auth)
- `docker-machine env ...` will generate the variables needed to connect to a host.
- `docker-machine env ...` will generate the variables needed to connect to an host.
- `eval $(docker-machine env ...)` sets these variables in the current shell.
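For example (an illustrative session; `node1` is a made-up machine name):

```bash
# Print the variables for the machine named node1
$ docker-machine env node1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
...
# Load them into the current shell, then use docker as usual
$ eval $(docker-machine env node1)
$ docker ps
```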
@@ -50,7 +50,7 @@
With `docker-machine`, we can:
- upgrade a host to the latest version of the Docker Engine,
- upgrade an host to the latest version of the Docker Engine,
- start/stop/restart hosts,

View File

@@ -0,0 +1,5 @@
# Dockerfile Samples
---
## (Demo in terminal)

View File

@@ -51,8 +51,9 @@ The dependencies are reinstalled every time, because the build system does not k
```bash
FROM python
MAINTAINER Docker Education Team <education@docker.com>
COPY . /src/
WORKDIR /src
COPY . .
RUN pip install -qr requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
@@ -66,10 +67,11 @@ Adding the dependencies as a separate step means that Docker can cache more effi
```bash
FROM python
COPY requirements.txt /tmp/requirements.txt
MAINTAINER Docker Education Team <education@docker.com>
COPY ./requirements.txt /tmp/requirements.txt
RUN pip install -qr /tmp/requirements.txt
COPY . /src/
WORKDIR /src
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```
@@ -96,266 +98,3 @@ CMD, EXPOSE ...
* The build fails as soon as an instruction fails
* If `RUN <unit tests>` fails, the build doesn't produce an image
* If it succeeds, it produces a clean image (without test libraries and data)
---
# Dockerfile examples
There are a number of tips, tricks, and techniques that we can use in Dockerfiles.
But sometimes, we have to use different (and even opposed) practices depending on:
- the complexity of our project,
- the programming language or framework that we are using,
- the stage of our project (early MVP vs. super-stable production),
- whether we're building a final image or a base for further images,
- etc.
We are going to show a few examples using very different techniques.
---
## When to optimize an image
When authoring official images, it is a good idea to reduce as much as possible:
- the number of layers,
- the size of the final image.
This is often done at the expense of build time and convenience for the image maintainer;
but when an image is downloaded millions of times, saving even a few seconds of pull time
can be worth it.
.small[
```dockerfile
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-install gd
...
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
&& echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
&& tar -xzf wordpress.tar.gz -C /usr/src/ \
&& rm wordpress.tar.gz \
&& chown -R www-data:www-data /usr/src/wordpress
```
]
(Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile))
---
## When to *not* optimize an image
Sometimes, it is better to prioritize *maintainer convenience*.
In particular, if:
- the image changes a lot,
- the image has very few users (e.g. only 1, the maintainer!),
- the image is built and run on the same machine,
- the image is built and run on machines with a very fast link ...
In these cases, just keep things simple!
(Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.)
---
```dockerfile
FROM debian:sid
RUN apt-get update -q
RUN apt-get install -yq build-essential make
RUN apt-get install -yq zlib1g-dev
RUN apt-get install -yq ruby ruby-dev
RUN apt-get install -yq python-pygments
RUN apt-get install -yq nodejs
RUN apt-get install -yq cmake
RUN gem install --no-rdoc --no-ri github-pages
COPY . /blog
WORKDIR /blog
VOLUME /blog/_site
EXPOSE 4000
CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"]
```
---
## Multi-dimensional versioning systems
Images can have a tag, indicating the version of the image.
But sometimes, there are multiple important components, and we need to indicate the versions
for all of them.
This can be done with environment variables:
```dockerfile
ENV PIP=9.0.3 \
ZC_BUILDOUT=2.11.2 \
SETUPTOOLS=38.7.0 \
PLONE_MAJOR=5.1 \
PLONE_VERSION=5.1.0 \
PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d
```
(Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile))
---
## Entrypoints and wrappers
It is very common to define a custom entrypoint.
That entrypoint will generally be a script, performing any combination of:
- pre-flight checks (if a required dependency is not available, display
a nice error message early instead of an obscure one in a deep log file),
- generation or validation of configuration files,
- dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`),
- and more.
---
## A typical entrypoint script
```dockerfile
#!/bin/sh
set -e
# first arg is '-f' or '--some-option'
# or first arg is 'something.conf'
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
set -- redis-server "$@"
fi
# allow the container to be started with '--user'
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
chown -R redis .
exec su-exec redis "$0" "$@"
fi
exec "$@"
```
(Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh))
---
## Factoring information
To facilitate maintenance (and avoid human errors), avoid repeating information like:
- version numbers,
- remote asset URLs (e.g. source tarballs) ...
Instead, use environment variables.
.small[
```dockerfile
ENV NODE_VERSION 10.2.1
...
RUN ...
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xf "node-v$NODE_VERSION.tar.xz" \
&& cd "node-v$NODE_VERSION" \
...
```
]
(Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile))
---
## Overrides
In theory, development and production images should be the same.
In practice, we often need to enable specific behaviors in development (e.g. debug statements).
One way to reconcile both needs is to use Compose to enable these behaviors.
Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.
---
## Production image
This Dockerfile builds an image leveraging gunicorn:
```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```
(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))
---
## Development Compose file
This Compose file uses the same image, but with a few overrides for development:
- the Flask development server is used (overriding `CMD`),
- the `DEBUG` environment variable is set,
- a volume is used to provide a faster local development workflow.
.small[
```yaml
services:
www:
build: www
ports:
- 8000:5000
user: nobody
environment:
DEBUG: 1
command: python counter.py
volumes:
- ./www:/src
```
]
(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))
---
## How to know which best practices are better?
- The main goal of containers is to make our lives easier.
- In this chapter, we showed many ways to write Dockerfiles.
- These Dockerfiles sometimes use diametrically opposed techniques.
- Yet, they were the "right" ones *for a specific situation.*
- It's OK (and even encouraged) to start simple and evolve as needed.
- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!

View File

@@ -110,8 +110,6 @@ Beautiful! .emoji[😍]
---
class: in-person
## Counting packages in the container
Let's check how many packages are installed there.
@@ -129,8 +127,6 @@ How many packages do we have on our host?
---
class: in-person
## Counting packages on the host
Exit the container by logging out of the shell, like you would usually do.
@@ -149,34 +145,18 @@ Now, try to:
---
class: self-paced
## Comparing the container and the host
Exit the container by logging out of the shell, with `^D` or `exit`.
Now try to run `figlet`. Does that work?
(It shouldn't, unless you happen to be running on a machine where figlet was installed before.)
---
## Host and containers are independent things
* We ran an `ubuntu` container on a Linux/Windows/macOS host.
* We ran an `ubuntu` container on an `ubuntu` host.
* They have different, independent packages.
* But they have different, independent packages.
* Installing something on the host doesn't expose it to the container.
* And vice-versa.
* Even if both the host and the container have the same Linux distro!
* We can run *any container* on *any host*.
(One exception: Windows containers cannot run on Linux machines; at least not yet.)
---
## Where's our container?

View File

@@ -144,7 +144,7 @@ docker run jpetazzo/crashtest
The container starts, but then stops immediately, without any output.
What would MacGyver&trade; do?
What would McGyver do?
First, let's check the status of that container.

View File

@@ -46,8 +46,6 @@ In this section, we will explain:
## Example for a Java webapp
Each of the following items will correspond to one layer:
* CentOS base layer
* Packages and configuration files added by our local IT
* JRE
@@ -58,22 +56,6 @@ Each of the following items will correspond to one layer:
---
class: pic
## The read-write layer
![layers](images/container-layers.jpg)
---
class: pic
## Multiple containers sharing the same image
![layers](images/sharing-layers.jpg)
---
## Differences between containers and images
* An image is a read-only filesystem.
@@ -81,14 +63,24 @@ class: pic
* A container is an encapsulated set of processes running in a
read-write copy of that filesystem.
* To optimize container boot time, *copy-on-write* is used
* To optimize container boot time, *copy-on-write* is used
instead of regular copy.
* `docker run` starts a container from a given image.
Let's give a couple of metaphors to illustrate those concepts.
---
## Comparison with object-oriented programming
## Images as stencils
Images are like templates or stencils that you can create containers from.
![stencil](images/stenciling-wall.jpg)
---
## Object-oriented programming
* Images are conceptually similar to *classes*.
@@ -107,7 +99,7 @@ If an image is read-only, how do we change it?
* We create a new container from that image.
* Then we make changes to that container.
* When we are satisfied with those changes, we transform them into a new layer.
* A new image is created by stacking the new layer on top of the old image.
---
## Creating the first images
There is a special empty image called `scratch`.

* It allows you to *build from scratch*.
Note: you will probably never have to do this yourself.

---

`docker commit`
* Saves all the changes made to a container into a new layer.
* Creates a new image (effectively a copy of the container).
`docker build` **(used 99% of the time)**
* Performs a repeatable build sequence.
* This is the preferred method!
---

Those images include:
* Ready-to-use components and services, like redis, postgresql...
* Over 130 at this point!
---
## User namespace
---

There are two ways to download images.
```bash
$ docker pull debian:jessie
Pulling repository debian
b164861940b8: Download complete
b164861940b8: Pulling image (jessie) from debian
d1881793a057: Download complete
```
* As seen previously, images are made up of layers.

---
## Installing Docker on Linux
* The recommended method is to install the packages supplied by Docker Inc.:
https://store.docker.com
* The general method is:
---
## Installing Docker on macOS and Windows
* On macOS, the recommended method is to use Docker for Mac:
https://docs.docker.com/docker-for-mac/install/
* On Windows 10 Pro, Enterprise, and Education, you can use Docker for Windows:
https://docs.docker.com/docker-for-windows/install/
* On other versions of Windows, you can use the Docker Toolbox:
https://docs.docker.com/toolbox/toolbox_install_windows/
* On Windows Server 2016, you can also install the native engine:
https://docs.docker.com/install/windows/docker-ee/
---
## Docker for Mac and Docker for Windows
* Special Docker Editions that integrate well with their respective host OS
* Provide user-friendly GUI to edit Docker configuration and settings
* Leverage the host OS virtualization subsystem (e.g. the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS)
* Installed like normal user applications on the host
* Under the hood, they both run a tiny VM (transparent to our daily use)
* Access network resources like normal applications
<br/>(and therefore, play better with enterprise VPNs and firewalls)
* Support filesystem sharing through volumes (we'll talk about this later)
* They only support running one Docker VM at a time ...
<br/>
... but we can use `docker-machine`, the Docker Toolbox, VirtualBox, etc. to get a cluster.
---
## Running Docker on macOS and Windows
This will also allow us to use remote Engines exactly as if they were local.
---
## Important PSA about security
* If you have access to the Docker control socket, you can take over the machine

---
## Containerized local development environments
We want to solve the following issues:
---

Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe?
```dockerfile
FROM ruby
MAINTAINER Education Team at Docker <education@docker.com>
COPY . /src
WORKDIR /src
---

```bash
$ docker run -d -v $(pwd):/src -P namer
```
* `namer` is the name of the image we will run.
* We don't specify a command to run because it is already set in the Dockerfile.

Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell).
---

We will then show one particular method in action, using ELK and Docker's logging drivers.
---
## A word of warning about `json-file`
- By default, log file size is unlimited.
- This means that a very verbose container *will* use up all your disk space.
(Or a less verbose container, but running for a very long time.)
- Log rotation can be enabled by setting a `max-size` option.
- Older log files can be removed by setting a `max-file` option.
- Just like other logging options, these can be set per container, or globally.
Example:
```bash
$ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch
```
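The same log options can also be set globally, as defaults for the engine, in the daemon configuration file (normally `/etc/docker/daemon.json`). A sketch; the engine must be restarted to pick up the change:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```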
---
## Demo: sending logs to ELK
- We are going to deploy an ELK stack.
- it is set with the `ELASTICSEARCH_URL` environment variable,
- by default it is `localhost:9200`, we change it to `elasticsearch:9200`.
- We need to configure Logstash:

---
# Reducing image size

* In the previous example, our final image contained:
* our `hello` program
---
## Can't we remove superfluous files with `RUN`?
What happens if we do one of the following commands?
- `RUN rm -rf ...`
- `RUN apt-get remove ...`
- `RUN make clean ...`
--
This adds a layer which removes a bunch of files.
But the previous layers (which added the files) still exist.
---
## Removing files with an extra layer
When downloading an image, all the layers must be downloaded.
| Dockerfile instruction | Layer size | Image size |
| ---------------------- | ---------- | ---------- |
| `FROM ubuntu` | Size of base image | Size of base image |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get install somepackage` | Size of files added <br/>(e.g. a few MB) | Sum of this layer <br/>+ all previous ones |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get remove somepackage` | Almost zero <br/>(just metadata) | Same as previous one |
Therefore, `RUN rm` does not reduce the size of the image or free up disk space.
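We can see this with a quick experiment; a sketch, where the file name and size are illustrative:

```dockerfile
# Both layers persist: the first adds ~100 MB of data, and the second
# only records the deletion in the metadata of a new (tiny) layer.
FROM ubuntu
RUN dd if=/dev/zero of=/bigfile bs=1M count=100
RUN rm /bigfile
```

Running `docker image history` on the resulting image would still show the ~100 MB layer, even though `/bigfile` is no longer visible in containers created from it.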
---
## Removing unnecessary files
Various techniques are available to obtain smaller images:
- collapsing layers,
- adding binaries that are built outside of the Dockerfile,
- squashing the final image,
- multi-stage builds.
Let's review them quickly.
---
## Collapsing layers
You will frequently see Dockerfiles like this:
```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
```
Or the (more readable) variant:
```dockerfile
FROM ubuntu
RUN apt-get update \
&& apt-get install xxx \
&& ... \
&& apt-get remove xxx \
&& ...
```
This `RUN` command gives us a single layer.
The files that are added, then removed in the same layer, do not grow the layer size.
---
## Collapsing layers: pros and cons
Pros:
- works on all versions of Docker
- doesn't require extra tools
Cons:
- not very readable
- some unnecessary files might still remain if the cleanup is not thorough
- that layer is expensive (slow to build)
---
## Building binaries outside of the Dockerfile
This results in a Dockerfile looking like this:
```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```
Of course, this implies that the file `xxx` exists in the build context.
That file has to exist before you can run `docker build`.
For instance, it can:
- exist in the code repository,
- be created by another tool (script, Makefile...),
- be created by another container image and extracted from the image.
See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).
---
## Building binaries outside: pros and cons
Pros:
- final image can be very small
Cons:
- requires an extra build tool
- we're back in dependency hell and "works on my machine"
Cons, if binary is added to code repository:
- breaks portability across different platforms
- grows repository size a lot if the binary is updated frequently
---
## Squashing the final image
The idea is to transform the final image into a single-layer image.
This can be done in (at least) two ways.
- Activate experimental features and squash the final image:
```bash
docker image build --squash ...
```
- Export/import the final image.
```bash
docker build -t temp-image .
docker run --entrypoint true --name temp-container temp-image
docker export temp-container | docker import - final-image
docker rm temp-container
docker rmi temp-image
```
---
## Squashing the image: pros and cons
Pros:
- single-layer images are smaller and faster to download
- removed files no longer take up storage and network resources
Cons:
- we still need to actively remove unnecessary files
- squash operation can take a lot of time (on big images)
- squash operation does not benefit from cache
<br/>
(even if we change just a tiny file, the whole image needs to be re-squashed)
---
## Multi-stage builds
Multi-stage builds allow us to have multiple *stages*.
Each stage is a separate image, and can copy files from previous stages.
We're going to see how they work in more detail.
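As a teaser, here is a minimal sketch (the file name, stage name, and base images are illustrative):

```dockerfile
# First stage: carries the whole C toolchain.
FROM gcc AS builder
WORKDIR /
COPY hello.c .
RUN gcc -o hello hello.c

# Second stage: starts from a fresh base image, and copies
# only the compiled binary out of the first stage.
FROM ubuntu
COPY --from=builder /hello /hello
CMD /hello
```

The toolchain from the first stage never ends up in the final image.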
---
# Multi-stage builds
## Multi-stage builds principles
* At any point in our `Dockerfile`, we can add a new `FROM` line.

---
class: extra-details, deep-dive
## Manipulating namespaces
- Namespaces are created with two methods:
---
class: extra-details, deep-dive
## Namespaces lifecycle
- When the last process of a namespace exits, the namespace is destroyed.
---
class: extra-details, deep-dive
## Namespaces can be used independently
- As mentioned in the previous slides:
- Also allows us to set the NIS domain.

(If you don't know what a NIS domain is, you don't have to worry about it!)
- If you're wondering: UTS = UNIX time sharing.
---
class: extra-details, deep-dive
## Creating our first namespace
Let's use `unshare` to create a new process that will have its own UTS namespace:
---
class: extra-details, deep-dive
## Demonstrating our uts namespace
In our new "container", check the hostname, change it, and check it:
---
class: extra-details, deep-dive
## Setting up a private `/tmp`
Create a new mount namespace:
---
class: extra-details, deep-dive
## PID namespace in action
Create a new PID namespace:
--
🤔 Why do we see all the processes?!?
---
class: extra-details, deep-dive
## PID namespaces and `/proc`
- Tools like `ps` rely on the `/proc` pseudo-filesystem.
---
class: extra-details, deep-dive
## PID namespaces, take 2
- This can be solved by mounting `/proc` in the namespace.
---
class: extra-details, deep-dive
## User namespace challenges
- UID needs to be mapped when passed between processes or kernel subsystems.
---
class: extra-details, deep-dive
## Cgroups v1 vs v2
- Cgroups v1 are available on all systems (and widely used).
---
class: extra-details, deep-dive
## Avoiding the OOM killer
- For some workloads (databases and stateful systems), killing
---
class: extra-details, deep-dive
## Overhead of the memory cgroup
- Each time a process grabs or releases a page, the kernel updates its counters.
---
class: extra-details, deep-dive
## Setting up a limit with the memory cgroup
Create a new memory cgroup:
```bash
$ sudo mkdir $CG
```
Limit it to approximately 100MB of memory usage:
```bash
$ sudo tee $CG/memory.memsw.limit_in_bytes <<< 100000000
```
Move the current process to that cgroup:
```bash
$ sudo tee $CG/tasks <<< $$
```
The current process *and all its future children* are now limited.
(Confused about `<<<`? Look at the next slide!)
---
class: extra-details, deep-dive
## What's `<<<`?
- This is a "here string". (It is a non-POSIX shell extension.)
- The following commands are equivalent:
```bash
foo <<< hello
```
```bash
echo hello | foo
```
```bash
foo <<EOF
hello
EOF
```
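A quick way to convince ourselves of that equivalence, in a plain shell (no root or cgroups needed); an illustrative sketch:

```shell
# All three forms feed the same data ("hello") on stdin.
a=$(cat <<< hello)
b=$(echo hello | cat)
c=$(cat <<EOF
hello
EOF
)
echo "$a $b $c"
```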
- Why did we use that?
---
class: extra-details, deep-dive
## Writing to cgroups pseudo-files requires root
Instead of:
```bash
sudo tee $CG/tasks <<< $$
```
We could have done:
```bash
sudo sh -c "echo $$ > $CG/tasks"
```
The following commands, however, would be invalid:
```bash
sudo echo $$ > $CG/tasks
```
```bash
sudo -i # (or su)
echo $$ > $CG/tasks
```
---
class: extra-details, deep-dive
## Testing the memory limit
Start the Python interpreter:
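The exact snippet used in the demo is not shown here; a sketch of the idea is to allocate memory in fixed-size chunks. Inside the 100 MB cgroup, the OOM killer would terminate the interpreter before the loop completes (the dreaded `Killed`); outside of it, the loop just finishes.

```python
# Sketch (not the exact demo snippet): grab ~10 MB per iteration.
x = []
for i in range(10):
    x.append("x" * (10 * 1024 * 1024))
    print((i + 1) * 10, "MB allocated")
```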
---

## CPU cgroup
- Allows us to set relative weights used by the scheduler.
- We cannot set CPU limits (like, "don't use more than 10% of CPU").
---
## Cpuset cgroup

---
## Kubernetes in action
.center[![Demo stamp](images/demo.jpg)]

---

```bash
docker login
```
.warning[When running Docker for Mac/Windows, or
Docker on a Linux workstation, it can (and will when
possible) integrate with your system's keyring to
store your credentials securely. However, on most Linux

---
# Limiting resources
- So far, we have used containers as convenient units of deployment.
- What happens when a container tries to use more resources than available?
(RAM, CPU, disk usage, disk and network I/O...)
- What happens when multiple containers compete for the same resource?
- Can we limit resources available to a container?
(Spoiler alert: yes!)
---
## Container processes are normal processes
- Containers are closer to "fancy processes" than to "lightweight VMs".
- A process running in a container is, in fact, a process running on the host.
- Let's look at the output of `ps` on a container host running 3 containers:
```
0 2662 0.2 0.3 /usr/bin/dockerd -H fd://
0 2766 0.1 0.1 \_ docker-containerd --config /var/run/docker/containe
0 23479 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23497 0.0 0.0 | \_ `nginx`: master process nginx -g daemon off;
101 23543 0.0 0.0 | \_ `nginx`: worker process
0 23565 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
102 23584 9.4 11.3 | \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2
0 23707 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23725 0.0 0.0 \_ `/bin/sh`
```
- The highlighted processes are containerized processes.
<br/>
(That host is running nginx, elasticsearch, and alpine.)
---
## By default: nothing changes
- What happens when a process uses too much memory on a Linux system?
--
- Simplified answer:
- swap is used (if available);
- if there is not enough swap space, eventually, the out-of-memory killer is invoked;
- the OOM killer uses heuristics to kill processes;
- sometimes, it kills an unrelated process.
--
- What happens when a container uses too much memory?
- The same thing!
(i.e., a process eventually gets killed, possibly in another container.)
---
## Limiting container resources
- The Linux kernel offers rich mechanisms to limit container resources.
- For memory usage, the mechanism is part of the *cgroup* subsystem.
- This subsystem allows us to limit the memory for a process or a group of processes.
- A container engine leverages these mechanisms to limit memory for a container.
- The out-of-memory killer has a new behavior:
- it runs when a container exceeds its allowed memory usage,
- in that case, it only kills processes in that container.
---
## Limiting memory in practice
- The Docker Engine offers multiple flags to limit memory usage.
- The two most useful ones are `--memory` and `--memory-swap`.
- `--memory` limits the amount of physical RAM used by a container.
- `--memory-swap` limits the total amount (RAM+swap) used by a container.
- The memory limit can be expressed in bytes, or with a unit suffix.
(e.g.: `--memory 100m` = 100 megabytes.)
- We will see two strategies: limiting RAM usage, or limiting both RAM and swap usage.
---
## Limiting RAM usage
Example:
```bash
docker run -ti --memory 100m python
```
If the container tries to use more than 100 MB of RAM, *and* swap is available:
- the container will not be killed,
- memory above 100 MB will be swapped out,
- in most cases, the app in the container will be slowed down (a lot).
If we run out of swap, the global OOM killer still intervenes.
---
## Limiting both RAM and swap usage
Example:
```bash
docker run -ti --memory 100m --memory-swap 100m python
```
If the container tries to use more than 100 MB of memory, it is killed.
On the other hand, the application will never be slowed down because of swap.
---
## When to pick which strategy?
- Stateful services (like databases) will lose or corrupt data when killed
- Allow them to use swap space, but monitor swap usage
- Stateless services can usually be killed with little impact
- Limit their mem+swap usage, but monitor if they get killed
- Ultimately, this is no different from "do I want swap, and how much?"
---
## Limiting CPU usage
- There are no less than 3 ways to limit CPU usage:
- setting a relative priority with `--cpu-shares`,
- setting a CPU% limit with `--cpus`,
- pinning a container to specific CPUs with `--cpuset-cpus`.
- They can be used separately or together.
---
## Setting relative priority
- Each container has a relative priority used by the Linux scheduler.
- By default, this priority is 1024.
- As long as CPU usage is not maxed out, this has no effect.
- When CPU usage is maxed out, each container receives CPU cycles in proportion of its relative priority.
- In other words: a container with `--cpu-shares 2048` will receive twice as many CPU cycles as a container with the default setting.
---
## Setting a CPU% limit
- This setting will make sure that a container doesn't use more than a given % of CPU.
- The value is expressed in CPUs; therefore:
`--cpus 0.1` means 10% of one CPU,
`--cpus 1.0` means 100% of one whole CPU,
`--cpus 10.0` means 10 entire CPUs.
---
## Pinning containers to CPUs
- On multi-core machines, it is possible to restrict the execution on a set of CPUs.
- Examples:
`--cpuset-cpus 0` forces the container to run on CPU 0;
`--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7;
`--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11.
- This will not reserve the corresponding CPUs!
(They might still be used by other containers, or uncontainerized processes.)
---
## Limiting disk usage
- Most storage drivers do not support limiting the disk usage of containers.
(With the exception of devicemapper, but the limit cannot be set easily.)
- This means that a single container could exhaust disk space for everyone.
- In practice, however, this is not a concern, because:
- data files (for stateful services) should reside on volumes,
- assets (e.g. images, user-generated content...) should reside on object stores or on volumes,
- logs are written on standard output and gathered by the container engine.
- Container disk usage can be audited with `docker ps -s` and `docker diff`.

---

Docker volumes can be used to achieve many things, including:
* Sharing a *single file* between the host and a container.
* Using remote storage and custom storage with "volume drivers".
---
## Volumes are special directories in a container
---
## Volumes exist independently of containers
If a container is stopped or removed, its volumes still exist and are available.
Volumes can be listed and manipulated with `docker volume` subcommands:
Let's start another container using the `webapps` volume.

```bash
$ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp
```

The `-w` flag sets the working directory inside the container. Vandalize the page, save, and exit.
Then run `curl localhost:1234` again to see your changes.
---
## Managing volumes explicitly
In some cases, you want a specific directory on the host to be mapped
inside the container:
---
* Newer containers can use `--volumes-from` too.
* Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes).
---
class: extra-details
```bash
$ docker run -d --name redis28 redis:2.8
```
Connect to the Redis container and set some data.
```bash
$ docker run -ti --link redis28:redis alpine:3.6 telnet redis 6379
```
Issue the following commands:
---
Connect to the Redis container and see our data.
```bash
docker run -ti --link redis30:redis alpine:3.6 telnet redis 6379
```
Issue a few commands.
---
You can install plugins to manage volumes backed by particular storage systems,
or providing extra features. For instance:
* [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g.
SAN or NAS), or by cloud block stores (e.g. EBS, EFS).
* [Portworx](http://portworx.com/) - provides distributed block store for containers.
* [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale
to several petabytes. It provides interfaces for object, block and file storage.
* and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)!
---

- This was initially written to support in-person, instructor-led workshops and tutorials
- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://@@GITREPO@@/graphs/contributors)
- You can also follow along on your own, at your own pace

---
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
- extra-details
chapters:
- common/title.md
- logistics.md
#- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- common/composescale.md
- common/composedown.md
- kube/concepts-k8s.md
# - common/declarative.md
- kube/declarative.md
# - kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- - kube/kubectlrun.md
- kube/kubectlexpose.md
- kube/ourapponkube.md
#- kube/kubectlproxy.md
- - kube/dashboard.md
- kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
# Stern is interesting but can be skipped
#- - kube/logs-cli.md
# Bridget hasn't added EFK yet
#- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
# Bridget-specific
# - kube/links-bridget.md
- common/thankyou.md

---
title: |
Introduction to Orchestration
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
#chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- - kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- - kube/kubectlrun.md
- kube/kubectlexpose.md
- kube/ourapponkube.md
- kube/kubectlproxy.md
- kube/dashboard.md
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
- common/thankyou.md

---
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- common/title.md
- logistics.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- common/composescale.md
- common/composedown.md
- kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- - kube/kubectlrun.md
- kube/kubectlexpose.md
- kube/ourapponkube.md
#- kube/kubectlproxy.md
- - kube/dashboard.md
- kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
# Bridget hasn't added EFK yet
#- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
# - kube/links.md
# Bridget-specific
- kube/links-bridget.md
- common/thankyou.md

---
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
chapters:
- kube/kubectlrun.md
- - kube/kubectlexpose.md
- kube/ourapponkube.md
- kube/kubectlproxy.md
- kube/dashboard.md
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- kube/logs-cli.md
- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md

---

`slides/kube.yml` is a symbolic link to `kube-fullday.yml`.

---
## Do we need to run Docker at all?
No!
--
- By default, Kubernetes uses the Docker Engine to run containers
- We can also use other container engines, through the *Container Runtime Interface*
(like CRI-O, or containerd)
---
## Do we need to run Docker at all?
Yes!
--
- In this workshop, we run our app on a single node first
- We will need to build images and ship them around
- We can do these things without Docker
<br/>
(and get diagnosed with NIH¹ syndrome)
- Docker is still the most stable container engine today
<br/>
(but other options are maturing very quickly)
.footnote[¹[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)]
---
## Do we need to run Docker at all?
- On our development environments, CI pipelines ... :
*Yes, almost certainly*
- On our production servers:
*Yes (today)*
*Probably not (in the future)*
.footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)]
---
- node (a machine — physical or virtual — in our cluster)
- pod (group of containers running together on a node)
- service (stable network endpoint to connect to one or multiple containers)
- namespace (more-or-less isolated group of things)
- secret (bundle of sensitive data to be passed to a container)
---
class: pic
![Node, pod, container](images/k8s-arch3-thanks-weave.png)
---
class: pic
![One of the best Kubernetes architecture diagrams available](images/k8s-arch4-thanks-luxas.png)
---
## Credits
- The first diagram is courtesy of Weave Works
- a *pod* can have multiple containers working together
- IP addresses are associated with *pods*, not with individual containers
- The second diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha)
- it's one of the best Kubernetes architecture diagrams available!
Both diagrams used with permission.

---
## Creating a daemon set
- Unfortunately, as of Kubernetes 1.10, the CLI cannot create daemon sets
--
--
- option 1: [read the docs](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset)
--
Wait ... Now, can it be *that* easy?
--
We have two resources called `rng`:
- the *deployment* that was existing before
--
- the *daemon set* that we just created
We also have one too many pods.
<br/>
(The pod corresponding to the *deployment* still exists.)
---
## `deploy/rng` and `ds/rng`
- You can have different resource types with the same name
(i.e. a *deployment* and a *daemon set* both named `rng`)
- We still have the old `rng` *deployment*
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/rng 1 1 1 1 18m
```
- But now we have the new `rng` *daemon set* as well
- If we look at the pods, we have:
```
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/rng 2 2 2 2 2 <none> 9s
```
- *one pod* for the deployment
- *one pod per node* for the daemon set
---
---

# Updating a service through labels and selectors

- What if we want to drop the `rng` deployment from the load balancer?

- Option 1:

  - destroy it

- Option 2:

  - add an extra *label* to the daemon set

  - update the service *selector* to refer to that *label*

--

Of course, option 2 offers more learning opportunities. Right?

---
## Add an extra label to the daemon set
- We will update the daemon set "spec"
- Option 1:
- edit the `rng.yml` file that we used earlier
- load the new definition with `kubectl apply`
- Option 2:
- use `kubectl edit`
--
*If you feel like you got this💕🌈, feel free to try directly.*
*We've included a few hints on the next slides for your convenience!*
---
## We've put resources in your resources
- Reminder: a daemon set is a resource that creates more resources!
- There is a difference between:
- the label(s) of a resource (in the `metadata` block in the beginning)
- the selector of a resource (in the `spec` block)
- the label(s) of the resource(s) created by the first resource (in the `template` block)
- You need to update the selector and the template (metadata labels are not mandatory)
- The template must match the selector
(i.e. the resource will refuse to create resources that it will not select)
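As a sketch (names and values are illustrative, not the exact `rng.yml`), here is where each of these three things lives in a daemon set manifest:

```yaml
# Illustrative daemon set skeleton
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:            # label(s) of the daemon set itself (not mandatory)
    run: rng
  name: rng
spec:
  selector:
    matchLabels:     # the selector: must match the template labels below
      run: rng
  template:
    metadata:
      labels:        # label(s) of the pods created by the daemon set
        run: rng
    spec:
      containers:
      - name: rng
        image: dockercoins/rng:v0.1   # hypothetical image reference
```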
---
## Adding our label
- Let's add a label `isactive: yes`
- In YAML, `yes` should be quoted; i.e. `isactive: "yes"`
.exercise[
- Update the daemon set to add `isactive: "yes"` to the selector and template label:
```bash
kubectl edit daemonset rng
```
- Update the service to add `isactive: "yes"` to its selector:
```bash
kubectl edit service rng
```
]
---
## Checking what we've done
.exercise[
- Check the logs of all `run=rng` pods to confirm that exactly one per node is now active:
```bash
kubectl logs -l run=rng
```
]
The timestamps should give us a hint about how many pods are currently receiving traffic.
.exercise[
- Look at the pods that we have right now:
```bash
kubectl get pods
```
]
---
## Cleaning up
- The pods of the "old" daemon set are still running
- We are going to identify them programmatically
.exercise[
- List the pods with `run=rng` but without `isactive=yes`:
```bash
kubectl get pods -l run=rng,isactive!=yes
```
- Remove these pods:
```bash
kubectl get pods -l run=rng,isactive!=yes -o name |
xargs kubectl delete
```
]
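The selector `run=rng,isactive!=yes` is an AND of both conditions. As a cluster-free sketch of that filtering logic (fake pod data, illustrative names — not what `kubectl` does internally):

```bash
# Emulate "kubectl get pods -l run=rng,isactive!=yes" on a fake pod table
# Columns: NAME RUN-LABEL ISACTIVE-LABEL ("-" means the label is absent)
printf '%s\n' \
  'rng-vplmj rng yes' \
  'rng-xbpvg rng -' \
  'web-abcde web yes' |
awk '$2 == "rng" && $3 != "yes" { print $1 }'
# prints: rng-xbpvg
```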
---
## Avoiding extra pods
- When we changed the definition of the daemon set, it immediately created new pods
- How could we have avoided this?
--
- By adding the `isactive: "yes"` label to the pods before changing the daemon set!
- This can be done programmatically with `kubectl patch`:
```bash
PATCH='
metadata:
labels:
isactive: "yes"
'
kubectl get pods -l run=rng -o name |
xargs kubectl patch -p "$PATCH"
```
---
## Labels and debugging
- When a pod is misbehaving, we can delete it: another one will be recreated
- But we can also change its labels
- It will be removed from the load balancer (it won't receive traffic anymore)
- Another pod will be recreated immediately
- But the problematic pod is still here, and we can inspect and debug it
- We can even re-add it to the rotation if necessary
(Very useful to troubleshoot intermittent and elusive bugs)
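For instance (a sketch — the label value is hypothetical), editing the pod's labels so they no longer match the service selector:

```yaml
# Before: the pod matches the service selector (run=rng) and receives traffic
metadata:
  labels:
    run: rng
---
# After: same pod, no longer selected; still running and available for debugging
metadata:
  labels:
    run: rng-debug
```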
---
## Labels and advanced rollout control
- Conversely, we can add pods matching a service's selector
- These pods will then receive requests and serve traffic
- Examples:
- one-shot pod with all debug flags enabled, to collect logs
- pods created automatically, but added to rotation in a second step
<br/>
(by setting their label accordingly)
- This gives us building blocks for canary and blue/green deployments
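A sketch of the idea (label names and values are illustrative): the service selects on an extra label, and flipping that label on a pod adds it to, or removes it from, the rotation:

```yaml
# Sketch: a service selector used as a rollout control knob
apiVersion: v1
kind: Service
metadata:
  name: rng
spec:
  selector:
    run: rng
    isactive: "yes"   # only pods carrying this label serve traffic
  ports:
  - port: 80
```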


@@ -10,6 +10,9 @@
3) bypass authentication for the dashboard
--
There is an additional step to make the dashboard available from outside (we'll get to that)
--
@@ -145,6 +148,58 @@ The dashboard will then ask you which authentication you want to use.
---
## Exposing the dashboard over HTTPS
- We took a shortcut by forwarding HTTP to HTTPS inside the cluster
- Let's expose the dashboard over HTTPS!
- The dashboard is exposed through a `ClusterIP` service (internal traffic only)
- We will change that into a `NodePort` service (accepting outside traffic)
.exercise[
- Edit the service:
```bash
kubectl edit service kubernetes-dashboard
```
]
--
`NotFound`?!? Y U NO WORK?!?
---
## Editing the `kubernetes-dashboard` service
- If we look at the [YAML](https://goo.gl/Qamqab) that we loaded before, we'll get a hint
--
- The dashboard was created in the `kube-system` namespace
--
.exercise[
- Edit the service:
```bash
kubectl -n kube-system edit service kubernetes-dashboard
```
- Change `ClusterIP` to `NodePort`, save, and exit
- Check the port that was assigned with `kubectl -n kube-system get services`
- Connect to https://oneofournodes:3xxxx/ (yes, https)
]
---
## Running the Kubernetes dashboard securely
- The steps that we just showed you are *for educational purposes only!*
@@ -201,9 +256,9 @@ The dashboard will then ask you which authentication you want to use.
- It's safe if you use HTTPS URLs from trusted sources
- Example: the official setup instructions for most pod networks
--
- It introduces new failure modes (like if you try to apply yaml from a link that's no longer valid)
- It introduces new failure modes
- Example: the official setup instructions for most pod networks


@@ -48,11 +48,6 @@
helm init
```
- Add the `helm` completion:
```bash
. <(helm completion $(basename $SHELL))
```
]
---


@@ -3,7 +3,7 @@
- This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person,
instructor-led workshops and tutorials
- Credit is also due to [multiple contributors](https://@@GITREPO@@/graphs/contributors) — thank you!
- Credit is also due to [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) — thank you!
- You can also follow along on your own, at your own pace


@@ -123,7 +123,7 @@ Note: please DO NOT call the service `search`. It would collide with the TLD.
.exercise[
- Let's obtain the IP address that was allocated for our service, *programmatically:*
- Let's obtain the IP address that was allocated for our service, *programatically:*
```bash
IP=$(kubectl get svc elastic -o go-template --template '{{ .spec.clusterIP }}')
```


@@ -1,5 +1,3 @@
class: extra-details
# First contact with `kubectl`
- `kubectl` is (almost) the only tool we'll need to talk to Kubernetes
@@ -81,8 +79,6 @@ class: extra-details
---
class: extra-details
## What's available?
- `kubectl` has pretty good introspection facilities


@@ -1,117 +0,0 @@
# Accessing internal services with `kubectl proxy`
- `kubectl proxy` runs a proxy in the foreground
- This proxy lets us access the Kubernetes API without authentication
(`kubectl proxy` adds our credentials on the fly to the requests)
- This proxy lets us access the Kubernetes API over plain HTTP
- This is a great tool to learn and experiment with the Kubernetes API
- The Kubernetes API also gives us a proxy to HTTP and HTTPS services
- Therefore, we can use `kubectl proxy` to access internal services
(Without using a `NodePort` or similar service)
---
## Secure by default
- By default, the proxy listens on port 8001
(But this can be changed, or we can tell `kubectl proxy` to pick a port)
- By default, the proxy binds to `127.0.0.1`
(Making it unreachable from other machines, for security reasons)
- By default, the proxy only accepts connections from:
`^localhost$,^127\.0\.0\.1$,^\[::1\]$`
- This is great when running `kubectl proxy` locally
- Not-so-great when running it on a remote machine
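As a sketch of how those comma-separated patterns behave (here joined with `|` so plain `grep -E` can evaluate them — `kubectl` itself splits the list on commas):

```bash
# Which origins pass the default accept-hosts patterns? (illustration only)
regex='^localhost$|^127\.0\.0\.1$|^\[::1\]$'
for host in localhost 127.0.0.1 '[::1]' example.com; do
  if printf '%s' "$host" | grep -Eq "$regex"; then
    echo "$host: accepted"
  else
    echo "$host: rejected"
  fi
done
# prints: localhost, 127.0.0.1, and [::1] accepted; example.com rejected
```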
---
## Running `kubectl proxy` on a remote machine
- We are going to bind to `INADDR_ANY` instead of `127.0.0.1`
- We are going to accept connections from any address
.exercise[
- Run an open proxy to the Kubernetes API:
```bash
kubectl proxy --port=8888 --address=0.0.0.0 --accept-hosts=.*
```
]
.warning[Anyone can now do whatever they want with our Kubernetes cluster!
<br/>
(Don't do this on a real cluster!)]
---
## Viewing available API routes
- The default route (i.e. `/`) shows a list of available API endpoints
.exercise[
- Point your browser to the IP address of the node running `kubectl proxy`, port 8888
]
The result should look like this:
```json
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
```
---
## Connecting to a service through the proxy
- The API can proxy HTTP and HTTPS requests by accessing a special route:
```
/api/v1/namespaces/`name_of_namespace`/services/`name_of_service`/proxy
```
- Since we now have access to the API, we can use this special route
.exercise[
- Access the `hasher` service through the special proxy route:
```open
http://`X.X.X.X`:8888/api/v1/namespaces/default/services/hasher/proxy
```
]
You should see the banner of the hasher service: `HASHER running on ...`
---
## Stopping the proxy
- Remember: as it is running right now, `kubectl proxy` gives open access to our cluster
.exercise[
- Stop the `kubectl proxy` process with Ctrl-C
]


@@ -20,10 +20,9 @@
.exercise[
- Let's ping `1.1.1.1`, Cloudflare's
[public DNS resolver](https://blog.cloudflare.com/announcing-1111/):
- Let's ping `goo.gl`:
```bash
kubectl run pingpong --image alpine ping 1.1.1.1
kubectl run pingpong --image alpine ping goo.gl
```
]
@@ -42,7 +41,8 @@ OK, what just happened?
- List most resource types:
```bash
kubectl get all
kubectl get all # This was broken in Kubernetes 1.10, so ...
kubectl get all -o custom-columns=KIND:.kind,NAME:.metadata.name
```
]
@@ -50,11 +50,9 @@ OK, what just happened?
--
We should see the following things:
- `deployment.apps/pingpong` (the *deployment* that we just created)
- `replicaset.apps/pingpong-xxxxxxxxxx` (a *replica set* created by the deployment)
- `pod/pingpong-xxxxxxxxxx-yyyyy` (a *pod* created by the replica set)
Note: as of 1.10.1, resource types are displayed in more detail.
- A `Deployment` named `pingpong` (the thing that we just created)
- A `ReplicaSet` named `pingpong-xxxx` (created by the deployment)
- A `Pod` named `pingpong-yyyy` (created by the replica set)
---
@@ -81,34 +79,21 @@ Note: as of 1.10.1, resource types are displayed in more detail.
---
class: extra-details
## Our `pingpong` deployment
- `kubectl run` created a *deployment*, `deployment.apps/pingpong`
- `kubectl run` created a *deployment*, `deploy/pingpong`
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/pingpong 1 1 1 1 10m
```
- That deployment created a *replica set*, `rs/pingpong-xxxx`
- That deployment created a *replica set*, `replicaset.apps/pingpong-xxxxxxxxxx`
```
NAME DESIRED CURRENT READY AGE
replicaset.apps/pingpong-7c8bbcd9bc 1 1 1 10m
```
- That replica set created a *pod*, `pod/pingpong-xxxxxxxxxx-yyyyy`
```
NAME READY STATUS RESTARTS AGE
pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
```
- That replica set created a *pod*, `po/pingpong-yyyy`
- We'll see later how these folks play together for:
- scaling, high availability, rolling updates
- scaling
- high availability
- rolling updates
---
@@ -135,8 +120,6 @@ pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
---
class: extra-details
## Streaming logs in real time
- Just like `docker logs`, `kubectl logs` supports convenient options:
@@ -176,7 +159,7 @@ class: extra-details
]
Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?
Note: what if we tried to scale `rs/pingpong-xxxx`?
We could! But the *deployment* would notice it right away, and scale back to the initial level.
@@ -204,7 +187,7 @@ We could! But the *deployment* would notice it right away, and scale back to the
- Destroy a pod:
```bash
kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
kubectl delete pod pingpong-yyyy
```
]
@@ -227,8 +210,6 @@ We could! But the *deployment* would notice it right away, and scale back to the
---
class: extra-details
## Viewing logs of multiple pods
- When we specify a deployment name, only one single pod's logs are shown
@@ -252,17 +233,15 @@ Unfortunately, `--follow` cannot (yet) be used to stream the logs from multiple
---
class: extra-details
class: title
## Aren't we flooding 1.1.1.1?
- If you're wondering this, good question!
- Don't worry, though:
*APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.*
(Source: https://blog.cloudflare.com/announcing-1111/)
- It's very unlikely that our concerted pings manage to produce
even a modest blip at Cloudflare's NOC!
Meanwhile,
<br/>
at the Google NOC ...
<br/>
<br/>
.small[“Why the hell]
<br/>
.small[are we getting 1000 packets per second]
<br/>
.small[of ICMP ECHO traffic from these IPs?!?”]


@@ -63,7 +63,7 @@
## Kubernetes network model: in practice
- The nodes that we are using have been set up to use [Weave](https://github.com/weaveworks/weave)
- The nodes that we are using have been set up to use Weave
- We don't endorse Weave in any particular way; it just Works For Us


@@ -1,10 +1,19 @@
# Links and resources
All things Kubernetes:
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
- [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/)
All things Docker:
- [Cloud Developer Advocates](https://developer.microsoft.com/advocates/)
- [Docker documentation](http://docs.docker.com/)
- [Docker Hub](https://hub.docker.com)
- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker)
- [Play With Docker Hands-On Labs](http://training.play-with-docker.com/)
Everything else:
- [Local meetups](https://www.meetup.com/)


@@ -40,12 +40,7 @@
## Creating namespaces
- Creating a namespace is done with the `kubectl create namespace` command:
```bash
kubectl create namespace blue
```
- We can also get fancy and use a very minimal YAML snippet, e.g.:
- We can create namespaces with a very minimal YAML, e.g.:
```bash
kubectl apply -f- <<EOF
apiVersion: v1
@@ -55,8 +50,6 @@
EOF
```
- The two methods above are identical
- If we are using a tool like Helm, it will create namespaces automatically
---


@@ -4,8 +4,6 @@ Our app on Kube
---
class: extra-details
## What's on the menu?
In this part, we will:
@@ -132,8 +130,6 @@ We should see:
---
class: extra-details
## Testing our local registry
- We can retag a small image, and push it to the registry
@@ -155,8 +151,6 @@ class: extra-details
---
class: extra-details
## Checking again what's on our local registry
- Let's use the same endpoint as before


@@ -33,23 +33,6 @@
---
## Checking current rollout parameters
- Recall how we build custom reports with `kubectl` and `jq`:
.exercise[
- Show the rollout plan for our deployments:
```bash
kubectl get deploy -o json |
jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate"
```
]
---
## Rolling updates in practice
- As of Kubernetes 1.8, we can do rolling updates with:
@@ -127,13 +110,11 @@ That rollout should be pretty quick. What shows in the web UI?
- Kubernetes sends a "polite" shutdown request to the worker, which ignores it
- After a grace period, Kubernetes gets impatient and kills the container
(The grace period is 30 seconds, but [can be changed](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) if needed)
- Eventually, Kubernetes gets impatient and kills the container
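The grace period can be tuned per pod template; a minimal sketch (the 60-second value is illustrative):

```yaml
# Sketch: raising the termination grace period for slow-to-stop workers
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 60
```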
---
## Rolling out something invalid
## Rolling out a boo-boo
- What happens if we make a mistake?
@@ -158,66 +139,6 @@ Our rollout is stuck. However, the app is not dead (just 10% slower).
---
## What's going on with our rollout?
- Why is our app 10% slower?
- Because `MaxUnavailable=1`, so the rollout terminated 1 replica out of 10 available
- Okay, but why do we see 2 new replicas being rolled out?
- Because `MaxSurge=1`, so in addition to replacing the terminated one, the rollout is also starting one more
---
class: extra-details
## The nitty-gritty details
- We start with 10 pods running for the `worker` deployment
- Current settings: MaxUnavailable=1 and MaxSurge=1
- When we start the rollout:
- one replica is taken down (as per MaxUnavailable=1)
- another is created (with the new version) to replace it
- another is created (with the new version) per MaxSurge=1
- Now we have 9 replicas up and running, and 2 being deployed
- Our rollout is stuck at this point!
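The arithmetic behind "9 running, 2 deploying" can be sketched as follows (just the bounds implied by the parameters, not the real controller logic):

```bash
# Bounds implied by the rollout parameters above
replicas=10
maxUnavailable=1
maxSurge=1
echo "ready pods never drop below $((replicas - maxUnavailable))"
echo "total pods never exceed $((replicas + maxSurge))"
# prints: "ready pods never drop below 9" and "total pods never exceed 11"
```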
---
## Checking the dashboard during the bad rollout
.exercise[
- Check which port the dashboard is on:
```bash
kubectl -n kube-system get svc socat
```
]
Note the `3xxxx` port.
.exercise[
- Connect to http://oneofournodes:3xxxx/
<!-- ```open https://node1:3xxxx/``` -->
]
--
- We have failures in Deployments, Pods, and Replica Sets
---
## Recovering from a bad rollout
- We could push some `v0.3` image
@@ -299,8 +220,6 @@ spec:
minReadySeconds: 10
"
kubectl rollout status deployment worker
kubectl get deploy -o json worker |
jq "{name:.metadata.name} + .spec.strategy.rollingUpdate"
```
]


@@ -20,10 +20,27 @@
6. Copy the configuration file generated by `kubeadm init`
- Check the [prepare VMs README](https://@@GITREPO@@/blob/master/prepare-vms/README.md) for more details
- Check the [prepare VMs README](https://github.com/jpetazzo/container.training/blob/master/prepare-vms/README.md) for more details
---
## `kubeadm` drawbacks
- Doesn't set up Docker or any other container engine
- Doesn't set up the overlay network
- Scripting is complex
<br/>
(because extracting the token requires advanced `kubectl` commands)
- Doesn't set up multi-master (no high availability)
--
- "It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme
---
## Other deployment options
@@ -48,23 +65,4 @@
Probably the closest to a multi-cloud/hybrid solution so far, but in development
---
## Even more deployment options
- If you like Ansible:
[kubespray](https://github.com/kubernetes-incubator/kubespray)
- If you like Terraform:
[typhoon](https://github.com/poseidon/typhoon/)
- You can also learn how to install every component manually, with
the excellent tutorial [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way)
*Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.*
- There are also many commercial options available!
- For a longer list, check the Kubernetes documentation:
<br/>
it has a great guide to [pick the right solution](https://kubernetes.io/docs/setup/pick-right-solution/) to set up Kubernetes.
- Also, many commercial options!


@@ -1,8 +1,8 @@
## Versions installed
- Kubernetes 1.11.0
- Docker Engine 18.03.1-ce
- Docker Compose 1.21.1
- Kubernetes 1.10.0
- Docker Engine 18.03.0-ce
- Docker Compose 1.20.1
.exercise[
@@ -22,7 +22,7 @@ class: extra-details
## Kubernetes and Docker compatibility
- Kubernetes 1.10.x only validates Docker Engine versions [1.11.2 to 1.13.1 and 17.03.x](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies)
- Kubernetes 1.10 only validates Docker Engine versions [1.11.2 to 1.13.1 and 17.03.x](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies)
--


@@ -1,3 +1,45 @@
# Next steps
*Alright, how do I get started and containerize my apps?*
--
Suggested containerization checklist:
.checklist[
- write a Dockerfile for one service in one app
- write Dockerfiles for the other (buildable) services
- write a Compose file for that whole app
- make sure that devs are empowered to run the app in containers
- set up automated builds of container images from the code repo
- set up a CI pipeline using these container images
- set up a CD pipeline (for staging/QA) using these images
]
And *then* it is time to look at orchestration!
---
## Namespaces
- Namespaces let you run multiple identical stacks side by side
- Two namespaces (e.g. `blue` and `green`) can each have their own `redis` service
- Each of the two `redis` services has its own `ClusterIP`
- `kube-dns` creates two entries, mapping to these two `ClusterIP` addresses:
`redis.blue.svc.cluster.local` and `redis.green.svc.cluster.local`
- Pods in the `blue` namespace get a *search suffix* of `blue.svc.cluster.local`
- As a result, resolving `redis` from a pod in the `blue` namespace yields the "local" `redis`
.warning[This does not provide *isolation*! That would be the job of network policies.]
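A sketch of the resolver configuration inside a pod of the `blue` namespace (the nameserver IP is illustrative; it points at the cluster DNS service):

```
# /etc/resolv.conf inside a pod of the "blue" namespace (sketch)
nameserver 10.96.0.10
search blue.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```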
---
## Stateful services (databases etc.)
- As a first step, it is wiser to keep stateful services *outside* of the cluster
@@ -32,6 +74,12 @@
---
## Stateful services (demo!)
.center[![Demo](images/demo.jpg)]
---
## HTTP traffic handling
- *Services* are layer 4 constructs
@@ -51,6 +99,12 @@
---
## Ingress with Træfik (demo!)
.center[![Demo](images/demo.jpg)]
---
## Logging and metrics
- Logging is delegated to the container engine
@@ -130,3 +184,17 @@ Sorry Star Trek fans, this is not the federation you're looking for!
- Synchronize resources across clusters
- Discover resources across clusters
---
## Developer experience
*I've put this last, but it's pretty important!*
- How do you on-board a new developer?
- What do they need to install to get a dev stack?
- How does a code change make it from dev to prod?
- How does someone add a component to a stack?


@@ -1,14 +1,19 @@
## Intros
- Hello! We are:
- Hello! We are:
- .emoji[] Ashley ([@ashleymcnamara](https://twitter.com/ashleymcnamara))
- .emoji[] Jérémy ([@jeremygarrouste](https://twitter.com/jeremygarrouste), Inpiwee)
- .emoji[🌟] Brian ([@bketelsen](https://twitter.com/bketelsen))
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- The workshop will run from 13:30-15:00
- The training will run from 9:15 to 17:00
- There will be a lunch break at 12:30
(And coffee breaks!)
- Feel free to interrupt for questions at any time
- *Especially when you see full screen container pictures!*
- Live feedback, questions, help: @@CHAT@@


@@ -114,8 +114,6 @@ def generatefromyaml(manifest, filename):
html = html.replace("@@MARKDOWN@@", markdown)
html = html.replace("@@EXCLUDE@@", exclude)
html = html.replace("@@CHAT@@", manifest["chat"])
html = html.replace("@@GITREPO@@", manifest["gitrepo"])
html = html.replace("@@SLIDES@@", manifest["slides"])
html = html.replace("@@TITLE@@", manifest["title"].replace("\n", " "))
return html


@@ -1,3 +1,2 @@
# This is for netlify
PyYAML
jinja2


@@ -1,61 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
- snap
- btp-auto
- benchmarking
- elk-manual
- prom-manual
chapters:
- common/title.md
- logistics.md
- swarm/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- swarm/versions.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/compose2swarm.md
- swarm/updatingservices.md
#- swarm/rollingupdates.md
- swarm/healthchecks.md
- - swarm/operatingswarm.md
- swarm/netshoot.md
- swarm/ipsec.md
- swarm/swarmtools.md
- swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- - swarm/logging.md
- swarm/metrics.md
- swarm/stateful.md
- swarm/extratips.md
- common/thankyou.md
- swarm/links.md


@@ -1,61 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
- snap
- btp-manual
- benchmarking
- elk-manual
- prom-manual
chapters:
- common/title.md
- logistics.md
- swarm/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- swarm/versions.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
#- swarm/hostingregistry.md
#- swarm/testingregistry.md
#- swarm/btp-manual.md
#- swarm/swarmready.md
- swarm/compose2swarm.md
- swarm/updatingservices.md
#- swarm/rollingupdates.md
#- swarm/healthchecks.md
- - swarm/operatingswarm.md
#- swarm/netshoot.md
#- swarm/ipsec.md
#- swarm/swarmtools.md
- swarm/security.md
#- swarm/secrets.md
#- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- swarm/logging.md
- swarm/metrics.md
#- swarm/stateful.md
#- swarm/extratips.md
- common/thankyou.md
- swarm/links.md


@@ -1,70 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
- btp-auto
chapters:
- common/title.md
#- common/logistics.md
- swarm/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- swarm/versions.md
- |
name: part-1
class: title, self-paced
Part 1
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/compose2swarm.md
- |
name: part-2
class: title, self-paced
Part 2
- - swarm/operatingswarm.md
- swarm/netshoot.md
- swarm/swarmnbt.md
- swarm/ipsec.md
- swarm/updatingservices.md
- swarm/rollingupdates.md
- swarm/healthchecks.md
- swarm/nodeinfo.md
- swarm/swarmtools.md
- - swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- swarm/logging.md
- swarm/metrics.md
- swarm/stateful.md
- swarm/extratips.md
- common/thankyou.md
- swarm/links.md


@@ -1,70 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
- btp-auto
chapters:
- common/title.md
#- common/logistics.md
- swarm/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- swarm/versions.md
- |
name: part-1
class: title, self-paced
Part 1
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- swarm/swarmkit.md
- common/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/compose2swarm.md
- |
name: part-2
class: title, self-paced
Part 2
- - swarm/operatingswarm.md
#- swarm/netshoot.md
#- swarm/swarmnbt.md
- swarm/ipsec.md
- swarm/updatingservices.md
- swarm/rollingupdates.md
#- swarm/healthchecks.md
- swarm/nodeinfo.md
- swarm/swarmtools.md
- - swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
#- swarm/logging.md
#- swarm/metrics.md
- swarm/stateful.md
- swarm/extratips.md
- common/thankyou.md
- swarm/links.md


@@ -141,7 +141,7 @@ It alters the code path for `docker run`, so it is allowed only under strict cir
- Update `webui` so that we can connect to it from outside:
```bash
docker service update webui --publish-add 8000:80
docker service update webui --publish-add 8000:80 --detach=false
```
]
@@ -197,7 +197,7 @@ It has been replaced by the new version, with port 80 accessible from outside.
- Bring up more workers:
```bash
docker service update worker --replicas 10
docker service update worker --replicas 10 --detach=false
```
- Check the result in the web UI
@@ -235,7 +235,7 @@ You should see the performance peaking at 10 hashes/s (like before).
- Re-create the `rng` service with *global scheduling*:
```bash
docker service create --name rng --network dockercoins --mode global \
$REGISTRY/rng:$TAG
--detach=false $REGISTRY/rng:$TAG
```
- Look at the result in the web UI
@@ -258,12 +258,14 @@ class: extra-details
- This might change in the future (after all, it was possible in 1.12 RC!)
- As of Docker Engine 18.03, other parameters requiring to `rm`/`create` the service are:
- As of Docker Engine 17.05, other parameters requiring to `rm`/`create` the service are:
- service name
- hostname
- network
---
## Removing everything


@@ -114,7 +114,7 @@ services:
- Deploy our local registry:
```bash
docker stack deploy --compose-file registry.yml registry
docker stack deploy registry --compose-file registry.yml
```
]
@@ -304,7 +304,7 @@ services:
- Create the application stack:
```bash
docker stack deploy --compose-file dockercoins.yml dockercoins
docker stack deploy dockercoins --compose-file dockercoins.yml
```
]

View File

@@ -49,7 +49,7 @@ This will display the unlock key. Copy-paste it somewhere safe.
]
Note: if you are doing the workshop on your own, using nodes
that you [provisioned yourself](https://@@GITREPO@@/tree/master/prepare-machine) or with [Play-With-Docker](http://play-with-docker.com/), you might have to use a different method to restart the Engine.
that you [provisioned yourself](https://github.com/jpetazzo/container.training/tree/master/prepare-machine) or with [Play-With-Docker](http://play-with-docker.com/), you might have to use a different method to restart the Engine.
---


@@ -77,7 +77,7 @@ More resources on this topic:
- It won't be scheduled automatically when constraints are satisfiable again
- You will have to update the service; you can do a no-op update with:
- You will have to update the service; you can do a no-op udate with:
```bash
docker service update ... --force
```


@@ -20,6 +20,23 @@
---
class: extra-details
## `--detach` for service creation
(New in Docker Engine 17.05)
If you are running Docker 17.05 to 17.09, you will see the following message:
```
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
```
You can ignore that for now; but we'll come back to it in just a few minutes!
---
## Checking service logs
(New in Docker Engine 17.05)
@@ -45,6 +62,20 @@ Note: by default, when a container is destroyed (e.g. when scaling down), its lo
class: extra-details
## Before Docker Engine 17.05
- Docker 1.13/17.03/17.04 have `docker service logs` as an experimental feature
<br/>(available only when enabling the experimental feature flag)
- We have to use `docker logs`, which only works on local containers
- We will have to connect to the node running our container
<br/>(unless it was scheduled locally, of course)
---
class: extra-details
## Looking up where our container is running
- The `docker service ps` command told us where our container was scheduled
@@ -96,7 +127,7 @@ class: extra-details
- Scale the service to ensure 2 copies per node:
```bash
docker service update pingpong --replicas 6
docker service update pingpong --replicas 10
```
- Check that we have two containers on the current node:
@@ -110,16 +141,15 @@ class: extra-details
## Monitoring deployment progress with `--detach`
(New in Docker Engine 17.10)
(New in Docker Engine 17.05)
- The CLI monitors commands that create/update/delete services
- The CLI can monitor commands that create/update/delete services
- In effect, `--detach=false` is the default
- `--detach=false`
- synchronous operation
- the CLI will monitor and display the progress of our request
- it exits only when the operation is complete
- Ctrl-C to detach at any time
- `--detach=true`
@@ -168,12 +198,12 @@ class: extra-details
- Scale the service to ensure 3 copies per node:
```bash
docker service update pingpong --replicas 9 --detach=false
docker service update pingpong --replicas 15 --detach=false
```
- And then to 4 copies per node:
```bash
docker service update pingpong --replicas 12 --detach=true
docker service update pingpong --replicas 20 --detach=true
```
]
@@ -207,7 +237,7 @@ class: extra-details
- Create an ElasticSearch service (and give it a name while we're at it):
```bash
docker service create --name search --publish 9200:9200 --replicas 5 \
docker service create --name search --publish 9200:9200 --replicas 7 \
elasticsearch`:2`
```
@@ -237,7 +267,7 @@ The latest version of the ElasticSearch image won't start without mandatory conf
---
class: extra-details, pic
class: extra-details
![diagram showing what happens during docker service create, courtesy of @aluzzardi](images/docker-service-create.svg)
@@ -291,10 +321,10 @@ apk add --no-cache jq
## Load balancing results
Traffic is handled by our cluster's [routing mesh](
Traffic is handled by our cluster's [TCP routing mesh](
https://docs.docker.com/engine/swarm/ingress/).
Each request is served by one of the instances, in rotation.
Each request is served by one of the 7 instances, in rotation.
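One way to observe the rotation (a sketch, assuming the `search` service above is published on port 9200 and `jq` is installed as shown earlier in this section): query the published port a few times and tally which instance answered.

```bash
# Helper: tally identical lines on stdin, most frequent first.
count_hits() { sort | uniq -c | sort -rn; }

# Query the routing mesh repeatedly; each ElasticSearch response includes
# the name of the instance that served it. Guarded so the sketch is a
# harmless no-op when the service isn't reachable or jq isn't installed.
if command -v jq >/dev/null 2>&1 && curl -s --max-time 2 localhost:9200 >/dev/null 2>&1; then
  for i in $(seq 1 8); do
    curl -s localhost:9200 | jq -r .name
  done | count_hits
fi
```

With round-robin load balancing, you would expect the hit counts to be spread roughly evenly across the instances.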
Note: if you try to access the service from your browser,
you will probably see the same
@@ -303,13 +333,7 @@ to re-use the same connection.
---
class: pic
![routing mesh](images/ingress-routing-mesh.png)
---
## Under the hood of the routing mesh
## Under the hood of the TCP routing mesh
- Load balancing is done by IPVS
@@ -328,9 +352,9 @@ class: pic
There are many ways to deal with inbound traffic on a Swarm cluster.
- Put all (or a subset) of your nodes in a DNS `A` record (good for web clients)
- Put all (or a subset) of your nodes in a DNS `A` record
- Assign your nodes (or a subset) to an external load balancer (ELB, etc.)
- Assign your nodes (or a subset) to an ELB
- Use a virtual IP and make sure that it is assigned to an "alive" node
@@ -338,37 +362,22 @@ There are many ways to deal with inbound traffic on a Swarm cluster.
---
class: pic
![external LB](images/ingress-lb.png)
---
class: btw-labels
## Managing HTTP traffic
- The TCP routing mesh doesn't parse HTTP headers
- If you want to place multiple HTTP services on port 80/443, you need something more
- If you want to place multiple HTTP services on port 80, you need something more
- You can set up NGINX or HAProxy on port 80/443 to route connections to the correct
  service, but they need to be "Swarm aware" to dynamically update configs
- You can set up NGINX or HAProxy on port 80 to do the virtual host switching
--
- Docker Universal Control Plane provides its own [HTTP routing mesh](
https://docs.docker.com/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services/)
- Docker EE provides its own [Layer 7 routing](https://docs.docker.com/ee/ucp/interlock/)
- add a specific label starting with `com.docker.ucp.mesh.http` to your services
- Service labels like `com.docker.lb.hosts=<FQDN>` are detected automatically via Docker
API and dynamically update the configuration
--
- Two common open source options:
- [Traefik](https://traefik.io/) - popular, many features, requires running on managers,
needs key/value for HA
- [Docker Flow Proxy](http://proxy.dockerflow.com/) - uses HAProxy, made for
Swarm by Docker Captain [@vfarcic](https://twitter.com/vfarcic)
- labels are detected automatically and dynamically update the configuration
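As an illustration of the label-driven approach, creating a service with Traefik-v1-style routing labels might look like this (a dry-run sketch; the service name, hostname, port, and image are all hypothetical):

```bash
# Dry run (note the leading "echo"): create a service carrying labels that
# a Swarm-aware proxy like Traefik v1 would discover through the Docker API.
# All names here are hypothetical placeholders.
echo docker service create --name web \
  --label traefik.frontend.rule=Host:web.example.com \
  --label traefik.port=8080 \
  myapp
```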
---
@@ -386,7 +395,7 @@ class: btw-labels
- owner of a service (for billing, paging...)
- correlate Swarm objects together (services, volumes, configs, secrets, etc.)
- etc.
---
@@ -439,10 +448,16 @@ class: extra-details
.exercise[
- Run this simple-yet-beautiful visualization app:
- Get the source code of this simple-yet-beautiful visualization app:
```bash
cd ~/container.training/stacks
docker-compose -f visualizer.yml up -d
cd ~
git clone git://github.com/dockersamples/docker-swarm-visualizer
```
- Build and run the Swarm visualizer:
```bash
cd docker-swarm-visualizer
docker-compose up -d
```
<!-- ```longwait Creating dockerswarmvisualizer_viz_1``` -->
@@ -483,7 +498,7 @@ class: extra-details
- Instead of viewing your cluster, this could take care of logging, metrics, autoscaling ...
- We can run it within a service, too! We won't do it yet, but the command would look like:
- We can run it within a service, too! We won't do it, but the command would look like:
```bash
docker service create \
@@ -491,16 +506,12 @@ class: extra-details
--name viz --constraint node.role==manager ...
```
.footnote[
Credits: the visualization code was written by
[Francisco Miranda](https://github.com/maroshii).
<br/>
[Mano Marks](https://twitter.com/manomarks) adapted
it to Swarm and maintains it.
]
---
## Terminate our services

View File

@@ -120,7 +120,7 @@ We will use the following Compose file (`stacks/dockercoins+healthcheck.yml`):
- Deploy the updated stack:
```bash
docker stack deploy --compose-file dockercoins+healthcheck.yml dockercoins
docker stack deploy dockercoins --compose-file dockercoins+healthcheck.yml
```
]
@@ -146,7 +146,7 @@ First, let's make an "innocent" change and deploy it.
docker-compose -f dockercoins+healthcheck.yml build
docker-compose -f dockercoins+healthcheck.yml push
docker service update dockercoins_hasher \
--image=127.0.0.1:5000/hasher:$TAG
--detach=false --image=127.0.0.1:5000/hasher:$TAG
```
]
@@ -170,7 +170,7 @@ And now, a breaking change that will cause the health check to fail:
docker-compose -f dockercoins+healthcheck.yml build
docker-compose -f dockercoins+healthcheck.yml push
docker service update dockercoins_hasher \
--image=127.0.0.1:5000/hasher:$TAG
--detach=false --image=127.0.0.1:5000/hasher:$TAG
```
]
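When a rolling update gets stuck on failing health checks like this, one escape hatch (sketched here as a dry run, using the service name from this exercise) is to roll the service back to its previous definition:

```bash
# Dry run (remove the leading "echo" to execute): revert the service to the
# spec it had before the failed update.
echo docker service update --rollback --detach=false dockercoins_hasher
```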

View File

@@ -3,7 +3,7 @@
- This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person,
instructor-led workshops and tutorials
- Over time, [multiple contributors](https://@@GITREPO@@/graphs/contributors) also helped to improve these materials — thank you!
- Over time, [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) also helped to improve these materials — thank you!
- You can also follow along on your own, at your own pace

View File

@@ -113,7 +113,7 @@ class: elk-manual
- We could author a custom image bundling this configuration
- We can also pass the [configuration](https://@@GITREPO@@/blob/master/elk/logstash.conf) on the command line
- We can also pass the [configuration](https://github.com/jpetazzo/container.training/blob/master/elk/logstash.conf) on the command line
.exercise[
@@ -187,7 +187,7 @@ class: elk-auto
```bash
docker-compose -f elk.yml build
docker-compose -f elk.yml push
docker stack deploy -c elk.yml elk
docker stack deploy elk -c elk.yml
```
]
@@ -195,7 +195,7 @@ class: elk-auto
Note: the *build* and *push* steps are not strictly necessary, but they don't hurt!
Let's have a look at the [Compose file](
https://@@GITREPO@@/blob/master/stacks/elk.yml).
https://github.com/jpetazzo/container.training/blob/master/stacks/elk.yml).
---

View File

@@ -169,7 +169,7 @@ class: in-person
This should tell us that we are talking to `node3`.
Note: it can be useful to use a [custom shell prompt](
https://@@GITREPO@@/blob/master/prepare-vms/scripts/postprep.rc#L68)
https://github.com/jpetazzo/container.training/blob/master/prepare-vms/scripts/postprep.rc#L68)
reflecting the `DOCKER_HOST` variable.
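A minimal version of such a prompt could look like this (a sketch assuming bash; the repository's `postprep.rc` does something more elaborate):

```bash
# Print the current Docker endpoint, falling back to "local" when
# DOCKER_HOST is unset; embedding it in PS1 makes every prompt show
# which Engine the CLI is talking to.
docker_host_tag() {
  echo "[${DOCKER_HOST:-local}]"
}
PS1='$(docker_host_tag) \w\$ '

# Example: after pointing the CLI at node3, the prompt tag changes.
DOCKER_HOST=tcp://node3:2375 docker_host_tag   # → [tcp://node3:2375]
```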
---

View File

@@ -554,7 +554,7 @@ class: snap
## Instruct all nodes to join the agreement
- We don't need another fancy global service!
- We dont need another fancy global service!
- We can join nodes from any existing node of the cluster
@@ -1183,7 +1183,7 @@ class: prom
]
You should see 7 endpoints (3 cadvisor, 3 node, 1 prometheus).
You should see 11 endpoints (5 cadvisor, 5 node, 1 prometheus).
Their state should be "UP".

View File

@@ -35,7 +35,33 @@ class: in-person
## Building our full cluster
- Let's get the token, and use a one-liner for the remaining node with SSH
- We could SSH to nodes 3, 4, 5, and copy-paste the command
--
class: in-person
- Or we could use the AWESOME POWER OF THE SHELL!
--
class: in-person
![Mario Red Shell](images/mario-red-shell.png)
--
class: in-person
- No, not *that* shell
---
class: in-person
## Let's form like Swarm-tron
- Let's get the token, and loop over the remaining nodes with SSH
.exercise[
@@ -44,9 +70,11 @@ class: in-person
TOKEN=$(docker swarm join-token -q manager)
```
- Add the remaining node:
- Loop over the 3 remaining nodes:
```bash
ssh node3 docker swarm join --token $TOKEN node1:2377
for NODE in node3 node4 node5; do
ssh $NODE docker swarm join --token $TOKEN node1:2377
done
```
]
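The same loop generalizes to bigger clusters by generating the node list from a count instead of writing it out (a dry-run sketch: the token is a placeholder for the real output of `docker swarm join-token -q manager`, and the `echo` makes it safe to run anywhere):

```bash
# Dry-run sketch: generate the join commands for nodes 3..5 from a count.
# Remove the "echo" to execute; TOKEN is a placeholder value.
TOKEN=SWMTKN-placeholder
for N in $(seq 3 5); do
  echo ssh node$N docker swarm join --token $TOKEN node1:2377
done
```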
@@ -179,7 +207,7 @@ class: self-paced
- one of the main take-aways was *"you're gonna need a bigger manager"*
- Testing by the community: [4700 heterogeneous nodes all over the 'net](https://sematext.com/blog/2016/11/14/docker-swarm-lessons-from-swarm3k/)
- Testing by the community: [4700 heterogenous nodes all over the 'net](https://sematext.com/blog/2016/11/14/docker-swarm-lessons-from-swarm3k/)
- it just works

Some files were not shown because too many files have changed in this diff