Compare commits

..

28 Commits

Author SHA1 Message Date
Diego Quintana
051dd13c21 Enforce alpine version that includes telnet (#292)
* Enforce alpine version that contains telnet

alpine 3.7 does not contain `telnet` by default https://github.com/gliderlabs/docker-alpine/issues/397#issuecomment-375415746

* bump fix

enforce alpine 3.6 in another slide that mentions `telnet`
2018-06-28 08:26:30 -05:00
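For context, a sketch of the difference this commit works around (assuming a host with Docker; the exact telnet path comes from busybox and may vary):

```bash
# alpine 3.6 ships telnet as a busybox applet...
docker run --rm alpine:3.6 sh -c 'command -v telnet'   # prints the telnet path
# ...alpine 3.7 dropped it, so the same check finds nothing
docker run --rm alpine:3.7 sh -c 'command -v telnet'   # no output, non-zero exit
```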
Diego Quintana
8c3d4c2c56 Add short inline explanation for -w (#291)
I don't know, but maybe having this short explanation saves a `docker run --help` for someone. 

Tell me if it's too much :D
2018-06-28 08:25:34 -05:00
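For context, `-w` sets the container's working directory, like `WORKDIR` in a Dockerfile but at run time; a minimal sketch:

```bash
# run a one-off command with /src as its working directory
# (Docker creates the directory in the container if it is missing)
docker run --rm -w /src alpine pwd   # prints /src
```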
Jerome Petazzoni
817e17a3a8 Merge branch 'master' into avril2018 2018-04-13 08:13:10 +02:00
Jérôme Petazzoni
e48016a0de Merge pull request #203 from jpetazzo/master
Typo fix, thanks Bridget! ♥
2018-04-12 15:55:36 -05:00
Jerome Petazzoni
39765c9ad0 Add food menu 2018-04-12 15:54:20 -05:00
Jerome Petazzoni
ca06269f00 Merge branch 'master' into avril2018 2018-04-12 12:06:44 -05:00
Jerome Petazzoni
9876a9aaa6 Add dockerfile samples 2018-04-12 09:04:47 +02:00
Jerome Petazzoni
853ba7ec39 Add dockerfile samples 2018-04-12 09:04:36 +02:00
Jerome Petazzoni
3d5c89774c Merge branch 'master' into avril2018 2018-04-11 12:17:03 +02:00
Jerome Petazzoni
21bb5fa9e1 Clarify wifi 2018-04-11 01:40:35 -05:00
Jerome Petazzoni
3fe4d730e7 merge master 2018-04-11 01:13:24 -05:00
Jerome Petazzoni
056b3a7127 hotfix for kubectl get all 2018-04-10 17:21:29 -05:00
Jerome Petazzoni
292885566d Merge branch 'master' into avril2018 2018-04-10 17:12:21 -05:00
Jerome Petazzoni
a54287a6bb Setup chapters appropriately 2018-04-10 09:13:25 -05:00
Jerome Petazzoni
e1fe41b7d7 Merge branch 'master' into avril2018 2018-04-10 08:41:34 -05:00
Jerome Petazzoni
817e3f9217 Fix @jgarrouste's Twitter link 2018-04-10 06:31:42 -05:00
Jerome Petazzoni
bb94c6fe76 Cards for Paris 2018-04-10 06:31:13 -05:00
Jerome Petazzoni
fd05530fff Merge branch 'more-info-on-labels-and-rollouts' into avril2018 2018-04-10 06:05:33 -05:00
Jerome Petazzoni
86f2395b2c Merge branch 'master' into avril2018 2018-04-10 05:31:47 -05:00
Jerome Petazzoni
60f68351c6 Add demos by @jgarrouste 2018-04-10 04:45:41 -05:00
Jerome Petazzoni
035d015a61 Merge branch 'master' into avril2018 2018-04-10 04:25:22 -05:00
Jerome Petazzoni
83efd145b8 Merge branch 'master' into avril2018 2018-04-09 17:07:02 -05:00
Jerome Petazzoni
c6c1a942e7 Update WiFi password and schedule 2018-04-09 15:44:32 -05:00
Jerome Petazzoni
59f5ff7788 Customize outline and title 2018-04-09 15:32:52 -05:00
Jerome Petazzoni
1fbf7b7dbd herp derp symlinks and stuff 2018-04-09 15:32:41 -05:00
Jerome Petazzoni
249947b0dd Setup links to slide decks 2018-04-09 15:26:47 -05:00
Jerome Petazzoni
e9af03e976 On a second thought, let's have relative links 2018-04-09 15:22:12 -05:00
Jerome Petazzoni
ab583e2670 Custom index for avril2018.container.training 2018-04-09 15:21:35 -05:00
84 changed files with 519 additions and 1879 deletions

@@ -292,31 +292,15 @@ If there is a bug and you can't even reproduce it:
sorry. It is probably a Heisenbug. We can't act on it
until it's reproducible, alas.
# “Please teach us!”
If you have attended one of these workshops, and want
your team or organization to attend a similar one, you
can look at the list of upcoming events on
http://container.training/.
You are also welcome to reuse these materials to run
your own workshop, for your team or even at a meetup
or conference. In that case, you might enjoy watching
[Bridget Kromhout's talk at KubeCon 2018 Europe](
https://www.youtube.com/watch?v=mYsp_cGY2O0), explaining
precisely how to run such a workshop yourself.
Finally, you can also contact the following persons,
who are experienced speakers, are familiar with the
material, and are available to deliver these workshops
at your conference or for your company:
If you have attended this workshop and have feedback,
or if you want somebody to deliver that workshop at your
conference or for your company: you can contact one of us!
- jerome dot petazzoni at gmail dot com
- bret at bretfisher dot com
(If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!)
If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!
**Thank you!**

@@ -103,7 +103,7 @@ wrap Run this program in a container
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires a good connection to run all the parallel SSH connections (up to 100 in parallel). ProTip: create a dedicated management instance in the same AWS region and run all these utilities from it.
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
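Put together, one run of the lifecycle above looks like this sketch (`TAG` and the settings file are placeholders):

```bash
./workshopctl deploy TAG settings/somefile.yaml   # run lib/postprep.py on all VMs via parallel-ssh
./workshopctl pull-images TAG                     # pre-pull Docker images to the instances
./workshopctl cards TAG settings/somefile.yaml    # generate printable cards for students
# ...run the workshop...
./workshopctl stop TAG                            # terminate the instances
```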
@@ -210,7 +210,7 @@ The `postprep.py` file will be copied via parallel-ssh to all of the VMs and exe
#### Pre-pull images
$ ./workshopctl pull_images TAG
$ ./workshopctl pull-images TAG
#### Generate cards

@@ -1,16 +1,18 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "juin2018.container.training" -%}
{%- set url = "avril2018.container.training" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "formation" -%}
{%- set cluster_or_machine = "votre VM" -%}
{%- set machine_is_or_machines_are = "Votre VM" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "formation" -%}
{%- set cluster_or_machine = "votre cluster" -%}
{%- set machine_is_or_machines_are = "Votre cluster" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_swarm -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
@@ -86,7 +88,7 @@ img {
</p>
<p>
{{ machine_is_or_machines_are }}:
{{ machine_is_or_machines_are }} :
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>

@@ -7,6 +7,7 @@ services:
working_dir: /root/prepare-vms
volumes:
- $HOME/.aws/:/root/.aws/
- /etc/localtime:/etc/localtime:ro
- $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
- $PWD/:/root/prepare-vms/
environment:

@@ -393,23 +393,9 @@ pull_tag() {
ubuntu:latest \
fedora:latest \
centos:latest \
elasticsearch:2 \
postgres \
redis \
alpine \
registry \
nicolaka/netshoot \
jpetazzo/trainingwheels \
golang \
training/namer \
dockercoins/hasher \
dockercoins/rng \
dockercoins/webui \
dockercoins/worker \
logstash \
prom/node-exporter \
google/cadvisor \
dockersamples/visualizer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'

@@ -108,7 +108,7 @@ system("sudo chmod +x /usr/local/bin/docker-machine")
system("docker-machine version")
system("sudo apt-get remove -y --purge dnsmasq-base")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh tree")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
### Wait for Docker to be up.
### (If we don't do this, Docker will not be responsive during the next step.)

@@ -20,5 +20,5 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
compose_version: 1.20.1
machine_version: 0.14.0

@@ -20,5 +20,5 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
compose_version: 1.20.1
machine_version: 0.14.0

@@ -20,5 +20,5 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
compose_version: 1.20.1
machine_version: 0.14.0

@@ -1,24 +0,0 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
# Number of VMs per cluster
clustersize: 3
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0

@@ -1 +0,0 @@
/ /deck.yml.html

@@ -15,12 +15,6 @@ once)
;;
forever)
# check if entr is installed
if ! command -v entr >/dev/null; then
echo >&2 "First install 'entr' with apt, brew, etc."
exit
fi
# There is a weird bug in entr, at least on MacOS,
# where it doesn't restore the terminal to a clean
state when exiting. So let's try to work around

@@ -2,7 +2,7 @@
- All the content is available in a public GitHub repository:
https://@@GITREPO@@
https://github.com/jpetazzo/container.training
- You can get updated "builds" of the slides there:
@@ -10,7 +10,7 @@
<!--
.exercise[
```open https://@@GITREPO@@```
```open https://github.com/jpetazzo/container.training```
```open http://container.training/```
]
-->
@@ -23,7 +23,7 @@
<!--
.exercise[
```open https://@@GITREPO@@/tree/master/slides/common/about-slides.md```
```open https://github.com/jpetazzo/container.training/tree/master/slides/common/about-slides.md```
]
-->

@@ -49,6 +49,26 @@ Tip: use `^S` and `^Q` to pause/resume log output.
---
class: extra-details
## Upgrading from Compose 1.6
.warning[The `logs` command has changed between Compose 1.6 and 1.7!]
- Up to 1.6
- `docker-compose logs` is the equivalent of `logs --follow`
- `docker-compose logs` must be restarted if containers are added
- Since 1.7
- `--follow` must be specified explicitly
- new containers are automatically picked up by `docker-compose logs`
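A sketch of the post-1.7 behavior:

```bash
# prints existing logs, then exits
docker-compose logs
# keeps streaming, and picks up containers added later
docker-compose logs --follow
```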
---
## Scaling up the application
- Our goal is to make that performance graph go up (without changing a line of code!)
@@ -106,7 +126,7 @@ We have available resources.
- Start one more `worker` container:
```bash
docker-compose up -d --scale worker=2
docker-compose scale worker=2
```
- Look at the performance graph (it should show a x2 improvement)
@@ -127,7 +147,7 @@ We have available resources.
- Start eight more `worker` containers:
```bash
docker-compose up -d --scale worker=10
docker-compose scale worker=10
```
- Look at the performance graph: does it show a x10 improvement?
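For reference, the two syntaxes seen in these diffs side by side; `up --scale` is the current form, while the standalone `scale` command is the older, deprecated one:

```bash
docker-compose up -d --scale worker=10   # current syntax
docker-compose scale worker=10           # older, deprecated syntax
```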

@@ -1,9 +1,14 @@
# Orchestration
# Pre-requirements
- Now that we have learned some container knowledge,
we can get started with orchestration!
- Be comfortable with the UNIX command line
- Note: all that is needed to follow along the orchestration part is some *basic* Docker knowledge, i.e.:
- navigating directories
- editing files
- a little bit of bash-fu (environment variables, loops)
- Some Docker knowledge
- `docker run`, `docker ps`, `docker build`
@@ -31,7 +36,7 @@ Misattributed to Benjamin Franklin
## Hands-on sections
- Of course, we have tons of exercises and hands-on labs
- The whole workshop is hands-on
- We are going to build, ship, and run containers!
@@ -43,11 +48,11 @@ Misattributed to Benjamin Franklin
- This is the stuff you're supposed to do!
- Go to @@SLIDES@@ to view these slides
- Go to [container.training](http://container.training/) to view these slides
- Join the chat room: @@CHAT@@
<!-- ```open @@SLIDES@@``` -->
<!-- ```open http://container.training/``` -->
]
@@ -73,9 +78,7 @@ class: in-person
- They'll remain up for the duration of the workshop
- You should have **another** little card with login+password+IP addresses
(But that one has 5 nodes instead of only 1)
- You should have a little card with login+password+IP addresses
- You can automatically SSH from one VM to another
@@ -186,7 +189,7 @@ done
```bash
if which kubectl; then
kubectl get all -o name | grep -v service/kubernetes | xargs -rn1 kubectl delete
kubectl get all -o name | grep -v service/kubernetes | xargs -n1 kubectl delete
fi
```
-->
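The `-r` added above matters: with GNU xargs, `-r` (`--no-run-if-empty`) skips the command entirely when there is no input, so `kubectl delete` is not invoked without arguments on an already-clean cluster. A quick illustration (using `echo` as a stand-in):

```bash
printf '' | xargs echo deleting     # runs "echo deleting" once, with no arguments
printf '' | xargs -r echo deleting  # runs nothing at all
```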
@@ -219,7 +222,7 @@ If anything goes wrong — ask for help!
Small setup effort; small cost; flexible environments
- Create a bunch of clusters for you and your friends
([instructions](https://@@GITREPO@@/tree/master/prepare-vms))
([instructions](https://github.com/jpetazzo/container.training/tree/master/prepare-vms))
Bigger setup effort; ideal for group training

@@ -16,7 +16,7 @@ fi
- Clone the repository on `node1`:
```bash
git clone git://@@GITREPO@@
git clone git://github.com/jpetazzo/container.training
```
]
@@ -56,16 +56,16 @@ and displays aggregated logs.
## More detail on our sample application
- Visit the GitHub repository with all the materials of this workshop:
<br/>https://@@GITREPO@@
<br/>https://github.com/jpetazzo/container.training
- The application is in the [dockercoins](
https://@@GITREPO@@/tree/master/dockercoins)
https://github.com/jpetazzo/container.training/tree/master/dockercoins)
subdirectory
- Let's look at the general layout of the source code:
there is a Compose file [docker-compose.yml](
https://@@GITREPO@@/blob/master/dockercoins/docker-compose.yml) ...
https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml) ...
... and 4 other services, each in its own directory:
@@ -124,7 +124,7 @@ def hash_bytes(data):
```
(Full source code available [here](
https://@@GITREPO@@/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
))
---

@@ -11,5 +11,9 @@ class: title, in-person
@@TITLE@@<br/></br>
.footnote[
**Slides: @@SLIDES@@**
**WiFi: `ArtyLoft`** or **`ArtyLoft 5 GHz`**
<br/>
**Password: `TFLEVENT5`**
**Slides: http://avril2018.container.training/**
]

@@ -1,57 +0,0 @@
#!/usr/bin/env python
import re
import sys
PREFIX = "name: toc-"
EXCLUDED = ["in-person"]
class State(object):
def __init__(self):
self.current_slide = 1
self.section_title = None
self.section_start = 0
self.section_slides = 0
self.chapters = {}
self.sections = {}
def show(self):
if self.section_title.startswith("chapter-"):
return
print("{0.section_title}\t{0.section_start}\t{0.section_slides}".format(self))
self.sections[self.section_title] = self.section_slides
state = State()
title = None
for line in open(sys.argv[1]):
line = line.rstrip()
if line.startswith(PREFIX):
if state.section_title is None:
print("{}\t{}\t{}".format("title", "index", "size"))
else:
state.show()
state.section_title = line[len(PREFIX):].strip()
state.section_start = state.current_slide
state.section_slides = 0
if line == "---":
state.current_slide += 1
state.section_slides += 1
if line == "--":
state.current_slide += 1
toc_links = re.findall("\(#toc-(.*)\)", line)
if toc_links and state.section_title.startswith("chapter-"):
if state.section_title not in state.chapters:
state.chapters[state.section_title] = []
state.chapters[state.section_title].append(toc_links[0])
# This is really hackish
if line.startswith("class:"):
for klass in EXCLUDED:
if klass in line:
state.section_slides -= 1
state.current_slide -= 1
state.show()
for chapter in sorted(state.chapters):
chapter_size = sum(state.sections[s] for s in state.chapters[chapter])
print("{}\t{}\t{}".format("total size for", chapter, chapter_size))

@@ -1,114 +0,0 @@
title: |
Introduction
to Containers
and Orchestration
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/training-20180605-montpellier)"
gitrepo: github.com/jpetazzo/container.training
slides: http://juin2018.container.training/
exclude:
- self-paced
chapters:
- common/title.md
- logistics.md
- intro/intro.md
- common/about-slides.md
- common/toc.md
- - intro/Docker_Overview.md
- intro/Docker_History.md
- intro/Training_Environment.md
- intro/Installing_Docker.md
- intro/First_Containers.md
- intro/Background_Containers.md
- intro/Start_And_Attach.md
- - intro/Initial_Images.md
- intro/Building_Images_Interactively.md
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- - |
# Exercise — writing Dockerfiles
Let's write Dockerfiles for an existing application!
The code is at: https://bitbucket.org/jgarrouste/k8s-wordsmith-exo/src/master/
- intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- |
# Exercise — writing better Dockerfiles
Let's update our Dockerfiles to leverage multi-stage builds!
The code is at: https://bitbucket.org/jgarrouste/k8s-wordsmith-exo/src/master/
Use a different tag for these images, so that we can compare their sizes.
What's the size difference between single-stage and multi-stage builds?
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- intro/Resource_Limits.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- - intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md
- intro/Ambassadors.md
- - intro/Local_Development_Workflow.md
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- |
# Exercise — writing a Compose file
Let's write a Compose file for the wordsmith app!
The code is at: https://bitbucket.org/jgarrouste/k8s-wordsmith-exo/src/master/
- - intro/CI_Pipeline.md
- intro/Docker_Machine.md
- intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Logging.md
- - intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- intro/links.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- - kube/kubectlrun.md
- kube/kubectlexpose.md
- kube/ourapponkube.md
- - kube/dashboard.md
- |
# Exercise — running wordsmith on Kubernetes
Now that we know how to deploy containers on Kubernetes, let's deploy the wordsmith app on our cluster!
The code is at: https://bitbucket.org/jgarrouste/k8s-wordsmith-exo/src/master/
- kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
- common/thankyou.md

@@ -0,0 +1,10 @@
#!/bin/sh
# Tally slides per section of a markdown deck: keep only "# " headings and
# "---" slide separators, count each run with uniq -c, then paste pairs of
# lines so each heading ends up with its slide count, tab-separated.
INPUT=$1
{
echo "# Front matter"
cat "$INPUT"
} |
grep -e "^# " -e ^---$ | uniq -c |
sed "s/^ *//" | sed s/---// |
paste -d "\t" - -

(7 binary image files changed, not shown: six removed (30, 30, 45, 70, 60, and 55 KiB) and one added (22 KiB).)

slides/index.html Normal file

@@ -0,0 +1,29 @@
<html>
<head>
<link rel="stylesheet" type="text/css" href="theme.css">
<title>Formation/workshop containers, orchestration, et Kubernetes à Paris en avril</title>
</head>
<body>
<div class="index">
<div class="block">
<h4>Introduction aux conteneurs</h4>
<h5>De la pratique … aux bonnes pratiques</h5>
<h6>(11-12 avril 2018)</h6>
<p>
<a href="intro.yml.html">SLIDES</a>
<a href="https://gitter.im/jpetazzo/training-20180411-paris">CHATROOM</a>
</p>
</div>
<div class="block">
<h4>Introduction à l'orchestration</h4>
<h5>Kubernetes par l'exemple</h5>
<h6>(13 avril 2018)</h6>
<p>
<a href="kube.yml.html">SLIDES</a>
<a href="https://gitter.im/jpetazzo/training-20180413-paris">CHATROOM</a>
<a href="https://docs.google.com/spreadsheets/d/1KiuCVduTf3wf-4-vSmcK96I61WYdDP0BppkOx_XZcjM/edit?ts=5acfc2ef#gid=0">FOODMENU</a>
</p>
</div>
</div>
</body>
</html>

@@ -3,11 +3,7 @@ title: |
to Containers
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/training-20180605-montpellier)"
gitrepo: github.com/jpetazzo/container.training
slides: http://juin2018.container.training/
chat: "[Gitter](https://gitter.im/jpetazzo/training-20180411-paris)"
exclude:
- self-paced
@@ -22,41 +18,21 @@ chapters:
- intro/Docker_History.md
- intro/Training_Environment.md
- intro/Installing_Docker.md
- intro/First_Containers.md
- - intro/First_Containers.md
- intro/Background_Containers.md
- intro/Start_And_Attach.md
- - intro/Initial_Images.md
- intro/Building_Images_Interactively.md
- intro/Initial_Images.md
- - intro/Building_Images_Interactively.md
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- - |
# Exercise — writing Dockerfiles
Let's write Dockerfiles for an existing application!
The code is at: https://bitbucket.org/jgarrouste/k8s-wordsmith-exo/src/master/
- intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- |
# Exercise — writing better Dockerfiles
Let's update our Dockerfiles to leverage multi-stage builds!
The code is at: https://bitbucket.org/jgarrouste/k8s-wordsmith-exo/src/master/
Use a different tag for these images, so that we can compare their sizes.
What's the size difference between single-stage and multi-stage builds?
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- intro/Resource_Limits.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- - intro/Container_Networking_Basics.md
- intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md
@@ -64,18 +40,15 @@ chapters:
- - intro/Local_Development_Workflow.md
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- |
# Exercise — writing a Compose file
Let's write a Compose file for the wordsmith app!
The code is at: https://bitbucket.org/jgarrouste/k8s-wordsmith-exo/src/master/
- - intro/CI_Pipeline.md
- intro/Docker_Machine.md
- - intro/CI_Pipeline.md
- intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Dockerfile_Samples.md
- intro/Logging.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- - intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md

@@ -1,14 +1,11 @@
title: |
Introduction
to Containers
to Docker and
Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
@@ -30,13 +27,13 @@ chapters:
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- - intro/Multi_Stage_Builds.md
- intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- - intro/Container_Networking_Basics.md
- intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md
@@ -45,14 +42,13 @@ chapters:
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Docker_Machine.md
- - intro/Advanced_Dockerfiles.md
- intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Logging.md
- intro/Resource_Limits.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- - intro/Container_Engines.md
- intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md

slides/intro.yml Symbolic link

@@ -0,0 +1 @@
intro-fullday.yml

@@ -34,6 +34,18 @@ In this section, we will see more Dockerfile commands.
---
## The `MAINTAINER` instruction
The `MAINTAINER` instruction tells you who wrote the `Dockerfile`.
```dockerfile
MAINTAINER Docker Education Team <education@docker.com>
```
It's optional but recommended.
---
## The `RUN` instruction
The `RUN` instruction can be specified in two ways.
@@ -416,4 +428,5 @@ ONBUILD COPY . /src
```
* You can't chain `ONBUILD` instructions with `ONBUILD`.
* `ONBUILD` can't be used to trigger `FROM` instructions.
* `ONBUILD` can't be used to trigger `FROM` and `MAINTAINER`
instructions.

@@ -40,8 +40,6 @@ ambassador containers.
---
class: pic
![ambassador](images/ambassador-diagram.png)
---

@@ -49,7 +49,7 @@ Before diving in, let's see a small example of Compose in action.
---
class: pic
## Compose in action
![composeup](images/composeup.gif)
@@ -60,10 +60,6 @@ class: pic
If you are using the official training virtual machines, Compose has been
pre-installed.
If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them.
If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`.
You can always check that it is installed by running:
```bash
@@ -139,33 +135,22 @@ services:
---
## Compose file structure
## Compose file versions
A Compose file has multiple sections:
Version 1 directly has the various containers (`www`, `redis`...) at the top level of the file.
* `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.)
Version 2 has multiple sections:
* `services` is mandatory. A service is one or more replicas of the same image running as containers.
* `version` is mandatory and should be `"2"`.
* `services` is mandatory and corresponds to the content of the version 1 format.
* `networks` is optional and indicates to which networks containers should be connected.
<br/>(By default, containers will be connected on a private, per-compose-file network.)
<br/>(By default, containers will be connected on a private, per-app network.)
* `volumes` is optional and can define volumes to be used and/or shared by the containers.
---
## Compose file versions
* Version 1 is legacy and shouldn't be used.
(If you see a Compose file without `version` and `services`, it's a legacy v1 file.)
* Version 2 added support for networks and volumes.
* Version 3 added support for deployment options (scaling, rolling updates, etc).
The [Docker documentation](https://docs.docker.com/compose/compose-file/)
has excellent information about the Compose file format if you need to know more about versions.
Version 3 adds support for deployment options (scaling, rolling updates, etc.)
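As a minimal sketch, a version 2 file touching all three sections (service and network names are illustrative):

```bash
cat > docker-compose.yml <<'EOF'
version: "2"
services:            # mandatory: one entry per service
  www:
    image: nginx
    networks:
      - front
networks:            # optional: defaults to one private per-app network
  front: {}
volumes: {}          # optional: named volumes used/shared by containers
EOF
```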
---
@@ -275,8 +260,6 @@ Removing trainingwheels_www_1 ... done
Removing trainingwheels_redis_1 ... done
```
Use `docker-compose down -v` to remove everything including volumes.
---
## Special handling of volumes

@@ -73,7 +73,7 @@ Containers also exist (sometimes with other names) on Windows, macOS, Solaris, F
## LXC
* The venerable ancestor (first released in 2008).
* The venerable ancestor (first realeased in 2008).
* Docker initially relied on it to execute containers.

@@ -65,17 +65,9 @@ eb0eeab782f4 host host
* A network is managed by a *driver*.
* The built-in drivers include:
* All the drivers that we have seen before are available.
* `bridge` (default)
* `none`
* `host`
* `macvlan`
* A multi-host driver, *overlay*, is available out of the box (for Swarm clusters).
* A new multi-host driver, *overlay*, is available out of the box.
* More drivers can be provided by plugins (OVS, VLAN...)
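For instance (a sketch; `mynet` is an arbitrary name):

```bash
docker network ls                       # shows each network's driver in the DRIVER column
docker network create -d bridge mynet   # -d picks the driver (bridge is the default)
```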
@@ -83,8 +75,6 @@ eb0eeab782f4 host host
---
class: extra-details
## Differences with the CNI
* CNI = Container Network Interface
@@ -97,22 +87,6 @@ class: extra-details
---
class: pic
## Single container in a Docker network
![bridge0](images/bridge1.png)
---
class: pic
## Two containers on two Docker networks
![bridge3](images/bridge2.png)
---
## Creating a network
Let's create a network called `dev`.
@@ -310,7 +284,7 @@ since we wiped out the old Redis container).
---
class: extra-details
class: x-extra-details
## Names are *local* to each network
@@ -350,7 +324,7 @@ class: extra-details
Create the `prod` network.
```bash
$ docker network create prod
$ docker create network prod
5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c
```
@@ -498,13 +472,11 @@ b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09
* If containers span multiple hosts, we need an *overlay* network to connect them together.
* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging
VXLAN, *enabled with Swarm Mode*.
* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN.
* Other plugins (Weave, Calico...) can provide overlay networks as well.
* Once you have an overlay network, *all the features that we've used in this chapter work identically
across multiple hosts.*
* Once you have an overlay network, *all the features that we've used in this chapter work identically.*
---
@@ -542,174 +514,13 @@ General idea:
---
## Connecting and disconnecting dynamically
## Section summary
* So far, we have specified which network to use when starting the container.
We've learned how to:
* The Docker Engine also allows connecting and disconnecting while the container runs.
* Create private networks for groups of containers.
* This feature is exposed through the Docker API, and through two Docker CLI commands:
* Assign IP addresses to containers.
* `docker network connect <network> <container>`
* Use container naming to implement service discovery.
* `docker network disconnect <network> <container>`
---
## Dynamically connecting to a network
* We have a container named `es` connected to a network named `dev`.
* Let's start a simple alpine container on the default network:
```bash
$ docker run -ti alpine sh
/ #
```
* In this container, try to ping the `es` container:
```bash
/ # ping es
ping: bad address 'es'
```
This doesn't work, but we will change that by connecting the container.
---
## Finding the container ID and connecting it
* Figure out the ID of our alpine container; here are two methods:
* looking at `/etc/hostname` in the container,
* running `docker ps -lq` on the host.
* Run the following command on the host:
```bash
$ docker network connect dev `<container_id>`
```
---
## Checking what we did
* Try again to `ping es` from the container.
* It should now work correctly:
```bash
/ # ping es
PING es (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms
^C
```
* Interrupt it with Ctrl-C.
---
## Looking at the network setup in the container
We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`:
.small[
```bash
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1
valid_lft forever preferred_lft forever
/ #
```
]
Each network connection is materialized with a virtual network interface.
As we can see, we can be connected to multiple networks at the same time.
---
## Disconnecting from a network
* Let's try the symmetrical command to disconnect the container:
```bash
$ docker network disconnect dev <container_id>
```
* From now on, if we try to ping `es`, it will not resolve:
```bash
/ # ping es
ping: bad address 'es'
```
* Trying to ping the IP address directly won't work either:
```bash
/ # ping 172.20.0.3
... (nothing happens until we interrupt it with Ctrl-C)
```
---
class: extra-details
## Network aliases are scoped per network
* Each network has its own set of network aliases.
* We saw this earlier: `es` resolves to different addresses in `dev` and `prod`.
* If we are connected to multiple networks, the resolver looks up names in each of them
(as of Docker Engine 18.03, in connection order) and stops as soon as the name
is found.
* Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not**
give us the addresses of all the `es` services; but only the ones in `dev` or `prod`.
* However, we can lookup `es.dev` or `es.prod` if we need to.
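A sketch of that qualified lookup, from a container connected to both networks (assuming the `es` containers from the previous examples are running):

```bash
ping -c 1 es.dev    # resolves to the es container on the dev network
ping -c 1 es.prod   # resolves to the es container on the prod network
```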
---
class: extra-details
## Finding out about our networks and names
* We can do reverse DNS lookups on containers' IP addresses.
* If the IP address belongs to a network (other than the default bridge), the result will be:
```
name-or-first-alias-or-container-id.network-name
```
* Example:
.small[
```bash
$ docker run -ti --net prod --net-alias hello alpine
/ # apk add --no-cache drill
...
OK: 5 MiB in 13 packages
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03
inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
...
/ # drill -t ptr `3.0.21.172`.in-addr.arpa
...
;; ANSWER SECTION:
3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`.
...
```
]

@@ -10,12 +10,10 @@
* [Solaris Containers (2004)](https://en.wikipedia.org/wiki/Solaris_Containers)
* [FreeBSD jails (1999-2000)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)
* [FreeBSD jails (1999)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)
Containers have been around for a *very long time* indeed.
(See [this excellent blog post by Serge Hallyn](https://s3hh.wordpress.com/2018/03/22/history-of-containers/) for more historic details.)
---
class: pic

@@ -0,0 +1,5 @@
# Dockerfile Samples
---
## (Demo in terminal)

@@ -51,8 +51,9 @@ The dependencies are reinstalled every time, because the build system does not k
```bash
FROM python
MAINTAINER Docker Education Team <education@docker.com>
COPY . /src/
WORKDIR /src
COPY . .
RUN pip install -qr requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
@@ -66,10 +67,11 @@ Adding the dependencies as a separate step means that Docker can cache more effi
```bash
FROM python
COPY requirements.txt /tmp/requirements.txt
MAINTAINER Docker Education Team <education@docker.com>
COPY ./requirements.txt /tmp/requirements.txt
RUN pip install -qr /tmp/requirements.txt
COPY . /src/
WORKDIR /src
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```
@@ -96,266 +98,3 @@ CMD, EXPOSE ...
* The build fails as soon as an instruction fails
* If `RUN <unit tests>` fails, the build doesn't produce an image
* If it succeeds, it produces a clean image (without test libraries and data)
---
# Dockerfile examples
There are a number of tips, tricks, and techniques that we can use in Dockerfiles.
But sometimes, we have to use different (and even opposed) practices depending on:
- the complexity of our project,
- the programming language or framework that we are using,
- the stage of our project (early MVP vs. super-stable production),
- whether we're building a final image or a base for further images,
- etc.
We are going to show a few examples using very different techniques.
---
## When to optimize an image
When authoring official images, it is a good idea to reduce as much as possible:
- the number of layers,
- the size of the final image.
This is often done at the expense of build time and convenience for the image maintainer;
but when an image is downloaded millions of times, saving even a few seconds of pull time
can be worth it.
.small[
```dockerfile
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-install gd
...
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
&& echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
&& tar -xzf wordpress.tar.gz -C /usr/src/ \
&& rm wordpress.tar.gz \
&& chown -R www-data:www-data /usr/src/wordpress
```
]
(Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile))
---
## When to *not* optimize an image
Sometimes, it is better to prioritize *maintainer convenience*.
In particular, if:
- the image changes a lot,
- the image has very few users (e.g. only 1, the maintainer!),
- the image is built and run on the same machine,
- the image is built and run on machines with a very fast link ...
In these cases, just keep things simple!
(Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.)
---
```dockerfile
FROM debian:sid
RUN apt-get update -q
RUN apt-get install -yq build-essential make
RUN apt-get install -yq zlib1g-dev
RUN apt-get install -yq ruby ruby-dev
RUN apt-get install -yq python-pygments
RUN apt-get install -yq nodejs
RUN apt-get install -yq cmake
RUN gem install --no-rdoc --no-ri github-pages
COPY . /blog
WORKDIR /blog
VOLUME /blog/_site
EXPOSE 4000
CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"]
```
---
## Multi-dimensional versioning systems
Images can have a tag, indicating the version of the image.
But sometimes, there are multiple important components, and we need to indicate the versions
for all of them.
This can be done with environment variables:
```dockerfile
ENV PIP=9.0.3 \
ZC_BUILDOUT=2.11.2 \
SETUPTOOLS=38.7.0 \
PLONE_MAJOR=5.1 \
PLONE_VERSION=5.1.0 \
PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d
```
(Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile))
---
## Entrypoints and wrappers
It is very common to define a custom entrypoint.
That entrypoint will generally be a script, performing any combination of:
- pre-flights checks (if a required dependency is not available, display
a nice error message early instead of an obscure one in a deep log file),
- generation or validation of configuration files,
- dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`),
- and more.
---
## A typical entrypoint script
```dockerfile
#!/bin/sh
set -e
# first arg is '-f' or '--some-option'
# or first arg is 'something.conf'
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
set -- redis-server "$@"
fi
# allow the container to be started with '--user'
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
chown -R redis .
exec su-exec redis "$0" "$@"
fi
exec "$@"
```
(Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh))
---
## Factoring information
To facilitate maintenance (and avoid human errors), avoid repeating information like:
- version numbers,
- remote asset URLs (e.g. source tarballs) ...
Instead, use environment variables.
.small[
```dockerfile
ENV NODE_VERSION 10.2.1
...
RUN ...
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xf "node-v$NODE_VERSION.tar.xz" \
&& cd "node-v$NODE_VERSION" \
...
```
]
(Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile))
---
## Overrides
In theory, development and production images should be the same.
In practice, we often need to enable specific behaviors in development (e.g. debug statements).
One way to reconcile both needs is to use Compose to enable these behaviors.
Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.
---
## Production image
This Dockerfile builds an image leveraging gunicorn:
```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```
(Source: [traininghweels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))
---
## Development Compose file
This Compose file uses the same image, but with a few overrides for development:
- the Flask development server is used (overriding `CMD`),
- the `DEBUG` environment variable is set,
- a volume is used to provide a faster local development workflow.
.small[
```yaml
services:
www:
build: www
ports:
- 8000:5000
user: nobody
environment:
DEBUG: 1
command: python counter.py
volumes:
- ./www:/src
```
]
(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))
---
## How to know which best practices are better?
- The main goal of containers is to make our lives easier.
- In this chapter, we showed many ways to write Dockerfiles.
- These Dockerfiles sometimes use diametrically opposed techniques.
- Yet, they were the "right" ones *for a specific situation.*
- It's OK (and even encouraged) to start simple and evolve as needed.
- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!

@@ -110,8 +110,6 @@ Beautiful! .emoji[😍]
---
class: in-person
## Counting packages in the container
Let's check how many packages are installed there.
@@ -129,8 +127,6 @@ How many packages do we have on our host?
---
class: in-person
## Counting packages on the host
Exit the container by logging out of the shell, like you would usually do.
@@ -149,34 +145,18 @@ Now, try to:
---
class: self-paced
## Comparing the container and the host
Exit the container by logging out of the shell, with `^D` or `exit`.
Now try to run `figlet`. Does that work?
(It shouldn't; except if, by coincidence, you are running on a machine where figlet was installed before.)
---
## Host and containers are independent things
* We ran an `ubuntu` container on a Linux/Windows/macOS host.
* We ran an `ubuntu` container on an `ubuntu` host.
* They have different, independent packages.
* But they have different, independent packages.
* Installing something on the host doesn't expose it to the container.
* And vice-versa.
* Even if both the host and the container have the same Linux distro!
* We can run *any container* on *any host*.
(One exception: Windows containers cannot run on Linux machines; at least not yet.)
---
## Where's our container?

@@ -144,7 +144,7 @@ docker run jpetazzo/crashtest
The container starts, but then stops immediately, without any output.
What would MacGyver&trade; do?
What would McGyver do?
First, let's check the status of that container.

@@ -46,8 +46,6 @@ In this section, we will explain:
## Example for a Java webapp
Each of the following items will correspond to one layer:
* CentOS base layer
* Packages and configuration files added by our local IT
* JRE
@@ -58,22 +56,6 @@ Each of the following items will correspond to one layer:
---
class: pic
## The read-write layer
![layers](images/container-layers.jpg)
---
class: pic
## Multiple containers sharing the same image
![layers](images/sharing-layers.jpg)
---
## Differences between containers and images
* An image is a read-only filesystem.
@@ -81,14 +63,24 @@ class: pic
* A container is an encapsulated set of processes running in a
read-write copy of that filesystem.
* To optimize container boot time, *copy-on-write* is used
* To optimize container boot time, *copy-on-write* is used
instead of regular copy.
* `docker run` starts a container from a given image.
Let's give a couple of metaphors to illustrate those concepts.
---
## Comparison with object-oriented programming
## Image as stencils
Images are like templates or stencils that you can create containers from.
![stencil](images/stenciling-wall.jpg)
---
## Object-oriented programming
* Images are conceptually similar to *classes*.
@@ -107,7 +99,7 @@ If an image is read-only, how do we change it?
* We create a new container from that image.
* Then we make changes to that container.
* When we are satisfied with those changes, we transform them into a new layer.
* A new image is created by stacking the new layer on top of the old image.
@@ -126,7 +118,7 @@ If an image is read-only, how do we change it?
## Creating the first images
There is a special empty image called `scratch`.
There is a special empty image called `scratch`.
* It allows to *build from scratch*.
@@ -146,7 +138,7 @@ Note: you will probably never have to do this yourself.
* Saves all the changes made to a container into a new layer.
* Creates a new image (effectively a copy of the container).
`docker build` **(used 99% of the time)**
`docker build`
* Performs a repeatable build sequence.
* This is the preferred method!
@@ -188,8 +180,6 @@ Those images include:
* Ready-to-use components and services, like redis, postgresql...
* Over 130 at this point!
---
## User namespace
@@ -309,9 +299,9 @@ There are two ways to download images.
```bash
$ docker pull debian:jessie
Pulling repository debian
b164861940b8: Download complete
b164861940b8: Pulling image (jessie) from debian
d1881793a057: Download complete
b164861940b8: Download complete
b164861940b8: Pulling image (jessie) from debian
d1881793a057: Download complete
```
* As seen previously, images are made up of layers.

@@ -37,9 +37,7 @@ We can arbitrarily distinguish:
## Installing Docker on Linux
* The recommended method is to install the packages supplied by Docker Inc.:
https://store.docker.com
* The recommended method is to install the packages supplied by Docker Inc.
* The general method is:
@@ -81,11 +79,11 @@ class: extra-details
## Installing Docker on macOS and Windows
* On macOS, the recommended method is to use Docker for Mac:
* On macOS, the recommended method is to use Docker4Mac:
https://docs.docker.com/docker-for-mac/install/
* On Windows 10 Pro, Enterprise, and Education, you can use Docker for Windows:
* On Windows 10 Pro, Enterprise, and Eduction, you can use Docker4Windows:
https://docs.docker.com/docker-for-windows/install/
@@ -93,33 +91,6 @@ class: extra-details
https://docs.docker.com/toolbox/toolbox_install_windows/
* On Windows Server 2016, you can also install the native engine:
https://docs.docker.com/install/windows/docker-ee/
---
## Docker for Mac and Docker for Windows
* Special Docker Editions that integrate well with their respective host OS
* Provide user-friendly GUI to edit Docker configuration and settings
* Leverage the host OS virtualization subsystem (e.g. the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS)
* Installed like normal user applications on the host
* Under the hood, they both run a tiny VM (transparent to our daily use)
* Access network resources like normal applications
<br/>(and therefore, play better with enterprise VPNs and firewalls)
* Support filesystem sharing through volumes (we'll talk about this later)
* They only support running one Docker VM at a time ...
<br/>
... but we can use `docker-machine`, the Docker Toolbox, VirtualBox, etc. to get a cluster.
---
## Running Docker on macOS and Windows
@@ -139,6 +110,25 @@ This will also allow to use remote Engines exactly as if they were local.
---
## Docker4Mac and Docker4Windows
* They let you run Docker without VirtualBox
* They are installed like normal applications (think QEMU, but faster)
* They access network resources like normal applications
<br/>(and therefore, play well with enterprise VPNs and firewalls)
* They support filesystem sharing through volumes (we'll talk about this later)
* They only support running one Docker VM at a time ...
... so if you want to run a full cluster locally, install e.g. the Docker Toolbox
* They can co-exist with the Docker Toolbox
---
## Important PSA about security
* If you have access to the Docker control socket, you can take over the machine

@@ -17,7 +17,7 @@ At the end of this section, you will be able to:
---
## Local development in a container
## Containerized local development environments
We want to solve the following issues:
@@ -69,6 +69,7 @@ Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe?
```dockerfile
FROM ruby
MAINTAINER Education Team at Docker <education@docker.com>
COPY . /src
WORKDIR /src
@@ -178,8 +179,6 @@ $ docker run -d -v $(pwd):/src -P namer
* We don't specify a command to run because it is already set in the Dockerfile.
Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell).
---
## Mounting volumes inside containers

@@ -131,27 +131,6 @@ We will then show one particular method in action, using ELK and Docker's loggin
---
## A word of warning about `json-file`
- By default, log file size is unlimited.
- This means that a very verbose container *will* use up all your disk space.
(Or a less verbose container, but running for a very long time.)
- Log rotation can be enabled by setting a `max-size` option.
- Older log files can be removed by setting a `max-file` option.
- Just like other logging options, these can be set per container, or globally.
Example:
```bash
$ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch
```
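To apply the same limits globally instead, they can go in the daemon configuration (a sketch; requires restarting the Docker daemon):

```bash
cat > /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}
EOF
```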
---
## Demo: sending logs to ELK
- We are going to deploy an ELK stack.
@@ -213,7 +192,7 @@ $ docker-compose -f elk.yml up -d
- it is set with the `ELASTICSEARCH_URL` environment variable,
- by default it is `localhost:9200`, we change it to `elasticsearch:9200`.
- by default it is `localhost:9200`, we change it to `elastichsearch:9200`.
- We need to configure Logstash:

@@ -1,6 +1,6 @@
# Reducing image size
# Multi-stage builds
* In the previous example, our final image contained:
* In the previous example, our final image contain:
* our `hello` program
@@ -14,196 +14,7 @@
---
## Can't we remove superfluous files with `RUN`?
What happens if we do one of the following commands?
- `RUN rm -rf ...`
- `RUN apt-get remove ...`
- `RUN make clean ...`
--
This adds a layer which removes a bunch of files.
But the previous layers (which added the files) still exist.
---
## Removing files with an extra layer
When downloading an image, all the layers must be downloaded.
| Dockerfile instruction | Layer size | Image size |
| ---------------------- | ---------- | ---------- |
| `FROM ubuntu` | Size of base image | Size of base image |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get install somepackage` | Size of files added <br/>(e.g. a few MB) | Sum of this layer <br/>+ all previous ones |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get remove somepackage` | Almost zero <br/>(just metadata) | Same as previous one |
Therefore, `RUN rm` does not reduce the size of the image or free up disk space.
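This is easy to verify with `docker history`, which prints per-layer sizes (a sketch; the image name is arbitrary):

```bash
docker build -t test-rm - <<'EOF'
FROM ubuntu
RUN dd if=/dev/zero of=/bigfile bs=1M count=100
RUN rm /bigfile
EOF
docker history test-rm   # the dd layer is ~100MB; the rm layer is ~0B, yet the image stays big
```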
---
## Removing unnecessary files
Various techniques are available to obtain smaller images:
- collapsing layers,
- adding binaries that are built outside of the Dockerfile,
- squashing the final image,
- multi-stage builds.
Let's review them quickly.
---
## Collapsing layers
You will frequently see Dockerfiles like this:
```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
```
Or the (more readable) variant:
```dockerfile
FROM ubuntu
RUN apt-get update \
&& apt-get install xxx \
&& ... \
&& apt-get remove xxx \
&& ...
```
This `RUN` command gives us a single layer.
The files that are added, then removed in the same layer, do not grow the layer size.
---
## Collapsing layers: pros and cons
Pros:
- works on all versions of Docker
- doesn't require extra tools
Cons:
- not very readable
- some unnecessary files might still remain if the cleanup is not thorough
- that layer is expensive (slow to build)
---
## Building binaries outside of the Dockerfile
This results in a Dockerfile looking like this:
```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```
Of course, this implies that the file `xxx` exists in the build context.
That file has to exist before you can run `docker build`.
For instance, it can:
- exist in the code repository,
- be created by another tool (script, Makefile...),
- be created by another container image and extracted from the image.
See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).
---
## Building binaries outside: pros and cons
Pros:
- final image can be very small
Cons:
- requires an extra build tool
- we're back in dependency hell and "works on my machine"
Cons, if binary is added to code repository:
- breaks portability across different platforms
- grows repository size a lot if the binary is updated frequently
---
## Squashing the final image
The idea is to transform the final image into a single-layer image.
This can be done in (at least) two ways.
- Activate experimental features and squash the final image:
```bash
docker image build --squash ...
```
- Export/import the final image.
```bash
docker build -t temp-image .
docker run --entrypoint true --name temp-container temp-image
docker export temp-container | docker import - final-image
docker rm temp-container
docker rmi temp-image
```
---
## Squashing the image: pros and cons
Pros:
- single-layer images are smaller and faster to download
- removed files no longer take up storage and network resources
Cons:
- we still need to actively remove unnecessary files
- squash operation can take a lot of time (on big images)
- squash operation does not benefit from cache
<br/>
(even if we change just a tiny file, the whole image needs to be re-squashed)
---
## Multi-stage builds
Multi-stage builds allow us to have multiple *stages*.
Each stage is a separate image, and can copy files from previous stages.
We're going to see how they work in more detail.
---
# Multi-stage builds
## Multi-stage builds principles
* At any point in our `Dockerfile`, we can add a new `FROM` line.
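A minimal sketch of the mechanism (stage and file names are illustrative):

```bash
docker build -t hello-multistage - <<'EOF'
FROM gcc AS builder
WORKDIR /src
RUN echo 'int main(){return 0;}' > hello.c && gcc -static -o hello hello.c
# second stage: start from an empty image and copy only the binary
FROM scratch
COPY --from=builder /src/hello /hello
CMD ["/hello"]
EOF
```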

@@ -76,8 +76,6 @@ The last item should be done for educational purposes only!
---
class: extra-details, deep-dive
## Manipulating namespaces
- Namespaces are created with two methods:
@@ -96,8 +94,6 @@ class: extra-details, deep-dive
---
class: extra-details, deep-dive
## Namespaces lifecycle
- When the last process of a namespace exits, the namespace is destroyed.
@@ -118,8 +114,6 @@ class: extra-details, deep-dive
---
class: extra-details, deep-dive
## Namespaces can be used independently
- As mentioned in the previous slides:
@@ -156,8 +150,6 @@ class: extra-details, deep-dive
---
class: extra-details, deep-dive
## Creating our first namespace
Let's use `unshare` to create a new process that will have its own UTS namespace:
@@ -174,8 +166,6 @@ $ sudo unshare --uts
---
class: extra-details, deep-dive
## Demonstrating our uts namespace
In our new "container", check the hostname, change it, and check it:
@@ -408,8 +398,6 @@ class: extra-details
---
class: extra-details, deep-dive
## Setting up a private `/tmp`
Create a new mount namespace:
@@ -447,8 +435,6 @@ The mount is automatically cleaned up when you exit the process.
---
class: extra-details, deep-dive
## PID namespace in action
Create a new PID namespace:
@@ -467,14 +453,10 @@ Check the process tree in the new namespace:
--
class: extra-details, deep-dive
🤔 Why do we see all the processes?!?
---
class: extra-details, deep-dive
## PID namespaces and `/proc`
- Tools like `ps` rely on the `/proc` pseudo-filesystem.
@@ -489,8 +471,6 @@ class: extra-details, deep-dive
---
class: extra-details, deep-dive
## PID namespaces, take 2
- This can be solved by mounting `/proc` in the namespace.
@@ -590,8 +570,6 @@ Check `man 2 unshare` and `man pid_namespaces` if you want more details.
---
class: extra-details, deep-dive
## User namespace challenges
- UIDs need to be mapped when they are passed between processes or kernel subsystems.
@@ -708,8 +686,6 @@ cpu memory
---
class: extra-details, deep-dive
## Cgroups v1 vs v2
- Cgroups v1 are available on all systems (and widely used).
@@ -783,8 +759,6 @@ class: extra-details, deep-dive
---
class: extra-details, deep-dive
## Avoiding the OOM killer
- For some workloads (databases and stateful systems), killing
@@ -804,8 +778,6 @@ class: extra-details, deep-dive
---
class: extra-details, deep-dive
## Overhead of the memory cgroup
- Each time a process grabs or releases a page, the kernel updates counters.
@@ -824,8 +796,6 @@ class: extra-details, deep-dive
---
class: extra-details, deep-dive
## Setting up a limit with the memory cgroup
Create a new memory cgroup:
@@ -838,7 +808,7 @@ $ sudo mkdir $CG
Limit it to approximately 100 MB of memory usage:
```bash
$ sudo tee $CG/memory.memsw.limit_in_bytes <<< 100000000
$ sudo tee $CG/memory.memsw.limit_in_bytes <<<100000000
```
Move the current process to that cgroup:
@@ -849,67 +819,8 @@ $ sudo tee $CG/tasks <<< $$
The current process *and all its future children* are now limited.
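To double-check the setup, we can read back the cgroup's pseudo-files (a sketch; `memory.usage_in_bytes` is the cgroups v1 accounting file):
```bash
$ cat $CG/memory.memsw.limit_in_bytes  # prints (roughly) 100000000
$ cat $CG/memory.usage_in_bytes        # current usage of the cgroup
```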
(Confused about `<<<`? Look at the next slide!)
---
class: extra-details, deep-dive
## What's `<<<`?
- This is a "here string". (It is a non-POSIX shell extension.)
- The following commands are equivalent:
```bash
foo <<< hello
```
```bash
echo hello | foo
```
```bash
foo <<EOF
hello
EOF
```
- Why did we use that?
---
class: extra-details, deep-dive
## Writing to cgroups pseudo-files requires root
Instead of:
```bash
sudo tee $CG/tasks <<< $$
```
We could have done:
```bash
sudo sh -c "echo $$ > $CG/tasks"
```
The following commands, however, would be invalid:
```bash
sudo echo $$ > $CG/tasks
```
(The redirection would be performed by our unprivileged shell, before `sudo` runs.)
```bash
sudo -i # (or su)
echo $$ > $CG/tasks
```
(Here, `$$` would expand to the PID of the new root shell, not the one of our original process.)
---
class: extra-details, deep-dive
## Testing the memory limit
Start the Python interpreter:
@@ -949,6 +860,8 @@ Killed
- Allows setting relative weights used by the scheduler.
- We cannot set CPU limits (like, "don't use more than 10% of CPU").
---
## Cpuset cgroup


@@ -420,3 +420,8 @@ It depends on:
- false, if we focus on what matters.
---
## Kubernetes in action
.center[![Demo stamp](images/demo.jpg)]


@@ -21,7 +21,7 @@ public images is free as well.*
docker login
```
.warning[When running Docker for Mac/Windows, or
.warning[When running Docker4Mac, Docker4Windows, or
Docker on a Linux workstation, it can (and will when
possible) integrate with your system's keyring to
store your credentials securely. However, on most Linux


@@ -1,229 +0,0 @@
# Limiting resources
- So far, we have used containers as convenient units of deployment.
- What happens when a container tries to use more resources than available?
(RAM, CPU, disk usage, disk and network I/O...)
- What happens when multiple containers compete for the same resource?
- Can we limit resources available to a container?
(Spoiler alert: yes!)
---
## Container processes are normal processes
- Containers are closer to "fancy processes" than to "lightweight VMs".
- A process running in a container is, in fact, a process running on the host.
- Let's look at the output of `ps` on a container host running 3 containers:
```
0 2662 0.2 0.3 /usr/bin/dockerd -H fd://
0 2766 0.1 0.1 \_ docker-containerd --config /var/run/docker/containe
0 23479 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23497 0.0 0.0 | \_ `nginx`: master process nginx -g daemon off;
101 23543 0.0 0.0 | \_ `nginx`: worker process
0 23565 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
102 23584 9.4 11.3 | \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2
0 23707 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23725 0.0 0.0 \_ `/bin/sh`
```
- The highlighted processes are containerized processes.
<br/>
(That host is running nginx, elasticsearch, and alpine.)
---
## By default: nothing changes
- What happens when a process uses too much memory on a Linux system?
--
- Simplified answer:
- swap is used (if available);
- if there is not enough swap space, eventually, the out-of-memory killer is invoked;
- the OOM killer uses heuristics to kill processes;
- sometimes, it kills an unrelated process.
--
- What happens when a container uses too much memory?
- The same thing!
(i.e., a process eventually gets killed, possibly in another container.)
---
## Limiting container resources
- The Linux kernel offers rich mechanisms to limit container resources.
- For memory usage, the mechanism is part of the *cgroup* subsystem.
- This subsystem allows us to limit memory for a process or a group of processes.
- A container engine leverages these mechanisms to limit memory for a container.
- The out-of-memory killer has a new behavior:
- it runs when a container exceeds its allowed memory usage,
- in that case, it only kills processes in that container.
---
## Limiting memory in practice
- The Docker Engine offers multiple flags to limit memory usage.
- The two most useful ones are `--memory` and `--memory-swap`.
- `--memory` limits the amount of physical RAM used by a container.
- `--memory-swap` limits the total amount (RAM+swap) used by a container.
- The memory limit can be expressed in bytes, or with a unit suffix.
(e.g.: `--memory 100m` = 100 megabytes.)
- We will see two strategies: limiting RAM usage alone, or limiting both RAM and swap.
---
## Limiting RAM usage
Example:
```bash
docker run -ti --memory 100m python
```
If the container tries to use more than 100 MB of RAM, *and* swap is available:
- the container will not be killed,
- memory above 100 MB will be swapped out,
- in most cases, the app in the container will be slowed down (a lot).
If we run out of swap, the global OOM killer still intervenes.
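A sketch of what we might observe (the allocation size is illustrative, and we assume the host has swap enabled):
```bash
$ docker run -ti --memory 100m python
>>> data = "x" * 300000000  # ~300 MB exceeds the RAM limit: the extra pages
>>> # get swapped out; the interpreter slows down (a lot) but survives
```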
---
## Limiting both RAM and swap usage
Example:
```bash
docker run -ti --memory 100m --memory-swap 100m python
```
If the container tries to use more than 100 MB of memory, it is killed.
On the other hand, the application will never be slowed down because of swap.
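The same test with the hard limit (same illustrative allocation):
```bash
$ docker run -ti --memory 100m --memory-swap 100m python
>>> data = "x" * 300000000
Killed
```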
---
## When to pick which strategy?
- Stateful services (like databases) will lose or corrupt data when killed
- Allow them to use swap space, but monitor swap usage
- Stateless services can usually be killed with little impact
- Limit their mem+swap usage, but monitor if they get killed
- Ultimately, this is no different from "do I want swap, and how much?"
---
## Limiting CPU usage
- There are no fewer than 3 ways to limit CPU usage:
- setting a relative priority with `--cpu-shares`,
- setting a CPU% limit with `--cpus`,
- pinning a container to specific CPUs with `--cpuset-cpus`.
- They can be used separately or together.
---
## Setting relative priority
- Each container has a relative priority used by the Linux scheduler.
- By default, this priority is 1024.
- As long as CPU usage is not maxed out, this has no effect.
- When CPU usage is maxed out, each container receives CPU cycles in proportion to its relative priority.
- In other words: a container with `--cpu-shares 2048` will receive twice as much CPU as one with the default priority.
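For example (a sketch; `progrium/stress` is a common stress-testing image, used here as an assumption):
```bash
# Two CPU-bound containers competing for the same cores:
docker run -d --name fair   --cpu-shares 1024 progrium/stress --cpu 4
docker run -d --name greedy --cpu-shares 2048 progrium/stress --cpu 4
# Under contention, "greedy" should receive about twice the CPU time of "fair";
# compare their usage with: docker stats fair greedy
```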
---
## Setting a CPU% limit
- This setting will make sure that a container doesn't use more than a given % of CPU.
- The value is expressed in CPUs; therefore:
`--cpus 0.1` means 10% of one CPU,
`--cpus 1.0` means 100% of one whole CPU,
`--cpus 10.0` means 10 entire CPUs.
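A sketch with the same illustrative stress image:
```bash
# Cap a container at half of one CPU:
docker run -d --name capped --cpus 0.5 progrium/stress --cpu 1
# "docker stats capped" should report CPU usage hovering around 50%
```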
---
## Pinning containers to CPUs
- On multi-core machines, it is possible to restrict execution to a set of CPUs.
- Examples:
`--cpuset-cpus 0` forces the container to run on CPU 0;
`--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7;
`--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11.
- This will not reserve the corresponding CPUs!
(They might still be used by other containers, or uncontainerized processes.)
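For example (again with the illustrative stress image):
```bash
# Restrict a container to CPUs 0 and 1:
docker run -d --name pinned --cpuset-cpus 0-1 progrium/stress --cpu 4
# All 4 workers are scheduled on CPUs 0 and 1; the CPUs are *not* reserved
```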
---
## Limiting disk usage
- Most storage drivers do not support limiting the disk usage of containers.
(The exception is devicemapper, but its limit cannot be set easily.)
- This means that a single container could exhaust disk space for everyone.
- In practice, however, this is not a concern, because:
- data files (for stateful services) should reside on volumes,
- assets (e.g. images, user-generated content...) should reside on object stores or on volumes,
- logs are written to standard output and gathered by the container engine.
- Container disk usage can be audited with `docker ps -s` and `docker diff`.
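For reference, the two audit commands mentioned above (replace `<container>` with a name or ID):
```bash
# Per-container writable-layer size (SIZE column):
docker ps -s
# Files added (A), changed (C), or deleted (D) in a container's writable layer:
docker diff <container>
```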


@@ -36,10 +36,6 @@ individual Docker VM.*
- It comes pre-loaded with Docker and some other useful tools.
- **Keep the card with your VM IP address!**
**(We will be in a different room tomorrow.)**
---
## What *is* Docker?


@@ -33,8 +33,6 @@ Docker volumes can be used to achieve many things, including:
* Sharing a *single file* between the host and a container.
* Using remote storage and custom storage with "volume drivers".
---
## Volumes are special directories in a container
@@ -120,7 +118,7 @@ $ curl localhost:8080
## Volumes exist independently of containers
If a container is stopped or removed, its volumes still exist and are available.
If a container is stopped, its volumes still exist and are available.
Volumes can be listed and manipulated with `docker volume` subcommands:
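For example (`testvol` is an illustrative name):
```bash
docker volume ls                # list volumes
docker volume create testvol    # create a named volume
docker volume inspect testvol   # show driver, mountpoint, etc.
docker volume rm testvol        # remove it (fails if a container still uses it)
```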
@@ -197,13 +195,13 @@ Let's start another container using the `webapps` volume.
$ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp
```
Vandalize the page, save, exit.
Where `-w` sets the working directory inside the container. Vandalize the page, save and exit.
Then run `curl localhost:1234` again to see your changes.
---
## Using custom "bind-mounts"
## Managing volumes explicitly
In some cases, you want a specific directory on the host to be mapped
inside the container:
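A minimal sketch (both paths are illustrative):
```bash
# The -v host_path:container_path syntax creates a "bind-mount":
docker run -d -v /path/on/host:/path/in/container nginx
```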
@@ -246,8 +244,6 @@ of an existing container.
* Newer containers can use `--volumes-from` too.
* Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes).
---
class: extra-details
@@ -263,7 +259,7 @@ $ docker run -d --name redis28 redis:2.8
Connect to the Redis container and set some data.
```bash
$ docker run -ti --link redis28:redis busybox telnet redis 6379
$ docker run -ti --link redis28:redis alpine:3.6 telnet redis 6379
```
Issue the following commands:
@@ -302,7 +298,7 @@ class: extra-details
Connect to the Redis container and see our data.
```bash
docker run -ti --link redis30:redis busybox telnet redis 6379
docker run -ti --link redis30:redis alpine:3.6 telnet redis 6379
```
Issue a few commands.
@@ -398,15 +394,10 @@ has root-like access to the host.]
You can install plugins to manage volumes backed by particular storage systems,
or providing extra features. For instance:
* [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g.
SAN or NAS), or by cloud block stores (e.g. EBS, EFS).
* [Portworx](http://portworx.com/) - provides distributed block store for containers.
* [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale
to several petabytes. It provides interfaces for object, block and file storage.
* and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)!
* [dvol](https://github.com/ClusterHQ/dvol) - allows committing/branching/rolling back volumes;
* [Flocker](https://clusterhq.com/flocker/introduction/), [REX-Ray](https://github.com/emccode/rexray) - create and manage volumes backed by an enterprise storage system (e.g. SAN or NAS), or by cloud block stores (e.g. EBS);
* [Blockbridge](http://www.blockbridge.com/), [Portworx](http://portworx.com/) - provide distributed block store for containers;
* and much more!
---


@@ -2,7 +2,7 @@
- This was initially written to support in-person, instructor-led workshops and tutorials
- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://@@GITREPO@@/graphs/contributors)
- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors)
- You can also follow along on your own, at your own pace


@@ -3,11 +3,8 @@ title: |
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/training-20180607-montpellier)"
gitrepo: github.com/jpetazzo/container.training
slides: http://juin2018.container.training/
chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
#chat: "In person!"
exclude:
- self-paced

View File

@@ -1,51 +0,0 @@
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- common/title.md
#- logistics.md
# Bridget-specific; others use logistics.md
- logistics-bridget.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- common/composescale.md
- common/composedown.md
- kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- - kube/kubectlrun.md
- kube/kubectlexpose.md
- kube/ourapponkube.md
- - kube/dashboard.md
- kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
# Bridget hasn't added EFK yet
#- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
# - kube/links.md
# Bridget-specific
- kube/links-bridget.md
- common/thankyou.md


@@ -5,10 +5,6 @@ title: |
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
@@ -36,7 +32,7 @@ chapters:
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
- kube/logs-cli.md
- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md

slides/kube.yml Symbolic link

@@ -0,0 +1 @@
kube-fullday.yml


@@ -36,7 +36,7 @@
## Creating a daemon set
- Unfortunately, as of Kubernetes 1.10, the CLI cannot create daemon sets
- Unfortunately, as of Kubernetes 1.9, the CLI cannot create daemon sets
--
@@ -55,7 +55,7 @@
--
- option 1: [read the docs](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset)
- option 1: read the docs
--
@@ -178,65 +178,29 @@ Wait ... Now, can it be *that* easy?
--
We have two resources called `rng`:
- the *deployment* that was existing before
- the *daemon set* that we just created
We also have one too many pods.
<br/>
(The pod corresponding to the *deployment* still exists.)
---
## `deploy/rng` and `ds/rng`
- You can have different resource types with the same name
(i.e. a *deployment* and a *daemon set* both named `rng`)
- We still have the old `rng` *deployment*
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/rng 1 1 1 1 18m
```
- But now we have the new `rng` *daemon set* as well
```
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/rng 2 2 2 2 2 <none> 9s
```
---
## Too many pods
- If we check with `kubectl get pods`, we see:
- *one pod* for the deployment (named `rng-xxxxxxxxxx-yyyyy`)
- *one pod per node* for the daemon set (named `rng-zzzzz`)
```
NAME READY STATUS RESTARTS AGE
rng-54f57d4d49-7pt82 1/1 Running 0 11m
rng-b85tm 1/1 Running 0 25s
rng-hfbrr 1/1 Running 0 25s
[...]
```
We have both `deploy/rng` and `ds/rng` now!
--
The daemon set created one pod per node, except on the master node.
And one too many pods...
The master node has [taints](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) preventing pods from running there.
---
(To schedule a pod on this node anyway, the pod will require appropriate [tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).)
## Explanation
.footnote[(Off by one? We don't run these pods on the node hosting the control plane.)]
- You can have different resource types with the same name
(i.e. a *deployment* and a *daemonset* both named `rng`)
- We still have the old `rng` *deployment*
- But now we have the new `rng` *daemonset* as well
- If we look at the pods, we have:
- *one pod* for the deployment
- *one pod per node* for the daemonset
---
@@ -432,9 +396,9 @@ Of course, option 2 offers more learning opportunities. Right?
.exercise[
- Check the most recent log line of all `run=rng` pods to confirm that exactly one per node is now active:
- Check the logs of all `run=rng` pods to confirm that exactly one per node is now active:
```bash
kubectl logs -l run=rng --tail 1
kubectl logs -l run=rng
```
]
@@ -454,7 +418,7 @@ The timestamps should give us a hint about how many pods are currently receiving
## Cleaning up
- The pods of the deployment and the "old" daemon set are still running
- The pods of the "old" daemon set are still running
- We are going to identify them programmatically
@@ -467,69 +431,17 @@ The timestamps should give us a hint about how many pods are currently receiving
- Remove these pods:
```bash
kubectl delete pods -l run=rng,isactive!=yes
kubectl get pods -l run=rng,isactive!=yes -o name |
xargs kubectl delete
```
]
---
## Cleaning up stale pods
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
rng-54f57d4d49-7pt82 1/1 Terminating 0 51m
rng-54f57d4d49-vgz9h 1/1 Running 0 22s
rng-b85tm 1/1 Terminating 0 39m
rng-hfbrr 1/1 Terminating 0 39m
rng-vplmj 1/1 Running 0 7m
rng-xbpvg 1/1 Running 0 7m
[...]
```
- The extra pods (noted `Terminating` above) are going away
- ... But a new one (`rng-54f57d4d49-vgz9h` above) was restarted immediately!
--
- Remember, the *deployment* still exists, and makes sure that one pod is up and running
- If we delete the pod associated to the deployment, it is recreated automatically
---
## Deleting a deployment
.exercise[
- Remove the `rng` deployment:
```bash
kubectl delete deployment rng
```
]
--
- The pod that was created by the deployment is now being terminated:
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
rng-54f57d4d49-vgz9h 1/1 Terminating 0 4m
rng-vplmj 1/1 Running 0 11m
rng-xbpvg 1/1 Running 0 11m
[...]
```
Ding, dong, the deployment is dead! And the daemon set lives on.
---
## Avoiding extra pods
- When we changed the definition of the daemon set, it immediately created new pods. We had to remove the old ones manually.
- When we changed the definition of the daemon set, it immediately created new pods
- How could we have avoided this?
@@ -545,7 +457,7 @@ Ding, dong, the deployment is dead! And the daemon set lives on.
labels:
isactive: "yes"
'
kubectl get pods -l run=rng -l controller-revision-hash -o name |
kubectl get pods -l run=rng -o name |
xargs kubectl patch -p "$PATCH"
```

View File

@@ -256,9 +256,9 @@ The dashboard will then ask you which authentication you want to use.
- It's safe if you use HTTPS URLs from trusted sources
- Example: the official setup instructions for most pod networks
--
- It introduces new failure modes (like if you try to apply yaml from a link that's no longer valid)
- It introduces new failure modes
- Example: the official setup instructions for most pod networks


@@ -48,11 +48,6 @@
helm init
```
- Add the `helm` completion:
```bash
. <(helm completion $(basename $SHELL))
```
]
---


@@ -3,7 +3,7 @@
- This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person,
instructor-led workshops and tutorials
- Credit is also due to [multiple contributors](https://@@GITREPO@@/graphs/contributors) — thank you!
- Credit is also due to [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) — thank you!
- You can also follow along on your own, at your own pace


@@ -20,10 +20,9 @@
.exercise[
- Let's ping `1.1.1.1`, Cloudflare's
[public DNS resolver](https://blog.cloudflare.com/announcing-1111/):
- Let's ping `goo.gl`:
```bash
kubectl run pingpong --image alpine ping 1.1.1.1
kubectl run pingpong --image alpine ping goo.gl
```
]
@@ -42,7 +41,8 @@ OK, what just happened?
- List most resource types:
```bash
kubectl get all
kubectl get all # This was broken in Kubernetes 1.10, so ...
kubectl get all -o custom-columns=KIND:.kind,NAME:.metadata.name
```
]
@@ -50,11 +50,9 @@ OK, what just happened?
--
We should see the following things:
- `deployment.apps/pingpong` (the *deployment* that we just created)
- `replicaset.apps/pingpong-xxxxxxxxxx` (a *replica set* created by the deployment)
- `pod/pingpong-xxxxxxxxxx-yyyyy` (a *pod* created by the replica set)
Note: as of 1.10.1, resource types are displayed in more detail.
- A `Deployment` named `pingpong` (the thing that we just created)
- A `ReplicaSet` named `pingpong-xxxx` (created by the deployment)
- A `Pod` named `pingpong-yyyy` (created by the replica set)
---
@@ -83,30 +81,19 @@ Note: as of 1.10.1, resource types are displayed in more detail.
## Our `pingpong` deployment
- `kubectl run` created a *deployment*, `deployment.apps/pingpong`
- `kubectl run` created a *deployment*, `deploy/pingpong`
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/pingpong 1 1 1 1 10m
```
- That deployment created a *replica set*, `rs/pingpong-xxxx`
- That deployment created a *replica set*, `replicaset.apps/pingpong-xxxxxxxxxx`
```
NAME DESIRED CURRENT READY AGE
replicaset.apps/pingpong-7c8bbcd9bc 1 1 1 10m
```
- That replica set created a *pod*, `pod/pingpong-xxxxxxxxxx-yyyyy`
```
NAME READY STATUS RESTARTS AGE
pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
```
- That replica set created a *pod*, `po/pingpong-yyyy`
- We'll see later how these folks play together for:
- scaling, high availability, rolling updates
- scaling
- high availability
- rolling updates
---
@@ -172,7 +159,7 @@ pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
]
Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?
Note: what if we tried to scale `rs/pingpong-xxxx`?
We could! But the *deployment* would notice it right away, and scale back to the initial level.
@@ -200,7 +187,7 @@ We could! But the *deployment* would notice it right away, and scale back to the
- Destroy a pod:
```bash
kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
kubectl delete pod pingpong-yyyy
```
]
@@ -246,15 +233,15 @@ Unfortunately, `--follow` cannot (yet) be used to stream the logs from multiple
---
## Aren't we flooding 1.1.1.1?
class: title
- If you're wondering this, good question!
- Don't worry, though:
*APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.*
(Source: https://blog.cloudflare.com/announcing-1111/)
- It's very unlikely that our concerted pings manage to produce
even a modest blip at Cloudflare's NOC!
Meanwhile,
<br/>
at the Google NOC ...
<br/>
<br/>
.small[“Why the hell]
<br/>
.small[are we getting 1000 packets per second]
<br/>
.small[of ICMP ECHO traffic from these IPs?!?”]


@@ -63,7 +63,7 @@
## Kubernetes network model: in practice
- The nodes that we are using have been set up to use [Weave](https://github.com/weaveworks/weave)
- The nodes that we are using have been set up to use Weave
- We don't endorse Weave in any particular way; it just Works For Us


@@ -40,12 +40,7 @@
## Creating namespaces
- Creating a namespace is done with the `kubectl create namespace` command:
```bash
kubectl create namespace blue
```
- We can also get fancy and use a very minimal YAML snippet, e.g.:
- We can create namespaces with a very minimal YAML, e.g.:
```bash
kubectl apply -f- <<EOF
apiVersion: v1
@@ -55,8 +50,6 @@
EOF
```
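For reference, the complete snippet reads roughly as follows (the hunk above elides the middle lines; `blue` matches the name used with `kubectl create namespace`):
```bash
kubectl apply -f- <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: blue
EOF
```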
- The two methods above are identical
- If we are using a tool like Helm, it will create namespaces automatically
---


@@ -33,23 +33,6 @@
---
## Checking current rollout parameters
- Recall how we build custom reports with `kubectl` and `jq`:
.exercise[
- Show the rollout plan for our deployments:
```bash
kubectl get deploy -o json |
jq ".items[] | {name:.metadata.name} + .spec.strategy.rollingUpdate"
```
]
---
## Rolling updates in practice
- As of Kubernetes 1.8, we can do rolling updates with:
@@ -127,13 +110,11 @@ That rollout should be pretty quick. What shows in the web UI?
- Kubernetes sends a "polite" shutdown request to the worker, which ignores it
- After a grace period, Kubernetes gets impatient and kills the container
(The grace period is 30 seconds, but [can be changed](https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods) if needed)
- Eventually, Kubernetes gets impatient and kills the container
---
## Rolling out something invalid
## Rolling out a boo-boo
- What happens if we make a mistake?
@@ -158,38 +139,6 @@ Our rollout is stuck. However, the app is not dead (just 10% slower).
---
## What's going on with our rollout?
- Why is our app 10% slower?
- Because `MaxUnavailable=1`: the rollout terminated 1 replica out of the 10 available
- Okay, but why do we see 2 new replicas being rolled out?
- Because `MaxSurge=1`: in addition to replacing the terminated replica, the rollout also starts one more
---
class: extra-details
## The nitty-gritty details
- We start with 10 pods running for the `worker` deployment
- Current settings: MaxUnavailable=1 and MaxSurge=1
- When we start the rollout:
- one replica is taken down (as per MaxUnavailable=1)
- another is created (with the new version) to replace it
- another is created (with the new version) per MaxSurge=1
- Now we have 9 replicas up and running, and 2 being deployed
- Our rollout is stuck at this point!
---
## Recovering from a bad rollout
- We could push some `v0.3` image
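Or we could roll back; a sketch of the relevant commands (the `worker` deployment name comes from this lab; the `kubectl rollout` subcommands exist as of Kubernetes 1.8):
```bash
# Inspect the revision history, then return to the previous revision:
kubectl rollout history deployment worker
kubectl rollout undo deployment worker
kubectl rollout status deployment worker
```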
@@ -271,8 +220,6 @@ spec:
minReadySeconds: 10
"
kubectl rollout status deployment worker
kubectl get deploy -o json worker |
jq "{name:.metadata.name} + .spec.strategy.rollingUpdate"
```
]


@@ -20,7 +20,7 @@
6. Copy the configuration file generated by `kubeadm init`
- Check the [prepare VMs README](https://@@GITREPO@@/blob/master/prepare-vms/README.md) for more details
- Check the [prepare VMs README](https://github.com/jpetazzo/container.training/blob/master/prepare-vms/README.md) for more details
---
@@ -30,12 +30,12 @@
- Doesn't set up the overlay network
- Scripting is complex
<br/>
(because extracting the token requires advanced `kubectl` commands)
- Doesn't set up multi-master (no high availability)
--
(At least ... not yet!)
--
- "It's still twice as many steps as setting up a Swarm cluster 😕" -- Jérôme
@@ -65,23 +65,4 @@
Probably the closest to a multi-cloud/hybrid solution so far, but in development
---
## Even more deployment options
- If you like Ansible:
[kubespray](https://github.com/kubernetes-incubator/kubespray)
- If you like Terraform:
[typhoon](https://github.com/poseidon/typhoon/)
- You can also learn how to install every component manually, with
the excellent tutorial [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way)
*Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.*
- There are also many commercial options available!
- For a longer list, check the Kubernetes documentation:
<br/>
it has a great guide to [pick the right solution](https://kubernetes.io/docs/setup/pick-right-solution/) to set up Kubernetes.
- Also, many commercial options!


@@ -1,8 +1,8 @@
## Versions installed
- Kubernetes 1.10.3
- Kubernetes 1.10.0
- Docker Engine 18.03.0-ce
- Docker Compose 1.21.1
- Docker Compose 1.20.1
.exercise[
@@ -22,7 +22,7 @@ class: extra-details
## Kubernetes and Docker compatibility
- Kubernetes 1.10.x only validates Docker Engine versions [1.11.2 to 1.13.1 and 17.03.x](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies)
- Kubernetes 1.10 only validates Docker Engine versions [1.11.2 to 1.13.1 and 17.03.x](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies)
--


@@ -6,9 +6,9 @@
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- The training will run from 9:15 to 18:00
- The training will run from 9:15 to 17:00
- There will be a lunch break from 12:00 to 13:30
- There will be a lunch break at 12:30
(And coffee breaks!)


@@ -114,8 +114,6 @@ def generatefromyaml(manifest, filename):
html = html.replace("@@MARKDOWN@@", markdown)
html = html.replace("@@EXCLUDE@@", exclude)
html = html.replace("@@CHAT@@", manifest["chat"])
html = html.replace("@@GITREPO@@", manifest["gitrepo"])
html = html.replace("@@SLIDES@@", manifest["slides"])
html = html.replace("@@TITLE@@", manifest["title"].replace("\n", " "))
return html


@@ -141,7 +141,7 @@ It alters the code path for `docker run`, so it is allowed only under strict cir
- Update `webui` so that we can connect to it from outside:
```bash
docker service update webui --publish-add 8000:80
docker service update webui --publish-add 8000:80 --detach=false
```
]
@@ -197,7 +197,7 @@ It has been replaced by the new version, with port 80 accessible from outside.
- Bring up more workers:
```bash
docker service update worker --replicas 10
docker service update worker --replicas 10 --detach=false
```
- Check the result in the web UI
@@ -235,7 +235,7 @@ You should see the performance peaking at 10 hashes/s (like before).
- Re-create the `rng` service with *global scheduling*:
```bash
docker service create --name rng --network dockercoins --mode global \
$REGISTRY/rng:$TAG
--detach=false $REGISTRY/rng:$TAG
```
- Look at the result in the web UI
@@ -258,12 +258,14 @@ class: extra-details
- This might change in the future (after all, it was possible in 1.12 RC!)
- As of Docker Engine 18.03, other parameters requiring to `rm`/`create` the service are:
- As of Docker Engine 17.05, other parameters requiring to `rm`/`create` the service are:
- service name
- hostname
- network
---
## Removing everything


@@ -114,7 +114,7 @@ services:
- Deploy our local registry:
```bash
docker stack deploy --compose-file registry.yml registry
docker stack deploy registry --compose-file registry.yml
```
]
@@ -304,7 +304,7 @@ services:
- Create the application stack:
```bash
docker stack deploy --compose-file dockercoins.yml dockercoins
docker stack deploy dockercoins --compose-file dockercoins.yml
```
]


@@ -49,7 +49,7 @@ This will display the unlock key. Copy-paste it somewhere safe.
]
Note: if you are doing the workshop on your own, using nodes
that you [provisioned yourself](https://@@GITREPO@@/tree/master/prepare-machine) or with [Play-With-Docker](http://play-with-docker.com/), you might have to use a different method to restart the Engine.
that you [provisioned yourself](https://github.com/jpetazzo/container.training/tree/master/prepare-machine) or with [Play-With-Docker](http://play-with-docker.com/), you might have to use a different method to restart the Engine.
---


@@ -20,6 +20,23 @@
---
class: extra-details
## `--detach` for service creation
(New in Docker Engine 17.05)
If you are running Docker 17.05 to 17.09, you will see the following message:
```
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
```
You can ignore that for now; but we'll come back to it in just a few minutes!
---
## Checking service logs
(New in Docker Engine 17.05)
@@ -45,6 +62,20 @@ Note: by default, when a container is destroyed (e.g. when scaling down), its lo
class: extra-details
## Before Docker Engine 17.05
- Docker 1.13/17.03/17.04 have `docker service logs` as an experimental feature
<br/>(available only when enabling the experimental feature flag)
- We have to use `docker logs`, which only works on local containers
- We will have to connect to the node running our container
<br/>(unless it was scheduled locally, of course)
---
class: extra-details
## Looking up where our container is running
- The `docker service ps` command told us where our container was scheduled
@@ -96,7 +127,7 @@ class: extra-details
- Scale the service to ensure 2 copies per node:
```bash
docker service update pingpong --replicas 6
docker service update pingpong --replicas 10
```
- Check that we have two containers on the current node:
@@ -110,16 +141,15 @@ class: extra-details
## Monitoring deployment progress with `--detach`
(New in Docker Engine 17.10)
(New in Docker Engine 17.05)
- The CLI monitors commands that create/update/delete services
- The CLI can monitor commands that create/update/delete services
- In effect, `--detach=false` is the default
- `--detach=false`
- synchronous operation
- the CLI will monitor and display the progress of our request
- it exits only when the operation is complete
- Ctrl-C to detach at any time
- `--detach=true`
@@ -168,12 +198,12 @@ class: extra-details
- Scale the service to ensure 3 copies per node:
```bash
docker service update pingpong --replicas 9 --detach=false
docker service update pingpong --replicas 15 --detach=false
```
- And then to 4 copies per node:
```bash
docker service update pingpong --replicas 12 --detach=true
docker service update pingpong --replicas 20 --detach=true
```
]
@@ -207,7 +237,7 @@ class: extra-details
- Create an ElasticSearch service (and give it a name while we're at it):
```bash
docker service create --name search --publish 9200:9200 --replicas 5 \
docker service create --name search --publish 9200:9200 --replicas 7 \
elasticsearch`:2`
```
@@ -237,7 +267,7 @@ The latest version of the ElasticSearch image won't start without mandatory conf
---
class: extra-details, pic
class: extra-details
![diagram showing what happens during docker service create, courtesy of @aluzzardi](images/docker-service-create.svg)
@@ -291,10 +321,10 @@ apk add --no-cache jq
## Load balancing results
Traffic is handled by our cluster's [routing mesh](
Traffic is handled by our cluster's [TCP routing mesh](
https://docs.docker.com/engine/swarm/ingress/).
Each request is served by one of the instances, in rotation.
Each request is served by one of the 7 instances, in rotation.
Note: if you try to access the service from your browser,
you will probably see the same
@@ -303,13 +333,7 @@ to re-use the same connection.
---
class: pic
![routing mesh](images/ingress-routing-mesh.png)
---
## Under the hood of the routing mesh
## Under the hood of the TCP routing mesh
- Load balancing is done by IPVS
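To peek at the IPVS state on a node, one debugging sketch (it assumes `ipvsadm` is installed; the `ingress_sbox` namespace path is an implementation detail that may vary across versions):
```bash
sudo nsenter --net=/var/run/docker/netns/ingress_sbox ipvsadm -ln
```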
@@ -328,9 +352,9 @@ class: pic
There are many ways to deal with inbound traffic on a Swarm cluster.
- Put all (or a subset) of your nodes in a DNS `A` record (good for web clients)
- Put all (or a subset) of your nodes in a DNS `A` record
- Assign your nodes (or a subset) to an external load balancer (ELB, etc.)
- Assign your nodes (or a subset) to an ELB
- Use a virtual IP and make sure that it is assigned to an "alive" node
@@ -338,37 +362,22 @@ There are many ways to deal with inbound traffic on a Swarm cluster.
---
class: pic
![external LB](images/ingress-lb.png)
---
class: btw-labels
## Managing HTTP traffic
- The TCP routing mesh doesn't parse HTTP headers
- If you want to place multiple HTTP services on port 80/443, you need something more
- If you want to place multiple HTTP services on port 80, you need something more
- You can set up NGINX or HAProxy on port 80/443 to route connections to the correct
Service, but they need to be "Swarm aware" to dynamically update configs
- You can set up NGINX or HAProxy on port 80 to do the virtual host switching
--
- Docker Universal Control Plane provides its own [HTTP routing mesh](
https://docs.docker.com/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services/)
- Docker EE provides its own [Layer 7 routing](https://docs.docker.com/ee/ucp/interlock/)
- add a specific label starting with `com.docker.ucp.mesh.http` to your services
- Service labels like `com.docker.lb.hosts=<FQDN>` are detected automatically via Docker
API and dynamically update the configuration
--
- Two common open source options:
- [Traefik](https://traefik.io/) - popular, many features, requires running on managers,
needs key/value for HA
- [Docker Flow Proxy](http://proxy.dockerflow.com/) - uses HAProxy, made for
Swarm by Docker Captain [@vfarcic](https://twitter.com/vfarcic)
- labels are detected automatically and dynamically update the configuration
---
@@ -386,7 +395,7 @@ class: btw-labels
- owner of a service (for billing, paging...)
- correlate Swarm objects together (services, volumes, configs, secrets, etc.)
- etc.
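For instance (service and label names are hypothetical):
```bash
# Attach an owner label at creation time, then filter on it:
docker service create --name web --label owner=teamA nginx
docker service ls --filter label=owner=teamA
```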
---
@@ -439,10 +448,16 @@ class: extra-details
.exercise[
- Run this simple-yet-beautiful visualization app:
- Get the source code of this simple-yet-beautiful visualization app:
```bash
cd ~/container.training/stacks
docker-compose -f visualizer.yml up -d
cd ~
git clone git://github.com/dockersamples/docker-swarm-visualizer
```
- Build and run the Swarm visualizer:
```bash
cd docker-swarm-visualizer
docker-compose up -d
```
<!-- ```longwait Creating dockerswarmvisualizer_viz_1``` -->
@@ -483,7 +498,7 @@ class: extra-details
- Instead of viewing your cluster, this could take care of logging, metrics, autoscaling ...
- We can run it within a service, too! We won't do it yet, but the command would look like:
- We can run it within a service, too! We won't do it, but the command would look like:
```bash
docker service create \
@@ -491,16 +506,12 @@ class: extra-details
--name viz --constraint node.role==manager ...
```
.footnote[
Credits: the visualization code was written by
[Francisco Miranda](https://github.com/maroshii).
<br/>
[Mano Marks](https://twitter.com/manomarks) adapted
it to Swarm and maintains it.
]
---
## Terminate our services


@@ -120,7 +120,7 @@ We will use the following Compose file (`stacks/dockercoins+healthcheck.yml`):
- Deploy the updated stack:
```bash
docker stack deploy --compose-file dockercoins+healthcheck.yml dockercoins
docker stack deploy dockercoins --compose-file dockercoins+healthcheck.yml
```
]
@@ -146,7 +146,7 @@ First, let's make an "innocent" change and deploy it.
docker-compose -f dockercoins+healthcheck.yml build
docker-compose -f dockercoins+healthcheck.yml push
docker service update dockercoins_hasher \
--image=127.0.0.1:5000/hasher:$TAG
--detach=false --image=127.0.0.1:5000/hasher:$TAG
```
]
@@ -170,7 +170,7 @@ And now, a breaking change that will cause the health check to fail:
docker-compose -f dockercoins+healthcheck.yml build
docker-compose -f dockercoins+healthcheck.yml push
docker service update dockercoins_hasher \
--image=127.0.0.1:5000/hasher:$TAG
--detach=false --image=127.0.0.1:5000/hasher:$TAG
```
]


@@ -3,7 +3,7 @@
- This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person,
instructor-led workshops and tutorials
- Over time, [multiple contributors](https://@@GITREPO@@/graphs/contributors) also helped to improve these materials — thank you!
- Over time, [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) also helped to improve these materials — thank you!
- You can also follow along on your own, at your own pace


@@ -113,7 +113,7 @@ class: elk-manual
- We could author a custom image bundling this configuration
- We can also pass the [configuration](https://@@GITREPO@@/blob/master/elk/logstash.conf) on the command line
- We can also pass the [configuration](https://github.com/jpetazzo/container.training/blob/master/elk/logstash.conf) on the command line
.exercise[
@@ -187,7 +187,7 @@ class: elk-auto
```bash
docker-compose -f elk.yml build
docker-compose -f elk.yml push
docker stack deploy -c elk.yml elk
docker stack deploy elk -c elk.yml
```
]
@@ -195,7 +195,7 @@ class: elk-auto
Note: the *build* and *push* steps are not strictly necessary, but they don't hurt!
Let's have a look at the [Compose file](
https://@@GITREPO@@/blob/master/stacks/elk.yml).
https://github.com/jpetazzo/container.training/blob/master/stacks/elk.yml).
---


@@ -169,7 +169,7 @@ class: in-person
This should tell us that we are talking to `node3`.
Note: it can be useful to use a [custom shell prompt](
https://@@GITREPO@@/blob/master/prepare-vms/scripts/postprep.rc#L68)
https://github.com/jpetazzo/container.training/blob/master/prepare-vms/scripts/postprep.rc#L68)
reflecting the `DOCKER_HOST` variable.
---


@@ -1183,7 +1183,7 @@ class: prom
]
You should see 7 endpoints (3 cadvisor, 3 node, 1 prometheus).
You should see 11 endpoints (5 cadvisor, 5 node, 1 prometheus).
Their state should be "UP".


@@ -35,7 +35,33 @@ class: in-person
## Building our full cluster
- Let's get the token, and use a one-liner for the remaining node with SSH
- We could SSH to nodes 3, 4, 5, and copy-paste the command
--
class: in-person
- Or we could use the AWESOME POWER OF THE SHELL!
--
class: in-person
![Mario Red Shell](images/mario-red-shell.png)
--
class: in-person
- No, not *that* shell
---
class: in-person
## Let's form like Swarm-tron
- Let's get the token, and loop over the remaining nodes with SSH
.exercise[
@@ -44,9 +70,11 @@ class: in-person
TOKEN=$(docker swarm join-token -q manager)
```
- Add the remaining node:
- Loop over the 3 remaining nodes:
```bash
ssh node3 docker swarm join --token $TOKEN node1:2377
for NODE in node3 node4 node5; do
ssh $NODE docker swarm join --token $TOKEN node1:2377
done
```
]


@@ -33,7 +33,7 @@ TOKEN=$(docker swarm join-token -q manager)
for N in $(seq 2 5); do
DOCKER_HOST=tcp://node$N:2375 docker swarm join --token $TOKEN node1:2377
done
git clone git://@@GITREPO@@
git clone git://github.com/jpetazzo/container.training
cd container.training/stacks
docker stack deploy --compose-file registry.yml registry
docker-compose -f dockercoins.yml build


@@ -79,11 +79,6 @@ We just have to adapt this to our application, which has 4 services!
- doesn't come with anything either
- located wherever you want
- **Lots of 3rd party cloud or self-hosted options**
- AWS/Azure/Google Container Registry
- GitLab, Quay, JFrog
]
---
@@ -112,7 +107,7 @@ class: extra-details
- Make sure we have a Docker Hub account
- [Activate a Docker EE subscription](
- [Activate a Docker Datacenter subscription](
https://hub.docker.com/enterprise/trial/)
- Install DTR on our machines


@@ -1,8 +1,8 @@
## Brand new versions!
- Engine 18.03
- Compose 1.21
- Machine 0.14
- Engine 17.12
- Compose 1.17
- Machine 0.13
.exercise[
@@ -89,11 +89,8 @@ class: pic
| 2016 | 1.12 | Swarm mode, routing mesh, encrypted networking, healthchecks
| 2017 | 1.13 | Stacks, attachable overlays, image squash and compress
| 2017 | 1.13 | Windows Server 2016 Swarm mode
| 2017 | 17.03 | Secrets, encrypted Raft
| 2017 | 17.03 | Secrets
| 2017 | 17.04 | Update rollback, placement preferences (soft constraints)
| 2017 | 17.06 | Swarm configs, node/service events, multi-stage build, service logs
| 2017 | 17.05 | Multi-stage image builds, service logs
| 2017 | 17.06 | Swarm configs, node/service events
| 2017 | 17.06 | Windows Server 2016 Swarm overlay networks, secrets
| 2017 | 17.09 | ADD/COPY chown, start\_period, stop-signal, overlay2 default
| 2017 | 17.12 | containerd, Hyper-V isolation, Windows routing mesh
| 2018 | 18.03 | Templates for secrets/configs, multi-yaml stacks, LCOW
| 2018 | 18.03 | Stack deploy to Kubernetes, `docker trust`, tmpfs, manifest CLI

slides/theme.css Normal file

@@ -0,0 +1,85 @@
@import url('https://fonts.googleapis.com/css?family=PT+Sans');
body {
font-family: 'PT Sans', sans-serif;
max-width: 900px;
margin: 0 auto 0 auto;
font-size: 13pt;
background: lightgrey;
}
body > div {
background: white;
padding: 0 5em 0 5em;
}
ul, p, h1, h2, h3, h4, h5, h6 {
margin: 0;
}
h1, h2, h3 {
padding-top: 1em;
padding-bottom: 0.5em;
}
ul, p {
padding-bottom: 1em;
}
img {
width: 200px;
float: left;
margin-right: 1em;
margin-bottom: 0.5em;
margin-top: 3em;
}
h2:nth-of-type(n+5) {
color: #0069A8;
}
h2:nth-of-type(-n+4) {
text-align: center;
}
h2:nth-of-type(1) {
font-size: 3em;
}
h2:nth-of-type(2) {
font-size: 2em;
}
h2:nth-of-type(3) {
font-size: 1.5em;
}
h2:nth-of-type(4) {
font-size: 1em;
}
/* index */
.index h4 {
font-size: 2.0em;
}
.index h5 {
font-size: 1.3em;
}
.index h6 {
font-size: 1.0em;
}
.index h4, .index h5, .index h6, .index p {
padding: 5pt;
}
div.index {
}
div.block {
background: #e1f8ff;
padding: 1em;
margin: 2em;
}
.index {
font-size: 1.5em;
padding: 0.5em;
}


@@ -1,13 +0,0 @@
version: "3"
services:
viz:
image: dockersamples/visualizer
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
ports:
- "8080:8080"
deploy:
placement:
constraints:
- node.role == manager