Initial release
.woodpecker.yml (new file, 0 lines)

slides/Dockerfile (new file, 4 lines)
@@ -0,0 +1,4 @@
FROM alpine:3.17
RUN apk add --no-cache entr py3-pip git zip
COPY requirements.txt .
RUN pip3 install -r requirements.txt
slides/README.md (new file, 64 lines)
@@ -0,0 +1,64 @@
# MarkMaker

General principles:

- each slide deck is described in a YAML manifest;
- the YAML manifest lists a number of Markdown files
  that compose the slide deck;
- a Python script "compiles" the YAML manifest into
  an HTML file;
- that HTML file can be displayed in your browser
  (you don't need to host it), or you can publish it
  (along with a few static assets) if you want.


## Getting started

Look at the YAML file corresponding to the deck that
you want to edit. The format should be self-explanatory.

*I (Jérôme) am still in the process of fine-tuning that
format. Once I settle on something, I will add better
documentation.*

Make changes in the YAML file, and/or in the referenced
Markdown files. If you have never used Remark before:

- use `---` to separate slides,
- use `.foo[bla]` if you want `bla` to have CSS class `foo`,
- define (or edit) CSS classes in [workshop.css](workshop.css).

After making changes, run `./build.sh once`; it will
compile each `foo.yml` file into `foo.yml.html`.

You can also run `./build.sh forever`: it will monitor the current
directory and rebuild the slides automatically when files are modified.

If you have problems running `./build.sh` (because of
Python dependencies or whatever), you can also run
`docker-compose up` in this directory.
It will start the `./build.sh forever` script in a container.
It will also start a web server exposing the slides
(but the slides should also work if you load them from your
local filesystem).


## Publishing pipeline

Each time we push to `master`, a webhook pings
[Netlify](https://www.netlify.com/), which will pull
the repo, build the slides (by running `build.sh once`),
and publish them to http://container.training/.

Pull requests are automatically deployed to testing
subdomains. I had no idea that I would ever say this
about a static page hosting service, but it is seriously awesome. ⚡️💥


## Extra bells and whistles

You can run `./slidechecker foo.yml.html` to check for
missing images and show the number of slides in that deck.
It requires `phantomjs` to be installed. It takes some
time to run, so it is not yet integrated with the publishing
pipeline.
slides/TODO (new file, 8 lines)
@@ -0,0 +1,8 @@
Black belt references that I want to add somewhere:

What Have Namespaces Done for You Lately?
https://www.youtube.com/watch?v=MHv6cWjvQjM&list=PLkA60AVN3hh-biQ6SCtBJ-WVTyBmmYho8&index=8

Cilium: Network and Application Security with BPF and XDP
https://www.youtube.com/watch?v=ilKlmTDdFgk&list=PLkA60AVN3hh-biQ6SCtBJ-WVTyBmmYho8&index=9
slides/_redirects (new file, 25 lines)
@@ -0,0 +1,25 @@
# Uncomment and/or edit one of the following lines if necessary.
#/ /kube-halfday.yml.html 200!
#/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200!

# And this allows us to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack

/dockermastery https://www.udemy.com/course/docker-mastery/?referralCode=1410924A733D33635CCB
/kubernetesmastery https://www.udemy.com/course/kubernetesmastery/?referralCode=7E09090AF9B79E6C283F
#/dockermastery https://www.udemy.com/course/docker-mastery/?couponCode=DOCKERALLDAY
#/kubernetesmastery https://www.udemy.com/course/kubernetesmastery/?couponCode=DOCKERALLDAY

# Shortlink for the QR code
/q /qrcode.html 200

# Shortlinks for next training in English and French
#/next https://www.eventbrite.com/e/livestream-intensive-kubernetes-bootcamp-tickets-103262336428
/next https://skillsmatter.com/courses/700-advanced-kubernetes-concepts-workshop-jerome-petazzoni
/hi5 https://enix.io/fr/services/formation/online/
/us https://www.ardanlabs.com/live-training-events/deploying-microservices-and-traditional-applications-with-kubernetes-march-28-2022.html
/uk https://skillsmatter.com/workshops/827-deploying-microservices-and-traditional-applications-with-kubernetes-with-jerome-petazzoni

# Survey form
/please https://docs.google.com/forms/d/e/1FAIpQLSfIYSgrV7tpfBNm1hOaprjnBHgWKn5n-k5vtNXYJkOX1sRxng/viewform
slides/appendcheck.py (new executable file, 17 lines)
@@ -0,0 +1,17 @@
#!/usr/bin/env python

import logging
import os
import subprocess
import sys

logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"))

filename = sys.argv[1]

logging.info("Checking file {}...".format(filename))
text = subprocess.check_output(["./slidechecker.js", filename]).decode()
html = open(filename).read()
html = html.replace("</textarea>", "\n---\n```\n{}\n```\n</textarea>".format(text))

open(filename, "w").write(html)
slides/autopilot/autotest.py (new executable file, 544 lines)
@@ -0,0 +1,544 @@
#!/usr/bin/env python
# coding: utf-8

import click
import logging
import os
import random
import re
import select
import subprocess
import sys
import time
import uuid
import yaml


logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"))


TIMEOUT = 60  # 1 minute

# This one is not a constant. It's an ugly global.
IPADDR = None


class State(object):

    def __init__(self):
        self.clipboard = ""
        self.interactive = True
        self.verify_status = True
        self.simulate_type = False
        self.switch_desktop = False
        self.sync_slides = False
        self.open_links = False
        self.run_hidden = True
        self.slide = 1
        self.snippet = 0

    def load(self):
        data = yaml.safe_load(open("state.yaml"))
        self.clipboard = str(data["clipboard"])
        self.interactive = bool(data["interactive"])
        self.verify_status = bool(data["verify_status"])
        self.simulate_type = bool(data["simulate_type"])
        self.switch_desktop = bool(data["switch_desktop"])
        self.sync_slides = bool(data["sync_slides"])
        self.open_links = bool(data["open_links"])
        self.run_hidden = bool(data["run_hidden"])
        self.slide = int(data["slide"])
        self.snippet = int(data["snippet"])

    def save(self):
        with open("state.yaml", "w") as f:
            yaml.dump(dict(
                clipboard=self.clipboard,
                interactive=self.interactive,
                verify_status=self.verify_status,
                simulate_type=self.simulate_type,
                switch_desktop=self.switch_desktop,
                sync_slides=self.sync_slides,
                open_links=self.open_links,
                run_hidden=self.run_hidden,
                slide=self.slide,
                snippet=self.snippet,
            ), f, default_flow_style=False)


state = State()


outfile = open("autopilot.log", "w")

def hrule():
    return "="*int(subprocess.check_output(["tput", "cols"]))

# A "snippet" is something that the user is supposed to do in the workshop.
# Most of the "snippets" are shell commands.
# Some of them can be key strokes or other actions.
# In the markdown source, they are the code sections (identified by triple-
# quotes) within .exercise[] sections.

class Snippet(object):

    def __init__(self, slide, content):
        self.slide = slide
        self.content = content
        # Extract the "method" (e.g. bash, keys, ...)
        # On multi-line snippets, the method is alone on the first line.
        # On single-line snippets, the data follows the method immediately.
        if '\n' in content:
            self.method, self.data = content.split('\n', 1)
            self.data = self.data.strip()
        elif ' ' in content:
            self.method, self.data = content.split(' ', 1)
        else:
            self.method, self.data = content, None
        self.next = None

    def __str__(self):
        return self.content


class Slide(object):

    current_slide = 0

    def __init__(self, content):
        self.number = Slide.current_slide
        Slide.current_slide += 1

        # Remove commented-out slides
        # (remark.js considers ??? to be the separator for speaker notes)
        content = re.split(r"\n\?\?\?\n", content)[0]
        self.content = content

        self.snippets = []
        exercises = re.findall(r"\.exercise\[(.*)\]", content, re.DOTALL)
        for exercise in exercises:
            if "```" in exercise:
                previous = None
                for snippet_content in exercise.split("```")[1::2]:
                    snippet = Snippet(self, snippet_content)
                    if previous:
                        previous.next = snippet
                    previous = snippet
                    self.snippets.append(snippet)
            else:
                logging.warning("Exercise on slide {} does not have any ``` snippet."
                                .format(self.number))
        self.debug()

    def __str__(self):
        text = self.content
        for snippet in self.snippets:
            text = text.replace(snippet.content, ansi("7")(snippet.content))
        return text

    def debug(self):
        logging.debug("\n{}\n{}\n{}".format(hrule(), self.content, hrule()))


def focus_slides():
    if not state.switch_desktop:
        return
    subprocess.check_output(["i3-msg", "workspace", "3"])
    subprocess.check_output(["i3-msg", "workspace", "1"])

def focus_terminal():
    if not state.switch_desktop:
        return
    subprocess.check_output(["i3-msg", "workspace", "2"])
    subprocess.check_output(["i3-msg", "workspace", "1"])

def focus_browser():
    if not state.switch_desktop:
        return
    subprocess.check_output(["i3-msg", "workspace", "4"])
    subprocess.check_output(["i3-msg", "workspace", "1"])


def ansi(code):
    return lambda s: "\x1b[{}m{}\x1b[0m".format(code, s)


# Sleep for the indicated delay, but allow interruption by pressing ENTER.
# If interrupted, return True.
def interruptible_sleep(t):
    rfds, _, _ = select.select([0], [], [], t)
    return 0 in rfds


def wait_for_string(s, timeout=TIMEOUT):
    logging.debug("Waiting for string: {}".format(s))
    deadline = time.time() + timeout
    while time.time() < deadline:
        output = capture_pane()
        if s in output:
            return
        if interruptible_sleep(1): return
    raise Exception("Timed out while waiting for {}!".format(s))


def wait_for_prompt():
    logging.debug("Waiting for prompt.")
    deadline = time.time() + TIMEOUT
    while time.time() < deadline:
        output = capture_pane()
        # If we are not at the bottom of the screen, there will be a bunch of extra \n's
        output = output.rstrip('\n')
        last_line = output.split('\n')[-1]
        # Our custom prompt on the VMs has two lines; the 2nd line is just '$'
        if last_line == "$":
            # This is a perfect opportunity to grab the node's IP address
            global IPADDR
            IPADDR = re.findall(r"\[(.*)\]", output, re.MULTILINE)[-1]
            return
        # When we are in an alpine container, the prompt will be "/ #"
        if last_line == "/ #":
            return
        # We did not recognize a known prompt; wait a bit and check again
        logging.debug("Could not find a known prompt on last line: {!r}"
                      .format(last_line))
        if interruptible_sleep(1): return
    raise Exception("Timed out while waiting for prompt!")


def check_exit_status():
    if not state.verify_status:
        return
    token = uuid.uuid4().hex
    data = "echo {} $?\n".format(token)
    logging.debug("Sending {!r} to get exit status.".format(data))
    send_keys(data)
    time.sleep(0.5)
    wait_for_prompt()
    screen = capture_pane()
    status = re.findall("\n{} ([0-9]+)\n".format(token), screen, re.MULTILINE)
    logging.debug("Got exit status: {}.".format(status))
    if len(status) == 0:
        raise Exception("Couldn't retrieve status code {}. Timed out?".format(token))
    if len(status) > 1:
        raise Exception("More than one status code {}. I'm seeing double! Shoot them both.".format(token))
    code = int(status[0])
    if code != 0:
        raise Exception("Non-zero exit status: {}.".format(code))
    # Otherwise just return peacefully.


def setup_tmux_and_ssh():
    if subprocess.call(["tmux", "has-session"]):
        logging.error("Couldn't connect to tmux. Please set up tmux first.")
        ipaddr = "$IPADDR"
        uid = os.getuid()

        raise Exception(r"""
1. If you're running this directly from a node:

   tmux

2. If you want to control a remote tmux:

   rm -f /tmp/tmux-{uid}/default && ssh -t -L /tmp/tmux-{uid}/default:/tmp/tmux-1001/default docker@{ipaddr} tmux new-session -As 0

   (Or use workshopctl tmux)

3. If you cannot control a remote tmux:

   tmux new-session ssh docker@{ipaddr}

4. If you are running this locally with a remote cluster, make sure your prompt has the expected format:

   tmux
   IPADDR=$(
       kubectl get nodes -o json |
       jq -r '.items[0].status.addresses[] | select(.type=="ExternalIP") | .address'
   )
   export PS1="\n[{ipaddr}] \u@\h:\w\n\$ "

""".format(uid=uid, ipaddr=ipaddr))
    else:
        logging.info("Found tmux session. Trying to acquire shell prompt.")
        wait_for_prompt()
        logging.info("Successfully connected to test cluster in tmux session.")


slides = [Slide("Dummy slide zero")]
content = open(sys.argv[1]).read()

# OK, this part is definitely hackish, and will break if the
# excludedClasses parameter is not on a single line.
excluded_classes = re.findall(r"excludedClasses: (\[.*\])", content)
excluded_classes = set(eval(excluded_classes[0]))

for slide in re.split("\n---?\n", content):
    slide_classes = re.findall("class: (.*)", slide)
    if slide_classes:
        slide_classes = slide_classes[0].split(",")
        slide_classes = [c.strip() for c in slide_classes]
        if excluded_classes & set(slide_classes):
            logging.debug("Skipping excluded slide.")
            continue
    slides.append(Slide(slide))


def capture_pane():
    return subprocess.check_output(["tmux", "capture-pane", "-p"]).decode('utf-8')


setup_tmux_and_ssh()


try:
    state.load()
    logging.debug("Successfully loaded state from file.")
    # Let's override the starting state, so that when an error occurs,
    # we can restart the auto-tester and then single-step or debug.
    # (Instead of running again through the same issue immediately.)
    state.interactive = True
except Exception as e:
    logging.exception("Could not load state from file.")
    logging.warning("Using default values.")


def move_forward():
    state.snippet += 1
    if state.snippet > len(slides[state.slide].snippets):
        state.slide += 1
        state.snippet = 0
    check_bounds()


def move_backward():
    state.snippet -= 1
    if state.snippet < 0:
        state.slide -= 1
        state.snippet = 0
    check_bounds()


def check_bounds():
    if state.slide < 1:
        state.slide = 1
    if state.slide >= len(slides):
        state.slide = len(slides)-1


##########################################################
# All functions starting with action_ correspond to the
# code to be executed when seeing ```foo``` blocks in the
# input. ```foo``` would call action_foo(state, snippet).
##########################################################


def send_keys(keys):
    subprocess.check_call(["tmux", "send-keys", keys])

# Send a single key.
# Useful for special keys, e.g. tmux interprets these strings:
# ^C (and all other sequences starting with a caret)
# Space
# ... and many others (check the tmux man page for details).
def action_key(state, snippet):
    send_keys(snippet.data)


# Send multiple keys.
# If keystroke simulation is off, all keys are sent at once.
# If keystroke simulation is on, keys are sent one by one, with a delay between them.
def action_keys(state, snippet, keys=None):
    if keys is None:
        keys = snippet.data
    if not state.simulate_type:
        send_keys(keys)
    else:
        for key in keys:
            if key == ";":
                key = "\\;"
            if key == "\n":
                if interruptible_sleep(1): return
            send_keys(key)
            if interruptible_sleep(0.15*random.random()): return
            if key == "\n":
                if interruptible_sleep(1): return


def action_hide(state, snippet):
    if state.run_hidden:
        action_bash(state, snippet)


def action_bash(state, snippet):
    data = snippet.data
    # Make sure that we're ready
    wait_for_prompt()
    # Strip leading spaces
    data = re.sub("\n +", "\n", data)
    # Remove backticks (they are used to highlight sections)
    data = data.replace('`', '')
    # Add "RETURN" at the end of the command :)
    data += "\n"
    # Send command
    action_keys(state, snippet, data)
    # Force a short sleep to avoid race condition
    time.sleep(0.5)
    if snippet.next and snippet.next.method == "wait":
        wait_for_string(snippet.next.data)
    elif snippet.next and snippet.next.method == "longwait":
        wait_for_string(snippet.next.data, 10*TIMEOUT)
    else:
        wait_for_prompt()
        # Verify return code
        check_exit_status()


def action_copy(state, snippet):
    screen = capture_pane()
    matches = re.findall(snippet.data, screen, flags=re.DOTALL)
    if len(matches) == 0:
        raise Exception("Could not find regex {} in output.".format(snippet.data))
    # Arbitrarily get the most recent match
    match = matches[-1]
    # Remove line breaks (like a screen copy paste would do)
    match = match.replace('\n', '')
    logging.debug("Copied {} to clipboard.".format(match))
    state.clipboard = match


def action_paste(state, snippet):
    logging.debug("Pasting {} from clipboard.".format(state.clipboard))
    action_keys(state, snippet, state.clipboard)


def action_check(state, snippet):
    wait_for_prompt()
    check_exit_status()


def action_open(state, snippet):
    # Cheap way to get node1's IP address
    screen = capture_pane()
    url = snippet.data.replace("/node1", "/{}".format(IPADDR))
    # This should probably be adapted to run on different OS
    if state.open_links:
        subprocess.check_output(["xdg-open", url])
        focus_browser()
    if state.interactive:
        print("Press any key to continue to next step...")
        click.getchar()


def action_tmux(state, snippet):
    subprocess.check_call(["tmux"] + snippet.data.split())


def action_unknown(state, snippet):
    logging.warning("Unknown method {}: {!r}".format(snippet.method, snippet.data))


def run_snippet(state, snippet):
    logging.info("Running with method {}: {}".format(snippet.method, snippet.data))
    try:
        action = globals()["action_"+snippet.method]
    except KeyError:
        action = action_unknown
    try:
        action(state, snippet)
        result = "OK"
    except:
        result = "ERR"
        logging.exception("While running method {} with {!r}".format(snippet.method, snippet.data))
        # Try to recover
        try:
            wait_for_prompt()
        except:
            subprocess.check_call(["tmux", "new-window"])
            wait_for_prompt()
    outfile.write("{} SLIDE={} METHOD={} DATA={!r}\n".format(result, state.slide, snippet.method, snippet.data))
    outfile.flush()


while True:
    state.save()
    slide = slides[state.slide]
    if state.snippet and state.snippet <= len(slide.snippets):
        snippet = slide.snippets[state.snippet-1]
    else:
        snippet = None
    click.clear()
    print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}] "
          "[switch_desktop:{}] [sync_slides:{}] [open_links:{}] [run_hidden:{}]"
          .format(state.slide, len(slides)-1,
                  state.snippet, len(slide.snippets) if slide.snippets else 0,
                  state.simulate_type, state.verify_status,
                  state.switch_desktop, state.sync_slides,
                  state.open_links, state.run_hidden))
    print(hrule())
    if snippet:
        print(slide.content.replace(snippet.content, ansi(7)(snippet.content)))
        focus_terminal()
    else:
        print(slide.content)
        if state.sync_slides:
            subprocess.check_output(["./gotoslide.js", str(slide.number)])
        focus_slides()
    print(hrule())
    if state.interactive:
        print("y/⎵/⏎ Execute snippet or advance to next snippet")
        print("p/← Previous")
        print("n/→ Next")
        print("s Simulate keystrokes")
        print("v Validate exit status")
        print("d Switch desktop")
        print("k Sync slides")
        print("o Open links")
        print("h Run hidden commands")
        print("g Go to a specific slide")
        print("q Quit")
        print("c Continue non-interactively until next error")
        command = click.getchar()
    else:
        command = "y"

    if command in ("n", "\x1b[C"):
        move_forward()
    elif command in ("p", "\x1b[D"):
        move_backward()
    elif command == "s":
        state.simulate_type = not state.simulate_type
    elif command == "v":
        state.verify_status = not state.verify_status
    elif command == "d":
        state.switch_desktop = not state.switch_desktop
    elif command == "k":
        state.sync_slides = not state.sync_slides
    elif command == "o":
        state.open_links = not state.open_links
    elif command == "h":
        state.run_hidden = not state.run_hidden
    elif command == "g":
        state.slide = click.prompt("Enter slide number", type=int)
        state.snippet = 0
        check_bounds()
    elif command == "q":
        break
    elif command == "c":
        # continue until next timeout
        state.interactive = False
    elif command in ("y", "\r", " "):
        if snippet:
            run_snippet(state, snippet)
            move_forward()
        else:
            # Advance until a slide that has snippets
            while not slides[state.slide].snippets:
                move_forward()
                # But stop if we reach the last slide
                if state.slide == len(slides)-1:
                    break
            # And then advance to the snippet
            move_forward()
    else:
        logging.warning("Unknown command {}.".format(command))
slides/autopilot/gotoslide.js (new executable file, 17 lines)
@@ -0,0 +1,17 @@
#!/usr/bin/env node

/* Expects a slide number as first argument.
 * Will connect to the local pub/sub server,
 * and issue a "go to slide X" command, which
 * will be sent to all connected browsers.
 */

var io = require('socket.io-client');
var socket = io('http://localhost:3000');
socket.on('connect_error', function(){
    console.log('connection error');
    socket.close();
});
socket.emit('slide change', process.argv[2], function(){
    socket.close();
});
slides/autopilot/package-lock.json (generated, new file, 1540 lines; diff suppressed because it is too large)
slides/autopilot/package.json (new file, 9 lines)
@@ -0,0 +1,9 @@
{
  "name": "container-training-pub-sub-server",
  "version": "0.0.1",
  "dependencies": {
    "express": "^4.16.2",
    "socket.io": "^4.5.1",
    "socket.io-client": "^4.5.1"
  }
}
slides/autopilot/remote.js (new file, 21 lines)
@@ -0,0 +1,21 @@
/* This snippet is loaded from the workshop HTML file.
 * It sets up callbacks to synchronize the local slide
 * number with the remote pub/sub server.
 */

var socket = io();
var leader = true;

slideshow.on('showSlide', function (slide) {
    if (leader) {
        var n = slide.getSlideIndex()+1;
        socket.emit('slide change', n);
    }
});

socket.on('slide change', function (n) {
    leader = false;
    slideshow.gotoSlide(n);
    leader = true;
});
slides/autopilot/requirements.txt (new file, 1 line)
@@ -0,0 +1 @@
click
slides/autopilot/server.js (new executable file, 41 lines)
@@ -0,0 +1,41 @@
#!/usr/bin/env node

/* This is a very simple pub/sub server, which allows
 * remote control of browsers displaying the slides.
 * The browsers connect to this pub/sub server using
 * Socket.IO, and the server tells them which slides
 * to display.
 *
 * The server can be controlled with a little CLI,
 * or by one of the browsers.
 */

var express = require('express');
var app = express();
var http = require('http').Server(app);
var io = require('socket.io')(http);

app.get('/', function(req, res){
    res.send('container.training autopilot pub/sub server');
});

/* Serve remote.js from the current directory */
app.use(express.static('.'));

/* Serve slides etc. from the current and parent directories */
app.use(express.static('..'));

io.on('connection', function(socket){
    console.log('a client connected: ' + socket.handshake.address);
    socket.on('slide change', function(n, ack){
        console.log('slide change: ' + n);
        socket.broadcast.emit('slide change', n);
        if (typeof ack === 'function') {
            ack();
        }
    });
});

http.listen(3000, function(){
    console.log('listening on *:3000');
});
slides/autopilot/tmux-style.sh (new executable file, 7 lines)
@@ -0,0 +1,7 @@
#!/bin/sh
# This removes the clock (and other extraneous stuff) from the
# tmux status bar, and it gives it a non-default color.
tmux set-option -g status-left ""
tmux set-option -g status-right ""
tmux set-option -g status-style bg=cyan
slides/build.sh (new executable file, 49 lines)
@@ -0,0 +1,49 @@
#!/bin/sh
set -e
case "$1" in
once)
    ./index.py
    for YAML in *.yml; do
        ./markmaker.py $YAML > $YAML.html || {
            rm $YAML.html
            break
        }
    done
    if [ -n "$SLIDECHECKER" ]; then
        for YAML in *.yml; do
            ./appendcheck.py $YAML.html
        done
    fi
    zip -qr slides.zip . && echo "Created slides.zip archive."
    ;;

forever)
    set +e
    # Check if inotifywait is installed.
    if ! command -v inotifywait >/dev/null; then
        echo >&2 "First install 'inotifywait' (inotify-tools) with apt, brew, etc."
        exit 1
    fi

    # There is a weird bug in entr, at least on macOS,
    # where it doesn't restore the terminal to a clean
    # state when exiting. So let's try to work around
    # it with stty.
    STTY=$(stty -g)
    while true; do
        #find . | entr -n -d $0 once
        inotifywait .
        STATUS=$?
        case $STATUS in
        2) echo "Directory has changed. Restarting.";;
        130) echo "SIGINT or q pressed. Exiting."; break;;
        *) echo "Weird exit code: $STATUS. Retrying in 1 second."; sleep 1;;
        esac
    done
    stty $STTY
    ;;

*)
    echo "$0 <once|forever>"
    ;;
esac
slides/containers/Advanced_Dockerfiles.md (new file, 430 lines)
@@ -0,0 +1,430 @@

class: title

# Advanced Dockerfile Syntax

![](images/title-advanced-dockerfiles.jpg)

---

## Objectives

We have seen simple Dockerfiles to illustrate how Docker builds
container images.

In this section, we will give a recap of the Dockerfile syntax,
and introduce advanced Dockerfile commands that we might
come across sometimes, or that we might want to use in some
specific scenarios.

---

## `Dockerfile` usage summary

* `Dockerfile` instructions are executed in order.

* Each instruction creates a new layer in the image.

* Docker maintains a cache with the layers of previous builds.

* When there are no changes in the instructions and files making a layer,
  the builder re-uses the cached layer, without executing the instruction for that layer.

* The `FROM` instruction MUST be the first non-comment instruction.

* Lines starting with `#` are treated as comments.

* Some instructions (like `CMD` or `ENTRYPOINT`) update a piece of metadata.

  (As a result, each call to these instructions makes the previous one useless.)
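As a hypothetical illustration of that last point, only the final `CMD` in the following Dockerfile takes effect; the earlier one is simply overwritten in the image metadata:

```dockerfile
FROM alpine:3.17
# This CMD is recorded in the image metadata...
CMD ["echo", "first"]
# ...then replaced by this one; containers from this image run "echo second".
CMD ["echo", "second"]
```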
---

## The `RUN` instruction

The `RUN` instruction can be specified in two ways.

With shell wrapping, which runs the specified command inside a shell,
with `/bin/sh -c`:

```dockerfile
RUN apt-get update
```

Or using the `exec` method, which avoids shell string expansion, and
allows execution in images that don't have `/bin/sh`:

```dockerfile
RUN [ "apt-get", "update" ]
```

---

## More about the `RUN` instruction

`RUN` will do the following:

* Execute a command.
* Record changes made to the filesystem.
* Work great to install libraries, packages, and various files.

`RUN` will NOT do the following:

* Record state of *processes*.
* Automatically start daemons.

If you want to start something automatically when the container runs,
you should use `CMD` and/or `ENTRYPOINT`.
---
|
||||
|
||||
## Collapsing layers
|
||||
|
||||
It is possible to execute multiple commands in a single step:
|
||||
|
||||
```dockerfile
|
||||
RUN apt-get update && apt-get install -y wget && apt-get clean
|
||||
```
|
||||
|
||||
It is also possible to break a command onto multiple lines:
|
||||
|
||||
```dockerfile
|
||||
RUN apt-get update \
|
||||
&& apt-get install -y wget \
|
||||
&& apt-get clean
|
||||
```
|
||||
|
||||
---

## The `EXPOSE` instruction

The `EXPOSE` instruction tells Docker what ports are to be published
in this image.

```dockerfile
EXPOSE 8080
EXPOSE 80 443
EXPOSE 53/tcp 53/udp
```

* All ports are private by default.

* Declaring a port with `EXPOSE` is not enough to make it public.

* The `Dockerfile` doesn't control on which port a service gets exposed.

---

## Exposing ports

* When you `docker run -p <port> ...`, that port becomes public.

  (Even if it was not declared with `EXPOSE`.)

* When you `docker run -P ...` (without port number), all ports
  declared with `EXPOSE` become public.

A *public port* is reachable from other containers and from outside the host.

A *private port* is not reachable from outside.

---

## The `COPY` instruction

The `COPY` instruction adds files and content from your host into the
image.

```dockerfile
COPY . /src
```

This will add the contents of the *build context* (the directory
passed as an argument to `docker build`) to the directory `/src`
in the container.

---

## Build context isolation

Note: you can only reference files and directories *inside* the
build context. Absolute paths are taken as being anchored to
the build context, so the two following lines are equivalent:

```dockerfile
COPY . /src
COPY / /src
```

Attempts to use `..` to get out of the build context will be
detected and blocked by Docker, and the build will fail.

Otherwise, a `Dockerfile` could succeed on host A, but fail on host B.

---

## `ADD`

`ADD` works almost like `COPY`, but has a few extra features.

`ADD` can get remote files:

```dockerfile
ADD http://www.example.com/webapp.jar /opt/
```

This would download the `webapp.jar` file and place it in the `/opt`
directory.

`ADD` will automatically unpack local tar archives (including
compressed ones, e.g. gzip or bzip2):

```dockerfile
ADD ./assets.tar.gz /var/www/htdocs/assets/
```

This would unpack `assets.tar.gz` into `/var/www/htdocs/assets`.

*However,* `ADD` will not automatically unpack remote archives.

---

## `ADD`, `COPY`, and the build cache

* Before creating a new layer, Docker checks its build cache.

* For most Dockerfile instructions, Docker only looks at the
  `Dockerfile` content to do the cache lookup.

* For `ADD` and `COPY` instructions, Docker also checks if the files
  to be added to the container have been changed.

* `ADD` always needs to download the remote file before
  it can check if it has been changed.

  (It cannot use,
  e.g., ETags or If-Modified-Since headers.)

---

## `VOLUME`

The `VOLUME` instruction tells Docker that a specific directory
should be a *volume*.

```dockerfile
VOLUME /var/lib/mysql
```

Filesystem access in volumes bypasses the copy-on-write layer,
offering native performance for I/O done in those directories.

Volumes can be attached to multiple containers, making it possible
to "port" data over from one container to another, e.g. to
upgrade a database to a newer version.

It is possible to start a container in "read-only" mode.
The container filesystem will be made read-only, but volumes
can still have read/write access if necessary.

---

## The `WORKDIR` instruction

The `WORKDIR` instruction sets the working directory for subsequent
instructions.

It also affects `CMD` and `ENTRYPOINT`, since it sets the working
directory used when starting the container.

```dockerfile
WORKDIR /src
```

You can specify `WORKDIR` again to change the working directory for
further operations.

---

## The `ENV` instruction

The `ENV` instruction specifies environment variables that should be
set in any container launched from the image.

```dockerfile
ENV WEBAPP_PORT=8080
```

This will result in the following environment variable being set in
any container created from this image:

```bash
WEBAPP_PORT=8080
```

You can also specify environment variables when you use `docker run`.

```bash
$ docker run -e WEBAPP_PORT=8000 -e WEBAPP_HOST=www.example.com ...
```

---

## The `USER` instruction

The `USER` instruction sets the user name or UID to use when running
the image.

It can be used multiple times to change back to root or to another user.

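For instance, a minimal sketch (not from the original deck; the base image and the `appuser` name are made-up examples):

```dockerfile
FROM debian:bullseye
# Create an unprivileged user (the name "appuser" is just an example)
RUN useradd --create-home appuser
# Privileged setup steps run as root (the default)...
RUN apt-get update && apt-get install -y --no-install-recommends curl
# ...then drop privileges: subsequent RUN, CMD, and ENTRYPOINT
# will execute as "appuser"
USER appuser
CMD ["bash"]
```
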
---

## The `CMD` instruction

The `CMD` instruction is a default command run when a container is
launched from the image.

```dockerfile
CMD [ "nginx", "-g", "daemon off;" ]
```

Means we don't need to specify `nginx -g "daemon off;"` when running the
container.

Instead of:

```bash
$ docker run <dockerhubUsername>/web_image nginx -g "daemon off;"
```

We can just do:

```bash
$ docker run <dockerhubUsername>/web_image
```

---

## More about the `CMD` instruction

Just like `RUN`, the `CMD` instruction comes in two forms.
The first executes in a shell:

```dockerfile
CMD nginx -g "daemon off;"
```

The second executes directly, without shell processing:

```dockerfile
CMD [ "nginx", "-g", "daemon off;" ]
```

---

class: extra-details

## Overriding the `CMD` instruction

The `CMD` can be overridden when you run a container.

```bash
$ docker run -it <dockerhubUsername>/web_image bash
```

Will run `bash` instead of `nginx -g "daemon off;"`.

---

## The `ENTRYPOINT` instruction

The `ENTRYPOINT` instruction is like the `CMD` instruction,
but arguments given on the command line are *appended* to the
entry point.

Note: you have to use the "exec" syntax (`[ "..." ]`).

```dockerfile
ENTRYPOINT [ "/bin/ls" ]
```

If we were to run:

```bash
$ docker run training/ls -l
```

Instead of trying to run `-l`, the container will run `/bin/ls -l`.

---

class: extra-details

## Overriding the `ENTRYPOINT` instruction

The entry point can be overridden as well.

```bash
$ docker run -it training/ls
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr
$ docker run -it --entrypoint bash training/ls
root@d902fb7b1fc7:/#
```

---

## How `CMD` and `ENTRYPOINT` interact

The `CMD` and `ENTRYPOINT` instructions work best when used
together.

```dockerfile
ENTRYPOINT [ "nginx" ]
CMD [ "-g", "daemon off;" ]
```

The `ENTRYPOINT` specifies the command to be run and the `CMD`
specifies its options. On the command line we can then potentially
override the options when needed.

```bash
$ docker run -d <dockerhubUsername>/web_image -t
```

This will override the options `CMD` provided with new flags.

---

## Advanced Dockerfile instructions

* `ONBUILD` lets you stash instructions that will be executed
  when this image is used as a base for another one.
* `LABEL` adds arbitrary metadata to the image.
* `ARG` defines build-time variables (optional or mandatory).
* `STOPSIGNAL` sets the signal for `docker stop` (`TERM` by default).
* `HEALTHCHECK` defines a command assessing the status of the container.
* `SHELL` sets the default program to use for string-syntax RUN, CMD, etc.

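A sketch combining a few of these instructions (not from the original deck; the label keys, build argument, and health check command are made-up examples, and assume `curl` is available in the image):

```dockerfile
FROM nginx
# LABEL: arbitrary metadata (these keys are just examples)
LABEL maintainer="someone@example.com" org.example.release="1.0"
# ARG: build-time variable, e.g.:
# docker build --build-arg BUILD_ENV=staging .
ARG BUILD_ENV=production
# STOPSIGNAL: the signal that `docker stop` will send
STOPSIGNAL SIGQUIT
# HEALTHCHECK: command run periodically to assess container health
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost/ || exit 1
```
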
---

class: extra-details

## The `ONBUILD` instruction

The `ONBUILD` instruction is a trigger. It sets instructions that will
be executed when another image is built from the image being built.

This is useful for building images which will be used as a base
to build other images.

```dockerfile
ONBUILD COPY . /src
```

* You can't chain `ONBUILD` instructions with `ONBUILD`.
* `ONBUILD` can't be used to trigger `FROM` instructions.

???

:EN:- Advanced Dockerfile syntax
:FR:- Dockerfile niveau expert

233
slides/containers/Ambassadors.md
Normal file
@@ -0,0 +1,233 @@

class: title

# Ambassadors



---

## The ambassador pattern

Ambassadors are containers that "masquerade" or "proxy" for another service.

They abstract the connection details for these services, and can help with:

* discovery (where is my service actually running?)

* migration (what if my service has to be moved while I use it?)

* failover (how do I know to which instance of a replicated service I should connect?)

* load balancing (how do I spread my requests across multiple instances of a service?)

* authentication (what if my service requires credentials, certificates, or otherwise?)

---

## Introduction to Ambassadors

The ambassador pattern:

* Takes advantage of Docker's per-container naming system and abstracts
  connections between services.

* Allows you to manage services without hard-coding connection
  information inside applications.

To do this, instead of directly connecting containers, you insert
ambassador containers between them.

---

class: pic

![ambassador](images/ambassador-diagram.png)

---

## Interacting with ambassadors

* The web container uses normal Docker networking to connect
  to the ambassador.

* The database container also talks with an ambassador.

* For both containers, the ambassador is totally transparent.
  <br/>
  (There is no difference between normal
  operation and operation with an ambassador.)

* If the database container is moved (or a failover happens), its new location will
  be tracked by the ambassador containers, and the web application
  container will still be able to connect, without reconfiguration.

---

## Ambassadors for simple service discovery

Use case:

* my application code connects to `redis` on the default port (6379),
* my Redis service runs on another machine, on a non-default port (e.g. 12345),
* I want to use an ambassador to let my application connect without modification.

The ambassador will be:

* a container running right next to my application,
* using the name `redis` (or linked as `redis`),
* listening on port 6379,
* forwarding connections to the actual Redis service.

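As an illustration (not part of the original deck), such an ambassador image could be sketched around `socat`; the target host and port here are hypothetical runtime arguments:

```dockerfile
# Minimal ambassador sketch: listen on the Redis default port and
# forward every connection to a host and port given at run time.
FROM alpine:3.17
RUN apk add --no-cache socat
# Hypothetical usage: docker run --name redis <image> redis-host 12345
# ($0 and $1 pick up the two arguments passed after the image name)
ENTRYPOINT ["sh", "-c", "exec socat TCP-LISTEN:6379,fork,reuseaddr TCP:$0:$1"]
```
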
---

## Ambassadors for service migration

Use case:

* my application code still connects to `redis`,
* my Redis service runs somewhere else,
* my Redis service is moved to a different host+port,
* the location of the Redis service is given to me via e.g. DNS SRV records,
* I want to use an ambassador to automatically connect to the new location, with as little disruption as possible.

The ambassador will be:

* the same kind of container as before,
* running an additional routine to monitor DNS SRV records,
* updating the forwarding destination when the DNS SRV records are updated.

---

## Ambassadors for credentials injection

Use case:

* my application code still connects to `redis`,
* my application code doesn't provide Redis credentials,
* my production Redis service requires credentials,
* my staging Redis service requires different credentials,
* I want to use an ambassador to abstract those credentials.

The ambassador will be:

* a container using the name `redis` (or a link),
* passed the credentials to use,
* running a custom proxy that accepts connections on Redis default port,
* performing authentication with the target Redis service before forwarding traffic.

---

## Ambassadors for load balancing

Use case:

* my application code connects to a web service called `api`,
* I want to run multiple instances of the `api` backend,
* those instances will be on different machines and ports,
* I want to use an ambassador to abstract those details.

The ambassador will be:

* a container using the name `api` (or a link),
* passed the list of backends to use (statically or dynamically),
* running a load balancer (e.g. HAProxy or NGINX),
* dispatching requests across all backends transparently.

---

## "Ambassador" is a *pattern*

There are many ways to implement the pattern.

Different deployments will use different underlying technologies.

* On-premise deployments with a trusted network can track
  container locations in e.g. ZooKeeper, and generate HAProxy
  configurations each time a location key changes.
* Public cloud deployments or deployments across unsafe
  networks can add TLS encryption.
* Ad-hoc deployments can use a master-less discovery protocol
  like Avahi to register and discover services.
* It is also possible to do one-shot reconfiguration of the
  ambassadors. It is slightly less dynamic but has far fewer
  requirements.
* Ambassadors can be used in addition to, or instead of, overlay networks.

---

## Service meshes

* A service mesh is a configurable network layer.

* It can provide service discovery, high availability, load balancing, observability...

* Service meshes are particularly useful for microservices applications.

* Service meshes are often implemented as proxies.

* Applications connect to the service mesh, which relays the connection where needed.

*Does that sound familiar?*

---

## Ambassadors and service meshes

* When using a service mesh, a "sidecar container" is often used as a proxy

* Our services connect (transparently) to that sidecar container

* That sidecar container figures out where to forward the traffic

... Does that sound familiar?

(It should, because service meshes are essentially app-wide or cluster-wide ambassadors!)

---

## Some popular service meshes

... And related projects:

* [Consul Connect](https://www.consul.io/docs/connect/index.html)
  <br/>
  Transparently secures service-to-service connections with mTLS.

* [Gloo](https://gloo.solo.io/)
  <br/>
  API gateway that can interconnect applications on VMs, containers, and serverless.

* [Istio](https://istio.io/)
  <br/>
  A popular service mesh.

* [Linkerd](https://linkerd.io/)
  <br/>
  Another popular service mesh.

---

## Learning more about service meshes

A few blog posts about service meshes:

* [Containers, microservices, and service meshes](http://jpetazzo.github.io/2019/05/17/containers-microservices-service-meshes/)
  <br/>
  Provides historical context: how did we do before service meshes were invented?

* [Do I Need a Service Mesh?](https://www.nginx.com/blog/do-i-need-a-service-mesh/)
  <br/>
  Explains the purpose of service meshes. Illustrates some NGINX features.

* [Do you need a service mesh?](https://www.oreilly.com/ideas/do-you-need-a-service-mesh)
  <br/>
  Includes high-level overview and definitions.

* [What is Service Mesh and Why Do We Need It?](https://containerjournal.com/2018/12/12/what-is-service-mesh-and-why-do-we-need-it/)
  <br/>
  Includes a step-by-step demo of Linkerd.

And a video:

* [What is a Service Mesh, and Do I Need One When Developing Microservices?](https://www.datawire.io/envoyproxy/service-mesh/)

201
slides/containers/Application_Configuration.md
Normal file
@@ -0,0 +1,201 @@

# Application Configuration

There are many ways to provide configuration to containerized applications.

There is no "best way" — it depends on factors like:

* configuration size,

* mandatory and optional parameters,

* scope of configuration (per container, per app, per customer, per site, etc),

* frequency of changes in the configuration.

---

## Command-line parameters

```bash
docker run jpetazzo/hamba 80 www1:80 www2:80
```

* Configuration is provided through command-line parameters.

* In the above example, the `ENTRYPOINT` is a script that will:

  - parse the parameters,

  - generate a configuration file,

  - start the actual service.

---

## Command-line parameters pros and cons

* Appropriate for mandatory parameters (without which the service cannot start).

* Convenient for "toolbelt" services instantiated many times.

  (Because there is no extra step: just run it!)

* Not great for dynamic configurations or bigger configurations.

  (These things are still possible, but more cumbersome.)

---

## Environment variables

```bash
docker run -e ELASTICSEARCH_URL=http://es42:9201/ kibana
```

* Configuration is provided through environment variables.

* The environment variable can be used straight by the program,
  <br/>or by a script generating a configuration file.

---

## Environment variables pros and cons

* Appropriate for optional parameters (since the image can provide default values).

* Also convenient for services instantiated many times.

  (It's as easy as command-line parameters.)

* Great for services with lots of parameters, but you only want to specify a few.

  (And use default values for everything else.)

* Ability to introspect possible parameters and their default values.

* Not great for dynamic configurations.

---

## Baked-in configuration

```dockerfile
FROM prometheus
COPY prometheus.conf /etc
```

* The configuration is added to the image.

* The image may have a default configuration; the new configuration can:

  - replace the default configuration,

  - extend it (if the code can read multiple configuration files).

---

## Baked-in configuration pros and cons

* Allows arbitrary customization and complex configuration files.

* Requires writing a configuration file. (Obviously!)

* Requires building an image to start the service.

* Requires rebuilding the image to reconfigure the service.

* Requires rebuilding the image to upgrade the service.

* Configured images can be stored in registries.

  (Which is great, but requires a registry.)

---

## Configuration volume

```bash
docker run -v appconfig:/etc/appconfig myapp
```

* The configuration is stored in a volume.

* The volume is attached to the container.

* The image may have a default configuration.

  (But this results in a less "obvious" setup, that needs more documentation.)

---

## Configuration volume pros and cons

* Allows arbitrary customization and complex configuration files.

* Requires creating a volume for each different configuration.

* Services with identical configurations can use the same volume.

* Doesn't require building / rebuilding an image when upgrading / reconfiguring.

* Configuration can be generated or edited through another container.

---

## Dynamic configuration volume

* This is a powerful pattern for dynamic, complex configurations.

* The configuration is stored in a volume.

* The configuration is generated / updated by a special container.

* The application container detects when the configuration is changed.

  (And automatically reloads the configuration when necessary.)

* The configuration can be shared between multiple services if needed.

---

## Dynamic configuration volume example

In a first terminal, start a load balancer with an initial configuration:

```bash
$ docker run --name loadbalancer jpetazzo/hamba \
    80 goo.gl:80
```

In another terminal, reconfigure that load balancer:

```bash
$ docker run --rm --volumes-from loadbalancer jpetazzo/hamba reconfigure \
    80 google.com:80
```

The configuration could also be updated through e.g. a REST API.

(The REST API being itself served from another container.)

---

## Keeping secrets

.warning[Ideally, you should not put secrets (passwords, tokens...) in:]

* command-line or environment variables (anyone with Docker API access can get them),

* images, especially stored in a registry.

Secrets management is better handled with an orchestrator (like Swarm or Kubernetes).

Orchestrators allow secrets to be passed in a "one-way" manner.

Managing secrets securely without an orchestrator can be contrived.

E.g.:

- read the secret on stdin when the service starts,

- pass the secret using an API endpoint.

345
slides/containers/Background_Containers.md
Normal file
@@ -0,0 +1,345 @@

class: title

# Background containers



---

## Objectives

Our first containers were *interactive*.

We will now see how to:

* Run a non-interactive container.
* Run a container in the background.
* List running containers.
* Check the logs of a container.
* Stop a container.
* List stopped containers.

---

## A non-interactive container

We will run a small custom container.

This container just displays the time every second.

```bash
$ docker run jpetazzo/clock
Fri Feb 20 00:28:53 UTC 2015
Fri Feb 20 00:28:54 UTC 2015
Fri Feb 20 00:28:55 UTC 2015
...
```

* This container will run forever.
* To stop it, press `^C`.
* Docker has automatically downloaded the image `jpetazzo/clock`.
* This image is a user image, created by `jpetazzo`.
* We will hear more about user images (and other types of images) later.

---

## When `^C` doesn't work...

Sometimes, `^C` won't be enough.

Why? And how can we stop the container in that case?

---

## What happens when we hit `^C`

`SIGINT` gets sent to the container, which means:

- `SIGINT` gets sent to PID 1 (default case)

- `SIGINT` gets sent to *foreground processes* when running with `-ti`

But there is a special case for PID 1: it ignores all signals!

- except `SIGKILL` and `SIGSTOP`

- except signals handled explicitly

TL;DR: there are many circumstances when `^C` won't stop the container.

---

class: extra-details

## Why is PID 1 special?

- PID 1 has some extra responsibilities:

  - it starts (directly or indirectly) every other process

  - when a process exits, its child processes are "reparented" under PID 1

- When PID 1 exits, everything stops:

  - on a "regular" machine, it causes a kernel panic

  - in a container, it kills all the processes

- We don't want PID 1 to stop accidentally

- That's why it has these extra protections

---

## How to stop these containers, then?

- Start another terminal and forget about them

  (for now!)

- We'll shortly learn about `docker kill`

---

## Run a container in the background

Containers can be started in the background, with the `-d` flag (daemon mode):

```bash
$ docker run -d jpetazzo/clock
47d677dcfba4277c6cc68fcaa51f932b544cab1a187c853b7d0caf4e8debe5ad
```

* We don't see the output of the container.
* But don't worry: Docker collects that output and logs it!
* Docker gives us the ID of the container.

---

## List running containers

How can we check that our container is still running?

With `docker ps`, which, just like the UNIX `ps` command, lists running processes.

```bash
$ docker ps
CONTAINER ID  IMAGE           ...  CREATED        STATUS        ...
47d677dcfba4  jpetazzo/clock  ...  2 minutes ago  Up 2 minutes  ...
```

Docker tells us:

* The (truncated) ID of our container.
* The image used to start the container.
* That our container has been running (`Up`) for a couple of minutes.
* Other information (COMMAND, PORTS, NAMES) that we will explain later.

---

## Starting more containers

Let's start two more containers.

```bash
$ docker run -d jpetazzo/clock
57ad9bdfc06bb4407c47220cf59ce21585dce9a1298d7a67488359aeaea8ae2a
```

```bash
$ docker run -d jpetazzo/clock
068cc994ffd0190bbe025ba74e4c0771a5d8f14734af772ddee8dc1aaf20567d
```

Check that `docker ps` correctly reports all 3 containers.

---

## Viewing only the last container started

When many containers are already running, it can be useful to
see only the last container that was started.

This can be achieved with the `-l` ("Last") flag:

```bash
$ docker ps -l
CONTAINER ID  IMAGE           ...  CREATED        STATUS        ...
068cc994ffd0  jpetazzo/clock  ...  2 minutes ago  Up 2 minutes  ...
```

---

## View only the IDs of the containers

Many Docker commands will work on container IDs: `docker stop`, `docker rm`...

If we want to list only the IDs of our containers (without the other columns
or the header line),
we can use the `-q` ("Quiet", "Quick") flag:

```bash
$ docker ps -q
068cc994ffd0
57ad9bdfc06b
47d677dcfba4
```

---

## Combining flags

We can combine `-l` and `-q` to see only the ID of the last container started:

```bash
$ docker ps -lq
068cc994ffd0
```

At first glance, it looks like this would be particularly useful in scripts.

However, if we want to start a container and get its ID in a reliable way,
it is better to use `docker run -d`, which we will cover in a bit.

(Using `docker ps -lq` is prone to race conditions: what happens if someone
else, or another program or script, starts another container just before
we run `docker ps -lq`?)

---

## View the logs of a container

We told you that Docker was logging the container output.

Let's see that now.

```bash
$ docker logs 068
Fri Feb 20 00:39:52 UTC 2015
Fri Feb 20 00:39:53 UTC 2015
...
```

* We specified a *prefix* of the full container ID.
* You can, of course, specify the full ID.
* The `logs` command will output the *entire* logs of the container.
  <br/>(Sometimes, that will be too much. Let's see how to address that.)

---

## View only the tail of the logs

To avoid being spammed with eleventy pages of output,
we can use the `--tail` option:

```bash
$ docker logs --tail 3 068
Fri Feb 20 00:55:35 UTC 2015
Fri Feb 20 00:55:36 UTC 2015
Fri Feb 20 00:55:37 UTC 2015
```

* The parameter is the number of lines that we want to see.

---

## Follow the logs in real time

Just like with the standard UNIX command `tail -f`, we can
follow the logs of our container:

```bash
$ docker logs --tail 1 --follow 068
Fri Feb 20 00:57:12 UTC 2015
Fri Feb 20 00:57:13 UTC 2015
^C
```

* This will display the last line in the log file.
* Then, it will continue to display the logs in real time.
* Use `^C` to exit.

---

## Stop our container

There are two ways we can terminate our detached container.

* Killing it using the `docker kill` command.

* Stopping it using the `docker stop` command.

The first one stops the container immediately, by sending the
`KILL` signal.

The second one is more graceful. It sends a `TERM` signal,
and after 10 seconds, if the container has not stopped, it
sends `KILL`.

Reminder: the `KILL` signal cannot be intercepted, and will
forcibly terminate the container.
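
As a quick illustration of the signal semantics (pure shell, no Docker needed): `TERM` can be caught by a handler, while `KILL` never can.

```bash
# TERM can be trapped: the handler runs, and the script keeps going.
sh -c 'trap "echo got TERM" TERM; kill -TERM $$; echo still running'
# prints "got TERM" then "still running"
```

Nothing equivalent is possible with `KILL`: the kernel terminates the process before any handler runs.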
---

## Stopping our containers

Let's stop one of those containers:

```bash
$ docker stop 47d6
47d6
```

This will take 10 seconds:

* Docker sends the TERM signal;

* the container doesn't react to this signal
  (it's a simple Shell script with no special
  signal handling);

* 10 seconds later, since the container is still
  running, Docker sends the KILL signal;

* this terminates the container.

---

## Killing the remaining containers

Let's be less patient with the two other containers:

```bash
$ docker kill 068 57ad
068
57ad
```

The `stop` and `kill` commands can take multiple container IDs.

Those containers will be terminated immediately (without
the 10-second delay).

Let's check that our containers don't show up anymore:

```bash
$ docker ps
```

---

## List stopped containers

We can also see stopped containers, with the `-a` (`--all`) option.

```bash
$ docker ps -a
CONTAINER ID  IMAGE           ...  CREATED      STATUS
068cc994ffd0  jpetazzo/clock  ...  21 min. ago  Exited (137) 3 min. ago
57ad9bdfc06b  jpetazzo/clock  ...  21 min. ago  Exited (137) 3 min. ago
47d677dcfba4  jpetazzo/clock  ...  23 min. ago  Exited (137) 3 min. ago
5c1dfd4d81f1  jpetazzo/clock  ...  40 min. ago  Exited (0) 40 min. ago
b13c164401fb  ubuntu          ...  55 min. ago  Exited (130) 53 min. ago
```

???

:EN:- Foreground and background containers
:FR:- Exécution interactive ou en arrière-plan
174
slides/containers/Building_Images_Interactively.md
Normal file

# Building images interactively

In this section, we will create our first container image.

It will be a basic distribution image, but we will pre-install
the package `figlet`.

We will:

* Create a container from a base image.

* Install software manually in the container, and turn it
  into a new image.

* Learn about new commands: `docker commit`, `docker tag`, and `docker diff`.

---

## The plan

1. Create a container (with `docker run`) using our base distro of choice.

2. Run a bunch of commands to install and set up our software in the container.

3. (Optionally) review changes in the container with `docker diff`.

4. Turn the container into a new image with `docker commit`.

5. (Optionally) add tags to the image with `docker tag`.

---

## Setting up our container

Start an Ubuntu container:

```bash
$ docker run -it ubuntu
root@<yourContainerId>:/#
```

Run the command `apt-get update` to refresh the list of packages available to install.

Then run the command `apt-get install figlet` to install the program we are interested in.

```bash
root@<yourContainerId>:/# apt-get update && apt-get install figlet
.... OUTPUT OF APT-GET COMMANDS ....
```

---

## Inspect the changes

Type `exit` at the container prompt to leave the interactive session.

Now let's run `docker diff` to see the difference between the base image
and our container.

```bash
$ docker diff <yourContainerId>
C /root
A /root/.bash_history
C /tmp
C /usr
C /usr/bin
A /usr/bin/figlet
...
```

---

class: extra-details

## Docker tracks filesystem changes

As explained before:

* An image is read-only.

* When we make changes, they happen in a copy of the image.

* Docker can show the difference between the image, and its copy.

* For performance, Docker uses copy-on-write systems.
  <br/>(i.e. starting a container based on a big image
  doesn't incur a huge copy.)

---

## Copy-on-write security benefits

* `docker diff` gives us an easy way to audit changes

  (à la Tripwire)

* Containers can also be started in read-only mode

  (their root filesystem will be read-only, but they can still have read-write data volumes)

---

## Commit our changes into a new image

The `docker commit` command will create a new layer with those changes,
and a new image using this new layer.

```bash
$ docker commit <yourContainerId>
<newImageId>
```

The output of the `docker commit` command will be the ID for your newly created image.

We can use it as an argument to `docker run`.

---

## Testing our new image

Let's run this image:

```bash
$ docker run -it <newImageId>
root@fcfb62f0bfde:/# figlet hello
 _          _ _
| |__   ___| | | ___
| '_ \ / _ \ | |/ _ \
| | | |  __/ | | (_) |
|_| |_|\___|_|_|\___/
```

It works! 🎉

---

## Tagging images

Referring to an image by its ID is not convenient. Let's tag it instead.

We can use the `tag` command:

```bash
$ docker tag <newImageId> figlet
```

But we can also specify the tag as an extra argument to `commit`:

```bash
$ docker commit <containerId> figlet
```

And then run it using its tag:

```bash
$ docker run -it figlet
```

---

## What's next?

Manual process = bad.

Automated process = good.

In the next chapter, we will learn how to automate the build
process by writing a `Dockerfile`.

???

:EN:- Building our first images interactively
:FR:- Fabriquer nos premières images à la main
468
slides/containers/Building_Images_With_Dockerfiles.md
Normal file

class: title

# Building Docker images with a Dockerfile

![construction](images/title-building-docker-images-with-a-dockerfile.jpg)

---

## Objectives

We will build a container image automatically, with a `Dockerfile`.

At the end of this lesson, you will be able to:

* Write a `Dockerfile`.

* Build an image from a `Dockerfile`.

---

## `Dockerfile` overview

* A `Dockerfile` is a build recipe for a Docker image.

* It contains a series of instructions telling Docker how an image is constructed.

* The `docker build` command builds an image from a `Dockerfile`.

---

## Writing our first `Dockerfile`

Our Dockerfile must be in a **new, empty directory**.

1. Create a directory to hold our `Dockerfile`.

   ```bash
   $ mkdir myimage
   ```

2. Create a `Dockerfile` inside this directory.

   ```bash
   $ cd myimage
   $ vim Dockerfile
   ```

Of course, you can use any other editor of your choice.

---

## Type this into our Dockerfile...

```dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install figlet
```

* `FROM` indicates the base image for our build.

* Each `RUN` line will be executed by Docker during the build.

* Our `RUN` commands **must be non-interactive.**
  <br/>(No input can be provided to Docker during the build.)

* In many cases, we will add the `-y` flag to `apt-get`.

---

## Build it!

Save our file, then execute:

```bash
$ docker build -t figlet .
```

* `-t` indicates the tag to apply to the image.

* `.` indicates the location of the *build context*.

We will talk more about the build context later.

To keep things simple for now: this is the directory where our Dockerfile is located.

---

## What happens when we build the image?

It depends on whether we're using BuildKit or not!

If there are lots of blue lines and the first line looks like this:

```
[+] Building 1.8s (4/6)
```

... then we're using BuildKit.

If the output is mostly black-and-white and the first line looks like this:

```
Sending build context to Docker daemon  2.048kB
```

... then we're using the "classic" or "old-style" builder.

---

## To BuildKit or Not To BuildKit

Classic builder:

- copies the whole "build context" to the Docker Engine

- linear (processes lines one after the other)

- requires a full Docker Engine

BuildKit:

- only transfers parts of the "build context" when needed

- will parallelize operations (when possible)

- can run in non-privileged containers (e.g. on Kubernetes)

---

## With the classic builder

The output of `docker build` looks like this:

.small[
```bash
docker build -t figlet .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu
 ---> f975c5035748
Step 2/3 : RUN apt-get update
 ---> Running in e01b294dbffd
(...output of the RUN command...)
Removing intermediate container e01b294dbffd
 ---> eb8d9b561b37
Step 3/3 : RUN apt-get install figlet
 ---> Running in c29230d70f9b
(...output of the RUN command...)
Removing intermediate container c29230d70f9b
 ---> 0dfd7a253f21
Successfully built 0dfd7a253f21
Successfully tagged figlet:latest
```
]

* The output of the `RUN` commands has been omitted.

* Let's explain what this output means.

---

## Sending the build context to Docker

```bash
Sending build context to Docker daemon 2.048 kB
```

* The build context is the `.` directory given to `docker build`.

* It is sent (as an archive) by the Docker client to the Docker daemon.

* This makes it possible to build on a remote machine using local files.

* Be careful (or patient) if that directory is big and your link is slow.

* You can speed up the process with a [`.dockerignore`](https://docs.docker.com/engine/reference/builder/#dockerignore-file) file

  * It tells Docker to ignore specific files in the directory

  * Only ignore files that you won't need in the build context!
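
For example, a small `.dockerignore` (hypothetical contents; adapt to your project) excluding things a build rarely needs:

```
.git
*.log
node_modules
```

Each line is a pattern; matching files are simply not sent to the daemon.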
---

## Executing each step

```bash
Step 2/3 : RUN apt-get update
 ---> Running in e01b294dbffd
(...output of the RUN command...)
Removing intermediate container e01b294dbffd
 ---> eb8d9b561b37
```

* A container (`e01b294dbffd`) is created from the base image.

* The `RUN` command is executed in this container.

* The container is committed into an image (`eb8d9b561b37`).

* The build container (`e01b294dbffd`) is removed.

* The output of this step will be the base image for the next one.

---

## With BuildKit

.small[
```bash
[+] Building 7.9s (7/7) FINISHED
 => [internal] load build definition from Dockerfile                                                 0.0s
 => => transferring dockerfile: 98B                                                                  0.0s
 => [internal] load .dockerignore                                                                    0.0s
 => => transferring context: 2B                                                                      0.0s
 => [internal] load metadata for docker.io/library/ubuntu:latest                                     1.2s
 => [1/3] FROM docker.io/library/ubuntu@sha256:cf31af331f38d1d7158470e095b132acd126a7180a54f263d386  3.2s
 => => resolve docker.io/library/ubuntu@sha256:cf31af331f38d1d7158470e095b132acd126a7180a54f263d386  0.0s
 => => sha256:cf31af331f38d1d7158470e095b132acd126a7180a54f263d386da88eb681d93 1.20kB / 1.20kB       0.0s
 => => sha256:1de4c5e2d8954bf5fa9855f8b4c9d3c3b97d1d380efe19f60f3e4107a66f5cae 943B / 943B           0.0s
 => => sha256:6a98cbe39225dadebcaa04e21dbe5900ad604739b07a9fa351dd10a6ebad4c1b 3.31kB / 3.31kB       0.0s
 => => sha256:80bc30679ac1fd798f3241208c14accd6a364cb8a6224d1127dfb1577d10554f 27.14MB / 27.14MB     2.3s
 => => sha256:9bf18fab4cfbf479fa9f8409ad47e2702c63241304c2cdd4c33f2a1633c5f85e 850B / 850B           0.5s
 => => sha256:5979309c983a2adeff352538937475cf961d49c34194fa2aab142effe19ed9c1 189B / 189B           0.4s
 => => extracting sha256:80bc30679ac1fd798f3241208c14accd6a364cb8a6224d1127dfb1577d10554f            0.7s
 => => extracting sha256:9bf18fab4cfbf479fa9f8409ad47e2702c63241304c2cdd4c33f2a1633c5f85e            0.0s
 => => extracting sha256:5979309c983a2adeff352538937475cf961d49c34194fa2aab142effe19ed9c1            0.0s
 => [2/3] RUN apt-get update                                                                         2.5s
 => [3/3] RUN apt-get install figlet                                                                 0.9s
 => exporting to image                                                                               0.1s
 => => exporting layers                                                                              0.1s
 => => writing image sha256:3b8aee7b444ab775975dfba691a72d8ac24af2756e0a024e056e3858d5a23f7c         0.0s
 => => naming to docker.io/library/figlet                                                            0.0s
```
]

---

## Understanding BuildKit output

- BuildKit transfers the Dockerfile and the *build context*

  (these are the first two `[internal]` stages)

- Then it executes the steps defined in the Dockerfile

  (`[1/3]`, `[2/3]`, `[3/3]`)

- Finally, it exports the result of the build

  (image definition + collection of layers)

---

class: extra-details

## BuildKit plain output

- When running BuildKit in e.g. a CI pipeline, its output will be different

- We can see the same output format by using `--progress=plain`

---

## The caching system

If you run the same build again, it will be instantaneous. Why?

* After each build step, Docker takes a snapshot of the resulting image.

* Before executing a step, Docker checks if it has already built the same sequence.

* Docker uses the exact strings defined in your Dockerfile, so:

  * `RUN apt-get install figlet cowsay`
    <br/> is different from
    <br/> `RUN apt-get install cowsay figlet`

  * `RUN apt-get update` is not re-executed when the mirrors are updated

You can force a rebuild with `docker build --no-cache ...`.
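
Because of that last point, Dockerfiles commonly chain both commands in a single `RUN` line: adding a package later changes the whole string, which also invalidates the possibly stale `apt-get update` layer. A minimal sketch:

```dockerfile
FROM ubuntu
# One cache unit: changing the package list re-runs the update too
RUN apt-get update && apt-get install -y figlet
```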
---

## Running the image

The resulting image is not different from the one produced manually.

```bash
$ docker run -ti figlet
root@91f3c974c9a1:/# figlet hello
 _          _ _
| |__   ___| | | ___
| '_ \ / _ \ | |/ _ \
| | | |  __/ | | (_) |
|_| |_|\___|_|_|\___/
```

Yay! 🎉

---

## Using image and viewing history

The `history` command lists all the layers composing an image.

For each layer, it shows its creation time, size, and creation command.

When an image was built with a Dockerfile, each layer corresponds to
a line of the Dockerfile.

```bash
$ docker history figlet
IMAGE         CREATED            CREATED BY                     SIZE
f9e8f1642759  About an hour ago  /bin/sh -c apt-get install fi  1.627 MB
7257c37726a1  About an hour ago  /bin/sh -c apt-get update      21.58 MB
07c86167cdc4  4 days ago         /bin/sh -c #(nop) CMD ["/bin   0 B
<missing>     4 days ago         /bin/sh -c sed -i 's/^#\s*\(   1.895 kB
<missing>     4 days ago         /bin/sh -c echo '#!/bin/sh'    194.5 kB
<missing>     4 days ago         /bin/sh -c #(nop) ADD file:b   187.8 MB
```

---

class: extra-details

## Why `sh -c`?

* On UNIX, to start a new program, we need two system calls:

  - `fork()`, to create a new child process;

  - `execve()`, to replace the new child process with the program to run.

* Conceptually, `execve()` works like this:

  `execve(program, [list, of, arguments])`

* When we run a command, e.g. `ls -l /tmp`, something needs to parse the command.

  (i.e. split the program and its arguments into a list.)

* The shell is usually doing that.

  (It also takes care of expanding environment variables and special things like `~`.)
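
A quick way to see that expansion at work (the `env` trick just runs `echo` without any shell in between):

```bash
sh -c 'echo $HOME'   # the shell expands $HOME before calling execve()
env echo '$HOME'     # no shell involved: echo receives the literal string $HOME
```

The first command prints your home directory; the second prints `$HOME` verbatim.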
---

class: extra-details

## Why `sh -c`?

* When we do `RUN ls -l /tmp`, the Docker builder needs to parse the command.

* Instead of implementing its own parser, it outsources the job to the shell.

* That's why we see `sh -c ls -l /tmp` in that case.

* But we can also do the parsing job ourselves.

* This means passing `RUN` a list of arguments.

* This is called the *exec syntax*.

---

## Shell syntax vs exec syntax

Dockerfile commands that execute something can have two forms:

* plain string, or *shell syntax*:
  <br/>`RUN apt-get install figlet`

* JSON list, or *exec syntax*:
  <br/>`RUN ["apt-get", "install", "figlet"]`

We are going to change our Dockerfile to see how it affects the resulting image.

---

## Using exec syntax in our Dockerfile

Let's change our Dockerfile as follows!

```dockerfile
FROM ubuntu
RUN apt-get update
RUN ["apt-get", "install", "figlet"]
```

Then build the new Dockerfile.

```bash
$ docker build -t figlet .
```

---

## History with exec syntax

Compare the new history:

```bash
$ docker history figlet
IMAGE         CREATED            CREATED BY                     SIZE
27954bb5faaf  10 seconds ago     apt-get install figlet         1.627 MB
7257c37726a1  About an hour ago  /bin/sh -c apt-get update      21.58 MB
07c86167cdc4  4 days ago         /bin/sh -c #(nop) CMD ["/bin   0 B
<missing>     4 days ago         /bin/sh -c sed -i 's/^#\s*\(   1.895 kB
<missing>     4 days ago         /bin/sh -c echo '#!/bin/sh'    194.5 kB
<missing>     4 days ago         /bin/sh -c #(nop) ADD file:b   187.8 MB
```

* Exec syntax specifies an *exact* command to execute.

* Shell syntax specifies a command to be wrapped within `/bin/sh -c "..."`.

---

## When to use exec syntax and shell syntax

* shell syntax:

  * is easier to write
  * interpolates environment variables and other shell expressions
  * creates an extra process (`/bin/sh -c ...`) to parse the string
  * requires `/bin/sh` to exist in the container

* exec syntax:

  * is harder to write (and read!)
  * passes all arguments without extra processing
  * doesn't create an extra process
  * doesn't require `/bin/sh` to exist in the container

---

## Pro-tip: the `exec` shell built-in

POSIX shells have a built-in command named `exec`.

`exec` should be followed by a program and its arguments.

From a user perspective:

- it looks like the shell exits right away after the command execution,

- in fact, the shell exits just *before* command execution;

- or rather, the shell gets *replaced* by the command.
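
A minimal demonstration (no Docker needed): the PID stays the same across `exec`, because the shell is replaced rather than forking a child.

```bash
# Both lines print the same PID: the second shell *is* the first process.
sh -c 'echo "before exec: $$"; exec sh -c "echo \"after exec: \$\$\""'
```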
---

## Example using `exec`

```dockerfile
CMD exec figlet -f script hello
```

In this example, `sh -c` will still be used, but
`figlet` will be PID 1 in the container.

The shell gets replaced by `figlet` when `figlet` starts execution.

This lets us run processes as PID 1 without using JSON.

???

:EN:- Towards automated, reproducible builds
:EN:- Writing our first Dockerfile
:FR:- Rendre le processus automatique et reproductible
:FR:- Écrire son premier Dockerfile
362
slides/containers/Buildkit.md
Normal file

# BuildKit

- "New" backend for Docker builds

- announced in 2017

- ships with Docker Engine 18.09

- enabled by default on Docker Desktop in 2021

- Huge improvements in build efficiency

- 100% compatible with existing Dockerfiles

- New features for multi-arch

- Not just for building container images

---

## Old vs New

- Classic `docker build`:

  - copy whole build context
  - linear execution
  - `docker run` + `docker commit` + `docker run` + `docker commit`...

- BuildKit:

  - copy files only when they are needed; cache them
  - compute dependency graph (dependencies are expressed by `COPY`)
  - parallel execution
  - doesn't rely on Docker, but on internal runner/snapshotter
  - can run in "normal" containers (including in Kubernetes pods)

---

## Parallel execution

- In multi-stage builds, all stages can be built in parallel

  (example: https://github.com/jpetazzo/shpod; [before] and [after])

- Stages are built only when they are necessary

  (i.e. if their output is tagged or used in another necessary stage)

- Files are copied from context only when needed

- Files are cached in the builder

[before]: https://github.com/jpetazzo/shpod/blob/c6efedad6d6c3dc3120dbc0ae0a6915f85862474/Dockerfile
[after]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile

---

## Turning it on and off

- On recent versions of Docker Desktop (since 2021):

  *enabled by default*

- On older versions, or on Docker CE (Linux):

  `export DOCKER_BUILDKIT=1`

- Turning it off:

  `export DOCKER_BUILDKIT=0`

---

## Multi-arch support

- Historically, Docker only ran on x86_64 / amd64

  (Intel/AMD 64 bits architecture)

- Folks have been running it on 32-bit ARM for ages

  (e.g. Raspberry Pi)

- This required a Go compiler and appropriate base images

  (which means changing/adapting Dockerfiles to use these base images)

- Docker [image manifest v2 schema 2][manifest] introduces multi-arch images

  (`FROM alpine` automatically gets the right image for your architecture)

[manifest]: https://docs.docker.com/registry/spec/manifest-v2-2/

---

## Why?

- Raspberry Pi (32-bit and 64-bit ARM)

- Other ARM-based embedded systems (ODROID, NVIDIA Jetson...)

- Apple M1

- AWS Graviton

- Ampere Altra (e.g. on Oracle Cloud)

- ...

---

## Multi-arch builds in a nutshell

Use the `docker buildx build` command:

```bash
docker buildx build … \
  --platform linux/amd64,linux/arm64,linux/arm/v7,linux/386 \
  [--tag jpetazzo/hello --push]
```

- Requires all base images to be available for these platforms

- Must not use binary downloads with hard-coded architectures!

  (streamlining a Dockerfile for multi-arch: [before], [after])

[before]: https://github.com/jpetazzo/shpod/blob/d20887bbd56b5fcae2d5d9b0ce06cae8887caabf/Dockerfile
[after]: https://github.com/jpetazzo/shpod/blob/c50789e662417b34fea6f5e1d893721d66d265b7/Dockerfile

---

## Native vs emulated vs cross

- Native builds:

  *aarch64 machine running aarch64 programs building aarch64 images/binaries*

- Emulated builds:

  *x86_64 machine running aarch64 programs building aarch64 images/binaries*

- Cross builds:

  *x86_64 machine running x86_64 programs building aarch64 images/binaries*

---

## Native

- Dockerfiles are (relatively) simple to write

  (nothing special to do to handle multi-arch; just avoid hard-coded archs)

- Best performance

- Requires "exotic" machines

- Requires setting up a build farm

---

## Emulated

- Dockerfiles are (relatively) simple to write

- Emulation performance can vary

  (from "OK" to "ouch this is slow")

- Emulation isn't always perfect

  (weird bugs/crashes are rare but can happen)

- Doesn't require special machines

- Supports arbitrary architectures thanks to QEMU

---

## Cross

- Dockerfiles are more complicated to write

- Requires cross-compilation toolchains

- Performance is good

- Doesn't require special machines

---

## Native builds

- Requires base images to be available

- To view available architectures for an image:

  ```bash
  regctl manifest get --list <imagename>
  docker manifest inspect <imagename>
  ```

- Nothing special to do, *except* when downloading binaries!

  ```
  https://releases.hashicorp.com/terraform/1.1.5/terraform_1.1.5_linux_`amd64`.zip
  ```

---

## Finding the right architecture

`uname -m` → armv7l, aarch64, i686, x86_64

`GOARCH` (from `go env`) → arm, arm64, 386, amd64

In Dockerfile, add `ARG TARGETARCH` (or `ARG TARGETPLATFORM`)

- `TARGETARCH` matches `GOARCH`

- `TARGETPLATFORM` → linux/arm/v7, linux/arm64, linux/386, linux/amd64
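
For instance, a sketch of a `Dockerfile` fragment using `TARGETARCH` to fetch the right binary (Terraform's release URLs happen to follow the `GOARCH` naming):

```dockerfile
FROM alpine
# TARGETARCH is set automatically by `docker buildx build` (e.g. amd64, arm64)
ARG TARGETARCH
ADD https://releases.hashicorp.com/terraform/1.1.5/terraform_1.1.5_linux_${TARGETARCH}.zip /tmp/
```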
---

class: extra-details

## Welp

Sometimes, binary releases be like:

```
Linux_arm64.tar.gz
Linux_ppc64le.tar.gz
Linux_s390x.tar.gz
Linux_x86_64.tar.gz
```

This needs a bit of custom mapping.

---

## Emulation

- Leverages `binfmt_misc` and QEMU on Linux

- Enabling:

  ```bash
  docker run --rm --privileged aptman/qus -s -- -p
  ```

- Disabling:

  ```bash
  docker run --rm --privileged aptman/qus -- -r
  ```

- Checking status:

  ```bash
  ls -l /proc/sys/fs/binfmt_misc
  ```

---

class: extra-details

## How it works

- `binfmt_misc` lets us register _interpreters_ for binaries, e.g.:

  - [DOSBox][dosbox] for DOS programs

  - [Wine][wine] for Windows programs

  - [QEMU][qemu] for Linux programs for other architectures

- When we try to execute e.g. a SPARC binary on our x86_64 machine:

  - `binfmt_misc` detects the binary format and invokes `qemu-<arch> the-binary ...`

  - QEMU translates SPARC instructions to x86_64 instructions

  - system calls go straight to the kernel

[dosbox]: https://www.dosbox.com/
[qemu]: https://www.qemu.org/
[wine]: https://www.winehq.org/

---

class: extra-details

## QEMU registration

- The `aptman/qus` image mentioned earlier contains static QEMU builds

- It registers all these interpreters with the kernel

- For more details, check:

  - https://github.com/dbhi/qus

  - https://dbhi.github.io/qus/

---

## Cross-compilation

- Cross-compilation is about 10x faster than emulation

  (non-scientific benchmarks!)

- In Dockerfile, add:

  `ARG BUILDARCH BUILDPLATFORM TARGETARCH TARGETPLATFORM`

- Can use `FROM --platform=$BUILDPLATFORM <image>`

- Then use `$TARGETARCH` or `$TARGETPLATFORM`

  (e.g. for Go, `export GOARCH=$TARGETARCH`)

- Check [tonistiigi/xx][xx] and [Toni's blog][toni] for some amazing cross tools!

[xx]: https://github.com/tonistiigi/xx
[toni]: https://medium.com/@tonistiigi/faster-multi-platform-builds-dockerfile-cross-compilation-guide-part-1-ec087c719eaf
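
Putting those pieces together, a cross-compiling Go build could look like this (a sketch; `app` and the source layout are hypothetical):

```dockerfile
# The build stage runs natively on the build machine...
FROM --platform=$BUILDPLATFORM golang:1.19 AS build
ARG TARGETARCH
WORKDIR /src
COPY . .
# ...but emits a binary for the target architecture.
RUN GOARCH=$TARGETARCH CGO_ENABLED=0 go build -o /app .

FROM alpine
COPY --from=build /app /app
CMD ["/app"]
```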
---

## Checking runtime capabilities

Build and run the following Dockerfile:

```dockerfile
FROM --platform=linux/amd64 busybox AS amd64
FROM --platform=linux/arm64 busybox AS arm64
FROM --platform=linux/arm/v7 busybox AS arm32
FROM --platform=linux/386 busybox AS ia32
FROM alpine
RUN apk add file
WORKDIR /root
COPY --from=amd64 /bin/busybox /root/amd64/busybox
COPY --from=arm64 /bin/busybox /root/arm64/busybox
COPY --from=arm32 /bin/busybox /root/arm32/busybox
COPY --from=ia32 /bin/busybox /root/ia32/busybox
CMD for A in *; do echo "$A => $($A/busybox uname -a)"; done
```

It will indicate which executables can be run on your engine.

---

## More than builds

- BuildKit is also used in other systems:

  - [Earthly] - generic repeatable build pipelines

  - [Dagger] - CICD pipelines that run anywhere

  - and more!

[Earthly]: https://earthly.dev/
[Dagger]: https://dagger.io/
317
slides/containers/Cmd_And_Entrypoint.md
Normal file

class: title

# `CMD` and `ENTRYPOINT`

![Container entry doors](images/entrypoint.jpg)

---

## Objectives

In this lesson, we will learn about two important
Dockerfile commands:

`CMD` and `ENTRYPOINT`.

These commands allow us to set the default command
to run in a container.

---

## Defining a default command

When people run our container, we want to greet them with a nice hello message, in a custom font.

For that, we will execute:

```bash
figlet -f script hello
```

* `-f script` tells figlet to use a fancy font.

* `hello` is the message that we want it to display.

---
|
||||
|
||||
## Adding `CMD` to our Dockerfile
|
||||
|
||||
Our new Dockerfile will look like this:
|
||||
|
||||
```dockerfile
|
||||
FROM ubuntu
|
||||
RUN apt-get update
|
||||
RUN ["apt-get", "install", "figlet"]
|
||||
CMD figlet -f script hello
|
||||
```
|
||||
|
||||
* `CMD` defines a default command to run when none is given.
|
||||
|
||||
* It can appear at any point in the file.
|
||||
|
||||
* Each `CMD` will replace and override the previous one.
|
||||
|
||||
* As a result, having multiple `CMD` lines is useless: only the last one takes effect.
|
||||
|
||||
---
|
||||
|
||||
## Build and test our image
|
||||
|
||||
Let's build it:
|
||||
|
||||
```bash
|
||||
$ docker build -t figlet .
|
||||
...
|
||||
Successfully built 042dff3b4a8d
|
||||
Successfully tagged figlet:latest
|
||||
```
|
||||
|
||||
And run it:
|
||||
|
||||
```bash
|
||||
$ docker run figlet
|
||||
_ _ _
|
||||
| | | | | |
|
||||
| | _ | | | | __
|
||||
|/ \ |/ |/ |/ / \_
|
||||
| |_/|__/|__/|__/\__/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Overriding `CMD`
|
||||
|
||||
If we want to get a shell into our container (instead of running
|
||||
`figlet`), we just have to specify a different program to run:
|
||||
|
||||
```bash
|
||||
$ docker run -it figlet bash
|
||||
root@7ac86a641116:/#
|
||||
```
|
||||
|
||||
* We specified `bash`.
|
||||
|
||||
* It replaced the value of `CMD`.
|
||||
|
||||
---
|
||||
|
||||
## Using `ENTRYPOINT`
|
||||
|
||||
We want to be able to specify a different message on the command line,
|
||||
while retaining `figlet` and some default parameters.
|
||||
|
||||
In other words, we would like to be able to do this:
|
||||
|
||||
```bash
|
||||
$ docker run figlet salut
|
||||
_
|
||||
| |
|
||||
, __, | | _|_
|
||||
/ \_/ | |/ | | |
|
||||
\/ \_/|_/|__/ \_/|_/|_/
|
||||
```
|
||||
|
||||
|
||||
We will use the `ENTRYPOINT` verb in Dockerfile.
|
||||
|
||||
---
|
||||
|
||||
## Adding `ENTRYPOINT` to our Dockerfile
|
||||
|
||||
Our new Dockerfile will look like this:
|
||||
|
||||
```dockerfile
|
||||
FROM ubuntu
|
||||
RUN apt-get update
|
||||
RUN ["apt-get", "install", "figlet"]
|
||||
ENTRYPOINT ["figlet", "-f", "script"]
|
||||
```
|
||||
|
||||
* `ENTRYPOINT` defines a base command (and its parameters) for the container.
|
||||
|
||||
* The command line arguments are appended to those parameters.
|
||||
|
||||
* Like `CMD`, `ENTRYPOINT` can appear anywhere, and replaces the previous value.
|
||||
|
||||
Why did we use JSON syntax for our `ENTRYPOINT`?
|
||||
|
||||
---
|
||||
|
||||
## Implications of JSON vs string syntax
|
||||
|
||||
* When `CMD` or `ENTRYPOINT` use string syntax, they get wrapped in `sh -c`.
|
||||
|
||||
* To avoid this wrapping, we can use JSON syntax.
|
||||
|
||||
What if we used `ENTRYPOINT` with string syntax?
|
||||
|
||||
```bash
|
||||
$ docker run figlet salut
|
||||
```
|
||||
|
||||
This would run the following command in the `figlet` image:
|
||||
|
||||
```bash
|
||||
sh -c "figlet -f script" salut
|
||||
```
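
Why does the extra argument get lost? When `sh -c` runs a command string, any following arguments become the positional parameters (`$0`, `$1`, ...) of that string, not arguments of the command inside it. A minimal sketch you can try in any POSIX shell:

```shell
# 'salut' becomes $0 of the inline script; it is not passed to echo.
sh -c 'echo "fixed args:" "$@"' salut
# The script's "$@" is empty, so only "fixed args:" is printed.
```

This is exactly why `salut` never reaches `figlet` with the string syntax.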
|
||||
|
||||
---
|
||||
|
||||
## Build and test our image
|
||||
|
||||
Let's build it:
|
||||
|
||||
```bash
|
||||
$ docker build -t figlet .
|
||||
...
|
||||
Successfully built 36f588918d73
|
||||
Successfully tagged figlet:latest
|
||||
```
|
||||
|
||||
And run it:
|
||||
|
||||
```bash
|
||||
$ docker run figlet salut
|
||||
_
|
||||
| |
|
||||
, __, | | _|_
|
||||
/ \_/ | |/ | | |
|
||||
\/ \_/|_/|__/ \_/|_/|_/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Using `CMD` and `ENTRYPOINT` together
|
||||
|
||||
What if we want to define a default message for our container?
|
||||
|
||||
Then we will use `ENTRYPOINT` and `CMD` together.
|
||||
|
||||
* `ENTRYPOINT` will define the base command for our container.
|
||||
|
||||
* `CMD` will define the default parameter(s) for this command.
|
||||
|
||||
* They *both* have to use JSON syntax.
|
||||
|
||||
---
|
||||
|
||||
## `CMD` and `ENTRYPOINT` together
|
||||
|
||||
Our new Dockerfile will look like this:
|
||||
|
||||
```dockerfile
|
||||
FROM ubuntu
|
||||
RUN apt-get update
|
||||
RUN ["apt-get", "install", "figlet"]
|
||||
ENTRYPOINT ["figlet", "-f", "script"]
|
||||
CMD ["hello world"]
|
||||
```
|
||||
|
||||
* `ENTRYPOINT` defines a base command (and its parameters) for the container.
|
||||
|
||||
* If we don't specify extra command-line arguments when starting the container,
|
||||
the value of `CMD` is appended.
|
||||
|
||||
* Otherwise, our extra command-line arguments are used instead of `CMD`.
|
||||
|
||||
---
|
||||
|
||||
## Build and test our image
|
||||
|
||||
Let's build it:
|
||||
|
||||
```bash
|
||||
$ docker build -t myfiglet .
|
||||
...
|
||||
Successfully built 6e0b6a048a07
|
||||
Successfully tagged myfiglet:latest
|
||||
```
|
||||
|
||||
Run it without parameters:
|
||||
|
||||
```bash
|
||||
$ docker run myfiglet
|
||||
_ _ _ _
|
||||
| | | | | | | | |
|
||||
| | _ | | | | __ __ ,_ | | __|
|
||||
|/ \ |/ |/ |/ / \_ | | |_/ \_/ | |/ / |
|
||||
| |_/|__/|__/|__/\__/ \/ \/ \__/ |_/|__/\_/|_/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Overriding the image default parameters
|
||||
|
||||
Now let's pass extra arguments to the image.
|
||||
|
||||
```bash
|
||||
$ docker run myfiglet hola mundo
|
||||
_ _
|
||||
| | | | |
|
||||
| | __ | | __, _ _ _ _ _ __| __
|
||||
|/ \ / \_|/ / | / |/ |/ | | | / |/ | / | / \_
|
||||
| |_/\__/ |__/\_/|_/ | | |_/ \_/|_/ | |_/\_/|_/\__/
|
||||
```
|
||||
|
||||
We overrode `CMD` but still used `ENTRYPOINT`.
|
||||
|
||||
---
|
||||
|
||||
## Overriding `ENTRYPOINT`
|
||||
|
||||
What if we want to run a shell in our container?
|
||||
|
||||
We cannot just do `docker run myfiglet bash` because
|
||||
that would just tell figlet to display the word "bash."
|
||||
|
||||
We use the `--entrypoint` parameter:
|
||||
|
||||
```bash
|
||||
$ docker run -it --entrypoint bash myfiglet
|
||||
root@6027e44e2955:/#
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## `CMD` and `ENTRYPOINT` recap
|
||||
|
||||
- `docker run myimage` executes `ENTRYPOINT` + `CMD`
|
||||
|
||||
- `docker run myimage args` executes `ENTRYPOINT` + `args` (overriding `CMD`)
|
||||
|
||||
- `docker run --entrypoint prog myimage` executes `prog` (overriding both)
|
||||
|
||||
.small[
|
||||
| Command | `ENTRYPOINT` | `CMD` | Result
|
||||
|---------------------------------|--------------------|---------|-------
|
||||
| `docker run figlet` | none | none | Use values from base image (`bash`)
|
||||
| `docker run figlet hola` | none | none | Error (executable `hola` not found)
|
||||
| `docker run figlet` | `figlet -f script` | none | `figlet -f script`
|
||||
| `docker run figlet hola` | `figlet -f script` | none | `figlet -f script hola`
|
||||
| `docker run figlet` | none | `figlet -f script` | `figlet -f script`
|
||||
| `docker run figlet hola` | none | `figlet -f script` | Error (executable `hola` not found)
|
||||
| `docker run figlet` | `figlet -f script` | `hello` | `figlet -f script hello`
|
||||
| `docker run figlet hola` | `figlet -f script` | `hello` | `figlet -f script hola`
|
||||
]
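
The combination rule in the table above can be sketched as a toy shell function (purely an illustration of the rule; this is not how Docker itself is implemented):

```shell
# resolve ENTRYPOINT CMD [extra args...]
# Prints the command that would run: extra command-line arguments
# replace CMD, and the result is appended to ENTRYPOINT.
resolve() {
  entrypoint=$1; cmd=$2; shift 2
  if [ "$#" -gt 0 ]; then
    echo "$entrypoint $*"
  else
    echo "$entrypoint $cmd"
  fi
}
resolve "figlet -f script" "hello"       # figlet -f script hello
resolve "figlet -f script" "hello" hola  # figlet -f script hola
```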
|
||||
|
||||
---
|
||||
|
||||
## When to use `ENTRYPOINT` vs `CMD`
|
||||
|
||||
`ENTRYPOINT` is great for "containerized binaries".
|
||||
|
||||
Example: `docker run consul --help`
|
||||
|
||||
(Pretend that the `docker run` part isn't there!)
|
||||
|
||||
`CMD` is great for images with multiple binaries.
|
||||
|
||||
Example: `docker run busybox ifconfig`
|
||||
|
||||
(It makes sense to indicate *which* program we want to run!)
|
||||
|
||||
???
|
||||
|
||||
:EN:- CMD and ENTRYPOINT
|
||||
:FR:- CMD et ENTRYPOINT
|
||||
484
slides/containers/Compose_For_Dev_Stacks.md
Normal file
@@ -0,0 +1,484 @@
|
||||
# Compose for development stacks
|
||||
|
||||
Dockerfile = great to build *one* container image.
|
||||
|
||||
What if we have multiple containers?
|
||||
|
||||
What if some of them require particular `docker run` parameters?
|
||||
|
||||
How do we connect them all together?
|
||||
|
||||
... Compose solves these use-cases (and a few more).
|
||||
|
||||
---
|
||||
|
||||
## Life before Compose
|
||||
|
||||
Before we had Compose, we would typically write custom scripts to:
|
||||
|
||||
- build container images,
|
||||
|
||||
- run containers using these images,
|
||||
|
||||
- connect the containers together,
|
||||
|
||||
- rebuild, restart, update these images and containers.
|
||||
|
||||
---
|
||||
|
||||
## Life with Compose
|
||||
|
||||
Compose enables a simple, powerful onboarding workflow:
|
||||
|
||||
1. Checkout our code.
|
||||
|
||||
2. Run `docker-compose up`.
|
||||
|
||||
3. Our app is up and running!
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Life after Compose
|
||||
|
||||
(Or: when do we need something else?)
|
||||
|
||||
- Compose is *not* an orchestrator
|
||||
|
||||
- It isn't designed to run containers on multiple nodes
|
||||
|
||||
(it can, however, work with Docker Swarm Mode)
|
||||
|
||||
- Compose isn't ideal if we want to run containers on Kubernetes
|
||||
|
||||
- it uses different concepts (Compose services ≠ Kubernetes services)
|
||||
|
||||
- it needs a Docker Engine (although containerd support might be coming)
|
||||
|
||||
---
|
||||
|
||||
## First rodeo with Compose
|
||||
|
||||
1. Write Dockerfiles
|
||||
|
||||
2. Describe our stack of containers in a YAML file called `docker-compose.yml`
|
||||
|
||||
3. `docker-compose up` (or `docker-compose up -d` to run in the background)
|
||||
|
||||
4. Compose pulls and builds the required images, and starts the containers
|
||||
|
||||
5. Compose shows the combined logs of all the containers
|
||||
|
||||
(if running in the background, use `docker-compose logs`)
|
||||
|
||||
6. Hit Ctrl-C to stop the whole stack
|
||||
|
||||
(if running in the background, use `docker-compose stop`)
|
||||
|
||||
---
|
||||
|
||||
## Iterating
|
||||
|
||||
After making changes to our source code, we can:
|
||||
|
||||
1. `docker-compose build` to rebuild container images
|
||||
|
||||
2. `docker-compose up` to restart the stack with the new images
|
||||
|
||||
We can also combine both with `docker-compose up --build`
|
||||
|
||||
Compose will be smart, and only recreate the containers that have changed.
|
||||
|
||||
When working with interpreted languages:
|
||||
|
||||
- don't rebuild each time
|
||||
|
||||
- leverage a `volumes` section instead
|
||||
|
||||
---
|
||||
|
||||
## Launching Our First Stack with Compose
|
||||
|
||||
First step: clone the source code for the app we will be working on.
|
||||
|
||||
```bash
|
||||
git clone https://github.com/jpetazzo/trainingwheels
|
||||
cd trainingwheels
|
||||
```
|
||||
|
||||
Second step: start the app.
|
||||
|
||||
```bash
|
||||
docker-compose up
|
||||
```
|
||||
|
||||
Watch Compose build and run the app.
|
||||
|
||||
That Compose stack exposes a web server on port 8000; try connecting to it.
|
||||
|
||||
---
|
||||
|
||||
## Launching Our First Stack with Compose
|
||||
|
||||
We should see a web page like this:
|
||||
|
||||

|
||||
|
||||
Each time we reload, the counter should increase.
|
||||
|
||||
---
|
||||
|
||||
## Stopping the app
|
||||
|
||||
When we hit Ctrl-C, Compose tries to gracefully terminate all of the containers.
|
||||
|
||||
After ten seconds (or if we press `^C` again) it will forcibly kill them.
|
||||
|
||||
---
|
||||
|
||||
## The `docker-compose.yml` file
|
||||
|
||||
Here is the file used in the demo:
|
||||
|
||||
.small[
|
||||
```yaml
|
||||
version: "3"
|
||||
|
||||
services:
|
||||
www:
|
||||
build: www
|
||||
ports:
|
||||
- ${PORT-8000}:5000
|
||||
user: nobody
|
||||
environment:
|
||||
DEBUG: 1
|
||||
command: python counter.py
|
||||
volumes:
|
||||
- ./www:/src
|
||||
|
||||
redis:
|
||||
image: redis
|
||||
```
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Compose file structure
|
||||
|
||||
A Compose file has multiple sections:
|
||||
|
||||
* `version` is mandatory. (Typically use "3".)
|
||||
|
||||
* `services` is mandatory. Each service corresponds to a container.
|
||||
|
||||
* `networks` is optional and indicates to which networks containers should be connected.
|
||||
<br/>(By default, containers will be connected on a private, per-compose-file network.)
|
||||
|
||||
* `volumes` is optional and can define volumes to be used and/or shared by the containers.
|
||||
|
||||
---
|
||||
|
||||
## Compose file versions
|
||||
|
||||
* Version 1 is legacy and shouldn't be used.
|
||||
|
||||
(If you see a Compose file without `version` and `services`, it's a legacy v1 file.)
|
||||
|
||||
* Version 2 added support for networks and volumes.
|
||||
|
||||
* Version 3 added support for deployment options (scaling, rolling updates, etc).
|
||||
|
||||
* Typically use `version: "3"`.
|
||||
|
||||
The [Docker documentation](https://docs.docker.com/compose/compose-file/)
|
||||
has excellent information about the Compose file format if you need to know more about versions.
|
||||
|
||||
---
|
||||
|
||||
## Containers in `docker-compose.yml`
|
||||
|
||||
Each service in the YAML file must contain either `build` or `image`.
|
||||
|
||||
* `build` indicates a path containing a Dockerfile.
|
||||
|
||||
* `image` indicates an image name (local, or on a registry).
|
||||
|
||||
* If both are specified, an image will be built from the `build` directory and named `image`.
|
||||
|
||||
The other parameters are optional.
|
||||
|
||||
They encode the parameters that you would typically add to `docker run`.
|
||||
|
||||
Sometimes they offer minor improvements over their `docker run` counterparts.
|
||||
|
||||
---
|
||||
|
||||
## Container parameters
|
||||
|
||||
* `command` indicates what to run (like `CMD` in a Dockerfile).
|
||||
|
||||
* `ports` translates to one (or multiple) `-p` options to map ports.
|
||||
<br/>You can specify local ports (i.e. `x:y` to expose public port `x`).
|
||||
|
||||
* `volumes` translates to one (or multiple) `-v` options.
|
||||
<br/>You can use relative paths here.
|
||||
|
||||
For the full list, check: https://docs.docker.com/compose/compose-file/
|
||||
|
||||
---
|
||||
|
||||
## Environment variables
|
||||
|
||||
- We can use environment variables in Compose files
|
||||
|
||||
(like `$THIS` or `${THAT}`)
|
||||
|
||||
- We can provide default values, e.g. `${PORT-8000}`
|
||||
|
||||
- Compose will also automatically load the environment file `.env`
|
||||
|
||||
(it should contain `VAR=value`, one per line)
|
||||
|
||||
- This is a great way to customize build and run parameters
|
||||
|
||||
(base image versions to use, build and run secrets, port numbers...)
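
The `${PORT-8000}` syntax follows standard shell parameter expansion: the default applies only when the variable is unset. You can check the behavior in any POSIX shell:

```shell
unset PORT
echo "${PORT-8000}"   # variable unset: prints 8000
PORT=8001
echo "${PORT-8000}"   # variable set: prints 8001
```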
|
||||
|
||||
---
|
||||
|
||||
## Configuring a Compose stack
|
||||
|
||||
- Follow [12-factor app configuration principles][12factorconfig]
|
||||
|
||||
(configure the app through environment variables)
|
||||
|
||||
- Provide (in the repo) a default environment file suitable for development
|
||||
|
||||
(no secret or sensitive value)
|
||||
|
||||
- Copy the default environment file to `.env` and tweak it
|
||||
|
||||
(or: provide a script to generate `.env` from a template)
|
||||
|
||||
[12factorconfig]: https://12factor.net/config
|
||||
|
||||
---
|
||||
|
||||
## Running multiple copies of a stack
|
||||
|
||||
- Copy the stack in two different directories, e.g. `front` and `frontcopy`
|
||||
|
||||
- Compose prefixes images and containers with the directory name:
|
||||
|
||||
`front_www`, `front_www_1`, `front_db_1`
|
||||
|
||||
`frontcopy_www`, `frontcopy_www_1`, `frontcopy_db_1`
|
||||
|
||||
- Alternatively, use `docker-compose -p frontcopy`
|
||||
|
||||
(to set the `--project-name` of a stack, which defaults to the directory name)
|
||||
|
||||
- Each copy is isolated from the others (runs on a different network)
|
||||
|
||||
---
|
||||
|
||||
## Checking stack status
|
||||
|
||||
We have `ps`, `docker ps`, and similarly, `docker-compose ps`:
|
||||
|
||||
```bash
|
||||
$ docker-compose ps
|
||||
Name Command State Ports
|
||||
----------------------------------------------------------------------------
|
||||
trainingwheels_redis_1 /entrypoint.sh red Up 6379/tcp
|
||||
trainingwheels_www_1 python counter.py Up 0.0.0.0:8000->5000/tcp
|
||||
```
|
||||
|
||||
Shows the status of all the containers of our stack.
|
||||
|
||||
Doesn't show the other containers.
|
||||
|
||||
---
|
||||
|
||||
## Cleaning up (1)
|
||||
|
||||
If you have started your application in the background with Compose and
|
||||
want to stop it easily, you can use the `kill` command:
|
||||
|
||||
```bash
|
||||
$ docker-compose kill
|
||||
```
|
||||
|
||||
Likewise, `docker-compose rm` will let you remove containers (after confirmation):
|
||||
|
||||
```bash
|
||||
$ docker-compose rm
|
||||
Going to remove trainingwheels_redis_1, trainingwheels_www_1
|
||||
Are you sure? [yN] y
|
||||
Removing trainingwheels_redis_1...
|
||||
Removing trainingwheels_www_1...
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Cleaning up (2)
|
||||
|
||||
Alternatively, `docker-compose down` will stop and remove containers.
|
||||
|
||||
It will also remove other resources, like networks that were created for the application.
|
||||
|
||||
```bash
|
||||
$ docker-compose down
|
||||
Stopping trainingwheels_www_1 ... done
|
||||
Stopping trainingwheels_redis_1 ... done
|
||||
Removing trainingwheels_www_1 ... done
|
||||
Removing trainingwheels_redis_1 ... done
|
||||
```
|
||||
|
||||
Use `docker-compose down -v` to remove everything including volumes.
|
||||
|
||||
---
|
||||
|
||||
## Special handling of volumes
|
||||
|
||||
- When an image gets updated, Compose automatically creates a new container
|
||||
|
||||
- The data in the old container is lost...
|
||||
|
||||
- ...Except if the container is using a *volume*
|
||||
|
||||
- Compose will then re-attach that volume to the new container
|
||||
|
||||
(and data is then retained across database upgrades)
|
||||
|
||||
- All good database images use volumes
|
||||
|
||||
(e.g. all official images)
|
||||
|
||||
---
|
||||
|
||||
## Gotchas with volumes
|
||||
|
||||
- Unfortunately, Docker volumes don't have labels or metadata
|
||||
|
||||
- Compose tracks volumes thanks to their associated container
|
||||
|
||||
- If the container is deleted, the volume gets orphaned
|
||||
|
||||
- Example: `docker-compose down && docker-compose up`
|
||||
|
||||
- the old volume still exists, detached from its container
|
||||
|
||||
- a new volume gets created
|
||||
|
||||
- `docker-compose down -v`/`--volumes` deletes volumes
|
||||
|
||||
(but **not** `docker-compose down && docker-compose down -v`!)
|
||||
|
||||
---
|
||||
|
||||
## Managing volumes explicitly
|
||||
|
||||
Option 1: *named volumes*
|
||||
|
||||
```yaml
|
||||
services:
|
||||
app:
|
||||
volumes:
|
||||
- data:/some/path
|
||||
volumes:
|
||||
data:
|
||||
```
|
||||
|
||||
- Volume will be named `<project>_data`
|
||||
|
||||
- It won't be orphaned with `docker-compose down`
|
||||
|
||||
- It will correctly be removed with `docker-compose down -v`
|
||||
|
||||
---
|
||||
|
||||
## Managing volumes explicitly
|
||||
|
||||
Option 2: *relative paths*
|
||||
|
||||
```yaml
|
||||
services:
|
||||
app:
|
||||
volumes:
|
||||
- ./data:/some/path
|
||||
```
|
||||
|
||||
- Makes it easy to colocate the app and its data
|
||||
|
||||
(for migration, backups, disk usage accounting...)
|
||||
|
||||
- Won't be removed by `docker-compose down -v`
|
||||
|
||||
---
|
||||
|
||||
## Managing complex stacks
|
||||
|
||||
- Compose provides multiple features to manage complex stacks
|
||||
|
||||
(with many containers)
|
||||
|
||||
- `-f`/`--file`/`$COMPOSE_FILE` can be a list of Compose files
|
||||
|
||||
(separated by `:` and merged together)
|
||||
|
||||
- Services can be assigned to one or more *profiles*
|
||||
|
||||
- `--profile`/`$COMPOSE_PROFILES` can be a list of comma-separated profiles
|
||||
|
||||
(see [Using service profiles][profiles] in the Compose documentation)
|
||||
|
||||
- These variables can be set in `.env`
|
||||
|
||||
[profiles]: https://docs.docker.com/compose/profiles/
|
||||
|
||||
---
|
||||
|
||||
## Dependencies
|
||||
|
||||
- A service can have a `depends_on` section
|
||||
|
||||
(listing one or more other services)
|
||||
|
||||
- This is used when bringing up individual services
|
||||
|
||||
(e.g. `docker-compose up blah` or `docker-compose run foo`)
|
||||
|
||||
⚠️ It doesn't make a service "wait" for another one to be up!
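
If a service genuinely needs to wait for its dependency, one option is to combine `depends_on` with a `healthcheck` (supported in the v2.1 file format and again in the newer Compose Specification; the service names below reuse the earlier demo and are illustrative):

```yaml
version: "2.1"
services:
  www:
    build: www
    depends_on:
      redis:
        condition: service_healthy
  redis:
    image: redis
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
```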
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## A bit of history and trivia
|
||||
|
||||
- Compose was initially named "Fig"
|
||||
|
||||
- Compose is one of the only components of Docker written in Python
|
||||
|
||||
(almost everything else is in Go)
|
||||
|
||||
- In 2020, Docker introduced "Compose CLI":
|
||||
|
||||
- `docker compose` command to deploy Compose stacks to some clouds
|
||||
|
||||
- progressively getting feature parity with `docker-compose`
|
||||
|
||||
- also provides numerous improvements (e.g. leverages BuildKit by default)
|
||||
|
||||
???
|
||||
|
||||
:EN:- Using compose to describe an environment
|
||||
:EN:- Connecting services together with a *Compose file*
|
||||
|
||||
:FR:- Utiliser Compose pour décrire son environnement
|
||||
:FR:- Écrire un *Compose file* pour connecter les services entre eux
|
||||
223
slides/containers/Connecting_Containers_With_Links.md
Normal file
@@ -0,0 +1,223 @@
|
||||
|
||||
class: title
|
||||
|
||||
# Connecting containers with links
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Objectives
|
||||
|
||||
Links were the "legacy" way of connecting containers (before the implementation of the CNM, the Container Network Model).
|
||||
|
||||
They are still useful in some scenarios.
|
||||
|
||||
---
|
||||
|
||||
## How *links* work
|
||||
|
||||
* Links are created *between two containers*
|
||||
* Links are created *from the client to the server*
|
||||
* Links associate an arbitrary name to an existing container
|
||||
* Links exist *only in the context of the client*
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## The plan
|
||||
|
||||
* We will create the `redis` container first.
|
||||
* Then, we will create the `www` container, *with a link to the previous container.*
|
||||
* We don't need to use a custom network for this to work.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Create the `redis` container
|
||||
|
||||
Let's launch a container from the `redis` image.
|
||||
|
||||
```bash
|
||||
$ docker run -d --name datastore redis
|
||||
<yourContainerID>
|
||||
```
|
||||
|
||||
Let's check the container is running:
|
||||
|
||||
```bash
|
||||
$ docker ps -l
|
||||
CONTAINER ID IMAGE COMMAND ... PORTS NAMES
|
||||
9efd72a4f320 redis:latest redis-server ... 6379/tcp datastore
|
||||
```
|
||||
|
||||
|
||||
* Our container is launched and running an instance of Redis.
|
||||
* We used the `--name` flag to reference our container easily later.
|
||||
* We could have used *any name we wanted.*
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Create the `www` container
|
||||
|
||||
If we create the web container without any extra option, it will not be able to connect to redis.
|
||||
|
||||
```bash
|
||||
$ docker run -dP jpetazzo/trainingwheels
|
||||
```
|
||||
|
||||
Check the port number with `docker ps`, and connect to it.
|
||||
|
||||
We get the same red error page as before.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## How our app connects to Redis
|
||||
|
||||
Remember, in the code, we connect to the name `redis`:
|
||||
|
||||
```python
|
||||
redis = redis.Redis("redis")
|
||||
```
|
||||
|
||||
* This means "try to connect to 'redis'".
|
||||
* Not 192.168.123.234.
|
||||
* Not redis.prod.mycompany.net.
|
||||
|
||||
*Obviously* it doesn't work.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Creating a linked container
|
||||
|
||||
Docker allows us to specify *links*.
|
||||
|
||||
Links indicate an intent: "this container will connect to this other container."
|
||||
|
||||
Here is how to create our first link:
|
||||
|
||||
```bash
|
||||
$ docker run -ti --link datastore:redis alpine sh
|
||||
```
|
||||
|
||||
In this container, we can communicate with `datastore` using
|
||||
the `redis` DNS alias.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## DNS
|
||||
|
||||
Docker has created a DNS entry for the container, resolving to its internal IP address.
|
||||
|
||||
```bash
|
||||
$ docker run -it --link datastore:redis alpine ping redis
|
||||
PING redis (172.17.0.29): 56 data bytes
|
||||
64 bytes from 172.17.0.29: icmp_seq=0 ttl=64 time=0.164 ms
|
||||
64 bytes from 172.17.0.29: icmp_seq=1 ttl=64 time=0.122 ms
|
||||
64 bytes from 172.17.0.29: icmp_seq=2 ttl=64 time=0.086 ms
|
||||
^C--- redis ping statistics ---
|
||||
3 packets transmitted, 3 packets received, 0% packet loss
|
||||
round-trip min/avg/max/stddev = 0.086/0.124/0.164/0.032 ms
|
||||
```
|
||||
|
||||
|
||||
* The `--link` flag connects one container to another.
|
||||
* We specify the name of the container to link to, `datastore`, and an
|
||||
alias for the link, `redis`, in the format `name:alias`.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Starting our application
|
||||
|
||||
Now that we've poked around a bit let's start the application itself in
|
||||
a fresh container:
|
||||
|
||||
```bash
|
||||
$ docker run -d -P --link datastore:redis jpetazzo/trainingwheels
|
||||
```
|
||||
|
||||
Now let's check the port number associated to the container.
|
||||
|
||||
```bash
|
||||
$ docker ps -l
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Confirming that our application works properly
|
||||
|
||||
Finally, let's browse to our application and confirm it's working.
|
||||
|
||||
```bash
|
||||
http://<yourHostIP>:<port>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Links and environment variables
|
||||
|
||||
In addition to the DNS information, Docker will automatically set environment variables in our container, giving extra details about the linked container.
|
||||
|
||||
```bash
|
||||
$ docker run --link datastore:redis alpine env
|
||||
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
|
||||
HOSTNAME=0738e57b771e
|
||||
REDIS_PORT=tcp://172.17.0.120:6379
|
||||
REDIS_PORT_6379_TCP=tcp://172.17.0.120:6379
|
||||
REDIS_PORT_6379_TCP_ADDR=172.17.0.120
|
||||
REDIS_PORT_6379_TCP_PORT=6379
|
||||
REDIS_PORT_6379_TCP_PROTO=tcp
|
||||
REDIS_NAME=/dreamy_wilson/redis
|
||||
REDIS_ENV_REDIS_VERSION=2.8.13
|
||||
REDIS_ENV_REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-2.8.13.tar.gz
|
||||
REDIS_ENV_REDIS_DOWNLOAD_SHA1=a72925a35849eb2d38a1ea076a3db82072d4ee43
|
||||
HOME=/
|
||||
RUBY_MAJOR=2.1
|
||||
RUBY_VERSION=2.1.2
|
||||
```
|
||||
|
||||
|
||||
* Each variable is prefixed with the link alias: `redis`.
|
||||
* Includes connection information PLUS any environment variables set in
|
||||
the `datastore` container via `ENV` instructions.
|
||||
|
||||
---
|
||||
|
||||
## Differences between network aliases and links
|
||||
|
||||
* With network aliases, you can start containers in *any order.*
|
||||
* With links, you have to start the server (in our example: Redis) first.
|
||||
* With network aliases, you cannot change the name of the server once it is running. If you want to add a name, you have to create a new container.
|
||||
* With links, you can give new names to an existing container.
|
||||
* Network aliases require the use of a custom network.
|
||||
* Links can be used on the default bridge network.
|
||||
* Network aliases work across multi-host networking.
|
||||
* Links (as of Engine 1.11) only work with local containers (but this might be changed in the future).
|
||||
* Network aliases don't populate environment variables.
|
||||
* Links give access to the environment of the target container.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Section summary
|
||||
|
||||
We've learned how to:
|
||||
|
||||
* Create links between containers.
|
||||
* Use names and links to communicate across containers.
|
||||
|
||||
191
slides/containers/Container_Engines.md
Normal file
@@ -0,0 +1,191 @@
|
||||
# Docker Engine and other container engines
|
||||
|
||||
* We are going to cover the architecture of the Docker Engine.
|
||||
|
||||
* We will also present other container engines.
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||
## Docker Engine external architecture
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Docker Engine external architecture
|
||||
|
||||
* The Engine is a daemon (service running in the background).
|
||||
|
||||
* All interaction is done through a REST API exposed over a socket.
|
||||
|
||||
* On Linux, the default socket is a UNIX socket: `/var/run/docker.sock`.
|
||||
|
||||
* We can also use a TCP socket, with optional mutual TLS authentication.
|
||||
|
||||
* The `docker` CLI communicates with the Engine over the socket.
|
||||
|
||||
Note: strictly speaking, the Docker API is not fully REST.
|
||||
|
||||
Some operations (e.g. dealing with interactive containers
|
||||
and log streaming) don't fit the REST model.
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||
## Docker Engine internal architecture
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Docker Engine internal architecture
|
||||
|
||||
* Up to Docker 1.10: the Docker Engine is one single monolithic binary.
|
||||
|
||||
* Starting with Docker 1.11, the Engine is split into multiple parts:
|
||||
|
||||
- `dockerd` (REST API, auth, networking, storage)
|
||||
|
||||
- `containerd` (container lifecycle, controlled over a gRPC API)
|
||||
|
||||
- `containerd-shim` (per-container; does almost nothing, but makes it possible to restart the Engine without restarting the containers)
|
||||
|
||||
- `runc` (per-container; does the actual heavy lifting to start the container)
|
||||
|
||||
* Some features (like image and snapshot management) are progressively being pushed from `dockerd` to `containerd`.
|
||||
|
||||
For more details, check [this short presentation by Phil Estes](https://www.slideshare.net/PhilEstes/diving-through-the-layers-investigating-runc-containerd-and-the-docker-engine-architecture).
|
||||
|
||||
---
|
||||
|
||||
## Other container engines
|
||||
|
||||
The following list is not exhaustive.
|
||||
|
||||
Furthermore, we limited the scope to Linux containers.
|
||||
|
||||
We can also find containers (or things that look like containers) on other platforms
|
||||
like Windows, macOS, Solaris, FreeBSD ...
|
||||
|
||||
---
|
||||
|
||||
## LXC
|
||||
|
||||
* The venerable ancestor (first released in 2008).
|
||||
|
||||
* Docker initially relied on it to execute containers.
|
||||
|
||||
* No daemon; no central API.
|
||||
|
||||
* Each container is managed by a `lxc-start` process.
|
||||
|
||||
* Each `lxc-start` process exposes a custom API over a local UNIX socket, allowing interaction with the container.
|
||||
|
||||
* No notion of image (container filesystems have to be managed manually).
|
||||
|
||||
* Networking has to be set up manually.
|
||||
|
||||
---
|
||||
|
||||
## LXD
|
||||
|
||||
* Re-uses LXC code (through liblxc).
|
||||
|
||||
* Builds on top of LXC to offer a more modern experience.
|
||||
|
||||
* Daemon exposing a REST API.
|
||||
|
||||
* Can manage images, snapshots, migrations, networking, storage.
|
||||
|
||||
* "offers a user experience similar to virtual machines but using Linux containers instead."
|
||||
|
||||
---
|
||||
|
||||
## CRI-O
|
||||
|
||||
* Designed to be used with Kubernetes as a simple, basic runtime.
|
||||
|
||||
* Compares to `containerd`.
|
||||
|
||||
* Daemon exposing a gRPC interface.
|
||||
|
||||
* Controlled using the CRI API (Container Runtime Interface defined by Kubernetes).
|
||||
|
||||
* Needs an underlying OCI runtime (e.g. runc).
|
||||
|
||||
* Handles storage, images, networking (through CNI plugins).
|
||||
|
||||
We're not aware of anyone using it directly (i.e. outside of Kubernetes).
|
||||
|
||||
---
|
||||
|
||||
## systemd
|
||||
|
||||
* "init" system (PID 1) in most modern Linux distributions.
|
||||
|
||||
* Offers tools like `systemd-nspawn` and `machinectl` to manage containers.
|
||||
|
||||
* Per its manual, `systemd-nspawn` is "in many ways similar to chroot(1), but more powerful".
|
||||
|
||||
* `machinectl` can interact with VMs and containers managed by systemd.
|
||||
|
||||
* Exposes a DBUS API.
|
||||
|
||||
* Basic image support (tar archives and raw disk images).
|
||||
|
||||
* Network has to be set up manually.
|
||||
|
||||
---
|
||||
|
||||
## Kata containers
|
||||
|
||||
* OCI-compliant runtime.
|
||||
|
||||
* Fusion of two projects: Intel Clear Containers and Hyper runV.
|
||||
|
||||
* Runs each container in a lightweight virtual machine.
|
||||
|
||||
* Requires running on bare metal *or* with nested virtualization.
|
||||
|
||||
---
|
||||
|
||||
## gVisor
|
||||
|
||||
* OCI-compliant runtime.
|
||||
|
||||
* Implements a subset of the Linux kernel system calls.
|
||||
|
||||
* Written in Go; itself relies on a restricted set of host system calls.
|
||||
|
||||
* Can be heavily sandboxed.
|
||||
|
||||
* Can run in two modes:
|
||||
|
||||
* KVM (requires bare metal or nested virtualization),
|
||||
|
||||
* ptrace (no requirement, but slower).
|
||||
|
||||
---
|
||||
|
||||
## Overall ...
|
||||
|
||||
* The Docker Engine is very developer-centric:
|
||||
|
||||
- easy to install
|
||||
|
||||
- easy to use
|
||||
|
||||
- no manual setup
|
||||
|
||||
- first-class image build and transfer
|
||||
|
||||
* As a result, it is a fantastic tool in development environments.
|
||||
|
||||
* On servers:
|
||||
|
||||
- Docker is a good default choice
|
||||
|
||||
- If you use Kubernetes, the engine doesn't matter
|
||||
797
slides/containers/Container_Network_Model.md
Normal file
|
||||
|
||||
class: title
|
||||
|
||||
# The Container Network Model
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Objectives
|
||||
|
||||
We will learn about the CNM (Container Network Model).
|
||||
|
||||
At the end of this lesson, you will be able to:
|
||||
|
||||
* Create a private network for a group of containers.
|
||||
|
||||
* Use container naming to connect services together.
|
||||
|
||||
* Dynamically connect and disconnect containers to networks.
|
||||
|
||||
* Set the IP address of a container.
|
||||
|
||||
We will also explain the principle of overlay networks and network plugins.
|
||||
|
||||
---
|
||||
|
||||
## The Container Network Model
|
||||
|
||||
Docker has "networks".
|
||||
|
||||
We can manage them with the `docker network` commands; for instance:
|
||||
|
||||
```bash
|
||||
$ docker network ls
|
||||
NETWORK ID NAME DRIVER
|
||||
6bde79dfcf70 bridge bridge
|
||||
8d9c78725538 none null
|
||||
eb0eeab782f4 host host
|
||||
4c1ff84d6d3f blog-dev overlay
|
||||
228a4355d548 blog-prod overlay
|
||||
```
|
||||
|
||||
New networks can be created (with `docker network create`).
|
||||
|
||||
(Note: networks `none` and `host` are special; let's set them aside for now.)
|
||||
|
||||
---
|
||||
|
||||
## What's a network?
|
||||
|
||||
- Conceptually, a Docker "network" is a virtual switch
|
||||
|
||||
(we can also think about it like a VLAN, or a WiFi SSID, for instance)
|
||||
|
||||
- By default, containers are connected to a single network
|
||||
|
||||
(but they can be connected to zero, or many networks, even dynamically)
|
||||
|
||||
- Each network has its own subnet (IP address range)
|
||||
|
||||
- A network can be local (to a single Docker Engine) or global (span multiple hosts)
|
||||
|
||||
- Containers can have *network aliases* providing DNS-based service discovery
|
||||
|
||||
(and each network has its own "domain", "zone", or "scope")
|
||||
|
||||
---
|
||||
|
||||
## Service discovery
|
||||
|
||||
- A container can be given a network alias
|
||||
|
||||
(e.g. with `docker run --net some-network --net-alias db ...`)
|
||||
|
||||
- The containers running in the same network can resolve that network alias
|
||||
|
||||
(i.e. if they do a DNS lookup on `db`, it will give the container's address)
|
||||
|
||||
- We can have a different `db` container in each network
|
||||
|
||||
(this avoids naming conflicts between different stacks)
|
||||
|
||||
- When we name a container, it automatically adds the name as a network alias
|
||||
|
||||
  (i.e. `docker run --name xyz ...` is like `docker run --net-alias xyz ...`)
|
||||
|
||||
---
|
||||
|
||||
## Network isolation
|
||||
|
||||
- Networks are isolated
|
||||
|
||||
- By default, containers in network A cannot reach those in network B
|
||||
|
||||
- A container connected to both networks A and B can act as a router or proxy
|
||||
|
||||
- Published ports are always reachable through the Docker host address
|
||||
|
||||
(`docker run -P ...` makes a container port available to everyone)
|
||||
|
||||
---
|
||||
|
||||
## How to use networks
|
||||
|
||||
- We typically create one network per "stack" or app that we deploy
|
||||
|
||||
- More complex apps or stacks might require multiple networks
|
||||
|
||||
(e.g. `frontend`, `backend`, ...)
|
||||
|
||||
- Networks allow us to deploy multiple copies of the same stack
|
||||
|
||||
(e.g. `prod`, `dev`, `pr-442`, ....)
|
||||
|
||||
- If we use Docker Compose, this is managed automatically for us
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## CNM vs CNI
|
||||
|
||||
- CNM is the model used by Docker
|
||||
|
||||
- Kubernetes uses a different model, architected around CNI
|
||||
|
||||
(CNI is a kind of API between a container engine and *CNI plugins*)
|
||||
|
||||
- Docker model:
|
||||
|
||||
- multiple isolated networks
|
||||
- per-network service discovery
|
||||
- network interconnection requires extra steps
|
||||
|
||||
- Kubernetes model:
|
||||
|
||||
- single flat network
|
||||
- per-namespace service discovery
|
||||
- network isolation requires extra steps (Network Policies)
|
||||
|
||||
---
|
||||
|
||||
## Creating a network
|
||||
|
||||
Let's create a network called `dev`.
|
||||
|
||||
```bash
|
||||
$ docker network create dev
|
||||
4c1ff84d6d3f1733d3e233ee039cac276f425a9d5228a4355d54878293a889ba
|
||||
```
|
||||
|
||||
The network is now visible with the `network ls` command:
|
||||
|
||||
```bash
|
||||
$ docker network ls
|
||||
NETWORK ID NAME DRIVER
|
||||
6bde79dfcf70 bridge bridge
|
||||
8d9c78725538 none null
|
||||
eb0eeab782f4 host host
|
||||
4c1ff84d6d3f dev bridge
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Placing containers on a network
|
||||
|
||||
We will create a *named* container on this network.
|
||||
|
||||
It will be reachable with its name, `es`.
|
||||
|
||||
```bash
|
||||
$ docker run -d --name es --net dev elasticsearch:2
|
||||
8abb80e229ce8926c7223beb69699f5f34d6f1d438bfc5682db893e798046863
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Communication between containers
|
||||
|
||||
Now, create another container on this network.
|
||||
|
||||
.small[
|
||||
```bash
|
||||
$ docker run -ti --net dev alpine sh
|
||||
root@0ecccdfa45ef:/#
|
||||
```
|
||||
]
|
||||
|
||||
From this new container, we can resolve and ping the other one, using its assigned name:
|
||||
|
||||
.small[
|
||||
```bash
|
||||
/ # ping es
|
||||
PING es (172.18.0.2) 56(84) bytes of data.
|
||||
64 bytes from es.dev (172.18.0.2): icmp_seq=1 ttl=64 time=0.221 ms
|
||||
64 bytes from es.dev (172.18.0.2): icmp_seq=2 ttl=64 time=0.114 ms
|
||||
64 bytes from es.dev (172.18.0.2): icmp_seq=3 ttl=64 time=0.114 ms
|
||||
^C
|
||||
--- es ping statistics ---
|
||||
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
|
||||
rtt min/avg/max/mdev = 0.114/0.149/0.221/0.052 ms
|
||||
root@0ecccdfa45ef:/#
|
||||
```
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Resolving container addresses
|
||||
|
||||
Since Docker Engine 1.10, name resolution is implemented by a dynamic resolver.
|
||||
|
||||
Archeological note: when CNM was introduced (in Docker Engine 1.9, November 2015)
|
||||
name resolution was implemented with `/etc/hosts`, and it was updated each time
|
||||
containers were added/removed. This could cause interesting race conditions
|
||||
since `/etc/hosts` was a bind-mount (and couldn't be updated atomically).
|
||||
|
||||
.small[
|
||||
```bash
|
||||
[root@0ecccdfa45ef /]# cat /etc/hosts
|
||||
172.18.0.3 0ecccdfa45ef
|
||||
127.0.0.1 localhost
|
||||
::1 localhost ip6-localhost ip6-loopback
|
||||
fe00::0 ip6-localnet
|
||||
ff00::0 ip6-mcastprefix
|
||||
ff02::1 ip6-allnodes
|
||||
ff02::2 ip6-allrouters
|
||||
172.18.0.2 es
|
||||
172.18.0.2 es.dev
|
||||
```
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
# Service discovery with containers
|
||||
|
||||
* Let's try to run an application that requires two containers.
|
||||
|
||||
* The first container is a web server.
|
||||
|
||||
* The other one is a redis data store.
|
||||
|
||||
* We will place them both on the `dev` network created before.
|
||||
|
||||
---
|
||||
|
||||
## Running the web server
|
||||
|
||||
* The application is provided by the container image `jpetazzo/trainingwheels`.
|
||||
|
||||
* We don't know much about it so we will try to run it and see what happens!
|
||||
|
||||
Start the container, exposing all its ports:
|
||||
|
||||
```bash
|
||||
$ docker run --net dev -d -P jpetazzo/trainingwheels
|
||||
```
|
||||
|
||||
Check the port that has been allocated to it:
|
||||
|
||||
```bash
|
||||
$ docker ps -l
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Test the web server
|
||||
|
||||
* If we connect to the application now, we will see an error page:
|
||||
|
||||

|
||||
|
||||
* This is because the Redis service is not running.
|
||||
* This container tries to resolve the name `redis`.
|
||||
|
||||
Note: we're not using a FQDN or an IP address here; just `redis`.
|
||||
|
||||
---
|
||||
|
||||
## Start the data store
|
||||
|
||||
* We need to start a Redis container.
|
||||
|
||||
* That container must be on the same network as the web server.
|
||||
|
||||
* It must have the right network alias (`redis`) so the application can find it.
|
||||
|
||||
Start the container:
|
||||
|
||||
```bash
|
||||
$ docker run --net dev --net-alias redis -d redis
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Test the web server again
|
||||
|
||||
* If we connect to the application now, we should see that the app is working correctly:
|
||||
|
||||

|
||||
|
||||
* When the app tries to resolve `redis`, instead of getting a DNS error, it gets the IP address of our Redis container.
|
||||
|
||||
---
|
||||
|
||||
## A few words on *scope*
|
||||
|
||||
- Container names are unique (there can be only one `--name redis`)
|
||||
|
||||
- Network aliases are not unique
|
||||
|
||||
- We can have the same network alias in different networks:
|
||||
```bash
|
||||
docker run --net dev --net-alias redis ...
|
||||
docker run --net prod --net-alias redis ...
|
||||
```
|
||||
|
||||
- We can even have multiple containers with the same alias in the same network
|
||||
|
||||
(in that case, we get multiple DNS entries, aka "DNS round robin")
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Names are *local* to each network
|
||||
|
||||
Let's try to ping our `es` container from another container, when that other container is *not* on the `dev` network.
|
||||
|
||||
```bash
|
||||
$ docker run --rm alpine ping es
|
||||
ping: bad address 'es'
|
||||
```
|
||||
|
||||
Names can be resolved only when containers are on the same network.
|
||||
|
||||
Containers can contact each other only when they are on the same network (you can try to ping using the IP address to verify).
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Network aliases
|
||||
|
||||
We would like to have another network, `prod`, with its own `es` container. But there can be only one container named `es`!
|
||||
|
||||
We will use *network aliases*.
|
||||
|
||||
A container can have multiple network aliases.
|
||||
|
||||
Network aliases are *local* to a given network (only exist in this network).
|
||||
|
||||
Multiple containers can have the same network alias (even on the same network).
|
||||
|
||||
Since Docker Engine 1.11, resolving a network alias yields the IP addresses of all containers holding this alias.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Creating containers on another network
|
||||
|
||||
Create the `prod` network.
|
||||
|
||||
```bash
|
||||
$ docker network create prod
|
||||
5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c
|
||||
```
|
||||
|
||||
We can now create multiple containers with the `es` alias on the new `prod` network.
|
||||
|
||||
```bash
|
||||
$ docker run -d --name prod-es-1 --net-alias es --net prod elasticsearch:2
|
||||
38079d21caf0c5533a391700d9e9e920724e89200083df73211081c8a356d771
|
||||
$ docker run -d --name prod-es-2 --net-alias es --net prod elasticsearch:2
|
||||
1820087a9c600f43159688050dcc164c298183e1d2e62d5694fd46b10ac3bc3d
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Resolving network aliases
|
||||
|
||||
Let's try DNS resolution first, using the `nslookup` tool that ships with the `alpine` image.
|
||||
|
||||
```bash
|
||||
$ docker run --net prod --rm alpine nslookup es
|
||||
Name: es
|
||||
Address 1: 172.23.0.3 prod-es-2.prod
|
||||
Address 2: 172.23.0.2 prod-es-1.prod
|
||||
```
|
||||
|
||||
(You can ignore the `can't resolve '(null)'` errors.)
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Connecting to aliased containers
|
||||
|
||||
Each ElasticSearch instance has a name (generated when it is started). This name can be seen when we issue a simple HTTP request on the ElasticSearch API endpoint.
|
||||
|
||||
Try the following command a few times:
|
||||
|
||||
.small[
|
||||
```bash
|
||||
$ docker run --rm --net dev centos curl -s es:9200
|
||||
{
|
||||
"name" : "Tarot",
|
||||
...
|
||||
}
|
||||
```
|
||||
]
|
||||
|
||||
Then try it a few times by replacing `--net dev` with `--net prod`:
|
||||
|
||||
.small[
|
||||
```bash
|
||||
$ docker run --rm --net prod centos curl -s es:9200
|
||||
{
|
||||
"name" : "The Symbiote",
|
||||
...
|
||||
}
|
||||
```
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
## Good to know ...
|
||||
|
||||
* Docker will not create network names and aliases on the default `bridge` network.
|
||||
|
||||
* Therefore, if you want to use those features, you have to create a custom network first.
|
||||
|
||||
* Network aliases are *not* unique on a given network.
|
||||
|
||||
* i.e., multiple containers can have the same alias on the same network.
|
||||
|
||||
* In that scenario, the Docker DNS server will return multiple records.
|
||||
<br/>
|
||||
(i.e. you will get DNS round robin out of the box.)
|
||||
|
||||
* Enabling *Swarm Mode* gives access to clustering and load balancing with IPVS.
|
||||
|
||||
* Creation of networks and network aliases is generally automated with tools like Compose.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## A few words about round robin DNS
|
||||
|
||||
Don't rely exclusively on round robin DNS to achieve load balancing.
|
||||
|
||||
Many factors can affect DNS resolution, and you might see:
|
||||
|
||||
- all traffic going to a single instance;
|
||||
- traffic being split (unevenly) between some instances;
|
||||
- different behavior depending on your application language;
|
||||
- different behavior depending on your base distro;
|
||||
- different behavior depending on other factors (sic).
|
||||
|
||||
It's OK to use DNS to discover available endpoints, but remember that you have to re-resolve every now and then to discover new endpoints.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Custom networks
|
||||
|
||||
When creating a network, extra options can be provided.
|
||||
|
||||
* `--internal` disables outbound traffic (the network won't have a default gateway).
|
||||
|
||||
* `--gateway` indicates which address to use for the gateway (when outbound traffic is allowed).
|
||||
|
||||
* `--subnet` (in CIDR notation) indicates the subnet to use.
|
||||
|
||||
* `--ip-range` (in CIDR notation) indicates the subnet to allocate from.
|
||||
|
||||
* `--aux-address` allows specifying a list of reserved addresses (which won't be allocated to containers).
|
||||
|
||||
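As a sketch, here is how a few of these options combine (the network name `custom-net` and the address values are examples; this requires a running Docker Engine):

```bash
# Create a bridge network with a custom subnet;
# containers will be allocated addresses from the --ip-range block only.
docker network create \
  --subnet 10.99.0.0/16 \
  --ip-range 10.99.1.0/24 \
  --gateway 10.99.0.254 \
  custom-net

# Check the resulting IPAM configuration:
docker network inspect --format '{{(index .IPAM.Config 0).Subnet}}' custom-net
```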
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Setting containers' IP address
|
||||
|
||||
* It is possible to set a container's address with `--ip`.
|
||||
* The IP address has to be within the subnet used for the container.
|
||||
|
||||
A full example would look like this.
|
||||
|
||||
```bash
|
||||
$ docker network create --subnet 10.66.0.0/16 pubnet
|
||||
42fb16ec412383db6289a3e39c3c0224f395d7f85bcb1859b279e7a564d4e135
|
||||
$ docker run --net pubnet --ip 10.66.66.66 -d nginx
|
||||
b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09
|
||||
```
|
||||
|
||||
*Note: don't hard code container IP addresses in your code!*
|
||||
|
||||
*I repeat: don't hard code container IP addresses in your code!*
|
||||
|
||||
---
|
||||
|
||||
## Network drivers
|
||||
|
||||
* A network is managed by a *driver*.
|
||||
|
||||
* The built-in drivers include:
|
||||
|
||||
* `bridge` (default)
|
||||
* `none`
|
||||
* `host`
|
||||
* `macvlan`
|
||||
* `overlay` (for Swarm clusters)
|
||||
|
||||
* More drivers can be provided by plugins (OVS, VLAN...)
|
||||
|
||||
* A network can have a custom IPAM (IP allocator).
|
||||
|
||||
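For instance, we can check which driver (and IPAM driver) a given network uses with `docker network inspect` (the network name `drivertest` is an example; this requires a running Docker Engine):

```bash
# "bridge" is the default driver for networks created without --driver.
docker network create drivertest

# Prints something like: bridge / default
docker network inspect --format '{{.Driver}} / {{.IPAM.Driver}}' drivertest
```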
---
|
||||
|
||||
## Overlay networks
|
||||
|
||||
* The features we've seen so far only work when all containers are on a single host.
|
||||
|
||||
* If containers span multiple hosts, we need an *overlay* network to connect them together.
|
||||
|
||||
* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging
|
||||
VXLAN, *enabled with Swarm Mode*.
|
||||
|
||||
* Other plugins (Weave, Calico...) can provide overlay networks as well.
|
||||
|
||||
* Once you have an overlay network, *all the features that we've used in this chapter work identically
|
||||
across multiple hosts.*
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Multi-host networking (overlay)
|
||||
|
||||
Out of the scope for this intro-level workshop!
|
||||
|
||||
Very short instructions:
|
||||
|
||||
- enable Swarm Mode (`docker swarm init` then `docker swarm join` on other nodes)
|
||||
- `docker network create mynet --driver overlay`
|
||||
- `docker service create --network mynet myimage`
|
||||
|
||||
If you want to learn more about Swarm mode, you can check
|
||||
[this video](https://www.youtube.com/watch?v=EuzoEaE6Cqs)
|
||||
or [these slides](https://container.training/swarm-selfpaced.yml.html).
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Multi-host networking (plugins)
|
||||
|
||||
Out of the scope for this intro-level workshop!
|
||||
|
||||
General idea:
|
||||
|
||||
- install the plugin (they often ship within containers)
|
||||
|
||||
- run the plugin (if it's in a container, it will often require extra parameters; don't just `docker run` it blindly!)
|
||||
|
||||
- some plugins require configuration or activation (creating a special file that tells Docker "use the plugin whose control socket is at the following location")
|
||||
|
||||
- you can then `docker network create --driver pluginname`
|
||||
|
||||
---
|
||||
|
||||
## Connecting and disconnecting dynamically
|
||||
|
||||
* So far, we have specified which network to use when starting the container.
|
||||
|
||||
* The Docker Engine also allows connecting and disconnecting while the container is running.
|
||||
|
||||
* This feature is exposed through the Docker API, and through two Docker CLI commands:
|
||||
|
||||
* `docker network connect <network> <container>`
|
||||
|
||||
* `docker network disconnect <network> <container>`
|
||||
|
||||
---
|
||||
|
||||
## Dynamically connecting to a network
|
||||
|
||||
* We have a container named `es` connected to a network named `dev`.
|
||||
|
||||
* Let's start a simple alpine container on the default network:
|
||||
|
||||
```bash
|
||||
$ docker run -ti alpine sh
|
||||
/ #
|
||||
```
|
||||
|
||||
* In this container, try to ping the `es` container:
|
||||
|
||||
```bash
|
||||
/ # ping es
|
||||
ping: bad address 'es'
|
||||
```
|
||||
|
||||
This doesn't work, but we will change that by connecting the container.
|
||||
|
||||
---
|
||||
|
||||
## Finding the container ID and connecting it
|
||||
|
||||
* Figure out the ID of our alpine container; here are two methods:
|
||||
|
||||
* looking at `/etc/hostname` in the container,
|
||||
|
||||
* running `docker ps -lq` on the host.
|
||||
|
||||
* Run the following command on the host:
|
||||
|
||||
```bash
|
||||
$ docker network connect dev `<container_id>`
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Checking what we did
|
||||
|
||||
* Try again to `ping es` from the container.
|
||||
|
||||
* It should now work correctly:
|
||||
|
||||
```bash
|
||||
/ # ping es
|
||||
PING es (172.20.0.3): 56 data bytes
|
||||
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms
|
||||
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms
|
||||
^C
|
||||
```
|
||||
|
||||
* Interrupt it with Ctrl-C.
|
||||
|
||||
---
|
||||
|
||||
## Looking at the network setup in the container
|
||||
|
||||
We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`:
|
||||
|
||||
.small[
|
||||
```bash
|
||||
/ # ip a
|
||||
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
|
||||
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
|
||||
inet 127.0.0.1/8 scope host lo
|
||||
valid_lft forever preferred_lft forever
|
||||
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
|
||||
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
|
||||
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
|
||||
valid_lft forever preferred_lft forever
|
||||
20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
|
||||
link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff
|
||||
inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1
|
||||
valid_lft forever preferred_lft forever
|
||||
/ #
|
||||
```
|
||||
]
|
||||
|
||||
Each network connection is materialized with a virtual network interface.
|
||||
|
||||
As we can see, we can be connected to multiple networks at the same time.
|
||||
|
||||
---
|
||||
|
||||
## Disconnecting from a network
|
||||
|
||||
* Let's try the symmetrical command to disconnect the container:
|
||||
```bash
|
||||
$ docker network disconnect dev <container_id>
|
||||
```
|
||||
|
||||
* From now on, if we try to ping `es`, it will not resolve:
|
||||
```bash
|
||||
/ # ping es
|
||||
ping: bad address 'es'
|
||||
```
|
||||
|
||||
* Trying to ping the IP address directly won't work either:
|
||||
```bash
|
||||
/ # ping 172.20.0.3
|
||||
... (nothing happens until we interrupt it with Ctrl-C)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Network aliases are scoped per network
|
||||
|
||||
* Each network has its own set of network aliases.
|
||||
|
||||
* We saw this earlier: `es` resolves to different addresses in `dev` and `prod`.
|
||||
|
||||
* If we are connected to multiple networks, the resolver looks up names in each of them
|
||||
(as of Docker Engine 18.03, it is the connection order) and stops as soon as the name
|
||||
is found.
|
||||
|
||||
* Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not**
|
||||
give us the addresses of all the `es` services; but only the ones in `dev` or `prod`.
|
||||
|
||||
* However, we can lookup `es.dev` or `es.prod` if we need to.
|
||||
|
||||
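A sketch of that last point, reusing the `dev` and `prod` networks (and their `es` aliases) from earlier (the container name `lookup-test` is an example; this requires a running Docker Engine):

```bash
# Start a container on "dev", then also connect it to "prod".
docker run -d --name lookup-test --net dev alpine sleep 300
docker network connect prod lookup-test

# Qualified names disambiguate between the two networks:
docker exec lookup-test nslookup es.dev
docker exec lookup-test nslookup es.prod
```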
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Finding out about our networks and names
|
||||
|
||||
* We can do reverse DNS lookups on containers' IP addresses.
|
||||
|
||||
* If the IP address belongs to a network (other than the default bridge), the result will be:
|
||||
|
||||
```
|
||||
name-or-first-alias-or-container-id.network-name
|
||||
```
|
||||
|
||||
* Example:
|
||||
|
||||
.small[
|
||||
```bash
|
||||
$ docker run -ti --net prod --net-alias hello alpine
|
||||
/ # apk add --no-cache drill
|
||||
...
|
||||
OK: 5 MiB in 13 packages
|
||||
/ # ifconfig
|
||||
eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03
|
||||
inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0
|
||||
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
|
||||
...
|
||||
/ # drill -t ptr `3.0.21.172`.in-addr.arpa
|
||||
...
|
||||
;; ANSWER SECTION:
|
||||
3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`.
|
||||
...
|
||||
```
|
||||
]
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Building with a custom network
|
||||
|
||||
* We can build a Dockerfile with a custom network with `docker build --network NAME`.
|
||||
|
||||
* This can be used to check that a build doesn't access the network.
|
||||
|
||||
(But keep in mind that most Dockerfiles will fail,
|
||||
<br/>because they need to install remote packages and dependencies!)
|
||||
|
||||
* This may be used to access an internal package repository.
|
||||
|
||||
(But try to use a multi-stage build instead, if possible!)
|
||||
|
||||
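A minimal sketch of both use cases (the image tags and the `buildnet` network are examples; this requires a running Docker Engine):

```bash
# Check that a build succeeds without any network access:
docker build --network none -t myimage:offline-test .

# Build with access to a custom network
# (e.g. one that can reach an internal package mirror):
docker build --network buildnet -t myimage:latest .
```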
???
|
||||
|
||||
:EN:Container networking essentials
|
||||
:EN:- The Container Network Model
|
||||
:EN:- Container isolation
|
||||
:EN:- Service discovery
|
||||
|
||||
:FR:Mettre ses conteneurs en réseau
|
||||
:FR:- Le "Container Network Model"
|
||||
:FR:- Isolation des conteneurs
|
||||
:FR:- *Service discovery*
|
||||
301
slides/containers/Container_Networking_Basics.md
Normal file
|
||||
|
||||
class: title
|
||||
|
||||
# Container networking basics
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Objectives
|
||||
|
||||
We will now run network services (accepting requests) in containers.
|
||||
|
||||
At the end of this section, you will be able to:
|
||||
|
||||
* Run a network service in a container.
|
||||
|
||||
* Connect to that network service.
|
||||
|
||||
* Find a container's IP address.
|
||||
|
||||
---
|
||||
|
||||
## Running a very simple service
|
||||
|
||||
- We need something small, simple, easy to configure
|
||||
|
||||
(or, even better, that doesn't require any configuration at all)
|
||||
|
||||
- Let's use the official NGINX image (named `nginx`)
|
||||
|
||||
- It runs a static web server listening on port 80
|
||||
|
||||
- It serves a default "Welcome to nginx!" page
|
||||
|
||||
---
|
||||
|
||||
## Running an NGINX server
|
||||
|
||||
```bash
|
||||
$ docker run -d -P nginx
|
||||
66b1ce719198711292c8f34f84a7b68c3876cf9f67015e752b94e189d35a204e
|
||||
```
|
||||
|
||||
- Docker will automatically pull the `nginx` image from the Docker Hub
|
||||
|
||||
- `-d` / `--detach` tells Docker to run it in the background
|
||||
|
||||
- `-P` / `--publish-all` tells Docker to publish all ports
|
||||
|
||||
(publish = make them reachable from other computers)
|
||||
|
||||
- ...OK, how do we connect to our web server now?
|
||||
|
||||
---
|
||||
|
||||
## Finding our web server port
|
||||
|
||||
- First, we need to find the *port number* used by Docker
|
||||
|
||||
(the NGINX container listens on port 80, but this port will be *mapped*)
|
||||
|
||||
- We can use `docker ps`:
|
||||
```bash
|
||||
$ docker ps
|
||||
CONTAINER ID IMAGE ... PORTS ...
|
||||
e40ffb406c9e nginx ... 0.0.0.0:`12345`->80/tcp ...
|
||||
```
|
||||
|
||||
- This means:
|
||||
|
||||
*port 12345 on the Docker host is mapped to port 80 in the container*
|
||||
|
||||
- Now we need to connect to the Docker host!
|
||||
|
||||
---
|
||||
|
||||
## Finding the address of the Docker host
|
||||
|
||||
- When running Docker on your Linux workstation:
|
||||
|
||||
*use `localhost`, or any IP address of your machine*
|
||||
|
||||
- When running Docker on a remote Linux server:
|
||||
|
||||
*use any IP address of the remote machine*
|
||||
|
||||
- When running Docker Desktop on Mac or Windows:
|
||||
|
||||
*use `localhost`*
|
||||
|
||||
- In other scenarios (`docker-machine`, local VM...):
|
||||
|
||||
*use the IP address of the Docker VM*
|
||||
|
||||
---
|
||||
|
||||
## Connecting to our web server (GUI)
|
||||
|
||||
Point your browser to the IP address of your Docker host, on the port
|
||||
shown by `docker ps` for container port 80.
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Connecting to our web server (CLI)
|
||||
|
||||
You can also use `curl` directly from the Docker host.
|
||||
|
||||
Make sure to use the right port number if it is different
|
||||
from the example below:
|
||||
|
||||
```bash
|
||||
$ curl localhost:12345
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Welcome to nginx!</title>
|
||||
...
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## How does Docker know which port to map?
|
||||
|
||||
* There is metadata in the image indicating "this image has something on port 80".
|
||||
|
||||
* We can see that metadata with `docker inspect`:
|
||||
|
||||
```bash
|
||||
$ docker inspect --format '{{.Config.ExposedPorts}}' nginx
|
||||
map[80/tcp:{}]
|
||||
```
|
||||
|
||||
* This metadata was set in the Dockerfile, with the `EXPOSE` keyword.
|
||||
|
||||
* We can see that with `docker history`:
|
||||
|
||||
```bash
|
||||
$ docker history nginx
|
||||
IMAGE CREATED CREATED BY
|
||||
7f70b30f2cc6 11 days ago /bin/sh -c #(nop) CMD ["nginx" "-g" "…
|
||||
<missing> 11 days ago /bin/sh -c #(nop) STOPSIGNAL [SIGTERM]
|
||||
<missing> 11 days ago /bin/sh -c #(nop) EXPOSE 80/tcp
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Why can't we just connect to port 80?
|
||||
|
||||
- Our Docker host has only one port 80
|
||||
|
||||
- Therefore, we can only have one container at a time on port 80
|
||||
|
||||
- Therefore, if multiple containers want port 80, only one can get it
|
||||
|
||||
- By default, containers *do not* get "their" port number, but a random one
|
||||
|
||||
(not "random" as "crypto random", but as "it depends on various factors")
|
||||
|
||||
- We'll see later how to force a port number (including port 80!)
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Using multiple IP addresses
|
||||
|
||||
*Hey, my network-fu is strong, and I have questions...*
|
||||
|
||||
- Can I publish one container on 127.0.0.2:80, and another on 127.0.0.3:80?
|
||||
|
||||
- My machine has multiple (public) IP addresses, let's say A.A.A.A and B.B.B.B.
|
||||
<br/>
|
||||
Can I have one container on A.A.A.A:80 and another on B.B.B.B:80?
|
||||
|
||||
- I have a whole IPv4 subnet, can I allocate it to my containers?
|
||||
|
||||
- What about IPv6?
|
||||
|
||||
You can do all these things when running Docker directly on Linux.
|
||||
|
||||
(On other platforms, *generally not*, but there are some exceptions.)
|
||||
|
||||
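For instance, on Linux, publishing two containers on the same port but on different loopback addresses can be sketched like this (it assumes port 80 is free on both addresses; this requires a running Docker Engine):

```bash
# The general syntax is -p <host-address>:<host-port>:<container-port>.
docker run -d -p 127.0.0.2:80:80 nginx
docker run -d -p 127.0.0.3:80:80 nginx

# Each address now serves its own container:
curl -s http://127.0.0.2/ | head -n 1
curl -s http://127.0.0.3/ | head -n 1
```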
---

## Finding the web server port in a script

Parsing the output of `docker ps` would be painful.

There is a command to help us:

```bash
$ docker port <containerID> 80
0.0.0.0:12345
```

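Putting it together: a minimal script could start a container with `-P`, look up the published port, and test it. This is only a sketch (the use of nginx and `curl` here is illustrative, and it requires a running Docker daemon):

```bash
$ CID=$(docker run -d -P nginx)
$ HOSTPORT=$(docker port $CID 80 | head -n1)
$ curl -s http://$HOSTPORT/
```
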
---

## Manual allocation of port numbers

If you want to set port numbers yourself, no problem:

```bash
$ docker run -d -p 80:80 nginx
$ docker run -d -p 8000:80 nginx
$ docker run -d -p 8080:80 -p 8888:80 nginx
```

* We are running three NGINX web servers.
* The first one is exposed on port 80.
* The second one is exposed on port 8000.
* The third one is exposed on ports 8080 and 8888.

Note: the convention is `port-on-host:port-on-container`.

---

## Plumbing containers into your infrastructure

There are many ways to integrate containers in your network.

* Start the container, letting Docker allocate a public port for it.
  <br/>Then retrieve that port number and feed it to your configuration.

* Pick a fixed port number in advance, when you generate your configuration.
  <br/>Then start your container by setting the port numbers manually.

* Use an orchestrator like Kubernetes or Swarm.
  <br/>The orchestrator will provide its own networking facilities.

Orchestrators typically provide mechanisms to enable direct container-to-container
communication across hosts, as well as publishing and load balancing for inbound traffic.

---

## Finding the container's IP address

We can use the `docker inspect` command to find the IP address of the
container.

```bash
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <yourContainerID>
172.17.0.3
```

* `docker inspect` is an advanced command that can retrieve a ton
  of information about our containers.

* Here, we provide it with a format string to extract exactly the
  private IP address of the container.

---

## Pinging our container

Let's try to ping our container *from another container.*

```bash
docker run alpine ping `<ipaddress>`
PING 172.17.0.X (172.17.0.X): 56 data bytes
64 bytes from 172.17.0.X: seq=0 ttl=64 time=0.106 ms
64 bytes from 172.17.0.X: seq=1 ttl=64 time=0.250 ms
64 bytes from 172.17.0.X: seq=2 ttl=64 time=0.188 ms
```

When running on Linux, we can even ping that IP address directly!

(And connect to a container's ports even if they aren't published.)

---

## How often do we use `-p` and `-P` ?

- When running a stack of containers, we will often use Compose

- Compose will take care of exposing containers

  (through a `ports:` section in the `docker-compose.yml` file)

- It is, however, fairly common to use `docker run -P` for a quick test

- Or `docker run -p ...` when an image doesn't `EXPOSE` a port correctly

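For reference, such a `ports:` section in a Compose file could look like this (the service name and host port are only examples):

```
web:
  image: nginx
  ports:
    - "8080:80"
```
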
---

## Section summary

We've learned how to:

* Expose a network port.

* Connect to an application running in a container.

* Find a container's IP address.

???

:EN:- Exposing single containers
:FR:- Exposer un conteneur isolé

3
slides/containers/Containers_From_Scratch.md
Normal file

# Building containers from scratch

(This is a "bonus section" done if time permits.)

345
slides/containers/Copy_On_Write.md
Normal file

# Copy-on-write filesystems

Container engines rely on copy-on-write to be able
to start containers quickly, regardless of their size.

We will explain how that works, and review some of
the copy-on-write storage systems available on Linux.

---

## What is copy-on-write?

- Copy-on-write is a mechanism for sharing data.

- The data appears to be a copy, but is only
  a link (or reference) to the original data.

- The actual copy happens only when someone
  tries to change the shared data.

- Whoever changes the shared data ends up
  using their own copy instead of the shared data.

---

## A few metaphors

--

- First metaphor:
  <br/>white board and tracing paper

--

- Second metaphor:
  <br/>magic books with shadowy pages

--

- Third metaphor:
  <br/>just-in-time house building

---

## Copy-on-write is *everywhere*

- Process creation with `fork()`.

- Consistent disk snapshots.

- Efficient VM provisioning.

- And, of course, containers.

---

## Copy-on-write and containers

Copy-on-write is essential to give us "convenient" containers.

- Creating a new container (from an existing image) is "free".

  (Otherwise, we would have to copy the image first.)

- Customizing a container (by tweaking a few files) is cheap.

  (Adding a 1 KB configuration file to a 1 GB container takes 1 KB, not 1 GB.)

- We can take snapshots, i.e. have "checkpoints" or "save points"
  when building images.

---

## AUFS overview

- The original (legacy) copy-on-write filesystem used by the first versions of Docker.

- It combines multiple *branches* in a specific order.

- Each branch is just a normal directory.

- You generally have:

  - at least one read-only branch (at the bottom),

  - exactly one read-write branch (at the top).

  (But other fun combinations are possible too!)

---

## AUFS operations: opening a file

- With `O_RDONLY` - read-only access:

  - look it up in each branch, starting from the top

  - open the first one we find

- With `O_WRONLY` or `O_RDWR` - write access:

  - if the file exists on the top branch: open it

  - if the file exists on another branch: "copy up"
    <br/>
    (i.e. copy the file to the top branch and open the copy)

  - if the file doesn't exist on any branch: create it on the top branch

That "copy-up" operation can take a while if the file is big!

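The lookup and copy-up logic above can be sketched in plain shell, using two ordinary directories to stand in for branches (the directory and file names are made up for illustration):

```shell
# Two branches: "top" (read-write) and "bottom" (read-only).
mkdir -p top bottom
echo "from the bottom branch" > bottom/config
echo "from the top branch" > top/other

# Opening read-only: scan branches from the top, return the first match.
lookup() {
  for branch in top bottom; do
    [ -f "$branch/$1" ] && { echo "$branch/$1"; return 0; }
  done
  return 1
}

lookup config   # prints "bottom/config"
lookup other    # prints "top/other"

# Opening for writing: "copy up" to the top branch first, then modify.
f=$(lookup config)
case "$f" in top/*) ;; *) cp "$f" "top/${f#*/}" ;; esac
echo "modified" >> top/config

lookup config   # now prints "top/config"
```

Note how the copy-up duplicates the whole file before the write: this is why the operation gets expensive for big files.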
---

## AUFS operations: deleting a file

- A *whiteout* file is created.

- This is similar to the concept of "tombstones" used in some data systems.

```
 # docker run ubuntu rm /etc/shadow

 # ls -la /var/lib/docker/aufs/diff/$(docker ps --no-trunc -lq)/etc
total 8
drwxr-xr-x 2 root root 4096 Jan 27 15:36 .
drwxr-xr-x 5 root root 4096 Jan 27 15:36 ..
-r--r--r-- 2 root root    0 Jan 27 15:36 .wh.shadow
```

---

## AUFS performance

- AUFS `mount()` is fast, so creation of containers is quick.

- Read/write access has native speeds.

- But initial `open()` is expensive in two scenarios:

  - when writing big files (log files, databases ...),

  - when searching many directories (PATH, classpath, etc.) over many layers.

- Protip: when we built dotCloud, we ended up putting
  all important data on *volumes*.

- When starting the same container multiple times:

  - the data is loaded only once from disk, and cached only once in memory;

  - but `dentries` will be duplicated.

---

## Device Mapper

Device Mapper is a rich subsystem with many features.

It can be used for: RAID, encrypted devices, snapshots, and more.

In the context of containers (and Docker in particular), "Device Mapper"
means:

"the Device Mapper system + its *thin provisioning target*"

If you see the abbreviation "thinp", it stands for "thin provisioning".

---

## Device Mapper principles

- Copy-on-write happens on the *block* level
  (instead of the *file* level).

- Each container and each image gets its own block device.

- At any given time, it is possible to take a snapshot:

  - of an existing container (to create a frozen image),

  - of an existing image (to create a container from it).

- If a block has never been written to:

  - it's assumed to be all zeros,

  - it's not allocated on disk.

(That last property is the reason for the name "thin" provisioning.)

---

## Device Mapper operational details

- Two storage areas are needed:
  one for *data*, another for *metadata*.

- "data" is also called the "pool"; it's just a big pool of blocks.

  (Docker uses the smallest possible block size, 64 KB.)

- "metadata" contains the mappings between virtual offsets (in the
  snapshots) and physical offsets (in the pool).

- Each time a new block (or a copy-on-write block) is written,
  a block is allocated from the pool.

- When there are no more blocks in the pool, attempts to write
  will stall until the pool is increased (or the write operation
  is aborted).

- In other words: when running out of space, containers are
  frozen, but operations will resume as soon as space is available.

---

## Device Mapper performance

- By default, Docker puts data and metadata on a loop device
  backed by a sparse file.

- This is great from a usability point of view,
  since zero configuration is needed.

- But it is terrible from a performance point of view:

  - each time a container writes to a new block,
  - a block has to be allocated from the pool,
  - and when it's written to,
  - a block has to be allocated from the sparse file,
  - and sparse file performance isn't great anyway.

- If you use Device Mapper, make sure to put data (and metadata)
  on devices!

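A quick way to see what a sparse file is (and why it enables the "zero configuration" default): it has a large apparent size, but consumes disk blocks only when written to. This generic demonstration uses standard GNU tools on Linux; the file name is just for illustration:

```shell
# Create a 1 GB sparse file, similar to the pool files Docker creates.
truncate -s 1G data.img

ls -lh data.img   # apparent size: 1.0G
du -h data.img    # actual usage: 0 (on filesystems with sparse file support)

# Writing allocates real blocks on demand:
dd if=/dev/zero of=data.img bs=1M count=4 conv=notrunc status=none
du -h data.img    # now about 4.0M
```

Every write to a fresh block thus goes through an extra allocation inside the backing file, on top of the pool allocation.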
---

## BTRFS principles

- BTRFS is a filesystem (like EXT4, XFS, NTFS...) with built-in snapshots.

- The "copy-on-write" happens at the filesystem level.

- BTRFS integrates the snapshot and block pool management features
  at the filesystem level.

  (Instead of the block level for Device Mapper.)

- In practice, we create a "subvolume" and
  later take a "snapshot" of that subvolume.

  Imagine: `mkdir` with Super Powers and `cp -a` with Super Powers.

- These operations can be executed with the `btrfs` CLI tool.

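The corresponding commands look roughly like this (the paths are illustrative; they must point inside a BTRFS mount, and typically require root):

```bash
 # btrfs subvolume create /mnt/btrfs/image1
 # (populate /mnt/btrfs/image1 ...)
 # btrfs subvolume snapshot /mnt/btrfs/image1 /mnt/btrfs/container1
```
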
---

## BTRFS in practice with Docker

- Docker can use BTRFS and its snapshotting features to store container images.

- The only requirement is that `/var/lib/docker` is on a BTRFS filesystem.

  (Or, the directory specified with the `--data-root` flag when starting the engine.)

---

class: extra-details

## BTRFS quirks

- BTRFS works by dividing its storage into *chunks*.

- A chunk can contain data or metadata.

- You can run out of chunks (and get `No space left on device`)
  even though `df` shows space available.

  (Because chunks are only partially allocated.)

- Quick fix:

```
 # btrfs filesys balance start -dusage=1 /var/lib/docker
```

---

## Overlay2

- Overlay2 is very similar to AUFS.

- However, it has been merged into the upstream kernel.

- It is therefore available on all modern kernels.

  (AUFS was available on Debian and Ubuntu, but required custom kernels on other distros.)

- It is simpler than AUFS (it can only have two branches, called "layers").

- The container engine abstracts this detail, so it is not a concern.

- Overlay2 storage drivers generally use hard links between layers.

- This improves `stat()` and `open()` performance, at the expense of inode usage.

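For the curious, here is roughly what such a union mount looks like when done by hand (the directory names are illustrative; this requires root and a Linux kernel with overlay support):

```bash
 # mkdir lower upper work merged
 # mount -t overlay overlay \
 #     -o lowerdir=lower,upperdir=upper,workdir=work merged
```
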
---

## ZFS

- ZFS is similar to BTRFS (at least from a container user's perspective).

- Pros:

  - high performance
  - high reliability (with e.g. data checksums)
  - optional data compression and deduplication

- Cons:

  - high memory usage
  - not in the upstream kernel

- It is available as a kernel module or through FUSE.

---

## Which one is the best?

- Eventually, overlay2 should be the best option.

- It is available on all modern systems.

- Its memory usage is better than that of Device Mapper, BTRFS, or ZFS.

- The remarks about *write performance* shouldn't bother you:
  <br/>
  data should always be stored in volumes anyway!

???

:EN:- Copy-on-write filesystems
:EN:- Docker graph drivers
:FR:- Les systèmes de fichiers "copy-on-write"
:FR:- Les "graph drivers" de Docker

132
slides/containers/Copying_Files_During_Build.md
Normal file


class: title

# Copying files during the build

![Monks copying books](images/title-copying-files-during-build.jpg)

---

## Objectives

So far, we have installed things in our container images
by downloading packages.

We can also copy files from the *build context* to the
container that we are building.

Remember: the *build context* is the directory containing
the Dockerfile.

In this chapter, we will learn a new Dockerfile keyword: `COPY`.

---

## Build some C code

We want to build a container that compiles a basic "Hello world" program in C.

Here is the program, `hello.c`:

```c
#include <stdio.h>

int main () {
  puts("Hello, world!");
  return 0;
}
```

Let's create a new directory, and put this file in there.

Then we will write the Dockerfile.

---

## The Dockerfile

On Debian and Ubuntu, the package `build-essential` will get us a compiler.

When installing it, don't forget to specify the `-y` flag, otherwise the build will fail (since the build cannot be interactive).

Then we will use `COPY` to place the source file into the container.

```bash
FROM ubuntu
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
RUN make hello
CMD /hello
```

Create this Dockerfile.

---

## Testing our C program

* Create `hello.c` and `Dockerfile` in the same directory.

* Run `docker build -t hello .` in this directory.

* Run `docker run hello`; you should see `Hello, world!`.

Success!

---

## `COPY` and the build cache

* Run the build again.

* Now, modify `hello.c` and run the build again.

* Docker can cache steps involving `COPY`.

* Those steps will not be executed again if the files haven't been changed.

---

## Details

* We can `COPY` whole directories recursively

* It is possible to do e.g. `COPY . .`

  (but it might require some extra precautions to avoid copying too much)

* In older Dockerfiles, you might see the `ADD` command; consider it deprecated

  (it is similar to `COPY` but can automatically extract archives)

* If we really wanted to compile C code in a container, we would:

  * place it in a different directory, with the `WORKDIR` instruction

  * even better, use the `gcc` official image

---

class: extra-details

## `.dockerignore`

- We can create a file named `.dockerignore`

  (at the top-level of the build context)

- It can contain file names and globs to ignore

- They won't be sent to the builder

  (and won't end up in the resulting image)

- See the [documentation] for the little details

  (exceptions can be made with `!`, multiple directory levels with `**`...)

[documentation]: https://docs.docker.com/engine/reference/builder/#dockerignore-file

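A small `.dockerignore` might look like this (the entries are only examples):

```
.git
*.log
node_modules
!node_modules/required-package
```
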
???

:EN:- Leveraging the build cache for faster builds
:FR:- Tirer parti du cache afin d'optimiser la vitesse de *build*

143
slides/containers/Docker_History.md
Normal file

# History of containers ... and Docker

---

## First experiments

* [IBM VM/370 (1972)](https://en.wikipedia.org/wiki/VM_%28operating_system%29)

* [Linux VServers (2001)](http://www.solucorp.qc.ca/changes.hc?projet=vserver)

* [Solaris Containers (2004)](https://en.wikipedia.org/wiki/Solaris_Containers)

* [FreeBSD jails (1999-2000)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)

Containers have been around for a *very long time* indeed.

(See [this excellent blog post by Serge Hallyn](https://s3hh.wordpress.com/2018/03/22/history-of-containers/) for more historic details.)

---

class: pic

## The VPS age (until 2007-2008)

![lightcont](images/containers-as-lightweight-vms.png)

---

## Containers = cheaper than VMs

* Users: hosting providers.

* Highly specialized audience with strong ops culture.

---

class: pic

## The PAAS period (2008-2013)

![heroku 2007](images/heroku-first-homepage.png)

---

## Containers = easier than VMs

* I can't speak for Heroku, but containers were one of dotCloud's secret weapons

* dotCloud was operating a PaaS, using a custom container engine.

* This engine was based on OpenVZ (and later, LXC) and AUFS.

* It started (circa 2008) as a single Python script.

* By 2012, the engine had multiple (~10) Python components.
  <br/>(and ~100 other micro-services!)

* At the end of 2012, dotCloud refactors this container engine.

* The codename for this project is "Docker."

---

## First public release of Docker

* March 2013, PyCon, Santa Clara:
  <br/>"Docker" is shown to a public audience for the first time.

* It is released with an open source license.

* Very positive reactions and feedback!

* The dotCloud team progressively shifts to Docker development.

* The same year, dotCloud changes its name to Docker.

* In 2014, the PaaS activity is sold.

---

## Docker early days (2013-2014)

---

## First users of Docker

* PAAS builders (Flynn, Dokku, Tsuru, Deis...)

* PAAS users (those big enough to justify building their own)

* CI platforms

* developers, developers, developers, developers

---

## Positive feedback loop

* In 2013, the technology under containers (cgroups, namespaces, copy-on-write storage...)
  had many blind spots.

* The growing popularity of Docker and containers exposed many bugs.

* As a result, those bugs were fixed, resulting in better stability for containers.

* Any decent hosting/cloud provider can run containers today.

* Containers become a great tool to deploy/move workloads to/from on-prem/cloud.

---

## Maturity (2015-2016)

---

## Docker becomes an industry standard

* Docker reaches the symbolic 1.0 milestone.

* Existing systems like Mesos and Cloud Foundry add Docker support.

* Standardization around the OCI (Open Containers Initiative).

* Other container engines are developed.

* Creation of the CNCF (Cloud Native Computing Foundation).

---

## Docker becomes a platform

* The initial container engine is now known as "Docker Engine."

* Other tools are added:
  * Docker Compose (formerly "Fig")
  * Docker Machine
  * Docker Swarm
  * Kitematic
  * Docker Cloud (formerly "Tutum")
  * Docker Datacenter
  * etc.

* Docker Inc. launches commercial offers.

81
slides/containers/Docker_Machine.md
Normal file

# Managing hosts with Docker Machine

- Docker Machine is a tool to provision and manage Docker hosts.

- It automates the creation of a virtual machine:

  - locally, with a tool like VirtualBox or VMware;

  - on a public cloud like AWS EC2, Azure, Digital Ocean, GCP, etc.;

  - on a private cloud like OpenStack.

- It can also configure existing machines through an SSH connection.

- It can manage as many hosts as you want, with as many "drivers" as you want.

---

## Docker Machine workflow

1) Prepare the environment: set up VirtualBox, obtain cloud credentials ...

2) Create hosts with `docker-machine create -d drivername machinename`.

3) Use a specific machine with `eval $(docker-machine env machinename)`.

4) Profit!

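For instance, with the VirtualBox driver, the whole workflow could look like this (the machine name is arbitrary):

```bash
$ docker-machine create -d virtualbox node1
$ eval $(docker-machine env node1)
$ docker ps
```

After the `eval`, the `docker` CLI in that shell talks to the Engine running on `node1`.
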
---

## Environment variables

- Most of the tools (CLI, libraries...) connecting to the Docker API can use environment variables.

- These variables are:

  - `DOCKER_HOST` (indicates the address+port to connect to, or the path of a UNIX socket)

  - `DOCKER_TLS_VERIFY` (indicates that TLS mutual auth should be used)

  - `DOCKER_CERT_PATH` (path to the keypair and certificate to use for auth)

- `docker-machine env ...` will generate the variables needed to connect to a host.

- `eval $(docker-machine env ...)` sets these variables in the current shell.

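Here is what the generated variables typically look like (the IP address and paths will differ on your machine):

```bash
$ docker-machine env node1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/user/.docker/machine/machines/node1"
export DOCKER_MACHINE_NAME="node1"
```
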
---

## Host management features

With `docker-machine`, we can:

- upgrade a host to the latest version of the Docker Engine,

- start/stop/restart hosts,

- get a shell on a remote machine (with SSH),

- copy files to/from remote machines (with SCP),

- mount a remote host's directory on the local machine (with SSHFS),

- ...

---

## The `generic` driver

When provisioning a new host, `docker-machine` executes these steps:

1) Create the host using a cloud or hypervisor API.

2) Connect to the host over SSH.

3) Install and configure Docker on the host.

With the `generic` driver, we provide the IP address of an existing host
(instead of e.g. cloud credentials) and we omit the first step.

This allows us to provision physical machines, or VMs provided by a 3rd
party, or to use a cloud for which we don't have a provisioning API.

356
slides/containers/Docker_Overview.md
Normal file

# Docker 30,000ft overview

In this lesson, we will learn about:

* Why containers (non-technical elevator pitch)

* Why containers (technical elevator pitch)

* How Docker helps us to build, ship, and run

* The history of containers

We won't actually run Docker or containers in this chapter (yet!).

Don't worry, we will get to that fast enough!

---

## Elevator pitch

### (for your manager, your boss...)

---

## OK... Why the buzz around containers?

* The software industry has changed

* Before:
  * monolithic applications
  * long development cycles
  * single environment
  * slowly scaling up

* Now:
  * decoupled services
  * fast, iterative improvements
  * multiple environments
  * quickly scaling out

---

## Deployment becomes very complex

* Many different stacks:
  * languages
  * frameworks
  * databases

* Many different targets:
  * individual development environments
  * pre-production, QA, staging...
  * production: on prem, cloud, hybrid

---

class: pic

## The deployment problem

![problem](images/shipping-software-problem.png)

---

class: pic

## The matrix from hell

![matrix](images/shipping-matrix-from-hell.png)

---

class: pic

## The parallel with the shipping industry

![history](images/shipping-industry-problem.png)

---

class: pic

## Intermodal shipping containers

![shipping](images/shipping-industry-solution.png)

---

class: pic

## A new shipping ecosystem

![shipeco](images/shipping-indsutry-results.png)

---

class: pic

## A shipping container system for applications

![shipapp](images/shipping-software-solution.png)

---

class: pic

## Eliminate the matrix from hell

![elimatrix](images/shipping-matrix-solved.png)

---

## Results

* [Dev-to-prod reduced from 9 months to 15 minutes (ING)](
https://www.docker.com/sites/default/files/CS_ING_01.25.2015_1.pdf)

* [Continuous integration job time reduced by more than 60% (BBC)](
https://www.docker.com/sites/default/files/CS_BBCNews_01.25.2015_1.pdf)

* [Deploy 100 times a day instead of once a week (GILT)](
https://www.docker.com/sites/default/files/CS_Gilt%20Groupe_03.18.2015_0.pdf)

* [70% infrastructure consolidation (MetLife)](
https://www.docker.com/customers/metlife-transforms-customer-experience-legacy-and-microservices-mashup)

* [60% infrastructure consolidation (Intesa Sanpaolo)](
https://blog.docker.com/2017/11/intesa-sanpaolo-builds-resilient-foundation-banking-docker-enterprise-edition/)

* [14x application density; 60% of legacy datacenter migrated in 4 months (GE Appliances)](
https://www.docker.com/customers/ge-uses-docker-enable-self-service-their-developers)

* etc.

---

## Elevator pitch

### (for your fellow devs and ops)

---

## Escape dependency hell

1. Write installation instructions into an `INSTALL.txt` file

2. Using this file, write an `install.sh` script that works *for you*

3. Turn this file into a `Dockerfile`, test it on your machine

4. If the Dockerfile builds on your machine, it will build *anywhere*

5. Rejoice as you escape dependency hell and "works on my machine"

Never again "worked in dev - ops problem now!"

---

## On-board developers and contributors rapidly

1. Write Dockerfiles for your application components

2. Use pre-made images from the Docker Hub (mysql, redis...)

3. Describe your stack with a Compose file

4. On-board somebody with two commands:

```bash
git clone ...
docker-compose up
```

With this, you can create development, integration, QA environments in minutes!

---

class: extra-details

## Implement reliable CI easily

1. Build a test environment with a Dockerfile or Compose file

2. For each test run, stage up a new container or stack

3. Each run is now in a clean environment

4. No pollution from previous tests

Way faster and cheaper than creating VMs each time!

---

class: extra-details

## Use container images as build artefacts

1. Build your app from Dockerfiles

2. Store the resulting images in a registry

3. Keep them forever (or as long as necessary)

4. Test those images in QA, CI, integration...

5. Run the same images in production

6. Something goes wrong? Roll back to the previous image

7. Investigating an old regression? The old image has your back!

Images contain all the libraries, dependencies, etc. needed to run the app.

---

class: extra-details

## Decouple "plumbing" from application logic

1. Write your code to connect to named services ("db", "api"...)

2. Use Compose to start your stack

3. Docker will set up a per-container DNS resolver for those names

4. You can now scale, add load balancers, replication ... without changing your code

Note: this is not covered in this intro-level workshop!

---

class: extra-details

## What did Docker bring to the table?

### Docker before/after

---

class: extra-details

## Formats and APIs, before Docker

* No standardized exchange format.
  <br/>(No, a rootfs tarball is *not* a format!)

* Containers are hard to use for developers.
  <br/>(Where's the equivalent of `docker run debian`?)

* As a result, they are *hidden* from the end users.

* No re-usable components, APIs, tools.
  <br/>(At best: VM abstractions, e.g. libvirt.)

Analogy:

* Shipping containers are not just steel boxes.
* They are steel boxes that are a standard size, with the same hooks and holes.

---

class: extra-details

## Formats and APIs, after Docker

* Standardize the container format, because containers were not portable.

* Make containers easy to use for developers.

* Emphasis on re-usable components, APIs, and an ecosystem of standard tools.

* Improvement over ad-hoc, in-house, specific tools.

---

class: extra-details

## Shipping, before Docker

* Ship packages: deb, rpm, gem, jar, homebrew...

* Dependency hell.

* "Works on my machine."

* Base deployment often done from scratch (debootstrap...) and unreliable.

---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Shipping, after Docker
|
||||
|
||||
* Ship container images with all their dependencies.
|
||||
|
||||
* Images are bigger, but they are broken down into layers.
|
||||
|
||||
* Only ship layers that have changed.
|
||||
|
||||
* Save disk, network, memory usage.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Example
|
||||
|
||||
Layers:
|
||||
|
||||
* CentOS
|
||||
* JRE
|
||||
* Tomcat
|
||||
* Dependencies
|
||||
* Application JAR
|
||||
* Configuration
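A hypothetical Dockerfile producing roughly that layer stack (package names are illustrative):

```dockerfile
FROM centos:7                            # CentOS base layer
RUN yum install -y java-1.8.0-openjdk    # JRE layer
RUN yum install -y tomcat                # Tomcat layer
COPY lib/ /app/lib/                      # dependencies layer
COPY app.jar /app/                       # application JAR layer
COPY server.xml /app/conf/               # configuration layer
```

If only the application JAR changes, only the layers from `app.jar` onwards need to be rebuilt and shipped.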
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Devs vs Ops, before Docker
|
||||
|
||||
* Drop a tarball (or a commit hash) with instructions.
|
||||
|
||||
* Dev environment very different from production.
|
||||
|
||||
* Ops don't always have a dev environment themselves ...
|
||||
|
||||
* ... and when they do, it can differ from the devs'.
|
||||
|
||||
* Ops have to sort out differences and make it work ...
|
||||
|
||||
* ... or bounce it back to devs.
|
||||
|
||||
* Shipping code causes friction and delays.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Devs vs Ops, after Docker
|
||||
|
||||
* Drop a container image or a Compose file.
|
||||
|
||||
* Ops can always run that container image.
|
||||
|
||||
* Ops can always run that Compose file.
|
||||
|
||||
* Ops still have to adapt to prod environment,
|
||||
but at least they have a reference point.
|
||||
|
||||
* Ops have tools that allow using the same image
|
||||
in dev and prod.
|
||||
|
||||
* Devs can be empowered to make releases themselves
|
||||
more easily.
|
||||
445
slides/containers/Dockerfile_Tips.md
Normal file
|
||||
# Tips for efficient Dockerfiles
|
||||
|
||||
We will see how to:
|
||||
|
||||
* Reduce the number of layers.
|
||||
|
||||
* Leverage the build cache so that builds can be faster.
|
||||
|
||||
* Embed unit testing in the build process.
|
||||
|
||||
---
|
||||
|
||||
## Reducing the number of layers
|
||||
|
||||
* Each line in a `Dockerfile` creates a new layer.
|
||||
|
||||
* Build your `Dockerfile` to take advantage of Docker's caching system.
|
||||
|
||||
* Combine commands using `&&` to chain them and `\` to wrap long lines.
|
||||
|
||||
Note: it is common to build up a Dockerfile line by line:
|
||||
|
||||
```dockerfile
|
||||
RUN apt-get install thisthing
|
||||
RUN apt-get install andthatthing andthatotherone
|
||||
RUN apt-get install somemorestuff
|
||||
```
|
||||
|
||||
And then refactor it trivially before shipping:
|
||||
|
||||
```dockerfile
|
||||
RUN apt-get install thisthing andthatthing andthatotherone somemorestuff
|
||||
```
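When the combined command grows long, `\` keeps it readable; cleaning up the package cache in the same `RUN` keeps the layer small. A sketch of this common pattern (with placeholder package names):

```dockerfile
RUN apt-get update \
 && apt-get install -y \
        thisthing \
        andthatthing \
        somemorestuff \
 && rm -rf /var/lib/apt/lists/*
```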
|
||||
|
||||
---
|
||||
|
||||
## Avoid re-installing dependencies at each build
|
||||
|
||||
* Classic Dockerfile problem:
|
||||
|
||||
"each time I change a line of code, all my dependencies are re-installed!"
|
||||
|
||||
* Solution: `COPY` dependency lists (`package.json`, `requirements.txt`, etc.)
|
||||
by themselves to avoid reinstalling unchanged dependencies every time.
|
||||
|
||||
---
|
||||
|
||||
## Example "bad" `Dockerfile`
|
||||
|
||||
The dependencies are reinstalled every time, because the build system does not know if `requirements.txt` has been updated.
|
||||
|
||||
```dockerfile
|
||||
FROM python
|
||||
WORKDIR /src
|
||||
COPY . .
|
||||
RUN pip install -qr requirements.txt
|
||||
EXPOSE 5000
|
||||
CMD ["python", "app.py"]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Fixed `Dockerfile`
|
||||
|
||||
Adding the dependencies as a separate step means that Docker can cache more efficiently and only install them when `requirements.txt` changes.
|
||||
|
||||
```dockerfile
|
||||
FROM python
|
||||
WORKDIR /src
|
||||
COPY requirements.txt .
|
||||
RUN pip install -qr requirements.txt
|
||||
COPY . .
|
||||
EXPOSE 5000
|
||||
CMD ["python", "app.py"]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Be careful with `chown`, `chmod`, `mv`
|
||||
|
||||
* Layers cannot efficiently store changes in permissions or ownership.
|
||||
|
||||
* Layers cannot efficiently represent a file being moved, either.
|
||||
|
||||
* As a result, operations like `chown`, `chmod`, `mv` can be expensive.
|
||||
|
||||
* For instance, in the Dockerfile snippet below, each `RUN` line
|
||||
creates a layer with an entire copy of `some-file`.
|
||||
|
||||
```dockerfile
|
||||
COPY some-file .
|
||||
RUN chown www-data:www-data some-file
|
||||
RUN chmod 644 some-file
|
||||
RUN mv some-file /var/www
|
||||
```
|
||||
|
||||
* How can we avoid that?
|
||||
|
||||
---
|
||||
|
||||
## Put files in the right place
|
||||
|
||||
* Instead of using `mv`, put files directly in the right place.
|
||||
|
||||
* When extracting archives (tar, zip...), merge operations in a single layer.
|
||||
|
||||
Example:
|
||||
|
||||
```dockerfile
|
||||
...
|
||||
RUN wget http://.../foo.tar.gz \
|
||||
&& tar -zxf foo.tar.gz \
|
||||
&& mv foo/fooctl /usr/local/bin \
|
||||
&& rm -rf foo foo.tar.gz
|
||||
...
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Use `COPY --chown`
|
||||
|
||||
* The Dockerfile instruction `COPY` can take a `--chown` parameter.
|
||||
|
||||
Examples:
|
||||
|
||||
```dockerfile
|
||||
...
|
||||
COPY --chown=1000 some-file .
|
||||
COPY --chown=1000:1000 some-file .
|
||||
COPY --chown=www-data:www-data some-file .
|
||||
```
|
||||
|
||||
* The `--chown` flag can specify a user, or a user:group pair.
|
||||
|
||||
* The user and group can be specified as names or numbers.
|
||||
|
||||
* When using names, the names must exist in `/etc/passwd` or `/etc/group`.
|
||||
|
||||
*(In the container, not on the host!)*
|
||||
|
||||
---
|
||||
|
||||
## Set correct permissions locally
|
||||
|
||||
* Instead of using `chmod`, set the right file permissions locally.
|
||||
|
||||
* When files are copied with `COPY`, permissions are preserved.
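For example, a script can be made executable before the build; `COPY` then carries the permission bits into the image. A sketch (the file name is arbitrary):

```bash
# Set the permission bits locally; COPY will preserve them in the image.
touch entrypoint.sh
chmod 755 entrypoint.sh
stat -c '%a' entrypoint.sh    # prints 755 on Linux
```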
|
||||
|
||||
---
|
||||
|
||||
## Embedding unit tests in the build process
|
||||
|
||||
```dockerfile
|
||||
FROM <baseimage>
|
||||
RUN <install dependencies>
|
||||
COPY <code>
|
||||
RUN <build code>
|
||||
RUN <install test dependencies>
|
||||
COPY <test data sets and fixtures>
|
||||
RUN <unit tests>
|
||||
FROM <baseimage>
|
||||
RUN <install dependencies>
|
||||
COPY <code>
|
||||
RUN <build code>
|
||||
CMD, EXPOSE ...
|
||||
```
|
||||
|
||||
* The build fails as soon as an instruction fails
|
||||
* If `RUN <unit tests>` fails, the build doesn't produce an image
|
||||
* If it succeeds, it produces a clean image (without test libraries and data)
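This template maps directly to a multi-stage build. A sketch for a hypothetical Node.js app (stage name, ports, and commands are illustrative):

```dockerfile
FROM node:10 AS build-and-test
WORKDIR /src
COPY package*.json ./
RUN npm ci                       # installs dev + test dependencies
COPY . .
RUN npm test                     # the build fails here if the tests fail

FROM node:10
WORKDIR /src
COPY package*.json ./
RUN npm ci --only=production     # clean image: no test libraries
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```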
|
||||
|
||||
---
|
||||
|
||||
# Dockerfile examples
|
||||
|
||||
There are a number of tips, tricks, and techniques that we can use in Dockerfiles.
|
||||
|
||||
But sometimes, we have to use different (and even opposed) practices depending on:
|
||||
|
||||
- the complexity of our project,
|
||||
|
||||
- the programming language or framework that we are using,
|
||||
|
||||
- the stage of our project (early MVP vs. super-stable production),
|
||||
|
||||
- whether we're building a final image or a base for further images,
|
||||
|
||||
- etc.
|
||||
|
||||
We are going to show a few examples using very different techniques.
|
||||
|
||||
---
|
||||
|
||||
## When to optimize an image
|
||||
|
||||
When authoring official images, it is a good idea to reduce as much as possible:
|
||||
|
||||
- the number of layers,
|
||||
|
||||
- the size of the final image.
|
||||
|
||||
This is often done at the expense of build time and convenience for the image maintainer;
|
||||
but when an image is downloaded millions of times, saving even a few seconds of pull time
|
||||
can be worth it.
|
||||
|
||||
.small[
|
||||
```dockerfile
|
||||
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
|
||||
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
|
||||
&& docker-php-ext-install gd
|
||||
...
|
||||
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
|
||||
&& echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
|
||||
&& tar -xzf wordpress.tar.gz -C /usr/src/ \
|
||||
&& rm wordpress.tar.gz \
|
||||
&& chown -R www-data:www-data /usr/src/wordpress
|
||||
```
|
||||
]
|
||||
|
||||
(Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile))
|
||||
|
||||
---
|
||||
|
||||
## When to *not* optimize an image
|
||||
|
||||
Sometimes, it is better to prioritize *maintainer convenience*.
|
||||
|
||||
In particular, if:
|
||||
|
||||
- the image changes a lot,
|
||||
|
||||
- the image has very few users (e.g. only 1, the maintainer!),
|
||||
|
||||
- the image is built and run on the same machine,
|
||||
|
||||
- the image is built and run on machines with a very fast link ...
|
||||
|
||||
In these cases, just keep things simple!
|
||||
|
||||
(Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.)
|
||||
|
||||
---
|
||||
|
||||
```dockerfile
|
||||
FROM debian:sid
|
||||
|
||||
RUN apt-get update -q
|
||||
RUN apt-get install -yq build-essential make
|
||||
RUN apt-get install -yq zlib1g-dev
|
||||
RUN apt-get install -yq ruby ruby-dev
|
||||
RUN apt-get install -yq python-pygments
|
||||
RUN apt-get install -yq nodejs
|
||||
RUN apt-get install -yq cmake
|
||||
RUN gem install --no-rdoc --no-ri github-pages
|
||||
|
||||
COPY . /blog
|
||||
WORKDIR /blog
|
||||
|
||||
VOLUME /blog/_site
|
||||
|
||||
EXPOSE 4000
|
||||
CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Multi-dimensional versioning systems
|
||||
|
||||
Images can have a tag, indicating the version of the image.
|
||||
|
||||
But sometimes, there are multiple important components, and we need to indicate the versions
|
||||
for all of them.
|
||||
|
||||
This can be done with environment variables:
|
||||
|
||||
```dockerfile
|
||||
ENV PIP=9.0.3 \
|
||||
ZC_BUILDOUT=2.11.2 \
|
||||
SETUPTOOLS=38.7.0 \
|
||||
PLONE_MAJOR=5.1 \
|
||||
PLONE_VERSION=5.1.0 \
|
||||
PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d
|
||||
```
|
||||
|
||||
(Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile))
|
||||
|
||||
---
|
||||
|
||||
## Entrypoints and wrappers
|
||||
|
||||
It is very common to define a custom entrypoint.
|
||||
|
||||
That entrypoint will generally be a script, performing any combination of:
|
||||
|
||||
- pre-flights checks (if a required dependency is not available, display
|
||||
a nice error message early instead of an obscure one in a deep log file),
|
||||
|
||||
- generation or validation of configuration files,
|
||||
|
||||
- dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`),
|
||||
|
||||
- and more.
|
||||
|
||||
---
|
||||
|
||||
## A typical entrypoint script
|
||||
|
||||
```bash
|
||||
#!/bin/sh
|
||||
set -e
|
||||
|
||||
# first arg is '-f' or '--some-option'
|
||||
# or first arg is 'something.conf'
|
||||
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
|
||||
set -- redis-server "$@"
|
||||
fi
|
||||
|
||||
# allow the container to be started with '--user'
|
||||
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
|
||||
chown -R redis .
|
||||
exec su-exec redis "$0" "$@"
|
||||
fi
|
||||
|
||||
exec "$@"
|
||||
```
|
||||
|
||||
(Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh))
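The first test in that script relies on a POSIX parameter-expansion idiom: `${1#-}` strips a leading `-`, and `${1%.conf}` strips a trailing `.conf`; if the result differs from `$1`, the pattern matched. A standalone sketch:

```bash
# Does the first argument look like an option or a .conf file?
set -- --appendonly yes          # simulate container arguments
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
  echo "prepending redis-server"
fi
```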
|
||||
|
||||
---
|
||||
|
||||
## Factoring information
|
||||
|
||||
To facilitate maintenance (and avoid human errors), avoid repeating information such as:
|
||||
|
||||
- version numbers,
|
||||
|
||||
- remote asset URLs (e.g. source tarballs) ...
|
||||
|
||||
Instead, use environment variables.
|
||||
|
||||
.small[
|
||||
```dockerfile
|
||||
ENV NODE_VERSION 10.2.1
|
||||
...
|
||||
RUN ...
|
||||
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \
|
||||
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
|
||||
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
|
||||
&& grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
|
||||
&& tar -xf "node-v$NODE_VERSION.tar.xz" \
|
||||
&& cd "node-v$NODE_VERSION" \
|
||||
...
|
||||
```
|
||||
]
|
||||
|
||||
(Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile))
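The same factoring works in plain shell: derive every URL and file name from a single version variable, so bumping the version is a one-line change (values below are illustrative):

```bash
NODE_VERSION=10.2.1
TARBALL="node-v$NODE_VERSION.tar.xz"
URL="https://nodejs.org/dist/v$NODE_VERSION/$TARBALL"
echo "$URL"
# prints https://nodejs.org/dist/v10.2.1/node-v10.2.1.tar.xz
```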
|
||||
|
||||
---
|
||||
|
||||
## Overrides
|
||||
|
||||
In theory, development and production images should be the same.
|
||||
|
||||
In practice, we often need to enable specific behaviors in development (e.g. debug statements).
|
||||
|
||||
One way to reconcile both needs is to use Compose to enable these behaviors.
|
||||
|
||||
Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.
|
||||
|
||||
---
|
||||
|
||||
## Production image
|
||||
|
||||
This Dockerfile builds an image leveraging gunicorn:
|
||||
|
||||
```dockerfile
|
||||
FROM python
|
||||
RUN pip install flask
|
||||
RUN pip install gunicorn
|
||||
RUN pip install redis
|
||||
COPY . /src
|
||||
WORKDIR /src
|
||||
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
|
||||
EXPOSE 5000
|
||||
```
|
||||
|
||||
(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))
|
||||
|
||||
---
|
||||
|
||||
## Development Compose file
|
||||
|
||||
This Compose file uses the same image, but with a few overrides for development:
|
||||
|
||||
- the Flask development server is used (overriding `CMD`),
|
||||
|
||||
- the `DEBUG` environment variable is set,
|
||||
|
||||
- a volume is used to provide a faster local development workflow.
|
||||
|
||||
.small[
|
||||
```yaml
|
||||
services:
|
||||
www:
|
||||
build: www
|
||||
ports:
|
||||
- 8000:5000
|
||||
user: nobody
|
||||
environment:
|
||||
DEBUG: 1
|
||||
command: python counter.py
|
||||
volumes:
|
||||
- ./www:/src
|
||||
```
|
||||
]
|
||||
|
||||
(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))
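Compose can also merge a `docker-compose.override.yml` file automatically, which keeps dev-only settings out of the main file. A sketch (service name matching the example above):

```yaml
# docker-compose.override.yml (merged automatically by `docker-compose up`)
services:
  www:
    environment:
      DEBUG: 1
    command: python counter.py
    volumes:
      - ./www:/src
```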
|
||||
|
||||
---
|
||||
|
||||
## How to know which best practices are better?
|
||||
|
||||
- The main goal of containers is to make our lives easier.
|
||||
|
||||
- In this chapter, we showed many ways to write Dockerfiles.
|
||||
|
||||
- These Dockerfiles sometimes use diametrically opposed techniques.
|
||||
|
||||
- Yet, they were the "right" ones *for a specific situation.*
|
||||
|
||||
- It's OK (and even encouraged) to start simple and evolve as needed.
|
||||
|
||||
- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!
|
||||
|
||||
???
|
||||
|
||||
:EN:Optimizing images
|
||||
:EN:- Dockerfile tips, tricks, and best practices
|
||||
:EN:- Reducing build time
|
||||
:EN:- Reducing image size
|
||||
|
||||
:FR:Optimiser ses images
|
||||
:FR:- Bonnes pratiques, trucs et astuces
|
||||
:FR:- Réduire le temps de build
|
||||
:FR:- Réduire la taille des images
|
||||
173
slides/containers/Ecosystem.md
Normal file
|
||||
# The container ecosystem
|
||||
|
||||
In this chapter, we will talk about a few actors of the container ecosystem.
|
||||
|
||||
We have (arbitrarily) decided to focus on two groups:
|
||||
|
||||
- the Docker ecosystem,
|
||||
|
||||
- the Cloud Native Computing Foundation (CNCF) and its projects.
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||
## The Docker ecosystem
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Moby vs. Docker
|
||||
|
||||
- Docker Inc. (the company) started Docker (the open source project).
|
||||
|
||||
- At some point, it became necessary to differentiate between:
|
||||
|
||||
- the open source project (code base, contributors...),
|
||||
|
||||
- the product that we use to run containers (the engine),
|
||||
|
||||
- the platform that we use to manage containerized applications,
|
||||
|
||||
- the brand.
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Exercise in brand management
|
||||
|
||||
Questions:
|
||||
|
||||
--
|
||||
|
||||
- What is the brand of the car on the previous slide?
|
||||
|
||||
--
|
||||
|
||||
- What kind of engine does it have?
|
||||
|
||||
--
|
||||
|
||||
- Would you say that it's a safe or unsafe car?
|
||||
|
||||
--
|
||||
|
||||
- Harder question: can you drive from the US West Coast to the East Coast with it?
|
||||
|
||||
--
|
||||
|
||||
The answers to these questions are part of the Tesla brand.
|
||||
|
||||
---
|
||||
|
||||
## What if ...
|
||||
|
||||
- The blueprints for Tesla cars were available for free.
|
||||
|
||||
- You could legally build your own Tesla.
|
||||
|
||||
- You were allowed to customize it entirely.
|
||||
|
||||
(Put a combustion engine, drive it with a game pad ...)
|
||||
|
||||
- You could even sell the customized versions.
|
||||
|
||||
--
|
||||
|
||||
- ... And call your customized version "Tesla".
|
||||
|
||||
--
|
||||
|
||||
Would we give the same answers to the questions on the previous slide?
|
||||
|
||||
---
|
||||
|
||||
## From Docker to Moby
|
||||
|
||||
- Docker Inc. decided to split the brand.
|
||||
|
||||
- Moby is the open source project.
|
||||
|
||||
(= Components and libraries that you can use, reuse, customize, sell ...)
|
||||
|
||||
- Docker is the product.
|
||||
|
||||
(= Software that you can use, buy support contracts ...)
|
||||
|
||||
- Docker is made with Moby.
|
||||
|
||||
- When Docker Inc. improves the Docker products, it improves Moby.
|
||||
|
||||
(And vice versa.)
|
||||
|
||||
|
||||
---
|
||||
|
||||
## Other examples
|
||||
|
||||
- *Read the Docs* is an open source project to generate and host documentation.
|
||||
|
||||
- You can host it yourself (on your own servers).
|
||||
|
||||
- You can also get hosted on readthedocs.org.
|
||||
|
||||
- The maintainers of the open source project often receive
|
||||
support requests from users of the hosted product ...
|
||||
|
||||
- ... And the maintainers of the hosted product often
|
||||
receive support requests from users of self-hosted instances.
|
||||
|
||||
- Another example:
|
||||
|
||||
*WordPress.com is a blogging platform that is owned and hosted online by
|
||||
Automattic. It is run on WordPress, an open source piece of software used by
|
||||
bloggers. (Wikipedia)*
|
||||
|
||||
---
|
||||
|
||||
## Docker CE vs Docker EE
|
||||
|
||||
- Docker CE = Community Edition.
|
||||
|
||||
- Available on most Linux distros, Mac, Windows.
|
||||
|
||||
- Optimized for developers and ease of use.
|
||||
|
||||
- Docker EE = Enterprise Edition.
|
||||
|
||||
- Available only on a subset of Linux distros + Windows servers.
|
||||
|
||||
(Only available when there is a strong partnership to offer enterprise-class support.)
|
||||
|
||||
- Optimized for production use.
|
||||
|
||||
- Comes with additional components: security scanning, RBAC ...
|
||||
|
||||
---
|
||||
|
||||
## The CNCF
|
||||
|
||||
- Non-profit, part of the Linux Foundation; founded in December 2015.
|
||||
|
||||
*The Cloud Native Computing Foundation builds sustainable ecosystems and fosters
|
||||
a community around a constellation of high-quality projects that orchestrate
|
||||
containers as part of a microservices architecture.*
|
||||
|
||||
*CNCF is an open source software foundation dedicated to making cloud-native computing universal and sustainable.*
|
||||
|
||||
- Home of Kubernetes (and many other projects now).
|
||||
|
||||
- Funded by corporate memberships.
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
5
slides/containers/Exercise_Composefile.md
Normal file
|
||||
# Exercise — writing a Compose file
|
||||
|
||||
Let's write a Compose file for the wordsmith app!
|
||||
|
||||
The code is at: https://github.com/jpetazzo/wordsmith
|
||||
9
slides/containers/Exercise_Dockerfile_Advanced.md
Normal file
|
||||
# Exercise — writing better Dockerfiles
|
||||
|
||||
Let's update our Dockerfiles to leverage multi-stage builds!
|
||||
|
||||
The code is at: https://github.com/jpetazzo/wordsmith
|
||||
|
||||
Use a different tag for these images, so that we can compare their sizes.
|
||||
|
||||
What's the size difference between single-stage and multi-stage builds?
|
||||
100
slides/containers/Exercise_Dockerfile_Basic.md
Normal file
|
||||
# Exercise — writing Dockerfiles
|
||||
|
||||
Let's write Dockerfiles for an existing application!
|
||||
|
||||
1. Check out the code repository
|
||||
|
||||
2. Read all the instructions
|
||||
|
||||
3. Write Dockerfiles
|
||||
|
||||
4. Build and test them individually
|
||||
|
||||
<!--
|
||||
5. Test them together with the provided Compose file
|
||||
-->
|
||||
|
||||
---
|
||||
|
||||
## Code repository
|
||||
|
||||
Clone the repository available at:
|
||||
|
||||
https://github.com/jpetazzo/wordsmith
|
||||
|
||||
It should look like this:
|
||||
```
|
||||
├── LICENSE
|
||||
├── README
|
||||
├── db/
|
||||
│ └── words.sql
|
||||
├── web/
|
||||
│ ├── dispatcher.go
|
||||
│ └── static/
|
||||
└── words/
|
||||
├── pom.xml
|
||||
└── src/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Instructions
|
||||
|
||||
The repository contains instructions in English and French.
|
||||
<br/>
|
||||
For now, we only care about the first part (about writing Dockerfiles).
|
||||
<br/>
|
||||
Place each Dockerfile in its own directory, like this:
|
||||
```
|
||||
├── LICENSE
|
||||
├── README
|
||||
├── db/
|
||||
│ ├── `Dockerfile`
|
||||
│ └── words.sql
|
||||
├── web/
|
||||
│ ├── `Dockerfile`
|
||||
│ ├── dispatcher.go
|
||||
│ └── static/
|
||||
└── words/
|
||||
├── `Dockerfile`
|
||||
├── pom.xml
|
||||
└── src/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Build and test
|
||||
|
||||
Build and run each Dockerfile individually.
|
||||
|
||||
For `db`, we should be able to see some messages confirming that the data set
|
||||
was loaded successfully (some `INSERT` lines in the container output).
|
||||
|
||||
For `web` and `words`, we should be able to see some message looking like
|
||||
"server started successfully".
|
||||
|
||||
That's all we care about for now!
|
||||
|
||||
Bonus question: make sure that each container stops correctly when hitting Ctrl-C.
|
||||
|
||||
???
|
||||
|
||||
## Test with a Compose file
|
||||
|
||||
Place the following Compose file at the root of the repository:
|
||||
|
||||
|
||||
```yaml
|
||||
version: "3"
|
||||
services:
|
||||
db:
|
||||
build: db
|
||||
words:
|
||||
build: words
|
||||
web:
|
||||
build: web
|
||||
ports:
|
||||
- 8888:80
|
||||
```
|
||||
|
||||
Test the whole app by bringing up the stack and connecting to port 8888.
|
||||
297
slides/containers/First_Containers.md
Normal file
|
||||
|
||||
class: title
|
||||
|
||||
# Our first containers
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Objectives
|
||||
|
||||
At the end of this lesson, you will have:
|
||||
|
||||
* Seen Docker in action.
|
||||
|
||||
* Started your first containers.
|
||||
|
||||
---
|
||||
|
||||
## Hello World
|
||||
|
||||
In your Docker environment, just run the following command:
|
||||
|
||||
```bash
|
||||
$ docker run busybox echo hello world
|
||||
hello world
|
||||
```
|
||||
|
||||
(If your Docker install is brand new, you will also see a few extra lines,
|
||||
corresponding to the download of the `busybox` image.)
|
||||
|
||||
---
|
||||
|
||||
## That was our first container!
|
||||
|
||||
* We used one of the smallest, simplest images available: `busybox`.
|
||||
|
||||
* `busybox` is typically used in embedded systems (phones, routers...)
|
||||
|
||||
* We ran a single process that echoed `hello world`.
|
||||
|
||||
---
|
||||
|
||||
## A more useful container
|
||||
|
||||
Let's run a more exciting container:
|
||||
|
||||
```bash
|
||||
$ docker run -it ubuntu
|
||||
root@04c0bb0a6c07:/#
|
||||
```
|
||||
|
||||
* This is a brand new container.
|
||||
|
||||
* It runs a bare-bones, no-frills `ubuntu` system.
|
||||
|
||||
* `-it` is shorthand for `-i -t`.
|
||||
|
||||
* `-i` tells Docker to connect us to the container's stdin.
|
||||
|
||||
* `-t` tells Docker that we want a pseudo-terminal.
|
||||
|
||||
---
|
||||
|
||||
## Do something in our container
|
||||
|
||||
Try to run `figlet` in our container.
|
||||
|
||||
```bash
|
||||
root@04c0bb0a6c07:/# figlet hello
|
||||
bash: figlet: command not found
|
||||
```
|
||||
|
||||
Alright, we need to install it.
|
||||
|
||||
---
|
||||
|
||||
## Install a package in our container
|
||||
|
||||
We want `figlet`, so let's install it:
|
||||
|
||||
```bash
|
||||
root@04c0bb0a6c07:/# apt-get update
|
||||
...
|
||||
Fetched 1514 kB in 14s (103 kB/s)
|
||||
Reading package lists... Done
|
||||
root@04c0bb0a6c07:/# apt-get install figlet
|
||||
Reading package lists... Done
|
||||
...
|
||||
```
|
||||
|
||||
One minute later, `figlet` is installed!
|
||||
|
||||
---
|
||||
|
||||
## Try to run our freshly installed program
|
||||
|
||||
The `figlet` program takes a message as parameter.
|
||||
|
||||
```bash
|
||||
root@04c0bb0a6c07:/# figlet hello
|
||||
_ _ _
|
||||
| |__ ___| | | ___
|
||||
| '_ \ / _ \ | |/ _ \
|
||||
| | | | __/ | | (_) |
|
||||
|_| |_|\___|_|_|\___/
|
||||
```
|
||||
|
||||
Beautiful! 😍
|
||||
|
||||
---
|
||||
|
||||
class: in-person
|
||||
|
||||
## Counting packages in the container
|
||||
|
||||
Let's check how many packages are installed there.
|
||||
|
||||
```bash
|
||||
root@04c0bb0a6c07:/# dpkg -l | wc -l
|
||||
97
|
||||
```
|
||||
|
||||
* `dpkg -l` lists the packages installed in our container
|
||||
|
||||
* `wc -l` counts them
|
||||
|
||||
How many packages do we have on our host?
|
||||
|
||||
---
|
||||
|
||||
class: in-person
|
||||
|
||||
## Counting packages on the host
|
||||
|
||||
Exit the container by logging out of the shell, as you usually would.
|
||||
|
||||
(E.g. with `^D` or `exit`)
|
||||
|
||||
```bash
|
||||
root@04c0bb0a6c07:/# exit
|
||||
```
|
||||
|
||||
Now, try to:
|
||||
|
||||
* run `dpkg -l | wc -l`. How many packages are installed?
|
||||
|
||||
* run `figlet`. Does that work?
|
||||
|
||||
---
|
||||
|
||||
class: self-paced
|
||||
|
||||
## Comparing the container and the host
|
||||
|
||||
Exit the container by logging out of the shell, with `^D` or `exit`.
|
||||
|
||||
Now try to run `figlet`. Does that work?
|
||||
|
||||
(It shouldn't; except if, by coincidence, you are running on a machine where figlet was installed before.)
|
||||
|
||||
---
|
||||
|
||||
## Host and containers are independent things
|
||||
|
||||
* We ran an `ubuntu` container on a Linux/Windows/macOS host.
|
||||
|
||||
* They have different, independent packages.
|
||||
|
||||
* Installing something on the host doesn't expose it to the container.
|
||||
|
||||
* And vice-versa.
|
||||
|
||||
* Even if both the host and the container have the same Linux distro!
|
||||
|
||||
* We can run *any container* on *any host*.
|
||||
|
||||
(One exception: Windows containers can only run on Windows hosts; at least for now.)
|
||||
|
||||
---
|
||||
|
||||
## Where's our container?
|
||||
|
||||
* Our container is now in a *stopped* state.
|
||||
|
||||
* It still exists on disk, but all compute resources have been freed up.
|
||||
|
||||
* We will see later how to get back to that container.
|
||||
|
||||
---
|
||||
|
||||
## Starting another container
|
||||
|
||||
What if we start a new container, and try to run `figlet` again?
|
||||
|
||||
```bash
|
||||
$ docker run -it ubuntu
|
||||
root@b13c164401fb:/# figlet
|
||||
bash: figlet: command not found
|
||||
```
|
||||
|
||||
* We started a *brand new container*.
|
||||
|
||||
* The basic Ubuntu image was used, and `figlet` is not here.
|
||||
|
||||
---
|
||||
|
||||
## Where's my container?
|
||||
|
||||
* Can we reuse that container that we took time to customize?
|
||||
|
||||
*We can, but that's not the default workflow with Docker.*
|
||||
|
||||
* What's the default workflow, then?
|
||||
|
||||
*Always start with a fresh container.*
|
||||
<br/>
|
||||
*If we need something installed in our container, build a custom image.*
|
||||
|
||||
* That seems complicated!
|
||||
|
||||
*We'll see that it's actually pretty easy!*
|
||||
|
||||
* And what's the point?
|
||||
|
||||
*This puts a strong emphasis on automation and repeatability. Let's see why ...*
|
||||
|
||||
---
|
||||
|
||||
## Pets vs. Cattle
|
||||
|
||||
* In the "pets vs. cattle" metaphor, there are two kinds of servers.
|
||||
|
||||
* Pets:
|
||||
|
||||
* have distinctive names and unique configurations
|
||||
|
||||
* when they have an outage, we do everything we can to fix them
|
||||
|
||||
* Cattle:
|
||||
|
||||
* have generic names (e.g. with numbers) and generic configuration
|
||||
|
||||
* configuration is enforced by configuration management, golden images ...
|
||||
|
||||
* when they have an outage, we can replace them immediately with a new server
|
||||
|
||||
* What's the connection with Docker and containers?
|
||||
|
||||
---
|
||||
|
||||
## Local development environments
|
||||
|
||||
* When we use local VMs (with e.g. VirtualBox or VMware), our workflow looks like this:
|
||||
|
||||
* create VM from base template (Ubuntu, CentOS...)
|
||||
|
||||
* install packages, set up environment
|
||||
|
||||
* work on project
|
||||
|
||||
* when done, shut down VM
|
||||
|
||||
* next time we need to work on project, restart VM as we left it
|
||||
|
||||
* if we need to tweak the environment, we do it live
|
||||
|
||||
* Over time, the VM configuration evolves, diverges.
|
||||
|
||||
* We don't have a clean, reliable, deterministic way to provision that environment.
|
||||
|
||||
---
|
||||
|
||||
## Local development with Docker
|
||||
|
||||
* With Docker, the workflow looks like this:
|
||||
|
||||
* create container image with our dev environment
|
||||
|
||||
* run container with that image
|
||||
|
||||
* work on project
|
||||
|
||||
* when done, shut down container
|
||||
|
||||
* next time we need to work on project, start a new container
|
||||
|
||||
* if we need to tweak the environment, we create a new image
|
||||
|
||||
* We have a clear definition of our environment, and can share it reliably with others.
|
||||
|
||||
* Let's see in the next chapters how to bake a custom image with `figlet`!
|
||||
|
||||
???
|
||||
|
||||
:EN:- Running our first container
|
||||
:FR:- Lancer nos premiers conteneurs
|
||||
233
slides/containers/Getting_Inside.md
Normal file
|
||||
|
||||
class: title
|
||||
|
||||
# Getting inside a container
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Objectives
|
||||
|
||||
On a traditional server or VM, we sometimes need to:
|
||||
|
||||
* log into the machine (with SSH or on the console),
|
||||
|
||||
* analyze the disks (by removing them or rebooting with a rescue system).
|
||||
|
||||
In this chapter, we will see how to do that with containers.
|
||||
|
||||
---
|
||||
|
||||
## Getting a shell
|
||||
|
||||
Every once in a while, we want to log into a machine.
|
||||
|
||||
In a perfect world, this shouldn't be necessary.
|
||||
|
||||
* You need to install or update packages (and their configuration)?
|
||||
|
||||
Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...)
|
||||
|
||||
* You need to view logs and metrics?
|
||||
|
||||
Collect and access them through a centralized platform.
|
||||
|
||||
In the real world, though ... we often need shell access!
|
||||
|
||||
---

## Not getting a shell

Even without a perfect deployment system, we can do many operations without getting a shell.

* Installing packages can (and should) be done in the container image.

* Configuration can be done at the image level, or when the container starts.

* Dynamic configuration can be stored in a volume (shared with another container).

* Logs written to stdout are automatically collected by the Docker Engine.

* Other logs can be written to a shared volume.

* Process information and metrics are visible from the host.

_Let's save logging, volumes ... for later, but let's have a look at process information!_
---

## Viewing container processes from the host

If you run Docker on Linux, container processes are visible on the host.

```bash
$ ps faux | less
```

* Scroll around the output of this command.

* You should see the `jpetazzo/clock` container.

* A containerized process is just like any other process on the host.

* We can use tools like `lsof`, `strace`, `gdb` ... to analyze them.
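To see for ourselves that containerized processes are ordinary host processes, here is a quick sketch to run on a Linux host (it works the same whether or not containers are running):

```bash
# Containerized processes show up in the regular process listing,
# right next to host processes:
ps faux | head -n 10

# The usual tooling applies; for instance, count all visible processes:
ps -e --no-headers | wc -l
```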
---

class: extra-details

## What's the difference between a container process and a host process?

* Each process (containerized or not) belongs to *namespaces* and *cgroups*.

* The namespaces and cgroups determine what a process can "see" and "do".

* Analogy: each process (containerized or not) runs with a specific UID (user ID).

* UID=0 is root, and has elevated privileges. Other UIDs are normal users.

_We will give more details about namespaces and cgroups later._
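We can already peek at these memberships under `/proc`. This sketch inspects the current shell; on a Linux Docker host, a containerized PID (found with `ps`) can be inspected the same way:

```bash
pid=$$                             # any PID works, e.g. one found with ps
grep '^Name:' "/proc/$pid/status"  # process name
ls -l "/proc/$pid/ns/"             # namespace memberships (symlinks)
cat "/proc/$pid/cgroup"            # cgroup memberships
```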
---

## Getting a shell in a running container

* Sometimes, we need to get a shell anyway.

* We _could_ run some SSH server in the container ...

* But it is easier to use `docker exec`.

```bash
$ docker exec -ti ticktock sh
```

* This creates a new process (running `sh`) _inside_ the container.

* This can also be done "manually" with the tool `nsenter`.

---

## Caveats

* The tool that you want to run needs to exist in the container.

* Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time.

  (This lets you e.g. set up network interfaces, even if you don't have `ifconfig` or `ip` in the container.)

* Most importantly: the container needs to be running.

* What if the container is stopped or crashed?
---

## Getting a shell in a stopped container

* A stopped container is only _storage_ (like a disk drive).

* We cannot SSH into a disk drive or USB stick!

* We need to connect the disk to a running machine.

* How does that translate into the container world?

---

## Analyzing a stopped container

As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`.

```bash
docker run jpetazzo/crashtest
```

The container starts, but then stops immediately, without any output.

What would MacGyver™ do?

First, let's check the status of that container.

```bash
docker ps -l
```
---

## Viewing filesystem changes

* We can use `docker diff` to see files that were added / changed / removed.

```bash
docker diff <container_id>
```

* The container ID was shown by `docker ps -l`.

* We can also see it with `docker ps -lq`.

* The output of `docker diff` shows some interesting log files!

---

## Accessing files

* We can extract files with `docker cp`.

```bash
docker cp <container_id>:/var/log/nginx/error.log .
```

* Then we can look at that log file.

```bash
cat error.log
```

(The directory `/run/nginx` doesn't exist.)
---

## Exploring a crashed container

* We can restart a container with `docker start` ...

* ... but it will probably crash again immediately!

* We cannot specify a different program to run with `docker start`.

* But we can create a new image from the crashed container:

```bash
docker commit <container_id> debugimage
```

* Then we can run a new container from that image, with a custom entrypoint:

```bash
docker run -ti --entrypoint sh debugimage
```
---

class: extra-details

## Obtaining a complete dump

* We can also dump the entire filesystem of a container.

* This is done with `docker export`.

* It generates a tar archive.

```bash
docker export <container_id> | tar tv
```

This will give a detailed listing of the content of the container.
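The `tar tv` trick works on any tar stream; here is the same idea simulated with a local directory instead of a container (the temporary paths are made up for the example):

```bash
tmp=$(mktemp -d)
mkdir -p "$tmp/etc"
echo "hello" > "$tmp/etc/motd"

# Same pattern as `docker export <container_id> | tar tv`:
tar -C "$tmp" -cf - . | tar tv

rm -rf "$tmp"
```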
???

:EN:- Troubleshooting and getting inside a container
:FR:- Inspecter un conteneur en détail, en *live* ou *post-mortem*
137
slides/containers/Init_Systems.md
Normal file
@@ -0,0 +1,137 @@
# Init systems and PID 1

In this chapter, we will consider:

- the role of PID 1 in the world of Docker,

- how to avoid some common pitfalls due to the misuse of init systems.

---

## What's an init system?

- On UNIX, the "init system" (or "init" for short) is PID 1.

- It is the first process started by the kernel when the system starts.

- It has multiple responsibilities:

  - start every other process on the machine,

  - reap orphaned zombie processes.
---

class: extra-details

## Orphaned zombie processes?!?

- When a process exits (or "dies"), it becomes a "zombie".

  (Zombie processes show up in `ps` or `top` with the status code `Z`.)

- Its parent process must *reap* the zombie process.

  (This is done by calling `waitpid()` to retrieve the process' exit status.)

- When a process exits, if it has child processes, these processes are "orphaned".

- They are then re-parented to PID 1, init.

- Init therefore needs to take care of these orphaned processes when they exit.
---

## Don't use init systems in containers

- It's often tempting to use an init system or a process manager.

  (Examples: *systemd*, *supervisord*...)

- Our containers are then called "system containers".

  (By contrast with "application containers".)

- "System containers" are similar to lightweight virtual machines.

- They have multiple downsides:

  - when starting multiple processes, their logs get mixed on stdout,

  - if the application process dies, the container engine doesn't see it.

- Overall, they make it harder to operate and troubleshoot containerized apps.
---

## Exceptions and workarounds

- Sometimes, it's convenient to run a real init system like *systemd*.

  (Example: a CI system whose goal is precisely to test an init script or unit file.)

- If we need to run multiple processes: can we use multiple containers?

  (Example: [this Compose file](https://github.com/jpetazzo/container.training/blob/master/compose/simple-k8s-control-plane/docker-compose.yaml) runs multiple processes together.)

- When deploying with Kubernetes:

  - a container belongs to a pod,

  - a pod can have multiple containers.
---

## What about these zombie processes?

- Our application runs as PID 1 in the container.

- Our application may or may not be designed to reap zombie processes.

- If our application uses subprocesses and doesn't reap them ...

  ... this can lead to PID exhaustion!

  (Or, more realistically, to a confusing herd of zombie processes.)

- How can we solve this?
---

## Tini to the rescue

- Docker can automatically provide a minimal `init` process.

- This is enabled with `docker run --init ...`

- It uses a small init system ([tini](https://github.com/krallin/tini)) as PID 1:

  - it reaps zombies,

  - it forwards signals,

  - it exits when the child exits.

- It is totally transparent to our application.

- We should use it if our application creates subprocesses but doesn't reap them.
---

class: extra-details

## What about Kubernetes?

- Kubernetes does not expose that `--init` option.

- However, we can achieve the same result with [Process Namespace Sharing](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/).

- When Process Namespace Sharing is enabled, PID 1 will be `pause`.

- That `pause` process takes care of reaping zombies.

- Process Namespace Sharing is available since Kubernetes 1.16.

- If you're using an older version of Kubernetes ...

  ... you might have to add `tini` explicitly to your Docker image.
427
slides/containers/Initial_Images.md
Normal file
@@ -0,0 +1,427 @@
class: title

# Understanding Docker images

![image](images/title-understanding-docker-images.png)

---

## Objectives

In this section, we will explain:

* What an image is.

* What a layer is.

* The various image namespaces.

* How to search and download images.

* Image tags and when to use them.
---

## What is an image?

* Image = files + metadata

* These files form the root filesystem of our container.

* The metadata can indicate a number of things, e.g.:

  * the author of the image
  * the command to execute in the container when starting it
  * environment variables to be set
  * etc.

* Images are made of *layers*, conceptually stacked on top of each other.

* Each layer can add, change, and remove files and/or metadata.

* Images can share layers to optimize disk usage, transfer times, and memory use.
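To build intuition for layer stacking, here is a toy sketch using plain directories and `cp` (real engines use union filesystems such as overlay2, not copies):

```bash
tmp=$(mktemp -d)
mkdir -p "$tmp/layer1" "$tmp/layer2" "$tmp/merged"
echo "from layer 1" > "$tmp/layer1/a.txt"
echo "from layer 2" > "$tmp/layer2/b.txt"
echo "overridden by layer 2" > "$tmp/layer2/a.txt"

cp -r "$tmp/layer1/." "$tmp/merged/"   # lower layer first
cp -r "$tmp/layer2/." "$tmp/merged/"   # upper layer wins on conflicts

cat "$tmp/merged/a.txt"   # the upper layer's version of a.txt
ls "$tmp/merged"          # both files are visible in the merged view
rm -rf "$tmp"
```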
---

## Example for a Java webapp

Each of the following items will correspond to one layer:

* CentOS base layer
* Packages and configuration files added by our local IT
* JRE
* Tomcat
* Our application's dependencies
* Our application code and assets
* Our application configuration

(Note: app config is generally added by orchestration facilities.)

---

class: pic

## The read-write layer

![layers](images/container-layers.jpg)
---

## Differences between containers and images

* An image is a read-only filesystem.

* A container is an encapsulated set of processes,
  running in a read-write copy of that filesystem.

* To optimize container boot time, *copy-on-write* is used
  instead of regular copy.

* `docker run` starts a container from a given image.

---

class: pic

## Multiple containers sharing the same image

![layers](images/sharing-layers.jpg)

---

## Comparison with object-oriented programming

* Images are conceptually similar to *classes*.

* Layers are conceptually similar to *inheritance*.

* Containers are conceptually similar to *instances*.
---

## Wait a minute...

If an image is read-only, how do we change it?

* We don't.

* We create a new container from that image.

* Then we make changes to that container.

* When we are satisfied with those changes, we transform them into a new layer.

* A new image is created by stacking the new layer on top of the old image.

---

## A chicken-and-egg problem

* The only way to create an image is by "freezing" a container.

* The only way to create a container is by instantiating an image.

* Help!

---

## Creating the first images

There is a special empty image called `scratch`.

* It allows us to *build from scratch*.

The `docker import` command loads a tarball into Docker.

* The imported tarball becomes a standalone image.
* That new image has a single layer.

Note: you will probably never have to do this yourself.
---

## Creating other images

`docker commit`

* Saves all the changes made to a container into a new layer.
* Creates a new image (effectively a copy of the container).

`docker build` **(used 99% of the time)**

* Performs a repeatable build sequence.
* This is the preferred method!

We will explain both methods in a moment.

---

## Image namespaces

There are three namespaces:

* Official images

  e.g. `ubuntu`, `busybox` ...

* User (and organization) images

  e.g. `jpetazzo/clock`

* Self-hosted images

  e.g. `registry.example.com:5000/my-private/image`

Let's explain each of them.
---

## Root namespace

The root namespace is for official images.

They are gated by Docker Inc.

They are generally authored and maintained by third parties.

Those images include:

* Small, "swiss-army-knife" images like busybox.

* Distro images to be used as bases for your builds, like ubuntu, fedora...

* Ready-to-use components and services, like redis, postgresql...

* Over 150 at this point!
---

## User namespace

The user namespace holds images for Docker Hub users and organizations.

For example:

```bash
jpetazzo/clock
```

The Docker Hub user is:

```bash
jpetazzo
```

The image name is:

```bash
clock
```
---

## Self-hosted namespace

This namespace holds images which are not hosted on Docker Hub, but on third
party registries.

They contain the hostname (or IP address), and optionally the port, of the
registry server.

For example:

```bash
localhost:5000/wordpress
```

* `localhost:5000` is the host and port of the registry
* `wordpress` is the name of the image

Other examples:

```bash
quay.io/coreos/etcd
gcr.io/google-containers/hugo
```
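To make the three namespaces concrete, here is a rough sketch of how an image reference could be split into registry, name, and tag. (This is a simplified approximation, not Docker's exact resolution rules; for instance, it ignores the implicit `library/` prefix for official images.)

```bash
parse_image() {
  ref=$1
  registry="docker.io"
  tag="latest"
  # Separate the tag: the part after the last ":", unless it contains "/"
  # (in that case the ":" belongs to a registry port, not a tag).
  case "${ref##*:}" in
    "$ref") ;;   # no ":" at all -> no tag
    */*)    ;;   # ":" is a registry port -> no tag
    *) tag=${ref##*:}; ref=${ref%:*} ;;
  esac
  # The first component is a registry if it looks like a hostname
  # (contains a dot or a port, or is "localhost").
  first=${ref%%/*}
  case "$first" in
    *.*|*:*|localhost) registry=$first; ref=${ref#*/} ;;
  esac
  echo "registry=$registry name=$ref tag=$tag"
}

parse_image ubuntu                    # registry=docker.io name=ubuntu tag=latest
parse_image localhost:5000/wordpress  # registry=localhost:5000 name=wordpress tag=latest
parse_image quay.io/coreos/etcd:v3.5  # registry=quay.io name=coreos/etcd tag=v3.5
```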
---

## How do you store and manage images?

Images can be stored:

* On your Docker host.
* In a Docker registry.

You can use the Docker client to download (pull) or upload (push) images.

To be more accurate: you can use the Docker client to tell a Docker Engine
to push and pull images to and from a registry.

---

## Showing current images

Let's look at what images are on our host now.

```bash
$ docker images
REPOSITORY       TAG     IMAGE ID      CREATED        SIZE
fedora           latest  ddd5c9c1d0f2  3 days ago     204.7 MB
centos           latest  d0e7f81ca65c  3 days ago     196.6 MB
ubuntu           latest  07c86167cdc4  4 days ago     188 MB
redis            latest  4f5f397d4b7c  5 days ago     177.6 MB
postgres         latest  afe2b5e1859b  5 days ago     264.5 MB
alpine           latest  70c557e50ed6  5 days ago     4.798 MB
debian           latest  f50f9524513f  6 days ago     125.1 MB
busybox          latest  3240943c9ea3  2 weeks ago    1.114 MB
training/namer   latest  902673acc741  9 months ago   289.3 MB
jpetazzo/clock   latest  12068b93616f  12 months ago  2.433 MB
```
---

## Searching for images

We cannot list *all* images on a remote registry, but
we can search for a specific keyword:

```bash
$ docker search marathon
NAME                     DESCRIPTION                     STARS  OFFICIAL  AUTOMATED
mesosphere/marathon      A cluster-wide init and co...   105              [OK]
mesoscloud/marathon      Marathon                        31               [OK]
mesosphere/marathon-lb   Script to update haproxy b...   22               [OK]
tobilg/mongodb-marathon  A Docker image to start a ...   4                [OK]
```

* "Stars" indicate the popularity of the image.

* "Official" images are those in the root namespace.

* "Automated" images are built automatically by the Docker Hub.
  <br/>(This means that their build recipe is always available.)
|
||||
|
||||
## Downloading images
|
||||
|
||||
There are two ways to download images.
|
||||
|
||||
* Explicitly, with `docker pull`.
|
||||
|
||||
* Implicitly, when executing `docker run` and the image is not found locally.
|
||||
|
||||
---
|
||||
|
||||
## Pulling an image
|
||||
|
||||
```bash
|
||||
$ docker pull debian:jessie
|
||||
Pulling repository debian
|
||||
b164861940b8: Download complete
|
||||
b164861940b8: Pulling image (jessie) from debian
|
||||
d1881793a057: Download complete
|
||||
```
|
||||
|
||||
* As seen previously, images are made up of layers.
|
||||
|
||||
* Docker has downloaded all the necessary layers.
|
||||
|
||||
* In this example, `:jessie` indicates which exact version of Debian
|
||||
we would like.
|
||||
|
||||
It is a *version tag*.
|
||||
|
||||
---

## Images and tags

* Images can have tags.

* Tags define image versions or variants.

* `docker pull ubuntu` will refer to `ubuntu:latest`.

* The `:latest` tag is generally updated often.
---

## When to (not) use tags

Don't specify tags:

* When doing rapid testing and prototyping.
* When experimenting.
* When you want the latest version.

Do specify tags:

* When recording a procedure into a script.
* When going to production.
* To ensure that the same version will be used everywhere.
* To ensure repeatability later.

This is similar to what we would do with `pip install`, `npm install`, etc.
---

class: extra-details

## Multi-arch images

- An image can support multiple architectures.

- More precisely, a specific *tag* in a given *repository* can have either:

  - a single *manifest* referencing an image for a single architecture,

  - a *manifest list* (or *fat manifest*) referencing multiple images.

- In a *manifest list*, each image is identified by a combination of:

  - `os` (linux, windows)

  - `architecture` (amd64, arm, arm64...)

  - optional fields like `variant` (for arm and arm64) and `os.version` (for windows)
---

class: extra-details

## Working with multi-arch images

- The Docker Engine will pull "native" images when available

  (images matching its own os/architecture/variant)

- We can ask for a specific image platform with `--platform`

- The Docker Engine can run non-native images thanks to QEMU+binfmt

  (automatically on Docker Desktop; with a bit of setup on Linux)
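The "native" platform is derived from the host's kernel and architecture, which we can check directly:

```bash
uname -s   # kernel, maps to the "os" field (e.g. Linux)
uname -m   # machine architecture (e.g. x86_64, which Docker calls "amd64")
```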
---

## Section summary

We've learned how to:

* Understand images and layers.
* Understand Docker image namespacing.
* Search and download images.

???

:EN:Building images
:EN:- Containers, images, and layers
:EN:- Image addresses and tags
:EN:- Finding and transferring images

:FR:Construire des images
:FR:- La différence entre un conteneur et une image
:FR:- La notion de *layer* partagé entre images
178
slides/containers/Installing_Docker.md
Normal file
@@ -0,0 +1,178 @@
class: title

# Installing Docker

![install](images/title-installing-docker.jpg)

---

## Objectives

At the end of this lesson, you will know:

* How to install Docker.

* When to use `sudo` when running Docker commands.

*Note:* if you were provided with a training VM for a hands-on
tutorial, you can skip this chapter, since that VM already
has Docker installed, and Docker has already been set up to run
without `sudo`.

---

## Installing Docker

There are many ways to install Docker.

We can arbitrarily distinguish:

* Installing Docker on an existing Linux machine (physical or VM)

* Installing Docker on macOS or Windows

* Installing Docker on a fleet of cloud VMs
---

## Installing Docker on Linux

* The recommended method is to install the packages supplied by Docker Inc.:

  - add Docker Inc.'s package repositories to your system configuration

  - install the Docker Engine

* Detailed installation instructions (distro by distro) are available on:

  https://docs.docker.com/engine/installation/

* You can also install from binaries (if your distro is not supported):

  https://docs.docker.com/engine/installation/linux/docker-ce/binaries/

* To quickly set up a dev environment, Docker provides a convenience install script:

```bash
curl -fsSL get.docker.com | sh
```
---

class: extra-details

## Docker Inc. packages vs distribution packages

* Docker Inc. releases new versions monthly (edge) and quarterly (stable)

* Releases are immediately available on Docker Inc.'s package repositories

* Linux distros don't always update to the latest Docker version

  (Sometimes, updating would break their guidelines for major/minor upgrades)

* Sometimes, some distros have carried packages with custom patches

* Sometimes, these patches added critical security bugs ☹

* Installing through Docker Inc.'s repositories is a bit of extra work …

  … but it is generally worth it!
---

## Installing Docker on macOS and Windows

* On macOS, the recommended method is to use Docker Desktop for Mac:

  https://docs.docker.com/docker-for-mac/install/

* On Windows 10 Pro, Enterprise, and Education, you can use Docker Desktop for Windows:

  https://docs.docker.com/docker-for-windows/install/

* On older versions of Windows, you can use the Docker Toolbox:

  https://docs.docker.com/toolbox/toolbox_install_windows/

* On Windows Server 2016, you can also install the native engine:

  https://docs.docker.com/install/windows/docker-ee/
---

## Docker Desktop

* Special Docker edition available for Mac and Windows

* Integrates well with the host OS:

  * installed like normal user applications on the host

  * provides a user-friendly GUI to edit Docker configuration and settings

* Only supports running one Docker VM at a time ...

  ... but we can use `docker-machine`, the Docker Toolbox, VirtualBox, etc. to get a cluster.
---

class: extra-details

## Docker Desktop internals

* Leverages the host OS virtualization subsystem

  (e.g. the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS)

* Under the hood, runs a tiny VM

  (transparent to our daily use)

* Accesses network resources like normal applications

  (and therefore, plays better with enterprise VPNs and firewalls)

* Supports filesystem sharing through volumes

  (we'll talk about this later)
---

## Running Docker on macOS and Windows

When you execute `docker version` from the terminal:

* the CLI connects to the Docker Engine over a standard socket,
* the Docker Engine is, in fact, running in a VM,
* ... but the CLI doesn't know or care about that,
* the CLI sends a request using the REST API,
* the Docker Engine in the VM processes the request,
* the CLI gets the response and displays it to you.

All communication with the Docker Engine happens over the API.

This will also allow us to use remote Engines exactly as if they were local.
---

## Important PSA about security

* If you have access to the Docker control socket, you can take over the machine

  (Because you can run containers that will access the machine's resources)

* Therefore, on Linux machines, the `docker` user is equivalent to `root`

* You should restrict access to it like you would protect `root`

* By default, the Docker control socket belongs to the `docker` group

* You can add trusted users to the `docker` group

* Otherwise, you will have to prefix every `docker` command with `sudo`, e.g.:

```bash
sudo docker version
```
87
slides/containers/Labels.md
Normal file
@@ -0,0 +1,87 @@
# Labels

* Labels allow us to attach arbitrary metadata to containers.

* Labels are key/value pairs.

* They are specified at container creation.

* You can query them with `docker inspect`.

* They can also be used as filters with some commands (e.g. `docker ps`).

---

## Using labels

Let's create a few containers with a label `owner`.

```bash
docker run -d -l owner=alice nginx
docker run -d -l owner=bob nginx
docker run -d -l owner nginx
```

We didn't specify a value for the `owner` label in the last example.

This is equivalent to setting the value to be an empty string.
---

## Querying labels

We can view the labels with `docker inspect`.

```bash
$ docker inspect $(docker ps -lq) | grep -A3 Labels
            "Labels": {
                "maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>",
                "owner": ""
            },
```

We can use the `--format` flag to list the value of a label.

```bash
$ docker inspect $(docker ps -q) --format 'OWNER={{.Config.Labels.owner}}'
```

---

## Using labels to select containers

We can list containers having a specific label.

```bash
$ docker ps --filter label=owner
```

Or we can list containers having a specific label with a specific value.

```bash
$ docker ps --filter label=owner=alice
```
---

## Use-cases for labels

* HTTP vhost of a web app or web service.

  (The label is used to generate the configuration for NGINX, HAProxy, etc.)

* Backup schedule for a stateful service.

  (The label is used by a cron job to determine if/when to back up container data.)

* Service ownership.

  (To determine internal cross-billing, or who to page in case of outage.)

* etc.

???

:EN:- Using labels to identify containers
:FR:- Étiqueter ses conteneurs avec des méta-données
451
slides/containers/Local_Development_Workflow.md
Normal file
@@ -0,0 +1,451 @@
class: title

# Local development workflow with Docker

![Construction site](images/title-local-development-workflow-with-docker.jpg)

---

## Objectives

At the end of this section, you will be able to:

* Share code between container and host.

* Use a simple local development workflow.

---

## Local development in a container

We want to solve the following issues:

- "Works on my machine"

- "Not the same version"

- "Missing dependency"

By using Docker containers, we will get a consistent development environment.
---

## Working on the "namer" application

* We have to work on some application whose code is at:

  https://github.com/jpetazzo/namer.

* What is it? We don't know yet!

* Let's download the code.

```bash
$ git clone https://github.com/jpetazzo/namer
```

---

## Looking at the code

```bash
$ cd namer
$ ls -1
company_name_generator.rb
config.ru
docker-compose.yml
Dockerfile
Gemfile
```

--

Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe?
---

## Looking at the `Dockerfile`

```dockerfile
FROM ruby

COPY . /src
WORKDIR /src
RUN bundler install

CMD ["rackup", "--host", "0.0.0.0"]
EXPOSE 9292
```

* This application is using a base `ruby` image.
* The code is copied in `/src`.
* Dependencies are installed with `bundler`.
* The application is started with `rackup`.
* It is listening on port 9292.
---

## Building and running the "namer" application

* Let's build the application with the `Dockerfile`!

--

```bash
$ docker build -t namer .
```

--

* Then run it. *We need to expose its ports.*

--

```bash
$ docker run -dP namer
```

--

* Check on which port the container is listening.

--

```bash
$ docker ps -l
```

---

## Connecting to our application

* Point our browser to our Docker node, on the port allocated to the container.

--

* Hit "reload" a few times.

--

* This is an enterprise-class, carrier-grade, ISO-compliant company name generator!

  (With 50% more bullshit than the average competition!)

  (Wait, was that 50% more, or 50% less? *Anyway!*)

![web application 1](images/webapp-in-blue.png)
---
|
||||
|
||||
## Making changes to the code
|
||||
|
||||
Option 1:
|
||||
|
||||
* Edit the code locally
|
||||
* Rebuild the image
|
||||
* Re-run the container
|
||||
|
||||
Option 2:
|
||||
|
||||
* Enter the container (with `docker exec`)
|
||||
* Install an editor
|
||||
* Make changes from within the container
|
||||
|
||||
Option 3:
|
||||
|
||||
* Use a *bind mount* to share local files with the container
|
||||
* Make changes locally
|
||||
* Changes are reflected in the container
|
||||
|
||||
---
|
||||
|
||||
## Our first volume
|
||||
|
||||
We will tell Docker to map the current directory to `/src` in the container.
|
||||
|
||||
```bash
|
||||
$ docker run -d -v $(pwd):/src -P namer
|
||||
```
|
||||
|
||||
* `-d`: the container should run in detached mode (in the background).
|
||||
|
||||
* `-v`: the following host directory should be mounted inside the container.
|
||||
|
||||
* `-P`: publish all the ports exposed by this image.
|
||||
|
||||
* `namer` is the name of the image we will run.
|
||||
|
||||
* We don't specify a command to run because it is already set in the Dockerfile via `CMD`.
|
||||
|
||||
Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell).
|
||||
|
||||
---
|
||||
|
||||
## Mounting volumes inside containers
|
||||
|
||||
The `-v` flag mounts a directory from your host into your Docker container.
|
||||
|
||||
The flag structure is:
|
||||
|
||||
```bash
|
||||
[host-path]:[container-path]:[rw|ro]
|
||||
```
|
||||
|
||||
* `[host-path]` and `[container-path]` are created if they don't exist.
|
||||
|
||||
* You can control the write status of the volume with the `ro` and
|
||||
`rw` options.
|
||||
|
||||
* If you don't specify `rw` or `ro`, it will be `rw` by default.
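For instance, to make sure the container can read our source code but not modify it, we could mount it read-only (a hypothetical variant of the earlier `namer` command):

```bash
# Same as before, but with the :ro suffix: writes to /src
# from inside the container will now fail.
$ docker run -d -v $(pwd):/src:ro -P namer
```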
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Hold your horses... and your mounts
|
||||
|
||||
- The `-v /path/on/host:/path/in/container` syntax is the "old" syntax
|
||||
|
||||
- The modern syntax looks like this:
|
||||
|
||||
`--mount type=bind,source=/path/on/host,target=/path/in/container`
|
||||
|
||||
- `--mount` is more explicit, but `-v` is quicker to type
|
||||
|
||||
- `--mount` supports all mount types; `-v` doesn't support `tmpfs` mounts
|
||||
|
||||
- `--mount` fails if the path on the host doesn't exist; `-v` creates it
|
||||
|
||||
With the new syntax, our command becomes:
|
||||
```bash
|
||||
docker run --mount=type=bind,source=$(pwd),target=/src -dP namer
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Testing the development container
|
||||
|
||||
* Check the port used by our new container.
|
||||
|
||||
```bash
|
||||
$ docker ps -l
|
||||
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
|
||||
045885b68bc5 namer rackup 3 seconds ago Up ... 0.0.0.0:32770->9292/tcp ...
|
||||
```
|
||||
|
||||
* Open the application in your web browser.
|
||||
|
||||
---
|
||||
|
||||
## Making a change to our application
|
||||
|
||||
Our customer really doesn't like the color of our text. Let's change it.
|
||||
|
||||
```bash
|
||||
$ vi company_name_generator.rb
|
||||
```
|
||||
|
||||
And change
|
||||
|
||||
```css
|
||||
color: royalblue;
|
||||
```
|
||||
|
||||
To:
|
||||
|
||||
```css
|
||||
color: red;
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Viewing our changes
|
||||
|
||||
* Reload the application in our browser.
|
||||
|
||||
--
|
||||
|
||||
* The color should have changed.
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Understanding volumes
|
||||
|
||||
- Volumes are *not* copying or synchronizing files between the host and the container
|
||||
|
||||
- Changes made on the host are immediately visible in the container (and vice versa)
|
||||
|
||||
- When running on Linux:
|
||||
|
||||
- volumes and bind mounts correspond to directories on the host
|
||||
|
||||
- if Docker runs in a Linux VM, these directories are in the Linux VM
|
||||
|
||||
- When running on Docker Desktop:
|
||||
|
||||
- volumes correspond to directories in a small Linux VM running Docker
|
||||
|
||||
- access to bind mounts is translated to host filesystem access
|
||||
<br/>
|
||||
(a bit like a network filesystem)
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Docker Desktop caveats
|
||||
|
||||
- When running Docker natively on Linux, accessing a mount = native I/O
|
||||
|
||||
- When running Docker Desktop, accessing a bind mount = file access translation
|
||||
|
||||
- That file access translation has relatively good performance *in general*
|
||||
|
||||
(watch out, however, for that big `npm install` working on a bind mount!)
|
||||
|
||||
- There are some corner cases when watching files (with mechanisms like inotify)
|
||||
|
||||
- Features like "live reload" or programs like `entr` don't always behave properly
|
||||
|
||||
(due to e.g. file attribute caching, and other interesting details!)
|
||||
|
||||
---
|
||||
|
||||
## Trash your servers and burn your code
|
||||
|
||||
*(This is the title of a
|
||||
[2013 blog post][immutable-deployments]
|
||||
by Chad Fowler, where he explains the concept of immutable infrastructure.)*
|
||||
|
||||
[immutable-deployments]: https://web.archive.org/web/20160305073617/http://chadfowler.com/blog/2013/06/23/immutable-deployments/
|
||||
|
||||
--
|
||||
|
||||
* Let's majorly mess up our container.
|
||||
|
||||
(Remove files or whatever.)
|
||||
|
||||
* Now, how can we fix this?
|
||||
|
||||
--
|
||||
|
||||
* Our old container (with the blue version of the code) is still running.
|
||||
|
||||
* See on which port it is exposed:
|
||||
```bash
|
||||
docker ps
|
||||
```
|
||||
|
||||
* Point our browser to it to confirm that it still works fine.
|
||||
|
||||
---
|
||||
|
||||
## Immutable infrastructure in a nutshell
|
||||
|
||||
* Instead of *updating* a server, we deploy a new one.
|
||||
|
||||
* This might be challenging with classical servers, but it's trivial with containers.
|
||||
|
||||
* In fact, with Docker, the most logical workflow is to build a new image and run it.
|
||||
|
||||
* If something goes wrong with the new image, we can always restart the old one.
|
||||
|
||||
* We can even keep both versions running side by side.
|
||||
|
||||
If this pattern sounds interesting, you might want to read about *blue/green deployment*
|
||||
and *canary deployments*.
|
||||
|
||||
---
|
||||
|
||||
## Recap of the development workflow
|
||||
|
||||
1. Write a Dockerfile to build an image containing our development environment.
|
||||
<br/>
|
||||
(Rails, Django, ... and all the dependencies for our app)
|
||||
|
||||
2. Start a container from that image.
|
||||
<br/>
|
||||
Use the `-v` flag to mount our source code inside the container.
|
||||
|
||||
3. Edit the source code outside the container, using familiar tools.
|
||||
<br/>
|
||||
(vim, emacs, textmate...)
|
||||
|
||||
4. Test the application.
|
||||
<br/>
|
||||
(Some frameworks pick up changes automatically.
|
||||
<br/>Others require you to Ctrl-C + restart after each modification.)
|
||||
|
||||
5. Iterate and repeat steps 3 and 4 until satisfied.
|
||||
|
||||
6. When done, commit+push source code changes.
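Put together, the workflow above might look like this (image name and paths are illustrative):

```bash
# 1-2. Build the dev image and start a container with our code mounted
$ docker build -t myapp-dev .
$ docker run -d -v $(pwd):/src -P myapp-dev

# 3-4. Edit the code locally, test in the browser, repeat...

# 6. When satisfied, commit and push
$ git add -A
$ git commit -m "Change text color"
$ git push
```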
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Debugging inside the container
|
||||
|
||||
Docker has a command called `docker exec`.
|
||||
|
||||
It allows users to run a new process in a container which is already running.
|
||||
|
||||
If you sometimes find yourself wishing you could SSH into a container, use `docker exec` instead.
|
||||
|
||||
You can get a shell prompt inside an existing container this way, or run an arbitrary process for automation.
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## `docker exec` example
|
||||
|
||||
```bash
|
||||
$ # You can run ruby commands in the area the app is running and more!
|
||||
$ docker exec -it <yourContainerId> bash
|
||||
root@5ca27cf74c2e:/opt/namer# irb
|
||||
irb(main):001:0> [0, 1, 2, 3, 4].map {|x| x ** 2}.compact
|
||||
=> [0, 1, 4, 9, 16]
|
||||
irb(main):002:0> exit
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
class: extra-details
|
||||
|
||||
## Stopping the container
|
||||
|
||||
Now that we're done, let's stop our container.
|
||||
|
||||
```bash
|
||||
$ docker stop <yourContainerID>
|
||||
```
|
||||
|
||||
And remove it.
|
||||
|
||||
```bash
|
||||
$ docker rm <yourContainerID>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Section summary
|
||||
|
||||
We've learned how to:
|
||||
|
||||
* Share code between container and host.
|
||||
|
||||
* Set our working directory.
|
||||
|
||||
* Use a simple local development workflow.
|
||||
|
||||
???
|
||||
|
||||
:EN:Developing with containers
|
||||
:EN:- “Containerize” a development environment
|
||||
|
||||
:FR:Développer au jour le jour
|
||||
:FR:- « Containeriser » son environnement de développement
|
||||
298
slides/containers/Logging.md
Normal file
|
||||
# Logging
|
||||
|
||||
In this chapter, we will explain the different ways to send logs from containers.
|
||||
|
||||
We will then show one particular method in action, using ELK and Docker's logging drivers.
|
||||
|
||||
---
|
||||
|
||||
## There are many ways to send logs
|
||||
|
||||
- The simplest method is to write on the standard output and error.
|
||||
|
||||
- Applications can write their logs to local files.
|
||||
|
||||
(The files are usually periodically rotated and compressed.)
|
||||
|
||||
- It is also very common (on UNIX systems) to use syslog.
|
||||
|
||||
(The logs are collected by syslogd or an equivalent like journald.)
|
||||
|
||||
- In large applications with many components, it is common to use a logging service.
|
||||
|
||||
(The code uses a library to send messages to the logging service.)
|
||||
|
||||
*All these methods are available with containers.*
|
||||
|
||||
---
|
||||
|
||||
## Writing on stdout/stderr
|
||||
|
||||
- The standard output and error of containers is managed by the container engine.
|
||||
|
||||
- This means that each line written by the container is received by the engine.
|
||||
|
||||
- The engine can then do "whatever" with these log lines.
|
||||
|
||||
- With Docker, the default configuration is to write the logs to local files.
|
||||
|
||||
- The files can then be queried with e.g. `docker logs` (and the equivalent API request).
|
||||
|
||||
- This can be customized, as we will see later.
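For example, assuming a container named `www` (the name is illustrative), we could view the last few log lines and keep following the output with:

```bash
# --tail limits how much history is shown; --follow streams new lines
$ docker logs --tail 10 --follow www
```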
|
||||
|
||||
---
|
||||
|
||||
## Writing to local files
|
||||
|
||||
- If we write to files, it is possible to access them but cumbersome.
|
||||
|
||||
(We have to use `docker exec` or `docker cp`.)
|
||||
|
||||
- Furthermore, if the container is stopped, we cannot use `docker exec`.
|
||||
|
||||
- If the container is deleted, the logs disappear.
|
||||
|
||||
- What should we do for programs that can only log to local files?
|
||||
|
||||
--
|
||||
|
||||
- There are multiple solutions.
|
||||
|
||||
---
|
||||
|
||||
## Using a volume or bind mount
|
||||
|
||||
- Instead of writing logs to a normal directory, we can place them on a volume.
|
||||
|
||||
- The volume can be accessed by other containers.
|
||||
|
||||
- We can run a program like `filebeat` in another container accessing the same volume.
|
||||
|
||||
(`filebeat` reads local log files continuously, like `tail -f`, and sends them
|
||||
to a centralized system like ElasticSearch.)
|
||||
|
||||
- We can also use a bind mount, e.g. `-v /var/log/containers/www:/var/log/tomcat`.
|
||||
|
||||
- The container will write log files to a directory mapped to a host directory.
|
||||
|
||||
- The log files will appear on the host and be consumable directly from the host.
|
||||
|
||||
---
|
||||
|
||||
## Using logging services
|
||||
|
||||
- We can use logging frameworks (like log4j or the Python `logging` package).
|
||||
|
||||
- These frameworks require some code and/or configuration in our application code.
|
||||
|
||||
- These mechanisms can be used identically inside or outside of containers.
|
||||
|
||||
- Sometimes, we can leverage containerized networking to simplify their setup.
|
||||
|
||||
- For instance, our code can send log messages to a server named `log`.
|
||||
|
||||
- The name `log` will resolve to different addresses in development, production, etc.
|
||||
|
||||
---
|
||||
|
||||
## Using syslog
|
||||
|
||||
- What if our code (or the program we are running in containers) uses syslog?
|
||||
|
||||
- One possibility is to run a syslog daemon in the container.
|
||||
|
||||
- Then that daemon can be set up to write to local files or forward to the network.
|
||||
|
||||
- Under the hood, syslog clients connect to a local UNIX socket, `/dev/log`.
|
||||
|
||||
- We can expose a syslog socket to the container (by using a volume or bind-mount).
|
||||
|
||||
- Then just create a symlink from `/dev/log` to the syslog socket.
|
||||
|
||||
- Voilà!
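A sketch of that approach (the image name is illustrative, and the host must be running a syslog daemon listening on `/dev/log`):

```bash
# Bind-mount the host's syslog socket directly at /dev/log,
# so syslog clients in the container reach the host's daemon.
$ docker run -v /dev/log:/dev/log myimage
```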
|
||||
|
||||
---
|
||||
|
||||
## Using logging drivers
|
||||
|
||||
- If we log to stdout and stderr, the container engine receives the log messages.
|
||||
|
||||
- The Docker Engine has a modular logging system with many plugins, including:
|
||||
|
||||
- json-file (the default one)
|
||||
- syslog
|
||||
- journald
|
||||
- gelf
|
||||
- fluentd
|
||||
- splunk
|
||||
- etc.
|
||||
|
||||
- Each plugin can process and forward the logs to another process or system.
|
||||
|
||||
---
|
||||
|
||||
## A word of warning about `json-file`
|
||||
|
||||
- By default, log file size is unlimited.
|
||||
|
||||
- This means that a very verbose container *will* use up all your disk space.
|
||||
|
||||
(Or a less verbose container, but running for a very long time.)
|
||||
|
||||
- Log rotation can be enabled by setting a `max-size` option.
|
||||
|
||||
- Older log files can be removed by setting a `max-file` option.
|
||||
|
||||
- Just like other logging options, these can be set per container, or globally.
|
||||
|
||||
Example:
|
||||
```bash
|
||||
$ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch
|
||||
```
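To apply these options globally instead, they can go in the Engine's configuration file, typically `/etc/docker/daemon.json` (a minimal sketch; the daemon must be restarted for it to take effect, and it only applies to containers created afterwards):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```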
|
||||
|
||||
---
|
||||
|
||||
## Demo: sending logs to ELK
|
||||
|
||||
- We are going to deploy an ELK stack.
|
||||
|
||||
- It will accept logs over a GELF socket.
|
||||
|
||||
- We will run a few containers with the `gelf` logging driver.
|
||||
|
||||
- We will then see our logs in Kibana, the web interface provided by ELK.
|
||||
|
||||
*Important foreword: this is not an "official" or "recommended"
|
||||
setup; it is just an example. We used ELK in this demo because
|
||||
it's a popular setup and we keep being asked about it; but you
|
||||
will have equal success with Fluent or other logging stacks!*
|
||||
|
||||
---
|
||||
|
||||
## What's in an ELK stack?
|
||||
|
||||
- ELK is three components:
|
||||
|
||||
- ElasticSearch (to store and index log entries)
|
||||
|
||||
- Logstash (to receive log entries from various
|
||||
sources, process them, and forward them to various
|
||||
destinations)
|
||||
|
||||
- Kibana (to view/search log entries with a nice UI)
|
||||
|
||||
- The only component that we will configure is Logstash
|
||||
|
||||
- We will accept log entries using the GELF protocol
|
||||
|
||||
- Log entries will be stored in ElasticSearch,
|
||||
<br/>and displayed on Logstash's stdout for debugging
|
||||
|
||||
---
|
||||
|
||||
## Running ELK
|
||||
|
||||
- We are going to use a Compose file describing the ELK stack.
|
||||
|
||||
- The Compose file is in the container.training repository on GitHub.
|
||||
|
||||
```bash
|
||||
$ git clone https://github.com/jpetazzo/container.training
|
||||
$ cd container.training
|
||||
$ cd elk
|
||||
$ docker-compose up
|
||||
```
|
||||
|
||||
- Let's have a look at the Compose file while it's deploying.
|
||||
|
||||
---
|
||||
|
||||
## Our basic ELK deployment
|
||||
|
||||
- We are using images from the Docker Hub: `elasticsearch`, `logstash`, `kibana`.
|
||||
|
||||
- We don't need to change the configuration of ElasticSearch.
|
||||
|
||||
- We need to tell Kibana the address of ElasticSearch:
|
||||
|
||||
- it is set with the `ELASTICSEARCH_URL` environment variable,
|
||||
|
||||
- by default it is `localhost:9200`, we change it to `elasticsearch:9200`.
|
||||
|
||||
- We need to configure Logstash:
|
||||
|
||||
- we pass the entire configuration file through command-line arguments,
|
||||
|
||||
- this is a hack so that we don't have to create an image just for the config.
|
||||
|
||||
---
|
||||
|
||||
## Sending logs to ELK
|
||||
|
||||
- The ELK stack accepts log messages through a GELF socket.
|
||||
|
||||
- The GELF socket listens on UDP port 12201.
|
||||
|
||||
- To send a message, we need to change the logging driver used by Docker.
|
||||
|
||||
- This can be done globally (by reconfiguring the Engine) or on a per-container basis.
|
||||
|
||||
- Let's override the logging driver for a single container:
|
||||
|
||||
```bash
|
||||
$ docker run --log-driver=gelf --log-opt=gelf-address=udp://localhost:12201 \
|
||||
alpine echo hello world
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Viewing the logs in ELK
|
||||
|
||||
- Connect to the Kibana interface.
|
||||
|
||||
- It is exposed on port 5601.
|
||||
|
||||
- Browse http://X.X.X.X:5601.
|
||||
|
||||
---
|
||||
|
||||
## "Configuring" Kibana
|
||||
|
||||
- Kibana should prompt you to "Configure an index pattern":
|
||||
<br/>in the "Time-field name" drop down, select "@timestamp", and hit the
|
||||
"Create" button.
|
||||
|
||||
- Then:
|
||||
|
||||
- click "Discover" (in the top-left corner),
|
||||
- click "Last 15 minutes" (in the top-right corner),
|
||||
- click "Last 1 hour" (in the list in the middle),
|
||||
- click "Auto-refresh" (top-right corner),
|
||||
- click "5 seconds" (top-left of the list).
|
||||
|
||||
- You should see a series of green bars (with one new green bar every minute).
|
||||
|
||||
- Our 'hello world' message should be visible there.
|
||||
|
||||
---
|
||||
|
||||
## Important afterword
|
||||
|
||||
**This is not a "production-grade" setup.**
|
||||
|
||||
It is just an educational example. Since we have only
|
||||
one node, we set up a single
|
||||
ElasticSearch instance and a single Logstash instance.
|
||||
|
||||
In a production setup, you need an ElasticSearch cluster
|
||||
(both for capacity and availability reasons). You also
|
||||
need multiple Logstash instances.
|
||||
|
||||
And if you want to withstand
|
||||
bursts of logs, you need some kind of message queue:
|
||||
Redis if you're cheap, Kafka if you want to make sure
|
||||
that you don't drop messages on the floor. Good luck.
|
||||
|
||||
If you want to learn more about the GELF driver,
|
||||
have a look at [this blog post](
|
||||
https://jpetazzo.github.io/2017/01/20/docker-logging-gelf/).
|
||||
322
slides/containers/Multi_Stage_Builds.md
Normal file
|
||||
# Reducing image size
|
||||
|
||||
* In the previous example, our final image contained:
|
||||
|
||||
* our `hello` program
|
||||
|
||||
* its source code
|
||||
|
||||
* the compiler
|
||||
|
||||
* Only the first one is strictly necessary.
|
||||
|
||||
* We are going to see how to obtain an image without the superfluous components.
|
||||
|
||||
---
|
||||
|
||||
## Can't we remove superfluous files with `RUN`?
|
||||
|
||||
What happens if we do one of the following commands?
|
||||
|
||||
- `RUN rm -rf ...`
|
||||
|
||||
- `RUN apt-get remove ...`
|
||||
|
||||
- `RUN make clean ...`
|
||||
|
||||
--
|
||||
|
||||
This adds a layer which removes a bunch of files.
|
||||
|
||||
But the previous layers (which added the files) still exist.
|
||||
|
||||
---
|
||||
|
||||
## Removing files with an extra layer
|
||||
|
||||
When downloading an image, all the layers must be downloaded.
|
||||
|
||||
| Dockerfile instruction | Layer size | Image size |
|
||||
| ---------------------- | ---------- | ---------- |
|
||||
| `FROM ubuntu` | Size of base image | Size of base image |
|
||||
| `...` | ... | Sum of this layer <br/>+ all previous ones |
|
||||
| `RUN apt-get install somepackage` | Size of files added <br/>(e.g. a few MB) | Sum of this layer <br/>+ all previous ones |
|
||||
| `...` | ... | Sum of this layer <br/>+ all previous ones |
|
||||
| `RUN apt-get remove somepackage` | Almost zero <br/>(just metadata) | Same as previous one |
|
||||
|
||||
Therefore, `RUN rm` does not reduce the size of the image or free up disk space.
|
||||
|
||||
---
|
||||
|
||||
## Removing unnecessary files
|
||||
|
||||
Various techniques are available to obtain smaller images:
|
||||
|
||||
- collapsing layers,
|
||||
|
||||
- adding binaries that are built outside of the Dockerfile,
|
||||
|
||||
- squashing the final image,
|
||||
|
||||
- multi-stage builds.
|
||||
|
||||
Let's review them quickly.
|
||||
|
||||
---
|
||||
|
||||
## Collapsing layers
|
||||
|
||||
You will frequently see Dockerfiles like this:
|
||||
|
||||
```dockerfile
|
||||
FROM ubuntu
|
||||
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
|
||||
```
|
||||
|
||||
Or the (more readable) variant:
|
||||
|
||||
```dockerfile
|
||||
FROM ubuntu
|
||||
RUN apt-get update \
|
||||
&& apt-get install xxx \
|
||||
&& ... \
|
||||
&& apt-get remove xxx \
|
||||
&& ...
|
||||
```
|
||||
|
||||
This `RUN` command gives us a single layer.
|
||||
|
||||
The files that are added, then removed in the same layer, do not grow the layer size.
|
||||
|
||||
---
|
||||
|
||||
## Collapsing layers: pros and cons
|
||||
|
||||
Pros:
|
||||
|
||||
- works on all versions of Docker
|
||||
|
||||
- doesn't require extra tools
|
||||
|
||||
Cons:
|
||||
|
||||
- not very readable
|
||||
|
||||
- some unnecessary files might still remain if the cleanup is not thorough
|
||||
|
||||
- that layer is expensive (slow to build)
|
||||
|
||||
---
|
||||
|
||||
## Building binaries outside of the Dockerfile
|
||||
|
||||
This results in a Dockerfile looking like this:
|
||||
|
||||
```dockerfile
|
||||
FROM ubuntu
|
||||
COPY xxx /usr/local/bin
|
||||
```
|
||||
|
||||
Of course, this implies that the file `xxx` exists in the build context.
|
||||
|
||||
That file has to exist before you can run `docker build`.
|
||||
|
||||
For instance, it can:
|
||||
|
||||
- exist in the code repository,
|
||||
- be created by another tool (script, Makefile...),
|
||||
- be created by another container image and extracted from the image.
|
||||
|
||||
See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).
|
||||
|
||||
---
|
||||
|
||||
## Building binaries outside: pros and cons
|
||||
|
||||
Pros:
|
||||
|
||||
- final image can be very small
|
||||
|
||||
Cons:
|
||||
|
||||
- requires an extra build tool
|
||||
|
||||
- we're back in dependency hell and "works on my machine"
|
||||
|
||||
Cons, if binary is added to code repository:
|
||||
|
||||
- breaks portability across different platforms
|
||||
|
||||
- grows repository size a lot if the binary is updated frequently
|
||||
|
||||
---
|
||||
|
||||
## Squashing the final image
|
||||
|
||||
The idea is to transform the final image into a single-layer image.
|
||||
|
||||
This can be done in (at least) two ways.
|
||||
|
||||
- Activate experimental features and squash the final image:
|
||||
```bash
|
||||
docker image build --squash ...
|
||||
```
|
||||
|
||||
- Export/import the final image.
|
||||
```bash
|
||||
docker build -t temp-image .
|
||||
docker run --entrypoint true --name temp-container temp-image
|
||||
docker export temp-container | docker import - final-image
|
||||
docker rm temp-container
|
||||
docker rmi temp-image
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Squashing the image: pros and cons
|
||||
|
||||
Pros:
|
||||
|
||||
- single-layer images are smaller and faster to download
|
||||
|
||||
- removed files no longer take up storage and network resources
|
||||
|
||||
Cons:
|
||||
|
||||
- we still need to actively remove unnecessary files
|
||||
|
||||
- squash operation can take a lot of time (on big images)
|
||||
|
||||
- squash operation does not benefit from cache
|
||||
<br/>
|
||||
(even if we change just a tiny file, the whole image needs to be re-squashed)
|
||||
|
||||
---
|
||||
|
||||
## Multi-stage builds
|
||||
|
||||
Multi-stage builds allow us to have multiple *stages*.
|
||||
|
||||
Each stage is a separate image, and can copy files from previous stages.
|
||||
|
||||
We're going to see how they work in more detail.
|
||||
|
||||
---
|
||||
|
||||
# Multi-stage builds
|
||||
|
||||
* At any point in our `Dockerfile`, we can add a new `FROM` line.
|
||||
|
||||
* This line starts a new stage of our build.
|
||||
|
||||
* Each stage can access the files of the previous stages with `COPY --from=...`.
|
||||
|
||||
* When a build is tagged (with `docker build -t ...`), the last stage is tagged.
|
||||
|
||||
* Previous stages are not discarded: they will be used for caching, and can be referenced.
|
||||
|
||||
---
|
||||
|
||||
## Multi-stage builds in practice
|
||||
|
||||
* Each stage is numbered, starting at `0`
|
||||
|
||||
* We can copy a file from a previous stage by indicating its number, e.g.:
|
||||
|
||||
```dockerfile
|
||||
COPY --from=0 /file/from/first/stage /location/in/current/stage
|
||||
```
|
||||
|
||||
* We can also name stages, and reference these names:
|
||||
|
||||
```dockerfile
|
||||
FROM golang AS builder
|
||||
RUN ...
|
||||
FROM alpine
|
||||
COPY --from=builder /go/bin/mylittlebinary /usr/local/bin/
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Multi-stage builds for our C program
|
||||
|
||||
We will change our Dockerfile to:
|
||||
|
||||
* give a nickname to the first stage: `compiler`
|
||||
|
||||
* add a second stage using the same `ubuntu` base image
|
||||
|
||||
* add the `hello` binary to the second stage
|
||||
|
||||
* make sure that `CMD` is in the second stage
|
||||
|
||||
The resulting Dockerfile is on the next slide.
|
||||
|
||||
---
|
||||
|
||||
## Multi-stage build `Dockerfile`
|
||||
|
||||
Here is the final Dockerfile:
|
||||
|
||||
```dockerfile
|
||||
FROM ubuntu AS compiler
|
||||
RUN apt-get update
|
||||
RUN apt-get install -y build-essential
|
||||
COPY hello.c /
|
||||
RUN make hello
|
||||
FROM ubuntu
|
||||
COPY --from=compiler /hello /hello
|
||||
CMD /hello
|
||||
```
|
||||
|
||||
Let's build it, and check that it works correctly:
|
||||
|
||||
```bash
|
||||
docker build -t hellomultistage .
|
||||
docker run hellomultistage
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Comparing single/multi-stage build image sizes
|
||||
|
||||
List our images with `docker images`, and check the size of:
|
||||
|
||||
- the `ubuntu` base image,
|
||||
|
||||
- the single-stage `hello` image,
|
||||
|
||||
- the multi-stage `hellomultistage` image.
|
||||
|
||||
We can achieve even smaller images if we use smaller base images.
|
||||
|
||||
However, if we use common base images (e.g. if we standardize on `ubuntu`),
|
||||
these common images will be pulled only once per node, so they are
|
||||
virtually "free."
|
||||
|
||||
---
|
||||
|
||||
## Build targets
|
||||
|
||||
* We can also tag an intermediary stage with the following command:
|
||||
```bash
|
||||
docker build --target STAGE --tag NAME .
|
||||
```
|
||||
|
||||
* This will create an image (named `NAME`) corresponding to stage `STAGE`
|
||||
|
||||
* This can be used to easily access an intermediary stage for inspection
|
||||
|
||||
(instead of parsing the output of `docker build` to find out the image ID)
|
||||
|
||||
* This can also be used to describe multiple images from a single Dockerfile
|
||||
|
||||
(instead of using multiple Dockerfiles, which could go out of sync)
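For instance, with the multi-stage Dockerfile shown earlier, we could materialize just the `compiler` stage (the tag name is illustrative):

```bash
# Build only up to the "compiler" stage, and tag the result
$ docker build --target compiler --tag hello-compiler .
```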
|
||||
|
||||
???
|
||||
|
||||
:EN:Optimizing our images and their build process
|
||||
:EN:- Leveraging multi-stage builds
|
||||
|
||||
:FR:Optimiser les images et leur construction
|
||||
:FR:- Utilisation d'un *multi-stage build*
|
||||
1175
slides/containers/Namespaces_Cgroups.md
Normal file
File diff suppressed because it is too large
141
slides/containers/Naming_And_Inspecting.md
Normal file
|
||||
|
||||
class: title
|
||||
|
||||
# Naming and inspecting containers
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Objectives
|
||||
|
||||
In this lesson, we will learn about an important
|
||||
Docker concept: container *naming*.
|
||||
|
||||
Naming allows us to:
|
||||
|
||||
* Easily reference a container.
|
||||
|
||||
* Ensure the uniqueness of a specific container.
|
||||
|
||||
We will also see the `inspect` command, which gives a lot of details about a container.
|
||||
|
||||
---
|
||||
|
||||
## Naming our containers
|
||||
|
||||
So far, we have referenced containers with their ID.
|
||||
|
||||
We have copy-pasted the ID, or used a shortened prefix.
|
||||
|
||||
But each container can also be referenced by its name.
|
||||
|
||||
If a container is named `thumbnail-worker`, I can do:
|
||||
|
||||
```bash
|
||||
$ docker logs thumbnail-worker
|
||||
$ docker stop thumbnail-worker
|
||||
etc.
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Default names
|
||||
|
||||
When we create a container, if we don't give a specific
|
||||
name, Docker will pick one for us.
|
||||
|
||||
It will be the concatenation of:
|
||||
|
||||
* A mood (furious, goofy, suspicious, boring...)
|
||||
|
||||
* The name of a famous inventor (tesla, darwin, wozniak...)
|
||||
|
||||
Examples: `happy_curie`, `clever_hopper`, `jovial_lovelace` ...
|
||||
|
||||
---
|
||||
|
||||
## Specifying a name
|
||||
|
||||
You can set the name of the container when you create it.
|
||||
|
||||
```bash
|
||||
$ docker run --name ticktock jpetazzo/clock
|
||||
```
|
||||
|
||||
If you specify a name that already exists, Docker will refuse
|
||||
to create the container.
|
||||
|
||||
This lets us enforce the uniqueness of a given resource.
|
||||
|
||||
---
|
||||
|
||||
## Renaming containers
|
||||
|
||||
* You can rename containers with `docker rename`.
|
||||
|
||||
* This allows you to "free up" a name without destroying the associated container.
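For example, assuming a container named `ticktock` (as created on the previous slide):

```bash
# Free up the name "ticktock" while keeping the container around
$ docker rename ticktock ticktock-old
```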
|
||||
|
||||
---
|
||||
|
||||
## Inspecting a container
|
||||
|
||||
The `docker inspect` command will output a very detailed JSON map.
|
||||
|
||||
```bash
|
||||
$ docker inspect <containerID>
|
||||
[{
|
||||
...
|
||||
(many pages of JSON here)
|
||||
...
|
||||
```
|
||||
|
||||
There are multiple ways to consume that information.
|
||||
|
||||
---
|
||||
|
||||
## Parsing JSON with the Shell
|
||||
|
||||
* You *could* grep and cut or awk the output of `docker inspect`.
|
||||
|
||||
* Please, don't.
|
||||
|
||||
* It's painful.
|
||||
|
||||
* If you really must parse JSON from the Shell, use JQ! (It's great.)
|
||||
|
||||
```bash
|
||||
$ docker inspect <containerID> | jq .
|
||||
```
|
||||
|
||||
* We will see a better solution which doesn't require extra tools.
|
||||
|
||||
---
|
||||
|
||||
## Using `--format`
|
||||
|
||||
You can specify a format string, which will be parsed by
|
||||
Go's text/template package.
|
||||
|
||||
```bash
|
||||
$ docker inspect --format '{{ json .Created }}' <containerID>
|
||||
"2015-02-24T07:21:11.712240394Z"
|
||||
```
|
||||
|
||||
* The generic syntax is to wrap the expression with double curly braces.
|
||||
|
||||
* The expression starts with a dot representing the JSON object.
|
||||
|
||||
* Then each field or member can be accessed in dotted notation syntax.
|
||||
|
||||
* The optional `json` keyword asks for valid JSON output.
|
||||
<br/>(e.g. here it adds the surrounding double-quotes.)
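
`--format` also works without the `json` keyword, to extract a single field as plain text. For instance, `.State.Running` and `.NetworkSettings.IPAddress` are fields of the `docker inspect` output (the values below are illustrative; yours will differ):

```bash
$ docker inspect --format '{{ .State.Running }}' <containerID>
true
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <containerID>
172.17.0.2
```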

???

:EN:Managing container lifecycle
:EN:- Naming and inspecting containers

:FR:Suivre ses conteneurs à la loupe
:FR:- Obtenir des informations détaillées sur un conteneur
:FR:- Associer un identifiant unique à un conteneur
95
slides/containers/Network_Drivers.md
Normal file
@@ -0,0 +1,95 @@
# Container network drivers

The Docker Engine supports different network drivers.

The built-in drivers include:

* `bridge` (default)

* `null` (for the special network called `none`)

* `host` (for the special network called `host`)

* `container` (that one is a bit magic!)

The network is selected with `docker run --net ...`.

Each network is managed by a driver.

The different drivers are explained in more detail in the following slides.

---

## The default bridge

* By default, the container gets a virtual `eth0` interface.
<br/>(In addition to its own private `lo` loopback interface.)

* That interface is provided by a `veth` pair.

* It is connected to the Docker bridge.
<br/>(Named `docker0` by default; configurable with `--bridge`.)

* Addresses are allocated on a private, internal subnet.
<br/>(Docker uses 172.17.0.0/16 by default; configurable with `--bip`.)

* Outbound traffic goes through an iptables MASQUERADE rule.

* Inbound traffic goes through an iptables DNAT rule.

* The container can have its own routes, iptables rules, etc.
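
This is easy to observe from inside a container. The exact address will vary, but it will be in the bridge's subnet:

```bash
$ docker run --rm alpine ip addr show eth0
```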

---

## The null driver

* Container is started with `docker run --net none ...`

* It only gets the `lo` loopback interface. No `eth0`.

* It can't send or receive network traffic.

* Useful for isolated/untrusted workloads.
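
A quick check (this should list only the `lo` interface):

```bash
$ docker run --rm --net none alpine ip link
```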

---

## The host driver

* Container is started with `docker run --net host ...`

* It sees (and can access) the network interfaces of the host.

* It can bind any address, any port (for good and for ill).

* Network traffic doesn't have to go through NAT, bridge, or veth.

* Performance = native!

Use cases:

* Performance sensitive applications (VOIP, gaming, streaming...)

* Peer discovery (e.g. Erlang port mapper, Raft, Serf...)
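
For example, running this (assuming port 80 is free on the host) makes nginx listen directly on the host's port 80, without any `-p` flag:

```bash
$ docker run --rm --net host nginx
```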

---

## The container driver

* Container is started with `docker run --net container:id ...`

* It re-uses the network stack of another container.

* It shares with this other container the same interfaces, IP address(es), routes, iptables rules, etc.

* Those containers can communicate over their `lo` interface.
<br/>(i.e. one can bind to 127.0.0.1 and the others can connect to it.)
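
A sketch of the mechanism (the name `web`, and the use of busybox `wget` in `alpine`, are just for illustration):

```bash
$ docker run -d --name web nginx
$ docker run --rm --net container:web alpine wget -qO- localhost:80
```

The second container reaches nginx over `localhost`, because both share the same network stack.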

???

:EN:Advanced container networking
:EN:- Transparent network access with the "host" driver
:EN:- Sharing is caring with the "container" driver

:FR:Paramétrage réseau avancé
:FR:- Accès transparent au réseau avec le mode "host"
:FR:- Partage de la pile réseau avec le mode "container"
442
slides/containers/Orchestration_Overview.md
Normal file
@@ -0,0 +1,442 @@
# Orchestration, an overview

In this chapter, we will:

* Explain what orchestration is and why we would need it.

* Present (from a high-level perspective) some orchestrators.

---

class: pic

## What's orchestration?

---

## What's orchestration?

According to Wikipedia:

*Orchestration describes the __automated__ arrangement,
coordination, and management of complex computer systems,
middleware, and services.*

--

*[...] orchestration is often discussed in the context of
__service-oriented architecture__, __virtualization__, provisioning,
Converged Infrastructure and __dynamic datacenter__ topics.*

--

What does that really mean?

---

## Example 1: dynamic cloud instances

--

- Q: do we always use 100% of our servers?

--

- A: obviously not!

.center[]

---

## Example 1: dynamic cloud instances

- Every night, scale down

  (by shutting down extraneous replicated instances)

- Every morning, scale up

  (by deploying new copies)

- "Pay for what you use"

  (i.e. save big $$$ here)

---

## Example 1: dynamic cloud instances

How do we implement this?

- Crontab

- Autoscaling (save even bigger $$$)

That's *relatively* easy.

Now, how are things for our IaaS provider?

---

## Example 2: dynamic datacenter

- Q: what's the #1 cost in a datacenter?

--

- A: electricity!

--

- Q: what uses electricity?

--

- A: servers, obviously

- A: ... and associated cooling

--

- Q: do we always use 100% of our servers?

--

- A: obviously not!

---

## Example 2: dynamic datacenter

- If only we could turn off unused servers during the night...

- Problem: we can only turn off a server if it's totally empty!

  (i.e. all VMs on it are stopped/moved)

- Solution: *migrate* VMs and shut down empty servers

  (e.g. combine two hypervisors with 40% load into 80%+0%,
  <br/>and shut down the one at 0%)

---

## Example 2: dynamic datacenter

How do we implement this?

- Shut down empty hosts (but keep some spare capacity)

- Start hosts again when capacity gets low

- Ability to "live migrate" VMs

  (Xen already did this 10+ years ago)

- Rebalance VMs on a regular basis

  - what if a VM is stopped while we move it?
  - should we allow provisioning on hosts involved in a migration?

*Scheduling* becomes more complex.

---

## What is scheduling?

According to Wikipedia (again):

*In computing, scheduling is the method by which threads,
processes or data flows are given access to system resources.*

The scheduler is concerned mainly with:

- throughput (total amount of work done per time unit);
- turnaround time (between submission and completion);
- response time (between submission and start);
- waiting time (between job readiness and execution);
- fairness (appropriate times according to priorities).

In practice, these goals often conflict.

**"Scheduling" = decide which resources to use.**

---

## Exercise 1

- You have:

  - 5 hypervisors (physical machines)

- Each server has:

  - 16 GB RAM, 8 cores, 1 TB disk

- Each week, your team requests:

  - one VM with X RAM, Y CPU, Z disk

Scheduling = deciding which hypervisor to use for each VM.

Difficulty: easy!
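
Even this easy case is a small instance of bin packing. The usual intuition is "first fit": put each VM on the first hypervisor that still has enough room, and open a new one when none fits. A minimal shell sketch (RAM only, with made-up sizes in GB):

```shell
#!/bin/sh
# First-fit scheduling sketch: place each VM on the first hypervisor
# that still has enough free RAM; open a new hypervisor when none fits.
# (Hypothetical sizes; real schedulers juggle RAM, CPU, and disk at once.)
first_fit() {
  capacity=$1; shift
  used=""                              # per-hypervisor RAM already allocated
  for vm in "$@"; do
    placed=no; new=""; i=0
    for u in $used; do
      i=$((i + 1))
      if [ "$placed" = no ] && [ $((u + vm)) -le "$capacity" ]; then
        new="$new $((u + vm))"; placed=yes
        echo "VM(${vm}G) -> hypervisor $i"
      else
        new="$new $u"
      fi
    done
    if [ "$placed" = no ]; then
      used="$new $vm"
      echo "VM(${vm}G) -> hypervisor $((i + 1)) (new)"
    else
      used="$new"
    fi
  done
}

first_fit 16 8 6 5 4 3
```

With these sizes, first fit packs the five VMs onto two 16 GB hypervisors.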

---

<!-- Warning, two almost identical slides (for img effect) -->

## Exercise 2

- You have:

  - 1000+ hypervisors (and counting!)

- Each server has different resources:

  - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk

- Multiple times a day, a different team asks for:

  - up to 50 VMs with different characteristics

Scheduling = deciding which hypervisor to use for each VM.

Difficulty: ???

---

<!-- Warning, two almost identical slides (for img effect) -->

## Exercise 2

- You have:

  - 1000+ hypervisors (and counting!)

- Each server has different resources:

  - 8-500 GB of RAM, 4-64 cores, 1-100 TB disk

- Multiple times a day, a different team asks for:

  - up to 50 VMs with different characteristics

Scheduling = deciding which hypervisor to use for each VM.

---

## Exercise 3

- You have machines (physical and/or virtual)

- You have containers

- You are trying to put the containers on the machines

- Sounds familiar?

---

class: pic

## Scheduling with one resource

.center[]

## We can't fit a job of size 6 :(

---

class: pic

## Scheduling with one resource

.center[]

## ... Now we can!

---

class: pic

## Scheduling with two resources

.center[]

---

class: pic

## Scheduling with three resources

.center[]

---

class: pic

## You need to be good at this

.center[]

---

class: pic

## But also, you must be quick!

.center[]

---

class: pic

## And be web scale!

.center[]

---

class: pic

## And think outside (?) of the box!

.center[]

---

class: pic

## Good luck!

.center[]

---

## TL;DR

* Scheduling with multiple resources (dimensions) is hard.

* Don't expect to solve the problem with a Tiny Shell Script.

* There are literally tons of research papers written on this.

---

## But our orchestrator also needs to manage ...

* Network connectivity (or filtering) between containers.

* Load balancing (external and internal).

* Failure recovery (if a node or a whole datacenter fails).

* Rolling out new versions of our applications.

  (Canary deployments, blue/green deployments...)

---

## Some orchestrators

We are going to briefly present a few orchestrators.

There is no "absolute best" orchestrator.

It depends on:

- your applications,

- your requirements,

- your pre-existing skills...

---

## Nomad

- Open Source project by HashiCorp.

- Arbitrary scheduler (not just for containers).

- Great if you want to schedule mixed workloads.

  (VMs, containers, processes...)

- Less integration with the rest of the container ecosystem.

---

## Mesos

- Open Source project in the Apache Foundation.

- Arbitrary scheduler (not just for containers).

- Two-level scheduler.

- Top-level scheduler acts as a resource broker.

- Second-level schedulers (aka "frameworks") obtain resources from the top-level one.

- Frameworks implement various strategies.

  (Marathon = long running processes; Chronos = run at intervals; ...)

- Commercial offering through DC/OS by Mesosphere.

---

## Rancher

- Rancher 1 offered a simple interface for Docker hosts.

- Rancher 2 is a complete management platform for Docker and Kubernetes.

- Technically not an orchestrator, but it's a popular option.

---

## Swarm

- Tightly integrated with the Docker Engine.

- Extremely simple to deploy and set up, even in multi-manager (HA) mode.

- Secure by default.

- Strongly opinionated:

  - smaller set of features,

  - easier to operate.

---

## Kubernetes

- Open Source project initiated by Google.

- Contributions from many other actors.

- *De facto* standard for container orchestration.

- Many deployment options; some of them very complex.

- Reputation: steep learning curve.

- Reality:

  - true, if we try to understand *everything*;

  - false, if we focus on what matters.

???

:EN:- Orchestration overview
:FR:- Survol de techniques d'orchestration
47
slides/containers/Pods_Anatomy.md
Normal file
@@ -0,0 +1,47 @@
# Container Super-structure

- Multiple orchestration platforms support some kind of container super-structure.

  (i.e., a construct or abstraction bigger than a single container.)

- For instance, on Kubernetes, this super-structure is called a *pod*.

- A pod is a group of containers (it could be a single container, too).

- These containers run together, on the same host.

  (A pod cannot straddle multiple hosts.)

- All the containers in a pod have the same IP address.

- How does that map to the Docker world?

---

class: pic

## Anatomy of a Pod

---

## Pods in Docker

- The containers inside a pod share the same network namespace.

  (Just like when using `docker run --net=container:<container_id>` with the CLI.)

- As a result, they can communicate together over `localhost`.

- In addition to "our" containers, the pod has a special container, the *sandbox*.

- That container uses a special image: `k8s.gcr.io/pause`.

  (This is visible when listing containers running on a Kubernetes node.)

- Containers within a pod have independent filesystems.

- They can share directories by using a mechanism called *volumes*.

  (Which is similar to the concept of volumes in Docker.)
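
The sandbox pattern can be approximated with plain Docker, combining the `pause` image with the `container` network driver (a sketch; the names `sandbox` and `app` are made up):

```bash
$ docker run -d --name sandbox k8s.gcr.io/pause
$ docker run -d --name app --net container:sandbox nginx
$ docker run --rm --net container:sandbox alpine wget -qO- localhost:80
```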
129
slides/containers/Publishing_To_Docker_Hub.md
Normal file
@@ -0,0 +1,129 @@
# Publishing images to the Docker Hub

We have built our first images.

We can now publish them to the Docker Hub!

*You don't have to do the exercises in this section,
because they require an account on the Docker Hub, and we
don't want to force anyone to create one.*

*Note, however, that creating an account on the Docker Hub
is free (and doesn't require a credit card), and hosting
public images is free as well.*

---

## Logging into our Docker Hub account

* This can be done from the Docker CLI:
  ```bash
  docker login
  ```

.warning[When running Docker for Mac/Windows, or
Docker on a Linux workstation, it can (and will when
possible) integrate with your system's keyring to
store your credentials securely. However, on most Linux
servers, it will store your credentials in `~/.docker/config.json`.]

---

## Image tags and registry addresses

* Docker image tags are like Git tags and branches.

* They are like *bookmarks* pointing at a specific image ID.

* Tagging an image doesn't *rename* an image: it adds another tag.

* When pushing an image to a registry, the registry address is in the tag.

  Example: `registry.example.net:5000/image`

* What about Docker Hub images?

--

* `jpetazzo/clock` is, in fact, `index.docker.io/jpetazzo/clock`

* `ubuntu` is, in fact, `library/ubuntu`, i.e. `index.docker.io/library/ubuntu`
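
Putting this together: to push to a private registry, the registry address goes into the tag (reusing the example address above):

```bash
$ docker tag figlet registry.example.net:5000/figlet
$ docker push registry.example.net:5000/figlet
```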

---

## Tagging an image to push it on the Hub

* Let's tag our `figlet` image (or any other to our liking):
  ```bash
  docker tag figlet jpetazzo/figlet
  ```

* And push it to the Hub:
  ```bash
  docker push jpetazzo/figlet
  ```

* That's it!

--

* Anybody can now `docker run jpetazzo/figlet` anywhere.

---

## The goodness of automated builds

* You can link a Docker Hub repository with a GitHub or BitBucket repository

* Each push to GitHub or BitBucket will trigger a build on Docker Hub

* If the build succeeds, the new image is available on Docker Hub

* You can map tags and branches between source and container images

* If you work with public repositories, this is free

---

class: extra-details

## Setting up an automated build

* We need a Dockerized repository!
* Let's go to https://github.com/jpetazzo/trainingwheels and fork it.
* Go to the Docker Hub (https://hub.docker.com/) and sign in. Select "Repositories" in the blue navigation menu.
* Select "Create" in the top-right bar, and select "Create Repository+".
* Connect your Docker Hub account to your GitHub account.
* Click the "Create" button.
* Then go to the "Builds" tab.
* Click on the GitHub icon and select your user and the repository that we just forked.
* In the "Build rules" block near the bottom of the page, put `/www` in the "Build Context" column (or whichever directory the Dockerfile is in).
* Click "Save and Build" to build the repository immediately (without waiting for a git push).
* Subsequent builds will happen automatically, thanks to GitHub hooks.

---

## Building on the fly

- Some services can build images on the fly from a repository

- Example: [ctr.run](https://ctr.run/)

.lab[

- Use ctr.run to automatically build a container image and run it:
  ```bash
  docker run ctr.run/github.com/undefinedlabs/hello-world
  ```

]

There might be a long pause before the first layer is pulled,
because the API behind `docker pull` doesn't allow streaming build logs, and there is no feedback during the build.

It is possible to view the build logs by setting up an account on [ctr.run](https://ctr.run/).

???

:EN:- Publishing images to the Docker Hub
:FR:- Publier des images sur le Docker Hub
229
slides/containers/Resource_Limits.md
Normal file
@@ -0,0 +1,229 @@
# Limiting resources

- So far, we have used containers as convenient units of deployment.

- What happens when a container tries to use more resources than available?

  (RAM, CPU, disk usage, disk and network I/O...)

- What happens when multiple containers compete for the same resource?

- Can we limit resources available to a container?

  (Spoiler alert: yes!)

---

## Container processes are normal processes

- Containers are closer to "fancy processes" than to "lightweight VMs".

- A process running in a container is, in fact, a process running on the host.

- Let's look at the output of `ps` on a container host running 3 containers:

```
  0  2662  0.2  0.3  /usr/bin/dockerd -H fd://
  0  2766  0.1  0.1  \_ docker-containerd --config /var/run/docker/containe
  0 23479  0.0  0.0    \_ docker-containerd-shim -namespace moby -workdir
  0 23497  0.0  0.0    |  \_ `nginx`: master process nginx -g daemon off;
101 23543  0.0  0.0    |    \_ `nginx`: worker process
  0 23565  0.0  0.0    \_ docker-containerd-shim -namespace moby -workdir
102 23584  9.4 11.3    |  \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2
  0 23707  0.0  0.0    \_ docker-containerd-shim -namespace moby -workdir
  0 23725  0.0  0.0      \_ `/bin/sh`
```

- The highlighted processes are containerized processes.
  <br/>(That host is running nginx, elasticsearch, and alpine.)

---

## By default: nothing changes

- What happens when a process uses too much memory on a Linux system?

--

- Simplified answer:

  - swap is used (if available);

  - if there is not enough swap space, eventually, the out-of-memory killer is invoked;

  - the OOM killer uses heuristics to kill processes;

  - sometimes, it kills an unrelated process.

--

- What happens when a container uses too much memory?

- The same thing!

  (i.e., a process eventually gets killed, possibly in another container.)

---

## Limiting container resources

- The Linux kernel offers rich mechanisms to limit container resources.

- For memory usage, the mechanism is part of the *cgroup* subsystem.

- This subsystem allows limiting the memory for a process or a group of processes.

- A container engine leverages these mechanisms to limit memory for a container.

- The out-of-memory killer has a new behavior:

  - it runs when a container exceeds its allowed memory usage,

  - in that case, it only kills processes in that container.

---

## Limiting memory in practice

- The Docker Engine offers multiple flags to limit memory usage.

- The two most useful ones are `--memory` and `--memory-swap`.

- `--memory` limits the amount of physical RAM used by a container.

- `--memory-swap` limits the total amount (RAM+swap) used by a container.

- The memory limit can be expressed in bytes, or with a unit suffix.

  (e.g.: `--memory 100m` = 100 megabytes.)

- We will see two strategies: limiting RAM usage, or limiting both.

---

## Limiting RAM usage

Example:

```bash
docker run -ti --memory 100m python
```

If the container tries to use more than 100 MB of RAM, *and* swap is available:

- the container will not be killed,

- memory above 100 MB will be swapped out,

- in most cases, the app in the container will be slowed down (a lot).

If we run out of swap, the global OOM killer still intervenes.

---

## Limiting both RAM and swap usage

Example:

```bash
docker run -ti --memory 100m --memory-swap 100m python
```

If the container tries to use more than 100 MB of memory, it is killed.

On the other hand, the application will never be slowed down because of swap.
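
To watch a limit in action, `docker stats` shows memory usage against the limit (a sketch; `py` is an arbitrary name, and the reported numbers will vary):

```bash
$ docker run -d --name py --memory 100m --memory-swap 100m python sleep 1d
$ docker stats --no-stream py
```

The MEM USAGE / LIMIT column should show the 100 MiB cap instead of the host's total memory.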

---

## When to pick which strategy?

- Stateful services (like databases) will lose or corrupt data when killed

- Allow them to use swap space, but monitor swap usage

- Stateless services can usually be killed with little impact

- Limit their mem+swap usage, but monitor if they get killed

- Ultimately, this is no different from "do I want swap, and how much?"

---

## Limiting CPU usage

- There are no less than 3 ways to limit CPU usage:

  - setting a relative priority with `--cpu-shares`,

  - setting a CPU% limit with `--cpus`,

  - pinning a container to specific CPUs with `--cpuset-cpus`.

- They can be used separately or together.

---

## Setting relative priority

- Each container has a relative priority used by the Linux scheduler.

- By default, this priority is 1024.

- As long as CPU usage is not maxed out, this has no effect.

- When CPU usage is maxed out, each container receives CPU cycles in proportion to its relative priority.

- In other words: a container with `--cpu-shares 2048` will receive twice as many as the default.

---

## Setting a CPU% limit

- This setting will make sure that a container doesn't use more than a given % of CPU.

- The value is expressed in CPUs; therefore:

  `--cpus 0.1` means 10% of one CPU,

  `--cpus 1.0` means 100% of one whole CPU,

  `--cpus 10.0` means 10 entire CPUs.

---

## Pinning containers to CPUs

- On multi-core machines, it is possible to restrict the execution to a set of CPUs.

- Examples:

  `--cpuset-cpus 0` forces the container to run on CPU 0;

  `--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7;

  `--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11.

- This will not reserve the corresponding CPUs!

  (They might still be used by other containers, or uncontainerized processes.)
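
These flags compose. For instance, to cap a container at half a CPU, pinned to cores 0 and 1, with 256 MB of RAM (any image works; `alpine` is just an example):

```bash
$ docker run -ti --cpus 0.5 --cpuset-cpus 0-1 --memory 256m alpine
```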

---

## Limiting disk usage

- Most storage drivers do not support limiting the disk usage of containers.

  (With the exception of devicemapper, but the limit cannot be set easily.)

- This means that a single container could exhaust disk space for everyone.

- In practice, however, this is not a concern, because:

  - data files (for stateful services) should reside on volumes,

  - assets (e.g. images, user-generated content...) should reside on object stores or on volumes,

  - logs are written on standard output and gathered by the container engine.

- Container disk usage can be audited with `docker ps -s` and `docker diff`.
184
slides/containers/Start_And_Attach.md
Normal file
@@ -0,0 +1,184 @@
# Restarting and attaching to containers

We have started containers in the foreground, and in the background.

In this chapter, we will see how to:

* Put a container in the background.
* Attach to a background container to bring it to the foreground.
* Restart a stopped container.

---

## Background and foreground

The distinction between foreground and background containers is arbitrary.

From Docker's point of view, all containers are the same.

All containers run the same way, whether there is a client attached to them or not.

It is always possible to detach from a container, and to reattach to a container.

Analogy: attaching to a container is like plugging a keyboard and screen into a physical server.

---

## Detaching from a container (Linux/macOS)

* If you have started an *interactive* container (with option `-it`), you can detach from it.

* The "detach" sequence is `^P^Q`.

* Otherwise you can detach by killing the Docker client.

  (But not by hitting `^C`, as this would deliver `SIGINT` to the container.)

What does `-it` stand for?

* `-t` means "allocate a terminal."
* `-i` means "connect stdin to the terminal."

---

## Detaching cont. (Win PowerShell and cmd.exe)

* Docker for Windows has a different detach experience due to shell features.

* `^P^Q` does not work.

* `^C` will detach, rather than stop the container.

* Using Bash, the Windows Subsystem for Linux, etc. on Windows behaves like Linux/macOS shells.

* Both PowerShell and Bash work well in Windows 10; just be aware of the differences.

---

class: extra-details

## Specifying a custom detach sequence

* You don't like `^P^Q`? No problem!
* You can change the sequence with `docker run --detach-keys`.
* This can also be passed as a global option to the engine.

Start a container with a custom detach sequence:

```bash
$ docker run -ti --detach-keys ctrl-x,x jpetazzo/clock
```

Detach by hitting `^X x`. (This is ctrl-x then x, not ctrl-x twice!)

Check that our container is still running:

```bash
$ docker ps -l
```

---

class: extra-details

## Attaching to a container

You can attach to a container:

```bash
$ docker attach <containerID>
```

* The container must be running.
* There *can* be multiple clients attached to the same container.
* If you don't specify `--detach-keys` when attaching, it defaults back to `^P^Q`.

Try it on our previous container:

```bash
$ docker attach $(docker ps -lq)
```

Check that `^X x` doesn't work, but `^P ^Q` does.

---

## Detaching from non-interactive containers

* **Warning:** if the container was started without `-it`...

  * You won't be able to detach with `^P^Q`.
  * If you hit `^C`, the signal will be proxied to the container.

* Remember: you can always detach by killing the Docker client.

---

## Checking container output

* Use `docker attach` if you intend to send input to the container.

* If you just want to see the output of a container, use `docker logs`.

  ```bash
  $ docker logs --tail 1 --follow <containerID>
  ```

---

## Restarting a container

When a container has exited, it is in stopped state.

It can then be restarted with the `start` command.

```bash
$ docker start <yourContainerID>
```

The container will be restarted using the same options you launched it
with.

You can re-attach to it if you want to interact with it:

```bash
$ docker attach <yourContainerID>
```

Use `docker ps -a` to identify the container ID of a previous `jpetazzo/clock` container,
and try those commands.
|
||||
|
||||
---
|
||||
|
||||
## Attaching to a REPL
|
||||
|
||||
* REPL = Read Eval Print Loop
|
||||
|
||||
* Shells, interpreters, TUI ...
|
||||
|
||||
* Symptom: you `docker attach`, and see nothing
|
||||
|
||||
* The REPL doesn't know that you just attached, and doesn't print anything
|
||||
|
||||
* Try hitting `^L` or `Enter`
|
||||
|
||||
---
|
||||
|
||||
class: extra-details

## SIGWINCH

* When you `docker attach`, the Docker Engine sends SIGWINCH signals to the container.

* SIGWINCH = WINdow CHange; indicates a change in window size.

* This will cause some CLI and TUI programs to redraw the screen.

* But not all of them.
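As a sketch of what happens on the receiving end, here is how a program (in Python here) can install a SIGWINCH handler and react to the signal; we deliver the signal to ourselves to simulate an attach (POSIX only):

```python
import os
import signal

# A program that wants to redraw on window changes installs a handler.
# Here we just set a flag; a real TUI would repaint the screen.
resized = False

def on_winch(signum, frame):
    global resized
    resized = True

signal.signal(signal.SIGWINCH, on_winch)

# Simulate what the Docker Engine does on `docker attach`:
# deliver SIGWINCH to the (PID 1) process — here, ourselves.
os.kill(os.getpid(), signal.SIGWINCH)
print(resized)
```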
???

:EN:- Restarting old containers
:EN:- Detaching and reattaching to containers
:FR:- Redémarrer des anciens conteneurs
:FR:- Se détacher et rattacher à des conteneurs
153
slides/containers/Training_Environment.md
Normal file
@@ -0,0 +1,153 @@
class: title

# Our training environment

---

## Our training environment

- If you are attending a tutorial or workshop:

  - a VM has been provisioned for each student

- If you are doing or re-doing this course on your own, you can:

  - install Docker locally (as explained in the chapter "Installing Docker")

  - install Docker on e.g. a cloud VM

  - use https://www.play-with-docker.com/ to instantly get a training environment

---

## Our Docker VM

*This section assumes that you are following this course as part of
a tutorial, training or workshop, where each student is given an
individual Docker VM.*

- The VM is created just before the training.

- It will stay up during the whole training.

- It will be destroyed shortly after the training.

- It comes pre-loaded with Docker and some other useful tools.

---

## What *is* Docker?

- "Installing Docker" really means "Installing the Docker Engine and CLI".

- The Docker Engine is a daemon (a service running in the background).

- This daemon manages containers, the same way that a hypervisor manages VMs.

- We interact with the Docker Engine by using the Docker CLI.

- The Docker CLI and the Docker Engine communicate through an API.

- There are many other programs and client libraries which use that API.
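To illustrate that client/server API (a sketch, not the real engine): the Docker CLI talks HTTP over a Unix socket. The toy server below stands in for the engine and answers `/version`; the socket path and the version string are made up for the demo.

```python
import http.client
import json
import os
import socket
import socketserver
import tempfile
import threading
from http.server import BaseHTTPRequestHandler

class FakeEngineHandler(BaseHTTPRequestHandler):
    """Toy stand-in for the Docker Engine API (answers GET /version)."""
    def do_GET(self):
        if self.path == "/version":
            body = json.dumps({"Version": "18.03.0-ce", "ApiVersion": "1.37"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP client that connects to a Unix socket instead of host:port."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

def query_version(socket_path):
    conn = UnixHTTPConnection(socket_path)
    conn.request("GET", "/version")
    return json.loads(conn.getresponse().read())

# Serve the fake engine on a throwaway Unix socket, then query it,
# just like the Docker CLI queries /var/run/docker.sock.
sock_path = os.path.join(tempfile.mkdtemp(), "engine.sock")
server = socketserver.UnixStreamServer(sock_path, FakeEngineHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
version = query_version(sock_path)["Version"]
print(version)
server.shutdown()
```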
---

## Why don't we run Docker locally?

- We are going to download container images and distribution packages.

- This could put a bit of stress on the local WiFi and slow us down.

- Instead, we use a remote VM that has good connectivity.

- In some rare cases, installing Docker locally is challenging:

  - no administrator/root access (computer managed by strict corp IT)

  - 32-bit CPU or OS

  - old OS version (e.g. CentOS 6, OSX pre-Yosemite, Windows 7)

- It's better to spend time learning containers than fiddling with the installer!

---

## Connecting to your Virtual Machine

You need an SSH client.

* On OS X, Linux, and other UNIX systems, just use `ssh`:

  ```bash
  $ ssh <login>@<ip-address>
  ```

* On Windows, if you don't have an SSH client, you can download:

  * Putty (www.putty.org)

  * Git BASH (https://git-for-windows.github.io/)

  * MobaXterm (https://mobaxterm.mobatek.net/)

---

class: in-person

## `tailhist`

The shell history of the instructor is available online in real time.

Note the IP address of the instructor's virtual machine (A.B.C.D).

Open http://A.B.C.D:1088 in your browser and you should see the history.

The history is updated in real time (using a WebSocket connection).

It should be green when the WebSocket is connected.

If it turns red, reloading the page should fix it.

---

## Checking your Virtual Machine

Once logged in, make sure that you can run a basic Docker command:

.small[
```bash
$ docker version
Client:
 Version:       18.03.0-ce
 API version:   1.37
 Go version:    go1.9.4
 Git commit:    0520e24
 Built: Wed Mar 21 23:10:06 2018
 OS/Arch:       linux/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      18.03.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   0520e24
  Built:        Wed Mar 21 23:08:35 2018
  OS/Arch:      linux/amd64
  Experimental: false
```
]

If this doesn't work, raise your hand so that an instructor can assist you!

???

:EN:Container concepts
:FR:Premier contact avec les conteneurs

:EN:- What's a container engine?
:FR:- Qu'est-ce qu'un *container engine* ?
164
slides/containers/Windows_Containers.md
Normal file
@@ -0,0 +1,164 @@
class: title

# Windows Containers

---

## Objectives

At the end of this section, you will be able to:

* Understand Windows Containers vs. Linux Containers.

* Know about the features of Docker for Windows for choosing architecture.

* Run other container architectures via QEMU emulation.

---

## Are containers *just* for Linux?

Remember that a container must run on the kernel of the OS it's on.

- This is both a benefit and a limitation.

  (It makes containers lightweight, but limits them to a specific kernel.)

- At its launch in 2013, Docker only supported Linux, and only on amd64 CPUs.

- Since then, many platforms and OSes have been added.

  (Windows, ARM, i386, IBM mainframes ... But no macOS or iOS yet!)

--

- Docker Desktop (macOS and Windows) can run containers for other architectures.

  (Check the docs to see how to [run a Raspberry Pi (ARM) or PPC container](https://docs.docker.com/docker-for-mac/multi-arch/)!)

---

## History of Windows containers

- Early 2016, Windows 10 gained support for running Windows binaries in containers.

  - These are known as "Windows Containers".

  - Win 10 expects Docker for Windows to be installed for full features.

  - These must run in Hyper-V mini-VMs with a Windows Server x64 kernel.

  - No "scratch" containers, so use "Core" and "Nano" Server OS base layers.

  - Since Hyper-V is required, Windows 10 Home won't work (yet...).

--

- Late 2016, Windows Server 2016 ships with native Docker support.

  - Installed via PowerShell, doesn't need Docker for Windows.

  - Can run native (without a VM), or with [Hyper-V Isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container).

---

## LCOW (Linux Containers On Windows)

While Docker on Windows is largely playing catch-up with Docker on Linux,
it's moving fast; and this is one thing that you *cannot* do on Linux!

- LCOW came with the [2017 Fall Creators Update](https://blog.docker.com/2018/02/docker-for-windows-18-02-with-windows-10-fall-creators-update/).

- It can run Linux and Windows containers side-by-side on Win 10.

- It is no longer necessary to switch the Engine to "Linux Containers".

  (In fact, if you want to run both Linux and Windows containers at the same time,
  make sure that your Engine is set to "Windows Containers" mode!)

--

If you are a Docker for Windows user, start your engine and try this:

```bash
docker pull microsoft/nanoserver:1803
```

(Make sure to switch to "Windows Containers mode" if necessary.)

---

## Run Both Windows and Linux containers

- Run a Windows Nano Server (minimal CLI-only server):

  ```bash
  docker run --rm -it microsoft/nanoserver:1803 powershell
  Get-Process
  exit
  ```

- Run busybox on Linux in LCOW:

  ```bash
  docker run --rm --platform linux busybox echo hello
  ```

(Although you will not be able to see them, this will create hidden
Nano and LinuxKit VMs in Hyper-V!)

---

## Did We Say Things Move Fast?

- Things keep improving.

- Now `--platform` defaults to `windows`; some images support both:

  - golang, mongo, python, redis, hello-world ... and more being added

  - you should still use `--platform` with multi-OS images to be certain

- Windows Containers now support `localhost`-accessible containers (July 2018).

- Microsoft (April 2018) added Hyper-V support to Windows 10 Home ...

  ... so stay tuned for Docker support, maybe?!?

---

## Other Windows container options

Most "official" Docker images don't run on Windows yet.

Places to look:

- Hub official images: https://hub.docker.com/u/winamd64/

- Microsoft: https://hub.docker.com/r/microsoft/

---

## SQL Server? Choice of Linux or Windows

- Microsoft [SQL Server for Linux 2017](https://hub.docker.com/r/microsoft/mssql-server-linux/) (amd64/linux)

- Microsoft [SQL Server Express 2017](https://hub.docker.com/r/microsoft/mssql-server-windows-express/) (amd64/windows)

---

## Windows Tools and Tips

- PowerShell [Tab Completion: DockerCompletion](https://github.com/matt9ucci/DockerCompletion)

- Best Shell GUI: [Cmder.net](https://cmder.net/)

- Good Windows Container blogs and how-tos:

  - Docker DevRel [Elton Stoneman, Microsoft MVP](https://blog.sixeyed.com/)

  - Docker Captain [Nicholas Dille](https://dille.name/blog/)

  - Docker Captain [Stefan Scherer](https://stefanscherer.github.io/)
480
slides/containers/Working_With_Volumes.md
Normal file
@@ -0,0 +1,480 @@
class: title

# Working with volumes

---

## Objectives

At the end of this section, you will be able to:

* Create containers holding volumes.

* Share volumes across containers.

* Share a host directory with one or many containers.

---

## Working with volumes

Docker volumes can be used to achieve many things, including:

* Bypassing the copy-on-write system to obtain native disk I/O performance.

* Bypassing copy-on-write to leave some files out of `docker commit`.

* Sharing a directory between multiple containers.

* Sharing a directory between the host and a container.

* Sharing a *single file* between the host and a container.

* Using remote storage and custom storage with *volume drivers*.

---

## Volumes are special directories in a container

Volumes can be declared in two different ways:

* Within a `Dockerfile`, with a `VOLUME` instruction.

  ```dockerfile
  VOLUME /uploads
  ```

* On the command-line, with the `-v` flag for `docker run`.

  ```bash
  $ docker run -d -v /uploads myapp
  ```

In both cases, `/uploads` (inside the container) will be a volume.

---

class: extra-details

## Volumes bypass the copy-on-write system

Volumes act as passthroughs to the host filesystem.

* The I/O performance on a volume is exactly the same as I/O performance
  on the Docker host.

* When you `docker commit`, the content of volumes is not brought into
  the resulting image.

* If a `RUN` instruction in a `Dockerfile` changes the content of a
  volume, those changes are not recorded either.

* If a container is started with the `--read-only` flag, the volume
  will still be writable (unless the volume is a read-only volume).

---

class: extra-details

## Volumes can be shared across containers

You can start a container with *exactly the same volumes* as another one.

The new container will have the same volumes, in the same directories.

They will contain exactly the same thing, and remain in sync.

Under the hood, they are actually the same directories on the host anyway.

This is done using the `--volumes-from` flag for `docker run`.

We will see an example in the following slides.

---

class: extra-details

## Sharing app server logs with another container

Let's start a Tomcat container:

```bash
$ docker run --name webapp -d -p 8080:8080 -v /usr/local/tomcat/logs tomcat
```

Now, start an `alpine` container accessing the same volume:

```bash
$ docker run --volumes-from webapp alpine sh -c "tail -f /usr/local/tomcat/logs/*"
```

Then, from another window, send requests to our Tomcat container:

```bash
$ curl localhost:8080
```

---

## Volumes exist independently of containers

If a container is stopped or removed, its volumes still exist and are available.

Volumes can be listed and manipulated with `docker volume` subcommands:

```bash
$ docker volume ls
DRIVER    VOLUME NAME
local     5b0b65e4316da67c2d471086640e6005ca2264f3...
local     pgdata-prod
local     pgdata-dev
local     13b59c9936d78d109d094693446e174e5480d973...
```

Some of those volume names were explicit (pgdata-prod, pgdata-dev).

The others (the hex IDs) were generated automatically by Docker.

---

## Naming volumes

* Volumes can be created without a container, then used in multiple containers.

Let's create a couple of volumes directly.

```bash
$ docker volume create webapps
webapps
```

```bash
$ docker volume create logs
logs
```

Volumes are not anchored to a specific path.

---

## Populating volumes

* When an empty volume is mounted on a non-empty directory, the directory is copied to the volume.

* This makes it easy to "promote" a normal directory to a volume.

* Non-empty volumes are always mounted as-is.

Let's populate the `webapps` volume with the `webapps.dist` directory from the Tomcat image:

```bash
$ docker run -v webapps:/usr/local/tomcat/webapps.dist tomcat true
```

Note: running `true` will cause the container to exit successfully once the `webapps.dist` directory has been copied to the `webapps` volume, instead of starting Tomcat.

---

## Using our named volumes

* Volumes are used with the `-v` option.

* When a host path does not contain a `/`, it is considered a volume name.
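That naming rule can be sketched as a tiny function (a simplification for illustration, not Docker's actual parser):

```python
def classify_volume_source(source: str) -> str:
    """Sketch of how `docker run -v SOURCE:/target` interprets SOURCE
    (simplified; Docker's real parser handles more cases)."""
    if "/" in source:
        return "bind mount (host path)"
    return "named volume"

print(classify_volume_source("webapps"))        # named volume
print(classify_volume_source("/path/on/host"))  # bind mount (host path)
```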
Let's start a web server using the two previous volumes.

```bash
$ docker run -d -p 1234:8080 \
         -v logs:/usr/local/tomcat/logs \
         -v webapps:/usr/local/tomcat/webapps \
         tomcat
```

Check that it's running correctly:

```bash
$ curl localhost:1234
... (Tomcat tells us how happy it is to be up and running) ...
```

---

## Using a volume in another container

* We will make changes to the volume from another container.

* In this example, we will run a text editor in the other container.

  (But this could be an FTP server, a WebDAV server, a Git receiver...)

Let's start another container using the `webapps` volume.

```bash
$ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp
```

Vandalize the page, save, exit.

Then run `curl localhost:1234` again to see your changes.

---

## Using custom "bind-mounts"

In some cases, you want a specific directory on the host to be mapped
inside the container:

* You want to manage storage and snapshots yourself.

  (With LVM, or a SAN, or ZFS, or anything else!)

* You have a separate disk with better performance (SSD) or resiliency (EBS)
  than the system disk, and you want to put important data on that disk.

* You want to share your source directory between your host (where the
  source gets edited) and the container (where it is compiled or executed).

Wait, we already met the last use-case in our example development workflow!
Nice.

```bash
$ docker run -d -v /path/on/the/host:/path/in/container image ...
```

---

class: extra-details

## Migrating data with `--volumes-from`

The `--volumes-from` option tells Docker to re-use all the volumes
of an existing container.

* Scenario: migrating from Redis 2.8 to Redis 3.0.

* We have a container (`myredis`) running Redis 2.8.

* Stop the `myredis` container.

* Start a new container, using the Redis 3.0 image, and the `--volumes-from` option.

* The new container will inherit the data of the old one.

* Newer containers can use `--volumes-from` too.

* Doesn't work across servers, so it is not usable in clusters (Swarm, Kubernetes).

---

class: extra-details

## Data migration in practice

Let's create a Redis container.

```bash
$ docker run -d --name redis28 redis:2.8
```

Connect to the Redis container and set some data.

```bash
$ docker run -ti --link redis28:redis busybox telnet redis 6379
```

Issue the following commands:

```bash
SET counter 42
INFO server
SAVE
QUIT
```

---

class: extra-details

## Upgrading Redis

Stop the Redis container.

```bash
$ docker stop redis28
```

Start the new Redis container.

```bash
$ docker run -d --name redis30 --volumes-from redis28 redis:3.0
```

---

class: extra-details

## Testing the new Redis

Connect to the Redis container and see our data.

```bash
$ docker run -ti --link redis30:redis busybox telnet redis 6379
```

Issue a few commands.

```bash
GET counter
INFO server
QUIT
```

---

## Volumes lifecycle

* When you remove a container, its volumes are kept around.

* You can list them with `docker volume ls`.

* You can access them by creating a container with `docker run -v`.

* You can remove them with `docker volume rm` or `docker system prune`.

Ultimately, _you_ are the one responsible for logging,
monitoring, and backup of your volumes.

---

class: extra-details

## Checking volumes defined by an image

Wondering if an image has volumes? Just use `docker inspect`:

```bash
$ docker inspect training/datavol
[{
  "config": {
    . . .
    "Volumes": {
        "/var/webapp": {}
    },
    . . .
}]
```
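If you want that information programmatically, the JSON is easy to post-process; a sketch in Python, where `sample` merely mimics the shape shown above (note: the exact key capitalization — `config` vs `Config` — varies across Docker versions, so this assumes the capitalized form):

```python
import json

# Sample document mimicking `docker inspect <image>` output (shape assumed).
sample = '[{"Config": {"Volumes": {"/var/webapp": {}}}}]'

def image_volumes(inspect_json: str):
    """Return the sorted list of volume paths declared by an image."""
    data = json.loads(inspect_json)
    config = data[0].get("Config") or {}
    return sorted(config.get("Volumes") or {})

print(image_volumes(sample))  # ['/var/webapp']
```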
---

class: extra-details

## Checking volumes used by a container

To see which paths are actually volumes, and what they are bound to,
use `docker inspect` (again):

```bash
$ docker inspect <yourContainerID>
[{
  "ID": "<yourContainerID>",
. . .
  "Volumes": {
     "/var/webapp": "/var/lib/docker/vfs/dir/f4280c5b6207ed531efd4cc673ff620cef2a7980f747dbbcca001db61de04468"
  },
  "VolumesRW": {
     "/var/webapp": true
  },
}]
```

* We can see that our volume is present on the file system of the Docker host.

---

## Sharing a single file

The same `-v` flag can be used to share a single file (instead of a directory).

One of the most interesting examples is to share the Docker control socket.

```bash
$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker sh
```

From that container, you can now run `docker` commands communicating with
the Docker Engine running on the host. Try `docker ps`!

.warning[Since that container has access to the Docker socket, it
has root-like access to the host.]

---

## Volume plugins

You can install plugins to manage volumes backed by particular storage systems,
or providing extra features. For instance:

* [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g.
  SAN or NAS), or by cloud block stores (e.g. EBS, EFS).

* [Portworx](https://portworx.com/) - provides a distributed block store for containers.

* [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale
  to several petabytes. It provides interfaces for object, block and file storage.

* and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)!

---

## Volumes vs. Mounts

* Since Docker 17.06, a new option is available: `--mount`.

* It offers a new, richer syntax to manipulate data in containers.

* It makes an explicit difference between:

  - volumes (identified with a unique name, managed by a storage plugin),

  - bind mounts (identified with a host path, not managed).

* The former `-v` / `--volume` option is still usable.

---

## `--mount` syntax

Binding a host path to a container path:

```bash
$ docker run \
  --mount type=bind,source=/path/on/host,target=/path/in/container alpine
```

Mounting a volume to a container path:

```bash
$ docker run \
  --mount source=myvolume,target=/path/in/container alpine
```

Mounting a tmpfs (in-memory, for temporary files):

```bash
$ docker run \
  --mount type=tmpfs,destination=/path/in/container,tmpfs-size=1000000 alpine
```

---

## Section summary

We've learned how to:

* Create and manage volumes.

* Share volumes across containers.

* Share a host directory with one or many containers.
39
slides/containers/intro.md
Normal file
@@ -0,0 +1,39 @@
## A brief introduction

- This was initially written to support in-person, instructor-led workshops and tutorials

- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://@@GITREPO@@/graphs/contributors)

- You can also follow along on your own, at your own pace

- We included as much information as possible in these slides

- We recommend having a mentor to help you ...

- ... Or be comfortable spending some time reading the Docker
  [documentation](https://docs.docker.com/) ...

- ... And looking for answers in the [Docker forums](https://forums.docker.com),
  [StackOverflow](http://stackoverflow.com/questions/tagged/docker),
  and other outlets

---

class: self-paced

## Hands on, you shall practice

- Nobody ever became a Jedi by spending their lives reading Wookiepedia

- Likewise, it will take more than merely *reading* these slides
  to make you an expert

- These slides include *tons* of demos, exercises, and examples

- They assume that you have access to a machine running Docker

- If you are attending a workshop or tutorial:
  <br/>you will be given specific instructions to access a cloud VM

- If you are doing this on your own:
  <br/>we will tell you how to install Docker or access a Docker environment
12
slides/containers/links.md
Normal file
@@ -0,0 +1,12 @@
# Links and resources

- [Docker Community Slack](https://community.docker.com/registrations/groups/4316)
- [Docker Community Forums](https://forums.docker.com/)
- [Docker Hub](https://hub.docker.com)
- [Docker Blog](https://blog.docker.com/)
- [Docker documentation](https://docs.docker.com/)
- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker)
- [Docker on Twitter](https://twitter.com/docker)
- [Play With Docker Hands-On Labs](https://training.play-with-docker.com/)

.footnote[These slides (and future updates) are on → https://container.training/]
57
slides/count-slides.py
Executable file
@@ -0,0 +1,57 @@
#!/usr/bin/env python

import re
import sys

PREFIX = "name: toc-"
EXCLUDED = ["in-person"]


class State(object):

    def __init__(self):
        self.current_slide = 1
        self.section_title = None
        self.section_start = 0
        self.section_slides = 0
        self.parts = {}
        self.sections = {}

    def show(self):
        if self.section_title.startswith("part-"):
            return
        print("{0.section_title}\t{0.section_start}\t{0.section_slides}".format(self))
        self.sections[self.section_title] = self.section_slides


state = State()

for line in open(sys.argv[1]):
    line = line.rstrip()
    if line.startswith(PREFIX):
        # New section: print the stats of the previous one
        # (or the column headers if this is the very first section).
        if state.section_title is None:
            print("{}\t{}\t{}".format("title", "index", "size"))
        else:
            state.show()
        state.section_title = line[len(PREFIX):].strip()
        state.section_start = state.current_slide
        state.section_slides = 0
    # "---" starts a new slide; "--" is an incremental build step
    # (it counts as a slide in remark's numbering).
    if line == "---":
        state.current_slide += 1
        state.section_slides += 1
    if line == "--":
        state.current_slide += 1
    toc_links = re.findall(r"\(#toc-(.*)\)", line)
    if toc_links and state.section_title and state.section_title.startswith("part-"):
        if state.section_title not in state.parts:
            state.parts[state.section_title] = []
        state.parts[state.section_title].append(toc_links[0])
    # This is really hackish: slides excluded from the count
    # (e.g. "in-person" slides) are detected by their "class:" line.
    if line.startswith("class:"):
        for klass in EXCLUDED:
            if klass in line:
                state.section_slides -= 1
                state.current_slide -= 1

state.show()

for part in sorted(state.parts, key=lambda f: int(f.split("-")[1])):
    part_size = sum(state.sections[s] for s in state.parts[part])
    print("{}\t{}\t{}".format("total size for", part, part_size))
16
slides/docker-compose.yaml
Normal file
@@ -0,0 +1,16 @@
version: "2"

services:
  www:
    image: nginx
    volumes:
      - .:/usr/share/nginx/html
    ports:
      - 8080:80
  builder:
    build: .
    volumes:
      - ..:/repo
    working_dir: /repo/slides
    command: ./build.sh forever
5
slides/exercises/appconfig-brief.md
Normal file
@@ -0,0 +1,5 @@
## Exercise — Application Configuration

- Configure an application with a ConfigMap

- Generate a configuration file from the downward API
87
slides/exercises/appconfig-details.md
Normal file
@@ -0,0 +1,87 @@
# Exercise — Application Configuration

- We want to configure an application with a ConfigMap

- We will use the "rainbow" example shown previously

  (HAProxy load balancing traffic to services in multiple namespaces)

- We won't provide the HAProxy configuration file

- Instead, we will provide a list of namespaces

  (e.g. as a space-delimited list in a ConfigMap)

- Our Pod should generate the HAProxy configuration using the ConfigMap

---

## Setup

- Let's say that we have the "rainbow" app deployed:

  ```bash
  kubectl apply -f ~/container.training/k8s/rainbow.yaml
  ```

- And a ConfigMap like the following one:

  ```bash
  kubectl create configmap rainbow --from-literal=namespaces="blue green"
  ```
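To give an idea of the generation step (a sketch only, not the exercise's solution — the backend name and port are made up), turning the space-delimited namespace list into HAProxy `server` lines could look like:

```python
def haproxy_backend(namespaces: str) -> str:
    """Render a hypothetical HAProxy backend section, with one server
    per namespace, using Kubernetes DNS names for the `color` Services."""
    lines = ["backend color"]
    for ns in namespaces.split():
        # Each namespace exposes a Service named `color`.
        lines.append(f"    server {ns} color.{ns}.svc.cluster.local:80 check")
    return "\n".join(lines)

print(haproxy_backend("blue green"))
```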
---
|
||||
|
||||
## Goal 1
|
||||
|
||||
- We want a Deployment and a Service called `rainbow`
|
||||
|
||||
- The `rainbow` Service should load balance across Namespaces `blue` and `green`
|
||||
|
||||
(i.e. to the Services called `color` in both these Namespaces)
|
||||
|
||||
- We want to be able to update the configuration:
|
||||
|
||||
- update the ConfigMap to put `blue green red`
|
||||
|
||||
- what should we do so that HAproxy picks up the change?
|
||||
|
||||
---
|
||||
|
||||
## Goal 2
|
||||
|
||||
- Check what happens if we specify a backend that doesn't exist
|
||||
|
||||
(e.g. add `purple` to the list of namespaces)
|
||||
|
||||
- If we specify invalid backends to HAProxy, it won't start!
|
||||
|
||||
- Implement a workaround among these two:
|
||||
|
||||
- remove invalid backends from the list before starting HAProxy
|
||||
|
||||
- wait until all backends are valid before starting HAProxy
|
||||
|
||||
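The second workaround (wait until all backends are valid before starting HAProxy) could be sketched like this — `getent hosts` is used to test DNS resolution, and the Service name `color` is an assumption:

```bash
#!/bin/sh
# Sketch: block until every backend's DNS name resolves.
# "color" is the assumed Service name in each namespace;
# $NAMESPACES holds the space-delimited list from the ConfigMap.
wait_for_backend() {
  until getent hosts "$1" >/dev/null; do
    echo "Waiting for $1 to resolve..."
    sleep 2
  done
}

for NS in $NAMESPACES; do
  wait_for_backend "color.$NS.svc.cluster.local"
done
```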
---
|
||||
|
||||
## Goal 3
|
||||
|
||||
- We'd like HAProxy to pick up ConfigMap updates automatically
|
||||
|
||||
- How can we do that?
|
||||
|
||||
---
|
||||
|
||||
## Hints
|
||||
|
||||
- Check the following slides if you need help!
|
||||
|
||||
--
|
||||
|
||||
- We want to generate the HAProxy configuration in an `initContainer`
|
||||
|
||||
--
|
||||
|
||||
- The `namespaces` entry of the `rainbow` ConfigMap should be exposed to the `initContainer`
|
||||
|
||||
--
|
||||
|
||||
- The HAProxy configuration should be in a volume shared with HAProxy
|
||||
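Putting the hints together, the initContainer's job could look something like this — a sketch, assuming the namespace list arrives in a `NAMESPACES` environment variable (sourced from the ConfigMap), that each namespace has a Service named `color` on port 80, and that `$CONFIG` points to a file on the volume shared with HAProxy:

```bash
#!/bin/sh
# Sketch of the initContainer: generate haproxy.cfg from the
# space-delimited namespace list stored in the ConfigMap.
NAMESPACES=${NAMESPACES:-"blue green"}
CONFIG=${CONFIG:-haproxy.cfg}   # in the pod, a path on the shared volume

{
  echo "defaults"
  echo "  mode http"
  echo "  timeout connect 5s"
  echo "  timeout client 30s"
  echo "  timeout server 30s"
  echo "frontend the-frontend"
  echo "  bind :80"
  echo "  default_backend the-backend"
  echo "backend the-backend"
  # one "server" line per namespace in the list
  for NS in $NAMESPACES; do
    echo "  server $NS color.$NS.svc.cluster.local:80 check"
  done
} > "$CONFIG"
```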
7
slides/exercises/dmuc-brief.md
Normal file
@@ -0,0 +1,7 @@
|
||||
## Exercise — Build a Cluster
|
||||
|
||||
- Deploy a cluster by configuring and running each component manually
|
||||
|
||||
- Add CNI networking
|
||||
|
||||
- Generate and validate ServiceAccount tokens
|
||||
33
slides/exercises/dmuc-details.md
Normal file
@@ -0,0 +1,33 @@
|
||||
# Exercise — Build a Cluster
|
||||
|
||||
- Step 1: deploy a cluster
|
||||
|
||||
- follow the steps in the "Dessine-moi un cluster" section
|
||||
|
||||
- Step 2: add CNI networking
|
||||
|
||||
- use kube-router
|
||||
|
||||
- interconnect with the route-reflector
|
||||
|
||||
- check that you receive the routes of other clusters
|
||||
|
||||
- Step 3: generate and validate ServiceAccount tokens
|
||||
|
||||
- see next slide for help!
|
||||
|
||||
---
|
||||
|
||||
## ServiceAccount tokens
|
||||
|
||||
- We need to generate a TLS key pair and certificate
|
||||
|
||||
- A self-signed certificate will work
|
||||
|
||||
- We don't need anything particular in the certificate
|
||||
|
||||
(no particular CN, key use flags, etc.)
|
||||
|
||||
- The key needs to be passed to both API server and controller manager
|
||||
|
||||
- Check that ServiceAccount tokens are generated correctly
|
||||
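For instance, the key pair and certificate can be generated like this (file paths are just examples; the flags shown in the comments are the standard kube-apiserver / kube-controller-manager ones):

```bash
# Generate an RSA key pair and a self-signed certificate
# for ServiceAccount tokens. Nothing fancy is needed in the cert.
openssl genrsa -out sa.key 2048
openssl rsa -in sa.key -pubout -out sa.pub
openssl req -new -x509 -key sa.key -out sa.crt -days 365 -subj /CN=service-accounts

# Then pass the key to the control plane components, e.g.:
# kube-apiserver \
#   --service-account-key-file=sa.pub \
#   --service-account-signing-key-file=sa.key \
#   --service-account-issuer=https://kubernetes.default.svc ...
# kube-controller-manager \
#   --service-account-private-key-file=sa.key ...
```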
9
slides/exercises/healthchecks-brief.md
Normal file
@@ -0,0 +1,9 @@
|
||||
## Exercise — Healthchecks
|
||||
|
||||
- Add readiness and liveness probes to a web service
|
||||
|
||||
(we will use the `rng` service in the dockercoins app)
|
||||
|
||||
- See what happens when the load increases
|
||||
|
||||
(spoiler alert: it involves timeouts!)
|
||||
86
slides/exercises/healthchecks-details.md
Normal file
@@ -0,0 +1,86 @@
|
||||
# Exercise — Healthchecks
|
||||
|
||||
- We want to add healthchecks to the `rng` service in dockercoins
|
||||
|
||||
- The `rng` service exhibits an interesting behavior under load:
|
||||
|
||||
*its latency increases (which will cause probes to time out!)*
|
||||
|
||||
- We want to see:
|
||||
|
||||
- what happens when the readiness probe fails
|
||||
|
||||
- what happens when the liveness probe fails
|
||||
|
||||
- how to set "appropriate" probes and probe parameters
|
||||
|
||||
---
|
||||
|
||||
## Setup
|
||||
|
||||
- First, deploy a new copy of dockercoins
|
||||
|
||||
(for instance, in a brand new namespace)
|
||||
|
||||
- Pro tip #1: ping (e.g. with `httping`) the `rng` service at all times
|
||||
|
||||
- it should initially show a few milliseconds latency
|
||||
|
||||
- that will increase when we scale up
|
||||
|
||||
- it will also let us detect when the service goes "boom"
|
||||
|
||||
- Pro tip #2: also keep an eye on the web UI
|
||||
|
||||
---
|
||||
|
||||
## Readiness
|
||||
|
||||
- Add a readiness probe to `rng`
|
||||
|
||||
- this requires editing the pod template in the Deployment manifest
|
||||
|
||||
- use a simple HTTP check on the `/` route of the service
|
||||
|
||||
- keep all other parameters (timeouts, thresholds...) at their default values
|
||||
|
||||
- Check what happens when deploying an invalid image for `rng` (e.g. `alpine`)
|
||||
|
||||
*(If the probe was set up correctly, the app will continue to work,
|
||||
because Kubernetes won't switch traffic over to the `alpine` containers,
|
||||
since they don't pass the readiness probe.)*
|
||||
|
||||
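Instead of opening the manifest in an editor, the probe can also be added with a strategic merge patch — a sketch, assuming the container is named `rng` and listens on port 80 (check the actual Deployment):

```bash
kubectl patch deployment rng --patch '
spec:
  template:
    spec:
      containers:
        - name: rng            # assumed container name
          readinessProbe:
            httpGet:
              path: /
              port: 80         # assumed container port
'
```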
---
|
||||
|
||||
## Readiness under load
|
||||
|
||||
- Then roll back `rng` to the original image
|
||||
|
||||
- Check what happens when we scale up the `worker` Deployment to 15+ workers
|
||||
|
||||
(get the latency above 1 second)
|
||||
|
||||
*(We should now observe intermittent unavailability of the service, i.e. every
|
||||
30 seconds it will be unreachable for a bit, then come back, then go away again, etc.)*
|
||||
|
||||
---
|
||||
|
||||
## Liveness
|
||||
|
||||
- Now replace the readiness probe with a liveness probe
|
||||
|
||||
- What happens now?
|
||||
|
||||
*(At first the behavior looks the same as with the readiness probe:
|
||||
service becomes unreachable, then reachable again, etc.; but there is
|
||||
a significant difference behind the scenes. What is it?)*
|
||||
|
||||
---
|
||||
|
||||
## Readiness and liveness
|
||||
|
||||
- Bonus questions!
|
||||
|
||||
- What happens if we enable both probes at the same time?
|
||||
|
||||
- What strategies can we use so that both probes are useful?
|
||||
13
slides/exercises/helm-generic-chart-brief.md
Normal file
@@ -0,0 +1,13 @@
|
||||
## Exercise — Helm Charts
|
||||
|
||||
- Create a Helm chart to deploy a generic microservice
|
||||
|
||||
- Deploy dockercoins by instantiating that chart multiple times
|
||||
|
||||
- Bonus: require as few values as possible
|
||||
|
||||
- Bonus: handle healthchecks for HTTP services
|
||||
|
||||
- Bonus: make it easy to change image versions
|
||||
|
||||
- Bonus: make it easy to use images on a different registry
|
||||
86
slides/exercises/helm-generic-chart-details.md
Normal file
@@ -0,0 +1,86 @@
|
||||
# Exercise — Helm Charts
|
||||
|
||||
- We want to deploy dockercoins with a Helm chart
|
||||
|
||||
- We want to have a "generic chart" and instantiate it 5 times
|
||||
|
||||
(once for each service)
|
||||
|
||||
- We will pass values to the chart to customize it for each component
|
||||
|
||||
(to indicate which image to use, which ports to expose, etc.)
|
||||
|
||||
- We'll use `helm create` as a starting point for our generic chart
|
||||
|
||||
---
|
||||
|
||||
## Goal
|
||||
|
||||
- Have a directory with the generic chart
|
||||
|
||||
(e.g. `generic-chart`)
|
||||
|
||||
- Have 5 values files
|
||||
|
||||
(e.g. `hasher.yml`, `redis.yml`, `rng.yml`, `webui.yml`, `worker.yml`)
|
||||
|
||||
- Be able to install dockercoins by running 5 times:
|
||||
|
||||
`helm install X ./generic-chart --values=X.yml`
|
||||
|
||||
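In other words, something along these lines should bring up the whole app (a sketch matching the file layout suggested above):

```bash
# Install one release of the generic chart per dockercoins component.
for COMPONENT in hasher redis rng webui worker; do
  helm install "$COMPONENT" ./generic-chart --values="$COMPONENT.yml"
done
```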
---
|
||||
|
||||
## Hints
|
||||
|
||||
- There are many little things to tweak in the generic chart
|
||||
|
||||
(service names, port numbers, healthchecks...)
|
||||
|
||||
- Check the training slides if you need a refresher!
|
||||
|
||||
---
|
||||
|
||||
## Bonus 1
|
||||
|
||||
- Minimize the number of values that have to be set
|
||||
|
||||
- Option 1: no values at all for `rng` and `hasher`
|
||||
|
||||
(default values assume HTTP service listening on port 80)
|
||||
|
||||
- Option 2: no values at all for `worker`
|
||||
|
||||
(default values assume worker container with no service)
|
||||
|
||||
|
||||
---
|
||||
|
||||
## Bonus 2
|
||||
|
||||
- Handle healthchecks
|
||||
|
||||
- Make sure that healthchecks are enabled in HTTP services
|
||||
|
||||
- ...But not in Redis or in the worker
|
||||
|
||||
---
|
||||
|
||||
## Bonus 3
|
||||
|
||||
- Make it easy to change image versions
|
||||
|
||||
- E.g. change `v0.1` to `v0.2` by changing only *one* thing in *one* place
|
||||
|
||||
---
|
||||
|
||||
## Bonus 4
|
||||
|
||||
- Make it easy to use images on a different registry
|
||||
|
||||
- We can assume that the images will always have the same names
|
||||
|
||||
(`hasher`, `rng`, `webui`, `worker`)
|
||||
|
||||
- And the same tag
|
||||
|
||||
(`v0.1`)
|
||||
9
slides/exercises/helm-umbrella-chart-brief.md
Normal file
@@ -0,0 +1,9 @@
|
||||
## Exercise — Umbrella Charts
|
||||
|
||||
- Create a Helm chart with dependencies on other charts
|
||||
|
||||
(leveraging the generic chart created earlier)
|
||||
|
||||
- Deploy dockercoins with that chart
|
||||
|
||||
- Bonus: use an external chart for the redis component
|
||||
77
slides/exercises/helm-umbrella-chart-details.md
Normal file
@@ -0,0 +1,77 @@
|
||||
# Exercise — Umbrella Charts
|
||||
|
||||
- We want to deploy dockercoins with a single Helm chart
|
||||
|
||||
- That chart will reuse the "generic chart" created previously
|
||||
|
||||
- This will require expressing dependencies, and using the `alias` keyword
|
||||
|
||||
- It will also require minor changes in the templates
|
||||
|
||||
---
|
||||
|
||||
## Goal
|
||||
|
||||
- We want to be able to install a copy of dockercoins with:
|
||||
```bash
|
||||
helm install dockercoins ./umbrella-chart
|
||||
```
|
||||
|
||||
- It should leverage the generic chart created earlier
|
||||
|
||||
(and instantiate it five times, once per component of dockercoins)
|
||||
|
||||
- The values YAML files created earlier should be merged into a single one
|
||||
|
||||
---
|
||||
|
||||
## Bonus
|
||||
|
||||
- We want to replace our redis component with a better one
|
||||
|
||||
- We're going to use Bitnami's redis chart
|
||||
|
||||
(find it on the Artifact Hub)
|
||||
|
||||
- However, a lot of adjustments will be required!
|
||||
|
||||
(check following slides if you need hints)
|
||||
|
||||
---
|
||||
|
||||
## Hints (1/2)
|
||||
|
||||
- We will probably have to disable persistence
|
||||
|
||||
- by default, the chart enables persistence
|
||||
|
||||
- this works only if we have a default StorageClass
|
||||
|
||||
- this can be disabled by setting a value
|
||||
|
||||
- We will also have to disable authentication
|
||||
|
||||
- by default, the chart generates a password for Redis
|
||||
|
||||
- the dockercoins code doesn't use one
|
||||
|
||||
- this can also be changed by setting a value
|
||||
|
||||
---
|
||||
|
||||
## Hints (2/2)
|
||||
|
||||
- The dockercoins code connects to `redis`
|
||||
|
||||
- The chart generates different service names
|
||||
|
||||
- Option 1:
|
||||
|
||||
- vendor the chart in our umbrella chart
|
||||
- change the service name in the chart
|
||||
|
||||
- Option 2:
|
||||
|
||||
- add a Service of type ExternalName
|
||||
- it will be a DNS alias from `redis` to `redis-whatever.NAMESPACE.svc.cluster.local`
|
||||
- for extra points, make the domain configurable
|
||||
9
slides/exercises/ingress-brief.md
Normal file
@@ -0,0 +1,9 @@
|
||||
## Exercise — Ingress
|
||||
|
||||
- Add an ingress controller to a Kubernetes cluster
|
||||
|
||||
- Create an ingress resource for a couple of web apps on that cluster
|
||||
|
||||
- Challenge: accessing/exposing port 80
|
||||
|
||||
(different methods depending on how the cluster was deployed)
|
||||
131
slides/exercises/ingress-details.md
Normal file
@@ -0,0 +1,131 @@
|
||||
# Exercise — Ingress
|
||||
|
||||
- We want to expose a couple of web apps through an ingress controller
|
||||
|
||||
- This will require:
|
||||
|
||||
- the web apps (e.g. two instances of `jpetazzo/color`)
|
||||
|
||||
- an ingress controller
|
||||
|
||||
- an ingress resource
|
||||
|
||||
---
|
||||
|
||||
## Different scenarios
|
||||
|
||||
We will use a different deployment mechanism depending on the cluster that we have:
|
||||
|
||||
- Managed cluster with working `LoadBalancer` Services
|
||||
|
||||
- Local development cluster
|
||||
|
||||
- Cluster without `LoadBalancer` Services (e.g. deployed with `kubeadm`)
|
||||
|
||||
---
|
||||
|
||||
## The apps
|
||||
|
||||
- The web apps will be deployed similarly, regardless of the scenario
|
||||
|
||||
- Let's start by deploying two web apps, e.g.:
|
||||
|
||||
a Deployment called `blue` and another called `green`, using image `jpetazzo/color`
|
||||
|
||||
- Expose them with two `ClusterIP` Services
|
||||
|
||||
---
|
||||
|
||||
## Scenario "classic cloud Kubernetes"
|
||||
|
||||
*Difficulty: easy*
|
||||
|
||||
For this scenario, we need a cluster with working `LoadBalancer` Services.
|
||||
|
||||
(For instance, a managed Kubernetes cluster from a cloud provider.)
|
||||
|
||||
We suggest using "Ingress NGINX" with its default settings.
|
||||
|
||||
It can be installed with `kubectl apply` or with `helm`.
|
||||
|
||||
Both methods are described in [the documentation][ingress-nginx-deploy].
|
||||
|
||||
We want our apps to be available on e.g. http://X.X.X.X/blue and http://X.X.X.X/green
|
||||
<br/>
|
||||
(where X.X.X.X is the IP address of the `LoadBalancer` allocated by Ingress NGINX).
|
||||
|
||||
[ingress-nginx-deploy]: https://kubernetes.github.io/ingress-nginx/deploy/
|
||||
|
||||
---
|
||||
|
||||
## Scenario "local development cluster"
|
||||
|
||||
*Difficulty: easy-hard (depends on the type of cluster!)*
|
||||
|
||||
For this scenario, we want to use a local cluster like KinD, minikube, etc.
|
||||
|
||||
We suggest using "Ingress NGINX" again, as in the previous scenario.
|
||||
|
||||
Furthermore, we want to use `localdev.me`.
|
||||
|
||||
We want our apps to be available on e.g. `blue.localdev.me` and `green.localdev.me`.
|
||||
|
||||
The difficulty is to ensure that `localhost:80` will map to the ingress controller.
|
||||
|
||||
(See next slide for hints!)
|
||||
|
||||
---
|
||||
|
||||
## Hints
|
||||
|
||||
- With clusters like Docker Desktop, the first `LoadBalancer` service uses `localhost`
|
||||
|
||||
(if the ingress controller is the first `LoadBalancer` service, we're all set!)
|
||||
|
||||
- With clusters like K3D and KinD, it is possible to define extra port mappings
|
||||
|
||||
(and map e.g. `localhost:80` to port 30080 on the node; then use that as a `NodePort`)
|
||||
|
||||
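With KinD, for example, that extra port mapping can be declared when creating the cluster — a sketch, where 30080 is the NodePort mentioned above:

```bash
kind create cluster --config=- <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # the ingress controller's NodePort
        hostPort: 80           # reachable as localhost:80
EOF
```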
---
|
||||
|
||||
## Scenario "on premises cluster", take 1
|
||||
|
||||
*Difficulty: easy*
|
||||
|
||||
For this scenario, we need a cluster with nodes that are publicly accessible.
|
||||
|
||||
We want to deploy the ingress controller so that it listens on port 80 on all nodes.
|
||||
|
||||
This can be done e.g. with the manifests in @@LINK[k8s/traefik.yaml].
|
||||
|
||||
We want our apps to be available on e.g. http://X.X.X.X/blue and http://X.X.X.X/green
|
||||
<br/>
|
||||
(where X.X.X.X is the IP address of any of our nodes).
|
||||
|
||||
---
|
||||
|
||||
## Scenario "on premises cluster", take 2
|
||||
|
||||
*Difficulty: medium*
|
||||
|
||||
We want to deploy the ingress controller so that it listens on port 80 on all nodes.
|
||||
|
||||
But this time, we want to use a Helm chart to install the ingress controller.
|
||||
|
||||
We can use either the Ingress NGINX Helm chart, or the Traefik Helm chart.
|
||||
|
||||
Test with an untainted node first.
|
||||
|
||||
Feel free to make it work on tainted nodes (e.g. control plane nodes) later.
|
||||
|
||||
---
|
||||
|
||||
## Scenario "on premises cluster", take 3
|
||||
|
||||
*Difficulty: hard*
|
||||
|
||||
This is similar to the previous scenario, but with two significant changes:
|
||||
|
||||
1. We only want to run the ingress controller on nodes that have the role `ingress`.
|
||||
|
||||
2. We don't want to use `hostNetwork`, but a list of `externalIPs` instead.
|
||||
17
slides/exercises/ingress-secret-policy-brief.md
Normal file
@@ -0,0 +1,17 @@
|
||||
⚠️ BROKEN EXERCISE - DO NOT USE
|
||||
|
||||
## Exercise — Ingress Secret Policy
|
||||
|
||||
*Implement policy to limit impact of ingress controller vulnerabilities.*
|
||||
|
||||
(Mitigate e.g. CVE-2021-25742)
|
||||
|
||||
- Deploy an ingress controller and cert-manager
|
||||
|
||||
- Deploy a trivial web app secured with TLS
|
||||
|
||||
(obtaining a cert with cert-manager + Let's Encrypt)
|
||||
|
||||
- Prevent ingress controller from reading arbitrary secrets
|
||||
|
||||
- Automatically grant selective access to TLS secrets, but not other secrets
|
||||
95
slides/exercises/ingress-secret-policy-details.md
Normal file
@@ -0,0 +1,95 @@
|
||||
⚠️ BROKEN EXERCISE - DO NOT USE
|
||||
|
||||
# Exercise — Ingress Secret Policy
|
||||
|
||||
- Most ingress controllers have access to all Secrets
|
||||
|
||||
(so that they can access TLS keys and certs, which are stored in Secrets)
|
||||
|
||||
- Ingress controller vulnerability can lead to full cluster compromise
|
||||
|
||||
(by allowing attacker to access all secrets, including API tokens)
|
||||
|
||||
- See for instance [CVE-2021-25742](https://github.com/kubernetes/ingress-nginx/issues/7837)
|
||||
|
||||
- How can we prevent that?
|
||||
|
||||
---
|
||||
|
||||
## Step 1: Ingress Controller
|
||||
|
||||
- Deploy an Ingress Controller
|
||||
|
||||
(e.g. Traefik or NGINX; you can use @@LINK[k8s/traefik-v2.yaml])
|
||||
|
||||
- Create a trivial web app (e.g. NGINX, `jpetazzo/color`...)
|
||||
|
||||
- Expose it with an Ingress
|
||||
|
||||
(e.g. use `app.<ip-address>.nip.io`)
|
||||
|
||||
- Check that you can access it through `http://app.<ip-address>.nip.io`
|
||||
|
||||
---
|
||||
|
||||
## Step 2: cert-manager
|
||||
|
||||
- Deploy cert-manager
|
||||
|
||||
- Create a ClusterIssuer using Let's Encrypt staging environment
|
||||
|
||||
(e.g. with @@LINK[k8s/cm-clusterissuer.yaml])
|
||||
|
||||
- Create an Ingress for the app, with TLS enabled
|
||||
|
||||
(e.g. use `appsecure.<ip-address>.nip.io`)
|
||||
|
||||
- Tell cert-manager to obtain a certificate for that Ingress
|
||||
|
||||
- option 1: manually create a Certificate (e.g. with @@LINK[k8s/cm-certificate.yaml])
|
||||
|
||||
- option 2: use the `cert-manager.io/cluster-issuer` annotation
|
||||
|
||||
- Check that the Let's Encrypt certificate was issued
|
||||
|
||||
---
|
||||
|
||||
## Step 3: RBAC
|
||||
|
||||
- Remove the Ingress Controller's permission to read all Secrets
|
||||
|
||||
- Restart the Ingress Controller
|
||||
|
||||
- Check that https://appsecure doesn't serve the Let's Encrypt cert
|
||||
|
||||
- Grant permission to read the certificate's Secret
|
||||
|
||||
- Check that https://appsecure serves the Let's Encrypt cert again
|
||||
|
||||
---
|
||||
|
||||
## Step 4: Kyverno
|
||||
|
||||
- Install Kyverno
|
||||
|
||||
- Write a Kyverno policy to automatically grant permission to read Secrets
|
||||
|
||||
(e.g. when a cert-manager Certificate is created)
|
||||
|
||||
- Check @@LINK[k8s/kyverno-namespace-setup.yaml] for inspiration
|
||||
|
||||
- Hint: you need to automatically create a Role and RoleBinding
|
||||
|
||||
- Create another app + another Ingress with TLS
|
||||
|
||||
- Check that the Certificate, Secret, Role, RoleBinding are created
|
||||
|
||||
- Check that the new app correctly serves the Let's Encrypt cert
|
||||
|
||||
---
|
||||
|
||||
## Step 5: double-check
|
||||
|
||||
- Check that the Ingress Controller can't access other secrets
|
||||
|
||||
(e.g. by manually creating a Secret and checking with `kubectl exec`?)
|
||||
7
slides/exercises/k8sfundamentals-brief.md
Normal file
@@ -0,0 +1,7 @@
|
||||
## Exercise — Deploy Dockercoins
|
||||
|
||||
- Deploy the dockercoins application to our Kubernetes cluster
|
||||
|
||||
- Connect components together
|
||||
|
||||
- Expose the web UI and open it in a web browser to check that it works
|
||||
59
slides/exercises/k8sfundamentals-details.md
Normal file
@@ -0,0 +1,59 @@
|
||||
# Exercise — Deploy Dockercoins
|
||||
|
||||
- We want to deploy the dockercoins app
|
||||
|
||||
- There are 5 components in the app:
|
||||
|
||||
hasher, redis, rng, webui, worker
|
||||
|
||||
- We'll use one Deployment for each component
|
||||
|
||||
(created with `kubectl create deployment`)
|
||||
|
||||
- We'll connect them with Services
|
||||
|
||||
(created with `kubectl expose`)
|
||||
|
||||
---
|
||||
|
||||
## Images
|
||||
|
||||
- We'll use the following images:
|
||||
|
||||
- hasher → `dockercoins/hasher:v0.1`
|
||||
|
||||
- redis → `redis`
|
||||
|
||||
- rng → `dockercoins/rng:v0.1`
|
||||
|
||||
- webui → `dockercoins/webui:v0.1`
|
||||
|
||||
- worker → `dockercoins/worker:v0.1`
|
||||
|
||||
- All services should be internal services, except the web UI
|
||||
|
||||
(since we want to be able to connect to the web UI from outside)
|
||||
|
||||
---
|
||||
|
||||
class: pic
|
||||
|
||||

|
||||
|
||||
---
|
||||
|
||||
## Goal
|
||||
|
||||
- We should be able to see the web UI in our browser
|
||||
|
||||
(with the graph showing approximately 3-4 hashes/second)
|
||||
|
||||
---
|
||||
|
||||
## Hints
|
||||
|
||||
- Make sure to expose services with the right ports
|
||||
|
||||
(check the logs of the worker; they indicate the port numbers)
|
||||
|
||||
- The web UI can be exposed with a NodePort or LoadBalancer Service
|
||||
9
slides/exercises/kyverno-ingress-domain-name-brief.md
Normal file
@@ -0,0 +1,9 @@
|
||||
## Exercise — Generating Ingress With Kyverno
|
||||
|
||||
- When a Service gets created, automatically generate an Ingress
|
||||
|
||||
- Step 1: expose all services with a hard-coded domain name
|
||||
|
||||
- Step 2: only expose services that have a port named `http`
|
||||
|
||||
- Step 3: configure the domain name with a per-namespace ConfigMap
|
||||
33
slides/exercises/kyverno-ingress-domain-name-details.md
Normal file
@@ -0,0 +1,33 @@
|
||||
# Exercise — Generating Ingress With Kyverno
|
||||
|
||||
When a Service gets created...
|
||||
|
||||
*(for instance, Service `blue` in Namespace `rainbow`)*
|
||||
|
||||
...Automatically generate an Ingress.
|
||||
|
||||
*(for instance, with host name `blue.rainbow.MYDOMAIN.COM`)*
|
||||
|
||||
---
|
||||
|
||||
## Goals
|
||||
|
||||
- Step 1: expose all services with a hard-coded domain name
|
||||
|
||||
- Step 2: only expose services that have a port named `http`
|
||||
|
||||
- Step 3: configure the domain name with a per-namespace ConfigMap
|
||||
|
||||
(e.g. `kubectl create configmap ingress-domain-name --from-literal=domain=1.2.3.4.nip.io`)
|
||||
|
||||
---
|
||||
|
||||
## Hints
|
||||
|
||||
- We want to use a Kyverno `generate` ClusterPolicy
|
||||
|
||||
- For step 1, check [Generate Resources](https://kyverno.io/docs/writing-policies/generate/) documentation
|
||||
|
||||
- For step 2, check [Preconditions](https://kyverno.io/docs/writing-policies/preconditions/) documentation
|
||||
|
||||
- For step 3, check [External Data Sources](https://kyverno.io/docs/writing-policies/external-data-sources/) documentation
|
||||
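A starting point for step 1 might look like this — an untested sketch following the shape of the Kyverno `generate` documentation; `MYDOMAIN.COM` is the hard-coded placeholder from the goal, and policy/rule names are arbitrary:

```bash
kubectl apply -f- <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-ingress
spec:
  rules:
    - name: ingress-for-service
      match:
        any:
          - resources:
              kinds:
                - Service
      generate:
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        name: "{{request.object.metadata.name}}"
        namespace: "{{request.object.metadata.namespace}}"
        data:
          spec:
            rules:
              - host: "{{request.object.metadata.name}}.{{request.object.metadata.namespace}}.MYDOMAIN.COM"
                http:
                  paths:
                    - path: /
                      pathType: Prefix
                      backend:
                        service:
                          name: "{{request.object.metadata.name}}"
                          port:
                            name: http
EOF
```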
9
slides/exercises/localcluster-brief.md
Normal file
@@ -0,0 +1,9 @@
|
||||
## Exercise — Local Cluster
|
||||
|
||||
- Deploy a local Kubernetes cluster if you don't already have one
|
||||
|
||||
- Deploy dockercoins on that cluster
|
||||
|
||||
- Connect to the web UI in your browser
|
||||
|
||||
- Scale up dockercoins
|
||||
43
slides/exercises/localcluster-details.md
Normal file
@@ -0,0 +1,43 @@
|
||||
# Exercise — Local Cluster
|
||||
|
||||
- We want to have our own local Kubernetes cluster
|
||||
|
||||
(we can use Docker Desktop, KinD, minikube... anything will do!)
|
||||
|
||||
- Then we want to run a copy of dockercoins on that cluster
|
||||
|
||||
- We want to be able to connect to the web UI
|
||||
|
||||
(we can expose the port, or use port-forward, or whatever)
|
||||
|
||||
---
|
||||
|
||||
## Goal
|
||||
|
||||
- Be able to see the dockercoins web UI running on our local cluster
|
||||
|
||||
---
|
||||
|
||||
## Hints
|
||||
|
||||
- On a Mac or Windows machine:
|
||||
|
||||
the easiest solution is probably Docker Desktop
|
||||
|
||||
- On a Linux machine:
|
||||
|
||||
the easiest solution is probably KinD or k3d
|
||||
|
||||
- To connect to the web UI:
|
||||
|
||||
`kubectl port-forward` is probably the easiest solution
|
||||
|
||||
---
|
||||
|
||||
## Bonus
|
||||
|
||||
- If you already have a local Kubernetes cluster:
|
||||
|
||||
try to run another one!
|
||||
|
||||
- Try a method other than `kubectl port-forward`
|
||||
7
slides/exercises/netpol-brief.md
Normal file
@@ -0,0 +1,7 @@
|
||||
## Exercise — Network Policies
|
||||
|
||||
- Implement a system with 3 levels of security
|
||||
|
||||
(private pods, public pods, namespace pods)
|
||||
|
||||
- Apply it to the DockerCoins demo app
|
||||
63
slides/exercises/netpol-details.md
Normal file
@@ -0,0 +1,63 @@
|
||||
# Exercise — Network Policies
|
||||
|
||||
We want to implement a generic network security mechanism.
|
||||
|
||||
Instead of creating one policy per service, we want to
|
||||
create a fixed number of policies, and use a single label
|
||||
to indicate the security level of our pods.
|
||||
|
||||
Then, when adding a new service to the stack, instead
|
||||
of writing a new network policy for that service, we
|
||||
only need to add the right label to the pods of that service.
|
||||
|
||||
---
|
||||
|
||||
## Specifications
|
||||
|
||||
We will use the label `security` to classify our pods.
|
||||
|
||||
- If `security=private`:
|
||||
|
||||
*the pod shouldn't accept any traffic*
|
||||
|
||||
- If `security=public`:
|
||||
|
||||
*the pod should accept all traffic*
|
||||
|
||||
- If `security=namespace`:
|
||||
|
||||
*the pod should only accept connections coming from the same namespace*
|
||||
|
||||
If `security` isn't set, assume it's `private`.
|
||||
|
||||
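For example, the `private` level can be expressed as a policy that selects the pods but allows no ingress — a sketch (note that covering pods with *no* `security` label will need a different selector):

```bash
kubectl apply -f- <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: security-private
spec:
  podSelector:
    matchLabels:
      security: private
  policyTypes:
    - Ingress
  # no "ingress" rules listed: all inbound traffic is denied
EOF
```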
---
|
||||
|
||||
## Test setup
|
||||
|
||||
- Deploy a copy of the DockerCoins app in a new namespace
|
||||
|
||||
- Modify the pod templates so that:
|
||||
|
||||
- `webui` has `security=public`
|
||||
|
||||
- `worker` has `security=private`
|
||||
|
||||
- `hasher`, `redis`, `rng` have `security=namespace`
|
||||
|
||||
---
|
||||
|
||||
## Implement and test policies
|
||||
|
||||
- Write the network policies
|
||||
|
||||
(feel free to draw inspiration from the ones we've seen so far)
|
||||
|
||||
- Check that:
|
||||
|
||||
- you can connect to the `webui` from outside the cluster
|
||||
|
||||
- the application works correctly (shows 3-4 hashes/second)
|
||||
|
||||
- you cannot connect to the `hasher`, `redis`, `rng` services
|
||||
|
||||
- you cannot connect to or even ping the `worker` pods
|
||||
9
slides/exercises/rbac-brief.md
Normal file
@@ -0,0 +1,9 @@
|
||||
## Exercise — RBAC
|
||||
|
||||
- Create two namespaces for users `alice` and `bob`
|
||||
|
||||
- Give each user full access to their own namespace
|
||||
|
||||
- Give each user read-only access to the other's namespace
|
||||
|
||||
- Let `alice` view the nodes of the cluster as well
|
||||
97
slides/exercises/rbac-details.md
Normal file
@@ -0,0 +1,97 @@
|
||||
# Exercise — RBAC
|
||||
|
||||
We want to:
|
||||
|
||||
- Create two namespaces for users `alice` and `bob`
|
||||
|
||||
- Give each user full access to their own namespace
|
||||
|
||||
- Give each user read-only access to the other's namespace
|
||||
|
||||
- Let `alice` view the nodes of the cluster as well
|
||||
|
||||
---
|
||||
|
||||
## Initial setup
|
||||
|
||||
- Create two namespaces named `alice` and `bob`
|
||||
|
||||
- Check that if we impersonate Alice, we can't access her namespace yet:
|
||||
```bash
|
||||
kubectl --as alice get pods --namespace alice
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Access for Alice
|
||||
|
||||
- Grant Alice full access to her own namespace
|
||||
|
||||
(you can use a pre-existing Cluster Role)
|
||||
|
||||
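For instance, by binding the built-in `admin` ClusterRole within her namespace (the binding name is arbitrary):

```bash
kubectl create rolebinding alice-admin \
  --clusterrole=admin --user=alice --namespace=alice
```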
- Check that Alice can create stuff in her namespace:
|
||||
```bash
|
||||
kubectl --as alice create deployment hello --image nginx --namespace alice
|
||||
```
|
||||
|
||||
- But that she can't create stuff in Bob's namespace:
|
||||
```bash
|
||||
kubectl --as alice create deployment hello --image nginx --namespace bob
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Access for Bob
|
||||
|
||||
- Similarly, grant Bob full access to his own namespace
|
||||
|
||||
- Check that Bob can create stuff in his namespace:
|
||||
```bash
|
||||
kubectl --as bob create deployment hello --image nginx --namespace bob
|
||||
```
|
||||
|
||||
- But that he can't create stuff in Alice's namespace:
|
||||
```bash
|
||||
kubectl --as bob create deployment hello --image nginx --namespace alice
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Read-only access
|
||||
|
||||
- Now, give Alice read-only access to Bob's namespace
|
||||
|
||||
- Check that Alice can view Bob's stuff:
|
||||
```bash
|
||||
kubectl --as alice get pods --namespace bob
|
||||
```
|
||||
|
||||
- But that she can't touch this:
|
||||
```bash
|
||||
kubectl --as alice delete pods --namespace bob --all
|
||||
```
|
||||
|
||||
- Likewise, give Bob read-only access to Alice's namespace
|
||||
|
||||
---
|
||||
|
||||
## Nodes
|
||||
|
||||
- Give Alice read-only access to the cluster nodes
|
||||
|
||||
(this will require creating a custom Cluster Role)
|
||||
|
||||
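For instance (the role and binding names are arbitrary):

```bash
# ClusterRole granting read-only access to nodes...
kubectl create clusterrole view-nodes \
  --verb=get,list,watch --resource=nodes
# ...bound to Alice cluster-wide.
kubectl create clusterrolebinding alice-view-nodes \
  --clusterrole=view-nodes --user=alice
```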
- Check that Alice can view the nodes:
|
||||
```bash
|
||||
kubectl --as alice get nodes
|
||||
```
|
||||
|
||||
- But that Bob cannot:
|
||||
```bash
|
||||
kubectl --as bob get nodes
|
||||
```
|
||||
|
||||
- And that Alice can't update nodes:
|
||||
```bash
|
||||
kubectl --as alice label nodes --all hello=world
|
||||
```
|
||||
9
slides/exercises/remotecluster-brief.md
Normal file
@@ -0,0 +1,9 @@
|
||||
## Exercise — Remote Cluster
|
||||
|
||||
- Install kubectl locally
|
||||
|
||||
- Retrieve the kubeconfig file of our remote cluster
|
||||
|
||||
- Deploy dockercoins on that cluster
|
||||
|
||||
- Access an internal service without exposing it
|
||||
62
slides/exercises/remotecluster-details.md
Normal file
@@ -0,0 +1,62 @@
|
||||
# Exercise — Remote Cluster
|
||||
|
||||
- We want to control a remote cluster
|
||||
|
||||
- Then we want to run a copy of dockercoins on that cluster
|
||||
|
||||
- We want to be able to connect to an internal service
|
||||
|
||||
---
|
||||
|
||||
## Goal
|
||||
|
||||
- Be able to access e.g. hasher, rng, or webui
|
||||
|
||||
(without exposing them with a NodePort or LoadBalancer service)
|
||||
|
||||
---
|
||||
|
||||
## Getting access to the cluster
|
||||
|
||||
- If you don't have `kubectl` on your machine, install it
|
||||
|
||||
- Download the kubeconfig file from the remote cluster
|
||||
|
||||
(you can use `scp` or even copy-paste it)
|
||||
|
||||
- If you already have a kubeconfig file on your machine:
|
||||
|
||||
- save the remote kubeconfig with another name (e.g. `~/.kube/config.remote`)
|
||||
|
||||
- set the `KUBECONFIG` environment variable to point to that file name
|
||||
|
||||
- ...or use the `--kubeconfig=...` option with `kubectl`
|
||||
|
||||
- Check that you can access the cluster (e.g. `kubectl get nodes`)
|
||||
|
||||
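Putting it together (the host name and file paths are placeholders):

```bash
# Copy the kubeconfig from the remote cluster (placeholder host name):
scp user@remote-cluster:.kube/config ~/.kube/config.remote

# Point kubectl at it for this shell session:
export KUBECONFIG=~/.kube/config.remote
kubectl get nodes
```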
---

## If you get an error...

⚠️ The following applies to clusters deployed with `kubeadm`

- If you have a cluster where the nodes are named `node1`, `node2`, etc.

- `kubectl` commands might show connection errors with internal IP addresses

  (e.g. 10.10... or 172.17...)

- In that case, you might need to edit the `kubeconfig` file:

  - find the server address

  - update it to put the *external* address of the first node of the cluster

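A kubeconfig file is plain YAML, and the address to edit lives under `clusters[].cluster.server`. Here is a minimal sketch of the structure (all names, addresses, and credentials below are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: remote
  cluster:
    certificate-authority-data: LS0t...   # abbreviated
    server: https://203.0.113.10:6443     # put the node's *external* address here
contexts:
- name: remote
  context:
    cluster: remote
    user: remote-admin
current-context: remote
users:
- name: remote-admin
  user:
    client-certificate-data: LS0t...      # abbreviated
    client-key-data: LS0t...              # abbreviated
```
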
---

## Deploying an app

- Deploy another copy of dockercoins from your local machine

- Access internal services (e.g. with `kubectl port-forward`)

13
slides/exercises/sealed-secrets-brief.md
Normal file
@@ -0,0 +1,13 @@
## Exercise — Sealed Secrets

- Install the sealed secrets operator

- Create a secret, seal it, load it in the cluster

- Check that sealed secrets are "locked"

  (can't be used with a different name, namespace, or cluster)

- Bonus: migrate a sealing key to another cluster

- Set RBAC permissions to grant selective access to secrets

117
slides/exercises/sealed-secrets-details.md
Normal file
@@ -0,0 +1,117 @@
# Exercise — Sealed Secrets

This is a "combo exercise" to practice the following concepts:

- Secrets (exposing them in containers)

- RBAC (granting specific permissions to specific users)

- Operators (specifically, sealed secrets)

- Migrations (copying/transferring resources from one cluster to another)

For this exercise, you will need two clusters.

(It can be two local clusters.)

We will call them "dev cluster" and "prod cluster".

---

## Overview

- For simplicity, our application will be NGINX (or `jpetazzo/color`)

- Our application needs two secrets:

  - a *logging API token* (not too sensitive; same in dev and prod)

  - a *database password* (sensitive; different in dev and prod)

- Secrets can be exposed as env vars, or mounted in volumes

  (it doesn't matter for this exercise)

- We want to prepare and deploy the application in the dev cluster

- ...Then deploy it to the prod cluster

---

## Step 1 (easy)

- On the dev cluster, create a Namespace called `dev`

- Create the two secrets, `logging-api-token` and `database-password`

  (the content doesn't matter; put a random string of your choice)

- Create a Deployment called `app` using both secrets

  (use a mount or environment variables; whatever you prefer!)

- Verify that the secrets are available to the Deployment

  (e.g. with `kubectl exec`)

- Generate YAML manifests for the application (Deployment+Secrets)

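The manifests for this step could look like the following sketch (the secret and Deployment names match the exercise; the key names and values are arbitrary, and the secrets are exposed here as environment variables):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: logging-api-token
  namespace: dev
stringData:
  token: dummy-logging-token
---
apiVersion: v1
kind: Secret
metadata:
  name: database-password
  namespace: dev
stringData:
  password: dummy-dev-password
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: nginx
        env:
        - name: LOGGING_API_TOKEN
          valueFrom:
            secretKeyRef:
              name: logging-api-token
              key: token
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-password
              key: password
```
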
---

## Step 2 (medium)

- Deploy the sealed secrets operator on the dev cluster

- In the YAML, replace the Secrets with SealedSecrets

- Delete the `dev` Namespace, recreate it, redeploy the app

  (to make sure everything works fine)

- Create a `staging` Namespace and try to deploy the app

- If something doesn't work, fix it

--

- Hint: set the *scope* of the sealed secrets

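With the default *strict* scope, a sealed secret only decrypts with its exact original name and namespace; to reuse it in the `staging` Namespace, it must be sealed with the *cluster-wide* scope. One way is to annotate the input Secret before running `kubeseal` (a sketch; the token value is a dummy):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: logging-api-token
  annotations:
    sealedsecrets.bitnami.com/cluster-wide: "true"
stringData:
  token: dummy-logging-token
```

(`kubeseal` also accepts a `--scope` flag; the `namespace-wide` scope only relaxes the name, not the namespace.)
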
---

## Step 3 (hard)

- On the prod cluster, create a Namespace called `prod`

- Try to deploy the application using the YAML manifests

- It won't work (the cluster needs the sealing key)

- Fix it!

  (check the next slides if you need hints)

--

- You will have to copy the Sealed Secret private key

--

- And restart the operator so that it picks up the key

---

## Step 4 (medium)

Let's say that we have a user called `alice` on the prod cluster.

(You can use `kubectl --as=alice` to impersonate her.)

We want Alice to be able to:

- deploy the whole application in the `prod` namespace

- access the *logging API token* secret

- but *not* the *database password* secret

- view the logs of the app

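The secret-access part can be expressed with RBAC's `resourceNames`, which grants access to one secret by name but not the other. A sketch (the Role and RoleBinding names are made up):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-logging-token
  namespace: prod
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["logging-api-token"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-logging-token
  namespace: prod
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-logging-token
  apiGroup: rbac.authorization.k8s.io
```

Note that `resourceNames` restrictions work with `get` but not with `list`; deploying the app and viewing logs would need additional rules.
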
9
slides/exercises/tf-nodepools-brief.md
Normal file
@@ -0,0 +1,9 @@
## Exercise — Terraform Node Pools

- Write a Terraform configuration to deploy a cluster

- The cluster should have two node pools with autoscaling

- Deploy two apps, each using exclusively one node pool

- Bonus: deploy an app balanced across both node pools

69
slides/exercises/tf-nodepools-details.md
Normal file
@@ -0,0 +1,69 @@
# Exercise — Terraform Node Pools

- Write a Terraform configuration to deploy a cluster

- The cluster should have two node pools with autoscaling

- Deploy two apps, each using exclusively one node pool

- Bonus: deploy an app balanced across both node pools

---

## Cluster deployment

- Write a Terraform configuration to deploy a cluster

- We want to have two node pools with autoscaling

- Example for sizing:

  - 4 GB / 1 CPU per node

  - pools of 1 to 4 nodes

---

## Cluster autoscaling

- Deploy an app on the cluster

  (you can use `nginx`, `jpetazzo/color`...)

- Set a resource request (e.g. 1 GB RAM)

- Scale up and verify that the autoscaler kicks in

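Setting the request could look like this sketch (the app name and image are just examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blue
  template:
    metadata:
      labels:
        app: blue
    spec:
      containers:
      - name: blue
        image: jpetazzo/color
        resources:
          requests:
            memory: 1Gi
```

With 4 GB nodes, scaling this Deployment to a handful of replicas should exceed a pool's current capacity and trigger a scale-up.
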
---

## Pool isolation

- We want to deploy two apps

- The first app should be deployed exclusively on the first pool

- The second app should be deployed exclusively on the second pool

- Check the next slide for hints!

---

## Hints

- One solution involves adding a `nodeSelector` to the pod templates

- Another solution involves adding:

  - `taints` to the node pools

  - matching `tolerations` to the pod templates

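Both hints end up as a few lines in the pod template (`spec.template.spec`). The label and taint keys below are hypothetical; the actual per-pool node label depends on your provider:

```yaml
# Solution 1: nodeSelector
spec:
  nodeSelector:
    pool: first            # hypothetical per-pool node label

# Solution 2: tolerate a taint placed on the pool's nodes
# (taint key/value are made up; combine with a nodeSelector
# if the other pool must also be kept off-limits)
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: first
    effect: NoSchedule
```
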
---

## Balancing

- Step 1: make sure that the pools are not balanced

- Step 2: deploy a new app, check that it goes to the emptiest pool

- Step 3: update the app so that it balances (as much as possible) between pools

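For step 3, one option is `topologySpreadConstraints` in the pod template, spreading replicas across pools (a sketch; it assumes a per-pool node label such as `pool`, which is hypothetical and provider-dependent):

```yaml
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: pool            # hypothetical per-pool node label
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: balanced
```
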
2
slides/find-non-ascii.sh
Executable file
@@ -0,0 +1,2 @@
#!/bin/sh
grep --color=auto -P -n "[^\x00-\x7F]" */*.md
60
slides/find-unmerged-changes.sh
Executable file
@@ -0,0 +1,60 @@
#!/bin/sh

# The materials for a given training live in their own branch.
# Sometimes, we write custom content (or simply new content) for a training,
# and that content doesn't get merged back to main. This script tries to
# detect that with the following heuristics:
# - list all remote branches
# - for each remote branch, list the changes that weren't merged into main
#   (using "diff main...$BRANCH", three dots)
# - ignore a bunch of training-specific files that change all the time anyway
# - for the remaining files, compute the diff between main and the branch
#   (using "diff main..$BRANCH", two dots)
# - ignore changes of less than 10 lines
# - also ignore a few red herrings
# - display whatever is left

# For "git diff" (in the filter function) to work correctly, we must be
# at the root of the repo.
cd "$(git rev-parse --show-toplevel)"

BRANCHES=$(git branch -r | grep -v origin/HEAD | grep origin/2)

filter() {
    threshold=10
    while read filename; do
        case $filename in
            # Generic training-specific files
            slides/*.html) continue;;
            slides/*.yml) continue;;
            slides/logistics*.md) continue;;
            # Specific content that can be ignored
            #slides/containers/Local_Environment.md) threshold=100;;
            # Content that was moved/refactored enough to confuse us
            slides/containers/Local_Environment.md) threshold=100;;
            slides/exercises.md) continue;;
            slides/k8s/batch-jobs) threshold=20;;
            # Renames
            */{*}*) continue;;
        esac
        git diff --find-renames --numstat main..$BRANCH -- "$filename" | {
            # If the files are identical, the diff will be empty, and "read" will fail.
            read plus minus filename || return
            # Ignore binary files (FIXME though?)
            if [ "$plus" = - ]; then
                return
            fi
            diff=$((plus-minus))
            if [ "$diff" -gt "$threshold" ]; then
                echo git diff main..$BRANCH -- $filename
            fi
        }
    done
}

for BRANCH in $BRANCHES; do
    if FILES=$(git diff --find-renames --name-only main...$BRANCH | filter | grep .); then
        echo "🌳 $BRANCH:"
        echo "$FILES"
    fi
done
118
slides/fix-redirects.sh
Executable file
@@ -0,0 +1,118 @@
#!/bin/sh

# This script helps to add "force-redirects" where needed.
# This might replace your entire git repos with Vogon poetry.
# Use at your own peril!

set -eu

# The easiest way to set this env var is by copy-pasting from
# the netlify web dashboard, then doctoring the output a bit.
# Yeah, that's gross, but after spending 10 minutes with the
# API and the CLI and OAuth, it took about 10 seconds to do it
# with le copier-coller, so ... :)

SITES="
2020-01-caen
2020-01-zr
2020-02-caen
2020-02-enix
2020-02-outreach
2020-02-vmware
2020-03-ardan
2020-03-qcon
alfun-2019-06
boosterconf2018
clt-2019-10
dc17eu
decembre2018
devopsdaysams2018
devopsdaysmsp2018
gotochgo2018
gotochgo2019
indexconf2018
intro-2019-01
intro-2019-04
intro-2019-06
intro-2019-08
intro-2019-09
intro-2019-11
intro-2019-12
k8s2d
kadm-2019-04
kadm-2019-06
kube
kube-2019-01
kube-2019-02
kube-2019-03
kube-2019-04
kube-2019-06
kube-2019-08
kube-2019-09
kube-2019-10
kube-2019-11
lisa-2019-10
lisa16t1
lisa17m7
lisa17t9
maersk-2019-07
maersk-2019-08
ndcminnesota2018
nr-2019-08
oscon2018
oscon2019
osseu17
pycon2019
qconsf18wkshp
qconsf2017intro
qconsf2017swarm
qconsf2018
qconuk2019
septembre2018
sfsf-2019-06
srecon2018
swarm2017
velny-k8s101-2018
velocity-2019-11
velocityeu2018
velocitysj2018
vmware-2019-11
weka
wwc-2019-10
wwrk-2019-05
wwrk-2019-06
"

for SITE in $SITES; do
    echo "##### $SITE"
    git checkout -q origin/$SITE
    # No _redirects? No problem.
    if ! [ -f _redirects ]; then
        continue
    fi
    # If there is already a force redirect on /, we're good.
    if grep '^/ .* 200!' _redirects; then
        continue
    fi
    # If there is a redirect on / ... and it's not forced ... do something.
    if grep "^/ .* 200$" _redirects; then
        echo "##### $SITE needs to be patched"
        sed -i 's,^/ \(.*\) 200$,/ \1 200!,' _redirects
        git add _redirects
        git commit -m "fix-redirects.sh: adding forced redirect"
        git push origin HEAD:$SITE
        continue
    fi
    if grep "^/ " _redirects; then
        echo "##### $SITE with / but no status code"
        echo "##### Should I add '200!' ?"
        read foo
        sed -i 's,^/ \(.*\)$,/ \1 200!,' _redirects
        git add _redirects
        git commit -m "fix-redirects.sh: adding status code and forced redirect"
        git push origin HEAD:$SITE
        continue
    fi
    echo "##### $SITE without / ?"
    cat _redirects
done
BIN
slides/images/aj-containers.jpeg
Normal file
Binary file not shown.
After Width: | Height: | Size: 127 KiB
BIN
slides/images/ambassador-diagram.odg
Normal file
Binary file not shown.
BIN
slides/images/ambassador-diagram.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 84 KiB
BIN
slides/images/api-request-lifecycle.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 203 KiB
Some files were not shown because too many files have changed in this diff