Compare commits

346 Commits

Author SHA1 Message Date
Jerome Petazzoni
8ef6219295 fix-redirects.sh: adding forced redirect 2020-04-07 16:48:42 -05:00
Bridget Kromhout
346ce0e15c Merge pull request #304 from bridgetkromhout/devopsdaysmsp2018
testing changes for 90min
2018-07-10 18:03:16 -05:00
Bridget Kromhout
964d936435 Merge branch 'devopsdaysmsp2018' into devopsdaysmsp2018 2018-07-10 07:47:09 -05:00
Bridget Kromhout
546d9a2986 Testing redirect 2018-07-10 07:45:55 -05:00
Bridget Kromhout
8e5d27b185 changing redirects back 2018-07-10 07:40:49 -05:00
Bridget Kromhout
e8d9e94b72 First pass at edits for 90min workshop 2018-07-10 07:37:39 -05:00
Bridget Kromhout
ca980de2fd Merge branch 'master' of github.com:bridgetkromhout/container.training into devopsdaysmsp2018 2018-07-10 07:36:05 -05:00
Bridget Kromhout
4b2b5ff7e4 Merge pull request #303 from jpetazzo/master
bringing branch up to date
2018-07-10 07:34:08 -05:00
Bridget Kromhout
ee2b20926c Merge pull request #302 from bridgetkromhout/version-1.11.0
Version bump
2018-07-10 06:18:30 -05:00
Bridget Kromhout
96a76d2a19 Version bump 2018-07-10 06:17:07 -05:00
Bridget Kromhout
78ac91fcd5 Merge pull request #300 from bridgetkromhout/add-msp
Adding MSP 2018
2018-07-10 05:46:23 -05:00
Bridget Kromhout
971b5b0e6d Let's not link quite yet 2018-07-10 05:45:22 -05:00
Bridget Kromhout
3393563498 Adding MSP 2018 2018-07-06 16:11:37 -05:00
Bridget Kromhout
64fb407e8c Merge pull request #299 from bridgetkromhout/devopsdaysmsp2018
devopsdays MSP 2018-specific stuff
2018-07-06 16:04:20 -05:00
Bridget Kromhout
ea4f46599d Adding devopsdays MSP 2018 2018-07-06 16:02:02 -05:00
Bridget Kromhout
94483ebfec Merge pull request #298 from jpetazzo/improve-index-format
Switch to two-line format since our titles are so long
2018-07-06 15:43:01 -05:00
Jerome Petazzoni
db5d5878f5 Switch to two-line format since our titles are so long 2018-07-03 10:47:41 -05:00
ctas582
2585daac9b Force rng to be single threaded (#293) 2018-06-28 08:20:54 -05:00
Bridget Kromhout
21043108b3 Merge pull request #296 from bridgetkromhout/version-up
Version bump
2018-06-27 01:14:06 -05:00
Bridget Kromhout
65faa4507c Version bump 2018-06-27 08:12:40 +02:00
Bridget Kromhout
644f2b9c7a Merge pull request #295 from bridgetkromhout/add-slides-ams
Adding slides link for ams
2018-06-26 17:04:27 -05:00
Bridget Kromhout
dab9d9fb7e Adding slides link 2018-06-27 00:03:18 +02:00
Diego Quintana
139757613b Update Container_Networking_Basics.md
Added needed single quotes. I've also moved `nginx` to the end of the line, to follow a more consistent syntax (`options` before `name|id`).

```
Usage:	docker inspect [OPTIONS] NAME|ID [NAME|ID...]

Return low-level information on Docker objects

Options:
  -f, --format string   Format the output using the given Go template
  -s, --size            Display total file sizes if the type is container
      --type string     Return JSON for specified type
```
2018-06-22 10:58:26 -05:00
Bridget Kromhout
10eed2c1c7 Merge pull request #288 from ctas582/typos
Correct typos
2018-06-22 09:21:56 -05:00
ctas582
c4fa75a1da Correct typos 2018-06-21 15:00:36 +01:00
ctas582
847140560f Correct typo 2018-06-21 14:16:05 +01:00
ctas582
1dc07c33ab Correct typos 2018-06-20 11:19:28 +01:00
Bridget Kromhout
4fc73d95c0 Merge pull request #285 from bridgetkromhout/vupdate
Updating version
2018-06-12 10:14:21 -07:00
Bridget Kromhout
690ed55953 Updating version 2018-06-12 10:12:04 -07:00
Bridget Kromhout
16a5809518 Merge pull request #284 from bridgetkromhout/add-vel-2day
Adding Erik and Brian's two-day Velocity training to the front page
2018-06-12 09:01:32 -07:00
Bridget Kromhout
0fed34600b Adding Erik and Brian's two-day 2018-06-12 08:55:53 -07:00
Jerome Petazzoni
2d95f4177a Remove extraneous python invocation 2018-06-12 04:25:00 -05:00
Bridget Kromhout
e9d1db56fa Adding VelNY bootcamp (#283)
* Adding VelNY bootcamp

* Colon not good here
2018-06-12 04:09:54 -05:00
Bridget Kromhout
a076a766a9 Merge pull request #282 from bridgetkromhout/reorder
Reordering upcoming events
2018-06-11 09:47:57 -07:00
Bridget Kromhout
be3c78bf54 Reordering 2018-06-11 09:40:30 -07:00
Bridget Kromhout
5bb6b8e2ab Merge pull request #281 from bridgetkromhout/add-velocity-sj-2018
Adding Velocity SJ 2018
2018-06-11 09:08:35 -07:00
Bridget Kromhout
f79193681d Adding Velocity SJ 2018 2018-06-11 08:53:53 -07:00
Bridget Kromhout
379ae69db5 Merge pull request #277 from bridgetkromhout/rollout-failure
Clarifying rollout failure via dashboard
2018-06-11 08:34:36 -07:00
Jerome Petazzoni
cde89f50a2 Add mention to skip slide if dashboard isn't deployed 2018-06-10 17:07:56 -05:00
Bridget Kromhout
98563ba1ce Clarifying rollout failure via dashboard 2018-06-04 20:58:57 -05:00
Bridget Kromhout
99bf8cc39f Merge pull request #271 from jpetazzo/new-index-generator
Replace index.html with a generator
2018-06-05 02:13:27 +02:00
Bridget Kromhout
ea642cf90e Merge pull request #274 from bridgetkromhout/eng-v
bumping version
2018-06-04 23:28:48 +02:00
Bridget Kromhout
a7d89062cf Bumping engine version 2018-06-04 15:43:30 -05:00
Bridget Kromhout
564e4856b4 Merge branch 'master' of https://github.com/jpetazzo/container.training 2018-06-04 14:41:07 -05:00
Bridget Kromhout
011cd08af3 Merge pull request #269 from jpetazzo/kubectlproxy
Show how to access internal services with kubectl proxy
2018-06-04 21:40:40 +02:00
Jerome Petazzoni
e294a4726c Update version numbers 2018-06-04 08:47:30 -05:00
Jerome Petazzoni
a21e8b0849 Image and title size fixes 2018-06-04 06:11:00 -05:00
Jerome Petazzoni
cc6f36b50f Wording (non-native speakers probably don't know boo-boo) 2018-06-04 05:54:02 -05:00
Jerome Petazzoni
6e35162788 Remove 'kubernetes in action' demo 2018-06-04 05:50:21 -05:00
Jerome Petazzoni
30ca940eeb Opt-out a bunch of slides in the deep dive section 2018-06-04 05:49:24 -05:00
Jerome Petazzoni
14eb19a42b Typo fixes 2018-06-04 05:43:28 -05:00
Jerome Petazzoni
da053ecde2 Update fundamentals TOC 2018-06-03 15:27:27 -05:00
Jerome Petazzoni
c86ef7de45 Add 'past workshops' page and backfill 2016-2017 workshops 2018-06-03 09:55:43 -05:00
Jérôme Petazzoni
c5572020b9 Add a few slides about resource limits (#273)
The section about namespaces and cgroups is very thorough,
but we also need something showing how to practically
limit container resource usage without diving into a very
deep technical chapter.
2018-06-03 05:28:16 -05:00
Jerome Petazzoni
3d7ed3a3f7 Clarify how to stop kubectl proxy 2018-06-03 05:10:48 -05:00
Bridget Kromhout
138163056f Merge pull request #270 from jpetazzo/kubectl-create-namespace
Show an easier way to create namespaces
2018-06-02 17:12:38 +02:00
Alexis Daboville
5e78e00bc9 Small typos (#272)
* Small typo

* elastichsearch -> elasticsearch

* realeased -> released
2018-06-02 09:09:38 -05:00
Jerome Petazzoni
2cb06edc2d Replace index.html with a generator
The events are now listed in index.yaml, and generated
with index.py. The latter is called automatically by
build.sh.

The list of events has been slightly improved:
- we only show the last 5 past events
- video recordings now get a section of their own
2018-05-31 14:22:23 -05:00
Jerome Petazzoni
8915bfb443 Update README section indicating 'teacher for hire' 2018-05-31 12:55:09 -05:00
Jerome Petazzoni
24017ad83f Clarify usage of <<< 2018-05-29 11:06:31 -05:00
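For context on the commit above: `<<<` is the Bash "here-string" operator, which feeds a single string to a command's standard input without a pipe or a temporary file. A minimal sketch (not the slide content itself):

```shell
#!/usr/bin/env bash
# Here-string: the word after <<< becomes the command's stdin
# (Bash appends a trailing newline automatically).
upper=$(tr a-z A-Z <<< "hello")
echo "$upper"

# Equivalent to the more verbose pipe form:
echo "hello" | tr a-z A-Z
```

Both forms print `HELLO`; the here-string just avoids spawning the extra `echo` process.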
Jerome Petazzoni
3edebe3747 New script to count slides
count-slides.py will count the number of slides per section,
and compute the size of each chapter as well. It is not perfect
(for instance, it assumes that excluded_classes=in_person)
but it should help to assess the size of the content before
delivering long workshops.
2018-05-29 10:03:11 -05:00
Jerome Petazzoni
636a2d5c87 Show an easier way to create namespaces
We were using 'kubectl apply' with a YAML snippet.
It's valid, but it's quite convoluted. Instead,
let's use 'kubectl create namespace'. We can still
mention the other method of course.
2018-05-29 05:53:12 -05:00
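The two approaches this commit contrasts look roughly like the following (a sketch; the namespace name `blue` is illustrative and the exact YAML in the slides may differ). These commands assume a live cluster, so they are shown for reference only:

```shell
# The convoluted way: apply an inline YAML snippet via stdin.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: blue
EOF

# The easier way: a dedicated subcommand.
kubectl create namespace blue
```

Both produce the same Namespace object; the second form is just less to type and less to explain.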
Jerome Petazzoni
4213aba76e Show how to access internal services with kubectl proxy 2018-05-29 05:47:27 -05:00
Jerome Petazzoni
3e822bad82 Add a slide about JSON file and log rotation 2018-05-28 10:28:52 -05:00
Jerome Petazzoni
cd5b06b9c7 Show how to connect/disconnect dynamically 2018-05-28 10:08:11 -05:00
Jerome Petazzoni
b0841562ea Add a bunch of Dockerfile examples 2018-05-25 09:31:50 -05:00
Jerome Petazzoni
06f70e8246 Add 'tree' in the VMs
This is a convenient tool to get an idea of what a
directory hierarchy looks like.
2018-05-24 07:06:21 -05:00
Jerome Petazzoni
9614f8761a Add link to Serge Hallyn blog post 2018-05-24 06:03:28 -05:00
Jerome Petazzoni
92f9ab9001 Add a section leading to multi-stage builds 2018-05-24 05:46:28 -05:00
Bridget Kromhout
ad554f89fc New events (and old event to past) 2018-05-23 15:31:07 -05:00
Jerome Petazzoni
5bb37dff49 Parametrize git repo and slides URLs
We have two extra variables in the slides:
@@GITREPO@@ (current value: github.com/jpetazzo/container.training)
@@SLIDES@@ (current value: http://container.training/)

These variables are set with gitrepo and slides in the YAML files.
(Just like the chat variable.)

Supersedes #256
2018-05-23 15:27:57 -05:00
Bridget Kromhout
0d52dc2290 Merge pull request #267 from jasonknudsen/patch-1
Update README.md - typo
2018-05-23 10:22:05 -05:00
Bridget Kromhout
c575cb9cd5 New events (and old event to past) 2018-05-23 10:18:02 -05:00
jasonknudsen
9cdccd40c7 Update README.md - typo
Typo in instructions - should be pull_images not pull-images
2018-05-23 08:17:46 -07:00
Bret Fisher
fdd10c5a98 fix docker-compose scale up change (#265) 2018-05-18 10:10:06 -05:00
mkrupczak3
8a617fdbc7 change "alpine telnet" to "busybox telnet"
Newer versions of alpine may not include telnet
2018-05-18 10:01:41 -05:00
Jerome Petazzoni
a058a74d8f Minor fix for hidden autopilot command 2018-05-18 09:16:34 -05:00
Bret Fisher
4896a3265e Update volume chapter 2018-05-18 08:08:33 -05:00
Bret Fisher
131947275c Improve explanation about images and layers 2018-05-18 08:08:27 -05:00
Bret Fisher
1b7e8cec5e Update info about Docker for Mac/Windows 2018-05-18 08:08:20 -05:00
Bret Fisher
c17c0ea9aa Remove obsolete MAINTAINER command 2018-05-18 08:08:08 -05:00
Bridget Kromhout
7b378d2425 Merge pull request #264 from bridgetkromhout/master
Moving NDC to past
2018-05-14 06:56:23 -05:00
Bridget Kromhout
47da7d8278 Moving NDC to past 2018-05-14 06:53:08 -05:00
Bridget Kromhout
3c69941fcd Merge pull request #262 from bridgetkromhout/craft-past
Craft to past
2018-05-10 07:38:44 -05:00
Bridget Kromhout
beb188facf Craft to past 2018-05-10 07:36:30 -05:00
Bridget Kromhout
dfea8f6535 Merge pull request #258 from bridgetkromhout/add-ndc
Adding NDC Minnesota
2018-05-08 21:37:43 -05:00
Bridget Kromhout
3b89149bf0 Adding NDC Minnesota 2018-05-08 21:34:53 -05:00
Bret Fisher
c8d73caacd move visualizer to service and stack (#237) 2018-05-08 10:51:40 -05:00
Jérôme Petazzoni
290185f16b Merge pull request #255 from eightlimbed/patch-1
fixed a typo
2018-05-07 13:52:40 -05:00
Jérôme Petazzoni
05e9d36eed Merge pull request #254 from mkrupczak3/master
Fix typo create network to network create
2018-05-07 13:51:12 -05:00
Jérôme Petazzoni
05815fcbf3 Merge pull request #240 from BretFisher/settings-update
updated versions, renamed files
2018-05-07 13:15:34 -05:00
Lee Gaines
bce900a4ca fixed a typo
changed "contain" to "contained" in the first bullet point
2018-05-06 21:49:43 -07:00
mkrupczak3
bf7ba49013 Fix typo create network to network create 2018-05-05 16:55:22 -04:00
Bret Fisher
323aa075b3 removing settings feature teaser 2018-05-05 12:54:20 -04:00
Jérôme Petazzoni
f526014dc8 Merge pull request #253 from BretFisher/ingress-graphics
swarm ingress images and updates
2018-05-05 06:39:13 -05:00
Jérôme Petazzoni
dec546fa65 Merge pull request #252 from BretFisher/patch-15
update docker-compose scale command
2018-05-05 06:36:53 -05:00
Jérôme Petazzoni
36390a7921 Merge pull request #251 from BretFisher/swarm-3-nodes
moving to 3 node swarms by default
2018-05-05 06:35:45 -05:00
Jérôme Petazzoni
313d705778 Merge pull request #248 from BretFisher/fundamentals-cnm-updates
more fundamentals CNM tweaks
2018-05-05 06:20:06 -05:00
Jérôme Petazzoni
ca34efa2d7 Merge pull request #247 from BretFisher/patch-13
adding more images to cache
2018-05-05 05:49:52 -05:00
Jérôme Petazzoni
25e92cfe39 Merge pull request #245 from BretFisher/patch-12
more new features for swarm
2018-05-05 05:46:07 -05:00
Jérôme Petazzoni
999359e81a Update versions.md 2018-05-05 05:45:40 -05:00
Jérôme Petazzoni
3a74248746 Merge pull request #244 from BretFisher/patch-11
a bit more detail on network drivers included
2018-05-05 05:41:10 -05:00
Jérôme Petazzoni
cb828ecbd3 Update Container_Network_Model.md 2018-05-05 05:41:01 -05:00
Jérôme Petazzoni
e1e984e02d Merge pull request #243 from BretFisher/patch-10
Updating some compose info for devs
2018-05-05 05:40:10 -05:00
Jérôme Petazzoni
d6e19fe350 Update Compose_For_Dev_Stacks.md 2018-05-05 05:39:25 -05:00
Jérôme Petazzoni
1f91c748b5 Merge pull request #242 from BretFisher/check-for-entr-in-build
Friendly error if entr isn't installed for build.sh
2018-05-05 05:30:05 -05:00
Bret Fisher
38356acb4e swarm ingress images and updates 2018-05-04 13:00:49 -04:00
Bret Fisher
7b2d598c38 fix my fat fingers.
ugh, sorry, editing via GitHub and I need to go to bed :)
2018-05-04 00:20:31 -04:00
Bret Fisher
c276eb0cfa remove fat finger 2018-05-04 00:19:35 -04:00
Bret Fisher
571de591ca update docker-compose scale command
scale command is now legacy, use `--scale` option instead
2018-05-04 00:18:58 -04:00
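The change this commit tracks: the standalone `docker-compose scale` command became legacy in favor of the `--scale` flag on `up`. Roughly (the service name `worker` is hypothetical; both commands need a running Docker daemon, so they are shown for reference only):

```shell
# Legacy form (now deprecated):
docker-compose scale worker=5

# Current form: scale the service while bringing the stack up.
docker-compose up --detach --scale worker=5
```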
Bret Fisher
e49a197fd5 moving to 3 node swarms by default 2018-05-03 23:52:51 -04:00
Bret Fisher
a30eabc23a more fundamentals CNM tweaks 2018-05-03 19:28:39 -04:00
Bret Fisher
73c4cddba5 forgot one image :/ 2018-05-03 16:32:12 -04:00
Bret Fisher
6e341f770a adding more images to cache
Based on images used in swarm and fundamentals workshops
2018-05-03 16:24:54 -04:00
Bridget Kromhout
527145ec81 Merge pull request #241 from BretFisher/patch-8
date updates for container.training
2018-05-03 18:19:36 +02:00
Bret Fisher
c93edceffe more new features for swarm 2018-05-02 23:25:12 -04:00
Bret Fisher
6f9eac7c8e a bit more detail on network drivers included 2018-05-02 23:21:45 -04:00
Bret Fisher
522420ef34 Updating some compose info for devs 2018-05-02 23:18:19 -04:00
Bret Fisher
927bf052b0 Friendly error if entr isn't installed for build.sh 2018-05-02 23:08:52 -04:00
Bret Fisher
1e44689b79 swarm versions 2018-05-02 23:00:55 -04:00
Bret Fisher
b967865faa date updates for container.training 2018-05-02 22:24:12 -04:00
Bret Fisher
054c0cafb2 updated versions, renamed files 2018-05-02 17:43:08 -04:00
Jérôme Petazzoni
29e37c8e2b Merge pull request #235 from KMASubhani/patch-1
Update Getting_Inside.md
2018-04-25 23:33:24 -05:00
Jérôme Petazzoni
44fc2afdc7 Merge pull request #239 from BretFisher/fix-stack-deploy-cmd
reordering stack deploy cmd format
2018-04-25 23:29:58 -05:00
Jérôme Petazzoni
7776c8ee38 Merge pull request #238 from BretFisher/fix-detach-false
remove more unneeded detach=false
2018-04-25 23:27:54 -05:00
Bret Fisher
9ee7e1873f reordering stack deploy cmd format 2018-04-25 16:33:38 -05:00
Bret Fisher
e21fcbd1bd remove more unneeded detach=false 2018-04-25 16:26:28 -05:00
Khaja Mashood Ahmed Subhani
5852ab513d Update Getting_Inside.md
fixed spelling
2018-04-25 11:00:37 -05:00
Jérôme Petazzoni
3fe33e4e9e Merge pull request #234 from bridgetkromhout/adding-ndc
Adding NDC
2018-04-24 03:56:13 -05:00
Bridget Kromhout
c44b90b5a4 Adding NDC 2018-04-23 20:03:46 -05:00
Jérôme Petazzoni
f06dc6548c Merge pull request #232 from bridgetkromhout/rollout-params
Clarify rollout params
2018-04-23 11:32:25 -05:00
Jérôme Petazzoni
e13552c306 Merge pull request #224 from bridgetkromhout/re-order
Re-ordering "kubectl apply" discussion
2018-04-23 11:31:15 -05:00
Bridget Kromhout
0305c3783f Adding an overview; marking clarification as extra 2018-04-23 10:52:29 -05:00
Bridget Kromhout
5158ac3d98 Clarify rollout params 2018-04-22 15:49:32 -05:00
Jérôme Petazzoni
25c08b0885 Merge pull request #231 from bridgetkromhout/add-goto-kube101
Adding goto's kube101
2018-04-22 14:55:55 -05:00
Bridget Kromhout
f8131c97e9 Adding goto's kube101 2018-04-22 14:35:50 -05:00
Bridget Kromhout
3de1fab66a Clarifying failure mode 2018-04-22 14:04:57 -05:00
Jérôme Petazzoni
ab664128b7 Merge pull request #228 from bridgetkromhout/helm-completion
Correction for helm completion
2018-04-22 14:00:08 -05:00
Bridget Kromhout
91de693b80 Correction for helm completion 2018-04-22 13:33:54 -05:00
Jérôme Petazzoni
a64606fb32 Merge pull request #225 from bridgetkromhout/tail-log
Clarify log tailing
2018-04-22 13:14:11 -05:00
Jérôme Petazzoni
58d9103bd2 Merge pull request #223 from bridgetkromhout/1.10.1-updates
Updates for 1.10.1
2018-04-22 13:13:25 -05:00
Jérôme Petazzoni
61ab5be12d Merge pull request #222 from bridgetkromhout/weave-link
Link to Weave
2018-04-22 13:08:54 -05:00
Bridget Kromhout
030900b602 Clarify log tailing 2018-04-22 12:39:18 -05:00
Bridget Kromhout
476d689c7d Clarify naming 2018-04-22 12:32:11 -05:00
Bridget Kromhout
4aedbb69c2 Re-ordering 2018-04-22 12:14:16 -05:00
Bridget Kromhout
db2a68709c Updates for 1.10.1 2018-04-22 11:57:37 -05:00
Bridget Kromhout
f114a89136 Link to Weave 2018-04-22 11:08:17 -05:00
Jérôme Petazzoni
96eda76391 Merge pull request #220 from bridgetkromhout/rearrange-kube-halfday
Rearrange kube halfday
2018-04-21 10:48:21 -05:00
Bridget Kromhout
e7d9a8fa2d Correcting EFK 2018-04-21 10:43:39 -05:00
Bridget Kromhout
1cca8db828 Rearranging halfday for kube 2018-04-21 10:38:54 -05:00
Bridget Kromhout
2cde665d2f Merge pull request #219 from jpetazzo/re-add-kube-halfday
Re-add half day file
2018-04-21 10:17:45 -05:00
Jerome Petazzoni
d660c6342f Re-add half day file 2018-04-21 12:00:04 +02:00
Bridget Kromhout
7e8bb0e51f Merge pull request #218 from bridgetkromhout/cloud-typo
Typo fix
2018-04-20 16:49:31 -05:00
Bridget Kromhout
c87f4cc088 Typo fix 2018-04-20 16:47:13 -05:00
Jérôme Petazzoni
05c50349a8 Merge pull request #211 from BretFisher/patch-4
add popular swarm reverse proxy options
2018-04-20 02:38:00 -05:00
Jérôme Petazzoni
e985952816 Add colon and fix minor typo 2018-04-20 02:37:48 -05:00
Jérôme Petazzoni
19f0ef9c86 Merge pull request #216 from jpetazzo/googl
Replace goo.gl with 1.1.1.1
2018-04-20 02:36:15 -05:00
Bret Fisher
cc8e13a85f silly me, Traefik is golang 2018-04-20 03:07:40 -04:00
Bridget Kromhout
6475a05794 Update kubectlrun.md
Removing misleading term
2018-04-19 14:37:26 -05:00
Bridget Kromhout
cc9840afe5 Update kubectlrun.md 2018-04-19 07:36:37 -05:00
Bridget Kromhout
b7a2cde458 Merge pull request #215 from jpetazzo/more-options-to-setup-k8s
Mention Kubernetes the Hard Way and more options
2018-04-19 07:32:20 -05:00
Bridget Kromhout
453992b55d Update setup-k8s.md 2018-04-19 07:31:25 -05:00
Bridget Kromhout
0b1067f95e Merge pull request #217 from jpetazzo/tolerations
Add a line about tolerations
2018-04-19 07:28:57 -05:00
Jérôme Petazzoni
21777cd95b Merge pull request #214 from BretFisher/patch-7
we can now add/remove networks from services 🤗
2018-04-19 06:35:09 -05:00
Jérôme Petazzoni
827ad3bdf2 Merge pull request #213 from BretFisher/patch-6
product name change 🙄
2018-04-19 06:34:41 -05:00
Jérôme Petazzoni
7818157cd0 Merge pull request #212 from BretFisher/patch-5
adding 3rd party registry options
2018-04-19 06:34:22 -05:00
Jérôme Petazzoni
d547241714 Merge pull request #210 from BretFisher/patch-3
fix image size via pic css class
2018-04-19 06:31:46 -05:00
Jérôme Petazzoni
c41e0e9286 Merge pull request #209 from BretFisher/patch-2
removed older notes about detach and service logs
2018-04-19 06:31:17 -05:00
Jérôme Petazzoni
c2d4784895 Merge pull request #208 from BretFisher/patch-1
removed mention of compose upg 1.6 to 1.7
2018-04-19 06:30:47 -05:00
Jérôme Petazzoni
11163965cf Merge pull request #204 from bridgetkromhout/clarify-off-by-one
Clarify an off-by-one amount of pods
2018-04-19 06:30:19 -05:00
Jérôme Petazzoni
e9df065820 Merge pull request #197 from bridgetkromhout/patch-only-daemonset
Patch only daemonset pods
2018-04-19 06:27:52 -05:00
Jerome Petazzoni
101ab0c11a Add a line about tolerations 2018-04-19 06:25:41 -05:00
Jérôme Petazzoni
25f081c0b7 Merge pull request #190 from bridgetkromhout/daemonset
Clarifications around daemonsets
2018-04-19 06:21:58 -05:00
Jérôme Petazzoni
700baef094 Merge pull request #188 from bridgetkromhout/clarify-kinds
kubectl get all missing-type workaround
2018-04-19 06:19:00 -05:00
Jerome Petazzoni
3faa586b16 Remove NOC joke 2018-04-19 06:14:54 -05:00
Jerome Petazzoni
8ca77fe8a4 Merge branch 'googl' of git://github.com/bridgetkromhout/container.training into bridgetkromhout-googl 2018-04-19 05:59:12 -05:00
Jerome Petazzoni
019829cc4d Mention Kubernetes the Hard Way and more options 2018-04-19 05:55:58 -05:00
Bret Fisher
a7f6bb223a we can now add/remove networks from services 🤗 2018-04-18 19:11:51 -04:00
Bret Fisher
eb77a8f328 product name change 🙄 2018-04-18 17:50:19 -04:00
Bret Fisher
5a484b2667 adding 3rd party registry options 2018-04-18 17:47:55 -04:00
Bret Fisher
982c35f8e7 add popular swarm reverse proxy options 2018-04-18 17:28:46 -04:00
Bret Fisher
adffe5f47f fix image size via pic css class
make swarm internals bigger!
2018-04-18 17:07:33 -04:00
Bret Fisher
f90a194b86 removed older notes about detach and service logs
Since these options have been around nearly a year, I removed some unneeded verbosity and consolidated the detach stuff.
2018-04-18 15:34:04 -04:00
Bret Fisher
99e9356e5d removed mention of compose upg 1.6 to 1.7
I feel like compose 1.7 was so long ago (over 2 years) that mentioning the logs change isn't necessary.
2018-04-18 15:18:17 -04:00
Bridget Kromhout
860840a4c1 Clarify off-by-one 2018-04-18 14:09:08 -05:00
Bridget Kromhout
ab63b76ae0 Clarify types bug 2018-04-18 13:59:26 -05:00
Bridget Kromhout
29bca726b3 Merge pull request #2 from jpetazzo/daemonset-proposal
Pod cleanup proposal
2018-04-18 12:21:34 -05:00
Bridget Kromhout
91297a68f8 Update daemonset.md 2018-04-18 12:20:53 -05:00
Jerome Petazzoni
2bea8ade63 Break down last kube chapter (it is too long) 2018-04-18 11:44:30 -05:00
Jerome Petazzoni
ec486cf78c Do not bind-mount localtime (fixes #207) 2018-04-18 03:33:07 -05:00
Jerome Petazzoni
63ac378866 Merge branch 'darkalia-add_helm_completion' 2018-04-17 16:13:58 -05:00
Jerome Petazzoni
35db387fc2 Add ':' for consistency 2018-04-17 16:13:44 -05:00
Jerome Petazzoni
a0f9baf5e7 Merge branch 'add_helm_completion' of git://github.com/darkalia/container.training into darkalia-add_helm_completion 2018-04-17 16:12:52 -05:00
Jerome Petazzoni
4e54a79abc Pod cleanup proposal 2018-04-17 16:07:24 -05:00
Jérôme Petazzoni
37bea7158f Merge pull request #181 from jpetazzo/more-info-on-labels-and-rollouts
Label use-cases and rollouts
2018-04-17 15:18:24 -05:00
Jerome Petazzoni
618fe4e959 Clarify the grace period when shutting down pods 2018-04-17 02:24:07 -05:00
Jerome Petazzoni
0c73144977 Merge branch 'jgarrouste-patch-1' 2018-04-16 08:03:34 -05:00
Jerome Petazzoni
ff8c3b1595 Remove -o name 2018-04-16 08:03:09 -05:00
Jerome Petazzoni
b756d0d0dc Merge branch 'patch-1' of git://github.com/jgarrouste/container.training into jgarrouste-patch-1 2018-04-16 08:02:41 -05:00
Jerome Petazzoni
23147fafd1 Paris -> past sessions 2018-04-15 15:57:46 -05:00
Jérémy GARROUSTE
b036b5f24b Delete pods with '-l run-rng' and remove xargs
2018-04-15 16:37:10 +02:00
Benjamin Allot
3b9014f750 Add helm completion 2018-04-13 16:40:42 +02:00
Jérôme Petazzoni
6ad7a285e7 Merge pull request #201 from bridgetkromhout/chart-clarity
Clarify chart install
2018-04-13 01:08:13 -05:00
Jérôme Petazzoni
e529eaed2d Merge pull request #200 from bridgetkromhout/helm-example
Use prometheus as example
2018-04-13 01:07:18 -05:00
Jérôme Petazzoni
4697c6c6ad Merge pull request #189 from bridgetkromhout/elastic-patience
Clarify error message upon start & endpoints
2018-04-13 01:06:33 -05:00
Jérôme Petazzoni
56e47c3550 Update kubectlexpose.md
Add line break for readability
2018-04-13 08:06:23 +02:00
Jérôme Petazzoni
b3a9ba339c Merge pull request #199 from bridgetkromhout/helm-mkdir
Directory missing
2018-04-13 01:04:39 -05:00
Jérôme Petazzoni
8d0ce37a59 Merge pull request #196 from bridgetkromhout/or-azure
Azure directions are also included
2018-04-13 01:04:07 -05:00
Jérôme Petazzoni
a1bbbd6f7b Merge pull request #195 from bridgetkromhout/slide-clarity
Making slide easier to read
2018-04-13 01:03:39 -05:00
Bridget Kromhout
de87743c6a Clarify an off-by-one amount of pods 2018-04-12 16:10:38 -05:00
Bridget Kromhout
9d4a72a4ba Merge pull request #202 from bridgetkromhout/url-update-fix
Fixing typo
2018-04-12 15:30:11 -05:00
Bridget Kromhout
19e39aea49 Fixing typo 2018-04-12 15:27:51 -05:00
Bridget Kromhout
da064a6005 Clarify chart install 2018-04-12 10:24:01 -05:00
Bridget Kromhout
a12a38a7a9 Use prometheus as example 2018-04-12 09:50:12 -05:00
Bridget Kromhout
2c3a442a4c wording correction
The addresses aren't what show us the addresses - it seems clear from context that this should be "commands".
2018-04-12 08:11:43 -05:00
Bridget Kromhout
25d560cf46 Directory missing 2018-04-12 07:48:25 -05:00
Bridget Kromhout
c3324cf64c More general 2018-04-12 07:41:43 -05:00
Bridget Kromhout
053bbe7028 Bold instead of highlighting 2018-04-12 07:39:02 -05:00
Bridget Kromhout
74f980437f Clarify that clusters can be of arbitrary size 2018-04-12 07:31:49 -05:00
Jérôme Petazzoni
5ef96a29ac Update kubectlexpose.md 2018-04-12 00:37:18 -05:00
Jérôme Petazzoni
f261e7aa96 Merge pull request #194 from bridgetkromhout/fix-blue
removing extra leading spaces which break everything
2018-04-11 23:55:34 -05:00
Jérôme Petazzoni
8e44e911ca Merge pull request #193 from bridgetkromhout/stern
Missing word added
2018-04-11 23:52:17 -05:00
Bridget Kromhout
6711ba06d9 Patch only daemonset pods 2018-04-11 21:09:46 -05:00
Bridget Kromhout
fce69b6bb2 Azure directions are also included 2018-04-11 19:34:51 -05:00
Bridget Kromhout
1183e2e4bf Making slide easier to read 2018-04-11 18:55:23 -05:00
Bridget Kromhout
de3082e48f Extra spaces prevent this from working 2018-04-11 18:47:30 -05:00
Bridget Kromhout
3acac34e4b Missing word added 2018-04-11 18:11:07 -05:00
Bridget Kromhout
f97bd2b357 googl to cloudflare 2018-04-11 13:36:00 -05:00
Jérôme Petazzoni
3bac124921 Merge pull request #183 from bridgetkromhout/stalling-for-time
Stalling for time during download
2018-04-11 14:56:02 +02:00
Bridget Kromhout
ba44603d0f Correcting title and slide section division 2018-04-11 06:53:01 -05:00
Jerome Petazzoni
358f844c88 Typo fix 2018-04-11 02:40:38 -07:00
Jérôme Petazzoni
74bf2d742c Merge pull request #182 from bridgetkromhout/versions-validated
Clarify versions validated
2018-04-10 23:11:38 -07:00
Jérôme Petazzoni
acba3d5467 Merge pull request #192 from bridgetkromhout/add-links
Add links
2018-04-10 23:03:09 -07:00
Jérôme Petazzoni
cfc066c8ea Merge pull request #191 from jgarrouste/master
Reversed sentences
2018-04-10 15:03:09 -07:00
Jérôme Petazzoni
4f69f19866 Merge pull request #186 from bridgetkromhout/vm-readme
link to VM prep README
2018-04-10 14:56:19 -07:00
Jérôme Petazzoni
c508f88af2 Update setup-k8s.md 2018-04-10 16:56:07 -05:00
Jérôme Petazzoni
9757fdb42f Merge pull request #185 from bridgetkromhout/article
Adding an article
2018-04-10 14:52:49 -07:00
Bridget Kromhout
24d57f535b Add links 2018-04-10 16:52:07 -05:00
Jérôme Petazzoni
e42dfc0726 Merge pull request #184 from bridgetkromhout/url-update
URL update
2018-04-10 14:51:55 -07:00
Bridget Kromhout
3f54f23535 Clarifying cleanup 2018-04-10 16:45:50 -05:00
Jérémy GARROUSTE
c7198b3538 correction 2018-04-10 22:56:42 +02:00
Bridget Kromhout
827d10dd49 Clarifying ambiguous labels on pods 2018-04-10 15:48:54 -05:00
Bridget Kromhout
1b7a072f25 Bump version and add link 2018-04-10 15:29:14 -05:00
Bridget Kromhout
af1347ca17 Clarify endpoints 2018-04-10 15:07:42 -05:00
Bridget Kromhout
f741cf5b23 Clarify error message upon start 2018-04-10 14:33:49 -05:00
Bridget Kromhout
eb1b3c8729 Clarify types 2018-04-10 14:17:27 -05:00
Bridget Kromhout
40e4678a45 goo.gl deprecation 2018-04-10 12:41:07 -05:00
Bridget Kromhout
d3c0a60de9 link to VM prep README 2018-04-10 12:30:46 -05:00
Bridget Kromhout
83bba80f3b URL update 2018-04-10 12:25:44 -05:00
Bridget Kromhout
44e0cfb878 Adding an article 2018-04-10 12:22:24 -05:00
Bridget Kromhout
a58e21e313 URL update 2018-04-10 12:15:01 -05:00
Bridget Kromhout
1131635006 Stalling for time during download 2018-04-10 11:52:52 -05:00
Bridget Kromhout
c6e477e6ab Clarify versions validated 2018-04-10 11:35:28 -05:00
Jerome Petazzoni
18a81120bc Add helper script to gauge chapter weights 2018-04-10 08:41:23 -05:00
Jerome Petazzoni
17cd67f4d0 Breakdown container internals chapter 2018-04-10 08:41:05 -05:00
Jerome Petazzoni
38a40d56a0 Label use-cases and rollouts
This adds a few realistic examples of label usage.
It also adds explanations about why deploying a new
version of the worker doesn't seem to be effective
immediately (the worker doesn't handle signals).
2018-04-10 06:04:17 -05:00
Jerome Petazzoni
96fd2e26fd Minor fixes for autopilot 2018-04-10 05:30:42 -05:00
Jerome Petazzoni
581bbc847d Add demo logo for k8s demo 2018-04-10 04:25:08 -05:00
Jerome Petazzoni
da7cbc41d2 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 17:06:35 -05:00
Jerome Petazzoni
282e22acb9 Improve chapters about container deep dive 2018-04-09 17:06:29 -05:00
Jérôme Petazzoni
9374eebdf6 Merge pull request #180 from bridgetkromhout/links-before-thanks
Moving links before thanks
2018-04-09 13:23:32 -07:00
Bridget Kromhout
dcd5c5b39a Moving links before thanks 2018-04-09 14:58:56 -05:00
Jérôme Petazzoni
974f8ee244 Merge pull request #179 from bridgetkromhout/mosh-tmux
Clarifications for tmux and mosh
2018-04-09 12:55:03 -07:00
Bridget Kromhout
8212aa378a Merge pull request #1 from jpetazzo/ode-to-mosh-and-tmux
Add even more info about mosh and tmux
2018-04-09 14:54:16 -05:00
Jerome Petazzoni
403d4c6408 Add even more info about mosh and tmux 2018-04-09 14:52:21 -05:00
Jerome Petazzoni
142681fa27 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 14:19:45 -05:00
Jerome Petazzoni
69c9141817 Enable new content in self-paced kube workshop 2018-04-09 14:19:27 -05:00
Bridget Kromhout
9ed88e7608 Clarifications for tmux and mosh 2018-04-09 14:19:16 -05:00
Jérôme Petazzoni
b216f4d90b Merge pull request #178 from bridgetkromhout/clarify-live
Formatting fixes
2018-04-09 12:13:07 -07:00
Bridget Kromhout
26ee07d8ba Format fix 2018-04-09 13:20:23 -05:00
Bridget Kromhout
a8e5b02fb4 Clarify live feedback 2018-04-09 13:18:25 -05:00
Jérôme Petazzoni
80a8912a53 Merge pull request #177 from jpetazzo/avril-2018
Avril 2018
2018-04-09 11:08:21 -07:00
Jérôme Petazzoni
1ba6797f25 Merge pull request #176 from bridgetkromhout/version-bump
Updating versions
2018-04-09 10:57:32 -07:00
Bridget Kromhout
11a2167dea Updating versions 2018-04-09 12:52:47 -05:00
Jérôme Petazzoni
af4eeb6e6b Merge pull request #175 from jpetazzo/helm-and-namespaces
Add two chapters: Helm and namespaces
2018-04-09 10:20:33 -07:00
Jérôme Petazzoni
ea6459e2bd Merge pull request #174 from jpetazzo/centralized-logging-with-efk
Add a chapter about centralized logging
2018-04-09 10:19:44 -07:00
Bridget Kromhout
2dfa5a9660 Update logs-centralized.md 2018-04-09 11:59:19 -05:00
Jerome Petazzoni
b86434fbd3 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 11:57:32 -05:00
Jerome Petazzoni
223525cc69 Add the new chapters
The new chapters are commented out right now.
But they're ready to be enabled whenever needed.
2018-04-09 11:57:16 -05:00
Bridget Kromhout
fd63c079c8 Update namespaces.md
typo fix
2018-04-09 11:44:45 -05:00
Jerome Petazzoni
ebe4511c57 Remove useless mkdir 2018-04-09 11:43:27 -05:00
Jérôme Petazzoni
e1a81ef8f3 Merge pull request #171 from jpetazzo/show-stern-to-view-logs
Show how to install and use Stern
2018-04-09 09:38:47 -07:00
Jerome Petazzoni
3382c83d6e Add link to Helm and say it's open source 2018-04-09 11:35:59 -05:00
Bridget Kromhout
a89430673f Update logs-cli.md
clarifications
2018-04-09 11:32:02 -05:00
Jerome Petazzoni
fcea6dbdb6 Clarify Stern installation comments 2018-04-09 11:29:19 -05:00
Bridget Kromhout
c744a7d168 Update helm.md
typo fixes
2018-04-09 11:27:34 -05:00
Bridget Kromhout
0256dc8640 Update logs-centralized.md
A few typo fixes
2018-04-09 11:22:43 -05:00
Jerome Petazzoni
41819794d7 Rename kube-halfday
We now have a full day of content. Rejoice.
2018-04-09 11:19:24 -05:00
Jerome Petazzoni
836903cb02 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 11:11:33 -05:00
Jerome Petazzoni
7f822d33b5 Clean up index.html
Comment out a bunch of older workshops (for which more recent
versions have been delivered since then). Update the links
to self-paced content.
2018-04-09 11:11:26 -05:00
Jérôme Petazzoni
232fdbb1ff Merge pull request #170 from jpetazzo/headless-services
Add headless services
2018-04-09 09:05:33 -07:00
Jerome Petazzoni
f3f6111622 Replace logistics.md with generic version
The current version of the logistics.md slide shows AJ and JP.
The new version is an obvious template, i.e. it says 'this slide
should be customized' and it uses imaginary personas instead.
2018-04-09 10:59:55 -05:00
Jerome Petazzoni
a8378e7e7f Clarify endpoints 2018-04-09 10:12:22 -05:00
Jerome Petazzoni
eb3165096f Add Logging section and manifests 2018-04-09 09:37:28 -05:00
Jerome Petazzoni
90ca58cda8 Add a few slides about network policies
This is a very high-level overview (we can't cover a lot within the current time constraints) but it gives a primer about network policies and a few links to explore further.
2018-04-09 08:27:31 -05:00
Jerome Petazzoni
5a81526387 Add two chapters: Helm and namespaces
In these chapters, we:
- show how to install Helm
- run the Helm tiller on our cluster
- use Helm to install Prometheus
- don't do anything fancy with
  Prometheus (it's just for the
  sake of installing something)
- create a basic Helm chart for
  DockerCoins
- explain namespace concepts
- show how to use contexts to hop
  between namespaces
- use Helm to deploy DockerCoins
  to a new namespace

These two chapters go together.
2018-04-09 07:57:27 -05:00
Jerome Petazzoni
8df073b8ac Add a chapter about centralized logging
Explain the purpose of centralized logging. Describe the
EFK stack. Deploy a simplified EFK stack through a YAML
file. Use it to view container logs. Profit.
2018-04-09 04:17:00 -05:00
Jérôme Petazzoni
0f7356b002 Merge pull request #167 from jgarrouste/avril-2018
Small changes
2018-04-09 00:26:13 -07:00
Jérôme Petazzoni
0c2166fb5f Merge pull request #172 from jpetazzo/clarify-daemonset-bonus-exercises
Clarify the bonus exercises
2018-04-09 00:24:26 -07:00
Jerome Petazzoni
d228222fa6 Reword headless services
Hopefully this explains better the use of headless services.
I also added a slide about endpoints, with a couple of simple
commands to show them.
2018-04-08 17:59:42 -05:00
Bridget Kromhout
e4b7d3244e Merge pull request #173 from bridgetkromhout/muracon-past
MuraCon to past
2018-04-08 17:50:09 -05:00
Bridget Kromhout
7d0e841a73 MuraCon to past 2018-04-08 17:46:55 -05:00
Jerome Petazzoni
9859e441e1 Clarify the bonus exercises
We had two open-ended exercises (questions without
answers). We have added more explanations, as well
as solutions for the exercises. It lets us show a
few more tricks with selectors, and how to apply
changes to sets of resources.
2018-04-08 17:16:27 -05:00
Jerome Petazzoni
e1c638439f Bump versions
Bump up Compose and Machine to latest versions.
Bump down Engine to stable branch.

I'm pushing straight to master because YOLO^W^W
because @bridgetkromhout is using the kube101.yaml
file anyway, so this shouldn't break her things.

(Famous last words...)
2018-04-08 16:34:48 -05:00
Jérôme Petazzoni
253aaaad97 Merge pull request #169 from jpetazzo/what-is-cni
Add slide about CNI
2018-04-08 14:32:17 -07:00
Jérôme Petazzoni
a249ccc12b Merge pull request #168 from jpetazzo/clarify-control-plane
Clarify control plane
2018-04-08 14:29:50 -07:00
Jerome Petazzoni
22fb898267 Show how to install and use Stern
Stern is super cool to stream the logs of multiple
containers.
2018-04-08 16:26:08 -05:00
Bridget Kromhout
e038797875 Update concepts-k8s.md
A few suggested clarifications to your (excellent) clarifications
2018-04-08 15:16:42 -05:00
Jerome Petazzoni
7b9f9e23c0 Add headless services 2018-04-08 11:10:07 -05:00
Jerome Petazzoni
01d062a68f Add slide about CNI 2018-04-08 10:31:17 -05:00
Jerome Petazzoni
a66dfb5faf Clarify control plane
Explain better that the control plane can run outside
of the cluster, and that the word master can be
confusing (does it designate the control plane, or
the node running the control plane? What if there is
no node running the control plane, because the control
plane is external?)
2018-04-08 09:57:51 -05:00
Jerome Petazzoni
ac1480680a Add ecosystem chapter 2018-04-08 08:40:20 -05:00
Jerome Petazzoni
13a9b5ca00 What IS docker?
Explain what the engine is
2018-04-08 07:21:47 -05:00
Jérémy GARROUSTE
0cdf6abf0b Add .center for some images 2018-04-07 20:16:29 +02:00
Jérémy GARROUSTE
2071694983 Add .small[] 2018-04-07 20:16:13 +02:00
Jérôme Petazzoni
12e2b18a6f Merge pull request #166 from jgarrouste/avril-2018
Update the output of docker version and docker build command
2018-04-07 09:30:11 -07:00
Jerome Petazzoni
28e128756d How to pass container config 2018-04-07 11:28:42 -05:00
Jerome Petazzoni
a15109a12c Add chapter about labels 2018-04-07 09:57:35 -05:00
Jerome Petazzoni
e500fb57e8 Add --mount syntax 2018-04-07 09:37:27 -05:00
Jerome Petazzoni
f1849092eb add chapter on Docker Machine 2018-04-07 07:33:28 -05:00
Jerome Petazzoni
f1dbd7e8a6 Copy on write 2018-04-06 09:27:29 -05:00
Jerome Petazzoni
d417f454dd Finalize section on namespaces and cgroups 2018-04-06 09:27:20 -05:00
Jérémy GARROUSTE
d79718d834 Update docker build output 2018-04-06 11:20:09 +02:00
Jérémy GARROUSTE
de9c3a1550 Update docker version output 2018-04-06 10:04:41 +02:00
Jerome Petazzoni
90fc7a4ed3 Merge branch 'avril-2018' of github.com:jpetazzo/container.training into avril-2018 2018-04-05 17:58:55 -05:00
Jerome Petazzoni
09edbc24bc Container deep dive: namespaces, cgroups, etc. 2018-04-05 17:58:43 -05:00
Jérémy GARROUSTE
92f8701c37 Update output of docker build 2018-04-06 00:00:27 +02:00
Jérôme Petazzoni
c828888770 Merge pull request #165 from jgarrouste/avril-2018
Update output of 'docker build'
2018-04-05 14:57:05 -07:00
Jérémy GARROUSTE
bb7728e7e7 Update docker build output 2018-04-05 23:52:37 +02:00
Jerome Petazzoni
5f544f9c78 Add container engines chapter; orchestration overview chapter 2018-04-04 17:09:21 -05:00
Jerome Petazzoni
5b6a7d1995 Update my email address 2018-04-02 18:52:48 -05:00
Jerome Petazzoni
b21185dde7 Introduce EXPOSE 2018-04-02 00:10:45 -05:00
Jerome Petazzoni
deaee0dc82 Explain why use Docker Inc's repos 2018-04-01 23:58:10 -05:00
Jerome Petazzoni
4206346496 MacOS -> macOS 2018-04-01 23:52:38 -05:00
Jerome Petazzoni
6658b632b3 Add reason why we use VMs 2018-04-01 23:49:08 -05:00
Jerome Petazzoni
d9be7160ef Move 'extra details' explanation slide to common deck 2018-04-01 23:34:19 -05:00
Jérôme Petazzoni
d56424a287 Merge pull request #164 from bridgetkromhout/adding-k8s-101
Adding more k8s 101 dates
2018-03-29 16:02:31 -07:00
Bridget Kromhout
2d397c5cb8 Adding more k8s 101 dates 2018-03-29 09:39:20 -07:00
Jérôme Petazzoni
08004caa5d Merge pull request #163 from BretFisher/bret-dates-2018q2
adding more dates
2018-03-28 10:26:07 -07:00
Frank Farmer
522358a004 Small typo 2018-03-28 12:23:47 -05:00
Jérôme Petazzoni
e00a6c36e3 Merge pull request #157 from bridgetkromhout/increase-ulimit
Increase allowed open files
2018-03-28 10:07:11 -07:00
Jérôme Petazzoni
4664497cbc Merge pull request #156 from bridgetkromhout/symlinks-on-rerun
Symlink and directory fixes for multiple runs
2018-03-28 10:06:39 -07:00
Bret Fisher
6be424bde5 adding more dates 2018-03-28 03:27:18 -04:00
Bridget Kromhout
0903438242 Increase allowed open files 2018-03-27 09:36:04 -07:00
Bridget Kromhout
b874b68e57 Symlink fixes for multiple runs 2018-03-27 09:25:48 -07:00
Jerome Petazzoni
a3add3d816 Get inside a container (live and post mortem) 2018-03-12 11:57:34 -05:00
133 changed files with 9699 additions and 1529 deletions

.gitignore

@@ -8,4 +8,6 @@ prepare-vms/settings.yaml
 prepare-vms/tags
 slides/*.yml.html
 slides/autopilot/state.yaml
+slides/index.html
+slides/past.html
 node_modules


@@ -292,15 +292,31 @@ If there is a bug and you can't even reproduce it:
 sorry. It is probably an Heisenbug. We can't act on it
 until it's reproducible, alas.
-If you have attended this workshop and have feedback,
-or if you want somebody to deliver that workshop at your
-conference or for your company: you can contact one of us!
-- jerome at docker dot com
+# “Please teach us!”
+If you have attended one of these workshops, and want
+your team or organization to attend a similar one, you
+can look at the list of upcoming events on
+http://container.training/.
+You are also welcome to reuse these materials to run
+your own workshop, for your team or even at a meetup
+or conference. In that case, you might enjoy watching
+[Bridget Kromhout's talk at KubeCon 2018 Europe](
+https://www.youtube.com/watch?v=mYsp_cGY2O0), explaining
+precisely how to run such a workshop yourself.
+Finally, you can also contact the following persons,
+who are experienced speakers, are familiar with the
+material, and are available to deliver these workshops
+at your conference or for your company:
+- jerome dot petazzoni at gmail dot com
+- bret at bretfisher dot com
-If you are willing and able to deliver such workshops,
-feel free to submit a PR to add your name to that list!
+(If you are willing and able to deliver such workshops,
+feel free to submit a PR to add your name to that list!)
 **Thank you!**


@@ -28,5 +28,5 @@ def rng(how_many_bytes):
 if __name__ == "__main__":
-    app.run(host="0.0.0.0", port=80)
+    app.run(host="0.0.0.0", port=80, threaded=False)


@@ -1,4 +1,4 @@
-# Trainer tools to create and prepare VMs for Docker workshops on AWS
+# Trainer tools to create and prepare VMs for Docker workshops on AWS or Azure
 ## Prerequisites
@@ -14,8 +14,9 @@ And if you want to generate printable cards:
 ## General Workflow
 - fork/clone repo
-- set required environment variables for AWS
+- set required environment variables
 - create your own setting file from `settings/example.yaml`
+- if necessary, increase allowed open files: `ulimit -Sn 10000`
 - run `./workshopctl` commands to create instances, install docker, setup each users environment in node1, other management tasks
 - run `./workshopctl cards` command to generate PDF for printing handouts of each users host IP's and login info
@@ -102,7 +103,7 @@ wrap Run this program in a container
 - Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
 - If it errors or times out, you should be able to rerun
 - Requires good connection to run all the parallel SSH connections, up to 100 parallel (ProTip: create dedicated management instance in same AWS region where you run all these utils from)
-- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
+- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
 - Run `./workshopctl cards TAG settings/somefile.yaml` generates PDF/HTML files to print and cut and hand out to students
 - *Have a great workshop*
 - Run `./workshopctl stop TAG` to terminate instances.
@@ -209,7 +210,7 @@ The `postprep.py` file will be copied via parallel-ssh to all of the VMs and exe
 #### Pre-pull images
-    $ ./workshopctl pull-images TAG
+    $ ./workshopctl pull_images TAG
 #### Generate cards


@@ -7,7 +7,6 @@ services:
 working_dir: /root/prepare-vms
 volumes:
 - $HOME/.aws/:/root/.aws/
-- /etc/localtime:/etc/localtime:ro
 - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
 - $PWD/:/root/prepare-vms/
 environment:


@@ -48,7 +48,7 @@ _cmd_cards() {
 rm -f ips.html ips.pdf
 # This will generate two files in the base dir: ips.pdf and ips.html
-python lib/ips-txt-to-html.py $SETTINGS
+lib/ips-txt-to-html.py $SETTINGS
 for f in ips.html ips.pdf; do
 # Remove old versions of cards if they exist
@@ -393,9 +393,23 @@ pull_tag() {
ubuntu:latest \
fedora:latest \
centos:latest \
elasticsearch:2 \
postgres \
redis \
alpine \
registry \
nicolaka/netshoot \
jpetazzo/trainingwheels \
golang \
training/namer \
dockercoins/hasher \
dockercoins/rng \
dockercoins/webui \
dockercoins/worker \
logstash \
prom/node-exporter \
google/cadvisor \
dockersamples/visualizer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'


@@ -45,7 +45,7 @@ def system(cmd):
 # On EC2, the ephemeral disk might be mounted on /mnt.
 # If /mnt is a mountpoint, place Docker workspace on it.
-system("if mountpoint -q /mnt; then sudo mkdir /mnt/docker && sudo ln -s /mnt/docker /var/lib/docker; fi")
+system("if mountpoint -q /mnt; then sudo mkdir -p /mnt/docker && sudo ln -sfn /mnt/docker /var/lib/docker; fi")
 # Put our public IP in /tmp/ipv4
 # ipv4_retrieval_endpoint = "http://169.254.169.254/latest/meta-data/public-ipv4"
@@ -108,7 +108,7 @@ system("sudo chmod +x /usr/local/bin/docker-machine")
 system("docker-machine version")
 system("sudo apt-get remove -y --purge dnsmasq-base")
-system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
+system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh tree")
 ### Wait for Docker to be up.
 ### (If we don't do this, Docker will not be responsive during the next step.)


@@ -17,8 +17,8 @@ paper_margin: 0.2in
 # (The equivalent parameters must be set from the browser's print dialog.)
 # This can be "test" or "stable"
-engine_version: test
+engine_version: stable
 # These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.17.1
-machine_version: 0.13.0
+compose_version: 1.21.1
+machine_version: 0.14.0


@@ -17,8 +17,8 @@ paper_margin: 0.2in
 # (The equivalent parameters must be set from the browser's print dialog.)
 # This can be "test" or "stable"
-engine_version: test
+engine_version: stable
 # These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.18.0
-machine_version: 0.13.0
+compose_version: 1.21.1
+machine_version: 0.14.0


@@ -1,7 +1,7 @@
 # This file is passed by trainer-cli to scripts/ips-txt-to-html.py
 # Number of VMs per cluster
-clustersize: 5
+clustersize: 3
 # Jinja2 template to use to generate ready-to-cut cards
 cards_template: cards.html
@@ -17,8 +17,8 @@ paper_margin: 0.2in
 # (The equivalent parameters must be set from the browser's print dialog.)
 # This can be "test" or "stable"
-engine_version: test
+engine_version: stable
 # These correspond to the version numbers visible on their respective GitHub release pages
-compose_version: 1.17.1
-machine_version: 0.13.0
+compose_version: 1.21.1
+machine_version: 0.14.0


@@ -1 +1,2 @@
 / /kube-halfday.yml.html 200!
+/ /kube-90min.yml.html 200!


@@ -19,6 +19,9 @@ logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"))
 TIMEOUT = 60 # 1 minute
+
+# This one is not a constant. It's an ugly global.
+IPADDR = None
 class State(object):
@@ -163,6 +166,9 @@ def wait_for_prompt():
         last_line = output.split('\n')[-1]
         # Our custom prompt on the VMs has two lines; the 2nd line is just '$'
         if last_line == "$":
+            # This is a perfect opportunity to grab the node's IP address
+            global IPADDR
+            IPADDR = re.findall("^\[(.*)\]", output, re.MULTILINE)[-1]
             return
         # When we are in an alpine container, the prompt will be "/ #"
         if last_line == "/ #":
@@ -397,8 +403,7 @@ while True:
     elif method == "open":
         # Cheap way to get node1's IP address
         screen = capture_pane()
-        ipaddr = re.findall("^\[(.*)\]", screen, re.MULTILINE)[-1]
-        url = data.replace("/node1", "/{}".format(ipaddr))
+        url = data.replace("/node1", "/{}".format(IPADDR))
         # This should probably be adapted to run on different OS
         subprocess.check_output(["xdg-open", url])
         focus_browser()


@@ -1,6 +1,8 @@
 #!/bin/sh
 set -e
 case "$1" in
 once)
+    ./index.py
     for YAML in *.yml; do
         ./markmaker.py $YAML > $YAML.html || {
             rm $YAML.html
@@ -15,6 +17,13 @@ once)
     ;;
 forever)
     set +e
+    # check if entr is installed
+    if ! command -v entr >/dev/null; then
+        echo >&2 "First install 'entr' with apt, brew, etc."
+        exit
+    fi
     # There is a weird bug in entr, at least on MacOS,
     # where it doesn't restore the terminal to a clean
     # state when exitting. So let's try to work around


@@ -2,7 +2,7 @@
 - All the content is available in a public GitHub repository:
-  https://github.com/jpetazzo/container.training
+  https://@@GITREPO@@
 - You can get updated "builds" of the slides there:
@@ -10,7 +10,7 @@
 <!--
 .exercise[
-```open https://github.com/jpetazzo/container.training```
+```open https://@@GITREPO@@```
 ```open http://container.training/```
 ]
 -->
@@ -23,6 +23,26 @@
 <!--
 .exercise[
-```open https://github.com/jpetazzo/container.training/tree/master/slides/common/about-slides.md```
+```open https://@@GITREPO@@/tree/master/slides/common/about-slides.md```
 ]
 -->
+---
+class: extra-details
+## Extra details
+- This slide has a little magnifying glass in the top left corner
+- This magnifying glass indicates slides that provide extra details
+- Feel free to skip them if:
+  - you are in a hurry
+  - you are new to this and want to avoid cognitive overload
+  - you want only the most essential information
+- You can review these slides another time if you want, they'll be waiting for you ☺


@@ -49,26 +49,6 @@ Tip: use `^S` and `^Q` to pause/resume log output.
 ---
-class: extra-details
-## Upgrading from Compose 1.6
-.warning[The `logs` command has changed between Compose 1.6 and 1.7!]
-- Up to 1.6
-  - `docker-compose logs` is the equivalent of `logs --follow`
-  - `docker-compose logs` must be restarted if containers are added
-- Since 1.7
-  - `--follow` must be specified explicitly
-  - new containers are automatically picked up by `docker-compose logs`
----
 ## Scaling up the application
 - Our goal is to make that performance graph go up (without changing a line of code!)
@@ -126,7 +106,7 @@ We have available resources.
 - Start one more `worker` container:
   ```bash
-  docker-compose scale worker=2
+  docker-compose up -d --scale worker=2
   ```
 - Look at the performance graph (it should show a x2 improvement)
@@ -147,7 +127,7 @@ We have available resources.
 - Start eight more `worker` containers:
   ```bash
-  docker-compose scale worker=10
+  docker-compose up -d --scale worker=10
   ```
 - Look at the performance graph: does it show a x10 improvement?


@@ -8,7 +8,7 @@
 - Imperative:
-  *Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in cup.*
+  *Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.*
--


@@ -1,66 +1,4 @@
# Pre-requirements
- Be comfortable with the UNIX command line
- navigating directories
- editing files
- a little bit of bash-fu (environment variables, loops)
- Some Docker knowledge
- `docker run`, `docker ps`, `docker build`
- ideally, you know how to write a Dockerfile and build it
<br/>
(even if it's a `FROM` line and a couple of `RUN` commands)
- It's totally OK if you are not a Docker expert!
---
class: extra-details
## Extra details
- This slide has a little magnifying glass in the top left corner
- This magnifiying glass indicates slides that provide extra details
- Feel free to skip them if:
- you are in a hurry
- you are new to this and want to avoid cognitive overload
- you want only the most essential information
- You can review these slides another time if you want, they'll be waiting for you ☺
---
class: title
*Tell me and I forget.*
<br/>
*Teach me and I remember.*
<br/>
*Involve me and I learn.*
Misattributed to Benjamin Franklin
[(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/)
---
## Hands-on sections
- The whole workshop is hands-on
- We are going to build, ship, and run containers!
- You are invited to reproduce all the demos
## Hands-on
- All hands-on sections are clearly identified, like the gray rectangle below
@@ -68,55 +6,12 @@ Misattributed to Benjamin Franklin
- This is the stuff you're supposed to do!
- Go to [container.training](http://container.training/) to view these slides
- Join the chat room: @@CHAT@@
<!-- ```open http://container.training/``` -->
- Go to @@SLIDES@@ to view these slides
]
---
class: in-person
## Where are we going to run our containers?
---
class: in-person, pic
![You get a cluster](images/you-get-a-cluster.jpg)
---
class: in-person
## You get a cluster of cloud VMs
- Each person gets a private cluster of cloud VMs (not shared with anybody else)
- They'll remain up for the duration of the workshop
- You should have a little card with login+password+IP addresses
- You can automatically SSH from one VM to another
- The nodes have aliases: `node1`, `node2`, etc.
---
class: in-person
## Why don't we run containers locally?
- Installing that stuff can be hard on some machines
(32 bits CPU or OS... Laptops without administrator access... etc.)
- *"The whole team downloaded all these container images from the WiFi!
<br/>... and it went great!"* (Literally no-one ever)
- All you need is a computer (or even a phone or tablet!), with:
- an internet connection
@@ -129,47 +24,11 @@ class: in-person
class: in-person
## SSH clients
- On Linux, OS X, FreeBSD... you are probably all set
- On Windows, get one of these:
- [putty](http://www.putty.org/)
- Microsoft [Win32 OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH)
- [Git BASH](https://git-for-windows.github.io/)
- [MobaXterm](http://mobaxterm.mobatek.net/)
- On Android, [JuiceSSH](https://juicessh.com/)
([Play Store](https://play.google.com/store/apps/details?id=com.sonelli.juicessh))
works pretty well
- Nice-to-have: [Mosh](https://mosh.org/) instead of SSH, if your internet connection tends to lose packets
<br/>(available with `(apt|yum|brew) install mosh`; then connect with `mosh user@host`)
---
class: in-person
## Connecting to our lab environment
.exercise[
- Log into the first VM (`node1`) with SSH or MOSH
<!--
```bash
for N in $(awk '/node/{print $2}' /etc/hosts); do
ssh -o StrictHostKeyChecking=no node$N true
done
```
```bash
if which kubectl; then
kubectl get all -o name | grep -v services/kubernetes | xargs -n1 kubectl delete
fi
```
-->
- Log into the first VM (`node1`) with your SSH client
- Check that you can SSH (without password) to `node2`:
```bash
@@ -177,102 +36,6 @@ fi
```
- Type `exit` or `^D` to come back to `node1`
<!-- ```bash exit``` -->
]
If anything goes wrong — ask for help!
---
## Doing or re-doing the workshop on your own?
- Use something like
[Play-With-Docker](http://play-with-docker.com/) or
[Play-With-Kubernetes](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
Zero setup effort; but environment are short-lived and
might have limited resources
- Create your own cluster (local or cloud VMs)
Small setup effort; small cost; flexible environments
- Create a bunch of clusters for you and your friends
([instructions](https://github.com/jpetazzo/container.training/tree/master/prepare-vms))
Bigger setup effort; ideal for group training
---
class: self-paced
## Get your own Docker nodes
- If you already have some Docker nodes: great!
- If not: let's get some thanks to Play-With-Docker
.exercise[
- Go to http://www.play-with-docker.com/
- Log in
- Create your first node
<!-- ```open http://www.play-with-docker.com/``` -->
]
You will need a Docker ID to use Play-With-Docker.
(Creating a Docker ID is free.)
---
## We will (mostly) interact with node1 only
*These remarks apply only when using multiple nodes, of course.*
- Unless instructed, **all commands must be run from the first VM, `node1`**
- We will only checkout/copy the code on `node1`
- During normal operations, we do not need access to the other nodes
- If we had to troubleshoot issues, we would use a combination of:
- SSH (to access system logs, daemon status...)
- Docker API (to check running containers and container engine status)
---
## Terminals
Once in a while, the instructions will say:
<br/>"Open a new terminal."
There are multiple ways to do this:
- create a new window or tab on your machine, and SSH into the VM;
- use screen or tmux on the VM and open a new window from there.
You are welcome to use the method that you feel the most comfortable with.
---
## Tmux cheatsheet
- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
- Ctrl-b " → split window top/bottom
- Ctrl-b % → split window left/right
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b arrows → navigate to other windows
- Ctrl-b d → detach session
- tmux attach → reattach to session


@@ -1,16 +1,71 @@
# Our sample application
- We will clone the GitHub repository onto our `node1`
- The repository also contains scripts and tools that we will use through the workshop
.exercise[
<!--
```bash
if [ -d container.training ]; then
mv container.training container.training.$$
fi
```
-->
- Clone the repository on `node1`:
```bash
git clone git://@@GITREPO@@
```
]
(You can also fork the repository on GitHub and clone your fork if you prefer that.)
---
## Downloading and running the application
Let's start this before we look around, as downloading will take a little time...
.exercise[
- Go to the `dockercoins` directory, in the cloned repo:
```bash
cd ~/container.training/dockercoins
```
- Use Compose to build and run all containers:
```bash
docker-compose up
```
<!--
```longwait units of work done```
-->
]
Compose tells Docker to build all container images (pulling
the corresponding base images), then starts all containers,
and displays aggregated logs.
---
## More detail on our sample application
 - Visit the GitHub repository with all the materials of this workshop:
-  <br/>https://github.com/jpetazzo/container.training
+  <br/>https://@@GITREPO@@
 - The application is in the [dockercoins](
-  https://github.com/jpetazzo/container.training/tree/master/dockercoins)
+  https://@@GITREPO@@/tree/master/dockercoins)
   subdirectory
 - Let's look at the general layout of the source code:
   there is a Compose file [docker-compose.yml](
-  https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml) ...
+  https://@@GITREPO@@/blob/master/dockercoins/docker-compose.yml) ...
... and 4 other services, each in its own directory:
@@ -39,61 +94,6 @@ class: extra-details
---
## Service discovery in container-land
- We do not hard-code IP addresses in the code
- We do not hard-code FQDN in the code, either
- We just connect to a service name, and container-magic does the rest
(And by container-magic, we mean "a crafty, dynamic, embedded DNS server")
---
## Example in `worker/worker.py`
```python
redis = Redis("`redis`")
def get_random_bytes():
r = requests.get("http://`rng`/32")
return r.content
def hash_bytes(data):
r = requests.post("http://`hasher`/",
data=data,
headers={"Content-Type": "application/octet-stream"})
```
(Full source code available [here](
https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
))
---
class: extra-details
## Links, naming, and service discovery
- Containers can have network aliases (resolvable through DNS)
- Compose file version 2+ makes each container reachable through its service name
- Compose file version 1 did require "links" sections
- Network aliases are automatically namespaced
- you can have multiple apps declaring and using a service named `database`
- containers in the blue app will resolve `database` to the IP of the blue database
- containers in the green app will resolve `database` to the IP of the green database
---
## What's this application?
--
@@ -120,61 +120,6 @@ class: extra-details
---
## Getting the application source code
- We will clone the GitHub repository
- The repository also contains scripts and tools that we will use through the workshop
.exercise[
<!--
```bash
if [ -d container.training ]; then
mv container.training container.training.$$
fi
```
-->
- Clone the repository on `node1`:
```bash
git clone git://github.com/jpetazzo/container.training
```
]
(You can also fork the repository on GitHub and clone your fork if you prefer that.)
---
# Running the application
Without further ado, let's start our application.
.exercise[
- Go to the `dockercoins` directory, in the cloned repo:
```bash
cd ~/container.training/dockercoins
```
- Use Compose to build and run all containers:
```bash
docker-compose up
```
<!--
```longwait units of work done```
-->
]
Compose tells Docker to build all container images (pulling
the corresponding base images), then starts all containers,
and displays aggregated logs.
---
## Our application at work
- On the left-hand side, the "rainbow strip" shows the container names
@@ -299,5 +244,5 @@ class: extra-details
 Some containers exit immediately, others take longer.
-The containers that do not handle `SIGTERM` end up being killed after a 10s timeout.
+The containers that do not handle `SIGTERM` end up being killed after a 10s timeout. If we are very impatient, we can hit `^C` a second time!


@@ -17,5 +17,5 @@ class: title, in-person
 *Don't stream videos or download big files during the workshop.*<br/>
 *Thank you!*
-**Slides: http://container.training/**
-]
+**Slides: @@SLIDES@@**
+]

slides/count-slides.py Executable file (57 lines)

@@ -0,0 +1,57 @@
#!/usr/bin/env python

import re
import sys

PREFIX = "name: toc-"
EXCLUDED = ["in-person"]

class State(object):

    def __init__(self):
        self.current_slide = 1
        self.section_title = None
        self.section_start = 0
        self.section_slides = 0
        self.chapters = {}
        self.sections = {}

    def show(self):
        if self.section_title.startswith("chapter-"):
            return
        print("{0.section_title}\t{0.section_start}\t{0.section_slides}".format(self))
        self.sections[self.section_title] = self.section_slides

state = State()
title = None

for line in open(sys.argv[1]):
    line = line.rstrip()
    if line.startswith(PREFIX):
        if state.section_title is None:
            print("{}\t{}\t{}".format("title", "index", "size"))
        else:
            state.show()
        state.section_title = line[len(PREFIX):].strip()
        state.section_start = state.current_slide
        state.section_slides = 0
    if line == "---":
        state.current_slide += 1
        state.section_slides += 1
    if line == "--":
        state.current_slide += 1
    toc_links = re.findall("\(#toc-(.*)\)", line)
    if toc_links and state.section_title.startswith("chapter-"):
        if state.section_title not in state.chapters:
            state.chapters[state.section_title] = []
        state.chapters[state.section_title].append(toc_links[0])
    # This is really hackish
    if line.startswith("class:"):
        for klass in EXCLUDED:
            if klass in line:
                state.section_slides -= 1
                state.current_slide -= 1

state.show()

for chapter in sorted(state.chapters):
    chapter_size = sum(state.sections[s] for s in state.chapters[chapter])
    print("{}\t{}\t{}".format("total size for", chapter, chapter_size))

(Binary image files changed, not shown — including new files slides/images/bridge1.png, slides/images/bridge2.png, slides/images/conductor.jpg, and slides/images/demo.jpg.)

@@ -0,0 +1,213 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 18.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="0 0 445 390" enable-background="new 0 0 445 390" xml:space="preserve">
<g>
<path fill="#3A4D54" d="M158.8,352.2h-25.9c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9h-19c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9
h25.3c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9h-15.9c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9h6.8c3.2,0,5.8-2.6,5.8-5.9
c0-3.2-2.6-5.9-5.8-5.9H64.9c-0.1,0-0.3,0-0.4,0c3,0.2,5.4,2.7,5.4,5.9c0,3.1-2.4,5.7-5.4,5.9c0.1,0,0.3,0,0.4,0h-0.8h-6.1
c-3.2,0-5.8,2.6-5.8,5.9s2.6,5.9,5.8,5.9H74h3.7c3.2,0,5.8,2.6,5.8,5.9c0,3.2-2.6,5.9-5.8,5.9H74H47.9c-3.2,0-5.8,2.6-5.8,5.9
s2.6,5.9,5.8,5.9h44.8H93c0,0-0.1,0-0.1,0c3.1,0.1,5.6,2.7,5.6,5.9c0,3.2-2.5,5.8-5.6,5.9c0,0,0.1,0,0.1,0h-0.2
c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9h66c3.2,0,5.8-2.6,5.8-5.9C164.6,354.8,162,352.2,158.8,352.2z"/>
<circle fill="#FBBF45" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" cx="214.6" cy="124.2" r="68.7"/>
<circle fill="#3A4D54" cx="367.5" cy="335.5" r="5.9"/>
<g>
<polygon fill="#E8593A" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" points="116.1,199.1 116.1,214.6 302.9,214.5
302.9,199.1 "/>
<rect x="159.4" y="78.6" fill="#3A4D54" width="4.2" height="50.4"/>
<rect x="174.5" y="93.8" fill="#3A4D54" width="4.2" height="35.1"/>
<rect x="280.2" y="108.2" fill="#3A4D54" width="4.2" height="20.8"/>
<rect x="190.2" y="106.9" fill="#3A4D54" width="4.2" height="22"/>
<rect x="143.3" y="59.8" fill="#3A4D54" width="4.2" height="69.1"/>
<path fill="#3A4D54" d="M294.3,107.9c3.5-2.3,6.9-4.8,10.4-7.4V87.7c-5.2,4.3-10.6,8.2-15.9,11.6c-7.8,4.9-15.1,8.5-22.4,11
c-7.9,2.8-15.7,4.3-23.4,4.7c-7.6,0.3-15.3-0.5-22.8-2.6c-6.9-1.9-13.7-4.7-20.4-8.6C188.8,97.5,178.4,89,168,77.6
c-7.7-8.4-14.7-17.7-21.6-28.2c-5-7.8-9.6-15.8-13.6-23.9c-4-8.1-6.1-13.5-6.9-16c-0.7-1.8-1-3.1-1.2-3.8l0-0.1l0.1-2.7l-0.5,0
l0-0.1H123l-8.1-0.6l-3.1-0.1l-0.1,3.4l0,0.4c0,1.2,0.2,1.9,0.3,2.5l0,0.1c0.3,1.4,0.9,3.2,1.7,5.3c1.2,3.4,3.6,9.1,7.7,17.2
c4.3,8.4,9.2,16.8,14.6,25c7.3,11.1,14.9,20.8,23.2,29.6c11.4,12.1,22.9,21.3,35.1,28.1c7.6,4.2,15.4,7.4,23.2,9.4
c7,1.8,14.2,2.7,21.4,2.7c0,0,0,0,0,0c1.6,0,3.2,0,4.7-0.1c8.7-0.5,17.6-2.4,26.4-5.6 M141.1,52.8c-5.2-7.9-10-16.1-14.2-24.4
c-4-7.9-6.3-13.4-7.5-16.6c-0.5-1.3-0.8-2.4-1.1-3.3l1,0.1c0.3,0.9,0.6,1.9,1,2.9c1.6,4.5,4.2,10.4,7.2,16.6
c4.1,8.3,8.8,16.5,13.9,24.5c5.5,8.5,11.1,16.2,17.1,23.3C152.4,68.9,146.7,61.3,141.1,52.8z"/>
<path fill="#E8593A" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" d="M340.9,53h-7.9h-4.3v8.2h-19.4V53h-4.3h-7.9
h-4.3v8.2v2.7v186.7c0,0.8,0.6,1.4,1.3,1.4h3h42.4h4.3c0.7,0,1.3-0.6,1.3-1.4V62v-0.8V53H340.9z M334.8,206.6h-31.5V152
c0-0.4,0.3-0.7,0.6-0.7h30.2c0.4,0,0.6,0.3,0.6,0.7V206.6z M334.8,142.1h-31.5V125c0-0.4,0.3-0.7,0.6-0.7h30.2
c0.4,0,0.6,0.3,0.6,0.7V142.1z M334.8,115.1h-31.5V97.9c0-0.4,0.3-0.7,0.6-0.7h30.2c0.4,0,0.6,0.3,0.6,0.7V115.1z M334.8,88h-31.5
V70.9c0-0.4,0.3-0.7,0.6-0.7h30.2c0.4,0,0.6,0.3,0.6,0.7V88z"/>
<polygon fill="#E8593A" points="272.2,203 286.7,201.1 297.2,201.1 297.2,214.6 271.7,214.6 "/>
<path fill="#E8593A" d="M298.7,96.2c-2.7,2-5.5,3.9-8.3,5.7c-7.3,4.6-15,8.5-23,11.3c-7.9,2.8-16.1,4.5-24.3,4.8
c-8.1,0.4-16.1-0.6-23.7-2.7c-7.6-2-14.6-5.1-21.1-8.9c-13-7.5-23.7-17.1-32.6-26.8c-8.9-9.8-16-19.6-21.9-28.6
c-5.8-9-10.3-17.3-13.7-24.2c-3.4-6.9-5.7-12.5-7.1-16.3c-0.7-1.9-1.1-3.3-1.3-4.2c-0.1-0.4-0.1-0.7-0.1-0.4l0,0.1
c0,0,0-0.1,0-0.1c0-0.1,0-0.1,0-0.1c0-0.1,0-0.1,0-0.1l-7-0.5c0,0,0,0,0,0.1c0,0,0,0.1,0,0.1c0,0,0,0.1,0,0.1c0,0.1,0,0.2,0,0.3
c0,0.9,0.1,1.4,0.3,2.1c0.3,1.3,0.8,2.9,1.6,5c1.5,4.1,4,9.8,7.6,16.9c3.6,7.1,8.3,15.5,14.4,24.7c6.1,9.2,13.5,19.2,22.9,29.2
c9.3,9.9,20.5,19.8,34.3,27.5c6.9,3.8,14.4,7,22.5,9.1c8,2.1,16.6,3,25.2,2.5c8.6-0.5,17.3-2.4,25.5-5.4c8.3-3,16.2-7.2,23.7-12
c2-1.3,4.1-2.7,6-4.2V96.2z"/>
<path fill="#E8593A" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" d="M122.9,4.2h-3.2h-6.6v11.7H66.1V4.2h-4.6h-6.2
h-6.6v11.7v3.8v265.1c0,1.1,0.9,2,2,2h4.6h65.7h6.6c1.1,0,2-0.9,2-2V17v-1.1V4.2H122.9z M113.5,204.2H64.7v-59.4c0-0.6,0.4-1,1-1
h46.7c0.6,0,1,0.4,1,1V204.2z M113.5,130.8H64.7v-24.3c0-0.6,0.4-1,1-1h46.7c0.6,0,1,0.4,1,1V130.8z M113.5,92.4H64.7V68.1
c0-0.6,0.4-1,1-1h46.7c0.6,0,1,0.4,1,1V92.4z M113.5,54H64.7V29.7c0-0.6,0.4-1,1-1h46.7c0.6,0,1,0.4,1,1V54z"/>
<g>
<g>
<path fill="#2BB8EB" stroke="#3A4D54" stroke-width="5" stroke-miterlimit="10" d="M435.8,132.9H364c-1.4,0-2.6,1.3-2.6,3v44.2
c0,1.7,1.2,3,2.6,3h71.8c2.5,0,3.6-3.7,1.5-5.4l-11.4-13.5c-3.2-3.3-3.2-9,0-12.3l11.4-13.5
C439.3,136.6,438.3,132.9,435.8,132.9z"/>
<path fill="#FFFFFF" stroke="#3A4D54" stroke-width="5" stroke-miterlimit="10" d="M9.8,183.1h129.7c1.4,0,2.6-1.3,2.6-3v-44.2
c0-1.7-1.2-3-2.6-3H9.8c-2.5,0-3.6,3.7-1.5,5.4l11.4,13.5c3.2,3.3,3.2,9,0,12.3L8.3,177.7C6.2,179.4,7.3,183.1,9.8,183.1z"/>
<path fill="#FFFFFF" stroke="#3A4E55" stroke-width="5" stroke-miterlimit="10" d="M402.5,190H42.1c-3.6,0-6.5-1.1-6.5-4.6
v-54.7c0-3.6,2.9-6.5,6.5-6.5h360.4c3.6,0,6.5,2.9,6.5,6.5v52.9C409,187.1,406.1,190,402.5,190z"/>
<path fill="#2BB8EB" d="M402.5,124.2h-46.3V190h46.3c3.6,0,6.5-2.9,6.5-6.5v-52.9C409,127.1,406.1,124.2,402.5,124.2z"/>
<g>
<path fill="#FFFFFF" d="M376.2,144.3v21.3c0,1.1-0.9,2-2,2c-1.1,0-2-0.9-2-2v-17.8l-1.4,0.8c-0.3,0.2-0.7,0.3-1,0.3
c-0.7,0-1.3-0.4-1.7-1c-0.6-0.9-0.3-2.2,0.7-2.7l4.4-2.6c0,0,0.1,0,0.1-0.1c0.1,0,0.1-0.1,0.2-0.1c0.1,0,0.1,0,0.2,0
c0,0,0.1,0,0.1,0c0.1,0,0.2,0,0.3,0c0,0,0.1,0,0.1,0h0c0.1,0,0.2,0,0.3,0c0,0,0.1,0,0.1,0c0.1,0,0.1,0,0.2,0.1c0,0,0.1,0,0.1,0
c0.1,0.1,0.1,0.1,0.2,0.1c0,0,0.1,0.1,0.1,0.1c0,0,0.1,0.1,0.1,0.1c0.1,0,0.1,0.1,0.1,0.1c0,0,0.1,0.1,0.1,0.1
c0,0,0.1,0.1,0.1,0.1l0,0.1c0,0,0,0.1,0,0.1c0,0.1,0.1,0.1,0.1,0.2c0,0.1,0,0.1,0.1,0.2c0,0.1,0,0.1,0,0.2c0,0.1,0,0.2,0.1,0.3
C376.2,144.3,376.2,144.3,376.2,144.3z"/>
<path fill="#FFFFFF" d="M393.4,152.3c1.8,1.7,2.6,4.1,2.6,6.4c0,2.3-0.9,4.6-2.6,6.3c-1.7,1.8-4.1,2.6-6.3,2.6
c-0.1,0-0.1,0-0.1,0c-2.2,0-4.6-0.9-6.3-2.6c-0.8-0.8-0.8-2.1,0-2.9c0.8-0.8,2.1-0.8,2.9,0c0.9,1,2.2,1.4,3.5,1.4
c1.2,0,2.5-0.5,3.4-1.4c0.9-0.9,1.4-2.2,1.4-3.4c0-1.3-0.5-2.5-1.4-3.5c-0.9-1-2.2-1.4-3.4-1.4c-1.2,0-2.5,0.4-3.5,1.4
c-0.8,0.8-2.1,0.8-2.9,0c-0.1-0.1-0.3-0.3-0.4-0.5c0-0.1,0-0.1,0-0.1c0-0.1,0-0.1-0.1-0.2c0-0.1,0-0.2,0-0.3c0,0,0,0,0-0.1
c0-0.2,0-0.4,0-0.6l1.1-9.4c0.1-0.6,0.4-1.1,0.9-1.4c0.1,0,0.1,0,0.1-0.1c0,0,0.1,0,0.1-0.1c0.3-0.1,0.6-0.2,0.9-0.2h9.2
c1.2,0,2.1,0.9,2.1,2.1c0,1.1-0.9,2-2.1,2h-7.4l-0.4,3.6c0.8-0.2,1.6-0.3,2.4-0.3C389.4,149.7,391.7,150.6,393.4,152.3z"/>
</g>
<g>
<path fill="#3A4D54" d="M157.8,142.1L157.8,142.1l-0.9,0c-0.7,0-2.6,2-3,2.5c-1.7,1.7-3.5,3.4-5.2,5.1v-13.7
c0-1.2-0.8-2.2-2-2.2h-0.3c-1.3,0-2,1-2,2.2v29.9c0,1.2,0.8,2.2,2,2.2h0.3c1.3,0,2-1,2-2.2v-5.3l3.4,3.3c1,1,2,2,3,3
c0.5,0.5,1.3,1.3,2.1,1.3h0.4c1.1,0,1.8-0.8,2-1.8l0-0.1v-0.5c0-0.4-0.1-0.7-0.3-1c-0.2-0.3-0.5-0.6-0.7-0.8
c-0.6-0.7-1.2-1.3-1.9-1.9c-2.3-2.3-4.6-4.6-6.9-6.9l5.3-5.4c1-1.1,2.1-2.1,3.1-3.2c0.5-0.5,1.3-1.4,1.3-2.1V144
C159.6,142.9,158.9,142.3,157.8,142.1z"/>
<path fill="#3A4D54" d="M138.9,143.9l-0.2-0.1c-1.9-1.3-4.1-2-6.5-2h-0.9c-2.2,0-4.3,0.6-6.2,1.7c-4.1,2.4-6.5,6.2-6.5,11v0.9
c0,1.1,0.1,2.2,0.5,3.3c1.9,6.3,6.8,9.9,13.4,9.5c1.9-0.1,6.8-0.7,6.8-3.4v-0.4c0-1.1-0.8-1.7-1.8-1.9l-0.1,0h-0.8l-0.2,0.1
c-1.1,0.5-2.7,1.2-3.9,1.2c-1.3,0-2.9-0.1-4.2-0.7c-3.4-1.6-5.4-4.3-5.4-8c0-1.2,0.2-2.4,0.8-3.6c1.6-3.3,4.2-5.3,7.9-5.2
c0.7,0,2,0.1,2.6,0.4c0.6,0.3,2.1,1,2.7,1h0.3l0.1,0c1-0.2,1.9-0.8,1.9-1.9v-0.4c0-0.4-0.2-0.8-0.4-1.2L138.9,143.9z"/>
<path fill="#3A4D54" d="M85.2,133.7h-0.4c-1.3,0-2,1-2,2.2v9.3c-2.3-2-5.1-3.3-8.3-3.3h-0.9c-2.2,0-4.3,0.6-6.2,1.7
c-4.1,2.4-6.5,6.2-6.5,11v0.9c0,2.2,0.6,4.3,1.7,6.2c2.4,4.1,6.2,6.5,11,6.5h0.9c2.2,0,4.3-0.6,6.2-1.7c4.1-2.4,6.5-6.2,6.5-11
v-19.6C87.2,134.6,86.5,133.7,85.2,133.7z M81.6,159.3c-1.7,2.9-4.2,4.5-7.6,4.5c-1.4,0-2.7-0.4-3.9-1c-3-1.7-4.7-4.3-4.7-7.7
c0-1.2,0.2-2.4,0.8-3.6c1.6-3.3,4.3-5.2,8-5.2c1.8,0,3.4,0.5,4.9,1.6c2.4,1.7,3.8,4.1,3.8,7.1C82.8,156.5,82.4,158,81.6,159.3z
"/>
<path fill="#3A4D54" d="M103.1,141.9h-0.6c-2.2,0-4.3,0.6-6.2,1.7c-4.1,2.4-6.5,6.2-6.5,11v0.9c0,2.2,0.6,4.3,1.7,6.2
c2.4,4.1,6.2,6.5,11,6.5h0.9c2.2,0,4.3-0.6,6.2-1.7c4.1-2.4,6.5-6.2,6.5-11v-0.9c0-2-0.5-4-1.5-5.8
C112.1,144.4,108.2,141.9,103.1,141.9z M110.5,159.3c-1.7,2.8-4.2,4.5-7.5,4.5c-1.6,0-3-0.4-4.3-1.2c-2.8-1.7-4.5-4.2-4.5-7.6
c0-1.2,0.2-2.4,0.8-3.6c1.6-3.3,4.3-5.2,8-5.2c1.7,0,3.3,0.5,4.7,1.4c2.6,1.7,4.1,4.1,4.1,7.2
C111.7,156.5,111.3,158,110.5,159.3z"/>
<path fill="#3A4D54" d="M186.4,148c-1.2-2.1-3-3.7-5.2-4.8c-4-2-8.3-2.2-12.2,0.1l-0.6,0.3c-1.6,0.9-3,2.1-4,3.6
c-3,4.4-3.4,9.3-0.7,14l0.3,0.5c1.1,2,2.7,3.6,4.6,4.6c4.2,2.3,8.6,2.6,12.8,0.2l0.4-0.2c1.1-0.7,1.4-1.8,0.8-3
c-0.2-0.5-0.7-0.8-1.2-1.1l-0.1-0.1l-0.1,0c-0.8-0.1-2.9,0.8-3.8,1.2c-1.6,0.3-3.5,0.4-5.1-0.2c2.9-2.5,5.8-5.1,8.8-7.6
c1.3-1.1,2.7-2.4,4.1-3.5c1.2-0.9,2.3-2.2,1.4-3.8L186.4,148z M178.4,152.1c-3.3,2.8-6.5,5.6-9.8,8.4c-0.3-0.4-0.6-0.8-0.9-1.2
c-0.7-1.2-1.1-2.5-1.1-3.9c-0.1-3.5,1.2-6.3,4.2-8.1c2.3-1.3,4.8-1.7,7.4-0.7c1.3,0.5,2.7,1.3,3.6,2.4
C180.7,150.2,179.5,151.2,178.4,152.1z"/>
<path fill="#3A4D54" d="M204.2,142.1h-0.4c-2.6,0-5,0.8-7.1,2.3c-3.5,2.5-5.6,6-5.6,10.4V166c0,1.2,0.8,2.2,2,2.2h0.3
c1.3,0,2-1,2-2.2v-10.7c0-2.4,0.7-4.5,2.4-6.2c1.4-1.3,3.3-2.5,5.2-2.5c1.5,0,3.3-0.5,3.3-2.3
C206.4,142.9,205.5,142.1,204.2,142.1z"/>
</g>
<g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#2BB8EB" d="M281.3,146.6c-0.7-0.3-1.9-0.4-2.6-0.4
c-3.7-0.1-6.4,1.9-7.9,5.2c-0.5,1.1-0.8,2.3-0.8,3.6c0,3.8,2,6.4,5.4,8c1.2,0.6,2.8,0.7,4.2,0.7c1.2,0,2.9-0.7,3.9-1.2l0.2-0.1
h0.8l0.1,0c1,0.2,1.8,0.8,1.8,1.9v0.4c0,2.7-4.9,3.3-6.8,3.4c-6.6,0.5-11.6-3.2-13.4-9.5c-0.3-1.1-0.5-2.2-0.5-3.3v-0.9
c0-4.8,2.4-8.6,6.5-11c1.9-1.1,4-1.7,6.2-1.7h0.9c2.4,0,4.5,0.7,6.5,2l0.2,0.1l0.1,0.2c0.2,0.3,0.4,0.7,0.4,1.2v0.4
c0,1.1-0.8,1.7-1.9,1.9l-0.1,0H284C283.4,147.6,281.9,146.9,281.3,146.6z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#2BB8EB" d="M301.3,141.9h0.6c5.1,0,9,2.5,11.5,6.9c1,1.8,1.5,3.7,1.5,5.8
v0.9c0,4.8-2.4,8.6-6.5,11c-1.9,1.1-4,1.7-6.2,1.7h-0.9c-4.8,0-8.6-2.4-11-6.5c-1.1-1.9-1.7-4-1.7-6.2v-0.9
c0-4.8,2.4-8.6,6.5-11C297,142.4,299.1,141.9,301.3,141.9z M293,155c0,3.4,1.6,5.8,4.5,7.6c1.3,0.8,2.8,1.2,4.3,1.2
c3.3,0,5.8-1.7,7.5-4.5c0.8-1.3,1.2-2.8,1.2-4.4c0-3.1-1.5-5.5-4.1-7.2c-1.4-0.9-3-1.4-4.7-1.4c-3.7,0-6.4,1.9-8,5.2
C293.3,152.6,293,153.8,293,155z"/>
<path fill="#2BB8EB" d="M344,148.8c-2.5-4.5-6.4-6.9-11.5-6.9h-0.6c-2.2,0-4.3,0.6-6.2,1.7c-4.1,2.4-6.5,6.2-6.5,11v0.3v11
c0,1.2,0.8,2.2,2,2.2h0.3c1.3,0,2-1,2-2.2v-11h0c0-1.2,0.3-2.4,0.8-3.5c1.6-3.3,4.3-5.2,8-5.2c1.7,0,3.3,0.5,4.7,1.4
c2.6,1.7,4.1,4.1,4.1,7.2v11c0,1.2,0.8,2.2,2,2.2h0.3c1.3,0,2-1,2-2.2v-11v-0.3C345.5,152.6,345,150.6,344,148.8z"/>
</g>
</g>
<path fill="none" stroke="#3A4D54" stroke-width="5" stroke-miterlimit="10" d="M402.5,190H42.1c-3.6,0-6.5-2.9-6.5-6.5v-52.9
c0-3.6,2.9-6.5,6.5-6.5h360.4c3.6,0,6.5,2.9,6.5,6.5v52.9C409,187.1,406.1,190,402.5,190z"/>
</g>
<polygon fill="#E8593A" points="147.8,203 133.3,201.1 122.8,201.1 122.8,214.6 148.3,214.6 "/>
<rect x="353.6" y="124.2" fill="#3A4D54" width="5.1" height="55.2"/>
</g>
<g>
<path fill="#3A4D54" d="M91.8,293.4H20.2c-3.2,0-5.8-2.6-5.8-5.9s2.6-5.9,5.8-5.9h71.6c3.2,0,5.8,2.6,5.8,5.9S95,293.4,91.8,293.4
z"/>
</g>
<path fill="#3A4D54" d="M428.9,282.7h-83c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9h-54.7c-3.2,0-5.8,2.6-5.8,5.9
c0,3.2,2.6,5.9,5.8,5.9H308c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9h-28.9c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9H262
c-3.2,0-5.8,2.6-5.8,5.9s2.6,5.9,5.8,5.9h13.7c-3.2,0-5.8,2.6-5.8,5.9s2.6,5.9,5.8,5.9h-37.8c-3.2,0-5.8,2.6-5.8,5.9
c0,3,2.2,5.5,5.1,5.8h-48.8c-0.9-0.6-2-1-3.2-1h-47.1c3.2,0,5.8,2.6,5.8,5.9c0,3.2-2.6,5.9-5.8,5.9h-2.8c-3.2,0-5.8,2.9-5.8,6.4
c0,3.5,2.6,6.4,5.8,6.4h58.5h7.5H286c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9H286h-2.7c-3.2,0-5.8-2.6-5.8-5.9
c0-3.2,2.6-5.9,5.8-5.9h66c0.2,0,0.4,0,0.6,0h6.7c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9h-27.2c0,0,0,0,0,0h-0.7
c-3.2,0-5.8-2.6-5.8-5.9c0-3.2,2.6-5.9,5.8-5.9h0.7h14.1c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9h0.2c-3.2,0-5.8-2.6-5.8-5.9
c0-3.2,2.6-5.9,5.8-5.9h0.7h28.9c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9h-16.1h-0.8c0.1,0,0.3,0,0.4,0
c-3-0.2-5.4-2.7-5.4-5.9c0-3.1,2.4-5.7,5.4-5.9c-0.1,0-0.3,0-0.4,0h0.8h65.2h6.5c3.2,0,5.8-2.6,5.8-5.9
C434.6,285.3,432.1,282.7,428.9,282.7z"/>
<g>
<path id="outline_3_" fill-rule="evenodd" clip-rule="evenodd" fill="#3A4D54" d="M258,210.8h37v37.8h18.7
c8.6,0,17.5-1.5,25.7-4.3c4-1.4,8.5-3.3,12.5-5.6c-5.2-6.8-7.9-15.4-8.7-23.9c-1.1-11.5,1.3-26.5,9.1-35.6l3.9-4.5l4.6,3.7
c11.7,9.4,21.5,22.5,23.2,37.4c14-4.1,30.5-3.2,42.9,4l5.1,2.9l-2.7,5.2c-10.5,20.4-32.3,26.7-53.7,25.6
C343.5,333.3,273.8,371,189.4,371c-43.6,0-83.7-16.3-106.5-55l-0.4-0.6l-3.3-6.8c-7.7-17-10.3-35.7-8.5-54.4l0.5-5.6h31.6v-37.8
h37v-37h73.9v-37H258V210.8z"/>
<g id="body_colors_3_">
<path fill="#08AADA" d="M377.8,224.8c2.5-19.3-11.9-34.4-20.9-41.6c-10.3,11.9-11.9,43.1,4.3,56.3c-9,8-28,15.3-47.5,15.3H76.8
c-1.9,20.3,1.7,39,9.8,55l2.7,4.9c1.7,2.9,3.6,5.7,5.6,8.4h0c9.7,0.6,18.7,0.8,26.9,0.7c0,0,0,0,0,0c16.1-0.4,29.3-2.3,39.3-5.7
c1.5-0.5,3.1,0.3,3.6,1.8c0.5,1.5-0.3,3.1-1.8,3.6c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0c-7.9,2.2-16.3,3.8-27.2,4.4
c0.6,0-0.7,0.1-0.7,0.1c-0.4,0-0.8,0.1-1.2,0.1c-4.3,0.2-8.9,0.3-13.6,0.3c-5.2,0-10.3-0.1-15.9-0.4l-0.1,0.1
c19.7,22.2,50.6,35.5,89.3,35.5c81.9,0,151.3-36.3,182.1-117.8c21.8,2.2,42.8-3.3,52.3-21.9C408.6,216.4,389,219.2,377.8,224.8z"
/>
<path fill="#2BB8EB" d="M377.8,224.8c2.5-19.3-11.9-34.4-20.9-41.6c-10.3,11.9-11.9,43.1,4.3,56.3c-9,8-28,15.3-47.5,15.3H90.8
c-1,31.1,10.6,54.7,31,69c0,0,0,0,0,0c16.1-0.4,29.3-2.3,39.3-5.7c1.5-0.5,3.1,0.3,3.6,1.8c0.5,1.5-0.3,3.1-1.8,3.6
c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0c-7.9,2.2-17,3.9-27.9,4.6c0,0-0.3-0.3-0.3-0.3c27.9,14.3,68.3,14.2,114.6-3.6
c51.9-20,100.3-58,134-101.5C378.8,224.3,378.3,224.6,377.8,224.8z"/>
<path fill="#088CB9" d="M76.6,279.5c1.5,10.9,4.7,21.1,9.4,30.4l2.7,4.9c1.7,2.9,3.6,5.7,5.6,8.4c9.7,0.6,18.7,0.8,26.9,0.7
c16.1-0.4,29.3-2.3,39.3-5.7c1.5-0.5,3.1,0.3,3.6,1.8c0.5,1.5-0.3,3.1-1.8,3.6c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0
c-7.9,2.2-17,3.9-27.8,4.5c-0.4,0-1,0-1.4,0c-4.3,0.2-8.9,0.4-13.6,0.4c-5.2,0-10.4-0.1-16.1-0.4c19.7,22.2,50.8,35.5,89.5,35.5
c70.1,0,131.1-26.6,166.5-85.4H76.6z"/>
<path fill="#069BC6" d="M92.9,279.5c4.2,19.1,14.3,34.1,28.9,44.3c16.1-0.4,29.3-2.3,39.3-5.7c1.5-0.5,3.1,0.3,3.6,1.8
c0.5,1.5-0.3,3.1-1.8,3.6c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0c-7.9,2.2-17.2,3.9-28,4.5c27.9,14.3,68.2,14.1,114.5-3.7
c28-10.8,55-26.8,79.2-46.1H92.9z"/>
</g>
<g id="Containers_3_">
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M135.8,219.7h2.5v26.7h-2.5V219.7z M130.9,219.7h2.6v26.7h-2.6
V219.7z M126.1,219.7h2.6v26.7h-2.6V219.7z M121.2,219.7h2.6v26.7h-2.6V219.7z M116.3,219.7h2.6v26.7h-2.6V219.7z M111.6,219.7
h2.5v26.7h-2.5V219.7z M108.9,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M172.7,182.7h2.5v26.7h-2.5V182.7z M167.9,182.7h2.6v26.7h-2.6
V182.7z M163,182.7h2.6v26.7H163V182.7z M158.2,182.7h2.6v26.7h-2.6V182.7z M153.3,182.7h2.6v26.7h-2.6V182.7z M148.6,182.7h2.5
v26.7h-2.5V182.7z M145.9,180h32v32h-32V180z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M172.7,219.7h2.5v26.7h-2.5V219.7z M167.9,219.7h2.6v26.7h-2.6
V219.7z M163,219.7h2.6v26.7H163V219.7z M158.2,219.7h2.6v26.7h-2.6V219.7z M153.3,219.7h2.6v26.7h-2.6V219.7z M148.6,219.7h2.5
v26.7h-2.5V219.7z M145.9,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M209.7,219.7h2.5v26.7h-2.5V219.7z M204.8,219.7h2.6v26.7h-2.6
V219.7z M200,219.7h2.6v26.7H200V219.7z M195.1,219.7h2.6v26.7h-2.6V219.7z M190.3,219.7h2.6v26.7h-2.6V219.7z M185.5,219.7h2.5
v26.7h-2.5V219.7z M182.9,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M209.7,182.7h2.5v26.7h-2.5V182.7z M204.8,182.7h2.6v26.7h-2.6
V182.7z M200,182.7h2.6v26.7H200V182.7z M195.1,182.7h2.6v26.7h-2.6V182.7z M190.3,182.7h2.6v26.7h-2.6V182.7z M185.5,182.7h2.5
v26.7h-2.5V182.7z M182.9,180h32v32h-32V180z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M246.7,219.7h2.5v26.7h-2.5V219.7z M241.8,219.7h2.6v26.7h-2.6
V219.7z M237,219.7h2.6v26.7H237V219.7z M232.1,219.7h2.6v26.7h-2.6V219.7z M227.3,219.7h2.6v26.7h-2.6V219.7z M222.5,219.7h2.5
v26.7h-2.5V219.7z M219.8,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M246.7,182.7h2.5v26.7h-2.5V182.7z M241.8,182.7h2.6v26.7h-2.6
V182.7z M237,182.7h2.6v26.7H237V182.7z M232.1,182.7h2.6v26.7h-2.6V182.7z M227.3,182.7h2.6v26.7h-2.6V182.7z M222.5,182.7h2.5
v26.7h-2.5V182.7z M219.8,180h32v32h-32V180z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M246.7,145.7h2.5v26.7h-2.5V145.7z M241.8,145.7h2.6v26.7h-2.6
V145.7z M237,145.7h2.6v26.7H237V145.7z M232.1,145.7h2.6v26.7h-2.6V145.7z M227.3,145.7h2.6v26.7h-2.6V145.7z M222.5,145.7h2.5
v26.7h-2.5V145.7z M219.8,143.1h32v32h-32V143.1z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M283.6,219.7h2.5v26.7h-2.5V219.7z M278.8,219.7h2.6v26.7h-2.6
V219.7z M273.9,219.7h2.6v26.7h-2.6V219.7z M269.1,219.7h2.6v26.7h-2.6V219.7z M264.2,219.7h2.6v26.7h-2.6V219.7z M259.5,219.7
h2.5v26.7h-2.5V219.7z M256.8,217h32v32h-32V217z"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#D4EDF1" d="M175.9,301c4.9,0,8.8,4,8.8,8.8s-4,8.8-8.8,8.8
c-4.9,0-8.8-4-8.8-8.8S171,301,175.9,301"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#3A4D54" d="M175.9,303.5c0.8,0,1.6,0.2,2.3,0.4c-0.8,0.4-1.3,1.3-1.3,2.2
c0,1.4,1.2,2.6,2.6,2.6c1,0,1.8-0.5,2.3-1.3c0.3,0.7,0.5,1.6,0.5,2.4c0,3.5-2.8,6.3-6.3,6.3c-3.5,0-6.3-2.8-6.3-6.3
C169.6,306.3,172.4,303.5,175.9,303.5"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#3A4D54" d="M19.6,282.7h193.6h23.9h190.5c0.4,0,1.6,0.1,1.2,0
c-9.2-2.2-24.9-6.2-23.5-15.8c0.1-0.7-0.2-0.8-0.6-0.3c-16.6,17.5-54.1,12.2-64.3,3.2c-0.2-0.1-0.4-0.1-0.5,0.1
c-11.5,15.4-73.3,9.7-79.3-2.3c-0.1-0.2-0.4-0.3-0.6-0.1c-14.1,15.7-55.7,15.7-69.8,0c-0.2-0.2-0.5-0.1-0.6,0.1
c-6,12-67.8,17.7-79.3,2.3c-0.1-0.2-0.3-0.2-0.5-0.1c-10.1,8.9-44.5,14.3-61.2-3c-0.3-0.3-0.8-0.1-0.8,0.4
C48.9,277.6,28.1,280.5,19.6,282.7"/>
<path fill="#C0DBE0" d="M199.4,364.7c-21.9-10.4-33.9-24.5-40.6-39.9c-8.1,2.3-17.9,3.8-29.3,4.4c-4.3,0.2-8.8,0.4-13.5,0.4
c-5.4,0-11.2-0.2-17.2-0.5c20.1,20.1,44.8,35.5,90.5,35.8C192.7,364.9,196.1,364.8,199.4,364.7z"/>
<path fill="#D4EDF1" d="M167,339c-3-4.1-6-9.3-8.1-14.2c-8.1,2.3-17.9,3.8-29.3,4.4C137.4,333.4,148.5,337.4,167,339z"/>
</g>
<circle fill="#3A4D54" cx="34.8" cy="311" r="5.9"/>
<path fill="#3A4D54" d="M346.8,297.2l-1-2.8c0,0,5.3-11.7-7.4-11.7c-12.7,0,3.5-4.7,3.5-4.7l21.8,2.8l9.6,6.8l-16.1,4.1
L346.8,297.2z"/>
<path fill="#3A4D54" d="M78.7,297.2l1-2.8c0,0-5.3-11.7,7.4-11.7s-3.5-4.7-3.5-4.7l-21.8,2.8l-9.6,6.8l16.1,4.1L78.7,297.2z"/>
<path fill="#3A4D54" d="M361.7,279.5v4.4l15.6,6.7l45.5-4.1l7.3-3.7c0,0-3.8-0.6-7.3-1.7c-3.6-1.1-15.2-1.6-15.2-1.6h-28.3
l-13.6,1.8L361.7,279.5z"/>
</g>
</svg>

After | Size: 20 KiB
Binary file not shown. | After | Size: 1.0 MiB
File diff suppressed because it is too large. | After | Size: 183 KiB
Binary file not shown. | After | Size: 12 KiB
BIN slides/images/fu-face.jpg (new file) | Binary file not shown. | After | Size: 150 KiB
Binary file not shown. | After | Size: 301 KiB
Binary file not shown. | After | Size: 70 KiB
Binary file not shown. | After | Size: 60 KiB
Binary file not shown. | After | Size: 55 KiB
Binary file not shown. | Before | Size: 22 KiB
BIN slides/images/tangram.gif (new file) | Binary file not shown. | After | Size: 12 KiB
BIN slides/images/tesla.jpg (new file) | Binary file not shown. | After | Size: 484 KiB
BIN slides/images/tetris-1.png (new file) | Binary file not shown. | After | Size: 8.8 KiB
BIN slides/images/tetris-2.gif (new file) | Binary file not shown. | After | Size: 730 KiB
BIN slides/images/tetris-3.png (new file) | Binary file not shown. | After | Size: 24 KiB
Binary file not shown. | After | Size: 21 KiB
BIN slides/images/trollface.png (new file) | Binary file not shown. | After | Size: 2.9 KiB

slides/index.css (new file, 59 lines)

@@ -0,0 +1,59 @@
body {
background-image: url("images/container-background.jpg");
max-width: 1024px;
margin: 0 auto;
}
table {
font-size: 20px;
font-family: sans-serif;
background: white;
width: 100%;
height: 100%;
padding: 20px;
}
.header {
font-size: 300%;
font-weight: bold;
}
.title {
font-size: 150%;
font-weight: bold;
}
.details {
font-size: 80%;
font-style: italic;
}
td {
padding: 1px;
height: 1em;
}
td.spacer {
height: unset;
}
td.footer {
padding-top: 80px;
height: 100px;
}
td.title {
border-bottom: thick solid black;
padding-bottom: 2px;
padding-top: 20px;
}
a {
text-decoration: none;
}
a:hover {
background: yellow;
}
a.attend:after {
content: "📅 attend";
}
a.slides:after {
content: "📚 slides";
}
a.chat:after {
content: "💬 chat";
}
a.video:after {
content: "📺 video";
}


@@ -1,188 +0,0 @@
<html>
<head>
<title>Container Training</title>
<style type="text/css">
body {
background-image: url("images/container-background.jpg");
max-width: 1024px;
margin: 0 auto;
}
table {
font-size: 20px;
font-family: sans-serif;
background: white;
width: 100%;
height: 100%;
padding: 20px;
}
.header {
font-size: 300%;
font-weight: bold;
}
.title {
font-size: 150%;
font-weight: bold;
}
td {
padding: 1px;
height: 1em;
}
td.spacer {
height: unset;
}
td.footer {
padding-top: 80px;
height: 100px;
}
td.title {
border-bottom: thick solid black;
padding-bottom: 2px;
padding-top: 20px;
}
a {
text-decoration: none;
}
a:hover {
background: yellow;
}
a.attend:after {
content: "📅 attend";
}
a.slides:after {
content: "📚 slides";
}
a.chat:after {
content: "💬 chat";
}
a.video:after {
content: "📺 video";
}
</style>
</head>
<body>
<div class="main">
<table>
<tr><td class="header" colspan="4">Container Training</td></tr>
<tr><td class="title" colspan="4">Coming soon near you</td></tr>
<!--
<td>Nothing for now (stay tuned...)</td>
-->
<tr>
<td>March 27, 2018: SREcon Americas — Kubernetes 101</td>
<td><a class="slides" href="http://srecon2018.container.training/" /></td>
<td><a class="attend" href="https://www.usenix.org/conference/srecon18americas/presentation/kromhout" />
</tr>
<tr>
<td>April 11-12, 2018: Introduction aux conteneurs (in French)</td>
<td>&nbsp;</td>
<td><a class="attend" href="http://paris.container.training/intro.html" />
</tr>
<tr>
<td>April 13, 2018: Introduction à l'orchestration (in French)</td>
<td>&nbsp;</td>
<td><a class="attend" href="http://paris.container.training/kube.html" />
</tr>
<tr><td class="title" colspan="4">Past workshops</td></tr>
<tr>
<td>Boosterconf: Kubernetes 101</td>
<td><a class="slides" href="http://boosterconf2018.container.training/" /></td>
</tr>
<tr>
<!-- February 22, 2018 -->
<td>IndexConf: Kubernetes 101</td>
<td><a class="slides" href="http://indexconf2018.container.training/" /></td>
<!--
<td><a class="attend" href="https://developer.ibm.com/indexconf/sessions/#!?id=5474" />
-->
</tr>
<tr>
<td>Kubernetes enablement at Docker</td>
<td><a class="slides" href="http://kube.container.training/" /></td>
</tr>
<tr>
<td>QCON SF: Orchestrating Microservices with Docker Swarm</td>
<td><a class="slides" href="http://qconsf2017swarm.container.training/" /></td>
</tr>
<tr>
<td>QCON SF: Introduction to Docker and Containers</td>
<td><a class="slides" href="http://qconsf2017intro.container.training/" /></td>
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviLgqTum8MkspG_8VzGl6C07" /></td>
</tr>
<tr>
<td>LISA17 M7: Getting Started with Docker and Containers</td>
<td><a class="slides" href="http://lisa17m7.container.training/" /></td>
</tr>
<tr>
<td>LISA17 T9: Build, Ship, and Run Microservices on a Docker Swarm Cluster</td>
<td><a class="slides" href="http://lisa17t9.container.training/" /></td>
</tr>
<tr>
<td>Deploying and scaling microservices with Docker and Kubernetes</td>
<td><a class="slides" href="http://osseu17.container.training/" /></td>
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviLrsyydCzxWrIP_1-wkcSHS" /></td>
</tr>
<tr>
<td>DockerCon Workshop: from Zero to Hero (full day, B3 M1-2)</td>
<td><a class="slides" href="http://dc17eu.container.training/" /></td>
</tr>
<tr>
<td>DockerCon Workshop: Orchestration for Advanced Users (afternoon, B4 M5-6)</td>
<td><a class="slides" href="https://www.bretfisher.com/dockercon17eu/" /></td>
</tr>
<tr>
<td>LISA16 T1: Deploying and Scaling Applications with Docker Swarm</td>
<td><a class="slides" href="http://lisa16t1.container.training/" /></td>
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviIDDhr8vIwCN1wkyNGXjbbc" /></td>
</tr>
<tr>
<td>PyCon2016: Introduction to Docker and containers</td>
<td><a class="slides" href="https://us.pycon.org/2016/site_media/media/tutorial_handouts/DockerSlides.pdf" /></td>
<td><a class="video" href="https://www.youtube.com/watch?v=ZVaRK10HBjo" /></td>
</tr>
<tr><td class="title" colspan="4">Self-paced tutorials</td></tr>
<tr>
<td>Introduction to Docker and Containers</td>
<td><a class="slides" href="intro-fullday.yml.html" /></td>
</tr>
<tr>
<td>Container Orchestration with Docker and Swarm</td>
<td><a class="slides" href="swarm-selfpaced.yml.html" /></td>
</tr>
<tr>
<td>Deploying and Scaling Microservices with Docker and Kubernetes</td>
<td><a class="slides" href="kube-halfday.yml.html" /></td>
</tr>
<tr><td class="spacer"></td></tr>
<tr>
<td class="footer">
Maintained by Jérôme Petazzoni (<a href="https://twitter.com/jpetazzo">@jpetazzo</a>)
</td>
</tr>
</table>
</div>
</body>
</html>

slides/index.py (new executable file, 140 lines)

@@ -0,0 +1,140 @@
#!/usr/bin/env python2
# coding: utf-8
TEMPLATE="""<html>
<head>
<title>{{ title }}</title>
<link rel="stylesheet" href="index.css">
</head>
<body>
<div class="main">
<table>
<tr><td class="header" colspan="3">{{ title }}</td></tr>
{% if coming_soon %}
<tr><td class="title" colspan="3">Coming soon near you</td></tr>
{% for item in coming_soon %}
<tr>
<td>{{ item.title }}</td>
<td>{% if item.slides %}<a class="slides" href="{{ item.slides }}" />{% endif %}</td>
<td><a class="attend" href="{{ item.attend }}" /></td>
</tr>
<tr>
<td class="details">Scheduled {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
{% if past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>
{% for item in past_workshops[:5] %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td>{% if item.video %}<a class="video" href="{{ item.video }}" />{% endif %}</td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% if past_workshops[5:] %}
<tr>
<td>... and at least <a href="past.html">{{ past_workshops[5:] | length }} more</a>.</td>
</tr>
{% endif %}
{% endif %}
{% if recorded_workshops %}
<tr><td class="title" colspan="3">Recorded workshops</td></tr>
{% for item in recorded_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td><a class="video" href="{{ item.video }}" /></td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
{% if self_paced %}
<tr><td class="title" colspan="3">Self-paced tutorials</td></tr>
{% for item in self_paced %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
</tr>
{% endfor %}
{% endif %}
{% if all_past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>
{% for item in all_past_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
{% if item.video %}
<td><a class="video" href="{{ item.video }}" /></td>
{% endif %}
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
<tr><td class="spacer"></td></tr>
<tr>
<td class="footer">
Maintained by Jérôme Petazzoni (<a href="https://twitter.com/jpetazzo">@jpetazzo</a>) and <a href="https://github.com/jpetazzo/container.training/graphs/contributors">contributors</a>.
</td>
</tr>
</table>
</div>
</body>
</html>""".decode("utf-8")
import datetime
import jinja2
import yaml
items = yaml.load(open("index.yaml"))
for item in items:
    if "date" in item:
        date = item["date"]
        suffix = {
            1: "st", 2: "nd", 3: "rd",
            21: "st", 22: "nd", 23: "rd",
            31: "st"}.get(date.day, "th")
        item["prettydate"] = date.strftime("%B %e{}, %Y").format(suffix)
today = datetime.date.today()
coming_soon = [i for i in items if i.get("date") and i["date"] >= today]
coming_soon.sort(key=lambda i: i["date"])
past_workshops = [i for i in items if i.get("date") and i["date"] < today]
past_workshops.sort(key=lambda i: i["date"], reverse=True)
self_paced = [i for i in items if not i.get("date")]
recorded_workshops = [i for i in items if i.get("video")]
template = jinja2.Template(TEMPLATE)
with open("index.html", "w") as f:
    f.write(template.render(
        title="Container Training",
        coming_soon=coming_soon,
        past_workshops=past_workshops,
        self_paced=self_paced,
        recorded_workshops=recorded_workshops
    ).encode("utf-8"))
with open("past.html", "w") as f:
    f.write(template.render(
        title="Container Training",
        all_past_workshops=past_workshops
    ).encode("utf-8"))
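The ordinal-suffix table in the script maps days 1/21/31 to "st", 2/22 to "nd", 3/23 to "rd", and everything else (including 11-13) to "th". A small sketch of that lookup, factored into helpers — the function names are hypothetical, and `date.day` is used instead of `%e` (which strftime space-pads on most platforms):

```python
import datetime

# Same suffix table as index.py: 1/21/31 -> "st", 2/22 -> "nd",
# 3/23 -> "rd", everything else (including 11-13) -> "th".
def ordinal_suffix(day):
    return {1: "st", 2: "nd", 3: "rd",
            21: "st", 22: "nd", 23: "rd",
            31: "st"}.get(day, "th")

def prettydate(date):  # hypothetical helper mirroring the inline code
    return "{} {}{}, {}".format(date.strftime("%B"), date.day,
                                ordinal_suffix(date.day), date.year)

print(prettydate(datetime.date(2018, 7, 12)))  # July 12th, 2018
```

Because 11-13 are absent from the table, they correctly fall through to "th" rather than getting "st"/"nd"/"rd".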

slides/index.yaml (new file, 361 lines)

@@ -0,0 +1,361 @@
- date: 2018-07-12
city: Minneapolis, MN
country: us
event: devopsdays Minneapolis
title: Kubernetes 101
speaker: "ashleymcnamara, bketelsen"
attend: https://www.devopsdays.org/events/2018-minneapolis/registration/
- date: 2018-10-01
city: New York, NY
country: us
event: Velocity
title: Kubernetes 101
speaker: bridgetkromhout
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70102
- date: 2018-09-30
city: New York, NY
country: us
event: Velocity
title: Kubernetes Bootcamp - Deploying and Scaling Microservices
speaker: jpetazzo
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/69875
- date: 2018-07-17
city: Portland, OR
country: us
event: OSCON
title: Kubernetes 101
speaker: bridgetkromhout
attend: https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/66287
- date: 2018-06-27
city: Amsterdam
country: nl
event: devopsdays
title: Kubernetes 101
speaker: bridgetkromhout
slides: https://devopsdaysams2018.container.training
attend: https://www.devopsdays.org/events/2018-amsterdam/registration/
- date: 2018-06-12
city: San Jose, CA
country: us
event: Velocity
title: Kubernetes 101
speaker: bridgetkromhout
slides: https://velocitysj2018.container.training
attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66286
- date: 2018-06-12
city: San Jose, CA
country: us
event: Velocity
title: "Kubernetes two-day kickstart: Deploying and Scaling Microservices with Kubernetes"
speaker: "bketelsen, erikstmartin"
slides: http://kubernetes.academy/kube-fullday.yml.html#1
attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932
- date: 2018-06-11
city: San Jose, CA
country: us
event: Velocity
title: "Kubernetes two-day kickstart: Introduction to Docker and Containers"
speaker: "bketelsen, erikstmartin"
slides: http://kubernetes.academy/intro-fullday.yml.html#1
attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932
- date: 2018-05-17
city: Virginia Beach, FL
country: us
event: Revolution Conf
title: Docker 101
speaker: bretfisher
slides: https://revconf18.bretfisher.com
- date: 2018-05-10
city: Saint Paul, MN
country: us
event: NDC Minnesota
title: Kubernetes 101
slides: https://ndcminnesota2018.container.training
- date: 2018-05-08
city: Budapest
country: hu
event: CRAFT
title: Swarm Orchestration
slides: https://craftconf18.bretfisher.com
- date: 2018-04-27
city: Chicago, IL
country: us
event: GOTO
title: Swarm Orchestration
slides: https://gotochgo18.bretfisher.com
- date: 2018-04-24
city: Chicago, IL
country: us
event: GOTO
title: Kubernetes 101
slides: http://gotochgo2018.container.training/
- date: 2018-04-11
city: Paris
country: fr
title: Introduction aux conteneurs
lang: fr
slides: https://avril2018.container.training/intro.yml.html
- date: 2018-04-13
city: Paris
country: fr
lang: fr
title: Introduction à l'orchestration
slides: https://avril2018.container.training/kube.yml.html
- date: 2018-04-06
city: Sacramento, CA
country: us
event: MuraCon
title: Docker 101
slides: https://muracon18.bretfisher.com
- date: 2018-03-27
city: Santa Clara, CA
country: us
event: SREcon Americas
title: Kubernetes 101
slides: http://srecon2018.container.training/
- date: 2018-03-27
city: Bergen
country: no
event: Boosterconf
title: Kubernetes 101
slides: http://boosterconf2018.container.training/
- date: 2018-02-22
city: San Francisco, CA
country: us
event: IndexConf
title: Kubernetes 101
slides: http://indexconf2018.container.training/
#attend: https://developer.ibm.com/indexconf/sessions/#!?id=5474
- date: 2017-11-17
city: San Francisco, CA
country: us
event: QCON SF
title: Orchestrating Microservices with Docker Swarm
slides: http://qconsf2017swarm.container.training/
- date: 2017-11-16
city: San Francisco, CA
country: us
event: QCON SF
title: Introduction to Docker and Containers
slides: http://qconsf2017intro.container.training/
video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLgqTum8MkspG_8VzGl6C07
- date: 2017-10-30
city: San Francisco, CA
country: us
event: LISA
title: (M7) Getting Started with Docker and Containers
slides: http://lisa17m7.container.training/
- date: 2017-10-31
city: San Francisco, CA
country: us
event: LISA
title: (T9) Build, Ship, and Run Microservices on a Docker Swarm Cluster
slides: http://lisa17t9.container.training/
- date: 2017-10-26
city: Prague
country: cz
event: Open Source Summit Europe
title: Deploying and scaling microservices with Docker and Kubernetes
slides: http://osseu17.container.training/
video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLrsyydCzxWrIP_1-wkcSHS
- date: 2017-10-16
city: Copenhagen
country: dk
event: DockerCon
title: Swarm from Zero to Hero
slides: http://dc17eu.container.training/
- date: 2017-10-16
city: Copenhagen
country: dk
event: DockerCon
title: Orchestration for Advanced Users
slides: https://www.bretfisher.com/dockercon17eu
- date: 2017-07-25
city: Minneapolis, MN
country: us
event: devopsdays
title: Deploying & Scaling microservices with Docker Swarm
video: https://www.youtube.com/watch?v=DABbqyJeG_E
- date: 2017-06-12
city: Berlin
country: de
event: DevOpsCon
title: Deploying and scaling containerized Microservices with Docker and Swarm
- date: 2017-05-18
city: Portland, OR
country: us
event: PyCon
title: Deploy and scale containers with Docker native, open source orchestration
video: https://www.youtube.com/watch?v=EuzoEaE6Cqs
- date: 2017-05-08
city: Austin, TX
country: us
event: OSCON
title: Deploying and scaling applications in containers with Docker
- date: 2017-05-04
city: Chicago, IL
country: us
event: GOTO
title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2017-04-17
city: Austin, TX
country: us
event: DockerCon
title: Orchestration Workshop
- date: 2017-03-22
city: San Jose, CA
country: us
event: Devoxx
title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2017-03-03
city: Pasadena, CA
country: us
event: SCALE
title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2016-12-06
city: Boston, MA
country: us
event: LISA
title: Deploying and Scaling Applications with Docker Swarm
slides: http://lisa16t1.container.training/
video: https://www.youtube.com/playlist?list=PLBAFXs0YjviIDDhr8vIwCN1wkyNGXjbbc
- date: 2016-10-07
city: Berlin
country: de
event: LinuxCon
title: Orchestrating Containers in Production at Scale with Docker Swarm
- date: 2016-09-20
city: New York, NY
country: us
event: Velocity
title: Deployment and orchestration at scale with Docker
- date: 2016-08-25
city: Toronto
country: ca
event: LinuxCon
title: Orchestrating Containers in Production at Scale with Docker Swarm
- date: 2016-06-22
city: Seattle, WA
country: us
event: DockerCon
title: Orchestration Workshop
- date: 2016-05-29
city: Portland, OR
country: us
event: PyCon
title: Introduction to Docker and containers
slides: https://us.pycon.org/2016/site_media/media/tutorial_handouts/DockerSlides.pdf
video: https://www.youtube.com/watch?v=ZVaRK10HBjo
- date: 2016-05-17
city: Austin, TX
country: us
event: OSCON
title: Deployment and orchestration at scale with Docker Swarm
- date: 2016-04-27
city: Budapest
country: hu
event: CRAFT
title: Advanced Docker concepts and container orchestration
- date: 2016-04-22
city: Berlin
country: de
event: Neofonie
title: Orchestration Workshop
- date: 2016-04-05
city: Stockholm
country: se
event: Praqma
title: Orchestration Workshop
- date: 2016-03-22
city: Munich
country: de
event: Stylight
title: Orchestration Workshop
- date: 2016-03-11
city: London
country: uk
event: QCON
title: Containers in production with Docker Swarm
- date: 2016-02-19
city: Amsterdam
country: nl
event: Container Solutions
title: Orchestration Workshop
- date: 2016-02-15
city: Paris
country: fr
event: Zenika
title: Orchestration Workshop
- date: 2016-01-22
city: Pasadena, CA
country: us
event: SCALE
title: Advanced Docker concepts and container orchestration
#- date: 2015-11-10
# city: Washington DC
# country: us
# event: LISA
# title: Deploying and Scaling Applications with Docker Swarm
#2015-09-24-strangeloop
- title: Introduction to Docker and Containers
slides: intro-selfpaced.yml.html
- title: Container Orchestration with Docker and Swarm
slides: swarm-selfpaced.yml.html
- title: Deploying and Scaling Microservices with Docker and Kubernetes
slides: kube-selfpaced.yml.html

View File

@@ -1,11 +1,14 @@
title: |
Introduction
to Docker and
Containers
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
@@ -16,7 +19,7 @@ chapters:
- common/about-slides.md
- common/toc.md
- - intro/Docker_Overview.md
#- intro/Docker_History.md
- intro/Docker_History.md
- intro/Training_Environment.md
- intro/Installing_Docker.md
- intro/First_Containers.md
@@ -27,11 +30,13 @@ chapters:
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- intro/Multi_Stage_Builds.md
- - intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Container_Networking_Basics.md
- intro/Labels.md
- intro/Getting_Inside.md
- - intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md
@@ -39,6 +44,16 @@ chapters:
- - intro/Local_Development_Workflow.md
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Advanced_Dockerfiles.md
- intro/Docker_Machine.md
- - intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Logging.md
- intro/Resource_Limits.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- - intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md
- intro/links.md

View File

@@ -1,11 +1,14 @@
title: |
Introduction
to Docker and
Containers
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
@@ -16,7 +19,7 @@ chapters:
- common/about-slides.md
- common/toc.md
- - intro/Docker_Overview.md
#- intro/Docker_History.md
- intro/Docker_History.md
- intro/Training_Environment.md
- intro/Installing_Docker.md
- intro/First_Containers.md
@@ -27,11 +30,13 @@ chapters:
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- intro/Multi_Stage_Builds.md
- - intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Container_Networking_Basics.md
- intro/Labels.md
- intro/Getting_Inside.md
- - intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md
@@ -39,6 +44,16 @@ chapters:
- - intro/Local_Development_Workflow.md
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Advanced_Dockerfiles.md
- intro/Docker_Machine.md
- - intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Logging.md
- intro/Resource_Limits.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- - intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md
- intro/links.md

View File

@@ -34,18 +34,6 @@ In this section, we will see more Dockerfile commands.
---
## The `MAINTAINER` instruction
The `MAINTAINER` instruction tells you who wrote the `Dockerfile`.
```dockerfile
MAINTAINER Docker Education Team <education@docker.com>
```
It's optional but recommended.
---
## The `RUN` instruction
The `RUN` instruction can be specified in two ways.
@@ -367,7 +355,7 @@ class: extra-details
## Overriding the `ENTRYPOINT` instruction
The entry point can be overriden as well.
The entry point can be overridden as well.
```bash
$ docker run -it training/ls
@@ -428,5 +416,4 @@ ONBUILD COPY . /src
```
* You can't chain `ONBUILD` instructions with `ONBUILD`.
* `ONBUILD` can't be used to trigger `FROM` and `MAINTAINER`
instructions.
* `ONBUILD` can't be used to trigger `FROM` instructions.

View File

@@ -40,6 +40,8 @@ ambassador containers.
---
class: pic
![ambassador](images/ambassador-diagram.png)
---

View File

@@ -0,0 +1,201 @@
# Application Configuration
There are many ways to provide configuration to containerized applications.
There is no "best way" — it depends on factors like:
* configuration size,
* mandatory and optional parameters,
* scope of configuration (per container, per app, per customer, per site, etc),
* frequency of changes in the configuration.
---
## Command-line parameters
```bash
docker run jpetazzo/hamba 80 www1:80 www2:80
```
* Configuration is provided through command-line parameters.
* In the above example, the `ENTRYPOINT` is a script that will:
- parse the parameters,
- generate a configuration file,
- start the actual service.
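As a rough sketch of such an entrypoint (hypothetical names; the real `jpetazzo/hamba` image does more), turning `80 www1:80 www2:80`-style arguments into a generated configuration file:

```bash
# Hypothetical entrypoint sketch: parse CLI parameters, generate a config file.
cat > /tmp/demo-entrypoint.sh <<'EOF'
#!/bin/sh
FRONTEND_PORT=$1; shift            # first parameter: port to listen on
{
  echo "frontend :$FRONTEND_PORT"
  for backend in "$@"; do          # remaining parameters: backend addresses
    echo "backend $backend"
  done
} > /tmp/demo.cfg
# ...here the real entrypoint would exec the actual service...
EOF
sh /tmp/demo-entrypoint.sh 80 www1:80 www2:80
cat /tmp/demo.cfg
```

Running it writes `frontend :80` followed by one `backend` line per argument.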
---
## Command-line parameters pros and cons
* Appropriate for mandatory parameters (without which the service cannot start).
* Convenient for "toolbelt" services instantiated many times.
(Because there is no extra step: just run it!)
* Not great for dynamic configurations or bigger configurations.
(These things are still possible, but more cumbersome.)
---
## Environment variables
```bash
docker run -e ELASTICSEARCH_URL=http://es42:9201/ kibana
```
* Configuration is provided through environment variables.
* The environment variable can be used straight by the program,
<br/>or by a script generating a configuration file.
---
## Environment variables pros and cons
* Appropriate for optional parameters (since the image can provide default values).
* Also convenient for services instantiated many times.
(It's as easy as command-line parameters.)
* Great for services with lots of parameters, but you only want to specify a few.
(And use default values for everything else.)
* Ability to introspect possible parameters and their default values.
* Not great for dynamic configurations.
---
## Baked-in configuration
```
FROM prometheus
COPY prometheus.conf /etc
```
* The configuration is added to the image.
* The image may have a default configuration; the new configuration can:
- replace the default configuration,
- extend it (if the code can read multiple configuration files).
---
## Baked-in configuration pros and cons
* Allows arbitrary customization and complex configuration files.
* Requires writing a configuration file. (Obviously!)
* Requires building an image to start the service.
* Requires rebuilding the image to reconfigure the service.
* Requires rebuilding the image to upgrade the service.
* Configured images can be stored in registries.
(Which is great, but requires a registry.)
---
## Configuration volume
```bash
docker run -v appconfig:/etc/appconfig myapp
```
* The configuration is stored in a volume.
* The volume is attached to the container.
* The image may have a default configuration.
(But this results in a less "obvious" setup that needs more documentation.)
---
## Configuration volume pros and cons
* Allows arbitrary customization and complex configuration files.
* Requires creating a volume for each different configuration.
* Services with identical configurations can use the same volume.
* Doesn't require building or rebuilding an image when upgrading or reconfiguring.
* Configuration can be generated or edited through another container.
---
## Dynamic configuration volume
* This is a powerful pattern for dynamic, complex configurations.
* The configuration is stored in a volume.
* The configuration is generated / updated by a special container.
* The application container detects when the configuration is changed.
(And automatically reloads the configuration when necessary.)
* The configuration can be shared between multiple services if needed.
---
## Dynamic configuration volume example
In a first terminal, start a load balancer with an initial configuration:
```bash
$ docker run --name loadbalancer jpetazzo/hamba \
80 goo.gl:80
```
In another terminal, reconfigure that load balancer:
```bash
$ docker run --rm --volumes-from loadbalancer jpetazzo/hamba reconfigure \
80 google.com:80
```
The configuration could also be updated through e.g. a REST API.
(The REST API being itself served from another container.)
---
## Keeping secrets
.warning[Ideally, you should not put secrets (passwords, tokens...) in:]
* command-line or environment variables (anyone with Docker API access can get them),
* images, especially stored in a registry.
Secrets management is better handled with an orchestrator (like Swarm or Kubernetes).
Orchestrators allow passing secrets to containers in a "one-way" manner.
Managing secrets securely without an orchestrator can be cumbersome.
E.g.:
- read the secret on stdin when the service starts,
- pass the secret using an API endpoint.
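A minimal sketch of the stdin approach (hypothetical paths and names): an entrypoint reads the secret on stdin and writes it to a file only readable by the service, so the secret never appears in the command line, environment, or `docker inspect` output.

```bash
# Hypothetical entrypoint sketch: receive a secret over stdin.
cat > /tmp/secret-entrypoint.sh <<'EOF'
#!/bin/sh
read -r SECRET                     # secret comes in on stdin
umask 077                          # file will be readable by this user only
printf '%s' "$SECRET" > /tmp/app-token
# ...here the real entrypoint would exec the actual service...
EOF
echo "s3cr3t" | sh /tmp/secret-entrypoint.sh
```

The container would then be started with something like `echo "$TOKEN" | docker run -i myimage`, keeping the secret out of the Docker API's view.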

View File

@@ -117,7 +117,7 @@ CONTAINER ID IMAGE ... CREATED STATUS ...
Many Docker commands will work on container IDs: `docker stop`, `docker rm`...
If we want to list only the IDs of our containers (without the other colums
If we want to list only the IDs of our containers (without the other columns
or the header line),
we can use the `-q` ("Quiet", "Quick") flag:

View File

@@ -93,20 +93,22 @@ The output of `docker build` looks like this:
.small[
```bash
$ docker build -t figlet .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
---> e54ca5efa2e9
Step 1 : RUN apt-get update
---> Running in 840cb3533193
---> 7257c37726a1
Removing intermediate container 840cb3533193
Step 2 : RUN apt-get install figlet
---> Running in 2b44df762a2f
---> f9e8f1642759
Removing intermediate container 2b44df762a2f
Successfully built f9e8f1642759
docker build -t figlet .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu
---> f975c5035748
Step 2/3 : RUN apt-get update
---> Running in e01b294dbffd
(...output of the RUN command...)
Removing intermediate container e01b294dbffd
---> eb8d9b561b37
Step 3/3 : RUN apt-get install figlet
---> Running in c29230d70f9b
(...output of the RUN command...)
Removing intermediate container c29230d70f9b
---> 0dfd7a253f21
Successfully built 0dfd7a253f21
Successfully tagged figlet:latest
```
]
@@ -134,20 +136,20 @@ Sending build context to Docker daemon 2.048 kB
## Executing each step
```bash
Step 1 : RUN apt-get update
---> Running in 840cb3533193
Step 2/3 : RUN apt-get update
---> Running in e01b294dbffd
(...output of the RUN command...)
---> 7257c37726a1
Removing intermediate container 840cb3533193
Removing intermediate container e01b294dbffd
---> eb8d9b561b37
```
* A container (`840cb3533193`) is created from the base image.
* A container (`e01b294dbffd`) is created from the base image.
* The `RUN` command is executed in this container.
* The container is committed into an image (`7257c37726a1`).
* The container is committed into an image (`eb8d9b561b37`).
* The build container (`840cb3533193`) is removed.
* The build container (`e01b294dbffd`) is removed.
* The output of this step will be the base image for the next one.

View File

@@ -64,6 +64,7 @@ Let's build it:
$ docker build -t figlet .
...
Successfully built 042dff3b4a8d
Successfully tagged figlet:latest
```
And run it:
@@ -165,6 +166,7 @@ Let's build it:
$ docker build -t figlet .
...
Successfully built 36f588918d73
Successfully tagged figlet:latest
```
And run it:
@@ -223,6 +225,7 @@ Let's build it:
$ docker build -t figlet .
...
Successfully built 6e0b6a048a07
Successfully tagged figlet:latest
```
Run it without parameters:

View File

@@ -49,7 +49,7 @@ Before diving in, let's see a small example of Compose in action.
---
## Compose in action
class: pic
![composeup](images/composeup.gif)
@@ -60,6 +60,10 @@ Before diving in, let's see a small example of Compose in action.
If you are using the official training virtual machines, Compose has been
pre-installed.
If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them.
If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`.
You can always check that it is installed by running:
```bash
@@ -135,22 +139,33 @@ services:
---
## Compose file versions
## Compose file structure
Version 1 directly has the various containers (`www`, `redis`...) at the top level of the file.
A Compose file has multiple sections:
Version 2 has multiple sections:
* `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.)
* `version` is mandatory and should be `"2"`.
* `services` is mandatory and corresponds to the content of the version 1 format.
* `services` is mandatory. A service is one or more replicas of the same image running as containers.
* `networks` is optional and indicates to which networks containers should be connected.
<br/>(By default, containers will be connected on a private, per-app network.)
<br/>(By default, containers will be connected on a private, per-compose-file network.)
* `volumes` is optional and can define volumes to be used and/or shared by the containers.
Version 3 adds support for deployment options (scaling, rolling updates, etc.)
---
## Compose file versions
* Version 1 is legacy and shouldn't be used.
(If you see a Compose file without `version` and `services`, it's a legacy v1 file.)
* Version 2 added support for networks and volumes.
* Version 3 added support for deployment options (scaling, rolling updates, etc).
The [Docker documentation](https://docs.docker.com/compose/compose-file/)
has excellent information about the Compose file format if you need to know more about versions.
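For illustration, a minimal version 2 Compose file using the sections described above (service names, images, and ports are illustrative) might look like this:

```yaml
version: "2"

services:
  www:
    image: nginx
    ports:
      - "8000:80"
    networks:
      - frontend
  redis:
    image: redis
    networks:
      - frontend
    volumes:
      - redisdata:/data

networks:
  frontend:

volumes:
  redisdata:
```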
---
@@ -260,6 +275,8 @@ Removing trainingwheels_www_1 ... done
Removing trainingwheels_redis_1 ... done
```
Use `docker-compose down -v` to remove everything including volumes.
---
## Special handling of volumes

View File

@@ -0,0 +1,177 @@
# Docker Engine and other container engines
* We are going to cover the architecture of the Docker Engine.
* We will also present other container engines.
---
class: pic
## Docker Engine external architecture
![](images/docker-engine-architecture.svg)
---
## Docker Engine external architecture
* The Engine is a daemon (service running in the background).
* All interaction is done through a REST API exposed over a socket.
* On Linux, the default socket is a UNIX socket: `/var/run/docker.sock`.
* We can also use a TCP socket, with optional mutual TLS authentication.
* The `docker` CLI communicates with the Engine over the socket.
Note: strictly speaking, the Docker API is not fully REST.
Some operations (e.g. dealing with interactive containers
and log streaming) don't fit the REST model.
---
class: pic
## Docker Engine internal architecture
![](images/dockerd-and-containerd.png)
---
## Docker Engine internal architecture
* Up to Docker 1.10: the Docker Engine is one single monolithic binary.
* Starting with Docker 1.11, the Engine is split into multiple parts:
- `dockerd` (REST API, auth, networking, storage)
- `containerd` (container lifecycle, controlled over a gRPC API)
- `containerd-shim` (per-container; does almost nothing, but allows the Engine to be restarted without restarting the containers)
- `runc` (per-container; does the actual heavy lifting to start the container)
* Some features (like image and snapshot management) are progressively being pushed from `dockerd` to `containerd`.
For more details, check [this short presentation by Phil Estes](https://www.slideshare.net/PhilEstes/diving-through-the-layers-investigating-runc-containerd-and-the-docker-engine-architecture).
---
## Other container engines
The following list is not exhaustive.
Furthermore, we limited the scope to Linux containers.
Containers also exist (sometimes with other names) on Windows, macOS, Solaris, FreeBSD ...
---
## LXC
* The venerable ancestor (first released in 2008).
* Docker initially relied on it to execute containers.
* No daemon; no central API.
* Each container is managed by a `lxc-start` process.
* Each `lxc-start` process exposes a custom API over a local UNIX socket, allowing interaction with the container.
* No notion of image (container filesystems have to be managed manually).
* Networking has to be set up manually.
---
## LXD
* Re-uses LXC code (through liblxc).
* Builds on top of LXC to offer a more modern experience.
* Daemon exposing a REST API.
* Can manage images, snapshots, migrations, networking, storage.
* "offers a user experience similar to virtual machines but using Linux containers instead."
---
## rkt
* Compares to `runc`.
* No daemon or API.
* Strong emphasis on security (through privilege separation).
* Networking has to be set up separately (e.g. through CNI plugins).
* Partial image management (pull, but no push).
(Image build is handled by separate tools.)
---
## CRI-O
* Designed to be used with Kubernetes as a simple, basic runtime.
* Compares to `containerd`.
* Daemon exposing a gRPC interface.
* Controlled using the CRI API (Container Runtime Interface defined by Kubernetes).
* Needs an underlying OCI runtime (e.g. runc).
* Handles storage, images, networking (through CNI plugins).
We're not aware of anyone using it directly (i.e. outside of Kubernetes).
---
## systemd
* "init" system (PID 1) in most modern Linux distributions.
* Offers tools like `systemd-nspawn` and `machinectl` to manage containers.
* `systemd-nspawn` is "In many ways it is similar to chroot(1), but more powerful".
* `machinectl` can interact with VMs and containers managed by systemd.
* Exposes a DBUS API.
* Basic image support (tar archives and raw disk images).
* Networking has to be set up manually.
---
## Overall ...
* The Docker Engine is very developer-centric:
- easy to install
- easy to use
- no manual setup
- first-class image build and transfer
* As a result, it is a fantastic tool in development environments.
* On servers:
- Docker is a good default choice
- If you use Kubernetes, the engine doesn't matter

View File

@@ -65,9 +65,17 @@ eb0eeab782f4 host host
* A network is managed by a *driver*.
* All the drivers that we have seen before are available.
* The built-in drivers include:
* A new multi-host driver, *overlay*, is available out of the box.
* `bridge` (default)
* `none`
* `host`
* `macvlan`
* A multi-host driver, *overlay*, is available out of the box (for Swarm clusters).
* More drivers can be provided by plugins (OVS, VLAN...)
@@ -75,6 +83,8 @@ eb0eeab782f4 host host
---
class: extra-details
## Differences with the CNI
* CNI = Container Network Interface
@@ -87,6 +97,22 @@ eb0eeab782f4 host host
---
class: pic
## Single container in a Docker network
![bridge0](images/bridge1.png)
---
class: pic
## Two containers on two Docker networks
![bridge3](images/bridge2.png)
---
## Creating a network
Let's create a network called `dev`.
@@ -284,7 +310,7 @@ since we wiped out the old Redis container).
---
class: x-extra-details
class: extra-details
## Names are *local* to each network
@@ -324,7 +350,7 @@ class: extra-details
Create the `prod` network.
```bash
$ docker create network prod
$ docker network create prod
5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c
```
@@ -472,11 +498,13 @@ b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09
* If containers span multiple hosts, we need an *overlay* network to connect them together.
* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN.
* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging
VXLAN, *enabled with Swarm Mode*.
* Other plugins (Weave, Calico...) can provide overlay networks as well.
* Once you have an overlay network, *all the features that we've used in this chapter work identically.*
* Once you have an overlay network, *all the features that we've used in this chapter work identically
across multiple hosts.*
---
@@ -514,13 +542,174 @@ General idea:
---
## Section summary
## Connecting and disconnecting dynamically
We've learned how to:
* So far, we have specified which network to use when starting the container.
* Create private networks for groups of containers.
* The Docker Engine also allows connecting and disconnecting while the container runs.
* Assign IP addresses to containers.
* This feature is exposed through the Docker API, and through two Docker CLI commands:
* Use container naming to implement service discovery.
* `docker network connect <network> <container>`
* `docker network disconnect <network> <container>`
---
## Dynamically connecting to a network
* We have a container named `es` connected to a network named `dev`.
* Let's start a simple alpine container on the default network:
```bash
$ docker run -ti alpine sh
/ #
```
* In this container, try to ping the `es` container:
```bash
/ # ping es
ping: bad address 'es'
```
This doesn't work, but we will change that by connecting the container.
---
## Finding the container ID and connecting it
* Figure out the ID of our alpine container; here are two methods:
* looking at `/etc/hostname` in the container,
* running `docker ps -lq` on the host.
* Run the following command on the host:
```bash
$ docker network connect dev `<container_id>`
```
---
## Checking what we did
* Try again to `ping es` from the container.
* It should now work correctly:
```bash
/ # ping es
PING es (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms
^C
```
* Interrupt it with Ctrl-C.
---
## Looking at the network setup in the container
We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`:
.small[
```bash
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1
valid_lft forever preferred_lft forever
/ #
```
]
Each network connection is materialized with a virtual network interface.
As we can see, we can be connected to multiple networks at the same time.
---
## Disconnecting from a network
* Let's try the symmetrical command to disconnect the container:
```bash
$ docker network disconnect dev <container_id>
```
* From now on, if we try to ping `es`, it will not resolve:
```bash
/ # ping es
ping: bad address 'es'
```
* Trying to ping the IP address directly won't work either:
```bash
/ # ping 172.20.0.3
... (nothing happens until we interrupt it with Ctrl-C)
```
---
class: extra-details
## Network aliases are scoped per network
* Each network has its own set of network aliases.
* We saw this earlier: `es` resolves to different addresses in `dev` and `prod`.
* If we are connected to multiple networks, the resolver looks up names in each of them
(as of Docker Engine 18.03, it is the connection order) and stops as soon as the name
is found.
* Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not**
give us the addresses of all the `es` services; but only the ones in `dev` or `prod`.
* However, we can lookup `es.dev` or `es.prod` if we need to.
---
class: extra-details
## Finding out about our networks and names
* We can do reverse DNS lookups on containers' IP addresses.
* If the IP address belongs to a network (other than the default bridge), the result will be:
```
name-or-first-alias-or-container-id.network-name
```
* Example:
.small[
```bash
$ docker run -ti --net prod --net-alias hello alpine
/ # apk add --no-cache drill
...
OK: 5 MiB in 13 packages
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03
inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
...
/ # drill -t ptr `3.0.21.172`.in-addr.arpa
...
;; ANSWER SECTION:
3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`.
...
```
]

View File

@@ -49,14 +49,14 @@ We will use `docker ps`:
```bash
$ docker ps
CONTAINER ID IMAGE ... PORTS ...
e40ffb406c9e nginx ... 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp ...
CONTAINER ID IMAGE ... PORTS ...
e40ffb406c9e nginx ... 0.0.0.0:32768->80/tcp ...
```
* The web server is running on ports 80 and 443 inside the container.
* The web server is running on port 80 inside the container.
* Those ports are mapped to ports 32769 and 32768 on our Docker host.
* This port is mapped to port 32768 on our Docker host.
We will explain the whys and hows of this port mapping.
@@ -81,7 +81,7 @@ Make sure to use the right port number if it is different
from the example below:
```bash
$ curl localhost:32769
$ curl localhost:32768
<!DOCTYPE html>
<html>
<head>
@@ -91,6 +91,31 @@ $ curl localhost:32769
---
## How does Docker know which port to map?
* There is metadata in the image telling "this image has something on port 80".
* We can see that metadata with `docker inspect`:
```bash
$ docker inspect --format '{{.Config.ExposedPorts}}' nginx
map[80/tcp:{}]
```
* This metadata was set in the Dockerfile, with the `EXPOSE` keyword.
* We can see that with `docker history`:
```bash
$ docker history nginx
IMAGE CREATED CREATED BY
7f70b30f2cc6 11 days ago /bin/sh -c #(nop) CMD ["nginx" "-g" "…
<missing> 11 days ago /bin/sh -c #(nop) STOPSIGNAL [SIGTERM]
<missing> 11 days ago /bin/sh -c #(nop) EXPOSE 80/tcp
```
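If a script needs the host port, the output of `docker port` can be parsed. A hedged sketch, fed with sample strings so the parsing logic is visible without a running daemon (`parse_host_port` is a hypothetical helper; the slides show a bare port number, but some Docker versions print `0.0.0.0:32768` instead):

```bash
# Sketch: extract the host port from `docker port <container> 80` output.
# We strip an optional "address:" prefix so both output formats work.
parse_host_port() {
  echo "$1" | sed 's/^.*://'
}

parse_host_port "0.0.0.0:32768"   # prints 32768
parse_host_port "32768"           # prints 32768
```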
---
## Why are we mapping ports?
* We are out of IPv4 addresses.
@@ -113,7 +138,7 @@ There is a command to help us:
```bash
$ docker port <containerID> 80
-32769
+32768
```
---


@@ -0,0 +1,3 @@
# Building containers from scratch
(This is a "bonus section" done if time permits.)


@@ -0,0 +1,339 @@
# Copy-on-write filesystems
Container engines rely on copy-on-write to be able
to start containers quickly, regardless of their size.
We will explain how that works, and review some of
the copy-on-write storage systems available on Linux.
---
## What is copy-on-write?
- Copy-on-write is a mechanism that allows data to be shared.
- The data appears to be a copy, but is only
a link (or reference) to the original data.
- The actual copy happens only when someone
tries to change the shared data.
- Whoever changes the shared data ends up
using their own copy instead of the shared data.
---
## A few metaphors
--
- First metaphor:
<br/>white board and tracing paper
--
- Second metaphor:
<br/>magic books with shadowy pages
--
- Third metaphor:
<br/>just-in-time house building
---
## Copy-on-write is *everywhere*
- Process creation with `fork()`.
- Consistent disk snapshots.
- Efficient VM provisioning.
- And, of course, containers.
---
## Copy-on-write and containers
Copy-on-write is essential to give us "convenient" containers.
- Creating a new container (from an existing image) is "free".
(Otherwise, we would have to copy the image first.)
- Customizing a container (by tweaking a few files) is cheap.
(Adding a 1 KB configuration file to a 1 GB container takes 1 KB, not 1 GB.)
- We can take snapshots, i.e. have "checkpoints" or "save points"
when building images.
---
## AUFS overview
- The original (legacy) copy-on-write filesystem used by first versions of Docker.
- It combines multiple *branches* in a specific order.
- Each branch is just a normal directory.
- You generally have:
- at least one read-only branch (at the bottom),
- exactly one read-write branch (at the top).
(But other fun combinations are possible too!)
---
## AUFS operations: opening a file
- With `O_RDONLY` - read-only access:
- look it up in each branch, starting from the top
- open the first one we find
- With `O_WRONLY` or `O_RDWR` - write access:
- if the file exists on the top branch: open it
- if the file exists on another branch: "copy up"
<br/>
(i.e. copy the file to the top branch and open the copy)
- if the file doesn't exist on any branch: create it on the top branch
That "copy-up" operation can take a while if the file is big!
---
## AUFS operations: deleting a file
- A *whiteout* file is created.
- This is similar to the concept of "tombstones" used in some data systems.
```
# docker run ubuntu rm /etc/shadow
# ls -la /var/lib/docker/aufs/diff/$(docker ps --no-trunc -lq)/etc
total 8
drwxr-xr-x 2 root root 4096 Jan 27 15:36 .
drwxr-xr-x 5 root root 4096 Jan 27 15:36 ..
-r--r--r-- 2 root root 0 Jan 27 15:36 .wh.shadow
```
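The lookup and whiteout rules above can be sketched with plain directories. This is only an illustration of the algorithm (AUFS implements it in the kernel); the `/tmp/cow-demo` paths and the `lookup` helper are ours:

```bash
# Sketch: AUFS-style lookup across branches, listed top branch first.
# A ".wh.<name>" file in a higher branch hides <name> in lower branches.
lookup() {
  name=$1; shift
  for branch in "$@"; do
    [ -e "$branch/.wh.$name" ] && return 1   # whiteout: file is deleted
    [ -e "$branch/$name" ] && { echo "$branch/$name"; return 0; }
  done
  return 1                                   # not found in any branch
}

# Demo: two branches; "shadow" exists below but is whited-out above.
mkdir -p /tmp/cow-demo/top /tmp/cow-demo/bottom
touch /tmp/cow-demo/bottom/shadow /tmp/cow-demo/bottom/passwd
touch /tmp/cow-demo/top/.wh.shadow

lookup passwd /tmp/cow-demo/top /tmp/cow-demo/bottom   # prints .../bottom/passwd
lookup shadow /tmp/cow-demo/top /tmp/cow-demo/bottom || echo "shadow is deleted"
```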
---
## AUFS performance
- AUFS `mount()` is fast, so creation of containers is quick.
- Read/write access has native speeds.
- But initial `open()` is expensive in two scenarios:
- when writing big files (log files, databases ...),
- when searching many directories (PATH, classpath, etc.) over many layers.
- Protip: when we built dotCloud, we ended up putting
all important data on *volumes*.
- When starting the same container multiple times:
- the data is loaded only once from disk, and cached only once in memory;
- but `dentries` will be duplicated.
---
## Device Mapper
Device Mapper is a rich subsystem with many features.
It can be used for: RAID, encrypted devices, snapshots, and more.
In the context of containers (and Docker in particular), "Device Mapper"
means:
"the Device Mapper system + its *thin provisioning target*"
If you see the abbreviation "thinp" it stands for "thin provisioning".
---
## Device Mapper principles
- Copy-on-write happens on the *block* level
(instead of the *file* level).
- Each container and each image get their own block device.
- At any given time, it is possible to take a snapshot:
- of an existing container (to create a frozen image),
- of an existing image (to create a container from it).
- If a block has never been written to:
- it's assumed to be all zeros,
- it's not allocated on disk.
(That last property is the reason for the name "thin" provisioning.)
---
## Device Mapper operational details
- Two storage areas are needed:
one for *data*, another for *metadata*.
- "data" is also called the "pool"; it's just a big pool of blocks.
(Docker uses the smallest possible block size, 64 KB.)
- "metadata" contains the mappings between virtual offsets (in the
snapshots) and physical offsets (in the pool).
- Each time a new block (or a copy-on-write block) is written,
a block is allocated from the pool.
- When there are no more blocks in the pool, attempts to write
will stall until the pool is increased (or the write operation
aborted).
- In other words: when running out of space, containers are
frozen, but operations will resume as soon as space is available.
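To get a feel for the 64 KB granularity: any write, however small, consumes whole blocks from the pool. A quick back-of-the-envelope sketch (the file sizes are arbitrary examples):

```bash
# Sketch: blocks consumed by a write, with 64 KB thin-provisioning blocks.
block_size=$((64 * 1024))          # 65536 bytes
file_size=$((1024 * 1024))         # a 1 MB write, arbitrary example
blocks=$(( (file_size + block_size - 1) / block_size ))
echo "$blocks blocks"              # prints "16 blocks"

small=$(( (1000 + block_size - 1) / block_size ))
echo "$small block"                # even a 1 KB file consumes 1 full block
```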
---
## Device Mapper performance
- By default, Docker puts data and metadata on a loop device
backed by a sparse file.
- This is great from a usability point of view,
since zero configuration is needed.
- But it is terrible from a performance point of view:
- each time a container writes to a new block,
- a block has to be allocated from the pool,
- and when it's written to,
- a block has to be allocated from the sparse file,
- and sparse file performance isn't great anyway.
- If you use Device Mapper, make sure to put data (and metadata)
on devices!
---
## BTRFS principles
- BTRFS is a filesystem (like EXT4, XFS, NTFS...) with built-in snapshots.
- The "copy-on-write" happens at the filesystem level.
- BTRFS integrates the snapshot and block pool management features
at the filesystem level.
(Instead of the block level for Device Mapper.)
- In practice, we create a "subvolume" and
later take a "snapshot" of that subvolume.
Imagine: `mkdir` with Super Powers and `cp -a` with Super Powers.
- These operations can be executed with the `btrfs` CLI tool.
---
## BTRFS in practice with Docker
- Docker can use BTRFS and its snapshotting features to store container images.
- The only requirement is that `/var/lib/docker` is on a BTRFS filesystem.
(Or, the directory specified with the `--data-root` flag when starting the engine.)
---
class: extra-details
## BTRFS quirks
- BTRFS works by dividing its storage in *chunks*.
- A chunk can contain data or metadata.
- You can run out of chunks (and get `No space left on device`)
even though `df` shows space available.
(Because chunks are only partially allocated.)
- Quick fix:
```
# btrfs filesys balance start -dusage=1 /var/lib/docker
```
---
## Overlay2
- Overlay2 is very similar to AUFS.
- However, it has been merged into the upstream kernel.
- It is therefore available on all modern kernels.
(AUFS was available on Debian and Ubuntu, but required custom kernels on other distros.)
- It is simpler than AUFS (it can only have two branches, called "layers").
- The container engine abstracts this detail, so this is not a concern.
- Overlay2 storage drivers generally use hard links between layers.
- This improves `stat()` and `open()` performance, at the expense of inode usage.
---
## ZFS
- ZFS is similar to BTRFS (at least from a container user's perspective).
- Pros:
- high performance
- high reliability (with e.g. data checksums)
- optional data compression and deduplication
- Cons:
- high memory usage
- not in upstream kernel
- It is available as a kernel module or through FUSE.
---
## Which one is the best?
- Eventually, overlay2 should be the best option.
- It is available on all modern systems.
- Its memory usage is better than Device Mapper, BTRFS, or ZFS.
- The remarks about *write performance* shouldn't bother you:
<br/>
data should always be stored in volumes anyway!


@@ -64,7 +64,7 @@ Create this Dockerfile.
## Testing our C program
-* Create `hello.c` and `Dockerfile` in the same direcotry.
+* Create `hello.c` and `Dockerfile` in the same directory.
* Run `docker build -t hello .` in this directory.


@@ -10,10 +10,12 @@
* [Solaris Containers (2004)](https://en.wikipedia.org/wiki/Solaris_Containers)
-* [FreeBSD jails (1999)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)
+* [FreeBSD jails (1999-2000)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)
Containers have been around for a *very long time* indeed.
(See [this excellent blog post by Serge Hallyn](https://s3hh.wordpress.com/2018/03/22/history-of-containers/) for more historic details.)
---
class: pic


@@ -0,0 +1,81 @@
# Managing hosts with Docker Machine
- Docker Machine is a tool to provision and manage Docker hosts.
- It automates the creation of a virtual machine:
- locally, with a tool like VirtualBox or VMware;
- on a public cloud like AWS EC2, Azure, Digital Ocean, GCP, etc.;
- on a private cloud like OpenStack.
- It can also configure existing machines through an SSH connection.
- It can manage as many hosts as you want, with as many "drivers" as you want.
---
## Docker Machine workflow
1) Prepare the environment: set up VirtualBox, obtain cloud credentials ...
2) Create hosts with `docker-machine create -d drivername machinename`.
3) Use a specific machine with `eval $(docker-machine env machinename)`.
4) Profit!
---
## Environment variables
- Most of the tools (CLI, libraries...) connecting to the Docker API can use environment variables.
- These variables are:
- `DOCKER_HOST` (indicates address+port to connect to, or path of UNIX socket)
- `DOCKER_TLS_VERIFY` (indicates that TLS mutual auth should be used)
- `DOCKER_CERT_PATH` (path to the keypair and certificate to use for auth)
- `docker-machine env ...` will generate the variables needed to connect to a host.
- `eval $(docker-machine env ...)` sets these variables in the current shell.
---
## Host management features
With `docker-machine`, we can:
- upgrade a host to the latest version of the Docker Engine,
- start/stop/restart hosts,
- get a shell on a remote machine (with SSH),
- copy files to/from remote machines (with SCP),
- mount a remote host's directory on the local machine (with SSHFS),
- ...
---
## The `generic` driver
When provisioning a new host, `docker-machine` executes these steps:
1) Create the host using a cloud or hypervisor API.
2) Connect to the host over SSH.
3) Install and configure Docker on the host.
With the `generic` driver, we provide the IP address of an existing host
(instead of e.g. cloud credentials) and we omit the first step.
This lets us provision physical machines, VMs provided by a 3rd
party, or machines on a cloud for which we don't have a provisioning API.
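Under the hood, `docker-machine env` just prints `export` lines, and `eval` runs them in the current shell. A self-contained sketch with faked output (the machine name, IP address, and paths are made up):

```bash
# Sketch: what `eval $(docker-machine env node1)` does under the hood.
# fake_env stands in for `docker-machine env node1`; its values are made up.
fake_env() {
  cat <<'EOF'
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/user/.docker/machine/machines/node1"
EOF
}

eval "$(fake_env)"          # the variables now apply to this shell
echo "$DOCKER_HOST"         # prints tcp://192.168.99.100:2376
```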


@@ -72,7 +72,7 @@ class: pic
class: pic
-## The parallel with the shipping indsutry
+## The parallel with the shipping industry
![history](images/shipping-industry-problem.png)


@@ -51,9 +51,8 @@ The dependencies are reinstalled every time, because the build system does not k
```bash
FROM python
MAINTAINER Docker Education Team <education@docker.com>
-COPY . /src/
-WORKDIR /src
+COPY . .
RUN pip install -qr requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
@@ -67,11 +66,10 @@ Adding the dependencies as a separate step means that Docker can cache more effi
```bash
FROM python
MAINTAINER Docker Education Team <education@docker.com>
-COPY ./requirements.txt /tmp/requirements.txt
+COPY requirements.txt /tmp/requirements.txt
RUN pip install -qr /tmp/requirements.txt
-COPY . /src/
-WORKDIR /src
+COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```
@@ -98,3 +96,266 @@ CMD, EXPOSE ...
* The build fails as soon as an instruction fails
* If `RUN <unit tests>` fails, the build doesn't produce an image
* If it succeeds, it produces a clean image (without test libraries and data)
---
# Dockerfile examples
There are a number of tips, tricks, and techniques that we can use in Dockerfiles.
But sometimes, we have to use different (and even opposed) practices depending on:
- the complexity of our project,
- the programming language or framework that we are using,
- the stage of our project (early MVP vs. super-stable production),
- whether we're building a final image or a base for further images,
- etc.
We are going to show a few examples using very different techniques.
---
## When to optimize an image
When authoring official images, it is a good idea to reduce as much as possible:
- the number of layers,
- the size of the final image.
This is often done at the expense of build time and convenience for the image maintainer;
but when an image is downloaded millions of times, saving even a few seconds of pull time
can be worth it.
.small[
```dockerfile
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-install gd
...
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
&& echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
&& tar -xzf wordpress.tar.gz -C /usr/src/ \
&& rm wordpress.tar.gz \
&& chown -R www-data:www-data /usr/src/wordpress
```
]
(Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile))
---
## When to *not* optimize an image
Sometimes, it is better to prioritize *maintainer convenience*.
In particular, if:
- the image changes a lot,
- the image has very few users (e.g. only 1, the maintainer!),
- the image is built and run on the same machine,
- the image is built and run on machines with a very fast link ...
In these cases, just keep things simple!
(Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.)
---
```dockerfile
FROM debian:sid
RUN apt-get update -q
RUN apt-get install -yq build-essential make
RUN apt-get install -yq zlib1g-dev
RUN apt-get install -yq ruby ruby-dev
RUN apt-get install -yq python-pygments
RUN apt-get install -yq nodejs
RUN apt-get install -yq cmake
RUN gem install --no-rdoc --no-ri github-pages
COPY . /blog
WORKDIR /blog
VOLUME /blog/_site
EXPOSE 4000
CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"]
```
---
## Multi-dimensional versioning systems
Images can have a tag, indicating the version of the image.
But sometimes, there are multiple important components, and we need to indicate the versions
for all of them.
This can be done with environment variables:
```dockerfile
ENV PIP=9.0.3 \
ZC_BUILDOUT=2.11.2 \
SETUPTOOLS=38.7.0 \
PLONE_MAJOR=5.1 \
PLONE_VERSION=5.1.0 \
PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d
```
(Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile))
---
## Entrypoints and wrappers
It is very common to define a custom entrypoint.
That entrypoint will generally be a script, performing any combination of:
- pre-flight checks (if a required dependency is not available, display
a nice error message early instead of an obscure one in a deep log file),
- generation or validation of configuration files,
- dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`),
- and more.
---
## A typical entrypoint script
```bash
#!/bin/sh
set -e
# first arg is '-f' or '--some-option'
# or first arg is 'something.conf'
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
set -- redis-server "$@"
fi
# allow the container to be started with '--user'
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
chown -R redis .
exec su-exec redis "$0" "$@"
fi
exec "$@"
```
(Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh))
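The two parameter-expansion tests on the first `if` line deserve a closer look: `${1#-}` strips a leading dash and `${1%.conf}` strips a trailing `.conf`, so comparing the result with `$1` detects a flag or a config file. A small sketch (the helper names are ours):

```bash
# Sketch: the tests used in the Redis entrypoint, as named helpers.
# "${1#-}" != "$1" is true only when $1 starts with a dash;
# "${1%.conf}" != "$1" is true only when $1 ends in ".conf".
starts_with_dash() { [ "${1#-}" != "$1" ]; }
ends_with_conf()   { [ "${1%.conf}" != "$1" ]; }

starts_with_dash --port && echo "flag detected"
ends_with_conf redis.conf && echo "config file detected"
```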
---
## Factoring information
To facilitate maintenance (and avoid human errors), avoid repeating information like:
- version numbers,
- remote asset URLs (e.g. source tarballs) ...
Instead, use environment variables.
.small[
```dockerfile
ENV NODE_VERSION 10.2.1
...
RUN ...
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xf "node-v$NODE_VERSION.tar.xz" \
&& cd "node-v$NODE_VERSION" \
...
```
]
(Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile))
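The pattern boils down to deriving every filename and URL from a single variable, so bumping the version is a one-line change. A minimal sketch reusing the version number from the example above:

```bash
# Sketch: derive all asset names from one version variable.
NODE_VERSION=10.2.1
tarball="node-v$NODE_VERSION.tar.xz"
url="https://nodejs.org/dist/v$NODE_VERSION/$tarball"
echo "$url"
# prints https://nodejs.org/dist/v10.2.1/node-v10.2.1.tar.xz
```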
---
## Overrides
In theory, development and production images should be the same.
In practice, we often need to enable specific behaviors in development (e.g. debug statements).
One way to reconcile both needs is to use Compose to enable these behaviors.
Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.
---
## Production image
This Dockerfile builds an image leveraging gunicorn:
```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```
(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))
---
## Development Compose file
This Compose file uses the same image, but with a few overrides for development:
- the Flask development server is used (overriding `CMD`),
- the `DEBUG` environment variable is set,
- a volume is used to provide a faster local development workflow.
.small[
```yaml
services:
www:
build: www
ports:
- 8000:5000
user: nobody
environment:
DEBUG: 1
command: python counter.py
volumes:
- ./www:/src
```
]
(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))
---
## How do we know which practices are best?
- The main goal of containers is to make our lives easier.
- In this chapter, we showed many ways to write Dockerfiles.
- These Dockerfiles sometimes use diametrically opposed techniques.
- Yet, they were the "right" ones *for a specific situation.*
- It's OK (and even encouraged) to start simple and evolve as needed.
- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!

slides/intro/Ecosystem.md Normal file

@@ -0,0 +1,173 @@
# The container ecosystem
In this chapter, we will talk about a few actors in the container ecosystem.
We have (arbitrarily) decided to focus on two groups:
- the Docker ecosystem,
- the Cloud Native Computing Foundation (CNCF) and its projects.
---
class: pic
## The Docker ecosystem
![The Docker ecosystem in 2015](images/docker-ecosystem-2015.png)
---
## Moby vs. Docker
- Docker Inc. (the company) started Docker (the open source project).
- At some point, it became necessary to differentiate between:
- the open source project (code base, contributors...),
- the product that we use to run containers (the engine),
- the platform that we use to manage containerized applications,
- the brand.
---
class: pic
![Picture of a Tesla](images/tesla.jpg)
---
## Exercise in brand management
Questions:
--
- What is the brand of the car on the previous slide?
--
- What kind of engine does it have?
--
- Would you say that it's a safe or unsafe car?
--
- Harder question: can you drive from the US West to East coasts with it?
--
The answers to these questions are part of the Tesla brand.
---
## What if ...
- The blueprints for Tesla cars were available for free.
- You could legally build your own Tesla.
- You were allowed to customize it entirely.
(Put a combustion engine, drive it with a game pad ...)
- You could even sell the customized versions.
--
- ... And call your customized version "Tesla".
--
Would we give the same answers to the questions on the previous slide?
---
## From Docker to Moby
- Docker Inc. decided to split the brand.
- Moby is the open source project.
(= Components and libraries that you can use, reuse, customize, sell ...)
- Docker is the product.
(= Software that you can use, buy support contracts ...)
- Docker is made with Moby.
- When Docker Inc. improves the Docker products, it improves Moby.
(And vice versa.)
---
## Other examples
- *Read the Docs* is an open source project to generate and host documentation.
- You can host it yourself (on your own servers).
- You can also get hosted on readthedocs.org.
- The maintainers of the open source project often receive
support requests from users of the hosted product ...
- ... And the maintainers of the hosted product often
receive support requests from users of self-hosted instances.
- Another example:
*WordPress.com is a blogging platform that is owned and hosted online by
Automattic. It is run on WordPress, an open source piece of software used by
bloggers. (Wikipedia)*
---
## Docker CE vs Docker EE
- Docker CE = Community Edition.
- Available on most Linux distros, Mac, Windows.
- Optimized for developers and ease of use.
- Docker EE = Enterprise Edition.
- Available only on a subset of Linux distros + Windows servers.
(Only available when there is a strong partnership to offer enterprise-class support.)
- Optimized for production use.
- Comes with additional components: security scanning, RBAC ...
---
## The CNCF
- Non-profit, part of the Linux Foundation; founded in December 2015.
*The Cloud Native Computing Foundation builds sustainable ecosystems and fosters
a community around a constellation of high-quality projects that orchestrate
containers as part of a microservices architecture.*
*CNCF is an open source software foundation dedicated to making cloud-native computing universal and sustainable.*
- Home of Kubernetes (and many other projects now).
- Funded by corporate memberships.
---
class: pic
![Cloud Native Landscape](https://raw.githubusercontent.com/cncf/landscape/master/landscape/CloudNativeLandscape_latest.png)


@@ -110,6 +110,8 @@ Beautiful! .emoji[😍]
---
class: in-person
## Counting packages in the container
Let's check how many packages are installed there.
@@ -127,6 +129,8 @@ How many packages do we have on our host?
---
class: in-person
## Counting packages on the host
Exit the container by logging out of the shell, like you would usually do.
@@ -145,18 +149,34 @@ Now, try to:
---
class: self-paced
## Comparing the container and the host
Exit the container by logging out of the shell, with `^D` or `exit`.
Now try to run `figlet`. Does that work?
(It shouldn't, unless by coincidence you are running on a machine where figlet was installed before.)
---
## Host and containers are independent things
-* We ran an `ubuntu` container on an `ubuntu` host.
+* We ran an `ubuntu` container on a Linux/Windows/macOS host.
-* But they have different, independent packages.
+* They have different, independent packages.
* Installing something on the host doesn't expose it to the container.
* And vice-versa.
* Even if both the host and the container have the same Linux distro!
* We can run *any container* on *any host*.
(One exception: Windows containers cannot run on Linux machines; at least not yet.)
---
## Where's our container?


@@ -0,0 +1,227 @@
class: title
# Getting inside a container
![Person standing inside a container](images/getting-inside.png)
---
## Objectives
On a traditional server or VM, we sometimes need to:
* log into the machine (with SSH or on the console),
* analyze the disks (by removing them or rebooting with a rescue system).
In this chapter, we will see how to do that with containers.
---
## Getting a shell
Every once in a while, we want to log into a machine.
In a perfect world, this shouldn't be necessary.
* You need to install or update packages (and their configuration)?
Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...)
* You need to view logs and metrics?
Collect and access them through a centralized platform.
In the real world, though ... we often need shell access!
---
## Not getting a shell
Even without a perfect deployment system, we can do many operations without getting a shell.
* Installing packages can (and should) be done in the container image.
* Configuration can be done at the image level, or when the container starts.
* Dynamic configuration can be stored in a volume (shared with another container).
* Logs written to stdout are automatically collected by the Docker Engine.
* Other logs can be written to a shared volume.
* Process information and metrics are visible from the host.
_Let's save logging, volumes ... for later, but let's have a look at process information!_
---
## Viewing container processes from the host
If you run Docker on Linux, container processes are visible on the host.
```bash
$ ps faux | less
```
* Scroll around the output of this command.
* You should see the `jpetazzo/clock` container.
* A containerized process is just like any other process on the host.
* We can use tools like `lsof`, `strace`, `gdb` ... to analyze them.
---
class: extra-details
## What's the difference between a container process and a host process?
* Each process (containerized or not) belongs to *namespaces* and *cgroups*.
* The namespaces and cgroups determine what a process can "see" and "do".
* Analogy: each process (containerized or not) runs with a specific UID (user ID).
* UID=0 is root, and has elevated privileges. Other UIDs are normal users.
_We will give more details about namespaces and cgroups later._
---
## Getting a shell in a running container
* Sometimes, we need to get a shell anyway.
* We _could_ run some SSH server in the container ...
* But it is easier to use `docker exec`.
```bash
$ docker exec -ti ticktock sh
```
* This creates a new process (running `sh`) _inside_ the container.
* This can also be done "manually" with the tool `nsenter`.
---
## Caveats
* The tool that you want to run needs to exist in the container.
* Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time.
(This lets you e.g. set up network interfaces, even if you don't have `ifconfig` or `ip` in the container.)
* Most importantly: the container needs to be running.
* What if the container is stopped or crashed?
---
## Getting a shell in a stopped container
* A stopped container is only _storage_ (like a disk drive).
* We cannot SSH into a disk drive or USB stick!
* We need to connect the disk to a running machine.
* How does that translate into the container world?
---
## Analyzing a stopped container
As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`.
```bash
docker run jpetazzo/crashtest
```
The container starts, but then stops immediately, without any output.
What would MacGyver&trade; do?
First, let's check the status of that container.
```bash
docker ps -l
```
---
## Viewing filesystem changes
* We can use `docker diff` to see files that were added / changed / removed.
```bash
docker diff <container_id>
```
* The container ID was shown by `docker ps -l`.
* We can also see it with `docker ps -lq`.
* The output of `docker diff` shows some interesting log files!
---
## Accessing files
* We can extract files with `docker cp`.
```bash
docker cp <container_id>:/var/log/nginx/error.log .
```
* Then we can look at that log file.
```bash
cat error.log
```
(The directory `/run/nginx` doesn't exist.)
---
## Exploring a crashed container
* We can restart a container with `docker start` ...
* ... But it will probably crash again immediately!
* We cannot specify a different program to run with `docker start`
* But we can create a new image from the crashed container
```bash
docker commit <container_id> debugimage
```
* Then we can run a new container from that image, with a custom entrypoint
```bash
docker run -ti --entrypoint sh debugimage
```
---
class: extra-details
## Obtaining a complete dump
* We can also dump the entire filesystem of a container.
* This is done with `docker export`.
* It generates a tar archive.
```bash
docker export <container_id> | tar tv
```
This will give a detailed listing of the content of the container.


@@ -46,6 +46,8 @@ In this section, we will explain:
## Example for a Java webapp
Each of the following items will correspond to one layer:
* CentOS base layer
* Packages and configuration files added by our local IT
* JRE
@@ -56,6 +58,22 @@ In this section, we will explain:
---
class: pic
## The read-write layer
![layers](images/container-layers.jpg)
---
class: pic
## Multiple containers sharing the same image
![layers](images/sharing-layers.jpg)
---
## Differences between containers and images
* An image is a read-only filesystem.
@@ -63,24 +81,14 @@ In this section, we will explain:
* A container is an encapsulated set of processes running in a
read-write copy of that filesystem.
* To optimize container boot time, *copy-on-write* is used
instead of regular copy.
* `docker run` starts a container from a given image.
Let's give a couple of metaphors to illustrate those concepts.
---
## Image as stencils
Images are like templates or stencils that you can create containers from.
![stencil](images/stenciling-wall.jpg)
---
-## Object-oriented programming
+## Comparison with object-oriented programming
* Images are conceptually similar to *classes*.
@@ -99,7 +107,7 @@ If an image is read-only, how do we change it?
* We create a new container from that image.
* Then we make changes to that container.
* When we are satisfied with those changes, we transform them into a new layer.
* A new image is created by stacking the new layer on top of the old image.
@@ -118,7 +126,7 @@ If an image is read-only, how do we change it?
## Creating the first images
There is a special empty image called `scratch`.
* It allows us to *build from scratch*.
@@ -138,7 +146,7 @@ Note: you will probably never have to do this yourself.
* Saves all the changes made to a container into a new layer.
* Creates a new image (effectively a copy of the container).
-`docker build`
+`docker build` **(used 99% of the time)**
* Performs a repeatable build sequence.
* This is the preferred method!
@@ -180,6 +188,8 @@ Those images include:
* Ready-to-use components and services, like redis, postgresql...
* Over 130 at this point!
---
## User namespace
@@ -299,9 +309,9 @@ There are two ways to download images.
```bash
$ docker pull debian:jessie
Pulling repository debian
b164861940b8: Download complete
b164861940b8: Pulling image (jessie) from debian
d1881793a057: Download complete
```
* As seen previously, images are made up of layers.


@@ -29,7 +29,7 @@ We can arbitrarily distinguish:
* Installing Docker on an existing Linux machine (physical or VM)
-* Installing Docker on MacOS or Windows
+* Installing Docker on macOS or Windows
* Installing Docker on a fleet of cloud VMs
@@ -37,7 +37,9 @@ We can arbitrarily distinguish:
## Installing Docker on Linux
-* The recommended method is to install the packages supplied by Docker Inc.
+* The recommended method is to install the packages supplied by Docker Inc.:
https://store.docker.com
* The general method is:
@@ -55,13 +57,35 @@ We can arbitrarily distinguish:
---
-## Installing Docker on MacOS and Windows
-* On MacOS, the recommended method is to use Docker4Mac:
+class: extra-details
+## Docker Inc. packages vs distribution packages
* Docker Inc. releases new versions monthly (edge) and quarterly (stable)
* Releases are immediately available on Docker Inc.'s package repositories
* Linux distros don't always update to the latest Docker version
(Sometimes, updating would break their guidelines for major/minor upgrades)
* Sometimes, some distros have carried packages with custom patches
* Sometimes, these patches added critical security bugs ☹
* Installing through Docker Inc.'s repositories is a bit of extra work …
… but it is generally worth it!
---
## Installing Docker on macOS and Windows
* On macOS, the recommended method is to use Docker for Mac:
https://docs.docker.com/docker-for-mac/install/
* On Windows 10 Pro, Enterprise, and Eduction, you can use Docker4Windows:
* On Windows 10 Pro, Enterprise, and Education, you can use Docker for Windows:
https://docs.docker.com/docker-for-windows/install/
@@ -69,9 +93,36 @@ We can arbitrarily distinguish:
https://docs.docker.com/toolbox/toolbox_install_windows/
* On Windows Server 2016, you can also install the native engine:
https://docs.docker.com/install/windows/docker-ee/
---
## Running Docker on MacOS and Windows
## Docker for Mac and Docker for Windows
* Special Docker Editions that integrate well with their respective host OS
* Provide user-friendly GUI to edit Docker configuration and settings
* Leverage the host OS virtualization subsystem (e.g. the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS)
* Installed like normal user applications on the host
* Under the hood, they both run a tiny VM (transparent to our daily use)
* Access network resources like normal applications
<br/>(and therefore, play better with enterprise VPNs and firewalls)
* Support filesystem sharing through volumes (we'll talk about this later)
* They only support running one Docker VM at a time ...
<br/>
... but we can use `docker-machine`, the Docker Toolbox, VirtualBox, etc. to get a cluster.
---
## Running Docker on macOS and Windows
When you execute `docker version` from the terminal:
@@ -88,25 +139,6 @@ This will also allow to use remote Engines exactly as if they were local.
---
## Docker4Mac and Docker4Windows
* They let you run Docker without VirtualBox
* They are installed like normal applications (think QEMU, but faster)
* They access network resources like normal applications
<br/>(and therefore, play well with enterprise VPNs and firewalls)
* They support filesystem sharing through volumes (we'll talk about this later)
* They only support running one Docker VM at a time ...
... so if you want to run a full cluster locally, install e.g. the Docker Toolbox
* They can co-exist with the Docker Toolbox
---
## Important PSA about security
* If you have access to the Docker control socket, you can take over the machine

slides/intro/Labels.md (new file, 82 lines)

@@ -0,0 +1,82 @@
# Labels
* Labels allow us to attach arbitrary metadata to containers.
* Labels are key/value pairs.
* They are specified at container creation.
* You can query them with `docker inspect`.
* They can also be used as filters with some commands (e.g. `docker ps`).
---
## Using labels
Let's create a few containers with a label `owner`.
```bash
docker run -d -l owner=alice nginx
docker run -d -l owner=bob nginx
docker run -d -l owner nginx
```
We didn't specify a value for the `owner` label in the last example.
This is equivalent to setting the value to be an empty string.
---
## Querying labels
We can view the labels with `docker inspect`.
```bash
$ docker inspect $(docker ps -lq) | grep -A3 Labels
"Labels": {
"maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>",
"owner": ""
},
```
We can use the `--format` flag to list the value of a label.
```bash
$ docker inspect $(docker ps -q) --format 'OWNER={{.Config.Labels.owner}}'
```
---
## Using labels to select containers
We can list containers having a specific label.
```bash
$ docker ps --filter label=owner
```
Or we can list containers having a specific label with a specific value.
```bash
$ docker ps --filter label=owner=alice
```
---
## Use-cases for labels
* HTTP vhost of a web app or web service.
(The label is used to generate the configuration for NGINX, HAProxy, etc.)
* Backup schedule for a stateful service.
(The label is used by a cron job to determine if/when to backup container data.)
* Service ownership.
(To determine internal cross-billing, or who to page in case of outage.)
* etc.
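The backup use-case above can be sketched in a few lines of shell. This is a hypothetical example: the `backup=daily` label, the `/data` path, and the `/backups` destination are assumptions, not conventions of any particular tool.

```shell
# Hypothetical sketch: a cron job selects containers by label
# and archives their data directory.
for cid in $(docker ps -q --filter label=backup=daily); do
  docker exec "$cid" tar czf - /data > "/backups/$cid.tar.gz"
done
```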


@@ -17,7 +17,7 @@ At the end of this section, you will be able to:
---
## Containerized local development environments
## Local development in a container
We want to solve the following issues:
@@ -69,7 +69,6 @@ Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe?
```dockerfile
FROM ruby
MAINTAINER Education Team at Docker <education@docker.com>
COPY . /src
WORKDIR /src
@@ -177,7 +176,9 @@ $ docker run -d -v $(pwd):/src -P namer
* `namer` is the name of the image we will run.
* We don't specify a command to run because is is already set in the Dockerfile.
* We don't specify a command to run because it is already set in the Dockerfile.
Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell).
---

slides/intro/Logging.md (new file, 294 lines)

@@ -0,0 +1,294 @@
# Logging
In this chapter, we will explain the different ways to send logs from containers.
We will then show one particular method in action, using ELK and Docker's logging drivers.
---
## There are many ways to send logs
- The simplest method is to write to the standard output and error.
- Applications can write their logs to local files.
(The files are usually periodically rotated and compressed.)
- It is also very common (on UNIX systems) to use syslog.
(The logs are collected by syslogd or an equivalent like journald.)
- In large applications with many components, it is common to use a logging service.
(The code uses a library to send messages to the logging service.)
*All these methods are available with containers.*
---
## Writing on stdout/stderr
- The standard output and error of containers are managed by the container engine.
- This means that each line written by the container is received by the engine.
- The engine can then do "whatever" with these log lines.
- With Docker, the default configuration is to write the logs to local files.
- The files can then be queried with e.g. `docker logs` (and the equivalent API request).
- This can be customized, as we will see later.
---
## Writing to local files
- If we write to files, accessing them is possible, but cumbersome.
(We have to use `docker exec` or `docker cp`.)
- Furthermore, if the container is stopped, we cannot use `docker exec`.
- If the container is deleted, the logs disappear.
- What should we do for programs that can only log to local files?
--
- There are multiple solutions.
---
## Using a volume or bind mount
- Instead of writing logs to a normal directory, we can place them on a volume.
- The volume can be accessed by other containers.
- We can run a program like `filebeat` in another container accessing the same volume.
(`filebeat` reads local log files continuously, like `tail -f`, and sends them
to a centralized system like ElasticSearch.)
- We can also use a bind mount, e.g. `-v /var/log/containers/www:/var/log/tomcat`.
- The container will write log files to a directory mapped to a host directory.
- The log files will appear on the host and be consumable directly from the host.
---
## Using logging services
- We can use logging frameworks (like log4j or the Python `logging` package).
- These frameworks require some code and/or configuration in our application code.
- These mechanisms can be used identically inside or outside of containers.
- Sometimes, we can leverage containerized networking to simplify their setup.
- For instance, our code can send log messages to a server named `log`.
- The name `log` will resolve to different addresses in development, production, etc.
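One way to make the name `log` resolve to a collector is a network alias on a user-defined network. This is a sketch: the network name `dev` and the image name `my-log-collector` are placeholders.

```shell
# Sketch: give the collector container the network alias "log".
docker network create dev
docker run -d --network dev --network-alias log my-log-collector
# Any container on the same network can now reach it by that name:
docker run --network dev alpine ping -c1 log
```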
---
## Using syslog
- What if our code (or the program we are running in containers) uses syslog?
- One possibility is to run a syslog daemon in the container.
- Then that daemon can be set up to write to local files or forward to the network.
- Under the hood, syslog clients connect to a local UNIX socket, `/dev/log`.
- We can expose a syslog socket to the container (by using a volume or bind-mount).
- Then just create a symlink from `/dev/log` to the syslog socket.
- Voilà!
---
## Using logging drivers
- If we log to stdout and stderr, the container engine receives the log messages.
- The Docker Engine has a modular logging system with many plugins, including:
- json-file (the default one)
- syslog
- journald
- gelf
- fluentd
- splunk
- etc.
- Each plugin can process and forward the logs to another process or system.
---
## A word of warning about `json-file`
- By default, log file size is unlimited.
- This means that a very verbose container *will* use up all your disk space.
(Or a less verbose container, but running for a very long time.)
- Log rotation can be enabled by setting a `max-size` option.
- Older log files can be removed by setting a `max-file` option.
- Just like other logging options, these can be set per container, or globally.
Example:
```bash
$ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch
```
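To apply the same limits globally, the options can go in the Engine's `/etc/docker/daemon.json` instead. A sketch (the daemon must be restarted for this to take effect, and it only applies to containers created afterwards):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```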
---
## Demo: sending logs to ELK
- We are going to deploy an ELK stack.
- It will accept logs over a GELF socket.
- We will run a few containers with the `gelf` logging driver.
- We will then see our logs in Kibana, the web interface provided by ELK.
*Important foreword: this is not an "official" or "recommended"
setup; it is just an example. We used ELK in this demo because
it's a popular setup and we keep being asked about it; but you
will have equal success with Fluentd or other logging stacks!*
---
## What's in an ELK stack?
- ELK is three components:
- ElasticSearch (to store and index log entries)
- Logstash (to receive log entries from various
sources, process them, and forward them to various
destinations)
- Kibana (to view/search log entries with a nice UI)
- The only component that we will configure is Logstash
- We will accept log entries using the GELF protocol
- Log entries will be stored in ElasticSearch,
<br/>and displayed on Logstash's stdout for debugging
---
## Running ELK
- We are going to use a Compose file describing the ELK stack.
```bash
$ cd ~/container.training/stacks
$ docker-compose -f elk.yml up -d
```
- Let's have a look at the Compose file while it's deploying.
---
## Our basic ELK deployment
- We are using images from the Docker Hub: `elasticsearch`, `logstash`, `kibana`.
- We don't need to change the configuration of ElasticSearch.
- We need to tell Kibana the address of ElasticSearch:
- it is set with the `ELASTICSEARCH_URL` environment variable,
- by default it is `localhost:9200`, we change it to `elasticsearch:9200`.
- We need to configure Logstash:
- we pass the entire configuration file through command-line arguments,
- this is a hack so that we don't have to create an image just for the config.
---
## Sending logs to ELK
- The ELK stack accepts log messages through a GELF socket.
- The GELF socket listens on UDP port 12201.
- To send a message, we need to change the logging driver used by Docker.
- This can be done globally (by reconfiguring the Engine) or on a per-container basis.
- Let's override the logging driver for a single container:
```bash
$ docker run --log-driver=gelf --log-opt=gelf-address=udp://localhost:12201 \
alpine echo hello world
```
---
## Viewing the logs in ELK
- Connect to the Kibana interface.
- It is exposed on port 5601.
- Browse http://X.X.X.X:5601.
---
## "Configuring" Kibana
- Kibana should prompt you to "Configure an index pattern":
<br/>in the "Time-field name" drop down, select "@timestamp", and hit the
"Create" button.
- Then:
- click "Discover" (in the top-left corner),
- click "Last 15 minutes" (in the top-right corner),
- click "Last 1 hour" (in the list in the middle),
- click "Auto-refresh" (top-right corner),
- click "5 seconds" (top-left of the list).
- You should see a series of green bars (with one new green bar every minute).
- Our 'hello world' message should be visible there.
---
## Important afterword
**This is not a "production-grade" setup.**
It is just an educational example. Since we have only
one node, we set up a single
ElasticSearch instance and a single Logstash instance.
In a production setup, you need an ElasticSearch cluster
(both for capacity and availability reasons). You also
need multiple Logstash instances.
And if you want to withstand
bursts of logs, you need some kind of message queue:
Redis if you're cheap, Kafka if you want to make sure
that you don't drop messages on the floor. Good luck.
If you want to learn more about the GELF driver,
have a look at [this blog post](
http://jpetazzo.github.io/2017/01/20/docker-logging-gelf/).


@@ -1,6 +1,6 @@
# Multi-stage builds
# Reducing image size
* In the previous example, our final image contain:
* In the previous example, our final image contained:
* our `hello` program
@@ -14,7 +14,196 @@
---
## Multi-stage builds principles
## Can't we remove superfluous files with `RUN`?
What happens if we do one of the following commands?
- `RUN rm -rf ...`
- `RUN apt-get remove ...`
- `RUN make clean ...`
--
This adds a layer which removes a bunch of files.
But the previous layers (which added the files) still exist.
---
## Removing files with an extra layer
When downloading an image, all the layers must be downloaded.
| Dockerfile instruction | Layer size | Image size |
| ---------------------- | ---------- | ---------- |
| `FROM ubuntu` | Size of base image | Size of base image |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get install somepackage` | Size of files added <br/>(e.g. a few MB) | Sum of this layer <br/>+ all previous ones |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get remove somepackage` | Almost zero <br/>(just metadata) | Same as previous one |
Therefore, `RUN rm` does not reduce the size of the image or free up disk space.
---
## Removing unnecessary files
Various techniques are available to obtain smaller images:
- collapsing layers,
- adding binaries that are built outside of the Dockerfile,
- squashing the final image,
- multi-stage builds.
Let's review them quickly.
---
## Collapsing layers
You will frequently see Dockerfiles like this:
```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
```
Or the (more readable) variant:
```dockerfile
FROM ubuntu
RUN apt-get update \
&& apt-get install xxx \
&& ... \
&& apt-get remove xxx \
&& ...
```
This `RUN` command gives us a single layer.
The files that are added, then removed in the same layer, do not grow the layer size.
---
## Collapsing layers: pros and cons
Pros:
- works on all versions of Docker
- doesn't require extra tools
Cons:
- not very readable
- some unnecessary files might still remain if the cleanup is not thorough
- that layer is expensive (slow to build)
---
## Building binaries outside of the Dockerfile
This results in a Dockerfile looking like this:
```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```
Of course, this implies that the file `xxx` exists in the build context.
That file has to exist before you can run `docker build`.
For instance, it can:
- exist in the code repository,
- be created by another tool (script, Makefile...),
- be created by another container image and extracted from the image.
See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).
---
## Building binaries outside: pros and cons
Pros:
- final image can be very small
Cons:
- requires an extra build tool
- we're back in dependency hell and "works on my machine"
Cons, if binary is added to code repository:
- breaks portability across different platforms
- grows repository size a lot if the binary is updated frequently
---
## Squashing the final image
The idea is to transform the final image into a single-layer image.
This can be done in (at least) two ways.
- Activate experimental features and squash the final image:
```bash
docker image build --squash ...
```
- Export/import the final image.
```bash
docker build -t temp-image .
docker run --entrypoint true --name temp-container temp-image
docker export temp-container | docker import - final-image
docker rm temp-container
docker rmi temp-image
```
---
## Squashing the image: pros and cons
Pros:
- single-layer images are smaller and faster to download
- removed files no longer take up storage and network resources
Cons:
- we still need to actively remove unnecessary files
- squash operation can take a lot of time (on big images)
- squash operation does not benefit from cache
<br/>
(even if we change just a tiny file, the whole image needs to be re-squashed)
---
## Multi-stage builds
Multi-stage builds allow us to have multiple *stages*.
Each stage is a separate image, and can copy files from previous stages.
We're going to see how they work in more detail.
---
# Multi-stage builds
* At any point in our `Dockerfile`, we can add a new `FROM` line.
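As a preview, a minimal multi-stage `Dockerfile` could look like this. This is a sketch: it assumes a `hello.c` file exists in the build context.

```dockerfile
# Stage 1: build the binary with a full toolchain.
FROM gcc AS builder
COPY hello.c .
RUN gcc -o /hello hello.c

# Stage 2: copy only the binary into the final image;
# the toolchain and intermediate files stay behind in the builder stage.
FROM ubuntu
COPY --from=builder /hello /hello
CMD ["/hello"]
```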

(File diff suppressed because it is too large.)


@@ -0,0 +1,422 @@
# Orchestration, an overview
In this chapter, we will:
* Explain what orchestration is and why we would need it.
* Present (from a high-level perspective) some orchestrators.
* Show one orchestrator (Kubernetes) in action.
---
class: pic
## What's orchestration?
![Joana Carneiro (orchestra conductor)](images/conductor.jpg)
---
## What's orchestration?
According to Wikipedia:
*Orchestration describes the __automated__ arrangement,
coordination, and management of complex computer systems,
middleware, and services.*
--
*[...] orchestration is often discussed in the context of
__service-oriented architecture__, __virtualization__, provisioning,
Converged Infrastructure and __dynamic datacenter__ topics.*
--
What does that really mean?
---
## Example 1: dynamic cloud instances
--
- Q: do we always use 100% of our servers?
--
- A: obviously not!
.center[![Daily variations of traffic](images/traffic-graph.png)]
---
## Example 1: dynamic cloud instances
- Every night, scale down
(by shutting down extraneous replicated instances)
- Every morning, scale up
(by deploying new copies)
- "Pay for what you use"
(i.e. save big $$$ here)
---
## Example 1: dynamic cloud instances
How do we implement this?
- Crontab
- Autoscaling (save even bigger $$$)
That's *relatively* easy.
Now, how are things for our IaaS provider?
---
## Example 2: dynamic datacenter
- Q: what's the #1 cost in a datacenter?
--
- A: electricity!
--
- Q: what uses electricity?
--
- A: servers, obviously
- A: ... and associated cooling
--
- Q: do we always use 100% of our servers?
--
- A: obviously not!
---
## Example 2: dynamic datacenter
- If only we could turn off unused servers during the night...
- Problem: we can only turn off a server if it's totally empty!
(i.e. all VMs on it are stopped/moved)
- Solution: *migrate* VMs and shutdown empty servers
(e.g. combine two hypervisors with 40% load into 80%+0%,
<br/>and shutdown the one at 0%)
---
## Example 2: dynamic datacenter
How do we implement this?
- Shutdown empty hosts (but keep some spare capacity)
- Start hosts again when capacity gets low
- Ability to "live migrate" VMs
(Xen already did this 10+ years ago)
- Rebalance VMs on a regular basis
- what if a VM is stopped while we move it?
- should we allow provisioning on hosts involved in a migration?
*Scheduling* becomes more complex.
---
## What is scheduling?
According to Wikipedia (again):
*In computing, scheduling is the method by which threads,
processes or data flows are given access to system resources.*
The scheduler is concerned mainly with:
- throughput (total amount of work done per time unit);
- turnaround time (between submission and completion);
- response time (between submission and start);
- waiting time (between job readiness and execution);
- fairness (appropriate times according to priorities).
In practice, these goals often conflict.
**"Scheduling" = decide which resources to use.**
---
## Exercise 1
- You have:
- 5 hypervisors (physical machines)
- Each server has:
- 16 GB RAM, 8 cores, 1 TB disk
- Each week, your team asks:
- one VM with X RAM, Y CPU, Z disk
Scheduling = deciding which hypervisor to use for each VM.
Difficulty: easy!
---
<!-- Warning, two almost identical slides (for img effect) -->
## Exercise 2
- You have:
- 1000+ hypervisors (and counting!)
- Each server has different resources:
- 8-500 GB of RAM, 4-64 cores, 1-100 TB disk
- Multiple times a day, a different team asks for:
- up to 50 VMs with different characteristics
Scheduling = deciding which hypervisor to use for each VM.
Difficulty: ???
---
<!-- Warning, two almost identical slides (for img effect) -->
## Exercise 2
- You have:
- 1000+ hypervisors (and counting!)
- Each server has different resources:
- 8-500 GB of RAM, 4-64 cores, 1-100 TB disk
- Multiple times a day, a different team asks for:
- up to 50 VMs with different characteristics
Scheduling = deciding which hypervisor to use for each VM.
![Troll face](images/trollface.png)
---
## Exercise 3
- You have machines (physical and/or virtual)
- You have containers
- You are trying to put the containers on the machines
- Sound familiar?
---
## Scheduling with one resource
.center[![Not-so-good bin packing](images/binpacking-1d-1.gif)]
Can we do better?
---
## Scheduling with one resource
.center[![Better bin packing](images/binpacking-1d-2.gif)]
Yup!
---
## Scheduling with two resources
.center[![2D bin packing](images/binpacking-2d.gif)]
---
## Scheduling with three resources
.center[![3D bin packing](images/binpacking-3d.gif)]
---
## You need to be good at this
.center[![Tangram](images/tangram.gif)]
---
## But also, you must be quick!
.center[![Tetris](images/tetris-1.png)]
---
## And be web scale!
.center[![Big tetris](images/tetris-2.gif)]
---
## And think outside (?) of the box!
.center[![3D tetris](images/tetris-3.png)]
---
## Good luck!
.center[![FUUUUUU face](images/fu-face.jpg)]
---
## TL,DR
* Scheduling with multiple resources (dimensions) is hard.
* Don't expect to solve the problem with a Tiny Shell Script.
* There are literally tons of research papers written on this.
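Just to illustrate the one-dimensional case (not to solve it — this really is the Tiny Shell Script you shouldn't rely on!), here is a toy "first fit" placement loop, with made-up host sizes and job requests:

```shell
# Toy first-fit scheduler: place each job on the first host with
# enough free RAM. Real schedulers juggle many resource dimensions,
# constraints, and failures at once.
free=(16 16 16)       # free RAM (GB) on each of 3 hosts
jobs=(8 8 8 4 12)     # RAM requested by each job
placements=""
for job in "${jobs[@]}"; do
  for i in "${!free[@]}"; do
    if [ "${free[$i]}" -ge "$job" ]; then
      free[$i]=$(( free[$i] - job ))
      placements="$placements ${job}GB:host$i"
      break
    fi
  done
done
echo "placements:$placements"
```

Even this naive version shows the bin-packing flavor of the problem: the 12 GB job only fits because earlier jobs happened to be packed tightly.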
---
## But our orchestrator also needs to manage ...
* Network connectivity (or filtering) between containers.
* Load balancing (external and internal).
* Failure recovery (if a node or a whole datacenter fails).
* Rolling out new versions of our applications.
(Canary deployments, blue/green deployments...)
---
## Some orchestrators
We are going to briefly present a few orchestrators.
There is no "absolute best" orchestrator.
It depends on:
- your applications,
- your requirements,
- your pre-existing skills...
---
## Nomad
- Open Source project by Hashicorp.
- Arbitrary scheduler (not just for containers).
- Great if you want to schedule mixed workloads.
(VMs, containers, processes...)
- Less integration with the rest of the container ecosystem.
---
## Mesos
- Open Source project in the Apache Foundation.
- Arbitrary scheduler (not just for containers).
- Two-level scheduler.
- Top-level scheduler acts as a resource broker.
- Second-level schedulers (aka "frameworks") obtain resources from top-level.
- Frameworks implement various strategies.
(Marathon = long running processes; Chronos = run at intervals; ...)
- Commercial offering through DC/OS by Mesosphere.
---
## Rancher
- Rancher 1 offered a simple interface for Docker hosts.
- Rancher 2 is a complete management platform for Docker and Kubernetes.
- Technically not an orchestrator, but it's a popular option.
---
## Swarm
- Tightly integrated with the Docker Engine.
- Extremely simple to deploy and setup, even in multi-manager (HA) mode.
- Secure by default.
- Strongly opinionated:
- smaller set of features,
- easier to operate.
---
## Kubernetes
- Open Source project initiated by Google.
- Contributions from many other actors.
- *De facto* standard for container orchestration.
- Many deployment options; some of them very complex.
- Reputation: steep learning curve.
- Reality:
- true, if we try to understand *everything*;
- false, if we focus on what matters.


@@ -21,7 +21,7 @@ public images is free as well.*
docker login
```
.warning[When running Docker4Mac, Docker4Windows, or
.warning[When running Docker for Mac/Windows, or
Docker on a Linux workstation, it can (and will when
possible) integrate with your system's keyring to
store your credentials securely. However, on most Linux


@@ -0,0 +1,229 @@
# Limiting resources
- So far, we have used containers as convenient units of deployment.
- What happens when a container tries to use more resources than available?
(RAM, CPU, disk usage, disk and network I/O...)
- What happens when multiple containers compete for the same resource?
- Can we limit resources available to a container?
(Spoiler alert: yes!)
---
## Container processes are normal processes
- Containers are closer to "fancy processes" than to "lightweight VMs".
- A process running in a container is, in fact, a process running on the host.
- Let's look at the output of `ps` on a container host running 3 containers:
```
0 2662 0.2 0.3 /usr/bin/dockerd -H fd://
0 2766 0.1 0.1 \_ docker-containerd --config /var/run/docker/containe
0 23479 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23497 0.0 0.0 | \_ `nginx`: master process nginx -g daemon off;
101 23543 0.0 0.0 | \_ `nginx`: worker process
0 23565 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
102 23584 9.4 11.3 | \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2
0 23707 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23725 0.0 0.0 \_ `/bin/sh`
```
- The highlighted processes are containerized processes.
<br/>
(That host is running nginx, elasticsearch, and alpine.)
---
## By default: nothing changes
- What happens when a process uses too much memory on a Linux system?
--
- Simplified answer:
- swap is used (if available);
- if there is not enough swap space, eventually, the out-of-memory killer is invoked;
- the OOM killer uses heuristics to kill processes;
- sometimes, it kills an unrelated process.
--
- What happens when a container uses too much memory?
- The same thing!
(i.e., a process eventually gets killed, possibly in another container.)
---
## Limiting container resources
- The Linux kernel offers rich mechanisms to limit container resources.
- For memory usage, the mechanism is part of the *cgroup* subsystem.
- This subsystem allows us to limit the memory for a process or a group of processes.
- A container engine leverages these mechanisms to limit memory for a container.
- The out-of-memory killer has a new behavior:
- it runs when a container exceeds its allowed memory usage,
- in that case, it only kills processes in that container.
---
## Limiting memory in practice
- The Docker Engine offers multiple flags to limit memory usage.
- The two most useful ones are `--memory` and `--memory-swap`.
- `--memory` limits the amount of physical RAM used by a container.
- `--memory-swap` limits the total amount (RAM+swap) used by a container.
- The memory limit can be expressed in bytes, or with a unit suffix.
(e.g.: `--memory 100m` = 100 megabytes.)
- We will see two strategies: limiting RAM usage, or limiting both RAM and swap.
---
## Limiting RAM usage
Example:
```bash
docker run -ti --memory 100m python
```
If the container tries to use more than 100 MB of RAM, *and* swap is available:
- the container will not be killed,
- memory above 100 MB will be swapped out,
- in most cases, the app in the container will be slowed down (a lot).
If we run out of swap, the global OOM killer still intervenes.
---
## Limiting both RAM and swap usage
Example:
```bash
docker run -ti --memory 100m --memory-swap 100m python
```
If the container tries to use more than 100 MB of memory, it is killed.
On the other hand, the application will never be slowed down because of swap.
---
## When to pick which strategy?
- Stateful services (like databases) will lose or corrupt data when killed
- Allow them to use swap space, but monitor swap usage
- Stateless services can usually be killed with little impact
- Limit their mem+swap usage, but monitor if they get killed
- Ultimately, this is no different from "do I want swap, and how much?"
---
## Limiting CPU usage
- There are no fewer than 3 ways to limit CPU usage:
- setting a relative priority with `--cpu-shares`,
- setting a CPU% limit with `--cpus`,
- pinning a container to specific CPUs with `--cpuset-cpus`.
- They can be used separately or together.
---
## Setting relative priority
- Each container has a relative priority used by the Linux scheduler.
- By default, this priority is 1024.
- As long as CPU usage is not maxed out, this has no effect.
- When CPU usage is maxed out, each container receives CPU cycles in proportion of its relative priority.
- In other words: a container with `--cpu-shares 2048` will receive twice as many CPU cycles as one with the default setting.
---
## Setting a CPU% limit
- This setting will make sure that a container doesn't use more than a given % of CPU.
- The value is expressed in CPUs; therefore:
`--cpus 0.1` means 10% of one CPU,
`--cpus 1.0` means 100% of one whole CPU,
`--cpus 10.0` means 10 entire CPUs.
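For instance, to cap a busy container at half of one CPU (a sketch; the container name and the busy-loop workload are just for illustration):

```shell
# Sketch: run a CPU-bound loop, capped at 50% of one core.
docker run -d --name capped --cpus 0.5 alpine sh -c "while true; do :; done"
# The CPU% column here should hover around 50%:
docker stats --no-stream capped
```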
---
## Pinning containers to CPUs
- On multi-core machines, it is possible to restrict execution to a specific set of CPUs.
- Examples:
`--cpuset-cpus 0` forces the container to run on CPU 0;
`--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7;
`--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11.
- This will not reserve the corresponding CPUs!
(They might still be used by other containers, or uncontainerized processes.)
---
## Limiting disk usage
- Most storage drivers do not support limiting the disk usage of containers.
(With the exception of devicemapper, but the limit cannot be set easily.)
- This means that a single container could exhaust disk space for everyone.
- In practice, however, this is not a concern, because:
- data files (for stateful services) should reside on volumes,
- assets (e.g. images, user-generated content...) should reside on object stores or on volumes,
- logs are written on standard output and gathered by the container engine.
- Container disk usage can be audited with `docker ps -s` and `docker diff`.


@@ -38,6 +38,42 @@ individual Docker VM.*
---
## What *is* Docker?
- "Installing Docker" really means "Installing the Docker Engine and CLI".
- The Docker Engine is a daemon (a service running in the background).
- This daemon manages containers, the same way that a hypervisor manages VMs.
- We interact with the Docker Engine by using the Docker CLI.
- The Docker CLI and the Docker Engine communicate through an API.
- There are many other programs, and many client libraries, to use that API.
---
## Why don't we run Docker locally?
- We are going to download container images and distribution packages.
- This could put a bit of stress on the local WiFi and slow us down.
- Instead, we use a remote VM that has good connectivity.
- In some rare cases, installing Docker locally is challenging:
- no administrator/root access (computer managed by strict corp IT)
- 32-bit CPU or OS
- old OS version (e.g. CentOS 6, OSX pre-Yosemite, Windows 7)
- It's better to spend time learning containers than fiddling with the installer!
---
## Connecting to your Virtual Machine
You need an SSH client.
@@ -66,21 +102,24 @@ Once logged in, make sure that you can run a basic Docker command:
```bash
$ docker version
Client:
Version: 17.09.0-ce
API version: 1.32
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:40:09 2017
OS/Arch: darwin/amd64
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:10:06 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:45:38 2017
OS/Arch: linux/amd64
Experimental: true
Engine:
Version: 18.03.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:08:35 2018
OS/Arch: linux/amd64
Experimental: false
```
]

View File

@@ -33,6 +33,8 @@ Docker volumes can be used to achieve many things, including:
* Sharing a *single file* between the host and a container.
* Using remote storage and custom storage with "volume drivers".
---
## Volumes are special directories in a container
@@ -118,7 +120,7 @@ $ curl localhost:8080
## Volumes exist independently of containers
If a container is stopped, its volumes still exist and are available.
If a container is stopped or removed, its volumes still exist and are available.
Volumes can be listed and manipulated with `docker volume` subcommands:
@@ -201,7 +203,7 @@ Then run `curl localhost:1234` again to see your changes.
---
## Managing volumes explicitly
## Using custom "bind-mounts"
In some cases, you want a specific directory on the host to be mapped
inside the container:
@@ -244,6 +246,8 @@ of an existing container.
* Newer containers can use `--volumes-from` too.
* Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes).
---
class: extra-details
@@ -259,7 +263,7 @@ $ docker run -d --name redis28 redis:2.8
Connect to the Redis container and set some data.
```bash
$ docker run -ti --link redis28:redis alpine telnet redis 6379
$ docker run -ti --link redis28:redis busybox telnet redis 6379
```
Issue the following commands:
@@ -298,7 +302,7 @@ class: extra-details
Connect to the Redis container and see our data.
```bash
docker run -ti --link redis30:redis alpine telnet redis 6379
docker run -ti --link redis30:redis busybox telnet redis 6379
```
Issue a few commands.
@@ -394,10 +398,56 @@ has root-like access to the host.]
You can install plugins to manage volumes backed by particular storage systems,
or to provide extra features. For instance:
* [dvol](https://github.com/ClusterHQ/dvol) - allows committing/branching/rolling back volumes;
* [Flocker](https://clusterhq.com/flocker/introduction/), [REX-Ray](https://github.com/emccode/rexray) - create and manage volumes backed by an enterprise storage system (e.g. SAN or NAS), or by cloud block stores (e.g. EBS);
* [Blockbridge](http://www.blockbridge.com/), [Portworx](http://portworx.com/) - provide distributed block store for containers;
* and much more!
* [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g.
SAN or NAS), or by cloud block stores (e.g. EBS, EFS).
* [Portworx](http://portworx.com/) - provides a distributed block store for containers.
* [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale
to several petabytes. It provides interfaces for object, block and file storage.
* and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)!
---
## Volumes vs. Mounts
* Since Docker 17.06, a new option is available: `--mount`.
* It offers a new, richer syntax to manipulate data in containers.
* It makes an explicit difference between:
- volumes (identified with a unique name, managed by a storage plugin),
- bind mounts (identified with a host path, not managed).
* The former `-v` / `--volume` option is still usable.
---
## `--mount` syntax
Binding a host path to a container path:
```bash
$ docker run \
--mount type=bind,source=/path/on/host,target=/path/in/container alpine
```
Mounting a volume to a container path:
```bash
$ docker run \
--mount source=myvolume,target=/path/in/container alpine
```
Mounting a tmpfs (in-memory, for temporary files):
```bash
$ docker run \
--mount type=tmpfs,destination=/path/in/container,tmpfs-size=1000000 alpine
```
---

View File

@@ -2,7 +2,7 @@
- This was initially written to support in-person, instructor-led workshops and tutorials
- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors)
- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://@@GITREPO@@/graphs/contributors)
- You can also follow along on your own, at your own pace

slides/kube-90min.yml Normal file
View File

@@ -0,0 +1,52 @@
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
- extra-details
chapters:
- common/title.md
- logistics.md
#- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- common/composescale.md
- common/composedown.md
- kube/concepts-k8s.md
# - common/declarative.md
- kube/declarative.md
# - kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- - kube/kubectlrun.md
- kube/kubectlexpose.md
- kube/ourapponkube.md
#- kube/kubectlproxy.md
- - kube/dashboard.md
- kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
# Stern is interesting but can be skipped
#- - kube/logs-cli.md
# Bridget hasn't added EFK yet
#- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
# Bridget-specific
# - kube/links-bridget.md
- common/thankyou.md

slides/kube-fullday.yml Normal file
View File

@@ -0,0 +1,47 @@
title: |
Deploying and Scaling Microservices
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- common/title.md
- logistics.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
#- common/composescale.md
- common/composedown.md
- - kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- kube/kubectlrun.md
- - kube/kubectlexpose.md
- kube/ourapponkube.md
- kube/kubectlproxy.md
- kube/dashboard.md
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
#- kube/logs-cli.md
#- kube/logs-centralized.md
#- kube/helm.md
#- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
- common/thankyou.md

View File

@@ -1,38 +1,50 @@
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"
chat: "[SREcon slack #k8s_101](https://usenix-srecon.slack.com/messages/C9XR4F5NJ/) ([get invitation](http://sreconinvite.herokuapp.com/))"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
#chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- common/title.md
- logistics-bridget.md
- logistics.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- common/composescale.md
- common/composedown.md
- - kube/concepts-k8s.md
- kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- kube/kubectlrun.md
- - kube/kubectlexpose.md
- - kube/kubectlrun.md
- kube/kubectlexpose.md
- kube/ourapponkube.md
- kube/dashboard.md
- - kube/kubectlscale.md
#- kube/kubectlproxy.md
- - kube/dashboard.md
- kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
# Bridget hasn't added EFK yet
#- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
- common/thankyou.md
# - kube/links.md
# Bridget-specific
- kube/links-bridget.md
- common/thankyou.md

View File

@@ -5,6 +5,10 @@ title: |
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
@@ -28,10 +32,15 @@ chapters:
- kube/kubectlrun.md
- - kube/kubectlexpose.md
- kube/ourapponkube.md
- kube/kubectlproxy.md
- kube/dashboard.md
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
- common/thankyou.md
- kube/links.md
- common/thankyou.md

View File

@@ -98,47 +98,80 @@ class: pic
---
## Kubernetes architecture: the master
- The Kubernetes logic (its "brains") is a collection of services:
- the API server (our point of entry to everything!)
- core services like the scheduler and controller manager
- `etcd` (a highly available key/value store; the "database" of Kubernetes)
- Together, these services form what is called the "master"
- These services can run straight on a host, or in containers
<br/>
(that's an implementation detail)
- `etcd` can be run on separate machines (first schema) or co-located (second schema)
- We need at least one master, but we can have more (for high availability)
---
## Kubernetes architecture: the nodes
- The nodes executing our containers run another collection of services:
- The nodes executing our containers run a collection of services:
- a container engine (typically Docker)
- kubelet (the "node agent")
- kube-proxy (a necessary but not sufficient network component)
- Nodes were formerly called "minions"
- It is customary to *not* run apps on the node(s) running master components
(Except when using small development clusters)
(You might see that word in older articles or documentation)
---
## Do we need to run Docker at all?
## Kubernetes architecture: the control plane
No!
- The Kubernetes logic (its "brains") is a collection of services:
--
- the API server (our point of entry to everything!)
- core services like the scheduler and controller manager
- `etcd` (a highly available key/value store; the "database" of Kubernetes)
- Together, these services form the control plane of our cluster
- The control plane is also called the "master"
---
## Running the control plane on special nodes
- It is common to reserve a dedicated node for the control plane
(Except for single-node development clusters, like when using minikube)
- This node is then called a "master"
(Yes, this is ambiguous: is the "master" a node, or the whole control plane?)
- Normal applications are restricted from running on this node
(By using a mechanism called ["taints"](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/))
- When high availability is required, each service of the control plane must be resilient
- The control plane is then replicated on multiple nodes
(This is sometimes called a "multi-master" setup)
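As a sketch of how that restriction works: kubeadm-style clusters taint the master with a key like `node-role.kubernetes.io/master` (the exact key depends on the installer), and a pod that must run there anyway declares a matching toleration:

```yaml
# Sketch: toleration letting a pod schedule onto a tainted master node.
# The taint key shown is the one kubeadm applies; other installers may differ.
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
```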
---
## Running the control plane outside containers
- The services of the control plane can run in or out of containers
- For instance: since `etcd` is a critical service, some people
deploy it directly on a dedicated cluster (without containers)
(This is illustrated on the first "super complicated" schema)
- In some hosted Kubernetes offerings (e.g. GKE), the control plane is invisible
(We only "see" a Kubernetes API endpoint)
- In that case, there is no "master node"
*For this reason, it is more accurate to say "control plane" rather than "master".*
---
## Default container runtime
- By default, Kubernetes uses the Docker Engine to run containers
@@ -148,43 +181,7 @@ No!
(like CRI-O, or containerd)
---
## Do we need to run Docker at all?
Yes!
--
- In this workshop, we run our app on a single node first
- We will need to build images and ship them around
- We can do these things without Docker
<br/>
(and get diagnosed with NIH¹ syndrome)
- Docker is still the most stable container engine today
<br/>
(but other options are maturing very quickly)
.footnote[¹[Not Invented Here](https://en.wikipedia.org/wiki/Not_invented_here)]
---
## Do we need to run Docker at all?
- In our development environments, CI pipelines ... :
*Yes, almost certainly*
- On our production servers:
*Yes (today)*
*Probably not (in the future)*
.footnote[More information about CRI [on the Kubernetes blog](http://blog.kubernetes.io/2016/12/container-runtime-interface-cri-in-kubernetes.html)]
.footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)]
---
@@ -198,6 +195,7 @@ Yes!
- node (a machine — physical or virtual — in our cluster)
- pod (group of containers running together on a node)
- IP addresses are associated with *pods*, not with individual containers
- service (stable network endpoint to connect to one or multiple containers)
- namespace (more-or-less isolated group of things)
- secret (bundle of sensitive data to be passed to a container)
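For illustration, here is a minimal manifest for one of these resources: a pod running the `alpine` image used elsewhere in this workshop (the name is illustrative):

```yaml
# Minimal pod manifest (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: pingpong
  namespace: default
spec:
  containers:
  - name: pingpong
    image: alpine
    command: ["ping", "1.1.1.1"]
```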
@@ -209,25 +207,3 @@ Yes!
class: pic
![Node, pod, container](images/k8s-arch3-thanks-weave.png)
---
class: pic
![One of the best Kubernetes architecture diagrams available](images/k8s-arch4-thanks-luxas.png)
---
## Credits
- The first diagram is courtesy of Weave Works
- a *pod* can have multiple containers working together
- IP addresses are associated with *pods*, not with individual containers
- The second diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha)
- it's one of the best Kubernetes architecture diagrams available!
Both diagrams used with permission.

View File

@@ -36,7 +36,7 @@
## Creating a daemon set
- Unfortunately, as of Kubernetes 1.9, the CLI cannot create daemon sets
- Unfortunately, as of Kubernetes 1.10, the CLI cannot create daemon sets
--
@@ -55,7 +55,7 @@
--
- option 1: read the docs
- option 1: [read the docs](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#create-a-daemonset)
--
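For reference, a minimal daemon set manifest looks roughly like the following sketch (the label and image name are illustrative, borrowed from the DockerCoins `rng` service):

```yaml
# Sketch of a daemon set manifest; one pod per node will be created
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rng
spec:
  selector:
    matchLabels:
      run: rng
  template:
    metadata:
      labels:
        run: rng        # must match the selector above
    spec:
      containers:
      - name: rng
        image: dockercoins/rng:v0.1   # illustrative image name
```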
@@ -178,29 +178,37 @@ Wait ... Now, can it be *that* easy?
--
We have both `deploy/rng` and `ds/rng` now!
We have two resources called `rng`:
--
- the *deployment* that was existing before
And one too many pods...
- the *daemon set* that we just created
We also have one too many pods.
<br/>
(The pod corresponding to the *deployment* still exists.)
---
## Explanation
## `deploy/rng` and `ds/rng`
- You can have different resource types with the same name
(i.e. a *deployment* and a *daemonset* both named `rng`)
(i.e. a *deployment* and a *daemon set* both named `rng`)
- We still have the old `rng` *deployment*
- But now we have the new `rng` *daemonset* as well
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/rng 1 1 1 1 18m
```
- If we look at the pods, we have:
- But now we have the new `rng` *daemon set* as well
- *one pod* for the deployment
- *one pod per node* for the daemonset
```
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/rng 2 2 2 2 2 <none> 9s
```
---
@@ -308,116 +316,27 @@ The replica set selector also has a `pod-template-hash`, unlike the pods in our
---
# Updating a service through labels and selectors
## Deleting a deployment
- What if we want to drop the `rng` deployment from the load balancer?
.exercise[
- Option 1:
- destroy it
- Option 2:
- add an extra *label* to the daemon set
- update the service *selector* to refer to that *label*
- Remove the `rng` deployment:
```bash
kubectl delete deployment rng
```
]
--
Of course, option 2 offers more learning opportunities. Right?
- The pod that was created by the deployment is now being terminated:
---
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
rng-54f57d4d49-vgz9h 1/1 Terminating 0 4m
rng-vplmj 1/1 Running 0 11m
rng-xbpvg 1/1 Running 0 11m
[...]
```
## Add an extra label to the daemon set
- We will update the daemon set "spec"
- Option 1:
- edit the `rng.yml` file that we used earlier
- load the new definition with `kubectl apply`
- Option 2:
- use `kubectl edit`
--
*If you feel like you got this💕🌈, feel free to try directly.*
*We've included a few hints on the next slides for your convenience!*
---
## We've put resources in your resources
- Reminder: a daemon set is a resource that creates more resources!
- There is a difference between:
- the label(s) of a resource (in the `metadata` block in the beginning)
- the selector of a resource (in the `spec` block)
- the label(s) of the resource(s) created by the first resource (in the `template` block)
- You need to update the selector and the template (metadata labels are not mandatory)
- The template must match the selector
(i.e. the resource will refuse to create resources that it will not select)
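Concretely, the three places look like this in the daemon set manifest (a sketch using the `isactive` label discussed in this section):

```yaml
metadata:
  labels:              # labels of the daemon set itself (optional)
    run: rng
spec:
  selector:
    matchLabels:       # selector: which pods this daemon set "owns"
      run: rng
      isactive: "yes"
  template:
    metadata:
      labels:          # labels stamped on the pods it creates
        run: rng       # (must match the selector)
        isactive: "yes"
```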
---
## Adding our label
- Let's add a label `isactive: yes`
- In YAML, `yes` should be quoted; i.e. `isactive: "yes"`
.exercise[
- Update the daemon set to add `isactive: "yes"` to the selector and template label:
```bash
kubectl edit daemonset rng
```
- Update the service to add `isactive: "yes"` to its selector:
```bash
kubectl edit service rng
```
]
---
## Checking what we've done
.exercise[
- Check the logs of all `run=rng` pods to confirm that exactly one per node is now active:
```bash
kubectl logs -l run=rng
```
]
The timestamps should give us a hint about how many pods are currently receiving traffic.
.exercise[
- Look at the pods that we have right now:
```bash
kubectl get pods
```
]
---
## More labels, more selectors, more problems?
- Bonus exercise 1: clean up the pods of the "old" daemon set
- Bonus exercise 2: how could we have done this to avoid creating new pods?
Ding, dong, the deployment is dead! And the daemon set lives on.

View File

@@ -10,9 +10,6 @@
3) bypass authentication for the dashboard
--
There is an additional step to make the dashboard available from outside (we'll get to that)
--
@@ -148,58 +145,6 @@ The dashboard will then ask you which authentication you want to use.
---
## Exposing the dashboard over HTTPS
- We took a shortcut by forwarding HTTP to HTTPS inside the cluster
- Let's expose the dashboard over HTTPS!
- The dashboard is exposed through a `ClusterIP` service (internal traffic only)
- We will change that into a `NodePort` service (accepting outside traffic)
.exercise[
- Edit the service:
```bash
kubectl edit service kubernetes-dashboard
```
]
--
`NotFound`?!? Y U NO WORK?!?
---
## Editing the `kubernetes-dashboard` service
- If we look at the [YAML](https://goo.gl/Qamqab) that we loaded before, we'll get a hint
--
- The dashboard was created in the `kube-system` namespace
--
.exercise[
- Edit the service:
```bash
kubectl -n kube-system edit service kubernetes-dashboard
```
- Change `ClusterIP` to `NodePort`, save, and exit
- Check the port that was assigned with `kubectl -n kube-system get services`
- Connect to https://oneofournodes:3xxxx/ (yes, https)
]
---
## Running the Kubernetes dashboard securely
- The steps that we just showed you are *for educational purposes only!*
@@ -256,9 +201,9 @@ The dashboard will then ask you which authentication you want to use.
- It's safe if you use HTTPS URLs from trusted sources
--
- It introduces new failure modes
- Example: the official setup instructions for most pod networks
--
- It introduces new failure modes (e.g. if you try to apply YAML from a link that's no longer valid)

slides/kube/helm.md Normal file
View File

@@ -0,0 +1,217 @@
# Managing stacks with Helm
- We created our first resources with `kubectl run`, `kubectl expose` ...
- We have also created resources by loading YAML files with `kubectl apply -f`
- For larger stacks, managing thousands of lines of YAML is unreasonable
- These YAML bundles need to be customized with variable parameters
(E.g.: number of replicas, image version to use ...)
- It would be nice to have an organized, versioned collection of bundles
- It would be nice to be able to upgrade/rollback these bundles carefully
- [Helm](https://helm.sh/) is an open source project offering all these things!
---
## Helm concepts
- `helm` is a CLI tool
- `tiller` is its companion server-side component
- A "chart" is an archive containing templatized YAML bundles
- Charts are versioned
- Charts can be stored on private or public repositories
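On disk, a chart is a directory; its layout (as generated by `helm create`, file names shown are the conventional ones) looks roughly like:

```
dockercoins/                # chart name
├── Chart.yaml              # chart metadata (name, version, description)
├── values.yaml             # default values for template parameters
└── templates/              # templatized YAML manifests
    ├── deployment.yaml
    └── service.yaml
```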
---
## Installing Helm
- We need to install the `helm` CLI; then use it to deploy `tiller`
.exercise[
- Install the `helm` CLI:
```bash
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
```
- Deploy `tiller`:
```bash
helm init
```
- Add the `helm` completion:
```bash
. <(helm completion $(basename $SHELL))
```
]
---
## Fix account permissions
- Helm's permission model requires us to tweak permissions
- In a more realistic deployment, you might create per-user or per-team
service accounts, roles, and role bindings
.exercise[
- Grant `cluster-admin` role to `kube-system:default` service account:
```bash
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin --serviceaccount=kube-system:default
```
]
(Defining the exact roles and permissions on your cluster requires
a deeper knowledge of Kubernetes' RBAC model. The command above is
fine for personal and development clusters.)
---
## View available charts
- A public repo is pre-configured when installing Helm
- We can view available charts with `helm search` (and an optional keyword)
.exercise[
- View all available charts:
```bash
helm search
```
- View charts related to `prometheus`:
```bash
helm search prometheus
```
]
---
## Install a chart
- Most charts use `LoadBalancer` service types by default
- Most charts require persistent volumes to store data
- We need to relax these requirements a bit
.exercise[
- Install the Prometheus metrics collector on our cluster:
```bash
helm install stable/prometheus \
--set server.service.type=NodePort \
--set server.persistentVolume.enabled=false
```
]
Where do these `--set` options come from?
---
## Inspecting a chart
- `helm inspect` shows details about a chart (including available options)
.exercise[
- See the metadata and all available options for `stable/prometheus`:
```bash
helm inspect stable/prometheus
```
]
The chart's metadata includes a URL to the project's home page.
(Sometimes it conveniently points to the documentation for the chart.)
---
## Creating a chart
- We are going to show a way to create a *very simplified* chart
- In a real chart, *lots of things* would be templatized
(Resource names, service types, number of replicas...)
.exercise[
- Create a sample chart:
```bash
helm create dockercoins
```
- Move away the sample templates and create an empty template directory:
```bash
mv dockercoins/templates dockercoins/default-templates
mkdir dockercoins/templates
```
]
---
## Exporting the YAML for our application
- The following section assumes that DockerCoins is currently running
.exercise[
- Create one YAML file for each resource that we need:
.small[
```bash
while read kind name; do
kubectl get -o yaml --export $kind $name > dockercoins/templates/$name-$kind.yaml
done <<EOF
deployment worker
deployment hasher
daemonset rng
deployment webui
deployment redis
service hasher
service rng
service webui
service redis
EOF
```
]
]
---
## Testing our helm chart
.exercise[
- Let's install our helm chart! (`dockercoins` is the path to the chart)
```bash
helm install dockercoins
```
]
--
- Since the application is already deployed, this will fail:<br>
`Error: release loitering-otter failed: services "hasher" already exists`
- To avoid naming conflicts, we will deploy the application in another *namespace*

View File

@@ -3,7 +3,7 @@
- This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person,
instructor-led workshops and tutorials
- Credit is also due to [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) — thank you!
- Credit is also due to [multiple contributors](https://@@GITREPO@@/graphs/contributors) — thank you!
- You can also follow along on your own, at your own pace

View File

@@ -123,7 +123,7 @@ Note: please DO NOT call the service `search`. It would collide with the TLD.
.exercise[
- Let's obtain the IP address that was allocated for our service, *programatically:*
- Let's obtain the IP address that was allocated for our service, *programmatically:*
```bash
IP=$(kubectl get svc elastic -o go-template --template '{{ .spec.clusterIP }}')
```
@@ -137,4 +137,116 @@ Note: please DO NOT call the service `search`. It would collide with the TLD.
--
Our requests are load balanced across multiple pods.
We may see `curl: (7) Failed to connect to _IP_ port 9200: Connection refused`.
This is normal while the service starts up.
--
Once it's running, our requests are load balanced across multiple pods.
---
class: extra-details
## If we don't need a load balancer
- Sometimes, we want to access our scaled services directly:
- if we want to save a tiny little bit of latency (typically less than 1ms)
- if we need to connect over arbitrary ports (instead of a few fixed ones)
- if we need to communicate over a protocol other than UDP or TCP
- if we want to decide how to balance the requests client-side
- ...
- In that case, we can use a "headless service"
---
class: extra-details
## Headless services
- A headless service is obtained by setting the `clusterIP` field to `None`
(Either with `--cluster-ip=None`, or by providing a custom YAML)
- As a result, the service doesn't have a virtual IP address
- Since there is no virtual IP address, there is no load balancer either
- `kube-dns` will return the pods' IP addresses as multiple `A` records
- This gives us an easy way to discover all the replicas for a deployment
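A headless service declared with custom YAML looks like this minimal sketch (service name and selector borrowed from the `elastic` example above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elastic
spec:
  clusterIP: None          # makes the service "headless" (no virtual IP)
  selector:
    run: elastic
  ports:
  - port: 9200
```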
---
class: extra-details
## Services and endpoints
- A service has a number of "endpoints"
- Each endpoint is a host + port where the service is available
- The endpoints are maintained and updated automatically by Kubernetes
.exercise[
- Check the endpoints that Kubernetes has associated with our `elastic` service:
```bash
kubectl describe service elastic
```
]
In the output, there will be a line starting with `Endpoints:`.
That line will list a bunch of addresses in `host:port` format.
---
class: extra-details
## Viewing endpoint details
- When we have many endpoints, our display commands truncate the list
```bash
kubectl get endpoints
```
- If we want to see the full list, we can use one of the following commands:
```bash
kubectl describe endpoints elastic
kubectl get endpoints elastic -o yaml
```
- These commands will show us a list of IP addresses
- These IP addresses should match the addresses of the corresponding pods:
```bash
kubectl get pods -l run=elastic -o wide
```
---
class: extra-details
## `endpoints` not `endpoint`
- `endpoints` is the only resource that cannot be singular
```bash
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
```
- This is because the type itself is plural (unlike every other resource)
- There is no `endpoint` object: `type Endpoints struct`
- The type doesn't represent a single endpoint, but a list of endpoints
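For illustration, an `Endpoints` object holds subsets of addresses and ports; a sketch (the IP addresses shown are made up):

```yaml
# Sketch of an Endpoints object; one address per ready pod
apiVersion: v1
kind: Endpoints
metadata:
  name: elastic
subsets:
- addresses:
  - ip: 10.244.1.4
  - ip: 10.244.2.7
  ports:
  - port: 9200
```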

View File

@@ -1,3 +1,5 @@
class: extra-details
# First contact with `kubectl`
- `kubectl` is (almost) the only tool we'll need to talk to Kubernetes
@@ -79,6 +81,8 @@
---
class: extra-details
## What's available?
- `kubectl` has pretty good introspection facilities
@@ -265,4 +269,4 @@ The `kube-system` namespace is used for the control plane.
]
--
- `kube-public` is created by kubeadm & [used for security bootstrapping](http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html)
- `kube-public` is created by kubeadm & [used for security bootstrapping](https://kubernetes.io/blog/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters)

slides/kube/kubectlproxy.md Normal file
View File

@@ -0,0 +1,117 @@
# Accessing internal services with `kubectl proxy`
- `kubectl proxy` runs a proxy in the foreground
- This proxy lets us access the Kubernetes API without authentication
(`kubectl proxy` adds our credentials on the fly to the requests)
- This proxy lets us access the Kubernetes API over plain HTTP
- This is a great tool to learn and experiment with the Kubernetes API
- The Kubernetes API also gives us a proxy to HTTP and HTTPS services
- Therefore, we can use `kubectl proxy` to access internal services
(Without using a `NodePort` or similar service)
---
## Secure by default
- By default, the proxy listens on port 8001
(But this can be changed, or we can tell `kubectl proxy` to pick a port)
- By default, the proxy binds to `127.0.0.1`
(Making it unreachable from other machines, for security reasons)
- By default, the proxy only accepts connections from:
`^localhost$,^127\.0\.0\.1$,^\[::1\]$`
- This is great when running `kubectl proxy` locally
- Not-so-great when running it on a remote machine
---
## Running `kubectl proxy` on a remote machine
- We are going to bind to `INADDR_ANY` instead of `127.0.0.1`
- We are going to accept connections from any address
.exercise[
- Run an open proxy to the Kubernetes API:
```bash
kubectl proxy --port=8888 --address=0.0.0.0 --accept-hosts=.*
```
]
.warning[Anyone can now do whatever they want with our Kubernetes cluster!
<br/>
(Don't do this on a real cluster!)]
---
## Viewing available API routes
- The default route (i.e. `/`) shows a list of available API endpoints
.exercise[
- Point your browser to the IP address of the node running `kubectl proxy`, port 8888
]
The result should look like this:
```json
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
```
---
## Connecting to a service through the proxy
- The API can proxy HTTP and HTTPS requests by accessing a special route:
```
/api/v1/namespaces/`name_of_namespace`/services/`name_of_service`/proxy
```
- Since we now have access to the API, we can use this special route
.exercise[
- Access the `hasher` service through the special proxy route:
```open
http://`X.X.X.X`:8888/api/v1/namespaces/default/services/hasher/proxy
```
]
You should see the banner of the hasher service: `HASHER running on ...`
---
## Stopping the proxy
- Remember: as it is running right now, `kubectl proxy` gives open access to our cluster
.exercise[
- Stop the `kubectl proxy` process with Ctrl-C
]

View File

@@ -20,9 +20,10 @@
.exercise[
- Let's ping `goo.gl`:
- Let's ping `1.1.1.1`, Cloudflare's
[public DNS resolver](https://blog.cloudflare.com/announcing-1111/):
```bash
kubectl run pingpong --image alpine ping goo.gl
kubectl run pingpong --image alpine ping 1.1.1.1
```
]
@@ -49,9 +50,11 @@ OK, what just happened?
--
We should see the following things:
- `deploy/pingpong` (the *deployment* that we just created)
- `rs/pingpong-xxxx` (a *replica set* created by the deployment)
- `po/pingpong-yyyy` (a *pod* created by the replica set)
- `deployment.apps/pingpong` (the *deployment* that we just created)
- `replicaset.apps/pingpong-xxxxxxxxxx` (a *replica set* created by the deployment)
- `pod/pingpong-xxxxxxxxxx-yyyyy` (a *pod* created by the replica set)
Note: as of 1.10.1, resource types are displayed in more detail.
---
@@ -78,21 +81,34 @@ We should see the following things:
---
class: extra-details
## Our `pingpong` deployment
- `kubectl run` created a *deployment*, `deploy/pingpong`
- `kubectl run` created a *deployment*, `deployment.apps/pingpong`
- That deployment created a *replica set*, `rs/pingpong-xxxx`
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/pingpong 1 1 1 1 10m
```
- That replica set created a *pod*, `po/pingpong-yyyy`
- That deployment created a *replica set*, `replicaset.apps/pingpong-xxxxxxxxxx`
```
NAME DESIRED CURRENT READY AGE
replicaset.apps/pingpong-7c8bbcd9bc 1 1 1 10m
```
- That replica set created a *pod*, `pod/pingpong-xxxxxxxxxx-yyyyy`
```
NAME READY STATUS RESTARTS AGE
pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
```
- We'll see later how these folks play together for:
- scaling, high availability, rolling updates
---
---
class: extra-details
## Streaming logs in real time
- Just like `docker logs`, `kubectl logs` supports convenient options:
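For instance, two commonly used flags (a sketch; both exist in `kubectl logs`):

```shell
# --tail limits how far back we start; --follow streams new lines as they arrive
kubectl logs deploy/pingpong --tail 1 --follow
```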
```
<!--
```wait seq=3```
```keys ^C```
-->
]
]
Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?
We could! But the *deployment* would notice it right away, and scale back to the initial level.
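As a sketch of the difference (replace the hash with your actual replica set name):

```shell
# Scaling the deployment is the supported way; the replica set and pods follow:
kubectl scale deploy/pingpong --replicas=3

# Scaling the replica set directly "works" ... until the deployment
# notices and reconciles it back to its own desired count:
kubectl scale rs/pingpong-xxxxxxxxxx --replicas=8
```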
```
<!--
```wait Running```
```keys ^C```
-->
- Destroy a pod:
```bash
kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
```
]
---
class: extra-details
## Viewing logs of multiple pods
- When we specify a deployment name, only one single pod's logs are shown
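One workaround is to select pods by label instead of by deployment name (assuming the `run=pingpong` label that `kubectl run` puts on its pods):

```shell
# Show the last log line of every pod matching the label selector:
kubectl logs -l run=pingpong --tail 1
```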
---
class: extra-details
## Aren't we flooding 1.1.1.1?
- If you're wondering this, good question!
- Don't worry, though:
*APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.*
(Source: https://blog.cloudflare.com/announcing-1111/)
- It's very unlikely that our concerted pings manage to produce
even a modest blip at Cloudflare's NOC!


(15 are listed in the Kubernetes documentation)
- Pods have level 3 (IP) connectivity, but *services* are level 4
(Services map to a single UDP or TCP port; no port ranges or arbitrary IP packets)
- `kube-proxy` is on the data path when connecting to a pod or container,
<br/>and it's not particularly fast (relies on userland proxying or iptables)
## Kubernetes network model: in practice
- The nodes that we are using have been set up to use [Weave](https://github.com/weaveworks/weave)
- We don't endorse Weave in any particular way, it just Works For Us
- Unless you:
- routinely saturate 10G network interfaces
- count packet rates in millions per second
- run high-traffic VOIP or gaming platforms
- do weird things that involve millions of simultaneous connections
<br/>(in which case you're already familiar with kernel tuning)
- If necessary, there are alternatives to `kube-proxy`; e.g.
[`kube-router`](https://www.kube-router.io)
---
## The Container Network Interface (CNI)
- The CNI has a well-defined [specification](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration) for network plugins
- When a pod is created, Kubernetes delegates the network setup to CNI plugins
- Typically, a CNI plugin will:
- allocate an IP address (by calling an IPAM plugin)
- add a network interface into the pod's network namespace
- configure the interface and set up the required routes, etc.
- Using multiple plugins can be done with "meta-plugins" like CNI-Genie or Multus
- Not all CNI plugins are equal
(e.g. they don't all implement network policies, which are required to isolate pods)
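As a rough illustration of what a runtime hands to a plugin, here is a minimal, hypothetical CNI configuration (the `bridge` and `host-local` plugins are standard reference plugins; the network name and subnet here are made up):

```shell
# CNI config files usually live in /etc/cni/net.d/ on each node;
# we write an example one to /tmp for inspection:
cat > /tmp/10-mynet.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
EOF
```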


# Links and resources
All things Kubernetes:
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
- [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/)
All things Docker:
- [Docker documentation](http://docs.docker.com/)
- [Docker Hub](https://hub.docker.com)
- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker)
- [Play With Docker Hands-On Labs](http://training.play-with-docker.com/)
Everything else:
- [Cloud Developer Advocates](https://developer.microsoft.com/advocates/)
- [Local meetups](https://www.meetup.com/)
