Compare commits


345 Commits

Author SHA1 Message Date
Diego Quintana
051dd13c21 Enforce alpine version that includes telnet (#292)
* Enforce alpine version that contains telnet

alpine 3.7 does not contain `telnet` by default https://github.com/gliderlabs/docker-alpine/issues/397#issuecomment-375415746

* bump fix

enforce alpine 3.6 in another slide that mentions `telnet`
2018-06-28 08:26:30 -05:00
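For context, a sketch of the pinning this commit enforces; the image tags are real, the probed host is just an example:

```
# telnet ships with busybox in alpine 3.6, so pinning the tag keeps it available:
docker run -it --rm alpine:3.6 telnet example.com 80
# on alpine 3.7+, telnet moved to the busybox-extras package:
docker run -it --rm alpine:3.7 sh -c "apk add --no-cache busybox-extras && telnet example.com 80"
```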
Diego Quintana
8c3d4c2c56 Add short inline explanation for -w (#291)
I don't know, but maybe having this short explanation saves a `docker run --help` for someone. 

Tell me if it's too much :D
2018-06-28 08:25:34 -05:00
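The flag in question is `docker run -w` (long form `--workdir`), which sets the working directory inside the container; a minimal illustration:

```
# -w / --workdir sets the process's working directory inside the container
docker run --rm -w /tmp alpine:3.6 pwd    # prints /tmp
```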
Jerome Petazzoni
817e17a3a8 Merge branch 'master' into avril2018 2018-04-13 08:13:10 +02:00
Jérôme Petazzoni
6ad7a285e7 Merge pull request #201 from bridgetkromhout/chart-clarity
Clarify chart install
2018-04-13 01:08:13 -05:00
Jérôme Petazzoni
e529eaed2d Merge pull request #200 from bridgetkromhout/helm-example
Use prometheus as example
2018-04-13 01:07:18 -05:00
Jérôme Petazzoni
4697c6c6ad Merge pull request #189 from bridgetkromhout/elastic-patience
Clarify error message upon start & endpoints
2018-04-13 01:06:33 -05:00
Jérôme Petazzoni
56e47c3550 Update kubectlexpose.md
Add line break for readability
2018-04-13 08:06:23 +02:00
Jérôme Petazzoni
b3a9ba339c Merge pull request #199 from bridgetkromhout/helm-mkdir
Directory missing
2018-04-13 01:04:39 -05:00
Jérôme Petazzoni
8d0ce37a59 Merge pull request #196 from bridgetkromhout/or-azure
Azure directions are also included
2018-04-13 01:04:07 -05:00
Jérôme Petazzoni
a1bbbd6f7b Merge pull request #195 from bridgetkromhout/slide-clarity
Making slide easier to read
2018-04-13 01:03:39 -05:00
Jérôme Petazzoni
e48016a0de Merge pull request #203 from jpetazzo/master
Typo fix, thanks Bridget! ♥
2018-04-12 15:55:36 -05:00
Jerome Petazzoni
39765c9ad0 Add food menu 2018-04-12 15:54:20 -05:00
Bridget Kromhout
9d4a72a4ba Merge pull request #202 from bridgetkromhout/url-update-fix
Fixing typo
2018-04-12 15:30:11 -05:00
Bridget Kromhout
19e39aea49 Fixing typo 2018-04-12 15:27:51 -05:00
Jerome Petazzoni
ca06269f00 Merge branch 'master' into avril2018 2018-04-12 12:06:44 -05:00
Bridget Kromhout
da064a6005 Clarify chart install 2018-04-12 10:24:01 -05:00
Bridget Kromhout
a12a38a7a9 Use prometheus as example 2018-04-12 09:50:12 -05:00
Bridget Kromhout
2c3a442a4c wording correction
The addresses aren't what show us the addresses - it seems clear from context that this should be "commands".
2018-04-12 08:11:43 -05:00
Bridget Kromhout
25d560cf46 Directory missing 2018-04-12 07:48:25 -05:00
Bridget Kromhout
c3324cf64c More general 2018-04-12 07:41:43 -05:00
Bridget Kromhout
053bbe7028 Bold instead of highlighting 2018-04-12 07:39:02 -05:00
Jerome Petazzoni
9876a9aaa6 Add dockerfile samples 2018-04-12 09:04:47 +02:00
Jerome Petazzoni
853ba7ec39 Add dockerfile samples 2018-04-12 09:04:36 +02:00
Jérôme Petazzoni
5ef96a29ac Update kubectlexpose.md 2018-04-12 00:37:18 -05:00
Jérôme Petazzoni
f261e7aa96 Merge pull request #194 from bridgetkromhout/fix-blue
removing extra leading spaces which break everything
2018-04-11 23:55:34 -05:00
Jérôme Petazzoni
8e44e911ca Merge pull request #193 from bridgetkromhout/stern
Missing word added
2018-04-11 23:52:17 -05:00
Bridget Kromhout
fce69b6bb2 Azure directions are also included 2018-04-11 19:34:51 -05:00
Bridget Kromhout
1183e2e4bf Making slide easier to read 2018-04-11 18:55:23 -05:00
Bridget Kromhout
de3082e48f Extra spaces prevent this from working 2018-04-11 18:47:30 -05:00
Bridget Kromhout
3acac34e4b Missing word added 2018-04-11 18:11:07 -05:00
Jérôme Petazzoni
3bac124921 Merge pull request #183 from bridgetkromhout/stalling-for-time
Stalling for time during download
2018-04-11 14:56:02 +02:00
Bridget Kromhout
ba44603d0f Correcting title and slide section division 2018-04-11 06:53:01 -05:00
Jerome Petazzoni
3d5c89774c Merge branch 'master' into avril2018 2018-04-11 12:17:03 +02:00
Jerome Petazzoni
358f844c88 Typo fix 2018-04-11 02:40:38 -07:00
Jerome Petazzoni
21bb5fa9e1 Clarify wifi 2018-04-11 01:40:35 -05:00
Jerome Petazzoni
3fe4d730e7 merge master 2018-04-11 01:13:24 -05:00
Jérôme Petazzoni
74bf2d742c Merge pull request #182 from bridgetkromhout/versions-validated
Clarify versions validated
2018-04-10 23:11:38 -07:00
Jérôme Petazzoni
acba3d5467 Merge pull request #192 from bridgetkromhout/add-links
Add links
2018-04-10 23:03:09 -07:00
Jerome Petazzoni
056b3a7127 hotfix for kubectl get all 2018-04-10 17:21:29 -05:00
Jerome Petazzoni
292885566d Merge branch 'master' into avril2018 2018-04-10 17:12:21 -05:00
Jérôme Petazzoni
cfc066c8ea Merge pull request #191 from jgarrouste/master
Reversed sentences
2018-04-10 15:03:09 -07:00
Jérôme Petazzoni
4f69f19866 Merge pull request #186 from bridgetkromhout/vm-readme
link to VM prep README
2018-04-10 14:56:19 -07:00
Jérôme Petazzoni
c508f88af2 Update setup-k8s.md 2018-04-10 16:56:07 -05:00
Jérôme Petazzoni
9757fdb42f Merge pull request #185 from bridgetkromhout/article
Adding an article
2018-04-10 14:52:49 -07:00
Bridget Kromhout
24d57f535b Add links 2018-04-10 16:52:07 -05:00
Jérôme Petazzoni
e42dfc0726 Merge pull request #184 from bridgetkromhout/url-update
URL update
2018-04-10 14:51:55 -07:00
Jérémy GARROUSTE
c7198b3538 correction 2018-04-10 22:56:42 +02:00
Bridget Kromhout
af1347ca17 Clarify endpoints 2018-04-10 15:07:42 -05:00
Bridget Kromhout
f741cf5b23 Clarify error message upon start 2018-04-10 14:33:49 -05:00
Bridget Kromhout
d3c0a60de9 link to VM prep README 2018-04-10 12:30:46 -05:00
Bridget Kromhout
83bba80f3b URL update 2018-04-10 12:25:44 -05:00
Bridget Kromhout
44e0cfb878 Adding an article 2018-04-10 12:22:24 -05:00
Bridget Kromhout
a58e21e313 URL update 2018-04-10 12:15:01 -05:00
Bridget Kromhout
1131635006 Stalling for time during download 2018-04-10 11:52:52 -05:00
Bridget Kromhout
c6e477e6ab Clarify versions validated 2018-04-10 11:35:28 -05:00
Jerome Petazzoni
a54287a6bb Setup chapters appropriately 2018-04-10 09:13:25 -05:00
Jerome Petazzoni
e1fe41b7d7 Merge branch 'master' into avril2018 2018-04-10 08:41:34 -05:00
Jerome Petazzoni
18a81120bc Add helper script to gauge chapter weights 2018-04-10 08:41:23 -05:00
Jerome Petazzoni
17cd67f4d0 Breakdown container internals chapter 2018-04-10 08:41:05 -05:00
Jerome Petazzoni
817e3f9217 Fix @jgarrouste's Twitter link 2018-04-10 06:31:42 -05:00
Jerome Petazzoni
bb94c6fe76 Cards for Paris 2018-04-10 06:31:13 -05:00
Jerome Petazzoni
fd05530fff Merge branch 'more-info-on-labels-and-rollouts' into avril2018 2018-04-10 06:05:33 -05:00
Jerome Petazzoni
38a40d56a0 Label use-cases and rollouts
This adds a few realistic examples of label usage.
It also adds explanations about why deploying a new
version of the worker doesn't seem to be effective
immediately (the worker doesn't handle signals).
2018-04-10 06:04:17 -05:00
Jerome Petazzoni
86f2395b2c Merge branch 'master' into avril2018 2018-04-10 05:31:47 -05:00
Jerome Petazzoni
96fd2e26fd Minor fixes for autopilot 2018-04-10 05:30:42 -05:00
Jerome Petazzoni
60f68351c6 Add demos by @jgarrouste 2018-04-10 04:45:41 -05:00
Jerome Petazzoni
035d015a61 Merge branch 'master' into avril2018 2018-04-10 04:25:22 -05:00
Jerome Petazzoni
581bbc847d Add demo logo for k8s demo 2018-04-10 04:25:08 -05:00
Jerome Petazzoni
83efd145b8 Merge branch 'master' into avril2018 2018-04-09 17:07:02 -05:00
Jerome Petazzoni
da7cbc41d2 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 17:06:35 -05:00
Jerome Petazzoni
282e22acb9 Improve chapters about container deep dive 2018-04-09 17:06:29 -05:00
Jerome Petazzoni
c6c1a942e7 Update WiFi password and schedule 2018-04-09 15:44:32 -05:00
Jerome Petazzoni
59f5ff7788 Customize outline and title 2018-04-09 15:32:52 -05:00
Jerome Petazzoni
1fbf7b7dbd herp derp symlinks and stuff 2018-04-09 15:32:41 -05:00
Jerome Petazzoni
249947b0dd Setup links to slide decks 2018-04-09 15:26:47 -05:00
Jérôme Petazzoni
9374eebdf6 Merge pull request #180 from bridgetkromhout/links-before-thanks
Moving links before thanks
2018-04-09 13:23:32 -07:00
Jerome Petazzoni
e9af03e976 On a second thought, let's have relative links 2018-04-09 15:22:12 -05:00
Jerome Petazzoni
ab583e2670 Custom index for avril2018.container.training 2018-04-09 15:21:35 -05:00
Bridget Kromhout
dcd5c5b39a Moving links before thanks 2018-04-09 14:58:56 -05:00
Jérôme Petazzoni
974f8ee244 Merge pull request #179 from bridgetkromhout/mosh-tmux
Clarifications for tmux and mosh
2018-04-09 12:55:03 -07:00
Bridget Kromhout
8212aa378a Merge pull request #1 from jpetazzo/ode-to-mosh-and-tmux
Add even more info about mosh and tmux
2018-04-09 14:54:16 -05:00
Jerome Petazzoni
403d4c6408 Add even more info about mosh and tmux 2018-04-09 14:52:21 -05:00
Jerome Petazzoni
142681fa27 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 14:19:45 -05:00
Jerome Petazzoni
69c9141817 Enable new content in self-paced kube workshop 2018-04-09 14:19:27 -05:00
Bridget Kromhout
9ed88e7608 Clarifications for tmux and mosh 2018-04-09 14:19:16 -05:00
Jérôme Petazzoni
b216f4d90b Merge pull request #178 from bridgetkromhout/clarify-live
Formatting fixes
2018-04-09 12:13:07 -07:00
Bridget Kromhout
26ee07d8ba Format fix 2018-04-09 13:20:23 -05:00
Bridget Kromhout
a8e5b02fb4 Clarify live feedback 2018-04-09 13:18:25 -05:00
Jérôme Petazzoni
80a8912a53 Merge pull request #177 from jpetazzo/avril-2018
Avril 2018
2018-04-09 11:08:21 -07:00
Jérôme Petazzoni
1ba6797f25 Merge pull request #176 from bridgetkromhout/version-bump
Updating versions
2018-04-09 10:57:32 -07:00
Bridget Kromhout
11a2167dea Updating versions 2018-04-09 12:52:47 -05:00
Jérôme Petazzoni
af4eeb6e6b Merge pull request #175 from jpetazzo/helm-and-namespaces
Add two chapters: Helm and namespaces
2018-04-09 10:20:33 -07:00
Jérôme Petazzoni
ea6459e2bd Merge pull request #174 from jpetazzo/centralized-logging-with-efk
Add a chapter about centralized logging
2018-04-09 10:19:44 -07:00
Bridget Kromhout
2dfa5a9660 Update logs-centralized.md 2018-04-09 11:59:19 -05:00
Jerome Petazzoni
b86434fbd3 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 11:57:32 -05:00
Jerome Petazzoni
223525cc69 Add the new chapters
The new chapters are commented out right now.
But they're ready to be enabled whenever needed.
2018-04-09 11:57:16 -05:00
Bridget Kromhout
fd63c079c8 Update namespaces.md
typo fix
2018-04-09 11:44:45 -05:00
Jerome Petazzoni
ebe4511c57 Remove useless mkdir 2018-04-09 11:43:27 -05:00
Jérôme Petazzoni
e1a81ef8f3 Merge pull request #171 from jpetazzo/show-stern-to-view-logs
Show how to install and use Stern
2018-04-09 09:38:47 -07:00
Jerome Petazzoni
3382c83d6e Add link to Helm and say it's open source 2018-04-09 11:35:59 -05:00
Bridget Kromhout
a89430673f Update logs-cli.md
clarifications
2018-04-09 11:32:02 -05:00
Jerome Petazzoni
fcea6dbdb6 Clarify Stern installation comments 2018-04-09 11:29:19 -05:00
Bridget Kromhout
c744a7d168 Update helm.md
typo fixes
2018-04-09 11:27:34 -05:00
Bridget Kromhout
0256dc8640 Update logs-centralized.md
A few typo fixes
2018-04-09 11:22:43 -05:00
Jerome Petazzoni
41819794d7 Rename kube-halfday
We now have a full day of content. Rejoice.
2018-04-09 11:19:24 -05:00
Jerome Petazzoni
836903cb02 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 11:11:33 -05:00
Jerome Petazzoni
7f822d33b5 Clean up index.html
Comment out a bunch of older workshops (for which more recent
versions have been delivered since then). Update the links
to self-paced content.
2018-04-09 11:11:26 -05:00
Jérôme Petazzoni
232fdbb1ff Merge pull request #170 from jpetazzo/headless-services
Add headless services
2018-04-09 09:05:33 -07:00
Jerome Petazzoni
f3f6111622 Replace logistics.md with generic version
The current version of the logistics.md slide shows AJ and JP.
The new version is an obvious template, i.e. it says 'this slide
should be customized' and it uses imaginary personas instead.
2018-04-09 10:59:55 -05:00
Jerome Petazzoni
a8378e7e7f Clarify endpoints 2018-04-09 10:12:22 -05:00
Jerome Petazzoni
eb3165096f Add Logging section and manifests 2018-04-09 09:37:28 -05:00
Jerome Petazzoni
90ca58cda8 Add a few slides about network policies
This is a very high-level overview (we can't cover a lot within the current time constraints) but it gives a primer about network policies and a few links to explore further.
2018-04-09 08:27:31 -05:00
Jerome Petazzoni
5a81526387 Add two chapters: Helm and namespaces
In these chapters, we:
- show how to install Helm
- run the Helm tiller on our cluster
- use Helm to install Prometheus
- don't do anything fancy with
  Prometheus (it's just for the
  sake of installing something)
- create a basic Helm chart for
  DockerCoins
- explain namespace concepts
- show how to use contexts to hop
  between namespaces
- use Helm to deploy DockerCoins
  to a new namespace

These two chapters go together.
2018-04-09 07:57:27 -05:00
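A condensed sketch of the flow these chapters cover, using Helm 2 commands of that era; the namespace, cluster, and user names below are placeholders, not the chapters' exact values:

```
helm init                          # install the tiller on the cluster (Helm 2)
helm install stable/prometheus     # install something, just for the sake of it
kubectl create namespace dockercoins
# create a context bound to that namespace, then hop into it
# (cluster and user names depend on your kubeconfig)
kubectl config set-context dockercoins --namespace=dockercoins \
    --cluster=kubernetes --user=kubernetes-admin
kubectl config use-context dockercoins
```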
Jerome Petazzoni
8df073b8ac Add a chapter about centralized logging
Explain the purpose of centralized logging. Describe the
EFK stack. Deploy a simplified EFK stack through a YAML
file. Use it to view container logs. Profit.
2018-04-09 04:17:00 -05:00
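The deployment itself boils down to applying one manifest and reading logs back out; the file name here is illustrative, not the repository's actual path:

```
kubectl apply -f efk.yaml    # simplified Elasticsearch + Fluentd + Kibana stack
kubectl get pods             # wait for the EFK pods to come up
# logs then flow from each node's fluentd into Elasticsearch, viewable in Kibana
```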
Jérôme Petazzoni
0f7356b002 Merge pull request #167 from jgarrouste/avril-2018
Small changes
2018-04-09 00:26:13 -07:00
Jérôme Petazzoni
0c2166fb5f Merge pull request #172 from jpetazzo/clarify-daemonset-bonus-exercises
Clarify the bonus exercises
2018-04-09 00:24:26 -07:00
Jerome Petazzoni
d228222fa6 Reword headless services
Hopefully this explains better the use of headless services.
I also added a slide about endpoints, with a couple of simple
commands to show them.
2018-04-08 17:59:42 -05:00
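The "couple of simple commands" are plausibly along these lines (the service name is an example):

```
kubectl get endpoints        # one Endpoints object per service
kubectl describe endpoints kube-dns --namespace=kube-system
# for a headless service (clusterIP: None), DNS returns these
# endpoint IPs directly instead of a single cluster IP
```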
Bridget Kromhout
e4b7d3244e Merge pull request #173 from bridgetkromhout/muracon-past
MuraCon to past
2018-04-08 17:50:09 -05:00
Bridget Kromhout
7d0e841a73 MuraCon to past 2018-04-08 17:46:55 -05:00
Jerome Petazzoni
9859e441e1 Clarify the bonus exercises
We had two open-ended exercises (questions without
answers). We have added more explanations, as well
as solutions for the exercises. It lets us show a
few more tricks with selectors, and how to apply
changes to sets of resources.
2018-04-08 17:16:27 -05:00
Jerome Petazzoni
e1c638439f Bump versions
Bump up Compose and Machine to latest versions.
Bump down Engine to stable branch.

I'm pushing straight to master because YOLO^W^W
because @bridgetkromhout is using the kube101.yaml
file anyway, so this shouldn't break her things.

(Famous last words...)
2018-04-08 16:34:48 -05:00
Jérôme Petazzoni
253aaaad97 Merge pull request #169 from jpetazzo/what-is-cni
Add slide about CNI
2018-04-08 14:32:17 -07:00
Jérôme Petazzoni
a249ccc12b Merge pull request #168 from jpetazzo/clarify-control-plane
Clarify control plane
2018-04-08 14:29:50 -07:00
Jerome Petazzoni
22fb898267 Show how to install and use Stern
Stern is super cool to stream the logs of multiple
containers.
2018-04-08 16:26:08 -05:00
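Roughly what the new slides demonstrate, assuming a 2018-era binary install (release URL and version pinned for illustration) and a DockerCoins pod name:

```
# grab a release binary and make it executable
sudo curl -L -o /usr/local/bin/stern \
    https://github.com/wercker/stern/releases/download/1.6.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
stern rng    # stream logs of every pod whose name matches "rng"
```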
Bridget Kromhout
e038797875 Update concepts-k8s.md
A few suggested clarifications to your (excellent) clarifications
2018-04-08 15:16:42 -05:00
Jerome Petazzoni
7b9f9e23c0 Add headless services 2018-04-08 11:10:07 -05:00
Jerome Petazzoni
01d062a68f Add slide about CNI 2018-04-08 10:31:17 -05:00
Jerome Petazzoni
a66dfb5faf Clarify control plane
Explain better that the control plane can run outside
of the cluster, and that the word master can be
confusing (does it designate the control plane, or
the node running the control plane? What if there is
no node running the control plane, because the control
plane is external?)
2018-04-08 09:57:51 -05:00
Jerome Petazzoni
ac1480680a Add ecosystem chapter 2018-04-08 08:40:20 -05:00
Jerome Petazzoni
13a9b5ca00 What IS docker?
Explain what the engine is
2018-04-08 07:21:47 -05:00
Jérémy GARROUSTE
0cdf6abf0b Add .center for some images 2018-04-07 20:16:29 +02:00
Jérémy GARROUSTE
2071694983 Add .small[] 2018-04-07 20:16:13 +02:00
Jérôme Petazzoni
12e2b18a6f Merge pull request #166 from jgarrouste/avril-2018
Update the output of docker version and docker build command
2018-04-07 09:30:11 -07:00
Jerome Petazzoni
28e128756d How to pass container config 2018-04-07 11:28:42 -05:00
Jerome Petazzoni
a15109a12c Add chapter about labels 2018-04-07 09:57:35 -05:00
Jerome Petazzoni
e500fb57e8 Add --mount syntax 2018-04-07 09:37:27 -05:00
Jerome Petazzoni
f1849092eb add chapter on Docker Machine 2018-04-07 07:33:28 -05:00
Jerome Petazzoni
f1dbd7e8a6 Copy on write 2018-04-06 09:27:29 -05:00
Jerome Petazzoni
d417f454dd Finalize section on namespaces and cgroups 2018-04-06 09:27:20 -05:00
Jérémy GARROUSTE
d79718d834 Update docker build output 2018-04-06 11:20:09 +02:00
Jérémy GARROUSTE
de9c3a1550 Update docker version output 2018-04-06 10:04:41 +02:00
Jerome Petazzoni
90fc7a4ed3 Merge branch 'avril-2018' of github.com:jpetazzo/container.training into avril-2018 2018-04-05 17:58:55 -05:00
Jerome Petazzoni
09edbc24bc Container deep dive: namespaces, cgroups, etc. 2018-04-05 17:58:43 -05:00
Jérémy GARROUSTE
92f8701c37 Update output of docker build 2018-04-06 00:00:27 +02:00
Jérôme Petazzoni
c828888770 Merge pull request #165 from jgarrouste/avril-2018
Update output of 'docker build'
2018-04-05 14:57:05 -07:00
Jérémy GARROUSTE
bb7728e7e7 Update docker build output 2018-04-05 23:52:37 +02:00
Jerome Petazzoni
5f544f9c78 Add container engines chapter; orchestration overview chapter 2018-04-04 17:09:21 -05:00
Jerome Petazzoni
5b6a7d1995 Update my email address 2018-04-02 18:52:48 -05:00
Jerome Petazzoni
b21185dde7 Introduce EXPOSE 2018-04-02 00:10:45 -05:00
Jerome Petazzoni
deaee0dc82 Explain why use Docker Inc's repos 2018-04-01 23:58:10 -05:00
Jerome Petazzoni
4206346496 MacOS -> macOS 2018-04-01 23:52:38 -05:00
Jerome Petazzoni
6658b632b3 Add reason why we use VMs 2018-04-01 23:49:08 -05:00
Jerome Petazzoni
d9be7160ef Move 'extra details' explanation slide to common deck 2018-04-01 23:34:19 -05:00
Jérôme Petazzoni
d56424a287 Merge pull request #164 from bridgetkromhout/adding-k8s-101
Adding more k8s 101 dates
2018-03-29 16:02:31 -07:00
Bridget Kromhout
2d397c5cb8 Adding more k8s 101 dates 2018-03-29 09:39:20 -07:00
Jérôme Petazzoni
08004caa5d Merge pull request #163 from BretFisher/bret-dates-2018q2
adding more dates
2018-03-28 10:26:07 -07:00
Frank Farmer
522358a004 Small typo 2018-03-28 12:23:47 -05:00
Jérôme Petazzoni
e00a6c36e3 Merge pull request #157 from bridgetkromhout/increase-ulimit
Increase allowed open files
2018-03-28 10:07:11 -07:00
Jérôme Petazzoni
4664497cbc Merge pull request #156 from bridgetkromhout/symlinks-on-rerun
Symlink and directory fixes for multiple runs
2018-03-28 10:06:39 -07:00
Bret Fisher
6be424bde5 adding more dates 2018-03-28 03:27:18 -04:00
Bridget Kromhout
0903438242 Increase allowed open files 2018-03-27 09:36:04 -07:00
Bridget Kromhout
b874b68e57 Symlink fixes for multiple runs 2018-03-27 09:25:48 -07:00
Bridget Kromhout
6af9385c5f Merge pull request #155 from bridgetkromhout/update-index
Updating index
2018-03-27 11:08:14 -05:00
Bridget Kromhout
29398ac33b Updating index 2018-03-27 09:06:03 -07:00
Jérôme Petazzoni
7525739b24 Merge pull request #151 from sadiqkhoja/patch-1
corrected number of containers
2018-03-27 05:50:50 -07:00
Bridget Kromhout
50ff71f3f3 Merge pull request #152 from bridgetkromhout/current-versions
Updating versions
2018-03-27 05:04:14 -05:00
Bridget Kromhout
70a9215c9d Updating versions 2018-03-27 03:02:17 -07:00
Sadiq Khoja
9c1a5d9a7d corrected number of containers 2018-03-17 14:39:05 +05:00
Jérôme Petazzoni
9a9b4a6892 Merge pull request #150 from inful/patch-3
Fix: Kubicorn URL
2018-03-14 11:03:28 -07:00
Jone Marius Vignes
e5502c724e Fix: Kubicorn URL
Kubicorn has moved permanently to https://github.com/kubicorn/kubicorn
2018-03-14 14:50:56 +01:00
Jérôme Petazzoni
125878e280 Merge pull request #147 from bridgetkromhout/clarify-socat-port
Clarifying how to find the port needed.
2018-03-13 13:27:51 -07:00
Bridget Kromhout
b4c1498ca1 Clarifying instructions 2018-03-13 20:55:11 +01:00
Bridget Kromhout
88d534a7f2 Clarifying how to find the port needed. 2018-03-13 20:36:19 +01:00
Jérôme Petazzoni
6ce4ed0937 Merge pull request #146 from bridgetkromhout/version-update
Versions updated
2018-03-13 11:53:51 -07:00
Bridget Kromhout
1b9ba62dc8 Versions updated 2018-03-13 19:27:42 +01:00
Jérôme Petazzoni
f3639e6200 Merge pull request #145 from bridgetkromhout/increase-timeout
Increasing timeout for slow mirrors
2018-03-12 13:49:24 -07:00
Bridget Kromhout
1fe56cf401 Increasing timeout for slow mirrors 2018-03-12 21:41:47 +01:00
Jerome Petazzoni
a3add3d816 Get inside a container (live and post mortem) 2018-03-12 11:57:34 -05:00
Jérôme Petazzoni
2807de2123 Merge pull request #144 from wlonkly/patch-2
Remove duplicate line
2018-03-10 14:55:39 -08:00
Jérôme Petazzoni
5029b956d2 Merge pull request #143 from wlonkly/patch-1
Fix typo: compiler -> container
2018-03-10 14:53:50 -08:00
Rich Lafferty
815aaefad9 Remove duplicate line 2018-03-10 15:43:40 -05:00
Rich Lafferty
7ea740f647 Fix typo: compiler -> container 2018-03-10 15:09:32 -05:00
Jerome Petazzoni
eaf25e5b36 Improve kubetest error reporting
The kubetest command used to say [SUCCESS] on completely
fresh nodes. Now we check the existence of the /tmp/node
file, as well as of the kubectl executable.
2018-03-07 16:17:57 -08:00
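A minimal sketch of the added check (the real logic lives in the workshopctl scripts):

```
# a node only counts as provisioned if postprep ran (/tmp/node exists)
# and kubectl is actually installed
if [ -f /tmp/node ] && command -v kubectl >/dev/null 2>&1; then
    echo "[SUCCESS]"
else
    echo "[FAILED] node is not fully provisioned yet"
fi
```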
Jerome Petazzoni
3b336a9127 Merge branch 'bridgetkromhout-attribute-authorship' 2018-03-07 15:47:39 -08:00
Jerome Petazzoni
cc4d1fd1c7 Slight rewording 2018-03-07 15:47:38 -08:00
Jerome Petazzoni
17ec6441a0 Merge branch 'attribute-authorship' of git://github.com/bridgetkromhout/container.training into bridgetkromhout-attribute-authorship 2018-03-07 15:42:49 -08:00
Jerome Petazzoni
a1b107cecb Add Paris sessions 2018-03-07 15:39:23 -08:00
Jérôme Petazzoni
2e06bc2352 Merge pull request #140 from atsaloli/patch-2
Fix tiny typo (missing "o" in "outbound")
2018-03-06 09:54:38 -08:00
Aleksey Tsalolikhin
af0a239bd9 Fix tiny typo (missing "o" in "outbound") 2018-03-06 09:22:42 -08:00
Bridget Kromhout
92939ca3f2 Merge pull request #138 from jpetazzo/lets-tag-things-properly
Tag images properly
2018-03-05 19:55:09 -06:00
Jerome Petazzoni
aca51901a1 Tag images properly
This tags the first build with v0.1, allowing for a smoother, more
logical rollback. Also adds a slide explaining why to stay away
from latest. @kelseyhightower would be proud :-)
2018-03-05 16:13:30 -08:00
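The gist, using the worker component as an example (deployment and container names are assumptions):

```
# build and push an explicit v0.1 tag instead of relying on :latest
docker build -t dockercoins/worker:v0.1 worker/
docker push dockercoins/worker:v0.1
# a later rollback is then an explicit, auditable image change:
kubectl set image deployment/worker worker=dockercoins/worker:v0.1
```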
Jérôme Petazzoni
8d15dba26d Merge pull request #137 from bridgetkromhout/checklist-edits
Clarifications and links for checklist
2018-03-05 16:13:06 -08:00
Bridget Kromhout
cdca5655fc Clarifications and links for checklist 2018-03-05 17:08:06 -06:00
Jerome Petazzoni
c778fc84ed Add a dump of the checklist I use when delivering 2018-03-05 14:30:39 -08:00
Bridget Kromhout
7f72ee1296 Credit to multiple contributors 2018-03-05 15:53:42 -06:00
Jérôme Petazzoni
1981ac0b93 Merge pull request #135 from bridgetkromhout/bridget-specific
Adding Bridget-specific files
2018-03-05 13:36:06 -08:00
Jérôme Petazzoni
a8f2fb4586 Merge pull request #134 from bridgetkromhout/dedup-thanks
De-dup thanks; add comma
2018-03-05 13:35:45 -08:00
Jérôme Petazzoni
a69d3d0828 Merge pull request #133 from bridgetkromhout/no-chatroom
Makes more sense for "in person" chat
2018-03-05 13:32:51 -08:00
Jérôme Petazzoni
40760f9e98 Merge pull request #131 from bridgetkromhout/change-instance-type
Changing Azure instance type
2018-03-05 13:25:49 -08:00
Bridget Kromhout
b64b16dd67 Adding Bridget-specific files 2018-03-05 14:54:28 -06:00
Bridget Kromhout
8c2c9bc5df De-dup thanks; add comma 2018-03-05 14:51:26 -06:00
Bridget Kromhout
3a21cbc72b Makes more sense for "in person" chat 2018-03-05 14:37:10 -06:00
Bridget Kromhout
5438fca35a Attribute authorship 2018-03-05 14:34:41 -06:00
Bridget Kromhout
a09521ceb1 Changing Azure instance type 2018-03-05 13:44:02 -06:00
Jérôme Petazzoni
0d6501a926 Merge pull request #130 from atsaloli/patch-1
Two small fixes
2018-03-05 10:10:25 -08:00
Aleksey Tsalolikhin
c25f7a119b Fix very small typo -- remove extra "v" in "code" 2018-03-04 19:58:27 -08:00
Aleksey Tsalolikhin
1958c85a96 Fix noun plural tense (change "instructions" -> "instruction")
"An" means one. So "an instruction" rather than "an instructions".  (Small grammar fix.)
2018-03-04 19:56:03 -08:00
Jérôme Petazzoni
a7ba4418c6 Merge pull request #129 from bridgetkromhout/improve-directions
Improve directions
2018-03-03 19:52:15 -06:00
Bridget Kromhout
d6fcbb85e8 Improve directions 2018-03-03 18:44:56 -06:00
Jérôme Petazzoni
278fbf285a Merge pull request #128 from bridgetkromhout/cleanup
Cleanup
2018-03-03 14:39:56 -06:00
Bridget Kromhout
ca828343e4 Remove azure instances post-workshop. 2018-03-03 08:51:54 -06:00
Bridget Kromhout
5c663f9e09 Updating help output 2018-03-03 08:48:02 -06:00
Bridget Kromhout
9debd76816 Document kubetest 2018-03-03 08:44:58 -06:00
Bridget Kromhout
848679829d Removed -i and trailing space 2018-03-02 18:18:04 -06:00
Bridget Kromhout
6727007754 Missing variable 2018-03-02 18:11:32 -06:00
Jerome Petazzoni
03a563c172 Merge branch 'master' of github.com:jpetazzo/container.training 2018-03-02 14:17:54 -06:00
Jerome Petazzoni
cfbd54bebf Add hacky-backslashy kubetest command 2018-03-02 14:17:37 -06:00
Jérôme Petazzoni
7f1e9db0fa Missing curly brace 2018-03-02 13:08:48 -06:00
Jérôme Petazzoni
1367a30a11 Merge pull request #126 from bridgetkromhout/add-azure
Adding Azure examples
2018-03-02 12:46:02 -06:00
Bridget Kromhout
31b234ee3a Adding Azure examples 2018-03-02 12:42:55 -06:00
Jérôme Petazzoni
57dd5e295e Merge pull request #125 from bridgetkromhout/increase-timeouts
Increase timeouts
2018-03-01 17:43:29 -06:00
Bridget Kromhout
c188923f1a Increase timeouts 2018-03-01 17:39:51 -06:00
Jérôme Petazzoni
7a8716d38b Merge pull request #124 from bridgetkromhout/postprep
Postprep is now python
2018-03-01 17:17:04 -06:00
Bridget Kromhout
2e77c13297 Postprep is now python 2018-03-01 17:15:01 -06:00
Jerome Petazzoni
d5279d881d Add info about pre-built images 2018-03-01 15:13:39 -06:00
Jerome Petazzoni
34e9cc1944 Don't assume 5 nodes 2018-03-01 14:55:02 -06:00
Jerome Petazzoni
2a7498e30e A bit of rewording, and a couple of links about dashboard security 2018-03-01 14:51:00 -06:00
Jerome Petazzoni
4689d09e1f One typo and two minor tweaks 2018-03-01 14:18:48 -06:00
Jerome Petazzoni
b818a38307 Correctly report errors happening in functions
`trap ... ERR` does not automatically propagate to functions. Therefore,
our fancy error-reporting mechanism did not catch errors happening in
functions; and we do most of the actual work in functions. The solution
is to `set -E` or `set -o errtrace`.
2018-03-01 13:56:08 -06:00
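A self-contained illustration of the fix:

```
#!/bin/bash
set -E    # same as: set -o errtrace; makes the ERR trap inherited by functions
trap 'echo "error near line $LINENO" >&2' ERR
work() {
    false    # without set -E, the ERR trap would not fire here
}
work
```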
Jérôme Petazzoni
7e5d869472 Merge pull request #123 from bridgetkromhout/kube101
Kube101 & non-AWS
2018-03-01 13:23:04 -06:00
Jérôme Petazzoni
3eaf31fd48 Merge pull request #122 from bridgetkromhout/pssh-clarity
Pssh clarity
2018-03-01 13:21:05 -06:00
Bridget Kromhout
fe5e22f5ae How to set up non-AWS workshops 2018-02-28 21:45:36 -06:00
Bridget Kromhout
61da583080 Don't overwrite ip file if exists 2018-02-28 21:44:58 -06:00
Bridget Kromhout
94dfe1a0cd Adding sample file mentioned in README 2018-02-28 21:44:29 -06:00
Bridget Kromhout
412dbadafd Adding settings for kube101 2018-02-28 21:43:41 -06:00
Bridget Kromhout
8c5e4e0b09 Require pssh 2018-02-28 21:28:20 -06:00
Bridget Kromhout
2ac6072d80 Invoke as pssh 2018-02-28 21:26:17 -06:00
Jerome Petazzoni
ef4591c4fc Allow to override instance type (closes #39) 2018-02-28 13:45:08 -06:00
Jerome Petazzoni
22dfbab09b Minor formatting 2018-02-28 13:41:22 -06:00
Jérôme Petazzoni
37f595c480 Merge pull request #120 from bridgetkromhout/clarify-kube-public
Clarify kube-public; define kube-system
2018-02-27 17:42:11 -06:00
Bridget Kromhout
1fc951037d Slight clarification per request 2018-02-27 17:39:52 -06:00
Jérôme Petazzoni
affd46dd88 Merge pull request #121 from bridgetkromhout/obviate-https
Remove need for https in the workshop dashboard
2018-02-27 17:34:27 -06:00
Bridget Kromhout
cfaff3df04 Remove need for https in the workshop dashboard 2018-02-27 17:31:14 -06:00
Jérôme Petazzoni
ce2451971d Merge pull request #118 from bridgetkromhout/twice-the-steps
Proper attribution
2018-02-27 16:57:52 -06:00
Jérôme Petazzoni
8cf5d0efbd Merge pull request #119 from bridgetkromhout/naming-things
Naming things is hard; considering scope here
2018-02-27 16:40:40 -06:00
Bridget Kromhout
f61d61223d Clarify kube-public; define kube-system 2018-02-27 16:31:36 -06:00
Bridget Kromhout
6b6eb50f9a Naming things is hard; considering scope here 2018-02-27 15:26:43 -06:00
Jerome Petazzoni
89ab66335f ... and trim down kube half-day 2018-02-27 14:49:39 -06:00
Jerome Petazzoni
5bc4e95515 Clarify service discovery 2018-02-27 14:45:08 -06:00
Jerome Petazzoni
893f05e401 Move docker-compose logs to the composescale.md chapter 2018-02-27 14:38:41 -06:00
Bridget Kromhout
4abc8ce34c Proper attribution 2018-02-27 14:38:32 -06:00
Jérôme Petazzoni
34d2c610bf Merge pull request #117 from bridgetkromhout/self-deprecating-humor
Attributing humor so it doesn't sound negative
2018-02-27 14:06:58 -06:00
Jerome Petazzoni
1492a8a0bc Rephrase daemon set intro to fit even without the entropy spiel 2018-02-27 13:53:34 -06:00
Bridget Kromhout
388d616048 Attributing humor so it doesn't sound negative 2018-02-27 13:46:19 -06:00
Jerome Petazzoni
28589f5a83 Remove cluster-size specific reference 2018-02-27 13:40:52 -06:00
Jerome Petazzoni
e7a80f7bfb Merge branch 'master' of github.com:jpetazzo/container.training 2018-02-27 13:39:55 -06:00
Jerome Petazzoni
ea47e0ac05 Add link to brigade 2018-02-27 13:39:50 -06:00
Jérôme Petazzoni
09d204038f Merge pull request #116 from bridgetkromhout/versions-installed
Clarify that these are the installed versions
2018-02-27 13:36:40 -06:00
Jérôme Petazzoni
47cb0afac2 Merge pull request #115 from bridgetkromhout/any-cloud
More cloud-provider generic
2018-02-27 13:36:10 -06:00
Jerome Petazzoni
8e2e7f44d3 Break out 'scale things on a single node' section 2018-02-27 13:35:03 -06:00
Bridget Kromhout
8c7702deda Clarify that these are the installed versions
* "Brand new" is a moving target
2018-02-27 13:29:40 -06:00
Bridget Kromhout
bdc1ca01cd More cloud-provider generic 2018-02-27 13:27:11 -06:00
Jerome Petazzoni
dca58d6663 Merge Lucas awesome diagram 2018-02-27 12:22:02 -06:00
Jerome Petazzoni
a0cf4b97c0 Add Lucas' amazing diagram 2018-02-27 12:17:10 -06:00
Jerome Petazzoni
a1c239260f Add Lucas' amazing diagram 2018-02-27 12:17:02 -06:00
Jerome Petazzoni
a8a2cf54a5 Factor out links in separate files 2018-02-27 12:01:53 -06:00
Jerome Petazzoni
d5ba80da55 Replace 'five VMs' with 'a cluster of VMs' 2018-02-27 11:53:01 -06:00
Jerome Petazzoni
3f2da04763 CSS is hard but it's not an excuse 2018-02-27 09:44:32 -06:00
Jerome Petazzoni
e092f50645 Branch out intro/intro.md into per-workshop variants 2018-02-27 09:40:54 -06:00
Jérôme Petazzoni
7f698bd690 Merge pull request #114 from bridgetkromhout/master
Adding upcoming events
2018-02-27 09:28:27 -06:00
Bridget Kromhout
7fe04b9944 Adding upcoming events 2018-02-27 09:26:03 -06:00
Jerome Petazzoni
2671714df3 Move indexconf2018 to past workshops section 2018-02-27 09:11:09 -06:00
Jerome Petazzoni
630e275d99 Merge branch 'bridgetkromhout-master-updates' 2018-02-26 17:52:14 -06:00
Jerome Petazzoni
614f10432e Mostly reformatting so that slides are nice and tidy 2018-02-26 17:52:06 -06:00
Bridget Kromhout
223b5e152b Version updates 2018-02-26 16:56:45 -06:00
Bridget Kromhout
ec55cd2465 Including ACR as one of the cloud k8s offerings 2018-02-26 16:55:56 -06:00
Bridget Kromhout
c59510f921 Updates & clarifications 2018-02-26 16:54:41 -06:00
Bridget Kromhout
0f5f481213 Typo fix 2018-02-26 16:52:23 -06:00
Bridget Kromhout
b40fa45fd3 Clarifications 2018-02-26 16:50:31 -06:00
Bridget Kromhout
8faaf35da0 Clarify we didn't tag the v1 release 2018-02-26 16:48:52 -06:00
Bridget Kromhout
ce0f79af16 Updates & links for all cloud-provided k8s 2018-02-26 16:46:49 -06:00
Bridget Kromhout
faa420f9fd Clarify language and explain https use 2018-02-26 16:41:21 -06:00
Jerome Petazzoni
aab519177d Add indexconf2018 to index 2018-02-19 15:55:22 -08:00
Jerome Petazzoni
5116ad7c44 Use kubeadm token generate to simplify things a bit.
Thanks @rmb938 for the suggestion!

Closes #110.
2018-01-18 21:14:40 +01:00
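The simplification, in sketch form; the API server address and the skip-verification flag reflect kubeadm 1.8/1.9-era usage and are assumptions here:

```
TOKEN=$(kubeadm token generate)    # produces a well-formed token up front
# on the first node:
sudo kubeadm init --token "$TOKEN"
# on every other node:
sudo kubeadm join --token "$TOKEN" node1:6443 \
    --discovery-token-unsafe-skip-ca-verification
```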
Jerome Petazzoni
7305e911e5 Update for k8s 1.9 2018-01-10 17:12:49 +01:00
Jerome Petazzoni
b2f670acf6 Add error checking for AMI finder script 2018-01-10 16:48:04 +01:00
Jerome Petazzoni
dc040aa693 Make sleep interruptible; fix slide count 2017-12-23 19:38:32 +01:00
Jerome Petazzoni
9b7a8494b0 Fix logic to advance to next snippet 2017-12-23 18:49:42 +01:00
Jerome Petazzoni
ae6c1bb8eb Major UI refactor
Navigation now includes all slides and all snippets.
ENTER skips to the next snippet, or executes the
selected snippet.

More improvements to come: allow SPACE to navigate
step by step through slides and snippets, executing
the snippets.
2017-12-23 09:31:01 +01:00
Jerome Petazzoni
a9a4f0ea07 Create only one remote session 2017-12-23 05:06:30 +01:00
Jerome Petazzoni
68af5940e3 Script node3 setup as well 2017-12-23 05:00:22 +01:00
Jerome Petazzoni
9df5313da4 Remove spurious output from desktop integration 2017-12-22 23:15:21 +01:00
Jerome Petazzoni
ba3f00e64e Clear screen before showing UI 2017-12-22 23:11:11 +01:00
Jerome Petazzoni
4d7a6d5c70 Stupid typo 2017-12-22 23:03:48 +01:00
Jerome Petazzoni
aef833c3f5 Add pause before switching away from browser 2017-12-22 23:02:47 +01:00
Jerome Petazzoni
6f58fee29b Automatically open links in intro section 2017-12-22 22:54:08 +01:00
Jerome Petazzoni
dda09ddbcb slightly edit tmux commands 2017-12-22 22:49:53 +01:00
Jerome Petazzoni
8b13fe6eb4 fix formatting in PWD reference 2017-12-22 22:48:42 +01:00
Jerome Petazzoni
21f345a96a Improve open command 2017-12-22 22:42:41 +01:00
Jerome Petazzoni
eaa4dc63bf Instruct to use PWD in self-paced mode 2017-12-22 22:40:23 +01:00
Jerome Petazzoni
af5ea2188b More typos 2017-12-22 22:26:08 +01:00
Jerome Petazzoni
7f23a4c964 Fix minor typos 2017-12-22 22:24:37 +01:00
Jerome Petazzoni
345e04c956 Improve tmux detection logic and add instructions 2017-12-21 22:24:27 +01:00
Jerome Petazzoni
2a138102fc Add client address in pub/sub server 2017-12-21 05:53:24 +01:00
Jerome Petazzoni
ef5e8f00f8 Add script to remind myself how to customize the tmux status bar 2017-12-21 05:46:57 +01:00
Jerome Petazzoni
badb73a413 Slower pace for virtual typist 2017-12-21 05:45:38 +01:00
Jerome Petazzoni
2aced95c86 Improve UX for remote control 2017-12-21 05:40:35 +01:00
Jerome Petazzoni
720989e829 Add remote control of slide deck 2017-12-21 05:29:59 +01:00
Jerome Petazzoni
718031565e Exit gracefully if server is not running instead of waiting forever 2017-12-21 05:29:45 +01:00
Jerome Petazzoni
ec7b46b779 Add remote.js to workshop template and pub/sub server 2017-12-21 04:51:49 +01:00
Jerome Petazzoni
270c36b29a Add pub/sub server and CLI remote 2017-12-21 04:43:42 +01:00
Jerome Petazzoni
bc2eb53bb2 Python 3 compatibility 2017-12-21 04:33:35 +01:00
Jerome Petazzoni
afe7b8523c Move autotest to autopilot/ directory 2017-12-21 04:32:16 +01:00
Jérôme Petazzoni
a7743a4314 Update Engine version 2017-12-20 18:05:52 -06:00
Jérôme Petazzoni
ba74fdc841 Round of update for video content 2017-12-20 00:17:49 -06:00
Jérôme Petazzoni
41c047e12a Always start in interactive mode 2017-12-20 00:17:40 -06:00
Jérôme Petazzoni
f4fc055405 Add manifest for video content 2017-12-20 00:17:25 -06:00
Jérôme Petazzoni
2eb6fcfbf5 Add command to backtrack 1 slide 2017-12-18 18:46:24 -06:00
Jérôme Petazzoni
c665e1a2d6 httping only 3 requests is enough 2017-12-18 18:45:40 -06:00
Jérôme Petazzoni
bb7cdafe47 Comment out machine chapter 2017-12-18 18:39:33 -06:00
Jérôme Petazzoni
95fcfadb17 state.yml -> state.yaml to avoid collision with manifests 2017-12-18 18:39:17 -06:00
Jérôme Petazzoni
1ef47531c8 autotest: save all parameters in state.yml 2017-12-18 18:36:46 -06:00
Jérôme Petazzoni
9589b641b6 Another fix in CNC script 2017-12-18 18:35:57 -06:00
Jérôme Petazzoni
63463bda64 Merge branch 'master' of github.com:jpetazzo/container.training 2017-12-18 17:46:11 -06:00
Jérôme Petazzoni
b642412639 Clarify listen-addr and advertise-addr
Closes #108
2017-12-18 17:46:06 -06:00
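The distinction being clarified, in one example (the addresses are placeholders):

```
# --advertise-addr: the address other nodes will use to reach this manager
# --listen-addr:    the address and port this node binds for swarm traffic
docker swarm init --advertise-addr 10.0.0.1 --listen-addr 0.0.0.0:2377
```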
Jérôme Petazzoni
21f9b73cb4 Update CNC deployment script + workshopctl deps 2017-12-18 17:45:29 -06:00
Jérôme Petazzoni
b73e5432f3 Merge pull request #109 from juliogomez/patch-6
curl is not installed in this step
2017-12-18 17:20:52 -06:00
Jérôme Petazzoni
de5cc9b0bf Merge branch 'master' of github.com:jpetazzo/container.training 2017-12-18 15:25:48 -06:00
Jérôme Petazzoni
08b38127d3 Clarify license for slides since they're not code 2017-12-18 15:25:41 -06:00
Julio
383804b7f1 curl is not installed in that step
curl was actually installed in a previous step, not here
2017-12-16 23:05:29 +01:00
Jérôme Petazzoni
20bf80910e Merge pull request #107 from juliogomez/patch-3
Fixing number of replicas per node
2017-12-16 09:26:18 -06:00
Jérôme Petazzoni
29a2014745 Merge pull request #106 from juliogomez/patch-2
Typo correction in detach mode?
2017-12-16 09:24:36 -06:00
Julio
40f6ee236f Fixing number of replicas per node
If 3 copies per node are 15, 4 copies per node should be 20.
2017-12-15 20:46:25 +01:00
Julio
5551cbd11f Typo correction in detach mode?
While you wrote:
`--detach=false` does not complete *faster*. It just *doesn't wait* for completion.
I think you actually meant:
`--detach=TRUE` does not complete *faster*. It just *doesn't wait* for completion.
2017-12-15 20:25:29 +01:00
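For the record, the flag's behavior on either value (service name and image are examples):

```
# waits until the update converges:
docker service update --detach=false --image dockercoins/worker:v0.2 worker
# returns immediately; the update proceeds in the background:
docker service update --detach=true --image dockercoins/worker:v0.2 worker
```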
Jérôme Petazzoni
9e84a05325 Merge pull request #105 from juliogomez/patch-1
Fixed missing image name (tomcat) in 'docker run' command
2017-12-14 15:33:24 -06:00
Julio
558e990907 Fixed missing image name in command 2017-12-14 22:15:29 +01:00
Jérôme Petazzoni
c2e88bb343 add qcon video 2017-12-07 14:28:46 -06:00
Jérôme Petazzoni
b7582397fe Add quote by not-Benjamin-Franklin 2017-12-04 18:19:04 -06:00
Jérôme Petazzoni
3e7b8615ab Move kube workshop to archives section 2017-12-04 13:57:17 -06:00
Jérôme Petazzoni
6f5d8c5372 Merge pull request #104 from gurayyildirim/patch-1
Kubernetes.io link fixed.
2017-11-24 11:31:07 -06:00
gurayyildirim
c116d75408 Kubernetes.io link fixed.
Kubernetes.io link had a wrong ']' mark which was causing a 404 from Kubernetes.io blog.
2017-11-24 02:45:49 +03:00
Jérôme Petazzoni
bb4ee4e77d Add helper script to setup CNC node 2017-11-20 17:04:38 -08:00
Jérôme Petazzoni
fc0e46988c Fix hint for ssh agent 2017-11-20 16:37:17 -08:00
Jérôme Petazzoni
c71b93c3a7 Add files to generate a CSV file with nodes 2017-11-20 16:35:25 -08:00
Jérôme Petazzoni
2c6b79c17d Add kube image to cards.html 2017-11-20 16:33:08 -08:00
135 changed files with 10221 additions and 1539 deletions

.gitignore (3 changes)

@@ -7,4 +7,5 @@ prepare-vms/ips.pdf
prepare-vms/settings.yaml
prepare-vms/tags
slides/*.yml.html
slides/nextstep
slides/autopilot/state.yaml
node_modules

CHECKLIST.md (new file, 24 lines)

@@ -0,0 +1,24 @@
Checklist to use when delivering a workshop
Authored by Jérôme; additions by Bridget
- [ ] Create event-named branch (such as `conferenceYYYY`) in the [main repo](https://github.com/jpetazzo/container.training/)
- [ ] Create file `slides/_redirects` containing a link to the desired tutorial: `/ /kube-halfday.yml.html 200`
- [ ] Push local branch to GitHub and merge into main repo
- [ ] [Netlify setup](https://app.netlify.com/sites/container-training/settings/domain): create subdomain for event-named branch
- [ ] Add link to event-named branch to [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] Update the slides that say which versions we are using for [kube](https://github.com/jpetazzo/container.training/blob/master/slides/kube/versions-k8s.md) or [swarm](https://github.com/jpetazzo/container.training/blob/master/slides/swarm/versions.md) workshops
- [ ] Update the version of Compose and Machine in [settings](https://github.com/jpetazzo/container.training/tree/master/prepare-vms/settings)
- [ ] (optional) Create chatroom
- [ ] (optional) Set chatroom in YML ([kube half-day example](https://github.com/jpetazzo/container.training/blob/master/slides/kube-halfday.yml#L6-L8)) and deploy
- [ ] (optional) Put chat link on [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] How many VMs do we need? Check with event organizers ahead of time
- [ ] Provision VMs (slightly more than we think we'll need)
- [ ] Change password on presenter's VMs (to forestall any hijinx)
- [ ] Onsite: walk the room to count seats, check power supplies, lectern, A/V setup
- [ ] Print cards
- [ ] Cut cards
- [ ] Last-minute merge from master
- [ ] Check that all looks good
- [ ] DELIVER!
- [ ] Shut down VMs
- [ ] Update index.html to remove chat link and move session to past things

LICENSE (19 changes)

@@ -1,13 +1,12 @@
Copyright 2015 Jérôme Petazzoni
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
The code in this repository is licensed under the Apache License
Version 2.0. You may obtain a copy of this license at:
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
The instructions and slides in this repository (e.g. the files
with extension .md and .yml in the "slides" subdirectory) are
under the Creative Commons Attribution 4.0 International Public
License. You may obtain a copy of this license at:
https://creativecommons.org/licenses/by/4.0/legalcode

README.md

@@ -43,7 +43,7 @@ because they have a few things in common:
(and updated) identically between different decks;
- a [build system](slides/) generating HTML slides from
Markdown source files;
- a [semi-automated test harness](slides/autotest.py) to check
- a [semi-automated test harness](slides/autopilot/) to check
that the exercises and examples provided work properly;
- a [PhantomJS script](slides/slidechecker.js) to check
that the slides look good and don't have formatting issues;
@@ -247,6 +247,17 @@ content but you also know to skip during presentation.
- Last 15-30 minutes is for stateful services, DAB files, and questions.
### Pre-built images
There are pre-built images for the 4 components of the DockerCoins demo app: `dockercoins/hasher:v0.1`, `dockercoins/rng:v0.1`, `dockercoins/webui:v0.1`, and `dockercoins/worker:v0.1`. They correspond to the code in this repository.
There are also three variants, for demo purposes:
- `dockercoins/rng:v0.2` is broken (the server won't even start),
- `dockercoins/webui:v0.2` has bigger font on the Y axis and a green graph (instead of blue),
- `dockercoins/worker:v0.2` is 11x slower than `v0.1`.
## Past events
Since its inception, this workshop has been delivered dozens of times,
@@ -285,7 +296,7 @@ If you have attended this workshop and have feedback,
or if you want somebody to deliver that workshop at your
conference or for your company: you can contact one of us!
- jerome at docker dot com
- jerome dot petazzoni at gmail dot com
- bret at bretfisher dot com
If you are willing and able to deliver such workshops,

prepare-vms/README.md

@@ -1,15 +1,22 @@
# Trainer tools to create and prepare VMs for Docker workshops on AWS
# Trainer tools to create and prepare VMs for Docker workshops on AWS or Azure
## Prerequisites
- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this
And if you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
- [jinja2](https://pypi.python.org/pypi/Jinja2) (on a Mac: `brew install jinja2`)
## General Workflow
- fork/clone repo
- set required environment variables for AWS
- set required environment variables
- create your own setting file from `settings/example.yaml`
- if necessary, increase allowed open files: `ulimit -Sn 10000`
- run `./workshopctl` commands to create instances, install docker, set up each user's environment on node1, other management tasks
- run the `./workshopctl cards` command to generate a PDF for printing handouts of each user's host IPs and login info
@@ -35,6 +42,16 @@ The Docker Compose file here is used to build an image with all the dependencies
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
If you're not using AWS, set these to placeholder values:
```
export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="foo"
export AWS_DEFAULT_REGION="foo"
```
If you don't have the `aws` CLI installed, you will get a warning that it's a missing dependency. If you're not using AWS you can ignore this.
### Update/copy `settings/example.yaml`
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.
@@ -48,6 +65,7 @@ workshopctl - the orchestration workshop swiss army knife
Commands:
ami Show the AMI that will be used for deployment
amis List Ubuntu AMIs in the current region
build Build the Docker image to run this program in a container
cards Generate ready-to-print cards for a batch of VMs
deploy Install Docker on a bunch of running VMs
ec2quotas Check our EC2 quotas (max instances)
@@ -55,6 +73,7 @@ help Show available commands
ids List the instance IDs belonging to a given tag or token
ips List the IP addresses of the VMs for a given tag or token
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all nodes are reporting as Ready
list List available batches in the current region
opensg Open the default security group to ALL ingress traffic
pull_images Pre-pull a bunch of Docker images
@@ -63,6 +82,7 @@ start Start a batch of VMs
status List instance status for a given batch
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
test Run tests (pre-flight checks) on a batch of VMs
wrap Run this program in a container
```
### Summary of What `./workshopctl` Does For You
@@ -75,12 +95,12 @@ test Run tests (pre-flight checks) on a batch of VMs
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard coded.
### Example Steps to Launch a Batch of Instances for a Workshop
### Example Steps to Launch a Batch of AWS Instances for a Workshop
- Run `./workshopctl start N` to create `N` EC2 instances
- Your local SSH key will be synced to instances under `ubuntu` user
- AWS instances will be created and tagged based on date, and IPs stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires a good connection to run all the parallel SSH connections, up to 100 in parallel (ProTip: create a dedicated management instance in the same AWS region and run all these utils from it)
- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
@@ -88,6 +108,67 @@ test Run tests (pre-flight checks) on a batch of VMs
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
### Example Steps to Launch Azure Instances
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account
- Customize `azuredeploy.parameters.json`
- Required:
- Provide the SSH public key you plan to use for instance configuration
- Optional:
- Choose a name for the workshop (default is "workshop")
- Choose the number of instances (default is 3)
- Customize the desired instance size (default is Standard_D1_v2)
- Launch instances with your chosen resource group name and your preferred region; the examples are "workshop" and "eastus":
```
az group create --name workshop --location eastus
az group deployment create --resource-group workshop --template-file azuredeploy.json --parameters @azuredeploy.parameters.json
```
The `az group deployment create` command can take several minutes and will only say `- Running ..` until it completes, unless you increase the verbosity with `--verbose` or `--debug`.
To display the IPs of the instances you've launched:
```
az vm list-ip-addresses --resource-group workshop --output table
```
If you want to put the IPs into `prepare-vms/tags/<tag>/ips.txt` for a tag of "myworkshop":
1) If you haven't yet installed `jq` and/or created your event's tags directory in `prepare-vms`:
```
brew install jq
mkdir -p tags/myworkshop
```
2) And then generate the IP list:
```
az vm list-ip-addresses --resource-group workshop --output json | jq -r '.[].virtualMachine.network.publicIpAddresses[].ipAddress' > tags/myworkshop/ips.txt
```
After the workshop is over, remove the instances:
```
az group delete --resource-group workshop
```
### Example Steps to Configure Instances from a non-AWS Source
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to ssh into them.
- Set placeholder values for [AWS environment variable settings](#required-environment-variables).
- Choose a tag. It could be an event name, datestamp, etc. Ensure you have created a directory for your tag: `prepare-vms/tags/<tag>/`
- If you have not already generated a file with the IPs to be configured:
- The file should be named `prepare-vms/tags/<tag>/ips.txt`
- Format is one IP per line, no other info needed.
- Ensure the settings file is as desired (especially the number of nodes): `prepare-vms/settings/kube101.yaml`
- For a tag called `myworkshop`, configure instances: `workshopctl deploy myworkshop settings/kube101.yaml`
- Optionally, configure Kubernetes clusters of the size in the settings: `workshopctl kube myworkshop`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: `workshopctl kubetest myworkshop`
- Generate cards to print and hand out: `workshopctl cards myworkshop settings/kube101.yaml`
- Print the cards file: `prepare-vms/tags/myworkshop/ips.html`
## Other Tools
### Deploying your SSH key to all the machines
@@ -97,13 +178,6 @@ test Run tests (pre-flight checks) on a batch of VMs
- Run `pcopykey`.
### Installing extra packages
- Source `postprep.rc`.
(This will install a few extra packages, add entries to
/etc/hosts, generate SSH keys, and deploy them on all hosts.)
## Even More Details
#### Sync of SSH keys
@@ -132,7 +206,7 @@ Instances can be deployed manually using the `deploy` command:
$ ./workshopctl deploy TAG settings/somefile.yaml
The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and executed.
The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.
#### Pre-pull images
@@ -142,6 +216,10 @@ The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and exe
$ ./workshopctl cards TAG settings/somefile.yaml
If you want to generate both HTML and PDF cards, install [wkhtmltopdf](https://wkhtmltopdf.org/downloads.html); without that installed, only HTML cards will be generated.
If you don't have `wkhtmltopdf` installed, you will get a warning that it is a missing dependency. If you plan to just print the HTML cards, you can ignore this.
#### List tags
$ ./workshopctl list

azuredeploy.json (new file)

@@ -0,0 +1,250 @@
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"workshopName": {
"type": "string",
"defaultValue": "workshop",
"metadata": {
"description": "Workshop name."
}
},
"vmPrefix": {
"type": "string",
"defaultValue": "node",
"metadata": {
"description": "Prefix for VM names."
}
},
"numberOfInstances": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "Number of VMs to create."
}
},
"adminUsername": {
"type": "string",
"defaultValue": "ubuntu",
"metadata": {
"description": "Admin username for VMs."
}
},
"sshKeyData": {
"type": "string",
"defaultValue": "",
"metadata": {
"description": "SSH rsa public key file as a string."
}
},
"imagePublisher": {
"type": "string",
"defaultValue": "Canonical",
"metadata": {
"description": "OS image publisher; default Canonical."
}
},
"imageOffer": {
"type": "string",
"defaultValue": "UbuntuServer",
"metadata": {
"description": "The name of the image offer. The default is Ubuntu"
}
},
"imageSKU": {
"type": "string",
"defaultValue": "16.04-LTS",
"metadata": {
"description": "Version of the image. The default is 16.04-LTS"
}
},
"vmSize": {
"type": "string",
"defaultValue": "Standard_D1_v2",
"metadata": {
"description": "VM Size."
}
}
},
"variables": {
"vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('virtualNetworkName'))]",
"subnet1Ref": "[concat(variables('vnetID'),'/subnets/',variables('subnet1Name'))]",
"vmName": "[parameters('vmPrefix')]",
"sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]",
"publicIPAddressName": "PublicIP",
"publicIPAddressType": "Dynamic",
"virtualNetworkName": "MyVNET",
"netSecurityGroup": "MyNSG",
"addressPrefix": "10.0.0.0/16",
"subnet1Name": "subnet-1",
"subnet1Prefix": "10.0.0.0/24",
"nicName": "myVMNic"
},
"resources": [
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/publicIPAddresses",
"name": "[concat(variables('publicIPAddressName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "publicIPLoop",
"count": "[parameters('numberOfInstances')]"
},
"properties": {
"publicIPAllocationMethod": "[variables('publicIPAddressType')]"
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/virtualNetworks",
"name": "[variables('virtualNetworkName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Network/networkSecurityGroups/', variables('netSecurityGroup'))]"
],
"properties": {
"addressSpace": {
"addressPrefixes": [
"[variables('addressPrefix')]"
]
},
"subnets": [
{
"name": "[variables('subnet1Name')]",
"properties": {
"addressPrefix": "[variables('subnet1Prefix')]",
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('netSecurityGroup'))]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkInterfaces",
"name": "[concat(variables('nicName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "nicLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'),copyIndex(1))]",
"[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'), copyIndex(1)))]"
},
"subnet": {
"id": "[variables('subnet1Ref')]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-12-01",
"type": "Microsoft.Compute/virtualMachines",
"name": "[concat(variables('vmName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "vmLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'), copyIndex(1))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"osProfile": {
"computerName": "[concat(variables('vmName'),copyIndex(1))]",
"adminUsername": "[parameters('adminUsername')]",
"linuxConfiguration": {
"disablePasswordAuthentication": true,
"ssh": {
"publicKeys": [
{
"path": "[variables('sshKeyPath')]",
"keyData": "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile": {
"osDisk": {
"createOption": "FromImage"
},
"imageReference": {
"publisher": "[parameters('imagePublisher')]",
"offer": "[parameters('imageOffer')]",
"sku": "[parameters('imageSKU')]",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('nicName'),copyIndex(1)))]"
}
]
}
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkSecurityGroups",
"name": "[variables('netSecurityGroup')]",
"location": "[resourceGroup().location]",
"tags": {
"workshop": "[parameters('workshopName')]"
},
"properties": {
"securityRules": [
{
"name": "default-open-ports",
"properties": {
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1000,
"direction": "Inbound"
}
}
]
}
}
],
"outputs": {
"resourceID": {
"type": "string",
"value": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'),'1'))]"
}
}
}


@@ -0,0 +1,18 @@
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"sshKeyData": {
"value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXTIl/M9oeSlcsC5Rfe+nZr4Jc4sl200pSw2lpdxlZ3xzeP15NgSSMJnigUrKUXHfqRQ+2wiPxEf0Odz2GdvmXvR0xodayoOQsO24AoERjeSBXCwqITsfp1bGKzMb30/3ojRBo6LBR6r1+lzJYnNCGkT+IQwLzRIpm0LCNz1j08PUI2aZ04+mcDANvHuN/hwi/THbLLp6SNWN43m9r02RcC6xlCNEhJi4wk4VzMzVbSv9RlLGST2ocbUHwmQ2k9OUmpzoOx73aQi9XNnEaFh2w/eIdXM75VtkT3mRryyykg9y0/hH8/MVmIuRIdzxHQqlm++DLXVH5Ctw6a4kS+ki7 workshop"
},
"workshopName": {
"value": "workshop"
},
"numberOfInstances": {
"value": 3
},
"vmSize": {
"value": "Standard_D1_v2"
}
}
}
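A parameters file like this is typically fed to an ARM deployment along these lines (the resource group and file names are illustrative assumptions, not taken from this repo):

    az group deployment create \
        --resource-group workshop \
        --template-file azuredeploy.json \
        --parameters @azuredeploy.parameters.json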


@@ -1,18 +1,18 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set url = "avril2018.container.training" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set workshop_name = "formation" -%}
{%- set cluster_or_machine = "votre VM" -%}
{%- set machine_is_or_machines_are = "Votre VM" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "orchestration workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set workshop_name = "formation" -%}
{%- set cluster_or_machine = "votre cluster" -%}
{%- set machine_is_or_machines_are = "Votre cluster" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
@@ -73,9 +73,9 @@ img {
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
Voici les informations pour vous connecter à
{{ cluster_or_machine }} pour cette formation.
Vous pouvez vous connecter avec n'importe quel client SSH.
</p>
<p>
<img src="{{ image_src }}" />
@@ -88,14 +88,14 @@ img {
</p>
<p>
Your {{ machine_is_or_machines_are }}:
{{ machine_is_or_machines_are }} :
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<p>Les slides sont à l'adresse suivante :
<center>{{ url }}</center>
</p>
</div>

prepare-vms/clusters.csv (Normal file, 5 lines)

@@ -0,0 +1,5 @@
Put your initials in the first column to "claim" a cluster.
Initials{% for node in clusters[0] %} node{{ loop.index }}{% endfor %}
{% for cluster in clusters -%}
{%- for node in cluster %} {{ node|trim }}{% endfor %}
{% endfor %}

prepare-vms/cncsetup.sh (Normal file, 21 lines)

@@ -0,0 +1,21 @@
#!/bin/sh
if [ "$(whoami)" != ubuntu ]; then
echo "This script should be executed on a freshly deployed node,"
echo "with the 'ubuntu' user. Aborting."
exit 1
fi
if id docker >/dev/null 2>&1; then
sudo userdel -r docker
fi
pip install --user awscli jinja2 pdfkit
sudo apt-get install -y wkhtmltopdf xvfb
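# The tmux session below types the remaining setup commands for us:
# generate/load an SSH key, start a virtual X display for wkhtmltopdf,
# and serve ~/www over HTTP with an nginx container.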
tmux new-session \; send-keys "
[ -f ~/.ssh/id_rsa ] || ssh-keygen
eval \$(ssh-agent)
ssh-add
Xvfb :0 &
export DISPLAY=:0
mkdir -p ~/www
sudo docker run -d -p 80:80 -v \$HOME/www:/usr/share/nginx/html nginx
"


@@ -15,5 +15,6 @@ services:
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
AWS_INSTANCE_TYPE: ${AWS_INSTANCE_TYPE}
USER: ${USER}
entrypoint: /root/prepare-vms/workshopctl


@@ -2,7 +2,7 @@
_ERR() {
error "Command $BASH_COMMAND failed (exit status: $?)"
}
set -e
set -eE
trap _ERR ERR
die() {


@@ -39,7 +39,10 @@ _cmd_cards() {
need_tag $TAG
need_settings $SETTINGS
aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
# If you're not using AWS, populate the ips.txt file manually
if [ ! -f tags/$TAG/ips.txt ]; then
aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
fi
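# (ips.txt is expected to contain one IP address per line)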
# Remove symlinks to old cards
rm -f ips.html ips.pdf
@@ -124,27 +127,21 @@ _cmd kube "Set up Kubernetes clusters with kubeadm (must be run AFTER deploy)"
_cmd_kube() {
# Install packages
pssh "
pssh --timeout 200 "
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |
sudo apt-key add - &&
echo deb http://apt.kubernetes.io/ kubernetes-xenial main |
sudo tee /etc/apt/sources.list.d/kubernetes.list"
pssh "
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet kubeadm kubectl
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
# Work around https://github.com/kubernetes/kubernetes/issues/53356
pssh "
if [ ! -f /etc/kubernetes/kubelet.conf ]; then
sudo systemctl stop kubelet
sudo rm -rf /var/lib/kubelet/pki
fi"
# Initialize kube master
pssh "
pssh --timeout 200 "
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
sudo kubeadm init
kubeadm token generate > /tmp/token
sudo kubeadm init --token \$(cat /tmp/token)
fi"
# Put kubeconfig in ubuntu's and docker's accounts
@@ -157,15 +154,6 @@ _cmd_kube() {
sudo chown -R docker /home/docker/.kube
fi"
# Get bootstrap token
pssh "
if grep -q node1 /tmp/node; then
TOKEN_NAME=\$(kubectl -n kube-system get secret -o name | grep bootstrap-token)
TOKEN_ID=\$(kubectl -n kube-system get \$TOKEN_NAME -o go-template --template '{{ index .data \"token-id\" }}' | base64 -d)
TOKEN_SECRET=\$(kubectl -n kube-system get \$TOKEN_NAME -o go-template --template '{{ index .data \"token-secret\" }}' | base64 -d)
echo \$TOKEN_ID.\$TOKEN_SECRET >/tmp/token
fi"
# Install weave as the pod network
pssh "
if grep -q node1 /tmp/node; then
@@ -174,15 +162,30 @@ _cmd_kube() {
fi"
# Join the other nodes to the cluster
pssh "
pssh --timeout 200 "
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
sudo kubeadm join --token \$TOKEN node1:6443
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
fi"
sep "Done"
}
_cmd kubetest "Check that all nodes are reporting as Ready"
_cmd_kubetest() {
# There are way too many backslashes in the command below.
# Feel free to make that better ♥
pssh "
set -e
[ -f /tmp/node ]
if grep -q node1 /tmp/node; then
which kubectl
for NODE in \$(awk /\ node/\ {print\ \\\$2} /etc/hosts); do
echo \$NODE ; kubectl get nodes | grep -w \$NODE | grep -w Ready
done
fi"
}
_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd_ids() {
TAG=$1
@@ -280,6 +283,9 @@ _cmd_start() {
key_name=$(sync_keys)
AMI=$(_cmd_ami) # Retrieve the AWS image ID
if [ -z "$AMI" ]; then
die "I could not find which AMI to use in this region. Try another region?"
fi
TOKEN=$(get_token) # generate a timestamp token for this batch of VMs
AWS_KEY_NAME=$(make_key_name)
@@ -292,7 +298,7 @@ _cmd_start() {
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type t2.medium \
--instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
--client-token $TOKEN \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
@@ -430,6 +436,7 @@ tag_is_reachable() {
}
test_tag() {
TAG=$1
ips_file=tags/$TAG/ips.txt
info "Picking a random IP address in $ips_file to run tests."
n=$((1 + $RANDOM % $(wc -l <$ips_file)))


@@ -45,7 +45,7 @@ def system(cmd):
# On EC2, the ephemeral disk might be mounted on /mnt.
# If /mnt is a mountpoint, place Docker workspace on it.
system("if mountpoint -q /mnt; then sudo mkdir /mnt/docker && sudo ln -s /mnt/docker /var/lib/docker; fi")
system("if mountpoint -q /mnt; then sudo mkdir -p /mnt/docker && sudo ln -sfn /mnt/docker /var/lib/docker; fi")
# Put our public IP in /tmp/ipv4
# ipv4_retrieval_endpoint = "http://169.254.169.254/latest/meta-data/public-ipv4"


@@ -0,0 +1,5 @@
# Number of VMs per cluster
clustersize: 5
# Jinja2 template to use to generate ready-to-cut cards
cards_template: clusters.csv


@@ -0,0 +1,24 @@
# customize your cluster size, your cards template, and the versions
# Number of VMs per cluster
clustersize: 5
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.18.0
machine_version: 0.13.0


@@ -7,7 +7,7 @@ clustersize: 1
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
@@ -17,8 +17,8 @@ paper_margin: 0.2in
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: test
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.17.1
machine_version: 0.13.0
compose_version: 1.20.1
machine_version: 0.14.0


@@ -0,0 +1,106 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 21.5%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.4em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>


@@ -0,0 +1,24 @@
# 3 nodes for k8s 101 workshops
# Number of VMs per cluster
clustersize: 3
# Jinja2 template to use to generate ready-to-cut cards
cards_template: settings/kube101.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.20.1
machine_version: 0.14.0


@@ -7,7 +7,7 @@ clustersize: 5
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
@@ -17,8 +17,8 @@ paper_margin: 0.2in
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: test
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.17.1
machine_version: 0.13.0
compose_version: 1.20.1
machine_version: 0.14.0


@@ -38,7 +38,7 @@ check_envvars() {
if [ -z "${!envvar}" ]; then
error "Environment variable $envvar is not set."
if [ "$envvar" = "SSH_AUTH_SOCK" ]; then
error "Hint: run '\$(ssh-agent) ; ssh-add' and try again?"
error "Hint: run 'eval \$(ssh-agent) ; ssh-add' and try again?"
fi
status=1
fi


@@ -1 +0,0 @@
/ /kube-halfday.yml.html 200!

slides/autopilot/autotest.py (Executable file, 418 lines)

@@ -0,0 +1,418 @@
#!/usr/bin/env python
# coding: utf-8
import click
import logging
import os
import random
import re
import select
import subprocess
import sys
import time
import uuid
import yaml
logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"))
TIMEOUT = 60 # 1 minute
# This one is not a constant. It's an ugly global.
IPADDR = None
class State(object):
def __init__(self):
self.interactive = True
self.verify_status = False
self.simulate_type = True
self.slide = 1
self.snippet = 0
def load(self):
data = yaml.load(open("state.yaml"))
self.interactive = bool(data["interactive"])
self.verify_status = bool(data["verify_status"])
self.simulate_type = bool(data["simulate_type"])
self.slide = int(data["slide"])
self.snippet = int(data["snippet"])
def save(self):
with open("state.yaml", "w") as f:
yaml.dump(dict(
interactive=self.interactive,
verify_status=self.verify_status,
simulate_type=self.simulate_type,
slide=self.slide,
snippet=self.snippet,
), f, default_flow_style=False)
state = State()
def hrule():
return "="*int(subprocess.check_output(["tput", "cols"]))
# A "snippet" is something that the user is supposed to do in the workshop.
# Most of the "snippets" are shell commands.
# Some of them can be key strokes or other actions.
# In the markdown source, they are the code sections (identified by triple-
# quotes) within .exercise[] sections.
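# For illustration, a hypothetical exercise in the slides source looks like:
#   .exercise[
#   - Run a command:
#     ```bash
#     docker ps
#     ```
#   ]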
class Snippet(object):
def __init__(self, slide, content):
self.slide = slide
self.content = content
# Extract the "method" (e.g. bash, keys, ...)
# On multi-line snippets, the method is alone on the first line
# On single-line snippets, the data follows the method immediately
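# e.g. a single-line snippet "keys ^C" has method "keys";
# a multi-line snippet whose first line is just "bash" has method "bash"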
if '\n' in content:
self.method, self.data = content.split('\n', 1)
else:
self.method, self.data = content.split(' ', 1)
self.data = self.data.strip()
self.next = None
def __str__(self):
return self.content
class Slide(object):
current_slide = 0
def __init__(self, content):
self.number = Slide.current_slide
Slide.current_slide += 1
# Remove commented-out slides
# (remark.js considers ??? to be the separator for speaker notes)
content = re.split("\n\?\?\?\n", content)[0]
self.content = content
self.snippets = []
exercises = re.findall("\.exercise\[(.*)\]", content, re.DOTALL)
for exercise in exercises:
if "```" in exercise:
previous = None
for snippet_content in exercise.split("```")[1::2]:
snippet = Snippet(self, snippet_content)
if previous:
previous.next = snippet
previous = snippet
self.snippets.append(snippet)
else:
logging.warning("Exercise on slide {} does not have any ``` snippet."
.format(self.number))
self.debug()
def __str__(self):
text = self.content
for snippet in self.snippets:
text = text.replace(snippet.content, ansi("7")(snippet.content))
return text
def debug(self):
logging.debug("\n{}\n{}\n{}".format(hrule(), self.content, hrule()))
def focus_slides():
subprocess.check_output(["i3-msg", "workspace", "3"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def focus_terminal():
subprocess.check_output(["i3-msg", "workspace", "2"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def focus_browser():
subprocess.check_output(["i3-msg", "workspace", "4"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def ansi(code):
return lambda s: "\x1b[{}m{}\x1b[0m".format(code, s)
# Sleeps the indicated delay, but interruptible by pressing ENTER.
# If interrupted, returns True.
def interruptible_sleep(t):
rfds, _, _ = select.select([0], [], [], t)
return 0 in rfds
def wait_for_string(s, timeout=TIMEOUT):
logging.debug("Waiting for string: {}".format(s))
deadline = time.time() + timeout
while time.time() < deadline:
output = capture_pane()
if s in output:
return
if interruptible_sleep(1): return
raise Exception("Timed out while waiting for {}!".format(s))
def wait_for_prompt():
logging.debug("Waiting for prompt.")
deadline = time.time() + TIMEOUT
while time.time() < deadline:
output = capture_pane()
# If we are not at the bottom of the screen, there will be a bunch of extra \n's
output = output.rstrip('\n')
last_line = output.split('\n')[-1]
# Our custom prompt on the VMs has two lines; the 2nd line is just '$'
if last_line == "$":
# This is a perfect opportunity to grab the node's IP address
global IPADDR
IPADDR = re.findall("^\[(.*)\]", output, re.MULTILINE)[-1]
return
# When we are in an alpine container, the prompt will be "/ #"
if last_line == "/ #":
return
# We did not recognize a known prompt; wait a bit and check again
logging.debug("Could not find a known prompt on last line: {!r}"
.format(last_line))
if interruptible_sleep(1): return
raise Exception("Timed out while waiting for prompt!")
def check_exit_status():
if not state.verify_status:
return
token = uuid.uuid4().hex
data = "echo {} $?\n".format(token)
logging.debug("Sending {!r} to get exit status.".format(data))
send_keys(data)
time.sleep(0.5)
wait_for_prompt()
screen = capture_pane()
status = re.findall("\n{} ([0-9]+)\n".format(token), screen, re.MULTILINE)
logging.debug("Got exit status: {}.".format(status))
if len(status) == 0:
raise Exception("Couldn't retrieve status code {}. Timed out?".format(token))
if len(status) > 1:
raise Exception("More than one status code {}. I'm seeing double! Shoot them both.".format(token))
code = int(status[0])
if code != 0:
raise Exception("Non-zero exit status: {}.".format(code))
# Otherwise just return peacefully.
def setup_tmux_and_ssh():
if subprocess.call(["tmux", "has-session"]):
logging.error("Couldn't connect to tmux. Please setup tmux first.")
ipaddr = open("../../prepare-vms/ips.txt").read().split("\n")[0]
uid = os.getuid()
raise Exception("""
1. If you're running this directly from a node:
tmux
2. If you want to control a remote tmux:
rm -f /tmp/tmux-{uid}/default && ssh -t -L /tmp/tmux-{uid}/default:/tmp/tmux-1001/default docker@{ipaddr} tmux new-session -As 0
3. If you cannot control a remote tmux:
tmux new-session ssh docker@{ipaddr}
""".format(uid=uid, ipaddr=ipaddr))
else:
logging.info("Found tmux session. Trying to acquire shell prompt.")
wait_for_prompt()
logging.info("Successfully connected to test cluster in tmux session.")
slides = [Slide("Dummy slide zero")]
content = open(sys.argv[1]).read()
# OK, this part is definitely hackish, and will break if the
# excludedClasses parameter is not on a single line.
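# The slides source is expected to contain a line like (hypothetical values):
#   excludedClasses: ["in-person"]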
excluded_classes = re.findall("excludedClasses: (\[.*\])", content)
excluded_classes = set(eval(excluded_classes[0]))
for slide in re.split("\n---?\n", content):
slide_classes = re.findall("class: (.*)", slide)
if slide_classes:
slide_classes = slide_classes[0].split(",")
slide_classes = [c.strip() for c in slide_classes]
if excluded_classes & set(slide_classes):
logging.info("Skipping excluded slide.")
continue
slides.append(Slide(slide))
def send_keys(data):
if state.simulate_type and data[0] != '^':
for key in data:
if key == ";":
key = "\\;"
if key == "\n":
if interruptible_sleep(1): return
subprocess.check_call(["tmux", "send-keys", key])
if interruptible_sleep(0.15*random.random()): return
if key == "\n":
if interruptible_sleep(1): return
else:
subprocess.check_call(["tmux", "send-keys", data])
def capture_pane():
return subprocess.check_output(["tmux", "capture-pane", "-p"]).decode('utf-8')
setup_tmux_and_ssh()
try:
state.load()
logging.info("Successfully loaded state from file.")
# Let's override the starting state, so that when an error occurs,
# we can restart the auto-tester and then single-step or debug.
# (Instead of running again through the same issue immediately.)
state.interactive = True
except Exception as e:
logging.exception("Could not load state from file.")
logging.warning("Using default values.")
def move_forward():
state.snippet += 1
if state.snippet > len(slides[state.slide].snippets):
state.slide += 1
state.snippet = 0
check_bounds()
def move_backward():
state.snippet -= 1
if state.snippet < 0:
state.slide -= 1
state.snippet = 0
check_bounds()
def check_bounds():
if state.slide < 1:
state.slide = 1
if state.slide >= len(slides):
state.slide = len(slides)-1
while True:
state.save()
slide = slides[state.slide]
snippet = slide.snippets[state.snippet-1] if state.snippet else None
click.clear()
print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}]"
.format(state.slide, len(slides)-1,
state.snippet, len(slide.snippets) if slide.snippets else 0,
state.simulate_type, state.verify_status))
print(hrule())
if snippet:
print(slide.content.replace(snippet.content, ansi(7)(snippet.content)))
focus_terminal()
else:
print(slide.content)
subprocess.check_output(["./gotoslide.js", str(slide.number)])
focus_slides()
print(hrule())
if state.interactive:
print("y/⎵/⏎ Execute snippet or advance to next snippet")
print("p/← Previous")
print("n/→ Next")
print("s Simulate keystrokes")
print("v Validate exit status")
print("g Go to a specific slide")
print("q Quit")
print("c Continue non-interactively until next error")
command = click.getchar()
else:
command = "y"
if command in ("n", "\x1b[C"):
move_forward()
elif command in ("p", "\x1b[D"):
move_backward()
elif command == "s":
state.simulate_type = not state.simulate_type
elif command == "v":
state.verify_status = not state.verify_status
elif command == "g":
state.slide = click.prompt("Enter slide number", type=int)
state.snippet = 0
check_bounds()
elif command == "q":
break
elif command == "c":
# Continue non-interactively until the next error
state.interactive = False
elif command in ("y", "\r", " "):
if not snippet:
# Advance to next snippet
# Advance until a slide that has snippets
while not slides[state.slide].snippets:
move_forward()
# But stop if we reach the last slide
if state.slide == len(slides)-1:
break
# And then advance to the snippet
move_forward()
continue
method, data = snippet.method, snippet.data
logging.info("Running with method {}: {}".format(method, data))
if method == "keys":
send_keys(data)
elif method == "bash":
# Make sure that we're ready
wait_for_prompt()
# Strip leading spaces
data = re.sub("\n +", "\n", data)
# Remove backticks (they are used to highlight sections)
data = data.replace('`', '')
# Add "RETURN" at the end of the command :)
data += "\n"
# Send command
send_keys(data)
# Force a short sleep to avoid race condition
time.sleep(0.5)
if snippet.next and snippet.next.method == "wait":
wait_for_string(snippet.next.data)
elif snippet.next and snippet.next.method == "longwait":
wait_for_string(snippet.next.data, 10*TIMEOUT)
else:
wait_for_prompt()
# Verify return code
check_exit_status()
elif method == "copypaste":
screen = capture_pane()
matches = re.findall(data, screen, flags=re.DOTALL)
if len(matches) == 0:
raise Exception("Could not find regex {} in output.".format(data))
# Arbitrarily get the most recent match
match = matches[-1]
# Remove line breaks (like a screen copy paste would do)
match = match.replace('\n', '')
send_keys(match + '\n')
# FIXME: we should factor out the "bash" method
wait_for_prompt()
check_exit_status()
elif method == "open":
# node1's IP address (IPADDR) was captured earlier, in wait_for_prompt()
url = data.replace("/node1", "/{}".format(IPADDR))
# This should probably be adapted to run on different OS
subprocess.check_output(["xdg-open", url])
focus_browser()
if state.interactive:
print("Press any key to continue to next step...")
click.getchar()
else:
logging.warning("Unknown method {}: {!r}".format(method, data))
move_forward()
else:
logging.warning("Unknown command {}.".format(command))

slides/autopilot/gotoslide.js (Executable file, 17 lines)

@@ -0,0 +1,17 @@
#!/usr/bin/env node
/* Expects a slide number as first argument.
* Will connect to the local pub/sub server,
* and issue a "go to slide X" command, which
* will be sent to all connected browsers.
*/
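// Usage (as invoked by autotest.py): ./gotoslide.js 42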
var io = require('socket.io-client');
var socket = io('http://localhost:3000');
socket.on('connect_error', function(){
console.log('connection error');
socket.close();
});
socket.emit('slide change', process.argv[2], function(){
socket.close();
});

slides/autopilot/package-lock.json (generated Normal file, 603 lines)

@@ -0,0 +1,603 @@
{
"name": "container-training-pub-sub-server",
"version": "0.0.1",
"lockfileVersion": 1,
"requires": true,
"dependencies": {
"accepts": {
"version": "1.3.4",
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.4.tgz",
"integrity": "sha1-hiRnWMfdbSGmR0/whKR0DsBesh8=",
"requires": {
"mime-types": "2.1.17",
"negotiator": "0.6.1"
}
},
"after": {
"version": "0.8.2",
"resolved": "https://registry.npmjs.org/after/-/after-0.8.2.tgz",
"integrity": "sha1-/ts5T58OAqqXaOcCvaI7UF+ufh8="
},
"array-flatten": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz",
"integrity": "sha1-ml9pkFGx5wczKPKgCJaLZOopVdI="
},
"arraybuffer.slice": {
"version": "0.0.6",
"resolved": "https://registry.npmjs.org/arraybuffer.slice/-/arraybuffer.slice-0.0.6.tgz",
"integrity": "sha1-8zshWfBTKj8xB6JywMz70a0peco="
},
"async-limiter": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/async-limiter/-/async-limiter-1.0.0.tgz",
"integrity": "sha512-jp/uFnooOiO+L211eZOoSyzpOITMXx1rBITauYykG3BRYPu8h0UcxsPNB04RR5vo4Tyz3+ay17tR6JVf9qzYWg=="
},
"backo2": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/backo2/-/backo2-1.0.2.tgz",
"integrity": "sha1-MasayLEpNjRj41s+u2n038+6eUc="
},
"base64-arraybuffer": {
"version": "0.1.5",
"resolved": "https://registry.npmjs.org/base64-arraybuffer/-/base64-arraybuffer-0.1.5.tgz",
"integrity": "sha1-c5JncZI7Whl0etZmqlzUv5xunOg="
},
"base64id": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/base64id/-/base64id-1.0.0.tgz",
"integrity": "sha1-R2iMuZu2gE8OBtPnY7HDLlfY5rY="
},
"better-assert": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/better-assert/-/better-assert-1.0.2.tgz",
"integrity": "sha1-QIZrnhueC1W0gYlDEeaPr/rrxSI=",
"requires": {
"callsite": "1.0.0"
}
},
"blob": {
"version": "0.0.4",
"resolved": "https://registry.npmjs.org/blob/-/blob-0.0.4.tgz",
"integrity": "sha1-vPEwUspURj8w+fx+lbmkdjCpSSE="
},
"body-parser": {
"version": "1.18.2",
"resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.18.2.tgz",
"integrity": "sha1-h2eKGdhLR9hZuDGZvVm84iKxBFQ=",
"requires": {
"bytes": "3.0.0",
"content-type": "1.0.4",
"debug": "2.6.9",
"depd": "1.1.1",
"http-errors": "1.6.2",
"iconv-lite": "0.4.19",
"on-finished": "2.3.0",
"qs": "6.5.1",
"raw-body": "2.3.2",
"type-is": "1.6.15"
}
},
"bytes": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.0.0.tgz",
"integrity": "sha1-0ygVQE1olpn4Wk6k+odV3ROpYEg="
},
"callsite": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/callsite/-/callsite-1.0.0.tgz",
"integrity": "sha1-KAOY5dZkvXQDi28JBRU+borxvCA="
},
"component-bind": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/component-bind/-/component-bind-1.0.0.tgz",
"integrity": "sha1-AMYIq33Nk4l8AAllGx06jh5zu9E="
},
"component-emitter": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/component-emitter/-/component-emitter-1.2.1.tgz",
"integrity": "sha1-E3kY1teCg/ffemt8WmPhQOaUJeY="
},
"component-inherit": {
"version": "0.0.3",
"resolved": "https://registry.npmjs.org/component-inherit/-/component-inherit-0.0.3.tgz",
"integrity": "sha1-ZF/ErfWLcrZJ1crmUTVhnbJv8UM="
},
"content-disposition": {
"version": "0.5.2",
"resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.2.tgz",
"integrity": "sha1-DPaLud318r55YcOoUXjLhdunjLQ="
},
"content-type": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.4.tgz",
"integrity": "sha512-hIP3EEPs8tB9AT1L+NUqtwOAps4mk2Zob89MWXMHjHWg9milF/j4osnnQLXBCBFBk/tvIG/tUc9mOUJiPBhPXA=="
},
"cookie": {
"version": "0.3.1",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.3.1.tgz",
"integrity": "sha1-5+Ch+e9DtMi6klxcWpboBtFoc7s="
},
"cookie-signature": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz",
"integrity": "sha1-4wOogrNCzD7oylE6eZmXNNqzriw="
},
"debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"requires": {
"ms": "2.0.0"
}
},
"depd": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/depd/-/depd-1.1.1.tgz",
"integrity": "sha1-V4O04cRZ8G+lyif5kfPQbnoxA1k="
},
"destroy": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/destroy/-/destroy-1.0.4.tgz",
"integrity": "sha1-l4hXRCxEdJ5CBmE+N5RiBYJqvYA="
},
"ee-first": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz",
"integrity": "sha1-WQxhFWsK4vTwJVcyoViyZrxWsh0="
},
"encodeurl": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.1.tgz",
"integrity": "sha1-eePVhlU0aQn+bw9Fpd5oEDspTSA="
},
"engine.io": {
"version": "3.1.4",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-3.1.4.tgz",
"integrity": "sha1-PQIRtwpVLOhB/8fahiezAamkFi4=",
"requires": {
"accepts": "1.3.3",
"base64id": "1.0.0",
"cookie": "0.3.1",
"debug": "2.6.9",
"engine.io-parser": "2.1.1",
"uws": "0.14.5",
"ws": "3.3.3"
},
"dependencies": {
"accepts": {
"version": "1.3.3",
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.3.tgz",
"integrity": "sha1-w8p0NJOGSMPg2cHjKN1otiLChMo=",
"requires": {
"mime-types": "2.1.17",
"negotiator": "0.6.1"
}
}
}
},
"engine.io-client": {
"version": "3.1.4",
"resolved": "https://registry.npmjs.org/engine.io-client/-/engine.io-client-3.1.4.tgz",
"integrity": "sha1-T88TcLRxY70s6b4nM5ckMDUNTqE=",
"requires": {
"component-emitter": "1.2.1",
"component-inherit": "0.0.3",
"debug": "2.6.9",
"engine.io-parser": "2.1.1",
"has-cors": "1.1.0",
"indexof": "0.0.1",
"parseqs": "0.0.5",
"parseuri": "0.0.5",
"ws": "3.3.3",
"xmlhttprequest-ssl": "1.5.4",
"yeast": "0.1.2"
}
},
"engine.io-parser": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/engine.io-parser/-/engine.io-parser-2.1.1.tgz",
"integrity": "sha1-4Ps/DgRi9/WLt3waUun1p+JuRmg=",
"requires": {
"after": "0.8.2",
"arraybuffer.slice": "0.0.6",
"base64-arraybuffer": "0.1.5",
"blob": "0.0.4",
"has-binary2": "1.0.2"
}
},
"escape-html": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz",
"integrity": "sha1-Aljq5NPQwJdN4cFpGI7wBR0dGYg="
},
"etag": {
"version": "1.8.1",
"resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz",
"integrity": "sha1-Qa4u62XvpiJorr/qg6x9eSmbCIc="
},
"express": {
"version": "4.16.2",
"resolved": "https://registry.npmjs.org/express/-/express-4.16.2.tgz",
"integrity": "sha1-41xt/i1kt9ygpc1PIXgb4ymeB2w=",
"requires": {
"accepts": "1.3.4",
"array-flatten": "1.1.1",
"body-parser": "1.18.2",
"content-disposition": "0.5.2",
"content-type": "1.0.4",
"cookie": "0.3.1",
"cookie-signature": "1.0.6",
"debug": "2.6.9",
"depd": "1.1.1",
"encodeurl": "1.0.1",
"escape-html": "1.0.3",
"etag": "1.8.1",
"finalhandler": "1.1.0",
"fresh": "0.5.2",
"merge-descriptors": "1.0.1",
"methods": "1.1.2",
"on-finished": "2.3.0",
"parseurl": "1.3.2",
"path-to-regexp": "0.1.7",
"proxy-addr": "2.0.2",
"qs": "6.5.1",
"range-parser": "1.2.0",
"safe-buffer": "5.1.1",
"send": "0.16.1",
"serve-static": "1.13.1",
"setprototypeof": "1.1.0",
"statuses": "1.3.1",
"type-is": "1.6.15",
"utils-merge": "1.0.1",
"vary": "1.1.2"
}
},
"finalhandler": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.1.0.tgz",
"integrity": "sha1-zgtoVbRYU+eRsvzGgARtiCU91/U=",
"requires": {
"debug": "2.6.9",
"encodeurl": "1.0.1",
"escape-html": "1.0.3",
"on-finished": "2.3.0",
"parseurl": "1.3.2",
"statuses": "1.3.1",
"unpipe": "1.0.0"
}
},
"forwarded": {
"version": "0.1.2",
"resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.1.2.tgz",
"integrity": "sha1-mMI9qxF1ZXuMBXPozszZGw/xjIQ="
},
"fresh": {
"version": "0.5.2",
"resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz",
"integrity": "sha1-PYyt2Q2XZWn6g1qx+OSyOhBWBac="
},
"has-binary2": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/has-binary2/-/has-binary2-1.0.2.tgz",
"integrity": "sha1-6D26SfC5vk0CbSc2U1DZ8D9Uvpg=",
"requires": {
"isarray": "2.0.1"
}
},
"has-cors": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/has-cors/-/has-cors-1.1.0.tgz",
"integrity": "sha1-XkdHk/fqmEPRu5nCPu9J/xJv/zk="
},
"http-errors": {
"version": "1.6.2",
"resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.6.2.tgz",
"integrity": "sha1-CgAsyFcHGSp+eUbO7cERVfYOxzY=",
"requires": {
"depd": "1.1.1",
"inherits": "2.0.3",
"setprototypeof": "1.0.3",
"statuses": "1.3.1"
},
"dependencies": {
"setprototypeof": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.0.3.tgz",
"integrity": "sha1-ZlZ+NwQ+608E2RvWWMDL77VbjgQ="
}
}
},
"iconv-lite": {
"version": "0.4.19",
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.19.tgz",
"integrity": "sha512-oTZqweIP51xaGPI4uPa56/Pri/480R+mo7SeU+YETByQNhDG55ycFyNLIgta9vXhILrxXDmF7ZGhqZIcuN0gJQ=="
},
"indexof": {
"version": "0.0.1",
"resolved": "https://registry.npmjs.org/indexof/-/indexof-0.0.1.tgz",
"integrity": "sha1-gtwzbSMrkGIXnQWrMpOmYFn9Q10="
},
"inherits": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz",
"integrity": "sha1-Yzwsg+PaQqUC9SRmAiSA9CCCYd4="
},
"ipaddr.js": {
"version": "1.5.2",
"resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.5.2.tgz",
"integrity": "sha1-1LUFvemUaYfM8PxY2QEP+WB+P6A="
},
"isarray": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/isarray/-/isarray-2.0.1.tgz",
"integrity": "sha1-o32U7ZzaLVmGXJ92/llu4fM4dB4="
},
"media-typer": {
"version": "0.3.0",
"resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz",
"integrity": "sha1-hxDXrwqmJvj/+hzgAWhUUmMlV0g="
},
"merge-descriptors": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.1.tgz",
"integrity": "sha1-sAqqVW3YtEVoFQ7J0blT8/kMu2E="
},
"methods": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz",
"integrity": "sha1-VSmk1nZUE07cxSZmVoNbD4Ua/O4="
},
"mime": {
"version": "1.4.1",
"resolved": "https://registry.npmjs.org/mime/-/mime-1.4.1.tgz",
"integrity": "sha512-KI1+qOZu5DcW6wayYHSzR/tXKCDC5Om4s1z2QJjDULzLcmf3DvzS7oluY4HCTrc+9FiKmWUgeNLg7W3uIQvxtQ=="
},
"mime-db": {
"version": "1.30.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.30.0.tgz",
"integrity": "sha1-dMZD2i3Z1qRTmZY0ZbJtXKfXHwE="
},
"mime-types": {
"version": "2.1.17",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.17.tgz",
"integrity": "sha1-Cdejk/A+mVp5+K+Fe3Cp4KsWVXo=",
"requires": {
"mime-db": "1.30.0"
}
},
"ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
},
"negotiator": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.1.tgz",
"integrity": "sha1-KzJxhOiZIQEXeyhWP7XnECrNDKk="
},
"object-component": {
"version": "0.0.3",
"resolved": "https://registry.npmjs.org/object-component/-/object-component-0.0.3.tgz",
"integrity": "sha1-8MaapQ78lbhmwYb0AKM3acsvEpE="
},
"on-finished": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
"integrity": "sha1-IPEzZIGwg811M3mSoWlxqi2QaUc=",
"requires": {
"ee-first": "1.1.1"
}
},
"parseqs": {
"version": "0.0.5",
"resolved": "https://registry.npmjs.org/parseqs/-/parseqs-0.0.5.tgz",
"integrity": "sha1-1SCKNzjkZ2bikbouoXNoSSGouJ0=",
"requires": {
"better-assert": "1.0.2"
}
},
"parseuri": {
"version": "0.0.5",
"resolved": "https://registry.npmjs.org/parseuri/-/parseuri-0.0.5.tgz",
"integrity": "sha1-gCBKUNTbt3m/3G6+J3jZDkvOMgo=",
"requires": {
"better-assert": "1.0.2"
}
},
"parseurl": {
"version": "1.3.2",
"resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.2.tgz",
"integrity": "sha1-/CidTtiZMRlGDBViUyYs3I3mW/M="
},
"path-to-regexp": {
"version": "0.1.7",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz",
"integrity": "sha1-32BBeABfUi8V60SQ5yR6G/qmf4w="
},
"proxy-addr": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.2.tgz",
"integrity": "sha1-ZXFQT0e7mI7IGAJT+F3X4UlSvew=",
"requires": {
"forwarded": "0.1.2",
"ipaddr.js": "1.5.2"
}
},
"qs": {
"version": "6.5.1",
"resolved": "https://registry.npmjs.org/qs/-/qs-6.5.1.tgz",
"integrity": "sha512-eRzhrN1WSINYCDCbrz796z37LOe3m5tmW7RQf6oBntukAG1nmovJvhnwHHRMAfeoItc1m2Hk02WER2aQ/iqs+A=="
},
"range-parser": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.0.tgz",
"integrity": "sha1-9JvmtIeJTdxA3MlKMi9hEJLgDV4="
},
"raw-body": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.3.2.tgz",
"integrity": "sha1-vNYMd9Prk83gBQKVw/N5OJvIj4k=",
"requires": {
"bytes": "3.0.0",
"http-errors": "1.6.2",
"iconv-lite": "0.4.19",
"unpipe": "1.0.0"
}
},
"safe-buffer": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.1.tgz",
"integrity": "sha512-kKvNJn6Mm93gAczWVJg7wH+wGYWNrDHdWvpUmHyEsgCtIwwo3bqPtV4tR5tuPaUhTOo/kvhVwd8XwwOllGYkbg=="
},
"send": {
"version": "0.16.1",
"resolved": "https://registry.npmjs.org/send/-/send-0.16.1.tgz",
"integrity": "sha512-ElCLJdJIKPk6ux/Hocwhk7NFHpI3pVm/IZOYWqUmoxcgeyM+MpxHHKhb8QmlJDX1pU6WrgaHBkVNm73Sv7uc2A==",
"requires": {
"debug": "2.6.9",
"depd": "1.1.1",
"destroy": "1.0.4",
"encodeurl": "1.0.1",
"escape-html": "1.0.3",
"etag": "1.8.1",
"fresh": "0.5.2",
"http-errors": "1.6.2",
"mime": "1.4.1",
"ms": "2.0.0",
"on-finished": "2.3.0",
"range-parser": "1.2.0",
"statuses": "1.3.1"
}
},
"serve-static": {
"version": "1.13.1",
"resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.13.1.tgz",
"integrity": "sha512-hSMUZrsPa/I09VYFJwa627JJkNs0NrfL1Uzuup+GqHfToR2KcsXFymXSV90hoyw3M+msjFuQly+YzIH/q0MGlQ==",
"requires": {
"encodeurl": "1.0.1",
"escape-html": "1.0.3",
"parseurl": "1.3.2",
"send": "0.16.1"
}
},
"setprototypeof": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.0.tgz",
"integrity": "sha512-BvE/TwpZX4FXExxOxZyRGQQv651MSwmWKZGqvmPcRIjDqWub67kTKuIMx43cZZrS/cBBzwBcNDWoFxt2XEFIpQ=="
},
"socket.io": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/socket.io/-/socket.io-2.0.4.tgz",
"integrity": "sha1-waRZDO/4fs8TxyZS8Eb3FrKeYBQ=",
"requires": {
"debug": "2.6.9",
"engine.io": "3.1.4",
"socket.io-adapter": "1.1.1",
"socket.io-client": "2.0.4",
"socket.io-parser": "3.1.2"
}
},
"socket.io-adapter": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/socket.io-adapter/-/socket.io-adapter-1.1.1.tgz",
"integrity": "sha1-KoBeihTWNyEk3ZFZrUUC+MsH8Gs="
},
"socket.io-client": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/socket.io-client/-/socket.io-client-2.0.4.tgz",
"integrity": "sha1-CRilUkBtxeVAs4Dc2Xr8SmQzL44=",
"requires": {
"backo2": "1.0.2",
"base64-arraybuffer": "0.1.5",
"component-bind": "1.0.0",
"component-emitter": "1.2.1",
"debug": "2.6.9",
"engine.io-client": "3.1.4",
"has-cors": "1.1.0",
"indexof": "0.0.1",
"object-component": "0.0.3",
"parseqs": "0.0.5",
"parseuri": "0.0.5",
"socket.io-parser": "3.1.2",
"to-array": "0.1.4"
}
},
"socket.io-parser": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-3.1.2.tgz",
"integrity": "sha1-28IoIVH8T6675Aru3Ady66YZ9/I=",
"requires": {
"component-emitter": "1.2.1",
"debug": "2.6.9",
"has-binary2": "1.0.2",
"isarray": "2.0.1"
}
},
"statuses": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-1.3.1.tgz",
"integrity": "sha1-+vUbnrdKrvOzrPStX2Gr8ky3uT4="
},
"to-array": {
"version": "0.1.4",
"resolved": "https://registry.npmjs.org/to-array/-/to-array-0.1.4.tgz",
"integrity": "sha1-F+bBH3PdTz10zaek/zI46a2b+JA="
},
"type-is": {
"version": "1.6.15",
"resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.15.tgz",
"integrity": "sha1-yrEPtJCeRByChC6v4a1kbIGARBA=",
"requires": {
"media-typer": "0.3.0",
"mime-types": "2.1.17"
}
},
"ultron": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/ultron/-/ultron-1.1.1.tgz",
"integrity": "sha512-UIEXBNeYmKptWH6z8ZnqTeS8fV74zG0/eRU9VGkpzz+LIJNs8W/zM/L+7ctCkRrgbNnnR0xxw4bKOr0cW0N0Og=="
},
"unpipe": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz",
"integrity": "sha1-sr9O6FFKrmFltIF4KdIbLvSZBOw="
},
"utils-merge": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz",
"integrity": "sha1-n5VxD1CiZ5R7LMwSR0HBAoQn5xM="
},
"uws": {
"version": "0.14.5",
"resolved": "https://registry.npmjs.org/uws/-/uws-0.14.5.tgz",
"integrity": "sha1-Z6rzPEaypYel9mZtAPdpEyjxSdw=",
"optional": true
},
"vary": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz",
"integrity": "sha1-IpnwLG3tMNSllhsLn3RSShj2NPw="
},
"ws": {
"version": "3.3.3",
"resolved": "https://registry.npmjs.org/ws/-/ws-3.3.3.tgz",
"integrity": "sha512-nnWLa/NwZSt4KQJu51MYlCcSQ5g7INpOrOMt4XV8j4dqTXdmlUmSHQ8/oLC069ckre0fRsgfvsKwbTdtKLCDkA==",
"requires": {
"async-limiter": "1.0.0",
"safe-buffer": "5.1.1",
"ultron": "1.1.1"
}
},
"xmlhttprequest-ssl": {
"version": "1.5.4",
"resolved": "https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.4.tgz",
"integrity": "sha1-BPVgkVcks4kIhxXMDteBPpZ3v1c="
},
"yeast": {
"version": "0.1.2",
"resolved": "https://registry.npmjs.org/yeast/-/yeast-0.1.2.tgz",
"integrity": "sha1-AI4G2AlDIMNy28L47XagymyKxBk="
}
}
}


@@ -0,0 +1,8 @@
{
"name": "container-training-pub-sub-server",
"version": "0.0.1",
"dependencies": {
"express": "^4.16.2",
"socket.io": "^2.0.4"
}
}


@@ -0,0 +1,21 @@
/* This snippet is loaded from the workshop HTML file.
* It sets up callbacks to synchronize the local slide
* number with the remote pub/sub server.
*/
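/* Assumed setup (not shown in this diff): the workshop HTML loads
 * socket.io and this file, e.g. <script src="remote.js"></script>. */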
var socket = io();
var leader = true;
slideshow.on('showSlide', function (slide) {
if (leader) {
var n = slide.getSlideIndex()+1;
socket.emit('slide change', n);
}
});
socket.on('slide change', function (n) {
leader = false;
slideshow.gotoSlide(n);
leader = true;
});

slides/autopilot/server.js (Executable file, 41 lines)

@@ -0,0 +1,41 @@
#!/usr/bin/env node
/* This is a very simple pub/sub server, allowing remote
 * control of browsers displaying the slides.
* The browsers connect to this pub/sub server using
* Socket.IO, and the server tells them which slides
* to display.
*
* The server can be controlled with a little CLI,
* or by one of the browsers.
*/
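/* Run with "node server.js"; it listens on port 3000 (see below). */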
var express = require('express');
var app = express();
var http = require('http').Server(app);
var io = require('socket.io')(http);
app.get('/', function(req, res){
res.send('container.training autopilot pub/sub server');
});
/* Serve remote.js from the current directory */
app.use(express.static('.'));
/* Serve slides etc. from the current and the parent directories */
app.use(express.static('..'));
io.on('connection', function(socket){
console.log('a client connected: ' + socket.handshake.address);
socket.on('slide change', function(n, ack){
console.log('slide change: ' + n);
socket.broadcast.emit('slide change', n);
if (typeof ack === 'function') {
ack();
}
});
});
http.listen(3000, function(){
console.log('listening on *:3000');
});

slides/autopilot/tmux-style.sh (Executable file, 7 lines)

@@ -0,0 +1,7 @@
#!/bin/sh
# This removes the clock (and other extraneous stuff) from the
# tmux status bar, and it gives it a non-default color.
tmux set-option -g status-left ""
tmux set-option -g status-right ""
tmux set-option -g status-style bg=cyan


@@ -1,279 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
import click
import logging
import os
import random
import re
import subprocess
import sys
import time
import uuid
logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"))
interactive = True
verify_status = False
simulate_type = True
TIMEOUT = 60 # 1 minute
def hrule():
return "="*int(subprocess.check_output(["tput", "cols"]))
# A "snippet" is something that the user is supposed to do in the workshop.
# Most of the "snippets" are shell commands.
# Some of them can be key strokes or other actions.
# In the markdown source, they are the code sections (identified by triple-
# quotes) within .exercise[] sections.
class Snippet(object):
def __init__(self, slide, content):
self.slide = slide
self.content = content
self.actions = []
def __str__(self):
return self.content
class Slide(object):
current_slide = 0
def __init__(self, content):
Slide.current_slide += 1
self.number = Slide.current_slide
# Remove commented-out slides
# (remark.js considers ??? to be the separator for speaker notes)
content = re.split("\n\?\?\?\n", content)[0]
self.content = content
self.snippets = []
exercises = re.findall("\.exercise\[(.*)\]", content, re.DOTALL)
for exercise in exercises:
if "```" in exercise:
for snippet in exercise.split("```")[1::2]:
self.snippets.append(Snippet(self, snippet))
else:
logging.warning("Exercise on slide {} does not have any ``` snippet."
.format(self.number))
self.debug()
def __str__(self):
text = self.content
for snippet in self.snippets:
text = text.replace(snippet.content, ansi("7")(snippet.content))
return text
def debug(self):
logging.debug("\n{}\n{}\n{}".format(hrule(), self.content, hrule()))
def ansi(code):
return lambda s: "\x1b[{}m{}\x1b[0m".format(code, s)
def wait_for_string(s, timeout=TIMEOUT):
logging.debug("Waiting for string: {}".format(s))
deadline = time.time() + timeout
while time.time() < deadline:
output = capture_pane()
if s in output:
return
time.sleep(1)
raise Exception("Timed out while waiting for {}!".format(s))
def wait_for_prompt():
logging.debug("Waiting for prompt.")
deadline = time.time() + TIMEOUT
while time.time() < deadline:
output = capture_pane()
# If we are not at the bottom of the screen, there will be a bunch of extra \n's
output = output.rstrip('\n')
if output.endswith("\n$"):
return
if output.endswith("\n/ #"):
return
time.sleep(1)
raise Exception("Timed out while waiting for prompt!")
def check_exit_status():
if not verify_status:
return
token = uuid.uuid4().hex
data = "echo {} $?\n".format(token)
logging.debug("Sending {!r} to get exit status.".format(data))
send_keys(data)
time.sleep(0.5)
wait_for_prompt()
screen = capture_pane()
status = re.findall("\n{} ([0-9]+)\n".format(token), screen, re.MULTILINE)
logging.debug("Got exit status: {}.".format(status))
if len(status) == 0:
raise Exception("Couldn't retrieve status code {}. Timed out?".format(token))
if len(status) > 1:
raise Exception("More than one status code {}. I'm seeing double! Shoot them both.".format(token))
code = int(status[0])
if code != 0:
raise Exception("Non-zero exit status: {}.".format(code))
# Otherwise just return peacefully.
def setup_tmux_and_ssh():
if subprocess.call(["tmux", "has-session"]):
logging.info("Couldn't connect to tmux. A new tmux session will be created.")
subprocess.check_call(["tmux", "new-session", "-d"])
wait_for_string("$")
send_keys("cd ../prepare-vms\n")
send_keys("ssh docker@$(head -n1 ips.txt)\n")
wait_for_string("password:")
send_keys("training\n")
wait_for_prompt()
else:
logging.info("Found tmux session. Trying to acquire shell prompt.")
wait_for_prompt()
logging.info("Successfully connected to test cluster in tmux session.")
slides = []
content = open(sys.argv[1]).read()
for slide in re.split("\n---?\n", content):
slides.append(Slide(slide))
actions = []
for slide in slides:
for snippet in slide.snippets:
content = snippet.content
# Extract the "method" (e.g. bash, keys, ...)
# On multi-line snippets, the method is alone on the first line
# On single-line snippets, the data follows the method immediately
if '\n' in content:
method, data = content.split('\n', 1)
else:
method, data = content.split(' ', 1)
actions.append((slide, snippet, method, data))
def send_keys(data):
if simulate_type and data[0] != '^':
for key in data:
if key == ";":
key = "\\;"
subprocess.check_call(["tmux", "send-keys", key])
time.sleep(0.1*random.random())
else:
subprocess.check_call(["tmux", "send-keys", data])
def capture_pane():
return subprocess.check_output(["tmux", "capture-pane", "-p"])
setup_tmux_and_ssh()
try:
i = int(open("nextstep").read())
logging.info("Loaded next step ({}) from file.".format(i))
except Exception as e:
logging.warning("Could not read nextstep file ({}), initializing to 0.".format(e))
i = 0
while i < len(actions):
with open("nextstep", "w") as f:
f.write(str(i))
slide, snippet, method, data = actions[i]
# Remove extra spaces (we don't want them in the terminal) and carriage returns
data = data.strip()
print(hrule())
print(slide.content.replace(snippet.content, ansi(7)(snippet.content)))
print(hrule())
if interactive:
print("[{}/{}] Shall we execute that snippet above?".format(i, len(actions)))
print("y/⏎/→ Execute snippet")
print("s Skip snippet")
print("g Go to a specific snippet")
print("q Quit")
print("c Continue non-interactively until next error")
command = click.getchar()
else:
command = "y"
# For now, remove the `highlighted` sections
# (Make sure to use $() in shell snippets!)
if '`' in data:
logging.info("Stripping ` from snippet.")
data = data.replace('`', '')
if command == "s":
i += 1
elif command == "g":
i = click.prompt("Enter snippet number", type=int)
elif command == "q":
break
elif command == "c":
# continue until next timeout
interactive = False
elif command in ("y", "\r", " ", "\x1b[C"):
logging.info("Running with method {}: {}".format(method, data))
if method == "keys":
send_keys(data)
elif method == "bash":
# Make sure that we're ready
wait_for_prompt()
# Strip leading spaces
data = re.sub("\n +", "\n", data)
# Add "RETURN" at the end of the command :)
data += "\n"
# Send command
send_keys(data)
# Force a short sleep to avoid race condition
time.sleep(0.5)
_, _, next_method, next_data = actions[i+1]
if next_method == "wait":
wait_for_string(next_data)
elif next_method == "longwait":
wait_for_string(next_data, 10*TIMEOUT)
else:
wait_for_prompt()
# Verify return code FIXME should be optional
check_exit_status()
elif method == "copypaste":
screen = capture_pane()
matches = re.findall(data, screen, flags=re.DOTALL)
if len(matches) == 0:
raise Exception("Could not find regex {} in output.".format(data))
# Arbitrarily get the most recent match
match = matches[-1]
# Remove line breaks (like a screen copy paste would do)
match = match.replace('\n', '')
send_keys(match + '\n')
# FIXME: we should factor out the "bash" method
wait_for_prompt()
check_exit_status()
elif method == "open":
# Cheap way to get node1's IP address
screen = capture_pane()
ipaddr = re.findall("^\[(.*)\]", screen, re.MULTILINE)[-1]
url = data.replace("/node1", "/{}".format(ipaddr))
# This should probably be adapted to run on different OS
subprocess.check_call(["open", url])
else:
logging.warning("Unknown method {}: {!r}".format(method, data))
i += 1
else:
logging.warning("Unknown command {}.".format(command))
# Reset slide counter
with open("nextstep", "w") as f:
f.write(str(0))


@@ -0,0 +1,48 @@
## About these slides
- All the content is available in a public GitHub repository:
https://github.com/jpetazzo/container.training
- You can get updated "builds" of the slides there:
http://container.training/
<!--
.exercise[
```open https://github.com/jpetazzo/container.training```
```open http://container.training/```
]
-->
--
- Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...
.footnote[.emoji[👇] Try it! The source file will be shown, and you can view it on GitHub, fork it, and edit it.]
<!--
.exercise[
```open https://github.com/jpetazzo/container.training/tree/master/slides/common/about-slides.md```
]
-->
---
class: extra-details
## Extra details
- This slide has a little magnifying glass in the top left corner
- This magnifying glass indicates slides that provide extra details
- Feel free to skip them if:
- you are in a hurry
- you are new to this and want to avoid cognitive overload
- you want only the most essential information
- You can review these slides another time if you want; they'll be waiting for you ☺


@@ -0,0 +1,12 @@
## Clean up
- Before moving on, let's remove those containers
.exercise[
- Tell Compose to remove everything:
```bash
docker-compose down
```
]


@@ -0,0 +1,240 @@
## Restarting in the background
- Many flags and commands of Compose are modeled after those of `docker`
.exercise[
- Start the app in the background with the `-d` option:
```bash
docker-compose up -d
```
- Check that our app is running with the `ps` command:
```bash
docker-compose ps
```
]
`docker-compose ps` also shows the ports exposed by the application.
---
class: extra-details
## Viewing logs
- The `docker-compose logs` command works like `docker logs`
.exercise[
- View all logs since container creation and exit when done:
```bash
docker-compose logs
```
- Stream container logs, starting at the last 10 lines for each container:
```bash
docker-compose logs --tail 10 --follow
```
<!--
```wait units of work done```
```keys ^C```
-->
]
Tip: use `^S` and `^Q` to pause/resume log output.
---
class: extra-details
## Upgrading from Compose 1.6
.warning[The `logs` command has changed between Compose 1.6 and 1.7!]
- Up to 1.6
- `docker-compose logs` is the equivalent of `logs --follow`
- `docker-compose logs` must be restarted if containers are added
- Since 1.7
- `--follow` must be specified explicitly
- new containers are automatically picked up by `docker-compose logs`
---
## Scaling up the application
- Our goal is to make that performance graph go up (without changing a line of code!)
--
- Before trying to scale the application, we'll figure out if we need more resources
(CPU, RAM...)
- For that, we will use good old UNIX tools on our Docker node
---
## Looking at resource usage
- Let's look at CPU, memory, and I/O usage
.exercise[
- run `top` to see CPU and memory usage (you should see idle cycles)
<!--
```bash top```
```wait Tasks```
```keys ^C```
-->
- run `vmstat 1` to see I/O usage (si/so/bi/bo)
<br/>(the 4 numbers should be almost zero, except `bo` for logging)
<!--
```bash vmstat 1```
```wait memory```
```keys ^C```
-->
]
We have available resources.
- Why?
- How can we use them?
---
## Scaling workers on a single node
- Docker Compose supports scaling
- Let's scale `worker` and see what happens!
.exercise[
- Start one more `worker` container:
```bash
docker-compose scale worker=2
```
- Look at the performance graph (it should show a 2x improvement)
- Look at the aggregated logs of our containers (`worker_2` should show up)
- Look at the impact on CPU load with e.g. top (it should be negligible)
]
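Note: on recent versions of Compose, `docker-compose scale` is deprecated in favor of a flag on `up`; if the command above complains, the following should be equivalent:
```bash
docker-compose up -d --scale worker=2
```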
---
## Adding more workers
- Great, let's add more workers and call it a day, then!
.exercise[
- Start eight more `worker` containers:
```bash
docker-compose scale worker=10
```
- Look at the performance graph: does it show a 10x improvement?
- Look at the aggregated logs of our containers
- Look at the impact on CPU load and memory usage
]
---
# Identifying bottlenecks
- You should have seen a 3x speed bump (not 10x)
- Adding workers didn't result in linear improvement
- *Something else* is slowing us down
--
- ... But what?
--
- The code doesn't have instrumentation
- Let's use state-of-the-art HTTP performance analysis!
<br/>(i.e. good old tools like `ab`, `httping`...)
---
## Accessing internal services
- `rng` and `hasher` are exposed on ports 8001 and 8002
- This is declared in the Compose file:
```yaml
...
rng:
build: rng
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
...
```
---
## Measuring latency under load
We will use `httping`.
.exercise[
- Check the latency of `rng`:
```bash
httping -c 3 localhost:8001
```
- Check the latency of `hasher`:
```bash
httping -c 3 localhost:8002
```
]
`rng` has a much higher latency than `hasher`.
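If `httping` is not available, `ab` (mentioned earlier) can give a similar signal, assuming both services answer GET requests on `/`:
```bash
# 100 requests, 10 at a time, against each service
ab -n 100 -c 10 http://localhost:8001/
ab -n 100 -c 10 http://localhost:8002/
```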
---
## Let's draw hasty conclusions
- The bottleneck seems to be `rng`
- *What if* we don't have enough entropy and can't generate enough random numbers?
- We need to scale out the `rng` service on multiple machines!
Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.
(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
<br/>
...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)

View File

@@ -8,7 +8,7 @@
- Imperative:
*Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in cup.*
*Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.*
--

View File

@@ -20,17 +20,17 @@
---
class: extra-details
class: title
## Extra details
*Tell me and I forget.*
<br/>
*Teach me and I remember.*
<br/>
*Involve me and I learn.*
- This slide should have a little magnifying glass in the top left corner
Misattributed to Benjamin Franklin
(If it doesn't, it's because CSS is hard — Jérôme is only a backend person, alas)
- Slides with that magnifying glass indicate slides providing extra details
- Feel free to skip them if you're in a hurry!
[(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/)
---
@@ -50,7 +50,9 @@ class: extra-details
- Go to [container.training](http://container.training/) to view these slides
- Join the chat room on @@CHAT@@
- Join the chat room: @@CHAT@@
<!-- ```open http://container.training/``` -->
]
@@ -64,15 +66,15 @@ class: in-person
class: in-person, pic
![You get five VMs](images/you-get-five-vms.jpg)
![You get a cluster](images/you-get-a-cluster.jpg)
---
class: in-person
## You get five VMs
## You get a cluster of cloud VMs
- Each person gets 5 private VMs (not shared with anybody else)
- Each person gets a private cluster of cloud VMs (not shared with anybody else)
- They'll remain up for the duration of the workshop
@@ -123,7 +125,50 @@ class: in-person
works pretty well
- Nice-to-have: [Mosh](https://mosh.org/) instead of SSH, if your internet connection tends to lose packets
<br/>(available with `(apt|yum|brew) install mosh`; then connect with `mosh user@host`)
---
class: in-person, extra-details
## What is this Mosh thing?
*You don't have to use Mosh or even know about it to follow along.
<br/>
We're just telling you about it because some of us think it's cool!*
- Mosh is "the mobile shell"
- It is essentially SSH over UDP, with roaming features
- It retransmits packets quickly, so it works great even on lossy connections
(Like hotel or conference WiFi)
- It has intelligent local echo, so it works great even in high-latency connections
(Like hotel or conference WiFi)
- It supports transparent roaming when your client IP address changes
(Like when you hop from hotel to conference WiFi)
---
class: in-person, extra-details
## Using Mosh
- To install it: `(apt|yum|brew) install mosh`
- It has been pre-installed on the VMs that we are using
- To connect to a remote machine: `mosh user@host`
(It is going to establish an SSH connection, then hand off to UDP)
- It requires UDP ports to be open
(By default, it uses a UDP port between 60000 and 61000)
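For example, if only a single UDP port is open in a firewall, the server port can be pinned (the port number below is arbitrary):
```bash
mosh -p 60001 user@host
```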
---
@@ -133,18 +178,18 @@ class: in-person
.exercise[
- Log into the first VM (`node1`) with SSH or MOSH
- Log into the first VM (`node1`) with your SSH client
<!--
```bash
for N in $(seq 1 5); do
ssh -o StrictHostKeyChecking=no node$N true
for N in $(awk '/\Wnode/{print $2}' /etc/hosts); do
ssh -o StrictHostKeyChecking=no $N true
done
```
```bash
if which kubectl; then
kubectl get all -o name | grep -v services/kubernetes | xargs -n1 kubectl delete
kubectl get all -o name | grep -v service/kubernetes | xargs -n1 kubectl delete
fi
```
-->
@@ -153,7 +198,7 @@ fi
```bash
ssh node2
```
- Type `exit` or `^D` to come back to node1
- Type `exit` or `^D` to come back to `node1`
<!-- ```bash exit``` -->
@@ -183,6 +228,32 @@ If anything goes wrong — ask for help!
---
class: self-paced
## Get your own Docker nodes
- If you already have some Docker nodes: great!
- If not: let's get some, thanks to Play-With-Docker
.exercise[
- Go to http://www.play-with-docker.com/
- Log in
- Create your first node
<!-- ```open http://www.play-with-docker.com/``` -->
]
You will need a Docker ID to use Play-With-Docker.
(Creating a Docker ID is free.)
---
## We will (mostly) interact with node1 only
*These remarks apply only when using multiple nodes, of course.*
@@ -218,6 +289,14 @@ You are welcome to use the method that you feel the most comfortable with.
## Tmux cheatsheet
[Tmux](https://en.wikipedia.org/wiki/Tmux) is a terminal multiplexer like `screen`.
*You don't have to use it or even know about it to follow along.
<br/>
But some of us like to use it to switch between terminals.
<br/>
It has been preinstalled on your workshop nodes.*
- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
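For instance, a typical workflow with named sessions (the session name is arbitrary):
```bash
tmux new -s workshop     # start a new, named session
# ... work, then detach with Ctrl-b d ...
tmux attach -t workshop  # re-attach to it later
```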

View File

@@ -1,5 +1,60 @@
# Our sample application
- We will clone the GitHub repository onto our `node1`
- The repository also contains scripts and tools that we will use through the workshop
.exercise[
<!--
```bash
if [ -d container.training ]; then
mv container.training container.training.$$
fi
```
-->
- Clone the repository on `node1`:
```bash
git clone git://github.com/jpetazzo/container.training
```
]
(You can also fork the repository on GitHub and clone your fork if you prefer that.)
---
## Downloading and running the application
Let's start this before we look around, as downloading will take a little time...
.exercise[
- Go to the `dockercoins` directory, in the cloned repo:
```bash
cd ~/container.training/dockercoins
```
- Use Compose to build and run all containers:
```bash
docker-compose up
```
<!--
```longwait units of work done```
-->
]
Compose tells Docker to build all container images (pulling
the corresponding base images), then starts all containers,
and displays aggregated logs.
---
## More detail on our sample application
- Visit the GitHub repository with all the materials of this workshop:
<br/>https://github.com/jpetazzo/container.training
@@ -39,21 +94,15 @@ class: extra-details
---
## Links, naming, and service discovery
## Service discovery in container-land
- Containers can have network aliases (resolvable through DNS)
- We do not hard-code IP addresses in the code
- Compose file version 2+ makes each container reachable through its service name
- We do not hard-code FQDNs in the code, either
- Compose file version 1 did require "links" sections
- We just connect to a service name, and container-magic does the rest
- Our code can connect to services using their short name
(instead of e.g. IP address or FQDN)
- Network aliases are automatically namespaced
(i.e. you can have multiple apps declaring and using a service named `database`)
(And by container-magic, we mean "a crafty, dynamic, embedded DNS server")
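To see that resolution at work, we can ask a container to resolve a service name (assuming, as in DockerCoins, that the `worker` image ships a Python interpreter):
```bash
docker-compose exec worker \
  python -c 'import socket; print(socket.gethostbyname("rng"))'
```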
---
@@ -80,6 +129,26 @@ https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187
---
class: extra-details
## Links, naming, and service discovery
- Containers can have network aliases (resolvable through DNS)
- Compose file version 2+ makes each container reachable through its service name
- Compose file version 1 did require "links" sections
- Network aliases are automatically namespaced
- you can have multiple apps declaring and using a service named `database`
- containers in the blue app will resolve `database` to the IP of the blue database
- containers in the green app will resolve `database` to the IP of the green database
---
## What's this application?
--
@@ -106,156 +175,22 @@ https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187
---
## Getting the application source code
## Our application at work
- We will clone the GitHub repository
- On the left-hand side, the "rainbow strip" shows the container names
- The repository also contains scripts and tools that we will use through the workshop
.exercise[
<!--
```bash
if [ -d container.training ]; then
mv container.training container.training.$$
fi
```
-->
- Clone the repository on `node1`:
```bash
git clone git://github.com/jpetazzo/container.training
```
]
(You can also fork the repository on GitHub and clone your fork if you prefer that.)
---
# Running the application
Without further ado, let's start our application.
.exercise[
- Go to the `dockercoins` directory, in the cloned repo:
```bash
cd ~/container.training/dockercoins
```
- Use Compose to build and run all containers:
```bash
docker-compose up
```
<!--
```longwait units of work done```
```keys ^C```
-->
]
Compose tells Docker to build all container images (pulling
the corresponding base images), then starts all containers,
and displays aggregated logs.
---
## Lots of logs
- The application continuously generates logs
- On the right-hand side, we see the output of our containers
- We can see the `worker` service making requests to `rng` and `hasher`
- Let's put that in the background
.exercise[
- Stop the application by hitting `^C`
]
- `^C` stops all containers by sending them the `TERM` signal
- Some containers exit immediately, others take longer
<br/>(because they don't handle `SIGTERM` and end up being killed after a 10s timeout)
---
## Restarting in the background
- Many flags and commands of Compose are modeled after those of `docker`
.exercise[
- Start the app in the background with the `-d` option:
```bash
docker-compose up -d
```
- Check that our app is running with the `ps` command:
```bash
docker-compose ps
```
]
`docker-compose ps` also shows the ports exposed by the application.
---
class: extra-details
## Viewing logs
- The `docker-compose logs` command works like `docker logs`
.exercise[
- View all logs since container creation and exit when done:
```bash
docker-compose logs
```
- Stream container logs, starting at the last 10 lines for each container:
```bash
docker-compose logs --tail 10 --follow
```
<!--
```wait units of work done```
```keys ^C```
-->
]
Tip: use `^S` and `^Q` to pause/resume log output.
---
class: extra-details
## Upgrading from Compose 1.6
.warning[The `logs` command has changed between Compose 1.6 and 1.7!]
- Up to 1.6
- `docker-compose logs` is the equivalent of `logs --follow`
- `docker-compose logs` must be restarted if containers are added
- Since 1.7
- `--follow` must be specified explicitly
- new containers are automatically picked up by `docker-compose logs`
- For `rng` and `hasher`, we see HTTP access logs
---
## Connecting to the web UI
- "Logs are exciting and fun!" (No-one, ever)
- The `webui` container exposes a web dashboard; let's view it
.exercise[
@@ -294,7 +229,7 @@ work on a local environment, or when using Docker4Mac or Docker4Windows.
How to fix this?
Edit `dockercoins.yml` and comment out the `volumes` section, and try again.
Stop the app with `^C`, edit `dockercoins.yml`, comment out the `volumes` section, and try again.
---
@@ -338,191 +273,31 @@ class: extra-details
class: extra-details
- Jérôme is clearly incapable of writing good frontend code
- "I'm clearly incapable of writing good frontend code!" 😀 — Jérôme
---
## Scaling up the application
## Stopping the application
- Our goal is to make that performance graph go up (without changing a line of code!)
- If we interrupt Compose (with `^C`), it will politely ask the Docker Engine to stop the app
--
- The Docker Engine will send a `TERM` signal to the containers
- Before trying to scale the application, we'll figure out if we need more resources
(CPU, RAM...)
- For that, we will use good old UNIX tools on our Docker node
---
## Looking at resource usage
- Let's look at CPU, memory, and I/O usage
- If the containers do not exit in a timely manner, the Engine sends a `KILL` signal
.exercise[
- Run `top` to see CPU and memory usage (you should see idle cycles)
- Stop the application by hitting `^C`
<!--
```bash top```
```wait Tasks```
```keys ^C```
-->
- Run `vmstat 1` to see I/O usage (si/so/bi/bo)
<br/>(the 4 numbers should be almost zero, except `bo` for logging)
<!--
```bash vmstat 1```
```wait memory```
```keys ^C```
-->
]
We have available resources.
- Why?
- How can we use them?
---
## Scaling workers on a single node
- Docker Compose supports scaling
- Let's scale `worker` and see what happens!
.exercise[
- Start one more `worker` container:
```bash
docker-compose scale worker=2
```
- Look at the performance graph (it should show a 2x improvement)
- Look at the aggregated logs of our containers (`worker_2` should show up)
- Look at the impact on CPU load with e.g. top (it should be negligible)
]
---
## Adding more workers
- Great, let's add more workers and call it a day, then!
.exercise[
- Start eight more `worker` containers:
```bash
docker-compose scale worker=10
```
- Look at the performance graph: does it show a 10x improvement?
- Look at the aggregated logs of our containers
- Look at the impact on CPU load and memory usage
]
---
# Identifying bottlenecks
- You should have seen a 3x speed bump (not 10x)
- Adding workers didn't result in linear improvement
- *Something else* is slowing us down
--
- ... But what?
Some containers exit immediately, others take longer.
--
The containers that do not handle `SIGTERM` end up being killed after a 10s timeout. If we are very impatient, we can hit `^C` a second time!
- The code doesn't have instrumentation
- Let's use state-of-the-art HTTP performance analysis!
<br/>(i.e. good old tools like `ab`, `httping`...)
---
## Accessing internal services
- `rng` and `hasher` are exposed on ports 8001 and 8002
- This is declared in the Compose file:
```yaml
...
rng:
build: rng
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
...
```
---
## Measuring latency under load
We will use `httping`.
.exercise[
- Check the latency of `rng`:
```bash
httping -c 10 localhost:8001
```
- Check the latency of `hasher`:
```bash
httping -c 10 localhost:8002
```
]
`rng` has a much higher latency than `hasher`.
---
## Let's draw hasty conclusions
- The bottleneck seems to be `rng`
- *What if* we don't have enough entropy and can't generate enough random numbers?
- We need to scale out the `rng` service on multiple machines!
Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.
(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
<br/>
...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)
---
## Clean up
- Before moving on, let's remove those containers
.exercise[
- Tell Compose to remove everything:
```bash
docker-compose down
```
]

View File

@@ -6,21 +6,6 @@ Thank you!
class: title, in-person
That's all folks! <br/> Questions?
That's all, folks! <br/> Questions?
![end](images/end.jpg)
---
# Links and resources
- [Docker Community Slack](https://community.docker.com/registrations/groups/4316)
- [Docker Community Forums](https://forums.docker.com/)
- [Docker Hub](https://hub.docker.com)
- [Docker Blog](http://blog.docker.com/)
- [Docker documentation](http://docs.docker.com/)
- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker)
- [Docker on Twitter](http://twitter.com/docker)
- [Play With Docker Hands-On Labs](http://training.play-with-docker.com/)
.footnote[These slides (and future updates) are on → http://container.training/]

View File

@@ -8,8 +8,12 @@ class: title, self-paced
class: title, in-person
Docker + Kubernetes = ❤️<br/></br>
@@TITLE@@<br/></br>
.footnote[
**Slides: http://kube.container.training/**
]
**WiFi: `ArtyLoft`** or **`ArtyLoft 5 GHz`**
<br/>
**Password: `TFLEVENT5`**
**Slides: http://avril2018.container.training/**
]

View File

@@ -0,0 +1,10 @@
#!/bin/sh
# Print a rough table of contents for a slide deck:
# each "# " chapter title, followed (tab-separated) by the
# number of "---" slide separators that come after it.
INPUT=$1
{
echo "# Front matter"
cat "$INPUT"
} |
grep -e "^# " -e ^---$ | uniq -c |
sed "s/^ *//" | sed s/---// |
paste -d "\t" - -

(New binary image files, not shown in the diff: four unnamed images of 9.4 KiB, 7.8 KiB, 11 KiB, and 15 KiB; slides/images/conductor.jpg, 53 KiB; slides/images/demo.jpg, 178 KiB.)

View File

@@ -0,0 +1,213 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 18.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="0 0 445 390" enable-background="new 0 0 445 390" xml:space="preserve">
<g>
<path fill="#3A4D54" d="M158.8,352.2h-25.9c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9h-19c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9
h25.3c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9h-15.9c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9h6.8c3.2,0,5.8-2.6,5.8-5.9
c0-3.2-2.6-5.9-5.8-5.9H64.9c-0.1,0-0.3,0-0.4,0c3,0.2,5.4,2.7,5.4,5.9c0,3.1-2.4,5.7-5.4,5.9c0.1,0,0.3,0,0.4,0h-0.8h-6.1
c-3.2,0-5.8,2.6-5.8,5.9s2.6,5.9,5.8,5.9H74h3.7c3.2,0,5.8,2.6,5.8,5.9c0,3.2-2.6,5.9-5.8,5.9H74H47.9c-3.2,0-5.8,2.6-5.8,5.9
s2.6,5.9,5.8,5.9h44.8H93c0,0-0.1,0-0.1,0c3.1,0.1,5.6,2.7,5.6,5.9c0,3.2-2.5,5.8-5.6,5.9c0,0,0.1,0,0.1,0h-0.2
c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9h66c3.2,0,5.8-2.6,5.8-5.9C164.6,354.8,162,352.2,158.8,352.2z"/>
<circle fill="#FBBF45" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" cx="214.6" cy="124.2" r="68.7"/>
<circle fill="#3A4D54" cx="367.5" cy="335.5" r="5.9"/>
<g>
<polygon fill="#E8593A" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" points="116.1,199.1 116.1,214.6 302.9,214.5
302.9,199.1 "/>
<rect x="159.4" y="78.6" fill="#3A4D54" width="4.2" height="50.4"/>
<rect x="174.5" y="93.8" fill="#3A4D54" width="4.2" height="35.1"/>
<rect x="280.2" y="108.2" fill="#3A4D54" width="4.2" height="20.8"/>
<rect x="190.2" y="106.9" fill="#3A4D54" width="4.2" height="22"/>
<rect x="143.3" y="59.8" fill="#3A4D54" width="4.2" height="69.1"/>
<path fill="#3A4D54" d="M294.3,107.9c3.5-2.3,6.9-4.8,10.4-7.4V87.7c-5.2,4.3-10.6,8.2-15.9,11.6c-7.8,4.9-15.1,8.5-22.4,11
c-7.9,2.8-15.7,4.3-23.4,4.7c-7.6,0.3-15.3-0.5-22.8-2.6c-6.9-1.9-13.7-4.7-20.4-8.6C188.8,97.5,178.4,89,168,77.6
c-7.7-8.4-14.7-17.7-21.6-28.2c-5-7.8-9.6-15.8-13.6-23.9c-4-8.1-6.1-13.5-6.9-16c-0.7-1.8-1-3.1-1.2-3.8l0-0.1l0.1-2.7l-0.5,0
l0-0.1H123l-8.1-0.6l-3.1-0.1l-0.1,3.4l0,0.4c0,1.2,0.2,1.9,0.3,2.5l0,0.1c0.3,1.4,0.9,3.2,1.7,5.3c1.2,3.4,3.6,9.1,7.7,17.2
c4.3,8.4,9.2,16.8,14.6,25c7.3,11.1,14.9,20.8,23.2,29.6c11.4,12.1,22.9,21.3,35.1,28.1c7.6,4.2,15.4,7.4,23.2,9.4
c7,1.8,14.2,2.7,21.4,2.7c0,0,0,0,0,0c1.6,0,3.2,0,4.7-0.1c8.7-0.5,17.6-2.4,26.4-5.6 M141.1,52.8c-5.2-7.9-10-16.1-14.2-24.4
c-4-7.9-6.3-13.4-7.5-16.6c-0.5-1.3-0.8-2.4-1.1-3.3l1,0.1c0.3,0.9,0.6,1.9,1,2.9c1.6,4.5,4.2,10.4,7.2,16.6
c4.1,8.3,8.8,16.5,13.9,24.5c5.5,8.5,11.1,16.2,17.1,23.3C152.4,68.9,146.7,61.3,141.1,52.8z"/>
<path fill="#E8593A" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" d="M340.9,53h-7.9h-4.3v8.2h-19.4V53h-4.3h-7.9
h-4.3v8.2v2.7v186.7c0,0.8,0.6,1.4,1.3,1.4h3h42.4h4.3c0.7,0,1.3-0.6,1.3-1.4V62v-0.8V53H340.9z M334.8,206.6h-31.5V152
c0-0.4,0.3-0.7,0.6-0.7h30.2c0.4,0,0.6,0.3,0.6,0.7V206.6z M334.8,142.1h-31.5V125c0-0.4,0.3-0.7,0.6-0.7h30.2
c0.4,0,0.6,0.3,0.6,0.7V142.1z M334.8,115.1h-31.5V97.9c0-0.4,0.3-0.7,0.6-0.7h30.2c0.4,0,0.6,0.3,0.6,0.7V115.1z M334.8,88h-31.5
V70.9c0-0.4,0.3-0.7,0.6-0.7h30.2c0.4,0,0.6,0.3,0.6,0.7V88z"/>
<polygon fill="#E8593A" points="272.2,203 286.7,201.1 297.2,201.1 297.2,214.6 271.7,214.6 "/>
<path fill="#E8593A" d="M298.7,96.2c-2.7,2-5.5,3.9-8.3,5.7c-7.3,4.6-15,8.5-23,11.3c-7.9,2.8-16.1,4.5-24.3,4.8
c-8.1,0.4-16.1-0.6-23.7-2.7c-7.6-2-14.6-5.1-21.1-8.9c-13-7.5-23.7-17.1-32.6-26.8c-8.9-9.8-16-19.6-21.9-28.6
c-5.8-9-10.3-17.3-13.7-24.2c-3.4-6.9-5.7-12.5-7.1-16.3c-0.7-1.9-1.1-3.3-1.3-4.2c-0.1-0.4-0.1-0.7-0.1-0.4l0,0.1
c0,0,0-0.1,0-0.1c0-0.1,0-0.1,0-0.1c0-0.1,0-0.1,0-0.1l-7-0.5c0,0,0,0,0,0.1c0,0,0,0.1,0,0.1c0,0,0,0.1,0,0.1c0,0.1,0,0.2,0,0.3
c0,0.9,0.1,1.4,0.3,2.1c0.3,1.3,0.8,2.9,1.6,5c1.5,4.1,4,9.8,7.6,16.9c3.6,7.1,8.3,15.5,14.4,24.7c6.1,9.2,13.5,19.2,22.9,29.2
c9.3,9.9,20.5,19.8,34.3,27.5c6.9,3.8,14.4,7,22.5,9.1c8,2.1,16.6,3,25.2,2.5c8.6-0.5,17.3-2.4,25.5-5.4c8.3-3,16.2-7.2,23.7-12
c2-1.3,4.1-2.7,6-4.2V96.2z"/>
<path fill="#E8593A" stroke="#3A4D54" stroke-width="4" stroke-miterlimit="10" d="M122.9,4.2h-3.2h-6.6v11.7H66.1V4.2h-4.6h-6.2
h-6.6v11.7v3.8v265.1c0,1.1,0.9,2,2,2h4.6h65.7h6.6c1.1,0,2-0.9,2-2V17v-1.1V4.2H122.9z M113.5,204.2H64.7v-59.4c0-0.6,0.4-1,1-1
h46.7c0.6,0,1,0.4,1,1V204.2z M113.5,130.8H64.7v-24.3c0-0.6,0.4-1,1-1h46.7c0.6,0,1,0.4,1,1V130.8z M113.5,92.4H64.7V68.1
c0-0.6,0.4-1,1-1h46.7c0.6,0,1,0.4,1,1V92.4z M113.5,54H64.7V29.7c0-0.6,0.4-1,1-1h46.7c0.6,0,1,0.4,1,1V54z"/>
<g>
<g>
<path fill="#2BB8EB" stroke="#3A4D54" stroke-width="5" stroke-miterlimit="10" d="M435.8,132.9H364c-1.4,0-2.6,1.3-2.6,3v44.2
c0,1.7,1.2,3,2.6,3h71.8c2.5,0,3.6-3.7,1.5-5.4l-11.4-13.5c-3.2-3.3-3.2-9,0-12.3l11.4-13.5
C439.3,136.6,438.3,132.9,435.8,132.9z"/>
<path fill="#FFFFFF" stroke="#3A4D54" stroke-width="5" stroke-miterlimit="10" d="M9.8,183.1h129.7c1.4,0,2.6-1.3,2.6-3v-44.2
c0-1.7-1.2-3-2.6-3H9.8c-2.5,0-3.6,3.7-1.5,5.4l11.4,13.5c3.2,3.3,3.2,9,0,12.3L8.3,177.7C6.2,179.4,7.3,183.1,9.8,183.1z"/>
<path fill="#FFFFFF" stroke="#3A4E55" stroke-width="5" stroke-miterlimit="10" d="M402.5,190H42.1c-3.6,0-6.5-1.1-6.5-4.6
v-54.7c0-3.6,2.9-6.5,6.5-6.5h360.4c3.6,0,6.5,2.9,6.5,6.5v52.9C409,187.1,406.1,190,402.5,190z"/>
<path fill="#2BB8EB" d="M402.5,124.2h-46.3V190h46.3c3.6,0,6.5-2.9,6.5-6.5v-52.9C409,127.1,406.1,124.2,402.5,124.2z"/>
<g>
<path fill="#FFFFFF" d="M376.2,144.3v21.3c0,1.1-0.9,2-2,2c-1.1,0-2-0.9-2-2v-17.8l-1.4,0.8c-0.3,0.2-0.7,0.3-1,0.3
c-0.7,0-1.3-0.4-1.7-1c-0.6-0.9-0.3-2.2,0.7-2.7l4.4-2.6c0,0,0.1,0,0.1-0.1c0.1,0,0.1-0.1,0.2-0.1c0.1,0,0.1,0,0.2,0
c0,0,0.1,0,0.1,0c0.1,0,0.2,0,0.3,0c0,0,0.1,0,0.1,0h0c0.1,0,0.2,0,0.3,0c0,0,0.1,0,0.1,0c0.1,0,0.1,0,0.2,0.1c0,0,0.1,0,0.1,0
c0.1,0.1,0.1,0.1,0.2,0.1c0,0,0.1,0.1,0.1,0.1c0,0,0.1,0.1,0.1,0.1c0.1,0,0.1,0.1,0.1,0.1c0,0,0.1,0.1,0.1,0.1
c0,0,0.1,0.1,0.1,0.1l0,0.1c0,0,0,0.1,0,0.1c0,0.1,0.1,0.1,0.1,0.2c0,0.1,0,0.1,0.1,0.2c0,0.1,0,0.1,0,0.2c0,0.1,0,0.2,0.1,0.3
C376.2,144.3,376.2,144.3,376.2,144.3z"/>
<path fill="#FFFFFF" d="M393.4,152.3c1.8,1.7,2.6,4.1,2.6,6.4c0,2.3-0.9,4.6-2.6,6.3c-1.7,1.8-4.1,2.6-6.3,2.6
c-0.1,0-0.1,0-0.1,0c-2.2,0-4.6-0.9-6.3-2.6c-0.8-0.8-0.8-2.1,0-2.9c0.8-0.8,2.1-0.8,2.9,0c0.9,1,2.2,1.4,3.5,1.4
c1.2,0,2.5-0.5,3.4-1.4c0.9-0.9,1.4-2.2,1.4-3.4c0-1.3-0.5-2.5-1.4-3.5c-0.9-1-2.2-1.4-3.4-1.4c-1.2,0-2.5,0.4-3.5,1.4
c-0.8,0.8-2.1,0.8-2.9,0c-0.1-0.1-0.3-0.3-0.4-0.5c0-0.1,0-0.1,0-0.1c0-0.1,0-0.1-0.1-0.2c0-0.1,0-0.2,0-0.3c0,0,0,0,0-0.1
c0-0.2,0-0.4,0-0.6l1.1-9.4c0.1-0.6,0.4-1.1,0.9-1.4c0.1,0,0.1,0,0.1-0.1c0,0,0.1,0,0.1-0.1c0.3-0.1,0.6-0.2,0.9-0.2h9.2
c1.2,0,2.1,0.9,2.1,2.1c0,1.1-0.9,2-2.1,2h-7.4l-0.4,3.6c0.8-0.2,1.6-0.3,2.4-0.3C389.4,149.7,391.7,150.6,393.4,152.3z"/>
</g>
<g>
<path fill="#3A4D54" d="M157.8,142.1L157.8,142.1l-0.9,0c-0.7,0-2.6,2-3,2.5c-1.7,1.7-3.5,3.4-5.2,5.1v-13.7
c0-1.2-0.8-2.2-2-2.2h-0.3c-1.3,0-2,1-2,2.2v29.9c0,1.2,0.8,2.2,2,2.2h0.3c1.3,0,2-1,2-2.2v-5.3l3.4,3.3c1,1,2,2,3,3
c0.5,0.5,1.3,1.3,2.1,1.3h0.4c1.1,0,1.8-0.8,2-1.8l0-0.1v-0.5c0-0.4-0.1-0.7-0.3-1c-0.2-0.3-0.5-0.6-0.7-0.8
c-0.6-0.7-1.2-1.3-1.9-1.9c-2.3-2.3-4.6-4.6-6.9-6.9l5.3-5.4c1-1.1,2.1-2.1,3.1-3.2c0.5-0.5,1.3-1.4,1.3-2.1V144
C159.6,142.9,158.9,142.3,157.8,142.1z"/>
<path fill="#3A4D54" d="M138.9,143.9l-0.2-0.1c-1.9-1.3-4.1-2-6.5-2h-0.9c-2.2,0-4.3,0.6-6.2,1.7c-4.1,2.4-6.5,6.2-6.5,11v0.9
c0,1.1,0.1,2.2,0.5,3.3c1.9,6.3,6.8,9.9,13.4,9.5c1.9-0.1,6.8-0.7,6.8-3.4v-0.4c0-1.1-0.8-1.7-1.8-1.9l-0.1,0h-0.8l-0.2,0.1
c-1.1,0.5-2.7,1.2-3.9,1.2c-1.3,0-2.9-0.1-4.2-0.7c-3.4-1.6-5.4-4.3-5.4-8c0-1.2,0.2-2.4,0.8-3.6c1.6-3.3,4.2-5.3,7.9-5.2
c0.7,0,2,0.1,2.6,0.4c0.6,0.3,2.1,1,2.7,1h0.3l0.1,0c1-0.2,1.9-0.8,1.9-1.9v-0.4c0-0.4-0.2-0.8-0.4-1.2L138.9,143.9z"/>
<path fill="#3A4D54" d="M85.2,133.7h-0.4c-1.3,0-2,1-2,2.2v9.3c-2.3-2-5.1-3.3-8.3-3.3h-0.9c-2.2,0-4.3,0.6-6.2,1.7
c-4.1,2.4-6.5,6.2-6.5,11v0.9c0,2.2,0.6,4.3,1.7,6.2c2.4,4.1,6.2,6.5,11,6.5h0.9c2.2,0,4.3-0.6,6.2-1.7c4.1-2.4,6.5-6.2,6.5-11
v-19.6C87.2,134.6,86.5,133.7,85.2,133.7z M81.6,159.3c-1.7,2.9-4.2,4.5-7.6,4.5c-1.4,0-2.7-0.4-3.9-1c-3-1.7-4.7-4.3-4.7-7.7
c0-1.2,0.2-2.4,0.8-3.6c1.6-3.3,4.3-5.2,8-5.2c1.8,0,3.4,0.5,4.9,1.6c2.4,1.7,3.8,4.1,3.8,7.1C82.8,156.5,82.4,158,81.6,159.3z
"/>
<path fill="#3A4D54" d="M103.1,141.9h-0.6c-2.2,0-4.3,0.6-6.2,1.7c-4.1,2.4-6.5,6.2-6.5,11v0.9c0,2.2,0.6,4.3,1.7,6.2
c2.4,4.1,6.2,6.5,11,6.5h0.9c2.2,0,4.3-0.6,6.2-1.7c4.1-2.4,6.5-6.2,6.5-11v-0.9c0-2-0.5-4-1.5-5.8
C112.1,144.4,108.2,141.9,103.1,141.9z M110.5,159.3c-1.7,2.8-4.2,4.5-7.5,4.5c-1.6,0-3-0.4-4.3-1.2c-2.8-1.7-4.5-4.2-4.5-7.6
c0-1.2,0.2-2.4,0.8-3.6c1.6-3.3,4.3-5.2,8-5.2c1.7,0,3.3,0.5,4.7,1.4c2.6,1.7,4.1,4.1,4.1,7.2
C111.7,156.5,111.3,158,110.5,159.3z"/>
<path fill="#3A4D54" d="M186.4,148c-1.2-2.1-3-3.7-5.2-4.8c-4-2-8.3-2.2-12.2,0.1l-0.6,0.3c-1.6,0.9-3,2.1-4,3.6
c-3,4.4-3.4,9.3-0.7,14l0.3,0.5c1.1,2,2.7,3.6,4.6,4.6c4.2,2.3,8.6,2.6,12.8,0.2l0.4-0.2c1.1-0.7,1.4-1.8,0.8-3
c-0.2-0.5-0.7-0.8-1.2-1.1l-0.1-0.1l-0.1,0c-0.8-0.1-2.9,0.8-3.8,1.2c-1.6,0.3-3.5,0.4-5.1-0.2c2.9-2.5,5.8-5.1,8.8-7.6
c1.3-1.1,2.7-2.4,4.1-3.5c1.2-0.9,2.3-2.2,1.4-3.8L186.4,148z M178.4,152.1c-3.3,2.8-6.5,5.6-9.8,8.4c-0.3-0.4-0.6-0.8-0.9-1.2
c-0.7-1.2-1.1-2.5-1.1-3.9c-0.1-3.5,1.2-6.3,4.2-8.1c2.3-1.3,4.8-1.7,7.4-0.7c1.3,0.5,2.7,1.3,3.6,2.4
C180.7,150.2,179.5,151.2,178.4,152.1z"/>
<path fill="#3A4D54" d="M204.2,142.1h-0.4c-2.6,0-5,0.8-7.1,2.3c-3.5,2.5-5.6,6-5.6,10.4V166c0,1.2,0.8,2.2,2,2.2h0.3
c1.3,0,2-1,2-2.2v-10.7c0-2.4,0.7-4.5,2.4-6.2c1.4-1.3,3.3-2.5,5.2-2.5c1.5,0,3.3-0.5,3.3-2.3
C206.4,142.9,205.5,142.1,204.2,142.1z"/>
</g>
<g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#2BB8EB" d="M281.3,146.6c-0.7-0.3-1.9-0.4-2.6-0.4
c-3.7-0.1-6.4,1.9-7.9,5.2c-0.5,1.1-0.8,2.3-0.8,3.6c0,3.8,2,6.4,5.4,8c1.2,0.6,2.8,0.7,4.2,0.7c1.2,0,2.9-0.7,3.9-1.2l0.2-0.1
h0.8l0.1,0c1,0.2,1.8,0.8,1.8,1.9v0.4c0,2.7-4.9,3.3-6.8,3.4c-6.6,0.5-11.6-3.2-13.4-9.5c-0.3-1.1-0.5-2.2-0.5-3.3v-0.9
c0-4.8,2.4-8.6,6.5-11c1.9-1.1,4-1.7,6.2-1.7h0.9c2.4,0,4.5,0.7,6.5,2l0.2,0.1l0.1,0.2c0.2,0.3,0.4,0.7,0.4,1.2v0.4
c0,1.1-0.8,1.7-1.9,1.9l-0.1,0H284C283.4,147.6,281.9,146.9,281.3,146.6z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#2BB8EB" d="M301.3,141.9h0.6c5.1,0,9,2.5,11.5,6.9c1,1.8,1.5,3.7,1.5,5.8
v0.9c0,4.8-2.4,8.6-6.5,11c-1.9,1.1-4,1.7-6.2,1.7h-0.9c-4.8,0-8.6-2.4-11-6.5c-1.1-1.9-1.7-4-1.7-6.2v-0.9
c0-4.8,2.4-8.6,6.5-11C297,142.4,299.1,141.9,301.3,141.9z M293,155c0,3.4,1.6,5.8,4.5,7.6c1.3,0.8,2.8,1.2,4.3,1.2
c3.3,0,5.8-1.7,7.5-4.5c0.8-1.3,1.2-2.8,1.2-4.4c0-3.1-1.5-5.5-4.1-7.2c-1.4-0.9-3-1.4-4.7-1.4c-3.7,0-6.4,1.9-8,5.2
C293.3,152.6,293,153.8,293,155z"/>
<path fill="#2BB8EB" d="M344,148.8c-2.5-4.5-6.4-6.9-11.5-6.9h-0.6c-2.2,0-4.3,0.6-6.2,1.7c-4.1,2.4-6.5,6.2-6.5,11v0.3v11
c0,1.2,0.8,2.2,2,2.2h0.3c1.3,0,2-1,2-2.2v-11h0c0-1.2,0.3-2.4,0.8-3.5c1.6-3.3,4.3-5.2,8-5.2c1.7,0,3.3,0.5,4.7,1.4
c2.6,1.7,4.1,4.1,4.1,7.2v11c0,1.2,0.8,2.2,2,2.2h0.3c1.3,0,2-1,2-2.2v-11v-0.3C345.5,152.6,345,150.6,344,148.8z"/>
</g>
</g>
<path fill="none" stroke="#3A4D54" stroke-width="5" stroke-miterlimit="10" d="M402.5,190H42.1c-3.6,0-6.5-2.9-6.5-6.5v-52.9
c0-3.6,2.9-6.5,6.5-6.5h360.4c3.6,0,6.5,2.9,6.5,6.5v52.9C409,187.1,406.1,190,402.5,190z"/>
</g>
<polygon fill="#E8593A" points="147.8,203 133.3,201.1 122.8,201.1 122.8,214.6 148.3,214.6 "/>
<rect x="353.6" y="124.2" fill="#3A4D54" width="5.1" height="55.2"/>
</g>
<g>
<path fill="#3A4D54" d="M91.8,293.4H20.2c-3.2,0-5.8-2.6-5.8-5.9s2.6-5.9,5.8-5.9h71.6c3.2,0,5.8,2.6,5.8,5.9S95,293.4,91.8,293.4
z"/>
</g>
<path fill="#3A4D54" d="M428.9,282.7h-83c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9h-54.7c-3.2,0-5.8,2.6-5.8,5.9
c0,3.2,2.6,5.9,5.8,5.9H308c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9h-28.9c-3.2,0-5.8,2.6-5.8,5.9c0,3.2,2.6,5.9,5.8,5.9H262
c-3.2,0-5.8,2.6-5.8,5.9s2.6,5.9,5.8,5.9h13.7c-3.2,0-5.8,2.6-5.8,5.9s2.6,5.9,5.8,5.9h-37.8c-3.2,0-5.8,2.6-5.8,5.9
c0,3,2.2,5.5,5.1,5.8h-48.8c-0.9-0.6-2-1-3.2-1h-47.1c3.2,0,5.8,2.6,5.8,5.9c0,3.2-2.6,5.9-5.8,5.9h-2.8c-3.2,0-5.8,2.9-5.8,6.4
c0,3.5,2.6,6.4,5.8,6.4h58.5h7.5H286c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9H286h-2.7c-3.2,0-5.8-2.6-5.8-5.9
c0-3.2,2.6-5.9,5.8-5.9h66c0.2,0,0.4,0,0.6,0h6.7c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9h-27.2c0,0,0,0,0,0h-0.7
c-3.2,0-5.8-2.6-5.8-5.9c0-3.2,2.6-5.9,5.8-5.9h0.7h14.1c3.2,0,5.8-2.6,5.8-5.9s-2.6-5.9-5.8-5.9h0.2c-3.2,0-5.8-2.6-5.8-5.9
c0-3.2,2.6-5.9,5.8-5.9h0.7h28.9c3.2,0,5.8-2.6,5.8-5.9c0-3.2-2.6-5.9-5.8-5.9h-16.1h-0.8c0.1,0,0.3,0,0.4,0
c-3-0.2-5.4-2.7-5.4-5.9c0-3.1,2.4-5.7,5.4-5.9c-0.1,0-0.3,0-0.4,0h0.8h65.2h6.5c3.2,0,5.8-2.6,5.8-5.9
C434.6,285.3,432.1,282.7,428.9,282.7z"/>
<g>
<path id="outline_3_" fill-rule="evenodd" clip-rule="evenodd" fill="#3A4D54" d="M258,210.8h37v37.8h18.7
c8.6,0,17.5-1.5,25.7-4.3c4-1.4,8.5-3.3,12.5-5.6c-5.2-6.8-7.9-15.4-8.7-23.9c-1.1-11.5,1.3-26.5,9.1-35.6l3.9-4.5l4.6,3.7
c11.7,9.4,21.5,22.5,23.2,37.4c14-4.1,30.5-3.2,42.9,4l5.1,2.9l-2.7,5.2c-10.5,20.4-32.3,26.7-53.7,25.6
C343.5,333.3,273.8,371,189.4,371c-43.6,0-83.7-16.3-106.5-55l-0.4-0.6l-3.3-6.8c-7.7-17-10.3-35.7-8.5-54.4l0.5-5.6h31.6v-37.8
h37v-37h73.9v-37H258V210.8z"/>
<g id="body_colors_3_">
<path fill="#08AADA" d="M377.8,224.8c2.5-19.3-11.9-34.4-20.9-41.6c-10.3,11.9-11.9,43.1,4.3,56.3c-9,8-28,15.3-47.5,15.3H76.8
c-1.9,20.3,1.7,39,9.8,55l2.7,4.9c1.7,2.9,3.6,5.7,5.6,8.4h0c9.7,0.6,18.7,0.8,26.9,0.7c0,0,0,0,0,0c16.1-0.4,29.3-2.3,39.3-5.7
c1.5-0.5,3.1,0.3,3.6,1.8c0.5,1.5-0.3,3.1-1.8,3.6c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0c-7.9,2.2-16.3,3.8-27.2,4.4
c0.6,0-0.7,0.1-0.7,0.1c-0.4,0-0.8,0.1-1.2,0.1c-4.3,0.2-8.9,0.3-13.6,0.3c-5.2,0-10.3-0.1-15.9-0.4l-0.1,0.1
c19.7,22.2,50.6,35.5,89.3,35.5c81.9,0,151.3-36.3,182.1-117.8c21.8,2.2,42.8-3.3,52.3-21.9C408.6,216.4,389,219.2,377.8,224.8z"
/>
<path fill="#2BB8EB" d="M377.8,224.8c2.5-19.3-11.9-34.4-20.9-41.6c-10.3,11.9-11.9,43.1,4.3,56.3c-9,8-28,15.3-47.5,15.3H90.8
c-1,31.1,10.6,54.7,31,69c0,0,0,0,0,0c16.1-0.4,29.3-2.3,39.3-5.7c1.5-0.5,3.1,0.3,3.6,1.8c0.5,1.5-0.3,3.1-1.8,3.6
c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0c-7.9,2.2-17,3.9-27.9,4.6c0,0-0.3-0.3-0.3-0.3c27.9,14.3,68.3,14.2,114.6-3.6
c51.9-20,100.3-58,134-101.5C378.8,224.3,378.3,224.6,377.8,224.8z"/>
<path fill="#088CB9" d="M76.6,279.5c1.5,10.9,4.7,21.1,9.4,30.4l2.7,4.9c1.7,2.9,3.6,5.7,5.6,8.4c9.7,0.6,18.7,0.8,26.9,0.7
c16.1-0.4,29.3-2.3,39.3-5.7c1.5-0.5,3.1,0.3,3.6,1.8c0.5,1.5-0.3,3.1-1.8,3.6c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0
c-7.9,2.2-17,3.9-27.8,4.5c-0.4,0-1,0-1.4,0c-4.3,0.2-8.9,0.4-13.6,0.4c-5.2,0-10.4-0.1-16.1-0.4c19.7,22.2,50.8,35.5,89.5,35.5
c70.1,0,131.1-26.6,166.5-85.4H76.6z"/>
<path fill="#069BC6" d="M92.9,279.5c4.2,19.1,14.3,34.1,28.9,44.3c16.1-0.4,29.3-2.3,39.3-5.7c1.5-0.5,3.1,0.3,3.6,1.8
c0.5,1.5-0.3,3.1-1.8,3.6c-1.3,0.5-2.7,0.9-4.1,1.3c0,0,0,0,0,0c-7.9,2.2-17.2,3.9-28,4.5c27.9,14.3,68.2,14.1,114.5-3.7
c28-10.8,55-26.8,79.2-46.1H92.9z"/>
</g>
<g id="Containers_3_">
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M135.8,219.7h2.5v26.7h-2.5V219.7z M130.9,219.7h2.6v26.7h-2.6
V219.7z M126.1,219.7h2.6v26.7h-2.6V219.7z M121.2,219.7h2.6v26.7h-2.6V219.7z M116.3,219.7h2.6v26.7h-2.6V219.7z M111.6,219.7
h2.5v26.7h-2.5V219.7z M108.9,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M172.7,182.7h2.5v26.7h-2.5V182.7z M167.9,182.7h2.6v26.7h-2.6
V182.7z M163,182.7h2.6v26.7H163V182.7z M158.2,182.7h2.6v26.7h-2.6V182.7z M153.3,182.7h2.6v26.7h-2.6V182.7z M148.6,182.7h2.5
v26.7h-2.5V182.7z M145.9,180h32v32h-32V180z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M172.7,219.7h2.5v26.7h-2.5V219.7z M167.9,219.7h2.6v26.7h-2.6
V219.7z M163,219.7h2.6v26.7H163V219.7z M158.2,219.7h2.6v26.7h-2.6V219.7z M153.3,219.7h2.6v26.7h-2.6V219.7z M148.6,219.7h2.5
v26.7h-2.5V219.7z M145.9,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M209.7,219.7h2.5v26.7h-2.5V219.7z M204.8,219.7h2.6v26.7h-2.6
V219.7z M200,219.7h2.6v26.7H200V219.7z M195.1,219.7h2.6v26.7h-2.6V219.7z M190.3,219.7h2.6v26.7h-2.6V219.7z M185.5,219.7h2.5
v26.7h-2.5V219.7z M182.9,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M209.7,182.7h2.5v26.7h-2.5V182.7z M204.8,182.7h2.6v26.7h-2.6
V182.7z M200,182.7h2.6v26.7H200V182.7z M195.1,182.7h2.6v26.7h-2.6V182.7z M190.3,182.7h2.6v26.7h-2.6V182.7z M185.5,182.7h2.5
v26.7h-2.5V182.7z M182.9,180h32v32h-32V180z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M246.7,219.7h2.5v26.7h-2.5V219.7z M241.8,219.7h2.6v26.7h-2.6
V219.7z M237,219.7h2.6v26.7H237V219.7z M232.1,219.7h2.6v26.7h-2.6V219.7z M227.3,219.7h2.6v26.7h-2.6V219.7z M222.5,219.7h2.5
v26.7h-2.5V219.7z M219.8,217h32v32h-32V217z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M246.7,182.7h2.5v26.7h-2.5V182.7z M241.8,182.7h2.6v26.7h-2.6
V182.7z M237,182.7h2.6v26.7H237V182.7z M232.1,182.7h2.6v26.7h-2.6V182.7z M227.3,182.7h2.6v26.7h-2.6V182.7z M222.5,182.7h2.5
v26.7h-2.5V182.7z M219.8,180h32v32h-32V180z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#26C2EE" d="M246.7,145.7h2.5v26.7h-2.5V145.7z M241.8,145.7h2.6v26.7h-2.6
V145.7z M237,145.7h2.6v26.7H237V145.7z M232.1,145.7h2.6v26.7h-2.6V145.7z M227.3,145.7h2.6v26.7h-2.6V145.7z M222.5,145.7h2.5
v26.7h-2.5V145.7z M219.8,143.1h32v32h-32V143.1z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#00ACD2" d="M283.6,219.7h2.5v26.7h-2.5V219.7z M278.8,219.7h2.6v26.7h-2.6
V219.7z M273.9,219.7h2.6v26.7h-2.6V219.7z M269.1,219.7h2.6v26.7h-2.6V219.7z M264.2,219.7h2.6v26.7h-2.6V219.7z M259.5,219.7
h2.5v26.7h-2.5V219.7z M256.8,217h32v32h-32V217z"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#D4EDF1" d="M175.9,301c4.9,0,8.8,4,8.8,8.8s-4,8.8-8.8,8.8
c-4.9,0-8.8-4-8.8-8.8S171,301,175.9,301"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#3A4D54" d="M175.9,303.5c0.8,0,1.6,0.2,2.3,0.4c-0.8,0.4-1.3,1.3-1.3,2.2
c0,1.4,1.2,2.6,2.6,2.6c1,0,1.8-0.5,2.3-1.3c0.3,0.7,0.5,1.6,0.5,2.4c0,3.5-2.8,6.3-6.3,6.3c-3.5,0-6.3-2.8-6.3-6.3
C169.6,306.3,172.4,303.5,175.9,303.5"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#3A4D54" d="M19.6,282.7h193.6h23.9h190.5c0.4,0,1.6,0.1,1.2,0
c-9.2-2.2-24.9-6.2-23.5-15.8c0.1-0.7-0.2-0.8-0.6-0.3c-16.6,17.5-54.1,12.2-64.3,3.2c-0.2-0.1-0.4-0.1-0.5,0.1
c-11.5,15.4-73.3,9.7-79.3-2.3c-0.1-0.2-0.4-0.3-0.6-0.1c-14.1,15.7-55.7,15.7-69.8,0c-0.2-0.2-0.5-0.1-0.6,0.1
c-6,12-67.8,17.7-79.3,2.3c-0.1-0.2-0.3-0.2-0.5-0.1c-10.1,8.9-44.5,14.3-61.2-3c-0.3-0.3-0.8-0.1-0.8,0.4
C48.9,277.6,28.1,280.5,19.6,282.7"/>
<path fill="#C0DBE0" d="M199.4,364.7c-21.9-10.4-33.9-24.5-40.6-39.9c-8.1,2.3-17.9,3.8-29.3,4.4c-4.3,0.2-8.8,0.4-13.5,0.4
c-5.4,0-11.2-0.2-17.2-0.5c20.1,20.1,44.8,35.5,90.5,35.8C192.7,364.9,196.1,364.8,199.4,364.7z"/>
<path fill="#D4EDF1" d="M167,339c-3-4.1-6-9.3-8.1-14.2c-8.1,2.3-17.9,3.8-29.3,4.4C137.4,333.4,148.5,337.4,167,339z"/>
</g>
<circle fill="#3A4D54" cx="34.8" cy="311" r="5.9"/>
<path fill="#3A4D54" d="M346.8,297.2l-1-2.8c0,0,5.3-11.7-7.4-11.7c-12.7,0,3.5-4.7,3.5-4.7l21.8,2.8l9.6,6.8l-16.1,4.1
L346.8,297.2z"/>
<path fill="#3A4D54" d="M78.7,297.2l1-2.8c0,0-5.3-11.7,7.4-11.7s-3.5-4.7-3.5-4.7l-21.8,2.8l-9.6,6.8l16.1,4.1L78.7,297.2z"/>
<path fill="#3A4D54" d="M361.7,279.5v4.4l15.6,6.7l45.5-4.1l7.3-3.7c0,0-3.8-0.6-7.3-1.7c-3.6-1.1-15.2-1.6-15.2-1.6h-28.3
l-13.6,1.8L361.7,279.5z"/>
</g>
</svg>

(The SVG above renders at 20 KiB. Further new binary image files, not shown in the diff: slides/images/fu-face.jpg, 150 KiB; slides/images/tangram.gif, 12 KiB; slides/images/tesla.jpg, 484 KiB; slides/images/tetris-1.png, 8.8 KiB; slides/images/tetris-2.gif, 730 KiB; slides/images/tetris-3.png, 24 KiB; slides/images/trollface.png, 2.9 KiB; plus several unnamed images of 1.0 MiB, 12 KiB, 301 KiB, 144 KiB, 21 KiB, and 75 KiB. One 183 KiB file diff was suppressed because it is too large.)

View File

@@ -1,162 +1,29 @@
<html>
<head>
<title>Container Training</title>
<style type="text/css">
body {
background-image: url("images/container-background.jpg");
max-width: 1024px;
margin: 0 auto;
}
table {
font-size: 20px;
font-family: sans-serif;
background: white;
width: 100%;
height: 100%;
padding: 20px;
}
.header {
font-size: 300%;
font-weight: bold;
}
.title {
font-size: 150%;
font-weight: bold;
}
td {
padding: 1px;
height: 1em;
}
td.spacer {
height: unset;
}
td.footer {
padding-top: 80px;
height: 100px;
}
td.title {
border-bottom: thick solid black;
padding-bottom: 2px;
padding-top: 20px;
}
a {
text-decoration: none;
}
a:hover {
background: yellow;
}
a.attend:after {
content: "📅 attend";
}
a.slides:after {
content: "📚 slides";
}
a.chat:after {
content: "💬 chat";
}
a.video:after {
content: "📺 video";
}
</style>
<link rel="stylesheet" type="text/css" href="theme.css">
<title>Training/workshop: containers, orchestration, and Kubernetes in Paris in April</title>
</head>
<body>
<div class="main">
<table>
<tr><td class="header" colspan="4">Container Training</td></tr>
<tr><td class="title" colspan="4">Coming soon at a conference near you</td></tr>
<tr>
<td>Kubernetes enablement at Docker</td>
<td><a class="slides" href="http://kube.container.training/" /></td>
<td><a class="chat" href="https://docker.slack.com/messages/C83M572J2" /></td>
</tr>
<!--
<td><a class="attend" href="https://qconsf.com/sf2017/workshop/orchestrating-microservices-docker-swarm" /></td>
-->
<!--
<tr>
<td>Nothing for now (stay tuned...)</td>
</tr>
-->
<tr><td class="title" colspan="4">Past workshops</td></tr>
<tr>
<td>QCON SF: Orchestrating Microservices with Docker Swarm</td>
<td><a class="slides" href="http://qconsf2017swarm.container.training/" /></td>
</tr>
<tr>
<td>QCON SF: Introduction to Docker and Containers</td>
<td><a class="slides" href="http://qconsf2017intro.container.training/" /></td>
</tr>
<tr>
<td>LISA17 M7: Getting Started with Docker and Containers</td>
<td><a class="slides" href="http://lisa17m7.container.training/" /></td>
</tr>
<tr>
<td>LISA17 T9: Build, Ship, and Run Microservices on a Docker Swarm Cluster</td>
<td><a class="slides" href="http://lisa17t9.container.training/" /></td>
</tr>
<tr>
<td>Deploying and scaling microservices with Docker and Kubernetes</td>
<td><a class="slides" href="http://osseu17.container.training/" /></td>
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviLrsyydCzxWrIP_1-wkcSHS" /></td>
</tr>
<tr>
<td>DockerCon Workshop: from Zero to Hero (full day, B3 M1-2)</td>
<td><a class="slides" href="http://dc17eu.container.training/" /></td>
</tr>
<tr>
<td>DockerCon Workshop: Orchestration for Advanced Users (afternoon, B4 M5-6)</td>
<td><a class="slides" href="https://www.bretfisher.com/dockercon17eu/" /></td>
</tr>
<tr>
<td>LISA16 T1: Deploying and Scaling Applications with Docker Swarm</td>
<td><a class="slides" href="http://lisa16t1.container.training/" /></td>
<td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviIDDhr8vIwCN1wkyNGXjbbc" /></td>
</tr>
<tr>
<td>PyCon2016: Introduction to Docker and containers</td>
<td><a class="slides" href="https://us.pycon.org/2016/site_media/media/tutorial_handouts/DockerSlides.pdf" /></td>
<td><a class="video" href="https://www.youtube.com/watch?v=ZVaRK10HBjo" /></td>
</tr>
<tr><td class="title" colspan="4">Self-paced tutorials</td></tr>
<tr>
<td>Introduction to Docker and Containers</td>
<td><a class="slides" href="intro-fullday.yml.html" /></td>
</tr>
<tr>
<td>Container Orchestration with Docker and Swarm</td>
<td><a class="slides" href="swarm-selfpaced.yml.html" /></td>
</tr>
<tr>
<td>Deploying and Scaling Microservices with Docker and Kubernetes</td>
<td><a class="slides" href="kube-halfday.yml.html" /></td>
</tr>
<tr><td class="spacer"></td></tr>
<tr>
<td class="footer">
Maintained by Jérôme Petazzoni (<a href="https://twitter.com/jpetazzo">@jpetazzo</a>)
</td>
</tr>
</table>
</div>
<div class="index">
<div class="block">
<h4>Introduction to containers</h4>
<h5>From practice … to best practices</h5>
<h6>(April 11-12, 2018)</h6>
<p>
<a href="intro.yml.html">SLIDES</a>
<a href="https://gitter.im/jpetazzo/training-20180411-paris">CHATROOM</a>
</p>
</div>
<div class="block">
<h4>Introduction to orchestration</h4>
<h5>Kubernetes by example</h5>
<h6>(April 13, 2018)</h6>
<p>
<a href="kube.yml.html">SLIDES</a>
<a href="https://gitter.im/jpetazzo/training-20180413-paris">CHATROOM</a>
<a href="https://docs.google.com/spreadsheets/d/1KiuCVduTf3wf-4-vSmcK96I61WYdDP0BppkOx_XZcjM/edit?ts=5acfc2ef#gid=0">FOODMENU</a>
</p>
</div>
</div>
</body>
</html>

View File

@@ -1,10 +1,9 @@
title: |
Introduction
to Docker and
Containers
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/training-20180411-paris)"
exclude:
- self-paced
@@ -12,17 +11,18 @@ exclude:
chapters:
- common/title.md
- logistics.md
- common/intro.md
- intro/intro.md
- common/about-slides.md
- common/toc.md
- - intro/Docker_Overview.md
#- intro/Docker_History.md
- intro/Docker_History.md
- intro/Training_Environment.md
- intro/Installing_Docker.md
- intro/First_Containers.md
- - intro/First_Containers.md
- intro/Background_Containers.md
- intro/Start_And_Attach.md
- - intro/Initial_Images.md
- intro/Building_Images_Interactively.md
- intro/Initial_Images.md
- - intro/Building_Images_Interactively.md
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
@@ -30,6 +30,8 @@ chapters:
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
@@ -38,5 +40,17 @@ chapters:
- - intro/Local_Development_Workflow.md
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Docker_Machine.md
- - intro/CI_Pipeline.md
- intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Dockerfile_Samples.md
- intro/Logging.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- - intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md
- intro/links.md

View File

@@ -12,10 +12,11 @@ exclude:
chapters:
- common/title.md
# - common/logistics.md
- common/intro.md
- intro/intro.md
- common/about-slides.md
- common/toc.md
- - intro/Docker_Overview.md
#- intro/Docker_History.md
- intro/Docker_History.md
- intro/Training_Environment.md
- intro/Installing_Docker.md
- intro/First_Containers.md
@@ -30,6 +31,8 @@ chapters:
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
@@ -38,5 +41,15 @@ chapters:
- - intro/Local_Development_Workflow.md
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Docker_Machine.md
- intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Logging.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md
- intro/links.md

slides/intro.yml Symbolic link
View File

@@ -0,0 +1 @@
intro-fullday.yml

View File

@@ -94,8 +94,6 @@ RUN apt-get update && apt-get install -y wget && apt-get clean
It is also possible to break a command onto multiple lines:
It is possible to execute multiple commands in a single step:
```dockerfile
RUN apt-get update \
&& apt-get install -y wget \

View File

@@ -0,0 +1,201 @@
# Application Configuration
There are many ways to provide configuration to containerized applications.
There is no "best way" — it depends on factors like:
* configuration size,
* mandatory and optional parameters,
* scope of configuration (per container, per app, per customer, per site, etc),
* frequency of changes in the configuration.
---
## Command-line parameters
```bash
docker run jpetazzo/hamba 80 www1:80 www2:80
```
* Configuration is provided through command-line parameters.
* In the above example, the `ENTRYPOINT` is a script that will:
- parse the parameters,
- generate a configuration file,
- start the actual service.
---
## Command-line parameters pros and cons
* Appropriate for mandatory parameters (without which the service cannot start).
* Convenient for "toolbelt" services instantiated many times.
(Because there is no extra step: just run it!)
* Not great for dynamic configurations or bigger configurations.
(These things are still possible, but more cumbersome.)
---
## Environment variables
```bash
docker run -e ELASTICSEARCH_URL=http://es42:9201/ kibana
```
* Configuration is provided through environment variables.
* The environment variable can be used directly by the program,
<br/>or by a script generating a configuration file.
---
## Environment variables pros and cons
* Appropriate for optional parameters (since the image can provide default values).
* Also convenient for services instantiated many times.
(It's as easy as command-line parameters.)
* Great for services with lots of parameters, but you only want to specify a few.
(And use default values for everything else.)
* Ability to introspect possible parameters and their default values.
* Not great for dynamic configurations.
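A common pattern is an entrypoint script that falls back to default values for anything left unset; a minimal sketch (the service and variable names are made up):
```bash
#!/bin/sh
# Use $LOG_LEVEL if provided, otherwise default to "info"
LOG_LEVEL=${LOG_LEVEL:-info}
exec myservice --log-level "$LOG_LEVEL"
```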
---
## Baked-in configuration
```
FROM prometheus
COPY prometheus.conf /etc
```
* The configuration is added to the image.
* The image may have a default configuration; the new configuration can:
- replace the default configuration,
- extend it (if the code can read multiple configuration files).
---
## Baked-in configuration pros and cons
* Allows arbitrary customization and complex configuration files.
* Requires writing a configuration file. (Obviously!)
* Requires building an image to start the service.
* Requires rebuilding the image to reconfigure the service.
* Requires rebuilding the image to upgrade the service.
* Configured images can be stored in registries.
(Which is great, but requires a registry.)
---
## Configuration volume
```bash
docker run -v appconfig:/etc/appconfig myapp
```
* The configuration is stored in a volume.
* The volume is attached to the container.
* The image may have a default configuration.
(But this results in a less "obvious" setup that needs more documentation.)
---
## Configuration volume pros and cons
* Allows arbitrary customization and complex configuration files.
* Requires creating a volume for each different configuration.
* Services with identical configurations can use the same volume.
* Doesn't require building / rebuilding an image when upgrading / reconfiguring.
* Configuration can be generated or edited through another container.
---
## Dynamic configuration volume
* This is a powerful pattern for dynamic, complex configurations.
* The configuration is stored in a volume.
* The configuration is generated / updated by a special container.
* The application container detects when the configuration is changed.
(And automatically reloads the configuration when necessary.)
* The configuration can be shared between multiple services if needed.
---
## Dynamic configuration volume example
In a first terminal, start a load balancer with an initial configuration:
```bash
$ docker run --name loadbalancer jpetazzo/hamba \
80 goo.gl:80
```
In another terminal, reconfigure that load balancer:
```bash
$ docker run --rm --volumes-from loadbalancer jpetazzo/hamba reconfigure \
80 google.com:80
```
The configuration could also be updated through e.g. a REST API.
(The REST API itself being served from another container.)
---
## Keeping secrets
.warning[Ideally, you should not put secrets (passwords, tokens...) in:]
* command-line parameters or environment variables (anyone with Docker API access can get them),
* images, especially when stored in a registry.
Secrets management is better handled with an orchestrator (like Swarm or Kubernetes).
Orchestrators allow passing secrets in a "one-way" manner.
Managing secrets securely without an orchestrator can be cumbersome.
E.g.:
- read the secret on stdin when the service starts,
- pass the secret using an API endpoint.
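A minimal sketch of the stdin approach (the image name and secret file are hypothetical):
```bash
# The service reads its secret from stdin at startup; the secret
# never appears in `docker inspect` output or in the image itself
docker run -i myapp < /local/path/to/secret.txt
```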

View File

@@ -93,20 +93,22 @@ The output of `docker build` looks like this:
.small[
```bash
$ docker build -t figlet .
Sending build context to Docker daemon 2.048 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
---> e54ca5efa2e9
Step 1 : RUN apt-get update
---> Running in 840cb3533193
---> 7257c37726a1
Removing intermediate container 840cb3533193
Step 2 : RUN apt-get install figlet
---> Running in 2b44df762a2f
---> f9e8f1642759
Removing intermediate container 2b44df762a2f
Successfully built f9e8f1642759
docker build -t figlet .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM ubuntu
---> f975c5035748
Step 2/3 : RUN apt-get update
---> Running in e01b294dbffd
(...output of the RUN command...)
Removing intermediate container e01b294dbffd
---> eb8d9b561b37
Step 3/3 : RUN apt-get install figlet
---> Running in c29230d70f9b
(...output of the RUN command...)
Removing intermediate container c29230d70f9b
---> 0dfd7a253f21
Successfully built 0dfd7a253f21
Successfully tagged figlet:latest
```
]
@@ -134,20 +136,20 @@ Sending build context to Docker daemon 2.048 kB
## Executing each step
```bash
Step 1 : RUN apt-get update
---> Running in 840cb3533193
Step 2/3 : RUN apt-get update
---> Running in e01b294dbffd
(...output of the RUN command...)
---> 7257c37726a1
Removing intermediate container 840cb3533193
Removing intermediate container e01b294dbffd
---> eb8d9b561b37
```
* A container (`840cb3533193`) is created from the base image.
* A container (`e01b294dbffd`) is created from the base image.
* The `RUN` command is executed in this container.
* The container is committed into an image (`7257c37726a1`).
* The container is committed into an image (`eb8d9b561b37`).
* The build container (`840cb3533193`) is removed.
* The build container (`e01b294dbffd`) is removed.
* The output of this step will be the base image for the next one.

View File

@@ -0,0 +1,3 @@
# Building a CI pipeline
.center[![Demo](images/demo.jpg)]

View File

@@ -64,6 +64,7 @@ Let's build it:
$ docker build -t figlet .
...
Successfully built 042dff3b4a8d
Successfully tagged figlet:latest
```
And run it:
@@ -165,6 +166,7 @@ Let's build it:
$ docker build -t figlet .
...
Successfully built 36f588918d73
Successfully tagged figlet:latest
```
And run it:
@@ -223,6 +225,7 @@ Let's build it:
$ docker build -t figlet .
...
Successfully built 6e0b6a048a07
Successfully tagged figlet:latest
```
Run it without parameters:

View File

@@ -0,0 +1,177 @@
# Docker Engine and other container engines
* We are going to cover the architecture of the Docker Engine.
* We will also present other container engines.
---
class: pic
## Docker Engine external architecture
![](images/docker-engine-architecture.svg)
---
## Docker Engine external architecture
* The Engine is a daemon (service running in the background).
* All interaction is done through a REST API exposed over a socket.
* On Linux, the default socket is a UNIX socket: `/var/run/docker.sock`.
* We can also use a TCP socket, with optional mutual TLS authentication.
* The `docker` CLI communicates with the Engine over the socket.
Note: strictly speaking, the Docker API is not fully REST.
Some operations (e.g. dealing with interactive containers
and log streaming) don't fit the REST model.
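For instance, we can query the API directly, assuming a reasonably recent `curl` (7.40 or later, for `--unix-socket` support):
```bash
# Ask the Engine for its version over the UNIX socket
# (may require root, or membership in the "docker" group)
curl --unix-socket /var/run/docker.sock http://localhost/version
```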
---
class: pic
## Docker Engine internal architecture
![](images/dockerd-and-containerd.png)
---
## Docker Engine internal architecture
* Up to Docker 1.10: the Docker Engine is one single monolithic binary.
* Starting with Docker 1.11, the Engine is split into multiple parts:
- `dockerd` (REST API, auth, networking, storage)
- `containerd` (container lifecycle, controlled over a gRPC API)
- `containerd-shim` (per-container; does almost nothing, but allows restarting the Engine without restarting the containers)
- `runc` (per-container; does the actual heavy lifting to start the container)
* Some features (like image and snapshot management) are progressively being pushed from `dockerd` to `containerd`.
For more details, check [this short presentation by Phil Estes](https://www.slideshare.net/PhilEstes/diving-through-the-layers-investigating-runc-containerd-and-the-docker-engine-architecture).
---
## Other container engines
The following list is not exhaustive.
Furthermore, we limited the scope to Linux containers.
Containers also exist (sometimes with other names) on Windows, macOS, Solaris, FreeBSD ...
---
## LXC
* The venerable ancestor (first released in 2008).
* Docker initially relied on it to execute containers.
* No daemon; no central API.
* Each container is managed by a `lxc-start` process.
* Each `lxc-start` process exposes a custom API over a local UNIX socket, allowing interaction with the container.
* No notion of image (container filesystems have to be managed manually).
* Networking has to be set up manually.
---
## LXD
* Re-uses LXC code (through liblxc).
* Builds on top of LXC to offer a more modern experience.
* Daemon exposing a REST API.
* Can manage images, snapshots, migrations, networking, storage.
* "offers a user experience similar to virtual machines but using Linux containers instead."
---
## rkt
* Compares to `runc`.
* No daemon or API.
* Strong emphasis on security (through privilege separation).
* Networking has to be set up separately (e.g. through CNI plugins).
* Partial image management (pull, but no push).
(Image build is handled by separate tools.)
---
## CRI-O
* Designed to be used with Kubernetes as a simple, basic runtime.
* Compares to `containerd`.
* Daemon exposing a gRPC interface.
* Controlled using the CRI API (Container Runtime Interface defined by Kubernetes).
* Needs an underlying OCI runtime (e.g. runc).
* Handles storage, images, networking (through CNI plugins).
We're not aware of anyone using it directly (i.e. outside of Kubernetes).
---
## systemd
* "init" system (PID 1) in most modern Linux distributions.
* Offers tools like `systemd-nspawn` and `machinectl` to manage containers.
* The manual describes `systemd-nspawn` thus: "In many ways it is similar to chroot(1), but more powerful".
* `machinectl` can interact with VMs and containers managed by systemd.
* Exposes a D-Bus API.
* Basic image support (tar archives and raw disk images).
* Networking has to be set up manually.
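A minimal sketch (assuming `debootstrap` is available to build a root filesystem):
```bash
# Build a basic Debian tree, then boot it as a container
sudo debootstrap stable /srv/mycontainer
sudo systemd-nspawn -bD /srv/mycontainer
# From another terminal: list machines managed by systemd
machinectl list
```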
---
## Overall ...
* The Docker Engine is very developer-centric:
- easy to install
- easy to use
- no manual setup
- first-class image build and transfer
* As a result, it is a fantastic tool in development environments.
* On servers:
- Docker is a good default choice
- If you use Kubernetes, the engine doesn't matter


@@ -434,7 +434,7 @@ When creating a network, extra options can be provided.
* `--internal` disables outbound traffic (the network won't have a default gateway).
* `--gateway` indicates which address to use for the gateway (when outbound traffic is allowed).
* `--subnet` (in CIDR notation) indicates the subnet to use.


@@ -49,14 +49,14 @@ We will use `docker ps`:
```bash
$ docker ps
CONTAINER ID IMAGE ... PORTS ...
e40ffb406c9e nginx ... 0.0.0.0:32768->80/tcp ...
```
* The web server is running on port 80 inside the container.
* This port is mapped to port 32768 on our Docker host.
We will explain the whys and hows of this port mapping.
@@ -81,7 +81,7 @@ Make sure to use the right port number if it is different
from the example below:
```bash
$ curl localhost:32768
<!DOCTYPE html>
<html>
<head>
@@ -91,6 +91,31 @@ $ curl localhost:32769
---
## How does Docker know which port to map?
* There is metadata in the image telling "this image has something on port 80".
* We can see that metadata with `docker inspect`:
```bash
$ docker inspect nginx --format '{{.Config.ExposedPorts}}'
map[80/tcp:{}]
```
* This metadata was set in the Dockerfile, with the `EXPOSE` keyword.
* We can see that with `docker history`:
```bash
$ docker history nginx
IMAGE CREATED CREATED BY
7f70b30f2cc6 11 days ago /bin/sh -c #(nop) CMD ["nginx" "-g" "…
<missing> 11 days ago /bin/sh -c #(nop) STOPSIGNAL [SIGTERM]
<missing> 11 days ago /bin/sh -c #(nop) EXPOSE 80/tcp
```
---
## Why are we mapping ports?
* We are out of IPv4 addresses.
@@ -113,7 +138,7 @@ There is a command to help us:
```bash
$ docker port <containerID> 80
32768
```
---
@@ -128,7 +153,7 @@ $ docker run -d -p 8000:80 nginx
$ docker run -d -p 8080:80 -p 8888:80 nginx
```
* We are running three NGINX web servers.
* The first one is exposed on port 80.
* The second one is exposed on port 8000.
* The third one is exposed on ports 8080 and 8888.


@@ -0,0 +1,3 @@
# Building containers from scratch
(This is a "bonus section" done if time permits.)


@@ -0,0 +1,339 @@
# Copy-on-write filesystems
Container engines rely on copy-on-write to be able
to start containers quickly, regardless of their size.
We will explain how that works, and review some of
the copy-on-write storage systems available on Linux.
---
## What is copy-on-write?
- Copy-on-write is a mechanism for sharing data.
- The data appears to be a copy, but is only
a link (or reference) to the original data.
- The actual copy happens only when someone
tries to change the shared data.
- Whoever changes the shared data ends up
using their own copy instead of the shared data.
---
## A few metaphors
--
- First metaphor:
<br/>white board and tracing paper
--
- Second metaphor:
<br/>magic books with shadowy pages
--
- Third metaphor:
<br/>just-in-time house building
---
## Copy-on-write is *everywhere*
- Process creation with `fork()`.
- Consistent disk snapshots.
- Efficient VM provisioning.
- And, of course, containers.
---
## Copy-on-write and containers
Copy-on-write is essential to give us "convenient" containers.
- Creating a new container (from an existing image) is "free".
(Otherwise, we would have to copy the image first.)
- Customizing a container (by tweaking a few files) is cheap.
(Adding a 1 KB configuration file to a 1 GB container takes 1 KB, not 1 GB.)
- We can take snapshots, i.e. have "checkpoints" or "save points"
when building images.
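We can check the "free" part ourselves: the read-write layer of a new container starts (nearly) empty, regardless of the image size. (A sketch; `tiny` is just a throwaway container name.)
```bash
$ docker run -d --name tiny nginx
$ docker exec tiny touch /etc/hello.conf
$ docker ps -s --filter name=tiny
# The SIZE column shows a few bytes for the container's own layer,
# plus the image size as "virtual" (shared, read-only) size.
```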
---
## AUFS overview
- The original (legacy) copy-on-write filesystem used by first versions of Docker.
- Combines multiple *branches* in a specific order.
- Each branch is just a normal directory.
- You generally have:
- at least one read-only branch (at the bottom),
- exactly one read-write branch (at the top).
(But other fun combinations are possible too!)
---
## AUFS operations: opening a file
- With `O_RDONLY` - read-only access:
- look it up in each branch, starting from the top
- open the first one we find
- With `O_WRONLY` or `O_RDWR` - write access:
- if the file exists on the top branch: open it
- if the file exists on another branch: "copy up"
<br/>
(i.e. copy the file to the top branch and open the copy)
- if the file doesn't exist on any branch: create it on the top branch
That "copy-up" operation can take a while if the file is big!
---
## AUFS operations: deleting a file
- A *whiteout* file is created.
- This is similar to the concept of "tombstones" used in some data systems.
```
# docker run ubuntu rm /etc/shadow
# ls -la /var/lib/docker/aufs/diff/$(docker ps --no-trunc -lq)/etc
total 8
drwxr-xr-x 2 root root 4096 Jan 27 15:36 .
drwxr-xr-x 5 root root 4096 Jan 27 15:36 ..
-r--r--r-- 2 root root 0 Jan 27 15:36 .wh.shadow
```
---
## AUFS performance
- AUFS `mount()` is fast, so creation of containers is quick.
- Read/write access has native speeds.
- But initial `open()` is expensive in two scenarios:
- when writing big files (log files, databases ...),
- when searching many directories (PATH, classpath, etc.) over many layers.
- Protip: when we built dotCloud, we ended up putting
all important data on *volumes*.
- When starting the same container multiple times:
- the data is loaded only once from disk, and cached only once in memory;
- but `dentries` will be duplicated.
---
## Device Mapper
Device Mapper is a rich subsystem with many features.
It can be used for: RAID, encrypted devices, snapshots, and more.
In the context of containers (and Docker in particular), "Device Mapper"
means:
"the Device Mapper system + its *thin provisioning target*"
If you see the abbreviation "thinp", it stands for "thin provisioning".
---
## Device Mapper principles
- Copy-on-write happens on the *block* level
(instead of the *file* level).
- Each container and each image gets its own block device.
- At any given time, it is possible to take a snapshot:
- of an existing container (to create a frozen image),
- of an existing image (to create a container from it).
- If a block has never been written to:
- it's assumed to be all zeros,
- it's not allocated on disk.
(That last property is the reason for the name "thin" provisioning.)
---
## Device Mapper operational details
- Two storage areas are needed:
one for *data*, another for *metadata*.
- "data" is also called the "pool"; it's just a big pool of blocks.
(Docker uses the smallest possible block size, 64 KB.)
- "metadata" contains the mappings between virtual offsets (in the
snapshots) and physical offsets (in the pool).
- Each time a new block (or a copy-on-write block) is written,
a block is allocated from the pool.
- When there are no more blocks in the pool, attempts to write
will stall until the pool is increased (or the write operation
aborted).
- In other words: when running out of space, containers are
frozen, but operations will resume as soon as space is available.
---
## Device Mapper performance
- By default, Docker puts data and metadata on a loop device
backed by a sparse file.
- This is great from a usability point of view,
since zero configuration is needed.
- But it is terrible from a performance point of view:
- each time a container writes to a new block,
- a block has to be allocated from the pool,
- and when it's written to,
- a block has to be allocated from the sparse file,
- and sparse file performance isn't great anyway.
- If you use Device Mapper, make sure to put data (and metadata)
on devices!
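For reference, this is roughly what that looks like, using the documented `dm.thinpooldev` storage option (a sketch; the thin pool must have been created beforehand, e.g. with LVM, and the device name here is hypothetical):
```bash
dockerd --storage-driver devicemapper \
        --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool
```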
---
## BTRFS principles
- BTRFS is a filesystem (like EXT4, XFS, NTFS...) with built-in snapshots.
- The "copy-on-write" happens at the filesystem level.
- BTRFS integrates the snapshot and block pool management features
at the filesystem level.
(Instead of the block level for Device Mapper.)
- In practice, we create a "subvolume" and
later take a "snapshot" of that subvolume.
Imagine: `mkdir` with Super Powers and `cp -a` with Super Powers.
- These operations can be executed with the `btrfs` CLI tool.
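Here is what those two operations look like (a sketch, assuming a BTRFS filesystem mounted on `/mnt/btrfs`):
```bash
# "mkdir with Super Powers": create a subvolume
btrfs subvolume create /mnt/btrfs/base
# "cp -a with Super Powers": snapshot it (instantaneous, regardless of size)
btrfs subvolume snapshot /mnt/btrfs/base /mnt/btrfs/copy
```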
---
## BTRFS in practice with Docker
- Docker can use BTRFS and its snapshotting features to store container images.
- The only requirement is that `/var/lib/docker` is on a BTRFS filesystem.
(Or, the directory specified with the `--data-root` flag when starting the engine.)
---
class: extra-details
## BTRFS quirks
- BTRFS works by dividing its storage in *chunks*.
- A chunk can contain data or metadata.
- You can run out of chunks (and get `No space left on device`)
even though `df` shows space available.
(Because chunks are only partially allocated.)
- Quick fix:
```
# btrfs filesys balance start -dusage=1 /var/lib/docker
```
---
## Overlay2
- Overlay2 is very similar to AUFS.
- However, it has been merged into the "upstream" kernel.
- It is therefore available on all modern kernels.
(AUFS was available on Debian and Ubuntu, but required custom kernels on other distros.)
- It is simpler than AUFS (it can only have two branches, called "layers").
- The container engine abstracts this detail, so this is not a concern.
- The overlay2 storage driver generally uses hard links between layers.
- This improves `stat()` and `open()` performance, at the expense of inode usage.
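Under the hood, an overlay mount combines a read-only lower layer with a writable upper layer. A sketch with the raw kernel interface (the directories are hypothetical; the container engine normally does this for us):
```bash
# upperdir receives all writes; workdir must be an empty directory
# on the same filesystem as upperdir
mount -t overlay overlay \
      -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged
```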
---
## ZFS
- ZFS is similar to BTRFS (at least from a container user's perspective).
- Pros:
- high performance
- high reliability (with e.g. data checksums)
- optional data compression and deduplication
- Cons:
- high memory usage
- not in upstream kernel
- It is available as a kernel module or through FUSE.
---
## Which one is the best?
- Eventually, overlay2 should be the best option.
- It is available on all modern systems.
- Its memory usage is better than that of Device Mapper, BTRFS, or ZFS.
- The remarks about *write performance* shouldn't bother you:
<br/>
data should always be stored in volumes anyway!


@@ -93,7 +93,7 @@ Success!
* Older Dockerfiles also have the `ADD` instruction.
<br/>It is similar but can automatically extract archives.
* If we really wanted to compile C code in a container, we would:
* Place it in a different directory, with the `WORKDIR` instruction.


@@ -0,0 +1,81 @@
# Managing hosts with Docker Machine
- Docker Machine is a tool to provision and manage Docker hosts.
- It automates the creation of a virtual machine:
- locally, with a tool like VirtualBox or VMware;
- on a public cloud like AWS EC2, Azure, Digital Ocean, GCP, etc.;
- on a private cloud like OpenStack.
- It can also configure existing machines through an SSH connection.
- It can manage as many hosts as you want, with as many "drivers" as you want.
---
## Docker Machine workflow
1) Prepare the environment: set up VirtualBox, obtain cloud credentials ...
2) Create hosts with `docker-machine create -d drivername machinename`.
3) Use a specific machine with `eval $(docker-machine env machinename)`.
4) Profit!
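For instance, with the VirtualBox driver, the whole workflow boils down to (a sketch, assuming VirtualBox is installed; `node1` is an arbitrary name):
```bash
docker-machine create -d virtualbox node1
eval $(docker-machine env node1)
docker version    # the CLI now talks to the Engine in the new VM
```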
---
## Environment variables
- Most of the tools (CLI, libraries...) connecting to the Docker API can use environment variables.
- These variables are:
- `DOCKER_HOST` (indicates address+port to connect to, or path of UNIX socket)
- `DOCKER_TLS_VERIFY` (indicates that TLS mutual auth should be used)
- `DOCKER_CERT_PATH` (path to the keypair and certificate to use for auth)
- `docker-machine env ...` will generate the variables needed to connect to a host.
- `eval $(docker-machine env ...)` sets these variables in the current shell.
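The generated variables look like this (example values; addresses and paths depend on your driver and machine):
```bash
$ docker-machine env node1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/user/.docker/machine/machines/node1"
export DOCKER_MACHINE_NAME="node1"
# Run this command to configure your shell:
# eval $(docker-machine env node1)
```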
---
## Host management features
With `docker-machine`, we can:
- upgrade a host to the latest version of the Docker Engine,
- start/stop/restart hosts,
- get a shell on a remote machine (with SSH),
- copy files to/from remote machines (with SCP),
- mount a remote host's directory on the local machine (with SSHFS),
- ...
---
## The `generic` driver
When provisioning a new host, `docker-machine` executes these steps:
1) Create the host using a cloud or hypervisor API.
2) Connect to the host over SSH.
3) Install and configure Docker on the host.
With the `generic` driver, we provide the IP address of an existing host
(instead of e.g. cloud credentials) and we omit the first step.
This allows us to provision physical machines, VMs provided by a 3rd
party, or machines on a cloud for which we don't have a provisioning API.
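A sketch of what that looks like (the address and user below are placeholders):
```bash
docker-machine create -d generic \
    --generic-ip-address=203.0.113.10 \
    --generic-ssh-user=ubuntu \
    mymachine
```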


@@ -72,7 +72,7 @@ class: pic
class: pic
## The parallel with the shipping industry
![history](images/shipping-industry-problem.png)


@@ -0,0 +1,5 @@
# Dockerfile Samples
---
## (Demo in terminal)


@@ -90,11 +90,11 @@ COPY <test data sets and fixtures>
RUN <unit tests>
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
CMD, EXPOSE ...
```
* The build fails as soon as an instruction fails
* If `RUN <unit tests>` fails, the build doesn't produce an image
* If it succeeds, it produces a clean image (without test libraries and data)

slides/intro/Ecosystem.md

@@ -0,0 +1,173 @@
# The container ecosystem
In this chapter, we will talk about a few actors of the container ecosystem.
We have (arbitrarily) decided to focus on two groups:
- the Docker ecosystem,
- the Cloud Native Computing Foundation (CNCF) and its projects.
---
class: pic
## The Docker ecosystem
![The Docker ecosystem in 2015](images/docker-ecosystem-2015.png)
---
## Moby vs. Docker
- Docker Inc. (the company) started Docker (the open source project).
- At some point, it became necessary to differentiate between:
- the open source project (code base, contributors...),
- the product that we use to run containers (the engine),
- the platform that we use to manage containerized applications,
- the brand.
---
class: pic
![Picture of a Tesla](images/tesla.jpg)
---
## Exercise in brand management
Questions:
--
- What is the brand of the car on the previous slide?
--
- What kind of engine does it have?
--
- Would you say that it's a safe or unsafe car?
--
- Harder question: can you drive it from the US West Coast to the East Coast?
--
The answers to these questions are part of the Tesla brand.
---
## What if ...
- The blueprints for Tesla cars were available for free.
- You could legally build your own Tesla.
- You were allowed to customize it entirely.
(Put in a combustion engine, drive it with a game pad ...)
- You could even sell the customized versions.
--
- ... And call your customized version "Tesla".
--
Would we give the same answers to the questions on the previous slide?
---
## From Docker to Moby
- Docker Inc. decided to split the brand.
- Moby is the open source project.
(= Components and libraries that you can use, reuse, customize, sell ...)
- Docker is the product.
(= Software that you can use, buy support contracts ...)
- Docker is made with Moby.
- When Docker Inc. improves the Docker products, it improves Moby.
(And vice versa.)
---
## Other examples
- *Read the Docs* is an open source project to generate and host documentation.
- You can host it yourself (on your own servers).
- You can also get hosted on readthedocs.org.
- The maintainers of the open source project often receive
support requests from users of the hosted product ...
- ... And the maintainers of the hosted product often
receive support requests from users of self-hosted instances.
- Another example:
*WordPress.com is a blogging platform that is owned and hosted online by
Automattic. It is run on WordPress, an open source piece of software used by
bloggers. (Wikipedia)*
---
## Docker CE vs Docker EE
- Docker CE = Community Edition.
- Available on most Linux distros, Mac, Windows.
- Optimized for developers and ease of use.
- Docker EE = Enterprise Edition.
- Available only on a subset of Linux distros + Windows servers.
(Only available when there is a strong partnership to offer enterprise-class support.)
- Optimized for production use.
- Comes with additional components: security scanning, RBAC ...
---
## The CNCF
- Non-profit, part of the Linux Foundation; founded in December 2015.
*The Cloud Native Computing Foundation builds sustainable ecosystems and fosters
a community around a constellation of high-quality projects that orchestrate
containers as part of a microservices architecture.*
*CNCF is an open source software foundation dedicated to making cloud-native computing universal and sustainable.*
- Home of Kubernetes (and many other projects now).
- Funded by corporate memberships.
---
class: pic
![Cloud Native Landscape](https://raw.githubusercontent.com/cncf/landscape/master/landscape/CloudNativeLandscape_latest.png)


@@ -0,0 +1,227 @@
class: title
# Getting inside a container
![Person standing inside a container](images/getting-inside.png)
---
## Objectives
On a traditional server or VM, we sometimes need to:
* log into the machine (with SSH or on the console),
* analyze the disks (by removing them or rebooting with a rescue system).
In this chapter, we will see how to do that with containers.
---
## Getting a shell
Every once in a while, we want to log into a machine.
In a perfect world, this shouldn't be necessary.
* You need to install or update packages (and their configuration)?
Use configuration management. (e.g. Ansible, Chef, Puppet, Salt...)
* You need to view logs and metrics?
Collect and access them through a centralized platform.
In the real world, though ... we often need shell access!
---
## Not getting a shell
Even without a perfect deployment system, we can do many operations without getting a shell.
* Installing packages can (and should) be done in the container image.
* Configuration can be done at the image level, or when the container starts.
* Dynamic configuration can be stored in a volume (shared with another container).
* Logs written to stdout are automatically collected by the Docker Engine.
* Other logs can be written to a shared volume.
* Process information and metrics are visible from the host.
_Let's save logging, volumes ... for later, but let's have a look at process information!_
---
## Viewing container processes from the host
If you run Docker on Linux, container processes are visible on the host.
```bash
$ ps faux | less
```
* Scroll around the output of this command.
* You should see the `jpetazzo/clock` container.
* A containerized process is just like any other process on the host.
* We can use tools like `lsof`, `strace`, `gdb` ... to analyze them.
---
class: extra-details
## What's the difference between a container process and a host process?
* Each process (containerized or not) belongs to *namespaces* and *cgroups*.
* The namespaces and cgroups determine what a process can "see" and "do".
* Analogy: each process (containerized or not) runs with a specific UID (user ID).
* UID=0 is root, and has elevated privileges. Other UIDs are normal users.
_We will give more details about namespaces and cgroups later._
---
## Getting a shell in a running container
* Sometimes, we need to get a shell anyway.
* We _could_ run some SSH server in the container ...
* But it is easier to use `docker exec`.
```bash
$ docker exec -ti ticktock sh
```
* This creates a new process (running `sh`) _inside_ the container.
* This can also be done "manually" with the tool `nsenter`.
---
## Caveats
* The tool that you want to run needs to exist in the container.
* Some tools (like `ip netns exec`) let you attach to _one_ namespace at a time.
(This lets you e.g. set up network interfaces, even if you don't have `ifconfig` or `ip` in the container.)
* Most importantly: the container needs to be running.
* What if the container is stopped or crashed?
---
## Getting a shell in a stopped container
* A stopped container is only _storage_ (like a disk drive).
* We cannot SSH into a disk drive or USB stick!
* We need to connect the disk to a running machine.
* How does that translate into the container world?
---
## Analyzing a stopped container
As an exercise, we are going to try to find out what's wrong with `jpetazzo/crashtest`.
```bash
docker run jpetazzo/crashtest
```
The container starts, but then stops immediately, without any output.
What would MacGyver do?
First, let's check the status of that container.
```bash
docker ps -l
```
---
## Viewing filesystem changes
* We can use `docker diff` to see files that were added / changed / removed.
```bash
docker diff <container_id>
```
* The container ID was shown by `docker ps -l`.
* We can also see it with `docker ps -lq`.
* The output of `docker diff` shows some interesting log files!
---
## Accessing files
* We can extract files with `docker cp`.
```bash
docker cp <container_id>:/var/log/nginx/error.log .
```
* Then we can look at that log file.
```bash
cat error.log
```
(The directory `/run/nginx` doesn't exist.)
---
## Exploring a crashed container
* We can restart a container with `docker start` ...
* ... But it will probably crash again immediately!
* We cannot specify a different program to run with `docker start`
* But we can create a new image from the crashed container
```bash
docker commit <container_id> debugimage
```
* Then we can run a new container from that image, with a custom entrypoint
```bash
docker run -ti --entrypoint sh debugimage
```
---
class: extra-details
## Obtaining a complete dump
* We can also dump the entire filesystem of a container.
* This is done with `docker export`.
* It generates a tar archive.
```bash
docker export <container_id> | tar tv
```
This will give a detailed listing of the content of the container.


@@ -29,7 +29,7 @@ We can arbitrarily distinguish:
* Installing Docker on an existing Linux machine (physical or VM)
* Installing Docker on macOS or Windows
* Installing Docker on a fleet of cloud VMs
@@ -55,9 +55,31 @@ We can arbitrarily distinguish:
---
class: extra-details
## Docker Inc. packages vs distribution packages
* Docker Inc. releases new versions monthly (edge) and quarterly (stable)
* Releases are immediately available on Docker Inc.'s package repositories
* Linux distros don't always update to the latest Docker version
(Sometimes, updating would break their guidelines for major/minor upgrades)
* Sometimes, some distros have carried packages with custom patches
* Sometimes, these patches added critical security bugs ☹
* Installing through Docker Inc.'s repositories is a bit of extra work …
… but it is generally worth it!
---
## Installing Docker on macOS and Windows
* On macOS, the recommended method is to use Docker4Mac:
https://docs.docker.com/docker-for-mac/install/
@@ -71,7 +93,7 @@ We can arbitrarily distinguish:
---
## Running Docker on macOS and Windows
When you execute `docker version` from the terminal:

slides/intro/Labels.md

@@ -0,0 +1,82 @@
# Labels
* Labels allow us to attach arbitrary metadata to containers.
* Labels are key/value pairs.
* They are specified at container creation.
* You can query them with `docker inspect`.
* They can also be used as filters with some commands (e.g. `docker ps`).
---
## Using labels
Let's create a few containers with a label `owner`.
```bash
docker run -d -l owner=alice nginx
docker run -d -l owner=bob nginx
docker run -d -l owner nginx
```
We didn't specify a value for the `owner` label in the last example.
This is equivalent to setting the value to be an empty string.
---
## Querying labels
We can view the labels with `docker inspect`.
```bash
$ docker inspect $(docker ps -lq) | grep -A3 Labels
"Labels": {
"maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>",
"owner": ""
},
```
We can use the `--format` flag to list the value of a label.
```bash
$ docker inspect $(docker ps -q) --format 'OWNER={{.Config.Labels.owner}}'
```
---
## Using labels to select containers
We can list containers having a specific label.
```bash
$ docker ps --filter label=owner
```
Or we can list containers having a specific label with a specific value.
```bash
$ docker ps --filter label=owner=alice
```
---
## Use-cases for labels
* HTTP vhost of a web app or web service.
(The label is used to generate the configuration for NGINX, HAProxy, etc.)
* Backup schedule for a stateful service.
(The label is used by a cron job to determine if/when to backup container data.)
* Service ownership.
(To determine internal cross-billing, or who to page in case of outage.)
* etc.

slides/intro/Logging.md

@@ -0,0 +1,273 @@
# Logging
In this chapter, we will explain the different ways to send logs from containers.
We will then show one particular method in action, using ELK and Docker's logging drivers.
---
## There are many ways to send logs
- The simplest method is to write to the standard output and error.
- Applications can write their logs to local files.
(The files are usually periodically rotated and compressed.)
- It is also very common (on UNIX systems) to use syslog.
(The logs are collected by syslogd or an equivalent like journald.)
- In large applications with many components, it is common to use a logging service.
(The code uses a library to send messages to the logging service.)
*All these methods are available with containers.*
---
## Writing on stdout/stderr
- The standard output and error of containers are managed by the container engine.
- This means that each line written by the container is received by the engine.
- The engine can then do "whatever" with these log lines.
- With Docker, the default configuration is to write the logs to local files.
- The files can then be queried with e.g. `docker logs` (and the equivalent API request).
- This can be customized, as we will see later.
---
## Writing to local files
- If we write to files, it is possible to access them, but it is cumbersome.
(We have to use `docker exec` or `docker cp`.)
- Furthermore, if the container is stopped, we cannot use `docker exec`.
- If the container is deleted, the logs disappear.
- What should we do for programs that can only log to local files?
--
- There are multiple solutions.
---
## Using a volume or bind mount
- Instead of writing logs to a normal directory, we can place them on a volume.
- The volume can be accessed by other containers.
- We can run a program like `filebeat` in another container accessing the same volume.
(`filebeat` reads local log files continuously, like `tail -f`, and sends them
to a centralized system like ElasticSearch.)
- We can also use a bind mount, e.g. `-v /var/log/containers/www:/var/log/tomcat`.
- The container will write log files to a directory mapped to a host directory.
- The log files will appear on the host and be consumable directly from the host.
---
## Using logging services
- We can use logging frameworks (like log4j or the Python `logging` package).
- These frameworks require some code and/or configuration in our application code.
- These mechanisms can be used identically inside or outside of containers.
- Sometimes, we can leverage containerized networking to simplify their setup.
- For instance, our code can send log messages to a server named `log`.
- The name `log` will resolve to different addresses in development, production, etc.
---
## Using syslog
- What if our code (or the program we are running in containers) uses syslog?
- One possibility is to run a syslog daemon in the container.
- Then that daemon can be set up to write to local files or forward to the network.
- Under the hood, syslog clients connect to a local UNIX socket, `/dev/log`.
- We can expose a syslog socket to the container (by using a volume or bind-mount).
- Then just create a symlink from `/dev/log` to the syslog socket.
- Voilà!
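A minimal sketch of the bind-mount variant (assuming the host runs a syslog daemon listening on `/dev/log`, and that the image provides a `logger` client):
```bash
docker run --rm -v /dev/log:/dev/log alpine logger "hello from a container"
# The message should now appear in the host's logs
# (e.g. journalctl, or /var/log/syslog, depending on the distro)
```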
---
## Using logging drivers
- If we log to stdout and stderr, the container engine receives the log messages.
- The Docker Engine has a modular logging system with many plugins, including:
- json-file (the default one)
- syslog
- journald
- gelf
- fluentd
- splunk
- etc.
- Each plugin can process and forward the logs to another process or system.
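The default driver (and its options) can also be set engine-wide. For example, a common tweak is to cap the size of `json-file` logs (a sketch; on most distros the file is `/etc/docker/daemon.json`, and the Engine must be restarted afterwards):
```bash
$ cat /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}
```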
---
## Demo: sending logs to ELK
- We are going to deploy an ELK stack.
- It will accept logs over a GELF socket.
- We will run a few containers with the `gelf` logging driver.
- We will then see our logs in Kibana, the web interface provided by ELK.
*Important foreword: this is not an "official" or "recommended"
setup; it is just an example. We used ELK in this demo because
it's a popular setup and we keep being asked about it; but you
will have equal success with Fluent or other logging stacks!*
---
## What's in an ELK stack?
- ELK is three components:
- ElasticSearch (to store and index log entries)
- Logstash (to receive log entries from various
sources, process them, and forward them to various
destinations)
- Kibana (to view/search log entries with a nice UI)
- The only component that we will configure is Logstash
- We will accept log entries using the GELF protocol
- Log entries will be stored in ElasticSearch,
<br/>and displayed on Logstash's stdout for debugging
---
## Running ELK
- We are going to use a Compose file describing the ELK stack.
```bash
$ cd ~/container.training/stacks
$ docker-compose -f elk.yml up -d
```
- Let's have a look at the Compose file while it's deploying.
---
## Our basic ELK deployment
- We are using images from the Docker Hub: `elasticsearch`, `logstash`, `kibana`.
- We don't need to change the configuration of ElasticSearch.
- We need to tell Kibana the address of ElasticSearch:
- it is set with the `ELASTICSEARCH_URL` environment variable,
- by default it is `localhost:9200`, we change it to `elasticsearch:9200`.
- We need to configure Logstash:
- we pass the entire configuration file through command-line arguments,
- this is a hack so that we don't have to create an image just for the config.
---
## Sending logs to ELK
- The ELK stack accepts log messages through a GELF socket.
- The GELF socket listens on UDP port 12201.
- To send a message, we need to change the logging driver used by Docker.
- This can be done globally (by reconfiguring the Engine) or on a per-container basis.
- Let's override the logging driver for a single container:
```bash
$ docker run --log-driver=gelf --log-opt=gelf-address=udp://localhost:12201 \
alpine echo hello world
```
---
## Viewing the logs in ELK
- Connect to the Kibana interface.
- It is exposed on port 5601.
- Browse http://X.X.X.X:5601.
---
## "Configuring" Kibana
- Kibana should offer you to "Configure an index pattern":
<br/>in the "Time-field name" drop down, select "@timestamp", and hit the
"Create" button.
- Then:
- click "Discover" (in the top-left corner),
- click "Last 15 minutes" (in the top-right corner),
- click "Last 1 hour" (in the list in the middle),
- click "Auto-refresh" (top-right corner),
- click "5 seconds" (top-left of the list).
- You should see a series of green bars (with one new green bar every minute).
- Our 'hello world' message should be visible there.
---
## Important afterword
**This is not a "production-grade" setup.**
It is just an educational example. Since we have only
one node, we set up a single
ElasticSearch instance and a single Logstash instance.
In a production setup, you need an ElasticSearch cluster
(both for capacity and availability reasons). You also
need multiple Logstash instances.
And if you want to withstand
bursts of logs, you need some kind of message queue:
Redis if you're cheap, Kafka if you want to make sure
that you don't drop messages on the floor. Good luck.
If you want to learn more about the GELF driver,
have a look at [this blog post](
http://jpetazzo.github.io/2017/01/20/docker-logging-gelf/).

File diff suppressed because it is too large


@@ -0,0 +1,427 @@
# Orchestration, an overview
In this chapter, we will:
* Explain what orchestration is and why we would need it.
* Present (from a high-level perspective) some orchestrators.
* Show one orchestrator (Kubernetes) in action.
---
class: pic
## What's orchestration?
![Joana Carneiro (orchestra conductor)](images/conductor.jpg)
---
## What's orchestration?
According to Wikipedia:
*Orchestration describes the __automated__ arrangement,
coordination, and management of complex computer systems,
middleware, and services.*
--
*[...] orchestration is often discussed in the context of
__service-oriented architecture__, __virtualization__, provisioning,
Converged Infrastructure and __dynamic datacenter__ topics.*
--
What does that really mean?
---
## Example 1: dynamic cloud instances
--
- Q: do we always use 100% of our servers?
--
- A: obviously not!
.center[![Daily variations of traffic](images/traffic-graph.png)]
---
## Example 1: dynamic cloud instances
- Every night, scale down
(by shutting down extraneous replicated instances)
- Every morning, scale up
(by deploying new copies)
- "Pay for what you use"
(i.e. save big $$$ here)
---
## Example 1: dynamic cloud instances
How do we implement this?
- Crontab
- Autoscaling (save even bigger $$$)
That's *relatively* easy.
Now, how are things for our IaaS provider?
---
## Example 2: dynamic datacenter
- Q: what's the #1 cost in a datacenter?
--
- A: electricity!
--
- Q: what uses electricity?
--
- A: servers, obviously
- A: ... and associated cooling
--
- Q: do we always use 100% of our servers?
--
- A: obviously not!
---
## Example 2: dynamic datacenter
- If only we could turn off unused servers during the night...
- Problem: we can only turn off a server if it's totally empty!
(i.e. all VMs on it are stopped/moved)
- Solution: *migrate* VMs and shutdown empty servers
(e.g. combine two hypervisors with 40% load into 80%+0%,
<br/>and shutdown the one at 0%)
---
## Example 2: dynamic datacenter
How do we implement this?
- Shutdown empty hosts (but keep some spare capacity)
- Start hosts again when capacity gets low
- Ability to "live migrate" VMs
(Xen already did this 10+ years ago)
- Rebalance VMs on a regular basis
- what if a VM is stopped while we move it?
- should we allow provisioning on hosts involved in a migration?
*Scheduling* becomes more complex.
---
## What is scheduling?
According to Wikipedia (again):
*In computing, scheduling is the method by which threads,
processes or data flows are given access to system resources.*
The scheduler is concerned mainly with:
- throughput (total amount of work done per time unit);
- turnaround time (between submission and completion);
- response time (between submission and start);
- waiting time (between job readiness and execution);
- fairness (appropriate times according to priorities).
In practice, these goals often conflict.
**"Scheduling" = decide which resources to use.**
---
## Exercise 1
- You have:
- 5 hypervisors (physical machines)
- Each server has:
- 16 GB RAM, 8 cores, 1 TB disk
- Each week, your team asks:
- one VM with X RAM, Y CPU, Z disk
Scheduling = deciding which hypervisor to use for each VM.
Difficulty: easy!
---
<!-- Warning, two almost identical slides (for img effect) -->
## Exercise 2
- You have:
- 1000+ hypervisors (and counting!)
- Each server has different resources:
- 8-500 GB of RAM, 4-64 cores, 1-100 TB disk
- Multiple times a day, a different team asks for:
- up to 50 VMs with different characteristics
Scheduling = deciding which hypervisor to use for each VM.
Difficulty: ???
---
<!-- Warning, two almost identical slides (for img effect) -->
## Exercise 2
- You have:
- 1000+ hypervisors (and counting!)
- Each server has different resources:
- 8-500 GB of RAM, 4-64 cores, 1-100 TB disk
- Multiple times a day, a different team asks for:
- up to 50 VMs with different characteristics
Scheduling = deciding which hypervisor to use for each VM.
![Troll face](images/trollface.png)
---
## Exercise 3
- You have machines (physical and/or virtual)
- You have containers
- You are trying to put the containers on the machines
- Sounds familiar?
---
## Scheduling with one resource
.center[![Not-so-good bin packing](images/binpacking-1d-1.gif)]
Can we do better?
---
## Scheduling with one resource
.center[![Better bin packing](images/binpacking-1d-2.gif)]
Yup!
---
## Scheduling with two resources
.center[![2D bin packing](images/binpacking-2d.gif)]
---
## Scheduling with three resources
.center[![3D bin packing](images/binpacking-3d.gif)]
---
## You need to be good at this
.center[![Tangram](images/tangram.gif)]
---
## But also, you must be quick!
.center[![Tetris](images/tetris-1.png)]
---
## And be web scale!
.center[![Big tetris](images/tetris-2.gif)]
---
## And think outside (?) of the box!
.center[![3D tetris](images/tetris-3.png)]
---
## Good luck!
.center[![FUUUUUU face](images/fu-face.jpg)]
---
## TL;DR
* Scheduling with multiple resources (dimensions) is hard.
* Don't expect to solve the problem with a Tiny Shell Script.
* There are literally tons of research papers written on this.
---
## But our orchestrator also needs to manage ...
* Network connectivity (or filtering) between containers.
* Load balancing (external and internal).
* Failure recovery (if a node or a whole datacenter fails).
* Rolling out new versions of our applications.
(Canary deployments, blue/green deployments...)
---
## Some orchestrators
We are going to briefly present a few orchestrators.
There is no "absolute best" orchestrator.
It depends on:
- your applications,
- your requirements,
- your pre-existing skills...
---
## Nomad
- Open Source project by Hashicorp.
- Arbitrary scheduler (not just for containers).
- Great if you want to schedule mixed workloads.
(VMs, containers, processes...)
- Less integration with the rest of the container ecosystem.
---
## Mesos
- Open Source project in the Apache Foundation.
- Arbitrary scheduler (not just for containers).
- Two-level scheduler.
- Top-level scheduler acts as a resource broker.
- Second-level schedulers (aka "frameworks") obtain resources from top-level.
- Frameworks implement various strategies.
(Marathon = long running processes; Chronos = run at intervals; ...)
- Commercial offering through DC/OS by Mesosphere.
---
## Rancher
- Rancher 1 offered a simple interface for Docker hosts.
- Rancher 2 is a complete management platform for Docker and Kubernetes.
- Technically not an orchestrator, but it's a popular option.
---
## Swarm
- Tightly integrated with the Docker Engine.
- Extremely simple to deploy and setup, even in multi-manager (HA) mode.
- Secure by default.
- Strongly opinionated:
- smaller set of features,
- easier to operate.
---
## Kubernetes
- Open Source project initiated by Google.
- Contributions from many other actors.
- *De facto* standard for container orchestration.
- Many deployment options; some of them very complex.
- Reputation: steep learning curve.
- Reality:
- true, if we try to understand *everything*;
- false, if we focus on what matters.
---
## Kubernetes in action
.center[![Demo stamp](images/demo.jpg)]


@@ -38,6 +38,42 @@ individual Docker VM.*
---
## What *is* Docker?
- "Installing Docker" really means "Installing the Docker Engine and CLI".
- The Docker Engine is a daemon (a service running in the background).
- This daemon manages containers, the same way that a hypervisor manages VMs.
- We interact with the Docker Engine by using the Docker CLI.
- The Docker CLI and the Docker Engine communicate through an API.
- There are many other programs, and many client libraries, to use that API.
---
## Why don't we run Docker locally?
- We are going to download container images and distribution packages.
- This could put a bit of stress on the local WiFi and slow us down.
- Instead, we use a remote VM that has good connectivity.
- In some rare cases, installing Docker locally is challenging:
- no administrator/root access (computer managed by strict corp IT)
- 32-bit CPU or OS
- old OS version (e.g. CentOS 6, OSX pre-Yosemite, Windows 7)
- It's better to spend time learning containers than fiddling with the installer!
---
## Connecting to your Virtual Machine
You need an SSH client.
@@ -66,21 +102,24 @@ Once logged in, make sure that you can run a basic Docker command:
```bash
$ docker version
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:10:06 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.03.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:08:35 2018
OS/Arch: linux/amd64
Experimental: false
```
]


@@ -100,7 +100,7 @@ class: extra-details
Let's start a Tomcat container:
```bash
$ docker run --name webapp -d -p 8080:8080 -v /usr/local/tomcat/logs tomcat
```
Now, start an `alpine` container accessing the same volume:
@@ -195,7 +195,7 @@ Let's start another container using the `webapps` volume.
$ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp
```
(`-w` sets the working directory inside the container.) Vandalize the page, save, and exit.
Then run `curl localhost:1234` again to see your changes.
@@ -259,7 +259,7 @@ $ docker run -d --name redis28 redis:2.8
Connect to the Redis container and set some data.
```bash
$ docker run -ti --link redis28:redis alpine:3.6 telnet redis 6379
```
Issue the following commands:
@@ -298,7 +298,7 @@ class: extra-details
Connect to the Redis container and see our data.
```bash
docker run -ti --link redis30:redis alpine:3.6 telnet redis 6379
```
Issue a few commands.
@@ -401,6 +401,47 @@ or providing extra features. For instance:
---
## Volumes vs. Mounts
* Since Docker 17.06, a new option is available: `--mount`.
* It offers a new, richer syntax to manipulate data in containers.
* It makes an explicit difference between:
- volumes (identified with a unique name, managed by a storage plugin),
- bind mounts (identified with a host path, not managed).
* The former `-v` / `--volume` option is still usable.
---
## `--mount` syntax
Binding a host path to a container path:
```bash
$ docker run \
--mount type=bind,source=/path/on/host,target=/path/in/container alpine
```
Mounting a volume to a container path:
```bash
$ docker run \
--mount source=myvolume,target=/path/in/container alpine
```
Mounting a tmpfs (in-memory, for temporary files):
```bash
$ docker run \
--mount type=tmpfs,destination=/path/in/container,tmpfs-size=1000000 alpine
```
---
## Section summary
We've learned how to:

slides/intro/intro.md

@@ -0,0 +1,39 @@
## A brief introduction
- This was initially written to support in-person, instructor-led workshops and tutorials
- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors)
- You can also follow along on your own, at your own pace
- We included as much information as possible in these slides
- We recommend having a mentor to help you ...
- ... Or be comfortable spending some time reading the Docker
[documentation](https://docs.docker.com/) ...
- ... And looking for answers in the [Docker forums](https://forums.docker.com),
[StackOverflow](http://stackoverflow.com/questions/tagged/docker),
and other outlets
---
class: self-paced
## Hands on, you shall practice
- Nobody ever became a Jedi by spending their lives reading Wookieepedia
- Likewise, it will take more than merely *reading* these slides
to make you an expert
- These slides include *tons* of exercises and examples
- They assume that you have access to a machine running Docker
- If you are attending a workshop or tutorial:
<br/>you will be given specific instructions to access a cloud VM
- If you are doing this on your own:
<br/>we will tell you how to install Docker or access a Docker environment

slides/intro/links.md (symbolic link)

@@ -0,0 +1 @@
../swarm/links.md

slides/kube-fullday.yml

@@ -0,0 +1,42 @@
title: |
Introduction to Orchestration
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
#chat: "In person!"
exclude:
- self-paced
chapters:
- common/title.md
- logistics.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- - kube/kubectlrun.md
- kube/kubectlexpose.md
- kube/ourapponkube.md
- - kube/dashboard.md
- kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
- common/thankyou.md


@@ -1,32 +0,0 @@
title: |
Docker + Kubernetes = <3
chat: "[Slack](https://docker.slack.com/messages/C83M572J2)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
exclude:
- self-paced
chapters:
- common/title.md
- logistics.md
- common/intro.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
- - kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- kube/kubectlrun.md
- - kube/kubectlexpose.md
- kube/ourapponkube.md
- kube/dashboard.md
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- kube/whatsnext.md
- common/thankyou.md


@@ -11,11 +11,14 @@ exclude:
chapters:
- common/title.md
#- logistics.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- - kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
@@ -29,5 +32,10 @@ chapters:
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- kube/logs-cli.md
- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
- common/thankyou.md

slides/kube.yml (symbolic link)

@@ -0,0 +1 @@
kube-fullday.yml


@@ -98,39 +98,76 @@ class: pic
---
## Kubernetes architecture: the nodes
- The nodes executing our containers run a collection of services:
- a container Engine (typically Docker)
- kubelet (the "node agent")
- kube-proxy (a necessary but not sufficient network component)
- Nodes were formerly called "minions"
(You might see that word in older articles or documentation)
---
## Kubernetes architecture: the control plane
- The Kubernetes logic (its "brains") is a collection of services:
- the API server (our point of entry to everything!)
- core services like the scheduler and controller manager
- `etcd` (a highly available key/value store; the "database" of Kubernetes)
- Together, these services form the control plane of our cluster
- The control plane is also called the "master"
---
## Running the control plane on special nodes
- It is common to reserve a dedicated node for the control plane
(Except for single-node development clusters, like when using minikube)
- This node is then called a "master"
(Yes, this is ambiguous: is the "master" a node, or the whole control plane?)
- Normal applications are restricted from running on this node
(By using a mechanism called ["taints"](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/))
- When high availability is required, each service of the control plane must be resilient
- The control plane is then replicated on multiple nodes
(This is sometimes called a "multi-master" setup)
---
## Running the control plane outside containers
- The services of the control plane can run in or out of containers
- For instance: since `etcd` is a critical service, some people
deploy it directly on a dedicated cluster (without containers)
(This is illustrated on the first "super complicated" schema)
- In some hosted Kubernetes offerings (e.g. GKE), the control plane is invisible
(We only "see" a Kubernetes API endpoint)
- In that case, there is no "master node"
*For this reason, it is more accurate to say "control plane" rather than "master".*
---
@@ -184,7 +221,7 @@ Yes!
*Probably not (in the future)*
.footnote[More information about CRI [on the Kubernetes blog](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes)]
---
@@ -210,4 +247,24 @@ class: pic
![Node, pod, container](images/k8s-arch3-thanks-weave.png)
(Diagram courtesy of Weave Works, used with permission.)
---
class: pic
![One of the best Kubernetes architecture diagrams available](images/k8s-arch4-thanks-luxas.png)
---
## Credits
- The first diagram is courtesy of Weave Works
- a *pod* can have multiple containers working together
- IP addresses are associated with *pods*, not with individual containers
- The second diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha)
- it's one of the best Kubernetes architecture diagrams available!
Both diagrams used with permission.


@@ -1,19 +1,33 @@
# Daemon sets
- Remember: we did all that cluster orchestration business for `rng`
- We want to scale `rng` in a way that is different from how we scaled `worker`
- We want one (and exactly one) instance of `rng` per node
- What if we just scale up `deploy/rng` to the number of nodes?
- nothing guarantees that the `rng` containers will be distributed evenly
- if we add nodes later, they will not automatically run a copy of `rng`
- if we remove (or reboot) a node, one `rng` container will restart elsewhere
- Instead of a `deployment`, we will use a `daemonset`
---
## Daemon sets in practice
- Daemon sets are great for cluster-wide, per-node processes:
- `kube-proxy`
- `weave` (our overlay network)
- monitoring agents
- hardware management tools (e.g. SCSI/FC HBA agents)
- etc.
- They can also be restricted to run [only on some nodes](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#running-pods-on-only-some-nodes)
@@ -22,7 +36,7 @@
## Creating a daemon set
- Unfortunately, as of Kubernetes 1.9, the CLI cannot create daemon sets
--
@@ -128,7 +142,7 @@ We all knew this couldn't be that easy, right!
- We could also tell Kubernetes to ignore these errors and try anyway
- The `--force` flag's actual name is `--validate=false`
.exercise[
@@ -382,7 +396,7 @@ Of course, option 2 offers more learning opportunities. Right?
.exercise[
- Check the logs of all `run=rng` pods to confirm that exactly one per node is now active:
```bash
kubectl logs -l run=rng
```
@@ -402,8 +416,83 @@ The timestamps should give us a hint about how many pods are currently receiving
---
## Cleaning up
- The pods of the "old" daemon set are still running
- We are going to identify them programmatically
.exercise[
- List the pods with `run=rng` but without `isactive=yes`:
```bash
kubectl get pods -l run=rng,isactive!=yes
```
- Remove these pods:
```bash
kubectl get pods -l run=rng,isactive!=yes -o name |
xargs kubectl delete
```
]
---
## Avoiding extra pods
- When we changed the definition of the daemon set, it immediately created new pods
- How could we have avoided this?
--
- By adding the `isactive: "yes"` label to the pods before changing the daemon set!
- This can be done programmatically with `kubectl patch`:
```bash
PATCH='
metadata:
labels:
isactive: "yes"
'
kubectl get pods -l run=rng -o name |
xargs kubectl patch -p "$PATCH"
```
---
## Labels and debugging
- When a pod is misbehaving, we can delete it: another one will be recreated
- But we can also change its labels
- It will be removed from the load balancer (it won't receive traffic anymore)
- Another pod will be recreated immediately
- But the problematic pod is still here, and we can inspect and debug it
- We can even re-add it to the rotation if necessary
(Very useful to troubleshoot intermittent and elusive bugs)
---
## Labels and advanced rollout control
- Conversely, we can add pods matching a service's selector
- These pods will then receive requests and serve traffic
- Examples:
- one-shot pod with all debug flags enabled, to collect logs
- pods created automatically, but added to rotation in a second step
<br/>
(by setting their label accordingly)
- This gives us building blocks for canary and blue/green deployments


@@ -4,11 +4,15 @@
- We are going to deploy that dashboard with *three commands:*
1) actually *run* the dashboard
2) bypass SSL for the dashboard
3) bypass authentication for the dashboard
--
There is an additional step to make the dashboard available from outside (we'll get to that)
--
@@ -16,7 +20,7 @@
---
## 1) Running the dashboard
- We need to create a *deployment* and a *service* for the dashboard
@@ -39,11 +43,120 @@ The goo.gl URL expands to:
---
## 2) Bypassing SSL for the dashboard
- The Kubernetes dashboard uses HTTPS, but we don't have a certificate
- Recent versions of Chrome (63 and later) and Edge will refuse to connect
(You won't even get the option to ignore a security warning!)
- We could (and should!) get a certificate, e.g. with [Let's Encrypt](https://letsencrypt.org/)
- ... But for convenience, for this workshop, we'll forward HTTP to HTTPS
.warning[Do not do this at home, or even worse, at work!]
---
## Running the SSL unwrapper
- We are going to run [`socat`](http://www.dest-unreach.org/socat/doc/socat.html), telling it to accept TCP connections and relay them over SSL
- Then we will expose that `socat` instance with a `NodePort` service
- For convenience, these steps are neatly encapsulated into another YAML file
.exercise[
- Apply the convenient YAML file, and defeat SSL protection:
```bash
kubectl apply -f https://goo.gl/tA7GLz
```
]
The goo.gl URL expands to:
<br/>
.small[.small[https://gist.githubusercontent.com/jpetazzo/c53a28b5b7fdae88bc3c5f0945552c04/raw/da13ef1bdd38cc0e90b7a4074be8d6a0215e1a65/socat.yaml]]
.warning[All our dashboard traffic is now clear-text, including passwords!]
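Under the hood, the YAML presumably runs a `socat` command along these lines (a sketch; the exact ports and service name are assumptions, not copied from the gist):
```bash
# Accept plain TCP (HTTP) connections and relay each one over SSL
# to the dashboard service; verify=0 skips certificate validation,
# since the dashboard uses a self-signed certificate anyway
socat TCP-LISTEN:8443,fork,reuseaddr \
      OPENSSL:kubernetes-dashboard.kube-system:443,verify=0
```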
---
## Connecting to the dashboard
.exercise[
- Check which port the dashboard is on:
```bash
kubectl -n kube-system get svc socat
```
]
You'll want the `3xxxx` port.
.exercise[
- Connect to http://oneofournodes:3xxxx/
<!-- ```open https://node1:3xxxx/``` -->
]
The dashboard will then ask you which authentication you want to use.
---
## Dashboard authentication
- We have three authentication options at this point:
- token (associated with a role that has appropriate permissions)
- kubeconfig (e.g. using the `~/.kube/config` file from `node1`)
- "skip" (use the dashboard "service account")
- Let's use "skip": we get a bunch of warnings and don't see much
---
## 3) Bypass authentication for the dashboard
- The dashboard documentation [explains how to do this](https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges)
- We just need to load another YAML file!
.exercise[
- Grant admin privileges to the dashboard so we can see our resources:
```bash
kubectl apply -f https://goo.gl/CHsLTA
```
- Reload the dashboard and enjoy!
]
--
.warning[By the way, we just added a backdoor to our Kubernetes cluster!]
---
## Exposing the dashboard over HTTPS
- We took a shortcut by forwarding HTTP to HTTPS inside the cluster
- Let's expose the dashboard over HTTPS!
- The dashboard is exposed through a `ClusterIP` service (internal traffic only)
- We will change that into a `NodePort` service (accepting outside traffic)
.exercise[
## Editing the `kubernetes-dashboard` service
- If we look at the [YAML](https://goo.gl/Qamqab) that we loaded before, we'll get a hint
--
- The dashboard was created in the `kube-system` namespace
--
.exercise[
- Edit the service:
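(presumably, assuming the service kept its default name:)
```bash
kubectl -n kube-system edit service kubernetes-dashboard
```
then change `type: ClusterIP` to `type: NodePort` in the editor, and save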
- Check the port that was assigned with `kubectl -n kube-system get services`
- Connect to https://oneofournodes:3xxxx/ (yes, https)
]
---
## Running the Kubernetes dashboard securely
- The steps that we just showed you are *for educational purposes only!*
- If you do that on your production cluster, people [can and will abuse it](https://blog.redlock.io/cryptojacking-tesla)
- For an in-depth discussion about securing the dashboard,
<br/>
check [this excellent post on Heptio's blog](https://blog.heptio.com/on-securing-the-kubernetes-dashboard-16b09b1b7aca)
---
- It introduces new failure modes
- Example: the official setup instructions for most pod networks

slides/kube/helm.md
# Managing stacks with Helm
- We created our first resources with `kubectl run`, `kubectl expose` ...
- We have also created resources by loading YAML files with `kubectl apply -f`
- For larger stacks, managing thousands of lines of YAML is unreasonable
- These YAML bundles need to be customized with variable parameters
(E.g.: number of replicas, image version to use ...)
- It would be nice to have an organized, versioned collection of bundles
- It would be nice to be able to upgrade/rollback these bundles carefully
- [Helm](https://helm.sh/) is an open source project offering all these things!
---
## Helm concepts
- `helm` is a CLI tool
- `tiller` is its companion server-side component
- A "chart" is an archive containing templatized YAML bundles
- Charts are versioned
- Charts can be stored on private or public repositories
---
## Installing Helm
- We need to install the `helm` CLI; then use it to deploy `tiller`
.exercise[
- Install the `helm` CLI:
```bash
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
```
- Deploy `tiller`:
```bash
helm init
```
]
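To check that the CLI and `tiller` can talk to each other:
```bash
# Shows both the client and server (tiller) versions if all is well
helm version
```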
---
## Fix account permissions
- Helm's permission model requires us to tweak permissions
- In a more realistic deployment, you might create per-user or per-team
service accounts, roles, and role bindings
.exercise[
- Grant `cluster-admin` role to `kube-system:default` service account:
```bash
kubectl create clusterrolebinding add-on-cluster-admin \
--clusterrole=cluster-admin --serviceaccount=kube-system:default
```
]
(Defining the exact roles and permissions on your cluster requires
a deeper knowledge of Kubernetes' RBAC model. The command above is
fine for personal and development clusters.)
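As an aside, a slightly more explicit setup would give `tiller` its own service account; a sketch, assuming `helm init` hasn't been run yet:
```bash
# Create a dedicated service account for tiller,
# instead of piggybacking on kube-system:default
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller-cluster-admin \
  --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# Tell tiller to run with that service account
helm init --service-account tiller
```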
---
## View available charts
- A public repo is pre-configured when installing Helm
- We can view available charts with `helm search` (and an optional keyword)
.exercise[
- View all available charts:
```bash
helm search
```
- View charts related to `prometheus`:
```bash
helm search prometheus
```
]
---
## Install a chart
- Most charts use `LoadBalancer` service types by default
- Most charts require persistent volumes to store data
- We need to relax these requirements a bit
.exercise[
- Install the Prometheus metrics collector on our cluster:
```bash
helm install stable/prometheus \
--set server.service.type=NodePort \
--set server.persistentVolume.enabled=false
```
]
Where do these `--set` options come from?
---
## Inspecting a chart
- `helm inspect` shows details about a chart (including available options)
.exercise[
- See the metadata and all available options for `stable/prometheus`:
```bash
helm inspect stable/prometheus
```
]
The chart's metadata includes a URL to the project's home page.
(Sometimes it conveniently points to the documentation for the chart.)
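To see only the configurable options (i.e. the keys that can be overridden with `--set`), we can also ask for the chart's default values:
```bash
helm inspect values stable/prometheus
```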
---
## Creating a chart
- We are going to show a way to create a *very simplified* chart
- In a real chart, *lots of things* would be templatized
(Resource names, service types, number of replicas...)
.exercise[
- Create a sample chart:
```bash
helm create dockercoins
```
- Move away the sample templates and create an empty template directory:
```bash
mv dockercoins/templates dockercoins/default-templates
mkdir dockercoins/templates
```
]
---
## Exporting the YAML for our application
- The following section assumes that DockerCoins is currently running
.exercise[
- Create one YAML file for each resource that we need:
.small[
```bash
while read kind name; do
kubectl get -o yaml --export $kind $name > dockercoins/templates/$name-$kind.yaml
done <<EOF
deployment worker
deployment hasher
daemonset rng
deployment webui
deployment redis
service hasher
service rng
service webui
service redis
EOF
```
]
]
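A quick sanity check on what we generated (the built-in linter checks the chart's structure; warnings about missing recommended fields are fine at this stage):
```bash
ls dockercoins/templates/
helm lint dockercoins
```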
---
## Testing our helm chart
.exercise[
- Let's install our helm chart! (`dockercoins` is the path to the chart)
```bash
helm install dockercoins
```
]
--
- Since the application is already deployed, this will fail:<br>
`Error: release loitering-otter failed: services "hasher" already exists`
- To avoid naming conflicts, we will deploy the application in another *namespace*
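A sketch of that next step (the namespace name is arbitrary; with Helm 2, `tiller` creates the namespace if it doesn't exist):
```bash
helm install dockercoins --namespace=dockercoins-test
```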

slides/kube/intro.md
## A brief introduction
- This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person,
instructor-led workshops and tutorials
- Credit is also due to [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) — thank you!
- You can also follow along on your own, at your own pace
- We included as much information as possible in these slides
- We recommend having a mentor to help you ...
- ... Or be comfortable spending some time reading the Kubernetes [documentation](https://kubernetes.io/docs/) ...
- ... And looking for answers on [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and other outlets
---
class: self-paced
## Hands on, you shall practice
- Nobody ever became a Jedi by spending their lives reading Wookieepedia
- Likewise, it will take more than merely *reading* these slides
to make you an expert
- These slides include *tons* of exercises and examples
- They assume that you have access to a Kubernetes cluster
- If you are attending a workshop or tutorial:
<br/>you will be given specific instructions to access your cluster
- If you are doing this on your own:
<br/>the first chapter will give you various options to get your own cluster

Note: please DO NOT call the service `search`. It would collide with the TLD.
--
We may see `curl: (7) Failed to connect to _IP_ port 9200: Connection refused`.
This is normal while the service starts up.
--
Once it's running, our requests are load balanced across multiple pods.
---
class: extra-details
## If we don't need a load balancer
- Sometimes, we want to access our scaled services directly:
- if we want to save a tiny little bit of latency (typically less than 1ms)
- if we need to connect over arbitrary ports (instead of a few fixed ones)
- if we need to communicate over a protocol other than UDP or TCP
- if we want to decide how to balance the requests client-side
- ...
- In that case, we can use a "headless service"
---
class: extra-details
## Headless services
- A headless service is obtained by setting the `clusterIP` field to `None`
(Either with `--cluster-ip=None`, or by providing a custom YAML)
- As a result, the service doesn't have a virtual IP address
- Since there is no virtual IP address, there is no load balancer either
- `kube-dns` will return the pods' IP addresses as multiple `A` records
- This gives us an easy way to discover all the replicas for a deployment
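To see those multiple `A` records, we could run a DNS lookup from inside the cluster; a sketch, assuming a headless service named `elastic-headless` exists:
```bash
# Run a throwaway pod and resolve the headless service name;
# each backing pod should show up as a separate A record
kubectl run dnstest --rm -it --restart=Never --image=alpine:3.6 -- \
        nslookup elastic-headless
```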
---
class: extra-details
## Services and endpoints
- A service has a number of "endpoints"
- Each endpoint is a host + port where the service is available
- The endpoints are maintained and updated automatically by Kubernetes
.exercise[
- Check the endpoints that Kubernetes has associated with our `elastic` service:
```bash
kubectl describe service elastic
```
]
In the output, there will be a line starting with `Endpoints:`.
That line will list a bunch of addresses in `host:port` format.
---
class: extra-details
## Viewing endpoint details
- When we have many endpoints, our display commands truncate the list
```bash
kubectl get endpoints
```
- If we want to see the full list, we can use one of the following commands:
```bash
kubectl describe endpoints elastic
kubectl get endpoints elastic -o yaml
```
- These commands will show us a list of IP addresses
- These IP addresses should match the addresses of the corresponding pods:
```bash
kubectl get pods -l run=elastic -o wide
```
---
class: extra-details
## `endpoints` not `endpoint`
- `endpoints` is the only resource whose name cannot be used in the singular
```bash
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
```
- This is because the type itself is plural (unlike every other resource)
- There is no `endpoint` object: `type Endpoints struct`
- The type doesn't represent a single endpoint, but a list of endpoints
