Compare commits

938 Commits

Author SHA1 Message Date
Jerome Petazzoni
65d82cbc91 fix-redirects.sh: adding forced redirect 2020-04-07 16:48:47 -05:00
Bridget Kromhout
ac14480281 Merge pull request #233 from jpetazzo/master
Bringing event-specific fork up to date
2018-04-23 12:04:19 -05:00
Jérôme Petazzoni
f06dc6548c Merge pull request #232 from bridgetkromhout/rollout-params
Clarify rollout params
2018-04-23 11:32:25 -05:00
Jérôme Petazzoni
e13552c306 Merge pull request #224 from bridgetkromhout/re-order
Re-ordering "kubectl apply" discussion
2018-04-23 11:31:15 -05:00
Bridget Kromhout
0305c3783f Adding an overview; marking clarification as extra 2018-04-23 10:52:29 -05:00
Bridget Kromhout
5158ac3d98 Clarify rollout params 2018-04-22 15:49:32 -05:00
Jérôme Petazzoni
25c08b0885 Merge pull request #231 from bridgetkromhout/add-goto-kube101
Adding goto's kube101
2018-04-22 14:55:55 -05:00
Bridget Kromhout
f8131c97e9 Adding goto's kube101 2018-04-22 14:35:50 -05:00
Bridget Kromhout
5371ff1213 Merge pull request #230 from jpetazzo/master
Bringing up to date
2018-04-22 14:34:35 -05:00
Bridget Kromhout
3de1fab66a Clarifying failure mode 2018-04-22 14:04:57 -05:00
Jérôme Petazzoni
ab664128b7 Merge pull request #228 from bridgetkromhout/helm-completion
Correction for helm completion
2018-04-22 14:00:08 -05:00
Bridget Kromhout
1be01a1d4c Merge pull request #229 from jpetazzo/master
Merging in updates
2018-04-22 13:44:51 -05:00
Bridget Kromhout
91de693b80 Correction for helm completion 2018-04-22 13:33:54 -05:00
Jérôme Petazzoni
a64606fb32 Merge pull request #225 from bridgetkromhout/tail-log
Clarify log tailing
2018-04-22 13:14:11 -05:00
Jérôme Petazzoni
58d9103bd2 Merge pull request #223 from bridgetkromhout/1.10.1-updates
Updates for 1.10.1
2018-04-22 13:13:25 -05:00
Jérôme Petazzoni
61ab5be12d Merge pull request #222 from bridgetkromhout/weave-link
Link to Weave
2018-04-22 13:08:54 -05:00
Bridget Kromhout
030900b602 Clarify log tailing 2018-04-22 12:39:18 -05:00
Bridget Kromhout
476d689c7d Clarify naming 2018-04-22 12:32:11 -05:00
Bridget Kromhout
4aedbb69c2 Re-ordering 2018-04-22 12:14:16 -05:00
Bridget Kromhout
db2a68709c Updates for 1.10.1 2018-04-22 11:57:37 -05:00
Bridget Kromhout
f114a89136 Link to Weave 2018-04-22 11:08:17 -05:00
Bridget Kromhout
dd092cc1d5 Merge pull request #221 from bridgetkromhout/gotochgo2018
Specific to gotochgo2018
2018-04-21 21:12:40 -05:00
Bridget Kromhout
02a6c08c92 Specific to gotochgo2018 2018-04-21 21:10:05 -05:00
Jérôme Petazzoni
96eda76391 Merge pull request #220 from bridgetkromhout/rearrange-kube-halfday
Rearrange kube halfday
2018-04-21 10:48:21 -05:00
Bridget Kromhout
e7d9a8fa2d Correcting EFK 2018-04-21 10:43:39 -05:00
Bridget Kromhout
1cca8db828 Rearranging halfday for kube 2018-04-21 10:38:54 -05:00
Bridget Kromhout
2cde665d2f Merge pull request #219 from jpetazzo/re-add-kube-halfday
Re-add half day file
2018-04-21 10:17:45 -05:00
Jerome Petazzoni
d660c6342f Re-add half day file 2018-04-21 12:00:04 +02:00
Bridget Kromhout
7e8bb0e51f Merge pull request #218 from bridgetkromhout/cloud-typo
Typo fix
2018-04-20 16:49:31 -05:00
Bridget Kromhout
c87f4cc088 Typo fix 2018-04-20 16:47:13 -05:00
Jérôme Petazzoni
05c50349a8 Merge pull request #211 from BretFisher/patch-4
add popular swarm reverse proxy options
2018-04-20 02:38:00 -05:00
Jérôme Petazzoni
e985952816 Add colon and fix minor typo 2018-04-20 02:37:48 -05:00
Jérôme Petazzoni
19f0ef9c86 Merge pull request #216 from jpetazzo/googl
Replace goo.gl with 1.1.1.1
2018-04-20 02:36:15 -05:00
Bret Fisher
cc8e13a85f silly me, Traefik is golang 2018-04-20 03:07:40 -04:00
Bridget Kromhout
6475a05794 Update kubectlrun.md
Removing misleading term
2018-04-19 14:37:26 -05:00
Bridget Kromhout
cc9840afe5 Update kubectlrun.md 2018-04-19 07:36:37 -05:00
Bridget Kromhout
b7a2cde458 Merge pull request #215 from jpetazzo/more-options-to-setup-k8s
Mention Kubernetes the Hard Way and more options
2018-04-19 07:32:20 -05:00
Bridget Kromhout
453992b55d Update setup-k8s.md 2018-04-19 07:31:25 -05:00
Bridget Kromhout
0b1067f95e Merge pull request #217 from jpetazzo/tolerations
Add a line about tolerations
2018-04-19 07:28:57 -05:00
Jérôme Petazzoni
21777cd95b Merge pull request #214 from BretFisher/patch-7
we can now add/remove networks from services 🤗
2018-04-19 06:35:09 -05:00
Jérôme Petazzoni
827ad3bdf2 Merge pull request #213 from BretFisher/patch-6
product name change 🙄
2018-04-19 06:34:41 -05:00
Jérôme Petazzoni
7818157cd0 Merge pull request #212 from BretFisher/patch-5
adding 3rd party registry options
2018-04-19 06:34:22 -05:00
Jérôme Petazzoni
d547241714 Merge pull request #210 from BretFisher/patch-3
fix image size via pic css class
2018-04-19 06:31:46 -05:00
Jérôme Petazzoni
c41e0e9286 Merge pull request #209 from BretFisher/patch-2
removed older notes about detach and service logs
2018-04-19 06:31:17 -05:00
Jérôme Petazzoni
c2d4784895 Merge pull request #208 from BretFisher/patch-1
removed mention of compose upg 1.6 to 1.7
2018-04-19 06:30:47 -05:00
Jérôme Petazzoni
11163965cf Merge pull request #204 from bridgetkromhout/clarify-off-by-one
Clarify an off-by-one amount of pods
2018-04-19 06:30:19 -05:00
Jérôme Petazzoni
e9df065820 Merge pull request #197 from bridgetkromhout/patch-only-daemonset
Patch only daemonset pods
2018-04-19 06:27:52 -05:00
Jerome Petazzoni
101ab0c11a Add a line about tolerations 2018-04-19 06:25:41 -05:00
Jérôme Petazzoni
25f081c0b7 Merge pull request #190 from bridgetkromhout/daemonset
Clarifications around daemonsets
2018-04-19 06:21:58 -05:00
Jérôme Petazzoni
700baef094 Merge pull request #188 from bridgetkromhout/clarify-kinds
kubectl get all missing-type workaround
2018-04-19 06:19:00 -05:00
Jerome Petazzoni
3faa586b16 Remove NOC joke 2018-04-19 06:14:54 -05:00
Jerome Petazzoni
8ca77fe8a4 Merge branch 'googl' of git://github.com/bridgetkromhout/container.training into bridgetkromhout-googl 2018-04-19 05:59:12 -05:00
Jerome Petazzoni
019829cc4d Mention Kubernetes the Hard Way and more options 2018-04-19 05:55:58 -05:00
Bret Fisher
a7f6bb223a we can now add/remove networks from services 🤗 2018-04-18 19:11:51 -04:00
Bret Fisher
eb77a8f328 product name change 🙄 2018-04-18 17:50:19 -04:00
Bret Fisher
5a484b2667 adding 3rd party registry options 2018-04-18 17:47:55 -04:00
Bret Fisher
982c35f8e7 add popular swarm reverse proxy options 2018-04-18 17:28:46 -04:00
Bret Fisher
adffe5f47f fix image size via pic css class
make swarm internals bigger!
2018-04-18 17:07:33 -04:00
Bret Fisher
f90a194b86 removed older notes about detach and service logs
Since these options have been around nearly a year, I removed some unneeded verbosity and consolidated the detach stuff.
2018-04-18 15:34:04 -04:00
Bret Fisher
99e9356e5d removed mention of compose upg 1.6 to 1.7
I feel like compose 1.7 was so long ago (over 2 years) that mentioning logs change isn't necessary.
2018-04-18 15:18:17 -04:00
Bridget Kromhout
860840a4c1 Clarify off-by-one 2018-04-18 14:09:08 -05:00
Bridget Kromhout
ab63b76ae0 Clarify types bug 2018-04-18 13:59:26 -05:00
Bridget Kromhout
29bca726b3 Merge pull request #2 from jpetazzo/daemonset-proposal
Pod cleanup proposal
2018-04-18 12:21:34 -05:00
Bridget Kromhout
91297a68f8 Update daemonset.md 2018-04-18 12:20:53 -05:00
Jerome Petazzoni
2bea8ade63 Break down last kube chapter (it is too long) 2018-04-18 11:44:30 -05:00
Jerome Petazzoni
ec486cf78c Do not bind-mount localtime (fixes #207) 2018-04-18 03:33:07 -05:00
Jerome Petazzoni
63ac378866 Merge branch 'darkalia-add_helm_completion' 2018-04-17 16:13:58 -05:00
Jerome Petazzoni
35db387fc2 Add ':' for consistency 2018-04-17 16:13:44 -05:00
Jerome Petazzoni
a0f9baf5e7 Merge branch 'add_helm_completion' of git://github.com/darkalia/container.training into darkalia-add_helm_completion 2018-04-17 16:12:52 -05:00
Jerome Petazzoni
4e54a79abc Pod cleanup proposal 2018-04-17 16:07:24 -05:00
Jérôme Petazzoni
37bea7158f Merge pull request #181 from jpetazzo/more-info-on-labels-and-rollouts
Label use-cases and rollouts
2018-04-17 15:18:24 -05:00
Jerome Petazzoni
618fe4e959 Clarify the grace period when shutting down pods 2018-04-17 02:24:07 -05:00
Jerome Petazzoni
0c73144977 Merge branch 'jgarrouste-patch-1' 2018-04-16 08:03:34 -05:00
Jerome Petazzoni
ff8c3b1595 Remove -o name 2018-04-16 08:03:09 -05:00
Jerome Petazzoni
b756d0d0dc Merge branch 'patch-1' of git://github.com/jgarrouste/container.training into jgarrouste-patch-1 2018-04-16 08:02:41 -05:00
Jerome Petazzoni
23147fafd1 Paris -> past sessions 2018-04-15 15:57:46 -05:00
Jérémy GARROUSTE
b036b5f24b Delete pods with '-l run-rng' and remove xargs
2018-04-15 16:37:10 +02:00
Benjamin Allot
3b9014f750 Add helm completion 2018-04-13 16:40:42 +02:00
Jérôme Petazzoni
6ad7a285e7 Merge pull request #201 from bridgetkromhout/chart-clarity
Clarify chart install
2018-04-13 01:08:13 -05:00
Jérôme Petazzoni
e529eaed2d Merge pull request #200 from bridgetkromhout/helm-example
Use prometheus as example
2018-04-13 01:07:18 -05:00
Jérôme Petazzoni
4697c6c6ad Merge pull request #189 from bridgetkromhout/elastic-patience
Clarify error message upon start & endpoints
2018-04-13 01:06:33 -05:00
Jérôme Petazzoni
56e47c3550 Update kubectlexpose.md
Add line break for readability
2018-04-13 08:06:23 +02:00
Jérôme Petazzoni
b3a9ba339c Merge pull request #199 from bridgetkromhout/helm-mkdir
Directory missing
2018-04-13 01:04:39 -05:00
Jérôme Petazzoni
8d0ce37a59 Merge pull request #196 from bridgetkromhout/or-azure
Azure directions are also included
2018-04-13 01:04:07 -05:00
Jérôme Petazzoni
a1bbbd6f7b Merge pull request #195 from bridgetkromhout/slide-clarity
Making slide easier to read
2018-04-13 01:03:39 -05:00
Bridget Kromhout
de87743c6a Clarify an off-by-one amount of pods 2018-04-12 16:10:38 -05:00
Bridget Kromhout
9d4a72a4ba Merge pull request #202 from bridgetkromhout/url-update-fix
Fixing typo
2018-04-12 15:30:11 -05:00
Bridget Kromhout
19e39aea49 Fixing typo 2018-04-12 15:27:51 -05:00
Bridget Kromhout
da064a6005 Clarify chart install 2018-04-12 10:24:01 -05:00
Bridget Kromhout
a12a38a7a9 Use prometheus as example 2018-04-12 09:50:12 -05:00
Bridget Kromhout
2c3a442a4c wording correction
The addresses aren't what show us the addresses - it seems clear from context that this should be "commands".
2018-04-12 08:11:43 -05:00
Bridget Kromhout
25d560cf46 Directory missing 2018-04-12 07:48:25 -05:00
Bridget Kromhout
c3324cf64c More general 2018-04-12 07:41:43 -05:00
Bridget Kromhout
053bbe7028 Bold instead of highlighting 2018-04-12 07:39:02 -05:00
Bridget Kromhout
74f980437f Clarify that clusters can be of arbitrary size 2018-04-12 07:31:49 -05:00
Jérôme Petazzoni
5ef96a29ac Update kubectlexpose.md 2018-04-12 00:37:18 -05:00
Jérôme Petazzoni
f261e7aa96 Merge pull request #194 from bridgetkromhout/fix-blue
removing extra leading spaces which break everything
2018-04-11 23:55:34 -05:00
Jérôme Petazzoni
8e44e911ca Merge pull request #193 from bridgetkromhout/stern
Missing word added
2018-04-11 23:52:17 -05:00
Bridget Kromhout
6711ba06d9 Patch only daemonset pods 2018-04-11 21:09:46 -05:00
Bridget Kromhout
fce69b6bb2 Azure directions are also included 2018-04-11 19:34:51 -05:00
Bridget Kromhout
1183e2e4bf Making slide easier to read 2018-04-11 18:55:23 -05:00
Bridget Kromhout
de3082e48f Extra spaces prevent this from working 2018-04-11 18:47:30 -05:00
Bridget Kromhout
3acac34e4b Missing word added 2018-04-11 18:11:07 -05:00
Bridget Kromhout
f97bd2b357 googl to cloudflare 2018-04-11 13:36:00 -05:00
Jérôme Petazzoni
3bac124921 Merge pull request #183 from bridgetkromhout/stalling-for-time
Stalling for time during download
2018-04-11 14:56:02 +02:00
Bridget Kromhout
ba44603d0f Correcting title and slide section division 2018-04-11 06:53:01 -05:00
Jerome Petazzoni
358f844c88 Typo fix 2018-04-11 02:40:38 -07:00
Jérôme Petazzoni
74bf2d742c Merge pull request #182 from bridgetkromhout/versions-validated
Clarify versions validated
2018-04-10 23:11:38 -07:00
Jérôme Petazzoni
acba3d5467 Merge pull request #192 from bridgetkromhout/add-links
Add links
2018-04-10 23:03:09 -07:00
Jérôme Petazzoni
cfc066c8ea Merge pull request #191 from jgarrouste/master
Reversed sentences
2018-04-10 15:03:09 -07:00
Jérôme Petazzoni
4f69f19866 Merge pull request #186 from bridgetkromhout/vm-readme
link to VM prep README
2018-04-10 14:56:19 -07:00
Jérôme Petazzoni
c508f88af2 Update setup-k8s.md 2018-04-10 16:56:07 -05:00
Jérôme Petazzoni
9757fdb42f Merge pull request #185 from bridgetkromhout/article
Adding an article
2018-04-10 14:52:49 -07:00
Bridget Kromhout
24d57f535b Add links 2018-04-10 16:52:07 -05:00
Jérôme Petazzoni
e42dfc0726 Merge pull request #184 from bridgetkromhout/url-update
URL update
2018-04-10 14:51:55 -07:00
Bridget Kromhout
3f54f23535 Clarifying cleanup 2018-04-10 16:45:50 -05:00
Jérémy GARROUSTE
c7198b3538 correction 2018-04-10 22:56:42 +02:00
Bridget Kromhout
827d10dd49 Clarifying ambiguous labels on pods 2018-04-10 15:48:54 -05:00
Bridget Kromhout
1b7a072f25 Bump version and add link 2018-04-10 15:29:14 -05:00
Bridget Kromhout
af1347ca17 Clarify endpoints 2018-04-10 15:07:42 -05:00
Bridget Kromhout
f741cf5b23 Clarify error message upon start 2018-04-10 14:33:49 -05:00
Bridget Kromhout
eb1b3c8729 Clarify types 2018-04-10 14:17:27 -05:00
Bridget Kromhout
40e4678a45 goo.gl deprecation 2018-04-10 12:41:07 -05:00
Bridget Kromhout
d3c0a60de9 link to VM prep README 2018-04-10 12:30:46 -05:00
Bridget Kromhout
83bba80f3b URL update 2018-04-10 12:25:44 -05:00
Bridget Kromhout
44e0cfb878 Adding an article 2018-04-10 12:22:24 -05:00
Bridget Kromhout
a58e21e313 URL update 2018-04-10 12:15:01 -05:00
Bridget Kromhout
1131635006 Stalling for time during download 2018-04-10 11:52:52 -05:00
Bridget Kromhout
c6e477e6ab Clarify versions validated 2018-04-10 11:35:28 -05:00
Jerome Petazzoni
18a81120bc Add helper script to gauge chapter weights 2018-04-10 08:41:23 -05:00
Jerome Petazzoni
17cd67f4d0 Breakdown container internals chapter 2018-04-10 08:41:05 -05:00
Jerome Petazzoni
38a40d56a0 Label use-cases and rollouts
This adds a few realistic examples of label usage.
It also adds explanations about why deploying a new
version of the worker doesn't seem to be effective
immediately (the worker doesn't handle signals).
2018-04-10 06:04:17 -05:00
Jerome Petazzoni
96fd2e26fd Minor fixes for autopilot 2018-04-10 05:30:42 -05:00
Jerome Petazzoni
581bbc847d Add demo logo for k8s demo 2018-04-10 04:25:08 -05:00
Jerome Petazzoni
da7cbc41d2 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 17:06:35 -05:00
Jerome Petazzoni
282e22acb9 Improve chapters about container deep dive 2018-04-09 17:06:29 -05:00
Jérôme Petazzoni
9374eebdf6 Merge pull request #180 from bridgetkromhout/links-before-thanks
Moving links before thanks
2018-04-09 13:23:32 -07:00
Bridget Kromhout
dcd5c5b39a Moving links before thanks 2018-04-09 14:58:56 -05:00
Jérôme Petazzoni
974f8ee244 Merge pull request #179 from bridgetkromhout/mosh-tmux
Clarifications for tmux and mosh
2018-04-09 12:55:03 -07:00
Bridget Kromhout
8212aa378a Merge pull request #1 from jpetazzo/ode-to-mosh-and-tmux
Add even more info about mosh and tmux
2018-04-09 14:54:16 -05:00
Jerome Petazzoni
403d4c6408 Add even more info about mosh and tmux 2018-04-09 14:52:21 -05:00
Jerome Petazzoni
142681fa27 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 14:19:45 -05:00
Jerome Petazzoni
69c9141817 Enable new content in self-paced kube workshop 2018-04-09 14:19:27 -05:00
Bridget Kromhout
9ed88e7608 Clarifications for tmux and mosh 2018-04-09 14:19:16 -05:00
Jérôme Petazzoni
b216f4d90b Merge pull request #178 from bridgetkromhout/clarify-live
Formatting fixes
2018-04-09 12:13:07 -07:00
Bridget Kromhout
26ee07d8ba Format fix 2018-04-09 13:20:23 -05:00
Bridget Kromhout
a8e5b02fb4 Clarify live feedback 2018-04-09 13:18:25 -05:00
Jérôme Petazzoni
80a8912a53 Merge pull request #177 from jpetazzo/avril-2018
Avril 2018
2018-04-09 11:08:21 -07:00
Jérôme Petazzoni
1ba6797f25 Merge pull request #176 from bridgetkromhout/version-bump
Updating versions
2018-04-09 10:57:32 -07:00
Bridget Kromhout
11a2167dea Updating versions 2018-04-09 12:52:47 -05:00
Jérôme Petazzoni
af4eeb6e6b Merge pull request #175 from jpetazzo/helm-and-namespaces
Add two chapters: Helm and namespaces
2018-04-09 10:20:33 -07:00
Jérôme Petazzoni
ea6459e2bd Merge pull request #174 from jpetazzo/centralized-logging-with-efk
Add a chapter about centralized logging
2018-04-09 10:19:44 -07:00
Bridget Kromhout
2dfa5a9660 Update logs-centralized.md 2018-04-09 11:59:19 -05:00
Jerome Petazzoni
b86434fbd3 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 11:57:32 -05:00
Jerome Petazzoni
223525cc69 Add the new chapters
The new chapters are commented out right now.
But they're ready to be enabled whenever needed.
2018-04-09 11:57:16 -05:00
Bridget Kromhout
fd63c079c8 Update namespaces.md
typo fix
2018-04-09 11:44:45 -05:00
Jerome Petazzoni
ebe4511c57 Remove useless mkdir 2018-04-09 11:43:27 -05:00
Jérôme Petazzoni
e1a81ef8f3 Merge pull request #171 from jpetazzo/show-stern-to-view-logs
Show how to install and use Stern
2018-04-09 09:38:47 -07:00
Jerome Petazzoni
3382c83d6e Add link to Helm and say it's open source 2018-04-09 11:35:59 -05:00
Bridget Kromhout
a89430673f Update logs-cli.md
clarifications
2018-04-09 11:32:02 -05:00
Jerome Petazzoni
fcea6dbdb6 Clarify Stern installation comments 2018-04-09 11:29:19 -05:00
Bridget Kromhout
c744a7d168 Update helm.md
typo fixes
2018-04-09 11:27:34 -05:00
Bridget Kromhout
0256dc8640 Update logs-centralized.md
A few typo fixes
2018-04-09 11:22:43 -05:00
Jerome Petazzoni
41819794d7 Rename kube-halfday
We now have a full day of content. Rejoice.
2018-04-09 11:19:24 -05:00
Jerome Petazzoni
836903cb02 Merge branch 'master' of github.com:jpetazzo/container.training 2018-04-09 11:11:33 -05:00
Jerome Petazzoni
7f822d33b5 Clean up index.html
Comment out a bunch of older workshops (for which more recent
versions have been delivered since then). Update the links
to self-paced content.
2018-04-09 11:11:26 -05:00
Jérôme Petazzoni
232fdbb1ff Merge pull request #170 from jpetazzo/headless-services
Add headless services
2018-04-09 09:05:33 -07:00
Jerome Petazzoni
f3f6111622 Replace logistics.md with generic version
The current version of the logistics.md slide shows AJ and JP.
The new version is an obvious template, i.e. it says 'this slide
should be customized' and it uses imaginary personas instead.
2018-04-09 10:59:55 -05:00
Jerome Petazzoni
a8378e7e7f Clarify endpoints 2018-04-09 10:12:22 -05:00
Jerome Petazzoni
eb3165096f Add Logging section and manifests 2018-04-09 09:37:28 -05:00
Jerome Petazzoni
90ca58cda8 Add a few slides about network policies
This is a very high-level overview (we can't cover a lot within the current time constraints) but it gives a primer about network policies and a few links to explore further.
2018-04-09 08:27:31 -05:00
Jerome Petazzoni
5a81526387 Add two chapters: Helm and namespaces
In these chapters, we:
- show how to install Helm
- run the Helm tiller on our cluster
- use Helm to install Prometheus
- don't do anything fancy with
  Prometheus (it's just for the
  sake of installing something)
- create a basic Helm chart for
  DockerCoins
- explain namespace concepts
- show how to use contexts to hop
  between namespaces
- use Helm to deploy DockerCoins
  to a new namespace

These two chapters go together.
2018-04-09 07:57:27 -05:00
Jerome Petazzoni
8df073b8ac Add a chapter about centralized logging
Explain the purpose of centralized logging. Describe the
EFK stack. Deploy a simplified EFK stack through a YAML
file. Use it to view container logs. Profit.
2018-04-09 04:17:00 -05:00
Jérôme Petazzoni
0f7356b002 Merge pull request #167 from jgarrouste/avril-2018
Small changes
2018-04-09 00:26:13 -07:00
Jérôme Petazzoni
0c2166fb5f Merge pull request #172 from jpetazzo/clarify-daemonset-bonus-exercises
Clarify the bonus exercises
2018-04-09 00:24:26 -07:00
Jerome Petazzoni
d228222fa6 Reword headless services
Hopefully this explains better the use of headless services.
I also added a slide about endpoints, with a couple of simple
commands to show them.
2018-04-08 17:59:42 -05:00
Bridget Kromhout
e4b7d3244e Merge pull request #173 from bridgetkromhout/muracon-past
MuraCon to past
2018-04-08 17:50:09 -05:00
Bridget Kromhout
7d0e841a73 MuraCon to past 2018-04-08 17:46:55 -05:00
Jerome Petazzoni
9859e441e1 Clarify the bonus exercises
We had two open-ended exercises (questions without
answers). We have added more explanations, as well
as solutions for the exercises. It lets us show a
few more tricks with selectors, and how to apply
changes to sets of resources.
2018-04-08 17:16:27 -05:00
Jerome Petazzoni
e1c638439f Bump versions
Bump up Compose and Machine to latest versions.
Bump down Engine to stable branch.

I'm pushing straight to master because YOLO^W^W
because @bridgetkromhout is using the kube101.yaml
file anyway, so this shouldn't break her things.

(Famous last words...)
2018-04-08 16:34:48 -05:00
Jérôme Petazzoni
253aaaad97 Merge pull request #169 from jpetazzo/what-is-cni
Add slide about CNI
2018-04-08 14:32:17 -07:00
Jérôme Petazzoni
a249ccc12b Merge pull request #168 from jpetazzo/clarify-control-plane
Clarify control plane
2018-04-08 14:29:50 -07:00
Jerome Petazzoni
22fb898267 Show how to install and use Stern
Stern is super cool to stream the logs of multiple
containers.
2018-04-08 16:26:08 -05:00
Bridget Kromhout
e038797875 Update concepts-k8s.md
A few suggested clarifications to your (excellent) clarifications
2018-04-08 15:16:42 -05:00
Jerome Petazzoni
7b9f9e23c0 Add headless services 2018-04-08 11:10:07 -05:00
Jerome Petazzoni
01d062a68f Add slide about CNI 2018-04-08 10:31:17 -05:00
Jerome Petazzoni
a66dfb5faf Clarify control plane
Explain better that the control plane can run outside
of the cluster, and that the word master can be
confusing (does it designate the control plane, or
the node running the control plane? What if there is
no node running the control plane, because the control
plane is external?)
2018-04-08 09:57:51 -05:00
Jerome Petazzoni
ac1480680a Add ecosystem chapter 2018-04-08 08:40:20 -05:00
Jerome Petazzoni
13a9b5ca00 What IS docker?
Explain what the engine is
2018-04-08 07:21:47 -05:00
Jérémy GARROUSTE
0cdf6abf0b Add .center for some images 2018-04-07 20:16:29 +02:00
Jérémy GARROUSTE
2071694983 Add .small[] 2018-04-07 20:16:13 +02:00
Jérôme Petazzoni
12e2b18a6f Merge pull request #166 from jgarrouste/avril-2018
Update the output of docker version and docker build command
2018-04-07 09:30:11 -07:00
Jerome Petazzoni
28e128756d How to pass container config 2018-04-07 11:28:42 -05:00
Jerome Petazzoni
a15109a12c Add chapter about labels 2018-04-07 09:57:35 -05:00
Jerome Petazzoni
e500fb57e8 Add --mount syntax 2018-04-07 09:37:27 -05:00
Jerome Petazzoni
f1849092eb add chapter on Docker Machine 2018-04-07 07:33:28 -05:00
Jerome Petazzoni
f1dbd7e8a6 Copy on write 2018-04-06 09:27:29 -05:00
Jerome Petazzoni
d417f454dd Finalize section on namespaces and cgroups 2018-04-06 09:27:20 -05:00
Jérémy GARROUSTE
d79718d834 Update docker build output 2018-04-06 11:20:09 +02:00
Jérémy GARROUSTE
de9c3a1550 Update docker version output 2018-04-06 10:04:41 +02:00
Jerome Petazzoni
90fc7a4ed3 Merge branch 'avril-2018' of github.com:jpetazzo/container.training into avril-2018 2018-04-05 17:58:55 -05:00
Jerome Petazzoni
09edbc24bc Container deep dive: namespaces, cgroups, etc. 2018-04-05 17:58:43 -05:00
Jérémy GARROUSTE
92f8701c37 Update output of docker build 2018-04-06 00:00:27 +02:00
Jérôme Petazzoni
c828888770 Merge pull request #165 from jgarrouste/avril-2018
Update output of 'docker build'
2018-04-05 14:57:05 -07:00
Jérémy GARROUSTE
bb7728e7e7 Update docker build output 2018-04-05 23:52:37 +02:00
Jerome Petazzoni
5f544f9c78 Add container engines chapter; orchestration overview chapter 2018-04-04 17:09:21 -05:00
Jerome Petazzoni
5b6a7d1995 Update my email address 2018-04-02 18:52:48 -05:00
Jerome Petazzoni
b21185dde7 Introduce EXPOSE 2018-04-02 00:10:45 -05:00
Jerome Petazzoni
deaee0dc82 Explain why use Docker Inc's repos 2018-04-01 23:58:10 -05:00
Jerome Petazzoni
4206346496 MacOS -> macOS 2018-04-01 23:52:38 -05:00
Jerome Petazzoni
6658b632b3 Add reason why we use VMs 2018-04-01 23:49:08 -05:00
Jerome Petazzoni
d9be7160ef Move 'extra details' explanation slide to common deck 2018-04-01 23:34:19 -05:00
Jérôme Petazzoni
d56424a287 Merge pull request #164 from bridgetkromhout/adding-k8s-101
Adding more k8s 101 dates
2018-03-29 16:02:31 -07:00
Bridget Kromhout
2d397c5cb8 Adding more k8s 101 dates 2018-03-29 09:39:20 -07:00
Jérôme Petazzoni
08004caa5d Merge pull request #163 from BretFisher/bret-dates-2018q2
adding more dates
2018-03-28 10:26:07 -07:00
Frank Farmer
522358a004 Small typo 2018-03-28 12:23:47 -05:00
Jérôme Petazzoni
e00a6c36e3 Merge pull request #157 from bridgetkromhout/increase-ulimit
Increase allowed open files
2018-03-28 10:07:11 -07:00
Jérôme Petazzoni
4664497cbc Merge pull request #156 from bridgetkromhout/symlinks-on-rerun
Symlink and directory fixes for multiple runs
2018-03-28 10:06:39 -07:00
Bret Fisher
6be424bde5 adding more dates 2018-03-28 03:27:18 -04:00
Bridget Kromhout
0903438242 Increase allowed open files 2018-03-27 09:36:04 -07:00
Bridget Kromhout
b874b68e57 Symlink fixes for multiple runs 2018-03-27 09:25:48 -07:00
Bridget Kromhout
6af9385c5f Merge pull request #155 from bridgetkromhout/update-index
Updating index
2018-03-27 11:08:14 -05:00
Bridget Kromhout
29398ac33b Updating index 2018-03-27 09:06:03 -07:00
Jérôme Petazzoni
7525739b24 Merge pull request #151 from sadiqkhoja/patch-1
corrected number of containers
2018-03-27 05:50:50 -07:00
Bridget Kromhout
50ff71f3f3 Merge pull request #152 from bridgetkromhout/current-versions
Updating versions
2018-03-27 05:04:14 -05:00
Bridget Kromhout
70a9215c9d Updating versions 2018-03-27 03:02:17 -07:00
Sadiq Khoja
9c1a5d9a7d corrected number of containers 2018-03-17 14:39:05 +05:00
Jérôme Petazzoni
9a9b4a6892 Merge pull request #150 from inful/patch-3
Fix: Kubicorn URL
2018-03-14 11:03:28 -07:00
Jone Marius Vignes
e5502c724e Fix: Kubicorn URL
Kubicorn has moved permanently to https://github.com/kubicorn/kubicorn
2018-03-14 14:50:56 +01:00
Jérôme Petazzoni
125878e280 Merge pull request #147 from bridgetkromhout/clarify-socat-port
Clarifying how to find the port needed.
2018-03-13 13:27:51 -07:00
Bridget Kromhout
b4c1498ca1 Clarifying instructions 2018-03-13 20:55:11 +01:00
Bridget Kromhout
88d534a7f2 Clarifying how to find the port needed. 2018-03-13 20:36:19 +01:00
Jérôme Petazzoni
6ce4ed0937 Merge pull request #146 from bridgetkromhout/version-update
Versions updated
2018-03-13 11:53:51 -07:00
Bridget Kromhout
1b9ba62dc8 Versions updated 2018-03-13 19:27:42 +01:00
Jérôme Petazzoni
f3639e6200 Merge pull request #145 from bridgetkromhout/increase-timeout
Increasing timeout for slow mirrors
2018-03-12 13:49:24 -07:00
Bridget Kromhout
1fe56cf401 Increasing timeout for slow mirrors 2018-03-12 21:41:47 +01:00
Jerome Petazzoni
a3add3d816 Get inside a container (live and post mortem) 2018-03-12 11:57:34 -05:00
Jérôme Petazzoni
2807de2123 Merge pull request #144 from wlonkly/patch-2
Remove duplicate line
2018-03-10 14:55:39 -08:00
Jérôme Petazzoni
5029b956d2 Merge pull request #143 from wlonkly/patch-1
Fix typo: compiler -> container
2018-03-10 14:53:50 -08:00
Rich Lafferty
815aaefad9 Remove duplicate line 2018-03-10 15:43:40 -05:00
Rich Lafferty
7ea740f647 Fix typo: compiler -> container 2018-03-10 15:09:32 -05:00
Jerome Petazzoni
eaf25e5b36 Improve kubetest error reporting
The kubetest command used to say [SUCCESS] on completely
fresh nodes. Now we check the existence of the /tmp/node
file, as well as of the kubectl executable.
2018-03-07 16:17:57 -08:00
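A sketch of the check this commit describes (hypothetical shape; only the `/tmp/node` marker file and the `kubectl` lookup come from the commit message — the surrounding script is assumed):

```shell
#!/bin/sh
# Report [SUCCESS] only when the node has really been provisioned,
# instead of on completely fresh nodes.
node_ready() {
    marker="${1:-/tmp/node}"   # marker file written during provisioning
    [ -f "$marker" ] && command -v kubectl >/dev/null 2>&1
}

if node_ready; then
    echo "[SUCCESS]"
else
    echo "[FAILURE] node not fully provisioned"
fi
```

On a fresh node the marker file is absent, so the function short-circuits to failure before even looking for `kubectl`.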
Jerome Petazzoni
3b336a9127 Merge branch 'bridgetkromhout-attribute-authorship' 2018-03-07 15:47:39 -08:00
Jerome Petazzoni
cc4d1fd1c7 Slight rewording 2018-03-07 15:47:38 -08:00
Jerome Petazzoni
17ec6441a0 Merge branch 'attribute-authorship' of git://github.com/bridgetkromhout/container.training into bridgetkromhout-attribute-authorship 2018-03-07 15:42:49 -08:00
Jerome Petazzoni
a1b107cecb Add Paris sessions 2018-03-07 15:39:23 -08:00
Jérôme Petazzoni
2e06bc2352 Merge pull request #140 from atsaloli/patch-2
Fix tiny typo (missing "o" in "outbound")
2018-03-06 09:54:38 -08:00
Aleksey Tsalolikhin
af0a239bd9 Fix tiny typo (missing "o" in "outbound") 2018-03-06 09:22:42 -08:00
Bridget Kromhout
92939ca3f2 Merge pull request #138 from jpetazzo/lets-tag-things-properly
Tag images properly
2018-03-05 19:55:09 -06:00
Jerome Petazzoni
aca51901a1 Tag images properly
This tags the first build with v0.1, allowing for a smoother, more
logical rollback. Also adds a slide explaining why to stay away
from latest. @kelseyhightower would be proud :-)
2018-03-05 16:13:30 -08:00
Jérôme Petazzoni
8d15dba26d Merge pull request #137 from bridgetkromhout/checklist-edits
Clarifications and links for checklist
2018-03-05 16:13:06 -08:00
Bridget Kromhout
cdca5655fc Clarifications and links for checklist 2018-03-05 17:08:06 -06:00
Jerome Petazzoni
c778fc84ed Add a dump of the checklist I use when delivering 2018-03-05 14:30:39 -08:00
Bridget Kromhout
7f72ee1296 Credit to multiple contributors 2018-03-05 15:53:42 -06:00
Jérôme Petazzoni
1981ac0b93 Merge pull request #135 from bridgetkromhout/bridget-specific
Adding Bridget-specific files
2018-03-05 13:36:06 -08:00
Jérôme Petazzoni
a8f2fb4586 Merge pull request #134 from bridgetkromhout/dedup-thanks
De-dup thanks; add comma
2018-03-05 13:35:45 -08:00
Jérôme Petazzoni
a69d3d0828 Merge pull request #133 from bridgetkromhout/no-chatroom
Makes more sense for "in person" chat
2018-03-05 13:32:51 -08:00
Jérôme Petazzoni
40760f9e98 Merge pull request #131 from bridgetkromhout/change-instance-type
Changing Azure instance type
2018-03-05 13:25:49 -08:00
Bridget Kromhout
b64b16dd67 Adding Bridget-specific files 2018-03-05 14:54:28 -06:00
Bridget Kromhout
8c2c9bc5df De-dup thanks; add comma 2018-03-05 14:51:26 -06:00
Bridget Kromhout
3a21cbc72b Makes more sense for "in person" chat 2018-03-05 14:37:10 -06:00
Bridget Kromhout
5438fca35a Attribute authorship 2018-03-05 14:34:41 -06:00
Bridget Kromhout
a09521ceb1 Changing Azure instance type 2018-03-05 13:44:02 -06:00
Jérôme Petazzoni
0d6501a926 Merge pull request #130 from atsaloli/patch-1
Two small fixes
2018-03-05 10:10:25 -08:00
Aleksey Tsalolikhin
c25f7a119b Fix very small typo -- remove extra "v" in "code" 2018-03-04 19:58:27 -08:00
Aleksey Tsalolikhin
1958c85a96 Fix noun plural tense (change "instructions" -> "instruction")
"An" means one. So "an instruction" rather than "an instructions".  (Small grammar fix.)
2018-03-04 19:56:03 -08:00
Jérôme Petazzoni
a7ba4418c6 Merge pull request #129 from bridgetkromhout/improve-directions
Improve directions
2018-03-03 19:52:15 -06:00
Bridget Kromhout
d6fcbb85e8 Improve directions 2018-03-03 18:44:56 -06:00
Jérôme Petazzoni
278fbf285a Merge pull request #128 from bridgetkromhout/cleanup
Cleanup
2018-03-03 14:39:56 -06:00
Bridget Kromhout
ca828343e4 Remove azure instances post-workshop. 2018-03-03 08:51:54 -06:00
Bridget Kromhout
5c663f9e09 Updating help output 2018-03-03 08:48:02 -06:00
Bridget Kromhout
9debd76816 Document kubetest 2018-03-03 08:44:58 -06:00
Bridget Kromhout
848679829d Removed -i and trailing space 2018-03-02 18:18:04 -06:00
Bridget Kromhout
6727007754 Missing variable 2018-03-02 18:11:32 -06:00
Jerome Petazzoni
03a563c172 Merge branch 'master' of github.com:jpetazzo/container.training 2018-03-02 14:17:54 -06:00
Jerome Petazzoni
cfbd54bebf Add hacky-backslashy kubetest command 2018-03-02 14:17:37 -06:00
Jérôme Petazzoni
7f1e9db0fa Missing curly brace 2018-03-02 13:08:48 -06:00
Jérôme Petazzoni
1367a30a11 Merge pull request #126 from bridgetkromhout/add-azure
Adding Azure examples
2018-03-02 12:46:02 -06:00
Bridget Kromhout
31b234ee3a Adding Azure examples 2018-03-02 12:42:55 -06:00
Jérôme Petazzoni
57dd5e295e Merge pull request #125 from bridgetkromhout/increase-timeouts
Increase timeouts
2018-03-01 17:43:29 -06:00
Bridget Kromhout
c188923f1a Increase timeouts 2018-03-01 17:39:51 -06:00
Jérôme Petazzoni
7a8716d38b Merge pull request #124 from bridgetkromhout/postprep
Postprep is now python
2018-03-01 17:17:04 -06:00
Bridget Kromhout
2e77c13297 Postprep is now python 2018-03-01 17:15:01 -06:00
Jerome Petazzoni
d5279d881d Add info about pre-built images 2018-03-01 15:13:39 -06:00
Jerome Petazzoni
34e9cc1944 Don't assume 5 nodes 2018-03-01 14:55:02 -06:00
Jerome Petazzoni
2a7498e30e A bit of rewording, and a couple of links about dashboard security 2018-03-01 14:51:00 -06:00
Jerome Petazzoni
4689d09e1f One typo and two minor tweaks 2018-03-01 14:18:48 -06:00
Jerome Petazzoni
b818a38307 Correctly report errors happening in functions
`trap ... ERR` does not automatically propagate to functions. Therefore,
our fancy error-reporting mechanism did not catch errors happening in
functions; and we do most of the actual work in functions. The solution
is to `set -E` or `set -o errtrace`.
2018-03-01 13:56:08 -06:00
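The commit above describes a subtle bash behavior: a top-level `trap ... ERR` is not inherited by shell functions unless errtrace is enabled. A minimal standalone sketch (not the repository's actual script) illustrating the fix:

```shell
#!/usr/bin/env bash
# Without 'set -E', the ERR trap below would NOT fire for the failing
# command inside do_work. 'set -E' (equivalent to 'set -o errtrace')
# makes functions inherit the trap, as the commit message explains.
set -E
trap 'echo "error reported from line $LINENO" >&2' ERR

do_work() {
    false    # fails; with 'set -E' the ERR trap above fires here
    true     # keep the function's own exit status at 0
}

do_work
```

Dropping the `set -E` line makes the script run silently, which is the bug the commit fixes.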
Jérôme Petazzoni
7e5d869472 Merge pull request #123 from bridgetkromhout/kube101
Kube101 & non-AWS
2018-03-01 13:23:04 -06:00
Jérôme Petazzoni
3eaf31fd48 Merge pull request #122 from bridgetkromhout/pssh-clarity
Pssh clarity
2018-03-01 13:21:05 -06:00
Bridget Kromhout
fe5e22f5ae How to set up non-AWS workshops 2018-02-28 21:45:36 -06:00
Bridget Kromhout
61da583080 Don't overwrite ip file if exists 2018-02-28 21:44:58 -06:00
Bridget Kromhout
94dfe1a0cd Adding sample file mentioned in README 2018-02-28 21:44:29 -06:00
Bridget Kromhout
412dbadafd Adding settings for kube101 2018-02-28 21:43:41 -06:00
Bridget Kromhout
8c5e4e0b09 Require pssh 2018-02-28 21:28:20 -06:00
Bridget Kromhout
2ac6072d80 Invoke as pssh 2018-02-28 21:26:17 -06:00
Jerome Petazzoni
ef4591c4fc Allow to override instance type (closes #39) 2018-02-28 13:45:08 -06:00
Jerome Petazzoni
22dfbab09b Minor formatting 2018-02-28 13:41:22 -06:00
Jérôme Petazzoni
37f595c480 Merge pull request #120 from bridgetkromhout/clarify-kube-public
Clarify kube-public; define kube-system
2018-02-27 17:42:11 -06:00
Bridget Kromhout
1fc951037d Slight clarification per request 2018-02-27 17:39:52 -06:00
Jérôme Petazzoni
affd46dd88 Merge pull request #121 from bridgetkromhout/obviate-https
Remove need for https in the workshop dashboard
2018-02-27 17:34:27 -06:00
Bridget Kromhout
cfaff3df04 Remove need for https in the workshop dashboard 2018-02-27 17:31:14 -06:00
Jérôme Petazzoni
ce2451971d Merge pull request #118 from bridgetkromhout/twice-the-steps
Proper attribution
2018-02-27 16:57:52 -06:00
Jérôme Petazzoni
8cf5d0efbd Merge pull request #119 from bridgetkromhout/naming-things
Naming things is hard; considering scope here
2018-02-27 16:40:40 -06:00
Bridget Kromhout
f61d61223d Clarify kube-public; define kube-system 2018-02-27 16:31:36 -06:00
Bridget Kromhout
6b6eb50f9a Naming things is hard; considering scope here 2018-02-27 15:26:43 -06:00
Jerome Petazzoni
89ab66335f ... and trim down kube half-day 2018-02-27 14:49:39 -06:00
Jerome Petazzoni
5bc4e95515 Clarify service discovery 2018-02-27 14:45:08 -06:00
Jerome Petazzoni
893f05e401 Move docker-compose logs to the composescale.md chapter 2018-02-27 14:38:41 -06:00
Bridget Kromhout
4abc8ce34c Proper attribution 2018-02-27 14:38:32 -06:00
Jérôme Petazzoni
34d2c610bf Merge pull request #117 from bridgetkromhout/self-deprecating-humor
Attributing humor so it doesn't sound negative
2018-02-27 14:06:58 -06:00
Jerome Petazzoni
1492a8a0bc Rephrase daemon set intro to fit even without the entropy spiel 2018-02-27 13:53:34 -06:00
Bridget Kromhout
388d616048 Attributing humor so it doesn't sound negative 2018-02-27 13:46:19 -06:00
Jerome Petazzoni
28589f5a83 Remove cluster-size specific reference 2018-02-27 13:40:52 -06:00
Jerome Petazzoni
e7a80f7bfb Merge branch 'master' of github.com:jpetazzo/container.training 2018-02-27 13:39:55 -06:00
Jerome Petazzoni
ea47e0ac05 Add link to brigade 2018-02-27 13:39:50 -06:00
Jérôme Petazzoni
09d204038f Merge pull request #116 from bridgetkromhout/versions-installed
Clarify that these are the installed versions
2018-02-27 13:36:40 -06:00
Jérôme Petazzoni
47cb0afac2 Merge pull request #115 from bridgetkromhout/any-cloud
More cloud-provider generic
2018-02-27 13:36:10 -06:00
Jerome Petazzoni
8e2e7f44d3 Break out 'scale things on a single node' section 2018-02-27 13:35:03 -06:00
Bridget Kromhout
8c7702deda Clarify that these are the installed versions
* "Brand new" is a moving target
2018-02-27 13:29:40 -06:00
Bridget Kromhout
bdc1ca01cd More cloud-provider generic 2018-02-27 13:27:11 -06:00
Jerome Petazzoni
dca58d6663 Merge Lucas awesome diagram 2018-02-27 12:22:02 -06:00
Jerome Petazzoni
a0cf4b97c0 Add Lucas' amazing diagram 2018-02-27 12:17:10 -06:00
Jerome Petazzoni
a1c239260f Add Lucas' amazing diagram 2018-02-27 12:17:02 -06:00
Jerome Petazzoni
a8a2cf54a5 Factor out links in separate files 2018-02-27 12:01:53 -06:00
Jerome Petazzoni
d5ba80da55 Replace 'five VMs' with 'a cluster of VMs' 2018-02-27 11:53:01 -06:00
Jerome Petazzoni
3f2da04763 CSS is hard but it's not an excuse 2018-02-27 09:44:32 -06:00
Jerome Petazzoni
e092f50645 Branch out intro/intro.md into per-workshop variants 2018-02-27 09:40:54 -06:00
Jérôme Petazzoni
7f698bd690 Merge pull request #114 from bridgetkromhout/master
Adding upcoming events
2018-02-27 09:28:27 -06:00
Bridget Kromhout
7fe04b9944 Adding upcoming events 2018-02-27 09:26:03 -06:00
Jerome Petazzoni
2671714df3 Move indexconf2018 to past workshops section 2018-02-27 09:11:09 -06:00
Jerome Petazzoni
630e275d99 Merge branch 'bridgetkromhout-master-updates' 2018-02-26 17:52:14 -06:00
Jerome Petazzoni
614f10432e Mostly reformatting so that slides are nice and tidy 2018-02-26 17:52:06 -06:00
Bridget Kromhout
223b5e152b Version updates 2018-02-26 16:56:45 -06:00
Bridget Kromhout
ec55cd2465 Including ACR as one of the cloud k8s offerings 2018-02-26 16:55:56 -06:00
Bridget Kromhout
c59510f921 Updates & clarifications 2018-02-26 16:54:41 -06:00
Bridget Kromhout
0f5f481213 Typo fix 2018-02-26 16:52:23 -06:00
Bridget Kromhout
b40fa45fd3 Clarifications 2018-02-26 16:50:31 -06:00
Bridget Kromhout
8faaf35da0 Clarify we didn't tag the v1 release 2018-02-26 16:48:52 -06:00
Bridget Kromhout
ce0f79af16 Updates & links for all cloud-provided k8s 2018-02-26 16:46:49 -06:00
Bridget Kromhout
faa420f9fd Clarify language and explain https use 2018-02-26 16:41:21 -06:00
Jerome Petazzoni
aab519177d Add indexconf2018 to index 2018-02-19 15:55:22 -08:00
Jerome Petazzoni
5116ad7c44 Use kubeadm token generate to simplify things a bit.
Thanks @rmb938 for the suggestion!

Closes #110.
2018-01-18 21:14:40 +01:00
Jerome Petazzoni
7305e911e5 Update for k8s 1.9 2018-01-10 17:12:49 +01:00
Jerome Petazzoni
b2f670acf6 Add error checking for AMI finder script 2018-01-10 16:48:04 +01:00
Jerome Petazzoni
dc040aa693 Make sleep interruptible; fix slide count 2017-12-23 19:38:32 +01:00
Jerome Petazzoni
9b7a8494b0 Fix logic to advance to next snippet 2017-12-23 18:49:42 +01:00
Jerome Petazzoni
ae6c1bb8eb Major UI refactor
Navigation now includes all slides and all snippets.
ENTER skips to the next snippet, or executes the
selected snippet.

More improvements to come: allow SPACE to navigate
step by step through slides and snippets, executing
the snippets.
2017-12-23 09:31:01 +01:00
Jerome Petazzoni
a9a4f0ea07 Create only one remote session 2017-12-23 05:06:30 +01:00
Jerome Petazzoni
68af5940e3 Script node3 setup as well 2017-12-23 05:00:22 +01:00
Jerome Petazzoni
9df5313da4 Remove spurious output from desktop integration 2017-12-22 23:15:21 +01:00
Jerome Petazzoni
ba3f00e64e Clear screen before showing UI 2017-12-22 23:11:11 +01:00
Jerome Petazzoni
4d7a6d5c70 Stupid typo 2017-12-22 23:03:48 +01:00
Jerome Petazzoni
aef833c3f5 Add pause before switching away from browser 2017-12-22 23:02:47 +01:00
Jerome Petazzoni
6f58fee29b Automatically open links in intro section 2017-12-22 22:54:08 +01:00
Jerome Petazzoni
dda09ddbcb slightly edit tmux commands 2017-12-22 22:49:53 +01:00
Jerome Petazzoni
8b13fe6eb4 fix formatting in PWD reference 2017-12-22 22:48:42 +01:00
Jerome Petazzoni
21f345a96a Improve open command 2017-12-22 22:42:41 +01:00
Jerome Petazzoni
eaa4dc63bf Instruct to use PWD in self-paced mode 2017-12-22 22:40:23 +01:00
Jerome Petazzoni
af5ea2188b More typos 2017-12-22 22:26:08 +01:00
Jerome Petazzoni
7f23a4c964 Fix minor typos 2017-12-22 22:24:37 +01:00
Jerome Petazzoni
345e04c956 Improve tmux detection logic and add instructions 2017-12-21 22:24:27 +01:00
Jerome Petazzoni
2a138102fc Add client address in pub/sub server 2017-12-21 05:53:24 +01:00
Jerome Petazzoni
ef5e8f00f8 Add script to remember myself of how to customize tmux status bar 2017-12-21 05:46:57 +01:00
Jerome Petazzoni
badb73a413 Slower pace for virtual typist 2017-12-21 05:45:38 +01:00
Jerome Petazzoni
2aced95c86 Improve UX for remote control 2017-12-21 05:40:35 +01:00
Jerome Petazzoni
720989e829 Add remote control of slide deck 2017-12-21 05:29:59 +01:00
Jerome Petazzoni
718031565e Exit gracefully if server is not running instead of waiting forever 2017-12-21 05:29:45 +01:00
Jerome Petazzoni
ec7b46b779 Add remote.js to workshop template and pub/sub server 2017-12-21 04:51:49 +01:00
Jerome Petazzoni
270c36b29a Add pub/sub server and CLI remote 2017-12-21 04:43:42 +01:00
Jerome Petazzoni
bc2eb53bb2 Python 3 compatibility 2017-12-21 04:33:35 +01:00
Jerome Petazzoni
afe7b8523c Move autotest to autopilot/ directory 2017-12-21 04:32:16 +01:00
Jérôme Petazzoni
a7743a4314 Update Engine version 2017-12-20 18:05:52 -06:00
Jérôme Petazzoni
ba74fdc841 Round of update for video content 2017-12-20 00:17:49 -06:00
Jérôme Petazzoni
41c047e12a Always start in interactive mode 2017-12-20 00:17:40 -06:00
Jérôme Petazzoni
f4fc055405 Add manifest for video content 2017-12-20 00:17:25 -06:00
Jérôme Petazzoni
2eb6fcfbf5 Add command to backtrack 1 slide 2017-12-18 18:46:24 -06:00
Jérôme Petazzoni
c665e1a2d6 httping only 3 requests is enough 2017-12-18 18:45:40 -06:00
Jérôme Petazzoni
bb7cdafe47 Comment out machine chapter 2017-12-18 18:39:33 -06:00
Jérôme Petazzoni
95fcfadb17 state.yml -> state.yaml to avoid collision with manifests 2017-12-18 18:39:17 -06:00
Jérôme Petazzoni
1ef47531c8 autotest: save all parameters in state.yml 2017-12-18 18:36:46 -06:00
Jérôme Petazzoni
9589b641b6 Another fix in CNC script 2017-12-18 18:35:57 -06:00
Jérôme Petazzoni
63463bda64 Merge branch 'master' of github.com:jpetazzo/container.training 2017-12-18 17:46:11 -06:00
Jérôme Petazzoni
b642412639 Clarify listen-addr and advertise-addr
Closes #108
2017-12-18 17:46:06 -06:00
Jérôme Petazzoni
21f9b73cb4 Update CNC deployment script + workshopctl deps 2017-12-18 17:45:29 -06:00
Jérôme Petazzoni
b73e5432f3 Merge pull request #109 from juliogomez/patch-6
curl is not installed in this step
2017-12-18 17:20:52 -06:00
Jérôme Petazzoni
de5cc9b0bf Merge branch 'master' of github.com:jpetazzo/container.training 2017-12-18 15:25:48 -06:00
Jérôme Petazzoni
08b38127d3 Clarify license for slides since they're not code 2017-12-18 15:25:41 -06:00
Julio
383804b7f1 curl is not installed in that step
curl was actually installed in a previous step, not here
2017-12-16 23:05:29 +01:00
Jérôme Petazzoni
20bf80910e Merge pull request #107 from juliogomez/patch-3
Fixing number of replicas per node
2017-12-16 09:26:18 -06:00
Jérôme Petazzoni
29a2014745 Merge pull request #106 from juliogomez/patch-2
Typo correction in detach mode?
2017-12-16 09:24:36 -06:00
Julio
40f6ee236f Fixing number of replicas per node
If 3 copies per node are 15, 4 copies per node should be 20.
2017-12-15 20:46:25 +01:00
Julio
5551cbd11f Typo correction in detach mode?
While you wrote:
`--detach=false` does not complete *faster*. It just *doesn't wait* for completion.
I think you actually meant:
`--detach=TRUE` does not complete *faster*. It just *doesn't wait* for completion.
2017-12-15 20:25:29 +01:00
Jérôme Petazzoni
9e84a05325 Merge pull request #105 from juliogomez/patch-1
Fixed missing image name (tomcat) in 'docker run' command
2017-12-14 15:33:24 -06:00
Julio
558e990907 Fixed missing image name in command 2017-12-14 22:15:29 +01:00
Jérôme Petazzoni
c2e88bb343 add qcon video 2017-12-07 14:28:46 -06:00
Jérôme Petazzoni
b7582397fe Add quote by not-Benjamin-Franklin 2017-12-04 18:19:04 -06:00
Jérôme Petazzoni
3e7b8615ab Move kube workshop to archives section 2017-12-04 13:57:17 -06:00
Jérôme Petazzoni
6f5d8c5372 Merge pull request #104 from gurayyildirim/patch-1
Kubernetes.io link fixed.
2017-11-24 11:31:07 -06:00
gurayyildirim
c116d75408 Kubernetes.io link fixed.
Kubernetes.io link had a wrong ']' mark which was causing a 404 from Kubernetes.io blog.
2017-11-24 02:45:49 +03:00
Jérôme Petazzoni
bb4ee4e77d Add helper script to setup CNC node 2017-11-20 17:04:38 -08:00
Jérôme Petazzoni
fc0e46988c Fix hint for ssh agent 2017-11-20 16:37:17 -08:00
Jérôme Petazzoni
c71b93c3a7 Add files to generate a CSV file with nodes 2017-11-20 16:35:25 -08:00
Jérôme Petazzoni
2c6b79c17d Add kube image to cards.html 2017-11-20 16:33:08 -08:00
Jérôme Petazzoni
f264838ec5 Update versions 2017-11-19 23:34:27 -08:00
Jérôme Petazzoni
54e7d10226 Add internal training 2017-11-19 17:45:24 -08:00
Jérôme Petazzoni
f8d3246005 Move QCON to 'done' section 2017-11-18 11:15:21 -08:00
Jérôme Petazzoni
7856e8c5f2 Put chat room in index 2017-11-16 23:32:12 -08:00
Jérôme Petazzoni
69bafe332f Update interstitials with self-hosted version 2017-11-16 23:29:37 -08:00
Jérôme Petazzoni
e2b24a20d2 Remove pixabay for now: 2017-11-16 08:46:47 -08:00
Jerome Petazzoni
d2d1771fd3 Better emoji support 2017-11-15 23:41:14 +01:00
Jérôme Petazzoni
6c5e3eb3f3 Update versions 2017-11-13 22:39:21 -08:00
Jérôme Petazzoni
d99dbd5878 Add first support to auto-open URLs 2017-11-12 22:47:07 -08:00
Jérôme Petazzoni
6d4894458a Add note about interstitials 2017-11-12 18:06:43 -08:00
Jérôme Petazzoni
bc367a1297 Simplify reference to PWD in intro slides 2017-11-12 17:45:15 -08:00
Jérôme Petazzoni
31d1074ee0 autotest: process single keypresses 2017-11-12 17:38:01 -08:00
Jérôme Petazzoni
cd18a87b8c Add self-paced kube manifest + small fixes in kube content 2017-11-12 17:37:40 -08:00
Jérôme Petazzoni
8279a3bce9 Add QCON slides 2017-11-10 23:01:17 -08:00
Jérôme Petazzoni
93524898cc Fix build script exit status 2017-11-10 22:57:06 -08:00
Jérôme Petazzoni
e9319060f6 Add pause images between chapters 2017-11-10 22:48:20 -08:00
Jérôme Petazzoni
4322478a4a A bunch of changes in there
- added self-paced manifest for intro workshop
- refactored intro content to work for all workshops
- fixed footnote CSS to work in all contexts
- intro+logistics content is good to go
2017-11-10 19:24:38 -08:00
Jérôme Petazzoni
00262c767e Move autotest.py to the slides/ directory 2017-11-10 13:31:00 -08:00
Jérôme Petazzoni
c8030e1500 Move autotest.py to the slides/ directory 2017-11-10 13:30:43 -08:00
Jérôme Petazzoni
3717b90444 Fix one long title 2017-11-10 13:08:19 -08:00
Jérôme Petazzoni
d003bdb765 Update --detach explanations 2017-11-10 13:06:52 -08:00
Jérôme Petazzoni
8b246ac334 Fix overflowing titles and slides in kube material 2017-11-10 12:55:13 -08:00
Jérôme Petazzoni
9df543a6bb Fix long titles and long slides in Swarm content 2017-11-09 19:58:55 -08:00
Jérôme Petazzoni
77befc1092 Improve appearance of code snippets 2017-11-09 19:58:35 -08:00
Jérôme Petazzoni
4417771315 Allow to automatically check slides + append check result at the end 2017-11-09 19:58:22 -08:00
Jérôme Petazzoni
04fa6ec1d8 Fix overflow in intro material 2017-11-09 16:43:56 -08:00
Jérôme Petazzoni
5e8bdcb1f6 Improve intro section 2017-11-09 14:27:30 -08:00
Jérôme Petazzoni
07d99763d3 Fix back-to-toc links 2017-11-09 13:41:29 -08:00
Jérôme Petazzoni
87a2051b24 Move one stray image 2017-11-09 13:36:21 -08:00
Jérôme Petazzoni
0eddc876f1 Add typist simulator 2017-11-08 23:24:43 -08:00
Jérôme Petazzoni
1a67a1397b Misc autotest improvements 2017-11-08 23:10:34 -08:00
Jérôme Petazzoni
dc6bf9d0cf Improve autotest by setting up tmux automatically 2017-11-08 23:10:25 -08:00
Jérôme Petazzoni
84f8bf007e Add spacer row 2017-11-08 21:28:22 -08:00
Jérôme Petazzoni
6a2a66c165 Improve local devel workflow; add notes about immutable infra 2017-11-08 18:49:40 -08:00
Jérôme Petazzoni
f491d559e3 Setting up an automated build 2017-11-08 14:32:04 -08:00
Jérôme Petazzoni
5d9bdc303c Fix title capitalization 2017-11-08 13:39:34 -08:00
Jérôme Petazzoni
f7f0ecddd4 Pushing to the hub 2017-11-08 13:31:22 -08:00
Jérôme Petazzoni
f0f03e2440 Rename other images 2017-11-08 11:04:51 -08:00
Jérôme Petazzoni
1f78264e9f Add helper script to rename assets 2017-11-08 10:41:56 -08:00
Jérôme Petazzoni
2aad5319f9 Rename title images for consistency 2017-11-08 10:41:46 -08:00
Jérôme Petazzoni
479b858398 Update use-cases and testimonials 2017-11-08 09:57:00 -08:00
Jérôme Petazzoni
34b8bfd1b2 Improve reporting for slide layout checker 2017-11-08 09:56:38 -08:00
Jérôme Petazzoni
408036b09c Implement checker for slides too high/too wide 2017-11-08 00:08:00 -08:00
Jérôme Petazzoni
a45ec1cb84 Reorg intro material a bit 2017-11-07 22:41:32 -08:00
Jérôme Petazzoni
8decf1852f Refactor build-tag-push
We now have two clean classes: btp-manual and btp-auto

Exclude one or the other depending on whether you want to
show the full process of building and pushing images, or
if you want to shortcut and use stack files directly.
2017-11-07 19:43:42 -08:00
Jérôme Petazzoni
350fac21f6 Add PyCon intro to docker 2017-11-07 18:41:48 -08:00
Jérôme Petazzoni
cedd386eee Add LISA16 recording 2017-11-07 18:29:08 -08:00
Jérôme Petazzoni
5751765d66 MOAR materials; smaller footer 2017-11-07 18:09:38 -08:00
Jérôme Petazzoni
7293c071bd CSS is hard, yo 2017-11-07 17:46:56 -08:00
Jérôme Petazzoni
98cf8f4f04 Rename manifests 2017-11-07 14:48:20 -08:00
Jérôme Petazzoni
c61ccd1c27 Uniformize selfpaced/halfday/fullday 2017-11-07 11:30:36 -08:00
Jérôme Petazzoni
6b3d0efa56 split out the 'declarative/imperative' explanation 2017-11-07 11:23:53 -08:00
Jérôme Petazzoni
164578f1c8 Images should be good now 2017-11-06 23:26:39 -08:00
Jérôme Petazzoni
938fe956cf Better closing links 2017-11-06 22:48:21 -08:00
Jérôme Petazzoni
b44d402c1d Tweak warning image 2017-11-06 22:43:50 -08:00
Jérôme Petazzoni
fbaa511813 add colophon 2017-11-06 22:17:54 -08:00
Jérôme Petazzoni
f000594c62 Refactored CSS 2017-11-06 22:15:12 -08:00
Jérôme Petazzoni
71ba94063e Add explanations when using Docker Machine or Toolbox
Thanks @jgarrouste for reminding me of this issue :-)
2017-11-06 17:05:34 -08:00
Jérôme Petazzoni
b025a8c966 add application metrics 2017-11-06 16:47:32 -08:00
Jérôme Petazzoni
c36aab132b Add prev/next navigation links + fix TOC backlinks 2017-11-06 16:39:02 -08:00
Jérôme Petazzoni
1e7a47ed37 Factor conclusion 2017-11-06 11:40:30 -08:00
Jérôme Petazzoni
87f544774e Factor pre-requirements 2017-11-06 11:26:16 -08:00
Jérôme Petazzoni
db9ee5f03a Factor title, toc, logistics, thankyou slides 2017-11-05 12:41:30 -08:00
Jérôme Petazzoni
d0f5d69157 Add some extra debug info in first slide 2017-11-05 11:52:27 -08:00
Jérôme Petazzoni
742c7a78bc Debug bar now shows manifest file 2017-11-05 10:22:44 -08:00
Jérôme Petazzoni
918d3a6c23 orchestration-workshop -> container.training (in slides) 2017-11-05 10:16:01 -08:00
Jérôme Petazzoni
1f9b304eec Update homepage 2017-11-05 10:08:19 -08:00
Jérôme Petazzoni
f479591f9c Better cards 2017-11-04 18:59:45 -07:00
Jérôme Petazzoni
241452bab4 Add script to check images and show slide count 2017-11-04 11:45:18 -07:00
Jérôme Petazzoni
5a58b17bb5 Fix image paths 2017-11-04 11:45:00 -07:00
Jérôme Petazzoni
2f4dfe7b6f Update ALL THE READMEs 2017-11-04 09:49:21 -07:00
Jérôme Petazzoni
dad3d434b8 Add index.html (to deprecate view.dckr.info) 2017-11-03 19:15:26 -07:00
Jérôme Petazzoni
3534d81860 Add intro slides 2017-11-03 19:08:44 -07:00
Jérôme Petazzoni
1c4d1d736e Remove useless scripts 2017-11-03 18:59:50 -07:00
Jérôme Petazzoni
c00bbb77c4 Fixup paths 2017-11-03 18:56:20 -07:00
Jérôme Petazzoni
0b24076086 Move markdown files to common/kube/swarm subdirs 2017-11-03 18:52:33 -07:00
Jérôme Petazzoni
c1a156337f Move images to subdirectory 2017-11-03 18:38:48 -07:00
Jérôme Petazzoni
078023058b docs -> slides 2017-11-03 18:31:06 -07:00
Jérôme Petazzoni
c1ecd16a8d Merge the big 2017 refactor 2017-11-03 18:28:52 -07:00
Jérôme Petazzoni
51cf4a076a Merge pull request #100 from soulshake/aj-hotfix-fix
Wrap bash and wait in .exercise[] block
2017-11-01 07:44:37 -07:00
AJ Bowen
7cf622e06a Actually, we just need to remove the space 2017-10-31 16:56:48 -07:00
AJ Bowen
14b2d54e43 Wrap bash and wait in .exercise[] block 2017-10-31 15:59:50 -07:00
Jérôme Petazzoni
3c9353815b hotfix 2017-10-31 15:25:29 -07:00
Jérôme Petazzoni
a44d3618bc Fix prometheus config 2017-10-31 11:12:29 -07:00
Jérôme Petazzoni
5466319407 Fix prometheus config 2017-10-31 11:12:08 -07:00
Jérôme Petazzoni
1d5f4330c0 Improve autotest; fix prometheus node collector 2017-10-31 11:10:39 -07:00
Jérôme Petazzoni
9e5bab1a76 tweaks for automated testing 2017-10-31 09:37:49 -07:00
Jérôme Petazzoni
c67675f900 Remove keymaps (tmux handles specials keys already) 2017-10-31 09:27:53 -07:00
Jérôme Petazzoni
4c18583a8e Improve autotest for Swarm workshop 2017-10-31 09:26:03 -07:00
Jérôme Petazzoni
d02d71270f Use Gitter instead of USENIX Slack 2017-10-31 07:36:16 -07:00
Jérôme Petazzoni
deb304026b allow any log level (and netlify has been set to LOG_LEVEL=DEBUG) 2017-10-30 09:31:15 -07:00
Jérôme Petazzoni
03561f38d8 I'd like to close my tab and I left 4 spaces 2017-10-30 09:10:30 -07:00
Jérôme Petazzoni
4965b205a7 Try to fix 'edit me' link generator 2017-10-30 09:07:34 -07:00
Jérôme Petazzoni
0d610081bd Add LISA tutorial 2017-10-30 07:45:59 -07:00
Jérôme Petazzoni
24c2f9f18e Fix repo/branch/base infer functions 2017-10-29 22:25:12 -07:00
Jérôme Petazzoni
3fc2d4c266 Infer github URL 2017-10-29 22:16:21 -07:00
Jérôme Petazzoni
0c175615a5 Merge branch 'soulshake-aj-wait-tmux' into the-big-2017-refactor 2017-10-29 21:07:46 -07:00
Jérôme Petazzoni
3198bb0d1f Minor tweaks 2017-10-29 21:07:25 -07:00
Jérôme Petazzoni
030d100d70 Merge branch 'the-big-2017-refactor' of github.com:jpetazzo/orchestration-workshop into the-big-2017-refactor 2017-10-29 19:52:32 -07:00
Jérôme Petazzoni
4747860226 Minor CSS tweaks for intro workshop 2017-10-29 19:52:20 -07:00
AJ Bowen
b243094d66 Address @jpetazzo feedback 2017-10-30 02:40:34 +01:00
AJ Bowen
1fec8f506a Use index to look ahead for 'wait' and 'keys'. 2017-10-29 17:24:16 -07:00
AJ Bowen
04362a3b52 Check command exit codes 2017-10-29 16:24:11 -07:00
AJ Bowen
f0597c43b3 Move wait_for_success to be with other functions 2017-10-29 15:36:38 -07:00
AJ Bowen
b11e54cc43 Add 'c' option to continue until a timeout, and WORKSHOP_TEST_FORCE_NONINTERACTIVE to raise an exception instead of just warning if a command times out. 2017-10-29 15:34:33 -07:00
AJ Bowen
01d9923ca4 Remove colored logs because @jpetazzo 2017-10-29 15:07:58 -07:00
AJ Bowen
0f3660dc95 Put 'wait' and 'keys' HTML comments before the command they apply to. Add colored logs. 2017-10-29 15:01:24 -07:00
AJ Bowen
60a75647d2 No form feed, no prompt to wait, just print a warning and carry on 2017-10-29 13:55:16 -07:00
AJ Bowen
f46856ff63 Wait for tmux to display a prompt, indicating the command has completed 2017-10-29 13:47:48 -07:00
Jérôme Petazzoni
0508d24046 Merge pull request #97 from soulshake/aj-shfmt
'shfmt -i 4' on shell files
2017-10-29 16:00:17 +01:00
AJ Bowen
1d46898737 Reverse 'echo >/dev/stderr' for '>/dev/stderr echo' according to @jpetazzo preference 2017-10-29 15:55:08 +01:00
AJ Bowen
5b95d6ee7f shfmt -i 4 -bn' to allow pipes to begin lines 2017-10-29 15:42:16 +01:00
AJ Bowen
bb88d11344 shfmt -i 4 2017-10-29 14:45:54 +01:00
Jérôme Petazzoni
7262effec4 Expand the section about selector update 2017-10-25 23:41:57 +02:00
Jérôme Petazzoni
2c08439de4 Fix slides to reflect hostname 2017-10-25 22:42:07 +02:00
Jérôme Petazzoni
6543ffc5b9 Minor autotest update 2017-10-25 22:39:24 +02:00
Jérôme Petazzoni
681754fc1b Cosmetic autotest improvement 2017-10-25 22:39:07 +02:00
Jérôme Petazzoni
603bda8166 Change prompt and set hostname to nodeX 2017-10-25 22:38:57 +02:00
Jérôme Petazzoni
a4a37368e5 Merge pull request #95 from soulshake/aj-the-big-2017-refactor
Fix some typos
2017-10-25 22:24:54 +02:00
AJ Bowen
30fd53a3e1 Setup is a noun; set up is a verb. Fix some more typos. 2017-10-25 21:05:46 +02:00
Jérôme Petazzoni
40eab78186 Improve autotest system 2017-10-25 17:49:30 +02:00
Jérôme Petazzoni
68e0c8fca7 Very crude auto-test harness driving tmux 2017-10-25 11:40:37 +02:00
Jérôme Petazzoni
af261de9a4 Update prereqs-k8s.md 2017-10-24 18:24:15 +02:00
Jérôme Petazzoni
2a176edfb4 Templatize title 2017-10-24 17:44:05 +02:00
Jérôme Petazzoni
f56262bee0 Add diagram, thanks @lukemarsden @weaveworks <3 2017-10-24 14:33:03 +02:00
Jérôme Petazzoni
488fa1c981 Better k8s intro, fix chat links 2017-10-24 13:57:51 +02:00
Jérôme Petazzoni
f63107ce15 Add back to TOC links 2017-10-24 13:16:30 +02:00
Jérôme Petazzoni
68fc895017 Add edition links 2017-10-24 12:18:20 +02:00
Jérôme Petazzoni
5c0b83cd1b Add slide about cluster federation 2017-10-23 17:54:55 +02:00
Jérôme Petazzoni
452b5c0880 Do not open links in new tabs 2017-10-23 17:39:46 +02:00
Jérôme Petazzoni
42549d8c19 Adjust tone in tea example 2017-10-21 15:12:57 +02:00
Jérôme Petazzoni
9a5e9c9ea0 Debugging bar (this is super cool) 2017-10-21 14:18:09 +02:00
Jérôme Petazzoni
1ea7141d95 Auto-insert interstitial slides and links 2017-10-20 19:47:26 +02:00
Jérôme Petazzoni
c0fbf4aec4 Expand the what's next section 2017-10-20 18:48:22 +02:00
Jérôme Petazzoni
48b79a18a4 Cherry-pick 35a8c81 2017-10-20 15:40:54 +02:00
Jérôme Petazzoni
2c5724a5fe Merge pull request #93 from jouve/typo
typo with --update-failure-action flag
2017-10-20 15:37:02 +02:00
Jérôme Petazzoni
80d79c4d31 Add info about other resources created with kubectl run 2017-10-19 18:31:56 +02:00
Jérôme Petazzoni
ff0c868c27 kubernetes network model 2017-10-19 18:09:46 +02:00
Jérôme Petazzoni
cbee7484ae update docs for new slide deck generator 2017-10-19 17:08:22 +02:00
Cyril Jouve
35a8c81b39 typo with --update-failure-action flag 2017-10-19 12:59:06 +02:00
Jerome Petazzoni
764d33c884 power outlets are the worst 2017-10-16 09:05:50 +02:00
Jérôme Petazzoni
a4fc5b924f Last update after dry run 2017-10-16 00:56:55 +02:00
Jérôme Petazzoni
b155000d56 hero syndrome (thanks @soulshake) 2017-10-15 23:23:18 +02:00
Jérôme Petazzoni
baf48657d0 Clean up a bunch of titles 2017-10-15 23:13:06 +02:00
Jérôme Petazzoni
b4b22ff47b Add chat variable to workshop YML files 2017-10-15 23:01:46 +02:00
Jérôme Petazzoni
c4b131ae5e Add black belt refs 2017-10-15 22:37:23 +02:00
Jérôme Petazzoni
af1031760b Add blackbelt icon and css 2017-10-15 22:11:13 +02:00
Jérôme Petazzoni
da7c4742bf Netlify is <3 2017-10-14 22:37:28 +02:00
Jérôme Petazzoni
a3cf917100 Add dashboard section + kubectl apply sec talk 2017-10-14 17:42:02 +02:00
Jérôme Petazzoni
96a5cc15ec Add a comment at end of each slide showing origin 2017-10-14 16:56:12 +02:00
Jérôme Petazzoni
117c6c18e9 pull-images -> pull_images 2017-10-14 14:22:46 +02:00
Jérôme Petazzoni
fe83ce99f2 Backport error reporting fixes 2017-10-14 14:20:24 +02:00
Jérôme Petazzoni
8b0173bc87 Fix build-forever (entr is buggy, yo) 2017-10-14 14:19:21 +02:00
Jérôme Petazzoni
9450ed2057 Improve error reporting (thanks @bretfisher for reporting this) 2017-10-14 14:10:50 +02:00
Romain Degez
994be990f5 Remove extraneous chapter title 2017-10-14 12:31:58 +02:00
Jérôme Petazzoni
e2fd9531ef Add rolling upgrade section and whatsnext 2017-10-13 23:16:02 +02:00
Jérôme Petazzoni
8bb7243aaf Imperative vs declarative; spec 2017-10-13 20:43:31 +02:00
Jérôme Petazzoni
9a067f2064 kube agenda 2017-10-13 20:05:03 +02:00
Jérôme Petazzoni
009bc2089d Backport #91 2017-10-13 19:54:21 +02:00
Jérôme Petazzoni
d1e6248ded Merge pull request #91 from anonymuse/jesse/fix-node-command
Update docker node command with a working filter
2017-10-13 19:38:16 +02:00
Jérôme Petazzoni
b3a7e36c37 fixup build.sh script 2017-10-13 19:23:00 +02:00
Jérôme Petazzoni
a9a82ccd1e Rework slide builder + add section on daemonsets 2017-10-12 20:49:59 +02:00
Jérôme Petazzoni
6abbebe00d Reword Compose File v3 explanations 2017-10-12 10:38:16 +02:00
Jérôme Petazzoni
4dec9c43f1 One more round of updates for dc17eu 2017-10-12 10:17:52 +02:00
Jérôme Petazzoni
3369005e06 Revert to single HTML generator and parametrize excludeClasses 2017-10-12 09:53:14 +02:00
Jérôme Petazzoni
c4d76ba367 Backport #92 (thanks @bretfisher 👍🏻) 2017-10-12 09:04:56 +02:00
Jérôme Petazzoni
ba24b66d84 Fix extra-details icon 2017-10-12 00:12:30 +02:00
Jérôme Petazzoni
8d2391e4d6 Add kubespawn 2017-10-12 00:12:19 +02:00
Jérôme Petazzoni
4c68847dd1 Remove PWD reference from kube material 2017-10-11 23:33:13 +02:00
Jérôme Petazzoni
b67371e0ec Helper script based on entr 2017-10-11 23:32:14 +02:00
Jérôme Petazzoni
cd2cf9b3a4 Tweak page number positioning 2017-10-11 23:31:59 +02:00
Jérôme Petazzoni
4eaf2310b6 Add how to run and expose services on kube 2017-10-11 23:31:39 +02:00
Jérôme Petazzoni
20e9517722 Put slide number in top-left corner 2017-10-11 16:00:23 +02:00
Jérôme Petazzoni
553fd6b742 Fix custom prompt 2017-10-11 15:54:56 +02:00
Jérôme Petazzoni
25c8623a81 Add kubectl completion 2017-10-11 15:54:46 +02:00
Jérôme Petazzoni
f787d1b6c3 Add kube concepts + kubectl primer 2017-10-11 15:49:11 +02:00
Jérôme Petazzoni
825257427f Split out selfpaced and dockercon workshops 2017-10-10 17:55:22 +02:00
Jérôme Petazzoni
7e57b23234 Merge pull request #92 from BretFisher/improve-healthcheck-rollback
fixing healthcheck rollbacks, adding TAG to deploys, adding missing yml
2017-10-10 08:33:27 +02:00
Bret Fisher
5c102d594f fixing healthcheck rollbacks, adding TAG to deploys, adding missing yml 2017-10-09 23:49:27 -04:00
Jérôme Petazzoni
e28a64c6cf Remove old version 2017-10-09 18:04:44 +02:00
Jérôme Petazzoni
f8888bf16a Split out content to many smaller files
And add markmaker.py to generate workshop.md
2017-10-09 16:56:23 +02:00
Jérôme Petazzoni
ac523e0f14 Add upstream URL 2017-10-09 13:30:38 +02:00
Jérôme Petazzoni
3211c1ba8a Add data-path option 2017-10-07 19:24:07 +02:00
Jérôme Petazzoni
f1aa5d07fa Fix printing 2017-10-07 15:14:46 +02:00
Jérôme Petazzoni
c0e2fc8832 Allow to run workshopctl in a container 2017-10-06 21:40:39 +02:00
Jérôme Petazzoni
08722db23f Major rehaul of trainer script (it is now workshopctl) 2017-10-06 19:01:15 +02:00
Jérôme Petazzoni
11ec3336eb Remove media dir (unused) 2017-10-06 13:10:52 +02:00
Jérôme Petazzoni
42603d6f62 Add host network in Swarm mode 2017-10-05 14:27:23 +02:00
Jérôme Petazzoni
5c825c864c Allow to start+deploy in a single step 2017-10-05 12:55:36 +02:00
Jérôme Petazzoni
186b30a742 Add a couple of slides about events 2017-10-04 17:13:01 +02:00
Jérôme Petazzoni
06b97454c6 Add section about configs 2017-10-04 16:36:48 +02:00
Jérôme Petazzoni
c393d2aa51 Remove older (unused) stacks 2017-10-04 15:21:33 +02:00
Jérôme Petazzoni
3817332905 Remove obsolete scripts 2017-10-04 15:20:01 +02:00
Jérôme Petazzoni
b7dbbd4633 Add kubernetes deployment code (behind cheap feature switch) 2017-10-03 22:15:43 +02:00
Jesse White
d73f5232ff Update docker node command with a working filter 2017-10-02 12:15:34 -04:00
Jérôme Petazzoni
b0a34aa106 Remove Swarm classic 2017-10-02 13:33:58 +02:00
Jérôme Petazzoni
36f512a3d3 Backport content from DOD MSP 2017-09-29 23:35:52 +02:00
Jérôme Petazzoni
87cbbd5c35 Backport a few updates from devopscon 2017-09-29 23:17:27 +02:00
Jérôme Petazzoni
2f6689d639 Refactor card generation to use Jinja templates
This makes the card generation process a bit easier to customize.
A few issues with Chrome page breaks were also fixed.
2017-09-29 22:29:08 +02:00
Jérôme Petazzoni
4f7651855e Update version numbers 2017-09-29 19:24:24 +02:00
Jérôme Petazzoni
aea59c757e Add HEALTHCHECK support, courtesy of @bretfisher 2017-09-27 18:07:03 +02:00
Jérôme Petazzoni
af2d82d00a Merge branch 'BretFisher-healthcheck-auto-rollback' 2017-09-27 12:32:43 +02:00
Jérôme Petazzoni
5b8861009d Merge branch 'healthcheck-auto-rollback' of https://github.com/BretFisher/orchestration-workshop into BretFisher-healthcheck-auto-rollback 2017-09-27 12:32:29 +02:00
Jérôme Petazzoni
674bfe82c7 Remove conference hashtag in CTA tweet link (closes #77) 2017-09-27 12:20:08 +02:00
Jérôme Petazzoni
8f61a2fffa If any of the commands of postprep fails, abort
Closes #80
2017-09-27 12:14:55 +02:00
Jérôme Petazzoni
748881d37d Add a fancy table! 2017-09-26 21:55:09 +02:00
Jérôme Petazzoni
d29863a0e0 Merge branch 'ops-feature-history' of https://github.com/BretFisher/orchestration-workshop 2017-09-26 18:42:04 +02:00
Jérôme Petazzoni
acc84729a2 Merge pull request #89 from BretFisher/add-inline-code-bg
Add inline code background color
2017-09-13 14:26:23 -07:00
Bret Fisher
9af9477f65 ugg spacing 2017-09-12 19:09:51 -07:00
Bret Fisher
15cca15ec5 add inline-code grey background
So much grey! All the greys!
2017-09-12 19:08:36 -07:00
Bret Fisher
685ea653fe adding healthcheck with rollback 2017-09-12 19:03:52 -07:00
Jérôme Petazzoni
bf13657a8f Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2017-08-03 11:02:45 +02:00
Jérôme Petazzoni
9c7fb40475 Merge branch 'BretFisher-user-namespaces' 2017-08-03 11:02:27 +02:00
Jérôme Petazzoni
b1b8b53a2f Adapt @bretfisher work to match formatting etc 2017-08-03 11:01:31 +02:00
Jérôme Petazzoni
69259c27a1 Merge branch 'user-namespaces' of https://github.com/BretFisher/orchestration-workshop into BretFisher-user-namespaces 2017-08-03 08:40:53 +02:00
Jérôme Petazzoni
7354974ece Merge pull request #87 from lastcoolnameleft/patch-1
1.9.0 does not support docker-compose.yml Version 3
2017-08-01 23:31:53 -07:00
Tommy Falgout
5379619026 1.9.0 does not support docker-compose.yml Version 3 2017-08-01 17:46:21 -05:00
Jérôme Petazzoni
0d7ee1dda0 Merge branch 'alexellis-alexellis-patch-sol' 2017-07-12 13:41:45 +02:00
Jérôme Petazzoni
243d585432 Add a few details about what happens when losing the sole manager 2017-07-12 13:41:37 +02:00
Alex Ellis
f5fe7152f3 Internationalisation
I had no idea what SOL was - had to google this on Urban Dictionary :-/ have put an internationalisation in and retained the colloquialism in brackets.
2017-07-11 19:00:23 +01:00
Jérôme Petazzoni
94d9ad22d0 Add ngrep details when using PWD or Vagrant re/ interface selection (closes #84) 2017-07-11 19:51:00 +02:00
Bret Fisher
59f1b1069d fixed some feature release confusion 2017-06-26 14:34:03 -04:00
Bret Fisher
c30386a73d added ops feature history slide 2017-06-18 20:28:51 -07:00
Jérôme Petazzoni
0af160e0a8 Merge pull request #82 from adulescentulus/fix_visualizer_exercise
(some) wrong instructions
2017-06-17 09:31:31 -07:00
Andreas Groll
1fdb7b8077 added missing stackname 2017-06-12 15:25:35 +02:00
Andreas Groll
d2b67c426e you can only connect to the IP where you started your visualizer 2017-06-12 12:07:59 +02:00
Jérôme Petazzoni
a84cc36cd8 Update installation method 2017-06-09 18:16:29 +02:00
Jerome Petazzoni
c8ecf5a647 PYCON final check! 2017-05-17 18:14:33 -07:00
Jerome Petazzoni
e9ee050386 Explain extra details 2017-05-17 15:56:28 -07:00
Jerome Petazzoni
6e59e2092c Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2017-05-17 15:00:42 -07:00
Jerome Petazzoni
c7b0fd32bd Add detail about ASGs 2017-05-17 15:00:31 -07:00
Jérôme Petazzoni
ead4e33604 Merge pull request #79 from jliu70/oscon2017
fix typo
2017-05-17 14:31:26 -07:00
Jérôme Petazzoni
96b4f76c67 Backport all changes from OSCON 2017-05-17 00:17:24 -05:00
Jeff Liu
6337d49123 fix typo 2017-05-08 10:21:51 -05:00
Jerome Petazzoni
aec2de848b Rename docker-compose files to keep .yml extension (fixes #69) 2017-05-03 12:44:17 -07:00
Jérôme Petazzoni
91942f22a0 Merge pull request #73 from everett-toews/cd-to-snap
Change to the snap dir first
2017-05-03 14:36:52 -05:00
Jérôme Petazzoni
93cdc9d987 Merge pull request #72 from everett-toews/fix-worker-service-name
Fix the dockercoins_worker service name
2017-05-03 14:36:27 -05:00
Jérôme Petazzoni
13e6283221 Merge pull request #71 from everett-toews/netshoot
Consistent use of the netshoot image
2017-05-03 14:35:54 -05:00
Jerome Petazzoni
e56bea5c16 Update Swarm visualizer information 2017-05-03 12:36:09 -07:00
Jerome Petazzoni
eda499f084 Fix link to Raft (thanks @kchien) - fixes #74 2017-05-03 12:20:45 -07:00
Jerome Petazzoni
ae638b8e89 Minor updates before GOTO 2017-05-03 11:46:35 -07:00
Jerome Petazzoni
5296be32ed Handle untagged resources 2017-05-03 11:26:47 -07:00
Jerome Petazzoni
f1cd3ba7d0 Remove rc.yaml 2017-05-03 10:02:36 -07:00
Jérôme Petazzoni
b307adee91 Last updates
Conflicts:
	docs/index.html
2017-05-03 09:34:42 -07:00
Jérôme Petazzoni
f4540fad78 Update describe-instances for awscli 1.11 (thanks @mikegcoleman for finding that bug!) 2017-05-03 09:15:45 -07:00
Jérôme Petazzoni
70db794111 Simplify stackfiles 2017-04-16 23:56:30 -05:00
Jérôme Petazzoni
abafc0c8ec Add swarm-rafttool 2017-04-16 23:47:56 -05:00
Everett Toews
a7dba759a8 Change to the snap dir first 2017-04-16 14:34:49 -05:00
Everett Toews
b14662490a Fix the dockercoins_worker service name 2017-04-16 13:23:54 -05:00
Everett Toews
9d45168752 Consistent use of the netshoot image 2017-04-16 13:16:02 -05:00
Jérôme Petazzoni
7b3c9cd2c3 Add @alexmavr/swarm-nbt (FTW!) 2017-04-15 18:29:32 -05:00
Jérôme Petazzoni
84d4a367ec Mention --filter for docker service ps 2017-04-15 17:45:24 -05:00
Jérôme Petazzoni
bd6b37b573 Add @manomarks' Swarm viz tool 2017-04-15 17:21:38 -05:00
Jérôme Petazzoni
e1b2a4440d Update docker service logs; --detach=false 2017-04-14 15:39:52 -05:00
Jérôme Petazzoni
1b5365d905 Update settings; add security workshop 2017-04-14 15:39:24 -05:00
Jérôme Petazzoni
27ea268026 Automatically resolve AMI ID to use 2017-04-14 15:32:03 -05:00
Bret Fisher
45402a28e5 updated to prevent accidental registry delete 2017-04-14 02:37:07 -04:00
Bret Fisher
9e97c7a490 adding user namespace change and daemon.json example
also adding .footnote css
2017-04-14 01:34:51 -04:00
Jérôme Petazzoni
b0f566538d Re-add useful self-paced slides 2017-03-31 21:49:57 -05:00
Jerome Petazzoni
e637354d3e Fix TOC and minor tweaks 2017-03-31 21:41:24 -05:00
Jerome Petazzoni
1f8c27b1aa Update deployed versions 2017-03-31 21:40:05 -05:00
Jerome Petazzoni
f7d317d960 Backporting Devoxx updates 2017-03-31 21:39:48 -05:00
Jérôme Petazzoni
a8c54a8afd Update chat links 2017-03-31 21:36:08 -05:00
Jerome Petazzoni
73b3752c7e Change chat links 2017-03-31 21:33:12 -05:00
Jérôme Petazzoni
d60ba2e91e Merge pull request #68 from hknust/master
Service name should be dockercoins_worker not worker
2017-03-30 17:11:37 -05:00
Jérôme Petazzoni
d480f5c26a Clarify node switching commands 2017-03-20 19:30:38 -07:00
Jérôme Petazzoni
540aa91f48 Hotfix JS file 2017-03-10 16:46:51 -06:00
Jérôme Petazzoni
8f3c0da385 Use our custom fork of remark; updates for Docker Birthday 2017-03-10 16:40:48 -06:00
Holger Knust
6610ff178d Fixed typo on slide. Attempts instead of attemps 2017-03-04 23:13:35 -08:00
Holger Knust
9a9e725d5b Service name should be dockercoins_worker not worker 2017-03-04 11:29:01 -08:00
Jérôme Petazzoni
09cabc556e Update for SCALE 15x 2017-03-02 16:38:59 -08:00
Jérôme Petazzoni
44f4017992 Switch from localhost to 127.0.0.1 (to work around some weird DNS issues) 2017-03-02 14:06:59 -08:00
Jérôme Petazzoni
6f85ff7824 Reorganize advanced content for Docker Birthday 2017-02-16 15:16:06 -06:00
Jérôme Petazzoni
514ac69a8f Ship part 1 for Docker Birthday 2017-02-15 00:03:01 -06:00
Jérôme Petazzoni
7418691249 Rework intro for self-guided workshop 2017-02-14 10:15:27 -06:00
Jérôme Petazzoni
4d2289b2d2 Add details about authorization plugins 2017-02-09 12:33:55 -06:00
Jerome Petazzoni
e0956be92c Add link target for logging 2017-01-20 16:24:15 -08:00
Jérôme Petazzoni
d623f76a02 add note on API scope 2017-01-13 19:29:22 -06:00
Jérôme Petazzoni
dd555af795 update section about restart condition 2017-01-13 17:59:57 -06:00
Jérôme Petazzoni
a2da3f417b update secret section 2017-01-13 17:35:45 -06:00
Jérôme Petazzoni
d129b37781 minor updates, including services ps -a flag 2017-01-13 16:22:58 -06:00
Jérôme Petazzoni
849ea6e576 improve LB demo a bit 2017-01-13 16:04:53 -06:00
Jérôme Petazzoni
7ed54eee66 Merge pull request #64 from trapier/slides_comment_format
slides: code block comment formatting on snap install
2016-12-12 17:59:21 -06:00
Trapier Marshall
1dca8e5a7a slides: code block comment formatting
This will make it easier to copy-paste the whole block used for
snap installation
2016-12-12 11:03:30 -05:00
Jérôme Petazzoni
165de1dbb5 Merge pull request #63 from trapier/slides_cosmetic_edits
couple of cosmetic edits to slides
2016-12-11 21:48:57 -06:00
Trapier Marshall
b7afd13012 couple cosmetic corrections to slides 2016-12-11 01:16:30 -05:00
Jerome Petazzoni
e8b64c5e08 Last touch-ups for LISA16! Good to go! 2016-12-05 19:32:39 -08:00
Jerome Petazzoni
9124eb0e07 Add healthchecks in WIP section 2016-12-05 13:32:09 -08:00
Jerome Petazzoni
0bede24e23 Add what's next section 2016-12-05 10:49:31 -08:00
Jerome Petazzoni
ee79e5ba86 Add MOSH instructions 2016-12-05 10:32:29 -08:00
Jerome Petazzoni
9078cfb57d DAB -> Compose v3 2016-12-05 08:53:31 -08:00
Jerome Petazzoni
6854698fe1 Add Fluentd instructions (contrib) 2016-12-04 17:07:48 -08:00
Jerome Petazzoni
16a4dac192 Add "replayability" instructions 2016-12-04 16:40:17 -08:00
Jerome Petazzoni
0029fa47c5 Update secrets and autolock chapters (thanks @diogomonica for feedback and pointers!) 2016-12-04 09:19:09 -08:00
Jerome Petazzoni
a53636340b Tweak 2016-12-03 10:30:29 -08:00
Jerome Petazzoni
c95b88e562 Secrets management and data encryption 2016-12-03 10:28:20 -08:00
Jerome Petazzoni
d438bd624a Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-12-02 17:50:39 -08:00
Jerome Petazzoni
839746831b Improve illustration a bit 2016-12-02 17:50:29 -08:00
Jérôme Petazzoni
0b1b589314 Merge pull request #60 from hubertst/patch-1
Update provisioning.yml
2016-12-02 16:47:54 -08:00
Hubert
61d2709f8f Update provisioning.yml
fix for ansible 2.2
2016-12-02 09:49:52 +01:00
Jerome Petazzoni
1741a7b35a Add encrypted networks 2016-12-01 22:15:42 -08:00
Jerome Petazzoni
e101856dd7 dynamic scheduling 2016-12-01 17:18:00 -08:00
Jerome Petazzoni
d451f9c7bf Add note on docker service update --mode 2016-12-01 15:52:05 -08:00
Jerome Petazzoni
b021b0eec8 Addtl metrics resources 2016-12-01 15:43:49 -08:00
Jerome Petazzoni
e4f824fd07 docker system ... 2016-11-30 15:54:14 -08:00
Jerome Petazzoni
019165e98c Re-enable a few slides (checked all ??? slides) 2016-11-29 13:02:42 -08:00
Jerome Petazzoni
cf5c2d5741 Add PromQL details + side-by-side Prom&Snap comparison 2016-11-29 12:59:28 -08:00
Jerome Petazzoni
971bf85b17 Clarify raft usage 2016-11-28 17:44:15 -08:00
Jerome Petazzoni
83749ade43 Add "what did we change in this app?" section 2016-11-28 17:17:24 -08:00
Jerome Petazzoni
76fb2f2e2c Add prometheus files (fixes #58) 2016-11-28 12:30:56 -08:00
Jerome Petazzoni
6bda8147e4 Merge branch 'lisa16' 2016-11-28 12:28:03 -08:00
Jerome Petazzoni
95751d1ee9 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-11-23 15:18:12 -08:00
Jerome Petazzoni
12adae107e Update instructions to install Compose in nodes
Closes #51

(Also addresses remarks about using Machine in older EC2 accounts lacking VPC)
2016-11-23 15:18:07 -08:00
Jerome Petazzoni
c652ea08a2 Upgrade to remark 0.14 (closes #38) 2016-11-23 14:45:03 -08:00
Jerome Petazzoni
30008e4af6 Add warning re/ swarmtctl (fixes #35) 2016-11-23 14:34:44 -08:00
Jérôme Petazzoni
bb262e27e8 Merge pull request #55 from stefanlasiewski/master
"Using Docker Machine to communicate with a node" missing the `docker-machine env` command
2016-11-23 12:27:55 -06:00
Jerome Petazzoni
9656d959cc Switch to EBS-based instances; change default instance type to t2.medium 2016-11-21 17:10:07 -08:00
Jerome Petazzoni
46b772b95e First round of updates for LISA 2016-11-21 16:55:47 -08:00
stefanlasiewski
f801e1b9ad Add instructions for VMware Fusion. 2016-11-21 11:44:13 -08:00
stefanlasiewski
1c44d7089a Merge branch 'master' of https://github.com/stefanlasiewski/orchestration-workshop 2016-11-18 14:44:58 -08:00
stefanlasiewski
1f7f4a29ff docker-machine ... should actually be docker-machine env ... in a
couple of places.
2016-11-18 14:44:33 -08:00
Jerome Petazzoni
e16e23e2bd Add supergrok instructions 2016-11-18 10:06:10 -08:00
Jérôme Petazzoni
b5206aa68e Merge pull request #53 from drewmoseley/patch-1
Install pycrypto
2016-11-17 17:24:49 -06:00
Jérôme Petazzoni
8a47bce180 Merge pull request #52 from asziranyi/patch-1
add vagrant-vbguest install link
2016-11-17 17:24:18 -06:00
Drew Moseley
6cd8c32621 Install pycrypto
Not sure if it's somehow unique to my setup but Ansible needed me to install pycrypto as well.
2016-11-17 12:07:42 -05:00
asziranyi
f2f1934940 add vagrant-vbguest installation link 2016-11-17 15:50:47 +01:00
Jerome Petazzoni
8cc388dcb8 add ctrl-p ctrl-q warning 2016-11-14 12:36:57 -08:00
Jerome Petazzoni
a276e72ab0 add ngrok instructions 2016-11-14 11:23:22 -08:00
Jerome Petazzoni
bdb8e1b3df Add instructions for self-paced workshop 2016-11-11 14:28:28 -08:00
Jérôme Petazzoni
66ee4739ed typos 2016-11-07 22:40:59 -06:00
Jérôme Petazzoni
893c7b13c6 Add instructions to create VMs with Docker Machine 2016-11-07 22:38:43 -06:00
Jerome Petazzoni
78b730e4ac Patch up TOC generator 2016-11-01 17:37:48 -07:00
Jerome Petazzoni
e3eb06ddfb Bump up to Compose 1.8.1 and Machine 0.8.2 2016-11-01 17:10:55 -07:00
Jerome Petazzoni
ad29a45191 Add advertise-addr info + small fixups for mentor week 2016-11-01 17:10:36 -07:00
Jerome Petazzoni
e1968beefa Bump to 16.04 LTS AMIs (closes #37)
16.04 doesn't come with Python setuptools, so we have to install that too.
2016-10-18 08:53:53 -07:00
Jerome Petazzoni
b1b3ecb5e9 Add Prometheus section 2016-10-16 17:28:05 -07:00
Jerome Petazzoni
ef60a78998 Pin version numbers used by ELK 2016-10-16 16:30:04 -07:00
Jerome Petazzoni
70064da91c Add Docker Machine; use it to get TLS mutual auth instead of 55555 plain text 2016-10-16 16:27:21 -07:00
Jérôme Petazzoni
0b6a3a1cba Merge pull request #48 from soulshake/typo
Typo fixes
2016-10-08 14:49:16 +02:00
AJ Bowen
e403a005ea 'Set up' when it's a verb, 'setup' when it's a noun. 2016-10-07 17:09:34 +02:00
AJ Bowen
773528fc2b They're --> Their 2016-10-07 16:19:05 +02:00
Jérôme Petazzoni
97af5492f7 Remove InfluxDB password auth 2016-10-04 18:42:32 +02:00
Jérôme Petazzoni
194ce5d7b6 Update Julius info 2016-10-04 14:11:12 +02:00
Jérôme Petazzoni
fafc8fb1ed Update TOC and add slide about Prometheus 2016-10-04 14:10:38 +02:00
Jérôme Petazzoni
4cb37481ba Merge pull request #46 from dragorosson/patch-1
Fix grammar
2016-10-04 03:47:29 +02:00
Drago Rosson
9196b27f0e Fix grammar 2016-10-03 16:21:56 -05:00
Jerome Petazzoni
9ce98430ab Last (hopefully) round of fixes before LinuxCon EU! 2016-10-03 09:20:40 -07:00
tiffany jernigan
4117f079e6 Run InfluxDB and Grafana as services using Docker Hub images. 2016-10-01 18:03:40 -07:00
Jerome Petazzoni
1105c9fa1f Merge remote-tracking branch 'tiffanyfj/metrics' 2016-10-01 08:06:43 -07:00
Jerome Petazzoni
ab7c1bb09a Prepare for LinuxCon EU Berlin 2016-10-01 08:05:55 -07:00
Jérôme Petazzoni
bfcb24c1ca Merge pull request #45 from anonymuse/jesse/docs_linkfix
Fix path for README links
2016-09-30 16:22:08 +02:00
Jesse White
45f410bb49 Fix path for README links 2016-09-29 17:22:55 -04:00
Jérôme Petazzoni
bcd2433fa4 Merge branch 'BretFisher-readme-updates' 2016-09-29 00:25:46 +02:00
Jérôme Petazzoni
1d02ddf271 Mess up with whitespace, because I am OCD like that 2016-09-29 00:25:36 +02:00
Jérôme Petazzoni
4765410393 Merge branch 'readme-updates' of https://github.com/BretFisher/orchestration-workshop-with-docker into BretFisher-readme-updates 2016-09-29 00:22:21 +02:00
tiffany jernigan
6102d21150 Added metrics chapter 2016-09-28 14:18:36 -07:00
Bret Fisher
75caa65973 more trainer info 2016-09-28 01:26:56 -04:00
Bret Fisher
dfd2bf4aeb new example settings file 2016-09-28 01:26:42 -04:00
Bret Fisher
51000b4b4d better swarm image for cards 2016-09-28 01:26:02 -04:00
Bret Fisher
3acd3b078b more info for trainers 2016-09-27 13:06:35 -04:00
Bret Fisher
4b43287c5b more info for trainers 2016-09-27 11:37:42 -04:00
Jerome Petazzoni
c8c745459c Update stateful section 2016-09-19 11:23:23 -07:00
Jerome Petazzoni
04dec2e196 Round of updates for Velocity 2016-09-18 16:20:51 -07:00
Jerome Petazzoni
0f8c189786 Docker Application Bundle -> Distributed Application Bundle 2016-09-18 12:24:47 -07:00
Jerome Petazzoni
81cc14d47b Fix VM card background image 2016-09-18 12:18:05 -07:00
Jérôme Petazzoni
060b2377d5 Merge pull request #34 from everett-toews/fix-link
Fix broken link to nomenclature doc
2016-09-11 12:01:24 -05:00
Everett Toews
1e77736987 Fix broken link to nomenclature doc 2016-09-10 15:49:04 -05:00
Jérôme Petazzoni
bf2b4b7eb7 Merge pull request #32 from everett-toews/github-docs
Move slides to docs for GitHub Pages
2016-09-08 13:56:40 -05:00
Everett Toews
8396f13a4a Move slides to docs for GitHub Pages 2016-08-27 16:12:25 -05:00
Jerome Petazzoni
571097f369 Small fix 2016-08-27 13:55:26 -07:00
Jerome Petazzoni
b1110db8ca Update TOC 2016-08-24 14:01:31 -07:00
Jerome Petazzoni
b73a628f05 Remove old files 2016-08-24 13:52:16 -07:00
Jerome Petazzoni
a07795565d Update tweet message 2016-08-24 13:50:25 -07:00
Jérôme Petazzoni
c4acbfd858 Add diagram 2016-08-24 16:34:32 -04:00
Jerome Petazzoni
ddbda14e14 Reviews/edits 2016-08-24 13:31:00 -07:00
Jerome Petazzoni
ad4ea8659b Node management 2016-08-24 08:04:27 -07:00
Jerome Petazzoni
8d7f27d60d Add Docker Application Bundles
Capitalize Redis consistently
2016-08-24 06:59:15 -07:00
Jerome Petazzoni
9f21c7279c Compose build+push 2016-08-23 14:19:14 -07:00
Jerome Petazzoni
53ae221632 Add stateful service section 2016-08-23 11:03:57 -07:00
Jerome Petazzoni
6719bcda87 Update logging section 2016-08-22 15:51:26 -07:00
Jerome Petazzoni
40e0c96c91 Rolling upgrades 2016-08-22 14:21:00 -07:00
Jerome Petazzoni
2c8664e58d Updated dockercoins deployment instructions 2016-08-12 06:47:30 -07:00
Jerome Petazzoni
1e5cee2456 Updated intro+cluster setup part 2016-08-11 10:01:51 -07:00
Jerome Petazzoni
29b8f53ae0 More typo fixes courtesy of @tiffanyfj 2016-08-11 06:05:43 -07:00
Jérôme Petazzoni
451f68db1d Update instructions to join cluster 2016-08-10 15:50:30 +02:00
Jérôme Petazzoni
5a4d10ed1a Upgrade versions to Engine 1.12 + Compose 1.8 2016-08-10 15:50:10 +02:00
Jérôme Petazzoni
06d5dc7846 Merge pull request #29 from programmerq/pssh-command
detect debian command or upstream command
2016-08-07 15:26:29 +02:00
Jeff Anderson
b63eb0fa40 detect debian command or upstream command 2016-08-01 12:38:12 -06:00
Jérôme Petazzoni
117e2a9ba2 Merge pull request #13 from fiunchinho/master
Version can be set as env variable to be used, instead of generating unix timestamp
2016-07-11 23:57:13 -05:00
Jerome Petazzoni
d2f6e88fd1 Add -v flag for go get swarmit 2016-06-28 16:47:18 -07:00
Jérôme Petazzoni
c742c39ed9 Merge pull request #26 from beenanner/master
Upgrade docker-compose files to v2
2016-06-28 06:44:27 -07:00
Jerome Petazzoni
1f2b931b01 Slack -> Gitter 2016-06-22 11:54:47 -07:00
Jerome Petazzoni
e351ede294 Fix TOC 2016-06-22 11:48:00 -07:00
Jerome Petazzoni
9ffbfacca8 Last words 2016-06-19 11:15:11 -07:00
Jerome Petazzoni
60524d2ff3 Fixes 2016-06-19 00:07:19 -07:00
Jerome Petazzoni
7001c05ec0 DockerCon update 2016-06-18 18:06:15 -07:00
Jonathan Lee
5d4414723d Upgrade docker-compose files to v2 2016-06-13 21:47:59 -04:00
Jérôme Petazzoni
d31f0980a2 Merge pull request #24 from crd/recommend_slide_changes
Recommended slide changes
2016-06-02 17:10:13 -07:00
Cory Donnelly
6649e97b1e Update warning to reflect Consul Leader Election bug has been fixed 2016-06-02 15:58:31 -04:00
Cory Donnelly
06b8cbc964 Fix typos 2016-06-02 15:55:02 -04:00
Cory Donnelly
6992c85d5e Update Git BASH url 2016-06-02 15:53:12 -04:00
Jérôme Petazzoni
313d46ac47 Merge pull request #23 from soulshake/master
Make prompt more readable on light or dark backgrounds
2016-05-29 07:28:53 -07:00
AJ Bowen
5a5db2ad7f Modify prompt colors 2016-05-28 21:07:33 -07:00
Jérôme Petazzoni
1ae29909c8 Merge pull request #22 from soulshake/master
Add script to extract section title
2016-05-28 20:59:00 -07:00
AJ Bowen
6747480869 Add a script to extract section titles 2016-05-28 20:52:21 -07:00
AJ Bowen
9ba359e67a Fix more references to settings.yaml 2016-05-28 19:55:46 -07:00
Jérôme Petazzoni
4c34f6be9b Merge pull request #21 from soulshake/master
Cleanup, mostly
2016-05-28 19:49:33 -07:00
AJ Bowen
a747058a72 Replace settings.yaml with <settings/somefile.yaml> in the documentation, as per @jpetazzo request; add entrypoint to Dockerfile; remove symlink and path manipulation from Dockerfile. 2016-05-28 19:46:38 -07:00
AJ Bowen
a2b77ff63b remove two more comments from docker-compose.yaml 2016-05-28 18:40:07 -07:00
AJ Bowen
5c600a05d0 Replace 'user' with 'root' in images. Squash layers in Dockerfile. Update README. Clean up docker-compose.yaml. 2016-05-28 18:37:29 -07:00
Jerome Petazzoni
340fcd4de2 Minor fixes for PYCON 2016-05-28 18:27:36 -07:00
Jerome Petazzoni
96d5e69c77 Add command to query local registry after pushing busybox (thanks @crd) 2016-05-25 16:25:08 -07:00
Jérôme Petazzoni
3b3825a83a Merge pull request #20 from RaulKite/master
upgrade local vagrant machines to ubuntu 14.04
2016-05-25 16:13:11 -07:00
Jérôme Petazzoni
74e815a706 Merge pull request #18 from soulshake/master
Fix typos pointed out by @crd
2016-05-25 16:12:02 -07:00
Raul Sanchez
2e4417f502 Merge branch 'master' of github.com:RaulKite/orchestration-workshop 2016-05-23 14:14:44 +02:00
Raul Sanchez
a4970dbfd5 upgrade local vagrant machines to ubuntu 14.04 2016-05-23 14:14:25 +02:00
Raul Sanchez
5d6a35e116 upgrade local vagrant machines to ubuntu 14.04 2016-05-23 14:11:03 +02:00
AJ Bowen
943c15a3c8 Fix typos pointed out by @crd 2016-05-17 23:21:01 +02:00
Jerome Petazzoni
65252904c9 Last updates before OSCON 2016-05-17 08:47:59 -07:00
Jerome Petazzoni
31563480b3 Update AMIs and settings files 2016-05-16 09:30:37 -07:00
Jerome Petazzoni
9bf13f70b9 Reword conclusion 2016-05-16 08:31:02 -07:00
Jerome Petazzoni
191982c72e Fix capitalization of Consul, etcd, Zookeeper 2016-05-15 20:24:49 -07:00
Jerome Petazzoni
054bb739ac Add diagrams courtesy of @soulshake; and new dockercoins logo by @ggtools & @ndeloof 2016-05-15 20:21:40 -07:00
Jérôme Petazzoni
338d9f5847 Merge pull request #16 from ggtools/master
New dockercoin logo
2016-05-15 22:03:31 -05:00
Jerome Petazzoni
6b6d2c77ad Big round of updates for OSCON 2016 2016-05-15 20:03:14 -07:00
Jérôme Petazzoni
3be821fefb Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-05-10 15:04:34 +00:00
Jérôme Petazzoni
cc03b0bab2 Add CRAFT talk extensions 2016-05-10 15:04:00 +00:00
Jerome Petazzoni
f2ccd65b34 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-05-09 12:53:39 -07:00
Jerome Petazzoni
cacc6cd6d9 Fixes after Budapest edition 2016-05-03 02:38:16 -07:00
Jérôme Petazzoni
aabbc17d97 Merge pull request #17 from morty/patch-1
Typo
2016-04-27 17:15:06 +02:00
Tom Mortimer-Jones
b87ece9acd Typo 2016-04-27 09:37:51 +01:00
Jerome Petazzoni
2a35e4954c Last touch-ups 2016-04-26 15:40:45 -07:00
Jerome Petazzoni
feefd4e013 Update outline + last round of minor fixes 2016-04-26 14:05:20 -07:00
Jerome Petazzoni
8e1827a506 Another round of updates for CRAFT, almost there 2016-04-26 12:53:50 -07:00
Jerome Petazzoni
76689cd431 Updates for CRAFT (bring everything to Compose v2) 2016-04-26 06:20:11 -07:00
Jerome Petazzoni
7448474b92 Update for Berlin workshop 2016-04-21 22:25:00 -07:00
Jérôme Petazzoni
52a2e6f3e6 Bump swarm image version to 1.2; add Consul Compose file 2016-04-21 10:01:52 +00:00
Christophe Labouisse
666b38ab57 New dockercoin logo 2016-04-21 09:36:16 +02:00
Jerome Petazzoni
2b213a9821 Add reference to MobaXterm 2016-04-20 08:08:04 -07:00
Jerome Petazzoni
3ec61d706e Update versions 2016-04-20 08:07:50 -07:00
Jérôme Petazzoni
4fc9d64737 Add environment logic in autotest 2016-04-17 20:33:31 +00:00
Jérôme Petazzoni
e427c1aa38 Update test harness 2016-04-13 22:23:42 +00:00
Jérôme Petazzoni
169d1085a1 Update slides for automated testing 2016-04-13 22:23:35 +00:00
Jérôme Petazzoni
506c6ea61b Minor change in placeholder for GELF section 2016-04-13 21:49:02 +00:00
Jérôme Petazzoni
654de369ca Add autotest skeleton 2016-04-11 20:11:52 +00:00
Jerome Petazzoni
0e37cb8a93 Tweak printer settings 2016-04-05 11:15:14 -07:00
Jerome Petazzoni
4a081d06ee Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-04-05 11:14:49 -07:00
Jérôme Petazzoni
c21e2ae73c Last fixes for Stockholm 2016-04-05 18:13:07 +00:00
Jérôme Petazzoni
2ca1babd4a Set YAML indentation to two spaces 2016-04-05 11:11:34 +00:00
Jerome Petazzoni
1127ce8fb2 Minor updates about discovery of nodes and backends 2016-04-04 05:47:58 -07:00
Jerome Petazzoni
5662dbef23 Add vimrc + DOCKER_HOST hint in prompt 2016-04-03 13:29:44 -07:00
Jerome Petazzoni
da10562d0e Replace .icon[...warning...] with .warning[]; update hamba description 2016-04-03 06:51:20 -07:00
Jerome Petazzoni
89ca0f9173 Refactor first part for Compose 1.7 2016-04-03 06:40:15 -07:00
Jerome Petazzoni
8fe2b8b392 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-04-03 06:39:43 -07:00
Jérôme Petazzoni
15f6b7bcd1 Merge pull request #15 from schrodervictor/adds-local-environment
Adds local environment
2016-04-03 12:46:24 +02:00
Victor Schröder
6ba755d869 Adds .gitignore 2016-04-03 00:45:12 +02:00
Victor Schröder
1f350e2fe7 Amends the README 2016-04-03 00:15:17 +02:00
Victor Schröder
58713c2bc9 Adds ssh configuration to allow ssh between nodes without passwords 2016-04-03 00:11:27 +02:00
Victor Schröder
1d43566233 Creates the README file with instructions 2016-04-02 23:49:20 +02:00
Victor Schröder
1c4877164d Creates ansible.cfg file and copies the private-key used to ssh in the VMs 2016-04-02 23:31:14 +02:00
Victor Schröder
d7f9f00fcf Removes some unnecessary files from the home folders 2016-04-02 23:29:56 +02:00
Victor Schröder
62270121a0 Adds task to install tmux in the VMs 2016-04-02 23:29:03 +02:00
Victor Schröder
7d40b5debb Adjusts the /etc/hosts files to make all instances visible by the hostname in the subnet 2016-04-02 23:28:12 +02:00
Victor Schröder
3f2ce588c5 Makes the docker daemons listen to port 55555 2016-04-02 23:26:58 +02:00
Victor Schröder
32607b38a2 Adds installation for docker-compose in the playbook 2016-04-02 23:25:33 +02:00
Victor Schröder
25cde1706e Adds initial provisioning playbook and inventory 2016-04-02 23:21:40 +02:00
Victor Schröder
73f8c9e9ae Adds generic Vagrantfile and vagrant.yml with five nodes 2016-04-02 23:20:58 +02:00
Jérôme Petazzoni
ecb1508410 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-04-02 01:07:28 +00:00
Jérôme Petazzoni
edffb26c29 Add DNS watcher 2016-04-02 01:07:01 +00:00
Jerome Petazzoni
4e48a1badb Remove pip from dependencies 2016-04-01 14:51:42 -07:00
Jérôme Petazzoni
71e309080a Minor fixes 2016-04-01 23:22:26 +02:00
Jérôme Petazzoni
452b8c5dd3 Install Compose using single binaries instead of pip 2016-04-01 23:21:49 +02:00
Jérôme Petazzoni
884d0507c2 Display errors on stderr 2016-04-01 23:20:59 +02:00
Jérôme Petazzoni
599e344340 Upgrade remark from 0.5.9 to 0.13; switch slide layout to 16:9 2016-03-31 02:05:40 +02:00
Jérôme Petazzoni
7ea512c034 Consistent capitalization of Swarm and web UI 2016-03-30 16:16:41 +02:00
Jérôme Petazzoni
d59330d0ed Remove extraneous output in chaosmonkey 2016-03-29 18:59:54 +00:00
Jérôme Petazzoni
0440d1ea8d Switch hasher to ruby:alpine 2016-03-29 01:03:05 +00:00
Jérôme Petazzoni
a34f262a95 Switch to alpine/slim images when possible 2016-03-28 12:59:06 +00:00
Jérôme Petazzoni
d1f95ddb39 Add function to do custom build of Swarm 2016-03-28 12:58:31 +00:00
Jérôme Petazzoni
4b6b43530d Add chaosmonkey script 2016-03-28 12:58:11 +00:00
Jérôme Petazzoni
d8680366db Add command to retag instances 2016-03-28 13:47:39 +02:00
Jérôme Petazzoni
8240734b6e Fix Docker Machine install from OSX; add RC settings 2016-03-28 13:47:26 +02:00
Jérôme Petazzoni
306805c02e Refactor setting selection mechanism 2016-03-28 00:18:28 +02:00
Jérôme Petazzoni
cdd237f38e Fix instance shutdown on OSX 2016-03-26 11:14:34 +01:00
Jérôme Petazzoni
d5f98110d6 Install mosh 2016-03-26 10:58:26 +01:00
Jérôme Petazzoni
a42bc7fe28 Minor refactoring 2016-03-25 21:12:50 +01:00
AJ Bowen
9e5c7c8216 Complete rewrite of deployment process
The former VM deployment process relied on extra scripts
located in a (private) repo. The new process is standalone.
2016-03-25 13:59:53 +01:00
Jérôme Petazzoni
1be92ea3fa Add details about security upgrades and mention Nautilus 2016-03-24 14:23:19 +01:00
Jérôme Petazzoni
a38919ff5b Minor fixes after Munich workshop 2016-03-24 14:12:00 +01:00
Jérôme Petazzoni
c2180cffad Minor updates for Munich workshop 2016-03-22 00:16:49 +01:00
Jérôme Petazzoni
914c80dba8 Merge pull request #12 from npalm/qcon
Qcon
2016-03-22 00:15:32 +01:00
José Armesto
4dad732c15 Removed unnecessary prints 2016-03-19 19:45:03 +01:00
José Armesto
bb7cadf701 Version can be set as env variable to be used, instead of generating unix timestamp 2016-03-15 16:28:46 +01:00
Jérôme Petazzoni
fa2dc41176 Merge pull request #8 from svx/master
to -> do
2016-03-14 15:16:48 +01:00
Jérôme Petazzoni
f27b99cbc4 Merge pull request #9 from akafred/rng0-to-rng-in-ambassador
Change rng0 -> rng in dockercoins ambassador-file
2016-03-14 15:07:16 +01:00
Palm, Niek
73b997faf7 add sudo so docker can install consulfs 2016-03-11 15:59:16 +00:00
Palm, Niek
948691a7d6 Add missing commands and a hint 2016-03-11 15:25:16 +00:00
akafred
aa7e8bb1f9 Change rng0 -> rng in dockercoins ambassador-file 2016-03-11 10:55:51 +00:00
Jerome Petazzoni
88e558bd08 Last updates for QCON 2016-03-11 00:29:58 -08:00
Jerome Petazzoni
1174845c52 Add v2 load balancing technique 2016-03-09 15:39:23 -08:00
Jerome Petazzoni
8a73d85beb Fixed local registry setup thanks to @soulshake 2016-03-09 08:28:02 -08:00
Jerome Petazzoni
bc1bb493d4 First round of update for QCON 2016-03-09 06:37:21 -08:00
Jérôme Petazzoni
cc6543ce1c Add helper to remove stale endpoints 2016-03-05 14:33:37 +00:00
Jérôme Petazzoni
8c62220c8d Add missing EXPOSE in webui/Dockerfile 2016-03-02 12:12:00 +00:00
Jérôme Petazzoni
332eb9cff9 Add script to automate LB setup; add setup/teardown helpers 2016-03-02 12:11:22 +00:00
Jerome Petazzoni
2ad3625687 Add ConsulFS section 2016-03-01 06:30:53 -08:00
Jerome Petazzoni
7e60d482f7 Add cron strategies 2016-03-01 05:37:31 -08:00
Jerome Petazzoni
4d2e62ffee Adjust for QCON schedule; use local registry 2016-03-01 05:21:02 -08:00
Jérôme Petazzoni
b4870b8ed6 Hackish CEPH cluster on Docker 2016-03-01 01:01:54 +00:00
Jérôme Petazzoni
29ae9a10f2 Remove extraneous ambassadors 2016-02-29 22:03:25 +00:00
Jérôme Petazzoni
ec2b098c29 Add parallel_run function and use it to manage ambassadors 2016-02-29 21:24:32 +00:00
Jérôme Petazzoni
240a55a18a Update other scripts to use common.py 2016-02-29 19:08:01 +00:00
Jérôme Petazzoni
362c5e8a65 Add Compose file for self-hosted registry 2016-02-29 18:40:14 +00:00
Jérôme Petazzoni
b47dccdb8d Move scripts to bin directory; abstract Compose V1/V2 formats 2016-02-29 18:17:06 +00:00
sven
dca494c090 Revoke wording change 2016-02-24 20:39:20 +01:00
sven
9286fdec28 Assess -> Access 2016-02-19 11:46:14 +01:00
sven
c31e961e61 to -> do 2016-02-19 11:30:01 +01:00
Jerome Petazzoni
2b88a121ba Add DockerCoins image 2016-02-18 23:42:53 -08:00
Jerome Petazzoni
84a521296d Round of update for Amsterdam workshop 2016-02-18 15:09:58 -08:00
Jerome Petazzoni
86271b6336 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-02-14 07:11:13 -08:00
Jerome Petazzoni
104e736c45 Add kibana screenshot 2016-02-14 07:10:52 -08:00
Jerome Petazzoni
9dfb421eb1 Last round of updates 2016-02-14 07:10:42 -08:00
Jerome Petazzoni
d69719abc6 Content rehaul before Paris workshop 2016-02-14 06:05:39 -08:00
Jérôme Petazzoni
98bac6713d Add v2 Compose file 2016-02-14 13:19:10 +00:00
Jerome Petazzoni
6c1958e244 Update Compose+Swarm section 2016-02-13 10:56:55 -08:00
Jerome Petazzoni
936a80f86d Conclusions + mention Docker Cloud and UCP 2016-02-13 10:23:10 -08:00
Jerome Petazzoni
2634230a4a Switch to node:4 image to work around AUFS issue 2016-02-12 09:11:00 -08:00
Jerome Petazzoni
4e96a2949c Add Swarm replication and rescheduling sections 2016-02-12 09:10:39 -08:00
Jerome Petazzoni
175d468718 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-02-09 16:33:25 -08:00
Jerome Petazzoni
eb38b7e2e6 Rehaul logging section 2016-02-09 16:33:16 -08:00
Jérôme Petazzoni
9d5dfc9a40 Add sample Compose file with GELF logging 2016-02-10 00:32:00 +00:00
Jérôme Petazzoni
676aad7edc Revert to Compose v1 for simplicity 2016-02-09 23:39:30 +00:00
Jérôme Petazzoni
43d574a164 Add Compose file for ELK stack 2016-02-09 22:09:48 +00:00
Jerome Petazzoni
661948d162 Update versions, schedule, and twitter button 2016-02-08 07:24:50 -08:00
315 changed files with 30205 additions and 4302 deletions

12
.gitignore vendored

@@ -1,5 +1,11 @@
*.pyc
*.swp
*~
ips.txt
ips.html
ips.pdf
prepare-vms/ips.txt
prepare-vms/ips.html
prepare-vms/ips.pdf
prepare-vms/settings.yaml
prepare-vms/tags
slides/*.yml.html
slides/autopilot/state.yaml
node_modules

24
CHECKLIST.md Normal file

@@ -0,0 +1,24 @@
Checklist to use when delivering a workshop
Authored by Jérôme; additions by Bridget
- [ ] Create event-named branch (such as `conferenceYYYY`) in the [main repo](https://github.com/jpetazzo/container.training/)
- [ ] Create file `slides/_redirects` containing a link to the desired tutorial: `/ /kube-halfday.yml.html 200`
- [ ] Push local branch to GitHub and merge into main repo
- [ ] [Netlify setup](https://app.netlify.com/sites/container-training/settings/domain): create subdomain for event-named branch
- [ ] Add link to event-named branch to [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] Update the slides that says which versions we are using for [kube](https://github.com/jpetazzo/container.training/blob/master/slides/kube/versions-k8s.md) or [swarm](https://github.com/jpetazzo/container.training/blob/master/slides/swarm/versions.md) workshops
- [ ] Update the version of Compose and Machine in [settings](https://github.com/jpetazzo/container.training/tree/master/prepare-vms/settings)
- [ ] (optional) Create chatroom
- [ ] (optional) Set chatroom in YML ([kube half-day example](https://github.com/jpetazzo/container.training/blob/master/slides/kube-halfday.yml#L6-L8)) and deploy
- [ ] (optional) Put chat link on [container.training front page](https://github.com/jpetazzo/container.training/blob/master/slides/index.html)
- [ ] How many VMs do we need? Check with event organizers ahead of time
- [ ] Provision VMs (slightly more than we think we'll need)
- [ ] Change password on presenter's VMs (to forestall any hijinx)
- [ ] Onsite: walk the room to count seats, check power supplies, lectern, A/V setup
- [ ] Print cards
- [ ] Cut cards
- [ ] Last-minute merge from master
- [ ] Check that all looks good
- [ ] DELIVER!
- [ ] Shut down VMs
- [ ] Update index.html to remove chat link and move session to past things

19
LICENSE

@@ -1,13 +1,12 @@
Copyright 2015 Jérôme Petazzoni
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
The code in this repository is licensed under the Apache License
Version 2.0. You may obtain a copy of this license at:
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
The instructions and slides in this repository (e.g. the files
with extension .md and .yml in the "slides" subdirectory) are
under the Creative Commons Attribution 4.0 International Public
License. You may obtain a copy of this license at:
https://creativecommons.org/licenses/by/4.0/legalcode

313
README.md

@@ -1,31 +1,135 @@
# Orchestration at scale(s)
# Container Training
This is the material for the "Docker orchestration workshop"
written and delivered by Jérôme Petazzoni (and possibly others)
at multiple conferences and events like:
This repository (formerly known as `orchestration-workshop`)
contains materials (slides, scripts, demo app, and other
code samples) used for various workshops, tutorials, and
training sessions around the themes of Docker, containers,
and orchestration.
- QCON, New York City (2015, June)
- KCDC, Kansas City (2015, June)
- JDEV, Bordeaux (2015, July)
- OSCON, Portland (2015, July)
For the moment, it includes:
- Introduction to Docker and Containers,
- Container Orchestration with Docker Swarm,
- Container Orchestration with Kubernetes.
These materials have been designed around the following
principles:
- they assume very little prior knowledge of Docker,
containers, or a particular programming language;
- they can be used in a classroom setup (with an
instructor), or self-paced at home;
- they are hands-on, meaning that they contain lots
of examples and exercises that you can easily
reproduce;
- they progressively introduce concepts in chapters
that build on top of each other.
If you're looking for the materials, you can stop reading
right now, and hop to http://container.training/, which
hosts all the slide decks available.
The rest of this document explains how this repository
is structured, and how to use it to deliver (or create)
your own tutorials.
## Slides
## Why a single repository?
The slides are in the `www/htdocs` directory.
All these materials have been gathered in a single repository
because they have a few things in common:
The recommended way to view them is to:
- have a Docker host
- clone this repository to your Docker host
- `cd www && docker-compose up -d`
- this will start a web server on port 80
- point your browser at your Docker host and enjoy
- some [common slides](slides/common/) that are re-used
(and updated) identically between different decks;
- a [build system](slides/) generating HTML slides from
Markdown source files;
- a [semi-automated test harness](slides/autopilot/) to check
that the exercises and examples provided work properly;
- a [PhantomJS script](slides/slidechecker.js) to check
that the slides look good and don't have formatting issues;
- [deployment scripts](prepare-vms/) to start training
VMs in bulk;
- a fancy pipeline powered by
[Netlify](https://www.netlify.com/) and continuously
deploying `master` to http://container.training/.
## Sample code
## What are the different courses available?
**Introduction to Docker** is derived from the first
"Docker Fundamentals" training materials. For more information,
see [jpetazzo/intro-to-docker](https://github.com/jpetazzo/intro-to-docker).
The version in this repository has been adapted to the Markdown
publishing pipeline. It is still maintained, but only receives
minor updates once in a while.
**Container Orchestration with Docker Swarm** (formerly
known as "Orchestration Workshop") is a workshop created by Jérôme
Petazzoni in June 2015. Since then, it has been continuously updated
and improved, and has received contributions from many other authors.
It is actively maintained.
**Container Orchestration with Kubernetes** was created by
Jérôme Petazzoni in October 2017, with help and feedback from
a few other contributors. It is actively maintained.
## Repository structure
- [bin](bin/)
- A few helper scripts that you can safely ignore for now.
- [dockercoins](dockercoins/)
- The demo app used throughout the orchestration workshops.
- [efk](efk/), [elk](elk/), [prom](prom/), [snap](snap/):
- Logging and metrics stacks used in the later parts of
the orchestration workshops.
- [prepare-local](prepare-local/), [prepare-machine](prepare-machine/):
- Contributed scripts to automate the creation of local environments.
These could use some help to test/check that they work.
- [prepare-vms](prepare-vms/):
- Scripts to automate the creation of AWS instances for students.
These are routinely used and actively maintained.
- [slides](slides/):
- All the slides! They are assembled from Markdown files with
a custom Python script, and then rendered using [gnab/remark](
https://github.com/gnab/remark). Check this directory for more details.
- [stacks](stacks/):
- A handful of Compose files (version 3) making it easy to
deploy complex application stacks.
## Course structure
(This applies only for the orchestration workshops.)
The workshop introduces a demo app, "DockerCoins," built
around a micro-services architecture. First, we run it
on a single node, using Docker Compose. Then, we pretend
that we need to scale it, and we use an orchestrator
(SwarmKit or Kubernetes) to deploy and scale the app on
a cluster.
We explain the concepts of the orchestrator. For SwarmKit,
we set up the cluster with `docker swarm init` and `docker swarm join`.
For Kubernetes, we use pre-configured clusters.
Then, we cover more advanced concepts: scaling, load balancing,
updates, global services or daemon sets.
There are a number of advanced optional chapters about
logging, metrics, secrets, network encryption, etc.
The content is very modular: it is broken down into a large
number of Markdown files that are put together according
to a YAML manifest. This makes it very easy to re-use content
between different workshops.
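The assembly step described above can be sketched in a few lines. This is a simplified illustration: the real build system lives in the `slides/` directory and uses YAML manifests, and the list-of-chapters format and `assemble_deck` helper below are assumptions for demonstration, not the actual schema.

```python
# Minimal sketch of assembling one slide deck from modular Markdown
# chapters. The real manifests (in slides/) are YAML files with a
# richer schema; here a "manifest" is just an ordered list of names.

def assemble_deck(chapters, read_file):
    """Concatenate Markdown chapters into a single remark source.

    `read_file` maps a chapter name to its Markdown content, so this
    can be exercised without touching the filesystem.
    """
    # remark treats "---" on its own line as a slide separator.
    return "\n---\n".join(read_file(name) for name in chapters)

# Example: tiny "chapters" that could be shared between two decks.
chapters = {
    "intro.md": "# Welcome",
    "compose.md": "# Docker Compose",
}
deck = assemble_deck(["intro.md", "compose.md"], chapters.__getitem__)
```

Re-using a chapter in another deck is then just listing it in that deck's manifest as well.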
### DockerCoins
The sample app is in the `dockercoins` directory.
It's used throughout the chapters
to illustrate different orchestration concepts.
To see it in action:
@@ -34,54 +138,145 @@ To see it in action:
- the web UI will be available on port 8000
## Running the workshop
## Running the Workshop
WARNING: those instructions are incomplete. Consider
them as notes quickly drafted on a napkin rather than
proper documentation!
If you want to deliver one of these workshops yourself,
this section is for you!
> *This section has been mostly contributed by
> [Bret Fisher](https://twitter.com/bretfisher), who was
> one of the first people brave enough to deliver
> this workshop without me. Thanks Bret! 🍻
>
> Jérôme.*
### General timeline of planning a workshop
- Fork repo and run through slides, doing the hands-on to be sure you
understand the different `dockercoins` repos and the steps we go through to
get to a full Swarm Mode cluster of many containers. You'll update the first
few slides and last slide at a minimum, with your info.
- ~~Your docs directory can use GitHub Pages.~~
- This workshop expects 5 servers per student. You can get away with as few
as 2 servers per student, but you'll need to change the slide deck to
accommodate. More servers = more fun.
- If you have more than ~20 students, try to get an assistant (TA) to help
people with issues, so you don't have to stop the workshop to help someone
with ssh etc.
- AWS is our most tested process for generating student machines. In
`prepare-vms` you'll find scripts to create EC2 instances, install docker,
pre-pull images, and even print "cards" to place at each student's seat with
IPs and username/password.
- Test AWS Scripts: Be sure to test creating *all* your needed servers a week
before workshop (just for a few minutes). You'll likely hit AWS limits in the
region closest to your class, and it sometimes takes days to get AWS to raise
those limits with a support ticket.
- Create a https://gitter.im chat room for your workshop and update slides
with the URL. It's also useful for a TA to monitor during the workshop. You can
use it before/after to answer questions, and it generally works better than
"email me that question".
- If you can send an email to students ahead of time, mention how they should
get SSH, and test that SSH works. If they can `ssh github.com` and get
`permission denied (publickey)` then they know it worked, and SSH is properly
installed and they don't have anything blocking it. SSH and a browser are all
they need for class.
- Typically you create the servers the day before or the morning of the workshop,
and leave them up for the rest of the day afterwards. If creating hundreds of
servers, you'll likely want to run all these `workshopctl` commands from a
dedicated instance in the same region as the instances you want to create. It's
much faster this way if you're on poor internet. Also, create 2 sets of servers
for yourself: use one during the workshop and keep the 2nd as a backup.
- Remember you'll need to print the "cards" for students, so you'll need to
create instances while you have a way to print them.
### Things That Could Go Wrong
- You create AWS instances ahead of time, hit the limits in your region, and
didn't plan enough time to wait for support to increase them. :(
- Students have technical issues during workshop. Can't get ssh working,
locked-down computer, host firewall, etc.
- Horrible wifi, or ssh port TCP/22 not open on network! If wifi sucks you
can try using MOSH https://mosh.org which handles SSH over UDP. TMUX can also
prevent you from losing your place if you get disconnected from servers.
https://tmux.github.io
- Forget to print "cards" and cut them up for handing out IPs.
- Forget to have fun and focus on your students!
### Creating the VMs
I use the `trainctl` script from the `docker-fundamentals`
repository. Sorry if you don't have that!
After starting the VMs, use the `trainctl ips` command
to dump the list of IP addresses into a file named `ips.txt`.
`prepare-vms/workshopctl` is the script that gets you most of what you need for
setting up instances. See
[prepare-vms/README.md](prepare-vms)
for all the info on tools and scripts.
### Generating the printed cards
### Content for Different Workshop Durations
- Put `ips.txt` file in `prepare-vms` directory.
- Generate HTML file.
- Open it in Chrome.
- Transform to PDF.
- Print it.
With all the slides, this workshop is a full day long. If you need to deliver
it on a shorter timeline, here are some recommendations on what to cut out. You
can replace `---` with `???`, which will hide slides. Or leave them in and
add something like `(EXTRA CREDIT)` to the title, so students can still view the
content but you know to skip it during the presentation.
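As a minimal illustration of how remark interprets these markers (`---` separates slides; everything after `???` within a slide becomes presenter notes):

```markdown
# A slide everyone sees

---

# An optional slide

???

Anything after "???" is a presenter note, invisible to the audience.
Replacing the "---" above with "???" would hide the whole optional
slide by turning it into notes for the previous slide.
```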
### Deploying your SSH key to all the machines
#### 3 Hour Version
- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
- Limit time on debug tools, maybe skip a few. *"Chapter 1:
Identifying bottlenecks"*
- Limit time on Compose, try to have them building the Swarm Mode by 30
minutes in
- Skip most of Chapter 3, Centralized Logging and ELK
- Skip most of Chapter 4, but keep stateful services and DABs if possible
- Mention what DABs are, but make this part optional in case you run out
of time
### Installing extra packages
#### 2 Hour Version
- Source `postprep.rc`.
(This will install a few extra packages, add entries to
/etc/hosts, generate SSH keys, and deploy them on all hosts.)
- Skip all the above, and:
- Skip the story arc of debugging dockercoins altogether, including the
troubleshooting tools. Just focus on getting them from single-host to
multi-host and multi-container.
- The goal is to spend the first 30 minutes on the intro, Docker Compose, and
what dockercoins is, getting it up on one node with docker-compose.
- Next 60-75 minutes is getting dockercoins in Swarm Mode services across
servers. Big Win.
- Last 15-30 minutes is for stateful services, DAB files, and questions.
### Final touches
### Pre-built images
- Set two groups of machines for instructor's use.
- You will use the first group during the workshop.
- The second group will run a web server with the slides.
- Log into the first machine of the second group.
- Git clone this repo.
- Put up the web server as instructed above.
- Use cli53 to add an A record for e.g. `view.dckr.info`.
There are pre-built images for the 4 components of the DockerCoins demo app: `dockercoins/hasher:v0.1`, `dockercoins/rng:v0.1`, `dockercoins/webui:v0.1`, and `dockercoins/worker:v0.1`. They correspond to the code in this repository.
There are also three variants, for demo purposes:
- `dockercoins/rng:v0.2` is broken (the server won't even start),
- `dockercoins/webui:v0.2` has bigger font on the Y axis and a green graph (instead of blue),
- `dockercoins/worker:v0.2` is 11x slower than `v0.1`.
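As a sketch, these pre-built images can be dropped into a Compose file instead of building locally (the service names and port mappings below mirror `dockercoins/docker-compose.yml` shown elsewhere in this diff; swap in a `v0.2` tag to demo one of the variants):

```yaml
# DockerCoins from pre-built images; e.g. changing worker to
# dockercoins/worker:v0.2 demonstrates the "11x slower" variant
# without touching any code.
version: "2"
services:
  rng:
    image: dockercoins/rng:v0.1
    ports:
      - "8001:80"
  hasher:
    image: dockercoins/hasher:v0.1
    ports:
      - "8002:80"
  webui:
    image: dockercoins/webui:v0.1
    ports:
      - "8000:80"
  redis:
    image: redis
  worker:
    image: dockercoins/worker:v0.1
```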
## Past events
Since its inception, this workshop has been delivered dozens of times,
to thousands of people, and has continuously evolved. This is a short
history of the first times it was delivered. Look also in the "tags"
of this repository: they all correspond to successive iterations of
this workshop. If you attended a past version of the workshop, you
can use these tags to see what has changed since then.
- QCON, New York City (2015, June)
- KCDC, Kansas City (2015, June)
- JDEV, Bordeaux (2015, July)
- OSCON, Portland (2015, July)
- StrangeLoop, Saint Louis (2015, September)
- LISA, Washington D.C. (2015, November)
- SCALE, Pasadena (2016, January)
- Zenika, Paris (2016, February)
- Container Solutions, Amsterdam (2016, February)
- ... and much more!
# Problems? Bugs? Questions?
@@ -94,12 +289,18 @@ If there is a bug and you can't fix it, but you can
reproduce it: submit an issue explaining how to reproduce.
If there is a bug and you can't even reproduce it:
sorry. It is probably an Heisenbug. I can't act on it
until it's reproducible.
sorry. It is probably a Heisenbug. We can't act on it
until it's reproducible, alas.
if you have attended this workshop and have feedback,
or if you want us to deliver that workshop at your
conference or for your company: contact me (jerome
at docker dot com).
If you have attended this workshop and have feedback,
or if you want somebody to deliver that workshop at your
conference or for your company: you can contact one of us!
- jerome dot petazzoni at gmail dot com
- bret at bretfisher dot com
If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!
**Thank you!**
Thank you!

27
bin/add-logging.py Executable file

@@ -0,0 +1,27 @@
#!/usr/bin/env python
import os
import sys
import yaml
def error(msg):
print("ERROR: {}".format(msg))
exit(1)
compose_file = os.environ["COMPOSE_FILE"]
input_file, output_file = compose_file, compose_file
config = yaml.load(open(input_file))
version = config.get("version")
if version != "2":
error("Unsupported $COMPOSE_FILE version: {!r}".format(version))
for service in config["services"]:
config["services"][service]["logging"] = dict(
driver="gelf",
options={"gelf-address": "udp://localhost:12201"},
)
yaml.safe_dump(config, open(output_file, "w"), default_flow_style=False)

47
bin/chaosmonkey.sh Executable file

@@ -0,0 +1,47 @@
#!/bin/sh
[ -z "$2" ] && {
echo "Syntax: $0 <host> <command>"
echo "
Command should be:
connect Cancels the effects of 'disconnect'
disconnect Disable all network communication except SSH
reboot Sync disks and immediately reboot (without proper shutdown)
"
exit 1
}
ssh docker@$1 sudo sh <<EOF
_cm_init () {
iptables -L CHAOSMONKEY >/dev/null 2>/dev/null || {
iptables -N CHAOSMONKEY
iptables -I FORWARD -j CHAOSMONKEY
iptables -I INPUT -j CHAOSMONKEY
iptables -I OUTPUT -j CHAOSMONKEY
}
}
_cm_reboot () {
echo "Rebooting..."
echo s > /proc/sysrq-trigger
echo u > /proc/sysrq-trigger
echo b > /proc/sysrq-trigger
}
_cm_disconnect () {
_cm_init
echo "Dropping all network traffic, except SSH..."
iptables -F CHAOSMONKEY
iptables -A CHAOSMONKEY -p tcp --sport 22 -j ACCEPT
iptables -A CHAOSMONKEY -p tcp --dport 22 -j ACCEPT
iptables -A CHAOSMONKEY -j DROP
}
_cm_connect () {
_cm_init
echo "Re-enabling network communication..."
iptables -F CHAOSMONKEY
}
_cm_$2
EOF


@@ -1,69 +0,0 @@
#!/usr/bin/env python
import os
import subprocess
import time
import yaml
user_name = os.environ.get("DOCKERHUB_USER")
if not user_name:
print("Please set the DOCKERHUB_USER to your user name, e.g.:")
print("export DOCKERHUB_USER=zoe")
exit(1)
# Get the name of the current directory.
project_name = os.path.basename(os.path.realpath("."))
# Generate a Docker image tag, using the UNIX timestamp.
# (i.e. number of seconds since January 1st, 1970)
version = str(int(time.time()))
input_file = os.environ.get(
"DOCKER_COMPOSE_YML", "docker-compose.yml")
output_file = os.environ.get(
"DOCKER_COMPOSE_YML", "docker-compose.yml-{}".format(version))
if input_file == output_file == "docker-compose.yml":
print("I will not clobber your docker-compose.yml file.")
print("Unset DOCKER_COMPOSE_YML or set it to something else.")
exit(1)
print("Input file: {}".format(input_file))
print("Output file: {}".format(output_file))
# Execute "docker-compose build" and abort if it fails.
subprocess.check_call(["docker-compose", "-f", input_file, "build"])
# Load the services from the input docker-compose.yml file.
# TODO: run parallel builds.
stack = yaml.load(open(input_file))
# Iterate over all services that have a "build" definition.
# Tag them, and initiate a push in the background.
push_operations = dict()
for service_name, service in stack.items():
if "build" in service:
compose_image = "{}_{}".format(project_name, service_name)
hub_image = "{}/{}_{}:{}".format(user_name, project_name, service_name, version)
# Re-tag the image so that it can be uploaded to the Docker Hub.
subprocess.check_call(["docker", "tag", compose_image, hub_image])
# Spawn "docker push" to upload the image.
push_operations[service_name] = subprocess.Popen(["docker", "push", hub_image])
# Replace the "build" definition by an "image" definition,
# using the name of the image on the Docker Hub.
del service["build"]
service["image"] = hub_image
# Wait for push operations to complete.
for service_name, popen_object in push_operations.items():
print("Waiting for {} push to complete...".format(service_name))
popen_object.wait()
print("Done.")
# Write the new docker-compose.yml file.
with open(output_file, "w") as f:
yaml.safe_dump(stack, f, default_flow_style=False)
print("Wrote new compose file.")
print("COMPOSE_FILE={}".format(output_file))


@@ -1,10 +0,0 @@
cadvisor:
image: google/cadvisor
ports:
- "8080:8080"
volumes:
- "/:/rootfs:ro"
- "/var/run:/var/run:rw"
- "/sys:/sys:ro"
- "/var/lib/docker/:/var/lib/docker:ro"


@@ -1,69 +0,0 @@
#!/usr/bin/env python
import os
import subprocess
import sys
import yaml
compose_file = os.environ.get("COMPOSE_FILE") or sys.argv[1]
stack = yaml.load(open(compose_file))
project_name = os.path.basename(os.path.realpath("."))
# Get all services and backends in our compose application
containers_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=com.docker.compose.project={}".format(project_name),
"--format", '{{ .ID }} '
'{{ .Label "com.docker.compose.service" }} '
'{{ .Ports }}',
])
# Build list of backends
frontend_ports = dict()
backends = dict()
for container in containers_data.split('\n'):
if not container:
continue
# TODO: support services with multiple ports!
container_id, service_name, port = container.split(' ')
if not port:
continue
backend, frontend = port.split("->")
backend_addr, backend_port = backend.split(':')
frontend_port, frontend_proto = frontend.split('/')
# TODO: deal with udp (mostly skip it?)
assert frontend_proto == "tcp"
# TODO: check inconsistencies between port mappings
frontend_ports[service_name] = frontend_port
if service_name not in backends:
backends[service_name] = []
backends[service_name].append((backend_addr, backend_port))
# Get all existing ambassadors for this application
ambassadors_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=ambassador.project={}".format(project_name),
"--format", '{{ .ID }} '
'{{ .Label "ambassador.service" }} '
'{{ .Label "ambassador.bindaddr" }}',
])
# Update ambassadors
for ambassador in ambassadors_data.split('\n'):
if not ambassador:
continue
ambassador_id, service_name, bind_address = ambassador.split()
print("Updating configuration for {}/{} -> {}:{} -> {}"
.format(service_name, ambassador_id,
bind_address, frontend_ports[service_name],
backends[service_name]))
command = [
"docker", "run", "--rm", "--volumes-from", ambassador_id,
"jpetazzo/hamba", "reconfigure",
"{}:{}".format(bind_address, frontend_ports[service_name])
]
for backend_addr, backend_port in backends[service_name]:
command.extend([backend_addr, backend_port])
print(command)
subprocess.check_call(command)


@@ -1,60 +0,0 @@
#!/usr/bin/env python
import os
import subprocess
import sys
import yaml
compose_file = os.environ.get("COMPOSE_FILE") or sys.argv[1]
stack = yaml.load(open(compose_file))
project_name = os.path.basename(os.path.realpath("."))
# Get all services in our compose application
containers_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=com.docker.compose.project={}".format(project_name),
"--format", '{{ .ID }} {{ .Label "com.docker.compose.service" }}',
])
# Get all existing ambassadors for this application
ambassadors_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=ambassador.project={}".format(project_name),
"--format", '{{ .ID }} '
'{{ .Label "ambassador.container" }} '
'{{ .Label "ambassador.service" }}',
])
# Build a set of existing ambassadors
ambassadors = dict()
for ambassador in ambassadors_data.split('\n'):
if not ambassador:
continue
ambassador_id, container_id, linked_service = ambassador.split()
ambassadors[container_id, linked_service] = ambassador_id
# Start the missing ambassadors
for container in containers_data.split('\n'):
if not container:
continue
container_id, service_name = container.split()
extra_hosts = stack[service_name].get("extra_hosts", {})
for linked_service, bind_address in extra_hosts.items():
description = "Ambassador {}/{}/{}".format(
service_name, container_id, linked_service)
ambassador_id = ambassadors.get((container_id, linked_service))
if ambassador_id:
print("{} already exists: {}".format(description, ambassador_id))
else:
print("{} not found, creating it:".format(description))
subprocess.check_call([
"docker", "run", "-d",
"--net", "container:{}".format(container_id),
"--label", "ambassador.project={}".format(project_name),
"--label", "ambassador.container={}".format(container_id),
"--label", "ambassador.service={}".format(linked_service),
"--label", "ambassador.bindaddr={}".format(bind_address),
"jpetazzo/hamba", "run"
])


@@ -1,3 +0,0 @@
#!/bin/sh
docker ps -q --filter label=ambassador.project=dockercoins |
xargs docker rm -f


@@ -0,0 +1,30 @@
version: "2"
services:
rng:
build: rng
image: ${REGISTRY_SLASH}rng${COLON_TAG}
ports:
- "8001:80"
hasher:
build: hasher
image: ${REGISTRY_SLASH}hasher${COLON_TAG}
ports:
- "8002:80"
webui:
build: webui
image: ${REGISTRY_SLASH}webui${COLON_TAG}
ports:
- "8000:80"
volumes:
- "./webui/files/:/files/"
redis:
image: redis
worker:
build: worker
image: ${REGISTRY_SLASH}worker${COLON_TAG}


@@ -0,0 +1,47 @@
version: "2"
services:
rng:
build: rng
ports:
- "8001:80"
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
hasher:
build: hasher
ports:
- "8002:80"
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
webui:
build: webui
ports:
- "8000:80"
volumes:
- "./webui/files/:/files/"
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
redis:
image: redis
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
worker:
build: worker
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201


@@ -1,29 +1,26 @@
rng:
version: "2"
services:
rng:
build: rng
ports:
- "8001:80"
- "8001:80"
hasher:
hasher:
build: hasher
ports:
- "8002:80"
- "8002:80"
webui:
webui:
build: webui
links:
- redis
ports:
- "8000:80"
- "8000:80"
volumes:
- "./webui/files/:/files/"
- "./webui/files/:/files/"
redis:
redis:
image: redis
worker:
worker:
build: worker
links:
- rng
- hasher
- redis


@@ -1,43 +0,0 @@
rng1:
build: rng
rng2:
build: rng
rng3:
build: rng
rng0:
image: jpetazzo/hamba
links:
- rng1
- rng2
- rng3
command: 80 rng1 80 rng2 80 rng3 80
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
webui:
build: webui
links:
- redis
ports:
- "8000:80"
#volumes:
# - "./webui/files/:/files/"
redis:
image: jpetazzo/hamba
command: 6379 AA.BB.CC.DD EEEEE
worker:
build: worker
links:
- rng0:rng
- hasher:hasher
- redis


@@ -1,43 +0,0 @@
rng1:
build: rng
rng2:
build: rng
rng3:
build: rng
rng0:
image: jpetazzo/hamba
links:
- rng1
- rng2
- rng3
command: 80 rng1 80 rng2 80 rng3 80
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
webui:
build: webui
extra_hosts:
redis: A.B.C.D
ports:
- "8000:80"
#volumes:
# - "./webui/files/:/files/"
#redis:
# image: redis
worker:
build: worker
links:
- rng0:rng
- hasher:hasher
extra_hosts:
redis: A.B.C.D


@@ -1,43 +0,0 @@
rng1:
build: rng
rng2:
build: rng
rng3:
build: rng
rng0:
image: jpetazzo/hamba
links:
- rng1
- rng2
- rng3
command: 80 rng1 80 rng2 80 rng3 80
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
webui:
build: webui
links:
- redis
ports:
- "8000:80"
volumes:
- "./webui/files/:/files/"
redis:
image: redis
worker:
build: worker
links:
- rng0:rng
- hasher:hasher
- redis


@@ -1,6 +1,10 @@
FROM ruby
FROM ruby:alpine
RUN apk add --update build-base curl
RUN gem install sinatra
RUN gem install thin
ADD hasher.rb /
CMD ["ruby", "hasher.rb"]
EXPOSE 80
HEALTHCHECK \
--interval=1s --timeout=2s --retries=3 --start-period=1s \
CMD curl http://localhost/ || exit 1


@@ -1,5 +0,0 @@
hasher: 80
redis: 6379
rng: 80
webui: 80


@@ -1,4 +1,4 @@
FROM python
FROM python:alpine
RUN pip install Flask
COPY rng.py /
CMD ["python", "rng.py"]


@@ -1,6 +1,7 @@
FROM node
FROM node:4-slim
RUN npm install express
RUN npm install redis
COPY files/ /files/
COPY webui.js /
CMD ["node", "webui.js"]
EXPOSE 80


@@ -2,13 +2,16 @@
<html>
<head>
<title>DockerCoin Miner WebUI</title>
<link rel="stylesheet" type="text/css" href="rickshaw.min.css">
<link rel="stylesheet" type="text/css" href="rickshaw.min.css">
<style>
#graph {
background-color: #eee;
width: 800px;
height: 400px;
}
#tweet {
color: royalblue;
}
</style>
<script src="jquery.js"></script>
<script src="d3.min.js"></script>
@@ -38,13 +41,22 @@ function refresh () {
while (points.length > 0) {
points.pop();
}
var speed;
for (var i=0; i<series.length-1; i++) {
// Compute instantaneous speed
var s1 = series[i];
var s2 = series[i+1];
var speed = (s2.hashes-s1.hashes)/(s2.now-s1.now);
speed = (s2.hashes-s1.hashes)/(s2.now-s1.now);
points.push({ x: s2.now, y: speed });
}
$("#speed").text("~" + speed.toFixed(1) + " hashes/second");
var msg = ("I'm attending a @docker orchestration workshop, "
+ "and my #DockerCoins mining rig is crunching "
+ speed.toFixed(1) + " hashes/second! W00T!");
$("#tweet").attr(
"href",
"https://twitter.com/intent/tweet?text="+encodeURIComponent(msg)
);
if (graph == null) {
graph = new Rickshaw.Graph({
renderer: "area",
@@ -56,7 +68,7 @@ function refresh () {
series: [
{ name: "Coins",
color: "steelblue",
data: points
data: points
}
]
});
@@ -89,5 +101,11 @@ $(function () {
<div id="graph"></div>
<h2>
Current mining speed:
<span id="speed">-</span>
<a id="tweet" href="#">(Tweet this!)</a>
</h2>
</body>
</html>


@@ -1,4 +1,4 @@
FROM python
FROM python:alpine
RUN pip install redis
RUN pip install requests
COPY worker.py /

36
efk/README.md Normal file

@@ -0,0 +1,36 @@
# Elasticsearch + Fluentd + Kibana
This is a variation on the classic "ELK" stack.
The [fluentd](fluentd/) subdirectory contains a Dockerfile to build
a fluentd image embedding a simple configuration file, accepting log
entries on port 24224 and storing them in Elasticsearch in a format
that Kibana can use.
You can also use a pre-built image, `jpetazzo/fluentd:v0.1`
(e.g. if you want to deploy on a cluster and don't want to deploy
your own registry).
Once this fluentd container is running, and assuming you expose
its port 24224/tcp somehow, you can send container logs to fluentd
by using Docker's fluentd logging driver.
You can bring up the whole stack with the associated Compose file.
With Swarm mode, you can bring up the whole stack like this:
```bash
docker network create efk --driver overlay
docker service create --network efk \
--name elasticsearch elasticsearch:2
docker service create --network efk --publish 5601:5601 \
--name kibana kibana
docker service create --network efk --publish 24224:24224 \
--name fluentd jpetazzo/fluentd:v0.1
```
And then, from any node on your cluster, you can send logs to fluentd like this:
```bash
docker run --log-driver fluentd --log-opt fluentd-address=localhost:24224 \
alpine echo ohai there
```

efk/docker-compose.yml Normal file
View File

@@ -0,0 +1,24 @@
version: "2"
services:
elasticsearch:
image: elasticsearch
# If you need to access ES directly, just uncomment those lines.
#ports:
# - "9200:9200"
# - "9300:9300"
fluentd:
#build: fluentd
image: jpetazzo/fluentd:v0.1
ports:
- "127.0.0.1:24224:24224"
depends_on:
- elasticsearch
kibana:
image: kibana
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200

efk/fluentd/Dockerfile Normal file
View File

@@ -0,0 +1,5 @@
FROM ruby
RUN gem install fluentd
RUN gem install fluent-plugin-elasticsearch
COPY fluentd.conf /fluentd.conf
CMD ["fluentd", "-c", "/fluentd.conf"]

efk/fluentd/fluentd.conf Normal file
View File

@@ -0,0 +1,12 @@
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
<match **>
@type elasticsearch
host elasticsearch
logstash_format true
flush_interval 1
</match>

elk/docker-compose.yml Normal file
View File

@@ -0,0 +1,55 @@
version: "2"
services:
elasticsearch:
image: elasticsearch
# If you need to access ES directly, just uncomment those lines.
#ports:
# - "9200:9200"
# - "9300:9300"
logstash:
image: logstash
command: |
-e '
input {
# Default port is 12201/udp
gelf { }
# This generates one test event per minute.
# It is great for debugging, but you might
# want to remove it in production.
heartbeat { }
}
# The following filter is a hack!
# The "de_dot" filter would be better, but it
# is not pre-installed with logstash by default.
filter {
ruby {
code => "
event.to_hash.keys.each { |k| event[ k.gsub('"'.'"','"'_'"') ] = event.remove(k) if k.include?'"'.'"' }
"
}
}
output {
elasticsearch {
hosts => ["elasticsearch:9200"]
}
# This will output every message on stdout.
# It is great when testing your setup, but in
# production, it will probably cause problems;
# either by filling up your disks, or worse,
# by creating logging loops! BEWARE!
stdout {
codec => rubydebug
}
}'
ports:
- "12201:12201/udp"
kibana:
image: kibana
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200

elk/logstash.conf Normal file
View File

@@ -0,0 +1,34 @@
input {
# Listens on 514/udp and 514/tcp by default; change that to a non-privileged port
syslog { port => 51415 }
# Default port is 12201/udp
gelf { }
# This generates one test event per minute.
# It is great for debugging, but you might
# want to remove it in production.
heartbeat { }
}
# The following filter is a hack!
# The "de_dot" filter would be better, but it
# is not pre-installed with logstash by default.
filter {
ruby {
code => "
event.to_hash.keys.each { |k| event[ k.gsub('.','_') ] = event.remove(k) if k.include?'.' }
"
}
}
output {
elasticsearch {
hosts => ["elasticsearch:9200"]
}
# This will output every message on stdout.
# It is great when testing your setup, but in
# production, it will probably cause problems;
# either by filling up your disks, or worse,
# by creating logging loops! BEWARE!
stdout {
codec => rubydebug
}
}
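The ruby filter above renames event keys in a dense one-liner; a minimal Python sketch of the same dot-to-underscore rewrite may make the intent clearer (a hypothetical helper for illustration only — Logstash itself runs the ruby code):

```python
# Elasticsearch 2.x rejects field names containing dots, so the
# filter renames such keys to use underscores instead.
def de_dot(event):
    # Copy the key list first, since we mutate the dict while iterating.
    for key in list(event):
        if "." in key:
            event[key.replace(".", "_")] = event.pop(key)
    return event

print(de_dot({"container.name": "web", "message": "ohai"}))
```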

View File

@@ -1,16 +0,0 @@
#!/bin/sh
# Some tools will choke on the YAML files generated by PyYAML;
# in particular on a section like this one:
#
# service:
# ports:
# - 8000:5000
#
# This script adds two spaces in front of the dash in those files.
# Warning: it is a hack, and probably won't work on some YAML files.
[ -f "$COMPOSE_FILE" ] || {
echo "Cannot find COMPOSE_FILE"
exit 1
}
sed -i 's/^  -/    -/' "$COMPOSE_FILE"
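The same indentation fix can be sketched in Python, which makes the intent of the sed one-liner more explicit (a standalone illustration, not part of the repo):

```python
import re

# PyYAML emits sequence items flush with their parent key; some tools
# want them indented. Add two spaces in front of each top-level "  -".
compose_yaml = "service:\n  ports:\n  - 8000:5000\n"
fixed = re.sub(r"^  -", "    -", compose_yaml, flags=re.MULTILINE)
print(fixed)
```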

View File

@@ -1,62 +0,0 @@
#!/usr/bin/env python
import os
import sys
import yaml
# You can specify up to 2 parameters:
# - with 0 parameter, we will look for the COMPOSE_FILE env var
# - with 1 parameter, the same file will be used for in and out
# - with 2 parameters, the 1st is the input, the 2nd the output
if len(sys.argv)==1:
if "COMPOSE_FILE" not in os.environ:
print("Please specify 1 or 2 file names, or set COMPOSE_FILE.")
sys.exit(1)
compose_file = os.environ["COMPOSE_FILE"]
if compose_file == "docker-compose.yml":
print("Refusing to operate directly on docker-compose.yml.")
print("Specify it on the command-line if that's what you want.")
sys.exit(1)
input_file, output_file = compose_file, compose_file
elif len(sys.argv)==2:
input_file, output_file = sys.argv[1], sys.argv[1]
elif len(sys.argv)==3:
input_file, output_file = sys.argv[1], sys.argv[2]
else:
print("Too many arguments. Please specify up to 2 file names.")
sys.exit(1)
stack = yaml.load(open(input_file))
# The ambassadors need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))
def generate_local_addr():
last_byte = 2
while last_byte<255:
yield "127.127.0.{}".format(last_byte)
last_byte += 1
for service_name, service in stack.items():
if "links" in service:
for link, local_addr in zip(service["links"], generate_local_addr()):
if link not in ports:
print("Skipping link {} in service {} "
"(no port mapping defined). "
"Your code will probably break."
.format(link, service_name))
continue
if "extra_hosts" not in service:
service["extra_hosts"] = {}
service["extra_hosts"][link] = local_addr
del service["links"]
if "ports" in service:
del service["ports"]
if "volumes" in service:
del service["volumes"]
if service_name in ports:
service["ports"] = [ ports[service_name] ]
yaml.safe_dump(stack, open(output_file, "w"), default_flow_style=False)
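The script reads the service ports from a `ports.yml` file in the current directory; a minimal example could look like this (the service names and port mappings here are made up):

```yaml
# Hypothetical ports.yml: maps each linked service to its Compose
# port entry, so the ambassador setup knows which port to expose.
webui: "8000:80"
redis: 6379
```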

netlify.toml Normal file
View File

@@ -0,0 +1,5 @@
[build]
base = "slides"
publish = "slides"
command = "./build.sh once"

prepare-local/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
.vagrant

prepare-local/README.md Normal file
View File

@@ -0,0 +1,87 @@
DOCKER ORCHESTRATION (local environment instructions)
=====================================================
Instead of running this training on a cloud provider, you can simulate the
infrastructure locally. These instructions apply to **PART ONE** of the
workshop.
## 1. Prerequisites
VirtualBox, Vagrant, and Ansible:
- VirtualBox: https://www.virtualbox.org/wiki/Downloads
- Vagrant: https://www.vagrantup.com/downloads.html
- install vagrant-vbguest plugin (https://github.com/dotless-de/vagrant-vbguest)
- Ansible:
- install Ansible's prerequisites:
$ sudo pip install paramiko PyYAML Jinja2 httplib2 six pycrypto
- clone the Ansible repository and checkout to a stable version
(don't forget the `--recursive` argument when cloning!):
$ git clone --recursive https://github.com/ansible/ansible.git
$ cd ansible
$ git checkout stable-2.0.0.1
$ git submodule update
- source the setup script to make Ansible available on this terminal session:
$ source path/to/your-ansible-clone/hacking/env-setup
- you need to repeat the last step every time you open a new terminal session
and want to use any Ansible command (but you'll probably only need to run
it once).
## 2. Preparing the environment
Run the following commands:
$ vagrant up
$ chmod 600 private-key
$ ansible-playbook provisioning.yml
And that's it! Now you should be able to SSH into `node1` using:
$ ssh vagrant@10.10.10.10 -i private-key
These are the default IP addresses for the nodes:
10.10.10.10 node1
10.10.10.20 node2
10.10.10.30 node3
10.10.10.40 node4
10.10.10.50 node5
The source code of this repo will be mounted at `~/orchestration-workshop`
(only on `node1`), so you can edit the code externally and the changes
will be reflected inside the instance.
## 3. Possible problems and solutions
- Depending on the Vagrant version, `sudo apt-get install bsdtar` may be needed
- If you get strange Ansible errors about dependencies, try to check your pip
version with `pip --version`. The current version is 8.1.1. If your pip is
older than this, upgrade it with `sudo pip install --upgrade pip`, restart
your terminal session and install the Ansible prerequisites again.
- If the IPs `10.10.10.[10-50]` are already taken on your machine, you can
change them to other values in the `vagrant.yml` and `inventory` files in
this directory. Make sure you pick a set of IPs inside the same subnet.
- If you suspend your computer, the simulated private network may stop
working. This is a known VirtualBox problem. To fix it, reload all the VMs
with `vagrant reload`.
- If you get an SSH error saying `WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED`,
it means that some other host previously used one of the
IP addresses we use here. To solve this, remove the old entries in your
`known_hosts` file with:
$ ssh-keygen -f "~/.ssh/known_hosts" -R 10.10.10.10 -R 10.10.10.20 -R 10.10.10.30 -R 10.10.10.40 -R 10.10.10.50

prepare-local/Vagrantfile vendored Normal file
View File

@@ -0,0 +1,78 @@
# vim: set filetype=ruby:
require 'yaml'
require 'vagrant-vbguest' unless defined? VagrantVbguest::Config
configuration_file = File.expand_path("vagrant.yml", File.dirname(__FILE__))
configuration = YAML.load_file configuration_file
settings = configuration['vagrant']
instances = configuration['instances']
$enable_serial_logging = false
Vagrant.configure('2') do |config|
def check_dependency(plugin_name)
unless Vagrant.has_plugin?(plugin_name)
puts "Vagrant [" + plugin_name + "] is required but is not installed\n" +
"please check if you have the plugin with the following command:\n" +
" $ vagrant plugin list\n" +
"If needed install the plugin:\n" +
" $ vagrant plugin install " + plugin_name + "\n"
abort "Missing [" + plugin_name + "] plugin\n\n"
end
end
check_dependency 'vagrant-vbguest'
config.vm.box = settings['default_box']
# config.vm.box_url = settings['default_box_url']
config.ssh.forward_agent = true
config.ssh.insert_key = settings['ssh_insert_key']
config.vm.box_check_update = true
config.vbguest.auto_update = false
instances.each do |instance|
next if instance.has_key? 'deactivated' and instance['deactivated']
config.vm.define instance['hostname'] do |guest|
if instance.has_key? 'box'
guest.vm.box = instance['box']
end
if instance.has_key? 'box_url'
guest.vm.box_url = instance['box_url']
end
if instance.has_key? 'private_ip'
guest.vm.network 'private_network', ip: instance['private_ip']
end
guest.vm.provider 'virtualbox' do |vb|
if instance.has_key? 'cpu_execution_cap'
vb.customize ["modifyvm", :id, "--cpuexecutioncap", instance['cpu_execution_cap'].to_s]
end
vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
vb.customize ["modifyvm", :id, "--nictype2", "virtio"]
if instance.has_key? 'memory'
vb.memory = instance['memory']
end
if instance.has_key? 'cores'
vb.cpus = instance['cores']
end
end
if instance.has_key? 'mounts'
instance['mounts'].each do |mount|
guest.vm.synced_folder mount['host_path'], mount['guest_path'], owner: mount['owner'], group: mount['group']
end
end
end
end
end

prepare-local/ansible.cfg Normal file
View File

@@ -0,0 +1,10 @@
[defaults]
nocows = True
inventory = inventory
remote_user = vagrant
private_key_file = private-key
host_key_checking = False
deprecation_warnings = False
[ssh_connection]
ssh_args = -o StrictHostKeyChecking=no

prepare-local/inventory Normal file
View File

@@ -0,0 +1,11 @@
node1 ansible_ssh_host=10.10.10.10
node2 ansible_ssh_host=10.10.10.20
node3 ansible_ssh_host=10.10.10.30
node4 ansible_ssh_host=10.10.10.40
node5 ansible_ssh_host=10.10.10.50
[nodes]
node[1:5]
[all:children]
nodes

prepare-local/private-key Normal file
View File

@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzI
w+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoP
kcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2
hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NO
Td0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcW
yLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQIBIwKCAQEA4iqWPJXtzZA68mKd
ELs4jJsdyky+ewdZeNds5tjcnHU5zUYE25K+ffJED9qUWICcLZDc81TGWjHyAqD1
Bw7XpgUwFgeUJwUlzQurAv+/ySnxiwuaGJfhFM1CaQHzfXphgVml+fZUvnJUTvzf
TK2Lg6EdbUE9TarUlBf/xPfuEhMSlIE5keb/Zz3/LUlRg8yDqz5w+QWVJ4utnKnK
iqwZN0mwpwU7YSyJhlT4YV1F3n4YjLswM5wJs2oqm0jssQu/BT0tyEXNDYBLEF4A
sClaWuSJ2kjq7KhrrYXzagqhnSei9ODYFShJu8UWVec3Ihb5ZXlzO6vdNQ1J9Xsf
4m+2ywKBgQD6qFxx/Rv9CNN96l/4rb14HKirC2o/orApiHmHDsURs5rUKDx0f9iP
cXN7S1uePXuJRK/5hsubaOCx3Owd2u9gD6Oq0CsMkE4CUSiJcYrMANtx54cGH7Rk
EjFZxK8xAv1ldELEyxrFqkbE4BKd8QOt414qjvTGyAK+OLD3M2QdCQKBgQDtx8pN
CAxR7yhHbIWT1AH66+XWN8bXq7l3RO/ukeaci98JfkbkxURZhtxV/HHuvUhnPLdX
3TwygPBYZFNo4pzVEhzWoTtnEtrFueKxyc3+LjZpuo+mBlQ6ORtfgkr9gBVphXZG
YEzkCD3lVdl8L4cw9BVpKrJCs1c5taGjDgdInQKBgHm/fVvv96bJxc9x1tffXAcj
3OVdUN0UgXNCSaf/3A/phbeBQe9xS+3mpc4r6qvx+iy69mNBeNZ0xOitIjpjBo2+
dBEjSBwLk5q5tJqHmy/jKMJL4n9ROlx93XS+njxgibTvU6Fp9w+NOFD/HvxB3Tcz
6+jJF85D5BNAG3DBMKBjAoGBAOAxZvgsKN+JuENXsST7F89Tck2iTcQIT8g5rwWC
P9Vt74yboe2kDT531w8+egz7nAmRBKNM751U/95P9t88EDacDI/Z2OwnuFQHCPDF
llYOUI+SpLJ6/vURRbHSnnn8a/XG+nzedGH5JGqEJNQsz+xT2axM0/W/CRknmGaJ
kda/AoGANWrLCz708y7VYgAtW2Uf1DPOIYMdvo6fxIB5i9ZfISgcJ/bbCUkFrhoH
+vq/5CIWxCPp0f85R4qxxQ5ihxJ0YDQT9Jpx4TMss4PSavPaBH3RXow5Ohe+bYoQ
NE5OgEXk2wVfZczCZpigBKbKZHNYcelXtTt/nP3rsCuGcM4h53s=
-----END RSA PRIVATE KEY-----

View File

@@ -0,0 +1,132 @@
---
- hosts: nodes
sudo: true
vars_files:
- vagrant.yml
tasks:
- name: clean up the home folder
file:
path: /home/vagrant/{{ item }}
state: absent
with_items:
- base.sh
- chef.sh
- cleanup.sh
- cleanup-virtualbox.sh
- puppetlabs-release-wheezy.deb
- puppet.sh
- ruby.sh
- vagrant.sh
- virtualbox.sh
- zerodisk.sh
- name: installing dependencies
apt:
name: apt-transport-https,ca-certificates,python-pip,tmux
state: present
update_cache: true
- name: fetching docker repo key
apt_key:
keyserver: hkp://p80.pool.sks-keyservers.net:80
id: 58118E89F3A912897C070ADBF76221572C52609D
- name: adding package repos
apt_repository:
repo: "{{ item }}"
state: present
with_items:
- deb https://apt.dockerproject.org/repo ubuntu-trusty main
- name: installing docker
apt:
name: docker-engine
state: present
update_cache: true
- name: adding user vagrant to group docker
user:
name: vagrant
groups: docker
append: yes
- name: making docker daemon listen to port 55555
lineinfile:
dest: /etc/default/docker
line: DOCKER_OPTS="--host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:55555"
regexp: '^#?DOCKER_OPTS=.*$'
state: present
register: docker_opts
- name: restarting docker daemon, if needed
service:
name: docker
state: restarted
when: docker_opts is defined and docker_opts.changed
- name: performing pip autoupgrade
pip:
name: pip
state: latest
- name: installing virtualenv
pip:
name: virtualenv
state: latest
- name: Install Docker Compose via PIP
pip: name=docker-compose
- name: making docker-compose executable
file:
path: /usr/local/bin/docker-compose
state: file
mode: 0755
owner: vagrant
group: docker
- name: building the /etc/hosts file with all nodes
lineinfile:
dest: /etc/hosts
line: "{{ item.private_ip }} {{ item.hostname }}"
regexp: "^{{ item.private_ip }} {{ item.hostname }}$"
state: present
with_items: "{{ instances }}"
- name: copying the ssh key to the nodes
copy:
src: private-key
dest: /home/vagrant/private-key
mode: 0600
group: root
owner: vagrant
- name: copying ssh configuration
copy:
src: ssh-config
dest: /home/vagrant/.ssh/config
mode: 0600
group: root
owner: vagrant
- name: fixing the hostname
hostname:
name: "{{ inventory_hostname }}"
- name: adjusting the /etc/hosts to the new hostname
lineinfile:
dest: /etc/hosts
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
owner: root
group: root
mode: 0644
with_items:
- regexp: '^127\.0\.0\.1'
line: "127.0.0.1 localhost {{ inventory_hostname }}"
- regexp: '^127\.0\.1\.1'
line: "127.0.1.1 {{ inventory_hostname }}"

prepare-local/ssh-config Normal file
View File

@@ -0,0 +1,3 @@
Host node*
IdentityFile ~/private-key
StrictHostKeyChecking no

prepare-local/vagrant.yml Normal file
View File

@@ -0,0 +1,42 @@
---
vagrant:
default_box: ubuntu/trusty64
default_box_check_update: true
ssh_insert_key: false
min_memory: 256
min_cores: 1
instances:
- hostname: node1
private_ip: 10.10.10.10
memory: 1512
cores: 1
mounts:
- host_path: ../
guest_path: /home/vagrant/orchestration-workshop
owner: vagrant
group: vagrant
- hostname: node2
private_ip: 10.10.10.20
memory: 512
cores: 1
- hostname: node3
private_ip: 10.10.10.30
memory: 512
cores: 1
- hostname: node4
private_ip: 10.10.10.40
memory: 512
cores: 1
- hostname: node5
private_ip: 10.10.10.50
memory: 512
cores: 1

prepare-machine/README.md Normal file
View File

@@ -0,0 +1,242 @@
# Setting up your own cluster
If you want to go through this orchestration workshop on your own,
you will need a cluster of Docker nodes.
These instructions will walk you through the required steps,
using [Docker Machine](https://docs.docker.com/machine/) to
create the nodes.
## Requirements
You need Docker Machine. To check if it is installed, try to
run the following command:
```bash
$ docker-machine -v
docker-machine version 0.8.2, build e18a919
```
If you see a Docker Machine version number, perfect! Otherwise,
you need to install it; either as part of the Docker Toolbox,
or as a stand-alone tool. See [Docker Machine installation docs](
https://docs.docker.com/machine/install-machine/) for details.
You also need either credentials for a cloud provider, or a
local VirtualBox or VMware installation (or anything supported
by Docker Machine, really).
## Discrepancies with official environment
The resulting environment will be slightly different from the
one that we provision for people attending the workshop at
conferences and similar events, and you will have to adapt a
few things.
We try to list all the differences here.
### User name
The official environment uses user `docker`. If you use
Docker Machine, the user name will probably be different.
### Node aliases
In the official environment, aliases are seeded in
`/etc/hosts`, allowing you to resolve node IP addresses
with the aliases `node1`, `node2`, etc.; if you use
Docker Machine, you will have to look up the IP addresses
with the `docker-machine ip nodeX` command instead.
### SSH keys
In the official environment, you can SSH from one node
to another without having to provide a password,
thanks to pre-generated (and pre-copied) SSH keys.
If you use Docker Machine, you will have to use
`docker-machine ssh` from your machine instead.
### Machine and Compose
In the official environment, Docker Machine and Docker
Compose are installed on your nodes. If you use Docker
Machine you will have to install at least Docker Compose.
The easiest way to install Compose (verified to work
with the EC2 and VirtualBox drivers, and probably others
as well) is to use `docker-machine ssh` to connect
to your node, then run the following command:
```bash
sudo curl -L \
https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
Note that it is not necessary (or even useful) to
install Docker Machine on your nodes, since if you're
following that guide, you already have Machine on
your local computer. ☺
### IP addresses
In some environments, your nodes will have multiple
IP addresses. This is the case with VirtualBox, for
instance. At any point in the workshop, if you need
a node's IP address, you should use the address
given by the `docker-machine ip` command.
## Creating your nodes with Docker Machine
Here are some instructions for various Machine Drivers.
### AWS EC2
You have to retrieve your AWS access key and secret access key,
and set the following environment variables:
```bash
export MACHINE_DRIVER=amazonec2
export AWS_ACCESS_KEY_ID=AKI...
export AWS_SECRET_ACCESS_KEY=...
```
Optionally, you can also set `AWS_DEFAULT_REGION` to the region
closest to you. See [AWS documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions)
for the list of available regions and their codes.
For instance, if you are on the US West Coast, I recommend
that you set `AWS_DEFAULT_REGION` to `us-west-2`; if you are
in Europe, to `eu-central-1` (except in UK and Ireland where
you probably want `eu-west-1`), etc.
If you don't specify anything, your nodes will be in `us-east-1`.
You can also set `AWS_INSTANCE_TYPE` if you want bigger or smaller
instances than `t2.micro`. For the official workshops, we use
`m3.large`, but remember: the bigger the instance, the more
expensive it gets, obviously!
After setting these variables, run the following command:
```bash
for N in $(seq 1 5); do
docker-machine create node$N
docker-machine ssh node$N sudo usermod -aG docker ubuntu
done
```
And after a few minutes, your five nodes will be ready. To log
into a node, use `docker-machine ssh nodeX`.
By default, Docker Machine places the created nodes in a
security group aptly named `docker-machine`. By default, this
group is pretty restrictive, and will only let you connect
to the Docker API and SSH. For the purpose of the workshop,
you will need to open that security group to normal traffic.
You can do that through the AWS EC2 console, or with the
following CLI command:
```bash
aws ec2 authorize-security-group-ingress --group-name docker-machine --protocol -1 --cidr 0.0.0.0/0
```
If Docker Machine fails, complaining that it cannot find
the default VPC or subnet, this could be because you have
an "old" EC2 account (created before the introduction of EC2
VPC) and your account has no default VPC. In that case,
you will have to create a VPC, a subnet in that VPC,
and use the corresponding Machine flags (`--amazonec2-vpc-id`
and `--amazonec2-subnet-id`) or environment variables
(`AWS_VPC_ID` and `AWS_SUBNET_ID`) to tell Machine what to use.
You will get similar error messages if you *have* set these
flags (or environment variables) but the VPC (or subnets)
indicated do not exist. This can happen if you frequently
switch between different EC2 accounts, and forget that you
have set the `AWS_VPC_ID` or `AWS_SUBNET_ID`.
### Microsoft Azure
You have to retrieve your subscription ID, and set the following environment
variables:
```bash
export MACHINE_DRIVER=azure
export AZURE_SUBSCRIPTION_ID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
```
Additionally, you can set `AZURE_LOCATION` to an Azure datacenter
close to you. By default, it will pick "West US". You can see
the available regions [on Azure's website](
https://azure.microsoft.com/en-us/regions/services/).
For instance, if you want to deploy on the US East Coast,
set `AZURE_LOCATION` to `East US` or `eastus` (capitalization
and spacing shouldn't matter; just use the names shown on the
map or table on Azure's website).
Then run the following command:
```bash
for N in $(seq 1 5); do
docker-machine create node$N
docker-machine ssh node$N sudo usermod -aG docker docker-user
done
```
The CLI will give you instructions to authenticate on the Azure portal,
and once you've done that, it will create your VMs.
You will log into your nodes with `docker-machine ssh nodeX`.
By default, the firewall only allows access to the Docker API
and SSH ports. To open access to other ports, you can use the
following command:
```bash
for N in $(seq 1 5); do
az network nsg rule create -g docker-machine --name AllowAny --nsg-name node$N-firewall \
--access allow --direction inbound --protocol '*' \
--source-address-prefix '*' --source-port-range '*' \
--destination-address-prefix '*' --destination-port-range '*'
done
```
(The command takes a while. Be patient.)
### Local VirtualBox or VMware Fusion
If you want to run with local VMs, set the environment variable
`MACHINE_DRIVER` to `virtualbox` or `vmwarefusion` and create your nodes:
```bash
export MACHINE_DRIVER=virtualbox
for N in $(seq 1 5); do
docker-machine create node$N
done
```
### Terminating instances
When you're done, if you started your instance on a public
cloud (or anywhere where it costs you money!) you will want to
terminate (destroy) them. This can be done with the following
command:
```bash
for N in $(seq 1 5); do
docker-machine rm -f node$N
done
```

prepare-vms/Dockerfile Normal file
View File

@@ -0,0 +1,30 @@
FROM debian:jessie
MAINTAINER AJ Bowen <aj@soulshake.net>
RUN apt-get update && apt-get install -y \
bsdmainutils \
ca-certificates \
curl \
groff \
jq \
less \
man \
pssh \
python \
python-docutils \
python-pip \
ssh \
wkhtmltopdf \
xvfb \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN pip install \
awscli \
jinja2 \
pdfkit \
PyYAML \
termcolor
RUN mv $(which wkhtmltopdf) $(which wkhtmltopdf).real
COPY lib/wkhtmltopdf /usr/local/bin/wkhtmltopdf

prepare-vms/README.md Normal file
View File

@@ -0,0 +1,240 @@
# Trainer tools to create and prepare VMs for Docker workshops on AWS or Azure
## Prerequisites
- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this
And if you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
- [jinja2](https://pypi.python.org/pypi/Jinja2) (on a Mac: `brew install jinja2`)
## General Workflow
- fork/clone repo
- set required environment variables
- create your own setting file from `settings/example.yaml`
- if necessary, increase allowed open files: `ulimit -Sn 10000`
- run `./workshopctl` commands to create instances, install Docker, set up each user's environment on node1, and perform other management tasks
- run the `./workshopctl cards` command to generate a PDF for printing handouts with each user's host IPs and login info
## Clone/Fork the Repo, and Build the Tools Image
The Docker Compose file here is used to build an image with all the dependencies to run the `./workshopctl` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](workshopctl#L5).
$ git clone https://github.com/jpetazzo/orchestration-workshop.git
$ cd orchestration-workshop/prepare-vms
$ docker-compose build
## Preparing to Run `./workshopctl`
### Required AWS Permissions/Info
- The initial assumption is that you're using a root account. If you'd like to use an IAM user instead, it will need `AmazonEC2FullAccess` and `IAMReadOnlyAccess`.
- Using a non-default VPC or Security Group isn't supported out of the box yet, so you will have to customize `lib/commands.sh` if you want to change that.
- These instances will be assigned the default VPC Security Group, which does not open any ports from the Internet by default. So you'll need to add inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./workshopctl opensg`, which opens up all ports.
### Required Environment Variables
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
If you're not using AWS, set these to placeholder values:
```
export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="foo"
export AWS_DEFAULT_REGION="foo"
```
If you don't have the `aws` CLI installed, you will get a warning that it's a missing dependency. If you're not using AWS you can ignore this.
### Update/copy `settings/example.yaml`
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.
./workshopctl cards 2016-09-28-00-33-bret settings/orchestration.yaml
## `./workshopctl` Usage
```
workshopctl - the orchestration workshop swiss army knife
Commands:
ami Show the AMI that will be used for deployment
amis List Ubuntu AMIs in the current region
build Build the Docker image to run this program in a container
cards Generate ready-to-print cards for a batch of VMs
deploy Install Docker on a bunch of running VMs
ec2quotas Check our EC2 quotas (max instances)
help Show available commands
ids List the instance IDs belonging to a given tag or token
ips List the IP addresses of the VMs for a given tag or token
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all nodes are reporting as Ready
list List available batches in the current region
opensg Open the default security group to ALL ingress traffic
pull_images Pre-pull a bunch of Docker images
retag Apply a new tag to a batch of VMs
start Start a batch of VMs
status List instance status for a given batch
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
test Run tests (pre-flight checks) on a batch of VMs
wrap Run this program in a container
```
### Summary of What `./workshopctl` Does For You
- Manages batches of AWS instances for you, without needing the AWS CLI or GUI.
- Can manage multiple "tags" or groups of instances, which are tracked in `prepare-vms/tags/`
- Can also create PDF/HTML files for printing each student's instance IPs and login info.
- The `./workshopctl` script can be executed directly.
- It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms).
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard-coded.
### Example Steps to Launch a Batch of AWS Instances for a Workshop
- Run `./workshopctl start N` to create `N` EC2 instances
- Your local SSH key will be synced to instances under `ubuntu` user
- AWS instances will be created and tagged based on date, and their IPs stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires a good connection to run all the parallel SSH connections, up to 100 in parallel (ProTip: create a dedicated management instance in the same AWS region and run all these utilities from there)
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
### Example Steps to Launch Azure Instances
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account
- Customize `azuredeploy.parameters.json`
- Required:
- Provide the SSH public key you plan to use for instance configuration
- Optional:
- Choose a name for the workshop (default is "workshop")
- Choose the number of instances (default is 3)
- Customize the desired instance size (default is Standard_D1_v2)
- Launch instances with your chosen resource group name and your preferred region; the examples are "workshop" and "eastus":
```
az group create --name workshop --location eastus
az group deployment create --resource-group workshop --template-file azuredeploy.json --parameters @azuredeploy.parameters.json
```
The `az group deployment create` command can take several minutes and will only say `- Running ..` until it completes, unless you increase the verbosity with `--verbose` or `--debug`.
To display the IPs of the instances you've launched:
```
az vm list-ip-addresses --resource-group workshop --output table
```
If you want to put the IPs into `prepare-vms/tags/<tag>/ips.txt` for a tag of "myworkshop":
1) If you haven't yet installed `jq` and/or created your event's tags directory in `prepare-vms`:
```
brew install jq
mkdir -p tags/myworkshop
```
2) And then generate the IP list:
```
az vm list-ip-addresses --resource-group workshop --output json | jq -r '.[].virtualMachine.network.publicIpAddresses[].ipAddress' > tags/myworkshop/ips.txt
```
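If you know how many instances you expect, it's worth sanity-checking the generated file before printing cards. The helper below is a hypothetical addition, not part of the repo's tooling:

```shell
# check_ip_count FILE EXPECTED
# Fails (non-zero exit) unless FILE contains exactly EXPECTED lines.
check_ip_count() {
    actual=$(wc -l < "$1") || return 1
    if [ "$actual" -ne "$2" ]; then
        echo "expected $2 IPs in $1, got $actual" >&2
        return 1
    fi
}

# Usage: check_ip_count tags/myworkshop/ips.txt 3
```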
After the workshop is over, remove the instances:
```
az group delete --name workshop
```
### Example Steps to Configure Instances from a non-AWS Source
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to ssh into them.
- Set placeholder values for [AWS environment variable settings](#required-environment-variables).
- Choose a tag. It could be an event name, datestamp, etc. Ensure you have created a directory for your tag: `prepare-vms/tags/<tag>/`
- If you have not already generated a file with the IPs to be configured:
- The file should be named `prepare-vms/tags/<tag>/ips.txt`
- Format is one IP per line, no other info needed.
- Ensure the settings file is as desired (especially the number of nodes): `prepare-vms/settings/kube101.yaml`
- For a tag called `myworkshop`, configure instances: `workshopctl deploy myworkshop settings/kube101.yaml`
- Optionally, configure Kubernetes clusters of the size in the settings: `workshopctl kube myworkshop`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: `workshopctl kubetest myworkshop`
- Generate cards to print and hand out: `workshopctl cards myworkshop settings/kube101.yaml`
- Print the cards file: `prepare-vms/tags/myworkshop/ips.html`
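Since a stray blank or hostname line in `ips.txt` would trickle into every later step, a quick format check can save a debugging session. This validator is a sketch, not part of the tooling:

```shell
# check_ips_format FILE
# Succeeds only if every line of FILE is a dotted-quad IPv4 address.
check_ips_format() {
    ! grep -qvE '^([0-9]{1,3}\.){3}[0-9]{1,3}$' "$1"
}

# Usage: check_ips_format tags/myworkshop/ips.txt && workshopctl deploy myworkshop settings/kube101.yaml
```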
## Other Tools
### Deploying your SSH key to all the machines
- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
## Even More Details
#### Sync of SSH keys
When the `start` command is run, your local RSA SSH public key will be added to your AWS EC2 keychain.
To see which local key will be uploaded, run `ssh-add -l | grep RSA`.
#### Instance + tag creation
The number of VMs specified on the command line will be started, with an automatically generated tag (timestamp + your username).
Your SSH key will be added to the `authorized_keys` of the ubuntu user.
#### Creation of ./$TAG/ directory and contents
Following the creation of the VMs, a text file will be created containing a list of their IPs.
This `ips.txt` file will be created in the `$TAG/` directory, and a symlink to it will be placed in the working directory of the script.
If you create new VMs, the symlinked file will be overwritten.
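That overwrite behavior is just how `ln -sf` works; the following standalone illustration (run in a temporary directory, not repo code) shows the symlink being repointed when a second batch is created:

```shell
# Simulate two batches of VMs, each writing its own ips.txt.
cd "$(mktemp -d)"
mkdir -p tags/2018-04-22-old tags/2018-04-22-new
echo 1.1.1.1 > tags/2018-04-22-old/ips.txt
echo 2.2.2.2 > tags/2018-04-22-new/ips.txt

ln -sf tags/2018-04-22-old/ips.txt ips.txt   # first batch: symlink created
ln -sf tags/2018-04-22-new/ips.txt ips.txt   # second batch: symlink overwritten

readlink ips.txt   # now points at tags/2018-04-22-new/ips.txt
```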
#### Deployment
Instances can be deployed manually using the `deploy` command:
```
$ ./workshopctl deploy TAG settings/somefile.yaml
```
The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.
#### Pre-pull images
```
$ ./workshopctl pull-images TAG
```
#### Generate cards
```
$ ./workshopctl cards TAG settings/somefile.yaml
```
If you want to generate both HTML and PDF cards, install [wkhtmltopdf](https://wkhtmltopdf.org/downloads.html). Without it, you will get a warning about the missing dependency and only HTML cards will be generated; if you only plan to print the HTML cards, you can safely ignore the warning.
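To find out ahead of time which flavor you'll get, you can probe for the dependency with POSIX `command -v` (a trivial sketch, not repo code):

```shell
# Report whether PDF generation is available before running "cards".
if command -v wkhtmltopdf >/dev/null 2>&1; then
    echo "wkhtmltopdf found: HTML and PDF cards will be generated"
else
    echo "wkhtmltopdf not found: HTML cards only"
fi
```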
#### List tags
```
$ ./workshopctl list
```
#### List VMs
```
$ ./workshopctl list TAG
```
This will print a human-friendly list containing some information about each instance.
#### Stop and destroy VMs
```
$ ./workshopctl stop TAG
```
## ToDo
- Don't write to bash history in system() in postprep
- compose, etc version inconsistent (int vs str)

`prepare-vms/azuredeploy.json`:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"workshopName": {
"type": "string",
"defaultValue": "workshop",
"metadata": {
"description": "Workshop name."
}
},
"vmPrefix": {
"type": "string",
"defaultValue": "node",
"metadata": {
"description": "Prefix for VM names."
}
},
"numberOfInstances": {
"type": "int",
"defaultValue": 3,
"metadata": {
"description": "Number of VMs to create."
}
},
"adminUsername": {
"type": "string",
"defaultValue": "ubuntu",
"metadata": {
"description": "Admin username for VMs."
}
},
"sshKeyData": {
"type": "string",
"defaultValue": "",
"metadata": {
"description": "SSH rsa public key file as a string."
}
},
"imagePublisher": {
"type": "string",
"defaultValue": "Canonical",
"metadata": {
"description": "OS image publisher; default Canonical."
}
},
"imageOffer": {
"type": "string",
"defaultValue": "UbuntuServer",
"metadata": {
"description": "The name of the image offer. The default is UbuntuServer"
}
},
"imageSKU": {
"type": "string",
"defaultValue": "16.04-LTS",
"metadata": {
"description": "Version of the image. The default is 16.04-LTS"
}
},
"vmSize": {
"type": "string",
"defaultValue": "Standard_D1_v2",
"metadata": {
"description": "VM Size."
}
}
},
"variables": {
"vnetID": "[resourceId('Microsoft.Network/virtualNetworks',variables('virtualNetworkName'))]",
"subnet1Ref": "[concat(variables('vnetID'),'/subnets/',variables('subnet1Name'))]",
"vmName": "[parameters('vmPrefix')]",
"sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]",
"publicIPAddressName": "PublicIP",
"publicIPAddressType": "Dynamic",
"virtualNetworkName": "MyVNET",
"netSecurityGroup": "MyNSG",
"addressPrefix": "10.0.0.0/16",
"subnet1Name": "subnet-1",
"subnet1Prefix": "10.0.0.0/24",
"nicName": "myVMNic"
},
"resources": [
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/publicIPAddresses",
"name": "[concat(variables('publicIPAddressName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "publicIPLoop",
"count": "[parameters('numberOfInstances')]"
},
"properties": {
"publicIPAllocationMethod": "[variables('publicIPAddressType')]"
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/virtualNetworks",
"name": "[variables('virtualNetworkName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Network/networkSecurityGroups/', variables('netSecurityGroup'))]"
],
"properties": {
"addressSpace": {
"addressPrefixes": [
"[variables('addressPrefix')]"
]
},
"subnets": [
{
"name": "[variables('subnet1Name')]",
"properties": {
"addressPrefix": "[variables('subnet1Prefix')]",
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('netSecurityGroup'))]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkInterfaces",
"name": "[concat(variables('nicName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "nicLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'),copyIndex(1))]",
"[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'), copyIndex(1)))]"
},
"subnet": {
"id": "[variables('subnet1Ref')]"
}
}
}
]
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-12-01",
"type": "Microsoft.Compute/virtualMachines",
"name": "[concat(variables('vmName'),copyIndex(1))]",
"location": "[resourceGroup().location]",
"copy": {
"name": "vmLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'), copyIndex(1))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"osProfile": {
"computerName": "[concat(variables('vmName'),copyIndex(1))]",
"adminUsername": "[parameters('adminUsername')]",
"linuxConfiguration": {
"disablePasswordAuthentication": true,
"ssh": {
"publicKeys": [
{
"path": "[variables('sshKeyPath')]",
"keyData": "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile": {
"osDisk": {
"createOption": "FromImage"
},
"imageReference": {
"publisher": "[parameters('imagePublisher')]",
"offer": "[parameters('imageOffer')]",
"sku": "[parameters('imageSKU')]",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('nicName'),copyIndex(1)))]"
}
]
}
},
"tags": {
"workshop": "[parameters('workshopName')]"
}
},
{
"apiVersion": "2017-11-01",
"type": "Microsoft.Network/networkSecurityGroups",
"name": "[variables('netSecurityGroup')]",
"location": "[resourceGroup().location]",
"tags": {
"workshop": "[parameters('workshopName')]"
},
"properties": {
"securityRules": [
{
"name": "default-open-ports",
"properties": {
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 1000,
"direction": "Inbound"
}
}
]
}
}
],
"outputs": {
"resourceID": {
"type": "string",
"value": "[resourceId('Microsoft.Network/publicIPAddresses', concat(variables('publicIPAddressName'),'1'))]"
}
}
}

`prepare-vms/azuredeploy.parameters.json`:
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"sshKeyData": {
"value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXTIl/M9oeSlcsC5Rfe+nZr4Jc4sl200pSw2lpdxlZ3xzeP15NgSSMJnigUrKUXHfqRQ+2wiPxEf0Odz2GdvmXvR0xodayoOQsO24AoERjeSBXCwqITsfp1bGKzMb30/3ojRBo6LBR6r1+lzJYnNCGkT+IQwLzRIpm0LCNz1j08PUI2aZ04+mcDANvHuN/hwi/THbLLp6SNWN43m9r02RcC6xlCNEhJi4wk4VzMzVbSv9RlLGST2ocbUHwmQ2k9OUmpzoOx73aQi9XNnEaFh2w/eIdXM75VtkT3mRryyykg9y0/hH8/MVmIuRIdzxHQqlm++DLXVH5Ctw6a4kS+ki7 workshop"
},
"workshopName": {
"value": "workshop"
},
"numberOfInstances": {
"value": 3
},
"vmSize": {
"value": "Standard_D1_v2"
}
}
}

`prepare-vms/cards.html`:
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "orchestration workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_swarm -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 21.5%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.4em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>

`prepare-vms/clusters.csv`:
Put your initials in the first column to "claim" a cluster.
Initials{% for node in clusters[0] %} node{{ loop.index }}{% endfor %}
{% for cluster in clusters -%}
{%- for node in cluster %} {{ node|trim }}{% endfor %}
{% endfor %}

`prepare-vms/cncsetup.sh`:
#!/bin/sh
if [ $(whoami) != ubuntu ]; then
echo "This script should be executed on a freshly deployed node,"
echo "with the 'ubuntu' user. Aborting."
exit 1
fi
if id docker; then
sudo userdel -r docker
fi
pip install --user awscli jinja2 pdfkit
sudo apt-get install -y wkhtmltopdf xvfb
tmux new-session \; send-keys "
[ -f ~/.ssh/id_rsa ] || ssh-keygen
eval \$(ssh-agent)
ssh-add
Xvfb :0 &
export DISPLAY=:0
mkdir -p ~/www
sudo docker run -d -p 80:80 -v \$HOME/www:/usr/share/nginx/html nginx
"

`prepare-vms/docker-compose.yml`:
version: "2"
services:
workshopctl:
build: .
image: workshopctl
working_dir: /root/prepare-vms
volumes:
- $HOME/.aws/:/root/.aws/
- $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
- $PWD/:/root/prepare-vms/
environment:
SSH_AUTH_SOCK: ${SSH_AUTH_SOCK}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
AWS_INSTANCE_TYPE: ${AWS_INSTANCE_TYPE}
USER: ${USER}
entrypoint: /root/prepare-vms/workshopctl

Removed file (`prepare-vms/docker-nb.svg`):
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 16.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
width="466px" height="423.891px" viewBox="0 0 466 423.891" enable-background="new 0 0 466 423.891" xml:space="preserve">
<rect x="-43.5" y="37.005" display="none" fill-rule="evenodd" clip-rule="evenodd" fill="#E7E7E7" width="792" height="612"/>
<g>
<path id="outline_7_" fill-rule="evenodd" clip-rule="evenodd" d="M242.133,168.481h47.146v48.194h23.837
c11.008,0,22.33-1.962,32.755-5.494c5.123-1.736,10.872-4.154,15.926-7.193c-6.656-8.689-10.053-19.661-11.054-30.476
c-1.358-14.71,1.609-33.855,11.564-45.368l4.956-5.732l5.905,4.747c14.867,11.946,27.372,28.638,29.577,47.665
c17.901-5.266,38.921-4.02,54.701,5.088l6.475,3.734l-3.408,6.652c-13.345,26.046-41.246,34.113-68.524,32.687
c-40.817,101.663-129.68,149.794-237.428,149.794c-55.666,0-106.738-20.81-135.821-70.197l-0.477-0.807l-4.238-8.621
C4.195,271.415,0.93,247.6,3.145,223.803l0.664-7.127h40.315v-48.194h47.143v-47.145h94.292V74.191h56.574V168.481z"/>
<g display="none">
<path display="inline" fill="#394D54" d="M61.093,319.89c6.023,0,11.763-0.157,17.219-0.464c0.476-0.026,0.932-0.063,1.402-0.092
c0.005-0.002,0.008-0.002,0.012-0.002c13.872-0.855,25.876-2.708,35.902-5.57c0.002-0.002,0.004-0.002,0.006-0.002
c1.823-0.521,3.588-1.07,5.282-1.656c1.894-0.657,2.896-2.725,2.241-4.618c-0.656-1.895-2.722-2.899-4.618-2.24
c-12.734,4.412-29.535,6.842-50.125,7.298c-0.002,0-0.004,0-0.005,0c-10.477,0.232-21.93-0.044-34.352-0.843c0,0,0,0-0.001,0
c-0.635-0.038-1.259-0.075-1.9-0.118c-1.995-0.128-3.731,1.374-3.869,3.375c-0.136,1.999,1.376,3.73,3.375,3.866
c2.537,0.173,5.03,0.321,7.49,0.453c0.392,0.021,0.77,0.034,1.158,0.054l0,0C47.566,319.697,54.504,319.89,61.093,319.89z"/>
</g>
<g id="Containers_8_">
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M86.209,179.744h3.227v34.052h-3.227V179.744z M80.02,179.744
h3.354v34.052H80.02V179.744z M73.828,179.744h3.354v34.052h-3.354V179.744z M67.636,179.744h3.354v34.052h-3.354V179.744z
M61.446,179.744H64.8v34.052h-3.354V179.744z M55.384,179.744h3.224v34.052h-3.224V179.744z M51.981,176.338h40.858v40.86H51.981
V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M133.354,132.598h3.229v34.051h-3.229V132.598z M127.165,132.598
h3.354v34.051h-3.354V132.598z M120.973,132.598h3.354v34.051h-3.354V132.598z M114.781,132.598h3.354v34.051h-3.354V132.598z
M108.593,132.598h3.352v34.051h-3.352V132.598z M102.531,132.598h3.222v34.051h-3.222V132.598z M99.124,129.193h40.863v40.859
H99.124V129.193z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M133.354,179.744h3.229v34.052h-3.229V179.744z M127.165,179.744
h3.354v34.052h-3.354V179.744z M120.973,179.744h3.354v34.052h-3.354V179.744z M114.781,179.744h3.354v34.052h-3.354V179.744z
M108.593,179.744h3.352v34.052h-3.352V179.744z M102.531,179.744h3.222v34.052h-3.222V179.744z M99.124,176.338h40.863v40.86
H99.124V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M180.501,179.744h3.225v34.052h-3.225V179.744z M174.31,179.744
h3.355v34.052h-3.355V179.744z M168.12,179.744h3.354v34.052h-3.354V179.744z M161.928,179.744h3.354v34.052h-3.354V179.744z
M155.736,179.744h3.354v34.052h-3.354V179.744z M149.676,179.744h3.222v34.052h-3.222V179.744z M146.271,176.338h40.861v40.86
h-40.861V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M180.501,132.598h3.225v34.051h-3.225V132.598z M174.31,132.598
h3.355v34.051h-3.355V132.598z M168.12,132.598h3.354v34.051h-3.354V132.598z M161.928,132.598h3.354v34.051h-3.354V132.598z
M155.736,132.598h3.354v34.051h-3.354V132.598z M149.676,132.598h3.222v34.051h-3.222V132.598z M146.271,129.193h40.861v40.859
h-40.861V129.193z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M227.647,179.744h3.226v34.052h-3.226V179.744z M221.457,179.744
h3.354v34.052h-3.354V179.744z M215.265,179.744h3.354v34.052h-3.354V179.744z M209.073,179.744h3.354v34.052h-3.354V179.744z
M202.884,179.744h3.354v34.052h-3.354V179.744z M196.821,179.744h3.224v34.052h-3.224V179.744z M193.416,176.338h40.861v40.86
h-40.861V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M227.647,132.598h3.226v34.051h-3.226V132.598z M221.457,132.598
h3.354v34.051h-3.354V132.598z M215.265,132.598h3.354v34.051h-3.354V132.598z M209.073,132.598h3.354v34.051h-3.354V132.598z
M202.884,132.598h3.354v34.051h-3.354V132.598z M196.821,132.598h3.224v34.051h-3.224V132.598z M193.416,129.193h40.861v40.859
h-40.861V129.193z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M227.647,85.451h3.226v34.053h-3.226V85.451z M221.457,85.451
h3.354v34.053h-3.354V85.451z M215.265,85.451h3.354v34.053h-3.354V85.451z M209.073,85.451h3.354v34.053h-3.354V85.451z
M202.884,85.451h3.354v34.053h-3.354V85.451z M196.821,85.451h3.224v34.053h-3.224V85.451z M193.416,82.048h40.861v40.86h-40.861
V82.048z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M274.792,179.744h3.224v34.052h-3.224V179.744z M268.602,179.744
h3.352v34.052h-3.352V179.744z M262.408,179.744h3.354v34.052h-3.354V179.744z M256.218,179.744h3.354v34.052h-3.354V179.744z
M250.026,179.744h3.354v34.052h-3.354V179.744z M243.964,179.744h3.227v34.052h-3.227V179.744z M240.561,176.338h40.86v40.86
h-40.86V176.338z"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M137.428,283.445c6.225,0,11.271,5.049,11.271,11.272
c0,6.225-5.046,11.271-11.271,11.271c-6.226,0-11.272-5.046-11.272-11.271C126.156,288.494,131.202,283.445,137.428,283.445"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M137.428,286.644c1.031,0,2.015,0.194,2.923,0.546
c-0.984,0.569-1.65,1.635-1.65,2.854c0,1.82,1.476,3.293,3.296,3.293c1.247,0,2.329-0.693,2.89-1.715
c0.395,0.953,0.615,1.999,0.615,3.097c0,4.458-3.615,8.073-8.073,8.073c-4.458,0-8.074-3.615-8.074-8.073
C129.354,290.258,132.971,286.644,137.428,286.644"/>
<path fill="#FFFFFF" d="M167.394,364.677c-27.916-13.247-43.239-31.256-51.765-50.915c-10.37,2.961-22.835,4.852-37.317,5.664
c-5.457,0.307-11.196,0.464-17.219,0.464c-6.942,0-14.26-0.205-21.94-0.613c25.6,25.585,57.094,45.283,115.408,45.645
C158.866,364.921,163.14,364.837,167.394,364.677z"/>
</g>
</svg>

`prepare-vms/docker.png` (binary file, 127 KiB, not shown)

Removed file:
#!/usr/bin/env python
SETTINGS_BASIC = dict(
clustersize=1,
pagesize=15,
blurb="<p>Here is the connection information to your very own "
"VM for this intro to Docker workshop. You can connect "
"to the VM using your SSH client.</p>\n"
"<p>Your VM is reachable on the following address:</p>\n",
prettify=lambda x: x,
footer="<p>You can find the last version of the slides on "
"http://view.dckr.info/.</p>",
)
SETTINGS_ADVANCED = dict(
clustersize=5,
pagesize=12,
blurb="<p>Here is the connection information to your very own "
"cluster for this orchestration workshop. You can connect "
"to each VM with any SSH client.</p>\n"
"<p>Your machines are:<ul>\n",
prettify=lambda l: [ "node%d: %s"%(i+1, s)
for (i, s) in zip(range(len(l)), l) ],
footer="<p>You can find the last version of the slides on "
"http://view.dckr.info/.</p>"
)
SETTINGS = SETTINGS_ADVANCED
globals().update(SETTINGS)
###############################################################################
ips = list(open("ips.txt"))
assert len(ips)%clustersize == 0
clusters = []
while ips:
cluster = ips[:clustersize]
ips = ips[clustersize:]
clusters.append(cluster)
html = open("ips.html", "w")
html.write("<html><head><style>")
html.write("""
div {
float:left;
border: 1px solid black;
width: 28%;
padding: 4% 2.5% 2.5% 2.5%;
font-size: x-small;
background-image: url("docker-nb.svg");
background-size: 15%;
background-position-x: 50%;
background-repeat: no-repeat;
}
p {
margin: 0.5em 0 0.5em 0;
}
.pagebreak {
page-break-before: always;
clear: both;
display: block;
height: 8px;
}
""")
html.write("</style></head><body>")
for i, cluster in enumerate(clusters):
if i>0 and i%pagesize==0:
html.write('<span class="pagebreak"></span>\n')
html.write("<div>")
html.write(blurb)
for s in prettify(cluster):
html.write("<li>%s</li>\n"%s)
html.write("</ul></p>")
html.write("<p>login=docker password=training</p>\n")
html.write(footer)
html.write("</div>")
html.close()

`prepare-vms/lib/aws.sh`:
aws_display_tags() {
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| uniq -c \
| sort -k 3 \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')
aws ec2 describe-instance-status \
--instance-ids $IDS \
--query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
--output table
}
aws_display_instances_by_tag() {
TAG=$1
need_tag $TAG
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "$result"
fi
}
aws_get_instance_ids_by_filter() {
FILTER=$1
aws ec2 describe-instances --filters $FILTER \
--query Reservations[*].Instances[*].InstanceId \
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
need_tag $TOKEN
aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}
aws_get_instance_ids_by_tag() {
TAG=$1
need_tag $TAG
aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}
aws_get_instance_ips_by_tag() {
TAG=$1
need_tag $TAG
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws_get_instance_ids_by_tag $TAG)
if [ -z "$IDS" ]; then
die "Invalid tag."
fi
info "Deleting instances with tag $TAG."
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
info "Deleted instances with tag $TAG."
}
aws_tag_instances() {
OLD_TAG_OR_TOKEN=$1
NEW_TAG=$2
IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}

`prepare-vms/lib/cli.sh`:
# Abort if any error happens, and show the command that caused the error.
_ERR() {
error "Command $BASH_COMMAND failed (exit status: $?)"
}
set -eE
trap _ERR ERR
die() {
if [ -n "$1" ]; then
error "$1"
fi
exit 1
}
error() {
>/dev/stderr echo "[$(red ERROR)] $1"
}
warning() {
>/dev/stderr echo "[$(yellow WARNING)] $1"
}
info() {
>/dev/stderr echo "[$(green INFO)] $1"
}
# Print a full-width separator.
# If given an argument, will print it in the middle of that separator.
# If the argument is longer than the screen width, it will be printed between two separator lines.
sep() {
if [ -z "$COLUMNS" ]; then
COLUMNS=80
fi
SEP=$(yes = | tr -d "\n" | head -c $(($COLUMNS - 1)))
if [ -z "$1" ]; then
>/dev/stderr echo $SEP
else
MSGLEN=$(echo "$1" | wc -c)
if [ $(($MSGLEN + 4)) -gt $COLUMNS ]; then
>/dev/stderr echo "$SEP"
>/dev/stderr echo "$1"
>/dev/stderr echo "$SEP"
else
LEFTLEN=$((($COLUMNS - $MSGLEN - 2) / 2))
RIGHTLEN=$(($COLUMNS - $MSGLEN - 2 - $LEFTLEN))
LEFTSEP=$(echo $SEP | head -c $LEFTLEN)
RIGHTSEP=$(echo $SEP | head -c $RIGHTLEN)
>/dev/stderr echo "$LEFTSEP $1 $RIGHTSEP"
fi
fi
}
need_tag() {
if [ -z "$1" ]; then
die "Please specify a tag or token. To see available tags and tokens, run: $0 list"
fi
}
need_settings() {
if [ -z "$1" ]; then
die "Please specify a settings file."
elif [ ! -f "$1" ]; then
die "Settings file $1 doesn't exist."
fi
}
need_ips_file() {
IPS_FILE=$1
if [ -z "$IPS_FILE" ]; then
die "IPS_FILE not set."
fi
if [ ! -s "$IPS_FILE" ]; then
die "IPS_FILE $IPS_FILE not found. Please run: $0 ips <TAG>"
fi
}

`prepare-vms/lib/colors.sh`:
bold() {
echo "$(tput bold)$1$(tput sgr0)"
}
red() {
echo "$(tput setaf 1)$1$(tput sgr0)"
}
green() {
echo "$(tput setaf 2)$1$(tput sgr0)"
}
yellow() {
echo "$(tput setaf 3)$1$(tput sgr0)"
}

`prepare-vms/lib/commands.sh`:
export AWS_DEFAULT_OUTPUT=text
HELP=""
_cmd() {
HELP="$(printf "%s\n%-12s %s\n" "$HELP" "$1" "$2")"
}
_cmd help "Show available commands"
_cmd_help() {
printf "$(basename $0) - the orchestration workshop swiss army knife\n"
printf "Commands:"
printf "%s" "$HELP" | sort
}
_cmd amis "List Ubuntu AMIs in the current region"
_cmd_amis() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION "$@"
}
_cmd ami "Show the AMI that will be used for deployment"
_cmd_ami() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}
_cmd build "Build the Docker image to run this program in a container"
_cmd_build() {
docker-compose build
}
_cmd wrap "Run this program in a container"
_cmd_wrap() {
docker-compose run --rm workshopctl "$@"
}
_cmd cards "Generate ready-to-print cards for a batch of VMs"
_cmd_cards() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
# If you're not using AWS, populate the ips.txt file manually
if [ ! -f tags/$TAG/ips.txt ]; then
aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
fi
# Remove symlinks to old cards
rm -f ips.html ips.pdf
# This will generate two files in the base dir: ips.pdf and ips.html
python lib/ips-txt-to-html.py $SETTINGS
for f in ips.html ips.pdf; do
# Remove old versions of cards if they exist
rm -f tags/$TAG/$f
# Move the generated file and replace it with a symlink
mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
done
info "Cards created. You can view them with:"
info "xdg-open ips.html ips.pdf (on Linux)"
info "open ips.html ips.pdf (on MacOS)"
}
_cmd deploy "Install Docker on a bunch of running VMs"
_cmd_deploy() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
link_tag $TAG
count=$(wc -l < ips.txt)
# wait until all hosts are reachable before trying to deploy
info "Trying to reach $TAG instances..."
while ! tag_is_reachable $TAG; do
>/dev/stderr echo -n "."
sleep 2
done
>/dev/stderr echo ""
sep "Deploying tag $TAG"
pssh -I tee /tmp/settings.yaml <$SETTINGS
pssh "
sudo apt-get update &&
sudo apt-get install -y python-setuptools &&
sudo easy_install pyyaml"
# Copy postprep.py to the remote machines, and execute it, feeding it the list of IP addresses
pssh -I tee /tmp/postprep.py <lib/postprep.py
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <ips.txt
# Install docker-prompt script
pssh -I sudo tee /usr/local/bin/docker-prompt <lib/docker-prompt
pssh sudo chmod +x /usr/local/bin/docker-prompt
# If /home/docker/.ssh/id_rsa doesn't exist, copy it from node1
pssh "
sudo -u docker [ -f /home/docker/.ssh/id_rsa ] ||
ssh -o StrictHostKeyChecking=no node1 sudo -u docker tar -C /home/docker -cvf- .ssh |
sudo -u docker tar -C /home/docker -xf-"
# if 'docker@' doesn't appear in /home/docker/.ssh/authorized_keys, copy it there
pssh "
grep docker@ /home/docker/.ssh/authorized_keys ||
cat /home/docker/.ssh/id_rsa.pub |
sudo -u docker tee -a /home/docker/.ssh/authorized_keys"
# On node1, create and deploy TLS certs using Docker Machine
# (Currently disabled.)
true || pssh "
if grep -q node1 /tmp/node; then
grep ' node' /etc/hosts |
xargs -n2 sudo -H -u docker \
docker-machine create -d generic --generic-ssh-user docker --generic-ip-address
fi"
sep "Deployed tag $TAG"
info "You may want to run one of the following commands:"
info "$0 kube $TAG"
info "$0 pull_images $TAG"
info "$0 cards $TAG $SETTINGS"
}
_cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"
_cmd_kube() {
# Install packages
pssh --timeout 200 "
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |
sudo apt-key add - &&
echo deb http://apt.kubernetes.io/ kubernetes-xenial main |
sudo tee /etc/apt/sources.list.d/kubernetes.list"
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet kubeadm kubectl
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
# Initialize kube master
pssh --timeout 200 "
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token
sudo kubeadm init --token \$(cat /tmp/token)
fi"
# Put kubeconfig in ubuntu's and docker's accounts
pssh "
if grep -q node1 /tmp/node; then
sudo mkdir -p \$HOME/.kube /home/docker/.kube &&
sudo cp /etc/kubernetes/admin.conf \$HOME/.kube/config &&
sudo cp /etc/kubernetes/admin.conf /home/docker/.kube/config &&
sudo chown -R \$(id -u) \$HOME/.kube &&
sudo chown -R docker /home/docker/.kube
fi"
# Install weave as the pod network
pssh "
if grep -q node1 /tmp/node; then
kubever=\$(kubectl version | base64 | tr -d '\n')
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
fi"
# Join the other nodes to the cluster
pssh --timeout 200 "
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
fi"
sep "Done"
}
_cmd kubetest "Check that all nodes are reporting as Ready"
_cmd_kubetest() {
# There are way too many backslashes in the command below.
# Feel free to make that better ♥
pssh "
set -e
[ -f /tmp/node ]
if grep -q node1 /tmp/node; then
which kubectl
for NODE in \$(awk /\ node/\ {print\ \\\$2} /etc/hosts); do
echo \$NODE ; kubectl get nodes | grep -w \$NODE | grep -w Ready
done
fi"
}
_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd_ids() {
TAG=$1
need_tag $TAG
info "Looking up by tag:"
aws_get_instance_ids_by_tag $TAG
# Just in case we managed to create instances but weren't able to tag them
info "Looking up by token:"
aws_get_instance_ids_by_client_token $TAG
}
_cmd ips "List the IP addresses of the VMs for a given tag or token"
_cmd_ips() {
TAG=$1
need_tag $TAG
mkdir -p tags/$TAG
aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
link_tag $TAG
}
_cmd list "List available batches in the current region"
_cmd_list() {
info "Listing batches in region $AWS_DEFAULT_REGION:"
aws_display_tags
}
_cmd status "List instance status for a given batch"
_cmd_status() {
info "Using region $AWS_DEFAULT_REGION."
TAG=$1
need_tag $TAG
describe_tag $TAG
tag_is_reachable $TAG
info "You may be interested in running one of the following commands:"
info "$0 ips $TAG"
info "$0 deploy $TAG <settings/somefile.yaml>"
}
_cmd opensg "Open the default security group to ALL ingress traffic"
_cmd_opensg() {
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
}
_cmd pull_images "Pre-pull a bunch of Docker images"
_cmd_pull_images() {
TAG=$1
need_tag $TAG
pull_tag $TAG
}
_cmd retag "Apply a new tag to a batch of VMs"
_cmd_retag() {
OLDTAG=$1
NEWTAG=$2
need_tag $OLDTAG
if [[ -z "$NEWTAG" ]]; then
die "You must specify a new tag to apply."
fi
aws_tag_instances $OLDTAG $NEWTAG
}
_cmd start "Start a batch of VMs"
_cmd_start() {
# Number of instances to create
COUNT=$1
# Optional settings file (to carry on with deployment)
SETTINGS=$2
if [ -z "$COUNT" ]; then
die "Indicate number of instances to start."
fi
# Print our AWS username, to ease the pain of credential-juggling
greet
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
key_name=$(sync_keys)
AMI=$(_cmd_ami) # Retrieve the AWS image ID
if [ -z "$AMI" ]; then
die "I could not find which AMI to use in this region. Try another region?"
fi
TOKEN=$(get_token) # generate a timestamp token for this batch of VMs
AWS_KEY_NAME=$(make_key_name)
sep "Starting instances"
info " Count: $COUNT"
info " Region: $AWS_DEFAULT_REGION"
info " Token/tag: $TOKEN"
info " AMI: $AMI"
info " Key name: $AWS_KEY_NAME"
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
--client-token $TOKEN \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
info "Reservation ID: $reservation_id"
sep
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TOKEN)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
TAG=$TOKEN
aws_tag_instances $TOKEN $TAG
wait_until_tag_is_running $TAG $COUNT
sep
info "Successfully created $COUNT instances with tag $TAG"
sep
mkdir -p tags/$TAG
IPS=$(aws_get_instance_ips_by_tag $TAG)
echo "$IPS" >tags/$TAG/ips.txt
link_tag $TAG
if [ -n "$SETTINGS" ]; then
_cmd_deploy $TAG $SETTINGS
else
info "To deploy or kill these instances, run one of the following:"
info "$0 deploy $TAG <settings/somefile.yaml>"
info "$0 stop $TAG"
fi
}
_cmd ec2quotas "Check our EC2 quotas (max instances)"
_cmd_ec2quotas() {
greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
info "Available regions:"
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
}
_cmd stop "Stop (terminate, shutdown, kill, remove, destroy...) instances"
_cmd_stop() {
TAG=$1
need_tag $TAG
aws_kill_instances_by_tag $TAG
}
_cmd test "Run tests (pre-flight checks) on a batch of VMs"
_cmd_test() {
TAG=$1
need_tag $TAG
test_tag $TAG
}
###
greet() {
IAMUSER=$(aws iam get-user --query 'User.UserName')
info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
}
link_tag() {
TAG=$1
need_tag $TAG
IPS_FILE=tags/$TAG/ips.txt
need_ips_file $IPS_FILE
ln -sf $IPS_FILE ips.txt
}
pull_tag() {
TAG=$1
need_tag $TAG
link_tag $TAG
if [ ! -s $IPS_FILE ]; then
die "Nonexistent or empty IPs file $IPS_FILE."
fi
# Pre-pull a bunch of images
pssh --timeout 900 'for I in \
debian:latest \
ubuntu:latest \
fedora:latest \
centos:latest \
postgres \
redis \
training/namer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'
info "Finished pulling images for $TAG."
info "You may now want to run:"
info "$0 cards $TAG <settings/somefile.yaml>"
}
wait_until_tag_is_running() {
max_retry=50
TAG=$1
COUNT=$2
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
let "i += 1"
info "$(printf "%d/%d instances online" $done_count $COUNT)"
done_count=$(aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
"Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].State.Name" \
| tr "\t" "\n" \
| wc -l)
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
tag_is_reachable() {
TAG=$1
need_tag $TAG
link_tag $TAG
pssh -t 5 true 2>&1 >/dev/null
}
test_tag() {
TAG=$1
ips_file=tags/$TAG/ips.txt
info "Picking a random IP address in $ips_file to run tests."
n=$((1 + $RANDOM % $(wc -l <$ips_file)))
ip=$(head -n $n $ips_file | tail -n 1)
test_vm $ip
info "Tests complete."
}
test_vm() {
ip=$1
info "Testing instance with IP address $ip."
user=ubuntu
errors=""
for cmd in "hostname" \
"whoami" \
"hostname -i" \
"cat /tmp/node" \
"cat /tmp/ipv4" \
"cat /etc/hosts" \
"hostnamectl status" \
"docker version | grep Version -B1" \
"docker-compose version" \
"docker-machine version" \
"docker images" \
"docker ps" \
"curl --silent localhost:55555" \
"sudo ls -la /mnt/ | grep docker" \
"env" \
"ls -la /home/docker/.ssh"; do
sep "$cmd"
echo "$cmd" \
| ssh -A -q \
-o "UserKnownHostsFile /dev/null" \
-o "StrictHostKeyChecking=no" \
$user@$ip sudo -u docker -i \
|| {
status=$?
error "$cmd exit status: $status"
errors="[$status] $cmd\n$errors"
}
done
sep
if [ -n "$errors" ]; then
error "The following commands had non-zero exit codes:"
printf "$errors"
fi
info "Test VM was $ip."
}
make_key_name() {
SHORT_FINGERPRINT=$(ssh-add -l | grep RSA | head -n1 | cut -d " " -f 2 | tr -d : | cut -c 1-8)
echo "${SHORT_FINGERPRINT}-${USER}"
}
sync_keys() {
# make sure ssh-add -l contains "RSA"
ssh-add -l | grep -q RSA \
|| die "The output of \`ssh-add -l\` doesn't contain 'RSA'. Start the agent, add your keys?"
AWS_KEY_NAME=$(make_key_name)
info "Syncing keys... "
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
aws ec2 import-key-pair --key-name $AWS_KEY_NAME \
--public-key-material "$(ssh-add -L \
| grep -i RSA \
| head -n1 \
| cut -d " " -f 1-2)" &>/dev/null
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
die "Somehow, importing the key didn't work. Make sure that 'ssh-add -l | grep RSA | head -n1' returns an RSA key?"
else
info "Imported new key $AWS_KEY_NAME."
fi
else
info "Using existing key $AWS_KEY_NAME."
fi
}
get_token() {
if [ -z $USER ]; then
export USER=anonymous
fi
date +%Y-%m-%d-%H-%M-$USER
}
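For reference, get_token stamps each batch with a sortable timestamp plus the invoking UNIX user, so tokens double as tags. A quick illustration of the format (assuming a POSIX shell with `date`; not part of the script itself):

```shell
# Illustration only: reproduces the token format produced by get_token.
USER=${USER:-anonymous}
TOKEN=$(date +%Y-%m-%d-%H-%M-$USER)
echo "$TOKEN"
```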
describe_tag() {
# Display instance details and reachability/status information
TAG=$1
need_tag $TAG
aws_display_instances_by_tag $TAG
aws_display_instance_statuses_by_tag $TAG
}

prepare-vms/lib/docker-prompt Executable file

@@ -0,0 +1,21 @@
#!/bin/sh
case "$DOCKER_HOST" in
*:3376)
echo swarm
;;
*:2376)
echo $DOCKER_MACHINE_NAME
;;
*:2375)
echo $DOCKER_MACHINE_NAME
;;
*:55555)
echo $DOCKER_MACHINE_NAME
;;
"")
echo local
;;
*)
echo unknown
;;
esac


@@ -0,0 +1,174 @@
# borrowed from https://gist.github.com/kirikaza/6627072
# The original script has been wrapped in a function that invokes a subshell.
# That way, it can be safely invoked as a function from other scripts.
find_ubuntu_ami() {
(
usage() {
cat >&2 <<__
usage: find-ubuntu-ami.sh [ <filter>... ] [ <sorting> ] [ <options> ]
where:
<filter> is pair of key and substring to search
-r <region>
-n <name>
-v <version>
-a <arch>
-t <type>
-d <date>
-i <image>
-k <kernel>
<sorting> is one of:
-R by region
-N by name
-V by version
-A by arch
-T by type
-D by date
-I by image
-K by kernel
<options> can be:
-q just show AMI
protip for Docker orchestration workshop admin:
./find-ubuntu-ami.sh -t hvm:ebs -r \$AWS_REGION -v 15.10 -N
__
exit 1
}
args=$(getopt hr:n:v:a:t:d:i:k:RNVATDIKq $*)
if [ $? != 0 ]; then
echo >&2
usage
fi
region=
name=
version=
arch=
type=
date=
image=
kernel=
sort=date
quiet=
set -- $args
for a; do
case "$a" in
-h) usage ;;
-r)
region=$2
shift
;;
-n)
name=$2
shift
;;
-v)
version=$2
shift
;;
-a)
arch=$2
shift
;;
-t)
type=$2
shift
;;
-d)
date=$2
shift
;;
-i)
image=$2
shift
;;
-k)
kernel=$2
shift
;;
-R) sort=region ;;
-N) sort=name ;;
-V) sort=version ;;
-A) sort=arch ;;
-T) sort=type ;;
-D) sort=date ;;
-I) sort=image ;;
-K) sort=kernel ;;
-q) quiet=y ;;
--)
shift
break
;;
*) continue ;;
esac
shift
done
[ $# = 0 ] || usage
fix_json() {
tr -d \\n | sed 's/,]}/]}/'
}
jq_query() {
cat <<__
.aaData | map (
{
region: .[0],
name: .[1],
version: .[2],
arch: .[3],
type: .[4],
date: .[5],
image: .[6],
kernel: .[7]
} | select (
(.region | contains("$region")) and
(.name | contains("$name")) and
(.version | contains("$version")) and
(.arch | contains("$arch")) and
(.type | contains("$type")) and
(.date | contains("$date")) and
(.image | contains("$image")) and
(.kernel | contains("$kernel"))
)
) | sort_by(.$sort) | .[] |
"\(.region)|\(.name)|\(.version)|\(.arch)|\(.type)|\(.date)|\(.image)|\(.kernel)"
__
}
trim_quotes() {
sed 's/^"//;s/"$//'
}
escape_spaces() {
sed 's/ /\\\ /g'
}
url=http://cloud-images.ubuntu.com/locator/ec2/releasesTable
{
[ "$quiet" ] || echo REGION NAME VERSION ARCH TYPE DATE IMAGE KERNEL
curl -s $url | fix_json | jq "$(jq_query)" | trim_quotes | escape_spaces | tr \| ' '
} \
| while read region name version arch type date image kernel; do
image=${image%<*}
image=${image#*>}
if [ "$quiet" ]; then
echo $image
else
echo "$region|$name|$version|$arch|$type|$date|$image|$kernel"
fi
done | column -t -s \|
)
}


@@ -0,0 +1,51 @@
#!/usr/bin/env python
import os
import sys
import yaml
import jinja2
def prettify(l):
l = [ip.strip() for ip in l]
ret = [ "node{}: <code>{}</code>".format(i+1, s) for (i, s) in zip(range(len(l)), l) ]
return ret
# Read settings from user-provided settings file
SETTINGS = yaml.load(open(sys.argv[1]))
clustersize = SETTINGS["clustersize"]
ips = list(open("ips.txt"))
print("---------------------------------------------")
print(" Number of IPs: {}".format(len(ips)))
print(" VMs per cluster: {}".format(clustersize))
print("---------------------------------------------")
assert len(ips)%clustersize == 0
clusters = []
while ips:
cluster = ips[:clustersize]
ips = ips[clustersize:]
clusters.append(cluster)
template_file_name = SETTINGS["cards_template"]
template = jinja2.Template(open(template_file_name).read())
with open("ips.html", "w") as f:
f.write(template.render(clusters=clusters, **SETTINGS))
print("Generated ips.html")
try:
import pdfkit
with open("ips.html") as f:
pdfkit.from_file(f, "ips.pdf", options={
"page-size": SETTINGS["paper_size"],
"margin-top": SETTINGS["paper_margin"],
"margin-bottom": SETTINGS["paper_margin"],
"margin-left": SETTINGS["paper_margin"],
"margin-right": SETTINGS["paper_margin"],
})
print("Generated ips.pdf")
except ImportError:
print("WARNING: could not import pdfkit; did not generate ips.pdf")
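The cluster-splitting loop in ips-txt-to-html.py can be sketched as a standalone helper (the function name is ours, for illustration only; the script uses an inline while-loop):

```python
def chunk_into_clusters(ips, clustersize):
    """Split a flat IP list into consecutive fixed-size clusters,
    as the while-loop in ips-txt-to-html.py does."""
    if len(ips) % clustersize != 0:
        # mirrors the script's assert: leftover IPs are a settings error
        raise ValueError("IP count must be a multiple of clustersize")
    return [ips[i:i + clustersize] for i in range(0, len(ips), clustersize)]

print(chunk_into_clusters(["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"], 2))
# [['10.0.0.1', '10.0.0.2'], ['10.0.0.3', '10.0.0.4']]
```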

prepare-vms/lib/postprep.py Executable file

@@ -0,0 +1,150 @@
#!/usr/bin/env python
import os
import platform
import sys
import time
import urllib
import yaml
#################################
config = yaml.load(open("/tmp/settings.yaml"))
COMPOSE_VERSION = config["compose_version"]
MACHINE_VERSION = config["machine_version"]
CLUSTER_SIZE = config["clustersize"]
ENGINE_VERSION = config["engine_version"]
#################################
# This script will be run as ubuntu user, which has root privileges.
# docker commands will require sudo because the ubuntu user has no access to the docker socket.
STEP = 0
START = time.time()
def bold(msg):
return "{} {} {}".format("$(tput smso)", msg, "$(tput rmso)")
def system(cmd):
global STEP
with open("/tmp/pp.status", "a") as f:
t1 = time.time()
f.write(bold("--- RUNNING [step {}] ---> {}...".format(STEP, cmd)))
retcode = os.system(cmd)
t2 = time.time()
td = str(t2-t1)[:5]
f.write(bold("[{}] in {}s\n".format(retcode, td)))
STEP += 1
with open("/home/ubuntu/.bash_history", "a") as f:
f.write("{}\n".format(cmd))
if retcode != 0:
msg = "The following command failed with exit code {}:\n".format(retcode)
msg+= cmd
raise(Exception(msg))
# On EC2, the ephemeral disk might be mounted on /mnt.
# If /mnt is a mountpoint, place Docker workspace on it.
system("if mountpoint -q /mnt; then sudo mkdir -p /mnt/docker && sudo ln -sfn /mnt/docker /var/lib/docker; fi")
# Put our public IP in /tmp/ipv4
# ipv4_retrieval_endpoint = "http://169.254.169.254/latest/meta-data/public-ipv4"
ipv4_retrieval_endpoint = "http://myip.enix.org/REMOTE_ADDR"
system("curl --silent {} > /tmp/ipv4".format(ipv4_retrieval_endpoint))
ipv4 = open("/tmp/ipv4").read()
# Add a "docker" user with password "training"
system("id docker || sudo useradd -d /home/docker -m -s /bin/bash docker")
system("echo docker:training | sudo chpasswd")
# Fancy prompt courtesy of @soulshake.
system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL
export PS1='\e[1m\e[31m[{}] \e[32m(\\$(docker-prompt)) \e[34m\u@\h\e[35m \w\e[0m\n$ '
SQRL""".format(ipv4))
# Custom .vimrc
system("""sudo -u docker tee /home/docker/.vimrc <<SQRL
syntax on
set autoindent
set expandtab
set number
set shiftwidth=2
set softtabstop=2
SQRL""")
# add docker user to sudoers and allow password authentication
system("""sudo tee /etc/sudoers.d/docker <<SQRL
docker ALL=(ALL) NOPASSWD:ALL
SQRL""")
system("sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config")
system("sudo service ssh restart")
system("sudo apt-get -q update")
system("sudo apt-get -qy install git jq python-pip")
#######################
### DOCKER INSTALLS ###
#######################
# This will install the latest Docker.
#system("curl --silent https://{}/ | grep -v '( set -x; sleep 20 )' | sudo sh".format(ENGINE_VERSION))
system("sudo apt-get -qy install apt-transport-https ca-certificates curl software-properties-common")
system("curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -")
system("sudo add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial {}'".format(ENGINE_VERSION))
system("sudo apt-get -q update")
system("sudo apt-get -qy install docker-ce")
### Install docker-compose
#system("sudo pip install -U docker-compose=={}".format(COMPOSE_VERSION))
system("sudo curl -sSL -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/{}/docker-compose-{}-{}".format(COMPOSE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-compose")
system("docker-compose version")
### Install docker-machine
system("sudo curl -sSL -o /usr/local/bin/docker-machine https://github.com/docker/machine/releases/download/v{}/docker-machine-{}-{}".format(MACHINE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-machine")
system("docker-machine version")
system("sudo apt-get remove -y --purge dnsmasq-base")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
### Wait for Docker to be up.
### (If we don't do this, Docker will not be responsive during the next step.)
system("while ! sudo -u docker docker version ; do sleep 2; done")
### BEGIN CLUSTERING ###
addresses = list(l.strip() for l in sys.stdin)
assert ipv4 in addresses
def makenames(addrs):
return [ "node%s"%(i+1) for i in range(len(addrs)) ]
while addresses:
cluster = addresses[:CLUSTER_SIZE]
addresses = addresses[CLUSTER_SIZE:]
if ipv4 not in cluster:
continue
names = makenames(cluster)
for ipaddr, name in zip(cluster, names):
system("grep ^{} /etc/hosts || echo {} {} | sudo tee -a /etc/hosts"
.format(ipaddr, ipaddr, name))
print(cluster)
mynode = cluster.index(ipv4) + 1
system("echo node{} | sudo -u docker tee /tmp/node".format(mynode))
system("echo node{} | sudo tee /etc/hostname".format(mynode))
system("sudo hostname node{}".format(mynode))
system("sudo -u docker mkdir -p /home/docker/.ssh")
system("sudo -u docker touch /home/docker/.ssh/authorized_keys")
if ipv4 == cluster[0]:
# If I'm node1 and don't have a private key, generate one (with empty passphrase)
system("sudo -u docker [ -f /home/docker/.ssh/id_rsa ] || sudo -u docker ssh-keygen -t rsa -f /home/docker/.ssh/id_rsa -P ''")
FINISH = time.time()
duration = "Initial deployment took {}s".format(str(FINISH - START)[:5])
system("echo {}".format(duration))
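The node-naming convention in the clustering step above (makenames plus the cluster.index lookup) boils down to 1-based position within the cluster's IP list. A minimal sketch (the helper name is ours, not the script's):

```python
def node_name(cluster_ips, my_ip):
    # Node numbering is 1-based and follows the order of the cluster's
    # IP list, exactly as makenames()/cluster.index() do in postprep.py.
    return "node{}".format(cluster_ips.index(my_ip) + 1)

print(node_name(["203.0.113.10", "203.0.113.11", "203.0.113.12"], "203.0.113.11"))
# node2
```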

prepare-vms/lib/pssh.sh Normal file

@@ -0,0 +1,23 @@
# This file can be sourced in order to directly run commands on
# a batch of VMs whose IPs are located in ips.txt of the directory in which
# the command is run.
pssh() {
HOSTFILE="ips.txt"
[ -f $HOSTFILE ] || {
>/dev/stderr echo "No hostfile found at $HOSTFILE"
return
}
echo "[parallel-ssh] $@"
export PSSH=$(which pssh || which parallel-ssh)
$PSSH -h $HOSTFILE -l ubuntu \
--par 100 \
-O LogLevel=ERROR \
-O UserKnownHostsFile=/dev/null \
-O StrictHostKeyChecking=no \
-O ForwardAgent=yes \
"$@"
}

prepare-vms/lib/wkhtmltopdf Executable file

@@ -0,0 +1,4 @@
#!/bin/sh
pidof Xvfb || Xvfb -terminate &
export DISPLAY=:0
exec wkhtmltopdf.real "$@"


@@ -1,45 +0,0 @@
pssh -I tee /tmp/postprep.py <<EOF
#!/usr/bin/env python
import os
import sys
import urllib
clustersize = 5
myaddr = urllib.urlopen("http://myip.enix.org/REMOTE_ADDR").read()
addresses = list(l.strip() for l in sys.stdin)
def makenames(addrs):
return [ "node%s"%(i+1) for i in range(len(addrs)) ]
while addresses:
cluster = addresses[:clustersize]
addresses = addresses[clustersize:]
if myaddr not in cluster:
continue
names = makenames(cluster)
for ipaddr, name in zip(cluster, names):
os.system("grep ^%s /etc/hosts || echo %s %s | sudo tee -a /etc/hosts"
%(ipaddr, ipaddr, name))
if myaddr == cluster[0]:
os.system("[ -f .ssh/id_rsa ] || ssh-keygen -t rsa -f .ssh/id_rsa -P ''")
os.system("sudo apt-get remove -y --purge dnsmasq-base")
os.system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip")
os.system("sudo easy_install pip")
os.system("sudo pip install docker-compose==1.5.2")
os.system("docker pull swarm:1.0.1")
os.system("docker tag -f swarm:1.0.1 swarm")
#os.system("sudo curl -L https://github.com/docker/machine/releases/download/v0.5.6/docker-machine_linux-amd64.zip -o /tmp/docker-machine.zip")
#os.system("cd /usr/local/bin ; sudo unzip /tmp/docker-machine.zip")
os.system("sudo curl -L https://github.com/docker/machine/releases/download/v0.5.6/docker-machine_linux-amd64 -o /usr/local/bin/docker-machine")
os.system("sudo chmod +x /usr/local/bin/docker-machine*")
os.system("echo 1000000 | sudo tee /proc/sys/net/nf_conntrack_max")
#os.system("""sudo sed -i 's,^DOCKER_OPTS=.*,DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:55555",' /etc/default/docker""")
#os.system("sudo service docker restart")
EOF
pssh -t 300 -I "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" < ips.txt
pssh "[ -f .ssh/id_rsa ] || scp -o StrictHostKeyChecking=no node1:.ssh/id_rsa* .ssh"
pssh "grep docker@ .ssh/authorized_keys || cat .ssh/id_rsa.pub >> .ssh/authorized_keys"


@@ -1,23 +0,0 @@
pssh () {
HOSTFILES="hosts ips.txt /tmp/hosts"
for HOSTFILE in $HOSTFILES; do
[ -f $HOSTFILE ] && break
done
[ -f $HOSTFILE ] || {
echo "No hostfile found (tried $HOSTFILES)"
return
}
parallel-ssh -h $HOSTFILE -l docker \
-O UserKnownHostsFile=/dev/null -O StrictHostKeyChecking=no \
-O ForwardAgent=yes \
"$@"
}
pcopykey () {
ssh-add -L | pssh --askpass --send-input \
"mkdir -p .ssh; tee .ssh/authorized_keys"
ssh-add -L | pssh --send-input \
"sudo mkdir -p /root/.ssh; sudo tee /root/.ssh/authorized_keys"
}


@@ -0,0 +1,5 @@
# Number of VMs per cluster
clustersize: 5
# Jinja2 template to use to generate ready-to-cut cards
cards_template: clusters.csv


@@ -0,0 +1,24 @@
# customize your cluster size, your cards template, and the versions
# Number of VMs per cluster
clustersize: 5
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.18.0
machine_version: 0.13.0


@@ -0,0 +1,24 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
# Number of VMs per cluster
clustersize: 1
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.17.1
machine_version: 0.13.0


@@ -0,0 +1,106 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 21.5%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.4em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>


@@ -0,0 +1,24 @@
# 3 nodes for k8s 101 workshops
# Number of VMs per cluster
clustersize: 3
# Jinja2 template to use to generate ready-to-cut cards
cards_template: settings/kube101.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.20.1
machine_version: 0.14.0


@@ -0,0 +1,24 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
# Number of VMs per cluster
clustersize: 5
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.20.1
machine_version: 0.14.0

Binary file not shown.


prepare-vms/workshopctl Executable file

@@ -0,0 +1,82 @@
#!/bin/bash
# Get the script's real directory, whether we're being called directly or via a symlink
if [ -L "$0" ]; then
export SCRIPT_DIR=$(dirname $(readlink "$0"))
else
export SCRIPT_DIR=$(dirname "$0")
fi
# Load all scriptlets
cd "$SCRIPT_DIR"
for lib in lib/*.sh; do
. $lib
done
TRAINER_IMAGE="preparevms_prepare-vms"
DEPENDENCIES="
aws
ssh
curl
jq
pssh
wkhtmltopdf
man
"
ENVVARS="
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
SSH_AUTH_SOCK
"
check_envvars() {
status=0
for envvar in $ENVVARS; do
if [ -z "${!envvar}" ]; then
error "Environment variable $envvar is not set."
if [ "$envvar" = "SSH_AUTH_SOCK" ]; then
error "Hint: run 'eval \$(ssh-agent) ; ssh-add' and try again?"
fi
status=1
fi
done
return $status
}
check_dependencies() {
status=0
for dependency in $DEPENDENCIES; do
if ! command -v $dependency >/dev/null; then
warning "Dependency $dependency could not be found."
status=1
fi
done
return $status
}
check_image() {
docker inspect $TRAINER_IMAGE >/dev/null 2>&1
}
check_envvars \
|| die "Please set all required environment variables."
check_dependencies \
|| warning "At least one dependency is missing. Install it or try the image wrapper."
# Now check which command was invoked and execute it
if [ "$1" ]; then
cmd="$1"
shift
else
cmd=help
fi
fun=_cmd_$cmd
type -t $fun | grep -q function || die "Invalid command: $cmd"
$fun "$@"
# export SSH_AUTH_DIRNAME=$(dirname $SSH_AUTH_SOCK)
# docker-compose run prepare-vms "$@"

prom/Dockerfile Normal file

@@ -0,0 +1,3 @@
FROM prom/prometheus:v1.4.1
COPY prometheus.yml /etc/prometheus/prometheus.yml

prom/prometheus.yml Normal file

@@ -0,0 +1,17 @@
global:
scrape_interval: 10s
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'node'
dns_sd_configs:
- names: ['tasks.node']
type: 'A'
port: 9100
- job_name: 'cadvisor'
dns_sd_configs:
- names: ['tasks.cadvisor']
type: 'A'
port: 8080

slides/README.md Normal file

@@ -0,0 +1,56 @@
# MarkMaker
General principles:
- each slide deck is described in a YAML manifest;
- the YAML manifest lists a number of Markdown files
that compose the slide deck;
- a Python script "compiles" the YAML manifest into
an HTML file;
(you don't need to host it), or you can publish it
(along with a few static assets) if you want.
## Getting started
Look at the YAML file corresponding to the deck that
you want to edit. The format should be self-explanatory.
*I (Jérôme) am still in the process of fine-tuning that
format. Once I settle for something, I will add better
documentation.*
Make changes in the YAML file, and/or in the referenced
Markdown files. If you have never used Remark before:
- use `---` to separate slides,
- use `.foo[bla]` if you want `bla` to have CSS class `foo`,
- define (or edit) CSS classes in [workshop.css](workshop.css).
After making changes, run `./build.sh once`; it will
compile each `foo.yml` file into `foo.yml.html`.
You can also run `./build.sh forever`: it will monitor the current
directory and rebuild slides automatically when files are modified.
## Publishing pipeline
Each time we push to `master`, a webhook pings
[Netlify](https://www.netlify.com/), which will pull
the repo, build the slides (by running `build.sh once`),
and publish them to http://container.training/.
Pull requests are automatically deployed to testing
subdomains. I had no idea that I would ever say this
about a static page hosting service, but it is seriously awesome. ⚡️💥
## Extra bells and whistles
You can run `./slidechecker foo.yml.html` to check for
missing images and show the number of slides in that deck.
It requires `phantomjs` to be installed. It takes some
time to run so it is not yet integrated with the publishing
pipeline.

slides/TODO Normal file

@@ -0,0 +1,8 @@
Black belt references that I want to add somewhere:
What Have Namespaces Done for You Lately?
https://www.youtube.com/watch?v=MHv6cWjvQjM&list=PLkA60AVN3hh-biQ6SCtBJ-WVTyBmmYho8&index=8
Cilium: Network and Application Security with BPF and XDP
https://www.youtube.com/watch?v=ilKlmTDdFgk&list=PLkA60AVN3hh-biQ6SCtBJ-WVTyBmmYho8&index=9

slides/_redirects Normal file

@@ -0,0 +1 @@
/ /kube-halfday.yml.html 200!

slides/appendcheck.py Executable file

@@ -0,0 +1,17 @@
#!/usr/bin/env python
import logging
import os
import subprocess
import sys
logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"))
filename = sys.argv[1]
logging.info("Checking file {}...".format(filename))
text = subprocess.check_output(["./slidechecker.js", filename])
html = open(filename).read()
html = html.replace("</textarea>", "\n---\n```\n{}\n```\n</textarea>".format(text))
open(filename, "w").write(html)

slides/autopilot/autotest.py Executable file

@@ -0,0 +1,418 @@
#!/usr/bin/env python
# coding: utf-8
import click
import logging
import os
import random
import re
import select
import subprocess
import sys
import time
import uuid
import yaml
logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO"))
TIMEOUT = 60 # 1 minute
# This one is not a constant. It's an ugly global.
IPADDR = None
class State(object):
def __init__(self):
self.interactive = True
self.verify_status = False
self.simulate_type = True
self.slide = 1
self.snippet = 0
def load(self):
data = yaml.safe_load(open("state.yaml"))
self.interactive = bool(data["interactive"])
self.verify_status = bool(data["verify_status"])
self.simulate_type = bool(data["simulate_type"])
self.slide = int(data["slide"])
self.snippet = int(data["snippet"])
def save(self):
with open("state.yaml", "w") as f:
yaml.dump(dict(
interactive=self.interactive,
verify_status=self.verify_status,
simulate_type=self.simulate_type,
slide=self.slide,
snippet=self.snippet,
), f, default_flow_style=False)
state = State()
def hrule():
return "="*int(subprocess.check_output(["tput", "cols"]))
# A "snippet" is something that the user is supposed to do in the workshop.
# Most of the "snippets" are shell commands.
# Some of them can be key strokes or other actions.
# In the markdown source, they are the code sections (identified by triple-
# quotes) within .exercise[] sections.
class Snippet(object):
def __init__(self, slide, content):
self.slide = slide
self.content = content
# Extract the "method" (e.g. bash, keys, ...)
# On multi-line snippets, the method is alone on the first line
# On single-line snippets, the data follows the method immediately
if '\n' in content:
self.method, self.data = content.split('\n', 1)
else:
self.method, self.data = content.split(' ', 1)
self.data = self.data.strip()
self.next = None
def __str__(self):
return self.content
class Slide(object):
current_slide = 0
def __init__(self, content):
self.number = Slide.current_slide
Slide.current_slide += 1
# Remove commented-out slides
# (remark.js considers ??? to be the separator for speaker notes)
content = re.split(r"\n\?\?\?\n", content)[0]
self.content = content
self.snippets = []
exercises = re.findall(r"\.exercise\[(.*)\]", content, re.DOTALL)
for exercise in exercises:
if "```" in exercise:
previous = None
for snippet_content in exercise.split("```")[1::2]:
snippet = Snippet(self, snippet_content)
if previous:
previous.next = snippet
previous = snippet
self.snippets.append(snippet)
else:
logging.warning("Exercise on slide {} does not have any ``` snippet."
.format(self.number))
self.debug()
def __str__(self):
text = self.content
for snippet in self.snippets:
text = text.replace(snippet.content, ansi("7")(snippet.content))
return text
def debug(self):
logging.debug("\n{}\n{}\n{}".format(hrule(), self.content, hrule()))
def focus_slides():
subprocess.check_output(["i3-msg", "workspace", "3"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def focus_terminal():
subprocess.check_output(["i3-msg", "workspace", "2"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def focus_browser():
subprocess.check_output(["i3-msg", "workspace", "4"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def ansi(code):
return lambda s: "\x1b[{}m{}\x1b[0m".format(code, s)
# Sleep for the indicated delay, but allow interruption by pressing ENTER.
# If interrupted, returns True.
def interruptible_sleep(t):
rfds, _, _ = select.select([0], [], [], t)
return 0 in rfds
def wait_for_string(s, timeout=TIMEOUT):
logging.debug("Waiting for string: {}".format(s))
deadline = time.time() + timeout
while time.time() < deadline:
output = capture_pane()
if s in output:
return
if interruptible_sleep(1): return
raise Exception("Timed out while waiting for {}!".format(s))
def wait_for_prompt():
logging.debug("Waiting for prompt.")
deadline = time.time() + TIMEOUT
while time.time() < deadline:
output = capture_pane()
# If we are not at the bottom of the screen, there will be a bunch of extra \n's
output = output.rstrip('\n')
last_line = output.split('\n')[-1]
# Our custom prompt on the VMs has two lines; the 2nd line is just '$'
if last_line == "$":
# This is a perfect opportunity to grab the node's IP address
global IPADDR
IPADDR = re.findall(r"^\[(.*)\]", output, re.MULTILINE)[-1]
return
# When we are in an alpine container, the prompt will be "/ #"
if last_line == "/ #":
return
# We did not recognize a known prompt; wait a bit and check again
logging.debug("Could not find a known prompt on last line: {!r}"
.format(last_line))
if interruptible_sleep(1): return
raise Exception("Timed out while waiting for prompt!")
def check_exit_status():
if not state.verify_status:
return
token = uuid.uuid4().hex
data = "echo {} $?\n".format(token)
logging.debug("Sending {!r} to get exit status.".format(data))
send_keys(data)
time.sleep(0.5)
wait_for_prompt()
screen = capture_pane()
status = re.findall("\n{} ([0-9]+)\n".format(token), screen, re.MULTILINE)
logging.debug("Got exit status: {}.".format(status))
if len(status) == 0:
raise Exception("Couldn't retrieve status code {}. Timed out?".format(token))
if len(status) > 1:
raise Exception("More than one status code {}. I'm seeing double! Shoot them both.".format(token))
code = int(status[0])
if code != 0:
raise Exception("Non-zero exit status: {}.".format(code))
# Otherwise just return peacefully.
def setup_tmux_and_ssh():
if subprocess.call(["tmux", "has-session"]):
logging.error("Couldn't connect to tmux. Please set up tmux first.")
ipaddr = open("../../prepare-vms/ips.txt").read().split("\n")[0]
uid = os.getuid()
raise Exception("""
1. If you're running this directly from a node:
tmux
2. If you want to control a remote tmux:
rm -f /tmp/tmux-{uid}/default && ssh -t -L /tmp/tmux-{uid}/default:/tmp/tmux-1001/default docker@{ipaddr} tmux new-session -As 0
3. If you cannot control a remote tmux:
tmux new-session ssh docker@{ipaddr}
""".format(uid=uid, ipaddr=ipaddr))
else:
logging.info("Found tmux session. Trying to acquire shell prompt.")
wait_for_prompt()
logging.info("Successfully connected to test cluster in tmux session.")
slides = [Slide("Dummy slide zero")]
content = open(sys.argv[1]).read()
# OK, this part is definitely hackish, and will break if the
# excludedClasses parameter is not on a single line.
excluded_classes = re.findall(r"excludedClasses: (\[.*\])", content)
# The list literal is YAML-compatible, so parse it safely instead of eval()
excluded_classes = set(yaml.safe_load(excluded_classes[0]))
for slide in re.split("\n---?\n", content):
slide_classes = re.findall("class: (.*)", slide)
if slide_classes:
slide_classes = slide_classes[0].split(",")
slide_classes = [c.strip() for c in slide_classes]
if excluded_classes & set(slide_classes):
logging.info("Skipping excluded slide.")
continue
slides.append(Slide(slide))
def send_keys(data):
if state.simulate_type and data[0] != '^':
for key in data:
if key == ";":
key = "\\;"
if key == "\n":
if interruptible_sleep(1): return
subprocess.check_call(["tmux", "send-keys", key])
if interruptible_sleep(0.15*random.random()): return
if key == "\n":
if interruptible_sleep(1): return
else:
subprocess.check_call(["tmux", "send-keys", data])
def capture_pane():
return subprocess.check_output(["tmux", "capture-pane", "-p"]).decode('utf-8')
setup_tmux_and_ssh()
try:
state.load()
logging.info("Successfully loaded state from file.")
# Let's override the starting state, so that when an error occurs,
# we can restart the auto-tester and then single-step or debug.
# (Instead of running again through the same issue immediately.)
state.interactive = True
except Exception as e:
logging.exception("Could not load state from file.")
logging.warning("Using default values.")
def move_forward():
state.snippet += 1
if state.snippet > len(slides[state.slide].snippets):
state.slide += 1
state.snippet = 0
check_bounds()
def move_backward():
state.snippet -= 1
if state.snippet < 0:
state.slide -= 1
state.snippet = 0
check_bounds()
def check_bounds():
if state.slide < 1:
state.slide = 1
if state.slide >= len(slides):
state.slide = len(slides)-1
while True:
state.save()
slide = slides[state.slide]
snippet = slide.snippets[state.snippet-1] if state.snippet else None
click.clear()
print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}]"
.format(state.slide, len(slides)-1,
state.snippet, len(slide.snippets) if slide.snippets else 0,
state.simulate_type, state.verify_status))
print(hrule())
if snippet:
print(slide.content.replace(snippet.content, ansi(7)(snippet.content)))
focus_terminal()
else:
print(slide.content)
subprocess.check_output(["./gotoslide.js", str(slide.number)])
focus_slides()
print(hrule())
if state.interactive:
print("y/⎵/⏎ Execute snippet or advance to next snippet")
print("p/← Previous")
print("n/→ Next")
print("s Simulate keystrokes")
print("v Validate exit status")
print("g Go to a specific slide")
print("q Quit")
print("c Continue non-interactively until next error")
command = click.getchar()
else:
command = "y"
if command in ("n", "\x1b[C"):
move_forward()
elif command in ("p", "\x1b[D"):
move_backward()
elif command == "s":
state.simulate_type = not state.simulate_type
elif command == "v":
state.verify_status = not state.verify_status
elif command == "g":
state.slide = click.prompt("Enter slide number", type=int)
state.snippet = 0
check_bounds()
elif command == "q":
break
elif command == "c":
# continue until next timeout
state.interactive = False
elif command in ("y", "\r", " "):
if not snippet:
# Advance to next snippet
# Advance until a slide that has snippets
while not slides[state.slide].snippets:
move_forward()
# But stop if we reach the last slide
if state.slide == len(slides)-1:
break
# And then advance to the snippet
move_forward()
continue
method, data = snippet.method, snippet.data
logging.info("Running with method {}: {}".format(method, data))
if method == "keys":
send_keys(data)
elif method == "bash":
# Make sure that we're ready
wait_for_prompt()
# Strip leading spaces
data = re.sub("\n +", "\n", data)
# Remove backticks (they are used to highlight sections)
data = data.replace('`', '')
# Add "RETURN" at the end of the command :)
data += "\n"
# Send command
send_keys(data)
# Force a short sleep to avoid race condition
time.sleep(0.5)
if snippet.next and snippet.next.method == "wait":
wait_for_string(snippet.next.data)
elif snippet.next and snippet.next.method == "longwait":
wait_for_string(snippet.next.data, 10*TIMEOUT)
else:
wait_for_prompt()
# Verify return code
check_exit_status()
elif method == "copypaste":
screen = capture_pane()
matches = re.findall(data, screen, flags=re.DOTALL)
if len(matches) == 0:
raise Exception("Could not find regex {} in output.".format(data))
# Arbitrarily get the most recent match
match = matches[-1]
# Remove line breaks (like a screen copy paste would do)
match = match.replace('\n', '')
send_keys(match + '\n')
# FIXME: we should factor out the "bash" method
wait_for_prompt()
check_exit_status()
elif method == "open":
# Cheap way to get node1's IP address
screen = capture_pane()
url = data.replace("/node1", "/{}".format(IPADDR))
# This should probably be adapted to run on different OS
subprocess.check_output(["xdg-open", url])
focus_browser()
if state.interactive:
print("Press any key to continue to next step...")
click.getchar()
else:
logging.warning("Unknown method {}: {!r}".format(method, data))
move_forward()
else:
logging.warning("Unknown command {}.".format(command))
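The exercise-parsing logic in `autotest.py` can be checked standalone. This sketch applies the same regular expressions to a small hypothetical slide (the fence string is built with string math only to avoid nesting backticks in this document):

```python
import re

FENCE = "`" * 3  # a literal triple backtick

# A hypothetical slide containing one .exercise[] block
slide = (
    "## Try it\n\n.exercise[\n\n- Run the demo:\n"
    "  " + FENCE + "bash\n  echo hello\n  " + FENCE + "\n\n]\n"
)

# Same extraction as autotest.py: grab the .exercise[...] section,
# then the odd-numbered chunks between fences are the snippets.
exercises = re.findall(r"\.exercise\[(.*)\]", slide, re.DOTALL)
snippets = exercises[0].split(FENCE)[1::2]

# Split a snippet into its method ("bash", "keys", ...) and data
method, data = snippets[0].split("\n", 1)
print(method, data.strip())  # bash echo hello
```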

slides/autopilot/gotoslide.js Executable file

@@ -0,0 +1,17 @@
#!/usr/bin/env node
/* Expects a slide number as first argument.
* Will connect to the local pub/sub server,
* and issue a "go to slide X" command, which
* will be sent to all connected browsers.
*/
var io = require('socket.io-client');
var socket = io('http://localhost:3000');
socket.on('connect_error', function(){
console.log('connection error');
socket.close();
});
socket.emit('slide change', process.argv[2], function(){
socket.close();
});

slides/autopilot/package-lock.json generated Normal file

@@ -0,0 +1,603 @@
{
"name": "container-training-pub-sub-server",
"version": "0.0.1",
"lockfileVersion": 1,
"requires": true,
"dependencies": {
"accepts": {
"version": "1.3.4",
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.4.tgz",
"integrity": "sha1-hiRnWMfdbSGmR0/whKR0DsBesh8=",
"requires": {
"mime-types": "2.1.17",
"negotiator": "0.6.1"
}
},
"after": {
"version": "0.8.2",
"resolved": "https://registry.npmjs.org/after/-/after-0.8.2.tgz",
"integrity": "sha1-/ts5T58OAqqXaOcCvaI7UF+ufh8="
},
"array-flatten": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/array-flatten/-/array-flatten-1.1.1.tgz",
"integrity": "sha1-ml9pkFGx5wczKPKgCJaLZOopVdI="
},
"arraybuffer.slice": {
"version": "0.0.6",
"resolved": "https://registry.npmjs.org/arraybuffer.slice/-/arraybuffer.slice-0.0.6.tgz",
"integrity": "sha1-8zshWfBTKj8xB6JywMz70a0peco="
},
"async-limiter": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/async-limiter/-/async-limiter-1.0.0.tgz",
"integrity": "sha512-jp/uFnooOiO+L211eZOoSyzpOITMXx1rBITauYykG3BRYPu8h0UcxsPNB04RR5vo4Tyz3+ay17tR6JVf9qzYWg=="
},
"backo2": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/backo2/-/backo2-1.0.2.tgz",
"integrity": "sha1-MasayLEpNjRj41s+u2n038+6eUc="
},
"base64-arraybuffer": {
"version": "0.1.5",
"resolved": "https://registry.npmjs.org/base64-arraybuffer/-/base64-arraybuffer-0.1.5.tgz",
"integrity": "sha1-c5JncZI7Whl0etZmqlzUv5xunOg="
},
"base64id": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/base64id/-/base64id-1.0.0.tgz",
"integrity": "sha1-R2iMuZu2gE8OBtPnY7HDLlfY5rY="
},
"better-assert": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/better-assert/-/better-assert-1.0.2.tgz",
"integrity": "sha1-QIZrnhueC1W0gYlDEeaPr/rrxSI=",
"requires": {
"callsite": "1.0.0"
}
},
"blob": {
"version": "0.0.4",
"resolved": "https://registry.npmjs.org/blob/-/blob-0.0.4.tgz",
"integrity": "sha1-vPEwUspURj8w+fx+lbmkdjCpSSE="
},
"body-parser": {
"version": "1.18.2",
"resolved": "https://registry.npmjs.org/body-parser/-/body-parser-1.18.2.tgz",
"integrity": "sha1-h2eKGdhLR9hZuDGZvVm84iKxBFQ=",
"requires": {
"bytes": "3.0.0",
"content-type": "1.0.4",
"debug": "2.6.9",
"depd": "1.1.1",
"http-errors": "1.6.2",
"iconv-lite": "0.4.19",
"on-finished": "2.3.0",
"qs": "6.5.1",
"raw-body": "2.3.2",
"type-is": "1.6.15"
}
},
"bytes": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.0.0.tgz",
"integrity": "sha1-0ygVQE1olpn4Wk6k+odV3ROpYEg="
},
"callsite": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/callsite/-/callsite-1.0.0.tgz",
"integrity": "sha1-KAOY5dZkvXQDi28JBRU+borxvCA="
},
"component-bind": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/component-bind/-/component-bind-1.0.0.tgz",
"integrity": "sha1-AMYIq33Nk4l8AAllGx06jh5zu9E="
},
"component-emitter": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/component-emitter/-/component-emitter-1.2.1.tgz",
"integrity": "sha1-E3kY1teCg/ffemt8WmPhQOaUJeY="
},
"component-inherit": {
"version": "0.0.3",
"resolved": "https://registry.npmjs.org/component-inherit/-/component-inherit-0.0.3.tgz",
"integrity": "sha1-ZF/ErfWLcrZJ1crmUTVhnbJv8UM="
},
"content-disposition": {
"version": "0.5.2",
"resolved": "https://registry.npmjs.org/content-disposition/-/content-disposition-0.5.2.tgz",
"integrity": "sha1-DPaLud318r55YcOoUXjLhdunjLQ="
},
"content-type": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/content-type/-/content-type-1.0.4.tgz",
"integrity": "sha512-hIP3EEPs8tB9AT1L+NUqtwOAps4mk2Zob89MWXMHjHWg9milF/j4osnnQLXBCBFBk/tvIG/tUc9mOUJiPBhPXA=="
},
"cookie": {
"version": "0.3.1",
"resolved": "https://registry.npmjs.org/cookie/-/cookie-0.3.1.tgz",
"integrity": "sha1-5+Ch+e9DtMi6klxcWpboBtFoc7s="
},
"cookie-signature": {
"version": "1.0.6",
"resolved": "https://registry.npmjs.org/cookie-signature/-/cookie-signature-1.0.6.tgz",
"integrity": "sha1-4wOogrNCzD7oylE6eZmXNNqzriw="
},
"debug": {
"version": "2.6.9",
"resolved": "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz",
"integrity": "sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA==",
"requires": {
"ms": "2.0.0"
}
},
"depd": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/depd/-/depd-1.1.1.tgz",
"integrity": "sha1-V4O04cRZ8G+lyif5kfPQbnoxA1k="
},
"destroy": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/destroy/-/destroy-1.0.4.tgz",
"integrity": "sha1-l4hXRCxEdJ5CBmE+N5RiBYJqvYA="
},
"ee-first": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz",
"integrity": "sha1-WQxhFWsK4vTwJVcyoViyZrxWsh0="
},
"encodeurl": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.1.tgz",
"integrity": "sha1-eePVhlU0aQn+bw9Fpd5oEDspTSA="
},
"engine.io": {
"version": "3.1.4",
"resolved": "https://registry.npmjs.org/engine.io/-/engine.io-3.1.4.tgz",
"integrity": "sha1-PQIRtwpVLOhB/8fahiezAamkFi4=",
"requires": {
"accepts": "1.3.3",
"base64id": "1.0.0",
"cookie": "0.3.1",
"debug": "2.6.9",
"engine.io-parser": "2.1.1",
"uws": "0.14.5",
"ws": "3.3.3"
},
"dependencies": {
"accepts": {
"version": "1.3.3",
"resolved": "https://registry.npmjs.org/accepts/-/accepts-1.3.3.tgz",
"integrity": "sha1-w8p0NJOGSMPg2cHjKN1otiLChMo=",
"requires": {
"mime-types": "2.1.17",
"negotiator": "0.6.1"
}
}
}
},
"engine.io-client": {
"version": "3.1.4",
"resolved": "https://registry.npmjs.org/engine.io-client/-/engine.io-client-3.1.4.tgz",
"integrity": "sha1-T88TcLRxY70s6b4nM5ckMDUNTqE=",
"requires": {
"component-emitter": "1.2.1",
"component-inherit": "0.0.3",
"debug": "2.6.9",
"engine.io-parser": "2.1.1",
"has-cors": "1.1.0",
"indexof": "0.0.1",
"parseqs": "0.0.5",
"parseuri": "0.0.5",
"ws": "3.3.3",
"xmlhttprequest-ssl": "1.5.4",
"yeast": "0.1.2"
}
},
"engine.io-parser": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/engine.io-parser/-/engine.io-parser-2.1.1.tgz",
"integrity": "sha1-4Ps/DgRi9/WLt3waUun1p+JuRmg=",
"requires": {
"after": "0.8.2",
"arraybuffer.slice": "0.0.6",
"base64-arraybuffer": "0.1.5",
"blob": "0.0.4",
"has-binary2": "1.0.2"
}
},
"escape-html": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/escape-html/-/escape-html-1.0.3.tgz",
"integrity": "sha1-Aljq5NPQwJdN4cFpGI7wBR0dGYg="
},
"etag": {
"version": "1.8.1",
"resolved": "https://registry.npmjs.org/etag/-/etag-1.8.1.tgz",
"integrity": "sha1-Qa4u62XvpiJorr/qg6x9eSmbCIc="
},
"express": {
"version": "4.16.2",
"resolved": "https://registry.npmjs.org/express/-/express-4.16.2.tgz",
"integrity": "sha1-41xt/i1kt9ygpc1PIXgb4ymeB2w=",
"requires": {
"accepts": "1.3.4",
"array-flatten": "1.1.1",
"body-parser": "1.18.2",
"content-disposition": "0.5.2",
"content-type": "1.0.4",
"cookie": "0.3.1",
"cookie-signature": "1.0.6",
"debug": "2.6.9",
"depd": "1.1.1",
"encodeurl": "1.0.1",
"escape-html": "1.0.3",
"etag": "1.8.1",
"finalhandler": "1.1.0",
"fresh": "0.5.2",
"merge-descriptors": "1.0.1",
"methods": "1.1.2",
"on-finished": "2.3.0",
"parseurl": "1.3.2",
"path-to-regexp": "0.1.7",
"proxy-addr": "2.0.2",
"qs": "6.5.1",
"range-parser": "1.2.0",
"safe-buffer": "5.1.1",
"send": "0.16.1",
"serve-static": "1.13.1",
"setprototypeof": "1.1.0",
"statuses": "1.3.1",
"type-is": "1.6.15",
"utils-merge": "1.0.1",
"vary": "1.1.2"
}
},
"finalhandler": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-1.1.0.tgz",
"integrity": "sha1-zgtoVbRYU+eRsvzGgARtiCU91/U=",
"requires": {
"debug": "2.6.9",
"encodeurl": "1.0.1",
"escape-html": "1.0.3",
"on-finished": "2.3.0",
"parseurl": "1.3.2",
"statuses": "1.3.1",
"unpipe": "1.0.0"
}
},
"forwarded": {
"version": "0.1.2",
"resolved": "https://registry.npmjs.org/forwarded/-/forwarded-0.1.2.tgz",
"integrity": "sha1-mMI9qxF1ZXuMBXPozszZGw/xjIQ="
},
"fresh": {
"version": "0.5.2",
"resolved": "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz",
"integrity": "sha1-PYyt2Q2XZWn6g1qx+OSyOhBWBac="
},
"has-binary2": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/has-binary2/-/has-binary2-1.0.2.tgz",
"integrity": "sha1-6D26SfC5vk0CbSc2U1DZ8D9Uvpg=",
"requires": {
"isarray": "2.0.1"
}
},
"has-cors": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/has-cors/-/has-cors-1.1.0.tgz",
"integrity": "sha1-XkdHk/fqmEPRu5nCPu9J/xJv/zk="
},
"http-errors": {
"version": "1.6.2",
"resolved": "https://registry.npmjs.org/http-errors/-/http-errors-1.6.2.tgz",
"integrity": "sha1-CgAsyFcHGSp+eUbO7cERVfYOxzY=",
"requires": {
"depd": "1.1.1",
"inherits": "2.0.3",
"setprototypeof": "1.0.3",
"statuses": "1.3.1"
},
"dependencies": {
"setprototypeof": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.0.3.tgz",
"integrity": "sha1-ZlZ+NwQ+608E2RvWWMDL77VbjgQ="
}
}
},
"iconv-lite": {
"version": "0.4.19",
"resolved": "https://registry.npmjs.org/iconv-lite/-/iconv-lite-0.4.19.tgz",
"integrity": "sha512-oTZqweIP51xaGPI4uPa56/Pri/480R+mo7SeU+YETByQNhDG55ycFyNLIgta9vXhILrxXDmF7ZGhqZIcuN0gJQ=="
},
"indexof": {
"version": "0.0.1",
"resolved": "https://registry.npmjs.org/indexof/-/indexof-0.0.1.tgz",
"integrity": "sha1-gtwzbSMrkGIXnQWrMpOmYFn9Q10="
},
"inherits": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz",
"integrity": "sha1-Yzwsg+PaQqUC9SRmAiSA9CCCYd4="
},
"ipaddr.js": {
"version": "1.5.2",
"resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.5.2.tgz",
"integrity": "sha1-1LUFvemUaYfM8PxY2QEP+WB+P6A="
},
"isarray": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/isarray/-/isarray-2.0.1.tgz",
"integrity": "sha1-o32U7ZzaLVmGXJ92/llu4fM4dB4="
},
"media-typer": {
"version": "0.3.0",
"resolved": "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz",
"integrity": "sha1-hxDXrwqmJvj/+hzgAWhUUmMlV0g="
},
"merge-descriptors": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/merge-descriptors/-/merge-descriptors-1.0.1.tgz",
"integrity": "sha1-sAqqVW3YtEVoFQ7J0blT8/kMu2E="
},
"methods": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz",
"integrity": "sha1-VSmk1nZUE07cxSZmVoNbD4Ua/O4="
},
"mime": {
"version": "1.4.1",
"resolved": "https://registry.npmjs.org/mime/-/mime-1.4.1.tgz",
"integrity": "sha512-KI1+qOZu5DcW6wayYHSzR/tXKCDC5Om4s1z2QJjDULzLcmf3DvzS7oluY4HCTrc+9FiKmWUgeNLg7W3uIQvxtQ=="
},
"mime-db": {
"version": "1.30.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.30.0.tgz",
"integrity": "sha1-dMZD2i3Z1qRTmZY0ZbJtXKfXHwE="
},
"mime-types": {
"version": "2.1.17",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.17.tgz",
"integrity": "sha1-Cdejk/A+mVp5+K+Fe3Cp4KsWVXo=",
"requires": {
"mime-db": "1.30.0"
}
},
"ms": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz",
"integrity": "sha1-VgiurfwAvmwpAd9fmGF4jeDVl8g="
},
"negotiator": {
"version": "0.6.1",
"resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.1.tgz",
"integrity": "sha1-KzJxhOiZIQEXeyhWP7XnECrNDKk="
},
"object-component": {
"version": "0.0.3",
"resolved": "https://registry.npmjs.org/object-component/-/object-component-0.0.3.tgz",
"integrity": "sha1-8MaapQ78lbhmwYb0AKM3acsvEpE="
},
"on-finished": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/on-finished/-/on-finished-2.3.0.tgz",
"integrity": "sha1-IPEzZIGwg811M3mSoWlxqi2QaUc=",
"requires": {
"ee-first": "1.1.1"
}
},
"parseqs": {
"version": "0.0.5",
"resolved": "https://registry.npmjs.org/parseqs/-/parseqs-0.0.5.tgz",
"integrity": "sha1-1SCKNzjkZ2bikbouoXNoSSGouJ0=",
"requires": {
"better-assert": "1.0.2"
}
},
"parseuri": {
"version": "0.0.5",
"resolved": "https://registry.npmjs.org/parseuri/-/parseuri-0.0.5.tgz",
"integrity": "sha1-gCBKUNTbt3m/3G6+J3jZDkvOMgo=",
"requires": {
"better-assert": "1.0.2"
}
},
"parseurl": {
"version": "1.3.2",
"resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.2.tgz",
"integrity": "sha1-/CidTtiZMRlGDBViUyYs3I3mW/M="
},
"path-to-regexp": {
"version": "0.1.7",
"resolved": "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz",
"integrity": "sha1-32BBeABfUi8V60SQ5yR6G/qmf4w="
},
"proxy-addr": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/proxy-addr/-/proxy-addr-2.0.2.tgz",
"integrity": "sha1-ZXFQT0e7mI7IGAJT+F3X4UlSvew=",
"requires": {
"forwarded": "0.1.2",
"ipaddr.js": "1.5.2"
}
},
"qs": {
"version": "6.5.1",
"resolved": "https://registry.npmjs.org/qs/-/qs-6.5.1.tgz",
"integrity": "sha512-eRzhrN1WSINYCDCbrz796z37LOe3m5tmW7RQf6oBntukAG1nmovJvhnwHHRMAfeoItc1m2Hk02WER2aQ/iqs+A=="
},
"range-parser": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/range-parser/-/range-parser-1.2.0.tgz",
"integrity": "sha1-9JvmtIeJTdxA3MlKMi9hEJLgDV4="
},
"raw-body": {
"version": "2.3.2",
"resolved": "https://registry.npmjs.org/raw-body/-/raw-body-2.3.2.tgz",
"integrity": "sha1-vNYMd9Prk83gBQKVw/N5OJvIj4k=",
"requires": {
"bytes": "3.0.0",
"http-errors": "1.6.2",
"iconv-lite": "0.4.19",
"unpipe": "1.0.0"
}
},
"safe-buffer": {
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.1.tgz",
"integrity": "sha512-kKvNJn6Mm93gAczWVJg7wH+wGYWNrDHdWvpUmHyEsgCtIwwo3bqPtV4tR5tuPaUhTOo/kvhVwd8XwwOllGYkbg=="
},
"send": {
"version": "0.16.1",
"resolved": "https://registry.npmjs.org/send/-/send-0.16.1.tgz",
"integrity": "sha512-ElCLJdJIKPk6ux/Hocwhk7NFHpI3pVm/IZOYWqUmoxcgeyM+MpxHHKhb8QmlJDX1pU6WrgaHBkVNm73Sv7uc2A==",
"requires": {
"debug": "2.6.9",
"depd": "1.1.1",
"destroy": "1.0.4",
"encodeurl": "1.0.1",
"escape-html": "1.0.3",
"etag": "1.8.1",
"fresh": "0.5.2",
"http-errors": "1.6.2",
"mime": "1.4.1",
"ms": "2.0.0",
"on-finished": "2.3.0",
"range-parser": "1.2.0",
"statuses": "1.3.1"
}
},
"serve-static": {
"version": "1.13.1",
"resolved": "https://registry.npmjs.org/serve-static/-/serve-static-1.13.1.tgz",
"integrity": "sha512-hSMUZrsPa/I09VYFJwa627JJkNs0NrfL1Uzuup+GqHfToR2KcsXFymXSV90hoyw3M+msjFuQly+YzIH/q0MGlQ==",
"requires": {
"encodeurl": "1.0.1",
"escape-html": "1.0.3",
"parseurl": "1.3.2",
"send": "0.16.1"
}
},
"setprototypeof": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.0.tgz",
"integrity": "sha512-BvE/TwpZX4FXExxOxZyRGQQv651MSwmWKZGqvmPcRIjDqWub67kTKuIMx43cZZrS/cBBzwBcNDWoFxt2XEFIpQ=="
},
"socket.io": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/socket.io/-/socket.io-2.0.4.tgz",
"integrity": "sha1-waRZDO/4fs8TxyZS8Eb3FrKeYBQ=",
"requires": {
"debug": "2.6.9",
"engine.io": "3.1.4",
"socket.io-adapter": "1.1.1",
"socket.io-client": "2.0.4",
"socket.io-parser": "3.1.2"
}
},
"socket.io-adapter": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/socket.io-adapter/-/socket.io-adapter-1.1.1.tgz",
"integrity": "sha1-KoBeihTWNyEk3ZFZrUUC+MsH8Gs="
},
"socket.io-client": {
"version": "2.0.4",
"resolved": "https://registry.npmjs.org/socket.io-client/-/socket.io-client-2.0.4.tgz",
"integrity": "sha1-CRilUkBtxeVAs4Dc2Xr8SmQzL44=",
"requires": {
"backo2": "1.0.2",
"base64-arraybuffer": "0.1.5",
"component-bind": "1.0.0",
"component-emitter": "1.2.1",
"debug": "2.6.9",
"engine.io-client": "3.1.4",
"has-cors": "1.1.0",
"indexof": "0.0.1",
"object-component": "0.0.3",
"parseqs": "0.0.5",
"parseuri": "0.0.5",
"socket.io-parser": "3.1.2",
"to-array": "0.1.4"
}
},
"socket.io-parser": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/socket.io-parser/-/socket.io-parser-3.1.2.tgz",
"integrity": "sha1-28IoIVH8T6675Aru3Ady66YZ9/I=",
"requires": {
"component-emitter": "1.2.1",
"debug": "2.6.9",
"has-binary2": "1.0.2",
"isarray": "2.0.1"
}
},
"statuses": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/statuses/-/statuses-1.3.1.tgz",
"integrity": "sha1-+vUbnrdKrvOzrPStX2Gr8ky3uT4="
},
"to-array": {
"version": "0.1.4",
"resolved": "https://registry.npmjs.org/to-array/-/to-array-0.1.4.tgz",
"integrity": "sha1-F+bBH3PdTz10zaek/zI46a2b+JA="
},
"type-is": {
"version": "1.6.15",
"resolved": "https://registry.npmjs.org/type-is/-/type-is-1.6.15.tgz",
"integrity": "sha1-yrEPtJCeRByChC6v4a1kbIGARBA=",
"requires": {
"media-typer": "0.3.0",
"mime-types": "2.1.17"
}
},
"ultron": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/ultron/-/ultron-1.1.1.tgz",
"integrity": "sha512-UIEXBNeYmKptWH6z8ZnqTeS8fV74zG0/eRU9VGkpzz+LIJNs8W/zM/L+7ctCkRrgbNnnR0xxw4bKOr0cW0N0Og=="
},
"unpipe": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz",
"integrity": "sha1-sr9O6FFKrmFltIF4KdIbLvSZBOw="
},
"utils-merge": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz",
"integrity": "sha1-n5VxD1CiZ5R7LMwSR0HBAoQn5xM="
},
"uws": {
"version": "0.14.5",
"resolved": "https://registry.npmjs.org/uws/-/uws-0.14.5.tgz",
"integrity": "sha1-Z6rzPEaypYel9mZtAPdpEyjxSdw=",
"optional": true
},
"vary": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/vary/-/vary-1.1.2.tgz",
"integrity": "sha1-IpnwLG3tMNSllhsLn3RSShj2NPw="
},
"ws": {
"version": "3.3.3",
"resolved": "https://registry.npmjs.org/ws/-/ws-3.3.3.tgz",
"integrity": "sha512-nnWLa/NwZSt4KQJu51MYlCcSQ5g7INpOrOMt4XV8j4dqTXdmlUmSHQ8/oLC069ckre0fRsgfvsKwbTdtKLCDkA==",
"requires": {
"async-limiter": "1.0.0",
"safe-buffer": "5.1.1",
"ultron": "1.1.1"
}
},
"xmlhttprequest-ssl": {
"version": "1.5.4",
"resolved": "https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.4.tgz",
"integrity": "sha1-BPVgkVcks4kIhxXMDteBPpZ3v1c="
},
"yeast": {
"version": "0.1.2",
"resolved": "https://registry.npmjs.org/yeast/-/yeast-0.1.2.tgz",
"integrity": "sha1-AI4G2AlDIMNy28L47XagymyKxBk="
}
}
}


@@ -0,0 +1,8 @@
{
"name": "container-training-pub-sub-server",
"version": "0.0.1",
"dependencies": {
"express": "^4.16.2",
"socket.io": "^2.0.4"
}
}


@@ -0,0 +1,21 @@
/* This snippet is loaded from the workshop HTML file.
* It sets up callbacks to synchronize the local slide
* number with the remote pub/sub server.
*/
var socket = io();
var leader = true;
slideshow.on('showSlide', function (slide) {
if (leader) {
var n = slide.getSlideIndex()+1;
socket.emit('slide change', n);
}
});
socket.on('slide change', function (n) {
leader = false;
slideshow.gotoSlide(n);
leader = true;
});
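The `leader` flag above is what prevents an echo loop: applying a remote slide change triggers the local `showSlide` event, which would otherwise re-emit the change to the server. The same guard, as a minimal Python sketch (class and method names are illustrative, not from the codebase):

```python
class SlideSync:
    def __init__(self, emit):
        self.emit = emit    # callback toward the pub/sub server
        self.leader = True  # False only while applying a remote change
        self.slide = 1

    def show_slide(self, n):
        # Local navigation (the 'showSlide' event in remote.js)
        self.slide = n
        if self.leader:
            self.emit(n)

    def remote_change(self, n):
        # 'slide change' received from the server
        self.leader = False
        self.show_slide(n)
        self.leader = True

sent = []
sync = SlideSync(sent.append)
sync.show_slide(2)     # local change: broadcast to the server
sync.remote_change(5)  # remote change: applied, but not re-broadcast
print(sent)  # [2]
```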

slides/autopilot/server.js Executable file

@@ -0,0 +1,41 @@
#!/usr/bin/env node
/* This is a very simple pub/sub server that allows
 * remote control of browsers displaying the slides.
* The browsers connect to this pub/sub server using
* Socket.IO, and the server tells them which slides
* to display.
*
* The server can be controlled with a little CLI,
* or by one of the browsers.
*/
var express = require('express');
var app = express();
var http = require('http').Server(app);
var io = require('socket.io')(http);
app.get('/', function(req, res){
res.send('container.training autopilot pub/sub server');
});
/* Serve remote.js from the current directory */
app.use(express.static('.'));
/* Serve slides etc. from current and the parent directory */
app.use(express.static('..'));
io.on('connection', function(socket){
console.log('a client connected: ' + socket.handshake.address);
socket.on('slide change', function(n, ack){
console.log('slide change: ' + n);
socket.broadcast.emit('slide change', n);
if (typeof ack === 'function') {
ack();
}
});
});
http.listen(3000, function(){
console.log('listening on *:3000');
});

slides/autopilot/tmux-style.sh Executable file

@@ -0,0 +1,7 @@
#!/bin/sh
# This removes the clock (and other extraneous stuff) from the
# tmux status bar, and it gives it a non-default color.
tmux set-option -g status-left ""
tmux set-option -g status-right ""
tmux set-option -g status-style bg=cyan

slides/build.sh Executable file

@@ -0,0 +1,38 @@
#!/bin/sh
case "$1" in
once)
for YAML in *.yml; do
./markmaker.py $YAML > $YAML.html || {
rm $YAML.html
break
}
done
if [ -n "$SLIDECHECKER" ]; then
for YAML in *.yml; do
./appendcheck.py $YAML.html
done
fi
;;
forever)
# There is a weird bug in entr, at least on macOS,
# where it doesn't restore the terminal to a clean
# state when exiting. So let's try to work around
# it with stty.
STTY=$(stty -g)
while true; do
find . | entr -d $0 once
STATUS=$?
case $STATUS in
2) echo "Directory has changed. Restarting.";;
130) echo "SIGINT or q pressed. Exiting."; break;;
*) echo "Weird exit code: $STATUS. Retrying in 1 second."; sleep 1;;
esac
done
stty $STTY
;;
*)
echo "$0 <once|forever>"
;;
esac


@@ -0,0 +1,48 @@
## About these slides
- All the content is available in a public GitHub repository:
https://github.com/jpetazzo/container.training
- You can get updated "builds" of the slides there:
http://container.training/
<!--
.exercise[
```open https://github.com/jpetazzo/container.training```
```open http://container.training/```
]
-->
--
- Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...
.footnote[.emoji[👇] Try it! The source file will be shown, and you can view it on GitHub, fork it, and edit it.]
<!--
.exercise[
```open https://github.com/jpetazzo/container.training/tree/master/slides/common/about-slides.md```
]
-->
---
class: extra-details
## Extra details
- This slide has a little magnifying glass in the top left corner
- This magnifying glass indicates slides that provide extra details
- Feel free to skip them if:
- you are in a hurry
- you are new to this and want to avoid cognitive overload
- you want only the most essential information
- You can review these slides another time if you want; they'll be waiting for you ☺


@@ -0,0 +1,12 @@
## Clean up
- Before moving on, let's remove those containers
.exercise[
- Tell Compose to remove everything:
```bash
docker-compose down
```
]


@@ -0,0 +1,220 @@
## Restarting in the background
- Many flags and commands of Compose are modeled after those of `docker`
.exercise[
- Start the app in the background with the `-d` option:
```bash
docker-compose up -d
```
- Check that our app is running with the `ps` command:
```bash
docker-compose ps
```
]
`docker-compose ps` also shows the ports exposed by the application.
---
class: extra-details
## Viewing logs
- The `docker-compose logs` command works like `docker logs`
.exercise[
- View all logs since container creation and exit when done:
```bash
docker-compose logs
```
- Stream container logs, starting at the last 10 lines for each container:
```bash
docker-compose logs --tail 10 --follow
```
<!--
```wait units of work done```
```keys ^C```
-->
]
Tip: use `^S` and `^Q` to pause/resume log output.
---
## Scaling up the application
- Our goal is to make that performance graph go up (without changing a line of code!)
--
- Before trying to scale the application, we'll figure out if we need more resources
(CPU, RAM...)
- For that, we will use good old UNIX tools on our Docker node
---
## Looking at resource usage
- Let's look at CPU, memory, and I/O usage
.exercise[
- run `top` to see CPU and memory usage (you should see idle cycles)
<!--
```bash top```
```wait Tasks```
```keys ^C```
-->
- run `vmstat 1` to see I/O usage (si/so/bi/bo)
<br/>(the 4 numbers should be almost zero, except `bo` for logging)
<!--
```bash vmstat 1```
```wait memory```
```keys ^C```
-->
]
We have available resources.
- Why?
- How can we use them?
---
## Scaling workers on a single node
- Docker Compose supports scaling
- Let's scale `worker` and see what happens!
.exercise[
- Start one more `worker` container:
```bash
docker-compose scale worker=2
```
- Look at the performance graph (it should show a 2x improvement)
- Look at the aggregated logs of our containers (`worker_2` should show up)
- Look at the impact on CPU load with e.g. top (it should be negligible)
]
---
## Adding more workers
- Great, let's add more workers and call it a day, then!
.exercise[
- Start eight more `worker` containers:
```bash
docker-compose scale worker=10
```
- Look at the performance graph: does it show a 10x improvement?
- Look at the aggregated logs of our containers
- Look at the impact on CPU load and memory usage
]
---
# Identifying bottlenecks
- You should have seen a 3x speed bump (not 10x)
- Adding workers didn't result in linear improvement
- *Something else* is slowing us down
--
- ... But what?
--
- The code doesn't have instrumentation
- Let's use state-of-the-art HTTP performance analysis!
<br/>(i.e. good old tools like `ab`, `httping`...)
---
## Accessing internal services
- `rng` and `hasher` are exposed on ports 8001 and 8002
- This is declared in the Compose file:
```yaml
...
rng:
build: rng
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
...
```
---
## Measuring latency under load
We will use `httping`.
.exercise[
- Check the latency of `rng`:
```bash
httping -c 3 localhost:8001
```
- Check the latency of `hasher`:
```bash
httping -c 3 localhost:8002
```
]
`rng` has a much higher latency than `hasher`.
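The idea behind `httping` — time each request, repeat, average — can be sketched in a few lines. This hypothetical helper takes any zero-argument callable, so it is a generic sketch rather than a replacement for `httping`:

```python
import time

def average_latency(probe, count=3):
    """Call probe() `count` times and return the mean duration in seconds."""
    total = 0.0
    for _ in range(count):
        start = time.perf_counter()
        probe()  # a real probe would do an HTTP GET against :8001 or :8002
        total += time.perf_counter() - start
    return total / count

# Usage with a stand-in probe that just sleeps for 10 ms:
latency = average_latency(lambda: time.sleep(0.01))
```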
---
## Let's draw hasty conclusions
- The bottleneck seems to be `rng`
- *What if* we don't have enough entropy and can't generate enough random numbers?
- We need to scale out the `rng` service on multiple machines!
Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.
(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
<br/>
...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)
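The "never runs out" claim is easy to check: Python's `os.urandom` reads from the kernel CSPRNG (backed by `/dev/urandom` on Linux) and returns immediately even for large requests:

```python
import os

# Draw a full megabyte from the kernel CSPRNG; this does not block
# waiting for "entropy", no matter how much we ask for.
data = os.urandom(1024 * 1024)
print(len(data))  # → 1048576
```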


@@ -0,0 +1,63 @@
# Declarative vs imperative
- Our container orchestrator puts a very strong emphasis on being *declarative*
- Declarative:
*I would like a cup of tea.*
- Imperative:
*Boil some water. Pour it in a teapot. Add tea leaves. Steep for a while. Serve in a cup.*
--
- Declarative seems simpler at first ...
--
- ... As long as you know how to brew tea
---
## Declarative vs imperative
- What declarative would really be:
*I want a cup of tea, obtained by pouring an infusion¹ of tea leaves in a cup.*
--
*¹An infusion is obtained by letting the object steep a few minutes in hot² water.*
--
*²Hot liquid is obtained by pouring it in an appropriate container³ and setting it on a stove.*
--
*³Ah, finally, containers! Something we know about. Let's get to work, shall we?*
--
.footnote[Did you know there was an [ISO standard](https://en.wikipedia.org/wiki/ISO_3103)
specifying how to brew tea?]
---
## Declarative vs imperative
- Imperative systems:
- simpler
- if a task is interrupted, we have to restart from scratch
- Declarative systems:
- if a task is interrupted (or if we show up to the party half-way through),
we can figure out what's missing and do only what's necessary
- we need to be able to *observe* the system
- ... and compute a "diff" between *what we have* and *what we want*
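Computing that "diff" can be sketched with plain dictionaries: compare the desired state with the observed state and emit only the missing work. This is a toy model, not how any real orchestrator is implemented:

```python
def diff(desired, observed):
    """Return the actions needed to reconcile observed state with desired state."""
    actions = []
    for name, replicas in desired.items():
        have = observed.get(name, 0)
        if have < replicas:
            actions.append(("start", name, replicas - have))
        elif have > replicas:
            actions.append(("stop", name, have - replicas))
    return actions

# If a previous run was interrupted, we only do what's missing:
print(diff({"worker": 10, "rng": 1}, {"worker": 3}))
# → [('start', 'worker', 7), ('start', 'rng', 1)]
```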

slides/common/prereqs.md Normal file

@@ -0,0 +1,309 @@
# Pre-requirements
- Be comfortable with the UNIX command line
- navigating directories
- editing files
- a little bit of bash-fu (environment variables, loops)
- Some Docker knowledge
- `docker run`, `docker ps`, `docker build`
- ideally, you know how to write a Dockerfile and build it
<br/>
(even if it's a `FROM` line and a couple of `RUN` commands)
- It's totally OK if you are not a Docker expert!
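For reference, "a `FROM` line and a couple of `RUN` commands" can look like this (a hypothetical example, not a file from this repository):

```dockerfile
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install -y curl
```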
---
class: title
*Tell me and I forget.*
<br/>
*Teach me and I remember.*
<br/>
*Involve me and I learn.*
Misattributed to Benjamin Franklin
[(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/)
---
## Hands-on sections
- The whole workshop is hands-on
- We are going to build, ship, and run containers!
- You are invited to reproduce all the demos
- All hands-on sections are clearly identified, like the gray rectangle below
.exercise[
- This is the stuff you're supposed to do!
- Go to [container.training](http://container.training/) to view these slides
- Join the chat room: @@CHAT@@
<!-- ```open http://container.training/``` -->
]
---
class: in-person
## Where are we going to run our containers?
---
class: in-person, pic
![You get a cluster](images/you-get-a-cluster.jpg)
---
class: in-person
## You get a cluster of cloud VMs
- Each person gets a private cluster of cloud VMs (not shared with anybody else)
- They'll remain up for the duration of the workshop
- You should have a little card with login+password+IP addresses
- You can automatically SSH from one VM to another
- The nodes have aliases: `node1`, `node2`, etc.
---
class: in-person
## Why don't we run containers locally?
- Installing that stuff can be hard on some machines
(32-bit CPU or OS... Laptops without administrator access... etc.)
- *"The whole team downloaded all these container images from the WiFi!
<br/>... and it went great!"* (Literally no-one ever)
- All you need is a computer (or even a phone or tablet!), with:
- an internet connection
- a web browser
- an SSH client
---
class: in-person
## SSH clients
- On Linux, OS X, FreeBSD... you are probably all set
- On Windows, get one of these:
- [putty](http://www.putty.org/)
- Microsoft [Win32 OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH)
- [Git BASH](https://git-for-windows.github.io/)
- [MobaXterm](http://mobaxterm.mobatek.net/)
- On Android, [JuiceSSH](https://juicessh.com/)
([Play Store](https://play.google.com/store/apps/details?id=com.sonelli.juicessh))
works pretty well
- Nice-to-have: [Mosh](https://mosh.org/) instead of SSH, if your internet connection tends to lose packets
---
class: in-person, extra-details
## What is this Mosh thing?
*You don't have to use Mosh or even know about it to follow along.
<br/>
We're just telling you about it because some of us think it's cool!*
- Mosh is "the mobile shell"
- It is essentially SSH over UDP, with roaming features
- It retransmits packets quickly, so it works great even on lossy connections
(Like hotel or conference WiFi)
- It has intelligent local echo, so it works great even on high-latency connections
(Like hotel or conference WiFi)
- It supports transparent roaming when your client IP address changes
(Like when you hop from hotel to conference WiFi)
---
class: in-person, extra-details
## Using Mosh
- To install it: `(apt|yum|brew) install mosh`
- It has been pre-installed on the VMs that we are using
- To connect to a remote machine: `mosh user@host`
(It is going to establish an SSH connection, then hand off to UDP)
- It requires UDP ports to be open
(By default, it uses a UDP port between 60000 and 61000)
---
class: in-person
## Connecting to our lab environment
.exercise[
- Log into the first VM (`node1`) with your SSH client
<!--
```bash
for N in $(awk '/\Wnode/{print $2}' /etc/hosts); do
ssh -o StrictHostKeyChecking=no $N true
done
```
```bash
if which kubectl; then
kubectl get all -o name | grep -v service/kubernetes | xargs -n1 kubectl delete
fi
```
-->
- Check that you can SSH (without password) to `node2`:
```bash
ssh node2
```
- Type `exit` or `^D` to come back to `node1`
<!-- ```bash exit``` -->
]
If anything goes wrong — ask for help!
---
## Doing or re-doing the workshop on your own?
- Use something like
[Play-With-Docker](http://play-with-docker.com/) or
[Play-With-Kubernetes](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
Zero setup effort; but environments are short-lived and
might have limited resources
- Create your own cluster (local or cloud VMs)
Small setup effort; small cost; flexible environments
- Create a bunch of clusters for you and your friends
([instructions](https://github.com/jpetazzo/container.training/tree/master/prepare-vms))
Bigger setup effort; ideal for group training
---
class: self-paced
## Get your own Docker nodes
- If you already have some Docker nodes: great!
- If not: let's get some, thanks to Play-With-Docker
.exercise[
- Go to http://www.play-with-docker.com/
- Log in
- Create your first node
<!-- ```open http://www.play-with-docker.com/``` -->
]
You will need a Docker ID to use Play-With-Docker.
(Creating a Docker ID is free.)
---
## We will (mostly) interact with node1 only
*These remarks apply only when using multiple nodes, of course.*
- Unless instructed, **all commands must be run from the first VM, `node1`**
- We will only check out/copy the code on `node1`
- During normal operations, we do not need access to the other nodes
- If we had to troubleshoot issues, we would use a combination of:
- SSH (to access system logs, daemon status...)
- Docker API (to check running containers and container engine status)
---
## Terminals
Once in a while, the instructions will say:
<br/>"Open a new terminal."
There are multiple ways to do this:
- create a new window or tab on your machine, and SSH into the VM;
- use screen or tmux on the VM and open a new window from there.
You are welcome to use the method that you feel the most comfortable with.
---
## Tmux cheatsheet
[Tmux](https://en.wikipedia.org/wiki/Tmux) is a terminal multiplexer like `screen`.
*You don't have to use it or even know about it to follow along.
<br/>
But some of us like to use it to switch between terminals.
<br/>
It has been preinstalled on your workshop nodes.*
- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
- Ctrl-b " → split window top/bottom
- Ctrl-b % → split window left/right
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b arrows → navigate to other windows
- Ctrl-b d → detach session
- tmux attach → reattach to session

slides/common/pwd.md Normal file

@@ -0,0 +1,19 @@
## Using Play-With-Docker
- Open a new browser tab to [www.play-with-docker.com](http://www.play-with-docker.com/)
- Confirm that you're not a robot
- Click on "ADD NEW INSTANCE": congratulations, you have your first Docker node!
- When you need more nodes, just click on "ADD NEW INSTANCE" again
- Note the countdown in the corner; when it expires, your instances are destroyed
- If you give your URL to somebody else, they can access your nodes too
<br/>
(You can use that for pair programming, or to get help from a mentor)
- Loving it? Not loving it? Tell the wonderful authors,
[@marcosnils](https://twitter.com/marcosnils) &
[@xetorthio](https://twitter.com/xetorthio)!

slides/common/sampleapp.md Normal file

@@ -0,0 +1,303 @@
# Our sample application
- We will clone the GitHub repository onto our `node1`
- The repository also contains scripts and tools that we will use through the workshop
.exercise[
<!--
```bash
if [ -d container.training ]; then
mv container.training container.training.$$
fi
```
-->
- Clone the repository on `node1`:
```bash
git clone git://github.com/jpetazzo/container.training
```
]
(You can also fork the repository on GitHub and clone your fork if you prefer that.)
---
## Downloading and running the application
Let's start this before we look around, as downloading will take a little time...
.exercise[
- Go to the `dockercoins` directory, in the cloned repo:
```bash
cd ~/container.training/dockercoins
```
- Use Compose to build and run all containers:
```bash
docker-compose up
```
<!--
```longwait units of work done```
-->
]
Compose tells Docker to build all container images (pulling
the corresponding base images), then starts all containers,
and displays aggregated logs.
---
## More detail on our sample application
- Visit the GitHub repository with all the materials of this workshop:
<br/>https://github.com/jpetazzo/container.training
- The application is in the [dockercoins](
https://github.com/jpetazzo/container.training/tree/master/dockercoins)
subdirectory
- Let's look at the general layout of the source code:
there is a Compose file [docker-compose.yml](
https://github.com/jpetazzo/container.training/blob/master/dockercoins/docker-compose.yml) ...
... and 4 other services, each in its own directory:
- `rng` = web service generating random bytes
- `hasher` = web service computing hash of POSTed data
- `worker` = background process using `rng` and `hasher`
- `webui` = web interface to watch progress
---
class: extra-details
## Compose file format version
*Particularly relevant if you have used Compose before...*
- Compose 1.6 introduced support for a new Compose file format (aka "v2")
- Services are no longer at the top level, but under a `services` section
- There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer)
- Containers are placed on a dedicated network, making links unnecessary
- There are other minor differences, but upgrade is easy and straightforward
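A minimal "v2" Compose file illustrating these points (hypothetical service names):

```yaml
version: "2"        # must be a string, not a number
services:           # services now live under this section
  redis:
    image: redis
  worker:
    build: worker   # can reach redis by service name; no "links" needed
```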
---
## Service discovery in container-land
- We do not hard-code IP addresses in the code
- We do not hard-code FQDN in the code, either
- We just connect to a service name, and container-magic does the rest
(And by container-magic, we mean "a crafty, dynamic, embedded DNS server")
---
## Example in `worker/worker.py`
```python
redis = Redis("`redis`")
def get_random_bytes():
r = requests.get("http://`rng`/32")
return r.content
def hash_bytes(data):
r = requests.post("http://`hasher`/",
data=data,
headers={"Content-Type": "application/octet-stream"})
```
(Full source code available [here](
https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
))
---
class: extra-details
## Links, naming, and service discovery
- Containers can have network aliases (resolvable through DNS)
- Compose file version 2+ makes each container reachable through its service name
- Compose file version 1 required "links" sections
- Network aliases are automatically namespaced
- you can have multiple apps declaring and using a service named `database`
- containers in the blue app will resolve `database` to the IP of the blue database
- containers in the green app will resolve `database` to the IP of the green database
---
## What's this application?
--
- It is a DockerCoin miner! .emoji[💰🐳📦🚢]
--
- No, you can't buy coffee with DockerCoins
--
- How DockerCoins works:
- `worker` asks `rng` to generate a few random bytes
- `worker` feeds these bytes into `hasher`
- and repeat forever!
- every second, `worker` updates `redis` to indicate how many loops were done
- `webui` queries `redis`, and computes and exposes "hashing speed" in your browser
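That loop can be sketched in pure Python, with the three service calls injected as callables. This is a simplified stand-in, not the actual `worker/worker.py` (which uses `requests` and the `redis` client):

```python
def mine(get_random_bytes, hash_bytes, report, rounds):
    """One simplified mining run: fetch random bytes, hash them, report progress."""
    done = 0
    for _ in range(rounds):
        data = get_random_bytes()   # real worker: GET http://rng/32
        hash_bytes(data)            # real worker: POST the bytes to http://hasher/
        done += 1
        report(done)                # real worker: update a counter in redis
    return done

# Usage with stand-ins for the three services:
total = mine(lambda: b"\x00" * 32, lambda d: hash(d), print, rounds=3)
```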
---
## Our application at work
- On the left-hand side, the "rainbow strip" shows the container names
- On the right-hand side, we see the output of our containers
- We can see the `worker` service making requests to `rng` and `hasher`
- For `rng` and `hasher`, we see HTTP access logs
---
## Connecting to the web UI
- "Logs are exciting and fun!" (No-one, ever)
- The `webui` container exposes a web dashboard; let's view it
.exercise[
- With a web browser, connect to `node1` on port 8000
- Remember: the `nodeX` aliases are valid only on the nodes themselves
- In your browser, you need to enter the IP address of your node
<!-- ```open http://node1:8000``` -->
]
A drawing area should show up, and after a few seconds, a blue
graph will appear.
---
class: self-paced, extra-details
## If the graph doesn't load
If you just see a `Page not found` error, it might be because your
Docker Engine is running on a different machine. This can be the case if:
- you are using the Docker Toolbox
- you are using a VM (local or remote) created with Docker Machine
- you are controlling a remote Docker Engine
When you run DockerCoins in development mode, the web UI static files
are mapped to the container using a volume. Alas, volumes can only
work on a local environment, or when using Docker4Mac or Docker4Windows.
How to fix this?
Stop the app with `^C`, edit `dockercoins.yml`, comment out the `volumes` section, and try again.
---
class: extra-details
## Why does the speed seem irregular?
- It *looks like* the speed is approximately 4 hashes/second
- Or more precisely: 4 hashes/second, with regular dips down to zero
- Why?
--
class: extra-details
- The app actually has a constant, steady speed: 3.33 hashes/second
<br/>
(which corresponds to 1 hash every 0.3 seconds, for *reasons*)
- Yes, and?
---
class: extra-details
## The reason why this graph is *not awesome*
- The worker doesn't update the counter after every loop, but up to once per second
- The speed is computed by the browser, checking the counter about once per second
- Between two consecutive updates, the counter will increase either by 4, or by 0
- The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.
- What can we conclude from this?
--
class: extra-details
- "I'm clearly incapable of writing good frontend code!" 😀 — Jérôme
---
## Stopping the application
- If we interrupt Compose (with `^C`), it will politely ask the Docker Engine to stop the app
- The Docker Engine will send a `TERM` signal to the containers
- If the containers do not exit in a timely manner, the Engine sends a `KILL` signal
.exercise[
- Stop the application by hitting `^C`
<!--
```keys ^C```
-->
]
--
Some containers exit immediately, others take longer.
The containers that do not handle `SIGTERM` end up being killed after a 10s timeout. If we are very impatient, we can hit `^C` a second time!
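Handling `SIGTERM` so a container exits promptly takes only a few lines in most languages; here is a minimal Python sketch:

```python
import signal
import sys

def handle_term(signum, frame):
    # Clean up here (close connections, flush logs...) and exit promptly,
    # so the Engine never has to escalate to SIGKILL.
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_term)
```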

slides/common/thankyou.md Normal file

@@ -0,0 +1,11 @@
class: title, self-paced
Thank you!
---
class: title, in-person
That's all, folks! <br/> Questions?
![end](images/end.jpg)

slides/common/title.md Normal file

@@ -0,0 +1,21 @@
class: title, self-paced
@@TITLE@@
.nav[*Self-paced version*]
---
class: title, in-person
@@TITLE@@<br/><br/>
.footnote[
**Be kind to the WiFi!**<br/>
<!-- *Use the 5G network.* -->
*Don't use your hotspot.*<br/>
*Don't stream videos or download big files during the workshop.*<br/>
*Thank you!*
**Slides: http://container.training/**
]

slides/common/toc.md Normal file

@@ -0,0 +1,4 @@
@@TOC@@

slides/find-non-ascii.sh Executable file

@@ -0,0 +1,2 @@
#!/bin/sh
grep --color=auto -P -n "[^\x00-\x7F]" */*.md


@@ -0,0 +1,10 @@
#!/bin/sh
# Summarize a markdown slide deck: keep only top-level "# " chapter titles
# and "---" slide separators, count consecutive separators with uniq -c,
# then pair each chapter title with its slide count.
INPUT=$1
{
echo "# Front matter"
cat "$INPUT"
} |
grep -e "^# " -e ^---$ | uniq -c |
sed "s/^ *//" | sed s/---// |
paste -d "\t" - -

Binary file not shown (image, 127 KiB)

Some files were not shown because too many files have changed in this diff.