Compare commits


164 Commits

Author SHA1 Message Date
Jerome Petazzoni
60ce3882e8 fix-redirects.sh: adding forced redirect 2020-04-07 16:48:29 -05:00
Jérôme Petazzoni
3c6706ad03 Map site root to dc17eu workshop slides 2017-11-05 09:03:29 -08:00
Jérôme Petazzoni
bee3e763a9 Prepare dc17eu branch 2017-11-05 08:58:58 -08:00
Jérôme Petazzoni
f479591f9c Better cards 2017-11-04 18:59:45 -07:00
Jérôme Petazzoni
241452bab4 Add script to check images and show slide count 2017-11-04 11:45:18 -07:00
Jérôme Petazzoni
5a58b17bb5 Fix image paths 2017-11-04 11:45:00 -07:00
Jérôme Petazzoni
2f4dfe7b6f Update ALL THE READMEs 2017-11-04 09:49:21 -07:00
Jérôme Petazzoni
dad3d434b8 Add index.html (to deprecate view.dckr.info) 2017-11-03 19:15:26 -07:00
Jérôme Petazzoni
3534d81860 Add intro slides 2017-11-03 19:08:44 -07:00
Jérôme Petazzoni
1c4d1d736e Remove useless scripts 2017-11-03 18:59:50 -07:00
Jérôme Petazzoni
c00bbb77c4 Fixup paths 2017-11-03 18:56:20 -07:00
Jérôme Petazzoni
0b24076086 Move markdown files to common/kube/swarm subdirs 2017-11-03 18:52:33 -07:00
Jérôme Petazzoni
c1a156337f Move images to subdirectory 2017-11-03 18:38:48 -07:00
Jérôme Petazzoni
078023058b docs -> slides 2017-11-03 18:31:06 -07:00
Jérôme Petazzoni
c1ecd16a8d Merge the big 2017 refactor 2017-11-03 18:28:52 -07:00
Jérôme Petazzoni
51cf4a076a Merge pull request #100 from soulshake/aj-hotfix-fix
Wrap bash and wait in .exercise[] block
2017-11-01 07:44:37 -07:00
AJ Bowen
7cf622e06a Actually, we just need to remove the space 2017-10-31 16:56:48 -07:00
AJ Bowen
14b2d54e43 Wrap bash and wait in .exercise[] block 2017-10-31 15:59:50 -07:00
Jérôme Petazzoni
3c9353815b hotfix 2017-10-31 15:25:29 -07:00
Jérôme Petazzoni
a44d3618bc Fix prometheus config 2017-10-31 11:12:29 -07:00
Jérôme Petazzoni
5466319407 Fix prometheus config 2017-10-31 11:12:08 -07:00
Jérôme Petazzoni
1d5f4330c0 Improve autotest; fix prometheus node collector 2017-10-31 11:10:39 -07:00
Jérôme Petazzoni
9e5bab1a76 tweaks for automated testing 2017-10-31 09:37:49 -07:00
Jérôme Petazzoni
c67675f900 Remove keymaps (tmux handles specials keys already) 2017-10-31 09:27:53 -07:00
Jérôme Petazzoni
4c18583a8e Improve autotest for Swarm workshop 2017-10-31 09:26:03 -07:00
Jérôme Petazzoni
d02d71270f Use Gitter instead of USENIX Slack 2017-10-31 07:36:16 -07:00
Jérôme Petazzoni
deb304026b allow any log level (and netlify has been set to LOG_LEVEL=DEBUG) 2017-10-30 09:31:15 -07:00
Jérôme Petazzoni
03561f38d8 I'd like to close my tab and I left 4 spaces 2017-10-30 09:10:30 -07:00
Jérôme Petazzoni
4965b205a7 Try to fix 'edit me' link generator 2017-10-30 09:07:34 -07:00
Jérôme Petazzoni
0d610081bd Add LISA tutorial 2017-10-30 07:45:59 -07:00
Jérôme Petazzoni
24c2f9f18e Fix repo/branch/base infer functions 2017-10-29 22:25:12 -07:00
Jérôme Petazzoni
3fc2d4c266 Infer github URL 2017-10-29 22:16:21 -07:00
Jérôme Petazzoni
0c175615a5 Merge branch 'soulshake-aj-wait-tmux' into the-big-2017-refactor 2017-10-29 21:07:46 -07:00
Jérôme Petazzoni
3198bb0d1f Minor tweaks 2017-10-29 21:07:25 -07:00
Jérôme Petazzoni
030d100d70 Merge branch 'the-big-2017-refactor' of github.com:jpetazzo/orchestration-workshop into the-big-2017-refactor 2017-10-29 19:52:32 -07:00
Jérôme Petazzoni
4747860226 Minor CSS tweaks for intro workshop 2017-10-29 19:52:20 -07:00
AJ Bowen
b243094d66 Address @jpetazzo feedback 2017-10-30 02:40:34 +01:00
AJ Bowen
1fec8f506a Use index to look ahead for 'wait' and 'keys'. 2017-10-29 17:24:16 -07:00
AJ Bowen
04362a3b52 Check command exit codes 2017-10-29 16:24:11 -07:00
AJ Bowen
f0597c43b3 Move wait_for_success to be with other functions 2017-10-29 15:36:38 -07:00
AJ Bowen
b11e54cc43 Add 'c' option to continue until a timeout, and WORKSHOP_TEST_FORCE_NONINTERACTIVE to raise an exception instead of just warning if a command times out. 2017-10-29 15:34:33 -07:00
AJ Bowen
01d9923ca4 Remove colored logs because @jpetazzo 2017-10-29 15:07:58 -07:00
AJ Bowen
0f3660dc95 Put 'wait' and 'keys' HTML comments before the command they apply to. Add colored logs. 2017-10-29 15:01:24 -07:00
AJ Bowen
60a75647d2 No form feed, no prompt to wait, just print a warning and carry on 2017-10-29 13:55:16 -07:00
AJ Bowen
f46856ff63 Wait for tmux to display a prompt, indicating the command has completed 2017-10-29 13:47:48 -07:00
Jérôme Petazzoni
0508d24046 Merge pull request #97 from soulshake/aj-shfmt
'shfmt -i 4' on shell files
2017-10-29 16:00:17 +01:00
AJ Bowen
1d46898737 Reverse 'echo >/dev/stderr' for '>/dev/stderr echo' according to @jpetazzo preference 2017-10-29 15:55:08 +01:00
AJ Bowen
5b95d6ee7f 'shfmt -i 4 -bn' to allow pipes to begin lines 2017-10-29 15:42:16 +01:00
AJ Bowen
bb88d11344 shfmt -i 4 2017-10-29 14:45:54 +01:00
Jérôme Petazzoni
7262effec4 Expand the section about selector update 2017-10-25 23:41:57 +02:00
Jérôme Petazzoni
2c08439de4 Fix slides to reflect hostname 2017-10-25 22:42:07 +02:00
Jérôme Petazzoni
6543ffc5b9 Minor autotest update 2017-10-25 22:39:24 +02:00
Jérôme Petazzoni
681754fc1b Cosmetic autotest improvement 2017-10-25 22:39:07 +02:00
Jérôme Petazzoni
603bda8166 Change prompt and set hostname to nodeX 2017-10-25 22:38:57 +02:00
Jérôme Petazzoni
a4a37368e5 Merge pull request #95 from soulshake/aj-the-big-2017-refactor
Fix some typos
2017-10-25 22:24:54 +02:00
AJ Bowen
30fd53a3e1 Setup is a noun; set up is a verb. Fix some more typos. 2017-10-25 21:05:46 +02:00
Jérôme Petazzoni
40eab78186 Improve autotest system 2017-10-25 17:49:30 +02:00
Jérôme Petazzoni
68e0c8fca7 Very crude auto-test harness driving tmux 2017-10-25 11:40:37 +02:00
Jérôme Petazzoni
af261de9a4 Update prereqs-k8s.md 2017-10-24 18:24:15 +02:00
Jérôme Petazzoni
2a176edfb4 Templatize title 2017-10-24 17:44:05 +02:00
Jérôme Petazzoni
f56262bee0 Add diagram, thanks @lukemarsden @weaveworks <3 2017-10-24 14:33:03 +02:00
Jérôme Petazzoni
488fa1c981 Better k8s intro, fix chat links 2017-10-24 13:57:51 +02:00
Jérôme Petazzoni
f63107ce15 Add back to TOC links 2017-10-24 13:16:30 +02:00
Jérôme Petazzoni
68fc895017 Add edition links 2017-10-24 12:18:20 +02:00
Jérôme Petazzoni
5c0b83cd1b Add slide about cluster federation 2017-10-23 17:54:55 +02:00
Jérôme Petazzoni
452b5c0880 Do not open links in new tabs 2017-10-23 17:39:46 +02:00
Jérôme Petazzoni
42549d8c19 Adjust tone in tea example 2017-10-21 15:12:57 +02:00
Jérôme Petazzoni
9a5e9c9ea0 Debugging bar (this is super cool) 2017-10-21 14:18:09 +02:00
Jérôme Petazzoni
1ea7141d95 Auto-insert interstitial slides and links 2017-10-20 19:47:26 +02:00
Jérôme Petazzoni
c0fbf4aec4 Expand the what's next section 2017-10-20 18:48:22 +02:00
Jérôme Petazzoni
48b79a18a4 Cherry-pick 35a8c81 2017-10-20 15:40:54 +02:00
Jérôme Petazzoni
2c5724a5fe Merge pull request #93 from jouve/typo
typo with --update-failure-action flag
2017-10-20 15:37:02 +02:00
Jérôme Petazzoni
80d79c4d31 Add info about other resources created with kubectl run 2017-10-19 18:31:56 +02:00
Jérôme Petazzoni
ff0c868c27 kubernetes network model 2017-10-19 18:09:46 +02:00
Jérôme Petazzoni
cbee7484ae update docs for new slide deck generator 2017-10-19 17:08:22 +02:00
Cyril Jouve
35a8c81b39 typo with --update-failure-action flag 2017-10-19 12:59:06 +02:00
Jerome Petazzoni
764d33c884 power outlets are the worst 2017-10-16 09:05:50 +02:00
Jérôme Petazzoni
a4fc5b924f Last update after dry run 2017-10-16 00:56:55 +02:00
Jérôme Petazzoni
b155000d56 hero syndrome (thanks @soulshake) 2017-10-15 23:23:18 +02:00
Jérôme Petazzoni
baf48657d0 Clean up a bunch of titles 2017-10-15 23:13:06 +02:00
Jérôme Petazzoni
b4b22ff47b Add chat variable to workshop YML files 2017-10-15 23:01:46 +02:00
Jérôme Petazzoni
c4b131ae5e Add black belt refs 2017-10-15 22:37:23 +02:00
Jérôme Petazzoni
af1031760b Add blackbelt icon and css 2017-10-15 22:11:13 +02:00
Jérôme Petazzoni
da7c4742bf Netlify is <3 2017-10-14 22:37:28 +02:00
Jérôme Petazzoni
a3cf917100 Add dashboard section + kubectl apply sec talk 2017-10-14 17:42:02 +02:00
Jérôme Petazzoni
96a5cc15ec Add a comment at end of each slide showing origin 2017-10-14 16:56:12 +02:00
Jérôme Petazzoni
117c6c18e9 pull-images -> pull_images 2017-10-14 14:22:46 +02:00
Jérôme Petazzoni
fe83ce99f2 Backport error reporting fixes 2017-10-14 14:20:24 +02:00
Jérôme Petazzoni
8b0173bc87 Fix build-forever (entr is buggy, yo) 2017-10-14 14:19:21 +02:00
Jérôme Petazzoni
9450ed2057 Improve error reporting (thanks @bretfisher for reporting this) 2017-10-14 14:10:50 +02:00
Romain Degez
994be990f5 Remove extraneous chapter title 2017-10-14 12:31:58 +02:00
Jérôme Petazzoni
e2fd9531ef Add rolling upgrade section and whatsnext 2017-10-13 23:16:02 +02:00
Jérôme Petazzoni
8bb7243aaf Imperative vs declarative; spec 2017-10-13 20:43:31 +02:00
Jérôme Petazzoni
9a067f2064 kube agenda 2017-10-13 20:05:03 +02:00
Jérôme Petazzoni
009bc2089d Backport #91 2017-10-13 19:54:21 +02:00
Jérôme Petazzoni
d1e6248ded Merge pull request #91 from anonymuse/jesse/fix-node-command
Update docker node command with a working filter
2017-10-13 19:38:16 +02:00
Jérôme Petazzoni
b3a7e36c37 fixup build.sh script 2017-10-13 19:23:00 +02:00
Jérôme Petazzoni
a9a82ccd1e Rework slide builder + add section on daemonsets 2017-10-12 20:49:59 +02:00
Jérôme Petazzoni
6abbebe00d Reword Compose File v3 explanations 2017-10-12 10:38:16 +02:00
Jérôme Petazzoni
4dec9c43f1 One more round of updates for dc17eu 2017-10-12 10:17:52 +02:00
Jérôme Petazzoni
3369005e06 Revert to single HTML generator and parametrize excludeClasses 2017-10-12 09:53:14 +02:00
Jérôme Petazzoni
c4d76ba367 Backport #92 (thanks @bretfisher 👍🏻) 2017-10-12 09:04:56 +02:00
Jérôme Petazzoni
ba24b66d84 Fix extra-details icon 2017-10-12 00:12:30 +02:00
Jérôme Petazzoni
8d2391e4d6 Add kubespawn 2017-10-12 00:12:19 +02:00
Jérôme Petazzoni
4c68847dd1 Remove PWD reference from kube material 2017-10-11 23:33:13 +02:00
Jérôme Petazzoni
b67371e0ec Helper script based on entr 2017-10-11 23:32:14 +02:00
Jérôme Petazzoni
cd2cf9b3a4 Tweak page number positioning 2017-10-11 23:31:59 +02:00
Jérôme Petazzoni
4eaf2310b6 Add how to run and expose services on kube 2017-10-11 23:31:39 +02:00
Jérôme Petazzoni
20e9517722 Put slide number in top-left corner 2017-10-11 16:00:23 +02:00
Jérôme Petazzoni
553fd6b742 Fix custom prompt 2017-10-11 15:54:56 +02:00
Jérôme Petazzoni
25c8623a81 Add kubectl completion 2017-10-11 15:54:46 +02:00
Jérôme Petazzoni
f787d1b6c3 Add kube concepts + kubectl primer 2017-10-11 15:49:11 +02:00
Jérôme Petazzoni
825257427f Split out selfpaced and dockercon workshops 2017-10-10 17:55:22 +02:00
Jérôme Petazzoni
7e57b23234 Merge pull request #92 from BretFisher/improve-healthcheck-rollback
fixing healthcheck rollbacks, adding TAG to deploys, adding missing yml
2017-10-10 08:33:27 +02:00
Bret Fisher
5c102d594f fixing healthcheck rollbacks, adding TAG to deploys, adding missing yml 2017-10-09 23:49:27 -04:00
Jérôme Petazzoni
e28a64c6cf Remove old version 2017-10-09 18:04:44 +02:00
Jérôme Petazzoni
f8888bf16a Split out content to many smaller files
And add markmaker.py to generate workshop.md
2017-10-09 16:56:23 +02:00
Jérôme Petazzoni
ac523e0f14 Add upstream URL 2017-10-09 13:30:38 +02:00
Jérôme Petazzoni
3211c1ba8a Add data-path option 2017-10-07 19:24:07 +02:00
Jérôme Petazzoni
f1aa5d07fa Fix printing 2017-10-07 15:14:46 +02:00
Jérôme Petazzoni
c0e2fc8832 Allow to run workshopctl in a container 2017-10-06 21:40:39 +02:00
Jérôme Petazzoni
08722db23f Major rehaul of trainer script (it is now workshopctl) 2017-10-06 19:01:15 +02:00
Jérôme Petazzoni
11ec3336eb Remove media dir (unused) 2017-10-06 13:10:52 +02:00
Jérôme Petazzoni
42603d6f62 Add host network in Swarm mode 2017-10-05 14:27:23 +02:00
Jérôme Petazzoni
5c825c864c Allow to start+deploy in a single step 2017-10-05 12:55:36 +02:00
Jérôme Petazzoni
186b30a742 Add a couple of slides about events 2017-10-04 17:13:01 +02:00
Jérôme Petazzoni
06b97454c6 Add section about configs 2017-10-04 16:36:48 +02:00
Jérôme Petazzoni
c393d2aa51 Remove older (unused) stacks 2017-10-04 15:21:33 +02:00
Jérôme Petazzoni
3817332905 Remove obsolete scripts 2017-10-04 15:20:01 +02:00
Jérôme Petazzoni
b7dbbd4633 Add kubernetes deployment code (behind cheap feature switch) 2017-10-03 22:15:43 +02:00
Jesse White
d73f5232ff Update docker node command with a working filter 2017-10-02 12:15:34 -04:00
Jérôme Petazzoni
b0a34aa106 Remove Swarm classic 2017-10-02 13:33:58 +02:00
Jérôme Petazzoni
36f512a3d3 Backport content from DOD MSP 2017-09-29 23:35:52 +02:00
Jérôme Petazzoni
87cbbd5c35 Backport a few updates from devopscon 2017-09-29 23:17:27 +02:00
Jérôme Petazzoni
2f6689d639 Refactor card generation to use Jinja templates
This makes the card generation process a bit easier to customize.
A few issues with Chrome page breaks were also fixed.
2017-09-29 22:29:08 +02:00
Jérôme Petazzoni
4f7651855e Update version numbers 2017-09-29 19:24:24 +02:00
Jérôme Petazzoni
aea59c757e Add HEALTHCHECK support, courtesy of @bretfisher 2017-09-27 18:07:03 +02:00
Jérôme Petazzoni
af2d82d00a Merge branch 'BretFisher-healthcheck-auto-rollback' 2017-09-27 12:32:43 +02:00
Jérôme Petazzoni
5b8861009d Merge branch 'healthcheck-auto-rollback' of https://github.com/BretFisher/orchestration-workshop into BretFisher-healthcheck-auto-rollback 2017-09-27 12:32:29 +02:00
Jérôme Petazzoni
674bfe82c7 Remove conference hashtag in CTA tweet link (closes #77) 2017-09-27 12:20:08 +02:00
Jérôme Petazzoni
8f61a2fffa If any of the commands of postprep fails, abort
Closes #80
2017-09-27 12:14:55 +02:00
Jérôme Petazzoni
748881d37d Add a fancy table! 2017-09-26 21:55:09 +02:00
Jérôme Petazzoni
d29863a0e0 Merge branch 'ops-feature-history' of https://github.com/BretFisher/orchestration-workshop 2017-09-26 18:42:04 +02:00
Jérôme Petazzoni
acc84729a2 Merge pull request #89 from BretFisher/add-inline-code-bg
Add inline code background color
2017-09-13 14:26:23 -07:00
Bret Fisher
9af9477f65 ugg spacing 2017-09-12 19:09:51 -07:00
Bret Fisher
15cca15ec5 add inline-code grey background
So much grey! All the grey's!
2017-09-12 19:08:36 -07:00
Bret Fisher
685ea653fe adding healthcheck with rollback 2017-09-12 19:03:52 -07:00
Jérôme Petazzoni
bf13657a8f Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2017-08-03 11:02:45 +02:00
Jérôme Petazzoni
9c7fb40475 Merge branch 'BretFisher-user-namespaces' 2017-08-03 11:02:27 +02:00
Jérôme Petazzoni
b1b8b53a2f Adapt @bretfisher work to match formatting etc 2017-08-03 11:01:31 +02:00
Jérôme Petazzoni
69259c27a1 Merge branch 'user-namespaces' of https://github.com/BretFisher/orchestration-workshop into BretFisher-user-namespaces 2017-08-03 08:40:53 +02:00
Jérôme Petazzoni
7354974ece Merge pull request #87 from lastcoolnameleft/patch-1
1.9.0 does not support docker-compose.yml Version 3
2017-08-01 23:31:53 -07:00
Tommy Falgout
5379619026 1.9.0 does not support docker-compose.yml Version 3 2017-08-01 17:46:21 -05:00
Jérôme Petazzoni
0d7ee1dda0 Merge branch 'alexellis-alexellis-patch-sol' 2017-07-12 13:41:45 +02:00
Jérôme Petazzoni
243d585432 Add a few details about what happens when losing the sole manager 2017-07-12 13:41:37 +02:00
Alex Ellis
f5fe7152f3 Internationalisation
I had no idea what SOL was - had to google this on Urban Dictionary :-/ have put an internationalisation in and retained the colloquialism in brackets.
2017-07-11 19:00:23 +01:00
Jérôme Petazzoni
94d9ad22d0 Add ngrep details when using PWD or Vagrant re/ interface selection (closes #84) 2017-07-11 19:51:00 +02:00
Bret Fisher
59f1b1069d fixed some feature release confusion 2017-06-26 14:34:03 -04:00
Bret Fisher
c30386a73d added ops feature history slide 2017-06-18 20:28:51 -07:00
Jérôme Petazzoni
0af160e0a8 Merge pull request #82 from adulescentulus/fix_visualizer_exercise
(some) wrong instructions
2017-06-17 09:31:31 -07:00
Andreas Groll
1fdb7b8077 added missing stackname 2017-06-12 15:25:35 +02:00
Andreas Groll
d2b67c426e you only can connect to the ip where you started your visualizer 2017-06-12 12:07:59 +02:00
Bret Fisher
45402a28e5 updated to prevent accidental registry delete 2017-04-14 02:37:07 -04:00
Bret Fisher
9e97c7a490 adding user namespace change and daemon.json example
also adding .footnote css
2017-04-14 01:34:51 -04:00
210 changed files with 18432 additions and 9926 deletions

.gitignore

@@ -6,3 +6,5 @@ prepare-vms/ips.html
 prepare-vms/ips.pdf
 prepare-vms/settings.yaml
 prepare-vms/tags
+slides/*.yml.html
+autotest/nextstep

README.md

@@ -1,137 +1,135 @@
# Docker Orchestration Workshop
# Container Training
This is the material (slides, scripts, demo app, and other
code samples) for the "Docker orchestration workshop"
written and delivered by Jérôme Petazzoni (and lots of others)
non-stop since June 2015.
This repository (formerly known as `orchestration-workshop`)
contains materials (slides, scripts, demo app, and other
code samples) used for various workshops, tutorials, and
training sessions around the themes of Docker, containers,
and orchestration.
For the moment, it includes:
- Introduction to Docker and Containers,
- Container Orchestration with Docker Swarm,
- Container Orchestration with Kubernetes.
These materials have been designed around the following
principles:
- they assume very little prior knowledge of Docker,
containers, or a particular programming language;
- they can be used in a classroom setup (with an
instructor), or self-paced at home;
- they are hands-on, meaning that they contain lots
of examples and exercises that you can easily
reproduce;
- they progressively introduce concepts in chapters
that build on top of each other.
If you're looking for the materials, you can stop reading
right now, and hop to http://container.training/, which
hosts all the slide decks available.
The rest of this document explains how this repository
is structured, and how to use it to deliver (or create)
your own tutorials.
## Content
## Why a single repository?
- Chapter 1: Getting Started: running apps with docker-compose
- Chapter 2: Scaling out with Swarm Mode
- Chapter 3: Operating the Swarm (networks, updates, logging, metrics)
- Chapter 4: Deeper in Swarm (stateful services, scripting, DAB's)
All these materials have been gathered in a single repository
because they have a few things in common:
- a [build system](slides/) generating HTML slides from
Markdown source files;
- some [common slides](slides/common/) that are re-used
(and updated) identically between different decks;
- [deployment scripts](prepare-vms/) to start training
VMs in bulk;
- a [semi-automated test harness](autotest/) to check
that the exercises and examples provided work properly;
- a fancy pipeline powered by
[Netlify](https://www.netlify.com/) and continuously
deploying `master` to http://container.training/.
## Quick start (or, "I want to try it!")
## What are the different courses available?
This workshop is designed to be *hands on*, i.e. to give you a step-by-step
guide where you will build your own Docker cluster, and use it to deploy
a sample application.
**Introduction to Docker** is derived from the first
"Docker Fundamentals" training materials. For more information,
see [jpetazzo/intro-to-docker](https://github.com/jpetazzo/intro-to-docker).
The version in this repository has been adapted to the Markdown
publishing pipeline. It is still maintained, but only receives
minor updates once in a while.
The easiest way to follow the workshop is to attend it when it is delivered
by an instructor. In that case, the instructor will generally give you
credentials (IP addresses, login, password) to connect to your own cluster
of virtual machines; and the [slides](http://jpetazzo.github.io/orchestration-workshop)
indeed assume that you have your own cluster.
**Container Orchestration with Docker Swarm** (formerly
known as "Orchestration Workshop") is a workshop created by Jérôme
Petazzoni in June 2015. Since then, it has been continuously updated
and improved, and received contributions from many other authors.
It is actively maintained.
If you want to follow the workshop on your own, and want to have your
own cluster, we have multiple solutions for you!
**Container Orchestration with Kubernetes** was created by
Jérôme Petazzoni in October 2017, with help and feedback from
a few other contributors. It is actively maintained.
### Using [play-with-docker](http://play-with-docker.com/)
## Repository structure
This method makes it very easy to get started (you don't need any extra
account or resources!) but it will require a bit of adaptation from the workshop slides.
To get started, go to [play-with-docker](http://play-with-docker.com/), and
click on _ADD NEW INSTANCE_ five times. You will get five "docker-in-docker"
containers, all on a private network. These are your five nodes for the workshop!
When the instructions in the slides tell you to "SSH on node X", just go to
the tab corresponding to that node.
The nodes are not directly reachable from outside; so when the slides tell
you to "connect to the IP address of your node on port XYZ" you will have
to use a different method.
We suggest using "supergrok", a container offering an NGINX+ngrok combo to
expose your services. To use it, just start (on any of your nodes) the
`jpetazzo/supergrok` image. The image will output further instructions:
```
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
```
The logs of the container will give you a tunnel address and explain
how to connect to exposed services. That's all you need to do!
We are also working on a native proxy, embedded in Play-With-Docker.
Stay tuned!
<!--
- You can use a proxy provided by Play-With-Docker. When the slides
instruct you to connect to nodeX on port ABC, instead, you will connect
to http://play-with-docker.com/XXX.XXX.XXX.XXX:ABC, where XXX.XXX.XXX.XXX
is the IP address of nodeX.
-->
Note that the instances provided by Play-With-Docker have a short lifespan
(a few hours only), so if you want to do the workshop over multiple sessions,
you will have to start over each time... or create your own cluster with
one of the methods described below.
- [autotest](autotest/)
- Semi-automated testing system to check that all the exercises
in the slides work properly.
- [bin](bin/)
- A few helper scripts that you can safely ignore for now.
- [dockercoins](dockercoins/)
- The demo app used throughout the orchestration workshops.
- [efk](efk/), [elk](elk/), [prom](prom/), [snap](snap/):
- Logging and metrics stacks used in the later parts of
the orchestration workshops.
- [prepare-local](prepare-local/), [prepare-machine](prepare-machine/):
- Contributed scripts to automate the creation of local environments.
These could use some help to test/check that they work.
- [prepare-vms](prepare-vms/):
- Scripts to automate the creation of AWS instances for students.
These are routinely used and actively maintained.
- [slides](slides/):
- All the slides! They are assembled from Markdown files with
a custom Python script, and then rendered using [gnab/remark](
https://github.com/gnab/remark). Check this directory for more details.
- [stacks](stacks/):
- A handful of Compose files (version 3) that make it easy to
deploy complex application stacks.
### Using Docker Machine to create your own cluster
## Course structure
This method requires a bit more work to get started, but you get a permanent
cluster, with fewer limitations.
(This applies only to the orchestration workshops.)
You will need Docker Machine (if you have Docker Mac, Docker Windows, or
the Docker Toolbox, you're all set already). You will also need:
The workshop introduces a demo app, "DockerCoins," built
around a micro-services architecture. First, we run it
on a single node, using Docker Compose. Then, we pretend
that we need to scale it, and we use an orchestrator
(SwarmKit or Kubernetes) to deploy and scale the app on
a cluster.
- credentials for a cloud provider (e.g. API keys or tokens),
- or a local install of VirtualBox or VMware (or anything supported
by Docker Machine).
We explain the concepts of the orchestrator. For SwarmKit,
we set up the cluster with `docker swarm init` and `docker swarm join`.
For Kubernetes, we use pre-configured clusters.
Full instructions are in the [prepare-machine](prepare-machine) subdirectory.
Then, we cover more advanced concepts: scaling, load balancing,
updates, global services or daemon sets.
There are a number of advanced optional chapters about
logging, metrics, secrets, network encryption, etc.
The content is very modular: it is broken down into a large
number of Markdown files that are put together according
to a YAML manifest. This makes it very easy to re-use
content between different workshops.
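
For illustration, a minimal sketch of that assembly step, assuming a hypothetical manifest that is just a flat list of chapter files (the real markmaker.py added in this change does more):

```
#!/usr/bin/env python
# Hypothetical sketch: concatenate the Markdown chapters listed in a
# YAML manifest into a single deck, separated by "---" (the remark.js
# slide separator). The flat "chapters" list is an assumed format.
import sys
import yaml

manifest = yaml.safe_load(open(sys.argv[1]))  # e.g. a workshop .yml file
chapters = [open(path).read() for path in manifest["chapters"]]
sys.stdout.write("\n---\n".join(chapters))
```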
### Using our scripts to mass-create a bunch of clusters
### DockerCoins
Since we often deliver the workshop during conferences or similar events,
we have scripts to automate the creation of a bunch of clusters using
AWS EC2. If you want to create multiple clusters and have EC2 credits,
check the [prepare-vms](prepare-vms) directory for more information.
## How This Repo is Organized
- **dockercoins**
- Sample App: compose files and source code for the dockercoins sample apps
used throughout the workshop
- **docs**
- Slide Deck: presentation slide deck, works out-of-box with GitHub Pages,
uses https://remarkjs.com
- **prepare-local**
untested scripts for automating the creation of local VirtualBox VMs
(could use your help validating)
- **prepare-machine**
- instructions explaining how to use Docker Machine to create VMs
- **prepare-vms**
- scripts for automating the creation of AWS instances for students
## Slide Deck
- The slides are in the `docs` directory.
- To view them locally open `docs/index.html` in your browser. It works
offline too.
- To view them online open https://jpetazzo.github.io/orchestration-workshop/
in your browser.
- When you fork this repo, be sure GitHub Pages is enabled in repo Settings
for "master branch /docs folder" and you'll have your own website for them.
- They use https://remarkjs.com to allow simple markdown in an HTML file that
remark will transform into a presentation in the browser.
## Sample App: Dockercoins!
The sample app is in the `dockercoins` directory.
It's used during all chapters
for explaining different concepts of orchestration.
To see it in action:
@@ -141,13 +139,18 @@ To see it in action:
- the web UI will be available on port 8000
*If you just want to run the workshop for yourself, you can stop reading
here. If you want to deliver the workshop for others (i.e. if you
want to become an instructor), keep reading!*
## Running the Workshop
If you want to deliver one of these workshops yourself,
this section is for you!
> *This section has been mostly contributed by
> [Bret Fisher](https://twitter.com/bretfisher), who was
> one of the first persons to have the bravery of delivering
> this workshop without me. Thanks Bret! 🍻
>
> Jérôme.*
### General timeline of planning a workshop
@@ -155,7 +158,7 @@ want to become an instructor), keep reading!*
understand the different `dockercoins` repos and the steps we go through to
get to a full Swarm Mode cluster of many containers. You'll update the first
few slides and last slide at a minimum, with your info.
- Your docs directory can use GitHub Pages.
- ~~Your docs directory can use GitHub Pages.~~
- This workshop expects 5 servers per student. You can get away with as few
as 2 servers per student, but you'll need to change the slide deck to
accommodate. More servers = more fun.
@@ -181,13 +184,14 @@ want to become an instructor), keep reading!*
they need for class.
- Typically you create the servers the day before or morning of workshop, and
leave them up the rest of day after workshop. If creating hundreds of servers,
you'll likely want to run all these `trainer` commands from a dedicated
you'll likely want to run all these `workshopctl` commands from a dedicated
instance you have in the same region as the instances you want to create. It is
much faster this way if you're on poor internet. Also, create 2 sets of servers
for yourself, and use one during the workshop, keeping the 2nd as a backup.
- Remember you'll need to print the "cards" for students, so you'll need to
create instances while you have a way to print them.
### Things That Could Go Wrong
- Creating AWS instances ahead of time, and you hit its limits in region and
@@ -201,13 +205,15 @@ want to become an instructor), keep reading!*
- Forget to print "cards" and cut them up for handing out IP's.
- Forget to have fun and focus on your students!
### Creating the VMs
`prepare-vms/trainer` is the script that gets you most of what you need for
`prepare-vms/workshopctl` is the script that gets you most of what you need for
setting up instances. See
[prepare-vms/README.md](prepare-vms)
for all the info on tools and scripts.
### Content for Different Workshop Durations
With all the slides, this workshop is a full day long. If you need to deliver
@@ -216,6 +222,7 @@ can replace `---` with `???` which will hide slides. Or leave them there and
add something like `(EXTRA CREDIT)` to title so students can still view the
content but you also know to skip during presentation.
#### 3 Hour Version
- Limit time on debug tools, maybe skip a few. *"Chapter 1:
@@ -227,6 +234,7 @@ content but you also know to skip during presentation.
- Mention what DAB's are, but make this part optional in case you run out
of time
#### 2 Hour Version
- Skip all the above, and:
@@ -271,13 +279,18 @@ If there is a bug and you can't fix it, but you can
reproduce it: submit an issue explaining how to reproduce.
If there is a bug and you can't even reproduce it:
sorry. It is probably a Heisenbug. I can't act on it
until it's reproducible.
sorry. It is probably a Heisenbug. We can't act on it
until it's reproducible, alas.
if you have attended this workshop and have feedback,
or if you want us to deliver that workshop at your
conference or for your company: contact me (jerome
at docker dot com).
If you have attended this workshop and have feedback,
or if you want somebody to deliver that workshop at your
conference or for your company: you can contact one of us!
Thank you!
- jerome at docker dot com
- bret at bretfisher dot com
If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!
**Thank you!**


@@ -1,15 +1,28 @@
#!/usr/bin/env python
import uuid
import logging
import os
import re
import signal
import subprocess
import sys
import time
import uuid
def print_snippet(snippet):
print(78*'-')
print(snippet)
print(78*'-')
logging.basicConfig(level=logging.DEBUG)
TIMEOUT = 60 # 1 minute
def hrule():
return "="*int(subprocess.check_output(["tput", "cols"]))
# A "snippet" is something that the user is supposed to do in the workshop.
# Most of the "snippets" are shell commands.
# Some of them can be key strokes or other actions.
# In the markdown source, they are the code sections (identified by triple-
# quotes) within .exercise[] sections.
class Snippet(object):
@@ -29,26 +42,22 @@ class Slide(object):
def __init__(self, content):
Slide.current_slide += 1
self.number = Slide.current_slide
# Remove commented-out slides
# (remark.js considers ??? to be the separator for speaker notes)
content = re.split("\n\?\?\?\n", content)[0]
self.content = content
self.snippets = []
exercises = re.findall("\.exercise\[(.*)\]", content, re.DOTALL)
for exercise in exercises:
if "```" in exercise and "<br/>`" in exercise:
print("! Exercise on slide {} has both ``` and <br/>` delimiters, skipping."
.format(self.number))
print_snippet(exercise)
elif "```" in exercise:
if "```" in exercise:
for snippet in exercise.split("```")[1::2]:
self.snippets.append(Snippet(self, snippet))
elif "<br/>`" in exercise:
for snippet in re.findall("<br/>`(.*)`", exercise):
self.snippets.append(Snippet(self, snippet))
else:
print(" Exercise on slide {} has neither ``` or <br/>` delimiters, skipping."
.format(self.number))
logging.warning("Exercise on slide {} does not have any ``` snippet."
.format(self.number))
self.debug()
def __str__(self):
text = self.content
@@ -56,136 +65,165 @@ class Slide(object):
text = text.replace(snippet.content, ansi("7")(snippet.content))
return text
def debug(self):
logging.debug("\n{}\n{}\n{}".format(hrule(), self.content, hrule()))
def ansi(code):
return lambda s: "\x1b[{}m{}\x1b[0m".format(code, s)
slides = []
with open("index.html") as f:
content = f.read()
for slide in re.split("\n---?\n", content):
slides.append(Slide(slide))
is_editing_file = False
placeholders = {}
def wait_for_string(s):
logging.debug("Waiting for string: {}".format(s))
deadline = time.time() + TIMEOUT
while time.time() < deadline:
output = capture_pane()
if s in output:
return
time.sleep(1)
raise Exception("Timed out while waiting for {}!".format(s))
def wait_for_prompt():
logging.debug("Waiting for prompt.")
deadline = time.time() + TIMEOUT
while time.time() < deadline:
output = capture_pane()
# If we are not at the bottom of the screen, there will be a bunch of extra \n's
output = output.rstrip('\n')
if output[-2:] == "\n$":
return
time.sleep(1)
raise Exception("Timed out while waiting for prompt!")
def check_exit_status():
token = uuid.uuid4().hex
data = "echo {} $?\n".format(token)
logging.debug("Sending {!r} to get exit status.".format(data))
send_keys(data)
time.sleep(0.5)
wait_for_prompt()
screen = capture_pane()
status = re.findall("\n{} ([0-9]+)\n".format(token), screen, re.MULTILINE)
logging.debug("Got exit status: {}.".format(status))
if len(status) == 0:
raise Exception("Couldn't retrieve status code {}. Timed out?".format(token))
if len(status) > 1:
raise Exception("More than one status code {}. I'm seeing double! Shoot them both.".format(token))
code = int(status[0])
if code != 0:
raise Exception("Non-zero exit status: {}.".format(code))
# Otherwise just return peacefully.
slides = []
content = open(sys.argv[1]).read()
for slide in re.split("\n---?\n", content):
slides.append(Slide(slide))
actions = []
for slide in slides:
for snippet in slide.snippets:
content = snippet.content
# Multi-line snippets should be ```highlightsyntax...
# Single-line snippets will be interpreted as shell commands
# Extract the "method" (e.g. bash, keys, ...)
# On multi-line snippets, the method is alone on the first line
# On single-line snippets, the data follows the method immediately
if '\n' in content:
highlight, content = content.split('\n', 1)
method, data = content.split('\n', 1)
else:
highlight = "bash"
content = content.strip()
# If the previous snippet was a file fragment, and the current
# snippet is not YAML or EDIT, complain.
if is_editing_file and highlight not in ["yaml", "edit"]:
print("! On slide {}, previous snippet was YAML, so what do what do?"
.format(slide.number))
print_snippet(content)
is_editing_file = False
if highlight == "yaml":
is_editing_file = True
elif highlight == "placeholder":
for line in content.split('\n'):
variable, value = line.split(' ', 1)
placeholders[variable] = value
elif highlight == "bash":
for variable, value in placeholders.items():
quoted = "`{}`".format(variable)
if quoted in content:
content = content.replace(quoted, value)
del placeholders[variable]
if '`' in content:
print("! The following snippet on slide {} contains a backtick:"
.format(slide.number))
print_snippet(content)
continue
print("_ "+content)
snippet.actions.append((highlight, content))
elif highlight == "edit":
print(". "+content)
snippet.actions.append((highlight, content))
elif highlight == "meta":
print("^ "+content)
snippet.actions.append((highlight, content))
else:
print("! Unknown highlight {!r} on slide {}.".format(highlight, slide.number))
if placeholders:
print("! Remaining placeholder values: {}".format(placeholders))
method, data = content.split(' ', 1)
actions.append((slide, snippet, method, data))
actions = sum([snippet.actions for snippet in sum([slide.snippets for slide in slides], [])], [])
# Strip ^{ ... ^} for now
def strip_curly_braces(actions, in_braces=False):
if actions == []:
return []
elif actions[0] == ("meta", "^{"):
return strip_curly_braces(actions[1:], True)
elif actions[0] == ("meta", "^}"):
return strip_curly_braces(actions[1:], False)
elif in_braces:
return strip_curly_braces(actions[1:], True)
def send_keys(data):
subprocess.check_call(["tmux", "send-keys", data])
def capture_pane():
return subprocess.check_output(["tmux", "capture-pane", "-p"])
try:
i = int(open("nextstep").read())
logging.info("Loaded next step ({}) from file.".format(i))
except Exception as e:
logging.warning("Could not read nextstep file ({}), initializing to 0.".format(e))
i = 0
interactive = True
while i < len(actions):
with open("nextstep", "w") as f:
f.write(str(i))
slide, snippet, method, data = actions[i]
# Remove extra spaces (we don't want them in the terminal) and carriage returns
data = data.strip()
print(hrule())
print(slide.content.replace(snippet.content, ansi(7)(snippet.content)))
print(hrule())
if interactive:
print("[{}/{}] Shall we execute that snippet above?".format(i, len(actions)))
print("(ENTER to execute, 'c' to continue until next error, N to jump to step #N)")
command = raw_input("> ")
else:
return [actions[0]] + strip_curly_braces(actions[1:], False)
command = ""
actions = strip_curly_braces(actions)
# For now, remove the `highlighted` sections
# (Make sure to use $() in shell snippets!)
if '`' in data:
logging.info("Stripping ` from snippet.")
data = data.replace('`', '')
background = []
cwd = os.path.expanduser("~")
env = {}
for current_action, next_action in zip(actions, actions[1:]+[("bash", "true")]):
if current_action[0] == "meta":
continue
print(ansi(7)(">>> {}".format(current_action[1])))
time.sleep(1)
popen_options = dict(shell=True, cwd=cwd, stdin=subprocess.PIPE, preexec_fn=os.setpgrp)
# The following hack captures the environment variables set by `docker-machine env`
# FIXME: this doesn't handle `unset` for now
if any([
"eval $(docker-machine env" in current_action[1],
"DOCKER_HOST" in current_action[1],
"COMPOSE_FILE" in current_action[1],
]):
popen_options["stdout"] = subprocess.PIPE
current_action[1] += "\nenv"
proc = subprocess.Popen(current_action[1], **popen_options)
proc.cmd = current_action[1]
if next_action[0] == "meta":
print(">>> {}".format(next_action[1]))
time.sleep(3)
if next_action[1] == "^C":
os.killpg(proc.pid, signal.SIGINT)
proc.wait()
elif next_action[1] == "^Z":
# Let the process run
background.append(proc)
elif next_action[1] == "^D":
proc.communicate()
proc.wait()
if command == "c":
# continue until next timeout
interactive = False
elif command.isdigit():
i = int(command)
elif command == "":
logging.info("Running with method {}: {}".format(method, data))
if method == "keys":
send_keys(data)
elif method == "bash":
# Make sure that we're ready
wait_for_prompt()
# Strip leading spaces
data = re.sub("\n +", "\n", data)
# Add "RETURN" at the end of the command :)
data += "\n"
# Send command
send_keys(data)
# Force a short sleep to avoid race condition
time.sleep(0.5)
_, _, next_method, next_data = actions[i+1]
if next_method == "wait":
wait_for_string(next_data)
else:
wait_for_prompt()
# Verify return code FIXME should be optional
check_exit_status()
elif method == "copypaste":
screen = capture_pane()
matches = re.findall(data, screen, flags=re.DOTALL)
if len(matches) == 0:
raise Exception("Could not find regex {} in output.".format(data))
# Arbitrarily get the most recent match
match = matches[-1]
# Remove line breaks (like a screen copy paste would do)
match = match.replace('\n', '')
send_keys(match + '\n')
# FIXME: we should factor out the "bash" method
wait_for_prompt()
check_exit_status()
else:
print("! Unknown meta action {} after snippet:".format(next_action[1]))
print_snippet(next_action[1])
print(ansi(7)("<<< {}".format(current_action[1])))
else:
proc.wait()
if "stdout" in popen_options:
stdout, stderr = proc.communicate()
for line in stdout.split('\n'):
if line.startswith("DOCKER_"):
variable, value = line.split('=', 1)
env[variable] = value
print("=== {}={}".format(variable, value))
print(ansi(7)("<<< {} >>> {}".format(proc.returncode, current_action[1])))
if proc.returncode != 0:
print("Got non-zero status code; aborting.")
break
if current_action[1].startswith("cd "):
cwd = os.path.expanduser(current_action[1][3:])
for proc in background:
print("Terminating background process:")
print_snippet(proc.cmd)
proc.terminate()
proc.wait()
logging.warning("Unknown method {}: {!r}".format(method, data))
i += 1
else:
i += 1
logging.warning("Unknown command {}, skipping to next step.".format(command))
# Reset slide counter
with open("nextstep", "w") as f:
f.write(str(0))
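
The heart of the harness above is a trio of tmux primitives: send keys, capture the pane, wait for the prompt. A condensed, self-contained sketch of that pattern, assuming an existing tmux session whose shell prompt is a bare `$`:

```
#!/usr/bin/env python
# Condensed sketch of the tmux-driving pattern used by the harness above.
# Assumes it runs next to an existing tmux session whose prompt is "$".
import subprocess
import time

TIMEOUT = 60

def send_keys(data):
    subprocess.check_call(["tmux", "send-keys", data])

def capture_pane():
    return subprocess.check_output(["tmux", "capture-pane", "-p"])

def wait_for_prompt():
    deadline = time.time() + TIMEOUT
    while time.time() < deadline:
        # Extra \n's appear when we are not at the bottom of the screen.
        output = capture_pane().rstrip('\n')
        if output.endswith('\n$'):
            return
        time.sleep(1)
    raise Exception("Timed out while waiting for prompt!")

wait_for_prompt()
send_keys("echo hello\n")   # trailing "\n" presses ENTER
wait_for_prompt()
```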


@@ -1 +0,0 @@
../www/htdocs/index.html


@@ -1,42 +0,0 @@
#!/usr/bin/env python
import os
import sys
import yaml
# arg 1 = service name
# arg 2 = number of instances
service_name = sys.argv[1]
desired_instances = int(sys.argv[2])
compose_file = os.environ["COMPOSE_FILE"]
input_file, output_file = compose_file, compose_file
config = yaml.load(open(input_file))
# The ambassadors need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))
port = str(ports[service_name])
command_line = port
depends_on = []
for n in range(1, 1+desired_instances):
config["services"]["{}{}".format(service_name, n)] = config["services"][service_name]
command_line += " {}{}:{}".format(service_name, n, port)
depends_on.append("{}{}".format(service_name, n))
config["services"][service_name] = {
"image": "jpetazzo/hamba",
"command": command_line,
"depends_on": depends_on,
}
if "networks" in config["services"]["{}1".format(service_name)]:
config["services"][service_name]["networks"] = config["services"]["{}1".format(service_name)]["networks"]
yaml.safe_dump(config, open(output_file, "w"), default_flow_style=False)
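
For reference, the shape of the transformation this removed script performed, shown on an in-memory config with illustrative values (service `rng`, port 80, 2 instances):

```
# Illustrative before/after (values made up; the real script reads
# $COMPOSE_FILE and looks the port up in ports.yml).
before = {"services": {"rng": {"image": "dockercoins_rng"}}}
after = {"services": {
    "rng1": {"image": "dockercoins_rng"},
    "rng2": {"image": "dockercoins_rng"},
    # The original service name becomes an ambassador load-balancing
    # over the numbered instances.
    "rng": {
        "image": "jpetazzo/hamba",
        "command": "80 rng1:80 rng2:80",
        "depends_on": ["rng1", "rng2"],
    },
}}
```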


@@ -1,87 +0,0 @@
#!/usr/bin/env python
import os
import sys
import yaml
def error(msg):
print("ERROR: {}".format(msg))
exit(1)
# arg 1 = service name
service_name = sys.argv[1]
compose_file = os.environ["COMPOSE_FILE"]
input_file, output_file = compose_file, compose_file
config = yaml.load(open(input_file))
version = config.get("version")
if version != "2":
error("Unsupported $COMPOSE_FILE version: {!r}".format(version))
# The load balancers need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))
port = str(ports[service_name])
if service_name not in config["services"]:
error("service {} not found in $COMPOSE_FILE"
.format(service_name))
lb_name = "{}-lb".format(service_name)
be_name = "{}-be".format(service_name)
wd_name = "{}-wd".format(service_name)
if lb_name in config["services"]:
error("load balancer {} already exists in $COMPOSE_FILE"
.format(lb_name))
if wd_name in config["services"]:
error("dns watcher {} already exists in $COMPOSE_FILE"
.format(wd_name))
service = config["services"][service_name]
if "networks" in service:
error("service {} has custom networks"
.format(service_name))
# Put the service on its own network.
service["networks"] = {service_name: {"aliases": [ be_name ] } }
# Put a label indicating which load balancer is responsible for this service.
if "labels" not in service:
service["labels"] = {}
service["labels"]["loadbalancer"] = lb_name
# Add the load balancer.
config["services"][lb_name] = {
"image": "jpetazzo/hamba",
"command": "{} {} {}".format(port, be_name, port),
"depends_on": [ service_name ],
"networks": {
"default": {
"aliases": [ service_name ],
},
service_name: None,
},
}
# Add the DNS watcher.
config["services"][wd_name] = {
"image": "jpetazzo/watchdns",
"command": "{} {} {}".format(port, be_name, port),
"volumes_from": [ lb_name ],
"networks": {
service_name: None,
},
}
if "networks" not in config:
config["networks"] = {}
if service_name not in config["networks"]:
config["networks"][service_name] = None
yaml.safe_dump(config, open(output_file, "w"), default_flow_style=False)


@@ -1,63 +0,0 @@
#!/usr/bin/env python
from common import ComposeFile
import os
import subprocess
import time
registry = os.environ.get("DOCKER_REGISTRY")
if not registry:
print("Please set the DOCKER_REGISTRY variable, e.g.:")
print("export DOCKER_REGISTRY=jpetazzo # use the Docker Hub")
print("export DOCKER_REGISTRY=localhost:5000 # use a local registry")
exit(1)
# Get the name of the current directory.
project_name = os.path.basename(os.path.realpath("."))
# Version used to tag the generated Docker image, using the UNIX timestamp or the given version.
if "VERSION" not in os.environ:
version = str(int(time.time()))
else:
version = os.environ["VERSION"]
# Execute "docker-compose build" and abort if it fails.
subprocess.check_call(["docker-compose", "-f", "docker-compose.yml", "build"])
# Load the services from the input docker-compose.yml file.
# TODO: run parallel builds.
compose_file = ComposeFile("docker-compose.yml")
# Iterate over all services that have a "build" definition.
# Tag them, and initiate a push in the background.
push_operations = dict()
for service_name, service in compose_file.services.items():
if "build" in service:
compose_image = "{}_{}".format(project_name, service_name)
registry_image = "{}/{}:{}".format(registry, compose_image, version)
# Re-tag the image so that it can be uploaded to the registry.
subprocess.check_call(["docker", "tag", compose_image, registry_image])
# Spawn "docker push" to upload the image.
push_operations[service_name] = subprocess.Popen(["docker", "push", registry_image])
# Replace the "build" definition by an "image" definition,
# using the name of the image on the registry.
del service["build"]
service["image"] = registry_image
# Wait for push operations to complete.
for service_name, popen_object in push_operations.items():
print("Waiting for {} push to complete...".format(service_name))
popen_object.wait()
print("Done.")
# Write the new docker-compose.yml file.
if "COMPOSE_FILE" not in os.environ:
os.environ["COMPOSE_FILE"] = "docker-compose.yml-{}".format(version)
print("Writing to new Compose file:")
else:
print("Writing to provided Compose file:")
print("COMPOSE_FILE={}".format(os.environ["COMPOSE_FILE"]))
compose_file.save()
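
The resulting image naming scheme, condensed into a sketch (registry and version values are illustrative):

```
# Illustrative: how the script above names images (values made up).
registry = "localhost:5000"        # from $DOCKER_REGISTRY
version = "1507000000"             # UNIX timestamp, or $VERSION if set
compose_image = "dockercoins_rng"  # "<project>_<service>" built by Compose
registry_image = "{}/{}:{}".format(registry, compose_image, version)
# -> "localhost:5000/dockercoins_rng:1507000000"
```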


@@ -1,76 +0,0 @@
import os
import subprocess
import sys
import time
import yaml
def COMPOSE_FILE():
if "COMPOSE_FILE" not in os.environ:
print("The $COMPOSE_FILE environment variable is not set. Aborting.")
exit(1)
return os.environ["COMPOSE_FILE"]
class ComposeFile(object):
def __init__(self, filename=None):
if filename is None:
filename = COMPOSE_FILE()
if not os.path.isfile(filename):
print("File {!r} does not exist. Aborting.".format(filename))
exit(1)
self.data = yaml.load(open(filename))
@property
def services(self):
if self.data.get("version") == "2":
return self.data["services"]
else:
return self.data
def save(self, filename=None):
if filename is None:
filename = COMPOSE_FILE()
with open(filename, "w") as f:
yaml.safe_dump(self.data, f, default_flow_style=False)
# Executes a bunch of commands in parallel, but no more than N at a time.
# This makes it possible to execute a large number of tasks concurrently,
# without turning into a fork bomb.
# `parallelism` is the number of tasks to execute simultaneously.
# `commands` is a list of tasks to execute.
# Each task is itself a list, where the first element is a descriptive
# string, and the following elements are the arguments to pass to Popen.
def parallel_run(commands, parallelism):
running = []
# While stuff is running, or we have stuff to run...
while commands or running:
# While there is stuff to run, and room in the pipe...
while commands and len(running)<parallelism:
command = commands.pop(0)
print("START {}".format(command[0]))
popen = subprocess.Popen(
command[1:], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
popen._desc = command[0]
running.append(popen)
must_sleep = True
for popen in running:
status = popen.poll()
if status is not None:
must_sleep = False
running.remove(popen)
if status==0:
print("OK {}".format(popen._desc))
else:
print("ERROR {} [Exit status: {}]"
.format(popen._desc, status))
output = "\n" + popen.communicate()[0].strip()
output = output.replace("\n", "\n| ")
print(output)
else:
print("WAIT ({} running, {} more to run)"
.format(len(running), len(commands)))
if must_sleep:
time.sleep(1)
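
A usage sketch for the `parallel_run` helper removed above (the tasks are illustrative):

```
# Illustrative usage: each task is a descriptive string followed by the
# argv to pass to Popen; at most 2 tasks run at a time here.
from common import parallel_run

commands = [
    ["sleep one", "sleep", "1"],
    ["sleep two", "sleep", "2"],
    ["say hello", "echo", "hello"],
]
parallel_run(commands, 2)
```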


@@ -1,69 +0,0 @@
#!/usr/bin/env python
from common import parallel_run
import os
import subprocess
project_name = os.path.basename(os.path.realpath("."))
# Get all services and backends in our compose application.
containers_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=com.docker.compose.project={}".format(project_name),
"--format", '{{ .ID }} '
'{{ .Label "com.docker.compose.service" }} '
'{{ .Ports }}',
])
# Build list of backends.
frontend_ports = dict()
backends = dict()
for container in containers_data.split('\n'):
if not container:
continue
# TODO: support services with multiple ports!
container_id, service_name, port = container.split(' ')
if not port:
continue
backend, frontend = port.split("->")
backend_addr, backend_port = backend.split(':')
frontend_port, frontend_proto = frontend.split('/')
# TODO: deal with udp (mostly skip it?)
assert frontend_proto == "tcp"
# TODO: check inconsistencies between port mappings
frontend_ports[service_name] = frontend_port
if service_name not in backends:
backends[service_name] = []
backends[service_name].append((backend_addr, backend_port))
# Get all existing ambassadors for this application.
ambassadors_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=ambassador.project={}".format(project_name),
"--format", '{{ .ID }} '
'{{ .Label "ambassador.service" }} '
'{{ .Label "ambassador.bindaddr" }}',
])
# Update ambassadors.
operations = []
for ambassador in ambassadors_data.split('\n'):
if not ambassador:
continue
ambassador_id, service_name, bind_address = ambassador.split()
print("Updating configuration for {}/{} -> {}:{} -> {}"
.format(service_name, ambassador_id,
bind_address, frontend_ports[service_name],
backends[service_name]))
command = [
ambassador_id,
"docker", "run", "--rm", "--volumes-from", ambassador_id,
"jpetazzo/hamba", "reconfigure",
"{}:{}".format(bind_address, frontend_ports[service_name])
]
for backend_addr, backend_port in backends[service_name]:
command.extend([backend_addr, backend_port])
operations.append(command)
# Execute all commands in parallel.
parallel_run(operations, 10)


@@ -1,71 +0,0 @@
#!/usr/bin/env python
from common import ComposeFile, parallel_run
import os
import subprocess
config = ComposeFile()
project_name = os.path.basename(os.path.realpath("."))
# Get all services in our compose application.
containers_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=com.docker.compose.project={}".format(project_name),
"--format", '{{ .ID }} {{ .Label "com.docker.compose.service" }}',
])
# Get all existing ambassadors for this application.
ambassadors_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=ambassador.project={}".format(project_name),
"--format", '{{ .ID }} '
'{{ .Label "ambassador.container" }} '
'{{ .Label "ambassador.service" }}',
])
# Build a set of existing ambassadors.
ambassadors = dict()
for ambassador in ambassadors_data.split('\n'):
if not ambassador:
continue
ambassador_id, container_id, linked_service = ambassador.split()
ambassadors[container_id, linked_service] = ambassador_id
operations = []
# Start the missing ambassadors.
for container in containers_data.split('\n'):
if not container:
continue
container_id, service_name = container.split()
extra_hosts = config.services[service_name].get("extra_hosts", {})
for linked_service, bind_address in extra_hosts.items():
description = "Ambassador {}/{}/{}".format(
service_name, container_id, linked_service)
ambassador_id = ambassadors.pop((container_id, linked_service), None)
if ambassador_id:
print("{} already exists: {}".format(description, ambassador_id))
else:
print("{} not found, creating it.".format(description))
operations.append([
description,
"docker", "run", "-d",
"--net", "container:{}".format(container_id),
"--label", "ambassador.project={}".format(project_name),
"--label", "ambassador.container={}".format(container_id),
"--label", "ambassador.service={}".format(linked_service),
"--label", "ambassador.bindaddr={}".format(bind_address),
"jpetazzo/hamba", "run"
])
# Destroy extraneous ambassadors.
for ambassador_id in ambassadors.values():
print("{} is not useful anymore, destroying it.".format(ambassador_id))
operations.append([
"rm -f {}".format(ambassador_id),
"docker", "rm", "-f", ambassador_id,
])
# Execute all commands in parallel.
parallel_run(operations, 10)


@@ -1,3 +0,0 @@
#!/bin/sh
docker ps -q --filter label=ambassador.project=dockercoins |
xargs docker rm -f


@@ -1,16 +0,0 @@
#!/bin/sh
# Some tools will choke on the YAML files generated by PyYAML;
# in particular on a section like this one:
#
# service:
# ports:
# - 8000:5000
#
# This script adds two spaces in front of the dash in those files.
# Warning: it is a hack, and probably won't work on some YAML files.
[ -f "$COMPOSE_FILE" ] || {
echo "Cannot find COMPOSE_FILE"
exit 1
}
sed -i 's/^ -/ -/' $COMPOSE_FILE
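
For reference, a common PyYAML-side alternative to this sed hack (an assumption; not something this repo uses): a small Dumper subclass that indents sequence items under their key:

```
# Alternative fix, sketched: make PyYAML indent list items itself,
# so no sed post-processing is needed. Not used by this repo.
import yaml

class IndentedDumper(yaml.Dumper):
    def increase_indent(self, flow=False, indentless=False):
        # Never emit "indentless" block sequences.
        return super(IndentedDumper, self).increase_indent(flow, False)

data = {"service": {"ports": ["8000:5000"]}}
print(yaml.dump(data, Dumper=IndentedDumper, default_flow_style=False))
# Output:
# service:
#   ports:
#     - 8000:5000
```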


@@ -1,38 +0,0 @@
#!/usr/bin/env python
from common import ComposeFile
import yaml
config = ComposeFile()
# The ambassadors need to know the service port to use.
# Those ports must be declared here.
ports = yaml.load(open("ports.yml"))
def generate_local_addr():
last_byte = 2
while last_byte<255:
yield "127.127.0.{}".format(last_byte)
last_byte += 1
for service_name, service in config.services.items():
if "links" in service:
for link, local_addr in zip(service["links"], generate_local_addr()):
if link not in ports:
print("Skipping link {} in service {} "
"(no port mapping defined). "
"Your code will probably break."
.format(link, service_name))
continue
if "extra_hosts" not in service:
service["extra_hosts"] = {}
service["extra_hosts"][link] = local_addr
del service["links"]
if "ports" in service:
del service["ports"]
if "volumes" in service:
del service["volumes"]
if service_name in ports:
service["ports"] = [ ports[service_name] ]
config.save()
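
The net effect on one service, as an illustrative before/after (assuming `ports.yml` maps `rng` to a port but does not list `worker`; 127.127.0.2 is the first address yielded by `generate_local_addr`):

```
# Illustrative before/after for one service (values made up).
before = {"worker": {"image": "dockercoins_worker",
                     "links": ["rng"],
                     "volumes": ["/tmp/foo:/tmp/foo"]}}
after = {"worker": {"image": "dockercoins_worker",
                    # each link becomes a static host entry pointing at a
                    # local ambassador address; ports and volumes are dropped
                    "extra_hosts": {"rng": "127.127.0.2"}}}
```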


@@ -1,46 +0,0 @@
#!/usr/bin/env python
# FIXME: hardcoded
PORT="80"
import os
import subprocess
project_name = os.path.basename(os.path.realpath("."))
# Get all existing services for this application.
containers_data = subprocess.check_output([
"docker", "ps",
"--filter", "label=com.docker.compose.project={}".format(project_name),
"--format", '{{ .Label "com.docker.compose.service" }} '
'{{ .Label "com.docker.compose.container-number" }} '
'{{ .Label "loadbalancer" }}',
])
load_balancers = dict()
for line in containers_data.split('\n'):
if not line:
continue
service_name, container_number, load_balancer = line.split(' ')
if load_balancer:
if load_balancer not in load_balancers:
load_balancers[load_balancer] = []
load_balancers[load_balancer].append((service_name, int(container_number)))
for load_balancer, backends in load_balancers.items():
# FIXME: iterate on all load balancers
container_name = "{}_{}_1".format(project_name, load_balancer)
command = [
"docker", "run", "--rm",
"--volumes-from", container_name,
"--net", "container:{}".format(container_name),
"jpetazzo/hamba", "reconfigure", PORT,
]
command.extend(
"{}_{}_{}:{}".format(project_name, backend_name, backend_number, PORT)
for (backend_name, backend_number) in sorted(backends)
)
print("Updating configuration for {} with {} backend(s)..."
.format(container_name, len(backends)))
subprocess.check_output(command)


@@ -1,201 +0,0 @@
#!/bin/sh
unset DOCKER_REGISTRY
unset DOCKER_HOST
unset COMPOSE_FILE
SWARM_IMAGE=${SWARM_IMAGE:-swarm}
prepare_1_check_ssh_keys () {
for N in $(seq 1 5); do
ssh node$N true
done
}
prepare_2_compile_swarm () {
cd ~
git clone git://github.com/docker/swarm
cd swarm
[[ -z "$1" ]] && {
echo "Specify which revision to build."
return
}
git checkout "$1" || return
mkdir -p image
docker build -t docker/swarm:$1 .
docker run -i --entrypoint sh docker/swarm:$1 \
-c 'cat $(which swarm)' > image/swarm
chmod +x image/swarm
cat >image/Dockerfile <<EOF
FROM scratch
COPY ./swarm /swarm
ENTRYPOINT ["/swarm", "-debug", "-experimental"]
EOF
docker build -t jpetazzo/swarm:$1 image
docker login
docker push jpetazzo/swarm:$1
docker logout
SWARM_IMAGE=jpetazzo/swarm:$1
}
clean_1_containers () {
for N in $(seq 1 5); do
ssh node$N "docker ps -aq | xargs -r -n1 -P10 docker rm -f"
done
}
clean_2_volumes () {
for N in $(seq 1 5); do
ssh node$N "docker volume ls -q | xargs -r docker volume rm"
done
}
clean_3_images () {
for N in $(seq 1 5); do
ssh node$N "docker images | awk '/dockercoins|jpetazzo/ {print \$1\":\"\$2}' | xargs -r docker rmi -f"
done
}
clean_4_machines () {
rm -rf ~/.docker/machine/
}
clean_all () {
clean_1_containers
clean_2_volumes
clean_3_images
clean_4_machines
}
dm_swarm () {
eval $(docker-machine env node1 --swarm)
}
dm_node1 () {
eval $(docker-machine env node1)
}
setup_1_swarm () {
grep node[12345] /etc/hosts | grep -v ^127 |
while read IPADDR NODENAME; do
docker-machine create --driver generic \
--engine-opt cluster-store=consul://localhost:8500 \
--engine-opt cluster-advertise=eth0:2376 \
--swarm --swarm-master --swarm-image $SWARM_IMAGE \
--swarm-discovery consul://localhost:8500 \
--swarm-opt replication --swarm-opt advertise=$IPADDR:3376 \
--generic-ssh-user docker --generic-ip-address $IPADDR $NODENAME
done
}
setup_2_consul () {
IPADDR=$(ssh node1 ip a ls dev eth0 |
sed -n 's,.*inet \(.*\)/.*,\1,p')
for N in 1 2 3 4 5; do
ssh node$N -- docker run -d --restart=always --name consul_node$N \
-e CONSUL_BIND_INTERFACE=eth0 --net host consul \
agent -server -retry-join $IPADDR -bootstrap-expect 5 \
-ui -client 0.0.0.0
done
}
setup_3_wait () {
# Wait for a Swarm master
dm_swarm
while ! docker ps; do sleep 1; done
# Wait for all nodes to be there
while ! [ "$(docker info | grep "^Nodes:")" = "Nodes: 5" ]; do sleep 1; done
}
setup_4_registry () {
cd ~/orchestration-workshop/registry
dm_swarm
docker-compose up -d
for N in $(seq 2 5); do
docker-compose scale frontend=$N
done
}
setup_5_btp_dockercoins () {
cd ~/orchestration-workshop/dockercoins
dm_node1
export DOCKER_REGISTRY=localhost:5000
cp docker-compose.yml-v2 docker-compose.yml
~/orchestration-workshop/bin/build-tag-push.py | tee /tmp/btp.log
export $(tail -n 1 /tmp/btp.log)
}
setup_6_add_lbs () {
cd ~/orchestration-workshop/dockercoins
~/orchestration-workshop/bin/add-load-balancer-v2.py rng
~/orchestration-workshop/bin/add-load-balancer-v2.py hasher
}
setup_7_consulfs () {
dm_swarm
docker pull jpetazzo/consulfs
for N in $(seq 1 5); do
ssh node$N "docker run --rm -v /usr/local/bin:/target jpetazzo/consulfs"
ssh node$N mkdir -p ~/consul
ssh -f node$N "mountpoint ~/consul || consulfs localhost:8500 ~/consul"
done
}
setup_8_syncmachine () {
while ! mountpoint ~/consul; do
sleep 1
done
cp -r ~/.docker/machine ~/consul/
for N in $(seq 2 5); do
ssh node$N mkdir -p ~/.docker
ssh node$N "[ -L ~/.docker/machine ] || ln -s ~/consul/machine ~/.docker"
done
}
setup_9_elk () {
dm_swarm
cd ~/orchestration-workshop/elk
docker-compose up -d
for N in $(seq 1 5); do
docker-compose scale logstash=$N
done
}
setup_all () {
setup_1_swarm
setup_2_consul
setup_3_wait
setup_4_registry
setup_5_btp_dockercoins
setup_6_add_lbs
setup_7_consulfs
setup_8_syncmachine
dm_swarm
}
force_remove_network () {
dm_swarm
NET="$1"
for CNAME in $(docker network inspect $NET | grep Name | grep -v \"$NET\" | cut -d\" -f4); do
echo $CNAME
docker network disconnect -f $NET $CNAME
done
docker network rm $NET
}
demo_1_compose_up () {
dm_swarm
cd ~/orchestration-workshop/dockercoins
docker-compose up -d
}
grep -qs -- MAGICMARKER "$0" && { # Don't display this line in the function list
echo "You should source this file, then invoke the following functions:"
grep -- '^[a-z].*{$' "$0" | cut -d" " -f1
}
show_swarm_primary () {
dm_swarm
docker info 2>/dev/null | grep -e ^Role -e ^Primary
}
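This file is meant to be sourced (executing it directly only prints the function list), so a typical session could look like the sketch below; the filename is hypothetical.

```bash
source swarm-helpers.sh    # hypothetical name for the library above
clean_all                  # wipe containers, volumes, images, and machine configs
setup_all                  # run setup steps 1 through 8, then point the client at the Swarm
show_swarm_primary         # check which manager is currently primary
```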


@@ -1,12 +0,0 @@
version: "2"
services:
cadvisor:
image: google/cadvisor
ports:
- "8080:8080"
volumes:
- "/:/rootfs:ro"
- "/var/run:/var/run:rw"
- "/sys:/sys:ro"
- "/var/lib/docker/:/var/lib/docker:ro"


@@ -1,19 +0,0 @@
# CEPH on Docker
Note: this doesn't quite work yet.
The OSD containers need to be started twice (the first time, they fail
to initialize; the second time works like a champ).
Also, it looks like you need at least two OSD containers (or the OSD
container should have two disks/directories, whatever).
RadosGw is listening on port 8080.
The `admin` container will create a `docker` user using `radosgw-admin`.
If you run it multiple times, that's OK: further invocations are idempotent.
Last but not least: it looks like the AWS CLI uses a new signature format
that doesn't work with RadosGW. After almost two hours trying to figure
out what was wrong, I tried the S3 credentials directly with boto and
it worked immediately (I was able to create a bucket).
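If you want to try the AWS CLI against RadosGW anyway, forcing the legacy v2 signature is a plausible workaround (untested here; the endpoint and bucket name are hypothetical, and the credentials come from the `radosgw-admin user create` step above):

```bash
aws configure set default.s3.signature_version s3
aws --endpoint-url http://localhost:8080 s3 mb s3://testbucket
```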


@@ -1,53 +0,0 @@
version: "2"
services:
mon:
image: ceph/daemon
command: mon
environment:
CEPH_PUBLIC_NETWORK: 10.33.0.0/16
MON_IP: 10.33.0.2
osd:
image: ceph/daemon
command: osd_directory
depends_on:
- mon
volumes_from:
- mon
volumes:
- /var/lib/ceph/osd
mds:
image: ceph/daemon
command: mds
environment:
CEPHFS_CREATE: 1
depends_on:
- mon
volumes_from:
- mon
rgw:
image: ceph/daemon
command: rgw
depends_on:
- mon
volumes_from:
- mon
environment:
CEPH_OPTS: --verbose
admin:
image: ceph/daemon
entrypoint: radosgw-admin
depends_on:
- mon
volumes_from:
- mon
command: user create --uid=docker --display-name=docker
networks:
default:
ipam:
driver: default
config:
- subnet: 10.33.0.0/16
gateway: 10.33.0.1
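Given the note above about OSDs failing on their first initialization, bringing the stack up could look like this sketch:

```bash
docker-compose up -d
# The OSDs fail to initialize the first time (see the README note above):
docker-compose restart osd
# At least two OSDs appear to be required:
docker-compose scale osd=2
```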


@@ -1,12 +0,0 @@
version: "2"
services:
bootstrap:
image: jpetazzo/consul
command: agent -server -bootstrap
container_name: bootstrap
server:
image: jpetazzo/consul
command: agent -server -join bootstrap -join server
client:
image: jpetazzo/consul
command: members -rpc-addr server:8400
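A plausible way to exercise this file: start the bootstrap and server agents, scale the servers, then use the one-shot `client` service to list members (sketch, untested):

```bash
docker-compose up -d bootstrap server
docker-compose scale server=3
docker-compose run --rm client   # runs: members -rpc-addr server:8400
```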


@@ -1,7 +1,10 @@
FROM ruby:alpine
RUN apk add --update build-base
RUN apk add --update build-base curl
RUN gem install sinatra
RUN gem install thin
ADD hasher.rb /
CMD ["ruby", "hasher.rb"]
EXPOSE 80
HEALTHCHECK \
--interval=1s --timeout=2s --retries=3 --start-period=1s \
CMD curl http://localhost/ || exit 1
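Once a container built from this image is running, the state driven by the HEALTHCHECK can be read back with `docker inspect`; the container name below is hypothetical.

```bash
docker inspect --format '{{ .State.Health.Status }}' hasher
# Prints "starting", "healthy", or "unhealthy".
```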


@@ -50,7 +50,7 @@ function refresh () {
points.push({ x: s2.now, y: speed });
}
$("#speed").text("~" + speed.toFixed(1) + " hashes/second");
var msg = ("I'm attending the @docker workshop at #LinuxCon, "
var msg = ("I'm attending a @docker orchestration workshop, "
+ "and my #DockerCoins mining rig is crunching "
+ speed.toFixed(1) + " hashes/second! W00T!");
$("#tweet").attr(


@@ -1,9 +0,0 @@
<html>
<!-- Generated with index.html.sh -->
<head>
<meta http-equiv="refresh" content="0; URL='https://dockercommunity.slack.com/messages/docker-mentor'" />
</head>
<body>
<a href="https://dockercommunity.slack.com/messages/docker-mentor">https://dockercommunity.slack.com/messages/docker-mentor</a>
</body>
</html>


@@ -1,16 +0,0 @@
#!/bin/sh
#LINK=https://gitter.im/jpetazzo/workshop-20170322-sanjose
LINK=https://dockercommunity.slack.com/messages/docker-mentor
#LINK=https://usenix-lisa.slack.com/messages/docker
sed "s,@@LINK@@,$LINK,g" >index.html <<EOF
<html>
<!-- Generated with index.html.sh -->
<head>
<meta http-equiv="refresh" content="0; URL='$LINK'" />
</head>
<body>
<a href="$LINK">$LINK</a>
</body>
</html>
EOF

Binary image deleted (was 11 KiB).


@@ -1,19 +0,0 @@
#!/usr/bin/env python
"""
Extract and print level 1 and 2 titles from workshop slides.
"""
separators = [
"---",
"--"
]
slide_count = 1
for line in open("index.html"):
    line = line.strip()
    if line in separators:
        slide_count += 1
    if line.startswith('# '):
        print slide_count, '# #', line
    elif line.startswith('## '):
        print slide_count, line

File diff suppressed because it is too large.

netlify.toml Normal file

@@ -0,0 +1,5 @@
[build]
base = "slides"
publish = "slides"
command = "./build.sh once"


@@ -71,7 +71,7 @@ to your node, then run the following command:
```bash
sudo curl -L \
https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` \
https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```


@@ -2,7 +2,6 @@ FROM debian:jessie
MAINTAINER AJ Bowen <aj@soulshake.net>
RUN apt-get update && apt-get install -y \
wkhtmltopdf \
bsdmainutils \
ca-certificates \
curl \
@@ -12,19 +11,20 @@ RUN apt-get update && apt-get install -y \
man \
pssh \
python \
python-pip \
python-docutils \
python-pip \
ssh \
wkhtmltopdf \
xvfb \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN pip install \
awscli \
jinja2 \
pdfkit \
PyYAML \
termcolor
WORKDIR $$HOME
RUN echo "alias ll='ls -lahF'" >> /root/.bashrc
ENTRYPOINT ["/root/prepare-vms/scripts/trainer-cli"]
RUN mv $(which wkhtmltopdf) $(which wkhtmltopdf).real
COPY lib/wkhtmltopdf /usr/local/bin/wkhtmltopdf


@@ -10,24 +10,24 @@
- fork/clone repo
- set required environment variables for AWS
- create your own setting file from `settings/example.yaml`
- run `./trainer` commands to create instances, install Docker, set up each user's environment on node1, and perform other management tasks
- run the `./trainer cards` command to generate a PDF handout with each user's host IPs and login info, ready for printing
- run `./workshopctl` commands to create instances, install Docker, set up each user's environment on node1, and perform other management tasks
- run the `./workshopctl cards` command to generate a PDF handout with each user's host IPs and login info, ready for printing
## Clone/Fork the Repo, and Build the Tools Image
The Docker Compose file here is used to build an image with all the dependencies needed to run the `./trainer` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](trainer#L5).
The Docker Compose file here is used to build an image with all the dependencies needed to run the `./workshopctl` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](workshopctl#L5).
$ git clone https://github.com/jpetazzo/orchestration-workshop.git
$ cd orchestration-workshop/prepare-vms
$ docker-compose build
## Preparing to Run `./trainer`
## Preparing to Run `./workshopctl`
### Required AWS Permissions/Info
- The initial assumption is that you're using a root account. If you'd like to use an IAM user instead, it will need `AmazonEC2FullAccess` and `IAMReadOnlyAccess`.
- Using a non-default VPC or Security Group isn't supported out of the box yet, but until then you can [customize the `trainer-cli` script](scripts/trainer-cli#L396-L401).
- These instances will be assigned the default VPC Security Group, which does not open any ports from the Internet by default. So you'll need to add Inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./trainer opensg`, which opens up all ports.
- Using a non-default VPC or Security Group isn't supported out of the box yet, so you will have to customize `lib/commands.sh` if you want to change that.
- These instances will be assigned the default VPC Security Group, which does not open any ports from the Internet by default. So you'll need to add Inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./workshopctl opensg`, which opens up all ports.
### Required Environment Variables
@@ -37,59 +37,56 @@ The Docker Compose file here is used to build a image with all the dependencies
### Update/copy `settings/example.yaml`
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `trainer deploy`, `trainer cards`, etc.
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.
./trainer cards 2016-09-28-00-33-bret settings/orchestration.yaml
./workshopctl cards 2016-09-28-00-33-bret settings/orchestration.yaml
## `./trainer` Usage
## `./workshopctl` Usage
```
./trainer <command> [n-instances|tag] [settings/file.yaml]
Core commands:
start n Start n instances
list [TAG] If a tag is provided, list its VMs. Otherwise, list tags.
deploy TAG Deploy all instances with a given tag
pull-images TAG Pre-pull docker images. Run only after deploying.
stop TAG Stop and delete instances tagged TAG
Extras:
ips TAG List all IPs of instances with a given tag (updates ips.txt)
ids TAG/TOKEN List all instance IDs with a given tag
shell Get a shell in the trainer container
status TAG Print information about this tag and its VMs
tags List all tags (per-region)
retag TAG/TOKEN TAG Retag instances with a new tag
Beta:
ami Look up Amazon Machine Images
cards FILE Generate cards
opensg Modify AWS security groups
workshopctl - the orchestration workshop swiss army knife
Commands:
ami Show the AMI that will be used for deployment
amis List Ubuntu AMIs in the current region
cards Generate ready-to-print cards for a batch of VMs
deploy Install Docker on a bunch of running VMs
ec2quotas Check our EC2 quotas (max instances)
help Show available commands
ids List the instance IDs belonging to a given tag or token
ips List the IP addresses of the VMs for a given tag or token
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
list List available batches in the current region
opensg Open the default security group to ALL ingress traffic
pull_images Pre-pull a bunch of Docker images
retag Apply a new tag to a batch of VMs
start Start a batch of VMs
status List instance status for a given batch
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
test Run tests (pre-flight checks) on a batch of VMs
```
### Summary of What `./trainer` Does For You
### Summary of What `./workshopctl` Does For You
- Used to manage bulk AWS instances for you without needing to use the AWS CLI or GUI.
- Can manage multiple "tags" or groups of instances, which are tracked in `prepare-vms/tags/`
- Can also create PDF/HTML handouts for printing, with each student's instance IPs and login info.
- The `./trainer` script can be executed directly.
- The `./workshopctl` script can be executed directly.
- It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms).
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard-coded.
### Example Steps to Launch a Batch of Instances for a Workshop
- Export the environment variables needed by the AWS CLI (see **Required Environment Variables** above)
- Run `./trainer start N` to create `N` EC2 instances
- Run `./workshopctl start N` to create `N` EC2 instances
- Your local SSH key will be synced to instances under `ubuntu` user
- AWS instances will be created and tagged based on date, and their IPs stored in `prepare-vms/tags/`
- Run `./trainer deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires a good connection to run all the parallel SSH connections, up to 100 in parallel (ProTip: create a dedicated management instance in the same AWS region and run all these utils from there)
- Run `./trainer pull-images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./trainer cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
- *Have a great workshop*
- Run `./trainer stop TAG` to terminate instances.
- Run `./workshopctl stop TAG` to terminate instances.
## Other Tools
@@ -133,31 +130,31 @@ If you create new VMs, the symlinked file will be overwritten.
Instances can be deployed manually using the `deploy` command:
$ ./trainer deploy TAG settings/somefile.yaml
$ ./workshopctl deploy TAG settings/somefile.yaml
The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and executed.
#### Pre-pull images
$ ./trainer pull-images TAG
$ ./workshopctl pull-images TAG
#### Generate cards
$ ./trainer cards TAG settings/somefile.yaml
$ ./workshopctl cards TAG settings/somefile.yaml
#### List tags
$ ./trainer list
$ ./workshopctl list
#### List VMs
$ ./trainer list TAG
$ ./workshopctl list TAG
This will print a human-friendly list containing some information about each instance.
#### Stop and destroy VMs
$ ./trainer stop TAG
$ ./workshopctl stop TAG
## ToDo

prepare-vms/cards.html Normal file

@@ -0,0 +1,104 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "orchestration workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 21.5%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.4em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>
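This template is rendered by `lib/ips-txt-to-html.py` (shown later in this diff) via the `cards` command; to preview the output locally, something like the following should work:

```bash
./workshopctl ips TAG                                # refresh ips.txt for the batch
python lib/ips-txt-to-html.py settings/example.yaml  # writes ips.html (and ips.pdf if pdfkit is present)
xdg-open ips.html                                    # 'open ips.html' on macOS
```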


@@ -1,26 +1,19 @@
version: "2"
services:
prepare-vms:
workshopctl:
build: .
container_name: prepare-vms
image: workshopctl
working_dir: /root/prepare-vms
volumes:
- $HOME/.aws/:/root/.aws/
- /etc/localtime:/etc/localtime:ro
- /tmp/.X11-unix:/tmp/.X11-unix
- $SSH_AUTH_DIRNAME:$SSH_AUTH_DIRNAME
- $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
- $PWD/:/root/prepare-vms/
environment:
SCRIPT_DIR: /root/prepare-vms
DISPLAY: ${DISPLAY}
SSH_AUTH_SOCK: ${SSH_AUTH_SOCK}
SSH_AGENT_PID: ${SSH_AGENT_PID}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
AWS_DEFAULT_OUTPUT: json
AWS_INSTANCE_TYPE: ${AWS_INSTANCE_TYPE}
AWS_VPC_ID: ${AWS_VPC_ID}
USER: ${USER}
entrypoint: /root/prepare-vms/scripts/trainer-cli
entrypoint: /root/prepare-vms/workshopctl
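With this Compose file, the containerized workflow (see the `build` and `wrap` commands in `lib/commands.sh` below) boils down to:

```bash
docker-compose build
docker-compose run --rm workshopctl help
```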

prepare-vms/scripts/aws.sh → prepare-vms/lib/aws.sh Executable file → Normal file

@@ -1,32 +1,28 @@
#!/bin/bash
source scripts/cli.sh
aws_display_tags(){
aws_display_tags() {
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf " %7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| awk '{ printf " %-12s %-25s %-25s\n", $1, $2, $3}' \
| uniq -c \
| sort -k 3
| sort -k 3 \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ' )
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')
aws ec2 describe-instance-status \
--instance-ids $IDS \
@@ -38,22 +34,20 @@ aws_display_instances_by_tag() {
TAG=$1
need_tag $TAG
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
echo "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "ID State Tags IP Type" \
| awk '{ printf "%9s %12s %15s %20s %14s \n", $1, $2, $3, $4, $5}' # column -t -c 70}
echo "$result"
fi
)
if [[ -z $result ]]; then
die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "$result"
fi
}
aws_get_instance_ids_by_filter() {
@@ -63,7 +57,6 @@ aws_get_instance_ids_by_filter() {
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
need_tag $TOKEN
@@ -82,8 +75,8 @@ aws_get_instance_ips_by_tag() {
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
@@ -94,10 +87,12 @@ aws_kill_instances_by_tag() {
die "Invalid tag."
fi
echo "Deleting instances with tag $TAG"
info "Deleting instances with tag $TAG."
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
info "Deleted instances with tag $TAG."
}
aws_tag_instances() {

prepare-vms/lib/cli.sh Normal file

@@ -0,0 +1,76 @@
# Abort if any error happens, and show the command that caused the error.
_ERR() {
error "Command $BASH_COMMAND failed (exit status: $?)"
}
set -e
trap _ERR ERR
die() {
if [ -n "$1" ]; then
error "$1"
fi
exit 1
}
error() {
>/dev/stderr echo "[$(red ERROR)] $1"
}
warning() {
>/dev/stderr echo "[$(yellow WARNING)] $1"
}
info() {
>/dev/stderr echo "[$(green INFO)] $1"
}
# Print a full-width separator.
# If given an argument, will print it in the middle of that separator.
# If the argument is longer than the screen width, it will be printed between two separator lines.
sep() {
if [ -z "$COLUMNS" ]; then
COLUMNS=80
fi
SEP=$(yes = | tr -d "\n" | head -c $(($COLUMNS - 1)))
if [ -z "$1" ]; then
>/dev/stderr echo $SEP
else
MSGLEN=$(echo "$1" | wc -c)
if [ $(($MSGLEN + 4)) -gt $COLUMNS ]; then
>/dev/stderr echo "$SEP"
>/dev/stderr echo "$1"
>/dev/stderr echo "$SEP"
else
LEFTLEN=$((($COLUMNS - $MSGLEN - 2) / 2))
RIGHTLEN=$(($COLUMNS - $MSGLEN - 2 - $LEFTLEN))
LEFTSEP=$(echo $SEP | head -c $LEFTLEN)
RIGHTSEP=$(echo $SEP | head -c $RIGHTLEN)
>/dev/stderr echo "$LEFTSEP $1 $RIGHTSEP"
fi
fi
}
need_tag() {
if [ -z "$1" ]; then
die "Please specify a tag or token. To see available tags and tokens, run: $0 list"
fi
}
need_settings() {
if [ -z "$1" ]; then
die "Please specify a settings file."
elif [ ! -f "$1" ]; then
die "Settings file $1 doesn't exist."
fi
}
need_ips_file() {
IPS_FILE=$1
if [ -z "$IPS_FILE" ]; then
die "IPS_FILE not set."
fi
if [ ! -s "$IPS_FILE" ]; then
die "IPS_FILE $IPS_FILE not found. Please run: $0 ips <TAG>"
fi
}
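These helpers are meant to be sourced by `workshopctl` together with `lib/colors.sh` (which provides `red`, `green`, and `yellow`); a quick interactive check might look like this sketch:

```bash
# Note: cli.sh enables 'set -e' and an ERR trap in the current shell.
source lib/colors.sh
source lib/cli.sh
info "hello"              # prints [INFO] hello (green tag) on stderr
sep "Deploying tag foo"   # full-width separator with the message centered
```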

prepare-vms/lib/colors.sh Normal file

@@ -0,0 +1,15 @@
bold() {
echo "$(tput bold)$1$(tput sgr0)"
}
red() {
echo "$(tput setaf 1)$1$(tput sgr0)"
}
green() {
echo "$(tput setaf 2)$1$(tput sgr0)"
}
yellow() {
echo "$(tput setaf 3)$1$(tput sgr0)"
}

prepare-vms/lib/commands.sh Normal file

@@ -0,0 +1,525 @@
export AWS_DEFAULT_OUTPUT=text
HELP=""
_cmd() {
HELP="$(printf "%s\n%-12s %s\n" "$HELP" "$1" "$2")"
}
_cmd help "Show available commands"
_cmd_help() {
printf "$(basename $0) - the orchestration workshop swiss army knife\n"
printf "Commands:"
printf "%s" "$HELP" | sort
}
_cmd amis "List Ubuntu AMIs in the current region"
_cmd_amis() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION "$@"
}
_cmd ami "Show the AMI that will be used for deployment"
_cmd_ami() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}
_cmd build "Build the Docker image to run this program in a container"
_cmd_build() {
docker-compose build
}
_cmd wrap "Run this program in a container"
_cmd_wrap() {
docker-compose run --rm workshopctl "$@"
}
_cmd cards "Generate ready-to-print cards for a batch of VMs"
_cmd_cards() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
# Remove symlinks to old cards
rm -f ips.html ips.pdf
# This will generate two files in the base dir: ips.pdf and ips.html
python lib/ips-txt-to-html.py $SETTINGS
for f in ips.html ips.pdf; do
# Remove old versions of cards if they exist
rm -f tags/$TAG/$f
# Move the generated file and replace it with a symlink
mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
done
info "Cards created. You can view them with:"
info "xdg-open ips.html ips.pdf (on Linux)"
info "open ips.html ips.pdf (on MacOS)"
}
_cmd deploy "Install Docker on a bunch of running VMs"
_cmd_deploy() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
link_tag $TAG
count=$(wc -l <ips.txt)
# wait until all hosts are reachable before trying to deploy
info "Trying to reach $TAG instances..."
while ! tag_is_reachable $TAG; do
>/dev/stderr echo -n "."
sleep 2
done
>/dev/stderr echo ""
sep "Deploying tag $TAG"
pssh -I tee /tmp/settings.yaml <$SETTINGS
pssh "
sudo apt-get update &&
sudo apt-get install -y python-setuptools &&
sudo easy_install pyyaml"
# Copy postprep.py to the remote machines, and execute it, feeding it the list of IP addresses
pssh -I tee /tmp/postprep.py <lib/postprep.py
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <ips.txt
# Install docker-prompt script
pssh -I sudo tee /usr/local/bin/docker-prompt <lib/docker-prompt
pssh sudo chmod +x /usr/local/bin/docker-prompt
# If /home/docker/.ssh/id_rsa doesn't exist, copy it from node1
pssh "
sudo -u docker [ -f /home/docker/.ssh/id_rsa ] ||
ssh -o StrictHostKeyChecking=no node1 sudo -u docker tar -C /home/docker -cvf- .ssh |
sudo -u docker tar -C /home/docker -xf-"
# if 'docker@' doesn't appear in /home/docker/.ssh/authorized_keys, copy it there
pssh "
grep docker@ /home/docker/.ssh/authorized_keys ||
cat /home/docker/.ssh/id_rsa.pub |
sudo -u docker tee -a /home/docker/.ssh/authorized_keys"
# On node1, create and deploy TLS certs using Docker Machine
# (Currently disabled.)
true || pssh "
if grep -q node1 /tmp/node; then
grep ' node' /etc/hosts |
xargs -n2 sudo -H -u docker \
docker-machine create -d generic --generic-ssh-user docker --generic-ip-address
fi"
sep "Deployed tag $TAG"
info "You may want to run one of the following commands:"
info "$0 kube $TAG"
info "$0 pull_images $TAG"
info "$0 cards $TAG $SETTINGS"
}
_cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"
_cmd_kube() {
# Install packages
pssh "
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg |
sudo apt-key add - &&
echo deb http://apt.kubernetes.io/ kubernetes-xenial main |
sudo tee /etc/apt/sources.list.d/kubernetes.list"
pssh "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet kubeadm kubectl
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
# Work around https://github.com/kubernetes/kubernetes/issues/53356
pssh "
if [ ! -f /etc/kubernetes/kubelet.conf ]; then
sudo systemctl stop kubelet
sudo rm -rf /var/lib/kubelet/pki
fi"
# Initialize kube master
pssh "
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
sudo kubeadm init
fi"
# Put kubeconfig in ubuntu's and docker's accounts
pssh "
if grep -q node1 /tmp/node; then
sudo mkdir -p \$HOME/.kube /home/docker/.kube &&
sudo cp /etc/kubernetes/admin.conf \$HOME/.kube/config &&
sudo cp /etc/kubernetes/admin.conf /home/docker/.kube/config &&
sudo chown -R \$(id -u) \$HOME/.kube &&
sudo chown -R docker /home/docker/.kube
fi"
# Get bootstrap token
pssh "
if grep -q node1 /tmp/node; then
TOKEN_NAME=\$(kubectl -n kube-system get secret -o name | grep bootstrap-token)
TOKEN_ID=\$(kubectl -n kube-system get \$TOKEN_NAME -o go-template --template '{{ index .data \"token-id\" }}' | base64 -d)
TOKEN_SECRET=\$(kubectl -n kube-system get \$TOKEN_NAME -o go-template --template '{{ index .data \"token-secret\" }}' | base64 -d)
echo \$TOKEN_ID.\$TOKEN_SECRET >/tmp/token
fi"
# Install weave as the pod network
pssh "
if grep -q node1 /tmp/node; then
kubever=\$(kubectl version | base64 | tr -d '\n')
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
fi"
# Join the other nodes to the cluster
pssh "
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
sudo kubeadm join --token \$TOKEN node1:6443
fi"
sep "Done"
}
_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd_ids() {
TAG=$1
need_tag $TAG
info "Looking up by tag:"
aws_get_instance_ids_by_tag $TAG
# Just in case we managed to create instances but weren't able to tag them
info "Looking up by token:"
aws_get_instance_ids_by_client_token $TAG
}
_cmd ips "List the IP addresses of the VMs for a given tag or token"
_cmd_ips() {
TAG=$1
need_tag $TAG
mkdir -p tags/$TAG
aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
link_tag $TAG
}
_cmd list "List available batches in the current region"
_cmd_list() {
info "Listing batches in region $AWS_DEFAULT_REGION:"
aws_display_tags
}
_cmd status "List instance status for a given batch"
_cmd_status() {
info "Using region $AWS_DEFAULT_REGION."
TAG=$1
need_tag $TAG
describe_tag $TAG
tag_is_reachable $TAG
info "You may be interested in running one of the following commands:"
info "$0 ips $TAG"
info "$0 deploy $TAG <settings/somefile.yaml>"
}
_cmd opensg "Open the default security group to ALL ingress traffic"
_cmd_opensg() {
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
}
_cmd pull_images "Pre-pull a bunch of Docker images"
_cmd_pull_images() {
TAG=$1
need_tag $TAG
pull_tag $TAG
}
_cmd retag "Apply a new tag to a batch of VMs"
_cmd_retag() {
OLDTAG=$1
NEWTAG=$2
need_tag $OLDTAG
if [[ -z "$NEWTAG" ]]; then
die "You must specify a new tag to apply."
fi
aws_tag_instances $OLDTAG $NEWTAG
}
_cmd start "Start a batch of VMs"
_cmd_start() {
# Number of instances to create
COUNT=$1
# Optional settings file (to carry on with deployment)
SETTINGS=$2
if [ -z "$COUNT" ]; then
die "Indicate number of instances to start."
fi
# Print our AWS username, to ease the pain of credential-juggling
greet
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
key_name=$(sync_keys)
AMI=$(_cmd_ami) # Retrieve the AWS image ID
TOKEN=$(get_token) # generate a timestamp token for this batch of VMs
AWS_KEY_NAME=$(make_key_name)
sep "Starting instances"
info " Count: $COUNT"
info " Region: $AWS_DEFAULT_REGION"
info " Token/tag: $TOKEN"
info " AMI: $AMI"
info " Key name: $AWS_KEY_NAME"
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type t2.medium \
--client-token $TOKEN \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
info "Reservation ID: $reservation_id"
sep
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TOKEN)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
TAG=$TOKEN
aws_tag_instances $TOKEN $TAG
wait_until_tag_is_running $TAG $COUNT
sep
info "Successfully created $COUNT instances with tag $TAG"
sep
mkdir -p tags/$TAG
IPS=$(aws_get_instance_ips_by_tag $TAG)
echo "$IPS" >tags/$TAG/ips.txt
link_tag $TAG
if [ -n "$SETTINGS" ]; then
_cmd_deploy $TAG $SETTINGS
else
info "To deploy or kill these instances, run one of the following:"
info "$0 deploy $TAG <settings/somefile.yaml>"
info "$0 stop $TAG"
fi
}
_cmd ec2quotas "Check our EC2 quotas (max instances)"
_cmd_ec2quotas() {
greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
info "Available regions:"
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
}
_cmd stop "Stop (terminate, shutdown, kill, remove, destroy...) instances"
_cmd_stop() {
TAG=$1
need_tag $TAG
aws_kill_instances_by_tag $TAG
}
_cmd test "Run tests (pre-flight checks) on a batch of VMs"
_cmd_test() {
TAG=$1
need_tag $TAG
test_tag $TAG
}
###
greet() {
IAMUSER=$(aws iam get-user --query 'User.UserName')
info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
}
link_tag() {
TAG=$1
need_tag $TAG
IPS_FILE=tags/$TAG/ips.txt
need_ips_file $IPS_FILE
ln -sf $IPS_FILE ips.txt
}
pull_tag() {
TAG=$1
need_tag $TAG
link_tag $TAG
if [ ! -s $IPS_FILE ]; then
die "Nonexistent or empty IPs file $IPS_FILE."
fi
# Pre-pull a bunch of images
pssh --timeout 900 'for I in \
debian:latest \
ubuntu:latest \
fedora:latest \
centos:latest \
postgres \
redis \
training/namer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'
info "Finished pulling images for $TAG."
info "You may now want to run:"
info "$0 cards $TAG <settings/somefile.yaml>"
}
wait_until_tag_is_running() {
max_retry=50
TAG=$1
COUNT=$2
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
let "i += 1"
info "$(printf "%d/%d instances online" $done_count $COUNT)"
done_count=$(aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
"Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].State.Name" \
| tr "\t" "\n" \
| wc -l)
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
tag_is_reachable() {
TAG=$1
need_tag $TAG
link_tag $TAG
pssh -t 5 true 2>&1 >/dev/null
}
test_tag() {
ips_file=tags/$TAG/ips.txt
info "Picking a random IP address in $ips_file to run tests."
n=$((1 + $RANDOM % $(wc -l <$ips_file)))
ip=$(head -n $n $ips_file | tail -n 1)
test_vm $ip
info "Tests complete."
}
test_vm() {
ip=$1
info "Testing instance with IP address $ip."
user=ubuntu
errors=""
for cmd in "hostname" \
"whoami" \
"hostname -i" \
"cat /tmp/node" \
"cat /tmp/ipv4" \
"cat /etc/hosts" \
"hostnamectl status" \
"docker version | grep Version -B1" \
"docker-compose version" \
"docker-machine version" \
"docker images" \
"docker ps" \
"curl --silent localhost:55555" \
"sudo ls -la /mnt/ | grep docker" \
"env" \
"ls -la /home/docker/.ssh"; do
sep "$cmd"
echo "$cmd" \
| ssh -A -q \
-o "UserKnownHostsFile /dev/null" \
-o "StrictHostKeyChecking=no" \
$user@$ip sudo -u docker -i \
|| {
status=$?
error "$cmd exit status: $status"
errors="[$status] $cmd\n$errors"
}
done
sep
if [ -n "$errors" ]; then
error "The following commands had non-zero exit codes:"
printf "$errors"
fi
info "Test VM was $ip."
}
make_key_name() {
SHORT_FINGERPRINT=$(ssh-add -l | grep RSA | head -n1 | cut -d " " -f 2 | tr -d : | cut -c 1-8)
echo "${SHORT_FINGERPRINT}-${USER}"
}
sync_keys() {
# make sure ssh-add -l contains "RSA"
ssh-add -l | grep -q RSA \
|| die "The output of \`ssh-add -l\` doesn't contain 'RSA'. Start the agent, add your keys?"
AWS_KEY_NAME=$(make_key_name)
info "Syncing keys... "
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
aws ec2 import-key-pair --key-name $AWS_KEY_NAME \
--public-key-material "$(ssh-add -L \
| grep -i RSA \
| head -n1 \
| cut -d " " -f 1-2)" &>/dev/null
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
die "Somehow, importing the key didn't work. Make sure that 'ssh-add -l | grep RSA | head -n1' returns an RSA key?"
else
info "Imported new key $AWS_KEY_NAME."
fi
else
info "Using existing key $AWS_KEY_NAME."
fi
}
get_token() {
if [ -z $USER ]; then
export USER=anonymous
fi
date +%Y-%m-%d-%H-%M-$USER
}
describe_tag() {
# Display instance details and reachability/status information
TAG=$1
need_tag $TAG
aws_display_instances_by_tag $TAG
aws_display_instance_statuses_by_tag $TAG
}
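Putting the commands above end to end, a full workshop lifecycle could look like this; the tag is a hypothetical example of the format produced by `get_token` (`YYYY-MM-DD-HH-MM-$USER`).

```bash
./workshopctl start 50 settings/example.yaml       # create 50 VMs, then deploy right away
./workshopctl status 2017-10-31-09-00-jerome
./workshopctl pull_images 2017-10-31-09-00-jerome
./workshopctl cards 2017-10-31-09-00-jerome settings/example.yaml
./workshopctl stop 2017-10-31-09-00-jerome
```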

prepare-vms/lib/docker-prompt Executable file

@@ -0,0 +1,21 @@
#!/bin/sh
case "$DOCKER_HOST" in
*:3376)
echo swarm
;;
*:2376)
echo $DOCKER_MACHINE_NAME
;;
*:2375)
echo $DOCKER_MACHINE_NAME
;;
*:55555)
echo $DOCKER_MACHINE_NAME
;;
"")
echo local
;;
*)
echo unknown
;;
esac
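The helper keys off the port in `DOCKER_HOST`, which makes it easy to sanity-check by hand:

```bash
DOCKER_HOST=tcp://node1:3376 docker-prompt   # prints: swarm
DOCKER_HOST= docker-prompt                   # prints: local
```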


@@ -0,0 +1,174 @@
# borrowed from https://gist.github.com/kirikaza/6627072
# The original script has been wrapped in a function that invokes a subshell.
# That way, it can be safely invoked as a function from other scripts.
find_ubuntu_ami() {
(
usage() {
cat >&2 <<__
usage: find-ubuntu-ami.sh [ <filter>... ] [ <sorting> ] [ <options> ]
where:
<filter> is pair of key and substring to search
-r <region>
-n <name>
-v <version>
-a <arch>
-t <type>
-d <date>
-i <image>
-k <kernel>
<sorting> is one of:
-R by region
-N by name
-V by version
-A by arch
-T by type
-D by date
-I by image
-K by kernel
<options> can be:
-q just show AMI
protip for Docker orchestration workshop admin:
./find-ubuntu-ami.sh -t hvm:ebs -r \$AWS_REGION -v 15.10 -N
__
exit 1
}
args=$(getopt hr:n:v:a:t:d:i:k:RNVATDIKq $*)
if [ $? != 0 ]; then
echo >&2
usage
fi
region=
name=
version=
arch=
type=
date=
image=
kernel=
sort=date
quiet=
set -- $args
for a; do
case "$a" in
-h) usage ;;
-r)
region=$2
shift
;;
-n)
name=$2
shift
;;
-v)
version=$2
shift
;;
-a)
arch=$2
shift
;;
-t)
type=$2
shift
;;
-d)
date=$2
shift
;;
-i)
image=$2
shift
;;
-k)
kernel=$2
shift
;;
-R) sort=region ;;
-N) sort=name ;;
-V) sort=version ;;
-A) sort=arch ;;
-T) sort=type ;;
-D) sort=date ;;
-I) sort=image ;;
-K) sort=kernel ;;
-q) quiet=y ;;
--)
shift
break
;;
*) continue ;;
esac
shift
done
[ $# = 0 ] || usage
fix_json() {
tr -d \\n | sed 's/,]}/]}/'
}
jq_query() {
cat <<__
.aaData | map (
{
region: .[0],
name: .[1],
version: .[2],
arch: .[3],
type: .[4],
date: .[5],
image: .[6],
kernel: .[7]
} | select (
(.region | contains("$region")) and
(.name | contains("$name")) and
(.version | contains("$version")) and
(.arch | contains("$arch")) and
(.type | contains("$type")) and
(.date | contains("$date")) and
(.image | contains("$image</a>")) and
(.kernel | contains("$kernel"))
)
) | sort_by(.$sort) | .[] |
"\(.region)|\(.name)|\(.version)|\(.arch)|\(.type)|\(.date)|\(.image)|\(.kernel)"
__
}
trim_quotes() {
sed 's/^"//;s/"$//'
}
escape_spaces() {
sed 's/ /\\\ /g'
}
url=http://cloud-images.ubuntu.com/locator/ec2/releasesTable
{
[ "$quiet" ] || echo REGION NAME VERSION ARCH TYPE DATE IMAGE KERNEL
curl -s $url | fix_json | jq "$(jq_query)" | trim_quotes | escape_spaces | tr \| ' '
} \
| while read region name version arch type date image kernel; do
image=${image%<*}
image=${image#*>}
if [ "$quiet" ]; then
echo $image
else
echo "$region|$name|$version|$arch|$type|$date|$image|$kernel"
fi
done | column -t -s \|
)
}
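Since the script is now a function running in a subshell, it can be sourced and invoked directly; this mirrors how `_cmd_ami` in `lib/commands.sh` uses it (the path is an assumption based on the `lib/` layout above):

```bash
source lib/find-ubuntu-ami.sh
find_ubuntu_ami -r us-east-1 -a amd64 -v 16.04 -t hvm:ebs -N -q   # print just the AMI ID
```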


@@ -0,0 +1,51 @@
#!/usr/bin/env python
import os
import sys
import yaml
import jinja2

def prettify(l):
    l = [ip.strip() for ip in l]
    ret = [ "node{}: <code>{}</code>".format(i+1, s) for (i, s) in zip(range(len(l)), l) ]
    return ret

# Read settings from user-provided settings file
SETTINGS = yaml.load(open(sys.argv[1]))
clustersize = SETTINGS["clustersize"]

ips = list(open("ips.txt"))

print("---------------------------------------------")
print(" Number of IPs: {}".format(len(ips)))
print(" VMs per cluster: {}".format(clustersize))
print("---------------------------------------------")

assert len(ips)%clustersize == 0

clusters = []
while ips:
    cluster = ips[:clustersize]
    ips = ips[clustersize:]
    clusters.append(cluster)

template_file_name = SETTINGS["cards_template"]
template = jinja2.Template(open(template_file_name).read())
with open("ips.html", "w") as f:
    f.write(template.render(clusters=clusters, **SETTINGS))
print("Generated ips.html")

try:
    import pdfkit
    with open("ips.html") as f:
        pdfkit.from_file(f, "ips.pdf", options={
            "page-size": SETTINGS["paper_size"],
            "margin-top": SETTINGS["paper_margin"],
            "margin-bottom": SETTINGS["paper_margin"],
            "margin-left": SETTINGS["paper_margin"],
            "margin-right": SETTINGS["paper_margin"],
        })
    print("Generated ips.pdf")
except ImportError:
    print("WARNING: could not import pdfkit; did not generate ips.pdf")


@@ -1,10 +1,3 @@
pssh -I tee /tmp/settings.yaml < $SETTINGS
pssh sudo apt-get update
pssh sudo apt-get install -y python-setuptools
pssh sudo easy_install pyyaml
pssh -I tee /tmp/postprep.py <<EOF
#!/usr/bin/env python
import os
import platform
@@ -18,7 +11,6 @@ import yaml
config = yaml.load(open("/tmp/settings.yaml"))
COMPOSE_VERSION = config["compose_version"]
MACHINE_VERSION = config["machine_version"]
SWARM_VERSION = config["swarm_version"]
CLUSTER_SIZE = config["clustersize"]
ENGINE_VERSION = config["engine_version"]
@@ -39,14 +31,17 @@ def system(cmd):
    t1 = time.time()
    f.write(bold("--- RUNNING [step {}] ---> {}...".format(STEP, cmd)))
    retcode = os.system(cmd)
    if retcode:
        retcode = bold(retcode)
    t2 = time.time()
    td = str(t2-t1)[:5]
    f.write("[{}] in {}s\n".format(retcode, td))
    f.write(bold("[{}] in {}s\n".format(retcode, td)))
    STEP += 1
    with open("/home/ubuntu/.bash_history", "a") as f:
        f.write("{}\n".format(cmd))
    if retcode != 0:
        msg = "The following command failed with exit code {}:\n".format(retcode)
        msg += cmd
        raise(Exception(msg))
# On EC2, the ephemeral disk might be mounted on /mnt.
# If /mnt is a mountpoint, place Docker workspace on it.
@@ -60,38 +55,12 @@ system("curl --silent {} > /tmp/ipv4".format(ipv4_retrieval_endpoint))
ipv4 = open("/tmp/ipv4").read()
# Add a "docker" user with password "training"
system("sudo useradd -d /home/docker -m -s /bin/bash docker")
system("id docker || sudo useradd -d /home/docker -m -s /bin/bash docker")
system("echo docker:training | sudo chpasswd")
# Helper for Docker prompt.
system("""sudo tee /usr/local/bin/docker-prompt <<SQRL
#!/bin/sh
case "\\\$DOCKER_HOST" in
*:3376)
echo swarm
;;
*:2376)
echo \\\$DOCKER_MACHINE_NAME
;;
*:2375)
echo \\\$DOCKER_MACHINE_NAME
;;
*:55555)
echo \\\$DOCKER_MACHINE_NAME
;;
"")
echo local
;;
*)
echo unknown
;;
esac
SQRL""")
system("sudo chmod +x /usr/local/bin/docker-prompt")
# Fancy prompt courtesy of @soulshake.
system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL
export PS1='\e[1m\e[31m[\h] \e[32m(\\\$(docker-prompt)) \e[34m\u@{}\e[35m \w\e[0m\n$ '
export PS1='\e[1m\e[31m[{}] \e[32m(\\$(docker-prompt)) \e[34m\u@\h\e[35m \w\e[0m\n$ '
SQRL""".format(ipv4))
# Custom .vimrc
@@ -115,9 +84,6 @@ system("sudo service ssh restart")
system("sudo apt-get -q update")
system("sudo apt-get -qy install git jq python-pip")
# increase the size of the conntrack table so we don't blow it up when going crazy with http load testing
system("echo 1000000 | sudo tee /proc/sys/net/nf_conntrack_max")
#######################
### DOCKER INSTALLS ###
#######################
@@ -134,10 +100,12 @@ system("sudo apt-get -qy install docker-ce")
#system("sudo pip install -U docker-compose=={}".format(COMPOSE_VERSION))
system("sudo curl -sSL -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/{}/docker-compose-{}-{}".format(COMPOSE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-compose")
system("docker-compose version")
### Install docker-machine
system("sudo curl -sSL -o /usr/local/bin/docker-machine https://github.com/docker/machine/releases/download/v{}/docker-machine-{}-{}".format(MACHINE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-machine")
system("docker-machine version")
system("sudo apt-get remove -y --purge dnsmasq-base")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
@@ -146,10 +114,6 @@ system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping ht
### (If we don't do this, Docker will not be responsive during the next step.)
system("while ! sudo -u docker docker version ; do sleep 2; done")
### Install Swarm
#system("docker pull swarm:{}".format(SWARM_VERSION))
#system("docker tag -f swarm:{} swarm".format(SWARM_VERSION))
### BEGIN CLUSTERING ###
addresses = list(l.strip() for l in sys.stdin)
@@ -171,7 +135,9 @@ while addresses:
    print(cluster)
    mynode = cluster.index(ipv4) + 1
    system("echo 'node{}' | sudo -u docker tee /tmp/node".format(mynode))
    system("echo node{} | sudo -u docker tee /tmp/node".format(mynode))
    system("echo node{} | sudo tee /etc/hostname".format(mynode))
    system("sudo hostname node{}".format(mynode))
    system("sudo -u docker mkdir -p /home/docker/.ssh")
    system("sudo -u docker touch /home/docker/.ssh/authorized_keys")
@@ -182,25 +148,3 @@ while addresses:
FINISH = time.time()
duration = "Initial deployment took {}s".format(str(FINISH - START)[:5])
system("echo {}".format(duration))
EOF
IPS_FILE=ips.txt
if [ ! -s $IPS_FILE ]; then
echo "ips.txt not found."
exit 1
fi
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" < $IPS_FILE
# If /home/docker/.ssh/id_rsa doesn't exist, copy it from node1
pssh "sudo -u docker [ -f /home/docker/.ssh/id_rsa ] || ssh -o StrictHostKeyChecking=no node1 sudo -u docker tar -C /home/docker -cvf- .ssh | sudo -u docker tar -C /home/docker -xf-"
# if 'docker@' doesn't appear in /home/docker/.ssh/authorized_keys, copy it there
pssh "grep docker@ /home/docker/.ssh/authorized_keys \
|| cat /home/docker/.ssh/id_rsa.pub \
| sudo -u docker tee -a /home/docker/.ssh/authorized_keys"
# On node1, create and deploy TLS certs using Docker Machine
#pssh "if grep -q node1 /tmp/node; then grep ' node' /etc/hosts | xargs -n2 sudo -H -u docker docker-machine create -d generic --generic-ssh-user docker --generic-ip-address; fi"

prepare-vms/lib/pssh.sh Normal file

@@ -0,0 +1,23 @@
# This file can be sourced in order to directly run commands on
# a batch of VMs whose IPs are located in ips.txt of the directory in which
# the command is run.
pssh() {
HOSTFILE="ips.txt"
[ -f $HOSTFILE ] || {
>/dev/stderr echo "No hostfile found at $HOSTFILE"
return
}
echo "[parallel-ssh] $@"
export PSSH=$(which pssh || which parallel-ssh)
$PSSH -h $HOSTFILE -l ubuntu \
--par 100 \
-O LogLevel=ERROR \
-O UserKnownHostsFile=/dev/null \
-O StrictHostKeyChecking=no \
-O ForwardAgent=yes \
"$@"
}
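As its header comment says, this file can be sourced on its own; a quick sketch (assuming `ips.txt` has already been populated, for example by the `ips` command):

```bash
source lib/pssh.sh
pssh "docker version"   # runs the command on every VM listed in ips.txt
```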

prepare-vms/lib/wkhtmltopdf Executable file

@@ -0,0 +1,4 @@
#!/bin/sh
pidof Xvfb || Xvfb -terminate &
export DISPLAY=:0
exec wkhtmltopdf.real "$@"


@@ -1,70 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 16.0.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
width="466px" height="423.891px" viewBox="0 0 466 423.891" enable-background="new 0 0 466 423.891" xml:space="preserve">
<rect x="-43.5" y="37.005" display="none" fill-rule="evenodd" clip-rule="evenodd" fill="#E7E7E7" width="792" height="612"/>
<g>
<path id="outline_7_" fill-rule="evenodd" clip-rule="evenodd" d="M242.133,168.481h47.146v48.194h23.837
c11.008,0,22.33-1.962,32.755-5.494c5.123-1.736,10.872-4.154,15.926-7.193c-6.656-8.689-10.053-19.661-11.054-30.476
c-1.358-14.71,1.609-33.855,11.564-45.368l4.956-5.732l5.905,4.747c14.867,11.946,27.372,28.638,29.577,47.665
c17.901-5.266,38.921-4.02,54.701,5.088l6.475,3.734l-3.408,6.652c-13.345,26.046-41.246,34.113-68.524,32.687
c-40.817,101.663-129.68,149.794-237.428,149.794c-55.666,0-106.738-20.81-135.821-70.197l-0.477-0.807l-4.238-8.621
C4.195,271.415,0.93,247.6,3.145,223.803l0.664-7.127h40.315v-48.194h47.143v-47.145h94.292V74.191h56.574V168.481z"/>
<g display="none">
<path display="inline" fill="#394D54" d="M61.093,319.89c6.023,0,11.763-0.157,17.219-0.464c0.476-0.026,0.932-0.063,1.402-0.092
c0.005-0.002,0.008-0.002,0.012-0.002c13.872-0.855,25.876-2.708,35.902-5.57c0.002-0.002,0.004-0.002,0.006-0.002
c1.823-0.521,3.588-1.07,5.282-1.656c1.894-0.657,2.896-2.725,2.241-4.618c-0.656-1.895-2.722-2.899-4.618-2.24
c-12.734,4.412-29.535,6.842-50.125,7.298c-0.002,0-0.004,0-0.005,0c-10.477,0.232-21.93-0.044-34.352-0.843c0,0,0,0-0.001,0
c-0.635-0.038-1.259-0.075-1.9-0.118c-1.995-0.128-3.731,1.374-3.869,3.375c-0.136,1.999,1.376,3.73,3.375,3.866
c2.537,0.173,5.03,0.321,7.49,0.453c0.392,0.021,0.77,0.034,1.158,0.054l0,0C47.566,319.697,54.504,319.89,61.093,319.89z"/>
</g>
<g id="Containers_8_">
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M86.209,179.744h3.227v34.052h-3.227V179.744z M80.02,179.744
h3.354v34.052H80.02V179.744z M73.828,179.744h3.354v34.052h-3.354V179.744z M67.636,179.744h3.354v34.052h-3.354V179.744z
M61.446,179.744H64.8v34.052h-3.354V179.744z M55.384,179.744h3.224v34.052h-3.224V179.744z M51.981,176.338h40.858v40.86H51.981
V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M133.354,132.598h3.229v34.051h-3.229V132.598z M127.165,132.598
h3.354v34.051h-3.354V132.598z M120.973,132.598h3.354v34.051h-3.354V132.598z M114.781,132.598h3.354v34.051h-3.354V132.598z
M108.593,132.598h3.352v34.051h-3.352V132.598z M102.531,132.598h3.222v34.051h-3.222V132.598z M99.124,129.193h40.863v40.859
H99.124V129.193z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M133.354,179.744h3.229v34.052h-3.229V179.744z M127.165,179.744
h3.354v34.052h-3.354V179.744z M120.973,179.744h3.354v34.052h-3.354V179.744z M114.781,179.744h3.354v34.052h-3.354V179.744z
M108.593,179.744h3.352v34.052h-3.352V179.744z M102.531,179.744h3.222v34.052h-3.222V179.744z M99.124,176.338h40.863v40.86
H99.124V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M180.501,179.744h3.225v34.052h-3.225V179.744z M174.31,179.744
h3.355v34.052h-3.355V179.744z M168.12,179.744h3.354v34.052h-3.354V179.744z M161.928,179.744h3.354v34.052h-3.354V179.744z
M155.736,179.744h3.354v34.052h-3.354V179.744z M149.676,179.744h3.222v34.052h-3.222V179.744z M146.271,176.338h40.861v40.86
h-40.861V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M180.501,132.598h3.225v34.051h-3.225V132.598z M174.31,132.598
h3.355v34.051h-3.355V132.598z M168.12,132.598h3.354v34.051h-3.354V132.598z M161.928,132.598h3.354v34.051h-3.354V132.598z
M155.736,132.598h3.354v34.051h-3.354V132.598z M149.676,132.598h3.222v34.051h-3.222V132.598z M146.271,129.193h40.861v40.859
h-40.861V129.193z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M227.647,179.744h3.226v34.052h-3.226V179.744z M221.457,179.744
h3.354v34.052h-3.354V179.744z M215.265,179.744h3.354v34.052h-3.354V179.744z M209.073,179.744h3.354v34.052h-3.354V179.744z
M202.884,179.744h3.354v34.052h-3.354V179.744z M196.821,179.744h3.224v34.052h-3.224V179.744z M193.416,176.338h40.861v40.86
h-40.861V176.338z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M227.647,132.598h3.226v34.051h-3.226V132.598z M221.457,132.598
h3.354v34.051h-3.354V132.598z M215.265,132.598h3.354v34.051h-3.354V132.598z M209.073,132.598h3.354v34.051h-3.354V132.598z
M202.884,132.598h3.354v34.051h-3.354V132.598z M196.821,132.598h3.224v34.051h-3.224V132.598z M193.416,129.193h40.861v40.859
h-40.861V129.193z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M227.647,85.451h3.226v34.053h-3.226V85.451z M221.457,85.451
h3.354v34.053h-3.354V85.451z M215.265,85.451h3.354v34.053h-3.354V85.451z M209.073,85.451h3.354v34.053h-3.354V85.451z
M202.884,85.451h3.354v34.053h-3.354V85.451z M196.821,85.451h3.224v34.053h-3.224V85.451z M193.416,82.048h40.861v40.86h-40.861
V82.048z"/>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M274.792,179.744h3.224v34.052h-3.224V179.744z M268.602,179.744
h3.352v34.052h-3.352V179.744z M262.408,179.744h3.354v34.052h-3.354V179.744z M256.218,179.744h3.354v34.052h-3.354V179.744z
M250.026,179.744h3.354v34.052h-3.354V179.744z M243.964,179.744h3.227v34.052h-3.227V179.744z M240.561,176.338h40.86v40.86
h-40.86V176.338z"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" fill="#FFFFFF" d="M137.428,283.445c6.225,0,11.271,5.049,11.271,11.272
c0,6.225-5.046,11.271-11.271,11.271c-6.226,0-11.272-5.046-11.272-11.271C126.156,288.494,131.202,283.445,137.428,283.445"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M137.428,286.644c1.031,0,2.015,0.194,2.923,0.546
c-0.984,0.569-1.65,1.635-1.65,2.854c0,1.82,1.476,3.293,3.296,3.293c1.247,0,2.329-0.693,2.89-1.715
c0.395,0.953,0.615,1.999,0.615,3.097c0,4.458-3.615,8.073-8.073,8.073c-4.458,0-8.074-3.615-8.074-8.073
C129.354,290.258,132.971,286.644,137.428,286.644"/>
<path fill="#FFFFFF" d="M167.394,364.677c-27.916-13.247-43.239-31.256-51.765-50.915c-10.37,2.961-22.835,4.852-37.317,5.664
c-5.457,0.307-11.196,0.464-17.219,0.464c-6.942,0-14.26-0.205-21.94-0.613c25.6,25.585,57.094,45.283,115.408,45.645
C158.866,364.921,163.14,364.837,167.394,364.677z"/>
</g>
</svg>

Image deleted (was 6.5 KiB).

Binary image deleted (was 457 KiB).


@@ -1,30 +0,0 @@
die () {
if [ -n "$1" ]; then
>&2 echo -n $(tput setaf 1)
>&2 echo -e "$1"
>&2 echo -n $(tput sgr0)
fi
exit 1
}
need_tag(){
TAG=$1
if [ -z "$TAG" ]; then
echo "Please specify a tag or token. Here's the list: "
aws_display_tags
die
fi
}
need_ips_file() {
IPS_FILE=$1
if [ -z "$IPS_FILE" ]; then
echo "IPS_FILE not set."
die
fi
if [ ! -s "$IPS_FILE" ]; then
echo "IPS_FILE $IPS_FILE not found. Please run: trainer ips <TAG>"
die
fi
}


@@ -1,15 +0,0 @@
bold() {
msg=$1
echo "$(tput bold)$1$(tput sgr0)"
}
green() {
msg=$1
echo "$(tput setaf 2)$1$(tput sgr0)"
}
yellow(){
msg=$1
echo "$(tput setaf 3)$1$(tput sgr0)"
}


@@ -1,142 +0,0 @@
#!/bin/bash
# borrowed from https://gist.github.com/kirikaza/6627072
usage() {
cat >&2 <<__
usage: find-ubuntu-ami.sh [ <filter>... ] [ <sorting> ] [ <options> ]
where:
<filter> is pair of key and substring to search
-r <region>
-n <name>
-v <version>
-a <arch>
-t <type>
-d <date>
-i <image>
-k <kernel>
<sorting> is one of:
-R by region
-N by name
-V by version
-A by arch
-T by type
-D by date
-I by image
-K by kernel
<options> can be:
-q just show AMI
protip for Docker orchestration workshop admin:
./find-ubuntu-ami.sh -t hvm:ebs -r \$AWS_REGION -v 15.10 -N
__
exit 1
}
args=`getopt hr:n:v:a:t:d:i:k:RNVATDIKq $*`
if [ $? != 0 ] ; then
echo >&2
usage
fi
region=
name=
version=
arch=
type=
date=
image=
kernel=
sort=date
quiet=
set -- $args
for a ; do
case "$a" in
-h) usage ;;
-r) region=$2 ; shift ;;
-n) name=$2 ; shift ;;
-v) version=$2 ; shift ;;
-a) arch=$2 ; shift ;;
-t) type=$2 ; shift ;;
-d) date=$2 ; shift ;;
-i) image=$2 ; shift ;;
-k) kernel=$2 ; shift ;;
-R) sort=region ;;
-N) sort=name ;;
-V) sort=version ;;
-A) sort=arch ;;
-T) sort=type ;;
-D) sort=date ;;
-I) sort=image ;;
-K) sort=kernel ;;
-q) quiet=y ;;
--) shift ; break ;;
*) continue ;;
esac
shift
done
[ $# = 0 ] || usage
fix_json() {
tr -d \\n | sed 's/,]}/]}/'
}
jq_query() { cat <<__
.aaData | map (
{
region: .[0],
name: .[1],
version: .[2],
arch: .[3],
type: .[4],
date: .[5],
image: .[6],
kernel: .[7]
} | select (
(.region | contains("$region")) and
(.name | contains("$name")) and
(.version | contains("$version")) and
(.arch | contains("$arch")) and
(.type | contains("$type")) and
(.date | contains("$date")) and
(.image | contains("$image</a>")) and
(.kernel | contains("$kernel"))
)
) | sort_by(.$sort) | .[] |
"\(.region)|\(.name)|\(.version)|\(.arch)|\(.type)|\(.date)|\(.image)|\(.kernel)"
__
}
trim_quotes() {
sed 's/^"//;s/"$//'
}
escape_spaces() {
sed 's/ /\\\ /g'
}
url=http://cloud-images.ubuntu.com/locator/ec2/releasesTable
{
[ "$quiet" ] || echo REGION NAME VERSION ARCH TYPE DATE IMAGE KERNEL
curl -s $url | fix_json | jq "`jq_query`" | trim_quotes | escape_spaces | tr \| ' '
} |
while read region name version arch type date image kernel ; do
image=${image%<*}
image=${image#*>}
if [ "$quiet" ]; then
echo $image
else
echo "$region|$name|$version|$arch|$type|$date|$image|$kernel"
fi
done | column -t -s \|

View File

@@ -1,120 +0,0 @@
#!/usr/bin/env python
import os
import sys
import yaml

try:
    import pdfkit
except ImportError:
    print("WARNING: could not import pdfkit; PDF generation will fail.")


def prettify(l):
    l = [ip.strip() for ip in l]
    ret = ["node{}: <code>{}</code>".format(i+1, s) for (i, s) in zip(range(len(l)), l)]
    return ret

# Read settings from user-provided settings file
with open(sys.argv[1]) as f:
    data = f.read()
SETTINGS = yaml.load(data)
SETTINGS['footer'] = SETTINGS['footer'].format(url=SETTINGS['url'])
globals().update(SETTINGS)

###############################################################################

ips = list(open("ips.txt"))
print("Current settings (as defined in settings.yaml):")
print("   Number of IPs: {}".format(len(ips)))
print(" VMs per cluster: {}".format(clustersize))
print("Background image: {}".format(background_image))
print("---------------------------------------------")
assert len(ips) % clustersize == 0

if clustersize == 1:
    blurb = blurb.format(
        cluster_or_machine="machine",
        this_or_each="this",
        machine_is_or_machines_are="machine is",
        workshop_name=workshop_short_name,
    )
else:
    blurb = blurb.format(
        cluster_or_machine="cluster",
        this_or_each="each",
        machine_is_or_machines_are="machines are",
        workshop_name=workshop_short_name,
    )

# Split the list of IPs into clusters of `clustersize` addresses each.
clusters = []
while ips:
    cluster = ips[:clustersize]
    ips = ips[clustersize:]
    clusters.append(cluster)

html = open("ips.html", "w")
html.write("<html><head><style>")
head = """
div {{
    float: left;
    border: 1px dotted black;
    width: 27%;
    padding: 6% 2.5% 2.5% 2.5%;
    font-size: x-small;
    background-image: url("{background_image}");
    background-size: 13%;
    background-position-x: 50%;
    background-position-y: 5%;
    background-repeat: no-repeat;
}}
p {{
    margin: 0.5em 0 0.5em 0;
}}
.pagebreak {{
    page-break-before: always;
    clear: both;
    display: block;
    height: 8px;
}}
"""
head = head.format(background_image=SETTINGS['background_image'])
html.write(head)
html.write("</style></head><body>")
# Write one card (a <div>) per cluster, with a page break every `pagesize` cards.
for i, cluster in enumerate(clusters):
    if i > 0 and i % pagesize == 0:
        html.write('<span class="pagebreak"></span>\n')
    html.write("<div>")
    html.write(blurb)
    for s in prettify(cluster):
        html.write("<li>%s</li>\n" % s)
    html.write("</ul></p>")
    html.write("<p>login: <b><code>{}</code></b> <br>password: <b><code>{}</code></b></p>\n".format(instance_login, instance_password))
    html.write(footer)
    html.write("</div>")
html.close()

# Dead code, kept as a string literal in the original:
"""
html.write("<div>")
html.write("<p>{}</p>".format(blurb))
for s in prettify(cluster):
    html.write("<li>{}</li>".format(s))
html.write("</ul></p>")
html.write("<center>")
html.write("<p>login: <b><code>{}</code></b> &nbsp&nbsp password: <b><code>{}</code></b></p>\n".format(instance_login, instance_password))
html.write("</center>")
html.write(footer)
html.write("</div>")
html.close()
"""

with open('ips.html') as f:
    pdfkit.from_file(f, 'ips.pdf')

View File

@@ -1,23 +0,0 @@
# This file can be sourced in order to run commands directly on a batch
# of VMs whose IPs are listed in the ips.txt file of the directory in
# which the command is run.
pssh () {
HOSTFILE="ips.txt"
[ -f $HOSTFILE ] || {
echo "No hostfile found at $HOSTFILE"
return
}
echo "[parallel-ssh] $@"
export PSSH=$(which pssh || which parallel-ssh)
$PSSH -h $HOSTFILE -l ubuntu \
--par 100 \
-O LogLevel=ERROR \
-O UserKnownHostsFile=/dev/null \
-O StrictHostKeyChecking=no \
-O ForwardAgent=yes \
"$@"
}
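
A hedged usage sketch for this helper (assuming it is the `scripts/rc` file sourced by the trainer CLI shown below, and that `ips.txt` was produced by `./trainer ips <TAG>`):

```bash
# Sketch: source the helper, then fan a single command out to every VM.
source scripts/rc
cd tags/$TAG             # any directory containing an ips.txt file
pssh 'docker version'    # runs on all listed hosts via parallel-ssh, as user ubuntu
```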

View File

@@ -1,482 +0,0 @@
#!/bin/bash
# Don't execute this script directly. Use ../trainer instead.
set -e # if we encounter an error, abort
export AWS_DEFAULT_OUTPUT=text
greet() {
hello=$(aws iam get-user --query 'User.UserName')
echo "Greetings, $hello/${USER}!"
}
deploy_hq(){
TAG=$1
need_tag $TAG
REMOTE_USER=ubuntu
REMOTE_HOST=$(aws_get_instance_ips_by_tag $TAG)
echo "Trying to reach $TAG instances..."
while ! tag_is_reachable $TAG; do
echo -n "."
sleep 2
done
env | grep -i aws > envvars.sh
scp \
-o "UserKnownHostsFile /dev/null" \
-o "StrictHostKeyChecking=no" \
scripts/remote-execution.sh \
envvars.sh \
$REMOTE_USER@$REMOTE_HOST:/tmp/
ssh -A $REMOTE_USER@$REMOTE_HOST "bash /tmp/remote-execution.sh >>/tmp/pre.out 2>>/tmp/pre.err"
ssh -A $REMOTE_USER@$REMOTE_HOST
}
deploy_tag(){
TAG=$1
SETTINGS=$2
need_tag $TAG
link_tag $TAG
count=$(wc -l < ips.txt)
# wait until all hosts are reachable before trying to deploy
echo "Trying to reach $TAG instances..."
while ! tag_is_reachable $TAG; do
echo -n "."
sleep 2
done
echo "[[ Deploying tag $TAG ]]"
export SETTINGS
source scripts/postprep.rc
echo "Finished deploying $TAG."
echo "You may want to run one of the following commands:"
echo "./trainer pull-images $TAG"
echo "./trainer cards $TAG <settings/somefile.yaml>"
}
link_tag() {
TAG=$1
need_tag $TAG
IPS_FILE=tags/$TAG/ips.txt
need_ips_file $IPS_FILE
ln -sf $IPS_FILE ips.txt
}
pull_tag(){
TAG=$1
need_tag $TAG
link_tag $TAG
if [ ! -s $IPS_FILE ]; then
echo "Nonexistent or empty IPs file $IPS_FILE"
fi
# Pre-pull a bunch of images
pssh --timeout 900 'for I in \
debian:latest \
ubuntu:latest \
fedora:latest \
centos:latest \
postgres \
redis \
training/namer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'
echo "Finished pulling images for $TAG"
echo "You may now want to run:"
echo "./trainer cards $TAG <settings/somefile.yaml>"
}
wait_until_tag_is_running() {
max_retry=50
TAG=$1
COUNT=$2
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do \
let "i += 1"
echo "Waiting: $done_count/$COUNT instances online"
done_count=$(aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
"Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].State.Name" \
| tr "\t" "\n" \
| wc -l)
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
tag_is_reachable() {
TAG=$1
need_tag $TAG
link_tag $TAG
pssh -t 5 true >/dev/null 2>&1
}
test_tag(){
ips_file=tags/$TAG/ips.txt
echo "Using random IP in $ips_file to run tests on $TAG"
ip=$(shuf -n 1 $ips_file)
test_vm $ip
echo "Tests complete. You may want to run one of the following commands:"
echo "./trainer cards $TAG <settings/somefile.yaml>"
}
test_vm() {
ip=$1
echo "[[ Testing instance with IP $(tput bold)$ip $(tput sgr0) ]]"
user=ubuntu
for cmd in "hostname" \
"whoami" \
"hostname -i" \
"cat /tmp/node" \
"cat /tmp/ipv4" \
"cat /etc/hosts" \
"hostnamectl status" \
"docker version | grep Version -B1" \
"docker-compose version" \
"docker-machine version" \
"docker images" \
"docker ps" \
"curl --silent localhost:55555" \
"sudo ls -la /mnt/ | grep docker" \
"env" \
"ls -la /home/docker/.ssh"; do
echo "=== $cmd ==="
echo "$cmd" |
ssh -A -q \
-o "UserKnownHostsFile /dev/null" \
-o "StrictHostKeyChecking=no" \
$user@$ip sudo -u docker -i
echo
done
}
make_key_name(){
SHORT_FINGERPRINT=$(ssh-add -l | grep RSA | head -n1 | cut -d " " -f 2 | tr -d : | cut -c 1-8)
echo "${SHORT_FINGERPRINT}-${USER}"
}
sync_keys() {
# make sure ssh-add -l contains "RSA"
ssh-add -l | grep -q RSA ||
die "The output of \`ssh-add -l\` doesn't contain 'RSA'. Start the agent, add your keys?"
AWS_KEY_NAME=$(make_key_name)
echo -n "Syncing keys... "
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &> /dev/null; then
aws ec2 import-key-pair --key-name $AWS_KEY_NAME \
--public-key-material "$(ssh-add -L \
| grep -i RSA \
| head -n1 \
| cut -d " " -f 1-2)" &> /dev/null
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &> /dev/null; then
die "Somehow, importing the key didn't work. Make sure that 'ssh-add -l | grep RSA | head -n1' returns an RSA key?"
else
echo "Imported new key $AWS_KEY_NAME."
fi
else
echo "Using existing key $AWS_KEY_NAME."
fi
}
suggest_amis() {
scripts/find-ubuntu-ami.sh -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}
get_token() {
if [ -z $USER ]; then
export USER=anonymous
fi
date +%Y-%m-%d-%H-%M-$USER
}
get_ami() {
suggest_amis | head -1
}
make_cards(){
# Generate cards for a given tag
TAG=$1
SETTINGS_FILE=$2
[[ -z "$SETTINGS_FILE" ]] && {
echo "Please specify the settings file you want to use."
echo "e.g.: settings/orchestration.yaml"
exit 1
}
aws_get_instance_ips_by_tag $TAG > tags/$TAG/ips.txt
# Remove symlinks to old cards
rm -f ips.html ips.pdf
# This will generate two files in the base dir: ips.pdf and ips.html
python scripts/ips-txt-to-html.py $SETTINGS_FILE
for f in ips.html ips.pdf; do
# Remove old versions of cards if they exist
rm -f tags/$TAG/$f
# Move the generated file and replace it with a symlink
mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
done
echo "Cards created. You may want to run:"
echo "chromium ips.html"
echo "chromium ips.pdf"
}
describe_tag() {
# Display instance details and reachability/status information
TAG=$1
need_tag $TAG
echo "============= Tag: $TAG ============="
aws_display_instances_by_tag $TAG
aws_display_instance_statuses_by_tag $TAG
}
run_cli() {
case "$1" in
ami)
# A wrapper for scripts/find-ubuntu-ami.sh
shift
scripts/find-ubuntu-ami.sh -r $AWS_DEFAULT_REGION $*
echo
echo "Protip:"
echo "./trainer ami -a amd64 -v 16.04 -t hvm:ebs -N | grep -v ^REGION | cut -d\" \" -f15"
echo
echo "Suggestions:"
suggest_amis
;;
cards)
TAG=$2
need_tag $TAG
make_cards $TAG $3
;;
deploy)
TAG=$2
need_tag $TAG
if [[ $TAG == *"-hq"* ]]; then
echo "Deploying HQ"
deploy_hq $TAG
else
SETTINGS=$3
if [[ -z "$SETTINGS" ]]; then
echo "Please specify a settings file."
exit 1
fi
if ! [[ -f "$SETTINGS" ]]; then
echo "Settings file $SETTINGS not found."
exit 1
fi
echo "Deploying with settings $SETTINGS."
deploy_tag $TAG $SETTINGS
fi
;;
ids)
TAG=$2
need_tag $TAG
IDS=$(aws_get_instance_ids_by_tag $TAG)
echo "$IDS"
# Just in case we managed to create instances but weren't able to tag them
echo "Lookup by client token $TAG:"
IDS=$(aws_get_instance_ids_by_client_token $TAG)
echo "$IDS"
;;
ips)
TAG=$2
need_tag $TAG
mkdir -p tags/$TAG
aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
link_tag $TAG
;;
list)
# list existing instances in a given batch
# to list batches, see "tags" command
echo "Using region $AWS_DEFAULT_REGION."
TAG=$2
need_tag $TAG
describe_tag $TAG
tag_is_reachable $TAG
echo "You may be interested in running one of the following commands:"
echo "./trainer ips $TAG"
echo "./trainer deploy $TAG <settings/somefile.yaml>"
;;
opensg)
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
;;
pull-images)
TAG=$2
need_tag $TAG
pull_tag $TAG
;;
retag)
if [[ -z "$2" ]] || [[ -z "$3" ]]; then
die "Please specify old tag/token, and new tag."
fi
aws_tag_instances $2 $3
;;
shell)
# Get a shell in the container
export PS1="trainer@$AWS_DEFAULT_REGION# "
exec $SHELL
;;
start)
# Create $2 instances
COUNT=$2
if [ -z "$COUNT" ]; then
die "Indicate number of instances to start."
fi
greet # Print our AWS username, to ease the pain of credential-juggling
key_name=$(sync_keys) # Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
AMI=$(get_ami) # Retrieve the AWS image ID
TOKEN=$(get_token) # generate a timestamp token for this batch of VMs
if [ ! -z $3 ]; then
# If an extra arg is present, append it to the tag
TOKEN=$TOKEN-$3
fi
echo "-----------------------------------"
echo "Starting $COUNT instances:"
echo " Region: $AWS_DEFAULT_REGION"
echo " Token/tag: $TOKEN"
echo " AMI: $AMI"
AWS_KEY_NAME=$(make_key_name)
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $2 \
--instance-type t2.medium \
--client-token $TOKEN \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}' )
echo " Key name: $AWS_KEY_NAME"
echo "Reservation ID: $reservation_id"
echo "-----------------------------------"
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TOKEN)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
TAG=$TOKEN
aws_tag_instances $TOKEN $TAG
wait_until_tag_is_running $TAG $COUNT
echo "[-------------------------------------------------------------------------------------]"
echo " Successfully created $2 instances with tag: $TAG"
echo "[-------------------------------------------------------------------------------------]"
mkdir -p tags/$TAG
IPS=$(aws_get_instance_ips_by_tag $TAG)
echo "$IPS" > tags/$TAG/ips.txt
link_tag $TAG
echo "To deploy or kill these instances, run one of the following:"
echo "./trainer deploy $TAG <settings/somefile.yaml>"
echo "./trainer list $TAG"
;;
status)
greet && echo
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
echo "Max instances: $max_instances" && echo
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
echo "Region:" # $AWS_DEFAULT_REGION."
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
;;
stop)
TAG=$2
need_tag $TAG
aws_kill_instances_by_tag $TAG
;;
tag)
# add a tag to a batch of VMs
TAG=$2
NEW_TAG_KEY=$3
NEW_TAG_VALUE=$4
need_tag $TAG
need_tag $NEW_TAG_KEY
need_tag $NEW_TAG_VALUE
;;
test)
TAG=$2
need_tag $TAG
test_tag $TAG
;;
*)
echo "
./trainer <command> [n-instances|tag] [settings/file.yaml]
Core commands:
start n Start n instances
list [TAG] If a tag is provided, list its VMs. Otherwise, list tags.
deploy TAG Deploy all instances with a given tag
pull-images TAG Pre-pull docker images. Run only after deploying.
stop TAG Stop and delete instances tagged TAG
Extras:
ips TAG List all IPs of instances with a given tag (updates ips.txt)
ids TAG/TOKEN List all instance IDs with a given tag
shell Get a shell in the trainer container
status TAG Print information about this tag and its VMs
tags List all tags (per-region)
retag TAG/TOKEN TAG Retag instances with a new tag
Beta:
ami Look up Amazon Machine Images
cards FILE Generate cards
opensg Modify AWS security groups
"
;;
esac
}
(
cd $SCRIPT_DIR
source scripts/cli.sh
source scripts/aws.sh
source scripts/rc
source scripts/colors.sh
mkdir -p tags
# TODO: unset empty envvars
run_cli "$@"
)
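
For orientation, here is what a typical session with this CLI looks like, stitched together from the help text and the hints printed by the commands above (the tag value is a hypothetical example of the `date +%Y-%m-%d-%H-%M-$USER` token format):

```bash
./trainer start 20                                      # create 20 instances; prints the generated tag
./trainer deploy 2017-10-29-14-02-jerome settings/orchestration.yaml
./trainer pull-images 2017-10-29-14-02-jerome           # pre-pull images (run only after deploying)
./trainer cards 2017-10-29-14-02-jerome settings/orchestration.yaml
./trainer stop 2017-10-29-14-02-jerome                  # stop and delete the whole batch
```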

View File

@@ -1,36 +0,0 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
workshop_name: Docker Orchestration
workshop_short_name: orchestration
repo: https://github.com/jpetazzo/orchestration-workshop
url: http://container.training/ # moreinfo link printed on cards
#engine_version: experimental.docker.com #extra features that may change/runaway
#engine_version: test.docker.com
engine_version: get.docker.com #prod release
compose_version: 1.8.1
machine_version: 0.8.2
swarm_version: 1.2.5
# for now these are hard coded in script, and only used for printing cards
instance_login: docker
instance_password: training
# 12 per page works well, but is quite small text
clustersize: 5 # Number of VMs per cluster
pagesize: 12 # Number of cards to print per page
background_image: https://raw.githubusercontent.com/jpetazzo/orchestration-workshop/master/prepare-vms/media/swarm.png
# To be printed on the cards:
blurb: >
Here is the connection information to your very own
{cluster_or_machine} for this {workshop_name} workshop. You can connect
to {this_or_each} VM with any SSH client.
Your {machine_is_or_machines_are}:
# {url} will be replaced by the script
footer: >
<p>For slides, chat and other useful links, see: </p>
<center>{url}</center>

View File

@@ -1,33 +1,24 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
workshop_name: Docker fundamentals
workshop_short_name: Docker # appears on VM connection cards
repo: https://github.com/docker/docker-fundamentals
# Number of VMs per cluster
clustersize: 1
instance_login: docker
instance_password: training
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
clustersize: 1 # Number of VMs per cluster
pagesize: 15 # Number of cards to print per page
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
background_image: https://www.docker.com/sites/default/files/Engine.png
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# To be printed on the cards:
blurb: >
Here is the connection information to your very own
{cluster_or_machine} for this {workshop_name} workshop. You can connect
to {this_or_each} VM with any SSH client.
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
Your {machine_is_or_machines_are}:
# This can be "test" or "stable"
engine_version: test
# {url} will be replaced by the script
footer: >
<p>For slides, chat and other useful links, see: </p>
<center>{url}</center>
url: http://container.training/
engine_version: get.docker.com
compose_version: 1.8.1
machine_version: 0.8.2
swarm_version: latest
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.16.1
machine_version: 0.12.0

View File

@@ -1,33 +1,24 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
workshop_name: Advanced Docker Orchestration
workshop_short_name: orchestration
repo: https://github.com/jpetazzo/orchestration-workshop
# Number of VMs per cluster
clustersize: 5
instance_login: docker
instance_password: training
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
clustersize: 5 # Number of VMs per cluster
pagesize: 12 # Number of cards to print per page
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
background_image: https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# To be printed on the cards:
blurb: >
Here is the connection information to your very own
{cluster_or_machine} for this {workshop_name} workshop. You can connect
to {this_or_each} VM with any SSH client.
Your {machine_is_or_machines_are}:
# {url} will be replaced by the script
footer: >
<p>For slides, chat and other useful links, see: </p>
<center>{url}</center>
url: http://container.training/
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: test
compose_version: 1.12
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.16.1
machine_version: 0.12.0
swarm_version: latest

View File

@@ -1,32 +0,0 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
workshop_name: Advanced Docker Orchestration
workshop_short_name: orchestration
repo: https://github.com/jpetazzo/orchestration-workshop
instance_login: docker
instance_password: training
clustersize: 3 # Number of VMs per cluster
pagesize: 12 # Number of cards to print per page
background_image: https://blog.docker.com/media/2015/08/notary.png
# To be printed on the cards:
blurb: >
Here is the connection information to your
three Docker nodes for the Security
Workshop. You can connect to each VM with
any SSH client.
Your {machine_is_or_machines_are}:
# {url} will be replaced by the script
footer: ""
url: http://container.training/
engine_version: get.docker.com
compose_version: 1.12.0
machine_version: 0.10.0
swarm_version: latest

View File

@@ -1,80 +0,0 @@
#!/bin/bash
TRAINER_IMAGE="preparevms_prepare-vms"
DEPENDENCIES="
aws
ssh
curl
jq
pssh
wkhtmltopdf
man
"
ENVVARS="
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
SSH_AUTH_SOCK
"
check_envvars() {
STATUS=0
for envvar in $ENVVARS; do
if [ -z "${!envvar}" ]; then
echo "Please set environment variable $envvar."
STATUS=1
unset $envvar
fi
done
return $STATUS
}
check_dependencies() {
STATUS=0
for dependency in $DEPENDENCIES ; do
if ! command -v $dependency >/dev/null; then
echo "Could not find dependency $dependency."
STATUS=1
fi
done
return $STATUS
}
check_ssh_auth_sock() {
if [ -z $SSH_AUTH_SOCK ]; then
echo -n "SSH_AUTH_SOCK envvar not set, so its parent directory can't be "
echo "mounted as a volume in a container."
echo "Try running the command below and trying again:"
echo "eval \$(ssh-agent) && ssh-add"
exit 1
fi
}
check_image() {
docker inspect $TRAINER_IMAGE >/dev/null 2>&1
}
# Get the script's real directory, whether we're being called directly or via a symlink
if [ -L "$0" ]; then
export SCRIPT_DIR=$(dirname $(readlink "$0"))
else
export SCRIPT_DIR=$(dirname "$0")
fi
cd "$SCRIPT_DIR"
check_envvars || exit 1
if check_dependencies; then
scripts/trainer-cli "$@"
elif check_image; then
check_ssh_auth_sock
export SSH_AUTH_DIRNAME=$(dirname $SSH_AUTH_SOCK)
docker-compose run prepare-vms "$@"
else
echo "Some dependencies are missing, and docker image $TRAINER_IMAGE doesn't exist locally."
echo "Please do one of the following: "
echo "- run \`docker-compose build\`"
echo "- install missing dependencies"
fi

82
prepare-vms/workshopctl Executable file
View File

@@ -0,0 +1,82 @@
#!/bin/bash
# Get the script's real directory, whether we're being called directly or via a symlink
if [ -L "$0" ]; then
export SCRIPT_DIR=$(dirname $(readlink "$0"))
else
export SCRIPT_DIR=$(dirname "$0")
fi
# Load all scriptlets
cd "$SCRIPT_DIR"
for lib in lib/*.sh; do
. $lib
done
TRAINER_IMAGE="preparevms_prepare-vms"
DEPENDENCIES="
aws
ssh
curl
jq
pssh
wkhtmltopdf
man
"
ENVVARS="
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
SSH_AUTH_SOCK
"
check_envvars() {
status=0
for envvar in $ENVVARS; do
if [ -z "${!envvar}" ]; then
error "Environment variable $envvar is not set."
if [ "$envvar" = "SSH_AUTH_SOCK" ]; then
error "Hint: run '\$(ssh-agent) ; ssh-add' and try again?"
fi
status=1
fi
done
return $status
}
check_dependencies() {
status=0
for dependency in $DEPENDENCIES; do
if ! command -v $dependency >/dev/null; then
warning "Dependency $dependency could not be found."
status=1
fi
done
return $status
}
check_image() {
docker inspect $TRAINER_IMAGE >/dev/null 2>&1
}
check_envvars \
|| die "Please set all required environment variables."
check_dependencies \
|| warning "At least one dependency is missing. Install it or try the image wrapper."
# Now check which command was invoked and execute it
if [ "$1" ]; then
cmd="$1"
shift
else
cmd=help
fi
fun=_cmd_$cmd
type -t $fun | grep -q function || die "Invalid command: $cmd"
$fun "$@"
# export SSH_AUTH_DIRNAME=$(dirname $SSH_AUTH_SOCK)
# docker-compose run prepare-vms "$@"
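
The `_cmd_$cmd` dispatch above means that adding a subcommand is just a matter of defining a function in one of the `lib/*.sh` scriptlets loaded at startup. A minimal sketch (`lib/hello.sh` and `_cmd_hello` are hypothetical names, not files from this repo):

```bash
# lib/hello.sh (hypothetical) -- picked up by the "for lib in lib/*.sh" loop above
_cmd_hello() {
    # "./workshopctl hello world" dispatches here with "$@" set to ("world")
    echo "Hello, ${1:-stranger}!"
}
```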

View File

@@ -1,37 +0,0 @@
# Docker Registry with Swarm superpowers
To start your registry, just do:
```
docker-compose up -d
```
You can then refer to the registry as `localhost:5000`.
If you are running on Swarm, do the following:
```
docker-compose up -d
docker-compose scale frontend=N
```
... where `N` is the number of nodes in your cluster.
This will make sure that a `frontend` container runs on every node,
so that `localhost:5000` always refers to your registry.
If you scale up your cluster, make sure to re-run `docker-compose scale`
accordingly.
If you supply too large a value for `N`, you will see errors
(since Swarm tries to schedule more frontends than there are
available hosts), but everything will still work fine, don't worry.
Note: this will bind port 5000 on the loopback interface on
all your machines. That port will therefore be unavailable if
you try e.g. `docker run -p 5000:...`.
Note: the registry will only be available from your cluster,
through the loopback interface. If you want to make it available
from outside, remove `127.0.0.1:` from the Compose file.
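To check that the registry is up and answering, you can query its catalog (this assumes the standard v2 HTTP API exposed by the `registry:2` image):

```bash
# On a fresh registry, this should print: {"repositories":[]}
curl http://localhost:5000/v2/_catalog
```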

View File

@@ -1,12 +0,0 @@
version: "2"
services:
backend:
image: registry:2
frontend:
image: jpetazzo/hamba
command: 5000 backend:5000
ports:
- "127.0.0.1:5000:5000"
depends_on:
- backend

56
slides/README.md Normal file
View File

@@ -0,0 +1,56 @@
# MarkMaker
General principles:
- each slides deck is described in a YAML manifest;
- the YAML manifest lists a number of Markdown files
that compose the slides deck;
- a Python script "compiles" the YAML manifest into
an HTML file;
- that HTML file can be displayed in your browser
(you don't need to host it), or you can publish it
(along with a few static assets) if you want.
## Getting started
Look at the YAML file corresponding to the deck that
you want to edit. The format should be self-explanatory.
*I (Jérôme) am still in the process of fine-tuning that
format. Once I settle for something, I will add better
documentation.*
Make changes in the YAML file, and/or in the referenced
Markdown files. If you have never used Remark before:
- use `---` to separate slides,
- use `.foo[bla]` if you want `bla` to have CSS class `foo`,
- define (or edit) CSS classes in [workshop.css](workshop.css).
After making changes, run `./build.sh once`; it will
compile each `foo.yml` file into `foo.yml.html`.
You can also run `./build.sh forever`: it will monitor the current
directory and rebuild slides automatically when files are modified.
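Until that better documentation exists, here is a minimal manifest sketch, using the same fields as the `dockercon.yml` added elsewhere in this diff (`myworkshop.yml` is a made-up name, and the field semantics are inferred rather than documented):

```yaml
# myworkshop.yml (hypothetical) -- compiled to myworkshop.yml.html by ./build.sh once
title: "My Workshop"
chat: "[Slack](https://dockercommunity.slack.com/messages/C7ET1GY4Q)"
exclude:
  - self-paced
chapters:
  - common/sampleapp.md        # each entry is a Markdown file (or inline Markdown)
  - |
    class: title
    Inline Markdown works too
```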
## Publishing pipeline
Each time we push to `master`, a webhook pings
[Netlify](https://www.netlify.com/), which will pull
the repo, build the slides (by running `build.sh once`),
and publish them to http://container.training/.
Pull requests are automatically deployed to testing
subdomains. I had no idea that I would ever say this
about a static page hosting service, but it is seriously awesome. ⚡️💥
## Extra bells and whistles
You can run `./slidechecker foo.yml.html` to check for
missing images and show the number of slides in that deck.
It requires `phantomjs` to be installed. It takes some
time to run so it is not yet integrated with the publishing
pipeline.

8
slides/TODO Normal file
View File

@@ -0,0 +1,8 @@
Black belt references that I want to add somewhere:
What Have Namespaces Done for You Lately?
https://www.youtube.com/watch?v=MHv6cWjvQjM&list=PLkA60AVN3hh-biQ6SCtBJ-WVTyBmmYho8&index=8
Cilium: Network and Application Security with BPF and XDP
https://www.youtube.com/watch?v=ilKlmTDdFgk&list=PLkA60AVN3hh-biQ6SCtBJ-WVTyBmmYho8&index=9

1
slides/_redirects Normal file
View File

@@ -0,0 +1 @@
/ /dockercon.yml.html 200!

33
slides/build.sh Executable file
View File

@@ -0,0 +1,33 @@
#!/bin/sh
case "$1" in
once)
for YAML in *.yml; do
./markmaker.py < $YAML > $YAML.html || {
rm $YAML.html
break
}
done
;;
forever)
# There is a weird bug in entr, at least on MacOS,
# where it doesn't restore the terminal to a clean
state when exiting. So let's try to work around
# it with stty.
STTY=$(stty -g)
while true; do
find . | entr -d $0 once
STATUS=$?
case $STATUS in
2) echo "Directory has changed. Restarting.";;
130) echo "SIGINT or q pressed. Exiting."; break;;
*) echo "Weird exit code: $STATUS. Retrying in 1 second."; sleep 1;;
esac
done
stty $STTY
;;
*)
echo "$0 <once|forever>"
;;
esac

477
slides/common/sampleapp.md Normal file
View File

@@ -0,0 +1,477 @@
# Our sample application
- Visit the GitHub repository with all the materials of this workshop:
<br/>https://github.com/jpetazzo/orchestration-workshop
- The application is in the [dockercoins](
https://github.com/jpetazzo/orchestration-workshop/tree/master/dockercoins)
subdirectory
- Let's look at the general layout of the source code:
there is a Compose file [docker-compose.yml](
https://github.com/jpetazzo/orchestration-workshop/blob/master/dockercoins/docker-compose.yml) ...
... and 4 other services, each in its own directory:
- `rng` = web service generating random bytes
- `hasher` = web service computing hash of POSTed data
- `worker` = background process using `rng` and `hasher`
- `webui` = web interface to watch progress
---
class: extra-details
## Compose file format version
*Particularly relevant if you have used Compose before...*
- Compose 1.6 introduced support for a new Compose file format (aka "v2")
- Services are no longer at the top level, but under a `services` section
- There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer)
- Containers are placed on a dedicated network, making links unnecessary
- There are other minor differences, but the upgrade is easy and straightforward (see the sketch below)
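A minimal sketch of a v2 Compose file illustrating those points (the service names and images are placeholders):

```yaml
# Compose file format v2: note the version key (a string!) and the services section
version: "2"
services:
  web:
    image: nginx
  redis:
    image: redis
# Both services land on a dedicated network, so "web" can reach "redis"
# by name, without any links section.
```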
---
## Links, naming, and service discovery
- Containers can have network aliases (resolvable through DNS)
- Compose file version 2+ makes each container reachable through its service name
- Compose file version 1 did require "links" sections
- Our code can connect to services using their short name
(instead of e.g. IP address or FQDN)
- Network aliases are automatically namespaced
(i.e. you can have multiple apps declaring and using a service named `database`; see the sketch below)
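A quick way to see this resolution at work (a sketch; `dockercoins_default` assumes the Compose project lives in the `dockercoins` directory and the app is up):

```bash
# Service names resolve through DNS on the app's dedicated network
docker run --rm --net dockercoins_default alpine ping -c1 rng
```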
---
## Example in `worker/worker.py`
![Service discovery](images/service-discovery.png)
---
## What's this application?
---
class: pic
![DockerCoins logo](images/dockercoins.png)
(DockerCoins 2016 logo courtesy of [@XtlCnslt](https://twitter.com/xtlcnslt) and [@ndeloof](https://twitter.com/ndeloof). Thanks!)
---
## What's this application?
- It is a DockerCoin miner! 💰🐳📦🚢
--
- No, you can't buy coffee with DockerCoins
--
- How DockerCoins works:
- `worker` asks `rng` to generate a few random bytes
- `worker` feeds these bytes into `hasher`
- and repeat forever!
- every second, `worker` updates `redis` to indicate how many loops were done
- `webui` queries `redis`, and computes and exposes "hashing speed" in your browser
---
## Getting the application source code
- We will clone the GitHub repository
- The repository also contains scripts and tools that we will use through the workshop
.exercise[
<!--
```bash
if [ -d orchestration-workshop ]; then
mv orchestration-workshop orchestration-workshop.$$
fi
```
-->
- Clone the repository on `node1`:
```bash
git clone git://github.com/jpetazzo/orchestration-workshop
```
]
(You can also fork the repository on GitHub and clone your fork if you prefer that.)
---
# Running the application
Without further ado, let's start our application.
.exercise[
- Go to the `dockercoins` directory, in the cloned repo:
```bash
cd ~/orchestration-workshop/dockercoins
```
- Use Compose to build and run all containers:
```bash
docker-compose up
```
<!--
```wait units of work done```
```keys ^C```
-->
]
Compose tells Docker to build all container images (pulling
the corresponding base images), then starts all containers,
and displays aggregated logs.
---
## Lots of logs
- The application continuously generates logs
- We can see the `worker` service making requests to `rng` and `hasher`
- Let's put that in the background
.exercise[
- Stop the application by hitting `^C`
]
- `^C` stops all containers by sending them the `TERM` signal
- Some containers exit immediately, others take longer
<br/>(because they don't handle `SIGTERM` and end up being killed after a 10s timeout; see the workaround sketch below)
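If waiting 10 seconds per stubborn container gets old, Compose lets you shorten the grace period (a sketch; `-t` is the stop timeout, in seconds):

```bash
# Send TERM, then KILL after 1 second instead of the default 10
docker-compose stop -t 1
```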
---
## Restarting in the background
- Many flags and commands of Compose are modeled after those of `docker`
.exercise[
- Start the app in the background with the `-d` option:
```bash
docker-compose up -d
```
- Check that our app is running with the `ps` command:
```bash
docker-compose ps
```
]
`docker-compose ps` also shows the ports exposed by the application.
---
class: extra-details
## Viewing logs
- The `docker-compose logs` command works like `docker logs`
.exercise[
- View all logs since container creation and exit when done:
```bash
docker-compose logs
```
- Stream container logs, starting at the last 10 lines for each container:
```bash
docker-compose logs --tail 10 --follow
```
<!--
```wait units of work done```
```keys ^C```
-->
]
Tip: use `^S` and `^Q` to pause/resume log output.
---
class: extra-details
## Upgrading from Compose 1.6
.warning[The `logs` command has changed between Compose 1.6 and 1.7!]
- Up to 1.6
- `docker-compose logs` is the equivalent of `logs --follow`
- `docker-compose logs` must be restarted if containers are added
- Since 1.7
- `--follow` must be specified explicitly
- new containers are automatically picked up by `docker-compose logs`
---
## Connecting to the web UI
- The `webui` container exposes a web dashboard; let's view it
.exercise[
- With a web browser, connect to `node1` on port 8000
- Remember: the `nodeX` aliases are valid only on the nodes themselves
- In your browser, you need to enter the IP address of your node
<!-- ```open http://node1:8000``` -->
]
You should see a speed of approximately 4 hashes/second.
More precisely: 4 hashes/second, with regular dips down to zero.
<br/>This is because Jérôme is incapable of writing good frontend code.
<br/>Don't ask. Seriously, don't ask. This is embarrassing.
---
class: extra-details
## Why does the speed seem irregular?
- The app actually has a constant, steady speed: 3.33 hashes/second
<br/>
(which corresponds to 1 hash every 0.3 seconds, for *reasons*)
- The worker doesn't update the counter after every loop, but up to once per second
- The speed is computed by the browser, checking the counter about once per second
- Between two consecutive updates, the counter will increase either by 4, or by 0
- The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - etc.
*We told you to not ask!!!*
---
## Scaling up the application
- Our goal is to make that performance graph go up (without changing a line of code!)
--
- Before trying to scale the application, we'll figure out if we need more resources
(CPU, RAM...)
- For that, we will use good old UNIX tools on our Docker node
---
## Looking at resource usage
- Let's look at CPU, memory, and I/O usage
.exercise[
- run `top` to see CPU and memory usage (you should see idle cycles)
<!--
```bash top```
```wait Tasks```
```keys ^C```
-->
- run `vmstat 1` to see I/O usage (si/so/bi/bo)
<br/>(the 4 numbers should be almost zero, except `bo` for logging)
<!--
```bash vmstat 1```
```wait memory```
```keys ^C```
-->
]
We have available resources.
- Why?
- How can we use them?
---
## Scaling workers on a single node
- Docker Compose supports scaling
- Let's scale `worker` and see what happens!
.exercise[
- Start one more `worker` container:
```bash
docker-compose scale worker=2
```
- Look at the performance graph (it should show a x2 improvement)
- Look at the aggregated logs of our containers (`worker_2` should show up)
- Look at the impact on CPU load with e.g. top (it should be negligible)
]
---
## Adding more workers
- Great, let's add more workers and call it a day, then!
.exercise[
- Start eight more `worker` containers:
```bash
docker-compose scale worker=10
```
- Look at the performance graph: does it show a x10 improvement?
- Look at the aggregated logs of our containers
- Look at the impact on CPU load and memory usage
]
---
# Identifying bottlenecks
- You should have seen a 3x speed bump (not 10x)
- Adding workers didn't result in linear improvement
- *Something else* is slowing us down
--
- ... But what?
--
- The code doesn't have instrumentation
- Let's use state-of-the-art HTTP performance analysis!
<br/>(i.e. good old tools like `ab`, `httping`...)
---
## Accessing internal services
- `rng` and `hasher` are exposed on ports 8001 and 8002
- This is declared in the Compose file:
```yaml
...
rng:
build: rng
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
...
```
---
## Measuring latency under load
We will use `httping`.
.exercise[
- Check the latency of `rng`:
```bash
httping -c 10 localhost:8001
```
- Check the latency of `hasher`:
```bash
httping -c 10 localhost:8002
```
]
`rng` has a much higher latency than `hasher`.
---
## Let's draw hasty conclusions
- The bottleneck seems to be `rng`
- *What if* we don't have enough entropy and can't generate enough random numbers?
- We need to scale out the `rng` service on multiple machines!
Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.
(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
<br/>
...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)
---
## Clean up
- Before moving on, let's remove those containers
.exercise[
- Tell Compose to remove everything:
```bash
docker-compose down
```
]

182
slides/dockercon.yml Normal file
View File

@@ -0,0 +1,182 @@
chat: "[Slack](https://dockercommunity.slack.com/messages/C7ET1GY4Q)"
exclude:
- self-paced
- snap
- auto-btp
- benchmarking
- elk-manual
- prom-manual
title: "Swarm: from Zero to Hero (DC17EU)"
chapters:
- |
class: title
.small[
Swarm: from Zero to Hero
.small[.small[
**Be kind to the WiFi!**
*Use the 5G network*
<br/>
*Don't use your hotspot*
<br/>
*Don't stream videos from YouTube, Netflix, etc.
<br/>(if you're bored, watch local content instead)*
Also: share the power outlets
<br/>
*(with limited power comes limited responsibility?)*
<br/>
*(or something?)*
Thank you!
]
]
]
---
## Intros
<!--
- Hello! We are
AJ ([@s0ulshake](https://twitter.com/s0ulshake))
&
Jérôme ([@jpetazzo](https://twitter.com/jpetazzo))
-->
- Hello! We are Jérôme, Lee, Nicholas, and Scott
<!--
I am
Jérôme ([@jpetazzo](https://twitter.com/jpetazzo))
-->
--
- This is our collective Docker knowledge:
![Bell Curve](images/bell-curve.jpg)
---
## "From zero to hero"
--
- It rhymes, but it's a pretty bad title, to be honest
--
- None of you is a "zero"
--
- None of us is a "hero"
--
- None of us should even try to be a hero
--
*The hero syndrome is a phenomenon affecting people who seek heroism or recognition,
usually by creating a desperate situation which they can resolve.
This can include unlawful acts, such as arson.
The phenomenon has been noted to affect civil servants,
such as firefighters, nurses, police officers, and security guards.*
(Wikipedia page on [hero syndrome](https://en.wikipedia.org/wiki/Hero_syndrome))
---
## Agenda
.small[
- 09:00-09:10 Hello!
- 09:10-10:30 Part 1
- 10:30-11:00 coffee break
- 11:00-12:30 Part 2
- 12:30-13:30 lunch break
- 13:30-15:00 Part 3
- 15:00-15:30 coffee break
- 15:30-17:00 Part 4
- 17:00-18:00 Afterhours and Q&A
]
<!--
- The tutorial will run from 9:00am to 12:20pm
- This will be fast-paced, but DON'T PANIC!
- There will be a coffee break at 10:30am
<br/>
(please remind me if I forget about it!)
-->
- All the content is publicly available (slides, code samples, scripts)
Upstream URL: https://github.com/jpetazzo/orchestration-workshop
- Feel free to interrupt for questions at any time
- Live feedback, questions, help on [Gitter](chat)
http://container.training/chat
- swarm/intro.md
- |
@@TOC@@
- - swarm/prereqs.md
- swarm/versions.md
- |
class: title
All right!
<br/>
We're all set.
<br/>
Let's do this.
- common/sampleapp.md
- swarm/swarmkit.md
- swarm/creatingswarm.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/updatingservices.md
- swarm/healthchecks.md
- - swarm/operatingswarm.md
- swarm/netshoot.md
- swarm/ipsec.md
- swarm/swarmtools.md
- swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- - swarm/logging.md
- swarm/metrics.md
- swarm/stateful.md
- swarm/extratips.md
- swarm/end.md
- |
class: title
That's all folks! <br/> Questions?
.small[.small[
Jérôme ([@jpetazzo](https://twitter.com/jpetazzo)) — [@docker](https://twitter.com/docker)
]]
<!--
Tiffany ([@tiffanyfayj](https://twitter.com/tiffanyfayj))
AJ ([@s0ulshake](https://twitter.com/s0ulshake))
-->

(Binary image diffs not shown. New files added under slides/images/ include appcont.png (174 KiB), blackbelt.png (16 KiB), composeup.gif (109 KiB), copy.jpg (49 KiB), diagram.odg, diagram.png (84 KiB), elimatrix.png (203 KiB), end.jpg (270 KiB), graph.gif (9.8 KiB), image.png (43 KiB), and install.jpg (25 KiB); several more images were added or moved without their names appearing in this view.)

Some files were not shown because too many files have changed in this diff.