Compare commits

...

208 Commits

Author SHA1 Message Date
Jérôme Petazzoni
7ed54eee66 Merge pull request #64 from trapier/slides_comment_format
slides: code block comment formatting on snap install
2016-12-12 17:59:21 -06:00
Trapier Marshall
1dca8e5a7a slides: code block comment formatting
This will make it easier to copy-paste the whole block used for
snap installation
2016-12-12 11:03:30 -05:00
Jérôme Petazzoni
165de1dbb5 Merge pull request #63 from trapier/slides_cosmetic_edits
couple of cosmetic edits to slides
2016-12-11 21:48:57 -06:00
Trapier Marshall
b7afd13012 couple cosmetic corrections to slides 2016-12-11 01:16:30 -05:00
Jerome Petazzoni
e8b64c5e08 Last touch-ups for LISA16! Good to go! 2016-12-05 19:32:39 -08:00
Jerome Petazzoni
9124eb0e07 Add healthchecks in WIP section 2016-12-05 13:32:09 -08:00
Jerome Petazzoni
0bede24e23 Add what's next section 2016-12-05 10:49:31 -08:00
Jerome Petazzoni
ee79e5ba86 Add MOSH instructions 2016-12-05 10:32:29 -08:00
Jerome Petazzoni
9078cfb57d DAB -> Compose v3 2016-12-05 08:53:31 -08:00
Jerome Petazzoni
6854698fe1 Add Fluentd instructions (contrib) 2016-12-04 17:07:48 -08:00
Jerome Petazzoni
16a4dac192 Add "replayability" instructions 2016-12-04 16:40:17 -08:00
Jerome Petazzoni
0029fa47c5 Update secrets and autolock chapters (thanks @diogomonica for feedback and pointers!) 2016-12-04 09:19:09 -08:00
Jerome Petazzoni
a53636340b Tweak 2016-12-03 10:30:29 -08:00
Jerome Petazzoni
c95b88e562 Secrets management and data encryption 2016-12-03 10:28:20 -08:00
Jerome Petazzoni
d438bd624a Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-12-02 17:50:39 -08:00
Jerome Petazzoni
839746831b Improve illustration a bit 2016-12-02 17:50:29 -08:00
Jérôme Petazzoni
0b1b589314 Merge pull request #60 from hubertst/patch-1
Update provisioning.yml
2016-12-02 16:47:54 -08:00
Hubert
61d2709f8f Update provisioning.yml
fix for ansible 2.2
2016-12-02 09:49:52 +01:00
Jerome Petazzoni
1741a7b35a Add encrypted networks 2016-12-01 22:15:42 -08:00
Jerome Petazzoni
e101856dd7 dynamic scheduling 2016-12-01 17:18:00 -08:00
Jerome Petazzoni
d451f9c7bf Add note on docker service update --mode 2016-12-01 15:52:05 -08:00
Jerome Petazzoni
b021b0eec8 Addtl metrics resources 2016-12-01 15:43:49 -08:00
Jerome Petazzoni
e4f824fd07 docker system ... 2016-11-30 15:54:14 -08:00
Jerome Petazzoni
019165e98c Re-enable a few slides (checked all ??? slides) 2016-11-29 13:02:42 -08:00
Jerome Petazzoni
cf5c2d5741 Add PromQL details + side-by-side Prom&Snap comparison 2016-11-29 12:59:28 -08:00
Jerome Petazzoni
971bf85b17 Clarify raft usage 2016-11-28 17:44:15 -08:00
Jerome Petazzoni
83749ade43 Add "what did we change in this app?" section 2016-11-28 17:17:24 -08:00
Jerome Petazzoni
76fb2f2e2c Add prometheus files (fixes #58) 2016-11-28 12:30:56 -08:00
Jerome Petazzoni
6bda8147e4 Merge branch 'lisa16' 2016-11-28 12:28:03 -08:00
Jerome Petazzoni
95751d1ee9 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-11-23 15:18:12 -08:00
Jerome Petazzoni
12adae107e Update instructions to install Compose in nodes
Closes #51

(Also addresses remarks about using Machine in older EC2 accounts lacking VPC)
2016-11-23 15:18:07 -08:00
Jerome Petazzoni
c652ea08a2 Upgrade to remark 0.14 (closes #38) 2016-11-23 14:45:03 -08:00
Jerome Petazzoni
30008e4af6 Add warning re/ swarmtctl (fixes #35) 2016-11-23 14:34:44 -08:00
Jérôme Petazzoni
bb262e27e8 Merge pull request #55 from stefanlasiewski/master
"Using Docker Machine to communicate with a node" missing the `docker-machine env` command
2016-11-23 12:27:55 -06:00
Jerome Petazzoni
9656d959cc Switch to EBS-based instances; change default instance type to t2.medium 2016-11-21 17:10:07 -08:00
Jerome Petazzoni
46b772b95e First round of updates for LISA 2016-11-21 16:55:47 -08:00
stefanlasiewski
f801e1b9ad Add instructions for VMware Fusion. 2016-11-21 11:44:13 -08:00
stefanlasiewski
1c44d7089a Merge branch 'master' of https://github.com/stefanlasiewski/orchestration-workshop 2016-11-18 14:44:58 -08:00
stefanlasiewski
1f7f4a29ff docker-machine ... should actually be docker-machine env ... in a
couple of places.
2016-11-18 14:44:33 -08:00
Jerome Petazzoni
e16e23e2bd Add supergrok instructions 2016-11-18 10:06:10 -08:00
Jérôme Petazzoni
b5206aa68e Merge pull request #53 from drewmoseley/patch-1
Install pycrypto
2016-11-17 17:24:49 -06:00
Jérôme Petazzoni
8a47bce180 Merge pull request #52 from asziranyi/patch-1
add vagrant-vbguest install link
2016-11-17 17:24:18 -06:00
Drew Moseley
6cd8c32621 Install pycrypto
Not sure if it's somehow unique to my setup but Ansible needed me to install pycrypto as well.
2016-11-17 12:07:42 -05:00
asziranyi
f2f1934940 add vagrant-vbguest installation link 2016-11-17 15:50:47 +01:00
Jerome Petazzoni
8cc388dcb8 add ctrl-p ctrl-q warning 2016-11-14 12:36:57 -08:00
Jerome Petazzoni
a276e72ab0 add ngrok instructions 2016-11-14 11:23:22 -08:00
Jerome Petazzoni
bdb8e1b3df Add instructions for self-paced workshop 2016-11-11 14:28:28 -08:00
Jérôme Petazzoni
66ee4739ed typos 2016-11-07 22:40:59 -06:00
Jérôme Petazzoni
893c7b13c6 Add instructions to create VMs with Docker Machine 2016-11-07 22:38:43 -06:00
Jerome Petazzoni
78b730e4ac Patch up TOC generator 2016-11-01 17:37:48 -07:00
Jerome Petazzoni
e3eb06ddfb Bump up to Compose 1.8.1 and Machine 0.8.2 2016-11-01 17:10:55 -07:00
Jerome Petazzoni
ad29a45191 Add advertise-addr info + small fixups for mentor week 2016-11-01 17:10:36 -07:00
Jerome Petazzoni
e1968beefa Bump to 16.04 LTS AMIs (closes #37)
16.04 doesn't come with Python setuptools, so we have to install that too.
2016-10-18 08:53:53 -07:00
Jerome Petazzoni
b1b3ecb5e9 Add Prometheus section 2016-10-16 17:28:05 -07:00
Jerome Petazzoni
ef60a78998 Pin version numbers used by ELK 2016-10-16 16:30:04 -07:00
Jerome Petazzoni
70064da91c Add Docker Machine; use it to get TLS mutual auth instead of 55555 plain text 2016-10-16 16:27:21 -07:00
Jérôme Petazzoni
0b6a3a1cba Merge pull request #48 from soulshake/typo
Typo fixes
2016-10-08 14:49:16 +02:00
AJ Bowen
e403a005ea 'Set up' when it's a verb, 'setup' when it's a noun. 2016-10-07 17:09:34 +02:00
AJ Bowen
773528fc2b They're --> Their 2016-10-07 16:19:05 +02:00
Jérôme Petazzoni
97af5492f7 Remove InfluxDB password auth 2016-10-04 18:42:32 +02:00
Jérôme Petazzoni
194ce5d7b6 Update Julius info 2016-10-04 14:11:12 +02:00
Jérôme Petazzoni
fafc8fb1ed Update TOC and add slide about Prometheus 2016-10-04 14:10:38 +02:00
Jérôme Petazzoni
4cb37481ba Merge pull request #46 from dragorosson/patch-1
Fix grammar
2016-10-04 03:47:29 +02:00
Drago Rosson
9196b27f0e Fix grammar 2016-10-03 16:21:56 -05:00
Jerome Petazzoni
9ce98430ab Last (hopefully) round of fixes before LinuxCon EU! 2016-10-03 09:20:40 -07:00
tiffany jernigan
4117f079e6 Run InfluxDB and Grafana as services using Docker Hub images. 2016-10-01 18:03:40 -07:00
Jerome Petazzoni
1105c9fa1f Merge remote-tracking branch 'tiffanyfj/metrics' 2016-10-01 08:06:43 -07:00
Jerome Petazzoni
ab7c1bb09a Prepare for LinuxCon EU Berlin 2016-10-01 08:05:55 -07:00
Jérôme Petazzoni
bfcb24c1ca Merge pull request #45 from anonymuse/jesse/docs_linkfix
Fix path for README links
2016-09-30 16:22:08 +02:00
Jesse White
45f410bb49 Fix path for README links 2016-09-29 17:22:55 -04:00
Jérôme Petazzoni
bcd2433fa4 Merge branch 'BretFisher-readme-updates' 2016-09-29 00:25:46 +02:00
Jérôme Petazzoni
1d02ddf271 Mess up with whitespace, because I am OCD like that 2016-09-29 00:25:36 +02:00
Jérôme Petazzoni
4765410393 Merge branch 'readme-updates' of https://github.com/BretFisher/orchestration-workshop-with-docker into BretFisher-readme-updates 2016-09-29 00:22:21 +02:00
tiffany jernigan
6102d21150 Added metrics chapter 2016-09-28 14:18:36 -07:00
Bret Fisher
75caa65973 more trainer info 2016-09-28 01:26:56 -04:00
Bret Fisher
dfd2bf4aeb new example settings file 2016-09-28 01:26:42 -04:00
Bret Fisher
51000b4b4d better swarm image for cards 2016-09-28 01:26:02 -04:00
Bret Fisher
3acd3b078b more info for trainers 2016-09-27 13:06:35 -04:00
Bret Fisher
4b43287c5b more info for trainers 2016-09-27 11:37:42 -04:00
Jerome Petazzoni
c8c745459c Update stateful section 2016-09-19 11:23:23 -07:00
Jerome Petazzoni
04dec2e196 Round of updates for Velocity 2016-09-18 16:20:51 -07:00
Jerome Petazzoni
0f8c189786 Docker Application Bundle -> Distributed Application Bundle 2016-09-18 12:24:47 -07:00
Jerome Petazzoni
81cc14d47b Fix VM card background image 2016-09-18 12:18:05 -07:00
Jérôme Petazzoni
060b2377d5 Merge pull request #34 from everett-toews/fix-link
Fix broken link to nomenclature doc
2016-09-11 12:01:24 -05:00
Everett Toews
1e77736987 Fix broken link to nomenclature doc 2016-09-10 15:49:04 -05:00
Jérôme Petazzoni
bf2b4b7eb7 Merge pull request #32 from everett-toews/github-docs
Move slides to docs for GitHub Pages
2016-09-08 13:56:40 -05:00
Everett Toews
8396f13a4a Move slides to docs for GitHub Pages 2016-08-27 16:12:25 -05:00
Jerome Petazzoni
571097f369 Small fix 2016-08-27 13:55:26 -07:00
Jerome Petazzoni
b1110db8ca Update TOC 2016-08-24 14:01:31 -07:00
Jerome Petazzoni
b73a628f05 Remove old files 2016-08-24 13:52:16 -07:00
Jerome Petazzoni
a07795565d Update tweet message 2016-08-24 13:50:25 -07:00
Jérôme Petazzoni
c4acbfd858 Add diagram 2016-08-24 16:34:32 -04:00
Jerome Petazzoni
ddbda14e14 Reviews/edits 2016-08-24 13:31:00 -07:00
Jerome Petazzoni
ad4ea8659b Node management 2016-08-24 08:04:27 -07:00
Jerome Petazzoni
8d7f27d60d Add Docker Application Bundles
Capitalize Redis consistently
2016-08-24 06:59:15 -07:00
Jerome Petazzoni
9f21c7279c Compose build+push 2016-08-23 14:19:14 -07:00
Jerome Petazzoni
53ae221632 Add stateful service section 2016-08-23 11:03:57 -07:00
Jerome Petazzoni
6719bcda87 Update logging section 2016-08-22 15:51:26 -07:00
Jerome Petazzoni
40e0c96c91 Rolling upgrades 2016-08-22 14:21:00 -07:00
Jerome Petazzoni
2c8664e58d Updated dockercoins deployment instructions 2016-08-12 06:47:30 -07:00
Jerome Petazzoni
1e5cee2456 Updated intro+cluster setup part 2016-08-11 10:01:51 -07:00
Jerome Petazzoni
29b8f53ae0 More typo fixes courtesy of @tiffanyfj 2016-08-11 06:05:43 -07:00
Jérôme Petazzoni
451f68db1d Update instructions to join cluster 2016-08-10 15:50:30 +02:00
Jérôme Petazzoni
5a4d10ed1a Upgrade versions to Engine 1.12 + Compose 1.8 2016-08-10 15:50:10 +02:00
Jérôme Petazzoni
06d5dc7846 Merge pull request #29 from programmerq/pssh-command
detect debian command or upstream command
2016-08-07 15:26:29 +02:00
Jeff Anderson
b63eb0fa40 detect debian command or upstream command 2016-08-01 12:38:12 -06:00
Jérôme Petazzoni
117e2a9ba2 Merge pull request #13 from fiunchinho/master
Version can be set as env variable to be used, instead of generating unix timestamp
2016-07-11 23:57:13 -05:00
Jerome Petazzoni
d2f6e88fd1 Add -v flag for go get swarmit 2016-06-28 16:47:18 -07:00
Jérôme Petazzoni
c742c39ed9 Merge pull request #26 from beenanner/master
Upgrade docker-compose files to v2
2016-06-28 06:44:27 -07:00
Jerome Petazzoni
1f2b931b01 Slack -> Gitter 2016-06-22 11:54:47 -07:00
Jerome Petazzoni
e351ede294 Fix TOC 2016-06-22 11:48:00 -07:00
Jerome Petazzoni
9ffbfacca8 Last words 2016-06-19 11:15:11 -07:00
Jerome Petazzoni
60524d2ff3 Fixes 2016-06-19 00:07:19 -07:00
Jerome Petazzoni
7001c05ec0 DockerCon update 2016-06-18 18:06:15 -07:00
Jonathan Lee
5d4414723d Upgrade docker-compose files to v2 2016-06-13 21:47:59 -04:00
Jérôme Petazzoni
d31f0980a2 Merge pull request #24 from crd/recommend_slide_changes
Recommended slide changes
2016-06-02 17:10:13 -07:00
Cory Donnelly
6649e97b1e Update warning to reflect Consul Leader Election bug has been fixed 2016-06-02 15:58:31 -04:00
Cory Donnelly
06b8cbc964 Fix typos 2016-06-02 15:55:02 -04:00
Cory Donnelly
6992c85d5e Update Git BASH url 2016-06-02 15:53:12 -04:00
Jérôme Petazzoni
313d46ac47 Merge pull request #23 from soulshake/master
Make prompt more readable on light or dark backgrounds
2016-05-29 07:28:53 -07:00
AJ Bowen
5a5db2ad7f Modify prompt colors 2016-05-28 21:07:33 -07:00
Jérôme Petazzoni
1ae29909c8 Merge pull request #22 from soulshake/master
Add script to extract section title
2016-05-28 20:59:00 -07:00
AJ Bowen
6747480869 Add a script to extract section titles 2016-05-28 20:52:21 -07:00
AJ Bowen
9ba359e67a Fix more references to settings.yaml 2016-05-28 19:55:46 -07:00
Jérôme Petazzoni
4c34f6be9b Merge pull request #21 from soulshake/master
Cleanup, mostly
2016-05-28 19:49:33 -07:00
AJ Bowen
a747058a72 Replace settings.yaml with <settings/somefile.yaml> in the documentation, as per @jpetazzo request; add entrypoint to Dockerfile; remove symlink and path manipulation from Dockerfile. 2016-05-28 19:46:38 -07:00
AJ Bowen
a2b77ff63b remove two more comments from docker-compose.yaml 2016-05-28 18:40:07 -07:00
AJ Bowen
5c600a05d0 Replace 'user' with 'root' in images. Squash layers in Dockerfile. Update README. Clean up docker-compose.yaml. 2016-05-28 18:37:29 -07:00
Jerome Petazzoni
340fcd4de2 Minor fixes for PYCON 2016-05-28 18:27:36 -07:00
Jerome Petazzoni
96d5e69c77 Add command to query local registry after pushing busybox (thanks @crd) 2016-05-25 16:25:08 -07:00
Jérôme Petazzoni
3b3825a83a Merge pull request #20 from RaulKite/master
upgrade local vagrant machines to ubuntu 14.04
2016-05-25 16:13:11 -07:00
Jérôme Petazzoni
74e815a706 Merge pull request #18 from soulshake/master
Fix typos pointed out by @crd
2016-05-25 16:12:02 -07:00
Raul Sanchez
2e4417f502 Merge branch 'master' of github.com:RaulKite/orchestration-workshop 2016-05-23 14:14:44 +02:00
Raul Sanchez
a4970dbfd5 upgrade local vagrant machines to ubuntu 14.04 2016-05-23 14:14:25 +02:00
Raul Sanchez
5d6a35e116 upgrade local vagrant machines to ubuntu 14.04 2016-05-23 14:11:03 +02:00
AJ Bowen
943c15a3c8 Fix typos pointed out by @crd 2016-05-17 23:21:01 +02:00
Jerome Petazzoni
65252904c9 Last updates before OSCON 2016-05-17 08:47:59 -07:00
Jerome Petazzoni
31563480b3 Update AMIs and settings files 2016-05-16 09:30:37 -07:00
Jerome Petazzoni
9bf13f70b9 Reword conclusion 2016-05-16 08:31:02 -07:00
Jerome Petazzoni
191982c72e Fix capitalization of Consul, etcd, Zookeeper 2016-05-15 20:24:49 -07:00
Jerome Petazzoni
054bb739ac Add diagrams courtesy of @soulshake; and new dockercoins logo by @ggtools & @ndeloof 2016-05-15 20:21:40 -07:00
Jérôme Petazzoni
338d9f5847 Merge pull request #16 from ggtools/master
New dockercoin logo
2016-05-15 22:03:31 -05:00
Jerome Petazzoni
6b6d2c77ad Big round of updates for OSCON 2016 2016-05-15 20:03:14 -07:00
Jérôme Petazzoni
3be821fefb Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-05-10 15:04:34 +00:00
Jérôme Petazzoni
cc03b0bab2 Add CRAFT talk extensions 2016-05-10 15:04:00 +00:00
Jerome Petazzoni
f2ccd65b34 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-05-09 12:53:39 -07:00
Jerome Petazzoni
cacc6cd6d9 Fixes after Budapest edition 2016-05-03 02:38:16 -07:00
Jérôme Petazzoni
aabbc17d97 Merge pull request #17 from morty/patch-1
Typo
2016-04-27 17:15:06 +02:00
Tom Mortimer-Jones
b87ece9acd Typo 2016-04-27 09:37:51 +01:00
Jerome Petazzoni
2a35e4954c Last touch-ups 2016-04-26 15:40:45 -07:00
Jerome Petazzoni
feefd4e013 Update outline + last round of minor fixes 2016-04-26 14:05:20 -07:00
Jerome Petazzoni
8e1827a506 Another round of updates for CRAFT, almost there 2016-04-26 12:53:50 -07:00
Jerome Petazzoni
76689cd431 Updates for CRAFT (bring everything to Compose v2) 2016-04-26 06:20:11 -07:00
Jerome Petazzoni
7448474b92 Update for Berlin workshop 2016-04-21 22:25:00 -07:00
Jérôme Petazzoni
52a2e6f3e6 Bump swarm image version to 1.2; add Consul Compose file 2016-04-21 10:01:52 +00:00
Christophe Labouisse
666b38ab57 New dockercoin logo 2016-04-21 09:36:16 +02:00
Jerome Petazzoni
2b213a9821 Add reference to MobaXterm 2016-04-20 08:08:04 -07:00
Jerome Petazzoni
3ec61d706e Update versions 2016-04-20 08:07:50 -07:00
Jérôme Petazzoni
4fc9d64737 Add environment logic in autotest 2016-04-17 20:33:31 +00:00
Jérôme Petazzoni
e427c1aa38 Update test harness 2016-04-13 22:23:42 +00:00
Jérôme Petazzoni
169d1085a1 Update slides for automated testing 2016-04-13 22:23:35 +00:00
Jérôme Petazzoni
506c6ea61b Minor change in placeholder for GELF section 2016-04-13 21:49:02 +00:00
Jérôme Petazzoni
654de369ca Add autotest skeleton 2016-04-11 20:11:52 +00:00
Jerome Petazzoni
0e37cb8a93 Tweak printer settings 2016-04-05 11:15:14 -07:00
Jerome Petazzoni
4a081d06ee Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-04-05 11:14:49 -07:00
Jérôme Petazzoni
c21e2ae73c Last fixes for Stockholm 2016-04-05 18:13:07 +00:00
Jérôme Petazzoni
2ca1babd4a Set YAML indentation to two spaces 2016-04-05 11:11:34 +00:00
Jerome Petazzoni
1127ce8fb2 Minor updates about discovery of nodes and backends 2016-04-04 05:47:58 -07:00
Jerome Petazzoni
5662dbef23 Add vimrc + DOCKER_HOST hint in prompt 2016-04-03 13:29:44 -07:00
Jerome Petazzoni
da10562d0e Replace .icon[...warning...] with .warning[]; update hamba description 2016-04-03 06:51:20 -07:00
Jerome Petazzoni
89ca0f9173 Refactor first part for Compose 1.7 2016-04-03 06:40:15 -07:00
Jerome Petazzoni
8fe2b8b392 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-04-03 06:39:43 -07:00
Jérôme Petazzoni
15f6b7bcd1 Merge pull request #15 from schrodervictor/adds-local-environment
Adds local environment
2016-04-03 12:46:24 +02:00
Victor Schröder
6ba755d869 Adds .gitignore 2016-04-03 00:45:12 +02:00
Victor Schröder
1f350e2fe7 Amends the README 2016-04-03 00:15:17 +02:00
Victor Schröder
58713c2bc9 Adds ssh configuration to allow ssh between nodes without passwords 2016-04-03 00:11:27 +02:00
Victor Schröder
1d43566233 Creates the README file with instructions 2016-04-02 23:49:20 +02:00
Victor Schröder
1c4877164d Creates ansible.cfg file and copies the private-key used to ssh in the VMs 2016-04-02 23:31:14 +02:00
Victor Schröder
d7f9f00fcf Removes some unnecessary files from the home folders 2016-04-02 23:29:56 +02:00
Victor Schröder
62270121a0 Adds task to install tmux in the VMs 2016-04-02 23:29:03 +02:00
Victor Schröder
7d40b5debb Adjusts the /etc/hosts files to make all instances visible by the hostname in the subnet 2016-04-02 23:28:12 +02:00
Victor Schröder
3f2ce588c5 Makes the docker daemons listen to port 55555 2016-04-02 23:26:58 +02:00
Victor Schröder
32607b38a2 Adds installation for docker-compose in the playbook 2016-04-02 23:25:33 +02:00
Victor Schröder
25cde1706e Adds initial provisioning playbook and inventory 2016-04-02 23:21:40 +02:00
Victor Schröder
73f8c9e9ae Adds generic Vagrantfile and vagrant.yml with five nodes 2016-04-02 23:20:58 +02:00
Jérôme Petazzoni
ecb1508410 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-04-02 01:07:28 +00:00
Jérôme Petazzoni
edffb26c29 Add DNS watcher 2016-04-02 01:07:01 +00:00
Jerome Petazzoni
4e48a1badb Remove pip from dependencies 2016-04-01 14:51:42 -07:00
Jérôme Petazzoni
71e309080a Minor fixes 2016-04-01 23:22:26 +02:00
Jérôme Petazzoni
452b8c5dd3 Install Compose using single binaries instead of pip 2016-04-01 23:21:49 +02:00
Jérôme Petazzoni
884d0507c2 Display errors on stderr 2016-04-01 23:20:59 +02:00
Jérôme Petazzoni
599e344340 Upgrade remark from 0.5.9 to 0.13; switch slide layout to 16:9 2016-03-31 02:05:40 +02:00
Jérôme Petazzoni
7ea512c034 Consistent capitalization of Swarm and web UI 2016-03-30 16:16:41 +02:00
Jérôme Petazzoni
d59330d0ed Remove extraneous output in chaosmonkey 2016-03-29 18:59:54 +00:00
Jérôme Petazzoni
0440d1ea8d Switch hasher to ruby:alpine 2016-03-29 01:03:05 +00:00
Jérôme Petazzoni
a34f262a95 Switch to alpine/slim images when possible 2016-03-28 12:59:06 +00:00
Jérôme Petazzoni
d1f95ddb39 Add function to do custom build of Swarm 2016-03-28 12:58:31 +00:00
Jérôme Petazzoni
4b6b43530d Add chaosmonkey script 2016-03-28 12:58:11 +00:00
Jérôme Petazzoni
d8680366db Add command to retag instances 2016-03-28 13:47:39 +02:00
Jérôme Petazzoni
8240734b6e Fix Docker Machine install from OSX; add RC settings 2016-03-28 13:47:26 +02:00
Jérôme Petazzoni
306805c02e Refactor setting selection mechanism 2016-03-28 00:18:28 +02:00
Jérôme Petazzoni
cdd237f38e Fix instance shutdown on OSX 2016-03-26 11:14:34 +01:00
Jérôme Petazzoni
d5f98110d6 Install mosh 2016-03-26 10:58:26 +01:00
Jérôme Petazzoni
a42bc7fe28 Minor refactoring 2016-03-25 21:12:50 +01:00
AJ Bowen
9e5c7c8216 Complete rewrite of deployment process
The former VM deployment process relied on extra scripts
located in a (private) repo. The new process is standalone.
2016-03-25 13:59:53 +01:00
Jérôme Petazzoni
1be92ea3fa Add details about security upgrades and mention Nautilus 2016-03-24 14:23:19 +01:00
José Armesto
4dad732c15 Removed unnecesary prints 2016-03-19 19:45:03 +01:00
José Armesto
bb7cadf701 Version can be set as env variable to be used, instead of generating unix timestamp 2016-03-15 16:28:46 +01:00
116 changed files with 9456 additions and 6187 deletions

8
.gitignore vendored

@@ -1,6 +1,8 @@
*.pyc
*.swp
*~
ips.txt
ips.html
ips.pdf
prepare-vms/ips.txt
prepare-vms/ips.html
prepare-vms/ips.pdf
prepare-vms/settings.yaml
prepare-vms/tags

329
README.md

@@ -1,8 +1,253 @@
# Orchestration at scale(s)
# Docker Orchestration Workshop
This is the material for the "Docker orchestration workshop"
written and delivered by Jérôme Petazzoni (and possibly others)
at multiple conferences and events like:
This is the material (slides, scripts, demo app, and other
code samples) for the "Docker orchestration workshop"
written and delivered by Jérôme Petazzoni (and lots of others)
non-stop since June 2015.
## Content
- Chapter 1: Getting Started: running apps with docker-compose
- Chapter 2: Scaling out with Swarm Mode
- Chapter 3: Operating the Swarm (networks, updates, logging, metrics)
- Chapter 4: Deeper in Swarm (stateful services, scripting, DAB's)
## Quick start (or, "I want to try it!")
This workshop is designed to be *hands on*, i.e. to give you a step-by-step
guide where you will build your own Docker cluster, and use it to deploy
a sample application.
The easiest way to follow the workshop is to attend it when it is delivered
by an instructor. In that case, the instructor will generally give you
credentials (IP addresses, login, password) to connect to your own cluster
of virtual machines; and the [slides](http://jpetazzo.github.io/orchestration-workshop)
assume that you have your own cluster.
If you want to follow the workshop on your own, and want to have your
own cluster, we have multiple solutions for you!
### Using [play-with-docker](http://play-with-docker.com/)
This method is very easy to get started with (you don't need any extra accounts
or resources!) but will require a bit of adaptation from the workshop slides.
To get started, go to [play-with-docker](http://play-with-docker.com/), and
click on _ADD NEW INSTANCE_ five times. You will get five "docker-in-docker"
containers, all on a private network. These are your five nodes for the workshop!
When the instructions in the slides tell you to "SSH on node X", just go to
the tab corresponding to that node.
The nodes are not directly reachable from outside; so when the slides tell
you to "connect to the IP address of your node on port XYZ" you will have
to use a different method.
We suggest using "supergrok", a container offering an NGINX+ngrok combo to
expose your services. To use it, just start (on any of your nodes) the
`jpetazzo/supergrok` image. The image will output further instructions:
```
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
```
The container logs will give you a tunnel address and explain how to
connect to exposed services. That's all you need to do!
We are also working on a native proxy, embedded in Play-With-Docker.
Stay tuned!
<!--
- You can use a proxy provided by Play-With-Docker. When the slides
instruct you to connect to nodeX on port ABC, instead, you will connect
to http://play-with-docker.com/XXX.XXX.XXX.XXX:ABC, where XXX.XXX.XXX.XXX
is the IP address of nodeX.
-->
Note that the instances provided by Play-With-Docker have a short lifespan
(a few hours only), so if you want to do the workshop over multiple sessions,
you will have to start over each time, or create your own cluster with
one of the methods described below.
### Using Docker Machine to create your own cluster
This method requires a bit more work to get started, but you get a permanent
cluster, with fewer limitations.
You will need Docker Machine (if you have Docker for Mac, Docker for Windows, or
the Docker Toolbox, you're all set already). You will also need:
- credentials for a cloud provider (e.g. API keys or tokens),
- or a local install of VirtualBox or VMware (or anything supported
by Docker Machine).
Full instructions are in the [prepare-machine](prepare-machine) subdirectory.
### Using our scripts to mass-create a bunch of clusters
Since we often deliver the workshop during conferences or similar events,
we have scripts to automate the creation of a bunch of clusters using
AWS EC2. If you want to create multiple clusters and have EC2 credits,
check the [prepare-vms](prepare-vms) directory for more information.
## How This Repo is Organized
- **dockercoins**
- Sample App: compose files and source code for the dockercoins sample apps
used throughout the workshop
- **docs**
- Slide Deck: presentation slide deck, works out-of-box with GitHub Pages,
uses https://remarkjs.com
- **prepare-local**
- untested scripts for automating the creation of local VirtualBox VMs
(could use your help validating)
- **prepare-machine**
- instructions explaining how to use Docker Machine to create VMs
- **prepare-vms**
- scripts for automating the creation of AWS instances for students
## Slide Deck
- The slides are in the `docs` directory.
- To view them locally open `docs/index.html` in your browser. It works
offline too.
- To view them online open https://jpetazzo.github.io/orchestration-workshop/
in your browser.
- When you fork this repo, be sure GitHub Pages is enabled in repo Settings
for "master branch /docs folder" and you'll have your own website for them.
- They use https://remarkjs.com to allow simple markdown in an HTML file that
remark will transform into a presentation in the browser.
## Sample App: Dockercoins!
The sample app is in the `dockercoins` directory. It's used during all chapters
for explaining different concepts of orchestration.
To see it in action:
- `cd dockercoins && docker-compose up -d`
- this will build and start all the services
- the web UI will be available on port 8000
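
For orientation, the Compose file driving those commands has roughly the
following shape. This is an illustrative sketch only, based on the service
names used throughout the workshop (rng, hasher, worker, redis, webui); the
authoritative version is `dockercoins/docker-compose.yml`, which may differ
in detail:

```yaml
# Illustrative sketch only -- see dockercoins/docker-compose.yml for the real file.
version: "2"
services:
  rng:                 # random-number generator service
    build: rng
  hasher:              # hashing service
    build: hasher
  worker:              # fetches from rng, posts to hasher, stores in redis
    build: worker
  redis:               # data store for the mined "coins"
    image: redis
  webui:               # graph of mining speed, published on port 8000
    build: webui
    ports:
      - "8000:8000"
```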
*If you just want to run the workshop for yourself, you can stop reading
here. If you want to deliver the workshop for others (i.e. if you
want to become an instructor), keep reading!*
## Running the Workshop
### General timeline of planning a workshop
- Fork the repo and run through the slides, doing the hands-on work to be sure
you understand the different `dockercoins` repos and the steps we go through
to get to a full Swarm Mode cluster of many containers. You'll update the
first few slides and the last slide, at a minimum, with your info.
- Your docs directory can use GitHub Pages.
- This workshop expects 5 servers per student. You can get away with as few
as 2 servers per student, but you'll need to change the slide deck to
accommodate. More servers = more fun.
- If you have more than ~20 students, try to get an assistant (TA) to help
people with issues, so you don't have to stop the workshop to help someone
with SSH etc.
- AWS is our most tested process for generating student machines. In
`prepare-vms` you'll find scripts to create EC2 instances, install Docker,
pre-pull images, and even print "cards" to place at each student's seat with
IPs and username/password.
- Test AWS Scripts: Be sure to test creating *all* your needed servers a week
before workshop (just for a few minutes). You'll likely hit AWS limits in the
region closest to your class, and it sometimes takes days to get AWS to raise
those limits with a support ticket.
- Create a https://gitter.im chat room for your workshop and update the slides
with its URL. It's also useful for the TA to monitor during the workshop. You can
use it before/after to answer questions, and it generally works better than
"email me that question".
- If you can send an email to students ahead of time, mention how they should
get SSH, and test that SSH works. If they can run `ssh github.com` and get
`Permission denied (publickey)`, then they know SSH is properly installed
and nothing is blocking it. SSH and a browser are all they need for class.
- Typically you create the servers the day before or the morning of the workshop,
and leave them up for the rest of the day afterwards. If creating hundreds of
servers, you'll likely want to run all these `trainer` commands from a dedicated
instance in the same region as the instances you want to create; this is much
faster if you're on a poor internet connection. Also, create 2 sets of servers
for yourself: use one during the workshop, and keep the second as a backup.
### Things That Could Go Wrong
- You create AWS instances ahead of time, hit the limits in your region, and
didn't plan enough time to wait on support to increase them. :(
- Students have technical issues during workshop. Can't get ssh working,
locked-down computer, host firewall, etc.
- Horrible wifi, or SSH port TCP/22 not open on the network! If the wifi is bad,
you can try using MOSH (https://mosh.org), which handles SSH over UDP. TMUX can
also prevent you from losing your place if you get disconnected from servers:
https://tmux.github.io
- Forget to print "cards" and cut them up for handing out IPs.
- Forget to have fun and focus on your students!
### Creating the VMs
`prepare-vms/trainer` is the script that gets you most of what you need for
setting up instances. See
[prepare-vms/README.md](prepare-vms)
for all the info on tools and scripts.
### Content for Different Workshop Durations
With all the slides, this workshop is a full day long. If you need to deliver
it in a shorter timeline, here are some recommendations on what to cut out. You
can replace `---` with `???`, which will hide slides. Or leave them there and
add something like `(EXTRA CREDIT)` to the title, so students can still view the
content but you also know to skip them during the presentation.
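
The `---`/`???` trick works because remark treats `---` as a slide separator
and `???` as the start of presenter notes: everything after a `???` and up to
the next separator disappears from the audience view. A minimal sketch of
hiding a slide this way (the file path here is just an example):

```shell
# Build a tiny two-slide remark source.
cat > /tmp/slides.md <<'EOF'
# Slide one
---
# Extra-credit slide
EOF

# Hide the second slide by turning its separator into a presenter-notes marker,
# so its content becomes notes attached to the first slide:
sed -i 's/^---$/???/' /tmp/slides.md

grep -c '^???$' /tmp/slides.md   # prints 1
```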
#### 3 Hour Version
- Limit time on debug tools, maybe skip a few. *"Chapter 1:
Identifying bottlenecks"*
- Limit time on Compose, try to have them building the Swarm Mode by 30
minutes in
- Skip most of Chapter 3, Centralized Logging and ELK
- Skip most of Chapter 4, but keep stateful services and DAB's if possible
- Mention what DAB's are, but make this part optional in case you run out
of time
#### 2 Hour Version
- Skip all the above, and:
- Skip the story arc of debugging dockercoins altogether, skipping the
troubleshooting tools. Just focus on getting them from single-host to
multi-host and multi-container.
- The goal is: first 30 minutes on the intro, Docker Compose, and what
dockercoins is, getting it up on one node with docker-compose.
- The next 60-75 minutes: getting dockercoins running as Swarm Mode services
across servers. Big win.
- The last 15-30 minutes: stateful services, DAB files, and questions.
## Past events
Since its inception, this workshop has been delivered dozens of times,
to thousands of people, and has continuously evolved. This is a short
history of the first times it was delivered. Look also in the "tags"
of this repository: they all correspond to successive iterations of
this workshop. If you attended a past version of the workshop, you
can use these tags to see what has changed since then.
- QCON, New York City (2015, June)
- KCDC, Kansas City (2015, June)
@@ -13,80 +258,7 @@ at multiple conferences and events like:
- SCALE, Pasadena (2016, January)
- Zenika, Paris (2016, February)
- Container Solutions, Amsterdam (2016, February)
## Slides
The slides are in the `www/htdocs` directory.
The recommended way to view them is to:
- have a Docker host
- clone this repository to your Docker host
- `cd www && docker-compose up -d`
- this will start a web server on port 80
- point your browser at your Docker host and enjoy
## Sample code
The sample app is in the `dockercoins` directory.
To see it in action:
- `cd dockercoins && docker-compose up -d`
- this will build and start all the services
- the web UI will be available on port 8000
## Running the workshop
WARNING: these instructions are incomplete. Consider
them notes quickly drafted on a napkin rather than
proper documentation!
### Creating the VMs
I use the `trainctl` script from the `docker-fundamentals`
repository. Sorry if you don't have that!
After starting the VMs, use the `trainctl ips` command
to dump the list of IP addresses into a file named `ips.txt`.
### Generating the printed cards
- Put `ips.txt` file in `prepare-vms` directory.
- Generate HTML file.
- Open it in Chrome.
- Transform to PDF.
- Print it.
### Deploying your SSH key to all the machines
- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
### Installing extra packages
- Source `postprep.rc`.
(This will install a few extra packages, add entries to
/etc/hosts, generate SSH keys, and deploy them on all hosts.)
### Final touches
- Set aside two groups of machines for the instructor's use.
- You will use the first group during the workshop.
- The second group will run a web server with the slides.
- Log into the first machine of the second group.
- Git clone this repo.
- Put up the web server as instructed above.
- Use cli53 to add an A record for e.g. `view.dckr.info`.
- ... and much more!
# Problems? Bugs? Questions?
@@ -108,3 +280,4 @@ conference or for your company: contact me (jerome
at docker dot com).
Thank you!

191
autotest/autotest.py Executable file
View File

@@ -0,0 +1,191 @@
#!/usr/bin/env python
import os
import re
import signal
import subprocess
import time
def print_snippet(snippet):
print(78*'-')
print(snippet)
print(78*'-')
class Snippet(object):
def __init__(self, slide, content):
self.slide = slide
self.content = content
self.actions = []
def __str__(self):
return self.content
class Slide(object):
current_slide = 0
def __init__(self, content):
Slide.current_slide += 1
self.number = Slide.current_slide
# Remove commented-out slides
# (remark.js considers ??? to be the separator for speaker notes)
content = re.split("\n\?\?\?\n", content)[0]
self.content = content
self.snippets = []
exercises = re.findall("\.exercise\[(.*)\]", content, re.DOTALL)
for exercise in exercises:
if "```" in exercise and "<br/>`" in exercise:
print("! Exercise on slide {} has both ``` and <br/>` delimiters, skipping."
.format(self.number))
print_snippet(exercise)
elif "```" in exercise:
for snippet in exercise.split("```")[1::2]:
self.snippets.append(Snippet(self, snippet))
elif "<br/>`" in exercise:
for snippet in re.findall("<br/>`(.*)`", exercise):
self.snippets.append(Snippet(self, snippet))
else:
print(" Exercise on slide {} has neither ``` nor <br/>` delimiters, skipping."
.format(self.number))
def __str__(self):
text = self.content
for snippet in self.snippets:
text = text.replace(snippet.content, ansi("7")(snippet.content))
return text
def ansi(code):
return lambda s: "\x1b[{}m{}\x1b[0m".format(code, s)
slides = []
with open("index.html") as f:
content = f.read()
for slide in re.split("\n---?\n", content):
slides.append(Slide(slide))
is_editing_file = False
placeholders = {}
for slide in slides:
for snippet in slide.snippets:
content = snippet.content
# Multi-line snippets should be ```highlightsyntax...
# Single-line snippets will be interpreted as shell commands
if '\n' in content:
highlight, content = content.split('\n', 1)
else:
highlight = "bash"
content = content.strip()
# If the previous snippet was a file fragment, and the current
# snippet is not YAML or EDIT, complain.
if is_editing_file and highlight not in ["yaml", "edit"]:
print("! On slide {}, previous snippet was YAML, so what should we do?"
.format(slide.number))
print_snippet(content)
is_editing_file = False
if highlight == "yaml":
is_editing_file = True
elif highlight == "placeholder":
for line in content.split('\n'):
variable, value = line.split(' ', 1)
placeholders[variable] = value
elif highlight == "bash":
for variable, value in placeholders.items():
quoted = "`{}`".format(variable)
if quoted in content:
content = content.replace(quoted, value)
del placeholders[variable]
if '`' in content:
print("! The following snippet on slide {} contains a backtick:"
.format(slide.number))
print_snippet(content)
continue
print("_ "+content)
snippet.actions.append((highlight, content))
elif highlight == "edit":
print(". "+content)
snippet.actions.append((highlight, content))
elif highlight == "meta":
print("^ "+content)
snippet.actions.append((highlight, content))
else:
print("! Unknown highlight {!r} on slide {}.".format(highlight, slide.number))
if placeholders:
print("! Remaining placeholder values: {}".format(placeholders))
actions = sum([snippet.actions for snippet in sum([slide.snippets for slide in slides], [])], [])
# Strip ^{ ... ^} for now
def strip_curly_braces(actions, in_braces=False):
if actions == []:
return []
elif actions[0] == ("meta", "^{"):
return strip_curly_braces(actions[1:], True)
elif actions[0] == ("meta", "^}"):
return strip_curly_braces(actions[1:], False)
elif in_braces:
return strip_curly_braces(actions[1:], True)
else:
return [actions[0]] + strip_curly_braces(actions[1:], False)
actions = strip_curly_braces(actions)
background = []
cwd = os.path.expanduser("~")
env = {}
for current_action, next_action in zip(actions, actions[1:]+[("bash", "true")]):
if current_action[0] == "meta":
continue
print(ansi(7)(">>> {}".format(current_action[1])))
time.sleep(1)
popen_options = dict(shell=True, cwd=cwd, stdin=subprocess.PIPE, preexec_fn=os.setpgrp)
# The following hack captures the environment variables set by `docker-machine env`
# FIXME: this doesn't handle `unset` for now
if any([
"eval $(docker-machine env" in current_action[1],
"DOCKER_HOST" in current_action[1],
"COMPOSE_FILE" in current_action[1],
]):
popen_options["stdout"] = subprocess.PIPE
# Actions are tuples, so build a new one instead of mutating in place.
current_action = (current_action[0], current_action[1] + "\nenv")
proc = subprocess.Popen(current_action[1], **popen_options)
proc.cmd = current_action[1]
if next_action[0] == "meta":
print(">>> {}".format(next_action[1]))
time.sleep(3)
if next_action[1] == "^C":
os.killpg(proc.pid, signal.SIGINT)
proc.wait()
elif next_action[1] == "^Z":
# Let the process run
background.append(proc)
elif next_action[1] == "^D":
proc.communicate()
proc.wait()
else:
print("! Unknown meta action {} after snippet:".format(next_action[1]))
print_snippet(next_action[1])
print(ansi(7)("<<< {}".format(current_action[1])))
else:
proc.wait()
if "stdout" in popen_options:
stdout, stderr = proc.communicate()
for line in stdout.split('\n'):
if line.startswith("DOCKER_"):
variable, value = line.split('=', 1)
env[variable] = value
print("=== {}={}".format(variable, value))
print(ansi(7)("<<< {} >>> {}".format(proc.returncode, current_action[1])))
if proc.returncode != 0:
print("Got non-zero status code; aborting.")
break
if current_action[1].startswith("cd "):
cwd = os.path.expanduser(current_action[1][3:])
for proc in background:
print("Terminating background process:")
print_snippet(proc.cmd)
proc.terminate()
proc.wait()
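For reference, the snippet-extraction logic above can be exercised on a toy slide. This is a minimal sketch reusing the same regex and splitting strategy as `Slide.__init__`; the sample slide text is made up:

```python
import re

# A toy slide with one .exercise[] block, built without literal
# triple backticks so this sketch stays self-contained.
fence = "`" * 3
slide = "# Demo\n.exercise[\n{f}bash\necho hello\n{f}\n]\n".format(f=fence)

# Same regex and splitting logic as Slide.__init__ above.
exercises = re.findall(r"\.exercise\[(.*)\]", slide, re.DOTALL)
snippets = exercises[0].split(fence)[1::2]

# Multi-line snippets carry the highlight syntax on their first line.
highlight, content = snippets[0].split("\n", 1)
print(highlight, content.strip())
```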

1
autotest/index.html Symbolic link
View File

@@ -0,0 +1 @@
../www/htdocs/index.html

View File

@@ -33,10 +33,15 @@ if service_name not in config["services"]:
lb_name = "{}-lb".format(service_name)
be_name = "{}-be".format(service_name)
wd_name = "{}-wd".format(service_name)
if lb_name in config["services"]:
error("load balancer {} already exists in $COMPOSE_FILE"
.format(service_name))
.format(lb_name))
if wd_name in config["services"]:
error("dns watcher {} already exists in $COMPOSE_FILE"
.format(wd_name))
service = config["services"][service_name]
if "networks" in service:
@@ -63,6 +68,16 @@ config["services"][lb_name] = {
},
}
# Add the DNS watcher.
config["services"][wd_name] = {
"image": "jpetazzo/watchdns",
"command": "{} {} {}".format(port, be_name, port),
"volumes_from": [ lb_name ],
"networks": {
service_name: None,
},
}
if "networks" not in config:
config["networks"] = {}
if service_name not in config["networks"]:

27
bin/add-logging.py Executable file
View File

@@ -0,0 +1,27 @@
#!/usr/bin/env python
import os
import sys
import yaml
def error(msg):
print("ERROR: {}".format(msg))
exit(1)
compose_file = os.environ["COMPOSE_FILE"]
input_file, output_file = compose_file, compose_file
config = yaml.load(open(input_file))
version = config.get("version")
if version != "2":
error("Unsupported $COMPOSE_FILE version: {!r}".format(version))
for service in config["services"]:
config["services"][service]["logging"] = dict(
driver="gelf",
options={"gelf-address": "udp://localhost:12201"},
)
yaml.safe_dump(config, open(output_file, "w"), default_flow_style=False)
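The transformation this script applies can be sketched on a plain dict, without PyYAML or a real `$COMPOSE_FILE` (the service names here are illustrative):

```python
# Minimal sketch of what add-logging.py does to a Compose v2 config:
# every service gets a "logging" section pointing at a local GELF endpoint.
config = {
    "version": "2",
    "services": {
        "rng": {"build": "rng"},
        "redis": {"image": "redis"},
    },
}

for service in config["services"]:
    config["services"][service]["logging"] = dict(
        driver="gelf",
        options={"gelf-address": "udp://localhost:12201"},
    )

print(sorted(config["services"]))
```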

View File

@@ -16,9 +16,11 @@ if not registry:
# Get the name of the current directory.
project_name = os.path.basename(os.path.realpath("."))
# Generate a Docker image tag, using the UNIX timestamp.
# (i.e. number of seconds since January 1st, 1970)
version = str(int(time.time()))
# Version used to tag the generated Docker image, using the UNIX timestamp or the given version.
if "VERSION" not in os.environ:
version = str(int(time.time()))
else:
version = os.environ["VERSION"]
# Execute "docker-compose build" and abort if it fails.
subprocess.check_call(["docker-compose", "-f", "docker-compose.yml", "build"])
@@ -33,7 +35,7 @@ push_operations = dict()
for service_name, service in compose_file.services.items():
if "build" in service:
compose_image = "{}_{}".format(project_name, service_name)
registry_image = "{}/{}_{}:{}".format(registry, project_name, service_name, version)
registry_image = "{}/{}:{}".format(registry, compose_image, version)
# Re-tag the image so that it can be uploaded to the registry.
subprocess.check_call(["docker", "tag", compose_image, registry_image])
# Spawn "docker push" to upload the image.
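The tag-selection logic in this diff amounts to something like the following sketch (the `image_version` helper is hypothetical, introduced here only for illustration):

```python
import time

def image_version(environ):
    """Tag selection used by build-tag-push.py: honor $VERSION if set,
    otherwise fall back to the current UNIX timestamp."""
    if "VERSION" in environ:
        return environ["VERSION"]
    return str(int(time.time()))

print(image_version({"VERSION": "v1.2"}))  # v1.2
```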

47
bin/chaosmonkey.sh Executable file
View File

@@ -0,0 +1,47 @@
#!/bin/sh
[ -z "$2" ] && {
echo "Syntax: $0 <host> <command>"
echo "
Command should be:
connect Cancels the effects of 'disconnect'
disconnect Disable all network communication except SSH
reboot Sync disks and immediately reboot (without proper shutdown)
"
exit 1
}
ssh docker@$1 sudo sh <<EOF
_cm_init () {
iptables -L CHAOSMONKEY >/dev/null 2>/dev/null || {
iptables -N CHAOSMONKEY
iptables -I FORWARD -j CHAOSMONKEY
iptables -I INPUT -j CHAOSMONKEY
iptables -I OUTPUT -j CHAOSMONKEY
}
}
_cm_reboot () {
echo "Rebooting..."
echo s > /proc/sysrq-trigger
echo u > /proc/sysrq-trigger
echo b > /proc/sysrq-trigger
}
_cm_disconnect () {
_cm_init
echo "Dropping all network traffic, except SSH..."
iptables -F CHAOSMONKEY
iptables -A CHAOSMONKEY -p tcp --sport 22 -j ACCEPT
iptables -A CHAOSMONKEY -p tcp --dport 22 -j ACCEPT
iptables -A CHAOSMONKEY -j DROP
}
_cm_connect () {
_cm_init
echo "Re-enabling network communication..."
iptables -F CHAOSMONKEY
}
_cm_$2
EOF

View File

@@ -3,14 +3,40 @@ unset DOCKER_REGISTRY
unset DOCKER_HOST
unset COMPOSE_FILE
SWARM_IMAGE=jpetazzo/swarm:1.1.3-rc2-debug-experimental
SWARM_IMAGE=${SWARM_IMAGE:-swarm}
check_ssh_keys () {
prepare_1_check_ssh_keys () {
for N in $(seq 1 5); do
ssh node$N true
done
}
prepare_2_compile_swarm () {
cd ~
git clone git://github.com/docker/swarm
cd swarm
[[ -z "$1" ]] && {
echo "Specify which revision to build."
return
}
git checkout "$1" || return
mkdir -p image
docker build -t docker/swarm:$1 .
docker run -i --entrypoint sh docker/swarm:$1 \
-c 'cat $(which swarm)' > image/swarm
chmod +x image/swarm
cat >image/Dockerfile <<EOF
FROM scratch
COPY ./swarm /swarm
ENTRYPOINT ["/swarm", "-debug", "-experimental"]
EOF
docker build -t jpetazzo/swarm:$1 image
docker login
docker push jpetazzo/swarm:$1
docker logout
SWARM_IMAGE=jpetazzo/swarm:$1
}
clean_1_containers () {
for N in $(seq 1 5); do
ssh node$N "docker ps -aq | xargs -r -n1 -P10 docker rm -f"
@@ -61,18 +87,14 @@ setup_1_swarm () {
}
setup_2_consul () {
ssh node1 docker run --name consul_node1 \
-d --restart=always --net host \
jpetazzo/consul agent -server -bootstrap
IPADDR=$(ssh node1 ip a ls dev eth0 |
sed -n 's,.*inet \(.*\)/.*,\1,p')
# Start other Consul nodes
for N in 2 3 4 5; do
ssh node$N docker run --name consul_node$N \
-d --restart=always --net host \
jpetazzo/consul agent -server -join $IPADDR
for N in 1 2 3 4 5; do
ssh node$N -- docker run -d --restart=always --name consul_node$N \
-e CONSUL_BIND_INTERFACE=eth0 --net host consul \
agent -server -retry-join $IPADDR -bootstrap-expect 5 \
-ui -client 0.0.0.0
done
}
@@ -109,6 +131,36 @@ setup_6_add_lbs () {
~/orchestration-workshop/bin/add-load-balancer-v2.py hasher
}
setup_7_consulfs () {
dm_swarm
docker pull jpetazzo/consulfs
for N in $(seq 1 5); do
ssh node$N "docker run --rm -v /usr/local/bin:/target jpetazzo/consulfs"
ssh node$N mkdir -p ~/consul
ssh -f node$N "mountpoint ~/consul || consulfs localhost:8500 ~/consul"
done
}
setup_8_syncmachine () {
while ! mountpoint ~/consul; do
sleep 1
done
cp -r ~/.docker/machine ~/consul/
for N in $(seq 2 5); do
ssh node$N mkdir -p ~/.docker
ssh node$N "[ -L ~/.docker/machine ] || ln -s ~/consul/machine ~/.docker"
done
}
setup_9_elk () {
dm_swarm
cd ~/orchestration-workshop/elk
docker-compose up -d
for N in $(seq 1 5); do
docker-compose scale logstash=$N
done
}
setup_all () {
setup_1_swarm
setup_2_consul
@@ -116,9 +168,12 @@ setup_all () {
setup_4_registry
setup_5_btp_dockercoins
setup_6_add_lbs
setup_7_consulfs
setup_8_syncmachine
dm_swarm
}
force_remove_network () {
dm_swarm
NET="$1"
@@ -139,3 +194,8 @@ grep -qs -- MAGICMARKER "$0" && { # Don't display this line in the function lis
echo "You should source this file, then invoke the following functions:"
grep -- '^[a-z].*{$' "$0" | cut -d" " -f1
}
show_swarm_primary () {
dm_swarm
docker info 2>/dev/null | grep -e ^Role -e ^Primary
}

View File

@@ -1,10 +1,12 @@
cadvisor:
image: google/cadvisor
ports:
- "8080:8080"
volumes:
- "/:/rootfs:ro"
- "/var/run:/var/run:rw"
- "/sys:/sys:ro"
- "/var/lib/docker/:/var/lib/docker:ro"
version: "2"
services:
cadvisor:
image: google/cadvisor
ports:
- "8080:8080"
volumes:
- "/:/rootfs:ro"
- "/var/run:/var/run:rw"
- "/sys:/sys:ro"
- "/var/lib/docker/:/var/lib/docker:ro"

12
consul/docker-compose.yml Normal file
View File

@@ -0,0 +1,12 @@
version: "2"
services:
bootstrap:
image: jpetazzo/consul
command: agent -server -bootstrap
container_name: bootstrap
server:
image: jpetazzo/consul
command: agent -server -join bootstrap -join server
client:
image: jpetazzo/consul
command: members -rpc-addr server:8400

View File

@@ -1,29 +1,26 @@
rng:
version: "2"
services:
rng:
build: rng
ports:
- "8001:80"
- "8001:80"
hasher:
hasher:
build: hasher
ports:
- "8002:80"
- "8002:80"
webui:
webui:
build: webui
links:
- redis
ports:
- "8000:80"
- "8000:80"
volumes:
- "./webui/files/:/files/"
- "./webui/files/:/files/"
redis:
redis:
image: redis
worker:
worker:
build: worker
links:
- rng
- hasher
- redis

View File

@@ -1,43 +0,0 @@
rng1:
build: rng
rng2:
build: rng
rng3:
build: rng
rng:
image: jpetazzo/hamba
links:
- rng1
- rng2
- rng3
command: 80 rng1 80 rng2 80 rng3 80
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
webui:
build: webui
links:
- redis
ports:
- "8000:80"
volumes:
- "./webui/files/:/files/"
redis:
image: jpetazzo/hamba
command: 6379 AA.BB.CC.DD EEEEE
worker:
build: worker
links:
- rng
- hasher
- redis

View File

@@ -1,43 +0,0 @@
rng1:
build: rng
rng2:
build: rng
rng3:
build: rng
rng0:
image: jpetazzo/hamba
links:
- rng1
- rng2
- rng3
command: 80 rng1 80 rng2 80 rng3 80
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
webui:
build: webui
extra_hosts:
redis: A.B.C.D
ports:
- "8000:80"
volumes:
- "./webui/files/:/files/"
#redis:
# image: redis
worker:
build: worker
links:
- rng0:rng
- hasher:hasher
extra_hosts:
redis: A.B.C.D

View File

@@ -0,0 +1,30 @@
version: "2"
services:
rng:
build: rng
image: ${REGISTRY_SLASH}rng${COLON_TAG}
ports:
- "8001:80"
hasher:
build: hasher
image: ${REGISTRY_SLASH}hasher${COLON_TAG}
ports:
- "8002:80"
webui:
build: webui
image: ${REGISTRY_SLASH}webui${COLON_TAG}
ports:
- "8000:80"
volumes:
- "./webui/files/:/files/"
redis:
image: redis
worker:
build: worker
image: ${REGISTRY_SLASH}worker${COLON_TAG}

View File

@@ -1,44 +1,47 @@
rng:
version: "2"
services:
rng:
build: rng
ports:
- "8001:80"
log_driver: gelf
log_opt:
gelf-address: "udp://XX.XX.XX.XX:XXXXX"
- "8001:80"
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
hasher:
hasher:
build: hasher
ports:
- "8002:80"
log_driver: gelf
log_opt:
gelf-address: "udp://XX.XX.XX.XX:XXXXX"
- "8002:80"
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
webui:
webui:
build: webui
links:
- redis
ports:
- "8000:80"
- "8000:80"
volumes:
- "./webui/files/:/files/"
log_driver: gelf
log_opt:
gelf-address: "udp://XX.XX.XX.XX:XXXXX"
- "./webui/files/:/files/"
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
redis:
redis:
image: redis
log_driver: gelf
log_opt:
gelf-address: "udp://XX.XX.XX.XX:XXXXX"
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
worker:
worker:
build: worker
links:
- rng
- hasher
- redis
log_driver: gelf
log_opt:
gelf-address: "udp://XX.XX.XX.XX:XXXXX"
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201

View File

@@ -1,43 +0,0 @@
rng1:
build: rng
rng2:
build: rng
rng3:
build: rng
rng:
image: jpetazzo/hamba
links:
- rng1
- rng2
- rng3
command: 80 rng1 80 rng2 80 rng3 80
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
webui:
build: webui
links:
- redis
ports:
- "8000:80"
volumes:
- "./webui/files/:/files/"
redis:
image: redis
worker:
build: worker
links:
- rng
- hasher
- redis

View File

@@ -1,23 +0,0 @@
version: '2'
services:
rng:
build: rng
ports:
- 80
hasher:
build: hasher
ports:
- 80
webui:
build: webui
ports:
- 80
redis:
image: redis
worker:
build: worker

View File

@@ -1,4 +1,5 @@
FROM ruby
FROM ruby:alpine
RUN apk add --update build-base
RUN gem install sinatra
RUN gem install thin
ADD hasher.rb /

View File

@@ -1,5 +0,0 @@
hasher: 80
redis: 6379
rng: 80
webui: 80

View File

@@ -1,4 +1,4 @@
FROM python
FROM python:alpine
RUN pip install Flask
COPY rng.py /
CMD ["python", "rng.py"]

View File

@@ -1,4 +1,4 @@
FROM node:4
FROM node:4-slim
RUN npm install express
RUN npm install redis
COPY files/ /files/

View File

@@ -50,7 +50,7 @@ function refresh () {
points.push({ x: s2.now, y: speed });
}
$("#speed").text("~" + speed.toFixed(1) + " hashes/second");
var msg = ("I'm attending the @docker workshop at @Stylight, "
var msg = ("I'm attending the @docker workshop at #LinuxCon, "
+ "and my #DockerCoins mining rig is crunching "
+ speed.toFixed(1) + " hashes/second! W00T!");
$("#tweet").attr(

View File

@@ -1,4 +1,4 @@
FROM python
FROM python:alpine
RUN pip install redis
RUN pip install requests
COPY worker.py /

BIN
docs/bell-curve.jpg Normal file

BIN
docs/dockercoins.png Normal file
19
docs/extract-section-titles.py Executable file
View File

@@ -0,0 +1,19 @@
#!/usr/bin/env python
"""
Extract and print level 1 and 2 titles from workshop slides.
"""
separators = [
"---",
"--"
]
slide_count = 1
for line in open("index.html"):
line = line.strip()
if line in separators:
slide_count += 1
if line.startswith('## '):
print slide_count, line
elif line.startswith('# '):
print slide_count, '# #', line

BIN
docs/grafana-add-graph.png Normal file

BIN
docs/grafana-add-source.png Normal file

6109
docs/index.html Normal file

File diff suppressed because it is too large

BIN
docs/registry-frontends.png Normal file

18
docs/remark-0.14.min.js vendored Normal file

File diff suppressed because one or more lines are too long

BIN
docs/service-discovery.png Normal file

4
docs/swarm-mode.svg Normal file

File diff suppressed because one or more lines are too long

BIN
docs/you-get-five-vms.jpg Normal file
36
efk/README.md Normal file
View File

@@ -0,0 +1,36 @@
# Elasticsearch + Fluentd + Kibana
This is a variation on the classic "ELK" stack.
The [fluentd](fluentd/) subdirectory contains a Dockerfile to build
a fluentd image bundling a simple configuration file, which accepts log
entries on port 24224 and stores them in Elasticsearch in a format
that Kibana can use.
You can also use a pre-built image, `jpetazzo/fluentd:v0.1`
(e.g. if you want to deploy on a cluster and don't want to deploy
your own registry).
Once this fluentd container is running, and assuming you expose
its port 24224/tcp somehow, you can send container logs to fluentd
by using Docker's fluentd logging driver.
You can bring up the whole stack with the associated Compose file.
With Swarm Mode, you can bring it up like this:
```bash
docker network create efk --driver overlay
docker service create --network efk \
--name elasticsearch elasticsearch:2
docker service create --network efk --publish 5601:5601 \
--name kibana kibana
docker service create --network efk --publish 24224:24224 \
--name fluentd jpetazzo/fluentd:v0.1
```
And then, from any node on your cluster, you can send logs to fluentd like this:
```bash
docker run --log-driver fluentd --log-opt fluentd-address=localhost:24224 \
alpine echo ohai there
```

24
efk/docker-compose.yml Normal file
View File

@@ -0,0 +1,24 @@
version: "2"
services:
elasticsearch:
image: elasticsearch
# If you need to access ES directly, just uncomment those lines.
#ports:
# - "9200:9200"
# - "9300:9300"
fluentd:
#build: fluentd
image: jpetazzo/fluentd:v0.1
ports:
- "127.0.0.1:24224:24224"
depends_on:
- elasticsearch
kibana:
image: kibana
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200

5
efk/fluentd/Dockerfile Normal file
View File

@@ -0,0 +1,5 @@
FROM ruby
RUN gem install fluentd
RUN gem install fluent-plugin-elasticsearch
COPY fluentd.conf /fluentd.conf
CMD ["fluentd", "-c", "/fluentd.conf"]

12
efk/fluentd/fluentd.conf Normal file
View File

@@ -0,0 +1,12 @@
<source>
@type forward
port 24224
bind 0.0.0.0
</source>
<match **>
@type elasticsearch
host elasticsearch
logstash_format true
flush_interval 1
</match>

View File

@@ -1,56 +1,55 @@
elasticsearch:
image: elasticsearch
# If you need to acces ES directly, just uncomment those lines.
#ports:
# - "9200:9200"
# - "9300:9300"
version: "2"
logstash:
image: logstash
command: |
-e '
input {
# Default port is 12201/udp
gelf { }
# This generates one test event per minute.
# It is great for debugging, but you might
# want to remove it in production.
heartbeat { }
}
# The following filter is a hack!
# The "de_dot" filter would be better, but it
# is not pre-installed with logstash by default.
filter {
ruby {
code => "
event.to_hash.keys.each { |k| event[ k.gsub('"'.'"','"'_'"') ] = event.remove(k) if k.include?'"'.'"' }
"
}
}
output {
elasticsearch {
hosts => ["elasticsearch:9200"]
}
# This will output every message on stdout.
# It is great when testing your setup, but in
# production, it will probably cause problems;
# either by filling up your disks, or worse,
# by creating logging loops! BEWARE!
stdout {
codec => rubydebug
}
}'
ports:
- 12201/udp
links:
- elasticsearch
services:
elasticsearch:
image: elasticsearch
# If you need to access ES directly, just uncomment those lines.
#ports:
# - "9200:9200"
# - "9300:9300"
kibana:
image: kibana
ports:
- 5601
links:
- elasticsearch
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200
logstash:
image: logstash
command: |
-e '
input {
# Default port is 12201/udp
gelf { }
# This generates one test event per minute.
# It is great for debugging, but you might
# want to remove it in production.
heartbeat { }
}
# The following filter is a hack!
# The "de_dot" filter would be better, but it
# is not pre-installed with logstash by default.
filter {
ruby {
code => "
event.to_hash.keys.each { |k| event[ k.gsub('"'.'"','"'_'"') ] = event.remove(k) if k.include?'"'.'"' }
"
}
}
output {
elasticsearch {
hosts => ["elasticsearch:9200"]
}
# This will output every message on stdout.
# It is great when testing your setup, but in
# production, it will probably cause problems;
# either by filling up your disks, or worse,
# by creating logging loops! BEWARE!
stdout {
codec => rubydebug
}
}'
ports:
- "12201:12201/udp"
kibana:
image: kibana
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200

34
elk/logstash.conf Normal file
View File

@@ -0,0 +1,34 @@
input {
# Listens on 514/udp and 514/tcp by default; change that to non-privileged port
syslog { port => 51415 }
# Default port is 12201/udp
gelf { }
# This generates one test event per minute.
# It is great for debugging, but you might
# want to remove it in production.
heartbeat { }
}
# The following filter is a hack!
# The "de_dot" filter would be better, but it
# is not pre-installed with logstash by default.
filter {
ruby {
code => "
event.to_hash.keys.each { |k| event[ k.gsub('.','_') ] = event.remove(k) if k.include?'.' }
"
}
}
output {
elasticsearch {
hosts => ["elasticsearch:9200"]
}
# This will output every message on stdout.
# It is great when testing your setup, but in
# production, it will probably cause problems;
# either by filling up your disks, or worse,
# by creating logging loops! BEWARE!
stdout {
codec => rubydebug
}
}
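The Ruby filter above is a workaround for the missing "de_dot" plugin: it just renames event keys, replacing dots with underscores. A rough Python equivalent (the `de_dot` function name is borrowed from the Logstash plugin; the sample event is illustrative):

```python
def de_dot(event):
    # Elasticsearch 2.x rejects field names containing dots,
    # and GELF events may contain them; rename the offending keys.
    return {k.replace(".", "_"): v for k, v in event.items()}

print(de_dot({"container.name": "rng", "msg": "ok"}))
```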

1
prepare-local/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
.vagrant

87
prepare-local/README.md Normal file
View File

@@ -0,0 +1,87 @@
DOCKER ORCHESTRATION (local environment instructions)
=====================================================
Instead of running this training on a cloud provider, you can simulate the
infrastructure locally. These instructions apply to **PART ONE** of the
workshop.
## 1. Prerequisites
Virtualbox, Vagrant and Ansible
- Virtualbox: https://www.virtualbox.org/wiki/Downloads
- Vagrant: https://www.vagrantup.com/downloads.html
- install vagrant-vbguest plugin (https://github.com/dotless-de/vagrant-vbguest)
- Ansible:
- install Ansible's prerequisites:
$ sudo pip install paramiko PyYAML Jinja2 httplib2 six pycrypto
- clone the Ansible repository and check out a stable version
(don't forget the `--recursive` argument when cloning!):
$ git clone --recursive https://github.com/ansible/ansible.git
$ cd ansible
$ git checkout stable-2.0.0.1
$ git submodule update
- source the setup script to make Ansible available in this terminal session:
$ source path/to/your-ansible-clone/hacking/env-setup
- you need to repeat the last step every time you open a new terminal session
and want to use any Ansible command (but you'll probably only need to run
it once).
## 2. Preparing the environment
Run the following commands:
$ vagrant up
$ chmod 600 private-key
$ ansible-playbook provisioning.yml
And that's it! Now you should be able to ssh on `node1` using:
$ ssh vagrant@10.10.10.10 -i private-key
These are the default IP addresses for the nodes:
10.10.10.10 node1
10.10.10.20 node2
10.10.10.30 node3
10.10.10.40 node4
10.10.10.50 node5
The source code of this repo will be mounted at `~/orchestration-workshop`
(only on the `node1`), so you can edit the code externally and the changes
will reflect inside the instance.
## 3. Possible problems and solutions
- Depending on the Vagrant version, `sudo apt-get install bsdtar` may be needed
- If you get strange Ansible errors about dependencies, try to check your pip
version with `pip --version`. The current version is 8.1.1. If your pip is
older than this, upgrade it with `sudo pip install --upgrade pip`, restart
your terminal session and install the Ansible prerequisites again.
- If the IPs `10.10.10.[10-50]` are already taken on your machine, you can
change them to other values in the `vagrant.yml` and `inventory` files in
this directory. Make sure you pick a set of IPs inside the same subnet.
- If you suspend your computer, the simulated private network may stop
working. This is a known problem with VirtualBox. To fix it, reload all the
VMs with `vagrant reload`.
- If you get an SSH error saying `WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED`,
it means that some other host used one of these IP addresses in the past.
To solve this, remove the old entries from your `known_hosts` file with:
$ ssh-keygen -f "~/.ssh/known_hosts" -R 10.10.10.10 -R 10.10.10.20 -R 10.10.10.30 -R 10.10.10.40 -R 10.10.10.50

78
prepare-local/Vagrantfile vendored Normal file
View File

@@ -0,0 +1,78 @@
# vim: set filetype=ruby:
require 'yaml'
require 'vagrant-vbguest' unless defined? VagrantVbguest::Config
configuration_file = File.expand_path("vagrant.yml", File.dirname(__FILE__))
configuration = YAML.load_file configuration_file
settings = configuration['vagrant']
instances = configuration['instances']
$enable_serial_logging = false
Vagrant.configure('2') do |config|
def check_dependency(plugin_name)
unless Vagrant.has_plugin?(plugin_name)
puts "Vagrant [" + plugin_name + "] is required but is not installed\n" +
"please check if you have the plugin with the following command:\n" +
" $ vagrant plugin list\n" +
"If needed install the plugin:\n" +
" $ vagrant plugin install " + plugin_name + "\n"
abort "Missing [" + plugin_name + "] plugin\n\n"
end
end
check_dependency 'vagrant-vbguest'
config.vm.box = settings['default_box']
# config.vm.box_url = settings['default_box_url']
config.ssh.forward_agent = true
config.ssh.insert_key = settings['ssh_insert_key']
config.vm.box_check_update = true
config.vbguest.auto_update = false
instances.each do |instance|
next if instance.has_key? 'deactivated' and instance['deactivated']
config.vm.define instance['hostname'] do |guest|
if instance.has_key? 'box'
guest.vm.box = instance['box']
end
if instance.has_key? 'box_url'
guest.vm.box_url = instance['box_url']
end
if instance.has_key? 'private_ip'
guest.vm.network 'private_network', ip: instance['private_ip']
end
guest.vm.provider 'virtualbox' do |vb|
if instance.has_key? 'cpu_execution_cap'
vb.customize ["modifyvm", :id, "--cpuexecutioncap", instance['cpu_execution_cap'].to_s]
end
vb.customize ["modifyvm", :id, "--nictype1", "virtio"]
vb.customize ["modifyvm", :id, "--nictype2", "virtio"]
if instance.has_key? 'memory'
vb.memory = instance['memory']
end
if instance.has_key? 'cores'
vb.cpus = instance['cores']
end
end
if instance.has_key? 'mounts'
instance['mounts'].each do |mount|
guest.vm.synced_folder mount['host_path'], mount['guest_path'], owner: mount['owner'], group: mount['group']
end
end
end
end
end

10
prepare-local/ansible.cfg Normal file

@@ -0,0 +1,10 @@
[defaults]
nocows = True
inventory = inventory
remote_user = vagrant
private_key_file = private-key
host_key_checking = False
deprecation_warnings = False
[ssh_connection]
ssh_args = -o StrictHostKeyChecking=no

11
prepare-local/inventory Normal file

@@ -0,0 +1,11 @@
node1 ansible_ssh_host=10.10.10.10
node2 ansible_ssh_host=10.10.10.20
node3 ansible_ssh_host=10.10.10.30
node4 ansible_ssh_host=10.10.10.40
node5 ansible_ssh_host=10.10.10.50
[nodes]
node[1:5]
[all:children]
nodes

27
prepare-local/private-key Normal file

@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzI
w+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoP
kcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2
hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NO
Td0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcW
yLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQIBIwKCAQEA4iqWPJXtzZA68mKd
ELs4jJsdyky+ewdZeNds5tjcnHU5zUYE25K+ffJED9qUWICcLZDc81TGWjHyAqD1
Bw7XpgUwFgeUJwUlzQurAv+/ySnxiwuaGJfhFM1CaQHzfXphgVml+fZUvnJUTvzf
TK2Lg6EdbUE9TarUlBf/xPfuEhMSlIE5keb/Zz3/LUlRg8yDqz5w+QWVJ4utnKnK
iqwZN0mwpwU7YSyJhlT4YV1F3n4YjLswM5wJs2oqm0jssQu/BT0tyEXNDYBLEF4A
sClaWuSJ2kjq7KhrrYXzagqhnSei9ODYFShJu8UWVec3Ihb5ZXlzO6vdNQ1J9Xsf
4m+2ywKBgQD6qFxx/Rv9CNN96l/4rb14HKirC2o/orApiHmHDsURs5rUKDx0f9iP
cXN7S1uePXuJRK/5hsubaOCx3Owd2u9gD6Oq0CsMkE4CUSiJcYrMANtx54cGH7Rk
EjFZxK8xAv1ldELEyxrFqkbE4BKd8QOt414qjvTGyAK+OLD3M2QdCQKBgQDtx8pN
CAxR7yhHbIWT1AH66+XWN8bXq7l3RO/ukeaci98JfkbkxURZhtxV/HHuvUhnPLdX
3TwygPBYZFNo4pzVEhzWoTtnEtrFueKxyc3+LjZpuo+mBlQ6ORtfgkr9gBVphXZG
YEzkCD3lVdl8L4cw9BVpKrJCs1c5taGjDgdInQKBgHm/fVvv96bJxc9x1tffXAcj
3OVdUN0UgXNCSaf/3A/phbeBQe9xS+3mpc4r6qvx+iy69mNBeNZ0xOitIjpjBo2+
dBEjSBwLk5q5tJqHmy/jKMJL4n9ROlx93XS+njxgibTvU6Fp9w+NOFD/HvxB3Tcz
6+jJF85D5BNAG3DBMKBjAoGBAOAxZvgsKN+JuENXsST7F89Tck2iTcQIT8g5rwWC
P9Vt74yboe2kDT531w8+egz7nAmRBKNM751U/95P9t88EDacDI/Z2OwnuFQHCPDF
llYOUI+SpLJ6/vURRbHSnnn8a/XG+nzedGH5JGqEJNQsz+xT2axM0/W/CRknmGaJ
kda/AoGANWrLCz708y7VYgAtW2Uf1DPOIYMdvo6fxIB5i9ZfISgcJ/bbCUkFrhoH
+vq/5CIWxCPp0f85R4qxxQ5ihxJ0YDQT9Jpx4TMss4PSavPaBH3RXow5Ohe+bYoQ
NE5OgEXk2wVfZczCZpigBKbKZHNYcelXtTt/nP3rsCuGcM4h53s=
-----END RSA PRIVATE KEY-----

prepare-local/provisioning.yml Normal file

@@ -0,0 +1,132 @@
---
- hosts: nodes
sudo: true
vars_files:
- vagrant.yml
tasks:
- name: clean up the home folder
file:
path: /home/vagrant/{{ item }}
state: absent
with_items:
- base.sh
- chef.sh
- cleanup.sh
- cleanup-virtualbox.sh
- puppetlabs-release-wheezy.deb
- puppet.sh
- ruby.sh
- vagrant.sh
- virtualbox.sh
- zerodisk.sh
- name: installing dependencies
apt:
name: apt-transport-https,ca-certificates,python-pip,tmux
state: present
update_cache: true
- name: fetching docker repo key
apt_key:
keyserver: hkp://p80.pool.sks-keyservers.net:80
id: 58118E89F3A912897C070ADBF76221572C52609D
- name: adding package repos
apt_repository:
repo: "{{ item }}"
state: present
with_items:
- deb https://apt.dockerproject.org/repo ubuntu-trusty main
- name: installing docker
apt:
name: docker-engine
state: present
update_cache: true
- name: adding user vagrant to group docker
user:
name: vagrant
groups: docker
append: yes
- name: making docker daemon listen to port 55555
lineinfile:
dest: /etc/default/docker
line: DOCKER_OPTS="--host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:55555"
regexp: '^#?DOCKER_OPTS=.*$'
state: present
register: docker_opts
- name: restarting docker daemon, if needed
service:
name: docker
state: restarted
when: docker_opts is defined and docker_opts.changed
- name: performing pip autoupgrade
pip:
name: pip
state: latest
- name: installing virtualenv
pip:
name: virtualenv
state: latest
- name: Install Docker Compose via PIP
pip: name=docker-compose
- name: adjusting permissions on the docker-compose binary
file:
path="/usr/local/bin/docker-compose"
state=file
mode=0755
owner=vagrant
group=docker
- name: building the /etc/hosts file with all nodes
lineinfile:
dest: /etc/hosts
line: "{{ item.private_ip }} {{ item.hostname }}"
regexp: "^{{ item.private_ip }} {{ item.hostname }}$"
state: present
with_items: "{{ instances }}"
- name: copying the ssh key to the nodes
copy:
src: private-key
dest: /home/vagrant/private-key
mode: 0600
group: root
owner: vagrant
- name: copying ssh configuration
copy:
src: ssh-config
dest: /home/vagrant/.ssh/config
mode: 0600
group: root
owner: vagrant
- name: fixing the hostname
hostname:
name: "{{ inventory_hostname }}"
- name: adjusting the /etc/hosts to the new hostname
lineinfile:
dest: /etc/hosts
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
owner: root
group: root
mode: 0644
with_items:
- regexp: '^127\.0\.0\.1'
line: "127.0.0.1 localhost {{ inventory_hostname }}"
- regexp: '^127\.0\.1\.1'
line: "127.0.1.1 {{ inventory_hostname }}"

3
prepare-local/ssh-config Normal file

@@ -0,0 +1,3 @@
Host node*
IdentityFile ~/private-key
StrictHostKeyChecking no

42
prepare-local/vagrant.yml Normal file

@@ -0,0 +1,42 @@
---
vagrant:
default_box: ubuntu/trusty64
default_box_check_update: true
ssh_insert_key: false
min_memory: 256
min_cores: 1
instances:
- hostname: node1
private_ip: 10.10.10.10
memory: 1512
cores: 1
mounts:
- host_path: ../
guest_path: /home/vagrant/orchestration-workshop
owner: vagrant
group: vagrant
- hostname: node2
private_ip: 10.10.10.20
memory: 512
cores: 1
- hostname: node3
private_ip: 10.10.10.30
memory: 512
cores: 1
- hostname: node4
private_ip: 10.10.10.40
memory: 512
cores: 1
- hostname: node5
private_ip: 10.10.10.50
memory: 512
cores: 1

242
prepare-machine/README.md Normal file

@@ -0,0 +1,242 @@
# Setting up your own cluster
If you want to go through this orchestration workshop on your own,
you will need a cluster of Docker nodes.
These instructions will walk you through the required steps,
using [Docker Machine](https://docs.docker.com/machine/) to
create the nodes.
## Requirements
You need Docker Machine. To check if it is installed, try to
run the following command:
```bash
$ docker-machine -v
docker-machine version 0.8.2, build e18a919
```
If you see a Docker Machine version number, perfect! Otherwise,
you need to install it; either as part of the Docker Toolbox,
or as a stand-alone tool. See [Docker Machine installation docs](
https://docs.docker.com/machine/install-machine/) for details.
You also need either credentials for a cloud provider, or a
local VirtualBox or VMware installation (or anything supported
by Docker Machine, really).
## Discrepancies with official environment
The resulting environment will be slightly different from the
one that we provision for people attending the workshop at
conferences and similar events, and you will have to adapt a
few things.
We try to list all the differences here.
### User name
The official environment uses user `docker`. If you use
Docker Machine, the user name will probably be different.
### Node aliases
In the official environment, aliases are seeded in
`/etc/hosts`, allowing you to resolve node IP addresses
with the aliases `node1`, `node2`, etc.; if you use
Docker Machine, you will have to look up the IP addresses
with the `docker-machine ip nodeX` command instead.
### SSH keys
In the official environment, you can log in from one node
to another with SSH, without having to provide a password,
thanks to pre-generated (and pre-copied) SSH keys.
If you use Docker Machine, you will have to use
`docker-machine ssh` from your machine instead.
### Machine and Compose
In the official environment, Docker Machine and Docker
Compose are installed on your nodes. If you use Docker
Machine, you will have to install at least Docker Compose.
The easiest way to install Compose (verified to work
with the EC2 and VirtualBox drivers, and probably others
as well) is to use `docker-machine ssh` to connect
to your node, then run the following command:
```bash
sudo curl -L \
https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
Note that it is not necessary (or even useful) to
install Docker Machine on your nodes, since if you're
following this guide, you already have Machine on
your local computer. ☺
### IP addresses
In some environments, your nodes will have multiple
IP addresses. This is the case with VirtualBox, for
instance. At any point in the workshop, if you need
a node's IP address, you should use the address
given by the `docker-machine ip` command.
## Creating your nodes with Docker Machine
Here are some instructions for various Machine Drivers.
### AWS EC2
You have to retrieve your AWS access key and secret access key,
and set the following environment variables:
```bash
export MACHINE_DRIVER=amazonec2
export AWS_ACCESS_KEY_ID=AKI...
export AWS_SECRET_ACCESS_KEY=...
```
Optionally, you can also set `AWS_DEFAULT_REGION` to the region
closest to you. See [AWS documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions)
for the list of available regions and their codes.
For instance, if you are on the US West Coast, I recommend
that you set `AWS_DEFAULT_REGION` to `us-west-2`; if you are
in Europe, to `eu-central-1` (except in UK and Ireland where
you probably want `eu-west-1`), etc.
If you don't specify anything, your nodes will be in `us-east-1`.
You can also set `AWS_INSTANCE_TYPE` if you want bigger or smaller
instances than `t2.micro`. For the official workshops, we use
`m3.large`, but remember: the bigger the instance, the more
expensive it gets, obviously!
After setting these variables, run the following command:
```bash
for N in $(seq 1 5); do
docker-machine create node$N
docker-machine ssh node$N usermod -aG docker ubuntu
done
```
And after a few minutes, your five nodes will be ready. To log
into a node, use `docker-machine ssh nodeX`.
By default, Docker Machine places the created nodes in a
security group aptly named `docker-machine`. By default, this
group is pretty restrictive, and will only let you connect
to the Docker API and SSH. For the purpose of the workshop,
you will need to open that security group to normal traffic.
You can do that through the AWS EC2 console, or with the
following CLI command:
```bash
aws ec2 authorize-security-group-ingress --group-name docker-machine --protocol -1 --cidr 0.0.0.0/0
```
If Docker Machine fails, complaining that it cannot find
the default VPC or subnet, this could be because you have
an "old" EC2 account (created before the introduction of EC2
VPC) and your account has no default VPC. In that case,
you will have to create a VPC, a subnet in that VPC,
and use the corresponding Machine flags (`--amazonec2-vpc-id`
and `--amazonec2-subnet-id`) or environment variables
(`AWS_VPC_ID` and `AWS_SUBNET_ID`) to tell Machine what to use.
You will get similar error messages if you *have* set these
flags (or environment variables) but the VPC (or subnets)
indicated do not exist. This can happen if you frequently
switch between different EC2 accounts, and forget that you
have the `AWS_VPC_ID` or `AWS_SUBNET_ID` variables set.
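For example, with hypothetical IDs (`vpc-0123abcd` and `subnet-4567cdef` are placeholders; substitute the VPC and subnet you actually created):

```bash
# Placeholder IDs, not real resources: replace with your own VPC/subnet
export AWS_VPC_ID=vpc-0123abcd
export AWS_SUBNET_ID=subnet-4567cdef
```

With these variables set, subsequent `docker-machine create` commands will target that VPC and subnet.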
### Microsoft Azure
You have to retrieve your subscription ID, and set the following environment
variables:
```bash
export MACHINE_DRIVER=azure
export AZURE_SUBSCRIPTION_ID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
```
Additionally, you can set `AZURE_LOCATION` to an Azure datacenter
close to you. By default, it will pick "West US". You can see
the available regions [on Azure's website](
https://azure.microsoft.com/en-us/regions/services/).
For instance, if you want to deploy on the US East Coast,
set `AZURE_LOCATION` to `East US` or `eastus` (capitalization
and spacing shouldn't matter; just use the names shown on the
map or table on Azure's website).
Then run the following command:
```bash
for N in $(seq 1 5); do
docker-machine create node$N
docker-machine ssh node$N usermod -aG docker docker-user
done
```
The CLI will give you instructions to authenticate on the Azure portal,
and once you've done that, it will create your VMs.
You will log into your nodes with `docker-machine ssh nodeX`.
By default, the firewall only allows access to the Docker API
and SSH ports. To open access to other ports, you can use the
following command:
```bash
for N in $(seq 1 5); do
az network nsg rule create -g docker-machine --name AllowAny --nsg-name node$N-firewall \
--access allow --direction inbound --protocol '*' \
--source-address-prefix '*' --source-port-range '*' \
--destination-address-prefix '*' --destination-port-range '*'
done
```
(The command takes a while. Be patient.)
### Local VirtualBox or VMware Fusion
If you want to run with local VMs, set the environment variable
`MACHINE_DRIVER` to `virtualbox` or `vmwarefusion` and create your nodes:
```bash
export MACHINE_DRIVER=virtualbox
for N in $(seq 1 5); do
docker-machine create node$N
done
```
### Terminating instances
When you're done, if you started your instances on a public
cloud (or anywhere else where they cost you money!), you will want to
terminate (destroy) them. This can be done with the following
command:
```bash
for N in $(seq 1 5); do
docker-machine rm -f node$N
done
```

30
prepare-vms/Dockerfile Normal file

@@ -0,0 +1,30 @@
FROM debian:jessie
MAINTAINER AJ Bowen <aj@soulshake.net>
RUN apt-get update && apt-get install -y \
wkhtmltopdf \
bsdmainutils \
ca-certificates \
curl \
groff \
jq \
less \
man \
pssh \
python \
python-pip \
python-docutils \
ssh \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN pip install \
awscli \
pdfkit \
PyYAML \
termcolor
WORKDIR /root
RUN echo "alias ll='ls -lahF'" >> /root/.bashrc
ENTRYPOINT ["/root/prepare-vms/scripts/trainer-cli"]

165
prepare-vms/README.md Normal file

@@ -0,0 +1,165 @@
# Trainer tools to create and prepare VMs for Docker workshops on AWS
## Prerequisites
- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
## General Workflow
- fork/clone repo
- set required environment variables for AWS
- create your own setting file from `settings/example.yaml`
- run `./trainer` commands to create instances, install Docker, set up each user's environment on node1, and perform other management tasks
- run the `./trainer cards` command to generate a PDF handout with each user's host IPs and login info
## Clone/Fork the Repo, and Build the Tools Image
The Docker Compose file here is used to build an image with all the dependencies needed to run the `./trainer` commands and optional tools. Each run of the script will check whether you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](trainer#L5).
$ git clone https://github.com/jpetazzo/orchestration-workshop.git
$ cd orchestration-workshop/prepare-vms
$ docker-compose build
## Preparing to Run `./trainer`
### Required AWS Permissions/Info
- The initial assumption is that you're using a root account. If you'd like to use an IAM user instead, it will need `AmazonEC2FullAccess` and `IAMReadOnlyAccess`.
- Using a non-default VPC or Security Group isn't supported out of the box yet, but until then you can [customize the `trainer-cli` script](scripts/trainer-cli#L396-L401).
- These instances will be assigned the default VPC Security Group, which does not open any ports from the Internet by default. So you'll need to add inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./trainer opensg`, which opens up all ports.
### Required Environment Variables
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
### Update/copy `settings/example.yaml`
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `trainer deploy`, `trainer cards`, etc.
./trainer cards 2016-09-28-00-33-bret settings/orchestration.yaml
## `./trainer` Usage
```
./trainer <command> [n-instances|tag] [settings/file.yaml]
Core commands:
start n Start n instances
list [TAG] If a tag is provided, list its VMs. Otherwise, list tags.
deploy TAG Deploy all instances with a given tag
pull-images TAG Pre-pull docker images. Run only after deploying.
stop TAG Stop and delete instances tagged TAG
Extras:
ips TAG List all IPs of instances with a given tag (updates ips.txt)
ids TAG/TOKEN List all instance IDs with a given tag
shell Get a shell in the trainer container
status TAG Print information about this tag and its VMs
tags List all tags (per-region)
retag TAG/TOKEN TAG Retag instances with a new tag
Beta:
ami Look up Amazon Machine Images
cards FILE Generate cards
opensg Modify AWS security groups
```
### Summary of What `./trainer` Does For You
- Manages AWS instances in bulk, without needing to use the AWS CLI or console.
- Can manage multiple "tags" or groups of instances, which are tracked in `prepare-vms/tags/`.
- Can also create PDF/HTML handouts with each student's instance IPs and login info.
- The `./trainer` script can be executed directly.
- It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms).
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard-coded.
### Example Steps to Launch a Batch of Instances for a Workshop
- Export the environment variables needed by the AWS CLI (see **Required Environment Variables** above)
- Run `./trainer start N` to create `N` EC2 instances
  - Your local SSH key will be synced to the instances under the `ubuntu` user
  - AWS instances will be created and tagged based on the date, and their IPs stored in `prepare-vms/tags/`
- Run `./trainer deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
  - If it errors or times out, you should be able to re-run it
  - This requires a good connection to run all the parallel SSH connections, up to 100 in parallel (pro tip: create a dedicated management instance in the same AWS region, and run all these utilities from there)
- Run `./trainer pull-images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./trainer cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
- *Have a great workshop*
- Run `./trainer stop TAG` to terminate instances.
## Other Tools
### Deploying your SSH key to all the machines
- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
### Installing extra packages
- Source `postprep.rc`.
(This will install a few extra packages, add entries to
/etc/hosts, generate SSH keys, and deploy them on all hosts.)
## Even More Details
#### Sync of SSH keys
When the `start` command is run, your local RSA SSH public key will be added to your AWS EC2 keychain.
To see which local key will be uploaded, run `ssh-add -l | grep RSA`.
#### Instance + tag creation
10 VMs will be started, with an automatically generated tag (timestamp + your username).
Your SSH key will be added to the `authorized_keys` of the ubuntu user.
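The exact tag format lives in the scripts, but a timestamp-plus-username tag of the shape seen in the examples above (e.g. `2016-09-28-00-33-bret`) can be sketched as:

```
# Sketch only: derive a tag from the current UTC time and your username
# (the authoritative format is in scripts/trainer-cli)
TAG="$(date -u +%Y-%m-%d-%H-%M)-$USER"
echo "$TAG"
```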
#### Creation of ./$TAG/ directory and contents
Following the creation of the VMs, a text file will be created containing a list of their IPs.
This ips.txt file will be created in the $TAG/ directory and a symlink will be placed in the working directory of the script.
If you create new VMs, the symlinked file will be overwritten.
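Downstream steps (like the cards generator) consume this IP list in fixed-size groups, one cluster per student. The grouping can be sketched with standard tools (`sample-ips.txt`, the addresses, and the cluster size of 5 are all illustrative):

```
# Sketch: how a flat IP list gets consumed in fixed-size clusters of 5
printf '10.0.0.%s\n' 1 2 3 4 5 6 7 8 9 10 > sample-ips.txt
split -l 5 sample-ips.txt cluster.   # -> cluster.aa, cluster.ab
wc -l cluster.*
```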
#### Deployment
Instances can be deployed manually using the `deploy` command:
$ ./trainer deploy TAG settings/somefile.yaml
The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and executed.
#### Pre-pull images
$ ./trainer pull-images TAG
#### Generate cards
$ ./trainer cards TAG settings/somefile.yaml
#### List tags
$ ./trainer list
#### List VMs
$ ./trainer list TAG
This will print a human-friendly list containing some information about each instance.
#### Stop and destroy VMs
$ ./trainer stop TAG
## ToDo
- Don't write to bash history in system() in postprep
- compose, etc version inconsistent (int vs str)

prepare-vms/docker-compose.yml Normal file

@@ -0,0 +1,26 @@
version: "2"
services:
prepare-vms:
build: .
container_name: prepare-vms
working_dir: /root/prepare-vms
volumes:
- $HOME/.aws/:/root/.aws/
- /etc/localtime:/etc/localtime:ro
- /tmp/.X11-unix:/tmp/.X11-unix
- $SSH_AUTH_DIRNAME:$SSH_AUTH_DIRNAME
- $PWD/:/root/prepare-vms/
environment:
SCRIPT_DIR: /root/prepare-vms
DISPLAY: ${DISPLAY}
SSH_AUTH_SOCK: ${SSH_AUTH_SOCK}
SSH_AGENT_PID: ${SSH_AGENT_PID}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
AWS_DEFAULT_OUTPUT: json
AWS_INSTANCE_TYPE: ${AWS_INSTANCE_TYPE}
AWS_VPC_ID: ${AWS_VPC_ID}
USER: ${USER}
entrypoint: /root/prepare-vms/scripts/trainer-cli

BIN
prepare-vms/docker.png Normal file

Binary file not shown.



@@ -1,82 +0,0 @@
#!/usr/bin/env python
SETTINGS_BASIC = dict(
clustersize=1,
pagesize=15,
blurb="<p>Here is the connection information to your very own "
"VM for this intro to Docker workshop. You can connect "
"to the VM using your SSH client.</p>\n"
"<p>Your VM is reachable on the following address:</p>\n",
prettify=lambda x: x,
footer="<p>You can find the last version of the slides on "
"http://view.dckr.info/.</p>",
)
SETTINGS_ADVANCED = dict(
clustersize=5,
pagesize=12,
blurb="<p>Here is the connection information to your very own "
"cluster for this orchestration workshop. You can connect "
"to each VM with any SSH client.</p>\n"
"<p>Your machines are:<ul>\n",
prettify=lambda l: [ "node%d: %s"%(i+1, s)
for (i, s) in zip(range(len(l)), l) ],
footer="<p>You can find the last version of the slides on -&gt; "
"http://container.training/</p>"
)
SETTINGS = SETTINGS_ADVANCED
globals().update(SETTINGS)
###############################################################################
ips = list(open("ips.txt"))
assert len(ips)%clustersize == 0
clusters = []
while ips:
cluster = ips[:clustersize]
ips = ips[clustersize:]
clusters.append(cluster)
html = open("ips.html", "w")
html.write("<html><head><style>")
html.write("""
div {
float:left;
border: 1px solid black;
width: 28%;
padding: 4% 2.5% 2.5% 2.5%;
font-size: x-small;
background-image: url("docker-nb.svg");
background-size: 15%;
background-position-x: 50%;
background-repeat: no-repeat;
}
p {
margin: 0.5em 0 0.5em 0;
}
.pagebreak {
page-break-before: always;
clear: both;
display: block;
height: 8px;
}
""")
html.write("</style></head><body>")
for i, cluster in enumerate(clusters):
if i>0 and i%pagesize==0:
html.write('<span class="pagebreak"></span>\n')
html.write("<div>")
html.write(blurb)
for s in prettify(cluster):
html.write("<li>%s</li>\n"%s)
html.write("</ul></p>")
html.write("<p>login=docker password=training</p>\n")
html.write(footer)
html.write("</div>")
html.close()


BIN
prepare-vms/media/swarm.png Normal file

Binary file not shown.



@@ -1,51 +0,0 @@
pssh -I tee /tmp/postprep.py <<EOF
#!/usr/bin/env python
COMPOSE_VERSION = "1.6.2"
MACHINE_VERSION = "0.6.0"
SWARM_VERSION = "1.1.3"
import os
import sys
import urllib
clustersize = 5
myaddr = urllib.urlopen("http://myip.enix.org/REMOTE_ADDR").read()
addresses = list(l.strip() for l in sys.stdin)
def makenames(addrs):
return [ "node%s"%(i+1) for i in range(len(addrs)) ]
while addresses:
cluster = addresses[:clustersize]
addresses = addresses[clustersize:]
if myaddr not in cluster:
continue
names = makenames(cluster)
for ipaddr, name in zip(cluster, names):
os.system("grep ^%s /etc/hosts || echo %s %s | sudo tee -a /etc/hosts"
%(ipaddr, ipaddr, name))
if myaddr == cluster[0]:
os.system("[ -f .ssh/id_rsa ] || ssh-keygen -t rsa -f .ssh/id_rsa -P ''")
os.system("sudo apt-get remove -y --purge dnsmasq-base")
os.system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip")
os.system("sudo easy_install pip")
os.system("sudo pip uninstall -y docker-compose")
#os.system("sudo pip install docker-compose=={}".format(COMPOSE_VERSION))
os.system("sudo curl -sSL -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/{}/docker-compose-$(uname -s)-$(uname -m)".format(COMPOSE_VERSION))
os.system("sudo chmod +x /usr/local/bin/docker-compose")
os.system("docker pull swarm:{}".format(SWARM_VERSION))
os.system("docker tag -f swarm:{} swarm".format(SWARM_VERSION))
#os.system("sudo curl -sSL https://github.com/docker/machine/releases/download/v{}/docker-machine_linux-amd64.zip -o /tmp/docker-machine.zip".format(MACHINE_VERSION))
#os.system("cd /usr/local/bin ; sudo unzip /tmp/docker-machine.zip")
os.system("sudo curl -sSL -o /usr/local/bin/docker-machine https://github.com/docker/machine/releases/download/v{}/docker-machine-$(uname -s)-$(uname -m)".format(MACHINE_VERSION))
os.system("sudo chmod +x /usr/local/bin/docker-machine*")
os.system("echo 1000000 | sudo tee /proc/sys/net/nf_conntrack_max")
#os.system("""sudo sed -i 's,^DOCKER_OPTS=.*,DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:55555",' /etc/default/docker""")
#os.system("sudo service docker restart")
EOF
pssh -t 300 -I "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" < ips.txt
pssh "[ -f .ssh/id_rsa ] || scp -o StrictHostKeyChecking=no node1:.ssh/id_rsa* .ssh"
pssh "grep docker@ .ssh/authorized_keys || cat .ssh/id_rsa.pub >> .ssh/authorized_keys"


@@ -1,23 +0,0 @@
pssh () {
HOSTFILES="hosts ips.txt /tmp/hosts"
for HOSTFILE in $HOSTFILES; do
[ -f $HOSTFILE ] && break
done
[ -f $HOSTFILE ] || {
echo "No hostfile found (tried $HOSTFILES)"
return
}
parallel-ssh -h $HOSTFILE -l docker \
-O UserKnownHostsFile=/dev/null -O StrictHostKeyChecking=no \
-O ForwardAgent=yes \
"$@"
}
pcopykey () {
ssh-add -L | pssh --askpass --send-input \
"mkdir -p .ssh; tee .ssh/authorized_keys"
ssh-add -L | pssh --send-input \
"sudo mkdir -p /root/.ssh; sudo tee /root/.ssh/authorized_keys"
}

116
prepare-vms/scripts/aws.sh Executable file

@@ -0,0 +1,116 @@
#!/bin/bash
source scripts/cli.sh
aws_display_tags(){
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Tag]" | awk '{ printf " %7s %8s %10s \n", $1, $2, $3}'
aws ec2 describe-instances --filter "Name=tag:Name,Values=[*]" \
--query "Reservations[*].Instances[*].[{Tags:Tags[0].Value,State:State.Name}]" \
| awk '{ printf " %-13s %-10s %-1s\n", $1, $2, $3}' \
| uniq -c \
| sort -k 3
}
aws_display_tokens(){
# Print all tokens in our region with their instance count
echo "[#] [Token] [Tag]" | awk '{ printf " %7s %12s %30s\n", $1, $2, $3}'
# --query 'Volumes[*].{ID:VolumeId,AZ:AvailabilityZone,Size:Size}'
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].{ClientToken:ClientToken,Tags:Tags[0].Value}' \
| awk '{ printf " %7s %12s %50s\n", $1, $2, $3}' \
| sort \
| uniq -c \
| sort -k 3
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ' )
aws ec2 describe-instance-status \
--instance-ids $IDS \
--query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
--output table
}
aws_display_instances_by_tag() {
TAG=$1
need_tag $TAG
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
echo "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "ID State Tags IP Type" \
| awk '{ printf "%9s %12s %15s %20s %14s \n", $1, $2, $3, $4, $5}' # column -t -c 70}
echo "$result"
fi
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
need_tag $TOKEN
aws ec2 describe-instances --filters "Name=client-token,Values=$TOKEN" \
| grep ^INSTANCE \
| awk '{print $8}'
}
aws_get_instance_ids_by_tag() {
TAG=$1
need_tag $TAG
aws ec2 describe-instances --filters "Name=tag:Name,Values=$TAG" \
| grep ^INSTANCE \
| awk '{print $8}'
}
aws_get_instance_ips_by_tag() {
TAG=$1
need_tag $TAG
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws_get_instance_ids_by_tag $TAG)
if [ -z "$IDS" ]; then
die "Invalid tag."
fi
echo "Deleting instances with tag $TAG"
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
}
aws_tag_instances() {
OLD_TAG_OR_TOKEN=$1
NEW_TAG=$2
IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}

39
prepare-vms/scripts/cli.sh Executable file

@@ -0,0 +1,39 @@
die () {
if [ -n "$1" ]; then
>&2 echo -n $(tput setaf 1)
>&2 echo -e "$1"
>&2 echo -n $(tput sgr0)
fi
exit 1
}
need_tag(){
TAG=$1
if [ -z "$TAG" ]; then
echo "Please specify a tag. Here's the list: "
aws_display_tags
die
fi
}
need_token(){
TOKEN=$1
if [ -z "$TOKEN" ]; then
echo "Please specify a token. Here's the list: "
aws_display_tokens
die
fi
}
need_ips_file() {
IPS_FILE=$1
if [ -z "$IPS_FILE" ]; then
echo "IPS_FILE not set."
die
fi
if [ ! -s "$IPS_FILE" ]; then
echo "IPS_FILE $IPS_FILE not found. Please run: trainer ips <TAG>"
die
fi
}


@@ -0,0 +1,15 @@
bold() {
msg=$1
echo "$(tput bold)$1$(tput sgr0)"
}
green() {
msg=$1
echo "$(tput setaf 2)$1$(tput sgr0)"
}
yellow(){
msg=$1
echo "$(tput setaf 3)$1$(tput sgr0)"
}


@@ -0,0 +1,132 @@
#!/bin/bash
# borrowed from https://gist.github.com/kirikaza/6627072
usage() {
cat >&2 <<__
usage: find-ubuntu-ami.sh [ <filter>... ] [ <sorting> ]
where:
<filter> is pair of key and substring to search
-r <region>
-n <name>
-v <version>
-a <arch>
-t <type>
-d <date>
-i <image>
-k <kernel>
<sorting> is one of:
-R by region
-N by name
-V by version
-A by arch
-T by type
-D by date
-I by image
-K by kernel
protip for Docker orchestration workshop admin:
./find-ubuntu-ami.sh -t hvm:ebs -r \$AWS_REGION -v 15.10 -N
__
exit 1
}
args=`getopt hr:n:v:a:t:d:i:k:RNVATDIK $*`
if [ $? != 0 ] ; then
echo >&2
usage
fi
region=
name=
version=
arch=
type=
date=
image=
kernel=
sort=date
set -- $args
for a ; do
case "$a" in
-h) usage ;;
-r) region=$2 ; shift ;;
-n) name=$2 ; shift ;;
-v) version=$2 ; shift ;;
-a) arch=$2 ; shift ;;
-t) type=$2 ; shift ;;
-d) date=$2 ; shift ;;
-i) image=$2 ; shift ;;
-k) kernel=$2 ; shift ;;
-R) sort=region ;;
-N) sort=name ;;
-V) sort=version ;;
-A) sort=arch ;;
-T) sort=type ;;
-D) sort=date ;;
-I) sort=image ;;
-K) sort=kernel ;;
--) shift ; break ;;
*) continue ;;
esac
shift
done
[ $# = 0 ] || usage
fix_json() {
tr -d \\n | sed 's/,]}/]}/'
}
jq_query() { cat <<__
.aaData | map (
{
region: .[0],
name: .[1],
version: .[2],
arch: .[3],
type: .[4],
date: .[5],
image: .[6],
kernel: .[7]
} | select (
(.region | contains("$region")) and
(.name | contains("$name")) and
(.version | contains("$version")) and
(.arch | contains("$arch")) and
(.type | contains("$type")) and
(.date | contains("$date")) and
(.image | contains("$image</a>")) and
(.kernel | contains("$kernel"))
)
) | sort_by(.$sort) | .[] |
"\(.region)|\(.name)|\(.version)|\(.arch)|\(.type)|\(.date)|\(.image)|\(.kernel)"
__
}
trim_quotes() {
sed 's/^"//;s/"$//'
}
escape_spaces() {
sed 's/ /\\\ /g'
}
url=http://cloud-images.ubuntu.com/locator/ec2/releasesTable
{
echo REGION NAME VERSION ARCH TYPE DATE IMAGE KERNEL
curl -s $url | fix_json | jq "`jq_query`" | trim_quotes | escape_spaces | tr \| ' '
} |
while read region name version arch type date image kernel ; do
image=${image%<*}
image=${image#*>}
echo "$region|$name|$version|$arch|$type|$date|$image|$kernel"
done | column -t -s \|
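The `fix_json` helper above exists because the Ubuntu AMI locator returns JSON with a trailing comma before the closing `]}`, which strict parsers reject. A minimal Python sketch of the same workaround (function name and sample data are hypothetical):

```python
import json
import re

def fix_json(raw):
    # Strip newlines, then drop the trailing comma before "]}",
    # mirroring `tr -d \\n | sed 's/,]}/]}/'` in find-ubuntu-ami.sh
    return re.sub(r',\]\}', ']}', raw.replace("\n", ""))

sample = '{"aaData": [\n  ["us-east-1", "xenial", "16.04"],\n]}'
data = json.loads(fix_json(sample))
```

Without the fix, `json.loads(sample)` raises a `JSONDecodeError` on the dangling comma.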


@@ -0,0 +1,120 @@
#!/usr/bin/env python
import os
import sys
import yaml
try:
import pdfkit
except ImportError:
print("WARNING: could not import pdfkit; PDF generation will fail.")
def prettify(l):
l = [ip.strip() for ip in l]
return [ "node{}: <code>{}</code>".format(i+1, s) for (i, s) in enumerate(l) ]
# Read settings from user-provided settings file
with open(sys.argv[1]) as f:
data = f.read()
SETTINGS = yaml.load(data)
SETTINGS['footer'] = SETTINGS['footer'].format(url=SETTINGS['url'])
globals().update(SETTINGS)
###############################################################################
ips = list(open("ips.txt"))
print("Current settings (as defined in settings.yaml):")
print(" Number of IPs: {}".format(len(ips)))
print(" VMs per cluster: {}".format(clustersize))
print("Background image: {}".format(background_image))
print("---------------------------------------------")
assert len(ips)%clustersize == 0
if clustersize == 1:
blurb = blurb.format(
cluster_or_machine="machine",
this_or_each="this",
machine_is_or_machines_are="machine is",
workshop_name=workshop_short_name,
)
else:
blurb = blurb.format(
cluster_or_machine="cluster",
this_or_each="each",
machine_is_or_machines_are="machines are",
workshop_name=workshop_short_name,
)
clusters = []
while ips:
cluster = ips[:clustersize]
ips = ips[clustersize:]
clusters.append(cluster)
html = open("ips.html", "w")
html.write("<html><head><style>")
head = """
div {{
float:left;
border: 1px dotted black;
width: 27%;
padding: 6% 2.5% 2.5% 2.5%;
font-size: x-small;
background-image: url("{background_image}");
background-size: 13%;
background-position-x: 50%;
background-position-y: 5%;
background-repeat: no-repeat;
}}
p {{
margin: 0.5em 0 0.5em 0;
}}
.pagebreak {{
page-break-before: always;
clear: both;
display: block;
height: 8px;
}}
"""
head = head.format(background_image=SETTINGS['background_image'])
html.write(head)
html.write("</style></head><body>")
for i, cluster in enumerate(clusters):
if i>0 and i%pagesize==0:
html.write('<span class="pagebreak"></span>\n')
html.write("<div>")
html.write(blurb)
for s in prettify(cluster):
html.write("<li>%s</li>\n"%s)
html.write("</ul></p>")
html.write("<p>login: <b><code>{}</code></b> <br>password: <b><code>{}</code></b></p>\n".format(instance_login, instance_password))
html.write(footer)
html.write("</div>")
html.close()
"""
html.write("<div>")
html.write("<p>{}</p>".format(blurb))
for s in prettify(cluster):
html.write("<li>{}</li>".format(s))
html.write("</ul></p>")
html.write("<center>")
html.write("<p>login: <b><code>{}</code></b> &nbsp&nbsp password: <b><code>{}</code></b></p>\n".format(instance_login, instance_password))
html.write("</center>")
html.write(footer)
html.write("</div>")
html.close()
"""
with open('ips.html') as f:
pdfkit.from_file(f, 'ips.pdf')
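The card generator above splits the flat `ips.txt` list into fixed-size clusters, one cluster per card. The chunking logic can be isolated as follows (function name is hypothetical):

```python
def split_into_clusters(ips, clustersize):
    # Same logic as the while loop in ips-txt-to-html.py:
    # consume the IP list clustersize entries at a time.
    # The assert mirrors the script's requirement that the
    # IP count divides evenly into clusters.
    assert len(ips) % clustersize == 0
    clusters = []
    while ips:
        clusters.append(ips[:clustersize])
        ips = ips[clustersize:]
    return clusters
```

With 10 IPs and `clustersize: 5`, this yields two clusters of five addresses each.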

prepare-vms/scripts/postprep.rc Executable file

@@ -0,0 +1,201 @@
pssh -I tee /tmp/settings.yaml < $SETTINGS
pssh sudo apt-get update
pssh sudo apt-get install -y python-setuptools
pssh sudo easy_install pyyaml
pssh -I tee /tmp/postprep.py <<EOF
#!/usr/bin/env python
import os
import platform
import sys
import time
import urllib
import yaml
#################################
config = yaml.load(open("/tmp/settings.yaml"))
COMPOSE_VERSION = config["compose_version"]
MACHINE_VERSION = config["machine_version"]
SWARM_VERSION = config["swarm_version"]
CLUSTER_SIZE = config["clustersize"]
ENGINE_VERSION = config["engine_version"]
#################################
# This script will be run as ubuntu user, which has root privileges.
# docker commands will require sudo because the ubuntu user has no access to the docker socket.
STEP = 0
START = time.time()
def bold(msg):
# Note: the $(tput ...) bits are expanded by the outer shell before this
# script runs, since it is fed through an unquoted heredoc (<<EOF).
return "{} {} {}".format("$(tput smso)", msg, "$(tput rmso)")
def system(cmd):
global STEP
with open("/tmp/pp.status", "a") as f:
t1 = time.time()
f.write(bold("--- RUNNING [step {}] ---> {}...".format(STEP, cmd)))
retcode = os.system(cmd)
if retcode:
retcode = bold(retcode)
t2 = time.time()
td = str(t2-t1)[:5]
f.write("[{}] in {}s\n".format(retcode, td))
STEP += 1
with open("/home/ubuntu/.bash_history", "a") as f:
f.write("{}\n".format(cmd))
# On EC2, the ephemeral disk might be mounted on /mnt.
# If /mnt is a mountpoint, place Docker workspace on it.
system("if mountpoint -q /mnt; then sudo mkdir /mnt/docker && sudo ln -s /mnt/docker /var/lib/docker; fi")
# Put our public IP in /tmp/ipv4
# ipv4_retrieval_endpoint = "http://169.254.169.254/latest/meta-data/public-ipv4"
ipv4_retrieval_endpoint = "http://myip.enix.org/REMOTE_ADDR"
system("curl --silent {} > /tmp/ipv4".format(ipv4_retrieval_endpoint))
ipv4 = open("/tmp/ipv4").read()
# Add a "docker" user with password "training"
system("sudo useradd -d /home/docker -m -s /bin/bash docker")
system("echo docker:training | sudo chpasswd")
# Helper for Docker prompt.
system("""sudo tee /usr/local/bin/docker-prompt <<SQRL
#!/bin/sh
case "\\\$DOCKER_HOST" in
*:3376)
echo swarm
;;
*:2376)
echo \\\$DOCKER_MACHINE_NAME
;;
*:2375)
echo \\\$DOCKER_MACHINE_NAME
;;
*:55555)
echo \\\$DOCKER_MACHINE_NAME
;;
"")
echo local
;;
*)
echo unknown
;;
esac
SQRL""")
system("sudo chmod +x /usr/local/bin/docker-prompt")
# Fancy prompt courtesy of @soulshake.
system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL
export PS1='\e[1m\e[31m[\h] \e[32m(\\\$(docker-prompt)) \e[34m\u@{}\e[35m \w\e[0m\n$ '
SQRL""".format(ipv4))
# Custom .vimrc
system("""sudo -u docker tee /home/docker/.vimrc <<SQRL
syntax on
set autoindent
set expandtab
set number
set shiftwidth=2
set softtabstop=2
SQRL""")
# add docker user to sudoers and allow password authentication
system("""sudo tee /etc/sudoers.d/docker <<SQRL
docker ALL=(ALL) NOPASSWD:ALL
SQRL""")
system("sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config")
system("sudo service ssh restart")
system("sudo apt-get -q update")
system("sudo apt-get -qy install git jq python-pip")
# increase the size of the conntrack table so we don't blow it up when going crazy with http load testing
system("echo 1000000 | sudo tee /proc/sys/net/nf_conntrack_max")
#######################
### DOCKER INSTALLS ###
#######################
# This will install the latest Docker.
system("curl --silent https://{}/ | grep -v '( set -x; sleep 20 )' | sudo sh".format(ENGINE_VERSION))
### Install docker-compose
#system("sudo pip install -U docker-compose=={}".format(COMPOSE_VERSION))
system("sudo curl -sSL -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/{}/docker-compose-{}-{}".format(COMPOSE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-compose")
### Install docker-machine
system("sudo curl -sSL -o /usr/local/bin/docker-machine https://github.com/docker/machine/releases/download/v{}/docker-machine-{}-{}".format(MACHINE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-machine")
system("sudo apt-get remove -y --purge dnsmasq-base")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
### Wait for Docker to be up.
### (If we don't do this, Docker will not be responsive during the next step.)
system("while ! sudo -u docker docker version ; do sleep 2; done")
### Install Swarm
system("docker pull swarm:{}".format(SWARM_VERSION))
system("docker tag -f swarm:{} swarm".format(SWARM_VERSION))
### BEGIN CLUSTERING ###
addresses = list(l.strip() for l in sys.stdin)
assert ipv4 in addresses
def makenames(addrs):
return [ "node%s"%(i+1) for i in range(len(addrs)) ]
while addresses:
cluster = addresses[:CLUSTER_SIZE]
addresses = addresses[CLUSTER_SIZE:]
if ipv4 not in cluster:
continue
names = makenames(cluster)
for ipaddr, name in zip(cluster, names):
system("grep ^{} /etc/hosts || echo {} {} | sudo tee -a /etc/hosts"
.format(ipaddr, ipaddr, name))
print(cluster)
mynode = cluster.index(ipv4) + 1
system("echo 'node{}' | sudo -u docker tee /tmp/node".format(mynode))
system("sudo -u docker mkdir -p /home/docker/.ssh")
system("sudo -u docker touch /home/docker/.ssh/authorized_keys")
if ipv4 == cluster[0]:
# If I'm node1 and don't have a private key, generate one (with empty passphrase)
system("sudo -u docker [ -f /home/docker/.ssh/id_rsa ] || sudo -u docker ssh-keygen -t rsa -f /home/docker/.ssh/id_rsa -P ''")
FINISH = time.time()
duration = "Initial deployment took {}s".format(str(FINISH - START)[:5])
system("echo {}".format(duration))
EOF
IPS_FILE=ips.txt
if [ ! -s $IPS_FILE ]; then
echo "ips.txt not found."
exit 1
fi
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" < $IPS_FILE
# If /home/docker/.ssh/id_rsa doesn't exist, copy it from node1
pssh "sudo -u docker [ -f /home/docker/.ssh/id_rsa ] || ssh -o StrictHostKeyChecking=no node1 sudo -u docker tar -C /home/docker -cvf- .ssh | sudo -u docker tar -C /home/docker -xf-"
# if 'docker@' doesn't appear in /home/docker/.ssh/authorized_keys, copy it there
pssh "grep docker@ /home/docker/.ssh/authorized_keys \
|| cat /home/docker/.ssh/id_rsa.pub \
| sudo -u docker tee -a /home/docker/.ssh/authorized_keys"
# On node1, create and deploy TLS certs using Docker Machine
pssh "if grep -q node1 /tmp/node; then grep ' node' /etc/hosts | xargs -n2 sudo -H -u docker docker-machine create -d generic --generic-ssh-user docker --generic-ip-address; fi"
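The clustering section of `postprep.py` above assigns each VM a `nodeN` name based on its position within its cluster, and each host scans the global address list to find the cluster it belongs to. A self-contained sketch of that logic (helper names are hypothetical):

```python
def makenames(addrs):
    # node1, node2, ... matching the order of the address list,
    # as in postprep.py's makenames()
    return ["node%s" % (i + 1) for i in range(len(addrs))]

def find_my_cluster(addresses, my_ip, cluster_size):
    # Walk the global address list cluster by cluster and return the
    # one containing this host, mirroring the clustering loop above.
    while addresses:
        cluster, addresses = addresses[:cluster_size], addresses[cluster_size:]
        if my_ip in cluster:
            return cluster
    return None
```

Every VM receives the same full IP list on stdin, so this per-host scan is what lets each machine independently agree on its own node name and its peers' names in `/etc/hosts`.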

prepare-vms/scripts/rc Executable file

@@ -0,0 +1,23 @@
# This file can be sourced in order to directly run commands on a batch
# of VMs whose IPs are listed in the ips.txt file of the directory in
# which the command is run.
pssh () {
HOSTFILE="ips.txt"
[ -f $HOSTFILE ] || {
echo "No hostfile found at $HOSTFILE"
return
}
echo "[parallel-ssh] $@"
export PSSH=$(which pssh || which parallel-ssh)
$PSSH -h $HOSTFILE -l ubuntu \
--par 100 \
-O LogLevel=ERROR \
-O UserKnownHostsFile=/dev/null \
-O StrictHostKeyChecking=no \
-O ForwardAgent=yes \
"$@"
}

prepare-vms/scripts/trainer-cli Executable file

@@ -0,0 +1,505 @@
#!/bin/bash
# Don't execute this script directly. Use ../trainer instead.
set -e # if we encounter an error, abort
export AWS_DEFAULT_OUTPUT=text
greet() {
hello=$(aws iam get-user --query 'User.UserName')
echo "Greetings, $hello/${USER}!"
}
deploy_hq(){
TAG=$1
need_tag $TAG
REMOTE_USER=ubuntu
REMOTE_HOST=$(aws_get_instance_ips_by_tag $TAG)
echo "Trying to reach $TAG instances..."
while ! tag_is_reachable $TAG; do
echo -n "."
sleep 2
done
env | grep -i aws > envvars.sh
scp \
-o "UserKnownHostsFile /dev/null" \
-o "StrictHostKeyChecking=no" \
scripts/remote-execution.sh \
envvars.sh \
$REMOTE_USER@$REMOTE_HOST:/tmp/
ssh -A $REMOTE_USER@$REMOTE_HOST "bash /tmp/remote-execution.sh >>/tmp/pre.out 2>>/tmp/pre.err"
ssh -A $REMOTE_USER@$REMOTE_HOST
}
deploy_tag(){
TAG=$1
SETTINGS=$2
need_tag $TAG
link_tag $TAG
count=$(wc -l < ips.txt)
# wait until all hosts are reachable before trying to deploy
echo "Trying to reach $TAG instances..."
while ! tag_is_reachable $TAG; do
echo -n "."
sleep 2
done
echo "[[ Deploying tag $TAG ]]"
export SETTINGS
source scripts/postprep.rc
echo "Finished deploying $TAG."
echo "You may want to run one of the following commands:"
echo "./trainer pull-images $TAG"
echo "./trainer cards $TAG <settings/somefile.yaml>"
}
link_tag() {
TAG=$1
need_tag $TAG
IPS_FILE=tags/$TAG/ips.txt
need_ips_file $IPS_FILE
ln -sf $IPS_FILE ips.txt
}
pull_tag(){
TAG=$1
need_tag $TAG
link_tag $TAG
if [ ! -s $IPS_FILE ]; then
echo "Nonexistent or empty IPs file $IPS_FILE"
fi
# Pre-pull a bunch of images
pssh --timeout 900 'for I in \
debian:latest \
ubuntu:latest \
fedora:latest \
centos:latest \
postgres \
redis \
training/namer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'
echo "Finished pulling images for $TAG"
echo "You may now want to run:"
echo "./trainer cards $TAG <settings/somefile.yaml>"
}
wait_until_tag_is_running() {
max_retry=50
TAG=$1
COUNT=$2
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do \
let "i += 1"
echo "Waiting: $done_count/$COUNT instances online"
done_count=$(aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
"Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].State.Name" \
| tr "\t" "\n" \
| wc -l)
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
tag_is_reachable() {
TAG=$1
need_tag $TAG
link_tag $TAG
pssh -t 5 true >/dev/null 2>&1
}
test_tag(){
TAG=$1
ips_file=tags/$TAG/ips.txt
echo "Using random IP in $ips_file to run tests on $TAG"
ip=$(shuf -n 1 $ips_file)
test_vm $ip
echo "Tests complete. You may want to run one of the following commands:"
echo "./trainer cards $TAG <settings/somefile.yaml>"
}
test_vm() {
ip=$1
echo "[[ Testing instance with IP $(tput bold)$ip $(tput sgr0) ]]"
user=ubuntu
for cmd in "hostname" \
"whoami" \
"hostname -i" \
"cat /tmp/node" \
"cat /tmp/ipv4" \
"cat /etc/hosts" \
"hostnamectl status" \
"docker version | grep Version -B1" \
"docker-compose version" \
"docker-machine version" \
"docker images" \
"docker ps" \
"curl --silent localhost:55555" \
"sudo ls -la /mnt/ | grep docker" \
"env" \
"ls -la /home/docker/.ssh"; do
echo "=== $cmd ==="
echo "$cmd" |
ssh -A -q \
-o "UserKnownHostsFile /dev/null" \
-o "StrictHostKeyChecking=no" \
$user@$ip sudo -u docker -i
echo
done
}
make_key_name(){
SHORT_FINGERPRINT=$(ssh-add -l | grep RSA | head -n1 | cut -d " " -f 2 | tr -d : | cut -c 1-8)
echo "${SHORT_FINGERPRINT}-${USER}"
}
sync_keys() {
# make sure ssh-add -l contains "RSA"
ssh-add -l | grep -q RSA ||
die "The output of \`ssh-add -l\` doesn't contain 'RSA'. Start the agent and add your key(s), then retry."
AWS_KEY_NAME=$(make_key_name)
echo -n "Syncing keys... "
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &> /dev/null; then
aws ec2 import-key-pair --key-name $AWS_KEY_NAME \
--public-key-material "$(ssh-add -L \
| grep -i RSA \
| head -n1 \
| cut -d " " -f 1-2)" &> /dev/null
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &> /dev/null; then
die "Somehow, importing the key didn't work. Make sure that 'ssh-add -l | grep RSA | head -n1' returns an RSA key?"
else
echo "Imported new key $AWS_KEY_NAME."
fi
else
echo "Using existing key $AWS_KEY_NAME."
fi
}
suggest_amis() {
scripts/find-ubuntu-ami.sh -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N
}
get_token() {
if [ -z "$USER" ]; then
export USER=anonymous
fi
date +%Y-%m-%d-%H-%M-$USER
}
get_ami() {
# using find-ubuntu-ami script in `trainer-tools/scripts`:
#AMI=$(./scripts/find-ubuntu-ami.sh -r $AWS_DEFAULT_REGION -a amd64 -v 15.10 -t hvm:ebs -N | grep -v ^REGION | head -1 | awk '{print $7}')
#AMI=$(suggest_amis | grep -v ^REGION | head -1 | awk '{print $7}')
case $AWS_DEFAULT_REGION in
eu-central-1)
AMI=ami-82cf0aed
;;
eu-west-1)
AMI=ami-07174474
;;
us-east-1)
AMI=ami-2808313f
;;
us-east-2)
AMI=ami-1b772d7e
;;
us-west-1)
AMI=ami-dab5e0ba
;;
us-west-2)
AMI=ami-9ee24ffe
;;
esac
echo $AMI
}
make_cards(){
# Generate cards for a given tag
TAG=$1
SETTINGS_FILE=$2
[[ -z "$SETTINGS_FILE" ]] && {
echo "Please specify the settings file you want to use."
echo "e.g.: settings/orchestration.yaml"
exit 1
}
aws_get_instance_ips_by_tag $TAG > tags/$TAG/ips.txt
# Remove symlinks to old cards
rm -f ips.html ips.pdf
# This will generate two files in the base dir: ips.pdf and ips.html
python scripts/ips-txt-to-html.py $SETTINGS_FILE
for f in ips.html ips.pdf; do
# Remove old versions of cards if they exist
rm -f tags/$TAG/$f
# Move the generated file and replace it with a symlink
mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
done
echo "Cards created. You may want to run:"
echo "chromium ips.html"
echo "chromium ips.pdf"
}
describe_tag() {
# Display instance details and reachability/status information
TAG=$1
need_tag $TAG
echo "============= Tag: $TAG ============="
aws_display_instances_by_tag $TAG
aws_display_instance_statuses_by_tag $TAG
}
run_cli() {
case "$1" in
ami)
# A wrapper for scripts/find-ubuntu-ami.sh
shift
scripts/find-ubuntu-ami.sh -r $AWS_DEFAULT_REGION $*
echo
echo "Protip:"
echo "./trainer ami -a amd64 -v 16.04 -t hvm:ebs -N | grep -v ^REGION | cut -d\" \" -f15"
echo
echo "Suggestions:"
suggest_amis
;;
cards)
TAG=$2
need_tag $TAG
make_cards $TAG $3
;;
deploy)
TAG=$2
need_tag $TAG
if [[ $TAG == *"-hq"* ]]; then
echo "Deploying HQ"
deploy_hq $TAG
else
SETTINGS=$3
if [[ -z "$SETTINGS" ]]; then
echo "Please specify a settings file."
exit 1
fi
if ! [[ -f "$SETTINGS" ]]; then
echo "Settings file $SETTINGS not found."
exit 1
fi
echo "Deploying with settings $SETTINGS."
deploy_tag $TAG $SETTINGS
fi
;;
ids)
TAG=$2
need_tag $TAG
IDS=$(aws_get_instance_ids_by_tag $TAG)
echo "$IDS"
# Just in case we managed to create instances but weren't able to tag them
echo "Lookup by client token $TAG:"
IDS=$(aws_get_instance_ids_by_client_token $TAG)
echo "$IDS"
;;
ips)
TAG=$2
need_tag $TAG
mkdir -p tags/$TAG
aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
link_tag $TAG
;;
list)
# list existing instances in a given batch
# to list batches, see "tags" command
echo "Using region $AWS_DEFAULT_REGION."
TAG=$2
need_tag $TAG
describe_tag $TAG
tag_is_reachable $TAG
echo "You may be interested in running one of the following commands:"
echo "./trainer ips $TAG"
echo "./trainer deploy $TAG <settings/somefile.yaml>"
;;
opensg)
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
;;
pull-images)
TAG=$2
need_tag $TAG
pull_tag $TAG
;;
retag)
if [[ -z "$2" ]] || [[ -z "$3" ]]; then
die "Please specify old tag/token, and new tag."
fi
aws_tag_instances $2 $3
;;
shell)
# Get a shell in the container
export PS1="trainer@$AWS_DEFAULT_REGION# "
exec $SHELL
;;
start)
# Create $2 instances
COUNT=$2
if [ -z "$COUNT" ]; then
die "Indicate number of instances to start."
fi
greet # Print our AWS username, to ease the pain of credential-juggling
key_name=$(sync_keys) # Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
AMI=$(get_ami) # Retrieve the AWS image ID
TOKEN=$(get_token) # generate a timestamp token for this batch of VMs
if [ -n "$3" ]; then
# If an extra arg is present, append it to the tag
TOKEN=$TOKEN-$3
fi
echo "-----------------------------------"
echo "Starting $COUNT instances:"
echo " Region: $AWS_DEFAULT_REGION"
echo " Token/tag: $TOKEN"
echo " AMI: $AMI"
AWS_KEY_NAME=$(make_key_name)
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $2 \
--instance-type t2.medium \
--client-token $TOKEN \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}' )
echo " Key name: $AWS_KEY_NAME"
echo "Reservation ID: $reservation_id"
echo "-----------------------------------"
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TOKEN)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
TAG=$TOKEN
aws_tag_instances $TOKEN $TAG
wait_until_tag_is_running $TAG $COUNT
echo "[-------------------------------------------------------------------------------------]"
echo " Successfully created $2 instances with tag: $TAG"
echo "[-------------------------------------------------------------------------------------]"
mkdir -p tags/$TAG
IPS=$(aws_get_instance_ips_by_tag $TAG)
echo "$IPS" > tags/$TAG/ips.txt
link_tag $TAG
echo "To deploy or kill these instances, run one of the following:"
echo "./trainer deploy $TAG <settings/somefile.yaml>"
echo "./trainer list $TAG"
;;
status)
greet && echo
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
echo "Max instances: $max_instances" && echo
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
echo "Region:" # $AWS_DEFAULT_REGION."
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
;;
stop)
TAG=$2
need_tag $TAG
aws_kill_instances_by_tag $TAG
;;
tag)
# add a tag to a batch of VMs
TAG=$2
NEW_TAG_KEY=$3
NEW_TAG_VALUE=$4
need_tag $TAG
need_tag $NEW_TAG_KEY
need_tag $NEW_TAG_VALUE
;;
test)
TAG=$2
need_tag $TAG
test_tag $TAG
;;
*)
echo "
./trainer <command> [n-instances|tag] [settings/file.yaml]
Core commands:
start n Start n instances
list [TAG] If a tag is provided, list its VMs. Otherwise, list tags.
deploy TAG Deploy all instances with a given tag
pull-images TAG Pre-pull docker images. Run only after deploying.
stop TAG Stop and delete instances tagged TAG
Extras:
ips TAG List all IPs of instances with a given tag (updates ips.txt)
ids TAG/TOKEN List all instance IDs with a given tag
shell Get a shell in the trainer container
status Show account info (max instances, current region)
tags List all tags (per-region)
retag TAG/TOKEN TAG Retag instances with a new tag
Beta:
ami Look up Amazon Machine Images
cards TAG FILE Generate cards for a given tag, using a settings file
opensg Modify AWS security groups
"
;;
esac
}
(
cd $SCRIPT_DIR
source scripts/cli.sh
source scripts/aws.sh
source scripts/rc
source scripts/colors.sh
mkdir -p tags
# TODO: unset empty envvars
run_cli "$@"
)
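The `start` command above tags each batch of VMs with a timestamp token generated by `get_token` (`date +%Y-%m-%d-%H-%M-$USER`), which doubles as the EC2 client token for idempotent instance creation. A Python sketch of the same scheme (function name is hypothetical):

```python
import datetime

def get_token(user="anonymous"):
    # Mirrors `date +%Y-%m-%d-%H-%M-$USER` in trainer-cli;
    # "anonymous" is the fallback when $USER is unset.
    return datetime.datetime.now().strftime("%Y-%m-%d-%H-%M") + "-" + user
```

Because the token embeds the creation time, sorting tags lexically also sorts batches chronologically, which makes stale batches easy to spot and kill.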


@@ -0,0 +1,36 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
workshop_name: Docker Orchestration
workshop_short_name: orchestration
repo: https://github.com/jpetazzo/orchestration-workshop
url: http://container.training/ # moreinfo link printed on cards
#engine_version: experimental.docker.com # extra features that may change or go away
#engine_version: test.docker.com
engine_version: get.docker.com #prod release
compose_version: 1.8.1
machine_version: 0.8.2
swarm_version: 1.2.5
# for now these are hard-coded in the script, and only used for printing cards
instance_login: docker
instance_password: training
# 12 per page works well, but the text is quite small
clustersize: 5 # Number of VMs per cluster
pagesize: 12 # Number of cards to print per page
background_image: https://raw.githubusercontent.com/jpetazzo/orchestration-workshop/master/prepare-vms/media/swarm.png
# To be printed on the cards:
blurb: >
Here is the connection information to your very own
{cluster_or_machine} for this {workshop_name} workshop. You can connect
to {this_or_each} VM with any SSH client.
Your {machine_is_or_machines_are}:
# {url} will be replaced by the script
footer: >
<p>For slides, chat and other useful links, see: </p>
<center>{url}</center>
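The `blurb` above contains placeholders such as `{cluster_or_machine}` that `ips-txt-to-html.py` fills in differently depending on `clustersize`, so the same settings file reads naturally for single-VM and multi-VM workshops. A sketch of that singular/plural switch (function name is hypothetical, template abridged):

```python
blurb_template = (
    "Here is the connection information to your very own "
    "{cluster_or_machine} for this {workshop_name} workshop. "
    "You can connect to {this_or_each} VM with any SSH client."
)

def render_blurb(clustersize, workshop_name):
    # Same singular/plural switch as in ips-txt-to-html.py
    if clustersize == 1:
        return blurb_template.format(cluster_or_machine="machine",
                                     this_or_each="this",
                                     workshop_name=workshop_name)
    return blurb_template.format(cluster_or_machine="cluster",
                                 this_or_each="each",
                                 workshop_name=workshop_name)
```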


@@ -0,0 +1,33 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
workshop_name: Docker fundamentals
workshop_short_name: Docker # appears on VM connection cards
repo: https://github.com/docker/docker-fundamentals
instance_login: docker
instance_password: training
clustersize: 1 # Number of VMs per cluster
pagesize: 15 # Number of cards to print per page
background_image: https://www.docker.com/sites/default/files/Engine.png
# To be printed on the cards:
blurb: >
Here is the connection information to your very own
{cluster_or_machine} for this {workshop_name} workshop. You can connect
to {this_or_each} VM with any SSH client.
Your {machine_is_or_machines_are}:
# {url} will be replaced by the script
footer: >
<p>For slides, chat and other useful links, see: </p>
<center>{url}</center>
url: http://container.training/
engine_version: get.docker.com
compose_version: 1.8.1
machine_version: 0.8.2
swarm_version: latest


@@ -0,0 +1,34 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
workshop_name: Advanced Docker Orchestration
workshop_short_name: orchestration
repo: https://github.com/jpetazzo/orchestration-workshop
instance_login: docker
instance_password: training
clustersize: 5 # Number of VMs per cluster
pagesize: 12 # Number of cards to print per page
background_image: https://www.docker.com/sites/default/files/Engine.png
#background_image: ../media/swarm.png
# To be printed on the cards:
blurb: >
Here is the connection information to your very own
{cluster_or_machine} for this {workshop_name} workshop. You can connect
to {this_or_each} VM with any SSH client.
Your {machine_is_or_machines_are}:
# {url} will be replaced by the script
footer: >
<p>For slides, chat and other useful links, see: </p>
<center>{url}</center>
url: http://container.training/
engine_version: get.docker.com
compose_version: 1.8.1
machine_version: 0.8.2
swarm_version: latest


@@ -0,0 +1,35 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
workshop_name: Advanced Docker Orchestration
workshop_short_name: orchestration
repo: https://github.com/jpetazzo/orchestration-workshop
instance_login: docker
instance_password: training
clustersize: 5 # Number of VMs per cluster
pagesize: 12 # Number of cards to print per page
#background_image: https://myapps.developer.ubuntu.com/site_media/appmedia/2014/12/swarm.png
background_image: http://www.yellosoft.us/public/images/docker.png
#background_image: ../media/swarm.png
# To be printed on the cards:
blurb: >
Here is the connection information to your very own
{cluster_or_machine} for this {workshop_name} workshop. You can connect
to {this_or_each} VM with any SSH client.
Your {machine_is_or_machines_are}:
# {url} will be replaced by the script
footer: >
<p>For slides, chat and other useful links, see: </p>
<center>{url}</center>
url: http://container.training/
engine_version: test.docker.com
compose_version: 1.9.0
machine_version: 0.9.0-rc1
swarm_version: latest


prepare-vms/trainer Executable file

@@ -0,0 +1,80 @@
#!/bin/bash
TRAINER_IMAGE="preparevms_prepare-vms"
DEPENDENCIES="
aws
ssh
curl
jq
pssh
wkhtmltopdf
man
"
ENVVARS="
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
SSH_AUTH_SOCK
"
check_envvars() {
STATUS=0
for envvar in $ENVVARS; do
if [ -z "${!envvar}" ]; then
echo "Please set environment variable $envvar."
STATUS=1
unset $envvar
fi
done
return $STATUS
}
check_dependencies() {
STATUS=0
for dependency in $DEPENDENCIES ; do
if ! command -v $dependency >/dev/null; then
echo "Could not find dependency $dependency."
STATUS=1
fi
done
return $STATUS
}
check_ssh_auth_sock() {
if [ -z $SSH_AUTH_SOCK ]; then
echo -n "SSH_AUTH_SOCK envvar not set, so its parent directory can't be "
echo "mounted as a volume in a container."
echo "Try running the command below and trying again:"
echo "eval \$(ssh-agent) && ssh-add"
exit 1
fi
}
check_image() {
docker inspect $TRAINER_IMAGE >/dev/null 2>&1
}
# Get the script's real directory, whether we're being called directly or via a symlink
if [ -L "$0" ]; then
export SCRIPT_DIR=$(dirname $(readlink "$0"))
else
export SCRIPT_DIR=$(dirname "$0")
fi
cd "$SCRIPT_DIR"
check_envvars || exit 1
if check_dependencies; then
scripts/trainer-cli "$@"
elif check_image; then
check_ssh_auth_sock
export SSH_AUTH_DIRNAME=$(dirname $SSH_AUTH_SOCK)
docker-compose run prepare-vms "$@"
else
echo "Some dependencies are missing, and docker image $TRAINER_IMAGE doesn't exist locally."
echo "Please do one of the following: "
echo "- run \`docker-compose build\`"
echo "- install missing dependencies"
fi
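The `./trainer` wrapper above first checks whether every required CLI tool is on `$PATH`, and only falls back to the Docker image when something is missing. A Python analogue of that dependency probe (function name is hypothetical):

```python
import shutil

def missing_dependencies(deps):
    # Python analogue of check_dependencies in ./trainer:
    # report every command that cannot be found on $PATH.
    return [d for d in deps if shutil.which(d) is None]
```

Reporting all missing tools at once, rather than failing on the first one, is the same design choice `check_dependencies` makes: the user fixes everything in one pass.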

prom/Dockerfile Normal file

@@ -0,0 +1,3 @@
FROM prom/prometheus:v1.4.1
COPY prometheus.yml /etc/prometheus/prometheus.yml

prom/prometheus.yml Normal file

@@ -0,0 +1,17 @@
global:
scrape_interval: 1s
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'node'
dns_sd_configs:
- names: ['tasks.node']
type: 'A'
port: 9100
- job_name: 'cadvisor'
dns_sd_configs:
- names: ['tasks.cadvisor']
type: 'A'
port: 8080

snap/docker-influxdb.json Normal file

@@ -0,0 +1,29 @@
{
"version": 1,
"schedule": {
"type": "simple",
"interval": "1s"
},
"max-failures": 10,
"workflow": {
"collect": {
"metrics": {
"/intel/docker/*/stats/cgroups/cpu_stats/cpu_usage/total_usage": {},
"/intel/docker/*/stats/cgroups/memory_stats/usage/usage": {}
},
"process": null,
"publish": [
{
"plugin_name": "influx",
"config": {
"host": "127.0.0.1",
"port": 8086,
"database": "snap",
"user": "admin",
"password": "admin"
}
}
]
}
}
}

snap/psutil-file.yml Normal file

@@ -0,0 +1,21 @@
---
version: 1
schedule:
type: "simple"
interval: "1s"
max-failures: 10
workflow:
collect:
metrics:
/intel/psutil/load/load1: {}
/intel/psutil/load/load15: {}
/intel/psutil/load/load5: {}
/intel/psutil/vm/available: {}
/intel/psutil/vm/free: {}
/intel/psutil/vm/used: {}
config:
publish:
-
plugin_name: "mock-file"
config:
file: "/tmp/snap-psutil-file.log"

stacks/dockercoins Symbolic link

@@ -0,0 +1 @@
../dockercoins

stacks/dockercoins.yml Normal file

@@ -0,0 +1,48 @@
version: "3"
services:
rng:
build: dockercoins/rng
image: ${REGISTRY_SLASH-localhost:5000/}rng${COLON_TAG-:latest}
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
deploy:
mode: global
hasher:
build: dockercoins/hasher
image: ${REGISTRY_SLASH-localhost:5000/}hasher${COLON_TAG-:latest}
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
webui:
build: dockercoins/webui
image: ${REGISTRY_SLASH-localhost:5000/}webui${COLON_TAG-:latest}
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
ports:
- "8000:80"
redis:
image: redis
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
worker:
build: dockercoins/worker
image: ${REGISTRY_SLASH-localhost:5000/}worker${COLON_TAG-:latest}
logging:
driver: gelf
options:
gelf-address: udp://localhost:12201
deploy:
replicas: 10
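The image names above use Compose's default-value expansion: `${REGISTRY_SLASH-localhost:5000/}` falls back to the local registry prefix when `REGISTRY_SLASH` is unset, and `${COLON_TAG-:latest}` likewise defaults the tag. A sketch of that expansion (function name is hypothetical; a plain dict stands in for the environment):

```python
def expand_image(name, env):
    # Emulates the compose defaults ${REGISTRY_SLASH-localhost:5000/}
    # and ${COLON_TAG-:latest} used in stacks/dockercoins.yml
    registry = env.get("REGISTRY_SLASH", "localhost:5000/")
    tag = env.get("COLON_TAG", ":latest")
    return registry + name + tag
```

Including the slash and colon in the variable values (rather than the template) lets both variables be set to empty strings to produce a bare image name like `rng`.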

stacks/elk.yml Normal file

@@ -0,0 +1,40 @@
version: "3"
services:
elasticsearch:
image: elasticsearch:2
logstash:
image: logstash
command: |
-e '
input {
gelf { }
heartbeat { }
}
filter {
ruby {
code => "
event.to_hash.keys.each { |k| event[ k.gsub('"'.'"','"'_'"') ] = event.remove(k) if k.include?'"'.'"' }
"
}
}
output {
elasticsearch {
hosts => ["elasticsearch:9200"]
}
stdout {
codec => rubydebug
}
}'
ports:
- "12201:12201/udp"
kibana:
image: kibana:4
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200
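The ruby filter in the logstash config above renames every event field containing a dot to use underscores instead, because Elasticsearch 2.x rejects field names with dots. The transformation itself is simple (function name is hypothetical):

```python
def dedot(event):
    # Same transformation as the ruby filter in stacks/elk.yml:
    # Elasticsearch 2.x rejects field names containing dots, so
    # GELF keys like "container.name" are renamed with underscores.
    return {k.replace(".", "_"): v for k, v in event.items()}
```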

stacks/prometheus.yml Normal file

@@ -0,0 +1,30 @@
version: "3"
services:
prometheus:
build: ../prom
image: localhost:5000/prom
ports:
- "9090:9090"
node:
image: prom/node-exporter
command: -collector.procfs /host/proc -collector.sysfs /host/sys -collector.filesystem.ignored-mount-points "^(sys|proc|dev|host|etc)($$|/)"
deploy:
mode: global
volumes:
- "/proc:/host/proc"
- "/sys:/host/sys"
- "/:/rootfs"
cadvisor:
image: google/cadvisor
deploy:
mode: global
volumes:
- "/:/rootfs"
- "/var/run:/var/run"
- "/sys:/sys"
- "/var/lib/docker:/var/lib/docker"

stacks/registry.yml Normal file

@@ -0,0 +1,8 @@
version: "3"
services:
registry:
image: registry:2
ports:
- "5000:5000"


@@ -1,6 +0,0 @@
www:
image: nginx
ports:
- "80:80"
volumes:
- "./htdocs:/usr/share/nginx/html"

Some files were not shown because too many files have changed in this diff.