Compare commits


215 Commits

Author SHA1 Message Date
Jérôme Petazzoni
a4e62e0880 Last updates for dod msp 2017-07-25 11:55:22 -05:00
Jérôme Petazzoni
b2941ce447 Add details about least privilege model 2017-07-24 23:48:29 -05:00
Jérôme Petazzoni
84c88ed4c2 short version for DoD MSP 2017-07-24 15:34:06 -05:00
Jérôme Petazzoni
0d7ee1dda0 Merge branch 'alexellis-alexellis-patch-sol' 2017-07-12 13:41:45 +02:00
Jérôme Petazzoni
243d585432 Add a few details about what happens when losing the sole manager 2017-07-12 13:41:37 +02:00
Alex Ellis
f5fe7152f3 Internationalisation
I had no idea what SOL was - had to google this on Urban Dictionary :-/ have put an internationalisation in and retained the colliqualism in brackets.
2017-07-11 19:00:23 +01:00
Jérôme Petazzoni
94d9ad22d0 Add ngrep details when using PWD or Vagrant re/ interface selection (closes #84) 2017-07-11 19:51:00 +02:00
Jérôme Petazzoni
0af160e0a8 Merge pull request #82 from adulescentulus/fix_visualizer_exercise
(some) wrong instructions
2017-06-17 09:31:31 -07:00
Andreas Groll
1fdb7b8077 added missing stackname 2017-06-12 15:25:35 +02:00
Andreas Groll
d2b67c426e you only can connect to the ip where you started your visualizer 2017-06-12 12:07:59 +02:00
Jérôme Petazzoni
a84cc36cd8 Update installation method 2017-06-09 18:16:29 +02:00
Jerome Petazzoni
c8ecf5a647 PYCON final check! 2017-05-17 18:14:33 -07:00
Jerome Petazzoni
e9ee050386 Explain extra details 2017-05-17 15:56:28 -07:00
Jerome Petazzoni
6e59e2092c Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2017-05-17 15:00:42 -07:00
Jerome Petazzoni
c7b0fd32bd Add detail about ASGs 2017-05-17 15:00:31 -07:00
Jérôme Petazzoni
ead4e33604 Merge pull request #79 from jliu70/oscon2017
fix typo
2017-05-17 14:31:26 -07:00
Jérôme Petazzoni
96b4f76c67 Backport all changes from OSCON 2017-05-17 00:17:24 -05:00
Jeff Liu
6337d49123 fix typo 2017-05-08 10:21:51 -05:00
Jerome Petazzoni
aec2de848b Rename docker-compose files to keep .yml extension (fixes #69) 2017-05-03 12:44:17 -07:00
Jérôme Petazzoni
91942f22a0 Merge pull request #73 from everett-toews/cd-to-snap
Change to the snap dir first
2017-05-03 14:36:52 -05:00
Jérôme Petazzoni
93cdc9d987 Merge pull request #72 from everett-toews/fix-worker-service-name
Fix the dockercoins_worker service name
2017-05-03 14:36:27 -05:00
Jérôme Petazzoni
13e6283221 Merge pull request #71 from everett-toews/netshoot
Consistent use of the netshoot image
2017-05-03 14:35:54 -05:00
Jerome Petazzoni
e56bea5c16 Update Swarm visualizer information 2017-05-03 12:36:09 -07:00
Jerome Petazzoni
eda499f084 Fix link to Raft (thanks @kchien) - fixes #74 2017-05-03 12:20:45 -07:00
Jerome Petazzoni
ae638b8e89 Minor updates before GOTO 2017-05-03 11:46:35 -07:00
Jerome Petazzoni
5296be32ed Handle untagged resources 2017-05-03 11:26:47 -07:00
Jerome Petazzoni
f1cd3ba7d0 Remove rc.yaml 2017-05-03 10:02:36 -07:00
Jérôme Petazzoni
b307adee91 Last updates
Conflicts:
	docs/index.html
2017-05-03 09:34:42 -07:00
Jérôme Petazzoni
f4540fad78 Update describe-instances for awscli 1.11 (thanks @mikegcoleman for finding that bug!) 2017-05-03 09:15:45 -07:00
Jérôme Petazzoni
70db794111 Simplify stackfiles 2017-04-16 23:56:30 -05:00
Jérôme Petazzoni
abafc0c8ec Add swarm-rafttool 2017-04-16 23:47:56 -05:00
Everett Toews
a7dba759a8 Change to the snap dir first 2017-04-16 14:34:49 -05:00
Everett Toews
b14662490a Fix the dockercoins_worker service name 2017-04-16 13:23:54 -05:00
Everett Toews
9d45168752 Consistent use of the netshoot image 2017-04-16 13:16:02 -05:00
Jérôme Petazzoni
7b3c9cd2c3 Add @alexmavr/swarm-nbt (FTW!) 2017-04-15 18:29:32 -05:00
Jérôme Petazzoni
84d4a367ec Mention --filter for docker service ps 2017-04-15 17:45:24 -05:00
Jérôme Petazzoni
bd6b37b573 Add @manomarks' Swarm viz tool 2017-04-15 17:21:38 -05:00
Jérôme Petazzoni
e1b2a4440d Update docker service logs; --detach=false 2017-04-14 15:39:52 -05:00
Jérôme Petazzoni
1b5365d905 Update settings; add security workshop 2017-04-14 15:39:24 -05:00
Jérôme Petazzoni
27ea268026 Automatically resolve AMI ID to use 2017-04-14 15:32:03 -05:00
Jérôme Petazzoni
b0f566538d Re-add useful self-paced slides 2017-03-31 21:49:57 -05:00
Jerome Petazzoni
e637354d3e Fix TOC and minor tweaks 2017-03-31 21:41:24 -05:00
Jerome Petazzoni
1f8c27b1aa Update deployed versions 2017-03-31 21:40:05 -05:00
Jerome Petazzoni
f7d317d960 Backporting Devoxx updates 2017-03-31 21:39:48 -05:00
Jérôme Petazzoni
a8c54a8afd Update chat links 2017-03-31 21:36:08 -05:00
Jerome Petazzoni
73b3752c7e Change chat links 2017-03-31 21:33:12 -05:00
Jérôme Petazzoni
d60ba2e91e Merge pull request #68 from hknust/master
Service name should be dockercoins_worker not worker
2017-03-30 17:11:37 -05:00
Jérôme Petazzoni
d480f5c26a Clarify node switching commands 2017-03-20 19:30:38 -07:00
Jérôme Petazzoni
540aa91f48 Hotfix JS file 2017-03-10 16:46:51 -06:00
Jérôme Petazzoni
8f3c0da385 Use our custom fork of remark; updates for Docker Birthday 2017-03-10 16:40:48 -06:00
Holger Knust
6610ff178d Fixed typo on slide. Attempts instead of attemps 2017-03-04 23:13:35 -08:00
Holger Knust
9a9e725d5b Service name should be dockercoins_worker not worker 2017-03-04 11:29:01 -08:00
Jérôme Petazzoni
09cabc556e Update for SCALE 15x 2017-03-02 16:38:59 -08:00
Jérôme Petazzoni
44f4017992 Switch from localhost to 127.0.0.1 (to work around some weird DNS issues) 2017-03-02 14:06:59 -08:00
Jérôme Petazzoni
6f85ff7824 Reorganize advanced content for Docker Birthday 2017-02-16 15:16:06 -06:00
Jérôme Petazzoni
514ac69a8f Ship part 1 for Docker Birthday 2017-02-15 00:03:01 -06:00
Jérôme Petazzoni
7418691249 Rework intro for self-guided workshop 2017-02-14 10:15:27 -06:00
Jérôme Petazzoni
4d2289b2d2 Add details about authorization plugins 2017-02-09 12:33:55 -06:00
Jerome Petazzoni
e0956be92c Add link target for logging 2017-01-20 16:24:15 -08:00
Jérôme Petazzoni
d623f76a02 add note on API scope 2017-01-13 19:29:22 -06:00
Jérôme Petazzoni
dd555af795 update section about restart condition 2017-01-13 17:59:57 -06:00
Jérôme Petazzoni
a2da3f417b update secret section 2017-01-13 17:35:45 -06:00
Jérôme Petazzoni
d129b37781 minor updates, including services ps -a flag 2017-01-13 16:22:58 -06:00
Jérôme Petazzoni
849ea6e576 improve LB demo a bit 2017-01-13 16:04:53 -06:00
Jérôme Petazzoni
7ed54eee66 Merge pull request #64 from trapier/slides_comment_format
slides: code block comment formatting on snap install
2016-12-12 17:59:21 -06:00
Trapier Marshall
1dca8e5a7a slides: code block comment formatting
This will make it easier to copy-paste the whole block used for
snap installation
2016-12-12 11:03:30 -05:00
Jérôme Petazzoni
165de1dbb5 Merge pull request #63 from trapier/slides_cosmetic_edits
couple of cosmetic edits to slides
2016-12-11 21:48:57 -06:00
Trapier Marshall
b7afd13012 couple cosmetic corrections to slides 2016-12-11 01:16:30 -05:00
Jerome Petazzoni
e8b64c5e08 Last touch-ups for LISA16! Good to go! 2016-12-05 19:32:39 -08:00
Jerome Petazzoni
9124eb0e07 Add healthchecks in WIP section 2016-12-05 13:32:09 -08:00
Jerome Petazzoni
0bede24e23 Add what's next section 2016-12-05 10:49:31 -08:00
Jerome Petazzoni
ee79e5ba86 Add MOSH instructions 2016-12-05 10:32:29 -08:00
Jerome Petazzoni
9078cfb57d DAB -> Compose v3 2016-12-05 08:53:31 -08:00
Jerome Petazzoni
6854698fe1 Add Fluentd instructions (contrib) 2016-12-04 17:07:48 -08:00
Jerome Petazzoni
16a4dac192 Add "replayability" instructions 2016-12-04 16:40:17 -08:00
Jerome Petazzoni
0029fa47c5 Update secrets and autolock chapters (thanks @diogomonica for feedback and pointers!) 2016-12-04 09:19:09 -08:00
Jerome Petazzoni
a53636340b Tweak 2016-12-03 10:30:29 -08:00
Jerome Petazzoni
c95b88e562 Secrets management and data encryption 2016-12-03 10:28:20 -08:00
Jerome Petazzoni
d438bd624a Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-12-02 17:50:39 -08:00
Jerome Petazzoni
839746831b Improve illustration a bit 2016-12-02 17:50:29 -08:00
Jérôme Petazzoni
0b1b589314 Merge pull request #60 from hubertst/patch-1
Update provisioning.yml
2016-12-02 16:47:54 -08:00
Hubert
61d2709f8f Update provisioning.yml
fix for ansible 2.2
2016-12-02 09:49:52 +01:00
Jerome Petazzoni
1741a7b35a Add encrypted networks 2016-12-01 22:15:42 -08:00
Jerome Petazzoni
e101856dd7 dynamic scheduling 2016-12-01 17:18:00 -08:00
Jerome Petazzoni
d451f9c7bf Add note on docker service update --mode 2016-12-01 15:52:05 -08:00
Jerome Petazzoni
b021b0eec8 Addtl metrics resources 2016-12-01 15:43:49 -08:00
Jerome Petazzoni
e4f824fd07 docker system ... 2016-11-30 15:54:14 -08:00
Jerome Petazzoni
019165e98c Re-enable a few slides (checked all ??? slides) 2016-11-29 13:02:42 -08:00
Jerome Petazzoni
cf5c2d5741 Add PromQL details + side-by-side Prom&Snap comparison 2016-11-29 12:59:28 -08:00
Jerome Petazzoni
971bf85b17 Clarify raft usage 2016-11-28 17:44:15 -08:00
Jerome Petazzoni
83749ade43 Add "what did we change in this app?" section 2016-11-28 17:17:24 -08:00
Jerome Petazzoni
76fb2f2e2c Add prometheus files (fixes #58) 2016-11-28 12:30:56 -08:00
Jerome Petazzoni
6bda8147e4 Merge branch 'lisa16' 2016-11-28 12:28:03 -08:00
Jerome Petazzoni
95751d1ee9 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-11-23 15:18:12 -08:00
Jerome Petazzoni
12adae107e Update instructions to install Compose in nodes
Closes #51

(Also addresses remarks about using Machine in older EC2 accounts lacking VPC)
2016-11-23 15:18:07 -08:00
Jerome Petazzoni
c652ea08a2 Upgrade to remark 0.14 (closes #38) 2016-11-23 14:45:03 -08:00
Jerome Petazzoni
30008e4af6 Add warning re/ swarmtctl (fixes #35) 2016-11-23 14:34:44 -08:00
Jérôme Petazzoni
bb262e27e8 Merge pull request #55 from stefanlasiewski/master
"Using Docker Machine to communicate with a node" missing the `docker-machine env` command
2016-11-23 12:27:55 -06:00
Jerome Petazzoni
9656d959cc Switch to EBS-based instances; change default instance type to t2.medium 2016-11-21 17:10:07 -08:00
Jerome Petazzoni
46b772b95e First round of updates for LISA 2016-11-21 16:55:47 -08:00
stefanlasiewski
f801e1b9ad Add instructions for VMware Fusion. 2016-11-21 11:44:13 -08:00
stefanlasiewski
1c44d7089a Merge branch 'master' of https://github.com/stefanlasiewski/orchestration-workshop 2016-11-18 14:44:58 -08:00
stefanlasiewski
1f7f4a29ff docker-machine ... should actually be docker-machine env ... in a
couple of places.
2016-11-18 14:44:33 -08:00
Jerome Petazzoni
e16e23e2bd Add supergrok instructions 2016-11-18 10:06:10 -08:00
Jérôme Petazzoni
b5206aa68e Merge pull request #53 from drewmoseley/patch-1
Install pycrypto
2016-11-17 17:24:49 -06:00
Jérôme Petazzoni
8a47bce180 Merge pull request #52 from asziranyi/patch-1
add vagrant-vbguest install link
2016-11-17 17:24:18 -06:00
Drew Moseley
6cd8c32621 Install pycrypto
Not sure if it's somehow unique to my setup but Ansible needed me to install pycrypto as well.
2016-11-17 12:07:42 -05:00
asziranyi
f2f1934940 add vagrant-vbguest installation link 2016-11-17 15:50:47 +01:00
Jerome Petazzoni
8cc388dcb8 add ctrl-p ctrl-q warning 2016-11-14 12:36:57 -08:00
Jerome Petazzoni
a276e72ab0 add ngrok instructions 2016-11-14 11:23:22 -08:00
Jerome Petazzoni
bdb8e1b3df Add instructions for self-paced workshop 2016-11-11 14:28:28 -08:00
Jérôme Petazzoni
66ee4739ed typos 2016-11-07 22:40:59 -06:00
Jérôme Petazzoni
893c7b13c6 Add instructions to create VMs with Docker Machine 2016-11-07 22:38:43 -06:00
Jerome Petazzoni
78b730e4ac Patch up TOC generator 2016-11-01 17:37:48 -07:00
Jerome Petazzoni
e3eb06ddfb Bump up to Compose 1.8.1 and Machine 0.8.2 2016-11-01 17:10:55 -07:00
Jerome Petazzoni
ad29a45191 Add advertise-addr info + small fixups for mentor week 2016-11-01 17:10:36 -07:00
Jerome Petazzoni
e1968beefa Bump to 16.04 LTS AMIs (closes #37)
16.04 doesn't come with Python setuptools, so we have to install that too.
2016-10-18 08:53:53 -07:00
Jerome Petazzoni
b1b3ecb5e9 Add Prometheus section 2016-10-16 17:28:05 -07:00
Jerome Petazzoni
ef60a78998 Pin version numbers used by ELK 2016-10-16 16:30:04 -07:00
Jerome Petazzoni
70064da91c Add Docker Machine; use it to get TLS mutual auth instead of 55555 plain text 2016-10-16 16:27:21 -07:00
Jérôme Petazzoni
0b6a3a1cba Merge pull request #48 from soulshake/typo
Typo fixes
2016-10-08 14:49:16 +02:00
AJ Bowen
e403a005ea 'Set up' when it's a verb, 'setup' when it's a noun. 2016-10-07 17:09:34 +02:00
AJ Bowen
773528fc2b They're --> Their 2016-10-07 16:19:05 +02:00
Jérôme Petazzoni
97af5492f7 Remove InfluxDB password auth 2016-10-04 18:42:32 +02:00
Jérôme Petazzoni
194ce5d7b6 Update Julius info 2016-10-04 14:11:12 +02:00
Jérôme Petazzoni
fafc8fb1ed Update TOC and add slide about Prometheus 2016-10-04 14:10:38 +02:00
Jérôme Petazzoni
4cb37481ba Merge pull request #46 from dragorosson/patch-1
Fix grammar
2016-10-04 03:47:29 +02:00
Drago Rosson
9196b27f0e Fix grammar 2016-10-03 16:21:56 -05:00
Jerome Petazzoni
9ce98430ab Last (hopefully) round of fixes before LinuxCon EU! 2016-10-03 09:20:40 -07:00
tiffany jernigan
4117f079e6 Run InfluxDB and Grafana as services using Docker Hub images. 2016-10-01 18:03:40 -07:00
Jerome Petazzoni
1105c9fa1f Merge remote-tracking branch 'tiffanyfj/metrics' 2016-10-01 08:06:43 -07:00
Jerome Petazzoni
ab7c1bb09a Prepare for LinuxCon EU Berlin 2016-10-01 08:05:55 -07:00
Jérôme Petazzoni
bfcb24c1ca Merge pull request #45 from anonymuse/jesse/docs_linkfix
Fix path for README links
2016-09-30 16:22:08 +02:00
Jesse White
45f410bb49 Fix path for README links 2016-09-29 17:22:55 -04:00
Jérôme Petazzoni
bcd2433fa4 Merge branch 'BretFisher-readme-updates' 2016-09-29 00:25:46 +02:00
Jérôme Petazzoni
1d02ddf271 Mess up with whitespace, because I am OCD like that 2016-09-29 00:25:36 +02:00
Jérôme Petazzoni
4765410393 Merge branch 'readme-updates' of https://github.com/BretFisher/orchestration-workshop-with-docker into BretFisher-readme-updates 2016-09-29 00:22:21 +02:00
tiffany jernigan
6102d21150 Added metrics chapter 2016-09-28 14:18:36 -07:00
Bret Fisher
75caa65973 more trainer info 2016-09-28 01:26:56 -04:00
Bret Fisher
dfd2bf4aeb new example settings file 2016-09-28 01:26:42 -04:00
Bret Fisher
51000b4b4d better swarm image for cards 2016-09-28 01:26:02 -04:00
Bret Fisher
3acd3b078b more info for trainers 2016-09-27 13:06:35 -04:00
Bret Fisher
4b43287c5b more info for trainers 2016-09-27 11:37:42 -04:00
Jerome Petazzoni
c8c745459c Update stateful section 2016-09-19 11:23:23 -07:00
Jerome Petazzoni
04dec2e196 Round of updates for Velocity 2016-09-18 16:20:51 -07:00
Jerome Petazzoni
0f8c189786 Docker Application Bundle -> Distributed Application Bundle 2016-09-18 12:24:47 -07:00
Jerome Petazzoni
81cc14d47b Fix VM card background image 2016-09-18 12:18:05 -07:00
Jérôme Petazzoni
060b2377d5 Merge pull request #34 from everett-toews/fix-link
Fix broken link to nomenclature doc
2016-09-11 12:01:24 -05:00
Everett Toews
1e77736987 Fix broken link to nomenclature doc 2016-09-10 15:49:04 -05:00
Jérôme Petazzoni
bf2b4b7eb7 Merge pull request #32 from everett-toews/github-docs
Move slides to docs for GitHub Pages
2016-09-08 13:56:40 -05:00
Everett Toews
8396f13a4a Move slides to docs for GitHub Pages 2016-08-27 16:12:25 -05:00
Jerome Petazzoni
571097f369 Small fix 2016-08-27 13:55:26 -07:00
Jerome Petazzoni
b1110db8ca Update TOC 2016-08-24 14:01:31 -07:00
Jerome Petazzoni
b73a628f05 Remove old files 2016-08-24 13:52:16 -07:00
Jerome Petazzoni
a07795565d Update tweet message 2016-08-24 13:50:25 -07:00
Jérôme Petazzoni
c4acbfd858 Add diagram 2016-08-24 16:34:32 -04:00
Jerome Petazzoni
ddbda14e14 Reviews/edits 2016-08-24 13:31:00 -07:00
Jerome Petazzoni
ad4ea8659b Node management 2016-08-24 08:04:27 -07:00
Jerome Petazzoni
8d7f27d60d Add Docker Application Bundles
Capitalize Redis consistently
2016-08-24 06:59:15 -07:00
Jerome Petazzoni
9f21c7279c Compose build+push 2016-08-23 14:19:14 -07:00
Jerome Petazzoni
53ae221632 Add stateful service section 2016-08-23 11:03:57 -07:00
Jerome Petazzoni
6719bcda87 Update logging section 2016-08-22 15:51:26 -07:00
Jerome Petazzoni
40e0c96c91 Rolling upgrades 2016-08-22 14:21:00 -07:00
Jerome Petazzoni
2c8664e58d Updated dockercoins deployment instructions 2016-08-12 06:47:30 -07:00
Jerome Petazzoni
1e5cee2456 Updated intro+cluster setup part 2016-08-11 10:01:51 -07:00
Jerome Petazzoni
29b8f53ae0 More typo fixes courtesy of @tiffanyfj 2016-08-11 06:05:43 -07:00
Jérôme Petazzoni
451f68db1d Update instructions to join cluster 2016-08-10 15:50:30 +02:00
Jérôme Petazzoni
5a4d10ed1a Upgrade versions to Engine 1.12 + Compose 1.8 2016-08-10 15:50:10 +02:00
Jérôme Petazzoni
06d5dc7846 Merge pull request #29 from programmerq/pssh-command
detect debian command or upstream command
2016-08-07 15:26:29 +02:00
Jeff Anderson
b63eb0fa40 detect debian command or upstream command 2016-08-01 12:38:12 -06:00
Jérôme Petazzoni
117e2a9ba2 Merge pull request #13 from fiunchinho/master
Version can be set as env variable to be used, instead of generating unix timestamp
2016-07-11 23:57:13 -05:00
Jerome Petazzoni
d2f6e88fd1 Add -v flag for go get swarmit 2016-06-28 16:47:18 -07:00
Jérôme Petazzoni
c742c39ed9 Merge pull request #26 from beenanner/master
Upgrade docker-compose files to v2
2016-06-28 06:44:27 -07:00
Jerome Petazzoni
1f2b931b01 Slack -> Gitter 2016-06-22 11:54:47 -07:00
Jerome Petazzoni
e351ede294 Fix TOC 2016-06-22 11:48:00 -07:00
Jerome Petazzoni
9ffbfacca8 Last words 2016-06-19 11:15:11 -07:00
Jerome Petazzoni
60524d2ff3 Fixes 2016-06-19 00:07:19 -07:00
Jerome Petazzoni
7001c05ec0 DockerCon update 2016-06-18 18:06:15 -07:00
Jonathan Lee
5d4414723d Upgrade docker-compose files to v2 2016-06-13 21:47:59 -04:00
Jérôme Petazzoni
d31f0980a2 Merge pull request #24 from crd/recommend_slide_changes
Recommended slide changes
2016-06-02 17:10:13 -07:00
Cory Donnelly
6649e97b1e Update warning to reflect Consul Leader Election bug has been fixed 2016-06-02 15:58:31 -04:00
Cory Donnelly
06b8cbc964 Fix typos 2016-06-02 15:55:02 -04:00
Cory Donnelly
6992c85d5e Update Git BASH url 2016-06-02 15:53:12 -04:00
Jérôme Petazzoni
313d46ac47 Merge pull request #23 from soulshake/master
Make prompt more readable on light or dark backgrounds
2016-05-29 07:28:53 -07:00
AJ Bowen
5a5db2ad7f Modify prompt colors 2016-05-28 21:07:33 -07:00
Jérôme Petazzoni
1ae29909c8 Merge pull request #22 from soulshake/master
Add script to extract section title
2016-05-28 20:59:00 -07:00
AJ Bowen
6747480869 Add a script to extract section titles 2016-05-28 20:52:21 -07:00
AJ Bowen
9ba359e67a Fix more references to settings.yaml 2016-05-28 19:55:46 -07:00
Jérôme Petazzoni
4c34f6be9b Merge pull request #21 from soulshake/master
Cleanup, mostly
2016-05-28 19:49:33 -07:00
AJ Bowen
a747058a72 Replace settings.yaml with <settings/somefile.yaml> in the documentation, as per @jpetazzo request; add entrypoint to Dockerfile; remove symlink and path manipulation from Dockerfile. 2016-05-28 19:46:38 -07:00
AJ Bowen
a2b77ff63b remove two more comments from docker-compose.yaml 2016-05-28 18:40:07 -07:00
AJ Bowen
5c600a05d0 Replace 'user' with 'root' in images. Squash layers in Dockerfile. Update README. Clean up docker-compose.yaml. 2016-05-28 18:37:29 -07:00
Jerome Petazzoni
340fcd4de2 Minor fixes for PYCON 2016-05-28 18:27:36 -07:00
Jerome Petazzoni
96d5e69c77 Add command to query local registry after pushing busybox (thanks @crd) 2016-05-25 16:25:08 -07:00
Jérôme Petazzoni
3b3825a83a Merge pull request #20 from RaulKite/master
upgrade local vagrant machines to ubuntu 14.04
2016-05-25 16:13:11 -07:00
Jérôme Petazzoni
74e815a706 Merge pull request #18 from soulshake/master
Fix typos pointed out by @crd
2016-05-25 16:12:02 -07:00
Raul Sanchez
2e4417f502 Merge branch 'master' of github.com:RaulKite/orchestration-workshop 2016-05-23 14:14:44 +02:00
Raul Sanchez
a4970dbfd5 upgrade local vagrant machines to ubuntu 14.04 2016-05-23 14:14:25 +02:00
Raul Sanchez
5d6a35e116 upgrade local vagrant machines to ubuntu 14.04 2016-05-23 14:11:03 +02:00
AJ Bowen
943c15a3c8 Fix typos pointed out by @crd 2016-05-17 23:21:01 +02:00
Jerome Petazzoni
65252904c9 Last updates before OSCON 2016-05-17 08:47:59 -07:00
Jerome Petazzoni
31563480b3 Update AMIs and settings files 2016-05-16 09:30:37 -07:00
Jerome Petazzoni
9bf13f70b9 Reword conclusion 2016-05-16 08:31:02 -07:00
Jerome Petazzoni
191982c72e Fix capitalization of Consul, etcd, Zookeeper 2016-05-15 20:24:49 -07:00
Jerome Petazzoni
054bb739ac Add diagrams courtesy of @soulshake; and new dockercoins logo by @ggtools & @ndeloof 2016-05-15 20:21:40 -07:00
Jérôme Petazzoni
338d9f5847 Merge pull request #16 from ggtools/master
New dockercoin logo
2016-05-15 22:03:31 -05:00
Jerome Petazzoni
6b6d2c77ad Big round of updates for OSCON 2016 2016-05-15 20:03:14 -07:00
Jérôme Petazzoni
3be821fefb Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-05-10 15:04:34 +00:00
Jérôme Petazzoni
cc03b0bab2 Add CRAFT talk extensions 2016-05-10 15:04:00 +00:00
Jerome Petazzoni
f2ccd65b34 Merge branch 'master' of github.com:jpetazzo/orchestration-workshop 2016-05-09 12:53:39 -07:00
Jérôme Petazzoni
aabbc17d97 Merge pull request #17 from morty/patch-1
Typo
2016-04-27 17:15:06 +02:00
Tom Mortimer-Jones
b87ece9acd Typo 2016-04-27 09:37:51 +01:00
Christophe Labouisse
666b38ab57 New dockercoin logo 2016-04-21 09:36:16 +02:00
José Armesto
4dad732c15 Removed unnecesary prints 2016-03-19 19:45:03 +01:00
José Armesto
bb7cadf701 Version can be set as env variable to be used, instead of generating unix timestamp 2016-03-15 16:28:46 +01:00
100 changed files with 9239 additions and 6274 deletions

README.md (329 changed lines)

@@ -1,8 +1,253 @@
# Orchestration at scale(s)
# Docker Orchestration Workshop
This is the material for the "Docker orchestration workshop"
written and delivered by Jérôme Petazzoni (and possibly others)
at multiple conferences and events like:
This is the material (slides, scripts, demo app, and other
code samples) for the "Docker orchestration workshop"
written and delivered by Jérôme Petazzoni (and lots of others)
non-stop since June 2015.
## Content
- Chapter 1: Getting Started: running apps with docker-compose
- Chapter 2: Scaling out with Swarm Mode
- Chapter 3: Operating the Swarm (networks, updates, logging, metrics)
- Chapter 4: Deeper in Swarm (stateful services, scripting, DABs)
## Quick start (or, "I want to try it!")
This workshop is designed to be *hands-on*, i.e. to give you a step-by-step
guide where you will build your own Docker cluster and use it to deploy
a sample application.
The easiest way to follow the workshop is to attend it when it is delivered
by an instructor. In that case, the instructor will generally give you
credentials (IP addresses, login, password) to connect to your own cluster
of virtual machines; the [slides](http://jpetazzo.github.io/orchestration-workshop)
assume that you have such a cluster.
If you want to follow the workshop on your own, and want to have your
own cluster, we have multiple solutions for you!
### Using [play-with-docker](http://play-with-docker.com/)
This method is the easiest way to get started (you don't need any extra account
or resources!) but will require a bit of adaptation of the workshop slides.
To get started, go to [play-with-docker](http://play-with-docker.com/), and
click on _ADD NEW INSTANCE_ five times. You will get five "docker-in-docker"
containers, all on a private network. These are your five nodes for the workshop!
When the instructions in the slides tell you to "SSH on node X", just go to
the tab corresponding to that node.
The nodes are not directly reachable from the outside, so when the slides tell
you to "connect to the IP address of your node on port XYZ", you will have
to use a different method.
We suggest using "supergrok", a container offering an NGINX+ngrok combo to
expose your services. To use it, just start the `jpetazzo/supergrok` image
(on any of your nodes). The image will output further instructions:
```
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
```
The container logs will give you a tunnel address and explain how to
connect to your exposed services. That's all you need to do!
We are also working on a native proxy, embedded in Play-With-Docker.
Stay tuned!
<!--
- You can use a proxy provided by Play-With-Docker. When the slides
instruct you to connect to nodeX on port ABC, instead, you will connect
to http://play-with-docker.com/XXX.XXX.XXX.XXX:ABC, where XXX.XXX.XXX.XXX
is the IP address of nodeX.
-->
Note that the instances provided by Play-With-Docker have a short lifespan
(a few hours only), so if you want to do the workshop over multiple sessions,
you will have to start over each time, or create your own cluster with
one of the methods described below.
### Using Docker Machine to create your own cluster
This method requires a bit more work to get started, but you get a permanent
cluster with fewer limitations.

You will need Docker Machine (if you have Docker for Mac, Docker for Windows, or
the Docker Toolbox, you're all set already). You will also need:

- credentials for a cloud provider (e.g. API keys or tokens),
- or a local install of VirtualBox or VMware (or anything else supported
  by Docker Machine).
Full instructions are in the [prepare-machine](prepare-machine) subdirectory.
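The Docker Machine route above boils down to creating a handful of identically named VMs. A minimal sketch (node names and driver are illustrative, not taken from the prepare-machine instructions); the commands are echoed as a dry run, so remove the `echo` to actually create the VMs:

```shell
# Dry-run sketch: print the docker-machine commands that would create
# a five-node cluster on VirtualBox. Remove "echo" to really run them
# (requires Docker Machine and VirtualBox to be installed).
for N in 1 2 3 4 5; do
  echo docker-machine create --driver virtualbox "node$N"
done
```

Swap `--driver virtualbox` for your cloud provider's driver (and its credential flags) if you are not building the cluster locally.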
### Using our scripts to mass-create a bunch of clusters
Since we often deliver the workshop during conferences or similar events,
we have scripts to automate the creation of a bunch of clusters using
AWS EC2. If you want to create multiple clusters and have EC2 credits,
check the [prepare-vms](prepare-vms) directory for more information.
## How This Repo is Organized
- **dockercoins**
- Sample App: compose files and source code for the dockercoins sample apps
used throughout the workshop
- **docs**
- Slide Deck: presentation slide deck, works out of the box with GitHub Pages,
uses https://remarkjs.com
- **prepare-local**
- untested scripts for automating the creation of local VirtualBox VMs
(could use your help validating)
- **prepare-machine**
- instructions explaining how to use Docker Machine to create VMs
- **prepare-vms**
- scripts for automating the creation of AWS instances for students
## Slide Deck
- The slides are in the `docs` directory.
- To view them locally, open `docs/index.html` in your browser. It works
offline too.
- To view them online open https://jpetazzo.github.io/orchestration-workshop/
in your browser.
- When you fork this repo, be sure GitHub Pages is enabled in repo Settings
for "master branch /docs folder" and you'll have your own website for them.
- They use https://remarkjs.com to allow simple Markdown in an HTML file that
remark will transform into a presentation in the browser.
## Sample App: Dockercoins!
The sample app is in the `dockercoins` directory. It's used throughout all
chapters to explain different concepts of orchestration.

To see it in action:

- `cd dockercoins && docker-compose up -d`
- this will build and start all the services
- the web UI will be available on port 8000
*If you just want to run the workshop for yourself, you can stop reading
here. If you want to deliver the workshop for others (i.e. if you
want to become an instructor), keep reading!*
## Running the Workshop
### General timeline of planning a workshop
- Fork the repo and run through the slides, doing the hands-on exercises to be
sure you understand the different `dockercoins` repos and the steps we go
through to get to a full Swarm Mode cluster of many containers. At a minimum,
you'll update the first few slides and the last slide with your info.
- Your docs directory can use GitHub Pages.
- This workshop expects 5 servers per student. You can get away with as few
as 2 servers per student, but you'll need to change the slide deck to
accommodate that. More servers = more fun.
- If you have more than ~20 students, try to get an assistant (TA) to help
people with issues, so you don't have to stop the workshop to help someone
with SSH, etc.
- AWS is our most tested process for generating student machines. In
`prepare-vms` you'll find scripts to create EC2 instances, install Docker,
pre-pull images, and even print "cards" to place at each student's seat with
IPs and username/password.
- Test the AWS scripts: be sure to test creating *all* your needed servers a
week before the workshop (just for a few minutes). You'll likely hit AWS limits
in the region closest to your class, and it sometimes takes days to get AWS to
raise those limits with a support ticket.
- Create a https://gitter.im chat room for your workshop and update the slides
with its URL. It's also useful for the TA to monitor the room during the
workshop. You can use it before/after to answer questions, and it generally
works better than "email me that question".
- If you can send an email to students ahead of time, mention how they should
get SSH, and ask them to test that SSH works. If they can run `ssh github.com`
and get `permission denied (publickey)`, then they know SSH is properly
installed and nothing is blocking it. SSH and a browser are all they need for
class.
- Typically, you create the servers the day before or the morning of the
workshop, and leave them up for the rest of the day afterwards. If creating
hundreds of servers, you'll likely want to run all the `trainer` commands from
a dedicated instance in the same region as the instances you want to create;
this is much faster if you're on a poor internet connection. Also, create 2
sets of servers for yourself: use one during the workshop and keep the second
as a backup.
- Remember, you'll need to print the "cards" for students, so create the
instances while you have a way to print them.
### Things That Could Go Wrong
- You create AWS instances ahead of time, hit the region's limits, and didn't
plan enough time to wait for support to increase them. :(
- Students have technical issues during the workshop: can't get SSH working,
locked-down computer, host firewall, etc.
- Horrible WiFi, or SSH port TCP/22 not open on the network! If the WiFi is
bad, you can try Mosh (https://mosh.org), which runs remote sessions over UDP.
tmux (https://tmux.github.io) can also prevent you from losing your place if
you get disconnected from the servers.
- Forget to print the "cards" and cut them up for handing out IPs.
- Forget to have fun and focus on your students!
### Creating the VMs
`prepare-vms/trainer` is the script that gets you most of what you need for
setting up instances. See [prepare-vms/README.md](prepare-vms) for all the
info on its tools and scripts.
### Content for Different Workshop Durations
With all the slides, this workshop is a full day long. If you need to deliver
it on a shorter timeline, here are some recommendations on what to cut out.
You can replace `---` with `???`, which will hide slides. Or leave them in and
add something like `(EXTRA CREDIT)` to the title, so students can still view
the content but you know to skip it during the presentation.
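For example, in remark's conventions `---` separates slides and `???` starts presenter notes, so swapping the separator folds the next slide into hidden notes (an illustrative sketch, not taken from the actual deck):

```markdown
# A slide that students will see

???

# This content is now hidden from the presentation
(it becomes presenter notes of the previous slide)
```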
#### 3 Hour Version
- Limit time on the debugging tools; maybe skip a few (*"Chapter 1:
Identifying bottlenecks"*)
- Limit time on Compose; try to have them building the Swarm Mode cluster by
30 minutes in
- Skip most of Chapter 3 (Centralized Logging and ELK)
- Skip most of Chapter 4, but keep stateful services and DABs if possible
- Mention what DABs are, but make this part optional in case you run out
of time
#### 2 Hour Version
- Skip all of the above, and:
- Skip the story arc of debugging dockercoins altogether, along with the
troubleshooting tools. Just focus on getting them from single-host to
multi-host and multi-container.
- The goal is to spend the first 30 minutes on the intro, Docker Compose, and
what dockercoins is, and on getting it up on one node with docker-compose.
- The next 60-75 minutes are for getting dockercoins running as Swarm Mode
services across servers. Big win.
- The last 15-30 minutes are for stateful services, DAB files, and questions.
## Past events
Since its inception, this workshop has been delivered dozens of times,
to thousands of people, and has continuously evolved. This is a short
history of the first times it was delivered. Look also in the "tags"
of this repository: they all correspond to successive iterations of
this workshop. If you attended a past version of the workshop, you
can use these tags to see what has changed since then.
- QCON, New York City (2015, June)
- KCDC, Kansas City (2015, June)
@@ -13,80 +258,7 @@ at multiple conferences and events like:
- SCALE, Pasadena (2016, January)
- Zenika, Paris (2016, February)
- Container Solutions, Amsterdam (2016, February)
## Slides
The slides are in the `www/htdocs` directory.
The recommended way to view them is to:
- have a Docker host
- clone this repository to your Docker host
- `cd www && docker-compose up -d`
- this will start a web server on port 80
- point your browser at your Docker host and enjoy
## Sample code
The sample app is in the `dockercoins` directory.
To see it in action:
- `cd dockercoins && docker-compose up -d`
- this will build and start all the services
- the web UI will be available on port 8000
## Running the workshop
WARNING: these instructions are incomplete. Consider
them notes quickly drafted on a napkin rather than
proper documentation!
### Creating the VMs
I use the `trainctl` script from the `docker-fundamentals`
repository. Sorry if you don't have that!
After starting the VMs, use the `trainctl ips` command
to dump the list of IP addresses into a file named `ips.txt`.
### Generating the printed cards
- Put `ips.txt` file in `prepare-vms` directory.
- Generate HTML file.
- Open it in Chrome.
- Transform to PDF.
- Print it.
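A hedged sketch of the "generate HTML" step, assuming `ips.txt` contains one IP address per line (the file names and table layout here are illustrative, not the repo's actual generator):

```shell
# Sample input (made-up addresses).
cat > ips.txt <<'EOF'
10.0.0.11
10.0.0.12
EOF

# Emit one table row per IP, numbering the nodes.
{
  echo "<table>"
  n=0
  while read -r ip; do
    n=$((n+1))
    echo "  <tr><td>node$n</td><td>$ip</td></tr>"
  done < ips.txt
  echo "</table>"
} > ips.html
```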
### Deploying your SSH key to all the machines
- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
### Installing extra packages
- Source `postprep.rc`.
(This will install a few extra packages, add entries to
/etc/hosts, generate SSH keys, and deploy them on all hosts.)
### Final touches
- Set two groups of machines for instructor's use.
- You will use the first group during the workshop.
- The second group will run a web server with the slides.
- Log into the first machine of the second group.
- Git clone this repo.
- Put up the web server as instructed above.
- Use cli53 to add an A record for e.g. `view.dckr.info`.
- ... and much more!
# Problems? Bugs? Questions?
@@ -108,3 +280,4 @@ conference or for your company: contact me (jerome
at docker dot com).
Thank you!
bin/add-logging.py Executable file
@@ -0,0 +1,27 @@
#!/usr/bin/env python
import os
import sys
import yaml
def error(msg):
    print("ERROR: {}".format(msg))
    sys.exit(1)

compose_file = os.environ["COMPOSE_FILE"]
input_file, output_file = compose_file, compose_file
config = yaml.safe_load(open(input_file))
version = config.get("version")
if version != "2":
    error("Unsupported $COMPOSE_FILE version: {!r}".format(version))
for service in config["services"]:
    config["services"][service]["logging"] = dict(
        driver="gelf",
        options={"gelf-address": "udp://localhost:12201"},
    )
yaml.safe_dump(config, open(output_file, "w"), default_flow_style=False)
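After a successful run, every service in the Compose file carries a logging section equivalent to this fragment (values taken from the script above):

```yaml
logging:
  driver: gelf
  options:
    gelf-address: udp://localhost:12201
```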

@@ -16,9 +16,11 @@ if not registry:
# Get the name of the current directory.
project_name = os.path.basename(os.path.realpath("."))
# Generate a Docker image tag, using the UNIX timestamp.
# (i.e. number of seconds since January 1st, 1970)
version = str(int(time.time()))
# Version used to tag the generated Docker image, using the UNIX timestamp or the given version.
if "VERSION" not in os.environ:
    version = str(int(time.time()))
else:
    version = os.environ["VERSION"]
# Execute "docker-compose build" and abort if it fails.
subprocess.check_call(["docker-compose", "-f", "docker-compose.yml", "build"])
@@ -33,7 +35,7 @@ push_operations = dict()
for service_name, service in compose_file.services.items():
    if "build" in service:
        compose_image = "{}_{}".format(project_name, service_name)
        registry_image = "{}/{}_{}:{}".format(registry, project_name, service_name, version)
        registry_image = "{}/{}:{}".format(registry, compose_image, version)
        # Re-tag the image so that it can be uploaded to the registry.
        subprocess.check_call(["docker", "tag", compose_image, registry_image])
        # Spawn "docker push" to upload the image.
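The tagging logic in the diff above can be traced in isolation; a shell sketch (the registry address and service name are made-up examples):

```shell
project_name=$(basename "$(pwd)")       # name of the current directory
service_name=rng                        # example service
registry=localhost:5000                 # assumed registry address

# Use $VERSION if set, otherwise the UNIX timestamp.
version=${VERSION:-$(date +%s)}

compose_image="${project_name}_${service_name}"
registry_image="${registry}/${compose_image}:${version}"
echo "$registry_image"
```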

@@ -3,7 +3,7 @@ unset DOCKER_REGISTRY
unset DOCKER_HOST
unset COMPOSE_FILE
SWARM_IMAGE=${SWARM_IMAGE:-swarm:1.2.0}
SWARM_IMAGE=${SWARM_IMAGE:-swarm}
prepare_1_check_ssh_keys () {
for N in $(seq 1 5); do
@@ -87,18 +87,14 @@ setup_1_swarm () {
}
setup_2_consul () {
ssh node1 docker run --name consul_node1 \
-d --restart=always --net host \
jpetazzo/consul agent -server -bootstrap
IPADDR=$(ssh node1 ip a ls dev eth0 |
sed -n 's,.*inet \(.*\)/.*,\1,p')
# Start other Consul nodes
for N in 2 3 4 5; do
ssh node$N docker run --name consul_node$N \
-d --restart=always --net host \
jpetazzo/consul agent -server -join $IPADDR
for N in 1 2 3 4 5; do
ssh node$N -- docker run -d --restart=always --name consul_node$N \
-e CONSUL_BIND_INTERFACE=eth0 --net host consul \
agent -server -retry-join $IPADDR -bootstrap-expect 5 \
-ui -client 0.0.0.0
done
}
@@ -135,6 +131,36 @@ setup_6_add_lbs () {
~/orchestration-workshop/bin/add-load-balancer-v2.py hasher
}
setup_7_consulfs () {
dm_swarm
docker pull jpetazzo/consulfs
for N in $(seq 1 5); do
ssh node$N "docker run --rm -v /usr/local/bin:/target jpetazzo/consulfs"
ssh node$N mkdir -p ~/consul
ssh -f node$N "mountpoint ~/consul || consulfs localhost:8500 ~/consul"
done
}
setup_8_syncmachine () {
while ! mountpoint ~/consul; do
sleep 1
done
cp -r ~/.docker/machine ~/consul/
for N in $(seq 2 5); do
ssh node$N mkdir -p ~/.docker
ssh node$N "[ -L ~/.docker/machine ] || ln -s ~/consul/machine ~/.docker"
done
}
setup_9_elk () {
dm_swarm
cd ~/orchestration-workshop/elk
docker-compose up -d
for N in $(seq 1 5); do
docker-compose scale logstash=$N
done
}
setup_all () {
setup_1_swarm
setup_2_consul
@@ -142,6 +168,8 @@ setup_all () {
setup_4_registry
setup_5_btp_dockercoins
setup_6_add_lbs
setup_7_consulfs
setup_8_syncmachine
dm_swarm
}
@@ -166,3 +194,8 @@ grep -qs -- MAGICMARKER "$0" && { # Don't display this line in the function lis
echo "You should source this file, then invoke the following functions:"
grep -- '^[a-z].*{$' "$0" | cut -d" " -f1
}
show_swarm_primary () {
dm_swarm
docker info 2>/dev/null | grep -e ^Role -e ^Primary
}

@@ -1,10 +1,12 @@
cadvisor:
  image: google/cadvisor
  ports:
    - "8080:8080"
  volumes:
    - "/:/rootfs:ro"
    - "/var/run:/var/run:rw"
    - "/sys:/sys:ro"
    - "/var/lib/docker/:/var/lib/docker:ro"
version: "2"
services:
  cadvisor:
    image: google/cadvisor
    ports:
      - "8080:8080"
    volumes:
      - "/:/rootfs:ro"
      - "/var/run:/var/run:rw"
      - "/sys:/sys:ro"
      - "/var/lib/docker/:/var/lib/docker:ro"

@@ -1,30 +1,21 @@
version: "2"
services:
  rng1:
    build: rng
  rng2:
    build: rng
  rng3:
    build: rng
  rng:
    image: jpetazzo/hamba
    command: 80 rng1:80 rng2:80 rng3:80
    depends_on:
      - rng1
      - rng2
      - rng3
    build: rng
    image: ${REGISTRY_SLASH}rng${COLON_TAG}
    ports:
      - "8001:80"
  hasher:
    build: hasher
    image: ${REGISTRY_SLASH}hasher${COLON_TAG}
    ports:
      - "8002:80"
  webui:
    build: webui
    image: ${REGISTRY_SLASH}webui${COLON_TAG}
    ports:
      - "8000:80"
    volumes:
@@ -35,4 +26,5 @@ services:
  worker:
    build: worker
    image: ${REGISTRY_SLASH}worker${COLON_TAG}

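The `${REGISTRY_SLASH}` and `${COLON_TAG}` variables above work because unset variables expand to empty strings, so the same image template degrades gracefully to a plain local name. A quick illustration (registry address and tag are made up):

```shell
# With the variables set, the image name points at a registry:
export REGISTRY_SLASH=localhost:5000/ COLON_TAG=:v42
echo "image: ${REGISTRY_SLASH}rng${COLON_TAG}"    # image: localhost:5000/rng:v42

# With them empty, the very same template is just a local image:
REGISTRY_SLASH= COLON_TAG=
echo "image: ${REGISTRY_SLASH}rng${COLON_TAG}"    # image: rng
```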
@@ -1,35 +0,0 @@
version: "2"
services:
  rng1:
    build: rng
  rng2:
    build: rng
  rng3:
    build: rng
  rng:
    image: jpetazzo/hamba
    command: 80 rng1:80 rng2:80 rng3:80
    ports:
      - "8001:80"
  hasher:
    build: hasher
    ports:
      - "8002:80"
  webui:
    build: webui
    ports:
      - "8000:80"
    volumes:
      - "./webui/files/:/files/"
  redis:
    image: jpetazzo/hamba
    command: 6379 AA.BB.CC.DD:EEEEE
  worker:
    build: worker

@@ -1,26 +0,0 @@
version: "2"
services:
  rng:
    build: rng
    ports:
      - "80"
  hasher:
    build: hasher
    ports:
      - "8002:80"
  webui:
    build: webui
    ports:
      - "8000:80"
    volumes:
      - "./webui/files/:/files/"
  redis:
    image: redis
  worker:
    build: worker

@@ -1,20 +0,0 @@
version: '2'
services:
  rng:
    build: rng
  hasher:
    build: hasher
  webui:
    build: webui
    ports:
      - "8000:80"
  redis:
    image: redis
  worker:
    build: worker

@@ -1,5 +0,0 @@
hasher: 80
redis: 6379
rng: 80
webui: 80

@@ -50,7 +50,7 @@ function refresh () {
points.push({ x: s2.now, y: speed });
}
$("#speed").text("~" + speed.toFixed(1) + " hashes/second");
var msg = ("I'm attending the @docker workshop at @scaleconf, "
var msg = ("I'm attending the @docker workshop at #LinuxCon, "
+ "and my #DockerCoins mining rig is crunching "
+ speed.toFixed(1) + " hashes/second! W00T!");
$("#tweet").attr(

BIN
docs/bell-curve.jpg Normal file

docs/chat/index.html Normal file
@@ -0,0 +1,9 @@
<html>
<!-- Generated with index.html.sh -->
<head>
<meta http-equiv="refresh" content="0; URL='https://dockercommunity.slack.com/messages/docker-mentor'" />
</head>
<body>
<a href="https://dockercommunity.slack.com/messages/docker-mentor">https://dockercommunity.slack.com/messages/docker-mentor</a>
</body>
</html>

docs/chat/index.html.sh Executable file
@@ -0,0 +1,16 @@
#!/bin/sh
#LINK=https://gitter.im/jpetazzo/workshop-20170322-sanjose
LINK=https://dockercommunity.slack.com/messages/docker-mentor
#LINK=https://usenix-lisa.slack.com/messages/docker
sed "s,@@LINK@@,$LINK,g" >index.html <<EOF
<html>
<!-- Generated with index.html.sh -->
<head>
<meta http-equiv="refresh" content="0; URL='$LINK'" />
</head>
<body>
<a href="$LINK">$LINK</a>
</body>
</html>
EOF

BIN
docs/dockercoins.png Normal file

BIN
docs/extra-details.png Normal file

docs/extract-section-titles.py Executable file
@@ -0,0 +1,19 @@
#!/usr/bin/env python
"""
Extract and print level 1 and 2 titles from workshop slides.
"""
separators = [
    "---",
    "--"
]
slide_count = 1
for line in open("index.html"):
    line = line.strip()
    if line in separators:
        slide_count += 1
    if line.startswith('## '):
        print slide_count, '#', line
    elif line.startswith('# '):
        print slide_count, line

BIN
docs/grafana-add-graph.png Normal file

BIN
docs/grafana-add-source.png Normal file

docs/index.html Normal file

File diff suppressed because it is too large

BIN
docs/lifecycle.png Normal file

BIN
docs/mario-red-shell.png Normal file

BIN
docs/pwd-icons.png Normal file

BIN
docs/registry-frontends.png Normal file
docs/remark-0.14.min.js vendored Normal file

File diff suppressed because one or more lines are too long

docs/remark.min.js vendored Normal file

File diff suppressed because one or more lines are too long

docs/swarm-mode.svg Normal file

File diff suppressed because one or more lines are too long

BIN
docs/you-get-five-vms.jpg Normal file
efk/README.md Normal file
@@ -0,0 +1,36 @@
# Elasticsearch + Fluentd + Kibana
This is a variation on the classic "ELK" stack.
The [fluentd](fluentd/) subdirectory contains a Dockerfile to build
a fluentd image that embeds a simple configuration file, accepting log
entries on port 24224 and storing them in Elasticsearch in a format
that Kibana can use.
You can also use a pre-built image, `jpetazzo/fluentd:v0.1`
(e.g. if you want to deploy on a cluster and don't want to deploy
your own registry).
Once this fluentd container is running, and assuming you expose
its port 24224/tcp somehow, you can send container logs to fluentd
by using Docker's fluentd logging driver.
You can bring up the whole stack with the associated Compose file.
With Swarm mode, you can bring up the whole stack like this:
```bash
docker network create efk --driver overlay
docker service create --network efk \
--name elasticsearch elasticsearch:2
docker service create --network efk --publish 5601:5601 \
--name kibana kibana
docker service create --network efk --publish 24224:24224 \
--name fluentd jpetazzo/fluentd:v0.1
```
And then, from any node on your cluster, you can send logs to fluentd like this:
```bash
docker run --log-driver fluentd --log-opt fluentd-address=localhost:24224 \
alpine echo ohai there
```
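The same log shipping can be declared in a Compose file instead of passing `docker run` flags; a hedged sketch with a made-up service name:

```yaml
version: "2"
services:
  pinger:                      # hypothetical service
    image: alpine
    command: echo ohai there
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
```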

efk/docker-compose.yml Normal file
@@ -0,0 +1,24 @@
version: "2"
services:
  elasticsearch:
    image: elasticsearch
    # If you need to access ES directly, just uncomment those lines.
    #ports:
    #  - "9200:9200"
    #  - "9300:9300"
  fluentd:
    #build: fluentd
    image: jpetazzo/fluentd:v0.1
    ports:
      - "127.0.0.1:24224:24224"
    depends_on:
      - elasticsearch
  kibana:
    image: kibana
    ports:
      - "5601:5601"
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200

efk/fluentd/Dockerfile Normal file
@@ -0,0 +1,5 @@
FROM ruby
RUN gem install fluentd
RUN gem install fluent-plugin-elasticsearch
COPY fluentd.conf /fluentd.conf
CMD ["fluentd", "-c", "/fluentd.conf"]

efk/fluentd/fluentd.conf Normal file
@@ -0,0 +1,12 @@
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match **>
  @type elasticsearch
  host elasticsearch
  logstash_format true
  flush_interval 1
</match>

elk/logstash.conf Normal file
@@ -0,0 +1,34 @@
input {
  # Listens on 514/udp and 514/tcp by default; change that to non-privileged port
  syslog { port => 51415 }
  # Default port is 12201/udp
  gelf { }
  # This generates one test event per minute.
  # It is great for debugging, but you might
  # want to remove it in production.
  heartbeat { }
}

# The following filter is a hack!
# The "de_dot" filter would be better, but it
# is not pre-installed with logstash by default.
filter {
  ruby {
    code => "
      event.to_hash.keys.each { |k| event[ k.gsub('.','_') ] = event.remove(k) if k.include?'.' }
    "
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
  # This will output every message on stdout.
  # It is great when testing your setup, but in
  # production, it will probably cause problems;
  # either by filling up your disks, or worse,
  # by creating logging loops! BEWARE!
  stdout {
    codec => rubydebug
  }
}
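What that ruby one-liner does to event keys can be previewed outside logstash; Elasticsearch 2.x rejects dotted field names, so keys containing a dot are renamed (the sample keys below are made up, and the filter itself only touches keys that actually contain a dot):

```shell
for key in docker.name docker.image message; do
  echo "$key -> $(echo "$key" | tr . _)"
done
```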

@@ -13,11 +13,12 @@ Virtualbox, Vagrant and Ansible
- Virtualbox: https://www.virtualbox.org/wiki/Downloads
- Vagrant: https://www.vagrantup.com/downloads.html
- install vagrant-vbguest plugin (https://github.com/dotless-de/vagrant-vbguest)
- Ansible:
- install Ansible's prerequisites:
$ sudo pip install paramiko PyYAML Jinja2 httplib2 six
$ sudo pip install paramiko PyYAML Jinja2 httplib2 six pycrypto
- clone the Ansible repository and check out a stable version
(don't forget the `--recursive` argument when cloning!):
@@ -41,6 +42,7 @@ Virtualbox, Vagrant and Ansible
Run the following commands:
$ vagrant up
$ chmod 600 private-key
$ ansible-playbook provisioning.yml
And that's it! Now you should be able to ssh on `node1` using:

@@ -25,7 +25,7 @@ Vagrant.configure('2') do |config|
check_dependency 'vagrant-vbguest'
config.vm.box = settings['default_box']
config.vm.box_url = settings['default_box_url']
# config.vm.box_url = settings['default_box_url']
config.ssh.forward_agent = true
config.ssh.insert_key = settings['ssh_insert_key']
config.vm.box_check_update = true

@@ -3,6 +3,7 @@
sudo: true
vars_files:
- vagrant.yml
tasks:
- name: clean up the home folder
@@ -37,8 +38,7 @@
repo: "{{ item }}"
state: present
with_items:
- deb http://http.debian.net/debian wheezy-backports main
- deb https://apt.dockerproject.org/repo {{ ansible_lsb.id|lower }}-{{ ansible_lsb.codename }} main
- deb https://apt.dockerproject.org/repo ubuntu-trusty main
- name: installing docker
apt:
@@ -76,28 +76,16 @@
name: virtualenv
state: latest
- name: creating docker-compose folder
- name: Install Docker Compose via PIP
pip: name=docker-compose
- name:
file:
path: /opt/docker-compose
state: directory
register: docker_compose_folder
- name: creating virtualenv for docker-compose
shell: virtualenv /opt/docker-compose
when: docker_compose_folder is defined and docker_compose_folder.changed
- name: installing docker-compose
pip:
name: docker-compose
state: latest
virtualenv: /opt/docker-compose
- name: making the docker-compose command available to user
lineinfile:
dest: .bashrc
line: "alias docker-compose='/opt/docker-compose/bin/docker-compose'"
state: present
regexp: '^alias docker-compose=.*$'
path="/usr/local/bin/docker-compose"
state=file
mode=0755
owner=vagrant
group=docker
- name: building the /etc/hosts file with all nodes
lineinfile:
@@ -105,7 +93,7 @@
line: "{{ item.private_ip }} {{ item.hostname }}"
regexp: "^{{ item.private_ip }} {{ item.hostname }}$"
state: present
with_items: instances
with_items: "{{ instances }}"
- name: copying the ssh key to the nodes
copy:
@@ -140,3 +128,5 @@
line: "127.0.0.1 localhost {{ inventory_hostname }}"
- regexp: '^127\.0\.1\.1'
line: "127.0.1.1 {{ inventory_hostname }}"

@@ -1,8 +1,6 @@
---
vagrant:
default_box: debian-7.2.0
default_box_url: https://dl.dropboxusercontent.com/u/197673519/debian-7.2.0.box
default_box: ubuntu/trusty64
default_box_check_update: true
ssh_insert_key: false
min_memory: 256
@@ -12,7 +10,7 @@ instances:
- hostname: node1
private_ip: 10.10.10.10
memory: 512
memory: 1512
cores: 1
mounts:
- host_path: ../
@@ -39,3 +37,6 @@ instances:
private_ip: 10.10.10.50
memory: 512
cores: 1

prepare-machine/README.md Normal file
@@ -0,0 +1,242 @@
# Setting up your own cluster
If you want to go through this orchestration workshop on your own,
you will need a cluster of Docker nodes.
These instructions will walk you through the required steps,
using [Docker Machine](https://docs.docker.com/machine/) to
create the nodes.
## Requirements
You need Docker Machine. To check if it is installed, try to
run the following command:
```bash
$ docker-machine -v
docker-machine version 0.8.2, build e18a919
```
If you see a Docker Machine version number, perfect! Otherwise,
you need to install it; either as part of the Docker Toolbox,
or as a stand-alone tool. See [Docker Machine installation docs](
https://docs.docker.com/machine/install-machine/) for details.
You also need either credentials for a cloud provider, or a
local VirtualBox or VMware installation (or anything supported
by Docker Machine, really).
## Discrepancies with official environment
The resulting environment will be slightly different from the
one that we provision for people attending the workshop at
conferences and similar events, and you will have to adapt a
few things.
We try to list all the differences here.
### User name
The official environment uses user `docker`. If you use
Docker Machine, the user name will probably be different.
### Node aliases
In the official environment, aliases are seeded in
`/etc/hosts`, allowing you to resolve node IP addresses
with the aliases `node1`, `node2`, etc.; if you use
Docker Machine, you will have to look up the IP addresses
with the `docker-machine ip nodeX` command instead.
### SSH keys
In the official environment, you can log in from one node
to another with SSH, without having to provide a password,
thanks to pre-generated (and pre-copied) SSH keys.
If you use Docker Machine, you will have to use
`docker-machine ssh` from your machine instead.
### Machine and Compose
In the official environment, Docker Machine and Docker
Compose are installed on your nodes. If you use Docker
Machine you will have to install at least Docker Compose.
The easiest way to install Compose (verified to work
with the EC2 and VirtualBox drivers, and probably others
as well) is to use `docker-machine ssh` to connect
to your node, then run the following command:
```bash
sudo curl -L \
https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
Note that it is not necessary (or even useful) to
install Docker Machine on your nodes, since if you're
following that guide, you already have Machine on
your local computer. ☺
### IP addresses
In some environments, your nodes will have multiple
IP addresses. This is the case with VirtualBox, for
instance. At any point in the workshop, if you need
a node's IP address, you should use the address
given by the `docker-machine ip` command.
## Creating your nodes with Docker Machine
Here are some instructions for various Machine Drivers.
### AWS EC2
You have to retrieve your AWS access key and secret access key,
and set the following environment variables:
```bash
export MACHINE_DRIVER=amazonec2
export AWS_ACCESS_KEY_ID=AKI...
export AWS_SECRET_ACCESS_KEY=...
```
Optionally, you can also set `AWS_DEFAULT_REGION` to the region
closest to you. See [AWS documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions)
for the list of available regions and their codes.
For instance, if you are on the US West Coast, I recommend
that you set `AWS_DEFAULT_REGION` to `us-west-2`; if you are
in Europe, to `eu-central-1` (except in UK and Ireland where
you probably want `eu-west-1`), etc.
If you don't specify anything, your nodes will be in `us-east-1`.
You can also set `AWS_INSTANCE_TYPE` if you want bigger or smaller
instances than `t2.micro`. For the official workshops, we use
`m3.large`, but remember: the bigger the instance, the more
expensive it gets, obviously!
After setting these variables, run the following command:
```bash
for N in $(seq 1 5); do
docker-machine create node$N
docker-machine ssh node$N usermod -aG docker ubuntu
done
```
And after a few minutes, your five nodes will be ready. To log
into a node, use `docker-machine ssh nodeX`.
By default, Docker Machine places the created nodes in a
security group aptly named `docker-machine`. By default, this
group is pretty restrictive, and will only let you connect
to the Docker API and SSH. For the purpose of the workshop,
you will need to open that security group to normal traffic.
You can do that through the AWS EC2 console, or with the
following CLI command:
```bash
aws ec2 authorize-security-group-ingress --group-name docker-machine --protocol -1 --cidr 0.0.0.0/0
```
If Docker Machine fails, complaining that it cannot find
the default VPC or subnet, this could be because you have
an "old" EC2 account (created before the introduction of EC2
VPC) and your account has no default VPC. In that case,
you will have to create a VPC, a subnet in that VPC,
and use the corresponding Machine flags (`--amazonec2-vpc-id`
and `--amazonec2-subnet-id`) or environment variables
(`AWS_VPC_ID` and `AWS_SUBNET_ID`) to tell Machine what to use.
You will get similar error messages if you *have* set these
flags (or environment variables) but the VPC (or subnets)
indicated do not exist. This can happen if you frequently
switch between different EC2 accounts, and forget that you
have set the `AWS_VPC_ID` or `AWS_SUBNET_ID`.
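In that case, the fix looks like this (the IDs below are placeholders for illustration; use the ones from your EC2 console):

```shell
# Point Docker Machine at an explicit VPC and subnet.
export AWS_VPC_ID=vpc-0123abcd        # hypothetical VPC ID
export AWS_SUBNET_ID=subnet-4567cdef  # hypothetical subnet ID

# docker-machine create node1        # Machine now uses the IDs above
```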
### Microsoft Azure
You have to retrieve your subscription ID, and set the following environment
variables:
```bash
export MACHINE_DRIVER=azure
export AZURE_SUBSCRIPTION_ID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
```
Additionally, you can set `AZURE_LOCATION` to an Azure datacenter
close to you. By default, it will pick "West US". You can see
the available regions [on Azure's website](
https://azure.microsoft.com/en-us/regions/services/).
For instance, if you want to deploy on the US East Coast,
set `AZURE_LOCATION` to `East US` or `eastus` (capitalization
and spacing shouldn't matter; just use the names shown on the
map or table on Azure's website).
Then run the following command:
```bash
for N in $(seq 1 5); do
docker-machine create node$N
docker-machine ssh node$N usermod -aG docker docker-user
done
```
The CLI will give you instructions to authenticate on the Azure portal,
and once you've done that, it will create your VMs.
You will log into your nodes with `docker-machine ssh nodeX`.
By default, the firewall only allows access to the Docker API
and SSH ports. To open access to other ports, you can use the
following command:
```bash
for N in $(seq 1 5); do
az network nsg rule create -g docker-machine --name AllowAny --nsg-name node$N-firewall \
--access allow --direction inbound --protocol '*' \
--source-address-prefix '*' --source-port-range '*' \
--destination-address-prefix '*' --destination-port-range '*'
done
```
(The command takes a while. Be patient.)
### Local VirtualBox or VMware Fusion
If you want to run with local VMs, set the environment variable
`MACHINE_DRIVER` to `virtualbox` or `vmwarefusion` and create your nodes:
```bash
export MACHINE_DRIVER=virtualbox
for N in $(seq 1 5); do
docker-machine create node$N
done
```
### Terminating instances
When you're done, if you started your instance on a public
cloud (or anywhere where it costs you money!) you will want to
terminate (destroy) them. This can be done with the following
command:
```bash
for N in $(seq 1 5); do
docker-machine rm -f node$N
done
```

@@ -1,52 +1,30 @@
FROM debian:jessie
MAINTAINER AJ Bowen <aj@soulshake.net>
RUN apt-get update
RUN apt-get install -y ca-certificates
RUN apt-get install -y groff
RUN apt-get install -y less
RUN apt-get install -y python python-pip
RUN apt-get install -y python-docutils
RUN apt-get install -y sudo
RUN apt-get install -y \
RUN apt-get update && apt-get install -y \
wkhtmltopdf \
bsdmainutils \
ca-certificates \
curl \
groff \
jq \
less \
man \
pssh \
ssh
python \
python-pip \
python-docutils \
ssh \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
RUN pip install awscli
RUN pip install \
awscli \
pdfkit \
PyYAML \
termcolor
RUN apt-get install -y wkhtmltopdf
ENV HOME /home/user
RUN useradd --create-home --home-dir $HOME user \
&& mkdir -p $HOME/.config/gandi \
&& chown -R user:user $HOME
RUN echo "user ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
# Replace 1000 with your user / group id
#RUN export uid=1000 gid=1000 && \
# mkdir -p /home/user && \
# mkdir -p /etc/sudoers.d && \
# echo "user:x:${uid}:${gid}:user,,,:/home/user:/bin/bash" >> /etc/passwd && \
# echo "user:x:${uid}:" >> /etc/group && \
# echo "user ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/user && \
# chmod 0440 /etc/sudoers.d/user && \
# chown ${uid}:${gid} -R /home/user
WORKDIR $HOME
RUN echo "alias ll='ls -lahF'" >> /home/user/.bashrc
RUN echo "export PATH=$PATH:/home/user/bin" >> /home/user/.bashrc
RUN mkdir -p /home/user/bin
RUN ln -s /home/user/prepare-vms/scripts/trainer-cli /home/user/bin/trainer-cli
USER user
WORKDIR $HOME
RUN echo "alias ll='ls -lahF'" >> /root/.bashrc
ENTRYPOINT ["/root/prepare-vms/scripts/trainer-cli"]

@@ -1,73 +1,113 @@
# Trainer tools to prepare VMs for Docker workshops
# Trainer tools to create and prepare VMs for Docker workshops on AWS
There are several options for using these tools:
## Prerequisites
### Clone the repo
- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
$ git clone https://github.com/soulshake/prepare-vms.git
$ cd prepare-vms
## General Workflow
- fork/clone the repo
- set the required environment variables for AWS
- create your own settings file from `settings/example.yaml`
- run `./trainer` commands to create instances, install Docker, set up each user's environment on node1, and perform other management tasks
- run the `./trainer cards` command to generate a PDF for printing handouts with each user's host IPs and login info
## Clone/Fork the Repo, and Build the Tools Image
The Docker Compose file here is used to build an image with all the dependencies to run the `./trainer` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](trainer#L5).
$ git clone https://github.com/jpetazzo/orchestration-workshop.git
$ cd orchestration-workshop/prepare-vms
$ docker-compose build
$ mkdir $HOME/bin && ln -s `pwd`/trainer $HOME/bin/trainer
### Via the image
## Preparing to Run `./trainer`
$ docker pull soulshake/prepare-vms
### Required AWS Permissions/Info
### Submodule
- These instructions assume you are using a root account. If you'd like to use an IAM user instead, it will need `AmazonEC2FullAccess` and `IAMReadOnlyAccess`.
- Using a non-default VPC or Security Group isn't supported out of the box yet, but in the meantime you can [customize the `trainer-cli` script](scripts/trainer-cli#L396-L401).
- These instances will be assigned the default VPC Security Group, which does not open any ports from the Internet by default. You'll need to add inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./trainer opensg`, which opens up all ports.
This repo can be added as a submodule in the repo of the Docker workshop:
### Required Environment Variables
$ git submodule add https://github.com/soulshake/prepare-vms.git
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
## Setup
### Update/copy `settings/example.yaml`
### Export needed envvars
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `trainer deploy`, `trainer cards`, etc.
Required environment variables:
./trainer cards 2016-09-28-00-33-bret settings/orchestration.yaml
* `AWS_ACCESS_KEY_ID`
* `AWS_SECRET_ACCESS_KEY`
* `AWS_DEFAULT_REGION`
## `./trainer` Usage
```
./trainer <command> [n-instances|tag] [settings/file.yaml]
Core commands:
start n Start n instances
list [TAG] If a tag is provided, list its VMs. Otherwise, list tags.
deploy TAG Deploy all instances with a given tag
pull-images TAG Pre-pull docker images. Run only after deploying.
stop TAG Stop and delete instances tagged TAG
Extras:
ips TAG List all IPs of instances with a given tag (updates ips.txt)
ids TAG/TOKEN List all instance IDs with a given tag
shell Get a shell in the trainer container
status TAG Print information about this tag and its VMs
tags List all tags (per-region)
retag TAG/TOKEN TAG Retag instances with a new tag
Beta:
ami Look up Amazon Machine Images
cards FILE Generate cards
opensg Modify AWS security groups
```
### Summary of What `./trainer` Does For You
- Manages batches of AWS instances for you, without requiring the AWS CLI or GUI.
- Can manage multiple "tags", or groups of instances, which are tracked in `prepare-vms/tags/`.
- Can also create PDF/HTML handouts with each student's instance IPs and login info.
- The `./trainer` script can be executed directly.
- It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms).
- During `start`, it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy`, it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard-coded.
### Example Steps to Launch a Batch of Instances for a Workshop
- Export the environment variables needed by the AWS CLI (see **Required Environment Variables** above)
- Run `./trainer start N` to create `N` EC2 instances
  - Your local SSH key will be synced to the instances under the `ubuntu` user
  - Instances will be created and tagged based on the date, and their IPs stored in `prepare-vms/tags/`
- Run `./trainer deploy TAG settings/somefile.yaml` to run `scripts/postprep.rc` via parallel-ssh
  - If it errors or times out, you should be able to rerun it
  - Requires a good connection to sustain up to 100 parallel SSH connections (ProTip: create a dedicated management instance in the same AWS region, and run all these utilities from there)
- Run `./trainer pull-images TAG` to pre-pull a bunch of Docker images onto the instances
- Run `./trainer cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
- *Have a great workshop*
- Run `./trainer stop TAG` to terminate instances.
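To picture what the cards step works with: it groups the IPs in `tags/TAG/ips.txt` into per-student clusters. The real rendering is done by `scripts/ips-txt-to-html.py`; the one-liner below is only a hypothetical sketch of the grouping, using `paste` with one `-` per cluster member:

```shell
# Hypothetical sketch: group the IPs in ips.txt into clusters of 3.
CLUSTERSIZE=3
printf '%s\n' 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4 10.0.0.5 10.0.0.6 > /tmp/ips.txt
# paste with N stdin operands merges every N consecutive lines onto one line.
paste -d ' ' $(yes - | head -n "$CLUSTERSIZE") < /tmp/ips.txt
# -> 10.0.0.1 10.0.0.2 10.0.0.3
#    10.0.0.4 10.0.0.5 10.0.0.6
```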
## Other Tools
### Deploying your SSH key to all the machines
- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
### Installing extra packages

- Source `postprep.rc`.

(This will install a few extra packages, add entries to `/etc/hosts`, generate SSH keys, and deploy them on all hosts.)

### Update `settings.yaml`

If you have more than one workshop:

$ cp settings.yaml settings/YOUR_WORKSHOP_NAME-settings.yaml
$ ln -s settings/YOUR_WORKSHOP_NAME-settings.yaml `pwd`/settings.yaml

Update `settings.yaml` as needed. This is the file that will be used to generate cards.
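As an illustration of the `/etc/hosts` step that `postprep.rc` performs, turning a list of IPs into host entries can be sketched like this (a simplified, hypothetical sketch; the real work happens in `postprep.rc`):

```shell
# Hypothetical sketch: turn a list of IPs into nodeN /etc/hosts entries.
printf '%s\n' 10.0.0.1 10.0.0.2 10.0.0.3 > /tmp/ips.txt
awk '{ printf "%s node%d\n", $1, NR }' /tmp/ips.txt
# -> 10.0.0.1 node1
#    10.0.0.2 node2
#    10.0.0.3 node3
```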
## Usage
### Summary
Summary of steps to launch a batch of instances for a workshop:
* Export the environment variables needed by the AWS CLI (see **Requirements** below)
* `trainer start NUMBER_OF_VMS` to create AWS instances
* `trainer deploy TAG` to run `scripts/postprep.rc` via parallel-ssh
* `trainer pull-images TAG` to pre-pull a bunch of Docker images
* `trainer test TAG`
* `trainer cards TAG` to generate a PDF and an HTML file you can print
The `trainer` script can be executed directly.
It will check for the necessary environment variables. Then, if all its dependencies are installed
locally, it will execute `trainer-cli`. If not, it will look for a local Docker image
tagged `soulshake/trainer-tools`. If found, it will run in a container. If not found,
the user will be prompted to either install the missing dependencies or download
the Docker image.
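That dispatch logic boils down to something like the sketch below (simplified and hypothetical; the real script checks a longer dependency list and then executes `trainer-cli` rather than echoing):

```shell
# Simplified sketch of the trainer dispatch logic.
check_dependencies() {
    # Succeed only if every named tool is on the PATH.
    for dep in "$@"; do
        command -v "$dep" >/dev/null 2>&1 || return 1
    done
}

if check_dependencies aws jq pssh; then
    echo "Running trainer-cli locally."
elif docker image inspect soulshake/trainer-tools >/dev/null 2>&1; then
    echo "Running trainer-cli in a container."
else
    echo "Install the missing dependencies, or pull/build the Docker image."
fi
```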
## Detailed usage
### Start some VMs
$ trainer start 10
A few things will happen:
## Even More Details
#### Sync of SSH keys
This `ips.txt` file will be created in the `$TAG/` directory, and a symlink to it will be placed.
If you create new VMs, the symlinked file will be overwritten.
#### Deployment

Instances can be deployed manually using the `deploy` command:

$ ./trainer deploy TAG settings/somefile.yaml

The `postprep.rc` file will be copied via parallel-ssh to all of the VMs and executed.

#### Pre-pull images

$ ./trainer pull-images TAG

#### Generate cards

$ ./trainer cards TAG settings/somefile.yaml

#### List tags

$ ./trainer list

#### List VMs

$ ./trainer list TAG

This will print a human-friendly list containing some information about each instance.

#### Stop and destroy VMs

$ ./trainer stop TAG
## ToDo

- Don't write to bash history in system() in postprep
- compose, etc. versions inconsistent (int vs str)


@@ -1,38 +1,26 @@
prepare-vms:
build: .
container_name: prepare-vms
working_dir: /home/user/prepare-vms
volumes:
- $HOME/.aws/:$HOME.aws/
- $HOME/.ssh/:/home/user/.ssh/
- /etc/localtime:/etc/localtime:ro
#- /home:/home
- /tmp/.X11-unix:/tmp/.X11-unix
- $SSH_AUTH_DIRNAME:$SSH_AUTH_DIRNAME
- $SCRIPT_DIR/:/home/user/prepare-vms/
#- $SCRIPT_DIR/:$HOME/prepare-vms/
#- $HOME/trainer-tools/
#- $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK)
#- /etc/passwd:/etc/passwd:ro
#- /etc/group:/etc/group:ro
#- /run/user:/run/user
environment:
HOME: /home/user
#SCRIPT_DIR: /home/aj/git/prepare-vms
SCRIPT_DIR: /home/user/prepare-vms
HOME: /home/user
SSH_AUTH_SOCK: ${SSH_AUTH_SOCK}
SSH_AGENT_PID: ${SSH_AGENT_PID}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
DISPLAY: ${DISPLAY}
USER: ${USER}
AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
AWS_DEFAULT_OUTPUT:
AWS_INSTANCE_TYPE: ${AWS_INSTANCE_TYPE}
AWS_VPC_ID: ${AWS_VPC_ID}
entrypoint: /home/user/prepare-vms/scripts/trainer-cli
#entrypoint: trainer
version: "2"
#AWS_DEFAULT_PROFILE: ${AWS_DEFAULT_PROFILE}
#command: /home/user/prepare-vms/trainer-cli
services:
prepare-vms:
build: .
container_name: prepare-vms
working_dir: /root/prepare-vms
volumes:
- $HOME/.aws/:/root/.aws/
- /etc/localtime:/etc/localtime:ro
- /tmp/.X11-unix:/tmp/.X11-unix
- $SSH_AUTH_DIRNAME:$SSH_AUTH_DIRNAME
- $PWD/:/root/prepare-vms/
environment:
SCRIPT_DIR: /root/prepare-vms
DISPLAY: ${DISPLAY}
SSH_AUTH_SOCK: ${SSH_AUTH_SOCK}
SSH_AGENT_PID: ${SSH_AGENT_PID}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION}
AWS_DEFAULT_OUTPUT: json
AWS_INSTANCE_TYPE: ${AWS_INSTANCE_TYPE}
AWS_VPC_ID: ${AWS_VPC_ID}
USER: ${USER}
entrypoint: /root/prepare-vms/scripts/trainer-cli

BIN
prepare-vms/docker.png Normal file

Binary file not shown.



@@ -4,22 +4,12 @@ source scripts/cli.sh
aws_display_tags(){
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Tag]" | awk '{ printf " %7s %8s %10s \n", $1, $2, $3}'
aws ec2 describe-instances --filter "Name=tag:Name,Values=[*]" \
--query "Reservations[*].Instances[*].[{Tags:Tags[0].Value,State:State.Name}]" \
| awk '{ printf " %-13s %-10s %-1s\n", $1, $2, $3}' \
| uniq -c \
| sort -k 3
}
aws_display_tokens(){
# Print all tokens in our region with their instance count
echo "[#] [Token] [Tag]" | awk '{ printf " %7s %12s %30s\n", $1, $2, $3}'
# --query 'Volumes[*].{ID:VolumeId,AZ:AvailabilityZone,Size:Size}'
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].{ClientToken:ClientToken,Tags:Tags[0].Value}' \
| awk '{ printf " %7s %12s %50s\n", $1, $2, $3}' \
| sort \
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf " %7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| awk '{ printf " %-12s %-25s %-25s\n", $1, $2, $3}' \
| uniq -c \
| sort -k 3
}
@@ -66,20 +56,24 @@ aws_display_instances_by_tag() {
fi
}
aws_get_instance_ids_by_filter() {
FILTER=$1
aws ec2 describe-instances --filters $FILTER \
--query Reservations[*].Instances[*].InstanceId \
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
need_tag $TOKEN
aws ec2 describe-instances --filters "Name=client-token,Values=$TOKEN" \
| grep ^INSTANCE \
| awk '{print $8}'
aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}
aws_get_instance_ids_by_tag() {
TAG=$1
need_tag $TAG
aws ec2 describe-instances --filters "Name=tag:Name,Values=$TAG" \
| grep ^INSTANCE \
| awk '{print $8}'
aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}
aws_get_instance_ips_by_tag() {


@@ -10,21 +10,12 @@ die () {
need_tag(){
TAG=$1
if [ -z "$TAG" ]; then
echo "Please specify a tag. Here's the list: "
echo "Please specify a tag or token. Here's the list: "
aws_display_tags
die
fi
}
need_token(){
TOKEN=$1
if [ -z "$TOKEN" ]; then
echo "Please specify a token. Here's the list: "
aws_display_tokens
die
fi
}
need_ips_file() {
IPS_FILE=$1
if [ -z "$IPS_FILE" ]; then


@@ -3,7 +3,7 @@
usage() {
cat >&2 <<__
usage: find-ubuntu-ami.sh [ <filter>... ] [ <sorting> ]
usage: find-ubuntu-ami.sh [ <filter>... ] [ <sorting> ] [ <options> ]
where:
<filter> is pair of key and substring to search
-r <region>
@@ -14,7 +14,7 @@ where:
-d <date>
-i <image>
-k <kernel>
<sorting> is on of:
<sorting> is one of:
-R by region
-N by name
-V by version
@@ -23,6 +23,8 @@ where:
-D by date
-I by image
-K by kernel
<options> can be:
-q just show AMI
protip for Docker orchestration workshop admin:
./find-ubuntu-ami.sh -t hvm:ebs -r \$AWS_REGION -v 15.10 -N
@@ -30,7 +32,7 @@ __
exit 1
}
args=`getopt hr:n:v:a:t:d:i:k:RNVATDIK $*`
args=`getopt hr:n:v:a:t:d:i:k:RNVATDIKq $*`
if [ $? != 0 ] ; then
echo >&2
usage
@@ -47,6 +49,8 @@ kernel=
sort=date
quiet=
set -- $args
for a ; do
case "$a" in
@@ -69,6 +73,8 @@ for a ; do
-D) sort=date ;;
-I) sort=image ;;
-K) sort=kernel ;;
-q) quiet=y ;;
--) shift ; break ;;
*) continue ;;
@@ -119,13 +125,17 @@ escape_spaces() {
url=http://cloud-images.ubuntu.com/locator/ec2/releasesTable
{
echo REGION NAME VERSION ARCH TYPE DATE IMAGE KERNEL
[ "$quiet" ] || echo REGION NAME VERSION ARCH TYPE DATE IMAGE KERNEL
curl -s $url | fix_json | jq "`jq_query`" | trim_quotes | escape_spaces | tr \| ' '
} |
while read region name version arch type date image kernel ; do
image=${image%<*}
image=${image#*>}
echo "$region|$name|$version|$arch|$type|$date|$image|$kernel"
if [ "$quiet" ]; then
echo $image
else
echo "$region|$name|$version|$arch|$type|$date|$image|$kernel"
fi
done | column -t -s \|


@@ -13,7 +13,7 @@ def prettify(l):
return ret
# Read settings from settings.yaml
# Read settings from user-provided settings file
with open(sys.argv[1]) as f:
data = f.read()


@@ -1,5 +1,7 @@
pssh -I tee /tmp/settings.yaml < $SETTINGS
pssh sudo apt-get update
pssh sudo apt-get install -y python-setuptools
pssh sudo easy_install pyyaml
pssh -I tee /tmp/postprep.py <<EOF
@@ -89,7 +91,7 @@ system("sudo chmod +x /usr/local/bin/docker-prompt")
# Fancy prompt courtesy of @soulshake.
system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL
export PS1='\e[1m\e[32m[\h] \e[34m(\\\$(docker-prompt)) \e[35m\u@{}\e[33m \w\e[0m\n$ '
export PS1='\e[1m\e[31m[\h] \e[32m(\\\$(docker-prompt)) \e[34m\u@{}\e[35m \w\e[0m\n$ '
SQRL""".format(ipv4))
# Custom .vimrc
@@ -121,19 +123,12 @@ system("echo 1000000 | sudo tee /proc/sys/net/nf_conntrack_max")
#######################
# This will install the latest Docker.
system("curl --silent https://{}/ | grep -v '( set -x; sleep 20 )' | sudo sh".format(ENGINE_VERSION))
# Make sure that the daemon listens on 55555 (for orchestration workshop).
# To test, run: export DOCKER_HOST=tcp://localhost:55555 ; docker ps
# or, run "curl localhost:55555" (it should return 404 not found). If it tells you connection refused, that's a bad sign
system("sudo sed -i 's,-H fd://$,-H fd:// -H tcp://0.0.0.0:55555,' /lib/systemd/system/docker.service")
system("sudo systemctl daemon-reload")
# There seems to be a bug in the systemd scripts; so work around it.
# See https://github.com/docker/docker/issues/18444
# If docker is already running, need to do a restart
system("curl --silent localhost:55555 || sudo systemctl restart docker ") # does this work? if not, next line should cover it
system("sudo systemctl start docker || true")
#system("curl --silent https://{}/ | grep -v '( set -x; sleep 20 )' | sudo sh".format(ENGINE_VERSION))
system("sudo apt-get -qy install apt-transport-https ca-certificates curl software-properties-common")
system("curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -")
system("sudo add-apt-repository 'deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial {}'".format(ENGINE_VERSION))
system("sudo apt-get -q update")
system("sudo apt-get -qy install docker-ce")
### Install docker-compose
#system("sudo pip install -U docker-compose=={}".format(COMPOSE_VERSION))
@@ -152,9 +147,8 @@ system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping ht
system("while ! sudo -u docker docker version ; do sleep 2; done")
### Install Swarm
system("docker pull swarm:{}".format(SWARM_VERSION))
system("docker tag -f swarm:{} swarm".format(SWARM_VERSION))
#system("docker pull swarm:{}".format(SWARM_VERSION))
#system("docker tag -f swarm:{} swarm".format(SWARM_VERSION))
### BEGIN CLUSTERING ###
@@ -207,3 +201,6 @@ pssh "grep docker@ /home/docker/.ssh/authorized_keys \
|| cat /home/docker/.ssh/id_rsa.pub \
| sudo -u docker tee -a /home/docker/.ssh/authorized_keys"
# On node1, create and deploy TLS certs using Docker Machine
#pssh "if grep -q node1 /tmp/node; then grep ' node' /etc/hosts | xargs -n2 sudo -H -u docker docker-machine create -d generic --generic-ssh-user docker --generic-ip-address; fi"


@@ -11,8 +11,9 @@ pssh () {
}
echo "[parallel-ssh] $@"
export PSSH=$(which pssh || which parallel-ssh)
parallel-ssh -h $HOSTFILE -l ubuntu \
$PSSH -h $HOSTFILE -l ubuntu \
--par 100 \
-O LogLevel=ERROR \
-O UserKnownHostsFile=/dev/null \


@@ -7,7 +7,7 @@ export AWS_DEFAULT_OUTPUT=text
greet() {
hello=$(aws iam get-user --query 'User.UserName')
echo "Greetings, $hello!"
echo "Greetings, $hello/${USER}!"
}
deploy_hq(){
@@ -53,8 +53,8 @@ deploy_tag(){
source scripts/postprep.rc
echo "Finished deploying $TAG."
echo "You may want to run one of the following commands:"
echo "trainer pull-images $TAG"
echo "trainer cards $TAG"
echo "./trainer pull-images $TAG"
echo "./trainer cards $TAG <settings/somefile.yaml>"
}
link_tag() {
@@ -69,7 +69,6 @@ pull_tag(){
TAG=$1
need_tag $TAG
link_tag $TAG
cards_file=ips.html
if [ ! -s $IPS_FILE ]; then
echo "Nonexistent or empty IPs file $IPS_FILE"
fi
@@ -90,7 +89,7 @@ pull_tag(){
echo "Finished pulling images for $TAG"
echo "You may now want to run:"
echo "trainer cards $TAG"
echo "./trainer cards $TAG <settings/somefile.yaml>"
}
wait_until_tag_is_running() {
@@ -129,7 +128,7 @@ test_tag(){
ip=$(shuf -n 1 $ips_file)
test_vm $ip
echo "Tests complete. You may want to run one of the following commands:"
echo "trainer cards $TAG"
echo "./trainer cards $TAG <settings/somefile.yaml>"
}
test_vm() {
@@ -149,7 +148,6 @@ test_vm() {
"docker-machine version" \
"docker images" \
"docker ps" \
"which fig" \
"curl --silent localhost:55555" \
"sudo ls -la /mnt/ | grep docker" \
"env" \
@@ -194,7 +192,7 @@ sync_keys() {
}
suggest_amis() {
scripts/find-ubuntu-ami.sh -r $AWS_DEFAULT_REGION -a amd64 -v 15.10 -t hvm:ebs -N
scripts/find-ubuntu-ami.sh -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}
get_token() {
@@ -205,24 +203,7 @@ get_token() {
}
get_ami() {
# using find-ubuntu-ami script in `trainer-tools/scripts`:
#AMI=$(./scripts/find-ubuntu-ami.sh -r $AWS_DEFAULT_REGION -a amd64 -v 15.10 -t hvm:ebs -N | grep -v ^REGION | head -1 | awk '{print $7}')
#AMI=$(suggest_amis | grep -v ^REGION | head -1 | awk '{print $7}')
case $AWS_DEFAULT_REGION in
eu-central-1)
AMI=ami-74a4bc18
;;
eu-west-1)
AMI=ami-cda312be
;;
us-west-2)
AMI=ami-495bbd29
;;
us-east-1)
AMI=ami-1711387d
;;
esac
echo $AMI
suggest_amis | head -1
}
@@ -273,7 +254,7 @@ run_cli() {
scripts/find-ubuntu-ami.sh -r $AWS_DEFAULT_REGION $*
echo
echo "Protip:"
echo "trainer ami -a amd64 -v 15.10 -t hvm:ebs -N | grep -v ^REGION | cut -d\" \" -f15"
echo "./trainer ami -a amd64 -v 16.04 -t hvm:ebs -N | grep -v ^REGION | cut -d\" \" -f15"
echo
echo "Suggestions:"
suggest_amis
@@ -330,8 +311,8 @@ run_cli() {
describe_tag $TAG
tag_is_reachable $TAG
echo "You may be interested in running one of the following commands:"
echo "trainer ips $TAG"
echo "trainer deploy $TAG <settings/somefile.yaml>"
echo "./trainer ips $TAG"
echo "./trainer deploy $TAG <settings/somefile.yaml>"
;;
opensg)
aws ec2 authorize-security-group-ingress \
@@ -395,7 +376,7 @@ run_cli() {
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $2 \
--instance-type c3.large \
--instance-type t2.medium \
--client-token $TOKEN \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}' )
@@ -424,8 +405,8 @@ run_cli() {
echo "$IPS" > tags/$TAG/ips.txt
link_tag $TAG
echo "To deploy or kill these instances, run one of the following:"
echo "trainer deploy $TAG <settings/somefile.yml>"
echo "trainer list $TAG"
echo "./trainer deploy $TAG <settings/somefile.yaml>"
echo "./trainer list $TAG"
;;
status)
greet && echo
@@ -463,7 +444,7 @@ run_cli() {
;;
*)
echo "
trainer COMMAND [n-instances|tag]
./trainer <command> [n-instances|tag] [settings/file.yaml]
Core commands:
start n Start n instances
@@ -482,7 +463,7 @@ Extras:
Beta:
ami Look up Amazon Machine Images
cards Generate cards
cards FILE Generate cards
opensg Modify AWS security groups
"
;;


@@ -1,23 +1,31 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
workshop_name: Advanced Docker Orchestration
workshop_name: Docker Orchestration
workshop_short_name: orchestration
repo: https://github.com/jpetazzo/orchestration-workshop
url: http://container.training/ # moreinfo link printed on cards
#engine_version: experimental.docker.com #extra features that may change/runaway
#engine_version: test.docker.com
engine_version: get.docker.com #prod release
compose_version: 1.8.1
machine_version: 0.8.2
swarm_version: 1.2.5
# for now these are hard coded in script, and only used for printing cards
instance_login: docker
instance_password: training
# 12 per page works well, but is quite small text
clustersize: 5 # Number of VMs per cluster
pagesize: 12 # Number of cards to print per page
#background_image: https://myapps.developer.ubuntu.com/site_media/appmedia/2014/12/swarm.png
background_image: http://www.yellosoft.us/public/images/docker.png
#background_image: ../media/swarm.png
background_image: https://raw.githubusercontent.com/jpetazzo/orchestration-workshop/master/prepare-vms/media/swarm.png
# To be printed on the cards:
blurb: >
Here is the connection information to your very own
{cluster_or_machine} for this {workshop_name} workshop. You can connect
Here is the connection information to your very own
{cluster_or_machine} for this {workshop_name} workshop. You can connect
to {this_or_each} VM with any SSH client.
Your {machine_is_or_machines_are}:
@@ -26,10 +34,3 @@ blurb: >
footer: >
<p>For slides, chat and other useful links, see: </p>
<center>{url}</center>
url: http://container.training/
engine_version: test.docker.com
compose_version: 1.7.0-rc2
machine_version: 0.7.0-rc1
swarm_version: 1.2.0-rc2


@@ -10,9 +10,7 @@ instance_password: training
clustersize: 1 # Number of VMs per cluster
pagesize: 15 # Number of cards to print per page
#background_image: https://myapps.developer.ubuntu.com/site_media/appmedia/2014/12/swarm.png
background_image: http://www.yellosoft.us/public/images/docker.png
#background_image: ../media/swarm.png
background_image: https://www.docker.com/sites/default/files/Engine.png
# To be printed on the cards:
blurb: >
@@ -30,6 +28,6 @@ footer: >
url: http://container.training/
engine_version: get.docker.com
compose_version: 1.7.0
machine_version: 0.6.0
swarm_version: 1.2.0
compose_version: 1.8.1
machine_version: 0.8.2
swarm_version: latest


@@ -10,9 +10,7 @@ instance_password: training
clustersize: 5 # Number of VMs per cluster
pagesize: 12 # Number of cards to print per page
#background_image: https://myapps.developer.ubuntu.com/site_media/appmedia/2014/12/swarm.png
background_image: http://www.yellosoft.us/public/images/docker.png
#background_image: ../media/swarm.png
background_image: https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png
# To be printed on the cards:
blurb: >
@@ -29,7 +27,7 @@ footer: >
url: http://container.training/
engine_version: get.docker.com
compose_version: 1.7.0
machine_version: 0.6.0
swarm_version: 1.2.0
engine_version: stable
compose_version: 1.12.0
machine_version: 0.12.2
swarm_version: latest


@@ -0,0 +1,32 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
workshop_name: Advanced Docker Orchestration
workshop_short_name: orchestration
repo: https://github.com/jpetazzo/orchestration-workshop
instance_login: docker
instance_password: training
clustersize: 3 # Number of VMs per cluster
pagesize: 12 # Number of cards to print per page
background_image: https://blog.docker.com/media/2015/08/notary.png
# To be printed on the cards:
blurb: >
Here is the connection information to your
three Docker nodes for the Security
Workshop. You can connect to each VM with
any SSH client.
Your {machine_is_or_machines_are}:
# {url} will be replaced by the script
footer: ""
url: http://container.training/
engine_version: get.docker.com
compose_version: 1.12.0
machine_version: 0.10.0
swarm_version: latest


@@ -1,6 +1,6 @@
#!/bin/bash
TRAINER_IMAGE="soulshake/prepare-vms"
TRAINER_IMAGE="preparevms_prepare-vms"
DEPENDENCIES="
aws
@@ -71,11 +71,10 @@ if check_dependencies; then
elif check_image; then
check_ssh_auth_sock
export SSH_AUTH_DIRNAME=$(dirname $SSH_AUTH_SOCK)
docker-compose -f docker-compose.yml run prepare-vms "$@"
docker-compose run prepare-vms "$@"
else
echo "Some dependencies are missing, and docker image $TRAINER_IMAGE doesn't exist locally."
echo "Please do one of the following: "
echo "- run \`docker build -t soulshake/prepare-vms .\`"
echo "- run \`docker pull soulshake/prepare-vms\`"
echo "- run \`docker-compose build\`"
echo "- install missing dependencies"
fi

3
prom/Dockerfile Normal file

@@ -0,0 +1,3 @@
FROM prom/prometheus:v1.4.1
COPY prometheus.yml /etc/prometheus/prometheus.yml

17
prom/prometheus.yml Normal file

@@ -0,0 +1,17 @@
global:
scrape_interval: 10s
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'node'
dns_sd_configs:
- names: ['tasks.node']
type: 'A'
port: 9100
- job_name: 'cadvisor'
dns_sd_configs:
- names: ['tasks.cadvisor']
type: 'A'
port: 8080

29
snap/docker-influxdb.json Normal file

@@ -0,0 +1,29 @@
{
"version": 1,
"schedule": {
"type": "simple",
"interval": "10s"
},
"max-failures": 10,
"workflow": {
"collect": {
"metrics": {
"/intel/docker/*/stats/cgroups/cpu_stats/cpu_usage/total_usage": {},
"/intel/docker/*/stats/cgroups/memory_stats/usage/usage": {}
},
"process": null,
"publish": [
{
"plugin_name": "influx",
"config": {
"host": "127.0.0.1",
"port": 8086,
"database": "snap",
"user": "admin",
"password": "admin"
}
}
]
}
}
}

21
snap/psutil-file.yml Normal file

@@ -0,0 +1,21 @@
---
version: 1
schedule:
type: "simple"
interval: "1s"
max-failures: 10
workflow:
collect:
metrics:
/intel/psutil/load/load1: {}
/intel/psutil/load/load15: {}
/intel/psutil/load/load5: {}
/intel/psutil/vm/available: {}
/intel/psutil/vm/free: {}
/intel/psutil/vm/used: {}
config:
publish:
-
plugin_name: "mock-file"
config:
file: "/tmp/snap-psutil-file.log"

1
stacks/dockercoins Symbolic link

@@ -0,0 +1 @@
../dockercoins


@@ -0,0 +1,48 @@
version: "3"
services:
rng:
build: dockercoins/rng
image: ${REGISTRY-127.0.0.1:5000}/rng:${TAG-latest}
logging:
driver: gelf
options:
gelf-address: udp://127.0.0.1:12201
deploy:
mode: global
hasher:
build: dockercoins/hasher
image: ${REGISTRY-127.0.0.1:5000}/hasher:${TAG-latest}
logging:
driver: gelf
options:
gelf-address: udp://127.0.0.1:12201
webui:
build: dockercoins/webui
image: ${REGISTRY-127.0.0.1:5000}/webui:${TAG-latest}
logging:
driver: gelf
options:
gelf-address: udp://127.0.0.1:12201
ports:
- "8000:80"
redis:
image: redis
logging:
driver: gelf
options:
gelf-address: udp://127.0.0.1:12201
worker:
build: dockercoins/worker
image: ${REGISTRY-127.0.0.1:5000}/worker:${TAG-latest}
logging:
driver: gelf
options:
gelf-address: udp://127.0.0.1:12201
deploy:
replicas: 10

28
stacks/dockercoins.yml Normal file

@@ -0,0 +1,28 @@
version: "3"
services:
rng:
build: dockercoins/rng
image: ${REGISTRY-127.0.0.1:5000}/rng:${TAG-latest}
deploy:
mode: global
hasher:
build: dockercoins/hasher
image: ${REGISTRY-127.0.0.1:5000}/hasher:${TAG-latest}
webui:
build: dockercoins/webui
image: ${REGISTRY-127.0.0.1:5000}/webui:${TAG-latest}
ports:
- "8000:80"
redis:
image: redis
worker:
build: dockercoins/worker
image: ${REGISTRY-127.0.0.1:5000}/worker:${TAG-latest}
deploy:
replicas: 10

40
stacks/elk.yml Normal file

@@ -0,0 +1,40 @@
version: "3"
services:
elasticsearch:
image: elasticsearch:2
logstash:
image: logstash
command: |
-e '
input {
gelf { }
heartbeat { }
}
filter {
ruby {
code => "
event.to_hash.keys.each { |k| event[ k.gsub('"'.'"','"'_'"') ] = event.remove(k) if k.include?'"'.'"' }
"
}
}
output {
elasticsearch {
hosts => ["elasticsearch:9200"]
}
stdout {
codec => rubydebug
}
}'
ports:
- "12201:12201/udp"
kibana:
image: kibana:4
ports:
- "5601:5601"
environment:
ELASTICSEARCH_URL: http://elasticsearch:9200

30
stacks/prometheus.yml Normal file

@@ -0,0 +1,30 @@
version: "3"
services:
prometheus:
build: ../prom
image: 127.0.0.1:5000/prom
ports:
- "9090:9090"
node:
image: prom/node-exporter
command: -collector.procfs /host/proc -collector.sysfs /host/proc -collector.filesystem.ignored-mount-points "^(sys|proc|dev|host|etc)($$|/)"
deploy:
mode: global
volumes:
- "/proc:/host/proc"
- "/sys:/host/sys"
- "/:/rootfs"
cadvisor:
image: google/cadvisor
deploy:
mode: global
volumes:
- "/:/rootfs"
- "/var/run:/var/run"
- "/sys:/sys"
- "/var/lib/docker:/var/lib/docker"

8
stacks/registry.yml Normal file

@@ -0,0 +1,8 @@
version: "3"
services:
registry:
image: registry:2
ports:
- "5000:5000"


@@ -1,6 +0,0 @@
www:
image: nginx
ports:
- "8080:80"
volumes:
- "./htdocs:/usr/share/nginx/html"
