Compare commits


234 Commits

Author SHA1 Message Date
Jerome Petazzoni
c596f54dfc fix-redirects.sh: adding forced redirect 2020-04-07 16:57:42 -05:00
Bridget Kromhout
99271a09d3 Merge pull request #391 from bridgetkromhout/velocityeu2018
updating ssid/password
2018-10-30 23:33:24 +01:00
Bridget Kromhout
e51f110e9d updating ssid/password 2018-10-30 22:31:58 +00:00
Bridget Kromhout
58936f098f Merge pull request #389 from bridgetkromhout/velocityeu2018
veleu 2018 updates
2018-10-30 14:00:51 +01:00
Bridget Kromhout
d6f01d5302 veleu 2018 updates 2018-10-30 13:59:20 +01:00
Bridget Kromhout
d5d281b627 Merge pull request #388 from bridgetkromhout/velocityeu2018
vel eu 2018 updates
2018-10-30 13:45:12 +01:00
Bridget Kromhout
0633f952d4 Merge branch 'velocityeu2018' into velocityeu2018 2018-10-30 13:44:25 +01:00
Bridget Kromhout
a93c618154 Update thankyou.md 2018-10-30 13:40:33 +01:00
Bridget Kromhout
4a25c66206 Update logistics-bridget.md
EU logistics
2018-10-30 13:36:16 +01:00
Bridget Kromhout
8530dc750f Merge pull request #387 from jpetazzo/velny-k8s101-2018
bring NY 2018 changes to EU 2018
2018-10-30 12:32:24 +01:00
Bridget Kromhout
0571b1f3a5 Merge branch 'master' of github.com:jpetazzo/container.training 2018-10-30 12:26:54 +01:00
Jerome Petazzoni
c5c1ccaa25 Merge branch 'BretFisher-win-containers-101' 2018-10-29 20:38:21 -05:00
Jerome Petazzoni
b68afe502b Minor formatting/typo edits 2018-10-29 20:38:01 -05:00
Jerome Petazzoni
d18cacab4c Merge branch 'win-containers-101' of git://github.com/BretFisher/container.training into BretFisher-win-containers-101 2018-10-29 19:59:53 -05:00
Bret Fisher
2faca4a507 docker101 fixing titles 2018-10-30 01:53:31 +01:00
Jerome Petazzoni
d797ec62ed Merge branch 'BretFisher-swarm-cicd' 2018-10-29 19:48:59 -05:00
Jerome Petazzoni
a475d63789 add CI/CD slides to self-paced deck as well 2018-10-29 19:48:33 -05:00
Jerome Petazzoni
dd3f2d054f Merge branch 'swarm-cicd' of git://github.com/BretFisher/container.training into BretFisher-swarm-cicd 2018-10-29 19:46:38 -05:00
Bridget Kromhout
73594fd505 Merge pull request #384 from BretFisher/patch-18
swarm workshop at goto canceled 😭
2018-10-26 11:35:53 -05:00
Bret Fisher
16a1b5c6b5 swarm workshop at goto canceled 😭 2018-10-26 07:57:50 +01:00
Bret Fisher
ff7a257844 adding cicd to swarm half day 2018-10-26 07:52:32 +01:00
Bret Fisher
77046a8ddf fixed suggestions 2018-10-26 07:51:09 +01:00
Bret Fisher
3ca696f059 size update from docker docs 2018-10-23 16:27:25 +02:00
Bret Fisher
305db76340 more sizing tweaks 2018-10-23 16:27:25 +02:00
Bret Fisher
b1672704e8 clear up swarm sizes and manager+worker setups
Lots of people will have ~5-10 servers, so let's give them more detailed info.
2018-10-23 16:27:25 +02:00
Jerome Petazzoni
c058f67a1f Add diagram for dockercoins 2018-10-23 16:25:19 +02:00
Alexandre Buisine
ab56c63901 switch to an up-to-date version with the latest cloud-init binary and multinic patch 2018-10-23 16:22:56 +02:00
Bret Fisher
a5341f9403 Add common Windows/macOS hidden files to gitignore 2018-10-17 19:11:37 +02:00
Laurent Grangeau
b2bdac3384 Typo 2018-10-04 18:02:01 +02:00
Bridget Kromhout
a2531a0c63 making sure two-day events still show up
Because we rebuilt today, the two-day events disappeared from the front page. @jpetazzo this is a temporary fix to make them still show up.
2018-09-30 22:07:03 -04:00
Bridget Kromhout
84e2b90375 Update index.yaml
adding slides
2018-09-30 22:05:01 -04:00
Bridget Kromhout
24e7cab2ca Merge pull request #376 from bridgetkromhout/velny-k8s101-2018
Adding links to thanks slide
2018-09-30 21:55:17 -04:00
Bridget Kromhout
09a364f554 Adding links to thanks slide 2018-09-30 21:51:58 -04:00
Bridget Kromhout
c18d07b06f Merge pull request #375 from bridgetkromhout/velny-k8s101-2018
Custom thanks slide
2018-09-30 21:15:09 -04:00
Bridget Kromhout
41cd6ad554 Custom thanks slide 2018-09-30 21:13:27 -04:00
Bridget Kromhout
565db253bf Merge pull request #374 from bridgetkromhout/velny-k8s101-2018
Velny k8s101 2018 updates
2018-09-30 20:57:37 -04:00
Bridget Kromhout
c46baa0f74 Merge branch 'master' into velny-k8s101-2018 2018-09-30 20:56:17 -04:00
Bridget Kromhout
9639dfb9cc Merge pull request #368 from jpetazzo/kube-ps1
kube-ps1 is cool and we should mention it
2018-09-30 20:55:00 -04:00
Bridget Kromhout
8722de6da2 Update namespaces.md 2018-09-30 20:54:31 -04:00
Bridget Kromhout
f2f87e52b0 Merge pull request #373 from bridgetkromhout/bridget-links
Updating Bridget's links
2018-09-30 20:53:26 -04:00
Bridget Kromhout
56ad2845e7 Updating Bridget's links 2018-09-30 20:52:24 -04:00
Bridget Kromhout
cb94697a55 Merge pull request #372 from bridgetkromhout/velny-k8s101-2018
Velny k8s101 2018
2018-09-30 20:51:50 -04:00
Bridget Kromhout
74a30db7bd Limiting scope for this event 2018-09-30 20:49:54 -04:00
Bridget Kromhout
336cfbe4dc Merge branch 'master' into velny-k8s101-2018 2018-09-30 20:36:04 -04:00
Bridget Kromhout
86e35480a4 Wording edits 2018-10-01 02:14:50 +02:00
Bridget Kromhout
48a834e85c Merge pull request #369 from bridgetkromhout/velny-k8s101-2018
Edits for shorter workshop
2018-09-30 20:11:27 -04:00
Bridget Kromhout
11ca023e45 Edits for shorter workshop 2018-09-30 19:52:47 -04:00
Jerome Petazzoni
1020a8ff86 kube-ps1 is cool and we should mention it 2018-09-30 17:43:18 -05:00
Bridget Kromhout
20b1079a22 Update whatsnext.md
typo fix
2018-09-30 16:48:29 -04:00
Bridget Kromhout
e2f020b994 Merge pull request #367 from bridgetkromhout/velny-k8s101-2018
Readability
2018-09-30 13:40:27 -05:00
Bridget Kromhout
062e8f124a Readability 2018-09-30 14:37:49 -04:00
Bridget Kromhout
9f1c3db527 Merge pull request #366 from bridgetkromhout/velny-k8s101-2018
Adding vel ny 1day
2018-09-30 13:35:16 -05:00
Bridget Kromhout
9a66a894ba Adding vel ny 1day 2018-09-30 14:33:41 -04:00
Bridget Kromhout
f090172413 Merge pull request #365 from jpetazzo/cleanup-after-netpol
Clean up network policies
2018-09-29 21:37:59 -05:00
Jerome Petazzoni
e4251cfa8f Clean up network policies
We should tell people to clean up network policies at the end
of the chapter, otherwise further exercises will fail.
2018-09-29 20:39:32 -05:00
Jerome Petazzoni
b6dd55b21c Use loop4 instead of loop0 2018-09-29 20:16:35 -05:00
Jerome Petazzoni
53d1a68765 Adapt autopilot for new deployment scripts 2018-09-29 20:15:38 -05:00
Jerome Petazzoni
156ce67413 Update CNC script 2018-09-29 18:44:03 -05:00
Jerome Petazzoni
e372850b06 Merge branch 'master' of github.com:jpetazzo/container.training 2018-09-29 10:06:24 -05:00
Jerome Petazzoni
f543b54426 Prepare deployment scripts for Ubuntu 18.04
This adds a few features:
- ./workshopctl kubereset TAG (closes #306)
- remove python-setuptools (prepare for #353)
- ./workshopctl weavetest TAG (help detecting weave issues
  like we had at OSCON, July 2018)
- remove a bit of dead code
2018-09-29 10:06:20 -05:00
Bret Fisher
35614714c8 added portainer setup and gui options 2018-09-29 16:54:42 +02:00
Bret Fisher
100c6b46cf oops, updated slide versions 2018-09-29 16:53:59 +02:00
Bret Fisher
36ccaf7ea4 update compose/machine versions in swarm nodes 2018-09-29 16:53:59 +02:00
Bridget Kromhout
4a655db1ba Merge pull request #362 from jpetazzo/kubectl-run-deprecation
Add explanation about the kubectl run deprecation warning
2018-09-28 21:34:11 -05:00
Bridget Kromhout
2a80586504 Merge pull request #361 from jpetazzo/kubens-and-kubectx
Add a couple of slides about kubens and kubectx
2018-09-28 21:34:03 -05:00
Bridget Kromhout
0a942118c1 Update kubectlrun.md
slight wording change
2018-09-28 21:32:23 -05:00
Jerome Petazzoni
2f1ad67fb3 Add explanation about the kubectl run deprecation warning 2018-09-28 20:54:11 -05:00
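(Not part of the log: for context, on Kubernetes 1.12 the warning looked roughly like this; exact wording varies by version.)
$ kubectl run web --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.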
Jerome Petazzoni
4b0ac6d0e3 Add a couple of slides about kubens and kubectx 2018-09-28 19:49:08 -05:00
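(Not part of the log: basic usage of the two helpers mentioned above; the context name "staging" is just an example.)
$ kubectx                # list available contexts
$ kubectx staging        # switch to the "staging" context
$ kubens                 # list namespaces
$ kubens kube-system     # make kube-system the current namespace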
Jerome Petazzoni
ac273da46c Merge branch 'master' of github.com:jpetazzo/container.training 2018-09-28 19:35:41 -05:00
Jerome Petazzoni
7a6594c96d Update container.training index 2018-09-28 19:35:35 -05:00
Bret Fisher
657b7465c6 updating bridge network diags 2018-09-29 02:18:03 +02:00
Bret Fisher
08059a845f remove compose teaser 2018-09-29 02:16:52 +02:00
Jerome Petazzoni
24e2042c9d Explain why revocation is important 2018-09-28 19:14:07 -05:00
Jerome Petazzoni
9771f054ea Add slide about lack of cert revocation 2018-09-28 19:04:57 -05:00
Jerome Petazzoni
5db4e2adfa Merge branch 'master' of github.com:jpetazzo/container.training 2018-09-28 18:49:00 -05:00
Jerome Petazzoni
bde5db49a7 Bump a few more k8s version numbers from 1.11 to 1.12 2018-09-28 18:48:52 -05:00
Jerome Petazzoni
7c6b2730f5 Bump up EBS size to 20G for Portworx 2018-09-29 01:39:07 +02:00
Jerome Petazzoni
7f6a15fbb7 Actually modify the prompt 2018-09-29 01:39:07 +02:00
Bridget Kromhout
d97b1e5944 Slight modifications to current docs/scripts 2018-09-29 01:39:07 +02:00
Jerome Petazzoni
1519196c95 Add kubectx, kubens, kube_ps1
kubectx and kubens are added as kctx and kns (to avoid clashing with
completion for kubectl). Their completion is added too (so you can
do 'kns kube-sy[TAB]' to switch to kube-system).

kube_ps1 is added and enabled. The default prompt for the docker
user now shows the current context and namespace.
2018-09-29 01:39:07 +02:00
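(Not part of the log: a minimal sketch of the setup described above; the kube-ps1 install path is an assumption, and the actual provisioning script may differ.)
$ alias kctx=kubectx kns=kubens    # short names that don't clash with kubectl completion
$ source ~/kube-ps1/kube-ps1.sh    # assumed install location of kube-ps1
$ PS1='$(kube_ps1) \$ '            # prompt now shows current context and namespace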
Jerome Petazzoni
f8629a2689 Massive refactoring of workshopctl
This makes it possible to manage groups of VMs across multiple
infrastructure providers. It also adds support for creating groups
of VMs on OpenStack.

WARNING: the syntax of workshopctl has changed slightly. Check READMEs
for details.
2018-09-29 01:39:07 +02:00
Jerome Petazzoni
fadecd52ee Replace registry:2 with registry
registry used to be registry v1, but now it defaults to v2.
We can therefore drop the tag.
2018-09-28 18:36:29 -05:00
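(Not part of the log: after this change, both of these pull the same v2 image.)
$ docker pull registry:2
$ docker pull registry     # now also resolves to registry v2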
Jerome Petazzoni
524d6e4fc1 Minor updates to load balancing example 2018-09-28 18:31:39 -05:00
Bridget Kromhout
51f5f5393c Merge pull request #356 from bridgetkromhout/link-update
Updating links
2018-09-28 16:49:41 -05:00
Bridget Kromhout
f574afa9d2 Updating links 2018-09-28 16:46:10 -05:00
Bridget Kromhout
4f49015a6e Link to experimental multi-master 2018-09-28 23:42:55 +02:00
Bridget Kromhout
f25d12b53d Merge pull request #354 from bridgetkromhout/versions-update
Updating versions
2018-09-28 16:29:00 -05:00
Bridget Kromhout
78259c3eb6 Clarifying version 2018-09-28 16:28:20 -05:00
Bridget Kromhout
adc922e4cd Updating versions 2018-09-28 16:25:38 -05:00
Bridget Kromhout
f68194227c Update whatsnext.md
Typo fix, plus a clarification, since it's not always delivered by only one person.
2018-09-28 23:16:24 +02:00
Jerome Petazzoni
29a3ce0ba2 Update last chapter (what's next) 2018-09-28 23:16:24 +02:00
Bridget Kromhout
e5fe27dd54 Merge pull request #352 from jpetazzo/remove-netpol-slides-from-ns
Remove network policies blurb from namespaces chapter
2018-09-28 15:17:51 -05:00
Jerome Petazzoni
6016ffe7d7 Add hidden link to pre-game video 2018-09-28 13:43:21 -05:00
Jerome Petazzoni
7c94a6f689 Remove network policies blurb from namespaces chapter 2018-09-28 13:34:26 -05:00
There is now a dedicated chapter about network policies, so
the two very rough slides on that topic should be removed
from the namespaces chapter.
2018-09-28 13:34:26 -05:00
Bridget Kromhout
5953ffe10b Merge pull request #350 from BretFisher/win-detach-note
adding slide about PowerShell detaching
2018-09-28 08:11:20 -05:00
Bridget Kromhout
3016019560 Update Start_And_Attach.md
slight edits for clarity
2018-09-28 08:10:12 -05:00
Bridget Kromhout
0d5da73c74 Merge pull request #339 from jpetazzo/replace-es-with-httpenv
Replace ElasticSearch with jpetazzo/httpenv
2018-09-28 08:05:15 -05:00
Bret Fisher
91c835fcb4 adding slide about PowerShell detaching 2018-09-28 00:20:03 -04:00
Bret Fisher
d01ae0ff39 initial Windows Container pack 2018-09-27 07:13:03 -04:00
Thomas Gerbet
63b85da4f6 Add missing link to storage in Prometheus 2 talk 2018-09-22 12:56:58 +02:00
Maxime Deravet
2406e72210 use https to clone git repo 2018-09-22 12:54:43 +02:00
Jerome Petazzoni
32e1edc2a2 Long slide is long 2018-09-21 09:08:58 +02:00
Jerome Petazzoni
84225e982f Merge branch 'Julien-Eyraud-fix-kaniko-build' 2018-09-19 14:01:24 -05:00
Jerome Petazzoni
e76a06e942 Merge branch 'fix-kaniko-build' of git://github.com/Julien-Eyraud/container.training into Julien-Eyraud-fix-kaniko-build 2018-09-19 14:01:02 -05:00
Nicolas Gavalda
0519682c30 Fix small typo 2018-09-18 18:50:41 +02:00
Jérôme Petazzoni
91f7a81964 Merge branch 'master' into fix-kaniko-build 2018-09-18 18:49:13 +02:00
Nicolas Schwartz
a66fcaf04c Update kaniko-build.yaml
Fix option
2018-09-18 18:48:01 +02:00
Julien Eyraud
9a0649e671 Change postgresql mount path 2018-09-18 17:42:10 +02:00
Julien Eyraud
d23ad0cd8f Fix kaniko-build.yaml to use insecure registry 2018-09-18 16:05:05 +02:00
Jerome Petazzoni
63755c1cd3 Minor fixes 2018-09-16 15:35:23 -05:00
Jerome Petazzoni
149cf79615 Add ENIX cluster files 2018-09-16 12:49:33 -05:00
Jerome Petazzoni
a627128570 Set EFK UID to 0 (fixes #325) 2018-09-16 10:58:10 -05:00
Jerome Petazzoni
91e3078d2e Better error checking + GRO fix 2018-09-16 09:10:14 -05:00
Jerome Petazzoni
31dd943141 Typo 2018-09-16 09:09:08 -05:00
Jerome Petazzoni
3866701475 Fix postgres data volume 2018-09-16 09:08:23 -05:00
Jerome Petazzoni
521f8e9889 More typo fixes courtesy of @abuisine 2018-09-15 11:11:08 -05:00
Jerome Petazzoni
49c3fdd3b2 Minor updates (thanks @abuisine) 2018-09-15 11:03:24 -05:00
Jerome Petazzoni
4bb6a49ee0 Typo fix (thanks @sload) 2018-09-15 10:45:37 -05:00
Jerome Petazzoni
db8e8377ac Replace ElasticSearch with jpetazzo/httpenv
ElasticSearch slowly uses up to 2GB of RAM.
Eventually, on instances provisioned with
only 4GB of RAM and without swap, if more
than one ElasticSearch pod ends up on the
same instance, it will cause the instance
to slow down and ultimately crash. Instead,
we now use a tiny Go web server that shows
its environment in JSON. It still highlights
that multiple backends are serving requests
but without the memory usage issue.
2018-09-12 15:49:27 -05:00
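(Not part of the log: a sketch of how the replacement backend can be exercised; the deployment name is arbitrary and the port is an assumption, so check the image's documentation.)
$ kubectl create deployment httpenv --image=jpetazzo/httpenv
$ kubectl scale deployment httpenv --replicas=3
$ kubectl expose deployment httpenv --port=8888
$ curl http://$(kubectl get svc httpenv -o "jsonpath={.spec.clusterIP}"):8888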
Jerome Petazzoni
510a37be44 Rebalance chapter 3/4 2018-09-12 00:15:54 -05:00
Jerome Petazzoni
230bd73597 Update versions 2018-09-11 14:37:04 -05:00
Jerome Petazzoni
7217c0ee1d Typos and fixes for autopilot
There is no significant change to the *content* here, but there are
a lot of typo fixes, and commands were added so that the autopilot
works correctly.
2018-09-11 01:41:56 -05:00
Jerome Petazzoni
77d455d894 Sort chapters numerically in slides counter 2018-09-09 17:56:27 -05:00
Jerome Petazzoni
4f9c8275d9 Incorporate Bridget's feedback 2018-09-08 09:55:01 -05:00
Bridget Kromhout
f11aae2514 Update accessinternal.md
slight changes
2018-09-08 09:55:01 -05:00
Jerome Petazzoni
f1e9efc38c Explain how to access internal services
By using kubectl proxy and kubectl port-forward
2018-09-08 09:55:01 -05:00
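(Not part of the log: the two access methods in a nutshell, assuming a service named webui listening on port 80.)
$ kubectl proxy &
$ curl localhost:8001/api/v1/namespaces/default/services/webui/proxy/
$ kubectl port-forward service/webui 8080:80 &
$ curl localhost:8080/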
Bridget Kromhout
975cc4f7df Merge pull request #332 from jpetazzo/new-content-sep-2018
New content for sep 2018 (MERGE CANDIDATE)
2018-09-08 09:03:20 -05:00
Bridget Kromhout
01243280a2 Update configuration.md 2018-09-08 08:56:26 -05:00
Bridget Kromhout
e652c3639d Merge pull request #336 from jpetazzo/deeper-in-netpol
Deeper in netpol
2018-09-08 08:53:30 -05:00
Bridget Kromhout
1e0954d9b4 Update netpol.md
slight corrections
2018-09-08 08:49:37 -05:00
Jerome Petazzoni
bb21f9bbc9 Improvements following Bridget's feedback 2018-09-08 08:45:16 -05:00
Bridget Kromhout
25466e7950 Merge pull request #334 from jpetazzo/localkubeconfig
Show how to use kubectl from the local machine
2018-09-08 08:45:16 -05:00
Jerome Petazzoni
78026ff9b8 Integrate new content
I've distributed the new content so that the fullday training
(actually two days, don't let the file name distract you)
is broken down into 8 chapters of approximately equal length,
where the most complex content is preferably located at the
end of the chapter (to allow people to catch up and ask questions
during breaks), plus 1 chapter with the what's next / links / thank-you
slides.
2018-09-08 08:23:54 -05:00
Jerome Petazzoni
60c7ef4e53 Merge branch 'master' into new-content-sep-2018 2018-09-08 07:57:41 -05:00
Jerome Petazzoni
55952934ed Add tarmak in deployment options 2018-09-08 07:56:16 -05:00
Jerome Petazzoni
f9d31f4c30 merge 2018-09-08 07:32:14 -05:00
Jerome Petazzoni
ec037e422b Clarify 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
73f66f25d8 Rephrase to avoid confusion 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
28174b6cf9 Oops, fixing bad conflict resolve 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
a80c095a07 Put netpol file in the right directory 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
374574717d Clarify network policies
Add clarification re/ pod-to-pod traffic.
Explain that it's stateful (which most people would expect anyway).
2018-09-08 07:20:31 -05:00
Jerome Petazzoni
efce5d1ad4 Add a short chapter about network policies
I will then expand this chapter to add examples showing
how to isolate namespaces; but let's start with that.
2018-09-08 07:20:31 -05:00
Jerome Petazzoni
4eec91a9e6 Merge branch 'new-content-sep-2018' of github.com:jpetazzo/container.training into new-content-sep-2018 2018-09-08 07:16:56 -05:00
Jerome Petazzoni
57166f33aa Prometheus chapter 2018-09-08 07:16:28 -05:00
Bridget Kromhout
f1ebb1f0fb slight corrections 2018-09-06 11:05:17 -05:00
Bridget Kromhout
8182e4df96 Update portworx.md
Slight corrections for clarity
2018-09-06 10:56:59 -05:00
Bridget Kromhout
6f3580820c Update gitworkflows.md
slight corrections
2018-09-06 10:42:59 -05:00
Bridget Kromhout
7b7fd2a4b4 Merge pull request #329 from jpetazzo/kubectlproxy
Revamp section about kubectl proxy
2018-09-06 10:37:17 -05:00
Jerome Petazzoni
f74addd0ca Add short section with Flux and Gitkube
These sections are not as detailed as usual, but we
intend to show what's possible with git-based workflows.
2018-09-06 07:55:42 -05:00
Jerome Petazzoni
21ba3b7713 Incorporate Bridget's feedback 2018-09-06 02:12:47 -05:00
Jerome Petazzoni
4eca15f822 typo 2018-09-06 01:49:54 -05:00
Bridget Kromhout
4205f619cf Merge pull request #333 from BretFisher/patch-16
adding my next few workshops, I forgets!
2018-09-05 23:31:25 -05:00
Bridget Kromhout
c3dff823ef Update index.yaml
We use `:` as a delimiter and so need to quote text using it.
2018-09-05 23:29:49 -05:00
Bret Fisher
39876d1388 adding my next few workshops, I forgets! 2018-09-05 21:09:13 -04:00
Bridget Kromhout
7e34aa0287 Merge pull request #330 from jpetazzo/move-yaml-to-repo
Add YAML to repo; remove goo.gl links
2018-09-05 09:21:14 -05:00
Bridget Kromhout
3bdafed38e Merge pull request #331 from jpetazzo/preinstall-helm-and-stern
Pre-install Stern and Helm
2018-09-05 09:17:51 -05:00
Jerome Petazzoni
3d438ff304 Add kubectl auth can-i ... 2018-09-05 02:49:49 -05:00
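(Not part of the log: kubectl auth can-i asks the API server whether an action is authorized, either for you or for another identity.)
$ kubectl auth can-i create pods
$ kubectl auth can-i delete nodes --as=system:serviceaccount:default:default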
Jerome Petazzoni
bcd1f37085 Add healthchecks
Explain liveness and readiness probes.
No lab yet.
2018-09-04 16:23:38 -05:00
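(Not part of the log: probes are declared per container in the pod spec; kubectl explain shows the available fields.)
$ kubectl explain pod.spec.containers.livenessProbe
$ kubectl explain pod.spec.containers.readinessProbe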
Jerome Petazzoni
ba928e59fc Add ingress section
- Explain ingress resources
- Show how to deploy Traefik
- Use hostNetwork in the process
- Explain taints and tolerations while we're here
2018-09-04 08:40:58 -05:00
Jerome Petazzoni
62c01ef7d6 Add acknowledgement slide for Portworx/Katacoda 2018-09-03 13:00:30 -05:00
Jerome Petazzoni
a71347e328 Add owners and dependents
And explain how to find orphan resources.
2018-09-03 11:16:54 -05:00
Jerome Petazzoni
f235cfa13c Hint about upcoming dynamic provisioning section 2018-09-03 06:16:24 -05:00
Jerome Petazzoni
45b397682b One more note about storage systems 2018-09-03 06:15:41 -05:00
Jerome Petazzoni
858ad02973 Add notes about dynamic provisioning 2018-09-03 06:08:43 -05:00
Jerome Petazzoni
defeef093d Add dynamic provisioning and PostgreSQL example
In this section, we set up Portworx to have a dynamic provisioner.
Then we use it to deploy a PostgreSQL Stateful Set.
Finally, we simulate a node failure and observe the failover.
2018-09-03 05:47:21 -05:00
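(Not part of the log: with the dynamic provisioner in place, applying the k8s/postgres.yaml file added in this compare should show the claim getting bound automatically.)
$ kubectl apply -f k8s/postgres.yaml
$ kubectl get pvc    # the postgres claim should go from Pending to Bound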
Jerome Petazzoni
b45615e2c3 Mention @jessfraz's img 2018-09-02 10:40:17 -05:00
Jerome Petazzoni
b158babb7f Stateful Sets
- explain the reason why we have stateful sets
- explain the relationship between volumes, persistent volumes,
  persistent volume claims, volume claim templates
- show how to run a Consul cluster with a stateful set
2018-09-02 08:51:03 -05:00
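(Not part of the log: the Consul example corresponds to the k8s/consul.yaml file added in this compare; stateful set pods are created one at a time, in order.)
$ kubectl apply -f k8s/consul.yaml
$ kubectl get pods -l app=consul -w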
Jerome Petazzoni
59b7386b91 Add authentication and authorization 2018-09-01 09:40:30 -05:00
Jerome Petazzoni
c05bcd23d9 Tons of new chapters! Excitement!
- volumes (general overview)
- building with the docker engine (bind-mounting the docker socket)
- building with kaniko (and init containers)
- managing configuration (configmaps, downward api)

Also added a new-content.yml file with just the new content
(for easier review), containing my plans for future chapters.
2018-08-31 03:27:15 -05:00
Jerome Petazzoni
3cb91855c8 Pre-install Stern and Helm
The commands to install Stern and Helm aren't super exciting,
so let's pre-install these tools. That way, we also generate
completion for them. We still give installation instructions
just in case, but this saves time for more important stuff.
2018-08-28 07:21:43 -05:00
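(Not part of the log: generating completion is a one-liner for both tools; treat the exact flags as an assumption for the versions in use at the time.)
$ source <(helm completion bash)
$ source <(stern --completion=bash)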
Jerome Petazzoni
dc0850ef3e Expand the network policy section 2018-08-27 11:36:46 -05:00
Jerome Petazzoni
ffdd7fda45 Add YAML to repo; remove goo.gl links
We load a few YAML files from goo.gl links. To avoid bad
surprises, we're moving these YAML files to the repository.
2018-08-27 07:04:01 -05:00
Jerome Petazzoni
83b2133573 Oops, fixing bad conflict resolve 2018-08-23 04:56:22 -05:00
Jerome Petazzoni
d04856f964 Show how to use kubectl from the local machine 2018-08-22 09:22:59 -05:00
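(Not part of the log: a minimal sketch of the idea; copy the cluster's kubeconfig to your machine and point kubectl at it. Host and paths are assumptions.)
$ scp docker@node1:.kube/config ~/.kube/config.training
$ kubectl --kubeconfig ~/.kube/config.training get nodes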
Jerome Petazzoni
8373d5302f Revamp section about kubectl proxy 2018-08-21 08:08:19 -05:00
Jerome Petazzoni
7d7cb0eadb Put netpol file in the right directory 2018-08-21 04:21:39 -05:00
Jerome Petazzoni
c00c87f8f2 Clarify network policies
Add clarification re/ pod-to-pod traffic.
Explain that it's stateful (which most people would expect anyway).
2018-08-21 04:21:17 -05:00
Jerome Petazzoni
f599462ad7 Add a short chapter about network policies
I will then expand this chapter to add examples showing
how to isolate namespaces; but let's start with that.
2018-08-21 04:21:17 -05:00
Jerome Petazzoni
018282f392 slides: rename directories
This was discussed and agreed in #246. It will probably break a few
outstanding PRs as well as a few external links, but it's for the
greater good long term.
2018-08-21 04:03:38 -05:00
Jerome Petazzoni
23b3c1c05a Last tweaks so that autopilot passes 2018-08-20 14:58:00 -05:00
Jerome Petazzoni
62686d0b7a Miscellaneous fixes for autopilot
These changes are only for the autopilot test harness.
They add hidden commands and keystrokes but don't affect
the content of the slides.
2018-08-20 14:15:06 -05:00
Jerome Petazzoni
54288502a2 autopilot: add support for hidden commands 2018-08-20 10:22:01 -05:00
Jerome Petazzoni
efc045e40b autopilot: put a bunch of features behind flags
We don't always need to track slides, switch desktops, and open links.
(These things are not necessary when we're purely testing the labs.)
All these features are now behind boolean flags saved in the state file.
2018-08-20 08:31:47 -05:00
Bridget Kromhout
6e9b16511f Cloud-agnostic; mentioning multiple clouds 2018-08-19 10:07:52 -05:00
Jerome Petazzoni
81b6e60a8c Merge branch 'master' of github.com:jpetazzo/container.training 2018-08-18 11:13:45 -05:00
Jerome Petazzoni
5baaf7e00a Fixes #327 2018-08-18 11:13:39 -05:00
Jérôme Petazzoni
d4d460397f Mention progressDeadlineSeconds
@abuisine ran through the whole deck recently, taking the long route whenever possible, and he noticed that another field had to be removed when transforming the Deployment into a DaemonSet.
2018-08-15 04:08:31 -05:00
Bridget Kromhout
f66b6b2ee3 Slight edits (#326) 2018-08-15 04:07:42 -05:00
Jérôme Petazzoni
fb7f7fd8c8 Expand the brief logging/metrics slide
Thanks to @abuisine for reminding me that Heapster is going through a deprecation cycle.

I'm also expanding these two slides to be a bit more useful and relevant.
2018-08-15 04:07:42 -05:00
Jérôme Petazzoni
dc98fa21a9 Add explanations for a failure mode in logging (#324)
* Add explanations for a failure mode in logging

Thanks @abuisine for reporting that one too!

* Typo
2018-08-15 04:04:18 -05:00
Jerome Petazzoni
6b662d3e4c Add QCON workshops 2018-08-15 03:09:22 -05:00
Tim Bell
7069682c8e Update Dockerfile_Tips.md (#321)
Fix typo
2018-08-08 08:40:06 -05:00
Katie McLaughlin
3b1d5b93a8 Update pwk link (#319) 2018-08-02 06:22:42 -05:00
Maxime Deravet
611fe55e90 Allow configuring the Docker password using the settings file (#317) 2018-07-31 08:24:16 -05:00
Jerome Petazzoni
481272ac22 Add fallback when non-standard strftime is not supported
Closes #301

Thanks @petertang2012
2018-07-27 06:07:11 -05:00
Bridget Kromhout
9069e2d7db Merge pull request #318 from bridgetkromhout/add-vel-uk
Add Velocity UK
2018-07-26 18:35:04 -05:00
Bridget Kromhout
1144c16a4c Add Velocity UK 2018-07-26 18:33:49 -05:00
Bridget Kromhout
9b2846633c Merge pull request #315 from jpetazzo/clarify-kubeadm
Clarify usage of kubeadm
2018-07-20 15:42:31 -07:00
Jérôme Petazzoni
db88c0a5bf Clarify usage of kubeadm
Thanks to @robcz for the inspiration for that one!
2018-07-17 11:55:20 -05:00
Jérôme Petazzoni
28863728c2 Update rollout, new defaults are 25%/25% for MaxSurge and MaxUnavailable (#314) 2018-07-17 10:54:45 -05:00
Bridget Kromhout
dc341da813 Merge pull request #309 from bridgetkromhout/slight-updates
Slight updates for 1.11
2018-07-16 18:58:00 -05:00
Bridget Kromhout
1d210ad808 Merge pull request #3 from jpetazzo/slighter-updates
Slighter updates
2018-07-16 18:28:20 -05:00
Jerome Petazzoni
76d9adadf5 'until 1.10' is ambiguous, try to be more explicit 2018-07-16 18:25:30 -05:00
Jerome Petazzoni
065371fa99 Merge branch 'bridgetkromhout-slight-updates' into slighter-updates 2018-07-16 18:12:45 -05:00
Jerome Petazzoni
e45f21454e Update a couple of references to kube-dns; and cosmetic tweaks 2018-07-16 18:09:50 -05:00
Bridget Kromhout
4d8c13b0bf AKS name change 2018-07-16 18:09:50 -05:00
Bridget Kromhout
5e6b38e8d1 Replace kube-dns with CoreDNS 2018-07-16 18:09:50 -05:00
Bridget Kromhout
5dd2b6313e coredns instead of kube-dns 2018-07-16 18:09:50 -05:00
Bridget Kromhout
96bf00c59b Switching from get to use kubectl api-resources 2018-07-16 18:09:50 -05:00
Bridget Kromhout
065310901f This info isn't shown anymore by kubectl get 2018-07-16 18:09:50 -05:00
Jerome Petazzoni
103261ea35 Update a couple of references to kube-dns; and cosmetic tweaks 2018-07-16 18:07:07 -05:00
Jerome Petazzoni
c6fb6f30af Merge branch 'slight-updates' of git://github.com/bridgetkromhout/container.training into bridgetkromhout-slight-updates 2018-07-16 17:48:56 -05:00
Bridget Kromhout
134d24e23b AKS name change 2018-07-16 15:08:07 -07:00
Jerome Petazzoni
8a8e97f6e2 Add Jerome's training, September in Paris 2018-07-16 16:42:25 -05:00
Bridget Kromhout
29c1bc47d4 Replace kube-dns with CoreDNS 2018-07-16 13:53:27 -07:00
Bridget Kromhout
8af5a10407 coredns instead of kube-dns 2018-07-16 13:45:26 -07:00
Bridget Kromhout
8e9991a860 Switching from get to use kubectl api-resources 2018-07-16 13:38:28 -07:00
Bridget Kromhout
8ba5d6d736 This info isn't shown anymore by kubectl get 2018-07-16 13:32:53 -07:00
Bridget Kromhout
b3d1e2133d Merge pull request #308 from bridgetkromhout/add-oscon
Add oscon slides
2018-07-15 13:24:46 -05:00
Bridget Kromhout
b3cf30f804 Add oscon slides 2018-07-15 13:23:33 -05:00
Bridget Kromhout
b845543e5f Merge pull request #305 from bridgetkromhout/list-msp-slides
Adding slides link
2018-07-10 18:08:52 -05:00
Bridget Kromhout
1b54470046 Adding slides link 2018-07-10 18:04:35 -05:00
Bridget Kromhout
ee2b20926c Merge pull request #302 from bridgetkromhout/version-1.11.0
Version bump
2018-07-10 06:18:30 -05:00
Bridget Kromhout
96a76d2a19 Version bump 2018-07-10 06:17:07 -05:00
Bridget Kromhout
78ac91fcd5 Merge pull request #300 from bridgetkromhout/add-msp
Adding MSP 2018
2018-07-10 05:46:23 -05:00
Bridget Kromhout
971b5b0e6d Let's not link quite yet 2018-07-10 05:45:22 -05:00
Bridget Kromhout
3393563498 Adding MSP 2018 2018-07-06 16:11:37 -05:00
Bridget Kromhout
94483ebfec Merge pull request #298 from jpetazzo/improve-index-format
Switch to two-line format since our titles are so long
2018-07-06 15:43:01 -05:00
Jerome Petazzoni
db5d5878f5 Switch to two-line format since our titles are so long 2018-07-03 10:47:41 -05:00
ctas582
2585daac9b Force rng to be single threaded (#293) 2018-06-28 08:20:54 -05:00
Bridget Kromhout
21043108b3 Merge pull request #296 from bridgetkromhout/version-up
Version bump
2018-06-27 01:14:06 -05:00
Bret Fisher
cb407e75ab make CI/CD common for all courses 2018-04-25 14:27:32 -05:00
Bret Fisher
27d4612449 a note about ci/cd with docker 2018-04-25 14:26:02 -05:00
Bret Fisher
43ab5f79b6 a note about ci/cd with docker 2018-04-25 14:23:40 -05:00
175 changed files with 8852 additions and 1678 deletions

17
.gitignore

@@ -1,13 +1,22 @@
*.pyc
*.swp
*~
prepare-vms/ips.txt
prepare-vms/ips.html
prepare-vms/ips.pdf
prepare-vms/settings.yaml
prepare-vms/tags
prepare-vms/infra
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
node_modules
### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride
### Windows ###
# Windows thumbnail cache files
Thumbs.db
ehthumbs.db
ehthumbs_vista.db


@@ -28,5 +28,5 @@ def rng(how_many_bytes):
 if __name__ == "__main__":
-    app.run(host="0.0.0.0", port=80)
+    app.run(host="0.0.0.0", port=80, threaded=False)

62
k8s/consul.yaml Normal file

@@ -0,0 +1,62 @@
apiVersion: v1
kind: Service
metadata:
  name: consul
spec:
  ports:
  - port: 8500
    name: http
  selector:
    app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul
spec:
  serviceName: consul
  replicas: 3
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - consul
            topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      containers:
      - name: consul
        image: "consul:1.2.2"
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - "agent"
        - "-bootstrap-expect=3"
        - "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
        - "-client=0.0.0.0"
        - "-data-dir=/consul/data"
        - "-server"
        - "-ui"
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - consul leave

28
k8s/docker-build.yaml Normal file

@@ -0,0 +1,28 @@
apiVersion: v1
kind: Pod
metadata:
  name: build-image
spec:
  restartPolicy: OnFailure
  containers:
  - name: docker-build
    image: docker
    env:
    - name: REGISTRY_PORT
      value: #"30000"
    command: ["sh", "-c"]
    args:
    - |
      apk add --no-cache git &&
      mkdir /workspace &&
      git clone https://github.com/jpetazzo/container.training /workspace &&
      docker build -t localhost:$REGISTRY_PORT/worker /workspace/dockercoins/worker &&
      docker push localhost:$REGISTRY_PORT/worker
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock

224
k8s/efk.yaml Normal file

@@ -0,0 +1,224 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: default
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        # X-Pack Authentication
        # =====================
        - name: FLUENT_ELASTICSEARCH_USER
          value: "elastic"
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          value: "changeme"
        - name: FLUENT_UID
          value: "0"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: elasticsearch
  name: elasticsearch
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/elasticsearch
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: elasticsearch
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: elasticsearch
    spec:
      containers:
      - image: elasticsearch:5.6.8
        imagePullPolicy: IfNotPresent
        name: elasticsearch
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: elasticsearch
  name: elasticsearch
  selfLink: /api/v1/namespaces/default/services/elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    run: elasticsearch
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: kibana
  name: kibana
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/kibana
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: kibana
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: kibana
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200/
        image: kibana:5.6.8
        imagePullPolicy: Always
        name: kibana
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: kibana
  name: kibana
  selfLink: /api/v1/namespaces/default/services/kibana
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    run: kibana
  sessionAffinity: None
  type: NodePort


@@ -0,0 +1,14 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

18
k8s/haproxy.cfg Normal file

@@ -0,0 +1,18 @@
global
    daemon
    maxconn 256

defaults
    mode tcp
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend the-frontend
    bind *:80
    default_backend the-backend

backend the-backend
    server google.com-80 google.com:80 maxconn 32 check
    server ibm.fr-80 ibm.fr:80 maxconn 32 check

16
k8s/haproxy.yaml Normal file

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
  name: haproxy
spec:
  volumes:
  - name: config
    configMap:
      name: haproxy
  containers:
  - name: haproxy
    image: haproxy
    volumeMounts:
    - name: config
      mountPath: /usr/local/etc/haproxy/

14
k8s/ingress.yaml Normal file

@@ -0,0 +1,14 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cheddar
spec:
  rules:
  - host: cheddar.A.B.C.D.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: cheddar
          servicePort: 80

29
k8s/kaniko-build.yaml Normal file

@@ -0,0 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  initContainers:
  - name: git-clone
    image: alpine
    command: ["sh", "-c"]
    args:
    - |
      apk add --no-cache git &&
      git clone git://github.com/jpetazzo/container.training /workspace
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  containers:
  - name: build-image
    image: gcr.io/kaniko-project/executor:latest
    args:
    - "--context=/workspace/dockercoins/rng"
    - "--insecure"
    - "--destination=registry:5000/rng-kaniko:latest"
    volumeMounts:
    - name: workspace
      mountPath: /workspace
  volumes:
  - name: workspace


@@ -0,0 +1,167 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard


@@ -0,0 +1,14 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-testcurl-for-testweb
spec:
  podSelector:
    matchLabels:
      run: testweb
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: testcurl


@@ -0,0 +1,10 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-for-testweb
spec:
  podSelector:
    matchLabels:
      run: testweb
  ingress: []


@@ -0,0 +1,22 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-webui
spec:
  podSelector:
    matchLabels:
      run: webui
  ingress:
  - from: []


@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-volume
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/
  restartPolicy: OnFailure

580
k8s/portworx.yaml Normal file

@@ -0,0 +1,580 @@
# SOURCE: https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true
apiVersion: v1
kind: ConfigMap
metadata:
  name: stork-config
  namespace: kube-system
data:
  policy.cfg: |-
    {
      "kind": "Policy",
      "apiVersion": "v1",
      "extenders": [
        {
          "urlPrefix": "http://stork-service.kube-system.svc:8099",
          "apiVersion": "v1beta1",
          "filterVerb": "filter",
          "prioritizeVerb": "prioritize",
          "weight": 5,
          "enableHttps": false,
          "nodeCacheCapable": false
        }
      ]
    }
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: stork-account
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: stork-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create", "list", "watch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
  resources: ["volumesnapshots"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
  resources: ["volumesnapshotdatas"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "create", "update"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["*"]
  resources: ["deployments", "deployments/extensions"]
  verbs: ["list", "get", "watch", "patch", "update", "initialize"]
- apiGroups: ["*"]
  resources: ["statefulsets", "statefulsets/extensions"]
  verbs: ["list", "get", "watch", "patch", "update", "initialize"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: stork-role-binding
subjects:
- kind: ServiceAccount
  name: stork-account
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: stork-role
  apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
  name: stork-service
  namespace: kube-system
spec:
  selector:
    name: stork
  ports:
  - protocol: TCP
    port: 8099
    targetPort: 8099
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  labels:
    tier: control-plane
  name: stork
  namespace: kube-system
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        name: stork
        tier: control-plane
    spec:
      containers:
      - command:
        - /stork
        - --driver=pxd
        - --verbose
        - --leader-elect=true
        - --health-monitor-interval=120
        imagePullPolicy: Always
        image: openstorage/stork:1.1.3
        resources:
          requests:
            cpu: '0.1'
        name: stork
      hostPID: false
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "name"
                operator: In
                values:
                - stork
            topologyKey: "kubernetes.io/hostname"
      serviceAccountName: stork-account
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: stork-snapshot-sc
provisioner: stork-snapshot
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: stork-scheduler-account
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: stork-scheduler-role
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["create"]
- apiGroups: [""]
  resourceNames: ["kube-scheduler"]
  resources: ["endpoints"]
  verbs: ["delete", "get", "patch", "update"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["delete", "get", "list", "watch"]
- apiGroups: [""]
  resources: ["bindings", "pods/binding"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods/status"]
  verbs: ["patch", "update"]
- apiGroups: [""]
  resources: ["replicationcontrollers", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["app", "extensions"]
  resources: ["replicasets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["policy"]
  resources: ["poddisruptionbudgets"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims", "persistentvolumes"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: stork-scheduler-role-binding
subjects:
- kind: ServiceAccount
  name: stork-scheduler-account
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: stork-scheduler-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    component: scheduler
    tier: control-plane
    name: stork-scheduler
  name: stork-scheduler
  namespace: kube-system
spec:
  replicas: 3
  template:
    metadata:
      labels:
        component: scheduler
        tier: control-plane
        name: stork-scheduler
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-scheduler
        - --address=0.0.0.0
        - --leader-elect=true
        - --scheduler-name=stork
        - --policy-configmap=stork-config
        - --policy-configmap-namespace=kube-system
        - --lock-object-name=stork-scheduler
        image: gcr.io/google_containers/kube-scheduler-amd64:v1.11.2
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10251
          initialDelaySeconds: 15
        name: stork-scheduler
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10251
        resources:
          requests:
            cpu: '0.1'
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "name"
                operator: In
                values:
                - stork-scheduler
            topologyKey: "kubernetes.io/hostname"
      hostPID: false
      serviceAccountName: stork-scheduler-account
---
kind: Service
apiVersion: v1
metadata:
  name: portworx-service
  namespace: kube-system
  labels:
    name: portworx
spec:
  selector:
    name: portworx
  ports:
  - name: px-api
    protocol: TCP
    port: 9001
    targetPort: 9001
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: px-account
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-get-put-list-role
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["watch", "get", "update", "list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["delete", "get", "list"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims", "persistentvolumes"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "update", "create"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["privileged"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-role-binding
subjects:
- kind: ServiceAccount
  name: px-account
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: node-get-put-list-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Namespace
metadata:
  name: portworx
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: px-role
  namespace: portworx
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: px-role-binding
  namespace: portworx
subjects:
- kind: ServiceAccount
  name: px-account
  namespace: kube-system
roleRef:
  kind: Role
  name: px-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: portworx
  namespace: kube-system
  annotations:
    portworx.com/install-source: "https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true"
spec:
  minReadySeconds: 0
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: portworx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: px/enabled
                operator: NotIn
                values:
                - "false"
              - key: node-role.kubernetes.io/master
                operator: DoesNotExist
      hostNetwork: true
      hostPID: false
      containers:
      - name: portworx
        image: portworx/oci-monitor:1.4.2.2
        imagePullPolicy: Always
        args:
          ["-c", "px-workshop", "-s", "/dev/loop4", "-b",
           "-x", "kubernetes"]
        env:
        - name: "PX_TEMPLATE_VERSION"
          value: "v4"
        livenessProbe:
          periodSeconds: 30
          initialDelaySeconds: 840 # allow image pull in slow networks
          httpGet:
            host: 127.0.0.1
            path: /status
            port: 9001
        readinessProbe:
          periodSeconds: 10
          httpGet:
            host: 127.0.0.1
            path: /health
            port: 9015
        terminationMessagePath: "/tmp/px-termination-log"
        securityContext:
          privileged: true
        volumeMounts:
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: etcpwx
          mountPath: /etc/pwx
        - name: optpwx
          mountPath: /opt/pwx
        - name: proc1nsmount
          mountPath: /host_proc/1/ns
        - name: sysdmount
          mountPath: /etc/systemd/system
        - name: diagsdump
          mountPath: /var/cores
        - name: journalmount1
          mountPath: /var/run/log
          readOnly: true
        - name: journalmount2
          mountPath: /var/log
          readOnly: true
        - name: dbusmount
          mountPath: /var/run/dbus
      restartPolicy: Always
      serviceAccountName: px-account
      volumes:
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: etcpwx
        hostPath:
          path: /etc/pwx
      - name: optpwx
        hostPath:
          path: /opt/pwx
      - name: proc1nsmount
        hostPath:
          path: /proc/1/ns
      - name: sysdmount
        hostPath:
          path: /etc/systemd/system
      - name: diagsdump
        hostPath:
          path: /var/cores
      - name: journalmount1
        hostPath:
          path: /var/run/log
      - name: journalmount2
        hostPath:
          path: /var/log
      - name: dbusmount
        hostPath:
          path: /var/run/dbus
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: px-lh-account
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: px-lh-role
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "create", "update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: px-lh-role-binding
  namespace: kube-system
subjects:
- kind: ServiceAccount
  name: px-lh-account
  namespace: kube-system
roleRef:
  kind: Role
  name: px-lh-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
  name: px-lighthouse
  namespace: kube-system
  labels:
    tier: px-web-console
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    nodePort: 32678
  - name: https
    port: 443
    nodePort: 32679
  selector:
    tier: px-web-console
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: px-lighthouse
  namespace: kube-system
  labels:
    tier: px-web-console
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      tier: px-web-console
  replicas: 1
  template:
    metadata:
      labels:
        tier: px-web-console
    spec:
      initContainers:
      - name: config-init
        image: portworx/lh-config-sync:0.2
        imagePullPolicy: Always
        args:
        - "init"
        volumeMounts:
        - name: config
          mountPath: /config/lh
      containers:
      - name: px-lighthouse
        image: portworx/px-lighthouse:1.5.0
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
        volumeMounts:
        - name: config
          mountPath: /config/lh
      - name: config-sync
        image: portworx/lh-config-sync:0.2
        imagePullPolicy: Always
        args:
        - "sync"
        volumeMounts:
        - name: config
          mountPath: /config/lh
      serviceAccountName: px-lh-account
      volumes:
      - name: config
        emptyDir: {}

30
k8s/postgres.yaml Normal file

@@ -0,0 +1,30 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  serviceName: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      schedulerName: stork
      containers:
      - name: postgres
        image: postgres:10.5
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres
  volumeClaimTemplates:
  - metadata:
      name: postgres
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

15
k8s/registry.yaml Normal file

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: registry
spec:
  containers:
  - name: registry
    image: registry
    env:
    - name: REGISTRY_HTTP_ADDR
      valueFrom:
        configMapKeyRef:
          name: registry
          key: http.addr

67
k8s/socat.yaml Normal file

@@ -0,0 +1,67 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: null
  generation: 1
  labels:
    run: socat
  name: socat
  namespace: kube-system
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
spec:
  replicas: 1
  selector:
    matchLabels:
      run: socat
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: socat
    spec:
      containers:
      - args:
        - sh
        - -c
        - apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard:443,verify=0
        image: alpine
        imagePullPolicy: Always
        name: socat
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: socat
  name: socat
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/services/socat
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: socat
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

11
k8s/storage-class.yaml Normal file

@@ -0,0 +1,11 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: portworx-replicated
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
  priority_io: "high"

100
k8s/traefik.yaml Normal file

@@ -0,0 +1,100 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      tolerations:
      - effect: NoSchedule
        operator: Exists
      hostNetwork: true
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system


@@ -1,4 +1,10 @@
# Trainer tools to create and prepare VMs for Docker workshops on AWS or Azure
# Trainer tools to create and prepare VMs for Docker workshops
These tools can help you to create VMs on:
- Azure
- EC2
- OpenStack
## Prerequisites
@@ -6,6 +12,9 @@
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this
Depending on the infrastructure that you want to use, you also need to install
the Azure CLI, the AWS CLI, or Terraform (for OpenStack deployment).
And if you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
@@ -14,20 +23,25 @@ And if you want to generate printable cards:
## General Workflow
- fork/clone repo
- set required environment variables
- create an infrastructure configuration in the `prepare-vms/infra` directory
(using one of the example files in that directory)
- create your own settings file from `settings/example.yaml`
- if necessary, increase allowed open files: `ulimit -Sn 10000`
- run `./workshopctl` commands to create instances, install Docker, set up each user's environment on node1, and perform other management tasks
- run `./workshopctl cards` command to generate a PDF with printable handouts of each user's host IPs and login info
- run `./workshopctl start` to create instances
- run `./workshopctl deploy` to install Docker and set up the environment
- run `./workshopctl kube` (if you want to install and set up Kubernetes)
- run `./workshopctl cards` (if you want to generate a PDF with printable handouts of each user's host IPs and login info)
- run `./workshopctl stop` at the end of the workshop to terminate instances (see the sketch below)
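For concreteness, a minimal end-to-end session could look like this (a sketch: the infra/settings file names and the count are placeholders, and `TAG` stands for the tag that `start` prints):

```bash
cp infra/example.aws infra/aws-us-east-2            # then edit credentials and region
cp settings/example.yaml settings/myworkshop.yaml   # then edit clustersize, etc.
./workshopctl start --infra infra/aws-us-east-2 \
                    --settings settings/myworkshop.yaml --count 60
./workshopctl deploy TAG        # install Docker, set up user environments
./workshopctl kube TAG          # optional: set up Kubernetes clusters
./workshopctl cards TAG         # optional: generate printable handout cards
./workshopctl stop TAG          # at the end of the workshop: terminate instances
```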
## Clone/Fork the Repo, and Build the Tools Image
The Docker Compose file here is used to build an image with all the dependencies to run the `./workshopctl` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](workshopctl#L5).
$ git clone https://github.com/jpetazzo/orchestration-workshop.git
$ cd orchestration-workshop/prepare-vms
$ git clone https://github.com/jpetazzo/container.training
$ cd container.training/prepare-vms
$ docker-compose build
## Preparing to Run `./workshopctl`
### Required AWS Permissions/Info
@@ -36,27 +50,37 @@ The Docker Compose file here is used to build a image with all the dependencies
- Using a non-default VPC or Security Group isn't supported out of the box yet, so you will have to customize `lib/commands.sh` if you want to change that.
- These instances will use the default VPC Security Group, which does not open any ports from the Internet by default, so you'll need to add inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./workshopctl opensg`, which opens up all ports.
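If you only want to open those two specific port ranges rather than everything, a sketch of the equivalent AWS CLI calls (targeting the default security group, like `opensg` does) would be:

```bash
# Allow SSH (22) and the workshop ports (8000-8002) from anywhere:
aws ec2 authorize-security-group-ingress --group-name default \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name default \
    --protocol tcp --port 8000-8002 --cidr 0.0.0.0/0
```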
### Required Environment Variables
### Create your `infra` file
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
You need to do this only once. (On AWS, you can create one `infra`
file per region.)
If you're not using AWS, set these to placeholder values:
Make a copy of one of the example files in the `infra` directory.
For instance:
```bash
cp infra/example.aws infra/aws-us-west-2
```
```bash
export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="foo"
export AWS_DEFAULT_REGION="foo"
```
Edit your infrastructure file to customize it.
You will probably need to enter your cloud provider credentials, select a region, etc.
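For reference, a customized AWS infra file would look something like this (modeled on `infra/example.aws` shown later in this diff; the credential values are placeholders):

```bash
INFRACLASS=aws
export AWS_DEFAULT_REGION=us-west-2
export AWS_ACCESS_KEY_ID=AKI...        # placeholder: your access key ID
export AWS_SECRET_ACCESS_KEY=...       # placeholder: your secret access key
```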
If you don't have the `aws` CLI installed, you will get a warning that it's a missing dependency. If you're not using AWS, you can ignore this.
### Update/copy `settings/example.yaml`
### Create your `settings` file
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.
Similarly, pick one of the files in `settings` and copy it
to customize it.
./workshopctl cards 2016-09-28-00-33-bret settings/orchestration.yaml
For instance:
```bash
cp settings/example.yaml settings/myworkshop.yaml
```
You're all set!
## `./workshopctl` Usage
@@ -66,7 +90,7 @@ Commands:
ami Show the AMI that will be used for deployment
amis List Ubuntu AMIs in the current region
build Build the Docker image to run this program in a container
cards Generate ready-to-print cards for a batch of VMs
cards Generate ready-to-print cards for a group of VMs
deploy Install Docker on a bunch of running VMs
ec2quotas Check our EC2 quotas (max instances)
help Show available commands
@@ -74,14 +98,14 @@ ids List the instance IDs belonging to a given tag or token
ips List the IP addresses of the VMs for a given tag or token
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all nodes are reporting as Ready
list List available batches in the current region
list List available groups in the current region
opensg Open the default security group to ALL ingress traffic
pull_images Pre-pull a bunch of Docker images
retag Apply a new tag to a batch of VMs
start Start a batch of VMs
status List instance status for a given batch
retag Apply a new tag to a group of VMs
start Start a group of VMs
status List instance status for a given group
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
test Run tests (pre-flight checks) on a batch of VMs
test Run tests (pre-flight checks) on a group of VMs
wrap Run this program in a container
```
@@ -93,24 +117,24 @@ wrap Run this program in a container
- The `./workshopctl` script can be executed directly.
- It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms).
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard coded.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. This can be configured with the `docker_user_password` property in the settings file.
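The password is a plain key in the settings file (it is added to the example settings elsewhere in this diff). A quick way to check what will be printed on your cards (the file name is a placeholder):

```bash
grep docker_user_password settings/myworkshop.yaml
# docker_user_password: training    <- default value shipped with the examples
```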
### Example Steps to Launch a Batch of AWS Instances for a Workshop
### Example Steps to Launch a Group of AWS Instances for a Workshop
- Run `./workshopctl start N` Creates `N` EC2 instances
- Run `./workshopctl start --infra infra/aws-us-east-2 --settings settings/myworkshop.yaml --count 60` to create 60 EC2 instances
- Your local SSH key will be synced to instances under `ubuntu` user
- AWS instances will be created and tagged based on date, and IPs stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- Run `./workshopctl deploy TAG` to run `lib/postprep.py` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires a good connection to run all the parallel SSH connections (up to 100 in parallel). ProTip: create a dedicated management instance in the same AWS region and run these utilities from it.
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
- Run `./workshopctl cards TAG` to generate PDF/HTML files to print, cut, and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
### Example Steps to Launch Azure Instances
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account (`az login`)
- Customize `azuredeploy.parameters.json`
- Required:
- Provide the SSH public key you plan to use for instance configuration
@@ -155,27 +179,16 @@ az group delete --resource-group workshop
### Example Steps to Configure Instances from a non-AWS Source
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to ssh into them.
- Set placeholder values for [AWS environment variable settings](#required-environment-variables).
- Choose a tag. It could be an event name, datestamp, etc. Ensure you have created a directory for your tag: `prepare-vms/tags/<tag>/`
- If you have not already generated a file with the IPs to be configured:
- The file should be named `prepare-vms/tags/<tag>/ips.txt`
- Format is one IP per line, no other info needed.
- Ensure the settings file is as desired (especially the number of nodes): `prepare-vms/settings/kube101.yaml`
- For a tag called `myworkshop`, configure instances: `workshopctl deploy myworkshop settings/kube101.yaml`
- Optionally, configure Kubernetes clusters of the size in the settings: `workshopctl kube myworkshop`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: `workshopctl kubetest myworkshop`
- Generate cards to print and hand out: `workshopctl cards myworkshop settings/kube101.yaml`
- Print the cards file: `prepare-vms/tags/myworkshop/ips.html`
## Other Tools
### Deploying your SSH key to all the machines
- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
- Copy `infra/example.generic` to `infra/generic`
- Run `./workshopctl start --infra infra/generic --settings settings/...yaml`
- Note the `prepare-vms/tags/TAG/` path that has been auto-created.
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to SSH into them.
- Edit the file `prepare-vms/tags/TAG/ips.txt`; it should list the IP addresses of the VMs (one per line, with no comments or other info)
- Continue deployment of cluster configuration with `./workshopctl deploy TAG`
- Optionally, configure Kubernetes clusters of the size in the settings: `./workshopctl kube TAG`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: `./workshopctl kubetest TAG`
- Generate cards to print and hand out: `./workshopctl cards TAG`
- Print the cards file: `prepare-vms/tags/TAG/ips.html` (the full sequence is sketched below)
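Putting those steps together, a minimal session for manually provisioned machines might look like this (a sketch: `TAG` stands for the tag that `start` prints, and the settings file is one of the shipped examples):

```bash
cp infra/example.generic infra/generic
./workshopctl start --infra infra/generic --settings settings/kube101.yaml
"$EDITOR" tags/TAG/ips.txt      # one IP address per line, nothing else
./workshopctl deploy TAG        # configure Docker and the user environments
./workshopctl kube TAG          # optional: set up Kubernetes
./workshopctl kubetest TAG      # optional: check that all nodes are Ready
./workshopctl cards TAG         # the cards end up in tags/TAG/ips.html
```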
## Even More Details
@@ -188,7 +201,7 @@ To see which local key will be uploaded, run `ssh-add -l | grep RSA`.
#### Instance + tag creation
10 VMs will be started, with an automatically generated tag (timestamp + your username).
The VMs will be started, with an automatically generated tag (timestamp + your username).
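The generated tag is simply `date +%Y-%m-%d-%H-%M-$USER` (see `make_tag` further down in this diff), so it looks like this (hypothetical output):

```bash
$ date +%Y-%m-%d-%H-%M-$USER
2018-10-30-13-45-jerome
```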
Your SSH key will be added to the `authorized_keys` of the ubuntu user.
@@ -196,15 +209,11 @@ Your SSH key will be added to the `authorized_keys` of the ubuntu user.
Following the creation of the VMs, a text file will be created containing a list of their IPs.
This `ips.txt` file will be created in the `$TAG/` directory, and a symlink will be placed in the working directory of the script.
If you create new VMs, the symlinked file will be overwritten.
#### Deployment
Instances can be deployed manually using the `deploy` command:
$ ./workshopctl deploy TAG settings/somefile.yaml
$ ./workshopctl deploy TAG
The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.
@@ -214,7 +223,7 @@ The `postprep.py` file will be copied via parallel-ssh to all of the VMs and exe
#### Generate cards
$ ./workshopctl cards TAG settings/somefile.yaml
$ ./workshopctl cards TAG
If you want to generate both HTML and PDF cards, install [wkhtmltopdf](https://wkhtmltopdf.org/downloads.html); without that installed, only HTML cards will be generated.
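On Debian/Ubuntu, the repo's own preparation script installs it like this (a sketch; package names will differ on other platforms):

```bash
sudo apt-get install -y wkhtmltopdf xvfb
```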
@@ -222,13 +231,11 @@ If you don't have `wkhtmltopdf` installed, you will get a warning that it is a m
#### List tags
$ ./workshopctl list
$ ./workshopctl list infra/some-infra-file
#### List VMs
$ ./workshopctl listall
$ ./workshopctl list TAG
This will print a human-friendly list containing some information about each instance.
$ ./workshopctl tags
#### Stop and destroy VMs


@@ -7,15 +7,6 @@ fi
if id docker; then
sudo userdel -r docker
fi
pip install --user awscli jinja2 pdfkit
sudo apt-get install -y wkhtmltopdf xvfb
tmux new-session \; send-keys "
[ -f ~/.ssh/id_rsa ] || ssh-keygen
eval \$(ssh-agent)
ssh-add
Xvfb :0 &
export DISPLAY=:0
mkdir -p ~/www
sudo docker run -d -p 80:80 -v \$HOME/www:/usr/share/nginx/html nginx
"
sudo apt-get update -q
sudo apt-get install -qy jq python-pip wkhtmltopdf xvfb
pip install --user awscli jinja2 pdfkit pssh


@@ -0,0 +1,6 @@
INFRACLASS=aws
# If you are using AWS to deploy, copy this file (e.g. to "aws", or "us-east-1")
# and customize the variables below.
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=AKI...
export AWS_SECRET_ACCESS_KEY=...


@@ -0,0 +1,2 @@
INFRACLASS=generic
# This is for manual provisioning. No other variable or configuration is needed.


@@ -0,0 +1,9 @@
INFRACLASS=openstack
# If you are using OpenStack, copy this file (e.g. to "openstack" or "enix")
# and customize the variables below.
export TF_VAR_user="jpetazzo"
export TF_VAR_tenant="training"
export TF_VAR_domain="Default"
export TF_VAR_password="..."
export TF_VAR_auth_url="https://api.r1.nxs.enix.io/v3"
export TF_VAR_flavor="GP1.S"


@@ -1,105 +0,0 @@
aws_display_tags() {
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| uniq -c \
| sort -k 3 \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')
aws ec2 describe-instance-status \
--instance-ids $IDS \
--query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
--output table
}
aws_display_instances_by_tag() {
TAG=$1
need_tag $TAG
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "$result"
fi
}
aws_get_instance_ids_by_filter() {
FILTER=$1
aws ec2 describe-instances --filters $FILTER \
--query Reservations[*].Instances[*].InstanceId \
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
need_tag $TOKEN
aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}
aws_get_instance_ids_by_tag() {
TAG=$1
need_tag $TAG
aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}
aws_get_instance_ips_by_tag() {
TAG=$1
need_tag $TAG
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws_get_instance_ids_by_tag $TAG)
if [ -z "$IDS" ]; then
die "Invalid tag."
fi
info "Deleting instances with tag $TAG."
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
info "Deleted instances with tag $TAG."
}
aws_tag_instances() {
OLD_TAG_OR_TOKEN=$1
NEW_TAG=$2
IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}


@@ -50,27 +50,38 @@ sep() {
fi
}
need_tag() {
need_infra() {
if [ -z "$1" ]; then
die "Please specify infrastructure file. (e.g.: infra/aws)"
fi
if [ ! -f "$1" ]; then
die "Infrastructure file $1 doesn't exist."
fi
. "$1"
. "lib/infra/$INFRACLASS.sh"
}
need_tag() {
if [ -z "$TAG" ]; then
die "Please specify a tag or token. To see available tags and tokens, run: $0 list"
fi
if [ ! -d "tags/$TAG" ]; then
die "Tag $TAG not found (directory tags/$TAG does not exist)."
fi
for FILE in settings.yaml ips.txt infra.sh; do
if [ ! -f "tags/$TAG/$FILE" ]; then
warning "File tags/$TAG/$FILE not found."
fi
done
. "tags/$TAG/infra.sh"
. "lib/infra/$INFRACLASS.sh"
}
need_settings() {
if [ -z "$1" ]; then
die "Please specify a settings file."
elif [ ! -f "$1" ]; then
die "Please specify a settings file. (e.g.: settings/kube101.yaml)"
fi
if [ ! -f "$1" ]; then
die "Settings file $1 doesn't exist."
fi
}
need_ips_file() {
IPS_FILE=$1
if [ -z "$IPS_FILE" ]; then
die "IPS_FILE not set."
fi
if [ ! -s "$IPS_FILE" ]; then
die "IPS_FILE $IPS_FILE not found. Please run: $0 ips <TAG>"
fi
}


@@ -7,21 +7,11 @@ _cmd() {
_cmd help "Show available commands"
_cmd_help() {
printf "$(basename $0) - the orchestration workshop swiss army knife\n"
printf "$(basename $0) - the container training swiss army knife\n"
printf "Commands:"
printf "%s" "$HELP" | sort
}
_cmd amis "List Ubuntu AMIs in the current region"
_cmd_amis() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION "$@"
}
_cmd ami "Show the AMI that will be used for deployment"
_cmd_ami() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}
_cmd build "Build the Docker image to run this program in a container"
_cmd_build() {
docker-compose build
@@ -32,64 +22,53 @@ _cmd_wrap() {
docker-compose run --rm workshopctl "$@"
}
_cmd cards "Generate ready-to-print cards for a batch of VMs"
_cmd cards "Generate ready-to-print cards for a group of VMs"
_cmd_cards() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
need_tag
# If you're not using AWS, populate the ips.txt file manually
if [ ! -f tags/$TAG/ips.txt ]; then
aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
fi
# Remove symlinks to old cards
rm -f ips.html ips.pdf
# This will generate two files in the base dir: ips.pdf and ips.html
lib/ips-txt-to-html.py $SETTINGS
for f in ips.html ips.pdf; do
# Remove old versions of cards if they exist
rm -f tags/$TAG/$f
# Move the generated file and replace it with a symlink
mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
done
# This will process ips.txt to generate two files: ips.pdf and ips.html
(
cd tags/$TAG
../../lib/ips-txt-to-html.py settings.yaml
)
info "Cards created. You can view them with:"
info "xdg-open ips.html ips.pdf (on Linux)"
info "open ips.html ips.pdf (on MacOS)"
info "xdg-open tags/$TAG/ips.html tags/$TAG/ips.pdf (on Linux)"
info "open tags/$TAG/ips.html (on macOS)"
}
_cmd deploy "Install Docker on a bunch of running VMs"
_cmd_deploy() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
link_tag $TAG
count=$(wc -l ips.txt)
need_tag
# wait until all hosts are reachable before trying to deploy
info "Trying to reach $TAG instances..."
while ! tag_is_reachable $TAG; do
while ! tag_is_reachable; do
>/dev/stderr echo -n "."
sleep 2
done
>/dev/stderr echo ""
echo deploying > tags/$TAG/status
sep "Deploying tag $TAG"
pssh -I tee /tmp/settings.yaml <$SETTINGS
# Wait for cloudinit to be done
pssh "
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done"
# Copy settings and install Python YAML parser
pssh -I tee /tmp/settings.yaml <tags/$TAG/settings.yaml
pssh "
sudo apt-get update &&
sudo apt-get install -y python-setuptools &&
sudo easy_install pyyaml"
sudo apt-get install -y python-yaml"
# Copy postprep.py to the remote machines, and execute it, feeding it the list of IP addresses
pssh -I tee /tmp/postprep.py <lib/postprep.py
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <ips.txt
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <tags/$TAG/ips.txt
# Install docker-prompt script
pssh -I sudo tee /usr/local/bin/docker-prompt <lib/docker-prompt
@@ -117,14 +96,17 @@ _cmd_deploy() {
fi"
sep "Deployed tag $TAG"
echo deployed > tags/$TAG/status
info "You may want to run one of the following commands:"
info "$0 kube $TAG"
info "$0 pull_images $TAG"
info "$0 cards $TAG $SETTINGS"
info "$0 cards $TAG"
}
_cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"
_cmd_kube() {
TAG=$1
need_tag
# Install packages
pssh --timeout 200 "
@@ -134,13 +116,13 @@ _cmd_kube() {
sudo tee /etc/apt/sources.list.d/kubernetes.list"
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet kubeadm kubectl
sudo apt-get install -qy kubelet kubeadm kubectl &&
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
# Initialize kube master
pssh --timeout 200 "
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token
kubeadm token generate > /tmp/token &&
sudo kubeadm init --token \$(cat /tmp/token)
fi"
@@ -157,22 +139,65 @@ _cmd_kube() {
# Install weave as the pod network
pssh "
if grep -q node1 /tmp/node; then
kubever=\$(kubectl version | base64 | tr -d '\n')
kubever=\$(kubectl version | base64 | tr -d '\n') &&
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
fi"
# Join the other nodes to the cluster
pssh --timeout 200 "
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token) &&
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
fi"
# Install kubectx and kubens
pssh "
[ -d kubectx ] || git clone https://github.com/ahmetb/kubectx &&
sudo ln -sf /home/ubuntu/kubectx/kubectx /usr/local/bin/kctx &&
sudo ln -sf /home/ubuntu/kubectx/kubens /usr/local/bin/kns &&
sudo cp /home/ubuntu/kubectx/completion/*.bash /etc/bash_completion.d &&
[ -d kube-ps1 ] || git clone https://github.com/jonmosco/kube-ps1 &&
sudo -u docker sed -i s/docker-prompt/kube_ps1/ /home/docker/.bashrc &&
sudo -u docker tee -a /home/docker/.bashrc <<EOF
. /home/ubuntu/kube-ps1/kube-ps1.sh
KUBE_PS1_PREFIX=""
KUBE_PS1_SUFFIX=""
KUBE_PS1_SYMBOL_ENABLE="false"
KUBE_PS1_CTX_COLOR="green"
KUBE_PS1_NS_COLOR="green"
EOF"
# Install stern
pssh "
if [ ! -x /usr/local/bin/stern ]; then
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64 &&
sudo chmod +x /usr/local/bin/stern &&
stern --completion bash | sudo tee /etc/bash_completion.d/stern
fi"
# Install helm
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | sudo bash &&
helm completion bash | sudo tee /etc/bash_completion.d/helm
fi"
sep "Done"
}
_cmd kubetest "Check that all notes are reporting as Ready"
_cmd kubereset "Wipe out Kubernetes configuration on all nodes"
_cmd_kubereset() {
TAG=$1
need_tag
pssh "sudo kubeadm reset --force"
}
_cmd kubetest "Check that all nodes are reporting as Ready"
_cmd_kubetest() {
TAG=$1
need_tag
# There are way too many backslashes in the command below.
# Feel free to make that better ♥
pssh "
@@ -186,7 +211,7 @@ _cmd_kubetest() {
fi"
}
_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd ids "(FIXME) List the instance IDs belonging to a given tag or token"
_cmd_ids() {
TAG=$1
need_tag $TAG
@@ -199,262 +224,242 @@ _cmd_ids() {
aws_get_instance_ids_by_client_token $TAG
}
_cmd ips "List the IP addresses of the VMs for a given tag or token"
_cmd_ips() {
TAG=$1
need_tag $TAG
mkdir -p tags/$TAG
aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
link_tag $TAG
}
_cmd list "List available batches in the current region"
_cmd list "List available groups for a given infrastructure"
_cmd_list() {
info "Listing batches in region $AWS_DEFAULT_REGION:"
aws_display_tags
need_infra $1
infra_list
}
_cmd status "List instance status for a given batch"
_cmd_status() {
info "Using region $AWS_DEFAULT_REGION."
_cmd listall "List VMs running on all configured infrastructures"
_cmd_listall() {
for infra in infra/*; do
case $infra in
infra/example.*)
;;
*)
info "Listing infrastructure $infra:"
need_infra $infra
infra_list
;;
esac
done
}
_cmd netfix "Disable GRO and run a pinger job on the VMs"
_cmd_netfix () {
TAG=$1
need_tag $TAG
describe_tag $TAG
tag_is_reachable $TAG
info "You may be interested in running one of the following commands:"
info "$0 ips $TAG"
info "$0 deploy $TAG <settings/somefile.yaml>"
need_tag
pssh "
sudo ethtool -K ens3 gro off
sudo tee /root/pinger.service <<EOF
[Unit]
Description=pinger
[Install]
WantedBy=multi-user.target
[Service]
WorkingDirectory=/
ExecStart=/bin/ping -w60 1.1
User=nobody
Group=nogroup
Restart=always
EOF
sudo systemctl enable /root/pinger.service
sudo systemctl start pinger"
}
_cmd opensg "Open the default security group to ALL ingress traffic"
_cmd_opensg() {
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
need_infra $1
infra_opensg
}
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
_cmd pssh "Run an arbitrary command on all nodes"
_cmd_pssh() {
TAG=$1
need_tag
shift
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
pssh "$@"
}
_cmd pull_images "Pre-pull a bunch of Docker images"
_cmd_pull_images() {
TAG=$1
need_tag $TAG
pull_tag $TAG
need_tag
pull_tag
}
_cmd retag "Apply a new tag to a batch of VMs"
_cmd quotas "Check our infrastructure quotas (max instances)"
_cmd_quotas() {
need_infra $1
infra_quotas
}
_cmd retag "(FIXME) Apply a new tag to a group of VMs"
_cmd_retag() {
OLDTAG=$1
NEWTAG=$2
need_tag $OLDTAG
TAG=$OLDTAG
need_tag
if [[ -z "$NEWTAG" ]]; then
die "You must specify a new tag to apply."
fi
aws_tag_instances $OLDTAG $NEWTAG
}
_cmd start "Start a batch of VMs"
_cmd start "Start a group of VMs"
_cmd_start() {
# Number of instances to create
COUNT=$1
# Optional settings file (to carry on with deployment)
SETTINGS=$2
while [ ! -z "$*" ]; do
case "$1" in
--infra) INFRA=$2; shift 2;;
--settings) SETTINGS=$2; shift 2;;
--count) COUNT=$2; shift 2;;
--tag) TAG=$2; shift 2;;
*) die "Unrecognized parameter: $1."
esac
done
if [ -z "$INFRA" ]; then
die "Please add --infra flag to specify which infrastructure file to use."
fi
if [ -z "$SETTINGS" ]; then
die "Please add --settings flag to specify which settings file to use."
fi
if [ -z "$COUNT" ]; then
die "Indicate number of instances to start."
COUNT=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
warning "No --count option was specified. Using value from settings file ($COUNT)."
fi
# Check that the specified settings and infrastructure are valid.
need_settings $SETTINGS
need_infra $INFRA
# Print our AWS username, to ease the pain of credential-juggling
greet
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
key_name=$(sync_keys)
AMI=$(_cmd_ami) # Retrieve the AWS image ID
if [ -z "$AMI" ]; then
die "I could not find which AMI to use in this region. Try another region?"
if [ -z "$TAG" ]; then
TAG=$(make_tag)
fi
TOKEN=$(get_token) # generate a timestamp token for this batch of VMs
AWS_KEY_NAME=$(make_key_name)
sep "Starting instances"
info " Count: $COUNT"
info " Region: $AWS_DEFAULT_REGION"
info " Token/tag: $TOKEN"
info " AMI: $AMI"
info " Key name: $AWS_KEY_NAME"
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
--client-token $TOKEN \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
info "Reservation ID: $reservation_id"
sep
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TOKEN)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
TAG=$TOKEN
aws_tag_instances $TOKEN $TAG
wait_until_tag_is_running $TAG $COUNT
mkdir -p tags/$TAG
ln -s ../../$INFRA tags/$TAG/infra.sh
ln -s ../../$SETTINGS tags/$TAG/settings.yaml
echo creating > tags/$TAG/status
infra_start $COUNT
sep
info "Successfully created $COUNT instances with tag $TAG"
sep
echo created > tags/$TAG/status
mkdir -p tags/$TAG
IPS=$(aws_get_instance_ips_by_tag $TAG)
echo "$IPS" >tags/$TAG/ips.txt
link_tag $TAG
if [ -n "$SETTINGS" ]; then
_cmd_deploy $TAG $SETTINGS
else
info "To deploy or kill these instances, run one of the following:"
info "$0 deploy $TAG <settings/somefile.yaml>"
info "$0 stop $TAG"
fi
}
_cmd ec2quotas "Check our EC2 quotas (max instances)"
_cmd_ec2quotas() {
greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
info "Available regions:"
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
info "To deploy Docker on these instances, you can run:"
info "$0 deploy $TAG"
info "To terminate these instances, you can run:"
info "$0 stop $TAG"
}
_cmd stop "Stop (terminate, shutdown, kill, remove, destroy...) instances"
_cmd_stop() {
TAG=$1
need_tag $TAG
aws_kill_instances_by_tag $TAG
need_tag
infra_stop
echo stopped > tags/$TAG/status
}
_cmd test "Run tests (pre-flight checks) on a batch of VMs"
_cmd tags "List groups of VMs known locally"
_cmd_tags() {
(
cd tags
echo "[#] [Status] [Tag] [Infra]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
for tag in *; do
if [ -f $tag/ips.txt ]; then
count="$(wc -l < $tag/ips.txt)"
else
count="?"
fi
if [ -f $tag/status ]; then
status="$(cat $tag/status)"
else
status="?"
fi
if [ -f $tag/infra.sh ]; then
infra="$(basename $(readlink $tag/infra.sh))"
else
infra="?"
fi
echo "$count $status $tag $infra" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
done
)
}
_cmd test "Run tests (pre-flight checks) on a group of VMs"
_cmd_test() {
TAG=$1
need_tag $TAG
test_tag $TAG
need_tag
test_tag
}
###
# Sometimes, weave fails to come up on some nodes.
# Symptom: the pods on a node are unreachable (they don't even ping).
# Remedy: wipe out Weave state and delete weave pod on that node.
# Specifically, identify the weave pod that is defective, then:
# kubectl -n kube-system exec weave-net-XXXXX -c weave rm /weavedb/weave-netdata.db
# kubectl -n kube-system delete pod weave-net-XXXXX
_cmd weavetest "Check that weave seems properly setup"
_cmd_weavetest() {
TAG=$1
need_tag
pssh "
kubectl -n kube-system get pods -o name | grep weave | cut -d/ -f2 |
xargs -I POD kubectl -n kube-system exec POD -c weave -- \
sh -c \"./weave --local status | grep Connections | grep -q ' 1 failed' || ! echo POD \""
}
greet() {
IAMUSER=$(aws iam get-user --query 'User.UserName')
info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
}
link_tag() {
TAG=$1
need_tag $TAG
IPS_FILE=tags/$TAG/ips.txt
need_ips_file $IPS_FILE
ln -sf $IPS_FILE ips.txt
}
pull_tag() {
TAG=$1
need_tag $TAG
link_tag $TAG
if [ ! -s $IPS_FILE ]; then
die "Nonexistent or empty IPs file $IPS_FILE."
fi
# Pre-pull a bunch of images
pssh --timeout 900 'for I in \
debian:latest \
ubuntu:latest \
fedora:latest \
centos:latest \
elasticsearch:2 \
postgres \
redis \
alpine \
registry \
nicolaka/netshoot \
jpetazzo/trainingwheels \
golang \
training/namer \
dockercoins/hasher \
dockercoins/rng \
dockercoins/webui \
dockercoins/worker \
logstash \
prom/node-exporter \
google/cadvisor \
dockersamples/visualizer \
nathanleclaire/redisonrails; do
debian:latest \
ubuntu:latest \
fedora:latest \
centos:latest \
elasticsearch:2 \
postgres \
redis \
alpine \
registry \
nicolaka/netshoot \
jpetazzo/trainingwheels \
golang \
training/namer \
dockercoins/hasher \
dockercoins/rng \
dockercoins/webui \
dockercoins/worker \
logstash \
prom/node-exporter \
google/cadvisor \
dockersamples/visualizer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'
info "Finished pulling images for $TAG."
info "You may now want to run:"
info "$0 cards $TAG <settings/somefile.yaml>"
}
wait_until_tag_is_running() {
max_retry=50
TAG=$1
COUNT=$2
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
let "i += 1"
info "$(printf "%d/%d instances online" $done_count $COUNT)"
done_count=$(aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
"Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].State.Name" \
| tr "\t" "\n" \
| wc -l)
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
tag_is_reachable() {
TAG=$1
need_tag $TAG
link_tag $TAG
pssh -t 5 true 2>&1 >/dev/null
}
test_tag() {
TAG=$1
ips_file=tags/$TAG/ips.txt
info "Picking a random IP address in $ips_file to run tests."
n=$((1 + $RANDOM % $(wc -l <$ips_file)))
ip=$(head -n $n $ips_file | tail -n 1)
ip=$(shuf -n1 $ips_file)
test_vm $ip
info "Tests complete."
}
@@ -530,17 +535,9 @@ sync_keys() {
fi
}
get_token() {
make_tag() {
if [ -z $USER ]; then
export USER=anonymous
fi
date +%Y-%m-%d-%H-%M-$USER
}
describe_tag() {
# Display instance details and reachability/status information
TAG=$1
need_tag $TAG
aws_display_instances_by_tag $TAG
aws_display_instance_statuses_by_tag $TAG
}

prepare-vms/lib/infra.sh Normal file

@@ -0,0 +1,26 @@
# Default stub functions for infrastructure libraries.
# When loading an infrastructure library, these functions will be overridden.
infra_list() {
warning "infra_list is unsupported on $INFRACLASS."
}
infra_quotas() {
warning "infra_quotas is unsupported on $INFRACLASS."
}
infra_start() {
warning "infra_start is unsupported on $INFRACLASS."
}
infra_stop() {
warning "infra_stop is unsupported on $INFRACLASS."
}
infra_quotas() {
warning "infra_quotas is unsupported on $INFRACLASS."
}
infra_opensg() {
warning "infra_opensg is unsupported on $INFRACLASS."
}


@@ -0,0 +1,205 @@
infra_list() {
aws_display_tags
}
infra_quotas() {
greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
info "Available regions:"
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
}
infra_start() {
COUNT=$1
# Print our AWS username, to ease the pain of credential-juggling
greet
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
key_name=$(sync_keys)
AMI=$(aws_get_ami) # Retrieve the AWS image ID
if [ -z "$AMI" ]; then
die "I could not find which AMI to use in this region. Try another region?"
fi
AWS_KEY_NAME=$(make_key_name)
sep "Starting instances"
info " Count: $COUNT"
info " Region: $AWS_DEFAULT_REGION"
info " Token/tag: $TAG"
info " AMI: $AMI"
info " Key name: $AWS_KEY_NAME"
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
--client-token $TAG \
--block-device-mapping 'DeviceName=/dev/sda1,Ebs={VolumeSize=20}' \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
info "Reservation ID: $reservation_id"
sep
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TAG)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
aws_tag_instances $TAG $TAG
# Wait until EC2 API tells us that the instances are running
wait_until_tag_is_running $TAG $COUNT
aws_get_instance_ips_by_tag $TAG > tags/$TAG/ips.txt
}
infra_stop() {
aws_kill_instances_by_tag
}
infra_opensg() {
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
}
wait_until_tag_is_running() {
max_retry=50
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
let "i += 1"
info "$(printf "%d/%d instances online" $done_count $COUNT)"
done_count=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
"Name=instance-state-name,Values=running" \
--query "length(Reservations[].Instances[])")
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
aws_display_tags() {
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| uniq -c \
| sort -k 3 \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')
aws ec2 describe-instance-status \
--instance-ids $IDS \
--query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
--output table
}
aws_display_instances_by_tag() {
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "$result"
fi
}
aws_get_instance_ids_by_filter() {
FILTER=$1
aws ec2 describe-instances --filters $FILTER \
--query Reservations[*].Instances[*].InstanceId \
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}
aws_get_instance_ids_by_tag() {
aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}
aws_get_instance_ips_by_tag() {
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
IDS=$(aws_get_instance_ids_by_tag $TAG)
if [ -z "$IDS" ]; then
die "Invalid tag."
fi
info "Deleting instances with tag $TAG."
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
info "Deleted instances with tag $TAG."
}
aws_tag_instances() {
OLD_TAG_OR_TOKEN=$1
NEW_TAG=$2
IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}
aws_get_ami() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}


@@ -0,0 +1,8 @@
infra_start() {
COUNT=$1
info "You should now run your provisioning commands for $COUNT machines."
info "Note: no machines have been automatically created!"
info "Once done, put the list of IP addresses in tags/$TAG/ips.txt"
info "(one IP address per line, without any comments or extra lines)."
touch tags/$TAG/ips.txt
}


@@ -0,0 +1,20 @@
infra_start() {
COUNT=$1
cp terraform/*.tf tags/$TAG
(
cd tags/$TAG
terraform init
echo prefix = \"$TAG\" >> terraform.tfvars
echo count = \"$COUNT\" >> terraform.tfvars
terraform apply -auto-approve
terraform output ip_addresses > ips.txt
)
}
infra_stop() {
(
cd tags/$TAG
terraform destroy -auto-approve
)
}


@@ -31,7 +31,13 @@ while ips:
clusters.append(cluster)
template_file_name = SETTINGS["cards_template"]
template = jinja2.Template(open(template_file_name).read())
template_file_path = os.path.join(
os.path.dirname(__file__),
"..",
"templates",
template_file_name
)
template = jinja2.Template(open(template_file_path).read())
with open("ips.html", "w") as f:
f.write(template.render(clusters=clusters, **SETTINGS))
print("Generated ips.html")


@@ -13,6 +13,7 @@ COMPOSE_VERSION = config["compose_version"]
MACHINE_VERSION = config["machine_version"]
CLUSTER_SIZE = config["clustersize"]
ENGINE_VERSION = config["engine_version"]
DOCKER_USER_PASSWORD = config["docker_user_password"]
#################################
@@ -54,9 +55,9 @@ system("curl --silent {} > /tmp/ipv4".format(ipv4_retrieval_endpoint))
ipv4 = open("/tmp/ipv4").read()
# Add a "docker" user with password "training"
# Add a "docker" user with password coming from the settings
system("id docker || sudo useradd -d /home/docker -m -s /bin/bash docker")
system("echo docker:training | sudo chpasswd")
system("echo docker:{} | sudo chpasswd".format(DOCKER_USER_PASSWORD))
# Fancy prompt courtesy of @soulshake.
system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL
@@ -82,7 +83,7 @@ system("sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /e
system("sudo service ssh restart")
system("sudo apt-get -q update")
system("sudo apt-get -qy install git jq python-pip")
system("sudo apt-get -qy install git jq")
#######################
### DOCKER INSTALLS ###
@@ -97,7 +98,6 @@ system("sudo apt-get -q update")
system("sudo apt-get -qy install docker-ce")
### Install docker-compose
#system("sudo pip install -U docker-compose=={}".format(COMPOSE_VERSION))
system("sudo curl -sSL -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/{}/docker-compose-{}-{}".format(COMPOSE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-compose")
system("docker-compose version")


@@ -1,12 +1,17 @@
# This file can be sourced in order to directly run commands on
# a batch of VMs whose IPs are located in ips.txt of the directory in which
# a group of VMs whose IPs are located in ips.txt of the directory in which
# the command is run.
pssh() {
HOSTFILE="ips.txt"
if [ -z "$TAG" ]; then
>/dev/stderr echo "Variable \$TAG is not set."
return
fi
HOSTFILE="tags/$TAG/ips.txt"
[ -f $HOSTFILE ] || {
>/dev/stderr echo "No hostfile found at $HOSTFILE"
>/dev/stderr echo "Hostfile $HOSTFILE not found."
return
}
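A usage sketch, assuming this helper file (`rc`) is sourced from the `prepare-vms` directory (the tag below is just an example value):

```bash
. rc                               # load the pssh helper
export TAG=2016-09-28-00-33-bret   # example tag; pick one of yours
pssh "docker version"              # runs on every VM listed in tags/$TAG/ips.txt
```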


@@ -0,0 +1,25 @@
# Number of VMs per cluster
clustersize: 5
# Jinja2 template to use to generate ready-to-cut cards
cards_template: enix.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.22.0
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -22,3 +22,6 @@ engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.18.0
machine_version: 0.13.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -20,5 +20,8 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
compose_version: 1.22.0
machine_version: 0.15.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -4,7 +4,7 @@
clustersize: 3
# Jinja2 template to use to generate ready-to-cut cards
cards_template: settings/kube101.html
cards_template: kube101.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
@@ -22,3 +22,6 @@ engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -20,5 +20,8 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
compose_version: 1.22.0
machine_version: 0.15.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -85,7 +85,7 @@ img {
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>


Can't render this file because it contains an unexpected character in line 1 and column 42.


@@ -0,0 +1,117 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://septembre2018.container.training" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 30%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.3em;
}
img.enix {
height: 4.5em;
margin-top: 0.2em;
}
img.kube {
height: 4.2em;
margin-top: 1.7em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Voici les informations permettant de se connecter à votre
cluster pour cette formation. Vous pouvez vous connecter
à ces machines virtuelles avec n'importe quel client SSH.
</p>
<p>
<img class="enix" src="https://enix.io/static/img/logos/logo-domain-cropped.png" />
<table>
<tr><td>identifiant:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>mot de passe:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
Vos serveurs sont :
<img class="kube" src="{{ image_src }}" />
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>Le support de formation est à l'adresse suivante :
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>


@@ -85,7 +85,7 @@ img {
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>


@@ -0,0 +1,5 @@
resource "openstack_compute_keypair_v2" "ssh_deploy_key" {
name = "${var.prefix}"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}


@@ -0,0 +1,32 @@
resource "openstack_compute_instance_v2" "machine" {
count = "${var.count}"
name = "${format("%s-%04d", "${var.prefix}", count.index+1)}"
image_name = "Ubuntu 16.04.5 (Xenial Xerus)"
flavor_name = "${var.flavor}"
security_groups = ["${openstack_networking_secgroup_v2.full_access.name}"]
key_pair = "${openstack_compute_keypair_v2.ssh_deploy_key.name}"
network {
name = "${openstack_networking_network_v2.internal.name}"
fixed_ip_v4 = "${cidrhost("${openstack_networking_subnet_v2.internal.cidr}", count.index+10)}"
}
}
resource "openstack_compute_floatingip_v2" "machine" {
count = "${var.count}"
# This is something provided to us by Enix when our tenant was provisioned.
pool = "Public Floating"
}
resource "openstack_compute_floatingip_associate_v2" "machine" {
count = "${var.count}"
floating_ip = "${openstack_compute_floatingip_v2.machine.*.address[count.index]}"
instance_id = "${openstack_compute_instance_v2.machine.*.id[count.index]}"
fixed_ip = "${cidrhost("${openstack_networking_subnet_v2.internal.cidr}", count.index+10)}"
}
output "ip_addresses" {
value = "${join("\n", openstack_compute_floatingip_v2.machine.*.address)}"
}
variable "flavor" {}


@@ -0,0 +1,23 @@
resource "openstack_networking_network_v2" "internal" {
name = "${var.prefix}"
}
resource "openstack_networking_subnet_v2" "internal" {
name = "${var.prefix}"
network_id = "${openstack_networking_network_v2.internal.id}"
cidr = "10.10.0.0/16"
ip_version = 4
dns_nameservers = ["1.1.1.1"]
}
resource "openstack_networking_router_v2" "router" {
name = "${var.prefix}"
external_network_id = "15f0c299-1f50-42a6-9aff-63ea5b75f3fc"
}
resource "openstack_networking_router_interface_v2" "router_internal" {
router_id = "${openstack_networking_router_v2.router.id}"
subnet_id = "${openstack_networking_subnet_v2.internal.id}"
}


@@ -0,0 +1,13 @@
provider "openstack" {
user_name = "${var.user}"
tenant_name = "${var.tenant}"
domain_name = "${var.domain}"
password = "${var.password}"
auth_url = "${var.auth_url}"
}
variable "user" {}
variable "tenant" {}
variable "domain" {}
variable "password" {}
variable "auth_url" {}


@@ -0,0 +1,12 @@
resource "openstack_networking_secgroup_v2" "full_access" {
name = "${var.prefix} - full access"
}
resource "openstack_networking_secgroup_rule_v2" "full_access" {
direction = "ingress"
ethertype = "IPv4"
protocol = ""
remote_ip_prefix = "0.0.0.0/0"
security_group_id = "${openstack_networking_secgroup_v2.full_access.id}"
}


@@ -0,0 +1,8 @@
variable "prefix" {
type = "string"
}
variable "count" {
type = "string"
}


@@ -1,20 +1,19 @@
#!/bin/bash
# Get the script's real directory, whether we're being called directly or via a symlink
# Get the script's real directory.
# This should work whether we're being called directly or via a symlink.
if [ -L "$0" ]; then
export SCRIPT_DIR=$(dirname $(readlink "$0"))
else
export SCRIPT_DIR=$(dirname "$0")
fi
# Load all scriptlets
# Load all scriptlets.
cd "$SCRIPT_DIR"
for lib in lib/*.sh; do
. $lib
done
TRAINER_IMAGE="preparevms_prepare-vms"
DEPENDENCIES="
aws
ssh
@@ -25,49 +24,26 @@ DEPENDENCIES="
man
"
ENVVARS="
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
SSH_AUTH_SOCK
"
# Check for missing dependencies, and issue a warning if necessary.
missing=0
for dependency in $DEPENDENCIES; do
if ! command -v $dependency >/dev/null; then
warning "Dependency $dependency could not be found."
missing=1
fi
done
if [ $missing = 1 ]; then
warning "At least one dependency is missing. Install it or try the image wrapper."
fi
check_envvars() {
status=0
for envvar in $ENVVARS; do
if [ -z "${!envvar}" ]; then
error "Environment variable $envvar is not set."
if [ "$envvar" = "SSH_AUTH_SOCK" ]; then
error "Hint: run 'eval \$(ssh-agent) ; ssh-add' and try again?"
fi
status=1
fi
done
return $status
}
# Check if SSH_AUTH_SOCK is set.
# (If it's not, deployment will almost certainly fail.)
if [ -z "${SSH_AUTH_SOCK}" ]; then
warning "Environment variable SSH_AUTH_SOCK is not set."
warning "Hint: run 'eval \$(ssh-agent) ; ssh-add' and try again?"
fi
check_dependencies() {
status=0
for dependency in $DEPENDENCIES; do
if ! command -v $dependency >/dev/null; then
warning "Dependency $dependency could not be found."
status=1
fi
done
return $status
}
check_image() {
docker inspect $TRAINER_IMAGE >/dev/null 2>&1
}
check_envvars \
|| die "Please set all required environment variables."
check_dependencies \
|| warning "At least one dependency is missing. Install it or try the image wrapper."
# Now check which command was invoked and execute it
# Now check which command was invoked and execute it.
if [ "$1" ]; then
cmd="$1"
shift
@@ -77,6 +53,3 @@ fi
fun=_cmd_$cmd
type -t $fun | grep -q function || die "Invalid command: $cmd"
$fun "$@"
# export SSH_AUTH_DIRNAME=$(dirname $SSH_AUTH_SOCK)
# docker-compose run prepare-vms "$@"


@@ -29,6 +29,10 @@ class State(object):
self.interactive = True
self.verify_status = False
self.simulate_type = True
self.switch_desktop = False
self.sync_slides = False
self.open_links = False
self.run_hidden = True
self.slide = 1
self.snippet = 0
@@ -37,6 +41,10 @@ class State(object):
self.interactive = bool(data["interactive"])
self.verify_status = bool(data["verify_status"])
self.simulate_type = bool(data["simulate_type"])
self.switch_desktop = bool(data["switch_desktop"])
self.sync_slides = bool(data["sync_slides"])
self.open_links = bool(data["open_links"])
self.run_hidden = bool(data["run_hidden"])
self.slide = int(data["slide"])
self.snippet = int(data["snippet"])
@@ -46,6 +54,10 @@ class State(object):
interactive=self.interactive,
verify_status=self.verify_status,
simulate_type=self.simulate_type,
switch_desktop=self.switch_desktop,
sync_slides=self.sync_slides,
open_links=self.open_links,
run_hidden=self.run_hidden,
slide=self.slide,
snippet=self.snippet,
), f, default_flow_style=False)
@@ -122,14 +134,20 @@ class Slide(object):
def focus_slides():
if not state.switch_desktop:
return
subprocess.check_output(["i3-msg", "workspace", "3"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def focus_terminal():
if not state.switch_desktop:
return
subprocess.check_output(["i3-msg", "workspace", "2"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def focus_browser():
if not state.switch_desktop:
return
subprocess.check_output(["i3-msg", "workspace", "4"])
subprocess.check_output(["i3-msg", "workspace", "1"])
@@ -205,7 +223,7 @@ def check_exit_status():
def setup_tmux_and_ssh():
if subprocess.call(["tmux", "has-session"]):
logging.error("Couldn't connect to tmux. Please setup tmux first.")
ipaddr = open("../../prepare-vms/ips.txt").read().split("\n")[0]
ipaddr = "$IPADDR"
uid = os.getuid()
raise Exception("""
@@ -307,17 +325,21 @@ while True:
slide = slides[state.slide]
snippet = slide.snippets[state.snippet-1] if state.snippet else None
click.clear()
print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}]"
print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}] "
"[switch_desktop:{}] [sync_slides:{}] [open_links:{}] [run_hidden:{}]"
.format(state.slide, len(slides)-1,
state.snippet, len(slide.snippets) if slide.snippets else 0,
state.simulate_type, state.verify_status))
state.simulate_type, state.verify_status,
state.switch_desktop, state.sync_slides,
state.open_links, state.run_hidden))
print(hrule())
if snippet:
print(slide.content.replace(snippet.content, ansi(7)(snippet.content)))
focus_terminal()
else:
print(slide.content)
subprocess.check_output(["./gotoslide.js", str(slide.number)])
if state.sync_slides:
subprocess.check_output(["./gotoslide.js", str(slide.number)])
focus_slides()
print(hrule())
if state.interactive:
@@ -326,6 +348,10 @@ while True:
print("n/→ Next")
print("s Simulate keystrokes")
print("v Validate exit status")
print("d Switch desktop")
print("k Sync slides")
print("o Open links")
print("h Run hidden commands")
print("g Go to a specific slide")
print("q Quit")
print("c Continue non-interactively until next error")
@@ -341,6 +367,14 @@ while True:
state.simulate_type = not state.simulate_type
elif command == "v":
state.verify_status = not state.verify_status
elif command == "d":
state.switch_desktop = not state.switch_desktop
elif command == "k":
state.sync_slides = not state.sync_slides
elif command == "o":
state.open_links = not state.open_links
elif command == "h":
state.run_hidden = not state.run_hidden
elif command == "g":
state.slide = click.prompt("Enter slide number", type=int)
state.snippet = 0
@@ -366,7 +400,7 @@ while True:
logging.info("Running with method {}: {}".format(method, data))
if method == "keys":
send_keys(data)
elif method == "bash":
elif method == "bash" or (method == "hide" and state.run_hidden):
# Make sure that we're ready
wait_for_prompt()
# Strip leading spaces
@@ -405,11 +439,12 @@ while True:
screen = capture_pane()
url = data.replace("/node1", "/{}".format(IPADDR))
# This should probably be adapted to run on different OS
subprocess.check_output(["xdg-open", url])
focus_browser()
if state.interactive:
print("Press any key to continue to next step...")
click.getchar()
if state.open_links:
subprocess.check_output(["xdg-open", url])
focus_browser()
if state.interactive:
print("Press any key to continue to next step...")
click.getchar()
else:
logging.warning("Unknown method {}: {!r}".format(method, data))
move_forward()

View File

@@ -0,0 +1 @@
click

View File

@@ -1,309 +0,0 @@
# Pre-requirements
- Be comfortable with the UNIX command line
- navigating directories
- editing files
- a little bit of bash-fu (environment variables, loops)
- Some Docker knowledge
- `docker run`, `docker ps`, `docker build`
- ideally, you know how to write a Dockerfile and build it
<br/>
(even if it's just a `FROM` line and a couple of `RUN` commands; see the sketch after this list)
- It's totally OK if you are not a Docker expert!
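For instance, a Dockerfile of the kind mentioned above can be as small as this (a sketch; `figlet` is just an example package):

```bash
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update
RUN apt-get install -y figlet
EOF
docker build -t myfirstimage .
```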
---
class: title
*Tell me and I forget.*
<br/>
*Teach me and I remember.*
<br/>
*Involve me and I learn.*
Misattributed to Benjamin Franklin
[(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/)
---
## Hands-on sections
- The whole workshop is hands-on
- We are going to build, ship, and run containers!
- You are invited to reproduce all the demos
- All hands-on sections are clearly identified, like the gray rectangle below
.exercise[
- This is the stuff you're supposed to do!
- Go to @@SLIDES@@ to view these slides
- Join the chat room: @@CHAT@@
<!-- ```open @@SLIDES@@``` -->
]
---
class: in-person
## Where are we going to run our containers?
---
class: in-person, pic
![You get a cluster](images/you-get-a-cluster.jpg)
---
class: in-person
## You get a cluster of cloud VMs
- Each person gets a private cluster of cloud VMs (not shared with anybody else)
- They'll remain up for the duration of the workshop
- You should have a little card with login+password+IP addresses
- You can automatically SSH from one VM to another
- The nodes have aliases: `node1`, `node2`, etc.
---
class: in-person
## Why don't we run containers locally?
- Installing that stuff can be hard on some machines
(32-bit CPU or OS... laptops without administrator access... etc.)
- *"The whole team downloaded all these container images from the WiFi!
<br/>... and it went great!"* (Literally no-one ever)
- All you need is a computer (or even a phone or tablet!), with:
- an internet connection
- a web browser
- an SSH client
---
class: in-person
## SSH clients
- On Linux, OS X, FreeBSD... you are probably all set
- On Windows, get one of these:
- [putty](http://www.putty.org/)
- Microsoft [Win32 OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH)
- [Git BASH](https://git-for-windows.github.io/)
- [MobaXterm](http://mobaxterm.mobatek.net/)
- On Android, [JuiceSSH](https://juicessh.com/)
([Play Store](https://play.google.com/store/apps/details?id=com.sonelli.juicessh))
works pretty well
- Nice-to-have: [Mosh](https://mosh.org/) instead of SSH, if your internet connection tends to lose packets
---
class: in-person, extra-details
## What is this Mosh thing?
*You don't have to use Mosh or even know about it to follow along.
<br/>
We're just telling you about it because some of us think it's cool!*
- Mosh is "the mobile shell"
- It is essentially SSH over UDP, with roaming features
- It retransmits packets quickly, so it works great even on lossy connections
(Like hotel or conference WiFi)
- It has intelligent local echo, so it works great even on high-latency connections
(Like hotel or conference WiFi)
- It supports transparent roaming when your client IP address changes
(Like when you hop from hotel to conference WiFi)
---
class: in-person, extra-details
## Using Mosh
- To install it: `(apt|yum|brew) install mosh`
- It has been pre-installed on the VMs that we are using
- To connect to a remote machine: `mosh user@host`
(It is going to establish an SSH connection, then hand off to UDP)
- It requires UDP ports to be open
(By default, it uses a UDP port between 60000 and 61000)
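Putting it together (hostname is illustrative; `-p` pins the server-side UDP port, handy when a firewall only opens a few):

```bash
# Install Mosh, then connect; it starts over SSH and hands off to UDP.
sudo apt install mosh
mosh user@node1
# If only one UDP port is reachable, pin it explicitly:
mosh -p 60001 user@node1
```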
---
class: in-person
## Connecting to our lab environment
.exercise[
- Log into the first VM (`node1`) with your SSH client
<!--
```bash
for N in $(awk '/\Wnode/{print $2}' /etc/hosts); do
ssh -o StrictHostKeyChecking=no $N true
done
```
```bash
if which kubectl; then
kubectl get all -o name | grep -v service/kubernetes | xargs -rn1 kubectl delete
fi
```
-->
- Check that you can SSH (without password) to `node2`:
```bash
ssh node2
```
- Type `exit` or `^D` to come back to `node1`
<!-- ```bash exit``` -->
]
If anything goes wrong — ask for help!
---
## Doing or re-doing the workshop on your own?
- Use something like
[Play-With-Docker](http://play-with-docker.com/) or
[Play-With-Kubernetes](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
Zero setup effort; but environments are short-lived and
might have limited resources
- Create your own cluster (local or cloud VMs)
Small setup effort; small cost; flexible environments
- Create a bunch of clusters for you and your friends
([instructions](https://@@GITREPO@@/tree/master/prepare-vms))
Bigger setup effort; ideal for group training
---
class: self-paced
## Get your own Docker nodes
- If you already have some Docker nodes: great!
- If not: let's get some, thanks to Play-With-Docker
.exercise[
- Go to http://www.play-with-docker.com/
- Log in
- Create your first node
<!-- ```open http://www.play-with-docker.com/``` -->
]
You will need a Docker ID to use Play-With-Docker.
(Creating a Docker ID is free.)
---
## We will (mostly) interact with node1 only
*These remarks apply only when using multiple nodes, of course.*
- Unless instructed, **all commands must be run from the first VM, `node1`**
- We will only check out/copy the code on `node1`
- During normal operations, we do not need access to the other nodes
- If we had to troubleshoot issues, we would use a combination of the following (see the sketch after this list):
- SSH (to access system logs, daemon status...)
- Docker API (to check running containers and container engine status)
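A hedged sketch of what that could look like from `node1` (the `journalctl` unit name and the TCP endpoint are assumptions about the lab setup, not guarantees):

```bash
# System logs on another node, over SSH:
ssh node2 journalctl -u docker --since "10 minutes ago"
# Containers on another node, over the Docker API
# (only works if that engine listens on TCP, e.g. port 2375):
docker -H tcp://node2:2375 ps
```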
---
## Terminals
Once in a while, the instructions will say:
<br/>"Open a new terminal."
There are multiple ways to do this:
- create a new window or tab on your machine, and SSH into the VM;
- use screen or tmux on the VM and open a new window from there.
You are welcome to use the method that you feel the most comfortable with.
---
## Tmux cheatsheet
[Tmux](https://en.wikipedia.org/wiki/Tmux) is a terminal multiplexer like `screen`.
*You don't have to use it or even know about it to follow along.
<br/>
But some of us like to use it to switch between terminals.
<br/>
It has been preinstalled on your workshop nodes.*
- Ctrl-b c → creates a new window
- Ctrl-b n → go to next window
- Ctrl-b p → go to previous window
- Ctrl-b " → split window top/bottom
- Ctrl-b % → split window left/right
- Ctrl-b Alt-1 → rearrange windows in columns
- Ctrl-b Alt-2 → rearrange windows in rows
- Ctrl-b arrows → navigate to other windows
- Ctrl-b d → detach session
- tmux attach → reattach to session
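Putting a few of these together, a typical session lifecycle looks like this (the session name `workshop` is illustrative):

```bash
tmux new -s workshop      # start a new, named session
# ... split windows with Ctrl-b " or Ctrl-b %, detach with Ctrl-b d ...
tmux attach -t workshop   # later: reattach to the same session
```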

View File

@@ -1,11 +0,0 @@
class: title, self-paced
Thank you!
---
class: title, in-person
That's all, folks! <br/> Questions?
![end](images/end.jpg)

View File

@@ -1,3 +1,6 @@
class: title
# Advanced Dockerfiles
![construction](images/title-advanced-dockerfiles.jpg)

View File

@@ -107,9 +107,17 @@ class: pic
class: pic
## Two containers on a single Docker network
![bridge2](images/bridge2.png)
---
class: pic
## Two containers on two Docker networks
![bridge3](images/bridge2.png)
![bridge3](images/bridge3.png)
---

View File

@@ -312,7 +312,7 @@ CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```
(Source: [traininghweels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))
(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))
---

View File

@@ -1,3 +1,4 @@
class: title
# Getting inside a container

View File

@@ -1,3 +1,4 @@
class: title
# Installing Docker

View File

@@ -309,54 +309,6 @@ and *canary deployments*.
---
## Improving the workflow
The workflow that we showed is nice, but it requires us to:
* keep track of all the `docker run` flags required to run the container,
* inspect the `Dockerfile` to know which path(s) to mount,
* write scripts to hide that complexity.
There has to be a better way!
---
## Docker Compose to the rescue
* Docker Compose allows us to "encode" `docker run` parameters in a YAML file.
* Here is the `docker-compose.yml` file that we can use for our "namer" app:
```yaml
www:
build: .
volumes:
- .:/src
ports:
- 80:9292
```
* Try it:
```bash
$ docker-compose up -d
```
---
## Working with Docker Compose
* When you see a `docker-compose.yml` file, you can use `docker-compose up`.
* It can build images and run them with the required parameters (a by-hand equivalent is sketched below).
* Compose can also deal with complex, multi-container apps.
(More on this later!)
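For comparison, here is roughly what Compose does for the `namer` example above if we type it by hand (a sketch; the image and container names follow Compose's usual `project_service` convention and are illustrative):

```bash
# Build the image from the local Dockerfile, then run it with
# the volume and port mappings from the YAML file:
docker build -t namer_www .
docker run -d --name namer_www_1 -v "$PWD:/src" -p 80:9292 namer_www
```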
---
## Recap of the development workflow
1. Write a Dockerfile to build an image containing our development environment.

View File

@@ -24,7 +24,7 @@ Analogy: attaching to a container is like plugging a keyboard and screen to a ph
---
## Detaching from a container
## Detaching from a container (Linux/macOS)
* If you have started an *interactive* container (with option `-it`), you can detach from it.
@@ -41,6 +41,20 @@ What does `-it` stand for?
---
## Detaching cont. (Win PowerShell and cmd.exe)
* Docker for Windows has a different detach experience due to shell features.
* `^P^Q` does not work.
* `^C` will detach, rather than stop the container.
* Using Bash, the Windows Subsystem for Linux, etc. on Windows behaves like Linux/macOS shells.
* Both PowerShell and Bash work well in Win 10; just be aware of differences.
---
class: extra-details
## Specifying a custom detach sequence

View File

@@ -1,3 +1,4 @@
class: title
# Our training environment

View File

@@ -0,0 +1,164 @@
class: title
# Windows Containers
![Container with Windows](images/windows-containers.jpg)
---
## Objectives
At the end of this section, you will be able to:
* Understand Windows Containers vs. Linux Containers.
* Know about the Docker for Windows features for choosing a container architecture.
* Run other container architectures via QEMU emulation.
---
## Are containers *just* for Linux?
Remember that a container must run on the kernel of the OS it's on.
- This is both a benefit and a limitation.
(It makes containers lightweight, but limits them to a specific kernel.)
- At its launch in 2013, Docker only supported Linux, and only on amd64 CPUs.
- Since then, many platforms and OSes have been added.
(Windows, ARM, i386, IBM mainframes ... But no macOS or iOS yet!)
--
- Docker Desktop (macOS and Windows) can run containers for other architectures
(Check the docs to see how to [run a Raspberry Pi (ARM) or PPC container](https://docs.docker.com/docker-for-mac/multi-arch/)!)
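For instance, following the linked docs, something like this should run an ARM image on an amd64 machine via the built-in QEMU emulation (image name taken from those docs; exact output depends on your setup):

```bash
# Runs a 32-bit ARM image on Docker Desktop; QEMU does the translation.
docker run --rm arm32v7/debian uname -m
# Expected output: armv7l
```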
---
## History of Windows containers
- Early 2016, Windows 10 gained support for running Windows binaries in containers.
- These are known as "Windows Containers"
- Win 10 expects Docker for Windows to be installed for full features
- These must run in Hyper-V mini-VMs with a Windows Server x64 kernel
- No "scratch" containers, so use "Core" and "Nano" Server OS base layers
- Since Hyper-V is required, Windows 10 Home won't work (yet...)
--
- Late 2016, Windows Server 2016 shipped with native Docker support
- Installed via PowerShell, doesn't need Docker for Windows
- Can run native (without VM), or with [Hyper-V Isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)
---
## LCOW (Linux Containers On Windows)
While Docker on Windows is largely playing catch-up with Docker on Linux,
it's moving fast; and this is one thing that you *cannot* do on Linux!
- LCOW came with the [2017 Fall Creators Update](https://blog.docker.com/2018/02/docker-for-windows-18-02-with-windows-10-fall-creators-update/).
- It can run Linux and Windows containers side-by-side on Win 10.
- It is no longer necessary to switch the Engine to "Linux Containers".
(In fact, if you want to run both Linux and Windows containers at the same time,
make sure that your Engine is set to "Windows Containers" mode!)
--
If you are a Docker for Windows user, start your engine and try this:
```bash
docker pull microsoft/nanoserver:1803
```
(Make sure to switch to "Windows Containers mode" if necessary.)
---
## Run Both Windows and Linux containers
- Run a Windows Nano Server (minimal CLI-only server)
```bash
docker run --rm -it microsoft/nanoserver:1803 powershell
Get-Process
exit
```
- Run busybox on Linux in LCOW
```bash
docker run --rm --platform linux busybox echo hello
```
(Although you will not be able to see them, this will create hidden
Nano and LinuxKit VMs in Hyper-V!)
---
## Did We Say Things Move Fast?
- Things keep improving.
- `--platform` now defaults to `windows`, and some images support both:
- golang, mongo, python, redis, hello-world ... and more being added
- you should still use `--platform` with multi-OS images to be certain (example below)
- Windows Containers now support `localhost`-accessible containers (July 2018)
- Microsoft (April 2018) added Hyper-V support to Windows 10 Home ...
... so stay tuned for Docker support, maybe?!?
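A hedged example of being explicit (using `hello-world`, one of the multi-OS images listed above):

```bash
# Pull and run the Linux variant explicitly, even though this
# engine's default platform is windows:
docker pull --platform linux hello-world
docker run --rm --platform linux hello-world
```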
---
## Other Windows container options
Most "official" Docker images don't run on Windows yet.
Places to Look:
- Hub Official: https://hub.docker.com/u/winamd64/
- Microsoft: https://hub.docker.com/r/microsoft/
---
## SQL Server? Choice of Linux or Windows
- Microsoft [SQL Server for Linux 2017](https://hub.docker.com/r/microsoft/mssql-server-linux/) (amd64/linux)
- Microsoft [SQL Server Express 2017](https://hub.docker.com/r/microsoft/mssql-server-windows-express/) (amd64/windows)
---
## Windows Tools and Tips
- PowerShell [Tab Completion: DockerCompletion](https://github.com/matt9ucci/DockerCompletion)
- Best Shell GUI: [Cmder.net](http://cmder.net/)
- Good Windows container blogs and how-tos:
- Docker's DevRel [Elton Stoneman, Microsoft MVP](https://blog.sixeyed.com/)
- Docker Captain [Nicholas Dille](https://dille.name/blog/)
- Docker Captain [Stefan Scherer](https://stefanscherer.github.io/)

View File

@@ -51,7 +51,7 @@ for line in open(sys.argv[1]):
state.show()
for chapter in sorted(state.chapters):
for chapter in sorted(state.chapters, key=lambda f: int(f.split("-")[1])):
chapter_size = sum(state.sections[s] for s in state.chapters[chapter])
print("{}\t{}\t{}".format("total size for", chapter, chapter_size))

BIN slides/images/bridge1.png (normal file → executable file; binary not shown; 30 KiB → 97 KiB)

BIN slides/images/bridge2.png (normal file → executable file; binary not shown; 30 KiB → 119 KiB)

BIN slides/images/bridge3.png (new executable file; binary not shown; 137 KiB)

Some files were not shown because too many files have changed in this diff.