Compare commits

...

437 Commits

Author SHA1 Message Date
Jerome Petazzoni
687b61dbf4 fix-redirects.sh: adding forced redirect 2020-04-07 16:57:06 -05:00
Jerome Petazzoni
22f32ee4c0 Merge branch 'master' into qconsf2018 2018-11-09 02:25:18 -06:00
Jerome Petazzoni
9e051abb32 settings for 4 nodes cluster + two-sided card template 2018-11-09 02:25:00 -06:00
Jerome Petazzoni
ee3c2c3030 Merge branch 'ignore-preflight-errors' into qconsf2018 2018-11-09 02:23:53 -06:00
Jerome Petazzoni
45f9d7bf59 bump versions 2018-11-09 02:23:38 -06:00
Jerome Petazzoni
efb72c2938 Bump all the versions
Bump:
- stern
- Ubuntu

Also, each place where there is a 'bumpable' version, I added
a ##VERSION## marker, easily greppable.
2018-11-08 20:42:02 -06:00
Jerome Petazzoni
357d341d82 Ignore 'wrong Docker version' warning
For some reason, kubeadm doesn't want to deploy with Docker Engine 18.09.
Before, it would just issue a warning; but now apparently the warning blocks
the deployment. So... let's ignore the warning. (I've tested the content
and it works fine with Engine 18.09 as far as I can tell.)
2018-11-08 20:32:52 -06:00
Jerome Petazzoni
d4c338c62c Update prom slides for QCON preload 2018-11-07 23:08:51 -06:00
Bridget Kromhout
3ebcfd142b Merge pull request #394 from jpetazzo/halfday-fullday-twodays
Add kube-twodays.yml
2018-11-07 16:28:20 -05:00
Bridget Kromhout
6c5d049c4c Merge pull request #371 from bridgetkromhout/kubens
Clarify kubens
2018-11-07 16:27:08 -05:00
Bridget Kromhout
072ba44cba Merge pull request #395 from jpetazzo/add-links-to-whatsnext
Add links to what's next section
2018-11-07 16:25:29 -05:00
Bridget Kromhout
bc8a9dc4e7 Merge pull request #398 from jpetazzo/use-dockercoins-from-docker-hub
Add instructions to use the dockercoins/ images
2018-11-07 16:23:37 -05:00
Jerome Petazzoni
d35d186249 Merge branch 'master' into qconsf2018 2018-11-01 19:48:17 -05:00
Jerome Petazzoni
b1ba881eee Limit ElasticSearch RAM to 1 GB
Committing straight to master since this file
is not used by @bridgetkromhout, and people use
that file by cloning the repo (so it has to be
merged in master for people to see it).

HASHTAG YOLO
2018-11-01 19:48:06 -05:00
Jerome Petazzoni
6c8172d7b1 Merge branch 'work-around-kubectl-logs-bug' into qconsf2018 2018-11-01 19:45:45 -05:00
Jerome Petazzoni
d3fac47823 kubectl logs -l ... --tail ... is buggy.
(It always returns 10 lines of output instead
of the requested number.)

This works around the problem, by adding extra
explanations of the issue and providing a shell
function as a workaround.

See kubernetes/kubernetes#70554 for details.
2018-11-01 19:45:13 -05:00
Jerome Petazzoni
4f71074a06 Work around bug in kubectl logs
kubectl logs -l ... --tail ... is buggy.
(It always returns 10 lines of output instead
of the requested number.)

This works around the problem, by adding extra
explanations of the issue and providing a shell
function as a workaround.

See kubernetes/kubernetes#70554 for details.
2018-11-01 19:41:29 -05:00
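A minimal sketch of the kind of shell workaround described in the commit above (the exact function added to the slides may differ): tail each matching pod individually, since `--tail` behaves correctly when a single pod is targeted.

```
# Hypothetical helper, not the exact function from the slides:
# tail the last N lines of every pod matching a label selector.
logs_tail() {
  local selector=$1
  local lines=${2:-10}
  for pod in $(kubectl get pods -l "$selector" -o name); do
    kubectl logs "$pod" --tail="$lines"
  done
}
# Usage: logs_tail app=rng 5
```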
Jerome Petazzoni
37470fc5ed Merge branch 'use-dockercoins-from-docker-hub' into qconsf2018 2018-11-01 19:08:57 -05:00
Jerome Petazzoni
337a5d94ed Add instructions to use the dockercoins/ images
We have images on the Docker Hub for the various components
of dockercoins. Let's add one slide explaining how to use that,
for people who would be lost or would have issues with their
registry, so that they can catch up.
2018-11-01 19:08:40 -05:00
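For instance, attendees who can't use their own registry might deploy the pre-built images roughly like this (the v0.1 tag is an assumption, not necessarily what the slide shows):

```
# Use the pre-built dockercoins images from the Docker Hub
# instead of images pushed to a personal registry:
for component in rng hasher webui worker; do
  kubectl create deployment $component --image=dockercoins/$component:v0.1
done
```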
Jerome Petazzoni
98510f9f1c Setup qconsf2018 2018-11-01 16:10:03 -05:00
Jerome Petazzoni
6be0751147 Merge branch 'preinstall-helm-and-prometheus' into qconsf2018 2018-11-01 15:59:43 -05:00
Jerome Petazzoni
a40b291d54 Merge branch 'kubectl-create-deployment' into qconsf2018 2018-11-01 15:59:21 -05:00
Jerome Petazzoni
f24687e79f Merge branch 'jpetazzo-last-slide' into qconsf2018 2018-11-01 15:59:12 -05:00
Jerome Petazzoni
9f5f16dc09 Merge branch 'halfday-fullday-twodays' into qconsf2018 2018-11-01 15:59:03 -05:00
Jerome Petazzoni
9a5989d1f2 Merge branch 'enixlogo' into qconsf2018 2018-11-01 15:58:55 -05:00
Jerome Petazzoni
43acccc0af Add command to preinstall Helm and Prometheus
In some cases, I would like Prometheus to be pre-installed (so that
it shows a bunch of metrics) without relying on people doing it (and
setting up Helm correctly). This patch allows running:


./workshopctl helmprom TAG

It will set up Helm with a proper service account, then deploy
the Prometheus chart, disabling the alert manager and persistence,
and assigning the Prometheus server to NodePort 30090.

This command is idempotent.
2018-11-01 15:35:09 -05:00
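A hedged sketch of what such a command might run under the hood (chart, release name, and flag names are assumptions, not the exact workshopctl implementation; idempotence handling is omitted):

```
# Give Helm (v2) a service account with sufficient rights, then
# install/upgrade the Prometheus chart with the alert manager and
# persistence disabled, exposing the server on NodePort 30090.
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
        --clusterrole=cluster-admin \
        --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
helm upgrade --install prometheus stable/prometheus \
     --set alertmanager.enabled=false \
     --set server.persistentVolume.enabled=false \
     --set server.service.type=NodePort \
     --set server.service.nodePort=30090
```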
Jerome Petazzoni
4a447c7bf5 Clarify further kubens vs kns 2018-11-01 13:48:00 -05:00
Jerome Petazzoni
b9de73d0fd Address deprecation of 'kubectl run'
kubectl run is being deprecated as a multi-purpose tool.
This PR replaces 'kubectl run' with 'kubectl create deployment'
in most places (except in the very first example, to reduce the
cognitive load; and when we really want a single-shot container).

It also updates the places where we use a 'run' label, since
'kubectl create deployment' uses the 'app' label instead.

NOTE: this hasn't gone through end-to-end testing yet.
2018-11-01 01:25:26 -05:00
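An illustrative example of the substitution (not a literal excerpt from the slides):

```
# Before (deprecated generator, uses the 'run' label):
kubectl run redis --image=redis
kubectl get pods -l run=redis
# After (uses the 'app' label):
kubectl create deployment redis --image=redis
kubectl get pods -l app=redis
```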
Jerome Petazzoni
6b9b83a7ae Add link to my private training intake form 2018-10-31 22:50:41 -05:00
Jerome Petazzoni
3f7675be04 Add links to what's next section
For each concept that is present in the full-length tutorial,
I added a link to the corresponding chapter in the final section,
so that people who liked the short version can get similarly
presented info from the longer version.
2018-10-30 17:24:27 -05:00
Jerome Petazzoni
b4bb9e5958 Update QCON entries (jpetazzo is delivering twice) 2018-10-30 16:47:44 -05:00
Jerome Petazzoni
9a6160ba1f Add kube-twodays.yml
kube-fullday is now suitable for one-day tutorials
kube-twodays is now suitable for two-day tutorials

I also tweaked (added a couple of line breaks) so that line
numbers would be aligned on all kube-...yml files.
2018-10-30 16:42:43 -05:00
Bridget Kromhout
1d243b72ec adding vel eu 2018 k8s101 slides
adding vel eu 2018 k8s101 slides
2018-10-30 14:15:44 +01:00
Jerome Petazzoni
c5c1ccaa25 Merge branch 'BretFisher-win-containers-101' 2018-10-29 20:38:21 -05:00
Jerome Petazzoni
b68afe502b Minor formatting/typo edits 2018-10-29 20:38:01 -05:00
Jerome Petazzoni
d18cacab4c Merge branch 'win-containers-101' of git://github.com/BretFisher/container.training into BretFisher-win-containers-101 2018-10-29 19:59:53 -05:00
Bret Fisher
2faca4a507 docker101 fixing titles 2018-10-30 01:53:31 +01:00
Jerome Petazzoni
d797ec62ed Merge branch 'BretFisher-swarm-cicd' 2018-10-29 19:48:59 -05:00
Jerome Petazzoni
a475d63789 add CI/CD slides to self-paced deck as well 2018-10-29 19:48:33 -05:00
Jerome Petazzoni
dd3f2d054f Merge branch 'swarm-cicd' of git://github.com/BretFisher/container.training into BretFisher-swarm-cicd 2018-10-29 19:46:38 -05:00
Bridget Kromhout
73594fd505 Merge pull request #384 from BretFisher/patch-18
swarm workshop at goto canceled 😭
2018-10-26 11:35:53 -05:00
Bret Fisher
16a1b5c6b5 swarm workshop at goto canceled 😭 2018-10-26 07:57:50 +01:00
Bret Fisher
ff7a257844 adding cicd to swarm half day 2018-10-26 07:52:32 +01:00
Bret Fisher
77046a8ddf fixed suggestions 2018-10-26 07:51:09 +01:00
Bret Fisher
3ca696f059 size update from docker docs 2018-10-23 16:27:25 +02:00
Bret Fisher
305db76340 more sizing tweaks 2018-10-23 16:27:25 +02:00
Bret Fisher
b1672704e8 clear up swarm sizes and manager+worker setups
Lots of people will have ~5-10 servers, so let's give them more detailed info.
2018-10-23 16:27:25 +02:00
Jerome Petazzoni
c058f67a1f Add diagram for dockercoins 2018-10-23 16:25:19 +02:00
Alexandre Buisine
ab56c63901 switch to an up to date version with latest cloud-init binary and multinic patch 2018-10-23 16:22:56 +02:00
Bret Fisher
a5341f9403 Add common Windows/macOS hidden files to gitignore 2018-10-17 19:11:37 +02:00
Laurent Grangeau
b2bdac3384 Typo 2018-10-04 18:02:01 +02:00
Bridget Kromhout
a2531a0c63 making sure two-day events still show up
Because we rebuilt today, the two-day events disappeared from the front page. @jpetazzo this is a temporary fix to make them still show up.
2018-09-30 22:07:03 -04:00
Bridget Kromhout
84e2b90375 Update index.yaml
adding slides
2018-09-30 22:05:01 -04:00
Bridget Kromhout
9639dfb9cc Merge pull request #368 from jpetazzo/kube-ps1
kube-ps1 is cool and we should mention it
2018-09-30 20:55:00 -04:00
Bridget Kromhout
8722de6da2 Update namespaces.md 2018-09-30 20:54:31 -04:00
Bridget Kromhout
f2f87e52b0 Merge pull request #373 from bridgetkromhout/bridget-links
Updating Bridget's links
2018-09-30 20:53:26 -04:00
Bridget Kromhout
56ad2845e7 Updating Bridget's links 2018-09-30 20:52:24 -04:00
Bridget Kromhout
f23272d154 Clarify kubens 2018-09-30 20:32:10 -04:00
Bridget Kromhout
86e35480a4 Wording edits 2018-10-01 02:14:50 +02:00
Jerome Petazzoni
1020a8ff86 kube-ps1 is cool and we should mention it 2018-09-30 17:43:18 -05:00
Bridget Kromhout
20b1079a22 Update whatsnext.md
typo fix
2018-09-30 16:48:29 -04:00
Bridget Kromhout
f090172413 Merge pull request #365 from jpetazzo/cleanup-after-netpol
Clean up network policies
2018-09-29 21:37:59 -05:00
Jerome Petazzoni
e4251cfa8f Clean up network policies
We should tell people to clean up network policies at the end
of the chapter, otherwise further exercises will fail.
2018-09-29 20:39:32 -05:00
Jerome Petazzoni
b6dd55b21c Use loop4 instead of loop0 2018-09-29 20:16:35 -05:00
Jerome Petazzoni
53d1a68765 Adapt autopilot for new deployment scripts 2018-09-29 20:15:38 -05:00
Jerome Petazzoni
f01bc2a7a9 Fix overlapping slide number and pics 2018-09-29 18:54:00 -05:00
Jerome Petazzoni
156ce67413 Update CNC script 2018-09-29 18:44:03 -05:00
Jerome Petazzoni
e372850b06 Merge branch 'master' of github.com:jpetazzo/container.training 2018-09-29 10:06:24 -05:00
Jerome Petazzoni
f543b54426 Prepare deployment scripts for Ubuntu 18.04
This adds a few features:
- ./workshopctl kubereset TAG (closes #306)
- remove python-setuptools (prepare for #353)
- ./workshopctl weavetest TAG (help detecting weave issues
  like we had at OSCON, July 2018)
- remove a bit of dead code
2018-09-29 10:06:20 -05:00
Bret Fisher
35614714c8 added portainer setup and gui options 2018-09-29 16:54:42 +02:00
Bret Fisher
100c6b46cf oops, updated slide versions 2018-09-29 16:53:59 +02:00
Bret Fisher
36ccaf7ea4 update compose/machine versions in swarm nodes 2018-09-29 16:53:59 +02:00
Bridget Kromhout
4a655db1ba Merge pull request #362 from jpetazzo/kubectl-run-deprecation
Add explanation about the kubectl run deprecation warning
2018-09-28 21:34:11 -05:00
Bridget Kromhout
2a80586504 Merge pull request #361 from jpetazzo/kubens-and-kubectx
Add a couple of slides about kubens and kubectx
2018-09-28 21:34:03 -05:00
Bridget Kromhout
0a942118c1 Update kubectlrun.md
slight wording change
2018-09-28 21:32:23 -05:00
Jerome Petazzoni
2f1ad67fb3 Add explanation about the kubectl run deprecation warning 2018-09-28 20:54:11 -05:00
Jerome Petazzoni
4b0ac6d0e3 Add a couple of slides about kubens and kubectx 2018-09-28 19:49:08 -05:00
Jerome Petazzoni
ac273da46c Merge branch 'master' of github.com:jpetazzo/container.training 2018-09-28 19:35:41 -05:00
Jerome Petazzoni
7a6594c96d Update container.training index 2018-09-28 19:35:35 -05:00
Bret Fisher
657b7465c6 updating bridge network diags 2018-09-29 02:18:03 +02:00
Bret Fisher
08059a845f remove compose teaser 2018-09-29 02:16:52 +02:00
Jerome Petazzoni
24e2042c9d Explain why revocation is important 2018-09-28 19:14:07 -05:00
Jerome Petazzoni
9771f054ea Add slide about lack of cert revocation 2018-09-28 19:04:57 -05:00
Jerome Petazzoni
5db4e2adfa Merge branch 'master' of github.com:jpetazzo/container.training 2018-09-28 18:49:00 -05:00
Jerome Petazzoni
bde5db49a7 Bump a few more k8s version numbers from 1.11 to 1.12 2018-09-28 18:48:52 -05:00
Jerome Petazzoni
7c6b2730f5 Bump up EBS size to 20G for Portworx 2018-09-29 01:39:07 +02:00
Jerome Petazzoni
7f6a15fbb7 Actually modify the prompt 2018-09-29 01:39:07 +02:00
Bridget Kromhout
d97b1e5944 Slight modifications to current docs/scripts 2018-09-29 01:39:07 +02:00
Jerome Petazzoni
1519196c95 Add kubectx, kubens, kube_ps1
kubectx and kubens are added as kctl and kns (to avoid clashing with
completion for kubectl). Their completion is added too (so you can
do 'kns kube-sy[TAB]' to switch to kube-system).

kube_ps1 is added and enabled. The default prompt for the docker
user now shows the current context and namespace.
2018-09-29 01:39:07 +02:00
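A rough sketch of the kind of shell setup described above (the install path and completion wiring are assumptions; the provisioning scripts may do this differently):

```
# Short aliases that don't clash with kubectl's completion:
alias kctl=kubectx
alias kns=kubens
# (the kubectx/kubens bash completion files are sourced as well,
#  so that e.g. 'kns kube-sy<TAB>' completes to kube-system)

# kube-ps1 shows the current context/namespace in the prompt:
source /opt/kube-ps1/kube-ps1.sh   # install path is an assumption
PS1='$(kube_ps1) \u@\h:\w\$ '
```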
Jerome Petazzoni
f8629a2689 Massive refactoring of workshopctl
This allows managing groups of VMs across multiple infrastructure
providers. It also adds support for creating groups of VMs on OpenStack.

WARNING: the syntax of workshopctl has changed slightly. Check READMEs
for details.
2018-09-29 01:39:07 +02:00
Jerome Petazzoni
fadecd52ee Replace registry:2 with registry
The registry image used to be registry v1, but now it defaults to v2.
We can therefore drop the tag.
2018-09-28 18:36:29 -05:00
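In practice the examples can simply drop the explicit tag, e.g.:

```
# Before:
docker run -d -p 5000:5000 registry:2
# After (the image now defaults to the v2 registry):
docker run -d -p 5000:5000 registry
```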
Jerome Petazzoni
524d6e4fc1 Minor updates to load balancing example 2018-09-28 18:31:39 -05:00
Bridget Kromhout
51f5f5393c Merge pull request #356 from bridgetkromhout/link-update
Updating links
2018-09-28 16:49:41 -05:00
Bridget Kromhout
f574afa9d2 Updating links 2018-09-28 16:46:10 -05:00
Bridget Kromhout
4f49015a6e Link to experimental multi-master 2018-09-28 23:42:55 +02:00
Bridget Kromhout
f25d12b53d Merge pull request #354 from bridgetkromhout/versions-update
Updating versions
2018-09-28 16:29:00 -05:00
Bridget Kromhout
78259c3eb6 Clarifying version 2018-09-28 16:28:20 -05:00
Bridget Kromhout
adc922e4cd Updating versions 2018-09-28 16:25:38 -05:00
Bridget Kromhout
f68194227c Update whatsnext.md
Typo fix, and clarity since it's not always being delivered by only one person.
2018-09-28 23:16:24 +02:00
Jerome Petazzoni
29a3ce0ba2 Update last chapter (what's next) 2018-09-28 23:16:24 +02:00
Bridget Kromhout
e5fe27dd54 Merge pull request #352 from jpetazzo/remove-netpol-slides-from-ns
Remove network policies blurb from namespaces chapter
2018-09-28 15:17:51 -05:00
Jerome Petazzoni
6016ffe7d7 Add hidden link to pre-game video 2018-09-28 13:43:21 -05:00
Jerome Petazzoni
7c94a6f689 Remove network policies blurb from namespaces chapter
There is now a dedicated chapter about network policies, so
the two very rough slides on that topic should be removed
from the namespaces chapter.
2018-09-28 13:34:26 -05:00
Bridget Kromhout
5953ffe10b Merge pull request #350 from BretFisher/win-detach-note
adding slide about PowerShell detaching
2018-09-28 08:11:20 -05:00
Bridget Kromhout
3016019560 Update Start_And_Attach.md
slight edits for clarity
2018-09-28 08:10:12 -05:00
Bridget Kromhout
0d5da73c74 Merge pull request #339 from jpetazzo/replace-es-with-httpenv
Replace ElasticSearch with jpetazzo/httpenv
2018-09-28 08:05:15 -05:00
Bret Fisher
91c835fcb4 adding slide about PowerShell detaching 2018-09-28 00:20:03 -04:00
Bret Fisher
d01ae0ff39 initial Windows Container pack 2018-09-27 07:13:03 -04:00
Thomas Gerbet
63b85da4f6 Add missing link to storage in Prometheus 2 talk 2018-09-22 12:56:58 +02:00
Maxime Deravet
2406e72210 use https to clone git repo 2018-09-22 12:54:43 +02:00
Jerome Petazzoni
32e1edc2a2 Long slide is long 2018-09-21 09:08:58 +02:00
Jerome Petazzoni
84225e982f Merge branch 'Julien-Eyraud-fix-kaniko-build' 2018-09-19 14:01:24 -05:00
Jerome Petazzoni
e76a06e942 Merge branch 'fix-kaniko-build' of git://github.com/Julien-Eyraud/container.training into Julien-Eyraud-fix-kaniko-build 2018-09-19 14:01:02 -05:00
Nicolas Gavalda
0519682c30 Fix small typo 2018-09-18 18:50:41 +02:00
Jérôme Petazzoni
91f7a81964 Merge branch 'master' into fix-kaniko-build 2018-09-18 18:49:13 +02:00
Nicolas Schwartz
a66fcaf04c Update kaniko-build.yaml
Fix option
2018-09-18 18:48:01 +02:00
Julien Eyraud
9a0649e671 Change postgresql mount path 2018-09-18 17:42:10 +02:00
Julien Eyraud
d23ad0cd8f Fix kaniko-build.yaml to use insecure registry 2018-09-18 16:05:05 +02:00
Jerome Petazzoni
63755c1cd3 Minor fixes 2018-09-16 15:35:23 -05:00
Jerome Petazzoni
149cf79615 Add ENIX cluster files 2018-09-16 12:49:33 -05:00
Jerome Petazzoni
a627128570 Set EFK UID to 0 (fixes #325) 2018-09-16 10:58:10 -05:00
Jerome Petazzoni
91e3078d2e Better error checking + GRO fix 2018-09-16 09:10:14 -05:00
Jerome Petazzoni
31dd943141 Typo 2018-09-16 09:09:08 -05:00
Jerome Petazzoni
3866701475 Fix postgres data volume 2018-09-16 09:08:23 -05:00
Jerome Petazzoni
521f8e9889 More typo fixes courtesy of @abuisine 2018-09-15 11:11:08 -05:00
Jerome Petazzoni
49c3fdd3b2 Minor updates (thanks @abuisine) 2018-09-15 11:03:24 -05:00
Jerome Petazzoni
4bb6a49ee0 Typo fix (thanks @sload) 2018-09-15 10:45:37 -05:00
Jerome Petazzoni
db8e8377ac Replace ElasticSearch with jpetazzo/httpenv
ElasticSearch slowly uses up to 2GB of RAM.
Eventually, on instances provisioned with
only 4GB of RAM and without swap, if more
than one ElasticSearch pod ends up on the
same instance, it will cause the instance
to slow down and ultimately crash. Instead,
we now use a tiny Go web server that shows
its environment in JSON. It still highlights
that multiple backends are serving requests
but without the memory usage issue.
2018-09-12 15:49:27 -05:00
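Illustrative commands for the replacement demo (names and the port are assumptions based on the image's defaults):

```
# jpetazzo/httpenv is a tiny Go web server that dumps its
# environment as JSON; it listens on port 8888 by default.
kubectl create deployment httpenv --image=jpetazzo/httpenv
kubectl scale deployment httpenv --replicas=10
kubectl expose deployment httpenv --port=8888
curl "$(kubectl get svc httpenv -o jsonpath='{.spec.clusterIP}'):8888"
```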
Jerome Petazzoni
510a37be44 Rebalance chapter 3/4 2018-09-12 00:15:54 -05:00
Jerome Petazzoni
230bd73597 Update versions 2018-09-11 14:37:04 -05:00
Jerome Petazzoni
7217c0ee1d Typos and fixes for autopilot
There is no significant change to the *content* here, but a lot
of typo fixes and commands added so that the autopilot works
correctly.
2018-09-11 01:41:56 -05:00
Jerome Petazzoni
77d455d894 Sort chapters numerically in slides counter 2018-09-09 17:56:27 -05:00
Jerome Petazzoni
4f9c8275d9 Incorporate Bridget's feedback 2018-09-08 09:55:01 -05:00
Bridget Kromhout
f11aae2514 Update accessinternal.md
slight changes
2018-09-08 09:55:01 -05:00
Jerome Petazzoni
f1e9efc38c Explain how to access internal services
By using kubectl proxy and kubectl port-forward
2018-09-08 09:55:01 -05:00
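A minimal sketch of the two access methods mentioned above (service name, namespace, and ports are illustrative):

```
# Reach an internal service through the API server proxy:
kubectl proxy &
curl localhost:8001/api/v1/namespaces/default/services/webui/proxy/

# Or forward a local port directly to the service:
kubectl port-forward svc/webui 8080:80
```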
Bridget Kromhout
975cc4f7df Merge pull request #332 from jpetazzo/new-content-sep-2018
New content for sep 2018 (MERGE CANDIDATE)
2018-09-08 09:03:20 -05:00
Bridget Kromhout
01243280a2 Update configuration.md 2018-09-08 08:56:26 -05:00
Bridget Kromhout
e652c3639d Merge pull request #336 from jpetazzo/deeper-in-netpol
Deeper in netpol
2018-09-08 08:53:30 -05:00
Bridget Kromhout
1e0954d9b4 Update netpol.md
slight corrections
2018-09-08 08:49:37 -05:00
Jerome Petazzoni
bb21f9bbc9 Improvements following Bridget's feedback 2018-09-08 08:45:16 -05:00
Bridget Kromhout
25466e7950 Merge pull request #334 from jpetazzo/localkubeconfig
Show how to use kubectl from the local machine
2018-09-08 08:45:16 -05:00
Jerome Petazzoni
78026ff9b8 Integrate new content
I've dispatched the new content so that the fullday training
(actually two days, don't let the file name distract you)
is broken down into 8 chapters of approximately equal length,
where the most complex content is preferably located at the
end of each chapter (to allow people to catch up and ask questions
during breaks), plus 1 chapter with the what's next / links / thank you
slides.
2018-09-08 08:23:54 -05:00
Jerome Petazzoni
60c7ef4e53 Merge branch 'master' into new-content-sep-2018 2018-09-08 07:57:41 -05:00
Jerome Petazzoni
55952934ed Add tarmak in deployment options 2018-09-08 07:56:16 -05:00
Jerome Petazzoni
3eaa844c55 Add ENIX logo
Warning: do not merge this branch into your content, otherwise you
will get the ENIX logo in the top right of all your decks
2018-09-08 07:49:38 -05:00
Jerome Petazzoni
f9d31f4c30 merge 2018-09-08 07:32:14 -05:00
Jerome Petazzoni
ec037e422b Clarify 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
73f66f25d8 Rephrase to avoid confusion 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
28174b6cf9 Oops, fixing bad conflict resolve 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
a80c095a07 Put netpol file in the right directory 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
374574717d Clarify network policies
Add clarification re/ pod-to-pod traffic.
Explain that it's stateful (which most people would expect anyway).
2018-09-08 07:20:31 -05:00
Jerome Petazzoni
efce5d1ad4 Add a short chapter about network policies
I will then expand this chapter to add examples showing
how to isolate namespaces; but let's start with that.
2018-09-08 07:20:31 -05:00
Jerome Petazzoni
4eec91a9e6 Merge branch 'new-content-sep-2018' of github.com:jpetazzo/container.training into new-content-sep-2018 2018-09-08 07:16:56 -05:00
Jerome Petazzoni
57166f33aa Prometheus chapter 2018-09-08 07:16:28 -05:00
Bridget Kromhout
f1ebb1f0fb slight corrections 2018-09-06 11:05:17 -05:00
Bridget Kromhout
8182e4df96 Update portworx.md
Slight corrections for clarity
2018-09-06 10:56:59 -05:00
Bridget Kromhout
6f3580820c Update gitworkflows.md
slight corrections
2018-09-06 10:42:59 -05:00
Bridget Kromhout
7b7fd2a4b4 Merge pull request #329 from jpetazzo/kubectlproxy
Revamp section about kubectl proxy
2018-09-06 10:37:17 -05:00
Jerome Petazzoni
f74addd0ca Add short section with Flux and Gitkube
These sections are not as detailed as usual, but we
intend to show what's possible with git-based workflows.
2018-09-06 07:55:42 -05:00
Jerome Petazzoni
21ba3b7713 Incorporate Bridget's feedback 2018-09-06 02:12:47 -05:00
Jerome Petazzoni
4eca15f822 typo 2018-09-06 01:49:54 -05:00
Bridget Kromhout
4205f619cf Merge pull request #333 from BretFisher/patch-16
adding my next few workshops, I forgets!
2018-09-05 23:31:25 -05:00
Bridget Kromhout
c3dff823ef Update index.yaml
We use `:` as a delimiter and so need to quote text using it.
2018-09-05 23:29:49 -05:00
Bret Fisher
39876d1388 adding my next few workshops, I forgets! 2018-09-05 21:09:13 -04:00
Bridget Kromhout
7e34aa0287 Merge pull request #330 from jpetazzo/move-yaml-to-repo
Add YAML to repo; remove goo.gl links
2018-09-05 09:21:14 -05:00
Bridget Kromhout
3bdafed38e Merge pull request #331 from jpetazzo/preinstall-helm-and-stern
Pre-install Stern and Helm
2018-09-05 09:17:51 -05:00
Jerome Petazzoni
3d438ff304 Add kubectl auth can-i ... 2018-09-05 02:49:49 -05:00
Jerome Petazzoni
bcd1f37085 Add healthchecks
Explain liveness and readiness probes.
No lab yet.
2018-09-04 16:23:38 -05:00
Jerome Petazzoni
ba928e59fc Add ingress section
- Explain ingress resources
- Show how to deploy Traefik
- Use hostNetwork in the process
- Explain taints and tolerations while we're here
2018-09-04 08:40:58 -05:00
Jerome Petazzoni
62c01ef7d6 Add acknowlegement slide for Portworx/Katacoda 2018-09-03 13:00:30 -05:00
Jerome Petazzoni
a71347e328 Add owners and dependents
And explain how to find orphan resources.
2018-09-03 11:16:54 -05:00
Jerome Petazzoni
f235cfa13c Hint about upcoming dynamic provisioning section 2018-09-03 06:16:24 -05:00
Jerome Petazzoni
45b397682b One more note about storage systems 2018-09-03 06:15:41 -05:00
Jerome Petazzoni
858ad02973 Add notes about dynamic provisioning 2018-09-03 06:08:43 -05:00
Jerome Petazzoni
defeef093d Add dynamic provisioning and PostgreSQL example
In this section, we set up Portworx to provide a dynamic provisioner.
Then we use it to deploy a PostgreSQL Stateful Set.
Finally we simulate a node failure and observe the failover.
2018-09-03 05:47:21 -05:00
Jerome Petazzoni
b45615e2c3 Mention @jessfraz's img 2018-09-02 10:40:17 -05:00
Jerome Petazzoni
b158babb7f Stateful Sets
- explain the reason why we have stateful sets
- explain the relationship between volumes, persistent volumes,
  persistent volume claims, volume claim templates
- show how to run a Consul cluster with a stateful set
2018-09-02 08:51:03 -05:00
Jerome Petazzoni
59b7386b91 Add authentication and authorization 2018-09-01 09:40:30 -05:00
Jerome Petazzoni
c05bcd23d9 Tons of new chapters! Excitement!
- volumes (general overview)
- building with the docker engine (bind-mounting the docker socket)
- building with kaniko (and init containers)
- managing configuration (configmaps, downward api)

Also added a new-content.yml file with just the new content
(for easier review), containing my plans for future chapters.
2018-08-31 03:27:15 -05:00
Jerome Petazzoni
3cb91855c8 Pre-install Stern and Helm
The commands to install Stern and Helm aren't super exciting,
so let's pre-install these tools. That way, we also generate
completion for them. We still give installation instructions
just in case, but this saves time for more important stuff.
2018-08-28 07:21:43 -05:00
Jerome Petazzoni
dc0850ef3e Expand the network policy section 2018-08-27 11:36:46 -05:00
Jerome Petazzoni
ffdd7fda45 Add YAML to repo; remove goo.gl links
We load a few YAML files from goo.gl links. To avoid bad
surprises, we're moving these YAML files to the repository.
2018-08-27 07:04:01 -05:00
Jerome Petazzoni
83b2133573 Oops, fixing bad conflict resolve 2018-08-23 04:56:22 -05:00
Jerome Petazzoni
d04856f964 Show how to use kubectl from the local machine 2018-08-22 09:22:59 -05:00
Jerome Petazzoni
8373d5302f Revamp section about kubectl proxy 2018-08-21 08:08:19 -05:00
Jerome Petazzoni
7d7cb0eadb Put netpol file in the right directory 2018-08-21 04:21:39 -05:00
Jerome Petazzoni
c00c87f8f2 Clarify network policies
Add clarification re/ pod-to-pod traffic.
Explain that it's stateful (which most people would expect anyway).
2018-08-21 04:21:17 -05:00
Jerome Petazzoni
f599462ad7 Add a short chapter about network policies
I will then expand this chapter to add examples showing
how to isolate namespaces; but let's start with that.
2018-08-21 04:21:17 -05:00
Jerome Petazzoni
018282f392 slides: rename directories
This was discussed and agreed in #246. It will probably break a few
outstanding PRs as well as a few external links, but it's for the
greater good in the long term.
2018-08-21 04:03:38 -05:00
Jerome Petazzoni
23b3c1c05a Last tweaks so that autopilot passes 2018-08-20 14:58:00 -05:00
Jerome Petazzoni
62686d0b7a Miscellaneous fixes for autopilot
These changes are only for the autopilot test harness.
They add hidden commands and keystrokes but don't affect
the content of the slides.
2018-08-20 14:15:06 -05:00
Jerome Petazzoni
54288502a2 autopilot: add support for hidden commands 2018-08-20 10:22:01 -05:00
Jerome Petazzoni
efc045e40b autopilot: put a bunch of features behind flags
We don't always need to track slides, switch desktops, and open links.
(These things are not necessary when we're purely testing the labs.)
All these features are now behind boolean flags saved in the state file.
2018-08-20 08:31:47 -05:00
Bridget Kromhout
6e9b16511f Cloud-agnostic; mentioning multiple clouds 2018-08-19 10:07:52 -05:00
Jerome Petazzoni
81b6e60a8c Merge branch 'master' of github.com:jpetazzo/container.training 2018-08-18 11:13:45 -05:00
Jerome Petazzoni
5baaf7e00a Fixes #327 2018-08-18 11:13:39 -05:00
Jérôme Petazzoni
d4d460397f Mention progressDeadlineSeconds
@abuisine ran through the whole deck recently, taking the long route each time it was possible; and he noticed that another field had to be removed when transforming the Deployment into a DaemonSet.
2018-08-15 04:08:31 -05:00
Bridget Kromhout
f66b6b2ee3 Slight edits (#326) 2018-08-15 04:07:42 -05:00
Jérôme Petazzoni
fb7f7fd8c8 Expand the brief logging/metrics slide
Thanks to @abuisine for reminding me that Heapster is going through a deprecation cycle.

I'm also expanding these two slides to be a bit more useful and relevant.
2018-08-15 04:07:42 -05:00
Jérôme Petazzoni
dc98fa21a9 Add explanations for a failure mode in logging (#324)
* Add explanations for a failure mode in logging

Thanks @abuisine for reporting that one too!

* Typo
2018-08-15 04:04:18 -05:00
Jerome Petazzoni
6b662d3e4c Add QCON workshops 2018-08-15 03:09:22 -05:00
Tim Bell
7069682c8e Update Dockerfile_Tips.md (#321)
Fix typo
2018-08-08 08:40:06 -05:00
Katie McLaughlin
3b1d5b93a8 Update pwk link (#319) 2018-08-02 06:22:42 -05:00
Maxime Deravet
611fe55e90 Allow to configure docker password using the settings file (#317) 2018-07-31 08:24:16 -05:00
Jerome Petazzoni
481272ac22 Add fallback when non-standard strftime is not supported
Closes #301

Thanks @petertang2012
2018-07-27 06:07:11 -05:00
Bridget Kromhout
9069e2d7db Merge pull request #318 from bridgetkromhout/add-vel-uk
Add Velocity UK
2018-07-26 18:35:04 -05:00
Bridget Kromhout
1144c16a4c Add Velocity UK 2018-07-26 18:33:49 -05:00
Bridget Kromhout
9b2846633c Merge pull request #315 from jpetazzo/clarify-kubeadm
Clarify usage of kubeadm
2018-07-20 15:42:31 -07:00
Jérôme Petazzoni
db88c0a5bf Clarify usage of kubeadm
Thanks to @robcz for the inspiration for that one!
2018-07-17 11:55:20 -05:00
Jérôme Petazzoni
28863728c2 Update rollout, new defaults are 25%/25% for MaxSurge and MaxUnavailable (#314) 2018-07-17 10:54:45 -05:00
Bridget Kromhout
dc341da813 Merge pull request #309 from bridgetkromhout/slight-updates
Slight updates for 1.11
2018-07-16 18:58:00 -05:00
Bridget Kromhout
1d210ad808 Merge pull request #3 from jpetazzo/slighter-updates
Slighter updates
2018-07-16 18:28:20 -05:00
Jerome Petazzoni
76d9adadf5 'until 1.10' is ambiguous, try to be more explicit 2018-07-16 18:25:30 -05:00
Jerome Petazzoni
065371fa99 Merge branch 'bridgetkromhout-slight-updates' into slighter-updates 2018-07-16 18:12:45 -05:00
Jerome Petazzoni
e45f21454e Update a couple of references to kube-dns; and cosmetic tweaks 2018-07-16 18:09:50 -05:00
Bridget Kromhout
4d8c13b0bf AKS name change 2018-07-16 18:09:50 -05:00
Bridget Kromhout
5e6b38e8d1 Replace kube-dns with CoreDNS 2018-07-16 18:09:50 -05:00
Bridget Kromhout
5dd2b6313e coredns instead of kube-dns 2018-07-16 18:09:50 -05:00
Bridget Kromhout
96bf00c59b Switching from get to use kubectl api-resources 2018-07-16 18:09:50 -05:00
Bridget Kromhout
065310901f This info isn't shown anymore by kubectl get 2018-07-16 18:09:50 -05:00
Jerome Petazzoni
103261ea35 Update a couple of references to kube-dns; and cosmetic tweaks 2018-07-16 18:07:07 -05:00
Jerome Petazzoni
c6fb6f30af Merge branch 'slight-updates' of git://github.com/bridgetkromhout/container.training into bridgetkromhout-slight-updates 2018-07-16 17:48:56 -05:00
Bridget Kromhout
134d24e23b AKS name change 2018-07-16 15:08:07 -07:00
Jerome Petazzoni
8a8e97f6e2 Add Jerome's training, September in Paris 2018-07-16 16:42:25 -05:00
Bridget Kromhout
29c1bc47d4 Replace kube-dns with CoreDNS 2018-07-16 13:53:27 -07:00
Bridget Kromhout
8af5a10407 coredns instead of kube-dns 2018-07-16 13:45:26 -07:00
Bridget Kromhout
8e9991a860 Switching from get to use kubectl api-resources 2018-07-16 13:38:28 -07:00
Bridget Kromhout
8ba5d6d736 This info isn't shown anymore by kubectl get 2018-07-16 13:32:53 -07:00
Bridget Kromhout
b3d1e2133d Merge pull request #308 from bridgetkromhout/add-oscon
Add oscon slides
2018-07-15 13:24:46 -05:00
Bridget Kromhout
b3cf30f804 Add oscon slides 2018-07-15 13:23:33 -05:00
Bridget Kromhout
b845543e5f Merge pull request #305 from bridgetkromhout/list-msp-slides
Adding slides link
2018-07-10 18:08:52 -05:00
Bridget Kromhout
1b54470046 Adding slides link 2018-07-10 18:04:35 -05:00
Bridget Kromhout
ee2b20926c Merge pull request #302 from bridgetkromhout/version-1.11.0
Version bump
2018-07-10 06:18:30 -05:00
Bridget Kromhout
96a76d2a19 Version bump 2018-07-10 06:17:07 -05:00
Bridget Kromhout
78ac91fcd5 Merge pull request #300 from bridgetkromhout/add-msp
Adding MSP 2018
2018-07-10 05:46:23 -05:00
Bridget Kromhout
971b5b0e6d Let's not link quite yet 2018-07-10 05:45:22 -05:00
Bridget Kromhout
3393563498 Adding MSP 2018 2018-07-06 16:11:37 -05:00
Bridget Kromhout
94483ebfec Merge pull request #298 from jpetazzo/improve-index-format
Switch to two-line format since our titles are so long
2018-07-06 15:43:01 -05:00
Jerome Petazzoni
db5d5878f5 Switch to two-line format since our titles are so long 2018-07-03 10:47:41 -05:00
ctas582
2585daac9b Force rng to be single threaded (#293) 2018-06-28 08:20:54 -05:00
Bridget Kromhout
21043108b3 Merge pull request #296 from bridgetkromhout/version-up
Version bump
2018-06-27 01:14:06 -05:00
Bridget Kromhout
65faa4507c Version bump 2018-06-27 08:12:40 +02:00
Bridget Kromhout
644f2b9c7a Merge pull request #295 from bridgetkromhout/add-slides-ams
Adding slides link for ams
2018-06-26 17:04:27 -05:00
Bridget Kromhout
dab9d9fb7e Adding slides link 2018-06-27 00:03:18 +02:00
Diego Quintana
139757613b Update Container_Networking_Basics.md
Added needed single quotes. I've also moved `nginx` to the end of the line, to follow a more consistent syntax  (`options` before `name|id`).

```
Usage:	docker inspect [OPTIONS] NAME|ID [NAME|ID...]

Return low-level information on Docker objects

Options:
  -f, --format string   Format the output using the given Go template
  -s, --size            Display total file sizes if the type is container
      --type string     Return JSON for specified type
```
2018-06-22 10:58:26 -05:00
Bridget Kromhout
10eed2c1c7 Merge pull request #288 from ctas582/typos
Correct typos
2018-06-22 09:21:56 -05:00
ctas582
c4fa75a1da Correct typos 2018-06-21 15:00:36 +01:00
ctas582
847140560f Correct typo 2018-06-21 14:16:05 +01:00
ctas582
1dc07c33ab Correct typos 2018-06-20 11:19:28 +01:00
Bridget Kromhout
4fc73d95c0 Merge pull request #285 from bridgetkromhout/vupdate
Updating version
2018-06-12 10:14:21 -07:00
Bridget Kromhout
690ed55953 Updating version 2018-06-12 10:12:04 -07:00
Bridget Kromhout
16a5809518 Merge pull request #284 from bridgetkromhout/add-vel-2day
Adding Erik and Brian's two-day Velocity training to the front page
2018-06-12 09:01:32 -07:00
Bridget Kromhout
0fed34600b Adding Erik and Brian's two-day 2018-06-12 08:55:53 -07:00
Jerome Petazzoni
2d95f4177a Remove extraneous python invocation 2018-06-12 04:25:00 -05:00
Bridget Kromhout
e9d1db56fa Adding VelNY bootcamp (#283)
* Adding VelNY bootcamp

* Colon not good here
2018-06-12 04:09:54 -05:00
Bridget Kromhout
a076a766a9 Merge pull request #282 from bridgetkromhout/reorder
Reordering upcoming events
2018-06-11 09:47:57 -07:00
Bridget Kromhout
be3c78bf54 Reordering 2018-06-11 09:40:30 -07:00
Bridget Kromhout
5bb6b8e2ab Merge pull request #281 from bridgetkromhout/add-velocity-sj-2018
Adding Velocity SJ 2018
2018-06-11 09:08:35 -07:00
Bridget Kromhout
f79193681d Adding Velocity SJ 2018 2018-06-11 08:53:53 -07:00
Bridget Kromhout
379ae69db5 Merge pull request #277 from bridgetkromhout/rollout-failure
Clarifying rollout failure via dashboard
2018-06-11 08:34:36 -07:00
Jerome Petazzoni
cde89f50a2 Add mention to skip slide if dashboard isn't deployed 2018-06-10 17:07:56 -05:00
Bridget Kromhout
98563ba1ce Clarifying rollout failure via dashboard 2018-06-04 20:58:57 -05:00
Bridget Kromhout
99bf8cc39f Merge pull request #271 from jpetazzo/new-index-generator
Replace index.html with a generator
2018-06-05 02:13:27 +02:00
Bridget Kromhout
ea642cf90e Merge pull request #274 from bridgetkromhout/eng-v
bumping version
2018-06-04 23:28:48 +02:00
Bridget Kromhout
a7d89062cf Bumping engine version 2018-06-04 15:43:30 -05:00
Bridget Kromhout
564e4856b4 Merge branch 'master' of https://github.com/jpetazzo/container.training 2018-06-04 14:41:07 -05:00
Bridget Kromhout
011cd08af3 Merge pull request #269 from jpetazzo/kubectlproxy
Show how to access internal services with kubectl proxy
2018-06-04 21:40:40 +02:00
Jerome Petazzoni
e294a4726c Update version numbers 2018-06-04 08:47:30 -05:00
Jerome Petazzoni
a21e8b0849 Image and title size fixes 2018-06-04 06:11:00 -05:00
Jerome Petazzoni
cc6f36b50f Wording (non-native speakers probably don't know boo-boo) 2018-06-04 05:54:02 -05:00
Jerome Petazzoni
6e35162788 Remove 'kubernetes in action' demo 2018-06-04 05:50:21 -05:00
Jerome Petazzoni
30ca940eeb Opt-out a bunch of slides in the deep dive section 2018-06-04 05:49:24 -05:00
Jerome Petazzoni
14eb19a42b Typo fixes 2018-06-04 05:43:28 -05:00
Jerome Petazzoni
da053ecde2 Update fundamentals TOC 2018-06-03 15:27:27 -05:00
Jerome Petazzoni
c86ef7de45 Add 'past workshops' page and backfill 2016-2017 workshops 2018-06-03 09:55:43 -05:00
Jérôme Petazzoni
c5572020b9 Add a few slides about resource limits (#273)
The section about namespaces and cgroups is very thorough,
but we also need something showing how to practically
limit container resource usage without diving into a very
deep technical chapter.
2018-06-03 05:28:16 -05:00
Jerome Petazzoni
3d7ed3a3f7 Clarify how to stop kubectl proxy 2018-06-03 05:10:48 -05:00
Bridget Kromhout
138163056f Merge pull request #270 from jpetazzo/kubectl-create-namespace
Show an easier way to create namespaces
2018-06-02 17:12:38 +02:00
Alexis Daboville
5e78e00bc9 Small typos (#272)
* Small typo

* elastichsearch -> elasticsearch

* realeased -> released
2018-06-02 09:09:38 -05:00
Jerome Petazzoni
2cb06edc2d Replace index.html with a generator
The events are now listed in index.yaml, and generated
with index.py. The latter is called automatically by
build.sh.

The list of events has been slightly improved:
- we only show the last 5 past events
- video recordings now get a section of their own
2018-05-31 14:22:23 -05:00
Jerome Petazzoni
8915bfb443 Update README section indicating 'teacher for hire' 2018-05-31 12:55:09 -05:00
Jerome Petazzoni
24017ad83f Clarify usage of <<< 2018-05-29 11:06:31 -05:00
Jerome Petazzoni
3edebe3747 New script to count slides
count-slides.py will count the number of slides per section,
and compute the size of each chapter as well. It is not perfect
(for instance, it assumes that excluded_classes=in_person)
but it should help assess the size of the content before
delivering long workshops.
2018-05-29 10:03:11 -05:00
Jerome Petazzoni
636a2d5c87 Show an easier way to create namespaces
We were using 'kubectl apply' with a YAML snippet.
It's valid, but it's quite convoluted. Instead,
let's use 'kubectl create namespace'. We can still
mention the other method of course.
2018-05-29 05:53:12 -05:00
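For comparison, a minimal sketch of both methods (namespace name is a placeholder):

```
# Convoluted but valid: apply a YAML snippet...
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: blue
EOF

# ...or simply:
kubectl create namespace blue
```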
Jerome Petazzoni
4213aba76e Show how to access internal services with kubectl proxy 2018-05-29 05:47:27 -05:00
Jerome Petazzoni
3e822bad82 Add a slide about JSON file and log rotation 2018-05-28 10:28:52 -05:00
Jerome Petazzoni
cd5b06b9c7 Show how to connect/disconnect dynamically 2018-05-28 10:08:11 -05:00
Jerome Petazzoni
b0841562ea Add a bunch of Dockerfile examples 2018-05-25 09:31:50 -05:00
Jerome Petazzoni
06f70e8246 Add 'tree' in the VMs
This is a convenient tool to get an idea of what a
directory hierarchy looks like.
2018-05-24 07:06:21 -05:00
Jerome Petazzoni
9614f8761a Add link to Serge Hallyn blog post 2018-05-24 06:03:28 -05:00
Jerome Petazzoni
92f9ab9001 Add a section leading to multi-stage builds 2018-05-24 05:46:28 -05:00
Bridget Kromhout
ad554f89fc New events (and old event to past) 2018-05-23 15:31:07 -05:00
Jerome Petazzoni
5bb37dff49 Parametrize git repo and slides URLs
We have two extra variables in the slides:
@@GITREPO@@ (current value: github.com/jpetazzo/container.training)
@@SLIDES@@ (current value: http://container.training/)

These variables are set with gitrepo and slides in the YAML files.
(Just like the chat variable.)

Supersedes #256
2018-05-23 15:27:57 -05:00
Bridget Kromhout
0d52dc2290 Merge pull request #267 from jasonknudsen/patch-1
Update README.md - typo
2018-05-23 10:22:05 -05:00
Bridget Kromhout
c575cb9cd5 New events (and old event to past) 2018-05-23 10:18:02 -05:00
jasonknudsen
9cdccd40c7 Update README.md - typo
Typo in instructions - should be pull_images not pull-images
2018-05-23 08:17:46 -07:00
Bret Fisher
fdd10c5a98 fix docker-compose scale up change (#265) 2018-05-18 10:10:06 -05:00
mkrupczak3
8a617fdbc7 change "alpine telnet" to "busybox telnet"
Newer versions of alpine may not include telnet
2018-05-18 10:01:41 -05:00
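e.g. the quick connectivity test becomes something like this (host and port are placeholders):

```
# busybox still ships a telnet applet, unlike newer alpine images:
docker run -it --rm busybox telnet example.com 80
```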
Jerome Petazzoni
a058a74d8f Minor fix for hidden autopilot command 2018-05-18 09:16:34 -05:00
Bret Fisher
4896a3265e Update volume chapter 2018-05-18 08:08:33 -05:00
Bret Fisher
131947275c Improve explanation about images and layers 2018-05-18 08:08:27 -05:00
Bret Fisher
1b7e8cec5e Update info about Docker for Mac/Windows 2018-05-18 08:08:20 -05:00
Bret Fisher
c17c0ea9aa Remove obsolete MAINTAINER command 2018-05-18 08:08:08 -05:00
Bridget Kromhout
7b378d2425 Merge pull request #264 from bridgetkromhout/master
Moving NDC to past
2018-05-14 06:56:23 -05:00
Bridget Kromhout
47da7d8278 Moving NDC to past 2018-05-14 06:53:08 -05:00
Bridget Kromhout
3c69941fcd Merge pull request #262 from bridgetkromhout/craft-past
Craft to past
2018-05-10 07:38:44 -05:00
Bridget Kromhout
beb188facf Craft to past 2018-05-10 07:36:30 -05:00
Bridget Kromhout
dfea8f6535 Merge pull request #258 from bridgetkromhout/add-ndc
Adding NDC Minnesota
2018-05-08 21:37:43 -05:00
Bridget Kromhout
3b89149bf0 Adding NDC Minnesota 2018-05-08 21:34:53 -05:00
Bret Fisher
c8d73caacd move visualizer to service and stack (#237) 2018-05-08 10:51:40 -05:00
Jérôme Petazzoni
290185f16b Merge pull request #255 from eightlimbed/patch-1
fixed a typo
2018-05-07 13:52:40 -05:00
Jérôme Petazzoni
05e9d36eed Merge pull request #254 from mkrupczak3/master
Fix typo create network to network create
2018-05-07 13:51:12 -05:00
Jérôme Petazzoni
05815fcbf3 Merge pull request #240 from BretFisher/settings-update
updated versions, renamed files
2018-05-07 13:15:34 -05:00
Lee Gaines
bce900a4ca fixed a typo
changed "contain" to "contained" in the first bullet point
2018-05-06 21:49:43 -07:00
mkrupczak3
bf7ba49013 Fix typo create network to network create 2018-05-05 16:55:22 -04:00
Bret Fisher
323aa075b3 removing settings feature teaser 2018-05-05 12:54:20 -04:00
Jérôme Petazzoni
f526014dc8 Merge pull request #253 from BretFisher/ingress-graphics
swarm ingress images and updates
2018-05-05 06:39:13 -05:00
Jérôme Petazzoni
dec546fa65 Merge pull request #252 from BretFisher/patch-15
update docker-compose scale command
2018-05-05 06:36:53 -05:00
Jérôme Petazzoni
36390a7921 Merge pull request #251 from BretFisher/swarm-3-nodes
moving to 3 node swarms by default
2018-05-05 06:35:45 -05:00
Jérôme Petazzoni
313d705778 Merge pull request #248 from BretFisher/fundamentals-cnm-updates
more fundamentals CNM tweaks
2018-05-05 06:20:06 -05:00
Jérôme Petazzoni
ca34efa2d7 Merge pull request #247 from BretFisher/patch-13
adding more images to cache
2018-05-05 05:49:52 -05:00
Jérôme Petazzoni
25e92cfe39 Merge pull request #245 from BretFisher/patch-12
more new features for swarm
2018-05-05 05:46:07 -05:00
Jérôme Petazzoni
999359e81a Update versions.md 2018-05-05 05:45:40 -05:00
Jérôme Petazzoni
3a74248746 Merge pull request #244 from BretFisher/patch-11
a bit more detail on network drivers included
2018-05-05 05:41:10 -05:00
Jérôme Petazzoni
cb828ecbd3 Update Container_Network_Model.md 2018-05-05 05:41:01 -05:00
Jérôme Petazzoni
e1e984e02d Merge pull request #243 from BretFisher/patch-10
Updating some compose info for devs
2018-05-05 05:40:10 -05:00
Jérôme Petazzoni
d6e19fe350 Update Compose_For_Dev_Stacks.md 2018-05-05 05:39:25 -05:00
Jérôme Petazzoni
1f91c748b5 Merge pull request #242 from BretFisher/check-for-entr-in-build
Friendly error if entr isn't installed for build.sh
2018-05-05 05:30:05 -05:00
Bret Fisher
38356acb4e swarm ingress images and updates 2018-05-04 13:00:49 -04:00
Bret Fisher
7b2d598c38 fix my fat fingers.
ugg, sorry, editing via github and I need to go to bed :)
2018-05-04 00:20:31 -04:00
Bret Fisher
c276eb0cfa remove fat finger 2018-05-04 00:19:35 -04:00
Bret Fisher
571de591ca update docker-compose scale command
scale command is now legacy, use `--scale` option instead
2018-05-04 00:18:58 -04:00
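i.e. something along these lines (service name is illustrative):

```
# Legacy form (now deprecated):
docker-compose scale worker=5
# Current form, using the --scale option of 'up':
docker-compose up -d --scale worker=5
```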
Bret Fisher
e49a197fd5 moving to 3 node swarms by default 2018-05-03 23:52:51 -04:00
Bret Fisher
a30eabc23a more fundamentals CNM tweaks 2018-05-03 19:28:39 -04:00
Bret Fisher
73c4cddba5 forgot one image :/ 2018-05-03 16:32:12 -04:00
Bret Fisher
6e341f770a adding more images to cache
Based on images used in swarm and fundamentals workshops
2018-05-03 16:24:54 -04:00
Bridget Kromhout
527145ec81 Merge pull request #241 from BretFisher/patch-8
date updates for container.training
2018-05-03 18:19:36 +02:00
Bret Fisher
c93edceffe more new features for swarm 2018-05-02 23:25:12 -04:00
Bret Fisher
6f9eac7c8e a bit more detail on network drivers included 2018-05-02 23:21:45 -04:00
Bret Fisher
522420ef34 Updating some compose info for devs 2018-05-02 23:18:19 -04:00
Bret Fisher
927bf052b0 Friendly error if entr isn't installed for build.sh 2018-05-02 23:08:52 -04:00
Bret Fisher
1e44689b79 swarm versions 2018-05-02 23:00:55 -04:00
Bret Fisher
b967865faa date updates for container.training 2018-05-02 22:24:12 -04:00
Bret Fisher
054c0cafb2 updated versions, renamed files 2018-05-02 17:43:08 -04:00
Jérôme Petazzoni
29e37c8e2b Merge pull request #235 from KMASubhani/patch-1
Update Getting_Inside.md
2018-04-25 23:33:24 -05:00
Jérôme Petazzoni
44fc2afdc7 Merge pull request #239 from BretFisher/fix-stack-deploy-cmd
reordering stack deploy cmd format
2018-04-25 23:29:58 -05:00
Jérôme Petazzoni
7776c8ee38 Merge pull request #238 from BretFisher/fix-detach-false
remove more unneeded detach=false
2018-04-25 23:27:54 -05:00
Bret Fisher
9ee7e1873f reordering stack deploy cmd format 2018-04-25 16:33:38 -05:00
Bret Fisher
e21fcbd1bd remove more unneeded detach=false 2018-04-25 16:26:28 -05:00
Bret Fisher
cb407e75ab make CI/CD common for all courses 2018-04-25 14:27:32 -05:00
Bret Fisher
27d4612449 a note about ci/cd with docker 2018-04-25 14:26:02 -05:00
Bret Fisher
43ab5f79b6 a note about ci/cd with docker 2018-04-25 14:23:40 -05:00
Khaja Mashood Ahmed Subhani
5852ab513d Update Getting_Inside.md
fixed spelling
2018-04-25 11:00:37 -05:00
Jérôme Petazzoni
3fe33e4e9e Merge pull request #234 from bridgetkromhout/adding-ndc
Adding NDC
2018-04-24 03:56:13 -05:00
Bridget Kromhout
c44b90b5a4 Adding NDC 2018-04-23 20:03:46 -05:00
Jérôme Petazzoni
f06dc6548c Merge pull request #232 from bridgetkromhout/rollout-params
Clarify rollout params
2018-04-23 11:32:25 -05:00
Jérôme Petazzoni
e13552c306 Merge pull request #224 from bridgetkromhout/re-order
Re-ordering "kubectl apply" discussion
2018-04-23 11:31:15 -05:00
Bridget Kromhout
0305c3783f Adding an overview; marking clarification as extra 2018-04-23 10:52:29 -05:00
Bridget Kromhout
5158ac3d98 Clarify rollout params 2018-04-22 15:49:32 -05:00
Jérôme Petazzoni
25c08b0885 Merge pull request #231 from bridgetkromhout/add-goto-kube101
Adding goto's kube101
2018-04-22 14:55:55 -05:00
Bridget Kromhout
f8131c97e9 Adding goto's kube101 2018-04-22 14:35:50 -05:00
Bridget Kromhout
3de1fab66a Clarifying failure mode 2018-04-22 14:04:57 -05:00
Jérôme Petazzoni
ab664128b7 Merge pull request #228 from bridgetkromhout/helm-completion
Correction for helm completion
2018-04-22 14:00:08 -05:00
Bridget Kromhout
91de693b80 Correction for helm completion 2018-04-22 13:33:54 -05:00
Jérôme Petazzoni
a64606fb32 Merge pull request #225 from bridgetkromhout/tail-log
Clarify log tailing
2018-04-22 13:14:11 -05:00
Jérôme Petazzoni
58d9103bd2 Merge pull request #223 from bridgetkromhout/1.10.1-updates
Updates for 1.10.1
2018-04-22 13:13:25 -05:00
Jérôme Petazzoni
61ab5be12d Merge pull request #222 from bridgetkromhout/weave-link
Link to Weave
2018-04-22 13:08:54 -05:00
Bridget Kromhout
030900b602 Clarify log tailing 2018-04-22 12:39:18 -05:00
Bridget Kromhout
476d689c7d Clarify naming 2018-04-22 12:32:11 -05:00
Bridget Kromhout
4aedbb69c2 Re-ordering 2018-04-22 12:14:16 -05:00
Bridget Kromhout
db2a68709c Updates for 1.10.1 2018-04-22 11:57:37 -05:00
Bridget Kromhout
f114a89136 Link to Weave 2018-04-22 11:08:17 -05:00
Jérôme Petazzoni
96eda76391 Merge pull request #220 from bridgetkromhout/rearrange-kube-halfday
Rearrange kube halfday
2018-04-21 10:48:21 -05:00
Bridget Kromhout
e7d9a8fa2d Correcting EFK 2018-04-21 10:43:39 -05:00
Bridget Kromhout
1cca8db828 Rearranging halfday for kube 2018-04-21 10:38:54 -05:00
Bridget Kromhout
2cde665d2f Merge pull request #219 from jpetazzo/re-add-kube-halfday
Re-add half day file
2018-04-21 10:17:45 -05:00
Jerome Petazzoni
d660c6342f Re-add half day file 2018-04-21 12:00:04 +02:00
Bridget Kromhout
7e8bb0e51f Merge pull request #218 from bridgetkromhout/cloud-typo
Typo fix
2018-04-20 16:49:31 -05:00
Bridget Kromhout
c87f4cc088 Typo fix 2018-04-20 16:47:13 -05:00
Jérôme Petazzoni
05c50349a8 Merge pull request #211 from BretFisher/patch-4
add popular swarm reverse proxy options
2018-04-20 02:38:00 -05:00
Jérôme Petazzoni
e985952816 Add colon and fix minor typo 2018-04-20 02:37:48 -05:00
Jérôme Petazzoni
19f0ef9c86 Merge pull request #216 from jpetazzo/googl
Replace goo.gl with 1.1.1.1
2018-04-20 02:36:15 -05:00
Bret Fisher
cc8e13a85f silly me, Traefik is golang 2018-04-20 03:07:40 -04:00
Bridget Kromhout
6475a05794 Update kubectlrun.md
Removing misleading term
2018-04-19 14:37:26 -05:00
Bridget Kromhout
cc9840afe5 Update kubectlrun.md 2018-04-19 07:36:37 -05:00
Bridget Kromhout
b7a2cde458 Merge pull request #215 from jpetazzo/more-options-to-setup-k8s
Mention Kubernetes the Hard Way and more options
2018-04-19 07:32:20 -05:00
Bridget Kromhout
453992b55d Update setup-k8s.md 2018-04-19 07:31:25 -05:00
Bridget Kromhout
0b1067f95e Merge pull request #217 from jpetazzo/tolerations
Add a line about tolerations
2018-04-19 07:28:57 -05:00
Jérôme Petazzoni
21777cd95b Merge pull request #214 from BretFisher/patch-7
we can now add/remove networks from services 🤗
2018-04-19 06:35:09 -05:00
Jérôme Petazzoni
827ad3bdf2 Merge pull request #213 from BretFisher/patch-6
product name change 🙄
2018-04-19 06:34:41 -05:00
Jérôme Petazzoni
7818157cd0 Merge pull request #212 from BretFisher/patch-5
adding 3rd party registry options
2018-04-19 06:34:22 -05:00
Jérôme Petazzoni
d547241714 Merge pull request #210 from BretFisher/patch-3
fix image size via pic css class
2018-04-19 06:31:46 -05:00
Jérôme Petazzoni
c41e0e9286 Merge pull request #209 from BretFisher/patch-2
removed older notes about detach and service logs
2018-04-19 06:31:17 -05:00
Jérôme Petazzoni
c2d4784895 Merge pull request #208 from BretFisher/patch-1
removed mention of compose upg 1.6 to 1.7
2018-04-19 06:30:47 -05:00
Jérôme Petazzoni
11163965cf Merge pull request #204 from bridgetkromhout/clarify-off-by-one
Clarify an off-by-one amount of pods
2018-04-19 06:30:19 -05:00
Jérôme Petazzoni
e9df065820 Merge pull request #197 from bridgetkromhout/patch-only-daemonset
Patch only daemonset pods
2018-04-19 06:27:52 -05:00
Jerome Petazzoni
101ab0c11a Add a line about tolerations 2018-04-19 06:25:41 -05:00
Jérôme Petazzoni
25f081c0b7 Merge pull request #190 from bridgetkromhout/daemonset
Clarifications around daemonsets
2018-04-19 06:21:58 -05:00
Jérôme Petazzoni
700baef094 Merge pull request #188 from bridgetkromhout/clarify-kinds
kubectl get all missing-type workaround
2018-04-19 06:19:00 -05:00
Jerome Petazzoni
3faa586b16 Remove NOC joke 2018-04-19 06:14:54 -05:00
Jerome Petazzoni
8ca77fe8a4 Merge branch 'googl' of git://github.com/bridgetkromhout/container.training into bridgetkromhout-googl 2018-04-19 05:59:12 -05:00
Jerome Petazzoni
019829cc4d Mention Kubernetes the Hard Way and more options 2018-04-19 05:55:58 -05:00
Bret Fisher
a7f6bb223a we can now add/remove networks from services 🤗 2018-04-18 19:11:51 -04:00
Bret Fisher
eb77a8f328 product name change 🙄 2018-04-18 17:50:19 -04:00
Bret Fisher
5a484b2667 adding 3rd party registry options 2018-04-18 17:47:55 -04:00
Bret Fisher
982c35f8e7 add popular swarm reverse proxy options 2018-04-18 17:28:46 -04:00
Bret Fisher
adffe5f47f fix image size via pic css class
make swarm internals bigger!
2018-04-18 17:07:33 -04:00
Bret Fisher
f90a194b86 removed older notes about detach and service logs
Since these options have been around nearly a year, I removed some unneeded verbosity and consolidated the detach stuff.
2018-04-18 15:34:04 -04:00
Bret Fisher
99e9356e5d removed mention of compose upg 1.6 to 1.7
I feel like compose 1.7 was so long ago (over 2 years) that mentioning logs change isn't necessary.
2018-04-18 15:18:17 -04:00
Bridget Kromhout
860840a4c1 Clarify off-by-one 2018-04-18 14:09:08 -05:00
Bridget Kromhout
ab63b76ae0 Clarify types bug 2018-04-18 13:59:26 -05:00
Bridget Kromhout
29bca726b3 Merge pull request #2 from jpetazzo/daemonset-proposal
Pod cleanup proposal
2018-04-18 12:21:34 -05:00
Bridget Kromhout
91297a68f8 Update daemonset.md 2018-04-18 12:20:53 -05:00
Jerome Petazzoni
2bea8ade63 Break down last kube chapter (it is too long) 2018-04-18 11:44:30 -05:00
Jerome Petazzoni
ec486cf78c Do not bind-mount localtime (fixes #207) 2018-04-18 03:33:07 -05:00
Jerome Petazzoni
63ac378866 Merge branch 'darkalia-add_helm_completion' 2018-04-17 16:13:58 -05:00
Jerome Petazzoni
35db387fc2 Add ':' for consistency 2018-04-17 16:13:44 -05:00
Jerome Petazzoni
a0f9baf5e7 Merge branch 'add_helm_completion' of git://github.com/darkalia/container.training into darkalia-add_helm_completion 2018-04-17 16:12:52 -05:00
Jerome Petazzoni
4e54a79abc Pod cleanup proposal 2018-04-17 16:07:24 -05:00
Jérôme Petazzoni
37bea7158f Merge pull request #181 from jpetazzo/more-info-on-labels-and-rollouts
Label use-cases and rollouts
2018-04-17 15:18:24 -05:00
Jerome Petazzoni
618fe4e959 Clarify the grace period when shutting down pods 2018-04-17 02:24:07 -05:00
Jerome Petazzoni
0c73144977 Merge branch 'jgarrouste-patch-1' 2018-04-16 08:03:34 -05:00
Jerome Petazzoni
ff8c3b1595 Remove -o name 2018-04-16 08:03:09 -05:00
Jerome Petazzoni
b756d0d0dc Merge branch 'patch-1' of git://github.com/jgarrouste/container.training into jgarrouste-patch-1 2018-04-16 08:02:41 -05:00
Jerome Petazzoni
23147fafd1 Paris -> past sessions 2018-04-15 15:57:46 -05:00
Jérémy GARROUSTE
b036b5f24b Delete pods with '-l run=rng' and remove xargs
Delete pods with '-l run=rng' and remove xargs
2018-04-15 16:37:10 +02:00
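Presumably along the lines of (illustrative):

```
# Delete the rng pods through their label selector instead of
# piping pod names through xargs:
kubectl delete pods -l run=rng
```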
Benjamin Allot
3b9014f750 Add helm completion 2018-04-13 16:40:42 +02:00
Bridget Kromhout
de87743c6a Clarify an off-by-one amount of pods 2018-04-12 16:10:38 -05:00
Bridget Kromhout
74f980437f Clarify that clusters can be of arbitrary size 2018-04-12 07:31:49 -05:00
Bridget Kromhout
6711ba06d9 Patch only daemonset pods 2018-04-11 21:09:46 -05:00
Bridget Kromhout
f97bd2b357 googl to cloudflare 2018-04-11 13:36:00 -05:00
Bridget Kromhout
3f54f23535 Clarifying cleanup 2018-04-10 16:45:50 -05:00
Bridget Kromhout
827d10dd49 Clarifying ambiguous labels on pods 2018-04-10 15:48:54 -05:00
Bridget Kromhout
1b7a072f25 Bump version and add link 2018-04-10 15:29:14 -05:00
Bridget Kromhout
eb1b3c8729 Clarify types 2018-04-10 14:17:27 -05:00
Bridget Kromhout
40e4678a45 goo.gl deprecation 2018-04-10 12:41:07 -05:00
Jerome Petazzoni
38a40d56a0 Label use-cases and rollouts
This adds a few realistic examples of label usage.
It also adds explanations about why deploying a new
version of the worker doesn't seem to be effective
immediately (the worker doesn't handle signals).
2018-04-10 06:04:17 -05:00
207 changed files with 11789 additions and 1797 deletions

.gitignore vendored

@@ -1,11 +1,22 @@
*.pyc
*.swp
*~
prepare-vms/ips.txt
prepare-vms/ips.html
prepare-vms/ips.pdf
prepare-vms/settings.yaml
prepare-vms/tags
prepare-vms/infra
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
node_modules
### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride
### Windows ###
# Windows thumbnail cache files
Thumbs.db
ehthumbs.db
ehthumbs_vista.db


@@ -292,15 +292,31 @@ If there is a bug and you can't even reproduce it:
sorry. It is probably an Heisenbug. We can't act on it
until it's reproducible, alas.
If you have attended this workshop and have feedback,
or if you want somebody to deliver that workshop at your
conference or for your company: you can contact one of us!
# “Please teach us!”
If you have attended one of these workshops, and want
your team or organization to attend a similar one, you
can look at the list of upcoming events on
http://container.training/.
You are also welcome to reuse these materials to run
your own workshop, for your team or even at a meetup
or conference. In that case, you might enjoy watching
[Bridget Kromhout's talk at KubeCon 2018 Europe](
https://www.youtube.com/watch?v=mYsp_cGY2O0), explaining
precisely how to run such a workshop yourself.
Finally, you can also contact the following persons,
who are experienced speakers, are familiar with the
material, and are available to deliver these workshops
at your conference or for your company:
- jerome dot petazzoni at gmail dot com
- bret at bretfisher dot com
If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!
(If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!)
**Thank you!**


@@ -28,5 +28,5 @@ def rng(how_many_bytes):
 if __name__ == "__main__":
-    app.run(host="0.0.0.0", port=80)
+    app.run(host="0.0.0.0", port=80, threaded=False)

k8s/consul.yaml (new file)

@@ -0,0 +1,62 @@
apiVersion: v1
kind: Service
metadata:
name: consul
spec:
ports:
- port: 8500
name: http
selector:
app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
spec:
serviceName: consul
replicas: 3
selector:
matchLabels:
app: consul
template:
metadata:
labels:
app: consul
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- consul
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
containers:
- name: consul
image: "consul:1.2.2"
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- "agent"
- "-bootstrap-expect=3"
- "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
- "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
- "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
- "-client=0.0.0.0"
- "-data-dir=/consul/data"
- "-server"
- "-ui"
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave
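A minimal way to try this manifest, assuming a cluster with at least three schedulable nodes (the required anti-affinity rule places one Consul pod per node):

```bash
# Deploy the Service and StatefulSet, then check that the three agents formed a cluster.
kubectl apply -f k8s/consul.yaml
kubectl get pods -l app=consul -o wide
kubectl exec consul-0 -- consul members
```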

28
k8s/docker-build.yaml Normal file
View File

@@ -0,0 +1,28 @@
apiVersion: v1
kind: Pod
metadata:
name: build-image
spec:
restartPolicy: OnFailure
containers:
- name: docker-build
image: docker
env:
- name: REGISTRY_PORT
value: #"30000"
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
mkdir /workspace &&
git clone https://github.com/jpetazzo/container.training /workspace &&
docker build -t localhost:$REGISTRY_PORT/worker /workspace/dockercoins/worker &&
docker push localhost:$REGISTRY_PORT/worker
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
volumes:
- name: docker-socket
hostPath:
path: /var/run/docker.sock
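The `REGISTRY_PORT` value is deliberately left commented out in the manifest. A sketch of how one might use this pod, assuming a self-hosted registry is already exposed on a NodePort (30000 is just the example value hinted at above):

```bash
# Edit k8s/docker-build.yaml to set REGISTRY_PORT (e.g. "30000"), then:
kubectl apply -f k8s/docker-build.yaml
kubectl logs -f build-image    # follow the git clone, docker build, and docker push steps
```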

227
k8s/efk.yaml Normal file
View File

@@ -0,0 +1,227 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: fluentd
labels:
k8s-app: fluentd-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: fluentd-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:elasticsearch
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
# X-Pack Authentication
# =====================
- name: FLUENT_ELASTICSEARCH_USER
value: "elastic"
- name: FLUENT_ELASTICSEARCH_PASSWORD
value: "changeme"
- name: FLUENT_UID
value: "0"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: null
generation: 1
labels:
run: elasticsearch
name: elasticsearch
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/elasticsearch
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: elasticsearch
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: elasticsearch
spec:
containers:
- image: elasticsearch:5.6.8
imagePullPolicy: IfNotPresent
name: elasticsearch
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
env:
- name: ES_JAVA_OPTS
value: "-Xms1g -Xmx1g"
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
run: elasticsearch
name: elasticsearch
selfLink: /api/v1/namespaces/default/services/elasticsearch
spec:
ports:
- port: 9200
protocol: TCP
targetPort: 9200
selector:
run: elasticsearch
sessionAffinity: None
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: null
generation: 1
labels:
run: kibana
name: kibana
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/kibana
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: kibana
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: kibana
spec:
containers:
- env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch:9200/
image: kibana:5.6.8
imagePullPolicy: Always
name: kibana
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
run: kibana
name: kibana
selfLink: /api/v1/namespaces/default/services/kibana
spec:
externalTrafficPolicy: Cluster
ports:
- port: 5601
protocol: TCP
targetPort: 5601
selector:
run: kibana
sessionAffinity: None
type: NodePort
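To look at the collected logs, one would typically find the NodePort assigned to the Kibana Service; a quick check after applying the manifest:

```bash
kubectl apply -f k8s/efk.yaml
kubectl get pods -l k8s-app=fluentd-logging -o wide   # expect one fluentd pod per node
kubectl get svc kibana   # note the NodePort mapped to 5601, then browse to http://<any-node-IP>:<that-port>/
```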

View File

@@ -0,0 +1,14 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system

18
k8s/haproxy.cfg Normal file
View File

@@ -0,0 +1,18 @@
global
daemon
maxconn 256
defaults
mode tcp
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend the-frontend
bind *:80
default_backend the-backend
backend the-backend
server google.com-80 google.com:80 maxconn 32 check
server ibm.fr-80 ibm.fr:80 maxconn 32 check

16
k8s/haproxy.yaml Normal file
View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
name: haproxy
spec:
volumes:
- name: config
configMap:
name: haproxy
containers:
- name: haproxy
image: haproxy
volumeMounts:
- name: config
mountPath: /usr/local/etc/haproxy/
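The pod above mounts a ConfigMap named `haproxy`, which is expected to hold the `haproxy.cfg` shown just before it; a plausible sequence to wire the two together:

```bash
kubectl create configmap haproxy --from-file=k8s/haproxy.cfg
kubectl apply -f k8s/haproxy.yaml
kubectl exec haproxy -- cat /usr/local/etc/haproxy/haproxy.cfg   # confirm the config was mounted
```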

14
k8s/ingress.yaml Normal file
View File

@@ -0,0 +1,14 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cheddar
spec:
rules:
- host: cheddar.A.B.C.D.nip.io
http:
paths:
- path: /
backend:
serviceName: cheddar
servicePort: 80
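The host field uses `A.B.C.D` as a placeholder for a node's public IP (nip.io resolves such names back to that IP). A sketch, assuming an ingress controller running with hostNetwork (such as the Traefik DaemonSet later in this changeset) and an existing Service named `cheddar`:

```bash
# Replace A.B.C.D in k8s/ingress.yaml with a node's public IP, then:
kubectl apply -f k8s/ingress.yaml
curl http://cheddar.A.B.C.D.nip.io/   # use the same node IP in place of A.B.C.D
```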

29
k8s/kaniko-build.yaml Normal file
View File

@@ -0,0 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
name: kaniko-build
spec:
initContainers:
- name: git-clone
image: alpine
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
git clone git://github.com/jpetazzo/container.training /workspace
volumeMounts:
- name: workspace
mountPath: /workspace
containers:
- name: build-image
image: gcr.io/kaniko-project/executor:latest
args:
- "--context=/workspace/dockercoins/rng"
- "--insecure"
- "--destination=registry:5000/rng-kaniko:latest"
volumeMounts:
- name: workspace
mountPath: /workspace
volumes:
- name: workspace
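A sketch of how one might run this build, assuming an in-cluster registry reachable as `registry:5000` (the destination used in the args above):

```bash
kubectl apply -f k8s/kaniko-build.yaml
kubectl logs kaniko-build -c git-clone        # init container: clones the repo into the shared volume
kubectl logs -f kaniko-build -c build-image   # main container: kaniko builds and pushes the image
```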

View File

@@ -0,0 +1,167 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
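One lightweight way to reach the dashboard from a workstation, assuming the manifest shown here has been applied (its filename is not visible in this view), is to port-forward to the Service instead of exposing it:

```bash
kubectl -n kube-system port-forward service/kubernetes-dashboard 8443:443
# then browse to https://localhost:8443/ (self-signed certificate)
```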

View File

@@ -0,0 +1,14 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-testcurl-for-testweb
spec:
podSelector:
matchLabels:
app: testweb
ingress:
- from:
- podSelector:
matchLabels:
run: testcurl
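A sketch of how one might exercise this policy, assuming a `testweb` Deployment and Service already exist and that its pods carry the `app=testweb` label matched above:

```bash
# Allowed: the client pod carries the run=testcurl label required by the policy.
kubectl run testcurl --rm -it --restart=Never --image=alpine --labels="run=testcurl" -- \
  wget -qO- -T 2 http://testweb
# Blocked (times out): the same request from a pod without that label.
kubectl run probe --rm -it --restart=Never --image=alpine -- wget -qO- -T 2 http://testweb
```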

View File

@@ -0,0 +1,10 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-all-for-testweb
spec:
podSelector:
matchLabels:
app: testweb
ingress: []

View File

@@ -0,0 +1,22 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-from-other-namespaces
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-webui
spec:
podSelector:
matchLabels:
app: webui
ingress:
- from: []

View File

@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-with-volume
spec:
volumes:
- name: www
containers:
- name: nginx
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html/
- name: git
image: alpine
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/
restartPolicy: OnFailure
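A quick way to see the shared volume in action (the `www` volume declares no source, so Kubernetes defaults it to an emptyDir shared by both containers):

```bash
kubectl apply -f k8s/nginx-with-volume.yaml
# Wait for the git container to finish cloning, then fetch the page served by nginx:
kubectl port-forward pod/nginx-with-volume 8080:80 &
curl http://localhost:8080/
```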

580
k8s/portworx.yaml Normal file
View File

@@ -0,0 +1,580 @@
# SOURCE: https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true
apiVersion: v1
kind: ConfigMap
metadata:
name: stork-config
namespace: kube-system
data:
policy.cfg: |-
{
"kind": "Policy",
"apiVersion": "v1",
"extenders": [
{
"urlPrefix": "http://stork-service.kube-system.svc:8099",
"apiVersion": "v1beta1",
"filterVerb": "filter",
"prioritizeVerb": "prioritize",
"weight": 5,
"enableHttps": false,
"nodeCacheCapable": false
}
]
}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "delete"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["create", "list", "watch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshotdatas"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["*"]
resources: ["deployments", "deployments/extensions"]
verbs: ["list", "get", "watch", "patch", "update", "initialize"]
- apiGroups: ["*"]
resources: ["statefulsets", "statefulsets/extensions"]
verbs: ["list", "get", "watch", "patch", "update", "initialize"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-role-binding
subjects:
- kind: ServiceAccount
name: stork-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-role
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: stork-service
namespace: kube-system
spec:
selector:
name: stork
ports:
- protocol: TCP
port: 8099
targetPort: 8099
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
tier: control-plane
name: stork
namespace: kube-system
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
replicas: 3
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
name: stork
tier: control-plane
spec:
containers:
- command:
- /stork
- --driver=pxd
- --verbose
- --leader-elect=true
- --health-monitor-interval=120
imagePullPolicy: Always
image: openstorage/stork:1.1.3
resources:
requests:
cpu: '0.1'
name: stork
hostPID: false
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork
topologyKey: "kubernetes.io/hostname"
serviceAccountName: stork-account
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: stork-snapshot-sc
provisioner: stork-snapshot
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-scheduler-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-scheduler-role
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch", "update"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create"]
- apiGroups: [""]
resourceNames: ["kube-scheduler"]
resources: ["endpoints"]
verbs: ["delete", "get", "patch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list", "watch"]
- apiGroups: [""]
resources: ["bindings", "pods/binding"]
verbs: ["create"]
- apiGroups: [""]
resources: ["pods/status"]
verbs: ["patch", "update"]
- apiGroups: [""]
resources: ["replicationcontrollers", "services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["app", "extensions"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-scheduler-role-binding
subjects:
- kind: ServiceAccount
name: stork-scheduler-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-scheduler-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
name: stork-scheduler
namespace: kube-system
spec:
replicas: 3
template:
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
spec:
containers:
- command:
- /usr/local/bin/kube-scheduler
- --address=0.0.0.0
- --leader-elect=true
- --scheduler-name=stork
- --policy-configmap=stork-config
- --policy-configmap-namespace=kube-system
- --lock-object-name=stork-scheduler
image: gcr.io/google_containers/kube-scheduler-amd64:v1.11.2
livenessProbe:
httpGet:
path: /healthz
port: 10251
initialDelaySeconds: 15
name: stork-scheduler
readinessProbe:
httpGet:
path: /healthz
port: 10251
resources:
requests:
cpu: '0.1'
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork-scheduler
topologyKey: "kubernetes.io/hostname"
hostPID: false
serviceAccountName: stork-scheduler-account
---
kind: Service
apiVersion: v1
metadata:
name: portworx-service
namespace: kube-system
labels:
name: portworx
spec:
selector:
name: portworx
ports:
- name: px-api
protocol: TCP
port: 9001
targetPort: 9001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-get-put-list-role
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["watch", "get", "update", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "update", "create"]
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
resourceNames: ["privileged"]
verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-role-binding
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: node-get-put-list-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Namespace
metadata:
name: portworx
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role
namespace: portworx
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role-binding
namespace: portworx
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: Role
name: px-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: portworx
namespace: kube-system
annotations:
portworx.com/install-source: "https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true"
spec:
minReadySeconds: 0
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
name: portworx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: px/enabled
operator: NotIn
values:
- "false"
- key: node-role.kubernetes.io/master
operator: DoesNotExist
hostNetwork: true
hostPID: false
containers:
- name: portworx
image: portworx/oci-monitor:1.4.2.2
imagePullPolicy: Always
args:
["-c", "px-workshop", "-s", "/dev/loop4", "-b",
"-x", "kubernetes"]
env:
- name: "PX_TEMPLATE_VERSION"
value: "v4"
livenessProbe:
periodSeconds: 30
initialDelaySeconds: 840 # allow image pull in slow networks
httpGet:
host: 127.0.0.1
path: /status
port: 9001
readinessProbe:
periodSeconds: 10
httpGet:
host: 127.0.0.1
path: /health
port: 9015
terminationMessagePath: "/tmp/px-termination-log"
securityContext:
privileged: true
volumeMounts:
- name: dockersock
mountPath: /var/run/docker.sock
- name: etcpwx
mountPath: /etc/pwx
- name: optpwx
mountPath: /opt/pwx
- name: proc1nsmount
mountPath: /host_proc/1/ns
- name: sysdmount
mountPath: /etc/systemd/system
- name: diagsdump
mountPath: /var/cores
- name: journalmount1
mountPath: /var/run/log
readOnly: true
- name: journalmount2
mountPath: /var/log
readOnly: true
- name: dbusmount
mountPath: /var/run/dbus
restartPolicy: Always
serviceAccountName: px-account
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: etcpwx
hostPath:
path: /etc/pwx
- name: optpwx
hostPath:
path: /opt/pwx
- name: proc1nsmount
hostPath:
path: /proc/1/ns
- name: sysdmount
hostPath:
path: /etc/systemd/system
- name: diagsdump
hostPath:
path: /var/cores
- name: journalmount1
hostPath:
path: /var/run/log
- name: journalmount2
hostPath:
path: /var/log
- name: dbusmount
hostPath:
path: /var/run/dbus
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-lh-account
namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role-binding
namespace: kube-system
subjects:
- kind: ServiceAccount
name: px-lh-account
namespace: kube-system
roleRef:
kind: Role
name: px-lh-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
type: NodePort
ports:
- name: http
port: 80
nodePort: 32678
- name: https
port: 443
nodePort: 32679
selector:
tier: px-web-console
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
tier: px-web-console
replicas: 1
template:
metadata:
labels:
tier: px-web-console
spec:
initContainers:
- name: config-init
image: portworx/lh-config-sync:0.2
imagePullPolicy: Always
args:
- "init"
volumeMounts:
- name: config
mountPath: /config/lh
containers:
- name: px-lighthouse
image: portworx/px-lighthouse:1.5.0
imagePullPolicy: Always
ports:
- containerPort: 80
- containerPort: 443
volumeMounts:
- name: config
mountPath: /config/lh
- name: config-sync
image: portworx/lh-config-sync:0.2
imagePullPolicy: Always
args:
- "sync"
volumeMounts:
- name: config
mountPath: /config/lh
serviceAccountName: px-lh-account
volumes:
- name: config
emptyDir: {}
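After applying this manifest, a reasonable sanity check is to wait for the DaemonSet and the Lighthouse console to come up (label and port values taken from the manifest above):

```bash
kubectl apply -f k8s/portworx.yaml
kubectl -n kube-system get pods -l name=portworx -o wide   # one Portworx pod per worker node
kubectl -n kube-system get svc px-lighthouse               # web console on NodePorts 32678/32679
```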

30
k8s/postgres.yaml Normal file
View File

@@ -0,0 +1,30 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
serviceName: postgres
template:
metadata:
labels:
app: postgres
spec:
schedulerName: stork
containers:
- name: postgres
image: postgres:10.5
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres
volumeClaimTemplates:
- metadata:
name: postgres
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
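A minimal smoke test for this StatefulSet, assuming a default StorageClass is available (such as the `portworx-replicated` one defined in `k8s/storage-class.yaml` in this changeset) and that the stork scheduler from `k8s/portworx.yaml` is running, since the pod requests `schedulerName: stork`:

```bash
kubectl apply -f k8s/postgres.yaml
kubectl get pvc   # the claim created from the volumeClaimTemplate should become Bound
kubectl exec -it postgres-0 -- psql -U postgres -c '\l'   # list databases once the pod is Running
```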

15
k8s/registry.yaml Normal file
View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
name: registry
spec:
containers:
- name: registry
image: registry
env:
- name: REGISTRY_HTTP_ADDR
valueFrom:
configMapKeyRef:
name: registry
key: http.addr
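This pod reads its listening address from a ConfigMap named `registry` (key `http.addr`), which has to exist beforehand; a sketch with an example address:

```bash
kubectl create configmap registry --from-literal=http.addr=0.0.0.0:5000
kubectl apply -f k8s/registry.yaml
kubectl exec registry -- wget -qO- http://localhost:5000/v2/_catalog   # the registry API should answer
```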

67
k8s/socat.yaml Normal file
View File

@@ -0,0 +1,67 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
creationTimestamp: null
generation: 1
labels:
app: socat
name: socat
namespace: kube-system
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
spec:
replicas: 1
selector:
matchLabels:
app: socat
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: socat
spec:
containers:
- args:
- sh
- -c
- apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard:443,verify=0
image: alpine
imagePullPolicy: Always
name: socat
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: socat
name: socat
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/socat
spec:
externalTrafficPolicy: Cluster
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: socat
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
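This relay exposes the dashboard over plain HTTP on a NodePort; to find which port was allocated:

```bash
kubectl apply -f k8s/socat.yaml
kubectl -n kube-system get svc socat   # note the NodePort mapped to port 80, then browse to http://<node-IP>:<NodePort>/
```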

11
k8s/storage-class.yaml Normal file
View File

@@ -0,0 +1,11 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-replicated
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "2"
priority_io: "high"

100
k8s/traefik.yaml Normal file
View File

@@ -0,0 +1,100 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system
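Since the DaemonSet runs with hostNetwork and binds hostPorts 80 and 8080, a quick check after applying it (NODE_IP below is a placeholder for any node address):

```bash
kubectl apply -f k8s/traefik.yaml
kubectl -n kube-system get pods -l k8s-app=traefik-ingress-lb -o wide
curl -sI http://NODE_IP:8080/   # the admin/dashboard port should answer
```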

View File

@@ -1,4 +1,10 @@
# Trainer tools to create and prepare VMs for Docker workshops on AWS or Azure
# Trainer tools to create and prepare VMs for Docker workshops
These tools can help you to create VMs on:
- Azure
- EC2
- OpenStack
## Prerequisites
@@ -6,6 +12,9 @@
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this
Depending on the infrastructure that you want to use, you also need to install
the Azure CLI, the AWS CLI, or terraform (for OpenStack deployment).
And if you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
@@ -14,20 +23,25 @@ And if you want to generate printable cards:
## General Workflow
- fork/clone repo
- set required environment variables
- create an infrastructure configuration in the `prepare-vms/infra` directory
(using one of the example files in that directory)
- create your own setting file from `settings/example.yaml`
- if necessary, increase allowed open files: `ulimit -Sn 10000`
- run `./workshopctl` commands to create instances, install docker, setup each users environment in node1, other management tasks
- run `./workshopctl cards` command to generate PDF for printing handouts of each users host IP's and login info
- run `./workshopctl start` to create instances
- run `./workshopctl deploy` to install Docker and setup environment
- run `./workshopctl kube` (if you want to install and setup Kubernetes)
- run `./workshopctl cards` (if you want to generate PDF for printing handouts of each user's host IPs and login info)
- run `./workshopctl stop` at the end of the workshop to terminate instances (see the example run sketched just below)
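A hypothetical end-to-end run tying these steps together (the tag, file names, and instance count below are placeholders):

```bash
cp infra/example.aws infra/aws-us-west-2              # then edit credentials and region
cp settings/example.yaml settings/myworkshop.yaml     # then adjust cluster size, etc.
./workshopctl start --infra infra/aws-us-west-2 --settings settings/myworkshop.yaml --count 20
./workshopctl deploy TAG    # TAG is printed by "start" and listed under prepare-vms/tags/
./workshopctl kube TAG
./workshopctl cards TAG
./workshopctl stop TAG
```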
## Clone/Fork the Repo, and Build the Tools Image
The Docker Compose file here is used to build an image with all the dependencies to run the `./workshopctl` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](workshopctl#L5).
$ git clone https://github.com/jpetazzo/orchestration-workshop.git
$ cd orchestration-workshop/prepare-vms
$ git clone https://github.com/jpetazzo/container.training
$ cd container.training/prepare-vms
$ docker-compose build
## Preparing to Run `./workshopctl`
### Required AWS Permissions/Info
@@ -36,27 +50,37 @@ The Docker Compose file here is used to build a image with all the dependencies
- Using a non-default VPC or Security Group isn't supported out of box yet, so you will have to customize `lib/commands.sh` if you want to change that.
- These instances will assign the default VPC Security Group, which does not open any ports from Internet by default. So you'll need to add Inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./workshopctl opensg` which opens up all ports.
### Required Environment Variables
### Create your `infra` file
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
You need to do this only once. (On AWS, you can create one `infra`
file per region.)
If you're not using AWS, set these to placeholder values:
Make a copy of one of the example files in the `infra` directory.
For instance:
```bash
cp infra/example.aws infra/aws-us-west-2
```
export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="foo"
export AWS_DEFAULT_REGION="foo"
```
Edit your infrastructure file to customize it.
You will probably need to put your cloud provider credentials,
select region...
If you don't have the `aws` CLI installed, you will get a warning that it's a missing dependency. If you're not using AWS you can ignore this.
### Update/copy `settings/example.yaml`
### Create your `settings` file
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.
Similarly, pick one of the files in `settings` and copy it
to customize it.
./workshopctl cards 2016-09-28-00-33-bret settings/orchestration.yaml
For instance:
```bash
cp settings/example.yaml settings/myworkshop.yaml
```
You're all set!
## `./workshopctl` Usage
@@ -66,7 +90,7 @@ Commands:
ami Show the AMI that will be used for deployment
amis List Ubuntu AMIs in the current region
build Build the Docker image to run this program in a container
cards Generate ready-to-print cards for a batch of VMs
cards Generate ready-to-print cards for a group of VMs
deploy Install Docker on a bunch of running VMs
ec2quotas Check our EC2 quotas (max instances)
help Show available commands
@@ -74,14 +98,14 @@ ids List the instance IDs belonging to a given tag or token
ips List the IP addresses of the VMs for a given tag or token
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all nodes are reporting as Ready
list List available batches in the current region
list List available groups in the current region
opensg Open the default security group to ALL ingress traffic
pull_images Pre-pull a bunch of Docker images
retag Apply a new tag to a batch of VMs
start Start a batch of VMs
status List instance status for a given batch
retag Apply a new tag to a group of VMs
start Start a group of VMs
status List instance status for a given group
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
test Run tests (pre-flight checks) on a batch of VMs
test Run tests (pre-flight checks) on a group of VMs
wrap Run this program in a container
```
@@ -93,24 +117,24 @@ wrap Run this program in a container
- The `./workshopctl` script can be executed directly.
- It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms).
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard coded.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. This can be configured with the `docker_user_password` property in the settings file.
### Example Steps to Launch a Batch of AWS Instances for a Workshop
### Example Steps to Launch a group of AWS Instances for a Workshop
- Run `./workshopctl start N` Creates `N` EC2 instances
- Run `./workshopctl start --infra infra/aws-us-east-2 --settings settings/myworkshop.yaml --count 60` to create 60 EC2 instances
- Your local SSH key will be synced to instances under `ubuntu` user
- AWS instances will be created and tagged based on date, and IP's stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- Run `./workshopctl deploy TAG` to run `lib/postprep.py` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires a good connection to run all the parallel SSH connections, up to 100 in parallel (ProTip: create a dedicated management instance in the same AWS region where you run all these utils from)
- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` generates PDF/HTML files to print and cut and hand out to students
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG` generates PDF/HTML files to print and cut and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
### Example Steps to Launch Azure Instances
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account (`az login`)
- Customize `azuredeploy.parameters.json`
- Required:
- Provide the SSH public key you plan to use for instance configuration
@@ -155,27 +179,16 @@ az group delete --resource-group workshop
### Example Steps to Configure Instances from a non-AWS Source
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to ssh into them.
- Set placeholder values for [AWS environment variable settings](#required-environment-variables).
- Choose a tag. It could be an event name, datestamp, etc. Ensure you have created a directory for your tag: `prepare-vms/tags/<tag>/`
- If you have not already generated a file with the IPs to be configured:
- The file should be named `prepare-vms/tags/<tag>/ips.txt`
- Format is one IP per line, no other info needed.
- Ensure the settings file is as desired (especially the number of nodes): `prepare-vms/settings/kube101.yaml`
- For a tag called `myworkshop`, configure instances: `workshopctl deploy myworkshop settings/kube101.yaml`
- Optionally, configure Kubernetes clusters of the size in the settings: `workshopctl kube myworkshop`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: `workshopctl kubetest myworkshop`
- Generate cards to print and hand out: `workshopctl cards myworkshop settings/kube101.yaml`
- Print the cards file: `prepare-vms/tags/myworkshop/ips.html`
## Other Tools
### Deploying your SSH key to all the machines
- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
- Copy `infra/example.generic` to `infra/generic`
- Run `./workshopctl start --infra infra/generic --settings settings/...yaml`
- Note the `prepare-vms/tags/TAG/` path that has been auto-created.
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to SSH into them.
- Edit the file `prepare-vms/tags/TAG/ips.txt`, it should list the IP addresses of the VMs (one per line, without any comments or other info)
- Continue deployment of cluster configuration with `./workshopctl deploy TAG`
- Optionally, configure Kubernetes clusters of the size in the settings: workshopctl kube `TAG`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: workshopctl kubetest `TAG`
- Generate cards to print and hand out: workshopctl cards `TAG`
- Print the cards file: prepare-vms/tags/`TAG`/ips.html
## Even More Details
@@ -188,7 +201,7 @@ To see which local key will be uploaded, run `ssh-add -l | grep RSA`.
#### Instance + tag creation
10 VMs will be started, with an automatically generated tag (timestamp + your username).
The VMs will be started, with an automatically generated tag (timestamp + your username).
Your SSH key will be added to the `authorized_keys` of the ubuntu user.
@@ -196,25 +209,21 @@ Your SSH key will be added to the `authorized_keys` of the ubuntu user.
Following the creation of the VMs, a text file will be created containing a list of their IPs.
This ips.txt file will be created in the $TAG/ directory and a symlink will be placed in the working directory of the script.
If you create new VMs, the symlinked file will be overwritten.
#### Deployment
Instances can be deployed manually using the `deploy` command:
$ ./workshopctl deploy TAG settings/somefile.yaml
$ ./workshopctl deploy TAG
The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.
#### Pre-pull images
$ ./workshopctl pull-images TAG
$ ./workshopctl pull_images TAG
#### Generate cards
$ ./workshopctl cards TAG settings/somefile.yaml
$ ./workshopctl cards TAG
If you want to generate both HTML and PDF cards, install [wkhtmltopdf](https://wkhtmltopdf.org/downloads.html); without that installed, only HTML cards will be generated.
@@ -222,13 +231,11 @@ If you don't have `wkhtmltopdf` installed, you will get a warning that it is a m
#### List tags
$ ./workshopctl list
$ ./workshopctl list infra/some-infra-file
#### List VMs
$ ./workshopctl listall
$ ./workshopctl list TAG
This will print a human-friendly list containing some information about each instance.
$ ./workshopctl tags
#### Stop and destroy VMs

View File

@@ -7,15 +7,6 @@ fi
if id docker; then
sudo userdel -r docker
fi
pip install --user awscli jinja2 pdfkit
sudo apt-get install -y wkhtmltopdf xvfb
tmux new-session \; send-keys "
[ -f ~/.ssh/id_rsa ] || ssh-keygen
eval \$(ssh-agent)
ssh-add
Xvfb :0 &
export DISPLAY=:0
mkdir -p ~/www
sudo docker run -d -p 80:80 -v \$HOME/www:/usr/share/nginx/html nginx
"
sudo apt-get update -q
sudo apt-get install -qy jq python-pip wkhtmltopdf xvfb
pip install --user awscli jinja2 pdfkit pssh

View File

@@ -7,7 +7,6 @@ services:
working_dir: /root/prepare-vms
volumes:
- $HOME/.aws/:/root/.aws/
- /etc/localtime:/etc/localtime:ro
- $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
- $PWD/:/root/prepare-vms/
environment:

View File

@@ -0,0 +1,6 @@
INFRACLASS=aws
# If you are using AWS to deploy, copy this file (e.g. to "aws", or "us-east-1")
# and customize the variables below.
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=AKI...
export AWS_SECRET_ACCESS_KEY=...

View File

@@ -0,0 +1,2 @@
INFRACLASS=generic
# This is for manual provisioning. No other variable or configuration is needed.

View File

@@ -0,0 +1,9 @@
INFRACLASS=openstack
# If you are using OpenStack, copy this file (e.g. to "openstack" or "enix")
# and customize the variables below.
export TF_VAR_user="jpetazzo"
export TF_VAR_tenant="training"
export TF_VAR_domain="Default"
export TF_VAR_password="..."
export TF_VAR_auth_url="https://api.r1.nxs.enix.io/v3"
export TF_VAR_flavor="GP1.S"

View File

@@ -1,105 +0,0 @@
aws_display_tags() {
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| uniq -c \
| sort -k 3 \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')
aws ec2 describe-instance-status \
--instance-ids $IDS \
--query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
--output table
}
aws_display_instances_by_tag() {
TAG=$1
need_tag $TAG
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "$result"
fi
}
aws_get_instance_ids_by_filter() {
FILTER=$1
aws ec2 describe-instances --filters $FILTER \
--query Reservations[*].Instances[*].InstanceId \
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
need_tag $TOKEN
aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}
aws_get_instance_ids_by_tag() {
TAG=$1
need_tag $TAG
aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}
aws_get_instance_ips_by_tag() {
TAG=$1
need_tag $TAG
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws_get_instance_ids_by_tag $TAG)
if [ -z "$IDS" ]; then
die "Invalid tag."
fi
info "Deleting instances with tag $TAG."
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
info "Deleted instances with tag $TAG."
}
aws_tag_instances() {
OLD_TAG_OR_TOKEN=$1
NEW_TAG=$2
IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}

View File

@@ -50,27 +50,38 @@ sep() {
fi
}
need_tag() {
need_infra() {
if [ -z "$1" ]; then
die "Please specify infrastructure file. (e.g.: infra/aws)"
fi
if [ ! -f "$1" ]; then
die "Infrastructure file $1 doesn't exist."
fi
. "$1"
. "lib/infra/$INFRACLASS.sh"
}
need_tag() {
if [ -z "$TAG" ]; then
die "Please specify a tag or token. To see available tags and tokens, run: $0 list"
fi
if [ ! -d "tags/$TAG" ]; then
die "Tag $TAG not found (directory tags/$TAG does not exist)."
fi
for FILE in settings.yaml ips.txt infra.sh; do
if [ ! -f "tags/$TAG/$FILE" ]; then
warning "File tags/$TAG/$FILE not found."
fi
done
. "tags/$TAG/infra.sh"
. "lib/infra/$INFRACLASS.sh"
}
need_settings() {
if [ -z "$1" ]; then
die "Please specify a settings file."
elif [ ! -f "$1" ]; then
die "Please specify a settings file. (e.g.: settings/kube101.yaml)"
fi
if [ ! -f "$1" ]; then
die "Settings file $1 doesn't exist."
fi
}
need_ips_file() {
IPS_FILE=$1
if [ -z "$IPS_FILE" ]; then
die "IPS_FILE not set."
fi
if [ ! -s "$IPS_FILE" ]; then
die "IPS_FILE $IPS_FILE not found. Please run: $0 ips <TAG>"
fi
}

View File

@@ -7,21 +7,11 @@ _cmd() {
_cmd help "Show available commands"
_cmd_help() {
printf "$(basename $0) - the orchestration workshop swiss army knife\n"
printf "$(basename $0) - the container training swiss army knife\n"
printf "Commands:"
printf "%s" "$HELP" | sort
}
_cmd amis "List Ubuntu AMIs in the current region"
_cmd_amis() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION "$@"
}
_cmd ami "Show the AMI that will be used for deployment"
_cmd_ami() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}
_cmd build "Build the Docker image to run this program in a container"
_cmd_build() {
docker-compose build
@@ -32,64 +22,53 @@ _cmd_wrap() {
docker-compose run --rm workshopctl "$@"
}
_cmd cards "Generate ready-to-print cards for a batch of VMs"
_cmd cards "Generate ready-to-print cards for a group of VMs"
_cmd_cards() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
need_tag
# If you're not using AWS, populate the ips.txt file manually
if [ ! -f tags/$TAG/ips.txt ]; then
aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
fi
# Remove symlinks to old cards
rm -f ips.html ips.pdf
# This will generate two files in the base dir: ips.pdf and ips.html
python lib/ips-txt-to-html.py $SETTINGS
for f in ips.html ips.pdf; do
# Remove old versions of cards if they exist
rm -f tags/$TAG/$f
# Move the generated file and replace it with a symlink
mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
done
# This will process ips.txt to generate two files: ips.pdf and ips.html
(
cd tags/$TAG
../../lib/ips-txt-to-html.py settings.yaml
)
info "Cards created. You can view them with:"
info "xdg-open ips.html ips.pdf (on Linux)"
info "open ips.html ips.pdf (on MacOS)"
info "xdg-open tags/$TAG/ips.html tags/$TAG/ips.pdf (on Linux)"
info "open tags/$TAG/ips.html (on macOS)"
}
_cmd deploy "Install Docker on a bunch of running VMs"
_cmd_deploy() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
link_tag $TAG
count=$(wc -l ips.txt)
need_tag
# wait until all hosts are reachable before trying to deploy
info "Trying to reach $TAG instances..."
while ! tag_is_reachable $TAG; do
while ! tag_is_reachable; do
>/dev/stderr echo -n "."
sleep 2
done
>/dev/stderr echo ""
echo deploying > tags/$TAG/status
sep "Deploying tag $TAG"
pssh -I tee /tmp/settings.yaml <$SETTINGS
# Wait for cloudinit to be done
pssh "
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done"
# Copy settings and install Python YAML parser
pssh -I tee /tmp/settings.yaml <tags/$TAG/settings.yaml
pssh "
sudo apt-get update &&
sudo apt-get install -y python-setuptools &&
sudo easy_install pyyaml"
sudo apt-get install -y python-yaml"
# Copy postprep.py to the remote machines, and execute it, feeding it the list of IP addresses
pssh -I tee /tmp/postprep.py <lib/postprep.py
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <ips.txt
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <tags/$TAG/ips.txt
# Install docker-prompt script
pssh -I sudo tee /usr/local/bin/docker-prompt <lib/docker-prompt
@@ -117,14 +96,17 @@ _cmd_deploy() {
fi"
sep "Deployed tag $TAG"
echo deployed > tags/$TAG/status
info "You may want to run one of the following commands:"
info "$0 kube $TAG"
info "$0 pull_images $TAG"
info "$0 cards $TAG $SETTINGS"
info "$0 cards $TAG"
}
_cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"
_cmd_kube() {
TAG=$1
need_tag
# Install packages
pssh --timeout 200 "
@@ -134,14 +116,16 @@ _cmd_kube() {
sudo tee /etc/apt/sources.list.d/kubernetes.list"
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet kubeadm kubectl
sudo apt-get install -qy kubelet kubeadm kubectl &&
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
# Initialize kube master
pssh --timeout 200 "
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token
sudo kubeadm init --token \$(cat /tmp/token)
kubeadm token generate > /tmp/token &&
sudo kubeadm init \
--token \$(cat /tmp/token) \
--ignore-preflight-errors=SystemVerification
fi"
# Put kubeconfig in ubuntu's and docker's accounts
@@ -157,22 +141,69 @@ _cmd_kube() {
# Install weave as the pod network
pssh "
if grep -q node1 /tmp/node; then
kubever=\$(kubectl version | base64 | tr -d '\n')
kubever=\$(kubectl version | base64 | tr -d '\n') &&
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
fi"
# Join the other nodes to the cluster
pssh --timeout 200 "
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token) &&
sudo kubeadm join \
--discovery-token-unsafe-skip-ca-verification \
--ignore-preflight-errors=SystemVerification \
--token \$TOKEN node1:6443
fi"
# Install kubectx and kubens
pssh "
[ -d kubectx ] || git clone https://github.com/ahmetb/kubectx &&
sudo ln -sf /home/ubuntu/kubectx/kubectx /usr/local/bin/kctx &&
sudo ln -sf /home/ubuntu/kubectx/kubens /usr/local/bin/kns &&
sudo cp /home/ubuntu/kubectx/completion/*.bash /etc/bash_completion.d &&
[ -d kube-ps1 ] || git clone https://github.com/jonmosco/kube-ps1 &&
sudo -u docker sed -i s/docker-prompt/kube_ps1/ /home/docker/.bashrc &&
sudo -u docker tee -a /home/docker/.bashrc <<EOF
. /home/ubuntu/kube-ps1/kube-ps1.sh
KUBE_PS1_PREFIX=""
KUBE_PS1_SUFFIX=""
KUBE_PS1_SYMBOL_ENABLE="false"
KUBE_PS1_CTX_COLOR="green"
KUBE_PS1_NS_COLOR="green"
EOF"
# Install stern
pssh "
if [ ! -x /usr/local/bin/stern ]; then
##VERSION##
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.10.0/stern_linux_amd64 &&
sudo chmod +x /usr/local/bin/stern &&
stern --completion bash | sudo tee /etc/bash_completion.d/stern
fi"
# Install helm
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | sudo bash &&
helm completion bash | sudo tee /etc/bash_completion.d/helm
fi"
sep "Done"
}
_cmd kubetest "Check that all notes are reporting as Ready"
_cmd kubereset "Wipe out Kubernetes configuration on all nodes"
_cmd_kubereset() {
TAG=$1
need_tag
pssh "sudo kubeadm reset --force"
}
_cmd kubetest "Check that all nodes are reporting as Ready"
_cmd_kubetest() {
TAG=$1
need_tag
# There are way too many backslashes in the command below.
# Feel free to make that better ♥
pssh "
@@ -186,7 +217,7 @@ _cmd_kubetest() {
fi"
}
_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd ids "(FIXME) List the instance IDs belonging to a given tag or token"
_cmd_ids() {
TAG=$1
need_tag $TAG
@@ -199,248 +230,264 @@ _cmd_ids() {
aws_get_instance_ids_by_client_token $TAG
}
_cmd ips "List the IP addresses of the VMs for a given tag or token"
_cmd_ips() {
TAG=$1
need_tag $TAG
mkdir -p tags/$TAG
aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
link_tag $TAG
}
_cmd list "List available batches in the current region"
_cmd list "List available groups for a given infrastructure"
_cmd_list() {
info "Listing batches in region $AWS_DEFAULT_REGION:"
aws_display_tags
need_infra $1
infra_list
}
_cmd status "List instance status for a given batch"
_cmd_status() {
info "Using region $AWS_DEFAULT_REGION."
_cmd listall "List VMs running on all configured infrastructures"
_cmd_listall() {
for infra in infra/*; do
case $infra in
infra/example.*)
;;
*)
info "Listing infrastructure $infra:"
need_infra $infra
infra_list
;;
esac
done
}
_cmd netfix "Disable GRO and run a pinger job on the VMs"
_cmd_netfix () {
TAG=$1
need_tag $TAG
describe_tag $TAG
tag_is_reachable $TAG
info "You may be interested in running one of the following commands:"
info "$0 ips $TAG"
info "$0 deploy $TAG <settings/somefile.yaml>"
need_tag
pssh "
sudo ethtool -K ens3 gro off
sudo tee /root/pinger.service <<EOF
[Unit]
Description=pinger
[Install]
WantedBy=multi-user.target
[Service]
WorkingDirectory=/
ExecStart=/bin/ping -w60 1.1
User=nobody
Group=nogroup
Restart=always
EOF
sudo systemctl enable /root/pinger.service
sudo systemctl start pinger"
}
_cmd opensg "Open the default security group to ALL ingress traffic"
_cmd_opensg() {
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
need_infra $1
infra_opensg
}
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
_cmd pssh "Run an arbitrary command on all nodes"
_cmd_pssh() {
TAG=$1
need_tag
shift
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
pssh "$@"
}
_cmd pull_images "Pre-pull a bunch of Docker images"
_cmd_pull_images() {
TAG=$1
need_tag $TAG
pull_tag $TAG
need_tag
pull_tag
}
_cmd retag "Apply a new tag to a batch of VMs"
_cmd quotas "Check our infrastructure quotas (max instances)"
_cmd_quotas() {
need_infra $1
infra_quotas
}
_cmd retag "(FIXME) Apply a new tag to a group of VMs"
_cmd_retag() {
OLDTAG=$1
NEWTAG=$2
need_tag $OLDTAG
TAG=$OLDTAG
need_tag
if [[ -z "$NEWTAG" ]]; then
die "You must specify a new tag to apply."
fi
aws_tag_instances $OLDTAG $NEWTAG
}
_cmd start "Start a batch of VMs"
_cmd start "Start a group of VMs"
_cmd_start() {
# Number of instances to create
COUNT=$1
# Optional settings file (to carry on with deployment)
SETTINGS=$2
while [ ! -z "$*" ]; do
case "$1" in
--infra) INFRA=$2; shift 2;;
--settings) SETTINGS=$2; shift 2;;
--count) COUNT=$2; shift 2;;
--tag) TAG=$2; shift 2;;
*) die "Unrecognized parameter: $1."
esac
done
if [ -z "$INFRA" ]; then
die "Please add --infra flag to specify which infrastructure file to use."
fi
if [ -z "$SETTINGS" ]; then
die "Please add --settings flag to specify which settings file to use."
fi
if [ -z "$COUNT" ]; then
die "Indicate number of instances to start."
COUNT=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
warning "No --count option was specified. Using value from settings file ($COUNT)."
fi
# Check that the specified settings and infrastructure are valid.
need_settings $SETTINGS
need_infra $INFRA
if [ -z "$TAG" ]; then
TAG=$(make_tag)
fi
mkdir -p tags/$TAG
ln -s ../../$INFRA tags/$TAG/infra.sh
ln -s ../../$SETTINGS tags/$TAG/settings.yaml
echo creating > tags/$TAG/status
infra_start $COUNT
sep
info "Successfully created $COUNT instances with tag $TAG"
sep
echo created > tags/$TAG/status
link_tag $TAG
if [ -n "$SETTINGS" ]; then
_cmd_deploy $TAG $SETTINGS
else
info "To deploy or kill these instances, run one of the following:"
info "$0 deploy $TAG <settings/somefile.yaml>"
info "$0 stop $TAG"
fi
}
_cmd ec2quotas "Check our EC2 quotas (max instances)"
_cmd_ec2quotas() {
greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
info "Available regions:"
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
info "To deploy Docker on these instances, you can run:"
info "$0 deploy $TAG"
info "To terminate these instances, you can run:"
info "$0 stop $TAG"
}
_cmd stop "Stop (terminate, shutdown, kill, remove, destroy...) instances"
_cmd_stop() {
TAG=$1
need_tag
infra_stop
echo stopped > tags/$TAG/status
}
_cmd test "Run tests (pre-flight checks) on a batch of VMs"
_cmd tags "List groups of VMs known locally"
_cmd_tags() {
(
cd tags
echo "[#] [Status] [Tag] [Infra]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
for tag in *; do
if [ -f $tag/ips.txt ]; then
count="$(wc -l < $tag/ips.txt)"
else
count="?"
fi
if [ -f $tag/status ]; then
status="$(cat $tag/status)"
else
status="?"
fi
if [ -f $tag/infra.sh ]; then
infra="$(basename $(readlink $tag/infra.sh))"
else
infra="?"
fi
echo "$count $status $tag $infra" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
done
)
}
_cmd test "Run tests (pre-flight checks) on a group of VMs"
_cmd_test() {
TAG=$1
need_tag
test_tag
}
###
_cmd helmprom "Install Helm and Prometheus"
_cmd_helmprom() {
TAG=$1
need_tag
pssh "
if grep -q node1 /tmp/node; then
kubectl -n kube-system get serviceaccount helm ||
kubectl -n kube-system create serviceaccount helm
helm init --service-account helm
kubectl get clusterrolebinding helm-can-do-everything ||
kubectl create clusterrolebinding helm-can-do-everything \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:helm
helm upgrade --install prometheus stable/prometheus \
--namespace kube-system \
--set server.service.type=NodePort \
--set server.service.nodePort=30090 \
--set server.persistentVolume.enabled=false \
--set alertmanager.enabled=false
fi"
}
# Sometimes, weave fails to come up on some nodes.
# Symptom: the pods on a node are unreachable (they don't even ping).
# Remedy: wipe out Weave state and delete weave pod on that node.
# Specifically, identify the weave pod that is defective, then:
# kubectl -n kube-system exec weave-net-XXXXX -c weave rm /weavedb/weave-netdata.db
# kubectl -n kube-system delete pod weave-net-XXXXX
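# A minimal sketch of how that remedy could be wrapped in a helper (hypothetical,
# not part of this script; POD stands for the name of the defective weave-net pod):
#
#   weaveremedy() {
#     POD=$1
#     kubectl -n kube-system exec $POD -c weave -- rm /weavedb/weave-netdata.db
#     kubectl -n kube-system delete pod $POD
#   }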
_cmd weavetest "Check that weave seems properly setup"
_cmd_weavetest() {
TAG=$1
need_tag
pssh "
kubectl -n kube-system get pods -o name | grep weave | cut -d/ -f2 |
xargs -I POD kubectl -n kube-system exec POD -c weave -- \
sh -c \"./weave --local status | grep Connections | grep -q ' 1 failed' || ! echo POD \""
}
greet() {
IAMUSER=$(aws iam get-user --query 'User.UserName')
info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
}
link_tag() {
TAG=$1
need_tag $TAG
IPS_FILE=tags/$TAG/ips.txt
need_ips_file $IPS_FILE
ln -sf $IPS_FILE ips.txt
}
pull_tag() {
TAG=$1
need_tag $TAG
link_tag $TAG
if [ ! -s $IPS_FILE ]; then
die "Nonexistent or empty IPs file $IPS_FILE."
fi
# Pre-pull a bunch of images
pssh --timeout 900 'for I in \
debian:latest \
ubuntu:latest \
fedora:latest \
centos:latest \
elasticsearch:2 \
postgres \
redis \
alpine \
registry \
nicolaka/netshoot \
jpetazzo/trainingwheels \
golang \
training/namer \
dockercoins/hasher \
dockercoins/rng \
dockercoins/webui \
dockercoins/worker \
logstash \
prom/node-exporter \
google/cadvisor \
dockersamples/visualizer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'
info "Finished pulling images for $TAG."
info "You may now want to run:"
info "$0 cards $TAG <settings/somefile.yaml>"
}
wait_until_tag_is_running() {
max_retry=50
TAG=$1
COUNT=$2
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
let "i += 1"
info "$(printf "%d/%d instances online" $done_count $COUNT)"
done_count=$(aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
"Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].State.Name" \
| tr "\t" "\n" \
| wc -l)
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
tag_is_reachable() {
TAG=$1
need_tag $TAG
link_tag $TAG
pssh -t 5 true 2>&1 >/dev/null
}
test_tag() {
TAG=$1
ips_file=tags/$TAG/ips.txt
info "Picking a random IP address in $ips_file to run tests."
ip=$(shuf -n1 $ips_file)
test_vm $ip
info "Tests complete."
}
@@ -516,17 +563,9 @@ sync_keys() {
fi
}
make_tag() {
if [ -z $USER ]; then
export USER=anonymous
fi
date +%Y-%m-%d-%H-%M-$USER
}
describe_tag() {
# Display instance details and reachability/status information
TAG=$1
need_tag $TAG
aws_display_instances_by_tag $TAG
aws_display_instance_statuses_by_tag $TAG
}

prepare-vms/lib/infra.sh Normal file

@@ -0,0 +1,26 @@
# Default stub functions for infrastructure libraries.
# When loading an infrastructure library, these functions will be overridden.
infra_list() {
warning "infra_list is unsupported on $INFRACLASS."
}
infra_quotas() {
warning "infra_quotas is unsupported on $INFRACLASS."
}
infra_start() {
warning "infra_start is unsupported on $INFRACLASS."
}
infra_stop() {
warning "infra_stop is unsupported on $INFRACLASS."
}
infra_opensg() {
warning "infra_opensg is unsupported on $INFRACLASS."
}
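# A real infrastructure library would (presumably) set INFRACLASS and override
# the stubs above. Hypothetical sketch, not an actual file from this repo:
#
#   INFRACLASS=mycloud
#   infra_start() {
#     COUNT=$1
#     # ...create $COUNT VMs, then write their addresses (one per line)
#     # to tags/$TAG/ips.txt
#   }
#   infra_stop() {
#     # ...terminate the VMs belonging to $TAG
#   }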


@@ -0,0 +1,206 @@
infra_list() {
aws_display_tags
}
infra_quotas() {
greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
info "Available regions:"
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
}
infra_start() {
COUNT=$1
# Print our AWS username, to ease the pain of credential-juggling
greet
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
key_name=$(sync_keys)
AMI=$(aws_get_ami) # Retrieve the AWS image ID
if [ -z "$AMI" ]; then
die "I could not find which AMI to use in this region. Try another region?"
fi
AWS_KEY_NAME=$(make_key_name)
sep "Starting instances"
info " Count: $COUNT"
info " Region: $AWS_DEFAULT_REGION"
info " Token/tag: $TAG"
info " AMI: $AMI"
info " Key name: $AWS_KEY_NAME"
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
--client-token $TAG \
--block-device-mapping 'DeviceName=/dev/sda1,Ebs={VolumeSize=20}' \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
info "Reservation ID: $reservation_id"
sep
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TAG)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
aws_tag_instances $TAG $TAG
# Wait until EC2 API tells us that the instances are running
wait_until_tag_is_running $TAG $COUNT
aws_get_instance_ips_by_tag $TAG > tags/$TAG/ips.txt
}
infra_stop() {
aws_kill_instances_by_tag
}
infra_opensg() {
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
}
wait_until_tag_is_running() {
max_retry=50
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
let "i += 1"
info "$(printf "%d/%d instances online" $done_count $COUNT)"
done_count=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
"Name=instance-state-name,Values=running" \
--query "length(Reservations[].Instances[])")
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
aws_display_tags() {
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| uniq -c \
| sort -k 3 \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')
aws ec2 describe-instance-status \
--instance-ids $IDS \
--query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
--output table
}
aws_display_instances_by_tag() {
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "$result"
fi
}
aws_get_instance_ids_by_filter() {
FILTER=$1
aws ec2 describe-instances --filters $FILTER \
--query Reservations[*].Instances[*].InstanceId \
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}
aws_get_instance_ids_by_tag() {
aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}
aws_get_instance_ips_by_tag() {
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
IDS=$(aws_get_instance_ids_by_tag $TAG)
if [ -z "$IDS" ]; then
die "Invalid tag."
fi
info "Deleting instances with tag $TAG."
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
info "Deleted instances with tag $TAG."
}
aws_tag_instances() {
OLD_TAG_OR_TOKEN=$1
NEW_TAG=$2
IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}
aws_get_ami() {
##VERSION##
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 18.04 -t hvm:ebs -N -q
}


@@ -0,0 +1,8 @@
infra_start() {
COUNT=$1
info "You should now run your provisioning commands for $COUNT machines."
info "Note: no machines have been automatically created!"
info "Once done, put the list of IP addresses in tags/$TAG/ips.txt"
info "(one IP address per line, without any comments or extra lines)."
touch tags/$TAG/ips.txt
}
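# For reference, the resulting tags/$TAG/ips.txt is just one address per line;
# the addresses below are placeholders:
#
#   203.0.113.11
#   203.0.113.12
#   203.0.113.13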


@@ -0,0 +1,20 @@
infra_start() {
COUNT=$1
cp terraform/*.tf tags/$TAG
(
cd tags/$TAG
terraform init
echo prefix = \"$TAG\" >> terraform.tfvars
echo count = \"$COUNT\" >> terraform.tfvars
terraform apply -auto-approve
terraform output ip_addresses > ips.txt
)
}
infra_stop() {
(
cd tags/$TAG
terraform destroy -auto-approve
)
}
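# For reference, the terraform.tfvars generated above ends up looking like this
# (values are illustrative):
#
#   prefix = "2018-11-08-14-30-jerome"
#   count = "4"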


@@ -31,7 +31,13 @@ while ips:
clusters.append(cluster)
template_file_name = SETTINGS["cards_template"]
template_file_path = os.path.join(
os.path.dirname(__file__),
"..",
"templates",
template_file_name
)
template = jinja2.Template(open(template_file_path).read())
with open("ips.html", "w") as f:
f.write(template.render(clusters=clusters, **SETTINGS))
print("Generated ips.html")


@@ -13,6 +13,7 @@ COMPOSE_VERSION = config["compose_version"]
MACHINE_VERSION = config["machine_version"]
CLUSTER_SIZE = config["clustersize"]
ENGINE_VERSION = config["engine_version"]
DOCKER_USER_PASSWORD = config["docker_user_password"]
#################################
@@ -54,9 +55,9 @@ system("curl --silent {} > /tmp/ipv4".format(ipv4_retrieval_endpoint))
ipv4 = open("/tmp/ipv4").read()
# Add a "docker" user with password "training"
# Add a "docker" user with password coming from the settings
system("id docker || sudo useradd -d /home/docker -m -s /bin/bash docker")
system("echo docker:training | sudo chpasswd")
system("echo docker:{} | sudo chpasswd".format(DOCKER_USER_PASSWORD))
# Fancy prompt courtesy of @soulshake.
system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL
@@ -82,7 +83,7 @@ system("sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /e
system("sudo service ssh restart")
system("sudo apt-get -q update")
system("sudo apt-get -qy install git jq python-pip")
system("sudo apt-get -qy install git jq")
#######################
### DOCKER INSTALLS ###
@@ -97,7 +98,6 @@ system("sudo apt-get -q update")
system("sudo apt-get -qy install docker-ce")
### Install docker-compose
#system("sudo pip install -U docker-compose=={}".format(COMPOSE_VERSION))
system("sudo curl -sSL -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/{}/docker-compose-{}-{}".format(COMPOSE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-compose")
system("docker-compose version")
@@ -108,7 +108,7 @@ system("sudo chmod +x /usr/local/bin/docker-machine")
system("docker-machine version")
system("sudo apt-get remove -y --purge dnsmasq-base")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh tree")
### Wait for Docker to be up.
### (If we don't do this, Docker will not be responsive during the next step.)


@@ -1,12 +1,17 @@
# This file can be sourced in order to directly run commands on
# a group of VMs whose IPs are located in ips.txt of the directory in which
# the command is run.
pssh() {
HOSTFILE="ips.txt"
if [ -z "$TAG" ]; then
>/dev/stderr echo "Variable \$TAG is not set."
return
fi
HOSTFILE="tags/$TAG/ips.txt"
[ -f $HOSTFILE ] || {
>/dev/stderr echo "No hostfile found at $HOSTFILE"
>/dev/stderr echo "Hostfile $HOSTFILE not found."
return
}


@@ -0,0 +1,25 @@
# Number of VMs per cluster
clustersize: 5
# Jinja2 template to use to generate ready-to-cut cards
cards_template: enix.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.22.0
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -22,3 +22,6 @@ engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.18.0
machine_version: 0.13.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -17,8 +17,11 @@ paper_margin: 0.2in
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.22.0
machine_version: 0.15.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -0,0 +1,25 @@
# Number of VMs per cluster
clustersize: 4
# Jinja2 template to use to generate ready-to-cut cards
cards_template: jerome.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -4,7 +4,7 @@
clustersize: 3
# Jinja2 template to use to generate ready-to-cut cards
cards_template: kube101.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
@@ -20,5 +20,8 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -1,7 +1,7 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
# Number of VMs per cluster
clustersize: 3
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
@@ -20,5 +20,8 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.22.0
machine_version: 0.15.0
# Password used to connect with the "docker user"
docker_user_password: training


@@ -85,7 +85,7 @@ img {
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>


@@ -0,0 +1,117 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://septembre2018.container.training" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 30%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.3em;
}
img.enix {
height: 4.5em;
margin-top: 0.2em;
}
img.kube {
height: 4.2em;
margin-top: 1.7em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Voici les informations permettant de se connecter à votre
cluster pour cette formation. Vous pouvez vous connecter
à ces machines virtuelles avec n'importe quel client SSH.
</p>
<p>
<img class="enix" src="https://enix.io/static/img/logos/logo-domain-cropped.png" />
<table>
<tr><td>identifiant:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>mot de passe:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
Vos serveurs sont :
<img class="kube" src="{{ image_src }}" />
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>Le support de formation est à l'adresse suivante :
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>


@@ -0,0 +1,131 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://qconsf2018.container.training/" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
@import url('https://fonts.googleapis.com/css?family=Slabo+27px');
body, table {
margin: 0;
padding: 0;
line-height: 1.0em;
font-size: 15px;
font-family: 'Slabo 27px';
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
height: 31%;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 30%;
padding-left: 1.5%;
padding-right: 1.5%;
}
div.back {
border: 1px dotted white;
}
div.back p {
margin: 0.5em 1em 0 1em;
}
p {
margin: 0.4em 0 0.8em 0;
}
img {
height: 5em;
float: right;
margin-right: 1em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% if loop.index%pagesize==0 or loop.last %}
<span class="pagebreak"></span>
{% for x in range(pagesize) %}
<div class="back">
<br/>
<p>You got this card at the workshop "Getting Started With Kubernetes and Container Orchestration"
during QCON San Francisco (November 2018).</p>
<p>That workshop was a 1-day version of a longer curriculum.</p>
<p>If you liked that workshop, the instructor (Jérôme Petazzoni) can deliver it
(or the longer version) to your team or organization.</p>
<p>You can reach him at:</p>
<p>jerome.petazzoni@gmail.com</p>
<p>Thank you!</p>
</div>
{% endfor %}
<span class="pagebreak"></span>
{% endif %}
{% endfor %}
</body>
</html>


@@ -85,7 +85,7 @@ img {
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>


@@ -0,0 +1,5 @@
resource "openstack_compute_keypair_v2" "ssh_deploy_key" {
name = "${var.prefix}"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}


@@ -0,0 +1,32 @@
resource "openstack_compute_instance_v2" "machine" {
count = "${var.count}"
name = "${format("%s-%04d", "${var.prefix}", count.index+1)}"
image_name = "Ubuntu 16.04.5 (Xenial Xerus)"
flavor_name = "${var.flavor}"
security_groups = ["${openstack_networking_secgroup_v2.full_access.name}"]
key_pair = "${openstack_compute_keypair_v2.ssh_deploy_key.name}"
network {
name = "${openstack_networking_network_v2.internal.name}"
fixed_ip_v4 = "${cidrhost("${openstack_networking_subnet_v2.internal.cidr}", count.index+10)}"
}
}
resource "openstack_compute_floatingip_v2" "machine" {
count = "${var.count}"
# This is something provided to us by Enix when our tenant was provisioned.
pool = "Public Floating"
}
resource "openstack_compute_floatingip_associate_v2" "machine" {
count = "${var.count}"
floating_ip = "${openstack_compute_floatingip_v2.machine.*.address[count.index]}"
instance_id = "${openstack_compute_instance_v2.machine.*.id[count.index]}"
fixed_ip = "${cidrhost("${openstack_networking_subnet_v2.internal.cidr}", count.index+10)}"
}
output "ip_addresses" {
value = "${join("\n", openstack_compute_floatingip_v2.machine.*.address)}"
}
variable "flavor" {}


@@ -0,0 +1,23 @@
resource "openstack_networking_network_v2" "internal" {
name = "${var.prefix}"
}
resource "openstack_networking_subnet_v2" "internal" {
name = "${var.prefix}"
network_id = "${openstack_networking_network_v2.internal.id}"
cidr = "10.10.0.0/16"
ip_version = 4
dns_nameservers = ["1.1.1.1"]
}
resource "openstack_networking_router_v2" "router" {
name = "${var.prefix}"
external_network_id = "15f0c299-1f50-42a6-9aff-63ea5b75f3fc"
}
resource "openstack_networking_router_interface_v2" "router_internal" {
router_id = "${openstack_networking_router_v2.router.id}"
subnet_id = "${openstack_networking_subnet_v2.internal.id}"
}


@@ -0,0 +1,13 @@
provider "openstack" {
user_name = "${var.user}"
tenant_name = "${var.tenant}"
domain_name = "${var.domain}"
password = "${var.password}"
auth_url = "${var.auth_url}"
}
variable "user" {}
variable "tenant" {}
variable "domain" {}
variable "password" {}
variable "auth_url" {}


@@ -0,0 +1,12 @@
resource "openstack_networking_secgroup_v2" "full_access" {
name = "${var.prefix} - full access"
}
resource "openstack_networking_secgroup_rule_v2" "full_access" {
direction = "ingress"
ethertype = "IPv4"
protocol = ""
remote_ip_prefix = "0.0.0.0/0"
security_group_id = "${openstack_networking_secgroup_v2.full_access.id}"
}


@@ -0,0 +1,8 @@
variable "prefix" {
type = "string"
}
variable "count" {
type = "string"
}


@@ -1,20 +1,19 @@
#!/bin/bash
# Get the script's real directory.
# This should work whether we're being called directly or via a symlink.
if [ -L "$0" ]; then
export SCRIPT_DIR=$(dirname $(readlink "$0"))
else
export SCRIPT_DIR=$(dirname "$0")
fi
# Load all scriptlets.
cd "$SCRIPT_DIR"
for lib in lib/*.sh; do
. $lib
done
TRAINER_IMAGE="preparevms_prepare-vms"
DEPENDENCIES="
aws
ssh
@@ -25,49 +24,26 @@ DEPENDENCIES="
man
"
ENVVARS="
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
SSH_AUTH_SOCK
"
# Check for missing dependencies, and issue a warning if necessary.
missing=0
for dependency in $DEPENDENCIES; do
if ! command -v $dependency >/dev/null; then
warning "Dependency $dependency could not be found."
missing=1
fi
done
if [ $missing = 1 ]; then
warning "At least one dependency is missing. Install it or try the image wrapper."
fi
# Check if SSH_AUTH_SOCK is set.
# (If it's not, deployment will almost certainly fail.)
if [ -z "${SSH_AUTH_SOCK}" ]; then
warning "Environment variable SSH_AUTH_SOCK is not set."
warning "Hint: run 'eval \$(ssh-agent) ; ssh-add' and try again?"
fi
check_image() {
docker inspect $TRAINER_IMAGE >/dev/null 2>&1
}
# Now check which command was invoked and execute it.
if [ "$1" ]; then
cmd="$1"
shift
@@ -77,6 +53,3 @@ fi
fun=_cmd_$cmd
type -t $fun | grep -q function || die "Invalid command: $cmd"
$fun "$@"
# export SSH_AUTH_DIRNAME=$(dirname $SSH_AUTH_SOCK)
# docker-compose run prepare-vms "$@"

slides/_redirects Normal file

@@ -0,0 +1 @@
/ /kube-fullday.yml.html 200!


@@ -29,6 +29,10 @@ class State(object):
self.interactive = True
self.verify_status = False
self.simulate_type = True
self.switch_desktop = False
self.sync_slides = False
self.open_links = False
self.run_hidden = True
self.slide = 1
self.snippet = 0
@@ -37,6 +41,10 @@ class State(object):
self.interactive = bool(data["interactive"])
self.verify_status = bool(data["verify_status"])
self.simulate_type = bool(data["simulate_type"])
self.switch_desktop = bool(data["switch_desktop"])
self.sync_slides = bool(data["sync_slides"])
self.open_links = bool(data["open_links"])
self.run_hidden = bool(data["run_hidden"])
self.slide = int(data["slide"])
self.snippet = int(data["snippet"])
@@ -46,6 +54,10 @@ class State(object):
interactive=self.interactive,
verify_status=self.verify_status,
simulate_type=self.simulate_type,
switch_desktop=self.switch_desktop,
sync_slides=self.sync_slides,
open_links=self.open_links,
run_hidden=self.run_hidden,
slide=self.slide,
snippet=self.snippet,
), f, default_flow_style=False)
@@ -122,14 +134,20 @@ class Slide(object):
def focus_slides():
if not state.switch_desktop:
return
subprocess.check_output(["i3-msg", "workspace", "3"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def focus_terminal():
if not state.switch_desktop:
return
subprocess.check_output(["i3-msg", "workspace", "2"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def focus_browser():
if not state.switch_desktop:
return
subprocess.check_output(["i3-msg", "workspace", "4"])
subprocess.check_output(["i3-msg", "workspace", "1"])
@@ -205,7 +223,7 @@ def check_exit_status():
def setup_tmux_and_ssh():
if subprocess.call(["tmux", "has-session"]):
logging.error("Couldn't connect to tmux. Please setup tmux first.")
ipaddr = open("../../prepare-vms/ips.txt").read().split("\n")[0]
ipaddr = "$IPADDR"
uid = os.getuid()
raise Exception("""
@@ -307,17 +325,21 @@ while True:
slide = slides[state.slide]
snippet = slide.snippets[state.snippet-1] if state.snippet else None
click.clear()
print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}]"
print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}] "
"[switch_desktop:{}] [sync_slides:{}] [open_links:{}] [run_hidden:{}]"
.format(state.slide, len(slides)-1,
state.snippet, len(slide.snippets) if slide.snippets else 0,
state.simulate_type, state.verify_status,
state.switch_desktop, state.sync_slides,
state.open_links, state.run_hidden))
print(hrule())
if snippet:
print(slide.content.replace(snippet.content, ansi(7)(snippet.content)))
focus_terminal()
else:
print(slide.content)
subprocess.check_output(["./gotoslide.js", str(slide.number)])
if state.sync_slides:
subprocess.check_output(["./gotoslide.js", str(slide.number)])
focus_slides()
print(hrule())
if state.interactive:
@@ -326,6 +348,10 @@ while True:
print("n/→ Next")
print("s Simulate keystrokes")
print("v Validate exit status")
print("d Switch desktop")
print("k Sync slides")
print("o Open links")
print("h Run hidden commands")
print("g Go to a specific slide")
print("q Quit")
print("c Continue non-interactively until next error")
@@ -341,6 +367,14 @@ while True:
state.simulate_type = not state.simulate_type
elif command == "v":
state.verify_status = not state.verify_status
elif command == "d":
state.switch_desktop = not state.switch_desktop
elif command == "k":
state.sync_slides = not state.sync_slides
elif command == "o":
state.open_links = not state.open_links
elif command == "h":
state.run_hidden = not state.run_hidden
elif command == "g":
state.slide = click.prompt("Enter slide number", type=int)
state.snippet = 0
@@ -366,7 +400,7 @@ while True:
logging.info("Running with method {}: {}".format(method, data))
if method == "keys":
send_keys(data)
elif method == "bash":
elif method == "bash" or (method == "hide" and state.run_hidden):
# Make sure that we're ready
wait_for_prompt()
# Strip leading spaces
@@ -405,11 +439,12 @@ while True:
screen = capture_pane()
url = data.replace("/node1", "/{}".format(IPADDR))
# This should probably be adapted to run on different OS
subprocess.check_output(["xdg-open", url])
focus_browser()
if state.interactive:
print("Press any key to continue to next step...")
click.getchar()
if state.open_links:
subprocess.check_output(["xdg-open", url])
focus_browser()
if state.interactive:
print("Press any key to continue to next step...")
click.getchar()
else:
logging.warning("Unknown method {}: {!r}".format(method, data))
move_forward()


@@ -0,0 +1 @@
click


@@ -1,6 +1,8 @@
#!/bin/sh
set -e
case "$1" in
once)
./index.py
for YAML in *.yml; do
./markmaker.py $YAML > $YAML.html || {
rm $YAML.html
@@ -15,6 +17,13 @@ once)
;;
forever)
set +e
# check if entr is installed
if ! command -v entr >/dev/null; then
echo >&2 "First install 'entr' with apt, brew, etc."
exit
fi
# There is a weird bug in entr, at least on MacOS,
# where it doesn't restore the terminal to a clean
# state when exitting. So let's try to work around


@@ -1,11 +0,0 @@
class: title, self-paced
Thank you!
---
class: title, in-person
That's all, folks! <br/> Questions?
![end](images/end.jpg)


@@ -1,3 +1,6 @@
class: title
# Advanced Dockerfiles
![construction](images/title-advanced-dockerfiles.jpg)
@@ -34,18 +37,6 @@ In this section, we will see more Dockerfile commands.
---
## The `MAINTAINER` instruction
The `MAINTAINER` instruction tells you who wrote the `Dockerfile`.
```dockerfile
MAINTAINER Docker Education Team <education@docker.com>
```
It's optional but recommended.
---
## The `RUN` instruction
The `RUN` instruction can be specified in two ways.
@@ -367,7 +358,7 @@ class: extra-details
## Overriding the `ENTRYPOINT` instruction
The entry point can be overridden as well.
```bash
$ docker run -it training/ls
@@ -428,5 +419,4 @@ ONBUILD COPY . /src
```
* You can't chain `ONBUILD` instructions with `ONBUILD`.
* `ONBUILD` can't be used to trigger `FROM` instructions.


@@ -40,6 +40,8 @@ ambassador containers.
---
class: pic
![ambassador](images/ambassador-diagram.png)
---


@@ -117,7 +117,7 @@ CONTAINER ID IMAGE ... CREATED STATUS ...
Many Docker commands will work on container IDs: `docker stop`, `docker rm`...
If we want to list only the IDs of our containers (without the other columns
or the header line),
we can use the `-q` ("Quiet", "Quick") flag:


@@ -49,7 +49,7 @@ Before diving in, let's see a small example of Compose in action.
---
## Compose in action
class: pic
![composeup](images/composeup.gif)
@@ -60,6 +60,10 @@ Before diving in, let's see a small example of Compose in action.
If you are using the official training virtual machines, Compose has been
pre-installed.
If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them.
If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`.
You can always check that it is installed by running:
```bash
@@ -135,22 +139,33 @@ services:
---
## Compose file structure
A Compose file has multiple sections:
* `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.)
* `services` is mandatory. A service is one or more replicas of the same image running as containers.
* `networks` is optional and indicates to which networks containers should be connected.
<br/>(By default, containers will be connected on a private, per-compose-file network.)
* `volumes` is optional and can define volumes to be used and/or shared by the containers.
---
## Compose file versions
* Version 1 is legacy and shouldn't be used.
(If you see a Compose file without `version` and `services`, it's a legacy v1 file.)
* Version 2 added support for networks and volumes.
* Version 3 added support for deployment options (scaling, rolling updates, etc).
The [Docker documentation](https://docs.docker.com/compose/compose-file/)
has excellent information about the Compose file format if you need to know more about versions.
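To check that a Compose file is valid and see its fully-resolved form (a hedged example, assuming a `docker-compose.yml` sits in the current directory):

```bash
$ docker-compose config
```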
---
@@ -260,6 +275,8 @@ Removing trainingwheels_www_1 ... done
Removing trainingwheels_redis_1 ... done
```
Use `docker-compose down -v` to remove everything including volumes.
---
## Special handling of volumes


@@ -73,7 +73,7 @@ Containers also exist (sometimes with other names) on Windows, macOS, Solaris, F
## LXC
* The venerable ancestor (first released in 2008).
* Docker initially relied on it to execute containers.


@@ -65,9 +65,17 @@ eb0eeab782f4 host host
* A network is managed by a *driver*.
* The built-in drivers include:
* `bridge` (default)
* `none`
* `host`
* `macvlan`
* A multi-host driver, *overlay*, is available out of the box (for Swarm clusters).
* More drivers can be provided by plugins (OVS, VLAN...)
@@ -75,6 +83,8 @@ eb0eeab782f4 host host
---
class: extra-details
## Differences with the CNI
* CNI = Container Network Interface
@@ -87,6 +97,30 @@ eb0eeab782f4 host host
---
class: pic
## Single container in a Docker network
![bridge0](images/bridge1.png)
---
class: pic
## Two containers on a single Docker network
![bridge2](images/bridge2.png)
---
class: pic
## Two containers on two Docker networks
![bridge3](images/bridge3.png)
---
## Creating a network
Let's create a network called `dev`.
@@ -284,7 +318,7 @@ since we wiped out the old Redis container).
---
class: extra-details
## Names are *local* to each network
@@ -324,7 +358,7 @@ class: extra-details
Create the `prod` network.
```bash
$ docker network create prod
5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c
```
@@ -472,11 +506,13 @@ b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09
* If containers span multiple hosts, we need an *overlay* network to connect them together.
* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging
VXLAN, *enabled with Swarm Mode*.
* Other plugins (Weave, Calico...) can provide overlay networks as well.
* Once you have an overlay network, *all the features that we've used in this chapter work identically
across multiple hosts.*
---
@@ -514,13 +550,174 @@ General idea:
---
## Connecting and disconnecting dynamically
* So far, we have specified which network to use when starting the container.
* The Docker Engine also allows us to connect and disconnect while the container runs.
* This feature is exposed through the Docker API, and through two Docker CLI commands:
* `docker network connect <network> <container>`
* `docker network disconnect <network> <container>`
---
## Dynamically connecting to a network
* We have a container named `es` connected to a network named `dev`.
* Let's start a simple alpine container on the default network:
```bash
$ docker run -ti alpine sh
/ #
```
* In this container, try to ping the `es` container:
```bash
/ # ping es
ping: bad address 'es'
```
This doesn't work, but we will change that by connecting the container.
---
## Finding the container ID and connecting it
* Figure out the ID of our alpine container; here are two methods:
* looking at `/etc/hostname` in the container,
* running `docker ps -lq` on the host.
* Run the following command on the host:
```bash
$ docker network connect dev `<container_id>`
```
---
## Checking what we did
* Try again to `ping es` from the container.
* It should now work correctly:
```bash
/ # ping es
PING es (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms
^C
```
* Interrupt it with Ctrl-C.
---
## Looking at the network setup in the container
We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`:
.small[
```bash
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1
valid_lft forever preferred_lft forever
/ #
```
]
Each network connection is materialized with a virtual network interface.
As we can see, we can be connected to multiple networks at the same time.
---
## Disconnecting from a network
* Let's try the symmetrical command to disconnect the container:
```bash
$ docker network disconnect dev <container_id>
```
* From now on, if we try to ping `es`, it will not resolve:
```bash
/ # ping es
ping: bad address 'es'
```
* Trying to ping the IP address directly won't work either:
```bash
/ # ping 172.20.0.3
... (nothing happens until we interrupt it with Ctrl-C)
```
---
class: extra-details
## Network aliases are scoped per network
* Each network has its own set of network aliases.
* We saw this earlier: `es` resolves to different addresses in `dev` and `prod`.
* If we are connected to multiple networks, the resolver looks up names in each of them
(as of Docker Engine 18.03, it is the connection order) and stops as soon as the name
is found.
* Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not**
give us the addresses of all the `es` services; but only the ones in `dev` or `prod`.
* However, we can lookup `es.dev` or `es.prod` if we need to.
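For example (a hedged sketch reusing the `es`, `dev`, and `prod` setup from this chapter; `both` is just an arbitrary container name), from a container attached to both networks:

```bash
$ docker run -d --name both --net dev alpine sleep 1d
$ docker network connect prod both
$ docker exec both nslookup es.dev
$ docker exec both nslookup es.prod
```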
---
class: extra-details
## Finding out about our networks and names
* We can do reverse DNS lookups on containers' IP addresses.
* If the IP address belongs to a network (other than the default bridge), the result will be:
```
name-or-first-alias-or-container-id.network-name
```
* Example:
.small[
```bash
$ docker run -ti --net prod --net-alias hello alpine
/ # apk add --no-cache drill
...
OK: 5 MiB in 13 packages
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03
inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
...
/ # drill -t ptr `3.0.21.172`.in-addr.arpa
...
;; ANSWER SECTION:
3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`.
...
```
]


@@ -98,7 +98,7 @@ $ curl localhost:32768
* We can see that metadata with `docker inspect`:
```bash
$ docker inspect --format '{{.Config.ExposedPorts}}' nginx
map[80/tcp:{}]
```


@@ -64,7 +64,7 @@ Create this Dockerfile.
## Testing our C program
* Create `hello.c` and `Dockerfile` in the same directory.
* Run `docker build -t hello .` in this directory.


@@ -10,10 +10,12 @@
* [Solaris Containers (2004)](https://en.wikipedia.org/wiki/Solaris_Containers)
* [FreeBSD jails (1999-2000)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)
Containers have been around for a *very long time* indeed.
(See [this excellent blog post by Serge Hallyn](https://s3hh.wordpress.com/2018/03/22/history-of-containers/) for more historic details.)
---
class: pic


@@ -30,7 +30,7 @@
## Environment variables
- Most of the tools (CLI, libraries...) connecting to the Docker API can use environment variables.
- These variables are:
@@ -40,7 +40,7 @@
- `DOCKER_CERT_PATH` (path to the keypair and certificate to use for auth)
- `docker-machine env ...` will generate the variables needed to connect to a host.
- `$(eval docker-machine env ...)` sets these variables in the current shell.
@@ -50,7 +50,7 @@
With `docker-machine`, we can:
- upgrade a host to the latest version of the Docker Engine,
- start/stop/restart hosts,


@@ -0,0 +1,361 @@
# Tips for efficient Dockerfiles
We will see how to:
* Reduce the number of layers.
* Leverage the build cache so that builds can be faster.
* Embed unit testing in the build process.
---
## Reducing the number of layers
* Each line in a `Dockerfile` creates a new layer.
* Build your `Dockerfile` to take advantage of Docker's caching system.
* Combine commands by using `&&` to continue commands and `\` to wrap lines.
Note: it is common to build a Dockerfile line by line:
```dockerfile
RUN apt-get install thisthing
RUN apt-get install andthatthing andthatotherone
RUN apt-get install somemorestuff
```
And then refactor it trivially before shipping:
```dockerfile
RUN apt-get install thisthing andthatthing andthatotherone somemorestuff
```
---
## Avoid re-installing dependencies at each build
* Classic Dockerfile problem:
"each time I change a line of code, all my dependencies are re-installed!"
* Solution: `COPY` dependency lists (`package.json`, `requirements.txt`, etc.)
by themselves to avoid reinstalling unchanged dependencies every time.
---
## Example "bad" `Dockerfile`
The dependencies are reinstalled every time, because the build system does not know if `requirements.txt` has been updated.
```bash
FROM python
WORKDIR /src
COPY . .
RUN pip install -qr requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
```
---
## Fixed `Dockerfile`
Adding the dependencies as a separate step means that Docker can cache more efficiently and only install them when `requirements.txt` changes.
```bash
FROM python
COPY requirements.txt /tmp/requirements.txt
RUN pip install -qr /tmp/requirements.txt
WORKDIR /src
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```
---
## Embedding unit tests in the build process
```dockerfile
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
RUN <install test dependencies>
COPY <test data sets and fixtures>
RUN <unit tests>
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
CMD, EXPOSE ...
```
* The build fails as soon as an instruction fails
* If `RUN <unit tests>` fails, the build doesn't produce an image
* If it succeeds, it produces a clean image (without test libraries and data)
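In practice (a hedged example, assuming the multi-stage Dockerfile above is saved as `Dockerfile` in the current directory), a single build command runs the tests and tags only the final, test-free stage; with the classic builder, stages are built in order, so the test stage always runs:

```bash
$ docker build -t myapp .
```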
---
# Dockerfile examples
There are a number of tips, tricks, and techniques that we can use in Dockerfiles.
But sometimes, we have to use different (and even opposed) practices depending on:
- the complexity of our project,
- the programming language or framework that we are using,
- the stage of our project (early MVP vs. super-stable production),
- whether we're building a final image or a base for further images,
- etc.
We are going to show a few examples using very different techniques.
---
## When to optimize an image
When authoring official images, it is a good idea to reduce as much as possible:
- the number of layers,
- the size of the final image.
This is often done at the expense of build time and convenience for the image maintainer;
but when an image is downloaded millions of times, saving even a few seconds of pull time
can be worth it.
.small[
```dockerfile
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-install gd
...
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
&& echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
&& tar -xzf wordpress.tar.gz -C /usr/src/ \
&& rm wordpress.tar.gz \
&& chown -R www-data:www-data /usr/src/wordpress
```
]
(Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile))
---
## When to *not* optimize an image
Sometimes, it is better to prioritize *maintainer convenience*.
In particular, if:
- the image changes a lot,
- the image has very few users (e.g. only 1, the maintainer!),
- the image is built and run on the same machine,
- the image is built and run on machines with a very fast link ...
In these cases, just keep things simple!
(Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.)
---
```dockerfile
FROM debian:sid
RUN apt-get update -q
RUN apt-get install -yq build-essential make
RUN apt-get install -yq zlib1g-dev
RUN apt-get install -yq ruby ruby-dev
RUN apt-get install -yq python-pygments
RUN apt-get install -yq nodejs
RUN apt-get install -yq cmake
RUN gem install --no-rdoc --no-ri github-pages
COPY . /blog
WORKDIR /blog
VOLUME /blog/_site
EXPOSE 4000
CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"]
```
---
## Multi-dimensional versioning systems
Images can have a tag, indicating the version of the image.
But sometimes, there are multiple important components, and we need to indicate the versions
for all of them.
This can be done with environment variables:
```dockerfile
ENV PIP=9.0.3 \
ZC_BUILDOUT=2.11.2 \
SETUPTOOLS=38.7.0 \
PLONE_MAJOR=5.1 \
PLONE_VERSION=5.1.0 \
PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d
```
(Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile))
---
## Entrypoints and wrappers
It is very common to define a custom entrypoint.
That entrypoint will generally be a script, performing any combination of:
- pre-flights checks (if a required dependency is not available, display
a nice error message early instead of an obscure one in a deep log file),
- generation or validation of configuration files,
- dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`),
- and more.
---
## A typical entrypoint script
```dockerfile
#!/bin/sh
set -e
# first arg is '-f' or '--some-option'
# or first arg is 'something.conf'
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
set -- redis-server "$@"
fi
# allow the container to be started with '--user'
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
chown -R redis .
exec su-exec redis "$0" "$@"
fi
exec "$@"
```
(Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh))
---
## Factoring information
To facilitate maintenance (and avoid human errors), avoid repeating information like:
- version numbers,
- remote asset URLs (e.g. source tarballs) ...
Instead, use environment variables.
.small[
```dockerfile
ENV NODE_VERSION 10.2.1
...
RUN ...
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xf "node-v$NODE_VERSION.tar.xz" \
&& cd "node-v$NODE_VERSION" \
...
```
]
(Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile))
---
## Overrides
In theory, development and production images should be the same.
In practice, we often need to enable specific behaviors in development (e.g. debug statements).
One way to reconcile both needs is to use Compose to enable these behaviors.
Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.
---
## Production image
This Dockerfile builds an image leveraging gunicorn:
```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```
(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))
---
## Development Compose file
This Compose file uses the same image, but with a few overrides for development:
- the Flask development server is used (overriding `CMD`),
- the `DEBUG` environment variable is set,
- a volume is used to provide a faster local development workflow.
.small[
```yaml
services:
www:
build: www
ports:
- 8000:5000
user: nobody
environment:
DEBUG: 1
command: python counter.py
volumes:
- ./www:/src
```
]
(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))
---
## How do we know which best practices are better?
- The main goal of containers is to make our lives easier.
- In this chapter, we showed many ways to write Dockerfiles.
- These Dockerfiles sometimes use diametrically opposed techniques.
- Yet, they were the "right" ones *for a specific situation.*
- It's OK (and even encouraged) to start simple and evolve as needed.
- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!

---
class: in-person
## Counting packages in the container
Let's check how many packages are installed there.
---
class: in-person
## Counting packages on the host
Exit the container by logging out of the shell, like you would usually do.
---
class: self-paced
## Comparing the container and the host
Exit the container by logging out of the shell, with `^D` or `exit`.
Now try to run `figlet`. Does that work?
(It shouldn't, unless by coincidence you are running on a machine where figlet was installed before.)
---
## Host and containers are independent things
* We ran an `ubuntu` container on a Linux/Windows/macOS host.
* They have different, independent packages.
* Installing something on the host doesn't expose it to the container.
* And vice-versa.
* Even if both the host and the container have the same Linux distro!
* We can run *any container* on *any host*.
(One exception: Windows containers cannot run on Linux machines; at least not yet.)
---
## Where's our container?

---
class: title
# Getting inside a container
`docker run jpetazzo/crashtest`
The container starts, but then stops immediately, without any output.
What would MacGyver&trade; do?
First, let's check the status of that container.

---
## Example for a Java webapp
Each of the following items will correspond to one layer:
* CentOS base layer
* Packages and configuration files added by our local IT
* JRE
---
class: pic
## The read-write layer
![layers](images/container-layers.jpg)
---
class: pic
## Multiple containers sharing the same image
![layers](images/sharing-layers.jpg)
---
## Differences between containers and images
* An image is a read-only filesystem.
* A container is an encapsulated set of processes running in a
read-write copy of that filesystem.
* To optimize container boot time, *copy-on-write* is used
instead of regular copy.
* `docker run` starts a container from a given image.
Let's give a couple of metaphors to illustrate those concepts.
---
## Image as stencils
Images are like templates or stencils that you can create containers from.
![stencil](images/stenciling-wall.jpg)
---
## Comparison with object-oriented programming
* Images are conceptually similar to *classes*.
---
If an image is read-only, how do we change it?
* We create a new container from that image.
* Then we make changes to that container.
* When we are satisfied with those changes, we transform them into a new layer.
* A new image is created by stacking the new layer on top of the old image.
---
## Creating the first images
There is a special empty image called `scratch`.
* It allows us to *build from scratch*.
Note: you will probably never have to do this yourself.
---
`docker commit`
* Saves all the changes made to a container into a new layer.
* Creates a new image (effectively a copy of the container).
`docker build` **(used 99% of the time)**
* Performs a repeatable build sequence.
* This is the preferred method!
---
Those images include:
* Ready-to-use components and services, like redis, postgresql...
* Over 130 at this point!
---
## User namespace
---
There are two ways to download images.
```bash
$ docker pull debian:jessie
Pulling repository debian
b164861940b8: Download complete
b164861940b8: Pulling image (jessie) from debian
d1881793a057: Download complete
```
* As seen previously, images are made up of layers.

---
class: title
# Installing Docker
---
## Installing Docker on Linux
* The recommended method is to install the packages supplied by Docker Inc.:
https://store.docker.com
* The general method is:
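The exact steps are elided in this diff; as an illustrative sketch, one common path is Docker's convenience script (review it before running, and prefer your distribution's Docker packages for production systems):
```bash
# Convenience script published by Docker Inc. (inspect it before executing!)
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
# Optionally, allow your user to run docker without sudo:
sudo usermod -aG docker $USER
```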
---
## Installing Docker on macOS and Windows
* On macOS, the recommended method is to use Docker for Mac:
https://docs.docker.com/docker-for-mac/install/
* On Windows 10 Pro, Enterprise, and Education, you can use Docker for Windows:
https://docs.docker.com/docker-for-windows/install/
* On older versions of Windows, you can use the Docker Toolbox:
https://docs.docker.com/toolbox/toolbox_install_windows/
* On Windows Server 2016, you can also install the native engine:
https://docs.docker.com/install/windows/docker-ee/
---
## Docker for Mac and Docker for Windows
* Special Docker Editions that integrate well with their respective host OS
* Provide user-friendly GUI to edit Docker configuration and settings
* Leverage the host OS virtualization subsystem (e.g. the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS)
* Installed like normal user applications on the host
* Under the hood, they both run a tiny VM (transparent to our daily use)
* Access network resources like normal applications
<br/>(and therefore, play better with enterprise VPNs and firewalls)
* Support filesystem sharing through volumes (we'll talk about this later)
* They only support running one Docker VM at a time ...
<br/>
... but we can use `docker-machine`, the Docker Toolbox, VirtualBox, etc. to get a cluster.
---
## Running Docker on macOS and Windows
This will also allow us to use remote Engines exactly as if they were local.
---
## Important PSA about security
* If you have access to the Docker control socket, you can take over the machine
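As a hedged illustration (not part of the original slide): with access to the socket, one can simply bind-mount the host's root filesystem and act as root on the host.
```bash
# Anyone who can talk to the Docker socket can do this:
docker run -it -v /:/hostfs ubuntu chroot /hostfs /bin/sh
# We now have a root shell on the *host* filesystem.
```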

---
## Local development in a container
We want to solve the following issues:
---
Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe?
```dockerfile
FROM ruby
MAINTAINER Education Team at Docker <education@docker.com>
COPY . /src
WORKDIR /src
```
---
`$ docker run -d -v $(pwd):/src -P namer`
* `namer` is the name of the image we will run.
* We don't specify a command to run because it is already set in the Dockerfile.
Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell).
---
## Improving the workflow
The workflow that we showed is nice, but it requires us to:
* keep track of all the `docker run` flags required to run the container,
* inspect the `Dockerfile` to know which path(s) to mount,
* write scripts to hide that complexity.
There has to be a better way!
---
## Docker Compose to the rescue
* Docker Compose allows us to "encode" `docker run` parameters in a YAML file.
* Here is the `docker-compose.yml` file that we can use for our "namer" app:
```yaml
www:
build: .
volumes:
- .:/src
ports:
- 80:9292
```
* Try it:
```bash
$ docker-compose up -d
```
---
## Working with Docker Compose
* When you see a `docker-compose.yml` file, you can use `docker-compose up`.
* It can build images and run them with the required parameters.
* Compose can also deal with complex, multi-container apps.
(More on this later!)
---
## Recap of the development workflow
1. Write a Dockerfile to build an image containing our development environment.

---
We will then show one particular method in action, using ELK and Docker's logging drivers.
---
## A word of warning about `json-file`
- By default, log file size is unlimited.
- This means that a very verbose container *will* use up all your disk space.
(Or a less verbose container, but running for a very long time.)
- Log rotation can be enabled by setting a `max-size` option.
- Older log files can be removed by setting a `max-file` option.
- Just like other logging options, these can be set per container, or globally.
Example:
```bash
$ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch
```
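For instance, here is a hedged sketch of applying the same options engine-wide through `/etc/docker/daemon.json` (on a systemd-based host; the daemon must be restarted for the change to take effect):
```bash
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
sudo systemctl restart docker
```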
---
## Demo: sending logs to ELK
- We are going to deploy an ELK stack.
`$ docker-compose -f elk.yml up -d`
- it is set with the `ELASTICSEARCH_URL` environment variable,
- by default it is `localhost:9200`, we change it to `elasticsearch:9200`.
- We need to configure Logstash:

---
# Reducing image size
* In the previous example, our final image contained:
* our `hello` program
* its source code
* the compiler
* Only the first one is strictly necessary.
* We are going to see how to obtain an image without the superfluous components.
---
## Can't we remove superfluous files with `RUN`?
What happens if we do one of the following commands?
- `RUN rm -rf ...`
- `RUN apt-get remove ...`
- `RUN make clean ...`
--
This adds a layer which removes a bunch of files.
But the previous layers (which added the files) still exist.
---
## Removing files with an extra layer
When downloading an image, all the layers must be downloaded.
| Dockerfile instruction | Layer size | Image size |
| ---------------------- | ---------- | ---------- |
| `FROM ubuntu` | Size of base image | Size of base image |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get install somepackage` | Size of files added <br/>(e.g. a few MB) | Sum of this layer <br/>+ all previous ones |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get remove somepackage` | Almost zero <br/>(just metadata) | Same as previous one |
Therefore, `RUN rm` does not reduce the size of the image or free up disk space.
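We can check this with `docker image history`. Here is a hedged sketch (the image name is arbitrary):
```bash
# Build a throwaway image that adds, then removes, a 100 MB file.
docker build -t rm-demo - <<'EOF'
FROM ubuntu
RUN dd if=/dev/zero of=/bigfile bs=1M count=100
RUN rm /bigfile
EOF
# The 'dd' layer still weighs about 100 MB, even though the file was removed.
docker image history rm-demo
```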
---
## Removing unnecessary files
Various techniques are available to obtain smaller images:
- collapsing layers,
- adding binaries that are built outside of the Dockerfile,
- squashing the final image,
- multi-stage builds.
Let's review them quickly.
---
## Collapsing layers
You will frequently see Dockerfiles like this:
```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
```
Or the (more readable) variant:
```dockerfile
FROM ubuntu
RUN apt-get update \
&& apt-get install xxx \
&& ... \
&& apt-get remove xxx \
&& ...
```
This `RUN` command gives us a single layer.
The files that are added, then removed in the same layer, do not grow the layer size.
---
## Collapsing layers: pros and cons
Pros:
- works on all versions of Docker
- doesn't require extra tools
Cons:
- not very readable
- some unnecessary files might still remain if the cleanup is not thorough
- that layer is expensive (slow to build)
---
## Building binaries outside of the Dockerfile
This results in a Dockerfile looking like this:
```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```
Of course, this implies that the file `xxx` exists in the build context.
That file has to exist before you can run `docker build`.
For instance, it can:
- exist in the code repository,
- be created by another tool (script, Makefile...),
- be created by another container image and extracted from the image.
See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).
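For the last option, here is a hedged sketch of one way to extract a binary from another image (the names are illustrative):
```bash
# Create (but don't start) a container from the donor image ...
docker create --name donor busybox
# ... copy the binary out of it into the build context ...
docker cp donor:/bin/busybox ./xxx
# ... and clean up.
docker rm donor
```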
---
## Building binaries outside: pros and cons
Pros:
- final image can be very small
Cons:
- requires an extra build tool
- we're back in dependency hell and "works on my machine"
Cons, if binary is added to code repository:
- breaks portability across different platforms
- grows repository size a lot if the binary is updated frequently
---
## Squashing the final image
The idea is to transform the final image into a single-layer image.
This can be done in (at least) two ways.
- Activate experimental features and squash the final image:
```bash
docker image build --squash ...
```
- Export/import the final image.
```bash
docker build -t temp-image .
docker run --entrypoint true --name temp-container temp-image
docker export temp-container | docker import - final-image
docker rm temp-container
docker rmi temp-image
```
---
## Squashing the image: pros and cons
Pros:
- single-layer images are smaller and faster to download
- removed files no longer take up storage and network resources
Cons:
- we still need to actively remove unnecessary files
- squash operation can take a lot of time (on big images)
- squash operation does not benefit from cache
<br/>
(even if we change just a tiny file, the whole image needs to be re-squashed)
---
## Multi-stage builds
Multi-stage builds allow us to have multiple *stages*.
Each stage is a separate image, and can copy files from previous stages.
We're going to see how they work in more detail.
---
# Multi-stage builds
* At any point in our `Dockerfile`, we can add a new `FROM` line.
* This line starts a new stage of our build.
* Each stage can access the files of the previous stages with `COPY --from=...`.
* When a build is tagged (with `docker build -t ...`), the last stage is tagged.
* Previous stages are not discarded: they will be used for caching, and can be referenced.
---
## Multi-stage builds in practice
* Each stage is numbered, starting at `0`
* We can copy a file from a previous stage by indicating its number, e.g.:
```dockerfile
COPY --from=0 /file/from/first/stage /location/in/current/stage
```
* We can also name stages, and reference these names:
```dockerfile
FROM golang AS builder
RUN ...
FROM alpine
COPY --from=builder /go/bin/mylittlebinary /usr/local/bin/
```
---
## Multi-stage builds for our C program
We will change our Dockerfile to:
* give a nickname to the first stage: `compiler`
* add a second stage using the same `ubuntu` base image
* add the `hello` binary to the second stage
* make sure that `CMD` is in the second stage
The resulting Dockerfile is on the next slide.
---
## Multi-stage build `Dockerfile`
Here is the final Dockerfile:
```dockerfile
FROM ubuntu AS compiler
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
RUN make hello
FROM ubuntu
COPY --from=compiler /hello /hello
CMD /hello
```
Let's build it, and check that it works correctly:
```bash
docker build -t hellomultistage .
docker run hellomultistage
```
---
## Comparing single/multi-stage build image sizes
List our images with `docker images`, and check the size of:
- the `ubuntu` base image,
- the single-stage `hello` image,
- the multi-stage `hellomultistage` image.
We can achieve even smaller images if we use smaller base images.
However, if we use common base images (e.g. if we standardize on `ubuntu`),
these common images will be pulled only once per node, so they are
virtually "free."
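For instance, assuming the image names used earlier in this chapter:
```bash
# Compare the SIZE column of the three images.
docker image ls ubuntu
docker image ls hello
docker image ls hellomultistage
```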

---
class: extra-details, deep-dive
## Manipulating namespaces
- Namespaces are created with two methods:
---
class: extra-details, deep-dive
## Namespaces lifecycle
- When the last process of a namespace exits, the namespace is destroyed.
---
class: extra-details, deep-dive
## Namespaces can be used independently
- As mentioned in the previous slides:
- Also allows setting the NIS domain.
(If you don't know what a NIS domain is, you don't have to worry about it!)
- If you're wondering: UTS = UNIX time sharing.
---
class: extra-details, deep-dive
## Creating our first namespace
Let's use `unshare` to create a new process that will have its own UTS namespace:
`$ sudo unshare --uts`
---
class: extra-details, deep-dive
## Demonstrating our uts namespace
In our new "container", check the hostname, change it, and check it:
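The exact commands are elided in this diff; a sketch of the idea (the new hostname is arbitrary):
```bash
hostname              # still shows the host's name
hostname tupperware   # only affects our new UTS namespace (we are root here)
hostname              # shows "tupperware"; the real host is unchanged
```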
---
class: extra-details, deep-dive
## Setting up a private `/tmp`
Create a new mount namespace:
---
class: extra-details, deep-dive
## PID namespace in action
Create a new PID namespace:
Check the process tree in the new namespace:
--
class: extra-details, deep-dive
🤔 Why do we see all the processes?!?
---
class: extra-details, deep-dive
## PID namespaces and `/proc`
- Tools like `ps` rely on the `/proc` pseudo-filesystem.
---
class: extra-details, deep-dive
## PID namespaces, take 2
- This can be solved by mounting `/proc` in the namespace.
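A hedged sketch of one way to do that, letting `unshare` handle the remount for us:
```bash
# --mount-proc creates a mount namespace and mounts a fresh /proc in it.
sudo unshare --fork --pid --mount-proc bash
ps faux   # now only shows processes belonging to the new PID namespace
```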
Check `man 2 unshare` and `man pid_namespaces` if you want more details.
---
class: extra-details, deep-dive
## User namespace challenges
- UID needs to be mapped when passed between processes or kernel subsystems.
---
class: extra-details, deep-dive
## Cgroups v1 vs v2
- Cgroups v1 are available on all systems (and widely used).
---
class: extra-details, deep-dive
## Avoiding the OOM killer
- For some workloads (databases and stateful systems), killing
---
class: extra-details, deep-dive
## Overhead of the memory cgroup
- Each time a process grabs or releases a page, the kernel updates counters.
---
class: extra-details, deep-dive
## Setting up a limit with the memory cgroup
Create a new memory cgroup:
`$ sudo mkdir $CG`
Limit it to approximately 100MB of memory usage:
```bash
$ sudo tee $CG/memory.memsw.limit_in_bytes <<< 100000000
```
Move the current process to that cgroup:
`$ sudo tee $CG/tasks <<< $$`
The current process *and all its future children* are now limited.
(Confused about `<<<`? Look at the next slide!)
---
class: extra-details, deep-dive
## What's `<<<`?
- This is a "here string". (It is a non-POSIX shell extension.)
- The following commands are equivalent:
```bash
foo <<< hello
```
```bash
echo hello | foo
```
```bash
foo <<EOF
hello
EOF
```
- Why did we use that?
---
class: extra-details, deep-dive
## Writing to cgroups pseudo-files requires root
Instead of:
```bash
sudo tee $CG/tasks <<< $$
```
We could have done:
```bash
sudo sh -c "echo $$ > $CG/tasks"
```
The following commands, however, would be invalid:
```bash
sudo echo $$ > $CG/tasks
```
```bash
sudo -i # (or su)
echo $$ > $CG/tasks
```
---
class: extra-details, deep-dive
## Testing the memory limit
Start the Python interpreter:
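The interactive demo is elided here; a hedged sketch of the idea (assuming `python3` is available, and running inside the shell we just added to the cgroup):
```bash
# Allocate ~200 MB, i.e. well above the ~100 MB limit set earlier.
python3 -c 'data = "x" * 200000000'
# Expected result: the interpreter is killed by the OOM killer ("Killed").
```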
---
## Cpu cgroup
- Allows setting relative weights used by the scheduler.
- We cannot set CPU limits (like, "don't use more than 10% of CPU").
---
## Cpuset cgroup

---
It depends on:
- false, if we focus on what matters.
---
## Kubernetes in action
.center[![Demo stamp](images/demo.jpg)]

---
```bash
docker login
```
.warning[When running Docker for Mac/Windows, or
Docker on a Linux workstation, it can (and will when
possible) integrate with your system's keyring to
store your credentials securely. However, on most Linux

---
# Limiting resources
- So far, we have used containers as convenient units of deployment.
- What happens when a container tries to use more resources than available?
(RAM, CPU, disk usage, disk and network I/O...)
- What happens when multiple containers compete for the same resource?
- Can we limit resources available to a container?
(Spoiler alert: yes!)
---
## Container processes are normal processes
- Containers are closer to "fancy processes" than to "lightweight VMs".
- A process running in a container is, in fact, a process running on the host.
- Let's look at the output of `ps` on a container host running 3 containers:
```
0 2662 0.2 0.3 /usr/bin/dockerd -H fd://
0 2766 0.1 0.1 \_ docker-containerd --config /var/run/docker/containe
0 23479 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23497 0.0 0.0 | \_ `nginx`: master process nginx -g daemon off;
101 23543 0.0 0.0 | \_ `nginx`: worker process
0 23565 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
102 23584 9.4 11.3 | \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2
0 23707 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23725 0.0 0.0 \_ `/bin/sh`
```
- The highlighted processes are containerized processes.
<br/>
(That host is running nginx, elasticsearch, and alpine.)
---
## By default: nothing changes
- What happens when a process uses too much memory on a Linux system?
--
- Simplified answer:
- swap is used (if available);
- if there is not enough swap space, eventually, the out-of-memory killer is invoked;
- the OOM killer uses heuristics to kill processes;
- sometimes, it kills an unrelated process.
--
- What happens when a container uses too much memory?
- The same thing!
(i.e., a process eventually gets killed, possibly in another container.)
---
## Limiting container resources
- The Linux kernel offers rich mechanisms to limit container resources.
- For memory usage, the mechanism is part of the *cgroup* subsystem.
- This subsystem makes it possible to limit memory usage for a process or a group of processes.
- A container engine leverages these mechanisms to limit memory for a container.
- The out-of-memory killer has a new behavior:
- it runs when a container exceeds its allowed memory usage,
- in that case, it only kills processes in that container.
---
## Limiting memory in practice
- The Docker Engine offers multiple flags to limit memory usage.
- The two most useful ones are `--memory` and `--memory-swap`.
- `--memory` limits the amount of physical RAM used by a container.
- `--memory-swap` limits the total amount (RAM+swap) used by a container.
- The memory limit can be expressed in bytes, or with a unit suffix.
(e.g.: `--memory 100m` = 100 megabytes.)
- We will see two strategies: limiting RAM usage, or limiting both RAM and swap.
---
## Limiting RAM usage
Example:
```bash
docker run -ti --memory 100m python
```
If the container tries to use more than 100 MB of RAM, *and* swap is available:
- the container will not be killed,
- memory above 100 MB will be swapped out,
- in most cases, the app in the container will be slowed down (a lot).
If we run out of swap, the global OOM killer still intervenes.
---
## Limiting both RAM and swap usage
Example:
```bash
docker run -ti --memory 100m --memory-swap 100m python
```
If the container tries to use more than 100 MB of memory, it is killed.
On the other hand, the application will never be slowed down because of swap.
---
## When to pick which strategy?
- Stateful services (like databases) will lose or corrupt data when killed
- Allow them to use swap space, but monitor swap usage
- Stateless services can usually be killed with little impact
- Limit their mem+swap usage, but monitor if they get killed
- Ultimately, this is no different from "do I want swap, and how much?"
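Hedged sketches of both strategies (container and image names are arbitrary examples):
```bash
# Stateful-ish service: cap RAM, but leave swap headroom instead of risking a kill.
docker run -d --name cache --memory 500m --memory-swap 1g redis

# Stateless service: hard cap; accept that it may be killed and restarted.
docker run -d --name web --memory 200m --memory-swap 200m nginx

# Either way, keep an eye on actual usage.
docker stats --no-stream cache web
```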
---
## Limiting CPU usage
- There are no less than 3 ways to limit CPU usage:
- setting a relative priority with `--cpu-shares`,
- setting a CPU% limit with `--cpus`,
- pinning a container to specific CPUs with `--cpuset-cpus`.
- They can be used separately or together.
---
## Setting relative priority
- Each container has a relative priority used by the Linux scheduler.
- By default, this priority is 1024.
- As long as CPU usage is not maxed out, this has no effect.
- When CPU usage is maxed out, each container receives CPU cycles in proportion of its relative priority.
- In other words: a container with `--cpu-shares 2048` will receive twice as much CPU as one with the default setting (see the example below).
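A hedged sketch of how to observe this (pinning both containers to the same CPU so that they actually compete):
```bash
docker run -d --name lo --cpuset-cpus 0 busybox sh -c 'while true; do :; done'
docker run -d --name hi --cpuset-cpus 0 --cpu-shares 2048 busybox sh -c 'while true; do :; done'
# 'hi' should get roughly two thirds of the CPU, 'lo' about one third.
docker stats --no-stream lo hi
```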
---
## Setting a CPU% limit
- This setting will make sure that a container doesn't use more than a given % of CPU.
- The value is expressed in CPUs; therefore:
`--cpus 0.1` means 10% of one CPU,
`--cpus 1.0` means 100% of one whole CPU,
`--cpus 10.0` means 10 entire CPUs.
---
## Pinning containers to CPUs
- On multi-core machines, it is possible to restrict the execution on a set of CPUs.
- Examples:
`--cpuset-cpus 0` forces the container to run on CPU 0;
`--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7;
`--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11.
- This will not reserve the corresponding CPUs!
(They might still be used by other containers, or uncontainerized processes.)
---
## Limiting disk usage
- Most storage drivers do not support limiting the disk usage of containers.
(With the exception of devicemapper, but the limit cannot be set easily.)
- This means that a single container could exhaust disk space for everyone.
- In practice, however, this is not a concern, because:
- data files (for stateful services) should reside on volumes,
- assets (e.g. images, user-generated content...) should reside on object stores or on volumes,
- logs are written on standard output and gathered by the container engine.
- Container disk usage can be audited with `docker ps -s` and `docker diff`.
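For instance:
```bash
docker ps -s      # shows the size of each container's writable layer
docker diff web   # lists files added/changed/deleted in that layer
                  # ("web" is just an example container name)
```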

---
## Detaching from a container (Linux/macOS)
* If you have started an *interactive* container (with option `-it`), you can detach from it.
---
## Detaching cont. (Win PowerShell and cmd.exe)
* Docker for Windows has a different detach experience due to shell features.
* `^P^Q` does not work.
* `^C` will detach, rather than stop the container.
* Using Bash, the Windows Subsystem for Linux, etc. on Windows behaves like Linux/macOS shells.
* Both PowerShell and Bash work well in Win 10; just be aware of differences.
---
class: extra-details
## Specifying a custom detach sequence

---
class: title
# Our training environment

---
class: title
# Windows Containers
![Container with Windows](images/windows-containers.jpg)
---
## Objectives
At the end of this section, you will be able to:
* Understand the difference between Windows Containers and Linux Containers.
* Know about the Docker for Windows features for choosing the container architecture.
* Run other container architectures via QEMU emulation.
---
## Are containers *just* for Linux?
Remember that a container must run on the kernel of the OS it's on.
- This is both a benefit and a limitation.
(It makes containers lightweight, but limits them to a specific kernel.)
- At its launch in 2013, Docker only supported Linux, and only on amd64 CPUs.
- Since then, many platforms and OS have been added.
(Windows, ARM, i386, IBM mainframes ... But no macOS or iOS yet!)
--
- Docker Desktop (macOS and Windows) can run containers for other architectures
(Check the docs to see how to [run a Raspberry Pi (ARM) or PPC container](https://docs.docker.com/docker-for-mac/multi-arch/)!)
---
## History of Windows containers
- Early 2016, Windows 10 gained support for running Windows binaries in containers.
- These are known as "Windows Containers"
- Win 10 expects Docker for Windows to be installed for full features
- These must run in Hyper-V mini-VMs with a Windows Server x64 kernel
- No "scratch" containers, so use "Core" and "Nano" Server OS base layers
- Since Hyper-V is required, Windows 10 Home won't work (yet...)
--
- Late 2016, Windows Server 2016 ships with native Docker support
- Installed via PowerShell, doesn't need Docker for Windows
- Can run native (without VM), or with [Hyper-V Isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)
---
## LCOW (Linux Containers On Windows)
While Docker on Windows is largely playing catch up with Docker on Linux,
it's moving fast; and this is one thing that you *cannot* do on Linux!
- LCOW came with the [2017 Fall Creators Update](https://blog.docker.com/2018/02/docker-for-windows-18-02-with-windows-10-fall-creators-update/).
- It can run Linux and Windows containers side-by-side on Win 10.
- It is no longer necessary to switch the Engine to "Linux Containers".
(In fact, if you want to run both Linux and Windows containers at the same time,
make sure that your Engine is set to "Windows Containers" mode!)
--
If you are a Docker for Windows user, start your engine and try this:
```bash
docker pull microsoft/nanoserver:1803
```
(Make sure to switch to "Windows Containers mode" if necessary.)
---
## Run Both Windows and Linux containers
- Run a Windows Nano Server (minimal CLI-only server)
```bash
docker run --rm -it microsoft/nanoserver:1803 powershell
Get-Process
exit
```
- Run busybox on Linux in LCOW
```bash
docker run --rm --platform linux busybox echo hello
```
(Although you will not be able to see them, this will create hidden
Nano and LinuxKit VMs in Hyper-V!)
---
## Did We Say Things Move Fast?
- Things keep improving.
- Now `--platform` defaults to `windows`, and some images support both:
  - golang, mongo, python, redis, hello-world ... and more being added
  - you should still use `--platform` with multi-OS images to be certain (see the example below)
- Windows Containers now support `localhost`-accessible containers (July 2018)
- Microsoft (April 2018) added Hyper-V support to Windows 10 Home ...
... so stay tuned for Docker support, maybe?!?
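A hedged example with `hello-world`, which exists for both OSes (exact flag behavior may vary with your Docker version and experimental settings):
```bash
docker pull --platform linux hello-world
docker pull --platform windows hello-world
```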
---
## Other Windows container options
Most "official" Docker images don't run on Windows yet.
Places to Look:
- Hub Official: https://hub.docker.com/u/winamd64/
- Microsoft: https://hub.docker.com/r/microsoft/
---
## SQL Server? Choice of Linux or Windows
- Microsoft [SQL Server for Linux 2017](https://hub.docker.com/r/microsoft/mssql-server-linux/) (amd64/linux)
- Microsoft [SQL Server Express 2017](https://hub.docker.com/r/microsoft/mssql-server-windows-express/) (amd64/windows)
---
## Windows Tools and Tips
- PowerShell [Tab Completion: DockerCompletion](https://github.com/matt9ucci/DockerCompletion)
- Best Shell GUI: [Cmder.net](http://cmder.net/)
- Good Windows Container Blogs and How-To's
- Docker's DevRel [Elton Stoneman, Microsoft MVP](https://blog.sixeyed.com/)
- Docker Captain [Nicholas Dille](https://dille.name/blog/)
- Docker Captain [Stefan Scherer](https://stefanscherer.github.io/)

---
Docker volumes can be used to achieve many things, including:
* Sharing a *single file* between the host and a container.
* Using remote storage and custom storage with "volume drivers".
---
## Volumes are special directories in a container
---
## Volumes exist independently of containers
If a container is stopped or removed, its volumes still exist and are available.
Volumes can be listed and manipulated with `docker volume` subcommands:
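The list shown in the original slide is elided here; a few of the standard subcommands, for illustration:
```bash
docker volume create my-volume
docker volume ls
docker volume inspect my-volume
docker volume rm my-volume
```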
---
## Using custom "bind-mounts"
In some cases, you want a specific directory on the host to be mapped
inside the container:
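A hedged example (the paths and image are placeholders):
```bash
# Map /path/on/host (on the Docker host) to /path/in/container:
docker run -d -v /path/on/host:/path/in/container nginx
```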
* Newer containers can use `--volumes-from` too.
* Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes).
---
class: extra-details
`$ docker run -d --name redis28 redis:2.8`
Connect to the Redis container and set some data.
```bash
$ docker run -ti --link redis28:redis busybox telnet redis 6379
```
Issue the following commands:
Connect to the Redis container and see our data.
```bash
docker run -ti --link redis30:redis busybox telnet redis 6379
```
Issue a few commands.
You can install plugins to manage volumes backed by particular storage systems,
or providing extra features. For instance:
* [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g.
SAN or NAS), or by cloud block stores (e.g. EBS, EFS).
* [Portworx](http://portworx.com/) - provides distributed block store for containers.
* [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale
to several petabytes. It provides interfaces for object, block and file storage.
* and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)!
---

- This was initially written to support in-person, instructor-led workshops and tutorials
- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://@@GITREPO@@/graphs/contributors)
- You can also follow along on your own, at your own pace
