Compare commits


315 Commits

Author SHA1 Message Date
Jerome Petazzoni
2381510a0b fix-redirects.sh: adding forced redirect 2020-04-07 16:57:54 -05:00
sload
732f06729f Fix typo MIP => NIP 2018-09-15 10:45:01 -05:00
Jerome Petazzoni
5687b204cd Merge branch 'replace-es-with-httpenv' into weka 2018-09-12 15:50:07 -05:00
Jerome Petazzoni
db8e8377ac Replace ElasticSearch with jpetazzo/httpenv
ElasticSearch slowly uses up to 2GB of RAM.
Eventually, on instances provisioned with
only 4GB of RAM and without swap, if more
than one ElasticSearch pod ends up on the
same instance, it will cause the instance
to slow down and ultimately crash. Instead,
we now use a tiny Go web server that shows
its environment in JSON. It still highlights
that multiple backends are serving requests
but without the memory usage issue.
2018-09-12 15:49:27 -05:00
Jerome Petazzoni
510a37be44 Rebalance chapter 3/4 2018-09-12 00:15:54 -05:00
Jerome Petazzoni
29a925a50d _redirects 2018-09-11 14:41:47 -05:00
Jerome Petazzoni
7d67e23e89 Merge branch 'update-final-words' into weka 2018-09-11 14:37:36 -05:00
Jerome Petazzoni
651e6b720b Merge branch 'enixlogo' into weka 2018-09-11 14:37:32 -05:00
Jerome Petazzoni
c8cd845b7d Merge branch 'master' into weka 2018-09-11 14:37:26 -05:00
Jerome Petazzoni
230bd73597 Update versions 2018-09-11 14:37:04 -05:00
Jerome Petazzoni
7217c0ee1d Typos and fixes for autopilot
There is no significant change to the *content* here, but a lot
of typo fixes and commands added so that the autopilot works
correctly.
2018-09-11 01:41:56 -05:00
Jerome Petazzoni
51882896d4 Update last chapter (what's next) 2018-09-10 03:29:21 -05:00
Jerome Petazzoni
77d455d894 Sort chapters numerically in slides counter 2018-09-09 17:56:27 -05:00
Jerome Petazzoni
39532c7547 WEKA 2018-09-09 17:56:01 -05:00
Jerome Petazzoni
4f9c8275d9 Incorporate Bridget's feedback 2018-09-08 09:55:01 -05:00
Bridget Kromhout
f11aae2514 Update accessinternal.md
slight changes
2018-09-08 09:55:01 -05:00
Jerome Petazzoni
f1e9efc38c Explain how to access internal services
By using kubectl proxy and kubectl port-forward
2018-09-08 09:55:01 -05:00
Bridget Kromhout
975cc4f7df Merge pull request #332 from jpetazzo/new-content-sep-2018
New content for sep 2018 (MERGE CANDIDATE)
2018-09-08 09:03:20 -05:00
Bridget Kromhout
01243280a2 Update configuration.md 2018-09-08 08:56:26 -05:00
Bridget Kromhout
e652c3639d Merge pull request #336 from jpetazzo/deeper-in-netpol
Deeper in netpol
2018-09-08 08:53:30 -05:00
Bridget Kromhout
1e0954d9b4 Update netpol.md
slight corrections
2018-09-08 08:49:37 -05:00
Jerome Petazzoni
bb21f9bbc9 Improvements following Bridget's feedback 2018-09-08 08:45:16 -05:00
Bridget Kromhout
25466e7950 Merge pull request #334 from jpetazzo/localkubeconfig
Show how to use kubectl from the local machine
2018-09-08 08:45:16 -05:00
Jerome Petazzoni
78026ff9b8 Integrate new content
I've dispatched the new content so that the fullday training
(actually two days, don't let the file name distract you)
is broken down into 8 chapters of approximately equal length,
where the most complex content is preferably located at the
end of the chapter (to allow people to catch up and ask questions
during breaks) + 1 chapter with the what's next / links / thank you
slides
2018-09-08 08:23:54 -05:00
Jerome Petazzoni
60c7ef4e53 Merge branch 'master' into new-content-sep-2018 2018-09-08 07:57:41 -05:00
Jerome Petazzoni
55952934ed Add tarmak in deployment options 2018-09-08 07:56:16 -05:00
Jerome Petazzoni
3eaa844c55 Add ENIX logo
Warning: do not merge this branch into your content, otherwise you
will get the ENIX logo in the top right of all your decks
2018-09-08 07:49:38 -05:00
Jerome Petazzoni
f9d31f4c30 merge 2018-09-08 07:32:14 -05:00
Jerome Petazzoni
ec037e422b Clarify 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
73f66f25d8 Rephrase to avoid confusion 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
28174b6cf9 Oops, fixing bad conflict resolve 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
a80c095a07 Put netpol file in the right directory 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
374574717d Clarify network policies
Add clarification re/ pod-to-pod traffic.
Explain that it's stateful (which most people would expect anyway).
2018-09-08 07:20:31 -05:00
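
For illustration, here is one of the policies added in this diff (reproduced here for readability): it allows traffic to pods labeled `run=testweb` only from pods labeled `run=testcurl`. Because policies are stateful, return traffic for an allowed connection is permitted automatically.

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-testcurl-for-testweb
spec:
  podSelector:
    matchLabels:
      run: testweb          # the policy applies to these pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: testcurl     # only these pods may connect to them
```
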
Jerome Petazzoni
efce5d1ad4 Add a short chapter about network policies
I will then expand this chapter to add examples showing
how to isolate namespaces; but let's start with that.
2018-09-08 07:20:31 -05:00
Jerome Petazzoni
4eec91a9e6 Merge branch 'new-content-sep-2018' of github.com:jpetazzo/container.training into new-content-sep-2018 2018-09-08 07:16:56 -05:00
Jerome Petazzoni
57166f33aa Prometheus chapter 2018-09-08 07:16:28 -05:00
Bridget Kromhout
f1ebb1f0fb slight corrections 2018-09-06 11:05:17 -05:00
Bridget Kromhout
8182e4df96 Update portworx.md
Slight corrections for clarity
2018-09-06 10:56:59 -05:00
Bridget Kromhout
6f3580820c Update gitworkflows.md
slight corrections
2018-09-06 10:42:59 -05:00
Bridget Kromhout
7b7fd2a4b4 Merge pull request #329 from jpetazzo/kubectlproxy
Revamp section about kubectl proxy
2018-09-06 10:37:17 -05:00
Jerome Petazzoni
f74addd0ca Add short section with Flux and Gitkube
These sections are not as detailed as the usual ones, but we
intend to show what's possible with git-based workflows.
2018-09-06 07:55:42 -05:00
Jerome Petazzoni
21ba3b7713 Incorporate Bridget's feedback 2018-09-06 02:12:47 -05:00
Jerome Petazzoni
4eca15f822 typo 2018-09-06 01:49:54 -05:00
Bridget Kromhout
4205f619cf Merge pull request #333 from BretFisher/patch-16
adding my next few workshops, I forgets!
2018-09-05 23:31:25 -05:00
Bridget Kromhout
c3dff823ef Update index.yaml
We use `:` as a delimiter and so need to quote text using it.
2018-09-05 23:29:49 -05:00
Bret Fisher
39876d1388 adding my next few workshops, I forgets! 2018-09-05 21:09:13 -04:00
Bridget Kromhout
7e34aa0287 Merge pull request #330 from jpetazzo/move-yaml-to-repo
Add YAML to repo; remove goo.gl links
2018-09-05 09:21:14 -05:00
Bridget Kromhout
3bdafed38e Merge pull request #331 from jpetazzo/preinstall-helm-and-stern
Pre-install Stern and Helm
2018-09-05 09:17:51 -05:00
Jerome Petazzoni
3d438ff304 Add kubectl auth can-i ... 2018-09-05 02:49:49 -05:00
Jerome Petazzoni
bcd1f37085 Add healthchecks
Explain liveness and readiness probes.
No lab yet.
2018-09-04 16:23:38 -05:00
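
A minimal sketch of what liveness and readiness probes look like on a container, assuming a web server listening on port 80 (this snippet is illustrative and not part of the diff):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:            # the container is restarted if this check keeps failing
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:           # the pod is removed from Service endpoints while this check fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
```
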
Jerome Petazzoni
ba928e59fc Add ingress section
- Explain ingress resources
- Show how to deploy Traefik
- Use hostNetwork in the process
- Explain taints and tolerations while we're here
2018-09-04 08:40:58 -05:00
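
A rough sketch of how an ingress controller like Traefik can combine hostNetwork with a toleration for the master taint, as mentioned above (names, labels, and the image tag are assumptions, not taken from this diff):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      hostNetwork: true                     # bind directly to ports 80/443 on each node
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule                  # also run on tainted master nodes
      containers:
      - name: traefik
        image: traefik:1.7
```
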
Jerome Petazzoni
62c01ef7d6 Add acknowledgement slide for Portworx/Katacoda 2018-09-03 13:00:30 -05:00
Jerome Petazzoni
a71347e328 Add owners and dependents
And explain how to find orphan resources.
2018-09-03 11:16:54 -05:00
Jerome Petazzoni
f235cfa13c Hint about upcoming dynamic provisioning section 2018-09-03 06:16:24 -05:00
Jerome Petazzoni
45b397682b One more note about storage systems 2018-09-03 06:15:41 -05:00
Jerome Petazzoni
858ad02973 Add notes about dynamic provisioning 2018-09-03 06:08:43 -05:00
Jerome Petazzoni
defeef093d Add dynamic provisioning and PostgreSQL example
In this section, we set up Portworx to provide a dynamic provisioner.
Then we use it to deploy a PostgreSQL Stateful Set.
Finally we simulate a node failure and observe the failover.
2018-09-03 05:47:21 -05:00
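
As a sketch of the mechanism (parameter names follow Portworx conventions, but the exact objects used in the workshop may differ), dynamic provisioning combines a StorageClass pointing at a provisioner with a PersistentVolumeClaim that references it:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-replicated
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"                     # keep two replicas so data survives a node failure
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-data
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: portworx-replicated
  resources:
    requests:
      storage: 1Gi
```
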
Jerome Petazzoni
b45615e2c3 Mention @jessfraz's img 2018-09-02 10:40:17 -05:00
Jerome Petazzoni
b158babb7f Stateful Sets
- explain the reason why we have stateful sets
- explain the relationship between volumes, persistent volumes,
  persistent volume claims, volume claim templates
- show how to run a Consul cluster with a stateful set
2018-09-02 08:51:03 -05:00
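
To illustrate the relationship between volume claim templates and persistent volume claims mentioned above, here is a generic sketch (names and image are assumptions, separate from the Consul example added in this diff) of a StatefulSet where each replica gets its own PersistentVolumeClaim:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:10
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:           # one PVC per replica: data-db-0, data-db-1, data-db-2
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```
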
Jerome Petazzoni
59b7386b91 Add authentication and authorization 2018-09-01 09:40:30 -05:00
Jerome Petazzoni
c05bcd23d9 Tons of new chapters! Excitement!
- volumes (general overview)
- building with the docker engine (bind-mounting the docker socket)
- building with kaniko (and init containers)
- managing configuration (configmaps, downward api)

Also added a new-content.yml file with just the new content
(for easier review), containing my plans for future chapters.
2018-08-31 03:27:15 -05:00
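
The docker-build and kaniko manifests appear in the diff below; for the configuration chapter, here is a small sketch (not taken from the diff) of consuming a ConfigMap and the downward API as environment variables:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: app
    image: alpine
    command: ["sh", "-c", "env && sleep 3600"]
    env:
    - name: GREETING                 # value comes from a ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: greeting
    - name: MY_NAMESPACE             # value injected by the downward API
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
```
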
Jerome Petazzoni
3cb91855c8 Pre-install Stern and Helm
The commands to install Stern and Helm aren't super exciting,
so let's pre-install these tools. That way, we also generate
completion for them. We still give installation instructions
just in case, but this saves time for more important stuff.
2018-08-28 07:21:43 -05:00
Jerome Petazzoni
dc0850ef3e Expand the network policy section 2018-08-27 11:36:46 -05:00
Jerome Petazzoni
ffdd7fda45 Add YAML to repo; remove goo.gl links
We load a few YAML files from goo.gl links. To avoid bad
surprises, we're moving these YAML files to the repository.
2018-08-27 07:04:01 -05:00
Jerome Petazzoni
83b2133573 Oops, fixing bad conflict resolve 2018-08-23 04:56:22 -05:00
Jerome Petazzoni
d04856f964 Show how to use kubectl from the local machine 2018-08-22 09:22:59 -05:00
Jerome Petazzoni
8373d5302f Revamp section about kubectl proxy 2018-08-21 08:08:19 -05:00
Jerome Petazzoni
7d7cb0eadb Put netpol file in the right directory 2018-08-21 04:21:39 -05:00
Jerome Petazzoni
c00c87f8f2 Clarify network policies
Add clarification re/ pod-to-pod traffic.
Explain that it's stateful (which most people would expect anyway).
2018-08-21 04:21:17 -05:00
Jerome Petazzoni
f599462ad7 Add a short chapter about network policies
I will then expand this chapter to add examples showing
how to isolate namespaces; but let's start with that.
2018-08-21 04:21:17 -05:00
Jerome Petazzoni
018282f392 slides: rename directories
This was discussed and agreed in #246. It will probably break a few
outstanding PRs as well as a few external links, but it's for the
greater good in the long term.
2018-08-21 04:03:38 -05:00
Jerome Petazzoni
23b3c1c05a Last tweaks so that autopilot passes 2018-08-20 14:58:00 -05:00
Jerome Petazzoni
62686d0b7a Miscellaneous fixes for autopilot
These changes are only for the autopilot test harness.
They add hidden commands and keystrokes but don't affect
the content of the slides.
2018-08-20 14:15:06 -05:00
Jerome Petazzoni
54288502a2 autopilot: add support for hidden commands 2018-08-20 10:22:01 -05:00
Jerome Petazzoni
efc045e40b autopilot: put a bunch of features behind flags
We don't always need to track slides, switch desktops, and open links.
(These things are not necessary when we're purely testing the labs.)
All these features are now behind boolean flags saved in the state file.
2018-08-20 08:31:47 -05:00
Bridget Kromhout
6e9b16511f Cloud-agnostic; mentioning multiple clouds 2018-08-19 10:07:52 -05:00
Jerome Petazzoni
81b6e60a8c Merge branch 'master' of github.com:jpetazzo/container.training 2018-08-18 11:13:45 -05:00
Jerome Petazzoni
5baaf7e00a Fixes #327 2018-08-18 11:13:39 -05:00
Jérôme Petazzoni
d4d460397f Mention progressDeadlineSeconds
@abuisine ran through the whole deck recently, taking the long route each time it was possible; and he noticed that another field had to be removed when transforming the Deployment into a DaemonSet.
2018-08-15 04:08:31 -05:00
Bridget Kromhout
f66b6b2ee3 Slight edits (#326) 2018-08-15 04:07:42 -05:00
Jérôme Petazzoni
fb7f7fd8c8 Expand to the brief logging/metrics slide
Thanks to @abuisine for reminding me that Heapster is going through a deprecation cycle.

I'm also expanding these two slides to be a bit more useful and relevant.
2018-08-15 04:07:42 -05:00
Jérôme Petazzoni
dc98fa21a9 Add explanations for a failure mode in logging (#324)
* Add explanations for a failure mode in logging

Thanks @abuisine for reporting that one too!

* Typo
2018-08-15 04:04:18 -05:00
Jerome Petazzoni
6b662d3e4c Add QCON workshops 2018-08-15 03:09:22 -05:00
Tim Bell
7069682c8e Update Dockerfile_Tips.md (#321)
Fix typo
2018-08-08 08:40:06 -05:00
Katie McLaughlin
3b1d5b93a8 Update pwk link (#319) 2018-08-02 06:22:42 -05:00
Maxime Deravet
611fe55e90 Allow to configure docker password using the settings file (#317) 2018-07-31 08:24:16 -05:00
Jerome Petazzoni
481272ac22 Add fallback when non-standard strftime is not supported
Closes #301

Thanks @petertang2012
2018-07-27 06:07:11 -05:00
Bridget Kromhout
9069e2d7db Merge pull request #318 from bridgetkromhout/add-vel-uk
Add Velocity UK
2018-07-26 18:35:04 -05:00
Bridget Kromhout
1144c16a4c Add Velocity UK 2018-07-26 18:33:49 -05:00
Bridget Kromhout
9b2846633c Merge pull request #315 from jpetazzo/clarify-kubeadm
Clarify usage of kubeadm
2018-07-20 15:42:31 -07:00
Jérôme Petazzoni
db88c0a5bf Clarify usage of kubeadm
Thanks to @robcz for the inspiration for that one!
2018-07-17 11:55:20 -05:00
Jérôme Petazzoni
28863728c2 Update rollout, new defaults are 25%/25% for MaxSurge and MaxUnavailable (#314) 2018-07-17 10:54:45 -05:00
Bridget Kromhout
dc341da813 Merge pull request #309 from bridgetkromhout/slight-updates
Slight updates for 1.11
2018-07-16 18:58:00 -05:00
Bridget Kromhout
1d210ad808 Merge pull request #3 from jpetazzo/slighter-updates
Slighter updates
2018-07-16 18:28:20 -05:00
Jerome Petazzoni
76d9adadf5 'until 1.10' is ambiguous, try to be more explicit 2018-07-16 18:25:30 -05:00
Jerome Petazzoni
065371fa99 Merge branch 'bridgetkromhout-slight-updates' into slighter-updates 2018-07-16 18:12:45 -05:00
Jerome Petazzoni
e45f21454e Update a couple of references to kube-dns; and cosmetic tweaks 2018-07-16 18:09:50 -05:00
Bridget Kromhout
4d8c13b0bf AKS name change 2018-07-16 18:09:50 -05:00
Bridget Kromhout
5e6b38e8d1 Replace kube-dns with CoreDNS 2018-07-16 18:09:50 -05:00
Bridget Kromhout
5dd2b6313e coredns instead of kube-dns 2018-07-16 18:09:50 -05:00
Bridget Kromhout
96bf00c59b Switching from get to use kubectl api-resources 2018-07-16 18:09:50 -05:00
Bridget Kromhout
065310901f This info isn't shown anymore by kubectl get 2018-07-16 18:09:50 -05:00
Jerome Petazzoni
103261ea35 Update a couple of references to kube-dns; and cosmetic tweaks 2018-07-16 18:07:07 -05:00
Jerome Petazzoni
c6fb6f30af Merge branch 'slight-updates' of git://github.com/bridgetkromhout/container.training into bridgetkromhout-slight-updates 2018-07-16 17:48:56 -05:00
Bridget Kromhout
134d24e23b AKS name change 2018-07-16 15:08:07 -07:00
Jerome Petazzoni
8a8e97f6e2 Add Jerome's training, September in Paris 2018-07-16 16:42:25 -05:00
Bridget Kromhout
29c1bc47d4 Replace kube-dns with CoreDNS 2018-07-16 13:53:27 -07:00
Bridget Kromhout
8af5a10407 coredns instead of kube-dns 2018-07-16 13:45:26 -07:00
Bridget Kromhout
8e9991a860 Switching from get to use kubectl api-resources 2018-07-16 13:38:28 -07:00
Bridget Kromhout
8ba5d6d736 This info isn't shown anymore by kubectl get 2018-07-16 13:32:53 -07:00
Bridget Kromhout
b3d1e2133d Merge pull request #308 from bridgetkromhout/add-oscon
Add oscon slides
2018-07-15 13:24:46 -05:00
Bridget Kromhout
b3cf30f804 Add oscon slides 2018-07-15 13:23:33 -05:00
Bridget Kromhout
b845543e5f Merge pull request #305 from bridgetkromhout/list-msp-slides
Adding slides link
2018-07-10 18:08:52 -05:00
Bridget Kromhout
1b54470046 Adding slides link 2018-07-10 18:04:35 -05:00
Bridget Kromhout
ee2b20926c Merge pull request #302 from bridgetkromhout/version-1.11.0
Version bump
2018-07-10 06:18:30 -05:00
Bridget Kromhout
96a76d2a19 Version bump 2018-07-10 06:17:07 -05:00
Bridget Kromhout
78ac91fcd5 Merge pull request #300 from bridgetkromhout/add-msp
Adding MSP 2018
2018-07-10 05:46:23 -05:00
Bridget Kromhout
971b5b0e6d Let's not link quite yet 2018-07-10 05:45:22 -05:00
Bridget Kromhout
3393563498 Adding MSP 2018 2018-07-06 16:11:37 -05:00
Bridget Kromhout
94483ebfec Merge pull request #298 from jpetazzo/improve-index-format
Switch to two-line format since our titles are so long
2018-07-06 15:43:01 -05:00
Jerome Petazzoni
db5d5878f5 Switch to two-line format since our titles are so long 2018-07-03 10:47:41 -05:00
ctas582
2585daac9b Force rng to be single threaded (#293) 2018-06-28 08:20:54 -05:00
Bridget Kromhout
21043108b3 Merge pull request #296 from bridgetkromhout/version-up
Version bump
2018-06-27 01:14:06 -05:00
Bridget Kromhout
65faa4507c Version bump 2018-06-27 08:12:40 +02:00
Bridget Kromhout
644f2b9c7a Merge pull request #295 from bridgetkromhout/add-slides-ams
Adding slides link for ams
2018-06-26 17:04:27 -05:00
Bridget Kromhout
dab9d9fb7e Adding slides link 2018-06-27 00:03:18 +02:00
Diego Quintana
139757613b Update Container_Networking_Basics.md
Added needed single quotes. I've also moved `nginx` to the end of the line, to follow a more consistent syntax  (`options` before `name|id`).

```
Usage:	docker inspect [OPTIONS] NAME|ID [NAME|ID...]

Return low-level information on Docker objects

Options:
  -f, --format string   Format the output using the given Go template
  -s, --size            Display total file sizes if the type is container
      --type string     Return JSON for specified type
```
2018-06-22 10:58:26 -05:00
Bridget Kromhout
10eed2c1c7 Merge pull request #288 from ctas582/typos
Correct typos
2018-06-22 09:21:56 -05:00
ctas582
c4fa75a1da Correct typos 2018-06-21 15:00:36 +01:00
ctas582
847140560f Correct typo 2018-06-21 14:16:05 +01:00
ctas582
1dc07c33ab Correct typos 2018-06-20 11:19:28 +01:00
Bridget Kromhout
4fc73d95c0 Merge pull request #285 from bridgetkromhout/vupdate
Updating version
2018-06-12 10:14:21 -07:00
Bridget Kromhout
690ed55953 Updating version 2018-06-12 10:12:04 -07:00
Bridget Kromhout
16a5809518 Merge pull request #284 from bridgetkromhout/add-vel-2day
Adding Erik and Brian's two-day Velocity training to the front page
2018-06-12 09:01:32 -07:00
Bridget Kromhout
0fed34600b Adding Erik and Brian's two-day 2018-06-12 08:55:53 -07:00
Jerome Petazzoni
2d95f4177a Remove extraneous python invocation 2018-06-12 04:25:00 -05:00
Bridget Kromhout
e9d1db56fa Adding VelNY bootcamp (#283)
* Adding VelNY bootcamp

* Colon not good here
2018-06-12 04:09:54 -05:00
Bridget Kromhout
a076a766a9 Merge pull request #282 from bridgetkromhout/reorder
Reordering upcoming events
2018-06-11 09:47:57 -07:00
Bridget Kromhout
be3c78bf54 Reordering 2018-06-11 09:40:30 -07:00
Bridget Kromhout
5bb6b8e2ab Merge pull request #281 from bridgetkromhout/add-velocity-sj-2018
Adding Velocity SJ 2018
2018-06-11 09:08:35 -07:00
Bridget Kromhout
f79193681d Adding Velocity SJ 2018 2018-06-11 08:53:53 -07:00
Bridget Kromhout
379ae69db5 Merge pull request #277 from bridgetkromhout/rollout-failure
Clarifying rollout failure via dashboard
2018-06-11 08:34:36 -07:00
Jerome Petazzoni
cde89f50a2 Add mention to skip slide if dashboard isn't deployed 2018-06-10 17:07:56 -05:00
Bridget Kromhout
98563ba1ce Clarifying rollout failure via dashboard 2018-06-04 20:58:57 -05:00
Bridget Kromhout
99bf8cc39f Merge pull request #271 from jpetazzo/new-index-generator
Replace index.html with a generator
2018-06-05 02:13:27 +02:00
Bridget Kromhout
ea642cf90e Merge pull request #274 from bridgetkromhout/eng-v
bumping version
2018-06-04 23:28:48 +02:00
Bridget Kromhout
a7d89062cf Bumping engine version 2018-06-04 15:43:30 -05:00
Bridget Kromhout
564e4856b4 Merge branch 'master' of https://github.com/jpetazzo/container.training 2018-06-04 14:41:07 -05:00
Bridget Kromhout
011cd08af3 Merge pull request #269 from jpetazzo/kubectlproxy
Show how to access internal services with kubectl proxy
2018-06-04 21:40:40 +02:00
Jerome Petazzoni
e294a4726c Update version numbers 2018-06-04 08:47:30 -05:00
Jerome Petazzoni
a21e8b0849 Image and title size fixes 2018-06-04 06:11:00 -05:00
Jerome Petazzoni
cc6f36b50f Wording (non-native speakers probably don't know boo-boo) 2018-06-04 05:54:02 -05:00
Jerome Petazzoni
6e35162788 Remove 'kubernetes in action' demo 2018-06-04 05:50:21 -05:00
Jerome Petazzoni
30ca940eeb Opt-out a bunch of slides in the deep dive section 2018-06-04 05:49:24 -05:00
Jerome Petazzoni
14eb19a42b Typo fixes 2018-06-04 05:43:28 -05:00
Jerome Petazzoni
da053ecde2 Update fundamentals TOC 2018-06-03 15:27:27 -05:00
Jerome Petazzoni
c86ef7de45 Add 'past workshops' page and backfill 2016-2017 workshops 2018-06-03 09:55:43 -05:00
Jérôme Petazzoni
c5572020b9 Add a few slides about resource limits (#273)
The section about namespaces and cgroups is very thorough,
but we also need something showing how to practically
limit container resource usage without diving into a very
deep technical chapter.
2018-06-03 05:28:16 -05:00
Jerome Petazzoni
3d7ed3a3f7 Clarify how to stop kubectl proxy 2018-06-03 05:10:48 -05:00
Bridget Kromhout
138163056f Merge pull request #270 from jpetazzo/kubectl-create-namespace
Show an easier way to create namespaces
2018-06-02 17:12:38 +02:00
Alexis Daboville
5e78e00bc9 Small typos (#272)
* Small typo

* elastichsearch -> elasticsearch

* realeased -> released
2018-06-02 09:09:38 -05:00
Jerome Petazzoni
2cb06edc2d Replace index.html with a generator
The events are now listed in index.yaml, and generated
with index.py. The latter is called automatically by
build.sh.

The list of events has been slightly improved:
- we only show the last 5 past events
- video recordings now get a section of their own
2018-05-31 14:22:23 -05:00
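
A purely hypothetical sketch of what an entry in index.yaml could look like; the field names below are assumptions, not taken from the actual file (note the quoting around values containing `:`, since that character is the YAML delimiter):

```yaml
events:
- title: "Example workshop: introduction to containers"   # quoted because it contains ':'
  date: "2018-01-01"
  city: Example City
  slides: http://container.training/
```
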
Jerome Petazzoni
8915bfb443 Update README section indicating 'teacher for hire' 2018-05-31 12:55:09 -05:00
Jerome Petazzoni
24017ad83f Clarify usage of <<< 2018-05-29 11:06:31 -05:00
Jerome Petazzoni
3edebe3747 New script to count slides
count-slides.py will count the number of slides per section,
and compute size of each chapter as well. It is not perfect
(for instance, it assumes that excluded_classes=in_person)
but it should help to assess the size of the content before
delivering long workshops.
2018-05-29 10:03:11 -05:00
Jerome Petazzoni
636a2d5c87 Show an easier way to create namespaces
We were using 'kubectl apply' with a YAML snippet.
It's valid, but it's quite convoluted. Instead,
let's use 'kubectl create namespace'. We can still
mention the other method of course.
2018-05-29 05:53:12 -05:00
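
For context, the "convoluted" method referenced above means applying a manifest along these lines (a sketch; the actual snippet on the slide may differ) instead of simply running `kubectl create namespace blue`:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: blue          # arbitrary example name
```
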
Jerome Petazzoni
4213aba76e Show how to access internal services with kubectl proxy 2018-05-29 05:47:27 -05:00
Jerome Petazzoni
3e822bad82 Add a slide about JSON file and log rotation 2018-05-28 10:28:52 -05:00
Jerome Petazzoni
cd5b06b9c7 Show how to connect/disconnect dynamically 2018-05-28 10:08:11 -05:00
Jerome Petazzoni
b0841562ea Add a bunch of Dockerfile examples 2018-05-25 09:31:50 -05:00
Jerome Petazzoni
06f70e8246 Add 'tree' in the VMs
This is a convenient tool to get an idea of what a
directory hierarchy looks like.
2018-05-24 07:06:21 -05:00
Jerome Petazzoni
9614f8761a Add link to Serge Hallyn blog post 2018-05-24 06:03:28 -05:00
Jerome Petazzoni
92f9ab9001 Add a section leading to multi-stage builds 2018-05-24 05:46:28 -05:00
Bridget Kromhout
ad554f89fc New events (and old event to past) 2018-05-23 15:31:07 -05:00
Jerome Petazzoni
5bb37dff49 Parametrize git repo and slides URLs
We have two extra variables in the slides:
@@GITREPO@@ (current value: github.com/jpetazzo/container.training)
@@SLIDES@@ (current value: http://container.training/)

These variables are set with gitrepo and slides in the YAML files.
(Just like the chat variable.)

Supersedes #256
2018-05-23 15:27:57 -05:00
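
A sketch of how these keys could appear in one of the deck YAML files; the key names and values come from the commit message above, but the surrounding structure is an assumption:

```yaml
gitrepo: github.com/jpetazzo/container.training   # substituted for @@GITREPO@@
slides: http://container.training/                # substituted for @@SLIDES@@
# (set alongside the pre-existing "chat" variable mentioned above)
```
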
Bridget Kromhout
0d52dc2290 Merge pull request #267 from jasonknudsen/patch-1
Update README.md - typo
2018-05-23 10:22:05 -05:00
Bridget Kromhout
c575cb9cd5 New events (and old event to past) 2018-05-23 10:18:02 -05:00
jasonknudsen
9cdccd40c7 Update README.md - typo
Typo in instructions - should be pull_images not pull-images
2018-05-23 08:17:46 -07:00
Bret Fisher
fdd10c5a98 fix docker-compose scale up change (#265) 2018-05-18 10:10:06 -05:00
mkrupczak3
8a617fdbc7 change "alpine telnet" to "busybox telnet"
Newer versions of alpine may not include telnet
2018-05-18 10:01:41 -05:00
Jerome Petazzoni
a058a74d8f Minor fix for hidden autopilot command 2018-05-18 09:16:34 -05:00
Bret Fisher
4896a3265e Update volume chapter 2018-05-18 08:08:33 -05:00
Bret Fisher
131947275c Improve explanation about images and layers 2018-05-18 08:08:27 -05:00
Bret Fisher
1b7e8cec5e Update info about Docker for Mac/Windows 2018-05-18 08:08:20 -05:00
Bret Fisher
c17c0ea9aa Remove obsolete MAINTAINER command 2018-05-18 08:08:08 -05:00
Bridget Kromhout
7b378d2425 Merge pull request #264 from bridgetkromhout/master
Moving NDC to past
2018-05-14 06:56:23 -05:00
Bridget Kromhout
47da7d8278 Moving NDC to past 2018-05-14 06:53:08 -05:00
Bridget Kromhout
3c69941fcd Merge pull request #262 from bridgetkromhout/craft-past
Craft to past
2018-05-10 07:38:44 -05:00
Bridget Kromhout
beb188facf Craft to past 2018-05-10 07:36:30 -05:00
Bridget Kromhout
dfea8f6535 Merge pull request #258 from bridgetkromhout/add-ndc
Adding NDC Minnesota
2018-05-08 21:37:43 -05:00
Bridget Kromhout
3b89149bf0 Adding NDC Minnesota 2018-05-08 21:34:53 -05:00
Bret Fisher
c8d73caacd move visualizer to service and stack (#237) 2018-05-08 10:51:40 -05:00
Jérôme Petazzoni
290185f16b Merge pull request #255 from eightlimbed/patch-1
fixed a typo
2018-05-07 13:52:40 -05:00
Jérôme Petazzoni
05e9d36eed Merge pull request #254 from mkrupczak3/master
Fix typo create network to network create
2018-05-07 13:51:12 -05:00
Jérôme Petazzoni
05815fcbf3 Merge pull request #240 from BretFisher/settings-update
updated versions, renamed files
2018-05-07 13:15:34 -05:00
Lee Gaines
bce900a4ca fixed a typo
changed "contain" to "contained" in the first bullet point
2018-05-06 21:49:43 -07:00
mkrupczak3
bf7ba49013 Fix typo create network to network create 2018-05-05 16:55:22 -04:00
Bret Fisher
323aa075b3 removing settings feature teaser 2018-05-05 12:54:20 -04:00
Jérôme Petazzoni
f526014dc8 Merge pull request #253 from BretFisher/ingress-graphics
swarm ingress images and updates
2018-05-05 06:39:13 -05:00
Jérôme Petazzoni
dec546fa65 Merge pull request #252 from BretFisher/patch-15
update docker-compose scale command
2018-05-05 06:36:53 -05:00
Jérôme Petazzoni
36390a7921 Merge pull request #251 from BretFisher/swarm-3-nodes
moving to 3 node swarms by default
2018-05-05 06:35:45 -05:00
Jérôme Petazzoni
313d705778 Merge pull request #248 from BretFisher/fundamentals-cnm-updates
more fundamentals CNM tweaks
2018-05-05 06:20:06 -05:00
Jérôme Petazzoni
ca34efa2d7 Merge pull request #247 from BretFisher/patch-13
adding more images to cache
2018-05-05 05:49:52 -05:00
Jérôme Petazzoni
25e92cfe39 Merge pull request #245 from BretFisher/patch-12
more new features for swarm
2018-05-05 05:46:07 -05:00
Jérôme Petazzoni
999359e81a Update versions.md 2018-05-05 05:45:40 -05:00
Jérôme Petazzoni
3a74248746 Merge pull request #244 from BretFisher/patch-11
a bit more detail on network drivers included
2018-05-05 05:41:10 -05:00
Jérôme Petazzoni
cb828ecbd3 Update Container_Network_Model.md 2018-05-05 05:41:01 -05:00
Jérôme Petazzoni
e1e984e02d Merge pull request #243 from BretFisher/patch-10
Updating some compose info for devs
2018-05-05 05:40:10 -05:00
Jérôme Petazzoni
d6e19fe350 Update Compose_For_Dev_Stacks.md 2018-05-05 05:39:25 -05:00
Jérôme Petazzoni
1f91c748b5 Merge pull request #242 from BretFisher/check-for-entr-in-build
Friendly error if entr isn't installed for build.sh
2018-05-05 05:30:05 -05:00
Bret Fisher
38356acb4e swarm ingress images and updates 2018-05-04 13:00:49 -04:00
Bret Fisher
7b2d598c38 fix my fat fingers.
ugg, sorry, editing via github and I need to go to bed :)
2018-05-04 00:20:31 -04:00
Bret Fisher
c276eb0cfa remove fat finger 2018-05-04 00:19:35 -04:00
Bret Fisher
571de591ca update docker-compose scale command
scale command is now legacy, use `--scale` option instead
2018-05-04 00:18:58 -04:00
Bret Fisher
e49a197fd5 moving to 3 node swarms by default 2018-05-03 23:52:51 -04:00
Bret Fisher
a30eabc23a more fundamentals CNM tweaks 2018-05-03 19:28:39 -04:00
Bret Fisher
73c4cddba5 forgot one image :/ 2018-05-03 16:32:12 -04:00
Bret Fisher
6e341f770a adding more images to cache
Based on images used in swarm and fundamentals workshops
2018-05-03 16:24:54 -04:00
Bridget Kromhout
527145ec81 Merge pull request #241 from BretFisher/patch-8
date updates for container.training
2018-05-03 18:19:36 +02:00
Bret Fisher
c93edceffe more new features for swarm 2018-05-02 23:25:12 -04:00
Bret Fisher
6f9eac7c8e a bit more detail on network drivers included 2018-05-02 23:21:45 -04:00
Bret Fisher
522420ef34 Updating some compose info for devs 2018-05-02 23:18:19 -04:00
Bret Fisher
927bf052b0 Friendly error if entr isn't installed for build.sh 2018-05-02 23:08:52 -04:00
Bret Fisher
1e44689b79 swarm versions 2018-05-02 23:00:55 -04:00
Bret Fisher
b967865faa date updates for container.training 2018-05-02 22:24:12 -04:00
Bret Fisher
054c0cafb2 updated versions, renamed files 2018-05-02 17:43:08 -04:00
Jérôme Petazzoni
29e37c8e2b Merge pull request #235 from KMASubhani/patch-1
Update Getting_Inside.md
2018-04-25 23:33:24 -05:00
Jérôme Petazzoni
44fc2afdc7 Merge pull request #239 from BretFisher/fix-stack-deploy-cmd
reordering stack deploy cmd format
2018-04-25 23:29:58 -05:00
Jérôme Petazzoni
7776c8ee38 Merge pull request #238 from BretFisher/fix-detach-false
remove more unneeded detach=false
2018-04-25 23:27:54 -05:00
Bret Fisher
9ee7e1873f reordering stack deploy cmd format 2018-04-25 16:33:38 -05:00
Bret Fisher
e21fcbd1bd remove more unneeded detach=false 2018-04-25 16:26:28 -05:00
Khaja Mashood Ahmed Subhani
5852ab513d Update Getting_Inside.md
fixed spelling
2018-04-25 11:00:37 -05:00
Jérôme Petazzoni
3fe33e4e9e Merge pull request #234 from bridgetkromhout/adding-ndc
Adding NDC
2018-04-24 03:56:13 -05:00
Bridget Kromhout
c44b90b5a4 Adding NDC 2018-04-23 20:03:46 -05:00
Jérôme Petazzoni
f06dc6548c Merge pull request #232 from bridgetkromhout/rollout-params
Clarify rollout params
2018-04-23 11:32:25 -05:00
Jérôme Petazzoni
e13552c306 Merge pull request #224 from bridgetkromhout/re-order
Re-ordering "kubectl apply" discussion
2018-04-23 11:31:15 -05:00
Bridget Kromhout
0305c3783f Adding an overview; marking clarification as extra 2018-04-23 10:52:29 -05:00
Bridget Kromhout
5158ac3d98 Clarify rollout params 2018-04-22 15:49:32 -05:00
Jérôme Petazzoni
25c08b0885 Merge pull request #231 from bridgetkromhout/add-goto-kube101
Adding goto's kube101
2018-04-22 14:55:55 -05:00
Bridget Kromhout
f8131c97e9 Adding goto's kube101 2018-04-22 14:35:50 -05:00
Bridget Kromhout
3de1fab66a Clarifying failure mode 2018-04-22 14:04:57 -05:00
Jérôme Petazzoni
ab664128b7 Merge pull request #228 from bridgetkromhout/helm-completion
Correction for helm completion
2018-04-22 14:00:08 -05:00
Bridget Kromhout
91de693b80 Correction for helm completion 2018-04-22 13:33:54 -05:00
Jérôme Petazzoni
a64606fb32 Merge pull request #225 from bridgetkromhout/tail-log
Clarify log tailing
2018-04-22 13:14:11 -05:00
Jérôme Petazzoni
58d9103bd2 Merge pull request #223 from bridgetkromhout/1.10.1-updates
Updates for 1.10.1
2018-04-22 13:13:25 -05:00
Jérôme Petazzoni
61ab5be12d Merge pull request #222 from bridgetkromhout/weave-link
Link to Weave
2018-04-22 13:08:54 -05:00
Bridget Kromhout
030900b602 Clarify log tailing 2018-04-22 12:39:18 -05:00
Bridget Kromhout
476d689c7d Clarify naming 2018-04-22 12:32:11 -05:00
Bridget Kromhout
4aedbb69c2 Re-ordering 2018-04-22 12:14:16 -05:00
Bridget Kromhout
db2a68709c Updates for 1.10.1 2018-04-22 11:57:37 -05:00
Bridget Kromhout
f114a89136 Link to Weave 2018-04-22 11:08:17 -05:00
Jérôme Petazzoni
96eda76391 Merge pull request #220 from bridgetkromhout/rearrange-kube-halfday
Rearrange kube halfday
2018-04-21 10:48:21 -05:00
Bridget Kromhout
e7d9a8fa2d Correcting EFK 2018-04-21 10:43:39 -05:00
Bridget Kromhout
1cca8db828 Rearranging halfday for kube 2018-04-21 10:38:54 -05:00
Bridget Kromhout
2cde665d2f Merge pull request #219 from jpetazzo/re-add-kube-halfday
Re-add half day file
2018-04-21 10:17:45 -05:00
Jerome Petazzoni
d660c6342f Re-add half day file 2018-04-21 12:00:04 +02:00
Bridget Kromhout
7e8bb0e51f Merge pull request #218 from bridgetkromhout/cloud-typo
Typo fix
2018-04-20 16:49:31 -05:00
Bridget Kromhout
c87f4cc088 Typo fix 2018-04-20 16:47:13 -05:00
Jérôme Petazzoni
05c50349a8 Merge pull request #211 from BretFisher/patch-4
add popular swarm reverse proxy options
2018-04-20 02:38:00 -05:00
Jérôme Petazzoni
e985952816 Add colon and fix minor typo 2018-04-20 02:37:48 -05:00
Jérôme Petazzoni
19f0ef9c86 Merge pull request #216 from jpetazzo/googl
Replace goo.gl with 1.1.1.1
2018-04-20 02:36:15 -05:00
Bret Fisher
cc8e13a85f silly me, Traefik is golang 2018-04-20 03:07:40 -04:00
Bridget Kromhout
6475a05794 Update kubectlrun.md
Removing misleading term
2018-04-19 14:37:26 -05:00
Bridget Kromhout
cc9840afe5 Update kubectlrun.md 2018-04-19 07:36:37 -05:00
Bridget Kromhout
b7a2cde458 Merge pull request #215 from jpetazzo/more-options-to-setup-k8s
Mention Kubernetes the Hard Way and more options
2018-04-19 07:32:20 -05:00
Bridget Kromhout
453992b55d Update setup-k8s.md 2018-04-19 07:31:25 -05:00
Bridget Kromhout
0b1067f95e Merge pull request #217 from jpetazzo/tolerations
Add a line about tolerations
2018-04-19 07:28:57 -05:00
Jérôme Petazzoni
21777cd95b Merge pull request #214 from BretFisher/patch-7
we can now add/remove networks from services 🤗
2018-04-19 06:35:09 -05:00
Jérôme Petazzoni
827ad3bdf2 Merge pull request #213 from BretFisher/patch-6
product name change 🙄
2018-04-19 06:34:41 -05:00
Jérôme Petazzoni
7818157cd0 Merge pull request #212 from BretFisher/patch-5
adding 3rd party registry options
2018-04-19 06:34:22 -05:00
Jérôme Petazzoni
d547241714 Merge pull request #210 from BretFisher/patch-3
fix image size via pic css class
2018-04-19 06:31:46 -05:00
Jérôme Petazzoni
c41e0e9286 Merge pull request #209 from BretFisher/patch-2
removed older notes about detach and service logs
2018-04-19 06:31:17 -05:00
Jérôme Petazzoni
c2d4784895 Merge pull request #208 from BretFisher/patch-1
removed mention of compose upg 1.6 to 1.7
2018-04-19 06:30:47 -05:00
Jérôme Petazzoni
11163965cf Merge pull request #204 from bridgetkromhout/clarify-off-by-one
Clarify an off-by-one amount of pods
2018-04-19 06:30:19 -05:00
Jérôme Petazzoni
e9df065820 Merge pull request #197 from bridgetkromhout/patch-only-daemonset
Patch only daemonset pods
2018-04-19 06:27:52 -05:00
Jerome Petazzoni
101ab0c11a Add a line about tolerations 2018-04-19 06:25:41 -05:00
Jérôme Petazzoni
25f081c0b7 Merge pull request #190 from bridgetkromhout/daemonset
Clarifications around daemonsets
2018-04-19 06:21:58 -05:00
Jérôme Petazzoni
700baef094 Merge pull request #188 from bridgetkromhout/clarify-kinds
kubectl get all missing-type workaround
2018-04-19 06:19:00 -05:00
Jerome Petazzoni
3faa586b16 Remove NOC joke 2018-04-19 06:14:54 -05:00
Jerome Petazzoni
8ca77fe8a4 Merge branch 'googl' of git://github.com/bridgetkromhout/container.training into bridgetkromhout-googl 2018-04-19 05:59:12 -05:00
Jerome Petazzoni
019829cc4d Mention Kubernetes the Hard Way and more options 2018-04-19 05:55:58 -05:00
Bret Fisher
a7f6bb223a we can now add/remove networks from services 🤗 2018-04-18 19:11:51 -04:00
Bret Fisher
eb77a8f328 product name change 🙄 2018-04-18 17:50:19 -04:00
Bret Fisher
5a484b2667 adding 3rd party registry options 2018-04-18 17:47:55 -04:00
Bret Fisher
982c35f8e7 add popular swarm reverse proxy options 2018-04-18 17:28:46 -04:00
Bret Fisher
adffe5f47f fix image size via pic css class
make swarm internals bigger!
2018-04-18 17:07:33 -04:00
Bret Fisher
f90a194b86 removed older notes about detach and service logs
Since these options have been around nearly a year, I removed some unneeded verbosity and consolidated the detach stuff.
2018-04-18 15:34:04 -04:00
Bret Fisher
99e9356e5d removed mention of compose upg 1.6 to 1.7
I feel like compose 1.7 was so long ago (over 2 years) that mentioning the logs change isn't necessary.
2018-04-18 15:18:17 -04:00
Bridget Kromhout
860840a4c1 Clarify off-by-one 2018-04-18 14:09:08 -05:00
Bridget Kromhout
ab63b76ae0 Clarify types bug 2018-04-18 13:59:26 -05:00
Bridget Kromhout
29bca726b3 Merge pull request #2 from jpetazzo/daemonset-proposal
Pod cleanup proposal
2018-04-18 12:21:34 -05:00
Bridget Kromhout
91297a68f8 Update daemonset.md 2018-04-18 12:20:53 -05:00
Jerome Petazzoni
2bea8ade63 Break down last kube chapter (it is too long) 2018-04-18 11:44:30 -05:00
Jerome Petazzoni
ec486cf78c Do not bind-mount localtime (fixes #207) 2018-04-18 03:33:07 -05:00
Jerome Petazzoni
63ac378866 Merge branch 'darkalia-add_helm_completion' 2018-04-17 16:13:58 -05:00
Jerome Petazzoni
35db387fc2 Add ':' for consistency 2018-04-17 16:13:44 -05:00
Jerome Petazzoni
a0f9baf5e7 Merge branch 'add_helm_completion' of git://github.com/darkalia/container.training into darkalia-add_helm_completion 2018-04-17 16:12:52 -05:00
Jerome Petazzoni
4e54a79abc Pod cleanup proposal 2018-04-17 16:07:24 -05:00
Jérôme Petazzoni
37bea7158f Merge pull request #181 from jpetazzo/more-info-on-labels-and-rollouts
Label use-cases and rollouts
2018-04-17 15:18:24 -05:00
Jerome Petazzoni
618fe4e959 Clarify the grace period when shutting down pods 2018-04-17 02:24:07 -05:00
Jerome Petazzoni
0c73144977 Merge branch 'jgarrouste-patch-1' 2018-04-16 08:03:34 -05:00
Jerome Petazzoni
ff8c3b1595 Remove -o name 2018-04-16 08:03:09 -05:00
Jerome Petazzoni
b756d0d0dc Merge branch 'patch-1' of git://github.com/jgarrouste/container.training into jgarrouste-patch-1 2018-04-16 08:02:41 -05:00
Jerome Petazzoni
23147fafd1 Paris -> past sessions 2018-04-15 15:57:46 -05:00
Jérémy GARROUSTE
b036b5f24b Delete pods with ''-l run-rng' and remove xargs
Delete pods with ''-l run-rng' and remove xargs
2018-04-15 16:37:10 +02:00
Benjamin Allot
3b9014f750 Add helm completion 2018-04-13 16:40:42 +02:00
Bridget Kromhout
de87743c6a Clarify an off-by-one amount of pods 2018-04-12 16:10:38 -05:00
Bridget Kromhout
74f980437f Clarify that clusters can be of arbitrary size 2018-04-12 07:31:49 -05:00
Bridget Kromhout
6711ba06d9 Patch only daemonset pods 2018-04-11 21:09:46 -05:00
Bridget Kromhout
f97bd2b357 googl to cloudflare 2018-04-11 13:36:00 -05:00
Bridget Kromhout
3f54f23535 Clarifying cleanup 2018-04-10 16:45:50 -05:00
Bridget Kromhout
827d10dd49 Clarifying ambiguous labels on pods 2018-04-10 15:48:54 -05:00
Bridget Kromhout
1b7a072f25 Bump version and add link 2018-04-10 15:29:14 -05:00
Bridget Kromhout
eb1b3c8729 Clarify types 2018-04-10 14:17:27 -05:00
Bridget Kromhout
40e4678a45 goo.gl deprecation 2018-04-10 12:41:07 -05:00
175 changed files with 10208 additions and 992 deletions

2
.gitignore vendored

@@ -8,4 +8,6 @@ prepare-vms/settings.yaml
prepare-vms/tags
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
node_modules


@@ -292,15 +292,31 @@ If there is a bug and you can't even reproduce it:
sorry. It is probably an Heisenbug. We can't act on it
until it's reproducible, alas.
If you have attended this workshop and have feedback,
or if you want somebody to deliver that workshop at your
conference or for your company: you can contact one of us!
# “Please teach us!”
If you have attended one of these workshops, and want
your team or organization to attend a similar one, you
can look at the list of upcoming events on
http://container.training/.
You are also welcome to reuse these materials to run
your own workshop, for your team or even at a meetup
or conference. In that case, you might enjoy watching
[Bridget Kromhout's talk at KubeCon 2018 Europe](
https://www.youtube.com/watch?v=mYsp_cGY2O0), explaining
precisely how to run such a workshop yourself.
Finally, you can also contact the following persons,
who are experienced speakers, are familiar with the
material, and are available to deliver these workshops
at your conference or for your company:
- jerome dot petazzoni at gmail dot com
- bret at bretfisher dot com
If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!
(If you are willing and able to deliver such workshops,
feel free to submit a PR to add your name to that list!)
**Thank you!**


@@ -28,5 +28,5 @@ def rng(how_many_bytes):
if __name__ == "__main__":
app.run(host="0.0.0.0", port=80)
app.run(host="0.0.0.0", port=80, threaded=False)

62
k8s/consul.yaml Normal file

@@ -0,0 +1,62 @@
apiVersion: v1
kind: Service
metadata:
name: consul
spec:
ports:
- port: 8500
name: http
selector:
app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
spec:
serviceName: consul
replicas: 3
selector:
matchLabels:
app: consul
template:
metadata:
labels:
app: consul
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- consul
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
containers:
- name: consul
image: "consul:1.2.2"
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- "agent"
- "-bootstrap-expect=3"
- "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
- "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
- "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
- "-client=0.0.0.0"
- "-data-dir=/consul/data"
- "-server"
- "-ui"
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave

28
k8s/docker-build.yaml Normal file

@@ -0,0 +1,28 @@
apiVersion: v1
kind: Pod
metadata:
name: build-image
spec:
restartPolicy: OnFailure
containers:
- name: docker-build
image: docker
env:
- name: REGISTRY_PORT
value: #"30000"
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
mkdir /workspace &&
git clone https://github.com/jpetazzo/container.training /workspace &&
docker build -t localhost:$REGISTRY_PORT/worker /workspace/dockercoins/worker &&
docker push localhost:$REGISTRY_PORT/worker
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
volumes:
- name: docker-socket
hostPath:
path: /var/run/docker.sock

222
k8s/efk.yaml Normal file

@@ -0,0 +1,222 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: fluentd
labels:
k8s-app: fluentd-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: fluentd-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:elasticsearch
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
# X-Pack Authentication
# =====================
- name: FLUENT_ELASTICSEARCH_USER
value: "elastic"
- name: FLUENT_ELASTICSEARCH_PASSWORD
value: "changeme"
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: null
generation: 1
labels:
run: elasticsearch
name: elasticsearch
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/elasticsearch
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: elasticsearch
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: elasticsearch
spec:
containers:
- image: elasticsearch:5.6.8
imagePullPolicy: IfNotPresent
name: elasticsearch
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
run: elasticsearch
name: elasticsearch
selfLink: /api/v1/namespaces/default/services/elasticsearch
spec:
ports:
- port: 9200
protocol: TCP
targetPort: 9200
selector:
run: elasticsearch
sessionAffinity: None
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: null
generation: 1
labels:
run: kibana
name: kibana
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/kibana
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: kibana
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: kibana
spec:
containers:
- env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch:9200/
image: kibana:5.6.8
imagePullPolicy: Always
name: kibana
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
run: kibana
name: kibana
selfLink: /api/v1/namespaces/default/services/kibana
spec:
externalTrafficPolicy: Cluster
ports:
- port: 5601
protocol: TCP
targetPort: 5601
selector:
run: kibana
sessionAffinity: None
type: NodePort


@@ -0,0 +1,14 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system

18
k8s/haproxy.cfg Normal file

@@ -0,0 +1,18 @@
global
daemon
maxconn 256
defaults
mode tcp
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend the-frontend
bind *:80
default_backend the-backend
backend the-backend
server google.com-80 google.com:80 maxconn 32 check
server bing.com-80 bing.com:80 maxconn 32 check

16
k8s/haproxy.yaml Normal file

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
name: haproxy
spec:
volumes:
- name: config
configMap:
name: haproxy
containers:
- name: haproxy
image: haproxy
volumeMounts:
- name: config
mountPath: /usr/local/etc/haproxy/

14
k8s/ingress.yaml Normal file

@@ -0,0 +1,14 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cheddar
spec:
rules:
- host: cheddar.A.B.C.D.nip.io
http:
paths:
- path: /
backend:
serviceName: cheddar
servicePort: 80

29
k8s/kaniko-build.yaml Normal file

@@ -0,0 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
name: kaniko-build
spec:
initContainers:
- name: git-clone
image: alpine
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
git clone git://github.com/jpetazzo/container.training /workspace
volumeMounts:
- name: workspace
mountPath: /workspace
containers:
- name: build-image
image: gcr.io/kaniko-project/executor:latest
args:
- "--context=/workspace/dockercoins/rng"
- "--skip-tls-verify"
- "--destination=registry:5000/rng-kaniko:latest"
volumeMounts:
- name: workspace
mountPath: /workspace
volumes:
- name: workspace


@@ -0,0 +1,167 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard


@@ -0,0 +1,14 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-testcurl-for-testweb
spec:
podSelector:
matchLabels:
run: testweb
ingress:
- from:
- podSelector:
matchLabels:
run: testcurl


@@ -0,0 +1,10 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-all-for-testweb
spec:
podSelector:
matchLabels:
run: testweb
ingress: []


@@ -0,0 +1,22 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-from-other-namespaces
spec:
podSelector:
matchLabels:
ingress:
- from:
- podSelector: {}
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-webui
spec:
podSelector:
matchLabels:
run: webui
ingress:
- from: []


@@ -0,0 +1,21 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-with-volume
spec:
volumes:
- name: www
containers:
- name: nginx
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html/
- name: git
image: alpine
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/
restartPolicy: OnFailure

580
k8s/portworx.yaml Normal file

@@ -0,0 +1,580 @@
# SOURCE: https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop0&c=px-workshop&stork=true&lh=true
apiVersion: v1
kind: ConfigMap
metadata:
name: stork-config
namespace: kube-system
data:
policy.cfg: |-
{
"kind": "Policy",
"apiVersion": "v1",
"extenders": [
{
"urlPrefix": "http://stork-service.kube-system.svc:8099",
"apiVersion": "v1beta1",
"filterVerb": "filter",
"prioritizeVerb": "prioritize",
"weight": 5,
"enableHttps": false,
"nodeCacheCapable": false
}
]
}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "delete"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["create", "list", "watch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshotdatas"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["*"]
resources: ["deployments", "deployments/extensions"]
verbs: ["list", "get", "watch", "patch", "update", "initialize"]
- apiGroups: ["*"]
resources: ["statefulsets", "statefulsets/extensions"]
verbs: ["list", "get", "watch", "patch", "update", "initialize"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-role-binding
subjects:
- kind: ServiceAccount
name: stork-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-role
apiGroup: rbac.authorization.k8s.io
---
kind: Service
apiVersion: v1
metadata:
name: stork-service
namespace: kube-system
spec:
selector:
name: stork
ports:
- protocol: TCP
port: 8099
targetPort: 8099
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
tier: control-plane
name: stork
namespace: kube-system
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
replicas: 3
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
name: stork
tier: control-plane
spec:
containers:
- command:
- /stork
- --driver=pxd
- --verbose
- --leader-elect=true
- --health-monitor-interval=120
imagePullPolicy: Always
image: openstorage/stork:1.1.3
resources:
requests:
cpu: '0.1'
name: stork
hostPID: false
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork
topologyKey: "kubernetes.io/hostname"
serviceAccountName: stork-account
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: stork-snapshot-sc
provisioner: stork-snapshot
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: stork-scheduler-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-scheduler-role
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "update"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch", "update"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create"]
- apiGroups: [""]
resourceNames: ["kube-scheduler"]
resources: ["endpoints"]
verbs: ["delete", "get", "patch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list", "watch"]
- apiGroups: [""]
resources: ["bindings", "pods/binding"]
verbs: ["create"]
- apiGroups: [""]
resources: ["pods/status"]
verbs: ["patch", "update"]
- apiGroups: [""]
resources: ["replicationcontrollers", "services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["app", "extensions"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: stork-scheduler-role-binding
subjects:
- kind: ServiceAccount
name: stork-scheduler-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: stork-scheduler-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
name: stork-scheduler
namespace: kube-system
spec:
replicas: 3
template:
metadata:
labels:
component: scheduler
tier: control-plane
name: stork-scheduler
spec:
containers:
- command:
- /usr/local/bin/kube-scheduler
- --address=0.0.0.0
- --leader-elect=true
- --scheduler-name=stork
- --policy-configmap=stork-config
- --policy-configmap-namespace=kube-system
- --lock-object-name=stork-scheduler
image: gcr.io/google_containers/kube-scheduler-amd64:v1.11.2
livenessProbe:
httpGet:
path: /healthz
port: 10251
initialDelaySeconds: 15
name: stork-scheduler
readinessProbe:
httpGet:
path: /healthz
port: 10251
resources:
requests:
cpu: '0.1'
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "name"
operator: In
values:
- stork-scheduler
topologyKey: "kubernetes.io/hostname"
hostPID: false
serviceAccountName: stork-scheduler-account
---
kind: Service
apiVersion: v1
metadata:
name: portworx-service
namespace: kube-system
labels:
name: portworx
spec:
selector:
name: portworx
ports:
- name: px-api
protocol: TCP
port: 9001
targetPort: 9001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-get-put-list-role
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["watch", "get", "update", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "update", "create"]
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
resourceNames: ["privileged"]
verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-role-binding
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: node-get-put-list-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Namespace
metadata:
name: portworx
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role
namespace: portworx
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role-binding
namespace: portworx
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: Role
name: px-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: portworx
namespace: kube-system
annotations:
portworx.com/install-source: "https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop0&c=px-workshop&stork=true&lh=true"
spec:
minReadySeconds: 0
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
name: portworx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: px/enabled
operator: NotIn
values:
- "false"
- key: node-role.kubernetes.io/master
operator: DoesNotExist
hostNetwork: true
hostPID: false
containers:
- name: portworx
image: portworx/oci-monitor:1.4.2.2
imagePullPolicy: Always
args:
["-c", "px-workshop", "-s", "/dev/loop0", "-b",
"-x", "kubernetes"]
env:
- name: "PX_TEMPLATE_VERSION"
value: "v4"
livenessProbe:
periodSeconds: 30
initialDelaySeconds: 840 # allow image pull in slow networks
httpGet:
host: 127.0.0.1
path: /status
port: 9001
readinessProbe:
periodSeconds: 10
httpGet:
host: 127.0.0.1
path: /health
port: 9015
terminationMessagePath: "/tmp/px-termination-log"
securityContext:
privileged: true
volumeMounts:
- name: dockersock
mountPath: /var/run/docker.sock
- name: etcpwx
mountPath: /etc/pwx
- name: optpwx
mountPath: /opt/pwx
- name: proc1nsmount
mountPath: /host_proc/1/ns
- name: sysdmount
mountPath: /etc/systemd/system
- name: diagsdump
mountPath: /var/cores
- name: journalmount1
mountPath: /var/run/log
readOnly: true
- name: journalmount2
mountPath: /var/log
readOnly: true
- name: dbusmount
mountPath: /var/run/dbus
restartPolicy: Always
serviceAccountName: px-account
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: etcpwx
hostPath:
path: /etc/pwx
- name: optpwx
hostPath:
path: /opt/pwx
- name: proc1nsmount
hostPath:
path: /proc/1/ns
- name: sysdmount
hostPath:
path: /etc/systemd/system
- name: diagsdump
hostPath:
path: /var/cores
- name: journalmount1
hostPath:
path: /var/run/log
- name: journalmount2
hostPath:
path: /var/log
- name: dbusmount
hostPath:
path: /var/run/dbus
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-lh-account
namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role-binding
namespace: kube-system
subjects:
- kind: ServiceAccount
name: px-lh-account
namespace: kube-system
roleRef:
kind: Role
name: px-lh-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
type: NodePort
ports:
- name: http
port: 80
nodePort: 32678
- name: https
port: 443
nodePort: 32679
selector:
tier: px-web-console
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: px-lighthouse
namespace: kube-system
labels:
tier: px-web-console
spec:
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
tier: px-web-console
replicas: 1
template:
metadata:
labels:
tier: px-web-console
spec:
initContainers:
- name: config-init
image: portworx/lh-config-sync:0.2
imagePullPolicy: Always
args:
- "init"
volumeMounts:
- name: config
mountPath: /config/lh
containers:
- name: px-lighthouse
image: portworx/px-lighthouse:1.5.0
imagePullPolicy: Always
ports:
- containerPort: 80
- containerPort: 443
volumeMounts:
- name: config
mountPath: /config/lh
- name: config-sync
image: portworx/lh-config-sync:0.2
imagePullPolicy: Always
args:
- "sync"
volumeMounts:
- name: config
mountPath: /config/lh
serviceAccountName: px-lh-account
volumes:
- name: config
emptyDir: {}
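Note: the manifest above also wires up Stork's snapshot support (the `stork-snapshot-sc` StorageClass and RBAC for the `volumesnapshot.external-storage.k8s.io` resources). As a hedged sketch only — assuming the alpha external-storage snapshot API that Stork 1.1 supports, and a hypothetical snapshot name — requesting a snapshot of the PVC created by the `postgres` StatefulSet below could look like:

```yaml
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-snapshot            # hypothetical name
spec:
  # PVC created by the StatefulSet's volumeClaimTemplate (template name + pod name)
  persistentVolumeClaimName: postgres-postgres-0
```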

30
k8s/postgres.yaml Normal file
View File

@@ -0,0 +1,30 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
selector:
matchLabels:
app: postgres
serviceName: postgres
template:
metadata:
labels:
app: postgres
spec:
schedulerName: stork
containers:
- name: postgres
image: postgres:10.5
volumeMounts:
- mountPath: /var/lib/postgresql
name: postgres
volumeClaimTemplates:
- metadata:
name: postgres
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
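The StatefulSet above sets `serviceName: postgres`, but this file does not define that Service. A minimal sketch of the matching headless Service (assuming the default PostgreSQL port) would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None          # headless: gives each pod a stable DNS name
  selector:
    app: postgres
  ports:
    - port: 5432
```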

15
k8s/registry.yaml Normal file
View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
name: registry
spec:
containers:
- name: registry
image: registry
env:
- name: REGISTRY_HTTP_ADDR
valueFrom:
configMapKeyRef:
name: registry
key: http.addr
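This pod reads `REGISTRY_HTTP_ADDR` from a ConfigMap named `registry` that is not part of this file. A minimal sketch of that ConfigMap — the listen address is an assumption, adjust it to your setup — would be:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: registry
data:
  http.addr: "0.0.0.0:5000"   # assumed value: listen on all interfaces, default registry port
```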

67
k8s/socat.yaml Normal file
View File

@@ -0,0 +1,67 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
creationTimestamp: null
generation: 1
labels:
run: socat
name: socat
namespace: kube-system
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
spec:
replicas: 1
selector:
matchLabels:
run: socat
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: socat
spec:
containers:
- args:
- sh
- -c
- apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard:443,verify=0
image: alpine
imagePullPolicy: Always
name: socat
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
run: socat
name: socat
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/socat
spec:
externalTrafficPolicy: Cluster
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
run: socat
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}

11
k8s/storage-class.yaml Normal file
View File

@@ -0,0 +1,11 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-replicated
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "2"
priority_io: "high"
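Since this class is annotated as the default, a PersistentVolumeClaim that does not specify a `storageClassName` will get a 2-replica Portworx volume. A minimal sketch (claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data            # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  # no storageClassName: the default class (portworx-replicated) applies
```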

100
k8s/traefik.yaml Normal file
View File

@@ -0,0 +1,100 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system
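With this DaemonSet running, Traefik picks up Ingress resources from all namespaces. A minimal sketch of an Ingress routed by it (host, service name, and port are illustrative) could be:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app                        # hypothetical name
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: my-app.example.com        # hypothetical host
      http:
        paths:
          - path: /
            backend:
              serviceName: my-app     # existing Service to route to
              servicePort: 80
```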

View File

@@ -93,7 +93,7 @@ wrap Run this program in a container
- The `./workshopctl` script can be executed directly.
- It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms).
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. For now, this is hard coded.
- During `deploy` it will create the `docker` user with password `training`, which is printed on the cards for students. This can be configured with the `docker_user_password` property in the settings file.
### Example Steps to Launch a Batch of AWS Instances for a Workshop
@@ -103,7 +103,7 @@ wrap Run this program in a container
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires good connection to run all the parallel SSH connections, up to 100 parallel (ProTip: create dedicated management instance in same AWS region where you run all these utils from)
- Run `./workshopctl pull-images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` to generate PDF/HTML files to print, cut, and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
@@ -210,7 +210,7 @@ The `postprep.py` file will be copied via parallel-ssh to all of the VMs and exe
#### Pre-pull images
$ ./workshopctl pull-images TAG
$ ./workshopctl pull_images TAG
#### Generate cards

View File

@@ -1,18 +1,20 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "avril2018.container.training" -%}
{%- set url = "http://container.training/" -%}
{%- set pagesize = 12 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "formation" -%}
{%- set cluster_or_machine = "votre VM" -%}
{%- set machine_is_or_machines_are = "Votre VM" -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "formation" -%}
{%- set cluster_or_machine = "votre cluster" -%}
{%- set machine_is_or_machines_are = "Votre cluster" -%}
{%- set workshop_name = "orchestration workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- set image_src = image_src_swarm -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
@@ -73,9 +75,9 @@ img {
<div>
<p>
Voici les informations pour vous connecter à
{{ cluster_or_machine }} pour cette formation.
Vous pouvez vous connecter avec n'importe quel client SSH.
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
@@ -83,19 +85,19 @@ img {
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
{{ machine_is_or_machines_are }} :
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>Les slides sont à l'adresse suivante :
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>

View File

@@ -7,7 +7,6 @@ services:
working_dir: /root/prepare-vms
volumes:
- $HOME/.aws/:/root/.aws/
- /etc/localtime:/etc/localtime:ro
- $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
- $PWD/:/root/prepare-vms/
environment:

View File

@@ -48,7 +48,7 @@ _cmd_cards() {
rm -f ips.html ips.pdf
# This will generate two files in the base dir: ips.pdf and ips.html
python lib/ips-txt-to-html.py $SETTINGS
lib/ips-txt-to-html.py $SETTINGS
for f in ips.html ips.pdf; do
# Remove old versions of cards if they exist
@@ -168,6 +168,22 @@ _cmd_kube() {
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
fi"
# Install stern
pssh "
if [ ! -x /usr/local/bin/stern ]; then
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
stern --completion bash | sudo tee /etc/bash_completion.d/stern
fi"
# Install helm
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | sudo bash
helm completion bash | sudo tee /etc/bash_completion.d/helm
fi"
sep "Done"
}
@@ -393,9 +409,23 @@ pull_tag() {
ubuntu:latest \
fedora:latest \
centos:latest \
elasticsearch:2 \
postgres \
redis \
alpine \
registry \
nicolaka/netshoot \
jpetazzo/trainingwheels \
golang \
training/namer \
dockercoins/hasher \
dockercoins/rng \
dockercoins/webui \
dockercoins/worker \
logstash \
prom/node-exporter \
google/cadvisor \
dockersamples/visualizer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'

View File

@@ -13,6 +13,7 @@ COMPOSE_VERSION = config["compose_version"]
MACHINE_VERSION = config["machine_version"]
CLUSTER_SIZE = config["clustersize"]
ENGINE_VERSION = config["engine_version"]
DOCKER_USER_PASSWORD = config["docker_user_password"]
#################################
@@ -54,9 +55,9 @@ system("curl --silent {} > /tmp/ipv4".format(ipv4_retrieval_endpoint))
ipv4 = open("/tmp/ipv4").read()
# Add a "docker" user with password "training"
# Add a "docker" user with password coming from the settings
system("id docker || sudo useradd -d /home/docker -m -s /bin/bash docker")
system("echo docker:training | sudo chpasswd")
system("echo docker:{} | sudo chpasswd".format(DOCKER_USER_PASSWORD))
# Fancy prompt courtesy of @soulshake.
system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL
@@ -108,7 +109,7 @@ system("sudo chmod +x /usr/local/bin/docker-machine")
system("docker-machine version")
system("sudo apt-get remove -y --purge dnsmasq-base")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh")
system("sudo apt-get -qy install python-setuptools pssh apache2-utils httping htop unzip mosh tree")
### Wait for Docker to be up.
### (If we don't do this, Docker will not be responsive during the next step.)

View File

@@ -22,3 +22,6 @@ engine_version: test
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.18.0
machine_version: 0.13.0
# Password used to connect with the "docker user"
docker_user_password: training

View File

@@ -7,7 +7,7 @@ clustersize: 1
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
@@ -20,5 +20,8 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.20.1
machine_version: 0.14.0
compose_version: 1.22.0
machine_version: 0.15.0
# Password used to connect with the "docker user"
docker_user_password: training

View File

@@ -85,7 +85,7 @@ img {
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">training</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>

View File

@@ -20,5 +20,8 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.20.1
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training

View File

@@ -1,13 +1,13 @@
# This file is passed by trainer-cli to scripts/ips-txt-to-html.py
# Number of VMs per cluster
clustersize: 5
clustersize: 3
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
@@ -20,5 +20,8 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.20.1
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training

1
slides/_redirects Normal file
View File

@@ -0,0 +1 @@
/ /weka.yml.html 200!

View File

@@ -29,6 +29,10 @@ class State(object):
self.interactive = True
self.verify_status = False
self.simulate_type = True
self.switch_desktop = False
self.sync_slides = False
self.open_links = False
self.run_hidden = True
self.slide = 1
self.snippet = 0
@@ -37,6 +41,10 @@ class State(object):
self.interactive = bool(data["interactive"])
self.verify_status = bool(data["verify_status"])
self.simulate_type = bool(data["simulate_type"])
self.switch_desktop = bool(data["switch_desktop"])
self.sync_slides = bool(data["sync_slides"])
self.open_links = bool(data["open_links"])
self.run_hidden = bool(data["run_hidden"])
self.slide = int(data["slide"])
self.snippet = int(data["snippet"])
@@ -46,6 +54,10 @@ class State(object):
interactive=self.interactive,
verify_status=self.verify_status,
simulate_type=self.simulate_type,
switch_desktop=self.switch_desktop,
sync_slides=self.sync_slides,
open_links=self.open_links,
run_hidden=self.run_hidden,
slide=self.slide,
snippet=self.snippet,
), f, default_flow_style=False)
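For reference, with the new flags added above, the state file written by this `save()` call (a plain YAML dump; key order assumes PyYAML's default alphabetical sorting) would look roughly like:

```yaml
interactive: true
open_links: false
run_hidden: true
simulate_type: true
slide: 1
snippet: 0
switch_desktop: false
sync_slides: false
verify_status: false
```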
@@ -122,14 +134,20 @@ class Slide(object):
def focus_slides():
if not state.switch_desktop:
return
subprocess.check_output(["i3-msg", "workspace", "3"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def focus_terminal():
if not state.switch_desktop:
return
subprocess.check_output(["i3-msg", "workspace", "2"])
subprocess.check_output(["i3-msg", "workspace", "1"])
def focus_browser():
if not state.switch_desktop:
return
subprocess.check_output(["i3-msg", "workspace", "4"])
subprocess.check_output(["i3-msg", "workspace", "1"])
@@ -307,17 +325,21 @@ while True:
slide = slides[state.slide]
snippet = slide.snippets[state.snippet-1] if state.snippet else None
click.clear()
print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}]"
print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}] "
"[switch_desktop:{}] [sync_slides:{}] [open_links:{}] [run_hidden:{}]"
.format(state.slide, len(slides)-1,
state.snippet, len(slide.snippets) if slide.snippets else 0,
state.simulate_type, state.verify_status))
state.simulate_type, state.verify_status,
state.switch_desktop, state.sync_slides,
state.open_links, state.run_hidden))
print(hrule())
if snippet:
print(slide.content.replace(snippet.content, ansi(7)(snippet.content)))
focus_terminal()
else:
print(slide.content)
subprocess.check_output(["./gotoslide.js", str(slide.number)])
if state.sync_slides:
subprocess.check_output(["./gotoslide.js", str(slide.number)])
focus_slides()
print(hrule())
if state.interactive:
@@ -326,6 +348,10 @@ while True:
print("n/→ Next")
print("s Simulate keystrokes")
print("v Validate exit status")
print("d Switch desktop")
print("k Sync slides")
print("o Open links")
print("h Run hidden commands")
print("g Go to a specific slide")
print("q Quit")
print("c Continue non-interactively until next error")
@@ -341,6 +367,14 @@ while True:
state.simulate_type = not state.simulate_type
elif command == "v":
state.verify_status = not state.verify_status
elif command == "d":
state.switch_desktop = not state.switch_desktop
elif command == "k":
state.sync_slides = not state.sync_slides
elif command == "o":
state.open_links = not state.open_links
elif command == "h":
state.run_hidden = not state.run_hidden
elif command == "g":
state.slide = click.prompt("Enter slide number", type=int)
state.snippet = 0
@@ -366,7 +400,7 @@ while True:
logging.info("Running with method {}: {}".format(method, data))
if method == "keys":
send_keys(data)
elif method == "bash":
elif method == "bash" or (method == "hide" and state.run_hidden):
# Make sure that we're ready
wait_for_prompt()
# Strip leading spaces
@@ -405,11 +439,12 @@ while True:
screen = capture_pane()
url = data.replace("/node1", "/{}".format(IPADDR))
# This should probably be adapted to run on different OS
subprocess.check_output(["xdg-open", url])
focus_browser()
if state.interactive:
print("Press any key to continue to next step...")
click.getchar()
if state.open_links:
subprocess.check_output(["xdg-open", url])
focus_browser()
if state.interactive:
print("Press any key to continue to next step...")
click.getchar()
else:
logging.warning("Unknown method {}: {!r}".format(method, data))
move_forward()

View File

@@ -0,0 +1 @@
click

View File

@@ -1,6 +1,8 @@
#!/bin/sh
set -e
case "$1" in
once)
./index.py
for YAML in *.yml; do
./markmaker.py $YAML > $YAML.html || {
rm $YAML.html
@@ -15,6 +17,13 @@ once)
;;
forever)
set +e
# check if entr is installed
if ! command -v entr >/dev/null; then
echo >&2 "First install 'entr' with apt, brew, etc."
exit
fi
# There is a weird bug in entr, at least on MacOS,
# where it doesn't restore the terminal to a clean
# state when exiting. So let's try to work around

View File

@@ -1,19 +0,0 @@
class: title, self-paced
@@TITLE@@
.nav[*Self-paced version*]
---
class: title, in-person
@@TITLE@@<br/></br>
.footnote[
**WiFI: `ArtyLoft`** ou **`ArtyLoft 5 GHz`**
<br/>
**Mot de passe: `TFLEVENT5`**
**Slides: http://avril2018.container.training/**
]

View File

@@ -34,18 +34,6 @@ In this section, we will see more Dockerfile commands.
---
## The `MAINTAINER` instruction
The `MAINTAINER` instruction tells you who wrote the `Dockerfile`.
```dockerfile
MAINTAINER Docker Education Team <education@docker.com>
```
It's optional but recommended.
---
## The `RUN` instruction
The `RUN` instruction can be specified in two ways.
@@ -367,7 +355,7 @@ class: extra-details
## Overriding the `ENTRYPOINT` instruction
The entry point can be overriden as well.
The entry point can be overridden as well.
```bash
$ docker run -it training/ls
@@ -428,5 +416,4 @@ ONBUILD COPY . /src
```
* You can't chain `ONBUILD` instructions with `ONBUILD`.
* `ONBUILD` can't be used to trigger `FROM` and `MAINTAINER`
instructions.
* `ONBUILD` can't be used to trigger `FROM` instructions.

View File

@@ -40,6 +40,8 @@ ambassador containers.
---
class: pic
![ambassador](images/ambassador-diagram.png)
---

View File

@@ -117,7 +117,7 @@ CONTAINER ID IMAGE ... CREATED STATUS ...
Many Docker commands will work on container IDs: `docker stop`, `docker rm`...
If we want to list only the IDs of our containers (without the other colums
If we want to list only the IDs of our containers (without the other columns
or the header line),
we can use the `-q` ("Quiet", "Quick") flag:

View File

@@ -49,7 +49,7 @@ Before diving in, let's see a small example of Compose in action.
---
## Compose in action
class: pic
![composeup](images/composeup.gif)
@@ -60,6 +60,10 @@ Before diving in, let's see a small example of Compose in action.
If you are using the official training virtual machines, Compose has been
pre-installed.
If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them.
If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`.
You can always check that it is installed by running:
```bash
@@ -135,22 +139,33 @@ services:
---
## Compose file versions
## Compose file structure
Version 1 directly has the various containers (`www`, `redis`...) at the top level of the file.
A Compose file has multiple sections:
Version 2 has multiple sections:
* `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.)
* `version` is mandatory and should be `"2"`.
* `services` is mandatory and corresponds to the content of the version 1 format.
* `services` is mandatory. A service is one or more replicas of the same image running as containers.
* `networks` is optional and indicates to which networks containers should be connected.
<br/>(By default, containers will be connected on a private, per-app network.)
<br/>(By default, containers will be connected on a private, per-compose-file network.)
* `volumes` is optional and can define volumes to be used and/or shared by the containers.
Version 3 adds support for deployment options (scaling, rolling updates, etc.)
---
## Compose file versions
* Version 1 is legacy and shouldn't be used.
(If you see a Compose file without `version` and `services`, it's a legacy v1 file.)
* Version 2 added support for networks and volumes.
* Version 3 added support for deployment options (scaling, rolling updates, etc).
The [Docker documentation](https://docs.docker.com/compose/compose-file/)
has excellent information about the Compose file format if you need to know more about versions.
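Putting the sections described above together, a minimal version 2 file (service names and images are illustrative) could look like:

```yaml
version: "2"

services:
  www:
    build: www          # build from the ./www directory
    ports:
      - "8000:5000"
  redis:
    image: redis        # use a stock image
```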
---
@@ -260,6 +275,8 @@ Removing trainingwheels_www_1 ... done
Removing trainingwheels_redis_1 ... done
```
Use `docker-compose down -v` to remove everything including volumes.
---
## Special handling of volumes

View File

@@ -73,7 +73,7 @@ Containers also exist (sometimes with other names) on Windows, macOS, Solaris, F
## LXC
* The venerable ancestor (first realeased in 2008).
* The venerable ancestor (first released in 2008).
* Docker initially relied on it to execute containers.

View File

@@ -65,9 +65,17 @@ eb0eeab782f4 host host
* A network is managed by a *driver*.
* All the drivers that we have seen before are available.
* The built-in drivers include:
* A new multi-host driver, *overlay*, is available out of the box.
* `bridge` (default)
* `none`
* `host`
* `macvlan`
* A multi-host driver, *overlay*, is available out of the box (for Swarm clusters).
* More drivers can be provided by plugins (OVS, VLAN...)
@@ -75,6 +83,8 @@ eb0eeab782f4 host host
---
class: extra-details
## Differences with the CNI
* CNI = Container Network Interface
@@ -87,6 +97,22 @@ eb0eeab782f4 host host
---
class: pic
## Single container in a Docker network
![bridge0](images/bridge1.png)
---
class: pic
## Two containers on two Docker networks
![bridge3](images/bridge2.png)
---
## Creating a network
Let's create a network called `dev`.
@@ -284,7 +310,7 @@ since we wiped out the old Redis container).
---
class: x-extra-details
class: extra-details
## Names are *local* to each network
@@ -324,7 +350,7 @@ class: extra-details
Create the `prod` network.
```bash
$ docker create network prod
$ docker network create prod
5a41562fecf2d8f115bedc16865f7336232a04268bdf2bd816aecca01b68d50c
```
@@ -472,11 +498,13 @@ b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09
* If containers span multiple hosts, we need an *overlay* network to connect them together.
* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging VXLAN.
* Docker ships with a default network plugin, `overlay`, implementing an overlay network leveraging
VXLAN, *enabled with Swarm Mode*.
* Other plugins (Weave, Calico...) can provide overlay networks as well.
* Once you have an overlay network, *all the features that we've used in this chapter work identically.*
* Once you have an overlay network, *all the features that we've used in this chapter work identically
across multiple hosts.*
---
@@ -514,13 +542,174 @@ General idea:
---
## Section summary
## Connecting and disconnecting dynamically
We've learned how to:
* So far, we have specified which network to use when starting the container.
* Create private networks for groups of containers.
* The Docker Engine also allows us to connect and disconnect networks while the container runs.
* Assign IP addresses to containers.
* This feature is exposed through the Docker API, and through two Docker CLI commands:
* Use container naming to implement service discovery.
* `docker network connect <network> <container>`
* `docker network disconnect <network> <container>`
---
## Dynamically connecting to a network
* We have a container named `es` connected to a network named `dev`.
* Let's start a simple alpine container on the default network:
```bash
$ docker run -ti alpine sh
/ #
```
* In this container, try to ping the `es` container:
```bash
/ # ping es
ping: bad address 'es'
```
This doesn't work, but we will change that by connecting the container.
---
## Finding the container ID and connecting it
* Figure out the ID of our alpine container; here are two methods:
* looking at `/etc/hostname` in the container,
* running `docker ps -lq` on the host.
* Run the following command on the host:
```bash
$ docker network connect dev `<container_id>`
```
---
## Checking what we did
* Try again to `ping es` from the container.
* It should now work correctly:
```bash
/ # ping es
PING es (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.376 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.130 ms
^C
```
* Interrupt it with Ctrl-C.
---
## Looking at the network setup in the container
We can look at the list of network interfaces with `ifconfig`, `ip a`, or `ip l`:
.small[
```bash
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
20: eth1@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:14:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.4/16 brd 172.20.255.255 scope global eth1
valid_lft forever preferred_lft forever
/ #
```
]
Each network connection is materialized with a virtual network interface.
As we can see, we can be connected to multiple networks at the same time.
---
## Disconnecting from a network
* Let's try the symmetrical command to disconnect the container:
```bash
$ docker network disconnect dev <container_id>
```
* From now on, if we try to ping `es`, it will not resolve:
```bash
/ # ping es
ping: bad address 'es'
```
* Trying to ping the IP address directly won't work either:
```bash
/ # ping 172.20.0.3
... (nothing happens until we interrupt it with Ctrl-C)
```
---
class: extra-details
## Network aliases are scoped per network
* Each network has its own set of network aliases.
* We saw this earlier: `es` resolves to different addresses in `dev` and `prod`.
* If we are connected to multiple networks, the resolver looks up names in each of them
(as of Docker Engine 18.03, in the order in which the container joined them) and stops as soon as the name
is found.
* Therefore, if we are connected to both `dev` and `prod`, resolving `es` will **not**
give us the addresses of all the `es` services; but only the ones in `dev` or `prod`.
* However, we can lookup `es.dev` or `es.prod` if we need to.
---
class: extra-details
## Finding out about our networks and names
* We can do reverse DNS lookups on containers' IP addresses.
* If the IP address belongs to a network (other than the default bridge), the result will be:
```
name-or-first-alias-or-container-id.network-name
```
* Example:
.small[
```bash
$ docker run -ti --net prod --net-alias hello alpine
/ # apk add --no-cache drill
...
OK: 5 MiB in 13 packages
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:15:00:03
inet addr:`172.21.0.3` Bcast:172.21.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
...
/ # drill -t ptr `3.0.21.172`.in-addr.arpa
...
;; ANSWER SECTION:
3.0.21.172.in-addr.arpa. 600 IN PTR `hello.prod`.
...
```
]

View File

@@ -98,7 +98,7 @@ $ curl localhost:32768
* We can see that metadata with `docker inspect`:
```bash
$ docker inspect nginx --format {{.Config.ExposedPorts}}
$ docker inspect --format '{{.Config.ExposedPorts}}' nginx
map[80/tcp:{}]
```

View File

@@ -64,7 +64,7 @@ Create this Dockerfile.
## Testing our C program
* Create `hello.c` and `Dockerfile` in the same direcotry.
* Create `hello.c` and `Dockerfile` in the same directory.
* Run `docker build -t hello .` in this directory.

View File

@@ -10,10 +10,12 @@
* [Solaris Containers (2004)](https://en.wikipedia.org/wiki/Solaris_Containers)
* [FreeBSD jails (1999)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)
* [FreeBSD jails (1999-2000)](https://www.freebsd.org/cgi/man.cgi?query=jail&sektion=8&manpath=FreeBSD+4.0-RELEASE)
Containers have been around for a *very long time* indeed.
(See [this excellent blog post by Serge Hallyn](https://s3hh.wordpress.com/2018/03/22/history-of-containers/) for more historic details.)
---
class: pic

View File

@@ -30,7 +30,7 @@
## Environment variables
- Most of the tools (CLI, libraries...) connecting to the Docker API can use ennvironment variables.
- Most of the tools (CLI, libraries...) connecting to the Docker API can use environment variables.
- These variables are:
@@ -40,7 +40,7 @@
- `DOCKER_CERT_PATH` (path to the keypair and certificate to use for auth)
- `docker-machine env ...` will generate the variables needed to connect to an host.
- `docker-machine env ...` will generate the variables needed to connect to a host.
- `$(eval docker-machine env ...)` sets these variables in the current shell.
@@ -50,7 +50,7 @@
With `docker-machine`, we can:
- upgrade an host to the latest version of the Docker Engine,
- upgrade a host to the latest version of the Docker Engine,
- start/stop/restart hosts,

View File

@@ -0,0 +1,361 @@
# Tips for efficient Dockerfiles
We will see how to:
* Reduce the number of layers.
* Leverage the build cache so that builds can be faster.
* Embed unit testing in the build process.
---
## Reducing the number of layers
* Each line in a `Dockerfile` creates a new layer.
* Build your `Dockerfile` to take advantage of Docker's caching system.
* Combine commands by using `&&` to continue commands and `\` to wrap lines.
Note: it is common to build a Dockerfile line by line:
```dockerfile
RUN apt-get install thisthing
RUN apt-get install andthatthing andthatotherone
RUN apt-get install somemorestuff
```
And then refactor it trivially before shipping:
```dockerfile
RUN apt-get install thisthing andthatthing andthatotherone somemorestuff
```
---
## Avoid re-installing dependencies at each build
* Classic Dockerfile problem:
"each time I change a line of code, all my dependencies are re-installed!"
* Solution: `COPY` dependency lists (`package.json`, `requirements.txt`, etc.)
by themselves to avoid reinstalling unchanged dependencies every time.
---
## Example "bad" `Dockerfile`
The dependencies are reinstalled every time, because the build system does not know if `requirements.txt` has been updated.
```bash
FROM python
WORKDIR /src
COPY . .
RUN pip install -qr requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
```
---
## Fixed `Dockerfile`
Adding the dependencies as a separate step means that Docker can cache more efficiently and only install them when `requirements.txt` changes.
```bash
FROM python
COPY requirements.txt /tmp/requirements.txt
RUN pip install -qr /tmp/requirements.txt
WORKDIR /src
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```
---
## Embedding unit tests in the build process
```dockerfile
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
RUN <install test dependencies>
COPY <test data sets and fixtures>
RUN <unit tests>
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
CMD, EXPOSE ...
```
* The build fails as soon as an instruction fails
* If `RUN <unit tests>` fails, the build doesn't produce an image
* If it succeeds, it produces a clean image (without test libraries and data)
---
# Dockerfile examples
There are a number of tips, tricks, and techniques that we can use in Dockerfiles.
But sometimes, we have to use different (and even opposed) practices depending on:
- the complexity of our project,
- the programming language or framework that we are using,
- the stage of our project (early MVP vs. super-stable production),
- whether we're building a final image or a base for further images,
- etc.
We are going to show a few examples using very different techniques.
---
## When to optimize an image
When authoring official images, it is a good idea to reduce as much as possible:
- the number of layers,
- the size of the final image.
This is often done at the expense of build time and convenience for the image maintainer;
but when an image is downloaded millions of times, saving even a few seconds of pull time
can be worth it.
.small[
```dockerfile
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev && rm -rf /var/lib/apt/lists/* \
&& docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
&& docker-php-ext-install gd
...
RUN curl -o wordpress.tar.gz -SL https://wordpress.org/wordpress-${WORDPRESS_UPSTREAM_VERSION}.tar.gz \
&& echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c - \
&& tar -xzf wordpress.tar.gz -C /usr/src/ \
&& rm wordpress.tar.gz \
&& chown -R www-data:www-data /usr/src/wordpress
```
]
(Source: [Wordpress official image](https://github.com/docker-library/wordpress/blob/618490d4bdff6c5774b84b717979bfe3d6ba8ad1/apache/Dockerfile))
---
## When to *not* optimize an image
Sometimes, it is better to prioritize *maintainer convenience*.
In particular, if:
- the image changes a lot,
- the image has very few users (e.g. only 1, the maintainer!),
- the image is built and run on the same machine,
- the image is built and run on machines with a very fast link ...
In these cases, just keep things simple!
(Next slide: a Dockerfile that can be used to preview a Jekyll / github pages site.)
---
```dockerfile
FROM debian:sid
RUN apt-get update -q
RUN apt-get install -yq build-essential make
RUN apt-get install -yq zlib1g-dev
RUN apt-get install -yq ruby ruby-dev
RUN apt-get install -yq python-pygments
RUN apt-get install -yq nodejs
RUN apt-get install -yq cmake
RUN gem install --no-rdoc --no-ri github-pages
COPY . /blog
WORKDIR /blog
VOLUME /blog/_site
EXPOSE 4000
CMD ["jekyll", "serve", "--host", "0.0.0.0", "--incremental"]
```
---
## Multi-dimensional versioning systems
Images can have a tag, indicating the version of the image.
But sometimes, there are multiple important components, and we need to indicate the versions
for all of them.
This can be done with environment variables:
```dockerfile
ENV PIP=9.0.3 \
ZC_BUILDOUT=2.11.2 \
SETUPTOOLS=38.7.0 \
PLONE_MAJOR=5.1 \
PLONE_VERSION=5.1.0 \
PLONE_MD5=76dc6cfc1c749d763c32fff3a9870d8d
```
(Source: [Plone official image](https://github.com/plone/plone.docker/blob/master/5.1/5.1.0/alpine/Dockerfile))
---
## Entrypoints and wrappers
It is very common to define a custom entrypoint.
That entrypoint will generally be a script, performing any combination of:
- pre-flight checks (if a required dependency is not available, display
a nice error message early instead of an obscure one in a deep log file),
- generation or validation of configuration files,
- dropping privileges (with e.g. `su` or `gosu`, sometimes combined with `chown`),
- and more.
---
## A typical entrypoint script
```dockerfile
#!/bin/sh
set -e
# first arg is '-f' or '--some-option'
# or first arg is 'something.conf'
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
set -- redis-server "$@"
fi
# allow the container to be started with '--user'
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
chown -R redis .
exec su-exec redis "$0" "$@"
fi
exec "$@"
```
(Source: [Redis official image](https://github.com/docker-library/redis/blob/d24f2be82673ccef6957210cc985e392ebdc65e4/4.0/alpine/docker-entrypoint.sh))
---
## Factoring information
To facilitate maintenance (and avoid human errors), avoid repeating information like:
- version numbers,
- remote asset URLs (e.g. source tarballs) ...
Instead, use environment variables.
.small[
```dockerfile
ENV NODE_VERSION 10.2.1
...
RUN ...
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION.tar.xz" \
&& curl -fsSLO --compressed "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xf "node-v$NODE_VERSION.tar.xz" \
&& cd "node-v$NODE_VERSION" \
...
```
]
(Source: [Nodejs official image](https://github.com/nodejs/docker-node/blob/master/10/alpine/Dockerfile))
---
## Overrides
In theory, development and production images should be the same.
In practice, we often need to enable specific behaviors in development (e.g. debug statements).
One way to reconcile both needs is to use Compose to enable these behaviors.
Let's look at the [trainingwheels](https://github.com/jpetazzo/trainingwheels) demo app for an example.
---
## Production image
This Dockerfile builds an image leveraging gunicorn:
```dockerfile
FROM python
RUN pip install flask
RUN pip install gunicorn
RUN pip install redis
COPY . /src
WORKDIR /src
CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
EXPOSE 5000
```
(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))
---
## Development Compose file
This Compose file uses the same image, but with a few overrides for development:
- the Flask development server is used (overriding `CMD`),
- the `DEBUG` environment variable is set,
- a volume is used to provide a faster local development workflow.
.small[
```yaml
services:
www:
build: www
ports:
- 8000:5000
user: nobody
environment:
DEBUG: 1
command: python counter.py
volumes:
- ./www:/src
```
]
(Source: [trainingwheels Compose file](https://github.com/jpetazzo/trainingwheels/blob/master/docker-compose.yml))
---
## How to know which best practices are better?
- The main goal of containers is to make our lives easier.
- In this chapter, we showed many ways to write Dockerfiles.
- These Dockerfiles sometimes use diametrically opposed techniques.
- Yet, they were the "right" ones *for a specific situation.*
- It's OK (and even encouraged) to start simple and evolve as needed.
- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!

View File

@@ -110,6 +110,8 @@ Beautiful! .emoji[😍]
---
class: in-person
## Counting packages in the container
Let's check how many packages are installed there.
@@ -127,6 +129,8 @@ How many packages do we have on our host?
---
class: in-person
## Counting packages on the host
Exit the container by logging out of the shell, like you would usually do.
@@ -145,18 +149,34 @@ Now, try to:
---
class: self-paced
## Comparing the container and the host
Exit the container by logging out of the shell, with `^D` or `exit`.
Now try to run `figlet`. Does that work?
(It shouldn't, unless by coincidence you are running on a machine where figlet was already installed.)
---
## Host and containers are independent things
* We ran an `ubuntu` container on an `ubuntu` host.
* We ran an `ubuntu` container on a Linux/Windows/macOS host.
* But they have different, independent packages.
* They have different, independent packages.
* Installing something on the host doesn't expose it to the container.
* And vice-versa.
* Even if both the host and the container have the same Linux distro!
* We can run *any container* on *any host*.
(One exception: Windows containers cannot run on Linux machines; at least not yet.)
---
## Where's our container?

View File

@@ -144,7 +144,7 @@ docker run jpetazzo/crashtest
The container starts, but then stops immediately, without any output.
What would McGyver do?
What would MacGyver&trade; do?
First, let's check the status of that container.

View File

@@ -46,6 +46,8 @@ In this section, we will explain:
## Example for a Java webapp
Each of the following items will correspond to one layer:
* CentOS base layer
* Packages and configuration files added by our local IT
* JRE
@@ -56,6 +58,22 @@ In this section, we will explain:
---
class: pic
## The read-write layer
![layers](images/container-layers.jpg)
---
class: pic
## Multiple containers sharing the same image
![layers](images/sharing-layers.jpg)
---
## Differences between containers and images
* An image is a read-only filesystem.
@@ -63,24 +81,14 @@ In this section, we will explain:
* A container is an encapsulated set of processes running in a
read-write copy of that filesystem.
* To optimize container boot time, *copy-on-write* is used
* To optimize container boot time, *copy-on-write* is used
instead of regular copy.
* `docker run` starts a container from a given image.
Let's give a couple of metaphors to illustrate those concepts.
---
## Image as stencils
Images are like templates or stencils that you can create containers from.
![stencil](images/stenciling-wall.jpg)
---
## Object-oriented programming
## Comparison with object-oriented programming
* Images are conceptually similar to *classes*.
@@ -99,7 +107,7 @@ If an image is read-only, how do we change it?
* We create a new container from that image.
* Then we make changes to that container.
* When we are satisfied with those changes, we transform them into a new layer.
* A new image is created by stacking the new layer on top of the old image.
@@ -118,7 +126,7 @@ If an image is read-only, how do we change it?
## Creating the first images
There is a special empty image called `scratch`.
There is a special empty image called `scratch`.
* It allows to *build from scratch*.
@@ -138,7 +146,7 @@ Note: you will probably never have to do this yourself.
* Saves all the changes made to a container into a new layer.
* Creates a new image (effectively a copy of the container).
`docker build`
`docker build` **(used 99% of the time)**
* Performs a repeatable build sequence.
* This is the preferred method!
@@ -180,6 +188,8 @@ Those images include:
* Ready-to-use components and services, like redis, postgresql...
* Over 130 at this point!
---
## User namespace
@@ -299,9 +309,9 @@ There are two ways to download images.
```bash
$ docker pull debian:jessie
Pulling repository debian
b164861940b8: Download complete
b164861940b8: Pulling image (jessie) from debian
d1881793a057: Download complete
b164861940b8: Download complete
b164861940b8: Pulling image (jessie) from debian
d1881793a057: Download complete
```
* As seen previously, images are made up of layers.

View File

@@ -37,7 +37,9 @@ We can arbitrarily distinguish:
## Installing Docker on Linux
* The recommended method is to install the packages supplied by Docker Inc.
* The recommended method is to install the packages supplied by Docker Inc.:
https://store.docker.com
* The general method is:
@@ -79,11 +81,11 @@ class: extra-details
## Installing Docker on macOS and Windows
* On macOS, the recommended method is to use Docker4Mac:
* On macOS, the recommended method is to use Docker for Mac:
https://docs.docker.com/docker-for-mac/install/
* On Windows 10 Pro, Enterprise, and Eduction, you can use Docker4Windows:
* On Windows 10 Pro, Enterprise, and Education, you can use Docker for Windows:
https://docs.docker.com/docker-for-windows/install/
@@ -91,6 +93,33 @@ class: extra-details
https://docs.docker.com/toolbox/toolbox_install_windows/
* On Windows Server 2016, you can also install the native engine:
https://docs.docker.com/install/windows/docker-ee/
---
## Docker for Mac and Docker for Windows
* Special Docker Editions that integrate well with their respective host OS
* Provide user-friendly GUI to edit Docker configuration and settings
* Leverage the host OS virtualization subsystem (e.g. the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS)
* Installed like normal user applications on the host
* Under the hood, they both run a tiny VM (transparent to our daily use)
* Access network resources like normal applications
<br/>(and therefore, play better with enterprise VPNs and firewalls)
* Support filesystem sharing through volumes (we'll talk about this later)
* They only support running one Docker VM at a time ...
<br/>
... but we can use `docker-machine`, the Docker Toolbox, VirtualBox, etc. to get a cluster.
---
## Running Docker on macOS and Windows
@@ -110,25 +139,6 @@ This will also allow to use remote Engines exactly as if they were local.
---
## Docker4Mac and Docker4Windows
* They let you run Docker without VirtualBox
* They are installed like normal applications (think QEMU, but faster)
* They access network resources like normal applications
<br/>(and therefore, play well with enterprise VPNs and firewalls)
* They support filesystem sharing through volumes (we'll talk about this later)
* They only support running one Docker VM at a time ...
... so if you want to run a full cluster locally, install e.g. the Docker Toolbox
* They can co-exist with the Docker Toolbox
---
## Important PSA about security
* If you have access to the Docker control socket, you can take over the machine

View File

@@ -17,7 +17,7 @@ At the end of this section, you will be able to:
---
## Containerized local development environments
## Local development in a container
We want to solve the following issues:
@@ -69,7 +69,6 @@ Aha, a `Gemfile`! This is Ruby. Probably. We know this. Maybe?
```dockerfile
FROM ruby
MAINTAINER Education Team at Docker <education@docker.com>
COPY . /src
WORKDIR /src
@@ -177,7 +176,9 @@ $ docker run -d -v $(pwd):/src -P namer
* `namer` is the name of the image we will run.
* We don't specify a command to run because is is already set in the Dockerfile.
* We don't specify a command to run because it is already set in the Dockerfile.
Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell).
---

View File

@@ -131,6 +131,27 @@ We will then show one particular method in action, using ELK and Docker's loggin
---
## A word of warning about `json-file`
- By default, log file size is unlimited.
- This means that a very verbose container *will* use up all your disk space.
(Or a less verbose container, but running for a very long time.)
- Log rotation can be enabled by setting a `max-size` option.
- Older log files can be removed by setting a `max-file` option.
- Just like other logging options, these can be set per container, or globally.
Example:
```bash
$ docker run --log-opt max-size=10m --log-opt max-file=3 elasticsearch
```
---
## Demo: sending logs to ELK
- We are going to deploy an ELK stack.
@@ -192,7 +213,7 @@ $ docker-compose -f elk.yml up -d
- it is set with the `ELASTICSEARCH_URL` environment variable,
- by default it is `localhost:9200`, we change it to `elastichsearch:9200`.
- by default it is `localhost:9200`, we change it to `elasticsearch:9200`.
- We need to configure Logstash:

View File

@@ -0,0 +1,295 @@
# Reducing image size
* In the previous example, our final image contained:
* our `hello` program
* its source code
* the compiler
* Only the first one is strictly necessary.
* We are going to see how to obtain an image without the superfluous components.
---
## Can't we remove superfluous files with `RUN`?
What happens if we do one of the following commands?
- `RUN rm -rf ...`
- `RUN apt-get remove ...`
- `RUN make clean ...`
--
This adds a layer which removes a bunch of files.
But the previous layers (which added the files) still exist.
---
## Removing files with an extra layer
When downloading an image, all the layers must be downloaded.
| Dockerfile instruction | Layer size | Image size |
| ---------------------- | ---------- | ---------- |
| `FROM ubuntu` | Size of base image | Size of base image |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get install somepackage` | Size of files added <br/>(e.g. a few MB) | Sum of this layer <br/>+ all previous ones |
| `...` | ... | Sum of this layer <br/>+ all previous ones |
| `RUN apt-get remove somepackage` | Almost zero <br/>(just metadata) | Same as previous one |
Therefore, `RUN rm` does not reduce the size of the image or free up disk space.
---
## Removing unnecessary files
Various techniques are available to obtain smaller images:
- collapsing layers,
- adding binaries that are built outside of the Dockerfile,
- squashing the final image,
- multi-stage builds.
Let's review them quickly.
---
## Collapsing layers
You will frequently see Dockerfiles like this:
```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install xxx && ... && apt-get remove xxx && ...
```
Or the (more readable) variant:
```dockerfile
FROM ubuntu
RUN apt-get update \
&& apt-get install xxx \
&& ... \
&& apt-get remove xxx \
&& ...
```
This `RUN` command gives us a single layer.
Files that are added and then removed within the same layer do not increase the size of that layer.
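As a concrete sketch (writing the Dockerfile from the shell so that the example is self-contained; `curl` is just an arbitrary package, and cleaning up the APT lists in the same `RUN` keeps them out of the layer):
```bash
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
EOF
docker build -t collapsed-example .
```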
---
## Collapsing layers: pros and cons
Pros:
- works on all versions of Docker
- doesn't require extra tools
Cons:
- not very readable
- some unnecessary files might still remain if the cleanup is not thorough
- that layer is expensive (slow to build)
---
## Building binaries outside of the Dockerfile
This results in a Dockerfile looking like this:
```dockerfile
FROM ubuntu
COPY xxx /usr/local/bin
```
Of course, this implies that the file `xxx` exists in the build context.
That file has to exist before you can run `docker build`.
For instance, it can:
- exist in the code repository,
- be created by another tool (script, Makefile...),
- be created by another container image and extracted from the image.
See for instance the [busybox official image](https://github.com/docker-library/busybox/blob/fe634680e32659aaf0ee0594805f74f332619a90/musl/Dockerfile) or this [older busybox image](https://github.com/jpetazzo/docker-busybox).
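For the last option, here is a minimal sketch showing how a binary can be pulled out of an existing image with `docker create` and `docker cp` (using the `busybox` image purely as an example):
```bash
# Create a stopped container from the image (no process is started),
# copy the binary into the build context, then clean up.
docker create --name extract busybox
docker cp extract:/bin/busybox ./busybox
docker rm extract
```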
---
## Building binaries outside: pros and cons
Pros:
- final image can be very small
Cons:
- requires an extra build tool
- we're back in dependency hell and "works on my machine"
Cons, if binary is added to code repository:
- breaks portability across different platforms
- grows repository size a lot if the binary is updated frequently
---
## Squashing the final image
The idea is to transform the final image into a single-layer image.
This can be done in (at least) two ways.
- Activate experimental features and squash the final image:
```bash
docker image build --squash ...
```
- Export/import the final image.
```bash
docker build -t temp-image .
docker run --entrypoint true --name temp-container temp-image
docker export temp-container | docker import - final-image
docker rm temp-container
docker rmi temp-image
```
---
## Squashing the image: pros and cons
Pros:
- single-layer images are smaller and faster to download
- removed files no longer take up storage and network resources
Cons:
- we still need to actively remove unnecessary files
- squash operation can take a lot of time (on big images)
- squash operation does not benefit from cache
<br/>
(even if we change just a tiny file, the whole image needs to be re-squashed)
---
## Multi-stage builds
Multi-stage builds allow us to have multiple *stages*.
Each stage is a separate image, and can copy files from previous stages.
We're going to see how they work in more detail.
---
# Multi-stage builds
* At any point in our `Dockerfile`, we can add a new `FROM` line.
* This line starts a new stage of our build.
* Each stage can access the files of the previous stages with `COPY --from=...`.
* When a build is tagged (with `docker build -t ...`), the last stage is tagged.
* Previous stages are not discarded: they will be used for caching, and can be referenced.
---
## Multi-stage builds in practice
* Each stage is numbered, starting at `0`
* We can copy a file from a previous stage by indicating its number, e.g.:
```dockerfile
COPY --from=0 /file/from/first/stage /location/in/current/stage
```
* We can also name stages, and reference these names:
```dockerfile
FROM golang AS builder
RUN ...
FROM alpine
COPY --from=builder /go/bin/mylittlebinary /usr/local/bin/
```
---
## Multi-stage builds for our C program
We will change our Dockerfile to:
* give a nickname to the first stage: `compiler`
* add a second stage using the same `ubuntu` base image
* add the `hello` binary to the second stage
* make sure that `CMD` is in the second stage
The resulting Dockerfile is on the next slide.
---
## Multi-stage build `Dockerfile`
Here is the final Dockerfile:
```dockerfile
FROM ubuntu AS compiler
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
RUN make hello
FROM ubuntu
COPY --from=compiler /hello /hello
CMD /hello
```
Let's build it, and check that it works correctly:
```bash
docker build -t hellomultistage .
docker run hellomultistage
```
---
## Comparing single/multi-stage build image sizes
List our images with `docker images`, and check the size of:
- the `ubuntu` base image,
- the single-stage `hello` image,
- the multi-stage `hellomultistage` image.
We can achieve even smaller images if we use smaller base images.
However, if we use common base images (e.g. if we standardize on `ubuntu`),
these common images will be pulled only once per node, so they are
virtually "free."

View File

@@ -76,6 +76,8 @@ The last item should be done for educational purposes only!
---
class: extra-details, deep-dive
## Manipulating namespaces
- Namespaces are created with two methods:
@@ -94,6 +96,8 @@ The last item should be done for educational purposes only!
---
class: extra-details, deep-dive
## Namespaces lifecycle
- When the last process of a namespace exits, the namespace is destroyed.
@@ -114,6 +118,8 @@ The last item should be done for educational purposes only!
---
class: extra-details, deep-dive
## Namespaces can be used independently
- As mentioned in the previous slides:
@@ -138,7 +144,7 @@ The last item should be done for educational purposes only!
- Also allows to set the NIS domain.
(If you dont' know what a NIS domain is, you don't have to worry about it!)
(If you don't know what a NIS domain is, you don't have to worry about it!)
- If you're wondering: UTS = UNIX time sharing.
@@ -150,6 +156,8 @@ The last item should be done for educational purposes only!
---
class: extra-details, deep-dive
## Creating our first namespace
Let's use `unshare` to create a new process that will have its own UTS namespace:
@@ -166,6 +174,8 @@ $ sudo unshare --uts
---
class: extra-details, deep-dive
## Demonstrating our uts namespace
In our new "container", check the hostname, change it, and check it:
@@ -398,6 +408,8 @@ class: extra-details
---
class: extra-details, deep-dive
## Setting up a private `/tmp`
Create a new mount namespace:
@@ -435,6 +447,8 @@ The mount is automatically cleaned up when you exit the process.
---
class: extra-details, deep-dive
## PID namespace in action
Create a new PID namespace:
@@ -453,10 +467,14 @@ Check the process tree in the new namespace:
--
class: extra-details, deep-dive
🤔 Why do we see all the processes?!?
---
class: extra-details, deep-dive
## PID namespaces and `/proc`
- Tools like `ps` rely on the `/proc` pseudo-filesystem.
@@ -471,6 +489,8 @@ Check the process tree in the new namespace:
---
class: extra-details, deep-dive
## PID namespaces, take 2
- This can be solved by mounting `/proc` in the namespace.
@@ -570,6 +590,8 @@ Check `man 2 unshare` and `man pid_namespaces` if you want more details.
---
class: extra-details, deep-dive
## User namespace challenges
- UID needs to be mapped when passed between processes or kernel subsystems.
@@ -686,6 +708,8 @@ cpu memory
---
class: extra-details, deep-dive
## Cgroups v1 vs v2
- Cgroups v1 are available on all systems (and widely used).
@@ -759,6 +783,8 @@ cpu memory
---
class: extra-details, deep-dive
## Avoiding the OOM killer
- For some workloads (databases and stateful systems), killing
@@ -778,6 +804,8 @@ cpu memory
---
class: extra-details, deep-dive
## Overhead of the memory cgroup
- Each time a process grabs or releases a page, the kernel updates counters.
@@ -796,6 +824,8 @@ cpu memory
---
class: extra-details, deep-dive
## Setting up a limit with the memory cgroup
Create a new memory cgroup:
@@ -808,7 +838,7 @@ $ sudo mkdir $CG
Limit it to approximately 100MB of memory usage:
```bash
$ sudo tee $CG/memory.memsw.limit_in_bytes <<<100000000
$ sudo tee $CG/memory.memsw.limit_in_bytes <<< 100000000
```
Move the current process to that cgroup:
@@ -819,8 +849,67 @@ $ sudo tee $CG/tasks <<< $$
The current process *and all its future children* are now limited.
(Confused about `<<<`? Look at the next slide!)
---
class: extra-details, deep-dive
## What's `<<<`?
- This is a "here string". (It is a non-POSIX shell extension.)
- The following commands are equivalent:
```bash
foo <<< hello
```
```bash
echo hello | foo
```
```bash
foo <<EOF
hello
EOF
```
- Why did we use that?
---
class: extra-details, deep-dive
## Writing to cgroups pseudo-files requires root
Instead of:
```bash
sudo tee $CG/tasks <<< $$
```
We could have done:
```bash
sudo sh -c "echo $$ > $CG/tasks"
```
The following commands, however, would not do what we want:
```bash
sudo echo $$ > $CG/tasks
```
```bash
sudo -i # (or su)
echo $$ > $CG/tasks
```
(In the first case, the redirection is performed by our unprivileged shell, which gets "Permission denied"; in the second case, `$$` is evaluated in the new root shell, so we would be moving that shell instead of ours, and `$CG` is probably not even set there.)
---
class: extra-details, deep-dive
## Testing the memory limit
Start the Python interpreter:
@@ -860,8 +949,6 @@ Killed
- Allows to set relative weights used by the scheduler.
- We cannot set CPU limits (like, "don't use more than 10% of CPU").
---
## Cpuset cgroup

View File

@@ -420,8 +420,3 @@ It depends on:
- false, if we focus on what matters.
---
## Kubernetes in action
.center[![Demo stamp](images/demo.jpg)]

View File

@@ -21,7 +21,7 @@ public images is free as well.*
docker login
```
.warning[When running Docker4Mac, Docker4Windows, or
.warning[When running Docker for Mac/Windows, or
Docker on a Linux workstation, it can (and will when
possible) integrate with your system's keyring to
store your credentials securely. However, on most Linux

View File

@@ -0,0 +1,229 @@
# Limiting resources
- So far, we have used containers as convenient units of deployment.
- What happens when a container tries to use more resources than available?
(RAM, CPU, disk usage, disk and network I/O...)
- What happens when multiple containers compete for the same resource?
- Can we limit resources available to a container?
(Spoiler alert: yes!)
---
## Container processes are normal processes
- Containers are closer to "fancy processes" than to "lightweight VMs".
- A process running in a container is, in fact, a process running on the host.
- Let's look at the output of `ps` on a container host running 3 containers:
```
0 2662 0.2 0.3 /usr/bin/dockerd -H fd://
0 2766 0.1 0.1 \_ docker-containerd --config /var/run/docker/containe
0 23479 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23497 0.0 0.0 | \_ `nginx`: master process nginx -g daemon off;
101 23543 0.0 0.0 | \_ `nginx`: worker process
0 23565 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
102 23584 9.4 11.3 | \_ `/docker-java-home/jre/bin/java` -Xms2g -Xmx2
0 23707 0.0 0.0 \_ docker-containerd-shim -namespace moby -workdir
0 23725 0.0 0.0 \_ `/bin/sh`
```
- The highlighted processes are containerized processes.
<br/>
(That host is running nginx, elasticsearch, and alpine.)
---
## By default: nothing changes
- What happens when a process uses too much memory on a Linux system?
--
- Simplified answer:
- swap is used (if available);
- if there is not enough swap space, eventually, the out-of-memory killer is invoked;
- the OOM killer uses heuristics to kill processes;
- sometimes, it kills an unrelated process.
--
- What happens when a container uses too much memory?
- The same thing!
(i.e., a process eventually gets killed, possibly in another container.)
---
## Limiting container resources
- The Linux kernel offers rich mechanisms to limit container resources.
- For memory usage, the mechanism is part of the *cgroup* subsystem.
- This subsystem allows to limit the memory for a process or a group of processes.
- A container engine leverages these mechanisms to limit memory for a container.
- The out-of-memory killer has a new behavior:
- it runs when a container exceeds its allowed memory usage,
- in that case, it only kills processes in that container.
---
## Limiting memory in practice
- The Docker Engine offers multiple flags to limit memory usage.
- The two most useful ones are `--memory` and `--memory-swap`.
- `--memory` limits the amount of physical RAM used by a container.
- `--memory-swap` limits the total amount (RAM+swap) used by a container.
- The memory limit can be expressed in bytes, or with a unit suffix.
(e.g.: `--memory 100m` = 100 megabytes.)
- We will see two strategies: limiting RAM usage, or limiting both RAM and swap.
---
## Limiting RAM usage
Example:
```bash
docker run -ti --memory 100m python
```
If the container tries to use more than 100 MB of RAM, *and* swap is available:
- the container will not be killed,
- memory above 100 MB will be swapped out,
- in most cases, the app in the container will be slowed down (a lot).
If we run out of swap, the global OOM killer still intervenes.
---
## Limiting both RAM and swap usage
Example:
```bash
docker run -ti --memory 100m --memory-swap 100m python
```
If the container tries to use more than 100 MB of memory, it is killed.
On the other hand, the application will never be slowed down because of swap.
---
## When to pick which strategy?
- Stateful services (like databases) will lose or corrupt data when killed
- Allow them to use swap space, but monitor swap usage
- Stateless services can usually be killed with little impact
- Limit their mem+swap usage, but monitor if they get killed
- Ultimately, this is no different from "do I want swap, and how much?"
---
## Limiting CPU usage
- There are no less than 3 ways to limit CPU usage:
- setting a relative priority with `--cpu-shares`,
- setting a CPU% limit with `--cpus`,
- pinning a container to specific CPUs with `--cpuset-cpus`.
- They can be used separately or together.
---
## Setting relative priority
- Each container has a relative priority used by the Linux scheduler.
- By default, this priority is 1024.
- As long as CPU usage is not maxed out, this has no effect.
- When CPU usage is maxed out, each container receives CPU cycles in proportion of its relative priority.
- In other words: a container with `--cpu-shares 2048` will receive twice as many CPU cycles as one with the default setting.
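For example (a sketch; `nginx` is used as an arbitrary long-running image):
```bash
# When the host's CPUs are saturated, "high" gets roughly twice
# as many CPU cycles as "normal" (2048 vs. the default 1024).
docker run -d --name high   --cpu-shares 2048 nginx
docker run -d --name normal nginx
```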
---
## Setting a CPU% limit
- This setting will make sure that a container doesn't use more than a given % of CPU.
- The value is expressed in CPUs; therefore:
`--cpus 0.1` means 10% of one CPU,
`--cpus 1.0` means 100% of one whole CPU,
`--cpus 10.0` means 10 entire CPUs.
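For example (a sketch; `nginx` is again an arbitrary image):
```bash
# Cap the container at half of one CPU core.
docker run -d --name capped --cpus 0.5 nginx
```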
---
## Pinning containers to CPUs
- On multi-core machines, it is possible to restrict the execution on a set of CPUs.
- Examples:
`--cpuset-cpus 0` forces the container to run on CPU 0;
`--cpuset-cpus 3,5,7` restricts the container to CPUs 3, 5, 7;
`--cpuset-cpus 0-3,8-11` restricts the container to CPUs 0, 1, 2, 3, 8, 9, 10, 11.
- This will not reserve the corresponding CPUs!
(They might still be used by other containers, or uncontainerized processes.)
---
## Limiting disk usage
- Most storage drivers do not support limiting the disk usage of containers.
(With the exception of devicemapper, but the limit cannot be set easily.)
- This means that a single container could exhaust disk space for everyone.
- In practice, however, this is not a concern, because:
- data files (for stateful services) should reside on volumes,
- assets (e.g. images, user-generated content...) should reside on object stores or on volumes,
- logs are written on standard output and gathered by the container engine.
- Container disk usage can be audited with `docker ps -s` and `docker diff`.
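A quick sketch of what such an audit can look like (`alpine` is used as a throwaway example container):
```bash
# Start a small container and create a file inside it.
docker run -d --name scratchpad alpine sleep 300
docker exec scratchpad touch /tmp/newfile
# Show per-container disk usage (the SIZE column) ...
docker ps -s
# ... and list the files added or changed in that container.
docker diff scratchpad
```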

View File

@@ -33,6 +33,8 @@ Docker volumes can be used to achieve many things, including:
* Sharing a *single file* between the host and a container.
* Using remote storage and custom storage with "volume drivers".
---
## Volumes are special directories in a container
@@ -118,7 +120,7 @@ $ curl localhost:8080
## Volumes exist independently of containers
If a container is stopped, its volumes still exist and are available.
If a container is stopped or removed, its volumes still exist and are available.
Volumes can be listed and manipulated with `docker volume` subcommands:
@@ -195,13 +197,13 @@ Let's start another container using the `webapps` volume.
$ docker run -v webapps:/webapps -w /webapps -ti alpine vi ROOT/index.jsp
```
Where `-w` sets the working directory inside the container. Vandalize the page, save and exit.
Vandalize the page, save, exit.
Then run `curl localhost:1234` again to see your changes.
---
## Managing volumes explicitly
## Using custom "bind-mounts"
In some cases, you want a specific directory on the host to be mapped
inside the container:
@@ -244,6 +246,8 @@ of an existing container.
* Newer containers can use `--volumes-from` too.
* Doesn't work across servers, so not usable in clusters (Swarm, Kubernetes).
---
class: extra-details
@@ -259,7 +263,7 @@ $ docker run -d --name redis28 redis:2.8
Connect to the Redis container and set some data.
```bash
$ docker run -ti --link redis28:redis alpine:3.6 telnet redis 6379
$ docker run -ti --link redis28:redis busybox telnet redis 6379
```
Issue the following commands:
@@ -298,7 +302,7 @@ class: extra-details
Connect to the Redis container and see our data.
```bash
docker run -ti --link redis30:redis alpine:3.6 telnet redis 6379
docker run -ti --link redis30:redis busybox telnet redis 6379
```
Issue a few commands.
@@ -394,10 +398,15 @@ has root-like access to the host.]
You can install plugins to manage volumes backed by particular storage systems,
or providing extra features. For instance:
* [dvol](https://github.com/ClusterHQ/dvol) - allows to commit/branch/rollback volumes;
* [Flocker](https://clusterhq.com/flocker/introduction/), [REX-Ray](https://github.com/emccode/rexray) - create and manage volumes backed by an enterprise storage system (e.g. SAN or NAS), or by cloud block stores (e.g. EBS);
* [Blockbridge](http://www.blockbridge.com/), [Portworx](http://portworx.com/) - provide distributed block store for containers;
* and much more!
* [REX-Ray](https://rexray.io/) - create and manage volumes backed by an enterprise storage system (e.g.
SAN or NAS), or by cloud block stores (e.g. EBS, EFS).
* [Portworx](http://portworx.com/) - provides distributed block store for containers.
* [Gluster](https://www.gluster.org/) - open source software-defined distributed storage that can scale
to several petabytes. It provides interfaces for object, block and file storage.
* and much more at the [Docker Store](https://store.docker.com/search?category=volume&q=&type=plugin)!
---

View File

@@ -2,7 +2,7 @@
- This was initially written to support in-person, instructor-led workshops and tutorials
- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors)
- These materials are maintained by [Jérôme Petazzoni](https://twitter.com/jpetazzo) and [multiple contributors](https://@@GITREPO@@/graphs/contributors)
- You can also follow along on your own, at your own pace

57
slides/count-slides.py Executable file
View File

@@ -0,0 +1,57 @@
#!/usr/bin/env python
import re
import sys
PREFIX = "name: toc-"
EXCLUDED = ["in-person"]
class State(object):
def __init__(self):
self.current_slide = 1
self.section_title = None
self.section_start = 0
self.section_slides = 0
self.chapters = {}
self.sections = {}
def show(self):
if self.section_title.startswith("chapter-"):
return
print("{0.section_title}\t{0.section_start}\t{0.section_slides}".format(self))
self.sections[self.section_title] = self.section_slides
state = State()
title = None
for line in open(sys.argv[1]):
line = line.rstrip()
if line.startswith(PREFIX):
if state.section_title is None:
print("{}\t{}\t{}".format("title", "index", "size"))
else:
state.show()
state.section_title = line[len(PREFIX):].strip()
state.section_start = state.current_slide
state.section_slides = 0
if line == "---":
state.current_slide += 1
state.section_slides += 1
if line == "--":
state.current_slide += 1
toc_links = re.findall("\(#toc-(.*)\)", line)
if toc_links and state.section_title.startswith("chapter-"):
if state.section_title not in state.chapters:
state.chapters[state.section_title] = []
state.chapters[state.section_title].append(toc_links[0])
# This is really hackish
if line.startswith("class:"):
for klass in EXCLUDED:
if klass in line:
state.section_slides -= 1
state.current_slide -= 1
state.show()
for chapter in sorted(state.chapters, key=lambda f: int(f.split("-")[1])):
chapter_size = sum(state.sections[s] for s in state.chapters[chapter])
print("{}\t{}\t{}".format("total size for", chapter, chapter_size))

View File

@@ -1,10 +0,0 @@
#!/bin/sh
INPUT=$1
{
echo "# Front matter"
cat "$INPUT"
} |
grep -e "^# " -e ^---$ | uniq -c |
sed "s/^ *//" | sed s/---// |
paste -d "\t" - -

BIN
slides/images/bridge1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 30 KiB

BIN
slides/images/bridge2.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 30 KiB

(Several more binary image files changed; previews not shown.)

59
slides/index.css Normal file
View File

@@ -0,0 +1,59 @@
body {
background-image: url("images/container-background.jpg");
max-width: 1024px;
margin: 0 auto;
}
table {
font-size: 20px;
font-family: sans-serif;
background: white;
width: 100%;
height: 100%;
padding: 20px;
}
.header {
font-size: 300%;
font-weight: bold;
}
.title {
font-size: 150%;
font-weight: bold;
}
.details {
font-size: 80%;
font-style: italic;
}
td {
padding: 1px;
height: 1em;
}
td.spacer {
height: unset;
}
td.footer {
padding-top: 80px;
height: 100px;
}
td.title {
border-bottom: thick solid black;
padding-bottom: 2px;
padding-top: 20px;
}
a {
text-decoration: none;
}
a:hover {
background: yellow;
}
a.attend:after {
content: "📅 attend";
}
a.slides:after {
content: "📚 slides";
}
a.chat:after {
content: "💬 chat";
}
a.video:after {
content: "📺 video";
}

View File

@@ -1,29 +0,0 @@
<html>
<head>
<link rel="stylesheet" type="text/css" href="theme.css">
<title>Formation/workshop containers, orchestration, et Kubernetes à Paris en avril</title>
</head>
<body>
<div class="index">
<div class="block">
<h4>Introduction aux conteneurs</h4>
<h5>De la pratique … aux bonnes pratiques</h5>
<h6>(11-12 avril 2018)</h6>
<p>
<a href="intro.yml.html">SLIDES</a>
<a href="https://gitter.im/jpetazzo/training-20180411-paris">CHATROOM</a>
</p>
</div>
<div class="block">
<h4>Introduction à l'orchestration</h4>
<h5>Kubernetes par l'exemple</h5>
<h6>(13 avril 2018)</h6>
<p>
<a href="kube.yml.html">SLIDES</a>
<a href="https://gitter.im/jpetazzo/training-20180413-paris">CHATROOM</a>
<a href="https://docs.google.com/spreadsheets/d/1KiuCVduTf3wf-4-vSmcK96I61WYdDP0BppkOx_XZcjM/edit?ts=5acfc2ef#gid=0">FOODMENU</a>
</p>
</div>
</div>
</body>
</html>

146
slides/index.py Executable file
View File

@@ -0,0 +1,146 @@
#!/usr/bin/env python2
# coding: utf-8
TEMPLATE="""<html>
<head>
<title>{{ title }}</title>
<link rel="stylesheet" href="index.css">
</head>
<body>
<div class="main">
<table>
<tr><td class="header" colspan="3">{{ title }}</td></tr>
{% if coming_soon %}
<tr><td class="title" colspan="3">Coming soon near you</td></tr>
{% for item in coming_soon %}
<tr>
<td>{{ item.title }}</td>
<td>{% if item.slides %}<a class="slides" href="{{ item.slides }}" />{% endif %}</td>
<td><a class="attend" href="{{ item.attend }}" /></td>
</tr>
<tr>
<td class="details">Scheduled {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
{% if past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>
{% for item in past_workshops[:5] %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td>{% if item.video %}<a class="video" href="{{ item.video }}" />{% endif %}</td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% if past_workshops[5:] %}
<tr>
<td>... and at least <a href="past.html">{{ past_workshops[5:] | length }} more</a>.</td>
</tr>
{% endif %}
{% endif %}
{% if recorded_workshops %}
<tr><td class="title" colspan="3">Recorded workshops</td></tr>
{% for item in recorded_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td><a class="video" href="{{ item.video }}" /></td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
{% if self_paced %}
<tr><td class="title" colspan="3">Self-paced tutorials</td></tr>
{% for item in self_paced %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
</tr>
{% endfor %}
{% endif %}
{% if all_past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>
{% for item in all_past_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
{% if item.video %}
<td><a class="video" href="{{ item.video }}" /></td>
{% endif %}
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
<tr><td class="spacer"></td></tr>
<tr>
<td class="footer">
Maintained by Jérôme Petazzoni (<a href="https://twitter.com/jpetazzo">@jpetazzo</a>) and <a href="https://github.com/jpetazzo/container.training/graphs/contributors">contributors</a>.
</td>
</tr>
</table>
</div>
</body>
</html>""".decode("utf-8")
import datetime
import jinja2
import yaml
items = yaml.load(open("index.yaml"))
for item in items:
if "date" in item:
date = item["date"]
suffix = {
1: "st", 2: "nd", 3: "rd",
21: "st", 22: "nd", 23: "rd",
31: "st"}.get(date.day, "th")
# %e is a non-standard extension (it displays the day, but without a
# leading zero). If strftime fails with ValueError, try to fall back
# on %d (which displays the day but with a leading zero when needed).
try:
item["prettydate"] = date.strftime("%B %e{}, %Y").format(suffix)
except ValueError:
item["prettydate"] = date.strftime("%B %d{}, %Y").format(suffix)
today = datetime.date.today()
coming_soon = [i for i in items if i.get("date") and i["date"] >= today]
coming_soon.sort(key=lambda i: i["date"])
past_workshops = [i for i in items if i.get("date") and i["date"] < today]
past_workshops.sort(key=lambda i: i["date"], reverse=True)
self_paced = [i for i in items if not i.get("date")]
recorded_workshops = [i for i in items if i.get("video")]
template = jinja2.Template(TEMPLATE)
with open("index.html", "w") as f:
f.write(template.render(
title="Container Training",
coming_soon=coming_soon,
past_workshops=past_workshops,
self_paced=self_paced,
recorded_workshops=recorded_workshops
).encode("utf-8"))
with open("past.html", "w") as f:
f.write(template.render(
title="Container Training",
all_past_workshops=past_workshops
).encode("utf-8"))

420
slides/index.yaml Normal file
View File

@@ -0,0 +1,420 @@
- date: 2018-11-23
city: Copenhagen
country: dk
event: GOTO
title: Build Container Orchestration with Docker Swarm
speaker: bretfisher
attend: https://gotocph.com/2018/workshops/121
- date: 2018-11-08
city: San Francisco, CA
country: us
event: QCON
title: Introduction to Docker and Containers
speaker: jpetazzo
attend: https://qconsf.com/sf2018/workshop/introduction-docker-and-containers
- date: 2018-11-09
city: San Francisco, CA
country: us
event: QCON
title: Getting Started With Kubernetes and Container Orchestration
speaker: jpetazzo
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration
- date: 2018-10-31
city: London, UK
country: uk
event: Velocity EU
title: Kubernetes 101
speaker: bridgetkromhout
attend: https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/71149
- date: 2018-10-30
city: London, UK
country: uk
event: Velocity EU
title: "Docker Zero to Hero: Docker, Compose and Production Swarm"
speaker: bretfisher
attend: https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/71231
- date: 2018-07-12
city: Minneapolis, MN
country: us
event: devopsdays Minneapolis
title: Kubernetes 101
speaker: "ashleymcnamara, bketelsen"
slides: https://devopsdaysmsp2018.container.training
attend: https://www.devopsdays.org/events/2018-minneapolis/registration/
- date: 2018-10-01
city: New York, NY
country: us
event: Velocity
title: Kubernetes 101
speaker: bridgetkromhout
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70102
- date: 2018-09-30
city: New York, NY
country: us
event: Velocity
title: Kubernetes Bootcamp - Deploying and Scaling Microservices
speaker: jpetazzo
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/69875
- date: 2018-09-30
city: New York, NY
country: us
event: Velocity
title: "Docker Zero to Hero: Docker, Compose and Production Swarm"
speaker: bretfisher
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70147
- date: 2018-09-17
country: fr
city: Paris
event: ENIX SAS
speaker: jpetazzo
title: Déployer ses applications avec Kubernetes (in French)
lang: fr
attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
- date: 2018-07-17
city: Portland, OR
country: us
event: OSCON
title: Kubernetes 101
speaker: bridgetkromhout
slides: https://oscon2018.container.training/
attend: https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/66287
- date: 2018-06-27
city: Amsterdam
country: nl
event: devopsdays
title: Kubernetes 101
speaker: bridgetkromhout
slides: https://devopsdaysams2018.container.training
attend: https://www.devopsdays.org/events/2018-amsterdam/registration/
- date: 2018-06-12
city: San Jose, CA
country: us
event: Velocity
title: Kubernetes 101
speaker: bridgetkromhout
slides: https://velocitysj2018.container.training
attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66286
- date: 2018-06-12
city: San Jose, CA
country: us
event: Velocity
title: "Kubernetes two-day kickstart: Deploying and Scaling Microservices with Kubernetes"
speaker: "bketelsen, erikstmartin"
slides: http://kubernetes.academy/kube-fullday.yml.html#1
attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932
- date: 2018-06-11
city: San Jose, CA
country: us
event: Velocity
title: "Kubernetes two-day kickstart: Introduction to Docker and Containers"
speaker: "bketelsen, erikstmartin"
slides: http://kubernetes.academy/intro-fullday.yml.html#1
attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932
- date: 2018-05-17
city: Virginia Beach, VA
country: us
event: Revolution Conf
title: Docker 101
speaker: bretfisher
slides: https://revconf18.bretfisher.com
- date: 2018-05-10
city: Saint Paul, MN
country: us
event: NDC Minnesota
title: Kubernetes 101
slides: https://ndcminnesota2018.container.training
- date: 2018-05-08
city: Budapest
country: hu
event: CRAFT
title: Swarm Orchestration
slides: https://craftconf18.bretfisher.com
- date: 2018-04-27
city: Chicago, IL
country: us
event: GOTO
title: Swarm Orchestration
slides: https://gotochgo18.bretfisher.com
- date: 2018-04-24
city: Chicago, IL
country: us
event: GOTO
title: Kubernetes 101
slides: http://gotochgo2018.container.training/
- date: 2018-04-11
city: Paris
country: fr
title: Introduction aux conteneurs
lang: fr
slides: https://avril2018.container.training/intro.yml.html
- date: 2018-04-13
city: Paris
country: fr
lang: fr
title: Introduction à l'orchestration
slides: https://avril2018.container.training/kube.yml.html
- date: 2018-04-06
city: Sacramento, CA
country: us
event: MuraCon
title: Docker 101
slides: https://muracon18.bretfisher.com
- date: 2018-03-27
city: Santa Clara, CA
country: us
event: SREcon Americas
title: Kubernetes 101
slides: http://srecon2018.container.training/
- date: 2018-03-27
city: Bergen
country: no
event: Boosterconf
title: Kubernetes 101
slides: http://boosterconf2018.container.training/
- date: 2018-02-22
city: San Francisco, CA
country: us
event: IndexConf
title: Kubernetes 101
slides: http://indexconf2018.container.training/
#attend: https://developer.ibm.com/indexconf/sessions/#!?id=5474
- date: 2017-11-17
city: San Francisco, CA
country: us
event: QCON SF
title: Orchestrating Microservices with Docker Swarm
slides: http://qconsf2017swarm.container.training/
- date: 2017-11-16
city: San Francisco, CA
country: us
event: QCON SF
title: Introduction to Docker and Containers
slides: http://qconsf2017intro.container.training/
video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLgqTum8MkspG_8VzGl6C07
- date: 2017-10-30
city: San Francisco, CA
country: us
event: LISA
title: (M7) Getting Started with Docker and Containers
slides: http://lisa17m7.container.training/
- date: 2017-10-31
city: San Francisco, CA
country: us
event: LISA
title: (T9) Build, Ship, and Run Microservices on a Docker Swarm Cluster
slides: http://lisa17t9.container.training/
- date: 2017-10-26
city: Prague
country: cz
event: Open Source Summit Europe
title: Deploying and scaling microservices with Docker and Kubernetes
slides: http://osseu17.container.training/
video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLrsyydCzxWrIP_1-wkcSHS
- date: 2017-10-16
city: Copenhagen
country: dk
event: DockerCon
title: Swarm from Zero to Hero
slides: http://dc17eu.container.training/
- date: 2017-10-16
city: Copenhagen
country: dk
event: DockerCon
title: Orchestration for Advanced Users
slides: https://www.bretfisher.com/dockercon17eu
- date: 2017-07-25
city: Minneapolis, MN
country: us
event: devopsdays
title: Deploying & Scaling microservices with Docker Swarm
video: https://www.youtube.com/watch?v=DABbqyJeG_E
- date: 2017-06-12
city: Berlin
country: de
event: DevOpsCon
title: Deploying and scaling containerized Microservices with Docker and Swarm
- date: 2017-05-18
city: Portland, OR
country: us
event: PyCon
title: Deploy and scale containers with Docker native, open source orchestration
video: https://www.youtube.com/watch?v=EuzoEaE6Cqs
- date: 2017-05-08
city: Austin, TX
country: us
event: OSCON
title: Deploying and scaling applications in containers with Docker
- date: 2017-05-04
city: Chicago, IL
country: us
event: GOTO
title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2017-04-17
city: Austin, TX
country: us
event: DockerCon
title: Orchestration Workshop
- date: 2017-03-22
city: San Jose, CA
country: us
event: Devoxx
title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2017-03-03
city: Pasadena, CA
country: us
event: SCALE
title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2016-12-06
city: Boston, MA
country: us
event: LISA
title: Deploying and Scaling Applications with Docker Swarm
slides: http://lisa16t1.container.training/
video: https://www.youtube.com/playlist?list=PLBAFXs0YjviIDDhr8vIwCN1wkyNGXjbbc
- date: 2016-10-07
city: Berlin
country: de
event: LinuxCon
title: Orchestrating Containers in Production at Scale with Docker Swarm
- date: 2016-09-20
city: New York, NY
country: us
event: Velocity
title: Deployment and orchestration at scale with Docker
- date: 2016-08-25
city: Toronto
country: ca
event: LinuxCon
title: Orchestrating Containers in Production at Scale with Docker Swarm
- date: 2016-06-22
city: Seattle, WA
country: us
event: DockerCon
title: Orchestration Workshop
- date: 2016-05-29
city: Portland, OR
country: us
event: PyCon
title: Introduction to Docker and containers
slides: https://us.pycon.org/2016/site_media/media/tutorial_handouts/DockerSlides.pdf
video: https://www.youtube.com/watch?v=ZVaRK10HBjo
- date: 2016-05-17
city: Austin, TX
country: us
event: OSCON
title: Deployment and orchestration at scale with Docker Swarm
- date: 2016-04-27
city: Budapest
country: hu
event: CRAFT
title: Advanced Docker concepts and container orchestration
- date: 2016-04-22
city: Berlin
country: de
event: Neofonie
title: Orchestration Workshop
- date: 2016-04-05
city: Stockholm
country: se
event: Praqma
title: Orchestration Workshop
- date: 2016-03-22
city: Munich
country: de
event: Stylight
title: Orchestration Workshop
- date: 2016-03-11
city: London
country: uk
event: QCON
title: Containers in production with Docker Swarm
- date: 2016-02-19
city: Amsterdam
country: nl
event: Container Solutions
title: Orchestration Workshop
- date: 2016-02-15
city: Paris
country: fr
event: Zenika
title: Orchestration Workshop
- date: 2016-01-22
city: Pasadena, CA
country: us
event: SCALE
title: Advanced Docker concepts and container orchestration
#- date: 2015-11-10
# city: Washington DC
# country: us
# event: LISA
# title: Deploying and Scaling Applications with Docker Swarm
#2015-09-24-strangeloop
- title: Introduction to Docker and Containers
slides: intro-selfpaced.yml.html
- title: Container Orchestration with Docker and Swarm
slides: swarm-selfpaced.yml.html
- title: Deploying and Scaling Microservices with Docker and Kubernetes
slides: kube-selfpaced.yml.html

View File

@@ -2,55 +2,58 @@ title: |
Introduction
to Containers
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
chat: "[Gitter](https://gitter.im/jpetazzo/training-20180411-paris)"
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- common/title.md
- shared/title.md
- logistics.md
- intro/intro.md
- common/about-slides.md
- common/toc.md
- - intro/Docker_Overview.md
- intro/Docker_History.md
- intro/Training_Environment.md
- intro/Installing_Docker.md
- - intro/First_Containers.md
- intro/Background_Containers.md
- intro/Start_And_Attach.md
- intro/Initial_Images.md
- - intro/Building_Images_Interactively.md
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md
- intro/Ambassadors.md
- - intro/Local_Development_Workflow.md
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Docker_Machine.md
- - intro/CI_Pipeline.md
- intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Dockerfile_Samples.md
- intro/Logging.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- - intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md
- intro/links.md
- containers/intro.md
- shared/about-slides.md
- shared/toc.md
- - containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- - containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Docker_Machine.md
- - containers/Advanced_Dockerfiles.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Resource_Limits.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
- - containers/Container_Engines.md
- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md

View File

@@ -1,55 +1,59 @@
title: |
Introduction
to Docker and
Containers
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
chapters:
- common/title.md
# - common/logistics.md
- intro/intro.md
- common/about-slides.md
- common/toc.md
- - intro/Docker_Overview.md
- intro/Docker_History.md
- intro/Training_Environment.md
- intro/Installing_Docker.md
- intro/First_Containers.md
- intro/Background_Containers.md
- intro/Start_And_Attach.md
- - intro/Initial_Images.md
- intro/Building_Images_Interactively.md
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md
- intro/Ambassadors.md
- - intro/Local_Development_Workflow.md
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Docker_Machine.md
- intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Logging.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md
- intro/links.md
- shared/title.md
# - shared/logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/toc.md
- - containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- - containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Docker_Machine.md
- - containers/Advanced_Dockerfiles.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Resource_Limits.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
- - containers/Container_Engines.md
- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md

View File

@@ -1 +0,0 @@
intro-fullday.yml

View File

@@ -1,3 +0,0 @@
# Building a CI pipeline
.center[![Demo](images/demo.jpg)]

View File

@@ -1,5 +0,0 @@
# Dockerfile Samples
---
## (Demo in terminal)

View File

@@ -1,100 +0,0 @@
# Tips for efficient Dockerfiles
We will see how to:
* Reduce the number of layers.
* Leverage the build cache so that builds can be faster.
* Embed unit testing in the build process.
---
## Reducing the number of layers
* Each line in a `Dockerfile` creates a new layer.
* Build your `Dockerfile` to take advantage of Docker's caching system.
* Combine commands by using `&&` to continue commands and `\` to wrap lines.
Note: it is frequent to build a Dockerfile line by line:
```dockerfile
RUN apt-get install thisthing
RUN apt-get install andthatthing andthatotherone
RUN apt-get install somemorestuff
```
And then refactor it trivially before shipping:
```dockerfile
RUN apt-get install thisthing andthatthing andthatotherone somemorestuff
```
---
## Avoid re-installing dependencies at each build
* Classic Dockerfile problem:
"each time I change a line of code, all my dependencies are re-installed!"
* Solution: `COPY` dependency lists (`package.json`, `requirements.txt`, etc.)
by themselves to avoid reinstalling unchanged dependencies every time.
---
## Example "bad" `Dockerfile`
The dependencies are reinstalled every time, because the build system does not know if `requirements.txt` has been updated.
```bash
FROM python
MAINTAINER Docker Education Team <education@docker.com>
COPY . /src/
WORKDIR /src
RUN pip install -qr requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
```
---
## Fixed `Dockerfile`
Adding the dependencies as a separate step means that Docker can cache more efficiently and only install them when `requirements.txt` changes.
```bash
FROM python
MAINTAINER Docker Education Team <education@docker.com>
COPY ./requirements.txt /tmp/requirements.txt
RUN pip install -qr /tmp/requirements.txt
COPY . /src/
WORKDIR /src
EXPOSE 5000
CMD ["python", "app.py"]
```
---
## Embedding unit tests in the build process
```dockerfile
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
RUN <install test dependencies>
COPY <test data sets and fixtures>
RUN <unit tests>
FROM <baseimage>
RUN <install dependencies>
COPY <code>
RUN <build code>
CMD, EXPOSE ...
```
* The build fails as soon as an instruction fails
* If `RUN <unit tests>` fails, the build doesn't produce an image
* If it succeeds, it produces a clean image (without test libraries and data)

View File

@@ -1,106 +0,0 @@
# Multi-stage builds
* In the previous example, our final image contain:
* our `hello` program
* its source code
* the compiler
* Only the first one is strictly necessary.
* We are going to see how to obtain an image without the superfluous components.
---
## Multi-stage builds principles
* At any point in our `Dockerfile`, we can add a new `FROM` line.
* This line starts a new stage of our build.
* Each stage can access the files of the previous stages with `COPY --from=...`.
* When a build is tagged (with `docker build -t ...`), the last stage is tagged.
* Previous stages are not discarded: they will be used for caching, and can be referenced.
---
## Multi-stage builds in practice
* Each stage is numbered, starting at `0`
* We can copy a file from a previous stage by indicating its number, e.g.:
```dockerfile
COPY --from=0 /file/from/first/stage /location/in/current/stage
```
* We can also name stages, and reference these names:
```dockerfile
FROM golang AS builder
RUN ...
FROM alpine
COPY --from=builder /go/bin/mylittlebinary /usr/local/bin/
```
---
## Multi-stage builds for our C program
We will change our Dockerfile to:
* give a nickname to the first stage: `compiler`
* add a second stage using the same `ubuntu` base image
* add the `hello` binary to the second stage
* make sure that `CMD` is in the second stage
The resulting Dockerfile is on the next slide.
---
## Multi-stage build `Dockerfile`
Here is the final Dockerfile:
```dockerfile
FROM ubuntu AS compiler
RUN apt-get update
RUN apt-get install -y build-essential
COPY hello.c /
RUN make hello
FROM ubuntu
COPY --from=compiler /hello /hello
CMD /hello
```
Let's build it, and check that it works correctly:
```bash
docker build -t hellomultistage .
docker run hellomultistage
```
---
## Comparing single/multi-stage build image sizes
List our images with `docker images`, and check the size of:
- the `ubuntu` base image,
- the single-stage `hello` image,
- the multi-stage `hellomultistage` image.
We can achieve even smaller images if we use smaller base images.
However, if we use common base images (e.g. if we standardize on `ubuntu`),
these common images will be pulled only once per node, so they are
virtually "free."

View File

@@ -0,0 +1,131 @@
# Accessing internal services
- When we are logged in on a cluster node, we can access internal services
(by virtue of the Kubernetes network model: all nodes can reach all pods and services)
- When we are accessing a remote cluster, things are different
(generally, our local machine won't have access to the cluster's internal subnet)
- How can we temporarily access a service without exposing it to everyone?
--
- `kubectl proxy`: gives us access to the API, which includes a proxy for HTTP resources
- `kubectl port-forward`: allows forwarding of TCP ports to arbitrary pods, services, ...
---
## Suspension of disbelief
The exercises in this section assume that we have set up `kubectl` on our
local machine in order to access a remote cluster.
We will therefore show how to access services and pods of the remote cluster,
from our local machine.
You can also run these exercises directly on the cluster (if you haven't
installed and set up `kubectl` locally).
Running commands locally will be less useful
(since you could access services and pods directly),
but keep in mind that these commands will work anywhere as long as you have
installed and set up `kubectl` to communicate with your cluster.
---
## `kubectl proxy` in theory
- Running `kubectl proxy` gives us access to the entire Kubernetes API
- The API includes routes to proxy HTTP traffic
- These routes look like the following:
`/api/v1/namespaces/<namespace>/services/<service>/proxy`
- We just add the URI to the end of the request, for instance:
`/api/v1/namespaces/<namespace>/services/<service>/proxy/index.html`
- We can access `services` and `pods` this way
---
## `kubectl proxy` in practice
- Let's access the `webui` service through `kubectl proxy`
.exercise[
- Run an API proxy in the background:
```bash
kubectl proxy &
```
- Access the `webui` service:
```bash
curl localhost:8001/api/v1/namespaces/default/services/webui/proxy/index.html
```
- Terminate the proxy:
```bash
kill %1
```
]
---
## `kubectl port-forward` in theory
- What if we want to access a TCP service?
- We can use `kubectl port-forward` instead
- It will create a TCP relay to forward connections to a specific port
(of a pod, service, deployment...)
- The syntax is:
`kubectl port-forward service/name_of_service local_port:remote_port`
- If only one port number is specified, it is used for both local and remote ports
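For instance, a sketch of forwarding to a deployment rather than a service (`web` is a placeholder deployment name):
```bash
# Forward local port 8080 to port 80 of one pod of the "web" deployment.
kubectl port-forward deployment/web 8080:80
```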
---
## `kubectl port-forward` in practice
- Let's access our remote Redis server
.exercise[
- Forward connections from local port 10000 to remote port 6379:
```bash
kubectl port-forward svc/redis 10000:6379 &
```
- Connect to the Redis server:
```bash
telnet localhost 10000
```
- Issue a few commands, e.g. `INFO server` then `QUIT`
<!--
```wait Connected to localhost```
```keys INFO server```
```keys ^J```
```keys QUIT```
```keys ^J```
-->
- Terminate the port forwarder:
```bash
kill %1
```
]

533
slides/k8s/authn-authz.md Normal file
View File

@@ -0,0 +1,533 @@
# Authentication and authorization
*And first, a little refresher!*
- Authentication = verifying the identity of a person
On a UNIX system, we can authenticate with login+password, SSH keys ...
- Authorization = listing what they are allowed to do
On a UNIX system, this can include file permissions, sudoer entries ...
- Sometimes abbreviated as "authn" and "authz"
- In good modular systems, these things are decoupled
(so we can e.g. change a password or SSH key without having to reset access rights)
---
## Authentication in Kubernetes
- When the API server receives a request, it tries to authenticate it
(it examines headers, certificates ... anything available)
- Many authentication methods can be used simultaneously:
- TLS client certificates (that's what we've been doing with `kubectl` so far)
- bearer tokens (a secret token in the HTTP headers of the request)
- [HTTP basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication) (carrying user and password in a HTTP header)
- authentication proxy (sitting in front of the API and setting trusted headers)
- It's the job of the authentication method to produce:
- the user name
- the user ID
- a list of groups
- The API server doesn't interpret these; it'll be the job of *authorizers*
---
## Anonymous requests
- If any authentication method *rejects* a request, it's denied
(`401 Unauthorized` HTTP code)
- If a request is neither rejected nor accepted by anyone, it's anonymous
- the user name is `system:anonymous`
- the list of groups is `[system:unauthenticated]`
- By default, the anonymous user can't do anything
(that's what you get if you just `curl` the Kubernetes API)
---
## Authentication with TLS certificates
- This is enabled in most Kubernetes deployments
- The user name is derived from the `CN` in the client certificates
- The groups are derived from the `O` fields in the client certificate
- From the point of view of the Kubernetes API, users do not exist
(i.e. they are not stored in etcd or anywhere else)
- Users can be created (and given membership to groups) independently of the API
- The Kubernetes API can be set up to use your custom CA to validate client certs
---
class: extra-details
## Viewing our admin certificate
- Let's inspect the certificate we've been using all this time!
.exercise[
- This command will show the `CN` and `O` fields for our certificate:
```bash
kubectl config view \
--raw \
-o json \
| jq -r .users[0].user[\"client-certificate-data\"] \
| base64 -d \
| openssl x509 -text \
| grep Subject:
```
]
Let's break down that command together! 😅
---
class: extra-details
## Breaking down the command
- `kubectl config view` shows the Kubernetes user configuration
- `--raw` includes certificate information (which shows as REDACTED otherwise)
- `-o json` outputs the information in JSON format
- `| jq ...` extracts the field with the user certificate (in base64)
- `| base64 -d` decodes the base64 format (now we have a PEM file)
- `| openssl x509 -text` parses the certificate and outputs it as plain text
- `| grep Subject:` shows us the line that interests us
→ We are user `kubernetes-admin`, in group `system:masters`.
---
## Authentication with tokens
- Tokens are passed as HTTP headers:
`Authorization: Bearer and-then-here-comes-the-token`
- Tokens can be validated through a number of different methods:
- static tokens hard-coded in a file on the API server
- [bootstrap tokens](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) (special case to create a cluster or join nodes)
- [OpenID Connect tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) (to delegate authentication to compatible OAuth2 providers)
- service accounts (these deserve more details, coming right up!)
---
## Service accounts
- A service account is a user that exists in the Kubernetes API
(it is visible with e.g. `kubectl get serviceaccounts`)
- Service accounts can therefore be created / updated dynamically
(they don't require hand-editing a file and restarting the API server)
- A service account is associated with a set of secrets
(the kind that you can view with `kubectl get secrets`)
- Service accounts are generally used to grant permissions to applications, services ...
(as opposed to humans)
---
class: extra-details
## Token authentication in practice
- We are going to list existing service accounts
- Then we will extract the token for a given service account
- And we will use that token to authenticate with the API
---
class: extra-details
## Listing service accounts
.exercise[
- The resource name is `serviceaccount` or `sa` in short:
```bash
kubectl get sa
```
]
There should be just one service account in the default namespace: `default`.
---
class: extra-details
## Finding the secret
.exercise[
- List the secrets for the `default` service account:
```bash
kubectl get sa default -o yaml
SECRET=$(kubectl get sa default -o json | jq -r .secrets[0].name)
```
]
It should be named `default-token-XXXXX`.
---
class: extra-details
## Extracting the token
- The token is stored in the secret, wrapped with base64 encoding
.exercise[
- View the secret:
```bash
kubectl get secret $SECRET -o yaml
```
- Extract the token and decode it:
```bash
TOKEN=$(kubectl get secret $SECRET -o json \
| jq -r .data.token | base64 -d)
```
]
---
class: extra-details
## Using the token
- Let's send a request to the API, without and with the token
.exercise[
- Find the ClusterIP for the `kubernetes` service:
```bash
kubectl get svc kubernetes
API=$(kubectl get svc kubernetes -o json | jq -r .spec.clusterIP)
```
- Connect without the token:
```bash
curl -k https://$API
```
- Connect with the token:
```bash
curl -k -H "Authorization: Bearer $TOKEN" https://$API
```
]
---
class: extra-details
## Results
- In both cases, we will get a "Forbidden" error
- Without authentication, the user is `system:anonymous`
- With authentication, it is shown as `system:serviceaccount:default:default`
- The API "sees" us as a different user
- But neither user has any rights, so we can't do anything yet
- Let's change that!
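For reference, the "Forbidden" error is an ordinary `Status` object; the anonymous one looks roughly like this (the exact message varies with the Kubernetes version):

```
{
  "kind": "Status",
  "apiVersion": "v1",
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "code": 403
}
```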
---
## Authorization in Kubernetes
- There are multiple ways to grant permissions in Kubernetes, called [authorizers](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules):
- [Node Authorization](https://kubernetes.io/docs/reference/access-authn-authz/node/) (used internally by kubelet; we can ignore it)
- [Attribute-based access control](https://kubernetes.io/docs/reference/access-authn-authz/abac/) (powerful but complex and static; ignore it too)
- [Webhook](https://kubernetes.io/docs/reference/access-authn-authz/webhook/) (each API request is submitted to an external service for approval)
- [Role-based access control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (associates permissions to users dynamically)
- The one we want is the last one, generally abbreviated as RBAC
---
## Role-based access control
- RBAC allows us to specify fine-grained permissions
- Permissions are expressed as *rules*
- A rule is a combination of:
- [verbs](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb) like create, get, list, update, delete ...
- resources (as in "API resource", like pods, nodes, services ...)
- resource names (to specify e.g. one specific pod instead of all pods)
- in some cases, [subresources](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources) (e.g. logs are subresources of pods)
---
## From rules to roles to rolebindings
- A *role* is an API object containing a list of *rules*
Example: role "external-load-balancer-configurator" can:
- [list, get] resources [endpoints, services, pods]
- [update] resources [services]
- A *rolebinding* associates a role with a user
Example: rolebinding "external-load-balancer-configurator":
- associates user "external-load-balancer-configurator"
- with role "external-load-balancer-configurator"
- Yes, there can be users, roles, and rolebindings with the same name
- It's a good idea for 1-1-1 bindings; not so much for 1-N ones
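To illustrate, a Role implementing the rules above could look roughly like this (a sketch; all the resources mentioned live in the core API group):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: external-load-balancer-configurator
rules:
- apiGroups: [""]
  resources: ["endpoints", "services", "pods"]
  verbs: ["list", "get"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["update"]
```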
---
## Cluster-scope permissions
- API resources Role and RoleBinding are for objects within a namespace
- We can also define API resources ClusterRole and ClusterRoleBinding
- These are a superset, allowing us to:
- specify actions on cluster-wide objects (like nodes)
- operate across all namespaces
- We can create Role and RoleBinding resources within a given namespace
- ClusterRole and ClusterRoleBinding resources are global
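For instance, granting the built-in `view` cluster role to a user across all namespaces is a one-liner (the names here are placeholders):

```bash
kubectl create clusterrolebinding some-user-can-view-everything \
        --clusterrole=view --user=some-user
```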
---
## Pods and service accounts
- A pod can be associated to a service account
- by default, it is associated to the `default` service account
- as we've seen earlier, this service account has no permission anyway
- The associated token is exposed into the pod's filesystem
(in `/var/run/secrets/kubernetes.io/serviceaccount/token`)
- Standard Kubernetes tooling (like `kubectl`) will look for it there
- So Kubernetes tools running in a pod will automatically use the service account
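For instance, we can peek at that token from inside any running pod (replace `some-pod` with an actual pod name):

```bash
kubectl exec -ti some-pod -- \
        cat /var/run/secrets/kubernetes.io/serviceaccount/token
```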
---
## In practice
- We are going to create a service account
- We will use an existing cluster role (`view`)
- We will bind together this role and this service account
- Then we will run a pod using that service account
- In this pod, we will install `kubectl` and check our permissions
---
## Creating a service account
- We will call the new service account `viewer`
(note that nothing prevents us from calling it `view`, like the role)
.exercise[
- Create the new service account:
```bash
kubectl create serviceaccount viewer
```
- List service accounts now:
```bash
kubectl get serviceaccounts
```
]
---
## Binding a role to the service account
- Binding a role = creating a *rolebinding* object
- We will call that object `viewercanview`
(but again, we could call it `view`)
.exercise[
- Create the new role binding:
```bash
kubectl create rolebinding viewercanview \
--clusterrole=view \
--serviceaccount=default:viewer
```
]
It's important to note a couple of details in these flags ...
---
## Roles vs Cluster Roles
- We used `--clusterrole=view`
- What would have happened if we had used `--role=view`?
- we would have bound the role `view` from the local namespace
<br/>(instead of the cluster role `view`)
- the command would have worked fine (no error)
- but later, our API requests would have been denied
- This is a deliberate design decision
(we can reference roles that don't exist, and create/update them later)
---
## Users vs Service Accounts
- We used `--serviceaccount=default:viewer`
- What would have happened if we had used `--user=default:viewer`?
- we would have bound the role to a user instead of a service account
- again, the command would have worked fine (no error)
- ... but our API requests would have been denied later
- What about the `default:` prefix?
- that's the namespace of the service account
- yes, it could be inferred from context, but ... `kubectl` requires it
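To see how `kubectl` translated these flags, we can dump the binding that we just created; its `subjects` section should list a `ServiceAccount` named `viewer` in the `default` namespace:

```bash
kubectl get rolebinding viewercanview -o yaml
```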
---
## Testing
- We will run an `alpine` pod and install `kubectl` there
.exercise[
- Run a one-time pod:
```bash
kubectl run eyepod --rm -ti --restart=Never \
--serviceaccount=viewer \
--image alpine
```
- Install `curl`, then use it to install `kubectl`:
```bash
apk add --no-cache curl
URLBASE=https://storage.googleapis.com/kubernetes-release/release
KUBEVER=$(curl -s $URLBASE/stable.txt)
curl -LO $URLBASE/$KUBEVER/bin/linux/amd64/kubectl
chmod +x kubectl
```
]
---
## Running `kubectl` in the pod
- We'll try to use our `view` permissions, then to create an object
.exercise[
- Check that we can, indeed, view things:
```bash
./kubectl get all
```
- But that we can't create things:
```
./kubectl run tryme --image=nginx
```
- Exit the container with `exit` or `^D`
<!-- ```keys ^D``` -->
]
---
## Testing directly with `kubectl`
- We can also check for permission with `kubectl auth can-i`:
```bash
kubectl auth can-i list nodes
kubectl auth can-i create pods
kubectl auth can-i get pod/name-of-pod
kubectl auth can-i get /url-fragment-of-api-request/
kubectl auth can-i '*' services
```
- And we can check permissions on behalf of other users:
```bash
kubectl auth can-i list nodes \
--as some-user
kubectl auth can-i list nodes \
--as system:serviceaccount:<namespace>:<name-of-service-account>
```
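For instance, to check what our `viewer` service account is allowed to do (assuming the role binding created earlier is still in place):

```bash
kubectl auth can-i list pods \
        --as system:serviceaccount:default:viewer
kubectl auth can-i create pods \
        --as system:serviceaccount:default:viewer
```

The first command should answer `yes`, and the second one `no`.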

---
# Building images with the Docker Engine
- Until now, we have built our images manually, directly on a node
- We are going to show how to build images from within the cluster
(by executing code in a container controlled by Kubernetes)
- We are going to use the Docker Engine for that purpose
- To access the Docker Engine, we will mount the Docker socket in our container
- After building the image, we will push it to our self-hosted registry
---
## Resource specification for our builder pod
.small[
```yaml
apiVersion: v1
kind: Pod
metadata:
name: build-image
spec:
restartPolicy: OnFailure
containers:
- name: docker-build
image: docker
env:
- name: REGISTRY_PORT
value: "`3XXXX`"
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
mkdir /workspace &&
git clone https://github.com/jpetazzo/container.training /workspace &&
docker build -t localhost:$REGISTRY_PORT/worker /workspace/dockercoins/worker &&
docker push localhost:$REGISTRY_PORT/worker
volumeMounts:
- name: docker-socket
mountPath: /var/run/docker.sock
volumes:
- name: docker-socket
hostPath:
path: /var/run/docker.sock
```
]
---
## Breaking down the pod specification (1/2)
- `restartPolicy: OnFailure` prevents the build from running in an infinite loop
- We use the `docker` image (so that the `docker` CLI is available)
- We rely on the fact that the `docker` image is based on `alpine`
(which is why we use `apk` to install `git`)
- The port for the registry is passed through an environment variable
(this avoids repeating it in the specification, which would be error-prone)
.warning[The environment variable has to be a string, so the `"`s are mandatory!]
---
## Breaking down the pod specification (2/2)
- The volume `docker-socket` is declared with a `hostPath`, indicating a bind-mount
- It is then mounted in the container onto the default Docker socket path
- We show an interesting way to specify the commands to run in the container:
- the command executed will be `sh -c <args>`
- `args` is a list of strings
- `|` is used to pass a multi-line string in the YAML file
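Here is the pattern in isolation (a minimal, hypothetical snippet; it amounts to running `sh -c "echo hello && echo world"`):

```yaml
command: ["sh", "-c"]
args:
- |
  echo hello &&
  echo world
```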
---
## Running our pod
- Let's try this out!
.exercise[
- Check the port used by our self-hosted registry:
```bash
kubectl get svc registry
```
- Edit `~/container.training/k8s/docker-build.yaml` to put the port number
- Schedule the pod by applying the resource file:
```bash
kubectl apply -f ~/container.training/k8s/docker-build.yaml
```
- Watch the logs:
```bash
stern build-image
```
<!--
```longwait latest: digest: sha256:```
```keys ^C```
-->
]
---
## What's missing?
What do we need to change to make this production-ready?
- Build from a long-running container (e.g. a `Deployment`) triggered by web hooks
(the payload of the web hook could indicate the repository to build)
- Build a specific branch or tag; tag image accordingly
- Handle repositories where the Dockerfile is not at the root
(or containing multiple Dockerfiles)
- Expose build logs so that troubleshooting is straightforward
--
🤔 That seems like a lot of work!
--
That's why services like Docker Hub (with [automated builds](https://docs.docker.com/docker-hub/builds/)) are helpful.
<br/>
They handle the whole "code repository → Docker image" workflow.
---
## Things to be aware of
- This is talking directly to a node's Docker Engine to build images
- It bypasses resource allocation mechanisms used by Kubernetes
(but you can use *taints* and *tolerations* to dedicate builder nodes)
- Be careful not to introduce conflicts when naming images
(e.g. do not allow the user to specify the image names!)
- Your builds are going to be *fast*
(because they will leverage Docker's caching system)

---
# Building images with Kaniko
- [Kaniko](https://github.com/GoogleContainerTools/kaniko) is an open source tool to build container images within Kubernetes
- It can build an image using any standard Dockerfile
- The resulting image can be pushed to a registry or exported as a tarball
- It doesn't require any particular privilege
(and can therefore run in a regular container in a regular pod)
- This combination of features is pretty unique
(most other tools use different formats, or require elevated privileges)
---
## Kaniko in practice
- Kaniko provides an "executor image", `gcr.io/kaniko-project/executor`
- When running that image, we need to specify at least:
- the path to the build context (=the directory with our Dockerfile)
- the target image name (including the registry address)
- Simplified example:
```
docker run \
-v ...:/workspace gcr.io/kaniko-project/executor \
--context=/workspace \
--destination=registry:5000/image_name:image_tag
```
---
## Running Kaniko in a Docker container
- Let's build the image for the DockerCoins `worker` service with Kaniko
.exercise[
- Find the port number for our self-hosted registry:
```bash
kubectl get svc registry
PORT=$(kubectl get svc registry -o json | jq .spec.ports[0].nodePort)
```
- Run Kaniko:
```bash
docker run --net host \
-v ~/container.training/dockercoins/worker:/workspace \
gcr.io/kaniko-project/executor \
--context=/workspace \
--destination=127.0.0.1:$PORT/worker-kaniko:latest
```
]
We use `--net host` so that we can connect to the registry over `127.0.0.1`.
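Once the push completes, we can check that the image reached the registry by querying the standard registry API (a quick sanity check, assuming the registry exposes the usual `/v2/` endpoints):

```bash
curl 127.0.0.1:$PORT/v2/_catalog
curl 127.0.0.1:$PORT/v2/worker-kaniko/tags/list
```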
---
## Running Kaniko in a Kubernetes pod
- We need to mount or copy the build context to the pod
- We are going to build straight from the git repository
(to avoid depending on files sitting on a node, outside of containers)
- We need to `git clone` the repository before running Kaniko
- We are going to use two containers sharing a volume:
- a first container to `git clone` the repository to the volume
- a second container to run Kaniko, using the content of the volume
- However, we need the first container to be done before running the second one
🤔 How could we do that?
---
## [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) to the rescue
- A pod can have a list of `initContainers`
- `initContainers` are executed in the specified order
- Each Init Container needs to complete (exit) successfully
- If any Init Container fails (non-zero exit status) the pod fails
(what happens next depends on the pod's `restartPolicy`)
- After all Init Containers have run successfully, normal `containers` are started
- We are going to execute the `git clone` operation in an Init Container
---
## Our Kaniko builder pod
.small[
```yaml
apiVersion: v1
kind: Pod
metadata:
name: kaniko-build
spec:
initContainers:
- name: git-clone
image: alpine
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
git clone git://github.com/jpetazzo/container.training /workspace
volumeMounts:
- name: workspace
mountPath: /workspace
containers:
- name: build-image
image: gcr.io/kaniko-project/executor:latest
args:
- "--context=/workspace/dockercoins/rng"
- "--insecure"
- "--destination=registry:5000/rng-kaniko:latest"
volumeMounts:
- name: workspace
mountPath: /workspace
volumes:
- name: workspace
```
]
---
## Explanations
- We define a volume named `workspace` (using the default `emptyDir` provider)
- That volume is mounted to `/workspace` in both our containers
- The `git-clone` Init Container installs `git` and runs `git clone`
- The `build-image` container executes Kaniko
- We use our self-hosted registry DNS name (`registry`)
- We add `--insecure` to use plain HTTP to talk to the registry
---
## Running our Kaniko builder pod
- The YAML for the pod is in `k8s/kaniko-build.yaml`
.exercise[
- Create the pod:
```bash
kubectl apply -f ~/container.training/k8s/kaniko-build.yaml
```
- Watch the logs:
```bash
stern kaniko
```
<!--
```longwait registry:5000/rng-kaniko:latest:```
```keys ^C```
-->
]
---
## Discussion
*What should we use? The Docker build technique shown earlier? Kaniko? Something else?*
- The Docker build technique is simple, and has the potential to be very fast
- However, it doesn't play nice with Kubernetes resource limits
- Kaniko plays nice with resource limits
- However, it's slower (there is no caching at all)
- The ultimate building tool will probably be [Jessica Frazelle](https://twitter.com/jessfraz)'s [img](https://github.com/genuinetools/img) builder
(it depends on upstream changes that are not in Kubernetes 1.11.2 yet)
But ... is it all about [speed](https://github.com/AkihiroSuda/buildbench/issues/1)? (No!)
---
## The big picture
- For starters: the [Docker Hub automated builds](https://docs.docker.com/docker-hub/builds/) are very easy to set up
- link a GitHub repository with the Docker Hub
- each time you push to GitHub, an image gets built on the Docker Hub
- If this doesn't work for you: why?
- too slow (I'm far from `us-east-1`!) → consider using your cloud provider's registry
- I'm not using a cloud provider → ok, perhaps you need to self-host then
- I need fancy features (e.g. CI) → consider something like GitLab
