Compare commits


152 Commits

Author SHA1 Message Date
Jerome Petazzoni
a4e21ffe4d fix-redirects.sh: adding forced redirect 2020-04-07 16:48:33 -05:00
Jérôme Petazzoni
ae140b24cb Merge pull request #435 from joepvd/patch-1
typo correction
2019-02-27 21:10:21 +01:00
Joep van Delft
ab87de9813 typo correction 2019-02-27 20:41:10 +01:00
Jerome Petazzoni
53a416626e Update WiFi + logistics 2018-12-17 01:43:45 -06:00
Jerome Petazzoni
8bf71b956f Customize all the things 2018-12-16 05:46:32 -06:00
Jerome Petazzoni
2c445ca99b Merge branch 'enixlogo' into decembre2018 2018-12-16 05:26:07 -06:00
Jerome Petazzoni
2ba244839a Merge branch 'consul-auto-join' into decembre2018 2018-12-16 05:25:31 -06:00
Jerome Petazzoni
0b71bc8655 Merge branch 'improve-kubectl-config-context' into decembre2018 2018-12-16 05:25:12 -06:00
Jerome Petazzoni
949fdd7791 Merge branch 'explain-system-masters' into decembre2018 2018-12-16 05:24:59 -06:00
Jerome Petazzoni
5cf8b42fe9 Merge branch 'mention-kubectl-logs-bug' into decembre2018 2018-12-16 05:24:45 -06:00
Jerome Petazzoni
781ac48c5c Merge branch 'bump-versions-to-1.13' into decembre2018 2018-12-16 05:23:57 -06:00
Jerome Petazzoni
1cf3849bbd Merge branch 'static-pods' into decembre2018 2018-12-16 05:23:47 -06:00
Jerome Petazzoni
c77960d77b Merge branch 'rewrite-labels-and-selectors' into decembre2018 2018-12-16 05:23:35 -06:00
Jérôme Petazzoni
46c6866ce9 Merge pull request #414 from jpetazzo/make-build-and-push-optional
Make build and push optional
2018-12-09 20:04:38 +01:00
Jerome Petazzoni
fe95318108 Copypasta fix 🤦 2018-12-07 14:31:55 -06:00
Jerome Petazzoni
65232f93ba Add GOTO Chicago 2018-12-07 14:23:58 -06:00
Jerome Petazzoni
9fa7b958dc Update Consul demo to use Cloud auto-join
Consul 1.4 introduces Cloud auto-join, which finds the
IP addresses of the other nodes by querying an API (in
this case, the Kubernetes API).

This involves creating a service account and granting
permissions to list and get pods. It is a little bit
more complex, but it reuses previous notions (like RBAC)
so I like it better.
2018-12-06 21:38:26 -06:00
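As a quick sanity check (not part of the commit), the permissions granted to the new `consul` ServiceAccount can be verified with `kubectl auth can-i`, assuming the manifest was applied in the `default` namespace:

```bash
# Ask the API server whether the consul ServiceAccount may list and get pods
kubectl auth can-i list pods --as=system:serviceaccount:default:consul
kubectl auth can-i get pods --as=system:serviceaccount:default:consul
```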
Jerome Petazzoni
a95e5c960e Make build and push optional
This reformulates the section where we run DockerCoins
to better explain why we use images (and how they are
essential to the "ship" part of the action), and it
says upfront that it will be possible to use images
from the Docker Hub (and skip altogether the part where
we run our own registry and build and push images).

It also reshuffles section headers a bit, because that
part had a handful of really small sections. Now we
have:

- Shipping images with a registry
- Running our application on Kubernetes

I think that's better.

It also paves the way to make the entire self-hosted
registry part optional.
2018-12-06 20:21:14 -06:00
Jerome Petazzoni
5b87162e95 Update portworx demo for 4 nodes 2018-12-05 19:12:53 -06:00
Jerome Petazzoni
8c4914294e Improve namespace switching example
We show how to change namespace by creating a new context, then
switching to the new context. It works, but it is very cumbersome.
Instead, let's just update the current context, and give some
details about when it's better to update the current context, and
when it is better to use different contexts and hop between them.
2018-12-05 19:01:15 -06:00
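A minimal sketch of the two approaches (the namespace `blue`, cluster `kubernetes`, and user `kubernetes-admin` are illustrative kubeadm-style defaults, not taken from the commit):

```bash
# Cumbersome: create a dedicated context, then switch to it
kubectl config set-context blue --namespace=blue \
        --cluster=kubernetes --user=kubernetes-admin
kubectl config use-context blue

# Simpler: update the namespace of the current context in place
kubectl config set-context --current --namespace=blue
```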
Jerome Petazzoni
7b9b9f527d Explain system:masters
Add a couple of extra-details slides showing how our client certificate
gives us all the privileges on the cluster (through the system:masters
group).
2018-12-05 18:31:12 -06:00
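One way (not shown in the commit) to see that group membership is to decode the client certificate embedded in the kubeconfig and inspect its subject; with kubeadm defaults the organization field reads `O = system:masters`:

```bash
# Extract the first user's client certificate and print its subject
kubectl config view --raw \
        -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -subject
```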
Jerome Petazzoni
3c7f39747c Mention the kubectl logs -l ... --tail N issue in k8s 1.12
This supersedes #399.

There was a bug in Kubernetes 1.12. It was fixed in 1.13.

Let's just mention the issue in one brief slide but not add
too much extra fluff about it.
2018-12-05 17:55:18 -06:00
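For reference, the kind of command affected by that 1.12 issue looks like this (the `app=rng` label is the one used elsewhere in the material):

```bash
kubectl logs --selector app=rng --tail 1
```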
Jerome Petazzoni
be67a742ee Update Kubernetes versions to 1.13 2018-12-05 17:34:56 -06:00
Jerome Petazzoni
40cd934118 Add a slide explaining tradeoffs between static/normal pods for control plane 2018-12-05 14:25:19 -06:00
Jerome Petazzoni
556db65251 Add warning about --infra flag (fixes #383) 2018-12-05 14:05:57 -06:00
Jerome Petazzoni
ff781a3065 Add QCON London 2018-11-30 23:37:53 +01:00
Bridget Kromhout
8348d750df Merge pull request #405 from jpetazzo/support-multiday-events
Support multi-day events
2018-11-29 16:43:11 +11:00
Jérôme Petazzoni
9afa0acbf9 Typo 2018-11-28 01:45:49 +01:00
Bret Fisher
cb624755e4 large update to fix many "slide debt" issues
with swarm stacks, service updates, rollbacks, and healthchecks
2018-11-28 01:45:49 +01:00
Bret Fisher
523ca55831 smoothing out update/rollback slides 2018-11-28 01:45:49 +01:00
Bret Fisher
f0b48935fa rolling updates streamline 2018-11-28 01:45:49 +01:00
Jerome Petazzoni
abcc47b563 Add a section about static pods
This was a request by @abuisine, so I'm flagging him for review :-)

This section explains the challenges associated with self-hosting
the control plane; and segues into static pods. It also mentions
bootkube and the Pod Checkpointer. There is an exercise showing
how to run a static pod.
2018-11-28 01:29:40 +01:00
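A rough sketch of what the static pod exercise boils down to, assuming the kubeadm default static pod directory and the `just-a-pod.yaml` manifest added elsewhere in this diff:

```bash
# The kubelet watches this directory and runs any manifest placed there,
# even without an API server; a read-only "mirror pod" shows up in kubectl.
sudo cp just-a-pod.yaml /etc/kubernetes/manifests/
kubectl get pods    # shows a pod named hello-<nodename>
sudo rm /etc/kubernetes/manifests/just-a-pod.yaml   # the pod goes away
```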
Jerome Petazzoni
33e1bfd8be Support multi-day events
In index.yaml, the date can now be specified as a range. For instance,
instead of:

date: 2018-11-28

We can use:

date: [2018-11-28, 2018-12-05]

For now, only the start date is shown (so the event still appears
as happening on 2018-11-28 in that example), but it will be considered
"current" (and show up in the list of "coming soon" events) until
the end date.

This way, when updating the content during a multi-day event, the
event stays in the top list and is not pushed to the "past events"
section.

Single-day events can still use the old syntax, of course.
2018-11-26 16:55:47 +01:00
Jerome Petazzoni
2efc29991e Rewrite section about labels and selectors
The old version was using a slightly confusing way to
show which pods were receiving traffic:

kubectl logs --tail 1 --selector app=rng

(And then we look at the timestamp of the last request.)

In this new version, concepts are introduced progressively;
the YAML parser magic is isolated from the other concerns;
we show the impact of removing a pod from load balancing
in a way that is (IMHO) more straightforward:

- follow logs of specific pod
- remove pod from load balancer
- logs instantly stop flowing

These slides also explain why the DaemonSet and the
ReplicaSet for the rng service don't step on each other's
toes.
2018-11-20 12:45:32 -06:00
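A condensed sketch of that new flow (the pod name is illustrative):

```bash
kubectl logs --follow rng-xxxxxxxxxx-yyyyy     # watch traffic hitting one pod
kubectl label pod rng-xxxxxxxxxx-yyyyy app-    # remove its app label
# The Service selects pods with app=rng, so the followed logs stop
# almost immediately; the ReplicaSet then creates a replacement pod.
```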
Jerome Petazzoni
11387f1330 Bump all the versions
Bump:
- stern
- Ubuntu

Also, in each place where there is a 'bumpable' version, I added
a ##VERSION## marker, which is easily greppable.
2018-11-19 20:52:14 +01:00
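The markers mean the bumpable versions can be located with a simple grep, for example:

```bash
grep -r '##VERSION##' .
```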
Jerome Petazzoni
fe93dccbac Rework presentation of DockerCoins
The last 5(ish) times I presented DockerCoins, I ended up
explaining it slightly differently. While the application
is building, I explain what it does and its architecture
(instead of watching the build and pointing out, 'oh look
there is ruby... and python...') and I found that it
worked better. It may also be better for shorter
workshops, because we can deliver useful information
while the app is building (instead of filling the time with
a tapdancing show).

@bretfisher and @bridgetkromhout, do you like the new
flow for that section? If not, I can figure something
out so that we each have our own section here, but I
hope you will actually like this one better. :)
2018-11-19 20:51:52 +01:00
Bridget Kromhout
5fad84a7cf Merge pull request #396 from jpetazzo/kubectl-create-deployment
Address deprecation of 'kubectl run'
2018-11-19 13:41:24 -06:00
Bridget Kromhout
22dd6b4e70 Merge pull request #397 from jpetazzo/preinstall-helm-and-prometheus
Add command to preinstall Helm and Prometheus
2018-11-19 13:40:51 -06:00
Jerome Petazzoni
a3594e7e1e 2018 -> 2018 🤦 2018-11-14 12:23:24 -06:00
Jerome Petazzoni
7f74e5ce32 Add upcoming training in France with ENIX 2018-11-14 12:21:29 -06:00
Jerome Petazzoni
9e051abb32 settings for 4 nodes cluster + two-sided card template 2018-11-09 02:25:00 -06:00
Bridget Kromhout
3ebcfd142b Merge pull request #394 from jpetazzo/halfday-fullday-twodays
Add kube-twodays.yml
2018-11-07 16:28:20 -05:00
Bridget Kromhout
6c5d049c4c Merge pull request #371 from bridgetkromhout/kubens
Clarify kubens
2018-11-07 16:27:08 -05:00
Bridget Kromhout
072ba44cba Merge pull request #395 from jpetazzo/add-links-to-whatsnext
Add links to what's next section
2018-11-07 16:25:29 -05:00
Bridget Kromhout
bc8a9dc4e7 Merge pull request #398 from jpetazzo/use-dockercoins-from-docker-hub
Add instructions to use the dockercoins/ images
2018-11-07 16:23:37 -05:00
Jerome Petazzoni
b1ba881eee Limit ElasticSearch RAM to 1 GB
Committing straight to master since this file
is not used by @bridgetkromhout, and people use
that file by cloning the repo (so it has to be
merged in master for people to see it).

HASHTAG YOLO
2018-11-01 19:48:06 -05:00
Jerome Petazzoni
337a5d94ed Add instructions to use the dockercoins/ images
We have images on the Docker Hub for the various components
of dockercoins. Let's add one slide explaining how to use them,
for people who are lost or who have issues with their registry,
so that they can catch up.
2018-11-01 19:08:40 -05:00
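A possible catch-up sketch using those Hub images (the `v0.1` tag is an assumption, not stated in the commit):

```bash
kubectl create deployment hasher --image=dockercoins/hasher:v0.1
kubectl create deployment rng    --image=dockercoins/rng:v0.1
kubectl create deployment webui  --image=dockercoins/webui:v0.1
kubectl create deployment worker --image=dockercoins/worker:v0.1
```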
Jerome Petazzoni
43acccc0af Add command to preinstall Helm and Prometheus
In some cases, I would like Prometheus to be pre-installed (so that
it shows a bunch of metrics) without relying on people doing it (and
setting up Helm correctly). This patch makes it possible to run:

./workshopctl helmprom TAG

It will set up Helm with a proper service account, then deploy
the Prometheus chart, disabling the alert manager and persistence,
and assigning the Prometheus server to NodePort 30090.

This command is idempotent.
2018-11-01 15:35:09 -05:00
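Once that command has run, the Prometheus server should answer on NodePort 30090 of any node; a quick check (the node name is illustrative):

```bash
curl http://node1:30090/
```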
Jerome Petazzoni
4a447c7bf5 Clarify further kubens vs kns 2018-11-01 13:48:00 -05:00
Jerome Petazzoni
b9de73d0fd Address deprecation of 'kubectl run'
kubectl run is being deprecated as a multi-purpose tool.
This PR replaces 'kubectl run' with 'kubectl create deployment'
in most places (except in the very first example, to reduce the
cognitive load; and when we really want a single-shot container).

It also updates the places where we use a 'run' label, since
'kubectl create deployment' uses the 'app' label instead.

NOTE: this hasn't gone through end-to-end testing yet.
2018-11-01 01:25:26 -05:00
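Roughly, the substitution made throughout the slides (name and image are illustrative):

```bash
kubectl run web --image=nginx                 # deprecated generator; pods get the label run=web
kubectl create deployment web --image=nginx   # replacement; pods get the label app=web
```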
Jerome Petazzoni
3f7675be04 Add links to what's next section
For each concept that is present in the full-length tutorial,
I added a link to the corresponding chapter in the final section,
so that people who liked the short version can get similarly
presented info from the longer version.
2018-10-30 17:24:27 -05:00
Jerome Petazzoni
b4bb9e5958 Update QCON entries (jpetazzo is delivering twice) 2018-10-30 16:47:44 -05:00
Jerome Petazzoni
9a6160ba1f Add kube-twodays.yml
kube-fullday is now suitable for one-day tutorials
kube-twodays is now suitable for two-day tutorials

I also tweaked things (added a couple of line breaks) so that line
numbers would be aligned across all kube-...yml files.
2018-10-30 16:42:43 -05:00
Bridget Kromhout
1d243b72ec adding vel eu 2018 k8s101 slides
adding vel eu 2018 k8s101 slides
2018-10-30 14:15:44 +01:00
Jerome Petazzoni
c5c1ccaa25 Merge branch 'BretFisher-win-containers-101' 2018-10-29 20:38:21 -05:00
Jerome Petazzoni
b68afe502b Minor formatting/typo edits 2018-10-29 20:38:01 -05:00
Jerome Petazzoni
d18cacab4c Merge branch 'win-containers-101' of git://github.com/BretFisher/container.training into BretFisher-win-containers-101 2018-10-29 19:59:53 -05:00
Bret Fisher
2faca4a507 docker101 fixing titles 2018-10-30 01:53:31 +01:00
Jerome Petazzoni
d797ec62ed Merge branch 'BretFisher-swarm-cicd' 2018-10-29 19:48:59 -05:00
Jerome Petazzoni
a475d63789 add CI/CD slides to self-paced deck as well 2018-10-29 19:48:33 -05:00
Jerome Petazzoni
dd3f2d054f Merge branch 'swarm-cicd' of git://github.com/BretFisher/container.training into BretFisher-swarm-cicd 2018-10-29 19:46:38 -05:00
Bridget Kromhout
73594fd505 Merge pull request #384 from BretFisher/patch-18
swarm workshop at goto canceled 😭
2018-10-26 11:35:53 -05:00
Bret Fisher
16a1b5c6b5 swarm workshop at goto canceled 😭 2018-10-26 07:57:50 +01:00
Bret Fisher
ff7a257844 adding cicd to swarm half day 2018-10-26 07:52:32 +01:00
Bret Fisher
77046a8ddf fixed suggestions 2018-10-26 07:51:09 +01:00
Bret Fisher
3ca696f059 size update from docker docs 2018-10-23 16:27:25 +02:00
Bret Fisher
305db76340 more sizing tweaks 2018-10-23 16:27:25 +02:00
Bret Fisher
b1672704e8 clear up swarm sizes and manager+worker setups
Lots of people will have ~5-10 servers, so let's give them more detailed info.
2018-10-23 16:27:25 +02:00
Jerome Petazzoni
c058f67a1f Add diagram for dockercoins 2018-10-23 16:25:19 +02:00
Alexandre Buisine
ab56c63901 switch to an up to date version with latest cloud-init binary and multinic patch 2018-10-23 16:22:56 +02:00
Bret Fisher
a5341f9403 Add common Windows/macOS hidden files to gitignore 2018-10-17 19:11:37 +02:00
Laurent Grangeau
b2bdac3384 Typo 2018-10-04 18:02:01 +02:00
Bridget Kromhout
a2531a0c63 making sure two-day events still show up
Because we rebuilt today, the two-day events disappeared from the front page. @jpetazzo this is a temporary fix to make them still show up.
2018-09-30 22:07:03 -04:00
Bridget Kromhout
84e2b90375 Update index.yaml
adding slides
2018-09-30 22:05:01 -04:00
Bridget Kromhout
9639dfb9cc Merge pull request #368 from jpetazzo/kube-ps1
kube-ps1 is cool and we should mention it
2018-09-30 20:55:00 -04:00
Bridget Kromhout
8722de6da2 Update namespaces.md 2018-09-30 20:54:31 -04:00
Bridget Kromhout
f2f87e52b0 Merge pull request #373 from bridgetkromhout/bridget-links
Updating Bridget's links
2018-09-30 20:53:26 -04:00
Bridget Kromhout
56ad2845e7 Updating Bridget's links 2018-09-30 20:52:24 -04:00
Bridget Kromhout
f23272d154 Clarify kubens 2018-09-30 20:32:10 -04:00
Bridget Kromhout
86e35480a4 Wording edits 2018-10-01 02:14:50 +02:00
Jerome Petazzoni
1020a8ff86 kube-ps1 is cool and we should mention it 2018-09-30 17:43:18 -05:00
Bridget Kromhout
20b1079a22 Update whatsnext.md
typo fix
2018-09-30 16:48:29 -04:00
Bridget Kromhout
f090172413 Merge pull request #365 from jpetazzo/cleanup-after-netpol
Clean up network policies
2018-09-29 21:37:59 -05:00
Jerome Petazzoni
e4251cfa8f Clean up network policies
We should tell people to clean up network policies at the end
of the chapter, otherwise further exercises will fail.
2018-09-29 20:39:32 -05:00
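The cleanup amounts to something like this (in the namespace used for the exercises):

```bash
kubectl delete networkpolicies --all
```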
Jerome Petazzoni
b6dd55b21c Use loop4 instead of loop0 2018-09-29 20:16:35 -05:00
Jerome Petazzoni
53d1a68765 Adapt autopilot for new deployment scripts 2018-09-29 20:15:38 -05:00
Jerome Petazzoni
f01bc2a7a9 Fix overlapsing slide number and pics 2018-09-29 18:54:00 -05:00
Jerome Petazzoni
156ce67413 Update CNC script 2018-09-29 18:44:03 -05:00
Jerome Petazzoni
e372850b06 Merge branch 'master' of github.com:jpetazzo/container.training 2018-09-29 10:06:24 -05:00
Jerome Petazzoni
f543b54426 Prepare deployment scripts for Ubuntu 18.04
This adds a few features:
- ./workshopctl kubereset TAG (closes #306)
- remove python-setuptools (prepare for #353)
- ./workshopctl weavetest TAG (help detecting weave issues
  like we had at OSCON, July 2018)
- remove a bit of dead code
2018-09-29 10:06:20 -05:00
Bret Fisher
35614714c8 added portainer setup and gui options 2018-09-29 16:54:42 +02:00
Bret Fisher
100c6b46cf oops, updated slide versions 2018-09-29 16:53:59 +02:00
Bret Fisher
36ccaf7ea4 update compose/machine versions in swarm nodes 2018-09-29 16:53:59 +02:00
Bridget Kromhout
4a655db1ba Merge pull request #362 from jpetazzo/kubectl-run-deprecation
Add explanation about the kubectl run deprecation warning
2018-09-28 21:34:11 -05:00
Bridget Kromhout
2a80586504 Merge pull request #361 from jpetazzo/kubens-and-kubectx
Add a couple of slides about kubens and kubectx
2018-09-28 21:34:03 -05:00
Bridget Kromhout
0a942118c1 Update kubectlrun.md
slight wording change
2018-09-28 21:32:23 -05:00
Jerome Petazzoni
2f1ad67fb3 Add explanation about the kubectl run deprecation warning 2018-09-28 20:54:11 -05:00
Jerome Petazzoni
4b0ac6d0e3 Add a couple of slides about kubens and kubectx 2018-09-28 19:49:08 -05:00
Jerome Petazzoni
ac273da46c Merge branch 'master' of github.com:jpetazzo/container.training 2018-09-28 19:35:41 -05:00
Jerome Petazzoni
7a6594c96d Update container.training index 2018-09-28 19:35:35 -05:00
Bret Fisher
657b7465c6 updating bridge network diags 2018-09-29 02:18:03 +02:00
Bret Fisher
08059a845f remove compose teaser 2018-09-29 02:16:52 +02:00
Jerome Petazzoni
24e2042c9d Explain why revocation is important 2018-09-28 19:14:07 -05:00
Jerome Petazzoni
9771f054ea Add slide about lack of cert revocation 2018-09-28 19:04:57 -05:00
Jerome Petazzoni
5db4e2adfa Merge branch 'master' of github.com:jpetazzo/container.training 2018-09-28 18:49:00 -05:00
Jerome Petazzoni
bde5db49a7 Bump a few more k8s version numbers from 1.11 to 1.12 2018-09-28 18:48:52 -05:00
Jerome Petazzoni
7c6b2730f5 Bump up EBS size to 20G for Portworx 2018-09-29 01:39:07 +02:00
Jerome Petazzoni
7f6a15fbb7 Actually modify the prompt 2018-09-29 01:39:07 +02:00
Bridget Kromhout
d97b1e5944 Slight modifications to current docs/scripts 2018-09-29 01:39:07 +02:00
Jerome Petazzoni
1519196c95 Add kubectx, kubens, kube_ps1
kubectx and kubens are added as kctx and kns (to avoid clashing with
completion for kubectl). Their completion is added too (so you can
do 'kns kube-sy[TAB]' to switch to kube-system).

kube_ps1 is added and enabled. The default prompt for the docker
user now shows the current context and namespace.
2018-09-29 01:39:07 +02:00
Jerome Petazzoni
f8629a2689 Massive refactoring of workshopctl
This makes it possible to manage groups of VMs across multiple
infrastructure providers. It also adds support for creating groups
of VMs on OpenStack.

WARNING: the syntax of workshopctl has changed slightly. Check READMEs
for details.
2018-09-29 01:39:07 +02:00
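The new-style invocation, matching the README changes further down in this diff:

```bash
./workshopctl start --infra infra/aws-us-east-2 \
                    --settings settings/myworkshop.yaml --count 60
./workshopctl deploy TAG
./workshopctl cards TAG
```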
Jerome Petazzoni
fadecd52ee Replace registry:2 with registry
registry used to be registry v1, but now it defaults to v2.
We can therefore drop the tag.
2018-09-28 18:36:29 -05:00
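In practice the examples can now reference the image without a tag; with plain Docker that would be, e.g.:

```bash
docker run -d -p 5000:5000 registry
```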
Jerome Petazzoni
524d6e4fc1 Minor updates to load balancing example 2018-09-28 18:31:39 -05:00
Bridget Kromhout
51f5f5393c Merge pull request #356 from bridgetkromhout/link-update
Updating links
2018-09-28 16:49:41 -05:00
Bridget Kromhout
f574afa9d2 Updating links 2018-09-28 16:46:10 -05:00
Bridget Kromhout
4f49015a6e Link to experimental multi-master 2018-09-28 23:42:55 +02:00
Bridget Kromhout
f25d12b53d Merge pull request #354 from bridgetkromhout/versions-update
Updating versions
2018-09-28 16:29:00 -05:00
Bridget Kromhout
78259c3eb6 Clarifying version 2018-09-28 16:28:20 -05:00
Bridget Kromhout
adc922e4cd Updating versions 2018-09-28 16:25:38 -05:00
Bridget Kromhout
f68194227c Update whatsnext.md
Typo fix, and clarity, since the workshop is not always delivered by only one person.
2018-09-28 23:16:24 +02:00
Jerome Petazzoni
29a3ce0ba2 Update last chapter (what's next) 2018-09-28 23:16:24 +02:00
Bridget Kromhout
e5fe27dd54 Merge pull request #352 from jpetazzo/remove-netpol-slides-from-ns
Remove network policies blurb from namespaces chapter
2018-09-28 15:17:51 -05:00
Jerome Petazzoni
6016ffe7d7 Add hidden link to pre-game video 2018-09-28 13:43:21 -05:00
Jerome Petazzoni
7c94a6f689 Remove network policies blurb from namespaces chapter
There is now a dedicated chapter about network policies, so
the two very rough slides on that topic should be removed
from the namespaces chapter.
2018-09-28 13:34:26 -05:00
Bridget Kromhout
5953ffe10b Merge pull request #350 from BretFisher/win-detach-note
adding slide about PowerShell detaching
2018-09-28 08:11:20 -05:00
Bridget Kromhout
3016019560 Update Start_And_Attach.md
slight edits for clarity
2018-09-28 08:10:12 -05:00
Bridget Kromhout
0d5da73c74 Merge pull request #339 from jpetazzo/replace-es-with-httpenv
Replace ElasticSearch with jpetazzo/httpenv
2018-09-28 08:05:15 -05:00
Bret Fisher
91c835fcb4 adding slide about PowerShell detaching 2018-09-28 00:20:03 -04:00
Bret Fisher
d01ae0ff39 initial Windows Container pack 2018-09-27 07:13:03 -04:00
Thomas Gerbet
63b85da4f6 Add missing link to storage in Prometheus 2 talk 2018-09-22 12:56:58 +02:00
Maxime Deravet
2406e72210 use https to clone git repo 2018-09-22 12:54:43 +02:00
Jerome Petazzoni
32e1edc2a2 Long slide is long 2018-09-21 09:08:58 +02:00
Jerome Petazzoni
84225e982f Merge branch 'Julien-Eyraud-fix-kaniko-build' 2018-09-19 14:01:24 -05:00
Jerome Petazzoni
e76a06e942 Merge branch 'fix-kaniko-build' of git://github.com/Julien-Eyraud/container.training into Julien-Eyraud-fix-kaniko-build 2018-09-19 14:01:02 -05:00
Nicolas Gavalda
0519682c30 Fix small typo 2018-09-18 18:50:41 +02:00
Jérôme Petazzoni
91f7a81964 Merge branch 'master' into fix-kaniko-build 2018-09-18 18:49:13 +02:00
Nicolas Schwartz
a66fcaf04c Update kaniko-build.yaml
Fix option
2018-09-18 18:48:01 +02:00
Julien Eyraud
9a0649e671 Change postgresql mount path 2018-09-18 17:42:10 +02:00
Julien Eyraud
d23ad0cd8f Fix kaniko-build.yaml to use insecure registry 2018-09-18 16:05:05 +02:00
Jerome Petazzoni
63755c1cd3 Minor fixes 2018-09-16 15:35:23 -05:00
Jerome Petazzoni
149cf79615 Add ENIX cluster files 2018-09-16 12:49:33 -05:00
Jerome Petazzoni
a627128570 Set EFK UID to 0 (fixes #325) 2018-09-16 10:58:10 -05:00
Jerome Petazzoni
91e3078d2e Better error checking + GRO fix 2018-09-16 09:10:14 -05:00
Jerome Petazzoni
31dd943141 Typo 2018-09-16 09:09:08 -05:00
Jerome Petazzoni
3866701475 Fix postgres data volume 2018-09-16 09:08:23 -05:00
Jerome Petazzoni
521f8e9889 More typo fixes courtesy of @abuisine 2018-09-15 11:11:08 -05:00
Jerome Petazzoni
49c3fdd3b2 Minor updates (thanks @abuisine) 2018-09-15 11:03:24 -05:00
Jerome Petazzoni
4bb6a49ee0 Typo fix (thanks @sload) 2018-09-15 10:45:37 -05:00
Jerome Petazzoni
3eaa844c55 Add ENIX logo
Warning: do not merge this branch into your content, otherwise you
will get the ENIX logo in the top right of all your decks
2018-09-08 07:49:38 -05:00
Bret Fisher
cb407e75ab make CI/CD common for all courses 2018-04-25 14:27:32 -05:00
Bret Fisher
27d4612449 a note about ci/cd with docker 2018-04-25 14:26:02 -05:00
Bret Fisher
43ab5f79b6 a note about ci/cd with docker 2018-04-25 14:23:40 -05:00
118 changed files with 2972 additions and 1303 deletions

.gitignore
View File

@@ -1,13 +1,22 @@
*.pyc
*.swp
*~
prepare-vms/ips.txt
prepare-vms/ips.html
prepare-vms/ips.pdf
prepare-vms/settings.yaml
prepare-vms/tags
prepare-vms/infra
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html
slides/past.html
node_modules
### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride
### Windows ###
# Windows thumbnail cache files
Thumbs.db
ehthumbs.db
ehthumbs_vista.db

View File

@@ -5,6 +5,3 @@ RUN gem install thin
ADD hasher.rb /
CMD ["ruby", "hasher.rb"]
EXPOSE 80
HEALTHCHECK \
--interval=1s --timeout=2s --retries=3 --start-period=1s \
CMD curl http://localhost/ || exit 1

View File

@@ -1,3 +1,37 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: consul
labels:
app: consul
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: consul
subjects:
- kind: ServiceAccount
name: consul
namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: consul
labels:
app: consul
---
apiVersion: v1
kind: Service
metadata:
@@ -24,6 +58,7 @@ spec:
labels:
app: consul
spec:
serviceAccountName: consul
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@@ -37,18 +72,11 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: consul
image: "consul:1.2.2"
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: "consul:1.4.0"
args:
- "agent"
- "-bootstrap-expect=3"
- "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
- "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
- "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
- "-retry-join=provider=k8s label_selector=\"app=consul\""
- "-client=0.0.0.0"
- "-data-dir=/consul/data"
- "-server"

View File

@@ -72,6 +72,8 @@ spec:
value: "elastic"
- name: FLUENT_ELASTICSEARCH_PASSWORD
value: "changeme"
- name: FLUENT_UID
value: "0"
resources:
limits:
memory: 200Mi
@@ -130,6 +132,9 @@ spec:
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
env:
- name: ES_JAVA_OPTS
value: "-Xms1g -Xmx1g"
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler

View File

@@ -14,5 +14,5 @@ frontend the-frontend
backend the-backend
server google.com-80 google.com:80 maxconn 32 check
server bing.com-80 bing.com:80 maxconn 32 check
server ibm.fr-80 ibm.fr:80 maxconn 32 check

k8s/just-a-pod.yaml
View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
name: hello
namespace: default
spec:
containers:
- name: hello
image: nginx

View File

@@ -19,7 +19,7 @@ spec:
image: gcr.io/kaniko-project/executor:latest
args:
- "--context=/workspace/dockercoins/rng"
- "--skip-tls-verify"
- "--insecure"
- "--destination=registry:5000/rng-kaniko:latest"
volumeMounts:
- name: workspace

View File

@@ -5,7 +5,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress:
- from:
- podSelector:

View File

@@ -5,6 +5,6 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress: []

View File

@@ -16,7 +16,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: webui
app: webui
ingress:
- from: []

View File

@@ -1,4 +1,4 @@
# SOURCE: https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop0&c=px-workshop&stork=true&lh=true
# SOURCE: https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true
apiVersion: v1
kind: ConfigMap
metadata:
@@ -372,7 +372,7 @@ metadata:
name: portworx
namespace: kube-system
annotations:
portworx.com/install-source: "https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop0&c=px-workshop&stork=true&lh=true"
portworx.com/install-source: "https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true"
spec:
minReadySeconds: 0
updateStrategy:
@@ -402,7 +402,7 @@ spec:
image: portworx/oci-monitor:1.4.2.2
imagePullPolicy: Always
args:
["-c", "px-workshop", "-s", "/dev/loop0", "-b",
["-c", "px-workshop", "-s", "/dev/loop4", "-b",
"-x", "kubernetes"]
env:
- name: "PX_TEMPLATE_VERSION"

View File

@@ -17,7 +17,7 @@ spec:
- name: postgres
image: postgres:10.5
volumeMounts:
- mountPath: /var/lib/postgresql
- mountPath: /var/lib/postgresql/data
name: postgres
volumeClaimTemplates:
- metadata:

View File

@@ -6,7 +6,7 @@ metadata:
creationTimestamp: null
generation: 1
labels:
run: socat
app: socat
name: socat
namespace: kube-system
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
@@ -14,7 +14,7 @@ spec:
replicas: 1
selector:
matchLabels:
run: socat
app: socat
strategy:
rollingUpdate:
maxSurge: 1
@@ -24,7 +24,7 @@ spec:
metadata:
creationTimestamp: null
labels:
run: socat
app: socat
spec:
containers:
- args:
@@ -49,7 +49,7 @@ kind: Service
metadata:
creationTimestamp: null
labels:
run: socat
app: socat
name: socat
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/socat
@@ -60,7 +60,7 @@ spec:
protocol: TCP
targetPort: 80
selector:
run: socat
app: socat
sessionAffinity: None
type: NodePort
status:

View File

@@ -1,4 +1,10 @@
# Trainer tools to create and prepare VMs for Docker workshops on AWS or Azure
# Trainer tools to create and prepare VMs for Docker workshops
These tools can help you to create VMs on:
- Azure
- EC2
- OpenStack
## Prerequisites
@@ -6,6 +12,9 @@
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this
Depending on the infrastructure that you want to use, you also need to install
the Azure CLI, the AWS CLI, or terraform (for OpenStack deployment).
And if you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
@@ -14,20 +23,25 @@ And if you want to generate printable cards:
## General Workflow
- fork/clone repo
- set required environment variables
- create an infrastructure configuration in the `prepare-vms/infra` directory
(using one of the example files in that directory)
- create your own setting file from `settings/example.yaml`
- if necessary, increase allowed open files: `ulimit -Sn 10000`
- run `./workshopctl` commands to create instances, install docker, setup each users environment in node1, other management tasks
- run `./workshopctl cards` command to generate PDF for printing handouts of each users host IP's and login info
- run `./workshopctl start` to create instances
- run `./workshopctl deploy` to install Docker and setup environment
- run `./workshopctl kube` (if you want to install and setup Kubernetes)
- run `./workshopctl cards` (if you want to generate PDF for printing handouts of each users host IP's and login info)
- run `./workshopctl stop` at the end of the workshop to terminate instances
## Clone/Fork the Repo, and Build the Tools Image
The Docker Compose file here is used to build a image with all the dependencies to run the `./workshopctl` commands and optional tools. Each run of the script will check if you have those dependencies locally on your host, and will only use the container if you're [missing a dependency](workshopctl#L5).
$ git clone https://github.com/jpetazzo/orchestration-workshop.git
$ cd orchestration-workshop/prepare-vms
$ git clone https://github.com/jpetazzo/container.training
$ cd container.training/prepare-vms
$ docker-compose build
## Preparing to Run `./workshopctl`
### Required AWS Permissions/Info
@@ -36,27 +50,37 @@ The Docker Compose file here is used to build a image with all the dependencies
- Using a non-default VPC or Security Group isn't supported out of box yet, so you will have to customize `lib/commands.sh` if you want to change that.
- These instances will assign the default VPC Security Group, which does not open any ports from Internet by default. So you'll need to add Inbound rules for `SSH | TCP | 22 | 0.0.0.0/0` and `Custom TCP Rule | TCP | 8000 - 8002 | 0.0.0.0/0`, or run `./workshopctl opensg` which opens up all ports.
### Required Environment Variables
### Create your `infra` file
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `AWS_DEFAULT_REGION`
You need to do this only once. (On AWS, you can create one `infra`
file per region.)
If you're not using AWS, set these to placeholder values:
Make a copy of one of the example files in the `infra` directory.
For instance:
```bash
cp infra/example.aws infra/aws-us-west-2
```
export AWS_ACCESS_KEY_ID="foo"
export AWS_SECRET_ACCESS_KEY="foo"
export AWS_DEFAULT_REGION="foo"
```
Edit your infrastructure file to customize it.
You will probably need to put your cloud provider credentials,
select region...
If you don't have the `aws` CLI installed, you will get a warning that it's a missing dependency. If you're not using AWS you can ignore this.
### Update/copy `settings/example.yaml`
### Create your `settings` file
Then pass `settings/YOUR_WORKSHOP_NAME-settings.yaml` as an argument to `./workshopctl deploy`, `./workshopctl cards`, etc.
Similarly, pick one of the files in `settings` and copy it
to customize it.
./workshopctl cards 2016-09-28-00-33-bret settings/orchestration.yaml
For instance:
```bash
cp settings/example.yaml settings/myworkshop.yaml
```
You're all set!
## `./workshopctl` Usage
@@ -66,7 +90,7 @@ Commands:
ami Show the AMI that will be used for deployment
amis List Ubuntu AMIs in the current region
build Build the Docker image to run this program in a container
cards Generate ready-to-print cards for a batch of VMs
cards Generate ready-to-print cards for a group of VMs
deploy Install Docker on a bunch of running VMs
ec2quotas Check our EC2 quotas (max instances)
help Show available commands
@@ -74,14 +98,14 @@ ids List the instance IDs belonging to a given tag or token
ips List the IP addresses of the VMs for a given tag or token
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all notes are reporting as Ready
list List available batches in the current region
list List available groups in the current region
opensg Open the default security group to ALL ingress traffic
pull_images Pre-pull a bunch of Docker images
retag Apply a new tag to a batch of VMs
start Start a batch of VMs
status List instance status for a given batch
retag Apply a new tag to a group of VMs
start Start a group of VMs
status List instance status for a given group
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
test Run tests (pre-flight checks) on a batch of VMs
test Run tests (pre-flight checks) on a group of VMs
wrap Run this program in a container
```
@@ -95,22 +119,22 @@ wrap Run this program in a container
- During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
- During `deploy` it will create the `docker` user with password `training`, which is printing on the cards for students. This can be configured with the `docker_user_password` property in the settings file.
### Example Steps to Launch a Batch of AWS Instances for a Workshop
### Example Steps to Launch a group of AWS Instances for a Workshop
- Run `./workshopctl start N` Creates `N` EC2 instances
- Run `./workshopctl start --infra infra/aws-us-east-2 --settings settings/myworkshop.yaml --count 60` to create 60 EC2 instances
- Your local SSH key will be synced to instances under `ubuntu` user
- AWS instances will be created and tagged based on date, and IP's stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG settings/somefile.yaml` to run `lib/postprep.py` via parallel-ssh
- Run `./workshopctl deploy TAG` to run `lib/postprep.py` via parallel-ssh
- If it errors or times out, you should be able to rerun
- Requires good connection to run all the parallel SSH connections, up to 100 parallel (ProTip: create dedicated management instance in same AWS region where you run all these utils from)
- Run `./workshopctl pull_images TAG` to pre-pull a bunch of Docker images to the instances
- Run `./workshopctl cards TAG settings/somefile.yaml` generates PDF/HTML files to print and cut and hand out to students
- Run `./workshopctl cards TAG` generates PDF/HTML files to print and cut and hand out to students
- *Have a great workshop*
- Run `./workshopctl stop TAG` to terminate instances.
### Example Steps to Launch Azure Instances
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account
- Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and authenticate with a valid account (`az login`)
- Customize `azuredeploy.parameters.json`
- Required:
- Provide the SSH public key you plan to use for instance configuration
@@ -155,27 +179,16 @@ az group delete --resource-group workshop
### Example Steps to Configure Instances from a non-AWS Source
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to ssh into them.
- Set placeholder values for [AWS environment variable settings](#required-environment-variables).
- Choose a tag. It could be an event name, datestamp, etc. Ensure you have created a directory for your tag: `prepare-vms/tags/<tag>/`
- If you have not already generated a file with the IPs to be configured:
- The file should be named `prepare-vms/tags/<tag>/ips.txt`
- Format is one IP per line, no other info needed.
- Ensure the settings file is as desired (especially the number of nodes): `prepare-vms/settings/kube101.yaml`
- For a tag called `myworkshop`, configure instances: `workshopctl deploy myworkshop settings/kube101.yaml`
- Optionally, configure Kubernetes clusters of the size in the settings: `workshopctl kube myworkshop`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: `workshopctl kubetest myworkshop`
- Generate cards to print and hand out: `workshopctl cards myworkshop settings/kube101.yaml`
- Print the cards file: `prepare-vms/tags/myworkshop/ips.html`
## Other Tools
### Deploying your SSH key to all the machines
- Make sure that you have SSH keys loaded (`ssh-add -l`).
- Source `rc`.
- Run `pcopykey`.
- Copy `infra/example.generic` to `infra/generic`
- Run `./workshopctl start --infra infra/generic --settings settings/...yaml`
- Note the `prepare-vms/tags/TAG/` path that has been auto-created.
- Launch instances via your preferred method. You'll need to get the instance IPs and be able to SSH into them.
- Edit the file `prepare-vms/tags/TAG/ips.txt`, it should list the IP addresses of the VMs (one per line, without any comments or other info)
- Continue deployment of cluster configuration with `./workshopctl deploy TAG`
- Optionally, configure Kubernetes clusters of the size in the settings: workshopctl kube `TAG`
- Optionally, test your Kubernetes clusters. They may take a little time to become ready: workshopctl kubetest `TAG`
- Generate cards to print and hand out: workshopctl cards `TAG`
- Print the cards file: prepare-vms/tags/`TAG`/ips.html
## Even More Details
@@ -188,7 +201,7 @@ To see which local key will be uploaded, run `ssh-add -l | grep RSA`.
#### Instance + tag creation
10 VMs will be started, with an automatically generated tag (timestamp + your username).
The VMs will be started, with an automatically generated tag (timestamp + your username).
Your SSH key will be added to the `authorized_keys` of the ubuntu user.
@@ -196,15 +209,11 @@ Your SSH key will be added to the `authorized_keys` of the ubuntu user.
Following the creation of the VMs, a text file will be created containing a list of their IPs.
This ips.txt file will be created in the $TAG/ directory and a symlink will be placed in the working directory of the script.
If you create new VMs, the symlinked file will be overwritten.
#### Deployment
Instances can be deployed manually using the `deploy` command:
$ ./workshopctl deploy TAG settings/somefile.yaml
$ ./workshopctl deploy TAG
The `postprep.py` file will be copied via parallel-ssh to all of the VMs and executed.
@@ -214,7 +223,7 @@ The `postprep.py` file will be copied via parallel-ssh to all of the VMs and exe
#### Generate cards
$ ./workshopctl cards TAG settings/somefile.yaml
$ ./workshopctl cards TAG
If you want to generate both HTML and PDF cards, install [wkhtmltopdf](https://wkhtmltopdf.org/downloads.html); without that installed, only HTML cards will be generated.
@@ -222,13 +231,11 @@ If you don't have `wkhtmltopdf` installed, you will get a warning that it is a m
#### List tags
$ ./workshopctl list
$ ./workshopctl list infra/some-infra-file
#### List VMs
$ ./workshopctl listall
$ ./workshopctl list TAG
This will print a human-friendly list containing some information about each instance.
$ ./workshopctl tags
#### Stop and destroy VMs

View File

@@ -7,15 +7,6 @@ fi
if id docker; then
sudo userdel -r docker
fi
pip install --user awscli jinja2 pdfkit
sudo apt-get install -y wkhtmltopdf xvfb
tmux new-session \; send-keys "
[ -f ~/.ssh/id_rsa ] || ssh-keygen
eval \$(ssh-agent)
ssh-add
Xvfb :0 &
export DISPLAY=:0
mkdir -p ~/www
sudo docker run -d -p 80:80 -v \$HOME/www:/usr/share/nginx/html nginx
"
sudo apt-get update -q
sudo apt-get install -qy jq python-pip wkhtmltopdf xvfb
pip install --user awscli jinja2 pdfkit pssh

View File

@@ -0,0 +1,6 @@
INFRACLASS=aws
# If you are using AWS to deploy, copy this file (e.g. to "aws", or "us-east-1")
# and customize the variables below.
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=AKI...
export AWS_SECRET_ACCESS_KEY=...

View File

@@ -0,0 +1,2 @@
INFRACLASS=generic
# This is for manual provisioning. No other variable or configuration is needed.

View File

@@ -0,0 +1,9 @@
INFRACLASS=openstack
# If you are using OpenStack, copy this file (e.g. to "openstack" or "enix")
# and customize the variables below.
export TF_VAR_user="jpetazzo"
export TF_VAR_tenant="training"
export TF_VAR_domain="Default"
export TF_VAR_password="..."
export TF_VAR_auth_url="https://api.r1.nxs.enix.io/v3"
export TF_VAR_flavor="GP1.S"

View File

@@ -1,105 +0,0 @@
aws_display_tags() {
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| uniq -c \
| sort -k 3 \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')
aws ec2 describe-instance-status \
--instance-ids $IDS \
--query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
--output table
}
aws_display_instances_by_tag() {
TAG=$1
need_tag $TAG
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "$result"
fi
}
aws_get_instance_ids_by_filter() {
FILTER=$1
aws ec2 describe-instances --filters $FILTER \
--query Reservations[*].Instances[*].InstanceId \
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
need_tag $TOKEN
aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}
aws_get_instance_ids_by_tag() {
TAG=$1
need_tag $TAG
aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}
aws_get_instance_ips_by_tag() {
TAG=$1
need_tag $TAG
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
TAG=$1
need_tag $TAG
IDS=$(aws_get_instance_ids_by_tag $TAG)
if [ -z "$IDS" ]; then
die "Invalid tag."
fi
info "Deleting instances with tag $TAG."
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
info "Deleted instances with tag $TAG."
}
aws_tag_instances() {
OLD_TAG_OR_TOKEN=$1
NEW_TAG=$2
IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}

View File

@@ -50,27 +50,41 @@ sep() {
fi
}
need_tag() {
need_infra() {
if [ -z "$1" ]; then
die "Please specify infrastructure file. (e.g.: infra/aws)"
fi
if [ "$1" = "--infra" ]; then
die "The infrastructure file should be passed directly to this command. Remove '--infra' and try again."
fi
if [ ! -f "$1" ]; then
die "Infrastructure file $1 doesn't exist."
fi
. "$1"
. "lib/infra/$INFRACLASS.sh"
}
need_tag() {
if [ -z "$TAG" ]; then
die "Please specify a tag or token. To see available tags and tokens, run: $0 list"
fi
if [ ! -d "tags/$TAG" ]; then
die "Tag $TAG not found (directory tags/$TAG does not exist)."
fi
for FILE in settings.yaml ips.txt infra.sh; do
if [ ! -f "tags/$TAG/$FILE" ]; then
warning "File tags/$TAG/$FILE not found."
fi
done
. "tags/$TAG/infra.sh"
. "lib/infra/$INFRACLASS.sh"
}
need_settings() {
if [ -z "$1" ]; then
die "Please specify a settings file."
elif [ ! -f "$1" ]; then
die "Please specify a settings file. (e.g.: settings/kube101.yaml)"
fi
if [ ! -f "$1" ]; then
die "Settings file $1 doesn't exist."
fi
}
need_ips_file() {
IPS_FILE=$1
if [ -z "$IPS_FILE" ]; then
die "IPS_FILE not set."
fi
if [ ! -s "$IPS_FILE" ]; then
die "IPS_FILE $IPS_FILE not found. Please run: $0 ips <TAG>"
fi
}

View File

@@ -7,21 +7,11 @@ _cmd() {
_cmd help "Show available commands"
_cmd_help() {
printf "$(basename $0) - the orchestration workshop swiss army knife\n"
printf "$(basename $0) - the container training swiss army knife\n"
printf "Commands:"
printf "%s" "$HELP" | sort
}
_cmd amis "List Ubuntu AMIs in the current region"
_cmd_amis() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION "$@"
}
_cmd ami "Show the AMI that will be used for deployment"
_cmd_ami() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
}
_cmd build "Build the Docker image to run this program in a container"
_cmd_build() {
docker-compose build
@@ -32,64 +22,53 @@ _cmd_wrap() {
docker-compose run --rm workshopctl "$@"
}
_cmd cards "Generate ready-to-print cards for a batch of VMs"
_cmd cards "Generate ready-to-print cards for a group of VMs"
_cmd_cards() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
need_tag
# If you're not using AWS, populate the ips.txt file manually
if [ ! -f tags/$TAG/ips.txt ]; then
aws_get_instance_ips_by_tag $TAG >tags/$TAG/ips.txt
fi
# Remove symlinks to old cards
rm -f ips.html ips.pdf
# This will generate two files in the base dir: ips.pdf and ips.html
lib/ips-txt-to-html.py $SETTINGS
for f in ips.html ips.pdf; do
# Remove old versions of cards if they exist
rm -f tags/$TAG/$f
# Move the generated file and replace it with a symlink
mv -f $f tags/$TAG/$f && ln -s tags/$TAG/$f $f
done
# This will process ips.txt to generate two files: ips.pdf and ips.html
(
cd tags/$TAG
../../lib/ips-txt-to-html.py settings.yaml
)
info "Cards created. You can view them with:"
info "xdg-open ips.html ips.pdf (on Linux)"
info "open ips.html ips.pdf (on MacOS)"
info "xdg-open tags/$TAG/ips.html tags/$TAG/ips.pdf (on Linux)"
info "open tags/$TAG/ips.html (on macOS)"
}
_cmd deploy "Install Docker on a bunch of running VMs"
_cmd_deploy() {
TAG=$1
SETTINGS=$2
need_tag $TAG
need_settings $SETTINGS
link_tag $TAG
count=$(wc -l ips.txt)
need_tag
# wait until all hosts are reachable before trying to deploy
info "Trying to reach $TAG instances..."
while ! tag_is_reachable $TAG; do
while ! tag_is_reachable; do
>/dev/stderr echo -n "."
sleep 2
done
>/dev/stderr echo ""
echo deploying > tags/$TAG/status
sep "Deploying tag $TAG"
pssh -I tee /tmp/settings.yaml <$SETTINGS
# Wait for cloudinit to be done
pssh "
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done"
# Copy settings and install Python YAML parser
pssh -I tee /tmp/settings.yaml <tags/$TAG/settings.yaml
pssh "
sudo apt-get update &&
sudo apt-get install -y python-setuptools &&
sudo easy_install pyyaml"
sudo apt-get install -y python-yaml"
# Copy postprep.py to the remote machines, and execute it, feeding it the list of IP addresses
pssh -I tee /tmp/postprep.py <lib/postprep.py
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <ips.txt
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <tags/$TAG/ips.txt
# Install docker-prompt script
pssh -I sudo tee /usr/local/bin/docker-prompt <lib/docker-prompt
@@ -117,14 +96,17 @@ _cmd_deploy() {
fi"
sep "Deployed tag $TAG"
echo deployed > tags/$TAG/status
info "You may want to run one of the following commands:"
info "$0 kube $TAG"
info "$0 pull_images $TAG"
info "$0 cards $TAG $SETTINGS"
info "$0 cards $TAG"
}
_cmd kube "Setup kubernetes clusters with kubeadm (must be run AFTER deploy)"
_cmd_kube() {
TAG=$1
need_tag
# Install packages
pssh --timeout 200 "
@@ -134,13 +116,13 @@ _cmd_kube() {
sudo tee /etc/apt/sources.list.d/kubernetes.list"
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet kubeadm kubectl
sudo apt-get install -qy kubelet kubeadm kubectl &&
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
# Initialize kube master
pssh --timeout 200 "
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token
kubeadm token generate > /tmp/token &&
sudo kubeadm init --token \$(cat /tmp/token)
fi"
@@ -157,38 +139,66 @@ _cmd_kube() {
# Install weave as the pod network
pssh "
if grep -q node1 /tmp/node; then
kubever=\$(kubectl version | base64 | tr -d '\n')
kubever=\$(kubectl version | base64 | tr -d '\n') &&
kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=\$kubever
fi"
# Join the other nodes to the cluster
pssh --timeout 200 "
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token)
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token) &&
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
fi"
# Install kubectx and kubens
pssh "
[ -d kubectx ] || git clone https://github.com/ahmetb/kubectx &&
sudo ln -sf /home/ubuntu/kubectx/kubectx /usr/local/bin/kctx &&
sudo ln -sf /home/ubuntu/kubectx/kubens /usr/local/bin/kns &&
sudo cp /home/ubuntu/kubectx/completion/*.bash /etc/bash_completion.d &&
[ -d kube-ps1 ] || git clone https://github.com/jonmosco/kube-ps1 &&
sudo -u docker sed -i s/docker-prompt/kube_ps1/ /home/docker/.bashrc &&
sudo -u docker tee -a /home/docker/.bashrc <<EOF
. /home/ubuntu/kube-ps1/kube-ps1.sh
KUBE_PS1_PREFIX=""
KUBE_PS1_SUFFIX=""
KUBE_PS1_SYMBOL_ENABLE="false"
KUBE_PS1_CTX_COLOR="green"
KUBE_PS1_NS_COLOR="green"
EOF"
# Install stern
pssh "
if [ ! -x /usr/local/bin/stern ]; then
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
##VERSION##
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.10.0/stern_linux_amd64 &&
sudo chmod +x /usr/local/bin/stern &&
stern --completion bash | sudo tee /etc/bash_completion.d/stern
fi"
# Install helm
pssh "
if [ ! -x /usr/local/bin/helm ]; then
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | sudo bash
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | sudo bash &&
helm completion bash | sudo tee /etc/bash_completion.d/helm
fi"
sep "Done"
}
_cmd kubetest "Check that all notes are reporting as Ready"
_cmd kubereset "Wipe out Kubernetes configuration on all nodes"
_cmd_kubereset() {
TAG=$1
need_tag
pssh "sudo kubeadm reset --force"
}
_cmd kubetest "Check that all nodes are reporting as Ready"
_cmd_kubetest() {
TAG=$1
need_tag
# There are way too many backslashes in the command below.
# Feel free to make that better ♥
pssh "
@@ -202,7 +212,7 @@ _cmd_kubetest() {
fi"
}
_cmd ids "List the instance IDs belonging to a given tag or token"
_cmd ids "(FIXME) List the instance IDs belonging to a given tag or token"
_cmd_ids() {
TAG=$1
need_tag $TAG
@@ -215,262 +225,264 @@ _cmd_ids() {
aws_get_instance_ids_by_client_token $TAG
}
_cmd ips "List the IP addresses of the VMs for a given tag or token"
_cmd_ips() {
TAG=$1
need_tag $TAG
mkdir -p tags/$TAG
aws_get_instance_ips_by_tag $TAG | tee tags/$TAG/ips.txt
link_tag $TAG
}
_cmd list "List available batches in the current region"
_cmd list "List available groups for a given infrastructure"
_cmd_list() {
info "Listing batches in region $AWS_DEFAULT_REGION:"
aws_display_tags
need_infra $1
infra_list
}
_cmd status "List instance status for a given batch"
_cmd_status() {
info "Using region $AWS_DEFAULT_REGION."
_cmd listall "List VMs running on all configured infrastructures"
_cmd_listall() {
for infra in infra/*; do
case $infra in
infra/example.*)
;;
*)
info "Listing infrastructure $infra:"
need_infra $infra
infra_list
;;
esac
done
}
_cmd netfix "Disable GRO and run a pinger job on the VMs"
_cmd_netfix () {
TAG=$1
need_tag $TAG
describe_tag $TAG
tag_is_reachable $TAG
info "You may be interested in running one of the following commands:"
info "$0 ips $TAG"
info "$0 deploy $TAG <settings/somefile.yaml>"
need_tag
pssh "
sudo ethtool -K ens3 gro off
sudo tee /root/pinger.service <<EOF
[Unit]
Description=pinger
[Install]
WantedBy=multi-user.target
[Service]
WorkingDirectory=/
ExecStart=/bin/ping -w60 1.1
User=nobody
Group=nogroup
Restart=always
EOF
sudo systemctl enable /root/pinger.service
sudo systemctl start pinger"
}
_cmd opensg "Open the default security group to ALL ingress traffic"
_cmd_opensg() {
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
need_infra $1
infra_opensg
}
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
_cmd pssh "Run an arbitrary command on all nodes"
_cmd_pssh() {
TAG=$1
need_tag
shift
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
pssh "$@"
}
_cmd pull_images "Pre-pull a bunch of Docker images"
_cmd_pull_images() {
TAG=$1
need_tag $TAG
pull_tag $TAG
need_tag
pull_tag
}
_cmd retag "Apply a new tag to a batch of VMs"
_cmd quotas "Check our infrastructure quotas (max instances)"
_cmd_quotas() {
need_infra $1
infra_quotas
}
_cmd retag "(FIXME) Apply a new tag to a group of VMs"
_cmd_retag() {
OLDTAG=$1
NEWTAG=$2
need_tag $OLDTAG
TAG=$OLDTAG
need_tag
if [[ -z "$NEWTAG" ]]; then
die "You must specify a new tag to apply."
fi
aws_tag_instances $OLDTAG $NEWTAG
}
_cmd start "Start a batch of VMs"
_cmd start "Start a group of VMs"
_cmd_start() {
# Number of instances to create
COUNT=$1
# Optional settings file (to carry on with deployment)
SETTINGS=$2
while [ ! -z "$*" ]; do
case "$1" in
--infra) INFRA=$2; shift 2;;
--settings) SETTINGS=$2; shift 2;;
--count) COUNT=$2; shift 2;;
--tag) TAG=$2; shift 2;;
*) die "Unrecognized parameter: $1."
esac
done
if [ -z "$INFRA" ]; then
die "Please add --infra flag to specify which infrastructure file to use."
fi
if [ -z "$SETTINGS" ]; then
die "Please add --settings flag to specify which settings file to use."
fi
if [ -z "$COUNT" ]; then
die "Indicate number of instances to start."
COUNT=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
warning "No --count option was specified. Using value from settings file ($COUNT)."
fi
# Check that the specified settings and infrastructure are valid.
need_settings $SETTINGS
need_infra $INFRA
# Print our AWS username, to ease the pain of credential-juggling
greet
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
key_name=$(sync_keys)
AMI=$(_cmd_ami) # Retrieve the AWS image ID
if [ -z "$AMI" ]; then
die "I could not find which AMI to use in this region. Try another region?"
if [ -z "$TAG" ]; then
TAG=$(make_tag)
fi
TOKEN=$(get_token) # generate a timestamp token for this batch of VMs
AWS_KEY_NAME=$(make_key_name)
sep "Starting instances"
info " Count: $COUNT"
info " Region: $AWS_DEFAULT_REGION"
info " Token/tag: $TOKEN"
info " AMI: $AMI"
info " Key name: $AWS_KEY_NAME"
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
--client-token $TOKEN \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
info "Reservation ID: $reservation_id"
sep
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TOKEN)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
TAG=$TOKEN
aws_tag_instances $TOKEN $TAG
wait_until_tag_is_running $TAG $COUNT
mkdir -p tags/$TAG
ln -s ../../$INFRA tags/$TAG/infra.sh
ln -s ../../$SETTINGS tags/$TAG/settings.yaml
echo creating > tags/$TAG/status
infra_start $COUNT
sep
info "Successfully created $COUNT instances with tag $TAG"
sep
echo created > tags/$TAG/status
mkdir -p tags/$TAG
IPS=$(aws_get_instance_ips_by_tag $TAG)
echo "$IPS" >tags/$TAG/ips.txt
link_tag $TAG
if [ -n "$SETTINGS" ]; then
_cmd_deploy $TAG $SETTINGS
else
info "To deploy or kill these instances, run one of the following:"
info "$0 deploy $TAG <settings/somefile.yaml>"
info "$0 stop $TAG"
fi
}
_cmd ec2quotas "Check our EC2 quotas (max instances)"
_cmd_ec2quotas() {
greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
info "Available regions:"
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
info "To deploy Docker on these instances, you can run:"
info "$0 deploy $TAG"
info "To terminate these instances, you can run:"
info "$0 stop $TAG"
}
_cmd stop "Stop (terminate, shutdown, kill, remove, destroy...) instances"
_cmd_stop() {
TAG=$1
need_tag
infra_stop
echo stopped > tags/$TAG/status
}
_cmd test "Run tests (pre-flight checks) on a batch of VMs"
_cmd tags "List groups of VMs known locally"
_cmd_tags() {
(
cd tags
echo "[#] [Status] [Tag] [Infra]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
for tag in *; do
if [ -f $tag/ips.txt ]; then
count="$(wc -l < $tag/ips.txt)"
else
count="?"
fi
if [ -f $tag/status ]; then
status="$(cat $tag/status)"
else
status="?"
fi
if [ -f $tag/infra.sh ]; then
infra="$(basename $(readlink $tag/infra.sh))"
else
infra="?"
fi
echo "$count $status $tag $infra" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
done
)
}
_cmd test "Run tests (pre-flight checks) on a group of VMs"
_cmd_test() {
TAG=$1
need_tag
test_tag
}
###
_cmd helmprom "Install Helm and Prometheus"
_cmd_helmprom() {
TAG=$1
need_tag
pssh "
if grep -q node1 /tmp/node; then
kubectl -n kube-system get serviceaccount helm ||
kubectl -n kube-system create serviceaccount helm
helm init --service-account helm
kubectl get clusterrolebinding helm-can-do-everything ||
kubectl create clusterrolebinding helm-can-do-everything \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:helm
helm upgrade --install prometheus stable/prometheus \
--namespace kube-system \
--set server.service.type=NodePort \
--set server.service.nodePort=30090 \
--set server.persistentVolume.enabled=false \
--set alertmanager.enabled=false
fi"
}
# Sometimes, weave fails to come up on some nodes.
# Symptom: the pods on a node are unreachable (they don't even ping).
# Remedy: wipe out Weave state and delete weave pod on that node.
# Specifically, identify the weave pod that is defective, then:
# kubectl -n kube-system exec weave-net-XXXXX -c weave rm /weavedb/weave-netdata.db
# kubectl -n kube-system delete pod weave-net-XXXXX
_cmd weavetest "Check that weave seems properly setup"
_cmd_weavetest() {
TAG=$1
need_tag
pssh "
kubectl -n kube-system get pods -o name | grep weave | cut -d/ -f2 |
xargs -I POD kubectl -n kube-system exec POD -c weave -- \
sh -c \"./weave --local status | grep Connections | grep -q ' 1 failed' || ! echo POD \""
}
greet() {
IAMUSER=$(aws iam get-user --query 'User.UserName')
info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
}
link_tag() {
TAG=$1
need_tag $TAG
IPS_FILE=tags/$TAG/ips.txt
need_ips_file $IPS_FILE
ln -sf $IPS_FILE ips.txt
}
pull_tag() {
TAG=$1
need_tag $TAG
link_tag $TAG
if [ ! -s $IPS_FILE ]; then
die "Nonexistent or empty IPs file $IPS_FILE."
fi
# Pre-pull a bunch of images
pssh --timeout 900 'for I in \
debian:latest \
ubuntu:latest \
fedora:latest \
centos:latest \
elasticsearch:2 \
postgres \
redis \
alpine \
registry \
nicolaka/netshoot \
jpetazzo/trainingwheels \
golang \
training/namer \
dockercoins/hasher \
dockercoins/rng \
dockercoins/webui \
dockercoins/worker \
logstash \
prom/node-exporter \
google/cadvisor \
dockersamples/visualizer \
nathanleclaire/redisonrails; do
sudo -u docker docker pull $I
done'
info "Finished pulling images for $TAG."
info "You may now want to run:"
info "$0 cards $TAG <settings/somefile.yaml>"
}
wait_until_tag_is_running() {
max_retry=50
TAG=$1
COUNT=$2
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
let "i += 1"
info "$(printf "%d/%d instances online" $done_count $COUNT)"
done_count=$(aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
"Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].State.Name" \
| tr "\t" "\n" \
| wc -l)
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
tag_is_reachable() {
TAG=$1
need_tag $TAG
link_tag $TAG
pssh -t 5 true 2>&1 >/dev/null
}
test_tag() {
TAG=$1
ips_file=tags/$TAG/ips.txt
info "Picking a random IP address in $ips_file to run tests."
ip=$(shuf -n1 $ips_file)
test_vm $ip
info "Tests complete."
}
@@ -546,17 +558,9 @@ sync_keys() {
fi
}
make_tag() {
if [ -z $USER ]; then
export USER=anonymous
fi
date +%Y-%m-%d-%H-%M-$USER
}
describe_tag() {
# Display instance details and reachability/status information
TAG=$1
need_tag $TAG
aws_display_instances_by_tag $TAG
aws_display_instance_statuses_by_tag $TAG
}

prepare-vms/lib/infra.sh Normal file
@@ -0,0 +1,26 @@
# Default stub functions for infrastructure libraries.
# When loading an infrastructure library, these functions will be overridden.
infra_list() {
warning "infra_list is unsupported on $INFRACLASS."
}
infra_quotas() {
warning "infra_quotas is unsupported on $INFRACLASS."
}
infra_start() {
warning "infra_start is unsupported on $INFRACLASS."
}
infra_stop() {
warning "infra_stop is unsupported on $INFRACLASS."
}
infra_opensg() {
warning "infra_opensg is unsupported on $INFRACLASS."
}

@@ -0,0 +1,206 @@
infra_list() {
aws_display_tags
}
infra_quotas() {
greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
--query 'AccountAttributes[*][AttributeValues]')
info "In the current region ($AWS_DEFAULT_REGION) you can deploy up to $max_instances instances."
# Print list of AWS EC2 regions, highlighting ours ($AWS_DEFAULT_REGION) in the list
# If our $AWS_DEFAULT_REGION is not valid, the error message will be pretty descriptive:
# Could not connect to the endpoint URL: "https://ec2.foo.amazonaws.com/"
info "Available regions:"
aws ec2 describe-regions | awk '{print $3}' | grep --color=auto $AWS_DEFAULT_REGION -C50
}
infra_start() {
COUNT=$1
# Print our AWS username, to ease the pain of credential-juggling
greet
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
key_name=$(sync_keys)
AMI=$(aws_get_ami) # Retrieve the AWS image ID
if [ -z "$AMI" ]; then
die "I could not find which AMI to use in this region. Try another region?"
fi
AWS_KEY_NAME=$(make_key_name)
sep "Starting instances"
info " Count: $COUNT"
info " Region: $AWS_DEFAULT_REGION"
info " Token/tag: $TAG"
info " AMI: $AMI"
info " Key name: $AWS_KEY_NAME"
result=$(aws ec2 run-instances \
--key-name $AWS_KEY_NAME \
--count $COUNT \
--instance-type ${AWS_INSTANCE_TYPE-t2.medium} \
--client-token $TAG \
--block-device-mapping 'DeviceName=/dev/sda1,Ebs={VolumeSize=20}' \
--image-id $AMI)
reservation_id=$(echo "$result" | head -1 | awk '{print $2}')
info "Reservation ID: $reservation_id"
sep
# if instance creation succeeded, we should have some IDs
IDS=$(aws_get_instance_ids_by_client_token $TAG)
if [ -z "$IDS" ]; then
die "Instance creation failed."
fi
# Tag these new instances with a tag that is the same as the token
aws_tag_instances $TAG $TAG
# Wait until EC2 API tells us that the instances are running
wait_until_tag_is_running $TAG $COUNT
aws_get_instance_ips_by_tag $TAG > tags/$TAG/ips.txt
}
infra_stop() {
aws_kill_instances_by_tag
}
infra_opensg() {
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol icmp \
--port -1 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol udp \
--port 0-65535 \
--cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
--group-name default \
--protocol tcp \
--port 0-65535 \
--cidr 0.0.0.0/0
}
wait_until_tag_is_running() {
max_retry=50
i=0
done_count=0
while [[ $done_count -lt $COUNT ]]; do
let "i += 1"
info "$(printf "%d/%d instances online" $done_count $COUNT)"
done_count=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
"Name=instance-state-name,Values=running" \
--query "length(Reservations[].Instances[])")
if [[ $i -gt $max_retry ]]; then
die "Timed out while waiting for instance creation (after $max_retry retries)"
fi
sleep 1
done
}
aws_display_tags() {
# Print all "Name" tags in our region with their instance count
echo "[#] [Status] [Token] [Tag]" \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
aws ec2 describe-instances \
--query "Reservations[*].Instances[*].[State.Name,ClientToken,Tags[0].Value]" \
| tr -d "\r" \
| uniq -c \
| sort -k 3 \
| awk '{ printf "%-7s %-12s %-25s %-25s\n", $1, $2, $3, $4}'
}
aws_get_tokens() {
aws ec2 describe-instances --output text \
--query 'Reservations[*].Instances[*].[ClientToken]' \
| sort -u
}
aws_display_instance_statuses_by_tag() {
IDS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].InstanceId" | tr '\t' ' ')
aws ec2 describe-instance-status \
--instance-ids $IDS \
--query "InstanceStatuses[*].{ID:InstanceId,InstanceState:InstanceState.Name,InstanceStatus:InstanceStatus.Status,SystemStatus:SystemStatus.Status,Reachability:InstanceStatus.Status}" \
--output table
}
aws_display_instances_by_tag() {
result=$(aws ec2 describe-instances --output table \
--filter "Name=tag:Name,Values=$TAG" \
--query "Reservations[*].Instances[*].[ \
InstanceId, \
State.Name, \
Tags[0].Value, \
PublicIpAddress, \
InstanceType \
]"
)
if [[ -z $result ]]; then
die "No instances found with tag $TAG in region $AWS_DEFAULT_REGION."
else
echo "$result"
fi
}
aws_get_instance_ids_by_filter() {
FILTER=$1
aws ec2 describe-instances --filters $FILTER \
--query Reservations[*].Instances[*].InstanceId \
--output text | tr "\t" "\n" | tr -d "\r"
}
aws_get_instance_ids_by_client_token() {
TOKEN=$1
aws_get_instance_ids_by_filter Name=client-token,Values=$TOKEN
}
aws_get_instance_ids_by_tag() {
aws_get_instance_ids_by_filter Name=tag:Name,Values=$TAG
}
aws_get_instance_ips_by_tag() {
aws ec2 describe-instances --filter "Name=tag:Name,Values=$TAG" \
--output text \
--query "Reservations[*].Instances[*].PublicIpAddress" \
| tr "\t" "\n" \
| sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 # sort IPs
}
aws_kill_instances_by_tag() {
IDS=$(aws_get_instance_ids_by_tag $TAG)
if [ -z "$IDS" ]; then
die "Invalid tag."
fi
info "Deleting instances with tag $TAG."
aws ec2 terminate-instances --instance-ids $IDS \
| grep ^TERMINATINGINSTANCES
info "Deleted instances with tag $TAG."
}
aws_tag_instances() {
OLD_TAG_OR_TOKEN=$1
NEW_TAG=$2
IDS=$(aws_get_instance_ids_by_client_token $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
IDS=$(aws_get_instance_ids_by_tag $OLD_TAG_OR_TOKEN)
[[ -n "$IDS" ]] && aws ec2 create-tags --tag Key=Name,Value=$NEW_TAG --resources $IDS >/dev/null
}
aws_get_ami() {
##VERSION##
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 18.04 -t hvm:ebs -N -q
}

@@ -0,0 +1,8 @@
infra_start() {
COUNT=$1
info "You should now run your provisioning commands for $COUNT machines."
info "Note: no machines have been automatically created!"
info "Once done, put the list of IP addresses in tags/$TAG/ips.txt"
info "(one IP address per line, without any comments or extra lines)."
touch tags/$TAG/ips.txt
}
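# For reference, tags/$TAG/ips.txt should end up looking like this
# (the addresses below are placeholders):
#   198.51.100.11
#   198.51.100.12
#   198.51.100.13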

@@ -0,0 +1,20 @@
infra_start() {
COUNT=$1
cp terraform/*.tf tags/$TAG
(
cd tags/$TAG
terraform init
echo prefix = \"$TAG\" >> terraform.tfvars
echo count = \"$COUNT\" >> terraform.tfvars
terraform apply -auto-approve
terraform output ip_addresses > ips.txt
)
}
infra_stop() {
(
cd tags/$TAG
terraform destroy -auto-approve
)
}

@@ -31,7 +31,13 @@ while ips:
clusters.append(cluster)
template_file_name = SETTINGS["cards_template"]
template = jinja2.Template(open(template_file_name).read())
template_file_path = os.path.join(
os.path.dirname(__file__),
"..",
"templates",
template_file_name
)
template = jinja2.Template(open(template_file_path).read())
with open("ips.html", "w") as f:
f.write(template.render(clusters=clusters, **SETTINGS))
print("Generated ips.html")

@@ -83,7 +83,7 @@ system("sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /e
system("sudo service ssh restart")
system("sudo apt-get -q update")
system("sudo apt-get -qy install git jq python-pip")
system("sudo apt-get -qy install git jq")
#######################
### DOCKER INSTALLS ###
@@ -98,7 +98,6 @@ system("sudo apt-get -q update")
system("sudo apt-get -qy install docker-ce")
### Install docker-compose
#system("sudo pip install -U docker-compose=={}".format(COMPOSE_VERSION))
system("sudo curl -sSL -o /usr/local/bin/docker-compose https://github.com/docker/compose/releases/download/{}/docker-compose-{}-{}".format(COMPOSE_VERSION, platform.system(), platform.machine()))
system("sudo chmod +x /usr/local/bin/docker-compose")
system("docker-compose version")

@@ -1,12 +1,17 @@
# This file can be sourced in order to directly run commands on
# a group of VMs whose IPs are located in ips.txt of the directory in which
# the command is run.
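# Usage sketch (the tag value below is a placeholder):
#   export TAG=2018-12-16-05-46-jp
#   pssh "hostname"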
pssh() {
HOSTFILE="ips.txt"
if [ -z "$TAG" ]; then
>/dev/stderr echo "Variable \$TAG is not set."
return
fi
HOSTFILE="tags/$TAG/ips.txt"
[ -f $HOSTFILE ] || {
>/dev/stderr echo "No hostfile found at $HOSTFILE"
>/dev/stderr echo "Hostfile $HOSTFILE not found."
return
}

@@ -0,0 +1,25 @@
# Number of VMs per cluster
clustersize: 5
# Jinja2 template to use to generate ready-to-cut cards
cards_template: enix.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: A4
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.22.0
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training

@@ -0,0 +1,25 @@
# Number of VMs per cluster
clustersize: 4
# Jinja2 template to use to generate ready-to-cut cards
cards_template: jerome.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# Feel free to reduce this if your printer can handle it
paper_margin: 0.2in
# Note: paper_size and paper_margin only apply to PDF generated with pdfkit.
# If you print (or generate a PDF) using ips.html, they will be ignored.
# (The equivalent parameters must be set from the browser's print dialog.)
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training

@@ -4,7 +4,7 @@
clustersize: 3
# Jinja2 template to use to generate ready-to-cut cards
cards_template: settings/kube101.html
cards_template: kube101.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter

@@ -20,8 +20,8 @@ paper_margin: 0.2in
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.21.1
machine_version: 0.14.0
compose_version: 1.22.0
machine_version: 0.15.0
# Password used to connect with the "docker user"
docker_user_password: training

@@ -0,0 +1,117 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://septembre2018.container.training" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
body, table {
margin: 0;
padding: 0;
line-height: 1em;
font-size: 14px;
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 30%;
padding-left: 1.5%;
padding-right: 1.5%;
}
p {
margin: 0.4em 0 0.4em 0;
}
img {
height: 4em;
float: right;
margin-right: -0.3em;
}
img.enix {
height: 4.5em;
margin-top: 0.2em;
}
img.kube {
height: 4.2em;
margin-top: 1.7em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
{% if loop.index0>0 and loop.index0%pagesize==0 %}
<span class="pagebreak"></span>
{% endif %}
<div>
<p>
Voici les informations permettant de se connecter à votre
cluster pour cette formation. Vous pouvez vous connecter
à ces machines virtuelles avec n'importe quel client SSH.
</p>
<p>
<img class="enix" src="https://enix.io/static/img/logos/logo-domain-cropped.png" />
<table>
<tr><td>identifiant:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>mot de passe:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
Vos serveurs sont :
<img class="kube" src="{{ image_src }}" />
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>Le support de formation est à l'adresse suivante :
<center>{{ url }}</center>
</p>
</div>
{% endfor %}
</body>
</html>

@@ -0,0 +1,131 @@
{# Feel free to customize or override anything in there! #}
{%- set url = "http://qconsf2018.container.training/" -%}
{%- set pagesize = 9 -%}
{%- if clustersize == 1 -%}
{%- set workshop_name = "Docker workshop" -%}
{%- set cluster_or_machine = "machine" -%}
{%- set this_or_each = "this" -%}
{%- set machine_is_or_machines_are = "machine is" -%}
{%- set image_src = "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png" -%}
{%- else -%}
{%- set workshop_name = "Kubernetes workshop" -%}
{%- set cluster_or_machine = "cluster" -%}
{%- set this_or_each = "each" -%}
{%- set machine_is_or_machines_are = "machines are" -%}
{%- set image_src_swarm = "https://cdn.wp.nginx.com/wp-content/uploads/2016/07/docker-swarm-hero2.png" -%}
{%- set image_src_kube = "https://avatars1.githubusercontent.com/u/13629408" -%}
{%- set image_src = image_src_kube -%}
{%- endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><style>
@import url('https://fonts.googleapis.com/css?family=Slabo+27px');
body, table {
margin: 0;
padding: 0;
line-height: 1.0em;
font-size: 15px;
font-family: 'Slabo 27px';
}
table {
border-spacing: 0;
margin-top: 0.4em;
margin-bottom: 0.4em;
border-left: 0.8em double grey;
padding-left: 0.4em;
}
div {
float: left;
border: 1px dotted black;
height: 31%;
padding-top: 1%;
padding-bottom: 1%;
/* columns * (width+left+right) < 100% */
width: 30%;
padding-left: 1.5%;
padding-right: 1.5%;
}
div.back {
border: 1px dotted white;
}
div.back p {
margin: 0.5em 1em 0 1em;
}
p {
margin: 0.4em 0 0.8em 0;
}
img {
height: 5em;
float: right;
margin-right: 1em;
}
.logpass {
font-family: monospace;
font-weight: bold;
}
.pagebreak {
page-break-after: always;
clear: both;
display: block;
height: 8px;
}
</style></head>
<body>
{% for cluster in clusters %}
<div>
<p>
Here is the connection information to your very own
{{ cluster_or_machine }} for this {{ workshop_name }}.
You can connect to {{ this_or_each }} VM with any SSH client.
</p>
<p>
<img src="{{ image_src }}" />
<table>
<tr><td>login:</td></tr>
<tr><td class="logpass">docker</td></tr>
<tr><td>password:</td></tr>
<tr><td class="logpass">{{ docker_user_password }}</td></tr>
</table>
</p>
<p>
Your {{ machine_is_or_machines_are }}:
<table>
{% for node in cluster %}
<tr><td>node{{ loop.index }}:</td><td>{{ node }}</td></tr>
{% endfor %}
</table>
</p>
<p>You can find the slides at:
<center>{{ url }}</center>
</p>
</div>
{% if loop.index%pagesize==0 or loop.last %}
<span class="pagebreak"></span>
{% for x in range(pagesize) %}
<div class="back">
<br/>
<p>You got this card at the workshop "Getting Started With Kubernetes and Container Orchestration"
during QCON San Francisco (November 2018).</p>
<p>That workshop was a 1-day version of a longer curriculum.</p>
<p>If you liked that workshop, the instructor (Jérôme Petazzoni) can deliver it
(or the longer version) to your team or organization.</p>
<p>You can reach him at:</p>
<p>jerome.petazzoni@gmail.com</p>
<p>Thank you!</p>
</div>
{% endfor %}
<span class="pagebreak"></span>
{% endif %}
{% endfor %}
</body>
</html>

@@ -0,0 +1,5 @@
resource "openstack_compute_keypair_v2" "ssh_deploy_key" {
name = "${var.prefix}"
public_key = "${file("~/.ssh/id_rsa.pub")}"
}

@@ -0,0 +1,32 @@
resource "openstack_compute_instance_v2" "machine" {
count = "${var.count}"
name = "${format("%s-%04d", "${var.prefix}", count.index+1)}"
image_name = "Ubuntu 16.04.5 (Xenial Xerus)"
flavor_name = "${var.flavor}"
security_groups = ["${openstack_networking_secgroup_v2.full_access.name}"]
key_pair = "${openstack_compute_keypair_v2.ssh_deploy_key.name}"
network {
name = "${openstack_networking_network_v2.internal.name}"
fixed_ip_v4 = "${cidrhost("${openstack_networking_subnet_v2.internal.cidr}", count.index+10)}"
}
}
resource "openstack_compute_floatingip_v2" "machine" {
count = "${var.count}"
# This is something provided to us by Enix when our tenant was provisioned.
pool = "Public Floating"
}
resource "openstack_compute_floatingip_associate_v2" "machine" {
count = "${var.count}"
floating_ip = "${openstack_compute_floatingip_v2.machine.*.address[count.index]}"
instance_id = "${openstack_compute_instance_v2.machine.*.id[count.index]}"
fixed_ip = "${cidrhost("${openstack_networking_subnet_v2.internal.cidr}", count.index+10)}"
}
output "ip_addresses" {
value = "${join("\n", openstack_compute_floatingip_v2.machine.*.address)}"
}
variable "flavor" {}

@@ -0,0 +1,23 @@
resource "openstack_networking_network_v2" "internal" {
name = "${var.prefix}"
}
resource "openstack_networking_subnet_v2" "internal" {
name = "${var.prefix}"
network_id = "${openstack_networking_network_v2.internal.id}"
cidr = "10.10.0.0/16"
ip_version = 4
dns_nameservers = ["1.1.1.1"]
}
resource "openstack_networking_router_v2" "router" {
name = "${var.prefix}"
external_network_id = "15f0c299-1f50-42a6-9aff-63ea5b75f3fc"
}
resource "openstack_networking_router_interface_v2" "router_internal" {
router_id = "${openstack_networking_router_v2.router.id}"
subnet_id = "${openstack_networking_subnet_v2.internal.id}"
}

@@ -0,0 +1,13 @@
provider "openstack" {
user_name = "${var.user}"
tenant_name = "${var.tenant}"
domain_name = "${var.domain}"
password = "${var.password}"
auth_url = "${var.auth_url}"
}
variable "user" {}
variable "tenant" {}
variable "domain" {}
variable "password" {}
variable "auth_url" {}

@@ -0,0 +1,12 @@
resource "openstack_networking_secgroup_v2" "full_access" {
name = "${var.prefix} - full access"
}
resource "openstack_networking_secgroup_rule_v2" "full_access" {
direction = "ingress"
ethertype = "IPv4"
protocol = ""
remote_ip_prefix = "0.0.0.0/0"
security_group_id = "${openstack_networking_secgroup_v2.full_access.id}"
}

@@ -0,0 +1,8 @@
variable "prefix" {
type = "string"
}
variable "count" {
type = "string"
}

@@ -1,20 +1,19 @@
#!/bin/bash
# Get the script's real directory, whether we're being called directly or via a symlink
# Get the script's real directory.
# This should work whether we're being called directly or via a symlink.
if [ -L "$0" ]; then
export SCRIPT_DIR=$(dirname $(readlink "$0"))
else
export SCRIPT_DIR=$(dirname "$0")
fi
# Load all scriptlets
# Load all scriptlets.
cd "$SCRIPT_DIR"
for lib in lib/*.sh; do
. $lib
done
TRAINER_IMAGE="preparevms_prepare-vms"
DEPENDENCIES="
aws
ssh
@@ -25,49 +24,26 @@ DEPENDENCIES="
man
"
ENVVARS="
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
SSH_AUTH_SOCK
"
# Check for missing dependencies, and issue a warning if necessary.
missing=0
for dependency in $DEPENDENCIES; do
if ! command -v $dependency >/dev/null; then
warning "Dependency $dependency could not be found."
missing=1
fi
done
if [ $missing = 1 ]; then
warning "At least one dependency is missing. Install it or try the image wrapper."
fi
check_envvars() {
status=0
for envvar in $ENVVARS; do
if [ -z "${!envvar}" ]; then
error "Environment variable $envvar is not set."
if [ "$envvar" = "SSH_AUTH_SOCK" ]; then
error "Hint: run 'eval \$(ssh-agent) ; ssh-add' and try again?"
fi
status=1
fi
done
return $status
}
# Check if SSH_AUTH_SOCK is set.
# (If it's not, deployment will almost certainly fail.)
if [ -z "${SSH_AUTH_SOCK}" ]; then
warning "Environment variable SSH_AUTH_SOCK is not set."
warning "Hint: run 'eval \$(ssh-agent) ; ssh-add' and try again?"
fi
check_dependencies() {
status=0
for dependency in $DEPENDENCIES; do
if ! command -v $dependency >/dev/null; then
warning "Dependency $dependency could not be found."
status=1
fi
done
return $status
}
check_image() {
docker inspect $TRAINER_IMAGE >/dev/null 2>&1
}
check_envvars \
|| die "Please set all required environment variables."
check_dependencies \
|| warning "At least one dependency is missing. Install it or try the image wrapper."
# Now check which command was invoked and execute it
# Now check which command was invoked and execute it.
if [ "$1" ]; then
cmd="$1"
shift
@@ -77,6 +53,3 @@ fi
fun=_cmd_$cmd
type -t $fun | grep -q function || die "Invalid command: $cmd"
$fun "$@"
# export SSH_AUTH_DIRNAME=$(dirname $SSH_AUTH_SOCK)
# docker-compose run prepare-vms "$@"

slides/_redirects Normal file
@@ -0,0 +1 @@
/ /kube-twodays.yml.html 200!

@@ -223,7 +223,7 @@ def check_exit_status():
def setup_tmux_and_ssh():
if subprocess.call(["tmux", "has-session"]):
logging.error("Couldn't connect to tmux. Please setup tmux first.")
ipaddr = open("../../prepare-vms/ips.txt").read().split("\n")[0]
ipaddr = "$IPADDR"
uid = os.getuid()
raise Exception("""

@@ -1,3 +1,6 @@
class: title
# Advanced Dockerfiles
![construction](images/title-advanced-dockerfiles.jpg)

@@ -107,9 +107,17 @@ class: pic
class: pic
## Two containers on a single Docker network
![bridge2](images/bridge2.png)
---
class: pic
## Two containers on two Docker networks
![bridge3](images/bridge2.png)
![bridge3](images/bridge3.png)
---

@@ -1,3 +1,4 @@
class: title
# Getting inside a container

@@ -1,3 +1,4 @@
class: title
# Installing Docker

@@ -309,54 +309,6 @@ and *canary deployments*.
---
## Improving the workflow
The workflow that we showed is nice, but it requires us to:
* keep track of all the `docker run` flags required to run the container,
* inspect the `Dockerfile` to know which path(s) to mount,
* write scripts to hide that complexity.
There has to be a better way!
---
## Docker Compose to the rescue
* Docker Compose allows us to "encode" `docker run` parameters in a YAML file.
* Here is the `docker-compose.yml` file that we can use for our "namer" app:
```yaml
www:
build: .
volumes:
- .:/src
ports:
- 80:9292
```
* Try it:
```bash
$ docker-compose up -d
```
---
## Working with Docker Compose
* When you see a `docker-compose.yml` file, you can use `docker-compose up`.
* It can build images and run them with the required parameters.
* Compose can also deal with complex, multi-container apps.
(More on this later!)
---
## Recap of the development workflow
1. Write a Dockerfile to build an image containing our development environment.

@@ -24,7 +24,7 @@ Analogy: attaching to a container is like plugging a keyboard and screen to a ph
---
## Detaching from a container
## Detaching from a container (Linux/macOS)
* If you have started an *interactive* container (with option `-it`), you can detach from it.
@@ -41,6 +41,20 @@ What does `-it` stand for?
---
## Detaching cont. (Win PowerShell and cmd.exe)
* Docker for Windows has a different detach experience due to shell features.
* `^P^Q` does not work.
* `^C` will detach, rather than stop the container.
* Bash, the Windows Subsystem for Linux, etc. on Windows behave like Linux/macOS shells.
* Both PowerShell and Bash work well in Win 10; just be aware of differences.
---
class: extra-details
## Specifying a custom detach sequence

@@ -1,3 +1,4 @@
class: title
# Our training environment

@@ -0,0 +1,164 @@
class: title
# Windows Containers
![Container with Windows](images/windows-containers.jpg)
---
## Objectives
At the end of this section, you will be able to:
* Understand Windows Containers vs. Linux Containers.
* Know the Docker for Windows features for choosing container architecture.
* Run other container architectures via QEMU emulation.
---
## Are containers *just* for Linux?
Remember that a container must run on the kernel of the OS it's on.
- This is both a benefit and a limitation.
(It makes containers lightweight, but limits them to a specific kernel.)
- At its launch in 2013, Docker only supported Linux, and only on amd64 CPUs.
- Since then, many platforms and OS have been added.
(Windows, ARM, i386, IBM mainframes ... But no macOS or iOS yet!)
--
- Docker Desktop (macOS and Windows) can run containers for other architectures
(Check the docs to see how to [run a Raspberry Pi (ARM) or PPC container](https://docs.docker.com/docker-for-mac/multi-arch/)!)
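For example, here is a quick sanity check (a sketch, assuming Docker Desktop with its bundled QEMU emulation; `arm32v7/alpine` is just one example of an ARM image):

```bash
# Prints "armv7l" even on an Intel machine, thanks to emulation
docker run --rm arm32v7/alpine uname -m
```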
---
## History of Windows containers
- Early 2016, Windows 10 gained support for running Windows binaries in containers.
- These are known as "Windows Containers"
- Win 10 expects Docker for Windows to be installed for full features
- These must run in Hyper-V mini-VMs with a Windows Server x64 kernel
- No "scratch" containers, so use "Core" and "Nano" Server OS base layers
- Since Hyper-V is required, Windows 10 Home won't work (yet...)
--
- Late 2016, Windows Server 2016 ships with native Docker support
- Installed via PowerShell, doesn't need Docker for Windows
- Can run native (without VM), or with [Hyper-V Isolation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/hyperv-container)
---
## LCOW (Linux Containers On Windows)
While Docker on Windows is largely playing catch up with Docker on Linux,
it's moving fast; and this is one thing that you *cannot* do on Linux!
- LCOW came with the [2017 Fall Creators Update](https://blog.docker.com/2018/02/docker-for-windows-18-02-with-windows-10-fall-creators-update/).
- It can run Linux and Windows containers side-by-side on Win 10.
- It is no longer necessary to switch the Engine to "Linux Containers".
(In fact, if you want to run both Linux and Windows containers at the same time,
make sure that your Engine is set to "Windows Containers" mode!)
--
If you are a Docker for Windows user, start your engine and try this:
```bash
docker pull microsoft/nanoserver:1803
```
(Make sure to switch to "Windows Containers mode" if necessary.)
---
## Run Both Windows and Linux containers
- Run a Windows Nano Server (minimal CLI-only server)
```bash
docker run --rm -it microsoft/nanoserver:1803 powershell
Get-Process
exit
```
- Run busybox on Linux in LCOW
```bash
docker run --rm --platform linux busybox echo hello
```
(Although you will not be able to see them, this will create hidden
Nano and LinuxKit VMs in Hyper-V!)
---
## Did We Say Things Move Fast
- Things keep improving.
- Now `--platform` defaults to `windows`; some images support both:
- golang, mongo, python, redis, hello-world ... and more being added
- you should still use `--platform` with multi-OS images to be certain (see the example after this list)
- Windows Containers now support `localhost`-accessible containers (July 2018)
- Microsoft (April 2018) added Hyper-V support to Windows 10 Home ...
... so stay tuned for Docker support, maybe?!?
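A possible check (a sketch; `--platform` requires a recent Docker version, and `hello-world` is used here just as an example of an image published for both OSes):

```bash
# Explicitly request each variant of a multi-OS image
docker pull --platform linux hello-world
docker pull --platform windows hello-world
```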
---
## Other Windows container options
Most "official" Docker images don't run on Windows yet.
Places to Look:
- Hub Official: https://hub.docker.com/u/winamd64/
- Microsoft: https://hub.docker.com/r/microsoft/
---
## SQL Server? Choice of Linux or Windows
- Microsoft [SQL Server for Linux 2017](https://hub.docker.com/r/microsoft/mssql-server-linux/) (amd64/linux)
- Microsoft [SQL Server Express 2017](https://hub.docker.com/r/microsoft/mssql-server-windows-express/) (amd64/windows)
---
## Windows Tools and Tips
- PowerShell [Tab Completion: DockerCompletion](https://github.com/matt9ucci/DockerCompletion)
- Best Shell GUI: [Cmder.net](http://cmder.net/)
- Good Windows Container Blogs and How-To's
- Docker's DevRel [Elton Stoneman, Microsoft MVP](https://blog.sixeyed.com/)
- Docker Captain [Nicholas Dille](https://dille.name/blog/)
- Docker Captain [Stefan Scherer](https://stefanscherer.github.io/)

slides/images/bridge1.png (binary; Normal file → Executable file; 30 KiB → 97 KiB)
slides/images/bridge2.png (binary; Normal file → Executable file; 30 KiB → 119 KiB)
slides/images/bridge3.png (new binary; Executable file; 137 KiB)
(two more new files added: an 85 KiB image, and a 14 KiB file whose diff was suppressed because its lines are too long)
@@ -0,0 +1 @@
<mxfile userAgent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" version="9.3.0" editor="www.draw.io" type="device"><diagram id="cb13f823-9e55-f92e-d17e-d0d789fca2e0" name="Page-1">7Vnfb9sgEP5rLG0vlQ3+kTyuabs9bFq1Vtr2iO2LjUqMhfGS9q8fxNgxJtXSqmmmrVEeuAMOuO874LCHF6vNR0Hq8gvPgXnIzzcevvAQCkKEPP338/tOk0RxpygEzU2jneKGPoBR+kbb0hwaq6HknEla28qMVxVk0tIRIfjabrbkzB61JgU4ipuMMFf7neay7LSzyN/pPwEtyn7kwDc1KcnuCsHbyoznIbzc/rrqFeltmfZNSXK+HqnwpYcXgnPZlVabBTDt295tXb+rR2qHeQuo5EEdDC6/CGuhn/J2YvK+d8Z2OaA7+B4+X5dUwk1NMl27VvArXSlXTEmBKi4pYwvOuNj2xTmB2TJT+kYKfgejmjibQbpUNQUjTWOMZ3xFM1MeXKOFJa/kFVlRpgn1jadccj0uF/RB1ZBhdCUYNqFQyWZxICRsHvVQMPhd8Rn4CqS4V01Mhx4pw+RgZuT1jhdJrytHnMC9khguFoPpHR6qYCDZDw920FlzcQfCwUg5q9bFrE3hzyClHaKf00Ex0PZrKxmtYOTPZ7h9QgJFf5TtJUEep7HaGvqaPtbwS0FnY4cSF7sA7cHuJaALHehEVbzhdghusQ1bGL4ibJEDW0ma8i3iDow4dELo3KNMQE6bN+QOQW44rk6BXOIec5C29A25g3ZLfELk+gv7CLqPl7cOcFDlH/S1XGOnr3v6kmfdGp+MQDBz/KkxYSQFdj7gPALh6mqhfoPLIXcygInD1QJ4KzKwbmKSiALk6IR3YRm5Pdrj9V4ngBFJf9mT2AeFGeGaUzW9AfVgutPO57aJbvKm1zgDmBpKpvSZGOqW7BjaMmNY9mFkCRyyXH+9+U/YEp2SLWge2SDHk9g/lC04nBgKJoZekC3IYcvt4vr/IEt8UrIk4VniR7MZwhGahz0OBnH8XOqgWXSmzQVJHG7N20SKjkckN4v+N4mU/GVEwoF9tDybOuEkkT8mWdy8vW32pH+qF60b6N6puksp421+wAPZyV6ykokX4+g1L4puYn3SiyJsqPyhyv5ZL/3cSoGRrkFQtUjQkekfM2h7wo2jNjll1E5eX5LnBm0wif7kiFcFN4G84NkdiIUy3ugh65rRTHmFVx6KmdTJoIrpuNCld0vtrO3HBElUViia924jh6kqDqXNTTvvq7jOL60k0agIo0WlRAZLbUHHtJob+2DUkteP7CL2Q7z1Pv7YI/oRtpFJuhnM3V0kjvfQM8BP30aUuPsW0hFj98EJX/4G</diagram></mxfile>

(another new binary image: 426 KiB)
@@ -106,25 +106,40 @@ import yaml
items = yaml.load(open("index.yaml"))
# Items with a date correspond to scheduled sessions.
# Items without a date correspond to self-paced content.
# The date should be specified as a string (e.g. 2018-11-26).
# It can also be a list of two elements (e.g. [2018-11-26, 2018-11-28]).
# The latter indicates an event spanning multiple dates.
# The first date will be used in the generated page, but the event
# will be considered "current" (and therefore, shown in the list of
# upcoming events) until the second date.
for item in items:
if "date" in item:
date = item["date"]
if type(date) == list:
date_begin, date_end = date
else:
date_begin, date_end = date, date
suffix = {
1: "st", 2: "nd", 3: "rd",
21: "st", 22: "nd", 23: "rd",
31: "st"}.get(date.day, "th")
31: "st"}.get(date_begin.day, "th")
# %e is a non-standard extension (it displays the day, but without a
# leading zero). If strftime fails with ValueError, try to fall back
# on %d (which displays the day but with a leading zero when needed).
try:
item["prettydate"] = date.strftime("%B %e{}, %Y").format(suffix)
item["prettydate"] = date_begin.strftime("%B %e{}, %Y").format(suffix)
except ValueError:
item["prettydate"] = date.strftime("%B %d{}, %Y").format(suffix)
item["prettydate"] = date_begin.strftime("%B %d{}, %Y").format(suffix)
item["begin"] = date_begin
item["end"] = date_end
today = datetime.date.today()
coming_soon = [i for i in items if i.get("date") and i["date"] >= today]
coming_soon.sort(key=lambda i: i["date"])
past_workshops = [i for i in items if i.get("date") and i["date"] < today]
coming_soon = [i for i in items if i.get("date") and i["end"] >= today]
coming_soon.sort(key=lambda i: i["begin"])
past_workshops = [i for i in items if i.get("date") and i["end"] < today]
past_workshops.sort(key=lambda i: i["date"], reverse=True)
self_paced = [i for i in items if not i.get("date")]
recorded_workshops = [i for i in items if i.get("video")]

@@ -1,26 +1,62 @@
- date: 2018-11-23
city: Copenhagen
country: dk
- date: 2019-04-28
country: us
city: Chicago, IL
event: GOTO
title: Build Container Orchestration with Docker Swarm
speaker: bretfisher
attend: https://gotocph.com/2018/workshops/121
speaker: jpetazzo
title: Getting Started With Kubernetes and Container Orchestration
attend: https://gotochgo.com/2019/workshops/148
- date: 2019-03-07
country: uk
city: London
event: QCON
speaker: jpetazzo
title: Getting Started With Kubernetes and Container Orchestration
attend: https://qconlondon.com/london2019/workshop/getting-started-kubernetes-and-container-orchestration
- date: [2019-01-07, 2019-01-08]
country: fr
city: Paris
event: ENIX SAS
speaker: "jpetazzo, alexbuisine"
title: Bien démarrer avec les conteneurs (in French)
lang: fr
attend: https://enix.io/fr/services/formation/bien-demarrer-avec-les-conteneurs/
- date: [2018-12-17, 2018-12-18]
country: fr
city: Paris
event: ENIX SAS
speaker: "jpetazzo, rdegez"
title: Déployer ses applications avec Kubernetes (in French)
lang: fr
attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
- date: 2018-11-08
city: San Francisco, CA
country: us
event: QCON
title: Introduction to Docker and Containers
speaker: jpetazzo
speaker: zeroasterisk
attend: https://qconsf.com/sf2018/workshop/introduction-docker-and-containers
- date: 2018-11-08
city: San Francisco, CA
country: us
event: QCON
title: Getting Started With Kubernetes and Container Orchestration
speaker: jpetazzo
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration-thursday-section
slides: http://qconsf2018.container.training/
- date: 2018-11-09
city: San Francisco, CA
country: us
event: QCON
title: Getting Started With Kubernetes and Container Orchestration
speaker: jpetazzo
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration-friday-section
slides: http://qconsf2018.container.training/
- date: 2018-10-31
city: London, UK
@@ -28,6 +64,7 @@
event: Velocity EU
title: Kubernetes 101
speaker: bridgetkromhout
slides: https://velocityeu2018.container.training
attend: https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/71149
- date: 2018-10-30
@@ -54,16 +91,18 @@
title: Kubernetes 101
speaker: bridgetkromhout
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70102
slides: https://velny-k8s101-2018.container.training
- date: 2018-09-30
- date: 2018-10-01
city: New York, NY
country: us
event: Velocity
title: Kubernetes Bootcamp - Deploying and Scaling Microservices
speaker: jpetazzo
attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/69875
slides: https://k8s2d.container.training
- date: 2018-09-30
- date: 2018-10-01
city: New York, NY
country: us
event: Velocity
@@ -79,6 +118,7 @@
title: Déployer ses applications avec Kubernetes (in French)
lang: fr
attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
slides: https://septembre2018.container.training
- date: 2018-07-17
city: Portland, OR

@@ -42,6 +42,7 @@ chapters:
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Docker_Machine.md

@@ -42,6 +42,7 @@ chapters:
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Docker_Machine.md

@@ -24,15 +24,9 @@
(it examines headers, certificates ... anything available)
- Many authentication methods can be used simultaneously:
- Many authentication methods are available and can be used simultaneously
- TLS client certificates (that's what we've been doing with `kubectl` so far)
- bearer tokens (a secret token in the HTTP headers of the request)
- [HTTP basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication) (carrying user and password in a HTTP header)
- authentication proxy (sitting in front of the API and setting trusted headers)
(we will see them on the next slide)
- It's the job of the authentication method to produce:
@@ -44,6 +38,26 @@
---
## Authentication methods
- TLS client certificates
(that's what we've been doing with `kubectl` so far)
- Bearer tokens
(a secret token in the HTTP headers of the request)
- [HTTP basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication)
(carrying user and password in a HTTP header)
- Authentication proxy
(sitting in front of the API and setting trusted headers)
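To make the bearer token method concrete, here is a sketch of a raw API request; it borrows the token of the `default` ServiceAccount, and assumes that `curl` and `jq` are available:

```bash
SECRET=$(kubectl get sa default -o json | jq -r .secrets[0].name)
TOKEN=$(kubectl get secret $SECRET -o json | jq -r .data.token | base64 -d)
API=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}')
# -k skips certificate verification; fine for a quick test, not for production
curl -k -H "Authorization: Bearer $TOKEN" $API/api
```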
---
## Anonymous requests
- If any authentication method *rejects* a request, it's denied
@@ -119,6 +133,30 @@ class: extra-details
→ We are user `kubernetes-admin`, in group `system:masters`.
(We will see later how and why this gives us the permissions that we have.)
---
## User certificates in practice
- The Kubernetes API server does not support certificate revocation
(see issue [#18982](https://github.com/kubernetes/kubernetes/issues/18982))
- As a result, we cannot easily suspend a user's access
- There are workarounds, but they are very inconvenient:
- issue short-lived certificates (e.g. 24 hours) and regenerate them often
- re-create the CA and re-issue all certificates in case of compromise
- grant permissions to individual users, not groups
<br/>
(and remove all permissions to a compromised user)
- Until this is fixed, we probably want to use other methods
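For the record, issuing a short-lived (24 hours) certificate could look like this (a sketch; the user `alice`, the group `developers`, and the CA file names are placeholders):

```bash
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -subj "/CN=alice/O=developers" -out alice.csr
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -days 1 -out alice.crt
```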
---
## Authentication with tokens
@@ -182,23 +220,23 @@ class: extra-details
kubectl get sa
```
]
There should be just one service account in the default namespace: `default`.
---
class: extra-details
## Finding the secret
.exercise[
- List the secrets for the `default` service account:
```bash
kubectl get sa default -o yaml
SECRET=$(kubectl get sa default -o json | jq -r .secrets[0].name)
```
]
@@ -502,7 +540,7 @@ It's important to note a couple of details in these flags ...
- But that we can't create things:
```
./kubectl run tryme --image=nginx
./kubectl create deployment tryme --image=nginx
```
- Exit the container with `exit` or `^D`
@@ -531,3 +569,45 @@ It's important to note a couple of details in these flags ...
kubectl auth can-i list nodes \
--as system:serviceaccount:<namespace>:<name-of-service-account>
```
---
class: extra-details
## Where do our permissions come from?
- When interacting with the Kubernetes API, we are using a client certificate
- We saw previously that this client certificate contained:
`CN=kubernetes-admin` and `O=system:masters`
- Let's look for these in existing ClusterRoleBindings:
```bash
kubectl get clusterrolebindings -o yaml |
grep -e kubernetes-admin -e system:masters
```
(`system:masters` should show up, but not `kubernetes-admin`.)
- Where does this match come from?
---
class: extra-details
## The `system:masters` group
- If we eyeball the output of `kubectl get clusterrolebindings -o yaml`, we'll find out!
- It is in the `cluster-admin` binding:
```bash
kubectl describe clusterrolebinding cluster-admin
```
- This binding associates `system:masters` to the cluster role `cluster-admin`
- And the `cluster-admin` is, basically, `root`:
```bash
kubectl describe clusterrole cluster-admin
```

@@ -327,7 +327,7 @@ We'll cover them just after!*
- We will provide a simple HAproxy configuration, `k8s/haproxy.cfg`
- It listens on port 80, and load balances connections between Google and Bing
- It listens on port 80, and load balances connections between IBM and Google
---
@@ -407,20 +407,22 @@ spec:
- half of the connections to Google
- the other half to Bing
- the other half to IBM
.exercise[
- Access the load balancer a few times:
```bash
curl -I $IP
curl -I $IP
curl -I $IP
curl $IP
curl $IP
curl $IP
```
]
We should see connections served by Google (look for the `Location` header) and others served by Bing (indicated by the `X-MSEdge-Ref` header).
We should see connections served by Google, and others served by IBM.
<br/>
(Each server sends us a redirect page. Look at the URL that they send us to!)
---

@@ -36,7 +36,7 @@
## Creating a daemon set
- Unfortunately, as of Kubernetes 1.10, the CLI cannot create daemon sets
- Unfortunately, as of Kubernetes 1.12, the CLI cannot create daemon sets
--
@@ -252,38 +252,29 @@ The master node has [taints](https://kubernetes.io/docs/concepts/configuration/t
---
## What are all these pods doing?
## Is this working?
- Let's check the logs of all these `rng` pods
- All these pods have a `run=rng` label:
- the first pod, because that's what `kubectl run` does
- the other ones (in the daemon set), because we
*copied the spec from the first one*
- Therefore, we can query everybody's logs using that `run=rng` selector
.exercise[
- Check the logs of all the pods having a label `run=rng`:
```bash
kubectl logs -l run=rng --tail 1
```
]
- Look at the web UI
--
It appears that *all the pods* are serving requests at the moment.
- The graph should now go above 10 hashes per second!
--
- It looks like the newly created pods are serving traffic correctly
- How and why did this happen?
(We didn't do anything special to add them to the `rng` service load balancer!)
---
## The magic of selectors
# Labels and selectors
- The `rng` *service* is load balancing requests to a set of pods
- This set of pods is defined as "pods having the label `run=rng`"
- That set of pods is defined by the *selector* of the `rng` service
.exercise[
@@ -294,110 +285,333 @@ It appears that *all the pods* are serving requests at the moment.
]
When we created additional pods with this label, they were
automatically detected by `svc/rng` and added as *endpoints*
to the associated load balancer.
- The selector is `app=rng`
- It means "all the pods having the label `app=rng`"
(They can have additional labels as well, that's OK!)
---
## Removing the first pod from the load balancer
## Selector evaluation
- We can use selectors with many `kubectl` commands
- For instance, with `kubectl get`, `kubectl logs`, `kubectl delete` ... and more
.exercise[
- Get the list of pods matching selector `app=rng`:
```bash
kubectl get pods -l app=rng
kubectl get pods --selector app=rng
```
]
But ... why do these pods (in particular, the *new* ones) have this `app=rng` label?
---
## Where do labels come from?
- When we create a deployment with `kubectl create deployment rng`,
<br/>this deployment gets the label `app=rng`
- The replica sets created by this deployment also get the label `app=rng`
- The pods created by these replica sets also get the label `app=rng`
- When we created the daemon set from the deployment, we re-used the same spec
- Therefore, the pods created by the daemon set get the same labels
.footnote[Note: when we use `kubectl run stuff`, the label is `run=stuff` instead.]
---
## Updating load balancer configuration
- We would like to remove a pod from the load balancer
- What would happen if we removed that pod, with `kubectl delete pod ...`?
--
The `replicaset` would re-create it immediately.
It would be re-created immediately (by the replica set or the daemon set)
--
- What would happen if we removed the `run=rng` label from that pod?
- What would happen if we removed the `app=rng` label from that pod?
--
The `replicaset` would re-create it immediately.
It would *also* be re-created immediately
--
... Because what matters to the `replicaset` is the number of pods *matching that selector.*
--
- But but but ... Don't we have more than one pod with `run=rng` now?
--
The answer lies in the exact selector used by the `replicaset` ...
Why?!?
---
## Deep dive into selectors
## Selectors for replica sets and daemon sets
- Let's look at the selectors for the `rng` *deployment* and the associated *replica set*
- The "mission" of a replica set is:
"Make sure that there is the right number of pods matching this spec!"
- The "mission" of a daemon set is:
"Make sure that there is a pod matching this spec on each node!"
--
- *In fact,* replica sets and daemon sets do not check pod specifications
- They merely have a *selector*, and they look for pods matching that selector
- Yes, we can fool them by manually creating pods with the "right" labels
- Bottom line: if we remove our `app=rng` label ...
... The pod "diseappears" for its parent, which re-creates another pod to replace it
---
class: extra-details
## Isolation of replica sets and daemon sets
- Since both the `rng` daemon set and the `rng` replica set use `app=rng` ...
... Why don't they "find" each other's pods?
--
- *Replica sets* have a more specific selector, visible with `kubectl describe`
(It looks like `app=rng,pod-template-hash=abcd1234`)
- *Daemon sets* also have a more specific selector, but it's invisible
(It looks like `app=rng,controller-revision-hash=abcd1234`)
- As a result, each controller only "sees" the pods it manages
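We can see these extra labels on the pods themselves, for instance:

```bash
# Shows pod-template-hash / controller-revision-hash next to app=rng
kubectl get pods -l app=rng --show-labels
```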
---
## Removing a pod from the load balancer
- Currently, the `rng` service is defined by the `app=rng` selector
- The only way to remove a pod is to remove or change the `app` label
- ... But that will cause another pod to be created instead!
- What's the solution?
--
- We need to change the selector of the `rng` service!
- Let's add another label to that selector (e.g. `enabled=yes`)
---
## Complex selectors
- If a selector specifies multiple labels, they are understood as a logical *AND*
(In other words: the pods must match all the labels)
- Kubernetes has support for advanced, set-based selectors
(But these cannot be used with services, at least not yet!)
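For the record, here is what a set-based selector looks like on the command line (just an illustration; `redis` is another label value used by our app):

```bash
kubectl get pods -l 'app in (rng, redis)'
```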
---
## The plan
1. Add the label `enabled=yes` to all our `rng` pods
2. Update the selector for the `rng` service to also include `enabled=yes`
3. Toggle traffic to a pod by manually adding/removing the `enabled` label
4. Profit!
*Note: if we swap steps 1 and 2, it will cause a short
service disruption, because there will be a period of time
during which the service selector won't match any pod.
During that time, requests to the service will time out.
By doing things in the order above, we guarantee that there won't
be any interruption.*
---
## Adding labels to pods
- We want to add the label `enabled=yes` to all pods that have `app=rng`
- We could edit each pod one by one with `kubectl edit` ...
- ... Or we could use `kubectl label` to label them all
- `kubectl label` can use selectors itself
.exercise[
- Show detailed information about the `rng` deployment:
- Add `enabled=yes` to all pods that have `app=rng`:
```bash
kubectl describe deploy rng
kubectl label pods -l app=rng enabled=yes
```
- Show detailed information about the `rng` replica:
<br/>(The second command doesn't require you to get the exact name of the replica set)
]
---
## Updating the service selector
- We need to edit the service specification
- Reminder: in the service definition, we will see `app: rng` in two places
- the label of the service itself (we don't need to touch that one)
- the selector of the service (that's the one we want to change)
.exercise[
- Update the service to add `enabled: yes` to its selector:
```bash
kubectl describe rs rng-yyyy
kubectl describe rs -l run=rng
kubectl edit service rng
```
<!--
```wait Please edit the object below```
```keys /app: rng```
```keys ^J```
```keys noenabled: yes```
```keys ^[``` ]
```keys :wq```
```keys ^J```
-->
]
--
The replica set selector also has a `pod-template-hash`, unlike the pods in our daemon set.
... And then we get *the weirdest error ever.* Why?
---
# Updating a service through labels and selectors
## When the YAML parser is being too smart
- What if we want to drop the `rng` deployment from the load balancer?
- YAML parsers try to help us:
- Option 1:
- `xyz` is the string `"xyz"`
- destroy it
- `42` is the integer `42`
- Option 2:
- `yes` is the boolean value `true`
- add an extra *label* to the daemon set
- If we want the string `"42"` or the string `"yes"`, we have to quote them
- update the service *selector* to refer to that *label*
- So we have to use `enabled: "yes"`
--
Of course, option 2 offers more learning opportunities. Right?
.footnote[For a good laugh: if we had used "ja", "oui", "si" ... as the value, it would have worked!]
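A quick way to see the difference (assuming Python with PyYAML installed; recent PyYAML versions will also print a warning about the default loader):

```bash
python3 -c 'import yaml; print(yaml.load("enabled: yes"))'
# {'enabled': True}
python3 -c 'import yaml; print(yaml.load("enabled: \"yes\""))'
# {'enabled': 'yes'}
```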
---
## Add an extra label to the daemon set
## Updating the service selector, take 2
- We will update the daemon set "spec"
.exercise[
- Option 1:
- Update the service to add `enabled: "yes"` to its selector:
```bash
kubectl edit service rng
```
- edit the `rng.yml` file that we used earlier
<!--
```wait Please edit the object below```
```keys /app: rng```
```keys ^J```
```keys noenabled: "yes"```
```keys ^[``` ]
```keys :wq```
```keys ^J```
-->
- load the new definition with `kubectl apply`
]
- Option 2:
This time it should work!
- use `kubectl edit`
--
*If you feel like you got this💕🌈, feel free to try directly.*
*We've included a few hints on the next slides for your convenience!*
If we did everything correctly, the web UI shouldn't show any change.
---
## Updating labels
- We want to disable the pod that was created by the deployment
- All we have to do is remove the `enabled` label from that pod
- To identify that pod, we can use its name
- ... Or rely on the fact that it's the only one with a `pod-template-hash` label
- Good to know:
- `kubectl label ... foo=` doesn't remove a label (it sets it to an empty string)
- to remove label `foo`, use `kubectl label ... foo-`
- to change an existing label, we would need to add `--overwrite`
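In short, with a hypothetical pod named `mypod`:

```bash
kubectl label pod mypod foo=bar              # set label foo
kubectl label pod mypod foo=baz --overwrite  # change an existing label
kubectl label pod mypod foo-                 # remove label foo
```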
---
## Removing a pod from the load balancer
.exercise[
- In one window, check the logs of that pod:
```bash
POD=$(kubectl get pod -l app=rng,pod-template-hash -o name)
kubectl logs --tail 1 --follow $POD
```
(We should see a steady stream of HTTP logs)
- In another window, remove the label from the pod:
```bash
kubectl label pod -l app=rng,pod-template-hash enabled-
```
(The stream of HTTP logs should stop immediately)
]
There might be a slight change in the web UI (since we removed a bit
of capacity from the `rng` service). If we remove more pods,
the effect should be more visible.
---
class: extra-details
## Updating the daemon set
- If we scale up our cluster by adding new nodes, the daemon set will create more pods
- These pods won't have the `enabled=yes` label
- If we want these pods to have that label, we need to edit the daemon set spec
- We can do that with e.g. `kubectl edit daemonset rng`
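If we prefer a non-interactive command, something like this should be equivalent (a sketch using a merge patch, similar to the `kubectl patch` shown later):

```bash
# Add the label to the daemon set's pod template
# (pods managed by the daemon set will then carry that label)
kubectl patch daemonset rng -p '
spec:
  template:
    metadata:
      labels:
        enabled: "yes"
'
```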
---
class: extra-details
## We've put resources in your resources
- Reminder: a daemon set is a resource that creates more resources!
@@ -410,7 +624,9 @@ Of course, option 2 offers more learning opportunities. Right?
- the label(s) of the resource(s) created by the first resource (in the `template` block)
- You need to update the selector and the template (metadata labels are not mandatory)
- We would need to update the selector and the template
(metadata labels are not mandatory)
- The template must match the selector
@@ -418,175 +634,6 @@ Of course, option 2 offers more learning opportunities. Right?
---
## Adding our label
- Let's add a label `isactive: yes`
- In YAML, `yes` should be quoted; i.e. `isactive: "yes"`
.exercise[
- Update the daemon set to add `isactive: "yes"` to the selector and template label:
```bash
kubectl edit daemonset rng
```
<!--
```wait Please edit the object below```
```keys /run: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
```keys /run: rng```
```keys ^J```
```keys oisactive: "yes"```
```keys ^[``` ]
```keys :wq```
```keys ^J```
-->
- Update the service to add `isactive: "yes"` to its selector:
```bash
kubectl edit service rng
```
<!--
```wait Please edit the object below```
```keys /run: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
```keys :wq```
```keys ^J```
-->
]
---
## Checking what we've done
.exercise[
- Check the most recent log line of all `run=rng` pods to confirm that exactly one per node is now active:
```bash
kubectl logs -l run=rng --tail 1
```
]
The timestamps should give us a hint about how many pods are currently receiving traffic.
.exercise[
- Look at the pods that we have right now:
```bash
kubectl get pods
```
]
---
## Cleaning up
- The pods of the deployment and the "old" daemon set are still running
- We are going to identify them programmatically
.exercise[
- List the pods with `run=rng` but without `isactive=yes`:
```bash
kubectl get pods -l run=rng,isactive!=yes
```
- Remove these pods:
```bash
kubectl delete pods -l run=rng,isactive!=yes
```
]
---
## Cleaning up stale pods
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
rng-54f57d4d49-7pt82 1/1 Terminating 0 51m
rng-54f57d4d49-vgz9h 1/1 Running 0 22s
rng-b85tm 1/1 Terminating 0 39m
rng-hfbrr 1/1 Terminating 0 39m
rng-vplmj 1/1 Running 0 7m
rng-xbpvg 1/1 Running 0 7m
[...]
```
- The extra pods (noted `Terminating` above) are going away
- ... But a new one (`rng-54f57d4d49-vgz9h` above) was created immediately to replace it!
--
- Remember, the *deployment* still exists, and makes sure that one pod is up and running
- If we delete the pod associated to the deployment, it is recreated automatically
---
## Deleting a deployment
.exercise[
- Remove the `rng` deployment:
```bash
kubectl delete deployment rng
```
]
--
- The pod that was created by the deployment is now being terminated:
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
rng-54f57d4d49-vgz9h 1/1 Terminating 0 4m
rng-vplmj 1/1 Running 0 11m
rng-xbpvg 1/1 Running 0 11m
[...]
```
Ding, dong, the deployment is dead! And the daemon set lives on.
---
## Avoiding extra pods
- When we changed the definition of the daemon set, it immediately created new pods. We had to remove the old ones manually.
- How could we have avoided this?
--
- By adding the `isactive: "yes"` label to the pods before changing the daemon set!
- This can be done programmatically with `kubectl patch`:
```bash
PATCH='
metadata:
labels:
isactive: "yes"
'
kubectl get pods -l run=rng -l controller-revision-hash -o name |
xargs kubectl patch -p "$PATCH"
```
---
## Labels and debugging
- When a pod is misbehaving, we can delete it: another one will be recreated


@@ -182,7 +182,7 @@ The dashboard will then ask you which authentication you want to use.
kubectl -n kube-system edit service kubernetes-dashboard
```
- Change `ClusterIP` to `NodePort`, save, and exit
- Change type `type:` from `ClusterIP` to `NodePort`, save, and exit
<!--
```wait Please edit the object below```


@@ -111,7 +111,7 @@
- Display that key:
```
kubectl get logs deployment flux | grep identity
kubectl logs deployment flux | grep identity
```
- Then add that key to the repository, giving it **write** access


@@ -344,7 +344,7 @@ This is normal: we haven't provided any ingress rule yet.
- To make our lives easier, we will use [nip.io](http://nip.io)
- Check out `http://cheddar.A.B.C.D.mip.io`
- Check out `http://cheddar.A.B.C.D.nip.io`
(replacing A.B.C.D with the IP address of `node1`)
@@ -392,9 +392,9 @@ This is normal: we haven't provided any ingress rule yet.
- Run all three deployments:
```bash
kubectl run cheddar --image=errm/cheese:cheddar
kubectl run stilton --image=errm/cheese:stilton
kubectl run wensleydale --image=errm/cheese:wensleydale
kubectl create deployment cheddar --image=errm/cheese:cheddar
kubectl create deployment stilton --image=errm/cheese:stilton
kubectl create deployment wensleydale --image=errm/cheese:wensleydale
```
- Create a service for each of them:


@@ -43,45 +43,63 @@ Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables`
- an external load balancer is allocated for the service
- the load balancer is configured accordingly
<br/>(e.g.: a `NodePort` service is created, and the load balancer sends traffic to that port)
- available only when the underlying infrastructure provides some "load balancer as a service"
<br/>(e.g. AWS, Azure, GCE, OpenStack...)
- `ExternalName`
- the DNS entry managed by CoreDNS will just be a `CNAME` to a provided record
- no port, no IP address, no nothing else is allocated
The `LoadBalancer` type is currently only available on AWS, Azure, and GCE.
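As a quick illustration (the service and target names below are made up), an `ExternalName` service can be created like this:

```bash
# Creates a service whose DNS entry is a CNAME to db.example.com
kubectl create service externalname my-db --external-name db.example.com
```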
---
## Running containers with open ports
- Since `ping` doesn't have anything to connect to, we'll have to run something else
- We could use the `nginx` official image, but ...
... we wouldn't be able to tell the backends from each other!
- We are going to use `jpetazzo/httpenv`, a tiny HTTP server written in Go
- `jpetazzo/httpenv` listens on port 8888
- It serves its environment variables in JSON format
- The environment variables will include `HOSTNAME`, which will be the pod name
(and therefore, will be different on each backend)
---
## Creating a deployment for our HTTP server
- We *could* do `kubectl run httpenv --image=jpetazzo/httpenv` ...
- But since `kubectl run` is being deprecated, let's see how to use `kubectl create` instead
.exercise[
- Start a bunch of HTTP servers:
```bash
kubectl run httpenv --image=jpetazzo/httpenv --replicas=10
```
- Watch them being started:
- In another window, watch the pods (to see when they will be created):
```bash
kubectl get pods -w
```
<!--
```wait httpenv-```
```keys ^C```
-->
<!-- ```keys ^C``` -->
- Create a deployment for this very lightweight HTTP server:
```bash
kubectl create deployment httpenv --image=jpetazzo/httpenv
```
- Scale it to 10 replicas:
```bash
kubectl scale deployment httpenv --replicas=10
```
]
The `jpetazzo/httpenv` image runs an HTTP server on port 8888.
<br/>
It serves its environment variables in JSON format.
The `-w` option "watches" events happening on the specified resources.
---
## Exposing our deployment
@@ -92,12 +110,12 @@ The `-w` option "watches" events happening on the specified resources.
- Expose the HTTP port of our server:
```bash
kubectl expose deploy/httpenv --port 8888
kubectl expose deployment httpenv --port 8888
```
- Look up which IP address was allocated:
```bash
kubectl get svc
kubectl get service
```
]
@@ -151,7 +169,7 @@ The `-w` option "watches" events happening on the specified resources.
--
Our requests are load balanced across multiple pods.
Try it a few times! Our requests are load balanced across multiple pods.
---
@@ -237,7 +255,7 @@ class: extra-details
- These IP addresses should match the addresses of the corresponding pods:
```bash
kubectl get pods -l run=httpenv -o wide
kubectl get pods -l app=httpenv -o wide
```
---


@@ -32,7 +32,8 @@
--
OK, what just happened?
(Starting with Kubernetes 1.12, we get a message telling us that
`kubectl run` is deprecated. Let's ignore it for now.)
---
@@ -172,6 +173,11 @@ pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
kubectl scale deploy/pingpong --replicas 8
```
- Note that this command does exactly the same thing:
```bash
kubectl scale deployment pingpong --replicas 8
```
]
Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?
@@ -228,6 +234,44 @@ We could! But the *deployment* would notice it right away, and scale back to the
---
## What about that deprecation warning?
- As we can see from the previous slide, `kubectl run` can do many things
- The exact type of resource created is not obvious
- To make things more explicit, it is better to use `kubectl create`:
- `kubectl create deployment` to create a deployment
- `kubectl create job` to create a job
- Eventually, `kubectl run` will be used only to start one-shot pods
(see https://github.com/kubernetes/kubernetes/pull/68132)
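For example (resource names are arbitrary; `kubectl create job` requires a recent kubectl):

```bash
# Explicitly create a deployment
kubectl create deployment web --image=nginx
# Explicitly create a job
kubectl create job hello --image=busybox -- echo hello
# kubectl run, used only for a one-shot pod
kubectl run -ti --rm shell --image=alpine --restart=Never -- sh
```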
---
## Various ways of creating resources
- `kubectl run`
- easy way to get started
- versatile
- `kubectl create <resource>`
- explicit, but lacks some features
- can't create a CronJob
- can't pass command-line arguments to deployments
- `kubectl create -f foo.yaml` or `kubectl apply -f foo.yaml`
- all features are available
- requires writing YAML
---
## Viewing logs of multiple pods
- When we specify a deployment name, only one single pod's logs are shown
@@ -251,6 +295,24 @@ Unfortunately, `--follow` cannot (yet) be used to stream the logs from multiple
---
class: extra-details
## `kubectl logs -l ... --tail N`
- If we run this with Kubernetes 1.12, the last command shows multiple lines
- This is a regression when `--tail` is used together with `-l`/`--selector`
- It always shows the last 10 lines of output for each container
(instead of the number of lines specified on the command line)
- The problem was fixed in Kubernetes 1.13
*See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details.*
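In other words, on a 1.12 cluster, a command like the one below (assuming pods labeled `run=pingpong`, as created by `kubectl run`) shows 10 lines per pod instead of 1:

```bash
kubectl logs -l run=pingpong --tail 1
```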
---
## Aren't we flooding 1.1.1.1?
- If you're wondering this, good question!


@@ -1,15 +1,13 @@
# Links and resources
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Kubernetes on StackOverflow](https://stackoverflow.com/questions/tagged/kubernetes)
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
- [Microsoft Learn](https://docs.microsoft.com/learn/)
- [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/)
- [Cloud Developer Advocates](https://developer.microsoft.com/advocates/)
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Local meetups](https://www.meetup.com/)
- [devopsdays](https://www.devopsdays.org/)


@@ -14,11 +14,11 @@
- Download the `kubectl` binary from one of these links:
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.11.2/bin/linux/amd64/kubectl)
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl)
|
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.11.2/bin/darwin/amd64/kubectl)
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl)
|
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.11.2/bin/windows/amd64/kubectl.exe)
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/windows/amd64/kubectl.exe)
- On Linux and macOS, make the binary executable with `chmod +x kubectl`


@@ -59,13 +59,15 @@ Exactly what we need!
- If it is not installed, the easiest method is to download a [binary release](https://github.com/wercker/stern/releases)
- The following commands will install Stern on a Linux Intel 64 bits machine:
- The following commands will install Stern on a Linux Intel 64 bit machine:
```bash
sudo curl -L -o /usr/local/bin/stern \
https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64
https://github.com/wercker/stern/releases/download/1.10.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
```
<!-- ##VERSION## -->
---
## Using Stern
@@ -130,11 +132,13 @@ Exactly what we need!
- We can use that property to view the logs of all the pods created with `kubectl run`
- Similarly, everything created with `kubectl create deployment` has a label `app`
.exercise[
- View the logs for all the things started with `kubectl run`:
- View the logs for all the things started with `kubectl create deployment`:
```bash
stern -l run
stern -l app
```
<!--


@@ -68,7 +68,7 @@
kubectl -n blue get svc
```
- We can also use *contexts*
- We can also change our current *context*
- A context is a *(user, cluster, namespace)* tuple
@@ -76,9 +76,9 @@
---
## Creating a context
## Viewing existing contexts
- We are going to create a context for the `blue` namespace
- On our training environments, at this point, there should be only one context
.exercise[
@@ -87,29 +87,79 @@
kubectl config get-contexts
```
- Create a new context:
]
- The current context (the only one!) is tagged with a `*`
- What are NAME, CLUSTER, AUTHINFO, and NAMESPACE?
---
## What's in a context
- NAME is an arbitrary string to identify the context
- CLUSTER is a reference to a cluster
(i.e. API endpoint URL, and optional certificate)
- AUTHINFO is a reference to the authentication information to use
(i.e. a TLS client certificate, token, or otherwise)
- NAMESPACE is the namespace
(empty string = `default`)
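To inspect these values for the current context (standard `kubectl config` commands):

```bash
# Name of the current context
kubectl config current-context
# Configuration relevant to the current context only
kubectl config view --minify
```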
---
## Switching contexts
- We want to use a different namespace
- Solution 1: update the current context
*This is appropriate if we need to change just one thing (e.g. namespace or authentication).*
- Solution 2: create a new context and switch to it
*This is appropriate if we need to change multiple things and switch back and forth.*
- Let's go with solution 1!
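For reference, solution 2 would look something like this (the cluster and user names below match the ones created by kubeadm on our clusters):

```bash
# Create a dedicated context for the blue namespace, then switch to it
kubectl config set-context blue --namespace=blue \
        --cluster=kubernetes --user=kubernetes-admin
kubectl config use-context blue
```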
---
## Updating a context
- This is done through `kubectl config set-context`
- We can update a context by passing its name, or the current context with `--current`
.exercise[
- Update the current context to use the `blue` namespace:
```bash
kubectl config set-context blue --namespace=blue \
--cluster=kubernetes --user=kubernetes-admin
kubectl config set-context --current --namespace=blue
```
- Check the result:
```bash
kubectl config get-contexts
```
]
We have created a context; but these are just configuration values.
The namespace doesn't exist yet.
---
## Using a context
## Using our new namespace
- Let's switch to our new context and deploy the DockerCoins chart
- Let's check that we are in our new namespace, then deploy the DockerCoins chart
.exercise[
- Use the `blue` context:
- Verify that the new context is empty:
```bash
kubectl config use-context blue
kubectl get all
```
- Deploy DockerCoins:
@@ -175,48 +225,66 @@ Note: it might take a minute or two for the app to be up and running.
---
## Network policies overview
- We can create as many network policies as we want
- Each network policy has:
- a *pod selector*: "which pods are targeted by the policy?"
- lists of ingress and/or egress rules: "which peers and ports are allowed or blocked?"
- If a pod is not targeted by any policy, traffic is allowed by default
- If a pod is targeted by at least one policy, traffic must be allowed explicitly
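We can always check which policies are in effect in a namespace (the policy name below is just an example):

```bash
# List network policies in the current namespace
kubectl get networkpolicies
# Inspect the pod selector and rules of a given policy
kubectl describe networkpolicy deny-all-for-testweb
```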
---
## More about network policies
- This remains a high level overview of network policies
- For more details, check:
- the [Kubernetes documentation about network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
- this [talk about network policies at KubeCon 2017 US](https://www.youtube.com/watch?v=3gGpMmYeEO8) by [@ahmetb](https://twitter.com/ahmetb)
---
## Switch back to the default namespace
- Let's make sure that we don't run future exercises in the `blue` namespace
.exercise[
- View the names of the contexts:
```bash
kubectl config get-contexts
```
- Switch back to the original context:
```bash
kubectl config use-context kubernetes-admin@kubernetes
kubectl config set-context --current --namespace=
```
]
Note: we could have used `--namespace=default` for the same result.
---
## Switching namespaces more easily
- We can also use a little helper tool called `kubens`:
```bash
# Switch to namespace foo
kubens foo
# Switch back to the previous namespace
kubens -
```
- On our clusters, `kubens` is called `kns` instead
(so that it's even fewer keystrokes to switch namespaces)
---
## `kubens` and `kubectx`
- With `kubens`, we can switch quickly between namespaces
- With `kubectx`, we can switch quickly between contexts
- Both tools are simple shell scripts available from https://github.com/ahmetb/kubectx
- On our clusters, they are installed as `kns` and `kctx`
(for brevity and to avoid completion clashes between `kubectx` and `kubectl`)
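Usage is symmetrical to `kns` (the context name is whatever `kubectl config get-contexts` shows):

```bash
# Switch to another context
kctx kubernetes-admin@kubernetes
# Switch back to the previous context
kctx -
```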
---
## `kube-ps1`
- It's easy to lose track of our current cluster / context / namespace
- `kube-ps1` makes it easy to track these, by showing them in our shell prompt
- It's a simple shell script available from https://github.com/jonmosco/kube-ps1
- On our clusters, `kube-ps1` is installed and included in `PS1`:
```
[123.45.67.89] `(kubernetes-admin@kubernetes:default)` docker@node1 ~
```
(The highlighted part is `context:namespace`, managed by `kube-ps1`)
- Highly recommended if you work across multiple contexts or namespaces!


@@ -117,13 +117,13 @@ This is our game plan:
- Let's use the `nginx` image:
```bash
kubectl run testweb --image=nginx
kubectl create deployment testweb --image=nginx
```
- Find out the IP address of the pod with one of these two commands:
```bash
kubectl get pods -o wide -l run=testweb
IP=$(kubectl get pods -l run=testweb -o json | jq -r .items[0].status.podIP)
kubectl get pods -o wide -l app=testweb
IP=$(kubectl get pods -l app=testweb -o json | jq -r .items[0].status.podIP)
```
- Check that we can connect to the server:
@@ -138,7 +138,7 @@ The `curl` command should show us the "Welcome to nginx!" page.
## Adding a very restrictive network policy
- The policy will select pods with the label `run=testweb`
- The policy will select pods with the label `app=testweb`
- It will specify an empty list of ingress rules (matching nothing)
@@ -172,7 +172,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress: []
```
@@ -207,7 +207,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress:
- from:
- podSelector:
@@ -325,7 +325,7 @@ spec:
## Allowing traffic to `webui` pods
This policy selects all pods with label `run=webui`.
This policy selects all pods with label `app=webui`.
It allows traffic from any source.
@@ -339,7 +339,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: webui
app: webui
ingress:
- from: []
```
@@ -371,6 +371,23 @@ troubleshoot easily, without having to poke holes in our firewall.
---
## Cleaning up our network policies
- The network policies that we have installed block all traffic to the default namespace
- We should remove them, otherwise further exercises will fail!
.exercise[
- Remove all network policies:
```bash
kubectl delete networkpolicies --all
```
]
---
## Protecting the control plane
- Should we add network policies to block unauthorized access to the control plane?
@@ -405,11 +422,11 @@ troubleshoot easily, without having to poke holes in our firewall.
- The API documentation has a lot of detail about the format of various objects:
- [NetworkPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#networkpolicy-v1-networking-k8s-io)
- [NetworkPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicy-v1-networking-k8s-io)
- [NetworkPolicySpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#networkpolicyspec-v1-networking-k8s-io)
- [NetworkPolicySpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicyspec-v1-networking-k8s-io)
- [NetworkPolicyIngressRule](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#networkpolicyingressrule-v1-networking-k8s-io)
- [NetworkPolicyIngressRule](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicyingressrule-v1-networking-k8s-io)
- etc.


@@ -1,12 +1,42 @@
class: title
# Shipping images with a registry
Our app on Kube
- Initially, our app was running on a single node
- We could *build* and *run* in the same place
- Therefore, we did not need to *ship* anything
- Now that we want to run on a cluster, things are different
- The easiest way to ship container images is to use a registry
---
## What's on the menu?
## How Docker registries work (a reminder)
In this part, we will:
- What happens when we execute `docker run alpine` ?
- If the Engine needs to pull the `alpine` image, it expands it into `library/alpine`
- `library/alpine` is expanded into `index.docker.io/library/alpine`
- The Engine communicates with `index.docker.io` to retrieve `library/alpine:latest`
- To use something else than `index.docker.io`, we specify it in the image name
- Examples:
```bash
docker pull gcr.io/google-containers/alpine-with-bash:1.0
docker build -t registry.mycompany.io:5000/myimage:awesome .
docker push registry.mycompany.io:5000/myimage:awesome
```
---
## The plan
We are going to:
- **build** images for our app,
@@ -14,25 +44,42 @@ In this part, we will:
- **run** deployments using these images,
- expose these deployments so they can communicate with each other,
- expose (with a ClusterIP) the deployments that need to communicate together,
- expose the web UI so we can access it from outside.
- expose (with a NodePort) the web UI so we can access it from outside.
---
## The plan
## Building and shipping our app
- Build on our control node (`node1`)
- We will pick a registry
- Tag images so that they are named `$REGISTRY/servicename`
(let's pretend the address will be `REGISTRY:PORT`)
- Upload them to a registry
- We will build on our control node (`node1`)
- Create deployments using the images
(the images will be named `REGISTRY:PORT/servicename`)
- Expose (with a ClusterIP) the services that need to communicate
- We will push the images to the registry
- Expose (with a NodePort) the WebUI
- These images will be usable by the other nodes of the cluster
(i.e., we could do `docker run REGISTRY:PORT/servicename` from these nodes)
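As a rough sketch (assuming the images were built by Compose under the `dockercoins_` prefix; the actual commands used in the exercises come later):

```bash
# Tag and push each image to our registry
for SERVICE in hasher rng webui worker; do
  docker tag dockercoins_$SERVICE $REGISTRY/$SERVICE:$TAG
  docker push $REGISTRY/$SERVICE:$TAG
done
```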
---
## A shortcut opportunity
- As it happens, the images that we need do already exist on the Docker Hub:
https://hub.docker.com/r/dockercoins/
- We could use them instead of using our own registry and images
*In the following slides, we are going to show how to run a registry
and use it to host container images. We will also show you how to
use the existing images from the Docker Hub, so that you can catch
up (or skip altogether the build/push part) if needed.*
---
@@ -40,18 +87,26 @@ In this part, we will:
- We could use the Docker Hub
- Or a service offered by our cloud provider (ACR, GCR, ECR...)
- There are alternatives like Quay
- Or we could just self-host that registry
- Each major cloud provider has an option as well
*We'll self-host the registry because it's the most generic solution for this workshop.*
(ACR on Azure, ECR on AWS, GCR on Google Cloud...)
- There are also commercial products to run our own registry
(Docker EE, Quay...)
- And open source options, too!
*We are going to self-host an open source registry because it's the most generic solution for this workshop. We will use Docker's reference
implementation for simplicity.*
---
## Using the open source registry
- We need to run a `registry:2` container
<br/>(make sure you specify tag `:2` to run the new version!)
- We need to run a `registry` container
- It will store images and layers to the local filesystem
<br/>(but you can add a config file to use S3, Swift, etc.)
@@ -67,7 +122,7 @@ In this part, we will:
---
# Deploying a self-hosted registry
## Deploying a self-hosted registry
- We will deploy a registry container, and expose it with a NodePort
@@ -75,7 +130,7 @@ In this part, we will:
- Create the registry service:
```bash
kubectl run registry --image=registry:2
kubectl create deployment registry --image=registry
```
- Expose it on a NodePort:
@@ -247,7 +302,28 @@ class: extra-details
---
## Deploying all the things
## Catching up
- If you have problems deploying the registry ...
- Or building or pushing the images ...
- Don't worry: you can easily use pre-built images from the Docker Hub!
- The images are named `dockercoins/worker:v0.1`, `dockercoins/rng:v0.1`, etc.
- To use them, just set the `REGISTRY` environment variable to `dockercoins`:
```bash
export REGISTRY=dockercoins
```
- Make sure to set the `TAG` to `v0.1`
(our repositories on the Docker Hub do not provide a `latest` tag)
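Putting the two together:

```bash
# Use the pre-built images from the Docker Hub
export REGISTRY=dockercoins
export TAG=v0.1
```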
---
# Running our application on Kubernetes
- We can now deploy our code (as well as a redis instance)
@@ -255,13 +331,13 @@ class: extra-details
- Deploy `redis`:
```bash
kubectl run redis --image=redis
kubectl create deployment redis --image=redis
```
- Deploy everything else:
```bash
for SERVICE in hasher rng webui worker; do
kubectl run $SERVICE --image=$REGISTRY/$SERVICE:$TAG
kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
```
@@ -300,7 +376,7 @@ kubectl wait deploy/worker --for condition=available
---
# Exposing services internally
## Connecting containers together
- Three deployments need to be reachable by others: `hasher`, `redis`, `rng`
@@ -347,7 +423,7 @@ We should now see the `worker`, well, working happily.
---
# Exposing services for external access
## Exposing services for external access
- Now we would like to access the Web UI
@@ -385,4 +461,8 @@ We should now see the `worker`, well, working happily.
--
Yes, this may take a little while to update. *(Narrator: it was DNS.)*
--
*Alright, we're back to where we started, when we were running on a single node!*


@@ -22,14 +22,19 @@
.exercise[
- Let's start a replicated `nginx` deployment:
- Let's create a deployment running `nginx`:
```bash
kubectl run yanginx --image=nginx --replicas=3
kubectl create deployment yanginx --image=nginx
```
- Scale it to a few replicas:
```bash
kubectl scale deployment yanginx --replicas=3
```
- Once it's up, check the corresponding pods:
```bash
kubectl get pods -l run=yanginx -o yaml | head -n 25
kubectl get pods -l app=yanginx -o yaml | head -n 25
```
]
@@ -99,12 +104,12 @@ so the lines should not be indented (otherwise the indentation will insert space
- Delete the Deployment:
```bash
kubectl delete deployment -l run=yanginx --cascade=false
kubectl delete deployment -l app=yanginx --cascade=false
```
- Delete the Replica Set:
```bash
kubectl delete replicaset -l run=yanginx --cascade=false
kubectl delete replicaset -l app=yanginx --cascade=false
```
- Check that the pods are still here:
@@ -126,7 +131,7 @@ class: extra-details
- If we change the labels on a dependent, so that it's not selected anymore
(e.g. change the `run: yanginx` in the pods of the previous example)
(e.g. change the `app: yanginx` in the pods of the previous example)
- If a deployment tool that we're using does these things for us
@@ -174,4 +179,4 @@ class: extra-details
]
As always, the [documentation](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) has useful extra information and pointers.
As always, the [documentation](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) has useful extra information and pointers.


@@ -122,13 +122,13 @@
- Create a 10 GB file on each node:
```bash
for N in $(seq 1 5); do ssh node$N sudo truncate --size 10G /portworx.blk; done
for N in $(seq 1 4); do ssh node$N sudo truncate --size 10G /portworx.blk; done
```
(If SSH asks to confirm host keys, enter `yes` each time.)
- Associate the file to a loop device on each node:
```bash
for N in $(seq 1 5); do ssh node$N sudo losetup /dev/loop0 /portworx.blk; done
for N in $(seq 1 4); do ssh node$N sudo losetup /dev/loop4 /portworx.blk; done
```
]
@@ -139,7 +139,7 @@
- To install Portworx, we need to go to https://install.portworx.com/
- This website will ask us a bunch of questoins about our cluster
- This website will ask us a bunch of questions about our cluster
- Then, it will generate a YAML file that we should apply to our cluster
@@ -168,7 +168,7 @@ way is to use https://install.portworx.com/.
FYI, this is how we obtained the YAML file used earlier:
```
KBVER=$(kubectl version -o json | jq -r .serverVersion.gitVersion)
BLKDEV=/dev/loop0
BLKDEV=/dev/loop4
curl https://install.portworx.com/1.4/?kbver=$KBVER&b=true&s=$BLKDEV&c=px-workshop&stork=true&lh=true
```
If you want to use an external key/value store, add one of the following:
@@ -295,7 +295,7 @@ It should show as `portworx-replicated (default)`.
- With a `volumeClaimTemplate` requesting a 1 GB volume
- That volume will be mounted to `/var/lib/postgresql`
- That volume will be mounted to `/var/lib/postgresql/data`
- There is another little detail: we enable the `stork` scheduler
@@ -328,7 +328,7 @@ spec:
- name: postgres
image: postgres:10.5
volumeMounts:
- mountPath: /var/lib/postgresql
- mountPath: /var/lib/postgresql/data
name: postgres
volumeClaimTemplates:
- metadata:
@@ -494,7 +494,7 @@ By "disrupt" we mean: "disconnect it from the network".
- Logout to go back on `node1`
<!-- ```keys ^D``` -->>
<!-- ```keys ^D``` -->
- Watch the events unfolding with `kubectl get events -w` and `kubectl get pods -w`


@@ -38,7 +38,7 @@
- An exporter serves metrics over HTTP, in plain text
- This is was the *node exporter* looks like:
- This is what the *node exporter* looks like:
http://demo.robustperception.io:9100/metrics
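We can peek at that format directly (this is a public demo instance):

```bash
# Show the first few metrics exposed by the node exporter
curl -s http://demo.robustperception.io:9100/metrics | head -n 20
```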
@@ -145,7 +145,7 @@ scrape_configs:
(it will even be gentler on the I/O subsystem since it needs to write less)
FIXME link to Goutham's talk
[Storage in Prometheus 2.0](https://www.youtube.com/watch?v=C4YV-9CrawA) by [Goutham V](https://twitter.com/putadent) at DC17EU
---


@@ -71,7 +71,7 @@
cd ~/container.training/stacks
```
- Edit `dockercoins/worker/worker.py`, update the `sleep` line to sleep 1 second
- Edit `dockercoins/worker/worker.py`; update the first `sleep` line to sleep 1 second
- Build a new tag and push it to the registry:
```bash


@@ -4,7 +4,9 @@
--
- We used `kubeadm` on freshly installed VM instances running Ubuntu 16.04 LTS
<!-- ##VERSION## -->
- We used `kubeadm` on freshly installed VM instances running Ubuntu 18.04 LTS
1. Install Docker
@@ -34,7 +36,7 @@
--
(At least ... not yet!)
(At least ... not yet! Though it's [experimental in 1.12](https://kubernetes.io/docs/setup/independent/high-availability/).)
--
@@ -45,7 +47,7 @@
## Other deployment options
- If you are on Azure:
[AKS](https://azure.microsoft.com/services/container-service/)
[AKS](https://azure.microsoft.com/services/kubernetes-service/)
- If you are on Google Cloud:
[GKE](https://cloud.google.com/kubernetes-engine/)
@@ -56,7 +58,7 @@
[kops](https://github.com/kubernetes/kops)
- On a local machine:
[minikube](https://kubernetes.io/docs/getting-started-guides/minikube/),
[minikube](https://kubernetes.io/docs/setup/minikube/),
[kubespawn](https://github.com/kinvolk/kube-spawn),
[Docker4Mac](https://docs.docker.com/docker-for-mac/kubernetes/)


@@ -167,7 +167,7 @@ spec:
- It indicates which *provisioner* to use
- And arbitrary paramters for that provisioner
- And arbitrary parameters for that provisioner
(replication levels, type of disk ... anything relevant!)
@@ -266,7 +266,9 @@ spec:
---
## Stateful sets in action
# Running a Consul cluster
- Here is a good use-case for Stateful sets!
- We are going to deploy a Consul cluster with 3 nodes
@@ -294,42 +296,54 @@ consul agent -data=dir=/consul/data -client=0.0.0.0 -server -ui \
-retry-join=`Y.Y.Y.Y`
```
- We need to replace X.X.X.X and Y.Y.Y.Y with the addresses of other nodes
- Replace X.X.X.X and Y.Y.Y.Y with the addresses of other nodes
- We can specify DNS names, but then they have to be FQDN
- It's OK for a pod to include itself in the list as well
- We can therefore use the same command-line on all nodes (easier!)
- The same command-line can be used on all nodes (convenient!)
---
## Discovering the addresses of other pods
## Cloud Auto-join
- When a service is created for a stateful set, individual DNS entries are created
- Since version 1.4.0, Consul can use the Kubernetes API to find its peers
- These entries are constructed like this:
- This is called [Cloud Auto-join]
`<name-of-stateful-set>-<n>.<name-of-service>.<namespace>.svc.cluster.local`
- Instead of passing an IP address, we need to pass a parameter like this:
- `<n>` is the number of the pod in the set (starting at zero)
```
consul agent -retry-join "provider=k8s label_selector=\"app=consul\""
```
- If we deploy Consul in the default namespace, the names could be:
- Consul needs to be able to talk to the Kubernetes API
- `consul-0.consul.default.svc.cluster.local`
- `consul-1.consul.default.svc.cluster.local`
- `consul-2.consul.default.svc.cluster.local`
- We can provide a `kubeconfig` file
- If Consul runs in a pod, it will use the *service account* of the pod
[Cloud Auto-join]: https://www.consul.io/docs/agent/cloud-auto-join.html#kubernetes-k8s-
---
## Setting up Cloud auto-join
- We need to create a service account for Consul
- We need to create a role that can `list` and `get` pods
- We need to bind that role to the service account
- And of course, we need to make sure that Consul pods use that service account
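A rough equivalent with imperative commands (the actual definitions used in the exercise live in `k8s/consul.yaml`; the names below are illustrative):

```bash
# Sketch of the RBAC plumbing needed for Cloud auto-join
kubectl create serviceaccount consul
kubectl create clusterrole consul-pod-reader --verb=get,list --resource=pods
kubectl create clusterrolebinding consul-pod-reader \
        --clusterrole=consul-pod-reader --serviceaccount=default:consul
# (the stateful set must also reference the service account
#  through its serviceAccountName field)
```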
---
## Putting it all together
- The file `k8s/consul.yaml` defines a service and a stateful set
- The file `k8s/consul.yaml` defines the required resources
(service account, cluster role, cluster role binding, service, stateful set)
- It has a few extra touches:
- the name of the namespace is injected through an environment variable
- a `podAntiAffinity` prevents two pods from running on the same node
- a `preStop` hook makes the pod leave the cluster when shutdown gracefully

slides/k8s/staticpods.md Normal file

@@ -0,0 +1,239 @@
# Static pods
- Pods are usually created indirectly, through another resource:
Deployment, Daemon Set, Job, Stateful Set ...
- They can also be created directly
- This can be done by writing YAML and using `kubectl apply` or `kubectl create`
- Some resources (not all of them) can be created with `kubectl run`
- Creating a resource with `kubectl` requires the API to be up
- If we want to run the API server (and its dependencies) on Kubernetes itself ...
... how can we create API pods (and other resources) when the API is not up yet?
---
## In theory
- Each component of the control plane can be replicated
- We could set up the control plane outside of the cluster
- Then, once the cluster is up, create replicas running on the cluster
- Finally, remove the replicas that are running outside of the cluster
*What could possibly go wrong?*
---
## Sawing off the branch you're sitting on
- What if anything goes wrong?
(During the setup or at a later point)
- Worst case scenario, we might need to:
- set up a new control plane (outside of the cluster)
- restore a backup from the old control plane
- move the new control plane to the cluster (again)
- This doesn't sound like a great experience
---
## Static pods to the rescue
- Pods are started by kubelet (an agent running on every node)
- To know which pods it should run, the kubelet queries the API server
- The kubelet can also get a list of *static pods* from:
- a directory containing one (or multiple) *manifests*, and/or
- a URL (serving a *manifest*)
- These "manifests" are basically YAML definitions
(As produced by `kubectl get pod my-little-pod -o yaml --export`)
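For instance, a manifest could be generated from an existing pod like this (the pod name and destination path are just for illustration):

```bash
# Export a pod definition and drop it in the static pod directory
kubectl get pod my-little-pod -o yaml --export |
  sudo tee /etc/kubernetes/manifests/my-little-pod.yaml
```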
---
## Static pods are dynamic
- Kubelet will periodically reload the manifests
- It will start/stop pods accordingly
(i.e. it is not necessary to restart the kubelet after updating the manifests)
- When connected to the Kubernetes API, the kubelet will create *mirror pods*
- Mirror pods are copies of the static pods
(so they can be seen with e.g. `kubectl get pods`)
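On a kubeadm cluster, the control plane's mirror pods show up in the `kube-system` namespace:

```bash
# Mirror pods of the static pods; the node name is appended to each pod name
kubectl -n kube-system get pods -o wide
```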
---
## Bootstrapping a cluster with static pods
- We can run control plane components with these static pods
- They don't need the API to be up (just the kubelet)
- Once they are up, the API becomes available
- These pods are then visible through the API
(We cannot upgrade them from the API, though)
*This is how kubeadm has initialized our clusters.*
---
## Static pods vs normal pods
- The API only gives us a read-only access to static pods
- We can `kubectl delete` a static pod ...
... But the kubelet will restart it immediately
- Static pods can be selected just like other pods
(So they can receive service traffic)
- A service can select a mixture of static and other pods
---
## From static pods to normal pods
- Once the control plane is up and running, it can be used to create normal pods
- We can then set up a copy of the control plane in normal pods
- Then the static pods can be removed
- The scheduler and the controller manager use leader election
(Only one is active at a time; removing an instance is seamless)
- Each instance of the API server adds itself to the `kubernetes` service
- Etcd will typically require more work!
---
## From normal pods back to static pods
- Alright, but what if the control plane is down and we need to fix it?
- We restart it using static pods!
- This can be done automatically with the [Pod Checkpointer]
- The Pod Checkpointer automatically generates manifests of running pods
- The manifests are used to restart these pods if API contact is lost
(More details in the [Pod Checkpointer] documentation page)
- This technique is used by [bootkube]
[Pod Checkpointer]: https://github.com/kubernetes-incubator/bootkube/blob/master/cmd/checkpoint/README.md
[bootkube]: https://github.com/kubernetes-incubator/bootkube
---
## Where should the control plane be?
*Is it better to run the control plane in static pods, or normal pods?*
- If I'm a *user* of the cluster: I don't care, it makes no difference to me
- What if I'm an *admin*, i.e. the person who installs, upgrades, repairs... the cluster?
- If I'm using a managed Kubernetes cluster (AKS, EKS, GKE...) it's not my problem
(I'm not the one setting up and managing the control plane)
- If I already picked a tool (kubeadm, kops...) to set up my cluster, the tool decides for me
- What if I haven't picked a tool yet, or if I'm installing from scratch?
- static pods = easier to set up, easier to troubleshoot, less risk of outage
- normal pods = easier to upgrade, easier to move (if nodes need to be shutdown)
---
## Static pods in action
- On our clusters, the `staticPodPath` is `/etc/kubernetes/manifests`
.exercise[
- Have a look at this directory:
```bash
ls -l /etc/kubernetes/manifests
```
]
We should see YAML files corresponding to the pods of the control plane.
---
## Running a static pod
- We are going to add a pod manifest to the directory, and kubelet will run it
.exercise[
- Copy a manifest to the directory:
```bash
sudo cp ~/container.training/k8s/just-a-pod.yaml /etc/kubernetes/manifests
```
- Check that it's running:
```bash
kubectl get pods
```
]
The output should include a pod named `hello-node1`.
---
## Remarks
In the manifest, the pod was named `hello`.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: hello
namespace: default
spec:
containers:
- name: hello
image: nginx
```
The `-node1` suffix was added automatically by kubelet.
If we delete the pod (with `kubectl delete`), it will be recreated immediately.
To delete the pod, we need to delete (or move) the manifest file.
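For example, using the manifest we copied earlier:

```bash
# Move the manifest out of the static pod directory; kubelet removes the pod
sudo mv /etc/kubernetes/manifests/just-a-pod.yaml /tmp/
kubectl get pods
```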


@@ -1,9 +1,10 @@
## Versions installed
- Kubernetes 1.11.0
- Docker Engine 18.03.1-ce
- Kubernetes 1.13.0
- Docker Engine 18.09.0
- Docker Compose 1.21.1
<!-- ##VERSION## -->
.exercise[
@@ -22,7 +23,7 @@ class: extra-details
## Kubernetes and Docker compatibility
- Kubernetes 1.10.x only validates Docker Engine versions [1.11.2 to 1.13.1 and 17.03.x](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#external-dependencies)
- Kubernetes 1.13.x only validates Docker Engine versions [up to 18.06](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#external-dependencies)
--
@@ -34,7 +35,9 @@ class: extra-details
class: extra-details
- "Validates" = continuous integration builds
- No!
- "Validates" = continuous integration builds with very extensive (and expensive) testing
- The Docker API is versioned, and offers strong backward-compatibility


@@ -20,6 +20,43 @@ And *then* it is time to look at orchestration!
---
## Options for our first production cluster
- Get a managed cluster from a major cloud provider (AKS, EKS, GKE...)
(price: $, difficulty: medium)
- Hire someone to deploy it for us
(price: $$, difficulty: easy)
- Do it ourselves
(price: $-$$$, difficulty: hard)
---
## One big cluster vs. multiple small ones
- Yes, it is possible to have prod+dev in a single cluster
(and implement good isolation and security with RBAC, network policies...)
- But it is not a good idea to do that for our first deployment
- Start with a production cluster + at least a test cluster
- Implement and check RBAC and isolation on the test cluster
(e.g. deploy multiple test versions side-by-side)
- Make sure that all our devs have usable dev clusters
(whether it's a local minikube or a full-blown multi-node cluster)
---
## Namespaces
- Namespaces let you run multiple identical stacks side by side
@@ -40,6 +77,18 @@ And *then* it is time to look at orchestration!
---
## Relevant sections
- [Namespaces](kube-selfpaced.yml.html#toc-namespaces)
- [Network Policies](kube-selfpaced.yml.html#toc-network-policies)
- [Role-Based Access Control](kube-selfpaced.yml.html#toc-authentication-and-authorization)
(covers permissions model, user and service accounts management ...)
---
## Stateful services (databases etc.)
- As a first step, it is wiser to keep stateful services *outside* of the cluster
@@ -62,15 +111,26 @@ And *then* it is time to look at orchestration!
## Stateful services (second take)
- If you really want to host stateful services on Kubernetes, you can look into:
- If we want to host stateful services on Kubernetes, we can use:
- volumes (to carry persistent data)
- a storage provider
- storage plugins
- persistent volumes, persistent volume claims
- persistent volume claims (to ask for specific volume characteristics)
- stateful sets
- stateful sets (pods that are *not* ephemeral)
- Good questions to ask:
- what's the *operational cost* of running this service ourselves?
- what do we gain by deploying this stateful service on Kubernetes?
- Relevant sections:
[Volumes](kube-selfpaced.yml.html#toc-volumes)
|
[Stateful Sets](kube-selfpaced.yml.html#toc-stateful-sets)
|
[Persistent Volumes](kube-selfpaced.yml.html#toc-highly-available-persistent-volumes)
---
@@ -89,7 +149,7 @@ And *then* it is time to look at orchestration!
- URI mapping
- and much more!
- Check out e.g. [Træfik](https://docs.traefik.io/user-guide/kubernetes/)
- [This section](kube-selfpaced.yml.html#toc-exposing-http-services-with-ingress-resources) shows how to expose multiple HTTP apps using [Træfik](https://docs.traefik.io/user-guide/kubernetes/)
---
@@ -105,6 +165,8 @@ And *then* it is time to look at orchestration!
(e.g. with an agent bind-mounting the log directory)
- [This section](kube-selfpaced.yml.html#toc-centralized-logging) shows how to do that with [Fluentd](https://docs.fluentd.org/v0.12/articles/kubernetes-fluentd) and the EFK stack
---
## Metrics
@@ -123,8 +185,6 @@ And *then* it is time to look at orchestration!
(but is being [deprecated](https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md) starting with Kubernetes 1.11)
---
## Managing the configuration of our applications
@@ -141,6 +201,8 @@ And *then* it is time to look at orchestration!
(It's the container equivalent of the password on a post-it note on your screen)
- [This section](kube-selfpaced.yml.html#toc-managing-configuration) shows how to manage app config with config maps (among others)
---
## Managing stack deployments
@@ -201,7 +263,7 @@ Sorry Star Trek fans, this is not the federation you're looking for!
## Developer experience
*I've put this last, but it's pretty important!*
*We've put this last, but it's pretty important!*
- How do you on-board a new developer?


@@ -33,30 +33,30 @@ chapters:
- k8s/kubectlrun.md
- k8s/kubectlexpose.md
- - k8s/ourapponkube.md
- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
# - k8s/kubectlproxy.md
# - k8s/localkubeconfig.md
# - k8s/accessinternal.md
- k8s/dashboard.md
- k8s/kubectlscale.md
- - k8s/daemonset.md
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/daemonset.md
- - k8s/rollout.md
# - k8s/healthchecks.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
- - k8s/helm.md
- k8s/namespaces.md
- k8s/netpol.md
- k8s/authn-authz.md
- - k8s/ingress.md
- k8s/gitworkflows.md
#- - k8s/helm.md
# - k8s/namespaces.md
# - k8s/netpol.md
# - k8s/authn-authz.md
#- - k8s/ingress.md
# - k8s/gitworkflows.md
- k8s/prometheus.md
- - k8s/volumes.md
- k8s/build-with-docker.md
- k8s/build-with-kaniko.md
- k8s/configuration.md
- - k8s/owners-and-dependents.md
- k8s/statefulsets.md
- k8s/portworx.md
#- - k8s/volumes.md
# - k8s/build-with-docker.md
# - k8s/build-with-kaniko.md
# - k8s/configuration.md
#- - k8s/owners-and-dependents.md
# - k8s/statefulsets.md
# - k8s/portworx.md
- - k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md


@@ -1,6 +1,7 @@
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"


@@ -5,6 +5,7 @@ title: |
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
@@ -56,6 +57,7 @@ chapters:
- - k8s/owners-and-dependents.md
- k8s/statefulsets.md
- k8s/portworx.md
- k8s/staticpods.md
- - k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md

slides/kube-twodays.yml Normal file

@@ -0,0 +1,62 @@
title: |
D&eacute;ployer ses applications
avec Kubernetes
chat: "[Gitter](https://gitter.im/enix/formation-kubernetes-20181217)"
gitrepo: github.com/jpetazzo/container.training
slides: http://decembre2018.container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- k8s/versions-k8s.md
- shared/sampleapp.md
- shared/composescale.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- - k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-k8s.md
- k8s/kubectlrun.md
- k8s/kubectlexpose.md
- - k8s/ourapponkube.md
- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/dashboard.md
- k8s/kubectlscale.md
- - k8s/daemonset.md
- k8s/rollout.md
- k8s/healthchecks.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
- - k8s/helm.md
- k8s/namespaces.md
- k8s/netpol.md
- k8s/authn-authz.md
- - k8s/ingress.md
- k8s/gitworkflows.md
- k8s/prometheus.md
- - k8s/volumes.md
- k8s/build-with-docker.md
- k8s/build-with-kaniko.md
- k8s/configuration.md
- - k8s/owners-and-dependents.md
- k8s/statefulsets.md
- k8s/portworx.md
- k8s/staticpods.md
- - k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md


@@ -1,26 +1,14 @@
## Intros
- This slide should be customized by the tutorial instructor(s).
- Hello! We are:
- .emoji[👩🏻‍🏫] Ann O'Nymous ([@...](https://twitter.com/...), Megacorp Inc)
- .emoji[👨🏾‍🎓] Stu Dent ([@...](https://twitter.com/...), University of Wakanda)
<!-- .dummy[
- .emoji[👷🏻‍♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- .emoji[] Jérémy ([@jeremygarrouste](twitter.com/jeremygarrouste), Inpiwee)
- .emoji[🎧] Romain ([@rdegez](https://twitter.com/rdegez), Enix SAS)
] -->
- The workshop will run from 9:15 to 17:30
- The workshop will run from ...
- There will be a lunch break at ...
- There will be a lunch break at noon
(And coffee breaks!)

slides/override.css Normal file

@@ -0,0 +1,17 @@
.remark-slide-content:not(.pic) {
background-repeat: no-repeat;
background-position: 99% 1%;
background-size: 8%;
background-image: url(https://enix.io/static/img/logos/logo-domain-cropped.png);
}
div.extra-details:not(.pic) {
background-image: url("images/extra-details.png"), url(https://enix.io/static/img/logos/logo-domain-cropped.png);
background-position: 0.5% 1%, 99% 1%;
background-size: 4%, 8%;
}
.remark-slide-content:not(.pic) div.remark-slide-number {
top: 16px;
right: 112px
}


@@ -17,7 +17,7 @@ fi
- Clone the repository on `node1`:
```bash
git clone git://@@GITREPO@@
git clone https://@@GITREPO@@
```
]
@@ -54,49 +54,84 @@ and displays aggregated logs.
---
## More detail on our sample application
## What's this application?
- Visit the GitHub repository with all the materials of this workshop:
<br/>https://@@GITREPO@@
--
- The application is in the [dockercoins](
https://@@GITREPO@@/tree/master/dockercoins)
subdirectory
- It is a DockerCoin miner! .emoji[💰🐳📦🚢]
- Let's look at the general layout of the source code:
--
there is a Compose file [docker-compose.yml](
https://@@GITREPO@@/blob/master/dockercoins/docker-compose.yml) ...
- No, you can't buy coffee with DockerCoins
... and 4 other services, each in its own directory:
--
- `rng` = web service generating random bytes
- `hasher` = web service computing hash of POSTed data
- `worker` = background process using `rng` and `hasher`
- `webui` = web interface to watch progress
- How DockerCoins works:
- generate a few random bytes
- hash these bytes
- increment a counter (to keep track of speed)
- repeat forever!
--
- DockerCoins is *not* a cryptocurrency
(the only common points are "randomness", "hashing", and "coins" in the name)
---
class: extra-details
## DockerCoins in the microservices era
## Compose file format version
- DockerCoins is made of 5 services:
*Particularly relevant if you have used Compose before...*
- `rng` = web service generating random bytes
- Compose 1.6 introduced support for a new Compose file format (aka "v2")
- `hasher` = web service computing hash of POSTed data
- Services are no longer at the top level, but under a `services` section
- `worker` = background process calling `rng` and `hasher`
- There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer)
- `webui` = web interface to watch progress
- Containers are placed on a dedicated network, making links unnecessary
- `redis` = data store (holds a counter updated by `worker`)
- There are other minor differences, but upgrade is easy and straightforward
- These 5 services are visible in the application's Compose file,
[docker-compose.yml](
https://@@GITREPO@@/blob/master/dockercoins/docker-compose.yml)
---
## How DockerCoins works
- `worker` invokes web service `rng` to generate random bytes
- `worker` invokes web service `hasher` to hash these bytes
- `worker` does this in an infinite loop
- every second, `worker` updates `redis` to indicate how many loops were done
- `webui` queries `redis`, and computes and exposes "hashing speed" in our browser
*(See diagram on next slide!)*
---
class: pic
![Diagram showing the 5 containers of the applications](images/dockercoins-diagram.svg)
---
## Service discovery in container-land
How does each service find out the address of the other ones?
--
- We do not hard-code IP addresses in the code
- We do not hard-code FQDN in the code, either
@@ -150,29 +185,46 @@ class: extra-details
---
## What's this application?
## Show me the code!
--
- You can check the GitHub repository with all the materials of this workshop:
<br/>https://@@GITREPO@@
- It is a DockerCoin miner! .emoji[💰🐳📦🚢]
- The application is in the [dockercoins](
https://@@GITREPO@@/tree/master/dockercoins)
subdirectory
--
- The Compose file ([docker-compose.yml](
https://@@GITREPO@@/blob/master/dockercoins/docker-compose.yml))
lists all 5 services
- No, you can't buy coffee with DockerCoins
- `redis` is using an official image from the Docker Hub
--
- `hasher`, `rng`, `worker`, `webui` are each built from a Dockerfile
- How DockerCoins works:
- Each service's Dockerfile and source code is in its own directory
- `worker` asks to `rng` to generate a few random bytes
(`hasher` is in the [hasher](https://@@GITREPO@@/blob/master/dockercoins/hasher/) directory,
`rng` is in the [rng](https://@@GITREPO@@/blob/master/dockercoins/rng/)
directory, etc.)
- `worker` feeds these bytes into `hasher`
---
- and repeat forever!
class: extra-details
- every second, `worker` updates `redis` to indicate how many loops were done
## Compose file format version
- `webui` queries `redis`, and computes and exposes "hashing speed" in your browser
*This is relevant only if you have used Compose before 2016...*
- Compose 1.6 introduced support for a new Compose file format (aka "v2")
- Services are no longer at the top level, but under a `services` section
- There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer)
- Containers are placed on a dedicated network, making links unnecessary
- There are other minor differences, but upgrade is easy and straightforward
---


@@ -11,10 +11,10 @@ class: title, in-person
@@TITLE@@<br/></br>
.footnote[
**Be kind to the WiFi!**<br/>
**WiFi: EnixTraining**<br/>
**Password: kubeforever**<br/>
<!-- *Use the 5G network.* -->
*Don't use your hotspot.*<br/>
*Don't stream videos or download big files during the workshop.*<br/>
*Don't stream videos or download big files during the workshop[.](https://www.youtube.com/watch?v=h16zyxiwDLY)*<br/>
*Thank you!*
**Slides: @@SLIDES@@**


@@ -40,9 +40,10 @@ chapters:
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/compose2swarm.md
- swarm/stacks.md
- swarm/cicd.md
- swarm/updatingservices.md
#- swarm/rollingupdates.md
- swarm/rollingupdates.md
- swarm/healthchecks.md
- - swarm/operatingswarm.md
- swarm/netshoot.md
@@ -55,6 +56,7 @@ chapters:
- swarm/apiscope.md
- - swarm/logging.md
- swarm/metrics.md
- swarm/gui.md
- swarm/stateful.md
- swarm/extratips.md
- shared/thankyou.md


@@ -40,7 +40,8 @@ chapters:
#- swarm/testingregistry.md
#- swarm/btp-manual.md
#- swarm/swarmready.md
- swarm/compose2swarm.md
- swarm/stacks.md
- swarm/cicd.md
- swarm/updatingservices.md
#- swarm/rollingupdates.md
#- swarm/healthchecks.md


@@ -41,7 +41,8 @@ chapters:
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/compose2swarm.md
- swarm/stacks.md
- swarm/cicd.md
- |
name: part-2


@@ -41,7 +41,7 @@ chapters:
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/compose2swarm.md
- swarm/stacks.md
- |
name: part-2

Some files were not shown because too many files have changed in this diff.