Compare commits

...

330 Commits

Author SHA1 Message Date
Jérôme Petazzoni
ca5d1d0330 Prepare ZOS specific content 2021-09-21 15:11:10 +02:00
Jérôme Petazzoni
c5cd84e274 🐞 Typo 2021-09-21 15:10:47 +02:00
Jérôme Petazzoni
108f936f84 Update Ingress chapter
Improve explanations and rationale for ingress resources.
Mention kubectl create ingress.
Explain the v1beta1/v1 update.
Mention Gateway API.
2021-09-21 14:31:47 +02:00
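For reference, `kubectl create ingress` builds an Ingress from one or more --rule flags of the form host/path=service:port; a minimal example (hostname and service name are hypothetical):

$ kubectl create ingress web --rule=web.example.com/*=web:80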
Jérôme Petazzoni
3594fef67a 🐞 Formatting fixes 2021-09-13 14:59:33 +02:00
Jérôme Petazzoni
021929e50e 📝 Add a bunch of exercises 2021-09-13 13:11:46 +02:00
Jérôme Petazzoni
e3fa685ee1 ♻️ Update logistics page; add reference to exercises 2021-09-13 10:26:24 +02:00
Jérôme Petazzoni
4f662d14cc 🐞 Fix Prometheus tag name 2021-08-14 22:03:50 +02:00
Jérôme Petazzoni
d956da1733 🐞 Typo fix 2021-08-14 21:26:47 +02:00
Jérôme Petazzoni
1b820f3bc1 ⬆️ Update Traefik to v2.5 to support Ingress v1
Ingress v1beta1 is no longer served in Kubernetes 1.22, so we need
a version of Traefik that uses Ingress v1. Traefik supports Ingress
v1 in Traefik v2.5 and above. Right now (August 2021) the traefik
image is v2.4, so let's pin the image version to v2.5 (which is
currently in rc) so that the Ingress labs work correctly with
Kubernetes 1.22.
2021-08-14 20:53:16 +02:00
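In the manifest, this boils down to pinning the image tag instead of relying on the default one, e.g. (fragment only, surrounding fields omitted; the exact tag may differ):

      containers:
      - name: traefik
        image: traefik:v2.5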
Jérôme Petazzoni
f1d4704b0e ⬆️ Update deployment scripts for kubeadm 1.22 2021-08-13 19:51:53 +02:00
Jerome Petazzoni
71423233bd 🔧 Fix Tomcat volume example
New Tomcat image (version 9) doesn't load any example webapp
by default, but ships with examples in webapps.dist.

Let's use this as an opportunity to demonstrate how to populate
empty volumes from container directories.

Closes #561.
2021-08-05 12:55:22 +02:00
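A minimal sketch of that pattern, assuming an init container that copies webapps.dist into an empty volume (names and paths are illustrative, not necessarily what the course uses):

apiVersion: v1
kind: Pod
metadata:
  name: tomcat-with-webapps
spec:
  volumes:
  - name: webapps
    emptyDir: {}
  initContainers:
  - name: populate
    image: tomcat:9
    # copy the sample webapps shipped in the image into the empty volume
    command: ["sh", "-c", "cp -a /usr/local/tomcat/webapps.dist/. /webapps/"]
    volumeMounts:
    - name: webapps
      mountPath: /webapps
  containers:
  - name: tomcat
    image: tomcat:9
    volumeMounts:
    - name: webapps
      mountPath: /usr/local/tomcat/webapps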
Jerome Petazzoni
b508360227 🔧 Fix OpenStack image version 2021-08-05 12:38:03 +02:00
Jérôme Petazzoni
7cd47243ab Merge pull request #590 from iambricegg/patch-1
Update btp-manual.md
2021-08-01 15:04:21 +02:00
Brice GG
a9d84b01d8 Update btp-manual.md
Fix the missing variable $TAG in the snippet that caused the push to the registry to fail.
2021-08-01 12:40:34 +00:00
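In other words, the snippet expects $TAG (and a registry address) to be set before building and pushing, along these lines (values are placeholders):

$ export TAG=v0.1
$ docker build -t $REGISTRY/worker:$TAG .
$ docker push $REGISTRY/worker:$TAG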
Jerome Petazzoni
4df547d9b1 🐞 Add a missing control plane component 2021-07-21 16:06:16 +02:00
Jerome Petazzoni
d14f86e683 ⬆️ Update CRD content to deprecate v1beta1 manifests 2021-07-21 15:50:27 +02:00
Jerome Petazzoni
92cdb4146b 🔧 Be more consistent when installing Helm charts
Always install Helm charts in their own namespace, and specify the
repo through a command-line flag instead of adding the repo.
2021-07-21 14:41:28 +02:00
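In practice, that means invocations along these lines (chart name and repository URL shown only as an example):

$ helm upgrade --install traefik traefik \
    --repo https://helm.traefik.io/traefik \
    --namespace traefik --create-namespace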
Jerome Petazzoni
0ca798bc30 🔧 Tweak managed Kubernetes section 2021-07-21 14:24:08 +02:00
Jerome Petazzoni
8025d37188 🔧 Tweak RBAC section; add auth can-i --list 2021-07-19 15:38:34 +02:00
Jerome Petazzoni
3318ce84e4 ⚠️ Fix ws security issue in autopilot
This is not a big deal since the autopilot code is only used by
me, in local environments; but that'll keep dependabot happy :)
2021-07-19 14:58:14 +02:00
Jerome Petazzoni
3e29881ece 💻️ Add image setting for OpenStack TF infra template 2021-07-19 14:55:32 +02:00
Jérôme Petazzoni
b91ed846a0 🐞 Typo 2021-06-24 15:25:50 +02:00
Jérôme Petazzoni
f123878c85 Merge pull request #588 from jeansebastienh/fix
doc: fix 1/60 => 1.66%
2021-06-24 14:55:02 +02:00
Jean-Sébastien Hedde
2c048a0193 doc: fix 1/60 => 1.66% 2021-06-24 11:25:45 +02:00
Jérôme Petazzoni
ee7bd37f83 ♻️ Update download URL for k9s 2021-06-10 17:25:28 +02:00
Jérôme Petazzoni
166cacc48e ♻️ Update slides counting script 2021-06-10 07:56:53 +02:00
Jérôme Petazzoni
9595179f03 ♻️ Rename settings files 2021-06-07 17:46:32 +02:00
Jérôme Petazzoni
3b6509b95b 🐞 Fix minor bug in inventory command 2021-06-07 17:22:02 +02:00
Jérôme Petazzoni
c84a5ce6b7 📅 Add more Enix sessions + fix past slides 2021-06-04 17:33:13 +02:00
Jérôme Petazzoni
4402c17eb9 📃Add info about hyperkube in k8s 1.19 2021-06-03 15:59:26 +02:00
Jérôme Petazzoni
4f04046fea 🤖Update deployment scripts 2021-05-31 08:12:35 +02:00
Jérôme Petazzoni
6a6882802d 🔒️Creative exercise with Sealed Secrets 2021-05-26 08:23:49 +02:00
Jérôme Petazzoni
75f33bb9d8 🕵️ Add another YAML to help gain access to clusters 2021-05-21 18:33:30 +02:00
Jérôme Petazzoni
ab266aba83 ♻️ Refactor TOC generator
"Modules" are now named "parts".
When there are more than 9 subparts in a part, the titles will
be smooched together in the TOC so that they fit on a single
page. Otherwise, line breaks are added (like before) so that
the text can breathe a little bit.
2021-05-21 18:32:11 +02:00
Jérôme Petazzoni
e26eeb4386 🤖 Update dependencies (thanks @dependabot!)
We don't use that part of the code at the moment, but it's
probably safer to update it anyway. Good hygiene! 🧼
2021-05-08 15:39:15 +02:00
Jérôme Petazzoni
98429e14f0 🔥 Add prometheus-stack + Grafana content (from LKE workshop) and update metrics-server section 2021-05-04 17:19:59 +02:00
Jérôme Petazzoni
bbf65f7433 📃 Update 1-day program 2021-05-04 16:26:39 +02:00
Jérôme Petazzoni
cb6f3989fd ⚙ Refactor SSH options; add check for Terraform signature problem 2021-05-04 13:06:08 +02:00
Jerome Petazzoni
dbc87e7a0d 🔧Minor fixes 2021-04-27 16:57:36 +02:00
Jerome Petazzoni
08d7b93be1 🔌Minor tweaks to networking sections 2021-04-27 16:53:55 +02:00
Jerome Petazzoni
b66b8d25af 🖼 Fix picture CSS rules (hopefully for good this time😅) 2021-04-27 15:53:19 +02:00
Jerome Petazzoni
f780e4a0e6 💾Update volume section 2021-04-26 16:58:09 +02:00
Jerome Petazzoni
a129187ce1 🔌Update container networking basics 2021-04-26 15:29:20 +02:00
Jerome Petazzoni
ac0547d96b 📃Update Dockerfile exercise instructions 2021-04-26 09:15:05 +02:00
Jerome Petazzoni
58ccebf5c7 🎼Big Compose update 2021-04-26 01:45:29 +02:00
Jerome Petazzoni
56b9b864bb 📃 Add more BuildKit content 2021-04-25 20:13:24 +02:00
Jérôme Petazzoni
f49a8f2ec9 📃 Update container content with multi-arch 2021-04-25 16:26:03 +02:00
Jérôme Petazzoni
ea031a6231 ✂️ Remove listall command; rename list into inventory; update README 2021-04-24 17:25:53 +02:00
Jérôme Petazzoni
c92e887c53 🔐 Add 'workshopctl passwords' command 2021-04-24 17:14:03 +02:00
Jérôme Petazzoni
a6992e0c09 🔧 Fix warn→warning that had been overlooked earlier 2021-04-24 15:32:16 +02:00
Jérôme Petazzoni
07818688a7 ✂️ Remove emoji class
It shouldn't be necessary, since it was basically specifying a
font that may or may not be installed on folks' computers (and
wasn't loaded from the CSS). Tiny simplification but I'll take it 😁
2021-04-24 15:31:27 +02:00
Jérôme Petazzoni
c624415e78 📃 Update Kustomize section 2021-04-24 14:43:37 +02:00
Jérôme Petazzoni
112f6ec3b7 Merge pull request #586 from jpetazzo/fix_helm_version_range
 Add missing comma for helm version range
2021-04-22 11:06:44 +02:00
Jérôme Petazzoni
f51b5c7244 ♻️ Update rbac.authorization.k8s.io/v1beta1 to v1 + vendor YAML
This bumps up all the deprecated RBAC YAML to v1.

It also updates a few vendored YAMLs.

Oh, and removes the unused Service resources from the Traefik YAMLs.

Closes #585
2021-04-22 11:04:14 +02:00
Jérôme Petazzoni
88a5041943 ♻️ Update ingress.yaml
Provide two files (v1beta1 and v1) and a symlink pointing to v1beta1.

There are many folks still running older versions of Kubernetes; so I'm
making v1beta1 the default, but I hope to be able to switch to v1 by
end of year and remove the v1beta1 one.

Closes #584
2021-04-22 10:26:42 +02:00
Jérôme Petazzoni
8d7f8c9c05 🔧 Add missing dependency to workshopctl 2021-04-22 10:23:14 +02:00
Jérôme Petazzoni
19fc53dbbd ⚠️ Fix warn → warning 2021-04-19 17:27:19 +02:00
Jerome Petazzoni
d74a331a05 📃 Update cert-manager install instructions 2021-04-15 09:43:38 +02:00
Jerome Petazzoni
53a3c8a86a 📃 Update Helm intro blurb 2021-04-15 09:39:12 +02:00
Julien Girardin
2214717aaa Add missing comma for helm version range 2021-04-14 12:12:37 +02:00
Jerome Petazzoni
e75e4d7f2c 🗂️ Update table of contents to add new Helm chapters
Closes #580
2021-04-12 18:33:30 +02:00
Jerome Petazzoni
84c33b9eae Merge @zempashi's Helm content 🎉 2021-04-12 18:28:56 +02:00
Jerome Petazzoni
e606cd2b21 ✂️ Don't include helm.yml 2021-04-12 18:28:46 +02:00
Jerome Petazzoni
d217e52ab5 🔐 Add rbac-lookup plugin info in RBAC section 2021-04-09 17:34:49 +02:00
Jerome Petazzoni
f3c3646298 🔥 Deprecate --count in favor of --students 2021-04-09 17:16:12 +02:00
Jerome Petazzoni
f25bf60d46 ♻️ Replace the Tomcat example with the OWASP Juice Shop 2021-04-09 17:12:55 +02:00
Jerome Petazzoni
6ab11ca91c 🔐 Add cert-manager + Ingress annotation information 2021-04-09 15:48:10 +02:00
Jerome Petazzoni
a5d857edd4 ✂️ Simplify Consul YAML a tiny bit 2021-04-09 15:26:27 +02:00
Jerome Petazzoni
25d6073b17 ✂️ Remove unused annotations (they're confusing) 2021-04-09 13:46:52 +02:00
Jerome Petazzoni
216fefad23 Merge branch 'otomato-gh-add-openebs' 2021-04-09 12:51:53 +02:00
Jerome Petazzoni
f3eb9ce12f 👀 Review + improve OpenEBS content 2021-04-09 12:51:38 +02:00
Jerome Petazzoni
a484425c81 ✏️ Add non-dedicated control plane
Thanks @zempashi for the suggestion 👍🏻
2021-04-07 19:24:13 +02:00
Jerome Petazzoni
67806fc592 ✏️ Add a bunch of control plane diagrams 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
cfcf874bac 📃 Update section summaries 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
858afc846c 🚪 Instructions to access EKS cluster 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
629b4d1037 💬 Add Slack chat room template 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
58f2894b54 📃 Document the EKS shell scripts 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
df1db67e53 🔀 Move @soulshake's scripts and commands to prepare-eks directory 2021-04-07 19:24:12 +02:00
AJ Bowen
068c81bdcd Fix incorrect bits in create_describe_cluster_policy 2021-04-07 19:24:12 +02:00
AJ Bowen
911d78aede Rename test pod 2021-04-07 19:24:12 +02:00
AJ Bowen
305674fa3c Add --overwrite when annotating service account 2021-04-07 19:24:12 +02:00
AJ Bowen
6bdc687cc7 Remove partial teardown command 2021-04-07 19:24:12 +02:00
AJ Bowen
49e3a0b75f Add a quick/dirty script to associate a role with the default service account in the default namespace granting r/o access to an s3 bucket 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
5acb05dfff ⚙️ Add EKS prep scripts 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
edaef92b35 🚫 Remove 0.yml 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
63fccb495f ⚠️ Improve error reporting for missing content files 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
055c8a7267 📃 Minor slides update 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
f72847bc81 ☁️ Add support for Linode deployment 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
4be82f4f57 Add some quizzes 2021-04-07 19:24:12 +02:00
Jerome Petazzoni
cb760dbe94 ✍️ Add details about how to author YAML 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
f306749f68 🖨️ Improve output in case no arg is provided 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
8d20fa4654 🐞 Fix missing resource name in Kyverno examples 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
249d446ef2 🔑 Add Cilium and Tufin web tools to generate and view network policies 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
fe84dec863 🔑 Add details about etcd security 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
ce8dc2cdff 🔧 Minor tweaks and improvements 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
bc33f1f5df 💻️ Update Scaleway deployment scripts 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
8597ca1956 🔧 Fix args example 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
2300d0719b ✂️ Remove ctr.run 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
2e6230a9a0 🔑 Explain how to use imagePullSecrets 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
ae17c2479c 📊 Update Helm stable chart and add deprecation warning 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
23f7e8cff9 ↔️ Update DNS map script 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
f72cf16c82 🐞 Fix Helm command in Prom deploy 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
6ec8849da1 🧪 Add GitLab chapter 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
6c11de207a 🔎 Extra details about CPU limits 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
2295e4f3de 🐞 Fix missing closing triple-backquote 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
18853b2497 Add diagrams showing the different k8s network layers 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
426957bdca Add Tilt section 2021-04-07 19:23:55 +02:00
Jerome Petazzoni
6bc08c0a7e Add k9s section 2021-04-07 19:23:55 +02:00
Anton Weiss
88d4e5ff54 Update volumeSnapshot link and status 2021-04-07 19:23:55 +02:00
dependabot[bot]
e3e4d04202 Bump socket.io from 2.0.4 to 2.4.0 in /slides/autopilot
Bumps [socket.io](https://github.com/socketio/socket.io) from 2.0.4 to 2.4.0.
- [Release notes](https://github.com/socketio/socket.io/releases)
- [Changelog](https://github.com/socketio/socket.io/blob/2.4.0/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io/compare/2.0.4...2.4.0)

Signed-off-by: dependabot[bot] <support@github.com>
2021-04-07 19:23:55 +02:00
Jerome Petazzoni
be6d982e2c ✏️ Add non-dedicated control plane
Thanks @zempashi for the suggestion 👍🏻
2021-04-07 16:52:36 +02:00
Jerome Petazzoni
04bc8a9f60 ✏️ Add a bunch of control plane diagrams 2021-04-07 16:00:34 +02:00
Julien Girardin
b0dc1c7c3f Fix blank slide, and title of Helm Invalid values 2021-04-07 11:32:30 +02:00
Jerome Petazzoni
bb1b225026 👀 Review and suggestions for new Helm content 2021-04-06 08:29:10 +02:00
Julien Girardin
2160aa7f40 Split chapter for better toc 2021-04-06 08:29:10 +02:00
Julien Girardin
8f75a4cd7f 👮 Add values schema validation 2021-04-06 08:29:10 +02:00
Jerome Petazzoni
45213a8f2e 👀 Review dependency chapter 2021-04-06 08:29:10 +02:00
Julien Girardin
f03aedd024 🏠Helm dependencies 2021-04-06 08:29:10 +02:00
Jerome Petazzoni
fcfcb127b4 📃 Update section summaries 2021-03-30 18:09:24 +02:00
Jerome Petazzoni
5380b2d52a 🚪 Instructions to access EKS cluster 2021-03-28 20:08:58 +02:00
Jerome Petazzoni
cc5da860b9 💬 Add Slack chat room template 2021-03-28 18:28:38 +02:00
Jerome Petazzoni
9e9b17f6c9 📃 Document the EKS shell scripts 2021-03-28 15:36:25 +02:00
Jerome Petazzoni
b9ea938157 🔀 Move @soulshake's scripts and commands to prepare-eks directory 2021-03-28 12:59:54 +02:00
Jerome Petazzoni
b23aacdce0 Merge remote-tracking branch 'soulshake/aj/eks-role' 2021-03-28 11:14:42 +02:00
Jerome Petazzoni
c3d6e5e660 ⚙️ Add EKS prep scripts 2021-03-28 11:12:50 +02:00
Jerome Petazzoni
907adf8075 🚫 Remove 0.yml 2021-03-28 11:11:18 +02:00
AJ Bowen
dff505ac76 Fix incorrect bits in create_describe_cluster_policy 2021-03-28 10:53:48 +02:00
AJ Bowen
df0ffc4d75 Rename test pod 2021-03-27 19:15:24 +01:00
AJ Bowen
02278b3748 Add --overwrite when annotating service account 2021-03-27 19:13:34 +01:00
AJ Bowen
ab959220ba Remove partial teardown command 2021-03-27 19:12:30 +01:00
AJ Bowen
b4576e39d0 Add a quick/dirty script to associate a role with the default service account in the default namespace granting r/o access to an s3 bucket 2021-03-27 19:09:08 +01:00
Jerome Petazzoni
894dafeecb ⚠️ Improve error reporting for missing content files 2021-03-18 14:57:46 +01:00
Jerome Petazzoni
366c656d82 📃 Minor slides update 2021-03-17 23:55:26 +01:00
Jerome Petazzoni
a60f929232 ☁️ Add support for Linode deployment 2021-03-14 19:22:31 +01:00
Jerome Petazzoni
fdc58cafda Add some quizzes 2021-03-14 19:21:43 +01:00
Jerome Petazzoni
8de186b909 ✍️ Add details about how to author YAML 2021-03-11 12:55:53 +01:00
Jerome Petazzoni
b816d075d4 🖨️ Improve output in case no arg is provided 2021-03-10 19:45:23 +01:00
Jerome Petazzoni
6303b67b86 🐞 Fix missing resource name in Kyverno examples 2021-02-27 19:52:07 +01:00
Jerome Petazzoni
4f3bb9beb2 🔑 Add Cilium and Tufin web tools to generate and view network policies 2021-02-27 19:48:38 +01:00
Jerome Petazzoni
1f34da55b3 🔑 Add details about etcd security 2021-02-27 19:13:50 +01:00
Jerome Petazzoni
f30792027f 🔧 Minor tweaks and improvements 2021-02-24 22:35:25 +01:00
Jerome Petazzoni
74679ab77e 💻️ Update Scaleway deployment scripts 2021-02-24 21:41:30 +01:00
Jerome Petazzoni
71ce2eb31a 🔧 Fix args example 2021-02-24 18:22:47 +01:00
Jerome Petazzoni
eb96dd21bb ✂️ Remove ctr.run 2021-02-24 14:20:09 +01:00
Anton Weiss
b1adca025d Add openebs tutorial 2021-02-24 12:26:44 +02:00
Jerome Petazzoni
e82d2812aa 🔑 Explain how to use imagePullSecrets 2021-02-23 21:44:57 +01:00
Jerome Petazzoni
9c8c3ef537 📊 Update Helm stable chart and add deprecation warning 2021-02-22 22:30:19 +01:00
Jerome Petazzoni
2f2948142a ↔️ Update DNS map script 2021-02-22 21:35:02 +01:00
Jerome Petazzoni
2516b2d32b 🐞 Fix Helm command in Prom deploy 2021-02-21 16:29:49 +01:00
Jerome Petazzoni
42f4b65c87 🧪 Add GitLab chapter 2021-02-21 15:12:00 +01:00
Jerome Petazzoni
989a62b5ff 🔎 Extra details about CPU limits 2021-02-20 11:51:45 +01:00
Jerome Petazzoni
b5eb59ab80 🐞 Fix missing closing triple-backquote 2021-02-18 09:18:23 +01:00
Jerome Petazzoni
10920509c3 Add diagrams showing the different k8s network layers 2021-02-15 22:19:45 +01:00
Jerome Petazzoni
955149e019 Add Tilt section 2021-02-07 21:44:38 +01:00
Jerome Petazzoni
111ff30c38 Add k9s section 2021-02-07 21:41:08 +01:00
Jérôme Petazzoni
6c038a5d33 Merge pull request #578 from otomato-gh/volumeSnapshotsInfo
Update volumeSnapshot link and status
2021-02-05 09:35:39 +01:00
Anton Weiss
6737a20840 Update volumeSnapshot link and status 2021-01-31 12:18:09 +02:00
Jérôme Petazzoni
1d1060a319 Merge pull request #577 from jpetazzo/dependabot/npm_and_yarn/slides/autopilot/socket.io-2.4.0
Bump socket.io from 2.0.4 to 2.4.0 in /slides/autopilot
2021-01-26 08:01:45 -06:00
dependabot[bot]
93e9a60634 Bump socket.io from 2.0.4 to 2.4.0 in /slides/autopilot
Bumps [socket.io](https://github.com/socketio/socket.io) from 2.0.4 to 2.4.0.
- [Release notes](https://github.com/socketio/socket.io/releases)
- [Changelog](https://github.com/socketio/socket.io/blob/2.4.0/CHANGELOG.md)
- [Commits](https://github.com/socketio/socket.io/compare/2.0.4...2.4.0)

Signed-off-by: dependabot[bot] <support@github.com>
2021-01-20 23:13:24 +00:00
Jerome Petazzoni
de2c0e72c3 Add 2021 high five sessions 2021-01-13 00:41:59 -06:00
Jerome Petazzoni
41204c948b 📃 Add Kubernetes internal APIs 2021-01-05 16:12:36 -06:00
Jerome Petazzoni
553b1f7871 Expand secrets section 2021-01-04 21:14:23 -06:00
Jerome Petazzoni
bd168f7676 Diametrally doesn't seem to be an English word
Thanks Peter Uys for letting me know :)
2020-12-11 17:07:42 +01:00
Jérôme Petazzoni
3a527649d1 Merge pull request #576 from hvariant/patch-1
fix typo
2020-12-08 23:05:26 +01:00
hvariant
ecbbcf8b51 fix typo 2020-12-05 12:26:43 +11:00
Jerome Petazzoni
29edb1aefe Minor tweaks after 1st NR session 2020-11-30 00:29:05 +01:00
Jerome Petazzoni
bd3c91f342 Update udemy promo codes 2020-11-23 12:26:04 +01:00
jsubirat
fa709f0cb4 Update kyverno.md
Adds missing `pod`s in the commands
2020-11-19 17:29:12 +01:00
jsubirat
543b44fb29 Update kyverno.md
Adds missing `pod` in the command
2020-11-19 17:28:54 +01:00
Jerome Petazzoni
536a9cc44b Update advanced TOC 2020-11-15 22:06:49 +01:00
Jerome Petazzoni
2ff3d88bab typo 2020-11-15 22:06:38 +01:00
Jerome Petazzoni
295ee9b6b4 Add warning about using CSR API for user certs 2020-11-15 19:29:45 +01:00
Jerome Petazzoni
17c5f6de01 Add cert-manager section 2020-11-15 19:29:35 +01:00
Jerome Petazzoni
556dbb965c Add networking.k8s.io permissions to Traefik v2 2020-11-15 18:44:17 +01:00
Jerome Petazzoni
32250f8053 Update section about swap with cgroups v2 info 2020-11-15 16:44:18 +01:00
Jerome Petazzoni
bdede6de07 Add aggregation layer details 2020-11-14 20:57:27 +01:00
Jerome Petazzoni
eefdc21488 Add details about /status 2020-11-14 19:10:04 +01:00
Jerome Petazzoni
e145428910 Add notes about backups 2020-11-14 14:39:43 +01:00
Jerome Petazzoni
76789b6113 Add Sealed Secrets 2020-11-14 14:35:49 +01:00
Jerome Petazzoni
f9660ba9dc Add kubebuilder tutorial 2020-11-13 18:46:16 +01:00
Jerome Petazzoni
c2497508f8 Add API server deep dive 2020-11-13 15:08:15 +01:00
Jerome Petazzoni
b5d3b213b1 Update CRD section 2020-11-13 12:50:55 +01:00
Jerome Petazzoni
b4c76ad11d Add CNI deep dive 2020-11-12 13:37:33 +01:00
Jerome Petazzoni
b251ff3812 --output-watch-events 2020-11-11 22:46:20 +01:00
Jerome Petazzoni
ede4ea0dd5 Add note about GVK 2020-11-11 21:17:54 +01:00
Jerome Petazzoni
2ab06c6dfd Add events section 2020-11-11 20:51:33 +01:00
Jerome Petazzoni
3a01deb039 Add section on finalizers 2020-11-11 15:05:33 +01:00
Jerome Petazzoni
b88f63e1f7 Update Docker Desktop and k3d instructions
Fixes #572
2020-11-10 17:55:02 +01:00
Jerome Petazzoni
918311ac51 Separate CRD and ECK; reorganize API extension chapter 2020-11-10 17:43:08 +01:00
Jerome Petazzoni
73e8110f09 Tweak 2020-11-10 17:43:08 +01:00
Jerome Petazzoni
ecb5106d59 Add provenance of default RBAC rules 2020-11-10 17:43:08 +01:00
Jérôme Petazzoni
e4d8cd4952 Merge pull request #573 from wrekone/master
Update ingress.md
2020-11-05 06:51:09 +01:00
Ben
c4aedbd327 Update ingress.md
fix typo
2020-11-04 20:19:34 -08:00
Jerome Petazzoni
2fb3584b1b Small update about selectors 2020-11-03 21:59:04 +01:00
Jerome Petazzoni
cb90cc9a1e Rename images 2020-10-31 11:32:16 +01:00
Jerome Petazzoni
bf28dff816 Add HPA v2 content using Prometheus Adapter 2020-10-30 17:55:46 +01:00
Jerome Petazzoni
b5cb871c69 Update Prometheus chart location 2020-10-29 17:39:14 +01:00
Jerome Petazzoni
aa8f538574 Add example to generate certs with local CA 2020-10-29 14:53:42 +01:00
Jerome Petazzoni
ebf2e23785 Add info about advanced label selectors 2020-10-29 12:32:01 +01:00
Jerome Petazzoni
0553a1ba8b Add chapter on Kyverno 2020-10-28 00:00:32 +01:00
Jerome Petazzoni
9d47177028 Add activeDeadlineSeconds explanation 2020-10-27 11:11:29 +01:00
Jerome Petazzoni
9d4a035497 Add Kompose, Skaffold, and Tilt. Move tools to a separate kubetools action. 2020-10-27 10:58:31 +01:00
Jerome Petazzoni
6fe74cb35c Add note about 'kubectl describe ns' 2020-10-24 16:23:36 +02:00
Jerome Petazzoni
43aa41ed51 Add note to remap_nodeports command 2020-10-24 16:23:21 +02:00
Jerome Petazzoni
f6e810f648 Add k9s and popeye 2020-10-24 11:27:33 +02:00
Jerome Petazzoni
4c710d6826 Add Krew support 2020-10-23 21:19:27 +02:00
Jerome Petazzoni
410c98399e Use empty values by default
This allows content rendering with an almost-empty YAML file
2020-10-22 14:13:11 +02:00
Jerome Petazzoni
19c9843a81 Add admission webhook content 2020-10-22 14:12:32 +02:00
Jerome Petazzoni
69d084e04a Update PSP (runtime/default instead of docker/default) 2020-10-20 22:11:26 +02:00
Jerome Petazzoni
1300d76890 Update dashboard content 2020-10-20 21:19:08 +02:00
Jerome Petazzoni
0040313371 Bump up admin clusters scripts 2020-10-20 16:53:24 +02:00
Jerome Petazzoni
c9e04b906d Bump up k8s bins; add 'k' alias and completion 2020-10-20 16:53:24 +02:00
Jérôme Petazzoni
41f66f4144 Merge pull request #571 from bbaassssiiee/bugfix/typo
typo: should read: characters
2020-10-20 11:29:32 +02:00
Bas Meijer
aced587fd0 characters 2020-10-20 11:03:59 +02:00
Jerome Petazzoni
749b3d1648 Add survey form 2020-10-13 16:05:33 +02:00
Jérôme Petazzoni
c40cc71bbc Merge pull request #570 from fc92/patch-2
update server-side dry run for recent kubectl
2020-10-11 23:22:28 +02:00
Jérôme Petazzoni
69b775ef27 Merge pull request #569 from fc92/patch-1
Update dashboard.md
2020-10-11 23:20:51 +02:00
fc92
3bfc14c5f7 update server-side dry run for recent kubectl
Error message :
$ kubectl apply -f web.yaml --server-dry-run --validate=false -o yaml                                                                   
Error: unknown flag: --server-dry-run                                                                                                   
See 'kubectl apply --help' for usage.

Doc : 
      --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be                  
sent, without sending it. If server strategy, submit server-side request without persisting the resource.
2020-10-10 23:07:45 +02:00
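With recent kubectl, the equivalent command becomes:

$ kubectl apply -f web.yaml --dry-run=server --validate=false -o yaml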
fc92
97984af8a2 Update dashboard.md
Kube Ops View URL changed to
2020-10-10 22:12:21 +02:00
Jérôme Petazzoni
9b31c45899 Merge pull request #567 from christianbumann/patch-1
Add description for the -f flag
2020-10-08 08:37:26 +02:00
Jérôme Petazzoni
c0db28d439 Merge pull request #568 from christianbumann/patch-2
Fix typo
2020-10-08 08:36:38 +02:00
Jérôme Petazzoni
0e49bfa837 Merge pull request #566 from tullo/master
fix backend svc name in cheeseplate ingress
2020-10-08 08:36:11 +02:00
Christian Bumann
fc9c0a6285 Update Container_Network_Model.md 2020-10-08 08:16:53 +02:00
Christian Bumann
d4914fa168 Fix typo 2020-10-08 08:14:59 +02:00
Christian Bumann
e4edd9445c Add description for the -f flag 2020-10-07 14:00:19 +02:00
Andreas Amstutz
ba7deefce5 fix k8s version 2020-10-05 12:06:26 +02:00
Andreas
be104f1b44 fix backend svc name in cheeseplate ingress 2020-10-05 12:02:31 +02:00
Jerome Petazzoni
5c329b0b79 Bump versions 2020-10-04 20:59:36 +02:00
Jerome Petazzoni
78ffd22499 Typo fix 2020-10-04 15:53:40 +02:00
Jerome Petazzoni
33174a1682 Add clean command 2020-09-27 16:25:37 +02:00
Jerome Petazzoni
d402a2ea93 Add tailhist 2020-09-24 17:00:52 +02:00
Jerome Petazzoni
1fc3abcffd Add jid (JSON explorer tool) 2020-09-24 11:52:03 +02:00
Jerome Petazzoni
c1020f24b1 Add Ingress TLS chapter 2020-09-15 17:44:05 +02:00
Jerome Petazzoni
4fc81209d4 Skip comments in domain file 2020-09-14 17:43:11 +02:00
Jerome Petazzoni
ed841711c5 Fix 'list' command 2020-09-14 16:58:55 +02:00
Jerome Petazzoni
07457af6f7 Update Consul section 2020-09-11 22:30:18 +02:00
Jerome Petazzoni
2d4961fbd3 Add fwdays slides 2020-09-11 15:13:24 +02:00
Jerome Petazzoni
14679999be Big refactor of deployment script
Add support for OVHcloud, Hetzner; refactor Scaleway support
2020-09-09 19:37:15 +02:00
Jerome Petazzoni
29c6d2876a Reword sanity check 2020-09-08 11:08:58 +02:00
Jerome Petazzoni
a02e7429ad Add note about httpenv arch 2020-09-07 12:49:08 +02:00
Jerome Petazzoni
fee0be7f09 Update 'kubectl create deployment' for 1.19 2020-09-02 16:48:19 +02:00
Jerome Petazzoni
d98fcbce87 Update Ingress to 1.19 2020-09-02 13:34:11 +02:00
Jerome Petazzoni
35320837e5 Add info about immutable configmaps and secrets 2020-09-02 13:21:21 +02:00
Jerome Petazzoni
d73e597198 Small updates for Kubernetes 1.19 2020-09-02 13:08:04 +02:00
Jerome Petazzoni
b4c0378114 Add ips command to output tab-separated addresses 2020-08-31 16:31:59 +02:00
Jerome Petazzoni
efdc4fcfa9 bump versions 2020-08-26 12:38:51 +02:00
Jerome Petazzoni
c32fcc81bb Tweak 1-day content 2020-08-26 09:10:15 +02:00
Jerome Petazzoni
f6930042bd Mention downward API fields 2020-08-26 09:05:24 +02:00
Jerome Petazzoni
2e2767b090 Bump up kubectl versions in remote section 2020-08-19 13:38:49 +02:00
Jerome Petazzoni
115cc5e0c0 Add support for Scaleway Cloud instances 2020-08-15 14:02:24 +02:00
Jerome Petazzoni
d252fe254b Update DNS script 2020-08-15 12:34:08 +02:00
Jerome Petazzoni
7d96562042 Minor updates after LKE testing 2020-08-12 19:22:57 +02:00
Jerome Petazzoni
4ded8c699d typo 2020-08-05 18:23:37 +02:00
Jérôme Petazzoni
620a3df798 Merge pull request #563 from lucas-foodles/patch-1
Fix typo
2020-08-05 17:28:34 +02:00
Jerome Petazzoni
d28723f07a Add fwdays workshops 2020-08-04 17:21:31 +02:00
Jerome Petazzoni
f2334d2d1b Add skillsmatter dates 2020-07-30 19:11:43 +02:00
Jerome Petazzoni
ddf79eebc7 Add skillsmatter 2020-07-30 19:09:42 +02:00
Jerome Petazzoni
6467264ff5 Add Bret coupon codes; high five online october 2020-07-30 12:11:29 +02:00
lucas-foodles
55fcff9333 Fix typo 2020-07-29 10:46:17 +02:00
Jerome Petazzoni
8fb7ea3908 Use 'sudo port', as per #529 2020-07-09 15:32:21 +02:00
Jérôme Petazzoni
7dd72f123f Merge pull request #562 from guilhem/patch-1
mismatch requests/limits
2020-07-07 15:35:46 +02:00
Guilhem Lettron
ff95066006 mismatch requests/limits
Burstable pods are killed when the node is overloaded and they exceed their requests
2020-07-07 13:55:28 +02:00
Jerome Petazzoni
8146c4dabe Add CRD that I had forgotten 2020-07-01 18:15:33 +02:00
Jerome Petazzoni
17aea33beb Add config for Traefik v2 2020-07-01 18:15:23 +02:00
Jerome Petazzoni
9770f81a1c Update DaemonSet in filebeat example to apps/v1 2020-07-01 16:55:48 +02:00
Jerome Petazzoni
0cb9095303 Fix up CRDs and add better openapiv3 schema validation 2020-07-01 16:53:51 +02:00
Jerome Petazzoni
ffded8469b Clean up socat deployment (even if we don't use it anymore) 2020-07-01 16:10:40 +02:00
Jerome Petazzoni
0e892cf8b4 Fix indentation in volume example 2020-06-28 12:10:01 +02:00
Jerome Petazzoni
b87efbd6e9 Update etcd slide 2020-06-26 07:32:53 +02:00
Jerome Petazzoni
1a24b530d6 Update Kustomize version 2020-06-22 08:33:21 +02:00
Jerome Petazzoni
122ffec5c2 kubectl get --show-labels and -L 2020-06-16 22:50:38 +02:00
Jerome Petazzoni
276a2dbdda Fix titles 2020-06-04 12:55:42 +02:00
Jerome Petazzoni
2836b58078 Add ENIX high five sessions 2020-06-04 12:53:25 +02:00
Jerome Petazzoni
0d065788a4 Improve how we display dates (sounds silly but with longer online events it becomes necessary) 2020-06-04 12:42:44 +02:00
Jerome Petazzoni
14271a4df0 Rehaul 'setup k8s' sections 2020-06-03 16:54:41 +02:00
Jerome Petazzoni
412d029d0c Tweak self-hosted options 2020-06-02 17:45:51 +02:00
Jerome Petazzoni
f960230f8e Reorganize managed options; add Scaleway 2020-06-02 17:28:23 +02:00
Jerome Petazzoni
774c8a0e31 Rewrite intro to the authn/authz module 2020-06-01 23:43:33 +02:00
Jerome Petazzoni
4671a981a7 Add deployment automation steps
The settings file can now specify an optional list of steps.
After creating a bunch of instances, the steps are then
automatically executed. This helps since virtually all
deployments will be a sequence of 'start + deploy + otheractions'.

It also helps to automatically execute steps like webssh
and tailhist (since I tend to forget them often).
2020-06-01 20:58:23 +02:00
Jerome Petazzoni
b9743a5f8c Simplify Portworx setup and update it for k8s 1.18 2020-06-01 14:41:25 +02:00
Jerome Petazzoni
df4980750c Bump up ship version 2020-05-27 17:41:22 +02:00
Jerome Petazzoni
9467c7309e Update shortlinks 2020-05-17 20:21:15 +02:00
Jerome Petazzoni
86b0380a77 Update operator links 2020-05-13 20:29:59 +02:00
Jerome Petazzoni
eb9052ae9a Add twitch chat info 2020-05-07 13:24:35 +02:00
Jerome Petazzoni
8f85332d8a Advanced Dockerfiles -> Advanced Dockerfile Syntax 2020-05-06 17:25:03 +02:00
Jerome Petazzoni
0479ad2285 Add force redirects 2020-05-06 17:22:13 +02:00
Jerome Petazzoni
986d7eb9c2 Add foreword to operators design section 2020-05-05 17:24:05 +02:00
Jerome Petazzoni
3fafbb8d4e Add kustomize CLI and completion 2020-05-04 16:47:26 +02:00
Jerome Petazzoni
5a24df3fd4 Add details on Kustomize 2020-05-04 16:25:35 +02:00
Jerome Petazzoni
1bbfba0531 Add definition of idempotent 2020-05-04 02:18:05 +02:00
Jerome Petazzoni
8d98431ba0 Add Helm graduation status 2020-05-04 02:09:00 +02:00
Jerome Petazzoni
c31c81a286 Allow overriding YAML desc through env vars 2020-05-04 00:54:34 +02:00
Jerome Petazzoni
a0314fc5f5 Keep --restart=Never for folks running 1.17- 2020-05-03 17:08:32 +02:00
Jérôme Petazzoni
3f088236a4 Merge pull request #557 from barpilot/psp
psp: update deprecated parts
2020-05-03 17:07:41 +02:00
Jerome Petazzoni
ce4e2ffe46 Add sleep command in init container example
It can be tricky to illustrate what's going on here, since installing
git and cloning the repo can be so fast. So we're sleeping a few seconds
to help with this demo and make it easier to show the race condition.
2020-05-03 17:01:59 +02:00
Jérôme Petazzoni
c3a05a6393 Merge pull request #558 from barpilot/vol-init
volume: add missing pod nginx-with-init creating
2020-05-03 16:57:46 +02:00
Jerome Petazzoni
40b2b8e62e Fix deployment name in labels/selector intro
(Fixes #552)
2020-05-03 16:53:25 +02:00
Jerome Petazzoni
efdcf4905d Bump up Kubernetes dashboard to 2.0.0 2020-05-03 16:01:19 +02:00
Jérôme Petazzoni
bdb57c05b4 Merge pull request #550 from BretFisher/patch-20
update k8s dashboard versions
2020-05-03 15:55:15 +02:00
Jerome Petazzoni
af0762a0a2 Remove ':' from file names
Colons are not allowed in file names on Windows. Let's use
something else instead.

(Initially reported by @DenisBalan. This closes #549.)
2020-05-03 15:49:37 +02:00
Jerome Petazzoni
0d6c364a95 Add MacPorts instructions for stern 2020-05-03 13:40:01 +02:00
Jerome Petazzoni
690a1eb75c Move Ardan Live 2020-05-01 15:37:57 -05:00
Jérôme Petazzoni
c796a6bfc1 Merge pull request #556 from barpilot/healthcheck
healthcheck: fix rng manifest filename
2020-04-30 22:51:37 +02:00
Jerome Petazzoni
0b10d3d40d Add a bunch of other managed offerings 2020-04-30 15:50:24 -05:00
Jérôme Petazzoni
cdb50925da Merge pull request #554 from barpilot/installer
separate managed options from deployment
2020-04-30 22:47:22 +02:00
Jérôme Petazzoni
ca1f8ec828 Merge pull request #553 from barpilot/kubeadm
Remove experimental status on kubeadm HA
2020-04-30 22:46:33 +02:00
Jerome Petazzoni
7302d3533f Use built-in dockercoins manifest instead of separate kubercoins repo 2020-04-30 15:45:12 -05:00
Jerome Petazzoni
d3c931e602 Add separate instructions for Zoom webinar 2020-04-30 15:42:41 -05:00
Guilhem Lettron
7402c8e6a8 psp: update psp apiVersion to policy/v1beta1 2020-04-29 22:46:33 +02:00
Guilhem Lettron
1de539bff8 healthcheck: fix rng manifest filename 2020-04-29 22:41:15 +02:00
Guilhem Lettron
a6c7d69986 volume: add missing pod nginx-with-init creating 2020-04-29 22:37:49 +02:00
Guilhem Lettron
b0bff595cf psp: update generator helpers
kubectl run →  kubectl create deployment
kubectl run --restart=Never → kubectl run
2020-04-29 22:33:34 +02:00
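Concretely, since kubectl run now only creates Pods (nginx used as an example image):

$ kubectl create deployment web --image=nginx   # formerly: kubectl run web --image=nginx
$ kubectl run web --image=nginx                 # formerly: kubectl run web --image=nginx --restart=Never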
Jerome Petazzoni
6f806ed200 typo 2020-04-28 14:23:52 -05:00
Jerome Petazzoni
0c8b20f6b6 typo 2020-04-28 14:21:31 -05:00
Jerome Petazzoni
2ba35e1f8d typo 2020-04-28 14:20:22 -05:00
Jerome Petazzoni
eb0d9bed2a Update descriptions 2020-04-28 06:18:59 -05:00
Jerome Petazzoni
bab493a926 Update descriptions 2020-04-28 06:17:21 -05:00
Guilhem Lettron
f4f2d83fa4 separate managed options from deployment 2020-04-27 20:55:23 +02:00
Guilhem Lettron
9f049951ab Remove experimental status on kubeadm HA 2020-04-27 20:47:30 +02:00
Jerome Petazzoni
7257a5c594 Add outline tags to Kubernetes course 2020-04-27 07:35:14 -05:00
Jerome Petazzoni
102aef5ac5 Add outline tags to Docker short course 2020-04-26 11:36:50 -05:00
Jerome Petazzoni
d2b3a1d663 Add Ardan Live 2020-04-23 08:46:56 -05:00
Jerome Petazzoni
d84ada0927 Fix slides counter 2020-04-23 07:33:46 -05:00
Jerome Petazzoni
0e04b4a07d Modularize logistics file and add logistics-online file 2020-04-20 15:51:02 -05:00
Jerome Petazzoni
aef910b4b7 Do not show 'Module 1' if there is only one module 2020-04-20 13:01:06 -05:00
Jerome Petazzoni
298b6db20c Rename 'chapter' into 'module' 2020-04-20 11:49:35 -05:00
Jerome Petazzoni
7ec6e871c9 Add shortlink container.training/next 2020-04-15 13:17:03 -05:00
Jerome Petazzoni
a0558e4ee5 Rework kubectl run section, break it down
We now have better explanations on labels and selectors.
The kubectl run section was getting very long, so now
it is different parts: kubectl run basics; how to create
other resources like batch jobs; first contact with
labels and annotations; and showing the limitations
of kubectl logs.
2020-04-08 18:29:59 -05:00
Jerome Petazzoni
16a62f9f84 Really dirty script to add force redirects 2020-04-07 17:00:53 -05:00
Bret Fisher
2ce50007d2 update k8s dashboard versions 2020-03-16 17:57:41 -04:00
270 changed files with 42141 additions and 4273 deletions

View File

@@ -9,21 +9,21 @@ services:
etcd:
network_mode: "service:pause"
image: k8s.gcr.io/etcd:3.4.3
image: k8s.gcr.io/etcd:3.4.9
command: etcd
kube-apiserver:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount --allow-privileged
kube-controller-manager:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-controller-manager --master http://localhost:8080 --allocate-node-cidrs --cluster-cidr=10.CLUSTER.0.0/16
"Edit the CLUSTER placeholder first. Then, remove this line.":
kube-scheduler:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-scheduler --master http://localhost:8080

View File

@@ -1,3 +1,6 @@
# Note: hyperkube isn't available after Kubernetes 1.18.
# So we'll have to update this for Kubernetes 1.19!
version: "3"
services:
@@ -9,20 +12,20 @@ services:
etcd:
network_mode: "service:pause"
image: k8s.gcr.io/etcd:3.4.3
image: k8s.gcr.io/etcd:3.4.9
command: etcd
kube-apiserver:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-apiserver --etcd-servers http://127.0.0.1:2379 --address 0.0.0.0 --disable-admission-plugins=ServiceAccount
kube-controller-manager:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-controller-manager --master http://localhost:8080
kube-scheduler:
network_mode: "service:pause"
image: k8s.gcr.io/hyperkube:v1.17.2
image: k8s.gcr.io/hyperkube:v1.18.8
command: kube-scheduler --master http://localhost:8080

49
dockercoins/Tiltfile Normal file
View File

@@ -0,0 +1,49 @@
k8s_yaml(blob('''
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: registry
  name: registry
spec:
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
      - image: registry
        name: registry
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: registry
  name: registry
spec:
  ports:
  - port: 5000
    protocol: TCP
    targetPort: 5000
    nodePort: 30555
  selector:
    app: registry
  type: NodePort
'''))
default_registry('localhost:30555')
docker_build('dockercoins/hasher', 'hasher')
docker_build('dockercoins/rng', 'rng')
docker_build('dockercoins/webui', 'webui')
docker_build('dockercoins/worker', 'worker')
k8s_yaml('../k8s/dockercoins.yaml')
# Uncomment the following line to let tilt run with the default kubeadm cluster-admin context.
#allow_k8s_contexts('kubernetes-admin@kubernetes')
# While we're here: if you're controlling a remote cluster, uncomment that line.
# It will create a port forward so that you can access the remote registry.
#k8s_resource(workload='registry', port_forwards='30555:5000')

33
k8s/certbot.yaml Normal file
View File

@@ -0,0 +1,33 @@
kind: Service
apiVersion: v1
metadata:
  name: certbot
spec:
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: certbot
spec:
  rules:
  - http:
      paths:
      - path: /.well-known/acme-challenge/
        backend:
          serviceName: certbot
          servicePort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: certbot
subsets:
- addresses:
  - ip: A.B.C.D
  ports:
  - port: 8000
    protocol: TCP

11
k8s/cm-certificate.yaml Normal file
View File

@@ -0,0 +1,11 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: xyz.A.B.C.D.nip.io
spec:
  secretName: xyz.A.B.C.D.nip.io
  dnsNames:
  - xyz.A.B.C.D.nip.io
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer

18
k8s/cm-clusterissuer.yaml Normal file
View File

@@ -0,0 +1,18 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Remember to update this if you use this manifest to obtain real certificates :)
    email: hello@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # To use the production environment, use the following line instead:
    #server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: issuer-letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: traefik

View File

@@ -1,3 +1,6 @@
# Note: apiextensions.k8s.io/v1beta1 is deprecated, and won't be served
# in Kubernetes 1.22 and later versions. This YAML manifest is here just
# for reference, but it's not intended to be used in modern trainings.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:

View File

@@ -4,6 +4,13 @@ metadata:
name: coffees.container.training
spec:
group: container.training
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
scope: Namespaced
names:
plural: coffees
@@ -11,25 +18,4 @@ spec:
kind: Coffee
shortNames:
- cof
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
properties:
spec:
required:
- taste
properties:
taste:
description: Subjective taste of that kind of coffee bean
type: string
additionalPrinterColumns:
- jsonPath: .spec.taste
description: Subjective taste of that kind of coffee bean
name: Taste
type: string
- jsonPath: .metadata.creationTimestamp
name: Age
type: date

37
k8s/coffee-3.yaml Normal file
View File

@@ -0,0 +1,37 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: coffees.container.training
spec:
  group: container.training
  scope: Namespaced
  names:
    plural: coffees
    singular: coffee
    kind: Coffee
    shortNames:
    - cof
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        required: [ spec ]
        properties:
          spec:
            type: object
            properties:
              taste:
                description: Subjective taste of that kind of coffee bean
                type: string
            required: [ taste ]
    additionalPrinterColumns:
    - jsonPath: .spec.taste
      description: Subjective taste of that kind of coffee bean
      name: Taste
      type: string
    - jsonPath: .metadata.creationTimestamp
      name: Age
      type: date

View File

@@ -9,9 +9,9 @@ spec:
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: robusta
name: excelsa
spec:
taste: stronger
taste: fruity
---
kind: Coffee
apiVersion: container.training/v1alpha1
@@ -23,7 +23,12 @@ spec:
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: excelsa
name: robusta
spec:
taste: fruity
taste: stronger
bitterness: high
---
kind: Coffee
apiVersion: container.training/v1alpha1
metadata:
name: java

77
k8s/consul-1.yaml Normal file
View File

@@ -0,0 +1,77 @@
# Basic Consul cluster using Cloud Auto-Join.
# Caveats:
# - no actual persistence
# - scaling down to 1 will break the cluster
# - pods may be colocated
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: consul
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: consul
subjects:
- kind: ServiceAccount
name: consul
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: consul
---
apiVersion: v1
kind: Service
metadata:
name: consul
spec:
ports:
- port: 8500
name: http
selector:
app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
spec:
serviceName: consul
replicas: 3
selector:
matchLabels:
app: consul
template:
metadata:
labels:
app: consul
spec:
serviceAccountName: consul
containers:
- name: consul
image: "consul:1.8"
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- "agent"
- "-bootstrap-expect=3"
- "-retry-join=provider=k8s label_selector=\"app=consul\" namespace=\"$(NAMESPACE)\""
- "-client=0.0.0.0"
- "-data-dir=/consul/data"
- "-server"
- "-ui"

View File

@@ -1,5 +1,9 @@
# Better Consul cluster.
# There is still no actual persistence, but:
# - podAntiaffinity prevents pod colocation
# - clusters works when scaling down to 1 (thanks to lifecycle hook)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
kind: Role
metadata:
name: consul
rules:
@@ -11,17 +15,16 @@ rules:
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
kind: RoleBinding
metadata:
name: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
kind: Role
name: consul
subjects:
- kind: ServiceAccount
name: consul
namespace: default
---
apiVersion: v1
kind: ServiceAccount
@@ -59,20 +62,22 @@ spec:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- consul
matchLabels:
app: consul
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
containers:
- name: consul
image: "consul:1.6"
image: "consul:1.8"
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- "agent"
- "-bootstrap-expect=3"
- "-retry-join=provider=k8s label_selector=\"app=consul\""
- "-retry-join=provider=k8s label_selector=\"app=consul\" namespace=\"$(NAMESPACE)\""
- "-client=0.0.0.0"
- "-data-dir=/consul/data"
- "-server"
@@ -80,7 +85,4 @@ spec:
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- consul leave
command: [ "sh", "-c", "consul leave" ]

98
k8s/consul-3.yaml Normal file
View File

@@ -0,0 +1,98 @@
# Even better Consul cluster.
# That one uses a volumeClaimTemplate to achieve true persistence.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: consul
rules:
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: consul
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: consul
subjects:
- kind: ServiceAccount
name: consul
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: consul
---
apiVersion: v1
kind: Service
metadata:
name: consul
spec:
ports:
- port: 8500
name: http
selector:
app: consul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: consul
spec:
serviceName: consul
replicas: 3
selector:
matchLabels:
app: consul
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
template:
metadata:
labels:
app: consul
spec:
serviceAccountName: consul
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: consul
topologyKey: kubernetes.io/hostname
terminationGracePeriodSeconds: 10
containers:
- name: consul
image: "consul:1.8"
volumeMounts:
- name: data
mountPath: /consul/data
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- "agent"
- "-bootstrap-expect=3"
- "-retry-join=provider=k8s label_selector=\"app=consul\" namespace=\"$(NAMESPACE)\""
- "-client=0.0.0.0"
- "-data-dir=/consul/data"
- "-server"
- "-ui"
lifecycle:
preStop:
exec:
command: [ "sh", "-c", "consul leave" ]

View File

@@ -1,3 +1,10 @@
# This file is based on the following manifest:
# https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended.yaml
# It adds the "skip login" flag, as well as an insecure hack to defeat SSL.
# As its name implies, it is INSECURE and you should not use it in production,
# or on clusters that contain any kind of important or sensitive data, or on
# clusters that have a life span of more than a few hours.
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -187,7 +194,7 @@ spec:
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.0-rc2
image: kubernetesui/dashboard:v2.0.0
imagePullPolicy: Always
ports:
- containerPort: 8443
@@ -226,7 +233,7 @@ spec:
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
@@ -272,7 +279,7 @@ spec:
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.2
image: kubernetesui/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
@@ -293,7 +300,7 @@ spec:
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"beta.kubernetes.io/os": linux
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master

View File

@@ -0,0 +1,305 @@
# This is a copy of the following file:
# https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.0
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}

View File

@@ -0,0 +1,336 @@
# This file is based on the following manifest:
# https://github.com/kubernetes/dashboard/blob/master/aio/deploy/recommended.yaml
# It adds a ServiceAccount that has cluster-admin privileges on the cluster,
# and exposes the dashboard on a NodePort. It makes it easier to do quick demos
# of the Kubernetes dashboard, without compromising the security too much.
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.0
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: cluster-admin
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-cluster-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: cluster-admin
namespace: kubernetes-dashboard

View File

@@ -5,7 +5,7 @@ metadata:
name: fluentd
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
@@ -21,7 +21,7 @@ rules:
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:

View File

@@ -11,7 +11,7 @@ metadata:
name: elasticsearch-operator
namespace: elasticsearch-operator
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: elasticsearch-operator
@@ -41,7 +41,7 @@ rules:
resources: ["elasticsearchclusters"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: elasticsearch-operator
@@ -55,13 +55,16 @@ subjects:
name: elasticsearch-operator
namespace: elasticsearch-operator
---
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch-operator
namespace: elasticsearch-operator
spec:
replicas: 1
selector:
matchLabels:
name: elasticsearch-operator
template:
metadata:
labels:

30
k8s/event-node.yaml Normal file
View File

@@ -0,0 +1,30 @@
kind: Event
apiVersion: v1
metadata:
generateName: hello-
labels:
container.training/test: ""
#eventTime: "2020-07-04T00:00:00.000000Z"
#firstTimestamp: "2020-01-01T00:00:00.000000Z"
#lastTimestamp: "2020-12-31T00:00:00.000000Z"
#count: 42
involvedObject:
kind: Node
apiVersion: v1
name: kind-control-plane
# Note: the uid should be the Node name (not the uid of the Node).
# This might be specific to cluster-scoped (non-namespaced) objects.
uid: kind-control-plane
type: Warning
reason: NodeOverheat
message: "Node temperature exceeds critical threshold"
action: Hello
source:
component: thermal-probe
#host: node1
#reportingComponent: ""
#reportingInstance: ""
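A quick way to try this manifest (sketch; it assumes a kind cluster whose control plane node really is named kind-control-plane, as in the manifest above):
    kubectl create -f k8s/event-node.yaml
    kubectl get events --field-selector involvedObject.name=kind-control-plane
    kubectl describe node kind-control-plane   # the Warning should appear under "Events"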

36
k8s/event-pod.yaml Normal file
View File

@@ -0,0 +1,36 @@
kind: Event
apiVersion: v1
metadata:
# One convention is to use <objectname>.<timestamp>,
# where the timestamp has nanosecond precision
# and is expressed in hexadecimal.
# Example: web-5dcb957ccc-fjvzc.164689730a36ec3d
name: hello.1234567890
# The label doesn't serve any purpose, except making
# it easier to identify or delete that specific event.
labels:
container.training/test: ""
#eventTime: "2020-07-04T00:00:00.000000Z"
#firstTimestamp: "2020-01-01T00:00:00.000000Z"
#lastTimestamp: "2020-12-31T00:00:00.000000Z"
#count: 42
involvedObject:
### These 5 lines should be updated to refer to an object.
### Make sure to put the correct "uid", because it is what
### "kubectl describe" is using to gather relevant events.
#apiVersion: v1
#kind: Pod
#name: magic-bean
#namespace: blue
#uid: 7f28fda8-6ef4-4580-8d87-b55721fcfc30
type: Normal
reason: BackupSuccessful
message: "Object successfully dumped to gitops repository"
source:
component: gitops-sync
#reportingComponent: ""
#reportingInstance: ""
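To fill in the involvedObject block, the uid of the target object can be retrieved with a command like the following (sketch; magic-bean and blue are the placeholder names from the commented example above):
    kubectl get pod magic-bean --namespace blue -o jsonpath={.metadata.uid}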

View File

@@ -52,7 +52,7 @@ data:
- add_kubernetes_metadata:
in_cluster: true
---
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat
@@ -60,6 +60,9 @@ metadata:
labels:
k8s-app: filebeat
spec:
selector:
matchLabels:
k8s-app: filebeat
template:
metadata:
labels:
@@ -128,7 +131,7 @@ spec:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
@@ -141,7 +144,7 @@ roleRef:
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat

View File

@@ -1,4 +1,4 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
@@ -11,4 +11,4 @@ roleRef:
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
namespace: kube-system

34
k8s/hackthecluster.yaml Normal file
View File

@@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: hackthecluster
spec:
selector:
matchLabels:
app: hackthecluster
template:
metadata:
labels:
app: hackthecluster
spec:
volumes:
- name: slash
hostPath:
path: /
tolerations:
- effect: NoSchedule
operator: Exists
containers:
- name: alpine
image: alpine
volumeMounts:
- name: slash
mountPath: /hostfs
command:
- sleep
- infinity
securityContext:
#privileged: true
capabilities:
add:
- SYS_CHROOT
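With the host filesystem mounted at /hostfs and SYS_CHROOT granted, a shell on the underlying node should then be reachable from any pod of the DaemonSet, along these lines (sketch):
    kubectl exec -ti $(kubectl get pod -l app=hackthecluster -o name | head -1) -- chroot /hostfs sh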

View File

@@ -27,7 +27,7 @@ spec:
command:
- sh
- -c
- "apk update && apk add curl && curl https://github.com/jpetazzo.keys > /root/.ssh/authorized_keys"
- "mkdir -p /root/.ssh && apk update && apk add curl && curl https://github.com/jpetazzo.keys > /root/.ssh/authorized_keys"
containers:
- name: web
image: nginx

View File

@@ -0,0 +1,29 @@
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta2
metadata:
name: rng
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: rng
minReplicas: 1
maxReplicas: 20
behavior:
scaleUp:
stabilizationWindowSeconds: 60
scaleDown:
stabilizationWindowSeconds: 180
metrics:
- type: Object
object:
describedObject:
apiVersion: v1
kind: Service
name: httplat
metric:
name: httplat_latency_seconds
target:
type: Value
value: 0.1
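Once this HPA is applied (and assuming a metrics adapter exposes the httplat_latency_seconds metric for the httplat Service), its status can be checked with:
    kubectl describe hpa rng
    kubectl get hpa rng --watch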

20
k8s/ingress-v1.yaml Normal file
View File

@@ -0,0 +1,20 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: whatever
spec:
#tls:
#- secretName: whatever.A.B.C.D.nip.io
# hosts:
# - whatever.A.B.C.D.nip.io
rules:
- host: whatever.A.B.C.D.nip.io
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: whatever
port:
number: 1234
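A roughly equivalent networking.k8s.io/v1 Ingress can also be generated imperatively (sketch; A.B.C.D stands for the address used in the manifest):
    kubectl create ingress whatever --rule="whatever.A.B.C.D.nip.io/*=whatever:1234"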

17
k8s/ingress-v1beta1.yaml Normal file
View File

@@ -0,0 +1,17 @@
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: whatever
spec:
#tls:
#- secretName: whatever.A.B.C.D.nip.io
# hosts:
# - whatever.A.B.C.D.nip.io
rules:
- host: whatever.A.B.C.D.nip.io
http:
paths:
- path: /
backend:
serviceName: whatever
servicePort: 1234

View File

@@ -1,13 +0,0 @@
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: whatever
spec:
rules:
- host: whatever.A.B.C.D.nip.io
http:
paths:
- path: /
backend:
serviceName: whatever
servicePort: 1234

1
k8s/ingress.yaml Symbolic link
View File

@@ -0,0 +1 @@
ingress-v1beta1.yaml

View File

@@ -1,162 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard

View File

@@ -0,0 +1,63 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: setup-namespace
spec:
rules:
- name: setup-limitrange
match:
resources:
kinds:
- Namespace
generate:
kind: LimitRange
name: default-limitrange
namespace: "{{request.object.metadata.name}}"
data:
spec:
limits:
- type: Container
min:
cpu: 0.1
memory: 0.1
max:
cpu: 2
memory: 2Gi
default:
cpu: 0.25
memory: 500Mi
defaultRequest:
cpu: 0.25
memory: 250Mi
- name: setup-resourcequota
match:
resources:
kinds:
- Namespace
generate:
kind: ResourceQuota
name: default-resourcequota
namespace: "{{request.object.metadata.name}}"
data:
spec:
hard:
requests.cpu: "10"
requests.memory: 10Gi
limits.cpu: "20"
limits.memory: 20Gi
- name: setup-networkpolicy
match:
resources:
kinds:
- Namespace
generate:
kind: NetworkPolicy
name: default-networkpolicy
namespace: "{{request.object.metadata.name}}"
data:
spec:
podSelector: {}
ingress:
- from:
- podSelector: {}
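To check the generate rules (assuming Kyverno is installed and this ClusterPolicy has been applied; the namespace name below is arbitrary):
    kubectl create namespace kyverno-test
    kubectl get limitrange,resourcequota,networkpolicy --namespace kyverno-test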

View File

@@ -0,0 +1,22 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: pod-color-policy-1
spec:
validationFailureAction: enforce
rules:
- name: ensure-pod-color-is-valid
match:
resources:
kinds:
- Pod
selector:
matchExpressions:
- key: color
operator: Exists
- key: color
operator: NotIn
values: [ red, green, blue ]
validate:
message: "If it exists, the label color must be red, green, or blue."
deny: {}
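For instance, with this policy in place, the first command below should be rejected at admission while the second should go through (sketch):
    kubectl run purple --image=nginx --labels=color=purple   # denied (color must be red, green, or blue)
    kubectl run blue --image=nginx --labels=color=blue       # admitted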

View File

@@ -0,0 +1,21 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: pod-color-policy-2
spec:
validationFailureAction: enforce
background: false
rules:
- name: prevent-color-change
match:
resources:
kinds:
- Pod
validate:
message: "Once label color has been added, it cannot be changed."
deny:
conditions:
- key: "{{ request.oldObject.metadata.labels.color }}"
operator: NotEqual
value: "{{ request.object.metadata.labels.color }}"

View File

@@ -0,0 +1,25 @@
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: pod-color-policy-3
spec:
validationFailureAction: enforce
background: false
rules:
- name: prevent-color-removal
match:
resources:
kinds:
- Pod
selector:
matchExpressions:
- key: color
operator: DoesNotExist
validate:
message: "Once label color has been added, it cannot be removed."
deny:
conditions:
- key: "{{ request.oldObject.metadata.labels.color }}"
operator: NotIn
value: []

View File

@@ -1,49 +1,50 @@
# This is a local copy of:
# https://github.com/rancher/local-path-provisioner/blob/master/deploy/local-path-storage.yaml
---
apiVersion: v1
kind: Namespace
metadata:
name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: local-path-provisioner-role
namespace: local-path-storage
rules:
- apiGroups: [""]
resources: ["nodes", "persistentvolumeclaims"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["endpoints", "persistentvolumes", "pods"]
verbs: ["*"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [ "" ]
resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
verbs: [ "get", "list", "watch" ]
- apiGroups: [ "" ]
resources: [ "endpoints", "persistentvolumes", "pods" ]
verbs: [ "*" ]
- apiGroups: [ "" ]
resources: [ "events" ]
verbs: [ "create", "patch" ]
- apiGroups: [ "storage.k8s.io" ]
resources: [ "storageclasses" ]
verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: local-path-provisioner-bind
namespace: local-path-storage
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
name: local-path-provisioner-service-account
namespace: local-path-storage
- kind: ServiceAccount
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
@@ -62,27 +63,28 @@ spec:
spec:
serviceAccountName: local-path-provisioner-service-account
containers:
- name: local-path-provisioner
image: rancher/local-path-provisioner:v0.0.8
imagePullPolicy: Always
command:
- local-path-provisioner
- --debug
- start
- --config
- /etc/config/config.json
volumeMounts:
- name: config-volume
mountPath: /etc/config/
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: local-path-provisioner
image: rancher/local-path-provisioner:v0.0.19
imagePullPolicy: IfNotPresent
command:
- local-path-provisioner
- --debug
- start
- --config
- /etc/config/config.json
volumeMounts:
- name: config-volume
mountPath: /etc/config/
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumes:
- name: config-volume
configMap:
name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
@@ -91,6 +93,7 @@ metadata:
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
@@ -99,12 +102,59 @@ metadata:
namespace: local-path-storage
data:
config.json: |-
{
"nodePathMap":[
{
"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
"paths":["/opt/local-path-provisioner"]
}
]
}
{
"nodePathMap":[
{
"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
"paths":["/opt/local-path-provisioner"]
}
]
}
setup: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
mkdir -m 0777 -p ${absolutePath}
teardown: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
rm -rf ${absolutePath}
helperPod.yaml: |-
apiVersion: v1
kind: Pod
metadata:
name: helper-pod
spec:
containers:
- name: helper-pod
image: busybox

View File

@@ -1,32 +1,61 @@
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
# This file is https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# But with the following arguments added to metrics-server:
# args:
# - --kubelet-insecure-tls
# - --metric-resolution=5s
apiVersion: v1
kind: ServiceAccount
metadata:
name: system:aggregated-metrics-reader
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
resources: ["pods"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
@@ -38,95 +67,26 @@ subjects:
name: metrics-server
namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.metrics.k8s.io
spec:
service:
name: metrics-server
namespace: kube-system
group: metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.3
imagePullPolicy: Always
volumeMounts:
- name: tmp-dir
mountPath: /tmp
args:
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
- --metric-resolution=5s
---
apiVersion: v1
kind: Service
metadata:
name: metrics-server
namespace: kube-system
labels:
kubernetes.io/name: "Metrics-server"
spec:
selector:
k8s-app: metrics-server
ports:
- port: 443
protocol: TCP
targetPort: 443
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
kind: ClusterRoleBinding
metadata:
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
verbs:
- get
- list
- watch
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -136,3 +96,98 @@ subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --kubelet-insecure-tls
- --metric-resolution=5s
image: k8s.gcr.io/metrics-server/metrics-server:v0.4.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
periodSeconds: 10
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
- emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100
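After applying this manifest, the aggregated metrics API can be checked with:
    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl top nodes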

View File

@@ -14,7 +14,7 @@ spec:
initContainers:
- name: git
image: alpine
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
command: [ "sh", "-c", "apk add git && sleep 5 && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/

24
k8s/openebs-pod.yaml Normal file
View File

@@ -0,0 +1,24 @@
apiVersion: v1
kind: Pod
metadata:
name: openebs-local-hostpath-pod
spec:
volumes:
- name: storage
persistentVolumeClaim:
claimName: local-hostpath-pvc
containers:
- name: better
image: alpine
command:
- sh
- -c
- |
while true; do
echo "$(date) [$(hostname)] Kubernetes is better with PVs." >> /mnt/storage/greet.txt
sleep $(($RANDOM % 5 + 20))
done
volumeMounts:
- mountPath: /mnt/storage
name: storage
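Once the local-hostpath-pvc PersistentVolumeClaim exists and is bound (it is created separately), the pod's output can be inspected like this (sketch):
    kubectl get pvc local-hostpath-pvc
    kubectl exec openebs-local-hostpath-pod -- tail /mnt/storage/greet.txt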

File diff suppressed because it is too large

View File

@@ -22,7 +22,10 @@ spec:
command: ["sh", "-c", "if [ -d /vol/lost+found ]; then rmdir /vol/lost+found; fi"]
containers:
- name: postgres
image: postgres:11
image: postgres:12
env:
- name: POSTGRES_HOST_AUTH_METHOD
value: trust
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres

View File

@@ -1,12 +1,12 @@
---
apiVersion: extensions/v1beta1
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
annotations:
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
seccomp.security.alpha.kubernetes.io/allowedProfileNames: runtime/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
name: restricted
spec:
allowPrivilegeEscalation: false

View File

@@ -1,28 +1,17 @@
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
creationTimestamp: null
generation: 1
labels:
app: socat
name: socat
namespace: kube-system
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
spec:
replicas: 1
selector:
matchLabels:
app: socat
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: socat
spec:
@@ -34,34 +23,19 @@ spec:
image: alpine
imagePullPolicy: Always
name: socat
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: socat
name: socat
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/socat
spec:
externalTrafficPolicy: Cluster
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: socat
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}

17
k8s/test.yaml Normal file
View File

@@ -0,0 +1,17 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: whatever
spec:
#tls:
#- secretName: whatever.A.B.C.D.nip.io
# hosts:
# - whatever.A.B.C.D.nip.io
rules:
- host: whatever.nip.io
http:
paths:
- path: /
backend:
serviceName: whatever
servicePort: 1234

87
k8s/traefik-v1.yaml Normal file
View File

@@ -0,0 +1,87 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:1.7
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system

102
k8s/traefik-v2.yaml Normal file
View File

@@ -0,0 +1,102 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:v2.5
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --accesslog
- --api
- --api.insecure
- --log.level=INFO
- --metrics.prometheus
- --providers.kubernetesingress
- --entrypoints.http.Address=:80
- --entrypoints.https.Address=:443
- --entrypoints.https.http.tls.certResolver=default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
- ingressclasses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system
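Since the manifest enables --api.insecure and host-maps port 8080, the Traefik dashboard should be reachable directly on each node once the DaemonSet is up (sketch):
    kubectl --namespace kube-system get daemonset traefik-ingress-controller
    curl http://localhost:8080/dashboard/   # run this on one of the nodes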

View File

@@ -1,103 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- effect: NoSchedule
operator: Exists
hostNetwork: true
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:1.7
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system

1
k8s/traefik.yaml Symbolic link
View File

@@ -0,0 +1 @@
traefik-v2.yaml

View File

@@ -8,24 +8,24 @@ metadata:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: users:jean.doe
name: user=jean.doe
rules:
- apiGroups: [ certificates.k8s.io ]
resources: [ certificatesigningrequests ]
verbs: [ create ]
- apiGroups: [ certificates.k8s.io ]
resourceNames: [ users:jean.doe ]
resourceNames: [ user=jean.doe ]
resources: [ certificatesigningrequests ]
verbs: [ get, create, delete, watch ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: users:jean.doe
name: user=jean.doe
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: users:jean.doe
name: user=jean.doe
subjects:
- kind: ServiceAccount
name: jean.doe

View File

@@ -3,8 +3,6 @@ apiVersion: v1
kind: PersistentVolume
metadata:
name: consul-node2
annotations:
node: node2
spec:
capacity:
storage: 10Gi
@@ -26,8 +24,6 @@ apiVersion: v1
kind: PersistentVolume
metadata:
name: consul-node3
annotations:
node: node3
spec:
capacity:
storage: 10Gi
@@ -49,8 +45,6 @@ apiVersion: v1
kind: PersistentVolume
metadata:
name: consul-node4
annotations:
node: node4
spec:
capacity:
storage: 10Gi

View File

@@ -0,0 +1,13 @@
#!/bin/sh
# Create an EKS cluster.
# This is not idempotent (each time you run it, it creates a new cluster).
eksctl create cluster \
--node-type=t3.large \
--nodes-max=10 \
--alb-ingress-access \
--asg-access \
--ssh-access \
--with-oidc \
#

32
prepare-eks/20_create_users.sh Executable file
View File

@@ -0,0 +1,32 @@
#!/bin/sh
# For each user listed in "users.txt", create an IAM user.
# Also create AWS API access keys, and store them in "users.keys".
# This is idempotent (you can run it multiple times; it will only
# create the missing users). However, it will not remove users.
# Note that you can remove users from "users.keys" (or even wipe
# that file out entirely); this script will then delete their
# keys, generate new keys for them, and add the new keys to
# "users.keys".
echo "Getting list of existing users ..."
aws iam list-users --output json | jq -r .Users[].UserName > users.tmp
for U in $(cat users.txt); do
if ! grep -qw $U users.tmp; then
echo "Creating user $U..."
aws iam create-user --user-name=$U \
--tags=Key=container.training,Value=1
fi
if ! grep -qw $U users.keys; then
echo "Listing keys for user $U..."
KEYS=$(aws iam list-access-keys --user=$U | jq -r .AccessKeyMetadata[].AccessKeyId)
for KEY in $KEYS; do
echo "Deleting key $KEY for user $U..."
aws iam delete-access-key --user=$U --access-key-id=$KEY
done
echo "Creating access key for user $U..."
aws iam create-access-key --user=$U --output json \
| jq -r '.AccessKey | [ .UserName, .AccessKeyId, .SecretAccessKey ] | @tsv' \
>> users.keys
fi
done

View File

@@ -0,0 +1,51 @@
#!/bin/sh
# Create an IAM policy to authorize users to do "aws eks update-kubeconfig".
# This is idempotent, which allows you to update the policy document below if
# you want the users to do other things as well.
# Note that each time you run this script, it will actually create a new
# version of the policy, set that version as the default version, and
# remove all non-default versions. (You can only have up to 5 versions
# of a given policy, so old versions need to be cleaned up.)
# After running this script, you will want to attach the policy to the
# users (check the other scripts in this directory).
POLICY_NAME=user.container.training
POLICY_DOC='{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"eks:DescribeCluster"
],
"Resource": "arn:aws:eks:*",
"Effect": "Allow"
}
]
}'
ACCOUNT=$(aws sts get-caller-identity | jq -r .Account)
aws iam create-policy-version \
--policy-arn arn:aws:iam::$ACCOUNT:policy/$POLICY_NAME \
--policy-document "$POLICY_DOC" \
--set-as-default
# For reference, the command below creates a policy without versioning:
#aws iam create-policy \
#--policy-name user.container.training \
#--policy-document "$JSON"
for VERSION in $(
aws iam list-policy-versions \
--policy-arn arn:aws:iam::$ACCOUNT:policy/$POLICY_NAME \
--query 'Versions[?!IsDefaultVersion].VersionId' \
--output text)
do
aws iam delete-policy-version \
--policy-arn arn:aws:iam::$ACCOUNT:policy/$POLICY_NAME \
--version-id "$VERSION"
done
# For reference, the command below shows all users using the policy:
#aws iam list-entities-for-policy \
#--policy-arn arn:aws:iam::$ACCOUNT:policy/$POLICY_NAME

14
prepare-eks/40_attach_policy.sh Executable file
View File

@@ -0,0 +1,14 @@
#!/bin/sh
# Attach our user policy to all the users defined in "users.txt".
# This should be idempotent, because attaching the same policy
# to the same user multiple times doesn't do anything.
ACCOUNT=$(aws sts get-caller-identity | jq -r .Account)
POLICY_NAME=user.container.training
for U in $(cat users.txt); do
echo "Attaching policy to user $U ..."
aws iam attach-user-policy \
--user-name $U \
--policy-arn arn:aws:iam::$ACCOUNT:policy/$POLICY_NAME
done

24
prepare-eks/50_aws_auth.sh Executable file
View File

@@ -0,0 +1,24 @@
#!/bin/sh
# Update the aws-auth ConfigMap to map our IAM users to Kubernetes users.
# Each user defined in "users.txt" will be mapped to a Kubernetes user
# with the same name, and put in the "container.training" group, too.
# This is idempotent.
# WARNING: this will wipe out the mapUsers component of the aws-auth
# ConfigMap, removing all users that aren't in "users.txt".
# It won't touch mapRoles, so it shouldn't break the role mappings
# put in place by EKS.
ACCOUNT=$(aws sts get-caller-identity | jq -r .Account)
rm -f users.map
for U in $(cat users.txt); do
echo "\
- userarn: arn:aws:iam::$ACCOUNT:user/$U
username: $U
groups: [ container.training ]\
" >> users.map
done
kubectl create --namespace=kube-system configmap aws-auth \
--dry-run=client --from-file=mapUsers=users.map -o yaml \
| kubectl apply -f-

View File

@@ -0,0 +1,65 @@
#!/bin/sh
# Create a shared Kubernetes Namespace ("container-training") as well as
# individual namespaces for every user in "users.txt", and set up a bunch
# of permissions.
# Specifically:
# - each user gets "view" permissions in the "default" Namespace
# - each user gets "edit" permissions in the "container-training" Namespace
# - each user gets permissions to list Nodes and Namespaces
# - each user gets "admin" permissions in their personal Namespace
# Note that since Kubernetes Namespace names can't contain dots,
# dots in user names are mapped to dashes.
# So user "ada.lovelace" will get namespace "ada-lovelace".
# This is kind of idempotent (but will raise a bunch of errors for objects
# that already exist).
# TODO: if this needs to evolve, replace all the "create" operations by
# "apply" operations. But this is good enough for now.
kubectl create rolebinding --namespace default container.training \
--group=container.training --clusterrole=view
kubectl create clusterrole view-nodes \
--verb=get,list,watch --resource=node
kubectl create clusterrolebinding view-nodes \
--group=container.training --clusterrole=view-nodes
kubectl create clusterrole view-namespaces \
--verb=get,list,watch --resource=namespace
kubectl create clusterrolebinding view-namespaces \
--group=container.training --clusterrole=view-namespaces
kubectl create namespace container-training
kubectl create rolebinding --namespace container-training edit \
--group=container.training --clusterrole=edit
# Note: API calls to EKS tend to be fairly slow. To optimize things a bit,
# instead of running "kubectl" N times, we generate a bunch of YAML and
# apply it. It will still generate a lot of API calls but it's much faster
# than calling "kubectl" N times. It might be possible to make this even
# faster by generating a "kind: List" (I don't know if this would issue
# a single API call or multiple ones; TBD!)
for U in $(cat users.txt); do
NS=$(echo $U | tr . -)
cat <<EOF
---
kind: Namespace
apiVersion: v1
metadata:
name: $NS
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: admin
namespace: $NS
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: $U
EOF
done | kubectl create -f-

76
prepare-eks/70_oidc.sh Executable file
View File

@@ -0,0 +1,76 @@
#!/bin/sh
# Create an IAM role to be used by a Kubernetes ServiceAccount.
# The role isn't given any permissions yet (this has to be done by
# another script in this series), but a properly configured Pod
# should still be able to execute "aws sts get-caller-identity"
# and confirm that it's using that role.
# This requires the cluster to have an attached OIDC provider.
# This should be the case if the cluster has been created with
# the scripts in this directory; otherwise, this can be done with
# the subsequent command, which is idempotent:
# eksctl utils associate-iam-oidc-provider --cluster cluster-name-12341234 --approve
# The policy document used below will authorize all ServiceAccounts
# in the "container-training" Namespace to use that role.
# This script will also annotate the container-training:default
# ServiceAccount so that it can use that role.
# This script is not quite idempotent: if you want to use a new
# trust policy, some work will be required. (You can delete the role,
# but that requires detaching the associated policies. There might also
# be a way to update the trust policy directly; we haven't investigated
# this further yet.)
if [ "$1" ]; then
CLUSTER="$1"
else
echo "Please indicate cluster to use. Available clusters:"
aws eks list-clusters --output table
exit 1
fi
ACCOUNT=$(aws sts get-caller-identity | jq -r .Account)
OIDC=$(aws eks describe-cluster --name $CLUSTER --query cluster.identity.oidc.issuer --output text | cut -d/ -f3-)
ROLE_NAME=s3-reader-container-training
TRUST_POLICY=$(envsubst <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::${ACCOUNT}:oidc-provider/${OIDC}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringLike": {
"${OIDC}:sub": ["system:serviceaccount:container-training:*"]
}
}
}
]
}
EOF
)
aws iam create-role \
--role-name "$ROLE_NAME" \
--assume-role-policy-document "$TRUST_POLICY"
kubectl annotate serviceaccounts \
--namespace container-training default \
"eks.amazonaws.com/role-arn=arn:aws:iam::$ACCOUNT:role/$ROLE_NAME" \
--overwrite
exit
# Here are commands to delete the role:
for POLICY_ARN in $(aws iam list-attached-role-policies --role-name $ROLE_NAME --query 'AttachedPolicies[*].PolicyArn' --output text); do aws iam detach-role-policy --role-name $ROLE_NAME --policy-arn $POLICY_ARN; done
aws iam delete-role --role-name $ROLE_NAME
# Merging the policy with the existing policies:
{
aws iam get-role --role-name s3-reader-container-training | jq -r .Role.AssumeRolePolicyDocument.Statement[]
echo "$TRUST_POLICY" | jq -r .Statement[]
} | jq -s '{"Version": "2012-10-17", "Statement": .}' > /tmp/policy.json
aws iam update-assume-role-policy \
--role-name $ROLE_NAME \
--policy-document file:///tmp/policy.json

54
prepare-eks/80_s3_bucket.sh Executable file
View File

@@ -0,0 +1,54 @@
#!/bin/sh
# Create an S3 bucket with two objects in it:
# - public.txt (world-readable)
# - private.txt (private)
# Also create an IAM policy granting read-only access to the bucket
# (and therefore, to the private object).
# Finally, attach the policy to an IAM role (for instance, the role
# created by another script in this directory).
# This isn't idempotent, but it can be made idempotent by replacing the
# "aws iam create-policy" call with "aws iam create-policy-version" and
# a bit of extra elbow grease. (See other scripts in this directory for
# an example).
ACCOUNT=$(aws sts get-caller-identity | jq -r .Account)
BUCKET=container.training
ROLE_NAME=s3-reader-container-training
POLICY_NAME=s3-reader-container-training
POLICY_DOC=$(envsubst <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject*"
],
"Resource": [
"arn:aws:s3:::$BUCKET",
"arn:aws:s3:::$BUCKET/*"
]
}
]
}
EOF
)
aws iam create-policy \
--policy-name $POLICY_NAME \
--policy-document "$POLICY_DOC"
aws s3 mb s3://container.training
echo "this is a public object" \
| aws s3 cp - s3://container.training/public.txt \
--acl public-read
echo "this is a private object" \
| aws s3 cp - s3://container.training/private.txt \
--acl private
aws iam attach-role-policy \
--role-name "$ROLE_NAME" \
--policy-arn arn:aws:iam::$ACCOUNT:policy/$POLICY_NAME

50
prepare-eks/users.txt Normal file
View File

@@ -0,0 +1,50 @@
ada.lovelace
adele.goldstine
amanda.jones
anita.borg
ann.kiessling
barbara.mcclintock
beatrice.worsley
bessie.blount
betty.holberton
beulah.henry
carleen.hutchins
caroline.herschel
dona.bailey
dorothy.hodgkin
ellen.ochoa
edith.clarke
elisha.collier
elizabeth.feinler
emily.davenport
erna.hoover
frances.spence
gertrude.blanch
grace.hopper
grete.hermann
giuliana.tesoro
harriet.tubman
hedy.lamarr
irma.wyman
jane.goodall
jean.bartik
joy.mangano
josephine.cochrane
katherine.blodgett
kathleen.antonelli
lynn.conway
margaret.hamilton
maria.beasley
marie.curie
marjorie.joyner
marlyn.meltzer
mary.kies
melitta.bentz
milly.koss
radia.perlman
rosalind.franklin
ruth.teitelbaum
sarah.mather
sophie.wilson
stephanie.kwolek
yvonne.brill

View File

@@ -4,7 +4,11 @@ These tools can help you to create VMs on:
- Azure
- EC2
- Hetzner
- Linode
- OpenStack
- OVHcloud
- Scaleway
## Prerequisites
@@ -13,7 +17,8 @@ These tools can help you to create VMs on:
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`)
Depending on the infrastructure that you want to use, you also need to install
the Azure CLI, the AWS CLI, or terraform (for OpenStack deployment).
the CLI that is specific to that cloud. For OpenStack deployments, you will
need Terraform.
And if you want to generate printable cards:
@@ -90,6 +95,9 @@ You're all set!
## `./workshopctl` Usage
If you run `./workshopctl` without arguments, it will show a list of
available commands, looking like this:
```
workshopctl - the orchestration workshop swiss army knife
Commands:
@@ -98,32 +106,7 @@ cards Generate ready-to-print cards for a group of VMs
deploy Install Docker on a bunch of running VMs
disableaddrchecks Disable source/destination IP address checks
disabledocker Stop Docker Engine and don't restart it automatically
helmprom Install Helm and Prometheus
help Show available commands
ids (FIXME) List the instance IDs belonging to a given tag or token
kubebins Install Kubernetes and CNI binaries but don't start anything
kubereset Wipe out Kubernetes configuration on all nodes
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all nodes are reporting as Ready
listall List VMs running on all configured infrastructures
list List available groups for a given infrastructure
netfix Disable GRO and run a pinger job on the VMs
opensg Open the default security group to ALL ingress traffic
ping Ping VMs in a given tag, to check that they have network access
pssh Run an arbitrary command on all nodes
pull_images Pre-pull a bunch of Docker images
quotas Check our infrastructure quotas (max instances)
remap_nodeports Remap NodePort range to 10000-10999
retag (FIXME) Apply a new tag to a group of VMs
ssh Open an SSH session to the first node of a tag
start Start a group of VMs
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
tags List groups of VMs known locally
test Run tests (pre-flight checks) on a group of VMs
weavetest Check that weave seems properly setup
webssh Install a WEB SSH server on the machines (port 1080)
wrap Run this program in a container
www Run a web server to access card HTML and PDF
...
```
### Summary of What `./workshopctl` Does For You
@@ -138,7 +121,8 @@ www Run a web server to access card HTML and PDF
### Example Steps to Launch a group of AWS Instances for a Workshop
- Run `./workshopctl start --infra infra/aws-us-east-2 --settings/myworkshop.yaml --count 60` to create 60 EC2 instances
- Run `./workshopctl start --infra infra/aws-us-east-2 --settings/myworkshop.yaml --students 50` to create 50 clusters
- The number of instances will be `students × clustersize`
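- For example, 50 students with a cluster size of 3 (an arbitrary illustration value) would yield 150 instances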
- Your local SSH key will be synced to instances under `ubuntu` user
- AWS instances will be created and tagged based on date, and IP's stored in `prepare-vms/tags/`
- Run `./workshopctl deploy TAG` to run `lib/postprep.py` via parallel-ssh
@@ -248,12 +232,19 @@ If you don't have `wkhtmltopdf` installed, you will get a warning that it is a m
#### List tags
$ ./workshopctl list infra/some-infra-file
$ ./workshopctl listall
$ ./workshopctl tags
$ ./workshopctl inventory infra/some-infra-file
$ ./workshopctl inventory
Note: the `tags` command will show only the VMs that you have provisioned
and deployed on the current machine (i.e. listed in the `tags` subdirectory).
The `inventory` command will try to list all existing VMs (including the
ones not listed in the `tags` directory, and including VMs provisioned
through other mechanisms). It is not supported across all platforms,
however.
#### Stop and destroy VMs
$ ./workshopctl stop TAG

View File

@@ -0,0 +1,24 @@
INFRACLASS=openstack-cli
# Copy this file to e.g. "openstack" or "ovh", then customize it.
# Some OpenStack providers (like OVHcloud) will let you download
# a file containing credentials; that's what you need to use.
# The values below are just examples.
export OS_AUTH_URL=https://auth.cloud.ovh.net/v3/
export OS_IDENTITY_API_VERSION=3
export OS_USER_DOMAIN_NAME=${OS_USER_DOMAIN_NAME:-"Default"}
export OS_PROJECT_DOMAIN_NAME=${OS_PROJECT_DOMAIN_NAME:-"Default"}
export OS_TENANT_ID=abcd1234
export OS_TENANT_NAME="0123456"
export OS_USERNAME="user-xyz123"
export OS_PASSWORD=AbCd1234
export OS_REGION_NAME="GRA7"
# And then some values to indicate server type, image, etc.
# You can see available flavors with `openstack flavor list`
export OS_FLAVOR=s1-4
# You can see available images with `openstack image list`
export OS_IMAGE=896c5f54-51dc-44f0-8c22-ce99ba7164df
# You can create a key with `openstack keypair create --public-key ~/.ssh/id_rsa.pub containertraining`
export OS_KEY=containertraining

View File

@@ -1,4 +1,5 @@
INFRACLASS=openstack
INFRACLASS=openstack-tf
# If you are using OpenStack, copy this file (e.g. to "openstack" or "enix")
# and customize the variables below.
export TF_VAR_user="jpetazzo"
@@ -6,4 +7,5 @@ export TF_VAR_tenant="training"
export TF_VAR_domain="Default"
export TF_VAR_password="..."
export TF_VAR_auth_url="https://api.r1.nxs.enix.io/v3"
export TF_VAR_flavor="GP1.S"
export TF_VAR_flavor="GP1.S"
export TF_VAR_image="Ubuntu 18.04"

View File

@@ -0,0 +1,5 @@
INFRACLASS=hetzner
if ! [ -f ~/.config/hcloud/cli.toml ]; then
warning "~/.config/hcloud/cli.toml not found."
warning "Make sure that the Hetzner CLI (hcloud) is installed and configured."
fi

View File

@@ -0,0 +1,3 @@
INFRACLASS=scaleway
#SCW_INSTANCE_TYPE=DEV1-L
#SCW_ZONE=fr-par-2

View File

@@ -66,7 +66,7 @@ need_infra() {
need_tag() {
if [ -z "$TAG" ]; then
die "Please specify a tag or token. To see available tags and tokens, run: $0 list"
die "Please specify a tag. To see available tags, run: $0 tags"
fi
if [ ! -d "tags/$TAG" ]; then
die "Tag $TAG not found (directory tags/$TAG does not exist)."

View File

@@ -1,5 +1,9 @@
export AWS_DEFAULT_OUTPUT=text
# Ignore SSH host key validation when connecting to these remote hosts.
# (Otherwise, deployment scripts break when a VM IP address is reused.)
SSHOPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=ERROR"
HELP=""
_cmd() {
HELP="$(printf "%s\n%-20s %s\n" "$HELP" "$1" "$2")"
@@ -43,6 +47,16 @@ _cmd_cards() {
info "$0 www"
}
_cmd clean "Remove information about stopped clusters"
_cmd_clean() {
for TAG in tags/*; do
if grep -q ^stopped$ "$TAG/status"; then
info "Removing $TAG..."
rm -rf "$TAG"
fi
done
}
_cmd deploy "Install Docker on a bunch of running VMs"
_cmd_deploy() {
TAG=$1
@@ -59,11 +73,35 @@ _cmd_deploy() {
echo deploying > tags/$TAG/status
sep "Deploying tag $TAG"
# Wait for cloudinit to be done
# If this VM image is using cloud-init,
# wait for cloud-init to be done
pssh "
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done"
if [ -d /var/lib/cloud ]; then
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
fi"
# Special case for scaleway since it doesn't come with sudo
if [ "$INFRACLASS" = "scaleway" ]; then
pssh -l root "
grep DEBIAN_FRONTEND /etc/environment || echo DEBIAN_FRONTEND=noninteractive >> /etc/environment
grep cloud-init /etc/sudoers && rm /etc/sudoers
apt-get update && apt-get install sudo -y"
fi
# FIXME
# Special case for hetzner since it doesn't have an ubuntu user
#if [ "$INFRACLASS" = "hetzner" ]; then
# pssh -l root "
#[ -d /home/ubuntu ] ||
# useradd ubuntu -m -s /bin/bash
#echo 'ubuntu ALL=(ALL:ALL) NOPASSWD:ALL' > /etc/sudoers.d/ubuntu
#[ -d /home/ubuntu/.ssh ] ||
# install --owner=ubuntu --mode=700 --directory /home/ubuntu/.ssh
#[ -f /home/ubuntu/.ssh/authorized_keys ] ||
# install --owner=ubuntu --mode=600 /root/.ssh/authorized_keys --target-directory /home/ubuntu/.ssh"
#fi
# Copy settings and install Python YAML parser
pssh -I tee /tmp/settings.yaml <tags/$TAG/settings.yaml
@@ -71,6 +109,12 @@ _cmd_deploy() {
sudo apt-get update &&
sudo apt-get install -y python-yaml"
# If there is no "python" binary, symlink to python3
#pssh "
#if ! which python; then
# ln -s $(which python3) /usr/local/bin/python
#fi"
# Copy postprep.py to the remote machines, and execute it, feeding it the list of IP addresses
pssh -I tee /tmp/postprep.py <lib/postprep.py
pssh --timeout 900 --send-input "python /tmp/postprep.py >>/tmp/pp.out 2>>/tmp/pp.err" <tags/$TAG/ips.txt
@@ -82,7 +126,7 @@ _cmd_deploy() {
# If /home/docker/.ssh/id_rsa doesn't exist, copy it from the first node
pssh "
sudo -u docker [ -f /home/docker/.ssh/id_rsa ] ||
ssh -o StrictHostKeyChecking=no \$(cat /etc/name_of_first_node) sudo -u docker tar -C /home/docker -cvf- .ssh |
ssh $SSHOPTS \$(cat /etc/name_of_first_node) sudo -u docker tar -C /home/docker -cvf- .ssh |
sudo -u docker tar -C /home/docker -xf-"
# if 'docker@' doesn't appear in /home/docker/.ssh/authorized_keys, copy it there
@@ -126,24 +170,27 @@ _cmd_kubebins() {
TAG=$1
need_tag
##VERSION##
ETCD_VERSION=v3.4.13
K8SBIN_VERSION=v1.19.11 # Can't go to 1.20 because it requires a serviceaccount signing key.
CNI_VERSION=v0.8.7
pssh --timeout 300 "
set -e
cd /usr/local/bin
if ! [ -x etcd ]; then
##VERSION##
curl -L https://github.com/etcd-io/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz \
curl -L https://github.com/etcd-io/etcd/releases/download/$ETCD_VERSION/etcd-$ETCD_VERSION-linux-amd64.tar.gz \
| sudo tar --strip-components=1 --wildcards -zx '*/etcd' '*/etcdctl'
fi
if ! [ -x hyperkube ]; then
##VERSION##
curl -L https://dl.k8s.io/v1.17.2/kubernetes-server-linux-amd64.tar.gz \
curl -L https://dl.k8s.io/$K8SBIN_VERSION/kubernetes-server-linux-amd64.tar.gz \
| sudo tar --strip-components=3 -zx \
kubernetes/server/bin/kube{ctl,let,-proxy,-apiserver,-scheduler,-controller-manager}
fi
sudo mkdir -p /opt/cni/bin
cd /opt/cni/bin
if ! [ -x bridge ]; then
curl -L https://github.com/containernetworking/plugins/releases/download/v0.7.6/cni-plugins-amd64-v0.7.6.tgz \
curl -L https://github.com/containernetworking/plugins/releases/download/$CNI_VERSION/cni-plugins-linux-amd64-$CNI_VERSION.tgz \
| sudo tar -zx
fi
"
@@ -158,7 +205,7 @@ _cmd_kube() {
KUBEVERSION=$2
if [ "$KUBEVERSION" ]; then
EXTRA_APTGET="=$KUBEVERSION-00"
EXTRA_KUBEADM="--kubernetes-version=v$KUBEVERSION"
EXTRA_KUBEADM="kubernetesVersion: v$KUBEVERSION"
else
EXTRA_APTGET=""
EXTRA_KUBEADM=""
@@ -173,13 +220,39 @@ _cmd_kube() {
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet$EXTRA_APTGET kubeadm$EXTRA_APTGET kubectl$EXTRA_APTGET &&
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl &&
echo 'alias k=kubectl' | sudo tee /etc/bash_completion.d/k &&
echo 'complete -F __start_kubectl k' | sudo tee -a /etc/bash_completion.d/k"
# Initialize kube master
# Disable swap
# (note that this won't survive across node reboots!)
if [ "$INFRACLASS" = "linode" ]; then
pssh "
sudo swapoff -a"
fi
# Initialize kube control plane
pssh --timeout 200 "
if i_am_first_node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token &&
sudo kubeadm init $EXTRA_KUBEADM --token \$(cat /tmp/token) --apiserver-cert-extra-sans \$(cat /tmp/ipv4)
cat >/tmp/kubeadm-config.yaml <<EOF
kind: InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: \$(cat /tmp/token)
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: cgroupfs
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
apiServer:
certSANs:
- \$(cat /tmp/ipv4)
$EXTRA_KUBEADM
EOF
sudo kubeadm init --config=/tmp/kubeadm-config.yaml --ignore-preflight-errors=NumCPU
fi"
# Put kubeconfig in ubuntu's and docker's accounts
@@ -203,7 +276,7 @@ _cmd_kube() {
pssh --timeout 200 "
if ! i_am_first_node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
FIRSTNODE=\$(cat /etc/name_of_first_node) &&
TOKEN=\$(ssh -o StrictHostKeyChecking=no \$FIRSTNODE cat /tmp/token) &&
TOKEN=\$(ssh $SSHOPTS \$FIRSTNODE cat /tmp/token) &&
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN \$FIRSTNODE:6443
fi"
@@ -212,17 +285,23 @@ _cmd_kube() {
if i_am_first_node; then
kubectl apply -f https://raw.githubusercontent.com/jpetazzo/container.training/master/k8s/metrics-server.yaml
fi"
}
_cmd kubetools "Install a bunch of CLI tools for Kubernetes"
_cmd_kubetools() {
TAG=$1
need_tag
# Install kubectx and kubens
pssh "
[ -d kubectx ] || git clone https://github.com/ahmetb/kubectx &&
sudo ln -sf /home/ubuntu/kubectx/kubectx /usr/local/bin/kctx &&
sudo ln -sf /home/ubuntu/kubectx/kubens /usr/local/bin/kns &&
sudo cp /home/ubuntu/kubectx/completion/*.bash /etc/bash_completion.d &&
sudo ln -sf \$HOME/kubectx/kubectx /usr/local/bin/kctx &&
sudo ln -sf \$HOME/kubectx/kubens /usr/local/bin/kns &&
sudo cp \$HOME/kubectx/completion/*.bash /etc/bash_completion.d &&
[ -d kube-ps1 ] || git clone https://github.com/jonmosco/kube-ps1 &&
sudo -u docker sed -i s/docker-prompt/kube_ps1/ /home/docker/.bashrc &&
sudo -u docker tee -a /home/docker/.bashrc <<EOF
. /home/ubuntu/kube-ps1/kube-ps1.sh
. \$HOME/kube-ps1/kube-ps1.sh
KUBE_PS1_PREFIX=""
KUBE_PS1_SUFFIX=""
KUBE_PS1_SYMBOL_ENABLE="false"
@@ -246,23 +325,85 @@ EOF"
helm completion bash | sudo tee /etc/bash_completion.d/helm
fi"
# Install kustomize
pssh "
if [ ! -x /usr/local/bin/kustomize ]; then
##VERSION##
curl -L https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v3.6.1/kustomize_v3.6.1_linux_amd64.tar.gz |
sudo tar -C /usr/local/bin -zx kustomize
echo complete -C /usr/local/bin/kustomize kustomize | sudo tee /etc/bash_completion.d/kustomize
fi"
# Install ship
# Note: 0.51.3 is the last version that doesn't display GIN-debug messages
# (don't want to get folks confused by that!)
pssh "
if [ ! -x /usr/local/bin/ship ]; then
##VERSION##
curl -L https://github.com/replicatedhq/ship/releases/download/v0.40.0/ship_0.40.0_linux_amd64.tar.gz |
curl -L https://github.com/replicatedhq/ship/releases/download/v0.51.3/ship_0.51.3_linux_amd64.tar.gz |
sudo tar -C /usr/local/bin -zx ship
fi"
# Install the AWS IAM authenticator
pssh "
if [ ! -x /usr/local/bin/aws-iam-authenticator ]; then
##VERSION##
##VERSION##
sudo curl -o /usr/local/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
sudo chmod +x /usr/local/bin/aws-iam-authenticator
fi"
sep "Done"
# Install the krew package manager
pssh "
if [ ! -d /home/docker/.krew ]; then
cd /tmp &&
curl -fsSL https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.tar.gz |
tar -zxf- &&
sudo -u docker -H ./krew-linux_amd64 install krew &&
echo export PATH=/home/docker/.krew/bin:\\\$PATH | sudo -u docker tee -a /home/docker/.bashrc
fi"
# Install k9s
pssh "
if [ ! -x /usr/local/bin/k9s ]; then
VERSION=v0.24.10 &&
FILENAME=k9s_\${VERSION}_\$(uname -s)_\$(uname -m).tar.gz &&
curl -sSL https://github.com/derailed/k9s/releases/download/\$VERSION/\$FILENAME |
sudo tar -zxvf- -C /usr/local/bin k9s
fi"
# Install popeye
pssh "
if [ ! -x /usr/local/bin/popeye ]; then
FILENAME=popeye_\$(uname -s)_\$(uname -m).tar.gz &&
curl -sSL https://github.com/derailed/popeye/releases/latest/download/\$FILENAME |
sudo tar -zxvf- -C /usr/local/bin popeye
fi"
# Install Tilt
pssh "
if [ ! -x /usr/local/bin/tilt ]; then
curl -fsSL https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.sh | bash
fi"
# Install Skaffold
pssh "
if [ ! -x /usr/local/bin/skaffold ]; then
curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 &&
sudo install skaffold /usr/local/bin/
fi"
# Install Kompose
pssh "
if [ ! -x /usr/local/bin/kompose ]; then
curl -Lo kompose https://github.com/kubernetes/kompose/releases/latest/download/kompose-linux-amd64 &&
sudo install kompose /usr/local/bin
fi"
pssh "
if [ ! -x /usr/local/bin/kubeseal ]; then
curl -Lo kubeseal https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.13.1/kubeseal-linux-amd64 &&
sudo install kubeseal /usr/local/bin
fi"
}
_cmd kubereset "Wipe out Kubernetes configuration on all nodes"
@@ -284,44 +425,44 @@ _cmd_kubetest() {
set -e
if i_am_first_node; then
which kubectl
for NODE in \$(awk /[0-9]\$/\ {print\ \\\$2} /etc/hosts); do
for NODE in \$(grep [0-9]\$ /etc/hosts | grep -v ^127 | awk {print\ \\\$2}); do
echo \$NODE ; kubectl get nodes | grep -w \$NODE | grep -w Ready
done
fi"
}
_cmd ids "(FIXME) List the instance IDs belonging to a given tag or token"
_cmd_ids() {
_cmd ips "Show the IP addresses for a given tag"
_cmd_ips() {
TAG=$1
need_tag $TAG
info "Looking up by tag:"
aws_get_instance_ids_by_tag $TAG
# Just in case we managed to create instances but weren't able to tag them
info "Looking up by token:"
aws_get_instance_ids_by_client_token $TAG
SETTINGS=tags/$TAG/settings.yaml
CLUSTERSIZE=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
while true; do
for I in $(seq $CLUSTERSIZE); do
read ip || return 0
printf "%s\t" "$ip"
done
printf "\n"
done < tags/$TAG/ips.txt
}
_cmd list "List available groups for a given infrastructure"
_cmd_list() {
need_infra $1
infra_list
}
_cmd listall "List VMs running on all configured infrastructures"
_cmd_listall() {
for infra in infra/*; do
case $infra in
infra/example.*)
;;
*)
info "Listing infrastructure $infra:"
need_infra $infra
infra_list
;;
esac
done
_cmd inventory "List all VMs on a given infrastructure (or all infras if no arg given)"
_cmd_inventory() {
case "$1" in
"")
for INFRA in infra/*; do
$0 inventory $INFRA
done
;;
*/example.*)
;;
*)
need_infra $1
sep "Listing instances for $1"
infra_list
;;
esac
}
_cmd maketag "Generate a quasi-unique tag for a group of instances"
@@ -329,7 +470,7 @@ _cmd_maketag() {
if [ -z $USER ]; then
export USER=anonymous
fi
MS=$(($(date +%N)/1000000))
MS=$(($(date +%N | tr -d 0)/1000000))
date +%Y-%m-%d-%H-%M-$MS-$USER
}
@@ -400,16 +541,6 @@ _cmd_opensg() {
infra_opensg
}
_cmd portworx "Prepare the nodes for Portworx deployment"
_cmd_portworx() {
TAG=$1
need_tag
pssh "
sudo truncate --size 10G /portworx.blk &&
sudo losetup /dev/loop4 /portworx.blk"
}
_cmd disableaddrchecks "Disable source/destination IP address checks"
_cmd_disableaddrchecks() {
TAG=$1
@@ -446,6 +577,17 @@ _cmd_remap_nodeports() {
if i_am_first_node && ! grep -q '$ADD_LINE' $MANIFEST_FILE; then
sudo sed -i 's/\($FIND_LINE\)\$/\1\n$ADD_LINE/' $MANIFEST_FILE
fi"
info "If you have manifests hard-coding nodePort values,"
info "you might want to patch them with a command like:"
info "
if i_am_first_node; then
kubectl -n kube-system patch svc prometheus-server \\
-p 'spec: { ports: [ {port: 80, nodePort: 10101} ]}'
fi
"
}
_cmd quotas "Check our infrastructure quotas (max instances)"
@@ -454,25 +596,14 @@ _cmd_quotas() {
infra_quotas
}
_cmd retag "(FIXME) Apply a new tag to a group of VMs"
_cmd_retag() {
OLDTAG=$1
NEWTAG=$2
TAG=$OLDTAG
need_tag
if [[ -z "$NEWTAG" ]]; then
die "You must specify a new tag to apply."
fi
aws_tag_instances $OLDTAG $NEWTAG
}
_cmd ssh "Open an SSH session to the first node of a tag"
_cmd_ssh() {
TAG=$1
need_tag
IP=$(head -1 tags/$TAG/ips.txt)
info "Logging into $IP"
ssh docker@$IP
ssh $SSHOPTS docker@$IP
}
_cmd start "Start a group of VMs"
@@ -481,8 +612,9 @@ _cmd_start() {
case "$1" in
--infra) INFRA=$2; shift 2;;
--settings) SETTINGS=$2; shift 2;;
--count) COUNT=$2; shift 2;;
--count) die "Flag --count is deprecated; please use --students instead." ;;
--tag) TAG=$2; shift 2;;
--students) STUDENTS=$2; shift 2;;
*) die "Unrecognized parameter: $1."
esac
done
@@ -494,8 +626,14 @@ _cmd_start() {
die "Please add --settings flag to specify which settings file to use."
fi
if [ -z "$COUNT" ]; then
COUNT=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
warning "No --count option was specified. Using value from settings file ($COUNT)."
CLUSTERSIZE=$(awk '/^clustersize:/ {print $2}' $SETTINGS)
if [ -z "$STUDENTS" ]; then
warning "Neither --count nor --students was specified."
warning "According to the settings file, the cluster size is $CLUSTERSIZE."
warning "Deploying one cluster of $CLUSTERSIZE nodes."
STUDENTS=1
fi
COUNT=$(($STUDENTS*$CLUSTERSIZE))
fi
# Check that the specified settings and infrastructure are valid.
@@ -513,11 +651,43 @@ _cmd_start() {
infra_start $COUNT
sep
info "Successfully created $COUNT instances with tag $TAG"
sep
echo created > tags/$TAG/status
info "To deploy Docker on these instances, you can run:"
info "$0 deploy $TAG"
# If the settings.yaml file has a "steps" field,
# automatically execute all the actions listed in that field.
# If an action fails, retry it up to 10 times.
python -c 'if True: # hack to deal with indentation
import sys, yaml
settings = yaml.safe_load(sys.stdin)
print ("\n".join(settings.get("steps", [])))
' < tags/$TAG/settings.yaml \
| while read step; do
if [ -z "$step" ]; then
break
fi
sep
info "Automatically executing step '$step'."
TRY=1
MAXTRY=10
while ! $0 $step $TAG ; do
TRY=$(($TRY+1))
if [ $TRY -gt $MAXTRY ]; then
error "This step ($step) failed after $MAXTRY attempts."
info "You can troubleshoot the situation manually, or terminate these instances with:"
info "$0 stop $TAG"
die "Giving up."
else
sep
info "Step '$step' failed. Let's wait 10 seconds and try again."
info "(Attempt $TRY out of $MAXTRY.)"
sleep 10
fi
done
done
sep
info "Deployment successful."
info "To log into the first machine of that batch, you can run:"
info "$0 ssh $TAG"
info "To terminate these instances, you can run:"
info "$0 stop $TAG"
}
@@ -572,18 +742,18 @@ _cmd_tmux() {
IP=$(head -1 tags/$TAG/ips.txt)
info "Opening ssh+tmux with $IP"
rm -f /tmp/tmux-$UID/default
ssh -t -L /tmp/tmux-$UID/default:/tmp/tmux-1001/default docker@$IP tmux new-session -As 0
ssh $SSHOPTS -t -L /tmp/tmux-$UID/default:/tmp/tmux-1001/default docker@$IP tmux new-session -As 0
}
_cmd helmprom "Install Helm and Prometheus"
_cmd helmprom "Install Prometheus with Helm"
_cmd_helmprom() {
TAG=$1
need_tag
pssh "
if i_am_first_node; then
sudo -u docker -H helm repo add stable https://kubernetes-charts.storage.googleapis.com/
sudo -u docker -H helm install prometheus stable/prometheus \
--namespace kube-system \
sudo -u docker -H helm upgrade --install prometheus prometheus \
--repo https://prometheus-community.github.io/helm-charts/ \
--namespace prometheus --create-namespace \
--set server.service.type=NodePort \
--set server.service.nodePort=30090 \
--set server.persistentVolume.enabled=false \
@@ -591,6 +761,31 @@ _cmd_helmprom() {
fi"
}
_cmd passwords "Set individual passwords for each cluster"
_cmd_passwords() {
TAG=$1
need_tag
PASSWORDS_FILE="tags/$TAG/passwords"
if ! [ -f "$PASSWORDS_FILE" ]; then
error "File $PASSWORDS_FILE not found. Please create it first."
error "It should contain one password per line."
error "It should have as many lines as there are clusters."
die "Aborting."
fi
N_CLUSTERS=$($0 ips "$TAG" | wc -l)
N_PASSWORDS=$(wc -l < "$PASSWORDS_FILE")
if [ "$N_CLUSTERS" != "$N_PASSWORDS" ]; then
die "Found $N_CLUSTERS clusters and $N_PASSWORDS passwords. Aborting."
fi
$0 ips "$TAG" | paste "$PASSWORDS_FILE" - | while read password nodes; do
info "Setting password for $nodes..."
for node in $nodes; do
echo docker:$password | ssh $SSHOPTS ubuntu@$node sudo chpasswd
done
done
info "Done."
}
# Sometimes, weave fails to come up on some nodes.
# Symptom: the pods on a node are unreachable (they don't even ping).
# Remedy: wipe out Weave state and delete weave pod on that node.
@@ -615,11 +810,12 @@ _cmd_webssh() {
sudo apt-get update &&
sudo apt-get install python-tornado python-paramiko -y"
pssh "
[ -d webssh ] || git clone https://github.com/jpetazzo/webssh"
cd /opt
[ -d webssh ] || sudo git clone https://github.com/jpetazzo/webssh"
pssh "
for KEYFILE in /etc/ssh/*.pub; do
read a b c < \$KEYFILE; echo localhost \$a \$b
done > webssh/known_hosts"
done | sudo tee /opt/webssh/known_hosts"
pssh "cat >webssh.service <<EOF
[Unit]
Description=webssh
@@ -628,7 +824,7 @@ Description=webssh
WantedBy=multi-user.target
[Service]
WorkingDirectory=/home/ubuntu/webssh
WorkingDirectory=/opt/webssh
ExecStart=/usr/bin/env python run.py --fbidhttp=false --port=1080 --policy=reject
User=nobody
Group=nogroup
@@ -651,11 +847,6 @@ _cmd_www() {
python3 -m http.server
}
greet() {
IAMUSER=$(aws iam get-user --query 'User.UserName')
info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
}
pull_tag() {
# Pre-pull a bunch of images
pssh --timeout 900 'for I in \
@@ -723,10 +914,7 @@ test_vm() {
"ls -la /home/docker/.ssh"; do
sep "$cmd"
echo "$cmd" \
| ssh -A -q \
-o "UserKnownHostsFile /dev/null" \
-o "StrictHostKeyChecking=no" \
$user@$ip sudo -u docker -i \
| ssh -A $SSHOPTS $user@$ip sudo -u docker -i \
|| {
status=$?
error "$cmd exit status: $status"
@@ -745,27 +933,3 @@ make_key_name() {
SHORT_FINGERPRINT=$(ssh-add -l | grep RSA | head -n1 | cut -d " " -f 2 | tr -d : | cut -c 1-8)
echo "${SHORT_FINGERPRINT}-${USER}"
}
sync_keys() {
# make sure ssh-add -l contains "RSA"
ssh-add -l | grep -q RSA \
|| die "The output of \`ssh-add -l\` doesn't contain 'RSA'. Start the agent, add your keys?"
AWS_KEY_NAME=$(make_key_name)
info "Syncing keys... "
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
aws ec2 import-key-pair --key-name $AWS_KEY_NAME \
--public-key-material "$(ssh-add -L \
| grep -i RSA \
| head -n1 \
| cut -d " " -f 1-2)" &>/dev/null
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
die "Somehow, importing the key didn't work. Make sure that 'ssh-add -l | grep RSA | head -n1' returns an RSA key?"
else
info "Imported new key $AWS_KEY_NAME."
fi
else
info "Using existing key $AWS_KEY_NAME."
fi
}

View File

@@ -1,9 +1,14 @@
if ! command -v aws >/dev/null; then
warning "AWS CLI (aws) not found."
fi
infra_list() {
aws_display_tags
aws ec2 describe-instances --output json |
jq -r '.Reservations[].Instances[] | [.InstanceId, .ClientToken, .State.Name, .InstanceType ] | @tsv'
}
infra_quotas() {
greet
aws_greet
max_instances=$(aws ec2 describe-account-attributes \
--attribute-names max-instances \
@@ -21,10 +26,10 @@ infra_start() {
COUNT=$1
# Print our AWS username, to ease the pain of credential-juggling
greet
aws_greet
# Upload our SSH keys to AWS if needed, to be added to each VM's authorized_keys
key_name=$(sync_keys)
key_name=$(aws_sync_keys)
AMI=$(aws_get_ami) # Retrieve the AWS image ID
if [ -z "$AMI" ]; then
@@ -61,7 +66,7 @@ infra_start() {
aws_tag_instances $TAG $TAG
# Wait until EC2 API tells us that the instances are running
wait_until_tag_is_running $TAG $COUNT
aws_wait_until_tag_is_running $TAG $COUNT
aws_get_instance_ips_by_tag $TAG > tags/$TAG/ips.txt
}
@@ -98,7 +103,7 @@ infra_disableaddrchecks() {
done
}
wait_until_tag_is_running() {
aws_wait_until_tag_is_running() {
max_retry=100
i=0
done_count=0
@@ -212,5 +217,34 @@ aws_tag_instances() {
aws_get_ami() {
##VERSION##
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 18.04 -t hvm:ebs -N -q
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a ${AWS_ARCHITECTURE-amd64} -v 18.04 -t hvm:ebs -N -q
}
aws_greet() {
IAMUSER=$(aws iam get-user --query 'User.UserName')
info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."
}
aws_sync_keys() {
# make sure ssh-add -l contains "RSA"
ssh-add -l | grep -q RSA \
|| die "The output of \`ssh-add -l\` doesn't contain 'RSA'. Start the agent, add your keys?"
AWS_KEY_NAME=$(make_key_name)
info "Syncing keys... "
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
aws ec2 import-key-pair --key-name $AWS_KEY_NAME \
--public-key-material "$(ssh-add -L \
| grep -i RSA \
| head -n1 \
| cut -d " " -f 1-2)" &>/dev/null
if ! aws ec2 describe-key-pairs --key-name "$AWS_KEY_NAME" &>/dev/null; then
die "Somehow, importing the key didn't work. Make sure that 'ssh-add -l | grep RSA | head -n1' returns an RSA key?"
else
info "Imported new key $AWS_KEY_NAME."
fi
else
info "Using existing key $AWS_KEY_NAME."
fi
}

View File

@@ -0,0 +1,57 @@
if ! command -v hcloud >/dev/null; then
warning "Hetzner CLI (hcloud) not found."
fi
if ! [ -f ~/.config/hcloud/cli.toml ]; then
warning "~/.config/hcloud/cli.toml not found."
fi
infra_list() {
[ "$(hcloud server list -o json)" = "null" ] && return
hcloud server list -o json |
jq -r '.[] | [.id, .name , .status, .server_type.name] | @tsv'
}
infra_start() {
COUNT=$1
HETZNER_INSTANCE_TYPE=${HETZNER_INSTANCE_TYPE-cx21}
HETZNER_DATACENTER=${HETZNER_DATACENTER-nbg1-dc3}
HETZNER_IMAGE=${HETZNER_IMAGE-168855}
for I in $(seq 1 $COUNT); do
NAME=$(printf "%s-%03d" $TAG $I)
sep "Starting instance $I/$COUNT"
info " Datacenter: $HETZNER_DATACENTER"
info " Name: $NAME"
info " Instance type: $HETZNER_INSTANCE_TYPE"
hcloud server create \
--type=${HETZNER_INSTANCE_TYPE} \
--datacenter=${HETZNER_DATACENTER} \
--image=${HETZNER_IMAGE} \
--name=$NAME \
--label=tag=$TAG \
--ssh-key ~/.ssh/id_rsa.pub
done
hetzner_get_ips_by_tag $TAG > tags/$TAG/ips.txt
}
infra_stop() {
for ID in $(hetzner_get_ids_by_tag $TAG); do
info "Scheduling deletion of instance $ID..."
hcloud server delete $ID &
done
info "Waiting for deletion to complete..."
wait
}
hetzner_get_ids_by_tag() {
TAG=$1
hcloud server list --selector=tag=$TAG -o json | jq -r .[].name
}
hetzner_get_ips_by_tag() {
TAG=$1
hcloud server list --selector=tag=$TAG -o json | jq -r .[].public_net.ipv4.ip
}

View File

@@ -0,0 +1,58 @@
if ! command -v linode-cli >/dev/null; then
warning "Linode CLI (linode-cli) not found."
fi
if ! [ -f ~/.config/linode-cli ]; then
warning "~/.config/linode-cli not found."
fi
# To view available regions: "linode-cli regions list"
LINODE_REGION=${LINODE_REGION-us-west}
# To view available types: "linode-cli linodes types"
LINODE_TYPE=${LINODE_TYPE-g6-standard-2}
infra_list() {
linode-cli linodes list --json |
jq -r '.[] | [.id, .label, .status, .type] | @tsv'
}
infra_start() {
COUNT=$1
for I in $(seq 1 $COUNT); do
NAME=$(printf "%s-%03d" $TAG $I)
sep "Starting instance $I/$COUNT"
info " Zone: $LINODE_REGION"
info " Name: $NAME"
info " Instance type: $LINODE_TYPE"
ROOT_PASS="$(base64 /dev/urandom | cut -c1-20 | head -n 1)"
linode-cli linodes create \
--type=${LINODE_TYPE} --region=${LINODE_REGION} \
--image=linode/ubuntu18.04 \
--authorized_keys="${LINODE_SSHKEY}" \
--root_pass="${ROOT_PASS}" \
--tags=${TAG} --label=${NAME}
done
sep
linode_get_ips_by_tag $TAG > tags/$TAG/ips.txt
}
infra_stop() {
info "Counting instances..."
linode_get_ids_by_tag $TAG | wc -l
info "Deleting instances..."
linode_get_ids_by_tag $TAG |
xargs -n1 -P10 \
linode-cli linodes delete
}
linode_get_ids_by_tag() {
TAG=$1
linode-cli linodes list --tags $TAG --json | jq -r ".[].id"
}
linode_get_ips_by_tag() {
TAG=$1
linode-cli linodes list --tags $TAG --json | jq -r ".[].ipv4[0]"
}

View File

@@ -0,0 +1,53 @@
infra_list() {
openstack server list -f json |
jq -r '.[] | [.ID, .Name , .Status, .Flavor] | @tsv'
}
infra_start() {
COUNT=$1
sep "Starting $COUNT instances"
info " Region: $OS_REGION_NAME"
info " User: $OS_USERNAME"
info " Flavor: $OS_FLAVOR"
info " Image: $OS_IMAGE"
openstack server create \
--flavor $OS_FLAVOR \
--image $OS_IMAGE \
--key-name $OS_KEY \
--min $COUNT --max $COUNT \
--property workshopctl=$TAG \
$TAG
sep "Waiting for IP addresses to be available"
GOT=0
while [ "$GOT" != "$COUNT" ]; do
echo "Got $GOT/$COUNT IP addresses."
oscli_get_ips_by_tag $TAG > tags/$TAG/ips.txt
GOT="$(wc -l < tags/$TAG/ips.txt)"
done
}
infra_stop() {
info "Counting instances..."
oscli_get_instances_json $TAG |
jq -r .[].Name |
wc -l
info "Deleting instances..."
oscli_get_instances_json $TAG |
jq -r .[].Name |
xargs -P10 -n1 openstack server delete
info "Done."
}
oscli_get_instances_json() {
TAG=$1
openstack server list -f json --name "${TAG}-[0-9]*"
}
oscli_get_ips_by_tag() {
TAG=$1
oscli_get_instances_json $TAG |
jq -r .[].Networks | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' || true
}

View File

@@ -0,0 +1,28 @@
infra_start() {
COUNT=$1
cp terraform/*.tf tags/$TAG
(
cd tags/$TAG
if ! terraform init; then
error "'terraform init' failed."
error "If it mentions the following error message:"
error "openpgp: signature made by unknown entity."
error "Then you need to upgrade Terraform to 0.11.15"
error "to upgrade its signing keys following the"
error "codecov breach."
die "Aborting."
fi
echo prefix = \"$TAG\" >> terraform.tfvars
echo count = \"$COUNT\" >> terraform.tfvars
terraform apply -auto-approve
terraform output ip_addresses > ips.txt
)
}
infra_stop() {
(
cd tags/$TAG
terraform destroy -auto-approve
)
}

View File

@@ -1,20 +0,0 @@
infra_start() {
COUNT=$1
cp terraform/*.tf tags/$TAG
(
cd tags/$TAG
terraform init
echo prefix = \"$TAG\" >> terraform.tfvars
echo count = \"$COUNT\" >> terraform.tfvars
terraform apply -auto-approve
terraform output ip_addresses > ips.txt
)
}
infra_stop() {
(
cd tags/$TAG
terraform destroy -auto-approve
)
}

View File

@@ -0,0 +1,51 @@
if ! command -v scw >/dev/null; then
warning "Scaleway CLI (scw) not found."
fi
if ! [ -f ~/.config/scw/config.yaml ]; then
warning "~/.config/scw/config.yaml not found."
fi
SCW_INSTANCE_TYPE=${SCW_INSTANCE_TYPE-DEV1-M}
SCW_ZONE=${SCW_ZONE-fr-par-1}
infra_list() {
scw instance server list -o json |
jq -r '.[] | [.id, .name, .state, .commercial_type] | @tsv'
}
infra_start() {
COUNT=$1
for I in $(seq 1 $COUNT); do
NAME=$(printf "%s-%03d" $TAG $I)
sep "Starting instance $I/$COUNT"
info " Zone: $SCW_ZONE"
info " Name: $NAME"
info " Instance type: $SCW_INSTANCE_TYPE"
scw instance server create \
type=${SCW_INSTANCE_TYPE} zone=${SCW_ZONE} \
image=ubuntu_bionic name=${NAME}
done
sep
scw_get_ips_by_tag $TAG > tags/$TAG/ips.txt
}
infra_stop() {
info "Counting instances..."
scw_get_ids_by_tag $TAG | wc -l
info "Deleting instances..."
scw_get_ids_by_tag $TAG |
xargs -n1 -P10 \
scw instance server delete zone=${SCW_ZONE} force-shutdown=true with-ip=true
}
scw_get_ids_by_tag() {
TAG=$1
scw instance server list zone=${SCW_ZONE} name=$TAG -o json | jq -r .[].id
}
scw_get_ips_by_tag() {
TAG=$1
scw instance server list zone=${SCW_ZONE} name=$TAG -o json | jq -r .[].public_ip.address
}

View File

@@ -0,0 +1,23 @@
infra_disableaddrchecks() {
die "unimplemented"
}
infra_list() {
die "unimplemented"
}
infra_opensg() {
die "unimplemented"
}
infra_quotas() {
die "unimplemented"
}
infra_start() {
die "unimplemented"
}
infra_stop() {
die "unimplemented"
}

View File

@@ -37,7 +37,7 @@ def system(cmd):
td = str(t2-t1)[:5]
f.write(bold("[{}] in {}s\n".format(retcode, td)))
STEP += 1
with open("/home/ubuntu/.bash_history", "a") as f:
with open(os.environ["HOME"] + "/.bash_history", "a") as f:
f.write("{}\n".format(cmd))
if retcode != 0:
msg = "The following command failed with exit code {}:\n".format(retcode)
@@ -114,7 +114,7 @@ system("sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /e
system("sudo service ssh restart")
system("sudo apt-get -q update")
system("sudo apt-get -qy install git jq")
system("sudo apt-get -qy install git jid jq")
system("sudo apt-get -qy install emacs-nox joe")
#######################

View File

@@ -18,7 +18,13 @@ pssh() {
echo "[parallel-ssh] $@"
export PSSH=$(which pssh || which parallel-ssh)
$PSSH -h $HOSTFILE -l ubuntu \
case "$INFRACLASS" in
hetzner) LOGIN=root ;;
linode) LOGIN=root ;;
*) LOGIN=ubuntu ;;
esac
$PSSH -h $HOSTFILE -l $LOGIN \
--par 100 \
-O LogLevel=ERROR \
-O UserKnownHostsFile=/dev/null \

View File

@@ -1,49 +1,71 @@
#!/usr/bin/env python
"""
There are two ways to use this script:
1. Pass a file name as the 1st argument and a tag name as the 2nd argument.
It will load a list of domains from the given file (one per line),
and assign them to the clusters corresponding to that tag.
There should be more domains than clusters.
Example: ./map-dns.py domains.txt 2020-08-15-jp
2. Pass a domain as the 1st argument, and IP addresses as the remaining arguments.
It will configure the domain with the listed IP addresses.
Example: ./map-dns.py open-duck.site 1.2.3.4 2.3.4.5 3.4.5.6
In both cases, the domains should be configured to use GANDI LiveDNS.
"""
import os
import requests
import sys
import yaml
# configurable stuff
domains_file = "../../plentydomains/domains.txt"
# This can be tweaked if necessary.
config_file = os.path.join(
os.environ["HOME"], ".config/gandi/config.yaml")
tag = "test"
os.environ["HOME"], ".config/gandi/config.yaml")
apiurl = "https://dns.api.gandi.net/api/v5/domains"
# inferred stuff
domains = open(domains_file).read().split()
apikey = yaml.safe_load(open(config_file))["apirest"]["key"]
ips = open(f"tags/{tag}/ips.txt").read().split()
settings_file = f"tags/{tag}/settings.yaml"
clustersize = yaml.safe_load(open(settings_file))["clustersize"]
# now do the fucking work
# Figure out if we're called for a bunch of domains, or just one.
domain_or_domain_file = sys.argv[1]
if os.path.isfile(domain_or_domain_file):
domains = open(domain_or_domain_file).read().split()
domains = [ d for d in domains if not d.startswith('#') ]
tag = sys.argv[2]
ips = open(f"tags/{tag}/ips.txt").read().split()
settings_file = f"tags/{tag}/settings.yaml"
clustersize = yaml.safe_load(open(settings_file))["clustersize"]
else:
domains = [domain_or_domain_file]
ips = sys.argv[2:]
clustersize = len(ips)
# Now, do the work.
while domains and ips:
domain = domains[0]
domains = domains[1:]
cluster = ips[:clustersize]
ips = ips[clustersize:]
print(f"{domain} => {cluster}")
zone = ""
node = 0
for ip in cluster:
node += 1
zone += f"@ 300 IN A {ip}\n"
zone += f"* 300 IN A {ip}\n"
zone += f"node{node} 300 IN A {ip}\n"
r = requests.put(
f"{apiurl}/{domain}/records",
headers={"x-api-key": apikey},
data=zone)
print(r.text)
domain = domains[0]
domains = domains[1:]
cluster = ips[:clustersize]
ips = ips[clustersize:]
print(f"{domain} => {cluster}")
zone = ""
node = 0
for ip in cluster:
node += 1
zone += f"@ 300 IN A {ip}\n"
zone += f"* 300 IN A {ip}\n"
zone += f"node{node} 300 IN A {ip}\n"
r = requests.put(
f"{apiurl}/{domain}/records",
headers={"x-api-key": apikey},
data=zone)
print(r.text)
#r = requests.get(
# f"{apiurl}/{domain}/records",
# headers={"x-api-key": apikey},
# )
#r = requests.get(
# f"{apiurl}/{domain}/records",
# headers={"x-api-key": apikey},
# )
if domains:
print(f"Good, we have {len(domains)} domains left.")
print(f"Good, we have {len(domains)} domains left.")
if ips:
print(f"Crap, we have {len(ips)} IP addresses left.")
print(f"Crap, we have {len(ips)} IP addresses left.")

View File

@@ -21,3 +21,9 @@ machine_version: 0.15.0
# Password used to connect with the "docker user"
docker_user_password: training
steps:
- deploy
- webssh
- tailhist
- cards

View File

@@ -1,24 +0,0 @@
# 3 nodes for k8s 101 workshops
# Number of VMs per cluster
clustersize: 3
# The hostname of each node will be clusterprefix + a number
clusterprefix: node
# Jinja2 template to use to generate ready-to-cut cards
cards_template: cards.html
# Use "Letter" in the US, and "A4" everywhere else
paper_size: Letter
# This can be "test" or "stable"
engine_version: stable
# These correspond to the version numbers visible on their respective GitHub release pages
compose_version: 1.24.1
machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training

View File

@@ -20,3 +20,11 @@ machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training
steps:
- deploy
- webssh
- tailhist
- kube
- kubetools
- cards
- kubetest

View File

@@ -30,11 +30,13 @@ TAG=$PREFIX-$SETTINGS
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $STUDENTS
--students $STUDENTS
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl disabledocker $TAG
retry 5 ./workshopctl kubebins $TAG
retry 5 ./workshopctl webssh $TAG
retry 5 ./workshopctl tailhist $TAG
./workshopctl cards $TAG
SETTINGS=admin-kubenet
@@ -43,11 +45,13 @@ TAG=$PREFIX-$SETTINGS
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $((3*$STUDENTS))
--students $STUDENTS
retry 5 ./workshopctl disableaddrchecks $TAG
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl kubebins $TAG
retry 5 ./workshopctl webssh $TAG
retry 5 ./workshopctl tailhist $TAG
./workshopctl cards $TAG
SETTINGS=admin-kuberouter
@@ -56,11 +60,13 @@ TAG=$PREFIX-$SETTINGS
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $((3*$STUDENTS))
--students $STUDENTS
retry 5 ./workshopctl disableaddrchecks $TAG
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl kubebins $TAG
retry 5 ./workshopctl webssh $TAG
retry 5 ./workshopctl tailhist $TAG
./workshopctl cards $TAG
#INFRA=infra/aws-us-west-1
@@ -73,8 +79,9 @@ TAG=$PREFIX-$SETTINGS
--tag $TAG \
--infra $INFRA \
--settings settings/$SETTINGS.yaml \
--count $((3*$STUDENTS))
--students $STUDENTS
retry 5 ./workshopctl deploy $TAG
retry 5 ./workshopctl kube $TAG 1.15.9
./workshopctl cards $TAG
retry 5 ./workshopctl kube $TAG 1.19.11
retry 5 ./workshopctl webssh $TAG
retry 5 ./workshopctl tailhist $TAG

View File

@@ -1,7 +1,7 @@
resource "openstack_compute_instance_v2" "machine" {
count = "${var.count}"
name = "${format("%s-%04d", "${var.prefix}", count.index+1)}"
image_name = "Ubuntu 16.04.5 (Xenial Xerus)"
image_name = "${var.image}"
flavor_name = "${var.flavor}"
security_groups = ["${openstack_networking_secgroup_v2.full_access.name}"]
key_pair = "${openstack_compute_keypair_v2.ssh_deploy_key.name}"
@@ -30,3 +30,5 @@ output "ip_addresses" {
}
variable "flavor" {}
variable "image" {}

View File

@@ -5,4 +5,3 @@ variable "prefix" {
variable "count" {
type = "string"
}

View File

@@ -15,9 +15,9 @@ for lib in lib/*.sh; do
done
DEPENDENCIES="
aws
ssh
curl
fping
jq
pssh
wkhtmltopdf

View File

@@ -1,15 +1,24 @@
# Uncomment and/or edit one of the following lines if necessary.
#/ /kube-halfday.yml.html 200
#/ /kube-fullday.yml.html 200
#/ /kube-twodays.yml.html 200
#/ /kube-halfday.yml.html 200!
#/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200!
/ /week1.yml.html 200!
# And this allows to do "git clone https://container.training".
/info/refs service=git-upload-pack https://github.com/jpetazzo/container.training/info/refs?service=git-upload-pack
#/dockermastery https://www.udemy.com/course/docker-mastery/?referralCode=1410924A733D33635CCB
#/kubernetesmastery https://www.udemy.com/course/kubernetesmastery/?referralCode=7E09090AF9B79E6C283F
/dockermastery https://www.udemy.com/course/docker-mastery/?couponCode=SWEETFEBSALEC1
/kubernetesmastery https://www.udemy.com/course/kubernetesmastery/?couponCode=SWEETFEBSALEC4
/dockermastery https://www.udemy.com/course/docker-mastery/?referralCode=1410924A733D33635CCB
/kubernetesmastery https://www.udemy.com/course/kubernetesmastery/?referralCode=7E09090AF9B79E6C283F
#/dockermastery https://www.udemy.com/course/docker-mastery/?couponCode=DOCKERALLDAY
#/kubernetesmastery https://www.udemy.com/course/kubernetesmastery/?couponCode=DOCKERALLDAY
# Shortlink for the QRCode
/q /qrcode.html 200
# Shortlinks for next training in English and French
#/next https://www.eventbrite.com/e/livestream-intensive-kubernetes-bootcamp-tickets-103262336428
/next https://skillsmatter.com/courses/700-advanced-kubernetes-concepts-workshop-jerome-petazzoni
/hi5 https://enix.io/fr/services/formation/online/
# Survey form
/please https://docs.google.com/forms/d/e/1FAIpQLSfIYSgrV7tpfBNm1hOaprjnBHgWKn5n-k5vtNXYJkOX1sRxng/viewform

View File

@@ -233,7 +233,7 @@ def setup_tmux_and_ssh():
ipaddr = "$IPADDR"
uid = os.getuid()
raise Exception("""
raise Exception(r"""
1. If you're running this directly from a node:
tmux
@@ -247,6 +247,16 @@ rm -f /tmp/tmux-{uid}/default && ssh -t -L /tmp/tmux-{uid}/default:/tmp/tmux-100
3. If you cannot control a remote tmux:
tmux new-session ssh docker@{ipaddr}
4. If you are running this locally with a remote cluster, make sure your prompt has the expected format:
tmux
IPADDR=$(
kubectl get nodes -o json |
jq -r '.items[0].status.addresses[] | select(.type=="ExternalIP") | .address'
)
export PS1="\n[{ipaddr}] \u@\h:\w\n\$ "
""".format(uid=uid, ipaddr=ipaddr))
else:
logging.info("Found tmux session. Trying to acquire shell prompt.")

File diff suppressed because it is too large

View File

@@ -3,6 +3,6 @@
"version": "0.0.1",
"dependencies": {
"express": "^4.16.2",
"socket.io": "^2.0.4"
"socket.io": "^2.4.0"
}
}

View File

@@ -1,7 +1,7 @@
class: title
# Advanced Dockerfiles
# Advanced Dockerfile Syntax
![construction](images/title-advanced-dockerfiles.jpg)
@@ -12,7 +12,10 @@ class: title
We have seen simple Dockerfiles to illustrate how Docker builds
container images.
In this section, we will see more Dockerfile commands.
In this section, we will give a recap of the Dockerfile syntax,
and introduce advanced Dockerfile commands that we might
come across sometimes, or that we might want to use in some
specific scenarios.
---
@@ -420,3 +423,8 @@ ONBUILD COPY . /src
* You can't chain `ONBUILD` instructions with `ONBUILD`.
* `ONBUILD` can't be used to trigger `FROM` instructions.
???
:EN:- Advanced Dockerfile syntax
:FR:- Dockerfile niveau expert

View File

@@ -44,6 +44,64 @@ Fri Feb 20 00:28:55 UTC 2015
---
## When `^C` doesn't work...
Sometimes, `^C` won't be enough.
Why? And how can we stop the container in that case?
---
## What happens when we hit `^C`
`SIGINT` gets sent to the container, which means:
- `SIGINT` gets sent to PID 1 (default case)
- `SIGINT` gets sent to *foreground processes* when running with `-ti`
But there is a special case for PID 1: it ignores all signals!
- except `SIGKILL` and `SIGSTOP`
- except signals handled explicitly
TL;DR: there are many circumstances when `^C` won't stop the container.
---
class: extra-details
## Why is PID 1 special?
- PID 1 has some extra responsibilities:
- it starts (directly or indirectly) every other process
- when a process exits, its child processes are "reparented" under PID 1
- When PID 1 exits, everything stops:
- on a "regular" machine, it causes a kernel panic
- in a container, it kills all the processes
- We don't want PID 1 to stop accidentally
- That's why it has these extra protections
---
## How to stop these containers, then?
- Start another terminal and forget about them
(for now!)
- We'll shortly learn about `docker kill`
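For reference, here is a minimal sketch of what that looks like (the container ID is a placeholder):

```bash
# From another terminal: find the stuck container, then force-stop it.
# SIGKILL cannot be ignored, not even by PID 1.
docker ps
docker kill <container_id>
```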
---
## Run a container in the background
Containers can be started in the background, with the `-d` flag (daemon mode):
@@ -280,3 +338,8 @@ CONTAINER ID IMAGE ... CREATED STATUS
5c1dfd4d81f1 jpetazzo/clock ... 40 min. ago Exited (0) 40 min. ago
b13c164401fb ubuntu ... 55 min. ago Exited (130) 53 min. ago
```
???
:EN:- Foreground and background containers
:FR:- Exécution interactive ou en arrière-plan

View File

@@ -131,7 +131,7 @@ root@fcfb62f0bfde:/# figlet hello
|_| |_|\___|_|_|\___/
```
It works! .emoji[🎉]
It works! 🎉
---
@@ -167,3 +167,8 @@ Automated process = good.
In the next chapter, we will learn how to automate the build
process by writing a `Dockerfile`.
???
:EN:- Building our first images interactively
:FR:- Fabriquer nos premières images à la main

View File

@@ -89,6 +89,44 @@ To keep things simple for now: this is the directory where our Dockerfile is loc
## What happens when we build the image?
It depends if we're using BuildKit or not!
If there are lots of blue lines and the first line looks like this:
```
[+] Building 1.8s (4/6)
```
... then we're using BuildKit.
If the output is mostly black-and-white and the first line looks like this:
```
Sending build context to Docker daemon 2.048kB
```
... then we're using the "classic" or "old-style" builder.
---
## To BuildKit or Not To BuildKit
Classic builder:
- copies the whole "build context" to the Docker Engine
- linear (processes lines one after the other)
- requires a full Docker Engine
BuildKit:
- only transfers parts of the "build context" when needed
- will parallelize operations (when possible)
- can run in non-privileged containers (e.g. on Kubernetes)
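If the classic builder is the default on our machine, we can opt into BuildKit for a single build (a minimal sketch, reusing the `figlet` image name from these examples):

```bash
# Enable BuildKit for this one build, without touching the daemon config
DOCKER_BUILDKIT=1 docker build -t figlet .
```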
---
## With the classic builder
The output of `docker build` looks like this:
.small[
@@ -131,7 +169,7 @@ Sending build context to Docker daemon 2.048 kB
* Be careful (or patient) if that directory is big and your link is slow.
* You can speed up the process with a [`.dockerignore`](https://docs.docker.com/engine/reference/builder/#dockerignore-file) file
* You can speed up the process with a [`.dockerignore`](https://docs.docker.com/engine/reference/builder/#dockerignore-file) file
* It tells docker to ignore specific files in the directory
@@ -161,6 +199,64 @@ Removing intermediate container e01b294dbffd
---
## With BuildKit
.small[
```bash
[+] Building 7.9s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 98B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 1.2s
=> [1/3] FROM docker.io/library/ubuntu@sha256:cf31af331f38d1d7158470e095b132acd126a7180a54f263d386 3.2s
=> => resolve docker.io/library/ubuntu@sha256:cf31af331f38d1d7158470e095b132acd126a7180a54f263d386 0.0s
=> => sha256:cf31af331f38d1d7158470e095b132acd126a7180a54f263d386da88eb681d93 1.20kB / 1.20kB 0.0s
=> => sha256:1de4c5e2d8954bf5fa9855f8b4c9d3c3b97d1d380efe19f60f3e4107a66f5cae 943B / 943B 0.0s
=> => sha256:6a98cbe39225dadebcaa04e21dbe5900ad604739b07a9fa351dd10a6ebad4c1b 3.31kB / 3.31kB 0.0s
=> => sha256:80bc30679ac1fd798f3241208c14accd6a364cb8a6224d1127dfb1577d10554f 27.14MB / 27.14MB 2.3s
=> => sha256:9bf18fab4cfbf479fa9f8409ad47e2702c63241304c2cdd4c33f2a1633c5f85e 850B / 850B 0.5s
=> => sha256:5979309c983a2adeff352538937475cf961d49c34194fa2aab142effe19ed9c1 189B / 189B 0.4s
=> => extracting sha256:80bc30679ac1fd798f3241208c14accd6a364cb8a6224d1127dfb1577d10554f 0.7s
=> => extracting sha256:9bf18fab4cfbf479fa9f8409ad47e2702c63241304c2cdd4c33f2a1633c5f85e 0.0s
=> => extracting sha256:5979309c983a2adeff352538937475cf961d49c34194fa2aab142effe19ed9c1 0.0s
=> [2/3] RUN apt-get update 2.5s
=> [3/3] RUN apt-get install figlet 0.9s
=> exporting to image 0.1s
=> => exporting layers 0.1s
=> => writing image sha256:3b8aee7b444ab775975dfba691a72d8ac24af2756e0a024e056e3858d5a23f7c 0.0s
=> => naming to docker.io/library/figlet 0.0s
```
]
---
## Understanding BuildKit output
- BuildKit transfers the Dockerfile and the *build context*
(these are the first two `[internal]` stages)
- Then it executes the steps defined in the Dockerfile
(`[1/3]`, `[2/3]`, `[3/3]`)
- Finally, it exports the result of the build
(image definition + collection of layers)
---
class: extra-details
## BuildKit plain output
- When running BuildKit in e.g. a CI pipeline, its output will be different
- We can see the same output format by using `--progress=plain`
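For example (sketch):

```bash
# Plain, line-by-line output (handy in CI logs); BuildKit must be enabled
DOCKER_BUILDKIT=1 docker build --progress=plain -t figlet .
```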
---
## The caching system
If you run the same build again, it will be instantaneous. Why?
@@ -171,10 +267,10 @@ If you run the same build again, it will be instantaneous. Why?
* Docker uses the exact strings defined in your Dockerfile, so:
* `RUN apt-get install figlet cowsay `
* `RUN apt-get install figlet cowsay`
<br/> is different from
<br/> `RUN apt-get install cowsay figlet`
* `RUN apt-get update` is not re-executed when the mirrors are updated
You can force a rebuild with `docker build --no-cache ...`.
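For instance (sketch):

```bash
# Ignore the layer cache and re-execute every step of the Dockerfile
docker build --no-cache -t figlet .
```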
@@ -196,7 +292,7 @@ root@91f3c974c9a1:/# figlet hello
```
Yay! .emoji[🎉]
Yay! 🎉
---
@@ -363,3 +459,10 @@ In this example, `sh -c` will still be used, but
The shell gets replaced by `figlet` when `figlet` starts execution.
This allows us to run processes as PID 1 without using JSON.
???
:EN:- Towards automated, reproducible builds
:EN:- Writing our first Dockerfile
:FR:- Rendre le processus automatique et reproductible
:FR:- Écrire son premier Dockerfile

View File

@@ -272,3 +272,46 @@ $ docker run -it --entrypoint bash myfiglet
root@6027e44e2955:/#
```
---
## `CMD` and `ENTRYPOINT` recap
- `docker run myimage` executes `ENTRYPOINT` + `CMD`
- `docker run myimage args` executes `ENTRYPOINT` + `args` (overriding `CMD`)
- `docker run --entrypoint prog myimage` executes `prog` (overriding both)
.small[
| Command | `ENTRYPOINT` | `CMD` | Result
|---------------------------------|--------------------|---------|-------
| `docker run figlet` | none | none | Use values from base image (`bash`)
| `docker run figlet hola` | none | none | Error (executable `hola` not found)
| `docker run figlet` | `figlet -f script` | none | `figlet -f script`
| `docker run figlet hola` | `figlet -f script` | none | `figlet -f script hola`
| `docker run figlet` | none | `figlet -f script` | `figlet -f script`
| `docker run figlet hola` | none | `figlet -f script` | Error (executable `hola` not found)
| `docker run figlet` | `figlet -f script` | `hello` | `figlet -f script hello`
| `docker run figlet hola` | `figlet -f script` | `hello` | `figlet -f script hola`
]
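Concretely, with the `myfiglet` image built earlier in this chapter (sketch):

```bash
docker run myfiglet                    # ENTRYPOINT + CMD
docker run myfiglet hola               # ENTRYPOINT + "hola" (CMD is overridden)
docker run --entrypoint bash myfiglet  # bash only (ENTRYPOINT and CMD are overridden)
```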
---
## When to use `ENTRYPOINT` vs `CMD`
`ENTRYPOINT` is great for "containerized binaries".
Example: `docker run consul --help`
(Pretend that the `docker run` part isn't there!)
`CMD` is great for images with multiple binaries.
Example: `docker run busybox ifconfig`
(It makes sense to indicate *which* program we want to run!)
???
:EN:- CMD and ENTRYPOINT
:FR:- CMD et ENTRYPOINT

View File

@@ -1,51 +1,40 @@
# Compose for development stacks
Dockerfiles are great to build container images.
Dockerfile = great to build *one* container image.
But what if we work with a complex stack made of multiple containers?
What if we have multiple containers?
Eventually, we will want to write some custom scripts and automation to build, run, and connect
our containers together.
What if some of them require particular `docker run` parameters?
There is a better way: using Docker Compose.
How do we connect them all together?
In this section, you will use Compose to bootstrap a development environment.
... Compose solves these use-cases (and a few more).
---
## What is Docker Compose?
## Life before Compose
Docker Compose (formerly known as `fig`) is an external tool.
Before we had Compose, we would typically write custom scripts to:
Unlike the Docker Engine, it is written in Python. It's open source as well.
- build container images,
The general idea of Compose is to enable a very simple, powerful onboarding workflow:
- run containers using these images,
1. Checkout your code.
- connect the containers together,
- rebuild, restart, update these images and containers.
---
## Life with Compose
Compose enables a simple, powerful onboarding workflow:
1. Checkout our code.
2. Run `docker-compose up`.
3. Your app is up and running!
---
## Compose overview
This is how you work with Compose:
* You describe a set (or stack) of containers in a YAML file called `docker-compose.yml`.
* You run `docker-compose up`.
* Compose automatically pulls images, builds containers, and starts them.
* Compose can set up links, volumes, and other Docker options for you.
* Compose can run the containers in the background, or in the foreground.
* When containers are running in the foreground, their aggregated output is shown.
Before diving in, let's see a small example of Compose in action.
3. Our app is up and running!
---
@@ -55,20 +44,61 @@ class: pic
---
## Checking if Compose is installed
## Life after Compose
If you are using the official training virtual machines, Compose has been
pre-installed.
(Or: when do we need something else?)
If you are using Docker for Mac/Windows or the Docker Toolbox, Compose comes with them.
- Compose is *not* an orchestrator
If you are on Linux (desktop or server environment), you will need to install Compose from its [release page](https://github.com/docker/compose/releases) or with `pip install docker-compose`.
- It isn't designed to run containers on multiple nodes
You can always check that it is installed by running:
(it can, however, work with Docker Swarm Mode)
```bash
$ docker-compose --version
```
- Compose isn't ideal if we want to run containers on Kubernetes
- it uses different concepts (Compose services ≠ Kubernetes services)
- it needs a Docker Engine (although containerd support might be coming)
---
## First rodeo with Compose
1. Write Dockerfiles
2. Describe our stack of containers in a YAML file called `docker-compose.yml`
3. `docker-compose up` (or `docker-compose up -d` to run in the background)
4. Compose pulls and builds the required images, and starts the containers
5. Compose shows the combined logs of all the containers
(if running in the background, use `docker-compose logs`)
6. Hit Ctrl-C to stop the whole stack
(if running in the background, use `docker-compose stop`)
---
## Iterating
After making changes to our source code, we can:
1. `docker-compose build` to rebuild container images
2. `docker-compose up` to restart the stack with the new images
We can also combine both with `docker-compose up --build`
Compose will be smart, and only recreate the containers that have changed.
When working with interpreted languages:
- don't rebuild each time
- leverage a `volumes` section instead
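One way to do that (a hedged sketch; the `www` service name comes from the demo app, and the `/src` path is an assumption to adapt):

```bash
# Bind-mount the source code into the www container, so edits are
# picked up live without rebuilding the image.
cat > docker-compose.override.yml <<'EOF'
version: "3"
services:
  www:
    volumes:
      - ./www:/src
EOF
docker-compose up -d
```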
---
@@ -77,38 +107,37 @@ $ docker-compose --version
First step: clone the source code for the app we will be working on.
```bash
$ cd
$ git clone https://github.com/jpetazzo/trainingwheels
...
$ cd trainingwheels
git clone https://github.com/jpetazzo/trainingwheels
cd trainingwheels
```
Second step: start your app.
Second step: start the app.
```bash
$ docker-compose up
docker-compose up
```
Watch Compose build and run your app with the correct parameters,
including linking the relevant containers together.
Watch Compose build and run the app.
That Compose stack exposes a web server on port 8000; try connecting to it.
---
## Launching Our First Stack with Compose
Verify that the app is running at `http://<yourHostIP>:8000`.
We should see a web page like this:
![composeapp](images/composeapp.png)
Each time we reload, the counter should increase.
---
## Stopping the app
When you hit `^C`, Compose tries to gracefully terminate all of the containers.
When we hit Ctrl-C, Compose tries to gracefully terminate all of the containers.
After ten seconds (or if you press `^C` again) it will forcibly kill
them.
After ten seconds (or if we press `^C` again) it will forcibly kill them.
---
@@ -118,13 +147,13 @@ Here is the file used in the demo:
.small[
```yaml
version: "2"
version: "3"
services:
www:
build: www
ports:
- 8000:5000
- ${PORT-8000}:5000
user: nobody
environment:
DEBUG: 1
@@ -143,9 +172,9 @@ services:
A Compose file has multiple sections:
* `version` is mandatory. (We should use `"2"` or later; version 1 is deprecated.)
* `version` is mandatory. (Typically use "3".)
* `services` is mandatory. A service is one or more replicas of the same image running as containers.
* `services` is mandatory. Each service corresponds to a container.
* `networks` is optional and indicates to which networks containers should be connected.
<br/>(By default, containers will be connected on a private, per-compose-file network.)
@@ -164,6 +193,8 @@ A Compose file has multiple sections:
* Version 3 added support for deployment options (scaling, rolling updates, etc).
* Typically use `version: "3"`.
The [Docker documentation](https://docs.docker.com/compose/compose-file/)
has excellent information about the Compose file format if you need to know more about versions.
@@ -201,34 +232,45 @@ For the full list, check: https://docs.docker.com/compose/compose-file/
---
## Compose commands
## Environment variables
We already saw `docker-compose up`, but another one is `docker-compose build`.
- We can use environment variables in Compose files
It will execute `docker build` for all containers mentioning a `build` path.
(like `$THIS` or `${THAT}`)
It can also be invoked automatically when starting the application:
- We can provide default values, e.g. `${PORT-8000}`
```bash
docker-compose up --build
```
- Compose will also automatically load the environment file `.env`
Another common option is to start containers in the background:
(it should contain `VAR=value`, one per line)
```bash
docker-compose up -d
```
- This is a great way to customize build and run parameters
(base image versions to use, build and run secrets, port numbers...)
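For instance, combined with the `${PORT-8000}` substitution shown in the example Compose file (sketch):

```bash
# Compose automatically reads .env from the project directory
echo "PORT=9000" > .env
docker-compose up -d     # the web service is now published on port 9000
```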
---
## Check container status
## Running multiple copies of a stack
It can be tedious to check the status of your containers with `docker ps`,
especially when running multiple apps at the same time.
- Copy the stack in two different directories, e.g. `front` and `frontcopy`
Compose makes it easier; with `docker-compose ps` you will see only the status of the
containers of the current stack:
- Compose prefixes images and containers with the directory name:
`front_www`, `front_www_1`, `front_db_1`
`frontcopy_www`, `frontcopy_www_1`, `frontcopy_db_1`
- Alternatively, use `docker-compose -p frontcopy`
(to set the `--project-name` of a stack, which defaults to the directory name)
- Each copy is isolated from the others (runs on a different network)
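In practice (sketch):

```bash
# Start a second, fully isolated copy of the same stack
docker-compose -p frontcopy up -d
docker-compose -p frontcopy ps    # pass -p to every later command as well
```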
---
## Checking stack status
We have `ps`, `docker ps`, and similarly, `docker-compose ps`:
```bash
$ docker-compose ps
@@ -238,6 +280,10 @@ trainingwheels_redis_1 /entrypoint.sh red Up 6379/tcp
trainingwheels_www_1 python counter.py Up 0.0.0.0:8000->5000/tcp
```
Shows the status of all the containers of our stack.
Doesn't show the other containers.
---
## Cleaning up (1)
@@ -281,44 +327,44 @@ Use `docker-compose down -v` to remove everything including volumes.
## Special handling of volumes
Compose is smart. If your container uses volumes, when you restart your
application, Compose will create a new container, but carefully re-use
the volumes it was using previously.
- When an image gets updated, Compose automatically creates a new container
This makes it easy to upgrade a stateful service, by pulling its
new image and just restarting your stack with Compose.
- The data in the old container is lost...
- ... Except if the container is using a *volume*
- Compose will then re-attach that volume to the new container
(and data is then retained across database upgrades)
- All good database images use volumes
(e.g. all official images)
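For example (a hedged sketch, assuming a service named `db` backed by a volume):

```bash
# Pull the newer image and recreate only that container;
# the volume is re-attached, so the data survives the upgrade.
docker-compose pull db
docker-compose up -d db
```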
---
## Compose project name
class: extra-details
* When you run a Compose command, Compose infers the "project name" of your app.
## A bit of history and trivia
* By default, the "project name" is the name of the current directory.
- Compose was initially named "Fig"
* For instance, if you are in `/home/zelda/src/ocarina`, the project name is `ocarina`.
- Compose is one of the only components of Docker written in Python
* All resources created by Compose are tagged with this project name.
(almost everything else is in Go)
* The project name also appears as a prefix of the names of the resources.
- In 2020, Docker introduced "Compose CLI":
E.g. in the previous example, service `www` will create a container `ocarina_www_1`.
- `docker compose` command to deploy Compose stacks to some clouds
* The project name can be overridden with `docker-compose -p`.
- progressively getting feature parity with `docker-compose`
---
- also provides numerous improvements (e.g. leverages BuildKit by default)
## Running two copies of the same app
???
If you want to run two copies of the same app simultaneously, all you have to do is to
make sure that each copy has a different project name.
:EN:- Using compose to describe an environment
:EN:- Connecting services together with a *Compose file*
You can:
* copy your code in a directory with a different name
* start each copy with `docker-compose -p myprojname up`
Each copy will run in a different network, totally isolated from the other.
This is ideal to debug regressions, do side-by-side comparisons, etc.
:FR:- Utiliser Compose pour décrire son environnement
:FR:- Écrire un *Compose file* pour connecter les services entre eux

View File

@@ -27,9 +27,9 @@ We will also explain the principle of overlay networks and network plugins.
## The Container Network Model
The CNM was introduced in Engine 1.9.0 (November 2015).
Docker has "networks".
The CNM adds the notion of a *network*, and a new top-level command to manipulate and see those networks: `docker network`.
We can manage them with the `docker network` commands; for instance:
```bash
$ docker network ls
@@ -41,59 +41,79 @@ eb0eeab782f4 host host
228a4355d548 blog-prod overlay
```
---
New networks can be created (with `docker network create`).
## What's in a network?
* Conceptually, a network is a virtual switch.
* It can be local (to a single Engine) or global (spanning multiple hosts).
* A network has an IP subnet associated to it.
* Docker will allocate IP addresses to the containers connected to a network.
* Containers can be connected to multiple networks.
* Containers can be given per-network names and aliases.
* The names and aliases can be resolved via an embedded DNS server.
(Note: networks `none` and `host` are special; let's set them aside for now.)
---
## Network implementation details
## What's a network?
* A network is managed by a *driver*.
- Conceptually, a Docker "network" is a virtual switch
* The built-in drivers include:
(we can also think about it like a VLAN, or a WiFi SSID, for instance)
* `bridge` (default)
- By default, containers are connected to a single network
* `none`
(but they can be connected to zero, or many networks, even dynamically)
* `host`
- Each network has its own subnet (IP address range)
* `macvlan`
- A network can be local (to a single Docker Engine) or global (span multiple hosts)
* A multi-host driver, *overlay*, is available out of the box (for Swarm clusters).
- Containers can have *network aliases* providing DNS-based service discovery
* More drivers can be provided by plugins (OVS, VLAN...)
* A network can have a custom IPAM (IP allocator).
(and each network has its own "domain", "zone", or "scope")
---
class: extra-details
## Service discovery
## Differences with the CNI
- A container can be given a network alias
* CNI = Container Network Interface
(e.g. with `docker run --net some-network --net-alias db ...`)
* CNI is used notably by Kubernetes
- The containers running in the same network can resolve that network alias
* With CNI, all the nodes and containers are on a single IP network
(i.e. if they do a DNS lookup on `db`, it will give the container's address)
* Both CNI and CNM offer the same functionality, but with very different methods
- We can have a different `db` container in each network
(this avoids naming conflicts between different stacks)
- When we name a container, it automatically adds the name as a network alias
(i.e. `docker run --name xyz ...` is like `docker run --net-alias xyz ...`)
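For instance, a small sketch with two stacks, each getting its own `db` (names and images are placeholders):

```bash
docker network create stack1
docker network create stack2
docker run -d --net stack1 --net-alias db redis
docker run -d --net stack2 --net-alias db redis

# each stack resolves "db" to its own container
docker run --rm --net stack1 alpine nslookup db
```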
---
## Network isolation
- Networks are isolated
- By default, containers in network A cannot reach those in network B
- A container connected to both networks A and B can act as a router or proxy
- Published ports are always reachable through the Docker host address
(`docker run -P ...` makes a container port available to everyone)
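A quick way to see this isolation in action (a sketch; network names are arbitrary):

```bash
docker network create netA
docker network create netB
docker run -d --net netA --net-alias web nginx

# from netB, the "web" alias doesn't resolve (and the container isn't reachable)
docker run --rm --net netB alpine ping -c 1 web || echo "unreachable, as expected"

# from netA, it works
docker run --rm --net netA alpine ping -c 1 web
```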
---
## How to use networks
- We typically create one network per "stack" or app that we deploy
- More complex apps or stacks might require multiple networks
(e.g. `frontend`, `backend`, ...)
- Networks allow us to deploy multiple copies of the same stack
(e.g. `prod`, `dev`, `pr-442`, ....)
- If we use Docker Compose, this is managed automatically for us
---
@@ -121,6 +141,30 @@ class: pic
---
class: extra-details
## CNM vs CNI
- CNM is the model used by Docker
- Kubernetes uses a different model, architectured around CNI
(CNI is a kind of API between a container engine and *CNI plugins*)
- Docker model:
- multiple isolated networks
- per-network service discovery
- network interconnection requires extra steps
- Kubernetes model:
- single flat network
- per-namespace service discovery
- network isolation requires extra steps (Network Policies)
---
## Creating a network
Let's create a network called `dev`.
@@ -190,8 +234,12 @@ class: extra-details
## Resolving container addresses
In Docker Engine 1.9, name resolution is implemented with `/etc/hosts`, and
updating it each time containers are added/removed.
Since Docker Engine 1.10, name resolution is implemented by a dynamic resolver.
Archeological note: when CNM was introduced (in Docker Engine 1.9, November 2015),
name resolution was implemented with `/etc/hosts`, and it was updated each time
containers were added/removed. This could cause interesting race conditions
since `/etc/hosts` was a bind-mount (and couldn't be updated atomically).
.small[
```bash
@@ -208,10 +256,6 @@ ff02::2 ip6-allrouters
```
]
In Docker Engine 1.10, this has been replaced by a dynamic resolver.
(This avoids race conditions when updating `/etc/hosts`.)
---
# Service discovery with containers
@@ -265,12 +309,12 @@ Note: we're not using a FQDN or an IP address here; just `redis`.
* That container must be on the same network as the web server.
* It must have the right name (`redis`) so the application can find it.
* It must have the right network alias (`redis`) so the application can find it.
Start the container:
```bash
$ docker run --net dev --name redis -d redis
$ docker run --net dev --net-alias redis -d redis
```
---
@@ -287,34 +331,19 @@ $ docker run --net dev --name redis -d redis
## A few words on *scope*
* What if we want to run multiple copies of our application?
- Container names are unique (there can be only one `--name redis`)
* Since names are unique, there can be only one container named `redis` at a time.
- Network aliases are not unique
* However, we can specify the network name of our container with `--net-alias`.
- We can have the same network alias in different networks:
```bash
docker run --net dev --net-alias redis ...
docker run --net prod --net-alias redis ...
```
* `--net-alias` is scoped per network, and independent from the container name.
- We can even have multiple containers with the same alias in the same network
---
class: extra-details
## Using a network alias instead of a name
Let's remove the `redis` container:
```bash
$ docker rm -f redis
```
And create one that doesn't block the `redis` name:
```bash
$ docker run --net dev --net-alias redis -d redis
```
Check that the app still works (but the counter is back to 1,
since we wiped out the old Redis container).
(in that case, we get multiple DNS entries, aka "DNS round robin")
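A sketch of what that looks like (the alias and image are arbitrary):

```bash
docker run -d --net dev --net-alias www nginx
docker run -d --net dev --net-alias www nginx

# the lookup returns both addresses (DNS round robin)
docker run --rm --net dev alpine nslookup www
```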
---
@@ -347,7 +376,9 @@ A container can have multiple network aliases.
Network aliases are *local* to a given network (only exist in this network).
Multiple containers can have the same network alias (even on the same network). In Docker Engine 1.11, resolving a network alias yields the IP addresses of all containers holding this alias.
Multiple containers can have the same network alias (even on the same network).
Since Docker Engine 1.11, resolving a network alias yields the IP addresses of all containers holding this alias.
---
@@ -500,6 +531,24 @@ b2887adeb5578a01fd9c55c435cad56bbbe802350711d2743691f95743680b09
---
## Network drivers
* A network is managed by a *driver*.
* The built-in drivers include:
* `bridge` (default)
* `none`
* `host`
* `macvlan`
* `overlay` (for Swarm clusters)
* More drivers can be provided by plugins (OVS, VLAN...)
* A network can have a custom IPAM (IP allocator).
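For example (a sketch; the subnet values are arbitrary):

```bash
# create a network with an explicit driver and a custom IP range
docker network create --driver bridge --subnet 10.33.0.0/24 --gateway 10.33.0.1 customnet
docker network inspect customnet
```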
---
## Overlay networks
* The features we've seen so far only work when all containers are on a single host.
@@ -740,3 +789,15 @@ class: extra-details
* This may be used to access an internal package repository.
(But try to use a multi-stage build instead, if possible!)
???
:EN:Container networking essentials
:EN:- The Container Network Model
:EN:- Container isolation
:EN:- Service discovery
:FR:Mettre ses conteneurs en réseau
:FR:- Le "Container Network Model"
:FR:- Isolation des conteneurs
:FR:- *Service discovery*

View File

@@ -15,53 +15,84 @@ At the end of this section, you will be able to:
* Run a network service in a container.
* Manipulate container networking basics.
* Connect to that network service.
* Find a container's IP address.
We will also explain the different network models used by Docker.
---
## Running a very simple service
- We need something small, simple, easy to configure
(or, even better, that doesn't require any configuration at all)
- Let's use the official NGINX image (named `nginx`)
- It runs a static web server listening on port 80
- It serves a default "Welcome to nginx!" page
---
## A simple, static web server
Run the Docker Hub image `nginx`, which contains a basic web server:
## Running an NGINX server
```bash
$ docker run -d -P nginx
66b1ce719198711292c8f34f84a7b68c3876cf9f67015e752b94e189d35a204e
```
* Docker will download the image from the Docker Hub.
- Docker will automatically pull the `nginx` image from the Docker Hub
* `-d` tells Docker to run the image in the background.
- `-d` / `--detach` tells Docker to run it in the background
* `-P` tells Docker to make this service reachable from other computers.
<br/>(`-P` is the short version of `--publish-all`.)
- `-P` / `--publish-all` tells Docker to publish all ports
But, how do we connect to our web server now?
(publish = make them reachable from other computers)
- ...OK, how do we connect to our web server now?
---
## Finding our web server port
We will use `docker ps`:
- First, we need to find the *port number* used by Docker
```bash
$ docker ps
CONTAINER ID IMAGE ... PORTS ...
e40ffb406c9e nginx ... 0.0.0.0:32768->80/tcp ...
```
(the NGINX container listens on port 80, but this port will be *mapped*)
- We can use `docker ps`:
```bash
$ docker ps
CONTAINER ID IMAGE ... PORTS ...
e40ffb406c9e nginx ... 0.0.0.0:`12345`->80/tcp ...
```
* The web server is running on port 80 inside the container.
- This means:
* This port is mapped to port 32768 on our Docker host.
*port 12345 on the Docker host is mapped to port 80 in the container*
We will explain the whys and hows of this port mapping.
- Now we need to connect to the Docker host!
But first, let's make sure that everything works properly.
---
## Finding the address of the Docker host
- When running Docker on your Linux workstation:
*use `localhost`, or any IP address of your machine*
- When running Docker on a remote Linux server:
*use any IP address of the remote machine*
- When running Docker Desktop on Mac or Windows:
*use `localhost`*
- In other scenarios (`docker-machine`, local VM...):
*use the IP address of the Docker VM*
---
## Connecting to our web server (GUI)
@@ -81,7 +112,7 @@ Make sure to use the right port number if it is different
from the example below:
```bash
$ curl localhost:32768
$ curl localhost:12345
<!DOCTYPE html>
<html>
<head>
@@ -116,17 +147,41 @@ IMAGE CREATED CREATED BY
---
## Why are we mapping ports?
## Why can't we just connect to port 80?
* We are out of IPv4 addresses.
- Our Docker host has only one port 80
* Containers cannot have public IPv4 addresses.
- Therefore, we can only have one container at a time on port 80
* They have private addresses.
- Therefore, if multiple containers want port 80, only one can get it
* Services have to be exposed port by port.
- By default, containers *do not* get "their" port number, but a random one
* Ports have to be mapped to avoid conflicts.
(not "random" as "crypto random", but as "it depends on various factors")
- We'll see later how to force a port number (including port 80!)
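A preview of what that looks like (a sketch; only one container at a time can claim a given host port):

```bash
# force host port 80 to map to container port 80
docker run -d -p 80:80 nginx

# a second copy must use a different host port
docker run -d -p 8080:80 nginx
```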
---
class: extra-details
## Using multiple IP addresses
*Hey, my network-fu is strong, and I have questions...*
- Can I publish one container on 127.0.0.2:80, and another on 127.0.0.3:80?
- My machine has multiple (public) IP addresses, let's say A.A.A.A and B.B.B.B.
<br/>
Can I have one container on A.A.A.A:80 and another on B.B.B.B:80?
- I have a whole IPv4 subnet, can I allocate it to my containers?
- What about IPv6?
You can do all these things when running Docker directly on Linux.
(On other platforms, *generally not*, but there are some exceptions.)
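For instance, on a Linux Docker host (a sketch; the addresses are placeholders):

```bash
# publish two containers on the same port, but on different host addresses
docker run -d -p 127.0.0.2:80:80 nginx
docker run -d -p 127.0.0.3:80:80 nginx
```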
---
@@ -138,7 +193,7 @@ There is a command to help us:
```bash
$ docker port <containerID> 80
32768
0.0.0.0:12345
```
---
@@ -172,13 +227,11 @@ There are many ways to integrate containers in your network.
* Pick a fixed port number in advance, when you generate your configuration.
<br/>Then start your container by setting the port numbers manually.
* Use a network plugin, connecting your containers with e.g. VLANs, tunnels...
* Use an orchestrator like Kubernetes or Swarm.
<br/>The orchestrator will provide its own networking facilities.
* Enable *Swarm Mode* to deploy across a cluster.
<br/>The container will then be reachable through any node of the cluster.
When using Docker through an extra management layer like Mesos or Kubernetes,
these will usually provide their own mechanism to expose containers.
Orchestrators typically provide mechanisms to enable direct container-to-container
communication across hosts, and publishing/load balancing for inbound traffic.
---
@@ -202,16 +255,34 @@ $ docker inspect --format '{{ .NetworkSettings.IPAddress }}' <yourContainerID>
## Pinging our container
We can test connectivity to the container using the IP address we've
just discovered. Let's see this now by using the `ping` tool.
Let's try to ping our container *from another container.*
```bash
$ ping <ipAddress>
64 bytes from <ipAddress>: icmp_req=1 ttl=64 time=0.085 ms
64 bytes from <ipAddress>: icmp_req=2 ttl=64 time=0.085 ms
64 bytes from <ipAddress>: icmp_req=3 ttl=64 time=0.085 ms
docker run alpine ping `<ipaddress>`
PING 172.17.0.X (172.17.0.X): 56 data bytes
64 bytes from 172.17.0.X: seq=0 ttl=64 time=0.106 ms
64 bytes from 172.17.0.X: seq=1 ttl=64 time=0.250 ms
64 bytes from 172.17.0.X: seq=2 ttl=64 time=0.188 ms
```
When running on Linux, we can even ping that IP address directly!
(And connect to a container's ports even if they aren't published.)
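Putting it together (a sketch; works when running directly on a Linux Docker host):

```bash
# grab the container's IP address and connect to its unpublished port 80
IPADDR=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' <yourContainerID>)
curl http://$IPADDR/
```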
---
## How often do we use `-p` and `-P` ?
- When running a stack of containers, we will often use Compose
- Compose will take care of exposing containers
(through a `ports:` section in the `docker-compose.yml` file)
- It is, however, fairly common to use `docker run -P` for a quick test
- Or `docker run -p ...` when an image doesn't `EXPOSE` a port correctly
---
## Section summary
@@ -220,9 +291,11 @@ We've learned how to:
* Expose a network port.
* Manipulate container networking basics.
* Connect to an application running in a container.
* Find a container's IP address.
In the next chapter, we will see how to connect
containers together without exposing their ports.
???
:EN:- Exposing single containers
:FR:- Exposer un conteneur isolé

View File

@@ -88,13 +88,45 @@ Success!
## Details
* You can `COPY` whole directories recursively.
* We can `COPY` whole directories recursively
* Older Dockerfiles also have the `ADD` instruction.
<br/>It is similar but can automatically extract archives.
* It is possible to do e.g. `COPY . .`
(but it might require some extra precautions to avoid copying too much)
* In older Dockerfiles, you might see the `ADD` instruction; consider it deprecated
(it is similar to `COPY` but can automatically extract archives)
* If we really wanted to compile C code in a container, we would:
* Place it in a different directory, with the `WORKDIR` instruction.
* place it in a different directory, with the `WORKDIR` instruction
* Even better, use the `gcc` official image.
* even better, use the `gcc` official image
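For instance, a minimal sketch of such a Dockerfile (assuming a hypothetical `hello.c` at the root of the build context):

```bash
FROM gcc
WORKDIR /src
COPY hello.c .
RUN gcc -o hello hello.c
CMD ["./hello"]
```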
---
class: extra-details
## `.dockerignore`
- We can create a file named `.dockerignore`
(at the top-level of the build context)
- It can contain file names and globs to ignore
- They won't be sent to the builder
(and won't end up in the resulting image)
- See the [documentation] for the little details
(exceptions can be made with `!`, multiple directory levels with `**`...)
[documentation]: https://docs.docker.com/engine/reference/builder/#dockerignore-file
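A sketch of what such a file might contain (the entries are purely illustrative):

```bash
# .dockerignore
.git
node_modules
**/*.log
!debug/important.log
```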
???
:EN:- Leveraging the build cache for faster builds
:FR:- Tirer parti du cache afin d'optimiser la vitesse de *build*

View File

@@ -66,9 +66,9 @@ Adding the dependencies as a separate step means that Docker can cache more effi
```bash
FROM python
COPY requirements.txt /tmp/requirements.txt
RUN pip install -qr /tmp/requirements.txt
WORKDIR /src
COPY requirements.txt .
RUN pip install -qr requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
@@ -424,10 +424,22 @@ services:
- In this chapter, we showed many ways to write Dockerfiles.
- These Dockerfiles use sometimes diametrally opposed techniques.
- These Dockerfiles sometimes use diametrically opposed techniques.
- Yet, they were the "right" ones *for a specific situation.*
- It's OK (and even encouraged) to start simple and evolve as needed.
- Feel free to review this chapter later (after writing a few Dockerfiles) for inspiration!
???
:EN:Optimizing images
:EN:- Dockerfile tips, tricks, and best practices
:EN:- Reducing build time
:EN:- Reducing image size
:FR:Optimiser ses images
:FR:- Bonnes pratiques, trucs et astuces
:FR:- Réduire le temps de build
:FR:- Réduire la taille des images

View File

@@ -2,4 +2,99 @@
Let's write Dockerfiles for an existing application!
The code is at: https://github.com/jpetazzo/wordsmith
1. Check out the code repository
2. Read all the instructions
3. Write Dockerfiles
4. Build and test them individually
<!--
5. Test them together with the provided Compose file
-->
---
## Code repository
Clone the repository available at:
https://github.com/jpetazzo/wordsmith
It should look like this:
```
├── LICENSE
├── README
├── db/
│ └── words.sql
├── web/
│ ├── dispatcher.go
│ └── static/
└── words/
├── pom.xml
└── src/
```
---
## Instructions
The repository contains instructions in English and French.
<br/>
For now, we only care about the first part (about writing Dockerfiles).
<br/>
Place each Dockerfile in its own directory, like this:
```
├── LICENSE
├── README
├── db/
│ ├── `Dockerfile`
│ └── words.sql
├── web/
│ ├── `Dockerfile`
│ ├── dispatcher.go
│ └── static/
└── words/
├── `Dockerfile`
├── pom.xml
└── src/
```
---
## Build and test
Build and run each Dockerfile individually.
For `db`, we should be able to see some messages confirming that the data set
was loaded successfully (some `INSERT` lines in the container output).
For `web` and `words`, we should be able to see a message looking like
"server started successfully".
That's all we care about for now!
Bonus question: make sure that each container stops correctly when hitting Ctrl-C.
???
## Test with a Compose file
Place the following Compose file at the root of the repository:
```yaml
version: "3"
services:
db:
build: db
words:
build: words
web:
build: web
ports:
- 8888:80
```
Test the whole app by bringing up the stack and connecting to port 8888.
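For instance (a sketch, assuming the Compose file above is saved as `docker-compose.yml` at the root of the repository):

```bash
docker-compose up -d --build
curl http://localhost:8888
```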

View File

@@ -106,7 +106,7 @@ root@04c0bb0a6c07:/# figlet hello
|_| |_|\___|_|_|\___/
```
Beautiful! .emoji[😍]
Beautiful! 😍
---
@@ -118,7 +118,7 @@ Let's check how many packages are installed there.
```bash
root@04c0bb0a6c07:/# dpkg -l | wc -l
190
97
```
* `dpkg -l` lists the packages installed in our container
@@ -175,7 +175,7 @@ Now try to run `figlet`. Does that work?
* We can run *any container* on *any host*.
(One exception: Windows containers cannot run on Linux machines; at least not yet.)
(One exception: Windows containers can only run on Windows hosts; at least for now.)
---
@@ -290,3 +290,8 @@ bash: figlet: command not found
* We have a clear definition of our environment, and can share it reliably with others.
* Let's see in the next chapters how to bake a custom image with `figlet`!
???
:EN:- Running our first container
:FR:- Lancer nos premiers conteneurs

View File

@@ -226,3 +226,8 @@ docker export <container_id> | tar tv
```
This will give a detailed listing of the content of the container.
???
:EN:- Troubleshooting and getting inside a container
:FR:- Inspecter un conteneur en détail, en *live* ou *post-mortem*
