Compare commits

...

102 Commits

Author SHA1 Message Date
Jerome Petazzoni
f01bc2a7a9 Fix overlapsing slide number and pics 2018-09-29 18:54:00 -05:00
Jerome Petazzoni
3eaa844c55 Add ENIX logo
Warning: do not merge this branch to your content, otherwise you
will get the ENIX logo in the top right of all your decks
2018-09-08 07:49:38 -05:00
Jerome Petazzoni
ec037e422b Clarify 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
73f66f25d8 Rephrase to avoid confusion 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
28174b6cf9 Oops, fixing bad conflict resolve 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
a80c095a07 Put netpol file in the right directory 2018-09-08 07:20:31 -05:00
Jerome Petazzoni
374574717d Clarify network policies
Add clarification re/ pod-to-pod traffic.
Explain that it's stateful (which most people would expect anyway).
2018-09-08 07:20:31 -05:00
Jerome Petazzoni
efce5d1ad4 Add a short chapter about network policies
I will then expand this chapter to add examples showing
how to isolate namespaces; but let's start with that.
2018-09-08 07:20:31 -05:00
Bridget Kromhout
7b7fd2a4b4 Merge pull request #329 from jpetazzo/kubectlproxy
Revamp section about kubectl proxy
2018-09-06 10:37:17 -05:00
Jerome Petazzoni
4eca15f822 typo 2018-09-06 01:49:54 -05:00
Bridget Kromhout
4205f619cf Merge pull request #333 from BretFisher/patch-16
adding my next few workshops, I forgets!
2018-09-05 23:31:25 -05:00
Bridget Kromhout
c3dff823ef Update index.yaml
We use `:` as a delimiter and so need to quote text using it.
2018-09-05 23:29:49 -05:00
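For context: in YAML, `key: value` introduces a mapping, so a bare `: ` inside an unquoted scalar is a parse error; quoting fixes it. A minimal illustration, using a title that appears in `index.yaml` below:

```yaml
# Unquoted, this fails to parse ("mapping values are not allowed here"):
#   title: Docker Zero to Hero: Docker, Compose and Production Swarm
# Quoted, it parses as a single string:
title: "Docker Zero to Hero: Docker, Compose and Production Swarm"
```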
Bret Fisher
39876d1388 adding my next few workshops, I forgets! 2018-09-05 21:09:13 -04:00
Bridget Kromhout
7e34aa0287 Merge pull request #330 from jpetazzo/move-yaml-to-repo
Add YAML to repo; remove goo.gl links
2018-09-05 09:21:14 -05:00
Bridget Kromhout
3bdafed38e Merge pull request #331 from jpetazzo/preinstall-helm-and-stern
Pre-install Stern and Helm
2018-09-05 09:17:51 -05:00
Jerome Petazzoni
3cb91855c8 Pre-install Stern and Helm
The commands to install Stern and Helm aren't super exciting,
so let's pre-install these tools. That way, we also generate
completion for them. We still give installation instructions
just in case, but this saves time for more important stuff.
2018-08-28 07:21:43 -05:00
Jerome Petazzoni
ffdd7fda45 Add YAML to repo; remove goo.gl links
We load a few YAML files from goo.gl links. To avoid bad
surprises, we're moving these YAML files to the repository.
2018-08-27 07:04:01 -05:00
Jerome Petazzoni
8373d5302f Revamp section about kubectl proxy 2018-08-21 08:08:19 -05:00
Jerome Petazzoni
018282f392 slides: rename directories
This was discussed and agreed in #246. It will probably break a few
outstanding PRs as well as a few external links but it's for the
better good long term.
2018-08-21 04:03:38 -05:00
Jerome Petazzoni
23b3c1c05a Last tweaks so that autopilot passes 2018-08-20 14:58:00 -05:00
Jerome Petazzoni
62686d0b7a Miscellaneous fixes for autopilot
These changes are only for the autopilot test harness.
They add hidden commands and keystrokes but don't affect
the content of the slides.
2018-08-20 14:15:06 -05:00
Jerome Petazzoni
54288502a2 autopilot: add support for hidden commands 2018-08-20 10:22:01 -05:00
Jerome Petazzoni
efc045e40b autopilot: put a bunch of features behind flags
We don't always need to track slides, switch desktops, and open links.
(These things are not necessary when we're purely testing the labs.)
All these features are now behind boolean flags saved in the state file.
2018-08-20 08:31:47 -05:00
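Based on the fields added in the autopilot diff further down, the state file (`slides/autopilot/state.yaml`, written with PyYAML's `default_flow_style=False`, which sorts keys alphabetically) would now look roughly like this sketch, with the new flags at their defaults:

```yaml
interactive: true
open_links: false      # 'o' toggles opening URLs with xdg-open
run_hidden: true       # 'h' toggles running hidden test-harness commands
simulate_type: true
slide: 1
snippet: 0
switch_desktop: false  # 'd' toggles i3 workspace switching
sync_slides: false     # 'k' toggles calling ./gotoslide.js
verify_status: false
```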
Bridget Kromhout
6e9b16511f Cloud-agnostic; mentioning multiple clouds 2018-08-19 10:07:52 -05:00
Jerome Petazzoni
81b6e60a8c Merge branch 'master' of github.com:jpetazzo/container.training 2018-08-18 11:13:45 -05:00
Jerome Petazzoni
5baaf7e00a Fixes #327 2018-08-18 11:13:39 -05:00
Jérôme Petazzoni
d4d460397f Mention progressDeadlineSeconds
@abuisine ran through the whole deck recently, taking the long route each time it was possible; and he noticed that another field had to be removed when transforming the Deployment into a DaemonSet.
2018-08-15 04:08:31 -05:00
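For reference, `progressDeadlineSeconds` belongs to a Deployment's rollout configuration (it appears in the dumped manifests below) and is not valid on a DaemonSet. A sketch of the Deployment-only fields to drop during that transformation:

```yaml
# Deployment-only fields; remove these (besides changing `kind`)
# when converting a Deployment into a DaemonSet:
spec:
  replicas: 1                    # a DaemonSet runs one pod per node instead
  progressDeadlineSeconds: 600   # rollout deadline; Deployment-only
  strategy:                      # DaemonSets use `updateStrategy` instead
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
```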
Bridget Kromhout
f66b6b2ee3 Slight edits (#326) 2018-08-15 04:07:42 -05:00
Jérôme Petazzoni
fb7f7fd8c8 Expand to the brief logging/metrics slide
Thanks to @abuisine for reminding me that Heapster is going through a deprecation cycle.

I'm also expanding these two slides to be a bit more useful and relevant.
2018-08-15 04:07:42 -05:00
Jérôme Petazzoni
dc98fa21a9 Add explanations for a failure mode in logging (#324)
* Add explanations for a failure mode in logging

Thanks @abuisine for reporting that one too!

* Typo
2018-08-15 04:04:18 -05:00
Jerome Petazzoni
6b662d3e4c Add QCON workshops 2018-08-15 03:09:22 -05:00
Tim Bell
7069682c8e Update Dockerfile_Tips.md (#321)
Fix typo
2018-08-08 08:40:06 -05:00
Katie McLaughlin
3b1d5b93a8 Update pwk link (#319) 2018-08-02 06:22:42 -05:00
Maxime Deravet
611fe55e90 Allow to configure docker password using the settings file (#317) 2018-07-31 08:24:16 -05:00
Jerome Petazzoni
481272ac22 Add fallback when non-standard strftime is not supported
Closes #301

Thanks @petertang2012
2018-07-27 06:07:11 -05:00
Bridget Kromhout
9069e2d7db Merge pull request #318 from bridgetkromhout/add-vel-uk
Add Velocity UK
2018-07-26 18:35:04 -05:00
Bridget Kromhout
1144c16a4c Add Velocity UK 2018-07-26 18:33:49 -05:00
Bridget Kromhout
9b2846633c Merge pull request #315 from jpetazzo/clarify-kubeadm
Clarify usage of kubeadm
2018-07-20 15:42:31 -07:00
Jérôme Petazzoni
db88c0a5bf Clarify usage of kubeadm
Thanks for @robcz for the inspiration for that one!
2018-07-17 11:55:20 -05:00
Jérôme Petazzoni
28863728c2 Update rollout, new defaults are 25%/25% for MaxSurge and MaxUnavailable (#314) 2018-07-17 10:54:45 -05:00
Bridget Kromhout
dc341da813 Merge pull request #309 from bridgetkromhout/slight-updates
Slight updates for 1.11
2018-07-16 18:58:00 -05:00
Bridget Kromhout
1d210ad808 Merge pull request #3 from jpetazzo/slighter-updates
Slighter updates
2018-07-16 18:28:20 -05:00
Jerome Petazzoni
76d9adadf5 'until 1.10' is ambiguous, try to be more explicit 2018-07-16 18:25:30 -05:00
Jerome Petazzoni
065371fa99 Merge branch 'bridgetkromhout-slight-updates' into slighter-updates 2018-07-16 18:12:45 -05:00
Jerome Petazzoni
e45f21454e Update a couple of references to kube-dns; and cosmetic tweaks 2018-07-16 18:09:50 -05:00
Bridget Kromhout
4d8c13b0bf AKS name change 2018-07-16 18:09:50 -05:00
Bridget Kromhout
5e6b38e8d1 Replace kube-dns with CoreDNS 2018-07-16 18:09:50 -05:00
Bridget Kromhout
5dd2b6313e coredns instead of kube-dns 2018-07-16 18:09:50 -05:00
Bridget Kromhout
96bf00c59b Switching from get to use kubectl api-resources 2018-07-16 18:09:50 -05:00
Bridget Kromhout
065310901f This info isn't shown anymore by kubectl get 2018-07-16 18:09:50 -05:00
Jerome Petazzoni
103261ea35 Update a couple of references to kube-dns; and cosmetic tweaks 2018-07-16 18:07:07 -05:00
Jerome Petazzoni
c6fb6f30af Merge branch 'slight-updates' of git://github.com/bridgetkromhout/container.training into bridgetkromhout-slight-updates 2018-07-16 17:48:56 -05:00
Bridget Kromhout
134d24e23b AKS name change 2018-07-16 15:08:07 -07:00
Jerome Petazzoni
8a8e97f6e2 Add Jerome's training, September in Paris 2018-07-16 16:42:25 -05:00
Bridget Kromhout
29c1bc47d4 Replace kube-dns with CoreDNS 2018-07-16 13:53:27 -07:00
Bridget Kromhout
8af5a10407 coredns instead of kube-dns 2018-07-16 13:45:26 -07:00
Bridget Kromhout
8e9991a860 Switching from get to use kubectl api-resources 2018-07-16 13:38:28 -07:00
Bridget Kromhout
8ba5d6d736 This info isn't shown anymore by kubectl get 2018-07-16 13:32:53 -07:00
Bridget Kromhout
b3d1e2133d Merge pull request #308 from bridgetkromhout/add-oscon
Add oscon slides
2018-07-15 13:24:46 -05:00
Bridget Kromhout
b3cf30f804 Add oscon slides 2018-07-15 13:23:33 -05:00
Bridget Kromhout
b845543e5f Merge pull request #305 from bridgetkromhout/list-msp-slides
Adding slides link
2018-07-10 18:08:52 -05:00
Bridget Kromhout
1b54470046 Adding slides link 2018-07-10 18:04:35 -05:00
Bridget Kromhout
ee2b20926c Merge pull request #302 from bridgetkromhout/version-1.11.0
Version bump
2018-07-10 06:18:30 -05:00
Bridget Kromhout
96a76d2a19 Version bump 2018-07-10 06:17:07 -05:00
Bridget Kromhout
78ac91fcd5 Merge pull request #300 from bridgetkromhout/add-msp
Adding MSP 2018
2018-07-10 05:46:23 -05:00
Bridget Kromhout
971b5b0e6d Let's not link quite yet 2018-07-10 05:45:22 -05:00
Bridget Kromhout
3393563498 Adding MSP 2018 2018-07-06 16:11:37 -05:00
Bridget Kromhout
94483ebfec Merge pull request #298 from jpetazzo/improve-index-format
Switch to two-line format since our titles are so long
2018-07-06 15:43:01 -05:00
Jerome Petazzoni
db5d5878f5 Switch to two-line format since our titles are so long 2018-07-03 10:47:41 -05:00
ctas582
2585daac9b Force rng to be single threaded (#293) 2018-06-28 08:20:54 -05:00
Bridget Kromhout
21043108b3 Merge pull request #296 from bridgetkromhout/version-up
Version bump
2018-06-27 01:14:06 -05:00
Bridget Kromhout
65faa4507c Version bump 2018-06-27 08:12:40 +02:00
Bridget Kromhout
644f2b9c7a Merge pull request #295 from bridgetkromhout/add-slides-ams
Adding slides link for ams
2018-06-26 17:04:27 -05:00
Bridget Kromhout
dab9d9fb7e Adding slides link 2018-06-27 00:03:18 +02:00
Diego Quintana
139757613b Update Container_Networking_Basics.md
Added needed single quotes. I've also moved `nginx` to the end of the line, to follow a more consistent syntax  (`options` before `name|id`).

```
Usage:	docker inspect [OPTIONS] NAME|ID [NAME|ID...]

Return low-level information on Docker objects

Options:
  -f, --format string   Format the output using the given Go template
  -s, --size            Display total file sizes if the type is container
      --type string     Return JSON for specified type
```
2018-06-22 10:58:26 -05:00
Bridget Kromhout
10eed2c1c7 Merge pull request #288 from ctas582/typos
Correct typos
2018-06-22 09:21:56 -05:00
ctas582
c4fa75a1da Correct typos 2018-06-21 15:00:36 +01:00
ctas582
847140560f Correct typo 2018-06-21 14:16:05 +01:00
ctas582
1dc07c33ab Correct typos 2018-06-20 11:19:28 +01:00
Bridget Kromhout
4fc73d95c0 Merge pull request #285 from bridgetkromhout/vupdate
Updating version
2018-06-12 10:14:21 -07:00
Bridget Kromhout
690ed55953 Updating version 2018-06-12 10:12:04 -07:00
Bridget Kromhout
16a5809518 Merge pull request #284 from bridgetkromhout/add-vel-2day
Adding Erik and Brian's two-day Velocity training to the front page
2018-06-12 09:01:32 -07:00
Bridget Kromhout
0fed34600b Adding Erik and Brian's two-day 2018-06-12 08:55:53 -07:00
Jerome Petazzoni
2d95f4177a Remove extraneous python invocation 2018-06-12 04:25:00 -05:00
Bridget Kromhout
e9d1db56fa Adding VelNY bootcamp (#283)
* Adding VelNY bootcamp

* Colon not good here
2018-06-12 04:09:54 -05:00
Bridget Kromhout
a076a766a9 Merge pull request #282 from bridgetkromhout/reorder
Reordering upcoming events
2018-06-11 09:47:57 -07:00
Bridget Kromhout
be3c78bf54 Reordering 2018-06-11 09:40:30 -07:00
Bridget Kromhout
5bb6b8e2ab Merge pull request #281 from bridgetkromhout/add-velocity-sj-2018
Adding Velocity SJ 2018
2018-06-11 09:08:35 -07:00
Bridget Kromhout
f79193681d Adding Velocity SJ 2018 2018-06-11 08:53:53 -07:00
Bridget Kromhout
379ae69db5 Merge pull request #277 from bridgetkromhout/rollout-failure
Clarifying rollout failure via dashboard
2018-06-11 08:34:36 -07:00
Jerome Petazzoni
cde89f50a2 Add mention to skip slide if dashboard isn't deployed 2018-06-10 17:07:56 -05:00
Bridget Kromhout
98563ba1ce Clarifying rollout failure via dashboard 2018-06-04 20:58:57 -05:00
Bridget Kromhout
99bf8cc39f Merge pull request #271 from jpetazzo/new-index-generator
Replace index.html with a generator
2018-06-05 02:13:27 +02:00
Bridget Kromhout
ea642cf90e Merge pull request #274 from bridgetkromhout/eng-v
bumping version
2018-06-04 23:28:48 +02:00
Bridget Kromhout
a7d89062cf Bumping engine version 2018-06-04 15:43:30 -05:00
Bridget Kromhout
564e4856b4 Merge branch 'master' of https://github.com/jpetazzo/container.training 2018-06-04 14:41:07 -05:00
Bridget Kromhout
011cd08af3 Merge pull request #269 from jpetazzo/kubectlproxy
Show how to access internal services with kubectl proxy
2018-06-04 21:40:40 +02:00
Jerome Petazzoni
c86ef7de45 Add 'past workshops' page and backfill 2016-2017 workshops 2018-06-03 09:55:43 -05:00
Jerome Petazzoni
3d7ed3a3f7 Clarify how to stop kubectl proxy 2018-06-03 05:10:48 -05:00
Jerome Petazzoni
2cb06edc2d Replace index.html with a generator
The events are now listend in index.yaml, and generated
with index.py. The latter is called automatically by
build.sh.

The list of events has been slightly improved:
- we only show the last 5 past events
- video recordings now get a section of their own
2018-05-31 14:22:23 -05:00
Jerome Petazzoni
4213aba76e Show how to access internal services with kubectl proxy 2018-05-29 05:47:27 -05:00
Bridget Kromhout
c575cb9cd5 New events (and old event to past) 2018-05-23 10:18:02 -05:00
115 changed files with 2152 additions and 572 deletions

.gitignore (vendored, 2 lines changed)

@@ -8,4 +8,6 @@ prepare-vms/settings.yaml
 prepare-vms/tags
 slides/*.yml.html
 slides/autopilot/state.yaml
+slides/index.html
+slides/past.html
 node_modules


@@ -28,5 +28,5 @@ def rng(how_many_bytes):
 if __name__ == "__main__":
-    app.run(host="0.0.0.0", port=80)
+    app.run(host="0.0.0.0", port=80, threaded=False)

k8s/efk.yaml (new file, 222 lines)

@@ -0,0 +1,222 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: default
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        # X-Pack Authentication
        # =====================
        - name: FLUENT_ELASTICSEARCH_USER
          value: "elastic"
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          value: "changeme"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: elasticsearch
  name: elasticsearch
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/elasticsearch
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: elasticsearch
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: elasticsearch
    spec:
      containers:
      - image: elasticsearch:5.6.8
        imagePullPolicy: IfNotPresent
        name: elasticsearch
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: elasticsearch
  name: elasticsearch
  selfLink: /api/v1/namespaces/default/services/elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    run: elasticsearch
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: kibana
  name: kibana
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/kibana
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: kibana
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: kibana
    spec:
      containers:
      - env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200/
        image: kibana:5.6.8
        imagePullPolicy: Always
        name: kibana
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: kibana
  name: kibana
  selfLink: /api/v1/namespaces/default/services/kibana
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    run: kibana
  sessionAffinity: None
  type: NodePort


@@ -0,0 +1,14 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system


@@ -0,0 +1,167 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard


@@ -0,0 +1,14 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-testcurl-for-testweb
spec:
  podSelector:
    matchLabels:
      run: testweb
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: testcurl


@@ -0,0 +1,10 @@
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-for-testweb
spec:
  podSelector:
    matchLabels:
      run: testweb
  ingress: []

k8s/socat.yaml (new file, 67 lines)

@@ -0,0 +1,67 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: null
  generation: 1
  labels:
    run: socat
  name: socat
  namespace: kube-system
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
spec:
  replicas: 1
  selector:
    matchLabels:
      run: socat
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: socat
    spec:
      containers:
      - args:
        - sh
        - -c
        - apk add --no-cache socat && socat TCP-LISTEN:80,fork,reuseaddr OPENSSL:kubernetes-dashboard:443,verify=0
        image: alpine
        imagePullPolicy: Always
        name: socat
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: socat
  name: socat
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/services/socat
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: socat
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}


@@ -93,7 +93,7 @@ wrap Run this program in a container
 - The `./workshopctl` script can be executed directly.
 - It will run locally if all its dependencies are fulfilled; otherwise it will run in the Docker container you created with `docker-compose build` (preparevms_prepare-vms).
 - During `start` it will add your default local SSH key to all instances under the `ubuntu` user.
-- During `deploy` it will create the `docker` user with password `training`, which is printing on the cards for students. For now, this is hard coded.
+- During `deploy` it will create the `docker` user with password `training`, which is printing on the cards for students. This can be configured with the `docker_user_password` property in the settings file.

 ### Example Steps to Launch a Batch of AWS Instances for a Workshop


@@ -85,7 +85,7 @@ img {
   <tr><td>login:</td></tr>
   <tr><td class="logpass">docker</td></tr>
   <tr><td>password:</td></tr>
-  <tr><td class="logpass">training</td></tr>
+  <tr><td class="logpass">{{ docker_user_password }}</td></tr>
 </table>
 </p>


@@ -48,7 +48,7 @@ _cmd_cards() {
     rm -f ips.html ips.pdf
     # This will generate two files in the base dir: ips.pdf and ips.html
-    python lib/ips-txt-to-html.py $SETTINGS
+    lib/ips-txt-to-html.py $SETTINGS
     for f in ips.html ips.pdf; do
         # Remove old versions of cards if they exist
@@ -168,6 +168,22 @@ _cmd_kube() {
         sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
     fi"
+    # Install stern
+    pssh "
+    if [ ! -x /usr/local/bin/stern ]; then
+        sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64
+        sudo chmod +x /usr/local/bin/stern
+        stern --completion bash | sudo tee /etc/bash_completion.d/stern
+    fi"
+    # Install helm
+    pssh "
+    if [ ! -x /usr/local/bin/helm ]; then
+        curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | sudo bash
+        helm completion bash | sudo tee /etc/bash_completion.d/helm
+    fi"
     sep "Done"
 }


@@ -13,6 +13,7 @@ COMPOSE_VERSION = config["compose_version"]
 MACHINE_VERSION = config["machine_version"]
 CLUSTER_SIZE = config["clustersize"]
 ENGINE_VERSION = config["engine_version"]
+DOCKER_USER_PASSWORD = config["docker_user_password"]

 #################################
@@ -54,9 +55,9 @@ system("curl --silent {} > /tmp/ipv4".format(ipv4_retrieval_endpoint))
 ipv4 = open("/tmp/ipv4").read()

-# Add a "docker" user with password "training"
+# Add a "docker" user with password coming from the settings
 system("id docker || sudo useradd -d /home/docker -m -s /bin/bash docker")
-system("echo docker:training | sudo chpasswd")
+system("echo docker:{} | sudo chpasswd".format(DOCKER_USER_PASSWORD))

 # Fancy prompt courtesy of @soulshake.
 system("""sudo -u docker tee -a /home/docker/.bashrc <<SQRL


@@ -22,3 +22,6 @@ engine_version: test
 # These correspond to the version numbers visible on their respective GitHub release pages
 compose_version: 1.18.0
 machine_version: 0.13.0
+
+# Password used to connect with the "docker user"
+docker_user_password: training


@@ -22,3 +22,6 @@ engine_version: stable
 # These correspond to the version numbers visible on their respective GitHub release pages
 compose_version: 1.21.1
 machine_version: 0.14.0
+
+# Password used to connect with the "docker user"
+docker_user_password: training


@@ -85,7 +85,7 @@ img {
   <tr><td>login:</td></tr>
   <tr><td class="logpass">docker</td></tr>
   <tr><td>password:</td></tr>
-  <tr><td class="logpass">training</td></tr>
+  <tr><td class="logpass">{{ docker_user_password }}</td></tr>
 </table>
 </p>


@@ -22,3 +22,6 @@ engine_version: stable
 # These correspond to the version numbers visible on their respective GitHub release pages
 compose_version: 1.21.1
 machine_version: 0.14.0
+
+# Password used to connect with the "docker user"
+docker_user_password: training


@@ -22,3 +22,6 @@ engine_version: stable
 # These correspond to the version numbers visible on their respective GitHub release pages
 compose_version: 1.21.1
 machine_version: 0.14.0
+
+# Password used to connect with the "docker user"
+docker_user_password: training


@@ -29,6 +29,10 @@ class State(object):
         self.interactive = True
         self.verify_status = False
         self.simulate_type = True
+        self.switch_desktop = False
+        self.sync_slides = False
+        self.open_links = False
+        self.run_hidden = True
         self.slide = 1
         self.snippet = 0
@@ -37,6 +41,10 @@ class State(object):
         self.interactive = bool(data["interactive"])
         self.verify_status = bool(data["verify_status"])
         self.simulate_type = bool(data["simulate_type"])
+        self.switch_desktop = bool(data["switch_desktop"])
+        self.sync_slides = bool(data["sync_slides"])
+        self.open_links = bool(data["open_links"])
+        self.run_hidden = bool(data["run_hidden"])
         self.slide = int(data["slide"])
         self.snippet = int(data["snippet"])
@@ -46,6 +54,10 @@ class State(object):
             interactive=self.interactive,
             verify_status=self.verify_status,
             simulate_type=self.simulate_type,
+            switch_desktop=self.switch_desktop,
+            sync_slides=self.sync_slides,
+            open_links=self.open_links,
+            run_hidden=self.run_hidden,
             slide=self.slide,
             snippet=self.snippet,
             ), f, default_flow_style=False)
@@ -122,14 +134,20 @@ class Slide(object):
 def focus_slides():
+    if not state.switch_desktop:
+        return
     subprocess.check_output(["i3-msg", "workspace", "3"])
     subprocess.check_output(["i3-msg", "workspace", "1"])

 def focus_terminal():
+    if not state.switch_desktop:
+        return
     subprocess.check_output(["i3-msg", "workspace", "2"])
     subprocess.check_output(["i3-msg", "workspace", "1"])

 def focus_browser():
+    if not state.switch_desktop:
+        return
     subprocess.check_output(["i3-msg", "workspace", "4"])
     subprocess.check_output(["i3-msg", "workspace", "1"])
@@ -307,17 +325,21 @@ while True:
     slide = slides[state.slide]
     snippet = slide.snippets[state.snippet-1] if state.snippet else None
     click.clear()
-    print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}]"
+    print("[Slide {}/{}] [Snippet {}/{}] [simulate_type:{}] [verify_status:{}] "
+          "[switch_desktop:{}] [sync_slides:{}] [open_links:{}] [run_hidden:{}]"
           .format(state.slide, len(slides)-1,
                   state.snippet, len(slide.snippets) if slide.snippets else 0,
-                  state.simulate_type, state.verify_status))
+                  state.simulate_type, state.verify_status,
+                  state.switch_desktop, state.sync_slides,
+                  state.open_links, state.run_hidden))
     print(hrule())
     if snippet:
         print(slide.content.replace(snippet.content, ansi(7)(snippet.content)))
         focus_terminal()
     else:
         print(slide.content)
-        subprocess.check_output(["./gotoslide.js", str(slide.number)])
+        if state.sync_slides:
+            subprocess.check_output(["./gotoslide.js", str(slide.number)])
         focus_slides()
     print(hrule())
     if state.interactive:
@@ -326,6 +348,10 @@ while True:
         print("n/→   Next")
         print("s     Simulate keystrokes")
         print("v     Validate exit status")
+        print("d     Switch desktop")
+        print("k     Sync slides")
+        print("o     Open links")
+        print("h     Run hidden commands")
         print("g     Go to a specific slide")
         print("q     Quit")
         print("c     Continue non-interactively until next error")
@@ -341,6 +367,14 @@ while True:
             state.simulate_type = not state.simulate_type
         elif command == "v":
             state.verify_status = not state.verify_status
+        elif command == "d":
+            state.switch_desktop = not state.switch_desktop
+        elif command == "k":
+            state.sync_slides = not state.sync_slides
+        elif command == "o":
+            state.open_links = not state.open_links
+        elif command == "h":
+            state.run_hidden = not state.run_hidden
         elif command == "g":
             state.slide = click.prompt("Enter slide number", type=int)
             state.snippet = 0
@@ -366,7 +400,7 @@ while True:
         logging.info("Running with method {}: {}".format(method, data))
         if method == "keys":
             send_keys(data)
-        elif method == "bash":
+        elif method == "bash" or (method == "hide" and state.run_hidden):
             # Make sure that we're ready
             wait_for_prompt()
             # Strip leading spaces
@@ -405,11 +439,12 @@ while True:
             screen = capture_pane()
             url = data.replace("/node1", "/{}".format(IPADDR))
             # This should probably be adapted to run on different OS
-            subprocess.check_output(["xdg-open", url])
-            focus_browser()
-            if state.interactive:
-                print("Press any key to continue to next step...")
-                click.getchar()
+            if state.open_links:
+                subprocess.check_output(["xdg-open", url])
+                focus_browser()
+                if state.interactive:
+                    print("Press any key to continue to next step...")
+                    click.getchar()
         else:
             logging.warning("Unknown method {}: {!r}".format(method, data))
         move_forward()


@@ -0,0 +1 @@
click


@@ -1,6 +1,8 @@
 #!/bin/sh
 set -e
 case "$1" in
 once)
+    ./index.py
     for YAML in *.yml; do
         ./markmaker.py $YAML > $YAML.html || {
             rm $YAML.html
@@ -15,6 +17,7 @@ once)
     ;;
 forever)
     set +e
+    # check if entr is installed
     if ! command -v entr >/dev/null; then
         echo >&2 "First install 'entr' with apt, brew, etc."


@@ -355,7 +355,7 @@ class: extra-details
 ## Overriding the `ENTRYPOINT` instruction

-The entry point can be overriden as well.
+The entry point can be overridden as well.

 ```bash
 $ docker run -it training/ls


@@ -117,7 +117,7 @@ CONTAINER ID IMAGE ... CREATED STATUS ...
 Many Docker commands will work on container IDs: `docker stop`, `docker rm`...

-If we want to list only the IDs of our containers (without the other colums
+If we want to list only the IDs of our containers (without the other columns
 or the header line),
 we can use the `-q` ("Quiet", "Quick") flag:


@@ -98,7 +98,7 @@ $ curl localhost:32768
 * We can see that metadata with `docker inspect`:

 ```bash
-$ docker inspect nginx --format {{.Config.ExposedPorts}}
+$ docker inspect --format '{{.Config.ExposedPorts}}' nginx
 map[80/tcp:{}]
 ```


@@ -64,7 +64,7 @@ Create this Dockerfile.
 ## Testing our C program

-* Create `hello.c` and `Dockerfile` in the same direcotry.
+* Create `hello.c` and `Dockerfile` in the same directory.

 * Run `docker build -t hello .` in this directory.


@@ -30,7 +30,7 @@
 ## Environment variables

-- Most of the tools (CLI, libraries...) connecting to the Docker API can use ennvironment variables.
+- Most of the tools (CLI, libraries...) connecting to the Docker API can use environment variables.

 - These variables are:
@@ -40,7 +40,7 @@
 - `DOCKER_CERT_PATH` (path to the keypair and certificate to use for auth)

-- `docker-machine env ...` will generate the variables needed to connect to an host.
+- `docker-machine env ...` will generate the variables needed to connect to a host.

 - `$(eval docker-machine env ...)` sets these variables in the current shell.
@@ -50,7 +50,7 @@
 With `docker-machine`, we can:

-- upgrade an host to the latest version of the Docker Engine,
+- upgrade a host to the latest version of the Docker Engine,

 - start/stop/restart hosts,


@@ -312,7 +312,7 @@ CMD gunicorn --bind 0.0.0.0:5000 --workers 10 counter:app
 EXPOSE 5000
 ```

-(Source: [traininghweels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))
+(Source: [trainingwheels Dockerfile](https://github.com/jpetazzo/trainingwheels/blob/master/www/Dockerfile))

 ---


@@ -176,7 +176,7 @@ $ docker run -d -v $(pwd):/src -P namer
 * `namer` is the name of the image we will run.

-* We don't specify a command to run because is is already set in the Dockerfile.
+* We don't specify a command to run because it is already set in the Dockerfile.

 Note: on Windows, replace `$(pwd)` with `%cd%` (or `${pwd}` if you use PowerShell).


@@ -102,7 +102,7 @@ Cons:
 - not very readable

-- some unnecessary files might still remain if the cleanup is not torough
+- some unnecessary files might still remain if the cleanup is not thorough

 - that layer is expensive (slow to build)


@@ -144,7 +144,7 @@ class: extra-details, deep-dive
 - Also allows to set the NIS domain.

-  (If you dont' know what a NIS domain is, you don't have to worry about it!)
+  (If you don't know what a NIS domain is, you don't have to worry about it!)

 - If you're wondering: UTS = UNIX time sharing.

slides/index.css (new file, 59 lines)

@@ -0,0 +1,59 @@
body {
    background-image: url("images/container-background.jpg");
    max-width: 1024px;
    margin: 0 auto;
}
table {
    font-size: 20px;
    font-family: sans-serif;
    background: white;
    width: 100%;
    height: 100%;
    padding: 20px;
}
.header {
    font-size: 300%;
    font-weight: bold;
}
.title {
    font-size: 150%;
    font-weight: bold;
}
.details {
    font-size: 80%;
    font-style: italic;
}
td {
    padding: 1px;
    height: 1em;
}
td.spacer {
    height: unset;
}
td.footer {
    padding-top: 80px;
    height: 100px;
}
td.title {
    border-bottom: thick solid black;
    padding-bottom: 2px;
    padding-top: 20px;
}
a {
    text-decoration: none;
}
a:hover {
    background: yellow;
}
a.attend:after {
    content: "📅 attend";
}
a.slides:after {
    content: "📚 slides";
}
a.chat:after {
    content: "💬 chat";
}
a.video:after {
    content: "📺 video";
}


@@ -1,248 +0,0 @@
<html>
  <head>
    <title>Container Training</title>
    <style type="text/css">
      body {
        background-image: url("images/container-background.jpg");
        max-width: 1024px;
        margin: 0 auto;
      }
      table {
        font-size: 20px;
        font-family: sans-serif;
        background: white;
        width: 100%;
        height: 100%;
        padding: 20px;
      }
      .header {
        font-size: 300%;
        font-weight: bold;
      }
      .title {
        font-size: 150%;
        font-weight: bold;
      }
      td {
        padding: 1px;
        height: 1em;
      }
      td.spacer {
        height: unset;
      }
      td.footer {
        padding-top: 80px;
        height: 100px;
      }
      td.title {
        border-bottom: thick solid black;
        padding-bottom: 2px;
        padding-top: 20px;
      }
      a {
        text-decoration: none;
      }
      a:hover {
        background: yellow;
      }
      a.attend:after {
        content: "📅 attend";
      }
      a.slides:after {
        content: "📚 slides";
      }
      a.chat:after {
        content: "💬 chat";
      }
      a.video:after {
        content: "📺 video";
      }
    </style>
  </head>
  <body>
    <div class="main">
      <table>
        <tr><td class="header" colspan="4">Container Training</td></tr>
        <tr><td class="title" colspan="4">Coming soon near you</td></tr>
        <!--
        <td>Nothing for now (stay tuned...)</td>
        -->
        <tr>
          <td>June 12th, 2018: Velocity San Jose - Kubernetes 101</td>
          <td>&nbsp;</td>
          <td><a class="attend" href="https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66286" /></td>
        </tr>
        <tr>
          <td>June 27th, 2018: devopsdays Amsterdam - Kubernetes 101</td>
          <td>&nbsp;</td>
          <td><a class="attend" href="https://www.devopsdays.org/events/2018-amsterdam/registration/" /></td>
        </tr>
        <tr>
          <td>July 17th, 2018: OSCON - Kubernetes 101</td>
          <td>&nbsp;</td>
          <td><a class="attend" href="https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/66287" /></td>
        </tr>
        <tr>
          <td>Oct 1st, 2018: Velocity New York - Kubernetes 101</td>
          <td>&nbsp;</td>
          <td><a class="attend" href="https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70102" /></td>
        </tr>
        <tr><td class="title" colspan="4">Past workshops</td></tr>
        <tr>
          <td>May 17th, 2018: Revolution Conf Virginia Beach - Docker 101</td>
          <td><a class="slides" href="https://revconf18.bretfisher.com" /></td>
        </tr>
        <tr>
          <td>May 10th, 2018: NDC Minnesota - Kubernetes 101</td>
          <td><a class="slides" href="https://ndcminnesota2018.container.training" /></td>
        </tr>
        <tr>
          <td>May 8th, 2018: CRAFT Budapest - Swarm Orchestration</td>
          <td><a class="slides" href="https://craftconf18.bretfisher.com" /></td>
        </tr>
        <tr>
          <td>April 27th, 2018: GOTO Chicago - Swarm Orchestration</td>
          <td><a class="slides" href="https://gotochgo18.bretfisher.com" /></td>
        </tr>
        <tr>
          <td>April 24th, 2018: GOTO Chicago - Kubernetes 101</td>
          <td><a class="slides" href="http://gotochgo2018.container.training/" /></td>
        </tr>
        <tr>
          <td>April 11-12, 2018: Introduction aux conteneurs (in French)</td>
          <td><a class="slides" href="https://avril2018.container.training/intro.yml.html" /></td>
        </tr>
        <tr>
          <td>April 13, 2018: Introduction à l'orchestration (in French)</td>
          <td><a class="slides" href="https://avril2018.container.training/kube.yml.html" /></td>
        </tr>
        <tr>
          <td>April 6th, 2018: MuraCon Sacramento, CA - Docker 101</td>
          <td><a class="slides" href="https://muracon18.bretfisher.com" /></td>
        </tr>
        <tr>
          <td>March 27, 2018: SREcon Americas — Kubernetes 101</td>
          <td><a class="slides" href="http://srecon2018.container.training/" /></td>
        </tr>
        <tr>
          <td>March 27, 2018: Boosterconf: Kubernetes 101</td>
          <td><a class="slides" href="http://boosterconf2018.container.training/" /></td>
        </tr>
        <tr>
          <td>February 22, 2018: IndexConf: Kubernetes 101</td>
          <td><a class="slides" href="http://indexconf2018.container.training/" /></td>
          <!--
          <td><a class="attend" href="https://developer.ibm.com/indexconf/sessions/#!?id=5474" />
          -->
        </tr>
        <!--
        <tr>
          <td>Kubernetes enablement at Docker</td>
          <td><a class="slides" href="http://kube.container.training/" /></td>
        </tr>
        -->
        <tr>
          <td>QCON SF: Orchestrating Microservices with Docker Swarm</td>
          <td><a class="slides" href="http://qconsf2017swarm.container.training/" /></td>
        </tr>
        <tr>
          <td>QCON SF: Introduction to Docker and Containers</td>
          <td><a class="slides" href="http://qconsf2017intro.container.training/" /></td>
          <td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviLgqTum8MkspG_8VzGl6C07" /></td>
        </tr>
        <!--
        <tr>
          <td>LISA17 M7: Getting Started with Docker and Containers</td>
          <td><a class="slides" href="http://lisa17m7.container.training/" /></td>
        </tr>
        <tr>
          <td>LISA17 T9: Build, Ship, and Run Microservices on a Docker Swarm Cluster</td>
          <td><a class="slides" href="http://lisa17t9.container.training/" /></td>
        </tr>
        -->
        <tr>
          <td>Deploying and scaling microservices with Docker and Kubernetes</td>
          <td><a class="slides" href="http://osseu17.container.training/" /></td>
          <td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviLrsyydCzxWrIP_1-wkcSHS" /></td>
        </tr>
        <!--
        <tr>
          <td>DockerCon Workshop: from Zero to Hero (full day, B3 M1-2)</td>
          <td><a class="slides" href="http://dc17eu.container.training/" /></td>
        </tr>
        <tr>
          <td>DockerCon Workshop: Orchestration for Advanced Users (afternoon, B4 M5-6)</td>
          <td><a class="slides" href="https://www.bretfisher.com/dockercon17eu/" /></td>
        </tr>
        -->
        <tr>
          <td>LISA16 T1: Deploying and Scaling Applications with Docker Swarm</td>
          <td><a class="slides" href="http://lisa16t1.container.training/" /></td>
          <td><a class="video" href="https://www.youtube.com/playlist?list=PLBAFXs0YjviIDDhr8vIwCN1wkyNGXjbbc" /></td>
        </tr>
        <tr>
          <td>PyCon2016: Introduction to Docker and containers</td>
          <td><a class="slides" href="https://us.pycon.org/2016/site_media/media/tutorial_handouts/DockerSlides.pdf" /></td>
          <td><a class="video" href="https://www.youtube.com/watch?v=ZVaRK10HBjo" /></td>
        </tr>
        <tr><td class="title" colspan="4">Self-paced tutorials</td></tr>
        <tr>
          <td>Introduction to Docker and Containers</td>
          <td><a class="slides" href="intro-selfpaced.yml.html" /></td>
        </tr>
        <tr>
          <td>Container Orchestration with Docker and Swarm</td>
          <td><a class="slides" href="swarm-selfpaced.yml.html" /></td>
        </tr>
        <tr>
          <td>Deploying and Scaling Microservices with Docker and Kubernetes</td>
          <td><a class="slides" href="kube-selfpaced.yml.html" /></td>
        </tr>
        <tr><td class="spacer"></td></tr>
        <tr>
          <td class="footer">
            Maintained by Jérôme Petazzoni (<a href="https://twitter.com/jpetazzo">@jpetazzo</a>) and <a href="https://github.com/jpetazzo/container.training/graphs/contributors">contributors</a>.
          </td>
        </tr>
      </table>
    </div>
  </body>
</html>

slides/index.py (new executable file, 146 lines)

@@ -0,0 +1,146 @@
#!/usr/bin/env python2
# coding: utf-8

TEMPLATE="""<html>
<head>
<title>{{ title }}</title>
<link rel="stylesheet" href="index.css">
</head>
<body>
<div class="main">
<table>
<tr><td class="header" colspan="3">{{ title }}</td></tr>
{% if coming_soon %}
<tr><td class="title" colspan="3">Coming soon near you</td></tr>
{% for item in coming_soon %}
<tr>
<td>{{ item.title }}</td>
<td>{% if item.slides %}<a class="slides" href="{{ item.slides }}" />{% endif %}</td>
<td><a class="attend" href="{{ item.attend }}" /></td>
</tr>
<tr>
<td class="details">Scheduled {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
{% if past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>
{% for item in past_workshops[:5] %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td>{% if item.video %}<a class="video" href="{{ item.video }}" />{% endif %}</td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% if past_workshops[5:] %}
<tr>
<td>... and at least <a href="past.html">{{ past_workshops[5:] | length }} more</a>.</td>
</tr>
{% endif %}
{% endif %}
{% if recorded_workshops %}
<tr><td class="title" colspan="3">Recorded workshops</td></tr>
{% for item in recorded_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
<td><a class="video" href="{{ item.video }}" /></td>
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
{% if self_paced %}
<tr><td class="title" colspan="3">Self-paced tutorials</td></tr>
{% for item in self_paced %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
</tr>
{% endfor %}
{% endif %}
{% if all_past_workshops %}
<tr><td class="title" colspan="3">Past workshops</td></tr>
{% for item in all_past_workshops %}
<tr>
<td>{{ item.title }}</td>
<td><a class="slides" href="{{ item.slides }}" /></td>
{% if item.video %}
<td><a class="video" href="{{ item.video }}" /></td>
{% endif %}
</tr>
<tr>
<td class="details">Delivered {{ item.prettydate }} at {{ item.event }} in {{item.city }}.</td>
</tr>
{% endfor %}
{% endif %}
<tr><td class="spacer"></td></tr>
<tr>
<td class="footer">
Maintained by Jérôme Petazzoni (<a href="https://twitter.com/jpetazzo">@jpetazzo</a>) and <a href="https://github.com/jpetazzo/container.training/graphs/contributors">contributors</a>.
</td>
</tr>
</table>
</div>
</body>
</html>""".decode("utf-8")

import datetime
import jinja2
import yaml

items = yaml.load(open("index.yaml"))

for item in items:
    if "date" in item:
        date = item["date"]
        suffix = {
            1: "st", 2: "nd", 3: "rd",
            21: "st", 22: "nd", 23: "rd",
            31: "st"}.get(date.day, "th")
        # %e is a non-standard extension (it displays the day, but without a
        # leading zero). If strftime fails with ValueError, try to fall back
        # on %d (which displays the day but with a leading zero when needed).
        try:
            item["prettydate"] = date.strftime("%B %e{}, %Y").format(suffix)
        except ValueError:
            item["prettydate"] = date.strftime("%B %d{}, %Y").format(suffix)

today = datetime.date.today()

coming_soon = [i for i in items if i.get("date") and i["date"] >= today]
coming_soon.sort(key=lambda i: i["date"])

past_workshops = [i for i in items if i.get("date") and i["date"] < today]
past_workshops.sort(key=lambda i: i["date"], reverse=True)

self_paced = [i for i in items if not i.get("date")]

recorded_workshops = [i for i in items if i.get("video")]

template = jinja2.Template(TEMPLATE)

with open("index.html", "w") as f:
    f.write(template.render(
        title="Container Training",
        coming_soon=coming_soon,
        past_workshops=past_workshops,
        self_paced=self_paced,
        recorded_workshops=recorded_workshops
        ).encode("utf-8"))

with open("past.html", "w") as f:
    f.write(template.render(
        title="Container Training",
        all_past_workshops=past_workshops
        ).encode("utf-8"))

slides/index.yaml (new file, 420 lines)

@@ -0,0 +1,420 @@
- date: 2018-11-23
  city: Copenhagen
  country: dk
  event: GOTO
  title: Build Container Orchestration with Docker Swarm
  speaker: bretfisher
  attend: https://gotocph.com/2018/workshops/121
- date: 2018-11-08
  city: San Francisco, CA
  country: us
  event: QCON
  title: Introduction to Docker and Containers
  speaker: jpetazzo
  attend: https://qconsf.com/sf2018/workshop/introduction-docker-and-containers
- date: 2018-11-09
  city: San Francisco, CA
  country: us
  event: QCON
  title: Getting Started With Kubernetes and Container Orchestration
  speaker: jpetazzo
  attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration
- date: 2018-10-31
  city: London, UK
  country: uk
  event: Velocity EU
  title: Kubernetes 101
  speaker: bridgetkromhout
  attend: https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/71149
- date: 2018-10-30
  city: London, UK
  country: uk
  event: Velocity EU
  title: "Docker Zero to Hero: Docker, Compose and Production Swarm"
  speaker: bretfisher
  attend: https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/71231
- date: 2018-07-12
  city: Minneapolis, MN
  country: us
  event: devopsdays Minneapolis
  title: Kubernetes 101
  speaker: "ashleymcnamara, bketelsen"
  slides: https://devopsdaysmsp2018.container.training
  attend: https://www.devopsdays.org/events/2018-minneapolis/registration/
- date: 2018-10-01
  city: New York, NY
  country: us
  event: Velocity
  title: Kubernetes 101
  speaker: bridgetkromhout
  attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70102
- date: 2018-09-30
  city: New York, NY
  country: us
  event: Velocity
  title: Kubernetes Bootcamp - Deploying and Scaling Microservices
  speaker: jpetazzo
  attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/69875
- date: 2018-09-30
  city: New York, NY
  country: us
  event: Velocity
  title: "Docker Zero to Hero: Docker, Compose and Production Swarm"
  speaker: bretfisher
  attend: https://conferences.oreilly.com/velocity/vl-ny/public/schedule/detail/70147
- date: 2018-09-17
  country: fr
  city: Paris
  event: ENIX SAS
  speaker: jpetazzo
  title: Déployer ses applications avec Kubernetes (in French)
  lang: fr
  attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
- date: 2018-07-17
  city: Portland, OR
  country: us
  event: OSCON
  title: Kubernetes 101
  speaker: bridgetkromhout
  slides: https://oscon2018.container.training/
  attend: https://conferences.oreilly.com/oscon/oscon-or/public/schedule/detail/66287
- date: 2018-06-27
  city: Amsterdam
  country: nl
  event: devopsdays
  title: Kubernetes 101
  speaker: bridgetkromhout
  slides: https://devopsdaysams2018.container.training
  attend: https://www.devopsdays.org/events/2018-amsterdam/registration/
- date: 2018-06-12
  city: San Jose, CA
  country: us
  event: Velocity
  title: Kubernetes 101
  speaker: bridgetkromhout
  slides: https://velocitysj2018.container.training
  attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66286
- date: 2018-06-12
  city: San Jose, CA
  country: us
  event: Velocity
  title: "Kubernetes two-day kickstart: Deploying and Scaling Microservices with Kubernetes"
  speaker: "bketelsen, erikstmartin"
  slides: http://kubernetes.academy/kube-fullday.yml.html#1
  attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932
- date: 2018-06-11
  city: San Jose, CA
  country: us
  event: Velocity
  title: "Kubernetes two-day kickstart: Introduction to Docker and Containers"
  speaker: "bketelsen, erikstmartin"
  slides: http://kubernetes.academy/intro-fullday.yml.html#1
  attend: https://conferences.oreilly.com/velocity/vl-ca/public/schedule/detail/66932
- date: 2018-05-17
  city: Virginia Beach, FL
  country: us
  event: Revolution Conf
  title: Docker 101
  speaker: bretfisher
  slides: https://revconf18.bretfisher.com
- date: 2018-05-10
  city: Saint Paul, MN
  country: us
  event: NDC Minnesota
  title: Kubernetes 101
  slides: https://ndcminnesota2018.container.training
- date: 2018-05-08
  city: Budapest
  country: hu
  event: CRAFT
  title: Swarm Orchestration
  slides: https://craftconf18.bretfisher.com
- date: 2018-04-27
  city: Chicago, IL
  country: us
  event: GOTO
  title: Swarm Orchestration
  slides: https://gotochgo18.bretfisher.com
- date: 2018-04-24
  city: Chicago, IL
  country: us
  event: GOTO
  title: Kubernetes 101
  slides: http://gotochgo2018.container.training/
- date: 2018-04-11
  city: Paris
  country: fr
  title: Introduction aux conteneurs
  lang: fr
  slides: https://avril2018.container.training/intro.yml.html
- date: 2018-04-13
  city: Paris
  country: fr
  lang: fr
  title: Introduction à l'orchestration
  slides: https://avril2018.container.training/kube.yml.html
- date: 2018-04-06
  city: Sacramento, CA
  country: us
  event: MuraCon
  title: Docker 101
  slides: https://muracon18.bretfisher.com
- date: 2018-03-27
  city: Santa Clara, CA
  country: us
  event: SREcon Americas
  title: Kubernetes 101
  slides: http://srecon2018.container.training/
- date: 2018-03-27
  city: Bergen
  country: no
  event: Boosterconf
  title: Kubernetes 101
  slides: http://boosterconf2018.container.training/
- date: 2018-02-22
  city: San Francisco, CA
  country: us
  event: IndexConf
  title: Kubernetes 101
  slides: http://indexconf2018.container.training/
  #attend: https://developer.ibm.com/indexconf/sessions/#!?id=5474
- date: 2017-11-17
  city: San Francisco, CA
  country: us
  event: QCON SF
  title: Orchestrating Microservices with Docker Swarm
  slides: http://qconsf2017swarm.container.training/
- date: 2017-11-16
  city: San Francisco, CA
  country: us
  event: QCON SF
  title: Introduction to Docker and Containers
  slides: http://qconsf2017intro.container.training/
  video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLgqTum8MkspG_8VzGl6C07
- date: 2017-10-30
  city: San Franciso, CA
  country: us
  event: LISA
  title: (M7) Getting Started with Docker and Containers
  slides: http://lisa17m7.container.training/
- date: 2017-10-31
  city: San Franciso, CA
  country: us
  event: LISA
  title: (T9) Build, Ship, and Run Microservices on a Docker Swarm Cluster
  slides: http://lisa17t9.container.training/
- date: 2017-10-26
  city: Prague
  country: cz
  event: Open Source Summit Europe
  title: Deploying and scaling microservices with Docker and Kubernetes
  slides: http://osseu17.container.training/
  video: https://www.youtube.com/playlist?list=PLBAFXs0YjviLrsyydCzxWrIP_1-wkcSHS
- date: 2017-10-16
  city: Copenhagen
  country: dk
  event: DockerCon
  title: Swarm from Zero to Hero
  slides: http://dc17eu.container.training/
- date: 2017-10-16
  city: Copenhagen
  country: dk
  event: DockerCon
  title: Orchestration for Advanced Users
  slides: https://www.bretfisher.com/dockercon17eu
- date: 2017-07-25
  city: Minneapolis, MN
  country: us
  event: devopsdays
  title: Deploying & Scaling microservices with Docker Swarm
  video: https://www.youtube.com/watch?v=DABbqyJeG_E
- date: 2017-06-12
  city: Berlin
  country: de
  event: DevOpsCon
  title: Deploying and scaling containerized Microservices with Docker and Swarm
- date: 2017-05-18
  city: Portland, OR
  country: us
  event: PyCon
  title: Deploy and scale containers with Docker native, open source orchestration
  video: https://www.youtube.com/watch?v=EuzoEaE6Cqs
- date: 2017-05-08
  city: Austin, TX
  country: us
  event: OSCON
  title: Deploying and scaling applications in containers with Docker
- date: 2017-05-04
  city: Chicago, IL
  country: us
  event: GOTO
  title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2017-04-17
  city: Austin, TX
  country: us
  event: DockerCon
  title: Orchestration Workshop
- date: 2017-03-22
  city: San Jose, CA
  country: us
  event: Devoxx
  title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2017-03-03
  city: Pasadena, CA
  country: us
  event: SCALE
  title: Container deployment, scaling, and orchestration with Docker Swarm
- date: 2016-12-06
  city: Boston, MA
  country: us
  event: LISA
  title: Deploying and Scaling Applications with Docker Swarm
  slides: http://lisa16t1.container.training/
  video: https://www.youtube.com/playlist?list=PLBAFXs0YjviIDDhr8vIwCN1wkyNGXjbbc
- date: 2016-10-07
  city: Berlin
  country: de
  event: LinuxCon
  title: Orchestrating Containers in Production at Scale with Docker Swarm
- date: 2016-09-20
  city: New York, NY
  country: us
  event: Velocity
  title: Deployment and orchestration at scale with Docker
- date: 2016-08-25
  city: Toronto
  country: ca
  event: LinuxCon
  title: Orchestrating Containers in Production at Scale with Docker Swarm
- date: 2016-06-22
  city: Seattle, WA
  country: us
  event: DockerCon
  title: Orchestration Workshop
- date: 2016-05-29
  city: Portland, OR
  country: us
  event: PyCon
  title: Introduction to Docker and containers
  slides: https://us.pycon.org/2016/site_media/media/tutorial_handouts/DockerSlides.pdf
  video: https://www.youtube.com/watch?v=ZVaRK10HBjo
- date: 2016-05-17
  city: Austin, TX
  country: us
  event: OSCON
  title: Deployment and orchestration at scale with Docker Swarm
- date: 2016-04-27
  city: Budapest
  country: hu
  event: CRAFT
  title: Advanced Docker concepts and container orchestration
- date: 2016-04-22
  city: Berlin
  country: de
  event: Neofonie
  title: Orchestration Workshop
- date: 2016-04-05
  city: Stockholm
  country: se
  event: Praqma
  title: Orchestration Workshop
- date: 2016-03-22
  city: Munich
  country: de
  event: Stylight
  title: Orchestration Workshop
- date: 2016-03-11
  city: London
  country: uk
  event: QCON
  title: Containers in production with Docker Swarm
- date: 2016-02-19
  city: Amsterdam
  country: nl
  event: Container Solutions
  title: Orchestration Workshop
- date: 2016-02-15
  city: Paris
  country: fr
  event: Zenika
  title: Orchestration Workshop
- date: 2016-01-22
  city: Pasadena, CA
  country: us
  event: SCALE
  title: Advanced Docker concepts and container orchestration
#- date: 2015-11-10
#  city: Washington DC
#  country: us
#  event: LISA
#  title: Deploying and Scaling Applications with Docker Swarm
#2015-09-24-strangeloop
- title: Introduction to Docker and Containers
  slides: intro-selfpaced.yml.html
- title: Container Orchestration with Docker and Swarm
  slides: swarm-selfpaced.yml.html
- title: Deploying and Scaling Microservices with Docker and Kubernetes
  slides: kube-selfpaced.yml.html


@@ -13,47 +13,47 @@ exclude:
- self-paced
chapters:
- common/title.md
- shared/title.md
- logistics.md
- intro/intro.md
- common/about-slides.md
- common/toc.md
- - intro/Docker_Overview.md
- intro/Docker_History.md
- intro/Training_Environment.md
- intro/Installing_Docker.md
- intro/First_Containers.md
- intro/Background_Containers.md
- intro/Start_And_Attach.md
- - intro/Initial_Images.md
- intro/Building_Images_Interactively.md
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- - intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- - intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md
- intro/Ambassadors.md
- - intro/Local_Development_Workflow.md
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Docker_Machine.md
- - intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Logging.md
- intro/Resource_Limits.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- - intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md
- intro/links.md
- containers/intro.md
- shared/about-slides.md
- shared/toc.md
- - containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- - containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Docker_Machine.md
- - containers/Advanced_Dockerfiles.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Resource_Limits.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
- - containers/Container_Engines.md
- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md


@@ -13,47 +13,47 @@ exclude:
- in-person
chapters:
- common/title.md
# - common/logistics.md
- intro/intro.md
- common/about-slides.md
- common/toc.md
- - intro/Docker_Overview.md
- intro/Docker_History.md
- intro/Training_Environment.md
- intro/Installing_Docker.md
- intro/First_Containers.md
- intro/Background_Containers.md
- intro/Start_And_Attach.md
- - intro/Initial_Images.md
- intro/Building_Images_Interactively.md
- intro/Building_Images_With_Dockerfiles.md
- intro/Cmd_And_Entrypoint.md
- intro/Copying_Files_During_Build.md
- - intro/Multi_Stage_Builds.md
- intro/Publishing_To_Docker_Hub.md
- intro/Dockerfile_Tips.md
- - intro/Naming_And_Inspecting.md
- intro/Labels.md
- intro/Getting_Inside.md
- - intro/Container_Networking_Basics.md
- intro/Network_Drivers.md
- intro/Container_Network_Model.md
#- intro/Connecting_Containers_With_Links.md
- intro/Ambassadors.md
- - intro/Local_Development_Workflow.md
- intro/Working_With_Volumes.md
- intro/Compose_For_Dev_Stacks.md
- intro/Docker_Machine.md
- - intro/Advanced_Dockerfiles.md
- intro/Application_Configuration.md
- intro/Logging.md
- intro/Resource_Limits.md
- - intro/Namespaces_Cgroups.md
- intro/Copy_On_Write.md
#- intro/Containers_From_Scratch.md
- - intro/Container_Engines.md
- intro/Ecosystem.md
- intro/Orchestration_Overview.md
- common/thankyou.md
- intro/links.md
- shared/title.md
# - shared/logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/toc.md
- - containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- - containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Docker_Machine.md
- - containers/Advanced_Dockerfiles.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Resource_Limits.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
- - containers/Container_Engines.md
- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md


@@ -161,7 +161,7 @@ class: pic
(This is illustrated on the first "super complicated" schema)
- In some hosted Kubernetes offerings (e.g. GKE), the control plane is invisible
- In some hosted Kubernetes offerings (e.g. AKS, GKE, EKS), the control plane is invisible
(We only "see" a Kubernetes API endpoint)
@@ -239,7 +239,11 @@ Yes!
- namespace (more-or-less isolated group of things)
- secret (bundle of sensitive data to be passed to a container)
And much more! (We can see the full list by running `kubectl get`)
And much more!
- We can see the full list by running `kubectl api-resources`
(In Kubernetes 1.10 and prior, the command to list API resources was `kubectl get`)
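For instance, we can combine it with standard shell tools (a quick illustration, not part of the original exercise; the `grep` pattern is arbitrary):
```bash
# Full list of resource types known to this cluster:
kubectl api-resources
# Narrowed down to everything network-related:
kubectl api-resources | grep -i network
```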
---


@@ -95,10 +95,21 @@ Note: `--export` will remove "cluster-specific" information, i.e.:
- Change `kind: Deployment` to `kind: DaemonSet`
<!--
```bash vim rng.yml```
```wait kind: Deployment```
```keys /Deployment```
```keys ^J```
```keys cwDaemonSet```
```keys ^[``` ]
```keys :wq```
```keys ^J```
-->
- Save, quit
- Try to create our new resource:
```bash
```
kubectl apply -f rng.yml
```
@@ -130,6 +141,7 @@ We all knew this couldn't be that easy, right!
- remove the `replicas` field
- remove the `strategy` field (which defines the rollout mechanism for a deployment)
- remove the `progressDeadlineSeconds` field (also used by the rollout mechanism)
- remove the `status: {}` line at the end
--
@@ -419,11 +431,35 @@ Of course, option 2 offers more learning opportunities. Right?
kubectl edit daemonset rng
```
<!--
```wait Please edit the object below```
```keys /run: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
```keys /run: rng```
```keys ^J```
```keys oisactive: "yes"```
```keys ^[``` ]
```keys :wq```
```keys ^J```
-->
- Update the service to add `isactive: "yes"` to its selector:
```bash
kubectl edit service rng
```
<!--
```wait Please edit the object below```
```keys /run: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
```keys :wq```
```keys ^J```
-->
]
---


@@ -32,15 +32,11 @@ There is an additional step to make the dashboard available from outside (we'll
- Create all the dashboard resources, with the following command:
```bash
kubectl apply -f https://goo.gl/Qamqab
kubectl apply -f ~/container.training/k8s/kubernetes-dashboard.yaml
```
]
The goo.gl URL expands to:
<br/>
.small[https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml]
---
@@ -72,15 +68,11 @@ The goo.gl URL expands to:
- Apply the convenient YAML file, and defeat SSL protection:
```bash
kubectl apply -f https://goo.gl/tA7GLz
kubectl apply -f ~/container.training/k8s/socat.yaml
```
]
The goo.gl URL expands to:
<br/>
.small[.small[https://gist.githubusercontent.com/jpetazzo/c53a28b5b7fdae88bc3c5f0945552c04/raw/da13ef1bdd38cc0e90b7a4074be8d6a0215e1a65/socat.yaml]]
.warning[All our dashboard traffic is now clear-text, including passwords!]
---
@@ -135,7 +127,7 @@ The dashboard will then ask you which authentication you want to use.
- Grant admin privileges to the dashboard so we can see our resources:
```bash
kubectl apply -f https://goo.gl/CHsLTA
kubectl apply -f ~/container.training/k8s/grant-admin-to-dashboard.yaml
```
- Reload the dashboard and enjoy!
@@ -161,7 +153,7 @@ The dashboard will then ask you which authentication you want to use.
.exercise[
- Edit the service:
```bash
```
kubectl edit service kubernetes-dashboard
```
@@ -175,7 +167,7 @@ The dashboard will then ask you which authentication you want to use.
## Editing the `kubernetes-dashboard` service
- If we look at the [YAML](https://goo.gl/Qamqab) that we loaded before, we'll get a hint
- If we look at the [YAML](https://github.com/jpetazzo/container.training/blob/master/k8s/kubernetes-dashboard.yaml) that we loaded before, we'll get a hint
--
@@ -192,6 +184,16 @@ The dashboard will then ask you which authentication you want to use.
- Change `ClusterIP` to `NodePort`, save, and exit
<!--
```wait Please edit the object below```
```keys /ClusterIP```
```keys ^J```
```keys cwNodePort```
```keys ^[ ``` ]
```keys :wq```
```keys ^J```
-->
- Check the port that was assigned with `kubectl -n kube-system get services`
- Connect to https://oneofournodes:3xxxx/ (yes, https)
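If you prefer to obtain that port programmatically, something like this should work (a sketch; it assumes the service is named `kubernetes-dashboard` and exposes a single port):
```bash
kubectl -n kube-system get svc kubernetes-dashboard \
        -o go-template --template '{{(index .spec.ports 0).nodePort}}'
```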


@@ -34,27 +34,47 @@
## Installing Helm
- We need to install the `helm` CLI; then use it to deploy `tiller`
- If the `helm` CLI is not installed in your environment, install it
.exercise[
- Install the `helm` CLI:
- Check if `helm` is installed:
```bash
helm
```
- If it's not installed, run the following command:
```bash
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
```
- Deploy `tiller`:
]
---
## Installing Tiller
- Tiller is composed of a *service* and a *deployment* in the `kube-system` namespace
- They can be managed (installed, upgraded...) with the `helm` CLI
.exercise[
- Deploy Tiller:
```bash
helm init
```
- Add the `helm` completion:
```bash
. <(helm completion $(basename $SHELL))
```
]
If Tiller was already installed, don't worry: this won't break it.
At the end of the install process, you will see:
```
Happy Helming!
```
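To double-check that both sides are up, one option is to ask for the client and server versions (`helm version` will only report the server version once the `tiller-deploy` pod is ready):
```bash
helm version
```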
---
## Fix account permissions


@@ -6,7 +6,7 @@
- If we want to connect to our pod(s), we need to create a *service*
- Once a service is created, `kube-dns` will allow us to resolve it by name
- Once a service is created, CoreDNS will allow us to resolve it by name
(i.e. after creating service `hello`, the name `hello` will resolve to something)
@@ -46,7 +46,7 @@ Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables`
- `ExternalName`
- the DNS entry managed by `kube-dns` will just be a `CNAME` to a provided record
- the DNS entry managed by CoreDNS will just be a `CNAME` to a provided record
- no port, no IP address, no nothing else is allocated
The `LoadBalancer` type is currently only available on AWS, Azure, and GCE.
@@ -69,7 +69,10 @@ The `LoadBalancer` type is currently only available on AWS, Azure, and GCE.
kubectl get pods -w
```
<!-- ```keys ^C``` -->
<!--
```wait elastic-```
```keys ^C```
-->
]
@@ -123,11 +126,16 @@ Note: please DO NOT call the service `search`. It would collide with the TLD.
.exercise[
- Let's obtain the IP address that was allocated for our service, *programatically:*
- Let's obtain the IP address that was allocated for our service, *programmatically:*
```bash
IP=$(kubectl get svc elastic -o go-template --template '{{ .spec.clusterIP }}')
```
<!--
```hide kubectl wait deploy elastic --for condition=available```
```hide sleep 5``` (give some time for elasticsearch to start... hopefully this is enough!)
-->
- Send a few requests:
```bash
curl http://$IP:9200/
@@ -179,7 +187,7 @@ class: extra-details
- Since there is no virtual IP address, there is no load balancer either
- `kube-dns` will return the pods' IP addresses as multiple `A` records
- CoreDNS will return the pods' IP addresses as multiple `A` records
- This gives us an easy way to discover all the replicas for a deployment


@@ -83,7 +83,9 @@
- `kubectl` has pretty good introspection facilities
- We can list all available resource types by running `kubectl get`
- We can list all available resource types by running `kubectl api-resources`
<br/>
(In Kubernetes 1.10 and prior, this command used to be `kubectl get`)
- We can view details about a resource with:
```bash
@@ -224,7 +226,7 @@ The `kube-system` namespace is used for the control plane.
- `kube-controller-manager` and `kube-scheduler` are other master components
- `kube-dns` is an additional component (not mandatory but super useful, so it's there)
- `coredns` provides DNS-based service discovery ([replacing kube-dns as of 1.11](https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/))
- `kube-proxy` is the (per-node) component managing port mappings and such

slides/k8s/kubectlproxy.md (new file, 179 lines)

@@ -0,0 +1,179 @@
# Accessing the API with `kubectl proxy`
- The API requires us to authenticate.red[¹]
- There are many authentication methods available, including:
- TLS client certificates
<br/>
(that's what we've used so far)
- HTTP basic password authentication
<br/>
(from a static file; not recommended)
- various token mechanisms
<br/>
(detailed in the [documentation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#authentication-strategies))
.red[¹]OK, we lied. If you don't authenticate, you are considered to
be user `system:anonymous`, which doesn't have any access rights by default.
---
## Accessing the API directly
- Let's see what happens if we try to access the API directly with `curl`
.exercise[
- Retrieve the ClusterIP allocated to the `kubernetes` service:
```bash
kubectl get svc kubernetes
```
- Replace the IP below and try to connect with `curl`:
```bash
curl -k https://`10.96.0.1`/
```
]
The API will tell us that user `system:anonymous` cannot access this path.
---
## Authenticating to the API
If we wanted to talk to the API, we would need to:
- extract our TLS key and certificate information from `~/.kube/config`
(the information is in PEM format, encoded in base64)
- use that information to present our certificate when connecting
(for instance, with `openssl s_client -key ... -cert ... -connect ...`)
- figure out exactly which credentials to use
(once we start juggling multiple clusters)
- change that whole process if we're using another authentication method
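For the curious, the manual approach could look like this (a sketch only; it assumes a kubeconfig that embeds base64-encoded credentials, and the `10.96.0.1` address from the previous exercise):
```bash
# Extract the client certificate and key from the kubeconfig:
kubectl config view --raw \
        -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > client.crt
kubectl config view --raw \
        -o jsonpath='{.users[0].user.client-key-data}' | base64 -d > client.key
# Present them when connecting to the API server:
openssl s_client -key client.key -cert client.crt -connect 10.96.0.1:443
```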
🤔 There has to be a better way!
---
## Using `kubectl proxy` for authentication
- `kubectl proxy` runs a proxy in the foreground
- This proxy lets us access the Kubernetes API without authentication
(`kubectl proxy` adds our credentials on the fly to the requests)
- This proxy lets us access the Kubernetes API over plain HTTP
- This is a great tool to learn and experiment with the Kubernetes API
- ... And for serious use as well (e.g., for one-shot scripts)
- For unattended use, it is better to create a [service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
---
## Trying `kubectl proxy`
- Let's start `kubectl proxy` and then do a simple request with `curl`!
.exercise[
- Start `kubectl proxy` in the background:
```bash
kubectl proxy &
```
- Access the API's default route:
```bash
curl localhost:8001
```
- Terminate the proxy:
```bash
kill %1
```
]
The output is a list of available API routes.
---
## `kubectl proxy` is intended for local use
- By default, the proxy listens on port 8001
(But this can be changed with the `--port` flag; `--port=0` tells `kubectl proxy` to pick a random port)
- By default, the proxy binds to `127.0.0.1`
(Making it unreachable from other machines, for security reasons)
- By default, the proxy only accepts connections from:
`^localhost$,^127\.0\.0\.1$,^\[::1\]$`
- This is great when running `kubectl proxy` locally
- Not-so-great when you want to connect to the proxy from a remote machine
---
## Running `kubectl proxy` on a remote machine
- If we wanted to connect to the proxy from another machine, we would need to:
- bind to `INADDR_ANY` instead of `127.0.0.1`
- accept connections from any address
- This is achieved with:
```
kubectl proxy --port=8888 --address=0.0.0.0 --accept-hosts=.*
```
.warning[Do not do this on a real cluster: it opens full unauthenticated access!]
---
## Security considerations
- Running `kubectl proxy` openly is a huge security risk
- It is slightly better to run the proxy where you need it
(and copy credentials, e.g. `~/.kube/config`, to that place)
- It is even better to use a limited account with reduced permissions
---
## Good to know ...
- `kubectl proxy` also gives access to all internal services
- Specifically, services are exposed as such:
```
/api/v1/namespaces/<namespace>/services/<service>/proxy
```
- We can use `kubectl proxy` to access an internal service in a pinch
(or, for non-HTTP services, `kubectl port-forward`)
- This is not very useful when running `kubectl` directly on the cluster
(since we could connect to the services directly anyway)
- But it is very powerful as soon as you run `kubectl` from a remote machine
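For example, with the proxy running, the `elastic` service seen earlier could be reached like this (a sketch; it assumes that service still exists in the `default` namespace, listening on port 9200):
```bash
curl localhost:8001/api/v1/namespaces/default/services/elastic:9200/proxy/
```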


@@ -26,6 +26,8 @@
kubectl run pingpong --image alpine ping 1.1.1.1
```
<!-- ```hide kubectl wait deploy/pingpong --for condition=available``` -->
]
--
@@ -196,10 +198,13 @@ We could! But the *deployment* would notice it right away, and scale back to the
<!--
```wait Running```
```keys ^C```
```hide kubectl wait deploy pingpong --for condition=available```
```keys kubectl delete pod ping```
```copypaste pong-..........-.....```
-->
- Destroy a pod:
```bash
```
kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
```
]


@@ -10,7 +10,12 @@
kubectl get deployments -w
```
<!-- ```keys ^C``` -->
<!--
```wait RESTARTS```
```keys ^C```
```wait AVAILABLE```
```keys ^C```
-->
- Now, create more `worker` replicas:
```bash


@@ -6,7 +6,7 @@
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
- [Azure Container Service](https://docs.microsoft.com/azure/aks/)
- [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/)
- [Cloud Developer Advocates](https://developer.microsoft.com/advocates/)


@@ -40,12 +40,12 @@
- Load the YAML file into our cluster:
```bash
kubectl apply -f https://goo.gl/MUZhE4
kubectl apply -f ~/container.training/k8s/efk.yaml
```
]
If we [look at the YAML file](https://goo.gl/MUZhE4), we see that
If we [look at the YAML file](https://github.com/jpetazzo/container.training/blob/master/k8s/efk.yaml), we see that
it creates a daemon set, two deployments, two services,
and a few roles and role bindings (to give fluentd the required permissions).
@@ -113,7 +113,7 @@ and a few roles and role bindings (to give fluentd the required permissions).
- The first time you connect to Kibana, you must "configure an index pattern"
- Just use the one that is suggested, `@timestamp`
- Just use the one that is suggested, `@timestamp`.red[*]
- Then click "Discover" (in the top-left corner)
@@ -123,6 +123,9 @@ and a few roles and role bindings (to give fluentd the required permissions).
`kubernetes.host`, `kubernetes.pod_name`, `stream`, `log`
.red[*]If you don't see `@timestamp`, it's probably because no logs exist yet.
<br/>Wait a bit, and double-check the logging pipeline!
---
## Caveat emptor


@@ -47,22 +47,24 @@ Exactly what we need!
## Installing Stern
- For simplicity, let's just grab a binary release
- Run `stern` (without arguments) to check if it's installed:
.exercise[
```
$ stern
Tail multiple pods and containers from Kubernetes
- Download a binary release from GitHub:
```bash
sudo curl -L -o /usr/local/bin/stern \
https://github.com/wercker/stern/releases/download/1.6.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
Usage:
stern pod-query [flags]
```
]
- If it is not installed, the easiest method is to download a [binary release](https://github.com/wercker/stern/releases)
These installation instructions will work on our clusters, since they are Linux amd64 VMs.
However, you will have to adapt them if you want to install Stern on your local machine.
- The following commands will install Stern on a Linux Intel 64-bit machine:
```bash
sudo curl -L -o /usr/local/bin/stern \
https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
```
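Once installed, a quick smoke test could look like this (illustrative; it assumes the `pingpong` deployment created earlier is still running):
```bash
# Tail the logs of every pod whose name matches "pingpong",
# with one color per pod:
stern pingpong
```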
---


@@ -151,7 +151,7 @@ Note: it might take a minute or two for the app to be up and running.
- A pod in the `default` namespace can communicate with a pod in the `kube-system` namespace
- `kube-dns` uses a different subdomain for each namespace
- CoreDNS uses a different subdomain for each namespace
- Example: from any pod in the cluster, you can connect to the Kubernetes API with:

slides/k8s/netpol.md (new file, 268 lines)

@@ -0,0 +1,268 @@
# Network policies
- Namespaces help us to *organize* resources
- Namespaces do not provide isolation
- By default, every pod can contact every other pod
- By default, every service accepts traffic from anyone
- If we want this to be different, we need *network policies*
---
## What's a network policy?
A network policy is defined by the following things.
- A *pod selector* indicating which pods it applies to
e.g.: "all pods in namespace `blue` with the label `zone=internal`"
- A list of *ingress rules* indicating which inbound traffic is allowed
e.g.: "TCP connections to ports 8000 and 8080 coming from pods with label `zone=dmz`,
and from the external subnet 4.42.6.0/24, except 4.42.6.5"
- A list of *egress rules* indicating which outbound traffic is allowed
A network policy can provide ingress rules, egress rules, or both.
---
## How do network policies apply?
- A pod can be "selected" by any number of network policies
- If a pod isn't selected by any network policy, then its traffic is unrestricted
(In other words: in the absence of network policies, all traffic is allowed)
- If a pod is selected by at least one network policy, then all traffic is blocked ...
... unless it is explicitly allowed by one of these network policies
---
class: extra-details
## Traffic filtering is flow-oriented
- Network policies deal with *connections*, not individual packets
- Example: to allow HTTP (80/tcp) connections to pod A, you only need an ingress rule
(You do not need a matching egress rule to allow response traffic to go through)
- This also applies to UDP traffic
(Allowing DNS traffic can be done with a single rule)
- Network policy implementations use stateful connection tracking
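As an illustration (this is not one of the labs), a policy allowing only DNS egress might look like this; the policy name and pod label are hypothetical:
```bash
kubectl apply -f- <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: egress-dns-only
spec:
  podSelector:
    matchLabels:
      run: testweb
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
EOF
```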
---
## Pod-to-pod traffic
- Connections from pod A to pod B have to be allowed by both pods:
- pod A has to be unrestricted, or allow the connection as an *egress* rule
- pod B has to be unrestricted, or allow the connection as an *ingress* rule
- As a consequence: if a network policy restricts traffic going from/to a pod,
<br/>
the restriction cannot be overridden by a network policy selecting another pod
- This prevents an entity managing network policies in namespace A
(but without permission to do so in namespace B)
from adding network policies giving them access to namespace B
---
## The rationale for network policies
- In network security, it is generally considered better to "deny all, then allow selectively"
(The other approach, "allow all, then block selectively" makes it too easy to leave holes)
- As soon as one network policy selects a pod, the pod enters this "deny all" logic
- Further network policies can open additional access
- Good network policies should be scoped as precisely as possible
- In particular: make sure that the selector is not too broad
(Otherwise, you end up affecting pods that were otherwise well secured)
---
## Our first network policy
This is our game plan:
- run a web server in a pod
- create a network policy to block all access to the web server
- create another network policy to allow access only from specific pods
---
## Running our test web server
.exercise[
- Let's use the `nginx` image:
```bash
kubectl run testweb --image=nginx
```
- Find out the IP address of the pod with one of these two commands:
```bash
kubectl get pods -o wide -l run=testweb
IP=$(kubectl get pods -l run=testweb -o json | jq -r .items[0].status.podIP)
```
- Check that we can connect to the server:
```bash
curl $IP
```
]
The `curl` command should show us the "Welcome to nginx!" page.
---
## Adding a very restrictive network policy
- The policy will select pods with the label `run=testweb`
- It will specify an empty list of ingress rules (matching nothing)
.exercise[
- Apply the policy in this YAML file:
```bash
kubectl apply -f ~/container.training/k8s/netpol-deny-all-for-testweb.yaml
```
- Check if we can still access the server:
```bash
curl $IP
```
]
The `curl` command should now time out.
---
## Looking at the network policy
This is the file that we applied:
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: deny-all-for-testweb
spec:
podSelector:
matchLabels:
run: testweb
ingress: []
```
---
## Allowing connections only from specific pods
- We want to allow traffic from pods with the label `run=testcurl`
- Reminder: this label is automatically applied when we do `kubectl run testcurl ...`
.exercise[
- Apply another policy:
```bash
kubectl apply -f ~/container.training/k8s/netpol-allow-testcurl-for-testweb.yaml
```
]
---
## Looking at the network policy
This is the second file that we applied:
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-testcurl-for-testweb
spec:
podSelector:
matchLabels:
run: testweb
ingress:
- from:
- podSelector:
matchLabels:
run: testcurl
```
---
## Testing the network policy
- Let's create pods with, and without, the required label
.exercise[
- Try to connect to testweb from a pod with the `run=testcurl` label:
```bash
kubectl run testcurl --rm -i --image=centos -- curl -m3 $IP
```
- Try to connect to testweb with a different label:
```bash
kubectl run testkurl --rm -i --image=centos -- curl -m3 $IP
```
]
The first command will work (and show the "Welcome to nginx!" page).
The second command will fail and time out after 3 seconds.
(The timeout is obtained with the `-m3` option.)
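When you are done experimenting, cleanup could look like this (assuming the names used above):
```bash
kubectl delete networkpolicy deny-all-for-testweb allow-testcurl-for-testweb
kubectl delete deployment testweb
```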
---
## An important warning
- Some network plugins only have partial support for network policies
- For instance, Weave [doesn't support ipBlock (yet)](https://github.com/weaveworks/weave/issues/3168)
- Weave added support for egress rules [in version 2.4](https://github.com/weaveworks/weave/pull/3313) (released in July 2018)
- Unsupported features might be silently ignored
(Making you believe that you are secure, when you're not)
---
## Further resources
- As always, the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) is a good starting point
- And two resources by [Ahmet Alp Balkan](https://ahmet.im/):
- a [very good talk about network policies](https://www.youtube.com/watch?list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb&v=3gGpMmYeEO8) at KubeCon North America 2017
- a repository of [ready-to-use recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for network policies


@@ -114,6 +114,8 @@ In this part, we will:
.exercise[
<!-- ```hide kubectl wait deploy/registry --for condition=available```-->
- View the repositories currently held in our registry:
```bash
curl $REGISTRY/v2/_catalog
@@ -275,6 +277,11 @@ class: extra-details
.exercise[
<!-- ```hide
kubectl wait deploy/rng --for condition=available
kubectl wait deploy/worker --for condition=available
``` -->
- Look at some logs:
```bash
kubectl logs deploy/rng
@@ -328,9 +335,8 @@ class: extra-details
(Give it about 10 seconds to recover)
<!--
```keys
^C
```
```wait units of work done, updating hash counter```
```keys ^C```
-->
]


@@ -96,7 +96,10 @@
kubectl get deployments -w
```
<!-- ```keys ^C``` -->
<!--
```wait NAME```
```keys ^C```
-->
- Update `worker` either with `kubectl edit`, or by running:
```bash
@@ -150,23 +153,38 @@ That rollout should be pretty quick. What shows in the web UI?
kubectl rollout status deploy worker
```
<!--
```wait Waiting for deployment```
```keys ^C```
-->
]
--
Our rollout is stuck. However, the app is not dead (just 10% slower).
Our rollout is stuck. However, the app is not dead.
(After a minute, it will stabilize to be 20-25% slower.)
---
## What's going on with our rollout?
- Why is our app 10% slower?
- Why is our app a bit slower?
- Because `MaxUnavailable=1`, so the rollout terminated 1 replica out of 10 available
- Because `MaxUnavailable=25%`
- Okay, but why do we see 2 new replicas being rolled out?
... So the rollout terminated 2 replicas out of 10 available
- Because `MaxSurge=1`, so in addition to replacing the terminated one, the rollout is also starting one more
- Okay, but why do we see 5 new replicas being rolled out?
- Because `MaxSurge=25%`
... So in addition to replacing 2 replicas, the rollout is also starting 3 more
- MaxUnavailable is rounded *down* (25% of 10 pods = 2.5, rounded down to 2), while MaxSurge is rounded *up* (2.5 becomes 3),
<br/>
so the total number of pods being rolled out is allowed to reach 25%+25% = 50%
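One way to watch these numbers live (a side note, not part of the original exercise; it assumes the `run=worker` label that `kubectl run` applies) is to look at the deployment and its replica sets during the rollout:
```bash
# UP-TO-DATE / AVAILABLE counters for the deployment:
kubectl get deploy worker
# Old and new replica sets side by side:
kubectl get rs -l run=worker
```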
---
@@ -176,20 +194,49 @@ class: extra-details
- We start with 10 pods running for the `worker` deployment
- Current settings: MaxUnavailable=1 and MaxSurge=1
- Current settings: MaxUnavailable=25% and MaxSurge=25%
- When we start the rollout:
- one replica is taken down (as per MaxUnavailable=1)
- another is created (with the new version) to replace it
- another is created (with the new version) per MaxSurge=1
- two replicas are taken down (as per MaxUnavailable=25%)
- two others are created (with the new version) to replace them
- three others are created (with the new version), as per MaxSurge=25%
- Now we have 9 replicas up and running, and 2 being deployed
- Now we have 8 replicas up and running, and 5 being deployed
- Our rollout is stuck at this point!
---
## Checking the dashboard during the bad rollout
If you haven't deployed the Kubernetes dashboard earlier, just skip this slide.
.exercise[
- Check which port the dashboard is on:
```bash
kubectl -n kube-system get svc socat
```
]
Note the `3xxxx` port.
.exercise[
- Connect to http://oneofournodes:3xxxx/
<!-- ```open https://node1:3xxxx/``` -->
]
--
- We have failures in Deployments, Pods, and Replica Sets
---
## Recovering from a bad rollout
- We could push some `v0.3` image
@@ -222,7 +269,7 @@ class: extra-details
- revert to `v0.1`
- be conservative on availability (always have desired number of available workers)
- be aggressive on rollout speed (update more than one pod at a time)
- go slow on rollout speed (update only one pod at a time)
- give some time to our workers to "warm up" before starting more
The corresponding changes can be expressed in the following YAML snippet:
@@ -238,7 +285,7 @@ spec:
strategy:
rollingUpdate:
maxUnavailable: 0
maxSurge: 3
maxSurge: 1
minReadySeconds: 10
```
]
@@ -267,7 +314,7 @@ spec:
strategy:
rollingUpdate:
maxUnavailable: 0
maxSurge: 3
maxSurge: 1
minReadySeconds: 10
"
kubectl rollout status deployment worker


@@ -10,7 +10,7 @@
2. Install Kubernetes packages
3. Run `kubeadm init` on the master node
3. Run `kubeadm init` on the first node (it deploys the control plane on that node)
4. Set up Weave (the overlay network)
<br/>


@@ -1,7 +1,7 @@
## Versions installed
- Kubernetes 1.10.3
- Docker Engine 18.03.0-ce
- Kubernetes 1.11.0
- Docker Engine 18.03.1-ce
- Docker Compose 1.21.1


@@ -28,7 +28,7 @@ And *then* it is time to look at orchestration!
- Each of the two `redis` services has its own `ClusterIP`
- `kube-dns` creates two entries, mapping to these two `ClusterIP` addresses:
- CoreDNS creates two entries, mapping to these two `ClusterIP` addresses:
`redis.blue.svc.cluster.local` and `redis.green.svc.cluster.local`
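A quick way to see this in action (illustrative; it assumes the `blue` and `green` namespaces from this example exist) is to resolve one of these names from a throwaway pod:
```bash
kubectl run dnstest --rm -i --image=busybox -- \
        nslookup redis.blue.svc.cluster.local
```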
@@ -93,13 +93,37 @@ And *then* it is time to look at orchestration!
---
## Logging and metrics
## Logging
- Logging is delegated to the container engine
- Metrics are typically handled with [Prometheus](https://prometheus.io/)
- Logs are exposed through the API
- Logs are also accessible through local files (`/var/log/containers`)
- Log shipping to a central platform is usually done through these files
(e.g. with an agent bind-mounting the log directory)
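For instance, on any node, it looks like this (illustrative; reading the files may require root):
```bash
# One file per container; the name encodes the pod name,
# namespace, container name, and container ID:
ls /var/log/containers/
```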
---
## Metrics
- The kubelet embeds [cAdvisor](https://github.com/google/cadvisor), which exposes container metrics
(cAdvisor might be separated in the future for more flexibility)
- It is a good idea to start with [Prometheus](https://prometheus.io/)
(even if you end up using something else)
- Starting from Kubernetes 1.8, we can use the [Metrics API](https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/)
- [Heapster](https://github.com/kubernetes/heapster) was a popular add-on
(but is being [deprecated](https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md) starting with Kubernetes 1.11)
([Heapster](https://github.com/kubernetes/heapster) is a popular add-on)
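If such a metrics pipeline is deployed, the Metrics API is what powers commands like these (shown for illustration; they fail if no metrics add-on is running):
```bash
kubectl top nodes
kubectl top pods
```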
---


@@ -14,33 +14,35 @@ exclude:
- self-paced
chapters:
- common/title.md
- shared/title.md
- logistics.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
#- common/composescale.md
- common/composedown.md
- - kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- kube/kubectlrun.md
- - kube/kubectlexpose.md
- kube/ourapponkube.md
- kube/dashboard.md
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
#- kube/logs-cli.md
#- kube/logs-centralized.md
#- kube/helm.md
#- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
- common/thankyou.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
- shared/composedown.md
- - k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-k8s.md
- k8s/kubectlrun.md
- - k8s/kubectlexpose.md
- k8s/ourapponkube.md
- k8s/kubectlproxy.md
- k8s/dashboard.md
- - k8s/kubectlscale.md
- k8s/daemonset.md
- k8s/rollout.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
- k8s/helm.md
- k8s/namespaces.md
- k8s/netpol.md
- k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md


@@ -13,39 +13,41 @@ exclude:
- self-paced
chapters:
- common/title.md
- shared/title.md
#- logistics.md
# Bridget-specific; others use logistics.md
- logistics-bridget.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- k8s/versions-k8s.md
- shared/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- common/composescale.md
- common/composedown.md
- kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- - kube/kubectlrun.md
- kube/kubectlexpose.md
- kube/ourapponkube.md
- - kube/dashboard.md
- kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
#- shared/composescale.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-k8s.md
- - k8s/kubectlrun.md
- k8s/kubectlexpose.md
- k8s/ourapponkube.md
#- k8s/kubectlproxy.md
- - k8s/dashboard.md
- k8s/kubectlscale.md
- k8s/daemonset.md
- k8s/rollout.md
- - k8s/logs-cli.md
# Bridget hasn't added EFK yet
#- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
# - kube/links.md
#- k8s/logs-centralized.md
- k8s/helm.md
- k8s/namespaces.md
#- k8s/netpol.md
- k8s/whatsnext.md
# - k8s/links.md
# Bridget-specific
- kube/links-bridget.md
- common/thankyou.md
- k8s/links-bridget.md
- shared/thankyou.md


@@ -13,33 +13,35 @@ exclude:
- in-person
chapters:
- common/title.md
- shared/title.md
#- logistics.md
- kube/intro.md
- common/about-slides.md
- common/toc.md
- - common/prereqs.md
- kube/versions-k8s.md
- common/sampleapp.md
- common/composescale.md
- common/composedown.md
- - kube/concepts-k8s.md
- common/declarative.md
- kube/declarative.md
- kube/kubenet.md
- kube/kubectlget.md
- kube/setup-k8s.md
- kube/kubectlrun.md
- - kube/kubectlexpose.md
- kube/ourapponkube.md
- kube/dashboard.md
- - kube/kubectlscale.md
- kube/daemonset.md
- kube/rollout.md
- - kube/logs-cli.md
- kube/logs-centralized.md
- kube/helm.md
- kube/namespaces.md
- kube/whatsnext.md
- kube/links.md
- common/thankyou.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- k8s/versions-k8s.md
- shared/sampleapp.md
- shared/composescale.md
- shared/composedown.md
- - k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-k8s.md
- k8s/kubectlrun.md
- - k8s/kubectlexpose.md
- k8s/ourapponkube.md
- k8s/kubectlproxy.md
- k8s/dashboard.md
- - k8s/kubectlscale.md
- k8s/daemonset.md
- k8s/rollout.md
- - k8s/logs-cli.md
- k8s/logs-centralized.md
- k8s/helm.md
- k8s/namespaces.md
- k8s/netpol.md
- k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md

slides/override.css (new file, 17 lines)

@@ -0,0 +1,17 @@
.remark-slide-content:not(.pic) {
background-repeat: no-repeat;
background-position: 99% 1%;
background-size: 8%;
background-image: url(https://enix.io/static/img/logos/logo-domain-cropped.png);
}
div.extra-details:not(.pic) {
background-image: url("images/extra-details.png"), url(https://enix.io/static/img/logos/logo-domain-cropped.png);
background-position: 0.5% 1%, 99% 1%;
background-size: 4%, 8%;
}
.remark-slide-content:not(.pic) div.remark-slide-number {
top: 16px;
right: 112px
}


@@ -1,2 +1,3 @@
# This is for netlify
PyYAML
jinja2


@@ -35,7 +35,7 @@ class: extra-details
- This slide has a little magnifying glass in the top left corner
- This magnifiying glass indicates slides that provide extra details
- This magnifying glass indicates slides that provide extra details
- Feel free to skip them if:


@@ -189,7 +189,9 @@ done
```bash
if which kubectl; then
kubectl get all -o name | grep -v service/kubernetes | xargs -rn1 kubectl delete
kubectl get deploy,ds -o name | xargs -rn1 kubectl delete
kubectl get all -o name | grep -v service/kubernetes | xargs -rn1 kubectl delete --ignore-not-found=true
kubectl -n kube-system get deploy,svc -o name | grep -v dns | xargs -rn1 kubectl -n kube-system delete
fi
```
-->
@@ -212,7 +214,7 @@ If anything goes wrong — ask for help!
- Use something like
[Play-With-Docker](http://play-with-docker.com/) or
[Play-With-Kubernetes](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
[Play-With-Kubernetes](https://training.play-with-kubernetes.com/)
Zero setup effort; but environments are short-lived and
might have limited resources


@@ -8,8 +8,9 @@
<!--
```bash
cd ~
if [ -d container.training ]; then
mv container.training container.training.$$
mv container.training container.training.$RANDOM
fi
```
-->

Some files were not shown because too many files have changed in this diff.