Compare commits


97 Commits

Author SHA1 Message Date
Jerome Petazzoni
fecd68b1f7 fix-redirects.sh: adding forced redirect 2020-04-07 16:57:50 -05:00
Jerome Petazzoni
0815350c0c Merge branch 'master' into vmware-2019-11 2019-11-19 11:15:21 -06:00
Jérôme Petazzoni
5b488fbe62 Update Installing_Docker.md 2019-11-19 09:35:46 -06:00
Jerome Petazzoni
7ce4f8f000 Finalize deck with stateful part 2019-11-19 08:04:08 -06:00
Jerome Petazzoni
7d77dad108 Merge branch 'master' into vmware-2019-11 2019-11-19 07:42:42 -06:00
Jerome Petazzoni
6d01a9d813 Add commands to prep portworx; make postgresql work on PKS 2019-11-19 07:40:01 -06:00
Jerome Petazzoni
cb81469170 Move storage class to portworx manifest 2019-11-19 06:58:49 -06:00
Jerome Petazzoni
c595a337e4 Rewrite services section
Improve the order when introducing ClusterIP, LoadBalancer, NodePort.
Explain the deal with ExternalIP and ExternalName, and reword the
Ingress slide.
2019-11-19 06:51:39 -06:00
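As a quick illustration of the service types discussed in that commit (a sketch only; the `webui` deployment name is borrowed from the dockercoins manifest added later in this diff):

```
# Default ClusterIP service: reachable only from inside the cluster
kubectl expose deployment webui --port=80

# NodePort service: also reachable on a port allocated on every node
kubectl expose deployment webui --port=80 --type=NodePort --name=webui-nodeport

# LoadBalancer service: asks the platform for an external load balancer (cloud, PKS, etc.)
kubectl expose deployment webui --port=80 --type=LoadBalancer --name=webui-lb
```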
Jerome Petazzoni
e32d3273bd Add PKS 2019-11-19 04:01:06 -06:00
Jerome Petazzoni
cde40563a6 Add VMware content 2019-11-18 18:56:03 -06:00
Jerome Petazzoni
f85d5ac53a merge 2019-11-18 17:59:09 -06:00
Jerome Petazzoni
03d2d0bc5d kubectl is the new SSH 2019-11-18 16:47:10 -06:00
Jerome Petazzoni
2c46106792 Add explanations to navigate slides 2019-11-18 13:53:54 -06:00
Jerome Petazzoni
291d2a6c92 Add note about DNS integration 2019-11-18 13:30:09 -06:00
Jerome Petazzoni
f73fb92832 Put pods before services
The flow is better this way, since we can introduce pods
just after seeing them in kubectl describe node.

Also, add some extra info when we curl the Kubernetes API.
2019-11-18 12:57:26 -06:00
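For context, a minimal sketch of what "curl the Kubernetes API" can look like (the address and port are assumptions for a default kubeadm-style cluster):

```
# Anonymous call to the API server; -k skips TLS verification for brevity
curl -k https://localhost:6443/version

# Authenticated calls can go through a local proxy that reuses kubectl's credentials
kubectl proxy &
curl http://localhost:8001/api/v1/nodes
```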
Jerome Petazzoni
e9e2fa0e50 Fix YAML formatting 2019-11-18 09:04:18 -06:00
Jerome Petazzoni
be02f1e850 Shuffle content around 2019-11-18 04:54:30 -06:00
Jerome Petazzoni
3266369dd7 Prep deck 2019-11-15 10:21:54 -06:00
Jerome Petazzoni
a0162d37f1 Add explanations to the node/pod diagram 2019-11-15 08:49:57 -06:00
Jerome Petazzoni
a61b69ad9a Merge branch 'master' of github.com:jpetazzo/container.training 2019-11-12 14:48:55 -06:00
Jerome Petazzoni
3388db4272 Update what we can do with k8s 2019-11-12 14:48:28 -06:00
Jérôme Petazzoni
d2d901302f Merge pull request #533 from BretFisher/remove-rkt
remove deprecated rkt, mention runtimes are different per distro
2019-11-12 13:15:32 +01:00
Jérôme Petazzoni
1fae4253bc Update concepts-k8s.md 2019-11-12 06:15:06 -06:00
Bret Fisher
f7f5ab1304 deprecated rkt, added more containerd/cri-o info 2019-11-12 06:45:42 -05:00
Jerome Petazzoni
7addacef22 Pin HAProxy to v1 2019-11-12 01:47:36 -06:00
Jerome Petazzoni
0136391ab5 Add rollback --to-revision 2019-11-11 01:23:28 -06:00
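A short sketch of the flag mentioned in that commit (the deployment name and revision number are illustrative):

```
# Inspect the rollout history of a deployment, then roll back to a chosen revision
kubectl rollout history deployment/worker
kubectl rollout undo deployment/worker --to-revision=2
```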
Jerome Petazzoni
ed27ad1d1e Expand volume section 2019-11-11 00:59:39 -06:00
Jerome Petazzoni
c15aa708df Put random values in Ingress 2019-11-11 00:25:50 -06:00
Bret Fisher
5749348883 remove deprecated rkt, mention runtimes are different per distro 2019-11-08 00:19:35 -05:00
Jerome Petazzoni
bc885f3dca Update information re/ JVM resource limits
Thanks @qerub for the heads up.
2019-11-07 11:39:19 -06:00
Jerome Petazzoni
bbe35a3901 Update the mention of Prometheus exposition format
Thanks @qerub for letting me know that the protobuf format
was deprecated in Prom 2. Also, that technical document by
@beorn7 is a real delight to read. 💯
2019-11-07 11:21:20 -06:00
Jerome Petazzoni
eb17b4c628 Tweak single-day workshop content 2019-11-07 11:15:14 -06:00
Jérôme Petazzoni
a4d50a5439 Merge pull request #532 from someara/someara/editors
adding editors
2019-11-07 14:03:24 +01:00
Sean OMeara
98d2b79c97 adding editors 2019-11-04 10:13:29 +01:00
Jerome Petazzoni
8320534a5c Add prefix to slide numbers 2019-11-03 07:42:24 -06:00
Jerome Petazzoni
74ece65947 Add Velocity slides 2019-11-03 07:11:05 -06:00
Jerome Petazzoni
7444f8d71e Add cronjobs and YAML catch up instructions 2019-11-01 22:46:43 -05:00
Jerome Petazzoni
c9bc417a32 Update logs section 2019-10-31 20:19:33 -05:00
Jerome Petazzoni
7d4331477a Get rid of $TAG and $REGISTRY
These variables are useful when deploying images
from a local registry (or from somewhere other than
the Docker Hub), but they turned out to be quite
confusing. After holding on to them for a while,
I think it is time to see the error of my ways
and simplify that stuff.
2019-10-31 19:49:35 -05:00
Jerome Petazzoni
ff132fd728 Add mention of Review Access / rakkess 2019-10-31 17:26:01 -05:00
Jerome Petazzoni
4ec7b1d7f4 Improve section on healthchecks, and add information about startup probes 2019-10-31 17:15:01 -05:00
Jerome Petazzoni
e08e7848ed Add instructions about shpod 2019-10-31 16:07:33 -05:00
Jérôme Petazzoni
be6afa3e5e Merge pull request #531 from infomaven/master
Update troubleshooting instructions for Python 3.7 users
2019-10-30 23:23:59 +01:00
Jérôme Petazzoni
c340d909de Merge pull request #529 from joemcmahon/os-x-stern-install
Os x stern install
2019-10-30 23:19:50 +01:00
Jérôme Petazzoni
b667cf7cfc Update logs-cli.md 2019-10-30 17:19:25 -05:00
Jérôme Petazzoni
e04998e9cd Merge pull request #527 from joemcmahon/fix-jinja2-and-pyyml-install-instructions
Add instructions for pyyml, jinja2, default Python
2019-10-30 23:14:51 +01:00
Jérôme Petazzoni
84198b3fdc Update README.md 2019-10-30 17:13:13 -05:00
Nadine Whitfield
5c161d2090 Update README.md 2019-10-29 23:51:57 -07:00
Nadine Whitfield
0fc7c2316c Updated for python 3.7 2019-10-29 23:48:50 -07:00
Jerome Petazzoni
fb64c0d68f Update kube-proxy command 2019-10-29 20:31:18 -05:00
Jerome Petazzoni
23aaf7f58c Improve DMUC slides 2019-10-29 19:48:23 -05:00
Jerome Petazzoni
6cbcc4ae69 Fix CNI version (0.8 is not supported yet) 2019-10-29 19:44:41 -05:00
Jerome Petazzoni
0b80238736 Bump up versions of kubebins 2019-10-25 12:25:49 -05:00
Joe McMahon
4c285b5318 Add instruction to install stern on OS X 2019-10-10 09:29:42 -07:00
Jérôme Petazzoni
2095a15728 Merge pull request #528 from tvroom/add.link.video.zombie.exec.healthchecks
Add link to conf video mentioning issues with zombie'd exec healthchecks
2019-10-09 21:58:56 +02:00
Tim Vroom
13ba8cef9d Add link to conference video mentioning issues with zombie'd exec healthcheck 2019-10-09 10:47:52 -07:00
Joe McMahon
be2374c672 Add instructions for pyyml, jinja2, default Python
Installing `mosh` via Homebrew may change `/usr/local/bin/python` to
Python 2. Adds docs to check and fix this so that `pyyml` and `jinja2`
can be installed.
2019-10-08 09:52:44 -07:00
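A sketch of the check described in that commit (macOS with Homebrew assumed; the exact Cellar path depends on the installed version):

```
# The python symlink should resolve into the Python 3 Cellar, not python@2
ls -la /usr/local/bin/python
python --version   # expect "Python 3.x"
```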
Jerome Petazzoni
f96da2d260 Add dry-run, server-dry-run, kubectl diff
Closes #523.
2019-10-06 09:24:30 -05:00
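A brief sketch of those commands as they existed in kubectl around Kubernetes 1.16 (`dockercoins.yaml` refers to the manifest added elsewhere in this diff):

```
# Client-side dry run: validate and print objects without sending them for persistence
kubectl apply -f dockercoins.yaml --dry-run

# Server-side dry run: the API server runs admission and validation but persists nothing
kubectl apply -f dockercoins.yaml --server-dry-run

# Show the difference between the manifest and the live objects
kubectl diff -f dockercoins.yaml
```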
Christian Bewernitz
5958874071 highlight code that is recommended to be used (#522)
Highlight the recommended code more clearly.

(Thanks @karfau for the patch!)
2019-10-05 07:57:33 -05:00
Jerome Petazzoni
370bdf9aaf Add kube web view and kube ops view 2019-10-03 05:28:13 -05:00
Jerome Petazzoni
381cd27037 Add kube resource report 2019-10-03 05:19:51 -05:00
Jerome Petazzoni
c409c6997a Add kubecost blog post about requests and limits 2019-10-03 05:09:17 -05:00
Jerome Petazzoni
eb2e74f236 Adjust apiVersion for k8s 1.16 2019-09-23 08:53:38 -05:00
Jerome Petazzoni
169d850fc7 bump apiversion for 1.16 2019-09-23 08:30:28 -05:00
Jerome Petazzoni
96104193ba Add LISA tutorial 2019-09-20 09:57:27 -05:00
Jerome Petazzoni
5a5a08cf25 Add CLT training 2019-09-19 13:22:59 -05:00
Jerome Petazzoni
82b7b7ba88 Add slides for ENIX training 2019-09-18 13:08:54 -05:00
Jerome Petazzoni
8c4a0a3fce Merge branch 'master' of github.com:jpetazzo/container.training 2019-09-17 06:13:29 -05:00
Jerome Petazzoni
f4f0fb0f23 http.server requires python3 2019-09-17 06:13:21 -05:00
Jérôme Petazzoni
8dfcb440c8 Merge pull request #526 from BretFisher/fix-pod-yaml
fixing uppercase K in yaml for static pods
2019-09-16 15:19:38 +02:00
Bret Fisher
f3622d98fe fixing uppercase K in yaml for static pods 2019-09-13 16:49:47 -04:00
Jérôme Petazzoni
b1fc7580a1 Merge pull request #525 from BretFisher/patch-19
added GOTO Berlin to index
2019-09-09 11:44:38 +02:00
Bret Fisher
ab77d89232 added GOTO Berlin to index 2019-09-06 13:19:53 -04:00
Jerome Petazzoni
04f728c67a Add nowrap to vimrc
The certificates embedded in .kube/config make the file a bit hard
to read. This will make it easier.
2019-09-03 09:04:42 -05:00
Jerome Petazzoni
5bbce4783a Better modularize card generation
Most parameters used by the Jinja template for the cards
can now be specified in settings.yaml. This should make
the generation of cards for admin training much easier.
2019-09-03 06:51:15 -05:00
Jerome Petazzoni
889c79addb Word tweaks for eksctl
Just indicate that eksctl is now "the new way" to deploy EKS
(since AWS now supports it officially).
2019-09-03 04:49:03 -05:00
AJ Bowen
c4b408621c Create .tmux.conf to allow mouse and scrolling support and vim bindings for changing panes 2019-09-03 04:44:57 -05:00
Jerome Petazzoni
49df28d44f Add WebSSH snippet 2019-08-26 01:08:14 -05:00
Jerome Petazzoni
46878ed6c7 Update chapter about version upgrades 2019-08-23 05:48:55 -05:00
Jerome Petazzoni
b5b005b6d2 Bump k8s version 2019-08-23 05:12:48 -05:00
Jerome Petazzoni
9e991d1900 Add command to change the NodePort range
This helps when the customer's internet connection filters out
the default port range. It still requires a port range to be
open somewhere, though. Here we use 10000-10999, but this should
be adjusted if necessary.
2019-08-23 05:11:05 -05:00
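For reference, a sketch of the end result (the manifest path matches the `remap_nodeports` helper added further down in this diff; the exact flag placement is an assumption):

```
# After the change, the API server manifest should carry the new port range:
grep -- '--service-node-port-range' /etc/kubernetes/manifests/kube-apiserver.yaml
#    - --service-node-port-range=10000-10999
```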
Jerome Petazzoni
ace911a208 Restore ingress YAML template 2019-08-23 04:45:37 -05:00
Jerome Petazzoni
ead027a62e Reorganize content flow
This introduces concepts more progressively (instead of
front-loading most of the theory before tackling the first
useful commands). It was successfully tested at PyCon
and at a few one-day engagements, and it works really well.
I'm now making it the official flow.

I'm also reformatting the YAML a little bit to facilitate
content shuffling.
2019-08-13 09:37:14 -05:00
Jerome Petazzoni
09c832031b Bump up ingress version in slides too 2019-08-13 08:13:37 -05:00
Jerome Petazzoni
34fca341bc Bump k8s YAML versions 2019-08-13 08:05:39 -05:00
Jerome Petazzoni
af18c5ab9f Bump versions 2019-08-13 06:04:24 -05:00
Jérôme Petazzoni
afa3a59461 Merge pull request #521 from gurayyildirim/hacknbreak2019
Add HacknBreak 2019 workshops to website
2019-08-12 14:25:05 +02:00
gurayyildirim
1abfac419b Fix date format 2019-08-12 15:21:53 +03:00
Güray Yıldırım
edd2f749c0 Add HacknBreak 2019 workshops to website 2019-08-12 15:16:11 +03:00
Jerome Petazzoni
2365b8f460 Add web server to make it easier to generate cards from CNC node 2019-08-08 07:37:05 -05:00
Jerome Petazzoni
c7a504dcb4 Replace 'iff' with something more understandable 2019-08-07 07:50:11 -05:00
Jérôme Petazzoni
ffb15c8316 Merge pull request #517 from antweiss/master
Fixing some typos
2019-08-07 14:46:29 +02:00
Jerome Petazzoni
f7fbe1b056 Add example blog post about Operator Framework 2019-08-07 05:25:49 -05:00
Jérôme Petazzoni
4be1b40586 Merge pull request #518 from antweiss/new-flux-github
Update Flux github url
2019-07-31 15:18:32 +02:00
Anton Weiss
91fb2f167c Update Flux github url 2019-07-28 16:27:53 +03:00
Anton Weiss
02dcb58f77 Fix typo in consul startup command 2019-07-28 16:05:48 +03:00
Anton Weiss
3a816568da Fix 2 typos in k8s/operators.md and k8s/operators-design.md 2019-07-28 14:21:20 +03:00
97 changed files with 2844 additions and 1791 deletions

.gitignore (vendored, 1 line changed)
View File

@@ -3,6 +3,7 @@
*~
prepare-vms/tags
prepare-vms/infra
prepare-vms/www
slides/*.yml.html
slides/autopilot/state.yaml
slides/index.html

View File

@@ -72,7 +72,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: consul
image: "consul:1.4.4"
image: "consul:1.5"
args:
- "agent"
- "-bootstrap-expect=3"

k8s/dockercoins.yaml (new file, 160 lines)
View File

@@ -0,0 +1,160 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hasher
name: hasher
spec:
replicas: 1
selector:
matchLabels:
app: hasher
template:
metadata:
labels:
app: hasher
spec:
containers:
- image: dockercoins/hasher:v0.1
name: hasher
---
apiVersion: v1
kind: Service
metadata:
labels:
app: hasher
name: hasher
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: hasher
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: redis
name: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis
name: redis
---
apiVersion: v1
kind: Service
metadata:
labels:
app: redis
name: redis
spec:
ports:
- port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rng
name: rng
spec:
replicas: 1
selector:
matchLabels:
app: rng
template:
metadata:
labels:
app: rng
spec:
containers:
- image: dockercoins/rng:v0.1
name: rng
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rng
name: rng
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rng
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: webui
name: webui
spec:
replicas: 1
selector:
matchLabels:
app: webui
template:
metadata:
labels:
app: webui
spec:
containers:
- image: dockercoins/webui:v0.1
name: webui
---
apiVersion: v1
kind: Service
metadata:
labels:
app: webui
name: webui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: webui
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: worker
name: worker
spec:
replicas: 1
selector:
matchLabels:
app: worker
template:
metadata:
labels:
app: worker
spec:
containers:
- image: dockercoins/worker:v0.1
name: worker

View File

@@ -32,13 +32,16 @@ subjects:
name: fluentd
namespace: default
---
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
@@ -51,7 +54,7 @@ spec:
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.3-debian-elasticsearch-1
image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch-1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "elasticsearch"
@@ -86,7 +89,7 @@ spec:
hostPath:
path: /var/lib/docker/containers
---
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
@@ -128,7 +131,7 @@ spec:
app: elasticsearch
type: ClusterIP
---
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
labels:

View File

@@ -9,7 +9,7 @@ spec:
name: haproxy
containers:
- name: haproxy
image: haproxy
image: haproxy:1
volumeMounts:
- name: config
mountPath: /usr/local/etc/haproxy/

View File

@@ -1,14 +1,13 @@
apiVersion: extensions/v1beta1
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: cheddar
name: whatever
spec:
rules:
- host: cheddar.A.B.C.D.nip.io
- host: whatever.A.B.C.D.nip.io
http:
paths:
- path: /
backend:
serviceName: cheddar
servicePort: 80
serviceName: whatever
servicePort: 1234

View File

@@ -12,11 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
@@ -95,7 +90,7 @@ subjects:
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
@@ -114,12 +109,13 @@ spec:
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --enable-skip-login
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
@@ -166,7 +162,7 @@ spec:
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
labels:

View File

@@ -1,5 +1,5 @@
apiVersion: v1
Kind: Pod
kind: Pod
metadata:
name: hello
namespace: default

View File

@@ -12,11 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
@@ -95,7 +90,7 @@ subjects:
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
@@ -114,7 +109,7 @@ spec:
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
ports:
- containerPort: 8443
protocol: TCP

View File

@@ -45,7 +45,7 @@ subjects:
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: apps/v1beta2
apiVersion: apps/v1
kind: Deployment
metadata:
name: local-path-provisioner

View File

@@ -58,7 +58,7 @@ metadata:
name: metrics-server
namespace: kube-system
---
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
@@ -82,7 +82,7 @@ spec:
emptyDir: {}
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.1
image: k8s.gcr.io/metrics-server-amd64:v0.3.3
imagePullPolicy: Always
volumeMounts:
- name: tmp-dir

View File

@@ -0,0 +1,8 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-without-volume
spec:
containers:
- name: nginx
image: nginx

View File

@@ -0,0 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-with-volume
spec:
volumes:
- name: www
containers:
- name: nginx
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html/

View File

@@ -1,7 +1,7 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-with-volume
name: nginx-with-git
spec:
volumes:
- name: www

View File

@@ -0,0 +1,20 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-with-init
spec:
volumes:
- name: www
containers:
- name: nginx
image: nginx
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html/
initContainers:
- name: git
image: alpine
command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
volumeMounts:
- name: www
mountPath: /www/

View File

@@ -74,7 +74,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: consul
image: "consul:1.4.4"
image: "consul:1.5"
volumeMounts:
- name: data
mountPath: /consul/data

View File

@@ -1,4 +1,339 @@
# SOURCE: https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true
# SOURCE: https://install.portworx.com/?kbver=1.15.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true&st=k8s&mc=false
---
kind: Service
apiVersion: v1
metadata:
name: portworx-service
namespace: kube-system
labels:
name: portworx
spec:
selector:
name: portworx
type: NodePort
ports:
- name: px-api
protocol: TCP
port: 9001
targetPort: 9001
- name: px-kvdb
protocol: TCP
port: 9019
targetPort: 9019
- name: px-sdk
protocol: TCP
port: 9020
targetPort: 9020
- name: px-rest-gateway
protocol: TCP
port: 9021
targetPort: 9021
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: volumeplacementstrategies.portworx.io
spec:
group: portworx.io
versions:
- name: v1beta2
served: true
storage: true
- name: v1beta1
served: false
storage: false
scope: Cluster
names:
plural: volumeplacementstrategies
singular: volumeplacementstrategy
kind: VolumePlacementStrategy
shortNames:
- vps
- vp
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-get-put-list-role
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["watch", "get", "update", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "update", "create"]
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
resourceNames: ["privileged"]
verbs: ["use"]
- apiGroups: ["portworx.io"]
resources: ["volumeplacementstrategies"]
verbs: ["get", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-role-binding
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: node-get-put-list-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Namespace
metadata:
name: portworx
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role
namespace: portworx
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role-binding
namespace: portworx
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: Role
name: px-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: portworx
namespace: kube-system
annotations:
portworx.com/install-source: "https://install.portworx.com/?kbver=1.15.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true&st=k8s&mc=false"
spec:
minReadySeconds: 0
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
name: portworx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: px/enabled
operator: NotIn
values:
- "false"
- key: node-role.kubernetes.io/master
operator: DoesNotExist
hostNetwork: true
hostPID: false
initContainers:
- name: checkloop
image: alpine
command: [ "sh", "-c" ]
args:
- |
if ! grep -q loop4 /proc/partitions; then
echo 'Could not find "loop4" in /proc/partitions. Please create it first.'
exit 1
fi
containers:
- name: portworx
image: portworx/oci-monitor:2.1.3
imagePullPolicy: Always
args:
["-c", "px-workshop", "-s", "/dev/loop4", "-secret_type", "k8s", "-b",
"-x", "kubernetes"]
env:
- name: "AUTO_NODE_RECOVERY_TIMEOUT_IN_SECS"
value: "1500"
- name: "PX_TEMPLATE_VERSION"
value: "v4"
livenessProbe:
periodSeconds: 30
initialDelaySeconds: 840 # allow image pull in slow networks
httpGet:
host: 127.0.0.1
path: /status
port: 9001
readinessProbe:
periodSeconds: 10
httpGet:
host: 127.0.0.1
path: /health
port: 9015
terminationMessagePath: "/tmp/px-termination-log"
securityContext:
privileged: true
volumeMounts:
- name: diagsdump
mountPath: /var/cores
- name: dockersock
mountPath: /var/run/docker.sock
- name: containerdsock
mountPath: /run/containerd
- name: criosock
mountPath: /var/run/crio
- name: crioconf
mountPath: /etc/crictl.yaml
- name: etcpwx
mountPath: /etc/pwx
- name: optpwx
mountPath: /opt/pwx
- name: procmount
mountPath: /host_proc
- name: sysdmount
mountPath: /etc/systemd/system
- name: journalmount1
mountPath: /var/run/log
readOnly: true
- name: journalmount2
mountPath: /var/log
readOnly: true
- name: dbusmount
mountPath: /var/run/dbus
restartPolicy: Always
serviceAccountName: px-account
volumes:
- name: diagsdump
hostPath:
path: /var/cores
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: containerdsock
hostPath:
path: /run/containerd
- name: criosock
hostPath:
path: /var/run/crio
- name: crioconf
hostPath:
path: /etc/crictl.yaml
type: FileOrCreate
- name: etcpwx
hostPath:
path: /etc/pwx
- name: optpwx
hostPath:
path: /opt/pwx
- name: procmount
hostPath:
path: /proc
- name: sysdmount
hostPath:
path: /etc/systemd/system
- name: journalmount1
hostPath:
path: /var/run/log
- name: journalmount2
hostPath:
path: /var/log
- name: dbusmount
hostPath:
path: /var/run/dbus
---
kind: Service
apiVersion: v1
metadata:
name: portworx-api
namespace: kube-system
labels:
name: portworx-api
spec:
selector:
name: portworx-api
type: NodePort
ports:
- name: px-api
protocol: TCP
port: 9001
targetPort: 9001
- name: px-sdk
protocol: TCP
port: 9020
targetPort: 9020
- name: px-rest-gateway
protocol: TCP
port: 9021
targetPort: 9021
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: portworx-api
namespace: kube-system
spec:
minReadySeconds: 0
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 100%
template:
metadata:
labels:
name: portworx-api
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: px/enabled
operator: NotIn
values:
- "false"
- key: node-role.kubernetes.io/master
operator: DoesNotExist
hostNetwork: true
hostPID: false
containers:
- name: portworx-api
image: k8s.gcr.io/pause:3.1
imagePullPolicy: Always
readinessProbe:
periodSeconds: 10
httpGet:
host: 127.0.0.1
path: /status
port: 9001
restartPolicy: Always
serviceAccountName: px-account
---
apiVersion: v1
kind: ConfigMap
metadata:
@@ -11,7 +346,7 @@ data:
"apiVersion": "v1",
"extenders": [
{
"urlPrefix": "http://stork-service.kube-system.svc:8099",
"urlPrefix": "http://stork-service.kube-system:8099",
"apiVersion": "v1beta1",
"filterVerb": "filter",
"prioritizeVerb": "prioritize",
@@ -34,8 +369,8 @@ metadata:
name: stork-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "delete"]
resources: ["pods", "pods/exec"]
verbs: ["get", "list", "delete", "create", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
@@ -48,14 +383,14 @@ rules:
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["stork.libopenstorage.org"]
resources: ["*"]
verbs: ["get", "list", "watch", "update", "patch", "create", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["create", "list", "watch", "delete"]
verbs: ["create", "get"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshotdatas"]
resources: ["volumesnapshots", "volumesnapshotdatas"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["configmaps"]
@@ -72,6 +407,9 @@ rules:
- apiGroups: ["*"]
resources: ["statefulsets", "statefulsets/extensions"]
verbs: ["list", "get", "watch", "patch", "update", "initialize"]
- apiGroups: ["*"]
resources: ["*"]
verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
@@ -131,7 +469,10 @@ spec:
- --leader-elect=true
- --health-monitor-interval=120
imagePullPolicy: Always
image: openstorage/stork:1.1.3
image: openstorage/stork:2.2.4
env:
- name: "PX_SERVICE_NAME"
value: "portworx-api"
resources:
requests:
cpu: '0.1'
@@ -168,16 +509,13 @@ metadata:
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "update"]
verbs: ["get", "create", "update"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch", "update"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create"]
- apiGroups: [""]
resourceNames: ["kube-scheduler"]
resources: ["endpoints"]
@@ -197,7 +535,7 @@ rules:
- apiGroups: [""]
resources: ["replicationcontrollers", "services"]
verbs: ["get", "list", "watch"]
- apiGroups: ["app", "extensions"]
- apiGroups: ["apps", "extensions"]
resources: ["replicasets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
@@ -253,7 +591,7 @@ spec:
- --policy-configmap=stork-config
- --policy-configmap-namespace=kube-system
- --lock-object-name=stork-scheduler
image: gcr.io/google_containers/kube-scheduler-amd64:v1.11.2
image: gcr.io/google_containers/kube-scheduler-amd64:v1.15.2
livenessProbe:
httpGet:
path: /healthz
@@ -280,229 +618,61 @@ spec:
hostPID: false
serviceAccountName: stork-scheduler-account
---
kind: Service
apiVersion: v1
metadata:
name: portworx-service
namespace: kube-system
labels:
name: portworx
spec:
selector:
name: portworx
ports:
- name: px-api
protocol: TCP
port: 9001
targetPort: 9001
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-account
name: px-lh-account
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-get-put-list-role
name: px-lh-role
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["watch", "get", "update", "list"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete", "get", "list"]
- apiGroups: [""]
resources: ["persistentvolumeclaims", "persistentvolumes"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "update", "create"]
- apiGroups: ["extensions"]
resources: ["podsecuritypolicies"]
resourceNames: ["privileged"]
verbs: ["use"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["list", "get"]
- apiGroups:
- extensions
- apps
resources:
- deployments
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "update"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["services"]
verbs: ["create", "get", "list", "watch"]
- apiGroups: ["stork.libopenstorage.org"]
resources: ["clusterpairs","migrations","groupvolumesnapshots"]
verbs: ["get", "list", "create", "update", "delete"]
- apiGroups: ["monitoring.coreos.com"]
resources:
- alertmanagers
- prometheuses
- prometheuses/finalizers
- servicemonitors
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: node-role-binding
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: ClusterRole
name: node-get-put-list-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Namespace
metadata:
name: portworx
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role
namespace: portworx
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-role-binding
namespace: portworx
subjects:
- kind: ServiceAccount
name: px-account
namespace: kube-system
roleRef:
kind: Role
name: px-role
apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: portworx
namespace: kube-system
annotations:
portworx.com/install-source: "https://install.portworx.com/?kbver=1.11.2&b=true&s=/dev/loop4&c=px-workshop&stork=true&lh=true"
spec:
minReadySeconds: 0
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
name: portworx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: px/enabled
operator: NotIn
values:
- "false"
- key: node-role.kubernetes.io/master
operator: DoesNotExist
hostNetwork: true
hostPID: false
containers:
- name: portworx
image: portworx/oci-monitor:1.4.2.2
imagePullPolicy: Always
args:
["-c", "px-workshop", "-s", "/dev/loop4", "-b",
"-x", "kubernetes"]
env:
- name: "PX_TEMPLATE_VERSION"
value: "v4"
livenessProbe:
periodSeconds: 30
initialDelaySeconds: 840 # allow image pull in slow networks
httpGet:
host: 127.0.0.1
path: /status
port: 9001
readinessProbe:
periodSeconds: 10
httpGet:
host: 127.0.0.1
path: /health
port: 9015
terminationMessagePath: "/tmp/px-termination-log"
securityContext:
privileged: true
volumeMounts:
- name: dockersock
mountPath: /var/run/docker.sock
- name: etcpwx
mountPath: /etc/pwx
- name: optpwx
mountPath: /opt/pwx
- name: proc1nsmount
mountPath: /host_proc/1/ns
- name: sysdmount
mountPath: /etc/systemd/system
- name: diagsdump
mountPath: /var/cores
- name: journalmount1
mountPath: /var/run/log
readOnly: true
- name: journalmount2
mountPath: /var/log
readOnly: true
- name: dbusmount
mountPath: /var/run/dbus
restartPolicy: Always
serviceAccountName: px-account
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: etcpwx
hostPath:
path: /etc/pwx
- name: optpwx
hostPath:
path: /opt/pwx
- name: proc1nsmount
hostPath:
path: /proc/1/ns
- name: sysdmount
hostPath:
path: /etc/systemd/system
- name: diagsdump
hostPath:
path: /var/cores
- name: journalmount1
hostPath:
path: /var/run/log
- name: journalmount2
hostPath:
path: /var/log
- name: dbusmount
hostPath:
path: /var/run/dbus
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: px-lh-account
namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: px-lh-role-binding
namespace: kube-system
subjects:
- kind: ServiceAccount
name: px-lh-account
namespace: kube-system
- kind: ServiceAccount
name: px-lh-account
namespace: kube-system
roleRef:
kind: Role
kind: ClusterRole
name: px-lh-role
apiGroup: rbac.authorization.k8s.io
---
@@ -518,14 +688,12 @@ spec:
ports:
- name: http
port: 80
nodePort: 32678
- name: https
port: 443
nodePort: 32679
selector:
tier: px-web-console
---
apiVersion: apps/v1beta2
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: px-lighthouse
@@ -549,7 +717,7 @@ spec:
spec:
initContainers:
- name: config-init
image: portworx/lh-config-sync:0.2
image: portworx/lh-config-sync:0.4
imagePullPolicy: Always
args:
- "init"
@@ -558,8 +726,9 @@ spec:
mountPath: /config/lh
containers:
- name: px-lighthouse
image: portworx/px-lighthouse:1.5.0
image: portworx/px-lighthouse:2.0.4
imagePullPolicy: Always
args: [ "-kubernetes", "true" ]
ports:
- containerPort: 80
- containerPort: 443
@@ -567,14 +736,30 @@ spec:
- name: config
mountPath: /config/lh
- name: config-sync
image: portworx/lh-config-sync:0.2
image: portworx/lh-config-sync:0.4
imagePullPolicy: Always
args:
- "sync"
volumeMounts:
- name: config
mountPath: /config/lh
- name: stork-connector
image: portworx/lh-stork-connector:0.2
imagePullPolicy: Always
serviceAccountName: px-lh-account
volumes:
- name: config
emptyDir: {}
---
# That one is an extra.
# Create a default Storage Class to simplify Portworx setup.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-replicated
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "2"
priority_io: "high"

View File

@@ -12,10 +12,17 @@ spec:
labels:
app: postgres
spec:
schedulerName: stork
#schedulerName: stork
initContainers:
- name: rmdir
image: alpine
volumeMounts:
- mountPath: /vol
name: postgres
command: ["sh", "-c", "if [ -d /vol/lost+found ]; then rmdir /vol/lost+found; fi"]
containers:
- name: postgres
image: postgres:10.5
image: postgres:11
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres

View File

@@ -6,13 +6,16 @@ metadata:
namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
@@ -26,7 +29,7 @@ spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
- image: traefik:1.7
name: traefik-ingress-lb
ports:
- name: http

View File

@@ -7,9 +7,9 @@ workshop.
## 1. Prerequisites
Virtualbox, Vagrant and Ansible
- Virtualbox: https://www.virtualbox.org/wiki/Downloads
- Vagrant: https://www.vagrantup.com/downloads.html
@@ -25,7 +25,7 @@ Virtualbox, Vagrant and Ansible
$ git clone --recursive https://github.com/ansible/ansible.git
$ cd ansible
$ git checkout stable-2.0.0.1
$ git checkout stable-{{ getStableVersionFromAnsibleProject }}
$ git submodule update
- source the setup script to make Ansible available on this terminal session:
@@ -38,6 +38,7 @@ Virtualbox, Vagrant and Ansible
## 2. Preparing the environment
Change into directory that has your Vagrantfile
Run the following commands:
@@ -66,6 +67,14 @@ will reflect inside the instance.
- Depending on the Vagrant version, `sudo apt-get install bsdtar` may be needed
- If you get an error like "no Vagrant file found" or you have a file but "cannot open base box" when running `vagrant up`,
chances are good you are not in the correct directory.
Make sure you are in the subdirectory named "prepare-local". It has all the config files required by Ansible, Vagrant, and VirtualBox.
- If you are using Python 3.7, run the ansible-playbook provisioning, and see an error like "SyntaxError: invalid syntax" that mentions
the word "async", you need to upgrade your Ansible version to 2.6 or higher to resolve the keyword conflict.
https://github.com/ansible/ansible/issues/42105
- If you get strange Ansible errors about dependencies, try to check your pip
version with `pip --version`. The current version is 8.1.1. If your pip is
older than this, upgrade it with `sudo pip install --upgrade pip`, restart

View File

@@ -10,15 +10,21 @@ These tools can help you to create VMs on:
- [Docker](https://docs.docker.com/engine/installation/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`) - the configuration scripts require this
- [Parallel SSH](https://code.google.com/archive/p/parallel-ssh/) (on a Mac: `brew install pssh`)
Depending on the infrastructure that you want to use, you also need to install
the Azure CLI, the AWS CLI, or terraform (for OpenStack deployment).
And if you want to generate printable cards:
- [pyyaml](https://pypi.python.org/pypi/PyYAML) (on a Mac: `brew install pyyaml`)
- [jinja2](https://pypi.python.org/pypi/Jinja2) (on a Mac: `brew install jinja2`)
- [pyyaml](https://pypi.python.org/pypi/PyYAML)
- [jinja2](https://pypi.python.org/pypi/Jinja2)
You can install them with pip (perhaps with `pip install --user`, or even use `virtualenv` if that's your thing).
These require Python 3. If you are on a Mac, see below for specific instructions on making
Python 3 the default Python. In particular, if you installed `mosh`, Homebrew
may have changed your default Python to Python 2.
## General Workflow
@@ -87,26 +93,37 @@ You're all set!
```
workshopctl - the orchestration workshop swiss army knife
Commands:
ami Show the AMI that will be used for deployment
amis List Ubuntu AMIs in the current region
build Build the Docker image to run this program in a container
cards Generate ready-to-print cards for a group of VMs
deploy Install Docker on a bunch of running VMs
ec2quotas Check our EC2 quotas (max instances)
help Show available commands
ids List the instance IDs belonging to a given tag or token
ips List the IP addresses of the VMs for a given tag or token
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all notes are reporting as Ready
list List available groups in the current region
opensg Open the default security group to ALL ingress traffic
pull_images Pre-pull a bunch of Docker images
retag Apply a new tag to a group of VMs
start Start a group of VMs
status List instance status for a given group
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
test Run tests (pre-flight checks) on a group of VMs
wrap Run this program in a container
build Build the Docker image to run this program in a container
cards Generate ready-to-print cards for a group of VMs
deploy Install Docker on a bunch of running VMs
disableaddrchecks Disable source/destination IP address checks
disabledocker Stop Docker Engine and don't restart it automatically
helmprom Install Helm and Prometheus
help Show available commands
ids (FIXME) List the instance IDs belonging to a given tag or token
kubebins Install Kubernetes and CNI binaries but don't start anything
kubereset Wipe out Kubernetes configuration on all nodes
kube Setup kubernetes clusters with kubeadm (must be run AFTER deploy)
kubetest Check that all nodes are reporting as Ready
listall List VMs running on all configured infrastructures
list List available groups for a given infrastructure
netfix Disable GRO and run a pinger job on the VMs
opensg Open the default security group to ALL ingress traffic
ping Ping VMs in a given tag, to check that they have network access
pssh Run an arbitrary command on all nodes
pull_images Pre-pull a bunch of Docker images
quotas Check our infrastructure quotas (max instances)
remap_nodeports Remap NodePort range to 10000-10999
retag (FIXME) Apply a new tag to a group of VMs
ssh Open an SSH session to the first node of a tag
start Start a group of VMs
stop Stop (terminate, shutdown, kill, remove, destroy...) instances
tags List groups of VMs known locally
test Run tests (pre-flight checks) on a group of VMs
weavetest Check that weave seems properly setup
webssh Install a WEB SSH server on the machines (port 1080)
wrap Run this program in a container
www Run a web server to access card HTML and PDF
```
### Summary of What `./workshopctl` Does For You
@@ -245,3 +262,32 @@ If you don't have `wkhtmltopdf` installed, you will get a warning that it is a m
- Don't write to bash history in system() in postprep
- compose, etc version inconsistent (int vs str)
## Making sure Python3 is the default (Mac only)
Check the `/usr/local/bin/python` symlink. It should be pointing to
`/usr/local/Cellar/python/3`-something. If it isn't, follow these
instructions.
1) Verify that Python 3 is installed.
```
ls -la /usr/local/Cellar/Python
```
You should see one or more versions of Python 3. If you don't,
install it with `brew install python`.
2) Verify that `python` points to Python3.
```
ls -la /usr/local/bin/python
```
If this points to `/usr/local/Cellar/python@2`, then we'll need to change it.
```
rm /usr/local/bin/python
ln -s /usr/local/Cellar/Python/xxxx /usr/local/bin/python
# where xxxx is the most recent Python 3 version you saw above
```

View File

@@ -33,9 +33,14 @@ _cmd_cards() {
../../lib/ips-txt-to-html.py settings.yaml
)
ln -sf ../tags/$TAG/ips.html www/$TAG.html
ln -sf ../tags/$TAG/ips.pdf www/$TAG.pdf
info "Cards created. You can view them with:"
info "xdg-open tags/$TAG/ips.html tags/$TAG/ips.pdf (on Linux)"
info "open tags/$TAG/ips.html (on macOS)"
info "Or you can start a web server with:"
info "$0 www"
}
_cmd deploy "Install Docker on a bunch of running VMs"
@@ -122,11 +127,11 @@ _cmd_kubebins() {
set -e
cd /usr/local/bin
if ! [ -x etcd ]; then
curl -L https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz \
curl -L https://github.com/etcd-io/etcd/releases/download/v3.3.15/etcd-v3.3.15-linux-amd64.tar.gz \
| sudo tar --strip-components=1 --wildcards -zx '*/etcd' '*/etcdctl'
fi
if ! [ -x hyperkube ]; then
curl -L https://dl.k8s.io/v1.14.1/kubernetes-server-linux-amd64.tar.gz \
curl -L https://dl.k8s.io/v1.16.2/kubernetes-server-linux-amd64.tar.gz \
| sudo tar --strip-components=3 -zx kubernetes/server/bin/hyperkube
fi
if ! [ -x kubelet ]; then
@@ -138,7 +143,7 @@ _cmd_kubebins() {
sudo mkdir -p /opt/cni/bin
cd /opt/cni/bin
if ! [ -x bridge ]; then
curl -L https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz \
curl -L https://github.com/containernetworking/plugins/releases/download/v0.7.6/cni-plugins-amd64-v0.7.6.tgz \
| sudo tar -zx
fi
"
@@ -152,10 +157,10 @@ _cmd_kube() {
# Optional version, e.g. 1.13.5
KUBEVERSION=$2
if [ "$KUBEVERSION" ]; then
EXTRA_KUBELET="=$KUBEVERSION-00"
EXTRA_APTGET="=$KUBEVERSION-00"
EXTRA_KUBEADM="--kubernetes-version=v$KUBEVERSION"
else
EXTRA_KUBELET=""
EXTRA_APTGET=""
EXTRA_KUBEADM=""
fi
@@ -167,7 +172,7 @@ _cmd_kube() {
sudo tee /etc/apt/sources.list.d/kubernetes.list"
pssh --timeout 200 "
sudo apt-get update -q &&
sudo apt-get install -qy kubelet$EXTRA_KUBELET kubeadm kubectl &&
sudo apt-get install -qy kubelet$EXTRA_APTGET kubeadm$EXTRA_APTGET kubectl$EXTRA_APTGET &&
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl"
# Initialize kube master
@@ -357,6 +362,16 @@ _cmd_opensg() {
infra_opensg
}
_cmd portworx "Prepare the nodes for Portworx deployment"
_cmd_portworx() {
TAG=$1
need_tag
pssh "
sudo truncate --size 10G /portworx.blk &&
sudo losetup /dev/loop4 /portworx.blk"
}
_cmd disableaddrchecks "Disable source/destination IP address checks"
_cmd_disableaddrchecks() {
TAG=$1
@@ -381,6 +396,20 @@ _cmd_pull_images() {
pull_tag
}
_cmd remap_nodeports "Remap NodePort range to 10000-10999"
_cmd_remap_nodeports() {
TAG=$1
need_tag
FIND_LINE=" - --service-cluster-ip-range=10.96.0.0\/12"
ADD_LINE=" - --service-node-port-range=10000-10999"
MANIFEST_FILE=/etc/kubernetes/manifests/kube-apiserver.yaml
pssh "
if i_am_first_node && ! grep -q '$ADD_LINE' $MANIFEST_FILE; then
sudo sed -i 's/\($FIND_LINE\)\$/\1\n$ADD_LINE/' $MANIFEST_FILE
fi"
}
_cmd quotas "Check our infrastructure quotas (max instances)"
_cmd_quotas() {
need_infra $1
@@ -568,6 +597,18 @@ EOF"
sudo systemctl start webssh.service"
}
_cmd www "Run a web server to access card HTML and PDF"
_cmd_www() {
cd www
IPADDR=$(curl -sL canihazip.com/s)
info "The following files are available:"
for F in *; do
echo "http://$IPADDR:8000/$F"
done
info "Press Ctrl-C to stop server."
python3 -m http.server
}
greet() {
IAMUSER=$(aws iam get-user --query 'User.UserName')
info "Hello! You seem to be UNIX user $USER, and IAM user $IAMUSER."

View File

@@ -4,17 +4,12 @@ import sys
import yaml
import jinja2
def prettify(l):
l = [ip.strip() for ip in l]
ret = [ "node{}: <code>{}</code>".format(i+1, s) for (i, s) in zip(range(len(l)), l) ]
return ret
# Read settings from user-provided settings file
SETTINGS = yaml.load(open(sys.argv[1]))
clustersize = SETTINGS["clustersize"]
context = yaml.safe_load(open(sys.argv[1]))
ips = list(open("ips.txt"))
clustersize = context["clustersize"]
print("---------------------------------------------")
print(" Number of IPs: {}".format(len(ips)))
@@ -30,7 +25,9 @@ while ips:
ips = ips[clustersize:]
clusters.append(cluster)
template_file_name = SETTINGS["cards_template"]
context["clusters"] = clusters
template_file_name = context["cards_template"]
template_file_path = os.path.join(
os.path.dirname(__file__),
"..",
@@ -39,18 +36,19 @@ template_file_path = os.path.join(
)
template = jinja2.Template(open(template_file_path).read())
with open("ips.html", "w") as f:
f.write(template.render(clusters=clusters, **SETTINGS))
f.write(template.render(**context))
print("Generated ips.html")
try:
import pdfkit
with open("ips.html") as f:
pdfkit.from_file(f, "ips.pdf", options={
"page-size": SETTINGS["paper_size"],
"margin-top": SETTINGS["paper_margin"],
"margin-bottom": SETTINGS["paper_margin"],
"margin-left": SETTINGS["paper_margin"],
"margin-right": SETTINGS["paper_margin"],
"page-size": context["paper_size"],
"margin-top": context["paper_margin"],
"margin-bottom": context["paper_margin"],
"margin-left": context["paper_margin"],
"margin-right": context["paper_margin"],
})
print("Generated ips.pdf")
except ImportError:

View File

@@ -73,8 +73,29 @@ set expandtab
set number
set shiftwidth=2
set softtabstop=2
set nowrap
SQRL""")
# Custom .tmux.conf
system(
"""sudo -u docker tee /home/docker/.tmux.conf <<SQRL
bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R
# Allow using mouse to switch panes
set -g mouse on
# Make scrolling with wheels work
bind -n WheelUpPane if-shell -F -t = "#{mouse_any_flag}" "send-keys -M" "if -Ft= '#{pane_in_mode}' 'send-keys -M' 'select-pane -t=; copy-mode -e; send-keys -M'"
bind -n WheelDownPane select-pane -t= \; send-keys -M
SQRL"""
)
# add docker user to sudoers and allow password authentication
system("""sudo tee /etc/sudoers.d/docker <<SQRL
docker ALL=(ALL) NOPASSWD:ALL
@@ -85,6 +106,7 @@ system("sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /e
system("sudo service ssh restart")
system("sudo apt-get -q update")
system("sudo apt-get -qy install git jq")
system("sudo apt-get -qy install emacs-nox joe")
#######################
### DOCKER INSTALLS ###

View File

@@ -26,3 +26,5 @@ machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training
image:

View File

@@ -26,3 +26,6 @@ machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training
clusternumber: 100
image:

View File

@@ -26,3 +26,6 @@ machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training
clusternumber: 200
image:

View File

@@ -26,3 +26,5 @@ machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training
image:

View File

@@ -26,4 +26,3 @@ machine_version: 0.14.0
# Password used to connect with the "docker user"
docker_user_password: training

View File

@@ -61,6 +61,6 @@ TAG=$PREFIX-$SETTINGS
--count $((3*$STUDENTS))
./workshopctl deploy $TAG
./workshopctl kube $TAG 1.13.5
./workshopctl kube $TAG 1.14.6
./workshopctl cards $TAG

View File

@@ -1,12 +1,23 @@
{# Feel free to customize or override anything in there! #}
{#
The variables below can be customized here directly, or in your
settings.yaml file. Any variable in settings.yaml will be exposed
in here as well.
#}
{%- set url = "http://FIXME.container.training/" -%}
{%- set pagesize = 9 -%}
{%- set lang = "en" -%}
{%- set event = "training session" -%}
{%- set backside = False -%}
{%- set image = "kube" -%}
{%- set clusternumber = 100 -%}
{%- set url = url
| default("http://FIXME.container.training/") -%}
{%- set pagesize = pagesize
| default(9) -%}
{%- set lang = lang
| default("en") -%}
{%- set event = event
| default("training session") -%}
{%- set backside = backside
| default(False) -%}
{%- set image = image
| default("kube") -%}
{%- set clusternumber = clusternumber
| default(None) -%}
{%- set image_src = {
"docker": "https://s3-us-west-2.amazonaws.com/www.breadware.com/integrations/docker.png",
@@ -161,7 +172,9 @@ img.kube {
<div>
<p>{{ intro }}</p>
<p>
{% if image_src %}
<img src="{{ image_src }}" />
{% endif %}
<table>
{% if clusternumber != None %}
<tr><td>cluster:</td></tr>
@@ -187,8 +200,10 @@ img.kube {
</p>
<p>
{% if url %}
{{ slides_are_at }}
<center>{{ url }}</center>
{% endif %}
</p>
</div>
{% if loop.index%pagesize==0 or loop.last %}

prepare-vms/www/README (new file, 4 lines)
View File

@@ -0,0 +1,4 @@
This directory will contain symlinks to HTML and PDF files for the cards
with the IP address, login, and password for the training environments.
The file "index.html" is empty on purpose: it prevents listing the files.

View File

View File

@@ -1,6 +1,6 @@
# Uncomment and/or edit one of the following lines if necessary.
#/ /kube-halfday.yml.html 200
#/ /kube-fullday.yml.html 200
/ /kube-fullday.yml.html 200!
#/ /kube-twodays.yml.html 200
# And this allows to do "git clone https://container.training".

View File

@@ -1,34 +0,0 @@
# Our sample application
No assignment
# Kubernetes concepts
Do we want some kind of multiple-choice quiz?
# First contact with kubectl
Start some pre-defined image and check its logs
(Do we want to make a custom "mystery image" that shows a message
and then sleeps forever?)
Start another one (to make sure they understand that they need
to specify a unique name each time)
Provide as many ways as you can to figure out on which node
these pods are running (even if you only have one node).
# Exposing containers
Start a container running the official tomcat image.
Expose it.
Connect to it.
# Shipping apps
(We need a few images for a demo app other than DockerCoins?)
Start the components of the app.
Expose what needs to be exposed.
Connect to the app and check that it works.

View File

@@ -1,105 +0,0 @@
## Assignment: get Kubernetes
- In order to do the other assignments, we need a Kubernetes cluster
- Here are some *free* options:
- Docker Desktop
- Minikube
- Online sandbox like Katacoda
- You can also get a managed cluster (but this costs some money)
---
## Recommendation 1: Docker Desktop
- If you are already using Docker Desktop, use it for Kubernetes
- If you are running MacOS, [install Docker Desktop](https://docs.docker.com/docker-for-mac/install/)
- you will need a post-2010 Mac
- you will need macOS Sierra 10.12 or later
- If you are running Windows 10, [install Docker Desktop](https://docs.docker.com/docker-for-windows/install/)
- you will need Windows 10 64 bits Pro, Enterprise, or Education
- virtualization needs to be enabled in your BIOS
- Then [enable Kubernetes](https://blog.docker.com/2018/07/kubernetes-is-now-available-in-docker-desktop-stable-channel/) if it's not already on
---
## Recommendation 2: Minikube
- In some scenarios, you can't use Docker Desktop:
- if you run Linux
- if you are running an unsupported version of Windows
- You might also want to install Minikube for other reasons
(there are more tutorials and instructions out there for Minikube)
- Minikube installation is a bit more complex
(depending on which hypervisor and OS you are using)
---
## Minikube installation details
- Minikube typically runs in a local virtual machine
- It supports multiple hypervisors:
- VirtualBox (Linux, Mac, Windows)
- HyperV (Windows)
- HyperKit, VMware (Mac)
- KVM (Linux)
- Check the [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) for details relevant to your setup
---
## Recommendation 3: learning platform
- Sometimes, you can't even install Minikube
(computer locked by IT policies; insufficient resources...)
- In that case, you can use a platform like:
- Katacoda
- Play-with-Kubernetes
---
## Recommendation 4: hosted cluster
- You can also get your own hosted cluster
- This will cost a little bit of money
(unless you have free hosting credits)
- Setup will vary depending on the provider, platform, etc.
---
class: assignment
- Make sure that you have a Kubernetes cluster
- You should be able to run `kubectl get nodes` and see a list of nodes
- These nodes should be in `Ready` state

View File

@@ -104,22 +104,6 @@ like Windows, macOS, Solaris, FreeBSD ...
---
## rkt
* Compares to `runc`.
* No daemon or API.
* Strong emphasis on security (through privilege separation).
* Networking has to be set up separately (e.g. through CNI plugins).
* Partial image management (pull, but no push).
(Image build is handled by separate tools.)
---
## CRI-O
* Designed to be used with Kubernetes as a simple, basic runtime.

View File

@@ -102,29 +102,44 @@ class: extra-details
---
## Docker Desktop for Mac and Docker Desktop for Windows
## Docker Desktop
* Special Docker Editions that integrate well with their respective host OS
* Special Docker edition available for Mac and Windows
* Provide user-friendly GUI to edit Docker configuration and settings
* Integrates well with the host OS:
* Leverage the host OS virtualization subsystem (e.g. the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS)
* installed like normal user applications on the host
* Installed like normal user applications on the host
* provides user-friendly GUI to edit Docker configuration and settings
* Under the hood, they both run a tiny VM (transparent to our daily use)
* Only support running one Docker VM at a time ...
* Access network resources like normal applications
<br/>(and therefore, play better with enterprise VPNs and firewalls)
* Support filesystem sharing through volumes (we'll talk about this later)
* They only support running one Docker VM at a time ...
<br/>
... but we can use `docker-machine`, the Docker Toolbox, VirtualBox, etc. to get a cluster.
---
class: extra-details
## Docker Desktop internals
* Leverages the host OS virtualization subsystem
(e.g. the [Hypervisor API](https://developer.apple.com/documentation/hypervisor) on macOS)
* Under the hood, runs a tiny VM
(transparent to our daily use)
* Accesses network resources like normal applications
(and therefore, plays better with enterprise VPNs and firewalls)
* Supports filesystem sharing through volumes
(we'll talk about this later)
---
## Running Docker on macOS and Windows
When you execute `docker version` from the terminal:

View File

@@ -5,6 +5,7 @@
speaker: jpetazzo
title: Deploying and scaling applications with Kubernetes
attend: https://conferences.oreilly.com/velocity/vl-eu/public/schedule/detail/79109
slides: https://velocity-2019-11.container.training/
- date: 2019-11-13
country: fr
@@ -15,6 +16,38 @@
lang: fr
attend: http://2019.devops-dday.com/Workshop.html
- date: 2019-10-30
country: us
city: Portland, OR
event: LISA
speaker: jpetazzo
title: Deep Dive into Kubernetes Internals for Builders and Operators
attend: https://www.usenix.org/conference/lisa19/presentation/petazzoni-tutorial
- date: [2019-10-22, 2019-10-24]
country: us
city: Charlotte, NC
event: Ardan Labs
speaker: jpetazzo
title: Kubernetes Training
attend: https://www.eventbrite.com/e/containers-docker-and-kubernetes-training-for-devs-and-ops-charlotte-nc-november-2019-tickets-73296659281
- date: 2019-10-22
country: us
city: Charlotte, NC
event: Ardan Labs
speaker: jpetazzo
title: Docker & Containers Training
attend: https://www.eventbrite.com/e/containers-docker-and-kubernetes-training-for-devs-and-ops-charlotte-nc-november-2019-tickets-73296659281
- date: 2019-10-22
country: de
city: Berlin
event: GOTO
speaker: bretfisher
title: Kubernetes or Swarm? Build Both, Deploy Apps, Learn The Differences
attend: https://gotober.com/2019/workshops/194
- date: [2019-09-24, 2019-09-25]
country: fr
city: Paris
@@ -23,6 +56,34 @@
title: Déployer ses applications avec Kubernetes (in French)
lang: fr
attend: https://enix.io/fr/services/formation/deployer-ses-applications-avec-kubernetes/
slides: https://kube-2019-09.container.training/
- date: 2019-08-27
country: tr
city: Izmir
event: HacknBreak
speaker: gurayyildirim
title: Deploying and scaling applications with Kubernetes (in Turkish)
lang: tr
attend: https://hacknbreak.com
- date: 2019-08-26
country: tr
city: Izmir
event: HacknBreak
speaker: gurayyildirim
title: Container Orchestration with Docker and Swarm (in Turkish)
lang: tr
attend: https://hacknbreak.com
- date: 2019-08-25
country: tr
city: Izmir
event: HackBreak
speaker: gurayyildirim
title: Introduction to Docker and Containers (in Turkish)
lang: tr
attend: https://hacknbreak.com
- date: 2019-07-16
country: us

View File

@@ -1,63 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/toc.md
- - containers/Docker_Overview.md
#- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- - containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- containers/Resource_Limits.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- - containers/Docker_Machine.md
- containers/Advanced_Dockerfiles.md
- containers/Application_Configuration.md
- containers/Logging.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
- - containers/Container_Engines.md
#- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md

View File

@@ -1,63 +0,0 @@
title: |
Introduction
to Containers
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
chapters:
- shared/title.md
# - shared/logistics.md
- containers/intro.md
- shared/about-slides.md
- shared/toc.md
- - containers/Docker_Overview.md
- containers/Docker_History.md
- containers/Training_Environment.md
- containers/Installing_Docker.md
- containers/First_Containers.md
- containers/Background_Containers.md
- containers/Start_And_Attach.md
- - containers/Initial_Images.md
- containers/Building_Images_Interactively.md
- containers/Building_Images_With_Dockerfiles.md
- containers/Cmd_And_Entrypoint.md
- containers/Copying_Files_During_Build.md
- containers/Exercise_Dockerfile_Basic.md
- - containers/Multi_Stage_Builds.md
- containers/Publishing_To_Docker_Hub.md
- containers/Dockerfile_Tips.md
- containers/Exercise_Dockerfile_Advanced.md
- - containers/Naming_And_Inspecting.md
- containers/Labels.md
- containers/Getting_Inside.md
- - containers/Container_Networking_Basics.md
- containers/Network_Drivers.md
- containers/Container_Network_Model.md
#- containers/Connecting_Containers_With_Links.md
- containers/Ambassadors.md
- - containers/Local_Development_Workflow.md
- containers/Windows_Containers.md
- containers/Working_With_Volumes.md
- containers/Compose_For_Dev_Stacks.md
- containers/Exercise_Composefile.md
- containers/Docker_Machine.md
- - containers/Advanced_Dockerfiles.md
- containers/Application_Configuration.md
- containers/Logging.md
- containers/Resource_Limits.md
- - containers/Namespaces_Cgroups.md
- containers/Copy_On_Write.md
#- containers/Containers_From_Scratch.md
- - containers/Container_Engines.md
- containers/Ecosystem.md
- containers/Orchestration_Overview.md
- shared/thankyou.md
- containers/links.md

View File

@@ -667,17 +667,12 @@ class: extra-details
- For auditing purposes, sometimes we want to know who can perform an action
- There is a proof-of-concept tool by Aqua Security which does exactly that:
- There are a few tools to help us with that
https://github.com/aquasecurity/kubectl-who-can
- [kubectl-who-can](https://github.com/aquasecurity/kubectl-who-can) by Aqua Security
- This is one way to install it:
```bash
docker run --rm -v /usr/local/bin:/go/bin golang \
go get -v github.com/aquasecurity/kubectl-who-can
```
- [Review Access (aka Rakkess)](https://github.com/corneliusweig/rakkess)
- This is one way to use it:
```bash
kubectl-who-can create pods
```
- Both are available as standalone programs, or as plugins for `kubectl`
(`kubectl` plugins can be installed and managed with `krew`)
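- For instance, with `krew` installed, installation could look like this (assuming the plugins are published on the krew index under these names):
```bash
# Install the plugins
kubectl krew install who-can
kubectl krew install access-matrix

# Then use them as kubectl subcommands
kubectl who-can create pods
kubectl access-matrix
```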

View File

@@ -15,26 +15,3 @@
- `dockercoins/webui:v0.1`
- `dockercoins/worker:v0.1`
---
## Setting `$REGISTRY` and `$TAG`
- In the upcoming exercises and labs, we use a couple of environment variables:
- `$REGISTRY` as a prefix to all image names
- `$TAG` as the image version tag
- For example, the worker image is `$REGISTRY/worker:$TAG`
- If you copy-paste the commands in these exercises:
**make sure that you set `$REGISTRY` and `$TAG` first!**
- For example:
```
export REGISTRY=dockercoins TAG=v0.1
```
(this will expand `$REGISTRY/worker:$TAG` to `dockercoins/worker:v0.1`)

View File

@@ -10,6 +10,8 @@
- Components can be upgraded one at a time without problems
<!-- ##VERSION## -->
---
## Checking what we're running
@@ -166,7 +168,7 @@
- Upgrade kubelet:
```bash
apt install kubelet=1.14.2-00
sudo apt install kubelet=1.15.3-00
```
]
@@ -226,7 +228,7 @@
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
```
- Look for the `image:` line, and update it to e.g. `v1.14.0`
- Look for the `image:` line, and update it to e.g. `v1.15.0`
]
@@ -260,14 +262,52 @@
sudo kubeadm upgrade plan
```
(Note: kubeadm is confused by our manual upgrade of the API server.
<br/>It thinks the cluster is running 1.14.0!)
]
<!-- ##VERSION## -->
Note 1: kubeadm thinks that our cluster is running 1.15.0.
<br/>It is confused by our manual upgrade of the API server!
Note 2: kubeadm itself is still version 1.14.6.
<br/>It doesn't know how to upgrade to 1.15.X.
---
## Upgrading kubeadm
- First things first: we need to upgrade kubeadm
.exercise[
- Upgrade kubeadm:
```
sudo apt install kubeadm
```
- Check what kubeadm tells us:
```
sudo kubeadm upgrade plan
```
]
Note: kubeadm still thinks that our cluster is running 1.15.0.
<br/>But at least it knows about version 1.15.X now.
---
## Upgrading the cluster with kubeadm
- Ideally, we should revert our `image:` change
(so that kubeadm executes the right migration steps)
- Or we can try the upgrade anyway
.exercise[
- Perform the upgrade:
```bash
sudo kubeadm upgrade apply v1.14.2
sudo kubeadm upgrade apply v1.15.3
```
]
@@ -287,8 +327,8 @@
- Download the configuration on each node, and upgrade kubelet:
```bash
for N in 1 2 3; do
ssh test$N sudo kubeadm upgrade node config --kubelet-version v1.14.2
ssh test$N sudo apt install kubelet=1.14.2-00
ssh test$N sudo kubeadm upgrade node config --kubelet-version v1.15.3
ssh test$N sudo apt install kubelet=1.15.3-00
done
```
]
@@ -297,7 +337,7 @@
## Checking what we've done
- All our nodes should now be updated to version 1.14.2
- All our nodes should now be updated to version 1.15.3
.exercise[
@@ -307,3 +347,19 @@
```
]
---
class: extra-details
## Skipping versions
- This example worked because we went from 1.14 to 1.15
- If you are upgrading from e.g. 1.13, you will generally have to go through 1.14 first
- This means upgrading kubeadm to 1.14.X, then using it to upgrade the cluster
- Then upgrading kubeadm to 1.15.X, etc.
- **Make sure to read the release notes before upgrading!**
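- For example, going from 1.13 to 1.15 could look like this (a sketch; package versions are illustrative, and kubelet must be upgraded on every node after each step):
```bash
# Upgrade one minor version at a time
sudo apt install kubeadm=1.14.6-00
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.14.6

sudo apt install kubeadm=1.15.3-00
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.15.3
```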

View File

@@ -44,21 +44,37 @@
## Other things that Kubernetes can do for us
- Basic autoscaling
- Autoscaling
- Blue/green deployment, canary deployment
(straightforward on CPU; more complex on other metrics)
- Long running services, but also batch (one-off) jobs
- Resource management and scheduling
- Overcommit our cluster and *evict* low-priority jobs
(reserve CPU/RAM for containers; placement constraints)
- Run services with *stateful* data (databases etc.)
- Advanced rollout patterns
- Fine-grained access control defining *what* can be done by *whom* on *which* resources
(blue/green deployment, canary deployment)
- Integrating third party services (*service catalog*)
---
- Automating complex tasks (*operators*)
## More things that Kubernetes can do for us
- Batch jobs
(one-off; parallel; also cron-style periodic execution)
- Fine-grained access control
(defining *what* can be done by *whom* on *which* resources)
- Stateful services
(databases, message queues, etc.)
- Automating complex tasks with *operators*
(e.g. database replication, failover, etc.)
---
@@ -191,11 +207,29 @@ No!
- By default, Kubernetes uses the Docker Engine to run containers
- We could also use `rkt` ("Rocket") from CoreOS
- We can leverage other pluggable runtimes through the *Container Runtime Interface*
- Or leverage other pluggable runtimes through the *Container Runtime Interface*
- <del>We could also use `rkt` ("Rocket") from CoreOS</del> (deprecated)
(like CRI-O, or containerd)
---
class: extra-details
## Some runtimes available through CRI
- [containerd](https://github.com/containerd/containerd/blob/master/README.md)
- maintained by Docker, IBM, and community
- used by Docker Engine, microk8s, k3s, GKE; also standalone
- comes with its own CLI, `ctr`
- [CRI-O](https://github.com/cri-o/cri-o/blob/master/README.md):
- maintained by Red Hat, SUSE, and community
- used by OpenShift and Kubic
- designed specifically as a minimal runtime for Kubernetes
- [And more](https://kubernetes.io/docs/setup/production-environment/container-runtimes/)
---
@@ -265,6 +299,48 @@ class: pic
---
## Scaling
- How would we scale the pod shown on the previous slide?
- **Do** create additional pods
- each pod can be on a different node
- each pod will have its own IP address
- **Do not** add more NGINX containers in the pod
- all the NGINX containers would be on the same node
- they would all have the same IP address
<br/>(resulting in `Address already in use` errors)
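- In practice, we scale by adding pods, typically through a Deployment (a sketch; `web` is just an example Deployment name):
```bash
# Run 3 NGINX pods instead of 1
kubectl scale deployment web --replicas=3

# Each pod gets its own IP address, possibly on different nodes
kubectl get pods -l app=web -o wide
```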
---
## Together or separate
- Should we put e.g. a web application server and a cache together?
<br/>
("cache" being something like e.g. Memcached or Redis)
- Putting them **in the same pod** means:
- they have to be scaled together
- they can communicate very efficiently over `localhost`
- Putting them **in different pods** means:
- they can be scaled separately
- they must communicate over remote IP addresses
<br/>(incurring more latency, lower performance)
- Both scenarios can make sense, depending on our goals
---
## Credits
- The first diagram is courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha)

View File

@@ -193,7 +193,12 @@
- Best practice: set a memory limit, and pass it to the runtime
(see [this blog post](https://very-serio.us/2017/12/05/running-jvms-in-kubernetes/) for a detailed example)
- Note: recent versions of the JVM can do this automatically
(see [JDK-8146115](https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8146115))
and
[this blog post](https://very-serio.us/2017/12/05/running-jvms-in-kubernetes/)
for detailed examples)
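A minimal sketch of the idea (hypothetical image; assumes the app reads `JAVA_OPTS`; recent JVMs can also derive the heap size from the cgroup limit on their own):
```bash
kubectl apply -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: jvm-demo
spec:
  containers:
  - name: app
    image: myorg/my-java-app   # hypothetical image
    env:
    - name: JAVA_OPTS          # pass the limit to the JVM explicitly
      value: "-Xmx512m"
    resources:
      limits:
        memory: 1Gi            # enforced by the kubelet / container runtime
EOF
```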
---

View File

@@ -105,6 +105,22 @@ The dashboard will then ask you which authentication you want to use.
---
## Other dashboards
- [Kube Web View](https://codeberg.org/hjacobs/kube-web-view)
- read-only dashboard
- optimized for "troubleshooting and incident response"
- see [vision and goals](https://kube-web-view.readthedocs.io/en/latest/vision.html#vision) for details
- [Kube Ops View](https://github.com/hjacobs/kube-ops-view)
- "provides a common operational picture for multiple Kubernetes clusters"
---
# Security implications of `kubectl apply`
- When we do `kubectl apply -f <URL>`, we create arbitrary resources
@@ -156,4 +172,3 @@ The dashboard will then ask you which authentication you want to use.
- It introduces new failure modes
(for instance, if you try to apply YAML from a link that's no longer valid)

View File

@@ -481,13 +481,13 @@ docker run alpine echo hello world
.exercise[
- Create the file `kubeconfig.kubelet` with `kubectl`:
- Create the file `~/.kube/config` with `kubectl`:
```bash
kubectl --kubeconfig kubeconfig.kubelet config \
kubectl config \
set-cluster localhost --server http://localhost:8080
kubectl --kubeconfig kubeconfig.kubelet config \
kubectl config \
set-context localhost --cluster localhost
kubectl --kubeconfig kubeconfig.kubelet config \
kubectl config \
use-context localhost
```
@@ -495,19 +495,7 @@ docker run alpine echo hello world
---
## All Kubernetes clients can use `kubeconfig`
- The `kubeconfig.kubelet` file has the same format as e.g. `~/.kubeconfig`
- All Kubernetes clients can use a similar file
- The `kubectl config` commands can be used to manipulate these files
- This highlights that kubelet is a "normal" client of the API server
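- For instance, the same commands let us inspect the file that `kubectl` itself is using:
  ```bash
  # Show the merged configuration currently used by kubectl
  kubectl config view --minify

  # List the available contexts
  kubectl config get-contexts
  ```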
---
## Our `kubeconfig.kubelet` file
## Our `~/.kube/config` file
The file that we generated looks like the one below.
@@ -533,9 +521,9 @@ clusters:
.exercise[
- Start kubelet with that `kubeconfig.kubelet` file:
- Start kubelet with that kubeconfig file:
```bash
kubelet --kubeconfig kubeconfig.kubelet
kubelet --kubeconfig ~/.kube/config
```
]

209
slides/k8s/dryrun.md Normal file
View File

@@ -0,0 +1,209 @@
# Authoring YAML
- There are various ways to generate YAML with Kubernetes, e.g.:
- `kubectl run`
- `kubectl create deployment` (and a few other `kubectl create` variants)
- `kubectl expose`
- When and why do we need to write our own YAML?
- How do we write YAML from scratch?
---
## The limits of generated YAML
- Many advanced (and even not-so-advanced) features require writing YAML:
- pods with multiple containers
- resource limits
- healthchecks
- DaemonSets, StatefulSets
- and more!
- How do we access these features?
---
## We don't have to start from scratch
- Create a resource (e.g. Deployment)
- Dump its YAML with `kubectl get -o yaml ...`
- Edit the YAML
- Use `kubectl apply -f ...` with the YAML file to:
- update the resource (if it's the same kind)
- create a new resource (if it's a different kind)
- Or: Use The Docs, Luke
(the documentation almost always has YAML examples)
---
## Generating YAML without creating resources
- We can use the `--dry-run` option
.exercise[
- Generate the YAML for a Deployment without creating it:
```bash
kubectl create deployment web --image nginx --dry-run
```
]
- We can clean up that YAML even more if we want
(for instance, we can remove the `creationTimestamp` and empty dicts)
---
## Using `--dry-run` with `kubectl apply`
- The `--dry-run` option can also be used with `kubectl apply`
- However, it can be misleading (it doesn't do a "real" dry run)
- Let's see what happens in the following scenario:
- generate the YAML for a Deployment
- tweak the YAML to transform it into a DaemonSet
- apply that YAML to see what would actually be created
---
## The limits of `kubectl apply --dry-run`
.exercise[
- Generate the YAML for a deployment:
```bash
kubectl create deployment web --image=nginx -o yaml > web.yaml
```
- Change the `kind` in the YAML to make it a `DaemonSet`:
```bash
sed -i s/Deployment/DaemonSet/ web.yaml
```
- Ask `kubectl` what would be applied:
```bash
kubectl apply -f web.yaml --dry-run --validate=false -o yaml
```
]
The resulting YAML doesn't represent a valid DaemonSet.
---
## Server-side dry run
- Since Kubernetes 1.13, we can use [server-side dry run and diffs](https://kubernetes.io/blog/2019/01/14/apiserver-dry-run-and-kubectl-diff/)
- Server-side dry run will do all the work, but *not* persist to etcd
(all validation and mutation hooks will be executed)
.exercise[
- Try the same YAML file as earlier, with server-side dry run:
```bash
kubectl apply -f web.yaml --server-dry-run --validate=false -o yaml
```
]
The resulting YAML doesn't have the `replicas` field anymore.
Instead, it has the fields expected in a DaemonSet.
---
## Advantages of server-side dry run
- The YAML is verified much more extensively
- The only step that is skipped is "write to etcd"
- YAML that passes server-side dry run *should* apply successfully
(unless the cluster state changes by the time the YAML is actually applied)
- Validating or mutating hooks that have side effects can also be an issue
---
## `kubectl diff`
- Kubernetes 1.13 also introduced `kubectl diff`
- `kubectl diff` does a server-side dry run, *and* shows differences
.exercise[
- Try `kubectl diff` on the YAML that we tweaked earlier:
```bash
kubectl diff -f web.yaml
```
]
Note: we don't need to specify `--validate=false` here.
---
## Advantage of YAML
- Using YAML (instead of `kubectl run`/`create`/etc.) allows us to be *declarative*
- The YAML describes the desired state of our cluster and applications
- YAML can be stored, versioned, archived (e.g. in git repositories)
- To change resources, change the YAML files
(instead of using `kubectl edit`/`scale`/`label`/etc.)
- Changes can be reviewed before being applied
(with code reviews, pull requests ...)
- This workflow is sometimes called "GitOps"
(there are tools like Weave Flux or GitKube to facilitate it)
---
## YAML in practice
- Get started with `kubectl run`/`create`/`expose`/etc.
- Dump the YAML with `kubectl get -o yaml`
- Tweak that YAML and `kubectl apply` it back
- Store that YAML for reference (for further deployments)
- Feel free to clean up the YAML:
- remove fields you don't know
- check that it still works!
- That YAML will be useful later when using e.g. Kustomize or Helm
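- Putting it together, the workflow could look like this (a sketch; `web` is just an example name):
```bash
# Generate and store the YAML for a Deployment, then apply it
kubectl create deployment web --image=nginx -o yaml --dry-run > web.yaml
kubectl apply -f web.yaml

# Later: edit web.yaml, review the change, re-apply
kubectl diff -f web.yaml
kubectl apply -f web.yaml
```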

View File

@@ -87,7 +87,7 @@
- Clone the Flux repository:
```
git clone https://github.com/weaveworks/flux
git clone https://github.com/fluxcd/flux
```
- Edit `deploy/flux-deployment.yaml`

View File

@@ -1,41 +1,3 @@
## Questions to ask before adding healthchecks
- Do we want liveness, readiness, both?
(sometimes, we can use the same check, but with different failure thresholds)
- Do we have existing HTTP endpoints that we can use?
- Do we need to add new endpoints, or perhaps use something else?
- Are our healthchecks likely to use resources and/or slow down the app?
- Do they depend on additional services?
(this can be particularly tricky, see next slide)
---
## Healthchecks and dependencies
- A good healthcheck should always indicate the health of the service itself
- It should not be affected by the state of the service's dependencies
- Example: a web server requiring a database connection to operate
(make sure that the healthcheck can report "OK" even if the database is down;
<br/>
because it won't help us to restart the web server if the issue is with the DB!)
- Example: a microservice calling other microservices
- Example: a worker process
(these will generally require minor code changes to report health)
---
## Adding healthchecks to an app
- Let's add healthchecks to DockerCoins!
@@ -370,24 +332,4 @@ class: extra-details
(and have gcr.io/pause take care of the reaping)
---
## Healthchecks for worker
- Readiness isn't useful
(because worker isn't a backend for a service)
- Liveness may help us restart a broken worker, but how can we check it?
- Embedding an HTTP server is an option
(but it has a high potential for unwanted side effects and false positives)
- Using a "lease" file can be relatively easy:
- touch a file during each iteration of the main loop
- check the timestamp of that file from an exec probe
- Writing logs (and checking them from the probe) also works
- Discussion of this in [Video - 10 Ways to Shoot Yourself in the Foot with Kubernetes, #9 Will Surprise You](https://www.youtube.com/watch?v=QKI-JRs2RIE)

View File

@@ -42,9 +42,11 @@
- internal corruption (causing all requests to error)
- If the liveness probe fails *N* consecutive times, the container is killed
- Anything where our incident response would be "just restart/reboot it"
- *N* is the `failureThreshold` (3 by default)
.warning[**Do not** use liveness probes for problems that can't be fixed by a restart]
- Otherwise we just restart our pods for no reason, creating useless load
---
@@ -52,7 +54,7 @@
- Indicates if the container is ready to serve traffic
- If a container becomes "unready" (let's say busy!) it might be ready again soon
- If a container becomes "unready" it might be ready again soon
- If the readiness probe fails:
@@ -66,19 +68,79 @@
## When to use a readiness probe
- To indicate temporary failures
- To indicate failure due to an external cause
- the application can only service *N* parallel connections
- database is down or unreachable
- the runtime is busy doing garbage collection or initial data load
- mandatory auth or other backend service unavailable
- The container is marked as "not ready" after `failureThreshold` failed attempts
- To indicate temporary failure or unavailability
(3 by default)
- application can only service *N* parallel connections
- It is marked again as "ready" after `successThreshold` successful attempts
- runtime is busy doing garbage collection or initial data load
(1 by default)
- For processes that take a long time to start
(more on that later)
---
## Dependencies
- If a web server depends on a database to function, and the database is down:
- the web server's liveness probe should succeed
- the web server's readiness probe should fail
- Same thing for any hard dependency (without which the container can't work)
.warning[**Do not** fail liveness probes for problems that are external to the container]
---
## Timing and thresholds
- Probes are executed at intervals of `periodSeconds` (default: 10)
- The timeout for a probe is set with `timeoutSeconds` (default: 1)
.warning[If a probe takes longer than that, it is considered as a FAIL]
- A probe is considered successful after `successThreshold` successes (default: 1)
- A probe is considered failing after `failureThreshold` failures (default: 3)
- A probe can have an `initialDelaySeconds` parameter (default: 0)
- Kubernetes will wait that amount of time before running the probe for the first time
(this is important to avoid killing services that take a long time to start)
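- As an illustration, these parameters could be set like this on an HTTP liveness probe (values are arbitrary examples):
```bash
kubectl apply -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10   # wait 10s before the first probe
      periodSeconds: 5          # then probe every 5 seconds
      timeoutSeconds: 1         # each probe must reply within 1 second
      failureThreshold: 3       # 3 consecutive failures trigger a restart
EOF
```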
---
class: extra-details
## Startup probe
- Kubernetes 1.16 introduces a third type of probe: `startupProbe`
(it is in `alpha` in Kubernetes 1.16)
- It can be used to indicate "container not ready *yet*"
- process is still starting
- loading external data, priming caches
- Before Kubernetes 1.16, we had to use the `initialDelaySeconds` parameter
(available for both liveness and readiness probes)
- `initialDelaySeconds` is a rigid delay (always wait X before running probes)
- `startupProbe` works better when a container start time can vary a lot
---
@@ -112,10 +174,12 @@
(instead of serving errors or timeouts)
- Overloaded backends get removed from load balancer rotation
- Unavailable backends get removed from load balancer rotation
(thus improving response times across the board)
- If a probe is not defined, it's as if there was an "always successful" probe
---
## Example: HTTP probe
@@ -165,14 +229,56 @@ If the Redis process becomes unresponsive, it will be killed.
---
## Details about liveness and readiness probes
## Questions to ask before adding healthchecks
- Probes are executed at intervals of `periodSeconds` (default: 10)
- Do we want liveness, readiness, both?
- The timeout for a probe is set with `timeoutSeconds` (default: 1)
(sometimes, we can use the same check, but with different failure thresholds)
- A probe is considered successful after `successThreshold` successes (default: 1)
- Do we have existing HTTP endpoints that we can use?
- A probe is considered failing after `failureThreshold` failures (default: 3)
- Do we need to add new endpoints, or perhaps use something else?
- If a probe is not defined, it's as if there was an "always successful" probe
- Are our healthchecks likely to use resources and/or slow down the app?
- Do they depend on additional services?
(this can be particularly tricky, see next slide)
---
## Healthchecks and dependencies
- Liveness checks should not be influenced by the state of external services
- All checks should reply quickly (by default, less than 1 second)
- Otherwise, they are considered to fail
- This might require checking the health of dependencies asynchronously
(e.g. if a database or API might be healthy but still take more than
1 second to reply, we should check the status asynchronously and report
a cached status)
---
## Healthchecks for workers
(In that context, worker = process that doesn't accept connections)
- Readiness isn't useful
(because workers aren't backends for a service)
- Liveness may help us restart a broken worker, but how can we check it?
- Embedding an HTTP server is a (potentially expensive) option
- Using a "lease" file can be relatively easy:
- touch a file during each iteration of the main loop
- check the timestamp of that file from an exec probe
- Writing logs (and checking them from the probe) also works
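- A sketch of the lease-file technique (hypothetical image; its main loop is assumed to `touch /tmp/lease` on each iteration):
```bash
kubectl apply -f- <<"EOF"
apiVersion: v1
kind: Pod
metadata:
  name: worker-demo
spec:
  containers:
  - name: worker
    image: myorg/worker        # hypothetical worker image
    livenessProbe:
      exec:
        command:               # fail if /tmp/lease wasn't touched in the last minute
        - sh
        - -c
        - test -n "$(find /tmp/lease -mmin -1)"
      periodSeconds: 30
EOF
```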

View File

@@ -415,7 +415,7 @@ This is normal: we haven't provided any ingress rule yet.
Here is a minimal host-based ingress resource:
```yaml
apiVersion: extensions/v1beta1
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: cheddar
@@ -523,4 +523,4 @@ spec:
- This should eventually stabilize
(remember that ingresses are currently `apiVersion: extensions/v1beta1`)
(remember that ingresses are currently `apiVersion: networking.k8s.io/v1beta1`)

View File

@@ -14,42 +14,80 @@
`ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName`
---
## Basic service types
- `ClusterIP` (default type)
- a virtual IP address is allocated for the service (in an internal, private range)
- this IP address is reachable only from within the cluster (nodes and pods)
- our code can connect to the service using the original port number
- `NodePort`
- a port is allocated for the service (by default, in the 30000-32768 range)
- that port is made available *on all our nodes* and anybody can connect to it
- our code must be changed to connect to that new port number
These service types are always available.
Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables` rules.
- HTTP services can also use `Ingress` resources (more on that later)
---
## More service types
## `ClusterIP`
- `LoadBalancer`
- It's the default service type
- an external load balancer is allocated for the service
- the load balancer is configured accordingly
<br/>(e.g.: a `NodePort` service is created, and the load balancer sends traffic to that port)
- available only when the underlying infrastructure provides some "load balancer as a service"
<br/>(e.g. AWS, Azure, GCE, OpenStack...)
- A virtual IP address is allocated for the service
- `ExternalName`
(in an internal, private range; e.g. 10.96.0.0/12)
- the DNS entry managed by CoreDNS will just be a `CNAME` to a provided record
- no port, no IP address, no nothing else is allocated
- This IP address is reachable only from within the cluster (nodes and pods)
- Our code can connect to the service using the original port number
- Perfect for internal communication, within the cluster
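- For example:
  ```bash
  # Create a Deployment, then a ClusterIP service in front of it
  kubectl create deployment blue --image=nginx
  kubectl expose deployment blue --port=80

  # The service gets a virtual IP in the cluster's internal service range
  kubectl get service blue
  ```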
---
## `LoadBalancer`
- An external load balancer is allocated for the service
(typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...)
- This is available only when the underlying infrastructure provides some kind of
"load balancer as a service"
- Each service of that type will typically cost a little bit of money
(e.g. a few cents per hour on AWS or GCE)
- Ideally, traffic would flow directly from the load balancer to the pods
- In practice, it will often flow through a `NodePort` first
---
## `NodePort`
- A port number is allocated for the service
(by default, in the 30000-32768 range)
- That port is made available *on all our nodes* and anybody can connect to it
(we can connect to any node on that port to reach the service)
- Our code needs to be changed to connect to that new port number
- Under the hood: `kube-proxy` sets up a bunch of `iptables` rules on our nodes
- Sometimes, it's the only available option for external traffic
(e.g. most clusters deployed with kubeadm or on-premises)
---
class: extra-details
## `ExternalName`
- No load balancer (internal or external) is created
- Only a DNS entry gets added to the DNS managed by Kubernetes
- That DNS entry will just be a `CNAME` to a provided record
Example:
```bash
kubectl create service externalname k8s --external-name kubernetes.io
```
*Creates a CNAME `k8s` pointing to `kubernetes.io`*
---
@@ -279,18 +317,28 @@ error: the server doesn't have a resource type "endpoint"
---
## Exposing services to the outside world
class: extra-details
- The default type (ClusterIP) only works for internal traffic
## `ExternalIP`
- If we want to accept external traffic, we can use one of these:
- When creating a service, we can also specify an `ExternalIP`
- NodePort (expose a service on a TCP port between 30000-32768)
(this is not a type, but an extra attribute to the service)
- LoadBalancer (provision a cloud load balancer for our service)
- It will make the service available on this IP address
- ExternalIP (use one node's external IP address)
(if the IP address belongs to a node of the cluster)
- Ingress (a special mechanism for HTTP services)
---
*We'll see NodePorts and Ingresses more in detail later.*
## `Ingress`
- Ingresses are another type (kind) of resource
- They are specifically for HTTP services
(not TCP or UDP)
- They can also handle TLS certificates, URL rewriting ...
- They require an *Ingress Controller* to function

View File

@@ -20,6 +20,50 @@
---
class: extra-details
## `kubectl` is the new SSH
- We often start managing servers with SSH
(installing packages, troubleshooting ...)
- At scale, it becomes tedious, repetitive, error-prone
- Instead, we use config management, central logging, etc.
- In many cases, we still need SSH:
- as the underlying access method (e.g. Ansible)
- to debug tricky scenarios
- to inspect and poke at things
---
class: extra-details
## The parallel with `kubectl`
- We often start managing Kubernetes clusters with `kubectl`
(deploying applications, troubleshooting ...)
- At scale (with many applications or clusters), it becomes tedious, repetitive, error-prone
- Instead, we use automated pipelines, observability tooling, etc.
- In many cases, we still need `kubectl`:
- to debug tricky scenarios
- to inspect and poke at things
- The Kubernetes API is always the underlying access method
---
## `kubectl get`
- Let's look at our `Node` resources with `kubectl get`!
@@ -71,7 +115,7 @@
- Show the capacity of all our nodes as a stream of JSON objects:
```bash
kubectl get nodes -o json |
kubectl get nodes -o json |
jq ".items[] | {name:.metadata.name} + .status.capacity"
```
@@ -182,53 +226,6 @@ class: extra-details
---
## Services
- A *service* is a stable endpoint to connect to "something"
(In the initial proposal, they were called "portals")
.exercise[
- List the services on our cluster with one of these commands:
```bash
kubectl get services
kubectl get svc
```
]
--
There is already one service on our cluster: the Kubernetes API itself.
---
## ClusterIP services
- A `ClusterIP` service is internal, available from the cluster only
- This is useful for introspection from within containers
.exercise[
- Try to connect to the API:
```bash
curl -k https://`10.96.0.1`
```
- `-k` is used to skip certificate verification
- Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `kubectl get svc`
]
--
The error that we see is expected: the Kubernetes API requires authentication.
---
## Listing running containers
- Containers are manipulated through *pods*
@@ -467,3 +464,117 @@ class: extra-details
[KEP-0009]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/0009-node-heartbeat.md
[node controller documentation]: https://kubernetes.io/docs/concepts/architecture/nodes/#node-controller
---
## Services
- A *service* is a stable endpoint to connect to "something"
(In the initial proposal, they were called "portals")
.exercise[
- List the services on our cluster with one of these commands:
```bash
kubectl get services
kubectl get svc
```
]
--
There is already one service on our cluster: the Kubernetes API itself.
---
## ClusterIP services
- A `ClusterIP` service is internal, available from the cluster only
- This is useful for introspection from within containers
.exercise[
- Try to connect to the API:
```bash
curl -k https://`10.96.0.1`
```
- `-k` is used to skip certificate verification
- Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `kubectl get svc`
]
The command above should either time out, or show an authentication error. Why?
---
## Time out
- Connections to ClusterIP services only work *from within the cluster*
- If we are outside the cluster, the `curl` command will probably time out
(Because the IP address, e.g. 10.96.0.1, isn't routed properly outside the cluster)
- This is the case with most "real" Kubernetes clusters
- To try the connection from within the cluster, we can use [shpod](https://github.com/jpetazzo/shpod)
---
## Authentication error
This is what we should see when connecting from within the cluster:
```json
$ curl -k https://10.96.0.1
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
```
---
## Explanations
- We can see `kind`, `apiVersion`, `metadata`
- These are typical of a Kubernetes API reply
- Because we *are* talking to the Kubernetes API
- The Kubernetes API tells us "Forbidden"
(because it requires authentication)
- The Kubernetes API is reachable from within the cluster
(many apps integrating with Kubernetes will use this)
---
## DNS integration
- Each service also gets a DNS record
- The Kubernetes DNS resolver is available *from within pods*
(and sometimes, from within nodes, depending on configuration)
- Code running in pods can connect to services using their name
(e.g. https://kubernetes/...)
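- We can check this from a throwaway pod (a sketch; assumes the default `cluster.local` cluster domain):
```bash
# Resolve the "kubernetes" service by name, from inside the cluster
kubectl run dnstest --rm -it --restart=Never --image=alpine -- \
    nslookup kubernetes.default.svc.cluster.local
```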

View File

@@ -153,10 +153,7 @@ pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
kubectl logs deploy/pingpong --tail 1 --follow
```
<!--
```wait seq=3```
```keys ^C```
-->
- Leave that command running, so that we can keep an eye on these logs
]
@@ -186,6 +183,44 @@ We could! But the *deployment* would notice it right away, and scale back to the
---
## Log streaming
- Let's look again at the output of `kubectl logs`
(the one we started before scaling up)
- `kubectl logs` shows us one line per second
- We could expect 3 lines per second
(since we should now have 3 pods running `ping`)
- Let's try to figure out what's happening!
---
## Streaming logs of multiple pods
- What happens if we restart `kubectl logs`?
.exercise[
- Interrupt `kubectl logs` (with Ctrl-C)
- Restart it:
```bash
kubectl logs deploy/pingpong --tail 1 --follow
```
]
`kubectl logs` will warn us that multiple pods were found, and that it's showing us only one of them.
Let's leave `kubectl logs` running while we keep exploring.
---
## Resilience
- The *deployment* `pingpong` watches its *replica set*
@@ -196,20 +231,12 @@ We could! But the *deployment* would notice it right away, and scale back to the
.exercise[
- In a separate window, list pods, and keep watching them:
- In a separate window, watch the list of pods:
```bash
kubectl get pods -w
watch kubectl get pods
```
<!--
```wait Running```
```keys ^C```
```hide kubectl wait deploy pingpong --for condition=available```
```keys kubectl delete pod ping```
```copypaste pong-..........-.....```
-->
- Destroy a pod:
- Destroy the pod currently shown by `kubectl logs`:
```
kubectl delete pod pingpong-xxxxxxxxxx-yyyyy
```
@@ -217,6 +244,23 @@ We could! But the *deployment* would notice it right away, and scale back to the
---
## What happened?
- `kubectl delete pod` terminates the pod gracefully
(sending it the TERM signal and waiting for it to shutdown)
- As soon as the pod is in "Terminating" state, the Replica Set replaces it
- But we can still see the output of the "Terminating" pod in `kubectl logs`
- Until 30 seconds later, when the grace period expires
- The pod is then killed, and `kubectl logs` exits
---
## What if we wanted something different?
- What if we wanted to start a "one-shot" container that *doesn't* get restarted?
@@ -234,6 +278,72 @@ We could! But the *deployment* would notice it right away, and scale back to the
---
## Scheduling periodic background work
- A Cron Job is a job that will be executed at specific intervals
(the name comes from the traditional cronjobs executed by the UNIX crond)
- It requires a *schedule*, represented as five space-separated fields:
- minute [0,59]
- hour [0,23]
- day of the month [1,31]
- month of the year [1,12]
- day of the week ([0,6] with 0=Sunday)
- `*` means "all valid values"; `/N` means "every N"
- Example: `*/3 * * * *` means "every three minutes"
---
## Creating a Cron Job
- Let's create a simple job to be executed every three minutes
- Cron Jobs need to terminate, otherwise they'd run forever
.exercise[
- Create the Cron Job:
```bash
kubectl run --schedule="*/3 * * * *" --restart=OnFailure --image=alpine sleep 10
```
- Check the resource that was created:
```bash
kubectl get cronjobs
```
]
---
## Cron Jobs in action
- At the specified schedule, the Cron Job will create a Job
- The Job will create a Pod
- The Job will make sure that the Pod completes
(re-creating another one if it fails, for instance if its node fails)
.exercise[
- Check the Jobs that are created:
```bash
kubectl get jobs
```
]
(It will take a few minutes before the first job is scheduled.)
---
## What about that deprecation warning?
- As we can see from the previous slide, `kubectl run` can do many things

View File

@@ -34,11 +34,11 @@
- Download the `kubectl` binary from one of these links:
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl)
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl)
|
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/darwin/amd64/kubectl)
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/darwin/amd64/kubectl)
|
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe)
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/windows/amd64/kubectl.exe)
- On Linux and macOS, make the binary executable with `chmod +x kubectl`

View File

@@ -66,6 +66,8 @@ Exactly what we need!
sudo chmod +x /usr/local/bin/stern
```
- On OS X, just `brew install stern`
<!-- ##VERSION## -->
---

View File

@@ -218,6 +218,18 @@ class: extra-details
## What's going on?
- Without the `--network-plugin` flag, kubelet defaults to "no-op" networking
- It lets the container engine use a default network
(in that case, we end up with the default Docker bridge)
- Our pods are running on independent, disconnected, host-local networks
---
## What do we need to do?
- On a normal cluster, kubelet is configured to set up pod networking with CNI plugins
- This requires:
@@ -228,14 +240,6 @@ class: extra-details
- running kubelet with `--network-plugin=cni`
- Without the `--network-plugin` flag, kubelet defaults to "no-op" networking
- It lets the container engine use a default network
(in that case, we end up with the default Docker bridge)
- Our pods are running on independent, disconnected, host-local networks
---
## Using network plugins
@@ -394,7 +398,7 @@ class: extra-details
- Start kube-proxy:
```bash
sudo kube-proxy --kubeconfig ~/kubeconfig
sudo kube-proxy --kubeconfig ~/.kube/config
```
- Expose our Deployment:

View File

@@ -32,7 +32,7 @@
- must be able to anticipate all the events that might happen
- design will be better only to the extend of what we anticipated
- design will be better only to the extent of what we anticipated
- hard to anticipate if we don't have production experience
@@ -187,6 +187,8 @@ class: extra-details
[Intro talk](https://www.youtube.com/watch?v=8k_ayO1VRXE)
|
[Deep dive talk](https://www.youtube.com/watch?v=fu7ecA2rXmc)
|
[Simple example](https://medium.com/faun/writing-your-first-kubernetes-operator-8f3df4453234)
- Zalando Kubernetes Operator Pythonic Framework (KOPF)

View File

@@ -386,4 +386,6 @@ We should see at least one index being created in cerebro.
- What if we want different images or parameters for the different nodes?
*Operators can be very powerful, iff we know exactly the scenarios that they can handle.*
*Operators can be very powerful.
<br/>
But we need to know exactly the scenarios that they can handle.*

View File

@@ -11,16 +11,36 @@
- Deploy everything else:
```bash
set -u
for SERVICE in hasher rng webui worker; do
kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
kubectl create deployment hasher --image=dockercoins/hasher:v0.1
kubectl create deployment rng --image=dockercoins/rng:v0.1
kubectl create deployment webui --image=dockercoins/webui:v0.1
kubectl create deployment worker --image=dockercoins/worker:v0.1
```
]
---
class: extra-details
## Deploying other images
- If we wanted to deploy images from another registry ...
- ... Or with a different tag ...
- ... We could use the following snippet:
```bash
REGISTRY=dockercoins
TAG=v0.1
for SERVICE in hasher rng webui worker; do
kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
```
---
## Is this working?
- After waiting for the deployment to complete, let's look at the logs!
@@ -111,7 +131,7 @@ We should now see the `worker`, well, working happily.
- Create a `NodePort` service for the Web UI:
```bash
kubectl expose deploy/webui --type=NodePort --port=80
kubectl expose deploy/webui --type=`NodePort` --port=80
```
- Check the port that was allocated:
@@ -121,6 +141,8 @@ We should now see the `worker`, well, working happily.
]
.warning[On PKS, replace `NodePort` with `LoadBalancer`.]
---
## Accessing the web UI
@@ -133,8 +155,14 @@ We should now see the `worker`, well, working happily.
<!-- ```open http://node1:3xxxx/``` -->
- On PKS, you will have to use the EXTERNAL-IP shown on the `webui` line
(and you can connect to port 80, yay!)
]
--
Yes, this may take a little while to update. *(Narrator: it was DNS.)*

View File

@@ -240,6 +240,25 @@ If you want to use an external key/value store, add one of the following:
---
## Check our default Storage Class
- The YAML manifest applied earlier should define a default storage class
.exercise[
- Check that we have a default storage class:
```bash
kubectl get storageclass
```
]
There should be a storage class showing as `portworx-replicated (default)`.
---
class: extra-details
## Our default Storage Class
This is our Storage Class (in `k8s/storage-class.yaml`):
@@ -265,28 +284,6 @@ parameters:
---
## Creating our Storage Class
- Let's apply that YAML file!
.exercise[
- Create the Storage Class:
```bash
kubectl apply -f ~/container.training/k8s/storage-class.yaml
```
- Check that it is now available:
```bash
kubectl get sc
```
]
It should show as `portworx-replicated (default)`.
---
## Our Postgres Stateful set
- The next slide shows `k8s/postgres.yaml`
@@ -326,7 +323,7 @@ spec:
schedulerName: stork
containers:
- name: postgres
image: postgres:10.5
image: postgres:11
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres

View File

@@ -60,10 +60,12 @@
(by default: every minute; can be more/less frequent)
- If you're worried about parsing overhead: exporters can also use protobuf
- The list of URLs to scrape (the *scrape targets*) is defined in configuration
.footnote[Worried about the overhead of parsing a text format?
<br/>
Check this [comparison](https://github.com/RichiH/OpenMetrics/blob/master/markdown/protobuf_vs_text.md) of the text format with the (now deprecated) protobuf format!]
---
## Defining scrape targets

View File

@@ -515,3 +515,24 @@ services.nodeports 0 0
(with `kubectl describe resourcequota ...`)
- Rinse and repeat regularly
---
## Additional resources
- [A Practical Guide to Setting Kubernetes Requests and Limits](http://blog.kubecost.com/blog/requests-and-limits/)
- explains what requests and limits are
- provides guidelines to set requests and limits
- gives PromQL expressions to compute good values
<br/>(our app needs to be running for a while)
- [Kube Resource Report](https://github.com/hjacobs/kube-resource-report/)
- generates web reports on resource usage
- [static demo](https://hjacobs.github.io/kube-resource-report/sample-report/output/index.html)
|
[live demo](https://kube-resource-report.demo.j-serv.de/applications.html)

View File

@@ -14,7 +14,27 @@
## Rolling updates
- With rolling updates, when a resource is updated, it happens progressively
- With rolling updates, when a Deployment is updated, it happens progressively
- The Deployment controls multiple Replica Sets
- Each Replica Set is a group of identical Pods
(with the same image, arguments, parameters ...)
- During the rolling update, we have at least two Replica Sets:
- the "new" set (corresponding to the "target" version)
- at least one "old" set
- We can have multiple "old" sets
(if we start another update before the first one is done)
---
## Update strategy
- Two parameters determine the pace of the rollout: `maxUnavailable` and `maxSurge`
@@ -61,32 +81,6 @@
---
## Building a new version of the `worker` service
.warning[
Only run these commands if you have built and pushed DockerCoins to a local registry.
<br/>
If you are using images from the Docker Hub (`dockercoins/worker:v0.1`), skip this.
]
.exercise[
- Go to the `stacks` directory (`~/container.training/stacks`)
- Edit `dockercoins/worker/worker.py`; update the first `sleep` line to sleep 1 second
- Build a new tag and push it to the registry:
```bash
#export REGISTRY=localhost:3xxxx
export TAG=v0.2
docker-compose -f dockercoins.yml build
docker-compose -f dockercoins.yml push
```
]
---
## Rolling out the new `worker` service
.exercise[
@@ -105,7 +99,7 @@ If you are using images from the Docker Hub (`dockercoins/worker:v0.1`), skip th
- Update `worker` either with `kubectl edit`, or by running:
```bash
kubectl set image deploy worker worker=$REGISTRY/worker:$TAG
kubectl set image deploy worker worker=dockercoins/worker:v0.2
```
]
@@ -146,8 +140,7 @@ That rollout should be pretty quick. What shows in the web UI?
- Update `worker` by specifying a non-existent image:
```bash
export TAG=v0.3
kubectl set image deploy worker worker=$REGISTRY/worker:$TAG
kubectl set image deploy worker worker=dockercoins/worker:v0.3
```
- Check what's going on:
@@ -216,27 +209,14 @@ If you didn't deploy the Kubernetes dashboard earlier, just skip this slide.
.exercise[
- Check which port the dashboard is on:
```bash
kubectl -n kube-system get svc socat
```
- Connect to the dashboard that we deployed earlier
- Check that we have failures in Deployments, Pods, and Replica Sets
- Can we see the reason for the failure?
]
Note the `3xxxx` port.
.exercise[
- Connect to http://oneofournodes:3xxxx/
<!-- ```open https://node1:3xxxx/``` -->
]
--
- We have failures in Deployments, Pods, and Replica Sets
---
## Recovering from a bad rollout
@@ -265,6 +245,137 @@ Note the `3xxxx` port.
---
## Rolling back to an older version
- We reverted to `v0.2`
- But this version still has a performance problem
- How can we get back to the previous version?
---
## Multiple "undos"
- What happens if we try `kubectl rollout undo` again?
.exercise[
- Try it:
```bash
kubectl rollout undo deployment worker
```
- Check the web UI, the list of pods ...
]
🤔 That didn't work.
---
## Multiple "undos" don't work
- If we see successive versions as a stack:
- `kubectl rollout undo` doesn't "pop" the last element from the stack
- it copies the N-1th element to the top
- Multiple "undos" just swap back and forth between the last two versions!
.exercise[
- Go back to v0.2 again:
```bash
kubectl rollout undo deployment worker
```
]
---
## In this specific scenario
- Our version numbers are easy to guess
- What if we had used git hashes?
- What if we had changed other parameters in the Pod spec?
---
## Listing versions
- We can list successive versions of a Deployment with `kubectl rollout history`
.exercise[
- Look at our successive versions:
```bash
kubectl rollout history deployment worker
```
]
We don't see *all* revisions.
We might see something like 1, 4, 5.
(Depending on how many "undos" we did before.)
---
## Explaining deployment revisions
- These revisions correspond to our Replica Sets
- This information is stored in the Replica Set annotations
.exercise[
- Check the annotations for our replica sets:
```bash
kubectl describe replicasets -l app=worker | grep -A3 annotation
```
]
---
class: extra-details
## What about the missing revisions?
- The missing revisions are stored in another annotation:
`deployment.kubernetes.io/revision-history`
- These are not shown in `kubectl rollout history`
- We could easily reconstruct the full list with a script
(if we wanted to!)
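- For instance, a quick way to dump the revision annotations of every Replica Set (a sketch using `jq`):
```bash
kubectl get replicasets -l app=worker -o json |
  jq '.items[] | {name: .metadata.name, annotations: .metadata.annotations}'
```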
---
## Rolling back to an older version
- `kubectl rollout undo` can work with a revision number
.exercise[
- Roll back to the "known good" deployment version:
```bash
kubectl rollout undo deployment worker --to-revision=1
```
- Check the web UI or the list of pods
]
---
class: extra-details
## Changing rollout parameters
@@ -285,7 +396,7 @@ spec:
spec:
containers:
- name: worker
image: $REGISTRY/worker:v0.1
image: dockercoins/worker:v0.1
strategy:
rollingUpdate:
maxUnavailable: 0
@@ -316,7 +427,7 @@ class: extra-details
spec:
containers:
- name: worker
image: $REGISTRY/worker:v0.1
image: dockercoins/worker:v0.1
strategy:
rollingUpdate:
maxUnavailable: 0

View File

@@ -61,7 +61,8 @@
- [minikube](https://kubernetes.io/docs/setup/minikube/),
[kubespawn](https://github.com/kinvolk/kube-spawn),
[Docker Desktop](https://docs.docker.com/docker-for-mac/kubernetes/):
[Docker Desktop](https://docs.docker.com/docker-for-mac/kubernetes/),
[kind](https://kind.sigs.k8s.io):
for local development
- [kubicorn](https://github.com/kubicorn/kubicorn),

View File

@@ -18,7 +18,7 @@ with a cloud provider
---
## EKS (the hard way)
## EKS (the old way)
- [Read the doc](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html)
@@ -36,7 +36,7 @@ with a cloud provider
---
## EKS (the easy way)
## EKS (the new way)
- Install `eksctl`

View File

@@ -345,7 +345,7 @@ spec:
we figure out the minimal command-line to run our Consul cluster.*
```
consul agent -data=dir=/consul/data -client=0.0.0.0 -server -ui \
consul agent -data-dir=/consul/data -client=0.0.0.0 -server -ui \
-bootstrap-expect=3 \
-retry-join=`X.X.X.X` \
-retry-join=`Y.Y.Y.Y`

View File

@@ -224,7 +224,7 @@ In the manifest, the pod was named `hello`.
```yaml
apiVersion: v1
Kind: Pod
kind: Pod
metadata:
name: hello
namespace: default

View File

@@ -1,7 +1,7 @@
## Versions installed
- Kubernetes 1.15.0
- Docker Engine 18.09.7
- Kubernetes 1.15.3
- Docker Engine 19.03.1
- Docker Compose 1.24.1
<!-- ##VERSION## -->

View File

@@ -66,7 +66,87 @@ class: extra-details
---
## A simple volume example
## Adding a volume to a Pod
- We will start with the simplest Pod manifest we can find
- We will add a volume to that Pod manifest
- We will mount that volume in a container in the Pod
- By default, this volume will be an `emptyDir`
(an empty directory)
- It will "shadow" the directory where it's mounted
---
## Our basic Pod
```yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-without-volume
spec:
containers:
- name: nginx
image: nginx
```
This is an MVP! (Minimum Viable Pod😉)
It runs a single NGINX container.
---
## Trying the basic pod
.exercise[
- Create the Pod:
```bash
kubectl create -f ~/container.training/k8s/nginx-1-without-volume.yaml
```
- Get its IP address:
```bash
IPADDR=$(kubectl get pod nginx-without-volume -o jsonpath={.status.podIP})
```
- Send a request with curl:
```bash
curl $IPADDR
```
]
(We should see the "Welcome to NGINX" page.)
---
## Adding a volume
- We need to add the volume in two places:
- at the Pod level (to declare the volume)
- at the container level (to mount the volume)
- We will declare a volume named `www`
- No type is specified, so it will default to `emptyDir`
(as the name implies, it will be initialized as an empty directory at pod creation)
- In that pod, there is also a container named `nginx`
- That container mounts the volume `www` to path `/usr/share/nginx/html/`
---
## The Pod with a volume
```yaml
apiVersion: v1
@@ -86,30 +166,57 @@ spec:
---
## A simple volume example, explained
## Trying the Pod with a volume
- We define a standalone `Pod` named `nginx-with-volume`
.exercise[
- In that pod, there is a volume named `www`
- Create the Pod:
```bash
kubectl create -f ~/container.training/k8s/nginx-2-with-volume.yaml
```
- No type is specified, so it will default to `emptyDir`
- Get its IP address:
```bash
IPADDR=$(kubectl get pod nginx-with-volume -o jsonpath={.status.podIP})
```
(as the name implies, it will be initialized as an empty directory at pod creation)
- Send a request with curl:
```bash
curl $IPADDR
```
- In that pod, there is also a container named `nginx`
]
- That container mounts the volume `www` to path `/usr/share/nginx/html/`
(We should now see a "403 Forbidden" error page.)
---
## A volume shared between two containers
## Populating the volume with another container
- Let's add another container to the Pod
- Let's mount the volume in *both* containers
- That container will populate the volume with static files
- NGINX will then serve these static files
- To populate the volume, we will clone the Spoon-Knife repository
- this repository is https://github.com/octocat/Spoon-Knife
- it's very popular (more than 100K stars!)
---
## Sharing a volume between two containers
.small[
```yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-with-volume
name: nginx-with-git
spec:
volumes:
- name: www
@@ -147,30 +254,72 @@ spec:
---
## Sharing a volume, in action
## Trying the shared volume
- Let's try it!
- This one will be time-sensitive!
- We need to catch the Pod IP address *as soon as it's created*
- Then send a request to it *as fast as possible*
.exercise[
- Create the pod by applying the YAML file:
- Watch the pods (so that we can catch the Pod IP address)
```bash
kubectl apply -f ~/container.training/k8s/nginx-with-volume.yaml
kubectl get pods -o wide --watch
```
- Check the IP address that was allocated to our pod:
]
---
## Shared volume in action
.exercise[
- Create the pod:
```bash
kubectl get pod nginx-with-volume -o wide
IP=$(kubectl get pod nginx-with-volume -o json | jq -r .status.podIP)
kubectl create -f ~/container.training/k8s/nginx-3-with-git.yaml
```
- Access the web server:
- As soon as we see its IP address, access it:
```bash
curl $IP
```
- A few seconds later, the state of the pod will change; access it again:
```bash
curl $IP
```
]
The first time, we should see "403 Forbidden".
The second time, we should see the HTML file from the Spoon-Knife repository.
---
## Explanations
- Both containers are started at the same time
- NGINX starts very quickly
(it can serve requests immediately)
- But at this point, the volume is empty
(NGINX serves "403 Forbidden")
- The other container installs git and clones the repository
(this takes a bit longer)
- When the other container is done, the volume holds the repository
(NGINX serves the HTML file)
---
## The devil is in the details
@@ -183,13 +332,100 @@ spec:
- That's why we specified `restartPolicy: OnFailure`
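As a reminder, that field sits at the pod `spec` level (sketch, matching the manifest assumed above):
```yaml
spec:
  restartPolicy: OnFailure   # don't restart the git container once it exits successfully
```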
---
## Inconsistencies
- There is a short period of time during which the website is not available
(because the `git` container hasn't done its job yet)
- With a bigger website, we could get inconsistent results
(where only a part of the content is ready)
- In real applications, this could cause incorrect results
- How can we avoid that?
---
## Init Containers
- We can define containers that should execute *before* the main ones
- They will be executed in order
(instead of in parallel)
- They must all succeed before the main containers are started
- This is *exactly* what we need here!
- Let's see one in action
.footnote[See [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) documentation for all the details.]
---
## Defining Init Containers
.small[
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-init
spec:
  volumes:
  - name: www
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html/
  initContainers:
  - name: git
    image: alpine
    command: [ "sh", "-c", "apk add --no-cache git && git clone https://github.com/octocat/Spoon-Knife /www" ]
    volumeMounts:
    - name: www
      mountPath: /www/
```
]
---
## Trying the init container
- Repeat the same operation as earlier
(try to send HTTP requests as soon as the pod comes up)
- This time, instead of "403 Forbidden" we get a "connection refused"
- NGINX doesn't start until the git container has done its job
- We never get inconsistent results
(a "half-ready" container)
---
## Other uses of init containers
- Load content
- Generate configuration (or certificates)
- Database migrations
- Waiting for other services to be up
(to avoid a flurry of connection errors in the main container)
- etc.
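For example, a hypothetical init container waiting for a `db` Service to become resolvable (service name and image are placeholders) could look like:
```yaml
initContainers:
- name: wait-for-db
  image: busybox
  command: [ "sh", "-c", "until nslookup db; do echo waiting for db; sleep 2; done" ]
```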
---


@@ -20,10 +20,9 @@ And *then* it is time to look at orchestration!
---
## Options for our first production cluster
- Use a managed cluster (AKS, EKS, GKE, PKS...)
(price: $, difficulty: medium)
@@ -207,59 +206,157 @@ And *then* it is time to look at orchestration!
---
## Congratulations!
- We learned a lot about Kubernetes, its internals, its advanced concepts
--
- That was just the easy part
- The hard challenges will revolve around *culture* and *people*
--
- ... What does that mean?
---
## Running an app involves many steps
--
- Write the app
- Tests, QA ...
--
- Ship *something* (more on that later)
- Provision resources (e.g. VMs, clusters)
--
- Deploy the *something* on the resources
- Manage, maintain, monitor the resources
- Manage, maintain, monitor the app
- And much more
---
## Who does what?
- The old "devs vs ops" division has changed
- In some organizations, "ops" are now called "SRE" or "platform" teams
(and they have very different sets of skills)
- Do you know which team is responsible for each item on the list on the previous page?
--
- Acknowledge that a lot of tasks are outsourced
(e.g. if we add "buy/rack/provision machines" in that list)
---
## What do we ship?
- Some organizations embrace "you build it, you run it"
- When "build" and "run" are owned by different teams, where's the line?
- What does the "build" team ship to the "run" team?
- Let's see a few options, and what they imply
---
## Shipping code
- Team "build" ships code
(hopefully in a repository, identified by a commit hash)
- Team "run" containerizes that code
✔️ no extra work for developers
❌ very little advantage of using containers
---
## Shipping container images
- Team "build" ships container images
(hopefully built automatically from a source repository)
- Team "run" uses theses images to create e.g. Kubernetes resources
✔️ universal artefact (support all languages uniformly)
✔️ easy to start a single component (good for monoliths)
❌ complex applications will require a lot of extra work
❌ adding/removing components in the stack also requires extra work
❌ complex applications will run very differently between dev and prod
---
## Shipping Compose files
(Or another kind of dev-centric manifest)
- Team "build" ships a manifest that works on a single node
(as well as images, or ways to build them)
- Team "run" adapts that manifest to work on a cluster
✔️ all teams can start the stack in a reliable, deterministic manner
❌ adding/removing components still requires *some* work (but less than before)
❌ there will be *some* differences between dev and prod
---
## Shipping Kubernetes manifests
- Team "build" ships ready-to-run manifests
(YAML, Helm charts, Kustomize ...)
- Team "run" adjusts some parameters and monitors the application
✔️ parity between dev and prod environments
✔️ "run" team can focus on SLAs, SLOs, and overall quality
❌ requires *a lot* of extra work (and new skills) from the "build" team
❌ Kubernetes is not a very convenient development platform (at least, not yet)
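For instance, a "run" team consuming a Helm chart shipped by the "build" team might only override a few parameters (chart name and values below are hypothetical):
```bash
helm upgrade --install myapp ./myapp-chart --set replicaCount=3 --set image.tag=1.2.3
```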
---
## What's the right answer?
- It depends on our teams
- existing skills (do they know how to do it?)
- availability (do they have the time to do it?)
- potential skills (can they learn to do it?)
- It depends on our culture
- owning "run" often implies being on call
- do we reward on-call duty without encouraging hero syndrome?
- do we give people resources (time, money) to learn?
---

slides/k8s/yamldeploy.md Normal file

@@ -0,0 +1,93 @@
# Deploying with YAML
- So far, we created resources with the following commands:
- `kubectl run`
- `kubectl create deployment`
- `kubectl expose`
- We can also create resources directly with YAML manifests
---
## `kubectl apply` vs `create`
- `kubectl create -f whatever.yaml`
- creates resources if they don't exist
- if resources already exist, don't alter them
<br/>(and display an error message)
- `kubectl apply -f whatever.yaml`
- creates resources if they don't exist
- if resources already exist, update them
<br/>(to match the definition provided by the YAML file)
- stores the manifest as an *annotation* in the resource
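For instance, after an `apply`, the stored manifest can be inspected through the `kubectl.kubernetes.io/last-applied-configuration` annotation (quick sketch):
```bash
kubectl apply -f whatever.yaml
kubectl get -f whatever.yaml -o yaml | grep last-applied-configuration
```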
---
## Creating multiple resources
- The manifest can contain multiple resources separated by `---`
```yaml
kind: ...
apiVersion: ...
metadata: ...
name: ...
...
---
kind: ...
apiVersion: ...
metadata: ...
name: ...
...
```
---
## Creating multiple resources
- The manifest can also contain a list of resources
```yaml
apiVersion: v1
kind: List
items:
- kind: ...
apiVersion: ...
...
- kind: ...
apiVersion: ...
...
```
---
## Deploying dockercoins with YAML
- We provide a YAML manifest with all the resources for Dockercoins
(Deployments and Services)
- We can use it if we need to deploy or redeploy Dockercoins
.exercise[
- Deploy or redeploy Dockercoins:
```bash
kubectl apply -f ~/container.training/k8s/dockercoins.yaml
```
]
(If we deployed Dockercoins earlier, we will see warning messages,
because the resources that we created lack the necessary annotation.
We can safely ignore them.)
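Optionally, `kubectl diff` can show what would change before applying anything (sketch; requires a reasonably recent kubectl):
```bash
kubectl diff -f ~/container.training/k8s/dockercoins.yaml
```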


@@ -1,43 +0,0 @@
title: |
Kubernetes
for Admins and Ops
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
- static-pods-exercise
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/dmuc.md
- - k8s/multinode.md
- k8s/cni.md
- k8s/apilb.md
- k8s/control-plane-auth.md
- - k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md
- k8s/staticpods.md
- k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
- k8s/bootstrap.md
- - k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- - k8s/lastwords-admin.md
- k8s/links.md
- shared/thankyou.md


@@ -1,69 +0,0 @@
title: |
Kubernetes
for administrators
and operators
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
# DAY 1
- - k8s/prereqs-admin.md
- k8s/architecture.md
- k8s/deploymentslideshow.md
- k8s/dmuc.md
- - k8s/multinode.md
- k8s/cni.md
- - k8s/apilb.md
- k8s/setup-managed.md
- k8s/setup-selfhosted.md
- k8s/cluster-upgrade.md
- k8s/staticpods.md
- - k8s/cluster-backup.md
- k8s/cloud-controller-manager.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
# DAY 2
- - k8s/kubercoins.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
- k8s/authn-authz.md
- k8s/csr-api.md
- - k8s/openid-connect.md
- k8s/control-plane-auth.md
###- k8s/bootstrap.md
- k8s/netpol.md
- k8s/podsecuritypolicy.md
- - k8s/resource-limits.md
- k8s/metrics-server.md
- k8s/cluster-sizing.md
- k8s/horizontal-pod-autoscaler.md
- - k8s/prometheus.md
- k8s/extending-api.md
- k8s/operators.md
###- k8s/operators-design.md
# CONCLUSION
- - k8s/lastwords-admin.md
- k8s/links.md
- shared/thankyou.md
- |
# (All content after this slide is bonus material)
# EXTRA
- - k8s/volumes.md
- k8s/configuration.md
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md


@@ -1,6 +1,7 @@
title: |
Deploying and Scaling Microservices
with Kubernetes
Kubernetes Meetup
@
VMware
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
@@ -8,7 +9,9 @@ chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
slides: http://vmware-2019-11.container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- self-paced
@@ -19,64 +22,92 @@ chapters:
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
#- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
#- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectlrun.md
- k8s/logs-cli.md
- vmware/vrli.md
- shared/declarative.md
- k8s/declarative.md
- k8s/kubenet.md
- - k8s/kubectlget.md
- k8s/setup-k8s.md
- k8s/kubectlrun.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- - k8s/shippingimages.md
- vmware/nsxt.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
-
- k8s/yamldeploy.md
- k8s/setup-k8s.md
- vmware/pks.md
#- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- |
## Scaling `rng`
- Let's scale the `rng` service just like we scaled `worker`
.exercise[
- Scale `rng`:
```bash
kubectl scale deploy rng --replicas=2
```
]
The web UI graph should go past 10 hashes/second.
- vmware/vrops.md
#- shared/hastyconclusions.md
#- k8s/daemonset.md
#- k8s/dryrun.md
#- k8s/kubectlproxy.md
#- k8s/localkubeconfig.md
#- k8s/accessinternal.md
- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- - k8s/rollout.md
- k8s/rollout.md
#- k8s/healthchecks.md
#- k8s/healthchecks-more.md
#- k8s/record.md
- k8s/namespaces.md
-
#- k8s/ingress.md
#- k8s/kustomize.md
#- k8s/helm.md
#- k8s/create-chart.md
# - k8s/healthchecks.md
# - k8s/healthchecks-more.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
#- k8s/netpol.md
#- k8s/authn-authz.md
#- k8s/csr-api.md
#- k8s/openid-connect.md
#- k8s/podsecuritypolicy.md
#- k8s/ingress.md
#- k8s/gitworkflows.md
- k8s/prometheus.md
#- - k8s/volumes.md
# - k8s/build-with-docker.md
# - k8s/build-with-kaniko.md
# - k8s/configuration.md
#- - k8s/owners-and-dependents.md
# - k8s/extending-api.md
# - k8s/operators.md
# - k8s/operators-design.md
# - k8s/statefulsets.md
- k8s/volumes.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
#- k8s/logs-centralized.md
#- k8s/prometheus.md
#- k8s/statefulsets.md
#- k8s/local-persistent-volumes.md
- k8s/portworx.md
#- k8s/extending-api.md
#- k8s/operators.md
#- k8s/operators-design.md
#- k8s/staticpods.md
# - k8s/portworx.md
- - k8s/whatsnext.md
#- k8s/owners-and-dependents.md
#- k8s/gitworkflows.md
- vmware/vsan.md
- k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md


@@ -1,67 +0,0 @@
title: |
Kubernetes 101
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/training-20180413-paris)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- shared/title.md
#- logistics.md
# Bridget-specific; others use logistics.md
- logistics-bridget.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-k8s.md
- - k8s/kubectlrun.md
- k8s/deploymentslideshow.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
#- k8s/kubectlproxy.md
#- k8s/localkubeconfig.md
#- k8s/accessinternal.md
- - k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/rollout.md
#- k8s/record.md
- - k8s/logs-cli.md
# Bridget hasn't added EFK yet
#- k8s/logs-centralized.md
- k8s/namespaces.md
- k8s/helm.md
- k8s/create-chart.md
#- k8s/kustomize.md
#- k8s/netpol.md
- k8s/whatsnext.md
# - k8s/links.md
# Bridget-specific
- k8s/links-bridget.md
- shared/thankyou.md


@@ -10,6 +10,8 @@ gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
#slidenumberprefix: "#SomeHashTag &mdash; "
exclude:
- in-person
@@ -19,65 +21,79 @@ chapters:
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
-
- shared/prereqs.md
#- shared/webssh.md
- shared/connecting.md
- k8s/versions-k8s.md
- assignments/setup.md
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- k8s/kubectlget.md
-
- k8s/kubectlrun.md
- k8s/logs-cli.md
- shared/declarative.md
- k8s/declarative.md
- - k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-k8s.md
- k8s/kubectlrun.md
- k8s/deploymentslideshow.md
- - k8s/kubectlexpose.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/shippingimages.md
- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
-
- k8s/yamldeploy.md
- k8s/setup-k8s.md
- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/dryrun.md
- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/dashboard.md
- - k8s/kubectlscale.md
# - k8s/scalingdockercoins.md
# - shared/hastyconclusions.md
- k8s/daemonset.md
-
- k8s/rollout.md
- k8s/record.md
- k8s/namespaces.md
- - k8s/kustomize.md
- k8s/helm.md
- k8s/create-chart.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
- - k8s/netpol.md
- k8s/record.md
-
- k8s/namespaces.md
- k8s/ingress.md
- k8s/kustomize.md
- k8s/helm.md
- k8s/create-chart.md
-
- k8s/netpol.md
- k8s/authn-authz.md
-
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/podsecuritypolicy.md
- - k8s/ingress.md
- k8s/gitworkflows.md
- k8s/prometheus.md
- - k8s/volumes.md
-
- k8s/volumes.md
- k8s/build-with-docker.md
- k8s/build-with-kaniko.md
- k8s/configuration.md
- - k8s/owners-and-dependents.md
-
- k8s/logs-centralized.md
- k8s/prometheus.md
-
- k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
-
- k8s/extending-api.md
- k8s/operators.md
- k8s/operators-design.md
- - k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
- k8s/staticpods.md
- - k8s/whatsnext.md
- k8s/owners-and-dependents.md
- k8s/gitworkflows.md
-
- k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md


@@ -1,82 +0,0 @@
title: |
Deploying and Scaling Microservices
with Kubernetes
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
chapters:
- shared/title.md
- logistics.md
- k8s/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/connecting.md
- k8s/versions-k8s.md
- shared/sampleapp.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- k8s/kubenet.md
- - k8s/kubectlget.md
- k8s/setup-k8s.md
- k8s/kubectlrun.md
- k8s/deploymentslideshow.md
- k8s/kubectlexpose.md
- - k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md
- k8s/ourapponkube.md
- k8s/kubectlproxy.md
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/dashboard.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- - k8s/daemonset.md
- k8s/rollout.md
- k8s/record.md
- k8s/namespaces.md
- k8s/kustomize.md
#- k8s/helm.md
#- k8s/create-chart.md
- k8s/healthchecks.md
- k8s/healthchecks-more.md
- k8s/logs-cli.md
- k8s/logs-centralized.md
#- k8s/netpol.md
- k8s/authn-authz.md
- k8s/csr-api.md
- k8s/openid-connect.md
- k8s/podsecuritypolicy.md
- - k8s/ingress.md
#- k8s/gitworkflows.md
- k8s/prometheus.md
- - k8s/volumes.md
#- k8s/build-with-docker.md
#- k8s/build-with-kaniko.md
- k8s/configuration.md
#- k8s/owners-and-dependents.md
- k8s/extending-api.md
- k8s/operators.md
- k8s/operators-design.md
- - k8s/statefulsets.md
- k8s/local-persistent-volumes.md
- k8s/portworx.md
#- k8s/staticpods.md
- - k8s/whatsnext.md
- k8s/links.md
- shared/thankyou.md


@@ -1,35 +1,11 @@
## Intros
- This slide should be customized by the tutorial instructor(s).
- Hello! We are:
- .emoji[👩🏻‍🏫] Ann O'Nymous ([@...](https://twitter.com/...), Megacorp Inc)
- Brice ([@bdereims](https://twitter.com/bdereims), VMware)
- .emoji[👨🏾‍🎓] Stu Dent ([@...](https://twitter.com/...), University of Wakanda)
<!-- .dummy[
- .emoji[👷🏻‍♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)
- .emoji[🚁] Alexandre ([@alexbuisine](https://twitter.com/alexbuisine), Enix SAS)
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- .emoji[⛵] Jérémy ([@jeremygarrouste](twitter.com/jeremygarrouste), Inpiwee)
- .emoji[🎧] Romain ([@rdegez](https://twitter.com/rdegez), Enix SAS)
] -->
- The workshop will run from ...
- There will be a lunch break at ...
(And coffee breaks!)
- Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix)
- Feel free to interrupt for questions at any time
- *Especially when you see full screen container pictures!*
- Live feedback, questions, help: @@CHAT@@


@@ -80,7 +80,7 @@ def flatten(titles):
def generatefromyaml(manifest, filename):
manifest = yaml.load(manifest)
manifest = yaml.safe_load(manifest)
markdown, titles = processchapter(manifest["chapters"], filename)
logging.debug("Found {} titles.".format(len(titles)))
@@ -111,6 +111,7 @@ def generatefromyaml(manifest, filename):
html = html.replace("@@GITREPO@@", manifest["gitrepo"])
html = html.replace("@@SLIDES@@", manifest["slides"])
html = html.replace("@@TITLE@@", manifest["title"].replace("\n", " "))
html = html.replace("@@SLIDENUMBERPREFIX@@", manifest.get("slidenumberprefix", ""))
return html


@@ -4,7 +4,12 @@ class: in-person
.exercise[
- Log into the first VM (`node1`) with your SSH client
- Log into the first VM (`node1`) with your SSH client:
```bash
ssh `user`@`A.B.C.D`
```
(Replace `user` and `A.B.C.D` with the user and IP address provided to you)
<!--
```bash
@@ -18,16 +23,13 @@ done
```
-->
- Check that you can SSH (without password) to `node2`:
```bash
ssh node2
```
- Type `exit` or `^D` to come back to `node1`
<!-- ```bash exit``` -->
]
You should see a prompt looking like this:
```
[A.B.C.D] (...) user@node1 ~
$
```
If anything goes wrong — ask for help!
---
@@ -52,6 +54,20 @@ If anything goes wrong — ask for help!
---
## For a consistent Kubernetes experience ...
- If you are using your own Kubernetes cluster, you can use [shpod](https://github.com/jpetazzo/shpod)
- `shpod` provides a shell running in a pod on your own cluster
- It comes with many tools pre-installed (helm, stern...)
- These tools are used in many exercises in these slides
- `shpod` also gives you completion and a fancy prompt
---
class: self-paced
## Get your own Docker nodes


@@ -50,7 +50,9 @@ Misattributed to Benjamin Franklin
- Go to @@SLIDES@@ to view these slides
<!--
- Join the chat room: @@CHAT@@
-->
<!-- ```open @@SLIDES@@``` -->
@@ -58,6 +60,22 @@ Misattributed to Benjamin Franklin
---
## Navigating slides
- Use arrows to move to next/previous slide
(up, down, left, right, page up, page down)
- Type a slide number + ENTER to go to that slide
- The slide number is also visible in the URL bar
(e.g. .../#123 for slide 123)
- Slides will remain online so you can review them later if needed
---
class: in-person
## Where are we going to run our containers?


@@ -26,9 +26,7 @@ fi
---
## Downloading and running the application
Let's start this before we look around, as downloading will take a little time...
## Having a look at the application
.exercise[
@@ -37,20 +35,22 @@ Let's start this before we look around, as downloading will take a little time..
cd ~/container.training/dockercoins
```
- Use Compose to build and run all containers:
- Check the files and directories:
```bash
docker-compose up
tree
```
<!--
```longwait units of work done```
-->
]
Compose tells Docker to build all container images (pulling
the corresponding base images), then starts all containers,
and displays aggregated logs.
---
## Viewing the application
- Jérôme is going to wear his developer hat ...
- ... start the application on his developer's machine ...
- ... and wait for the app to be up and running.
---
@@ -165,26 +165,6 @@ https://@@GITREPO@@/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/wo
---
class: extra-details
## Links, naming, and service discovery
- Containers can have network aliases (resolvable through DNS)
- Compose file version 2+ makes each container reachable through its service name
- Compose file version 1 required "links" sections to accomplish this
- Network aliases are automatically namespaced
- you can have multiple apps declaring and using a service named `database`
- containers in the blue app will resolve `database` to the IP of the blue database
- containers in the green app will resolve `database` to the IP of the green database
---
## Show me the code!
- You can check the GitHub repository with all the materials of this workshop:
@@ -210,24 +190,6 @@ class: extra-details
---
class: extra-details
## Compose file format version
*This is relevant only if you have used Compose before 2016...*
- Compose 1.6 introduced support for a new Compose file format (aka "v2")
- Services are no longer at the top level, but under a `services` section
- There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer)
- Containers are placed on a dedicated network, making links unnecessary
- There are other minor differences, but upgrade is easy and straightforward
---
## Our application at work
- On the left-hand side, the "rainbow strip" shows the container names
@@ -246,18 +208,6 @@ class: extra-details
- The `webui` container exposes a web dashboard; let's view it
.exercise[
- With a web browser, connect to `node1` on port 8000
- Remember: the `nodeX` aliases are valid only on the nodes themselves
- In your browser, you need to enter the IP address of your node
<!-- ```open http://node1:8000``` -->
]
A drawing area should show up, and after a few seconds, a blue
graph will appear.
@@ -283,74 +233,3 @@ work on a local environment, or when using Docker Desktop for Mac or Windows.
How to fix this?
Stop the app with `^C`, edit `dockercoins.yml`, comment out the `volumes` section, and try again.
---
class: extra-details
## Why does the speed seem irregular?
- It *looks like* the speed is approximately 4 hashes/second
- Or more precisely: 4 hashes/second, with regular dips down to zero
- Why?
--
class: extra-details
- The app actually has a constant, steady speed: 3.33 hashes/second
<br/>
(which corresponds to 1 hash every 0.3 seconds, for *reasons*)
- Yes, and?
---
class: extra-details
## The reason why this graph is *not awesome*
- The worker doesn't update the counter after every loop, but up to once per second
- The speed is computed by the browser, checking the counter about once per second
- Between two consecutive updates, the counter will increase either by 4, or by 0
- The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.
- What can we conclude from this?
--
class: extra-details
- "I'm clearly incapable of writing good frontend code!" 😀 — Jérôme
---
## Stopping the application
- If we interrupt Compose (with `^C`), it will politely ask the Docker Engine to stop the app
- The Docker Engine will send a `TERM` signal to the containers
- If the containers do not exit in a timely manner, the Engine sends a `KILL` signal
.exercise[
- Stop the application by hitting `^C`
<!--
```keys ^C```
-->
]
--
Some containers exit immediately, others take longer.
The containers that do not handle `SIGTERM` end up being killed after a 10s timeout. If we are very impatient, we can hit `^C` a second time!


@@ -11,11 +11,12 @@ class: title, in-person
@@TITLE@@<br/></br>
.footnote[
<!--
**Be kind to the WiFi!**<br/>
<!-- *Use the 5G network.* -->
*Don't use your hotspot.*<br/>
*Don't stream videos or download big files during the workshop[.](https://www.youtube.com/watch?v=h16zyxiwDLY)*<br/>
*Thank you!*
-->
**Slides: @@SLIDES@@**
]

slides/shared/webssh.md Normal file

@@ -0,0 +1,29 @@
## WebSSH
- The virtual machines are also accessible via WebSSH
- This can be useful if:
- you can't install an SSH client on your machine
- SSH connections are blocked (by firewall or local policy)
- To use WebSSH, connect to the IP address of the remote VM on port 1080
(each machine runs a WebSSH server)
- Then provide the login and password indicated on your card
---
## Good to know
- WebSSH uses WebSocket
- If you're having connection issues, try to disable your HTTP proxy
(many HTTP proxies can't handle WebSocket properly)
- Most keyboard shortcuts should work, except Ctrl-W
(as it is hardwired by the browser to "close this tab")


@@ -1,65 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
- snap
- btp-auto
- benchmarking
- elk-manual
- prom-manual
chapters:
- shared/title.md
- logistics.md
- swarm/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/connecting.md
- swarm/versions.md
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/stacks.md
- swarm/cicd.md
- swarm/updatingservices.md
- swarm/rollingupdates.md
- swarm/healthchecks.md
- - swarm/operatingswarm.md
- swarm/netshoot.md
- swarm/ipsec.md
- swarm/swarmtools.md
- swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- - swarm/logging.md
- swarm/metrics.md
- swarm/gui.md
- swarm/stateful.md
- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md


@@ -1,64 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- self-paced
- snap
- btp-manual
- benchmarking
- elk-manual
- prom-manual
chapters:
- shared/title.md
- logistics.md
- swarm/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/connecting.md
- swarm/versions.md
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
#- swarm/hostingregistry.md
#- swarm/testingregistry.md
#- swarm/btp-manual.md
#- swarm/swarmready.md
- swarm/stacks.md
- swarm/cicd.md
- swarm/updatingservices.md
#- swarm/rollingupdates.md
#- swarm/healthchecks.md
- - swarm/operatingswarm.md
#- swarm/netshoot.md
#- swarm/ipsec.md
#- swarm/swarmtools.md
- swarm/security.md
#- swarm/secrets.md
#- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- swarm/logging.md
- swarm/metrics.md
#- swarm/stateful.md
#- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md


@@ -1,73 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
- btp-auto
chapters:
- shared/title.md
#- shared/logistics.md
- swarm/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/connecting.md
- swarm/versions.md
- |
name: part-1
class: title, self-paced
Part 1
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/stacks.md
- swarm/cicd.md
- |
name: part-2
class: title, self-paced
Part 2
- - swarm/operatingswarm.md
- swarm/netshoot.md
- swarm/swarmnbt.md
- swarm/ipsec.md
- swarm/updatingservices.md
- swarm/rollingupdates.md
- swarm/healthchecks.md
- swarm/nodeinfo.md
- swarm/swarmtools.md
- - swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
- swarm/logging.md
- swarm/metrics.md
- swarm/stateful.md
- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md


@@ -1,72 +0,0 @@
title: |
Container Orchestration
with Docker and Swarm
chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
exclude:
- in-person
- btp-auto
chapters:
- shared/title.md
#- shared/logistics.md
- swarm/intro.md
- shared/about-slides.md
- shared/toc.md
- - shared/prereqs.md
- shared/connecting.md
- swarm/versions.md
- |
name: part-1
class: title, self-paced
Part 1
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md
- swarm/swarmmode.md
- swarm/creatingswarm.md
#- swarm/machine.md
- swarm/morenodes.md
- - swarm/firstservice.md
- swarm/ourapponswarm.md
- swarm/hostingregistry.md
- swarm/testingregistry.md
- swarm/btp-manual.md
- swarm/swarmready.md
- swarm/stacks.md
- |
name: part-2
class: title, self-paced
Part 2
- - swarm/operatingswarm.md
#- swarm/netshoot.md
#- swarm/swarmnbt.md
- swarm/ipsec.md
- swarm/updatingservices.md
- swarm/rollingupdates.md
#- swarm/healthchecks.md
- swarm/nodeinfo.md
- swarm/swarmtools.md
- - swarm/security.md
- swarm/secrets.md
- swarm/encryptionatrest.md
- swarm/leastprivilege.md
- swarm/apiscope.md
#- swarm/logging.md
#- swarm/metrics.md
- swarm/stateful.md
- swarm/extratips.md
- shared/thankyou.md
- swarm/links.md

slides/vmware/nsxt.md Normal file

@@ -0,0 +1,9 @@
# NSX-T
*Connect and secure Kubernetes Pods*
- Distributed firewall and micro-segmentation for VMs and Pods
- Ingress and LoadBalancer Controller for Kubernetes
- Traceflow for Pods and dynamic routing

slides/vmware/pks.md Normal file

@@ -0,0 +1,16 @@
# PKS
*Automate and streamline Kubernetes cluster deployment and operations*
- Fully automated installation of mainstream Kubernetes
- Scale up, scale down & upgrade clusters
- Highly-available control plane & self-healing features
(replace nodes automatically when needed and deploy CVE patches)
- Integration with VMware SDDC (Software Defined Data Center) features
(e.g. vMotion, DRS, Shared Datastore, NSX-T, vRealize Suite)

slides/vmware/vrli.md Normal file

@@ -0,0 +1,12 @@
# vRLI
*Centralize logs*
- Compatible with syslog
- Query language
- Dashboards
- High ingest capacity

slides/vmware/vrops.md Normal file

@@ -0,0 +1,11 @@
# vROPS
*Manage Kubernetes and/or PKS clusters*
- Automatically add new PKS clusters after deployment
- Supervision
- Capacity management
- Global view of infrastructure

slides/vmware/vsan.md Normal file

@@ -0,0 +1,9 @@
# vSAN
*Instantiate Stateful Pods*
- Compatible with CSI
- Distributed storage for higher fault tolerance + performance
- Available for Pods and VMs


@@ -28,6 +28,7 @@
var slideshow = remark.create({
ratio: '16:9',
highlightSpans: true,
slideNumberFormat: '@@SLIDENUMBERPREFIX@@%current%/%total%',
excludedClasses: [@@EXCLUDE@@]
});
</script>