Compare commits

23 Commits

Author SHA1 Message Date
Jerome Petazzoni
687b61dbf4 fix-redirects.sh: adding forced redirect 2020-04-07 16:57:06 -05:00
Jerome Petazzoni
22f32ee4c0 Merge branch 'master' into qconsf2018 2018-11-09 02:25:18 -06:00
Jerome Petazzoni
ee3c2c3030 Merge branch 'ignore-preflight-errors' into qconsf2018 2018-11-09 02:23:53 -06:00
Jerome Petazzoni
45f9d7bf59 bump versions 2018-11-09 02:23:38 -06:00
Jerome Petazzoni
efb72c2938 Bump all the versions
Bump:
- stern
- Ubuntu

Also, in each place where there is a 'bumpable' version, I added
a ##VERSION## marker, easily greppable.
2018-11-08 20:42:02 -06:00
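
Since the markers are plain text, finding every bumpable version is a one-liner (a minimal sketch, assuming the markers are committed as shown in the diffs below):

```bash
# List every ##VERSION## marker in the repository, with file and line number
git grep -n '##VERSION##'
```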
Jerome Petazzoni
357d341d82 Ignore 'wrong Docker version' warning
For some reason, kubeadm doesn't want to deploy with Docker Engine 18.09.
Before, it would just issue a warning; but now apparently the warning blocks
the deployment. So... let's ignore the warning. (I've tested the content
and it works fine with Engine 18.09 as far as I can tell.)
2018-11-08 20:32:52 -06:00
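
The resulting kubeadm invocation looks like this (a sketch reproducing the change to workshopctl shown in the diff below; the token plumbing is that script's):

```bash
# Skip the SystemVerification preflight check, which otherwise
# refuses to deploy on top of Docker Engine 18.09
sudo kubeadm init \
    --token "$(cat /tmp/token)" \
    --ignore-preflight-errors=SystemVerification
```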
Jerome Petazzoni
d4c338c62c Update prom slides for QCON preload 2018-11-07 23:08:51 -06:00
Jerome Petazzoni
d35d186249 Merge branch 'master' into qconsf2018 2018-11-01 19:48:17 -05:00
Jerome Petazzoni
6c8172d7b1 Merge branch 'work-around-kubectl-logs-bug' into qconsf2018 2018-11-01 19:45:45 -05:00
Jerome Petazzoni
d3fac47823 kubectl logs -l ... --tail ... is buggy.
(It always returns 10 lines of output instead
of the requested number.)

This works around the problem by adding extra
explanations of the issue and providing a shell
function as a workaround.

See kubernetes/kubernetes#70554 for details.
2018-11-01 19:45:13 -05:00
Jerome Petazzoni
4f71074a06 Work around bug in kubectl logs
kubectl logs -l ... --tail ... is buggy.
(It always returns 10 lines of output instead
of the requested number.)

This works around the problem by adding extra
explanations of the issue and providing a shell
function as a workaround.

See kubernetes/kubernetes#70554 for details.
2018-11-01 19:41:29 -05:00
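
The shell function added by this commit (reproduced from the slides changed below) fetches logs one pod at a time instead of relying on the buggy selector-plus-tail combination:

```bash
# List matching pods by name, then fetch the last log line of each
ktail () {
  kubectl get pods -o name -l $1 |
  xargs -rn1 kubectl logs --tail 1
}
# Example: last log line of every pod labeled app=rng
ktail app=rng
```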
Jerome Petazzoni
37470fc5ed Merge branch 'use-dockercoins-from-docker-hub' into qconsf2018 2018-11-01 19:08:57 -05:00
Jerome Petazzoni
98510f9f1c Setup qconsf2018 2018-11-01 16:10:03 -05:00
Jerome Petazzoni
6be0751147 Merge branch 'preinstall-helm-and-prometheus' into qconsf2018 2018-11-01 15:59:43 -05:00
Jerome Petazzoni
a40b291d54 Merge branch 'kubectl-create-deployment' into qconsf2018 2018-11-01 15:59:21 -05:00
Jerome Petazzoni
f24687e79f Merge branch 'jpetazzo-last-slide' into qconsf2018 2018-11-01 15:59:12 -05:00
Jerome Petazzoni
9f5f16dc09 Merge branch 'halfday-fullday-twodays' into qconsf2018 2018-11-01 15:59:03 -05:00
Jerome Petazzoni
9a5989d1f2 Merge branch 'enixlogo' into qconsf2018 2018-11-01 15:58:55 -05:00
Jerome Petazzoni
43acccc0af Add command to preinstall Helm and Prometheus
In some cases, I would like Prometheus to be pre-installed (so that
it shows a bunch of metrics) without relying on people doing it (and
setting up Helm correctly). This patch makes it possible to run:

./workshopctl helmprom TAG

It will set up Helm with a proper service account, then deploy
the Prometheus chart, disabling the alert manager and persistence,
and assigning the Prometheus server to NodePort 30090.

This command is idempotent.
2018-11-01 15:35:09 -05:00
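
To verify the result, the same check as in the updated slides below works (a sketch; the NodePort value comes from the chart options set by this command):

```bash
# The Prometheus server should be exposed on NodePort 30090
kubectl get svc -n kube-system | grep prometheus-server
```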
Jerome Petazzoni
b9de73d0fd Address deprecation of 'kubectl run'
kubectl run is being deprecated as a multi-purpose tool.
This PR replaces 'kubectl run' with 'kubectl create deployment'
in most places (except in the very first example, to reduce the
cognitive load; and when we really want a single-shot container).

It also updates the places where we use a 'run' label, since
'kubectl create deployment' uses the 'app' label instead.

NOTE: this hasn't gone through end-to-end testing yet.
2018-11-01 01:25:26 -05:00
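
The substitution pattern applied throughout the diffs below, including the accompanying label change (a sketch assembled from the changed slides):

```bash
# Before: deprecated multi-purpose generator
kubectl run httpenv --image=jpetazzo/httpenv --replicas=10
# After: create the deployment, then scale it separately
kubectl create deployment httpenv --image=jpetazzo/httpenv
kubectl scale deployment httpenv --replicas=10
# Selectors switch from the 'run' label to the 'app' label
kubectl get pods -l app=httpenv
```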
Jerome Petazzoni
6b9b83a7ae Add link to my private training intake form 2018-10-31 22:50:41 -05:00
Jerome Petazzoni
f01bc2a7a9 Fix overlapping slide number and pics 2018-09-29 18:54:00 -05:00
Jerome Petazzoni
3eaa844c55 Add ENIX logo
Warning: do not merge this branch into your content; otherwise you
will get the ENIX logo in the top right of all your decks
2018-09-08 07:49:38 -05:00
26 changed files with 259 additions and 123 deletions

View File

@@ -5,7 +5,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress:
- from:
- podSelector:

View File

@@ -5,6 +5,6 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress: []

View File

@@ -16,7 +16,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: webui
app: webui
ingress:
- from: []

View File

@@ -6,7 +6,7 @@ metadata:
creationTimestamp: null
generation: 1
labels:
run: socat
app: socat
name: socat
namespace: kube-system
selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/socat
@@ -14,7 +14,7 @@ spec:
replicas: 1
selector:
matchLabels:
run: socat
app: socat
strategy:
rollingUpdate:
maxSurge: 1
@@ -24,7 +24,7 @@ spec:
metadata:
creationTimestamp: null
labels:
run: socat
app: socat
spec:
containers:
- args:
@@ -49,7 +49,7 @@ kind: Service
metadata:
creationTimestamp: null
labels:
run: socat
app: socat
name: socat
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/socat
@@ -60,7 +60,7 @@ spec:
protocol: TCP
targetPort: 80
selector:
run: socat
app: socat
sessionAffinity: None
type: NodePort
status:

View File

@@ -123,7 +123,9 @@ _cmd_kube() {
pssh --timeout 200 "
if grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/admin.conf ]; then
kubeadm token generate > /tmp/token &&
sudo kubeadm init --token \$(cat /tmp/token)
sudo kubeadm init \
--token \$(cat /tmp/token) \
--ignore-preflight-errors=SystemVerification
fi"
# Put kubeconfig in ubuntu's and docker's accounts
@@ -147,7 +149,10 @@ _cmd_kube() {
pssh --timeout 200 "
if ! grep -q node1 /tmp/node && [ ! -f /etc/kubernetes/kubelet.conf ]; then
TOKEN=\$(ssh -o StrictHostKeyChecking=no node1 cat /tmp/token) &&
sudo kubeadm join --discovery-token-unsafe-skip-ca-verification --token \$TOKEN node1:6443
sudo kubeadm join \
--discovery-token-unsafe-skip-ca-verification \
--ignore-preflight-errors=SystemVerification \
--token \$TOKEN node1:6443
fi"
# Install kubectx and kubens
@@ -170,7 +175,8 @@ EOF"
# Install stern
pssh "
if [ ! -x /usr/local/bin/stern ]; then
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64 &&
##VERSION##
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.10.0/stern_linux_amd64 &&
sudo chmod +x /usr/local/bin/stern &&
stern --completion bash | sudo tee /etc/bash_completion.d/stern
fi"
@@ -400,6 +406,28 @@ _cmd_test() {
test_tag
}
_cmd helmprom "Install Helm and Prometheus"
_cmd_helmprom() {
TAG=$1
need_tag
pssh "
if grep -q node1 /tmp/node; then
kubectl -n kube-system get serviceaccount helm ||
kubectl -n kube-system create serviceaccount helm
helm init --service-account helm
kubectl get clusterrolebinding helm-can-do-everything ||
kubectl create clusterrolebinding helm-can-do-everything \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:helm
helm upgrade --install prometheus stable/prometheus \
--namespace kube-system \
--set server.service.type=NodePort \
--set server.service.nodePort=30090 \
--set server.persistentVolume.enabled=false \
--set alertmanager.enabled=false
fi"
}
# Sometimes, weave fails to come up on some nodes.
# Symptom: the pods on a node are unreachable (they don't even ping).
# Remedy: wipe out Weave state and delete weave pod on that node.

View File

@@ -201,5 +201,6 @@ aws_tag_instances() {
}
aws_get_ami() {
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 16.04 -t hvm:ebs -N -q
##VERSION##
find_ubuntu_ami -r $AWS_DEFAULT_REGION -a amd64 -v 18.04 -t hvm:ebs -N -q
}

slides/_redirects Normal file
View File

@@ -0,0 +1 @@
/ /kube-fullday.yml.html 200!

View File

@@ -13,6 +13,7 @@
title: Getting Started With Kubernetes and Container Orchestration
speaker: jpetazzo
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration-thursday-section
slides: http://qconsf2018.container.training/
- date: 2018-11-09
city: San Francisco, CA
@@ -21,6 +22,7 @@
title: Getting Started With Kubernetes and Container Orchestration
speaker: jpetazzo
attend: https://qconsf.com/sf2018/workshop/getting-started-kubernetes-and-container-orchestration-friday-section
slides: http://qconsf2018.container.training/
- date: 2018-10-31
city: London, UK

View File

@@ -538,7 +538,7 @@ It's important to note a couple of details in these flags ...
- But that we can't create things:
```
./kubectl run tryme --image=nginx
./kubectl create deployment tryme --image=nginx
```
- Exit the container with `exit` or `^D`

View File

@@ -256,19 +256,19 @@ The master node has [taints](https://kubernetes.io/docs/concepts/configuration/t
- Let's check the logs of all these `rng` pods
- All these pods have a `run=rng` label:
- All these pods have the label `app=rng`:
- the first pod, because that's what `kubectl run` does
- the first pod, because that's what `kubectl create deployment` does
- the other ones (in the daemon set), because we
*copied the spec from the first one*
- Therefore, we can query everybody's logs using that `run=rng` selector
- Therefore, we can query everybody's logs using that `app=rng` selector
.exercise[
- Check the logs of all the pods having a label `run=rng`:
- Check the logs of all the pods having a label `app=rng`:
```bash
kubectl logs -l run=rng --tail 1
kubectl logs -l app=rng --tail 1
```
]
@@ -279,11 +279,51 @@ It appears that *all the pods* are serving requests at the moment.
---
## Working around `kubectl logs` bugs
- That last command didn't show what we needed
- We mentioned earlier the regression affecting `kubectl logs` ...
(see [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for more details)
- Let's work around the issue by executing `kubectl logs` one pod at a time
- For convenience, we'll define a little shell function
---
## Our helper function
- The function `ktail` below will:
- list the names of all pods matching a selector
- display the last line of log for each pod
.exercise[
- Define `ktail`:
```bash
ktail () {
kubectl get pods -o name -l $1 |
xargs -rn1 kubectl logs --tail 1
}
```
- Try it:
```bash
ktail app=rng
```
]
---
## The magic of selectors
- The `rng` *service* is load balancing requests to a set of pods
- This set of pods is defined as "pods having the label `run=rng`"
- This set of pods is defined as "pods having the label `app=rng`"
.exercise[
@@ -310,7 +350,7 @@ to the associated load balancer.
--
- What would happen if we removed the `run=rng` label from that pod?
- What would happen if we removed the `app=rng` label from that pod?
--
@@ -322,7 +362,7 @@ to the associated load balancer.
--
- But but but ... Don't we have more than one pod with `run=rng` now?
- But but but ... Don't we have more than one pod with `app=rng` now?
--
@@ -345,7 +385,7 @@ to the associated load balancer.
<br/>(The second command doesn't require you to get the exact name of the replica set)
```bash
kubectl describe rs rng-yyyyyyyy
kubectl describe rs -l run=rng
kubectl describe rs -l app=rng
```
]
@@ -433,11 +473,11 @@ Of course, option 2 offers more learning opportunities. Right?
<!--
```wait Please edit the object below```
```keys /run: rng```
```keys /app: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
```keys /run: rng```
```keys /app: rng```
```keys ^J```
```keys oisactive: "yes"```
```keys ^[``` ]
@@ -452,7 +492,7 @@ Of course, option 2 offers more learning opportunities. Right?
<!--
```wait Please edit the object below```
```keys /run: rng```
```keys /app: rng```
```keys ^J```
```keys noisactive: "yes"```
```keys ^[``` ]
@@ -468,9 +508,9 @@ Of course, option 2 offers more learning opportunities. Right?
.exercise[
- Check the most recent log line of all `run=rng` pods to confirm that exactly one per node is now active:
- Check the most recent log line of all `app=rng` pods to confirm that exactly one per node is now active:
```bash
kubectl logs -l run=rng --tail 1
kubectl logs -l app=rng --tail 1
```
]
@@ -496,14 +536,14 @@ The timestamps should give us a hint about how many pods are currently receiving
.exercise[
- List the pods with `run=rng` but without `isactive=yes`:
- List the pods with `app=rng` but without `isactive=yes`:
```bash
kubectl get pods -l run=rng,isactive!=yes
kubectl get pods -l app=rng,isactive!=yes
```
- Remove these pods:
```bash
kubectl delete pods -l run=rng,isactive!=yes
kubectl delete pods -l app=rng,isactive!=yes
```
]
@@ -581,7 +621,7 @@ Ding, dong, the deployment is dead! And the daemon set lives on.
labels:
isactive: "yes"
'
kubectl get pods -l run=rng -l controller-revision-hash -o name |
kubectl get pods -l app=rng -l controller-revision-hash -o name |
xargs kubectl patch -p "$PATCH"
```

View File

@@ -392,9 +392,9 @@ This is normal: we haven't provided any ingress rule yet.
- Run all three deployments:
```bash
kubectl run cheddar --image=errm/cheese:cheddar
kubectl run stilton --image=errm/cheese:stilton
kubectl run wensleydale --image=errm/cheese:wensleydale
kubectl create deployment cheddar --image=errm/cheese:cheddar
kubectl create deployment stilton --image=errm/cheese:stilton
kubectl create deployment wensleydale --image=errm/cheese:wensleydale
```
- Create a service for each of them:

View File

@@ -57,31 +57,49 @@ Under the hood: `kube-proxy` is using a userland proxy and a bunch of `iptables`
- Since `ping` doesn't have anything to connect to, we'll have to run something else
- We could use the `nginx` official image, but ...
... we wouldn't be able to tell the backends from each other!
- We are going to use `jpetazzo/httpenv`, a tiny HTTP server written in Go
- `jpetazzo/httpenv` listens on port 8888
- It serves its environment variables in JSON format
- The environment variables will include `HOSTNAME`, which will be the pod name
(and therefore, will be different on each backend)
---
## Creating a deployment for our HTTP server
- We *could* do `kubectl run httpenv --image=jpetazzo/httpenv` ...
- But since `kubectl run` is being deprecated, let's see how to use `kubectl create` instead
.exercise[
- Start a bunch of HTTP servers:
```bash
kubectl run httpenv --image=jpetazzo/httpenv --replicas=10
```
- Watch them being started:
- In another window, watch the pods (to see when they will be created):
```bash
kubectl get pods -w
```
<!--
```wait httpenv-```
```keys ^C```
-->
<!-- ```keys ^C``` -->
- Create a deployment for this very lightweight HTTP server:
```bash
kubectl create deployment httpenv --image=jpetazzo/httpenv
```
- Scale it to 10 replicas:
```bash
kubectl scale deployment httpenv --replicas=10
```
]
The `jpetazzo/httpenv` image runs an HTTP server on port 8888.
<br/>
It serves its environment variables in JSON format.
The `-w` option "watches" events happening on the specified resources.
---
## Exposing our deployment
@@ -92,12 +110,12 @@ The `-w` option "watches" events happening on the specified resources.
- Expose the HTTP port of our server:
```bash
kubectl expose deploy/httpenv --port 8888
kubectl expose deployment httpenv --port 8888
```
- Look up which IP address was allocated:
```bash
kubectl get svc
kubectl get service
```
]
@@ -237,7 +255,7 @@ class: extra-details
- These IP addresses should match the addresses of the corresponding pods:
```bash
kubectl get pods -l run=httpenv -o wide
kubectl get pods -l app=httpenv -o wide
```
---

View File

@@ -173,6 +173,11 @@ pod/pingpong-7c8bbcd9bc-6c9qz 1/1 Running 0 10m
kubectl scale deploy/pingpong --replicas 8
```
- Note that this command does exactly the same thing:
```bash
kubectl scale deployment pingpong --replicas 8
```
]
Note: what if we tried to scale `replicaset.apps/pingpong-xxxxxxxxxx`?
@@ -290,6 +295,20 @@ Unfortunately, `--follow` cannot (yet) be used to stream the logs from multiple
---
## `kubectl logs -l ... --tail N`
- With Kubernetes 1.12 (and up to at least 1.12.2), the last command shows multiple lines
- This is a regression when `--tail` is used together with `-l`/`--selector`
- It always shows the last 10 lines of output for each container
(instead of the number of lines specified on the command line)
- See [#70554](https://github.com/kubernetes/kubernetes/issues/70554) for details
---
## Aren't we flooding 1.1.1.1?
- If you're wondering this, good question!

View File

@@ -14,11 +14,11 @@
- Download the `kubectl` binary from one of these links:
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl)
[Linux](https://storage.googleapis.com/kubernetes-release/release/v1.12.2/bin/linux/amd64/kubectl)
|
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl)
[macOS](https://storage.googleapis.com/kubernetes-release/release/v1.12.2/bin/darwin/amd64/kubectl)
|
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/windows/amd64/kubectl.exe)
[Windows](https://storage.googleapis.com/kubernetes-release/release/v1.12.2/bin/windows/amd64/kubectl.exe)
- On Linux and macOS, make the binary executable with `chmod +x kubectl`

View File

@@ -62,10 +62,12 @@ Exactly what we need!
- The following commands will install Stern on a Linux Intel 64 bit machine:
```bash
sudo curl -L -o /usr/local/bin/stern \
https://github.com/wercker/stern/releases/download/1.8.0/stern_linux_amd64
https://github.com/wercker/stern/releases/download/1.10.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
```
<!-- ##VERSION## -->
---
## Using Stern
@@ -130,11 +132,13 @@ Exactly what we need!
- We can use that property to view the logs of all the pods created with `kubectl run`
- Similarly, everything created with `kubectl create deployment` has a label `app`
.exercise[
- View the logs for all the things started with `kubectl run`:
- View the logs for all the things started with `kubectl create deployment`:
```bash
stern -l run
stern -l app
```
<!--

View File

@@ -117,13 +117,13 @@ This is our game plan:
- Let's use the `nginx` image:
```bash
kubectl run testweb --image=nginx
kubectl create deployment testweb --image=nginx
```
- Find out the IP address of the pod with one of these two commands:
```bash
kubectl get pods -o wide -l run=testweb
IP=$(kubectl get pods -l run=testweb -o json | jq -r .items[0].status.podIP)
kubectl get pods -o wide -l app=testweb
IP=$(kubectl get pods -l app=testweb -o json | jq -r .items[0].status.podIP)
```
- Check that we can connect to the server:
@@ -138,7 +138,7 @@ The `curl` command should show us the "Welcome to nginx!" page.
## Adding a very restrictive network policy
- The policy will select pods with the label `run=testweb`
- The policy will select pods with the label `app=testweb`
- It will specify an empty list of ingress rules (matching nothing)
@@ -172,7 +172,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress: []
```
@@ -207,7 +207,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: testweb
app: testweb
ingress:
- from:
- podSelector:
@@ -325,7 +325,7 @@ spec:
## Allowing traffic to `webui` pods
This policy selects all pods with label `run=webui`.
This policy selects all pods with label `app=webui`.
It allows traffic from any source.
@@ -339,7 +339,7 @@ metadata:
spec:
podSelector:
matchLabels:
run: webui
app: webui
ingress:
- from: []
```

View File

@@ -74,7 +74,7 @@ In this part, we will:
- Create the registry service:
```bash
kubectl run registry --image=registry
kubectl create deployment registry --image=registry
```
- Expose it on a NodePort:
@@ -275,13 +275,13 @@ class: extra-details
- Deploy `redis`:
```bash
kubectl run redis --image=redis
kubectl create deployment redis --image=redis
```
- Deploy everything else:
```bash
for SERVICE in hasher rng webui worker; do
kubectl run $SERVICE --image=$REGISTRY/$SERVICE:$TAG
kubectl create deployment $SERVICE --image=$REGISTRY/$SERVICE:$TAG
done
```

View File

@@ -22,14 +22,19 @@
.exercise[
- Let's start a replicated `nginx` deployment:
- Let's create a deployment running `nginx`:
```bash
kubectl run yanginx --image=nginx --replicas=3
kubectl create deployment yanginx --image=nginx
```
- Scale it to a few replicas:
```bash
kubectl scale deployment yanginx --replicas=3
```
- Once it's up, check the corresponding pods:
```bash
kubectl get pods -l run=yanginx -o yaml | head -n 25
kubectl get pods -l app=yanginx -o yaml | head -n 25
```
]
@@ -99,12 +104,12 @@ so the lines should not be indented (otherwise the indentation will insert space
- Delete the Deployment:
```bash
kubectl delete deployment -l run=yanginx --cascade=false
kubectl delete deployment -l app=yanginx --cascade=false
```
- Delete the Replica Set:
```bash
kubectl delete replicaset -l run=yanginx --cascade=false
kubectl delete replicaset -l app=yanginx --cascade=false
```
- Check that the pods are still here:
@@ -126,7 +131,7 @@ class: extra-details
- If we change the labels on a dependent, so that it's not selected anymore
(e.g. change the `run: yanginx` in the pods of the previous example)
(e.g. change the `app: yanginx` in the pods of the previous example)
- If a deployment tool that we're using does these things for us
@@ -174,4 +179,4 @@ class: extra-details
]
As always, the [documentation](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) has useful extra information and pointers.
As always, the [documentation](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) has useful extra information and pointers.

View File

@@ -151,7 +151,7 @@ scrape_configs:
## Running Prometheus on our cluster
We need to:
We would need to:
- Run the Prometheus server in a pod
@@ -171,19 +171,21 @@ We need to:
## Helm Charts to the rescue
- To make our lives easier, we are going to use a Helm Chart
- To make our lives easier, we could use a Helm Chart
- The Helm Chart will take care of all the steps explained above
- The Helm Chart would take care of all the steps explained above
(including some extra features that we don't need, but won't hurt)
- In fact, Prometheus has been pre-installed on our clusters with Helm
(it was pre-installed so that it would be populated with metrics by now)
---
## Step 1: install Helm
## Step 1: if we had to install Helm
- If we already installed Helm earlier, these commands won't break anything
.exercise[
- Note that if Helm is already installed, these commands won't break anything
- Install Tiller (Helm's server-side component) on our cluster:
```bash
@@ -196,27 +198,17 @@ We need to:
--clusterrole=cluster-admin --serviceaccount=kube-system:default
```
]
---
## Step 2: install Prometheus
## Step 2: if we had to install Prometheus
- Skip this if we already installed Prometheus earlier
(in doubt, check with `helm list`)
.exercise[
- Install Prometheus on our cluster:
- This is how we would use Helm to deploy Prometheus on the cluster:
```bash
helm install stable/prometheus \
--set server.service.type=NodePort \
--set server.persistentVolume.enabled=false
```
]
The provided flags:
- expose the server web UI (and API) on a NodePort
@@ -235,11 +227,13 @@ The provided flags:
- Figure out the NodePort that was allocated to the Prometheus server:
```bash
kubectl get svc | grep prometheus-server
kubectl get svc -n kube-system | grep prometheus-server
```
- With your browser, connect to that port
(spoiler alert: it should be 30090)
]
---

View File

@@ -4,7 +4,9 @@
--
- We used `kubeadm` on freshly installed VM instances running Ubuntu 16.04 LTS
<!-- ##VERSION## -->
- We used `kubeadm` on freshly installed VM instances running Ubuntu 18.04 LTS
1. Install Docker

View File

@@ -1,9 +1,10 @@
## Versions installed
- Kubernetes 1.12.0
- Docker Engine 18.06.1-ce
- Kubernetes 1.12.2
- Docker Engine 18.09.0
- Docker Compose 1.21.1
<!-- ##VERSION## -->
.exercise[

View File

@@ -1,14 +1,15 @@
title: |
Deploying and Scaling Microservices
with Kubernetes
Getting Started With
Kubernetes and
Container Orchestration
#chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "In person!"
chat: "Gitter ([Thursday](https://gitter.im/jpetazzo/workshop-20181108-sanfrancisco)|[Friday](https://gitter.im/jpetazzo/workshop-20181109-sanfrancisco))"
gitrepo: github.com/jpetazzo/container.training
slides: http://container.training/
slides: http://qconsf2018.container.training/
exclude:
- self-paced

View File

@@ -1,26 +1,11 @@
## Intros
- This slide should be customized by the tutorial instructor(s).
- Hello! I'm
Jérôme Petazzoni ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- Hello! We are:
- The workshop will run from 9am to 4pm
- .emoji[👩🏻‍🏫] Ann O'Nymous ([@...](https://twitter.com/...), Megacorp Inc)
- .emoji[👨🏾‍🎓] Stu Dent ([@...](https://twitter.com/...), University of Wakanda)
<!-- .dummy[
- .emoji[👷🏻‍♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS)
- .emoji[⛵] Jérémy ([@jeremygarrouste](https://twitter.com/jeremygarrouste), Inpiwee)
] -->
- The workshop will run from ...
- There will be a lunch break at ...
- There will be a lunch break from noon to 1pm
(And coffee breaks!)

slides/override.css Normal file
View File

@@ -0,0 +1,17 @@
.remark-slide-content:not(.pic) {
background-repeat: no-repeat;
background-position: 99% 1%;
background-size: 8%;
background-image: url(https://enix.io/static/img/logos/logo-domain-cropped.png);
}
div.extra-details:not(.pic) {
background-image: url("images/extra-details.png"), url(https://enix.io/static/img/logos/logo-domain-cropped.png);
background-position: 0.5% 1%, 99% 1%;
background-size: 4%, 8%;
}
.remark-slide-content:not(.pic) div.remark-slide-number {
top: 16px;
right: 112px
}

View File

@@ -9,3 +9,20 @@ class: title, in-person
That's all, folks! <br/> Questions?
![end](images/end.jpg)
---
## Final words
- You can find more content on http://container.training/
(More slides, videos, dates of upcoming workshops and tutorials...)
- If you want me to train your team:
[contact me!](https://docs.google.com/forms/d/e/1FAIpQLScm2evHMvRU8C5ZK59l8FGsLY_Kkup9P_GHgjfByUMyMpMmDA/viewform)
(This workshop is also available as longer training sessions, covering advanced topics)
- The organizers of this conference would like you to rate this workshop!
.footnote[*Thank you!*]

View File

@@ -4,6 +4,7 @@
<title>@@TITLE@@</title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<link rel="stylesheet" href="workshop.css">
<link rel="stylesheet" href="override.css">
</head>
<body>
<!--