Mirror of https://github.com/jpetazzo/container.training.git (synced 2026-02-15 01:59:57 +00:00)

Comparing `oscon2018...indexconf2` (12 commits)
Commits: 0900d605ef, f67cfa8693, cb8690f4a3, 7a6d488d60, b1d8b5eec8, 1983d6cb4f, e565da49ca, ee33799a8f, b61426a044, fd057d8a1e, 4b76fbcc4b, b25d40f48e
`slides/_redirects` (new file, 2 lines added)
@@ -0,0 +1,2 @@
/ /kube-halfday.yml.html 200!
@@ -9,12 +9,10 @@
- We recommend having a mentor to help you ...

- ... Or be comfortable spending some time reading the Docker
  [documentation](https://docs.docker.com/) ...
- ... Or be comfortable spending some time reading the Kubernetes
  [documentation](https://kubernetes.io/docs/) ...

- ... And looking for answers in the [Docker forums](forums.docker.com),
  [StackOverflow](http://stackoverflow.com/questions/tagged/docker),
  and other outlets
- ... And looking for answers on [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and other outlets

---
@@ -26,7 +26,7 @@ class: extra-details
- This slide should have a little magnifying glass in the top left corner

(If it doesn't, it's because CSS is hard — Jérôme is only a backend person, alas)
(If it doesn't, it's because CSS is hard — we're only backend people, alas!)

- Slides with that magnifying glass indicate slides providing extra details
@@ -62,7 +62,7 @@ Misattributed to Benjamin Franklin
- This is the stuff you're supposed to do!

- Go to [container.training](http://container.training/) to view these slides
- Go to [indexconf2018.container.training](http://indexconf2018.container.training/) to view these slides

- Join the chat room on @@CHAT@@
@@ -78,17 +78,11 @@ class: in-person
---

class: in-person, pic



---

class: in-person

## You get five VMs
## You get three VMs

- Each person gets 5 private VMs (not shared with anybody else)
- Each person gets 3 private VMs (not shared with anybody else)

- They'll remain up for the duration of the workshop
@@ -96,7 +90,7 @@ class: in-person
- You can automatically SSH from one VM to another

- The nodes have aliases: `node1`, `node2`, etc.
- The nodes have aliases: `node1`, `node2`, `node3`.

---
@@ -153,7 +147,7 @@ class: in-person
<!--
```bash
for N in $(seq 1 5); do
for N in $(seq 1 3); do
  ssh -o StrictHostKeyChecking=no node$N true
done
```
@@ -21,65 +21,6 @@
---

class: extra-details

## Compose file format version

*Particularly relevant if you have used Compose before...*

- Compose 1.6 introduced support for a new Compose file format (aka "v2")

- Services are no longer at the top level, but under a `services` section

- There has to be a `version` key at the top level, with value `"2"` (as a string, not an integer)

- Containers are placed on a dedicated network, making links unnecessary

- There are other minor differences, but upgrade is easy and straightforward
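For reference, a minimal hypothetical "v2" file illustrating the points above could look like this (the service names and images are made up; this is not the DockerCoins Compose file):

```bash
# Write a tiny "v2" Compose file and start it in the background.
# Note: `version` is a string, and services live under `services`.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  redis:
    image: redis
  web:
    image: nginx
    ports:
      - "8000:80"
EOF
docker-compose up -d
```

Both services end up on a dedicated network and can reach each other as `redis` and `web`, without any `links` section.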
---
## Links, naming, and service discovery

- Containers can have network aliases (resolvable through DNS)

- Compose file version 2+ makes each container reachable through its service name

- Compose file version 1 did require "links" sections

- Our code can connect to services using their short name

  (instead of e.g. IP address or FQDN)

- Network aliases are automatically namespaced

  (i.e. you can have multiple apps declaring and using a service named `database`)

---

## Example in `worker/worker.py`

```python
redis = Redis("`redis`")


def get_random_bytes():
    r = requests.get("http://`rng`/32")
    return r.content


def hash_bytes(data):
    r = requests.post("http://`hasher`/",
                      data=data,
                      headers={"Content-Type": "application/octet-stream"})
```

(Full source code available [here](
https://github.com/jpetazzo/container.training/blob/8279a3bce9398f7c1a53bdd95187c53eda4e6435/dockercoins/worker/worker.py#L17
))

---

## What's this application?

--
@@ -124,7 +65,7 @@ fi
- Clone the repository on `node1`:
  ```bash
  git clone git://github.com/jpetazzo/container.training
  git clone https://github.com/jpetazzo/container.training/
  ```

]
@@ -183,77 +124,6 @@ and displays aggregated logs.
---

## Restarting in the background

- Many flags and commands of Compose are modeled after those of `docker`

.exercise[

- Start the app in the background with the `-d` option:
  ```bash
  docker-compose up -d
  ```

- Check that our app is running with the `ps` command:
  ```bash
  docker-compose ps
  ```

]

`docker-compose ps` also shows the ports exposed by the application.

---

class: extra-details

## Viewing logs

- The `docker-compose logs` command works like `docker logs`

.exercise[

- View all logs since container creation and exit when done:
  ```bash
  docker-compose logs
  ```

- Stream container logs, starting at the last 10 lines for each container:
  ```bash
  docker-compose logs --tail 10 --follow
  ```

<!--
```wait units of work done```
```keys ^C```
-->

]

Tip: use `^S` and `^Q` to pause/resume log output.
---

class: extra-details

## Upgrading from Compose 1.6

.warning[The `logs` command has changed between Compose 1.6 and 1.7!]

- Up to 1.6

  - `docker-compose logs` is the equivalent of `logs --follow`

  - `docker-compose logs` must be restarted if containers are added

- Since 1.7

  - `--follow` must be specified explicitly

  - new containers are automatically picked up by `docker-compose logs`

---

## Connecting to the web UI

- The `webui` container exposes a web dashboard; let's view it
@@ -275,245 +145,6 @@ graph will appear.
---

class: self-paced, extra-details

## If the graph doesn't load

If you just see a `Page not found` error, it might be because your
Docker Engine is running on a different machine. This can be the case if:

- you are using the Docker Toolbox

- you are using a VM (local or remote) created with Docker Machine

- you are controlling a remote Docker Engine

When you run DockerCoins in development mode, the web UI static files
are mapped to the container using a volume. Alas, volumes can only
work on a local environment, or when using Docker4Mac or Docker4Windows.

How to fix this?

Edit `dockercoins.yml` and comment out the `volumes` section, and try again.

---

class: extra-details

## Why does the speed seem irregular?

- It *looks like* the speed is approximately 4 hashes/second

- Or more precisely: 4 hashes/second, with regular dips down to zero

- Why?

--

class: extra-details

- The app actually has a constant, steady speed: 3.33 hashes/second
  <br/>
  (which corresponds to 1 hash every 0.3 seconds, for *reasons*)

- Yes, and?

---
class: extra-details

## The reason why this graph is *not awesome*

- The worker doesn't update the counter after every loop, but up to once per second

- The speed is computed by the browser, checking the counter about once per second

- Between two consecutive updates, the counter will increase either by 4, or by 0

- The perceived speed will therefore be 4 - 4 - 4 - 0 - 4 - 4 - 0 etc.

- What can we conclude from this?

--

class: extra-details

- Jérôme is clearly incapable of writing good frontend code

---

## Scaling up the application

- Our goal is to make that performance graph go up (without changing a line of code!)

--

- Before trying to scale the application, we'll figure out if we need more resources

  (CPU, RAM...)

- For that, we will use good old UNIX tools on our Docker node

---
## Looking at resource usage

- Let's look at CPU, memory, and I/O usage

.exercise[

- run `top` to see CPU and memory usage (you should see idle cycles)

<!--
```bash top```

```wait Tasks```
```keys ^C```
-->

- run `vmstat 1` to see I/O usage (si/so/bi/bo)
  <br/>(the 4 numbers should be almost zero, except `bo` for logging)

<!--
```bash vmstat 1```

```wait memory```
```keys ^C```
-->

]

We have available resources.

- Why?
- How can we use them?

---
## Scaling workers on a single node

- Docker Compose supports scaling
- Let's scale `worker` and see what happens!

.exercise[

- Start one more `worker` container:
  ```bash
  docker-compose scale worker=2
  ```

- Look at the performance graph (it should show a x2 improvement)

- Look at the aggregated logs of our containers (`worker_2` should show up)

- Look at the impact on CPU load with e.g. top (it should be negligible)

]

---

## Adding more workers

- Great, let's add more workers and call it a day, then!

.exercise[

- Start eight more `worker` containers:
  ```bash
  docker-compose scale worker=10
  ```

- Look at the performance graph: does it show a x10 improvement?

- Look at the aggregated logs of our containers

- Look at the impact on CPU load and memory usage

]

---
# Identifying bottlenecks

- You should have seen a 3x speed bump (not 10x)

- Adding workers didn't result in linear improvement

- *Something else* is slowing us down

--

- ... But what?

--

- The code doesn't have instrumentation

- Let's use state-of-the-art HTTP performance analysis!
  <br/>(i.e. good old tools like `ab`, `httping`...)
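For instance, assuming `rng` is published on host port 8001 (as the next slides show) and serves N random bytes at `/N` (as in `worker.py`), a quick `ab` run could look like this; a sketch, not part of the original exercise:

```bash
# 1000 requests, 10 at a time, against the rng service
ab -c 10 -n 1000 http://localhost:8001/32
```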
---
## Accessing internal services

- `rng` and `hasher` are exposed on ports 8001 and 8002

- This is declared in the Compose file:

  ```yaml
  ...
  rng:
    build: rng
    ports:
    - "8001:80"

  hasher:
    build: hasher
    ports:
    - "8002:80"
  ...
  ```

---

## Measuring latency under load

We will use `httping`.

.exercise[

- Check the latency of `rng`:
  ```bash
  httping -c 3 localhost:8001
  ```

- Check the latency of `hasher`:
  ```bash
  httping -c 3 localhost:8002
  ```

]

`rng` has a much higher latency than `hasher`.

---

## Let's draw hasty conclusions

- The bottleneck seems to be `rng`

- *What if* we don't have enough entropy and can't generate enough random numbers?

- We need to scale out the `rng` service on multiple machines!

Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.

(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
<br/>
...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)

---

## Clean up

- Before moving on, let's remove those containers
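The exact commands are not shown in this excerpt; the usual way to stop and remove everything created by Compose (the containers and the default network) would be something like:

```bash
docker-compose down
```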
@@ -6,7 +6,7 @@ Thank you!
class: title, in-person

That's all folks! <br/> Questions?
That's all, folks! <br/> Questions?


@@ -14,13 +14,9 @@ That's all folks! <br/> Questions?
# Links and resources

- [Docker Community Slack](https://community.docker.com/registrations/groups/4316)
- [Docker Community Forums](https://forums.docker.com/)
- [Docker Hub](https://hub.docker.com)
- [Docker Blog](http://blog.docker.com/)
- [Docker documentation](http://docs.docker.com/)
- [Docker on StackOverflow](https://stackoverflow.com/questions/tagged/docker)
- [Docker on Twitter](http://twitter.com/docker)
- [Play With Docker Hands-On Labs](http://training.play-with-docker.com/)
- [Kubernetes Community](https://kubernetes.io/community/) - Slack, Google Groups, meetups
- [Play With Kubernetes Hands-On Labs](https://medium.com/@marcosnils/introducing-pwk-play-with-k8s-159fcfeb787b)
- [Local meetups](https://www.meetup.com/)
- [Microsoft Cloud Developer Advocates](https://developer.microsoft.com/en-us/advocates/)

.footnote[These slides (and future updates) are on → http://container.training/]
@@ -17,5 +17,5 @@ class: title, in-person
*Don't stream videos or download big files during the workshop.*<br/>
*Thank you!*

**Slides: http://container.training/**
]
**Slides: http://indexconf2018.container.training/**
]
slides/images/k8s-arch4-thanks-luxas.png (new binary file, 144 KiB; not shown)
@@ -74,7 +74,7 @@
<td><a class="slides" href="http://indexconf2018.container.training/" /></td>
<td><a class="attend" href="https://developer.ibm.com/indexconf/sessions/#!?id=5474" />
</tr>


<tr><td class="title" colspan="4">Past workshops</td></tr>

<tr>
@@ -1,9 +1,7 @@
title: |
  Deploying and Scaling Microservices
  with Docker and Kubernetes
  Kubernetes 101

chat: "[Slack](https://dockercommunity.slack.com/messages/C7GKACWDV)"
#chat: "[Gitter](https://gitter.im/jpetazzo/workshop-yyyymmdd-city)"
chat: "[Gitter](https://gitter.im/jpetazzo/workshop-20180222-sf)"

exclude:
- self-paced
@@ -211,3 +211,11 @@ class: pic


(Diagram courtesy of Weave Works, used with permission.)

---

class: pic



(Diagram courtesy of Lucas Käldström, in [this presentation](https://speakerdeck.com/luxas/kubeadm-cluster-creation-internals-from-self-hosting-to-upgradability-and-ha).)
@@ -1,10 +1,8 @@
# Daemon sets

- Remember: we did all that cluster orchestration business for `rng`
- What if we want one (and exactly one) instance of `rng` per node?

- We want one (and exactly one) instance of `rng` per node

- If we just scale `deploy/rng` to 4, nothing guarantees that they spread
- If we just scale `deploy/rng` to 2, nothing guarantees that they spread

- Instead of a `deployment`, we will use a `daemonset`
@@ -22,7 +20,7 @@
## Creating a daemon set

- Unfortunately, as of Kubernetes 1.8, the CLI cannot create daemon sets
- Unfortunately, as of Kubernetes 1.9, the CLI cannot create daemon sets

--
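The manifest actually used in the workshop is not shown in this excerpt; as a rough sketch, a hand-written daemon set for `rng` could look like the following, assuming `$REGISTRY` is set as earlier and the pods carry the `run=rng` label:

```bash
# Hypothetical DaemonSet manifest; apps/v1 is available starting with Kubernetes 1.9
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: rng
spec:
  selector:
    matchLabels:
      run: rng
  template:
    metadata:
      labels:
        run: rng
    spec:
      containers:
      - name: rng
        image: $REGISTRY/rng:v0.1
EOF
```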
@@ -382,7 +380,7 @@ Of course, option 2 offers more learning opportunities. Right?
.exercise[

- Check the logs of all `run=rng` pods to confirm that only 4 of them are now active:
- Check the logs of all `run=rng` pods to confirm that only 2 of them are now active:
  ```bash
  kubectl logs -l run=rng
  ```
@@ -406,4 +404,4 @@ The timestamps should give us a hint about how many pods are currently receiving
- Bonus exercise 1: clean up the pods of the "old" daemon set

- Bonus exercise 2: how could we have done to avoid creating new pods?
- Bonus exercise 2: how could we have done this to avoid creating new pods?
@@ -89,7 +89,13 @@ The goo.gl URL expands to:
- Connect to https://oneofournodes:3xxxx/

  (You will have to work around the TLS certificate validation warning)
- Yes, https. If you use http it will say:

  This page isn’t working
  <oneofournodes> sent an invalid response.
  ERR_INVALID_HTTP_RESPONSE

- You will have to work around the TLS certificate validation warning

<!-- ```open https://node1:3xxxx/``` -->
@@ -109,7 +115,7 @@ The goo.gl URL expands to:
## Granting more rights to the dashboard

- The dashboard documentation [explains how to do](https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges)
- The dashboard documentation [explains how to do this](https://github.com/kubernetes/dashboard/wiki/Access-control#admin-privileges)

- We just need to load another YAML file!
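That YAML file is not reproduced in this excerpt. Judging from the linked documentation, it boils down to a ClusterRoleBinding along these lines (the resource and account names are assumptions; check the documentation for the exact manifest):

```bash
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
```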
@@ -48,7 +48,7 @@
.exercise[

- Give us more info about them nodes:
- Give us more info about the nodes:
  ```bash
  kubectl get nodes -o wide
  ```
@@ -136,7 +136,7 @@ There is already one service on our cluster: the Kubernetes API itself.
```

- `-k` is used to skip certificate verification

- Make sure to replace 10.96.0.1 with the CLUSTER-IP shown earlier
- Make sure to replace 10.96.0.1 with the CLUSTER-IP shown by `$ kubectl get svc`

]
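Putting those notes together, the command referenced above would look roughly like this, assuming the CLUSTER-IP reported by `kubectl get svc` is indeed 10.96.0.1:

```bash
# -k skips certificate verification; substitute your own CLUSTER-IP
curl -k https://10.96.0.1
```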
@@ -173,7 +173,7 @@ The error that we see is expected: the Kubernetes API requires authentication.
## Namespaces

- Namespaces allow to segregate resources
- Namespaces allow us to segregate resources

.exercise[
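The exercise steps are cut off in this diff; typical commands for exploring namespaces, as a sketch rather than the slide's exact steps, would be:

```bash
# List the namespaces on the cluster
kubectl get namespaces
# List pods in a specific namespace, e.g. the system one
kubectl get pods --namespace=kube-system
```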
@@ -245,4 +245,4 @@ at the Google NOC ...
<br/>
.small[are we getting 1000 packets per second]
<br/>
.small[of ICMP ECHO traffic from EC2 ?!?”]
.small[of ICMP ECHO traffic from Azure ?!?”]
@@ -40,7 +40,7 @@ In this part, we will:
- We could use the Docker Hub

- Or a service offered by our cloud provider (GCR, ECR...)
- Or a service offered by our cloud provider (ACR, GCR, ECR...)

- Or we could just self-host that registry
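For a rough idea of what self-hosting involves, the simplest sketch is running the official `registry` image on a single host (the workshop itself may deploy it differently, e.g. on the cluster):

```bash
# Start a local registry listening on port 5000
docker run -d --name registry -p 5000:5000 registry:2
```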
@@ -149,7 +149,7 @@ Our rollout is stuck. However, the app is not dead (just 10% slower).
- We want to:

  - revert to `v0.1`
  - revert to `v0.1` (which we now realize we didn't tag - yikes!)
  - be conservative on availability (always have desired number of available workers)
  - be aggressive on rollout speed (update more than one pod at a time)
  - give some time to our workers to "warm up" before starting more
@@ -163,7 +163,7 @@ spec:
    spec:
      containers:
      - name: worker
        image: $REGISTRY/worker:v0.1
        image: $REGISTRY/worker:latest
  strategy:
    rollingUpdate:
      maxUnavailable: 0
@@ -192,7 +192,7 @@ spec:
    spec:
      containers:
      - name: worker
        image: $REGISTRY/worker:v0.1
        image: $REGISTRY/worker:latest
  strategy:
    rollingUpdate:
      maxUnavailable: 0
@@ -4,7 +4,7 @@
--

- We used `kubeadm` on "fresh" EC2 instances with Ubuntu 16.04 LTS
- We used `kubeadm` on Azure instances with Ubuntu 16.04 LTS

1. Install Docker
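The remaining steps are cut off in this excerpt; the usual `kubeadm` flow looks roughly like the sketch below. The placeholders are assumptions, not the exact commands used for these VMs.

```bash
# On the first node: initialize the control plane (prints a join command)
sudo kubeadm init
# Install a pod network add-on (Weave, Flannel, ...); the manifest is a placeholder
kubectl apply -f pod-network.yaml
# On each of the other nodes: join the cluster with the token printed above
sudo kubeadm join --token <token> <node1-ip>:6443
```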
@@ -36,26 +36,25 @@
--

- It's still twice as many steps as setting up a Swarm cluster 😕
- "It's still twice as many steps as setting up a Swarm cluster 😕 " -- Jérôme

---

## Other deployment options

- If you are on Google Cloud:
  [GKE](https://cloud.google.com/container-engine/)
- If you are on Azure:
  [AKS](https://azure.microsoft.com/services/container-service/)

  Empirically the best Kubernetes deployment out there
- If you are on Google Cloud:
  [GKE](https://cloud.google.com/kubernetes-engine/)

- If you are on AWS:
  [kops](https://github.com/kubernetes/kops)

  ... But with AWS re:invent just around the corner, expect some changes
  [EKS](https://aws.amazon.com/eks/)

- On a local machine:
  [minikube](https://kubernetes.io/docs/getting-started-guides/minikube/),
  [kubespawn](https://github.com/kinvolk/kube-spawn),
  [Docker4Mac (coming soon)](https://beta.docker.com/)
  [Docker4Mac](https://docs.docker.com/docker-for-mac/kubernetes/)

- If you want something customizable:
  [kubicorn](https://github.com/kris-nova/kubicorn)
@@ -1,8 +1,8 @@
## Brand new versions!
## Versions Installed

- Kubernetes 1.8
- Docker Engine 17.11
- Docker Compose 1.17
- Kubernetes 1.9.3
- Docker Engine 18.02.0-ce
- Docker Compose 1.18.0


.exercise[
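The exercise content is elided in this excerpt; commands along these lines would confirm the versions listed above (a sketch, not necessarily the slide's exact steps):

```bash
kubectl version --short
docker version --format 'Engine {{.Server.Version}}'
docker-compose version
```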
@@ -131,6 +131,7 @@ And *then* it is time to look at orchestration!
- shell scripts invoking `kubectl`
- YAML resources descriptions committed to a repo
- [Brigade](https://brigade.sh/) (event-driven scripting; no YAML)
- [Helm](https://github.com/kubernetes/helm) (~package manager)
- [Spinnaker](https://www.spinnaker.io/) (Netflix' CD platform)
@@ -160,7 +161,7 @@ Sorry Star Trek fans, this is not the federation you're looking for!
- Raft recommends low latency between nodes

- What if our cluster spreads multiple regions?
- What if our cluster spreads to multiple regions?

--
@@ -2,18 +2,19 @@
- Hello! We are:

  - .emoji[👷🏻♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), Travis CI)
  - .emoji[✨] Bridget ([@bridgetkromhout](https://twitter.com/bridgetkromhout))

  - .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Docker Inc.)
  - .emoji[🌟] Jessica ([@jldeen](https://twitter.com/jldeen))

- The workshop will run from 9am to 4pm
- .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo))

- There will be a lunch break at noon
- This workshop will run from 10:30am-12:45pm.

  (And coffee breaks!)
- Lunchtime is after the workshop!

  (And we will take a 15min break at 11:30am!)

- Feel free to interrupt for questions at any time

- *Especially when you see full screen container pictures!*

- Live feedback, questions, help on @@CHAT@@