mirror of
https://github.com/jpetazzo/container.training.git
synced 2026-05-06 00:46:56 +00:00
<!DOCTYPE html>
|
|
<html>
|
|
<head>
|
|
<base target="_blank">
|
|
<title>Docker Orchestration Workshop</title>
|
|
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
|
|
<style type="text/css">
|
|
@import url(https://fonts.googleapis.com/css?family=Yanone+Kaffeesatz);
|
|
@import url(https://fonts.googleapis.com/css?family=Droid+Serif:400,700,400italic);
|
|
@import url(https://fonts.googleapis.com/css?family=Ubuntu+Mono:400,700,400italic);
|
|
|
|
body { font-family: 'Droid Serif'; font-size: 150%; }
|
|
|
|
h1, h2, h3 {
|
|
font-family: 'Yanone Kaffeesatz';
|
|
font-weight: normal;
|
|
}
|
|
a {
|
|
text-decoration: none;
|
|
color: blue;
|
|
}
|
|
.remark-code, .remark-inline-code { font-family: 'Ubuntu Mono'; }
|
|
.red { color: #fa0000; }
|
|
.gray { color: #ccc; }
|
|
.small { font-size: 70%; }
|
|
.big { font-size: 140%; }
|
|
.underline { text-decoration: underline; }
|
|
.footnote {
|
|
position: absolute;
|
|
bottom: 3em;
|
|
}
|
|
.pic {
|
|
vertical-align: middle;
|
|
text-align: center;
|
|
padding: 0 0 0 0 !important;
|
|
}
|
|
img {
|
|
max-width: 100%;
|
|
max-height: 450px;
|
|
}
|
|
.title {
|
|
vertical-align: middle;
|
|
text-align: center;
|
|
}
|
|
.title {
|
|
font-size: 2em;
|
|
}
|
|
.title .remark-slide-number {
|
|
font-size: 0.5em;
|
|
}
|
|
.quote {
|
|
background: #eee;
|
|
border-left: 10px solid #ccc;
|
|
margin: 1.5em 10px;
|
|
padding: 0.5em 10px;
|
|
quotes: "\201C""\201D""\2018""\2019";
|
|
font-style: italic;
|
|
}
|
|
.quote:before {
|
|
color: #ccc;
|
|
content: open-quote;
|
|
font-size: 4em;
|
|
line-height: 0.1em;
|
|
margin-right: 0.25em;
|
|
vertical-align: -0.4em;
|
|
}
|
|
.quote p {
|
|
display: inline;
|
|
}
|
|
.icon img {
|
|
height: 1em;
|
|
}
|
|
.exercise {
|
|
background-color: #eee;
|
|
background-image: url("keyboard.png");
|
|
background-size: 1.4em;
|
|
background-repeat: no-repeat;
|
|
background-position: 0.2em 0.2em;
|
|
border: 2px dotted black;
|
|
}
|
|
.exercise::before {
|
|
content: "Exercise:";
|
|
margin-left: 1.8em;
|
|
}
|
|
li p { line-height: 1.25em; }
|
|
</style>
|
|
</head>
|
|
<body>
|
|
<textarea id="source">
|
|
|
|
class: title
|
|
|
|
# Docker <br/> Orchestration <br/> Workshop
|
|
|
|
---
|
|
|
|
# Pre-requirements
|
|
|
|
- Computer with network connection and SSH client
|
|
<br/>(on Windows, get [PuTTY](http://www.putty.org/))
|
|
- GitHub account
|
|
- Docker Hub account
|
|
- Basic Docker knowledge
|
|
|
|
.exercise[
|
|
|
|
- This is the stuff you're supposed to do!
|
|
- Create [GitHub](https://github.com/) and
|
|
[Docker Hub](https://hub.docker.com) accounts now if needed
|
|
- Go to [view.dckr.info](http://view.dckr.info) to view those slides
|
|
|
|
]
|
|
|
|
---
|
|
|
|
# VM environment
|
|
|
|
- Each person gets 5 VMs
|
|
- They are *your* VMs
|
|
- They'll be up until tomorrow
|
|
- You have a little card with login+password+IP addresses
|
|
- You can automatically SSH from one VM to another
|
|
|
|
.exercise[
|
|
|
|
- Log into the first VM
|
|
- Check that you can SSH (without password) to `node2`
|
|
- Check the version of docker with `docker version`
|
|
|
|
]
|
|
|
|
Note: from now on, unless instructed otherwise,
all commands must be run from the first VM, `node1`.
|
|
|
|
---
|
|
|
|
## Versions
|
|
|
|
- Docker 1.6 (1.7 will be released in a few days)
|
|
|
|
- Compose 0.3 RC
|
|
|
|
- Swarm 0.3 RC
|
|
|
|
---
|
|
|
|
# Our sample application
|
|
|
|
- Let's look at the general layout of
|
|
[source code](https://github.com/jpetazzo/orchestration-workshop)
|
|
|
|
- Each directory = 1 microservice
|
|
- `rng` = web service generating random bytes
|
|
- `hasher` = web service computing hash of POSTed data
|
|
- `worker` = background process using `rng` and `hasher`
|
|
- `webui` = web interface to watch progress
|
|
|
|
.exercise[
|
|
|
|
- Fork the repository on GitHub
|
|
- Clone your fork on `node1`
|
|
|
|
]
|
|
|
|
|
|
---
|
|
|
|
## What's this application?
|
|
|
|
- It is a DockerCoin miner! 💰🐳📦🚢
|
|
|
|
- No, you can't buy coffee with DockerCoins
|
|
|
|
- How DockerCoins works:
|
|
|
|
- `worker` asks to `rng` to give it random bytes
|
|
- `worker` feeds those random bytes into `hasher`
|
|
- each hash starting with `0` is a DockerCoin
|
|
- DockerCoins are stored in `redis`
|
|
- you can see the progress with the `webui`
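
The mining loop above can be sketched in Python. This is an illustrative reduction, not the app's actual code (in the real app, `worker` gets the bytes from `rng` and the hashing from `hasher` over HTTP):

```
import hashlib

def mine_once(random_bytes):
    # hash the bytes; a digest starting with "0" is a DockerCoin
    digest = hashlib.sha256(random_bytes).hexdigest()
    return digest if digest.startswith("0") else None
```

Roughly 1 out of 16 hashes starts with `0`, so most iterations yield nothing.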
|
|
|
|
Next: we will inspect components independently.
|
|
|
|
---
|
|
|
|
## Running components independently
|
|
|
|
.exercise[
|
|
|
|
- Go to the `dockercoins` directory (in the cloned repo)
|
|
- Run `docker-compose up rng`
|
|
<br/>(Docker will pull `python` and build the microservice)
|
|
|
|
]
|
|
|
|
.icon[] The container log says
|
|
`Running on http://0.0.0.0:80/`
|
|
<br/>but that is port 80 *in the container*.
On the host, it is exposed as port 8001.

This mapping is defined in the `docker-compose.yml` file:
|
|
|
|
```
|
|
rng:
|
|
…
|
|
ports:
|
|
- "8001:80"
|
|
```
|
|
|
|
---
|
|
|
|
## Getting random bytes of data
|
|
|
|
.exercise[
|
|
|
|
- Open a second terminal and connect to the same VM
|
|
- Check that the service is alive:
|
|
<br/>`curl localhost:8001`
|
|
- Get 10 bytes of random data:
|
|
<br/>`curl localhost:8001/10`
|
|
<br/>(the output might confuse your terminal, since this is binary data)
|
|
- Test the performance on one big request:
|
|
<br/>`curl -o/dev/null localhost:8001/10000000`
|
|
<br/>(should take ~1s, and show speed of ~10 MB/s)
|
|
|
|
]
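
For the curious: serving `n` random bytes is tiny in Python. A sketch of what the endpoint plausibly does (the actual service code lives in the `rng` directory of the repo):

```
import os

def random_payload(n):
    # what GET /<n> on the rng service returns: n random bytes
    return os.urandom(n)
```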
|
|
|
|
Next: we'll see how it behaves with many small requests.
|
|
|
|
---
|
|
|
|
## Concurrent requests
|
|
|
|
.exercise[
|
|
|
|
- Test 100 requests of 1000 bytes each:
|
|
<br/>`ab -n 100 localhost:8001/1000`
|
|
- Test 100 requests, 10 requests in parallel:
|
|
<br/>`ab -n 100 -c 10 localhost:8001/1000`
|
|
<br/>(look how the latency has increased!)
|
|
- Try with 100 requests in parallel:
|
|
<br/>`ab -n 100 -c 100 localhost:8001/1000`
|
|
|
|
]
|
|
|
|
Take note of the number of requests/s.
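
A rule of thumb to interpret these numbers (Little's law): sustained throughput ≈ concurrency / latency. A quick sanity check:

```
def expected_throughput(concurrency, latency_seconds):
    # Little's law: requests/s sustainable at a given per-request latency
    return concurrency / latency_seconds

# e.g. 10 concurrent requests, 500 ms each -> about 20 requests/s
```

If latency grows as fast as concurrency, throughput stays flat: the service is saturated.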
|
|
|
|
---
|
|
|
|
## Save some random data and stop the generator
|
|
|
|
Before testing the hasher, let's save some random
|
|
data that we will feed to the hasher later.
|
|
|
|
.exercise[
|
|
|
|
- Run `curl localhost:8001/1000000 > /tmp/random`
|
|
|
|
]
|
|
|
|
Now we can stop the generator.
|
|
|
|
.exercise[
|
|
|
|
- In the shell where you did `docker-compose up rng`,
|
|
<br/>stop it by hitting `^C`
|
|
|
|
]
|
|
|
|
---
|
|
|
|
## Running the hasher
|
|
|
|
.exercise[
|
|
|
|
- Run `docker-compose up hasher`
|
|
<br/>(it will pull `ruby` and do the build)
|
|
|
|
]
|
|
|
|
.icon[] Again, pay attention to the port mapping!
|
|
|
|
The container log says that it's listening on port 80,
|
|
but it's mapped to port 8002 on the host.
|
|
|
|
You can see the mapping in `docker-compose.yml`.
|
|
|
|
---
|
|
|
|
## Testing the hasher
|
|
|
|
.exercise[
|
|
|
|
- Run `curl localhost:8002`
|
|
<br/>(it will say it's alive)
|
|
|
|
- Posting binary data requires some extra flags:
|
|
|
|
```
|
|
curl \
|
|
-H "Content-type: application/octet-stream" \
|
|
--data-binary @/tmp/random \
|
|
localhost:8002
|
|
```
|
|
|
|
- Compute the hash locally to verify that it works fine:
|
|
<br/>`sha256sum /tmp/random`
|
|
<br/>(it should display the same hash)
|
|
|
|
]
|
|
|
|
---
|
|
|
|
## Benchmarking the hasher
|
|
|
|
The invocation of `ab` will be slightly more complex as well.
|
|
|
|
.exercise[
|
|
|
|
- Execute 100 requests in a row:
|
|
|
|
```
|
|
ab -n 100 -T application/octet-stream \
|
|
-p /tmp/random localhost:8002/
|
|
```
|
|
|
|
- Execute 100 requests with 10 requests in parallel:
|
|
|
|
```
|
|
ab -c 10 -n 100 -T application/octet-stream \
|
|
-p /tmp/random localhost:8002/
|
|
```
|
|
|
|
]
|
|
|
|
Take note of the performance numbers (requests/s).
|
|
|
|
---
|
|
|
|
## Benchmarking the hasher on smaller data
|
|
|
|
Here we hashed 1 MB. Later we will hash much smaller payloads.
|
|
|
|
Let's repeat the tests with smaller data.
|
|
|
|
.exercise[
|
|
|
|
- Run `truncate --size=10 /tmp/random`
|
|
- Repeat the `ab` tests
|
|
|
|
]
|
|
|
|
---
|
|
|
|
## Why do `rng` and `hasher` behave differently?
|
|
|
|

|
|
|
|
---
|
|
|
|
# Running the whole app on a single node
|
|
|
|
.exercise[
|
|
|
|
- Run `docker-compose up` to start all components
|
|
|
|
]
|
|
|
|
- Aggregated output from all containers is shown
|
|
|
|
- Output is verbose because the worker is constantly hitting other services
|
|
|
|
- Now let's use the little web UI to see realtime progress
|
|
|
|
|
|
.exercise[
|
|
|
|
- Open http://[yourVMaddr]:8000/ (from a browser)
|
|
|
|
]
|
|
|
|
---
|
|
|
|
## Running in the background
|
|
|
|
- The logs are very verbose (and won't get better)
|
|
|
|
- Let's put them in the background for now!
|
|
|
|
.exercise[
|
|
|
|
- Stop the app (with `^C`)
|
|
|
|
- Start it again with `docker-compose up -d`
|
|
|
|
- Check on the web UI that the app is still making progress
|
|
|
|
]
|
|
|
|
---
|
|
|
|
# Finding bottlenecks
|
|
|
|
- Let's look at CPU, memory, and I/O usage
|
|
|
|
.exercise[
|
|
|
|
- run `top` to see CPU and memory usage
|
|
<br/>(you should see idle cycles)
|
|
|
|
- run `vmstat 3` to see I/O usage (si/so/bi/bo)
|
|
<br/>(the 4 numbers should be almost zero,
|
|
<br/>except `bo` for logging)
|
|
|
|
]
|
|
|
|
We have resources to spare.
|
|
|
|
- Why?
|
|
- How can we use them?
|
|
|
|
---
|
|
|
|
## Measuring performance
|
|
|
|
- The code doesn't have instrumentation
|
|
|
|
- Let's use `ab` and `httping` to view latency of microservices
|
|
|
|
.exercise[
|
|
|
|
- Start two new SSH connections
|
|
|
|
- In the first one, run `httping localhost:8001`

- In the other one, run `httping localhost:8002`
|
|
|
|
]
|
|
|
|
---
|
|
|
|
# Scaling workers on a single node
|
|
|
|
- Docker Compose supports scaling
|
|
- It doesn't deal with load balancing
|
|
- For services that *do not* accept connections, that's OK
|
|
- Let's scale `worker` and see what happens!
|
|
|
|
.exercise[
|
|
|
|
- In one SSH session, run `docker-compose logs worker`
|
|
|
|
- In another, run `docker-compose scale worker=4`
|
|
|
|
- See the impact on CPU load (with top/htop),
|
|
<br/>and on compute speed (with web UI)
|
|
|
|
]
|
|
|
|
---
|
|
|
|
# Scaling HTTP on a single node
|
|
|
|
The plan:
|
|
|
|
- Scale `rng` to multiple containers
|
|
|
|
- Put a load balancer in front of it
|
|
|
|
- Point other services to the load balancer
|
|
|
|
Note: Compose does not support that kind of scaling yet.
|
|
<br/>We will have to do it manually for now.
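
Conceptually, the balancer's job here is tiny: rotate requests across the scaled copies. A toy round-robin in Python (the real balancing is done by HAProxy, which handles this properly, plus connection management):

```
import itertools

def make_round_robin(backends):
    # each call returns the next backend address, wrapping around
    pool = itertools.cycle(backends)
    return lambda: next(pool)

pick = make_round_robin(["rng1:80", "rng2:80", "rng3:80"])
```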
|
|
|
|
---
|
|
|
|
## Scaling `rng`
|
|
|
|
.exercise[
|
|
|
|
- Replace the `rng` service with multiple copies of it:
|
|
|
|
```
|
|
rng1:
|
|
build: rng
|
|
|
|
rng2:
|
|
build: rng
|
|
|
|
rng3:
|
|
build: rng
|
|
```
|
|
|
|
]
|
|
|
|
That's all!
|
|
|
|
---
|
|
|
|
## Introduction to `jpetazzo/hamba`
|
|
|
|
- Public image on the Docker Hub
|
|
|
|
- Load balancer based on HAProxy
|
|
|
|
- Expects the following arguments:
|
|
<br/>`FE-port BE1-addr BE1-port BE2-addr BE2-port ...`
|
|
<br/>*or*
|
|
<br/>`FE-addr:FE-port BE1-addr BE1-port BE2-addr BE2-port ...`
|
|
|
|
- FE=frontend (the thing other services connect to)
|
|
|
|
- BE=backend (the multiple copies of your scaled service)
|
|
|
|
.small[
|
|
Example: listen to port 80 and balance traffic on www1:1234 + www2:2345
|
|
|
|
```
|
|
docker run -d -p 80 jpetazzo/hamba 80 www1 1234 www2 2345
|
|
```
|
|
]
|
|
|
|
---
|
|
|
|
## Add our load balancer to the Compose file
|
|
|
|
.exercise[
|
|
|
|
- Add the following section to the Compose file:
|
|
|
|
```
|
|
rng0:
|
|
image: jpetazzo/hamba
|
|
links:
|
|
- rng1
|
|
- rng2
|
|
- rng3
|
|
command: 80 rng1 80 rng2 80 rng3 80
|
|
ports:
|
|
- "8001:80"
|
|
```
|
|
|
|
]
|
|
|
|
---
|
|
|
|
## Point other services to the load balancer
|
|
|
|
- The only affected service is `worker`
|
|
|
|
- We have to replace the `rng` link with a link to `rng0`,
|
|
but it should still be named `rng` (so we don't change the code)
|
|
|
|
.exercise[
|
|
|
|
- Update the `worker` section as follows:
|
|
|
|
```
|
|
worker:
|
|
build: worker
|
|
links:
|
|
- rng0:rng
|
|
- hasher
|
|
- redis
|
|
```
|
|
|
|
]
|
|
|
|
---
|
|
|
|
## Start the whole stack
|
|
|
|
- The new `rng0` load balancer also ties up port 8001
|
|
|
|
- We have to stop the old `rng` service first
|
|
<br/>(Compose doesn't do it for us)
|
|
|
|
.exercise[
|
|
|
|
- Run `docker-compose stop rng`
|
|
|
|
]
|
|
|
|
- Now (re-)start the whole stack
|
|
|
|
.exercise[
|
|
|
|
- Run `docker-compose up -d`
|
|
- Check worker logs with `docker-compose logs worker`
|
|
- Check load balancer logs with `docker-compose logs rng0`
|
|
|
|
]
|
|
|
|
---
|
|
|
|
## The good, the bad, the ugly
|
|
|
|
- The good
|
|
|
|
We scaled a service, added a load balancer -
|
|
<br/>without changing a single line of code
|
|
|
|
- The bad
|
|
|
|
We manually copy-pasted sections in `docker-compose.yml`
|
|
|
|
- The ugly
|
|
|
|
If we scale up/down, we have to restart everything
|
|
|
|
---
|
|
|
|
## Ideas to improve the situation
|
|
|
|
- Parse `docker-compose.yml` to automatically replace
|
|
services with their scaled counterparts
|
|
|
|
- Replace Docker Links with network namespace sharing
|
|
|
|
- More on this later
|
|
|
|
---
|
|
|
|
# Introducing Swarm
|
|
|
|

|
|
|
|
---
|
|
|
|
## Overview
|
|
|
|
- Swarm consolidates multiple Docker hosts into a single one
|
|
|
|
- Swarm "looks like" a Docker daemon, but it dispatches (schedules)
|
|
your containers on multiple daemons
|
|
|
|
- Swarm speaks the Docker API on both ends: to the clients, and to each node's daemon
|
|
|
|
- Swarm is open source and written in Go (like Docker)
|
|
|
|
- Swarm was started by two of the original Docker authors
|
|
<br/>([@aluzzardi](https://twitter.com/aluzzardi) and [@vieux](https://twitter.com/vieux))
|
|
|
|
- Swarm is not stable yet (version 0.3 right now)
|
|
|
|
---
|
|
|
|
# Setting up our Swarm cluster
|
|
|
|
- This is usually done by **Docker Machine**
|
|
<br/>(or by custom deployment scripts)
|
|
|
|
- We will do a simplified version here (without TLS),
|
|
<br/>to give you an idea of what's involved
|
|
|
|
- Components involved:
|
|
|
|
- service discovery mechanism
|
|
<br/>(we'll use Docker's hosted system)
|
|
|
|
- swarm agent
|
|
<br/>(runs on each node, registers it with service discovery)
|
|
|
|
- swarm manager
|
|
<br/>(runs on `node1`, exposes Docker API)
|
|
|
|
---
|
|
|
|
## Service discovery
|
|
|
|
- Possible backends:
|
|
|
|
- dynamic, self-hosted (zk, etcd, consul)
|
|
|
|
- static (command-line or file)
|
|
|
|
- hosted by Docker (token)
|
|
|
|
- We will use the token mechanism
|
|
|
|
.exercise[
|
|
|
|
- Run `docker run swarm create`
|
|
- Save the output carefully: it's your token
|
|
<br/>(it's the unique identifier for your cluster)
|
|
|
|
]
|
|
|
|
---
|
|
|
|
## Swarm agent
|
|
|
|
- Used only for dynamic discovery (zk, etcd, consul, token)
|
|
|
|
- Must run on each node
|
|
|
|
- Every 20 seconds (by default), it tells the discovery system:
<br/>"Hello, there is a Swarm node at A.B.C.D:EFGH"
|
|
|
|
- The node continues to work even if the agent dies
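
The agent's job boils down to periodic re-registration; an illustrative loop (not Swarm's actual code; `register` stands in for the discovery backend):

```
import time

def heartbeat(register, address, interval=20, rounds=3):
    # periodically (re-)announce "there is a Swarm node at <address>"
    for _ in range(rounds):
        register(address)
        time.sleep(interval)
```

Because registration is repeated, a node whose agent dies eventually drops out of discovery, while its Docker daemon keeps running containers.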
|
|
|
|
---
|
|
|
|
## Join the cluster
|
|
|
|
.exercise[
|
|
|
|
- Connect to `node2`
|
|
|
|
- Start the swarm agent:
|
|
<br/>`docker run -d swarm join \`
|
|
<br/>` --advertise A.B.C.D:55555 token://XXX`
|
|
<br/>.small[(`A.B.C.D` is the IP address of `node2`, `XXX` is the token generated earlier)]
|
|
|
|
- Check that the node registered successfully:
|
|
<br/>`docker run swarm list token://XXX`
|
|
|
|
- Repeat on nodes 3, 4, 5
|
|
|
|
]
|
|
|
|
Note: the Docker daemon on your VMs listens on port 55555
|
|
|
|
---
|
|
|
|
## Swarm manager
|
|
|
|
- Today: must run on the "master" node
|
|
|
|
- Later: can run on multiple nodes, with master election
|
|
|
|
.exercise[
|
|
|
|
- Connect to `node1`
|
|
|
|
- Start the swarm manager:
|
|
<br/>`docker run -d -p 10000:2375 swarm manage token://XXX`
|
|
|
|
]
|
|
|
|
- Remember to replace XXX with your token!
|
|
- The Swarm manager listens on port 2375
|
|
- We're telling Docker to expose that on port 10000
|
|
|
|
---
|
|
|
|
## First contact with Swarm
|
|
|
|
- We must set up our CLI to talk to the Swarm manager
|
|
|
|
.exercise[
|
|
|
|
- From any machine, set the environment variable:
|
|
<br/>`export DOCKER_HOST=tcp://node1:10000`
|
|
|
|
- Check the output of `docker version` and `docker info`
|
|
|
|
]
|
|
|
|
- Remember to set the environment variable if you open another SSH session!
|
|
|
|
- With Docker Machine, you would do a command like:
|
|
<br/>`eval $(docker-machine env my-swarm-master)`
|
|
|
|
---
|
|
|
|
# Running on Swarm

---

# Scaling on Swarm

---

# Cluster metrics

---

# Introducing Mesos

---

# Setting up our Mesos cluster

---

# Running on Mesos

---

# Network on Mesos
|
|
|
|
---
|
|
|
|
class: title
|
|
|
|
# Thanks! <br/> Questions?
|
|
|
|
### [@jpetazzo](https://twitter.com/jpetazzo) <br/> [@docker](https://twitter.com/docker)
|
|
|
|
</textarea>
|
|
<script src="https://gnab.github.io/remark/downloads/remark-0.5.9.min.js" type="text/javascript">
|
|
</script>
|
|
<script type="text/javascript">
|
|
var slideshow = remark.create();
|
|
</script>
|
|
</body>
|
|
</html>
|