Restarting in the background
- Many flags and commands of Compose are modeled after those of `docker`
.lab[
- Start the app in the background with the `-d` option:

  `docker-compose up -d`

- Check that our app is running with the `ps` command:

  `docker-compose ps`
]
`docker-compose ps` also shows the ports exposed by the application.
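Tip: when you're done, Compose also has matching commands to stop or tear down the stack (not needed right now):

```bash
# Stop the containers (they can be restarted later with `up`)
docker-compose stop

# Stop and remove the containers (and the default network)
docker-compose down
```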
class: extra-details
Viewing logs
- The `docker-compose logs` command works like `docker logs`
.lab[
- View all logs since container creation and exit when done:

  `docker-compose logs`

- Stream container logs, starting at the last 10 lines for each container:

  `docker-compose logs --tail 10 --follow`
]
Tip: use ^S and ^Q to pause/resume log output.
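Note: `docker-compose logs` also accepts service names and a `--timestamps` flag; for example, to follow only the `worker` logs:

```bash
# Follow the last 10 lines of the worker service, with timestamps
docker-compose logs --timestamps --tail 10 --follow worker
```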
Scaling up the application
- Our goal is to make that performance graph go up (without changing a line of code!)
--
- Before trying to scale the application, we'll figure out if we need more resources
  (CPU, RAM...)

- For that, we will use good old UNIX tools on our Docker node
Looking at resource usage
- Let's look at CPU, memory, and I/O usage
.lab[
- run `top` to see CPU and memory usage (you should see idle cycles)

- run `vmstat 1` to see I/O usage (si/so/bi/bo)
  (the 4 numbers should be almost zero, except `bo` for logging)
]
We have available resources.
- Why?
- How can we use them?
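Tip: for a per-container view (rather than whole-node), `docker stats` can complement `top` and `vmstat`:

```bash
# One-shot snapshot of CPU, memory, and I/O for each running container
docker stats --no-stream
```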
Scaling workers on a single node
- Docker Compose supports scaling
- Let's scale `worker` and see what happens!
.lab[
- Start one more `worker` container:

  `docker-compose up -d --scale worker=2`

- Look at the performance graph (it should show a x2 improvement)

- Look at the aggregated logs of our containers (`worker_2` should show up)

- Look at the impact on CPU load with e.g. top (it should be negligible)
]
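Note: older versions of Compose had a dedicated `scale` command; if `--scale` is not available in your version, the equivalent would be:

```bash
# Legacy syntax (deprecated in recent Compose versions)
docker-compose scale worker=2
```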
Adding more workers
- Great, let's add more workers and call it a day, then!
.lab[
- Start eight more `worker` containers:

  `docker-compose up -d --scale worker=10`

- Look at the performance graph: does it show a x10 improvement?

- Look at the aggregated logs of our containers

- Look at the impact on CPU load and memory usage
]
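If you want to double-check how many `worker` containers Compose is managing:

```bash
# List only the containers of the worker service
docker-compose ps worker
```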
Identifying bottlenecks
- You should have seen a 3x speed bump (not 10x)

- Adding workers didn't result in linear improvement

- Something else is slowing us down
--
- ... But what?
--
- The code doesn't have instrumentation

- Let's use state-of-the-art HTTP performance analysis!
  (i.e. good old tools like `ab`, `httping`...)
Accessing internal services
- `rng` and `hasher` are exposed on ports 8001 and 8002

- This is declared in the Compose file:
```yaml
...
rng:
  build: rng
  ports:
    - "8001:80"
hasher:
  build: hasher
  ports:
    - "8002:80"
...
```
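Since these ports are published on the host, you could also throw some extra requests at a service with `ab`; a quick sketch (request count and concurrency picked arbitrarily):

```bash
# 100 GET requests, 10 concurrent, against the rng service
ab -n 100 -c 10 http://localhost:8001/
```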
Measuring latency under load
We will use `httping`.
.lab[
- Check the latency of `rng`:

  `httping -c 3 localhost:8001`

- Check the latency of `hasher`:

  `httping -c 3 localhost:8002`
]
`rng` has a much higher latency than `hasher`.