Do not scale DockerCoins with Compose in Kubernetes courses

In the Kubernetes courses, it takes a bit too long before we
reach the Kubernetes content. Furthermore, learning how to
scale with Compose is not super helpful. These changes
allow switching between two course flows:

- show how to scale with Compose, then transition to k8s/Swarm
- do not show how to scale with Compose; jump to k8s/Swarm earlier

In the latter case, we still benchmark the speed of rng and
hasher, but we do it on Kubernetes (by running httping on
the ClusterIP of these services).

These changes will also allow making the whole DaemonSet
section optional, for shorter courses where we want to
simply scale the rng service without giving the bogus
explanation about entropy.
Jerome Petazzoni
2019-04-02 09:54:43 -05:00
parent 9c5fa6f15e
commit 59f2416c56
11 changed files with 232 additions and 19 deletions


@@ -0,0 +1,200 @@
# Scaling our demo app
- Our ultimate goal is to get more DockerCoins
(i.e. increase the number of loops per second shown on the web UI)
- Let's look at the architecture again:
![DockerCoins architecture](images/dockercoins-diagram.svg)
- The loop is done in the worker;
perhaps we could try adding more workers?
---
## Adding another worker
- All we have to do is scale the `worker` Deployment
.exercise[
- Open two new terminals to check what's going on with pods and deployments:
```bash
kubectl get pods -w
kubectl get deployments -w
```
<!--
```wait RESTARTS```
```keys ^C```
```wait AVAILABLE```
```keys ^C```
-->
- Now, create more `worker` replicas:
```bash
kubectl scale deployment worker --replicas=2
```
]
After a few seconds, the graph in the web UI should go up.
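If we prefer to check the replica count programmatically rather than through the watch terminals, a jsonpath query against the Deployment status works too (the field names below come from the standard Deployment API):

```shell
# Show desired vs. available replicas for the worker Deployment
kubectl get deployment worker \
  -o jsonpath='{.spec.replicas} {.status.availableReplicas}{"\n"}'
```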
---
## Adding more workers
- If 2 workers give us 2x speed, what about 3 workers?
.exercise[
- Scale the `worker` Deployment further:
```bash
kubectl scale deployment worker --replicas=3
```
]
The graph in the web UI should go up again.
(This is looking great! We're gonna be RICH!)
---
## Adding even more workers
- Let's see if 10 workers give us 10x speed!
.exercise[
- Scale the `worker` Deployment to a bigger number:
```bash
kubectl scale deployment worker --replicas=10
```
]
--
The graph will peak at 10 hashes/second.
(We can add as many workers as we want: we will never go past 10 hashes/second.)
---
class: extra-details
## Didn't we briefly exceed 10 hashes/second?
- It may *look like it*, because the web UI shows instant speed
- The instant speed can briefly exceed 10 hashes/second
- The average speed cannot
- The instant speed can be biased because of how it's computed
---
class: extra-details
## Why instant speed is misleading
- The instant speed is computed client-side by the web UI
- The web UI checks the hash counter once per second
<br/>
(and does a classic (h2-h1)/(t2-t1) speed computation)
- The counter is updated once per second by the workers
- These timings are not exact
<br/>
(e.g. the web UI check interval is client-side JavaScript)
- Sometimes, between two web UI counter measurements,
<br/>
the workers are able to update the counter *twice*
- When that happens, the instant speed will appear to be much bigger
<br/>
(but it is compensated by lower instant speed before and after)
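The effect can be sketched with plain shell arithmetic (the counter values below are made up for illustration; they are not real measurements):

```shell
# Hash counter samples taken by the web UI, one second apart.
# Between the 3rd and 4th samples, the workers happened to
# update the counter twice.
prev=100
for h in 110 110 130 140; do
  echo "instant speed: $((h - prev)) hashes/second"
  prev=$h
done
# Average over the whole window: (140-100)/4 = 10 hashes/second,
# even though one instant reading showed 20 and another showed 0.
```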
---
## Why are we stuck at 10 hashes per second?
- If this was high-quality, production code, we would have instrumentation
(Datadog, Honeycomb, New Relic, statsd, Sumologic, ...)
- It's not!
- Perhaps we could benchmark our web services?
(with tools like `ab`, or even simpler, `httping`)
---
## Benchmarking our web services
- We want to check `hasher` and `rng`
- We are going to use `httping`
- It's just like `ping`, but using HTTP `GET` requests
(it measures how long it takes to perform one `GET` request)
- It's used like this:
```
httping [-c count] http://host:port/path
```
- Or even simpler:
```
httping ip.ad.dr.ess
```
- We will use `httping` on the ClusterIP addresses of our services
---
## Obtaining ClusterIP addresses
- We can simply check the output of `kubectl get services`
- Or do it programmatically, as in the example below
.exercise[
- Retrieve the IP addresses:
```bash
HASHER=$(kubectl get svc hasher -o go-template='{{.spec.clusterIP}}')
RNG=$(kubectl get svc rng -o go-template='{{.spec.clusterIP}}')
```
]
Now we can access the IP addresses of our services through `$HASHER` and `$RNG`.
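Equivalently, `kubectl`'s jsonpath output format retrieves the same field with a different templating syntax:

```shell
# Same ClusterIP lookup, using jsonpath instead of go-template
HASHER=$(kubectl get svc hasher -o jsonpath='{.spec.clusterIP}')
RNG=$(kubectl get svc rng -o jsonpath='{.spec.clusterIP}')
echo "hasher=$HASHER rng=$RNG"
```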
---
## Checking `hasher` and `rng` response times
.exercise[
- Check the response times for both services:
```bash
httping -c 3 $HASHER
httping -c 3 $RNG
```
]
- `hasher` is fine (it should take a few milliseconds to reply)
- `rng` is not (it should take about 700 milliseconds if there are 10 workers)
- Something is wrong with `rng`, but ... what?
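If `httping` is not installed, `curl`'s built-in timing gives a comparable measurement (using curl's standard `-w` write-out variables):

```shell
# Print the total time of one GET request to each service, in seconds
curl -o /dev/null -s -w '%{time_total}\n' http://$HASHER/
curl -o /dev/null -s -w '%{time_total}\n' http://$RNG/
```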


@@ -22,7 +22,8 @@ chapters:
- - shared/prereqs.md
- k8s/versions-k8s.md
- shared/sampleapp.md
- shared/composescale.md
# - shared/composescale.md
# - shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
@@ -41,6 +42,8 @@ chapters:
# - k8s/accessinternal.md
- k8s/dashboard.md
- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- - k8s/rollout.md
# - k8s/healthchecks.md


@@ -26,6 +26,7 @@ chapters:
- shared/sampleapp.md
# Bridget doesn't go into as much depth with compose
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
@@ -44,6 +45,8 @@ chapters:
#- k8s/accessinternal.md
- - k8s/dashboard.md
- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- k8s/daemonset.md
- k8s/rollout.md
- - k8s/logs-cli.md


@@ -23,6 +23,7 @@ chapters:
- k8s/versions-k8s.md
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
@@ -41,6 +42,8 @@ chapters:
- k8s/accessinternal.md
- k8s/dashboard.md
- k8s/kubectlscale.md
# - k8s/scalingdockercoins.md
# - shared/hastyconclusions.md
- k8s/daemonset.md
- - k8s/rollout.md
- k8s/healthchecks.md


@@ -22,7 +22,8 @@ chapters:
- - shared/prereqs.md
- k8s/versions-k8s.md
- shared/sampleapp.md
- shared/composescale.md
#- shared/composescale.md
#- shared/hastyconclusions.md
- shared/composedown.md
- k8s/concepts-k8s.md
- shared/declarative.md
@@ -40,7 +41,9 @@ chapters:
- k8s/localkubeconfig.md
- k8s/accessinternal.md
- k8s/dashboard.md
- k8s/kubectlscale.md
#- k8s/kubectlscale.md
- k8s/scalingdockercoins.md
- shared/hastyconclusions.md
- - k8s/daemonset.md
- k8s/rollout.md
- k8s/healthchecks.md


@@ -202,19 +202,3 @@ We will use `httping`.
]
`rng` has a much higher latency than `hasher`.
---
## Let's draw hasty conclusions
- The bottleneck seems to be `rng`
- *What if* we don't have enough entropy and can't generate enough random numbers?
- We need to scale out the `rng` service on multiple machines!
Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.
(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
<br/>
...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)


@@ -0,0 +1,13 @@
## Let's draw hasty conclusions
- The bottleneck seems to be `rng`
- *What if* we don't have enough entropy and can't generate enough random numbers?
- We need to scale out the `rng` service on multiple machines!
Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.
(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
<br/>
...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)
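On a Linux node, the kernel's available-entropy estimate can be inspected directly, which makes the point concrete (this is the pool that `/dev/random` traditionally draws from):

```shell
# Show the kernel's entropy estimate, in bits; /dev/urandom
# consumers are never blocked by this value running low.
cat /proc/sys/kernel/random/entropy_avail
```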


@@ -27,6 +27,7 @@ chapters:
- swarm/versions.md
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md


@@ -27,6 +27,7 @@ chapters:
- swarm/versions.md
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md


@@ -28,6 +28,7 @@ chapters:
Part 1
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md


@@ -28,6 +28,7 @@ chapters:
Part 1
- shared/sampleapp.md
- shared/composescale.md
- shared/hastyconclusions.md
- shared/composedown.md
- swarm/swarmkit.md
- shared/declarative.md