diff --git a/slides/k8s/scalingdockercoins.md b/slides/k8s/scalingdockercoins.md
new file mode 100644
index 00000000..00e42141
--- /dev/null
+++ b/slides/k8s/scalingdockercoins.md
@@ -0,0 +1,200 @@
+# Scaling our demo app
+
+- Our ultimate goal is to get more DockerCoins
+
+  (i.e. increase the number of loops per second shown on the web UI)
+
+- Let's look at the architecture again:
+
+  ![DockerCoins architecture](images/dockercoins-diagram.svg)
+
+- The loop is done in the worker;
+  perhaps we could try adding more workers?
+
+---
+
+## Adding another worker
+
+- All we have to do is scale the `worker` Deployment
+
+.exercise[
+
+- Open two new terminals to check what's going on with pods and deployments:
+  ```bash
+  kubectl get pods -w
+  kubectl get deployments -w
+  ```
+
+- Now, create more `worker` replicas:
+  ```bash
+  kubectl scale deployment worker --replicas=2
+  ```
+
+]
+
+After a few seconds, the graph in the web UI should go up.
+
+---
+
+## Adding more workers
+
+- If 2 workers give us 2x speed, what about 3 workers?
+
+.exercise[
+
+- Scale the `worker` Deployment further:
+  ```bash
+  kubectl scale deployment worker --replicas=3
+  ```
+
+]
+
+The graph in the web UI should go up again.
+
+(This is looking great! We're gonna be RICH!)
+
+---
+
+## Adding even more workers
+
+- Let's see if 10 workers give us 10x speed!
+
+.exercise[
+
+- Scale the `worker` Deployment to a bigger number:
+  ```bash
+  kubectl scale deployment worker --replicas=10
+  ```
+
+]
+
+--
+
+The graph will peak at 10 hashes/second.
+
+(We can add as many workers as we want: we will never go past 10 hashes/second.)
+
+---
+
+class: extra-details
+
+## Didn't we briefly exceed 10 hashes/second?
+
+- It may *look like it*, because the web UI shows instant speed
+
+- The instant speed can briefly exceed 10 hashes/second
+
+- The average speed cannot
+
+- The instant speed can be biased because of how it's computed
+
+---
+
+class: extra-details
+
+## Why instant speed is misleading
+
+- The instant speed is computed client-side by the web UI
+
+- The web UI checks the hash counter once per second
+
+  (and does a classic (h2-h1)/(t2-t1) speed computation)
+
+- The counter is updated once per second by the workers
+
+- These timings are not exact
+
+  (e.g. the web UI check interval relies on client-side JavaScript timers, which aren't perfectly regular)
+
+- Sometimes, between two web UI counter measurements,
+  the workers are able to update the counter *twice*
+
+- When that happens, the instant speed will appear to be much bigger
+
+  (but it will be compensated by a lower instant speed before and after; see the sketch on the next slide)
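+
+---
+
+class: extra-details
+
+## Instant speed vs. average speed: a sketch
+
+- Here is a rough numeric sketch of that effect (the figures are made up, assuming a true rate of 10 hashes/second):
+
+  ```bash
+  # Counter values seen by the web UI, one second apart (hypothetical):
+  #   t=10s: 100 hashes     t=11s: 120 hashes     t=12s: 120 hashes
+  # (two worker updates landed in the first interval, none in the second)
+  echo $(( (120-100) / (11-10) ))   # instant speed, 1st interval: 20 hashes/second
+  echo $(( (120-120) / (12-11) ))   # instant speed, 2nd interval: 0 hashes/second
+  echo $(( (120-100) / (12-10) ))   # average over both intervals: 10 hashes/second
+  ```
+
+- The spikes and the dips cancel out: the average stays at 10 hashes/second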
+
+---
+
+## Why are we stuck at 10 hashes per second?
+
+- If this was high-quality production code, we would have instrumentation
+
+  (Datadog, Honeycomb, New Relic, statsd, Sumologic, ...)
+
+- It's not!
+
+- Perhaps we could benchmark our web services?
+
+  (with tools like `ab`, or even simpler, `httping`)
+
+---
+
+## Benchmarking our web services
+
+- We want to check `hasher` and `rng`
+
+- We are going to use `httping`
+
+- It's just like `ping`, but using HTTP `GET` requests
+
+  (it measures how long it takes to perform one `GET` request)
+
+- It's used like this:
+  ```
+  httping [-c count] http://host:port/path
+  ```
+
+- Or even simpler:
+  ```
+  httping ip.ad.dr.ess
+  ```
+
+- We will use `httping` on the ClusterIP addresses of our services
+
+---
+
+## Obtaining ClusterIP addresses
+
+- We can simply check the output of `kubectl get services`
+
+- Or do it programmatically, as in the example below
+
+.exercise[
+
+- Retrieve the IP addresses:
+  ```bash
+  HASHER=$(kubectl get svc hasher -o go-template={{.spec.clusterIP}})
+  RNG=$(kubectl get svc rng -o go-template={{.spec.clusterIP}})
+  ```
+
+]
+
+Now we can access the IP addresses of our services through `$HASHER` and `$RNG`.
+
+---
+
+## Checking `hasher` and `rng` response times
+
+.exercise[
+
+- Check the response times for both services:
+  ```bash
+  httping -c 3 $HASHER
+  httping -c 3 $RNG
+  ```
+
+]
+
+- `hasher` is fine (it should take a few milliseconds to reply)
+
+- `rng` is not (it should take about 700 milliseconds if there are 10 workers)
+
+- Something is wrong with `rng`, but ... what?
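+
+---
+
+class: extra-details
+
+## If `httping` is not available
+
+- We can get a rough equivalent with plain `curl` (a sketch; it times a single `GET` request against each ClusterIP):
+
+  ```bash
+  curl -o /dev/null -s -w "time_total: %{time_total}s\n" http://$HASHER/
+  curl -o /dev/null -s -w "time_total: %{time_total}s\n" http://$RNG/
+  ```
+
+- The numbers won't match `httping` exactly, but the gap between `hasher` and `rng` should still be obvious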
diff --git a/slides/kube-fullday.yml b/slides/kube-fullday.yml
index 4cbf8c16..ab0993e2 100644
--- a/slides/kube-fullday.yml
+++ b/slides/kube-fullday.yml
@@ -22,7 +22,8 @@ chapters:
 - - shared/prereqs.md
   - k8s/versions-k8s.md
   - shared/sampleapp.md
-  - shared/composescale.md
+# - shared/composescale.md
+# - shared/hastyconclusions.md
   - shared/composedown.md
   - k8s/concepts-k8s.md
   - shared/declarative.md
@@ -41,6 +42,8 @@ chapters:
 # - k8s/accessinternal.md
   - k8s/dashboard.md
   - k8s/kubectlscale.md
+  - k8s/scalingdockercoins.md
+  - shared/hastyconclusions.md
   - k8s/daemonset.md
 - - k8s/rollout.md
 # - k8s/healthchecks.md
diff --git a/slides/kube-halfday.yml b/slides/kube-halfday.yml
index 11b232b1..e4f845e5 100644
--- a/slides/kube-halfday.yml
+++ b/slides/kube-halfday.yml
@@ -26,6 +26,7 @@ chapters:
   - shared/sampleapp.md
   # Bridget doesn't go into as much depth with compose
   #- shared/composescale.md
+  #- shared/hastyconclusions.md
   - shared/composedown.md
   - k8s/concepts-k8s.md
   - shared/declarative.md
@@ -44,6 +45,8 @@ chapters:
   #- k8s/accessinternal.md
 - - k8s/dashboard.md
   - k8s/kubectlscale.md
+  - k8s/scalingdockercoins.md
+  - shared/hastyconclusions.md
   - k8s/daemonset.md
   - k8s/rollout.md
 - - k8s/logs-cli.md
diff --git a/slides/kube-selfpaced.yml b/slides/kube-selfpaced.yml
index 422ff9fb..64dee1a2 100644
--- a/slides/kube-selfpaced.yml
+++ b/slides/kube-selfpaced.yml
@@ -23,6 +23,7 @@ chapters:
   - k8s/versions-k8s.md
   - shared/sampleapp.md
   - shared/composescale.md
+  - shared/hastyconclusions.md
   - shared/composedown.md
   - k8s/concepts-k8s.md
   - shared/declarative.md
@@ -41,6 +42,8 @@ chapters:
   - k8s/accessinternal.md
   - k8s/dashboard.md
   - k8s/kubectlscale.md
+# - k8s/scalingdockercoins.md
+# - shared/hastyconclusions.md
   - k8s/daemonset.md
 - - k8s/rollout.md
   - k8s/healthchecks.md
diff --git a/slides/kube-twodays.yml b/slides/kube-twodays.yml
index 84e2f689..ac3f4368 100644
--- a/slides/kube-twodays.yml
+++ b/slides/kube-twodays.yml
@@ -22,7 +22,8 @@ chapters:
 - - shared/prereqs.md
   - k8s/versions-k8s.md
   - shared/sampleapp.md
-  - shared/composescale.md
+  #- shared/composescale.md
+  #- shared/hastyconclusions.md
   - shared/composedown.md
   - k8s/concepts-k8s.md
   - shared/declarative.md
@@ -40,7 +41,9 @@ chapters:
   - k8s/localkubeconfig.md
   - k8s/accessinternal.md
   - k8s/dashboard.md
-  - k8s/kubectlscale.md
+  #- k8s/kubectlscale.md
+  - k8s/scalingdockercoins.md
+  - shared/hastyconclusions.md
 - - k8s/daemonset.md
   - k8s/rollout.md
   - k8s/healthchecks.md
diff --git a/slides/shared/composescale.md b/slides/shared/composescale.md
index e517f2f7..ff8578f1 100644
--- a/slides/shared/composescale.md
+++ b/slides/shared/composescale.md
@@ -202,19 +202,3 @@ We will use `httping`.
 ]
 
 `rng` has a much higher latency than `hasher`.
-
----
-
-## Let's draw hasty conclusions
-
-- The bottleneck seems to be `rng`
-
-- *What if* we don't have enough entropy and can't generate enough random numbers?
-
-- We need to scale out the `rng` service on multiple machines!
-
-Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.
-
-(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
-
-...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)
diff --git a/slides/shared/hastyconclusions.md b/slides/shared/hastyconclusions.md
new file mode 100644
index 00000000..a9b9cc96
--- /dev/null
+++ b/slides/shared/hastyconclusions.md
@@ -0,0 +1,13 @@
+## Let's draw hasty conclusions
+
+- The bottleneck seems to be `rng`
+
+- *What if* we don't have enough entropy and can't generate enough random numbers?
+
+- We need to scale out the `rng` service on multiple machines!
+
+Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.
+
+(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...
+
+...and is [just as good as `/dev/random`](http://www.slideshare.net/PacSecJP/filippo-plain-simple-reality-of-entropy).)
diff --git a/slides/swarm-fullday.yml b/slides/swarm-fullday.yml
index 1244fcef..ab3b38a9 100644
--- a/slides/swarm-fullday.yml
+++ b/slides/swarm-fullday.yml
@@ -27,6 +27,7 @@ chapters:
   - swarm/versions.md
   - shared/sampleapp.md
   - shared/composescale.md
+  - shared/hastyconclusions.md
   - shared/composedown.md
   - swarm/swarmkit.md
   - shared/declarative.md
diff --git a/slides/swarm-halfday.yml b/slides/swarm-halfday.yml
index c8e7b1e2..d69c5c2c 100644
--- a/slides/swarm-halfday.yml
+++ b/slides/swarm-halfday.yml
@@ -27,6 +27,7 @@ chapters:
   - swarm/versions.md
   - shared/sampleapp.md
   - shared/composescale.md
+  - shared/hastyconclusions.md
   - shared/composedown.md
   - swarm/swarmkit.md
   - shared/declarative.md
diff --git a/slides/swarm-selfpaced.yml b/slides/swarm-selfpaced.yml
index 79a1cf48..73290511 100644
--- a/slides/swarm-selfpaced.yml
+++ b/slides/swarm-selfpaced.yml
@@ -28,6 +28,7 @@ chapters:
   Part 1
 - - shared/sampleapp.md
   - shared/composescale.md
+  - shared/hastyconclusions.md
   - shared/composedown.md
   - swarm/swarmkit.md
   - shared/declarative.md
diff --git a/slides/swarm-video.yml b/slides/swarm-video.yml
index 685ce10d..ba62175a 100644
--- a/slides/swarm-video.yml
+++ b/slides/swarm-video.yml
@@ -28,6 +28,7 @@ chapters:
   Part 1
 - - shared/sampleapp.md
   - shared/composescale.md
+  - shared/hastyconclusions.md
   - shared/composedown.md
   - swarm/swarmkit.md
   - shared/declarative.md