Mirror of https://github.com/jpetazzo/container.training.git (synced 2026-02-14 09:39:56 +00:00)
In the Kubernetes courses, it takes a bit too long before we reach the Kubernetes content. Furthermore, learning how to scale with Compose is not super helpful. These changes allow switching between two course flows:

- show how to scale with Compose, then transition to k8s/Swarm
- do not show how to scale with Compose; jump to k8s/Swarm earlier

In the latter case, we still benchmark the speed of rng and hasher, but we do it on Kubernetes (by running httping on the ClusterIP of these services). These changes will also make it possible to mark the whole DaemonSet section optional, for shorter courses where we simply want to scale the rng service without telling the bogus explanation about entropy.
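The Kubernetes-side benchmark described above could look roughly like this. This is a sketch under assumptions: it presumes the services are named `rng` and `hasher` (as in the dockercoins demo app), and that `kubectl` and `httping` are available from a machine that can reach the ClusterIPs.

```shell
# Hypothetical sketch: service names "rng" and "hasher" are assumed,
# and httping must run somewhere with access to the cluster network.

# Look up the ClusterIP of each service
RNG_IP=$(kubectl get svc rng -o jsonpath='{.spec.clusterIP}')
HASHER_IP=$(kubectl get svc hasher -o jsonpath='{.spec.clusterIP}')

# Measure HTTP request latency against each ClusterIP
httping -c 3 "$RNG_IP"
httping -c 3 "$HASHER_IP"
```

Comparing the two latencies is what lets us point at `rng` as the bottleneck without ever scaling with Compose.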
## Let's draw hasty conclusions

- The bottleneck seems to be `rng`

- What if we don't have enough entropy and can't generate enough random numbers?

- We need to scale out the `rng` service on multiple machines!

Note: this is a fiction! We have enough entropy. But we need a pretext to scale out.

(In fact, the code of `rng` uses `/dev/urandom`, which never runs out of entropy...

...and is just as good as `/dev/random`.)
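To see why entropy is not a real concern, here is a minimal sketch: `os.urandom()` draws from the same kernel CSPRNG as `/dev/urandom`, which never blocks waiting for entropy, no matter how much we read from it.

```python
import os
import time

# Read 1 MB from the kernel CSPRNG. /dev/urandom (and the getrandom()
# call behind os.urandom) never blocks waiting for "entropy", so this
# completes almost instantly even on an idle machine.
start = time.time()
data = os.urandom(1024 * 1024)
elapsed = time.time() - start

print(f"read {len(data)} random bytes in {elapsed:.3f}s")
```

So scaling `rng` for "more entropy" is purely a teaching pretext; the real lesson is the scaling mechanics, not randomness.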