For now, we set the Pod Security Standards to warn+audit
at the baseline level, but don't enforce any restrictions yet.
This way, it shouldn't break anything, but it will still issue
visible warnings for problematic pods.
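Under the hood, these Pod Security Standards are just labels
set on namespaces. Here is a minimal sketch using the official
Kubernetes Python client (the namespace name is an assumption,
and the real setup may well use plain YAML or kubectl instead):

    from kubernetes import client, config

    config.load_kube_config()
    labels = {
        "pod-security.kubernetes.io/warn": "baseline",
        "pod-security.kubernetes.io/audit": "baseline",
        # deliberately no "pod-security.kubernetes.io/enforce"
        # label, so problematic pods get flagged, never rejected
    }
    client.CoreV1Api().patch_namespace(
        "default", {"metadata": {"labels": labels}})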
The vcluster deployment mode needs 2 volumes per cluster
(one for the control plane, one for shpod), so volume count
(rather than CPU/RAM) is quickly becoming the limiting
factor; we're switching to a smaller machine type accordingly.
It looks like commit f9d73c0 introduced a very subtle regression
by removing what seemed to be an extraneous space in a selector...
But the space was there on purpose (in a CSS-style selector,
the space is the descendant combinator: `div .mermaid` matches
elements inside a div, whereas `div.mermaid` matches the div
itself), so removing it had actually broken Mermaid integration.
This fixes it, hopefully in a way that won't be affected the
same way!
For educational purposes, the RNG service is meant to
process only one request at a time (without concurrency).
But the Flask development server now defaults to a
multi-threaded implementation, which defeats that purpose.
So here we disable threading to restore the original
behavior.
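A minimal sketch of that fix (the route and port are
assumptions; the important part is the threaded=False argument,
since Flask 1.0+ runs its development server with threads
by default):

    import os
    from flask import Flask, Response

    app = Flask(__name__)

    @app.route("/<int:how_many_bytes>")
    def rng(how_many_bytes):
        # With threading disabled, these requests are
        # processed strictly one at a time.
        return Response(os.urandom(how_many_bytes),
                        content_type="application/octet-stream")

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=80, threaded=False)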
(Probably due to a K8S version mismatch; the vcluster was on
1.33 and the host cluster was on 1.35. Symptoms: some pods
start, all their containers are ready, the pod shows up as
ready, and yet it's not considered ready, so the deployment
says 0/1 and Helm never completes.)
The section about Ingress has been both simplified (the
content about taints and tolerations has been split out)
and made somewhat deeper, so that it works well for both
live classes and recorded videos.
A new section about setting up Ingress Controllers has been
added.
The structure of each deck should now be:
- title slide
- logistics (for live classes)
- chat room info (for live classes)
- shared/about-slides
- */prereqs* (when relevant; mostly k8s classes)
- shared/handson
- */labs-live (for live classes)
- shared/connecting (for live classes)
- */labs-async
- toc
This is more uniform across the different courses
(live and async; containers and K8S).
Note that we install a TON of things from GitHub.
Since GitHub isn't available over IPv6, we are using
a custom solution based on cachttps, a caching
proxy that forwards requests to GitHub. Our deployment
scripts try to detect a cachttps instance (assuming
it will be available in DNS as cachttps.internal),
and if they find one, they use it. Otherwise, they
access GitHub directly, which won't work on IPv6-only
hosts, but will of course work fine on IPv4 and
dual-stack hosts.
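The detection logic boils down to a DNS lookup; here is a
minimal sketch in Python (the actual deployment scripts may
differ, and the proxy endpoint URL is an assumption):

    import socket

    def github_base_url():
        try:
            # If cachttps.internal resolves, a caching proxy
            # is available: send GitHub traffic through it.
            socket.getaddrinfo("cachttps.internal", 443)
            return "https://cachttps.internal"  # hypothetical endpoint
        except socket.gaierror:
            # No proxy found: access GitHub directly. This works
            # on IPv4 and dual-stack hosts, but not IPv6-only ones.
            return "https://github.com"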