For educational purposes, the RNG service is meant to
process only one request at a time (without concurrency).
But the Flask server now defaults to a multi-threaded
implementation, which defeats our original purpose.
So we disable threading here to restore the original
behavior.
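For illustration, here is a minimal sketch of what this looks
like in a Flask app (the app and endpoint below are placeholders,
not the actual RNG service code):

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def rng():
        # Placeholder for the actual random number generation.
        return "42\n"

    if __name__ == "__main__":
        # threaded=False makes the development server handle
        # requests one at a time, restoring the sequential
        # behavior that we want for educational purposes.
        app.run(port=8080, threaded=False)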
(Probably due to a K8S version mismatch: vcluster was on 1.33
and the host cluster was on 1.35. Symptoms: some pods start,
all their containers are ready, and the pod shows up as ready;
yet the deployment doesn't count it as ready, so it reports 0/1
and Helm never completes.)
The section about Ingress has been simplified (the content
about taints and tolerations has been split out) and also
made somewhat deeper, so that it works well for both live
classes and recorded videos.
A new section about setting up Ingress Controllers has been
added.
The structure of each deck should now be:
- title slide
- logistics (for live classes)
- chat room info (for live classes)
- shared/about-slides
- */prereqs* (when relevant; mostly k8s classes)
- shared/handson
- */labs-live (for live classes)
- shared/connecting (for live classes)
- */labs-async
- toc
This is more uniform across the different courses
(live and async; containers and K8S).
Note that we install a TON of things from GitHub. Since
GitHub isn't available over IPv6, we are using a custom
solution based on cachttps, a caching proxy that forwards
requests to GitHub. Our deployment scripts try to detect a
cachttps instance (assuming it will be reachable through DNS
at cachttps.internal) and, if they find one, they use it.
Otherwise, they access GitHub directly, which won't work on
IPv6-only hosts, but will of course work fine on IPv4 and
dual-stack hosts.
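As an illustration, the detection boils down to something like
the Python sketch below (only the DNS name cachttps.internal
comes from our setup; how the proxy then gets wired into the
actual download commands is left out):

    import socket

    def detect_cachttps(name="cachttps.internal"):
        # Return the address of a cachttps instance, or None
        # if the name doesn't resolve.
        try:
            infos = socket.getaddrinfo(name, 443,
                                       proto=socket.IPPROTO_TCP)
            return infos[0][4][0]  # first resolved address
        except socket.gaierror:
            return None

    proxy = detect_cachttps()
    if proxy:
        print(f"Using cachttps instance at {proxy}")
    else:
        print("No cachttps found; accessing GitHub directly")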
The first iteration on Proxmox support relied on a single
template image hosted on shared storage. This new iteration
relies on template images hosted on local storage. It will
detect which template VM to use on each node thanks to its
tags (see the sketch below).
Note: later, we'll need to expose an easy way to switch
between shared-store and local-store template images.
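For reference, here is a rough Python sketch of that tag-based
lookup, using the proxmoxer library (the actual provisioning
code may do this differently; the tag name below is
hypothetical):

    from proxmoxer import ProxmoxAPI

    def find_template_vmid(api, node, wanted_tag="training-template"):
        # Proxmox stores tags as a semicolon-separated string
        # in the VM configuration.
        for vm in api.nodes(node).qemu.get():
            config = api.nodes(node).qemu(vm["vmid"]).config.get()
            if not config.get("template"):
                continue
            tags = set(config.get("tags", "").split(";"))
            if wanted_tag in tags:
                return vm["vmid"]
        return None

    api = ProxmoxAPI("proxmox.example.com", user="root@pam",
                     password="...", verify_ssl=False)
    print(find_template_vmid(api, "node1"))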
Multiple small changes to allow deployment in IPv6-only environments.
What we do:
- detect whether we are in an IPv6-only environment
(see the sketch below)
- if yes, specify a service CIDR and a listening address
(otherwise, kubeadm would pick the IPv4 address for the API server)
- switch to Cilium
Also minor changes to pssh and terraform to handle pinging and
connecting to IPv6 addresses.
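One way to express the IPv6-only check (this is a sketch, not
necessarily what the deployment scripts do; the probe addresses
are documentation addresses, and no packet is actually sent):

    import socket

    def has_route(family, probe_addr):
        # "Connecting" a UDP socket doesn't send anything; it
        # only asks the kernel whether a route and a source
        # address exist for that address family.
        try:
            with socket.socket(family, socket.SOCK_DGRAM) as s:
                s.connect((probe_addr, 53))
            return True
        except OSError:
            return False

    ipv4 = has_route(socket.AF_INET, "192.0.2.1")     # TEST-NET-1
    ipv6 = has_route(socket.AF_INET6, "2001:db8::1")  # doc prefix
    print("IPv6-only:", ipv6 and not ipv4)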
We want to be able to run on IPv6-only clusters
(as well as on legacy IPv4 clusters and on dual-stack
clusters). This requires minor changes in the code,
because in multiple places, we were binding listening
sockets explicitly to 0.0.0.0. We now bind to :: instead,
and in some cases, we make it easier to change that if
needed (e.g. through environment variables).
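In Python, the change typically looks like this (the
LISTEN_ADDR and LISTEN_PORT variables below are just an
example of the environment-variable override, not necessarily
the names we use):

    import os
    import socket

    # Default to "::" (all IPv6 addresses). With IPV6_V6ONLY
    # cleared, the socket also accepts IPv4 clients on
    # dual-stack hosts.
    addr = os.environ.get("LISTEN_ADDR", "::")
    port = int(os.environ.get("LISTEN_PORT", "8080"))

    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((addr, port))
    sock.listen()
    print(f"Listening on [{addr}]:{port}")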
- detect which EKS version to use
(instead of hard-coding it in the TF config)
- do not issue a CSR on EKS
(because EKS is broken and doesn't support it)
- automatically install a StorageClass on EKS
(because the EBS CSI add-on doesn't install one by default;
see the sketch below)
- put EKS clusters in the default VPC
(instead of creating one VPC per cluster,
since there is a default limit of 5 VPCs per region)
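To illustrate the StorageClass part, here is roughly what gets
created, expressed with the Python Kubernetes client (the class
name, gp3 parameters, and default-class annotation are
assumptions; the actual automation may apply a YAML manifest
instead):

    from kubernetes import client, config

    config.load_kube_config()

    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(
            name="gp3",
            annotations={
                "storageclass.kubernetes.io/is-default-class": "true"
            },
        ),
        provisioner="ebs.csi.aws.com",
        parameters={"type": "gp3"},
        volume_binding_mode="WaitForFirstConsumer",
        reclaim_policy="Delete",
    )
    client.StorageV1Api().create_storage_class(sc)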
- add support for provisioning VMs on googlecloud
- refactor the way we define the project used by Terraform
(we'll now use the GOOGLE_PROJECT environment variable,
and if it's not set, we'll set it automatically by getting
the default project from the gcloud CLI)
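The fallback logic is roughly equivalent to the Python sketch
below (the real code feeds this into Terraform; only the
GOOGLE_PROJECT variable and the gcloud command come from the
description above):

    import os
    import subprocess

    def google_project():
        # Prefer the GOOGLE_PROJECT environment variable; if it
        # isn't set, fall back to the default project configured
        # in the gcloud CLI.
        project = os.environ.get("GOOGLE_PROJECT")
        if project:
            return project
        return subprocess.check_output(
            ["gcloud", "config", "get-value", "project"],
            text=True,
        ).strip()

    os.environ["GOOGLE_PROJECT"] = google_project()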