(Probably due to a Kubernetes version mismatch: vcluster was on 1.33 and
the host cluster was on 1.35. Symptoms: some pods start, all their
containers are ready, and the pod itself shows as ready; yet it doesn't
get counted as ready, so the deployment reports 0/1 and Helm never completes.)
The first iteration of Proxmox support relied on a single
template image hosted on shared storage. This new iteration
relies on template images hosted on local storage. It detects
which template VM to use on each node based on its tags.
Note: later, we'll need to expose an easy way to switch
between shared-store and local-store template images.
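For reference, the tag-based lookup amounts to something like the
following (a sketch only: the node name, the tag, and the use of
pvesh + jq are illustrative; the actual selection may well happen
through the Terraform provider instead):

    # Find the template VM carrying a given tag on a given node.
    # Recent Proxmox VE releases expose "tags" and "template" in
    # the /cluster/resources API output.
    NODE=pve1
    TAG=lab-template
    pvesh get /cluster/resources --type vm --output-format json \
      | jq -r --arg node "$NODE" --arg tag "$TAG" '
          .[]
          | select(.node == $node and .template == 1
                   and ((.tags // "") | split(";") | index($tag)))
          | .vmid'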
Multiple small changes to allow deployment in IPv6-only environments.
What we do:
- detect if we are in an IPv6-only environment
- if yes, specify a service CIDR and listening address
(kubeadm will otherwise pick the IPv4 address for the API server)
- switch to Cilium
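The first two steps amount to something like this (a sketch: the
detection heuristic, the CIDR, and the use of plain 'kubeadm init'
flags are illustrative rather than the exact implementation):

    # Treat the environment as IPv6-only when there is no default IPv4 route.
    if ! ip -4 route show default | grep -q default; then
      # Pin the API server to the node's global IPv6 address and
      # give kubeadm an explicit IPv6 service CIDR.
      NODE_IP=$(ip -6 -o addr show scope global | awk '{print $4}' | cut -d/ -f1 | head -n1)
      kubeadm init \
        --apiserver-advertise-address="$NODE_IP" \
        --service-cidr="fd00:10:96::/112"
    fi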
Also minor changes to pssh and terraform to handle pinging and
connecting to IPv6 addresses.
- detect which EKS version to use
(instead of hard-coding it in the TF config)
- do not issue a CSR on EKS
(because EKS is broken and doesn't support it)
- automatically install a StorageClass on EKS, as sketched below
(because the EBS CSI addon doesn't install one by default)
- put EKS clusters in the default VPC
(instead of creating one VPC per cluster,
since there is a default limit of 5 VPCs per region)
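The StorageClass part boils down to something like this (a sketch:
the class name and gp3 parameters are assumptions, not necessarily
what gets installed; a manifest file or heredoc would work just as well):

    # Create a default StorageClass backed by the EBS CSI driver,
    # since the addon does not ship one.
    printf '%s\n' \
      'apiVersion: storage.k8s.io/v1' \
      'kind: StorageClass' \
      'metadata:' \
      '  name: gp3' \
      '  annotations:' \
      '    storageclass.kubernetes.io/is-default-class: "true"' \
      'provisioner: ebs.csi.aws.com' \
      'parameters:' \
      '  type: gp3' \
      'volumeBindingMode: WaitForFirstConsumer' \
      | kubectl apply -f -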
- add support to provision VMs on googlecloud
- refactor the way we define the project used by Terraform
(we'll now use the GOOGLE_PROJECT environment variable,
and if it's not set, we'll set it automatically by getting
the default project from the gcloud CLI)
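In shell terms, the fallback amounts to this minimal sketch of the idea:

    # Use GOOGLE_PROJECT if it is already set; otherwise fall back to
    # the project configured in the gcloud CLI.
    export GOOGLE_PROJECT="${GOOGLE_PROJECT:-$(gcloud config get-value project 2>/dev/null)}"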
Instead of passing an image name through a Terraform variable,
select the most recent image matching the specified tags
(in this case, os=Ubuntu version=22.04).
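Purely as an illustration of this kind of lookup (the CLI, tag keys,
and values below are assumptions, not the actual implementation),
a "most recent image matching these tags" query can look like this
with the AWS CLI:

    # Most recent image tagged os=Ubuntu and version=22.04.
    aws ec2 describe-images --owners self \
      --filters "Name=tag:os,Values=Ubuntu" "Name=tag:version,Values=22.04" \
      --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
      --output text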
- instead of using 'kubectl wait nodes', we now use a simpler
'kubectl get nodes -o name' and check if there is anything
in the output. This seems to work better (as the previous
method would sometimes remain stuck because the kubectl
process would never get stopped by SIGPIPE).
- the shpod SSH NodePort is no longer hard-coded to 32222,
which allows us to use e.g. vcluster to deploy multiple
Kubernetes labs on a single 'home' (or 'outer') Kubernetes
cluster.
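Both of these are easy to sketch in shell (illustrative only; the
polling interval, service name, and namespace are assumptions):

    # Wait until the API server lists at least one node
    # (replaces the 'kubectl wait nodes' approach).
    until [ -n "$(kubectl get nodes -o name 2>/dev/null)" ]; do
      sleep 5
    done

    # Look up the NodePort assigned to the shpod SSH service
    # instead of assuming 32222.
    kubectl get service shpod --namespace shpod \
      -o jsonpath='{.spec.ports[0].nodePort}'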
Break down provider-specific configuration into two files:
- config.tf (actual configuration, e.g. credentials, that cannot be
included in submodules)
- variables.tf (per-provider knobs and settings, e.g. mapping logical
VM sizes like S/M/L to actual cloud SKUs)
Summary of changes:
- "workshopctl" is now "labctl"
- it can handle deployment not only of VMs but also of managed
Kubernetes clusters (and therefore, it replaces
the "prepare-tf" directory)
- support for many more providers has been added
Check the README.md, in particular the "directory structure" section;
it has the most important information.