docs: update to latest features

committed by Dario Tranchitella
parent da924b30ff
commit 713b0754bb
@@ -20,7 +20,7 @@ Global hyper-scalers are leading the Managed Kubernetes space, while other cloud
**Kamaji** aims to solve these pains by leveraging multi-tenancy and simplifying how to run multiple control planes on the same infrastructure with a fraction of the operational burden.

## How it works

Kamaji turns any Kubernetes cluster into an _“admin cluster”_ to orchestrate other Kubernetes clusters called _“tenant clusters”_. What makes Kamaji special is that the Control Planes of _“tenant clusters”_ are just regular pods running in the _“admin cluster”_ instead of dedicated Virtual Machines. This solution makes running control planes at scale cheaper and easier to deploy and operate.
Kamaji turns any Kubernetes cluster into an _“admin cluster”_ to orchestrate other Kubernetes clusters called _“tenant clusters”_. Kamaji is special because the Control Planes of _“tenant clusters”_ are just regular pods instead of dedicated Virtual Machines. This solution makes running Control Planes at scale cheaper and easier to deploy and operate.



@@ -15,7 +15,7 @@ export KAMAJI_NAMESPACE=kamaji-system
export TENANT_NAMESPACE=default
export TENANT_NAME=tenant-00
export TENANT_DOMAIN=$KAMAJI_REGION.cloudapp.azure.com
export TENANT_VERSION=v1.25.0
export TENANT_VERSION=v1.26.0
export TENANT_PORT=6443 # port used to expose the tenant api server
export TENANT_PROXY_PORT=8132 # port used to expose the konnectivity server
export TENANT_POD_CIDR=10.36.0.0/16

@@ -5,7 +5,7 @@ export KAMAJI_NAMESPACE=kamaji-system
export TENANT_NAMESPACE=default
export TENANT_NAME=tenant-00
export TENANT_DOMAIN=clastix.labs
export TENANT_VERSION=v1.25.0
export TENANT_VERSION=v1.26.0
export TENANT_PORT=6443 # port used to expose the tenant api server
export TENANT_PROXY_PORT=8132 # port used to expose the konnectivity server
export TENANT_POD_CIDR=10.36.0.0/16

@@ -11,13 +11,13 @@ prometheus-stack:
	helm repo update
	helm install prometheus-stack --create-namespace -n monitoring prometheus-community/kube-prometheus-stack

reqs: kind ingress-nginx etcd-cluster cert-manager
reqs: kind ingress-nginx cert-manager

cert-manager:
	@kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml

kamaji: reqs
	@kubectl apply -f $(kind_path)/../../config/install.yaml
	helm install kamaji --create-namespace -n kamaji-system $(kind_path)/../../charts/kamaji

destroy: kind/destroy etcd-certificates/cleanup

@@ -7,7 +7,7 @@ export DOCKER_IMAGE_NAME="kindest/node"
export DOCKER_NETWORK="kind"

# Variables
export KUBERNETES_VERSION=${1:-v1.23.5}
export KUBERNETES_VERSION=${1:-v1.23.4}
export KUBECONFIG="${KUBECONFIG:-/tmp/kubeconfig}"

if [ -z $2 ]

@@ -11,9 +11,9 @@ These are requirements of the design behind Kamaji:
Goals and scope may vary as the project evolves.

## Tenant Control Plane

What makes Kamaji special is that the Control Plane of a _“tenant cluster”_ is just one or more regular pods running in a namespace of the _“admin cluster”_ instead of a dedicated set of Virtual Machines. This solution makes running control planes at scale cheaper and easier to deploy and operate. The Tenant Control Plane components are packaged in the same way they are running in bare metal or virtual nodes. We leverage the `kubeadm` code to set up the control plane components as if they were running on their own server. The unchanged images of upstream `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager` are used.
Kamaji is special because the Control Planes of the _“tenant cluster”_ are regular pods running in a namespace of the _“admin cluster”_ instead of a dedicated set of Virtual Machines. This solution makes running Control Planes at scale cheaper and easier to deploy and operate. The Tenant Control Plane components are packaged in the same way they are running in bare metal or virtual nodes. We leverage the `kubeadm` code to set up the control plane components as if they were running on their own server. The unchanged images of upstream `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager` are used.

High Availability and rolling updates of the Tenant Control Plane pods are provided by a regular Deployment. Autoscaling based on metrics is available. A Service is used to expose the Tenant Control Plane outside of the _“admin cluster”_. The `LoadBalancer` service type is used; `NodePort` and `ClusterIP` with an Ingress Controller are still viable options, depending on the case.
High Availability and rolling updates of the Tenant Control Plane pods are provided by a regular Deployment. Autoscaling based on metrics is available. A Service is used to expose the Tenant Control Plane outside of the _“admin cluster”_. The `LoadBalancer` service type is used; `NodePort` and `ClusterIP` are other viable options, depending on the case.

Kamaji offers a [Custom Resource Definition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) to provide a declarative approach of managing a Tenant Control Plane. This *CRD* is called `TenantControlPlane`, or `tcp` for short.

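A minimal `TenantControlPlane` manifest might look like the sketch below. The values are illustrative and the exact schema should be checked against the CRD; `serviceType`, `kubelet.cgroupfs`, and the `networkProfile` values in particular are assumptions to adapt to your environment:

```yaml
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-00
  namespace: default
spec:
  controlPlane:
    deployment:
      replicas: 2                  # Tenant Control Plane pods, managed by a Deployment
    service:
      serviceType: LoadBalancer    # how the tenant API server is exposed
  kubernetes:
    version: v1.26.0               # upstream Kubernetes version of the tenant cluster
    kubelet:
      cgroupfs: systemd
  networkProfile:
    port: 6443                     # port of the tenant kube-apiserver
```
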
@@ -25,9 +25,7 @@ And what about the tenant worker nodes? They are just _"worker nodes"_, i.e. reg

We have the Cluster API support on the roadmap, as well as a Terraform provider, so that you can create _“tenant clusters”_ in a declarative way.

## Datastores

Putting the Tenant Control Plane in a pod is the easiest part. We also have to make sure each tenant cluster saves its state so that it can store and retrieve data. A dedicated `etcd` cluster for each tenant cluster doesn’t scale well for a managed service because `etcd` data persistence can be cumbersome at scale, raising the operational effort to mitigate it. So we have to find an alternative, keeping in mind our goal of a resilient and cost-optimized solution at the same time.

As we can deploy any Kubernetes cluster with an external `etcd` cluster, we explored this option for the tenant control planes. On the admin cluster, we can deploy a multi-tenant `etcd` datastore to save the state of multiple tenant clusters. Kamaji offers a Custom Resource Definition called `DataStore` to provide a declarative approach of managing Tenant datastores. With this solution, the resiliency is guaranteed by the usual `etcd` mechanism, and the pods' count remains under control, so it solves the main goal of resiliency and costs optimization. The trade-off here is that we have to operate an external datastore, in addition to `etcd` of the _“admin cluster”_, and manage the access to be sure that each _“tenant cluster”_ uses only its own data.
Putting the Tenant Control Plane in a pod is the easiest part. We also have to make sure each tenant cluster saves its state so that it can store and retrieve data. As we can deploy a Kubernetes cluster with an external `etcd` cluster, we explored this option for the Tenant Control Planes. On the admin cluster, you can deploy one or more multi-tenant `etcd` instances to save the state of multiple tenant clusters. Kamaji offers a Custom Resource Definition called `DataStore` to provide a declarative approach of managing multiple datastores. By sharing the datastore between multiple tenants, the resiliency is still guaranteed and the pods' count remains under control, so it solves the main goal of resiliency and costs optimization. The trade-off here is that you have to operate external datastores, in addition to `etcd` of the _“admin cluster”_, and manage the access to be sure that each _“tenant cluster”_ uses only its own data.

### Other storage drivers

Kamaji offers the option of using a more capable datastore than `etcd` to save the state of multiple tenants' clusters. Thanks to the native [kine](https://github.com/k3s-io/kine) integration, you can run _MySQL_ or _PostgreSQL_ compatible databases as datastores for _“tenant clusters”_.

@@ -35,6 +33,11 @@ Kamaji offers the option of using a more capable datastore than `etcd` to save t

### Pooling

By default, Kamaji expects to persist all the _“tenant clusters”_ data in a single datastore that could be backed by different drivers. However, you can pick a different datastore for a specific set of _“tenant clusters”_ that could have different resources assigned or a different tiering. Pooling of multiple datastores is an option you can leverage for a very large set of _“tenant clusters”_ so you can distribute the load properly. As a future improvement, we have a _datastore scheduler_ feature on the roadmap so that Kamaji itself can automatically assign a _“tenant cluster”_ to the best datastore in the pool.
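As an illustration of picking a datastore per tenant, a `TenantControlPlane` references its datastore declaratively through `spec.dataStore` (the `dedicated` name below is just an example of a non-default pool member):

```yaml
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-42
  namespace: default
spec:
  # Reference an existing DataStore resource; when omitted, Kamaji falls back
  # to the default datastore configured on the controller (see the --datastore flag).
  dataStore: dedicated
```
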
### Migration

In order to simplify Day-2 operations and reduce the operational burden, Kamaji provides the capability to live migrate data from one datastore to another one of the same driver, without manual and error-prone backup and restore operations.

> Currently, live data migration is only available between datastores having the same driver.

## Konnectivity

In addition to the standard control plane containers, Kamaji creates an instance of [konnectivity-server](https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/) running as a sidecar container in the `tcp` pod and exposed on port `8132` of the `tcp` service.

@@ -8,15 +8,15 @@ We assume you have installed on your workstation:

- [Docker](https://docker.com)
- [KinD](https://kind.sigs.k8s.io/)
- [kubectl@v1.25.0](https://kubernetes.io/docs/tasks/tools/#kubectl)
- [kubeadm@v1.25.0](https://kubernetes.io/docs/tasks/tools/#kubeadm)
- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
- [kubeadm](https://kubernetes.io/docs/tasks/tools/#kubeadm)
- [Helm](https://helm.sh/docs/intro/install/)
- [jq](https://stedolan.github.io/jq/)
- [openssl](https://www.openssl.org/)
- [cfssl/cfssljson](https://github.com/cloudflare/cfssl)

> Starting from Kamaji v0.0.2, `kubectl` and `kubeadm` need to meet at least the minimum version of `v1.25.0`:
> this is required due to the changes introduced in the Kubernetes 1.25 release regarding the `kubelet-config` ConfigMap required for the node join.
> Starting from Kamaji v0.1.0, `kubectl` and `kubeadm` need to meet at least the minimum version of `v1.25.0`, due to the changes regarding the `kubelet-config` ConfigMap required for the node join.

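As a quick sanity check before proceeding, you can verify the client versions already installed on your workstation (assuming both binaries are on your `PATH`):

```bash
# Print the locally installed client versions; both must be >= v1.25.0.
kubectl version --client
kubeadm version -o short
```
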
## Setup Kamaji on KinD

@@ -26,81 +26,65 @@ The instance of Kamaji is made of a single node hosting:
- admin worker
- multi-tenant datastore

### Standard installation
### Standard Installation

You can install your KinD cluster, ETCD multi-tenant cluster and Kamaji operator with a **single command**:
You can install your KinD cluster, an `etcd` based multi-tenant datastore and the Kamaji operator with a **single command**:

```bash
$ make -C deploy/kind
```

Now you can [create your first `TenantControlPlane`](#deploy-tenant-control-plane).
Now you can deploy a [`TenantControlPlane`](#deploy-tenant-control-plane).

### Data store-specific
### Installation with alternative datastore drivers

Kamaji offers the possibility of using a different storage system than `ETCD` for the tenants, like `MySQL` or `PostgreSQL` compatible databases.
Kamaji offers the possibility of using a different storage system than `etcd` as the datastore, like `MySQL` or `PostgreSQL` compatible databases.

First, setup a KinD cluster:
First, setup a KinD cluster and the other requirements:

```bash
$ make -C deploy/kind kind
$ make -C deploy/kind reqs
```

#### ETCD
Install one of the alternative supported databases:

Deploy a multi-tenant `ETCD` cluster into the Kamaji node:
- **MySQL**: install it with the command

  `$ make -C deploy/kine/mysql mariadb`

- **PostgreSQL**: install it with the command

  `$ make -C deploy/kine/postgresql postgresql`

Then use Helm to install the Kamaji Operator and make sure it uses a datastore with the proper driver `datastore.driver=<MySQL|PostgreSQL>`.

For example, with a PostgreSQL datastore:

```bash
$ make -C deploy/kind etcd-cluster
helm install kamaji charts/kamaji -n kamaji-system --create-namespace \
  --set etcd.deploy=false \
  --set datastore.driver=PostgreSQL \
  --set datastore.endpoints[0]=postgres-default-rw.kamaji-system.svc:5432 \
  --set datastore.basicAuth.usernameSecret.name=postgres-default-superuser \
  --set datastore.basicAuth.usernameSecret.namespace=kamaji-system \
  --set datastore.basicAuth.usernameSecret.keyPath=username \
  --set datastore.basicAuth.passwordSecret.name=postgres-default-superuser \
  --set datastore.basicAuth.passwordSecret.namespace=kamaji-system \
  --set datastore.basicAuth.passwordSecret.keyPath=password \
  --set datastore.tlsConfig.certificateAuthority.certificate.name=postgres-default-ca \
  --set datastore.tlsConfig.certificateAuthority.certificate.namespace=kamaji-system \
  --set datastore.tlsConfig.certificateAuthority.certificate.keyPath=ca.crt \
  --set datastore.tlsConfig.certificateAuthority.privateKey.name=postgres-default-ca \
  --set datastore.tlsConfig.certificateAuthority.privateKey.namespace=kamaji-system \
  --set datastore.tlsConfig.certificateAuthority.privateKey.keyPath=ca.key \
  --set datastore.tlsConfig.clientCertificate.certificate.name=postgres-default-root-cert \
  --set datastore.tlsConfig.clientCertificate.certificate.namespace=kamaji-system \
  --set datastore.tlsConfig.clientCertificate.certificate.keyPath=tls.crt \
  --set datastore.tlsConfig.clientCertificate.privateKey.name=postgres-default-root-cert \
  --set datastore.tlsConfig.clientCertificate.privateKey.namespace=kamaji-system \
  --set datastore.tlsConfig.clientCertificate.privateKey.keyPath=tls.key
```

Now you're ready to [install Kamaji operator](#install-kamaji).

#### MySQL

Deploy a MySQL/MariaDB backend into the Kamaji node:

```bash
$ make -C deploy/kine/mysql mariadb
```

Adjust the Kamaji install manifest `config/install.yaml` according to the example of a MySQL DataStore `config/samples/kamaji_v1alpha1_datastore_mysql.yaml` and make sure Kamaji uses the proper datastore name:

```
--datastore={.metadata.name}
```

Now you're ready to [install Kamaji operator](#install-kamaji).

#### PostgreSQL

Deploy a PostgreSQL backend into the Kamaji node:

```bash
$ make -C deploy/kine/postgresql postgresql
```

Adjust the Kamaji install manifest `config/install.yaml` according to the example of a PostgreSQL DataStore `config/samples/kamaji_v1alpha1_datastore_postgresql.yaml` and make sure Kamaji uses the proper datastore name:

```
--datastore={.metadata.name}
```

Now you're ready to [install Kamaji operator](#install-kamaji).

### Install Kamaji

Kamaji takes advantage of the [dynamic admission control](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), such as validating and mutating webhook configurations.
These webhooks are secured by a TLS communication, and the certificates are managed by [`cert-manager`](https://cert-manager.io/), making it a prerequisite that must be [installed](https://cert-manager.io/docs/installation/).

```bash
$ kubectl apply -f config/install.yaml
```

> Please note that this single YAML manifest is missing some required automations.
> The preferred way to install Kamaji is using its Helm Chart.
> Please refer to the section [**Setup Kamaji on a generic infrastructure**](/guides/kamaji-deployment-guide#install-kamaji-controller).

### Deploy Tenant Control Plane

Now it's time to deploy your first Tenant Control Plane.

@@ -156,7 +140,7 @@ EOF

> Check the `networkProfile` fields according to your installation.
> To let Kamaji work in KinD, you have to indicate that the service must be of type [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport).

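For reference, a hedged sketch of the relevant `TenantControlPlane` fields for a KinD setup is shown below; the field names follow the `tcp` spec used throughout these guides, while the address and port values are purely illustrative and must match your environment:

```yaml
spec:
  controlPlane:
    service:
      serviceType: NodePort    # KinD provides no LoadBalancer out of the box
  networkProfile:
    address: 172.18.0.2        # illustrative: an address reachable from the worker nodes
    port: 31443                # illustrative: the port the tenant API server is reachable on
```
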
### Get Kubeconfig
### Get the kubeconfig

Let's retrieve the kubeconfig and store it in `/tmp/kubeconfig`.

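A minimal sketch of how the kubeconfig can be extracted, assuming the admin kubeconfig is stored by Kamaji in the `${TENANT_NAME}-admin-kubeconfig` Secret under an `admin.conf` key (verify the secret and key names in your cluster before relying on them):

```bash
# Extract the tenant admin kubeconfig generated by Kamaji and store it locally.
kubectl get secret -n ${TENANT_NAMESPACE} ${TENANT_NAME}-admin-kubeconfig \
  -o jsonpath='{.data.admin\.conf}' | base64 -d > /tmp/kubeconfig

# Point kubectl at the tenant cluster.
kubectl --kubeconfig=/tmp/kubeconfig cluster-info
```
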
docs/content/guides/datastore-migration.md (new file, 171 lines)
@@ -0,0 +1,171 @@

# Datastore Migration

On the admin cluster, you can deploy one or more multi-tenant datastores such as `etcd`, `PostgreSQL`, and `MySQL` to save the state of the tenant clusters. A Tenant Control Plane can be migrated from one datastore to another without service disruption and without complex and error-prone backup & restore procedures.

This guide will assist you in live-migrating a Tenant's data from one datastore to another having the same `etcd` driver.

## Prerequisites

Assume you have a Tenant Control Plane using the default datastore:

``` shell
kubectl get tcp
NAME        VERSION   STATUS   CONTROL-PLANE ENDPOINT   KUBECONFIG                   DATASTORE   AGE
tenant-00   v1.25.2   Ready    192.168.32.200:6443      tenant-00-admin-kubeconfig   default     8d
```

You can check the custom resource called `DataStore` providing a declarative description of the `default` datastore:

```yaml
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
  annotations:
  labels:
  name: default
spec:
  driver: etcd
  endpoints:
  - etcd-0.etcd.kamaji-system.svc.cluster.local:2379
  - etcd-1.etcd.kamaji-system.svc.cluster.local:2379
  - etcd-2.etcd.kamaji-system.svc.cluster.local:2379
  tlsConfig:
    certificateAuthority:
      certificate:
        secretReference:
          keyPath: ca.crt
          name: etcd-certs
          namespace: kamaji-system
      privateKey:
        secretReference:
          keyPath: ca.key
          name: etcd-certs
          namespace: kamaji-system
    clientCertificate:
      certificate:
        secretReference:
          keyPath: tls.crt
          name: etcd-root-client-certs
          namespace: kamaji-system
      privateKey:
        secretReference:
          keyPath: tls.key
          name: etcd-root-client-certs
          namespace: kamaji-system
status:
  usedBy:
  - default/tenant-00
```

The `default` datastore is installed by the Kamaji Helm chart in the same namespace hosting the controller:

```shell
kubectl -n kamaji-system get pods
NAME                      READY   STATUS    RESTARTS   AGE
etcd-0                    1/1     Running   0          23d
etcd-1                    1/1     Running   0          23d
etcd-2                    1/1     Running   0          23d
kamaji-5d6cdfbbb9-bn27f   1/1     Running   0          2d19h
```

## Install a new datastore

A managed datastore is highly recommended in production. The [kamaji-etcd](https://github.com/clastix/kamaji-etcd) project provides a viable option to set up a managed multi-tenant `etcd` running as a StatefulSet made of three replicas:

```bash
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install dedicated clastix/kamaji-etcd -n dedicated --create-namespace --set datastore.enabled=true
```

You should end up with a new datastore `dedicated` provided by an `etcd` cluster:

```yaml
kubectl get datastore dedicated -o yaml
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
  annotations:
  labels:
  name: dedicated
spec:
  driver: etcd
  endpoints:
  - dedicated-0.dedicated.dedicated.svc.cluster.local:2379
  - dedicated-1.dedicated.dedicated.svc.cluster.local:2379
  - dedicated-2.dedicated.dedicated.svc.cluster.local:2379
  tlsConfig:
    certificateAuthority:
      certificate:
        secretReference:
          keyPath: ca.crt
          name: dedicated-certs
          namespace: dedicated
      privateKey:
        secretReference:
          keyPath: ca.key
          name: dedicated-certs
          namespace: dedicated
    clientCertificate:
      certificate:
        secretReference:
          keyPath: tls.crt
          name: dedicated-root-client-certs
          namespace: dedicated
      privateKey:
        secretReference:
          keyPath: tls.key
          name: dedicated-root-client-certs
          namespace: dedicated
status: {}
```

Check the `etcd` cluster:

```bash
kubectl -n dedicated get sts,pods,pvc
NAME                         READY   AGE
statefulset.apps/dedicated   3/3     25h

NAME              READY   STATUS    RESTARTS   AGE
pod/dedicated-0   1/1     Running   0          25h
pod/dedicated-1   1/1     Running   0          25h
pod/dedicated-2   1/1     Running   0          25h

NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-dedicated-0   Bound    pvc-a5c66737-ef78-4689-b863-037f8382ed78   10Gi       RWO            local-path     25h
persistentvolumeclaim/data-dedicated-1   Bound    pvc-1e9f77eb-89f3-4256-9508-c18b71fca7ea   10Gi       RWO            local-path     25h
persistentvolumeclaim/data-dedicated-2   Bound    pvc-957c4802-1e7c-4f37-ac01-b89ad1fa9fdb   10Gi       RWO            local-path     25h
```

## Migrate data

To migrate data from the current `default` datastore to the new dedicated one, patch the Tenant Control Plane `tenant-00` to use the new `dedicated` datastore:

```shell
kubectl patch --type merge tcp tenant-00 -p '{"spec": {"dataStore": "dedicated"}}'
```

and check the process happening in real time:

```shell
kubectl get tcp -w
NAME        VERSION   STATUS      CONTROL-PLANE ENDPOINT   KUBECONFIG                   DATASTORE   AGE
tenant-00   v1.25.2   Ready       192.168.32.200:6443      tenant-00-admin-kubeconfig   default     9d
tenant-00   v1.25.2   Migrating   192.168.32.200:6443      tenant-00-admin-kubeconfig   default     9d
tenant-00   v1.25.2   Migrating   192.168.32.200:6443      tenant-00-admin-kubeconfig   default     9d
tenant-00   v1.25.2   Migrating   192.168.32.200:6443      tenant-00-admin-kubeconfig   dedicated   9d
tenant-00   v1.25.2   Migrating   192.168.32.200:6443      tenant-00-admin-kubeconfig   dedicated   9d
tenant-00   v1.25.2   Ready       192.168.32.200:6443      tenant-00-admin-kubeconfig   dedicated   9d
```

During the datastore migration, the Tenant Control Plane is put in read-only mode to avoid misalignments between source and destination datastores. If tenant users try to update the data, an admission controller denies the request with the following message:

```shell
Error from server (the current Control Plane is in freezing mode due to a maintenance mode,
all the changes are blocked: removing the webhook may lead to an inconsistent state upon its completion):
admission webhook "catchall.migrate.kamaji.clastix.io" denied the request
```

After a while, depending on the amount of data to migrate, the Tenant Control Plane is put back in full operating mode by the Kamaji controller.

> Please note that the datastore migration leaves the data on the default datastore, so you have to remove it manually.

@@ -13,7 +13,6 @@ The guide requires:

* [Prepare the bootstrap workspace](#prepare-the-bootstrap-workspace)
* [Access Admin cluster](#access-admin-cluster)
* [Install DataStore](#install-datastore)
* [Install Kamaji controller](#install-kamaji-controller)
* [Create Tenant Cluster](#create-tenant-cluster)
* [Cleanup](#cleanup)

@@ -96,21 +95,13 @@ And check you can access:

kubectl cluster-info
```

## Install datastore

The Kamaji controller needs to access a multi-tenant datastore in order to save the data of the tenants' clusters. The Kamaji Helm Chart provides the installation of an unmanaged `etcd`. However, a managed `etcd` is highly recommended in production.

As an alternative, the [kamaji-etcd](https://github.com/clastix/kamaji-etcd) project provides a viable option to set up a managed multi-tenant `etcd` as a 3-replica StatefulSet with data persistence:

```bash
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install etcd clastix/kamaji-etcd -n kamaji-system --create-namespace
```

Optionally, Kamaji offers the possibility of using a different storage system for the tenants' clusters, such as a MySQL or PostgreSQL compatible database, thanks to the native [kine](https://github.com/k3s-io/kine) integration.

## Install Kamaji Controller

Install Kamaji with `helm` using an unmanaged `etcd` as datastore:

Kamaji takes advantage of the [dynamic admission control](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), such as validating and mutating webhook configurations. These webhooks are secured by a TLS communication, and the certificates are managed by [`cert-manager`](https://cert-manager.io/), making it a prerequisite that must be [installed](https://cert-manager.io/docs/installation/).

The Kamaji controller needs to access a default datastore in order to save the data of the tenants' clusters. The Kamaji Helm Chart provides the installation of a basic unmanaged `etcd` out of the box.

Install Kamaji with `helm` using an unmanaged `etcd` as default datastore:

```bash
helm repo add clastix https://clastix.github.io/charts
@@ -118,15 +109,7 @@ helm repo update

helm install kamaji clastix/kamaji -n kamaji-system --create-namespace
```

Alternatively, if you opted for a managed `etcd` datastore:

```
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install kamaji clastix/kamaji -n kamaji-system --create-namespace --set etcd.deploy=false
```

Congratulations! You just turned your Azure Kubernetes AKS cluster into a Kamaji cluster capable of running multiple Tenant Control Planes.
A managed datastore is highly recommended in production. The [kamaji-etcd](https://github.com/clastix/kamaji-etcd) project provides a viable option to set up a managed multi-tenant `etcd` running as a StatefulSet made of three replicas. Optionally, Kamaji offers support for a different storage system, such as a `MySQL` or `PostgreSQL` compatible database, thanks to the native [kine](https://github.com/k3s-io/kine) integration.

## Create Tenant Cluster

@@ -146,6 +129,7 @@ metadata:
  name: ${TENANT_NAME}
  namespace: ${TENANT_NAMESPACE}
spec:
  dataStore: default
  controlPlane:
    deployment:
      replicas: 3
@@ -171,7 +155,7 @@ spec:
          requests:
            cpu: 125m
            memory: 256Mi
          limits: {}
          limits: {}
    service:
      additionalMetadata:
        labels:

@@ -13,7 +13,6 @@ The guide requires:

* [Prepare the bootstrap workspace](#prepare-the-bootstrap-workspace)
* [Access Admin cluster](#access-admin-cluster)
* [Install DataStore](#install-datastore)
* [Install Kamaji controller](#install-kamaji-controller)
* [Create Tenant Cluster](#create-tenant-cluster)
* [Cleanup](#cleanup)

@@ -42,30 +41,26 @@ Throughout the following instructions, shell variables are used to indicate valu

source kamaji.env
```

Any regular and conformant Kubernetes v1.22+ cluster can be turned into a Kamaji setup. To work properly, the admin cluster should provide at least:
Any regular and conformant Kubernetes v1.22+ cluster can be turned into a Kamaji setup. To work properly, the admin cluster should provide:

- CNI module installed, e.g. [Calico](https://github.com/projectcalico/calico), [Cilium](https://github.com/cilium/cilium).
- CSI module installed with a Storage Class for the Tenants' `etcd`. Local Persistent Volumes are an option.
- Support for LoadBalancer Service Type, or alternatively, an Ingress Controller, e.g. [ingress-nginx](https://github.com/kubernetes/ingress-nginx), [haproxy](https://github.com/haproxytech/kubernetes-ingress).
- Monitoring Stack, e.g. [Prometheus](https://github.com/prometheus-community).
- CSI module installed with a Storage Class for the Tenant datastores. Local Persistent Volumes are an option.
- Support for LoadBalancer service type, e.g. [MetalLB](https://metallb.universe.tf/), or alternatively, an Ingress Controller, e.g. [ingress-nginx](https://github.com/kubernetes/ingress-nginx), [haproxy](https://github.com/haproxytech/kubernetes-ingress).
- Optionally, a Monitoring Stack installed, e.g. [Prometheus](https://github.com/prometheus-community).

Make sure you have a `kubeconfig` file with admin permissions on the cluster you want to turn into Kamaji Admin Cluster.

## Install datastore

The Kamaji controller needs to access a multi-tenant datastore in order to save the data of the tenants' clusters. The Kamaji Helm Chart provides the installation of an unmanaged `etcd`. However, a managed `etcd` is highly recommended in production.

As an alternative, the [kamaji-etcd](https://github.com/clastix/kamaji-etcd) project provides a viable option to set up a managed multi-tenant `etcd` as a 3-replica StatefulSet with data persistence:
Make sure you have a `kubeconfig` file with admin permissions on the cluster you want to turn into Kamaji Admin Cluster and check you can access it:

```bash
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install etcd clastix/kamaji-etcd -n kamaji-system --create-namespace
kubectl cluster-info
```

Optionally, Kamaji offers the possibility of using a different storage system for the tenants' clusters, such as a MySQL or PostgreSQL compatible database, thanks to the native [kine](https://github.com/k3s-io/kine) integration.

## Install Kamaji Controller

Install Kamaji with `helm` using an unmanaged `etcd` as datastore:

Kamaji takes advantage of the [dynamic admission control](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), such as validating and mutating webhook configurations. These webhooks are secured by a TLS communication, and the certificates are managed by [`cert-manager`](https://cert-manager.io/), making it a prerequisite that must be [installed](https://cert-manager.io/docs/installation/).

The Kamaji controller needs to access a default datastore in order to save the data of the tenants' clusters. The Kamaji Helm Chart provides the installation of a basic unmanaged `etcd` out of the box.

Install Kamaji with `helm` using an unmanaged `etcd` as default datastore:

```bash
helm repo add clastix https://clastix.github.io/charts
@@ -73,15 +68,7 @@ helm repo update

helm install kamaji clastix/kamaji -n kamaji-system --create-namespace
```

Alternatively, if you opted for a managed `etcd` datastore:

```bash
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install kamaji clastix/kamaji -n kamaji-system --create-namespace --set etcd.deploy=false
```

Congratulations! You just turned your Kubernetes cluster into a Kamaji cluster capable of running multiple Tenant Control Planes.
A managed datastore is highly recommended in production. The [kamaji-etcd](https://github.com/clastix/kamaji-etcd) project provides a viable option to set up a managed multi-tenant `etcd` running as a StatefulSet made of three replicas. Optionally, Kamaji offers support for a different storage system, such as a `MySQL` or `PostgreSQL` compatible database, thanks to the native [kine](https://github.com/k3s-io/kine) integration.

## Create Tenant Cluster

@@ -97,6 +84,7 @@ metadata:
  name: ${TENANT_NAME}
  namespace: ${TENANT_NAMESPACE}
spec:
  dataStore: default
  controlPlane:
    deployment:
      replicas: 3
@@ -159,12 +147,13 @@ EOF

kubectl -n ${TENANT_NAMESPACE} apply -f ${TENANT_NAMESPACE}-${TENANT_NAME}-tcp.yaml
```

After a few minutes, check the created resources in the `tenants` namespace and, when ready, it will look similar to the following:
After a few seconds, check the created resources in the `tenants` namespace and, when ready, it will look similar to the following:

```command
kubectl -n tenants get tcp,deploy,pods,svc
NAME                                             VERSION   STATUS   CONTROL-PLANE-ENDPOINT   KUBECONFIG                   AGE
tenantcontrolplane.kamaji.clastix.io/tenant-00   v1.23.1   Ready    192.168.32.240:6443      tenant-00-admin-kubeconfig   2m20s

NAME                            VERSION   STATUS   CONTROL-PLANE ENDPOINT   KUBECONFIG                   DATASTORE   AGE
tenantcontrolplane/tenant-00    v1.25.2   Ready    192.168.32.240:6443      tenant-00-admin-kubeconfig   default     2m20s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tenant-00   3/3     3            3           118s
@@ -178,9 +167,9 @@ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)

service/tenant-00   LoadBalancer   10.32.132.241   192.168.32.240   6443:32152/TCP,8132:32713/TCP   2m20s
```

The regular Tenant Control Plane containers: `kube-apiserver`, `kube-controller-manager`, `kube-scheduler` are running unchanged in the `tcp` pods instead of dedicated machines, and they are exposed through a service on port `6443` of the worker nodes in the Admin cluster.
The regular Tenant Control Plane containers: `kube-apiserver`, `kube-controller-manager`, `kube-scheduler` are running unchanged in the `tcp` pods instead of dedicated machines, and they are exposed through a service on port `6443` of the worker nodes in the admin cluster.

The `LoadBalancer` service type is used to expose the Tenant Control Plane. However, `NodePort` and `ClusterIP` with an Ingress Controller are still viable options, depending on the case. High Availability and rolling updates of the Tenant Control Plane are provided by the `tcp` Deployment and all the resources reconciled by the Kamaji controller.
The `LoadBalancer` service type is used to expose the Tenant Control Plane on the assigned `loadBalancerIP`, acting as `ControlPlaneEndpoint` for the worker nodes and other clients such as, for example, `kubectl`. The `NodePort` and `ClusterIP` service types are still viable options to expose the Tenant Control Plane, depending on the case. High Availability and rolling updates of the Tenant Control Planes are provided by the `tcp` Deployment and all the resources reconciled by the Kamaji controller.

### Working with Tenant Control Plane

@@ -1,20 +1,16 @@

# Manage tenant resources GitOps-way from the admin cluster

In this guide, you can learn how to apply applications and resources in general, the GitOps-way, to the Tenant Control Planes.
This guide describes a declarative way to deploy Kubernetes add-ons across multiple Tenant Clusters, the GitOps-way. An admin may need to apply a specific workload into Tenant Clusters and ensure it is constantly reconciled, no matter what the tenants will do in their clusters. Examples include installing monitoring agents, enforcing specific policies, installing infrastructure operators like Cert Manager, and so on.

An admin may need to apply a specific workload into tenant control planes and ensure it is constantly reconciled, no matter what the tenants will do in their clusters.

Examples include installing monitoring agents, enforcing specific policies, installing infrastructure operators like Cert Manager, and so on.
This way the tenant resources can be ensured from a single pane of glass, from the *admin cluster*.

## Flux as the GitOps operator

As GitOps ensures a constant reconciliation to a Git-versioned desired state, Flux can satisfy the requirement of those scenarios.

In particular, the controllers that reconcile [resources](https://fluxcd.io/flux/concepts/#reconciliation) support communicating with external clusters.
As GitOps ensures a constant reconciliation to a Git-versioned desired state, [Flux](https://fluxcd.io) can satisfy the requirement of those scenarios. In particular, the controllers that reconcile [resources](https://fluxcd.io/flux/concepts/#reconciliation) support communicating with external clusters.

In this scenario the Flux toolkit would run in the *admin cluster*, with the reconciliation controllers reconciling resources into the *tenant clusters*.

<img src="../images/kamaji-flux.png" alt="kamaji-flux" width="720"/>


This is possible because the Flux reconciliation Custom Resource specifications provide the ability to reference a `Secret` which contains a `kubeconfig` - here you can find the related documentation for both the [`Kustomization`](https://fluxcd.io/flux/components/kustomize/kustomization/#remote-clusters--cluster-api) and [`HelmRelease`](https://fluxcd.io/flux/components/helm/helmreleases/#remote-clusters--cluster-api) CRs.

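As an illustration, a hedged sketch of a Flux `Kustomization` that reconciles a Git path into a tenant cluster through a kubeconfig Secret living in the *admin cluster* (resource names, the Git path, and the Secret key are placeholders to adapt):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tenant1-addons
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: addons-repo                  # hypothetical GitRepository in the admin cluster
  path: ./addons
  prune: true
  kubeConfig:
    secretRef:
      name: tenant1-admin-kubeconfig   # kubeconfig Secret generated for the tenant
      key: admin.conf                  # assumption: adjust to the actual key in the Secret
```
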
@@ -86,10 +82,6 @@ tenant1-cert-manager-cainjector 1/1 1 1 4m3s

tenant1-cert-manager-webhook       1/1     1            1           4m3s
```

## Conclusion

This way tenant resources can be ensured from a single pane of glass, from the *admin cluster*.

No matter what the tenant users do on the *tenant cluster*, the Flux reconciliation controllers running in the *admin cluster* will ensure that the desired state declared by the reconciliation resources existing in the *admin cluster* is reconciled in the *tenant cluster*.

Furthermore, this approach does not require each tenant cluster to run Flux, nor to have the related reconciliation Custom Resources applied in it.

@@ -7,7 +7,26 @@ The process of upgrading a _“tenant cluster”_ consists in two steps:

## Upgrade of Tenant Control Plane

You should patch the `TenantControlPlane.spec.kubernetes.version` custom resource with a new compatible value according to the [Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/).

> Note: during the upgrade, a new ReplicaSet of the Tenant Control Plane pods will be created, so make sure you have at least two pods to avoid service disruption.
During the upgrade, a new ReplicaSet of the Tenant Control Plane pods will be created, so make sure you have enough replicas to avoid service disruption. Also make sure you have the Rolling Update strategy properly configured:

```yaml
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-00
spec:
  controlPlane:
    deployment:
      replicas: 3
      strategy:
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 1
        type: RollingUpdate
...
```

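For instance, a hedged example of bumping the version with `kubectl patch` (the tenant name and target version are illustrative):

```bash
# Trigger a rolling upgrade of the Tenant Control Plane pods by bumping
# spec.kubernetes.version, respecting the Kubernetes version skew policy.
kubectl patch tcp tenant-00 --type merge \
  -p '{"spec": {"kubernetes": {"version": "v1.26.0"}}}'
```
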
## Upgrade of Tenant Worker Nodes

As Kamaji currently does not provide any helpers for Tenant Worker Nodes, you should make sure to upgrade them manually, for example with the help of `kubeadm`. We have the Cluster API support on the roadmap so that you can upgrade _“tenant clusters”_ in a fully declarative way.
As Kamaji currently does not provide any helpers for Tenant Worker Nodes, you should make sure to upgrade them manually, for example with the help of `kubeadm`. Refer to the official [documentation](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/#upgrade-worker-nodes).

> We have the Cluster API support on the roadmap so that you can upgrade _“tenant clusters”_ in a fully declarative way.
@@ -4,17 +4,21 @@ Currently, **Kamaji** allows customization using CLI flags for the `manager` sub

Available flags are the following:

```
--datastore string                   The default DataStore that should be used by Kamaji to setup the required storage (default "etcd")
--health-probe-bind-address string   The address the probe endpoint binds to. (default ":8081")
-h, --help                           help for manager
--kine-image string                  Container image along with tag to use for the Kine sidecar container (used only if etcd-storage-type is set to one of kine strategies) (default "rancher/kine:v0.9.2-amd64")
--leader-elect                       Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager. (default true)
--metrics-bind-address string        The address the metric endpoint binds to. (default ":8080")
--tmp-directory string               Directory which will be used to work with temporary files. (default "/tmp/kamaji")
--zap-devel                          Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
--zap-encoder encoder                Zap log encoding (one of 'json' or 'console')
--zap-log-level level                Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
--zap-stacktrace-level level         Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').
--zap-time-encoding time-encoding    Zap time encoding (one of 'epoch', 'millis', 'nano', 'iso8601', 'rfc3339' or 'rfc3339nano'). Defaults to 'epoch'.
```

| Flag | Usage | Default |
| ---- | ------ | --- |
| `--metrics-bind-address` | The address the metric endpoint binds to. | `:8080` |
| `--health-probe-bind-address` | The address the probe endpoint binds to. | `:8081` |
| `--leader-elect` | Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager. | `true` |
| `--tmp-directory` | Directory which will be used to work with temporary files. | `/tmp/kamaji` |
| `--kine-image` | Container image along with tag to use for the Kine sidecar container (used only if etcd-storage-type is set to one of kine strategies). | `rancher/kine:v0.9.2-amd64` |
| `--datastore` | The default DataStore that should be used by Kamaji to setup the required storage. | `etcd` |
| `--migrate-image` | Specify the container image to launch when a TenantControlPlane is migrated to a new datastore. | `migrate-image` |
| `--pod-namespace` | The Kubernetes Namespace on which the Operator is running in, required for the TenantControlPlane migration jobs. | `os.Getenv("POD_NAMESPACE")` |
| `--webhook-service-name` | The Kamaji webhook server Service name which is used to get validation webhooks, required for the TenantControlPlane migration jobs. | `kamaji-webhook-service` |
| `--serviceaccount-name` | The Kubernetes ServiceAccount used by the Operator, required for the TenantControlPlane migration jobs. | `os.Getenv("SERVICE_ACCOUNT")` |
| `--webhook-ca-path` | Path to the Manager webhook server CA, required for the TenantControlPlane migration jobs. | `/tmp/k8s-webhook-server/serving-certs/ca.crt` |
| `--zap-devel` | Development Mode (encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode (encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error). | `true` |
| `--zap-encoder` | Zap log encoding, one of 'json' or 'console'. | `console` |
| `--zap-log-level` | Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity. | `info` |
| `--zap-stacktrace-level` | Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic'). | `info` |
| `--zap-time-encoding` | Zap time encoding (one of 'epoch', 'millis', 'nano', 'iso8601', 'rfc3339' or 'rfc3339nano'). | `epoch` |

@@ -2,9 +2,9 @@

In Kamaji, there are different components that might require independent versioning and support levels:

|Kamaji|Admin Cluster|Tenant Cluster (min)|Tenant Cluster (max)|Konnectivity|Tenant etcd |
|------|-------------|--------------------|--------------------|------------|------------|
|0.0.1 |1.22.0+      |1.21.0              |1.23.5              |0.0.31      |3.5.4       |
|0.0.2 |1.22.0+      |1.21.0              |1.25.0              |0.0.32      |3.5.4       |
|Kamaji|Admin Cluster (min)|Admin Cluster (max)|Tenant Cluster (min)|Tenant Cluster (max)|Konnectivity|Tenant etcd |
|------|-------------------|-------------------|--------------------|--------------------|------------|------------|
|0.0.1 |1.22.0             |1.24.0             |1.21.0              |1.23.5              |0.0.31      |3.5.4       |
|0.1.0 |1.22.0             |1.25.0             |1.21.0              |1.25.0              |0.0.32      |3.5.4       |
|0.2.0 |1.22.0             |1.26.0             |1.21.0              |1.26.0              |0.0.32      |3.5.6       |

Other combinations might work but they have not been tested yet.

@@ -46,7 +46,9 @@ nav:

  - guides/kamaji-azure-deployment-guide.md
  - guides/postgresql-datastore.md
  - guides/mysql-datastore.md
  - guides/kamaji-gitops-flux.md
  - guides/upgrade.md
  - guides/datastore-migration.md
- 'Use Cases': use-cases.md
- 'Reference':
  - reference/index.md