Compare commits

...

160 Commits

Author SHA1 Message Date
Adriano Pezzuto
99fb71ecf9 fix(helm): use 0.0.0+latest for kamaji-crd (#982)
Co-authored-by: bsctl <bsctl@clastix.io>
2025-10-18 16:01:24 +02:00
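A hypothetical sketch of what the pin could look like in the kamaji chart's Chart.yaml; the repository URL and exact layout are assumptions, not taken from the commit:

```yaml
# Hypothetical Chart.yaml excerpt pinning the kamaji-crds dependency.
dependencies:
  - name: kamaji-crds
    version: 0.0.0+latest
    repository: https://clastix.github.io/charts  # assumed repository URL
```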
Adriano Pezzuto
da180c9ce0 feat(docs): document how to install helm kamaji-crds (#983)
* feat(docs): document how to install helm kamaji-crds

* fix(docs): document how to install helm kamaji-crds

---------

Co-authored-by: bsctl <bsctl@clastix.io>
2025-10-18 16:01:04 +02:00
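The documented flow presumably amounts to installing the CRDs chart before the main kamaji chart; a hedged sketch, with chart and repository names inferred from the kamaji-crds chart introduced in #894:

```bash
# Assumed commands; names inferred, not quoted from the docs.
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install kamaji-crds clastix/kamaji-crds \
  --namespace kamaji-system --create-namespace
```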
dependabot[bot]
4cced97991 feat(deps): bump github.com/docker/docker (#985)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.5.0+incompatible to 28.5.1+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.5.0...v28.5.1)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.5.1+incompatible
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-18 16:00:17 +02:00
dependabot[bot]
a00e4544f9 feat(deps): bump sigs.k8s.io/controller-runtime from 0.22.1 to 0.22.3 (#988)
Bumps [sigs.k8s.io/controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) from 0.22.1 to 0.22.3.
- [Release notes](https://github.com/kubernetes-sigs/controller-runtime/releases)
- [Changelog](https://github.com/kubernetes-sigs/controller-runtime/blob/main/RELEASE.md)
- [Commits](https://github.com/kubernetes-sigs/controller-runtime/compare/v0.22.1...v0.22.3)

---
updated-dependencies:
- dependency-name: sigs.k8s.io/controller-runtime
  dependency-version: 0.22.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-18 16:00:05 +02:00
dependabot[bot]
b1984dc66d feat(deps): bump github.com/nats-io/nats.go from 1.46.1 to 1.47.0 (#989)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.46.1 to 1.47.0.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.46.1...v1.47.0)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.47.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-18 15:59:45 +02:00
Timofei Larkin
8d25078c47 feat: don't clobber 3rd-party labels (#992)
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
2025-10-17 17:35:20 +02:00
dependabot[bot]
bc85d8b73c feat(deps): bump github.com/docker/docker (#978)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.4.0+incompatible to 28.5.0+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.4.0...v28.5.0)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.5.0+incompatible
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-06 14:12:20 +02:00
Dario Tranchitella
de459fb5da feat!: write permissions (#937)
* fix: decoding object only if requested

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(api): limiting write permissions

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat: write permissions handlers, routes, and controller

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: write permissions

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-10-03 14:30:58 +02:00
dependabot[bot]
2b707423ff feat(deps): bump github.com/onsi/ginkgo/v2 from 2.25.3 to 2.26.0 (#977)
Bumps [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) from 2.25.3 to 2.26.0.
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.25.3...v2.26.0)

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-version: 2.26.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-03 14:02:10 +02:00
Loïc Brun
285cef0f02 fix(konnectivity): rotate certificate during certificate authority rotation (#976) 2025-10-02 16:17:59 +02:00
dependabot[bot]
f6686f6efa feat(deps): bump github.com/nats-io/nats.go from 1.46.0 to 1.46.1 (#973)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.46.0 to 1.46.1.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.46.0...v1.46.1)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.46.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-02 14:55:52 +02:00
Dario Tranchitella
2a809a79c4 docs(readme): managed kubernetes on hetzner article (#974) 2025-10-01 21:20:06 +02:00
Dario Tranchitella
f477df2a84 docs(readme): minikube medium article (#972) 2025-09-30 15:46:02 +02:00
Adriano Pezzuto
464dc7ef49 fix(docs): missing export env vars in capi proxmox sample (#971)
* fix(docs): missing export env vars in capi proxmox sample

* fix(docs): escape token in proxmox capi

---------

Co-authored-by: bsctl <bsctl@clastix.io>
2025-09-29 09:10:02 +02:00
dependabot[bot]
b550865da3 feat(deps): bump github.com/nats-io/nats.go from 1.45.0 to 1.46.0 (#968)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.45.0 to 1.46.0.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.45.0...v1.46.0)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.46.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 18:18:45 +02:00
outbackdingo
89c8615ce4 fix(docs): exporting variables for proxmox-infra-provider (#969)
* update context in proxmox-infra-provider.md

* Update docs/content/cluster-api/proxmox-infra-provider.md

Co-authored-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Co-authored-by: dingo <dingo@optimcloud.com>
Co-authored-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-23 18:18:32 +02:00
Dario Tranchitella
cb2152d5a7 feat: kubeconfig generator (#933)
* feat(api): kubeconfig generator

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* refactor: abstracting enqueue to channel function

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* fix: avoiding multiple context registration

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat: kubeconfig generator

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: kubeconfig generator

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(helm): deployment for kubeconfig generator

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-22 15:32:50 +02:00
dependabot[bot]
4bace03fc3 feat(deps): bump the etcd group with 2 updates (#966)
Bumps the etcd group with 2 updates: [go.etcd.io/etcd/api/v3](https://github.com/etcd-io/etcd) and [go.etcd.io/etcd/client/v3](https://github.com/etcd-io/etcd).


Updates `go.etcd.io/etcd/api/v3` from 3.6.4 to 3.6.5
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.6.4...v3.6.5)

Updates `go.etcd.io/etcd/client/v3` from 3.6.4 to 3.6.5
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.6.4...v3.6.5)

---
updated-dependencies:
- dependency-name: go.etcd.io/etcd/api/v3
  dependency-version: 3.6.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: etcd
- dependency-name: go.etcd.io/etcd/client/v3
  dependency-version: 3.6.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: etcd
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-22 09:27:32 +00:00
dependabot[bot]
e3225a383c feat(deps): bump github.com/testcontainers/testcontainers-go (#967)
Bumps [github.com/testcontainers/testcontainers-go](https://github.com/testcontainers/testcontainers-go) from 0.38.0 to 0.39.0.
- [Release notes](https://github.com/testcontainers/testcontainers-go/releases)
- [Commits](https://github.com/testcontainers/testcontainers-go/compare/v0.38.0...v0.39.0)

---
updated-dependencies:
- dependency-name: github.com/testcontainers/testcontainers-go
  dependency-version: 0.39.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-22 08:17:42 +02:00
engineeringdatacenter
9a046d8b2c chore(adopters): kamaji promoted to production for Aruba Managed k8s (#965)
* kamaji promoted to production for Aruba Managed k8s

* Clarify Aruba Cloud's use of Kamaji

Updated the description for Aruba Cloud to reflect its current use of Kamaji.

Signed-off-by: engineeringdatacenter <123960000+engineeringdatacenter@users.noreply.github.com>

---------

Signed-off-by: engineeringdatacenter <123960000+engineeringdatacenter@users.noreply.github.com>
2025-09-19 16:08:09 +02:00
Parth Yadav
764433bd04 chore(adopters): add coredge.io as an adopter (#962)
This patch updates ADOPTERS.md, adding Coredge.io to the list.

Signed-off-by: Parth Yadav <parth@coredge.io>
2025-09-11 21:42:41 +02:00
dependabot[bot]
0e54d84ebb feat(deps): bump github.com/spf13/viper from 1.20.1 to 1.21.0 (#952)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.20.1 to 1.21.0.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.20.1...v1.21.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/viper
  dependency-version: 1.21.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-11 15:26:52 +02:00
dependabot[bot]
b0faf7d31e feat(deps): bump sigs.k8s.io/controller-runtime from 0.22.0 to 0.22.1 (#953)
* feat(deps): bump sigs.k8s.io/controller-runtime from 0.22.0 to 0.22.1

Bumps [sigs.k8s.io/controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) from 0.22.0 to 0.22.1.
- [Release notes](https://github.com/kubernetes-sigs/controller-runtime/releases)
- [Changelog](https://github.com/kubernetes-sigs/controller-runtime/blob/main/RELEASE.md)
- [Commits](https://github.com/kubernetes-sigs/controller-runtime/compare/v0.22.0...v0.22.1)

---
updated-dependencies:
- dependency-name: sigs.k8s.io/controller-runtime
  dependency-version: 0.22.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* chore(golangci-lint): apply is no longer deprecated

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-11 15:25:42 +02:00
dependabot[bot]
47cc705c98 feat(deps): bump k8s.io/kubernetes in the k8s group (#960)
Bumps the k8s group with 1 update: [k8s.io/kubernetes](https://github.com/kubernetes/kubernetes).


Updates `k8s.io/kubernetes` from 1.34.0 to 1.34.1
- [Release notes](https://github.com/kubernetes/kubernetes/releases)
- [Commits](https://github.com/kubernetes/kubernetes/compare/v1.34.0...v1.34.1)

---
updated-dependencies:
- dependency-name: k8s.io/kubernetes
  dependency-version: 1.34.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: k8s
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-11 15:12:50 +02:00
Dario Tranchitella
17869a4e0f fix(controller-manager): supporting extra args override (#959)
* fix(controller-manager): supporting extra args override

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore: removing deprecated intstr.FromInt usage

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-10 14:23:32 +02:00
Dario Tranchitella
2a7749839e feat!: inflecting version for konnectivity components from tcp (#934)
* feat(api)!: inflecting version for konnectivity components from tcp

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat: inflecting version for konnectivity components from tcp

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs(konnectivity): warning about missing container artefacts

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-10 12:19:33 +02:00
Adriano Pezzuto
aabbdd96a3 fix(docs): improve capi cluster class (#957)
* fix(docs): improve capi cluster class
2025-09-09 15:53:23 +02:00
Pierre Gaxatte
5d6f512df1 fix(certificates): use a stable format for the rotate annotation value (#955) 2025-09-09 12:27:11 +02:00
Dario Tranchitella
1b4bd884dc docs(capi): updating to latest changes (#956)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-09 11:34:30 +02:00
Dario Tranchitella
1a0858d350 fix: konnectivity logs and nil pointer dereference (#951)
* fix(konnectivity): avoiding nil pointer reconcile for agent

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* fix(log): ignoring not found errors for konnectivity cleanup

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-07 11:02:44 +02:00
dependabot[bot]
f03279183e feat(deps): bump github.com/prometheus/client_golang (#949)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.23.1 to 1.23.2.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.23.1...v1.23.2)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-version: 1.23.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-07 10:07:54 +02:00
Dario Tranchitella
e2a0648989 fix: default values for schema and username (#941)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-05 15:32:10 +02:00
Dario Tranchitella
72f32aba19 fix(ci): installing go from go.mod version in e2e workflow (#947)
* fix(ci): installing go from go.mod version in e2e workflow

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* fix(gh): triggering e2e on job spec changes

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-05 10:56:18 +02:00
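actions/setup-go can read the toolchain version straight from go.mod via its go-version-file input; a minimal sketch of the workflow step this fix likely boils down to:

```yaml
# Sketch of a setup-go step pinned to the go.mod toolchain version.
- name: Set up Go
  uses: actions/setup-go@v6
  with:
    go-version-file: go.mod
```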
dependabot[bot]
a27a9efba2 feat(deps): bump github.com/onsi/ginkgo/v2 from 2.25.2 to 2.25.3 (#946)
Bumps [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) from 2.25.2 to 2.25.3.
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.25.2...v2.25.3)

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-version: 2.25.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-05 09:43:31 +02:00
dependabot[bot]
d9203a3e95 feat(deps): bump github.com/prometheus/client_golang (#944)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.23.0 to 1.23.1.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.23.0...v1.23.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-version: 1.23.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-05 09:43:20 +02:00
dependabot[bot]
fcce4d5f83 feat(deps): bump github.com/docker/docker (#945)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.3.3+incompatible to 28.4.0+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.3.3...v28.4.0)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.4.0+incompatible
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-05 09:43:11 +02:00
dependabot[bot]
5349649515 chore(ci): bump actions/setup-go from 5 to 6 (#943)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-05 09:42:58 +02:00
Dario Tranchitella
4a474d5749 fix: handling create or update for patch resources (#942)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-05 09:42:17 +02:00
Dario Tranchitella
8be3eebdbe docs: enhancing documentation regarding tcp update (#930)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-04 17:33:45 +02:00
Dario Tranchitella
b9fee273eb fix: patching of kube-proxy and coredns advanced objects (#940)
* fix(coredns): using patch for deployment and service reconciliation

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(kubeproxy): using patch for daemonset reconciliation

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-04 16:56:09 +02:00
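A generic controller-runtime sketch of patch-based reconciliation using server-side apply; this illustrates the technique, not necessarily the exact call Kamaji performs:

```go
package reconcile

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// applyDeployment submits the desired Deployment as a server-side-apply
// patch instead of an update, so fields owned by other managers survive.
// Sketch only; the field-manager name is an assumption.
func applyDeployment(ctx context.Context, c client.Client, desired *appsv1.Deployment) error {
	return c.Patch(ctx, desired, client.Apply,
		client.FieldOwner("kamaji"),
		client.ForceOwnership,
	)
}
```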
dependabot[bot]
ef0e653729 feat(deps): bump github.com/spf13/pflag from 1.0.7 to 1.0.9 (#931)
Bumps [github.com/spf13/pflag](https://github.com/spf13/pflag) from 1.0.7 to 1.0.9.
- [Release notes](https://github.com/spf13/pflag/releases)
- [Commits](https://github.com/spf13/pflag/compare/v1.0.7...v1.0.9)

---
updated-dependencies:
- dependency-name: github.com/spf13/pflag
  dependency-version: 1.0.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-03 15:12:09 +02:00
dependabot[bot]
fad65dc625 feat(deps): bump github.com/spf13/cobra from 1.9.1 to 1.10.1 (#932)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.9.1 to 1.10.1.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.9.1...v1.10.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-version: 1.10.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-03 15:09:26 +02:00
dependabot[bot]
a161a7c37d feat(deps): bump sigs.k8s.io/controller-runtime from 0.21.0 to 0.22.0 (#927)
Bumps [sigs.k8s.io/controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) from 0.21.0 to 0.22.0.
- [Release notes](https://github.com/kubernetes-sigs/controller-runtime/releases)
- [Changelog](https://github.com/kubernetes-sigs/controller-runtime/blob/main/RELEASE.md)
- [Commits](https://github.com/kubernetes-sigs/controller-runtime/compare/v0.21.0...v0.22.0)

---
updated-dependencies:
- dependency-name: sigs.k8s.io/controller-runtime
  dependency-version: 0.22.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-01 12:06:18 +02:00
dependabot[bot]
afe719eef1 feat(deps): bump github.com/onsi/ginkgo/v2 from 2.25.1 to 2.25.2 (#926)
Bumps [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) from 2.25.1 to 2.25.2.
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.25.1...v2.25.2)

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-version: 2.25.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-01 10:13:25 +02:00
Dario Tranchitella
dc470f247d feat(k8s): support for v1.34.0 (#925)
* feat(k8s): support for v1.34.0

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(crds): fields update and documentation

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-09-01 10:11:02 +02:00
dependabot[bot]
417f14038a feat(deps): bump github.com/onsi/gomega from 1.38.1 to 1.38.2 (#923)
Bumps [github.com/onsi/gomega](https://github.com/onsi/gomega) from 1.38.1 to 1.38.2.
- [Release notes](https://github.com/onsi/gomega/releases)
- [Changelog](https://github.com/onsi/gomega/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/gomega/compare/v1.38.1...v1.38.2)

---
updated-dependencies:
- dependency-name: github.com/onsi/gomega
  dependency-version: 1.38.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 18:05:37 +02:00
dependabot[bot]
aba527f461 feat(deps): bump github.com/nats-io/nats.go from 1.44.0 to 1.45.0 (#918)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.44.0 to 1.45.0.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.44.0...v1.45.0)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.45.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-26 15:59:13 +00:00
dependabot[bot]
bd0960908b feat(deps): bump github.com/onsi/gomega from 1.38.0 to 1.38.1 (#921)
Bumps [github.com/onsi/gomega](https://github.com/onsi/gomega) from 1.38.0 to 1.38.1.
- [Release notes](https://github.com/onsi/gomega/releases)
- [Changelog](https://github.com/onsi/gomega/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/gomega/compare/v1.38.0...v1.38.1)

---
updated-dependencies:
- dependency-name: github.com/onsi/gomega
  dependency-version: 1.38.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-26 17:57:50 +02:00
dependabot[bot]
3b7f18604f feat(deps): bump github.com/onsi/ginkgo/v2 from 2.23.4 to 2.24.0 (#916)
Bumps [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) from 2.23.4 to 2.24.0.
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.23.4...v2.24.0)

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-version: 2.24.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-20 12:27:53 +02:00
Adriano Pezzuto
ef697e48df feat(docs): add user quote (#917) 2025-08-19 09:29:12 +02:00
Dario Tranchitella
13b85aa386 fix(ci): using pat for container image build (#914)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-08-18 09:35:27 +02:00
dependabot[bot]
8898a13eec feat(deps): bump github.com/go-pg/pg/v10 from 10.14.0 to 10.15.0 (#913)
Bumps [github.com/go-pg/pg/v10](https://github.com/go-pg/pg) from 10.14.0 to 10.15.0.
- [Release notes](https://github.com/go-pg/pg/releases)
- [Changelog](https://github.com/go-pg/pg/blob/v10/CHANGELOG.md)
- [Commits](https://github.com/go-pg/pg/compare/v10.14.0...v10.15.0)

---
updated-dependencies:
- dependency-name: github.com/go-pg/pg/v10
  dependency-version: 10.15.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-18 08:41:08 +02:00
Dario Tranchitella
d30af82691 feat(deps): bump k8s.io/kubernetes from 1.33.3 to 1.33.4 (#912)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-08-14 14:40:10 +02:00
dependabot[bot]
a1f7066b99 chore(ci): bump amannn/action-semantic-pull-request from 5 to 6 (#909)
Bumps [amannn/action-semantic-pull-request](https://github.com/amannn/action-semantic-pull-request) from 5 to 6.
- [Release notes](https://github.com/amannn/action-semantic-pull-request/releases)
- [Changelog](https://github.com/amannn/action-semantic-pull-request/blob/main/CHANGELOG.md)
- [Commits](https://github.com/amannn/action-semantic-pull-request/compare/v5...v6)

---
updated-dependencies:
- dependency-name: amannn/action-semantic-pull-request
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-14 10:26:55 +02:00
Dario Tranchitella
feb906d728 docs: aligning to latest capi cp provider spec (#911)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-08-14 09:02:24 +02:00
dependabot[bot]
5394ec6ca3 chore(ci): bump actions/checkout from 4 to 5 (#907)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-13 16:05:38 +02:00
Jianan Wang
0ecefc6563 fix(docs): aws network-interfaces is an array 2025-08-12 20:04:23 +02:00
Dario Tranchitella
9ed00b98e6 feat(deps): bump k8s.io/kubernetes from 1.33.2 to 1.33.3 (#906)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-08-11 14:39:03 +02:00
Dario Tranchitella
ed6b95fb5d chore(gh): building edge images using workflow dispatch (#905)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-08-11 11:43:20 +02:00
Dario Tranchitella
f0f41bd0da fix(charts): uncommitted file (#902)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-08-08 08:37:24 +02:00
Dario Tranchitella
fb9af3bf52 feat(helm): providing kamaji-crds chart (#894)
* feat(helm): providing kamaji-crds chart

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(gh): linting and publishing

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(e2e): installing crds during e2e

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-08-08 08:15:40 +02:00
Dario Tranchitella
b65a7cff14 chore: adding NOTICE file (#901)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-08-08 07:37:08 +02:00
Dario Tranchitella
17f99abadc chore(ci): using pat for git push and autogenerating notes (#900)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-08-04 10:14:08 +02:00
dependabot[bot]
df3866fa24 feat(deps): bump github.com/prometheus/client_golang (#899)
Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.22.0 to 1.23.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/v1.23.0/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.22.0...v1.23.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-version: 1.23.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-01 10:32:39 +02:00
Mateusz Kwiatkowski
f52fe45c46 feat: add hostNetwork support for the Konnectivity Agent (#883)
This commit extends the CRD API, adding a hostNetwork field to the KonnectivityAgentSpec struct.
It defaults to false, so it is backwards compatible.

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-07-30 22:31:38 +02:00
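A hedged example of enabling the new field on a TenantControlPlane; the exact field path is an assumption inferred from the KonnectivityAgentSpec name, not taken verbatim from the commit:

```yaml
# Assumed TenantControlPlane excerpt; the addons.konnectivity.agent path is inferred.
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-00
spec:
  addons:
    konnectivity:
      agent:
        hostNetwork: true  # new field, defaults to false
```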
dependabot[bot]
c04d8ddc85 feat(deps): bump github.com/docker/docker (#897)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.3.2+incompatible to 28.3.3+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.3.2...v28.3.3)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.3.3+incompatible
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-30 16:02:57 +02:00
dependabot[bot]
3ecd84b68a feat(deps): bump github.com/nats-io/nats.go from 1.43.0 to 1.44.0 (#898)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.43.0 to 1.44.0.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.43.0...v1.44.0)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.44.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-30 16:02:47 +02:00
Dario Tranchitella
9ba9c65755 fix(gh): release create does not push git tag by default (#896)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-07-29 15:04:21 +02:00
Dario Tranchitella
5e68fd8fe0 fix: honouring certificate expiration threshold (#886)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-07-28 09:40:16 +02:00
Dario Tranchitella
e6f20674ec chore(gh): weekly release (#892)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-07-25 16:14:27 +02:00
Philipp Riederer
0990317595 feat!: support setting the username for the relational database (#891)
* Support setting the username for the relational database

fixes #889

* update crd+documentation
2025-07-24 14:05:26 +02:00
Dario Tranchitella
382d3274f3 fix(docs): wrong field for konnectivity agent (#890)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-07-23 14:43:18 +02:00
dependabot[bot]
55516c833e feat(deps): bump github.com/onsi/gomega from 1.37.0 to 1.38.0 (#887)
Bumps [github.com/onsi/gomega](https://github.com/onsi/gomega) from 1.37.0 to 1.38.0.
- [Release notes](https://github.com/onsi/gomega/releases)
- [Changelog](https://github.com/onsi/gomega/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/gomega/compare/v1.37.0...v1.38.0)

---
updated-dependencies:
- dependency-name: github.com/onsi/gomega
  dependency-version: 1.38.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-23 14:24:34 +02:00
Dario Tranchitella
cac1631523 feat: rotating certificates via annotation (#877)
* fix(kubeconfig): checking certificate authority data for validity

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat: rotating certificates via annotation

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: rotating certificates via annotation

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-07-21 09:23:29 +02:00
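A hedged usage example: the annotation key and value below are assumptions (the commit message does not spell them out), with the stable-format fix in #955 suggesting a timestamp-like value:

```bash
# Hypothetical annotation key and value; illustrative only.
kubectl annotate tenantcontrolplane tenant-00 \
  certs.kamaji.clastix.io/rotate="$(date +%s)"
```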
Dario Tranchitella
d1eb860918 feat!: support for konnectivity deployment mode (#875)
* feat(konnectivity): support for deployment mode

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(helm)!: support for konnectivity deployment mode

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(sample): support for konnectivity deployment mode

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: support for konnectivity deployment mode

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-07-21 09:21:35 +02:00
dependabot[bot]
6c76bd6a97 feat(deps): bump github.com/testcontainers/testcontainers-go (#878)
Bumps [github.com/testcontainers/testcontainers-go](https://github.com/testcontainers/testcontainers-go) from 0.37.0 to 0.38.0.
- [Release notes](https://github.com/testcontainers/testcontainers-go/releases)
- [Commits](https://github.com/testcontainers/testcontainers-go/compare/v0.37.0...v0.38.0)

---
updated-dependencies:
- dependency-name: github.com/testcontainers/testcontainers-go
  dependency-version: 0.38.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-18 14:42:18 +02:00
dependabot[bot]
462d52332c feat(deps): bump github.com/spf13/pflag from 1.0.6 to 1.0.7 (#884)
Bumps [github.com/spf13/pflag](https://github.com/spf13/pflag) from 1.0.6 to 1.0.7.
- [Release notes](https://github.com/spf13/pflag/releases)
- [Commits](https://github.com/spf13/pflag/compare/v1.0.6...v1.0.7)

---
updated-dependencies:
- dependency-name: github.com/spf13/pflag
  dependency-version: 1.0.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-18 14:41:49 +02:00
Mario Valderrama
63a29b4b59 fix: typo in llms.txt (#879) 2025-07-16 11:36:21 +02:00
Dario Tranchitella
e366dc3959 feat: pausing reconciliation of controlled objects (#874)
* feat: pausing reconciliation of controlled objects

Objects such as TenantControlPlane and Secret can be annotated with
kamaji.clastix.io/paused to prevent controllers from processing them.

This stops reconciliation of the annotated objects for debugging or other purposes.
The annotation value is irrelevant; only the presence of the key is evaluated.

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: pausing reconciliation of controlled objects

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(logs): typo for deleted resources

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-07-16 10:44:48 +02:00
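Since only the key's presence is evaluated, pausing and resuming a TenantControlPlane reduces to adding and removing the annotation, for example:

```bash
# Pause reconciliation (the annotation value is irrelevant).
kubectl annotate tenantcontrolplane tenant-00 kamaji.clastix.io/paused=""
# Resume by removing the annotation.
kubectl annotate tenantcontrolplane tenant-00 kamaji.clastix.io/paused-
```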
Dario Tranchitella
0ab8843418 feat(chore): support for customising container repository via ldflags (#873)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-07-14 13:38:09 +02:00
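A hypothetical build invocation; the Go variable path passed to -X is invented for illustration, and only the ldflags mechanism comes from the commit:

```bash
# Hypothetical -X target; overrides the default container repository at build time.
go build -ldflags "-X main.containerRepository=registry.example.com/kamaji" .
```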
dependabot[bot]
ce5fe906aa feat(deps): bump github.com/docker/docker (#869)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.3.0+incompatible to 28.3.2+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.3.0...v28.3.2)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.3.2+incompatible
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-13 19:29:12 +02:00
Dario Tranchitella
09c9743465 feat(deps): updating kamaji-etcd and kubeadm dependencies (#865)
* feat(deps): upgrading kamaji-etcd helm dependency

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(deps): upgrading kubeadm support to v1.33.2

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-07-03 09:49:09 +02:00
Yukiel Zhong
8290e84c3f fix(docs): optimize kind getting started (#847) 2025-07-03 09:23:57 +02:00
Dario Tranchitella
678aca6229 chore(ci): stripping binaries and avoiding cgo (#861)
* chore(docs): aligning to latest capi cp provider docs

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(ci): stripping binaries and avoiding cgo

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(gh): upgrading to ubuntu-latest for e2e

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(test): printing debug messages for node join in e2e

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* fix(ci): ignoring file existing error

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* fix(ci): enabling br_netfilter as github action step

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-07-03 09:09:49 +02:00
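The conventional Go incantation for stripped, cgo-free binaries, assuming standard flags rather than the repository's exact Makefile:

```bash
# Static, stripped build: disable cgo and drop the symbol and DWARF tables.
CGO_ENABLED=0 go build -ldflags "-s -w" -o kamaji .
```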
Parth Yadav
d6a94dfa5e fix(controlplane): Prioritize InternalIP in kubelet-preferred-address-types (#859)
This patch switches default kubelet-preferred-address-types to
"InternalIP,ExternalIP,Hostname" to avoid failures in kube-apiserver
connection to kubelet when node hostnames are not resolvable by the
external DNS server. This improves out-of-the-box reliability across
most environments by choosing node `InternalIP` as the preferred mode
to reach Kubelet.

Signed-off-by: Parth Yadav <parthyadav3105@gmail.com>
2025-06-29 21:59:20 +02:00
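A hedged TenantControlPlane excerpt showing the new default order; the field path is inferred from the kubelet settings of the TCP API, not quoted from the patch:

```yaml
# Assumed excerpt; the spec.kubernetes.kubelet path is inferred.
spec:
  kubernetes:
    kubelet:
      preferredAddressTypes:
        - InternalIP
        - ExternalIP
        - Hostname
```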
Dario Tranchitella
3fd1882e43 fix: wrong jsonpath for installed version (#857)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-06-27 17:11:31 +02:00
Dario Tranchitella
d40996daa9 feat(docs): enhancing navigation for api reference (#853)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-06-27 12:37:49 +02:00
dependabot[bot]
b5956e43a5 feat(deps): bump github.com/docker/docker (#850)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.2.2+incompatible to 28.3.0+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.2.2...v28.3.0)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.3.0+incompatible
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-26 21:02:03 +02:00
Alessandro
c97767b54f feat(api): display status version in TenantControlPlane columns (#852)
Signed-off-by: alecristofanilli <cristofanillia@gmail.com>
2025-06-26 21:01:42 +02:00
Dario Tranchitella
464984f091 feat(docs): generating api docs for cluster api objects (#851)
* chore(makefile): generating api docs for cluster api objects

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: generating api docs for cluster api objects

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-06-26 12:45:39 +02:00
Dario Tranchitella
d7b21b5814 chore(helm): ignoring helm files for stable packaging (#848)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-06-25 12:34:13 +02:00
Dario Tranchitella
3230a70475 feat(migration): enhancements and customisable timeout (#845)
* feat(migration): customising timeout via tcp annotation

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: customising migration timeout via tcp annotation

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* fix(migrate): delete job in case of timeout change

This deletes the job that failed due to an incorrect timeout and creates a
new object rather than updating it, since the job is immutable in the API
specification.

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-06-20 19:32:49 +02:00
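A hedged usage example; the annotation key and value format are hypothetical, since the commit message does not name them:

```bash
# Hypothetical annotation key; illustrative only.
kubectl annotate tenantcontrolplane tenant-00 \
  kamaji.clastix.io/migrate-timeout="10m"
```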
dependabot[bot]
b0c9034994 feat(deps): bump k8s.io/kubernetes in the k8s group (#844)
Bumps the k8s group with 1 update: [k8s.io/kubernetes](https://github.com/kubernetes/kubernetes).


Updates `k8s.io/kubernetes` from 1.33.1 to 1.33.2
- [Release notes](https://github.com/kubernetes/kubernetes/releases)
- [Commits](https://github.com/kubernetes/kubernetes/compare/v1.33.1...v1.33.2)

---
updated-dependencies:
- dependency-name: k8s.io/kubernetes
  dependency-version: 1.33.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: k8s
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-20 13:49:53 +02:00
dependabot[bot]
22f9c36b15 feat(deps): bump github.com/go-sql-driver/mysql from 1.9.2 to 1.9.3 (#843)
Bumps [github.com/go-sql-driver/mysql](https://github.com/go-sql-driver/mysql) from 1.9.2 to 1.9.3.
- [Release notes](https://github.com/go-sql-driver/mysql/releases)
- [Changelog](https://github.com/go-sql-driver/mysql/blob/v1.9.3/CHANGELOG.md)
- [Commits](https://github.com/go-sql-driver/mysql/compare/v1.9.2...v1.9.3)

---
updated-dependencies:
- dependency-name: github.com/go-sql-driver/mysql
  dependency-version: 1.9.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 12:30:24 +02:00
Adriano Pezzuto
d5d0295736 feat(docs): add user public quote (#842) 2025-06-13 18:38:15 +02:00
Adriano Pezzuto
80afd43c9f feat(docs): add llms.txt file (#841)
* feat(docs): add terraform guide

* fix(docs): update metallb annotation

* feat(docs): add llms.txt file
2025-06-13 17:11:21 +02:00
Dario Tranchitella
32ef65820d feat: toggleable schema cleanup prior to migration (#840)
* feat(migration): cleanup prior migration

When using the annotation `kamaji.clastix.io/cleanup-prior-migration`
with a true boolean value, Kamaji will perform a clean-up on the target
DataStore to avoid stale resources when back-and-forth migrations occur.

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: cleanup prior migration

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-06-13 08:06:24 +02:00
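Usage then reduces to annotating the TenantControlPlane before triggering the migration (the annotation key comes from the commit message; the resource name is illustrative):

```bash
# Opt in to the DataStore clean-up prior to migration.
kubectl annotate tenantcontrolplane tenant-00 \
  kamaji.clastix.io/cleanup-prior-migration="true"
```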
Dario Tranchitella
eeb12c232b feat(metrics): exposing resource handlers time bucket (#836)
* refactor: static names and avoiding clash

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(metrics): exposing resource handlers time bucket

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-06-10 17:28:10 +02:00
Dario Tranchitella
ce8d5f2516 refactor: requeue deprecated, migrating to requeue after (#837)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-06-10 17:27:45 +02:00
dependabot[bot]
1ac72ff22f feat(deps): bump github.com/nats-io/nats.go from 1.42.0 to 1.43.0 (#831)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.42.0 to 1.43.0.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.42.0...v1.43.0)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.43.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-04 14:23:01 -07:00
Adriano Pezzuto
501bd7a7ca feat(docs): terraform guide (#832)
* feat(docs): add terraform guide

* fix(docs): update metallb annotation
2025-06-04 15:01:29 +02:00
dependabot[bot]
ba3249f220 feat(deps): bump github.com/docker/docker (#829)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.2.0+incompatible to 28.2.2+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.2.0...v28.2.2)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.2.2+incompatible
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-03 13:45:13 -07:00
Wouter van Os
c156322fe3 chore(adopters): add CBWS (#830) 2025-06-03 08:57:49 +02:00
Dario Tranchitella
ca622ef9ae feat(k8s): upgrade support to v1.33.1 (#826)
* feat(k8s): upgrade support to v1.33.1

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(deps): upgrade support to k8s v1.33.1

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-05-29 16:48:23 +02:00
Ricardo Pardini
a9c324e2e5 fix(helm): document importance of correct clusterDomain for kamaji-etcd subchart (#822)
Signed-off-by: Ricardo Pardini <ricardo@pardini.net>
2025-05-28 17:28:02 -06:00
dependabot[bot]
95a32ee5d4 feat(deps): bump github.com/docker/docker (#824)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.1.1+incompatible to 28.2.0+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.1.1...v28.2.0)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.2.0+incompatible
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-28 21:05:09 +00:00
dependabot[bot]
9874700b28 feat(deps): bump github.com/go-logr/logr from 1.4.2 to 1.4.3 (#825)
Bumps [github.com/go-logr/logr](https://github.com/go-logr/logr) from 1.4.2 to 1.4.3.
- [Release notes](https://github.com/go-logr/logr/releases)
- [Changelog](https://github.com/go-logr/logr/blob/master/CHANGELOG.md)
- [Commits](https://github.com/go-logr/logr/compare/v1.4.2...v1.4.3)

---
updated-dependencies:
- dependency-name: github.com/go-logr/logr
  dependency-version: 1.4.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-28 23:03:22 +02:00
dependabot[bot]
994162c5b0 feat(deps): bump sigs.k8s.io/controller-runtime from 0.20.4 to 0.21.0 (#819)
Bumps [sigs.k8s.io/controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) from 0.20.4 to 0.21.0.
- [Release notes](https://github.com/kubernetes-sigs/controller-runtime/releases)
- [Changelog](https://github.com/kubernetes-sigs/controller-runtime/blob/main/RELEASE.md)
- [Commits](https://github.com/kubernetes-sigs/controller-runtime/compare/v0.20.4...v0.21.0)

---
updated-dependencies:
- dependency-name: sigs.k8s.io/controller-runtime
  dependency-version: 0.21.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-27 02:30:34 +02:00
dependabot[bot]
8ba99bd6c6 feat(deps): bump k8s.io/kubernetes in the k8s group (#815)
Bumps the k8s group with 1 update: [k8s.io/kubernetes](https://github.com/kubernetes/kubernetes).


Updates `k8s.io/kubernetes` from 1.33.0 to 1.33.1
- [Release notes](https://github.com/kubernetes/kubernetes/releases)
- [Commits](https://github.com/kubernetes/kubernetes/compare/v1.33.0...v1.33.1)

---
updated-dependencies:
- dependency-name: k8s.io/kubernetes
  dependency-version: 1.33.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: k8s
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-25 03:08:37 +02:00
Pierre PIRONIN
16438ebed8 Update kamaji-kind.md to install Clastix Helm repo (#818) 2025-05-23 06:40:27 -07:00
Dario Tranchitella
f750073af6 refactor!: k8s api server validation for kubelet preferred address type uniqueness (#812)
* feat(api): relying on k8s list set for unique items

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(crd)!: relying on k8s list set for unique items

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(webhook): removing unused webhook for kubelet preferred address type

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs(crd): kubelet preferred address type uniqueness

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-05-07 12:13:00 +02:00
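A sketch of the underlying idea, using kubebuilder's set-typed list marker so the apiserver itself rejects duplicates; these are not Kamaji's actual type definitions:

```go
// Sketch: with +listType=set the apiserver enforces item uniqueness,
// making a dedicated validating webhook redundant.
type KubeletSpec struct {
	// +listType=set
	PreferredAddressTypes []string `json:"preferredAddressTypes,omitempty"`
}
```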
Dario Tranchitella
6b10c89d2f feat(ci): semantic pr title (#810)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-05-06 15:32:21 +02:00
Dario Tranchitella
8bd1f53568 fix(helm): missing docs for kamaji-etcd dependency bump from 0.9.2 to 0.10.0 (#809)
* fix(ci): lint and diff for helm jobs

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs(helm): bump kamaji-etcd dependency from 0.9.2 to 0.10.0

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-05-05 17:16:08 +02:00
Dario Tranchitella
9db4ccc5f1 chore(helm): upgrading kamaji-etcd to v0.10.0 (#808)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-05-05 17:04:51 +02:00
dependabot[bot]
fd49c238f5 feat(deps): bump github.com/nats-io/nats.go from 1.41.2 to 1.42.0 (#807)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.41.2 to 1.42.0.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.41.2...v1.42.0)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.42.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-05 14:23:47 +02:00
Dario Tranchitella
b53adbfd6e feat: releasing helm latest version (#805)
* chore(github): releasing helm latest version

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(helm): releasing helm latest version

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: releasing helm latest version

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* fix(docs): upgrading supported k8s matrix

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-05-02 15:09:01 +02:00
dependabot[bot]
994ca7687d feat(deps): bump github.com/testcontainers/testcontainers-go (#802)
Bumps [github.com/testcontainers/testcontainers-go](https://github.com/testcontainers/testcontainers-go) from 0.36.0 to 0.37.0.
- [Release notes](https://github.com/testcontainers/testcontainers-go/releases)
- [Commits](https://github.com/testcontainers/testcontainers-go/compare/v0.36.0...v0.37.0)

---
updated-dependencies:
- dependency-name: github.com/testcontainers/testcontainers-go
  dependency-version: 0.37.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-28 19:05:02 +02:00
Dario Tranchitella
c2bb50933a feat: supporting k8s v1.33 (#792)
* chore(go): updating dependencies for k8s v1.33

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* refactor: aligning to k8s v1.33 changes

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(kubeadm): supporting k8s v1.33.0

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(test): aligning changes to k8s v1.33

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(sample): updating to k8s v1.33.0

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: support to k8s v1.33

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(helm)!: support to k8s v1.33

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(makefile): removing kind deploy

The main Makefile handles its provisioning according to the e2e test
suite.

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* fix(test): removing sa on test and fixing worker nodes join

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-04-24 13:18:15 +02:00
Dario Tranchitella
b027e23b99 feat: enhancing concurrent reconciliations (#790)
* feat: buffered channels for generic events

Channels used to feed GenericEvents for cross-controller triggers are now
buffered according to --max-concurrent-tcp-reconciles: this is required to
avoid channel-full errors when dealing with large management clusters
serving a sizeable number of Tenant Control Planes.

Increasing this value will put more pressure on memory (mostly for GC)
and CPU (provisioning multiple certificates at the same time).

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* refactor: retrying datastore status update

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(performance): reducing memory consumption for channel triggers

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(datastore): reconcile events only for root object changes

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat: waiting soot manager exit before termination

This change introduces a grace period of 10 seconds before abruptly
terminating the Tenant Control Plane deployment, allowing the soot
manager to complete its exit procedure and avoiding false positive
errors caused by the API Server becoming unresponsive upon
user-initiated deletion.

The aim of this change is to reduce the number of false positive errors
upon mass deletion of Tenant Control Plane objects.
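
A minimal sketch of the grace-period wait, assuming a hypothetical `sootDone` channel closed when the soot manager exits:

```
package soot

import (
	"context"
	"time"
)

// waitForExit blocks until the soot manager signals completion, or until
// the 10-second grace period elapses and teardown proceeds anyway.
func waitForExit(sootDone <-chan struct{}) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	select {
	case <-sootDone: // clean exit, no spurious API Server errors
	case <-ctx.Done(): // grace period expired
	}
}
```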

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* refactor: unbuffered channel with timeout

WatchesRawSource is non-blocking, so there is no need to check whether
the channel is full. To prevent deadlocks, a WithTimeout check has been
introduced.
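
In practice this amounts to a timed send on the unbuffered channel, along these lines (names hypothetical):

```
package trigger

import (
	"time"

	"sigs.k8s.io/controller-runtime/pkg/event"
)

// sendWithTimeout attempts to deliver a trigger without blocking forever:
// if no consumer picks the event up within the timeout, the send is
// abandoned and reported to the caller, preventing a deadlocked producer.
func sendWithTimeout(ch chan<- event.GenericEvent, ev event.GenericEvent, timeout time.Duration) bool {
	select {
	case ch <- ev:
		return true
	case <-time.After(timeout):
		return false
	}
}
```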

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-04-23 21:00:29 +02:00
Febrian
728ac21ffa Update cluster-autoscaler.md
A label is missing in the Cluster Autoscaler YAML configuration.
2025-04-23 11:57:46 +02:00
dependabot[bot]
4595b79ddd feat(deps): bump github.com/docker/docker (#796)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.1.0+incompatible to 28.1.1+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.1.0...v28.1.1)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.1.1+incompatible
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-22 16:54:29 +02:00
bsctl
335ecfbe27 feat(docs): add capi proxmox sample 2025-04-21 12:01:49 +02:00
dependabot[bot]
fd8ffeb607 feat(deps): bump github.com/nats-io/nats.go from 1.41.1 to 1.41.2 (#794)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.41.1 to 1.41.2.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.41.1...v1.41.2)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.41.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-18 10:31:06 +02:00
dependabot[bot]
4cdfcc1347 feat(deps): bump github.com/docker/docker (#793)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.0.4+incompatible to 28.1.0+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.0.4...v28.1.0)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.1.0+incompatible
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-18 10:30:45 +02:00
Dario Tranchitella
7c785726d9 refactor: consolidating struct members for soot controllers (#791)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-04-17 22:36:39 +02:00
Adriano Pezzuto
69141e5765 feat(docs): refactoring (#784)
* feat(docs): add landing page

* feat(docs): refactoring
2025-04-16 11:13:35 +02:00
Johann Wagner
223aa6d4c9 chore(adopters): add wobcom 2025-04-14 14:21:25 +02:00
Dario Tranchitella
2204fdad63 fix(datastore): pod template hashing for storage migration (#710)
* fix(datastore): pod template hashing for storage migration

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* test: ensuring migration works for etcd and postgresql

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-04-14 14:20:19 +02:00
Dario Tranchitella
880a392887 fix(helm): pull secrets to sa instead of deployment (#785)
This change is required for the enterprise offering, where the Kamaji
stable image is hosted in a container registry with authentication and
can't be pulled without credentials: when a migrate job is spun up, it
reuses the same Kamaji controller ServiceAccount, which offers its
image pull credentials.
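
In API terms the pull secret moves from the Deployment's pod spec to the ServiceAccount, so any Pod bound to it, migrate Jobs included, inherits the credentials; a sketch with placeholder names:

```
package manifests

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The pull secret lives on the ServiceAccount, so both the controller
// Deployment and any migrate Job reusing it can pull from the
// authenticated registry. Both names below are placeholders.
var controllerServiceAccount = corev1.ServiceAccount{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "kamaji-controller-manager",
		Namespace: "kamaji-system",
	},
	ImagePullSecrets: []corev1.LocalObjectReference{
		{Name: "registry-credentials"},
	},
}
```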

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-04-14 10:24:15 +02:00
dependabot[bot]
2ab9dc3949 feat(deps): bump github.com/nats-io/nats.go from 1.41.0 to 1.41.1 (#781)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.41.0 to 1.41.1.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.41.0...v1.41.1)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.41.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-10 15:21:20 +02:00
Dario Tranchitella
f87a057809 test: retry scale on conflict (#778)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-04-08 20:46:30 +02:00
dependabot[bot]
3e79845175 feat(deps): bump github.com/go-sql-driver/mysql from 1.9.1 to 1.9.2 (#774)
Bumps [github.com/go-sql-driver/mysql](https://github.com/go-sql-driver/mysql) from 1.9.1 to 1.9.2.
- [Release notes](https://github.com/go-sql-driver/mysql/releases)
- [Changelog](https://github.com/go-sql-driver/mysql/blob/master/CHANGELOG.md)
- [Commits](https://github.com/go-sql-driver/mysql/compare/v1.9.1...v1.9.2)

---
updated-dependencies:
- dependency-name: github.com/go-sql-driver/mysql
  dependency-version: 1.9.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-08 15:20:34 +02:00
Mario Valderrama
c769226e79 test: add scale to zero e2e test (#776)
* test: add scale to zero e2e test

Signed-off-by: Mario Valderrama <mario.valderrama@ionos.com>

* fix: retry create token command

* fix: use correct assertion

---------

Signed-off-by: Mario Valderrama <mario.valderrama@ionos.com>
2025-04-08 15:20:21 +02:00
dependabot[bot]
97d87b6a56 feat(deps): bump github.com/onsi/ginkgo/v2 from 2.23.3 to 2.23.4 (#775)
Bumps [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) from 2.23.3 to 2.23.4.
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.23.3...v2.23.4)

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-version: 2.23.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-08 11:29:04 +02:00
Dario Tranchitella
b68010e072 feat!: introducing sleeping status (#773)
* feat(api): introducing sleeping status

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(helm)!: introducing sleeping status

Marking this commit as breaking since a CustomResourceDefinition update
is required for users dealing with scale to zero, following the
introduction of the new enum value for the status field.

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: introducing sleeping status

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-04-07 16:44:13 +02:00
Mario Valderrama
dc18f27948 fix: stop watches when TCP is scaled to zero (#771) 2025-04-07 11:19:12 +02:00
dependabot[bot]
d3f75feb12 feat(deps): bump github.com/onsi/gomega from 1.36.3 to 1.37.0 (#768)
Bumps [github.com/onsi/gomega](https://github.com/onsi/gomega) from 1.36.3 to 1.37.0.
- [Release notes](https://github.com/onsi/gomega/releases)
- [Changelog](https://github.com/onsi/gomega/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/gomega/compare/v1.36.3...v1.37.0)

---
updated-dependencies:
- dependency-name: github.com/onsi/gomega
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-07 10:51:03 +02:00
Mario Valderrama
94a64d1f75 fix: prevent unnecessary copy in loop (#769) 2025-04-07 10:50:52 +02:00
dependabot[bot]
ec523d3490 feat(deps): bump github.com/nats-io/nats.go from 1.40.1 to 1.41.0 (#770)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.40.1 to 1.41.0.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.40.1...v1.41.0)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.41.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-07 10:44:35 +02:00
dependabot[bot]
20b8a3aca0 feat(deps): bump github.com/testcontainers/testcontainers-go (#765)
Bumps [github.com/testcontainers/testcontainers-go](https://github.com/testcontainers/testcontainers-go) from 0.35.0 to 0.36.0.
- [Release notes](https://github.com/testcontainers/testcontainers-go/releases)
- [Commits](https://github.com/testcontainers/testcontainers-go/compare/v0.35.0...v0.36.0)

---
updated-dependencies:
- dependency-name: github.com/testcontainers/testcontainers-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-02 07:44:37 +01:00
dependabot[bot]
37c548bf8d feat(deps): bump github.com/onsi/gomega from 1.36.2 to 1.36.3 (#749)
Bumps [github.com/onsi/gomega](https://github.com/onsi/gomega) from 1.36.2 to 1.36.3.
- [Release notes](https://github.com/onsi/gomega/releases)
- [Changelog](https://github.com/onsi/gomega/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/gomega/compare/v1.36.2...v1.36.3)

---
updated-dependencies:
- dependency-name: github.com/onsi/gomega
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-02 07:44:17 +01:00
dependabot[bot]
0ebbdae4f8 feat(deps): bump the etcd group with 2 updates (#764)
Bumps the etcd group with 2 updates: [go.etcd.io/etcd/api/v3](https://github.com/etcd-io/etcd) and [go.etcd.io/etcd/client/v3](https://github.com/etcd-io/etcd).


Updates `go.etcd.io/etcd/api/v3` from 3.5.20 to 3.5.21
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.5.20...v3.5.21)

Updates `go.etcd.io/etcd/client/v3` from 3.5.20 to 3.5.21
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.5.20...v3.5.21)

---
updated-dependencies:
- dependency-name: go.etcd.io/etcd/api/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: etcd
- dependency-name: go.etcd.io/etcd/client/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: etcd
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-02 01:08:23 +01:00
Dario Tranchitella
ec443e6eac fix(crds): datastore driver is immutable (#767)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-04-02 01:07:56 +01:00
Dario Tranchitella
b2ec531183 chore(go): upgrading to 1.24 (#766)
* chore(go): upgrading to 1.24

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(ci): building golangci-lint from source

* chore(golangci-lint): aligning to v2 release

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-04-01 21:09:46 +02:00
Ammar Yasser
0f3de13d26 feat: validate datastores with cel (#762)
* feat: Validate DataStores with CEL using the following rules (see the sketch after this list):

- certificateAuthority privateKey must have secretReference or content when driver is etcd
- clientCertificate must have secretReference or content when driver is etcd
- clientCertificate privateKey must have secretReference or content when driver is etcd
- When driver is not etcd and tlsConfig exists, certificateAuthority must be null or contain valid content
- When driver is not etcd and tlsConfig exists, clientCertificate must be null or contain valid content
- When driver is not etcd and basicAuth exists, username must have secretReference or content
- When driver is not etcd and basicAuth exists, password must have secretReference or content
- When driver is not etcd, either tlsConfig or basicAuth must be provided
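
For orientation, here is one of the rules above rendered as a kubebuilder CEL marker; this is an illustrative shape only, the authoritative markers appear in the api/v1alpha1 diff further down this page:

```
package v1alpha1

// Illustrative only; the real DataStoreSpec carries the full rule set.
//
// +kubebuilder:validation:XValidation:rule="(self.driver != \"etcd\") ? (has(self.tlsConfig) || has(self.basicAuth)) : true",message="When driver is not etcd, either tlsConfig or basicAuth must be provided"
type dataStoreSpecSketch struct {
	Driver    string    `json:"driver"`
	TLSConfig *struct{} `json:"tlsConfig,omitempty"`
	BasicAuth *struct{} `json:"basicAuth,omitempty"`
}
```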

Signed-off-by: aerosouund <aerosound161@gmail.com>

* fix: Add extra rule

Signed-off-by: aerosouund <aerosound161@gmail.com>

* fix: ginkgo flag ordering

Signed-off-by: aerosouund <aerosound161@gmail.com>

* fix: Fix syntax of tls or basic auth rule and remove the certificate authority rule

Signed-off-by: aerosouund <aerosound161@gmail.com>

* test: Add ginkgo tests for validations

Signed-off-by: aerosouund <aerosound161@gmail.com>

* fix(test): missing default values

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* chore(ci): running integration tests as gh job

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: aerosouund <aerosound161@gmail.com>
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
Co-authored-by: Dario Tranchitella <dario@tranchitella.eu>
2025-03-31 19:03:55 +02:00
Dario Tranchitella
dd099e750f fix(soot): triggering cleanup for failed soot manager (#761)
* fix(soot): triggering cleanup for failed soot manager

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* fix: logging blocked channels

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-03-31 18:53:58 +02:00
dependabot[bot]
aab2250e8d feat(deps): bump github.com/spf13/viper from 1.20.0 to 1.20.1 (#757)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.20.0 to 1.20.1.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.20.0...v1.20.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/viper
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-31 18:49:07 +02:00
dependabot[bot]
13243c984a feat(deps): bump sigs.k8s.io/controller-runtime from 0.20.3 to 0.20.4 (#750)
Bumps [sigs.k8s.io/controller-runtime](https://github.com/kubernetes-sigs/controller-runtime) from 0.20.3 to 0.20.4.
- [Release notes](https://github.com/kubernetes-sigs/controller-runtime/releases)
- [Changelog](https://github.com/kubernetes-sigs/controller-runtime/blob/main/RELEASE.md)
- [Commits](https://github.com/kubernetes-sigs/controller-runtime/compare/v0.20.3...v0.20.4)

---
updated-dependencies:
- dependency-name: sigs.k8s.io/controller-runtime
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-31 18:48:34 +02:00
dependabot[bot]
df3a906bcf feat(deps): bump github.com/nats-io/nats.go from 1.39.1 to 1.40.1 (#754)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.39.1 to 1.40.1.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.39.1...v1.40.1)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-31 18:48:27 +02:00
dependabot[bot]
33664d7e40 feat(deps): bump github.com/docker/docker (#755)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 27.5.1+incompatible to 28.0.4+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v27.5.1...v28.0.4)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-31 18:48:18 +02:00
Dario Tranchitella
8b22f22bd3 fix: check cert names and ips including tcp address (#758)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-03-27 18:34:57 +01:00
bsctl
05aad8ce56 feat(docs): how to assign a specific address to tcp 2025-03-27 17:34:59 +01:00
bsctl
c91b4b3674 feat(docs): refine monitoring 2025-03-27 16:40:43 +01:00
bsctl
f64953c411 feat(docs): document monitoring 2025-03-27 16:40:43 +01:00
bsctl
751854b310 feat(docs): minors on cluster-api documentation 2025-03-24 08:19:36 +01:00
bsctl
620647b2da feat(docs): document cluster-api usage 2025-03-24 08:19:36 +01:00
Dario Tranchitella
a8f8582ea6 fix(datastore): handling datastore with no client certificate (#745)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2025-03-23 22:29:33 +01:00
dependabot[bot]
7dceac8dc6 feat(deps): bump the etcd group with 2 updates (#739)
Bumps the etcd group with 2 updates: [go.etcd.io/etcd/api/v3](https://github.com/etcd-io/etcd) and [go.etcd.io/etcd/client/v3](https://github.com/etcd-io/etcd).


Updates `go.etcd.io/etcd/api/v3` from 3.5.19 to 3.5.20
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.5.19...v3.5.20)

Updates `go.etcd.io/etcd/client/v3` from 3.5.19 to 3.5.20
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.5.19...v3.5.20)

---
updated-dependencies:
- dependency-name: go.etcd.io/etcd/api/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: etcd
- dependency-name: go.etcd.io/etcd/client/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: etcd
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-22 17:55:54 +01:00
dependabot[bot]
ad7c3b71e7 feat(deps): bump github.com/go-sql-driver/mysql from 1.9.0 to 1.9.1 (#740)
Bumps [github.com/go-sql-driver/mysql](https://github.com/go-sql-driver/mysql) from 1.9.0 to 1.9.1.
- [Release notes](https://github.com/go-sql-driver/mysql/releases)
- [Changelog](https://github.com/go-sql-driver/mysql/blob/master/CHANGELOG.md)
- [Commits](https://github.com/go-sql-driver/mysql/compare/v1.9.0...v1.9.1)

---
updated-dependencies:
- dependency-name: github.com/go-sql-driver/mysql
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-22 17:55:23 +01:00
dependabot[bot]
989dcff863 feat(deps): bump github.com/onsi/ginkgo/v2 from 2.23.2 to 2.23.3 (#741)
Bumps [github.com/onsi/ginkgo/v2](https://github.com/onsi/ginkgo) from 2.23.2 to 2.23.3.
- [Release notes](https://github.com/onsi/ginkgo/releases)
- [Changelog](https://github.com/onsi/ginkgo/blob/master/CHANGELOG.md)
- [Commits](https://github.com/onsi/ginkgo/compare/v2.23.2...v2.23.3)

---
updated-dependencies:
- dependency-name: github.com/onsi/ginkgo/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-22 17:55:16 +01:00
196 changed files with 45146 additions and 3198 deletions

.github/release-template.md vendored Normal file

@@ -0,0 +1,10 @@
This edge release can be pulled from Docker Hub as follows:
```
docker pull clastix/kamaji:$TAG
```
> As from the v1.0.0 release, CLASTIX no longer provides stable release artefacts.
>
> Stable release artefacts are offered on a subscription basis by CLASTIX, the main Kamaji project contributor.
> Learn more from CLASTIX's [Support](https://clastix.io/support/) section.


@@ -7,28 +7,39 @@ on:
branches: [ "*" ]
jobs:
test:
name: integration
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v5
- uses: actions/setup-go@v6
with:
go-version-file: go.mod
- run: make test
golangci:
name: lint
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
- uses: actions/checkout@v5
- uses: actions/setup-go@v6
with:
go-version-file: go.mod
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v6.5.2
with:
version: v1.62.2
only-new-issues: false
args: --config .golangci.yml
run: make golint
# TODO(prometherion): enable back once golangci-lint is built from v1.24 rather than v1.23
# uses: golangci/golangci-lint-action@v6.5.2
# with:
# version: v1.62.2
# only-new-issues: false
# args: --config .golangci.yml
diff:
name: diff
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
- uses: actions/setup-go@v5
- uses: actions/setup-go@v6
with:
go-version-file: go.mod
- run: make manifests


@@ -4,7 +4,7 @@ on:
push:
branches: [ "*" ]
paths:
- '.github/workflows/e2e.yml'
- '.github/workflows/e2e.yaml'
- 'api/**'
- 'charts/kamaji/**'
- 'controllers/**'
@@ -18,7 +18,7 @@ on:
pull_request:
branches: [ "*" ]
paths:
- '.github/workflows/e2e.yml'
- '.github/workflows/e2e.yaml'
- 'api/**'
- 'charts/kamaji/**'
- 'controllers/**'
@@ -33,18 +33,18 @@ on:
jobs:
kind:
name: Kubernetes
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
- uses: actions/setup-go@v5
- uses: actions/setup-go@v6
with:
go-version: '1.22'
check-latest: true
go-version-file: go.mod
- run: |
sudo apt-get update
sudo apt-get install -y golang-cfssl
sudo swapoff -a
sudo modprobe br_netfilter
- name: e2e testing
run: make e2e


@@ -2,8 +2,7 @@ name: Helm Chart
on:
push:
branches: [ "*" ]
tags: [ "helm-v*" ]
branches: [ "master" ]
pull_request:
branches: [ "*" ]
@@ -12,16 +11,19 @@ jobs:
name: diff
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
- run: make -C charts/kamaji docs
- name: Checking if Helm docs is not aligned
- name: Checking if Kamaji Helm Chart docs is not aligned
run: if [[ $(git diff | wc -l) -gt 0 ]]; then echo ">>> Untracked changes have not been committed" && git --no-pager diff && exit 1; fi
- run: make -C charts/kamaji-crds docs
- name: Checking if Kamaji CRDs Helm Chart docs is not aligned
run: if [[ $(git diff | wc -l) -gt 0 ]]; then echo ">>> Untracked changes have not been committed" && git --no-pager diff && exit 1; fi
lint:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- uses: azure/setup-helm@v4
with:
version: 3.3.4
@@ -29,13 +31,16 @@ jobs:
run: |-
helm repo add clastix https://clastix.github.io/charts
helm dependency build ./charts/kamaji
- name: Linting Chart
- name: Linting Kamaji Helm Chart
run: helm lint ./charts/kamaji
- name: Linting Kamaji CRDS Helm Chart
run: helm lint ./charts/kamaji-crds
release:
if: startsWith(github.ref, 'refs/tags/helm-v')
if: github.event_name == 'push' && github.ref == 'refs/heads/master'
needs: [ "lint", "diff" ]
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Publish Helm chart
uses: stefanprodan/helm-gh-pages@master
with:


@@ -7,15 +7,21 @@ on:
- v*
branches:
- master
workflow_dispatch:
inputs:
tag:
description: "Tag to build"
required: true
type: string
jobs:
ko:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
- uses: actions/setup-go@v5
- uses: actions/setup-go@v6
with:
go-version-file: go.mod
- name: "ko: install"
@@ -25,7 +31,7 @@ jobs:
- name: "ko: login to docker.io container registry"
run: ./bin/ko login docker.io -u ${{ secrets.DOCKER_IO_USERNAME }} -p ${{ secrets.DOCKER_IO_TOKEN }}
- name: "ko: build and push tag"
run: make VERSION=${{ github.ref_name }} KO_LOCAL=false KO_PUSH=true build
if: startsWith(github.ref, 'refs/tags/v') || startsWith(github.ref, 'refs/tags/edge-')
run: make VERSION=${{ github.event.inputs.tag }} KO_LOCAL=false KO_PUSH=true build
if: github.event_name == 'workflow_dispatch'
- name: "ko: build and push latest"
run: make VERSION=latest KO_LOCAL=false KO_PUSH=true build

.github/workflows/pr.yaml vendored Normal file

@@ -0,0 +1,23 @@
name: Check PR Title
on:
pull_request:
types: [opened, edited, reopened, synchronize]
jobs:
semantic-pr-title:
runs-on: ubuntu-22.04
steps:
- uses: amannn/action-semantic-pull-request@v6
with:
types: |
feat
fix
chore
docs
style
refactor
perf
test
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/release.yml vendored Normal file

@@ -0,0 +1,75 @@
name: Weekly Edge Release
on:
schedule:
- cron: '0 7 * * 1' # Every Monday at 9 AM CET
workflow_dispatch:
permissions:
contents: write
jobs:
release:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v5
with:
fetch-depth: 0
- name: generating date metadata
id: date
run: |
CURRENT_DATE=$(date -u +'%Y-%m-%d')
YY=$(date -u +'%y')
M=$(date -u +'%_m' | sed 's/ //g')
FIRST_OF_MONTH=$(date -u -d "$CURRENT_DATE" +%Y-%m-01)
WEEK_NUM=$(( (($(date -u +%s) - $(date -u -d "$FIRST_OF_MONTH" +%s)) / 86400 + $(date -u -d "$FIRST_OF_MONTH" +%u) - 1) / 7 + 1 ))
echo "yy=$YY" >> $GITHUB_OUTPUT
echo "month=$M" >> $GITHUB_OUTPUT
echo "week=$WEEK_NUM" >> $GITHUB_OUTPUT
echo "date=$CURRENT_DATE" >> $GITHUB_OUTPUT
- name: generating tag metadata
id: tag
run: |
TAG="edge-${{ steps.date.outputs.yy }}.${{ steps.date.outputs.month }}.${{ steps.date.outputs.week }}"
echo "tag=$TAG" >> $GITHUB_OUTPUT
- name: generate release notes from template
run: |
export TAG="${{ steps.tag.outputs.tag }}"
envsubst < .github/release-template.md > release-notes.md
- name: generate release notes from template
run: |
export TAG="${{ steps.tag.outputs.tag }}"
envsubst < .github/release-template.md > release-notes-header.md
- name: generate GitHub release notes
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
gh release --repo "$GITHUB_REPOSITORY" \
create "${{ steps.tag.outputs.tag }}" \
--generate-notes \
--draft \
--title "temp" \
--notes "temp" > /dev/null || true
gh release view "${{ steps.tag.outputs.tag }}" \
--json body --jq .body > auto-notes.md
gh release delete "${{ steps.tag.outputs.tag }}" --yes || true
- name: combine notes
run: |
cat release-notes-header.md auto-notes.md > release-notes.md
- name: create GitHub release
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
gh release create "${{ steps.tag.outputs.tag }}" \
--title "${{ steps.tag.outputs.tag }}" \
--notes-file release-notes.md
- name: trigger container build workflow
env:
GH_TOKEN: ${{ secrets.WORKFLOW_TOKEN }}
run: |
gh workflow run "Container image build" \
--ref master \
-f tag="${{ steps.tag.outputs.tag }}"


@@ -1,53 +1,76 @@
run:
timeout: 10m
linters-settings:
revive:
rules:
- name: dot-imports
arguments:
- allowedPackages:
- "github.com/onsi/ginkgo/v2"
- "github.com/onsi/gomega"
gci:
sections:
- standard
- default
- prefix(github.com/clastix/kamaji/)
goheader:
template: |-
Copyright 2022 Clastix Labs
SPDX-License-Identifier: Apache-2.0
version: "2"
linters:
default: all
disable:
- depguard
- wrapcheck
- mnd
- varnamelen
- testpackage
- tagliatelle
- paralleltest
- ireturn
- err113
- gochecknoglobals
- wsl
- exhaustive
- nosprintfhostport
- nonamedreturns
- interfacebloat
- exhaustruct
- lll
- gosec
- gomoddirectives
- godox
- gochecknoinits
- funlen
- dupl
- cyclop
- depguard
- dupl
- err113
- exhaustive
- exhaustruct
- funlen
- gochecknoglobals
- gochecknoinits
- gocognit
- godox
- gomoddirectives
- gosec
- interfacebloat
- ireturn
- lll
- mnd
- nestif
- nonamedreturns
- nosprintfhostport
- paralleltest
- perfsprint
# deprecated linters
- exportloopref
enable-all: true
- tagliatelle
- testpackage
- varnamelen
- wrapcheck
- wsl
settings:
staticcheck:
checks:
- all
- -QF1008
goheader:
template: |-
Copyright 2022 Clastix Labs
SPDX-License-Identifier: Apache-2.0
revive:
rules:
- name: dot-imports
arguments:
- allowedPackages:
- github.com/onsi/ginkgo/v2
- github.com/onsi/gomega
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- third_party$
- builtin$
- examples$
formatters:
enable:
- gci
- gofmt
- gofumpt
- goimports
settings:
gci:
sections:
- standard
- default
- prefix(github.com/clastix/kamaji/)
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$


@@ -8,7 +8,9 @@ Feel free to open a Pull-Request to get yours listed.
| Type | Name | Since | Website | Use-Case |
|:-|:-|:-|:-|:-|
| Vendor | Aknostic | 2023 | [link](https://aknostic.com) | Aknostic is a cloud-native consultancy company using Kamaji to build a Kubernetes based PaaS. |
| R&D | Aruba | 2024 | [link](https://www.aruba.it/home.aspx) | Aruba Cloud is an Italian Cloud Service Provider evaluating Kamaji to build and offer [Managed Kubernetes Service](https://my.arubacloud.com). |
| Vendor | Aruba | 2025 | [link](https://www.arubacloud.com/) | Aruba Cloud is an Italian Cloud Service Provider using Kamaji to build and offer [Managed Kubernetes Service](https://my.arubacloud.com). |
| Vendor | CBWS | 2025 | [link](https://cbws.nl) | CBWS is a European Cloud Provider using Kamaji to build and offer its [Managed Kubernetes Service](https://cbws.nl/cloud/kubernetes/). |
| Vendor | Coredge | 2025 | [link](https://coredge.io/) | Coredge uses Kamaji in its K8saaS offering to save infrastructure costs in its Sovereign Cloud & AI Infrastructure Platform for end-user organisations. |
| Vendor | DCloud | 2024 | [link](https://dcloud.co.id) | DCloud is an Indonesian Cloud Provider using Kamaji to build and offer [Managed Kubernetes Service](https://dcloud.co.id/dkubes.html). |
| Vendor | Dinova | 2025 | [link](https://dinova.one/) | Dinova is an Italian cloud services provider that integrates Kamaji in its datacenters to offer fully managed Kubernetes clusters. |
| End-user | KINX | 2024 | [link](https://kinx.net/?lang=en) | KINX is an Internet infrastructure service provider and will use Kamaji for its new [Managed Kubernetes Service](https://kinx.net/service/cloud/kubernetes/intro/?lang=en). |
@@ -26,6 +28,7 @@ Feel free to open a Pull-Request to get yours listed.
| End-user | Rackspace | 2024 | [link](https://spot.rackspace.com/) | Rackspace Spot uses Kamaji to manage our instances, offering fully-managed kubernetes infrastructure, auctioned in an open market. |
| R&D | IONOS Cloud | 2024 | [link](https://cloud.ionos.com/) | IONOS Cloud is a German Cloud Provider evaluating Kamaji for its [Managed Kubernetes platform](https://cloud.ionos.com/managed/kubernetes). |
| Vendor | OVHCloud | 2025 | [link](https://www.ovhcloud.com/) | OVHCloud is a European Cloud Provider that will use Kamaji for its Managed Kubernetes Service offer. |
| Vendor | WOBCOM GmbH | 2024 | [link](https://www.wobcom.de/) | WOBCOM provides an [**Open Digital Platform**](https://www.wobcom.de/geschaeftskunden/odp/) solution for Smart Cities, delivered to customers on a Managed Kubernetes powered by Kamaji. |
### Adopter Types


@@ -73,47 +73,47 @@ help: ## Display this help.
.PHONY: ko
ko: $(KO) ## Download ko locally if necessary.
$(KO): $(LOCALBIN)
test -s $(LOCALBIN)/ko || GOBIN=$(LOCALBIN) go install github.com/google/ko@v0.14.1
test -s $(LOCALBIN)/ko || GOBIN=$(LOCALBIN) CGO_ENABLED=0 go install -ldflags="-s -w" github.com/google/ko@v0.14.1
.PHONY: yq
yq: $(YQ) ## Download yq locally if necessary.
$(YQ): $(LOCALBIN)
test -s $(LOCALBIN)/yq || GOBIN=$(LOCALBIN) go install github.com/mikefarah/yq/v4@v4.44.2
test -s $(LOCALBIN)/yq || GOBIN=$(LOCALBIN) CGO_ENABLED=0 go install -ldflags="-s -w" github.com/mikefarah/yq/v4@v4.44.2
.PHONY: helm
helm: $(HELM) ## Download helm locally if necessary.
$(HELM): $(LOCALBIN)
test -s $(LOCALBIN)/helm || GOBIN=$(LOCALBIN) go install helm.sh/helm/v3/cmd/helm@v3.9.0
test -s $(LOCALBIN)/helm || GOBIN=$(LOCALBIN) CGO_ENABLED=0 go install -ldflags="-s -w" helm.sh/helm/v3/cmd/helm@v3.9.0
.PHONY: ginkgo
ginkgo: $(GINKGO) ## Download ginkgo locally if necessary.
$(GINKGO): $(LOCALBIN)
test -s $(LOCALBIN)/ginkgo || GOBIN=$(LOCALBIN) go install github.com/onsi/ginkgo/v2/ginkgo
test -s $(LOCALBIN)/ginkgo || GOBIN=$(LOCALBIN) CGO_ENABLED=0 go install -ldflags="-s -w" github.com/onsi/ginkgo/v2/ginkgo
.PHONY: kind
kind: $(KIND) ## Download kind locally if necessary.
$(KIND): $(LOCALBIN)
test -s $(LOCALBIN)/kind || GOBIN=$(LOCALBIN) go install sigs.k8s.io/kind/cmd/kind@v0.14.0
test -s $(LOCALBIN)/kind || GOBIN=$(LOCALBIN) CGO_ENABLED=0 go install -ldflags="-s -w" sigs.k8s.io/kind/cmd/kind@v0.14.0
.PHONY: controller-gen
controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary.
$(CONTROLLER_GEN): $(LOCALBIN)
test -s $(LOCALBIN)/controller-gen || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.1
test -s $(LOCALBIN)/controller-gen || GOBIN=$(LOCALBIN) CGO_ENABLED=0 go install -ldflags="-s -w" sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.1
.PHONY: golangci-lint
golangci-lint: $(GOLANGCI_LINT) ## Download golangci-lint locally if necessary.
$(GOLANGCI_LINT): $(LOCALBIN)
test -s $(LOCALBIN)/golangci-lint || GOBIN=$(LOCALBIN) go install github.com/golangci/golangci-lint/cmd/golangci-lint@v1.62.2
test -s $(LOCALBIN)/golangci-lint || GOBIN=$(LOCALBIN) CGO_ENABLED=0 go install -ldflags="-s -w" github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.0.2
.PHONY: apidocs-gen
apidocs-gen: $(APIDOCS_GEN) ## Download crdoc locally if necessary.
$(APIDOCS_GEN): $(LOCALBIN)
test -s $(LOCALBIN)/crdoc || GOBIN=$(LOCALBIN) go install fybrik.io/crdoc@latest
test -s $(LOCALBIN)/crdoc || GOBIN=$(LOCALBIN) CGO_ENABLED=0 go install -ldflags="-s -w" fybrik.io/crdoc@latest
.PHONY: envtest
envtest: $(ENVTEST) ## Download envtest-setup locally if necessary.
$(ENVTEST): $(LOCALBIN)
test -s $(LOCALBIN)/setup-envtest || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@$(ENVTEST_VERSION)
test -s $(LOCALBIN)/setup-envtest || GOBIN=$(LOCALBIN) CGO_ENABLED=0 go install -ldflags="-s -w" sigs.k8s.io/controller-runtime/tools/setup-envtest@$(ENVTEST_VERSION)
##@ Development
@@ -129,9 +129,18 @@ webhook: controller-gen yq
$(YQ) -i 'map(.clientConfig.service.namespace |= "{{ .Release.Namespace }}")' ./charts/kamaji/controller-gen/validating-webhook.yaml
crds: controller-gen yq
# kamaji chart
$(CONTROLLER_GEN) crd webhook paths="./..." output:stdout | $(YQ) 'select(documentIndex == 0)' > ./charts/kamaji/crds/kamaji.clastix.io_datastores.yaml
$(CONTROLLER_GEN) crd webhook paths="./..." output:stdout | $(YQ) 'select(documentIndex == 1)' > ./charts/kamaji/crds/kamaji.clastix.io_tenantcontrolplanes.yaml
$(CONTROLLER_GEN) crd webhook paths="./..." output:stdout | $(YQ) 'select(documentIndex == 1)' > ./charts/kamaji/crds/kamaji.clastix.io_kubeconfiggenerators.yaml
$(CONTROLLER_GEN) crd webhook paths="./..." output:stdout | $(YQ) 'select(documentIndex == 2)' > ./charts/kamaji/crds/kamaji.clastix.io_tenantcontrolplanes.yaml
$(YQ) -i '. *n load("./charts/kamaji/controller-gen/crd-conversion.yaml")' ./charts/kamaji/crds/kamaji.clastix.io_tenantcontrolplanes.yaml
# kamaji-crds chart
cp ./charts/kamaji/controller-gen/crd-conversion.yaml ./charts/kamaji-crds/hack/crd-conversion.yaml
$(YQ) '.spec' ./charts/kamaji/crds/kamaji.clastix.io_datastores.yaml > ./charts/kamaji-crds/hack/kamaji.clastix.io_datastores_spec.yaml
$(YQ) '.spec' ./charts/kamaji/crds/kamaji.clastix.io_tenantcontrolplanes.yaml > ./charts/kamaji-crds/hack/kamaji.clastix.io_tenantcontrolplanes_spec.yaml
$(YQ) '.spec' ./charts/kamaji/crds/kamaji.clastix.io_kubeconfiggenerators.yaml > ./charts/kamaji-crds/hack/kamaji.clastix.io_kubeconfiggenerators_spec.yaml
$(YQ) -i '.conversion.webhook.clientConfig.service.name = "{{ .Values.kamajiService }}"' ./charts/kamaji-crds/hack/kamaji.clastix.io_tenantcontrolplanes_spec.yaml
$(YQ) -i '.conversion.webhook.clientConfig.service.namespace = "{{ .Values.kamajiNamespace }}"' ./charts/kamaji-crds/hack/kamaji.clastix.io_tenantcontrolplanes_spec.yaml
manifests: rbac webhook crds ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
@@ -144,11 +153,10 @@ golint: golangci-lint ## Linting the code according to the styling guide.
## Run unit tests (all tests except E2E).
.PHONY: test
test: envtest ginkgo
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(LOCALBIN) -p path)" $(GINKGO) -r -v --trace \
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(LOCALBIN) -p path)" $(GINKGO) -r -v -coverprofile cover.out --trace \
./api/... \
./cmd/... \
./internal/... \
-coverprofile cover.out
_datastore-mysql:
$(MAKE) NAME=$(NAME) -C deploy/kine/mysql mariadb
@@ -238,11 +246,12 @@ load: kind
##@ e2e
.PHONY: env
env:
@make -C deploy/kind kind ingress-nginx
env: kind
$(KIND) create cluster --name kamaji
.PHONY: e2e
e2e: env build load helm ginkgo cert-manager ## Create a KinD cluster, install Kamaji on it and run the test suite.
$(HELM) upgrade --debug --install kamaji-crds ./charts/kamaji-crds --create-namespace --namespace kamaji-system
$(HELM) repo add clastix https://clastix.github.io/charts
$(HELM) dependency build ./charts/kamaji
$(HELM) upgrade --debug --install kamaji ./charts/kamaji --create-namespace --namespace kamaji-system --set "image.tag=$(VERSION)" --set "image.pullPolicy=Never" --set "telemetry.disabled=true"
@@ -251,6 +260,15 @@ e2e: env build load helm ginkgo cert-manager ## Create a KinD cluster, install K
##@ Document
CAPI_URL = https://github.com/clastix/cluster-api-control-plane-provider-kamaji.git
CAPI_DIR := $(shell mktemp -d)
CRDS_DIR := $(shell mktemp -d)
.PHONY: apidoc
apidoc: apidocs-gen
$(APIDOCS_GEN) crdoc --resources charts/kamaji/crds --output docs/content/reference/api.md --template docs/templates/reference-cr.tmpl
@cp charts/kamaji/crds/*.yaml $(CRDS_DIR)
@git clone $(CAPI_URL) $(CAPI_DIR)
@cp $(CAPI_DIR)/config/crd/bases/*.yaml $(CRDS_DIR)
@rm -rf $(CAPI_DIR)
$(APIDOCS_GEN) crdoc --resources $(CRDS_DIR) --output docs/content/reference/api.md --template docs/templates/reference-cr.tmpl
@rm -rf $(CRDS_DIR)

NOTICE Normal file

@@ -0,0 +1,15 @@
Kamaji — The Kubernetes Control Plane Manager: copyright 2022 Clastix Labs
Licensed under the Apache License, Version 2.0: https://kamaji.clastix.io
This product includes software developed by Clastix Labs and the Kamaji open-source community under the Apache License, Version 2.0.
Kamaji powers Kubernetes Control Planes at scale for companies worldwide.
We encourage all commercial products and services using Kamaji to acknowledge this publicly and join our growing ecosystem of adopters.
You can support the Kamaji community by:
- Listing Kamaji in your product's "Open Source Credits" or similar section
- Adding your organization to the Adopters list on GitHub: https://github.com/clastix/kamaji/blob/master/ADOPTERS.md
- Mentioning Kamaji on your company or product website
Public acknowledgement strengthens the open-source ecosystem and helps ensure the sustainability of the project you rely on.


@@ -7,6 +7,15 @@ plugins:
projectName: operator
repo: github.com/clastix/kamaji
resources:
- api:
crdVersion: v1
namespaced: false
controller: true
domain: clastix.io
group: kamaji
kind: KubeconfigGenerator
path: github.com/clastix/kamaji/api/v1alpha1
version: v1alpha1
- api:
crdVersion: v1
namespaced: true


@@ -123,6 +123,8 @@ Since Kamaji is just focusing on the Control Plane a [Kamaji's Cluster API Contr
- YouTube ▶️ [Rancher & Kamaji: solving multitenancy challenges in the Kubernetes world](https://www.youtube.com/watch?v=VXHNrMmlF8U)
- YouTube ▶️ [Enabling Self-Service Kubernetes clusters with Kamaji and Paralus](https://www.youtube.com/watch?v=JWA2LwZazM0)
- YouTube ▶️ [Hosted Control Plane on Kubernetes (HPC) with Kamaji and K0mostron by Hervé Leclerc, ALTER WAY](https://www.youtube.com/watch?v=vmRdE2ngn78)
- Medium 📖 [Set up Virtual Control Planes with Kamaji on Minikube, by Ben Soer](https://medium.com/@bensoer/set-up-virtual-control-planes-with-kamaji-on-minikube-a540be0275aa)
- Hands-On tutorial 📖 [How to build your own managed Kubernetes service on Hetzner Cloud, by Hans Jörg Wieland](https://wieland.tech/blog/kamaji-cluster-api-and-etcd)
### 🏷️ Versioning


@@ -9,6 +9,7 @@ import (
)
//+kubebuilder:validation:Enum=etcd;MySQL;PostgreSQL;NATS
//+kubebuilder:validation:XValidation:rule="self == oldSelf",message="Datastore driver is immutable"
type Driver string
@@ -24,6 +25,13 @@ var (
type Endpoints []string
// DataStoreSpec defines the desired state of DataStore.
// +kubebuilder:validation:XValidation:rule="(self.driver == \"etcd\") ? (self.tlsConfig != null && (has(self.tlsConfig.certificateAuthority.privateKey.secretReference) || has(self.tlsConfig.certificateAuthority.privateKey.content))) : true", message="certificateAuthority privateKey must have secretReference or content when driver is etcd"
// +kubebuilder:validation:XValidation:rule="(self.driver == \"etcd\") ? (self.tlsConfig != null && (has(self.tlsConfig.clientCertificate.certificate.secretReference) || has(self.tlsConfig.clientCertificate.certificate.content))) : true", message="clientCertificate must have secretReference or content when driver is etcd"
// +kubebuilder:validation:XValidation:rule="(self.driver == \"etcd\") ? (self.tlsConfig != null && (has(self.tlsConfig.clientCertificate.privateKey.secretReference) || has(self.tlsConfig.clientCertificate.privateKey.content))) : true", message="clientCertificate privateKey must have secretReference or content when driver is etcd"
// +kubebuilder:validation:XValidation:rule="(self.driver != \"etcd\" && has(self.tlsConfig) && has(self.tlsConfig.clientCertificate)) ? (((has(self.tlsConfig.clientCertificate.certificate.secretReference) || has(self.tlsConfig.clientCertificate.certificate.content)))) : true", message="When driver is not etcd and tlsConfig exists, clientCertificate must be null or contain valid content"
// +kubebuilder:validation:XValidation:rule="(self.driver != \"etcd\" && has(self.basicAuth)) ? ((has(self.basicAuth.username.secretReference) || has(self.basicAuth.username.content))) : true", message="When driver is not etcd and basicAuth exists, username must have secretReference or content"
// +kubebuilder:validation:XValidation:rule="(self.driver != \"etcd\" && has(self.basicAuth)) ? ((has(self.basicAuth.password.secretReference) || has(self.basicAuth.password.content))) : true", message="When driver is not etcd and basicAuth exists, password must have secretReference or content"
// +kubebuilder:validation:XValidation:rule="(self.driver != \"etcd\") ? (has(self.tlsConfig) || has(self.basicAuth)) : true", message="When driver is not etcd, either tlsConfig or basicAuth must be provided"
type DataStoreSpec struct {
// The driver to use to connect to the shared datastore.
Driver Driver `json:"driver"`


@@ -52,12 +52,14 @@ func (d *DatastoreUsedSecret) ExtractValue() client.IndexerFunc {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.CertificateAuthority.PrivateKey.SecretRef))
}
if ds.Spec.TLSConfig.ClientCertificate.Certificate.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.ClientCertificate.Certificate.SecretRef))
}
if ds.Spec.TLSConfig.ClientCertificate != nil {
if ds.Spec.TLSConfig.ClientCertificate.Certificate.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.ClientCertificate.Certificate.SecretRef))
}
if ds.Spec.TLSConfig.ClientCertificate.PrivateKey.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.ClientCertificate.PrivateKey.SecretRef))
if ds.Spec.TLSConfig.ClientCertificate.PrivateKey.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.ClientCertificate.PrivateKey.SecretRef))
}
}
}


@@ -0,0 +1,91 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
var (
ManagedByLabel = "kamaji.clastix.io/managed-by"
ManagedForLabel = "kamaji.clastix.io/managed-for"
)
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp",description="Age"
//+kubebuilder:metadata:annotations={"cert-manager.io/inject-ca-from=kamaji-system/kamaji-serving-cert"}
//+kubebuilder:resource:scope=Cluster,shortName=kc,categories=kamaji
// KubeconfigGenerator is the Schema for the kubeconfiggenerators API.
type KubeconfigGenerator struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec KubeconfigGeneratorSpec `json:"spec,omitempty"`
Status KubeconfigGeneratorStatus `json:"status,omitempty"`
}
// CompoundValue allows defining either a static or a dynamic value.
// The options are mutually exclusive; exactly one must be set.
// +kubebuilder:validation:XValidation:rule="(has(self.stringValue) || has(self.fromDefinition)) && !(has(self.stringValue) && has(self.fromDefinition))",message="Either stringValue or fromDefinition must be set, but not both."
type CompoundValue struct {
// StringValue is a static string value.
StringValue string `json:"stringValue,omitempty"`
// FromDefinition is used to generate a dynamic value;
// it uses dot notation to access fields from the referenced TenantControlPlane object:
// e.g.: metadata.name
FromDefinition string `json:"fromDefinition,omitempty"`
}
type KubeconfigGeneratorSpec struct {
// NamespaceSelector is used to filter Namespaces from which the generator should extract TenantControlPlane objects.
NamespaceSelector metav1.LabelSelector `json:"namespaceSelector,omitempty"`
// TenantControlPlaneSelector is used to filter the TenantControlPlane objects that should be addressed by the generator.
TenantControlPlaneSelector metav1.LabelSelector `json:"tenantControlPlaneSelector,omitempty"`
// Groups resolves to a set of strings used to populate the x509 organisations field.
// These will be recognised by Kubernetes as user groups.
Groups []CompoundValue `json:"groups,omitempty"`
// User resolves to a string to identify the client, assigned to the x509 Common Name field.
User CompoundValue `json:"user"`
// ControlPlaneEndpointFrom is the key used to extract the Tenant Control Plane endpoint that must be used by the generator.
// The targeted Secret is the `${TCP}-admin-kubeconfig` one, defaulting to `admin.svc`.
//+kubebuilder:default="admin.svc"
ControlPlaneEndpointFrom string `json:"controlPlaneEndpointFrom,omitempty"`
}
type KubeconfigGeneratorStatusError struct {
// Resource is the Namespaced name of the errored resource.
//+kubebuilder:validation:Required
Resource string `json:"resource"`
// Message is the error message recorded upon the last generator run.
//+kubebuilder:validation:Required
Message string `json:"message"`
}
// KubeconfigGeneratorStatus defines the observed state of KubeconfigGenerator.
type KubeconfigGeneratorStatus struct {
// Resources is the total number of targeted TenantControlPlane objects.
//+kubebuilder:default=0
Resources int `json:"resources"`
// AvailableResources is the number of successfully generated resources.
// If this value differs from Resources, check the Errors field.
//+kubebuilder:default=0
AvailableResources int `json:"availableResources"`
// Errors is the list of failed kubeconfig generations.
Errors []KubeconfigGeneratorStatusError `json:"errors,omitempty"`
}
//+kubebuilder:object:root=true
// KubeconfigGeneratorList contains a list of TenantControlPlane.
type KubeconfigGeneratorList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []KubeconfigGenerator `json:"items"`
}
func init() {
SchemeBuilder.Register(&KubeconfigGenerator{}, &KubeconfigGeneratorList{})
}


@@ -0,0 +1,10 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
const (
// PausedReconciliationAnnotation is an annotation that can be applied to
// Tenant Control Plane objects to prevent the controller from processing such a resource.
PausedReconciliationAnnotation = "kamaji.clastix.io/paused"
)


@@ -8,6 +8,7 @@ import (
"fmt"
"net"
"strconv"
"strings"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
@@ -95,3 +96,17 @@ func getLoadBalancerAddress(ingress []corev1.LoadBalancerIngress) (string, error
return "", kamajierrors.MissingValidIPError{}
}
func (in *TenantControlPlane) normalizeNamespaceName() string {
// The dash character (-) must be replaced with an underscore, since PostgreSQL complains about it:
// https://github.com/clastix/kamaji/issues/328
return strings.ReplaceAll(fmt.Sprintf("%s_%s", in.GetNamespace(), in.GetName()), "-", "_")
}
func (in *TenantControlPlane) GetDefaultDatastoreUsername() string {
return in.normalizeNamespaceName()
}
func (in *TenantControlPlane) GetDefaultDatastoreSchema() string {
return in.normalizeNamespaceName()
}


@@ -122,6 +122,12 @@ type ExternalKubernetesObjectStatus struct {
LastUpdate metav1.Time `json:"lastUpdate,omitempty"`
}
type KonnectivityAgentStatus struct {
ExternalKubernetesObjectStatus `json:",inline"`
Mode KonnectivityAgentMode `json:"mode,omitempty"`
}
// KonnectivityStatus defines the status of Konnectivity as Addon.
type KonnectivityStatus struct {
Enabled bool `json:"enabled"`
@@ -130,7 +136,7 @@ type KonnectivityStatus struct {
Kubeconfig KubeconfigStatus `json:"kubeconfig,omitempty"`
ServiceAccount ExternalKubernetesObjectStatus `json:"sa,omitempty"`
ClusterRoleBinding ExternalKubernetesObjectStatus `json:"clusterrolebinding,omitempty"`
Agent ExternalKubernetesObjectStatus `json:"agent,omitempty"`
Agent KonnectivityAgentStatus `json:"agent,omitempty"`
Service KubernetesServiceStatus `json:"service,omitempty"`
}
@@ -183,11 +189,14 @@ type KubernetesStatus struct {
Ingress *KubernetesIngressStatus `json:"ingress,omitempty"`
}
// +kubebuilder:validation:Enum=Provisioning;CertificateAuthorityRotating;Upgrading;Migrating;Ready;NotReady
// +kubebuilder:validation:Enum=Unknown;Provisioning;CertificateAuthorityRotating;Upgrading;Migrating;Ready;NotReady;Sleeping;WriteLimited
type KubernetesVersionStatus string
var (
VersionUnknown KubernetesVersionStatus = "Unknown"
VersionProvisioning KubernetesVersionStatus = "Provisioning"
VersionSleeping KubernetesVersionStatus = "Sleeping"
VersionWriteLimited KubernetesVersionStatus = "WriteLimited"
VersionCARotating KubernetesVersionStatus = "CertificateAuthorityRotating"
VersionUpgrading KubernetesVersionStatus = "Upgrading"
VersionMigrating KubernetesVersionStatus = "Migrating"


@@ -67,11 +67,12 @@ const (
type KubeletSpec struct {
// Ordered list of the preferred NodeAddressTypes to use for kubelet connections.
// Default to Hostname, InternalIP, ExternalIP.
//+kubebuilder:default={"Hostname","InternalIP","ExternalIP"}
// Default to InternalIP, ExternalIP, Hostname.
//+kubebuilder:default={"InternalIP","ExternalIP","Hostname"}
//+kubebuilder:validation:MinItems=1
//+listType=set
PreferredAddressTypes []KubeletPreferredAddressType `json:"preferredAddressTypes,omitempty"`
// CGroupFS defines the cgroup driver for Kubelet
// CGroupFS defines the cgroup driver for Kubelet
// https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/
CGroupFS CGroupDriver `json:"cgroupfs,omitempty"`
}
@@ -225,7 +226,9 @@ type KonnectivityServerSpec struct {
// The port which Konnectivity server is listening to.
Port int32 `json:"port"`
// Container image version of the Konnectivity server.
//+kubebuilder:default=v0.28.6
// If left empty, Kamaji will automatically inflect the version from the deployed Tenant Control Plane.
//
// WARNING: for the last cut-off releases, the container image might not be available.
Version string `json:"version,omitempty"`
// Container image used by the Konnectivity server.
//+kubebuilder:default=registry.k8s.io/kas-network-proxy/proxy-server
@@ -235,25 +238,50 @@ type KonnectivityServerSpec struct {
ExtraArgs ExtraArgs `json:"extraArgs,omitempty"`
}
type KonnectivityAgentMode string
var (
KonnectivityAgentModeDaemonSet KonnectivityAgentMode = "DaemonSet"
KonnectivityAgentModeDeployment KonnectivityAgentMode = "Deployment"
)
//+kubebuilder:validation:XValidation:rule="!(self.mode == 'DaemonSet' && has(self.replicas) && self.replicas != 0) && !(self.mode == 'Deployment' && self.replicas == 0)",message="replicas must be 0 when mode is DaemonSet, and greater than 0 when mode is Deployment"
type KonnectivityAgentSpec struct {
// AgentImage defines the container image for Konnectivity's agent.
//+kubebuilder:default=registry.k8s.io/kas-network-proxy/proxy-agent
Image string `json:"image,omitempty"`
// Version for Konnectivity agent.
//+kubebuilder:default=v0.28.6
// If left empty, Kamaji will automatically inflect the version from the deployed Tenant Control Plane.
//
// WARNING: for the last cut-off releases, the container image might not be available.
Version string `json:"version,omitempty"`
// Tolerations for the deployed agent.
// Can be customized to start the konnectivity-agent even if the nodes are not ready or tainted.
//+kubebuilder:default={{key: "CriticalAddonsOnly", operator: "Exists"}}
Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
ExtraArgs ExtraArgs `json:"extraArgs,omitempty"`
// HostNetwork enables the konnectivity agent to use the Host network namespace.
// By enabling this mode, the Agent doesn't need to wait for the CNI initialisation,
// gaining a sort of out-of-band access to nodes for troubleshooting scenarios,
// or when the agent needs direct access to the host network.
//+kubebuilder:default=false
HostNetwork bool `json:"hostNetwork,omitempty"`
// Mode allows specifying the Agent deployment mode: Deployment, or DaemonSet (default).
//+kubebuilder:default="DaemonSet"
//+kubebuilder:validation:Enum=DaemonSet;Deployment
Mode KonnectivityAgentMode `json:"mode,omitempty"`
// Replicas defines the number of replicas when Mode is Deployment.
// Must be 0 if Mode is DaemonSet.
//+kubebuilder:validation:Optional
Replicas int32 `json:"replicas,omitempty"`
}
// KonnectivitySpec defines the spec for Konnectivity.
type KonnectivitySpec struct {
//+kubebuilder:default={version:"v0.28.6",image:"registry.k8s.io/kas-network-proxy/proxy-server",port:8132}
//+kubebuilder:default={image:"registry.k8s.io/kas-network-proxy/proxy-server",port:8132}
KonnectivityServerSpec KonnectivityServerSpec `json:"server,omitempty"`
//+kubebuilder:default={version:"v0.28.6",image:"registry.k8s.io/kas-network-proxy/proxy-agent"}
//+kubebuilder:default={image:"registry.k8s.io/kas-network-proxy/proxy-agent",mode:"DaemonSet"}
KonnectivityAgentSpec KonnectivityAgentSpec `json:"agent,omitempty"`
}
@@ -269,14 +297,36 @@ type AddonsSpec struct {
KubeProxy *AddonSpec `json:"kubeProxy,omitempty"`
}
type Permissions struct {
BlockCreate bool `json:"blockCreation,omitempty"`
BlockUpdate bool `json:"blockUpdate,omitempty"`
BlockDelete bool `json:"blockDeletion,omitempty"`
}
func (p *Permissions) HasAnyLimitation() bool {
if p.BlockCreate || p.BlockUpdate || p.BlockDelete {
return true
}
return false
}
// TenantControlPlaneSpec defines the desired state of TenantControlPlane.
// +kubebuilder:validation:XValidation:rule="!has(oldSelf.dataStore) || has(self.dataStore)", message="unsetting the dataStore is not supported"
// +kubebuilder:validation:XValidation:rule="!has(oldSelf.dataStoreSchema) || has(self.dataStoreSchema)", message="unsetting the dataStoreSchema is not supported"
// +kubebuilder:validation:XValidation:rule="!has(oldSelf.dataStoreUsername) || has(self.dataStoreUsername)", message="unsetting the dataStoreUsername is not supported"
// +kubebuilder:validation:XValidation:rule="!has(self.networkProfile.loadBalancerSourceRanges) || (size(self.networkProfile.loadBalancerSourceRanges) == 0 || self.controlPlane.service.serviceType == 'LoadBalancer')", message="LoadBalancer source ranges are supported only with LoadBalancer service type"
// +kubebuilder:validation:XValidation:rule="!has(self.networkProfile.loadBalancerClass) || self.controlPlane.service.serviceType == 'LoadBalancer'", message="LoadBalancerClass is supported only with LoadBalancer service type"
// +kubebuilder:validation:XValidation:rule="self.controlPlane.service.serviceType != 'LoadBalancer' || (oldSelf.controlPlane.service.serviceType != 'LoadBalancer' && self.controlPlane.service.serviceType == 'LoadBalancer') || has(self.networkProfile.loadBalancerClass) == has(oldSelf.networkProfile.loadBalancerClass)",message="LoadBalancerClass cannot be set or unset at runtime"
type TenantControlPlaneSpec struct {
// WritePermissions allows selecting which operations (create, update, delete) must be blocked:
// by default, all actions are allowed, and the API Server can write to its Datastore.
//
// By blocking all actions, the Tenant Control Plane can enter a Read-Only mode:
// this phase can be used to prevent Datastore quota exhaustion, or for your own business logic
// (e.g.: blocking creation and update, but allowing deletion to "clean up" space).
WritePermissions Permissions `json:"writePermissions,omitempty"`
// DataStore specifies the DataStore that should be used to store the Kubernetes data for the given Tenant Control Plane.
// When Kamaji runs with the default DataStore flag, all empty values will inherit the default value.
// By leaving it empty and running Kamaji with no default DataStore flag, it is possible to achieve automatic assignment to a specific DataStore object.
@@ -289,8 +339,14 @@ type TenantControlPlaneSpec struct {
// to the user to avoid clashes between different TenantControlPlanes. If not set upon creation, Kamaji will default the
// DataStoreSchema by concatenating the namespace and name of the TenantControlPlane.
// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="changing the dataStoreSchema is not supported"
DataStoreSchema string `json:"dataStoreSchema,omitempty"`
ControlPlane ControlPlane `json:"controlPlane"`
DataStoreSchema string `json:"dataStoreSchema,omitempty"`
// DataStoreUsername allows specifying the username for the database (for relational DataStores). This
// value is optional and immutable. Note that Kamaji currently doesn't ensure that DataStoreUsername values are unique. It's up
// to the user to avoid clashes between different TenantControlPlanes. If not set upon creation, Kamaji will default the
// DataStoreUsername by concatenating the namespace and name of the TenantControlPlane.
// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="changing the dataStoreUsername is not supported"
DataStoreUsername string `json:"dataStoreUsername,omitempty"`
ControlPlane ControlPlane `json:"controlPlane"`
// Kubernetes specification for tenant control plane
Kubernetes KubernetesSpec `json:"kubernetes"`
// NetworkProfile specifies how the network is
@@ -304,6 +360,7 @@ type TenantControlPlaneSpec struct {
//+kubebuilder:subresource:scale:specpath=.spec.controlPlane.deployment.replicas,statuspath=.status.kubernetesResources.deployment.replicas,selectorpath=.status.kubernetesResources.deployment.selector
//+kubebuilder:resource:categories=kamaji,shortName=tcp
//+kubebuilder:printcolumn:name="Version",type="string",JSONPath=".spec.kubernetes.version",description="Kubernetes version"
//+kubebuilder:printcolumn:name="Installed Version",type="string",JSONPath=".status.kubernetesResources.version.version",description="The actual installed Kubernetes version from status"
//+kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.kubernetesResources.version.status",description="Status"
//+kubebuilder:printcolumn:name="Control-Plane endpoint",type="string",JSONPath=".status.controlPlaneEndpoint",description="Tenant Control Plane Endpoint (API server)"
//+kubebuilder:printcolumn:name="Kubeconfig",type="string",JSONPath=".status.kubeconfig.admin.secretName",description="Secret which contains admin kubeconfig"

View File

@@ -19,7 +19,7 @@ var _ = Describe("Cluster controller", func() {
)
BeforeEach(func() {
ctx = context.Background() //nolint:fatcontext
ctx = context.Background()
tcp = &TenantControlPlane{
ObjectMeta: metav1.ObjectMeta{
Name: "tcp",

View File

@@ -0,0 +1,177 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
var _ = Describe("Datastores validation test", func() {
var (
ctx context.Context
ds *DataStore
)
BeforeEach(func() {
ctx = context.Background()
ds = &DataStore{
ObjectMeta: metav1.ObjectMeta{
Name: "ds",
Namespace: "default",
},
Spec: DataStoreSpec{},
}
})
AfterEach(func() {
if err := k8sClient.Delete(ctx, ds); err != nil && !apierrors.IsNotFound(err) {
Expect(err).NotTo(HaveOccurred())
}
})
Context("DataStores fields", func() {
It("datastores of type ETCD must have their TLS configurations set correctly", func() {
ds = &DataStore{
ObjectMeta: metav1.ObjectMeta{
Name: "bad-etcd",
},
Spec: DataStoreSpec{
Driver: "etcd",
Endpoints: []string{"etcd-server:2379"},
TLSConfig: &TLSConfig{
CertificateAuthority: CertKeyPair{},
ClientCertificate: &ClientCertificate{},
},
},
}
err := k8sClient.Create(ctx, ds)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("certificateAuthority privateKey must have secretReference or content when driver is etcd"))
})
It("valid ETCD DataStore should be created", func() {
var (
cert = []byte("cert")
key = []byte("privkey")
)
ds = &DataStore{
ObjectMeta: metav1.ObjectMeta{
Name: "good-etcd",
},
Spec: DataStoreSpec{
Driver: "etcd",
Endpoints: []string{"etcd-server:2379"},
TLSConfig: &TLSConfig{
CertificateAuthority: CertKeyPair{
Certificate: ContentRef{
Content: cert,
},
PrivateKey: &ContentRef{
Content: key,
},
},
ClientCertificate: &ClientCertificate{
Certificate: ContentRef{
Content: cert,
},
PrivateKey: ContentRef{
Content: key,
},
},
},
},
}
err := k8sClient.Create(ctx, ds)
Expect(err).To(Not(HaveOccurred()))
})
It("datastores of type PostgreSQL must have either basicAuth or tlsConfig", func() {
ds = &DataStore{
ObjectMeta: metav1.ObjectMeta{
Name: "bad-pg",
},
Spec: DataStoreSpec{
Driver: "PostgreSQL",
Endpoints: []string{"pg-server:5432"},
},
}
err := k8sClient.Create(ctx, ds)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("When driver is not etcd, either tlsConfig or basicAuth must be provided"))
})
It("datastores of type PostgreSQL can have basicAuth", func() {
ds = &DataStore{
ObjectMeta: metav1.ObjectMeta{
Name: "good-pg",
},
Spec: DataStoreSpec{
Driver: "PostgreSQL",
Endpoints: []string{"pg-server:5432"},
BasicAuth: &BasicAuth{
Username: ContentRef{
Content: []byte("postgres"),
},
Password: ContentRef{
Content: []byte("postgres"),
},
},
},
}
err := k8sClient.Create(ctx, ds)
Expect(err).To(Not(HaveOccurred()))
})
It("datastores of type PostgreSQL must have tlsConfig with proper content", func() {
ds = &DataStore{
ObjectMeta: metav1.ObjectMeta{
Name: "bad-pg",
},
Spec: DataStoreSpec{
Driver: "PostgreSQL",
Endpoints: []string{"pg-server:5432"},
TLSConfig: &TLSConfig{
ClientCertificate: &ClientCertificate{},
},
},
}
err := k8sClient.Create(context.Background(), ds)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("When driver is not etcd and tlsConfig exists, clientCertificate must be null or contain valid content"))
})
It("datastores of type PostgreSQL need a proper clientCertificate", func() {
ds = &DataStore{
ObjectMeta: metav1.ObjectMeta{
Name: "good-pg",
},
Spec: DataStoreSpec{
Driver: "PostgreSQL",
Endpoints: []string{"pg-server:5432"},
TLSConfig: &TLSConfig{
ClientCertificate: &ClientCertificate{
Certificate: ContentRef{
Content: []byte("cert"),
},
},
},
},
}
err := k8sClient.Create(context.Background(), ds)
Expect(err).ToNot(HaveOccurred())
})
})
})

View File

@@ -289,6 +289,21 @@ func (in *ClientCertificate) DeepCopy() *ClientCertificate {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CompoundValue) DeepCopyInto(out *CompoundValue) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CompoundValue.
func (in *CompoundValue) DeepCopy() *CompoundValue {
if in == nil {
return nil
}
out := new(CompoundValue)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ContentRef) DeepCopyInto(out *ContentRef) {
*out = *in
@@ -808,6 +823,22 @@ func (in *KonnectivityAgentSpec) DeepCopy() *KonnectivityAgentSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KonnectivityAgentStatus) DeepCopyInto(out *KonnectivityAgentStatus) {
*out = *in
in.ExternalKubernetesObjectStatus.DeepCopyInto(&out.ExternalKubernetesObjectStatus)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KonnectivityAgentStatus.
func (in *KonnectivityAgentStatus) DeepCopy() *KonnectivityAgentStatus {
if in == nil {
return nil
}
out := new(KonnectivityAgentStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KonnectivityConfigMap) DeepCopyInto(out *KonnectivityConfigMap) {
*out = *in
@@ -935,6 +966,123 @@ func (in *KubeadmPhasesStatus) DeepCopy() *KubeadmPhasesStatus {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubeconfigGenerator) DeepCopyInto(out *KubeconfigGenerator) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
in.Status.DeepCopyInto(&out.Status)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeconfigGenerator.
func (in *KubeconfigGenerator) DeepCopy() *KubeconfigGenerator {
if in == nil {
return nil
}
out := new(KubeconfigGenerator)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *KubeconfigGenerator) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubeconfigGeneratorList) DeepCopyInto(out *KubeconfigGeneratorList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]KubeconfigGenerator, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeconfigGeneratorList.
func (in *KubeconfigGeneratorList) DeepCopy() *KubeconfigGeneratorList {
if in == nil {
return nil
}
out := new(KubeconfigGeneratorList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *KubeconfigGeneratorList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubeconfigGeneratorSpec) DeepCopyInto(out *KubeconfigGeneratorSpec) {
*out = *in
in.NamespaceSelector.DeepCopyInto(&out.NamespaceSelector)
in.TenantControlPlaneSelector.DeepCopyInto(&out.TenantControlPlaneSelector)
if in.Groups != nil {
in, out := &in.Groups, &out.Groups
*out = make([]CompoundValue, len(*in))
copy(*out, *in)
}
out.User = in.User
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeconfigGeneratorSpec.
func (in *KubeconfigGeneratorSpec) DeepCopy() *KubeconfigGeneratorSpec {
if in == nil {
return nil
}
out := new(KubeconfigGeneratorSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubeconfigGeneratorStatus) DeepCopyInto(out *KubeconfigGeneratorStatus) {
*out = *in
if in.Errors != nil {
in, out := &in.Errors, &out.Errors
*out = make([]KubeconfigGeneratorStatusError, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeconfigGeneratorStatus.
func (in *KubeconfigGeneratorStatus) DeepCopy() *KubeconfigGeneratorStatus {
if in == nil {
return nil
}
out := new(KubeconfigGeneratorStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubeconfigGeneratorStatusError) DeepCopyInto(out *KubeconfigGeneratorStatusError) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KubeconfigGeneratorStatusError.
func (in *KubeconfigGeneratorStatusError) DeepCopy() *KubeconfigGeneratorStatusError {
if in == nil {
return nil
}
out := new(KubeconfigGeneratorStatusError)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubeconfigStatus) DeepCopyInto(out *KubeconfigStatus) {
*out = *in
@@ -1137,6 +1285,21 @@ func (in *NetworkProfileSpec) DeepCopy() *NetworkProfileSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Permissions) DeepCopyInto(out *Permissions) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Permissions.
func (in *Permissions) DeepCopy() *Permissions {
if in == nil {
return nil
}
out := new(Permissions)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PublicKeyPrivateKeyPairStatus) DeepCopyInto(out *PublicKeyPrivateKeyPairStatus) {
*out = *in
@@ -1301,6 +1464,7 @@ func (in *TenantControlPlaneList) DeepCopyObject() runtime.Object {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *TenantControlPlaneSpec) DeepCopyInto(out *TenantControlPlaneSpec) {
*out = *in
out.WritePermissions = in.WritePermissions
in.ControlPlane.DeepCopyInto(&out.ControlPlane)
in.Kubernetes.DeepCopyInto(&out.Kubernetes)
in.NetworkProfile.DeepCopyInto(&out.NetworkProfile)

View File

@@ -0,0 +1,28 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# Helm source files
README.md.gotmpl
.helmignore
# Build tools
Makefile

View File

@@ -0,0 +1,37 @@
apiVersion: v2
appVersion: latest
description: Kamaji is the Hosted Control Plane Manager for Kubernetes.
home: https://github.com/clastix/kamaji
icon: https://github.com/clastix/kamaji/raw/master/assets/logo-colored.png
maintainers:
- email: dario@tranchitella.eu
name: Dario Tranchitella
url: https://clastix.io
- email: me@bsctl.io
name: Adriano Pezzuto
url: https://clastix.io
name: kamaji-crds
sources:
- https://github.com/clastix/kamaji
type: application
version: 0.0.0+latest
annotations:
artifacthub.io/crds: |
- kind: TenantControlPlane
version: v1alpha1
name: tenantcontrolplanes.kamaji.clastix.io
displayName: TenantControlPlane
description: TenantControlPlane defines the desired state for a Control Plane backed by Kamaji.
- kind: DataStore
version: v1alpha1
name: datastores.kamaji.clastix.io
displayName: DataStore
description: DataStore holds all the details required to communicate with a Datastore, such as etcd, MySQL, PostgreSQL, and NATS.
artifacthub.io/links: |
- name: CLASTIX
url: https://clastix.io
- name: support
url: https://clastix.io/support
artifacthub.io/changes: |
- kind: added
description: First commit

View File

@@ -0,0 +1,9 @@
docs: HELMDOCS_VERSION := v1.8.1
docs: docker
@docker run --rm -v "$$(pwd):/helm-docs" -u $$(id -u) jnorwood/helm-docs:$(HELMDOCS_VERSION)
docker:
@hash docker 2>/dev/null || {\
echo "You need docker" &&\
exit 1;\
}

View File

@@ -0,0 +1,2 @@
Kamaji Custom Resource Definitions have been installed properly:
you can proceed to upgrade your Kamaji operator instance.

View File

@@ -0,0 +1,66 @@
# kamaji-crds
![Version: 0.0.0+latest](https://img.shields.io/badge/Version-0.0.0+latest-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: latest](https://img.shields.io/badge/AppVersion-latest-informational?style=flat-square)
Kamaji is the Hosted Control Plane Manager for Kubernetes.
## Maintainers
| Name | Email | Url |
| ---- | ------ | --- |
| Dario Tranchitella | <dario@tranchitella.eu> | <https://clastix.io> |
| Adriano Pezzuto | <me@bsctl.io> | <https://clastix.io> |
## Source Code
* <https://github.com/clastix/kamaji>
[Kamaji](https://github.com/clastix/kamaji) Custom Resource Definitions packaged as Helm Charts.
## How to use this chart
Add `clastix` Helm repository:
helm repo add clastix https://clastix.github.io/charts
Install the Chart with the release name `kamaji-crds`:
helm upgrade --install --namespace kamaji-system --create-namespace kamaji-crds clastix/kamaji-crds
Show the status:
helm status kamaji-crds -n kamaji-system
Upgrade the Chart:
helm upgrade kamaji-crds -n kamaji-system clastix/kamaji-crds
Uninstall the Chart:
helm uninstall kamaji-crds -n kamaji-system
## Customize the installation
There are two methods for specifying overrides of values during Chart installation: `--values` and `--set`.
The `--values` option is the preferred method because it allows you to keep your overrides in a YAML file, rather than specifying them all on the command line. Create a copy of the YAML file `values.yaml` and add your overrides to it.
Specify your overrides file when you install the Chart:
helm upgrade kamaji-crds --install --namespace kamaji-system --create-namespace clastix/kamaji-crds --values myvalues.yaml
The values in your overrides file `myvalues.yaml` will override their counterparts in the Chart's values.yaml file. Any values in `values.yaml` that weren't overridden will keep their defaults.
If you only need to make minor customizations, you can specify them on the command line by using the `--set` option. For example:
helm upgrade kamaji-crds --install --namespace kamaji-system --create-namespace clastix/kamaji-crds --set kamajiCertificateName=kamaji
## Values
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| fullnameOverride | string | `""` | Overrides the full name of the resources created by the chart. |
| kamajiCertificateName | string | `"kamaji-serving-cert"` | The cert-manager Certificate resource name, holding the Certificate Authority for webhooks. |
| kamajiNamespace | string | `"kamaji-system"` | The namespace where Kamaji has been installed: required to inject the Certificate Authority for cert-manager. |
| kamajiService | string | `"kamaji-webhook-service"` | The Kamaji webhook Service name. |
| nameOverride | string | `""` | Overrides the name of the chart for resource naming purposes. |
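For instance, a hypothetical `myvalues.yaml` overriding the defaults listed above could look like:

```yaml
# myvalues.yaml -- hypothetical override file
kamajiNamespace: kamaji-system
kamajiService: kamaji-webhook-service
kamajiCertificateName: kamaji
```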

View File

@@ -0,0 +1,54 @@
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}
{{ template "chart.badgesSection" . }}
{{ template "chart.description" . }}
{{ template "chart.maintainersSection" . }}
{{ template "chart.sourcesSection" . }}
{{ template "chart.requirementsSection" . }}
[Kamaji](https://github.com/clastix/kamaji) Custom Resource Definitions packaged as Helm Charts.
## How to use this chart
Add `clastix` Helm repository:
helm repo add clastix https://clastix.github.io/charts
Install the Chart with the release name `kamaji-crds`:
helm upgrade --install --namespace kamaji-system --create-namespace kamaji-crds clastix/kamaji-crds
Show the status:
helm status kamaji-crds -n kamaji-system
Upgrade the Chart:
helm upgrade kamaji-crds -n kamaji-system clastix/kamaji-crds
Uninstall the Chart:
helm uninstall kamaji-crds -n kamaji-system
## Customize the installation
There are two methods for specifying overrides of values during Chart installation: `--values` and `--set`.
The `--values` option is the preferred method because it allows you to keep your overrides in a YAML file, rather than specifying them all on the command line. Create a copy of the YAML file `values.yaml` and add your overrides to it.
Specify your overrides file when you install the Chart:
helm upgrade kamaji-crds --install --namespace kamaji-system --create-namespace clastix/kamaji-crds --values myvalues.yaml
The values in your overrides file `myvalues.yaml` will override their counterparts in the Chart's values.yaml file. Any values in `values.yaml` that weren't overridden will keep their defaults.
If you only need to make minor customizations, you can specify them on the command line by using the `--set` option. For example:
helm upgrade kamaji-crds --install --namespace kamaji-system --create-namespace clastix/kamaji-crds --set kamajiCertificateName=kamaji
{{ template "chart.valuesSection" . }}

View File

@@ -0,0 +1,11 @@
spec:
conversion:
strategy: Webhook
webhook:
clientConfig:
service:
name: kamaji-webhook-service
namespace: kamaji-system
path: /convert
conversionReviewVersions:
- v1

View File

@@ -0,0 +1,288 @@
group: kamaji.clastix.io
names:
kind: DataStore
listKind: DataStoreList
plural: datastores
singular: datastore
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Kamaji data store driver
jsonPath: .spec.driver
name: Driver
type: string
- description: Age
jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: DataStore is the Schema for the datastores API.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
description: DataStoreSpec defines the desired state of DataStore.
properties:
basicAuth:
description: |-
In case of authentication enabled for the given data store, specifies the username and password pair.
This value is optional.
properties:
password:
properties:
content:
description: |-
Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: |-
Name of the key for the given Secret reference where the content is stored.
This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
username:
properties:
content:
description: |-
Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: |-
Name of the key for the given Secret reference where the content is stored.
This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- password
- username
type: object
driver:
description: The driver to use to connect to the shared datastore.
enum:
- etcd
- MySQL
- PostgreSQL
- NATS
type: string
x-kubernetes-validations:
- message: Datastore driver is immutable
rule: self == oldSelf
endpoints:
description: |-
List of the endpoints to connect to the shared datastore.
No need for protocol, just bare IP/FQDN and port.
items:
type: string
minItems: 1
type: array
tlsConfig:
description: |-
Defines the TLS/SSL configuration required to connect to the data store in a secure way.
This value is optional.
properties:
certificateAuthority:
description: |-
Retrieve the Certificate Authority certificate and private key, such as bare content of the file, or a SecretReference.
The key reference is required since etcd authentication is based on certificates, and Kamaji is responsible in creating this.
properties:
certificate:
properties:
content:
description: |-
Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: |-
Name of the key for the given Secret reference where the content is stored.
This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
privateKey:
properties:
content:
description: |-
Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: |-
Name of the key for the given Secret reference where the content is stored.
This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- certificate
type: object
clientCertificate:
description: Specifies the SSL/TLS key and private key pair used to connect to the data store.
properties:
certificate:
properties:
content:
description: |-
Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: |-
Name of the key for the given Secret reference where the content is stored.
This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
privateKey:
properties:
content:
description: |-
Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: |-
Name of the key for the given Secret reference where the content is stored.
This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- certificate
- privateKey
type: object
required:
- certificateAuthority
type: object
required:
- driver
- endpoints
type: object
x-kubernetes-validations:
- message: certificateAuthority privateKey must have secretReference or content when driver is etcd
rule: '(self.driver == "etcd") ? (self.tlsConfig != null && (has(self.tlsConfig.certificateAuthority.privateKey.secretReference) || has(self.tlsConfig.certificateAuthority.privateKey.content))) : true'
- message: clientCertificate must have secretReference or content when driver is etcd
rule: '(self.driver == "etcd") ? (self.tlsConfig != null && (has(self.tlsConfig.clientCertificate.certificate.secretReference) || has(self.tlsConfig.clientCertificate.certificate.content))) : true'
- message: clientCertificate privateKey must have secretReference or content when driver is etcd
rule: '(self.driver == "etcd") ? (self.tlsConfig != null && (has(self.tlsConfig.clientCertificate.privateKey.secretReference) || has(self.tlsConfig.clientCertificate.privateKey.content))) : true'
- message: When driver is not etcd and tlsConfig exists, clientCertificate must be null or contain valid content
rule: '(self.driver != "etcd" && has(self.tlsConfig) && has(self.tlsConfig.clientCertificate)) ? (((has(self.tlsConfig.clientCertificate.certificate.secretReference) || has(self.tlsConfig.clientCertificate.certificate.content)))) : true'
- message: When driver is not etcd and basicAuth exists, username must have secretReference or content
rule: '(self.driver != "etcd" && has(self.basicAuth)) ? ((has(self.basicAuth.username.secretReference) || has(self.basicAuth.username.content))) : true'
- message: When driver is not etcd and basicAuth exists, password must have secretReference or content
rule: '(self.driver != "etcd" && has(self.basicAuth)) ? ((has(self.basicAuth.password.secretReference) || has(self.basicAuth.password.content))) : true'
- message: When driver is not etcd, either tlsConfig or basicAuth must be provided
rule: '(self.driver != "etcd") ? (has(self.tlsConfig) || has(self.basicAuth)) : true'
status:
description: DataStoreStatus defines the observed state of DataStore.
properties:
usedBy:
description: List of the Tenant Control Planes, namespaced named, using this data store.
items:
type: string
type: array
type: object
type: object
served: true
storage: true
subresources:
status: {}
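Taken together, the CEL rules above admit, for example, a PostgreSQL DataStore authenticated via basicAuth. A sketch (endpoint and Secret names hypothetical):

```yaml
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
  name: postgres-default              # hypothetical; DataStore is cluster-scoped
spec:
  driver: PostgreSQL
  endpoints:
    - postgres.kamaji-system.svc:5432 # bare FQDN and port, no protocol
  basicAuth:
    username:
      content: cG9zdGdyZXM=           # base64("postgres"); takes precedence over secretReference
    password:
      secretReference:
        name: postgres-credentials    # hypothetical Secret
        namespace: kamaji-system
        keyPath: password
```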

View File

@@ -0,0 +1,214 @@
group: kamaji.clastix.io
names:
categories:
- kamaji
kind: KubeconfigGenerator
listKind: KubeconfigGeneratorList
plural: kubeconfiggenerators
shortNames:
- kc
singular: kubeconfiggenerator
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Age
jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: KubeconfigGenerator is the Schema for the kubeconfiggenerators API.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
properties:
controlPlaneEndpointFrom:
default: admin.svc
description: |-
ControlPlaneEndpointFrom is the key used to extract the Tenant Control Plane endpoint that must be used by the generator.
The targeted Secret is the `${TCP}-admin-kubeconfig` one, defaulting to `admin.svc`.
type: string
groups:
description: |-
Groups resolves to a set of strings used to populate the x509 organisations field.
These will be recognised by Kubernetes as user groups.
items:
description: |-
CompoundValue allows defining a static or a dynamic value.
Options are mutually exclusive: only one should be set.
properties:
fromDefinition:
description: |-
FromDefinition is used to generate a dynamic value,
it uses the dot notation to access fields from the referenced TenantControlPlane object:
e.g.: metadata.name
type: string
stringValue:
description: StringValue is a static string value.
type: string
type: object
x-kubernetes-validations:
- message: Either stringValue or fromDefinition must be set, but not both.
rule: (has(self.stringValue) || has(self.fromDefinition)) && !(has(self.stringValue) && has(self.fromDefinition))
type: array
namespaceSelector:
description: NamespaceSelector is used to filter Namespaces from which the generator should extract TenantControlPlane objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
tenantControlPlaneSelector:
description: TenantControlPlaneSelector is used to filter the TenantControlPlane objects that should be addressed by the generator.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
user:
description: User resolves to a string to identify the client, assigned to the x509 Common Name field.
properties:
fromDefinition:
description: |-
FromDefinition is used to generate a dynamic value,
it uses the dot notation to access fields from the referenced TenantControlPlane object:
e.g.: metadata.name
type: string
stringValue:
description: StringValue is a static string value.
type: string
type: object
x-kubernetes-validations:
- message: Either stringValue or fromDefinition must be set, but not both.
rule: (has(self.stringValue) || has(self.fromDefinition)) && !(has(self.stringValue) && has(self.fromDefinition))
required:
- user
type: object
status:
description: KubeconfigGeneratorStatus defines the observed state of KubeconfigGenerator.
properties:
availableResources:
default: 0
description: |-
AvailableResources is the sum of successfully generated resources.
If this value differs from Resources, check the errors field.
type: integer
errors:
description: Errors is the list of failed kubeconfig generations.
items:
properties:
message:
description: Message is the error message recorded upon the last generator run.
type: string
resource:
description: Resource is the Namespaced name of the errored resource.
type: string
required:
- message
- resource
type: object
type: array
resources:
default: 0
description: Resources is the sum of targeted TenantControlPlane objects.
type: integer
required:
- availableResources
- resources
type: object
type: object
served: true
storage: true
subresources:
status: {}
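Putting the schema together, a sketch of a cluster-scoped KubeconfigGenerator (labels and names hypothetical) mixing a static group with values derived from each matched TenantControlPlane:

```yaml
apiVersion: kamaji.clastix.io/v1alpha1
kind: KubeconfigGenerator
metadata:
  name: tenant-admins                     # hypothetical
spec:
  controlPlaneEndpointFrom: admin.svc     # key in the ${TCP}-admin-kubeconfig Secret
  user:
    fromDefinition: metadata.name         # x509 CN resolved per TenantControlPlane
  groups:
    - stringValue: "kamaji:tenant-admins" # static organisation
    - fromDefinition: metadata.namespace  # dynamic organisation
  tenantControlPlaneSelector:
    matchLabels:
      tier: production                    # hypothetical label
```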

File diff suppressed because it is too large

View File

@@ -0,0 +1,49 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "kamaji-crds.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "kamaji.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "kamaji-crds.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create the cert-manager annotation to inject Certificate CA.
*/}}
{{- define "kamaji-crds.certManagerAnnotation" -}}
{{- printf "%s/%s" (required "A valid .Values.kamajiNamespace is required" .Values.kamajiNamespace) (required "A valid .Values.kamajiCertificateName is required" .Values.kamajiCertificateName) }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "kamaji-crds.labels" -}}
helm.sh/chart: {{ include "kamaji-crds.chart" . }}
app.kubernetes.io/name: {{ include "kamaji-crds.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: "crds"
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
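Given the chart defaults, the certManagerAnnotation helper renders the namespace/name pair consumed by the CRD templates below; the resulting annotation (assuming default values) is:

```yaml
metadata:
  annotations:
    cert-manager.io/inject-ca-from: kamaji-system/kamaji-serving-cert
```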

View File

@@ -0,0 +1,10 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: {{ include "kamaji-crds.certManagerAnnotation" . }}
labels:
{{- include "kamaji-crds.labels" . | nindent 4 }}
name: datastores.kamaji.clastix.io
spec:
{{ tpl (.Files.Get "hack/kamaji.clastix.io_datastores_spec.yaml") . | nindent 2}}

View File

@@ -0,0 +1,10 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: {{ include "kamaji-crds.certManagerAnnotation" . }}
labels:
{{- include "kamaji-crds.labels" . | nindent 4 }}
name: kubeconfiggenerators.kamaji.clastix.io
spec:
{{ tpl (.Files.Get "hack/kamaji.clastix.io_kubeconfiggenerators_spec.yaml") . | nindent 2 }}

View File

@@ -0,0 +1,10 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: {{ include "kamaji-crds.certManagerAnnotation" . }}
labels:
{{- include "kamaji-crds.labels" . | nindent 4 }}
name: tenantcontrolplanes.kamaji.clastix.io
spec:
{{ tpl (.Files.Get "hack/kamaji.clastix.io_tenantcontrolplanes_spec.yaml") . | nindent 2 }}

View File

@@ -0,0 +1,15 @@
# Default values for kamaji-crds.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# -- Overrides the name of the chart for resource naming purposes.
nameOverride: ""
# -- Overrides the full name of the resources created by the chart.
fullnameOverride: ""
# -- The namespace where Kamaji has been installed: required to inject the Certificate Authority for cert-manager.
kamajiNamespace: kamaji-system
# -- The Kamaji webhook Service name.
kamajiService: kamaji-webhook-service
# -- The cert-manager Certificate resource name, holding the Certificate Authority for webhooks.
kamajiCertificateName: kamaji-serving-cert

View File

@@ -21,3 +21,8 @@
.idea/
*.tmproj
.vscode/
# Helm source files
README.md.gotmpl
.helmignore
# Build tools
Makefile

View File

@@ -1,6 +1,6 @@
dependencies:
- name: kamaji-etcd
repository: https://clastix.github.io/charts
version: 0.9.2
digest: sha256:ba76d3a30e5e20dbbbbcc36a0e7465d4b1adacc956061e7f6ea47b99fc8f08a6
generated: "2025-03-14T21:23:30.421915+09:00"
version: 0.11.0
digest: sha256:96b4115b8c02f771f809ec1bed3be3a3903e7e8315d6966aa54b0f73230ea421
generated: "2025-07-03T09:19:19.835421461+02:00"

View File

@@ -1,5 +1,5 @@
apiVersion: v2
appVersion: v0.0.0
appVersion: latest
description: Kamaji is the Hosted Control Plane Manager for Kubernetes.
home: https://github.com/clastix/kamaji
icon: https://github.com/clastix/kamaji/raw/master/assets/logo-colored.png
@@ -17,11 +17,11 @@ name: kamaji
sources:
- https://github.com/clastix/kamaji
type: application
version: 0.0.0
version: 0.0.0+latest
dependencies:
- name: kamaji-etcd
repository: https://clastix.github.io/charts
version: ">=0.9.2"
version: ">=0.11.0"
condition: kamaji-etcd.deploy
annotations:
catalog.cattle.io/certified: partner
@@ -46,4 +46,5 @@ annotations:
artifacthub.io/operator: "true"
artifacthub.io/operatorCapabilities: "full lifecycle"
artifacthub.io/changes: |
- Using dependency chart `kamaji-etcd` as a default DataStore.
- kind: added
description: Releasing latest chart at every push

View File

@@ -1,6 +1,6 @@
# kamaji
![Version: 0.0.0](https://img.shields.io/badge/Version-0.0.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.0.0](https://img.shields.io/badge/AppVersion-v0.0.0-informational?style=flat-square)
![Version: 0.0.0+latest](https://img.shields.io/badge/Version-0.0.0+latest-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: latest](https://img.shields.io/badge/AppVersion-latest-informational?style=flat-square)
Kamaji is the Hosted Control Plane Manager for Kubernetes.
@@ -22,7 +22,7 @@ Kubernetes: `>=1.21.0-0`
| Repository | Name | Version |
|------------|------|---------|
| https://clastix.github.io/charts | kamaji-etcd | >=0.9.2 |
| https://clastix.github.io/charts | kamaji-etcd | >=0.11.0 |
[Kamaji](https://github.com/clastix/kamaji) requires a [multi-tenant `etcd`](https://github.com/clastix/kamaji-internal/blob/master/deploy/getting-started-with-kamaji.md#setup-internal-multi-tenant-etcd) cluster.
This Helm Chart, starting from v0.1.1, provides the installation of an internal `etcd` in order to streamline local testing. If you'd like to use an externally managed etcd instance, you can provide your overrides and disable the internal one by setting the value `etcd.deploy=false`.
@@ -82,10 +82,25 @@ Here the values you can override:
| image.repository | string | `"clastix/kamaji"` | The container image of the Kamaji controller. |
| image.tag | string | `nil` | Overrides the image tag whose default is the chart appVersion. |
| imagePullSecrets | list | `[]` | |
| kamaji-etcd.datastore.enabled | bool | `true` | |
| kamaji-etcd.datastore.name | string | `"default"` | |
| kamaji-etcd.deploy | bool | `true` | |
| kamaji-etcd.fullnameOverride | string | `"kamaji-etcd"` | |
| kamaji-etcd | object | `{"clusterDomain":"cluster.local","datastore":{"enabled":true,"name":"default"},"deploy":true,"fullnameOverride":"kamaji-etcd"}` | Subchart: See https://github.com/clastix/kamaji-etcd/blob/master/charts/kamaji-etcd/values.yaml |
| kubeconfigGenerator.affinity | object | `{}` | Kubernetes affinity rules to apply to Kubeconfig Generator controller pods |
| kubeconfigGenerator.enableLeaderElect | bool | `true` | Enables the leader election. |
| kubeconfigGenerator.enabled | bool | `false` | Toggle to deploy the Kubeconfig Generator Deployment. |
| kubeconfigGenerator.extraArgs | list | `[]` | A list of extra arguments to add to the Kubeconfig Generator controller default ones. |
| kubeconfigGenerator.fullnameOverride | string | `""` | |
| kubeconfigGenerator.healthProbeBindAddress | string | `":8081"` | The address the probe endpoint binds to. |
| kubeconfigGenerator.loggingDevel.enable | bool | `false` | Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) |
| kubeconfigGenerator.nodeSelector | object | `{}` | Kubernetes node selector rules to schedule Kubeconfig Generator controller |
| kubeconfigGenerator.podAnnotations | object | `{}` | The annotations to apply to the Kubeconfig Generator controller pods. |
| kubeconfigGenerator.podSecurityContext | object | `{"runAsNonRoot":true}` | The securityContext to apply to the Kubeconfig Generator controller pods. |
| kubeconfigGenerator.replicaCount | int | `2` | The number of the pod replicas for the Kubeconfig Generator controller. |
| kubeconfigGenerator.resources.limits.cpu | string | `"200m"` | |
| kubeconfigGenerator.resources.limits.memory | string | `"512Mi"` | |
| kubeconfigGenerator.resources.requests.cpu | string | `"200m"` | |
| kubeconfigGenerator.resources.requests.memory | string | `"512Mi"` | |
| kubeconfigGenerator.securityContext | object | `{"allowPrivilegeEscalation":false}` | The securityContext to apply to the Kubeconfig Generator controller container only. |
| kubeconfigGenerator.serviceAccountOverride | string | `""` | The name of the service account to use. If not set, the root Kamaji one will be used. |
| kubeconfigGenerator.tolerations | list | `[]` | Kubernetes node taints that the Kubeconfig Generator controller pods would tolerate |
| livenessProbe | object | `{"httpGet":{"path":"/healthz","port":"healthcheck"},"initialDelaySeconds":15,"periodSeconds":20}` | The livenessProbe for the controller container |
| loggingDevel.enable | bool | `false` | Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default false) |
| metricsBindAddress | string | `":8080"` | The address the metric endpoint binds to. (default ":8080") |
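For example, a values override enabling the optional Kubeconfig Generator controller, using the keys from the table above:

```yaml
kubeconfigGenerator:
  enabled: true           # deploy the Kubeconfig Generator Deployment
  replicaCount: 2
  enableLeaderElect: true
  extraArgs: []
```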

View File

@@ -1,12 +0,0 @@
# Kamaji
Kamaji deploys and operates Kubernetes at scale with a fraction of the operational burden.
Useful links:
- [Kamaji Github repository](https://github.com/clastix/kamaji)
- [Kamaji Documentation](https://kamaji.clastix.io)
## Requirements
* Kubernetes v1.22+
* Helm v3

View File

@@ -1,3 +1,11 @@
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
@@ -51,6 +59,7 @@
- kamaji.clastix.io
resources:
- datastores/status
- kubeconfiggenerators/status
- tenantcontrolplanes/status
verbs:
- get
@@ -59,6 +68,18 @@
- apiGroups:
- kamaji.clastix.io
resources:
- kubeconfiggenerators
verbs:
- create
- get
- list
- patch
- update
- watch
- apiGroups:
- kamaji.clastix.io
resources:
- kubeconfiggenerators/finalizers
- tenantcontrolplanes/finalizers
verbs:
- update

View File

@@ -120,6 +120,9 @@ spec:
- PostgreSQL
- NATS
type: string
x-kubernetes-validations:
- message: Datastore driver is immutable
rule: self == oldSelf
endpoints:
description: |-
List of the endpoints to connect to the shared datastore.
@@ -263,6 +266,21 @@ spec:
- driver
- endpoints
type: object
x-kubernetes-validations:
- message: certificateAuthority privateKey must have secretReference or content when driver is etcd
rule: '(self.driver == "etcd") ? (self.tlsConfig != null && (has(self.tlsConfig.certificateAuthority.privateKey.secretReference) || has(self.tlsConfig.certificateAuthority.privateKey.content))) : true'
- message: clientCertificate must have secretReference or content when driver is etcd
rule: '(self.driver == "etcd") ? (self.tlsConfig != null && (has(self.tlsConfig.clientCertificate.certificate.secretReference) || has(self.tlsConfig.clientCertificate.certificate.content))) : true'
- message: clientCertificate privateKey must have secretReference or content when driver is etcd
rule: '(self.driver == "etcd") ? (self.tlsConfig != null && (has(self.tlsConfig.clientCertificate.privateKey.secretReference) || has(self.tlsConfig.clientCertificate.privateKey.content))) : true'
- message: When driver is not etcd and tlsConfig exists, clientCertificate must be null or contain valid content
rule: '(self.driver != "etcd" && has(self.tlsConfig) && has(self.tlsConfig.clientCertificate)) ? (((has(self.tlsConfig.clientCertificate.certificate.secretReference) || has(self.tlsConfig.clientCertificate.certificate.content)))) : true'
- message: When driver is not etcd and basicAuth exists, username must have secretReference or content
rule: '(self.driver != "etcd" && has(self.basicAuth)) ? ((has(self.basicAuth.username.secretReference) || has(self.basicAuth.username.content))) : true'
- message: When driver is not etcd and basicAuth exists, password must have secretReference or content
rule: '(self.driver != "etcd" && has(self.basicAuth)) ? ((has(self.basicAuth.password.secretReference) || has(self.basicAuth.password.content))) : true'
- message: When driver is not etcd, either tlsConfig or basicAuth must be provided
rule: '(self.driver != "etcd") ? (has(self.tlsConfig) || has(self.basicAuth)) : true'
status:
description: DataStoreStatus defines the observed state of DataStore.
properties:

View File

@@ -0,0 +1,222 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: kamaji-system/kamaji-serving-cert
controller-gen.kubebuilder.io/version: v0.16.1
name: kubeconfiggenerators.kamaji.clastix.io
spec:
group: kamaji.clastix.io
names:
categories:
- kamaji
kind: KubeconfigGenerator
listKind: KubeconfigGeneratorList
plural: kubeconfiggenerators
shortNames:
- kc
singular: kubeconfiggenerator
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Age
jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: KubeconfigGenerator is the Schema for the kubeconfiggenerators API.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
properties:
controlPlaneEndpointFrom:
default: admin.svc
description: |-
ControlPlaneEndpointFrom is the key used to extract the Tenant Control Plane endpoint that must be used by the generator.
The targeted Secret is the `${TCP}-admin-kubeconfig` one, defaulting to `admin.svc`.
type: string
groups:
description: |-
Groups resolves to a set of strings used to populate the x509 organisations field.
These will be recognised by Kubernetes as user groups.
items:
description: |-
CompoundValue allows defining a static or a dynamic value.
Options are mutually exclusive: only one should be set.
properties:
fromDefinition:
description: |-
FromDefinition is used to generate a dynamic value,
it uses the dot notation to access fields from the referenced TenantControlPlane object:
e.g.: metadata.name
type: string
stringValue:
description: StringValue is a static string value.
type: string
type: object
x-kubernetes-validations:
- message: Either stringValue or fromDefinition must be set, but not both.
rule: (has(self.stringValue) || has(self.fromDefinition)) && !(has(self.stringValue) && has(self.fromDefinition))
type: array
namespaceSelector:
description: NamespaceSelector is used to filter Namespaces from which the generator should extract TenantControlPlane objects.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
tenantControlPlaneSelector:
description: TenantControlPlaneSelector is used to filter the TenantControlPlane objects that should be addressed by the generator.
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: |-
A label selector requirement is a selector that contains values, a key, and an operator that
relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: |-
operator represents a key's relationship to a set of values.
Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: |-
values is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- key
- operator
type: object
type: array
x-kubernetes-list-type: atomic
matchLabels:
additionalProperties:
type: string
description: |-
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
x-kubernetes-map-type: atomic
user:
description: User resolves to a string to identify the client, assigned to the x509 Common Name field.
properties:
fromDefinition:
description: |-
FromDefinition is used to generate a dynamic value,
it uses the dot notation to access fields from the referenced TenantControlPlane object:
e.g.: metadata.name
type: string
stringValue:
description: StringValue is a static string value.
type: string
type: object
x-kubernetes-validations:
- message: Either stringValue or fromDefinition must be set, but not both.
rule: (has(self.stringValue) || has(self.fromDefinition)) && !(has(self.stringValue) && has(self.fromDefinition))
required:
- user
type: object
status:
description: KubeconfigGeneratorStatus defines the observed state of KubeconfigGenerator.
properties:
availableResources:
default: 0
description: |-
AvailableResources is the sum of successfully generated resources.
If this value differs from Resources, check the errors field.
type: integer
errors:
description: Errors is the list of failed kubeconfig generations.
items:
properties:
message:
description: Message is the error message recorded upon the last generator run.
type: string
resource:
description: Resource is the Namespaced name of the errored resource.
type: string
required:
- message
- resource
type: object
type: array
resources:
default: 0
description: Resources is the sum of targeted TenantControlPlane objects.
type: integer
required:
- availableResources
- resources
type: object
type: object
served: true
storage: true
subresources:
status: {}
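The user and groups stanzas above accept either stringValue or fromDefinition, never both, as enforced by the CEL rule. A minimal sketch of the static variant (all names here are illustrative, not taken from the repository):

apiVersion: kamaji.clastix.io/v1alpha1
kind: KubeconfigGenerator
metadata:
  name: static-admin
spec:
  controlPlaneEndpointFrom: admin.conf
  user:
    stringValue: kamaji-admin          # static x509 Common Name; fromDefinition must stay unset
  groups:
  - stringValue: system:masters        # illustrative group claim
  tenantControlPlaneSelector:
    matchLabels:
      example.clastix.io/generated: "true"   # hypothetical label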

View File

@@ -23,6 +23,10 @@ spec:
jsonPath: .spec.kubernetes.version
name: Version
type: string
- description: The actual installed Kubernetes version from status
jsonPath: .status.kubernetesResources.version.version
name: Installed Version
type: string
- description: Status
jsonPath: .status.kubernetesResources.version.status
name: Status
@@ -92,7 +96,7 @@ spec:
agent:
default:
image: registry.k8s.io/kas-network-proxy/proxy-agent
version: v0.28.6
mode: DaemonSet
properties:
extraArgs:
description: |-
@@ -103,10 +107,31 @@ spec:
items:
type: string
type: array
hostNetwork:
default: false
description: |-
HostNetwork enables the konnectivity agent to use the Host network namespace.
By enabling this mode, the Agent doesn't need to wait for the CNI initialisation,
enabling a sort of out-of-band access to nodes for troubleshooting scenarios,
or when the agent needs direct access to the host network.
type: boolean
image:
default: registry.k8s.io/kas-network-proxy/proxy-agent
description: AgentImage defines the container image for Konnectivity's agent.
type: string
mode:
default: DaemonSet
description: 'Mode allows specifying the Agent deployment mode: Deployment, or DaemonSet (default).'
enum:
- DaemonSet
- Deployment
type: string
replicas:
description: |-
Replicas defines the number of replicas when Mode is Deployment.
Must be 0 if Mode is DaemonSet.
format: int32
type: integer
tolerations:
default:
- key: CriticalAddonsOnly
@@ -152,15 +177,20 @@ spec:
type: object
type: array
version:
default: v0.28.6
description: Version for Konnectivity agent.
description: |-
Version for Konnectivity agent.
If left empty, Kamaji will automatically infer the version from the deployed Tenant Control Plane.
WARNING: for the most recently cut releases, the container image may not be available yet.
type: string
type: object
x-kubernetes-validations:
- message: replicas must be 0 when mode is DaemonSet, and greater than 0 when mode is Deployment
rule: '!(self.mode == ''DaemonSet'' && has(self.replicas) && self.replicas != 0) && !(self.mode == ''Deployment'' && self.replicas == 0)'
server:
default:
image: registry.k8s.io/kas-network-proxy/proxy-server
port: 8132
version: v0.28.6
properties:
extraArgs:
description: |-
@@ -187,7 +217,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -239,8 +269,11 @@ spec:
type: object
type: object
version:
default: v0.28.6
description: Container image version of the Konnectivity server.
description: |-
Container image version of the Konnectivity server.
If left empty, Kamaji will automatically infer the version from the deployed Tenant Control Plane.
WARNING: for the most recently cut releases, the container image may not be available yet.
type: string
required:
- port
@@ -312,7 +345,9 @@ spec:
description: EnvVar represents an environment variable present in a Container.
properties:
name:
description: Name of the environment variable. Must be a C_IDENTIFIER.
description: |-
Name of the environment variable.
May consist of any printable ASCII characters except '='.
type: string
value:
description: |-
@@ -366,6 +401,42 @@ spec:
- fieldPath
type: object
x-kubernetes-map-type: atomic
fileKeyRef:
description: |-
FileKeyRef selects a key of the env file.
Requires the EnvFiles feature gate to be enabled.
properties:
key:
description: |-
The key within the env file. An invalid key will prevent the pod from starting.
The keys defined within a source may consist of any printable ASCII characters except '='.
During Alpha stage of the EnvFiles feature gate, the key size is limited to 128 characters.
type: string
optional:
default: false
description: |-
Specify whether the file or its key must be defined. If the file or key
does not exist, then the env var is not published.
If optional is set to true and the specified key does not exist,
the environment variable will not be set in the Pod's containers.
If optional is set to false and the specified key does not exist,
an error will be returned during Pod creation.
type: boolean
path:
description: |-
The path within the volume from which to select the file.
Must be relative and may not contain the '..' path or start with '..'.
type: string
volumeName:
description: The name of the volume mount containing the env file.
type: string
required:
- key
- path
- volumeName
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: |-
Selects a resource of the container: only resources limits and requests
@@ -421,13 +492,13 @@ spec:
envFrom:
description: |-
List of sources to populate environment variables in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
The keys defined within a source may consist of any printable ASCII characters except '='.
When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence.
Cannot be updated.
items:
description: EnvFromSource represents the source of a set of ConfigMaps
description: EnvFromSource represents the source of a set of ConfigMaps or Secrets
properties:
configMapRef:
description: The ConfigMap to select from
@@ -447,7 +518,9 @@ spec:
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
description: |-
Optional text to prepend to the name of each environment variable.
May consist of any printable ASCII characters except '='.
type: string
secretRef:
description: The Secret to select from
@@ -696,6 +769,12 @@ spec:
- port
type: object
type: object
stopSignal:
description: |-
StopSignal defines which signal will be sent to a container when it is being stopped.
If not specified, the default is defined by the container runtime in use.
StopSignal can only be set for Pods with a non-empty .spec.os.name
type: string
type: object
livenessProbe:
description: |-
@@ -1086,7 +1165,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -1140,10 +1219,10 @@ spec:
restartPolicy:
description: |-
RestartPolicy defines the restart behavior of individual containers in a pod.
This field may only be set for init containers, and the only allowed value is "Always".
For non-init containers or when this field is not specified,
This overrides the pod-level restart policy. When this field is not specified,
the restart behavior is defined by the Pod's restart policy and the container type.
Setting the RestartPolicy as "Always" for the init container will have the following effect:
Additionally, setting the RestartPolicy as "Always" for the init container will
have the following effect:
this init container will be continually restarted on
exit until all regular containers have terminated. Once all regular
containers have completed, all init containers with restartPolicy "Always"
@@ -1155,6 +1234,57 @@ spec:
init container is started, or after any startupProbe has successfully
completed.
type: string
restartPolicyRules:
description: |-
Represents a list of rules to be checked to determine if the
container should be restarted on exit. The rules are evaluated in
order. Once a rule matches a container exit condition, the remaining
rules are ignored. If no rule matches the container exit condition,
the Container-level restart policy determines whether the container
is restarted or not. Constraints on the rules:
- At most 20 rules are allowed.
- Rules can have the same action.
- Identical rules are not forbidden in validations.
When rules are specified, the container MUST set RestartPolicy explicitly
even if it matches the Pod's RestartPolicy.
items:
description: ContainerRestartRule describes how a container exit is handled.
properties:
action:
description: |-
Specifies the action taken on a container exit if the requirements
are satisfied. The only possible value is "Restart" to restart the
container.
type: string
exitCodes:
description: Represents the exit codes to check on container exits.
properties:
operator:
description: |-
Represents the relationship between the container exit code(s) and the
specified values. Possible values are:
- In: the requirement is satisfied if the container exit code is in the
set of specified values.
- NotIn: the requirement is satisfied if the container exit code is
not in the set of specified values.
type: string
values:
description: |-
Specifies the set of values to check for container exit codes.
At most 255 elements are allowed.
items:
format: int32
type: integer
type: array
x-kubernetes-list-type: set
required:
- operator
type: object
required:
- action
type: object
type: array
x-kubernetes-list-type: atomic
securityContext:
description: |-
SecurityContext defines the security options the container should be run with.
@@ -1677,7 +1807,9 @@ spec:
description: EnvVar represents an environment variable present in a Container.
properties:
name:
description: Name of the environment variable. Must be a C_IDENTIFIER.
description: |-
Name of the environment variable.
May consist of any printable ASCII characters except '='.
type: string
value:
description: |-
@@ -1731,6 +1863,42 @@ spec:
- fieldPath
type: object
x-kubernetes-map-type: atomic
fileKeyRef:
description: |-
FileKeyRef selects a key of the env file.
Requires the EnvFiles feature gate to be enabled.
properties:
key:
description: |-
The key within the env file. An invalid key will prevent the pod from starting.
The keys defined within a source may consist of any printable ASCII characters except '='.
During Alpha stage of the EnvFiles feature gate, the key size is limited to 128 characters.
type: string
optional:
default: false
description: |-
Specify whether the file or its key must be defined. If the file or key
does not exist, then the env var is not published.
If optional is set to true and the specified key does not exist,
the environment variable will not be set in the Pod's containers.
If optional is set to false and the specified key does not exist,
an error will be returned during Pod creation.
type: boolean
path:
description: |-
The path within the volume from which to select the file.
Must be relative and may not contain the '..' path or start with '..'.
type: string
volumeName:
description: The name of the volume mount containing the env file.
type: string
required:
- key
- path
- volumeName
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: |-
Selects a resource of the container: only resources limits and requests
@@ -1786,13 +1954,13 @@ spec:
envFrom:
description: |-
List of sources to populate environment variables in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
The keys defined within a source may consist of any printable ASCII characters except '='.
When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence.
Cannot be updated.
items:
description: EnvFromSource represents the source of a set of ConfigMaps
description: EnvFromSource represents the source of a set of ConfigMaps or Secrets
properties:
configMapRef:
description: The ConfigMap to select from
@@ -1812,7 +1980,9 @@ spec:
type: object
x-kubernetes-map-type: atomic
prefix:
description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
description: |-
Optional text to prepend to the name of each environment variable.
May consist of any printable ASCII characters except '='.
type: string
secretRef:
description: The Secret to select from
@@ -2061,6 +2231,12 @@ spec:
- port
type: object
type: object
stopSignal:
description: |-
StopSignal defines which signal will be sent to a container when it is being stopped.
If not specified, the default is defined by the container runtime in use.
StopSignal can only be set for Pods with a non-empty .spec.os.name
type: string
type: object
livenessProbe:
description: |-
@@ -2451,7 +2627,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -2505,10 +2681,10 @@ spec:
restartPolicy:
description: |-
RestartPolicy defines the restart behavior of individual containers in a pod.
This field may only be set for init containers, and the only allowed value is "Always".
For non-init containers or when this field is not specified,
This overrides the pod-level restart policy. When this field is not specified,
the restart behavior is defined by the Pod's restart policy and the container type.
Setting the RestartPolicy as "Always" for the init container will have the following effect:
Additionally, setting the RestartPolicy as "Always" for the init container will
have the following effect:
this init container will be continually restarted on
exit until all regular containers have terminated. Once all regular
containers have completed, all init containers with restartPolicy "Always"
@@ -2520,6 +2696,57 @@ spec:
init container is started, or after any startupProbe has successfully
completed.
type: string
restartPolicyRules:
description: |-
Represents a list of rules to be checked to determine if the
container should be restarted on exit. The rules are evaluated in
order. Once a rule matches a container exit condition, the remaining
rules are ignored. If no rule matches the container exit condition,
the Container-level restart policy determines whether the container
is restarted or not. Constraints on the rules:
- At most 20 rules are allowed.
- Rules can have the same action.
- Identical rules are not forbidden in validations.
When rules are specified, the container MUST set RestartPolicy explicitly
even if it matches the Pod's RestartPolicy.
items:
description: ContainerRestartRule describes how a container exit is handled.
properties:
action:
description: |-
Specifies the action taken on a container exit if the requirements
are satisfied. The only possible value is "Restart" to restart the
container.
type: string
exitCodes:
description: Represents the exit codes to check on container exits.
properties:
operator:
description: |-
Represents the relationship between the container exit code(s) and the
specified values. Possible values are:
- In: the requirement is satisfied if the container exit code is in the
set of specified values.
- NotIn: the requirement is satisfied if the container exit code is
not in the set of specified values.
type: string
values:
description: |-
Specifies the set of values to check for container exit codes.
At most 255 elements are allowed.
items:
format: int32
type: integer
type: array
x-kubernetes-list-type: set
required:
- operator
type: object
required:
- action
type: object
type: array
x-kubernetes-list-type: atomic
securityContext:
description: |-
SecurityContext defines the security options the container should be run with.
@@ -3846,15 +4073,13 @@ spec:
volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim.
If specified, the CSI driver will create or update the volume with the attributes defined
in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName,
it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass
will be applied to the claim but it's not allowed to reset this field to empty string once it is set.
If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass
will be set by the persistentvolume controller if it exists.
it can be changed after the claim is created. An empty string or nil value indicates that no
VolumeAttributesClass will be applied to the claim. If the claim enters an Infeasible error state,
this field can be reset to its previous value (including nil) to cancel the modification.
If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be
set to a Pending state, as reflected by the modifyVolumeStatus field, until such a
resource exists.
More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/
(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).
type: string
volumeMode:
description: |-
@@ -4028,12 +4253,9 @@ spec:
description: |-
glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime.
Deprecated: Glusterfs is deprecated and the in-tree glusterfs type is no longer supported.
More info: https://examples.k8s.io/volumes/glusterfs/README.md
properties:
endpoints:
description: |-
endpoints is the endpoint name that details Glusterfs topology.
More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod
description: endpoints is the endpoint name that details Glusterfs topology.
type: string
path:
description: |-
@@ -4087,7 +4309,7 @@ spec:
The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field.
The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images.
The volume will be mounted read-only (ro) and non-executable files (noexec).
Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath).
Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath) before 1.33.
The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type.
properties:
pullPolicy:
@@ -4112,7 +4334,7 @@ spec:
description: |-
iscsi represents an ISCSI Disk resource that is attached to a
kubelet's host machine and then exposed to the pod.
More info: https://examples.k8s.io/volumes/iscsi/README.md
More info: https://kubernetes.io/docs/concepts/storage/volumes/#iscsi
properties:
chapAuthDiscovery:
description: chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication
@@ -4502,6 +4724,110 @@ spec:
type: array
x-kubernetes-list-type: atomic
type: object
podCertificate:
description: |-
Projects an auto-rotating credential bundle (private key and certificate
chain) that the pod can use either as a TLS client or server.
Kubelet generates a private key and uses it to send a
PodCertificateRequest to the named signer. Once the signer approves the
request and issues a certificate chain, Kubelet writes the key and
certificate chain to the pod filesystem. The pod does not start until
certificates have been issued for each podCertificate projected volume
source in its spec.
Kubelet will begin trying to rotate the certificate at the time indicated
by the signer using the PodCertificateRequest.Status.BeginRefreshAt
timestamp.
Kubelet can write a single file, indicated by the credentialBundlePath
field, or separate files, indicated by the keyPath and
certificateChainPath fields.
The credential bundle is a single file in PEM format. The first PEM
entry is the private key (in PKCS#8 format), and the remaining PEM
entries are the certificate chain issued by the signer (typically,
signers will return their certificate chain in leaf-to-root order).
Prefer using the credential bundle format, since your application code
can read it atomically. If you use keyPath and certificateChainPath,
your application must make two separate file reads. If these coincide
with a certificate rotation, it is possible that the private key and leaf
certificate you read may not correspond to each other. Your application
will need to check for this condition, and re-read until they are
consistent.
The named signer chooses the format of the certificate it
issues; consult the signer implementation's documentation to learn how to
use the certificates it issues.
properties:
certificateChainPath:
description: |-
Write the certificate chain at this path in the projected volume.
Most applications should use credentialBundlePath. When using keyPath
and certificateChainPath, your application needs to check that the key
and leaf certificate are consistent, because it is possible to read the
files mid-rotation.
type: string
credentialBundlePath:
description: |-
Write the credential bundle at this path in the projected volume.
The credential bundle is a single file that contains multiple PEM blocks.
The first PEM block is a PRIVATE KEY block, containing a PKCS#8 private
key.
The remaining blocks are CERTIFICATE blocks, containing the issued
certificate chain from the signer (leaf and any intermediates).
Using credentialBundlePath lets your Pod's application code make a single
atomic read that retrieves a consistent key and certificate chain. If you
project them to separate files, your application code will need to
additionally check that the leaf certificate was issued to the key.
type: string
keyPath:
description: |-
Write the key at this path in the projected volume.
Most applications should use credentialBundlePath. When using keyPath
and certificateChainPath, your application needs to check that the key
and leaf certificate are consistent, because it is possible to read the
files mid-rotation.
type: string
keyType:
description: |-
The type of keypair Kubelet will generate for the pod.
Valid values are "RSA3072", "RSA4096", "ECDSAP256", "ECDSAP384",
"ECDSAP521", and "ED25519".
type: string
maxExpirationSeconds:
description: |-
maxExpirationSeconds is the maximum lifetime permitted for the
certificate.
Kubelet copies this value verbatim into the PodCertificateRequests it
generates for this projection.
If omitted, kube-apiserver will set it to 86400 (24 hours). kube-apiserver
will reject values shorter than 3600 (1 hour). The maximum allowable
value is 7862400 (91 days).
The signer implementation is then free to issue a certificate with any
lifetime *shorter* than MaxExpirationSeconds, but no shorter than 3600
seconds (1 hour). This constraint is enforced by kube-apiserver.
`kubernetes.io` signers will never issue certificates with a lifetime
longer than 24 hours.
format: int32
type: integer
signerName:
description: Kubelet's generated CSRs will be addressed to this signer.
type: string
required:
- keyType
- signerName
type: object
secret:
description: secret information about the secret data to project
properties:
@@ -4631,7 +4957,6 @@ spec:
description: |-
rbd represents a Rados Block Device mount on the host that shares a pod's lifetime.
Deprecated: RBD is deprecated and the in-tree rbd type is no longer supported.
More info: https://examples.k8s.io/volumes/rbd/README.md
properties:
fsType:
description: |-
@@ -5173,7 +5498,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -5188,7 +5512,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -5349,7 +5672,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -5364,7 +5686,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -5453,8 +5774,8 @@ spec:
most preferred is the one with the greatest sum of weights, i.e.
for each node that meets all of the scheduling requirements (resource
request, requiredDuringScheduling anti-affinity expressions, etc.),
compute a sum by iterating through the elements of this field and adding
"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the
compute a sum by iterating through the elements of this field and subtracting
"weight" from the sum if the node has pods which matches the corresponding podAffinityTerm; the
node(s) with the highest sum are the most preferred.
items:
description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
@@ -5518,7 +5839,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -5533,7 +5853,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -5694,7 +6013,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -5709,7 +6027,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -5878,7 +6195,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -5937,7 +6254,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -5998,7 +6315,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -6057,7 +6374,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -6339,7 +6656,6 @@ spec:
- Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
If this value is nil, the behavior is equivalent to the Honor policy.
This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
type: string
nodeTaintsPolicy:
description: |-
@@ -6350,7 +6666,6 @@ spec:
- Ignore: node taints are ignored. All nodes are included.
If this value is nil, the behavior is equivalent to the Ignore policy.
This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
type: string
topologyKey:
description: |-
@@ -6463,6 +6778,16 @@ spec:
x-kubernetes-validations:
- message: changing the dataStoreSchema is not supported
rule: self == oldSelf
dataStoreUsername:
description: |-
DataStoreUsername allows specifying the username of the database (for relational DataStores). This
value is optional and immutable. Note that Kamaji currently doesn't ensure that DataStoreUsername values are unique: it's up
to the user to avoid clashes between different TenantControlPlanes. If not set upon creation, Kamaji defaults the
DataStoreUsername by concatenating the namespace and name of the TenantControlPlane.
type: string
x-kubernetes-validations:
- message: changing the dataStoreUsername is not supported
rule: self == oldSelf
kubernetes:
description: Kubernetes specification for tenant control plane
properties:
@@ -6533,7 +6858,7 @@ spec:
properties:
cgroupfs:
description: |-
CGroupFS defines the cgroup driver for Kubelet
CGroupFS defines the cgroup driver for Kubelet
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/
enum:
- systemd
@@ -6541,12 +6866,12 @@ spec:
type: string
preferredAddressTypes:
default:
- Hostname
- InternalIP
- ExternalIP
- Hostname
description: |-
Ordered list of the preferred NodeAddressTypes to use for kubelet connections.
Default to Hostname, InternalIP, ExternalIP.
Defaults to InternalIP, ExternalIP, Hostname.
items:
enum:
- Hostname
@@ -6557,6 +6882,7 @@ spec:
type: string
minItems: 1
type: array
x-kubernetes-list-type: set
type: object
version:
description: Kubernetes Version for the tenant control plane
@@ -6637,6 +6963,22 @@ spec:
description: 'CIDR for Kubernetes Services: if empty, defaulted to 10.96.0.0/16.'
type: string
type: object
writePermissions:
description: |-
WritePermissions allows selecting which operations (create, delete, update) must be blocked:
by default, all actions are allowed, and the API Server can write to its Datastore.
By blocking all actions, the Tenant Control Plane enters a Read Only mode:
this phase can be used to prevent Datastore quota exhaustion, or for your own business logic
(e.g.: blocking creation and update, but allowing deletion to "clean up" space).
properties:
blockCreation:
type: boolean
blockDeletion:
type: boolean
blockUpdate:
type: boolean
type: object
required:
- controlPlane
- kubernetes
@@ -6646,6 +6988,8 @@ spec:
rule: '!has(oldSelf.dataStore) || has(self.dataStore)'
- message: unsetting the dataStoreSchema is not supported
rule: '!has(oldSelf.dataStoreSchema) || has(self.dataStoreSchema)'
- message: unsetting the dataStoreUsername is not supported
rule: '!has(oldSelf.dataStoreUsername) || has(self.dataStoreUsername)'
- message: LoadBalancer source ranges are supported only with LoadBalancer service type
rule: '!has(self.networkProfile.loadBalancerSourceRanges) || (size(self.networkProfile.loadBalancerSourceRanges) == 0 || self.controlPlane.service.serviceType == ''LoadBalancer'')'
- message: LoadBalancerClass is supported only with LoadBalancer service type
@@ -6678,6 +7022,8 @@ spec:
description: Last time when k8s object was updated
format: date-time
type: string
mode:
type: string
name:
type: string
namespace:
@@ -7071,7 +7417,7 @@ spec:
description: KubernetesDeploymentStatus defines the status for the Tenant Control Plane Deployment in the management cluster.
properties:
availableReplicas:
description: Total number of available pods (ready for at least minReadySeconds) targeted by this deployment.
description: Total number of available non-terminating pods (ready for at least minReadySeconds) targeted by this deployment.
format: int32
type: integer
collisionCount:
@@ -7129,16 +7475,24 @@ spec:
format: int64
type: integer
readyReplicas:
description: readyReplicas is the number of pods targeted by this Deployment with a Ready Condition.
description: Total number of non-terminating pods targeted by this Deployment with a Ready Condition.
format: int32
type: integer
replicas:
description: Total number of non-terminated pods targeted by this deployment (their labels match the selector).
description: Total number of non-terminating pods targeted by this deployment (their labels match the selector).
format: int32
type: integer
selector:
description: Selector is the label selector used to group the Tenant Control Plane Pods used by the scale subresource.
type: string
terminatingReplicas:
description: |-
Total number of terminating pods targeted by this deployment. Terminating pods have a non-null
.metadata.deletionTimestamp and have not yet reached the Failed or Succeeded .status.phase.
This is an alpha field. Enable DeploymentReplicaSetTerminatingReplicas to be able to use this field.
format: int32
type: integer
unavailableReplicas:
description: |-
Total number of unavailable pods targeted by this deployment. This is the total number of
@@ -7147,7 +7501,7 @@ spec:
format: int32
type: integer
updatedReplicas:
description: Total number of non-terminated pods targeted by this deployment that have the desired template spec.
description: Total number of non-terminating pods targeted by this deployment that have the desired template spec.
format: int32
type: integer
required:
@@ -7373,12 +7727,15 @@ spec:
default: Provisioning
description: Status returns the current status of the Kubernetes version, such as its provisioning state, or completed upgrade.
enum:
- Unknown
- Provisioning
- CertificateAuthorityRotating
- Upgrading
- Migrating
- Ready
- NotReady
- Sleeping
- WriteLimited
type: string
version:
description: Version is the running Kubernetes version of the Tenant Control Plane.
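Putting the new spec fields together, a hedged TenantControlPlane sketch exercising dataStoreUsername and writePermissions (names and values are illustrative):

apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: quota-guarded
  namespace: tenant-system
spec:
  dataStoreUsername: tenant_system_quota_guarded   # optional, immutable once set
  controlPlane:
    deployment:
      replicas: 2
    service:
      serviceType: LoadBalancer
  kubernetes:
    version: v1.33.0
    kubelet:
      cgroupfs: systemd
  writePermissions:
    blockCreation: true    # API Server rejects new objects
    blockUpdate: true      # ...and updates
    blockDeletion: false   # deletions still allowed, to reclaim Datastore space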

View File

@@ -89,3 +89,15 @@ Create the name of the cert-manager Certificate
{{- define "kamaji.certificateName" -}}
{{- printf "%s-serving-cert" (include "kamaji.fullname" .) }}
{{- end }}
{{/*
Kubeconfig Generator Deployment name.
*/}}
{{- define "kamaji.kubeconfigGeneratorName" -}}
{{- if .Values.kubeconfigGenerator.fullnameOverride }}
{{- .Values.kubeconfigGenerator.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name "kubeconfig-generator" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
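The helper resolves the Deployment name from either the kubeconfigGenerator override or the release name; a sketch of the expected rendering, assuming a release named kamaji:

kubeconfigGenerator:
  fullnameOverride: ""     # helper renders "kamaji-kubeconfig-generator"
  # fullnameOverride: kcg  # would render "kcg" instead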

View File

@@ -19,10 +19,6 @@ spec:
labels:
{{- include "kamaji.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
serviceAccountName: {{ include "kamaji.serviceAccountName" . }}

View File

@@ -0,0 +1,54 @@
{{- if .Values.kubeconfigGenerator.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
{{- include "kamaji.labels" . | nindent 4 }}
name: {{ include "kamaji.kubeconfigGeneratorName" . }}
namespace: {{ .Release.Namespace }}
spec:
replicas: {{ .Values.kubeconfigGenerator.replicaCount }}
selector:
matchLabels:
{{- include "kamaji.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.kubeconfigGenerator.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "kamaji.selectorLabels" . | nindent 8 }}
spec:
securityContext:
{{- toYaml .Values.kubeconfigGenerator.podSecurityContext | nindent 8 }}
serviceAccountName: {{ default (include "kamaji.serviceAccountName" .) .Values.kubeconfigGenerator.serviceAccountOverride }}
containers:
- args:
- kubeconfig-generator
- --health-probe-bind-address={{ .Values.kubeconfigGenerator.healthProbeBindAddress }}
- --leader-elect={{ .Values.kubeconfigGenerator.enableLeaderElect }}
{{- if .Values.kubeconfigGenerator.loggingDevel.enable }}
- --zap-devel
{{- end }}
{{- with .Values.kubeconfigGenerator.extraArgs }}
{{- toYaml . | nindent 10 }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
name: controller
resources:
{{- toYaml .Values.kubeconfigGenerator.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.kubeconfigGenerator.securityContext | nindent 12 }}
{{- with .Values.kubeconfigGenerator.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.kubeconfigGenerator.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.kubeconfigGenerator.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
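Rendering this template requires flipping the chart gate; a minimal values sketch, assuming defaults for everything else:

kubeconfigGenerator:
  enabled: true
  replicaCount: 2
  extraArgs:
  - --zap-log-level=info   # assumption: standard controller-runtime zap flag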

View File

@@ -9,6 +9,10 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
namespace: {{ .Release.Namespace }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 2 }}
{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role

View File

@@ -98,9 +98,12 @@ loggingDevel:
# -- If specified, all the Kamaji instances with an unassigned DataStore will inherit this default value.
defaultDatastoreName: default
# -- Subchart: See https://github.com/clastix/kamaji-etcd/blob/master/charts/kamaji-etcd/values.yaml
kamaji-etcd:
deploy: true
fullnameOverride: kamaji-etcd
## -- Important, this must match your management cluster's clusterDomain, otherwise the init jobs will fail
clusterDomain: "cluster.local"
datastore:
enabled: true
name: default
@@ -108,4 +111,48 @@ kamaji-etcd:
# -- Disable the analytics traces collection
telemetry:
disabled: false
kubeconfigGenerator:
# -- Toggle to deploy the Kubeconfig Generator Deployment.
enabled: false
fullnameOverride: ""
# -- The number of the pod replicas for the Kubeconfig Generator controller.
replicaCount: 2
# -- The annotations to apply to the Kubeconfig Generator controller pods.
podAnnotations: {}
# -- The securityContext to apply to the Kubeconfig Generator controller pods.
podSecurityContext:
runAsNonRoot: true
# -- The name of the service account to use. If not set, the root Kamaji one will be used.
serviceAccountOverride: ""
# -- The address the probe endpoint binds to.
healthProbeBindAddress: ":8081"
# -- Enables the leader election.
enableLeaderElect: true
loggingDevel:
# -- Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error)
enable: false
# -- A list of extra arguments to add to the Kubeconfig Generator controller default ones.
extraArgs: []
resources:
limits:
cpu: 200m
memory: 512Mi
requests:
cpu: 200m
memory: 512Mi
# -- The securityContext to apply to the Kubeconfig Generator controller container only.
securityContext:
allowPrivilegeEscalation: false
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
# -- Kubernetes node selector rules to schedule Kubeconfig Generator controller
nodeSelector: {}
# -- Kubernetes node taints that the Kubeconfig Generator controller pods would tolerate
tolerations: []
# -- Kubernetes affinity rules to apply to Kubeconfig Generator controller pods
affinity: {}
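As the inline comment warns, the subchart's clusterDomain must follow the management cluster; a hedged override sketch for a cluster using a non-default DNS domain (corp.local is illustrative):

kamaji-etcd:
  deploy: true
  fullnameOverride: kamaji-etcd
  clusterDomain: "corp.local"   # must match the management cluster's clusterDomain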

View File

@@ -0,0 +1,167 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package kubeconfiggenerator
import (
"flag"
"fmt"
"io"
"os"
goRuntime "runtime"
"time"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/rest"
"k8s.io/klog/v2"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
"github.com/clastix/kamaji/controllers"
"github.com/clastix/kamaji/internal"
)
func NewCmd(scheme *runtime.Scheme) *cobra.Command {
// CLI flags
var (
metricsBindAddress string
healthProbeBindAddress string
leaderElect bool
controllerReconcileTimeout time.Duration
cacheResyncPeriod time.Duration
managerNamespace string
certificateExpirationDeadline time.Duration
)
cmd := &cobra.Command{
Use: "kubeconfig-generator",
Short: "Start the Kubeconfig Generator manager",
SilenceErrors: false,
SilenceUsage: true,
PreRunE: func(*cobra.Command, []string) error {
// Avoid polluting stdout with useless details by the underlying klog implementations
klog.SetOutput(io.Discard)
klog.LogToStderr(false)
if certificateExpirationDeadline < 24*time.Hour {
return fmt.Errorf("certificate expiration deadline must be at least 24 hours")
}
return nil
},
RunE: func(*cobra.Command, []string) error {
ctx := ctrl.SetupSignalHandler()
setupLog := ctrl.Log.WithName("kubeconfig-generator")
setupLog.Info(fmt.Sprintf("Kamaji version %s %s%s", internal.GitTag, internal.GitCommit, internal.GitDirty))
setupLog.Info(fmt.Sprintf("Build from: %s", internal.GitRepo))
setupLog.Info(fmt.Sprintf("Build date: %s", internal.BuildTime))
setupLog.Info(fmt.Sprintf("Go Version: %s", goRuntime.Version()))
setupLog.Info(fmt.Sprintf("Go OS/Arch: %s/%s", goRuntime.GOOS, goRuntime.GOARCH))
ctrlOpts := ctrl.Options{
Scheme: scheme,
Metrics: metricsserver.Options{
BindAddress: metricsBindAddress,
},
HealthProbeBindAddress: healthProbeBindAddress,
LeaderElection: leaderElect,
LeaderElectionNamespace: managerNamespace,
LeaderElectionID: "kubeconfiggenerator.kamaji.clastix.io",
NewCache: func(config *rest.Config, opts cache.Options) (cache.Cache, error) {
opts.SyncPeriod = &cacheResyncPeriod
return cache.New(config, opts)
},
}
triggerChan := make(chan event.GenericEvent)
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrlOpts)
if err != nil {
setupLog.Error(err, "unable to start manager")
return err
}
setupLog.Info("setting probes")
{
if err = mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up health check")
return err
}
if err = mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up ready check")
return err
}
}
certController := &controllers.CertificateLifecycle{Channel: triggerChan, Deadline: certificateExpirationDeadline}
certController.EnqueueFn = certController.EnqueueForKubeconfigGenerator
if err = certController.SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CertificateLifecycle")
return err
}
if err = (&controllers.KubeconfigGeneratorWatcher{
Client: mgr.GetClient(),
GeneratorChan: triggerChan,
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "KubeconfigGeneratorWatcher")
return err
}
if err = (&controllers.KubeconfigGeneratorReconciler{
Client: mgr.GetClient(),
NotValidThreshold: certificateExpirationDeadline,
CertificateChan: triggerChan,
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "KubeconfigGenerator")
return err
}
setupLog.Info("starting manager")
if err = mgr.Start(ctx); err != nil {
setupLog.Error(err, "problem running manager")
return err
}
return nil
},
}
// Setting zap logger
zapfs := flag.NewFlagSet("zap", flag.ExitOnError)
opts := zap.Options{
Development: true,
}
opts.BindFlags(zapfs)
cmd.Flags().AddGoFlagSet(zapfs)
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
// Setting CLI flags
cmd.Flags().StringVar(&metricsBindAddress, "metrics-bind-address", ":8090", "The address the metric endpoint binds to.")
cmd.Flags().StringVar(&healthProbeBindAddress, "health-probe-bind-address", ":8091", "The address the probe endpoint binds to.")
cmd.Flags().BoolVar(&leaderElect, "leader-elect", true, "Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.")
cmd.Flags().DurationVar(&controllerReconcileTimeout, "controller-reconcile-timeout", 30*time.Second, "The reconciliation request timeout before the controller withdraws the external resource calls, such as dealing with the Datastore, or the Tenant Control Plane API endpoint.")
cmd.Flags().DurationVar(&cacheResyncPeriod, "cache-resync-period", 10*time.Hour, "The controller-runtime.Manager cache resync period.")
cmd.Flags().StringVar(&managerNamespace, "pod-namespace", os.Getenv("POD_NAMESPACE"), "The Kubernetes Namespace in which the Operator is running, required for the TenantControlPlane migration jobs.")
cmd.Flags().DurationVar(&certificateExpirationDeadline, "certificate-expiration-deadline", 24*time.Hour, "Define the deadline upon certificate expiration to start the renewal process, cannot be less than 24 hours.")
cobra.OnInitialize(func() {
viper.AutomaticEnv()
})
return cmd
}

View File

@@ -4,6 +4,7 @@
package manager
import (
"context"
"flag"
"fmt"
"io"
@@ -20,6 +21,7 @@ import (
"k8s.io/klog/v2"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
@@ -61,8 +63,6 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
webhookCAPath string
)
ctx := ctrl.SetupSignalHandler()
cmd := &cobra.Command{
Use: "manager",
Short: "Start the Kamaji Kubernetes Operator",
@@ -85,7 +85,7 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
return fmt.Errorf("unable to read webhook CA: %w", err)
}
if err = datastoreutils.CheckExists(ctx, scheme, datastore); err != nil {
if err = datastoreutils.CheckExists(context.Background(), scheme, datastore); err != nil {
return err
}
@@ -96,6 +96,8 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
return nil
},
RunE: func(*cobra.Command, []string) error {
ctx := ctrl.SetupSignalHandler()
setupLog := ctrl.Log.WithName("setup")
setupLog.Info(fmt.Sprintf("Kamaji version %s %s%s", internal.GitTag, internal.GitCommit, internal.GitDirty))
@@ -136,7 +138,7 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
return err
}
tcpChannel, certChannel := make(controllers.TenantControlPlaneChannel), make(controllers.CertificateChannel)
tcpChannel, certChannel := make(chan event.GenericEvent), make(chan event.GenericEvent)
if err = (&controllers.DataStore{Client: mgr.GetClient(), TenantControlPlaneTrigger: tcpChannel}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "DataStore")
@@ -148,11 +150,12 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
Client: mgr.GetClient(),
APIReader: mgr.GetAPIReader(),
Config: controllers.TenantControlPlaneReconcilerConfig{
ReconcileTimeout: controllerReconcileTimeout,
DefaultDataStoreName: datastore,
KineContainerImage: kineImage,
TmpBaseDirectory: tmpDirectory,
DefaultDataStoreName: datastore,
KineContainerImage: kineImage,
TmpBaseDirectory: tmpDirectory,
CertExpirationThreshold: certificateExpirationDeadline,
},
ReconcileTimeout: controllerReconcileTimeout,
CertificateChan: certChannel,
TriggerChan: tcpChannel,
KamajiNamespace: managerNamespace,
@@ -191,7 +194,10 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
}
}
if err = (&controllers.CertificateLifecycle{Channel: certChannel, Deadline: certificateExpirationDeadline}).SetupWithManager(mgr); err != nil {
certController := &controllers.CertificateLifecycle{Channel: certChannel, Deadline: certificateExpirationDeadline}
certController.EnqueueFn = certController.EnqueueForTenantControlPlane
if err = certController.SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "CertificateLifecycle")
return err
@@ -213,6 +219,9 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
routes.TenantControlPlaneMigrate{}: {
handlers.Freeze{},
},
routes.TenantControlPlaneWritePermission{}: {
handlers.WritePermission{},
},
routes.TenantControlPlaneDefaults{}: {
handlers.TenantControlPlaneDefaults{
DefaultDatastore: datastore,
@@ -222,7 +231,6 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
handlers.TenantControlPlaneCertSANs{},
handlers.TenantControlPlaneName{},
handlers.TenantControlPlaneVersion{},
handlers.TenantControlPlaneKubeletAddresses{},
handlers.TenantControlPlaneDataStore{Client: mgr.GetClient()},
handlers.TenantControlPlaneDeployment{
Client: mgr.GetClient(),
@@ -306,7 +314,7 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
cmd.Flags().StringVar(&tmpDirectory, "tmp-directory", "/tmp/kamaji", "Directory which will be used to work with temporary files.")
cmd.Flags().StringVar(&kineImage, "kine-image", "rancher/kine:v0.11.10-amd64", "Container image along with tag to use for the Kine sidecar container (used only if etcd-storage-type is set to one of kine strategies).")
cmd.Flags().StringVar(&datastore, "datastore", "", "Optional, the default DataStore that should be used by Kamaji to setup the required storage of Tenant Control Planes with undeclared DataStore.")
cmd.Flags().StringVar(&migrateJobImage, "migrate-image", fmt.Sprintf("clastix/kamaji:%s", internal.GitTag), "Specify the container image to launch when a TenantControlPlane is migrated to a new datastore.")
cmd.Flags().StringVar(&migrateJobImage, "migrate-image", fmt.Sprintf("%s/clastix/kamaji:%s", internal.ContainerRepository, internal.GitTag), "Specify the container image to launch when a TenantControlPlane is migrated to a new datastore.")
cmd.Flags().IntVar(&maxConcurrentReconciles, "max-concurrent-tcp-reconciles", 1, "Specify the number of workers for the Tenant Control Plane controller (beware of CPU consumption)")
cmd.Flags().StringVar(&managerNamespace, "pod-namespace", os.Getenv("POD_NAMESPACE"), "The Kubernetes Namespace in which the Operator is running, required for the TenantControlPlane migration jobs.")
cmd.Flags().StringVar(&managerServiceName, "webhook-service-name", "kamaji-webhook-service", "The Kamaji webhook server Service name which is used to get validation webhooks, required for the TenantControlPlane migration jobs.")

View File

@@ -22,9 +22,10 @@ import (
func NewCmd(scheme *runtime.Scheme) *cobra.Command {
// CLI flags
var (
tenantControlPlane string
targetDataStore string
timeout time.Duration
tenantControlPlane string
targetDataStore string
cleanupPriorMigration bool
timeout time.Duration
)
cmd := &cobra.Command{
@@ -95,6 +96,20 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
return err
}
defer targetConnection.Close()
if cleanupPriorMigration {
log.Info("Checking if target DataStore should be clean-up prior migration")
if exists, _ := targetConnection.DBExists(ctx, tcp.Status.Storage.Setup.Schema); exists {
log.Info("A colliding schema on target DataStore is present, cleaning up")
if dErr := targetConnection.DeleteDB(ctx, tcp.Status.Storage.Setup.Schema); dErr != nil {
return fmt.Errorf("error cleaning up prior migration: %s", dErr.Error())
}
log.Info("Cleaning up prior migration has been completed")
}
}
// Start migrating from the old Datastore to the new one
log.Info("migration from origin to target started")
@@ -110,6 +125,7 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
cmd.Flags().StringVar(&tenantControlPlane, "tenant-control-plane", "", "Namespaced-name of the TenantControlPlane that must be migrated (e.g.: default/test)")
cmd.Flags().StringVar(&targetDataStore, "target-datastore", "", "Name of the Datastore to which the TenantControlPlane will be migrated")
cmd.Flags().BoolVar(&cleanupPriorMigration, "cleanup-prior-migration", false, "When set to true, the migration job will drop existing data in the target DataStore: useful to avoid stale data when migrating back and forth between DataStores.")
cmd.Flags().DurationVar(&timeout, "timeout", 5*time.Minute, "Amount of time for the context timeout")
_ = cmd.MarkFlagRequired("tenant-control-plane")
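The same flags can be exercised by hand; a hypothetical Job sketch, assuming the clastix/kamaji image exposes this subcommand as migrate and that the bound ServiceAccount can reach both DataStores:

apiVersion: batch/v1
kind: Job
metadata:
  name: migrate-default-test     # hypothetical
  namespace: kamaji-system
spec:
  template:
    spec:
      serviceAccountName: kamaji-controller-manager   # assumption
      restartPolicy: Never
      containers:
      - name: migrate
        image: clastix/kamaji:latest                  # assumption: pick a released tag
        args:
        - migrate
        - --tenant-control-plane=default/test
        - --target-datastore=postgres-bronze          # hypothetical DataStore
        - --cleanup-prior-migration=true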

View File

@@ -0,0 +1,21 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: KubeconfigGenerator
metadata:
name: tenant
spec:
controlPlaneEndpointFrom: admin.conf
groups:
- stringValue: custom.group.tld
- fromDefinition: metadata.namespace
namespaceSelector:
matchExpressions:
- key: kubernetes.io/metadata.name
operator: Exists
values: []
tenantControlPlaneSelector:
matchExpressions:
- key: tenant.clastix.io
operator: Exists
values: []
user:
fromDefinition: metadata.name

View File

@@ -1,9 +1,9 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
name: k8s-130
name: k8s-133
labels:
tenant.clastix.io: k8s-130
tenant.clastix.io: k8s-133
spec:
controlPlane:
deployment:
@@ -11,7 +11,7 @@ spec:
service:
serviceType: LoadBalancer
kubernetes:
version: "v1.30.0"
version: "v1.33.0"
kubelet:
cgroupfs: systemd
networkProfile:
@@ -22,3 +22,5 @@ spec:
konnectivity:
server:
port: 8132
agent:
mode: DaemonSet
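Per the CEL rule added to the CRD, an agent running as a Deployment must set non-zero replicas; a variant sketch of the stanza above:

  addons:
    konnectivity:
      server:
        port: 8132
      agent:
        mode: Deployment
        replicas: 2   # must be greater than 0 in Deployment mode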

View File

@@ -0,0 +1,36 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
name: example-hostnetwork-tcp
namespace: tenant-system
spec:
controlPlane:
deployment:
replicas: 2
service:
serviceType: LoadBalancer
kubernetes:
version: v1.29.0
kubelet:
cgroupfs: systemd
preferredAddressTypes: ["InternalIP", "ExternalIP"]
networkProfile:
address: "10.0.0.100"
port: 6443
serviceCidr: "10.96.0.0/16"
podCidr: "10.244.0.0/16"
addons:
coreDNS: {}
konnectivity:
server:
port: 8132
agent:
hostNetwork: true
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
- key: "node.kubernetes.io/not-ready"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 300
kubeProxy: {}

View File

@@ -1,10 +0,0 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"sigs.k8s.io/controller-runtime/pkg/event"
)
type CertificateChannel chan event.GenericEvent

View File

@@ -24,14 +24,16 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/constants"
"github.com/clastix/kamaji/internal/crypto"
"github.com/clastix/kamaji/internal/utilities"
)
type CertificateLifecycle struct {
Channel CertificateChannel
Deadline time.Duration
Channel chan event.GenericEvent
Deadline time.Duration
EnqueueFn func(secret *corev1.Secret)
client client.Client
}
@@ -41,19 +43,25 @@ func (s *CertificateLifecycle) Reconcile(ctx context.Context, request reconcile.
logger.Info("starting CertificateLifecycle handling")
secret := corev1.Secret{}
err := s.client.Get(ctx, request.NamespacedName, &secret)
if k8serrors.IsNotFound(err) {
logger.Info("resource have been deleted, skipping")
var secret corev1.Secret
if err := s.client.Get(ctx, request.NamespacedName, &secret); err != nil {
if k8serrors.IsNotFound(err) {
logger.Info("resource may have been deleted, skipping")
return reconcile.Result{}, nil
}
return reconcile.Result{}, nil
}
if err != nil {
logger.Error(err, "cannot retrieve the required resource")
return reconcile.Result{}, err
}
if utils.IsPaused(&secret) {
logger.Info("paused reconciliation, no further actions")
return reconcile.Result{}, nil
}
checkType, ok := secret.GetLabels()[constants.ControllerLabelResource]
if !ok {
logger.Info("missing controller label, shouldn't happen")
@@ -62,14 +70,15 @@ func (s *CertificateLifecycle) Reconcile(ctx context.Context, request reconcile.
}
var crt *x509.Certificate
var err error
switch checkType {
case "x509":
case utilities.CertificateX509Label:
crt, err = s.extractCertificateFromBareSecret(secret)
case "kubeconfig":
case utilities.CertificateKubeconfigLabel:
crt, err = s.extractCertificateFromKubeconfig(secret)
default:
err = fmt.Errorf("unsupported strategy, %s", checkType)
return reconcile.Result{}, fmt.Errorf("unsupported strategy, %q", checkType)
}
if err != nil {
@@ -83,12 +92,7 @@ func (s *CertificateLifecycle) Reconcile(ctx context.Context, request reconcile.
if deadline.After(crt.NotAfter) {
logger.Info("certificate near expiration, must be rotated")
s.Channel <- event.GenericEvent{Object: &kamajiv1alpha1.TenantControlPlane{
ObjectMeta: metav1.ObjectMeta{
Name: secret.GetOwnerReferences()[0].Name,
Namespace: secret.Namespace,
},
}}
s.EnqueueFn(&secret)
logger.Info("certificate rotation triggered")
@@ -99,7 +103,36 @@ func (s *CertificateLifecycle) Reconcile(ctx context.Context, request reconcile.
logger.Info("certificate is still valid, enqueuing back", "after", after.String())
return reconcile.Result{Requeue: true, RequeueAfter: after}, nil
return reconcile.Result{RequeueAfter: after}, nil
}
func (s *CertificateLifecycle) EnqueueForTenantControlPlane(secret *corev1.Secret) {
for _, or := range secret.GetOwnerReferences() {
if or.Kind != "TenantControlPlane" {
continue
}
s.Channel <- event.GenericEvent{Object: &kamajiv1alpha1.TenantControlPlane{
ObjectMeta: metav1.ObjectMeta{
Name: or.Name,
Namespace: secret.Namespace,
},
}}
}
}
func (s *CertificateLifecycle) EnqueueForKubeconfigGenerator(secret *corev1.Secret) {
for _, or := range secret.GetOwnerReferences() {
if or.Kind != "KubeconfigGenerator" {
continue
}
s.Channel <- event.GenericEvent{Object: &kamajiv1alpha1.TenantControlPlane{
ObjectMeta: metav1.ObjectMeta{
Name: or.Name,
},
}}
}
}
func (s *CertificateLifecycle) extractCertificateFromBareSecret(secret corev1.Secret) (*x509.Certificate, error) {
@@ -144,7 +177,7 @@ func (s *CertificateLifecycle) extractCertificateFromKubeconfig(secret corev1.Se
func (s *CertificateLifecycle) SetupWithManager(mgr controllerruntime.Manager) error {
s.client = mgr.GetClient()
supportedStrategies := sets.New[string]("x509", "kubeconfig")
supportedStrategies := sets.New[string](utilities.CertificateX509Label, utilities.CertificateKubeconfigLabel)
return controllerruntime.NewControllerManagedBy(mgr).
For(&corev1.Secret{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {

View File
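
The refactor above drops the dedicated CertificateChannel type and the hardcoded owner-reference lookup in favour of an injectable EnqueueFn, so one lifecycle controller can serve both TenantControlPlane-owned and KubeconfigGenerator-owned secrets. A minimal wiring sketch; the call site is not part of this hunk, so the surrounding names are assumptions:

lifecycle := &CertificateLifecycle{
    Channel:  make(chan event.GenericEvent),
    Deadline: 24 * time.Hour,
}
// Pick the enqueue strategy per instance instead of hardcoding owner kinds:
lifecycle.EnqueueFn = lifecycle.EnqueueForTenantControlPlane
// ...or, for generated kubeconfig secrets:
// lifecycle.EnqueueFn = lifecycle.EnqueueForKubeconfigGenerator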

@@ -6,10 +6,12 @@ package controllers
import (
"context"
"github.com/pkg/errors"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/fields"
k8stypes "k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/client-go/util/retry"
"k8s.io/client-go/util/workqueue"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
@@ -21,62 +23,77 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/controllers/utils"
)
type DataStore struct {
Client client.Client
// TenantControlPlaneTrigger is the channel used to communicate across the controllers:
// if a Data Source is updated we have to be sure that the reconciliation of the certificates content
// if a Data Source is updated, we have to be sure that the reconciliation of the certificates content
// for each Tenant Control Plane is put in place properly.
TenantControlPlaneTrigger TenantControlPlaneChannel
TenantControlPlaneTrigger chan event.GenericEvent
}
//+kubebuilder:rbac:groups=kamaji.clastix.io,resources=datastores,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=kamaji.clastix.io,resources=datastores/status,verbs=get;update;patch
func (r *DataStore) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
log := log.FromContext(ctx)
logger := log.FromContext(ctx)
ds := &kamajiv1alpha1.DataStore{}
err := r.Client.Get(ctx, request.NamespacedName, ds)
if k8serrors.IsNotFound(err) {
log.Info("resource have been deleted, skipping")
var ds kamajiv1alpha1.DataStore
if err := r.Client.Get(ctx, request.NamespacedName, &ds); err != nil {
if k8serrors.IsNotFound(err) {
logger.Info("resource may have been deleted, skipping")
return reconcile.Result{}, nil
}
logger.Error(err, "cannot retrieve the required resource")
return reconcile.Result{}, err
}
if utils.IsPaused(&ds) {
logger.Info("paused reconciliation, no further actions")
return reconcile.Result{}, nil
}
if err != nil {
log.Error(err, "cannot retrieve the required resource")
return reconcile.Result{}, err
}
var tcpList kamajiv1alpha1.TenantControlPlaneList
tcpList := kamajiv1alpha1.TenantControlPlaneList{}
updateErr := retry.RetryOnConflict(retry.DefaultRetry, func() error {
if lErr := r.Client.List(ctx, &tcpList, client.MatchingFieldsSelector{
Selector: fields.OneTermEqualSelector(kamajiv1alpha1.TenantControlPlaneUsedDataStoreKey, ds.GetName()),
}); lErr != nil {
return errors.Wrap(lErr, "cannot retrieve list of the Tenant Control Plane using the following instance")
}
// Updating the status with the list of Tenant Control Plane using the following Data Source
tcpSets := sets.NewString()
for _, tcp := range tcpList.Items {
tcpSets.Insert(getNamespacedName(tcp.GetNamespace(), tcp.GetName()).String())
}
if err := r.Client.List(ctx, &tcpList, client.MatchingFieldsSelector{
Selector: fields.OneTermEqualSelector(kamajiv1alpha1.TenantControlPlaneUsedDataStoreKey, ds.GetName()),
}); err != nil {
log.Error(err, "cannot retrieve list of the Tenant Control Plane using the following instance")
ds.Status.UsedBy = tcpSets.List()
return reconcile.Result{}, err
}
// Updating the status with the list of Tenant Control Plane using the following Data Source
tcpSets := sets.NewString()
for _, tcp := range tcpList.Items {
tcpSets.Insert(getNamespacedName(tcp.GetNamespace(), tcp.GetName()).String())
}
if sErr := r.Client.Status().Update(ctx, &ds); sErr != nil {
return errors.Wrap(sErr, "cannot update the status for the given instance")
}
ds.Status.UsedBy = tcpSets.List()
return nil
})
if updateErr != nil {
logger.Error(updateErr, "cannot update DataStore status")
if err := r.Client.Status().Update(ctx, ds); err != nil {
log.Error(err, "cannot update the status for the given instance")
return reconcile.Result{}, err
return reconcile.Result{}, updateErr
}
// Triggering the reconciliation of the Tenant Control Plane upon a Secret change
for _, i := range tcpList.Items {
tcp := i
for _, tcp := range tcpList.Items {
var shrunkTCP kamajiv1alpha1.TenantControlPlane
r.TenantControlPlaneTrigger <- event.GenericEvent{Object: &tcp}
shrunkTCP.Name = tcp.Name
shrunkTCP.Namespace = tcp.Namespace
go utils.TriggerChannel(ctx, r.TenantControlPlaneTrigger, shrunkTCP)
}
return reconcile.Result{}, nil
@@ -95,7 +112,7 @@ func (r *DataStore) SetupWithManager(mgr controllerruntime.Manager) error {
//nolint:forcetypeassert
return controllerruntime.NewControllerManagedBy(mgr).
For(&kamajiv1alpha1.DataStore{}, builder.WithPredicates(
predicate.ResourceVersionChangedPredicate{},
predicate.GenerationChangedPredicate{},
)).
Watches(&kamajiv1alpha1.TenantControlPlane{}, handler.Funcs{
CreateFunc: func(_ context.Context, createEvent event.TypedCreateEvent[client.Object], w workqueue.TypedRateLimitingInterface[reconcile.Request]) {

View File
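
Two behavioural changes stand out here: the status update is retried with RetryOnConflict, and trigger events are sent through utils.TriggerChannel in a goroutine instead of a bare, blocking channel send. The helper body is not shown in this diff; a plausible shape, stated purely as an assumption:

// Hypothetical sketch of utils.TriggerChannel: a context-aware send, so a
// slow or stopped consumer cannot block the reconciler forever.
func TriggerChannel(ctx context.Context, ch chan event.GenericEvent, tcp kamajiv1alpha1.TenantControlPlane) {
    select {
    case ch <- event.GenericEvent{Object: &tcp}:
    case <-ctx.Done():
    }
}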

@@ -0,0 +1,444 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"bytes"
"context"
"crypto/x509"
"fmt"
"sort"
"strings"
"time"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/sets"
clientcmdapiv1 "k8s.io/client-go/tools/clientcmd/api/v1"
certutil "k8s.io/client-go/util/cert"
"k8s.io/client-go/util/keyutil"
"k8s.io/client-go/util/workqueue"
kubeadmconstants "k8s.io/kubernetes/cmd/kubeadm/app/constants"
"k8s.io/kubernetes/cmd/kubeadm/app/util"
"k8s.io/kubernetes/cmd/kubeadm/app/util/pkiutil"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/constants"
"github.com/clastix/kamaji/internal/crypto"
"github.com/clastix/kamaji/internal/resources"
"github.com/clastix/kamaji/internal/utilities"
)
type KubeconfigGeneratorReconciler struct {
Client client.Client
NotValidThreshold time.Duration
CertificateChan chan event.GenericEvent
}
//+kubebuilder:rbac:groups="",resources=namespaces,verbs=get;list;watch
//+kubebuilder:rbac:groups=kamaji.clastix.io,resources=kubeconfiggenerators,verbs=get;list;watch;create;update;patch
//+kubebuilder:rbac:groups=kamaji.clastix.io,resources=kubeconfiggenerators/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=kamaji.clastix.io,resources=kubeconfiggenerators/finalizers,verbs=update
func (r *KubeconfigGeneratorReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
logger := log.FromContext(ctx)
logger.Info("reconciling resource")
var generator kamajiv1alpha1.KubeconfigGenerator
if err := r.Client.Get(ctx, req.NamespacedName, &generator); err != nil {
if apierrors.IsNotFound(err) {
logger.Info("resource may have been deleted, skipping")
return ctrl.Result{}, nil
}
logger.Error(err, "cannot retrieve the required resource")
return ctrl.Result{}, err
}
if utils.IsPaused(&generator) {
logger.Info("paused reconciliation, no further actions")
return ctrl.Result{}, nil
}
status, err := r.handle(ctx, &generator)
if err != nil {
logger.Error(err, "cannot handle the request")
return ctrl.Result{}, err
}
generator.Status = status
if statusErr := r.Client.Status().Update(ctx, &generator); statusErr != nil {
logger.Error(statusErr, "cannot update resource status")
return ctrl.Result{}, statusErr
}
logger.Info("reconciling completed")
return ctrl.Result{}, nil
}
func (r *KubeconfigGeneratorReconciler) handle(ctx context.Context, generator *kamajiv1alpha1.KubeconfigGenerator) (kamajiv1alpha1.KubeconfigGeneratorStatus, error) {
nsSelector, nsErr := metav1.LabelSelectorAsSelector(&generator.Spec.NamespaceSelector)
if nsErr != nil {
return kamajiv1alpha1.KubeconfigGeneratorStatus{}, errors.Wrap(nsErr, "NamespaceSelector contains an error")
}
var namespaceList corev1.NamespaceList
if err := r.Client.List(ctx, &namespaceList, &client.ListOptions{LabelSelector: nsSelector}); err != nil {
return kamajiv1alpha1.KubeconfigGeneratorStatus{}, errors.Wrap(err, "cannot filter Namespace objects using provided selector")
}
var targets []kamajiv1alpha1.TenantControlPlane
for _, ns := range namespaceList.Items {
tcpSelector, tcpErr := metav1.LabelSelectorAsSelector(&generator.Spec.TenantControlPlaneSelector)
if tcpErr != nil {
return kamajiv1alpha1.KubeconfigGeneratorStatus{}, errors.Wrap(tcpErr, "TenantControlPlaneSelector contains an error")
}
var tcpList kamajiv1alpha1.TenantControlPlaneList
if err := r.Client.List(ctx, &tcpList, &client.ListOptions{Namespace: ns.GetName(), LabelSelector: tcpSelector}); err != nil {
return kamajiv1alpha1.KubeconfigGeneratorStatus{}, errors.Wrap(err, "cannot filter TenantControlPlane objects using provided selector")
}
targets = append(targets, tcpList.Items...)
}
sort.Slice(targets, func(i, j int) bool {
return client.ObjectKeyFromObject(&targets[i]).String() < client.ObjectKeyFromObject(&targets[j]).String()
})
status := kamajiv1alpha1.KubeconfigGeneratorStatus{
Resources: len(targets),
AvailableResources: len(targets),
}
for _, tcp := range targets {
if err := r.process(ctx, generator, tcp); err != nil {
status.Errors = append(status.Errors, *err)
status.AvailableResources--
}
}
return status, nil
}
func (r *KubeconfigGeneratorReconciler) process(ctx context.Context, generator *kamajiv1alpha1.KubeconfigGenerator, tcp kamajiv1alpha1.TenantControlPlane) *kamajiv1alpha1.KubeconfigGeneratorStatusError {
statusErr := kamajiv1alpha1.KubeconfigGeneratorStatusError{
Resource: client.ObjectKeyFromObject(&tcp).String(),
}
var adminSecret corev1.Secret
if tcp.Status.KubeConfig.Admin.SecretName == "" {
statusErr.Message = "the admin kubeconfig is not yet generated"
return &statusErr
}
if err := r.Client.Get(ctx, types.NamespacedName{Name: tcp.Status.KubeConfig.Admin.SecretName, Namespace: tcp.GetNamespace()}, &adminSecret); err != nil {
statusErr.Message = fmt.Sprintf("an error occurred retrieving the admin Kubeconfig: %s", err.Error())
return &statusErr
}
kubeconfigTmpl, kcErr := utilities.DecodeKubeconfig(adminSecret, generator.Spec.ControlPlaneEndpointFrom)
if kcErr != nil {
statusErr.Message = fmt.Sprintf("unable to decode Kubeconfig template: %s", kcErr.Error())
return &statusErr
}
uMap, uErr := runtime.DefaultUnstructuredConverter.ToUnstructured(&tcp)
if uErr != nil {
statusErr.Message = fmt.Sprintf("cannot convert the resource to a map: %s", uErr)
return &statusErr
}
var user string
groups := sets.New[string]()
for _, group := range generator.Spec.Groups {
switch {
case group.StringValue != "":
groups.Insert(group.StringValue)
case group.FromDefinition != "":
v, ok, vErr := unstructured.NestedString(uMap, strings.Split(group.FromDefinition, ".")...)
switch {
case vErr != nil:
statusErr.Message = fmt.Sprintf("cannot run NestedString %q due to an error: %s", group.FromDefinition, vErr.Error())
return &statusErr
case !ok:
statusErr.Message = fmt.Sprintf("provided dot notation %q is not found", group.FromDefinition)
return &statusErr
default:
groups.Insert(v)
}
default:
statusErr.Message = "at least a StringValue or FromDefinition Group value must be provided"
return &statusErr
}
}
switch {
case generator.Spec.User.StringValue != "":
user = generator.Spec.User.StringValue
case generator.Spec.User.FromDefinition != "":
v, ok, vErr := unstructured.NestedString(uMap, strings.Split(generator.Spec.User.FromDefinition, ".")...)
switch {
case vErr != nil:
statusErr.Message = fmt.Sprintf("cannot run NestedString %q due to an error: %s", generator.Spec.User.FromDefinition, vErr.Error())
return &statusErr
case !ok:
statusErr.Message = fmt.Sprintf("provided dot notation %q is not found", generator.Spec.User.FromDefinition)
return &statusErr
default:
user = v
}
default:
statusErr.Message = "at least a StringValue or FromDefinition for the user field must be provided"
return &statusErr
}
var resultSecret corev1.Secret
resultSecret.SetName(tcp.Name + "-" + generator.Name)
resultSecret.SetNamespace(tcp.Namespace)
objectKey := client.ObjectKeyFromObject(&resultSecret)
if err := r.Client.Get(ctx, objectKey, &resultSecret); err != nil {
if !apierrors.IsNotFound(err) {
statusErr.Message = fmt.Sprintf("the secret %q cannot be generated", objectKey.String())
return &statusErr
}
if generateErr := r.generate(ctx, generator, &resultSecret, kubeconfigTmpl, &tcp, groups, user); generateErr != nil {
statusErr.Message = fmt.Sprintf("an error occurred generating the %q Secret: %s", objectKey.String(), generateErr.Error())
return &statusErr
}
return nil
}
isValid, validateErr := r.isValid(&resultSecret, kubeconfigTmpl, groups, user)
switch {
case !isValid:
if generateErr := r.generate(ctx, generator, &resultSecret, kubeconfigTmpl, &tcp, groups, user); generateErr != nil {
statusErr.Message = fmt.Sprintf("an error occurred regenerating the %q Secret: %s", objectKey.String(), generateErr.Error())
return &statusErr
}
return nil
case validateErr != nil:
statusErr.Message = fmt.Sprintf("an error occurred checking validation for %q Secret: %s", objectKey.String(), validateErr.Error())
return &statusErr
default:
return nil
}
}
func (r *KubeconfigGeneratorReconciler) generate(ctx context.Context, generator *kamajiv1alpha1.KubeconfigGenerator, secret *corev1.Secret, tmpl *clientcmdapiv1.Config, tcp *kamajiv1alpha1.TenantControlPlane, groups sets.Set[string], user string) error {
_, config, err := resources.GetKubeadmManifestDeps(ctx, r.Client, tcp)
if err != nil {
return err
}
clientCertConfig := pkiutil.CertConfig{
Config: certutil.Config{
CommonName: user,
Organization: groups.UnsortedList(),
Usages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
},
NotAfter: util.StartTimeUTC().Add(kubeadmconstants.CertificateValidityPeriod),
EncryptionAlgorithm: config.InitConfiguration.ClusterConfiguration.EncryptionAlgorithmType(),
}
var caSecret corev1.Secret
if caErr := r.Client.Get(ctx, types.NamespacedName{Namespace: tcp.Namespace, Name: tcp.Status.Certificates.CA.SecretName}, &caSecret); caErr != nil {
return errors.Wrap(caErr, "cannot retrieve Certificate Authority")
}
caCert, crtErr := crypto.ParseCertificateBytes(caSecret.Data[kubeadmconstants.CACertName])
if crtErr != nil {
return errors.Wrap(crtErr, "cannot parse Certificate Authority certificate")
}
caKey, keyErr := crypto.ParsePrivateKeyBytes(caSecret.Data[kubeadmconstants.CAKeyName])
if keyErr != nil {
return errors.Wrap(keyErr, "cannot parse Certificate Authority key")
}
clientCert, clientKey, err := pkiutil.NewCertAndKey(caCert, caKey, &clientCertConfig)
if err != nil {
return errors.Wrap(err, "cannot generate client certificate and key")
}
contextUserName := generator.Name
for name := range tmpl.AuthInfos {
tmpl.AuthInfos[name].Name = contextUserName
tmpl.AuthInfos[name].AuthInfo.ClientCertificateData = pkiutil.EncodeCertPEM(clientCert)
tmpl.AuthInfos[name].AuthInfo.ClientKeyData, err = keyutil.MarshalPrivateKeyToPEM(clientKey)
if err != nil {
return errors.Wrap(err, "cannot marshal private key to PEM")
}
}
for name := range tmpl.Contexts {
tmpl.Contexts[name].Name = contextUserName
tmpl.Contexts[name].Context.AuthInfo = contextUserName
}
tmpl.CurrentContext = contextUserName
_, err = utilities.CreateOrUpdateWithConflict(ctx, r.Client, secret, func() error {
labels := secret.GetLabels()
if labels == nil {
labels = map[string]string{}
}
labels[kamajiv1alpha1.ManagedByLabel] = generator.Name
labels[kamajiv1alpha1.ManagedForLabel] = tcp.Name
labels[constants.ControllerLabelResource] = utilities.CertificateKubeconfigLabel
secret.SetLabels(labels)
if secret.Data == nil {
secret.Data = make(map[string][]byte)
}
secret.Data["value"], err = utilities.EncodeToYaml(tmpl)
if err != nil {
return errors.Wrap(err, "cannot encode generated Kubeconfig to YAML")
}
if utilities.IsRotationRequested(secret) {
utilities.SetLastRotationTimestamp(secret)
}
if orErr := controllerutil.SetOwnerReference(tcp, secret, r.Client.Scheme()); orErr != nil {
return orErr
}
return ctrl.SetControllerReference(tcp, secret, r.Client.Scheme())
})
if err != nil {
return errors.Wrap(err, "cannot create or update generated Kubeconfig")
}
return nil
}
func (r *KubeconfigGeneratorReconciler) isValid(secret *corev1.Secret, tmpl *clientcmdapiv1.Config, groups sets.Set[string], user string) (bool, error) {
if utilities.IsRotationRequested(secret) {
return false, nil
}
concrete, decodeErr := utilities.DecodeKubeconfig(*secret, "value")
if decodeErr != nil {
return false, decodeErr
}
// Checking Certificate Authority validity
switch {
case len(concrete.Clusters) != len(tmpl.Clusters):
return false, nil
default:
for i := range tmpl.Clusters {
if !bytes.Equal(tmpl.Clusters[i].Cluster.CertificateAuthorityData, concrete.Clusters[i].Cluster.CertificateAuthorityData) {
return false, nil
}
if tmpl.Clusters[i].Cluster.Server != concrete.Clusters[i].Cluster.Server {
return false, nil
}
}
}
for _, auth := range concrete.AuthInfos {
valid, vErr := crypto.IsValidCertificateKeyPairBytes(auth.AuthInfo.ClientCertificateData, auth.AuthInfo.ClientKeyData, r.NotValidThreshold)
if vErr != nil {
return false, vErr
}
if !valid {
return false, nil
}
crt, crtErr := crypto.ParseCertificateBytes(auth.AuthInfo.ClientCertificateData)
if crtErr != nil {
return false, crtErr
}
if crt.Subject.CommonName != user {
return false, nil
}
if !sets.New[string](crt.Subject.Organization...).Equal(groups) {
return false, nil
}
}
return true, nil
}
func (r *KubeconfigGeneratorReconciler) SetupWithManager(mgr manager.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&kamajiv1alpha1.KubeconfigGenerator{}).
WatchesRawSource(source.Channel(r.CertificateChan, handler.Funcs{GenericFunc: func(_ context.Context, genericEvent event.TypedGenericEvent[client.Object], w workqueue.TypedRateLimitingInterface[reconcile.Request]) {
w.AddRateLimited(ctrl.Request{
NamespacedName: types.NamespacedName{
Name: genericEvent.Object.GetName(),
},
})
}})).
Watches(&corev1.Secret{}, handler.TypedEnqueueRequestsFromMapFunc(func(ctx context.Context, object client.Object) []ctrl.Request {
if object.GetLabels() == nil {
return nil
}
v, found := object.GetLabels()[kamajiv1alpha1.ManagedByLabel]
if !found {
return nil
}
return []ctrl.Request{
{
NamespacedName: types.NamespacedName{
Name: v,
},
},
}
})).
Complete(r)
}

View File
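
The FromDefinition fields are resolved by converting the TenantControlPlane to an unstructured map and walking the dot-notation path, as the process method does above. A standalone sketch of the same mechanism using the apimachinery helpers:

// resolveFromDefinition resolves e.g. "metadata.name", matching the
// user.fromDefinition example in the manifest at the top of this diff set.
func resolveFromDefinition(tcp *kamajiv1alpha1.TenantControlPlane, path string) (string, error) {
    uMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(tcp)
    if err != nil {
        return "", err
    }
    v, ok, err := unstructured.NestedString(uMap, strings.Split(path, ".")...)
    if err != nil || !ok {
        return "", fmt.Errorf("dot notation %q not found", path)
    }
    return v, nil
}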

@@ -0,0 +1,75 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"context"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/manager"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)
type KubeconfigGeneratorWatcher struct {
Client client.Client
GeneratorChan chan event.GenericEvent
}
func (r *KubeconfigGeneratorWatcher) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
logger := log.FromContext(ctx)
logger.Info("reconciling resource")
var tcp kamajiv1alpha1.TenantControlPlane
if err := r.Client.Get(ctx, req.NamespacedName, &tcp); err != nil {
if apierrors.IsNotFound(err) {
logger.Info("resource may have been deleted, skipping")
return ctrl.Result{}, nil
}
logger.Error(err, "cannot retrieve the required resource")
return ctrl.Result{}, err
}
var generators kamajiv1alpha1.KubeconfigGeneratorList
if err := r.Client.List(ctx, &generators); err != nil {
logger.Error(err, "cannot list generators")
return ctrl.Result{}, err
}
for _, generator := range generators.Items {
sel, err := metav1.LabelSelectorAsSelector(&generator.Spec.TenantControlPlaneSelector)
if err != nil {
logger.Error(err, "cannot validate Selector", "generator", generator.Name)
return ctrl.Result{}, err
}
if sel.Matches(labels.Set(tcp.Labels)) {
logger.Info("pushing Generator", "generator", generator.Name)
r.GeneratorChan <- event.GenericEvent{
Object: &generator,
}
}
}
return ctrl.Result{}, nil
}
func (r *KubeconfigGeneratorWatcher) SetupWithManager(mgr manager.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&kamajiv1alpha1.TenantControlPlane{}).
Complete(r)
}

View File
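
The watcher fans a TenantControlPlane change out to every KubeconfigGenerator whose selector matches the control plane labels. Note that an empty LabelSelector converts to labels.Everything(), so a generator without a tenantControlPlaneSelector targets all control planes. A quick check against the labels used by the earlier example:

sel, _ := metav1.LabelSelectorAsSelector(&metav1.LabelSelector{
    MatchExpressions: []metav1.LabelSelectorRequirement{{
        Key:      "tenant.clastix.io",
        Operator: metav1.LabelSelectorOpExists,
    }},
})
fmt.Println(sel.Matches(labels.Set{"tenant.clastix.io": "k8s-133"})) // true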

@@ -5,6 +5,7 @@ package controllers
import (
"fmt"
"time"
"github.com/go-logr/logr"
"github.com/google/uuid"
@@ -26,6 +27,7 @@ type GroupResourceBuilderConfiguration struct {
log logr.Logger
tcpReconcilerConfig TenantControlPlaneReconcilerConfig
tenantControlPlane kamajiv1alpha1.TenantControlPlane
ExpirationThreshold time.Duration
Connection datastore.Connection
DataStore kamajiv1alpha1.DataStore
KamajiNamespace string
@@ -78,8 +80,8 @@ func getDefaultResources(config GroupResourceBuilderConfiguration) []resources.R
resources = append(resources, getKubeadmConfigResources(config.client, getTmpDirectory(config.tcpReconcilerConfig.TmpBaseDirectory, config.tenantControlPlane), config.DataStore)...)
resources = append(resources, getKubernetesCertificatesResources(config.client, config.tcpReconcilerConfig, config.tenantControlPlane)...)
resources = append(resources, getKubeconfigResources(config.client, config.tcpReconcilerConfig, config.tenantControlPlane)...)
resources = append(resources, getKubernetesStorageResources(config.client, config.Connection, config.DataStore)...)
resources = append(resources, getKonnectivityServerRequirementsResources(config.client)...)
resources = append(resources, getKubernetesStorageResources(config.client, config.Connection, config.DataStore, config.ExpirationThreshold)...)
resources = append(resources, getKonnectivityServerRequirementsResources(config.client, config.ExpirationThreshold)...)
resources = append(resources, getKubernetesDeploymentResources(config.client, config.tcpReconcilerConfig, config.DataStore)...)
resources = append(resources, getKonnectivityServerPatchResources(config.client)...)
resources = append(resources, getDataStoreMigratingCleanup(config.client, config.KamajiNamespace)...)
@@ -148,28 +150,33 @@ func getKubeadmConfigResources(c client.Client, tmpDirectory string, dataStore k
func getKubernetesCertificatesResources(c client.Client, tcpReconcilerConfig TenantControlPlaneReconcilerConfig, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
return []resources.Resource{
&resources.CACertificate{
Client: c,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
Client: c,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
CertExpirationThreshold: tcpReconcilerConfig.CertExpirationThreshold,
},
&resources.FrontProxyCACertificate{
Client: c,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
Client: c,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
CertExpirationThreshold: tcpReconcilerConfig.CertExpirationThreshold,
},
&resources.SACertificate{
Client: c,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
},
&resources.APIServerCertificate{
Client: c,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
Client: c,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
CertExpirationThreshold: tcpReconcilerConfig.CertExpirationThreshold,
},
&resources.APIServerKubeletClientCertificate{
Client: c,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
Client: c,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
CertExpirationThreshold: tcpReconcilerConfig.CertExpirationThreshold,
},
&resources.FrontProxyClientCertificate{
Client: c,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
Client: c,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
CertExpirationThreshold: tcpReconcilerConfig.CertExpirationThreshold,
},
}
}
@@ -177,33 +184,37 @@ func getKubernetesCertificatesResources(c client.Client, tcpReconcilerConfig Ten
func getKubeconfigResources(c client.Client, tcpReconcilerConfig TenantControlPlaneReconcilerConfig, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
return []resources.Resource{
&resources.KubeconfigResource{
Name: "admin-kubeconfig",
Client: c,
KubeConfigFileName: resources.AdminKubeConfigFileName,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
Client: c,
Name: "admin-kubeconfig",
KubeConfigFileName: resources.AdminKubeConfigFileName,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
CertExpirationThreshold: tcpReconcilerConfig.CertExpirationThreshold,
},
&resources.KubeconfigResource{
Name: "admin-kubeconfig",
Client: c,
KubeConfigFileName: resources.SuperAdminKubeConfigFileName,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
Client: c,
Name: "admin-kubeconfig",
KubeConfigFileName: resources.SuperAdminKubeConfigFileName,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
CertExpirationThreshold: tcpReconcilerConfig.CertExpirationThreshold,
},
&resources.KubeconfigResource{
Name: "controller-manager-kubeconfig",
Client: c,
KubeConfigFileName: resources.ControllerManagerKubeConfigFileName,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
Client: c,
Name: "controller-manager-kubeconfig",
KubeConfigFileName: resources.ControllerManagerKubeConfigFileName,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
CertExpirationThreshold: tcpReconcilerConfig.CertExpirationThreshold,
},
&resources.KubeconfigResource{
Name: "scheduler-kubeconfig",
Client: c,
KubeConfigFileName: resources.SchedulerKubeConfigFileName,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
Client: c,
Name: "scheduler-kubeconfig",
KubeConfigFileName: resources.SchedulerKubeConfigFileName,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
CertExpirationThreshold: tcpReconcilerConfig.CertExpirationThreshold,
},
}
}
func getKubernetesStorageResources(c client.Client, dbConnection datastore.Connection, datastore kamajiv1alpha1.DataStore) []resources.Resource {
func getKubernetesStorageResources(c client.Client, dbConnection datastore.Connection, datastore kamajiv1alpha1.DataStore, threshold time.Duration) []resources.Resource {
return []resources.Resource{
&ds.MultiTenancy{
DataStore: datastore,
@@ -219,8 +230,9 @@ func getKubernetesStorageResources(c client.Client, dbConnection datastore.Conne
DataStore: datastore,
},
&ds.Certificate{
Client: c,
DataStore: datastore,
Client: c,
DataStore: datastore,
CertExpirationThreshold: threshold,
},
}
}
@@ -251,10 +263,10 @@ func GetExternalKonnectivityResources(c client.Client) []resources.Resource {
}
}
func getKonnectivityServerRequirementsResources(c client.Client) []resources.Resource {
func getKonnectivityServerRequirementsResources(c client.Client, threshold time.Duration) []resources.Resource {
return []resources.Resource{
&konnectivity.EgressSelectorConfigurationResource{Client: c},
&konnectivity.CertificateResource{Client: c},
&konnectivity.CertificateResource{Client: c, CertExpirationThreshold: threshold},
&konnectivity.KubeconfigResource{Client: c},
}
}

View File
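
CertExpirationThreshold is now threaded through every certificate-producing resource, including the datastore and konnectivity certificates. The check it feeds is the same one CertificateLifecycle applies earlier in this diff set; condensed as a sketch:

// A certificate counts as expiring once now+threshold passes NotAfter, so
// rotation starts a full threshold ahead of the actual expiry.
func isExpiring(crt *x509.Certificate, threshold time.Duration) bool {
    return time.Now().Add(threshold).After(crt.NotAfter)
}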

@@ -7,6 +7,7 @@ import (
"context"
"github.com/go-logr/logr"
"github.com/pkg/errors"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
@@ -23,6 +24,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
sooterrors "github.com/clastix/kamaji/controllers/soot/controllers/errors"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/kubeadm"
"github.com/clastix/kamaji/internal/resources"
@@ -30,8 +32,7 @@ import (
)
type CoreDNS struct {
logger logr.Logger
Logger logr.Logger
AdminClient client.Client
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
TriggerChannel chan event.GenericEvent
@@ -40,43 +41,44 @@ type CoreDNS struct {
func (c *CoreDNS) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
tcp, err := c.GetTenantControlPlaneFunc()
if err != nil {
c.logger.Error(err, "cannot retrieve TenantControlPlane")
if errors.Is(err, sooterrors.ErrPausedReconciliation) {
c.Logger.Info(err.Error())
return reconcile.Result{}, nil
}
return reconcile.Result{}, err
}
c.logger.Info("start processing")
c.Logger.Info("start processing")
resource := &addons.CoreDNS{Client: c.AdminClient}
result, handlingErr := resources.Handle(ctx, resource, tcp)
if handlingErr != nil {
c.logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
c.Logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
return reconcile.Result{}, handlingErr
}
if result == controllerutil.OperationResultNone {
c.logger.Info("reconciliation completed")
c.Logger.Info("reconciliation completed")
return reconcile.Result{}, nil
}
if err = utils.UpdateStatus(ctx, c.AdminClient, tcp, resource); err != nil {
c.logger.Error(err, "update status failed", "resource", resource.GetName())
c.Logger.Error(err, "update status failed", "resource", resource.GetName())
return reconcile.Result{}, err
}
c.logger.Info("reconciliation processed")
c.Logger.Info("reconciliation processed")
return reconcile.Result{}, nil
}
func (c *CoreDNS) SetupWithManager(mgr manager.Manager) error {
c.logger = mgr.GetLogger().WithName("coredns")
c.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).
WithOptions(controller.TypedOptions[reconcile.Request]{SkipNameValidation: ptr.To(true)}).
For(&rbacv1.ClusterRoleBinding{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {

View File

@@ -0,0 +1,10 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package errors
import (
"github.com/pkg/errors"
)
var ErrPausedReconciliation = errors.New("paused reconciliation, no further actions")

View File
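
Every soot controller now short-circuits on this sentinel instead of treating a paused TenantControlPlane as a retrieval error, as the surrounding diffs show. The comparison must go through errors.Is so wrapped errors still match:

if _, err := getTenantControlPlaneFunc(); errors.Is(err, sooterrors.ErrPausedReconciliation) {
    // Paused instances are skipped silently, without requeueing.
    return reconcile.Result{}, nil
}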

@@ -7,6 +7,7 @@ import (
"context"
"github.com/go-logr/logr"
"github.com/pkg/errors"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/api/rbac/v1"
@@ -25,14 +26,14 @@ import (
"sigs.k8s.io/controller-runtime/pkg/source"
"github.com/clastix/kamaji/controllers"
sooterrors "github.com/clastix/kamaji/controllers/soot/controllers/errors"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/resources"
"github.com/clastix/kamaji/internal/resources/konnectivity"
)
type KonnectivityAgent struct {
logger logr.Logger
Logger logr.Logger
AdminClient client.Client
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
TriggerChannel chan event.GenericEvent
@@ -41,43 +42,50 @@ type KonnectivityAgent struct {
func (k *KonnectivityAgent) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
tcp, err := k.GetTenantControlPlaneFunc()
if err != nil {
k.logger.Error(err, "cannot retrieve TenantControlPlane")
if errors.Is(err, sooterrors.ErrPausedReconciliation) {
k.Logger.Info(err.Error())
return reconcile.Result{}, nil
}
k.Logger.Error(err, "cannot retrieve TenantControlPlane")
return reconcile.Result{}, err
}
if tcp.Spec.Addons.Konnectivity == nil {
return reconcile.Result{}, nil
}
for _, resource := range controllers.GetExternalKonnectivityResources(k.AdminClient) {
k.logger.Info("start processing", "resource", resource.GetName())
k.Logger.Info("start processing", "resource", resource.GetName())
result, handlingErr := resources.Handle(ctx, resource, tcp)
if handlingErr != nil {
k.logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
k.Logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
return reconcile.Result{}, handlingErr
}
if result == controllerutil.OperationResultNone {
k.logger.Info("resource processed", "resource", resource.GetName())
k.Logger.Info("resource processed", "resource", resource.GetName())
continue
}
if err = utils.UpdateStatus(ctx, k.AdminClient, tcp, resource); err != nil {
k.logger.Error(err, "update status failed", "resource", resource.GetName())
k.Logger.Error(err, "update status failed", "resource", resource.GetName())
return reconcile.Result{}, err
}
}
k.logger.Info("reconciliation completed")
k.Logger.Info("reconciliation completed")
return reconcile.Result{}, nil
}
func (k *KonnectivityAgent) SetupWithManager(mgr manager.Manager) error {
k.logger = mgr.GetLogger().WithName("konnectivity_agent")
k.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).
WithOptions(controller.TypedOptions[reconcile.Request]{SkipNameValidation: ptr.To(true)}).
For(&appsv1.DaemonSet{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {

View File
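
The logger and trigger channel are now exported and injected by the soot manager rather than created inside SetupWithManager; the wiring, taken from the soot manager diff at the end of this set (adminClient and getTCPFn are stand-ins), looks like:

konnectivityAgent := &controllers.KonnectivityAgent{
    AdminClient:               adminClient,
    GetTenantControlPlaneFunc: getTCPFn,
    Logger:                    mgr.GetLogger().WithName("konnectivity_agent"),
    TriggerChannel:            make(chan event.GenericEvent),
}
if err := konnectivityAgent.SetupWithManager(mgr); err != nil {
    return err
}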

@@ -7,6 +7,7 @@ import (
"context"
"github.com/go-logr/logr"
"github.com/pkg/errors"
"k8s.io/utils/ptr"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
@@ -19,6 +20,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
sooterrors "github.com/clastix/kamaji/controllers/soot/controllers/errors"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/resources"
)
@@ -34,6 +36,12 @@ type KubeadmPhase struct {
func (k *KubeadmPhase) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
tcp, err := k.GetTenantControlPlaneFunc()
if err != nil {
if errors.Is(err, sooterrors.ErrPausedReconciliation) {
k.logger.Info(err.Error())
return reconcile.Result{}, nil
}
return reconcile.Result{}, err
}
@@ -65,7 +73,6 @@ func (k *KubeadmPhase) Reconcile(ctx context.Context, _ reconcile.Request) (reco
func (k *KubeadmPhase) SetupWithManager(mgr manager.Manager) error {
k.logger = mgr.GetLogger().WithName(k.Phase.GetName())
k.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).
WithOptions(controller.TypedOptions[reconcile.Request]{SkipNameValidation: ptr.To(true)}).

View File

@@ -7,6 +7,7 @@ import (
"context"
"github.com/go-logr/logr"
"github.com/pkg/errors"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
@@ -23,6 +24,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
sooterrors "github.com/clastix/kamaji/controllers/soot/controllers/errors"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/kubeadm"
"github.com/clastix/kamaji/internal/resources"
@@ -30,53 +32,55 @@ import (
)
type KubeProxy struct {
Logger logr.Logger
AdminClient client.Client
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
TriggerChannel chan event.GenericEvent
logger logr.Logger
}
func (k *KubeProxy) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
tcp, err := k.GetTenantControlPlaneFunc()
if err != nil {
k.logger.Error(err, "cannot retrieve TenantControlPlane")
if errors.Is(err, sooterrors.ErrPausedReconciliation) {
k.Logger.Info(err.Error())
return reconcile.Result{}, nil
}
k.Logger.Error(err, "cannot retrieve TenantControlPlane")
return reconcile.Result{}, err
}
k.logger.Info("start processing")
k.Logger.Info("start processing")
resource := &addons.KubeProxy{Client: k.AdminClient}
result, handlingErr := resources.Handle(ctx, resource, tcp)
if handlingErr != nil {
k.logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
k.Logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
return reconcile.Result{}, handlingErr
}
if result == controllerutil.OperationResultNone {
k.logger.Info("reconciliation completed")
k.Logger.Info("reconciliation completed")
return reconcile.Result{}, nil
}
if err = utils.UpdateStatus(ctx, k.AdminClient, tcp, resource); err != nil {
k.logger.Error(err, "update status failed")
k.Logger.Error(err, "update status failed")
return reconcile.Result{}, err
}
k.logger.Info("reconciliation processed")
k.Logger.Info("reconciliation processed")
return reconcile.Result{}, nil
}
func (k *KubeProxy) SetupWithManager(mgr manager.Manager) error {
k.logger = mgr.GetLogger().WithName("kube_proxy")
k.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).
WithOptions(controller.TypedOptions[reconcile.Request]{SkipNameValidation: ptr.To(true)}).
For(&rbacv1.ClusterRoleBinding{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {

View File

@@ -6,10 +6,12 @@ package controllers
import (
"context"
"fmt"
"time"
"github.com/go-logr/logr"
"github.com/pkg/errors"
admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
"k8s.io/apimachinery/pkg/api/errors"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
pointer "k8s.io/utils/ptr"
controllerruntime "sigs.k8s.io/controller-runtime"
@@ -24,14 +26,14 @@ import (
"sigs.k8s.io/controller-runtime/pkg/source"
"github.com/clastix/kamaji/api/v1alpha1"
sooterrors "github.com/clastix/kamaji/controllers/soot/controllers/errors"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/utilities"
)
type Migrate struct {
client client.Client
logger logr.Logger
Client client.Client
Logger logr.Logger
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
WebhookNamespace string
WebhookServiceName string
@@ -42,11 +44,17 @@ type Migrate struct {
func (m *Migrate) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
tcp, err := m.GetTenantControlPlaneFunc()
if err != nil {
if errors.Is(err, sooterrors.ErrPausedReconciliation) {
m.Logger.Info(err.Error())
return reconcile.Result{}, nil
}
return reconcile.Result{}, err
}
// Cannot detect the status of the TenantControlPlane, enqueuing back
if tcp.Status.Kubernetes.Version.Status == nil {
return reconcile.Result{Requeue: true}, nil
return reconcile.Result{RequeueAfter: time.Second}, nil
}
switch *tcp.Status.Kubernetes.Version.Status {
@@ -57,7 +65,7 @@ func (m *Migrate) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile
}
if err != nil {
m.logger.Error(err, "reconciliation failed")
m.Logger.Error(err, "reconciliation failed")
return reconcile.Result{}, err
}
@@ -66,8 +74,8 @@ func (m *Migrate) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile
}
func (m *Migrate) cleanup(ctx context.Context) error {
if err := m.client.Delete(ctx, m.object()); err != nil {
if errors.IsNotFound(err) {
if err := m.Client.Delete(ctx, m.object()); err != nil {
if apierrors.IsNotFound(err) {
return nil
}
@@ -80,7 +88,7 @@ func (m *Migrate) cleanup(ctx context.Context) error {
func (m *Migrate) createOrUpdate(ctx context.Context) error {
obj := m.object()
_, err := utilities.CreateOrUpdateWithConflict(ctx, m.client, obj, func() error {
_, err := utilities.CreateOrUpdateWithConflict(ctx, m.Client, obj, func() error {
obj.Webhooks = []admissionregistrationv1.ValidatingWebhook{
{
Name: "leases.migrate.kamaji.clastix.io",
@@ -178,8 +186,6 @@ func (m *Migrate) createOrUpdate(ctx context.Context) error {
}
func (m *Migrate) SetupWithManager(mgr manager.Manager) error {
m.client = mgr.GetClient()
m.logger = mgr.GetLogger().WithName("migrate")
m.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).

View File

@@ -0,0 +1,207 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"context"
"fmt"
"time"
"github.com/go-logr/logr"
admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/kubernetes/cmd/kubeadm/app/util/errors"
"k8s.io/utils/ptr"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
sooterrors "github.com/clastix/kamaji/controllers/soot/controllers/errors"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/utilities"
)
type WritePermissions struct {
Logger logr.Logger
Client client.Client
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
WebhookNamespace string
WebhookServiceName string
WebhookCABundle []byte
TriggerChannel chan event.GenericEvent
}
func (r *WritePermissions) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
tcp, err := r.GetTenantControlPlaneFunc()
if err != nil {
if errors.Is(err, sooterrors.ErrPausedReconciliation) {
r.Logger.Info(err.Error())
return reconcile.Result{}, nil
}
return reconcile.Result{}, err
}
// Cannot detect the status of the TenantControlPlane, enqueuing back
if tcp.Status.Kubernetes.Version.Status == nil {
return reconcile.Result{RequeueAfter: time.Second}, nil
}
switch {
case ptr.Deref(tcp.Status.Kubernetes.Version.Status, kamajiv1alpha1.VersionUnknown) == kamajiv1alpha1.VersionWriteLimited &&
tcp.Spec.WritePermissions.HasAnyLimitation():
err = r.createOrUpdate(ctx, tcp.Spec.WritePermissions)
default:
err = r.cleanup(ctx)
}
if err != nil {
r.Logger.Error(err, "reconciliation failed")
return reconcile.Result{}, err
}
return reconcile.Result{}, nil
}
func (r *WritePermissions) createOrUpdate(ctx context.Context, writePermissions kamajiv1alpha1.Permissions) error {
obj := r.object().DeepCopy()
_, err := utilities.CreateOrUpdateWithConflict(ctx, r.Client, obj, func() error {
obj.Webhooks = []admissionregistrationv1.ValidatingWebhook{
{
Name: "leases.write-permissions.kamaji.clastix.io",
ClientConfig: admissionregistrationv1.WebhookClientConfig{
URL: ptr.To(fmt.Sprintf("https://%s.%s.svc:443/write-permission", r.WebhookServiceName, r.WebhookNamespace)),
CABundle: r.WebhookCABundle,
},
Rules: []admissionregistrationv1.RuleWithOperations{
{
Operations: []admissionregistrationv1.OperationType{
admissionregistrationv1.Create,
admissionregistrationv1.Delete,
},
Rule: admissionregistrationv1.Rule{
APIGroups: []string{"*"},
APIVersions: []string{"*"},
Resources: []string{"*"},
Scope: ptr.To(admissionregistrationv1.NamespacedScope),
},
},
},
FailurePolicy: ptr.To(admissionregistrationv1.Fail),
MatchPolicy: ptr.To(admissionregistrationv1.Equivalent),
NamespaceSelector: &metav1.LabelSelector{
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "kubernetes.io/metadata.name",
Operator: metav1.LabelSelectorOpIn,
Values: []string{
"kube-node-lease",
},
},
},
},
SideEffects: ptr.To(admissionregistrationv1.SideEffectClassNoneOnDryRun),
AdmissionReviewVersions: []string{"v1"},
},
{
Name: "catchall.write-permissions.kamaji.clastix.io",
ClientConfig: admissionregistrationv1.WebhookClientConfig{
URL: ptr.To(fmt.Sprintf("https://%s.%s.svc:443/write-permission", r.WebhookServiceName, r.WebhookNamespace)),
CABundle: r.WebhookCABundle,
},
Rules: []admissionregistrationv1.RuleWithOperations{
{
Operations: func() []admissionregistrationv1.OperationType {
var ops []admissionregistrationv1.OperationType
if writePermissions.BlockCreate {
ops = append(ops, admissionregistrationv1.Create)
}
if writePermissions.BlockUpdate {
ops = append(ops, admissionregistrationv1.Update)
}
if writePermissions.BlockDelete {
ops = append(ops, admissionregistrationv1.Delete)
}
return ops
}(),
Rule: admissionregistrationv1.Rule{
APIGroups: []string{"*"},
APIVersions: []string{"*"},
Resources: []string{"*"},
Scope: ptr.To(admissionregistrationv1.AllScopes),
},
},
},
FailurePolicy: ptr.To(admissionregistrationv1.Fail),
MatchPolicy: ptr.To(admissionregistrationv1.Equivalent),
NamespaceSelector: &metav1.LabelSelector{
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "kubernetes.io/metadata.name",
Operator: metav1.LabelSelectorOpNotIn,
Values: []string{
"kube-system",
"kube-node-lease",
},
},
},
},
SideEffects: ptr.To(admissionregistrationv1.SideEffectClassNoneOnDryRun),
TimeoutSeconds: nil,
AdmissionReviewVersions: []string{"v1"},
},
}
return nil
})
return err
}
func (r *WritePermissions) cleanup(ctx context.Context) error {
if err := r.Client.Delete(ctx, r.object()); err != nil {
if apierrors.IsNotFound(err) {
return nil
}
return fmt.Errorf("unable to clean-up ValidationWebhook required for write permissions: %w", err)
}
return nil
}
func (r *WritePermissions) SetupWithManager(mgr manager.Manager) error {
r.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).
WithOptions(controller.TypedOptions[reconcile.Request]{SkipNameValidation: ptr.To(true)}).
For(&admissionregistrationv1.ValidatingWebhookConfiguration{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
return object.GetName() == r.object().GetName()
}))).
WatchesRawSource(source.Channel(r.TriggerChannel, &handler.EnqueueRequestForObject{})).
Complete(r)
}
func (r *WritePermissions) object() *admissionregistrationv1.ValidatingWebhookConfiguration {
return &admissionregistrationv1.ValidatingWebhookConfiguration{
ObjectMeta: metav1.ObjectMeta{
Name: "kamaji-write-permissions",
},
}
}

View File
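
The webhook pair is installed only while the control plane reports VersionWriteLimited and the spec requests at least one blocked verb; otherwise cleanup removes it, tolerating NotFound. HasAnyLimitation itself is part of the Kamaji API and not shown in this diff; a plausible shape, stated as an assumption:

// Hypothetical sketch: any of the three block flags justifies installing
// the ValidatingWebhookConfiguration.
func (p Permissions) HasAnyLimitation() bool {
    return p.BlockCreate || p.BlockUpdate || p.BlockDelete
}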

@@ -6,8 +6,9 @@ package soot
import (
"context"
"fmt"
"time"
"k8s.io/apimachinery/pkg/api/errors"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/client-go/rest"
"k8s.io/client-go/util/retry"
"k8s.io/utils/ptr"
@@ -28,20 +29,26 @@ import (
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/controllers/finalizers"
"github.com/clastix/kamaji/controllers/soot/controllers"
"github.com/clastix/kamaji/controllers/soot/controllers/errors"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/resources"
"github.com/clastix/kamaji/internal/utilities"
)
type sootItem struct {
triggers []chan event.GenericEvent
cancelFn context.CancelFunc
triggers []chan event.GenericEvent
cancelFn context.CancelFunc
completedCh chan struct{}
}
type sootMap map[string]sootItem
const (
sootManagerAnnotation = "kamaji.clastix.io/soot"
sootManagerFailedAnnotation = "failed"
)
type Manager struct {
client client.Client
sootMap sootMap
// sootManagerErrChan is the channel that is going to be used
// when the soot manager cannot start due to any kind of problem.
@@ -59,10 +66,14 @@ func (m *Manager) retrieveTenantControlPlane(ctx context.Context, request reconc
return func() (*kamajiv1alpha1.TenantControlPlane, error) {
tcp := &kamajiv1alpha1.TenantControlPlane{}
if err := m.client.Get(ctx, request.NamespacedName, tcp); err != nil {
if err := m.AdminClient.Get(ctx, request.NamespacedName, tcp); err != nil {
return nil, err
}
if utils.IsPaused(tcp) {
return nil, errors.ErrPausedReconciliation
}
return tcp, nil
}
}
@@ -93,39 +104,82 @@ func (m *Manager) cleanup(ctx context.Context, req reconcile.Request, tenantCont
}
v.cancelFn()
// TODO(prometherion): the 10 seconds is a hardcoded number,
// widely used across the code base as a timeout with the API Server.
// Evaluate if we need to make this configurable globally.
deadlineCtx, deadlineFn := context.WithTimeout(ctx, 10*time.Second)
defer deadlineFn()
select {
case _, open := <-v.completedCh:
if !open {
log.FromContext(ctx).Info("soot manager completed its process")
break
}
case <-deadlineCtx.Done():
log.FromContext(ctx).Error(deadlineCtx.Err(), "soot manager didn't exit before the timeout")
break
}
delete(m.sootMap, tcpName)
return nil
}
func (m *Manager) retryTenantControlPlaneAnnotations(ctx context.Context, request reconcile.Request, modifierFn func(annotations map[string]string)) error {
return retry.RetryOnConflict(retry.DefaultRetry, func() error {
tcp, err := m.retrieveTenantControlPlane(ctx, request)()
if err != nil {
return err
}
if tcp.Annotations == nil {
tcp.Annotations = map[string]string{}
}
modifierFn(tcp.Annotations)
tcp.SetAnnotations(tcp.Annotations)
return m.AdminClient.Update(ctx, tcp)
})
}
//nolint:maintidx
func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res reconcile.Result, err error) {
// Retrieving the TenantControlPlane:
// in case of deletion, we must be sure to properly remove from the memory the soot manager.
tcp := &kamajiv1alpha1.TenantControlPlane{}
if err = m.client.Get(ctx, request.NamespacedName, tcp); err != nil {
if errors.IsNotFound(err) {
if err = m.AdminClient.Get(ctx, request.NamespacedName, tcp); err != nil {
if apierrors.IsNotFound(err) {
return reconcile.Result{}, m.cleanup(ctx, request, nil)
}
return reconcile.Result{}, err
}
// Handling finalizer if the TenantControlPlane is marked for deletion:
tcpStatus := ptr.Deref(tcp.Status.Kubernetes.Version.Status, kamajiv1alpha1.VersionProvisioning)
// Handling finalizer if the TenantControlPlane is marked for deletion or scaled to zero:
// the clean-up function is already taking care to stop the manager, if this exists.
if tcp.GetDeletionTimestamp() != nil {
if tcp.GetDeletionTimestamp() != nil || tcpStatus == kamajiv1alpha1.VersionSleeping {
if controllerutil.ContainsFinalizer(tcp, finalizers.SootFinalizer) {
return reconcile.Result{}, m.cleanup(ctx, request, tcp)
}
return reconcile.Result{}, nil
}
tcpStatus := *tcp.Status.Kubernetes.Version.Status
// Triggering the reconciliation of the underlying controllers of
// the soot manager if this is already registered.
v, ok := m.sootMap[request.String()]
if ok {
switch {
case tcp.Annotations != nil && tcp.Annotations[sootManagerAnnotation] == sootManagerFailedAnnotation:
delete(m.sootMap, request.String())
return reconcile.Result{}, m.retryTenantControlPlaneAnnotations(ctx, request, func(annotations map[string]string) {
delete(annotations, sootManagerAnnotation)
})
case tcpStatus == kamajiv1alpha1.VersionCARotating:
// The TenantControlPlane CA has been rotated, it means the running manager
// must be restarted to avoid certificate signed by unknown authority errors.
@@ -137,7 +191,12 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
return reconcile.Result{}, m.cleanup(ctx, request, tcp)
default:
for _, trigger := range v.triggers {
trigger <- event.GenericEvent{Object: tcp}
var shrunkTCP kamajiv1alpha1.TenantControlPlane
shrunkTCP.Name = tcp.Name
shrunkTCP.Namespace = tcp.Namespace
go utils.TriggerChannel(ctx, trigger, shrunkTCP)
}
}
@@ -145,7 +204,7 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
}
// No need to start a soot manager if the TenantControlPlane is not ready:
// enqueuing back is not required since we're going to get that event once ready.
if tcpStatus == kamajiv1alpha1.VersionNotReady || tcpStatus == kamajiv1alpha1.VersionCARotating {
if tcpStatus == kamajiv1alpha1.VersionNotReady || tcpStatus == kamajiv1alpha1.VersionCARotating || tcpStatus == kamajiv1alpha1.VersionSleeping {
log.FromContext(ctx).Info("skipping start of the soot manager for a not ready instance")
return reconcile.Result{}, nil
@@ -159,11 +218,11 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
return nil
})
return reconcile.Result{Requeue: true}, finalizerErr
return reconcile.Result{RequeueAfter: time.Second}, finalizerErr
}
// Generating the manager and starting it:
// in case of any error, reconciling the request to start it back from the beginning.
tcpRest, err := utilities.GetRESTClientConfig(ctx, m.client, tcp)
tcpRest, err := utilities.GetRESTClientConfig(ctx, m.AdminClient, tcp)
if err != nil {
return reconcile.Result{}, err
}
@@ -178,14 +237,14 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
mgr, err := controllerruntime.NewManager(tcpRest, controllerruntime.Options{
Logger: log.Log.WithName(fmt.Sprintf("soot_%s_%s", tcp.GetNamespace(), tcp.GetName())),
Scheme: m.client.Scheme(),
Scheme: m.AdminClient.Scheme(),
Metrics: metricsserver.Options{
BindAddress: "0",
},
NewClient: func(config *rest.Config, _ client.Options) (client.Client, error) {
return client.New(config, client.Options{
Scheme: m.client.Scheme(),
})
NewClient: func(config *rest.Config, opts client.Options) (client.Client, error) {
opts.Scheme = m.AdminClient.Scheme()
return client.New(config, opts)
},
})
if err != nil {
@@ -194,11 +253,26 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
//
// Register all the controllers of the soot here:
//
writePermissions := &controllers.WritePermissions{
Logger: mgr.GetLogger().WithName("writePermissions"),
Client: mgr.GetClient(),
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
WebhookNamespace: m.MigrateServiceNamespace,
WebhookServiceName: m.MigrateServiceName,
WebhookCABundle: m.MigrateCABundle,
TriggerChannel: nil,
}
if err = writePermissions.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
}
migrate := &controllers.Migrate{
WebhookNamespace: m.MigrateServiceNamespace,
WebhookServiceName: m.MigrateServiceName,
WebhookCABundle: m.MigrateCABundle,
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
Client: mgr.GetClient(),
Logger: mgr.GetLogger().WithName("migrate"),
}
if err = migrate.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
@@ -207,6 +281,8 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
konnectivityAgent := &controllers.KonnectivityAgent{
AdminClient: m.AdminClient,
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
Logger: mgr.GetLogger().WithName("konnectivity_agent"),
TriggerChannel: make(chan event.GenericEvent),
}
if err = konnectivityAgent.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
@@ -215,6 +291,8 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
kubeProxy := &controllers.KubeProxy{
AdminClient: m.AdminClient,
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
Logger: mgr.GetLogger().WithName("kube_proxy"),
TriggerChannel: make(chan event.GenericEvent),
}
if err = kubeProxy.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
@@ -223,6 +301,8 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
coreDNS := &controllers.CoreDNS{
AdminClient: m.AdminClient,
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
Logger: mgr.GetLogger().WithName("coredns"),
TriggerChannel: make(chan event.GenericEvent),
}
if err = coreDNS.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
@@ -234,6 +314,7 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
Client: m.AdminClient,
Phase: resources.PhaseUploadConfigKubeadm,
},
TriggerChannel: make(chan event.GenericEvent),
}
if err = uploadKubeadmConfig.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
@@ -245,6 +326,7 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
Client: m.AdminClient,
Phase: resources.PhaseUploadConfigKubelet,
},
TriggerChannel: make(chan event.GenericEvent),
}
if err = uploadKubeletConfig.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
@@ -256,6 +338,7 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
Client: m.AdminClient,
Phase: resources.PhaseBootstrapToken,
},
TriggerChannel: make(chan event.GenericEvent),
}
if err = bootstrapToken.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
@@ -267,23 +350,40 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
Client: m.AdminClient,
Phase: resources.PhaseClusterAdminRBAC,
},
TriggerChannel: make(chan event.GenericEvent),
}
if err = kubeadmRbac.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
}
completedCh := make(chan struct{})
// Starting the manager
go func() {
if err = mgr.Start(tcpCtx); err != nil {
log.FromContext(ctx).Error(err, "unable to start soot manager")
// The sootManagerAnnotation is used to propagate the error between reconciliations with its state:
// this is required to avoid mutex and prevent concurrent read/write on the soot map
annotationErr := m.retryTenantControlPlaneAnnotations(ctx, request, func(annotations map[string]string) {
annotations[sootManagerAnnotation] = sootManagerFailedAnnotation
})
if annotationErr != nil {
log.FromContext(ctx).Error(err, "unable to update TenantControlPlane for soot failed annotation")
}
// When the manager cannot start we're enqueuing back the request to take advantage of the backoff factor
// of the queue: this is a goroutine and cannot return an error since the manager is running on its own,
// using the sootManagerErrChan channel we can trigger a reconciliation although the TCP hadn't any change.
m.sootManagerErrChan <- event.GenericEvent{Object: tcp}
var shrunkTCP kamajiv1alpha1.TenantControlPlane
shrunkTCP.Name = tcp.Name
shrunkTCP.Namespace = tcp.Namespace
m.sootManagerErrChan <- event.GenericEvent{Object: &shrunkTCP}
}
close(completedCh)
}()
m.sootMap[request.NamespacedName.String()] = sootItem{
triggers: []chan event.GenericEvent{
writePermissions.TriggerChannel,
migrate.TriggerChannel,
konnectivityAgent.TriggerChannel,
kubeProxy.TriggerChannel,
@@ -292,14 +392,14 @@ func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
uploadKubeletConfig.TriggerChannel,
bootstrapToken.TriggerChannel,
},
cancelFn: tcpCancelFn,
cancelFn: tcpCancelFn,
completedCh: completedCh,
}
return reconcile.Result{Requeue: true}, nil
return reconcile.Result{RequeueAfter: time.Second}, nil
}
func (m *Manager) SetupWithManager(mgr manager.Manager) error {
m.client = mgr.GetClient()
m.sootManagerErrChan = make(chan event.GenericEvent)
m.sootMap = make(map[string]sootItem)


@@ -1,8 +0,0 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import "sigs.k8s.io/controller-runtime/pkg/event"
type TenantControlPlaneChannel chan event.GenericEvent


@@ -12,6 +12,7 @@ import (
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
@@ -81,7 +82,7 @@ func (m *TelemetryController) collectStats(ctx context.Context, uid string) {
for _, tcp := range tcpList.Items {
switch {
case tcp.Spec.ControlPlane.Deployment.Replicas == nil || *tcp.Spec.ControlPlane.Deployment.Replicas == 0:
case ptr.Deref(tcp.Status.Kubernetes.Version.Status, kamajiv1alpha1.VersionProvisioning) == kamajiv1alpha1.VersionSleeping:
stats.TenantControlPlanes.Sleeping++
case tcp.Status.Kubernetes.Version.Status != nil && *tcp.Status.Kubernetes.Version.Status == kamajiv1alpha1.VersionNotReady:
stats.TenantControlPlanes.NotReady++


@@ -44,26 +44,27 @@ type TenantControlPlaneReconciler struct {
Client client.Client
APIReader client.Reader
Config TenantControlPlaneReconcilerConfig
TriggerChan TenantControlPlaneChannel
TriggerChan chan event.GenericEvent
KamajiNamespace string
KamajiServiceAccount string
KamajiService string
KamajiMigrateImage string
MaxConcurrentReconciles int
ReconcileTimeout time.Duration
// CertificateChan is the channel used by the CertificateLifecycleController that is checking for
// certificates and kubeconfig user certs validity: a generic event for the given TCP will be triggered
// once the validity threshold for the given certificate is reached.
CertificateChan CertificateChannel
CertificateChan chan event.GenericEvent
clock mutex.Clock
}
// TenantControlPlaneReconcilerConfig gives the necessary configuration for TenantControlPlaneReconciler.
type TenantControlPlaneReconcilerConfig struct {
ReconcileTimeout time.Duration
DefaultDataStoreName string
KineContainerImage string
TmpBaseDirectory string
DefaultDataStoreName string
KineContainerImage string
TmpBaseDirectory string
CertExpirationThreshold time.Duration
}
//+kubebuilder:rbac:groups=kamaji.clastix.io,resources=tenantcontrolplanes,verbs=get;list;watch;create;update;patch;delete
@@ -80,12 +81,12 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
log := log.FromContext(ctx)
var cancelFn context.CancelFunc
ctx, cancelFn = context.WithTimeout(ctx, r.Config.ReconcileTimeout)
ctx, cancelFn = context.WithTimeout(ctx, r.ReconcileTimeout)
defer cancelFn()
tenantControlPlane, err := r.getTenantControlPlane(ctx, req.NamespacedName)()
if k8serrors.IsNotFound(err) {
log.Info("resource have been deleted, skipping")
log.Info("resource may have been deleted, skipping")
return reconcile.Result{}, nil
}
@@ -95,17 +96,23 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
return reconcile.Result{}, err
}
if utils.IsPaused(tenantControlPlane) {
log.Info("paused reconciliation, no further actions")
return ctrl.Result{}, nil
}
releaser, err := mutex.Acquire(r.mutexSpec(tenantControlPlane))
if err != nil {
switch {
case errors.As(err, &mutex.ErrTimeout):
log.Info("acquire timed out, current process is blocked by another reconciliation")
return ctrl.Result{Requeue: true}, nil
return ctrl.Result{RequeueAfter: time.Second}, nil
case errors.As(err, &mutex.ErrCancelled):
log.Info("acquire cancelled")
return ctrl.Result{Requeue: true}, nil
return ctrl.Result{RequeueAfter: time.Second}, nil
default:
log.Error(err, "acquire failed")
@@ -125,7 +132,7 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
if errors.Is(err, ErrMissingDataStore) {
log.Info(err.Error())
return ctrl.Result{Requeue: true}, nil
return ctrl.Result{RequeueAfter: time.Second}, nil
}
log.Error(err, "cannot retrieve the DataStore for the given instance")
@@ -186,7 +193,7 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
if kamajierrors.ShouldReconcileErrorBeIgnored(err) {
log.V(1).Info("sentinel error, enqueuing back request", "error", err.Error())
return ctrl.Result{Requeue: true}, nil
return ctrl.Result{RequeueAfter: time.Second}, nil
}
log.Error(err, "handling of resource failed", "resource", resource.GetName())
@@ -202,7 +209,7 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
if kamajierrors.ShouldReconcileErrorBeIgnored(err) {
log.V(1).Info("sentinel error, enqueuing back request", "error", err.Error())
return ctrl.Result{Requeue: true}, nil
return ctrl.Result{RequeueAfter: time.Second}, nil
}
log.Error(err, "update of the resource failed", "resource", resource.GetName())
@@ -215,7 +222,7 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
if result == resources.OperationResultEnqueueBack {
log.Info("requested enqueuing back", "resources", resource.GetName())
return ctrl.Result{Requeue: true}, nil
return ctrl.Result{RequeueAfter: time.Second}, nil
}
}


@@ -0,0 +1,19 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package utils
import (
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/clastix/kamaji/api/v1alpha1"
)
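// IsPaused returns true when the given object carries the paused reconciliation annotation.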
func IsPaused(obj client.Object) bool {
if obj.GetAnnotations() == nil {
return false
}
_, paused := obj.GetAnnotations()[v1alpha1.PausedReconciliationAnnotation]
return paused
}


@@ -0,0 +1,26 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package utils
import (
"context"
"time"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/log"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)
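// TriggerChannel delivers a GenericEvent for the given TenantControlPlane to the receiver channel,
// giving up with a logged error if the send does not complete within 10 seconds.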
func TriggerChannel(ctx context.Context, receiver chan event.GenericEvent, tcp kamajiv1alpha1.TenantControlPlane) {
deadlineCtx, cancelFn := context.WithTimeout(ctx, 10*time.Second)
defer cancelFn()
select {
case receiver <- event.GenericEvent{Object: &tcp}:
return
case <-deadlineCtx.Done():
log.FromContext(ctx).Error(deadlineCtx.Err(), "cannot send due to timeout")
}
}


@@ -18,7 +18,7 @@ export KAMAJI_NAMESPACE=kamaji-system
export TENANT_NAMESPACE=tenant-00
export TENANT_NAME=tenant-00
export TENANT_DOMAIN=internal.kamaji.aws.com
export TENANT_VERSION=v1.30.2
export TENANT_VERSION=v1.31.0
export TENANT_PORT=6443 # port used to expose the tenant api server
export TENANT_PROXY_PORT=8132 # port used to expose the konnectivity server
export TENANT_POD_CIDR=10.36.0.0/16


@@ -15,7 +15,7 @@ export KAMAJI_NAMESPACE=kamaji-system
export TENANT_NAMESPACE=default
export TENANT_NAME=tenant-00
export TENANT_DOMAIN=$KAMAJI_REGION.cloudapp.azure.com
export TENANT_VERSION=v1.26.0
export TENANT_VERSION=v1.31.0
export TENANT_PORT=6443 # port used to expose the tenant api server
export TENANT_PROXY_PORT=8132 # port used to expose the konnectivity server
export TENANT_POD_CIDR=10.36.0.0/16


@@ -5,7 +5,7 @@ export KAMAJI_NAMESPACE=kamaji-system
export TENANT_NAMESPACE=default
export TENANT_NAME=tenant-00
export TENANT_DOMAIN=clastix.labs
export TENANT_VERSION=v1.26.0
export TENANT_VERSION=v1.31.0
export TENANT_PORT=6443 # port used to expose the tenant api server
export TENANT_PROXY_PORT=8132 # port used to expose the konnectivity server
export TENANT_POD_CIDR=10.36.0.0/16


@@ -1,36 +0,0 @@
kind_path := $(patsubst %/,%,$(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
include ../etcd/Makefile
.PHONY: kind ingress-nginx
.DEFAULT_GOAL := kamaji
prometheus-stack:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus-stack --create-namespace -n monitoring prometheus-community/kube-prometheus-stack
reqs: kind ingress-nginx cert-manager
cert-manager:
@kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml
kamaji: reqs
helm install kamaji --create-namespace -n kamaji-system $(kind_path)/../../charts/kamaji
destroy: kind/destroy etcd-certificates/cleanup
kind:
@kind create cluster --config $(kind_path)/kind-kamaji.yaml
kind/destroy:
@kind delete cluster --name kamaji
ingress-nginx: ingress-nginx-install
ingress-nginx-install:
kubectl apply -f $(kind_path)/nginx-deploy.yaml
kamaji-kind-worker-join:
$(kind_path)/join-node.bash


@@ -1,36 +0,0 @@
#!/bin/bash
set -e
# Constants
export DOCKER_IMAGE_NAME="kindest/node"
export DOCKER_NETWORK="kind"
# Variables
export KUBERNETES_VERSION=${1:-v1.23.4}
export KUBECONFIG="${KUBECONFIG:-/tmp/kubeconfig}"
if [ -z $2 ]
then
MAPPING_PORT=""
else
MAPPING_PORT="-p ${2}:80"
fi
clear
echo "Welcome to join a new node to the Kind network"
echo -ne "\nChecking right kubeconfig\n"
kubectl cluster-info
echo "Are you pointing to the right tenant control plane? (Type return to continue)"
read
JOIN_CMD="$(kubeadm --kubeconfig=${KUBECONFIG} token create --print-join-command) --ignore-preflight-errors=SystemVerification"
echo "Deploying new node..."
NODE=$(docker run -d --privileged -v /lib/modules:/lib/modules:ro -v /var --net $DOCKER_NETWORK $MAPPING_PORT $DOCKER_IMAGE_NAME:$KUBERNETES_VERSION)
sleep 10
echo "Joining new node..."
docker exec -e JOIN_CMD="$JOIN_CMD" $NODE /bin/bash -c "$JOIN_CMD"
echo "Node has joined! Remember to install the kind-net CNI by issuing the following command:"
echo " $: kubectl apply -f https://raw.githubusercontent.com/aojea/kindnet/master/install-kindnet.yaml"


@@ -1,37 +0,0 @@
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kamaji
nodes:
- role: control-plane
image: kindest/node:v1.23.4
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
## required for Cluster API local development
extraMounts:
- hostPath: /var/run/docker.sock
containerPath: /var/run/docker.sock
extraPortMappings:
## expose port 80 of the node to port 80 on the host
- containerPort: 80
hostPort: 80
protocol: TCP
## expose port 443 of the node to port 443 on the host
- containerPort: 443
hostPort: 443
protocol: TCP
## expose port 31132 of the node to port 31132 on the host for konnectivity
- containerPort: 31132
hostPort: 31132
protocol: TCP
## expose port 31443 of the node to port 31443 on the host
- containerPort: 31443
hostPort: 31443
protocol: TCP
## expose port 6443 of the node to port 8443 on the host
- containerPort: 6443
hostPort: 8443
protocol: TCP


@@ -1,694 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
data:
allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
- namespaces
verbs:
- list
- watch
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- apiGroups:
- ''
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- configmaps
resourceNames:
- ingress-controller-leader
verbs:
- get
- update
- apiGroups:
- ''
resources:
- configmaps
verbs:
- create
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
type: ClusterIP
ports:
- name: https-webhook
port: 443
targetPort: webhook
appProtocol: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: NodePort
ipFamilyPolicy: SingleStack
ipFamilies:
- IPv4
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
appProtocol: http
- name: https
port: 443
protocol: TCP
targetPort: https
appProtocol: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
revisionHistoryLimit: 10
strategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --election-id=ingress-controller-leader
- --controller-class=k8s.io/ingress-nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
- --watch-ingress-without-class=true
- --publish-status-address=localhost
- --enable-ssl-passthrough=true
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
ports:
- name: http
containerPort: 80
protocol: TCP
hostPort: 80
- name: https
containerPort: 443
protocol: TCP
hostPort: 443
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
nodeSelector:
ingress-ready: 'true'
kubernetes.io/os: linux
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Equal
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 0
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: nginx
namespace: ingress-nginx
spec:
controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
name: ingress-nginx-admission
webhooks:
- name: validate.nginx.ingress.kubernetes.io
matchPolicy: Equivalent
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
failurePolicy: Fail
sideEffects: None
admissionReviewVersions:
- v1
clientConfig:
service:
namespace: ingress-nginx
name: ingress-nginx-controller-admission
path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx-admission
namespace: ingress-nginx
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ingress-nginx-admission
namespace: ingress-nginx
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
rules:
- apiGroups:
- ''
resources:
- secrets
verbs:
- get
- create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ingress-nginx-admission
namespace: ingress-nginx
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-create
namespace: ingress-nginx
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
template:
metadata:
name: ingress-nginx-admission-create
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: create
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
imagePullPolicy: IfNotPresent
args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
allowPrivilegeEscalation: false
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
nodeSelector:
kubernetes.io/os: linux
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-patch
namespace: ingress-nginx
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
template:
metadata:
name: ingress-nginx-admission-patch
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: patch
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
imagePullPolicy: IfNotPresent
args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
allowPrivilegeEscalation: false
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
nodeSelector:
kubernetes.io/os: linux
securityContext:
runAsNonRoot: true
runAsUser: 2000


@@ -0,0 +1,128 @@
# Cluster Autoscaler
The [Cluster Autoscaler](https://github.com/kubernetes/autoscaler) is a tool that automatically adjusts the size of a Kubernetes cluster so that all pods have a place to run and there are no unneeded nodes.
When pods are unschedulable because there are not enough resources, Cluster Autoscaler scales up the cluster. When nodes are underutilized, Cluster Autoscaler scales down the cluster.
Cluster API supports the Cluster Autoscaler. See the [Cluster Autoscaler on Cluster API](https://cluster-api.sigs.k8s.io/tasks/automated-machine-management/autoscaling) for more information.
## Getting started with the Cluster Autoscaler on Kamaji
Kamaji supports the Cluster Autoscaler through Cluster API. There are several ways to run the Cluster Autoscaler with Cluster API. In this guide, we leverage the unique features of Kamaji to run the Cluster Autoscaler as part of the Hosted Control Plane.
In other words, the Cluster Autoscaler runs as a pod in the Kamaji Management Cluster, side by side with the Tenant Control Plane pods, and connects directly to the API server of the workload cluster, hiding sensitive data and information from the tenant: this is achieved by mounting the kubeconfig of the tenant cluster in the Cluster Autoscaler pod.
### Create the workload cluster
Create a workload cluster using the Kamaji Control Plane Provider and the Infrastructure Provider of choice. The following example creates a workload cluster using the vSphere Infrastructure Provider.
The template file [`capi-kamaji-vsphere-autoscaler-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-autoscaler-template.yaml) provides a full example of a cluster with the autoscaler enabled. You can generate the cluster manifest using `clusterctl`.
First, list all the variables in the template file:
```bash
cat capi-kamaji-vsphere-autoscaler-template.yaml | clusterctl generate yaml --list-variables
```
Fill them with the desired values and generate the manifest:
```bash
clusterctl generate yaml \
--from capi-kamaji-vsphere-autoscaler-template.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
Apply the generated manifest to create the workload cluster:
```bash
kubectl apply -f capi-kamaji-vsphere-cluster.yaml
```
### Install the Cluster Autoscaler
Install the Cluster Autoscaler via Helm in the Management Cluster, in the same namespace where the workload cluster is deployed.
!!! info "Options for installing the Cluster Autoscaler"
Cluster Autoscaler works on a single cluster: every cluster must have its own Cluster Autoscaler instance. This can be automated by leveraging Project Sveltos, deploying a Cluster Autoscaler instance for each Kamaji Cluster API instance.
```bash
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update
helm upgrade --install ${CLUSTER_NAME}-autoscaler autoscaler/cluster-autoscaler \
--set cloudProvider=clusterapi \
--set autoDiscovery.namespace=default \
--set "autoDiscovery.labels[0].autoscaling=enabled" \
--set clusterAPIKubeconfigSecret=${CLUSTER_NAME}-kubeconfig \
--set clusterAPIMode=kubeconfig-incluster
```
The `autoDiscovery.labels` values are used to dynamically pick the clusters to autoscale.
Such labels must be set on the workload cluster, in the `Cluster` and `MachineDeployment` resources.
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
labels:
cluster.x-k8s.io/cluster-name: sample
# Cluster Autoscaler labels
autoscaling: enabled
name: sample
# other fields omitted for brevity
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
annotations:
# Cluster Autoscaler annotations
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "6"
labels:
cluster.x-k8s.io/cluster-name: sample
# Cluster Autoscaler labels
autoscaling: enabled
name: sample-md-0
# other fields omitted for brevity
---
# other Cluster API resources omitted for brevity
```
### Verify the Cluster Autoscaler
To verify that the Cluster Autoscaler is working as expected, you can deploy a workload with CPU requirements in the Tenant cluster in order to simulate resource pressure.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: hello-node
name: hello-node
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: hello-node
template:
metadata:
labels:
app: hello-node
spec:
containers:
- image: quay.io/google-containers/pause-amd64:3.0
imagePullPolicy: IfNotPresent
name: pause-amd64
resources:
limits:
cpu: 500m
```
Apply the workload to the Tenant cluster and simulate a load spike by increasing the replicas. The Cluster Autoscaler should scale up the cluster to accommodate the workload. Cooldown times must be configured properly on a per-cluster basis.
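For example, assuming the `hello-node` deployment above, you can force unschedulable pods and then watch new machines appear from the Management Cluster (namespaces and resource names may vary with your setup):
```bash
# Against the Tenant cluster: scale the workload beyond the current capacity
kubectl scale deployment hello-node --replicas=20

# Against the Management Cluster: watch the Cluster Autoscaler create new machines
kubectl get machinedeployments,machines -n default -w
```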
!!! warning "Possible Resource Wasting"
With the Cluster Autoscaler, new machines are automatically created in a very short time, which can lead to over-provisioning and potentially wasted resources. Read the official Cluster Autoscaler documentation to choose correct values according to your infrastructure and provisioning times.
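As a sketch, the scale-down cooldown can be tuned through the chart's `extraArgs`; the values below are illustrative and depend on your infrastructure's provisioning times:
```bash
helm upgrade ${CLUSTER_NAME}-autoscaler autoscaler/cluster-autoscaler \
  --reuse-values \
  --set extraArgs.scale-down-delay-after-add=15m \
  --set extraArgs.scale-down-unneeded-time=10m
```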


@@ -0,0 +1,642 @@
# Cluster Class with Kamaji
`ClusterClass` is a Cluster API feature that enables template-based cluster creation. When combined with Kamaji's hosted control plane architecture, `ClusterClass` provides a powerful pattern for standardizing Kubernetes cluster deployments across multiple infrastructure providers while maintaining consistent control plane configurations.
!!! warning "Experimental Feature"
ClusterClass is still an experimental feature of Cluster API. As with any experimental feature, it should be used with caution. Read more about ClusterClass in the [Cluster API documentation](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/).
## Understanding Cluster Class
`ClusterClass` reduces configuration boilerplate by defining reusable cluster templates. Instead of creating individual resources for each cluster, you define a `ClusterClass` once and create multiple clusters from it with minimal configuration.
With Kamaji, this pattern becomes even more powerful:
- **Shared Control Plane Templates**: The same KamajiControlPlaneTemplate works across all infrastructure providers
- **Infrastructure Flexibility**: Deploy worker nodes on vSphere, AWS, Azure, or any supported provider while maintaining consistent control planes
- **Simplified Management**: Hosted control planes reduce the complexity of `ClusterClass` templates
## Enabling Cluster Class
To use `ClusterClass` with Kamaji, you need to enable the cluster topology feature gate before initializing the management cluster:
```bash
export CLUSTER_TOPOLOGY=true
clusterctl init --control-plane kamaji --infrastructure vsphere
```
This will install:
- Cluster API core components with `ClusterClass` support
- Kamaji Control Plane Provider
- Your chosen infrastructure provider (vSphere in this example)
Verify the installation:
```bash
kubectl get deployments -A | grep -E "capi|kamaji"
```
## Template Architecture with Kamaji
A `ClusterClass` with Kamaji consists of four main components:
1. Control Plane Template (KamajiControlPlaneTemplate): Defines the hosted control plane configuration that remains consistent across infrastructure providers.
2. Infrastructure Template (VSphereClusterTemplate): Provider-specific infrastructure configuration for the cluster.
3. Bootstrap Template (KubeadmConfigTemplate): Node initialization configuration that works across providers.
4. Machine Template (VSphereMachineTemplate): Provider-specific machine configuration for worker nodes.
Here's how these components relate in a `ClusterClass`:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
name: kamaji-vsphere-class
spec:
# Infrastructure provider template
infrastructure:
ref:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereClusterTemplate
name: vsphere-cluster-template
# Kamaji control plane template - reusable across providers
controlPlane:
ref:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlaneTemplate
name: kamaji-control-plane-template
# Worker configuration
workers:
machineDeployments:
- class: default-worker
template:
bootstrap:
ref:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: worker-bootstrap-template
infrastructure:
ref:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
name: vsphere-worker-template
```
The key advantage: the KamajiControlPlaneTemplate and KubeadmConfigTemplate can be shared across different infrastructure providers, while only the infrastructure-specific templates need to change.
## Creating a Cluster Class
Let's create a `ClusterClass` for vSphere with Kamaji. First, define the shared templates:
### KamajiControlPlaneTemplate
This template defines the hosted control plane configuration:
```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlaneTemplate
metadata:
name: kamaji-controlplane
namespace: capi-templates-vsphere
spec:
template:
spec:
dataStoreName: "default" # Default datastore for etcd
network:
serviceType: LoadBalancer
serviceAddress: ""
certSANs: []
addons:
coreDNS: {}
kubeProxy: {}
konnectivity: {}
apiServer:
extraArgs: []
resources:
requests: {}
controllerManager:
extraArgs: []
resources:
requests: {}
scheduler:
extraArgs: []
resources:
requests: {}
kubelet:
cgroupfs: systemd
preferredAddressTypes:
- InternalIP
registry: "registry.k8s.io"
```
### KubeadmConfigTemplate
This bootstrap template configures worker nodes:
```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: worker-bootstrap-template
spec:
template:
spec:
# Configuration for kubeadm join
joinConfiguration:
discovery: {}
nodeRegistration:
criSocket: /var/run/containerd/containerd.sock
imagePullPolicy: IfNotPresent
name: '{{ local_hostname }}'
kubeletExtraArgs:
cloud-provider: external
node-ip: "{{ ds.meta_data.local_ipv4 }}"
# Commands to run before kubeadm join
preKubeadmCommands:
- hostnamectl set-hostname "{{ ds.meta_data.hostname }}"
- echo "127.0.0.1 {{ ds.meta_data.hostname }}" >> /etc/hosts
# Commands to run after kubeadm join
postKubeadmCommands: []
# Users to create on worker nodes
users: []
```
### VSphereClusterTemplate
Infrastructure-specific template for vSphere:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereClusterTemplate
metadata:
name: vsphere
namespace: capi-templates-vsphere
spec:
template:
spec:
server: "vcenter.sample.com" # vCenter server address
thumbprint: "" # vCenter certificate thumbprint
identityRef:
kind: VSphereClusterIdentity
name: "vsphere-cluster-identity"
failureDomainSelector: {}
clusterModules: []
```
### VSphereMachineTemplate
Machine template for vSphere workers:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
name: vsphere-vm-base
namespace: capi-templates-vsphere
spec:
template:
spec:
# Resources will be patched by ClusterClass based on variables
# numCPUs, memoryMiB, diskGiB are dynamically set
# Infrastructure defaults - will be patched by ClusterClass
server: "vcenter.sample.com"
datacenter: "datacenter"
datastore: "datastore"
resourcePool: "Resources"
folder: "vm-folder"
template: "ubuntu-2404-kube-v1.32.0"
storagePolicyName: ""
thumbprint: ""
# Network configuration (IPAM by default)
network:
devices:
- networkName: "k8s-network"
dhcp4: false
addressesFromPools:
- apiGroup: ipam.cluster.x-k8s.io
kind: InClusterIPPool
name: "{{ .builtin.cluster.name }}" # Uses cluster name
```
### Variables and Patching in Cluster Class
`ClusterClass` becomes powerful through its variable system and JSON patching capabilities. This allows the same templates to be customized for different use cases without duplicating YAML.
#### Variable System
Variables in `ClusterClass` define the parameters users can customize when creating clusters. Each variable has:
- **Schema Definition**: OpenAPI v3 schema that validates input
- **Required/Optional**: Whether the variable must be provided
- **Default Values**: Fallback values when not specified
- **Type Constraints**: Data types, ranges, and enum values
Here's how variables work in practice:
**Control Plane Variables:**
```yaml
variables:
- name: kamajiControlPlane
required: true
schema:
openAPIV3Schema:
type: object
properties:
dataStoreName:
type: string
description: "Datastore name for etcd"
default: "default"
network:
type: object
properties:
serviceType:
type: string
enum: ["ClusterIP", "NodePort", "LoadBalancer"]
default: "LoadBalancer"
serviceAddress:
type: string
description: "Pre-assigned VIP address"
```
**Machine Resource Variables:**
```yaml
- name: machineSpecs
required: true
schema:
openAPIV3Schema:
type: object
properties:
numCPUs:
type: integer
minimum: 2
maximum: 64
default: 4
memoryMiB:
type: integer
minimum: 4096
maximum: 131072
default: 8192
diskGiB:
type: integer
minimum: 40
maximum: 2048
default: 100
```
#### JSON Patching System
Patches apply variable values to the base templates at cluster creation time. This enables the same template to serve different configurations.
**Control Plane Patching:**
```yaml
patches:
- name: controlPlaneConfig
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlaneTemplate
matchResources:
controlPlane: true
jsonPatches:
- op: replace
path: /spec/template/spec/dataStoreName
valueFrom:
variable: kamajiControlPlane.dataStoreName
- op: replace
path: /spec/template/spec/network/serviceType
valueFrom:
variable: kamajiControlPlane.network.serviceType
```
**Machine Resource Patching:**
```yaml
- name: machineResources
definitions:
- selector:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
matchResources:
machineDeploymentClass:
names: ["default-worker"]
jsonPatches:
- op: add # Resources are not in base template
path: /spec/template/spec/numCPUs
valueFrom:
variable: machineSpecs.numCPUs
- op: add
path: /spec/template/spec/memoryMiB
valueFrom:
variable: machineSpecs.memoryMiB
```
#### Advanced Patching Patterns
**Conditional Patching:**
```yaml
- name: optionalVIP
definitions:
- selector:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlaneTemplate
jsonPatches:
- op: replace
path: /spec/template/spec/network/serviceAddress
valueFrom:
variable: kamajiControlPlane.network.serviceAddress
# Only applies if serviceAddress is not empty
enabledIf: "{{ ne .kamajiControlPlane.network.serviceAddress \"\" }}"
```
**Infrastructure Patching:**
```yaml
- name: infrastructureConfig
definitions:
- selector:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
jsonPatches:
- op: replace
path: /spec/template/spec/datacenter
valueFrom:
variable: infrastructure.datacenter
- op: replace
path: /spec/template/spec/datastore
valueFrom:
variable: infrastructure.datastore
- op: replace
path: /spec/template/spec/template
valueFrom:
variable: infrastructure.vmTemplate
```
### Complete Cluster Class with Variables
For a comprehensive example with all variables and patches configured, see the [vsphere-kamaji-clusterclass.yaml](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-class-template.yaml) template.
## Creating a Cluster from Cluster Class
With the `ClusterClass` defined, creating a cluster becomes remarkably simple:
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: my-cluster
namespace: default
spec:
# Network configuration defined at cluster level
clusterNetwork:
pods:
cidrBlocks: ["10.244.0.0/16"]
services:
cidrBlocks: ["10.96.0.0/12"]
serviceDomain: "cluster.local"
topology:
class: vsphere-standard
classNamespace: capi-templates-vsphere
version: v1.32.0
controlPlane:
replicas: 2
workers:
machineDeployments:
- class: default-worker
name: worker-nodes
replicas: 3
variables:
- name: kamajiControlPlane
value:
dataStoreName: "etcd"
network:
serviceType: "LoadBalancer"
serviceAddress: "" # Auto-assigned if empty
- name: machineSpecs
value:
numCPUs: 8
memoryMiB: 16384
diskGiB: 60
- name: infrastructure
value:
vmTemplate: "ubuntu-2404-kube-v1.32.0"
datacenter: "K8s-TI-dtc"
datastore: "K8s-N01td-01"
resourcePool: "rp-kamaji-dev"
folder: "my-cluster-vms"
- name: networking
value:
networkName: "VM-K8s-TI-cpmgmt"
nameservers: ["8.8.8.8", "1.1.1.1"]
dhcp4: false # Using IPAM
```
Create the cluster:
```bash
kubectl apply -f my-cluster.yaml
```
Monitor cluster creation:
```bash
clusterctl describe cluster my-cluster
kubectl get cluster,kamajicontrolplane,machinedeployment -n default
```
With this approach, the same `KamajiControlPlaneTemplate` and `KubeadmConfigTemplate` can be reused when creating `ClusterClasses` for AWS, Azure, or any other provider. Only the infrastructure-specific templates need to change.
## Cross-Provider Template Reuse
One of Kamaji's key advantages with `ClusterClass` is template modularity across providers. Here's how to leverage this:
### Shared Templates Repository
Create a namespace for shared templates:
```bash
kubectl create namespace cluster-templates
```
Deploy shared Kamaji and bootstrap templates once:
```bash
kubectl apply -n cluster-templates -f kamaji-controlplane-template.yaml
kubectl apply -n cluster-templates -f kubeadm-config-template.yaml
```
### Provider-Specific Cluster Classes
For each infrastructure provider, create a `ClusterClass` that references the shared templates:
#### AWS Cluster Class
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
name: kamaji-aws-class
spec:
controlPlane:
ref:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlaneTemplate
name: kamaji-controlplane
namespace: cluster-templates # Shared template
infrastructure:
ref:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSClusterTemplate
name: aws-cluster-template # AWS-specific
workers:
machineDeployments:
- class: default-worker
template:
bootstrap:
ref:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: worker-bootstrap-template
namespace: cluster-templates # Shared template
infrastructure:
ref:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
name: aws-worker-template # AWS-specific
```
#### Azure Cluster Class
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
name: kamaji-azure-class
spec:
controlPlane:
ref:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlaneTemplate
name: kamaji-control-plane-template
namespace: cluster-templates # Same shared template
infrastructure:
ref:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureClusterTemplate
name: azure-cluster-template # Azure-specific
workers:
machineDeployments:
- class: default-worker
template:
bootstrap:
ref:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: worker-bootstrap-template
namespace: cluster-templates # Same shared template
infrastructure:
ref:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureMachineTemplate
name: azure-worker-template # Azure-specific
```
## Managing Cluster Class Lifecycle
### Listing Available Cluster Classes
```bash
kubectl get clusterclasses -A
```
### Viewing Cluster Class Details
```bash
kubectl describe clusterclass vsphere-standard -n capi-templates-vsphere
```
### Updating a Cluster Class
A `ClusterClass` update affects only new clusters. Existing clusters continue using their original configuration:
```bash
kubectl edit clusterclass vsphere-standard -n capi-templates-vsphere
```
### Deleting Clusters Created from Cluster Class
Always delete clusters before removing the `ClusterClass`:
```bash
# Delete the cluster
kubectl delete cluster my-cluster
# Wait for cleanup
kubectl wait --for=delete cluster/my-cluster --timeout=10m
# Then safe to delete ClusterClass if no longer needed
kubectl delete clusterclass vsphere-standard -n capi-templates-vsphere
```
## Template Versioning Strategies
When managing `ClusterClasses` across environments, consider these versioning approaches:
### Semantic Versioning in Names
```yaml
metadata:
name: vsphere-standard-v1-2-0
namespace: capi-templates-vsphere
```
### Using Labels for Version Tracking
```yaml
metadata:
name: vsphere-standard
namespace: capi-templates-vsphere
labels:
version: "1.2.0"
stability: "stable"
tier: "standard"
```
### Namespace Separation
```bash
kubectl create namespace clusterclass-v1
kubectl create namespace clusterclass-v2
```
This enables gradual migration between `ClusterClass` versions while maintaining compatibility.
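As a sketch, an existing cluster could then be migrated by repointing its topology at the class published in the new namespace (names here are illustrative):
```yaml
spec:
  topology:
    class: vsphere-standard
    classNamespace: clusterclass-v2  # previously clusterclass-v1
# other fields omitted for brevity
```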
## Further Reading
- [Cluster API ClusterClass Documentation](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-class/)
- [Kamaji Control Plane Provider Reference](https://doc.crds.dev/github.com/clastix/cluster-api-control-plane-provider-kamaji)
- [CAPI Provider Integration](https://github.com/clastix/cluster-api-control-plane-provider-kamaji)


@@ -0,0 +1,98 @@
# Kamaji Control Plane Provider
Kamaji can act as a Cluster API Control Plane provider using the `KamajiControlPlane` custom resource, which defines the control plane of a Tenant Cluster.
Here is an example of a `KamajiControlPlane`:
```yaml
kind: KamajiControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
metadata:
name: '${CLUSTER_NAME}'
namespace: '${CLUSTER_NAMESPACE}'
spec:
apiServer:
extraArgs:
- --cloud-provider=external
controllerManager:
extraArgs:
- --cloud-provider=external
dataStoreName: default
addons:
coreDNS: {}
kubeProxy: {}
konnectivity: {}
kubelet:
cgroupfs: systemd
preferredAddressTypes:
- InternalIP
network:
serviceType: LoadBalancer
version: ${KUBERNETES_VERSION}
```
You can reference it as the control plane provider in a standard `Cluster` custom resource:
```yaml
kind: Cluster
apiVersion: cluster.x-k8s.io/v1beta1
metadata:
labels:
cluster.x-k8s.io/cluster-name: '${CLUSTER_NAME}'
name: '${CLUSTER_NAME}'
namespace: '${CLUSTER_NAMESPACE}'
spec:
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
name: '${CLUSTER_NAME}'
clusterNetwork:
pods:
cidrBlocks:
- '${PODS_CIDR}'
services:
cidrBlocks:
- '${SERVICES_CIDR}'
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: ... # your infrastructure kind may vary
name: '${CLUSTER_NAME}'
```
!!! info "Full Reference"
For a full reference of the `KamajiControlPlane` custom resource, please see the [Reference APIs](https://doc.crds.dev/github.com/clastix/cluster-api-control-plane-provider-kamaji/controlplane.cluster.x-k8s.io/KamajiControlPlane/v1alpha1).
## Getting started with the Kamaji Control Plane Provider
Cluster API Provider Kamaji is compliant with the `clusterctl` contract, which means you can use it with the `clusterctl` CLI to create and manage your Kamaji based clusters.
!!! info "Options for install Cluster API"
There are two ways to get started with Cluster API:
* using `clusterctl` to install the Cluster API components.
* using the Cluster API Operator. Please refer to the [Cluster API Operator](https://cluster-api-operator.sigs.k8s.io/) guide for this option.
### Prerequisites
* [`clusterctl`](https://cluster-api.sigs.k8s.io/user/quick-start#install-clusterctl) installed on your workstation to handle the lifecycle of your clusters.
* [`kubectl`](https://kubernetes.io/docs/tasks/tools/) installed on your workstation to interact with your clusters.
* [Kamaji](../getting-started/index.md) installed in your Management Cluster.
### Initialize the Management Cluster
Use `clusterctl` to initialize the Management Cluster. When executed for the first time, `clusterctl init` fetches and installs the Cluster API components in the Management Cluster:
```bash
clusterctl init --control-plane kamaji
```
As a result, the following Cluster API components will be installed:
* Cluster API Provider in `capi-system` namespace
* Bootstrap Provider in `capi-kubeadm-bootstrap-system` namespace
* Kamaji Control Plane Provider in `kamaji-system` namespace
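You can verify that the providers are up and running:
```bash
kubectl get pods -n capi-system
kubectl get pods -n capi-kubeadm-bootstrap-system
kubectl get pods -n kamaji-system
```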
In the next step, we will create a fully functional Kubernetes cluster using the Kamaji Control Plane Provider and the Infrastructure provider of choice.
For a complete list of supported infrastructure providers, please refer to the [other providers](other-providers.md) page.


@@ -0,0 +1,14 @@
# Cluster APIs Support
The [Cluster API](https://github.com/kubernetes-sigs/cluster-api) brings declarative, Kubernetes-style APIs to the creation, configuration, and management of Kubernetes clusters. If you're not familiar with the Cluster API project, you can learn more from the [official documentation](https://cluster-api.sigs.k8s.io/).
Users can utilize Kamaji in two distinct ways:
* **Standalone:** Kamaji can be used as a standalone Kubernetes Operator installed in the Management Cluster to manage multiple Tenant Control Planes. Worker nodes of Tenant Clusters can join any infrastructure, whether it be cloud, data-center, or edge, using various automation tools such as _Ansible_, _Terraform_, or even manually with any script calling `kubeadm`. See [yaki](https://goyaki.clastix.io/) as an example.
* **Cluster API Provider:** Kamaji can be used as a [Cluster API Control Plane Provider](https://cluster-api.sigs.k8s.io/reference/providers#control-plane) to manage multiple Tenant Control Planes across various infrastructures. Kamaji offers seamless integration with the most popular [Cluster API Infrastructure Providers](https://cluster-api.sigs.k8s.io/reference/providers#infrastructure).
!!! tip "Control Plane and Infrastructure Decoupling"
Kamaji decouples the Control Plane from the infrastructure, allowing the Kamaji Management Cluster to reside on a different infrastructure or cloud provider than the Tenant worker machines, as long as network reachability is ensured. This flexibility enables mixing and matching infrastructure providers, such as hosting the Management Cluster on a public cloud while deploying Tenant worker machines on private data centers, edge environments, or other clouds.
Check the currently supported infrastructure providers and the roadmap on the related [repository](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).


@@ -0,0 +1,21 @@
# Other Infra Providers
Kamaji offers seamless integration with the most popular [Cluster API Infrastructure Providers](https://cluster-api.sigs.k8s.io/reference/providers#infrastructure):
- AWS
- Azure
- Google Cloud
- Equinix/Packet
- Hetzner
- KubeVirt
- Metal³
- Nutanix
- OpenStack
- Tinkerbell
- vSphere
- IONOS Cloud
- Proxmox by IONOS Cloud
For the most up-to-date information and technical considerations, please always check the related [repository](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).

View File

@@ -0,0 +1,174 @@
# Proxmox VE Infra Provider
Use the Cluster API [Proxmox VE Infra Provider](https://github.com/ionos-cloud/cluster-api-provider-proxmox) to create a fully functional Kubernetes cluster with the Cluster API [Kamaji Control Plane Provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).
The Proxmox Cluster API implementation is developed and maintained by [IONOS Cloud](https://github.com/ionos-cloud).
## Proxmox VE Requirements
A Template VM built using the [Proxmox Builder](https://image-builder.sigs.k8s.io/capi/providers/proxmox) is necessary to create the cluster machines.
## Install the Proxmox VE Infrastructure Provider
To use the Proxmox Cluster API provider, you must connect and authenticate to a Proxmox VE system.
```bash
# The Proxmox VE host
export PROXMOX_URL="https://pve.example:8006"
# The Proxmox VE TokenID for authentication
export PROXMOX_TOKEN='clastix@pam!capi'
# The secret associated with the TokenID
export PROXMOX_SECRET="REDACTED"
```
!!! warning "Env escaping "
Pay attention to escape special characters, such as `\` and `!`
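As a minimal sketch with placeholder values, single-quoting usually avoids the escaping pitfalls:
```bash
# Single quotes prevent the shell from expanding '!' (history expansion)
# and from interpreting '\' escape sequences; values are placeholders.
export PROXMOX_TOKEN='clastix@pam!capi'
export PROXMOX_SECRET='a-secret-with-special-chars\!'
```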
Install the Infrastructure Provider:
```bash
clusterctl init --infrastructure proxmox
```
## Install the IPAM Provider
To assign IP addresses to nodes, you can use the in-cluster [IPAM provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster). To do so, initialize the Management Cluster with the `--ipam in-cluster` flag:
```bash
clusterctl init --ipam in-cluster
```
## Create a Tenant Cluster
Once all controllers are running in the management cluster, you can generate and apply the cluster manifests for the tenant cluster you want to provision.
### Generate the Cluster Manifest using the template
Use `clusterctl` to generate a tenant cluster manifest for your Proxmox VE environment. Set the following environment variables to match the workload cluster configuration:
```bash
# Cluster Configuration
export CLUSTER_NAME="sample"
export CLUSTER_NAMESPACE="default"
export CONTROL_PLANE_REPLICAS=2
export KUBERNETES_VERSION="v1.31.4"
export CLUSTER_DATASTORE="default"
```
Set the following environment variables to configure the workload cluster network:
```bash
# Networking Configuration
export IP_RANGE='["192.168.100.100-192.168.100.200"]'
export IP_PREFIX=24
export GATEWAY="192.168.100.1"
export DNS_SERVERS='["8.8.8.8"]'
export NETWORK_BRIDGE="vmbr0"
export NETWORK_MODEL="virtio"
```
Set the following environment variables to configure the workload machines:
```bash
# Node Configuration
export SSH_USER="clastix"
export SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3Nz ..."
export NODE_LABELS="datacenter=us-west,instance-type=large"
export NODE_TAINTS="environment=production:PreferNoSchedule"
# You can add additional cloud-init configuration to further customize
# the worker nodes by setting the CLOUD_INIT_CONFIG environment variable:
export CLOUD_INIT_CONFIG="#cloud-config package_update: true packages: - net-tools"
# Number of worker nodes
export NODE_REPLICAS=2
# Resource Configuration
export SOURCE_NODE="labs"
export TEMPLATE_ID=100
export ALLOWED_NODES='["labs"]'
export MEMORY_MIB=4096
export NUM_CORES=2
export NUM_SOCKETS=2
export BOOT_VOLUME_DEVICE="scsi0"
export BOOT_VOLUME_SIZE=20
export FILE_STORAGE_FORMAT="qcow2"
export STORAGE_NODE="local"
export POOL_NAME="sample-pool"
```
Use the following command to generate a cluster manifest based on the [`capi-kamaji-proxmox-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/proxmox/capi-kamaji-proxmox-template.yaml) template file:
```bash
clusterctl generate cluster $CLUSTER_NAME \
--from capi-kamaji-proxmox-template.yaml \
> capi-kamaji-proxmox-cluster.yaml
```
!!! warning "Customize the Template"
Before generating the cluster manifest, review and edit the template `capi-kamaji-proxmox-template.yaml` to customize it for your environment.
### Apply the Cluster Manifest
Apply the generated cluster manifest to provision the tenant cluster:
```bash
kubectl apply -f capi-kamaji-proxmox-cluster.yaml
```
Check the status of the cluster deployment using `clusterctl`:
```bash
clusterctl describe cluster $CLUSTER_NAME
```
and check the related Tenant Control Plane created on the Kamaji Management Cluster:
```bash
kubectl get tcp -n default
```
## Access the Tenant Cluster
To access the tenant cluster, you can extract the `kubeconfig` file from the Kamaji Management Cluster:
```bash
clusterctl get kubeconfig $CLUSTER_NAME \
> ~/.kube/$CLUSTER_NAME.kubeconfig
```
and use it to access the tenant cluster:
```bash
export KUBECONFIG=~/.kube/$CLUSTER_NAME.kubeconfig
kubectl cluster-info
```
## Delete the Tenant Cluster
For cluster deletion, use the following command:
```bash
kubectl delete cluster $CLUSTER_NAME
```
Always use `kubectl delete cluster $CLUSTER_NAME` to delete the tenant cluster. Using `kubectl delete -f capi-kamaji-proxmox-cluster.yaml` may lead to orphaned resources in some scenarios, as this method doesn't always respect ownership references between resources that were created after the initial deployment.
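To follow the teardown, you can watch the Cluster API resources while the deletion propagates; a simple sketch:
```bash
# Watch the Cluster and its Machines drain during deletion
kubectl get cluster,machines -n default -w
```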
## Install the Tenant Cluster as Helm Release
Alternatively, you can create a Tenant Cluster using the Helm Chart [cluster-api-kamaji-proxmox](https://github.com/clastix/cluster-api-kamaji-proxmox).
Create a Tenant Cluster as Helm Release:
```bash
helm repo add clastix https://clastix.github.io/cluster-api-kamaji-proxmox
helm repo update
helm install sample clastix/cluster-api-kamaji-proxmox \
--set cluster.name=sample \
--namespace default \
--values my-values.yaml
```
where `my-values.yaml` is a file containing the configuration values for the Tenant Cluster.

View File

@@ -0,0 +1,284 @@
# vSphere Infra Provider
Use the Cluster API [vSphere Infra Provider](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere) to create a fully functional Kubernetes cluster using the Cluster API [Kamaji Control Plane Provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).
## vSphere Requirements
You need to access a **vSphere** environment with the following requirements:
- The vSphere environment should be configured with a DHCP service in the primary VM network for your tenant clusters. Alternatively, you can use an [IPAM Provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster).
- Configure one Resource Pool across the hosts onto which the tenant clusters will be provisioned. Every host in the Resource Pool will need access to shared storage.
- A Template VM based on the published [OVA images](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere). For production-like environments, it is highly recommended to build and use your own custom OVA images. Take a look at the [image-builder](https://github.com/kubernetes-sigs/image-builder) project.
- To use the vSphere Container Storage Interface (CSI), your vSphere cluster needs support for Cloud Native Storage (CNS). CNS relies on a shared datastore. Ensure that your vSphere environment is properly configured to support CNS.
## Install the vSphere Infrastructure Provider
In order to use the vSphere Cluster API provider, you must be able to connect and authenticate to a **vCenter**. Ensure you have credentials to your vCenter server:
```bash
export VSPHERE_USERNAME="admin@vsphere.local"
export VSPHERE_PASSWORD="*******"
```
Install the vSphere Infrastructure Provider:
```bash
clusterctl init --infrastructure vsphere
```
## Install the IPAM Provider
If you intend to use IPAM to assign addresses to the nodes, you can use the in-cluster [IPAM provider](https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster) instead of relying on a DHCP service. To do so, initialize the Management Cluster with the `--ipam in-cluster` flag:
```bash
clusterctl init --ipam in-cluster
```
## Create a Tenant Cluster
Once all the controllers are up and running in the management cluster, you can generate and apply the cluster manifests of the tenant cluster you want to provision.
### Generate the Cluster Manifest using the template
Using `clusterctl`, you can generate a tenant cluster manifest for your vSphere environment. Set the environment variables to match your vSphere configuration.
For example:
```bash
# vSphere Configuration
export VSPHERE_SERVER="vcenter.vsphere.local"
export VSPHERE_DATACENTER="SDDC-Datacenter"
export VSPHERE_DATASTORE="DefaultDatastore"
export VSPHERE_NETWORK="VM Network"
export VSPHERE_RESOURCE_POOL="*/Resources"
export VSPHERE_FOLDER="kamaji-capi-pool"
export VSPHERE_TLS_THUMBPRINT="..."
export VSPHERE_STORAGE_POLICY="vSAN Storage Policy"
```
If you intend to use IPAM, set the environment variables to match your IPAM configuration.
For example:
```bash
# IPAM Configuration
export NODE_IPAM_POOL_RANGE="10.9.62.100-10.9.62.200"
export NODE_IPAM_POOL_PREFIX="24"
export NODE_IPAM_POOL_GATEWAY="10.9.62.1"
```
Set the environment variables to match your cluster configuration.
For example:
```bash
# Cluster Configuration
export CLUSTER_NAME="sample"
export CLUSTER_NAMESPACE="default"
export POD_CIDR="10.36.0.0/16"
export SVC_CIDR="10.96.0.0/16"
export CONTROL_PLANE_REPLICAS=2
export NAMESERVER="8.8.8.8"
export KUBERNETES_VERSION="v1.31.0"
export CPI_IMAGE_VERSION="v1.31.0"
```
Set the environment variables to match your machine configuration.
For example:
```bash
# Machine Configuration
export MACHINE_TEMPLATE="ubuntu-2404-kube-v1.31.0"
export MACHINE_DEPLOY_REPLICAS=2
export NODE_DISK_SIZE=25
export NODE_MEMORY_SIZE=8192
export NODE_CPU_COUNT=2
export SSH_USER="clastix"
export SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
```
The following command will generate a cluster manifest based on the [`capi-kamaji-vsphere-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-template.yaml) template file:
```bash
clusterctl generate cluster $CLUSTER_NAME \
--from capi-kamaji-vsphere-template.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
If you want to use DHCP instead of IPAM, use the [`capi-kamaji-vsphere-dhcp-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-dhcp-template.yaml) template file:
```bash
clusterctl generate cluster $CLUSTER_NAME \
--from capi-kamaji-vsphere-dhcp-template.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
### Additional cloud-init configuration
Cluster API machine templates are based on `cloud-init`. You can add additional `cloud-init` configuration to further customize the worker nodes by including an additional `cloud-init` file in the `KubeadmConfigTemplate`:
```yaml
kind: KubeadmConfigTemplate
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
metadata:
  name: ${CLUSTER_NAME}-md-0
spec:
  template:
    spec:
      files:
        - path: "/etc/cloud/cloud.cfg.d/99-custom.cfg"
          content: "${CLOUD_INIT_CONFIG:-}"
          owner: "root:root"
          permissions: "0644"
```
You can then set the `CLOUD_INIT_CONFIG` environment variable to include the additional configuration:
```bash
export CLOUD_INIT_CONFIG="#cloud-config package_update: true packages: - net-tools"
```
and include it in the `clusterctl generate cluster` command:
```bash
clusterctl generate cluster $CLUSTER_NAME \
--from capi-kamaji-vsphere-template.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
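Note that cloud-config is YAML, so the newlines in `CLOUD_INIT_CONFIG` matter; a sketch that preserves them, assuming the same packages as in the one-line example above:
```bash
# A quoted heredoc keeps the newlines intact inside the variable
export CLOUD_INIT_CONFIG="$(cat <<'EOF'
#cloud-config
package_update: true
packages:
  - net-tools
EOF
)"
```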
### Apply the Cluster Manifest
Apply the generated cluster manifest to create the tenant cluster:
```bash
kubectl apply -f capi-kamaji-vsphere-cluster.yaml
```
You can check the status of the cluster deployment with `clusterctl`:
```bash
clusterctl describe cluster $CLUSTER_NAME
```
You can check the status of the tenant cluster with `kubectl`:
```bash
kubectl get clusters -n default
```
and check the related Tenant Control Plane created on the Kamaji Management Cluster:
```bash
kubectl get tcp -n default
```
## Access the Tenant Cluster
To access the tenant cluster, you can extract the `kubeconfig` file from the Kamaji Management Cluster:
```bash
clusterctl get kubeconfig $CLUSTER_NAME \
> ~/.kube/$CLUSTER_NAME.kubeconfig
```
and use it to access the tenant cluster:
```bash
export KUBECONFIG=~/.kube/$CLUSTER_NAME.kubeconfig
kubectl cluster-info
```
## Cloud Controller Manager
The template file [`capi-kamaji-vsphere-template.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-template.yaml) includes the external [Cloud Controller Manager (CCM)](https://github.com/kubernetes/cloud-provider-vsphere) configuration for vSphere. The CCM is a Kubernetes controller that manages the cloud provider's resources.
Usually, the CCM is deployed on control plane nodes, but in Kamaji there are no dedicated Control Plane nodes, so the CCM is deployed on the worker nodes as a DaemonSet.
As an alternative, you can deploy the CCM as part of the Hosted Control Plane on the Management Cluster. To do so, the template file [`capi-kamaji-vsphere-template-ccm.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-template-ccm.yaml) includes the configuration for the CCM as part of the Kamaji Control Plane. This approach provides security benefits by isolating vSphere credentials from tenant users while maintaining full Cluster API integration.
The following command will generate a cluster manifest with the CCM installed on the Management Cluster:
```bash
clusterctl generate cluster $CLUSTER_NAME \
--from capi-kamaji-vsphere-template-ccm.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
Apply the generated cluster manifest to create the tenant cluster:
```bash
kubectl apply -f capi-kamaji-vsphere-cluster.yaml
```
## vSphere CSI Driver
The template file [`capi-kamaji-vsphere-template-csi.yaml`](https://raw.githubusercontent.com/clastix/cluster-api-control-plane-provider-kamaji/master/templates/vsphere/capi-kamaji-vsphere-template-csi.yaml) includes the [vSphere CSI Driver](https://github.com/kubernetes-sigs/vsphere-csi-driver) configuration for vSphere. The vSphere CSI Driver is a Container Storage Interface (CSI) driver that provides a way to use vSphere storage with Kubernetes.
This template file introduces a *"split configuration"* for the vSphere CSI Driver, with the CSI driver deployed on the worker nodes as a DaemonSet and the CSI Controller Manager deployed on the Management Cluster as part of the Hosted Control Plane. In this way, no vSphere credentials are required on the tenant cluster.
This split architecture enables:
* Tenant isolation from vSphere credentials
* Simplified networking requirements
* Centralized controller management
The template file also includes a default storage class for the vSphere CSI Driver.
Set the environment variables to match your storage configuration.
For example:
```bash
# Storage Configuration
export CSI_INSECURE="false"
export CSI_LOG_LEVEL="PRODUCTION" # or "DEVELOPMENT"
export CSI_STORAGE_CLASS_NAME="vsphere-csi"
```
The following command will generate a cluster manifest with split configuration for the vSphere CSI Driver:
```bash
clusterctl generate cluster $CLUSTER_NAME \
--from capi-kamaji-vsphere-template-csi.yaml \
> capi-kamaji-vsphere-cluster.yaml
```
Apply the generated cluster manifest to create the tenant cluster:
```bash
kubectl apply -f capi-kamaji-vsphere-cluster.yaml
```
## Delete the Tenant Cluster
For cluster deletion, use the following command:
```bash
kubectl delete cluster $CLUSTER_NAME
```
Always use `kubectl delete cluster $CLUSTER_NAME` to delete the tenant cluster. Using `kubectl delete -f capi-kamaji-vsphere-cluster.yaml` may lead to orphaned resources in some scenarios, as this method doesn't always respect ownership references between resources that were created after the initial deployment.
## Install the Tenant Cluster as Helm Release
Another option to create a Tenant Cluster is to use the Helm Chart [cluster-api-kamaji-vsphere](https://github.com/clastix/cluster-api-kamaji-vsphere).
!!! warning "Advanced Usage"
This Helm Chart provides several additional configuration options to customize the Tenant Cluster. Please refer to its documentation for more information. Make sure you are comfortable with Cluster API concepts and Kamaji before attempting to use it.
Create a Tenant Cluster as Helm Release:
```bash
helm repo add clastix https://clastix.github.io/cluster-api-kamaji-vsphere
helm repo update
helm install sample clastix/cluster-api-kamaji-vsphere \
--set cluster.name=sample \
--namespace default \
--values my-values.yaml
```
where `my-values.yaml` is a file containing the configuration values for the Tenant Cluster.

View File

@@ -1,54 +0,0 @@
# Concepts
**Kamaji** is a **Kubernetes Control Plane Manager**. It operates Kubernetes at scale with a fraction of the operational burden. Kamaji turns any Kubernetes cluster into a _“Management Cluster”_ to orchestrate other Kubernetes clusters called _“Tenant Clusters”_.
These are requirements of the design behind Kamaji:
- Communication between the _“Management Cluster”_ and a _“Tenant Cluster”_ is unidirectional. The _“Management Cluster”_ manages a _“Tenant Cluster”_, but a _“Tenant Cluster”_ has no awareness of the _“Management Cluster”_.
- Communication between different _“Tenant Clusters”_ is not allowed.
- The worker nodes of a tenant should not run anything beyond the tenant's workloads.
Goals and scope may vary as the project evolves.
## Tenant Control Plane
Kamaji is special because the Control Planes of the _“Tenant Clusters”_ are regular pods running in a namespace of the _“Management Cluster”_ instead of on dedicated machines. This solution makes running Control Planes at scale cheaper and easier to deploy and operate. The Tenant Control Plane components are packaged in the same way they run on bare metal or virtual nodes. We leverage the `kubeadm` code to set up the control plane components as if they were running on their own servers. The unchanged images of upstream `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager` are used.
High Availability and rolling updates of the Tenant Control Plane pods are provided by a regular Deployment. Autoscaling based on metrics is available. A Service is used to expose the Tenant Control Plane outside of the _“Management Cluster”_. The `LoadBalancer` service type is used; `NodePort` and `ClusterIP` are other viable options, depending on the case.
Kamaji offers a [Custom Resource Definition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) to provide a declarative approach to managing a Tenant Control Plane. This *CRD* is called `TenantControlPlane`, or `tcp` in short.
All the _“Tenant Clusters”_ built with Kamaji are fully compliant CNCF Kubernetes clusters and are compatible with the standard Kubernetes toolchains everybody knows and loves. See [CNCF compliance](reference/conformance.md).
## Tenant worker nodes
And what about the tenant worker nodes?
They are just _"worker nodes"_, i.e. regular virtual or bare metal machines, connecting to the API server of the Tenant Control Plane.
Kamaji's goal is to manage the lifecycle of hundreds of these _“Tenant Clusters”_, not just one. So how do you add another Tenant Cluster to Kamaji?
As you might expect, you just deploy a new Tenant Control Plane in one of the _“Management Cluster”_ namespaces, and then join the tenant worker nodes to it.
A [Cluster API ControlPlane provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji) has been released, offering a Cluster API-native declarative lifecycle by automating the worker node join.
## Datastores
Putting the Tenant Control Plane in a pod is the easiest part. We also have to make sure each Tenant Cluster can store and retrieve its state. Just as a Kubernetes cluster can be deployed with an external `etcd` cluster, we explored this option for the Tenant Control Planes. On the Management Cluster, you can deploy one or more multi-tenant `etcd` clusters to save the state of multiple Tenant Clusters. Kamaji offers a Custom Resource Definition called `DataStore` to provide a declarative approach to managing multiple datastores. By sharing a datastore between multiple tenants, resiliency is still guaranteed and the pod count remains under control, addressing the main goals of resiliency and cost optimization. The trade-off is that you have to operate the external datastores, in addition to the `etcd` of the _“Management Cluster”_, and manage access to be sure that each _“Tenant Cluster”_ uses only its own data.
### Other storage drivers
Kamaji offers the option of using a more capable datastore than `etcd` to save the state of multiple tenants' clusters. Thanks to the native [kine](https://github.com/k3s-io/kine) integration, you can run _MySQL_ or _PostgreSQL_ compatible databases as datastore for _“Tenant Clusters”_.
### Pooling
By default, Kamaji expects to persist all the _“Tenant Clusters”_ data in a single datastore that could be backed by different drivers. However, you can pick a different datastore for a specific set of _“Tenant Clusters”_ that could have different resources assigned or a different tiering. Pooling of multiple datastores is an option you can leverage for a very large set of _“Tenant Clusters”_, so you can distribute the load properly. As a future improvement, a _datastore scheduler_ feature is on the roadmap, so that Kamaji itself can automatically assign a _“Tenant Cluster”_ to the best datastore in the pool.
### Migration
In order to simplify Day 2 operations and reduce the operational burden, Kamaji provides the capability to live-migrate data from one datastore to another of the same driver, without manual, error-prone backup and restore operations.
> Currently, live data migration is only available between datastores having the same driver.
## Konnectivity
In addition to the standard control plane containers, Kamaji creates an instance of [konnectivity-server](https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/) running as a sidecar container in the `tcp` pod and exposed on port `8132` of the `tcp` service.
This is required when the tenant worker nodes are not reachable from the `tcp` pods. The Konnectivity service consists of two parts: the Konnectivity server in the tenant control plane pod and the Konnectivity agents running on the tenant worker nodes.
After worker nodes join the tenant control plane, the Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections. After enabling the Konnectivity service, all control plane to worker node traffic goes through these connections.
> In Kamaji, Konnectivity is enabled by default and can be disabled when not required.

View File

@@ -0,0 +1,36 @@
# Datastore
A critical part of any Kubernetes control plane is its datastore, the system that persists the cluster's state, configuration, and operational data. In Kamaji, this requirement is addressed with flexibility and scalability in mind, allowing you to choose the best storage backend for your needs and to manage many clusters efficiently.
Kamaji's architecture decouples the control plane from its underlying datastore. Instead of each Tenant Cluster running its own dedicated datastore instance, Kamaji enables you to share datastores across multiple Tenant Clusters, or assign a dedicated datastore to each Tenant Cluster where needed. This approach optimizes resource usage, simplifies operations, and supports a variety of backend technologies.
## Supported Datastore Backends
Kamaji supports several options for persisting Tenant Cluster state:
- **etcd:**
The default and most widely used Kubernetes datastore. You can deploy one or more etcd clusters in the Management Cluster and assign them to Tenant Control Planes as needed.
- **SQL Databases:**
For environments where etcd is not ideal, Kamaji integrates with [kine](https://github.com/k3s-io/kine), allowing you to use MySQL or PostgreSQL-compatible databases as the backend for Tenant Clusters.
!!! info "NATS"
The support of [NATS](https://nats.io/) is still experimental, mostly because multi-tenancy is not (yet) supported in NATS.
## Declarative Management
Datastores are managed declaratively using the `DataStore` Custom Resource Definition (CRD). This makes it easy to define, configure, and assign datastores to Tenant Control Planes, and fits naturally into GitOps and Infrastructure as Code workflows.
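As a minimal sketch, a `DataStore` pointing at an etcd endpoint might look like the following; the endpoint is a placeholder and the TLS configuration, required in practice, is omitted:
```bash
kubectl apply -f - <<'EOF'
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
  name: tenants-pool-01
spec:
  driver: etcd
  endpoints:
    - etcd-0.etcd.kamaji-system.svc.cluster.local:2379
EOF
```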
## Pooling and Scalability
By default, Kamaji can persist all Tenant Clusters' data in a single datastore, but you can also create pools of datastores and assign clusters based on resource requirements, performance needs, or organizational policies. This pooling capability is especially useful for large-scale environments, where distributing the load across multiple datastores ensures resilience and scalability.
Kamaji's roadmap includes a datastore scheduler, which will automatically assign new Tenant Clusters to the most appropriate datastore in the pool, further reducing operational overhead.
## Live Migration
Operational needs change over time, and Kamaji makes it easy to adapt. You can live-migrate a Tenant Cluster's data from one datastore to another, as long as they use the same backend driver, without manual backup and restore steps. This feature simplifies Day 2 operations and helps you optimize your infrastructure as your requirements evolve.
!!! info "Datastore Migration"
Currently, live data migration is only available between datastores having the same driver.
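Since the datastore is referenced by the `dataStore` field of the `TenantControlPlane`, a migration is triggered declaratively; a sketch with a hypothetical target datastore:
```bash
# Point the Tenant Control Plane at the new datastore; Kamaji live-migrates
# the data, provided both datastores use the same driver
kubectl patch tcp sample --type=merge -p '{"spec":{"dataStore":"tenants-pool-02"}}'
```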

View File

@@ -0,0 +1,57 @@
# High Level Overview
Kamaji is an open source Kubernetes Operator that transforms any Kubernetes cluster into a **Management Cluster** capable of orchestrating and managing multiple independent **Tenant Clusters**. This architecture is designed to simplify large-scale Kubernetes operations, reduce infrastructure costs, and provide strong isolation between tenants.
![Kamaji Architecture](../images/architecture.png)
## Architecture Overview
- **Management Cluster:**
The central cluster where Kamaji is installed. It hosts the control planes for all Tenant Clusters as regular Kubernetes pods, leveraging the Management Cluster's reliability, scalability, and operational features.
- **Tenant Clusters:**
These are user-facing Kubernetes clusters, each with its own dedicated control plane running as pods in the Management Cluster. Tenant Clusters are fully isolated from each other and unaware of the Management Cluster's existence.
- **Tenant Worker Nodes:**
Regular virtual or bare metal machines that join a Tenant Cluster by connecting to its control plane. These nodes run only tenant workloads, ensuring strong security and resource isolation.
## Design Principles
- **Unidirectional Management:**
The Management Cluster manages all Tenant Clusters. Communication is strictly one-way: Tenant Clusters do not have access to or awareness of the Management Cluster.
- **Strong Isolation:**
There is no communication between different Tenant Clusters. Each cluster is fully isolated at the control plane and data store level.
- **Declarative Operations:**
Kamaji leverages Kubernetes Custom Resource Definitions (CRDs) to provide a fully declarative approach to managing control planes, datastores, and other resources.
- **CNCF Compliance:**
Kamaji uses upstream, unmodified Kubernetes components and kubeadm for control plane setup, ensuring that all Tenant Clusters are [CNCF Certified Kubernetes](https://www.cncf.io/certification/software-conformance/) and compatible with standard Kubernetes tooling.
## Extensibility and Integrations
Kamaji is designed to integrate seamlessly with the broader cloud-native and enterprise ecosystem, enabling organizations to leverage their existing tools and infrastructure:
- **Infrastructure as Code:**
Kamaji works well with tools like [Terraform](https://www.terraform.io/) and [Ansible](https://www.ansible.com/) for automated cluster provisioning and management.
- **GitOps:**
Kamaji supports GitOps workflows, enabling you to manage cluster and tenant lifecycle declaratively through version-controlled repositories using tools like [Flux](https://fluxcd.io/) or [Argo CD](https://argo-cd.readthedocs.io/). This ensures consistency, auditability, and repeatability in your operations.
- **Cluster API Integration:**
Kamaji can be used as a [Cluster API Control Plane Provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji), enabling automated, declarative lifecycle management of clusters and worker nodes across any infrastructure.
- **Enterprise Addons:**
Additional features, such as Ingress management for Tenant Control Planes, are available as enterprise-grade addons.
## Learn More
Explore the following concepts to understand how Kamaji works under the hood:
- [Tenant Control Plane](tenant-control-plane.md)
- [Datastore](datastore.md)
- [Tenant Worker Nodes](tenant-worker-nodes.md)
- [Konnectivity](konnectivity.md)
Kamaji's architecture is designed for flexibility, scalability, and operational simplicity, making it an ideal solution for organizations managing multiple Kubernetes clusters at scale.

View File

@@ -0,0 +1,154 @@
# Konnectivity
In traditional Kubernetes deployments, the control plane components need to communicate directly with worker nodes for various operations, like executing commands in pods, retrieving logs, or managing port forwards.
However, in many real-world environments, especially those spanning multiple networks or cloud providers,
direct communication isn't always possible or desirable. This is where Konnectivity comes in.
## Understanding Konnectivity in Kamaji
Kamaji integrates [Konnectivity](https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/) as a core component of its architecture.
Each Tenant Control Plane pod includes a `konnectivity-server` running as a sidecar container,
which establishes and maintains secure tunnels with agents running on the worker nodes.
This design ensures reliable communication even in complex network environments.
The Konnectivity service consists of two main components:
1. **Konnectivity Server:**
Runs alongside the control plane components in each Tenant Control Plane pod and is exposed on port 8132.
It manages connections from worker nodes and routes traffic appropriately.
2. **Konnectivity Agent:**
Runs on worker nodes as a _DaemonSet_ or _Deployment_ and initiates outbound connections to its control plane's Konnectivity server.
These connections are maintained to create a reliable tunnel for all control plane to worker node communications.
## How It Works
When a worker node joins a Tenant Cluster, the Konnectivity agents automatically establish connections to their designated Konnectivity server.
These connections are maintained continuously, ensuring reliable communication paths between the control plane and worker nodes.
All traffic from the control plane to worker nodes flows through these established tunnels, enabling operations such as:
- Executing commands in pods
- Retrieving container logs
- Managing port forwards
- Collecting metrics and health information
- Running exec sessions for debugging
## Configuration and Management
Konnectivity is enabled by default in Kamaji, as it's considered a best practice for modern Kubernetes deployments.
However, it can be disabled if your environment has different requirements, or if you need to use alternative networking solutions.
The service is automatically configured when worker nodes join a cluster, without requiring any operational overhead.
The connection details are managed as part of the standard node bootstrap process,
making it transparent to cluster operators and users.
## Agent delivery mode
You can customise the Konnectivity Agent delivery mode via the Tenant Control Plane definition
using the field `tenantcontrolplane.spec.addons.konnectivity.agent.mode`.
```yaml
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: konnectivity-example
spec:
  controlPlane:
    deployment:
      replicas: 2
    service:
      serviceType: LoadBalancer
  kubernetes:
    version: "v1.33.0"
  networkProfile:
    port: 6443
  addons:
    konnectivity:
      server:
        port: 8132
      agent:
        ## DaemonSet, Deployment
        mode: DaemonSet
        ## When mode is Deployment, specify the desired Agent replicas
        # replicas: 2
```
Available strategies are the following:
- `DaemonSet`: runs on every node
- `Deployment`: useful to decrease the resource footprint in certain workloads cluster,
it allows customising also the amount of deployed replicas via the field
`tenantcontrolplane.spec.addons.konnectivity.agent.replicas`.
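As a sketch, switching an existing Tenant Control Plane to the `Deployment` mode using the field paths named above:
```bash
# Merge-patch the addon configuration: two agent replicas
# instead of one agent per node
kubectl patch tcp konnectivity-example --type=merge \
  -p '{"spec":{"addons":{"konnectivity":{"agent":{"mode":"Deployment","replicas":2}}}}}'
```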
---
By integrating Konnectivity as a core feature, Kamaji ensures that your Tenant Clusters can operate reliably and securely across any network topology,
making it easier to build and manage distributed Kubernetes environments at scale.
## Version compatibility between API Server and Konnectivity
In recent Kubernetes releases, Konnectivity has aligned its versioning with the Kubernetes API Server.
This means that, for example:
- Kubernetes v1.34.0 pairs with Konnectivity v0.34.0
- Kubernetes v1.33.0 pairs with Konnectivity v0.33.0
Within Kamaji, this version matching happens automatically.
The field `TenantControlPlane.spec.addons.konnectivity` determines the proper Konnectivity version for both the server and the agent,
ensuring compatibility with the tenant control plane's API Server version.
!!! warning "Konnectivity images could not be available!"
For the most recent Kubernetes releases, the corresponding Konnectivity image artifacts _may not yet be built and published_ by the upstream community.
In these cases, you may need to override the automatic pairing and configure a previous Konnectivity version that is available.
You can still run a version skew between the Kubernetes API Server of a given Tenant Control Plane and the Konnectivity components.
```yaml
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: konnectivity
  namespace: default
spec:
  addons:
    coreDNS: {}
    konnectivity:
      agent:
        hostNetwork: false
        image: registry.k8s.io/kas-network-proxy/proxy-agent
        mode: DaemonSet
        tolerations:
          - key: CriticalAddonsOnly
            operator: Exists
        version: v0.33.0
      server:
        image: registry.k8s.io/kas-network-proxy/proxy-server
        port: 8132
        version: v0.33.0
    kubeProxy: {}
  controlPlane:
    deployment:
      replicas: 2
    service:
      serviceType: LoadBalancer
  dataStore: etcd-kamaji-etcd
  kubernetes:
    kubelet:
      cgroupfs: systemd
      preferredAddressTypes:
        - InternalIP
        - ExternalIP
        - Hostname
    version: v1.34.0
  networkProfile:
    clusterDomain: cluster.local
    dnsServiceIPs:
      - 10.96.0.10
    podCidr: 10.244.0.0/16
    port: 6443
    serviceCidr: 10.96.0.0/16
```

View File

@@ -0,0 +1,29 @@
# Tenant Control Plane
Kamaji introduces a new way to manage Kubernetes control planes at scale. Instead of dedicating separate machines to each cluster's control plane, Kamaji runs every Tenant Cluster's control plane as a set of pods inside the Management Cluster. This design unlocks significant efficiencies: you can operate hundreds or thousands of isolated Kubernetes clusters on shared infrastructure, all while maintaining strong separation and reliability.
At the heart of this approach is Kamaji's commitment to upstream compatibility. The control plane components—`kube-apiserver`, `kube-scheduler`, and `kube-controller-manager`—are the same as those used in any CNCF-compliant Kubernetes cluster. Kamaji uses `kubeadm` for setup and lifecycle management, so you get the benefits of a standard, certified Kubernetes experience.
## How It Works
When you want to create a new Tenant Cluster, you simply define a `TenantControlPlane` resource in the Management Cluster. Kamaji's controllers take over from there, deploying the necessary control plane pods, configuring networking, and connecting to the appropriate datastore. The control plane is exposed via a Kubernetes Service—by default as a `LoadBalancer`, but you can also use `NodePort` or `ClusterIP` depending on your needs.
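A minimal sketch of such a resource, mirroring the fields used in the examples elsewhere in these docs (defaults cover the rest):
```bash
kubectl apply -f - <<'EOF'
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-sample
  namespace: default
spec:
  controlPlane:
    deployment:
      replicas: 2
    service:
      serviceType: LoadBalancer
  kubernetes:
    version: v1.31.0
  networkProfile:
    port: 6443
EOF
```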
Worker nodes, whether virtual machines or bare metal, join the Tenant Cluster by connecting to its control plane endpoint. This process is compatible with standard Kubernetes tools and can be automated using Cluster API or other infrastructure automation solutions.
## Highlights
- **Efficiency and Scale:**
By running control planes as pods, Kamaji reduces the infrastructure and operational overhead of managing many clusters.
- **High Availability and Automation:**
Control plane pods are managed by Kubernetes Deployments, enabling rolling updates, self-healing, and autoscaling. Kamaji automates the entire lifecycle, from creation to deletion.
- **Declarative and GitOps:**
The `TenantControlPlane` custom resource allows you to manage clusters declaratively, fitting perfectly with GitOps and Infrastructure as Code workflows.
- **Seamless Integration:**
Kamaji works with Cluster API, supports a variety of datastores, and is compatible with the full Kubernetes ecosystem.
Kamaji's Tenant Control Plane model is designed for organizations that need to deliver robust, production-grade Kubernetes clusters at scale—whether for internal platform engineering, managed services, or multi-tenant environments.

View File

@@ -0,0 +1,53 @@
# Tenant Worker Nodes
While Kamaji innovates in how control planes are managed, Tenant Worker Nodes remain true to their Kubernetes roots: they are regular virtual machines or bare metal servers that run your workloads. What makes them special in Kamaji's architecture is how they integrate with the containerized control planes and how they can be managed at scale across diverse infrastructure environments.
## Understanding Worker Nodes in Kamaji
In a Kamaji-managed cluster, worker nodes connect to their Tenant Control Plane just as they would in a traditional Kubernetes setup. The key difference is that the control plane they're connecting to runs as pods within the Management Cluster, rather than on dedicated machines. This architectural choice maintains compatibility with existing tools and workflows while enabling more efficient resource utilization.
Each worker node belongs to exactly one Tenant Cluster and runs only that tenant's workloads. This clear separation ensures strong isolation between different tenants' applications and data, making Kamaji suitable for multi-tenant environments.
## Infrastructure Flexibility
Your worker nodes can run:
- On bare metal servers in a data center
- As virtual machines in private clouds
- On public cloud instances
- At edge locations
- In hybrid or multi-cloud configurations
This flexibility allows you to place workloads where they make the most sense for your use case, whether that's close to users, near data sources, or in specific regulatory environments.
## Lifecycle Management Options
Kamaji supports multiple approaches to managing worker node lifecycles:
### Manual Management
For simple setups or specific requirements, you can join worker nodes to their Tenant Clusters using standard `kubeadm` commands. This process is familiar to Kubernetes administrators and works just as it would with traditionally deployed clusters.
!!! tip "yaki"
See the [yaki](https://goyaki.clastix.io/) script, which you can modify for your preferred operating system and version. The provided script is just a facility: it assumes all worker nodes are running `Ubuntu`. Make sure to adapt the script if you're using a different OS distribution.
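For reference, the manual join is the standard `kubeadm` flow; a sketch with placeholder values:
```bash
# Run on the worker machine after installing a container runtime, kubelet,
# and kubeadm; the endpoint, token, and CA hash come from the Tenant Control Plane
sudo kubeadm join tenant-cp.example.com:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```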
### Automation Tools
You can use standard infrastructure automation tools to manage worker nodes:
- Terraform for infrastructure provisioning
- Ansible for configuration management
### Cluster API Integration
For more sophisticated automation, Kamaji provides a [Cluster API Control Plane Provider](https://github.com/clastix/cluster-api-control-plane-provider-kamaji).
This integration enables:
- Declarative management of both tenant control planes and tenant worker nodes
- Automated scaling and updates
- Integration with infrastructure providers for major cloud platforms
- Consistent management across different environments
---
Kamaji's approach to worker nodes combines the familiarity of traditional Kubernetes with the flexibility to run anywhere and the ability to manage at scale. Whether you're building a private cloud platform, offering Kubernetes as a service, or managing edge computing infrastructure, Kamaji provides the tools and patterns you need.

View File

@@ -31,7 +31,8 @@ Following is the list of supported Ingress Controllers:
- [HAProxy Technologies Kubernetes Ingress](https://github.com/haproxytech/kubernetes-ingress)
!!! info "Other Ingress Controllers"
Active subscribers can request support for additional Ingress Controller flavours.
## How to enable the Addon
@@ -89,9 +90,7 @@ spec:
```
The pattern for the generated hosts is the following:
`${tcp.namespace}-${tcp.name}.{k8s|konnectivity}.${ADDON_ANNOTATION_VALUE}`. Please notice the `konnectivity` rule will be created only if the `konnectivity` addon has been enabled.
## Infrastructure requirements
@@ -142,7 +141,8 @@ spec:
The `ingressClassName` value must match a non-handled `IngressClass` object;
the addon will take care of generating the correct object.
!!! warning "Use the right port"
The `hostname` field must absolutely point to the 443 port!
### Kubernetes components extra Arguments

View File

@@ -1,6 +1,16 @@
# Getting started
This section explains how to get started with Kamaji on different environments:
!!! success "Slow Start"
The material provided in this section is intended to be a slow start to Kamaji.
It is meant to be a thorough learning experience, helping you get started with Kamaji while understanding the components involved and the core concepts behind it. We do not provide any "one-click" deployment here.
- [Getting started with Kamaji on Kind](./kamaji-kind.md)
- [Getting started with Kamaji on generic infra](./kamaji-generic.md)
- [Getting started with Kamaji on EKS](./kamaji-aws.md)
- [Getting started with Kamaji on AKS](./kamaji-azure.md)

View File

@@ -1,12 +1,12 @@
# Kamaji on AWS
This guide will lead you through the process of creating a working Kamaji setup on AWS.
The guide requires:
- a bootstrap machine
- a Kubernetes cluster (EKS) to run the Management and Tenant Control Planes
- an arbitrary number of machines to host Tenant workloads.
## Summary
@@ -36,13 +36,11 @@ We assume you have installed on the bootstrap machine:
Make sure you have a valid AWS Account, and login to AWS:
> The easiest way to get started with AWS is to create [access keys](https://docs.aws.amazon.com/cli/v1/userguide/cli-authentication-user.html#cli-authentication-user-configure.title) associated with your account
```bash
aws configure
```
## Access Management cluster
In Kamaji, a Management Cluster is a regular Kubernetes cluster which hosts zero to many Tenant Cluster Control Planes. The Management Cluster acts as a cockpit for all the Tenant clusters and implements monitoring, logging, and governance of all the Kamaji setups, including all Tenant Clusters. For this guide, we're going to use an instance of AWS Kubernetes Service (EKS) as a Management Cluster.
@@ -66,7 +64,9 @@ In order to create quickly an EKS cluster, we will use `eksctl` provided by AWS.
For our use case, we will create an EKS cluster with the following configuration:
```bash
source kamaji-aws.env
cat > eks-cluster.yaml <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
@@ -97,7 +97,6 @@ addons:
EOF
eksctl create cluster -f eks-cluster.yaml
```
Please note:
@@ -114,22 +113,20 @@ aws eks update-kubeconfig --region ${KAMAJI_REGION} --name ${KAMAJI_CLUSTER}
kubectl cluster-info
# make ebs as a default storage class
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
### Add Route 53 domain
In order to easily access tenant clusters, it is recommended to create a Route 53 domain or use an existing one.
```bash
# for within VPC
aws route53 create-hosted-zone --name "$TENANT_DOMAIN" --caller-reference $(date +%s) --vpc "VPCRegion=$KAMAJI_REGION,VPCId=$KAMAJI_VPC_ID"
```
## Install Kamaji
Follow the [Getting Started](kamaji-generic.md) to install Cert Manager and the Kamaji Controller.
### Install Cert Manager
@@ -146,12 +143,11 @@ helm install \
--set installCRDs=true
```
### Install ExternalDNS (optional)
ExternalDNS allows updating your DNS records dynamically from an annotation that you add in the service within EKS. Run the following commands to install the ExternalDNS Helm chart:
```bash
helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm repo update
helm install external-dns external-dns/external-dns \
@@ -160,17 +156,28 @@ helm install external-dns external-dns/external-dns \
--version 1.15.1
```
### Install Kamaji Controller
Installing Kamaji via Helm charts is the preferred way. Run the following commands to install a stable release of Kamaji:
```bash
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm install kamaji clastix/kamaji -n kamaji-system --create-namespace --version 0.0.0+latest
```
!!! note "Alternative: Two-Step Installation"
The kamaji chart includes CRDs. For separate CRD management (GitOps, policy requirements), install CRDs first:
```bash
# Step 1: Install CRDs
helm install kamaji-crds clastix/kamaji-crds -n kamaji-system --create-namespace --version 0.0.0+latest
# Step 2: Install Kamaji operator
helm install kamaji clastix/kamaji -n kamaji-system --version 0.0.0+latest
```
## Create a Tenant Cluster
Now that our management cluster is up and running, we can create a Tenant Cluster. A Tenant Cluster is a Kubernetes cluster that is managed by Kamaji.
@@ -183,11 +190,8 @@ Before creating a Tenant Control Plane, you need to define some variables:
```bash
export KAMAJI_VPC_ID=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$KAMAJI_VPC_NAME" --query "Vpcs[0].VpcId" --output text)
export KAMAJI_PUBLIC_SUBNET_ID=$(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$KAMAJI_VPC_ID" --filters "Name=tag:Name,Values=$KAMAJI_PUBLIC_SUBNET_NAME" --query "Subnets[0].SubnetId" --output text)
export TENANT_EIP_ID=$(aws ec2 allocate-address --query 'AllocationId' --output text)
export TENANT_PUBLIC_IP=$(aws ec2 describe-addresses --allocation-ids $TENANT_EIP_ID --query 'Addresses[0].PublicIp' --output text)
```
In the next step, we will create a Tenant Control Plane with the following configuration:
@@ -287,20 +291,22 @@ it is important to provide a static public IP address for the API server in orde
- The following annotation: `external-dns.alpha.kubernetes.io/hostname` is set to create the DNS record. It tells AWS to expose the Tenant Control Plane with a public domain name: `${TENANT_NAME}.${TENANT_DOMAIN}`.
Since the AWS Load Balancer does not support setting `LoadBalancerIP`, you will get the following warning on the Service created for the Tenant Control Plane: `Error syncing load balancer: failed to ensure load balancer: LoadBalancerIP cannot be specified for AWS ELB`. You can ignore it for now.
### Working with Tenant Control Plane
Check the access to the Tenant Control Plane:
```bash
curl -k https://${TENANT_NAME}.${TENANT_DOMAIN}:${TENANT_PORT}/healthz
curl -k https://${TENANT_NAME}.${TENANT_DOMAIN}:${TENANT_PORT}/version
```
!!! warning "Using Private Domains"
If the domain you used is a private __Route 53__ domain, make sure to map the public IP of the LoadBalancer to `${TENANT_NAME}.${TENANT_DOMAIN}` in your `/etc/hosts`. Otherwise, `kubectl` will fail to check SSL certificates.
Let's retrieve the `kubeconfig` in order to work with it:
```bash
@@ -332,7 +338,7 @@ NAME ENDPOINTS AGE
kubernetes 13.37.33.12:6443 3m22s
```
### Join worker nodes
The Tenant Control Plane is made of pods running in the Kamaji Management Cluster. At this point, the Tenant Cluster has no worker nodes. So, the next step is to join some worker nodes to the Tenant Control Plane.
@@ -349,34 +355,27 @@ TENANT_ADDR=$(kubectl -n ${TENANT_NAMESPACE} get svc ${TENANT_NAME} -o json | jq
JOIN_CMD=$(echo "sudo kubeadm join ${TENANT_ADDR}:6443 ")$(kubeadm --kubeconfig=${TENANT_NAMESPACE}-${TENANT_NAME}.kubeconfig token create --ttl 0 --print-join-command |cut -d" " -f4-)
```
!!! tip "Token expiration"
Setting `--ttl=0` on the `kubeadm token create` will guarantee that the token will never expire and can be used every time. It's not intended for production-grade setups.
### Create tenant worker nodes
In this section, we will use AMIs provided by CAPA (Cluster API Provider AWS) to create the worker nodes. These AMIs are built using [image builder](https://github.com/kubernetes-sigs/image-builder/tree/main) and contain all the necessary components to join the cluster.
```bash
export KAMAJI_PRIVATE_SUBNET_ID=$(aws ec2 describe-subnets --filters "Name=vpc-id,Values=$KAMAJI_VPC_ID" --filters "Name=tag:Name,Values=$KAMAJI_PRIVATE_SUBNET_NAME" --query "Subnets[0].SubnetId" --output text)
export WORKER_AMI=$(clusterawsadm ami list --kubernetes-version=$TENANT_VERSION --os=ubuntu-24.04 --region=$KAMAJI_REGION -o json | jq -r .items[0].spec.imageID)
cat <<EOF >> worker-user-data.sh
#!/bin/bash
$JOIN_CMD
EOF
aws ec2 run-instances --image-id $WORKER_AMI --instance-type "t2.medium" --user-data $(cat worker-user-data.sh | base64 -w0) --network-interfaces '[{"SubnetId":'"'${KAMAJI_PRIVATE_SUBNET_ID}'"',"AssociatePublicIpAddress":false,"DeviceIndex":0,"Groups":["<REPLACE_WITH_SG>"]}]' --count "1"
```
We used user data to run the `kubeadm join` command at instance boot, so the worker node joins the cluster automatically.
Make sure to replace `<REPLACE_WITH_SG>` with the ID of a security group that allows the worker nodes to communicate with the public IP of the Tenant Control Plane.
Checking the nodes in the Tenant Cluster:
@@ -422,5 +421,6 @@ To get rid of the whole Kamaji infrastructure, remove the EKS cluster:
```bash
eksctl delete cluster -f eks-cluster.yaml
```
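If you allocated an Elastic IP for the Tenant Control Plane earlier, remember to release it as well (assuming `$TENANT_EIP_ID` is still set):
```bash
aws ec2 release-address --allocation-id $TENANT_EIP_ID
```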
That's all folks!

Some files were not shown because too many files have changed in this diff.