Compare commits


10 Commits

Author SHA1 Message Date
dependabot[bot]
3d2444e646 feat(deps): bump k8s.io/klog/v2 from 2.130.1 to 2.140.0 in the k8s group (#1100)
Bumps the k8s group with 1 update: [k8s.io/klog/v2](https://github.com/kubernetes/klog).


Updates `k8s.io/klog/v2` from 2.130.1 to 2.140.0
- [Release notes](https://github.com/kubernetes/klog/releases)
- [Changelog](https://github.com/kubernetes/klog/blob/main/RELEASE.md)
- [Commits](https://github.com/kubernetes/klog/compare/v2.130.1...2.140.0)

---
updated-dependencies:
- dependency-name: k8s.io/klog/v2
  dependency-version: 2.140.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: k8s
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-09 14:18:32 +01:00
Dario Tranchitella
cedd0f642c fix(datastore): consistent password update if user exists (#1097)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2026-03-09 14:14:27 +01:00
Dario Tranchitella
e4da581e69 fix: reinit kubelet configuration upon patch for op remove (#1099)
* fix: reinit kubelet configuration upon patch for op remove

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* fix(docs): updating cluster api

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2026-03-09 14:13:48 +01:00
dependabot[bot]
6ed71b1e3e feat(deps): bump k8s.io/kubernetes in the k8s group (#1092)
Bumps the k8s group with 1 update: [k8s.io/kubernetes](https://github.com/kubernetes/kubernetes).


Updates `k8s.io/kubernetes` from 1.35.1 to 1.35.2
- [Release notes](https://github.com/kubernetes/kubernetes/releases)
- [Commits](https://github.com/kubernetes/kubernetes/compare/v1.35.1...v1.35.2)

---
updated-dependencies:
- dependency-name: k8s.io/kubernetes
  dependency-version: 1.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: k8s
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-03 10:57:19 +01:00
Dario Tranchitella
a5bfbaaf72 feat!: cidr validation via cel (#1095)
* fix(docs): container probes

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(api): cidr validation using cel

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* refactor: cidr validation is offloaded to cel

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* feat(test): integration test for cidr and ip cel functions

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

* docs: bumping up minimum management cluster version

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>

---------

Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2026-03-02 14:30:10 +01:00
Dario Tranchitella
adaaef0857 chore(goreleaser): prerelease must be false (#1096)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2026-03-02 09:48:58 +01:00
Dario Tranchitella
7ad75e8216 chore(release)!: switch to goreleaser and migrating tag format (#1094)
Signed-off-by: Dario Tranchitella <dario@tranchitella.eu>
2026-03-01 21:53:41 +01:00
Aleksei Sviridkin
69d62273c2 feat(deployment): make startup probe failure threshold configurable (#1086)
* feat(deployment): make startup probe failure threshold configurable

Add StartupProbeFailureThreshold field to TenantControlPlane CRD
DeploymentSpec, allowing users to configure how many consecutive
startup probe failures are tolerated before a container is considered
failed. The value is applied to all control plane components
(kube-apiserver, controller-manager, and scheduler).

Defaults to 3 (preserving current behavior). With PeriodSeconds=10,
the total startup timeout equals FailureThreshold * 10 seconds.
Setting this to 30 gives 5 minutes, which is useful for
resource-constrained environments.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>

* chore: regenerate CRD manifests for startupProbeFailureThreshold

Run `make manifests` to update Helm CRD files with the new
startupProbeFailureThreshold field in DeploymentSpec.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>

* feat(deployment): expand configurable probes to all probe types

Replace StartupProbeFailureThreshold with a full Probes config
supporting liveness, readiness, and startup probes with configurable
TimeoutSeconds, PeriodSeconds, and FailureThreshold parameters.
Use ptr.Deref for safe pointer dereferencing.

Ref: #471

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>

* chore: regenerate CRD manifests and API documentation

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>

* feat(deployment): add per-component probe overrides and expand ProbeSpec

Add cascading probe configuration: global defaults → per-component
overrides (apiServer, controllerManager, scheduler). Expand ProbeSpec
with InitialDelaySeconds and SuccessThreshold fields.

Ref: #471

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>

* chore: regenerate CRD manifests and API documentation

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Aleksei Sviridkin <f@lex.la>

---------

Signed-off-by: Aleksei Sviridkin <f@lex.la>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-24 16:20:06 +01:00
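The startup-timeout arithmetic described in the commit message above (total window = FailureThreshold × PeriodSeconds) can be sketched in Go. Here `deref` is a stand-in for `k8s.io/utils/ptr.Deref` mentioned in the commit, and `ProbeSpec` is a trimmed, illustrative version of the new API type, not the full definition.

```go
package main

import "fmt"

// deref stands in for k8s.io/utils/ptr.Deref: dereference p if non-nil,
// otherwise fall back to def.
func deref[T any](p *T, def T) T {
	if p != nil {
		return *p
	}
	return def
}

// ProbeSpec is a trimmed stand-in for the new API type: pointer fields
// distinguish "unset" from an explicit zero.
type ProbeSpec struct {
	PeriodSeconds    *int32
	FailureThreshold *int32
}

// startupWindowSeconds computes the worst-case startup window:
// FailureThreshold consecutive failures, one probe every PeriodSeconds.
func startupWindowSeconds(p *ProbeSpec) int32 {
	if p == nil {
		p = &ProbeSpec{}
	}
	period := deref(p.PeriodSeconds, 10)     // period assumed by the commit message
	failures := deref(p.FailureThreshold, 3) // previous hard-coded behaviour
	return period * failures
}

func main() {
	fmt.Println(startupWindowSeconds(nil)) // defaults: 3 * 10 = 30 seconds
	ft := int32(30)
	fmt.Println(startupWindowSeconds(&ProbeSpec{FailureThreshold: &ft})) // 30 * 10 = 300 seconds (5 minutes)
}
```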
Patryk Rostkowski
b3ddfcda27 docs: add externalClusterReference with CAPI usage (#1088)
Signed-off-by: Patryk Rostkowski <patrostkowski@gmail.com>
2026-02-24 10:21:41 +01:00
dependabot[bot]
b13eca045c feat(deps): bump github.com/nats-io/nats.go from 1.48.0 to 1.49.0 (#1091)
Bumps [github.com/nats-io/nats.go](https://github.com/nats-io/nats.go) from 1.48.0 to 1.49.0.
- [Release notes](https://github.com/nats-io/nats.go/releases)
- [Commits](https://github.com/nats-io/nats.go/compare/v1.48.0...v1.49.0)

---
updated-dependencies:
- dependency-name: github.com/nats-io/nats.go
  dependency-version: 1.49.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-24 10:21:29 +01:00
35 changed files with 5646 additions and 321 deletions


@@ -1,10 +0,0 @@
This edge release can be pulled from Docker Hub as follows:
```
docker pull clastix/kamaji:$TAG
```
> As from the v1.0.0 release, CLASTIX no longer provides stable release artefacts.
>
> Stable release artefacts are offered on a subscription basis by CLASTIX, the main Kamaji project contributor.
> Learn more from CLASTIX's [Support](https://clastix.io/support/) section.


@@ -2,17 +2,8 @@ name: Container image build
on:
push:
tags:
- edge-*
- v*
branches:
- master
workflow_dispatch:
inputs:
tag:
description: "Tag to build"
required: true
type: string
jobs:
ko:
@@ -21,17 +12,19 @@ jobs:
- uses: actions/checkout@v6
with:
fetch-depth: 0
- uses: actions/setup-go@v6
with:
go-version-file: go.mod
- name: "ko: install"
run: make ko
- name: "ko: login to quay.io container registry"
run: ./bin/ko login quay.io -u ${{ secrets.QUAY_IO_USERNAME }} -p ${{ secrets.QUAY_IO_TOKEN }}
- name: "ko: login to docker.io container registry"
run: ./bin/ko login docker.io -u ${{ secrets.DOCKER_IO_USERNAME }} -p ${{ secrets.DOCKER_IO_TOKEN }}
- name: "ko: build and push tag"
run: make VERSION=${{ github.event.inputs.tag }} KO_LOCAL=false KO_PUSH=true build
if: github.event_name == 'workflow_dispatch'
- name: "ko: build and push latest"
run: make VERSION=latest KO_LOCAL=false KO_PUSH=true build


@@ -15,8 +15,9 @@ jobs:
- uses: actions/checkout@v6
with:
fetch-depth: 0
- name: generating date metadata
id: date
- name: "tag: compute"
id: git
run: |
CURRENT_DATE=$(date -u +'%Y-%m-%d')
YY=$(date -u +'%y')
@@ -24,52 +25,36 @@ jobs:
FIRST_OF_MONTH=$(date -u -d "$CURRENT_DATE" +%Y-%m-01)
WEEK_NUM=$(( (($(date -u +%s) - $(date -u -d "$FIRST_OF_MONTH" +%s)) / 86400 + $(date -u -d "$FIRST_OF_MONTH" +%u) - 1) / 7 + 1 ))
echo "yy=$YY" >> $GITHUB_OUTPUT
echo "month=$M" >> $GITHUB_OUTPUT
echo "week=$WEEK_NUM" >> $GITHUB_OUTPUT
echo "date=$CURRENT_DATE" >> $GITHUB_OUTPUT
- name: generating tag metadata
id: tag
run: |
TAG="edge-${{ steps.date.outputs.yy }}.${{ steps.date.outputs.month }}.${{ steps.date.outputs.week }}"
TAG="$YY.$M.$WEEK_NUM-edge"
echo "tag=$TAG" >> $GITHUB_OUTPUT
- name: generate release notes from template
- name: "tag: push"
run: |
export TAG="${{ steps.tag.outputs.tag }}"
envsubst < .github/release-template.md > release-notes.md
- name: generate release notes from template
run: |
export TAG="${{ steps.tag.outputs.tag }}"
envsubst < .github/release-template.md > release-notes-header.md
- name: generate GitHub release notes
git tag ${{ steps.git.outputs.tag }}
git push origin ${{ steps.git.outputs.tag }}
- uses: actions/setup-go@v6
with:
go-version-file: go.mod
- name: "deps: installing ko"
run: make ko
- name: "ko: login to quay.io container registry"
run: ./bin/ko login quay.io -u ${{ secrets.QUAY_IO_USERNAME }} -p ${{ secrets.QUAY_IO_TOKEN }}
- name: "ko: login to docker.io container registry"
run: ./bin/ko login docker.io -u ${{ secrets.DOCKER_IO_USERNAME }} -p ${{ secrets.DOCKER_IO_TOKEN }}
- name: "path: expanding with local binaries"
run: echo "${{ github.workspace }}/bin" >> $GITHUB_PATH
- name: "goreleaser: release"
uses: goreleaser/goreleaser-action@v7
with:
distribution: goreleaser
version: "~> v2"
args: release --clean
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
gh release --repo "$GITHUB_REPOSITORY" \
create "${{ steps.tag.outputs.tag }}" \
--generate-notes \
--draft \
--title "temp" \
--notes "temp" > /dev/null || true
gh release view "${{ steps.tag.outputs.tag }}" \
--json body --jq .body > auto-notes.md
gh release delete "${{ steps.tag.outputs.tag }}" --yes || true
- name: combine notes
run: |
cat release-notes-header.md auto-notes.md > release-notes.md
- name: create GitHub release
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
gh release create "${{ steps.tag.outputs.tag }}" \
--title "${{ steps.tag.outputs.tag }}" \
--notes-file release-notes.md
- name: trigger container build workflow
env:
GH_TOKEN: ${{ secrets.WORKFLOW_TOKEN }}
run: |
gh workflow run "Container image build" \
--ref master \
-f tag="${{ steps.tag.outputs.tag }}"
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.gitignore (vendored, 1 line changed)

@@ -38,3 +38,4 @@ bin
!deploy/kine/mysql/server-csr.json
!deploy/kine/nats/server-csr.json
charts/kamaji/charts
dist

.goreleaser.yaml (new file, 91 lines added)

@@ -0,0 +1,91 @@
# yaml-language-server: $schema=https://goreleaser.com/static/schema.json
version: 2
project_name: kamaji
builds:
- id: kamaji
main: .
binary: "{{ .ProjectName }}-{{ .Os }}-{{ .Arch }}"
env:
- CGO_ENABLED=0
flags:
- -trimpath
mod_timestamp: '{{ .CommitTimestamp }}'
ldflags:
- "-X github.com/clastix/kamaji/internal.GitCommit={{.Commit}}"
- "-X github.com/clastix/kamaji/internal.GitTag={{.Tag}}"
- "-X github.com/clastix/kamaji/internal.GitDirty={{ if eq .GitTreeState \"dirty\" }}.dev{{ end }}"
- "-X github.com/clastix/kamaji/internal.BuildTime={{.Date}}"
- "-X github.com/clastix/kamaji/internal.GitRepo={{ .GitURL }}"
goos:
- linux
goarch:
- amd64
- arm
- arm64
kos:
- repositories:
- docker.io/clastix/kamaji
- quay.io/clastix/kamaji
tags:
- "{{ .Tag }}"
bare: true
preserve_import_paths: false
platforms:
- linux/amd64
- linux/arm64
- linux/arm
release:
footer: |
**Container Images**
```
docker pull clastix/{{ .ProjectName }}:{{ .Tag }}
```
> This is an **edge release** and is intended for testing and evaluation purposes only.
> It may include experimental features and does not provide the stability guarantees of a production-ready build.
>
> **Stable release artefacts** are available on a subscription basis from CLASTIX,
> the primary contributor to the Kamaji project.
>
> For production-grade releases and enterprise support,
> please refer to CLASTIX's [Support](https://clastix.io/support/) offerings.
**Full Changelog**: https://github.com/clastix/{{ .ProjectName }}/compare/{{ .PreviousTag }}...{{ .Tag }}
changelog:
sort: asc
use: github
filters:
exclude:
- 'merge conflict'
- Merge pull request
- Merge remote-tracking branch
- Merge branch
groups:
- title: '🛠 Dependency updates'
regexp: '^.*?(feat|fix)\(deps\)!?:.+$'
order: 300
- title: '✨ New Features'
regexp: '^.*?feat(\([[:word:]]+\))??!?:.+$'
order: 100
- title: '🐛 Bug fixes'
regexp: '^.*?fix(\([[:word:]]+\))??!?:.+$'
order: 200
- title: '📖 Documentation updates'
regexp: ^.*?docs(\([[:word:]]+\))??!?:.+$
order: 400
- title: '🛡️ Security updates'
regexp: ^.*?(sec)(\([[:word:]]+\))??!?:.+$
order: 500
- title: '🚀 Build process updates'
regexp: ^.*?(build|ci)(\([[:word:]]+\))??!?:.+$
order: 600
- title: '📦 Other work'
order: 9999
checksum:
name_template: "checksums.txt"
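The changelog grouping above can be exercised outside goreleaser with Go's `regexp` package, since the patterns are RE2. The sketch below is abridged to the first few groups and assumes first-match-in-config-order semantics; goreleaser's exact grouping behaviour may differ.

```go
package main

import (
	"fmt"
	"regexp"
)

// group mirrors one entry of the changelog.groups list above.
type group struct {
	title string
	re    *regexp.Regexp // nil marks the catch-all group
}

// Same patterns as .goreleaser.yaml, in config order (security and build
// groups omitted for brevity).
var groups = []group{
	{"🛠 Dependency updates", regexp.MustCompile(`^.*?(feat|fix)\(deps\)!?:.+$`)},
	{"✨ New Features", regexp.MustCompile(`^.*?feat(\([[:word:]]+\))??!?:.+$`)},
	{"🐛 Bug fixes", regexp.MustCompile(`^.*?fix(\([[:word:]]+\))??!?:.+$`)},
	{"📖 Documentation updates", regexp.MustCompile(`^.*?docs(\([[:word:]]+\))??!?:.+$`)},
	{"📦 Other work", nil},
}

// classify returns the title of the first group whose pattern matches
// the commit subject line.
func classify(subject string) string {
	for _, g := range groups {
		if g.re == nil || g.re.MatchString(subject) {
			return g.title
		}
	}
	return ""
}

func main() {
	fmt.Println(classify("feat(deps): bump k8s.io/klog/v2"))             // 🛠 Dependency updates
	fmt.Println(classify("feat!: cidr validation via cel"))              // ✨ New Features
	fmt.Println(classify("fix(datastore): consistent password update"))  // 🐛 Bug fixes
	fmt.Println(classify("docs: add externalClusterReference"))          // 📖 Documentation updates
	fmt.Println(classify("chore(goreleaser): prerelease must be false")) // 📦 Other work
}
```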


@@ -47,6 +47,9 @@ GOLANGCI_LINT ?= $(LOCALBIN)/golangci-lint
HELM ?= $(LOCALBIN)/helm
KIND ?= $(LOCALBIN)/kind
KO ?= $(LOCALBIN)/ko
GORELEASER ?= $(LOCALBIN)/goreleaser
COSIGN ?= $(LOCALBIN)/cosign
SYFT ?= $(LOCALBIN)/syft
YQ ?= $(LOCALBIN)/yq
ENVTEST ?= $(LOCALBIN)/setup-envtest
@@ -68,8 +71,34 @@ all: build
help: ## Display this help.
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
##@ Documentation
.PHONY: docs
docs: ## Serve documentation locally with Docker.
docker run --rm -it \
-p 8000:8000 \
-v "$${PWD}/docs":/docs:Z \
-w /docs \
squidfunk/mkdocs-material \
serve -a 0.0.0.0:8000
##@ Binary
.PHONY: cosign
cosign: $(COSIGN) ## Download cosign locally if necessary.
$(COSIGN): $(LOCALBIN)
test -s $(LOCALBIN)/cosign || GOBIN=$(LOCALBIN) go install github.com/sigstore/cosign/v3/cmd/cosign@v3.0.5
.PHONY: syft
syft: $(SYFT) ## Download syft locally if necessary.
$(SYFT): $(LOCALBIN)
test -s $(LOCALBIN)/syft || GOBIN=$(LOCALBIN) go install github.com/anchore/syft/cmd/syft@v1.42.1
.PHONY: goreleaser
goreleaser: $(GORELEASER) ## Download goreleaser locally if necessary.
$(GORELEASER): $(LOCALBIN)
test -s $(LOCALBIN)/goreleaser || GOBIN=$(LOCALBIN) go install github.com/goreleaser/goreleaser/v2@v2.14.1
.PHONY: ko
ko: $(KO) ## Download ko locally if necessary.
$(KO): $(LOCALBIN)


@@ -0,0 +1,260 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
var _ = Describe("NetworkProfile validation", func() {
var (
ctx context.Context
tcp *TenantControlPlane
)
const (
ipv6CIDRBlock = "fd00::/108"
)
BeforeEach(func() {
ctx = context.Background()
tcp = &TenantControlPlane{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "tcp-network-",
Namespace: "default",
},
Spec: TenantControlPlaneSpec{
ControlPlane: ControlPlane{
Service: ServiceSpec{
ServiceType: ServiceTypeClusterIP,
},
},
},
}
})
AfterEach(func() {
// When creation is denied by validation, GenerateName is never resolved
// and tcp.Name remains empty, so there is nothing to delete.
if tcp.Name == "" {
return
}
if err := k8sClient.Delete(ctx, tcp); err != nil && !apierrors.IsNotFound(err) {
Expect(err).NotTo(HaveOccurred())
}
})
Context("serviceCidr", func() {
It("allows creation with the default IPv4 CIDR", func() {
tcp.Spec.NetworkProfile.ServiceCIDR = "10.96.0.0/16"
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("allows creation with a non-default valid IPv4 CIDR", func() {
tcp.Spec.NetworkProfile.ServiceCIDR = "172.16.0.0/12"
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("allows creation with a valid IPv6 CIDR", func() {
tcp.Spec.NetworkProfile.ServiceCIDR = ipv6CIDRBlock
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("allows creation when serviceCidr is empty", func() {
tcp.Spec.NetworkProfile.ServiceCIDR = ""
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("denies creation with a plain IP address instead of a CIDR", func() {
tcp.Spec.NetworkProfile.ServiceCIDR = "10.96.0.1"
err := k8sClient.Create(ctx, tcp)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("serviceCidr must be empty or a valid CIDR"))
})
It("denies creation with an arbitrary non-CIDR string", func() {
tcp.Spec.NetworkProfile.ServiceCIDR = "not-a-cidr"
err := k8sClient.Create(ctx, tcp)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("serviceCidr must be empty or a valid CIDR"))
})
})
Context("podCidr", func() {
It("allows creation with the default IPv4 CIDR", func() {
tcp.Spec.NetworkProfile.PodCIDR = "10.244.0.0/16"
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("allows creation with a non-default valid IPv4 CIDR", func() {
tcp.Spec.NetworkProfile.PodCIDR = "192.168.128.0/17"
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("allows creation with a valid IPv6 CIDR", func() {
tcp.Spec.NetworkProfile.PodCIDR = "2001:db8::/48"
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("allows creation when podCidr is empty", func() {
tcp.Spec.NetworkProfile.PodCIDR = ""
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("denies creation with a plain IP address instead of a CIDR", func() {
tcp.Spec.NetworkProfile.PodCIDR = "10.244.0.1"
err := k8sClient.Create(ctx, tcp)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("podCidr must be empty or a valid CIDR"))
})
It("denies creation with an arbitrary non-CIDR string", func() {
tcp.Spec.NetworkProfile.PodCIDR = "not-a-cidr"
err := k8sClient.Create(ctx, tcp)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("podCidr must be empty or a valid CIDR"))
})
})
Context("loadBalancerSourceRanges CIDR format", func() {
BeforeEach(func() {
tcp.Spec.ControlPlane.Service.ServiceType = ServiceTypeLoadBalancer
})
It("allows creation with a single valid CIDR", func() {
tcp.Spec.NetworkProfile.LoadBalancerSourceRanges = []string{"10.0.0.0/8"}
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("allows creation with multiple valid CIDRs", func() {
tcp.Spec.NetworkProfile.LoadBalancerSourceRanges = []string{
"10.0.0.0/8",
"192.168.0.0/24",
"172.16.0.0/12",
}
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("allows creation with valid IPv6 CIDRs", func() {
tcp.Spec.NetworkProfile.LoadBalancerSourceRanges = []string{
"2001:db8::/32",
"fd00::/8",
}
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("denies creation when an entry is a plain IP address", func() {
tcp.Spec.NetworkProfile.LoadBalancerSourceRanges = []string{"192.168.1.1"}
err := k8sClient.Create(ctx, tcp)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("all LoadBalancer source range entries must be valid CIDR"))
})
It("denies creation when an entry is an arbitrary string", func() {
tcp.Spec.NetworkProfile.LoadBalancerSourceRanges = []string{"not-a-cidr"}
err := k8sClient.Create(ctx, tcp)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("all LoadBalancer source range entries must be valid CIDR"))
})
It("denies creation when at least one entry in a mixed list is invalid", func() {
tcp.Spec.NetworkProfile.LoadBalancerSourceRanges = []string{
"10.0.0.0/8",
"not-a-cidr",
}
err := k8sClient.Create(ctx, tcp)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("all LoadBalancer source range entries must be valid CIDR"))
})
})
Context("dnsServiceIPs", func() {
BeforeEach(func() {
tcp.Spec.NetworkProfile.ServiceCIDR = "10.96.0.0/16"
})
It("allows creation when dnsServiceIPs is not set", func() {
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("allows creation with an explicitly empty dnsServiceIPs list", func() {
tcp.Spec.NetworkProfile.DNSServiceIPs = []string{}
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("allows creation when all IPs are within the service CIDR", func() {
tcp.Spec.NetworkProfile.DNSServiceIPs = []string{"10.96.0.10"}
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("allows creation with multiple IPs all within the service CIDR", func() {
tcp.Spec.NetworkProfile.DNSServiceIPs = []string{
"10.96.0.10",
"10.96.0.11",
}
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("denies creation when a DNS service IP is outside the service CIDR", func() {
tcp.Spec.NetworkProfile.DNSServiceIPs = []string{"192.168.1.10"}
err := k8sClient.Create(ctx, tcp)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("all DNS service IPs must be part of the Service CIDR"))
})
It("denies creation when at least one IP in a mixed list is outside the service CIDR", func() {
tcp.Spec.NetworkProfile.DNSServiceIPs = []string{
"10.96.0.10",
"192.168.1.10",
}
err := k8sClient.Create(ctx, tcp)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("all DNS service IPs must be part of the Service CIDR"))
})
It("allows creation with an IPv6 DNS service IP within an IPv6 service CIDR", func() {
tcp.Spec.NetworkProfile.ServiceCIDR = ipv6CIDRBlock
tcp.Spec.NetworkProfile.DNSServiceIPs = []string{"fd00::10"}
Expect(k8sClient.Create(ctx, tcp)).To(Succeed())
})
It("denies creation when an IPv6 DNS service IP is outside the IPv6 service CIDR", func() {
tcp.Spec.NetworkProfile.ServiceCIDR = ipv6CIDRBlock
tcp.Spec.NetworkProfile.DNSServiceIPs = []string{"2001:db8::10"}
err := k8sClient.Create(ctx, tcp)
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(ContainSubstring("all DNS service IPs must be part of the Service CIDR"))
})
})
})
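Outside the API server, the CEL rules these tests exercise can be approximated with Go's `net/netip`. The function names below are illustrative, and CEL's `isCIDR()`/`cidr().containsIP()` semantics may differ from `netip` in edge cases.

```go
package main

import (
	"fmt"
	"net/netip"
)

// validCIDROrEmpty approximates the CEL rule
//   self == '' || isCIDR(self)
// applied to serviceCidr and podCidr.
func validCIDROrEmpty(s string) bool {
	if s == "" {
		return true
	}
	_, err := netip.ParsePrefix(s)
	return err == nil
}

// dnsIPsWithinServiceCIDR approximates the CEL rule
//   !has(self.dnsServiceIPs) || self.dnsServiceIPs.all(r, cidr(self.serviceCidr).containsIP(r))
func dnsIPsWithinServiceCIDR(serviceCIDR string, ips []string) bool {
	prefix, err := netip.ParsePrefix(serviceCIDR)
	if err != nil {
		return false
	}
	for _, s := range ips {
		ip, err := netip.ParseAddr(s)
		if err != nil || !prefix.Contains(ip) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validCIDROrEmpty("10.96.0.0/16")) // true
	fmt.Println(validCIDROrEmpty("10.96.0.1"))    // false: plain IP, not a CIDR
	fmt.Println(dnsIPsWithinServiceCIDR("10.96.0.0/16", []string{"10.96.0.10"}))   // true
	fmt.Println(dnsIPsWithinServiceCIDR("10.96.0.0/16", []string{"192.168.1.10"})) // false
	fmt.Println(dnsIPsWithinServiceCIDR("fd00::/108", []string{"fd00::10"}))       // true
}
```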


@@ -12,6 +12,7 @@ import (
)
// NetworkProfileSpec defines the desired state of NetworkProfile.
// +kubebuilder:validation:XValidation:rule="!has(self.dnsServiceIPs) || self.dnsServiceIPs.all(r, cidr(self.serviceCidr).containsIP(r))",message="all DNS service IPs must be part of the Service CIDR"
type NetworkProfileSpec struct {
// LoadBalancerSourceRanges restricts the IP ranges that can access
// the LoadBalancer type Service. This field defines a list of IP
@@ -20,14 +21,16 @@ type NetworkProfileSpec struct {
// This feature is useful for restricting access to API servers or services
// to specific networks for security purposes.
// Example: {"192.168.1.0/24", "10.0.0.0/8"}
//+kubebuilder:validation:MaxItems=16
//+kubebuilder:validation:XValidation:rule="self.all(r, isCIDR(r))",message="all LoadBalancer source range entries must be valid CIDR"
LoadBalancerSourceRanges []string `json:"loadBalancerSourceRanges,omitempty"`
// Specify the LoadBalancer class in case of multiple load balancer implementations.
// Field supported only for Tenant Control Plane instances exposed using a LoadBalancer Service.
// +kubebuilder:validation:MinLength=1
// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="LoadBalancerClass is immutable"
LoadBalancerClass *string `json:"loadBalancerClass,omitempty"`
// Address where API server of will be exposed.
// In case of LoadBalancer Service, this can be empty in order to use the exposed IP provided by the cloud controller manager.
// Address where API server will be exposed.
// In the case of LoadBalancer Service, this can be empty in order to use the exposed IP provided by the cloud controller manager.
Address string `json:"address,omitempty"`
// The default domain name used for DNS resolution within the cluster.
//+kubebuilder:default="cluster.local"
@@ -37,7 +40,7 @@ type NetworkProfileSpec struct {
// AllowAddressAsExternalIP will include tenantControlPlane.Spec.NetworkProfile.Address in the section of
// ExternalIPs of the Kubernetes Service (only ClusterIP or NodePort)
AllowAddressAsExternalIP bool `json:"allowAddressAsExternalIP,omitempty"`
// Port where API server of will be exposed
// Port where API server will be exposed
//+kubebuilder:default=6443
Port int32 `json:"port,omitempty"`
// CertSANs sets extra Subject Alternative Names (SANs) for the API Server signing certificate.
@@ -45,14 +48,20 @@ type NetworkProfileSpec struct {
CertSANs []string `json:"certSANs,omitempty"`
// CIDR for Kubernetes Services: if empty, defaulted to 10.96.0.0/16.
//+kubebuilder:default="10.96.0.0/16"
//+kubebuilder:validation:Optional
//+kubebuilder:validation:XValidation:rule="self == '' || isCIDR(self)",message="serviceCidr must be empty or a valid CIDR"
ServiceCIDR string `json:"serviceCidr,omitempty"`
// CIDR for Kubernetes Pods: if empty, defaulted to 10.244.0.0/16.
//+kubebuilder:default="10.244.0.0/16"
//+kubebuilder:validation:Optional
//+kubebuilder:validation:XValidation:rule="self == '' || isCIDR(self)",message="podCidr must be empty or a valid CIDR"
PodCIDR string `json:"podCidr,omitempty"`
// The DNS Service for internal resolution, it must match the Service CIDR.
// In case of an empty value, it is automatically computed according to the Service CIDR, e.g.:
// Service CIDR 10.96.0.0/16, the resulting DNS Service IP will be 10.96.0.10 for IPv4,
// for IPv6 from the CIDR 2001:db8:abcd::/64 the resulting DNS Service IP will be 2001:db8:abcd::10.
//+kubebuilder:validation:MaxItems=8
//+kubebuilder:validation:Optional
DNSServiceIPs []string `json:"dnsServiceIPs,omitempty"`
}
@@ -173,6 +182,54 @@ type ControlPlaneComponentsResources struct {
Kine *corev1.ResourceRequirements `json:"kine,omitempty"`
}
// ProbeSpec defines configurable parameters for a Kubernetes probe.
type ProbeSpec struct {
// InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
//+kubebuilder:validation:Minimum=0
InitialDelaySeconds *int32 `json:"initialDelaySeconds,omitempty"`
// TimeoutSeconds is the number of seconds after which the probe times out.
//+kubebuilder:validation:Minimum=1
TimeoutSeconds *int32 `json:"timeoutSeconds,omitempty"`
// PeriodSeconds is how often (in seconds) to perform the probe.
//+kubebuilder:validation:Minimum=1
PeriodSeconds *int32 `json:"periodSeconds,omitempty"`
// SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
// Must be 1 for liveness and startup probes.
//+kubebuilder:validation:Minimum=1
SuccessThreshold *int32 `json:"successThreshold,omitempty"`
// FailureThreshold is the consecutive failure count required to consider the probe failed.
//+kubebuilder:validation:Minimum=1
FailureThreshold *int32 `json:"failureThreshold,omitempty"`
}
// ProbeSet defines per-probe-type configuration.
type ProbeSet struct {
// Liveness defines parameters for the liveness probe.
Liveness *ProbeSpec `json:"liveness,omitempty"`
// Readiness defines parameters for the readiness probe.
Readiness *ProbeSpec `json:"readiness,omitempty"`
// Startup defines parameters for the startup probe.
Startup *ProbeSpec `json:"startup,omitempty"`
}
// ControlPlaneProbes defines probe configuration for Control Plane components.
// Global probe settings (Liveness, Readiness, Startup) apply to all components.
// Per-component settings (APIServer, ControllerManager, Scheduler) override global settings.
type ControlPlaneProbes struct {
// Liveness defines default parameters for liveness probes of all Control Plane components.
Liveness *ProbeSpec `json:"liveness,omitempty"`
// Readiness defines default parameters for the readiness probe of kube-apiserver.
Readiness *ProbeSpec `json:"readiness,omitempty"`
// Startup defines default parameters for startup probes of all Control Plane components.
Startup *ProbeSpec `json:"startup,omitempty"`
// APIServer defines probe overrides for kube-apiserver, taking precedence over global probe settings.
APIServer *ProbeSet `json:"apiServer,omitempty"`
// ControllerManager defines probe overrides for kube-controller-manager, taking precedence over global probe settings.
ControllerManager *ProbeSet `json:"controllerManager,omitempty"`
// Scheduler defines probe overrides for kube-scheduler, taking precedence over global probe settings.
Scheduler *ProbeSet `json:"scheduler,omitempty"`
}
type DeploymentSpec struct {
// RegistrySettings allows to override the default images for the given Tenant Control Plane instance.
// It could be used to point to a different container registry rather than the public one.
@@ -224,6 +281,10 @@ type DeploymentSpec struct {
// AdditionalVolumeMounts allows to mount an additional volume into each component of the Control Plane
// (kube-apiserver, controller-manager, and scheduler).
AdditionalVolumeMounts *AdditionalVolumeMounts `json:"additionalVolumeMounts,omitempty"`
// Probes defines the probe configuration for the Control Plane components
// (kube-apiserver, controller-manager, and scheduler).
// Override TimeoutSeconds, PeriodSeconds, and FailureThreshold for resource-constrained environments.
Probes *ControlPlaneProbes `json:"probes,omitempty"`
//+kubebuilder:default="default"
// ServiceAccountName allows to specify the service account to be mounted to the pods of the Control plane deployment
ServiceAccountName string `json:"serviceAccountName,omitempty"`


@@ -9,7 +9,8 @@ package v1alpha1
import (
corev1 "k8s.io/api/core/v1"
"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
apisv1 "sigs.k8s.io/gateway-api/apis/v1"
)
@@ -449,6 +450,51 @@ func (in *ControlPlaneExtraArgs) DeepCopy() *ControlPlaneExtraArgs {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ControlPlaneProbes) DeepCopyInto(out *ControlPlaneProbes) {
*out = *in
if in.Liveness != nil {
in, out := &in.Liveness, &out.Liveness
*out = new(ProbeSpec)
(*in).DeepCopyInto(*out)
}
if in.Readiness != nil {
in, out := &in.Readiness, &out.Readiness
*out = new(ProbeSpec)
(*in).DeepCopyInto(*out)
}
if in.Startup != nil {
in, out := &in.Startup, &out.Startup
*out = new(ProbeSpec)
(*in).DeepCopyInto(*out)
}
if in.APIServer != nil {
in, out := &in.APIServer, &out.APIServer
*out = new(ProbeSet)
(*in).DeepCopyInto(*out)
}
if in.ControllerManager != nil {
in, out := &in.ControllerManager, &out.ControllerManager
*out = new(ProbeSet)
(*in).DeepCopyInto(*out)
}
if in.Scheduler != nil {
in, out := &in.Scheduler, &out.Scheduler
*out = new(ProbeSet)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ControlPlaneProbes.
func (in *ControlPlaneProbes) DeepCopy() *ControlPlaneProbes {
if in == nil {
return nil
}
out := new(ControlPlaneProbes)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DataStore) DeepCopyInto(out *DataStore) {
*out = *in
@@ -608,6 +654,13 @@ func (in *DataStoreStatus) DeepCopyInto(out *DataStoreStatus) {
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
*out = make([]v1.Condition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataStoreStatus.
@@ -709,6 +762,11 @@ func (in *DeploymentSpec) DeepCopyInto(out *DeploymentSpec) {
*out = new(AdditionalVolumeMounts)
(*in).DeepCopyInto(*out)
}
if in.Probes != nil {
in, out := &in.Probes, &out.Probes
*out = new(ControlPlaneProbes)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DeploymentSpec.
@@ -907,7 +965,7 @@ func (in *JSONPatch) DeepCopyInto(out *JSONPatch) {
*out = *in
if in.Value != nil {
in, out := &in.Value, &out.Value
*out = new(v1.JSON)
*out = new(apiextensionsv1.JSON)
(*in).DeepCopyInto(*out)
}
}
@@ -1493,6 +1551,76 @@ func (in *Permissions) DeepCopy() *Permissions {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProbeSet) DeepCopyInto(out *ProbeSet) {
*out = *in
if in.Liveness != nil {
in, out := &in.Liveness, &out.Liveness
*out = new(ProbeSpec)
(*in).DeepCopyInto(*out)
}
if in.Readiness != nil {
in, out := &in.Readiness, &out.Readiness
*out = new(ProbeSpec)
(*in).DeepCopyInto(*out)
}
if in.Startup != nil {
in, out := &in.Startup, &out.Startup
*out = new(ProbeSpec)
(*in).DeepCopyInto(*out)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProbeSet.
func (in *ProbeSet) DeepCopy() *ProbeSet {
if in == nil {
return nil
}
out := new(ProbeSet)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ProbeSpec) DeepCopyInto(out *ProbeSpec) {
*out = *in
if in.InitialDelaySeconds != nil {
in, out := &in.InitialDelaySeconds, &out.InitialDelaySeconds
*out = new(int32)
**out = **in
}
if in.TimeoutSeconds != nil {
in, out := &in.TimeoutSeconds, &out.TimeoutSeconds
*out = new(int32)
**out = **in
}
if in.PeriodSeconds != nil {
in, out := &in.PeriodSeconds, &out.PeriodSeconds
*out = new(int32)
**out = **in
}
if in.SuccessThreshold != nil {
in, out := &in.SuccessThreshold, &out.SuccessThreshold
*out = new(int32)
**out = **in
}
if in.FailureThreshold != nil {
in, out := &in.FailureThreshold, &out.FailureThreshold
*out = new(int32)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProbeSpec.
func (in *ProbeSpec) DeepCopy() *ProbeSpec {
if in == nil {
return nil
}
out := new(ProbeSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PublicKeyPrivateKeyPairStatus) DeepCopyInto(out *PublicKeyPrivateKeyPairStatus) {
*out = *in


@@ -6165,6 +6165,397 @@ versions:
type: string
type: object
type: object
probes:
description: |-
Probes defines the probe configuration for the Control Plane components
(kube-apiserver, controller-manager, and scheduler).
Override TimeoutSeconds, PeriodSeconds, and FailureThreshold for resource-constrained environments.
properties:
apiServer:
description: APIServer defines probe overrides for kube-apiserver, taking precedence over global probe settings.
properties:
liveness:
description: Liveness defines parameters for the liveness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
readiness:
description: Readiness defines parameters for the readiness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
startup:
description: Startup defines parameters for the startup probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
type: object
controllerManager:
description: ControllerManager defines probe overrides for kube-controller-manager, taking precedence over global probe settings.
properties:
liveness:
description: Liveness defines parameters for the liveness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
readiness:
description: Readiness defines parameters for the readiness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
startup:
description: Startup defines parameters for the startup probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
type: object
liveness:
description: Liveness defines default parameters for liveness probes of all Control Plane components.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
readiness:
description: Readiness defines default parameters for the readiness probe of kube-apiserver.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
scheduler:
description: Scheduler defines probe overrides for kube-scheduler, taking precedence over global probe settings.
properties:
liveness:
description: Liveness defines parameters for the liveness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
readiness:
description: Readiness defines parameters for the readiness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
startup:
description: Startup defines parameters for the startup probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
type: object
startup:
description: Startup defines default parameters for startup probes of all Control Plane components.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
type: object
registrySettings:
default:
apiServerImage: kube-apiserver
@@ -7185,8 +7576,8 @@ versions:
properties:
address:
description: |-
-Address where API server of will be exposed.
-In case of LoadBalancer Service, this can be empty in order to use the exposed IP provided by the cloud controller manager.
+Address where API server will be exposed.
+In the case of LoadBalancer Service, this can be empty in order to use the exposed IP provided by the cloud controller manager.
type: string
allowAddressAsExternalIP:
description: |-
@@ -7216,6 +7607,7 @@ versions:
for IPv6 from the CIDR 2001:db8:abcd::/64 the resulting DNS Service IP will be 2001:db8:abcd::10.
items:
type: string
maxItems: 8
type: array
loadBalancerClass:
description: |-
@@ -7237,21 +7629,34 @@ versions:
Example: {"192.168.1.0/24", "10.0.0.0/8"}
items:
type: string
maxItems: 16
type: array
x-kubernetes-validations:
- message: all LoadBalancer source range entries must be valid CIDR
rule: self.all(r, isCIDR(r))
podCidr:
default: 10.244.0.0/16
description: 'CIDR for Kubernetes Pods: if empty, defaulted to 10.244.0.0/16.'
type: string
x-kubernetes-validations:
- message: podCidr must be empty or a valid CIDR
rule: self == '' || isCIDR(self)
port:
default: 6443
-description: Port where API server of will be exposed
+description: Port where API server will be exposed
format: int32
type: integer
serviceCidr:
default: 10.96.0.0/16
description: 'CIDR for Kubernetes Services: if empty, defaulted to 10.96.0.0/16.'
type: string
x-kubernetes-validations:
- message: serviceCidr must be empty or a valid CIDR
rule: self == '' || isCIDR(self)
type: object
x-kubernetes-validations:
- message: all DNS service IPs must be part of the Service CIDR
rule: '!has(self.dnsServiceIPs) || self.dnsServiceIPs.all(r, cidr(self.serviceCidr).containsIP(r))'
writePermissions:
description: |-
WritePermissions allows to select which operations (create, delete, update) must be blocked:


@@ -6173,6 +6173,397 @@ spec:
type: string
type: object
type: object
probes:
description: |-
Probes defines the probe configuration for the Control Plane components
(kube-apiserver, controller-manager, and scheduler).
Override TimeoutSeconds, PeriodSeconds, and FailureThreshold for resource-constrained environments.
properties:
apiServer:
description: APIServer defines probe overrides for kube-apiserver, taking precedence over global probe settings.
properties:
liveness:
description: Liveness defines parameters for the liveness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
readiness:
description: Readiness defines parameters for the readiness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
startup:
description: Startup defines parameters for the startup probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
type: object
controllerManager:
description: ControllerManager defines probe overrides for kube-controller-manager, taking precedence over global probe settings.
properties:
liveness:
description: Liveness defines parameters for the liveness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
readiness:
description: Readiness defines parameters for the readiness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
startup:
description: Startup defines parameters for the startup probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
type: object
liveness:
description: Liveness defines default parameters for liveness probes of all Control Plane components.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
readiness:
description: Readiness defines default parameters for the readiness probe of kube-apiserver.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
scheduler:
description: Scheduler defines probe overrides for kube-scheduler, taking precedence over global probe settings.
properties:
liveness:
description: Liveness defines parameters for the liveness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
readiness:
description: Readiness defines parameters for the readiness probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
startup:
description: Startup defines parameters for the startup probe.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
type: object
startup:
description: Startup defines default parameters for startup probes of all Control Plane components.
properties:
failureThreshold:
description: FailureThreshold is the consecutive failure count required to consider the probe failed.
format: int32
minimum: 1
type: integer
initialDelaySeconds:
description: InitialDelaySeconds is the number of seconds after the container has started before the probe is initiated.
format: int32
minimum: 0
type: integer
periodSeconds:
description: PeriodSeconds is how often (in seconds) to perform the probe.
format: int32
minimum: 1
type: integer
successThreshold:
description: |-
SuccessThreshold is the minimum consecutive successes for the probe to be considered successful.
Must be 1 for liveness and startup probes.
format: int32
minimum: 1
type: integer
timeoutSeconds:
description: TimeoutSeconds is the number of seconds after which the probe times out.
format: int32
minimum: 1
type: integer
type: object
type: object
registrySettings:
default:
apiServerImage: kube-apiserver
@@ -7193,8 +7584,8 @@ spec:
properties:
address:
description: |-
-Address where API server of will be exposed.
-In case of LoadBalancer Service, this can be empty in order to use the exposed IP provided by the cloud controller manager.
+Address where API server will be exposed.
+In the case of LoadBalancer Service, this can be empty in order to use the exposed IP provided by the cloud controller manager.
type: string
allowAddressAsExternalIP:
description: |-
@@ -7224,6 +7615,7 @@ spec:
for IPv6 from the CIDR 2001:db8:abcd::/64 the resulting DNS Service IP will be 2001:db8:abcd::10.
items:
type: string
maxItems: 8
type: array
loadBalancerClass:
description: |-
@@ -7245,21 +7637,34 @@ spec:
Example: {"192.168.1.0/24", "10.0.0.0/8"}
items:
type: string
maxItems: 16
type: array
x-kubernetes-validations:
- message: all LoadBalancer source range entries must be valid CIDR
rule: self.all(r, isCIDR(r))
podCidr:
default: 10.244.0.0/16
description: 'CIDR for Kubernetes Pods: if empty, defaulted to 10.244.0.0/16.'
type: string
x-kubernetes-validations:
- message: podCidr must be empty or a valid CIDR
rule: self == '' || isCIDR(self)
port:
default: 6443
-description: Port where API server of will be exposed
+description: Port where API server will be exposed
format: int32
type: integer
serviceCidr:
default: 10.96.0.0/16
description: 'CIDR for Kubernetes Services: if empty, defaulted to 10.96.0.0/16.'
type: string
x-kubernetes-validations:
- message: serviceCidr must be empty or a valid CIDR
rule: self == '' || isCIDR(self)
type: object
x-kubernetes-validations:
- message: all DNS service IPs must be part of the Service CIDR
rule: '!has(self.dnsServiceIPs) || self.dnsServiceIPs.all(r, cidr(self.serviceCidr).containsIP(r))'
writePermissions:
description: |-
WritePermissions allows to select which operations (create, delete, update) must be blocked:


@@ -255,8 +255,6 @@ func NewCmd(scheme *runtime.Scheme) *cobra.Command {
Scheme: *mgr.GetScheme(),
},
},
-handlers.TenantControlPlaneServiceCIDR{},
-handlers.TenantControlPlaneLoadBalancerSourceRanges{},
handlers.TenantControlPlaneGatewayValidation{
Client: mgr.GetClient(),
DiscoveryClient: discoveryClient,

View File

@@ -0,0 +1,154 @@
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: KubeadmConfigTemplate
metadata:
  name: worker-external
  namespace: default
spec:
  template:
    spec:
      users:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
          - name: feature-gates
            value: "KubeletCrashLoopBackOffMax=true,KubeletEnsureSecretPulledImages=true"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtMachineTemplate
metadata:
  name: worker-external
  namespace: default
spec:
  template:
    spec:
      virtualMachineBootstrapCheck:
        checkStrategy: ssh
      virtualMachineTemplate:
        metadata:
          namespace: default
        spec:
          runStrategy: Always
          template:
            spec:
              dnsPolicy: None
              dnsConfig:
                nameservers:
                - 1.1.1.1
                - 8.8.8.8
                searches: []
                options:
                - name: ndots
                  value: "1"
              domain:
                cpu:
                  cores: 2
                devices:
                  interfaces:
                  - name: default
                    masquerade: {}
                  disks:
                  - disk:
                      bus: virtio
                    name: containervolume
                  networkInterfaceMultiqueue: true
                memory:
                  guest: 4Gi
              evictionStrategy: External
              networks:
              - name: default
                pod: {}
              volumes:
              - containerDisk:
                  image: quay.io/capk/ubuntu-2404-container-disk:v1.34.1
                name: containervolume
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: KubevirtClusterTemplate
metadata:
  name: kubevirt-external
  namespace: default
spec:
  template:
    metadata:
      annotations:
        cluster.x-k8s.io/managed-by: kamaji
    spec:
      controlPlaneServiceTemplate:
        spec:
          type: LoadBalancer
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlaneTemplate
metadata:
  name: kamaji-controlplane-external
  namespace: default
spec:
  template:
    spec:
      addons:
        coreDNS: {}
        konnectivity: {}
        kubeProxy: {}
      dataStoreName: "default" # reference to DataStore present on external cluster
      deployment:
        externalClusterReference:
          deploymentNamespace: kamaji-tenants
          kubeconfigSecretName: kind-external-kubeconfig
          kubeconfigSecretKey: kubeconfig
      network:
        serviceType: LoadBalancer
      kubelet:
        cgroupfs: systemd
        preferredAddressTypes:
        - InternalIP
      registry: "registry.k8s.io"
---
apiVersion: cluster.x-k8s.io/v1beta2
kind: ClusterClass
metadata:
  name: kubevirt-kamaji-kubeadm-external
  namespace: default
spec:
  controlPlane:
    templateRef:
      apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
      kind: KamajiControlPlaneTemplate
      name: kamaji-controlplane-external
  infrastructure:
    templateRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
      kind: KubevirtClusterTemplate
      name: kubevirt-external
  workers:
    machineDeployments:
    - class: small
      bootstrap:
        templateRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
          kind: KubeadmConfigTemplate
          name: worker-external
      infrastructure:
        templateRef:
          apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
          kind: KubevirtMachineTemplate
          name: worker-external
---
apiVersion: cluster.x-k8s.io/v1beta2
kind: Cluster
metadata:
  name: demo-external
  namespace: default
spec:
  topology:
    classRef:
      name: kubevirt-kamaji-kubeadm-external
      namespace: default
    version: v1.34.0
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: small
        name: md-small
        replicas: 1

View File

@@ -267,25 +267,24 @@ func getKubernetesStorageResources(c client.Client, dbConnection datastore.Conne
func getKubernetesAdditionalStorageResources(c client.Client, dbConnections map[string]datastore.Connection, dataStoreOverrides []builder.DataStoreOverrides, threshold time.Duration) []resources.Resource {
res := make([]resources.Resource, 0, len(dataStoreOverrides))
for _, dso := range dataStoreOverrides {
datastore := dso.DataStore
res = append(res,
&ds.MultiTenancy{
DataStore: datastore,
DataStore: dso.DataStore,
},
&ds.Config{
Client: c,
ConnString: dbConnections[dso.Resource].GetConnectionString(),
DataStore: datastore,
DataStore: dso.DataStore,
IsOverride: true,
},
&ds.Setup{
Client: c,
Connection: dbConnections[dso.Resource],
DataStore: datastore,
DataStore: dso.DataStore,
},
&ds.Certificate{
Client: c,
DataStore: datastore,
DataStore: dso.DataStore,
CertExpirationThreshold: threshold,
})
}

View File

@@ -0,0 +1,305 @@
# Kamaji and `externalClusterReference` usage
This document explains how to use **Kamaji's `externalClusterReference`** together with **Cluster API (CAPI)** to run Kubernetes control planes on an **external cluster**, while managing worker nodes from a management cluster.
It assumes the use of the KubeVirt infrastructure provider for ease of deployment and local testing.
---
## High-level Architecture
The following setup operates on **two Kubernetes clusters**:
- **Management cluster** - runs the Cluster API controllers and the Kamaji control-plane provider, and manages cluster lifecycle and topology.
- **External cluster** - runs Kamaji and hosts the tenant Kubernetes control plane components it deploys.
---
## Prerequisites
- `docker`
- `kind`
- `kubectl`
- `clusterctl`
- `helm`
---
## Step 1: Create the KIND clusters
Create the **management** cluster:
```bash
kind create cluster --name management
```
Create the **external** cluster that will host control planes:
```bash
kind create cluster --name external
```
Verify contexts:
```bash
kubectl config get-contexts
```
---
## Step 2: Initialize Cluster API controllers
Switch to the management cluster:
```bash
kubectl config use-context kind-management
```
Enable ClusterClass support and initialize Cluster API with Kamaji and KubeVirt:
```bash
export CLUSTER_TOPOLOGY=true
clusterctl init \
--core cluster-api \
--bootstrap kubeadm \
--infrastructure kubevirt \
--control-plane kamaji
```
---
## Step 3: Enable Kamaji external cluster feature gates
Patch the Kamaji controller to enable `externalClusterReference`:
```bash
kubectl -n kamaji-system patch deployment capi-kamaji-controller-manager \
--type='json' \
-p='[
{
"op": "replace",
"path": "/spec/template/spec/containers/0/args/1",
"value": "--feature-gates=ExternalClusterReference=true,ExternalClusterReferenceCrossNamespace=true"
}
]'
```
---
## Step 4: Install KubeVirt
Fetch the latest stable KubeVirt version and install:
```bash
export VERSION=$(curl -s "https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt")
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml"
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml"
```
Optionally, enable software emulation if hardware virtualization is not available:
```bash
kubectl -n kubevirt patch kubevirt kubevirt \
--type=merge \
--patch '{"spec":{"configuration":{"developerConfiguration":{"useEmulation":true}}}}'
```
---
## Step 5: Prepare kubeconfig for the external cluster
Retrieve the external cluster control-plane address:
```bash
EXT_CP_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "external-control-plane")
```
Export and rewrite the kubeconfig:
```bash
kubectl --context kind-external config view --raw --minify --flatten > kind-external.kubeconfig
```
Replace the API server endpoint with the container IP so the external cluster is reachable from the management cluster:
```bash
sed -i -E "s#https://[^:]+:[0-9]+#https://$EXT_CP_IP:6443#g" kind-external.kubeconfig
```
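To illustrate the rewrite, here is the same substitution applied to a sample kubeconfig line (both the local endpoint and the container IP are hypothetical values):

```shell
# hypothetical values: a kind-generated server line and a container IP
EXT_CP_IP=172.18.0.3
echo 'server: https://127.0.0.1:53511' \
  | sed -E "s#https://[^:]+:[0-9]+#https://${EXT_CP_IP}:6443#g"
```

This prints `server: https://172.18.0.3:6443`, which is the form the management cluster can dial directly.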
Create the kubeconfig secret in the management cluster:
```bash
kubectl -n default create secret generic kind-external-kubeconfig \
--from-file=kubeconfig=kind-external.kubeconfig
```
---
## Step 6: Install Kamaji and dependencies on the external cluster
Switch context:
```bash
kubectl config use-context kind-external
```
Install cert-manager:
```bash
helm upgrade --install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--set installCRDs=true
```
Install Kamaji:
```bash
helm upgrade --install kamaji clastix/kamaji \
--namespace kamaji-system \
--create-namespace \
--set 'resources=null' \
--version 0.0.0+latest
```
Install MetalLB:
```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.3/config/manifests/metallb-native.yaml
```
Configure MetalLB IP address pool:
```bash
SUBNET=$(docker network inspect kind | jq -r '.[0].IPAM.Config[] | select(.Subnet | test(":") | not) | .Subnet' | head -n1)
NET_PREFIX=$(echo "$SUBNET" | cut -d/ -f1 | awk -F. '{print $1"."$2}')
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: default
namespace: metallb-system
spec:
addresses:
- ${NET_PREFIX}.255.200-${NET_PREFIX}.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: default
namespace: metallb-system
EOF
```
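As a sanity check, assuming kind's common default IPv4 subnet (yours may differ; take the real value from `docker network inspect kind`), the prefix extraction above yields an address pool like this:

```shell
# assumed subnet for illustration; substitute the output of the jq command above
SUBNET="172.18.0.0/16"
NET_PREFIX=$(echo "$SUBNET" | cut -d/ -f1 | awk -F. '{print $1"."$2}')
echo "${NET_PREFIX}.255.200-${NET_PREFIX}.255.250"
```

which prints `172.18.255.200-172.18.255.250`, a range at the top of the /16 that is unlikely to collide with container IPs.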
Create tenant namespace:
```bash
kubectl create namespace kamaji-tenants
```
---
## Step 7: Define the KamajiControlPlaneTemplate
The `KamajiControlPlaneTemplate` is defined in [the following manifest](https://raw.githubusercontent.com/clastix/kamaji/master/config/capi/clusterclass-kubevirt-kamaji-external.yaml) and can be applied directly.
This template configures how Kamaji deploys and manages the tenant control plane on an external Kubernetes cluster using Cluster API.
```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlaneTemplate
metadata:
  name: kamaji-controlplane-external
  namespace: default
spec:
  template:
    spec:
      addons:
        coreDNS: {}
        konnectivity: {}
        kubeProxy: {}
      dataStoreName: "default" # reference to DataStore present on external cluster
      deployment:
        externalClusterReference:
          deploymentNamespace: kamaji-tenants
          kubeconfigSecretName: kind-external-kubeconfig
          kubeconfigSecretKey: kubeconfig
      network:
        serviceType: LoadBalancer
      kubelet:
        cgroupfs: systemd
        preferredAddressTypes:
        - InternalIP
      registry: "registry.k8s.io"
```
The `.spec.template.spec.deployment.externalClusterReference` section defines how Kamaji connects to and deploys control plane components into the external cluster:
- `deploymentNamespace` - The namespace on the external cluster where `TenantControlPlane` resources and control plane components are created.
- `kubeconfigSecretName` - The name of the Kubernetes Secret containing a kubeconfig that allows Kamaji to authenticate to the external cluster.
- `kubeconfigSecretKey` - The key inside the secret that holds the kubeconfig data.
The referenced secret must exist in the Kamaji management cluster and provide sufficient permissions to create and manage resources in the target external cluster.
---
## Step 8: Create the Cluster
Switch context back to the management cluster:
```bash
kubectl config use-context kind-management
```
Apply the Cluster manifest:
```bash
kubectl apply -f "https://raw.githubusercontent.com/clastix/kamaji/master/config/capi/clusterclass-kubevirt-kamaji-external.yaml"
```
---
## Validation
Check tenant control plane pods running in the external cluster:
```bash
kubectl --context kind-external -n kamaji-tenants get pods
```
Check cluster status in the management cluster:
```bash
kubectl --context kind-management get clusters
kubectl --context kind-management get kamajicontrolplanes
```
Get cluster kubeconfig and confirm it is working:
```bash
kubectl config use-context kind-management
clusterctl get kubeconfig demo-external > demo-external.kubeconfig
KUBECONFIG=./demo-external.kubeconfig kubectl get nodes
```
---
## Clean up
Delete Kind clusters:
```bash
kind delete cluster --name management
kind delete cluster --name external
```
---
## Summary
Using `externalClusterReference` with Kamaji and Cluster API enables:
- Hosted Kubernetes control planes on remote clusters
- Strong separation of concerns
- Multi-cluster management patterns
- Clean integration with ClusterClass

View File

@@ -40,7 +40,7 @@ Throughout the following instructions, shell variables are used to indicate valu
source kamaji.env
```
Any regular and conformant Kubernetes v1.22+ cluster can be turned into a Kamaji setup. To work properly, the Management Cluster should provide:
Any regular and conformant Kubernetes v1.33+ cluster can be turned into a Kamaji setup. To work properly, the Management Cluster should provide:
- CNI module installed, eg. [Calico](https://github.com/projectcalico/calico), [Cilium](https://github.com/cilium/cilium).
- CSI module installed with a Storage Class for the Tenant datastores. The [Local Path Provisioner](https://github.com/rancher/local-path-provisioner) is a suggested choice, even for production environments.

File diff suppressed because it is too large Load Diff

View File

@@ -18,10 +18,13 @@ Usage of the said artefacts is not suggested for production use-case due to miss
### Edge Releases
Edge Release artifacts are published on a monthly basis as part of the open source project.
Versioning follows the form `edge-{year}.{month}.{incremental}` where incremental refers to the monthly release.
For example, `edge-24.7.1` is the first edge release shipped in July 2024.
Versioning follows the form `{year}.{month}.{incremental}-edge` where incremental refers to the monthly release.
For example, `26.3.1-edge` is the first edge release shipped in March 2026.
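A tag can be checked against this scheme with a simple pattern match (a sketch; the regex is ours, not part of the release tooling):

```shell
# sketch: validate a tag against the {year}.{month}.{incremental}-edge scheme
tag="26.3.1-edge"
echo "$tag" | grep -Eq '^[0-9]{2}\.(1[0-2]|[1-9])\.[0-9]+-edge$' && echo "valid edge tag"
```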
The full list of edge release artifacts can be found on the Kamaji's GitHub [releases page](https://github.com/clastix/kamaji/releases).
> _Nota Bene_: all edge releases prior to March 2026 used a different pattern (`edge-{year}.{month}.{incremental}`);
> this change was required to adopt GoReleaser and to begin our support for CRA compliance.
Edge Release artifacts contain the code from the main branch at the point in time when they were cut.
This means they always have the latest features and fixes, and have undergone automated testing as well as maintainer code review.
Edge Releases may involve partial features that are later modified or backed out.
@@ -31,7 +34,7 @@ Edge Releases are generally considered production ready and the project will mar
| Kamaji | Management Cluster | Tenant Cluster |
|-------------|--------------------|----------------------|
| edge-25.4.1 | v1.22+ | [v1.30.0 .. v1.33.0] |
| 26.3.2-edge | v1.33+ | [v1.30.0 .. v1.35.0] |
Using Edge Release artifacts and reporting bugs helps us ensure a rapid pace of development and is a great way to help maintainers.

View File

@@ -68,6 +68,7 @@ nav:
- cluster-api/other-providers.md
- cluster-api/cluster-autoscaler.md
- cluster-api/cluster-class.md
- cluster-api/external-cluster.md
- 'Guides':
- guides/index.md
- guides/alternative-datastore.md

View File

@@ -0,0 +1,126 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package e2e
import (
"context"
"fmt"
"time"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/client-go/util/retry"
pointer "k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)
var _ = Describe("When the datastore-config Secret is corrupted for a PostgreSQL-backed TenantControlPlane", func() {
tcp := &kamajiv1alpha1.TenantControlPlane{
ObjectMeta: metav1.ObjectMeta{
Name: "postgresql-secret-regeneration",
Namespace: "default",
},
Spec: kamajiv1alpha1.TenantControlPlaneSpec{
DataStore: "postgresql-bronze",
ControlPlane: kamajiv1alpha1.ControlPlane{
Deployment: kamajiv1alpha1.DeploymentSpec{
Replicas: pointer.To(int32(1)),
},
Service: kamajiv1alpha1.ServiceSpec{
ServiceType: "ClusterIP",
},
},
Kubernetes: kamajiv1alpha1.KubernetesSpec{
Version: "v1.23.6",
},
},
}
JustBeforeEach(func() {
Expect(k8sClient.Create(context.Background(), tcp)).NotTo(HaveOccurred())
StatusMustEqualTo(tcp, kamajiv1alpha1.VersionReady)
})
JustAfterEach(func() {
Expect(k8sClient.Delete(context.Background(), tcp)).Should(Succeed())
})
It("Should regenerate the Secret and restart the TCP pods successfully", func() {
By("recording the UIDs of the currently running TenantControlPlane pods")
initialPodUIDs := sets.New[types.UID]()
Eventually(func() int {
podList := &corev1.PodList{}
if err := k8sClient.List(context.Background(), podList,
client.InNamespace(tcp.GetNamespace()),
client.MatchingLabels{"kamaji.clastix.io/name": tcp.GetName()},
); err != nil {
return 0
}
initialPodUIDs.Clear()
for _, pod := range podList.Items {
initialPodUIDs.Insert(pod.GetUID())
}
return initialPodUIDs.Len()
}, time.Minute, time.Second).Should(Not(BeZero()))
By("retrieving the current datastore-config Secret and its checksum")
secretName := fmt.Sprintf("%s-datastore-config", tcp.GetName())
var secret corev1.Secret
Expect(k8sClient.Get(context.Background(), types.NamespacedName{Name: secretName, Namespace: tcp.GetNamespace()}, &secret)).To(Succeed())
originalChecksum := secret.GetAnnotations()["kamaji.clastix.io/checksum"]
Expect(originalChecksum).NotTo(BeEmpty(), "expected datastore-config Secret to carry a checksum annotation")
By("corrupting the DB_PASSWORD in the datastore-config Secret")
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
if err := k8sClient.Get(context.Background(), client.ObjectKeyFromObject(&secret), &secret); err != nil {
return err
}
secret.Data["DB_PASSWORD"] = []byte("corrupted-password")
return k8sClient.Update(context.Background(), &secret)
})
Expect(err).ToNot(HaveOccurred())
By("waiting for the controller to detect the corruption and regenerate the Secret with a new checksum")
Eventually(func() string {
if err := k8sClient.Get(context.Background(), client.ObjectKeyFromObject(&secret), &secret); err != nil {
return ""
}
return secret.GetAnnotations()["kamaji.clastix.io/checksum"]
}, 5*time.Minute, time.Second).ShouldNot(Equal(originalChecksum))
By("waiting for at least one new TenantControlPlane pod to replace the pre-existing ones")
Eventually(func() bool {
var podList corev1.PodList
if err := k8sClient.List(context.Background(), &podList,
client.InNamespace(tcp.GetNamespace()),
client.MatchingLabels{"kamaji.clastix.io/name": tcp.GetName()},
); err != nil {
return false
}
for _, pod := range podList.Items {
if !initialPodUIDs.Has(pod.GetUID()) {
return true
}
}
return false
}, 5*time.Minute, time.Second).Should(BeTrue())
By("verifying the TenantControlPlane is Ready after the restart with the regenerated Secret")
StatusMustEqualTo(tcp, kamajiv1alpha1.VersionReady)
})
})

10
go.mod
View File

@@ -15,7 +15,7 @@ require (
github.com/google/uuid v1.6.0
github.com/json-iterator/go v1.1.12
github.com/juju/mutex/v2 v2.0.0
github.com/nats-io/nats.go v1.48.0
github.com/nats-io/nats.go v1.49.0
github.com/onsi/ginkgo/v2 v2.28.1
github.com/onsi/gomega v1.39.1
github.com/prometheus/client_golang v1.23.2
@@ -33,9 +33,9 @@ require (
k8s.io/apimachinery v0.35.0
k8s.io/client-go v0.35.0
k8s.io/cluster-bootstrap v0.0.0
k8s.io/klog/v2 v2.130.1
k8s.io/klog/v2 v2.140.0
k8s.io/kubelet v0.0.0
k8s.io/kubernetes v1.35.1
k8s.io/kubernetes v1.35.2
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4
sigs.k8s.io/controller-runtime v0.22.4
sigs.k8s.io/gateway-api v1.4.1
@@ -98,7 +98,7 @@ require (
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/juju/errors v0.0.0-20220203013757-bd733f3c86b9 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/compress v1.18.2 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/lithammer/dedent v1.1.0 // indirect
@@ -117,7 +117,7 @@ require (
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nats-io/nkeys v0.4.11 // indirect
github.com/nats-io/nkeys v0.4.12 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect

20
go.sum
View File

@@ -190,8 +190,8 @@ github.com/juju/version/v2 v2.0.0-20211007103408-2e8da085dc23/go.mod h1:Ljlbryh9
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kisielk/sqlstruct v0.0.0-20201105191214-5f3e10d3ab46/go.mod h1:yyMNCyc/Ib3bDTKd379tNMpB/7/H5TjM2Y9QJ5THLbE=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@@ -243,10 +243,10 @@ github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/nats-io/nats.go v1.48.0 h1:pSFyXApG+yWU/TgbKCjmm5K4wrHu86231/w84qRVR+U=
github.com/nats-io/nats.go v1.48.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
github.com/nats-io/nkeys v0.4.11 h1:q44qGV008kYd9W1b1nEBkNzvnWxtRSQ7A8BoqRrcfa0=
github.com/nats-io/nkeys v0.4.11/go.mod h1:szDimtgmfOi9n25JpfIdGw12tZFYXqhGxjhVxsatHVE=
github.com/nats-io/nats.go v1.49.0 h1:yh/WvY59gXqYpgl33ZI+XoVPKyut/IcEaqtsiuTJpoE=
github.com/nats-io/nats.go v1.49.0/go.mod h1:fDCn3mN5cY8HooHwE2ukiLb4p4G4ImmzvXyJt+tGwdw=
github.com/nats-io/nkeys v0.4.12 h1:nssm7JKOG9/x4J8II47VWCL1Ds29avyiQDRn0ckMvDc=
github.com/nats-io/nkeys v0.4.12/go.mod h1:MT59A1HYcjIcyQDJStTfaOY6vhy9XTUjOFo+SVsvpBg=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
@@ -539,8 +539,8 @@ k8s.io/cri-api v0.35.0 h1:fxLSKyJHqbyCSUsg1rW4DRpmjSEM/elZ1GXzYTSLoDQ=
k8s.io/cri-api v0.35.0/go.mod h1:Cnt29u/tYl1Se1cBRL30uSZ/oJ5TaIp4sZm1xDLvcMc=
k8s.io/cri-client v0.35.0 h1:U1K4bteO93yioUS38804ybN+kWaon9zrzVtB37I3fCs=
k8s.io/cri-client v0.35.0/go.mod h1:XG5GkuuSpxvungsJVzW58NyWBoGSQhMMJmE5c66m9N8=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/klog/v2 v2.140.0 h1:Tf+J3AH7xnUzZyVVXhTgGhEKnFqye14aadWv7bzXdzc=
k8s.io/klog/v2 v2.140.0/go.mod h1:o+/RWfJ6PwpnFn7OyAG3QnO47BFsymfEfrz6XyYSSp0=
k8s.io/kms v0.35.0 h1:/x87FED2kDSo66csKtcYCEHsxF/DBlNl7LfJ1fVQs1o=
k8s.io/kms v0.35.0/go.mod h1:VT+4ekZAdrZDMgShK37vvlyHUVhwI9t/9tvh0AyCWmQ=
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 h1:Y3gxNAuB0OBLImH611+UDZcmKS3g6CthxToOb37KgwE=
@@ -549,8 +549,8 @@ k8s.io/kube-proxy v0.35.0 h1:erv2wYmGZ6nyu/FtmaIb+ORD3q2rfZ4Fhn7VXs/8cPQ=
k8s.io/kube-proxy v0.35.0/go.mod h1:bd9lpN3uLLOOWc/CFZbkPEi9DTkzQQymbE8FqSU4bWk=
k8s.io/kubelet v0.35.0 h1:8cgJHCBCKLYuuQ7/Pxb/qWbJfX1LXIw7790ce9xHq7c=
k8s.io/kubelet v0.35.0/go.mod h1:ciRzAXn7C4z5iB7FhG1L2CGPPXLTVCABDlbXt/Zz8YA=
k8s.io/kubernetes v1.35.1 h1:qmjXSCDPnOuXPuJb5pv+eLzpXhhlD09Jid1pG/OvFU8=
k8s.io/kubernetes v1.35.1/go.mod h1:AaPpCpiS8oAqRbEwpY5r3RitLpwpVp5lVXKFkJril58=
k8s.io/kubernetes v1.35.2 h1:2HthVDfK3YJYv624imuKXPzUJ17xQop9OT5dgT+IMKE=
k8s.io/kubernetes v1.35.2/go.mod h1:AaPpCpiS8oAqRbEwpY5r3RitLpwpVp5lVXKFkJril58=
k8s.io/system-validators v1.12.1 h1:AY1+COTLJN/Sj0w9QzH1H0yvyF3Kl6CguMnh32WlcUU=
k8s.io/system-validators v1.12.1/go.mod h1:awfSS706v9R12VC7u7K89FKfqVy44G+E0L1A0FX9Wmw=
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck=

View File

@@ -51,6 +51,18 @@ const (
kineInitContainerName = "chmod"
)
func applyProbeOverrides(probe *corev1.Probe, spec *kamajiv1alpha1.ProbeSpec) {
if spec == nil {
return
}
probe.InitialDelaySeconds = pointer.Deref(spec.InitialDelaySeconds, probe.InitialDelaySeconds)
probe.TimeoutSeconds = pointer.Deref(spec.TimeoutSeconds, probe.TimeoutSeconds)
probe.PeriodSeconds = pointer.Deref(spec.PeriodSeconds, probe.PeriodSeconds)
probe.SuccessThreshold = pointer.Deref(spec.SuccessThreshold, probe.SuccessThreshold)
probe.FailureThreshold = pointer.Deref(spec.FailureThreshold, probe.FailureThreshold)
}
type DataStoreOverrides struct {
Resource string
DataStore kamajiv1alpha1.DataStore
@@ -384,6 +396,16 @@ func (d Deployment) buildScheduler(podSpec *corev1.PodSpec, tenantControlPlane k
FailureThreshold: 3,
}
if probes := tenantControlPlane.Spec.ControlPlane.Deployment.Probes; probes != nil {
applyProbeOverrides(podSpec.Containers[index].LivenessProbe, probes.Liveness)
applyProbeOverrides(podSpec.Containers[index].StartupProbe, probes.Startup)
if probes.Scheduler != nil {
applyProbeOverrides(podSpec.Containers[index].LivenessProbe, probes.Scheduler.Liveness)
applyProbeOverrides(podSpec.Containers[index].StartupProbe, probes.Scheduler.Startup)
}
}
switch {
case tenantControlPlane.Spec.ControlPlane.Deployment.Resources == nil:
podSpec.Containers[index].Resources = corev1.ResourceRequirements{}
@@ -475,6 +497,17 @@ func (d Deployment) buildControllerManager(podSpec *corev1.PodSpec, tenantContro
SuccessThreshold: 1,
FailureThreshold: 3,
}
if probes := tenantControlPlane.Spec.ControlPlane.Deployment.Probes; probes != nil {
applyProbeOverrides(podSpec.Containers[index].LivenessProbe, probes.Liveness)
applyProbeOverrides(podSpec.Containers[index].StartupProbe, probes.Startup)
if probes.ControllerManager != nil {
applyProbeOverrides(podSpec.Containers[index].LivenessProbe, probes.ControllerManager.Liveness)
applyProbeOverrides(podSpec.Containers[index].StartupProbe, probes.ControllerManager.Startup)
}
}
switch {
case tenantControlPlane.Spec.ControlPlane.Deployment.Resources == nil:
podSpec.Containers[index].Resources = corev1.ResourceRequirements{}
@@ -606,6 +639,19 @@ func (d Deployment) buildKubeAPIServer(podSpec *corev1.PodSpec, tenantControlPla
SuccessThreshold: 1,
FailureThreshold: 3,
}
if probes := tenantControlPlane.Spec.ControlPlane.Deployment.Probes; probes != nil {
applyProbeOverrides(podSpec.Containers[index].LivenessProbe, probes.Liveness)
applyProbeOverrides(podSpec.Containers[index].ReadinessProbe, probes.Readiness)
applyProbeOverrides(podSpec.Containers[index].StartupProbe, probes.Startup)
if probes.APIServer != nil {
applyProbeOverrides(podSpec.Containers[index].LivenessProbe, probes.APIServer.Liveness)
applyProbeOverrides(podSpec.Containers[index].ReadinessProbe, probes.APIServer.Readiness)
applyProbeOverrides(podSpec.Containers[index].StartupProbe, probes.APIServer.Startup)
}
}
podSpec.Containers[index].ImagePullPolicy = corev1.PullAlways
// Volume mounts
var extraVolumeMounts []corev1.VolumeMount

View File

@@ -4,12 +4,21 @@
package controlplane
import (
"testing"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
corev1 "k8s.io/api/core/v1"
pointer "k8s.io/utils/ptr"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)
func TestControlplaneDeployment(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Controlplane Deployment Suite")
}
var _ = Describe("Controlplane Deployment", func() {
var d Deployment
BeforeEach(func() {
@@ -43,4 +52,74 @@ var _ = Describe("Controlplane Deployment", func() {
Expect(etcdSerVersOverrides).To(Equal("/events#https://etcd-0;https://etcd-1;https://etcd-2,/pods#https://etcd-3;https://etcd-4;https://etcd-5"))
})
})
Describe("applyProbeOverrides", func() {
var probe *corev1.Probe
BeforeEach(func() {
probe = &corev1.Probe{
InitialDelaySeconds: 0,
TimeoutSeconds: 1,
PeriodSeconds: 10,
SuccessThreshold: 1,
FailureThreshold: 3,
}
})
It("should not modify probe when spec is nil", func() {
applyProbeOverrides(probe, nil)
Expect(probe.InitialDelaySeconds).To(Equal(int32(0)))
Expect(probe.TimeoutSeconds).To(Equal(int32(1)))
Expect(probe.PeriodSeconds).To(Equal(int32(10)))
Expect(probe.SuccessThreshold).To(Equal(int32(1)))
Expect(probe.FailureThreshold).To(Equal(int32(3)))
})
It("should override only FailureThreshold when only it is set", func() {
spec := &kamajiv1alpha1.ProbeSpec{
FailureThreshold: pointer.To(int32(30)),
}
applyProbeOverrides(probe, spec)
Expect(probe.FailureThreshold).To(Equal(int32(30)))
Expect(probe.InitialDelaySeconds).To(Equal(int32(0)))
Expect(probe.TimeoutSeconds).To(Equal(int32(1)))
Expect(probe.PeriodSeconds).To(Equal(int32(10)))
Expect(probe.SuccessThreshold).To(Equal(int32(1)))
})
It("should override all fields when all are set", func() {
spec := &kamajiv1alpha1.ProbeSpec{
InitialDelaySeconds: pointer.To(int32(15)),
TimeoutSeconds: pointer.To(int32(5)),
PeriodSeconds: pointer.To(int32(30)),
SuccessThreshold: pointer.To(int32(2)),
FailureThreshold: pointer.To(int32(10)),
}
applyProbeOverrides(probe, spec)
Expect(probe.InitialDelaySeconds).To(Equal(int32(15)))
Expect(probe.TimeoutSeconds).To(Equal(int32(5)))
Expect(probe.PeriodSeconds).To(Equal(int32(30)))
Expect(probe.SuccessThreshold).To(Equal(int32(2)))
Expect(probe.FailureThreshold).To(Equal(int32(10)))
})
It("should cascade global then component overrides", func() {
global := &kamajiv1alpha1.ProbeSpec{
FailureThreshold: pointer.To(int32(10)),
PeriodSeconds: pointer.To(int32(20)),
}
applyProbeOverrides(probe, global)
component := &kamajiv1alpha1.ProbeSpec{
FailureThreshold: pointer.To(int32(60)),
}
applyProbeOverrides(probe, component)
Expect(probe.FailureThreshold).To(Equal(int32(60)))
Expect(probe.PeriodSeconds).To(Equal(int32(20)))
Expect(probe.TimeoutSeconds).To(Equal(int32(1)))
Expect(probe.InitialDelaySeconds).To(Equal(int32(0)))
Expect(probe.SuccessThreshold).To(Equal(int32(1)))
})
})
})

View File

@@ -47,6 +47,7 @@ func NewStorageConnection(ctx context.Context, client client.Client, ds kamajiv1
type Connection interface {
CreateUser(ctx context.Context, user, password string) error
UpdateUser(ctx context.Context, user, password string) error
CreateDB(ctx context.Context, dbName string) error
GrantPrivileges(ctx context.Context, user, dbName string) error
UserExists(ctx context.Context, user string) (bool, error)

View File

@@ -5,6 +5,10 @@ package errors
import "fmt"
func NewUpdateUserError(err error) error {
return fmt.Errorf("cannot update user: %w", err)
}
func NewCreateUserError(err error) error {
return fmt.Errorf("cannot create user: %w", err)
}

View File

@@ -49,6 +49,10 @@ func (e *EtcdClient) CreateUser(ctx context.Context, user, password string) erro
return nil
}
func (e *EtcdClient) UpdateUser(ctx context.Context, user, password string) error {
return nil
}
func (e *EtcdClient) CreateDB(context.Context, string) error {
return nil
}

View File

@@ -29,6 +29,7 @@ const (
mysqlShowGrantsStatement = "SHOW GRANTS FOR `%s`@`%%`"
mysqlCreateDBStatement = "CREATE DATABASE IF NOT EXISTS %s"
mysqlCreateUserStatement = "CREATE USER `%s`@`%%` IDENTIFIED BY '%s'"
mysqlUpdateUserStatement = "ALTER USER `%s`@`%%` IDENTIFIED BY '%s'"
mysqlGrantPrivilegesStatement = "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX ON `%s`.* TO `%s`@`%%`"
mysqlDropDBStatement = "DROP DATABASE IF EXISTS `%s`"
mysqlDropUserStatement = "DROP USER IF EXISTS `%s`"
@@ -158,6 +159,14 @@ func (c *MySQLConnection) CreateUser(ctx context.Context, user, password string)
return nil
}
func (c *MySQLConnection) UpdateUser(ctx context.Context, user, password string) error {
if err := c.mutate(ctx, mysqlUpdateUserStatement, user, password); err != nil {
return errors.NewUpdateUserError(err)
}
return nil
}
func (c *MySQLConnection) CreateDB(ctx context.Context, dbName string) error {
if err := c.mutate(ctx, mysqlCreateDBStatement, dbName); err != nil {
return errors.NewCreateDBError(err)

View File

@@ -70,6 +70,10 @@ func (nc *NATSConnection) CreateUser(_ context.Context, _, _ string) error {
return nil
}
func (nc *NATSConnection) UpdateUser(_ context.Context, _, _ string) error {
return nil
}
func (nc *NATSConnection) CreateDB(_ context.Context, dbName string) error {
_, err := nc.js.CreateKeyValue(&nats.KeyValueConfig{Bucket: dbName})
if err != nil {

View File

@@ -20,6 +20,7 @@ const (
postgresqlCreateDBStatement = `CREATE DATABASE "%s"`
postgresqlUserExists = "SELECT 1 FROM pg_roles WHERE rolname = ?"
postgresqlCreateUserStatement = `CREATE ROLE "%s" LOGIN PASSWORD ?`
postgresqlUpdateUserStatement = `ALTER ROLE "%s" WITH PASSWORD ?`
postgresqlShowGrantsStatement = "SELECT has_database_privilege(rolname, ?, 'create') from pg_roles where rolcanlogin and rolname = ?"
postgresqlShowOwnershipStatement = "SELECT 't' FROM pg_catalog.pg_database AS d WHERE d.datname = ? AND pg_catalog.pg_get_userbyid(d.datdba) = ?"
postgresqlShowTableOwnershipStatement = "SELECT 't' from pg_tables where tableowner = ? AND tablename = ?"
@@ -142,6 +143,15 @@ func (r *PostgreSQLConnection) CreateUser(ctx context.Context, user, password st
return nil
}
func (r *PostgreSQLConnection) UpdateUser(ctx context.Context, user, password string) error {
_, err := r.db.ExecContext(ctx, fmt.Sprintf(postgresqlUpdateUserStatement, user), password)
if err != nil {
return errors.NewUpdateUserError(err)
}
return nil
}
func (r *PostgreSQLConnection) DBExists(ctx context.Context, dbName string) (bool, error) {
rows, err := r.db.ExecContext(ctx, postgresqlFetchDBStatement, dbName)
if err != nil {

View File

@@ -104,6 +104,7 @@ func getKubeletConfigmapContent(kubeletConfiguration KubeletConfiguration, patch
return nil, fmt.Errorf("unable to apply JSON patching to KubeletConfiguration: %w", patchErr)
}
kc = kubelettypes.KubeletConfiguration{}
if patchErr = utilities.DecodeFromJSON(string(kubeletConfig), &kc); patchErr != nil {
return nil, fmt.Errorf("unable to decode JSON to KubeletConfiguration: %w", patchErr)
}

View File

@@ -230,6 +230,10 @@ func (r *Setup) createUser(ctx context.Context, _ *kamajiv1alpha1.TenantControlP
    }

    if exists {
        if updateErr := r.Connection.UpdateUser(ctx, r.resource.user, r.resource.password); updateErr != nil {
            return controllerutil.OperationResultNone, fmt.Errorf("unable to update the user: %w", updateErr)
        }

        return controllerutil.OperationResultNone, nil
    }

View File

@@ -1,58 +0,0 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0

package handlers

import (
    "context"
    "fmt"
    "net"

    "gomodules.xyz/jsonpatch/v2"
    "k8s.io/apimachinery/pkg/runtime"
    "sigs.k8s.io/controller-runtime/pkg/webhook/admission"

    kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
    "github.com/clastix/kamaji/internal/webhook/utils"
)

type TenantControlPlaneLoadBalancerSourceRanges struct{}

func (t TenantControlPlaneLoadBalancerSourceRanges) handle(tcp *kamajiv1alpha1.TenantControlPlane) error {
    for _, sourceCIDR := range tcp.Spec.NetworkProfile.LoadBalancerSourceRanges {
        _, _, err := net.ParseCIDR(sourceCIDR)
        if err != nil {
            return fmt.Errorf("invalid LoadBalancer source CIDR %s, %s", sourceCIDR, err.Error())
        }
    }

    return nil
}

func (t TenantControlPlaneLoadBalancerSourceRanges) OnCreate(object runtime.Object) AdmissionResponse {
    return func(context.Context, admission.Request) ([]jsonpatch.JsonPatchOperation, error) {
        tcp := object.(*kamajiv1alpha1.TenantControlPlane) //nolint:forcetypeassert

        if err := t.handle(tcp); err != nil {
            return nil, err
        }

        return nil, nil
    }
}

func (t TenantControlPlaneLoadBalancerSourceRanges) OnDelete(runtime.Object) AdmissionResponse {
    return utils.NilOp()
}

func (t TenantControlPlaneLoadBalancerSourceRanges) OnUpdate(object runtime.Object, _ runtime.Object) AdmissionResponse {
    return func(context.Context, admission.Request) ([]jsonpatch.JsonPatchOperation, error) {
        tcp := object.(*kamajiv1alpha1.TenantControlPlane) //nolint:forcetypeassert

        if err := t.handle(tcp); err != nil {
            return nil, err
        }

        return nil, nil
    }
}
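The validation in the handler above reduces to a `net.ParseCIDR` loop, which can be exercised standalone with the standard library (the `validateSourceRanges` helper name is illustrative):

```go
package main

import (
	"fmt"
	"net"
)

// validateSourceRanges mirrors the handler's check: every entry must
// parse as a CIDR, otherwise admission is denied.
func validateSourceRanges(ranges []string) error {
	for _, cidr := range ranges {
		if _, _, err := net.ParseCIDR(cidr); err != nil {
			return fmt.Errorf("invalid LoadBalancer source CIDR %s, %s", cidr, err.Error())
		}
	}
	return nil
}

func main() {
	fmt.Println(validateSourceRanges([]string{"192.168.0.0/24"}))        // <nil>
	fmt.Println(validateSourceRanges([]string{"192.168.0.0/33"}) != nil) // true: /33 is not a valid prefix
}
```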

View File

@@ -1,64 +0,0 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0

package handlers_test

import (
    "context"

    . "github.com/onsi/ginkgo/v2"
    . "github.com/onsi/gomega"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/controller-runtime/pkg/webhook/admission"

    kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
    "github.com/clastix/kamaji/internal/webhook/handlers"
)

var _ = Describe("TCP LoadBalancer Source Ranges Webhook", func() {
    var (
        ctx context.Context
        t   handlers.TenantControlPlaneLoadBalancerSourceRanges
        tcp *kamajiv1alpha1.TenantControlPlane
    )

    BeforeEach(func() {
        t = handlers.TenantControlPlaneLoadBalancerSourceRanges{}
        tcp = &kamajiv1alpha1.TenantControlPlane{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "tcp",
                Namespace: "default",
            },
            Spec: kamajiv1alpha1.TenantControlPlaneSpec{},
        }
        ctx = context.Background()
    })

    It("allows creation when valid CIDR ranges are provided", func() {
        tcp.Spec.ControlPlane.Service.ServiceType = kamajiv1alpha1.ServiceTypeLoadBalancer
        tcp.Spec.NetworkProfile.LoadBalancerSourceRanges = []string{"192.168.0.0/24"}

        _, err := t.OnCreate(tcp)(ctx, admission.Request{})
        Expect(err).ToNot(HaveOccurred())
    })

    It("allows creation when LoadBalancer service has no CIDR field", func() {
        tcp.Spec.ControlPlane.Service.ServiceType = kamajiv1alpha1.ServiceTypeLoadBalancer

        _, err := t.OnCreate(tcp)(ctx, admission.Request{})
        Expect(err).ToNot(HaveOccurred())
    })

    It("allows creation when LoadBalancer service has an empty CIDR list", func() {
        tcp.Spec.ControlPlane.Service.ServiceType = kamajiv1alpha1.ServiceTypeLoadBalancer
        tcp.Spec.NetworkProfile.LoadBalancerSourceRanges = []string{}

        _, err := t.OnCreate(tcp)(ctx, admission.Request{})
        Expect(err).ToNot(HaveOccurred())
    })

    It("denies creation when source ranges contain invalid CIDRs", func() {
        tcp.Spec.ControlPlane.Service.ServiceType = kamajiv1alpha1.ServiceTypeLoadBalancer
        tcp.Spec.NetworkProfile.LoadBalancerSourceRanges = []string{"192.168.0.0/33"}

        _, err := t.OnCreate(tcp)(ctx, admission.Request{})
        Expect(err).To(HaveOccurred())
        Expect(err.Error()).To(ContainSubstring("invalid LoadBalancer source CIDR 192.168.0.0/33"))
    })
})

View File

@@ -1,71 +0,0 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0

package handlers

import (
    "context"
    "fmt"
    "net"

    "gomodules.xyz/jsonpatch/v2"
    "k8s.io/apimachinery/pkg/runtime"
    "sigs.k8s.io/controller-runtime/pkg/webhook/admission"

    kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
    "github.com/clastix/kamaji/internal/webhook/utils"
)

type TenantControlPlaneServiceCIDR struct{}

func (t TenantControlPlaneServiceCIDR) handle(tcp *kamajiv1alpha1.TenantControlPlane) error {
    if tcp.Spec.Addons.CoreDNS == nil {
        return nil
    }

    _, cidr, err := net.ParseCIDR(tcp.Spec.NetworkProfile.ServiceCIDR)
    if err != nil {
        return fmt.Errorf("unable to parse Service CIDR, %s", err.Error())
    }

    for _, serviceIP := range tcp.Spec.NetworkProfile.DNSServiceIPs {
        ip := net.ParseIP(serviceIP)
        if ip == nil {
            return fmt.Errorf("unable to parse IP address %s", serviceIP)
        }

        if !cidr.Contains(ip) {
            return fmt.Errorf("the Service CIDR does not contain the DNS Service IP %s", serviceIP)
        }
    }

    return nil
}

func (t TenantControlPlaneServiceCIDR) OnCreate(object runtime.Object) AdmissionResponse {
    return func(context.Context, admission.Request) ([]jsonpatch.JsonPatchOperation, error) {
        tcp := object.(*kamajiv1alpha1.TenantControlPlane) //nolint:forcetypeassert

        if err := t.handle(tcp); err != nil {
            return nil, err
        }

        return nil, nil
    }
}

func (t TenantControlPlaneServiceCIDR) OnDelete(runtime.Object) AdmissionResponse {
    return utils.NilOp()
}

func (t TenantControlPlaneServiceCIDR) OnUpdate(object runtime.Object, _ runtime.Object) AdmissionResponse {
    return func(context.Context, admission.Request) ([]jsonpatch.JsonPatchOperation, error) {
        tcp := object.(*kamajiv1alpha1.TenantControlPlane) //nolint:forcetypeassert

        if err := t.handle(tcp); err != nil {
            return nil, err
        }

        return nil, nil
    }
}
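The containment check in the ServiceCIDR handler above also boils down to two standard-library calls, `net.ParseCIDR` and `IPNet.Contains`. A standalone sketch (the `checkDNSServiceIPs` helper name is illustrative):

```go
package main

import (
	"fmt"
	"net"
)

// checkDNSServiceIPs mirrors the handler's logic: each DNS Service IP
// must parse and fall inside the tenant's Service CIDR.
func checkDNSServiceIPs(serviceCIDR string, dnsIPs []string) error {
	_, cidr, err := net.ParseCIDR(serviceCIDR)
	if err != nil {
		return fmt.Errorf("unable to parse Service CIDR, %s", err.Error())
	}
	for _, s := range dnsIPs {
		ip := net.ParseIP(s)
		if ip == nil {
			return fmt.Errorf("unable to parse IP address %s", s)
		}
		if !cidr.Contains(ip) {
			return fmt.Errorf("the Service CIDR does not contain the DNS Service IP %s", s)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkDNSServiceIPs("10.96.0.0/16", []string{"10.96.0.10"}))         // <nil>
	fmt.Println(checkDNSServiceIPs("10.96.0.0/16", []string{"10.97.0.10"}) != nil) // true: outside the /16
}
```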