Compare commits

...

298 Commits
v0.1.0 ... main

Author SHA1 Message Date
Enrico Candino
f341f7f5e8 Bump Charts to 1.0.2-rc1 (#652) 2026-01-29 09:55:31 +01:00
renovate-rancher[bot]
ca50a6b231 Update registry.suse.com/bci/bci-base Docker tag to v15.7 (#651)
* Update registry.suse.com/bci/bci-base Docker tag to v15.7

* move k3k controller image to `registry.suse.com/bci/bci-base:15.7`

---------

Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
Co-authored-by: Enrico Candino <enrico.candino@suse.com>
2026-01-28 12:35:28 +01:00
Enrico Candino
004e177ac1 Bump kubernetes dependencies (v1.33) (#647)
* bump kubernetes to v0.33.7

* updated kubernetes api versions

* bump tests

* fix k3s version

* fix test

* centralize k8s version

* remove focus

* revert GetPodCondition, GetContainerStatus and pin of k8s.io/controller-manager
2026-01-27 22:28:56 +01:00
Enrico Candino
0164c785ab Show correct allocatable resources when a Policy is applied (#638)
* wip

* wip

* wip

* fix lint and tests

* fixed bugs for missing resources

* cleanup and refactor

* removed coreClient from configureNode

* added comments to distribute algorithm
2026-01-27 15:56:37 +01:00
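The "distribute algorithm" referenced in the last bullet splits a Policy's allocatable budget across the virtual nodes. A minimal sketch of the idea, assuming an even split with the remainder going to the first node; illustrative only, not the actual k3k code:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// distribute splits a total quantity evenly across n virtual nodes, giving
// the remainder to the first node so the parts always sum to the total.
func distribute(total resource.Quantity, n int64) []resource.Quantity {
	parts := make([]resource.Quantity, 0, n)
	base := total.Value() / n
	rem := total.Value() % n

	for i := int64(0); i < n; i++ {
		v := base
		if i == 0 {
			v += rem
		}
		parts = append(parts, *resource.NewQuantity(v, total.Format))
	}
	return parts
}

func main() {
	// Split a 10Gi memory budget across three virtual nodes.
	for _, q := range distribute(resource.MustParse("10Gi"), 3) {
		fmt.Println(q.String())
	}
}
```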
Hussein Galal
c1b7da4c72 SecretMounts feature and private registries (#570)
* Add SecretMounts field

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2026-01-26 21:47:40 +02:00
renovate-rancher[bot]
ff0b03af02 Update Kubernetes dependencies to v1.32.10 [SECURITY] (#626)
* Update Kubernetes dependencies to v1.32.10 [SECURITY]

* bump k8s.io/kubelet

---------

Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
Co-authored-by: Enrico Candino <enrico.candino@suse.com>
2026-01-26 17:24:14 +01:00
Enrico Candino
62a76a8202 Bump testcontainers-go (v0.40.0), containerd (v1.7.30) and x/crypto (v0.45.0) (#640)
* bump testcontainers to v0.40.0

* bump containerd and x/crypto
2026-01-26 16:37:05 +01:00
Enrico Candino
9e841cc75c Update helm.sh/helm/v3 to v3.18.5 (#641)
* bump helm to v3.17.4

* removed unneeded replace

* bump helm to v3.18.5
2026-01-26 15:38:56 +01:00
renovate-rancher[bot]
bc79a2e6a9 Update module github.com/sirupsen/logrus to v1.9.4 (#631)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-26 13:45:23 +01:00
renovate-rancher[bot]
3681614a3e Update dependency golangci/golangci-lint to v2.8.0 (#635)
* Update dependency golangci/golangci-lint to v2.8.0

* bump golangci-lint version in github action

---------

Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
Co-authored-by: Enrico Candino <enrico.candino@suse.com>
2026-01-26 13:30:26 +01:00
renovate-rancher[bot]
f04d88bd3f Update github/codeql-action digest to 38e701f (#634)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-26 10:27:20 +01:00
renovate-rancher[bot]
4b293cef42 Update module go.uber.org/zap to v1.27.1 (#633)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-26 10:19:02 +01:00
renovate-rancher[bot]
1e0aa0ad37 Update module github.com/spf13/cobra to v1.10.2 (#632)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-26 10:18:36 +01:00
renovate-rancher[bot]
e28fa84ae7 Update module github.com/go-logr/logr to v1.4.3 (#629)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-26 10:18:01 +01:00
renovate-rancher[bot]
511be5aa4e Pin dependencies (#628)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-22 15:24:57 +01:00
renovate-rancher[bot]
cd6c962bcf Migrate config .github/renovate.json (#627)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-22 15:06:16 +01:00
Kevin McDermott
c0418267c9 Merge pull request #623 from bigkevmcd/resource-quantity
Use resource.Quantity instead of a string for storageRequestSize in the Cluster definition.
2026-01-22 13:13:06 +00:00
Kevin McDermott
eaa20c16e7 Make the storageRequestSize immutable.
It can't be changed in the StatefulSet and modifying the value causes an
error.
2026-01-22 08:27:20 +00:00
jpgouin
0cea0c9e14 Only reconcile the server resource on the StatefulSet Controller (fix #618) 2026-01-21 16:53:52 +01:00
Kevin McDermott
d12f3ea757 Fix lint issues and failing test.
golangci-lint was complaining about duplicate imports of corev1 and the
ordering of them in the files.
2026-01-21 14:50:30 +00:00
Kevin McDermott
9ea81c861b Use resource.Quantity for storageRequestSize
Previously the resource.Quantity was stored as string which allowed
invalid values to be created.

This performs validation on the strings using the standard K8s resource
mechanism.
2026-01-21 14:50:28 +00:00
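The standard mechanism referred to here is `resource.ParseQuantity`, which rejects malformed values up front instead of letting them reach the StatefulSet. A small self-contained demonstration:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Parsing rejects malformed sizes that a plain string field would accept.
	for _, s := range []string{"2Gi", "500M", "two-gigs"} {
		q, err := resource.ParseQuantity(s)
		if err != nil {
			fmt.Printf("%q rejected: %v\n", s, err)
			continue
		}
		fmt.Printf("%q ok: %s\n", s, q.String())
	}
}
```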
Enrico Candino
20c5441030 Bump to Go 1.25 (#620)
* bump to Go 1.25

* add go toolchain
2026-01-21 15:21:34 +01:00
renovate-rancher[bot]
a3a4c931a0 Add initial Renovate configuration (#621)
* Add initial Renovate configuration

* add permission

* fix multiple runs

---------

Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
Co-authored-by: Enrico Candino <enrico.candino@suse.com>
2026-01-21 15:04:51 +01:00
Hussein Galal
fcc7191ab3 CLI cluster update (#595)
* CLI cluster update

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2026-01-20 14:00:24 +02:00
jpgouin
ff6862e511 fix virtual pod NodeSelector #572 (#616) 2026-01-20 11:33:42 +01:00
Peter Matseykanets
20305e03b7 Add a dedicated Validate GitHub Actions workflow (#614) 2026-01-19 10:00:12 -05:00
Enrico Candino
5f42eafd2a Dev doc update (#611)
* update development.md

* fix tests

* fix cli test
2026-01-16 14:56:43 +01:00
Enrico Candino
ccc3d1651c Fixed resource allocation fetching the stats from the node where the (#610)
virtual-kubelet is running.
Removed random node selection during Pod creation.
2026-01-16 13:23:18 +01:00
Enrico Candino
0185998aa0 Bump Charts to 1.0.2-rc1 (#609) 2026-01-15 14:12:52 +01:00
Guilherme Macedo
af5d33cfb8 Add FOSSA scanning workflow (#606)
Signed-off-by: Guilherme Macedo <guilherme@gmacedo.com>
2026-01-14 19:14:07 +01:00
Enrico Candino
f0d9b08b24 Added AsciiDoc K3k CRDs docs automation (#600)
* added asciidoc conversion for crds doc

* check versions

* use crd-ref-docs for generated asciidoc documentation

* add custom templates

* cleanup templates

* update references, rename docs file

* revert to found available pandoc version
2026-01-09 15:50:36 +01:00
Hussein Galal
a871917aec Refactor startup command to wait for node IP changes (#598)
* Patch node ip when server pod restarts

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Refactor startup command and adding safe mode

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add date/time logging to the startup script

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2026-01-09 16:29:47 +02:00
Enrico Candino
c16eae99c7 Added AsciiDoc k3kcli automation (#597)
* adding scripts for asciidoc cli generation

* fix small typos to align to existing docs

* added pandoc

* pandoc check
2026-01-07 11:16:40 +01:00
Enrico Candino
fc6bcedc5f fix for missing label update in creation, added tests (#592) 2025-12-16 11:34:50 +01:00
Hussein Galal
0086d5aa4a Attach creation of Pseudo PV to the PVC instead of the pods (#577)
* Change creation of pseudo PV to PVC instead of pods

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Ignore not found pvs when deleting the pvc

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Avoid returning when the PV already exists

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-12-15 17:27:30 +02:00
Enrico Candino
c9bb1bcf46 Fixed CreatePod and UpdatePod in Virtual Kubelet for Downward API (#573)
* wip

* fix lint

* ephemerals container

* remove unused

* add retry

* volumes refactor

* added configmap and secret keyRef translation

* set debug logger to virtual kubelet logger

* added tests

* fix lint, removed unused func

* added test file locally
2025-12-10 13:47:34 +01:00
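For context on the Downward API fixes above: a virtual kubelet has to rewrite `fieldRef` env vars before creating the pod on the host, because a field path would otherwise resolve against the host pod's metadata. A hedged sketch of that translation; the helper and the set of supported field paths are illustrative, not k3k's code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// resolveFieldRef resolves a Downward API fieldRef against the virtual
// pod so its literal value can be injected into the host pod.
func resolveFieldRef(pod *corev1.Pod, sel *corev1.ObjectFieldSelector) (string, bool) {
	switch sel.FieldPath {
	case "metadata.name":
		return pod.Name, true
	case "metadata.namespace":
		return pod.Namespace, true
	case "spec.nodeName":
		return pod.Spec.NodeName, true
	case "status.podIP":
		return pod.Status.PodIP, true
	}
	return "", false
}

func main() {
	pod := &corev1.Pod{}
	pod.Name, pod.Namespace = "web-0", "default"

	env := corev1.EnvVar{
		Name: "POD_NAME",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
		},
	}
	if v, ok := resolveFieldRef(pod, env.ValueFrom.FieldRef); ok {
		// Replace the fieldRef with a literal value for the host pod.
		env = corev1.EnvVar{Name: env.Name, Value: v}
	}
	fmt.Printf("%s=%s\n", env.Name, env.Value)
}
```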
Enrico Candino
6d5dd8564f Bump Charts to 1.0.1 (#588) 2025-12-09 12:33:21 +01:00
Enrico Candino
93025d301b Bump Charts to 1.0.1-rc2 (#586) 2025-12-03 11:44:49 +01:00
Enrico Candino
e385ceb66f Fixed missing Kubernetes host version when specified (#585)
* fix for missing host version

* added test

* fix test

* fix test
2025-12-03 09:21:27 +01:00
Enrico Candino
5c49c3d6b7 Fix create events rbac (#575)
* cleanup logs in kubelet provider

* added events create rbac to kubelet

* fix lint, moved fetch pod logic in separate func
2025-11-25 13:48:04 +01:00
Enrico Candino
521ff17ef6 Added test for SubPathExpr (#569)
* small fixes

* added test for subpathexpr

* removed old comment
2025-11-21 16:28:59 +01:00
Enrico Candino
5b4f31ef73 bump version to 1.0.1-rc1 in Chart.yaml (#567) 2025-11-17 18:24:57 +01:00
Enrico Candino
8856419e70 added check for failing tests (#566) 2025-11-17 12:58:59 +01:00
Enrico Candino
8760afd5bc Added --namespace flag to k3kcli policy create (#564)
* added --namespace flag to policy create to actually bind the new policy to existing namespaces

* fix lint

* fix tests

* added overwrite flag

* updated cli docs

* fix tests 2

* moved double quotes to single quotes

* fix test
2025-11-14 21:45:28 +01:00
Enrico Candino
27730305c2 Added labels and annotations flags to cluster and policy create (#565)
* added labels and annotations flags to cluster create

* added labels and annotations flag to create policy command
2025-11-14 16:54:01 +01:00
Enrico Candino
d0e50a580d Added cluster details in cli during creation (#562)
* added cluster details in cli during cluster creation, silenced usage, removed static persistence type from help

* fix docs
2025-11-14 12:53:15 +01:00
Enrico Candino
7dc4726bbd Fixed panic during kubeconfig generate (#554)
* fix panic during kubeconfig generate

* moved check
2025-11-11 17:18:26 +01:00
Enrico Candino
7144cf9e66 Moved CRDs to Helm templates folder (#552)
* moved CRDs of Cluster and VirtualClusterPolicy

Updated the generate script to output CRDs to the correct directory and include the keep resource policy annotation.

* fix crd directory in tests
2025-11-11 16:22:56 +01:00
Enrico Candino
de0d2a0019 Add Job Summary reports to Conformance tests (#553)
* simplify shared conformance tests

* summary

* added failed test to summary

* space

* fix failed tests file

* removed sigs test
2025-11-11 13:01:23 +01:00
Enrico Candino
a84c49f9b6 Update Go version and some deps (#551)
* bump to Go 1.24.10

* bump k8s libs to v0.31.13 and v1.31.13

* bump cli deps (cobra, viper, pflag)
2025-11-07 12:34:34 +01:00
Enrico Candino
e79e6dbfc4 add upload permissions (#550) 2025-11-06 16:53:51 +01:00
Enrico Candino
2b6441e54e Added trivy vulns check (#549)
* image check

* added k3kcli
2025-11-06 12:46:15 +01:00
Enrico Candino
49a8d2a0ba Bump Charts to 1.0.0 (#543) 2025-11-03 16:44:11 +01:00
Enrico Candino
2e6de51dab Improve tests resiliency (#539)
* fix missing namespaces cleanup

* fix conflict namespace

* fix PVC already created error, patch for existing volume, and check with hardcoded k3k name

* removed useless test

* fix for dump covdata from external pod

* keep namespaces flag

* fix for multi-node clusters

* fix for hanging pod in isolated namespace
2025-10-31 21:51:37 +01:00
Enrico Candino
90aecbbb42 Bump Charts to 1.0.0-rc3 (#542) 2025-10-31 17:01:03 +01:00
Enrico Candino
af9e1d6ca7 Cleanup orphaned resources after Cluster deletion (#540)
* adding controller reference for garbage collection, delete API lease

* added test

* fix lint
2025-10-31 15:25:38 +01:00
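The controller reference mentioned above is the standard controller-runtime mechanism: once a child object carries an owner reference to its Cluster, Kubernetes garbage-collects it when the Cluster is deleted. A sketch using `controllerutil.SetControllerReference`, with ConfigMaps standing in for the real objects:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes/scheme"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

func main() {
	owner := &corev1.ConfigMap{}
	owner.Name, owner.Namespace, owner.UID = "cluster-config", "k3k-ns", "1234"

	child := &corev1.ConfigMap{}
	child.Name, child.Namespace = "child", "k3k-ns"

	// SetControllerReference makes `child` owned by `owner`, so deleting
	// the owner garbage-collects the child: the mechanism used here to
	// clean up resources left behind after a Cluster deletion.
	if err := controllerutil.SetControllerReference(owner, child, scheme.Scheme); err != nil {
		panic(err)
	}
	fmt.Println(child.OwnerReferences[0].Name)
}
```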
Enrico Candino
ae380fa8e9 bump chart to 1.0.0-rc2 (#535) 2025-10-28 16:28:45 +01:00
Enrico Candino
c34cf9ce94 added virtual mode conformance tests (#534) 2025-10-28 13:47:31 +01:00
Enrico Candino
bf70e0d171 Updated Cluster and VirtualClusterPolicy spec for sync and loadbalancer (#528)
* add default false for ingress and priorityClass, cleanup tests and added new tests

* fix typo for loadBalancer

* fix test aligning VirtualClusterPolicy SyncConfig

* set required enabled field, revert pointer on optional SyncConfig

* update samples
2025-10-24 17:02:26 +02:00
Enrico Candino
cebf6594c4 switch to text log as default (#529) 2025-10-24 13:42:41 +02:00
Enrico Candino
075d72df5d Cleanup of customCAs spec (#527)
* cleanup spec from customCAs when omitted

* add enabled default for customCAs
2025-10-23 22:11:44 +02:00
Enrico Candino
ee7eac89ce Enhance logging and update Helm installation parameters for better debugging and cluster management (#519) 2025-10-22 14:55:47 +02:00
Enrico Candino
514fdf6b86 Fix for flaky test (#523)
* fix for flaky test

* fix lint

* check ContainersReady condition
2025-10-21 18:19:36 +02:00
Hussein Galal
730e4e1c79 Fix pseudo PV deletion (#511)
* Fix pseudo PV deletion

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix pseudo PV deletion

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-10-18 00:56:50 +02:00
Hussein Galal
a3076af38f Increase timeout and add timeout option (#514)
* Increase timeout and add timeout option

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Increase timeout and add timeout option

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-10-17 16:51:40 +03:00
Hussein Galal
89dc352bea Scale up/down tests for virtual and shared mode (#508)
* Scale up/down tests for virtual and shared mode

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* defer cleanup and more fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add labels to e2e tests and divide the workload

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add labels to e2e tests and divide the workload

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add validate job to e2e test

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix label filters for e2e tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix makefile

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* use constants for e2e tests labels

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix typo

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix labels

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-10-15 17:01:14 +03:00
Enrico Candino
7644406eeb Fix for flaky test (#509)
* fix for flaky test and fix for PVC creation

* fix lint
2025-10-15 11:31:45 +02:00
Enrico Candino
2206632dcc bump charts (#507) 2025-10-14 15:19:00 +02:00
Enrico Candino
8ffdc9bafd renaming webhook (#506) 2025-10-13 17:25:17 +02:00
Enrico Candino
594c2571c3 promoted v1alpha1 resources to v1beta1 (#505) 2025-10-13 17:24:56 +02:00
Hussein Galal
12971f55a6 Add k8s version upgrade test (#503)
* Add k8s version upgrade test

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* remove unused functions

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-10-13 17:14:25 +03:00
Enrico Candino
99f750525f Fix extraEnv and other Helm values (#500)
* fix for extraEnv

* moved env var to flags

* changed resources as object

* renamed replicaCount to replicas

* cleanup spaces

* moved some values and spacing

* renamed some flags
2025-10-13 12:50:07 +02:00
Hussein Galal
a0fd472841 Use K3S host cluster for E2E tests (#492)
* Add kubeconfig to e2e_tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add E2E_KUBECONFIG env variable

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix yaml permissions for kubeconfig

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix image name and use ttl.sh

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add uuidgen result to a file

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add hostIP

* Add k3s version to e2e test

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* remove comment

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* remove virtual mode tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix failed test

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add KUBECONFIG env variable to the make install

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add k3kcli to github_path

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Use docker installation for testing the cli

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* typo

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix test cli

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* typo

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-10-08 15:39:35 +03:00
Enrico Candino
7387fc1b23 Fix Service reconciliation error loop (#497)
* fix service reconciliation error by adding checks for virtual service annotations

* renamed var
2025-10-08 14:03:50 +02:00
Enrico Candino
9f265c73d9 Fix for HA server deletion (#493)
* wip

* wip

* wip

* removed todo
2025-10-08 13:23:15 +02:00
Enrico Candino
00ef6d582c Add log-format, and cleanup (#494)
* using logr.Logger

* testing levels

* adding log format

* fix lint

* removed tests

* final cleanup
2025-10-08 13:19:57 +02:00
Enrico Candino
5c95ca3dfa Fix for pod eviction in host cluster (#484)
* update statefulset controller

* fix for single pod

* adding pod controller

* added test

* removed comment

* merged service controller

* revert statefulset

* added test

* added common owner filter
2025-10-03 16:22:54 +02:00
jpgouin
6523b8339b change the default storage request size to 2Gi (#490)
* change the default storage request size to 2Gi
2025-10-03 09:04:13 +02:00
Hussein Galal
80037e815f Adding upgrade path tests (#481)
* Adding upgrade path tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Remove update label

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-10-02 14:53:08 +03:00
Enrico Candino
7585611792 Rename PodController to StatefulSetController (#482)
* renamed pod.go

* update statefulset controller

* fix for single pod

* added test, revert finalizer

* wip ha deletion

* revert logic

* remove focus
2025-10-01 17:06:24 +02:00
Hussein Galal
0bd681ab60 Lb service status sync (#451)
* Sync service LB status back to virtual service

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Sync service LB status back to virtual service

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-10-01 13:25:31 +03:00
Hussein Galal
4fe36b3d0c Bump Chart to v0.3.5 (#485)
* Bump Chart to v0.3.5

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Bump Chart to v0.3.5

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-09-30 15:26:59 +03:00
Enrico Candino
01589bb359 splitting tests (#461) 2025-09-23 12:07:49 +02:00
Hussein Galal
30217df268 Bump chart to v0.3.5-rc1 (#467)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-09-17 12:14:44 +03:00
Enrico Candino
04198652d5 check for single expose mode (#466) 2025-09-17 10:39:55 +02:00
Hussein Galal
72eb819216 Add imagepullsecrets to controller, server, and agents (#455)
* Add imagepullsecrets to controller, server, and agents

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix test cli

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixing tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add agent section to helm chart values

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix charts values

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixing chart and refactoring cluster config

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* default lists to the values of imagepullsecrets

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* more fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix version image function and add unit tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* simplify arguments and remove registry from the code

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-09-17 11:29:01 +03:00
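At the pod level, image pull secrets end up as `imagePullSecrets` entries on the generated server and agent pod specs. A small illustrative helper; names are examples only:

```go
package images

import corev1 "k8s.io/api/core/v1"

// withPullSecrets appends image pull secret references to a pod spec,
// which is where values like these ultimately land on server and agent
// pods. Secret names are illustrative.
func withPullSecrets(spec *corev1.PodSpec, names ...string) {
	for _, n := range names {
		spec.ImagePullSecrets = append(spec.ImagePullSecrets,
			corev1.LocalObjectReference{Name: n})
	}
}
```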
Alex Bissessur
4d4003f6f9 fix broken k3kcli docs path (#463)
Signed-off-by: alex <alexbissessur@gmail.com>
2025-09-16 16:41:19 +02:00
Hussein Galal
aca01127f8 Fix PVC sync and sync defaults (#458)
* Fix PVC sync and sync defaults

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix PVC sync and sync defaults

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes to pvc sync

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* increase the timeout on the e2e test

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* configure the syncConfig correctly in vcp

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix policy unit test

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* revert timeout of the test to 20 second

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-09-16 13:01:12 +03:00
Enrico Candino
1550c6b45a Add k3k controller coverage data (#452)
* added k3k controller coverage data

* cleanup
2025-09-03 11:37:56 +02:00
Hussein Galal
caf785f23b Add resources sync configuration (#431)
* Add resources sync configuration

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* refactor cluster sync

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* more fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* more fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* simplify the syncerContext

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* simplify the syncerContext

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* drop the ClusterClient struct

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix updates to syncer

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* refactor secrets/configmaps sync

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* refactor secrets/configmaps sync

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add imagepullsecret translation

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix test

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add exception for deleted resources

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* linting fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* remove the option to disable imagepullsecret translation

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-09-01 14:34:29 +03:00
Hussein Galal
b3f7a8ab7f Fix subpath field (#441)
* Fix pod fieldpath annotation translation (#434)

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix overrideEnvVars

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* remove extra comment

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix unit tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix merge env vars

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix test

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-09-01 11:10:31 +03:00
Enrico Candino
bd2494a0a9 Bump Charts to 0.3.4 (#446) 2025-08-28 10:57:23 +02:00
Enrico Candino
237a3cb280 Bump Charts to 0.3.4-rc3 (#445) 2025-08-25 19:02:30 +02:00
Enrico Candino
d23cf86fce Fix missing custom-certs flag in cli (#444)
* fix missing custom-certs path in cli

* fix docs
2025-08-25 18:37:29 +02:00
Enrico Candino
65cb8ad123 bump chart (#440) 2025-08-19 10:57:07 +02:00
Hussein Galal
6db88b5a00 Revert "Fix pod fieldpath annotation translation (#434)" (#435)
This reverts commit 883d401ae3.
2025-08-18 14:28:19 +03:00
Hussein Galal
8d89c7d133 Fix service port for generated kubeconfig secret (#433)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-08-18 09:17:30 +03:00
Hussein Galal
883d401ae3 Fix pod fieldpath annotation translation (#434)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-08-18 09:16:58 +03:00
Enrico Candino
f85702dc23 Bump version and appVersion to 0.3.4-rc1 in Chart.yaml (#429) 2025-07-24 17:13:12 +02:00
Enrico Candino
084701fcd9 Migrate from urfave/cli to cobra (#426)
* wip

* env var fix

* cluster create

* cluster create and delete

* cluster list

* cluster cmd

* kubeconfig

* policy create

* policy delete and list, and added root commands

* removed urfavecli from k3kcli

* fix policy command

* k3k-kubelet to cobra

* updated docs

* updated go.mod

* updated test

* added deletion

* added cleanup and flake attempts

* wip bind env

* simplified config
2025-07-24 16:49:40 +02:00
Enrico Candino
5eb1d2a5bb Adding some tests for k3kcli (#417)
* adding some cli tests

* added coverage and tests

* fix lint and cli tests

* fix defer

* some more cli tests
2025-07-23 11:03:41 +02:00
Enrico Candino
98d17cdb50 Added new golangci-lint formatters (#425)
* add gci formatter

* gofmt and gofumpt

* rewrite rule

* added make fmt
2025-07-22 10:42:41 +02:00
Enrico Candino
2047a600ed Migrate golangci-lint to v2 (#424)
* golangci-lint upgrade

* fix lint
2025-07-22 10:10:26 +02:00
Hussein Galal
a98c49b59a Adding custom certificate to the virtual clusters (#409)
* Adding custom certificate to the virtual clusters

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* docs update

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* integrate cert-manager

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add individual cert tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-07-21 19:23:11 +03:00
Enrico Candino
1048e3f82d fix for portallocator initialization (#423) 2025-07-21 17:03:39 +02:00
Alex Bissessur
c480bc339e update ver for k3kcli install (#421)
Signed-off-by: xelab04 <alexbissessur@gmail.com>
2025-07-21 11:18:34 +02:00
Enrico Candino
a0af20f20f codecov (#418) 2025-07-18 11:50:57 +02:00
Enrico Candino
748a439d7a fix for restoring policy (#413) 2025-07-17 10:25:09 +02:00
Enrico Candino
0a55bec305 improve chart-release workflow (#412) 2025-07-14 15:56:30 +02:00
Enrico Candino
2ab71df139 Add Conditions and current status to Cluster (#408)
* Added Cluster Conditions

* added e2e tests

* fix lint

* cli polling

* update tests
2025-07-14 15:53:37 +02:00
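Conditions on a CRD status typically follow the standard `metav1.Condition` machinery; a sketch of the pattern, with an illustrative condition type and reason:

```go
package status

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// markReady upserts a Ready condition using the standard helpers; the
// type, reason, and message strings here are examples, not k3k's actual
// condition set.
func markReady(conditions *[]metav1.Condition, generation int64) {
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:               "Ready",
		Status:             metav1.ConditionTrue,
		Reason:             "Provisioned",
		Message:            "all server pods are running",
		ObservedGeneration: generation,
	})
}
```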
Enrico Candino
753b31b52a Adding configurable maxConcurrentReconcilers and small CRD cleanup (#410)
* removed Persistence from Status, fixed default for StorageSize and StorageDefault

* added configurable maxConcurrentReconciles

* fix concurrent issues

* add validate as prereq for tests
2025-07-10 14:46:33 +02:00
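`maxConcurrentReconciles` is the controller-runtime option of the same name. A sketch of how a reconciler is wired with it; the watched type would be k3k's Cluster CRD, with a Pod standing in to keep the example self-contained:

```go
package controllers

import (
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// setup wires a reconciler with a configurable number of concurrent
// workers, the knob made configurable in the change above.
func setup(mgr ctrl.Manager, r reconcile.Reconciler, maxConcurrent int) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Pod{}).
		WithOptions(controller.Options{MaxConcurrentReconciles: maxConcurrent}).
		Complete(r)
}
```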
Hussein Galal
fcc875ab85 Mirror host nodes (#389)
* mirror host nodes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add mirror host nodes feature

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add controllername to secrets/configmap syncer

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* golint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* build docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* setting controller namespace env

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix typo

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add a controller_namespace env to the test

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add mirrorHostNodes spec to conformance tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* change the ptr int to int

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix map key name

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-07-08 14:48:24 +03:00
Enrico Candino
57263bd10e fail fast matrix (#398) 2025-07-01 11:04:56 +02:00
Enrico Candino
bf82318ad9 Add PriorityClass reconciler (virtual cluster -> host) (#377)
* added priorityclass controller

* added priorityClass controller tests

* fix for update priorityClass

* fix system skip priorityclass

* fix name
2025-07-01 11:03:14 +02:00
jpgouin
1ca86d09d1 add troubleshoot how to guide (#390)
* add troubleshoot how to guide

Co-authored-by: Enrico Candino <enrico.candino@gmail.com>
2025-06-30 16:54:13 +02:00
Enrico Candino
584bae8974 bump charts (#403) 2025-06-30 10:43:53 +02:00
Hussein Galal
5a24c4edf7 bump charts to 0.3.3-r6 (#401)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-06-27 17:02:23 +03:00
Hussein Galal
44aa1a22ab Add pods/attach permission to k3k-kubelet (#400)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-06-27 16:42:05 +03:00
Enrico Candino
2b115a0b80 Add scheduled Conformance tests for shared mode (#396)
* add conformance tests with matrix

* fix serial

* split conformance and sigs

* push

* sig check focus fix

* cleanup cluster

* matrix for conformance tests

* removed push
2025-06-26 15:55:08 +02:00
Enrico Candino
8eb5c49ce4 bump chart (#395) 2025-06-25 10:48:52 +02:00
Enrico Candino
54ae8d2126 add named controller (#394) 2025-06-24 23:56:14 +02:00
Hussein Galal
3a101dccfd bump charts to 0.3.3-r4 (#393)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-06-24 00:11:17 +03:00
Hussein Galal
b81073619a Generate kubeconfig secret (#392)
* Generate kubeconfig secret

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix typo

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix typo

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-06-23 14:31:36 +03:00
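Generating a kubeconfig secret generally means serializing a `clientcmdapi.Config` and storing the bytes in a Secret. A hedged sketch; the secret name and data key are illustrative:

```go
package kubeconfig

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// secretFor serializes a kubeconfig and wraps it in a Secret so other
// workloads can mount it; naming conventions here are examples only.
func secretFor(name, namespace string, cfg clientcmdapi.Config) (*corev1.Secret, error) {
	raw, err := clientcmd.Write(cfg)
	if err != nil {
		return nil, err
	}
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Data:       map[string][]byte{"kubeconfig.yaml": raw},
	}, nil
}
```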
Enrico Candino
f5d2e981ab Bump Charts to 0.3.3-r3 (#391)
* bump charts to 0.3.3-r2

* bump charts to 0.3.3-r3
2025-06-20 18:19:10 +02:00
jpgouin
541f506d9d [CLI] add storage-request-size flag (#372)
[CLI] add storage-request-size flag
2025-06-20 17:13:47 +02:00
Enrico Candino
f389a4e2be Fix Network Policy reconciliation (#388)
* logs

* fix delete cleanup

* update spec

* added policyName to status, skip netpol for policy managed clusters
2025-06-20 16:10:47 +02:00
jpgouin
818328c9d4 fix-howto usage of serverEnvs and agentEnvs (#385) 2025-06-20 12:32:43 +02:00
Enrico Candino
0c4752039d Fix for k3kcli policy delete (#386)
* fix for delete policy

* fix docs
2025-06-20 12:08:51 +02:00
Enrico Candino
eca219cb48 kubelet controllers cleanup (#376) 2025-06-20 12:08:28 +02:00
Hussein Galal
d1f88c32b3 Ephemeral containers fix (#371)
* Update virtual kubelet and k8s to 1.31.4

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix ephemeral containers in provider

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix linters

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix comments

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-06-20 12:52:45 +03:00
Enrico Candino
b8f0e77a71 fix for empty namespace (#375) 2025-06-19 14:28:27 +02:00
jpgouin
08ba3944e0 [DOC] add how-to create virtual clusters (#373)
* [DOC] add how-to create virtual clusters
2025-06-06 12:39:51 +02:00
Enrico Candino
09e8a180de charts-0.3.3-r1 (#370) 2025-06-04 09:50:48 +02:00
jpgouin
87032c8195 [DOC] add how to choose mode (#369)
* [DOC] add how to choose between `shared` and `virtual` mode doc
2025-06-03 11:12:12 +02:00
jpgouin
78e0c307b8 add workload exposition howto doc (#366)
* [DOC] add how to expose workloads outside the virtual cluster
2025-06-03 09:14:20 +02:00
Enrico Candino
5758b880a5 Added k3kcli cluster list and k3kcli policy list commands (#368)
* list commands

* go mod tidy

* moved logic to separate file, small refactor
2025-05-30 15:35:31 +02:00
Enrico Candino
2655d792cc Update allowedModeTypes field to allowedMode (#367)
* change allowedModeTypes to allowedMode

* added shortname "vcp" and additional mode column
2025-05-29 14:53:58 +02:00
Enrico Candino
93e1c85468 VirtualClusterPolicy documentation (#364)
* initial docs

* update for change/cleanup

* update crd docs config

* updated links

* crd docs updates
2025-05-29 11:18:26 +02:00
Enrico Candino
8fbe4b93e8 Change VirtualClusterPolicy scope to Cluster (#358)
* rename clusterset to policy

* fixes

* rename clusterset to policy

* wip

* go mod

* cluster scoped

* gomod

* gomod

* fix lint

* wip

* moved logic to vcp controller

* update for clusters

* small fixes

* update cli

* fix docs, updated spec

* fix cleanup

* added missing owns for limitranges
2025-05-29 10:45:48 +02:00
jpgouin
2515d19187 move howtos in docs folder (#362) 2025-05-27 14:11:13 +02:00
jpgouin
2b1448ffb8 add air-gap support (#359)
* add airgap support
* add airgap howto guide
2025-05-27 10:13:07 +02:00
Enrico Candino
fdb5bb9c19 update version in readme (#357) 2025-05-19 20:27:23 +02:00
Hussein Galal
45fdbf9363 Fix DNS options and allow custom dnsConfig (#354)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-05-15 15:58:36 +03:00
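The custom `dnsConfig` allowed here corresponds to the core `PodDNSConfig` type; an illustrative example with placeholder values:

```go
package dns

import corev1 "k8s.io/api/core/v1"

// customDNS returns pod DNS settings of the kind this fix lets users
// supply; the nameserver, search domains, and options are examples.
func customDNS() *corev1.PodDNSConfig {
	ndots := "5"
	return &corev1.PodDNSConfig{
		Nameservers: []string{"10.43.0.10"},
		Searches:    []string{"default.svc.cluster.local", "svc.cluster.local"},
		Options:     []corev1.PodDNSConfigOption{{Name: "ndots", Value: &ndots}},
	}
}
```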
Enrico Candino
3590b48d91 remove --devel reference (#353) 2025-05-15 12:07:08 +02:00
Enrico Candino
cca3d0c309 Rename ClusterSet to VirtualClusterPolicy (#349)
* rename clusterset to policy

* fixes
2025-05-15 12:04:47 +02:00
Hussein Galal
f228c4536c Fix annotation update (#335)
* Fix annotation update

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix annotation update

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix annotation update

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-05-12 13:02:05 +03:00
Hussein Galal
37fe4493e7 Fix HA init server scaling (#333)
* Fix HA init server scaling

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* increase timeout in e2e test

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-05-12 11:51:35 +03:00
Enrico Candino
6a22f6f704 fix build-crds, bump Go to 1.24.2, bump golangci-lint (#344) 2025-05-06 17:24:35 +02:00
Enrico Candino
96a4341dfb Services updates (LoadBalancerConfig and NodePortConfig) (#329)
* updates to services

- added loadBalancerConfig
- removed service-port
- added logic to not expose services

* Refactor cluster tests to improve readability and maintainability

- Simplified service port expectations by directly accessing elements instead of using `ContainElement`.
- Enhanced clarity of test assertions for `k3s-server-port` and `k3s-etcd-port` attributes.
- Removed redundant code for checking service ports.

* fix ports for ingress expose, update kubeconfig generate
2025-04-22 11:52:18 +02:00
Hussein Galal
510ab4bb8a Add extra env for servers/agents (#324)
* Add extra env for servers/agents

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* cli docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix container env

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-04-21 12:25:51 +02:00
Hussein Galal
9d96ee1e9c Display error if agents flag is provided in shared mode (#330)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-04-18 17:48:43 +02:00
Enrico Candino
7c424821ca Update Chart.yaml (#332) 2025-04-18 12:43:14 +02:00
jpgouin
a2f5fd7592 Merge pull request #328 from takushi-35/ehemeral-docs
[DOC] fix: Incorrect creation method when using Ephemeral Storage in advanced-usage.md
2025-04-11 14:27:40 +02:00
takushi-35
c8df86b83b fix: Correcting incorrect procedures in advanced-usage.md 2025-04-10 13:27:00 +09:00
Hussein Galal
d41d2b8c31 Fix update bug in ensureObject (#325)
* Fix update bug in ensureObjects

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix log msg

Co-authored-by: Enrico Candino <enrico.candino@gmail.com>

* Fix import

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
Co-authored-by: Enrico Candino <enrico.candino@gmail.com>
2025-04-09 17:25:48 +02:00
Enrico Candino
7cb2399b89 Update and fix to k3kcli for new ClusterSet integration (#321)
* added clusterset flag to cluster creation and displayname to clusterset creation

* updated cli docs
2025-04-04 13:22:55 +02:00
Enrico Candino
90568f24b1 Added ClusterSet as singleton (#316)
* added ClusterSet as singleton

* fix tests
2025-04-03 16:26:25 +02:00
Hussein Galal
0843a9e313 Initial support for ResourceQuotas in clustersets (#308)
* Add ResourceQuota to clusterset

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Generate docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add a default limitRange for ClusterSets

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix linting

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add test for clusterset limitRange

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add server and worker limits

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make charts

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add default limits and fixes to resourcesquota

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make build-crds

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make build-crds

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make spec as pointer

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* delete default limit

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Update tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Update tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* return on delete in limitrange

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-04-03 12:30:48 +02:00
Enrico Candino
b58578788c Add clusterset commands (#319)
* added clusterset create command, small refactor with appcontext

* added clusterset delete

* updated docs
2025-04-03 11:07:32 +02:00
Enrico Candino
c4cc1e69cd requeue if server not ready (#318) 2025-04-03 10:45:18 +02:00
Enrico Candino
bd947c0fcb Create dedicated namespace for new clusters (#314)
* create dedicated namespace for new clusters

* porcelain test

* use --exit-code instead of test and shell for escaping issue

* update go.mod
2025-03-26 14:53:41 +01:00
Hussein Galal
b0b61f8d8e Fix delete cli (#281)
* Fix delete cli

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix delete cli

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* check if object has a controller reference before removing

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* move the update to the if condition

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* move the update to the if condition

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-03-24 13:44:00 +02:00
Enrico Candino
3281d54c6c fix typo (#300) 2025-03-24 10:48:42 +01:00
Hussein Galal
853b0a7e05 Chart update for 0.3.1 (#309)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-03-21 01:59:53 +02:00
Rossella Sblendido
28b15d2e92 Merge pull request #299 from rancher/doc-k3d
Add documentation to install k3k on k3d
2025-03-10 11:07:02 +01:00
rossella
cad59c0494 Moved how to to development.md 2025-03-07 17:38:22 +01:00
Enrico Candino
d0810af17c Fix kubeconfig load from multiple configuration files (#301)
* fix kubeconfig load with the standard kubectl approach

* update cli docs
2025-03-07 15:11:00 +01:00
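The standard kubectl approach means honoring `$KUBECONFIG` (which may list several files, merged in order) with a fallback to `~/.kube/config`, which client-go's default loading rules implement. A minimal sketch:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Default loading rules merge every file listed in $KUBECONFIG and
	// fall back to ~/.kube/config, matching kubectl's behavior.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		rules, &clientcmd.ConfigOverrides{},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Host)
}
```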
Enrico Candino
2b7202e676 Added NetworkPolicy for Cluster isolation (#290)
* added cluster NetworkPolicy

* wip tests

* remove focus

* added networking test

* test refactoring

* unfocus

* revert labels

* added async creation of clusters, and namespace deletion

* add unfocus validation
2025-03-07 14:36:49 +01:00
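A NetworkPolicy for cluster isolation is typically a namespace-wide policy that only admits same-namespace traffic. An illustrative sketch; the selectors and name are not the exact rules from this PR:

```go
package netpol

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isolationPolicy selects every pod in the cluster's namespace and only
// allows ingress from pods in that same namespace.
func isolationPolicy(namespace string) *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "cluster-isolation", Namespace: namespace},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{}, // all pods in the namespace
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{}, // same-namespace traffic only
				}},
			}},
		},
	}
}
```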
rossella
4975b0b799 Add documentation to install k3k on k3d
k3d is a valid local option for testing k3k. Adding documentation
on how to install k3k on top of k3d.
2025-03-04 16:48:21 +01:00
jpgouin
90d17cd6dd Merge pull request #288 from jp-gouin/fix-cli 2025-03-04 13:48:34 +01:00
Justin J. Janes
3e5e9c7965 adding a new issue template for bug reports (#258)
* adding a new issue template for bug reports

* adding extra helper commands for logs from k3k
2025-03-04 12:33:34 +02:00
jpgouin
1d027909ee cli - fix bug with Kubeconfig value resolution 2025-03-04 10:23:54 +00:00
Enrico Candino
6105402bf2 charts-0.3.1-r1 (#283) 2025-03-03 17:14:13 +01:00
jpgouin
6031eeb09b Merge pull request #273 from jp-gouin/readme-1
Update quickstart pre-requisites and generate automatically cli docs
2025-03-03 14:44:11 +01:00
jpgouin
2f582a473a update cli docs 2025-03-03 13:31:40 +00:00
jpgouin
26d3d29ba1 remove reference to persistence type 2025-03-03 13:21:07 +00:00
jpgouin
e97a3f5966 remove reference to persistence type 2025-03-03 13:21:07 +00:00
jpgouin
07b9cdcc86 update readme and advanced-usage doc 2025-03-03 13:21:07 +00:00
jpgouin
a3cbe42782 update doc and genclidoc 2025-03-03 13:21:07 +00:00
jpgouin
3cf8c0a744 default the kubeconfig flag to $HOME/.kube/config 2025-03-03 13:21:07 +00:00
jpgouin
ddc367516b fix test 2025-03-03 13:21:07 +00:00
jpgouin
7b83b9fd36 fix test 2025-03-03 13:21:07 +00:00
jpgouin
bf8fdd9071 fix lint 2025-03-03 13:21:07 +00:00
jpgouin
4ca5203df1 Flatten the cli structure, add docs generation in the Makefile and remove crds/Makefile 2025-03-03 13:21:07 +00:00
Enrico Candino
5e8bc0d3cd Update CRDs documentation (#279)
* complete CRD documentation

* fix missing rebuild of CRDs
2025-03-03 11:47:53 +01:00
Enrico Candino
430e18bf30 Added wsl linter, and fixed related issues (#275)
* added wsl linter

* fixed issues
2025-02-27 10:59:02 +01:00
Hussein Galal
ec0e5a4a87 Support for multi node in shared mode (#269)
* Support for multi node in shared mode

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixing typo

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-02-26 22:55:31 +02:00
Hussein Galal
29438121ba Fix server/agent extra args in the cli (#272)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-02-26 11:32:25 +02:00
Hussein Galal
c2cde0c9ba Fix the default CIDRs for both modes (#271)
* Fix the default CIDRs for both modes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix service/cluster cidr

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-02-26 11:32:17 +02:00
jpgouin
1be43e0564 Merge pull request #270 from jp-gouin/readme-1
Update README.md
2025-02-25 11:27:57 +01:00
jpgouin
8913772240 Update README.md
Co-authored-by: Enrico Candino <enrico.candino@gmail.com>
2025-02-25 11:07:47 +01:00
jpgouin
dbbe03ca96 Update README.md
Co-authored-by: Enrico Candino <enrico.candino@gmail.com>
2025-02-25 10:56:45 +01:00
jpgouin
e52a682cca Update README.md
add explanation to create a cluster from a host cluster accessed through Rancher
2025-02-24 14:22:56 +01:00
Enrico Candino
26a0bb6583 Added ServiceCIDR lookup, and changed default (#263)
* added serviceCIDR lookup

* fix log

* fix comment

* swap serviceCIDR lookup
2025-02-24 12:08:58 +01:00
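One widely used lookup trick (not necessarily the one adopted here) is to ask the API server to create a Service with a deliberately invalid ClusterIP and parse the valid range out of the rejection message. A sketch under that assumption:

```go
package cidr

import (
	"context"
	"regexp"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// lookupServiceCIDR probes for the service CIDR: the API server's error
// for an out-of-range ClusterIP includes the valid range.
func lookupServiceCIDR(ctx context.Context, c kubernetes.Interface, ns string) string {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "cidr-probe", Namespace: ns},
		Spec: corev1.ServiceSpec{
			ClusterIP: "1.1.1.1", // almost certainly outside the range
			Ports:     []corev1.ServicePort{{Port: 443}},
		},
	}
	_, err := c.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
	if err == nil {
		return "" // unexpectedly valid; cleanup omitted in this sketch
	}
	if m := regexp.MustCompile(`valid IPs is (\S+)`).FindStringSubmatch(err.Error()); m != nil {
		return m[1]
	}
	return ""
}
```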
Enrico Candino
dee20455ee added multiarch support (#262) 2025-02-21 14:36:17 +01:00
Enrico Candino
f5c9a4b3a1 fix duplicated envvars, added tests (#260) 2025-02-19 16:27:46 +01:00
Enrico Candino
65fe7a678f fix k3s version metadata (#256) 2025-02-18 16:04:11 +01:00
Hussein Galal
8811ba74de Fix cluster spec update (#257)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-02-18 15:40:49 +02:00
Enrico Candino
127b5fc848 Remove dapper (#254)
* wip drop dapper

* added tests, validate

* fix kubebuilder assets

* debug

* fix maybe

* export global

* export global 2

* fix goreleaser

* dev doc section improved

* crd and docs

* drop dapper

* drop unused tmpl

* added help

* typos, and added `build-crds` target to default
2025-02-18 11:59:20 +01:00
Hussein Galal
d95e3fd33a Chart update for v0.3.0 (#255)
* Chart update for v0.3.0

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Update chart to 0.3.0

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Update chart to 0.3.0-r1

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-02-17 15:06:22 +02:00
Hussein Galal
1f4b3c4835 Assign pod's hostname if not assigned (#253)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-02-17 12:47:49 +02:00
Enrico Candino
0056e4a3f7 add check for number of arguments (#252) 2025-02-17 10:37:42 +01:00
Hussein Galal
8bc5519db0 Update chart (#251)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-02-14 15:28:20 +02:00
Hussein Galal
fa553d25d4 Default to dynamic persistence and fix HA restarts (#250)
* Default to dynamic persistence and fix HA restarts

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-02-14 14:26:10 +02:00
Enrico Candino
51a8fd8a8d Fix and enhancements to IngressExposeConfig (annotations) (#248)
* ingress fixes

* added annotations to IngressConfig

* sync annotations with CR

* removed hosts

* small doc for ingress
2025-02-14 12:38:42 +01:00
Enrico Candino
fdb133ad4a Added Ports to NodePortConfig and expose fixes (#247)
* fix NodePort service update

* updated crd docs
2025-02-11 14:47:01 +01:00
Enrico Candino
0aa60b7f3a Update of some docs (#231)
* updated README, docs folder

* updated architecture doc

* shared and virtual architecture images

* advanced usage

* added crd-ref-docs tool for CRDs documentation

* small fixes

* requested changes

* full example in advanced usage

* removed security part

* Apply suggestions from code review

Co-authored-by: jpgouin <jp-gouin@hotmail.fr>

---------

Co-authored-by: jpgouin <jp-gouin@hotmail.fr>
2025-02-10 16:21:33 +01:00
Enrico Candino
8d1bda4733 fix panic for nil Expose (#240) 2025-02-10 12:44:53 +01:00
Hussein Galal
f23b538f11 Fix metadata information for the virtual pods (#228)
* Fix metadata information for the virtual pods

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-02-10 10:24:56 +02:00
Hussein Galal
ac132a5840 Fixing etcd pod controller (#233)
* Fixing etcd pod controller

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix logic in etcd pod controller

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-02-06 22:36:05 +02:00
Enrico Candino
2f44b4068a Small cli refactor (cluster name as arg, default kubeconfig path) (#230)
* cli refactor

* restored name
2025-02-05 23:54:40 +01:00
Enrico Candino
48efbe575e Fix Webhook certificate recreate (#226)
* wip cert webhook

* fix lint

* cleanup and refactor

* fix go.mod

* removed logs

* renamed

* small simplification

* improved logging

* improved logging

* some tests for config data

* fix logs

* moved interface
2025-02-05 21:55:34 +01:00
jpgouin
3df5a5b780 Merge pull request #213 from jp-gouin/fix-ingress
fix ingress creation, use the ingress host in Kubeconfig when enabled
2025-02-04 09:31:49 +01:00
Enrico Candino
2a7541cdca Fix missing updates of server certificates (#219)
* merge

* wip test

* added test for restart

* tests reorg

* simplified tests
2025-02-04 09:17:56 +01:00
Enrico Candino
997216f4bb chart-releaser action (#222) 2025-02-04 09:17:27 +01:00
Enrico Candino
bc3f906280 Fix status update, updated k3s default version, updated CRDs (#218)
* fix status update

* fix schema and default image

* removed retry in controller

* removed fmt
2025-01-30 12:56:42 +01:00
Hussein Galal
19efdc81c3 Add initial support for daemonsets (#217)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-01-30 00:59:25 +02:00
Enrico Candino
54be0ba9d8 Logs and organization cleanup (#208)
* logs and organization cleanup

* getting log from context

* reused log var
2025-01-29 12:03:33 +01:00
Enrico Candino
72b5a98dff Fix typos and adding spellcheck linter (#215)
* adding spellcheck linter

* fix typos
2025-01-28 17:47:45 +01:00
jpgouin
2019decc78 check value of cluster.Spec.Expose.Ingress 2025-01-28 14:34:50 +00:00
jpgouin
ebdeb3aa58 fix merge 2025-01-28 14:27:50 +00:00
jpgouin
c88890e502 Merge branch 'main' into fix-ingress 2025-01-28 15:11:09 +01:00
Hussein Galal
86d543b4be Fix webhook restarts in k3k-kubelet (#214)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-01-28 01:52:55 +02:00
Enrico Candino
44045c5592 Added test (virtual cluster creation, with pod) and small kubeconfig refactor (#211)
* added virtual cluster and pod test

* moved ClusterCreate

* match patch k8s host version
2025-01-24 22:26:01 +01:00
jpgouin
e6db5a34c8 fix ingress creation, use the ingress host in Kubeconfig when enabled 2025-01-24 18:48:31 +00:00
jpgouin
8f24151b3f Merge pull request #209 from jp-gouin/cli
add flag to override kubernetes server value in the generated kubeconfig
2025-01-24 10:46:26 +01:00
Hussein Galal
8b0383f35e Fix chart release action (#210)
* Fix chart release action

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix chart release action

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-01-23 21:02:34 +02:00
jpgouin
ec93371b71 check err 2025-01-23 16:29:43 +00:00
jpgouin
5721c108b6 add flag to override kubernetes server value in the generated kubeconfig 2025-01-23 16:16:45 +00:00
Enrico Candino
9e52c375a0 bump urfave/cli to v2 (#205) 2025-01-23 10:14:01 +01:00
Hussein Galal
ca8f30fd9e upgrade chart (#207)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-01-23 02:30:12 +02:00
Hussein Galal
931c7c5fcb Fix secret tokens and DNS translation (#200)
* Include init containers in token translation

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix kubernetes.default service DNS translation

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add skip test var to dapper

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add kubelet version and image pull policy to the shared agent

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-01-23 01:55:05 +02:00
Enrico Candino
fd6ed8184f removed antiaffinity (#199) 2025-01-22 18:34:30 +01:00
Enrico Candino
c285004944 fix release tag (#201) 2025-01-22 15:18:10 +01:00
Enrico Candino
b0aa22b2f4 Simplify Cluster spec (#193)
* removed some required parameters, adding defaults

* add hostVersion in Status field

* fix tests
2025-01-21 21:19:44 +01:00
Enrico Candino
3f49593f96 Add Cluster creation test (#192)
* added k3kcli to path

* test create cluster

* updated ptr

* added cluster creation test
2025-01-21 17:53:42 +01:00
Enrico Candino
0b3a5f250e Added golangci-lint action (#197)
* added golangci-lint action

* linters

* cleanup linters

* fix error, increase timeout

* removed unnecessary call to Stringer
2025-01-21 11:30:57 +01:00
Enrico Candino
e7671134d2 fixed missing version (#196) 2025-01-21 10:52:27 +01:00
Enrico Candino
f9b3d62413 bump go1.23 (#198) 2025-01-21 10:50:23 +01:00
Enrico Candino
d4368da9a0 E2E tests scaffolding (#189)
* testcontainers

add build script

dropped namespace from chart

upload logs

removed old tests

* show go.mod diffs
2025-01-16 20:40:53 +01:00
Hussein Galal
c93cdd0333 Add retry for k3k-kubelet provider functions (#188)
* Add retry for k3k kubelet provider functions

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add retry for k3k kubelet provider function

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* go mod tidy

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-01-16 21:34:28 +02:00
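The retry added here follows the standard client-go conflict-retry pattern; a minimal sketch under that assumption (the pod-label update is a placeholder, not the actual provider function):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updatePodLabel re-reads the object and re-applies the mutation whenever
// the API server reports a conflict, which is the usual way to make
// provider-side updates robust against concurrent writers.
func updatePodLabel(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["example.io/synced"] = "true" // placeholder mutation
		_, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
		return err
	})
}
```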
Enrico Candino
958d515a59 removed Namespace creation from charts, edited default (#190) 2025-01-16 18:34:17 +01:00
Hussein Galal
9d0c907df2 Fix downward api for status fields in k3k-kubelet (#185)
* Fix downward api for status fields in k3k-kubelet

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-01-16 02:40:17 +02:00
Enrico Candino
1691d48875 Fix for UpdatePod (#187)
* fix for UpdatePod

* removed print
2025-01-15 18:50:21 +01:00
Enrico Candino
960afe9504 fix error for existing webhook (#186) 2025-01-15 18:43:12 +01:00
Enrico Candino
349f54d627 fix for default priorityClasses (#182) 2025-01-14 20:30:16 +01:00
Hussein Galal
ccaa09fa4a Add PVC syncing support (#179)
* Add pvc syncing support

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-01-14 20:57:04 +02:00
Enrico Candino
f9ddec53b7 Added priorityClass to Clusters and ClusterSets (#180)
* added priorityClass to Clusters and ClusterSets

* fixed comment
2025-01-14 11:05:48 +01:00
Enrico Candino
5892121dbe Fix action event check on wrong field event_name (#177)
The event name should be checked against the `event_name` field.
2025-01-09 11:28:43 +01:00
Enrico Candino
524dc69b98 Fix for missing permission (#176) 2025-01-09 10:25:38 +01:00
Enrico Candino
4fdce5b1aa Test release workflows (#173)
* goreleaser action

* removed old release

* fix gomega version in tests

* updated build workflow

* fix for empty var
2025-01-09 10:10:53 +01:00
Enrico Candino
9fc4a57fc2 Fix go.mod (#171)
* check go mod

* fix go.mod
2025-01-08 10:02:23 +01:00
Enrico Candino
ee00b08927 Add real node resources to virtual node (#169)
* add real nodes capacity to virtual node

* distinguish capacity from allocatable node resources
2025-01-02 22:22:18 +01:00
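A sketch of the capacity/allocatable distinction this commit introduces, assuming the virtual node simply aggregates the host nodes' resources; the aggregation strategy and the function name are assumptions.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// configureNode mirrors real node resources onto the virtual node, keeping
// capacity (total hardware) distinct from allocatable (what remains for
// pods after system reservations).
func configureNode(virtual *corev1.Node, hosts []corev1.Node) {
	capacity := corev1.ResourceList{}
	allocatable := corev1.ResourceList{}

	for _, n := range hosts {
		for name, qty := range n.Status.Capacity {
			total := capacity[name]
			total.Add(qty)
			capacity[name] = total
		}
		for name, qty := range n.Status.Allocatable {
			total := allocatable[name]
			total.Add(qty)
			allocatable[name] = total
		}
	}

	virtual.Status.Capacity = capacity
	virtual.Status.Allocatable = allocatable
}
```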
Enrico Candino
7fdd48d577 Implementation of GetStatsSummary and GetMetricsResource for Virtual Kubelet (#163)
* implemented GetStatsSummary and GetMetricsResource for Virtual Kubelet

* fixed ClusterRole for node proxy

* limit the clusterrole with get and list

* remove unused Metrics client interface
2024-12-27 11:41:40 +01:00
jpgouin
70a098df4c allow exec into pod and fetching log in shared mode (#160) 2024-12-17 11:41:17 +01:00
Hussein Galal
6739aa0382 Initial networking support for shared mode (#154)
* Initial networking support for shared mode

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix deletion logic and controller reference

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* golintci

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-12-10 23:22:55 +02:00
Enrico Candino
acd9d96732 fix timeout (#157) 2024-12-05 19:34:12 +01:00
Enrico Candino
72b2a5f1d1 Added podSecurityAdmissionLevel to ClusterSet (#145)
* added Namespace reconciliation for PodSecurity labels

* added Namespace Watch

* added tests, and example

* bump deps
2024-12-04 21:38:02 +01:00
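Pod Security Admission levels are applied to namespaces through well-known labels; a sketch of what the Namespace reconciliation could set (which of the enforce/warn/audit labels the controller actually manages is an assumption):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// applyPodSecurityLevel projects a podSecurityAdmissionLevel onto the
// cluster namespace via the standard Pod Security Admission labels.
func applyPodSecurityLevel(ns *corev1.Namespace, level string) {
	if ns.Labels == nil {
		ns.Labels = map[string]string{}
	}
	ns.Labels["pod-security.kubernetes.io/enforce"] = level // e.g. "baseline"
	ns.Labels["pod-security.kubernetes.io/warn"] = level
}
```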
Enrico Candino
8e7d0f43a9 changed cluster creation backoff (#156) 2024-12-04 20:44:43 +01:00
Enrico Candino
a235b85362 Bump testing dependencies (#155)
* fixed testing deps, added doc

* added manual dispatch
2024-12-04 20:31:33 +01:00
Enrico Candino
6d716e43b2 Bump deps and enable tests on PRs (#152)
* enable tests on PRs

* bump deps
2024-11-28 20:13:57 +01:00
Enrico Candino
6db5247ff7 fix netpol reconciliation (#150) 2024-11-28 01:44:37 +01:00
Enrico Candino
c561b033df Added allowedNodeTypes to ClusterSet, and fixed NetworkPolicy reconciliation (#144)
* updated CRDs

* added Mode to ClusterSet, and enum to CRD

* fix typos

* fix mode type in cli

* deletion of second clusterset in same namespace

* removed focused test, added clusterset example

* renamed modes

* added allowedNodeTypes, fixed samples

* fixed network policy reconciliation
2024-11-27 23:00:39 +02:00
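A sketch of the spec fields named in this commit; the Go definitions are assumptions, though the shared/virtual mode names are used elsewhere in this changeset.

```go
package sketch

// ClusterMode is the agent mode a cluster runs in.
// +kubebuilder:validation:Enum=shared;virtual
type ClusterMode string

// ClusterSetSpec sketch: AllowedNodeTypes restricts which modes member
// clusters may use, per the commit message above.
type ClusterSetSpec struct {
	AllowedNodeTypes []ClusterMode `json:"allowedNodeTypes,omitempty"`
}
```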
Enrico Candino
37573d36a4 Added envtest integration tests for ClusterSet (#143)
* init tests

* added clusterset tests

* added github action

* updated Dapper with envtest bins
2024-11-11 18:13:20 +02:00
Hussein Galal
bc25c1c70a Serviceaccount token synchronization (#139)
* Serviceaccount token sync

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixing typo

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-11-08 00:11:56 +02:00
Enrico Candino
c9599963d1 added node selector to workloads (#138) 2024-11-06 21:50:51 +02:00
Hussein Galal
84f921641b Token random generation (#136)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-11-01 21:27:03 +02:00
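A typical crypto/rand token generator, sketched; the token length and encoding the commit actually uses are assumptions.

```go
package sketch

import (
	"crypto/rand"
	"encoding/hex"
)

// randomToken returns a hex-encoded random token. 16 bytes of entropy and
// hex encoding are illustrative choices, not confirmed by the diff.
func randomToken() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}
```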
Michael Bolot
26a7fa023f Adding basic volume syncing (#137)
* Adding basic volume syncing

Adds syncing for basic volume types (secret/configmap/projected secret
and configmap). Also changes the virtual kubelet to use a cache from
controller-runtime rather than a client for some operations.
2024-10-31 11:57:59 -05:00
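On the client/cache distinction mentioned here: a client obtained from a controller-runtime manager serves reads from a shared informer cache rather than round-tripping to the API server on every Get/List. A minimal sketch:

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// getConfigMap reads through a cache-backed client (e.g. mgr.GetClient()
// from a controller-runtime manager), so repeated reads are served from
// the shared informer cache.
func getConfigMap(ctx context.Context, c client.Client, ns, name string) (*corev1.ConfigMap, error) {
	var cm corev1.ConfigMap
	key := types.NamespacedName{Namespace: ns, Name: name}
	if err := c.Get(ctx, key, &cm); err != nil {
		return nil, err
	}
	return &cm, nil
}
```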
Hussein Galal
7599d6946f Fix virtual node types (#135)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-10-24 23:56:17 +03:00
Hussein Galal
f04902f0a2 Add structured logging via zap (#133)
* Add structured logging properly

use a centralized logger wrapper to work with controller and virt-kubelet

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix some log messages

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-10-22 01:04:21 +03:00
Hussein Galal
d19f0f9ca6 virtual-kubelet controller integration (#130)
* Virtual kubelet controller integration

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add k3k-kubelet image to the release workflow

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add k3k-kubelet image to the release workflow

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix build/release workflow

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Remove pkg directory in k3k-kubelet

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* rename Type to Config

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Move the kubelet and config outside of pkg

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix comments

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix naming throughout the package

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix comments

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* more fixes to naming

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-10-21 22:54:08 +03:00
Hussein Galal
bf1fe2a71c Adding Networkpolicy to ClusterSets (#125)
* Adding cluster set types

Adds types for cluster sets, which allow constraining a few elements of
clusters, including overall resource usage and which nodes they can use.

* Add networkpolicy to clustersets

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix comments

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix linting issues

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixing node controller logic and nit fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* more fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix main cli

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Comment the resource quota for clustersets

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
Co-authored-by: Michael Bolot <michael.bolot@suse.com>
2024-10-16 00:27:42 +03:00
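A sketch of a namespace-scoped policy of the kind described here, allowing only same-namespace ingress; the actual rules the controller creates are assumptions, as is the policy name.

```go
package sketch

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clusterSetNetworkPolicy restricts ingress for all pods in the ClusterSet
// namespace to traffic originating from the same namespace.
func clusterSetNetworkPolicy(namespace string) *networkingv1.NetworkPolicy {
	return &networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "clusterset-policy", Namespace: namespace},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{}, // all pods in the namespace
			PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress},
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{}, // same-namespace peers only
				}},
			}},
		},
	}
}
```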
Hussein Galal
dbe6767aff Adding experimental disclaimer (#129)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-10-11 00:33:27 +03:00
Michael Bolot
ab33b3cb3f Adding poc for virtual kubelet (#112)
Adds a POC for running pods in the host cluster powered by virtual kubelet.
2024-10-01 00:33:10 +03:00
Michael Bolot
56da25941f Fixing bugs with namespaced clusters (#111)
Fixes a few bugs with namespaced clusters, specifically:
- The agent config still used a hardcoded value for the config secret
  mount
- The kubeconfig generation still used the old "cluster namespace" as
  the destination
In addition, changes the headless service name to not have two "-".
2024-09-06 02:15:36 +03:00
Michael Bolot
9faab4f82d Changing the cluster to be namespaced (#110)
* Changing the cluster to be namespaced

Changes the cluster type to be namespaced (and changes the various
controllers to work with this new feature). Also adds crd generation and
docs to the core cluster type.

* CI fix
2024-09-05 22:50:11 +03:00
Hussein Galal
bf72d39280 Use gh tool (#106)
* use gh tool instead of third party gh action

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix checksum

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add GH_TOKEN env

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-06-21 23:56:43 +03:00
Hussein Galal
3879912b57 Move to Github Action (#105)
* Move to Github Action

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Move to Github Action

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix code generation

* Add release and chart workflows

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add release and chart workflows

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add release and chart workflows

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add release and chart workflows

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* test release and charts

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* test release and charts

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* test release and charts

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* test release and charts

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* test release and charts

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix GHA migration

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix GHA migration

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-05-21 00:00:47 +03:00
Hussein Galal
0d6bf4922a Fix code generation (#104)
* Fix code generation

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update go

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-05-14 01:05:13 +03:00
Cuong Nguyen Duc
57c24f6f3c Correct file name (#95) 2024-03-18 08:46:59 +02:00
Hussein Galal
fe23607b71 Update chart to 0.1.4-r1 (#98)
* Update chart to 0.1.4-r1

* Update image to v0.2.1
2024-03-15 02:10:33 +02:00
Hussein Galal
caa0537d5e Renaming binaries and fix typo (#97) 2024-03-15 01:39:18 +02:00
Hussein Galal
0cad65e4fe Fix for readiness probe (#96)
* Fix for readiness probe

* update code generator code
2024-03-15 01:04:52 +02:00
Hussein Galal
cc914cf870 Update chart (#91)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-02-15 23:59:59 +02:00
Hussein Galal
ba35d12124 Cluster spec update (#90)
* Remove unused functions

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* enable cluster server and agent update

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-25 06:37:59 +02:00
Hussein Galal
6fc22df6bc Cluster type validations (#89)
* Cluster type validations

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Cluster type validations

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-12 23:09:30 +02:00
Hussein Galal
c92f722122 Add delete subcommand (#88)
* Add delete subcommand

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add delete subcommand

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-11 02:36:12 +02:00
Hussein Galal
5e141fe98e Add kubeconfig subcommand (#87)
* Add kubeconfig subcommand

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add kubeconfig subcommand

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add kubeconfig subcommand

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add kubeconfig subcommand

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-11 00:57:46 +02:00
Hussein Galal
4b2308e709 Update chart to v0.1.2-r1 (#82)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-06 07:38:54 +02:00
Hussein Galal
3cdcb04e1a Add validation for system cluster name for both controller and cli (#81)
* Add validation for system cluster name for both controller and cli

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add validation for system cluster name for both controller and cli

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add validation for system cluster name for both controller and cli

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-06 02:15:20 +02:00
Hussein Galal
fedfa109b5 Fix append to empty slice (#80)
* Fix append to empty slice

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix initialization of addresses slice

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-04 01:49:48 +02:00
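The slice bug fixed here is a common Go pitfall, illustrated below with a hypothetical addresses slice: `make` with a nonzero length pre-fills the slice with zero values, so subsequent `append` calls leave empty leading entries.

```go
package main

import "fmt"

func main() {
	ips := []string{"10.0.0.1", "10.0.0.2"}

	// Buggy: length len(ips) pre-fills the slice with empty strings,
	// and append adds the real values after them.
	addresses := make([]string, len(ips))
	for _, ip := range ips {
		addresses = append(addresses, ip)
	}
	fmt.Printf("%q\n", addresses) // ["" "" "10.0.0.1" "10.0.0.2"]

	// Fixed: zero length with preallocated capacity.
	fixed := make([]string, 0, len(ips))
	for _, ip := range ips {
		fixed = append(fixed, ip)
	}
	fmt.Printf("%q\n", fixed) // ["10.0.0.1" "10.0.0.2"]
}
```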
Hussein Galal
99d043f2ee fix chart releases (#79)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-03 02:55:09 +02:00
Hussein Galal
57ed675a7f fix chart releases (#78)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-03 02:49:05 +02:00
Hussein Galal
7c9060c394 fix chart release (#77)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-03 02:37:08 +02:00
Hussein Galal
a104aacf5f Add github config mail and username for pushing k3k release (#76)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-03 02:24:46 +02:00
Hussein Galal
6346b06eb3 Add github config mail and username for pushing k3k release (#75)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-03 02:08:10 +02:00
Hussein Galal
6fd745f268 Fix chart release (#74)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-03 01:53:26 +02:00
Hussein Galal
1258fb6d58 Upgrade chart and fix manifest (#73)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-01-03 00:03:08 +02:00
238 changed files with 58948 additions and 3435 deletions

.cr.yaml Normal file

@@ -0,0 +1,3 @@
release-name-template: chart-{{ .Version }}
make-release-latest: false
skip-existing: true


@@ -1,138 +0,0 @@
---
kind: pipeline
name: amd64
platform:
os: linux
arch: amd64
steps:
- name: build
image: rancher/dapper:v0.6.0
environment:
CROSS: 'true'
GITHUB_TOKEN:
from_secret: github_token
commands:
- dapper ci
- echo "${DRONE_TAG}-amd64" | sed -e 's/+/-/g' >.tags
volumes:
- name: docker
path: /var/run/docker.sock
when:
branch:
exclude:
- k3k-chart
- name: package-chart
image: rancher/dapper:v0.6.0
environment:
GITHUB_TOKEN:
from_secret: github_token
commands:
- dapper package-chart
volumes:
- name: docker
path: /var/run/docker.sock
when:
branch:
- k3k-chart
instance:
- drone-publish.rancher.io
- name: release-chart
image: rancher/dapper:v0.6.0
environment:
GITHUB_TOKEN:
from_secret: github_token
commands:
- dapper release-chart
volumes:
- name: docker
path: /var/run/docker.sock
when:
branch:
- k3k-chart
instance:
- drone-publish.rancher.io
- name: github_binary_release
image: plugins/github-release
settings:
api_key:
from_secret: github_token
prerelease: true
checksum:
- sha256
checksum_file: CHECKSUMsum-amd64.txt
checksum_flatten: true
files:
- "bin/*"
when:
instance:
- drone-publish.rancher.io
ref:
- refs/head/master
- refs/tags/*
event:
- tag
branch:
exclude:
- k3k-chart
- name: docker-publish
image: plugins/docker
settings:
dockerfile: package/Dockerfile
password:
from_secret: docker_password
repo: "rancher/k3k"
username:
from_secret: docker_username
when:
instance:
- drone-publish.rancher.io
ref:
- refs/head/master
- refs/tags/*
event:
- tag
branch:
exclude:
- k3k-chart
volumes:
- name: docker
host:
path: /var/run/docker.sock
---
kind: pipeline
type: docker
name: manifest
platform:
os: linux
arch: amd64
steps:
- name: push-runtime-manifest
image: plugins/manifest
settings:
username:
from_secret: docker_username
password:
from_secret: docker_password
spec: manifest-runtime.tmpl
when:
event:
- tag
instance:
- drone-publish.rancher.io
ref:
- refs/head/master
- refs/tags/*
branch:
exclude:
- k3k-chart
depends_on:
- amd64

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,41 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
<!-- Thanks for helping us to improve K3K! We welcome all bug reports. Please fill out each area of the template so we can better help you. Comments like this will be hidden when you post but you can delete them if you wish. -->
**Environmental Info:**
Host Cluster Version:
<!-- For example K3S v1.32.1+k3s1 or RKE2 v1.31.5+rke2r1 -->
Node(s) CPU architecture, OS, and Version:
<!-- Provide the output from "uname -a" on the node(s) -->
Host Cluster Configuration:
<!-- Provide some basic information on the cluster configuration. For example, "1 server, 2 agents, CNI: Flannel". -->
K3K Cluster Configuration:
<!-- Provide some basic information on the cluster configuration. For example, "3 servers, 2 agents". -->
**Describe the bug:**
<!-- A clear and concise description of what the bug is. -->
**Steps To Reproduce:**
- Created a cluster with `k3k create`:
**Expected behavior:**
<!-- A clear and concise description of what you expected to happen. -->
**Actual behavior:**
<!-- A clear and concise description of what actually happened. -->
**Additional context / logs:**
<!-- Add any other context and/or logs about the problem here. -->
<!-- kubectl logs -n k3k-system -l app.kubernetes.io/instance=k3k -->
<!-- $ kubectl logs -n <cluster-namespace> k3k-<cluster-name>-server-0 -->
<!-- $ kubectl logs -n <cluster-namespace> -l cluster=<cluster-name>,mode=shared # in shared mode -->

.github/renovate.json vendored Normal file

@@ -0,0 +1,10 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"github>rancher/renovate-config#release"
],
"baseBranchPatterns": [
"main"
],
"prHourlyLimit": 2
}

.github/workflows/build.yml vendored Normal file

@@ -0,0 +1,88 @@
name: Build
on:
push:
branches: [main]
pull_request:
types: [opened, synchronize, reopened]
permissions:
contents: read
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read
security-events: write # for github/codeql-action/upload-sarif to upload SARIF results
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Set up Go
uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Set up QEMU
uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@e435ccd777264be153ace6237001ef4d979d3a7a # v6
with:
distribution: goreleaser
version: v2
args: --clean --snapshot
env:
REPO: ${{ github.repository }}
REGISTRY: ""
- name: Run Trivy vulnerability scanner (k3kcli)
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # 0.33.1
with:
ignore-unfixed: true
severity: 'MEDIUM,HIGH,CRITICAL'
scan-type: 'fs'
scan-ref: 'dist/k3kcli_linux_amd64_v1/k3kcli'
format: 'sarif'
output: 'trivy-results-k3kcli.sarif'
- name: Upload Trivy scan results to GitHub Security tab (k3kcli)
uses: github/codeql-action/upload-sarif@38e701f46e33fb233075bf4238cb1e5d68e429e4 # v3
with:
sarif_file: trivy-results-k3kcli.sarif
category: k3kcli
- name: Run Trivy vulnerability scanner (k3k)
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # 0.33.1
with:
ignore-unfixed: true
severity: 'MEDIUM,HIGH,CRITICAL'
scan-type: 'image'
scan-ref: '${{ github.repository }}:v0.0.0-amd64'
format: 'sarif'
output: 'trivy-results-k3k.sarif'
- name: Upload Trivy scan results to GitHub Security tab (k3k)
uses: github/codeql-action/upload-sarif@38e701f46e33fb233075bf4238cb1e5d68e429e4 # v3
with:
sarif_file: trivy-results-k3k.sarif
category: k3k
- name: Run Trivy vulnerability scanner (k3k-kubelet)
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # 0.33.1
with:
ignore-unfixed: true
severity: 'MEDIUM,HIGH,CRITICAL'
scan-type: 'image'
scan-ref: '${{ github.repository }}-kubelet:v0.0.0-amd64'
format: 'sarif'
output: 'trivy-results-k3k-kubelet.sarif'
- name: Upload Trivy scan results to GitHub Security tab (k3k-kubelet)
uses: github/codeql-action/upload-sarif@38e701f46e33fb233075bf4238cb1e5d68e429e4 # v3
with:
sarif_file: trivy-results-k3k-kubelet.sarif
category: k3k-kubelet

.github/workflows/chart.yml vendored Normal file

@@ -0,0 +1,33 @@
name: Chart
on:
workflow_dispatch:
permissions:
contents: write
jobs:
chart-release:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
- name: Configure Git
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Install Helm
uses: azure/setup-helm@1a275c3b69536ee54be43f2070a358922e12c8d4 # v4
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: Run chart-releaser
uses: helm/chart-releaser-action@cae68fefc6b5f367a0275617c9f83181ba54714f # v1.7.0
with:
config: .cr.yaml
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

.github/workflows/fossa.yml vendored Normal file

@@ -0,0 +1,34 @@
name: FOSSA Scanning
on:
push:
branches: ["main", "release/**"]
workflow_dispatch:
permissions:
contents: read
id-token: write
jobs:
fossa-scanning:
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6
# The FOSSA token is shared between all repos in Rancher's GH org. It can be
# used directly and there is no need to request specific access to EIO.
- name: Read FOSSA token
uses: rancher-eio/read-vault-secrets@main
with:
secrets: |
secret/data/github/org/rancher/fossa/push token | FOSSA_API_KEY_PUSH_ONLY
- name: FOSSA scan
uses: fossas/fossa-action@main
with:
api-key: ${{ env.FOSSA_API_KEY_PUSH_ONLY }}
# Only runs the scan and do not provide/returns any results back to the
# pipeline.
run-tests: false

.github/workflows/release-delete.yml vendored Normal file

@@ -0,0 +1,61 @@
name: Release - Delete Draft
on:
workflow_dispatch:
inputs:
tag:
type: string
description: The tag of the release
permissions:
contents: write
packages: write
env:
GH_TOKEN: ${{ github.token }}
jobs:
release-delete:
runs-on: ubuntu-latest
steps:
- name: Check tag
if: inputs.tag == ''
run: echo "::error::Missing tag from input" && exit 1
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Check if release is draft
run: |
CURRENT_TAG=${{ inputs.tag }}
isDraft=$(gh release view ${CURRENT_TAG} --json isDraft --jq ".isDraft")
if [ "$isDraft" = true ]; then
echo "Release ${CURRENT_TAG} is draft"
else
echo "::error::Cannot delete non-draft release" && exit 1
fi
- name: Delete packages from Github Container Registry
run: |
CURRENT_TAG=${{ inputs.tag }}
echo "Deleting packages with tag ${CURRENT_TAG}"
JQ_QUERY=".[] | select(.metadata.container.tags[] == \"${CURRENT_TAG}\")"
for package in k3k k3k-kubelet
do
echo "Deleting ${package} image"
PACKAGE_TO_DELETE=$(gh api /user/packages/container/${package}/versions --jq "${JQ_QUERY}")
echo $PACKAGE_TO_DELETE | jq
PACKAGE_ID=$(echo $PACKAGE_TO_DELETE | jq .id)
echo "Deleting ${PACKAGE_ID}"
gh api --method DELETE /user/packages/container/${package}/versions/${PACKAGE_ID}
done
- name: Delete Github release
run: |
CURRENT_TAG=${{ inputs.tag }}
echo "Deleting release ${CURRENT_TAG}"
gh release delete ${CURRENT_TAG}

.github/workflows/release.yml vendored Normal file

@@ -0,0 +1,90 @@
name: Release
on:
push:
tags:
- "v*"
workflow_dispatch:
inputs:
commit:
type: string
description: Checkout a specific commit
permissions:
contents: write
packages: write
id-token: write
jobs:
release:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
- name: Checkout code at the specific commit
if: inputs.commit != ''
run: git checkout ${{ inputs.commit }}
- name: Set up Go
uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Set up QEMU
uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3
- name: "Read secrets"
uses: rancher-eio/read-vault-secrets@main
if: github.repository_owner == 'rancher'
with:
secrets: |
secret/data/github/repo/${{ github.repository }}/dockerhub/${{ github.repository_owner }}/credentials username | DOCKER_USERNAME ;
secret/data/github/repo/${{ github.repository }}/dockerhub/${{ github.repository_owner }}/credentials password | DOCKER_PASSWORD ;
# Manually dispatched workflows (or forks) will use ghcr.io
- name: Setup ghcr.io
if: github.event_name == 'workflow_dispatch' || github.repository_owner != 'rancher'
run: |
echo "REGISTRY=ghcr.io" >> $GITHUB_ENV
echo "DOCKER_USERNAME=${{ github.actor }}" >> $GITHUB_ENV
echo "DOCKER_PASSWORD=${{ github.token }}" >> $GITHUB_ENV
- name: Login to container registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ env.DOCKER_USERNAME }}
password: ${{ env.DOCKER_PASSWORD }}
# If the tag does not exist, the workflow was manually triggered.
# That means we are creating temporary nightly builds, with a "fake" local tag
- name: Check release tag
id: release-tag
run: |
CURRENT_TAG=$(git describe --tag --always --match="v[0-9]*")
if git show-ref --tags ${CURRENT_TAG} --quiet; then
echo "tag ${CURRENT_TAG} already exists";
else
echo "tag ${CURRENT_TAG} does not exist"
git tag ${CURRENT_TAG}
fi
echo "CURRENT_TAG=${CURRENT_TAG}" >> "$GITHUB_OUTPUT"
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@e435ccd777264be153ace6237001ef4d979d3a7a # v6
with:
distribution: goreleaser
version: v2
args: --clean
env:
GITHUB_TOKEN: ${{ github.token }}
GORELEASER_CURRENT_TAG: ${{ steps.release-tag.outputs.CURRENT_TAG }}
REGISTRY: ${{ env.REGISTRY }}
REPO: ${{ github.repository }}

.github/workflows/renovate-vault.yml vendored Normal file

@@ -0,0 +1,63 @@
name: Renovate
on:
workflow_dispatch:
inputs:
logLevel:
description: "Override default log level"
required: false
default: info
type: choice
options:
- info
- debug
overrideSchedule:
description: "Override all schedules"
required: false
default: "false"
type: choice
options:
- "false"
- "true"
configMigration:
description: "Toggle PRs for config migration"
required: false
default: "true"
type: choice
options:
- "false"
- "true"
renovateConfig:
description: "Define a custom renovate config file"
required: false
default: ".github/renovate.json"
type: string
minimumReleaseAge:
description: "Override minimumReleaseAge for a one-time run (e.g., '0 days' to disable delay)"
required: false
default: "null"
type: string
extendsPreset:
description: "Override renovate extends preset (default: 'github>rancher/renovate-config#release')."
required: false
default: "github>rancher/renovate-config#release"
type: string
schedule:
- cron: '30 4,6 * * 1-5'
permissions:
contents: read
id-token: write
jobs:
call-workflow:
uses: rancher/renovate-config/.github/workflows/renovate-vault.yml@release
with:
configMigration: ${{ inputs.configMigration || 'true' }}
logLevel: ${{ inputs.logLevel || 'info' }}
overrideSchedule: ${{ github.event.inputs.overrideSchedule == 'true' && '{''schedule'':null}' || '' }}
renovateConfig: ${{ inputs.renovateConfig || '.github/renovate.json' }}
minimumReleaseAge: ${{ inputs.minimumReleaseAge || 'null' }}
extendsPreset: ${{ inputs.extendsPreset || 'github>rancher/renovate-config#release' }}
secrets:
override-token: "${{ secrets.RENOVATE_FORK_GH_TOKEN || '' }}"


@@ -0,0 +1,158 @@
name: Conformance Tests - Shared Mode
on:
schedule:
- cron: "0 1 * * *"
workflow_dispatch:
permissions:
contents: read
jobs:
conformance:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
type:
- parallel
- serial
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Install helm
uses: azure/setup-helm@1a275c3b69536ee54be43f2070a358922e12c8d4 # v4.3.1
- name: Install hydrophone
run: go install sigs.k8s.io/hydrophone@latest
- name: Install k3d and kubectl
run: |
wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
k3d version
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
- name: Setup Kubernetes (k3d)
env:
REPO_NAME: k3k-registry
REPO_PORT: 12345
run: |
echo "127.0.0.1 ${REPO_NAME}" | sudo tee -a /etc/hosts
k3d registry create ${REPO_NAME} --port ${REPO_PORT}
k3d cluster create k3k --servers 2 \
-p "30000-30010:30000-30010@server:0" \
--registry-use k3d-${REPO_NAME}:${REPO_PORT}
kubectl cluster-info
kubectl get nodes
- name: Setup K3k
env:
REPO: k3k-registry:12345
run: |
echo "127.0.0.1 k3k-registry" | sudo tee -a /etc/hosts
make build
make package
make push
# add k3kcli to $PATH
echo "${{ github.workspace }}/bin" >> $GITHUB_PATH
VERSION=$(make version)
k3d image import ${REPO}/k3k:${VERSION} -c k3k --verbose
k3d image import ${REPO}/k3k-kubelet:${VERSION} -c k3k --verbose
make install
echo "Wait for K3k controller to be available"
kubectl wait -n k3k-system pod --for condition=Ready -l "app.kubernetes.io/name=k3k" --timeout=5m
- name: Check k3kcli
run: k3kcli -v
- name: Create virtual cluster
run: |
kubectl create namespace k3k-mycluster
cat <<EOF | kubectl apply -f -
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
name: mycluster
namespace: k3k-mycluster
spec:
mirrorHostNodes: true
tlsSANs:
- "127.0.0.1"
expose:
nodePort:
serverPort: 30001
EOF
echo "Wait for bootstrap secret to be available"
kubectl wait -n k3k-mycluster --for=create secret k3k-mycluster-bootstrap --timeout=5m
k3kcli kubeconfig generate --name mycluster
export KUBECONFIG=${{ github.workspace }}/k3k-mycluster-mycluster-kubeconfig.yaml
kubectl cluster-info
kubectl get nodes
kubectl get pods -A
- name: Run conformance tests (parallel)
if: matrix.type == 'parallel'
run: |
# Run conformance tests in parallel mode (skipping serial)
hydrophone --conformance --parallel 4 --skip='\[Serial\]' \
--kubeconfig ${{ github.workspace }}/k3k-mycluster-mycluster-kubeconfig.yaml \
--output-dir /tmp
- name: Run conformance tests (serial)
if: matrix.type == 'serial'
run: |
# Run serial conformance tests
hydrophone --focus='\[Serial\].*\[Conformance\]' \
--kubeconfig ${{ github.workspace }}/k3k-mycluster-mycluster-kubeconfig.yaml \
--output-dir /tmp
- name: Archive conformance logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: conformance-${{ matrix.type }}-logs
path: /tmp/e2e.log
- name: Job Summary
if: always()
run: |
echo '## 📊 Conformance Tests Results (${{ matrix.type }})' >> $GITHUB_STEP_SUMMARY
echo '| Passed | Failed | Pending | Skipped |' >> $GITHUB_STEP_SUMMARY
echo '|---|---|---|---|' >> $GITHUB_STEP_SUMMARY
RESULTS=$(tail -10 /tmp/e2e.log | grep -E "Passed .* Failed .* Pending .* Skipped" | cut -d '-' -f 3)
RESULTS=$(echo $RESULTS | grep -oE '[0-9]+' | xargs | sed 's/ / | /g')
echo "| $RESULTS |" >> $GITHUB_STEP_SUMMARY
# only include failed tests section if there are any
if grep -q '\[FAIL\]' /tmp/e2e.log; then
echo '' >> $GITHUB_STEP_SUMMARY
echo '### Failed Tests' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
grep '\[FAIL\]' /tmp/e2e.log >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
fi


@@ -0,0 +1,145 @@
name: Conformance Tests - Virtual Mode
on:
schedule:
- cron: "0 1 * * *"
workflow_dispatch:
permissions:
contents: read
jobs:
conformance:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
type:
- parallel
- serial
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Install helm
uses: azure/setup-helm@1a275c3b69536ee54be43f2070a358922e12c8d4 # v4.3.1
- name: Install hydrophone
run: go install sigs.k8s.io/hydrophone@latest
- name: Install k3s
env:
KUBECONFIG: /etc/rancher/k3s/k3s.yaml
K3S_HOST_VERSION: v1.33.7+k3s1
run: |
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=${K3S_HOST_VERSION} INSTALL_K3S_EXEC="--write-kubeconfig-mode=777" sh -s -
kubectl cluster-info
kubectl get nodes
- name: Build, package and setup K3k
env:
KUBECONFIG: /etc/rancher/k3s/k3s.yaml
run: |
export REPO=ttl.sh/$(uuidgen)
export VERSION=1h
make build
make package
make push
make install
# add k3kcli to $PATH
echo "${{ github.workspace }}/bin" >> $GITHUB_PATH
echo "Wait for K3k controller to be available"
kubectl wait -n k3k-system pod --for condition=Ready -l "app.kubernetes.io/name=k3k" --timeout=5m
- name: Check k3kcli
run: k3kcli -v
- name: Create virtual cluster
env:
KUBECONFIG: /etc/rancher/k3s/k3s.yaml
run: |
k3kcli cluster create --mode=virtual --servers=2 mycluster
export KUBECONFIG=${{ github.workspace }}/k3k-mycluster-mycluster-kubeconfig.yaml
kubectl cluster-info
kubectl get nodes
kubectl get pods -A
- name: Run conformance tests (parallel)
if: matrix.type == 'parallel'
run: |
# Run conformance tests in parallel mode (skipping serial)
hydrophone --conformance --parallel 4 --skip='\[Serial\]' \
--kubeconfig ${{ github.workspace }}/k3k-mycluster-mycluster-kubeconfig.yaml \
--output-dir /tmp
- name: Run conformance tests (serial)
if: matrix.type == 'serial'
run: |
# Run serial conformance tests
hydrophone --focus='\[Serial\].*\[Conformance\]' \
--kubeconfig ${{ github.workspace }}/k3k-mycluster-mycluster-kubeconfig.yaml \
--output-dir /tmp
- name: Export logs
if: always()
env:
KUBECONFIG: /etc/rancher/k3s/k3s.yaml
run: |
journalctl -u k3s -o cat --no-pager > /tmp/k3s.log
kubectl logs -n k3k-system -l "app.kubernetes.io/name=k3k" --tail=-1 > /tmp/k3k.log
- name: Archive K3s logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: k3s-${{ matrix.type }}-logs
path: /tmp/k3s.log
- name: Archive K3k logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: k3k-${{ matrix.type }}-logs
path: /tmp/k3k.log
- name: Archive conformance logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: conformance-${{ matrix.type }}-logs
path: /tmp/e2e.log
- name: Job Summary
if: always()
run: |
echo '## 📊 Conformance Tests Results (${{ matrix.type }})' >> $GITHUB_STEP_SUMMARY
echo '| Passed | Failed | Pending | Skipped |' >> $GITHUB_STEP_SUMMARY
echo '|---|---|---|---|' >> $GITHUB_STEP_SUMMARY
RESULTS=$(tail -10 /tmp/e2e.log | grep -E "Passed .* Failed .* Pending .* Skipped" | cut -d '-' -f 3)
RESULTS=$(echo $RESULTS | grep -oE '[0-9]+' | xargs | sed 's/ / | /g')
echo "| $RESULTS |" >> $GITHUB_STEP_SUMMARY
# only include failed tests section if there are any
if grep -q '\[FAIL\]' /tmp/e2e.log; then
echo '' >> $GITHUB_STEP_SUMMARY
echo '### Failed Tests' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
grep '\[FAIL\]' /tmp/e2e.log >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
fi

.github/workflows/test-e2e.yaml vendored Normal file

@@ -0,0 +1,171 @@
name: Tests E2E
on:
push:
branches: [main]
pull_request:
types: [opened, synchronize, reopened]
workflow_dispatch:
permissions:
contents: read
jobs:
tests-e2e:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Install Ginkgo
run: go install github.com/onsi/ginkgo/v2/ginkgo
- name: Setup environment
run: |
mkdir ${{ github.workspace }}/covdata
echo "COVERAGE=true" >> $GITHUB_ENV
echo "GOCOVERDIR=${{ github.workspace }}/covdata" >> $GITHUB_ENV
echo "REPO=ttl.sh/$(uuidgen)" >> $GITHUB_ENV
echo "VERSION=1h" >> $GITHUB_ENV
echo "K3S_HOST_VERSION=v1.32.1+k3s1 >> $GITHUB_ENV"
- name: Install k3s
run: |
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=${{ env.K3S_HOST_VERSION }} INSTALL_K3S_EXEC="--write-kubeconfig-mode=777" sh -s -
- name: Build and package and push dev images
env:
KUBECONFIG: /etc/rancher/k3s/k3s.yaml
REPO: ${{ env.REPO }}
VERSION: ${{ env.VERSION }}
run: |
make build
make package
make push
make install
- name: Run e2e tests
env:
KUBECONFIG: /etc/rancher/k3s/k3s.yaml
REPO: ${{ env.REPO }}
VERSION: ${{ env.VERSION }}
run: make E2E_LABEL_FILTER="e2e && !slow" test-e2e
- name: Convert coverage data
run: go tool covdata textfmt -i=${GOCOVERDIR} -o ${GOCOVERDIR}/cover.out
- name: Upload coverage reports to Codecov (controller)
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ${GOCOVERDIR}/cover.out
flags: controller
- name: Upload coverage reports to Codecov (e2e)
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./cover.out
flags: e2e
- name: Archive k3s logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: e2e-k3s-logs
path: /tmp/k3s.log
- name: Archive k3k logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: e2e-k3k-logs
path: /tmp/k3k.log
tests-e2e-slow:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Install Ginkgo
run: go install github.com/onsi/ginkgo/v2/ginkgo
- name: Setup environment
run: |
mkdir ${{ github.workspace }}/covdata
echo "COVERAGE=true" >> $GITHUB_ENV
echo "GOCOVERDIR=${{ github.workspace }}/covdata" >> $GITHUB_ENV
echo "REPO=ttl.sh/$(uuidgen)" >> $GITHUB_ENV
echo "VERSION=1h" >> $GITHUB_ENV
echo "K3S_HOST_VERSION=v1.32.1+k3s1 >> $GITHUB_ENV"
- name: Install k3s
run: |
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=${{ env.K3S_HOST_VERSION }} INSTALL_K3S_EXEC="--write-kubeconfig-mode=777" sh -s -
- name: Build and package and push dev images
env:
KUBECONFIG: /etc/rancher/k3s/k3s.yaml
REPO: ${{ env.REPO }}
VERSION: ${{ env.VERSION }}
run: |
make build
make package
make push
make install
- name: Run e2e tests
env:
KUBECONFIG: /etc/rancher/k3s/k3s.yaml
REPO: ${{ env.REPO }}
VERSION: ${{ env.VERSION }}
run: make E2E_LABEL_FILTER="e2e && slow" test-e2e
- name: Convert coverage data
run: go tool covdata textfmt -i=${GOCOVERDIR} -o ${GOCOVERDIR}/cover.out
- name: Upload coverage reports to Codecov (controller)
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ${GOCOVERDIR}/cover.out
flags: controller
- name: Upload coverage reports to Codecov (e2e)
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./cover.out
flags: e2e
- name: Archive k3s logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: e2e-k3s-logs
path: /tmp/k3s.log
- name: Archive k3k logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: e2e-k3k-logs
path: /tmp/k3k.log

.github/workflows/test.yaml vendored Normal file

@@ -0,0 +1,99 @@
name: Tests
on:
push:
branches: [main]
pull_request:
types: [opened, synchronize, reopened]
workflow_dispatch:
permissions:
contents: read
jobs:
tests:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Run unit tests
run: make test-unit
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./cover.out
flags: unit
tests-cli:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Install Ginkgo
run: go install github.com/onsi/ginkgo/v2/ginkgo
- name: Setup environment
run: |
mkdir ${{ github.workspace }}/covdata
echo "COVERAGE=true" >> $GITHUB_ENV
echo "GOCOVERDIR=${{ github.workspace }}/covdata" >> $GITHUB_ENV
echo "K3S_HOST_VERSION=v1.32.1+k3s1 >> $GITHUB_ENV"
- name: Build and package
run: |
make build
make package
# add k3kcli to $PATH
echo "${{ github.workspace }}/bin" >> $GITHUB_PATH
- name: Check k3kcli
run: k3kcli -v
- name: Run cli tests
env:
K3K_DOCKER_INSTALL: "true"
K3S_HOST_VERSION: "${{ env.K3S_HOST_VERSION }}"
run: make test-cli
- name: Convert coverage data
run: go tool covdata textfmt -i=${{ github.workspace }}/covdata -o ${{ github.workspace }}/covdata/cover.out
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ${{ github.workspace }}/covdata/cover.out
flags: cli
- name: Archive k3s logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: cli-k3s-logs
path: /tmp/k3s.log
- name: Archive k3k logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: cli-k3k-logs
path: /tmp/k3k.log

.github/workflows/validate.yml vendored Normal file

@@ -0,0 +1,41 @@
name: Validate
on:
push:
branches: [main]
pull_request:
types: [opened, synchronize, reopened]
workflow_dispatch:
permissions:
contents: read
jobs:
validate:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Set up Go
uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
cache: true
- name: Install Pandoc
run: sudo apt-get install pandoc
- name: Run linters
uses: golangci/golangci-lint-action@1e7e51e771db61008b38414a730f564565cf7c20 # v9.2.0
with:
version: v2.8.0
args: -v
only-new-issues: true
skip-cache: false
- name: Run formatters
run: golangci-lint -v fmt ./...
- name: Validate
run: make validate

.gitignore vendored

@@ -4,4 +4,10 @@
/dist
*.swp
.idea
.vscode/
__debug*
*-kubeconfig.yaml
.envtest
cover.out
covcounters.**
covmeta.**

.golangci.yml Normal file

@@ -0,0 +1,27 @@
version: "2"
linters:
enable:
- misspell
- wsl_v5
formatters:
enable:
- gci
- gofmt
- gofumpt
settings:
gci:
# The default order is `standard > default > custom > blank > dot > alias > localmodule`.
custom-order: true
sections:
- standard
- default
- alias
- localmodule
- dot
- blank
gofmt:
rewrite-rules:
- pattern: 'interface{}'
replacement: 'any'

.goreleaser.yaml Normal file

@@ -0,0 +1,148 @@
version: 2
release:
draft: true
replace_existing_draft: true
prerelease: auto
before:
hooks:
- go mod tidy
- go generate ./...
builds:
- id: k3k
env:
- CGO_ENABLED=0
goos:
- linux
goarch:
- "amd64"
- "arm64"
- "s390x"
ldflags:
- -w -s # strip debug info and symbol table
- -X "github.com/rancher/k3k/pkg/buildinfo.Version={{ .Tag }}"
- id: k3k-kubelet
main: ./k3k-kubelet
binary: k3k-kubelet
env:
- CGO_ENABLED=0
goos:
- linux
goarch:
- "amd64"
- "arm64"
- "s390x"
ldflags:
- -w -s # strip debug info and symbol table
- -X "github.com/rancher/k3k/pkg/buildinfo.Version={{ .Tag }}"
- id: k3kcli
main: ./cli
binary: k3kcli
env:
- CGO_ENABLED=0
goarch:
- "amd64"
- "arm64"
ldflags:
- -w -s # strip debug info and symbol table
- -X "github.com/rancher/k3k/pkg/buildinfo.Version={{ .Tag }}"
archives:
- format: binary
name_template: >-
{{ .Binary }}-{{- .Os }}-{{ .Arch }}
{{- if .Arm }}v{{ .Arm }}{{ end }}
format_overrides:
- goos: windows
format: zip
# For the image_templates we are using the following expression to build images for the correct registry
# {{- if .Env.REGISTRY }}{{ .Env.REGISTRY }}/{{ end }}
#
# REGISTRY= -> rancher/k3k:vX.Y.Z
# REGISTRY=ghcr.io -> ghcr.io/rancher/k3k:vX.Y.Z
#
dockers:
# k3k amd64
- use: buildx
goarch: amd64
ids:
- k3k
- k3kcli
dockerfile: "package/Dockerfile.k3k"
skip_push: false
image_templates:
- "{{- if .Env.REGISTRY }}{{ .Env.REGISTRY }}/{{ end }}{{ .Env.REPO }}:{{ .Tag }}-amd64"
build_flag_templates:
- "--build-arg=BIN_K3K=k3k"
- "--build-arg=BIN_K3KCLI=k3kcli"
- "--pull"
- "--platform=linux/amd64"
# k3k arm64
- use: buildx
goarch: arm64
ids:
- k3k
- k3kcli
dockerfile: "package/Dockerfile.k3k"
skip_push: false
image_templates:
- "{{- if .Env.REGISTRY }}{{ .Env.REGISTRY }}/{{ end }}{{ .Env.REPO }}:{{ .Tag }}-arm64"
build_flag_templates:
- "--build-arg=BIN_K3K=k3k"
- "--build-arg=BIN_K3KCLI=k3kcli"
- "--pull"
- "--platform=linux/arm64"
# k3k-kubelet amd64
- use: buildx
goarch: amd64
ids:
- k3k-kubelet
dockerfile: "package/Dockerfile.k3k-kubelet"
skip_push: false
image_templates:
- "{{- if .Env.REGISTRY }}{{ .Env.REGISTRY }}/{{ end }}{{ .Env.REPO }}-kubelet:{{ .Tag }}-amd64"
build_flag_templates:
- "--build-arg=BIN_K3K_KUBELET=k3k-kubelet"
- "--pull"
- "--platform=linux/amd64"
# k3k-kubelet arm64
- use: buildx
goarch: arm64
ids:
- k3k-kubelet
dockerfile: "package/Dockerfile.k3k-kubelet"
skip_push: false
image_templates:
- "{{- if .Env.REGISTRY }}{{ .Env.REGISTRY }}/{{ end }}{{ .Env.REPO }}-kubelet:{{ .Tag }}-arm64"
build_flag_templates:
- "--build-arg=BIN_K3K_KUBELET=k3k-kubelet"
- "--pull"
- "--platform=linux/arm64"
docker_manifests:
# k3k
- name_template: "{{- if .Env.REGISTRY }}{{ .Env.REGISTRY }}/{{ end }}{{ .Env.REPO }}:{{ .Tag }}"
image_templates:
- "{{- if .Env.REGISTRY }}{{ .Env.REGISTRY }}/{{ end }}{{ .Env.REPO }}:{{ .Tag }}-amd64"
- "{{- if .Env.REGISTRY }}{{ .Env.REGISTRY }}/{{ end }}{{ .Env.REPO }}:{{ .Tag }}-arm64"
# k3k-kubelet
- name_template: "{{- if .Env.REGISTRY }}{{ .Env.REGISTRY }}/{{ end }}{{ .Env.REPO }}-kubelet:{{ .Tag }}"
image_templates:
- "{{- if .Env.REGISTRY }}{{ .Env.REGISTRY }}/{{ end }}{{ .Env.REPO }}-kubelet:{{ .Tag }}-amd64"
- "{{- if .Env.REGISTRY }}{{ .Env.REGISTRY }}/{{ end }}{{ .Env.REPO }}-kubelet:{{ .Tag }}-arm64"
changelog:
sort: asc
filters:
exclude:
- "^docs:"
- "^test:"


@@ -1,24 +0,0 @@
ARG GOLANG=rancher/hardened-build-base:v1.20.7b2
FROM ${GOLANG}
ARG DAPPER_HOST_ARCH
ENV ARCH $DAPPER_HOST_ARCH
RUN apk -U add bash git gcc musl-dev docker vim less file curl wget ca-certificates
RUN if [ "${ARCH}" == "amd64" ]; then \
curl -sL https://install.goreleaser.com/github.com/golangci/golangci-lint.sh | sh -s v1.15.0; \
fi
RUN curl -sL https://github.com/helm/chart-releaser/releases/download/v1.5.0/chart-releaser_1.5.0_linux_${ARCH}.tar.gz | tar -xz cr \
&& mv cr /bin/
ENV GO111MODULE on
ENV DAPPER_ENV REPO TAG DRONE_TAG CROSS GITHUB_TOKEN
ENV DAPPER_SOURCE /go/src/github.com/rancher/k3k/
ENV DAPPER_OUTPUT ./bin ./dist ./deploy
ENV DAPPER_DOCKER_SOCKET true
ENV HOME ${DAPPER_SOURCE}
WORKDIR ${DAPPER_SOURCE}
ENTRYPOINT ["./ops/entry"]
CMD ["ci"]

Makefile

@@ -1,15 +1,140 @@
TARGETS := $(shell ls ops)
.dapper:
@echo Downloading dapper
@curl -sL https://releases.rancher.com/dapper/latest/dapper-$$(uname -s)-$$(uname -m) > .dapper.tmp
@chmod +x .dapper.tmp
@./.dapper.tmp -v
@mv .dapper.tmp .dapper
REPO ?= rancher
COVERAGE ?= false
VERSION ?= $(shell git describe --tags --always --dirty --match="v[0-9]*")
$(TARGETS): .dapper
./.dapper $@
## Dependencies
.DEFAULT_GOAL := default
GOLANGCI_LINT_VERSION := v2.8.0
GINKGO_VERSION ?= v2.21.0
GINKGO_FLAGS ?= -v -r --coverprofile=cover.out --coverpkg=./...
ENVTEST_VERSION ?= v0.0.0-20250505003155-b6c5897febe5
ENVTEST_K8S_VERSION := 1.31.0
CRD_REF_DOCS_VER ?= v0.2.0
.PHONY: $(TARGETS)
GOLANGCI_LINT ?= go run github.com/golangci/golangci-lint/v2/cmd/golangci-lint@$(GOLANGCI_LINT_VERSION)
GINKGO ?= go run github.com/onsi/ginkgo/v2/ginkgo@$(GINKGO_VERSION)
CRD_REF_DOCS := go run github.com/elastic/crd-ref-docs@$(CRD_REF_DOCS_VER)
PANDOC := $(shell which pandoc 2> /dev/null)
ENVTEST ?= go run sigs.k8s.io/controller-runtime/tools/setup-envtest@$(ENVTEST_VERSION)
ENVTEST_DIR ?= $(shell pwd)/.envtest
E2E_LABEL_FILTER ?= e2e
export KUBEBUILDER_ASSETS ?= $(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(ENVTEST_DIR) -p path)
.PHONY: all
all: version generate build package ## Run 'make' or 'make all' to run 'version', 'generate', 'build' and 'package'
.PHONY: version
version: ## Print the current version
@echo $(VERSION)
.PHONY: build
build: ## Build the K3k binaries (k3k, k3k-kubelet and k3kcli)
@VERSION=$(VERSION) COVERAGE=$(COVERAGE) ./scripts/build
.PHONY: package
package: package-k3k package-k3k-kubelet ## Package the k3k and k3k-kubelet Docker images
.PHONY: package-%
package-%:
docker build -f package/Dockerfile.$* \
-t $(REPO)/$*:$(VERSION) \
-t $(REPO)/$*:latest \
-t $(REPO)/$*:dev .
.PHONY: push
push: push-k3k push-k3k-kubelet ## Push the K3k images to the registry
.PHONY: push-%
push-%:
docker push $(REPO)/$*:$(VERSION)
docker push $(REPO)/$*:latest
docker push $(REPO)/$*:dev
.PHONY: test
test: ## Run all the tests
$(GINKGO) $(GINKGO_FLAGS) --label-filter=$(label-filter)
.PHONY: test-unit
test-unit: ## Run the unit tests (skips the e2e)
$(GINKGO) $(GINKGO_FLAGS) --skip-file=tests/*
.PHONY: test-controller
test-controller: ## Run the controller tests (pkg/controller)
$(GINKGO) $(GINKGO_FLAGS) pkg/controller
.PHONY: test-kubelet-controller
test-kubelet-controller: ## Run the kubelet controller tests (k3k-kubelet/controller)
$(GINKGO) $(GINKGO_FLAGS) k3k-kubelet/controller
.PHONY: test-e2e
test-e2e: ## Run the e2e tests
$(GINKGO) $(GINKGO_FLAGS) --label-filter="$(E2E_LABEL_FILTER)" tests
.PHONY: test-cli
test-cli: ## Run the cli tests
$(GINKGO) $(GINKGO_FLAGS) --label-filter=cli --flake-attempts=3 tests
.PHONY: generate
generate: ## Generate the CRDs specs
go generate ./...
.PHONY: docs
docs: docs-crds docs-cli ## Build the CRDs and CLI docs
.PHONY: docs-crds
docs-crds: ## Build the CRDs docs
$(CRD_REF_DOCS) --config=./docs/crds/config.yaml \
--renderer=markdown \
--source-path=./pkg/apis/k3k.io/v1beta1 \
--output-path=./docs/crds/crds.md
$(CRD_REF_DOCS) --config=./docs/crds/config.yaml \
--renderer=asciidoctor \
--templates-dir=./docs/crds/templates/asciidoctor \
--source-path=./pkg/apis/k3k.io/v1beta1 \
--output-path=./docs/crds/crds.adoc
.PHONY: docs-cli
docs-cli: ## Build the CLI docs
ifeq (, $(PANDOC))
$(error "pandoc not found in PATH.")
endif
@./scripts/generate-cli-docs
.PHONY: lint
lint: ## Find any linting issues in the project
$(GOLANGCI_LINT) run --timeout=5m
.PHONY: fmt
fmt: ## Format source files in the project
ifndef CI
$(GOLANGCI_LINT) fmt ./...
endif
.PHONY: validate
validate: generate docs fmt ## Validate the project checking for any dependency or doc mismatch
$(GINKGO) unfocus
go mod tidy
go mod verify
git status --porcelain
git --no-pager diff --exit-code
.PHONY: install
install: ## Install K3k with Helm on the targeted Kubernetes cluster
helm upgrade --install --namespace k3k-system --create-namespace \
--set controller.extraEnv[0].name=DEBUG \
--set-string controller.extraEnv[0].value=true \
--set controller.image.repository=$(REPO)/k3k \
--set controller.image.tag=$(VERSION) \
--set agent.shared.image.repository=$(REPO)/k3k-kubelet \
--set agent.shared.image.tag=$(VERSION) \
k3k ./charts/k3k/
.PHONY: help
help: ## Show this help.
@egrep -h '\s##\s' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m %-30s\033[0m %s\n", $$1, $$2}'
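For reference, a typical local development loop with these targets might look like the following sketch (the `REPO` value is illustrative; point it at a registry you can push to):

```bash
# Build the binaries, then build and push the Docker images under a custom repo.
make build
make package push REPO=ttl.sh/k3k-dev

# Install the chart into the current kube-context using the freshly pushed images,
# then run the unit tests.
make install REPO=ttl.sh/k3k-dev
make test-unit
```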

README.md
View File

@@ -1,122 +1,179 @@
# K3K
# K3k: Kubernetes in Kubernetes
A Kubernetes-in-Kubernetes tool, k3k provides a way to run multiple embedded, isolated K3s clusters on your Kubernetes cluster.
## Example
An example on creating a k3k cluster on an RKE2 host using k3kcli
[![asciicast](https://asciinema.org/a/eYlc3dsL2pfP2B50i3Ea8MJJp.svg)](https://asciinema.org/a/eYlc3dsL2pfP2B50i3Ea8MJJp)
## Architecture
K3K consists of a controller and a CLI tool. The controller can be deployed via a Helm chart, and the CLI can be downloaded from the releases page.
### Controller
The K3K controller watches a CRD called `clusters.k3k.io`. When a Cluster object is found, the controller creates a separate namespace and provisions a K3s cluster as specified in the object's spec.
Each server and agent is created as a separate pod that runs in the new namespace.
### CLI
The CLI provides a quick and easy way to create K3K clusters using simple flags, and automatically exposes them so they are accessible via a kubeconfig.
## Features
### Isolation
Each cluster runs in a separate namespace that can be isolated via network policies and RBAC rules. Clusters also run in a separate network namespace with flannel as the backend CNI. Finally, each cluster has a separate datastore which can be persisted.
In addition, k3k offers a persistence feature that helps users persist their datastore, using dynamic storage class volumes.
### Portability and Customization
The "Cluster" object is considered the template of the cluster that you can re-use to spin up multiple clusters in a matter of seconds.
K3K clusters use K3S internally and leverage all options that can be passed to K3S. Each cluster is exposed to the host cluster via NodePort, LoadBalancers, and Ingresses.
[![Go Report Card](https://goreportcard.com/badge/github.com/rancher/k3k)](https://goreportcard.com/report/github.com/rancher/k3k)
![Tests](https://github.com/rancher/k3k/actions/workflows/test.yaml/badge.svg)
![Build](https://github.com/rancher/k3k/actions/workflows/build.yml/badge.svg)
[![Conformance Tests - Virtual Mode](https://github.com/rancher/k3k/actions/workflows/test-conformance-virtual.yaml/badge.svg)](https://github.com/rancher/k3k/actions/workflows/test-conformance-virtual.yaml)
| | Separate Namespace (for each tenant) | K3K | vcluster | Separate Cluster (for each tenant) |
|-----------------------|---------------------------------------|------------------------------|-----------------|------------------------------------|
| Isolation | Very weak | Very strong | Strong | Very strong |
| Access for tenants | Very restricted | Built-in k8s RBAC / Rancher | vcluster admin | Cluster admin |
| Cost | Very cheap | Very cheap | Cheap | Expensive |
| Overhead | Very low | Very low | Very low | Very high |
| Networking | Shared | Separate | Shared | Separate |
| Cluster Configuration | | Very easy | Very hard | |
K3k, Kubernetes in Kubernetes, is a tool that empowers you to create and manage isolated K3s clusters within your existing Kubernetes environment. It enables efficient multi-tenancy, streamlined experimentation, and robust resource isolation, minimizing infrastructure costs by allowing you to run multiple lightweight Kubernetes clusters on the same physical host. K3k offers both "shared" mode, optimizing resource utilization, and "virtual" mode, providing complete isolation with dedicated K3s server pods. This allows you to access a full Kubernetes experience without the overhead of managing separate physical resources.
K3k integrates seamlessly with Rancher for simplified management of your embedded clusters.
## Features and Benefits
- **Resource Isolation:** Ensure workload isolation and prevent resource contention between teams or applications. K3k allows you to define resource limits and quotas for each embedded cluster, guaranteeing that one team's workloads won't impact another's performance.
- **Simplified Multi-Tenancy:** Easily create dedicated Kubernetes environments for different users or projects, simplifying access control and management. Provide each team with their own isolated cluster, complete with its own namespaces, RBAC, and resource quotas, without the complexity of managing multiple physical clusters.
- **Lightweight and Fast:** Leverage the lightweight nature of K3s to spin up and tear down clusters quickly, accelerating development and testing cycles. Spin up a new K3k cluster in seconds, test your application in a clean environment, and tear it down just as quickly, streamlining your CI/CD pipeline.
- **Optimized Resource Utilization (Shared Mode):** Maximize your infrastructure investment by running multiple K3s clusters on the same physical host. K3k's shared mode allows you to efficiently share underlying resources, reducing overhead and minimizing costs.
- **Complete Isolation (Virtual Mode):** For enhanced security and isolation, K3k's virtual mode provides dedicated K3s server pods for each embedded cluster. This ensures complete separation of workloads and eliminates any potential resource contention or security risks.
- **Rancher Integration:** Simplify the management of your K3k clusters with Rancher. Leverage Rancher's intuitive UI and powerful features to monitor, manage, and scale your embedded clusters with ease.
## Installation
This section provides instructions on how to install K3k and the `k3kcli`.
### Prerequisites
* [Helm](https://helm.sh) must be installed to use the charts. Please refer to Helm's [documentation](https://helm.sh/docs) to get started.
* An existing [RKE2](https://docs.rke2.io/install/quickstart) Kubernetes cluster (recommended).
* A configured storage provider with a default storage class.
**Note:** If you do not have a storage provider, you can configure the cluster to use ephemeral or static storage. Please consult the [k3kcli advanced usage](./docs/advanced-usage.md#using-the-cli) documentation for instructions on using these options.
### Install the K3k controller
1. Add the K3k Helm repository:
```bash
helm repo add k3k https://rancher.github.io/k3k
helm repo update
```
2. Install the K3k controller:
```bash
helm install --namespace k3k-system --create-namespace k3k k3k/k3k
```
We recommend using the latest released version when possible.
### Install the `k3kcli`
The `k3kcli` provides a quick and easy way to create K3k clusters and automatically exposes them via a kubeconfig.
To install it, simply download the latest available version for your architecture from the GitHub Releases page.
For example, you can download the Linux amd64 version with:
```
wget -qO k3kcli https://github.com/rancher/k3k/releases/download/v1.0.1/k3kcli-linux-amd64 && \
chmod +x k3kcli && \
sudo mv k3kcli /usr/local/bin
```
You should now be able to run:
```bash
-> % k3kcli --version
k3kcli version v1.0.1
```
## Usage
### Deploy K3K Controller
This section provides examples of how to use the `k3kcli` to manage your K3k clusters.
[Helm](https://helm.sh) must be installed to use the charts. Please refer to
Helm's [documentation](https://helm.sh/docs) to get started.
**K3k operates within the context of your currently configured `kubectl` context.** This means that K3k respects the standard Kubernetes mechanisms for context configuration, including the `--kubeconfig` flag, the `$KUBECONFIG` environment variable, and the default `$HOME/.kube/config` file. Any K3k clusters you create will reside within the Kubernetes cluster that your `kubectl` is currently pointing to.
Once Helm has been set up correctly, add the repo as follows:
```sh
helm repo add k3k https://rancher.github.io/k3k
```
### Creating a K3k Cluster
To create a new K3k cluster, use the following command:
```bash
k3kcli cluster create mycluster
```
> [!NOTE]
> **Creating a K3k Cluster on a Rancher-Managed Host Cluster**
>
> If your *host* Kubernetes cluster is managed by Rancher (e.g., your kubeconfig's `server` address includes a Rancher URL), use the `--kubeconfig-server` flag when creating your K3k cluster:
>
>```bash
>k3kcli cluster create --kubeconfig-server <host_node_IP_or_load_balancer_IP> mycluster
>```
>
> This ensures the generated kubeconfig connects to the correct endpoint.
When the K3s server is ready, `k3kcli` will generate the necessary kubeconfig file and print instructions on how to use it.
Here's an example of the output:
```bash
INFO[0000] Creating a new cluster [mycluster]
INFO[0000] Extracting Kubeconfig for [mycluster] cluster
INFO[0000] waiting for cluster to be available..
INFO[0073] certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1738746570: notBefore=2025-02-05 09:09:30 +0000 UTC notAfter=2026-02-05 09:10:42 +0000 UTC
INFO[0073] You can start using the cluster with:
export KUBECONFIG=/my/current/directory/mycluster-kubeconfig.yaml
kubectl cluster-info
```
If you had already added this repo earlier, run `helm repo update` to retrieve
the latest versions of the packages. You can then run `helm search repo
k3k --devel` to see the charts.
After exporting the generated kubeconfig, you should be able to reach your Kubernetes cluster:
To install the k3k chart:
```sh
helm install my-k3k k3k/k3k --devel
```
```bash
export KUBECONFIG=/my/current/directory/mycluster-kubeconfig.yaml
kubectl get nodes
kubectl get pods -A
```
To uninstall the chart:
You can also create a K3k cluster by applying a Cluster resource directly in a namespace:
```sh
helm delete my-k3k
```
```bash
kubectl apply -f - <<EOF
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
name: mycluster
namespace: k3k-mycluster
EOF
```
**NOTE: Since k3k is still under development, the chart is marked as a development chart. This means that you need to add the `--devel` flag to install it.**
and use the `k3kcli` to retrieve the kubeconfig:
### Create a new cluster
To create a new cluster you need to install and run the CLI, or create a cluster object. To install the CLI:
#### For linux and macOS
1 - Download the binary. Linux download URL:
```
wget https://github.com/rancher/k3k/releases/download/v0.0.0-alpha2/k3kcli
```
macOS download URL:
```
wget https://github.com/rancher/k3k/releases/download/v0.0.0-alpha2/k3kcli
```
Then copy it to your local bin:
```
chmod +x k3kcli
sudo cp k3kcli /usr/local/bin
```
```bash
k3kcli kubeconfig generate --namespace k3k-mycluster --name mycluster
```
#### For Windows
1 - Download the Binary:
Use PowerShell's Invoke-WebRequest cmdlet to download the binary:
```powershell
Invoke-WebRequest -Uri "https://github.com/rancher/k3k/releases/download/v0.0.0-alpha2/k3kcli-windows" -OutFile "k3kcli.exe"
```
2 - Copy the Binary to a Directory in PATH:
To allow running the binary from any command prompt, copy it to a directory in your system's PATH, for example `C:\bin` (create this directory if it doesn't exist):
```
Copy-Item "k3kcli.exe" "C:\bin"
```
3 - Update Environment Variable (PATH):
If you haven't already added `C:\bin` (or your chosen directory) to your PATH, you can do it through PowerShell:
```
setx PATH "C:\bin;%PATH%"
```
### Deleting a K3k Cluster
To delete a K3k cluster, use the following command:
```bash
k3kcli cluster delete mycluster
```
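Since a K3k cluster is just a namespaced `Cluster` resource (see the CRD below), you can alternatively delete it with `kubectl`. A minimal sketch, assuming the cluster lives in the `k3k-mycluster` namespace as in the earlier examples:

```bash
kubectl delete clusters.k3k.io mycluster --namespace k3k-mycluster
```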
To create a new cluster you can use:
```sh
k3k cluster create --name example-cluster --token test
```
## Architecture
For a detailed explanation of the K3k architecture, please refer to the [Architecture documentation](./docs/architecture.md).
## Advanced Usage
For more in-depth examples and information on advanced K3k usage, including details on shared vs. virtual modes, resource management, and other configuration options, please see the [Advanced Usage documentation](./docs/advanced-usage.md).
## Development
If you're interested in building K3k from source or contributing to the project, please refer to the [Development documentation](./docs/development.md).
## License
Copyright (c) 2014-2025 [SUSE](http://rancher.com/)
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

View File

@@ -2,5 +2,5 @@ apiVersion: v2
name: k3k
description: A Helm chart for K3K
type: application
version: 0.1.0-r1
appVersion: 0.0.0-alpha6
version: 1.0.2-rc2
appVersion: v1.0.2-rc2

View File

@@ -1,116 +0,0 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: clusters.k3k.io
spec:
group: k3k.io
versions:
- name: v1alpha1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
name:
type: string
version:
type: string
servers:
type: integer
agents:
type: integer
token:
type: string
clusterCIDR:
type: string
serviceCIDR:
type: string
clusterDNS:
type: string
serverArgs:
type: array
items:
type: string
agentArgs:
type: array
items:
type: string
tlsSANs:
type: array
items:
type: string
persistence:
type: object
properties:
type:
type: string
default: "ephermal"
storageClassName:
type: string
storageRequestSize:
type: string
addons:
type: array
items:
type: object
properties:
secretNamespace:
type: string
secretRef:
type: string
expose:
type: object
properties:
ingress:
type: object
properties:
enabled:
type: boolean
ingressClassName:
type: string
loadbalancer:
type: object
properties:
enabled:
type: boolean
nodePort:
type: object
properties:
enabled:
type: boolean
status:
type: object
properties:
overrideClusterCIDR:
type: boolean
clusterCIDR:
type: string
overrideServiceCIDR:
type: boolean
serviceCIDR:
type: string
clusterDNS:
type: string
tlsSANs:
type: array
items:
type: string
persistence:
type: object
properties:
type:
type: string
default: "ephermal"
storageClassName:
type: string
storageRequestSize:
type: string
scope: Cluster
names:
plural: clusters
singular: cluster
kind: Cluster

View File

@@ -60,3 +60,54 @@ Create the name of the service account to use
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Print the image pull secrets in the expected format (an array of objects with one possible field, "name").
*/}}
{{- define "image.pullSecrets" }}
{{- $imagePullSecrets := list }}
{{- range . }}
{{- if kindIs "string" . }}
{{- $imagePullSecrets = append $imagePullSecrets (dict "name" .) }}
{{- else }}
{{- $imagePullSecrets = append $imagePullSecrets . }}
{{- end }}
{{- end }}
{{- toYaml $imagePullSecrets }}
{{- end }}
{{- define "controller.registry" }}
{{- $registry := .Values.global.imageRegistry | default .Values.controller.image.registry -}}
{{- if $registry }}
{{- $registry }}/
{{- else }}
{{- $registry }}
{{- end }}
{{- end }}
{{- define "server.registry" }}
{{- $registry := .Values.global.imageRegistry | default .Values.server.image.registry -}}
{{- if $registry }}
{{- $registry }}/
{{- else }}
{{- $registry }}
{{- end }}
{{- end }}
{{- define "agent.virtual.registry" }}
{{- $registry := .Values.global.imageRegistry | default .Values.agent.virtual.image.registry -}}
{{- if $registry }}
{{- $registry }}/
{{- else }}
{{- $registry }}
{{- end }}
{{- end }}
{{- define "agent.shared.registry" }}
{{- $registry := .Values.global.imageRegistry | default .Values.agent.shared.image.registry -}}
{{- if $registry }}
{{- $registry }}/
{{- else }}
{{- $registry }}
{{- end }}
{{- end }}
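The registry helpers above prepend a global or per-image registry (with a trailing slash) only when one is set, and `image.pullSecrets` normalizes plain strings and `{name: ...}` objects into the list format pods expect. A minimal sketch of driving both from values (the registry and secret names are hypothetical):

```bash
# Both string and object entries are accepted by the image.pullSecrets helper;
# global.imageRegistry takes precedence over the per-image registry fields.
cat > my-values.yaml <<EOF
global:
  imageRegistry: registry.example.com
  imagePullSecrets:
    - my-global-pull-secret          # plain string form
controller:
  imagePullSecrets:
    - name: controller-pull-secret   # object form
EOF
helm upgrade --install --namespace k3k-system --create-namespace -f my-values.yaml k3k k3k/k3k
```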

View File

@@ -0,0 +1,960 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
controller-gen.kubebuilder.io/version: v0.20.0
name: clusters.k3k.io
spec:
group: k3k.io
names:
kind: Cluster
listKind: ClusterList
plural: clusters
singular: cluster
scope: Namespaced
versions:
- additionalPrinterColumns:
- jsonPath: .spec.mode
name: Mode
type: string
- jsonPath: .status.phase
name: Status
type: string
- jsonPath: .status.policyName
name: Policy
type: string
name: v1beta1
schema:
openAPIV3Schema:
description: |-
Cluster defines a virtual Kubernetes cluster managed by k3k.
It specifies the desired state of a virtual cluster, including version, node configuration, and networking.
k3k uses this to provision and manage these virtual clusters.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
default: {}
description: Spec defines the desired state of the Cluster.
properties:
addons:
description: Addons specifies secrets containing raw YAML to deploy
on cluster startup.
items:
description: Addon specifies a Secret containing YAML to be deployed
on cluster startup.
properties:
secretNamespace:
description: SecretNamespace is the namespace of the Secret.
type: string
secretRef:
description: SecretRef is the name of the Secret.
type: string
type: object
type: array
agentArgs:
description: |-
AgentArgs specifies ordered key-value pairs for K3s agent pods.
Example: ["--node-name=my-agent-node"]
items:
type: string
type: array
agentEnvs:
description: AgentEnvs specifies list of environment variables to
set in the agent pod.
items:
description: EnvVar represents an environment variable present in
a Container.
properties:
name:
description: Name of the environment variable. Must be a C_IDENTIFIER.
type: string
value:
description: |-
Variable references $(VAR_NAME) are expanded
using the previously defined environment variables in the container and
any service environment variables. If a variable cannot be resolved,
the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e.
"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)".
Escaped references will never be expanded, regardless of whether the variable
exists or not.
Defaults to "".
type: string
valueFrom:
description: Source for the environment variable's value. Cannot
be used if value is not empty.
properties:
configMapKeyRef:
description: Selects a key of a ConfigMap.
properties:
key:
description: The key to select.
type: string
name:
default: ""
description: |-
Name of the referent.
This field is effectively required, but due to backwards compatibility is
allowed to be empty. Instances of this type with an empty value here are
almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
type: string
optional:
description: Specify whether the ConfigMap or its key
must be defined
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: |-
Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['<KEY>']`, `metadata.annotations['<KEY>']`,
spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
properties:
apiVersion:
description: Version of the schema the FieldPath is
written in terms of, defaults to "v1".
type: string
fieldPath:
description: Path of the field to select in the specified
API version.
type: string
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: |-
Selects a resource of the container: only resources limits and requests
(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
properties:
containerName:
description: 'Container name: required for volumes,
optional for env vars'
type: string
divisor:
anyOf:
- type: integer
- type: string
description: Specifies the output format of the exposed
resources, defaults to "1"
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
resource:
description: 'Required: resource to select'
type: string
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
key:
description: The key of the secret to select from. Must
be a valid secret key.
type: string
name:
default: ""
description: |-
Name of the referent.
This field is effectively required, but due to backwards compatibility is
allowed to be empty. Instances of this type with an empty value here are
almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
type: string
optional:
description: Specify whether the Secret or its key must
be defined
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
type: object
type: array
agents:
default: 0
description: |-
Agents specifies the number of K3s pods to run in agent (worker) mode.
Must be 0 or greater. Defaults to 0.
This field is ignored in "shared" mode.
format: int32
type: integer
x-kubernetes-validations:
- message: invalid value for agents
rule: self >= 0
clusterCIDR:
description: |-
ClusterCIDR is the CIDR range for pod IPs.
Defaults to 10.42.0.0/16 in shared mode and 10.52.0.0/16 in virtual mode.
This field is immutable.
type: string
x-kubernetes-validations:
- message: clusterCIDR is immutable
rule: self == oldSelf
clusterDNS:
description: |-
ClusterDNS is the IP address for the CoreDNS service.
Must be within the ServiceCIDR range. Defaults to 10.43.0.10.
This field is immutable.
type: string
x-kubernetes-validations:
- message: clusterDNS is immutable
rule: self == oldSelf
customCAs:
description: CustomCAs specifies the cert/key pairs for custom CA
certificates.
properties:
enabled:
default: true
description: Enabled toggles this feature on or off.
type: boolean
sources:
description: Sources defines the sources for all required custom
CA certificates.
properties:
clientCA:
description: ClientCA specifies the client-ca cert/key pair.
properties:
secretName:
description: |-
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
type: object
etcdPeerCA:
description: ETCDPeerCA specifies the etcd-peer-ca cert/key
pair.
properties:
secretName:
description: |-
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
type: object
etcdServerCA:
description: ETCDServerCA specifies the etcd-server-ca cert/key
pair.
properties:
secretName:
description: |-
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
type: object
requestHeaderCA:
description: RequestHeaderCA specifies the request-header-ca
cert/key pair.
properties:
secretName:
description: |-
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
type: object
serverCA:
description: ServerCA specifies the server-ca cert/key pair.
properties:
secretName:
description: |-
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
type: object
serviceAccountToken:
description: ServiceAccountToken specifies the service-account-token
key.
properties:
secretName:
description: |-
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
type: object
required:
- clientCA
- etcdPeerCA
- etcdServerCA
- requestHeaderCA
- serverCA
- serviceAccountToken
type: object
required:
- enabled
- sources
type: object
expose:
description: |-
Expose specifies options for exposing the API server.
By default, it's only exposed as a ClusterIP.
properties:
ingress:
description: Ingress specifies options for exposing the API server
through an Ingress.
properties:
annotations:
additionalProperties:
type: string
description: Annotations specifies annotations to add to the
Ingress.
type: object
ingressClassName:
description: IngressClassName specifies the IngressClass to
use for the Ingress.
type: string
type: object
loadBalancer:
description: LoadBalancer specifies options for exposing the API
server through a LoadBalancer service.
properties:
etcdPort:
description: |-
ETCDPort is the port on which the ETCD service is exposed when type is LoadBalancer.
If not specified, the default etcd 2379 port will be allocated.
If 0 or negative, the port will not be exposed.
format: int32
type: integer
serverPort:
description: |-
ServerPort is the port on which the K3s server is exposed when type is LoadBalancer.
If not specified, the default https 443 port will be allocated.
If 0 or negative, the port will not be exposed.
format: int32
type: integer
type: object
nodePort:
description: NodePort specifies options for exposing the API server
through NodePort.
properties:
etcdPort:
description: |-
ETCDPort is the port on each node on which the ETCD service is exposed when type is NodePort.
If not specified, a random port between 30000-32767 will be allocated.
If out of range, the port will not be exposed.
format: int32
type: integer
serverPort:
description: |-
ServerPort is the port on each node on which the K3s server is exposed when type is NodePort.
If not specified, a random port between 30000-32767 will be allocated.
If out of range, the port will not be exposed.
format: int32
type: integer
type: object
type: object
x-kubernetes-validations:
- message: ingress, loadbalancer and nodePort are mutually exclusive;
only one can be set
rule: '[has(self.ingress), has(self.loadBalancer), has(self.nodePort)].filter(x,
x).size() <= 1'
mirrorHostNodes:
description: |-
MirrorHostNodes controls whether node objects from the host cluster
are mirrored into the virtual cluster.
type: boolean
mode:
allOf:
- enum:
- shared
- virtual
- enum:
- shared
- virtual
default: shared
description: |-
Mode specifies the cluster provisioning mode: "shared" or "virtual".
Defaults to "shared". This field is immutable.
type: string
x-kubernetes-validations:
- message: mode is immutable
rule: self == oldSelf
nodeSelector:
additionalProperties:
type: string
description: |-
NodeSelector specifies node labels to constrain where server/agent pods are scheduled.
In "shared" mode, this also applies to workloads.
type: object
persistence:
description: |-
Persistence specifies options for persisting etcd data.
Defaults to dynamic persistence, which uses a PersistentVolumeClaim to provide data persistence.
A default StorageClass is required for dynamic persistence.
properties:
storageClassName:
description: |-
StorageClassName is the name of the StorageClass to use for the PVC.
This field is only relevant in "dynamic" mode.
type: string
storageRequestSize:
anyOf:
- type: integer
- type: string
default: 2G
description: |-
StorageRequestSize is the requested size for the PVC.
This field is only relevant in "dynamic" mode.
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
x-kubernetes-validations:
- message: storageRequestSize is immutable
rule: self == oldSelf
type:
default: dynamic
description: Type specifies the persistence mode.
type: string
type: object
priorityClass:
description: |-
PriorityClass specifies the priorityClassName for server/agent pods.
In "shared" mode, this also applies to workloads.
type: string
secretMounts:
description: |-
SecretMounts specifies a list of secrets to mount into server and agent pods.
Each entry defines a secret and its mount path within the pods.
items:
description: |-
SecretMount defines a secret to be mounted into server or agent pods,
allowing for custom configurations, certificates, or other sensitive data.
properties:
defaultMode:
description: |-
defaultMode is Optional: mode bits used to set permissions on created files by default.
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
YAML accepts both octal and decimal values, JSON requires decimal values
for mode bits. Defaults to 0644.
Directories within the path are not affected by this setting.
This might be in conflict with other options that affect the file
mode, like fsGroup, and the result can be other mode bits set.
format: int32
type: integer
items:
description: |-
items If unspecified, each key-value pair in the Data field of the referenced
Secret will be projected into the volume as a file whose name is the
key and content is the value. If specified, the listed keys will be
projected into the specified paths, and unlisted keys will not be
present. If a key is specified which is not present in the Secret,
the volume setup will error unless it is marked optional. Paths must be
relative and may not contain the '..' path or start with '..'.
items:
description: Maps a string key to a path within a volume.
properties:
key:
description: key is the key to project.
type: string
mode:
description: |-
mode is Optional: mode bits used to set permissions on this file.
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits.
If not specified, the volume defaultMode will be used.
This might be in conflict with other options that affect the file
mode, like fsGroup, and the result can be other mode bits set.
format: int32
type: integer
path:
description: |-
path is the relative path of the file to map the key to.
May not be an absolute path.
May not contain the path element '..'.
May not start with the string '..'.
type: string
required:
- key
- path
type: object
type: array
x-kubernetes-list-type: atomic
mountPath:
description: |-
MountPath is the path within server and agent pods where the
secret contents will be mounted.
type: string
optional:
description: optional field specify whether the Secret or its
keys must be defined
type: boolean
role:
description: |-
Role is the type of the k3k pod that will be used to mount the secret.
This can be 'server', 'agent', or 'all' (for both).
enum:
- server
- agent
- all
type: string
secretName:
description: |-
secretName is the name of the secret in the pod's namespace to use.
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret
type: string
subPath:
description: |-
SubPath is an optional path within the secret to mount instead of the root.
When specified, only the specified key from the secret will be mounted as a file
at MountPath, keeping the parent directory writable.
type: string
type: object
type: array
serverArgs:
description: |-
ServerArgs specifies ordered key-value pairs for K3s server pods.
Example: ["--tls-san=example.com"]
items:
type: string
type: array
serverEnvs:
description: ServerEnvs specifies list of environment variables to
set in the server pod.
items:
description: EnvVar represents an environment variable present in
a Container.
properties:
name:
description: Name of the environment variable. Must be a C_IDENTIFIER.
type: string
value:
description: |-
Variable references $(VAR_NAME) are expanded
using the previously defined environment variables in the container and
any service environment variables. If a variable cannot be resolved,
the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e.
"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)".
Escaped references will never be expanded, regardless of whether the variable
exists or not.
Defaults to "".
type: string
valueFrom:
description: Source for the environment variable's value. Cannot
be used if value is not empty.
properties:
configMapKeyRef:
description: Selects a key of a ConfigMap.
properties:
key:
description: The key to select.
type: string
name:
default: ""
description: |-
Name of the referent.
This field is effectively required, but due to backwards compatibility is
allowed to be empty. Instances of this type with an empty value here are
almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
type: string
optional:
description: Specify whether the ConfigMap or its key
must be defined
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
fieldRef:
description: |-
Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['<KEY>']`, `metadata.annotations['<KEY>']`,
spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
properties:
apiVersion:
description: Version of the schema the FieldPath is
written in terms of, defaults to "v1".
type: string
fieldPath:
description: Path of the field to select in the specified
API version.
type: string
required:
- fieldPath
type: object
x-kubernetes-map-type: atomic
resourceFieldRef:
description: |-
Selects a resource of the container: only resources limits and requests
(limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
properties:
containerName:
description: 'Container name: required for volumes,
optional for env vars'
type: string
divisor:
anyOf:
- type: integer
- type: string
description: Specifies the output format of the exposed
resources, defaults to "1"
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
resource:
description: 'Required: resource to select'
type: string
required:
- resource
type: object
x-kubernetes-map-type: atomic
secretKeyRef:
description: Selects a key of a secret in the pod's namespace
properties:
key:
description: The key of the secret to select from. Must
be a valid secret key.
type: string
name:
default: ""
description: |-
Name of the referent.
This field is effectively required, but due to backwards compatibility is
allowed to be empty. Instances of this type with an empty value here are
almost certainly wrong.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
type: string
optional:
description: Specify whether the Secret or its key must
be defined
type: boolean
required:
- key
type: object
x-kubernetes-map-type: atomic
type: object
required:
- name
type: object
type: array
serverLimit:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: ServerLimit specifies resource limits for server nodes.
type: object
servers:
default: 1
description: |-
Servers specifies the number of K3s pods to run in server (control plane) mode.
Must be at least 1. Defaults to 1.
format: int32
type: integer
x-kubernetes-validations:
- message: cluster must have at least one server
rule: self >= 1
serviceCIDR:
description: |-
ServiceCIDR is the CIDR range for service IPs.
Defaults to 10.43.0.0/16 in shared mode and 10.53.0.0/16 in virtual mode.
This field is immutable.
type: string
x-kubernetes-validations:
- message: serviceCIDR is immutable
rule: self == oldSelf
sync:
default: {}
description: Sync specifies the resource types that will be synced
from virtual cluster to host cluster.
properties:
configMaps:
default:
enabled: true
description: ConfigMaps resources sync configuration.
properties:
enabled:
default: true
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
required:
- enabled
type: object
ingresses:
default:
enabled: false
description: Ingresses resources sync configuration.
properties:
enabled:
default: false
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
required:
- enabled
type: object
persistentVolumeClaims:
default:
enabled: true
description: PersistentVolumeClaims resources sync configuration.
properties:
enabled:
default: true
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
required:
- enabled
type: object
priorityClasses:
default:
enabled: false
description: PriorityClasses resources sync configuration.
properties:
enabled:
default: false
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
required:
- enabled
type: object
secrets:
default:
enabled: true
description: Secrets resources sync configuration.
properties:
enabled:
default: true
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
type: object
services:
default:
enabled: true
description: Services resources sync configuration.
properties:
enabled:
default: true
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
required:
- enabled
type: object
type: object
tlsSANs:
description: TLSSANs specifies subject alternative names for the K3s
server certificate.
items:
type: string
type: array
tokenSecretRef:
description: |-
TokenSecretRef is a Secret reference containing the token used by worker nodes to join the cluster.
The Secret must have a "token" field in its data.
properties:
name:
description: name is unique within a namespace to reference a
secret resource.
type: string
namespace:
description: namespace defines the space within which the secret
name must be unique.
type: string
type: object
x-kubernetes-map-type: atomic
version:
description: |-
Version is the K3s version to use for the virtual nodes.
It should follow the K3s versioning convention (e.g., v1.28.2-k3s1).
If not specified, the Kubernetes version of the host node will be used.
type: string
workerLimit:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: WorkerLimit specifies resource limits for agent nodes.
type: object
type: object
status:
default: {}
description: Status reflects the observed state of the Cluster.
properties:
clusterCIDR:
description: ClusterCIDR is the CIDR range for pod IPs.
type: string
clusterDNS:
description: ClusterDNS is the IP address for the CoreDNS service.
type: string
conditions:
description: Conditions are the individual conditions for the cluster
set.
items:
description: Condition contains details for one aspect of the current
state of this API Resource.
properties:
lastTransitionTime:
description: |-
lastTransitionTime is the last time the condition transitioned from one status to another.
This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: |-
message is a human readable message indicating details about the transition.
This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: |-
observedGeneration represents the .metadata.generation that the condition was set based upon.
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date
with respect to the current state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: |-
reason contains a programmatic identifier indicating the reason for the condition's last transition.
Producers of specific condition types may define expected values and meanings for this field,
and whether the values are considered a guaranteed API.
The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
hostVersion:
description: HostVersion is the Kubernetes version of the host node.
type: string
kubeletPort:
description: KubeletPort specifies the port used by k3k-kubelet in
shared mode.
type: integer
phase:
default: Unknown
description: Phase is a high-level summary of the cluster's current
lifecycle state.
enum:
- Pending
- Provisioning
- Ready
- Failed
- Terminating
- Unknown
type: string
policyName:
description: PolicyName specifies the virtual cluster policy name
bound to the virtual cluster.
type: string
serviceCIDR:
description: ServiceCIDR is the CIDR range for service IPs.
type: string
tlsSANs:
description: TLSSANs specifies subject alternative names for the K3s
server certificate.
items:
type: string
type: array
webhookPort:
description: WebhookPort specifies the port used by webhook in k3k-kubelet
in shared mode.
type: integer
type: object
type: object
served: true
storage: true
subresources:
status: {}
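Putting the schema above together, here is a minimal sketch of a Cluster manifest exercising a few of these fields (the namespace, names, and mounted secret are hypothetical; `mode`, the CIDRs, and the storage request size are immutable once set):

```bash
kubectl create namespace k3k-demo
kubectl create secret generic demo-registries --namespace k3k-demo \
  --from-literal=registries.yaml='mirrors: {}'
kubectl apply -f - <<EOF
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: demo
  namespace: k3k-demo
spec:
  mode: shared              # default; "shared" or "virtual", immutable
  servers: 1                # must be >= 1
  persistence:
    type: dynamic           # default; requires a default StorageClass
    storageRequestSize: 2G
  expose:
    nodePort: {}            # random 30000-32767 ports; mutually exclusive with ingress/loadBalancer
  secretMounts:
    - secretName: demo-registries
      mountPath: /etc/rancher/k3s/registries.yaml
      subPath: registries.yaml
      role: all             # mount into both server and agent pods
EOF
```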

View File

@@ -0,0 +1,429 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
controller-gen.kubebuilder.io/version: v0.20.0
name: virtualclusterpolicies.k3k.io
spec:
group: k3k.io
names:
kind: VirtualClusterPolicy
listKind: VirtualClusterPolicyList
plural: virtualclusterpolicies
shortNames:
- vcp
singular: virtualclusterpolicy
scope: Cluster
versions:
- additionalPrinterColumns:
- jsonPath: .spec.allowedMode
name: Mode
type: string
name: v1beta1
schema:
openAPIV3Schema:
description: |-
VirtualClusterPolicy allows defining common configurations and constraints
for clusters within a clusterpolicy.
properties:
apiVersion:
description: |-
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value, and
may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
type: string
kind:
description: |-
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated.
In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
type: string
metadata:
type: object
spec:
default: {}
description: Spec defines the desired state of the VirtualClusterPolicy.
properties:
allowedMode:
default: shared
description: AllowedMode specifies the allowed cluster provisioning
mode. Defaults to "shared".
enum:
- shared
- virtual
type: string
x-kubernetes-validations:
- message: mode is immutable
rule: self == oldSelf
defaultNodeSelector:
additionalProperties:
type: string
description: DefaultNodeSelector specifies the node selector that
applies to all clusters (server + agent) in the target Namespace.
type: object
defaultPriorityClass:
description: DefaultPriorityClass specifies the priorityClassName
applied to all pods of all clusters in the target Namespace.
type: string
disableNetworkPolicy:
description: DisableNetworkPolicy indicates whether to disable the
creation of a default network policy for cluster isolation.
type: boolean
limit:
description: |-
Limit specifies the LimitRange that will be applied to all pods within the VirtualClusterPolicy
to set defaults and constraints (min/max)
properties:
limits:
description: Limits is the list of LimitRangeItem objects that
are enforced.
items:
description: LimitRangeItem defines a min/max usage limit for
any resource that matches on kind.
properties:
default:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Default resource requirement limit value by
resource name if resource limit is omitted.
type: object
defaultRequest:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: DefaultRequest is the default resource requirement
request value by resource name if resource request is
omitted.
type: object
max:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Max usage constraints on this kind by resource
name.
type: object
maxLimitRequestRatio:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: MaxLimitRequestRatio if specified, the named
resource must have a request and limit that are both non-zero
where limit divided by request is less than or equal to
the enumerated value; this represents the max burst for
the named resource.
type: object
min:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Min usage constraints on this kind by resource
name.
type: object
type:
description: Type of resource that this limit applies to.
type: string
required:
- type
type: object
type: array
x-kubernetes-list-type: atomic
required:
- limits
type: object
podSecurityAdmissionLevel:
description: PodSecurityAdmissionLevel specifies the pod security
admission level applied to the pods in the namespace.
enum:
- privileged
- baseline
- restricted
type: string
quota:
description: Quota specifies the resource limits for clusters within
a clusterpolicy.
properties:
hard:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
hard is the set of desired hard limits for each named resource.
More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/
type: object
scopeSelector:
description: |-
scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota
but expressed using ScopeSelectorOperator in combination with possible values.
For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched.
properties:
matchExpressions:
description: A list of scope selector requirements by scope
of the resources.
items:
description: |-
A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator
that relates the scope name and values.
properties:
operator:
description: |-
Represents a scope's relationship to a set of values.
Valid operators are In, NotIn, Exists, DoesNotExist.
type: string
scopeName:
description: The name of the scope that the selector
applies to.
type: string
values:
description: |-
An array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty.
This array is replaced during a strategic merge patch.
items:
type: string
type: array
x-kubernetes-list-type: atomic
required:
- operator
- scopeName
type: object
type: array
x-kubernetes-list-type: atomic
type: object
x-kubernetes-map-type: atomic
scopes:
description: |-
A collection of filters that must match each object tracked by a quota.
If not specified, the quota matches all objects.
items:
description: A ResourceQuotaScope defines a filter that must
match each object tracked by a quota
type: string
type: array
x-kubernetes-list-type: atomic
type: object
sync:
default: {}
description: Sync specifies the resource types that will be synced
from virtual cluster to host cluster.
properties:
configMaps:
default:
enabled: true
description: ConfigMaps resources sync configuration.
properties:
enabled:
default: true
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
required:
- enabled
type: object
ingresses:
default:
enabled: false
description: Ingresses resources sync configuration.
properties:
enabled:
default: false
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
required:
- enabled
type: object
persistentVolumeClaims:
default:
enabled: true
description: PersistentVolumeClaims resources sync configuration.
properties:
enabled:
default: true
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
required:
- enabled
type: object
priorityClasses:
default:
enabled: false
description: PriorityClasses resources sync configuration.
properties:
enabled:
default: false
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
required:
- enabled
type: object
secrets:
default:
enabled: true
description: Secrets resources sync configuration.
properties:
enabled:
default: true
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
type: object
services:
default:
enabled: true
description: Services resources sync configuration.
properties:
enabled:
default: true
description: Enabled is an on/off switch for syncing resources.
type: boolean
selector:
additionalProperties:
type: string
description: |-
Selector specifies set of labels of the resources that will be synced, if empty
then all resources of the given type will be synced.
type: object
required:
- enabled
type: object
type: object
type: object
status:
description: Status reflects the observed state of the VirtualClusterPolicy.
properties:
conditions:
description: Conditions are the individual conditions for the cluster
set.
items:
description: Condition contains details for one aspect of the current
state of this API Resource.
properties:
lastTransitionTime:
description: |-
lastTransitionTime is the last time the condition transitioned from one status to another.
This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable.
format: date-time
type: string
message:
description: |-
message is a human readable message indicating details about the transition.
This may be an empty string.
maxLength: 32768
type: string
observedGeneration:
description: |-
observedGeneration represents the .metadata.generation that the condition was set based upon.
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date
with respect to the current state of the instance.
format: int64
minimum: 0
type: integer
reason:
description: |-
reason contains a programmatic identifier indicating the reason for the condition's last transition.
Producers of specific condition types may define expected values and meanings for this field,
and whether the values are considered a guaranteed API.
The value should be a CamelCase string.
This field may not be empty.
maxLength: 1024
minLength: 1
pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
type: string
status:
description: status of the condition, one of True, False, Unknown.
enum:
- "True"
- "False"
- Unknown
type: string
type:
description: type of condition in CamelCase or in foo.example.com/CamelCase.
maxLength: 316
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
required:
- lastTransitionTime
- message
- reason
- status
- type
type: object
type: array
lastUpdateTime:
description: LastUpdate is the timestamp when the status was last
updated.
type: string
observedGeneration:
description: ObservedGeneration was the generation at the time the
status was updated.
format: int64
type: integer
summary:
description: Summary is a summary of the status.
type: string
type: object
required:
- metadata
- spec
type: object
served: true
storage: true
subresources:
status: {}
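As a companion sketch, a VirtualClusterPolicy combining the quota, limit, and pod security fields described above (all values are illustrative):

```bash
kubectl apply -f - <<EOF
apiVersion: k3k.io/v1beta1
kind: VirtualClusterPolicy
metadata:
  name: demo-policy          # cluster-scoped, so no namespace
spec:
  allowedMode: shared
  podSecurityAdmissionLevel: baseline
  quota:
    hard:
      cpu: "8"
      memory: 16Gi
      pods: "50"
  limit:
    limits:
      - type: Container
        defaultRequest:
          cpu: 100m
          memory: 128Mi
        max:
          cpu: "2"
          memory: 2Gi
EOF
```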

View File

@@ -4,9 +4,9 @@ metadata:
name: {{ include "k3k.fullname" . }}
labels:
{{- include "k3k.labels" . | nindent 4 }}
namespace: {{ .Values.namespace }}
namespace: {{ .Release.Namespace }}
spec:
replicas: {{ .Values.image.replicaCount }}
replicas: {{ .Values.controller.replicas }}
selector:
matchLabels:
{{- include "k3k.selectorLabels" . | nindent 6 }}
@@ -15,12 +15,47 @@ spec:
labels:
{{- include "k3k.selectorLabels" . | nindent 8 }}
spec:
imagePullSecrets: {{- include "image.pullSecrets" (concat .Values.controller.imagePullSecrets .Values.global.imagePullSecrets) | nindent 8 }}
containers:
- image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
- image: "{{- include "controller.registry" .}}{{ .Values.controller.image.repository }}:{{ .Values.controller.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.controller.image.pullPolicy }}
name: {{ .Chart.Name }}
{{- with .Values.controller.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
args:
- k3k
- --cluster-cidr={{ .Values.host.clusterCIDR }}
- --k3s-server-image={{- include "server.registry" .}}{{ .Values.server.image.repository }}
- --k3s-server-image-pull-policy={{ .Values.server.image.pullPolicy }}
- --agent-shared-image={{- include "agent.shared.registry" .}}{{ .Values.agent.shared.image.repository }}:{{ default .Chart.AppVersion .Values.agent.shared.image.tag }}
- --agent-shared-image-pull-policy={{ .Values.agent.shared.image.pullPolicy }}
- --agent-virtual-image={{- include "agent.virtual.registry" .}}{{ .Values.agent.virtual.image.repository }}
- --agent-virtual-image-pull-policy={{ .Values.agent.virtual.image.pullPolicy }}
- --kubelet-port-range={{ .Values.agent.shared.kubeletPortRange }}
- --webhook-port-range={{ .Values.agent.shared.webhookPortRange }}
{{- range $key, $value := include "image.pullSecrets" (concat .Values.agent.imagePullSecrets .Values.global.imagePullSecrets) | fromYamlArray }}
- --agent-image-pull-secret
- {{ .name }}
{{- end }}
{{- range $key, $value := include "image.pullSecrets" (concat .Values.server.imagePullSecrets .Values.global.imagePullSecrets) | fromYamlArray }}
- --server-image-pull-secret
- {{ .name }}
{{- end }}
env:
- name: CONTROLLER_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
{{- with .Values.controller.extraEnv }}
{{- toYaml . | nindent 10 }}
{{- end }}
ports:
- containerPort: 8080
name: https
protocol: TCP
serviceAccountName: {{ include "k3k.serviceAccountName" . }}
- containerPort: 9443
name: https-webhook
protocol: TCP
serviceAccountName: {{ include "k3k.serviceAccountName" . }}
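
Rendered with the chart defaults introduced later in this diff (empty registries and pull policies, the chart appVersion standing in for missing tags), the new args block comes out roughly as this sketch; values are illustrative:

```yaml
args:
- k3k
- --cluster-cidr=                                   # empty unless host.clusterCIDR is set
- --k3s-server-image=rancher/k3s
- --k3s-server-image-pull-policy=
- --agent-shared-image=rancher/k3k-kubelet:v1.0.2   # tag falls back to the chart appVersion (illustrative)
- --agent-shared-image-pull-policy=
- --agent-virtual-image=rancher/k3s
- --agent-virtual-image-pull-policy=
- --kubelet-port-range=50000-51000
- --webhook-port-range=51001-52000
```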

View File

@@ -1,4 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.namespace }}

View File

@@ -11,4 +11,50 @@ roleRef:
subjects:
- kind: ServiceAccount
name: {{ include "k3k.serviceAccountName" . }}
namespace: {{ .Values.namespace }}
namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k3k-kubelet-node
rules:
- apiGroups:
- ""
resources:
- "nodes"
- "nodes/proxy"
- "namespaces"
verbs:
- "get"
- "list"
- "watch"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: k3k-kubelet-node
roleRef:
kind: ClusterRole
name: k3k-kubelet-node
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: k3k-priorityclass
rules:
- apiGroups:
- "scheduling.k8s.io"
resources:
- "priorityclasses"
verbs:
- "*"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: k3k-priorityclass
roleRef:
kind: ClusterRole
name: k3k-priorityclass
apiGroup: rbac.authorization.k8s.io
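
Both bindings above are created without subjects in this hunk (subjects can be attached later, for example per virtual cluster); for reference, a fully bound variant would look like this sketch with a hypothetical service account:

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k3k-kubelet-node
roleRef:
  kind: ClusterRole
  name: k3k-kubelet-node
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: my-cluster-kubelet       # hypothetical; not defined in this diff
  namespace: k3k-my-cluster
```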

View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: k3k-webhook
labels:
{{- include "k3k.labels" . | nindent 4 }}
namespace: {{ .Release.Namespace }}
spec:
ports:
- port: 443
protocol: TCP
name: https-webhook
targetPort: 9443
selector:
{{- include "k3k.selectorLabels" . | nindent 6 }}

View File

@@ -5,5 +5,5 @@ metadata:
name: {{ include "k3k.serviceAccountName" . }}
labels:
{{- include "k3k.labels" . | nindent 4 }}
namespace: {{ .Values.namespace }}
{{- end }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -1,19 +1,85 @@
replicaCount: 1
namespace: k3k-system
image:
repository: rancher/k3k
pullPolicy: Always
# Overrides the image tag whose default is the chart appVersion.
tag: "v0.0.0-alpha6"
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
global:
# -- Global override for container image registry
imageRegistry: ""
# -- Global override for container image registry pull secrets
imagePullSecrets: []
serviceAccount:
# Specifies whether a service account should be created
create: true
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
host:
# clusterCIDR specifies the CIDR that will be added to the default network policy; if not set,
# the controller will collect the PodCIDRs of all the nodes in the system.
clusterCIDR: ""
controller:
replicas: 1
image:
registry: ""
repository: rancher/k3k
tag: ""
pullPolicy: ""
imagePullSecrets: []
# extraEnv allows you to specify additional environment variables for the k3k controller deployment.
# This is useful for passing custom configuration or secrets to the controller.
# For example:
# extraEnv:
# - name: MY_CUSTOM_VAR
# value: "my_custom_value"
# - name: ANOTHER_VAR
# valueFrom:
# secretKeyRef:
# name: my-secret
# key: my-key
extraEnv: []
# resources allows you to set resources limits and requests for CPU and Memory
# resources:
# limits:
# cpu: "200m"
# memory: "200Mi"
# requests:
# cpu: "100m"
# memory: "100Mi"
resources: {}
# configuration related to k3s server component in k3k
server:
imagePullSecrets: []
image:
registry:
repository: "rancher/k3s"
pullPolicy: ""
# configuration related to the agent component in k3k
agent:
imagePullSecrets: []
# configuration related to agent in shared mode
shared:
image:
registry: ""
repository: "rancher/k3k-kubelet"
tag: ""
pullPolicy: ""
# Specifies the port range that will be used for the k3k-kubelet API if mirrorHostNodes is enabled
kubeletPortRange: "50000-51000"
# Specifies the port range that will be used for the webhook if mirrorHostNodes is enabled
webhookPortRange: "51001-52000"
# configuration related to agent in virtual mode
virtual:
image:
registry: ""
repository: "rancher/k3s"
pullPolicy: ""
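
Taken together, a minimal override file against this new layout might look like the sketch below (registry, tag, and resource values are illustrative):

```yaml
global:
  imageRegistry: registry.example.com   # illustrative private registry
controller:
  replicas: 2
  image:
    tag: v1.0.2                         # illustrative; defaults to the chart appVersion
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
agent:
  shared:
    kubeletPortRange: "50000-51000"
```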

21
cli/cmds/cluster.go Normal file
View File

@@ -0,0 +1,21 @@
package cmds
import (
"github.com/spf13/cobra"
)
func NewClusterCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "cluster",
Short: "K3k cluster command.",
}
cmd.AddCommand(
NewClusterCreateCmd(appCtx),
NewClusterUpdateCmd(appCtx),
NewClusterDeleteCmd(appCtx),
NewClusterListCmd(appCtx),
)
return cmd
}

View File

@@ -1,25 +0,0 @@
package cluster
import (
"github.com/rancher/k3k/cli/cmds"
"github.com/urfave/cli"
)
var clusterSubcommands = []cli.Command{
{
Name: "create",
Usage: "Create new cluster",
SkipFlagParsing: false,
SkipArgReorder: true,
Action: createCluster,
Flags: append(cmds.CommonFlags, clusterCreateFlags...),
},
}
func NewClusterCommand() cli.Command {
return cli.Command{
Name: "cluster",
Usage: "cluster command",
Subcommands: clusterSubcommands,
}
}

View File

@@ -1,328 +0,0 @@
package cluster
import (
"context"
"errors"
"fmt"
"net/url"
"os"
"path/filepath"
"strings"
"time"
"github.com/rancher/k3k/cli/cmds"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1alpha1"
"github.com/rancher/k3k/pkg/controller/cluster/server"
"github.com/rancher/k3k/pkg/controller/util"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
"k8s.io/client-go/util/retry"
"sigs.k8s.io/controller-runtime/pkg/client"
)
var (
Scheme = runtime.NewScheme()
backoff = wait.Backoff{
Steps: 5,
Duration: 20 * time.Second,
Factor: 2,
Jitter: 0.1,
}
)
func init() {
_ = clientgoscheme.AddToScheme(Scheme)
_ = v1alpha1.AddToScheme(Scheme)
}
var (
name string
token string
clusterCIDR string
serviceCIDR string
servers int64
agents int64
serverArgs cli.StringSlice
agentArgs cli.StringSlice
persistenceType string
storageClassName string
version string
clusterCreateFlags = []cli.Flag{
cli.StringFlag{
Name: "name",
Usage: "name of the cluster",
Destination: &name,
},
cli.Int64Flag{
Name: "servers",
Usage: "number of servers",
Destination: &servers,
Value: 1,
},
cli.Int64Flag{
Name: "agents",
Usage: "number of agents",
Destination: &agents,
},
cli.StringFlag{
Name: "token",
Usage: "token of the cluster",
Destination: &token,
},
cli.StringFlag{
Name: "cluster-cidr",
Usage: "cluster CIDR",
Destination: &clusterCIDR,
},
cli.StringFlag{
Name: "service-cidr",
Usage: "service CIDR",
Destination: &serviceCIDR,
},
cli.StringFlag{
Name: "persistence-type",
Usage: "Persistence mode for the nodes (ephermal, static, dynamic)",
Value: server.EphermalNodesType,
Destination: &persistenceType,
},
cli.StringFlag{
Name: "storage-class-name",
Usage: "Storage class name for dynamic persistence type",
Destination: &storageClassName,
},
cli.StringSliceFlag{
Name: "server-args",
Usage: "servers extra arguments",
Value: &serverArgs,
},
cli.StringSliceFlag{
Name: "agent-args",
Usage: "agents extra arguments",
Value: &agentArgs,
},
cli.StringFlag{
Name: "version",
Usage: "k3s version",
Destination: &version,
Value: "v1.26.1-k3s1",
},
}
)
func createCluster(clx *cli.Context) error {
ctx := context.Background()
if err := validateCreateFlags(clx); err != nil {
return err
}
restConfig, err := clientcmd.BuildConfigFromFlags("", cmds.Kubeconfig)
if err != nil {
return err
}
ctrlClient, err := client.New(restConfig, client.Options{
Scheme: Scheme,
})
if err != nil {
return err
}
logrus.Infof("Creating a new cluster [%s]", name)
cluster := newCluster(
name,
token,
int32(servers),
int32(agents),
clusterCIDR,
serviceCIDR,
serverArgs,
agentArgs,
)
cluster.Spec.Expose = &v1alpha1.ExposeConfig{
NodePort: &v1alpha1.NodePortConfig{
Enabled: true,
},
}
// add Host IP address as an extra TLS-SAN to expose the k3k cluster
url, err := url.Parse(restConfig.Host)
if err != nil {
return err
}
host := strings.Split(url.Host, ":")
cluster.Spec.TLSSANs = []string{host[0]}
if err := ctrlClient.Create(ctx, cluster); err != nil {
if apierrors.IsAlreadyExists(err) {
logrus.Infof("Cluster [%s] already exists", name)
} else {
return err
}
}
logrus.Infof("Extracting Kubeconfig for [%s] cluster", name)
var kubeconfig []byte
if err := retry.OnError(backoff, apierrors.IsNotFound, func() error {
kubeconfig, err = extractKubeconfig(ctx, ctrlClient, cluster, host[0])
if err != nil {
logrus.Infof("waiting for cluster to be available: %v", err)
return err
}
return nil
}); err != nil {
return err
}
pwd, err := os.Getwd()
if err != nil {
return err
}
logrus.Infof(`You can start using the cluster with:
export KUBECONFIG=%s
kubectl cluster-info
`, filepath.Join(pwd, cluster.Name+"-kubeconfig.yaml"))
return os.WriteFile(cluster.Name+"-kubeconfig.yaml", kubeconfig, 0644)
}
func validateCreateFlags(clx *cli.Context) error {
if persistenceType != server.EphermalNodesType &&
persistenceType != server.DynamicNodesType {
return errors.New("invalid persistence type")
}
if token == "" {
return errors.New("empty cluster token")
}
if name == "" {
return errors.New("empty cluster name")
}
if servers <= 0 {
return errors.New("invalid number of servers")
}
if cmds.Kubeconfig == "" && os.Getenv("KUBECONFIG") == "" {
return errors.New("empty kubeconfig")
}
return nil
}
func newCluster(name, token string, servers, agents int32, clusterCIDR, serviceCIDR string, serverArgs, agentArgs []string) *v1alpha1.Cluster {
return &v1alpha1.Cluster{
ObjectMeta: metav1.ObjectMeta{
Name: name,
},
TypeMeta: metav1.TypeMeta{
Kind: "Cluster",
APIVersion: "k3k.io/v1alpha1",
},
Spec: v1alpha1.ClusterSpec{
Name: name,
Token: token,
Servers: &servers,
Agents: &agents,
ClusterCIDR: clusterCIDR,
ServiceCIDR: serviceCIDR,
ServerArgs: serverArgs,
AgentArgs: agentArgs,
Version: version,
Persistence: &v1alpha1.PersistenceConfig{
Type: persistenceType,
StorageClassName: storageClassName,
},
},
}
}
func extractKubeconfig(ctx context.Context, client client.Client, cluster *v1alpha1.Cluster, serverIP string) ([]byte, error) {
nn := types.NamespacedName{
Name: cluster.Name + "-kubeconfig",
Namespace: util.ClusterNamespace(cluster),
}
var kubeSecret v1.Secret
if err := client.Get(ctx, nn, &kubeSecret); err != nil {
return nil, err
}
kubeconfig := kubeSecret.Data["kubeconfig.yaml"]
if kubeconfig == nil {
return nil, errors.New("empty kubeconfig")
}
nn = types.NamespacedName{
Name: "k3k-server-service",
Namespace: util.ClusterNamespace(cluster),
}
var k3kService v1.Service
if err := client.Get(ctx, nn, &k3kService); err != nil {
return nil, err
}
if k3kService.Spec.Type == v1.ServiceTypeNodePort {
nodePort := k3kService.Spec.Ports[0].NodePort
restConfig, err := clientcmd.RESTConfigFromKubeConfig(kubeconfig)
if err != nil {
return nil, err
}
hostURL := fmt.Sprintf("https://%s:%d", serverIP, nodePort)
restConfig.Host = hostURL
clientConfig := generateKubeconfigFromRest(restConfig)
b, err := clientcmd.Write(clientConfig)
if err != nil {
return nil, err
}
kubeconfig = b
}
return kubeconfig, nil
}
func generateKubeconfigFromRest(config *rest.Config) clientcmdapi.Config {
clusters := make(map[string]*clientcmdapi.Cluster)
clusters["default-cluster"] = &clientcmdapi.Cluster{
Server: config.Host,
CertificateAuthorityData: config.CAData,
}
contexts := make(map[string]*clientcmdapi.Context)
contexts["default-context"] = &clientcmdapi.Context{
Cluster: "default-cluster",
Namespace: "default",
AuthInfo: "default",
}
authinfos := make(map[string]*clientcmdapi.AuthInfo)
authinfos["default"] = &clientcmdapi.AuthInfo{
ClientCertificateData: config.CertData,
ClientKeyData: config.KeyData,
}
clientConfig := clientcmdapi.Config{
Kind: "Config",
APIVersion: "v1",
Clusters: clusters,
Contexts: contexts,
CurrentContext: "default-context",
AuthInfos: authinfos,
}
return clientConfig
}
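
Serialized with clientcmd.Write, the config assembled above corresponds to a kubeconfig of roughly this shape (host, port, and key material are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    server: https://<node-ip>:<nodePort>    # rewritten for NodePort exposure
    certificate-authority-data: <CA data>
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: default
    user: default
current-context: default-context
users:
- name: default
  user:
    client-certificate-data: <cert data>
    client-key-data: <key data>
```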

View File

@@ -1 +0,0 @@
package cluster

466
cli/cmds/cluster_create.go Normal file
View File

@@ -0,0 +1,466 @@
package cmds
import (
"bytes"
"context"
"errors"
"fmt"
"net/url"
"os"
"strings"
"text/template"
"time"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/util/retry"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
"github.com/rancher/k3k/pkg/controller"
k3kcluster "github.com/rancher/k3k/pkg/controller/cluster"
"github.com/rancher/k3k/pkg/controller/kubeconfig"
)
type CreateConfig struct {
token string
clusterCIDR string
serviceCIDR string
servers int
agents int
serverArgs []string
agentArgs []string
serverEnvs []string
agentEnvs []string
labels []string
annotations []string
persistenceType string
storageClassName string
storageRequestSize string
version string
mode string
kubeconfigServerHost string
policy string
mirrorHostNodes bool
customCertsPath string
timeout time.Duration
}
func NewClusterCreateCmd(appCtx *AppContext) *cobra.Command {
createConfig := &CreateConfig{}
cmd := &cobra.Command{
Use: "create",
Short: "Create a new cluster.",
Example: "k3kcli cluster create [command options] NAME",
PreRunE: func(cmd *cobra.Command, args []string) error {
return validateCreateConfig(createConfig)
},
RunE: createAction(appCtx, createConfig),
Args: cobra.ExactArgs(1),
}
CobraFlagNamespace(appCtx, cmd.Flags())
createFlags(cmd, createConfig)
return cmd
}
func createAction(appCtx *AppContext, config *CreateConfig) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
ctx := context.Background()
client := appCtx.Client
name := args[0]
if name == k3kcluster.ClusterInvalidName {
return errors.New("invalid cluster name")
}
if config.mode == string(v1beta1.SharedClusterMode) && config.agents != 0 {
return errors.New("invalid flag, --agents flag is only allowed in virtual mode")
}
namespace := appCtx.Namespace(name)
if err := createNamespace(ctx, client, namespace, config.policy); err != nil {
return err
}
if strings.Contains(config.version, "+") {
orig := config.version
config.version = strings.ReplaceAll(config.version, "+", "-")
logrus.Warnf("Invalid K3s docker reference version: '%s'. Using '%s' instead", orig, config.version)
}
if config.token != "" {
logrus.Info("Creating cluster token secret")
obj := k3kcluster.TokenSecretObj(config.token, name, namespace)
if err := client.Create(ctx, &obj); err != nil {
return err
}
}
if config.customCertsPath != "" {
if err := CreateCustomCertsSecrets(ctx, name, namespace, config.customCertsPath, client); err != nil {
return err
}
}
logrus.Infof("Creating cluster '%s' in namespace '%s'", name, namespace)
cluster, err := newCluster(name, namespace, config)
if err != nil {
return err
}
cluster.Spec.Expose = &v1beta1.ExposeConfig{
NodePort: &v1beta1.NodePortConfig{},
}
// add Host IP address as an extra TLS-SAN to expose the k3k cluster
url, err := url.Parse(appCtx.RestConfig.Host)
if err != nil {
return err
}
host := strings.Split(url.Host, ":")
if config.kubeconfigServerHost != "" {
host = []string{config.kubeconfigServerHost}
}
cluster.Spec.TLSSANs = []string{host[0]}
if err := client.Create(ctx, cluster); err != nil {
if apierrors.IsAlreadyExists(err) {
logrus.Infof("Cluster '%s' already exists", name)
} else {
return err
}
}
if err := waitForClusterReconciled(ctx, client, cluster, config.timeout); err != nil {
return fmt.Errorf("failed to wait for cluster to be reconciled: %w", err)
}
clusterDetails, err := getClusterDetails(cluster)
if err != nil {
return fmt.Errorf("failed to get cluster details: %w", err)
}
logrus.Info(clusterDetails)
logrus.Infof("Waiting for cluster to be available..")
if err := waitForClusterReady(ctx, client, cluster, config.timeout); err != nil {
return fmt.Errorf("failed to wait for cluster to become ready (status: %s): %w", cluster.Status.Phase, err)
}
logrus.Infof("Extracting Kubeconfig for '%s' cluster", name)
// retry every 5s for at most 2m, or 25 times
availableBackoff := wait.Backoff{
Duration: 5 * time.Second,
Cap: 2 * time.Minute,
Steps: 25,
}
cfg := kubeconfig.New()
var kubeconfig *clientcmdapi.Config
if err := retry.OnError(availableBackoff, apierrors.IsNotFound, func() error {
kubeconfig, err = cfg.Generate(ctx, client, cluster, host[0], 0)
return err
}); err != nil {
return err
}
return writeKubeconfigFile(cluster, kubeconfig, "")
}
}
func newCluster(name, namespace string, config *CreateConfig) (*v1beta1.Cluster, error) {
var storageRequestSize *resource.Quantity
if config.storageRequestSize != "" {
parsed, err := resource.ParseQuantity(config.storageRequestSize)
if err != nil {
return nil, err
}
storageRequestSize = ptr.To(parsed)
}
cluster := &v1beta1.Cluster{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: parseKeyValuePairs(config.labels, "label"),
Annotations: parseKeyValuePairs(config.annotations, "annotation"),
},
TypeMeta: metav1.TypeMeta{
Kind: "Cluster",
APIVersion: "k3k.io/v1beta1",
},
Spec: v1beta1.ClusterSpec{
Servers: ptr.To(int32(config.servers)),
Agents: ptr.To(int32(config.agents)),
ClusterCIDR: config.clusterCIDR,
ServiceCIDR: config.serviceCIDR,
ServerArgs: config.serverArgs,
AgentArgs: config.agentArgs,
ServerEnvs: env(config.serverEnvs),
AgentEnvs: env(config.agentEnvs),
Version: config.version,
Mode: v1beta1.ClusterMode(config.mode),
Persistence: v1beta1.PersistenceConfig{
Type: v1beta1.PersistenceMode(config.persistenceType),
StorageClassName: ptr.To(config.storageClassName),
StorageRequestSize: storageRequestSize,
},
MirrorHostNodes: config.mirrorHostNodes,
},
}
if config.storageClassName == "" {
cluster.Spec.Persistence.StorageClassName = nil
}
if config.token != "" {
cluster.Spec.TokenSecretRef = &corev1.SecretReference{
Name: k3kcluster.TokenSecretName(name),
Namespace: namespace,
}
}
if config.customCertsPath != "" {
cluster.Spec.CustomCAs = &v1beta1.CustomCAs{
Enabled: true,
Sources: v1beta1.CredentialSources{
ClientCA: v1beta1.CredentialSource{
SecretName: controller.SafeConcatNameWithPrefix(cluster.Name, "client-ca"),
},
ServerCA: v1beta1.CredentialSource{
SecretName: controller.SafeConcatNameWithPrefix(cluster.Name, "server-ca"),
},
ETCDServerCA: v1beta1.CredentialSource{
SecretName: controller.SafeConcatNameWithPrefix(cluster.Name, "etcd-server-ca"),
},
ETCDPeerCA: v1beta1.CredentialSource{
SecretName: controller.SafeConcatNameWithPrefix(cluster.Name, "etcd-peer-ca"),
},
RequestHeaderCA: v1beta1.CredentialSource{
SecretName: controller.SafeConcatNameWithPrefix(cluster.Name, "request-header-ca"),
},
ServiceAccountToken: v1beta1.CredentialSource{
SecretName: controller.SafeConcatNameWithPrefix(cluster.Name, "service-account-token"),
},
},
}
}
return cluster, nil
}
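
As a sketch, invoking this with, say, servers=3, mode=shared, and persistence-type=dynamic yields a manifest along these lines (name and size are illustrative, field names as in the v1beta1 CRD):

```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster             # illustrative
  namespace: k3k-my-cluster
spec:
  servers: 3
  agents: 0
  mode: shared
  persistence:
    type: dynamic
    storageRequestSize: 10Gi   # only set when --storage-request-size is passed
```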
func env(envSlice []string) []corev1.EnvVar {
var envVars []corev1.EnvVar
for _, env := range envSlice {
keyValue := strings.SplitN(env, "=", 2) // SplitN so values may themselves contain "="
if len(keyValue) != 2 {
logrus.Fatalf("incorrect value for environment variable %s", env)
}
envVars = append(envVars, corev1.EnvVar{
Name: keyValue[0],
Value: keyValue[1],
})
}
return envVars
}
func waitForClusterReconciled(ctx context.Context, k8sClient client.Client, cluster *v1beta1.Cluster, timeout time.Duration) error {
return wait.PollUntilContextTimeout(ctx, time.Second, timeout, false, func(ctx context.Context) (bool, error) {
key := client.ObjectKeyFromObject(cluster)
if err := k8sClient.Get(ctx, key, cluster); err != nil {
return false, fmt.Errorf("failed to get resource: %w", err)
}
return cluster.Status.HostVersion != "", nil
})
}
func waitForClusterReady(ctx context.Context, k8sClient client.Client, cluster *v1beta1.Cluster, timeout time.Duration) error {
interval := 5 * time.Second
return wait.PollUntilContextTimeout(ctx, interval, timeout, true, func(ctx context.Context) (bool, error) {
key := client.ObjectKeyFromObject(cluster)
if err := k8sClient.Get(ctx, key, cluster); err != nil {
return false, fmt.Errorf("failed to get resource: %w", err)
}
// If resource ready -> stop polling
if cluster.Status.Phase == v1beta1.ClusterReady {
return true, nil
}
// If resource failed -> stop polling with an error
if cluster.Status.Phase == v1beta1.ClusterFailed {
return true, fmt.Errorf("cluster creation failed: %s", cluster.Status.Phase)
}
// Condition not met, continue polling.
return false, nil
})
}
func CreateCustomCertsSecrets(ctx context.Context, name, namespace, customCertsPath string, k8sclient client.Client) error {
customCAsMap := map[string]string{
"etcd-peer-ca": "/etcd/peer-ca",
"etcd-server-ca": "/etcd/server-ca",
"server-ca": "/server-ca",
"client-ca": "/client-ca",
"request-header-ca": "/request-header-ca",
"service-account-token": "/service",
}
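// Given customCertsPath=/certs, the loop below therefore expects files such as
// /certs/etcd/peer-ca.crt, /certs/etcd/peer-ca.key, /certs/server-ca.crt, ... and
// /certs/service.key (the service-account entry ships a key only, no certificate).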
for certName, fileName := range customCAsMap {
var (
certFilePath, keyFilePath string
cert, key []byte
err error
)
if certName != "service-account-token" {
certFilePath = customCertsPath + fileName + ".crt"
cert, err = os.ReadFile(certFilePath)
if err != nil {
return err
}
}
keyFilePath = customCertsPath + fileName + ".key"
key, err = os.ReadFile(keyFilePath)
if err != nil {
return err
}
certSecret := caCertSecret(certName, name, namespace, cert, key)
if err := k8sclient.Create(ctx, certSecret); err != nil {
return client.IgnoreAlreadyExists(err)
}
}
return nil
}
func caCertSecret(certName, clusterName, clusterNamespace string, cert, key []byte) *corev1.Secret {
return &corev1.Secret{
TypeMeta: metav1.TypeMeta{
Kind: "Secret",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: controller.SafeConcatNameWithPrefix(clusterName, certName),
Namespace: clusterNamespace,
},
Type: corev1.SecretTypeTLS,
Data: map[string][]byte{
corev1.TLSCertKey: cert,
corev1.TLSPrivateKeyKey: key,
},
}
}
func parseKeyValuePairs(pairs []string, pairType string) map[string]string {
resultMap := make(map[string]string)
for _, p := range pairs {
var k, v string
keyValue := strings.SplitN(p, "=", 2)
k = keyValue[0]
if len(keyValue) == 2 {
v = keyValue[1]
}
resultMap[k] = v
logrus.Debugf("Adding '%s=%s' %s to Cluster", k, v, pairType)
}
return resultMap
}
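// For example, --labels env=dev --labels owner=team-a produces {"env": "dev", "owner": "team-a"};
// SplitN with a limit of 2 keeps values that themselves contain "=".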
const clusterDetailsTemplate = `Cluster details:
Mode: {{ .Mode }}
Servers: {{ .Servers }}{{ if .Agents }}
Agents: {{ .Agents }}{{ end }}
Version: {{ if .Version }}{{ .Version }}{{ else }}{{ .HostVersion }}{{ end }} (Host: {{ .HostVersion }})
Persistence:
Type: {{.Persistence.Type}}{{ if .Persistence.StorageClassName }}
StorageClass: {{ .Persistence.StorageClassName }}{{ end }}{{ if .Persistence.StorageRequestSize }}
Size: {{ .Persistence.StorageRequestSize }}{{ end }}{{ if .Labels }}
Labels: {{ range $key, $value := .Labels }}
{{$key}}: {{$value}}{{ end }}{{ end }}{{ if .Annotations }}
Annotations: {{ range $key, $value := .Annotations }}
{{$key}}: {{$value}}{{ end }}{{ end }}`
func getClusterDetails(cluster *v1beta1.Cluster) (string, error) {
type templateData struct {
Mode v1beta1.ClusterMode
Servers int32
Agents int32
Version string
HostVersion string
Persistence struct {
Type v1beta1.PersistenceMode
StorageClassName string
StorageRequestSize string
}
Labels map[string]string
Annotations map[string]string
}
data := templateData{
Mode: cluster.Spec.Mode,
Servers: ptr.Deref(cluster.Spec.Servers, 0),
Agents: ptr.Deref(cluster.Spec.Agents, 0),
Version: cluster.Spec.Version,
HostVersion: cluster.Status.HostVersion,
Annotations: cluster.Annotations,
Labels: cluster.Labels,
}
data.Persistence.Type = cluster.Spec.Persistence.Type
data.Persistence.StorageClassName = ptr.Deref(cluster.Spec.Persistence.StorageClassName, "")
if srs := cluster.Spec.Persistence.StorageRequestSize; srs != nil {
data.Persistence.StorageRequestSize = srs.String()
}
tmpl, err := template.New("clusterDetails").Parse(clusterDetailsTemplate)
if err != nil {
return "", err
}
var buf bytes.Buffer
if err = tmpl.Execute(&buf, data); err != nil {
return "", err
}
return buf.String(), nil
}

View File

@@ -0,0 +1,65 @@
package cmds
import (
"errors"
"time"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/api/resource"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)
func createFlags(cmd *cobra.Command, cfg *CreateConfig) {
cmd.Flags().IntVar(&cfg.servers, "servers", 1, "number of servers")
cmd.Flags().IntVar(&cfg.agents, "agents", 0, "number of agents")
cmd.Flags().StringVar(&cfg.token, "token", "", "token of the cluster")
cmd.Flags().StringVar(&cfg.clusterCIDR, "cluster-cidr", "", "cluster CIDR")
cmd.Flags().StringVar(&cfg.serviceCIDR, "service-cidr", "", "service CIDR")
cmd.Flags().BoolVar(&cfg.mirrorHostNodes, "mirror-host-nodes", false, "Mirror Host Cluster Nodes")
cmd.Flags().StringVar(&cfg.persistenceType, "persistence-type", string(v1beta1.DynamicPersistenceMode), "persistence mode for the nodes (dynamic, ephemeral)")
cmd.Flags().StringVar(&cfg.storageClassName, "storage-class-name", "", "storage class name for dynamic persistence type")
cmd.Flags().StringVar(&cfg.storageRequestSize, "storage-request-size", "", "storage size for dynamic persistence type")
cmd.Flags().StringSliceVar(&cfg.serverArgs, "server-args", []string{}, "servers extra arguments")
cmd.Flags().StringSliceVar(&cfg.agentArgs, "agent-args", []string{}, "agents extra arguments")
cmd.Flags().StringSliceVar(&cfg.serverEnvs, "server-envs", []string{}, "servers extra Envs")
cmd.Flags().StringSliceVar(&cfg.agentEnvs, "agent-envs", []string{}, "agents extra Envs")
cmd.Flags().StringArrayVar(&cfg.labels, "labels", []string{}, "Labels to add to the cluster object (e.g. key=value)")
cmd.Flags().StringArrayVar(&cfg.annotations, "annotations", []string{}, "Annotations to add to the cluster object (e.g. key=value)")
cmd.Flags().StringVar(&cfg.version, "version", "", "k3s version")
cmd.Flags().StringVar(&cfg.mode, "mode", "shared", "k3k mode type (shared, virtual)")
cmd.Flags().StringVar(&cfg.kubeconfigServerHost, "kubeconfig-server", "", "override the kubeconfig server host")
cmd.Flags().StringVar(&cfg.policy, "policy", "", "The policy to create the cluster in")
cmd.Flags().StringVar(&cfg.customCertsPath, "custom-certs", "", "The path for custom certificate directory")
cmd.Flags().DurationVar(&cfg.timeout, "timeout", 3*time.Minute, "The timeout for waiting for the cluster to become ready (e.g., 10s, 5m, 1h).")
}
func validateCreateConfig(cfg *CreateConfig) error {
if cfg.servers <= 0 {
return errors.New("invalid number of servers")
}
if cfg.persistenceType != "" {
switch v1beta1.PersistenceMode(cfg.persistenceType) {
case v1beta1.EphemeralPersistenceMode, v1beta1.DynamicPersistenceMode:
// valid; fall through so the remaining flags are still validated
default:
return errors.New(`persistence-type should be one of "dynamic" or "ephemeral"`)
}
}
// an empty size is allowed; only validate the quantity when one was provided
if cfg.storageRequestSize != "" {
if _, err := resource.ParseQuantity(cfg.storageRequestSize); err != nil {
return errors.New(`invalid storage size, should be a valid resource quantity e.g "10Gi"`)
}
}
if cfg.mode != "" {
switch cfg.mode {
case string(v1beta1.VirtualClusterMode), string(v1beta1.SharedClusterMode):
default:
return errors.New(`mode should be one of "shared" or "virtual"`)
}
}
return nil
}

View File

@@ -0,0 +1,96 @@
package cmds
import (
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/utils/ptr"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)
func Test_printClusterDetails(t *testing.T) {
tests := []struct {
name string
cluster *v1beta1.Cluster
want string
wantErr bool
}{
{
name: "simple cluster",
cluster: &v1beta1.Cluster{
Spec: v1beta1.ClusterSpec{
Mode: v1beta1.SharedClusterMode,
Version: "123",
Persistence: v1beta1.PersistenceConfig{
Type: v1beta1.DynamicPersistenceMode,
},
},
Status: v1beta1.ClusterStatus{
HostVersion: "456",
},
},
want: `Cluster details:
Mode: shared
Servers: 0
Version: 123 (Host: 456)
Persistence:
Type: dynamic`,
},
{
name: "simple cluster with no version",
cluster: &v1beta1.Cluster{
Spec: v1beta1.ClusterSpec{
Mode: v1beta1.SharedClusterMode,
Persistence: v1beta1.PersistenceConfig{
Type: v1beta1.DynamicPersistenceMode,
},
},
Status: v1beta1.ClusterStatus{
HostVersion: "456",
},
},
want: `Cluster details:
Mode: shared
Servers: 0
Version: 456 (Host: 456)
Persistence:
Type: dynamic`,
},
{
name: "cluster with agents",
cluster: &v1beta1.Cluster{
Spec: v1beta1.ClusterSpec{
Mode: v1beta1.SharedClusterMode,
Agents: ptr.To[int32](3),
Persistence: v1beta1.PersistenceConfig{
Type: v1beta1.DynamicPersistenceMode,
StorageClassName: ptr.To("local-path"),
StorageRequestSize: ptr.To(resource.MustParse("3G")),
},
},
Status: v1beta1.ClusterStatus{
HostVersion: "456",
},
},
want: `Cluster details:
Mode: shared
Servers: 0
Agents: 3
Version: 456 (Host: 456)
Persistence:
Type: dynamic
StorageClass: local-path
Size: 3G`,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
clusterDetails, err := getClusterDetails(tt.cluster)
assert.NoError(t, err)
assert.Equal(t, tt.want, clusterDetails)
})
}
}

115
cli/cmds/cluster_delete.go Normal file
View File

@@ -0,0 +1,115 @@
package cmds
import (
"context"
"errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
k3kcluster "github.com/rancher/k3k/pkg/controller/cluster"
"github.com/rancher/k3k/pkg/controller/cluster/agent"
)
var keepData bool
func NewClusterDeleteCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "delete",
Short: "Delete an existing cluster.",
Example: "k3kcli cluster delete [command options] NAME",
RunE: delete(appCtx),
Args: cobra.ExactArgs(1),
}
CobraFlagNamespace(appCtx, cmd.Flags())
cmd.Flags().BoolVar(&keepData, "keep-data", false, "keeps persistence volumes created for the cluster after deletion")
return cmd
}
func delete(appCtx *AppContext) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
ctx := context.Background()
client := appCtx.Client
name := args[0]
if name == k3kcluster.ClusterInvalidName {
return errors.New("invalid cluster name")
}
namespace := appCtx.Namespace(name)
logrus.Infof("Deleting '%s' cluster in namespace '%s'", name, namespace)
cluster := v1beta1.Cluster{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
}
// keep bootstrap secrets and tokens if --keep-data flag is passed
if keepData {
// skip removing tokenSecret
if err := RemoveOwnerReferenceFromSecret(ctx, k3kcluster.TokenSecretName(cluster.Name), client, cluster); err != nil {
return err
}
// skip removing webhook secret
if err := RemoveOwnerReferenceFromSecret(ctx, agent.WebhookSecretName(cluster.Name), client, cluster); err != nil {
return err
}
} else {
matchingLabels := ctrlclient.MatchingLabels(map[string]string{"cluster": cluster.Name, "role": "server"})
listOpts := ctrlclient.ListOptions{Namespace: cluster.Namespace}
matchingLabels.ApplyToList(&listOpts)
deleteOpts := &ctrlclient.DeleteAllOfOptions{ListOptions: listOpts}
if err := client.DeleteAllOf(ctx, &v1.PersistentVolumeClaim{}, deleteOpts); err != nil {
return ctrlclient.IgnoreNotFound(err)
}
}
if err := client.Delete(ctx, &cluster); err != nil {
return ctrlclient.IgnoreNotFound(err)
}
return nil
}
}
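// The PVC cleanup above keys off two labels; the volumes it deletes look roughly like
// (cluster name illustrative):
//
//   kind: PersistentVolumeClaim
//   metadata:
//     namespace: k3k-my-cluster
//     labels:
//       cluster: my-cluster
//       role: server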
func RemoveOwnerReferenceFromSecret(ctx context.Context, name string, cl ctrlclient.Client, cluster v1beta1.Cluster) error {
var secret v1.Secret
key := types.NamespacedName{
Name: name,
Namespace: cluster.Namespace,
}
if err := cl.Get(ctx, key, &secret); err != nil {
if apierrors.IsNotFound(err) {
logrus.Warnf("%s secret is not found", name)
return nil
}
return err
}
if controllerutil.HasControllerReference(&secret) {
if err := controllerutil.RemoveOwnerReference(&cluster, &secret, cl.Scheme()); err != nil {
return err
}
return cl.Update(ctx, &secret)
}
return nil
}

52
cli/cmds/cluster_list.go Normal file
View File

@@ -0,0 +1,52 @@
package cmds
import (
"context"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/types"
"k8s.io/cli-runtime/pkg/printers"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)
func NewClusterListCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "list",
Short: "List all existing clusters.",
Example: "k3kcli cluster list [command options]",
RunE: list(appCtx),
Args: cobra.NoArgs,
}
CobraFlagNamespace(appCtx, cmd.Flags())
return cmd
}
func list(appCtx *AppContext) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
ctx := context.Background()
client := appCtx.Client
var clusters v1beta1.ClusterList
if err := client.List(ctx, &clusters, ctrlclient.InNamespace(appCtx.namespace)); err != nil {
return err
}
crd := &apiextensionsv1.CustomResourceDefinition{}
if err := client.Get(ctx, types.NamespacedName{Name: "clusters.k3k.io"}, crd); err != nil {
return err
}
items := toPointerSlice(clusters.Items)
table := createTable(crd, items)
printer := printers.NewTablePrinter(printers.PrintOptions{WithNamespace: true})
return printer.PrintObj(table, cmd.OutOrStdout())
}
}

198
cli/cmds/cluster_update.go Normal file
View File

@@ -0,0 +1,198 @@
package cmds
import (
"bufio"
"errors"
"fmt"
"os"
"strings"
"github.com/blang/semver/v4"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"k8s.io/utils/ptr"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
k3kcluster "github.com/rancher/k3k/pkg/controller/cluster"
)
type UpdateConfig struct {
servers int32
agents int32
labels []string
annotations []string
version string
noConfirm bool
}
func NewClusterUpdateCmd(appCtx *AppContext) *cobra.Command {
updateConfig := &UpdateConfig{}
cmd := &cobra.Command{
Use: "update",
Short: "Update existing cluster",
Example: "k3kcli cluster update [command options] NAME",
RunE: updateAction(appCtx, updateConfig),
Args: cobra.ExactArgs(1),
}
CobraFlagNamespace(appCtx, cmd.Flags())
updateFlags(cmd, updateConfig)
return cmd
}
func updateFlags(cmd *cobra.Command, cfg *UpdateConfig) {
cmd.Flags().Int32Var(&cfg.servers, "servers", 1, "number of servers")
cmd.Flags().Int32Var(&cfg.agents, "agents", 0, "number of agents")
cmd.Flags().StringArrayVar(&cfg.labels, "labels", []string{}, "Labels to add to the cluster object (e.g. key=value)")
cmd.Flags().StringArrayVar(&cfg.annotations, "annotations", []string{}, "Annotations to add to the cluster object (e.g. key=value)")
cmd.Flags().StringVar(&cfg.version, "version", "", "k3s version")
cmd.Flags().BoolVarP(&cfg.noConfirm, "no-confirm", "y", false, "Skip interactive approval before applying update")
}
func updateAction(appCtx *AppContext, config *UpdateConfig) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
client := appCtx.Client
name := args[0]
if name == k3kcluster.ClusterInvalidName {
return errors.New("invalid cluster name")
}
namespace := appCtx.Namespace(name)
var virtualCluster v1beta1.Cluster
clusterKey := types.NamespacedName{Name: name, Namespace: appCtx.namespace}
if err := appCtx.Client.Get(ctx, clusterKey, &virtualCluster); err != nil {
if apierrors.IsNotFound(err) {
return fmt.Errorf("cluster %s not found in namespace %s", name, appCtx.namespace)
}
return fmt.Errorf("failed to fetch cluster: %w", err)
}
var changes []change
if cmd.Flags().Changed("version") && config.version != virtualCluster.Spec.Version {
currentVersion := virtualCluster.Spec.Version
if currentVersion == "" {
currentVersion = virtualCluster.Status.HostVersion
}
currentVersionSemver, err := semver.ParseTolerant(currentVersion)
if err != nil {
return fmt.Errorf("failed to parse current cluster version %w", err)
}
newVersionSemver, err := semver.ParseTolerant(config.version)
if err != nil {
return fmt.Errorf("failed to parse new cluster version %w", err)
}
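// For example, moving from v1.30.2-k3s1 to v1.31.1-k3s1 passes the check below, while the
// reverse is rejected; semver.ParseTolerant accepts the leading "v" and the "-k3s1" suffix.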
if newVersionSemver.LT(currentVersionSemver) {
return fmt.Errorf("downgrading cluster version is not supported")
}
changes = append(changes, change{"Version", currentVersion, config.version})
virtualCluster.Spec.Version = config.version
}
if cmd.Flags().Changed("servers") {
var oldServers int32
if virtualCluster.Spec.Servers != nil {
oldServers = *virtualCluster.Spec.Servers
}
if oldServers != config.servers {
changes = append(changes, change{"Servers", fmt.Sprintf("%d", oldServers), fmt.Sprintf("%d", config.servers)})
virtualCluster.Spec.Servers = ptr.To(config.servers)
}
}
if cmd.Flags().Changed("agents") {
var oldAgents int32
if virtualCluster.Spec.Agents != nil {
oldAgents = *virtualCluster.Spec.Agents
}
if oldAgents != config.agents {
changes = append(changes, change{"Agents", fmt.Sprintf("%d", oldAgents), fmt.Sprintf("%d", config.agents)})
virtualCluster.Spec.Agents = ptr.To(config.agents)
}
}
var labelChanges []change
if cmd.Flags().Changed("labels") {
oldLabels := labels.Merge(nil, virtualCluster.Labels)
virtualCluster.Labels = labels.Merge(virtualCluster.Labels, parseKeyValuePairs(config.labels, "label"))
labelChanges = diffMaps(oldLabels, virtualCluster.Labels)
}
var annotationChanges []change
if cmd.Flags().Changed("annotations") {
oldAnnotations := labels.Merge(nil, virtualCluster.Annotations)
virtualCluster.Annotations = labels.Merge(virtualCluster.Annotations, parseKeyValuePairs(config.annotations, "annotation"))
annotationChanges = diffMaps(oldAnnotations, virtualCluster.Annotations)
}
if len(changes) == 0 && len(labelChanges) == 0 && len(annotationChanges) == 0 {
logrus.Info("No changes detected, skipping update")
return nil
}
logrus.Infof("Updating cluster '%s' in namespace '%s'", name, namespace)
printDiff(changes)
printMapDiff("Labels", labelChanges)
printMapDiff("Annotations", annotationChanges)
if !config.noConfirm {
if !confirmClusterUpdate(&virtualCluster) {
return nil
}
}
if err := client.Update(ctx, &virtualCluster); err != nil {
return err
}
logrus.Info("Cluster updated successfully")
return nil
}
}
func confirmClusterUpdate(cluster *v1beta1.Cluster) bool {
clusterDetails, err := getClusterDetails(cluster)
if err != nil {
logrus.Fatalf("unable to get cluster details: %v", err)
}
fmt.Printf("\nNew %s\n", clusterDetails)
fmt.Printf("\nDo you want to update the cluster? [y/N]: ")
scanner := bufio.NewScanner(os.Stdin)
if !scanner.Scan() {
if err := scanner.Err(); err != nil {
logrus.Errorf("Error reading input: %v", err)
}
return false
}
fmt.Printf("\n")
return strings.ToLower(strings.TrimSpace(scanner.Text())) == "y"
}

53
cli/cmds/diff_printer.go Normal file
View File

@@ -0,0 +1,53 @@
package cmds
import "fmt"
type change struct {
field string
oldValue string
newValue string
}
func printDiff(changes []change) {
for _, c := range changes {
if c.oldValue == c.newValue {
continue
}
fmt.Printf("%s: %s -> %s\n", c.field, c.oldValue, c.newValue)
}
}
func printMapDiff(title string, changes []change) {
if len(changes) == 0 {
return
}
fmt.Printf("%s:\n", title)
for _, c := range changes {
switch c.oldValue {
case "":
fmt.Printf(" %s=%s (new)\n", c.field, c.newValue)
default:
fmt.Printf(" %s=%s -> %s=%s\n", c.field, c.oldValue, c.field, c.newValue)
}
}
}
func diffMaps(oldMap, newMap map[string]string) []change {
var changes []change
// Check for new and changed keys
for k, newVal := range newMap {
if oldVal, exists := oldMap[k]; exists {
if oldVal != newVal {
changes = append(changes, change{k, oldVal, newVal})
}
} else {
changes = append(changes, change{k, "", newVal})
}
}
return changes
}

153
cli/cmds/kubeconfig.go Normal file
View File

@@ -0,0 +1,153 @@
package cmds
import (
"context"
"net/url"
"os"
"path/filepath"
"strings"
"time"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apiserver/pkg/authentication/user"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/util/retry"
apierrors "k8s.io/apimachinery/pkg/api/errors"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
"github.com/rancher/k3k/pkg/controller"
"github.com/rancher/k3k/pkg/controller/certs"
"github.com/rancher/k3k/pkg/controller/kubeconfig"
)
type GenerateKubeconfigConfig struct {
name string
configName string
cn string
org []string
altNames []string
expirationDays int64
kubeconfigServerHost string
}
func NewKubeconfigCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "kubeconfig",
Short: "Manage kubeconfig for clusters.",
}
cmd.AddCommand(
NewKubeconfigGenerateCmd(appCtx),
)
return cmd
}
func NewKubeconfigGenerateCmd(appCtx *AppContext) *cobra.Command {
cfg := &GenerateKubeconfigConfig{}
cmd := &cobra.Command{
Use: "generate",
Short: "Generate kubeconfig for clusters.",
RunE: generate(appCtx, cfg),
Args: cobra.NoArgs,
}
CobraFlagNamespace(appCtx, cmd.Flags())
generateKubeconfigFlags(cmd, cfg)
return cmd
}
func generateKubeconfigFlags(cmd *cobra.Command, cfg *GenerateKubeconfigConfig) {
cmd.Flags().StringVar(&cfg.name, "name", "", "cluster name")
cmd.Flags().StringVar(&cfg.configName, "config-name", "", "the name of the generated kubeconfig file")
cmd.Flags().StringVar(&cfg.cn, "cn", controller.AdminCommonName, "Common name (CN) of the generated certificates for the kubeconfig")
cmd.Flags().StringSliceVar(&cfg.org, "org", nil, "Organization name (ORG) of the generated certificates for the kubeconfig")
cmd.Flags().StringSliceVar(&cfg.altNames, "altNames", nil, "altNames of the generated certificates for the kubeconfig")
cmd.Flags().Int64Var(&cfg.expirationDays, "expiration-days", 365, "Expiration date of the certificates used for the kubeconfig")
cmd.Flags().StringVar(&cfg.kubeconfigServerHost, "kubeconfig-server", "", "override the kubeconfig server host")
}
func generate(appCtx *AppContext, cfg *GenerateKubeconfigConfig) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
ctx := context.Background()
client := appCtx.Client
clusterKey := types.NamespacedName{
Name: cfg.name,
Namespace: appCtx.Namespace(cfg.name),
}
var cluster v1beta1.Cluster
if err := client.Get(ctx, clusterKey, &cluster); err != nil {
return err
}
url, err := url.Parse(appCtx.RestConfig.Host)
if err != nil {
return err
}
host := strings.Split(url.Host, ":")
if cfg.kubeconfigServerHost != "" {
host = []string{cfg.kubeconfigServerHost}
cfg.altNames = append(cfg.altNames, cfg.kubeconfigServerHost)
}
certAltNames := certs.AddSANs(cfg.altNames)
if len(cfg.org) == 0 {
cfg.org = []string{user.SystemPrivilegedGroup}
}
kubeCfg := kubeconfig.KubeConfig{
CN: cfg.cn,
ORG: cfg.org,
ExpiryDate: time.Hour * 24 * time.Duration(cfg.expirationDays),
AltNames: certAltNames,
}
logrus.Infof("waiting for cluster to be available..")
var kubeconfig *clientcmdapi.Config
if err := retry.OnError(controller.Backoff, apierrors.IsNotFound, func() error {
kubeconfig, err = kubeCfg.Generate(ctx, client, &cluster, host[0], 0)
return err
}); err != nil {
return err
}
return writeKubeconfigFile(&cluster, kubeconfig, cfg.configName)
}
}
func writeKubeconfigFile(cluster *v1beta1.Cluster, kubeconfig *clientcmdapi.Config, configName string) error {
if configName == "" {
configName = cluster.Namespace + "-" + cluster.Name + "-kubeconfig.yaml"
}
pwd, err := os.Getwd()
if err != nil {
return err
}
logrus.Infof(`You can start using the cluster with:
export KUBECONFIG=%s
kubectl cluster-info
`, filepath.Join(pwd, configName))
kubeconfigData, err := clientcmd.Write(*kubeconfig)
if err != nil {
return err
}
return os.WriteFile(configName, kubeconfigData, 0o644)
}

20
cli/cmds/policy.go Normal file
View File

@@ -0,0 +1,20 @@
package cmds
import (
"github.com/spf13/cobra"
)
func NewPolicyCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "policy",
Short: "K3k policy command.",
}
cmd.AddCommand(
NewPolicyCreateCmd(appCtx),
NewPolicyDeleteCmd(appCtx),
NewPolicyListCmd(appCtx),
)
return cmd
}

183
cli/cmds/policy_create.go Normal file
View File

@@ -0,0 +1,183 @@
package cmds
import (
"context"
"errors"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
"github.com/rancher/k3k/pkg/controller/policy"
)
type VirtualClusterPolicyCreateConfig struct {
mode string
labels []string
annotations []string
namespaces []string
overwrite bool
}
func NewPolicyCreateCmd(appCtx *AppContext) *cobra.Command {
config := &VirtualClusterPolicyCreateConfig{}
cmd := &cobra.Command{
Use: "create",
Short: "Create a new policy.",
Example: "k3kcli policy create [command options] NAME",
PreRunE: func(cmd *cobra.Command, args []string) error {
switch config.mode {
case string(v1beta1.VirtualClusterMode), string(v1beta1.SharedClusterMode):
return nil
default:
return errors.New(`mode should be one of "shared" or "virtual"`)
}
},
RunE: policyCreateAction(appCtx, config),
Args: cobra.ExactArgs(1),
}
cmd.Flags().StringVar(&config.mode, "mode", "shared", "The allowed mode type of the policy")
cmd.Flags().StringArrayVar(&config.labels, "labels", []string{}, "Labels to add to the policy object (e.g. key=value)")
cmd.Flags().StringArrayVar(&config.annotations, "annotations", []string{}, "Annotations to add to the policy object (e.g. key=value)")
cmd.Flags().StringSliceVar(&config.namespaces, "namespace", []string{}, "The namespaces where to bind the policy")
cmd.Flags().BoolVar(&config.overwrite, "overwrite", false, "Overwrite namespace binding of existing policy")
return cmd
}
func policyCreateAction(appCtx *AppContext, config *VirtualClusterPolicyCreateConfig) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
ctx := context.Background()
client := appCtx.Client
policyName := args[0]
_, err := createPolicy(ctx, client, config, policyName)
if err != nil {
return err
}
return bindPolicyToNamespaces(ctx, client, config, policyName)
}
}
func createNamespace(ctx context.Context, client client.Client, name, policyName string) error {
ns := &v1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
if policyName != "" {
ns.Labels = map[string]string{
policy.PolicyNameLabelKey: policyName,
}
}
if err := client.Get(ctx, types.NamespacedName{Name: name}, ns); err != nil {
if !apierrors.IsNotFound(err) {
return err
}
logrus.Infof(`Creating namespace '%s'`, name)
if err := client.Create(ctx, ns); err != nil {
return err
}
}
return nil
}
func createPolicy(ctx context.Context, client client.Client, config *VirtualClusterPolicyCreateConfig, policyName string) (*v1beta1.VirtualClusterPolicy, error) {
logrus.Infof("Creating policy '%s'", policyName)
policy := &v1beta1.VirtualClusterPolicy{
ObjectMeta: metav1.ObjectMeta{
Name: policyName,
Labels: parseKeyValuePairs(config.labels, "label"),
Annotations: parseKeyValuePairs(config.annotations, "annotation"),
},
TypeMeta: metav1.TypeMeta{
Kind: "VirtualClusterPolicy",
APIVersion: "k3k.io/v1beta1",
},
Spec: v1beta1.VirtualClusterPolicySpec{
AllowedMode: v1beta1.ClusterMode(config.mode),
},
}
if err := client.Create(ctx, policy); err != nil {
if !apierrors.IsAlreadyExists(err) {
return nil, err
}
logrus.Infof("Policy '%s' already exists", policyName)
}
return policy, nil
}
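// The created object is small; for a default invocation it corresponds to roughly this
// manifest (name illustrative):
//
//   apiVersion: k3k.io/v1beta1
//   kind: VirtualClusterPolicy
//   metadata:
//     name: my-policy
//   spec:
//     allowedMode: shared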
func bindPolicyToNamespaces(ctx context.Context, client client.Client, config *VirtualClusterPolicyCreateConfig, policyName string) error {
var errs []error
for _, namespace := range config.namespaces {
var ns v1.Namespace
if err := client.Get(ctx, types.NamespacedName{Name: namespace}, &ns); err != nil {
if apierrors.IsNotFound(err) {
logrus.Warnf(`Namespace '%s' not found, skipping`, namespace)
} else {
errs = append(errs, err)
}
continue
}
if ns.Labels == nil {
ns.Labels = map[string]string{}
}
oldPolicy := ns.Labels[policy.PolicyNameLabelKey]
// same policy found, no need to update
if oldPolicy == policyName {
logrus.Debugf(`Policy '%s' already bound to namespace '%s'`, policyName, namespace)
continue
}
// no old policy, safe to update
if oldPolicy == "" {
ns.Labels[policy.PolicyNameLabelKey] = policyName
if err := client.Update(ctx, &ns); err != nil {
errs = append(errs, err)
} else {
logrus.Infof(`Added policy '%s' to namespace '%s'`, policyName, namespace)
}
continue
}
// different policy, warn or check for overwrite flag
if oldPolicy != policyName {
if config.overwrite {
logrus.Infof(`Found policy '%s' bound to namespace '%s'. Overwriting it with '%s'`, oldPolicy, namespace, policyName)
ns.Labels[policy.PolicyNameLabelKey] = policyName
if err := client.Update(ctx, &ns); err != nil {
errs = append(errs, err)
} else {
logrus.Infof(`Added policy '%s' to namespace '%s'`, policyName, namespace)
}
} else {
logrus.Warnf(`Found policy '%s' bound to namespace '%s'. Skipping. To overwrite it use the --overwrite flag`, oldPolicy, namespace)
}
}
}
return errors.Join(errs...)
}
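// Binding is purely a namespace label; a bound namespace ends up looking like the sketch
// below. The concrete key comes from the policy.PolicyNameLabelKey constant, whose value
// is not shown in this diff, so the one here is illustrative:
//
//   apiVersion: v1
//   kind: Namespace
//   metadata:
//     name: team-a
//     labels:
//       policy.k3k.io/policy-name: my-policy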

47
cli/cmds/policy_delete.go Normal file
View File

@@ -0,0 +1,47 @@
package cmds
import (
"context"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)
func NewPolicyDeleteCmd(appCtx *AppContext) *cobra.Command {
return &cobra.Command{
Use: "delete",
Short: "Delete an existing policy.",
Example: "k3kcli policy delete [command options] NAME",
RunE: policyDeleteAction(appCtx),
Args: cobra.ExactArgs(1),
}
}
func policyDeleteAction(appCtx *AppContext) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
ctx := context.Background()
client := appCtx.Client
name := args[0]
policy := &v1beta1.VirtualClusterPolicy{}
policy.Name = name
if err := client.Delete(ctx, policy); err != nil {
if !apierrors.IsNotFound(err) {
return err
}
logrus.Warnf("Policy '%s' not found", name)
return nil
}
logrus.Infof("Policy '%s' deleted", name)
return nil
}
}

47
cli/cmds/policy_list.go Normal file
View File

@@ -0,0 +1,47 @@
package cmds
import (
"context"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/types"
"k8s.io/cli-runtime/pkg/printers"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)
func NewPolicyListCmd(appCtx *AppContext) *cobra.Command {
return &cobra.Command{
Use: "list",
Short: "List all existing policies.",
Example: "k3kcli policy list [command options]",
RunE: policyList(appCtx),
Args: cobra.NoArgs,
}
}
func policyList(appCtx *AppContext) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
ctx := context.Background()
client := appCtx.Client
var policies v1beta1.VirtualClusterPolicyList
if err := client.List(ctx, &policies); err != nil {
return err
}
crd := &apiextensionsv1.CustomResourceDefinition{}
if err := client.Get(ctx, types.NamespacedName{Name: "virtualclusterpolicies.k3k.io"}, crd); err != nil {
return err
}
items := toPointerSlice(policies.Items)
table := createTable(crd, items)
printer := printers.NewTablePrinter(printers.PrintOptions{})
return printer.PrintObj(table, cmd.OutOrStdout())
}
}

View File

@@ -1,42 +1,120 @@
package cmds
import (
"fmt"
"strings"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/spf13/viper"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"sigs.k8s.io/controller-runtime/pkg/client"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
"github.com/rancher/k3k/pkg/buildinfo"
)
var (
debug bool
Kubeconfig string
CommonFlags = []cli.Flag{
cli.StringFlag{
Name: "kubeconfig",
EnvVar: "KUBECONFIG",
Usage: "Kubeconfig path",
Destination: &Kubeconfig,
},
}
)
type AppContext struct {
RestConfig *rest.Config
Client client.Client
func NewApp() *cli.App {
app := cli.NewApp()
app.Name = "k3kcli"
app.Usage = "CLI for K3K"
app.Flags = []cli.Flag{
cli.BoolFlag{
Name: "debug",
Usage: "Turn on debug logs",
Destination: &debug,
EnvVar: "K3K_DEBUG",
},
}
app.Before = func(clx *cli.Context) error {
if debug {
logrus.SetLevel(logrus.DebugLevel)
}
return nil
}
return app
// Global flags
Debug bool
Kubeconfig string
namespace string
}
func NewRootCmd() *cobra.Command {
appCtx := &AppContext{}
rootCmd := &cobra.Command{
SilenceUsage: true,
Use: "k3kcli",
Short: "CLI for K3K.",
Version: buildinfo.Version,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
InitializeConfig(cmd)
if appCtx.Debug {
logrus.SetLevel(logrus.DebugLevel)
}
restConfig, err := loadRESTConfig(appCtx.Kubeconfig)
if err != nil {
return err
}
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
_ = v1beta1.AddToScheme(scheme)
_ = apiextensionsv1.AddToScheme(scheme)
ctrlClient, err := client.New(restConfig, client.Options{Scheme: scheme})
if err != nil {
return err
}
appCtx.RestConfig = restConfig
appCtx.Client = ctrlClient
return nil
},
DisableAutoGenTag: true,
}
rootCmd.PersistentFlags().StringVar(&appCtx.Kubeconfig, "kubeconfig", "", "kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)")
rootCmd.PersistentFlags().BoolVar(&appCtx.Debug, "debug", false, "Turn on debug logs")
rootCmd.AddCommand(
NewClusterCmd(appCtx),
NewPolicyCmd(appCtx),
NewKubeconfigCmd(appCtx),
)
return rootCmd
}
func (ctx *AppContext) Namespace(name string) string {
if ctx.namespace != "" {
return ctx.namespace
}
return "k3k-" + name
}
func loadRESTConfig(kubeconfig string) (*rest.Config, error) {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
configOverrides := &clientcmd.ConfigOverrides{}
if kubeconfig != "" {
loadingRules.ExplicitPath = kubeconfig
}
kubeConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, configOverrides)
return kubeConfig.ClientConfig()
}
func CobraFlagNamespace(appCtx *AppContext, flag *pflag.FlagSet) {
flag.StringVarP(&appCtx.namespace, "namespace", "n", "", "namespace of the k3k cluster")
}
func InitializeConfig(cmd *cobra.Command) {
viper.SetEnvKeyReplacer(strings.NewReplacer("-", "_"))
viper.AutomaticEnv()
// Bind the current command's flags to viper
cmd.Flags().VisitAll(func(f *pflag.Flag) {
// Apply the viper config value to the flag when the flag is not set and viper has a value
if !f.Changed && viper.IsSet(f.Name) {
val := viper.Get(f.Name)
_ = cmd.Flags().Set(f.Name, fmt.Sprintf("%v", val))
}
})
}
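In practice this means an unset flag can be fed from the environment: with the `-` to `_` replacer and `AutomaticEnv`, exporting `KUBECONFIG` populates `--kubeconfig` whenever the flag was not passed explicitly.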

104
cli/cmds/table_printer.go Normal file
View File

@@ -0,0 +1,104 @@
package cmds
import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/util/jsonpath"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// createTable creates a table to print from the printerColumns defined in the CRD spec, plus a Name column at the beginning
func createTable[T runtime.Object](crd *apiextensionsv1.CustomResourceDefinition, objs []T) *metav1.Table {
printerColumns := getPrinterColumnsFromCRD(crd)
return &metav1.Table{
TypeMeta: metav1.TypeMeta{APIVersion: "meta.k8s.io/v1", Kind: "Table"},
ColumnDefinitions: convertToTableColumns(printerColumns),
Rows: createTableRows(objs, printerColumns),
}
}
func getPrinterColumnsFromCRD(crd *apiextensionsv1.CustomResourceDefinition) []apiextensionsv1.CustomResourceColumnDefinition {
printerColumns := []apiextensionsv1.CustomResourceColumnDefinition{
{Name: "Name", Type: "string", Format: "name", Description: "Name of the Resource", JSONPath: ".metadata.name"},
}
for _, version := range crd.Spec.Versions {
if version.Name == "v1beta1" {
printerColumns = append(printerColumns, version.AdditionalPrinterColumns...)
break
}
}
return printerColumns
}
func convertToTableColumns(printerColumns []apiextensionsv1.CustomResourceColumnDefinition) []metav1.TableColumnDefinition {
var columnDefinitions []metav1.TableColumnDefinition
for _, col := range printerColumns {
columnDefinitions = append(columnDefinitions, metav1.TableColumnDefinition{
Name: col.Name,
Type: col.Type,
Format: col.Format,
Description: col.Description,
Priority: col.Priority,
})
}
return columnDefinitions
}
func createTableRows[T runtime.Object](objs []T, printerColumns []apiextensionsv1.CustomResourceColumnDefinition) []metav1.TableRow {
var rows []metav1.TableRow
for _, obj := range objs {
objMap, err := runtime.DefaultUnstructuredConverter.ToUnstructured(&obj)
if err != nil {
rows = append(rows, metav1.TableRow{Cells: []any{"<error: " + err.Error() + ">"}})
continue
}
rows = append(rows, metav1.TableRow{
Cells: buildRowCells(objMap, printerColumns),
Object: runtime.RawExtension{Object: obj},
})
}
return rows
}
func buildRowCells(objMap map[string]any, printerColumns []apiextensionsv1.CustomResourceColumnDefinition) []any {
var cells []any
for _, printCol := range printerColumns {
j := jsonpath.New(printCol.Name)
err := j.Parse("{" + printCol.JSONPath + "}")
if err != nil {
cells = append(cells, "<error>")
continue
}
results, err := j.FindResults(objMap)
if err != nil || len(results) == 0 || len(results[0]) == 0 {
cells = append(cells, "<none>")
continue
}
cells = append(cells, results[0][0].Interface())
}
return cells
}
func toPointerSlice[T any](v []T) []*T {
vPtr := make([]*T, len(v))
for i := range v {
vPtr[i] = &v[i]
}
return vPtr
}
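A minimal usage sketch for these helpers (not part of the diff): it assumes a caller in the same `cmds` package, since `createTable` is unexported, with the Cluster CRD and items already fetched; `printers` refers to `k8s.io/cli-runtime/pkg/printers`.

```go
// printClusterTable is a hypothetical caller: it renders Cluster objects
// using the CRD's v1beta1 printer columns and writes a kubectl-style table.
func printClusterTable(crd *apiextensionsv1.CustomResourceDefinition, clusters []v1beta1.Cluster) error {
	// createTable expects runtime.Objects, so convert the items to pointers first.
	table := createTable(crd, toPointerSlice(clusters))

	printer := printers.NewTablePrinter(printers.PrintOptions{})

	return printer.PrintObj(table, os.Stdout)
}
```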


@@ -1,28 +1,14 @@
package main

import (
	"os"

	"github.com/sirupsen/logrus"

	"github.com/rancher/k3k/cli/cmds"
)

func main() {
	app := cmds.NewRootCmd()
	if err := app.Execute(); err != nil {
		logrus.Fatal(err)
	}
}

docs/advanced-usage.md Normal file

@@ -0,0 +1,132 @@
# Advanced Usage
This document provides advanced usage information for k3k, including detailed use cases and explanations of the `Cluster` resource fields for customization.
## Customizing the Cluster Resource
The `Cluster` resource provides a variety of fields for customizing the behavior of your virtual clusters. You can check the [CRD documentation](./crds/crds.md) for the full specs.
**Note:** Most of these customization options can also be configured using the `k3kcli` tool. Refer to the [k3kcli](./cli/k3kcli.md) documentation for more details.
This example creates a "shared" mode K3k cluster with:
- 3 servers
- K3s version v1.31.3-k3s1
- Custom network configuration
- Deployment on specific nodes with the `nodeSelector`
- `kube-api` exposed using an ingress
- Custom K3s `serverArgs`
- ETCD data persisted using a `PVC`
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: my-virtual-cluster
  namespace: my-namespace
spec:
  mode: shared
  version: v1.31.3-k3s1
  servers: 3
  tlsSANs:
    - my-cluster.example.com
  nodeSelector:
    disktype: ssd
  expose:
    ingress:
      ingressClassName: nginx
      annotations:
        nginx.ingress.kubernetes.io/ssl-passthrough: "true"
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
  clusterCIDR: 10.42.0.0/16
  serviceCIDR: 10.43.0.0/16
  clusterDNS: 10.43.0.10
  serverArgs:
    - --tls-san=my-cluster.example.com
  persistence:
    type: dynamic
    storageClassName: local-path
```
### `mode`
The `mode` field specifies the cluster provisioning mode, which can be either `shared` or `virtual`. The default mode is `shared`.
* **`shared` mode:** In this mode, the virtual cluster shares the host cluster's resources and networking. This mode is suitable for lightweight workloads and development environments where isolation is not a primary concern.
* **`virtual` mode:** In this mode, the virtual cluster runs as a separate K3s cluster within the host cluster. This mode provides stronger isolation and is suitable for production workloads or when dedicated resources are required.
### `version`
The `version` field specifies the Kubernetes version to be used by the virtual nodes. If not specified, K3k will use the same K3s version as the host cluster. For example, if the host cluster is running Kubernetes v1.31.3, K3k will use the corresponding K3s version (e.g., `v1.31.3-k3s1`).
### `servers`
The `servers` field specifies the number of K3s server nodes to deploy for the virtual cluster. The default value is 1.
### `agents`
The `agents` field specifies the number of K3s agent nodes to deploy for the virtual cluster. The default value is 0.
**Note:** In `shared` mode, this field is ignored, as the Virtual Kubelet acts as the agent, and there are no K3s worker nodes.
### `nodeSelector`
The `nodeSelector` field allows you to specify a node selector that will be applied to all server/agent pods. In `shared` mode, the node selector will also be applied to the workloads.
### `expose`
The `expose` field contains options for exposing the API server of the virtual cluster. By default, the API server is only exposed as a `ClusterIP`, which is relatively secure but difficult to access from outside the cluster.
You can use the `expose` field to enable exposure via `NodePort`, `LoadBalancer`, or `Ingress`.
In the example above, the cluster is exposed through an NGINX ingress controller, which must be configured with the `--enable-ssl-passthrough` flag.
### `clusterCIDR`
The `clusterCIDR` field specifies the CIDR range for the pods of the cluster. The default value is `10.42.0.0/16` in shared mode, and `10.52.0.0/16` in virtual mode.
### `serviceCIDR`
The `serviceCIDR` field specifies the CIDR range for the services in the cluster. The default value is `10.43.0.0/16` in shared mode, and `10.53.0.0/16` in virtual mode.
**Note:** In `shared` mode, the `serviceCIDR` should match the host cluster's `serviceCIDR` to prevent conflicts. In `virtual` mode, both `serviceCIDR` and `clusterCIDR` should differ from the host cluster's ranges.
### `clusterDNS`
The `clusterDNS` field specifies the IP address for the CoreDNS service. It needs to be in the range provided by `serviceCIDR`. The default value is `10.43.0.10`.
### `serverArgs`
The `serverArgs` field allows you to specify additional arguments to be passed to the K3s server pods.
## Using the CLI
You can check the [k3kcli documentation](./cli/k3kcli.md) for the full specs.
### No storage provider
* Ephemeral storage:
```bash
k3kcli cluster create --persistence-type ephemeral my-cluster
```
*Important Notes:*
* Using `--persistence-type ephemeral` will result in data loss if the nodes are restarted.
* It is highly recommended to use `--persistence-type dynamic` with a configured storage class.
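For the recommended dynamic setup, an equivalent command (flags as listed in the CLI reference) could be:

```bash
k3kcli cluster create --persistence-type dynamic --storage-class-name local-path my-cluster
```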

docs/architecture.md Normal file

@@ -0,0 +1,140 @@
# Architecture
Virtual Clusters are isolated Kubernetes clusters provisioned on a physical cluster. K3k leverages [K3s](https://k3s.io/) as the control plane of the Kubernetes cluster because of its lightweight footprint.
K3k provides two modes of deploying virtual clusters: the "shared" mode (default), and "virtual".
## Shared Mode
The default `shared` mode uses a K3s server as the control plane with an [agentless servers configuration](https://docs.k3s.io/advanced#running-agentless-servers-experimental). With this option enabled, the servers do not run the kubelet, container runtime, or CNI. Instead, the server uses a [Virtual Kubelet](https://virtual-kubelet.io/) provider implementation specific to K3k, which schedules workloads and any other required resources on the host cluster. This K3k Virtual Kubelet provider handles the reflection of resources and workload execution within the shared host cluster environment.
![Shared Mode](./images/architecture/shared-mode.png)
### Networking and Storage
Because of this shared infrastructure, the virtual cluster uses the CNI configured in the host cluster. To provide the needed isolation, K3k leverages Network Policies.
The same goes for the available storage, so the Storage Classes and Volumes are those of the host cluster.
### Resource Sharing and Limits
In shared mode, K3k leverages Kubernetes ResourceQuotas and LimitRanges to manage resource sharing and enforce limits. Since all virtual cluster workloads run within the same namespace on the host cluster, ResourceQuotas are applied to this namespace to limit the total resources consumed by a virtual cluster. LimitRanges are used to set default resource requests and limits for pods, ensuring that workloads have reasonable resource allocations even if they don't explicitly specify them.
Each pod in a virtual cluster is assigned a unique name that incorporates the pod name, namespace, and cluster name. This prevents naming collisions in the shared host cluster namespace.
It's important to understand that ResourceQuotas are applied at the namespace level. This means that all pods within a virtual cluster share the same quota. While this provides overall limits for the virtual cluster, it also means that resource allocation is dynamic. If one workload isn't using its full resource allocation, other workloads within the *same* virtual cluster can utilize those resources, even if they belong to different deployments or services.
This dynamic sharing can be both a benefit and a challenge. It allows for efficient resource utilization, but it can also lead to unpredictable performance if workloads have varying resource demands. Furthermore, this approach makes it difficult to guarantee strict resource isolation between workloads within the same virtual cluster.
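For illustration, the namespace-level cap described above is a standard Kubernetes `ResourceQuota`. A hypothetical example for a cluster named `my-virtual-cluster` (the namespace name assumes the default `k3k-<name>` convention):

```yaml
# Hypothetical quota on the host namespace backing a virtual cluster:
# all pods of that virtual cluster draw from this shared budget.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-virtual-cluster-quota
  namespace: k3k-my-virtual-cluster
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```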
GPU resource sharing is an area of ongoing investigation. K3k is actively exploring potential solutions in this area.
### Isolation and Security
Isolation between virtual clusters in shared mode relies heavily on Kubernetes Network Policies. Network Policies define rules that control the network traffic allowed to and from pods. K3k configures Network Policies to ensure that pods in one virtual cluster cannot communicate with pods in other virtual clusters or with pods in the host cluster itself, providing a strong foundation for network isolation.
While Network Policies offer robust isolation capabilities, it's important to understand their characteristics:
* **CNI Integration:** Network Policies integrate seamlessly with supported CNI plugins. K3k leverages this integration to enforce network isolation.
* **Granular Control:** Network Policies provide granular control over network traffic, allowing for fine-tuned security policies.
* **Scalability:** Network Policies scale well with the number of virtual clusters and applications, ensuring consistent isolation as the environment grows.
K3k also utilizes Kubernetes Pod Security Admission (PSA) to enforce security policies within virtual clusters based on Pod Security Standards (PSS). PSS define different levels of security for pods, restricting what actions pods can perform. By configuring PSA to enforce a specific PSS level (e.g., `baseline` or `restricted`) for a virtual cluster, K3k ensures that pods adhere to established security best practices and prevents them from using privileged features or performing potentially dangerous operations.
Key aspects of PSA integration include:
* **Namespace-Level Enforcement:** PSA configuration is applied at the namespace level, providing a consistent security posture for all pods within the virtual cluster.
* **Standardized Profiles:** PSS offers a set of predefined security profiles aligned with industry best practices, simplifying security configuration and ensuring a baseline level of security.
The shared mode architecture is designed with security in mind. K3k employs multiple layers of security controls, including Network Policies and PSA, to protect virtual clusters and the host cluster. While the shared namespace model requires careful configuration and management, these controls provide a robust security foundation for running workloads in a multi-tenant environment. K3k continuously evaluates and enhances its security mechanisms to address evolving threats and ensure the highest level of protection for its users.
## Virtual Mode
The `virtual` mode in K3k deploys fully functional K3s clusters (including both server and agent components) as virtual clusters. These K3s clusters run as pods within the host cluster. Each virtual cluster has its own dedicated K3s server and one or more K3s agents acting as worker nodes. This approach provides strong isolation, as each virtual cluster operates independently with its own control plane and worker nodes. While these virtual clusters run as pods on the host cluster, they function as complete and separate Kubernetes environments.
![Virtual Mode](./images/architecture/virtual-mode.png)
### Networking and Storage
Virtual clusters in `virtual` mode each have their own independent networking configuration managed by their respective K3s servers. Each virtual cluster runs its own CNI plugin, configured within its K3s server, providing complete network isolation from other virtual clusters and the host cluster. While the virtual cluster networks ultimately operate on top of the host cluster's network infrastructure, the networking configuration and traffic management are entirely separate.
### Resource Sharing and Limits
Resource sharing in `virtual` mode is managed by applying resource limits to the pods that make up the virtual cluster (both the K3s server pod and the K3s agent pods). Each pod is assigned a specific amount of CPU, memory, and other resources. The workloads running *within* the virtual cluster then utilize these allocated resources. This means that the virtual cluster as a whole has a defined resource pool determined by the limits on its constituent pods.
This approach provides a clear and direct way to control the resources available to each virtual cluster. However, it requires careful resource planning to ensure that each virtual cluster has sufficient capacity for its workloads.
### Isolation and Security
The `virtual` mode offers strong isolation due to the dedicated K3s clusters deployed for each virtual cluster. Because each virtual cluster runs its own separate control plane and worker nodes, workloads are effectively isolated from each other and from the host cluster. This architecture minimizes the risk of one virtual cluster impacting others or the host cluster.
Security in `virtual` mode benefits from the inherent isolation provided by the separate K3s clusters. However, standard Kubernetes security best practices still apply, and K3k emphasizes a layered security approach. While the K3s server pods often run with elevated privileges (due to the nature of their function, requiring access to system resources), K3k recommends minimizing these privileges whenever possible and adhering to the principle of least privilege. This can be achieved by carefully configuring the necessary capabilities instead of relying on full `privileged` mode. Further information on K3s security best practices, including capabilities and other relevant topics, can be found in the [official K3s documentation](https://docs.k3s.io/security).
Currently, virtual mode carries a risk of privilege escalation, as the server pods run with elevated privileges due to the nature of their function.
## K3k Components
K3k consists of two main components:
* **Controller:** The K3k controller is a core component that runs on the host cluster. It watches for `Cluster` custom resources (CRs) and manages the lifecycle of virtual clusters. When a new `Cluster` CR is created, the controller provisions the necessary resources, including namespaces, K3s server and agent pods, and network configurations, to create the virtual cluster.
* **CLI:** The K3k CLI provides a command-line interface for interacting with K3k. It allows users to easily create, manage, and access virtual clusters. The CLI simplifies common tasks such as creating `Cluster` CRs, retrieving kubeconfigs for accessing virtual clusters, and performing other management operations.
## VirtualClusterPolicy
K3k introduces the VirtualClusterPolicy Custom Resource, a way to define and apply common configurations that govern how your virtual clusters operate within the K3k environment.
The primary goal of VCPs is to allow administrators to centrally manage and apply consistent policies. This reduces repetitive configuration, helps meet organizational standards, and enhances the security and operational consistency of virtual clusters managed by K3k.
A VirtualClusterPolicy is bound to one or more Kubernetes Namespaces. Once bound, the rules defined in the VCP apply to all K3k virtual clusters that are running or get created in that Namespace. This allows for flexible policy application, meaning different Namespaces can use their own unique VCPs, while others can share a single VCP for a consistent setup.
Common use cases for administrators leveraging VirtualClusterPolicy include:
- Defining the operational mode (like "shared" or "virtual") for virtual clusters.
- Setting up resource quotas and limit ranges to control how many resources virtual clusters and their workloads can consume.
- Enforcing security standards, for example, by configuring Pod Security Admission (PSA) labels for Namespaces.
The K3k controller actively monitors VirtualClusterPolicy resources and the corresponding Namespace bindings. When a VCP is applied or updated, the controller ensures that the defined configurations are enforced on the relevant virtual clusters and their associated resources within the targeted Namespaces.
For a deep dive into what VirtualClusterPolicy can do, along with more examples, check out the [VirtualClusterPolicy Concepts](./virtualclusterpolicy.md) page. For a full list of all the spec fields, see the [API Reference for VirtualClusterPolicy](./crds/crds.md#virtualclusterpolicy).
## Comparison and Trade-offs
K3k offers two distinct modes for deploying virtual clusters: `shared` and `virtual`. Each mode has its own strengths and weaknesses, and the best choice depends on the specific needs and priorities of the user. Here's a comparison to help you make an informed decision:
| Feature | Shared Mode | Virtual Mode |
|---|---|---|
| **Architecture** | Agentless K3s server with Virtual Kubelet | Full K3s cluster (server and agents) as pods |
| **Isolation** | Network Policies | Dedicated control plane and worker nodes |
| **Resource Sharing** | Dynamic, namespace-level ResourceQuotas | Resource limits on virtual cluster pods |
| **Networking** | Host cluster's CNI | Virtual cluster's own CNI |
| **Storage** | Host cluster's storage | *Under development* |
| **Security** | Pod Security Admission (PSA), Network Policies | Inherent isolation, PSA, Network Policies, secure host configuration |
| **Performance** | Smaller footprint, more efficient due to running directly on the host | Higher overhead due to running full K3s clusters |
**Trade-offs:**
* **Isolation vs. Overhead:** The `shared` mode has lower overhead but weaker isolation, while the `virtual` mode provides stronger isolation but potentially higher overhead due to running full K3s clusters.
* **Resource Sharing:** The `shared` mode offers dynamic resource sharing within a namespace, which can be efficient but less predictable. The `virtual` mode provides dedicated resources to each virtual cluster, offering more control but requiring careful planning.
**Choosing the right mode:**
* **Choose `shared` mode if:**
* You prioritize low overhead and resource efficiency.
* You need a simple setup and don't require strong isolation between virtual clusters.
* Your workloads don't have strict performance requirements.
* Your workloads need host capabilities (e.g., GPUs).
* **Choose `virtual` mode if:**
* You prioritize strong isolation.
* You need dedicated resources and predictable performance for your virtual clusters.
Ultimately, the best choice depends on your specific requirements and priorities. Consider the trade-offs carefully and choose the mode that best aligns with your needs.

docs/cli/convert.lua Normal file

@@ -0,0 +1,25 @@
local deleting_see_also = false

function Header(el)
  -- If we hit "SEE ALSO", start deleting and remove the header itself
  if pandoc.utils.stringify(el):upper() == "SEE ALSO" then
    deleting_see_also = true
    return {}
  end

  -- If we hit any other header, stop deleting
  deleting_see_also = false

  return el
end

function BulletList(el)
  if deleting_see_also then
    return {} -- Deletes the list of links
  end

  return el
end

function CodeBlock(el)
  -- Forces the ---- separator
  local content = "----\n" .. el.text .. "\n----\n\n"
  return pandoc.RawBlock('asciidoc', content)
end
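This filter is presumably used when converting the generated Markdown CLI reference to AsciiDoc; a hypothetical invocation (paths assumed):

```bash
# Drops SEE ALSO sections and rewrites code blocks as ---- fences.
pandoc docs/cli/k3kcli.md --from markdown --to asciidoc \
  --lua-filter docs/cli/convert.lua --output docs/cli/k3kcli.adoc
```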

docs/cli/genclidoc.go Normal file

@@ -0,0 +1,31 @@
package main

import (
	"fmt"
	"os"
	"path"

	"github.com/spf13/cobra/doc"

	"github.com/rancher/k3k/cli/cmds"
)

func main() {
	// Instantiate the CLI application
	k3kcli := cmds.NewRootCmd()

	wd, err := os.Getwd()
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	outputDir := path.Join(wd, "docs/cli")

	if err := doc.GenMarkdownTree(k3kcli, outputDir); err != nil {
		fmt.Println("Error generating documentation:", err)
		os.Exit(1)
	}

	fmt.Println("Documentation generated at " + outputDir)
}
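Since the output directory is resolved relative to the working directory, the generator is presumably run from the repository root:

```bash
# Regenerates the Markdown CLI reference under docs/cli.
go run ./docs/cli/genclidoc.go
```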

docs/cli/k3kcli.adoc Normal file

@@ -0,0 +1,317 @@
== k3kcli
CLI for K3K.
=== Options
----
--debug Turn on debug logs
-h, --help help for k3kcli
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli cluster
K3k cluster command.
=== Options
----
-h, --help help for cluster
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli cluster create
Create a new cluster.
----
k3kcli cluster create [flags]
----
=== Examples
----
k3kcli cluster create [command options] NAME
----
=== Options
----
--agent-args strings agents extra arguments
--agent-envs strings agents extra Envs
--agents int number of agents
--annotations stringArray Annotations to add to the cluster object (e.g. key=value)
--cluster-cidr string cluster CIDR
--custom-certs string The path for custom certificate directory
-h, --help help for create
--kubeconfig-server string override the kubeconfig server host
--labels stringArray Labels to add to the cluster object (e.g. key=value)
--mirror-host-nodes Mirror Host Cluster Nodes
--mode string k3k mode type (shared, virtual) (default "shared")
-n, --namespace string namespace of the k3k cluster
--persistence-type string persistence mode for the nodes (dynamic, ephemeral) (default "dynamic")
--policy string The policy to create the cluster in
--server-args strings servers extra arguments
--server-envs strings servers extra Envs
--servers int number of servers (default 1)
--service-cidr string service CIDR
--storage-class-name string storage class name for dynamic persistence type
--storage-request-size string storage size for dynamic persistence type
--timeout duration The timeout for waiting for the cluster to become ready (e.g., 10s, 5m, 1h). (default 3m0s)
--token string token of the cluster
--version string k3s version
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli cluster delete
Delete an existing cluster.
----
k3kcli cluster delete [flags]
----
=== Examples
----
k3kcli cluster delete [command options] NAME
----
=== Options
----
-h, --help help for delete
--keep-data keeps persistence volumes created for the cluster after deletion
-n, --namespace string namespace of the k3k cluster
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli cluster list
List all existing clusters.
----
k3kcli cluster list [flags]
----
=== Examples
----
k3kcli cluster list [command options]
----
=== Options
----
-h, --help help for list
-n, --namespace string namespace of the k3k cluster
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli cluster update
Update an existing cluster.
----
k3kcli cluster update [flags]
----
=== Examples
----
k3kcli cluster update [command options] NAME
----
=== Options
----
--agents int32 number of agents
--annotations stringArray Annotations to add to the cluster object (e.g. key=value)
-h, --help help for update
--labels stringArray Labels to add to the cluster object (e.g. key=value)
-n, --namespace string namespace of the k3k cluster
-y, --no-confirm Skip interactive approval before applying update
--servers int32 number of servers (default 1)
--version string k3s version
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli kubeconfig
Manage kubeconfig for clusters.
=== Options
----
-h, --help help for kubeconfig
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli kubeconfig generate
Generate kubeconfig for clusters.
----
k3kcli kubeconfig generate [flags]
----
=== Options
----
--altNames strings altNames of the generated certificates for the kubeconfig
--cn string Common name (CN) of the generated certificates for the kubeconfig (default "system:admin")
--config-name string the name of the generated kubeconfig file
--expiration-days int Expiration date of the certificates used for the kubeconfig (default 365)
-h, --help help for generate
--kubeconfig-server string override the kubeconfig server host
--name string cluster name
-n, --namespace string namespace of the k3k cluster
--org strings Organization name (ORG) of the generated certificates for the kubeconfig
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli policy
K3k policy command.
=== Options
----
-h, --help help for policy
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli policy create
Create a new policy.
----
k3kcli policy create [flags]
----
=== Examples
----
k3kcli policy create [command options] NAME
----
=== Options
----
--annotations stringArray Annotations to add to the policy object (e.g. key=value)
-h, --help help for create
--labels stringArray Labels to add to the policy object (e.g. key=value)
--mode string The allowed mode type of the policy (default "shared")
--namespace strings The namespaces where to bind the policy
--overwrite Overwrite namespace binding of existing policy
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli policy delete
Delete an existing policy.
----
k3kcli policy delete [flags]
----
=== Examples
----
k3kcli policy delete [command options] NAME
----
=== Options
----
-h, --help help for delete
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli policy list
List all existing policies.
----
k3kcli policy list [flags]
----
=== Examples
----
k3kcli policy list [command options]
----
=== Options
----
-h, --help help for list
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----

docs/cli/k3kcli.md Normal file

@@ -0,0 +1,18 @@
## k3kcli
CLI for K3K.
### Options
```
--debug Turn on debug logs
-h, --help help for k3kcli
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli cluster](k3kcli_cluster.md) - K3k cluster command.
* [k3kcli kubeconfig](k3kcli_kubeconfig.md) - Manage kubeconfig for clusters.
* [k3kcli policy](k3kcli_policy.md) - K3k policy command.


@@ -0,0 +1,25 @@
## k3kcli cluster
K3k cluster command.
### Options
```
-h, --help help for cluster
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli](k3kcli.md) - CLI for K3K.
* [k3kcli cluster create](k3kcli_cluster_create.md) - Create a new cluster.
* [k3kcli cluster delete](k3kcli_cluster_delete.md) - Delete an existing cluster.
* [k3kcli cluster list](k3kcli_cluster_list.md) - List all existing clusters.
* [k3kcli cluster update](k3kcli_cluster_update.md) - Update an existing cluster.


@@ -0,0 +1,53 @@
## k3kcli cluster create
Create a new cluster.
```
k3kcli cluster create [flags]
```
### Examples
```
k3kcli cluster create [command options] NAME
```
### Options
```
--agent-args strings agents extra arguments
--agent-envs strings agents extra Envs
--agents int number of agents
--annotations stringArray Annotations to add to the cluster object (e.g. key=value)
--cluster-cidr string cluster CIDR
--custom-certs string The path for custom certificate directory
-h, --help help for create
--kubeconfig-server string override the kubeconfig server host
--labels stringArray Labels to add to the cluster object (e.g. key=value)
--mirror-host-nodes Mirror Host Cluster Nodes
--mode string k3k mode type (shared, virtual) (default "shared")
-n, --namespace string namespace of the k3k cluster
--persistence-type string persistence mode for the nodes (dynamic, ephemeral) (default "dynamic")
--policy string The policy to create the cluster in
--server-args strings servers extra arguments
--server-envs strings servers extra Envs
--servers int number of servers (default 1)
--service-cidr string service CIDR
--storage-class-name string storage class name for dynamic persistence type
--storage-request-size string storage size for dynamic persistence type
--timeout duration The timeout for waiting for the cluster to become ready (e.g., 10s, 5m, 1h). (default 3m0s)
--token string token of the cluster
--version string k3s version
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli cluster](k3kcli_cluster.md) - K3k cluster command.


@@ -0,0 +1,33 @@
## k3kcli cluster delete
Delete an existing cluster.
```
k3kcli cluster delete [flags]
```
### Examples
```
k3kcli cluster delete [command options] NAME
```
### Options
```
-h, --help help for delete
--keep-data keeps persistence volumes created for the cluster after deletion
-n, --namespace string namespace of the k3k cluster
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli cluster](k3kcli_cluster.md) - K3k cluster command.


@@ -0,0 +1,32 @@
## k3kcli cluster list
List all existing clusters.
```
k3kcli cluster list [flags]
```
### Examples
```
k3kcli cluster list [command options]
```
### Options
```
-h, --help help for list
-n, --namespace string namespace of the k3k cluster
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli cluster](k3kcli_cluster.md) - K3k cluster command.


@@ -0,0 +1,38 @@
## k3kcli cluster update
Update an existing cluster.
```
k3kcli cluster update [flags]
```
### Examples
```
k3kcli cluster update [command options] NAME
```
### Options
```
--agents int32 number of agents
--annotations stringArray Annotations to add to the cluster object (e.g. key=value)
-h, --help help for update
--labels stringArray Labels to add to the cluster object (e.g. key=value)
-n, --namespace string namespace of the k3k cluster
-y, --no-confirm Skip interactive approval before applying update
--servers int32 number of servers (default 1)
--version string k3s version
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli cluster](k3kcli_cluster.md) - K3k cluster command.


@@ -0,0 +1,22 @@
## k3kcli kubeconfig
Manage kubeconfig for clusters.
### Options
```
-h, --help help for kubeconfig
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli](k3kcli.md) - CLI for K3K.
* [k3kcli kubeconfig generate](k3kcli_kubeconfig_generate.md) - Generate kubeconfig for clusters.


@@ -0,0 +1,33 @@
## k3kcli kubeconfig generate
Generate kubeconfig for clusters.
```
k3kcli kubeconfig generate [flags]
```
### Options
```
--altNames strings altNames of the generated certificates for the kubeconfig
--cn string Common name (CN) of the generated certificates for the kubeconfig (default "system:admin")
--config-name string the name of the generated kubeconfig file
--expiration-days int Expiration date of the certificates used for the kubeconfig (default 365)
-h, --help help for generate
--kubeconfig-server string override the kubeconfig server host
--name string cluster name
-n, --namespace string namespace of the k3k cluster
--org strings Organization name (ORG) of the generated certificates for the kubeconfig
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli kubeconfig](k3kcli_kubeconfig.md) - Manage kubeconfig for clusters.

docs/cli/k3kcli_policy.md Normal file

@@ -0,0 +1,24 @@
## k3kcli policy
K3k policy command.
### Options
```
-h, --help help for policy
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli](k3kcli.md) - CLI for K3K.
* [k3kcli policy create](k3kcli_policy_create.md) - Create a new policy.
* [k3kcli policy delete](k3kcli_policy_delete.md) - Delete an existing policy.
* [k3kcli policy list](k3kcli_policy_list.md) - List all existing policies.


@@ -0,0 +1,36 @@
## k3kcli policy create
Create a new policy.
```
k3kcli policy create [flags]
```
### Examples
```
k3kcli policy create [command options] NAME
```
### Options
```
--annotations stringArray Annotations to add to the policy object (e.g. key=value)
-h, --help help for create
--labels stringArray Labels to add to the policy object (e.g. key=value)
--mode string The allowed mode type of the policy (default "shared")
--namespace strings The namespaces where to bind the policy
--overwrite Overwrite namespace binding of existing policy
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli policy](k3kcli_policy.md) - K3k policy command.


@@ -0,0 +1,31 @@
## k3kcli policy delete
Delete an existing policy.
```
k3kcli policy delete [flags]
```
### Examples
```
k3kcli policy delete [command options] NAME
```
### Options
```
-h, --help help for delete
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli policy](k3kcli_policy.md) - K3k policy command.


@@ -0,0 +1,31 @@
## k3kcli policy list
List all existing policies.
```
k3kcli policy list [flags]
```
### Examples
```
k3kcli policy list [command options]
```
### Options
```
-h, --help help for list
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli policy](k3kcli_policy.md) - K3k policy command.

docs/crds/config.yaml Normal file

@@ -0,0 +1,9 @@
processor:
# RE2 regular expressions describing type fields that should be excluded from the generated documentation.
ignoreFields:
- "status$"
- "TypeMeta$"
render:
# Version of Kubernetes to use when generating links to Kubernetes API documentation.
kubernetesVersion: "1.31"

docs/crds/crds.adoc Normal file

@@ -0,0 +1,691 @@
[id="k3k-api-reference"]
= API Reference
:revdate: "2006-01-02"
:page-revdate: {revdate}
:anchor_prefix: k8s-api
== Packages
- xref:{anchor_prefix}-k3k-io-v1beta1[$$k3k.io/v1beta1$$]
[id="{anchor_prefix}-k3k-io-v1beta1"]
== k3k.io/v1beta1
=== Resource Types
- xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-cluster[$$Cluster$$]
- xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterlist[$$ClusterList$$]
- xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicy[$$VirtualClusterPolicy$$]
- xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicylist[$$VirtualClusterPolicyList$$]
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-addon"]
=== Addon
Addon specifies a Secret containing YAML to be deployed on cluster startup.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`secretNamespace`* __string__ | SecretNamespace is the namespace of the Secret. + | |
| *`secretRef`* __string__ | SecretRef is the name of the Secret. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-cluster"]
=== Cluster
Cluster defines a virtual Kubernetes cluster managed by k3k.
It specifies the desired state of a virtual cluster, including version, node configuration, and networking.
k3k uses this to provision and manage these virtual clusters.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterlist[$$ClusterList$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`apiVersion`* __string__ | `k3k.io/v1beta1` | |
| *`kind`* __string__ | `Cluster` | |
| *`metadata`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#objectmeta-v1-meta[$$ObjectMeta$$]__ | Refer to Kubernetes API documentation for fields of `metadata`.
| |
| *`spec`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]__ | Spec defines the desired state of the Cluster. + | { } |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterlist"]
=== ClusterList
ClusterList is a list of Cluster resources.
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`apiVersion`* __string__ | `k3k.io/v1beta1` | |
| *`kind`* __string__ | `ClusterList` | |
| *`metadata`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#listmeta-v1-meta[$$ListMeta$$]__ | Refer to Kubernetes API documentation for fields of `metadata`.
| |
| *`items`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-cluster[$$Cluster$$] array__ | | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clustermode"]
=== ClusterMode
_Underlying type:_ _string_
ClusterMode is the possible provisioning mode of a Cluster.
_Validation:_
- Enum: [shared virtual]
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicyspec[$$VirtualClusterPolicySpec$$]
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterphase"]
=== ClusterPhase
_Underlying type:_ _string_
ClusterPhase is a high-level summary of the cluster's current lifecycle state.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterstatus[$$ClusterStatus$$]
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec"]
=== ClusterSpec
ClusterSpec defines the desired state of a virtual Kubernetes cluster.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-cluster[$$Cluster$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`version`* __string__ | Version is the K3s version to use for the virtual nodes. +
It should follow the K3s versioning convention (e.g., v1.28.2-k3s1). +
If not specified, the Kubernetes version of the host node will be used. + | |
| *`mode`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clustermode[$$ClusterMode$$]__ | Mode specifies the cluster provisioning mode: "shared" or "virtual". +
Defaults to "shared". This field is immutable. + | shared | Enum: [shared virtual] +
| *`servers`* __integer__ | Servers specifies the number of K3s pods to run in server (control plane) mode. +
Must be at least 1. Defaults to 1. + | 1 |
| *`agents`* __integer__ | Agents specifies the number of K3s pods to run in agent (worker) mode. +
Must be 0 or greater. Defaults to 0. +
This field is ignored in "shared" mode. + | 0 |
| *`clusterCIDR`* __string__ | ClusterCIDR is the CIDR range for pod IPs. +
Defaults to 10.42.0.0/16 in shared mode and 10.52.0.0/16 in virtual mode. +
This field is immutable. + | |
| *`serviceCIDR`* __string__ | ServiceCIDR is the CIDR range for service IPs. +
Defaults to 10.43.0.0/16 in shared mode and 10.53.0.0/16 in virtual mode. +
This field is immutable. + | |
| *`clusterDNS`* __string__ | ClusterDNS is the IP address for the CoreDNS service. +
Must be within the ServiceCIDR range. Defaults to 10.43.0.10. +
This field is immutable. + | |
| *`persistence`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistenceconfig[$$PersistenceConfig$$]__ | Persistence specifies options for persisting etcd data. +
Defaults to dynamic persistence, which uses a PersistentVolumeClaim to provide data persistence. +
A default StorageClass is required for dynamic persistence. + | |
| *`expose`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-exposeconfig[$$ExposeConfig$$]__ | Expose specifies options for exposing the API server. +
By default, it's only exposed as a ClusterIP. + | |
| *`nodeSelector`* __object (keys:string, values:string)__ | NodeSelector specifies node labels to constrain where server/agent pods are scheduled. +
In "shared" mode, this also applies to workloads. + | |
| *`priorityClass`* __string__ | PriorityClass specifies the priorityClassName for server/agent pods. +
In "shared" mode, this also applies to workloads. + | |
| *`tokenSecretRef`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#secretreference-v1-core[$$SecretReference$$]__ | TokenSecretRef is a Secret reference containing the token used by worker nodes to join the cluster. +
The Secret must have a "token" field in its data. + | |
| *`tlsSANs`* __string array__ | TLSSANs specifies subject alternative names for the K3s server certificate. + | |
| *`serverArgs`* __string array__ | ServerArgs specifies ordered key-value pairs for K3s server pods. +
Example: ["--tls-san=example.com"] + | |
| *`agentArgs`* __string array__ | AgentArgs specifies ordered key-value pairs for K3s agent pods. +
Example: ["--node-name=my-agent-node"] + | |
| *`serverEnvs`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#envvar-v1-core[$$EnvVar$$] array__ | ServerEnvs specifies list of environment variables to set in the server pod. + | |
| *`agentEnvs`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#envvar-v1-core[$$EnvVar$$] array__ | AgentEnvs specifies list of environment variables to set in the agent pod. + | |
| *`addons`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-addon[$$Addon$$] array__ | Addons specifies secrets containing raw YAML to deploy on cluster startup. + | |
| *`serverLimit`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcelist-v1-core[$$ResourceList$$]__ | ServerLimit specifies resource limits for server nodes. + | |
| *`workerLimit`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcelist-v1-core[$$ResourceList$$]__ | WorkerLimit specifies resource limits for agent nodes. + | |
| *`mirrorHostNodes`* __boolean__ | MirrorHostNodes controls whether node objects from the host cluster +
are mirrored into the virtual cluster. + | |
| *`customCAs`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-customcas[$$CustomCAs$$]__ | CustomCAs specifies the cert/key pairs for custom CA certificates. + | |
| *`sync`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]__ | Sync specifies the resource types that will be synced from the virtual cluster to the host cluster. + | { } |
| *`secretMounts`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-secretmount[$$SecretMount$$] array__ | SecretMounts specifies a list of secrets to mount into server and agent pods. +
Each entry defines a secret and its mount path within the pods. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-configmapsyncconfig"]
=== ConfigMapSyncConfig
ConfigMapSyncConfig specifies the sync options for services.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | true |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies set of labels of the resources that will be synced, if empty +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource"]
=== CredentialSource
CredentialSource defines where to get a credential from.
It can represent either a TLS key pair or a single private key.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsources[$$CredentialSources$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`secretName`* __string__ | The secret must contain specific keys based on the credential type: +
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`. +
- For the ServiceAccountToken signing key: `tls.key`. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsources"]
=== CredentialSources
CredentialSources lists all the required credentials, including both
TLS key pairs and single signing keys.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-customcas[$$CustomCAs$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`serverCA`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | ServerCA specifies the server-ca cert/key pair. + | |
| *`clientCA`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | ClientCA specifies the client-ca cert/key pair. + | |
| *`requestHeaderCA`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | RequestHeaderCA specifies the request-header-ca cert/key pair. + | |
| *`etcdServerCA`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | ETCDServerCA specifies the etcd-server-ca cert/key pair. + | |
| *`etcdPeerCA`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | ETCDPeerCA specifies the etcd-peer-ca cert/key pair. + | |
| *`serviceAccountToken`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | ServiceAccountToken specifies the service-account-token key. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-customcas"]
=== CustomCAs
CustomCAs specifies the cert/key pairs for custom CA certificates.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled toggles this feature on or off. + | true |
| *`sources`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsources[$$CredentialSources$$]__ | Sources defines the sources for all required custom CA certificates. + | |
|===
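The TLS pairs above follow the standard `kubernetes.io/tls` Secret layout (`tls.crt`/`tls.key`), so a source Secret could, for example, be created as follows (names and paths hypothetical):

[source,bash]
----
# Produces a Secret with the tls.crt and tls.key keys expected by CredentialSource.
kubectl create secret tls my-server-ca --cert=server-ca.crt --key=server-ca.key -n my-namespace
----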
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-exposeconfig"]
=== ExposeConfig
ExposeConfig specifies options for exposing the API server.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`ingress`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-ingressconfig[$$IngressConfig$$]__ | Ingress specifies options for exposing the API server through an Ingress. + | |
| *`loadBalancer`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-loadbalancerconfig[$$LoadBalancerConfig$$]__ | LoadBalancer specifies options for exposing the API server through a LoadBalancer service. + | |
| *`nodePort`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-nodeportconfig[$$NodePortConfig$$]__ | NodePort specifies options for exposing the API server through NodePort. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-ingressconfig"]
=== IngressConfig
IngressConfig specifies options for exposing the API server through an Ingress.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-exposeconfig[$$ExposeConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`annotations`* __object (keys:string, values:string)__ | Annotations specifies annotations to add to the Ingress. + | |
| *`ingressClassName`* __string__ | IngressClassName specifies the IngressClass to use for the Ingress. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-ingresssyncconfig"]
=== IngressSyncConfig
IngressSyncConfig specifies the sync options for Ingresses.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | false |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies set of labels of the resources that will be synced, if empty +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-loadbalancerconfig"]
=== LoadBalancerConfig
LoadBalancerConfig specifies options for exposing the API server through a LoadBalancer service.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-exposeconfig[$$ExposeConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`serverPort`* __integer__ | ServerPort is the port on which the K3s server is exposed when type is LoadBalancer. +
If not specified, the default https 443 port will be allocated. +
If 0 or negative, the port will not be exposed. + | |
| *`etcdPort`* __integer__ | ETCDPort is the port on which the ETCD service is exposed when type is LoadBalancer. +
If not specified, the default etcd 2379 port will be allocated. +
If 0 or negative, the port will not be exposed. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-nodeportconfig"]
=== NodePortConfig
NodePortConfig specifies options for exposing the API server through NodePort.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-exposeconfig[$$ExposeConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`serverPort`* __integer__ | ServerPort is the port on each node on which the K3s server is exposed when type is NodePort. +
If not specified, a random port between 30000-32767 will be allocated. +
If out of range, the port will not be exposed. + | |
| *`etcdPort`* __integer__ | ETCDPort is the port on each node on which the ETCD service is exposed when type is NodePort. +
If not specified, a random port between 30000-32767 will be allocated. +
If out of range, the port will not be exposed. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistenceconfig"]
=== PersistenceConfig
PersistenceConfig specifies options for persisting etcd data.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`type`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistencemode[$$PersistenceMode$$]__ | Type specifies the persistence mode. + | dynamic |
| *`storageClassName`* __string__ | StorageClassName is the name of the StorageClass to use for the PVC. +
This field is only relevant in "dynamic" mode. + | |
| *`storageRequestSize`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#quantity-resource-api[$$Quantity$$]__ | StorageRequestSize is the requested size for the PVC. +
This field is only relevant in "dynamic" mode. + | 2G |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistencemode"]
=== PersistenceMode
_Underlying type:_ _string_
PersistenceMode is the storage mode of a Cluster.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistenceconfig[$$PersistenceConfig$$]
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistentvolumeclaimsyncconfig"]
=== PersistentVolumeClaimSyncConfig
PersistentVolumeClaimSyncConfig specifies the sync options for PersistentVolumeClaims.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | true |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies set of labels of the resources that will be synced, if empty +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-podsecurityadmissionlevel"]
=== PodSecurityAdmissionLevel
_Underlying type:_ _string_
PodSecurityAdmissionLevel is the policy level applied to the pods in the namespace.
_Validation:_
- Enum: [privileged baseline restricted]
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicyspec[$$VirtualClusterPolicySpec$$]
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-priorityclasssyncconfig"]
=== PriorityClassSyncConfig
PriorityClassSyncConfig specifies the sync options for PriorityClasses.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | false |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies set of labels of the resources that will be synced, if empty +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-secretmount"]
=== SecretMount
SecretMount defines a secret to be mounted into server or agent pods,
allowing for custom configurations, certificates, or other sensitive data.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`secretName`* __string__ | secretName is the name of the secret in the pod's namespace to use. +
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret + | |
| *`items`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#keytopath-v1-core[$$KeyToPath$$] array__ | items If unspecified, each key-value pair in the Data field of the referenced +
Secret will be projected into the volume as a file whose name is the +
key and content is the value. If specified, the listed keys will be +
projected into the specified paths, and unlisted keys will not be +
present. If a key is specified which is not present in the Secret, +
the volume setup will error unless it is marked optional. Paths must be +
relative and may not contain the '..' path or start with '..'. + | |
| *`defaultMode`* __integer__ | defaultMode is Optional: mode bits used to set permissions on created files by default. +
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. +
YAML accepts both octal and decimal values, JSON requires decimal values +
for mode bits. Defaults to 0644. +
Directories within the path are not affected by this setting. +
This might be in conflict with other options that affect the file +
mode, like fsGroup, and the result can be other mode bits set. + | |
| *`optional`* __boolean__ | optional specifies whether the Secret or its keys must be defined + | |
| *`mountPath`* __string__ | MountPath is the path within server and agent pods where the +
secret contents will be mounted. + | |
| *`subPath`* __string__ | SubPath is an optional path within the secret to mount instead of the root. +
When specified, only the specified key from the secret will be mounted as a file +
at MountPath, keeping the parent directory writable. + | |
| *`role`* __string__ | Role is the type of the k3k pod that will be used to mount the secret. +
This can be 'server', 'agent', or 'all' (for both). + | | Enum: [server agent all] +
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-secretsyncconfig"]
=== SecretSyncConfig
SecretSyncConfig specifies the sync options for Secrets.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | true |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies set of labels of the resources that will be synced, if empty +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-servicesyncconfig"]
=== ServiceSyncConfig
ServiceSyncConfig specifies the sync options for services.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | true |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies set of labels of the resources that will be synced, if empty +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig"]
=== SyncConfig
SyncConfig defines the resources that should be synced from the virtual cluster to the host cluster.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicyspec[$$VirtualClusterPolicySpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`services`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-servicesyncconfig[$$ServiceSyncConfig$$]__ | Services resources sync configuration. + | { enabled:true } |
| *`configMaps`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-configmapsyncconfig[$$ConfigMapSyncConfig$$]__ | ConfigMaps resources sync configuration. + | { enabled:true } |
| *`secrets`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-secretsyncconfig[$$SecretSyncConfig$$]__ | Secrets resources sync configuration. + | { enabled:true } |
| *`ingresses`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-ingresssyncconfig[$$IngressSyncConfig$$]__ | Ingresses resources sync configuration. + | { enabled:false } |
| *`persistentVolumeClaims`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistentvolumeclaimsyncconfig[$$PersistentVolumeClaimSyncConfig$$]__ | PersistentVolumeClaims resources sync configuration. + | { enabled:true } |
| *`priorityClasses`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-priorityclasssyncconfig[$$PriorityClassSyncConfig$$]__ | PriorityClasses resources sync configuration. + | { enabled:false } |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicy"]
=== VirtualClusterPolicy
VirtualClusterPolicy allows defining common configurations and constraints
for the clusters bound to the policy.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicylist[$$VirtualClusterPolicyList$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`apiVersion`* __string__ | `k3k.io/v1beta1` | |
| *`kind`* __string__ | `VirtualClusterPolicy` | |
| *`metadata`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#objectmeta-v1-meta[$$ObjectMeta$$]__ | Refer to Kubernetes API documentation for fields of `metadata`.
| |
| *`spec`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicyspec[$$VirtualClusterPolicySpec$$]__ | Spec defines the desired state of the VirtualClusterPolicy. + | { } |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicylist"]
=== VirtualClusterPolicyList
VirtualClusterPolicyList is a list of VirtualClusterPolicy resources.
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`apiVersion`* __string__ | `k3k.io/v1beta1` | |
| *`kind`* __string__ | `VirtualClusterPolicyList` | |
| *`metadata`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#listmeta-v1-meta[$$ListMeta$$]__ | Refer to Kubernetes API documentation for fields of `metadata`.
| |
| *`items`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicy[$$VirtualClusterPolicy$$] array__ | | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicyspec"]
=== VirtualClusterPolicySpec
VirtualClusterPolicySpec defines the desired state of a VirtualClusterPolicy.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicy[$$VirtualClusterPolicy$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`quota`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcequotaspec-v1-core[$$ResourceQuotaSpec$$]__ | Quota specifies the resource limits for the clusters bound to the policy. + | |
| *`limit`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#limitrangespec-v1-core[$$LimitRangeSpec$$]__ | Limit specifies the LimitRange that will be applied to all pods within the VirtualClusterPolicy +
to set defaults and constraints (min/max) + | |
| *`defaultNodeSelector`* __object (keys:string, values:string)__ | DefaultNodeSelector specifies the node selector that applies to all clusters (server + agent) in the target Namespace. + | |
| *`defaultPriorityClass`* __string__ | DefaultPriorityClass specifies the priorityClassName applied to all pods of all clusters in the target Namespace. + | |
| *`allowedMode`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clustermode[$$ClusterMode$$]__ | AllowedMode specifies the allowed cluster provisioning mode. Defaults to "shared". + | shared | Enum: [shared virtual] +
| *`disableNetworkPolicy`* __boolean__ | DisableNetworkPolicy indicates whether to disable the creation of a default network policy for cluster isolation. + | |
| *`podSecurityAdmissionLevel`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-podsecurityadmissionlevel[$$PodSecurityAdmissionLevel$$]__ | PodSecurityAdmissionLevel specifies the pod security admission level applied to the pods in the namespace. + | | Enum: [privileged baseline restricted] +
| *`sync`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]__ | Sync specifies the resources types that will be synced from virtual cluster to host cluster. + | { } |
|===

docs/crds/crds.md Normal file

@@ -0,0 +1,522 @@
# API Reference
## Packages
- [k3k.io/v1beta1](#k3kiov1beta1)
## k3k.io/v1beta1
### Resource Types
- [Cluster](#cluster)
- [ClusterList](#clusterlist)
- [VirtualClusterPolicy](#virtualclusterpolicy)
- [VirtualClusterPolicyList](#virtualclusterpolicylist)
#### Addon
Addon specifies a Secret containing YAML to be deployed on cluster startup.
_Appears in:_
- [ClusterSpec](#clusterspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `secretNamespace` _string_ | SecretNamespace is the namespace of the Secret. | | |
| `secretRef` _string_ | SecretRef is the name of the Secret. | | |
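For illustration, a minimal sketch of how an addon could be referenced from a Cluster; the Secret name and namespace below are hypothetical, and the Secret is expected to contain the raw YAML to apply on startup:
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: mycluster
spec:
  addons:
    - secretNamespace: default        # hypothetical namespace holding the addon Secret
      secretRef: my-addon-manifests   # hypothetical Secret containing the YAML to deploy
```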
#### Cluster
Cluster defines a virtual Kubernetes cluster managed by k3k.
It specifies the desired state of a virtual cluster, including version, node configuration, and networking.
k3k uses this to provision and manage these virtual clusters.
_Appears in:_
- [ClusterList](#clusterlist)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `k3k.io/v1beta1` | | |
| `kind` _string_ | `Cluster` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[ClusterSpec](#clusterspec)_ | Spec defines the desired state of the Cluster. | \{ \} | |
#### ClusterList
ClusterList is a list of Cluster resources.
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `k3k.io/v1beta1` | | |
| `kind` _string_ | `ClusterList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[Cluster](#cluster) array_ | | | |
#### ClusterMode
_Underlying type:_ _string_
ClusterMode is the possible provisioning mode of a Cluster.
_Validation:_
- Enum: [shared virtual]
_Appears in:_
- [ClusterSpec](#clusterspec)
- [VirtualClusterPolicySpec](#virtualclusterpolicyspec)
#### ClusterPhase
_Underlying type:_ _string_
ClusterPhase is a high-level summary of the cluster's current lifecycle state.
_Appears in:_
- [ClusterStatus](#clusterstatus)
#### ClusterSpec
ClusterSpec defines the desired state of a virtual Kubernetes cluster.
_Appears in:_
- [Cluster](#cluster)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `version` _string_ | Version is the K3s version to use for the virtual nodes.<br />It should follow the K3s versioning convention (e.g., v1.28.2-k3s1).<br />If not specified, the Kubernetes version of the host node will be used. | | |
| `mode` _[ClusterMode](#clustermode)_ | Mode specifies the cluster provisioning mode: "shared" or "virtual".<br />Defaults to "shared". This field is immutable. | shared | Enum: [shared virtual] <br /> |
| `servers` _integer_ | Servers specifies the number of K3s pods to run in server (control plane) mode.<br />Must be at least 1. Defaults to 1. | 1 | |
| `agents` _integer_ | Agents specifies the number of K3s pods to run in agent (worker) mode.<br />Must be 0 or greater. Defaults to 0.<br />This field is ignored in "shared" mode. | 0 | |
| `clusterCIDR` _string_ | ClusterCIDR is the CIDR range for pod IPs.<br />Defaults to 10.42.0.0/16 in shared mode and 10.52.0.0/16 in virtual mode.<br />This field is immutable. | | |
| `serviceCIDR` _string_ | ServiceCIDR is the CIDR range for service IPs.<br />Defaults to 10.43.0.0/16 in shared mode and 10.53.0.0/16 in virtual mode.<br />This field is immutable. | | |
| `clusterDNS` _string_ | ClusterDNS is the IP address for the CoreDNS service.<br />Must be within the ServiceCIDR range. Defaults to 10.43.0.10.<br />This field is immutable. | | |
| `persistence` _[PersistenceConfig](#persistenceconfig)_ | Persistence specifies options for persisting etcd data.<br />Defaults to dynamic persistence, which uses a PersistentVolumeClaim to provide data persistence.<br />A default StorageClass is required for dynamic persistence. | | |
| `expose` _[ExposeConfig](#exposeconfig)_ | Expose specifies options for exposing the API server.<br />By default, it's only exposed as a ClusterIP. | | |
| `nodeSelector` _object (keys:string, values:string)_ | NodeSelector specifies node labels to constrain where server/agent pods are scheduled.<br />In "shared" mode, this also applies to workloads. | | |
| `priorityClass` _string_ | PriorityClass specifies the priorityClassName for server/agent pods.<br />In "shared" mode, this also applies to workloads. | | |
| `tokenSecretRef` _[SecretReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#secretreference-v1-core)_ | TokenSecretRef is a Secret reference containing the token used by worker nodes to join the cluster.<br />The Secret must have a "token" field in its data. | | |
| `tlsSANs` _string array_ | TLSSANs specifies subject alternative names for the K3s server certificate. | | |
| `serverArgs` _string array_ | ServerArgs specifies ordered key-value pairs for K3s server pods.<br />Example: ["--tls-san=example.com"] | | |
| `agentArgs` _string array_ | AgentArgs specifies ordered key-value pairs for K3s agent pods.<br />Example: ["--node-name=my-agent-node"] | | |
| `serverEnvs` _[EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#envvar-v1-core) array_ | ServerEnvs specifies list of environment variables to set in the server pod. | | |
| `agentEnvs` _[EnvVar](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#envvar-v1-core) array_ | AgentEnvs specifies list of environment variables to set in the agent pod. | | |
| `addons` _[Addon](#addon) array_ | Addons specifies secrets containing raw YAML to deploy on cluster startup. | | |
| `serverLimit` _[ResourceList](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcelist-v1-core)_ | ServerLimit specifies resource limits for server nodes. | | |
| `workerLimit` _[ResourceList](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcelist-v1-core)_ | WorkerLimit specifies resource limits for agent nodes. | | |
| `mirrorHostNodes` _boolean_ | MirrorHostNodes controls whether node objects from the host cluster<br />are mirrored into the virtual cluster. | | |
| `customCAs` _[CustomCAs](#customcas)_ | CustomCAs specifies the cert/key pairs for custom CA certificates. | | |
| `sync` _[SyncConfig](#syncconfig)_ | Sync specifies the resources types that will be synced from virtual cluster to host cluster. | \{ \} | |
| `secretMounts` _[SecretMount](#secretmount) array_ | SecretMounts specifies a list of secrets to mount into server and agent pods.<br />Each entry defines a secret and its mount path within the pods. | | |
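Putting a few of these fields together, a minimal `Cluster` manifest might look like the following sketch (all values are illustrative):
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: mycluster
  namespace: k3k-mycluster
spec:
  mode: shared            # "shared" (default) or "virtual"; immutable
  servers: 1              # number of K3s server pods
  version: v1.33.1-k3s1   # optional; defaults to the host's Kubernetes version
```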
#### ConfigMapSyncConfig
ConfigMapSyncConfig specifies the sync options for ConfigMaps.
_Appears in:_
- [SyncConfig](#syncconfig)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `enabled` _boolean_ | Enabled is an on/off switch for syncing resources. | true | |
| `selector` _object (keys:string, values:string)_ | Selector specifies set of labels of the resources that will be synced, if empty<br />then all resources of the given type will be synced. | | |
#### CredentialSource
CredentialSource defines where to get a credential from.
It can represent either a TLS key pair or a single private key.
_Appears in:_
- [CredentialSources](#credentialsources)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `secretName` _string_ | The secret must contain specific keys based on the credential type:<br />- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.<br />- For the ServiceAccountToken signing key: `tls.key`. | | |
#### CredentialSources
CredentialSources lists all the required credentials, including both
TLS key pairs and single signing keys.
_Appears in:_
- [CustomCAs](#customcas)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `serverCA` _[CredentialSource](#credentialsource)_ | ServerCA specifies the server-ca cert/key pair. | | |
| `clientCA` _[CredentialSource](#credentialsource)_ | ClientCA specifies the client-ca cert/key pair. | | |
| `requestHeaderCA` _[CredentialSource](#credentialsource)_ | RequestHeaderCA specifies the request-header-ca cert/key pair. | | |
| `etcdServerCA` _[CredentialSource](#credentialsource)_ | ETCDServerCA specifies the etcd-server-ca cert/key pair. | | |
| `etcdPeerCA` _[CredentialSource](#credentialsource)_ | ETCDPeerCA specifies the etcd-peer-ca cert/key pair. | | |
| `serviceAccountToken` _[CredentialSource](#credentialsource)_ | ServiceAccountToken specifies the service-account-token key. | | |
#### CustomCAs
CustomCAs specifies the cert/key pairs for custom CA certificates.
_Appears in:_
- [ClusterSpec](#clusterspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `enabled` _boolean_ | Enabled toggles this feature on or off. | true | |
| `sources` _[CredentialSources](#credentialsources)_ | Sources defines the sources for all required custom CA certificates. | | |
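As a sketch, custom CAs could be wired up as below; the Secret names are hypothetical, and each Secret must contain the keys described in `CredentialSource`:
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: mycluster
spec:
  customCAs:
    enabled: true
    sources:
      serverCA:
        secretName: my-server-ca        # expects tls.crt and tls.key
      clientCA:
        secretName: my-client-ca        # expects tls.crt and tls.key
      serviceAccountToken:
        secretName: my-sa-signing-key   # expects tls.key only
```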
#### ExposeConfig
ExposeConfig specifies options for exposing the API server.
_Appears in:_
- [ClusterSpec](#clusterspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `ingress` _[IngressConfig](#ingressconfig)_ | Ingress specifies options for exposing the API server through an Ingress. | | |
| `loadBalancer` _[LoadBalancerConfig](#loadbalancerconfig)_ | LoadBalancer specifies options for exposing the API server through a LoadBalancer service. | | |
| `nodePort` _[NodePortConfig](#nodeportconfig)_ | NodePort specifies options for exposing the API server through NodePort. | | |
#### IngressConfig
IngressConfig specifies options for exposing the API server through an Ingress.
_Appears in:_
- [ExposeConfig](#exposeconfig)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `annotations` _object (keys:string, values:string)_ | Annotations specifies annotations to add to the Ingress. | | |
| `ingressClassName` _string_ | IngressClassName specifies the IngressClass to use for the Ingress. | | |
#### IngressSyncConfig
IngressSyncConfig specifies the sync options for Ingresses.
_Appears in:_
- [SyncConfig](#syncconfig)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `enabled` _boolean_ | Enabled is an on/off switch for syncing resources. | false | |
| `selector` _object (keys:string, values:string)_ | Selector specifies set of labels of the resources that will be synced, if empty<br />then all resources of the given type will be synced. | | |
#### LoadBalancerConfig
LoadBalancerConfig specifies options for exposing the API server through a LoadBalancer service.
_Appears in:_
- [ExposeConfig](#exposeconfig)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `serverPort` _integer_ | ServerPort is the port on which the K3s server is exposed when type is LoadBalancer.<br />If not specified, the default https 443 port will be allocated.<br />If 0 or negative, the port will not be exposed. | | |
| `etcdPort` _integer_ | ETCDPort is the port on which the ETCD service is exposed when type is LoadBalancer.<br />If not specified, the default etcd 2379 port will be allocated.<br />If 0 or negative, the port will not be exposed. | | |
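For example, a sketch exposing the API server through a LoadBalancer service on the default ports (values are illustrative):
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: mycluster
spec:
  expose:
    loadBalancer:
      serverPort: 443   # default HTTPS port; 0 or negative skips exposing it
      etcdPort: 2379    # default etcd port
```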
#### NodePortConfig
NodePortConfig specifies options for exposing the API server through NodePort.
_Appears in:_
- [ExposeConfig](#exposeconfig)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `serverPort` _integer_ | ServerPort is the port on each node on which the K3s server is exposed when type is NodePort.<br />If not specified, a random port between 30000-32767 will be allocated.<br />If out of range, the port will not be exposed. | | |
| `etcdPort` _integer_ | ETCDPort is the port on each node on which the ETCD service is exposed when type is NodePort.<br />If not specified, a random port between 30000-32767 will be allocated.<br />If out of range, the port will not be exposed. | | |
#### PersistenceConfig
PersistenceConfig specifies options for persisting etcd data.
_Appears in:_
- [ClusterSpec](#clusterspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `type` _[PersistenceMode](#persistencemode)_ | Type specifies the persistence mode. | dynamic | |
| `storageClassName` _string_ | StorageClassName is the name of the StorageClass to use for the PVC.<br />This field is only relevant in "dynamic" mode. | | |
| `storageRequestSize` _[Quantity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#quantity-resource-api)_ | StorageRequestSize is the requested size for the PVC.<br />This field is only relevant in "dynamic" mode. | 2G | |
#### PersistenceMode
_Underlying type:_ _string_
PersistenceMode is the storage mode of a Cluster.
_Appears in:_
- [PersistenceConfig](#persistenceconfig)
#### PersistentVolumeClaimSyncConfig
PersistentVolumeClaimSyncConfig specifies the sync options for PersistentVolumeClaims.
_Appears in:_
- [SyncConfig](#syncconfig)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `enabled` _boolean_ | Enabled is an on/off switch for syncing resources. | true | |
| `selector` _object (keys:string, values:string)_ | Selector specifies set of labels of the resources that will be synced, if empty<br />then all resources of the given type will be synced. | | |
#### PodSecurityAdmissionLevel
_Underlying type:_ _string_
PodSecurityAdmissionLevel is the policy level applied to the pods in the namespace.
_Validation:_
- Enum: [privileged baseline restricted]
_Appears in:_
- [VirtualClusterPolicySpec](#virtualclusterpolicyspec)
#### PriorityClassSyncConfig
PriorityClassSyncConfig specifies the sync options for PriorityClasses.
_Appears in:_
- [SyncConfig](#syncconfig)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `enabled` _boolean_ | Enabled is an on/off switch for syncing resources. | false | |
| `selector` _object (keys:string, values:string)_ | Selector specifies set of labels of the resources that will be synced, if empty<br />then all resources of the given type will be synced. | | |
#### SecretMount
SecretMount defines a secret to be mounted into server or agent pods,
allowing for custom configurations, certificates, or other sensitive data.
_Appears in:_
- [ClusterSpec](#clusterspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `secretName` _string_ | secretName is the name of the secret in the pod's namespace to use.<br />More info: https://kubernetes.io/docs/concepts/storage/volumes#secret | | |
| `items` _[KeyToPath](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#keytopath-v1-core) array_ | items If unspecified, each key-value pair in the Data field of the referenced<br />Secret will be projected into the volume as a file whose name is the<br />key and content is the value. If specified, the listed keys will be<br />projected into the specified paths, and unlisted keys will not be<br />present. If a key is specified which is not present in the Secret,<br />the volume setup will error unless it is marked optional. Paths must be<br />relative and may not contain the '..' path or start with '..'. | | |
| `defaultMode` _integer_ | defaultMode is Optional: mode bits used to set permissions on created files by default.<br />Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.<br />YAML accepts both octal and decimal values, JSON requires decimal values<br />for mode bits. Defaults to 0644.<br />Directories within the path are not affected by this setting.<br />This might be in conflict with other options that affect the file<br />mode, like fsGroup, and the result can be other mode bits set. | | |
| `optional` _boolean_ | optional specifies whether the Secret or its keys must be defined | | |
| `mountPath` _string_ | MountPath is the path within server and agent pods where the<br />secret contents will be mounted. | | |
| `subPath` _string_ | SubPath is an optional path within the secret to mount instead of the root.<br />When specified, only the specified key from the secret will be mounted as a file<br />at MountPath, keeping the parent directory writable. | | |
| `role` _string_ | Role is the type of the k3k pod that will be used to mount the secret.<br />This can be 'server', 'agent', or 'all' (for both). | | Enum: [server agent all] <br /> |
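A sketch of mounting a secret into the server pods only; the Secret name and mount path are hypothetical:
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: mycluster
spec:
  secretMounts:
    - secretName: my-custom-config   # hypothetical Secret in the cluster's namespace
      mountPath: /etc/rancher/k3s    # where the Secret contents appear inside the pods
      role: server                   # mount into server pods only ("agent" or "all" also valid)
```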
#### SecretSyncConfig
SecretSyncConfig specifies the sync options for Secrets.
_Appears in:_
- [SyncConfig](#syncconfig)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `enabled` _boolean_ | Enabled is an on/off switch for syncing resources. | true | |
| `selector` _object (keys:string, values:string)_ | Selector specifies set of labels of the resources that will be synced, if empty<br />then all resources of the given type will be synced. | | |
#### ServiceSyncConfig
ServiceSyncConfig specifies the sync options for services.
_Appears in:_
- [SyncConfig](#syncconfig)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `enabled` _boolean_ | Enabled is an on/off switch for syncing resources. | true | |
| `selector` _object (keys:string, values:string)_ | Selector specifies set of labels of the resources that will be synced, if empty<br />then all resources of the given type will be synced. | | |
#### SyncConfig
SyncConfig defines the resources that should be synced from the virtual cluster to the host cluster.
_Appears in:_
- [ClusterSpec](#clusterspec)
- [VirtualClusterPolicySpec](#virtualclusterpolicyspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `services` _[ServiceSyncConfig](#servicesyncconfig)_ | Services resources sync configuration. | \{ enabled:true \} | |
| `configMaps` _[ConfigMapSyncConfig](#configmapsyncconfig)_ | ConfigMaps resources sync configuration. | \{ enabled:true \} | |
| `secrets` _[SecretSyncConfig](#secretsyncconfig)_ | Secrets resources sync configuration. | \{ enabled:true \} | |
| `ingresses` _[IngressSyncConfig](#ingresssyncconfig)_ | Ingresses resources sync configuration. | \{ enabled:false \} | |
| `persistentVolumeClaims` _[PersistentVolumeClaimSyncConfig](#persistentvolumeclaimsyncconfig)_ | PersistentVolumeClaims resources sync configuration. | \{ enabled:true \} | |
| `priorityClasses` _[PriorityClassSyncConfig](#priorityclasssyncconfig)_ | PriorityClasses resources sync configuration. | \{ enabled:false \} | |
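As an illustrative sketch, the defaults above can be overridden per resource type; the selector label below is hypothetical:
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: mycluster
spec:
  sync:
    ingresses:
      enabled: true                   # disabled by default
      selector:
        sync.k3k.io/enabled: "true"   # hypothetical label; only matching Ingresses are synced
    secrets:
      enabled: false                  # turn off Secret syncing entirely
```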
#### VirtualClusterPolicy
VirtualClusterPolicy allows defining common configurations and constraints
for the clusters bound to the policy.
_Appears in:_
- [VirtualClusterPolicyList](#virtualclusterpolicylist)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `k3k.io/v1beta1` | | |
| `kind` _string_ | `VirtualClusterPolicy` | | |
| `metadata` _[ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#objectmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `spec` _[VirtualClusterPolicySpec](#virtualclusterpolicyspec)_ | Spec defines the desired state of the VirtualClusterPolicy. | \{ \} | |
#### VirtualClusterPolicyList
VirtualClusterPolicyList is a list of VirtualClusterPolicy resources.
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `apiVersion` _string_ | `k3k.io/v1beta1` | | |
| `kind` _string_ | `VirtualClusterPolicyList` | | |
| `metadata` _[ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#listmeta-v1-meta)_ | Refer to Kubernetes API documentation for fields of `metadata`. | | |
| `items` _[VirtualClusterPolicy](#virtualclusterpolicy) array_ | | | |
#### VirtualClusterPolicySpec
VirtualClusterPolicySpec defines the desired state of a VirtualClusterPolicy.
_Appears in:_
- [VirtualClusterPolicy](#virtualclusterpolicy)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `quota` _[ResourceQuotaSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcequotaspec-v1-core)_ | Quota specifies the resource limits for the clusters bound to the policy. | | |
| `limit` _[LimitRangeSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#limitrangespec-v1-core)_ | Limit specifies the LimitRange that will be applied to all pods within the VirtualClusterPolicy<br />to set defaults and constraints (min/max) | | |
| `defaultNodeSelector` _object (keys:string, values:string)_ | DefaultNodeSelector specifies the node selector that applies to all clusters (server + agent) in the target Namespace. | | |
| `defaultPriorityClass` _string_ | DefaultPriorityClass specifies the priorityClassName applied to all pods of all clusters in the target Namespace. | | |
| `allowedMode` _[ClusterMode](#clustermode)_ | AllowedMode specifies the allowed cluster provisioning mode. Defaults to "shared". | shared | Enum: [shared virtual] <br /> |
| `disableNetworkPolicy` _boolean_ | DisableNetworkPolicy indicates whether to disable the creation of a default network policy for cluster isolation. | | |
| `podSecurityAdmissionLevel` _[PodSecurityAdmissionLevel](#podsecurityadmissionlevel)_ | PodSecurityAdmissionLevel specifies the pod security admission level applied to the pods in the namespace. | | Enum: [privileged baseline restricted] <br /> |
| `sync` _[SyncConfig](#syncconfig)_ | Sync specifies the resources types that will be synced from virtual cluster to host cluster. | \{ \} | |
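Combining a few of these fields, a policy sketch might look as follows (the quota values are illustrative):
```yaml
apiVersion: k3k.io/v1beta1
kind: VirtualClusterPolicy
metadata:
  name: standard-dev-policy
spec:
  allowedMode: shared
  podSecurityAdmissionLevel: baseline
  quota:
    hard:
      cpu: "8"
      memory: 16Gi
      pods: "50"
```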


@@ -0,0 +1,19 @@
{{- define "gvDetails" -}}
{{- $gv := . -}}
[id="{{ asciidocGroupVersionID $gv | asciidocRenderAnchorID }}"]
== {{ $gv.GroupVersionString }}
{{ $gv.Doc }}
{{- if $gv.Kinds }}
=== Resource Types
{{- range $gv.SortedKinds }}
- {{ $gv.TypeForKind . | asciidocRenderTypeLink }}
{{- end }}
{{ end }}
{{ range $gv.SortedTypes }}
{{ template "type" . }}
{{ end }}
{{- end -}}


@@ -0,0 +1,19 @@
{{- define "gvList" -}}
{{- $groupVersions := . -}}
[id="k3k-api-reference"]
= API Reference
:revdate: "2006-01-02"
:page-revdate: {revdate}
:anchor_prefix: k8s-api
== Packages
{{- range $groupVersions }}
- {{ asciidocRenderGVLink . }}
{{- end }}
{{ range $groupVersions }}
{{ template "gvDetails" . }}
{{ end }}
{{- end -}}


@@ -0,0 +1,43 @@
{{- define "type" -}}
{{- $type := . -}}
{{- if asciidocShouldRenderType $type -}}
[id="{{ asciidocTypeID $type | asciidocRenderAnchorID }}"]
=== {{ $type.Name }}
{{ if $type.IsAlias }}_Underlying type:_ _{{ asciidocRenderTypeLink $type.UnderlyingType }}_{{ end }}
{{ $type.Doc }}
{{ if $type.Validation -}}
_Validation:_
{{- range $type.Validation }}
- {{ . }}
{{- end }}
{{- end }}
{{ if $type.References -}}
_Appears In:_
{{ range $type.SortedReferences }}
* {{ asciidocRenderTypeLink . }}
{{- end }}
{{- end }}
{{ if $type.Members -}}
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
{{ if $type.GVK -}}
| *`apiVersion`* __string__ | `{{ $type.GVK.Group }}/{{ $type.GVK.Version }}` | |
| *`kind`* __string__ | `{{ $type.GVK.Kind }}` | |
{{ end -}}
{{ range $type.Members -}}
| *`{{ .Name }}`* __{{ asciidocRenderType .Type }}__ | {{ template "type_members" . }} | {{ .Default }} | {{ range .Validation -}} {{ asciidocRenderValidation . }} +
{{ end }}
{{ end -}}
|===
{{ end -}}
{{- end -}}
{{- end -}}


@@ -0,0 +1,8 @@
{{- define "type_members" -}}
{{- $field := . -}}
{{- if eq $field.Name "metadata" -}}
Refer to Kubernetes API documentation for fields of `metadata`.
{{ else -}}
{{ asciidocRenderFieldDoc $field.Doc }}
{{- end -}}
{{- end -}}

docs/development.md Normal file

@@ -0,0 +1,188 @@
# Development
## Prerequisites
To start developing K3k you will need:
- Go
- Docker
- Helm
- A running Kubernetes cluster
> [!IMPORTANT]
>
> Virtual clusters in shared mode need to have a configured storage provider, unless the `--persistence-type ephemeral` flag is used.
>
> To install the [`local-path-provisioner`](https://github.com/rancher/local-path-provisioner) and set it as the default storage class you can run:
>
> ```
> kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.34/deploy/local-path-storage.yaml
> kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
> ```
### TLDR
```shell
#!/bin/bash
set -euo pipefail
# These environment variables configure the image repository and tag.
export REPO=ghcr.io/myuser
export VERSION=dev-$(date -u '+%Y%m%d%H%M')
make
make push
make install
```
### Makefile
To see all the available Make commands you can run `make help`, e.g.:
```
-> % make help
all Run 'make' or 'make all' to run 'version', 'generate', 'build' and 'package'
version Print the current version
build Build the K3k binaries (k3k, k3k-kubelet and k3kcli)
package Package the k3k and k3k-kubelet Docker images
push Push the K3k images to the registry
test Run all the tests
test-unit Run the unit tests (skips the e2e)
test-controller Run the controller tests (pkg/controller)
test-kubelet-controller Run the kubelet controller tests
test-e2e Run the e2e tests
test-cli Run the cli tests
generate Generate the CRDs specs
docs Build the CRDs and CLI docs
docs-crds Build the CRDs docs
docs-cli Build the CLI docs
lint Find any linting issues in the project
fmt Format source files in the project
validate Validate the project checking for any dependency or doc mismatch
install Install K3k with Helm on the targeted Kubernetes cluster
help Show this help.
```
### Build
To build the needed binaries (`k3k`, `k3k-kubelet` and the `k3kcli`) and package the images you can simply run `make`.
By default the `rancher` repository will be used, but you can customize this to your registry with the `REPO` env var:
```
REPO=ghcr.io/userorg make
```
To customize the tag you can also explicitly set the VERSION:
```
VERSION=dev-$(date -u '+%Y%m%d%H%M') make
```
### Push
You will need to push the built images to your registry, and you can use the `make push` command to do this.
### Install
Once you have your images available you can install K3k with the `make install` command. This will use `helm` to install the release.
## Tests
To run the tests you can just run `make test`, or one of the other available "sub-tests" targets (`test-unit`, `test-controller`, `test-e2e`, `test-cli`).
When running the tests, the namespaces used are cleaned up. If you want to keep them for debugging, set the `KEEP_NAMESPACES` environment variable, e.g.:
```
KEEP_NAMESPACES=true make test-e2e
```
The e2e and cli tests run against the cluster configured in your KUBECONFIG environment variable. Running the tests with the `K3K_DOCKER_INSTALL` environment variable set will use `testcontainers` instead:
```
K3K_DOCKER_INSTALL=true make test-e2e
```
We use [Ginkgo](https://onsi.github.io/ginkgo/), and [`envtest`](https://book.kubebuilder.io/reference/envtest) for testing the controllers.
The required binaries for `envtest` are installed with [`setup-envtest`](https://pkg.go.dev/sigs.k8s.io/controller-runtime/tools/setup-envtest), in the `.envtest` folder.
## CRDs and Docs
We are using Kubebuilder and `controller-gen` to build the needed CRDs. To generate the specs you can run `make generate`.
Remember also to update the CRD documentation by running the `make docs` command.
## How to install k3k on k3d
This document provides a guide on how to install k3k on [k3d](https://k3d.io).
### Installing k3d
Since k3d uses docker under the hood, we need to expose on the host the ports that we'll later use as NodePorts when creating virtual clusters.
Create the k3d cluster in the following way:
```bash
k3d cluster create k3k -p "30000-30010:30000-30010@server:0"
```
With this syntax ports from 30000 to 30010 will be exposed on the host.
### Install k3k
Now install k3k as usual:
```bash
helm repo update
helm install --namespace k3k-system --create-namespace k3k k3k/k3k
```
### Create a virtual cluster
Once the k3k controller is up and running, create a namespace for our first virtual cluster:
```bash
kubectl create ns k3k-mycluster
```
Then create the virtual cluster, exposing it through a NodePort on one of the ports that we set up in the previous step:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: mycluster
  namespace: k3k-mycluster
spec:
  expose:
    nodePort:
      serverPort: 30001
EOF
```
Check that the cluster is ready:
```bash
kubectl get po -n k3k-mycluster
```
The last step is to get the kubeconfig to connect to the virtual cluster we've just created:
```bash
k3kcli kubeconfig generate --name mycluster --namespace k3k-mycluster --kubeconfig-server localhost:30001
```
> [!IMPORTANT]
> Because of technical limitations, it is not possible to create virtual clusters in `virtual` mode with k3d or any other dockerized environment (Kind, Minikube).

docs/howtos/airgap.md Normal file

@@ -0,0 +1,92 @@
# K3k Air Gap Installation Guide
Applicable K3k modes: `virtual`, `shared`
This guide describes how to deploy **K3k** in an **air-gapped environment**, including the packaging of required images, Helm chart configurations, and cluster creation using a private container registry.
---
## 1. Package Required Container Images
### 1.1: Follow K3s Air Gap Preparation
Begin with the official K3s air gap packaging instructions:
[K3s Air Gap Installation Docs](https://docs.k3s.io/installation/airgap)
### 1.2: Include K3k-Specific Images
In addition to the K3s images, make sure to include the following in your image bundle:
| Image Name | Description |
| --------------------------- | --------------------------------------------------------------- |
| `rancher/k3k:<tag>` | K3k controller image (replace `<tag>` with the desired version) |
| `rancher/k3k-kubelet:<tag>` | K3k agent image for shared mode |
| `rancher/k3s:<tag>` | K3s server/agent image for virtual clusters |
Load these images into your internal (air-gapped) registry.
---
## 2. Configure Helm Chart for Air Gap installation
Update the `values.yaml` file in the K3k Helm chart with air gap settings:
```yaml
controller:
  imagePullSecrets: [] # Optional
  image:
    repository: rancher/k3k
    tag: "" # Specify the version tag
    pullPolicy: "" # Optional: "IfNotPresent", "Always", etc.
agent:
  imagePullSecrets: []
  virtual:
    image:
      repository: rancher/k3s
      pullPolicy: "" # Optional
  shared:
    image:
      repository: rancher/k3k-kubelet
      tag: "" # Specify the version tag
      pullPolicy: "" # Optional
server:
  imagePullSecrets: [] # Optional
  image:
    repository: rancher/k3s
    pullPolicy: "" # Optional
```
These values enforce the use of internal image repositories for the K3k controller, the agent and the server.
**Note**: All virtual clusters will automatically use these settings.
---
## 3. Enforce Registry in Virtual Clusters
When creating a virtual cluster, use the `--system-default-registry` flag to ensure all system components (e.g., CoreDNS) pull from your internal registry:
```bash
k3kcli cluster create \
--server-args "--system-default-registry=registry.internal.domain" \
my-cluster
```
This flag is passed directly to the K3s server in the virtual cluster, influencing all system workload image pulls.
[K3s Server CLI Reference](https://docs.k3s.io/cli/server#k3s-server-cli-help)
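The same setting can be expressed declaratively on the `Cluster` resource by passing the flag through `serverArgs` (the registry hostname is illustrative):
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  serverArgs:
    - --system-default-registry=registry.internal.domain
```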
---
## 4. Specify K3s Version for Virtual Clusters
K3k allows specifying the K3s version used in each virtual cluster:
```bash
k3kcli cluster create \
--k3s-version v1.29.4+k3s1 \
my-cluster
```
- If omitted, the **host cluster's K3s version** will be used by default, which might not be available if it's not part of the air gap package.


@@ -0,0 +1,79 @@
# How to Choose Between Shared and Virtual Mode
This guide helps you choose the right mode for your virtual cluster: **Shared** or **Virtual**.
If you're unsure, start with **Shared mode** — it's the default and fits most common scenarios.
---
## Shared Mode (default)
**Best for:**
- Developers who want to run workloads quickly without managing Kubernetes internals
- Platform teams that require visibility and control over all workloads
- Users who need access to host-level resources (e.g., GPUs)
In **Shared mode**, the virtual cluster runs its own K3s server but relies on the host to execute workloads. The virtual kubelet syncs resources, enabling lightweight, fast provisioning with support for cluster resource isolation. More details on the [architecture](./../architecture.md#shared-mode).
---
### Use Cases by Persona
#### 👩‍💻 Developer
*“I'm building a web app that should be exposed outside the virtual cluster.”*
→ Use **Shared mode**. It allows you to [expose](./expose-workloads.md) your application.
#### 👩‍🔬 Data Scientist
*“I need to run Jupyter notebooks that leverage the cluster's GPU.”*
→ Use **Shared mode**. It gives access to physical devices while keeping overhead low.
#### 🧑‍💼 Platform Admin
*"I want to monitor and secure all tenant workloads from a central location."*
→ Use **Shared mode**. Host-level agents (e.g., observability, policy enforcement) work across all virtual clusters.
#### 🔒 Security Engineer
*"I need to enforce security policies like network policies or runtime scanning across all workloads."*
→ Use **Shared mode**. The platform can enforce policies globally without tenant bypass.
*"I need to test a new admission controller or policy engine."*
→ Use **Shared mode**, if it's scoped to your virtual cluster. You can run tools like Kubewarden without affecting the host.
#### 🔁 CI/CD Engineer
*"I want to spin up disposable virtual clusters per pipeline run, fast and with low resource cost."*
→ Use **Shared mode**. It's quick to provision and ideal for short-lived, namespace-scoped environments.
---
## Virtual Mode
**Best for:**
- Advanced users who need full Kubernetes isolation
- Developers testing experimental or cluster-wide features
- Use cases requiring control over the entire Kubernetes control plane
In **Virtual mode**, the virtual cluster runs its own isolated Kubernetes control plane. It supports different CNIs, and API configurations — ideal for deep experimentation or advanced workloads. More details on the [architecture](./../architecture.md#virtual-mode).
---
### Use Cases by Persona
#### 👩‍💻 Developer
*“I need to test a new Kubernetes feature gate that's disabled in the host cluster.”*
→ Use **Virtual mode**. You can configure your own control plane flags and API features.
#### 🧑‍💼 Platform Admin
*“We're testing upgrades across Kubernetes versions, including new API behaviors.”*
→ Use **Virtual mode**. You can run different Kubernetes versions and safely validate upgrade paths.
#### 🌐 Network Engineer
*“I'm evaluating a new CNI that needs full control of the cluster's networking.”*
→ Use **Virtual mode**. You can run a separate CNI stack without affecting the host or other tenants.
#### 🔒 Security Engineer
*“I'm testing a new admission controller and policy engine before rolling it out cluster-wide.”*
→ Use **Virtual mode**, if you need to test cluster-wide policies, custom admission flow, or advanced extensions with full control.
---
## Still Not Sure?
If you're evaluating more advanced use cases or want a deeper comparison, see the full trade-off breakdown in the [Architecture documentation](../architecture.md).


@@ -0,0 +1,302 @@
# How to: Create a Virtual Cluster
This guide walks through the various ways to create and manage virtual clusters in K3K. We'll cover common use cases using both the **Custom Resource Definitions (CRDs)** and the **K3K CLI**, so you can choose the method that fits your workflow.
> 📘 For full reference:
> - [CRD Reference Documentation](../crds/crds.md)
> - [CLI Reference Documentation](../cli/k3kcli.md)
> - [Full example](../advanced-usage.md)
> [!NOTE]
> 🚧 Some features are currently only available via the CRD interface. CLI support may be added in the future.
---
## Use Case: Create and Expose a Basic Virtual Cluster
### CRD Method
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: k3kcluster-ingress
spec:
  tlsSANs:
    - my-cluster.example.com
  expose:
    ingress:
      ingressClassName: nginx
      annotations:
        nginx.ingress.kubernetes.io/ssl-passthrough: "true"
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
        nginx.ingress.kubernetes.io/ssl-redirect: "HTTPS"
```
This will create a virtual cluster in `shared` mode and expose it via an ingress with the specified hostname.
### CLI Method
*No CLI method available yet*
---
## Use Case: Create a Virtual Cluster with Persistent Storage (**Default**)
### CRD Method
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: k3kcluster-persistent
spec:
  persistence:
    type: dynamic
    storageClassName: local-path
    storageRequestSize: 30Gi
```
This ensures that the virtual cluster stores its state persistently on a 30Gi volume.
If `storageClassName` is not set, the default StorageClass will be used.
If `storageRequestSize` is not set, a 2G volume will be requested by default.
### CLI Method
```sh
k3kcli cluster create \
--persistence-type dynamic \
--storage-class-name local-path \
k3kcluster-persistent
```
> [!NOTE]
> The `k3kcli` does not support configuring the `storageRequestSize` yet.
---
## Use Case: Create a Highly Available Virtual Cluster in `shared` mode
### CRD Method
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: k3kcluster-ha
spec:
  servers: 3
```
This will create a virtual cluster with 3 servers and a default 2G volume for persistence.
### CLI Method
```sh
k3kcli cluster create \
--servers 3 \
k3kcluster-ha
```
---
## Use Case: Create a Highly Available Virtual Cluster in `virtual` mode
### CRD Method
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: k3kcluster-virtual
spec:
  mode: virtual
  servers: 3
  agents: 3
```
This will create a virtual cluster with 3 servers, 3 agents, and a default 2G volume for persistence.
> [!NOTE]
> Agents only exist for `virtual` mode.
### CLI Method
```sh
k3kcli cluster create \
--agents 3 \
--servers 3 \
--mode virtual \
k3kcluster-virtual
```
---
## Use Case: Create an Ephemeral Virtual Cluster
### CRD Method
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: k3kcluster-ephemeral
spec:
  persistence:
    type: ephemeral
```
This will create an ephemeral virtual cluster with no persistence and a single server.
### CLI Method
```sh
k3kcli cluster create \
--persistence-type ephemeral \
k3kcluster-ephemeral
```
---
## Use Case: Create a Virtual Cluster with a Custom Kubernetes Version
### CRD Method
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: k3kcluster-custom-k8s
spec:
  version: "v1.33.1-k3s1"
```
This sets the virtual cluster's Kubernetes version explicitly.
> [!NOTE]
> Only [K3s](https://k3s.io) distributions are supported. You can find compatible versions on the K3s GitHub [release page](https://github.com/k3s-io/k3s/releases).
### CLI Method
```sh
k3kcli cluster create \
--version v1.33.1-k3s1 \
k3kcluster-custom-k8s
```
---
## Use Case: Create a Virtual Cluster with Custom Resource Limits
### CRD Method
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: k3kcluster-resourced
spec:
  mode: virtual
  serverLimit:
    cpu: "1"
    memory: "2Gi"
  workerLimit:
    cpu: "1"
    memory: "2Gi"
```
This configures the CPU and memory limits for the virtual cluster's server and worker pods.
### CLI Method
*No CLI method available yet*
---
## Use Case: Create a Virtual Cluster on specific host nodes
### CRD Method
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: k3kcluster-node-placed
spec:
  nodeSelector:
    disktype: ssd
```
This places the virtual cluster on nodes with the label `disktype: ssd`.
> [!NOTE]
> In `shared` mode workloads are also scheduled on the selected nodes
### CLI Method
*No CLI method available yet*
---
## Use Case: Create a Virtual Cluster with a Rancher Host Cluster Kubeconfig
When using a `kubeconfig` generated by Rancher, you need to tell the CLI which server host to use for the virtual cluster `kubeconfig`.
By default, `k3kcli` uses the server from the current host `kubeconfig` to determine the target cluster.
### CRD Method
*Not applicable*
### CLI Method
```sh
k3kcli cluster create \
--kubeconfig-server https://abc.xyz \
k3kcluster-host-rancher
```
---
## Use Case: Create a Virtual Cluster Behind an HTTP Proxy
### CRD Method
```yaml
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: k3kcluster-http-proxy
spec:
  serverEnvs:
    - name: HTTP_PROXY
      value: "http://abc.xyz"
  agentEnvs:
    - name: HTTP_PROXY
      value: "http://abc.xyz"
```
This configures an HTTP proxy for both servers and agents in the virtual cluster.
> [!NOTE]
> This can be leveraged to pass **any custom environment variables** to the servers and agents — not just proxy settings.
### CLI Method
```sh
k3kcli cluster create \
--server-envs HTTP_PROXY=http://abc.xyz \
--agent-envs HTTP_PROXY=http://abc.xyz \
k3kcluster-http-proxy
```
---
## How to: Connect to a Virtual Cluster
Once the virtual cluster is running, you can connect to it using the CLI:
### CLI Method
```sh
k3kcli kubeconfig generate --namespace k3k-mycluster --name mycluster
export KUBECONFIG=$PWD/mycluster-kubeconfig.yaml
kubectl get nodes
```
This command generates a `kubeconfig` file, which you can use to access your virtual cluster via `kubectl`.


@@ -0,0 +1,52 @@
# How-to: Expose Workloads Outside the Virtual Cluster
This guide explains how to expose workloads running in k3k-managed virtual clusters to external networks. Behavior varies depending on the operating mode of the virtual cluster.
## Virtual Mode
> [!CAUTION]
> **Not Supported**
> In *virtual mode*, direct external exposure of workloads is **not available**.
> This mode is designed for strong isolation and does not expose the virtual cluster's network directly.
## Shared Mode
In *shared mode*, workloads can be exposed to the external network using standard Kubernetes service types or an ingress controller, depending on your requirements.
> [!NOTE]
> *`Services`* are always synced from the virtual cluster to the host cluster following the same principle described [here](../architecture.md#shared-mode) for pods.
### Option 1: Use `NodePort` or `LoadBalancer`
To expose a service such as a web application outside the host cluster:
- **`NodePort`**:
Exposes the service on a static port on each node's IP.
Access the service at `http://<NodeIP>:<NodePort>`.
- **`LoadBalancer`**:
Provisions an external load balancer (if supported by the environment) and exposes the service via the load balancer's IP.
> **Note**
> The `LoadBalancer` IP is currently not reflected back to the virtual cluster service.
> [k3k issue #365](https://github.com/rancher/k3k/issues/365)
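For instance, a plain `NodePort` Service defined inside the virtual cluster (names and ports below are illustrative) is synced to the host and becomes reachable at `http://<NodeIP>:<NodePort>`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-web-app
spec:
  type: NodePort
  selector:
    app: my-web-app     # must match your workload's pod labels
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # optional; omit to let Kubernetes pick a port from the range
```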
### Option 2: Use `ClusterIP` for Internal Communication
If the workload should only be accessible to other services or pods *within* the host cluster:
- Use the `ClusterIP` service type.
This exposes the service on an internal IP, only reachable inside the host cluster.
### Option 3: Use Ingress for HTTP/HTTPS Routing
For more advanced routing (e.g., hostname- or path-based routing), deploy an **Ingress controller** in the virtual cluster, and expose it via `NodePort` or `LoadBalancer`.
This allows you to:
- Define Ingress resources in the virtual cluster.
- Route external traffic to services within the virtual cluster.
> **Note**
> Support for using the host cluster's Ingress controller from a virtual cluster is being tracked in
> [k3k issue #356](https://github.com/rancher/k3k/issues/356)


@@ -0,0 +1,147 @@
# Troubleshooting
This guide walks through common troubleshooting steps for working with K3K virtual clusters.
---
## `too many open files` error
The `k3k-kubelet` or the `k3kcluster-server-` pods run into the following issue:
```sh
E0604 13:14:53.369369 1 leaderelection.go:336] error initially creating leader election record: Post "https://k3k-http-proxy-k3kcluster-service/apis/coordination.k8s.io/v1/namespaces/kube-system/leases": context canceled
{"level":"fatal","timestamp":"2025-06-04T13:14:53.369Z","logger":"k3k-kubelet","msg":"virtual manager stopped","error":"too many open files"}
```
This typically indicates a low limit on inotify watchers or file descriptors on the host system.
To increase the inotify limits, connect to the host nodes and run:
```sh
sudo sysctl -w fs.inotify.max_user_watches=2099999999
sudo sysctl -w fs.inotify.max_user_instances=2099999999
sudo sysctl -w fs.inotify.max_queued_events=2099999999
```
You can persist these settings by adding them to `/etc/sysctl.conf`:
```sh
fs.inotify.max_user_watches=2099999999
fs.inotify.max_user_instances=2099999999
fs.inotify.max_queued_events=2099999999
```
Apply the changes:
```sh
sudo sysctl -p
```
You can find more details in this [KB document](https://www.suse.com/support/kb/doc/?id=000020048).
---
## Inspect Controller Logs for Failure Diagnosis
To view logs for a failed virtual cluster:
```sh
kubectl logs -n k3k-system -l app.kubernetes.io/name=k3k
```
This retrieves logs from K3k controller components.
---
## Inspect Cluster Logs for Failure Diagnosis
To view logs for a failed virtual cluster:
```sh
kubectl logs -n <cluster_namespace> -l cluster=<cluster_name>
```
This retrieves logs from K3k cluster components (`agents, server and virtual-kubelet`).
> 💡 You can also use `kubectl describe cluster <cluster_name>` to check for recent events and status conditions.
---
## Virtual Cluster Not Starting or Stuck in Pending
Some of the most common causes are related to missing prerequisites or wrong configuration.
### Storage class not available
When creating a Virtual Cluster with `dynamic` persistence, a PVC is needed. You can check whether the PVC was created but not bound with `kubectl get pvc -n <cluster_namespace>`. If you see a pending PVC, you probably don't have a default storage class defined, or you have specified a wrong one.
#### Example with wrong storage class
The `pvc` is pending:
```bash
kubectl get pvc -n k3k-test-storage
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
varlibrancherk3s-k3k-test-storage-server-0 Pending not-available <unset> 4s
```
The `server` is pending:
```bash
kubectl get po -n k3k-test-storage
NAME READY STATUS RESTARTS AGE
k3k-test-storage-kubelet-j4zn5 1/1 Running 0 54s
k3k-test-storage-server-0 0/1 Pending 0 54s
```
To fix this, use a valid storage class. You can list the existing storage classes using:
```bash
kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 3d6h
```
### Wrong node selector
When creating a Virtual Cluster with a `defaultNodeSelector`, if the selector does not match any node all pods will stay pending.
#### Example
The `server` is pending:
```bash
kubectl get po
NAME READY STATUS RESTARTS AGE
k3k-k3kcluster-node-placed-server-0 0/1 Pending 0 58s
```
The description of the pod provides the reason:
```bash
kubectl describe po k3k-k3kcluster-node-placed-server-0
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 84s default-scheduler 0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
```
To fix this, specify a node affinity/selector that matches at least one node.
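You can list the labels present on the host nodes to pick a selector that matches:
```bash
kubectl get nodes --show-labels
```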
### Image pull issues (airgapped setup)
When creating a Virtual Cluster in an air-gapped environment, images need to be available in the configured registry. Check for an `ImagePullBackOff` status when listing the pods in the virtual cluster namespace.
#### Example
The `server` is failing:
```bash
kubectl get po -n k3k-test-registry
NAME READY STATUS RESTARTS AGE
k3k-test-registry-kubelet-r4zh5 1/1 Running 0 54s
k3k-test-registry-server-0 0/1 ImagePullBackOff 0 54s
```
To fix this, make sure the failing image is available in the configured registry. You can describe the failing pod to get more details.
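For example, using the pod from the output above; the `Events` section reports the exact image pull error:
```bash
kubectl describe po k3k-test-registry-server-0 -n k3k-test-registry
```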

File diff suppressed because it is too large


File diff suppressed because it is too large


View File

@@ -0,0 +1,147 @@
# VirtualClusterPolicy
The VirtualClusterPolicy Custom Resource in K3k provides a way to define and enforce consistent configurations, security settings, and resource management rules for your virtual clusters and the Namespaces they operate within.
By using VCPs, administrators can centrally manage these aspects, reducing manual configuration, ensuring alignment with organizational standards, and enhancing the overall security and operational consistency of the K3k environment.
## Core Concepts
### What is a VirtualClusterPolicy?
A `VirtualClusterPolicy` is a cluster-scoped Kubernetes Custom Resource that specifies a set of rules and configurations. These policies are then applied to K3k virtual clusters (`Cluster` resources) operating within Kubernetes Namespaces that are explicitly bound to a VCP.
### Binding a Policy to a Namespace
To apply a `VirtualClusterPolicy` to one or more Namespaces (and thus to all K3k `Cluster` resources within those Namespaces), you need to label the desired Namespace(s). Add the following label to your Namespace metadata:
`policy.k3k.io/policy-name: <YOUR_POLICY_NAME>`
**Example: Labeling a Namespace**
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app-namespace
  labels:
    policy.k3k.io/policy-name: "standard-dev-policy"
```
In this example, `my-app-namespace` will adhere to the rules defined in the `VirtualClusterPolicy` named `standard-dev-policy`. Multiple Namespaces can be bound to the same policy for uniform configuration, or different Namespaces can be bound to distinct policies.
It's also important to note what happens when a Namespace's policy binding changes. If a Namespace is unbound from a VirtualClusterPolicy (by removing the `policy.k3k.io/policy-name` label), K3k will clean up and remove the resources (such as ResourceQuotas, LimitRanges, and managed Namespace labels) that were originally applied by that policy. Similarly, if the label is changed to bind the Namespace to a new VirtualClusterPolicy, K3k will first remove the resources associated with the old policy before applying the configurations from the new one, ensuring a clean transition.
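You can also apply or change the binding on an existing Namespace with `kubectl` (using the example names from above):
```sh
# bind the Namespace to the policy
kubectl label namespace my-app-namespace policy.k3k.io/policy-name=standard-dev-policy

# unbind it again (the trailing "-" removes the label)
kubectl label namespace my-app-namespace policy.k3k.io/policy-name-
```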
### Default Policy Values
If you create a `VirtualClusterPolicy` without specifying any `spec` fields (e.g., using `k3kcli policy create my-default-policy`), it will be created with default settings. Currently, this includes `spec.allowedMode` being set to `"shared"`.
```yaml
# Example of a minimal VCP (after creation with defaults)
apiVersion: k3k.io/v1beta1
kind: VirtualClusterPolicy
metadata:
  name: my-default-policy
spec:
  allowedMode: shared
```
## Key Capabilities & Examples
A `VirtualClusterPolicy` can configure several aspects of the Namespaces it's bound to and the virtual clusters operating within them.
### 1. Restricting Allowed Virtual Cluster Modes (`AllowedMode`)
You can restrict the `mode` (e.g., "shared" or "virtual") in which K3k `Cluster` resources can be provisioned within bound Namespaces. If a `Cluster` is created in a bound Namespace with a mode that the policy does not allow, its creation might proceed, but an error is reported in the `Cluster` resource's status.
**Example:** Allow only "shared" mode clusters.
```yaml
apiVersion: k3k.io/v1beta1
kind: VirtualClusterPolicy
metadata:
  name: shared-only-policy
spec:
  allowedMode: shared
```
You can also specify this using the CLI: `k3kcli policy create --mode shared shared-only-policy` (or `--mode virtual`).
### 2. Defining Resource Quotas (`quota`)
You can define resource consumption limits for bound Namespaces by specifying a `ResourceQuota`. K3k will create a `ResourceQuota` object in each bound Namespace with the provided specifications.
**Example:** Set CPU, memory, and pod limits.
```yaml
apiVersion: k3k.io/v1beta1
kind: VirtualClusterPolicy
metadata:
  name: quota-policy
spec:
  quota:
    hard:
      cpu: "10"
      memory: "20Gi"
      pods: "10"
```
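After the policy is bound, you can confirm that the `ResourceQuota` object exists in the Namespace (the Namespace name is a placeholder):
```sh
kubectl get resourcequota -n my-app-namespace
```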
### 3. Setting Limit Ranges (`limit`)
You can define default resource requests/limits and min/max constraints for containers running in bound Namespaces by specifying a `LimitRange`. K3k will create a `LimitRange` object in each bound Namespace.
**Example:** Define default CPU requests/limits and min/max CPU.
```yaml
apiVersion: k3k.io/v1beta1
kind: VirtualClusterPolicy
metadata:
  name: limit-policy
spec:
  limit:
    limits:
    - default:
        cpu: "500m"
      defaultRequest:
        cpu: "500m"
      max:
        cpu: "1"
      min:
        cpu: "100m"
      type: Container
```
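Once reconciled, you can inspect the `LimitRange` that K3k created in a bound Namespace (the Namespace name is a placeholder):
```sh
kubectl describe limitrange -n my-app-namespace
```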
### 4. Managing Network Isolation (`disableNetworkPolicy`)
By default, K3k creates a `NetworkPolicy` in bound Namespaces to provide network isolation for virtual clusters (especially in shared mode). You can disable the creation of this default policy.
**Example:** Disable the default NetworkPolicy.
```yaml
apiVersion: k3k.io/v1beta1
kind: VirtualClusterPolicy
metadata:
  name: no-default-netpol-policy
spec:
  disableNetworkPolicy: true
```
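With the default behavior (i.e. without `disableNetworkPolicy`), you can list the NetworkPolicy that K3k manages in a bound Namespace (the Namespace name is a placeholder):
```sh
kubectl get networkpolicy -n my-app-namespace
```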
### 5. Enforcing Pod Security Admission (`podSecurityAdmissionLevel`)
You can enforce Pod Security Standards (PSS) by specifying a Pod Security Admission (PSA) level. K3k will apply the corresponding PSA labels to each bound Namespace. The allowed values are `privileged`, `baseline`, and `restricted`; K3k adds labels like `pod-security.kubernetes.io/enforce: <level>` to the bound Namespace.
**Example:** Enforce the "baseline" PSS level.
```yaml
apiVersion: k3k.io/v1beta1
kind: VirtualClusterPolicy
metadata:
  name: baseline-psa-policy
spec:
  podSecurityAdmissionLevel: baseline
```
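To verify that the PSA labels were applied to a bound Namespace (the Namespace name is a placeholder):
```sh
kubectl get namespace my-app-namespace --show-labels
```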
## Further Reading
* For a complete reference of all `VirtualClusterPolicy` spec fields, see the [API Reference for VirtualClusterPolicy](./crds/crds.md#virtualclusterpolicy).
* To understand how VCPs fit into the overall K3k system, see the [Architecture](./architecture.md) document.

View File

@@ -1,18 +0,0 @@
apiVersion: k3k.io/v1alpha1
kind: Cluster
metadata:
  name: example1
spec:
  servers: 1
  agents: 3
  token: test
  version: v1.26.0-k3s2
  clusterCIDR: 10.30.0.0/16
  serviceCIDR: 10.31.0.0/16
  clusterDNS: 10.30.0.10
  serverArgs:
  - "--write-kubeconfig-mode=777"
  expose:
    ingress:
      enabled: true
      ingressClassName: "nginx"

View File

@@ -0,0 +1,15 @@
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: shared-multiple-servers
spec:
  mode: shared
  servers: 3
  agents: 3
  version: v1.33.1-k3s1
  serverArgs:
  - "--write-kubeconfig-mode=777"
  tlsSANs:
  - myserver.app
  expose:
    nodePort: {}

View File

@@ -0,0 +1,14 @@
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: shared-single-server
spec:
  mode: shared
  servers: 1
  version: v1.33.1-k3s1
  serverArgs:
  - "--write-kubeconfig-mode=777"
  tlsSANs:
  - myserver.app
  expose:
    nodePort: {}

View File

@@ -1,18 +0,0 @@
apiVersion: k3k.io/v1alpha1
kind: Cluster
metadata:
  name: single-server
spec:
  servers: 1
  agents: 3
  token: test
  version: v1.26.0-k3s2
  clusterCIDR: 10.30.0.0/16
  serviceCIDR: 10.31.0.0/16
  clusterDNS: 10.30.0.10
  serverArgs:
  - "--write-kubeconfig-mode=777"
  expose:
    ingress:
      enabled: true
      ingressClassName: "nginx"

View File

@@ -0,0 +1,13 @@
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: virtual-server
spec:
  mode: virtual
  servers: 3
  agents: 3
  version: v1.33.1-k3s1
  tlsSANs:
  - myserver.app
  expose:
    nodePort: {}

View File

@@ -0,0 +1,9 @@
apiVersion: k3k.io/v1beta1
kind: VirtualClusterPolicy
metadata:
  name: policy-example
spec:
  allowedMode: shared
  disableNetworkPolicy: true
  # podSecurityAdmissionLevel: "baseline"
  # defaultPriorityClass: "lowpriority"

282
go.mod
View File

@@ -1,89 +1,219 @@
module github.com/rancher/k3k
go 1.20
go 1.25
replace (
go.etcd.io/etcd/api/v3 => github.com/k3s-io/etcd/api/v3 v3.5.9-k3s1
go.etcd.io/etcd/client/v3 => github.com/k3s-io/etcd/client/v3 v3.5.9-k3s1
)
require (
github.com/sirupsen/logrus v1.8.1
github.com/urfave/cli v1.22.12
go.etcd.io/etcd/api/v3 v3.5.9
go.etcd.io/etcd/client/v3 v3.5.5
k8s.io/api v0.26.1
k8s.io/apimachinery v0.26.1
k8s.io/client-go v0.26.1
k8s.io/klog v1.0.0
toolchain go1.25.6
require (
github.com/blang/semver/v4 v4.0.0
github.com/go-logr/logr v1.4.3
github.com/go-logr/zapr v1.3.0
github.com/google/go-cmp v0.7.0
github.com/onsi/ginkgo/v2 v2.21.0
github.com/onsi/gomega v1.36.0
github.com/rancher/dynamiclistener v1.27.5
github.com/sirupsen/logrus v1.9.4
github.com/spf13/cobra v1.10.2
github.com/spf13/pflag v1.0.10
github.com/spf13/viper v1.21.0
github.com/stretchr/testify v1.11.1
github.com/testcontainers/testcontainers-go v0.40.0
github.com/testcontainers/testcontainers-go/modules/k3s v0.40.0
github.com/virtual-kubelet/virtual-kubelet v1.11.1-0.20250530103808-c9f64e872803
go.etcd.io/etcd/api/v3 v3.5.21
go.etcd.io/etcd/client/v3 v3.5.21
go.uber.org/zap v1.27.1
gopkg.in/yaml.v2 v2.4.0
helm.sh/helm/v3 v3.18.5
k8s.io/api v0.33.7
k8s.io/apiextensions-apiserver v0.33.7
k8s.io/apimachinery v0.33.7
k8s.io/apiserver v0.33.7
k8s.io/cli-runtime v0.33.7
k8s.io/client-go v0.33.7
k8s.io/component-base v0.33.7
k8s.io/component-helpers v0.33.7
k8s.io/kubectl v0.33.7
k8s.io/kubelet v0.33.7
k8s.io/kubernetes v1.33.7
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738
sigs.k8s.io/controller-runtime v0.19.4
)
require (
cel.dev/expr v0.19.1 // indirect
dario.cat/mergo v1.0.2 // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/BurntSushi/toml v1.5.0 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver/v3 v3.3.0 // indirect
github.com/Masterminds/sprig/v3 v3.3.0 // indirect
github.com/Masterminds/squirrel v1.5.4 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/coreos/go-semver v0.3.0 // indirect
github.com/coreos/go-systemd/v22 v22.3.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/emicklei/go-restful/v3 v3.9.0 // indirect
github.com/evanphx/json-patch/v5 v5.6.0 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.20.0 // indirect
github.com/go-openapi/swag v0.19.14 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/google/gnostic v0.5.7-v3refs // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/imdario/mergo v0.3.6 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/mailru/easyjson v0.7.6 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/prometheus/client_golang v1.14.0 // indirect
github.com/prometheus/client_model v0.3.0 // indirect
github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.9 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.6.0 // indirect
go.uber.org/zap v1.24.0 // indirect
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b // indirect
golang.org/x/sys v0.3.0 // indirect
golang.org/x/term v0.3.0 // indirect
golang.org/x/time v0.3.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20220502173005-c8bf987b8c21 // indirect
google.golang.org/grpc v1.49.0 // indirect
google.golang.org/protobuf v1.28.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiextensions-apiserver v0.26.0 // indirect
k8s.io/component-base v0.26.1 // indirect
k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
)
require (
github.com/go-logr/logr v1.2.3 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
github.com/containerd/containerd v1.7.30 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/platforms v0.2.1 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cpuguy83/dockercfg v0.3.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.6 // indirect
github.com/cyphar/filepath-securejoin v0.5.1 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/docker v28.5.1+incompatible // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/ebitengine/purego v0.8.4 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch v5.9.11+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-gorp/gorp/v3 v3.1.0 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/go-task/slim-sprig/v3 v3.0.0 // indirect
github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/cel-go v0.23.2 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gorilla/mux v1.8.1 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/gosuri/uitable v0.0.4 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.24.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/huandu/xstrings v1.5.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmoiron/sqlx v1.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/magiconair/properties v1.8.10 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.17 // indirect
github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/go-archive v0.1.0 // indirect
github.com/moby/patternmatcher v0.6.0 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/sys/sequential v0.6.0 // indirect
github.com/moby/sys/user v0.4.0 // indirect
github.com/moby/sys/userns v0.1.0 // indirect
github.com/moby/term v0.5.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/rancher/dynamiclistener v0.3.5
golang.org/x/net v0.3.1-0.20221206200815-1e63c2f08a10 // indirect
golang.org/x/text v0.5.0 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/prometheus/client_golang v1.22.0 // indirect
github.com/prometheus/client_model v0.6.2
github.com/prometheus/common v0.64.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/rubenv/sql-migrate v1.8.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sagikazarmark/locafero v0.11.0 // indirect
github.com/santhosh-tekuri/jsonschema/v6 v6.0.2 // indirect
github.com/shirou/gopsutil/v4 v4.25.6 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 // indirect
github.com/spf13/afero v1.15.0 // indirect
github.com/spf13/cast v1.10.0 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.21 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.58.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/otel/sdk v1.33.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.opentelemetry.io/proto/otlp v1.4.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.yaml.in/yaml/v2 v2.4.2 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/text v0.31.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.org/x/tools v0.38.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8 // indirect
google.golang.org/grpc v1.68.1 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
k8s.io/apiserver v0.26.1
k8s.io/klog/v2 v2.80.1
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448
sigs.k8s.io/controller-runtime v0.14.1
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/controller-manager v0.33.7 // indirect
k8s.io/klog/v2 v2.130.1
k8s.io/kms v0.33.7 // indirect
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
oras.land/oras-go/v2 v2.6.0 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.2 // indirect
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
sigs.k8s.io/kustomize/api v0.19.0 // indirect
sigs.k8s.io/kustomize/kyaml v0.19.0 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect
sigs.k8s.io/yaml v1.5.0 // indirect
)

1060
go.sum

File diff suppressed because it is too large


View File

@@ -1,21 +0,0 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail
CODEGEN_GIT_PKG=https://github.com/kubernetes/code-generator.git
git clone --depth 1 ${CODEGEN_GIT_PKG} || true
SCRIPT_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
CODEGEN_PKG=./code-generator
"${CODEGEN_PKG}/generate-groups.sh" \
"deepcopy" \
github.com/rancher/k3k/pkg/generated \
github.com/rancher/k3k/pkg/apis \
"k3k.io:v1alpha1" \
--go-header-file "${SCRIPT_ROOT}"/hack/boilerplate.go.txt \
--output-base "$(dirname "${BASH_SOURCE[0]}")/../../../.."
rm -rf code-generator

34
k3k-kubelet/README.md Normal file
View File

@@ -0,0 +1,34 @@
## Virtual Kubelet
This package provides an implementation of a virtual cluster node using [virtual-kubelet](https://github.com/virtual-kubelet/virtual-kubelet).
The implementation is based on several projects, including:
- [Virtual Kubelet](https://github.com/virtual-kubelet/virtual-kubelet)
- [Kubectl](https://github.com/kubernetes/kubectl)
- [Client-go](https://github.com/kubernetes/client-go)
- [Azure-Aci](https://github.com/virtual-kubelet/azure-aci)
## Overview
This project creates a node that registers itself in the virtual cluster. When workloads are scheduled to this node, it simply creates/updates the workload on the host cluster.
## Usage
Build/Push the image using (from the root of rancher/k3k):
```
make build
docker buildx build -f package/Dockerfile . -t $REPO/$IMAGE:$TAG
```
When running, it is recommended to deploy a k3k cluster with 1 server (with `--disable-agent` as a server arg) and no agents (so that the workloads can only be scheduled on the virtual node/host cluster).
After the image is built, it should be deployed with the following ENV vars set (see the sketch after this list):
- `CLUSTER_NAME` should be the name of the cluster.
- `CLUSTER_NAMESPACE` should be the namespace the cluster is running in.
- `HOST_KUBECONFIG` should be the path on the local filesystem (in container) to a kubeconfig for the host cluster (likely stored in a secret/mounted as a volume).
- `VIRT_KUBECONFIG` should be the path on the local filesystem (in container) to a kubeconfig for the virtual cluster (likely stored in a secret/mounted as a volume).
- `VIRT_POD_IP` should be the IP that the container is accessible from.
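A minimal sketch of how these variables might be wired into a container spec; the image reference, kubeconfig paths, and values below are placeholders, not the project's actual manifests:
```yaml
# Hypothetical container snippet; names, paths, and values are placeholders.
containers:
- name: k3k-kubelet
  image: $REPO/$IMAGE:$TAG
  env:
  - name: CLUSTER_NAME
    value: my-cluster
  - name: CLUSTER_NAMESPACE
    value: my-cluster-namespace
  - name: HOST_KUBECONFIG
    value: /opt/kubeconfig/host.yaml   # mounted from a secret
  - name: VIRT_KUBECONFIG
    value: /opt/kubeconfig/virt.yaml   # mounted from a secret
  - name: VIRT_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP        # standard downward API
```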
This project is still under development and there are many features yet to be implemented, but it can run a basic nginx pod.

37
k3k-kubelet/config.go Normal file
View File

@@ -0,0 +1,37 @@
package main

import (
	"errors"
)

// config has all virtual-kubelet startup options
type config struct {
	ClusterName      string `mapstructure:"clusterName"`
	ClusterNamespace string `mapstructure:"clusterNamespace"`
	ServiceName      string `mapstructure:"serviceName"`
	Token            string `mapstructure:"token"`
	AgentHostname    string `mapstructure:"agentHostname"`
	HostKubeconfig   string `mapstructure:"hostKubeconfig"`
	VirtKubeconfig   string `mapstructure:"virtKubeconfig"`
	KubeletPort      int    `mapstructure:"kubeletPort"`
	WebhookPort      int    `mapstructure:"webhookPort"`
	ServerIP         string `mapstructure:"serverIP"`
	Version          string `mapstructure:"version"`
	MirrorHostNodes  bool   `mapstructure:"mirrorHostNodes"`
}

func (c *config) validate() error {
	if c.ClusterName == "" {
		return errors.New("cluster name is not provided")
	}
	if c.ClusterNamespace == "" {
		return errors.New("cluster namespace is not provided")
	}
	if c.AgentHostname == "" {
		return errors.New("agent Hostname is not provided")
	}
	return nil
}

View File

@@ -0,0 +1,154 @@
package syncer

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	"github.com/rancher/k3k/k3k-kubelet/translate"
	"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)

const (
	configMapControllerName = "configmap-syncer"
	configMapFinalizerName  = "configmap.k3k.io/finalizer"
)

type ConfigMapSyncer struct {
	// SyncerContext contains all client information for host and virtual cluster
	*SyncerContext
}

func (c *ConfigMapSyncer) Name() string {
	return configMapControllerName
}

// AddConfigMapSyncer adds the configmap syncer controller to the manager of the virtual cluster
func AddConfigMapSyncer(ctx context.Context, virtMgr, hostMgr manager.Manager, clusterName, clusterNamespace string) error {
	reconciler := ConfigMapSyncer{
		SyncerContext: &SyncerContext{
			VirtualClient: virtMgr.GetClient(),
			HostClient:    hostMgr.GetClient(),
			Translator: translate.ToHostTranslator{
				ClusterName:      clusterName,
				ClusterNamespace: clusterNamespace,
			},
			ClusterName:      clusterName,
			ClusterNamespace: clusterNamespace,
		},
	}

	name := reconciler.Translator.TranslateName(clusterNamespace, configMapControllerName)

	return ctrl.NewControllerManagedBy(virtMgr).
		Named(name).
		For(&corev1.ConfigMap{}).WithEventFilter(predicate.NewPredicateFuncs(reconciler.filterResources)).
		Complete(&reconciler)
}

func (c *ConfigMapSyncer) filterResources(object client.Object) bool {
	var cluster v1beta1.Cluster

	ctx := context.Background()
	if err := c.HostClient.Get(ctx, types.NamespacedName{Name: c.ClusterName, Namespace: c.ClusterNamespace}, &cluster); err != nil {
		return false
	}

	// check for configMap Sync Config
	syncConfig := cluster.Spec.Sync.ConfigMaps

	// If syncing is disabled, only process deletions to allow for cleanup.
	if !syncConfig.Enabled {
		return object.GetDeletionTimestamp() != nil
	}

	labelSelector := labels.SelectorFromSet(syncConfig.Selector)
	if labelSelector.Empty() {
		return true
	}

	return labelSelector.Matches(labels.Set(object.GetLabels()))
}

// Reconcile implements reconcile.Reconciler and synchronizes ConfigMaps from the virtual cluster to the host cluster
func (c *ConfigMapSyncer) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
	log := ctrl.LoggerFrom(ctx).WithValues("cluster", c.ClusterName, "clusterNamespace", c.ClusterNamespace)
	ctx = ctrl.LoggerInto(ctx, log)

	var cluster v1beta1.Cluster
	if err := c.HostClient.Get(ctx, types.NamespacedName{Name: c.ClusterName, Namespace: c.ClusterNamespace}, &cluster); err != nil {
		return reconcile.Result{}, err
	}

	var virtualConfigMap corev1.ConfigMap
	if err := c.VirtualClient.Get(ctx, req.NamespacedName, &virtualConfigMap); err != nil {
		return reconcile.Result{}, client.IgnoreNotFound(err)
	}

	syncedConfigMap := c.translateConfigMap(&virtualConfigMap)
	if err := controllerutil.SetControllerReference(&cluster, syncedConfigMap, c.HostClient.Scheme()); err != nil {
		return reconcile.Result{}, err
	}

	// handle deletion
	if !virtualConfigMap.DeletionTimestamp.IsZero() {
		// delete the synced configMap if it exists
		if err := c.HostClient.Delete(ctx, syncedConfigMap); err != nil && !apierrors.IsNotFound(err) {
			return reconcile.Result{}, err
		}

		// remove the finalizer after cleaning up the synced configMap
		if controllerutil.RemoveFinalizer(&virtualConfigMap, configMapFinalizerName) {
			if err := c.VirtualClient.Update(ctx, &virtualConfigMap); err != nil {
				return reconcile.Result{}, err
			}
		}

		return reconcile.Result{}, nil
	}

	// add the finalizer if it does not exist
	if controllerutil.AddFinalizer(&virtualConfigMap, configMapFinalizerName) {
		if err := c.VirtualClient.Update(ctx, &virtualConfigMap); err != nil {
			return reconcile.Result{}, err
		}
	}

	var hostConfigMap corev1.ConfigMap
	if err := c.HostClient.Get(ctx, types.NamespacedName{Name: syncedConfigMap.Name, Namespace: syncedConfigMap.Namespace}, &hostConfigMap); err != nil {
		if apierrors.IsNotFound(err) {
			log.Info("creating the ConfigMap for the first time on the host cluster")
			return reconcile.Result{}, c.HostClient.Create(ctx, syncedConfigMap)
		}

		return reconcile.Result{}, err
	}

	// TODO: Add option to keep labels/annotation set by the host cluster
	log.Info("updating ConfigMap on the host cluster")

	return reconcile.Result{}, c.HostClient.Update(ctx, syncedConfigMap)
}

// translateConfigMap translates a ConfigMap created in the virtual cluster to the corresponding host cluster object
func (c *ConfigMapSyncer) translateConfigMap(configMap *corev1.ConfigMap) *corev1.ConfigMap {
	hostConfigMap := configMap.DeepCopy()
	c.Translator.TranslateTo(hostConfigMap)

	return hostConfigMap
}

Some files were not shown because too many files have changed in this diff