Compare commits


179 Commits

Author SHA1 Message Date
Dario Tranchitella
b5a7ff6e6c chore(helm): bumping up kamaji version 2023-04-16 21:24:04 +02:00
Dario Tranchitella
9f937a1eec chore(makefile): bumping up kamaji version 2023-04-16 21:24:04 +02:00
Dario Tranchitella
0ae3659949 docs: supporting up to k8s 1.27 2023-04-16 21:24:04 +02:00
Dario Tranchitella
736fbf0505 feat(kubeadm): updating support to k8s 1.27 2023-04-14 07:17:15 +02:00
Dario Tranchitella
8dc0672718 fix: updating ingress status with provided loadbalancer ip 2023-04-13 15:16:43 +02:00
Dario Tranchitella
27f598fbfc fix: avoiding nil pointer when updating status for ingress 2023-04-13 15:16:43 +02:00
r3drun3
d3603c7187 docs(readme): add go report badge 2023-04-07 12:07:01 +02:00
bsctl
83797fc0b3 docs: refactor the versioning section 2023-04-05 19:28:58 +02:00
bsctl
517a4a3458 docs: refactor the contribute section 2023-04-05 19:28:58 +02:00
Massimiliano Giovagnoli
649cf0c852 docs: add a quickstart
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2023-03-31 15:23:22 +02:00
Dario Tranchitella
741090f4e6 chore(helm): releasing v0.2.2 2023-03-27 17:08:29 +02:00
Dario Tranchitella
6e8a86d975 chore(kustomize): releasing v0.2.2 2023-03-27 17:08:29 +02:00
Dario Tranchitella
21b01fae9d chore(makefile): releasing v0.2.2 2023-03-27 17:08:29 +02:00
Pietro Terrizzi
a0cd4591a9 docs: added backup and restore shortguide 2023-03-22 21:18:00 +01:00
Pietro Terrizzi
f757c5a5aa docs(velero): initial commit 2023-03-22 21:18:00 +01:00
Dario Tranchitella
b15a764381 fix: ensuring to save kubeconfig status upon restoration 2023-03-13 17:03:17 +01:00
Dario Tranchitella
8d3dcdf467 fix(helm): aligning docs to latest changes 2023-02-24 10:18:56 +01:00
Dario Tranchitella
aada5c29a2 chore(helm): releasing helm v0.11.3 2023-02-24 09:57:46 +01:00
Dario Tranchitella
cb4a493e28 chore(helm): bumping up to v0.2.1 2023-02-24 09:56:39 +01:00
Dario Tranchitella
f783aff3c0 chore(kustomize): bumping up to v0.2.1 2023-02-24 09:56:39 +01:00
Dario Tranchitella
c8bdaf0aa2 chore(makefile): bumping up to v0.2.1 2023-02-24 09:56:39 +01:00
Dario Tranchitella
d1c2fe020e feat: upgrading to kubernetes v1.26.1 2023-02-24 09:56:23 +01:00
Dario Tranchitella
5b93d7181f fix: avoiding secrets regeneration upon velero restore 2023-02-23 19:01:47 +01:00
Dario Tranchitella
1273d95340 feat(helm): using tolerations for jobs 2023-02-22 14:19:24 +01:00
Filippo Pinton
1e4c78b646 fix(helm): remove duplicate labels 2023-02-21 15:25:20 +01:00
Pietro Terrizzi
903cfc0bae docs(helm): added pvc customAnnotations 2023-02-15 18:07:14 +01:00
Pietro Terrizzi
7bd142bcb2 feat(helm): added customAnnotations to PVC 2023-02-15 18:07:14 +01:00
Pietro Terrizzi
153a43e6f2 chore: k8s.gcr.io is deprecated in favor of registry.k8s.io 2023-02-15 18:06:26 +01:00
Dario Tranchitella
2abaeb5586 docs: keeping labels consistent 2023-02-13 11:24:36 +01:00
Dario Tranchitella
a8a41951cb refactor!: keeping labels consistent
The label kamaji.clastix.io/soot is deprecated in favour of
kamaji.clastix.io/name, every external resource referring to this must
be aligned prior to updating to this version.
2023-02-13 11:24:36 +01:00
Dario Tranchitella
a0485c338b refactor(checksum): using helper functions 2023-02-10 15:31:28 +01:00
mendrugory
89edc8bbf5 chore: no maintainer 2023-02-09 14:24:35 +01:00
Dario Tranchitella
43765769ec feat: v0.2.0 release 2023-02-06 22:34:33 +01:00
Dario Tranchitella
0016f121ed feat(helm): emptyDir with memory medium for flock performances 2023-02-06 22:12:50 +01:00
Dario Tranchitella
c3fb5373f6 fix(e2e): waiting for reconciliation of the TCP 2023-02-06 22:12:50 +01:00
Dario Tranchitella
670f10ad4e docs: documenting new flag max-concurrent-tcp-reconciles 2023-02-06 22:12:50 +01:00
Dario Tranchitella
4110b688c9 feat: configurable max concurrent tcp reconciles 2023-02-06 22:12:50 +01:00
Dario Tranchitella
830d86a38a feat: introducing enqueueback reconciliation status
Required for the changes introduced with 74f7157e8b
2023-02-06 22:12:50 +01:00
Dario Tranchitella
44d1f3fa7f refactor: updating local tcp instance to avoid 2nd retrieval 2023-02-06 22:12:50 +01:00
Dario Tranchitella
e23ae3c7f3 feat: automatically set gomaxprocs to match container cpu quota 2023-02-06 22:12:50 +01:00
bsctl
713b0754bb docs: update to latest features 2023-02-05 10:08:49 +01:00
Dario Tranchitella
da924b30ff docs: benchmarking kamaji on AWS 2023-02-05 09:09:02 +01:00
Dario Tranchitella
0f0d83130f chore(helm): ServiceMonitor support 2023-02-05 09:09:02 +01:00
Dario Tranchitella
634e808d2d chore(kustomize): ServiceMonitor support 2023-02-05 09:09:02 +01:00
bsctl
b99f224d32 fix(helm): handle basicAuth values for datastore 2023-02-05 09:07:20 +01:00
Dario Tranchitella
d02b5f427e test(e2e): kube-apiserver kubelet-preferred-address-types support 2023-01-22 14:56:47 +01:00
Dario Tranchitella
08b5bc05c3 docs: kube-apiserver kubelet-preferred-address-types support 2023-01-22 14:56:47 +01:00
Dario Tranchitella
4bd8e2d319 chore(helm): kube-apiserver kubelet-preferred-address-types support 2023-01-22 14:56:47 +01:00
Dario Tranchitella
a1f155fcab chore(kustomize): kube-apiserver kubelet-preferred-address-types support 2023-01-22 14:56:47 +01:00
Dario Tranchitella
743ea1343f feat(api): kube-apiserver kubelet-preferred-address-types support 2023-01-22 14:56:47 +01:00
Dario Tranchitella
41780bcb04 docs: tcp deployment strategy support 2023-01-17 10:01:21 +01:00
Dario Tranchitella
014297bb0f chore(helm): tcp deployment strategy support 2023-01-17 10:01:21 +01:00
Dario Tranchitella
20cfdd6931 chore(kustomize): tcp deployment strategy support 2023-01-17 10:01:21 +01:00
Dario Tranchitella
f03e250cf8 feat(api): deployment strategy support 2023-01-17 10:01:21 +01:00
Dario Tranchitella
2cdee08924 chore(helm): certificate authority rotation handling 2023-01-13 19:09:03 +01:00
Dario Tranchitella
6d27ca9e9e chore(kustomize): certificate authority rotation handling 2023-01-13 19:09:03 +01:00
Dario Tranchitella
2293e49e4b fix: certificate authority rotation handling 2023-01-13 19:09:03 +01:00
Dario Tranchitella
6b0f92baa3 docs: certificate authority rotation handling 2023-01-13 19:09:03 +01:00
Dario Tranchitella
8e94039962 feat(api)!: introducing ca rotating status 2023-01-13 19:09:03 +01:00
Dario Tranchitella
551df6df97 fix(kubeadm_phase): wrong string value representation 2023-01-13 19:09:03 +01:00
Massimiliano Giovagnoli
c905e16e75 chore(docs/guides): fix syntax on flux helmrelease
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2023-01-02 14:38:40 +01:00
Massimiliano Giovagnoli
e08792adc2 chore(docs/images): update flux diagram
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2023-01-02 14:38:40 +01:00
maxgio92
248b5082d0 docs(docs/content/guides/kamaji-gitops-flux.md): use third person
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
Co-authored-by: Dario Tranchitella <dario@tranchitella.eu>
2023-01-02 14:38:40 +01:00
Massimiliano Giovagnoli
5cebb05458 docs: add guide for managing tenant resources gitops way
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2023-01-02 14:38:40 +01:00
Dario Tranchitella
efbefba0b3 docs(api): aligning to latest changes 2022-12-22 11:57:29 +01:00
Dario Tranchitella
bc19203071 chore(gi): checking diff and docs alignment 2022-12-22 11:57:29 +01:00
Dario Tranchitella
c8b8dcc2d3 test(e2e): testing different datastores and migration 2022-12-22 11:57:29 +01:00
Dario Tranchitella
cf2721201d chore: adding samples and automating deployment of datastores 2022-12-22 11:57:29 +01:00
Dario Tranchitella
4aa77924f4 chore(helm): using webhooks for secrets instead of finalizers 2022-12-20 20:54:41 +01:00
Dario Tranchitella
a3c52e81f6 chore(kustomize): using webhooks for secrets instead of finalizers 2022-12-20 20:54:41 +01:00
Dario Tranchitella
7ed3c44401 refactor(datastore): using webhooks for secrets instead of finalizers 2022-12-20 20:54:41 +01:00
Dario Tranchitella
beebaf0364 fix(storage): wrong variable while assigning finalizers 2022-12-20 20:45:09 +01:00
Dario Tranchitella
b9cda29461 fix(migrate): allowing leases updates during migration 2022-12-20 20:45:09 +01:00
Dario Tranchitella
7db8a64bdd fix(etcd): using stored username for cert common name 2022-12-19 16:28:48 +01:00
Dario Tranchitella
c6abe03fd1 fix(soot): typo on params for service name and namespace 2022-12-19 10:44:39 +01:00
Dario Tranchitella
723fa1aea6 chore(helm): upgrading etcd to v3.5.6 2022-12-19 08:59:05 +01:00
Dario Tranchitella
7e0ec81ba2 deps: upgrading etcd to v3.5.6 2022-12-19 08:59:05 +01:00
Dario Tranchitella
7353bb5813 chore(gh): upgrading to go 1.19 2022-12-17 15:57:47 +01:00
Dario Tranchitella
96cedadf0a test(e2e): upgrading to ginkgo v2 2022-12-17 15:57:47 +01:00
Dario Tranchitella
76b603de1e chore(helm): upgrade to 1.26 2022-12-17 15:57:47 +01:00
Dario Tranchitella
7cff6b5850 chore(kustomize): upgrade to 1.26 2022-12-17 15:57:47 +01:00
Dario Tranchitella
6e6ea0189f refactor(k8s): upgrade to 1.26 2022-12-17 15:57:47 +01:00
Dario Tranchitella
aefdbc9481 deps(k8s): upgrade to 1.26 2022-12-17 15:57:47 +01:00
Dario Tranchitella
074279b3c2 chore(go): upgrading to 1.19 required for k8s 1.26 2022-12-17 15:57:47 +01:00
Dario Tranchitella
09891f4d71 chore(gh): running workflows on ubuntu-22.04 2022-12-17 15:46:53 +01:00
Dario Tranchitella
18c60461e5 refactor: conforming finalizers management 2022-12-16 22:44:42 +01:00
Dario Tranchitella
ceab662671 feat(soot): using finalizer for clean-up 2022-12-16 22:44:42 +01:00
Dario Tranchitella
d38098a57e fix(soot): ensure that manager is stopped upon tcp deletion 2022-12-16 22:44:42 +01:00
Dario Tranchitella
017a50b8f6 fix(soot): ensuring manager to restart upon tcp pod restart 2022-12-16 22:44:42 +01:00
Dario Tranchitella
b07062b4dd chore(kustomize): missing datastore finalizer rbac 2022-12-15 15:50:30 +01:00
Dario Tranchitella
b880cff8d7 fix: missing datastore finalizer rbac 2022-12-15 15:50:30 +01:00
Dario Tranchitella
3f7fa08871 refactor: removing unused scheme 2022-12-15 15:50:30 +01:00
Dario Tranchitella
8311f1fe1a fix: ensure default datastore exists before starting manager 2022-12-15 15:50:30 +01:00
Dario Tranchitella
77fff030bf chore(helm): support for runtime class 2022-12-14 21:24:01 +01:00
Dario Tranchitella
abada61930 chore(kustomize): support for runtime class 2022-12-14 21:24:01 +01:00
Dario Tranchitella
1eb1e0f17c feat: support for runtime class 2022-12-14 21:24:01 +01:00
Dario Tranchitella
e83c34776b refactor(soot): creating channel source during controller setup 2022-12-14 21:23:47 +01:00
Dario Tranchitella
3b902943f1 chore(helm): kubeadm phases are moved to soot manager 2022-12-14 21:23:47 +01:00
Dario Tranchitella
c5d62f3d82 chore(kustomize): kubeadm phases are moved to soot manager 2022-12-14 21:23:47 +01:00
Dario Tranchitella
938341a2e7 refactor(log): uniforming log for soot controllers 2022-12-14 21:23:47 +01:00
Dario Tranchitella
3ea721cf2b feat(kubeadm): moving phases to soot manager 2022-12-14 21:23:47 +01:00
Dario Tranchitella
5cbd085cf8 chore(helm): addons no more need checksum 2022-12-14 12:22:49 +01:00
Dario Tranchitella
e358fbe6bc chore(kustomize): addons no more need checksum 2022-12-14 12:22:49 +01:00
Dario Tranchitella
9d55e77902 refactor(api): no more need of checksum for addons 2022-12-14 12:22:49 +01:00
Dario Tranchitella
1e4640e8e6 feat(addons): implementation in the soot cluster 2022-12-14 12:22:49 +01:00
Dario Tranchitella
1b14922f55 refactor(kubeadm): preparing migration for addons to soot manager 2022-12-14 12:22:49 +01:00
Dario Tranchitella
11f800063f fix(konnectivity): typo in ca-cert cli flag 2022-12-14 12:22:49 +01:00
Dario Tranchitella
e11b459a3d fix(konnectivity): reconciliation failed and in loop 2022-12-14 12:22:49 +01:00
Dario Tranchitella
9c8de782f3 docs: datastore migration support 2022-12-14 10:17:30 +01:00
Dario Tranchitella
4c51eafc90 feat(konnectivity): reconciliation performed by soot manager 2022-12-12 16:22:36 +01:00
Dario Tranchitella
1a80fc5b28 fix(api): wrong konnectivity defaults 2022-12-12 16:22:36 +01:00
Dario Tranchitella
02052d5339 fix(helm): wrong konnectivity defaults 2022-12-12 16:22:36 +01:00
Dario Tranchitella
7e47e33b39 fix(kustomize): wrong konnectivity defaults 2022-12-12 16:22:36 +01:00
Dario Tranchitella
28c47d9d13 refactor: moving migrate webhook handling from tcp to soot manager 2022-12-12 16:22:36 +01:00
Dario Tranchitella
1ec257a729 feat: introducing soot controllers manager 2022-12-12 16:22:36 +01:00
Dario Tranchitella
68006b1102 fix(datastore): coalesce for storage configuration 2022-12-11 21:39:36 +01:00
Dario Tranchitella
cd109dcf06 fix: using slash prefix for etcd datastore 2022-12-11 21:39:36 +01:00
Dario Tranchitella
1138eb1dea fix: using the status storage schema for the etcd prefix 2022-12-09 11:54:23 +01:00
Dario Tranchitella
f4f914098c feat(migrate): enhancing job metadata 2022-12-08 14:33:20 +01:00
Dario Tranchitella
5e78b6392a feat(migrate): making timeout configurable 2022-12-08 14:33:20 +01:00
Dario Tranchitella
e25f95d7eb feat(migrate): making image configurable 2022-12-08 14:33:20 +01:00
Dario Tranchitella
7f49fc6125 refactor(konnectivity): removing default logging options
verbosity and logtostderr can now be enforced using the extra args
struct member for the server, and the agent as well.
2022-12-08 14:23:31 +01:00
Dario Tranchitella
8b9683802b fix: support for arguments without a value 2022-12-08 14:23:31 +01:00
Dario Tranchitella
cb5e35699e docs: support for konnectivity extra args 2022-12-08 14:23:31 +01:00
Dario Tranchitella
0d6246c098 chore(helm): support for konnectivity extra args 2022-12-08 14:23:31 +01:00
Dario Tranchitella
d8760fdc6e chore(kustomize): support for konnectivity extra args 2022-12-08 14:23:31 +01:00
Dario Tranchitella
c00df62ff7 feat(konnectivity)!: support for extra args 2022-12-08 14:23:31 +01:00
Dario Tranchitella
653a3933e8 chore(helm): decoupling agent and server struct 2022-12-08 14:23:31 +01:00
Dario Tranchitella
6775b2ae57 chore(kustomize): decoupling agent and server struct 2022-12-08 14:23:31 +01:00
Dario Tranchitella
5241fa64ed refactor(konnectivity)!: decoupling agent and server structs 2022-12-08 14:23:31 +01:00
Dario Tranchitella
723fef5336 feat(migrate): injecting webhook into tcp 2022-12-08 14:13:45 +01:00
Dario Tranchitella
8d1d8598c1 refactor: moving datastore migrate resource to its module 2022-12-08 14:13:45 +01:00
Dario Tranchitella
c96f58974b fix(helm): installing datastore upon completion 2022-12-04 22:12:37 +01:00
Dario Tranchitella
2d1daa8498 feat(datastore): validation webhook 2022-12-04 22:12:37 +01:00
Dario Tranchitella
fe948298d8 chore(helm): wrong crd validation markers 2022-12-04 22:12:37 +01:00
Dario Tranchitella
79942dda34 chore(kustomize): wrong crd validation markers 2022-12-04 22:12:37 +01:00
Dario Tranchitella
44919598ec fix(kubebuilder): wrong crd validation markers 2022-12-04 22:12:37 +01:00
Dario Tranchitella
2336d402c3 refactor: using custom validator and custom defaulter 2022-12-04 21:39:14 +01:00
Dario Tranchitella
79c59e55e5 feat: validation webhook to prevent DataStore migration to a different driver 2022-12-04 21:39:14 +01:00
Dario Tranchitella
95d0983faa chore(dockerfile): optimizing build 2022-12-03 12:04:04 +01:00
Dario Tranchitella
7e276e5ba1 chore(helm): support to datastore migration w/ the same driver 2022-12-03 12:04:04 +01:00
Dario Tranchitella
b2e646064f fix(helm): switching over webhook server service 2022-12-03 12:04:04 +01:00
Dario Tranchitella
3850ad9752 chore(kustomize): support to datastore migration w/ the same driver 2022-12-03 12:04:04 +01:00
Dario Tranchitella
9e899379f4 feat: support to datastore migration w/ the same driver 2022-12-03 12:04:04 +01:00
Dario Tranchitella
a260a92495 fix(psql): checking db and table ownership 2022-12-03 12:04:04 +01:00
Dario Tranchitella
cc4864ca9e feat: datastore migration drivers 2022-12-03 12:04:04 +01:00
Dario Tranchitella
ece1a4e7ee fix: avoiding inconsistency upon tcp retrieval and status update 2022-12-03 12:04:04 +01:00
Dario Tranchitella
eb2440ae62 refactor: abstracting datastore configuration retrieval 2022-12-03 12:04:04 +01:00
Dario Tranchitella
0c415707d7 fix(datastore): not deleting database content upon certificates change 2022-12-03 12:04:04 +01:00
Dario Tranchitella
7a6b0a8de3 fix(datastore): ensuring to update status upon any change 2022-12-03 12:04:04 +01:00
Dario Tranchitella
a31fbdc875 chore(makefile): allowing creation of multiple datastore instances 2022-12-03 12:04:04 +01:00
Dario Tranchitella
4ff0cdf28b docs: configuration for the manager command 2022-12-03 12:04:04 +01:00
Dario Tranchitella
ae573b137c chore(kustomize): removing rbac proxy and support for manager command 2022-12-03 12:04:04 +01:00
Dario Tranchitella
e81b3224c2 chore(helm): removing rbac proxy and support for manager command 2022-12-03 12:04:04 +01:00
Dario Tranchitella
4298bdd73e chore(dockerfile): manager command 2022-12-03 12:04:04 +01:00
Dario Tranchitella
15d0d57790 feat: refactoring for commands 2022-12-03 12:04:04 +01:00
Dario Tranchitella
c17a31ef82 fix: avoiding collision of datastore schemes 2022-11-29 18:25:52 +01:00
Dario Tranchitella
f0df1cfe6f fix: removing tcp data using prefix, and not range 2022-11-29 18:25:52 +01:00
Dario Tranchitella
1bcff90785 chore(kustomize): show datastore for each tcp 2022-11-27 18:57:38 +01:00
Dario Tranchitella
6c817a9ae2 chore(helm): show datastore for each tcp 2022-11-27 18:57:38 +01:00
Dario Tranchitella
5b9311f421 feat: show datastore for each tcp 2022-11-27 18:57:38 +01:00
Dario Tranchitella
0d607dfe5d refactor: adding finalizer upon datastore setup 2022-11-27 17:26:34 +01:00
Dario Tranchitella
11502bf359 refactor: retry on conflict for the status update 2022-11-27 17:26:34 +01:00
Dario Tranchitella
ff1c9fca16 chore(samples): updating to latest kubeadm supported version 2022-11-27 17:23:24 +01:00
Dario Tranchitella
adc4b7d98c chore(test): updating to latest kindest/node version 2022-11-27 17:23:24 +01:00
Dario Tranchitella
a96133f342 deps: upgrade to k8s 1.25.4 2022-11-27 17:23:24 +01:00
Dario Tranchitella
81fb429c83 test(e2e): validating tcp kubernetes version 2022-11-26 18:39:59 +01:00
Dario Tranchitella
190acc99b3 feat: tcp version validation upon create and update 2022-11-26 18:39:59 +01:00
Dario Tranchitella
b0a059d305 docs: cert-manager dependency 2022-11-26 16:56:26 +01:00
Dario Tranchitella
bcc7d0ebbd chore(makefile): installing cert-manager for e2e 2022-11-26 16:56:26 +01:00
Dario Tranchitella
9dc0a9a168 chore(makefile): crds diverged between kustomize and helm 2022-11-26 16:56:26 +01:00
Dario Tranchitella
d312738581 chore(helm): support for cert-manager and webhooks 2022-11-26 16:56:26 +01:00
Dario Tranchitella
30bc8cc2bf feat!: support for cert-manager and webhooks 2022-11-26 16:56:26 +01:00
Dario Tranchitella
55d7f09a34 chore(kustomize): support for cert-manager and webhooks 2022-11-26 16:56:26 +01:00
Dario Tranchitella
2c892d79e4 fix(ci): missing metadata upon container images release 2022-11-26 16:56:26 +01:00
Dario Tranchitella
43f1a6b95b chore(makefile): installing required dependencies 2022-11-26 16:56:26 +01:00
Dario Tranchitella
78ef34c9d6 fix(docs): aligning to latest changes for the chart documentation 2022-11-19 11:07:37 +01:00
Matteo Ruina
16d8b2d701 fix(helm): support installation on EKS 2022-11-18 16:50:00 +01:00
Dario Tranchitella
68764be716 chore(helm): support installation using --wait option 2022-10-22 09:47:08 +02:00
183 changed files with 9891 additions and 5311 deletions

View File

@@ -9,12 +9,12 @@ on:
jobs:
golangci:
name: lint
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v3
with:
go-version: '1.18'
go-version: '1.19'
check-latest: true
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v3.2.0
@@ -24,14 +24,14 @@ jobs:
args: --timeout 5m --config .golangci.yml
diff:
name: diff
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: actions/setup-go@v3
with:
go-version: '1.18'
go-version: '1.19'
check-latest: true
- run: make yaml-installation-file
- name: Checking if YAML installer file is not aligned
@@ -40,3 +40,8 @@ jobs:
run: test -z "$(git ls-files --others --exclude-standard 2> /dev/null)"
- name: Checking if source code is not formatted
run: test -z "$(git diff 2> /dev/null)"
- run: make apidoc
- name: Checking if generated API documentation files are not aligned
run: if [[ $(git diff | wc -l) -gt 0 ]]; then echo ">>> Untracked generated files have not been committed" && git --no-pager diff && exit 1; fi
- name: Checking if generated API documentation generated untracked files
run: test -z "$(git ls-files --others --exclude-standard 2> /dev/null)"

View File

@@ -7,12 +7,29 @@ on:
jobs:
docker-ci:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Generate build-args
id: build-args
run: |
# Declare vars for internal use
VERSION=$(git describe --abbrev=0 --tags)
GIT_HEAD_COMMIT=$(git rev-parse --short HEAD)
GIT_TAG_COMMIT=$(git rev-parse --short $VERSION)
GIT_MODIFIED_1=$(git diff $GIT_HEAD_COMMIT $GIT_TAG_COMMIT --quiet && echo "" || echo ".dev")
GIT_MODIFIED_2=$(git diff --quiet && echo "" || echo ".dirty")
# Export to GH_ENV
echo "GIT_LAST_TAG=$VERSION" >> $GITHUB_ENV
echo "GIT_HEAD_COMMIT=$GIT_HEAD_COMMIT" >> $GITHUB_ENV
echo "GIT_TAG_COMMIT=$GIT_TAG_COMMIT" >> $GITHUB_ENV
echo "GIT_MODIFIED=$(echo "$GIT_MODIFIED_1""$GIT_MODIFIED_2")" >> $GITHUB_ENV
echo "GIT_REPO=$(git config --get remote.origin.url)" >> $GITHUB_ENV
echo "BUILD_DATE=$(git log -1 --format="%at" | xargs -I{} date -d @{} +%Y-%m-%dT%H:%M:%S)" >> $GITHUB_ENV
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
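The build-args step above derives a version suffix from git state: `.dev` when HEAD has moved past the last tag's commit, `.dirty` when the worktree has uncommitted changes. The same decision logic, sketched in Go purely for illustration (a hypothetical helper, not part of the repository):

```go
package main

import "fmt"

// versionSuffix mirrors the GIT_MODIFIED_1/GIT_MODIFIED_2 logic in the
// workflow: `git diff $HEAD $TAG --quiet` failing means HEAD != tag
// (".dev"), and `git diff --quiet` failing means uncommitted changes
// (".dirty"). The two suffixes concatenate, as in the workflow.
func versionSuffix(headMatchesTag, worktreeClean bool) string {
	suffix := ""
	if !headMatchesTag {
		suffix += ".dev"
	}
	if !worktreeClean {
		suffix += ".dirty"
	}
	return suffix
}

func main() {
	fmt.Println("v0.2.3" + versionSuffix(true, true))   // tagged, clean build
	fmt.Println("v0.2.3" + versionSuffix(false, false)) // untagged, dirty build
}
```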

View File

@@ -29,14 +29,14 @@ on:
jobs:
kind:
name: Kubernetes
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: actions/setup-go@v3
with:
go-version: '1.18'
go-version: '1.19'
check-latest: true
- run: |
sudo apt-get update

View File

@@ -10,7 +10,7 @@ on:
jobs:
diff:
name: diff
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
@@ -19,7 +19,7 @@ jobs:
- name: Checking if Helm docs is not aligned
run: if [[ $(git diff | wc -l) -gt 0 ]]; then echo ">>> Untracked changes have not been committed" && git --no-pager diff && exit 1; fi
lint:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
- uses: azure/setup-helm@v1
@@ -29,7 +29,7 @@ jobs:
run: helm lint ./charts/kamaji
release:
if: startsWith(github.ref, 'refs/tags/helm-v')
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
- name: Publish Helm chart

.gitignore
View File

@@ -29,4 +29,5 @@ bin
**/*.key
**/*.pem
**/*.csr
**/server-csr.json
.DS_Store

View File

@@ -1,13 +1,5 @@
# Build the manager binary
FROM golang:1.18 as builder
ARG TARGETARCH
ARG GIT_HEAD_COMMIT
ARG GIT_TAG_COMMIT
ARG GIT_LAST_TAG
ARG GIT_MODIFIED
ARG GIT_REPO
ARG BUILD_DATE
FROM golang:1.19 as builder
WORKDIR /workspace
# Copy the Go Modules manifests
@@ -19,22 +11,30 @@ RUN go mod download
# Copy the go source
COPY main.go main.go
COPY cmd/ cmd/
COPY api/ api/
COPY controllers/ controllers/
COPY internal/ internal/
COPY indexers/ indexers/
# Build
ARG TARGETARCH
ARG GIT_HEAD_COMMIT
ARG GIT_TAG_COMMIT
ARG GIT_LAST_TAG
ARG GIT_MODIFIED
ARG GIT_REPO
ARG BUILD_DATE
RUN CGO_ENABLED=0 GOOS=linux GOARCH=$TARGETARCH go build \
-ldflags "-X github.com/clastix/kamaji/internal.GitRepo=$GIT_REPO -X github.com/clastix/kamaji/internal.GitTag=$GIT_LAST_TAG -X github.com/clastix/kamaji/internal.GitCommit=$GIT_HEAD_COMMIT -X github.com/clastix/kamaji/internal.GitDirty=$GIT_MODIFIED -X github.com/clastix/kamaji/internal.BuildTime=$BUILD_DATE" \
-a -o manager main.go
-a -o kamaji main.go
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
COPY ./kamaji.yaml .
COPY --from=builder /workspace/kamaji .
USER 65532:65532
ENTRYPOINT ["/manager"]
ENTRYPOINT ["/kamaji"]
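The `-ldflags "-X …"` arguments in the Dockerfile overwrite package-level string variables at link time, which is how the git metadata computed in CI ends up inside the binary. A minimal self-contained sketch of the pattern — in the real project the variables live in the `github.com/clastix/kamaji/internal` package, but `main`-package placeholders are used here so the example compiles on its own:

```go
package main

import "fmt"

// Defaults compiled into the binary; overridden at build time with e.g.
//   go build -ldflags "-X main.GitCommit=abc1234 -X main.BuildTime=2022-12-03T12:00:00"
// Only package-level, uninitialized-or-string-literal string variables
// can be set this way.
var (
	GitCommit = "unknown"
	BuildTime = "unknown"
)

func main() {
	// Without the ldflags override, the compiled-in defaults are printed.
	fmt.Printf("commit=%s build=%s\n", GitCommit, BuildTime)
}
```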

View File

@@ -3,7 +3,7 @@
# To re-generate a bundle for another specific version without changing the standard setup, you can:
# - use the VERSION as arg of the bundle target (e.g make bundle VERSION=0.0.2)
# - use environment variables to overwrite this value (e.g export VERSION=0.0.2)
VERSION ?= 0.1.1
VERSION ?= 0.2.3
# CHANNELS define the bundle channels used in the bundle.
# Add a new line here if you would like to change its default config. (E.g CHANNELS = "candidate,fast,stable")
@@ -77,7 +77,7 @@ helm: ## Download helm locally if necessary.
GINKGO = $(shell pwd)/bin/ginkgo
ginkgo: ## Download ginkgo locally if necessary.
$(call go-install-tool,$(GINKGO),github.com/onsi/ginkgo/ginkgo@v1.16.5)
$(call go-install-tool,$(GINKGO),github.com/onsi/ginkgo/v2/ginkgo@v2.6.0)
KIND = $(shell pwd)/bin/kind
kind: ## Download kind locally if necessary.
@@ -103,8 +103,6 @@ apidocs-gen: ## Download crdoc locally if necessary.
manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cp config/crd/bases/kamaji.clastix.io_tenantcontrolplanes.yaml charts/kamaji/crds/tenantcontrolplane.yaml
cp config/crd/bases/kamaji.clastix.io_datastores.yaml charts/kamaji/crds/datastore.yaml
generate: controller-gen ## Generate code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
$(CONTROLLER_GEN) object:headerFile="hack/boilerplate.go.txt" paths="./..."
@@ -115,11 +113,41 @@ golint: golangci-lint ## Linting the code according to the styling guide.
test:
go test ./... -coverprofile cover.out
_datastore-mysql:
$(MAKE) NAME=$(NAME) -C deploy/kine/mysql mariadb
kubectl apply -f $(shell pwd)/config/samples/kamaji_v1alpha1_datastore_mysql_$(NAME).yaml
datastore-mysql:
$(MAKE) NAME=bronze _datastore-mysql
$(MAKE) NAME=silver _datastore-mysql
$(MAKE) NAME=gold _datastore-mysql
_datastore-postgres:
$(MAKE) NAME=$(NAME) NAMESPACE=postgres-system -C deploy/kine/postgresql postgresql
kubectl apply -f $(shell pwd)/config/samples/kamaji_v1alpha1_datastore_postgresql_$(NAME).yaml
datastore-postgres:
$(MAKE) NAME=bronze _datastore-postgres
$(MAKE) NAME=silver _datastore-postgres
$(MAKE) NAME=gold _datastore-postgres
_datastore-etcd:
$(HELM) upgrade --install etcd-$(NAME) clastix/kamaji-etcd --create-namespace -n etcd-system --set datastore.enabled=true
datastore-etcd: helm
$(HELM) repo add clastix https://clastix.github.io/charts
$(HELM) repo update
$(MAKE) NAME=bronze _datastore-etcd
$(MAKE) NAME=silver _datastore-etcd
$(MAKE) NAME=gold _datastore-etcd
datastores: datastore-mysql datastore-etcd datastore-postgres ## Install all Kamaji DataStores with multiple drivers, and different tiers.
##@ Build
# Get information about git current status
GIT_HEAD_COMMIT ?= $$(git rev-parse --short HEAD)
GIT_TAG_COMMIT ?= $$(git rev-parse --short $(VERSION))
GIT_TAG_COMMIT ?= $$(git rev-parse --short v$(VERSION))
GIT_MODIFIED_1 ?= $$(git diff $(GIT_HEAD_COMMIT) $(GIT_TAG_COMMIT) --quiet && echo "" || echo ".dev")
GIT_MODIFIED_2 ?= $$(git diff --quiet && echo "" || echo ".dirty")
GIT_MODIFIED ?= $$(echo "$(GIT_MODIFIED_1)$(GIT_MODIFIED_2)")
@@ -145,6 +173,15 @@ docker-push: ## Push docker image with the manager.
##@ Deployment
metallb:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
kubectl apply -f https://kind.sigs.k8s.io/examples/loadbalancer/metallb-config.yaml
echo ""
docker network inspect -f '{{.IPAM.Config}}' kind
cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml
dev: generate manifests uninstall install rbac ## Full installation for development purposes
go fmt ./...
@@ -253,8 +290,9 @@ env:
##@ e2e
.PHONY: e2e
e2e: env load helm ginkgo ## Create a KinD cluster, install Kamaji on it and run the test suite.
e2e: env load helm ginkgo cert-manager ## Create a KinD cluster, install Kamaji on it and run the test suite.
$(HELM) upgrade --debug --install kamaji ./charts/kamaji --create-namespace --namespace kamaji-system --set "image.pullPolicy=Never"
$(MAKE) datastores
$(GINKGO) -v ./e2e
##@ Document

View File

@@ -16,12 +16,19 @@ resources:
kind: TenantControlPlane
path: github.com/clastix/kamaji/api/v1alpha1
version: v1alpha1
webhooks:
defaulting: true
validation: true
webhookVersion: v1
- api:
crdVersion: v1
namespaced: false
domain: clastix.io
group: kamaji
kind: DataStore
path: github.com/clastix/kamaji/api/v1alpha1
version: v1alpha1
webhooks:
defaulting: true
validation: true
webhookVersion: v1
version: "3"

View File

@@ -5,6 +5,7 @@
<img src="https://img.shields.io/github/go-mod/go-version/clastix/kamaji"/>
<a href="https://github.com/clastix/kamaji/releases">
<img src="https://img.shields.io/github/v/release/clastix/kamaji"/>
<img src="https://goreportcard.com/badge/github.com/clastix/kamaji">
</a>
</p>
@@ -20,7 +21,7 @@ Global hyper-scalers are leading the Managed Kubernetes space, while other cloud
**Kamaji** aims to solve these pains by leveraging multi-tenancy and simplifying how to run multiple control planes on the same infrastructure with a fraction of the operational burden.
## How it works
Kamaji turns any Kubernetes cluster into an _“admin cluster”_ to orchestrate other Kubernetes clusters called _“tenant clusters”_. What makes Kamaji special is that Control Planes of _“tenant clusters”_ are just regular pods running in the _“admin cluster”_ instead of dedicated Virtual Machines. This solution makes running control planes at scale cheaper and easier to deploy and operate.
Kamaji turns any Kubernetes cluster into an _“admin cluster”_ to orchestrate other Kubernetes clusters called _“tenant clusters”_. Kamaji is special because the Control Planes of _“tenant clusters”_ are just regular pods instead of dedicated Virtual Machines. This solution makes running Control Planes at scale cheaper and easier to deploy and operate.
![Architecture](docs/content/images/kamaji-light.png#gh-light-mode-only)
![Architecture](docs/content/images/kamaji-dark.png#gh-dark-mode-only)
@@ -40,7 +41,8 @@ Please refer to the [Getting Started guide](https://kamaji.clastix.io/getting-st
## Roadmap
- [ ] Benchmarking and stress-test
- [x] Benchmarking
- [ ] Stress-test
- [x] Support for dynamic address allocation on native Load Balancer
- [x] Zero Downtime Tenant Control Plane upgrade
- [x] `konnectivity` integration
@@ -50,6 +52,7 @@ Please refer to the [Getting Started guide](https://kamaji.clastix.io/getting-st
- [x] `kine` integration for MySQL as datastore
- [x] `kine` integration for PostgreSQL as datastore
- [x] Pool of multiple datastores
- [x] Seamless migration between datastore with the same driver
- [ ] Automatic assigning of Tenant Control Plane to a datastore
- [ ] Autoscaling of Tenant Control Plane pods
@@ -61,4 +64,4 @@ Please, check the project's [documentation](https://kamaji.clastix.io/) for gett
Kamaji is Open Source with Apache 2 license and any contribution is welcome.
## Community
Join the [Kubernetes Slack Workspace](https://slack.k8s.io/) and the [`#kamaji`](https://kubernetes.slack.com/archives/C03GLTTMWNN) channel to meet end-users and contributors.
Join the [Kubernetes Slack Workspace](https://slack.k8s.io/) and the [`#kamaji`](https://kubernetes.slack.com/archives/C03GLTTMWNN) channel to meet end-users and contributors.

View File

@@ -30,7 +30,7 @@ func (in *ContentRef) GetContent(ctx context.Context, client client.Client) ([]b
return nil, err
}
v, ok := secret.Data[secretRef.KeyPath]
v, ok := secret.Data[string(secretRef.KeyPath)]
if !ok {
return nil, fmt.Errorf("secret %s does not have key %s", namespacedName.String(), secretRef.KeyPath)
}

View File

@@ -0,0 +1,57 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
"fmt"
"strings"
"github.com/go-logr/logr"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
)
//+kubebuilder:webhook:path=/validate--v1-secret,mutating=false,failurePolicy=ignore,sideEffects=None,groups="",resources=secrets,verbs=delete,versions=v1,name=vdatastoresecrets.kb.io,admissionReviewVersions=v1
type dataStoreSecretValidator struct {
log logr.Logger
client client.Client
}
func (d *dataStoreSecretValidator) ValidateCreate(context.Context, runtime.Object) error {
return nil
}
func (d *dataStoreSecretValidator) ValidateUpdate(context.Context, runtime.Object, runtime.Object) error {
return nil
}
func (d *dataStoreSecretValidator) ValidateDelete(ctx context.Context, obj runtime.Object) error {
secret := obj.(*corev1.Secret) //nolint:forcetypeassert
dsList := &DataStoreList{}
if err := d.client.List(ctx, dsList, client.MatchingFieldsSelector{Selector: fields.OneTermEqualSelector(DatastoreUsedSecretNamespacedNameKey, fmt.Sprintf("%s/%s", secret.GetNamespace(), secret.GetName()))}); err != nil {
return err
}
if len(dsList.Items) > 0 {
var res []string
for _, ds := range dsList.Items {
res = append(res, ds.GetName())
}
return fmt.Errorf("the Secret is used by the following kamajiv1alpha1.DataStores and cannot be deleted (%s)", strings.Join(res, ", "))
}
return nil
}
func (d *dataStoreSecretValidator) Default(context.Context, runtime.Object) error {
return nil
}


@@ -8,7 +8,9 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type Driver string //+kubebuilder:validation:Enum=etcd;MySQL;PostgreSQL
// +kubebuilder:validation:Enum=etcd;MySQL;PostgreSQL
type Driver string
var (
EtcdDriver Driver = "etcd"
@@ -16,13 +18,17 @@ var (
KinePostgreSQLDriver Driver = "PostgreSQL"
)
// +kubebuilder:validation:MinItems=1
type Endpoints []string
// DataStoreSpec defines the desired state of DataStore.
type DataStoreSpec struct {
// The driver to use to connect to the shared datastore.
Driver Driver `json:"driver"`
// List of the endpoints to connect to the shared datastore.
// No need for protocol, just bare IP/FQDN and port.
Endpoints []string `json:"endpoints"` //+kubebuilder:validation:MinLength=1
Endpoints Endpoints `json:"endpoints"`
// In case of authentication enabled for the given data store, specifies the username and password pair.
// This value is optional.
BasicAuth *BasicAuth `json:"basicAuth,omitempty"`
@@ -62,11 +68,14 @@ type ContentRef struct {
SecretRef *SecretReference `json:"secretReference,omitempty"`
}
// +kubebuilder:validation:MinLength=1
type secretReferKeyPath string
type SecretReference struct {
corev1.SecretReference `json:",inline"`
// Name of the key for the given Secret reference where the content is stored.
// This value is mandatory.
KeyPath string `json:"keyPath"`
KeyPath secretReferKeyPath `json:"keyPath"`
}
// DataStoreStatus defines the observed state of DataStore.


@@ -0,0 +1,185 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
"fmt"
"github.com/go-logr/logr"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
)
//+kubebuilder:webhook:path=/mutate-kamaji-clastix-io-v1alpha1-datastore,mutating=true,failurePolicy=fail,sideEffects=None,groups=kamaji.clastix.io,resources=datastores,verbs=create;update,versions=v1alpha1,name=mdatastore.kb.io,admissionReviewVersions=v1
//+kubebuilder:webhook:path=/validate-kamaji-clastix-io-v1alpha1-datastore,mutating=false,failurePolicy=fail,sideEffects=None,groups=kamaji.clastix.io,resources=datastores,verbs=create;update;delete,versions=v1alpha1,name=vdatastore.kb.io,admissionReviewVersions=v1
func (in *DataStore) SetupWebhookWithManager(mgr ctrl.Manager) error {
secretValidator := &dataStoreSecretValidator{
log: mgr.GetLogger().WithName("datastore-secret-webhook"),
client: mgr.GetClient(),
}
if err := ctrl.NewWebhookManagedBy(mgr).For(&corev1.Secret{}).WithValidator(secretValidator).Complete(); err != nil {
return err
}
dsValidator := &dataStoreValidator{
log: mgr.GetLogger().WithName("datastore-webhook"),
client: mgr.GetClient(),
}
return ctrl.NewWebhookManagedBy(mgr).
For(in).
WithValidator(dsValidator).
WithDefaulter(dsValidator).
Complete()
}
type dataStoreValidator struct {
log logr.Logger
client client.Client
}
func (d *dataStoreValidator) ValidateCreate(ctx context.Context, obj runtime.Object) error {
ds, ok := obj.(*DataStore)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.DataStore")
}
if err := d.validate(ctx, ds); err != nil {
return err
}
return nil
}
func (d *dataStoreValidator) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) error {
old, ok := oldObj.(*DataStore)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.DataStore")
}
ds, ok := newObj.(*DataStore)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.DataStore")
}
d.log.Info("validate update", "name", ds.GetName())
if ds.Spec.Driver != old.Spec.Driver {
return fmt.Errorf("driver of a DataStore cannot be changed")
}
if err := d.validate(ctx, ds); err != nil {
return err
}
return nil
}
func (d *dataStoreValidator) ValidateDelete(ctx context.Context, obj runtime.Object) error {
ds, ok := obj.(*DataStore)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.DataStore")
}
tcpList := &TenantControlPlaneList{}
if err := d.client.List(ctx, tcpList, client.MatchingFieldsSelector{Selector: fields.OneTermEqualSelector(TenantControlPlaneUsedDataStoreKey, ds.GetName())}); err != nil {
return err
}
if len(tcpList.Items) > 0 {
return fmt.Errorf("the DataStore is used by one or more TenantControlPlanes and cannot be removed")
}
return nil
}
func (d *dataStoreValidator) Default(context.Context, runtime.Object) error {
return nil
}
func (d *dataStoreValidator) validate(ctx context.Context, ds *DataStore) error {
if ds.Spec.BasicAuth != nil {
if err := d.validateBasicAuth(ctx, ds); err != nil {
return err
}
}
if err := d.validateTLSConfig(ctx, ds); err != nil {
return err
}
return nil
}
func (d *dataStoreValidator) validateBasicAuth(ctx context.Context, ds *DataStore) error {
if err := d.validateContentReference(ctx, ds.Spec.BasicAuth.Password); err != nil {
return fmt.Errorf("basic-auth password is not valid, %w", err)
}
if err := d.validateContentReference(ctx, ds.Spec.BasicAuth.Username); err != nil {
return fmt.Errorf("basic-auth username is not valid, %w", err)
}
return nil
}
func (d *dataStoreValidator) validateTLSConfig(ctx context.Context, ds *DataStore) error {
if err := d.validateContentReference(ctx, ds.Spec.TLSConfig.CertificateAuthority.Certificate); err != nil {
return fmt.Errorf("CA certificate is not valid, %w", err)
}
if ds.Spec.Driver == EtcdDriver {
if ds.Spec.TLSConfig.CertificateAuthority.PrivateKey == nil {
return fmt.Errorf("CA private key is required when using the etcd driver")
}
}
if ds.Spec.TLSConfig.CertificateAuthority.PrivateKey != nil {
if err := d.validateContentReference(ctx, *ds.Spec.TLSConfig.CertificateAuthority.PrivateKey); err != nil {
return fmt.Errorf("CA private key is not valid, %w", err)
}
}
if err := d.validateContentReference(ctx, ds.Spec.TLSConfig.ClientCertificate.Certificate); err != nil {
return fmt.Errorf("client certificate is not valid, %w", err)
}
if err := d.validateContentReference(ctx, ds.Spec.TLSConfig.ClientCertificate.PrivateKey); err != nil {
return fmt.Errorf("client private key is not valid, %w", err)
}
return nil
}
func (d *dataStoreValidator) validateContentReference(ctx context.Context, ref ContentRef) error {
switch {
case len(ref.Content) > 0:
return nil
case ref.SecretRef == nil:
return fmt.Errorf("the Secret reference is mandatory when bare content is not specified")
case len(ref.SecretRef.SecretReference.Name) == 0:
return fmt.Errorf("the Secret reference name is mandatory")
case len(ref.SecretRef.SecretReference.Namespace) == 0:
return fmt.Errorf("the Secret reference namespace is mandatory")
}
if err := d.client.Get(ctx, types.NamespacedName{Name: ref.SecretRef.SecretReference.Name, Namespace: ref.SecretRef.SecretReference.Namespace}, &corev1.Secret{}); err != nil {
if errors.IsNotFound(err) {
return fmt.Errorf("secret %s/%s is not found", ref.SecretRef.SecretReference.Namespace, ref.SecretRef.SecretReference.Name)
}
return err
}
return nil
}


@@ -0,0 +1,68 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
"fmt"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
DatastoreUsedSecretNamespacedNameKey = "secretRef"
)
type DatastoreUsedSecret struct{}
func (d *DatastoreUsedSecret) SetupWithManager(ctx context.Context, mgr controllerruntime.Manager) error {
return mgr.GetFieldIndexer().IndexField(ctx, d.Object(), d.Field(), d.ExtractValue())
}
func (d *DatastoreUsedSecret) Object() client.Object {
return &DataStore{}
}
func (d *DatastoreUsedSecret) Field() string {
return DatastoreUsedSecretNamespacedNameKey
}
func (d *DatastoreUsedSecret) ExtractValue() client.IndexerFunc {
return func(object client.Object) (res []string) {
ds := object.(*DataStore) //nolint:forcetypeassert
if ds.Spec.BasicAuth != nil {
if ds.Spec.BasicAuth.Username.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.BasicAuth.Username.SecretRef))
}
if ds.Spec.BasicAuth.Password.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.BasicAuth.Password.SecretRef))
}
}
if ds.Spec.TLSConfig.CertificateAuthority.Certificate.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.CertificateAuthority.Certificate.SecretRef))
}
if ds.Spec.TLSConfig.CertificateAuthority.PrivateKey != nil && ds.Spec.TLSConfig.CertificateAuthority.PrivateKey.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.CertificateAuthority.PrivateKey.SecretRef))
}
if ds.Spec.TLSConfig.ClientCertificate.Certificate.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.ClientCertificate.Certificate.SecretRef))
}
if ds.Spec.TLSConfig.ClientCertificate.PrivateKey.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.ClientCertificate.PrivateKey.SecretRef))
}
return res
}
}
func (d *DatastoreUsedSecret) namespacedName(ref SecretReference) string {
return fmt.Sprintf("%s/%s", ref.Namespace, ref.Name)
}


@@ -1,15 +1,13 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package indexers
package v1alpha1
import (
"context"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)
const (
@@ -19,7 +17,7 @@ const (
type TenantControlPlaneStatusDataStore struct{}
func (t *TenantControlPlaneStatusDataStore) Object() client.Object {
return &kamajiv1alpha1.TenantControlPlane{}
return &TenantControlPlane{}
}
func (t *TenantControlPlaneStatusDataStore) Field() string {
@@ -28,8 +26,7 @@ func (t *TenantControlPlaneStatusDataStore) Field() string {
func (t *TenantControlPlaneStatusDataStore) ExtractValue() client.IndexerFunc {
return func(object client.Object) []string {
//nolint:forcetypeassert
tcp := object.(*kamajiv1alpha1.TenantControlPlane)
tcp := object.(*TenantControlPlane) //nolint:forcetypeassert
return []string{tcp.Status.Storage.DataStoreName}
}


@@ -1,17 +0,0 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func (in AddonStatus) GetChecksum() string {
return in.Checksum
}
func (in *AddonStatus) SetChecksum(checksum string) {
in.LastUpdate = metav1.Now()
in.Checksum = checksum
}


@@ -112,15 +112,12 @@ type KubeadmPhaseStatus struct {
// KubeadmPhasesStatus contains the status of the different kubeadm phases action.
type KubeadmPhasesStatus struct {
UploadConfigKubeadm KubeadmPhaseStatus `json:"uploadConfigKubeadm"`
UploadConfigKubelet KubeadmPhaseStatus `json:"uploadConfigKubelet"`
BootstrapToken KubeadmPhaseStatus `json:"bootstrapToken"`
BootstrapToken KubeadmPhaseStatus `json:"bootstrapToken"`
}
type ExternalKubernetesObjectStatus struct {
Name string `json:"name,omitempty"`
Namespace string `json:"namespace,omitempty"`
Checksum string `json:"checksum,omitempty"`
// Last time when k8s object was updated
LastUpdate metav1.Time `json:"lastUpdate,omitempty"`
}
@@ -145,7 +142,6 @@ type KonnectivityConfigMap struct {
// AddonStatus defines the observed state of an Addon.
type AddonStatus struct {
Enabled bool `json:"enabled"`
Checksum string `json:"checksum,omitempty"`
LastUpdate metav1.Time `json:"lastUpdate,omitempty"`
}
@@ -187,12 +183,14 @@ type KubernetesStatus struct {
Ingress *KubernetesIngressStatus `json:"ingress,omitempty"`
}
// +kubebuilder:validation:Enum=Provisioning;Upgrading;Ready;NotReady
// +kubebuilder:validation:Enum=Provisioning;CertificateAuthorityRotating;Upgrading;Migrating;Ready;NotReady
type KubernetesVersionStatus string
var (
VersionProvisioning KubernetesVersionStatus = "Provisioning"
VersionCARotating KubernetesVersionStatus = "CertificateAuthorityRotating"
VersionUpgrading KubernetesVersionStatus = "Upgrading"
VersionMigrating KubernetesVersionStatus = "Migrating"
VersionReady KubernetesVersionStatus = "Ready"
VersionNotReady KubernetesVersionStatus = "NotReady"
)


@@ -4,6 +4,7 @@
package v1alpha1
import (
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -32,7 +33,23 @@ type NetworkProfileSpec struct {
DNSServiceIPs []string `json:"dnsServiceIPs,omitempty"`
}
// +kubebuilder:validation:Enum=Hostname;InternalIP;ExternalIP;InternalDNS;ExternalDNS
type KubeletPreferredAddressType string
const (
NodeHostName KubeletPreferredAddressType = "Hostname"
NodeInternalIP KubeletPreferredAddressType = "InternalIP"
NodeExternalIP KubeletPreferredAddressType = "ExternalIP"
NodeInternalDNS KubeletPreferredAddressType = "InternalDNS"
NodeExternalDNS KubeletPreferredAddressType = "ExternalDNS"
)
type KubeletSpec struct {
// Ordered list of the preferred NodeAddressTypes to use for kubelet connections.
// Default to Hostname, InternalIP, ExternalIP.
// +kubebuilder:default={"Hostname","InternalIP","ExternalIP"}
// +kubebuilder:validation:MinItems=1
PreferredAddressTypes []KubeletPreferredAddressType `json:"preferredAddressTypes,omitempty"`
// CGroupFS defines the cgroup driver for Kubelet
// https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/
CGroupFS CGroupDriver `json:"cgroupfs,omitempty"`
@@ -76,10 +93,22 @@ type IngressSpec struct {
Hostname string `json:"hostname,omitempty"`
}
// ComponentResourceRequirements describes the compute resource requirements.
type ComponentResourceRequirements struct {
// Limits describes the maximum amount of compute resources allowed.
// More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
Limits corev1.ResourceList `json:"limits,omitempty" protobuf:"bytes,1,rep,name=limits,casttype=ResourceList,castkey=ResourceName"`
// Requests describes the minimum amount of compute resources required.
// If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
// otherwise to an implementation-defined value.
// More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
Requests corev1.ResourceList `json:"requests,omitempty" protobuf:"bytes,2,rep,name=requests,casttype=ResourceList,castkey=ResourceName"`
}
type ControlPlaneComponentsResources struct {
APIServer *corev1.ResourceRequirements `json:"apiServer,omitempty"`
ControllerManager *corev1.ResourceRequirements `json:"controllerManager,omitempty"`
Scheduler *corev1.ResourceRequirements `json:"scheduler,omitempty"`
APIServer *ComponentResourceRequirements `json:"apiServer,omitempty"`
ControllerManager *ComponentResourceRequirements `json:"controllerManager,omitempty"`
Scheduler *ComponentResourceRequirements `json:"scheduler,omitempty"`
}
type DeploymentSpec struct {
@@ -89,6 +118,16 @@ type DeploymentSpec struct {
// Selector which must match a node's labels for the pod to be scheduled on that node.
// More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used
// to run the Tenant Control Plane pod. If no RuntimeClass resource matches the named class, the pod will not be run.
// If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an
// empty definition that uses the default runtime handler.
// More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class
RuntimeClassName string `json:"runtimeClassName,omitempty"`
// Strategy describes how to replace existing pods with new ones for the given Tenant Control Plane.
// Default value is set to Rolling Update, with a blue/green strategy.
// +kubebuilder:default={type:"RollingUpdate",rollingUpdate:{maxUnavailable:0,maxSurge:"100%"}}
Strategy appsv1.DeploymentStrategy `json:"strategy,omitempty"`
// If specified, the Tenant Control Plane pod's tolerations.
// More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
@@ -138,21 +177,39 @@ type ImageOverrideTrait struct {
ImageTag string `json:"imageTag,omitempty"`
}
// KonnectivitySpec defines the spec for Konnectivity.
type KonnectivitySpec struct {
// Port of Konnectivity proxy server.
ProxyPort int32 `json:"proxyPort"`
// Version for Konnectivity server and agent.
// ExtraArgs allows adding additional arguments to the given component.
type ExtraArgs []string
type KonnectivityServerSpec struct {
// The port which Konnectivity server is listening to.
Port int32 `json:"port"`
// Container image version of the Konnectivity server.
// +kubebuilder:default=v0.0.32
Version string `json:"version,omitempty"`
// ServerImage defines the container image for Konnectivity's server.
// Container image used by the Konnectivity server.
// +kubebuilder:default=registry.k8s.io/kas-network-proxy/proxy-server
ServerImage string `json:"serverImage,omitempty"`
Image string `json:"image,omitempty"`
// Resources define the amount of CPU and memory to allocate to the Konnectivity server.
Resources *ComponentResourceRequirements `json:"resources,omitempty"`
ExtraArgs ExtraArgs `json:"extraArgs,omitempty"`
}
type KonnectivityAgentSpec struct {
// AgentImage defines the container image for Konnectivity's agent.
// +kubebuilder:default=registry.k8s.io/kas-network-proxy/proxy-agent
AgentImage string `json:"agentImage,omitempty"`
// Resources define the amount of CPU and memory to allocate to the Konnectivity server.
Resources *corev1.ResourceRequirements `json:"resources,omitempty"`
Image string `json:"image,omitempty"`
// Version for Konnectivity agent.
// +kubebuilder:default=v0.0.32
Version string `json:"version,omitempty"`
ExtraArgs ExtraArgs `json:"extraArgs,omitempty"`
}
// KonnectivitySpec defines the spec for Konnectivity.
type KonnectivitySpec struct {
// +kubebuilder:default={version:"v0.0.32",image:"registry.k8s.io/kas-network-proxy/proxy-server",port:8132}
KonnectivityServerSpec KonnectivityServerSpec `json:"server,omitempty"`
// +kubebuilder:default={version:"v0.0.32",image:"registry.k8s.io/kas-network-proxy/proxy-agent"}
KonnectivityAgentSpec KonnectivityAgentSpec `json:"agent,omitempty"`
}
// AddonsSpec defines the enabled addons and their features.
@@ -187,9 +244,10 @@ type TenantControlPlaneSpec struct {
// +kubebuilder:subresource:scale:specpath=.spec.controlPlane.deployment.replicas,statuspath=.status.kubernetesResources.deployment.replicas,selectorpath=.status.kubernetesResources.deployment.selector
// +kubebuilder:resource:shortName=tcp
// +kubebuilder:printcolumn:name="Version",type="string",JSONPath=".spec.kubernetes.version",description="Kubernetes version"
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.kubernetesResources.version.status",description="Kubernetes version"
// +kubebuilder:printcolumn:name="Control-Plane-Endpoint",type="string",JSONPath=".status.controlPlaneEndpoint",description="Tenant Control Plane Endpoint (API server)"
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.kubernetesResources.version.status",description="Status"
// +kubebuilder:printcolumn:name="Control-Plane endpoint",type="string",JSONPath=".status.controlPlaneEndpoint",description="Tenant Control Plane Endpoint (API server)"
// +kubebuilder:printcolumn:name="Kubeconfig",type="string",JSONPath=".status.kubeconfig.admin.secretName",description="Secret which contains admin kubeconfig"
// +kubebuilder:printcolumn:name="Datastore",type="string",JSONPath=".status.storage.dataStoreName",description="DataStore in use"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp",description="Age"
// TenantControlPlane is the Schema for the tenantcontrolplanes API.


@@ -0,0 +1,188 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
"fmt"
"strings"
"github.com/blang/semver"
"github.com/go-logr/logr"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/sets"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/clastix/kamaji/internal/upgrade"
)
//+kubebuilder:webhook:path=/mutate-kamaji-clastix-io-v1alpha1-tenantcontrolplane,mutating=true,failurePolicy=fail,sideEffects=None,groups=kamaji.clastix.io,resources=tenantcontrolplanes,verbs=create;update,versions=v1alpha1,name=mtenantcontrolplane.kb.io,admissionReviewVersions=v1
//+kubebuilder:webhook:path=/validate-kamaji-clastix-io-v1alpha1-tenantcontrolplane,mutating=false,failurePolicy=fail,sideEffects=None,groups=kamaji.clastix.io,resources=tenantcontrolplanes,verbs=create;update,versions=v1alpha1,name=vtenantcontrolplane.kb.io,admissionReviewVersions=v1
func (in *TenantControlPlane) SetupWebhookWithManager(mgr ctrl.Manager, datastore string) error {
validator := &tenantControlPlaneValidator{
client: mgr.GetClient(),
defaultDatastore: datastore,
log: mgr.GetLogger().WithName("tenantcontrolplane-webhook"),
}
return ctrl.NewWebhookManagedBy(mgr).
For(in).
WithValidator(validator).
WithDefaulter(validator).
Complete()
}
type tenantControlPlaneValidator struct {
client client.Client
defaultDatastore string
log logr.Logger
}
func (t *tenantControlPlaneValidator) Default(_ context.Context, obj runtime.Object) error {
tcp, ok := obj.(*TenantControlPlane)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.TenantControlPlane")
}
if len(tcp.Spec.DataStore) == 0 {
tcp.Spec.DataStore = t.defaultDatastore
}
return nil
}
func (t *tenantControlPlaneValidator) ValidateCreate(_ context.Context, obj runtime.Object) error {
tcp, ok := obj.(*TenantControlPlane)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.TenantControlPlane")
}
t.log.Info("validate create", "name", tcp.Name, "namespace", tcp.Namespace)
ver, err := semver.New(t.normalizeKubernetesVersion(tcp.Spec.Kubernetes.Version))
if err != nil {
return errors.Wrap(err, "unable to parse the desired Kubernetes version")
}
supportedVer, supportedErr := semver.Make(t.normalizeKubernetesVersion(upgrade.KubeadmVersion))
if supportedErr != nil {
return errors.Wrap(supportedErr, "unable to parse the Kamaji supported Kubernetes version")
}
if ver.GT(supportedVer) {
return fmt.Errorf("unable to create a TenantControlPlane with a Kubernetes version greater than the supported one, currently %s", supportedVer.String())
}
if err = t.validatePreferredKubeletAddressTypes(tcp.Spec.Kubernetes.Kubelet.PreferredAddressTypes); err != nil {
return err
}
return nil
}
func (t *tenantControlPlaneValidator) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) error {
old, ok := oldObj.(*TenantControlPlane)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.TenantControlPlane")
}
tcp, ok := newObj.(*TenantControlPlane)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.TenantControlPlane")
}
t.log.Info("validate update", "name", tcp.Name, "namespace", tcp.Namespace)
if err := t.validateVersionUpdate(old, tcp); err != nil {
return err
}
if err := t.validateDataStore(ctx, old, tcp); err != nil {
return err
}
if err := t.validatePreferredKubeletAddressTypes(tcp.Spec.Kubernetes.Kubelet.PreferredAddressTypes); err != nil {
return err
}
return nil
}
func (t *tenantControlPlaneValidator) ValidateDelete(context.Context, runtime.Object) error {
return nil
}
func (t *tenantControlPlaneValidator) validatePreferredKubeletAddressTypes(addressTypes []KubeletPreferredAddressType) error {
s := sets.NewString()
for _, at := range addressTypes {
if s.Has(string(at)) {
return fmt.Errorf("preferred kubelet address type is stated multiple times: %s", at)
}
s.Insert(string(at))
}
return nil
}
func (t *tenantControlPlaneValidator) validateVersionUpdate(oldObj, newObj *TenantControlPlane) error {
oldVer, oldErr := semver.Make(t.normalizeKubernetesVersion(oldObj.Spec.Kubernetes.Version))
if oldErr != nil {
return errors.Wrap(oldErr, "unable to parse the previous Kubernetes version")
}
newVer, newErr := semver.New(t.normalizeKubernetesVersion(newObj.Spec.Kubernetes.Version))
if newErr != nil {
return errors.Wrap(newErr, "unable to parse the desired Kubernetes version")
}
supportedVer, supportedErr := semver.Make(t.normalizeKubernetesVersion(upgrade.KubeadmVersion))
if supportedErr != nil {
return errors.Wrap(supportedErr, "unable to parse the Kamaji supported Kubernetes version")
}
switch {
case newVer.GT(supportedVer):
return fmt.Errorf("unable to upgrade to a version greater than the supported one, currently %s", supportedVer.String())
case newVer.LT(oldVer):
return fmt.Errorf("unable to downgrade a TenantControlPlane from %s to %s", oldVer.String(), newVer.String())
case newVer.Minor-oldVer.Minor > 1:
return fmt.Errorf("unable to skip minor versions: upgrades must proceed one minor version at a time")
}
return nil
}
func (t *tenantControlPlaneValidator) validateDataStore(ctx context.Context, oldObj, tcp *TenantControlPlane) error {
if oldObj.Spec.DataStore == tcp.Spec.DataStore {
return nil
}
previousDatastore, desiredDatastore := &DataStore{}, &DataStore{}
if err := t.client.Get(ctx, types.NamespacedName{Name: oldObj.Spec.DataStore}, previousDatastore); err != nil {
return fmt.Errorf("unable to retrieve old DataStore for validation: %w", err)
}
if err := t.client.Get(ctx, types.NamespacedName{Name: tcp.Spec.DataStore}, desiredDatastore); err != nil {
return fmt.Errorf("unable to retrieve the desired DataStore for validation: %w", err)
}
if previousDatastore.Spec.Driver != desiredDatastore.Spec.Driver {
return fmt.Errorf("migration between different Datastore drivers is not supported")
}
return nil
}
func (t *tenantControlPlaneValidator) normalizeKubernetesVersion(input string) string {
return strings.TrimPrefix(input, "v")
}


@@ -0,0 +1,123 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
"crypto/tls"
"fmt"
"net"
"path/filepath"
"testing"
"time"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
admissionv1beta1 "k8s.io/api/admission/v1beta1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/rest"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/envtest"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
//+kubebuilder:scaffold:imports
)
// These tests use Ginkgo (BDD-style Go testing framework). Refer to
// http://onsi.github.io/ginkgo/ to learn more about Ginkgo.
var (
cfg *rest.Config
k8sClient client.Client
testEnv *envtest.Environment
ctx context.Context
cancel context.CancelFunc
)
func TestAPIs(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Webhook Suite")
}
var _ = BeforeSuite(func() {
logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))
ctx, cancel = context.WithCancel(context.TODO())
By("bootstrapping test environment")
testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "..", "config", "crd", "bases")},
ErrorIfCRDPathMissing: false,
WebhookInstallOptions: envtest.WebhookInstallOptions{
Paths: []string{filepath.Join("..", "..", "config", "webhook")},
},
}
var err error
// cfg is defined in this file globally.
cfg, err = testEnv.Start()
Expect(err).NotTo(HaveOccurred())
Expect(cfg).NotTo(BeNil())
scheme := runtime.NewScheme()
err = AddToScheme(scheme)
Expect(err).NotTo(HaveOccurred())
err = admissionv1beta1.AddToScheme(scheme)
Expect(err).NotTo(HaveOccurred())
//+kubebuilder:scaffold:scheme
k8sClient, err = client.New(cfg, client.Options{Scheme: scheme})
Expect(err).NotTo(HaveOccurred())
Expect(k8sClient).NotTo(BeNil())
// start webhook server using Manager
webhookInstallOptions := &testEnv.WebhookInstallOptions
mgr, err := ctrl.NewManager(cfg, ctrl.Options{
Scheme: scheme,
Host: webhookInstallOptions.LocalServingHost,
Port: webhookInstallOptions.LocalServingPort,
CertDir: webhookInstallOptions.LocalServingCertDir,
LeaderElection: false,
MetricsBindAddress: "0",
})
Expect(err).NotTo(HaveOccurred())
err = (&TenantControlPlane{}).SetupWebhookWithManager(mgr, "")
Expect(err).NotTo(HaveOccurred())
err = (&DataStore{}).SetupWebhookWithManager(mgr)
Expect(err).NotTo(HaveOccurred())
//+kubebuilder:scaffold:webhook
go func() {
defer GinkgoRecover()
err = mgr.Start(ctx)
Expect(err).NotTo(HaveOccurred())
}()
// wait for the webhook server to get ready
dialer := &net.Dialer{Timeout: time.Second}
addrPort := fmt.Sprintf("%s:%d", webhookInstallOptions.LocalServingHost, webhookInstallOptions.LocalServingPort)
Eventually(func() error {
conn, err := tls.DialWithDialer(dialer, "tcp", addrPort, &tls.Config{InsecureSkipVerify: true})
if err != nil {
return err
}
conn.Close()
return nil
}).Should(Succeed())
})
var _ = AfterSuite(func() {
cancel()
By("tearing down the test environment")
err := testEnv.Stop()
Expect(err).NotTo(HaveOccurred())
})


@@ -10,7 +10,7 @@ package v1alpha1
import (
"k8s.io/api/core/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
@@ -254,6 +254,35 @@ func (in *ClientCertificate) DeepCopy() *ClientCertificate {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ComponentResourceRequirements) DeepCopyInto(out *ComponentResourceRequirements) {
*out = *in
if in.Limits != nil {
in, out := &in.Limits, &out.Limits
*out = make(v1.ResourceList, len(*in))
for key, val := range *in {
(*out)[key] = val.DeepCopy()
}
}
if in.Requests != nil {
in, out := &in.Requests, &out.Requests
*out = make(v1.ResourceList, len(*in))
for key, val := range *in {
(*out)[key] = val.DeepCopy()
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ComponentResourceRequirements.
func (in *ComponentResourceRequirements) DeepCopy() *ComponentResourceRequirements {
if in == nil {
return nil
}
out := new(ComponentResourceRequirements)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ContentRef) DeepCopyInto(out *ContentRef) {
*out = *in
@@ -306,17 +335,17 @@ func (in *ControlPlaneComponentsResources) DeepCopyInto(out *ControlPlaneCompone
*out = *in
if in.APIServer != nil {
in, out := &in.APIServer, &out.APIServer
*out = new(v1.ResourceRequirements)
*out = new(ComponentResourceRequirements)
(*in).DeepCopyInto(*out)
}
if in.ControllerManager != nil {
in, out := &in.ControllerManager, &out.ControllerManager
*out = new(v1.ResourceRequirements)
*out = new(ComponentResourceRequirements)
(*in).DeepCopyInto(*out)
}
if in.Scheduler != nil {
in, out := &in.Scheduler, &out.Scheduler
*out = new(v1.ResourceRequirements)
*out = new(ComponentResourceRequirements)
(*in).DeepCopyInto(*out)
}
}
@@ -477,7 +506,7 @@ func (in *DataStoreSpec) DeepCopyInto(out *DataStoreSpec) {
*out = *in
if in.Endpoints != nil {
in, out := &in.Endpoints, &out.Endpoints
*out = make([]string, len(*in))
*out = make(Endpoints, len(*in))
copy(*out, *in)
}
if in.BasicAuth != nil {
@@ -603,6 +632,25 @@ func (in *ETCDCertificatesStatus) DeepCopy() *ETCDCertificatesStatus {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in Endpoints) DeepCopyInto(out *Endpoints) {
{
in := &in
*out = make(Endpoints, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Endpoints.
func (in Endpoints) DeepCopy() Endpoints {
if in == nil {
return nil
}
out := new(Endpoints)
in.DeepCopyInto(out)
return *out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ExternalKubernetesObjectStatus) DeepCopyInto(out *ExternalKubernetesObjectStatus) {
*out = *in
@@ -619,6 +667,25 @@ func (in *ExternalKubernetesObjectStatus) DeepCopy() *ExternalKubernetesObjectSt
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in ExtraArgs) DeepCopyInto(out *ExtraArgs) {
{
in := &in
*out = make(ExtraArgs, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtraArgs.
func (in ExtraArgs) DeepCopy() ExtraArgs {
if in == nil {
return nil
}
out := new(ExtraArgs)
in.DeepCopyInto(out)
return *out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ImageOverrideTrait) DeepCopyInto(out *ImageOverrideTrait) {
*out = *in
@@ -650,6 +717,26 @@ func (in *IngressSpec) DeepCopy() *IngressSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KonnectivityAgentSpec) DeepCopyInto(out *KonnectivityAgentSpec) {
*out = *in
if in.ExtraArgs != nil {
in, out := &in.ExtraArgs, &out.ExtraArgs
*out = make(ExtraArgs, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KonnectivityAgentSpec.
func (in *KonnectivityAgentSpec) DeepCopy() *KonnectivityAgentSpec {
if in == nil {
return nil
}
out := new(KonnectivityAgentSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KonnectivityConfigMap) DeepCopyInto(out *KonnectivityConfigMap) {
*out = *in
@@ -666,13 +753,35 @@ func (in *KonnectivityConfigMap) DeepCopy() *KonnectivityConfigMap {
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KonnectivitySpec) DeepCopyInto(out *KonnectivitySpec) {
func (in *KonnectivityServerSpec) DeepCopyInto(out *KonnectivityServerSpec) {
*out = *in
if in.Resources != nil {
in, out := &in.Resources, &out.Resources
*out = new(v1.ResourceRequirements)
*out = new(ComponentResourceRequirements)
(*in).DeepCopyInto(*out)
}
if in.ExtraArgs != nil {
in, out := &in.ExtraArgs, &out.ExtraArgs
*out = make(ExtraArgs, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KonnectivityServerSpec.
func (in *KonnectivityServerSpec) DeepCopy() *KonnectivityServerSpec {
if in == nil {
return nil
}
out := new(KonnectivityServerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KonnectivitySpec) DeepCopyInto(out *KonnectivitySpec) {
*out = *in
in.KonnectivityServerSpec.DeepCopyInto(&out.KonnectivityServerSpec)
in.KonnectivityAgentSpec.DeepCopyInto(&out.KonnectivityAgentSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KonnectivitySpec.
@@ -742,8 +851,6 @@ func (in *KubeadmPhaseStatus) DeepCopy() *KubeadmPhaseStatus {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubeadmPhasesStatus) DeepCopyInto(out *KubeadmPhasesStatus) {
*out = *in
in.UploadConfigKubeadm.DeepCopyInto(&out.UploadConfigKubeadm)
in.UploadConfigKubelet.DeepCopyInto(&out.UploadConfigKubelet)
in.BootstrapToken.DeepCopyInto(&out.BootstrapToken)
}

@@ -1,11 +1,11 @@
apiVersion: v2
appVersion: v0.1.1
appVersion: v0.2.3
description: Kamaji is a tool aimed to build and operate a Managed Kubernetes Service
with a fraction of the operational burden. With Kamaji, you can deploy and operate
hundreds of Kubernetes clusters as a hyper-scaler.
home: https://github.com/clastix/kamaji
icon: https://github.com/clastix/kamaji/raw/master/assets/kamaji-logo.png
kubeVersion: 1.21 - 1.25
kubeVersion: ">=1.21.0-0"
maintainers:
- email: dario@tranchitella.eu
name: Dario Tranchitella
@@ -13,13 +13,11 @@ maintainers:
name: Massimiliano Giovagnoli
- email: me@bsctl.io
name: Adriano Pezzuto
- email: iam@mendrugory.com
name: Gonzalo Gabriel Jiménez Fuentes
name: kamaji
sources:
- https://github.com/clastix/kamaji
type: application
version: 0.10.0
version: 0.11.5
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: kamaji

@@ -1,6 +1,6 @@
# kamaji
![Version: 0.10.0](https://img.shields.io/badge/Version-0.10.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.1.1](https://img.shields.io/badge/AppVersion-v0.1.1-informational?style=flat-square)
![Version: 0.11.5](https://img.shields.io/badge/Version-0.11.5-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.2.3](https://img.shields.io/badge/AppVersion-v0.2.3-informational?style=flat-square)
Kamaji is a tool aimed at building and operating a Managed Kubernetes Service with a fraction of the operational burden. With Kamaji, you can deploy and operate hundreds of Kubernetes clusters as a hyper-scaler.
@@ -11,7 +11,6 @@ Kamaji is a tool aimed to build and operate a Managed Kubernetes Service with a
| Dario Tranchitella | <dario@tranchitella.eu> | |
| Massimiliano Giovagnoli | <me@maxgio.it> | |
| Adriano Pezzuto | <me@bsctl.io> | |
| Gonzalo Gabriel Jiménez Fuentes | <iam@mendrugory.com> | |
## Source Code
@@ -19,7 +18,7 @@ Kamaji is a tool aimed to build and operate a Managed Kubernetes Service with a
## Requirements
Kubernetes: `1.21 - 1.25`
Kubernetes: `>=1.21.0-0`
[Kamaji](https://github.com/clastix/kamaji) requires a [multi-tenant `etcd`](https://github.com/clastix/kamaji-internal/blob/master/deploy/getting-started-with-kamaji.md#setup-internal-multi-tenant-etcd) cluster.
Starting from v0.1.1, this Helm chart can install an internal `etcd` to streamline local testing. If you'd like to use an externally managed etcd instance instead, specify the relevant overrides and set the value `etcd.deploy=false`.
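Assuming the external etcd endpoints and certificate Secrets already exist, the override might look like the following values fragment (the endpoint and Secret names below are illustrative; the keys are the `etcd.*` values documented in the table further down):

```yaml
# values-external-etcd.yaml -- illustrative override for an externally managed etcd
etcd:
  deploy: false                   # do not install the bundled etcd
  overrides:
    caSecret:
      name: etcd-certs            # Secret holding the etcd CA certificate and key
      namespace: kamaji-system
    endpoints:                    # etcd member FQDNs; no scheme or port (inflected from etcd.peerApiPort)
      etcd-0: etcd-0.etcd.kamaji-system.svc.cluster.local
      etcd-1: etcd-1.etcd.kamaji-system.svc.cluster.local
      etcd-2: etcd-2.etcd.kamaji-system.svc.cluster.local
```

The fragment can then be passed to the release, e.g. `helm upgrade --install kamaji <chart> -f values-external-etcd.yaml`.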
@@ -67,7 +66,6 @@ Here the values you can override:
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Kubernetes affinity rules to apply to Kamaji controller pods |
| configPath | string | `"./kamaji.yaml"` | Configuration file path alternative. (default "./kamaji.yaml") |
| datastore.basicAuth.passwordSecret.keyPath | string | `nil` | The Secret key where the data is stored. |
| datastore.basicAuth.passwordSecret.name | string | `nil` | The name of the Secret containing the password used to connect to the relational database. |
| datastore.basicAuth.passwordSecret.namespace | string | `nil` | The namespace of the Secret containing the password used to connect to the relational database. |
@@ -91,7 +89,7 @@ Here the values you can override:
| datastore.tlsConfig.clientCertificate.privateKey.namespace | string | `nil` | Namespace of the Secret containing the client certificate private key required to establish the mandatory SSL/TLS connection to the datastore. |
| etcd.compactionInterval | int | `0` | ETCD Compaction interval (e.g. "5m0s"). (default: "0" (disabled)) |
| etcd.deploy | bool | `true` | Install an etcd with enabled multi-tenancy along with Kamaji |
| etcd.image | object | `{"pullPolicy":"IfNotPresent","repository":"quay.io/coreos/etcd","tag":"v3.5.4"}` | Install specific etcd image |
| etcd.image | object | `{"pullPolicy":"IfNotPresent","repository":"quay.io/coreos/etcd","tag":"v3.5.6"}` | Install specific etcd image |
| etcd.livenessProbe | object | `{"failureThreshold":8,"httpGet":{"path":"/health?serializable=true","port":2381,"scheme":"HTTP"},"initialDelaySeconds":10,"periodSeconds":10,"timeoutSeconds":15}` | The livenessProbe for the etcd container |
| etcd.overrides.caSecret.name | string | `"etcd-certs"` | Name of the secret which contains CA's certificate and private key. (default: "etcd-certs") |
| etcd.overrides.caSecret.namespace | string | `"kamaji-system"` | Namespace of the secret which contains CA's certificate and private key. (default: "kamaji-system") |
@@ -100,6 +98,7 @@ Here the values you can override:
| etcd.overrides.endpoints | object | `{"etcd-0":"etcd-0.etcd.kamaji-system.svc.cluster.local","etcd-1":"etcd-1.etcd.kamaji-system.svc.cluster.local","etcd-2":"etcd-2.etcd.kamaji-system.svc.cluster.local"}` | (map) Dictionary of the endpoints for the etcd cluster's members, key is the name of the etcd server. Don't define the protocol (TLS is automatically inflected), or any port, inflected from .etcd.peerApiPort value. |
| etcd.peerApiPort | int | `2380` | The peer API port which servers are listening to. |
| etcd.persistence.accessModes[0] | string | `"ReadWriteOnce"` | |
| etcd.persistence.customAnnotations | object | `{}` | The custom annotations to add to the PVC |
| etcd.persistence.size | string | `"10Gi"` | |
| etcd.persistence.storageClass | string | `""` | |
| etcd.port | int | `2379` | The client request port. |
@@ -126,11 +125,10 @@ Here the values you can override:
| resources.requests.cpu | string | `"100m"` | |
| resources.requests.memory | string | `"20Mi"` | |
| securityContext | object | `{"allowPrivilegeEscalation":false}` | The securityContext to apply to the Kamaji controller container only. It does not apply to the Kamaji RBAC proxy container. |
| service.port | int | `8443` | |
| service.type | string | `"ClusterIP"` | |
| serviceAccount.annotations | object | `{}` | |
| serviceAccount.create | bool | `true` | |
| serviceAccount.name | string | `"kamaji-controller-manager"` | |
| serviceMonitor.enabled | bool | `false` | Toggle the ServiceMonitor true if you have Prometheus Operator installed and configured |
| temporaryDirectoryPath | string | `"/tmp/kamaji"` | Directory which will be used to work with temporary files. (default "/tmp/kamaji") |
| tolerations | list | `[]` | Kubernetes node taints that the Kamaji controller pods would tolerate |

@@ -3,8 +3,8 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: kamaji-system/kamaji-serving-cert
controller-gen.kubebuilder.io/version: v0.9.2
creationTimestamp: null
name: datastores.kamaji.clastix.io
spec:
group: kamaji.clastix.io
@@ -15,254 +15,225 @@ spec:
singular: datastore
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Kamaji data store driver
jsonPath: .spec.driver
name: Driver
type: string
- description: Age
jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: DataStore is the Schema for the datastores API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DataStoreSpec defines the desired state of DataStore.
properties:
basicAuth:
description: In case of authentication enabled for the given data
store, specifies the username and password pair. This value is optional.
properties:
password:
properties:
content:
description: Bare content of the file, base64 encoded. It
has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference
where the content is stored. This value is mandatory.
type: string
name:
description: name is unique within a namespace to reference
a secret resource.
type: string
namespace:
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
username:
properties:
content:
description: Bare content of the file, base64 encoded. It
has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference
where the content is stored. This value is mandatory.
type: string
name:
description: name is unique within a namespace to reference
a secret resource.
type: string
namespace:
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- password
- username
type: object
driver:
description: The driver to use to connect to the shared datastore.
type: string
endpoints:
description: List of the endpoints to connect to the shared datastore.
No need for protocol, just bare IP/FQDN and port.
items:
- additionalPrinterColumns:
- description: Kamaji data store driver
jsonPath: .spec.driver
name: Driver
type: string
- description: Age
jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: DataStore is the Schema for the datastores API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DataStoreSpec defines the desired state of DataStore.
properties:
basicAuth:
description: In case of authentication enabled for the given data store, specifies the username and password pair. This value is optional.
properties:
password:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
username:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- password
- username
type: object
driver:
description: The driver to use to connect to the shared datastore.
enum:
- etcd
- MySQL
- PostgreSQL
type: string
type: array
tlsConfig:
description: Defines the TLS/SSL configuration required to connect
to the data store in a secure way.
properties:
certificateAuthority:
description: Retrieve the Certificate Authority certificate and
private key, such as bare content of the file, or a SecretReference.
The key reference is required since etcd authentication is based
on certificates, and Kamaji is responsible in creating this.
properties:
certificate:
properties:
content:
description: Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
type: string
name:
description: name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
privateKey:
properties:
content:
description: Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
type: string
name:
description: name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- certificate
type: object
clientCertificate:
description: Specifies the SSL/TLS key and private key pair used
to connect to the data store.
properties:
certificate:
properties:
content:
description: Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
type: string
name:
description: name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
privateKey:
properties:
content:
description: Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
type: string
name:
description: name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- certificate
- privateKey
type: object
required:
- certificateAuthority
- clientCertificate
type: object
required:
- driver
- endpoints
- tlsConfig
type: object
status:
description: DataStoreStatus defines the observed state of DataStore.
properties:
usedBy:
description: List of the Tenant Control Planes, namespaced named,
using this data store.
items:
type: string
type: array
type: object
type: object
served: true
storage: true
subresources:
status: {}
endpoints:
description: List of the endpoints to connect to the shared datastore. No need for protocol, just bare IP/FQDN and port.
items:
type: string
minItems: 1
type: array
tlsConfig:
description: Defines the TLS/SSL configuration required to connect to the data store in a secure way.
properties:
certificateAuthority:
description: Retrieve the Certificate Authority certificate and private key, such as bare content of the file, or a SecretReference. The key reference is required since etcd authentication is based on certificates, and Kamaji is responsible in creating this.
properties:
certificate:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
privateKey:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- certificate
type: object
clientCertificate:
description: Specifies the SSL/TLS key and private key pair used to connect to the data store.
properties:
certificate:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
privateKey:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- certificate
- privateKey
type: object
required:
- certificateAuthority
- clientCertificate
type: object
required:
- driver
- endpoints
- tlsConfig
type: object
status:
description: DataStoreStatus defines the observed state of DataStore.
properties:
usedBy:
description: List of the Tenant Control Planes, namespaced named, using this data store.
items:
type: string
type: array
type: object
type: object
served: true
storage: true
subresources:
status: {}

File diff suppressed because it is too large

@@ -46,9 +46,9 @@ app.kubernetes.io/managed-by: {{ .Release.Service }}
Selector labels
*/}}
{{- define "kamaji.selectorLabels" -}}
app.kubernetes.io/name: {{ include "kamaji.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: controller-manager
app.kubernetes.io/name: {{ default (include "kamaji.name" .) .name }}
app.kubernetes.io/instance: {{ default .Release.Name .instance }}
app.kubernetes.io/component: {{ default "controller-manager" .component }}
{{- end }}
{{/*
@@ -61,3 +61,31 @@ Create the name of the service account to use
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Create the name of the Service to use for webhooks
*/}}
{{- define "kamaji.webhookServiceName" -}}
{{- printf "%s-webhook-service" (include "kamaji.fullname" .) }}
{{- end }}
{{/*
Create the name of the Service to use for metrics
*/}}
{{- define "kamaji.metricsServiceName" -}}
{{- printf "%s-metrics-service" (include "kamaji.fullname" .) }}
{{- end }}
{{/*
Create the name of the cert-manager secret
*/}}
{{- define "kamaji.webhookSecretName" -}}
{{- printf "%s-webhook-server-cert" (include "kamaji.fullname" .) }}
{{- end }}
{{/*
Create the name of the cert-manager Certificate
*/}}
{{- define "kamaji.certificateName" -}}
{{- printf "%s-serving-cert" (include "kamaji.fullname" .) }}
{{- end }}

@@ -0,0 +1,16 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
labels:
{{- $data := . | mustMergeOverwrite (dict "component" "certificate") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: {{ include "kamaji.certificateName" . }}
namespace: {{ .Release.Namespace }}
spec:
dnsNames:
- {{ include "kamaji.webhookServiceName" . }}.{{ .Release.Namespace }}.svc
- {{ include "kamaji.webhookServiceName" . }}.{{ .Release.Namespace }}.svc.cluster.local
issuerRef:
kind: Issuer
name: kamaji-selfsigned-issuer
secretName: {{ include "kamaji.webhookSecretName" . }}

@@ -0,0 +1,10 @@
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
labels:
{{- $data := . | mustMergeOverwrite (dict "component" "issuer") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: kamaji-selfsigned-issuer
namespace: {{ .Release.Namespace }}
spec:
selfSigned: {}

@@ -28,18 +28,7 @@ spec:
serviceAccountName: {{ include "kamaji.serviceAccountName" . }}
containers:
- args:
- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8080/
- --logtostderr=true
- --v=10
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
name: kube-rbac-proxy
ports:
- containerPort: 8443
name: https
protocol: TCP
- args:
- --config-file={{ .Values.configPath }}
- manager
- --health-probe-bind-address={{ .Values.healthProbeBindAddress }}
- --leader-elect
- --metrics-bind-address={{ .Values.metricsBindAddress }}
@@ -52,7 +41,16 @@ spec:
{{- toYaml . | nindent 8 }}
{{- end }}
command:
- /manager
- /kamaji
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- with .Values.livenessProbe }}
@@ -61,6 +59,12 @@ spec:
{{- end }}
name: manager
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
- containerPort: 8080
name: metrics
protocol: TCP
- containerPort: 8081
name: healthcheck
protocol: TCP
@@ -72,7 +76,21 @@ spec:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
volumeMounts:
- mountPath: /tmp
name: tmp
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
terminationGracePeriodSeconds: 10
volumes:
- name: tmp
emptyDir:
medium: Memory
- name: cert
secret:
defaultMode: 420
secretName: {{ include "kamaji.webhookSecretName" . }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}

@@ -2,6 +2,8 @@ apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: {{ include "datastore.fullname" . }}
annotations:
"helm.sh/hook": pre-install
labels:
{{- include "datastore.labels" . | nindent 4 }}
spec:
@@ -10,7 +12,12 @@ spec:
{{- include "datastore.endpoints" . | indent 4 }}
{{- if (and .Values.datastore.basicAuth.usernameSecret.name .Values.datastore.basicAuth.passwordSecret.name) }}
basicAuth:
{{- .Values.datastore.basicAuth | toYaml | nindent 4 }}
username:
secretReference:
{{- .Values.datastore.basicAuth.usernameSecret | toYaml | nindent 8 }}
password:
secretReference:
{{- .Values.datastore.basicAuth.passwordSecret | toYaml | nindent 8 }}
{{- end }}
tlsConfig:
certificateAuthority:

@@ -6,6 +6,10 @@ metadata:
{{- include "etcd.labels" . | nindent 4 }}
name: {{ include "etcd.csrConfigMapName" . }}
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": "hook-succeeded,hook-failed"
data:
ca-csr.json: |-
{

@@ -28,4 +28,8 @@ spec:
- --ignore-not-found=true
- {{ include "etcd.caSecretName" . }}
- {{ include "etcd.clientSecretName" . }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}

@@ -18,35 +18,13 @@ spec:
serviceAccountName: {{ include "etcd.serviceAccountName" . }}
restartPolicy: Never
initContainers:
- name: cfssl
image: cfssl/cfssl:latest
command:
- bash
- -c
- |-
cfssl gencert -initca /csr/ca-csr.json | cfssljson -bare /certs/ca &&
mv /certs/ca.pem /certs/ca.crt && mv /certs/ca-key.pem /certs/ca.key &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=peer-authentication /csr/peer-csr.json | cfssljson -bare /certs/peer &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=peer-authentication /csr/server-csr.json | cfssljson -bare /certs/server &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=client-authentication /csr/root-client-csr.json | cfssljson -bare /certs/root-client
volumeMounts:
- mountPath: /certs
name: certs
- mountPath: /csr
name: csr
- name: kubectl
image: {{ printf "clastix/kubectl:%s" (include "etcd.jobsTagKubeVersion" .) }}
command:
- sh
- -c
- |-
kubectl --namespace={{ .Release.Namespace }} delete secret --ignore-not-found=true {{ include "etcd.caSecretName" . }} {{ include "etcd.clientSecretName" . }} &&
kubectl --namespace={{ .Release.Namespace }} create secret generic {{ include "etcd.caSecretName" . }} --from-file=/certs/ca.crt --from-file=/certs/ca.key --from-file=/certs/peer-key.pem --from-file=/certs/peer.pem --from-file=/certs/server-key.pem --from-file=/certs/server.pem &&
kubectl --namespace={{ .Release.Namespace }} create secret tls {{ include "etcd.clientSecretName" . }} --key=/certs/root-client-key.pem --cert=/certs/root-client.pem &&
kubectl --namespace={{ .Release.Namespace }} rollout status sts/etcd --timeout=300s
volumeMounts:
- mountPath: /certs
name: certs
containers:
- command:
- bash
@@ -82,10 +60,11 @@ spec:
- name: root-certs
secret:
secretName: {{ include "etcd.clientSecretName" . }}
optional: true
- name: csr
configMap:
name: {{ include "etcd.csrConfigMapName" . }}
- name: certs
emptyDir: {}
secret:
secretName: {{ include "etcd.caSecretName" . }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@@ -0,0 +1,64 @@
{{- if .Values.etcd.deploy }}
apiVersion: batch/v1
kind: Job
metadata:
labels:
{{- include "etcd.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": "hook-succeeded"
name: "{{ .Release.Name }}-etcd-certs"
namespace: {{ .Release.Namespace }}
spec:
template:
metadata:
name: "{{ .Release.Name }}"
spec:
serviceAccountName: {{ include "etcd.serviceAccountName" . }}
restartPolicy: Never
initContainers:
- name: cfssl
image: cfssl/cfssl:latest
command:
- bash
- -c
- |-
cfssl gencert -initca /csr/ca-csr.json | cfssljson -bare /certs/ca &&
mv /certs/ca.pem /certs/ca.crt && mv /certs/ca-key.pem /certs/ca.key &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=peer-authentication /csr/peer-csr.json | cfssljson -bare /certs/peer &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=peer-authentication /csr/server-csr.json | cfssljson -bare /certs/server &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=client-authentication /csr/root-client-csr.json | cfssljson -bare /certs/root-client
volumeMounts:
- mountPath: /certs
name: certs
- mountPath: /csr
name: csr
containers:
- name: kubectl
image: {{ printf "clastix/kubectl:%s" (include "etcd.jobsTagKubeVersion" .) }}
command:
- sh
- -c
- |-
kubectl --namespace={{ .Release.Namespace }} delete secret --ignore-not-found=true {{ include "etcd.caSecretName" . }} {{ include "etcd.clientSecretName" . }} &&
kubectl --namespace={{ .Release.Namespace }} create secret generic {{ include "etcd.caSecretName" . }} --from-file=/certs/ca.crt --from-file=/certs/ca.key --from-file=/certs/peer-key.pem --from-file=/certs/peer.pem --from-file=/certs/server-key.pem --from-file=/certs/server.pem &&
kubectl --namespace={{ .Release.Namespace }} create secret tls {{ include "etcd.clientSecretName" . }} --key=/certs/root-client-key.pem --cert=/certs/root-client.pem
volumeMounts:
- mountPath: /certs
name: certs
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
volumes:
- name: csr
configMap:
name: {{ include "etcd.csrConfigMapName" . }}
- name: certs
emptyDir: {}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@@ -5,6 +5,9 @@ metadata:
labels:
{{- include "etcd.labels" . | nindent 4 }}
name: etcd-gen-certs-role
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
@@ -38,6 +41,9 @@ metadata:
{{- include "etcd.labels" . | nindent 4 }}
name: etcd-gen-certs-rolebiding
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role


@@ -5,5 +5,8 @@ metadata:
labels:
{{- include "etcd.labels" . | nindent 4 }}
name: {{ include "etcd.serviceAccountName" . }}
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
namespace: {{ .Release.Namespace }}
{{- end }}


@@ -81,6 +81,10 @@ spec:
volumeClaimTemplates:
- metadata:
name: data
{{- with .Values.etcd.persistence.customAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
storageClassName: {{ .Values.etcd.persistence.storageClassName }}
accessModes:


@@ -0,0 +1,50 @@
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
annotations:
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "kamaji.certificateName" . }}
labels:
{{- $data := . | mustMergeOverwrite (dict "instance" "mutating-webhook-configuration") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: kamaji-mutating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-kamaji-clastix-io-v1alpha1-datastore
failurePolicy: Fail
name: mdatastore.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- datastores
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-kamaji-clastix-io-v1alpha1-tenantcontrolplane
failurePolicy: Fail
name: mtenantcontrolplane.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- tenantcontrolplanes
sideEffects: None


@@ -66,6 +66,16 @@ rules:
- patch
- update
- watch
- apiGroups:
- batch
resources:
- jobs
verbs:
- create
- delete
- get
- list
- watch
- apiGroups:
- ""
resources:
@@ -114,12 +124,6 @@ rules:
- patch
- update
- watch
- apiGroups:
- kamaji.clastix.io
resources:
- datastores/finalizers
verbs:
- update
- apiGroups:
- kamaji.clastix.io
resources:


@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "kamaji.fullname" . }}
labels:
{{- include "kamaji.labels" . | nindent 4 }}
namespace: {{ .Release.Namespace }}
spec:
type: {{ .Values.service.type }}
ports:
- name: https
port: {{ .Values.service.port }}
protocol: TCP
targetPort: https
selector:
{{- include "kamaji.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
labels:
{{- $data := . | mustMergeOverwrite (dict "component" "metrics") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: {{ include "kamaji.metricsServiceName" . }}
namespace: {{ .Release.Namespace }}
spec:
ports:
- port: 8080
name: metrics
protocol: TCP
targetPort: metrics
selector:
{{- include "kamaji.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
labels:
{{- $data := . | mustMergeOverwrite (dict "component" "webhook" "instance" "webhook-service") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
spec:
ports:
- port: 443
protocol: TCP
name: webhook-server
targetPort: webhook-server
selector:
{{- include "kamaji.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,21 @@
{{- if and (.Capabilities.APIVersions.Has "monitoring.coreos.com/v1") .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
{{- $data := . | mustMergeOverwrite (dict "component" "servicemonitor") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: {{ include "kamaji.fullname" . }}
namespace: {{ .Release.Namespace }}
spec:
endpoints:
- path: /metrics
port: metrics
scheme: http
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "kamaji.name" . }}
{{- end }}


@@ -0,0 +1,70 @@
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
annotations:
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "kamaji.certificateName" . }}
labels:
{{- $data := . | mustMergeOverwrite (dict "instance" "validating-webhook-configuration") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: kamaji-validating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate--v1-secret
failurePolicy: Ignore
name: vdatastoresecrets.kb.io
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- DELETE
resources:
- secrets
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-kamaji-clastix-io-v1alpha1-datastore
failurePolicy: Fail
name: vdatastore.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
- DELETE
resources:
- datastores
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-kamaji-clastix-io-v1alpha1-tenantcontrolplane
failurePolicy: Fail
name: vtenantcontrolplane.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- tenantcontrolplanes
sideEffects: None


@@ -15,8 +15,10 @@ image:
# -- A list of extra arguments to add to the kamaji controller default ones
extraArgs: []
# -- Alternative configuration file path. (default "./kamaji.yaml")
configPath: "./kamaji.yaml"
serviceMonitor:
# -- Toggle the ServiceMonitor true if you have Prometheus Operator installed and configured
enabled: false
etcd:
# -- Install an etcd with enabled multi-tenancy along with Kamaji
@@ -31,7 +33,7 @@ etcd:
# -- Install specific etcd image
image:
repository: quay.io/coreos/etcd
tag: "v3.5.4"
tag: "v3.5.6"
pullPolicy: IfNotPresent
# -- The livenessProbe for the etcd container
@@ -55,6 +57,9 @@ etcd:
storageClass: ""
accessModes:
- ReadWriteOnce
# -- The custom annotations to add to the PVC
customAnnotations: {}
# volumeType: local
overrides:
caSecret:
@@ -127,10 +132,6 @@ securityContext:
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 8443
resources:
limits:
cpu: 200m

cmd/manager/cmd.go

@@ -0,0 +1,217 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package manager
import (
"flag"
"fmt"
"io"
"os"
goRuntime "runtime"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/klog/v2"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
cmdutils "github.com/clastix/kamaji/cmd/utils"
"github.com/clastix/kamaji/controllers"
"github.com/clastix/kamaji/controllers/soot"
"github.com/clastix/kamaji/internal"
datastoreutils "github.com/clastix/kamaji/internal/datastore/utils"
"github.com/clastix/kamaji/internal/webhook"
)
func NewCmd(scheme *runtime.Scheme) *cobra.Command {
// CLI flags
var (
metricsBindAddress string
healthProbeBindAddress string
leaderElect bool
tmpDirectory string
kineImage string
datastore string
managerNamespace string
managerServiceAccountName string
managerServiceName string
webhookCABundle []byte
migrateJobImage string
maxConcurrentReconciles int
webhookCAPath string
)
ctx := ctrl.SetupSignalHandler()
cmd := &cobra.Command{
Use: "manager",
Short: "Start the Kamaji Kubernetes Operator",
SilenceErrors: false,
SilenceUsage: true,
PreRunE: func(cmd *cobra.Command, args []string) (err error) {
// Avoid polluting Kamaji's stdout with useless details from the underlying klog implementation
klog.SetOutput(io.Discard)
klog.LogToStderr(false)
if err = cmdutils.CheckFlags(cmd.Flags(), []string{"kine-image", "datastore", "migrate-image", "tmp-directory", "pod-namespace", "webhook-service-name", "serviceaccount-name", "webhook-ca-path"}...); err != nil {
return err
}
if webhookCABundle, err = os.ReadFile(webhookCAPath); err != nil {
return fmt.Errorf("unable to read webhook CA: %w", err)
}
if err = datastoreutils.CheckExists(ctx, scheme, datastore); err != nil {
return err
}
return nil
},
RunE: func(cmd *cobra.Command, args []string) error {
setupLog := ctrl.Log.WithName("setup")
setupLog.Info(fmt.Sprintf("Kamaji version %s %s%s", internal.GitTag, internal.GitCommit, internal.GitDirty))
setupLog.Info(fmt.Sprintf("Build from: %s", internal.GitRepo))
setupLog.Info(fmt.Sprintf("Build date: %s", internal.BuildTime))
setupLog.Info(fmt.Sprintf("Go Version: %s", goRuntime.Version()))
setupLog.Info(fmt.Sprintf("Go OS/Arch: %s/%s", goRuntime.GOOS, goRuntime.GOARCH))
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
MetricsBindAddress: metricsBindAddress,
Port: 9443,
HealthProbeBindAddress: healthProbeBindAddress,
LeaderElection: leaderElect,
LeaderElectionNamespace: managerNamespace,
LeaderElectionID: "799b98bc.clastix.io",
})
if err != nil {
setupLog.Error(err, "unable to start manager")
return err
}
tcpChannel := make(controllers.TenantControlPlaneChannel)
if err = (&controllers.DataStore{TenantControlPlaneTrigger: tcpChannel}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "DataStore")
return err
}
reconciler := &controllers.TenantControlPlaneReconciler{
Client: mgr.GetClient(),
APIReader: mgr.GetAPIReader(),
Config: controllers.TenantControlPlaneReconcilerConfig{
DefaultDataStoreName: datastore,
KineContainerImage: kineImage,
TmpBaseDirectory: tmpDirectory,
},
TriggerChan: tcpChannel,
KamajiNamespace: managerNamespace,
KamajiServiceAccount: managerServiceAccountName,
KamajiService: managerServiceName,
KamajiMigrateImage: migrateJobImage,
MaxConcurrentReconciles: maxConcurrentReconciles,
}
if err = reconciler.SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "Namespace")
return err
}
if err = (&webhook.Freeze{}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to register webhook", "webhook", "Freeze")
return err
}
if err = (&kamajiv1alpha1.DatastoreUsedSecret{}).SetupWithManager(ctx, mgr); err != nil {
setupLog.Error(err, "unable to create indexer", "indexer", "DatastoreUsedSecret")
return err
}
if err = (&kamajiv1alpha1.TenantControlPlaneStatusDataStore{}).SetupWithManager(ctx, mgr); err != nil {
setupLog.Error(err, "unable to create indexer", "indexer", "TenantControlPlaneStatusDataStore")
return err
}
if err = (&kamajiv1alpha1.TenantControlPlane{}).SetupWebhookWithManager(mgr, datastore); err != nil {
setupLog.Error(err, "unable to create webhook", "webhook", "TenantControlPlane")
return err
}
if err = (&kamajiv1alpha1.DataStore{}).SetupWebhookWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create webhook", "webhook", "DataStore")
return err
}
if err = (&soot.Manager{
MigrateCABundle: webhookCABundle,
MigrateServiceName: managerServiceName,
MigrateServiceNamespace: managerNamespace,
AdminClient: mgr.GetClient(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to set up soot manager")
return err
}
if err = mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up health check")
return err
}
if err = mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up ready check")
return err
}
setupLog.Info("starting manager")
if err = mgr.Start(ctx); err != nil {
setupLog.Error(err, "problem running manager")
return err
}
return nil
},
}
// Setting zap logger
zapfs := flag.NewFlagSet("zap", flag.ExitOnError)
opts := zap.Options{
Development: true,
}
opts.BindFlags(zapfs)
cmd.Flags().AddGoFlagSet(zapfs)
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
// Setting CLI flags
cmd.Flags().StringVar(&metricsBindAddress, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
cmd.Flags().StringVar(&healthProbeBindAddress, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
cmd.Flags().BoolVar(&leaderElect, "leader-elect", true, "Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.")
cmd.Flags().StringVar(&tmpDirectory, "tmp-directory", "/tmp/kamaji", "Directory which will be used to work with temporary files.")
cmd.Flags().StringVar(&kineImage, "kine-image", "rancher/kine:v0.9.2-amd64", "Container image along with tag to use for the Kine sidecar container (used only if etcd-storage-type is set to one of kine strategies).")
cmd.Flags().StringVar(&datastore, "datastore", "etcd", "The default DataStore that should be used by Kamaji to setup the required storage.")
cmd.Flags().StringVar(&migrateJobImage, "migrate-image", fmt.Sprintf("clastix/kamaji:v%s", internal.GitTag), "Specify the container image to launch when a TenantControlPlane is migrated to a new datastore.")
cmd.Flags().IntVar(&maxConcurrentReconciles, "max-concurrent-tcp-reconciles", 1, "Specify the number of workers for the Tenant Control Plane controller (beware of CPU consumption)")
cmd.Flags().StringVar(&managerNamespace, "pod-namespace", os.Getenv("POD_NAMESPACE"), "The Kubernetes Namespace in which the Operator is running, required for the TenantControlPlane migration jobs.")
cmd.Flags().StringVar(&managerServiceName, "webhook-service-name", "kamaji-webhook-service", "The Kamaji webhook server Service name which is used to get validation webhooks, required for the TenantControlPlane migration jobs.")
cmd.Flags().StringVar(&managerServiceAccountName, "serviceaccount-name", os.Getenv("SERVICE_ACCOUNT"), "The name of the ServiceAccount used by the Operator, required for the TenantControlPlane migration jobs.")
cmd.Flags().StringVar(&webhookCAPath, "webhook-ca-path", "/tmp/k8s-webhook-server/serving-certs/ca.crt", "Path to the Manager webhook server CA, required for the TenantControlPlane migration jobs.")
cobra.OnInitialize(func() {
viper.AutomaticEnv()
})
return cmd
}

cmd/migrate/cmd.go

@@ -0,0 +1,119 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package migrate
import (
"context"
"fmt"
"strings"
"time"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/internal/datastore"
)
func NewCmd(scheme *runtime.Scheme) *cobra.Command {
// CLI flags
var (
tenantControlPlane string
targetDataStore string
timeout time.Duration
)
cmd := &cobra.Command{
Use: "migrate",
Short: "Migrate the data of a TenantControlPlane to another compatible DataStore",
SilenceUsage: true,
RunE: func(cmd *cobra.Command, args []string) error {
ctx, cancelFn := context.WithTimeout(context.Background(), timeout)
defer cancelFn()
log := ctrl.Log
log.Info("generating the controller-runtime client")
client, err := ctrlclient.New(ctrl.GetConfigOrDie(), ctrlclient.Options{
Scheme: scheme,
})
if err != nil {
return err
}
parts := strings.Split(tenantControlPlane, string(types.Separator))
if len(parts) != 2 {
return fmt.Errorf("non well-formed namespaced name for the tenant control plane, expected <NAMESPACE>/NAME, got %s", tenantControlPlane)
}
log.Info("retrieving the TenantControlPlane")
tcp := &kamajiv1alpha1.TenantControlPlane{}
if err = client.Get(ctx, types.NamespacedName{Namespace: parts[0], Name: parts[1]}, tcp); err != nil {
return err
}
log.Info("retrieving the TenantControlPlane used DataStore")
originDs := &kamajiv1alpha1.DataStore{}
if err = client.Get(ctx, types.NamespacedName{Name: tcp.Status.Storage.DataStoreName}, originDs); err != nil {
return err
}
log.Info("retrieving the target DataStore")
targetDs := &kamajiv1alpha1.DataStore{}
if err = client.Get(ctx, types.NamespacedName{Name: targetDataStore}, targetDs); err != nil {
return err
}
if tcp.Status.Storage.Driver != string(targetDs.Spec.Driver) {
return fmt.Errorf("migration between DataStores with different drivers is not supported")
}
if tcp.Status.Storage.DataStoreName == targetDs.GetName() {
return fmt.Errorf("cannot migrate to the same DataStore")
}
log.Info("generating the origin storage connection")
originConnection, err := datastore.NewStorageConnection(ctx, client, *originDs)
if err != nil {
return err
}
defer originConnection.Close()
log.Info("generating the target storage connection")
targetConnection, err := datastore.NewStorageConnection(ctx, client, *targetDs)
if err != nil {
return err
}
defer targetConnection.Close()
// Start migrating from the old Datastore to the new one
log.Info("migration from origin to target started")
if err = originConnection.Migrate(ctx, *tcp, targetConnection); err != nil {
return fmt.Errorf("unable to migrate data from %s to %s: %w", originDs.GetName(), targetDs.GetName(), err)
}
log.Info("migration completed")
return nil
},
}
cmd.Flags().StringVar(&tenantControlPlane, "tenant-control-plane", "", "Namespaced-name of the TenantControlPlane that must be migrated (e.g.: default/test)")
cmd.Flags().StringVar(&targetDataStore, "target-datastore", "", "Name of the Datastore to which the TenantControlPlane will be migrated")
cmd.Flags().DurationVar(&timeout, "timeout", 5*time.Minute, "Amount of time for the context timeout")
_ = cmd.MarkFlagRequired("tenant-control-plane")
_ = cmd.MarkFlagRequired("target-datastore")
return cmd
}
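The `<NAMESPACE>/NAME` validation performed in `RunE` above can be factored into a small standalone helper. This is a minimal sketch, not part of the Kamaji codebase: `parseNamespacedName` is a hypothetical name, and it assumes `types.Separator` is the `/` rune (as in apimachinery), so a plain `strings.Split` suffices:

```go
package main

import (
	"fmt"
	"strings"
)

// parseNamespacedName splits a "<NAMESPACE>/NAME" string into its two parts,
// mirroring the validation the migrate command performs before fetching the
// TenantControlPlane. The separator is assumed to be '/'.
func parseNamespacedName(s string) (namespace, name string, err error) {
	parts := strings.Split(s, "/")
	if len(parts) != 2 {
		return "", "", fmt.Errorf("non well-formed namespaced name, expected <NAMESPACE>/NAME, got %s", s)
	}
	return parts[0], parts[1], nil
}

func main() {
	ns, name, err := parseNamespacedName("default/test")
	fmt.Println(ns, name, err) // default test <nil>
}
```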

cmd/root.go

@@ -0,0 +1,31 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package cmd
import (
"math/rand"
"time"
"github.com/spf13/cobra"
_ "go.uber.org/automaxprocs" // Automatically set `GOMAXPROCS` to match Linux container CPU quota.
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)
func NewCmd(scheme *runtime.Scheme) *cobra.Command {
return &cobra.Command{
Use: "kamaji",
Short: "Build and operate Kubernetes at scale with a fraction of operational burden.",
PersistentPreRun: func(cmd *cobra.Command, args []string) {
// Seeding is required to ensure non-reproducibility of the certificates generated by Kamaji.
rand.Seed(time.Now().UnixNano())
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(kamajiv1alpha1.AddToScheme(scheme))
},
}
}

cmd/utils/check_flags.go

@@ -0,0 +1,22 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package utils
import (
"fmt"
"github.com/spf13/pflag"
)
func CheckFlags(flags *pflag.FlagSet, args ...string) error {
for _, arg := range args {
v, _ := flags.GetString(arg)
if len(v) == 0 {
return fmt.Errorf("expecting a value for --%s arg", arg)
}
}
return nil
}


@@ -0,0 +1,39 @@
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More documentation can be found at https://docs.cert-manager.io
# WARNING: Targets CertManager v1.0. Check https://cert-manager.io/docs/installation/upgrading/ for breaking changes.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
labels:
app.kubernetes.io/name: issuer
app.kubernetes.io/instance: selfsigned-issuer
app.kubernetes.io/component: certificate
app.kubernetes.io/created-by: operator
app.kubernetes.io/part-of: operator
app.kubernetes.io/managed-by: kustomize
name: selfsigned-issuer
namespace: system
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
labels:
app.kubernetes.io/name: certificate
app.kubernetes.io/instance: serving-cert
app.kubernetes.io/component: certificate
app.kubernetes.io/created-by: operator
app.kubernetes.io/part-of: operator
app.kubernetes.io/managed-by: kustomize
name: serving-cert # this name should match the one appeared in kustomizeconfig.yaml
namespace: system
spec:
# $(SERVICE_NAME) and $(SERVICE_NAMESPACE) will be substituted by kustomize
dnsNames:
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc.cluster.local
issuerRef:
kind: Issuer
name: selfsigned-issuer
secretName: webhook-server-cert # this secret will not be prefixed, since it's not managed by kustomize


@@ -0,0 +1,5 @@
resources:
- certificate.yaml
configurations:
- kustomizeconfig.yaml


@@ -0,0 +1,16 @@
# This configuration is for teaching kustomize how to update name ref and var substitution
nameReference:
- kind: Issuer
group: cert-manager.io
fieldSpecs:
- kind: Certificate
group: cert-manager.io
path: spec/issuerRef/name
varReference:
- kind: Certificate
group: cert-manager.io
path: spec/commonName
- kind: Certificate
group: cert-manager.io
path: spec/dnsNames


@@ -60,6 +60,7 @@ spec:
keyPath:
description: Name of the key for the given Secret reference
where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference
@@ -86,6 +87,7 @@ spec:
keyPath:
description: Name of the key for the given Secret reference
where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference
@@ -106,12 +108,17 @@ spec:
type: object
driver:
description: The driver to use to connect to the shared datastore.
enum:
- etcd
- MySQL
- PostgreSQL
type: string
endpoints:
description: List of the endpoints to connect to the shared datastore.
No need for protocol, just bare IP/FQDN and port.
items:
type: string
minItems: 1
type: array
tlsConfig:
description: Defines the TLS/SSL configuration required to connect
@@ -136,6 +143,7 @@ spec:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to
@@ -163,6 +171,7 @@ spec:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to
@@ -197,6 +206,7 @@ spec:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to
@@ -224,6 +234,7 @@ spec:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to


@@ -22,18 +22,22 @@ spec:
jsonPath: .spec.kubernetes.version
name: Version
type: string
- description: Kubernetes version
- description: Status
jsonPath: .status.kubernetesResources.version.status
name: Status
type: string
- description: Tenant Control Plane Endpoint (API server)
jsonPath: .status.controlPlaneEndpoint
name: Control-Plane-Endpoint
name: Control-Plane endpoint
type: string
- description: Secret which contains admin kubeconfig
jsonPath: .status.kubeconfig.admin.secretName
name: Kubeconfig
type: string
- description: DataStore actually used
jsonPath: .status.storage.dataStoreName
name: Datastore
type: string
- description: Age
jsonPath: .metadata.creationTimestamp
name: Age
@@ -82,54 +86,85 @@ spec:
description: Enables the Konnectivity addon in the Tenant Cluster,
required if the worker nodes are in a different network.
properties:
agentImage:
default: registry.k8s.io/kas-network-proxy/proxy-agent
description: AgentImage defines the container image for Konnectivity's
agent.
type: string
proxyPort:
description: Port of Konnectivity proxy server.
format: int32
type: integer
resources:
description: Resources define the amount of CPU and memory
to allocate to the Konnectivity server.
agent:
default:
image: registry.k8s.io/kas-network-proxy/proxy-agent
version: v0.0.32
properties:
limits:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Limits describes the maximum amount of compute
resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
type: object
requests:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Requests describes the minimum amount of
compute resources required. If Requests is omitted for
a container, it defaults to Limits if that is explicitly
specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
type: object
extraArgs:
description: ExtraArgs allows adding additional arguments
to said component.
items:
type: string
type: array
image:
default: registry.k8s.io/kas-network-proxy/proxy-agent
description: AgentImage defines the container image for
Konnectivity's agent.
type: string
version:
default: v0.0.32
description: Version for Konnectivity agent.
type: string
type: object
server:
default:
image: registry.k8s.io/kas-network-proxy/proxy-server
port: 8132
version: v0.0.32
properties:
extraArgs:
description: ExtraArgs allows adding additional arguments
to said component.
items:
type: string
type: array
image:
default: registry.k8s.io/kas-network-proxy/proxy-server
description: Container image used by the Konnectivity
server.
type: string
port:
description: The port which Konnectivity server is listening
to.
format: int32
type: integer
resources:
description: Resources define the amount of CPU and memory
to allocate to the Konnectivity server.
properties:
limits:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Limits describes the maximum amount
of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
type: object
requests:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Requests describes the minimum amount
of compute resources required. If Requests is omitted
for a container, it defaults to Limits if that is
explicitly specified, otherwise to an implementation-defined
value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
type: object
type: object
version:
default: v0.0.32
description: Container image version of the Konnectivity
server.
type: string
required:
- port
type: object
serverImage:
default: registry.k8s.io/kas-network-proxy/proxy-server
description: ServerImage defines the container image for Konnectivity's
server.
type: string
version:
default: v0.0.32
description: Version for Konnectivity server and agent.
type: string
required:
- proxyPort
type: object
kubeProxy:
description: Enables the kube-proxy addon in the Tenant Cluster.
@@ -1116,8 +1151,8 @@ spec:
controller-manager, and scheduler).
properties:
apiServer:
description: ResourceRequirements describes the compute
resource requirements.
description: ComponentResourceRequirements describes the
compute resource requirements.
properties:
limits:
additionalProperties:
@@ -1144,8 +1179,8 @@ spec:
type: object
type: object
controllerManager:
description: ResourceRequirements describes the compute
resource requirements.
description: ComponentResourceRequirements describes the
compute resource requirements.
properties:
limits:
additionalProperties:
@@ -1172,8 +1207,8 @@ spec:
type: object
type: object
scheduler:
description: ResourceRequirements describes the compute
resource requirements.
description: ComponentResourceRequirements describes the
compute resource requirements.
properties:
limits:
additionalProperties:
@@ -1200,6 +1235,74 @@ spec:
type: object
type: object
type: object
runtimeClassName:
description: 'RuntimeClassName refers to a RuntimeClass object
in the node.k8s.io group, which should be used to run the
Tenant Control Plane pod. If no RuntimeClass resource matches
the named class, the pod will not be run. If unset or empty,
the "legacy" RuntimeClass will be used, which is an implicit
class with an empty definition that uses the default runtime
handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class'
type: string
strategy:
default:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 0
type: RollingUpdate
description: Strategy describes how to replace existing pods
with new ones for the given Tenant Control Plane. Default
value is set to Rolling Update, with a blue/green strategy.
properties:
rollingUpdate:
description: 'Rolling update config params. Present only
if DeploymentStrategyType = RollingUpdate. --- TODO:
Update this to follow our convention for oneOf, whatever
we decide it to be.'
properties:
maxSurge:
anyOf:
- type: integer
- type: string
description: 'The maximum number of pods that can
be scheduled above the desired number of pods. Value
can be an absolute number (ex: 5) or a percentage
of desired pods (ex: 10%). This can not be 0 if
MaxUnavailable is 0. Absolute number is calculated
from percentage by rounding up. Defaults to 25%.
Example: when this is set to 30%, the new ReplicaSet
can be scaled up immediately when the rolling update
starts, such that the total number of old and new
pods do not exceed 130% of desired pods. Once old
pods have been killed, new ReplicaSet can be scaled
up further, ensuring that total number of pods running
at any time during the update is at most 130% of
desired pods.'
x-kubernetes-int-or-string: true
maxUnavailable:
anyOf:
- type: integer
- type: string
description: 'The maximum number of pods that can
be unavailable during the update. Value can be an
absolute number (ex: 5) or a percentage of desired
pods (ex: 10%). Absolute number is calculated from
percentage by rounding down. This can not be 0 if
MaxSurge is 0. Defaults to 25%. Example: when this
is set to 30%, the old ReplicaSet can be scaled
down to 70% of desired pods immediately when the
rolling update starts. Once new pods are ready,
old ReplicaSet can be scaled down further, followed
by scaling up the new ReplicaSet, ensuring that
the total number of pods available at all times
during the update is at least 70% of desired pods.'
x-kubernetes-int-or-string: true
type: object
type:
description: Type of deployment. Can be "Recreate" or
"RollingUpdate". Default is RollingUpdate.
type: string
type: object
tolerations:
description: 'If specified, the Tenant Control Plane pod''s
tolerations. More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/'
@@ -1376,8 +1479,8 @@ spec:
in the calculations. - Ignore: nodeAffinity/nodeSelector
are ignored. All nodes are included in the calculations.
\n If this value is nil, the behavior is equivalent
to the Honor policy. This is a alpha-level feature
enabled by the NodeInclusionPolicyInPodTopologySpread
to the Honor policy. This is a beta-level feature
default enabled by the NodeInclusionPolicyInPodTopologySpread
feature flag."
type: string
nodeTaintsPolicy:
@@ -1388,8 +1491,8 @@ spec:
has a toleration, are included. - Ignore: node taints
are ignored. All nodes are included. \n If this value
is nil, the behavior is equivalent to the Ignore policy.
This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread
feature flag."
This is a beta-level feature default enabled by the
NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
topologyKey:
description: TopologyKey is the key of node labels.
@@ -1573,6 +1676,24 @@ spec:
- systemd
- cgroupfs
type: string
preferredAddressTypes:
default:
- Hostname
- InternalIP
- ExternalIP
description: Ordered list of the preferred NodeAddressTypes
to use for kubelet connections. Default to Hostname, InternalIP,
ExternalIP.
items:
enum:
- Hostname
- InternalIP
- ExternalIP
- InternalDNS
- ExternalDNS
type: string
minItems: 1
type: array
type: object
version:
description: Kubernetes Version for the tenant control plane
@@ -1635,8 +1756,6 @@ spec:
coreDNS:
description: AddonStatus defines the observed state of an Addon.
properties:
checksum:
type: string
enabled:
type: boolean
lastUpdate:
@@ -1651,8 +1770,6 @@ spec:
properties:
agent:
properties:
checksum:
type: string
lastUpdate:
description: Last time when k8s object was updated
format: date-time
@@ -1675,8 +1792,6 @@ spec:
type: object
clusterrolebinding:
properties:
checksum:
type: string
lastUpdate:
description: Last time when k8s object was updated
format: date-time
@@ -1709,8 +1824,6 @@ spec:
type: object
sa:
properties:
checksum:
type: string
lastUpdate:
description: Last time when k8s object was updated
format: date-time
@@ -1893,8 +2006,6 @@ spec:
kubeProxy:
description: AddonStatus defines the observed state of an Addon.
properties:
checksum:
type: string
enabled:
type: boolean
lastUpdate:
@@ -2022,30 +2133,8 @@ spec:
format: date-time
type: string
type: object
uploadConfigKubeadm:
description: KubeadmPhaseStatus contains the status of a kubeadm
phase action.
properties:
checksum:
type: string
lastUpdate:
format: date-time
type: string
type: object
uploadConfigKubelet:
description: KubeadmPhaseStatus contains the status of a kubeadm
phase action.
properties:
checksum:
type: string
lastUpdate:
format: date-time
type: string
type: object
required:
- bootstrapToken
- uploadConfigKubeadm
- uploadConfigKubelet
type: object
kubeadmconfig:
description: KubeadmConfig contains the status of the configuration
@@ -2213,27 +2302,25 @@ spec:
properties:
ingress:
description: Ingress is a list containing ingress points
for the load-balancer. Traffic intended for the service
should be sent to these ingress points.
for the load-balancer.
items:
description: 'LoadBalancerIngress represents the status
of a load-balancer ingress point: traffic intended
for the service should be sent to an ingress point.'
description: IngressLoadBalancerIngress represents the
status of a load-balancer ingress point.
properties:
hostname:
description: Hostname is set for load-balancer ingress
points that are DNS based (typically AWS load-balancers)
points that are DNS based.
type: string
ip:
description: IP is set for load-balancer ingress
points that are IP based (typically GCE or OpenStack
load-balancers)
points that are IP based.
type: string
ports:
description: Ports is a list of records of service
ports If used, every port defined in the service
should have an entry in it
description: Ports provides information about the
ports exposed by this LoadBalancer.
items:
description: IngressPortStatus represents the
error condition of a service port
properties:
error:
description: 'Error is to record the problem
@@ -2250,16 +2337,14 @@ spec:
type: string
port:
description: Port is the port number of the
service port of which status is recorded
here
ingress port.
format: int32
type: integer
protocol:
default: TCP
description: 'Protocol is the protocol of
the service port of which status is recorded
here The supported values are: "TCP", "UDP",
"SCTP"'
the ingress port. The supported values are:
"TCP", "UDP", "SCTP"'
type: string
required:
- port
@@ -2453,7 +2538,9 @@ spec:
version, such as its provisioning state, or completed upgrade.
enum:
- Provisioning
- CertificateAuthorityRotating
- Upgrading
- Migrating
- Ready
- NotReady
type: string
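As a quick orientation for readers of the schema diff above: the CRD now nests Konnectivity configuration under separate `server` and `agent` objects (replacing the flat `proxyPort`/`serverImage`/`version` fields) and adds an ordered `kubelet.preferredAddressTypes` list. A minimal sketch of a `TenantControlPlane` using these fields follows; the resource name and versions are illustrative placeholders, only the field paths are taken from the schema above.

```yaml
# Illustrative sketch only: field paths follow the updated CRD schema in this diff;
# the metadata name and version values are placeholders.
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-00
spec:
  kubernetes:
    version: v1.27.0
    kubelet:
      # New ordered list; defaults to Hostname, InternalIP, ExternalIP.
      preferredAddressTypes:
        - Hostname
        - InternalIP
        - ExternalIP
  addons:
    konnectivity:
      # Replaces the previous flat proxyPort/serverImage/version fields.
      server:
        port: 8132
        image: registry.k8s.io/kas-network-proxy/proxy-server
        version: v0.0.32
      agent:
        image: registry.k8s.io/kas-network-proxy/proxy-agent
        version: v0.0.32
```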


@@ -7,14 +7,11 @@ resources:
#+kubebuilder:scaffold:crdkustomizeresource
patchesStrategicMerge:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
#- patches/webhook_in_clusters.yaml
- patches/webhook_in_clusters.yaml
#+kubebuilder:scaffold:crdkustomizewebhookpatch
# [CERTMANAGER] To enable cert-manager, uncomment all the sections with [CERTMANAGER] prefix.
# patches here are for enabling the CA injection for each CRD
#- patches/cainjection_in_clusters.yaml
- patches/cainjection_in_clusters.yaml
- patches/cainjection_in_datastores.yaml
#+kubebuilder:scaffold:crdkustomizecainjectionpatch
# the following config is for teaching kustomize how to do kustomization for CRDs.


@@ -17,59 +17,42 @@ bases:
- ../rbac
- ../manager
- ../samples
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
#- ../webhook
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
#- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
#- ../prometheus
- ../webhook
- ../certmanager
- ../prometheus
patchesStrategicMerge:
# Protect the /metrics endpoint by putting it behind auth.
# If you want your controller-manager to expose the /metrics
# endpoint w/o any authn/z, please comment the following line.
- manager_auth_proxy_patch.yaml
# Mount the controller config file for loading manager configurations
# through a ComponentConfig type
#- manager_config_patch.yaml
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
#- manager_webhook_patch.yaml
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.
# Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.
# 'CERTMANAGER' needs to be enabled to use ca injection
#- webhookcainjection_patch.yaml
- manager_webhook_patch.yaml
- webhookcainjection_patch.yaml
# the following config is for teaching kustomize how to do var substitution
vars:
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
#- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
# objref:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert # this name should match the one in certificate.yaml
# fieldref:
# fieldpath: metadata.namespace
#- name: CERTIFICATE_NAME
# objref:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert # this name should match the one in certificate.yaml
#- name: SERVICE_NAMESPACE # namespace of the service
# objref:
# kind: Service
# version: v1
# name: webhook-service
# fieldref:
# fieldpath: metadata.namespace
#- name: SERVICE_NAME
# objref:
# kind: Service
# version: v1
# name: webhook-service
- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
objref:
kind: Certificate
group: cert-manager.io
version: v1
name: serving-cert # this name should match the one in certificate.yaml
fieldref:
fieldpath: metadata.namespace
- name: CERTIFICATE_NAME
objref:
kind: Certificate
group: cert-manager.io
version: v1
name: serving-cert # this name should match the one in certificate.yaml
- name: SERVICE_NAMESPACE # namespace of the service
objref:
kind: Service
version: v1
name: webhook-service
fieldref:
fieldpath: metadata.namespace
- name: SERVICE_NAME
objref:
kind: Service
version: v1
name: webhook-service


@@ -0,0 +1,23 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
- name: manager
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
volumes:
- name: cert
secret:
defaultMode: 420
secretName: webhook-server-cert


@@ -0,0 +1,29 @@
# This patch add annotation to admission webhook config and
# the variables $(CERTIFICATE_NAMESPACE) and $(CERTIFICATE_NAME) will be substituted by kustomize.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/name: mutatingwebhookconfiguration
app.kubernetes.io/instance: mutating-webhook-configuration
app.kubernetes.io/component: webhook
app.kubernetes.io/created-by: operator
app.kubernetes.io/part-of: operator
app.kubernetes.io/managed-by: kustomize
name: mutating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/name: validatingwebhookconfiguration
app.kubernetes.io/instance: validating-webhook-configuration
app.kubernetes.io/component: webhook
app.kubernetes.io/created-by: operator
app.kubernetes.io/part-of: operator
app.kubernetes.io/managed-by: kustomize
name: validating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)


@@ -9,8 +9,8 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: kamaji-system/kamaji-serving-cert
controller-gen.kubebuilder.io/version: v0.9.2
creationTimestamp: null
name: datastores.kamaji.clastix.io
spec:
group: kamaji.clastix.io
@@ -59,6 +59,7 @@ spec:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
@@ -81,6 +82,7 @@ spec:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
@@ -99,11 +101,16 @@ spec:
type: object
driver:
description: The driver to use to connect to the shared datastore.
enum:
- etcd
- MySQL
- PostgreSQL
type: string
endpoints:
description: List of the endpoints to connect to the shared datastore. No need for protocol, just bare IP/FQDN and port.
items:
type: string
minItems: 1
type: array
tlsConfig:
description: Defines the TLS/SSL configuration required to connect to the data store in a secure way.
@@ -121,6 +128,7 @@ spec:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
@@ -143,6 +151,7 @@ spec:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
@@ -171,6 +180,7 @@ spec:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
@@ -193,6 +203,7 @@ spec:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
@@ -237,10 +248,20 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: kamaji-system/kamaji-serving-cert
controller-gen.kubebuilder.io/version: v0.9.2
creationTimestamp: null
name: tenantcontrolplanes.kamaji.clastix.io
spec:
conversion:
strategy: Webhook
webhook:
clientConfig:
service:
name: kamaji-webhook-service
namespace: kamaji-system
path: /convert
conversionReviewVersions:
- v1
group: kamaji.clastix.io
names:
kind: TenantControlPlane
@@ -256,18 +277,22 @@ spec:
jsonPath: .spec.kubernetes.version
name: Version
type: string
- description: Kubernetes version
- description: Status
jsonPath: .status.kubernetesResources.version.status
name: Status
type: string
- description: Tenant Control Plane Endpoint (API server)
jsonPath: .status.controlPlaneEndpoint
name: Control-Plane-Endpoint
name: Control-Plane endpoint
type: string
- description: Secret which contains admin kubeconfig
jsonPath: .status.kubeconfig.admin.secretName
name: Kubeconfig
type: string
- description: DataStore actually used
jsonPath: .status.storage.dataStoreName
name: Datastore
type: string
- description: Age
jsonPath: .metadata.creationTimestamp
name: Age
@@ -304,46 +329,73 @@ spec:
konnectivity:
description: Enables the Konnectivity addon in the Tenant Cluster, required if the worker nodes are in a different network.
properties:
agentImage:
default: registry.k8s.io/kas-network-proxy/proxy-agent
description: AgentImage defines the container image for Konnectivity's agent.
type: string
proxyPort:
description: Port of Konnectivity proxy server.
format: int32
type: integer
resources:
description: Resources define the amount of CPU and memory to allocate to the Konnectivity server.
agent:
default:
image: registry.k8s.io/kas-network-proxy/proxy-agent
version: v0.0.32
properties:
limits:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
type: object
requests:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
type: object
extraArgs:
description: ExtraArgs allows adding additional arguments to said component.
items:
type: string
type: array
image:
default: registry.k8s.io/kas-network-proxy/proxy-agent
description: AgentImage defines the container image for Konnectivity's agent.
type: string
version:
default: v0.0.32
description: Version for Konnectivity agent.
type: string
type: object
server:
default:
image: registry.k8s.io/kas-network-proxy/proxy-server
port: 8132
version: v0.0.32
properties:
extraArgs:
description: ExtraArgs allows adding additional arguments to said component.
items:
type: string
type: array
image:
default: registry.k8s.io/kas-network-proxy/proxy-server
description: Container image used by the Konnectivity server.
type: string
port:
description: The port which Konnectivity server is listening to.
format: int32
type: integer
resources:
description: Resources define the amount of CPU and memory to allocate to the Konnectivity server.
properties:
limits:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
type: object
requests:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
type: object
type: object
version:
default: v0.0.32
description: Container image version of the Konnectivity server.
type: string
required:
- port
type: object
serverImage:
default: registry.k8s.io/kas-network-proxy/proxy-server
description: ServerImage defines the container image for Konnectivity's server.
type: string
version:
default: v0.0.32
description: Version for Konnectivity server and agent.
type: string
required:
- proxyPort
type: object
kubeProxy:
description: Enables the kube-proxy addon in the Tenant Cluster. The registry and the tag are configurable, the image is hard-coded to `kube-proxy`.
@@ -880,7 +932,7 @@ spec:
description: Resources defines the amount of memory and CPU to allocate to each component of the Control Plane (kube-apiserver, controller-manager, and scheduler).
properties:
apiServer:
description: ResourceRequirements describes the compute resource requirements.
description: ComponentResourceRequirements describes the compute resource requirements.
properties:
limits:
additionalProperties:
@@ -902,7 +954,7 @@ spec:
type: object
type: object
controllerManager:
description: ResourceRequirements describes the compute resource requirements.
description: ComponentResourceRequirements describes the compute resource requirements.
properties:
limits:
additionalProperties:
@@ -924,7 +976,7 @@ spec:
type: object
type: object
scheduler:
description: ResourceRequirements describes the compute resource requirements.
description: ComponentResourceRequirements describes the compute resource requirements.
properties:
limits:
additionalProperties:
@@ -946,6 +998,37 @@ spec:
type: object
type: object
type: object
runtimeClassName:
description: 'RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run the Tenant Control Plane pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class'
type: string
strategy:
default:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 0
type: RollingUpdate
description: Strategy describes how to replace existing pods with new ones for the given Tenant Control Plane. Default value is set to Rolling Update, with a blue/green strategy.
properties:
rollingUpdate:
description: 'Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate. --- TODO: Update this to follow our convention for oneOf, whatever we decide it to be.'
properties:
maxSurge:
anyOf:
- type: integer
- type: string
description: 'The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods.'
x-kubernetes-int-or-string: true
maxUnavailable:
anyOf:
- type: integer
- type: string
description: 'The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods.'
x-kubernetes-int-or-string: true
type: object
type:
description: Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate.
type: string
type: object
tolerations:
description: 'If specified, the Tenant Control Plane pod''s tolerations. More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/'
items:
@@ -1020,10 +1103,10 @@ spec:
format: int32
type: integer
nodeAffinityPolicy:
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
nodeTaintsPolicy:
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag."
type: string
topologyKey:
description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field.
@@ -1162,6 +1245,22 @@ spec:
- systemd
- cgroupfs
type: string
preferredAddressTypes:
default:
- Hostname
- InternalIP
- ExternalIP
description: Ordered list of the preferred NodeAddressTypes to use for kubelet connections. Default to Hostname, InternalIP, ExternalIP.
items:
enum:
- Hostname
- InternalIP
- ExternalIP
- InternalDNS
- ExternalDNS
type: string
minItems: 1
type: array
type: object
version:
description: Kubernetes Version for the tenant control plane
@@ -1217,8 +1316,6 @@ spec:
coreDNS:
description: AddonStatus defines the observed state of an Addon.
properties:
checksum:
type: string
enabled:
type: boolean
lastUpdate:
@@ -1232,8 +1329,6 @@ spec:
properties:
agent:
properties:
checksum:
type: string
lastUpdate:
description: Last time when k8s object was updated
format: date-time
@@ -1256,8 +1351,6 @@ spec:
type: object
clusterrolebinding:
properties:
checksum:
type: string
lastUpdate:
description: Last time when k8s object was updated
format: date-time
@@ -1289,8 +1382,6 @@ spec:
type: object
sa:
properties:
checksum:
type: string
lastUpdate:
description: Last time when k8s object was updated
format: date-time
@@ -1411,8 +1502,6 @@ spec:
kubeProxy:
description: AddonStatus defines the observed state of an Addon.
properties:
checksum:
type: string
enabled:
type: boolean
lastUpdate:
@@ -1533,28 +1622,8 @@ spec:
format: date-time
type: string
type: object
uploadConfigKubeadm:
description: KubeadmPhaseStatus contains the status of a kubeadm phase action.
properties:
checksum:
type: string
lastUpdate:
format: date-time
type: string
type: object
uploadConfigKubelet:
description: KubeadmPhaseStatus contains the status of a kubeadm phase action.
properties:
checksum:
type: string
lastUpdate:
format: date-time
type: string
type: object
required:
- bootstrapToken
- uploadConfigKubeadm
- uploadConfigKubelet
type: object
kubeadmconfig:
description: KubeadmConfig contains the status of the configuration required by kubeadm
@@ -1694,19 +1763,20 @@ spec:
description: LoadBalancer contains the current status of the load-balancer.
properties:
ingress:
description: Ingress is a list containing ingress points for the load-balancer. Traffic intended for the service should be sent to these ingress points.
description: Ingress is a list containing ingress points for the load-balancer.
items:
description: 'LoadBalancerIngress represents the status of a load-balancer ingress point: traffic intended for the service should be sent to an ingress point.'
description: IngressLoadBalancerIngress represents the status of a load-balancer ingress point.
properties:
hostname:
description: Hostname is set for load-balancer ingress points that are DNS based (typically AWS load-balancers)
description: Hostname is set for load-balancer ingress points that are DNS based.
type: string
ip:
description: IP is set for load-balancer ingress points that are IP based (typically GCE or OpenStack load-balancers)
description: IP is set for load-balancer ingress points that are IP based.
type: string
ports:
description: Ports is a list of records of service ports If used, every port defined in the service should have an entry in it
description: Ports provides information about the ports exposed by this LoadBalancer.
items:
description: IngressPortStatus represents the error condition of a service port
properties:
error:
description: 'Error is to record the problem with the service port The format of the error shall comply with the following rules: - built-in error values shall be specified in this file and those shall use CamelCase names - cloud provider specific error values must have names that comply with the format foo.example.com/CamelCase. --- The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)'
@@ -1714,12 +1784,12 @@ spec:
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
type: string
port:
description: Port is the port number of the service port of which status is recorded here
description: Port is the port number of the ingress port.
format: int32
type: integer
protocol:
default: TCP
description: 'Protocol is the protocol of the service port of which status is recorded here The supported values are: "TCP", "UDP", "SCTP"'
description: 'Protocol is the protocol of the ingress port. The supported values are: "TCP", "UDP", "SCTP"'
type: string
required:
- port
@@ -1853,7 +1923,9 @@ spec:
description: Status returns the current status of the Kubernetes version, such as its provisioning state, or completed upgrade.
enum:
- Provisioning
- CertificateAuthorityRotating
- Upgrading
- Migrating
- Ready
- NotReady
type: string
@@ -1972,6 +2044,16 @@ rules:
- patch
- update
- watch
- apiGroups:
- batch
resources:
- jobs
verbs:
- create
- delete
- get
- list
- watch
- apiGroups:
- ""
resources:
@@ -2170,6 +2252,26 @@ spec:
selector:
control-plane: controller-manager
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: webhook
app.kubernetes.io/created-by: operator
app.kubernetes.io/instance: webhook-service
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/name: service
app.kubernetes.io/part-of: operator
name: kamaji-webhook-service
namespace: kamaji-system
spec:
ports:
- port: 443
protocol: TCP
targetPort: 9443
selector:
control-plane: controller-manager
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -2189,24 +2291,20 @@ spec:
spec:
containers:
- args:
- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8080/
- --logtostderr=true
- --v=10
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
name: kube-rbac-proxy
ports:
- containerPort: 8443
name: https
protocol: TCP
- args:
- --health-probe-bind-address=:8081
- --metrics-bind-address=127.0.0.1:8080
- manager
- --leader-elect
- --datastore=kamaji-etcd
command:
- /manager
image: clastix/kamaji:v0.1.1
- /kamaji
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
image: clastix/kamaji:v0.2.3
imagePullPolicy: Always
livenessProbe:
httpGet:
@@ -2215,6 +2313,13 @@ spec:
initialDelaySeconds: 15
periodSeconds: 20
name: manager
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
- containerPort: 8080
name: metrics
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
@@ -2230,10 +2335,55 @@ spec:
memory: 20Mi
securityContext:
allowPrivilegeEscalation: false
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
securityContext:
runAsNonRoot: true
serviceAccountName: kamaji-controller-manager
terminationGracePeriodSeconds: 10
volumes:
- name: cert
secret:
defaultMode: 420
secretName: webhook-server-cert
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
labels:
app.kubernetes.io/component: certificate
app.kubernetes.io/created-by: operator
app.kubernetes.io/instance: serving-cert
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/name: certificate
app.kubernetes.io/part-of: operator
name: kamaji-serving-cert
namespace: kamaji-system
spec:
dnsNames:
- kamaji-webhook-service.kamaji-system.svc
- kamaji-webhook-service.kamaji-system.svc.cluster.local
issuerRef:
kind: Issuer
name: kamaji-selfsigned-issuer
secretName: webhook-server-cert
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
labels:
app.kubernetes.io/component: certificate
app.kubernetes.io/created-by: operator
app.kubernetes.io/instance: selfsigned-issuer
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/name: issuer
app.kubernetes.io/part-of: operator
name: kamaji-selfsigned-issuer
namespace: kamaji-system
spec:
selfSigned: {}
---
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
@@ -2270,3 +2420,152 @@ spec:
keyPath: tls.key
name: root-client-certs
namespace: kamaji-system
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
control-plane: controller-manager
name: kamaji-controller-manager-metrics-monitor
namespace: kamaji-system
spec:
endpoints:
- path: /metrics
port: metrics
scheme: http
namespaceSelector:
matchNames:
- kamaji-system
selector:
matchLabels:
control-plane: controller-manager
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
annotations:
cert-manager.io/inject-ca-from: kamaji-system/kamaji-serving-cert
labels:
app.kubernetes.io/component: webhook
app.kubernetes.io/created-by: operator
app.kubernetes.io/instance: mutating-webhook-configuration
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/name: mutatingwebhookconfiguration
app.kubernetes.io/part-of: operator
name: kamaji-mutating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: kamaji-webhook-service
namespace: kamaji-system
path: /mutate-kamaji-clastix-io-v1alpha1-datastore
failurePolicy: Fail
name: mdatastore.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- datastores
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: kamaji-webhook-service
namespace: kamaji-system
path: /mutate-kamaji-clastix-io-v1alpha1-tenantcontrolplane
failurePolicy: Fail
name: mtenantcontrolplane.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- tenantcontrolplanes
sideEffects: None
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
annotations:
cert-manager.io/inject-ca-from: kamaji-system/kamaji-serving-cert
labels:
app.kubernetes.io/component: webhook
app.kubernetes.io/created-by: operator
app.kubernetes.io/instance: validating-webhook-configuration
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/name: validatingwebhookconfiguration
app.kubernetes.io/part-of: operator
name: kamaji-validating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: kamaji-webhook-service
namespace: kamaji-system
path: /validate--v1-secret
failurePolicy: Ignore
name: vdatastoresecrets.kb.io
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- DELETE
resources:
- secrets
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: kamaji-webhook-service
namespace: kamaji-system
path: /validate-kamaji-clastix-io-v1alpha1-datastore
failurePolicy: Fail
name: vdatastore.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
- DELETE
resources:
- datastores
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: kamaji-webhook-service
namespace: kamaji-system
path: /validate-kamaji-clastix-io-v1alpha1-tenantcontrolplane
failurePolicy: Fail
name: vtenantcontrolplane.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- tenantcontrolplanes
sideEffects: None

@@ -13,4 +13,4 @@ kind: Kustomization
images:
- name: controller
newName: clastix/kamaji
newTag: v0.1.1
newTag: v0.2.3

@@ -26,12 +26,26 @@ spec:
runAsNonRoot: true
containers:
- command:
- /manager
- /kamaji
args:
- manager
- --leader-elect
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
image: controller:latest
imagePullPolicy: Always
name: manager
ports:
- containerPort: 8080
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
livenessProbe:

@@ -1,2 +1,5 @@
resources:
- monitor.yaml
configurations:
- kustomizeconfig.yaml

@@ -0,0 +1,4 @@
varReference:
- kind: ServiceMonitor
group: monitoring.coreos.com
path: spec/namespaceSelector/matchNames

@@ -1,4 +1,3 @@
# Prometheus Monitor Service (Metrics)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
@@ -10,11 +9,11 @@ metadata:
spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
port: metrics
scheme: http
namespaceSelector:
matchNames:
- $(SERVICE_NAMESPACE)
selector:
matchLabels:
control-plane: controller-manager

@@ -17,6 +17,16 @@ rules:
- patch
- update
- watch
- apiGroups:
- batch
resources:
- jobs
verbs:
- create
- delete
- get
- list
- watch
- apiGroups:
- ""
resources:

@@ -0,0 +1,34 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: mysql-bronze
spec:
driver: MySQL
endpoints:
- bronze.mysql-system.svc:3306
basicAuth:
username:
content: cm9vdA==
password:
secretReference:
name: mysql-bronze-config
namespace: mysql-system
keyPath: MYSQL_ROOT_PASSWORD
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: mysql-bronze-config
namespace: mysql-system
keyPath: "ca.crt"
clientCertificate:
certificate:
secretReference:
name: mysql-bronze-config
namespace: mysql-system
keyPath: "server.crt"
privateKey:
secretReference:
name: mysql-bronze-config
namespace: mysql-system
keyPath: "server.key"
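In the DataStore manifests above, `basicAuth` fields take either a `secretReference` or an inline `content` value; the inline `content: cm9vdA==` appears to be base64-encoded, in the same way Kubernetes Secret data is. A minimal Go sketch decoding such a value (assuming standard base64; `decodeContent` is a hypothetical helper, not a Kamaji API):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeContent decodes an inline basicAuth "content" field, which is
// base64-encoded like Kubernetes Secret data. It panics on malformed
// input to keep the example short.
func decodeContent(s string) string {
	b, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	// The username value used in the DataStore manifests above.
	fmt.Println(decodeContent("cm9vdA==")) // root
}
```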

@@ -1,34 +1,34 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: mysql
name: mysql-gold
spec:
driver: MySQL
endpoints:
- mariadb.kamaji-system.svc:3306
- gold.mysql-system.svc:3306
basicAuth:
username:
content: cm9vdA==
password:
secretReference:
name: mysql-config
namespace: kamaji-system
name: mysql-gold-config
namespace: mysql-system
keyPath: MYSQL_ROOT_PASSWORD
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: mysql-config
namespace: kamaji-system
name: mysql-gold-config
namespace: mysql-system
keyPath: "ca.crt"
clientCertificate:
certificate:
secretReference:
name: mysql-config
namespace: kamaji-system
name: mysql-gold-config
namespace: mysql-system
keyPath: "server.crt"
privateKey:
secretReference:
name: mysql-config
namespace: kamaji-system
name: mysql-gold-config
namespace: mysql-system
keyPath: "server.key"

@@ -0,0 +1,34 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: mysql-silver
spec:
driver: MySQL
endpoints:
- silver.mysql-system.svc:3306
basicAuth:
username:
content: cm9vdA==
password:
secretReference:
name: mysql-silver-config
namespace: mysql-system
keyPath: MYSQL_ROOT_PASSWORD
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: mysql-silver-config
namespace: mysql-system
keyPath: "ca.crt"
clientCertificate:
certificate:
secretReference:
name: mysql-silver-config
namespace: mysql-system
keyPath: "server.crt"
privateKey:
secretReference:
name: mysql-silver-config
namespace: mysql-system
keyPath: "server.key"

@@ -0,0 +1,37 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: postgresql-bronze
spec:
driver: PostgreSQL
endpoints:
- postgres-bronze-rw.postgres-system.svc:5432
basicAuth:
username:
secretReference:
name: postgres-bronze-superuser
namespace: postgres-system
keyPath: username
password:
secretReference:
name: postgres-bronze-superuser
namespace: postgres-system
keyPath: password
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: postgres-bronze-ca
namespace: postgres-system
keyPath: ca.crt
clientCertificate:
certificate:
secretReference:
name: postgres-bronze-root-cert
namespace: postgres-system
keyPath: tls.crt
privateKey:
secretReference:
name: postgres-bronze-root-cert
namespace: postgres-system
keyPath: tls.key

@@ -1,37 +1,37 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: postgresql
name: postgresql-gold
spec:
driver: PostgreSQL
endpoints:
- postgresql-rw.kamaji-system.svc:5432
- postgres-gold-rw.postgres-system.svc:5432
basicAuth:
username:
secretReference:
name: postgresql-superuser
namespace: kamaji-system
name: postgres-gold-superuser
namespace: postgres-system
keyPath: username
password:
secretReference:
name: postgresql-superuser
namespace: kamaji-system
name: postgres-gold-superuser
namespace: postgres-system
keyPath: password
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: postgresql-ca
namespace: kamaji-system
name: postgres-gold-ca
namespace: postgres-system
keyPath: ca.crt
clientCertificate:
certificate:
secretReference:
name: postgres-root-cert
namespace: kamaji-system
name: postgres-gold-root-cert
namespace: postgres-system
keyPath: tls.crt
privateKey:
secretReference:
name: postgres-root-cert
namespace: kamaji-system
name: postgres-gold-root-cert
namespace: postgres-system
keyPath: tls.key

@@ -0,0 +1,37 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: postgresql-silver
spec:
driver: PostgreSQL
endpoints:
- postgres-silver-rw.postgres-system.svc:5432
basicAuth:
username:
secretReference:
name: postgres-silver-superuser
namespace: postgres-system
keyPath: username
password:
secretReference:
name: postgres-silver-superuser
namespace: postgres-system
keyPath: password
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: postgres-silver-ca
namespace: postgres-system
keyPath: ca.crt
clientCertificate:
certificate:
secretReference:
name: postgres-silver-root-cert
namespace: postgres-system
keyPath: tls.crt
privateKey:
secretReference:
name: postgres-silver-root-cert
namespace: postgres-system
keyPath: tls.key

@@ -1,22 +1,22 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
name: test
name: k8s-126
spec:
controlPlane:
deployment:
replicas: 1
replicas: 2
service:
serviceType: LoadBalancer
kubernetes:
version: "v1.23.1"
version: "v1.26.0"
kubelet:
cgroupfs: cgroupfs
admissionControllers:
- ResourceQuota
- LimitRanger
cgroupfs: systemd
networkProfile:
port: 6443
addons:
coreDNS: {}
kubeProxy: {}
konnectivity:
server:
port: 8132
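The example above bumps the tenant version to `v1.26.0`, and this changeset advertises support up to Kubernetes 1.27. A hedged Go sketch of parsing such a `vMAJOR.MINOR.PATCH` string and range-checking the minor version (the lower bound of 21 here is an illustrative assumption; the authoritative check lives in Kamaji itself):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor version from a "vMAJOR.MINOR.PATCH" string
// such as the "v1.26.0" used in the TenantControlPlane above.
func minorOf(version string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(version, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("malformed version %q", version)
	}
	return strconv.Atoi(parts[1])
}

// supported is an illustrative range check only: this changeset documents
// support up to Kubernetes 1.27; the lower bound is assumed.
func supported(version string) bool {
	minor, err := minorOf(version)
	return err == nil && minor >= 21 && minor <= 27
}

func main() {
	fmt.Println(supported("v1.26.0")) // true
}
```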

@@ -0,0 +1,6 @@
resources:
- manifests.yaml
- service.yaml
configurations:
- kustomizeconfig.yaml

@@ -0,0 +1,25 @@
# the following config is for teaching kustomize where to look when substituting vars.
# It requires kustomize v2.1.0 or newer to work properly.
nameReference:
- kind: Service
version: v1
fieldSpecs:
- kind: MutatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/name
- kind: ValidatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/name
namespace:
- kind: MutatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/namespace
create: true
- kind: ValidatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/namespace
create: true
varReference:
- path: metadata/annotations

@@ -0,0 +1,114 @@
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
creationTimestamp: null
name: mutating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /mutate-kamaji-clastix-io-v1alpha1-datastore
failurePolicy: Fail
name: mdatastore.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- datastores
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /mutate-kamaji-clastix-io-v1alpha1-tenantcontrolplane
failurePolicy: Fail
name: mtenantcontrolplane.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- tenantcontrolplanes
sideEffects: None
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
creationTimestamp: null
name: validating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /validate--v1-secret
failurePolicy: Ignore
name: vdatastoresecrets.kb.io
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- DELETE
resources:
- secrets
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /validate-kamaji-clastix-io-v1alpha1-datastore
failurePolicy: Fail
name: vdatastore.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
- DELETE
resources:
- datastores
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /validate-kamaji-clastix-io-v1alpha1-tenantcontrolplane
failurePolicy: Fail
name: vtenantcontrolplane.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- tenantcontrolplanes
sideEffects: None

@@ -0,0 +1,20 @@
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: service
app.kubernetes.io/instance: webhook-service
app.kubernetes.io/component: webhook
app.kubernetes.io/created-by: operator
app.kubernetes.io/part-of: operator
app.kubernetes.io/managed-by: kustomize
name: webhook-service
namespace: system
spec:
ports:
- port: 443
protocol: TCP
targetPort: 9443
selector:
control-plane: controller-manager

@@ -5,7 +5,6 @@ package controllers
import (
"context"
"fmt"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/fields"
@@ -15,7 +14,6 @@ import (
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/log"
@@ -24,11 +22,6 @@ import (
"sigs.k8s.io/controller-runtime/pkg/source"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/indexers"
)
const (
dataStoreFinalizer = "finalizer.kamaji.clastix.io/datastore"
)
type DataStore struct {
@@ -55,66 +48,11 @@ func (r *DataStore) Reconcile(ctx context.Context, request reconcile.Request) (r
return reconcile.Result{}, err
}
// Managing the finalizer, required to avoid dropping a DataStore while it is still used by a Tenant Control Plane.
switch {
case ds.DeletionTimestamp != nil && controllerutil.ContainsFinalizer(ds, dataStoreFinalizer):
log.Info("marked for deletion, checking conditions")
if len(ds.Status.UsedBy) == 0 {
log.Info("resource is no more used by any Tenant Control Plane")
controllerutil.RemoveFinalizer(ds, dataStoreFinalizer)
return reconcile.Result{}, r.client.Update(ctx, ds)
}
log.Info("DataStore is still used by some Tenant Control Planes, cannot be removed")
case ds.DeletionTimestamp == nil && !controllerutil.ContainsFinalizer(ds, dataStoreFinalizer):
log.Info("the resource is missing the required finalizer, adding it")
controllerutil.AddFinalizer(ds, dataStoreFinalizer)
return reconcile.Result{}, r.client.Update(ctx, ds)
}
// A DataStore can trigger reconciliation of several Tenant Control Planes and requires a minimum validation:
// we have to ensure the data provided by the DataStore is valid and references an existing Secret object.
if _, err := ds.Spec.TLSConfig.CertificateAuthority.Certificate.GetContent(ctx, r.client); err != nil {
log.Error(err, "invalid Certificate Authority data")
return reconcile.Result{}, err
}
if ds.Spec.Driver == kamajiv1alpha1.EtcdDriver {
if ds.Spec.TLSConfig.CertificateAuthority.PrivateKey == nil {
err := fmt.Errorf("a valid private key is required for the etcd driver")
log.Error(err, "missing Certificate Authority private key data")
return reconcile.Result{}, err
}
if _, err := ds.Spec.TLSConfig.CertificateAuthority.PrivateKey.GetContent(ctx, r.client); err != nil {
log.Error(err, "invalid Certificate Authority private key data")
return reconcile.Result{}, err
}
}
if _, err := ds.Spec.TLSConfig.ClientCertificate.Certificate.GetContent(ctx, r.client); err != nil {
log.Error(err, "invalid Client Certificate data")
return reconcile.Result{}, err
}
if _, err := ds.Spec.TLSConfig.ClientCertificate.PrivateKey.GetContent(ctx, r.client); err != nil {
log.Error(err, "invalid Client Certificate private key data")
return reconcile.Result{}, err
}
tcpList := kamajiv1alpha1.TenantControlPlaneList{}
if err := r.client.List(ctx, &tcpList, client.MatchingFieldsSelector{
Selector: fields.OneTermEqualSelector(indexers.TenantControlPlaneUsedDataStoreKey, ds.GetName()),
Selector: fields.OneTermEqualSelector(kamajiv1alpha1.TenantControlPlaneUsedDataStoreKey, ds.GetName()),
}); err != nil {
log.Error(err, "cannot retrieve list of the Tenant Control Plane using the following instance")
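The reconcile logic above is the standard finalizer dance: add the finalizer when it is missing, and on deletion only remove it once no Tenant Control Plane still references the DataStore. A dependency-free sketch of that state machine, using toy types rather than the controller-runtime API:

```go
package main

import "fmt"

const finalizer = "finalizer.kamaji.clastix.io/datastore"

// toyDataStore captures only the fields the finalizer logic inspects.
type toyDataStore struct {
	deleting   bool     // stands in for a non-nil DeletionTimestamp
	finalizers []string
	usedBy     []string // Tenant Control Planes still using this DataStore
}

func has(list []string, s string) bool {
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}

// reconcileFinalizer mirrors the switch in the DataStore controller and
// reports which action the controller would take.
func reconcileFinalizer(ds *toyDataStore) string {
	switch {
	case ds.deleting && has(ds.finalizers, finalizer):
		if len(ds.usedBy) == 0 {
			// No consumers left: drop the finalizer so deletion proceeds.
			var kept []string
			for _, f := range ds.finalizers {
				if f != finalizer {
					kept = append(kept, f)
				}
			}
			ds.finalizers = kept
			return "removed finalizer"
		}
		return "blocked: still in use"
	case !ds.deleting && !has(ds.finalizers, finalizer):
		ds.finalizers = append(ds.finalizers, finalizer)
		return "added finalizer"
	}
	return "no-op"
}

func main() {
	ds := &toyDataStore{}
	fmt.Println(reconcileFinalizer(ds)) // added finalizer
}
```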

@@ -0,0 +1,10 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package finalizers
const (
// DatastoreFinalizer is using a wrong name, since it's related to the underlying datastore.
DatastoreFinalizer = "finalizer.kamaji.clastix.io"
SootFinalizer = "finalizer.kamaji.clastix.io/soot"
)

@@ -10,8 +10,10 @@ import (
"github.com/google/uuid"
k8stypes "k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/controllers/finalizers"
"github.com/clastix/kamaji/internal/datastore"
"github.com/clastix/kamaji/internal/resources"
ds "github.com/clastix/kamaji/internal/resources/datastore"
@@ -19,15 +21,19 @@ import (
)
type GroupResourceBuilderConfiguration struct {
client client.Client
log logr.Logger
tcpReconcilerConfig TenantControlPlaneReconcilerConfig
tenantControlPlane kamajiv1alpha1.TenantControlPlane
Connection datastore.Connection
DataStore kamajiv1alpha1.DataStore
client client.Client
log logr.Logger
tcpReconcilerConfig TenantControlPlaneReconcilerConfig
tenantControlPlane kamajiv1alpha1.TenantControlPlane
Connection datastore.Connection
DataStore kamajiv1alpha1.DataStore
KamajiNamespace string
KamajiServiceAccount string
KamajiService string
KamajiMigrateImage string
}
type GroupDeleteableResourceBuilderConfiguration struct {
type GroupDeletableResourceBuilderConfiguration struct {
client client.Client
log logr.Logger
tcpReconcilerConfig TenantControlPlaneReconcilerConfig
@@ -44,32 +50,55 @@ func GetResources(config GroupResourceBuilderConfiguration) []resources.Resource
// GetDeletableResources returns a list of resources that have to be deleted when tenant control planes are deleted
// Currently there is only a default approach
// TODO: the idea of this function is to become a factory to return the group of deleteable resources according to the given configuration.
func GetDeletableResources(config GroupDeleteableResourceBuilderConfiguration) []resources.DeleteableResource {
return getDefaultDeleteableResources(config)
// TODO: the idea of this function is to become a factory to return the group of deletable resources according to the given configuration.
func GetDeletableResources(tcp *kamajiv1alpha1.TenantControlPlane, config GroupDeletableResourceBuilderConfiguration) []resources.DeletableResource {
var res []resources.DeletableResource
if controllerutil.ContainsFinalizer(tcp, finalizers.DatastoreFinalizer) {
res = append(res, &ds.Setup{
Client: config.client,
Connection: config.connection,
})
}
return res
}
func getDefaultResources(config GroupResourceBuilderConfiguration) []resources.Resource {
resources := append(getUpgradeResources(config.client), getKubernetesServiceResources(config.client)...)
resources := getDataStoreMigratingResources(config.client, config.KamajiNamespace, config.KamajiMigrateImage, config.KamajiServiceAccount, config.KamajiService)
resources = append(resources, getUpgradeResources(config.client)...)
resources = append(resources, getKubernetesServiceResources(config.client)...)
resources = append(resources, getKubeadmConfigResources(config.client, getTmpDirectory(config.tcpReconcilerConfig.TmpBaseDirectory, config.tenantControlPlane), config.DataStore)...)
resources = append(resources, getKubernetesCertificatesResources(config.client, config.tcpReconcilerConfig, config.tenantControlPlane)...)
resources = append(resources, getKubeconfigResources(config.client, config.tcpReconcilerConfig, config.tenantControlPlane)...)
resources = append(resources, getKubernetesStorageResources(config.client, config.Connection, config.DataStore)...)
resources = append(resources, getInternalKonnectivityResources(config.client)...)
resources = append(resources, getKonnectivityServerRequirementsResources(config.client)...)
resources = append(resources, getKubernetesDeploymentResources(config.client, config.tcpReconcilerConfig, config.DataStore)...)
resources = append(resources, getKonnectivityServerPatchResources(config.client)...)
resources = append(resources, getDataStoreMigratingCleanup(config.client, config.KamajiNamespace)...)
resources = append(resources, getKubernetesIngressResources(config.client)...)
resources = append(resources, getKubeadmPhaseResources(config.client)...)
resources = append(resources, getKubeadmAddonResources(config.client)...)
resources = append(resources, getExternalKonnectivityResources(config.client)...)
return resources
}
func getDefaultDeleteableResources(config GroupDeleteableResourceBuilderConfiguration) []resources.DeleteableResource {
return []resources.DeleteableResource{
&ds.Setup{
Client: config.client,
Connection: config.connection,
func getDataStoreMigratingCleanup(c client.Client, kamajiNamespace string) []resources.Resource {
return []resources.Resource{
&ds.Migrate{
Client: c,
KamajiNamespace: kamajiNamespace,
ShouldCleanUp: true,
},
}
}
func getDataStoreMigratingResources(c client.Client, kamajiNamespace, migrateImage string, kamajiServiceAccount, kamajiService string) []resources.Resource {
return []resources.Resource{
&ds.Migrate{
Client: c,
MigrateImage: migrateImage,
KamajiNamespace: kamajiNamespace,
KamajiServiceAccount: kamajiServiceAccount,
KamajiServiceName: kamajiService,
},
}
}
@@ -198,52 +227,15 @@ func getKubernetesIngressResources(c client.Client) []resources.Resource {
}
}
func getKubeadmPhaseResources(c client.Client) []resources.Resource {
return []resources.Resource{
&resources.KubeadmPhase{
Name: "upload-config-kubeadm",
Client: c,
Phase: resources.PhaseUploadConfigKubeadm,
},
&resources.KubeadmPhase{
Name: "upload-config-kubelet",
Client: c,
Phase: resources.PhaseUploadConfigKubelet,
},
&resources.KubeadmPhase{
Name: "bootstrap-token",
Client: c,
Phase: resources.PhaseBootstrapToken,
},
}
}
func getKubeadmAddonResources(c client.Client) []resources.Resource {
return []resources.Resource{
&resources.KubeadmAddonResource{
Name: "coredns",
Client: c,
KubeadmAddon: resources.AddonCoreDNS,
},
&resources.KubeadmAddonResource{
Name: "kubeproxy",
Client: c,
KubeadmAddon: resources.AddonKubeProxy,
},
}
}
func getExternalKonnectivityResources(c client.Client) []resources.Resource {
func GetExternalKonnectivityResources(c client.Client) []resources.Resource {
return []resources.Resource{
&konnectivity.Agent{Client: c},
&konnectivity.ServiceAccountResource{Client: c},
&konnectivity.ClusterRoleBindingResource{Client: c},
&konnectivity.KubernetesDeploymentResource{Client: c},
&konnectivity.ServiceResource{Client: c},
&konnectivity.Agent{Client: c},
}
}
func getInternalKonnectivityResources(c client.Client) []resources.Resource {
func getKonnectivityServerRequirementsResources(c client.Client) []resources.Resource {
return []resources.Resource{
&konnectivity.EgressSelectorConfigurationResource{Client: c},
&konnectivity.CertificateResource{Client: c},
@@ -251,6 +243,13 @@ func getInternalKonnectivityResources(c client.Client) []resources.Resource {
}
}
func getKonnectivityServerPatchResources(c client.Client) []resources.Resource {
return []resources.Resource{
&konnectivity.KubernetesDeploymentResource{Client: c},
&konnectivity.ServiceResource{Client: c},
}
}
func getNamespacedName(namespace string, name string) k8stypes.NamespacedName {
return k8stypes.NamespacedName{Namespace: namespace, Name: name}
}

View File

@@ -0,0 +1,89 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"context"
"github.com/go-logr/logr"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/kubeadm"
"github.com/clastix/kamaji/internal/resources"
"github.com/clastix/kamaji/internal/resources/addons"
)
type CoreDNS struct {
logger logr.Logger
AdminClient client.Client
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
TriggerChannel chan event.GenericEvent
}
func (c *CoreDNS) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
tcp, err := c.GetTenantControlPlaneFunc()
if err != nil {
c.logger.Error(err, "cannot retrieve TenantControlPlane")
return reconcile.Result{}, err
}
c.logger.Info("start processing")
resource := &addons.CoreDNS{Client: c.AdminClient}
result, handlingErr := resources.Handle(ctx, resource, tcp)
if handlingErr != nil {
c.logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
return reconcile.Result{}, handlingErr
}
if result == controllerutil.OperationResultNone {
c.logger.Info("reconciliation completed")
return reconcile.Result{}, nil
}
if err = utils.UpdateStatus(ctx, c.AdminClient, tcp, resource); err != nil {
c.logger.Error(err, "update status failed", "resource", resource.GetName())
return reconcile.Result{}, err
}
c.logger.Info("reconciliation processed")
return reconcile.Result{}, nil
}
func (c *CoreDNS) SetupWithManager(mgr manager.Manager) error {
c.logger = mgr.GetLogger().WithName("coredns")
c.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).
For(&rbacv1.ClusterRoleBinding{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
return object.GetName() == kubeadm.CoreDNSClusterRoleBindingName
}))).
Watches(&source.Channel{Source: c.TriggerChannel}, &handler.EnqueueRequestForObject{}).
Owns(&rbacv1.ClusterRole{}).
Owns(&corev1.ServiceAccount{}).
Owns(&corev1.Service{}).
Owns(&corev1.ConfigMap{}).
Owns(&appsv1.Deployment{}).
Complete(c)
}

View File

@@ -0,0 +1,112 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"context"
"github.com/go-logr/logr"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/api/rbac/v1"
"k8s.io/apimachinery/pkg/types"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
"github.com/clastix/kamaji/controllers"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/resources"
"github.com/clastix/kamaji/internal/resources/konnectivity"
)
type KonnectivityAgent struct {
logger logr.Logger
AdminClient client.Client
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
TriggerChannel chan event.GenericEvent
}
func (k *KonnectivityAgent) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
    tcp, err := k.GetTenantControlPlaneFunc()
    if err != nil {
        k.logger.Error(err, "cannot retrieve TenantControlPlane")
        return reconcile.Result{}, err
    }

    for _, resource := range controllers.GetExternalKonnectivityResources(k.AdminClient) {
        k.logger.Info("start processing", "resource", resource.GetName())

        result, handlingErr := resources.Handle(ctx, resource, tcp)
        if handlingErr != nil {
            k.logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
            return reconcile.Result{}, handlingErr
        }

        if result == controllerutil.OperationResultNone {
            k.logger.Info("resource processed", "resource", resource.GetName())
            continue
        }

        if err = utils.UpdateStatus(ctx, k.AdminClient, tcp, resource); err != nil {
            k.logger.Error(err, "update status failed", "resource", resource.GetName())
            return reconcile.Result{}, err
        }
    }

    k.logger.Info("reconciliation completed")

    return reconcile.Result{}, nil
}

func (k *KonnectivityAgent) SetupWithManager(mgr manager.Manager) error {
    k.logger = mgr.GetLogger().WithName("konnectivity_agent")
    k.TriggerChannel = make(chan event.GenericEvent)

    return controllerruntime.NewControllerManagedBy(mgr).
        For(&appsv1.DaemonSet{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
            return object.GetName() == konnectivity.AgentName && object.GetNamespace() == konnectivity.AgentNamespace
        }))).
        Watches(&source.Kind{Type: &corev1.ServiceAccount{}}, handler.EnqueueRequestsFromMapFunc(func(object client.Object) []reconcile.Request {
            if object.GetName() == konnectivity.AgentName && object.GetNamespace() == konnectivity.AgentNamespace {
                return []reconcile.Request{
                    {
                        NamespacedName: types.NamespacedName{
                            Namespace: object.GetNamespace(),
                            Name:      object.GetName(),
                        },
                    },
                }
            }
            return nil
        })).
        Watches(&source.Kind{Type: &v1.ClusterRoleBinding{}}, handler.EnqueueRequestsFromMapFunc(func(object client.Object) []reconcile.Request {
            if object.GetName() == konnectivity.CertCommonName {
                return []reconcile.Request{
                    {
                        NamespacedName: types.NamespacedName{
                            Name: konnectivity.CertCommonName,
                        },
                    },
                }
            }
            return nil
        })).
        Watches(&source.Channel{Source: k.TriggerChannel}, &handler.EnqueueRequestForObject{}).
        Complete(k)
}
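Every soot controller above, beyond watching concrete objects, is wired to a `source.Channel` so the parent manager can force a reconciliation without any cluster event. Stripped of controller-runtime, the wiring reduces to a goroutine draining a trigger channel; a minimal sketch under that assumption (types and names here are illustrative, not Kamaji's API):

```go
package main

import (
	"fmt"
	"sync"
)

// genericEvent stands in for controller-runtime's event.GenericEvent.
type genericEvent struct{ name string }

// reconcilerLoop drains the trigger channel and invokes reconcile once
// per event, mimicking Watches(&source.Channel{...}, ...).
func reconcilerLoop(trigger <-chan genericEvent, reconcile func(genericEvent)) *sync.WaitGroup {
	wg := &sync.WaitGroup{}
	wg.Add(1)
	go func() {
		defer wg.Done()
		for ev := range trigger {
			reconcile(ev)
		}
	}()
	return wg
}

func main() {
	trigger := make(chan genericEvent)
	count := 0
	done := reconcilerLoop(trigger, func(ev genericEvent) { count++ })

	// The parent manager pushes events to force reconciliation.
	trigger <- genericEvent{name: "tcp-sample"}
	trigger <- genericEvent{name: "tcp-sample"}
	close(trigger)
	done.Wait()

	fmt.Println(count)
}
```

The channel is unbuffered, matching `make(chan event.GenericEvent)` in `SetupWithManager`: a sender blocks until the controller is actually consuming events.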


@@ -0,0 +1,72 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0

package controllers

import (
    "context"

    "github.com/go-logr/logr"
    controllerruntime "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/builder"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/handler"
    "sigs.k8s.io/controller-runtime/pkg/manager"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"
    "sigs.k8s.io/controller-runtime/pkg/source"

    "github.com/clastix/kamaji/controllers/utils"
    "github.com/clastix/kamaji/internal/resources"
)

type KubeadmPhase struct {
    GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
    TriggerChannel            chan event.GenericEvent
    Phase                     resources.KubeadmPhaseResource

    logger logr.Logger
}

func (k *KubeadmPhase) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
    tcp, err := k.GetTenantControlPlaneFunc()
    if err != nil {
        return reconcile.Result{}, err
    }

    k.logger.Info("start processing")

    result, handlingErr := resources.Handle(ctx, k.Phase, tcp)
    if handlingErr != nil {
        k.logger.Error(handlingErr, "resource process failed")
        return reconcile.Result{}, handlingErr
    }

    if result == controllerutil.OperationResultNone {
        k.logger.Info("reconciliation completed")
        return reconcile.Result{}, nil
    }

    if err = utils.UpdateStatus(ctx, k.Phase.GetClient(), tcp, k.Phase); err != nil {
        k.logger.Error(err, "update status failed")
        return reconcile.Result{}, err
    }

    k.logger.Info("reconciliation processed")

    return reconcile.Result{}, nil
}

func (k *KubeadmPhase) SetupWithManager(mgr manager.Manager) error {
    k.logger = mgr.GetLogger().WithName(k.Phase.GetName())
    k.TriggerChannel = make(chan event.GenericEvent)

    return controllerruntime.NewControllerManagedBy(mgr).
        For(k.Phase.GetWatchedObject(), builder.WithPredicates(predicate.NewPredicateFuncs(k.Phase.GetPredicateFunc()))).
        Watches(&source.Channel{Source: k.TriggerChannel}, &handler.EnqueueRequestForObject{}).
        Complete(k)
}
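`KubeadmPhase`, like the other soot controllers, receives a `GetTenantControlPlaneFunc` closure rather than a client plus a key, so the parent manager decides how the `TenantControlPlane` is fetched. The closure pattern in isolation, with a plain map standing in for the API server (all names illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

type tenantControlPlane struct{ Name string }

// retrievalFn mirrors utils.TenantControlPlaneRetrievalFn: a closure
// that fetches the parent TenantControlPlane on demand.
type retrievalFn func() (*tenantControlPlane, error)

// retriever captures the lookup key once and defers the fetch,
// so each Reconcile call observes the latest object.
func retriever(store map[string]*tenantControlPlane, key string) retrievalFn {
	return func() (*tenantControlPlane, error) {
		tcp, ok := store[key]
		if !ok {
			return nil, errors.New("not found")
		}
		return tcp, nil
	}
}

func main() {
	store := map[string]*tenantControlPlane{}
	get := retriever(store, "default/tcp-sample")

	if _, err := get(); err != nil {
		fmt.Println("first call:", err)
	}

	// Once the object exists, the same closure returns it.
	store["default/tcp-sample"] = &tenantControlPlane{Name: "tcp-sample"}
	tcp, _ := get()
	fmt.Println("second call:", tcp.Name)
}
```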


@@ -0,0 +1,89 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0

package controllers

import (
    "context"

    "github.com/go-logr/logr"
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    rbacv1 "k8s.io/api/rbac/v1"
    controllerruntime "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/builder"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/handler"
    "sigs.k8s.io/controller-runtime/pkg/manager"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"
    "sigs.k8s.io/controller-runtime/pkg/source"

    "github.com/clastix/kamaji/controllers/utils"
    "github.com/clastix/kamaji/internal/kubeadm"
    "github.com/clastix/kamaji/internal/resources"
    "github.com/clastix/kamaji/internal/resources/addons"
)

type KubeProxy struct {
    AdminClient               client.Client
    GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
    TriggerChannel            chan event.GenericEvent

    logger logr.Logger
}

func (k *KubeProxy) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
    tcp, err := k.GetTenantControlPlaneFunc()
    if err != nil {
        k.logger.Error(err, "cannot retrieve TenantControlPlane")
        return reconcile.Result{}, err
    }

    k.logger.Info("start processing")

    resource := &addons.KubeProxy{Client: k.AdminClient}

    result, handlingErr := resources.Handle(ctx, resource, tcp)
    if handlingErr != nil {
        k.logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
        return reconcile.Result{}, handlingErr
    }

    if result == controllerutil.OperationResultNone {
        k.logger.Info("reconciliation completed")
        return reconcile.Result{}, nil
    }

    if err = utils.UpdateStatus(ctx, k.AdminClient, tcp, resource); err != nil {
        k.logger.Error(err, "update status failed")
        return reconcile.Result{}, err
    }

    k.logger.Info("reconciliation processed")

    return reconcile.Result{}, nil
}

func (k *KubeProxy) SetupWithManager(mgr manager.Manager) error {
    k.logger = mgr.GetLogger().WithName("kube_proxy")
    k.TriggerChannel = make(chan event.GenericEvent)

    return controllerruntime.NewControllerManagedBy(mgr).
        For(&rbacv1.ClusterRoleBinding{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
            return object.GetName() == kubeadm.KubeProxyClusterRoleBindingName
        }))).
        Watches(&source.Channel{Source: k.TriggerChannel}, &handler.EnqueueRequestForObject{}).
        Owns(&corev1.ServiceAccount{}).
        Owns(&rbacv1.Role{}).
        Owns(&rbacv1.RoleBinding{}).
        Owns(&corev1.ConfigMap{}).
        Owns(&appsv1.DaemonSet{}).
        Complete(k)
}


@@ -0,0 +1,200 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0

package controllers

import (
    "context"
    "fmt"

    "github.com/go-logr/logr"
    admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/utils/pointer"
    controllerruntime "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/builder"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/handler"
    "sigs.k8s.io/controller-runtime/pkg/manager"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"
    "sigs.k8s.io/controller-runtime/pkg/source"

    "github.com/clastix/kamaji/api/v1alpha1"
    "github.com/clastix/kamaji/controllers/utils"
    "github.com/clastix/kamaji/internal/utilities"
)

type Migrate struct {
    client client.Client
    logger logr.Logger

    GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
    WebhookNamespace          string
    WebhookServiceName        string
    WebhookCABundle           []byte
    TriggerChannel            chan event.GenericEvent
}

func (m *Migrate) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
    tcp, err := m.GetTenantControlPlaneFunc()
    if err != nil {
        return reconcile.Result{}, err
    }

    // Cannot detect the status of the TenantControlPlane, enqueuing back
    if tcp.Status.Kubernetes.Version.Status == nil {
        return reconcile.Result{Requeue: true}, nil
    }

    switch *tcp.Status.Kubernetes.Version.Status {
    case v1alpha1.VersionMigrating:
        err = m.createOrUpdate(ctx)
    case v1alpha1.VersionReady:
        err = m.cleanup(ctx)
    }

    if err != nil {
        m.logger.Error(err, "reconciliation failed")
        return reconcile.Result{}, err
    }

    return reconcile.Result{}, nil
}

func (m *Migrate) cleanup(ctx context.Context) error {
    if err := m.client.Delete(ctx, m.object()); err != nil {
        if errors.IsNotFound(err) {
            return nil
        }
        return fmt.Errorf("unable to clean up the validating webhook required for migration: %w", err)
    }
    return nil
}

func (m *Migrate) createOrUpdate(ctx context.Context) error {
    obj := m.object()

    _, err := utilities.CreateOrUpdateWithConflict(ctx, m.client, obj, func() error {
        obj.Webhooks = []admissionregistrationv1.ValidatingWebhook{
            {
                Name: "leases.migrate.kamaji.clastix.io",
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    URL:      pointer.String(fmt.Sprintf("https://%s.%s.svc:443/migrate", m.WebhookServiceName, m.WebhookNamespace)),
                    CABundle: m.WebhookCABundle,
                },
                Rules: []admissionregistrationv1.RuleWithOperations{
                    {
                        Operations: []admissionregistrationv1.OperationType{
                            admissionregistrationv1.Create,
                            admissionregistrationv1.Delete,
                        },
                        Rule: admissionregistrationv1.Rule{
                            APIGroups:   []string{"*"},
                            APIVersions: []string{"*"},
                            Resources:   []string{"*"},
                            Scope: func(v admissionregistrationv1.ScopeType) *admissionregistrationv1.ScopeType {
                                return &v
                            }(admissionregistrationv1.NamespacedScope),
                        },
                    },
                },
                FailurePolicy: func(v admissionregistrationv1.FailurePolicyType) *admissionregistrationv1.FailurePolicyType {
                    return &v
                }(admissionregistrationv1.Fail),
                MatchPolicy: func(v admissionregistrationv1.MatchPolicyType) *admissionregistrationv1.MatchPolicyType {
                    return &v
                }(admissionregistrationv1.Equivalent),
                NamespaceSelector: &metav1.LabelSelector{
                    MatchExpressions: []metav1.LabelSelectorRequirement{
                        {
                            Key:      "kubernetes.io/metadata.name",
                            Operator: metav1.LabelSelectorOpIn,
                            Values: []string{
                                "kube-node-lease",
                            },
                        },
                    },
                },
                SideEffects: func(v admissionregistrationv1.SideEffectClass) *admissionregistrationv1.SideEffectClass {
                    return &v
                }(admissionregistrationv1.SideEffectClassNoneOnDryRun),
                AdmissionReviewVersions: []string{"v1"},
            },
            {
                Name: "catchall.migrate.kamaji.clastix.io",
                ClientConfig: admissionregistrationv1.WebhookClientConfig{
                    URL:      pointer.String(fmt.Sprintf("https://%s.%s.svc:443/migrate", m.WebhookServiceName, m.WebhookNamespace)),
                    CABundle: m.WebhookCABundle,
                },
                Rules: []admissionregistrationv1.RuleWithOperations{
                    {
                        Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.OperationAll},
                        Rule: admissionregistrationv1.Rule{
                            APIGroups:   []string{"*"},
                            APIVersions: []string{"*"},
                            Resources:   []string{"*"},
                            Scope: func(v admissionregistrationv1.ScopeType) *admissionregistrationv1.ScopeType {
                                return &v
                            }(admissionregistrationv1.AllScopes),
                        },
                    },
                },
                FailurePolicy: func(v admissionregistrationv1.FailurePolicyType) *admissionregistrationv1.FailurePolicyType {
                    return &v
                }(admissionregistrationv1.Fail),
                MatchPolicy: func(v admissionregistrationv1.MatchPolicyType) *admissionregistrationv1.MatchPolicyType {
                    return &v
                }(admissionregistrationv1.Equivalent),
                NamespaceSelector: &metav1.LabelSelector{
                    MatchExpressions: []metav1.LabelSelectorRequirement{
                        {
                            Key:      "kubernetes.io/metadata.name",
                            Operator: metav1.LabelSelectorOpNotIn,
                            Values: []string{
                                "kube-system",
                                "kube-node-lease",
                            },
                        },
                    },
                },
                SideEffects: func(v admissionregistrationv1.SideEffectClass) *admissionregistrationv1.SideEffectClass {
                    return &v
                }(admissionregistrationv1.SideEffectClassNoneOnDryRun),
                TimeoutSeconds:          nil,
                AdmissionReviewVersions: []string{"v1"},
            },
        }

        return nil
    })

    return err
}

func (m *Migrate) SetupWithManager(mgr manager.Manager) error {
    m.client = mgr.GetClient()
    m.logger = mgr.GetLogger().WithName("migrate")
    m.TriggerChannel = make(chan event.GenericEvent)

    return controllerruntime.NewControllerManagedBy(mgr).
        For(&admissionregistrationv1.ValidatingWebhookConfiguration{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
            vwc := m.object()

            return object.GetName() == vwc.GetName()
        }))).
        Watches(&source.Channel{Source: m.TriggerChannel}, &handler.EnqueueRequestForObject{}).
        Complete(m)
}

func (m *Migrate) object() *admissionregistrationv1.ValidatingWebhookConfiguration {
    return &admissionregistrationv1.ValidatingWebhookConfiguration{
        ObjectMeta: metav1.ObjectMeta{
            Name: "kamaji-freeze",
        },
    }
}
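The immediately-invoked closures such as `func(v admissionregistrationv1.FailurePolicyType) *admissionregistrationv1.FailurePolicyType { return &v }(admissionregistrationv1.Fail)` exist only because fields like `FailurePolicy`, `Scope`, and `SideEffects` take pointers to constants. Since Go 1.18 the same idea can be written once as a generic helper; a sketch of that alternative (the `ptr` helper is illustrative, not part of this codebase):

```go
package main

import "fmt"

// ptr returns a pointer to any value, replacing the per-type
// immediately-invoked closures used to take the address of a constant.
func ptr[T any](v T) *T {
	return &v
}

// failurePolicy stands in for admissionregistrationv1.FailurePolicyType.
type failurePolicy string

const fail failurePolicy = "Fail"

func main() {
	// One generic call site instead of a closure per pointer field.
	p := ptr(fail)
	fmt.Println(*p)
}
```

`k8s.io/utils/pointer` (used above for `pointer.String`) offers the same convenience for common scalar types.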

controllers/soot/manager.go

@@ -0,0 +1,307 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0

package soot

import (
    "context"
    "fmt"

    "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/util/retry"
    controllerruntime "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/builder"
    "sigs.k8s.io/controller-runtime/pkg/cache"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
    "sigs.k8s.io/controller-runtime/pkg/event"
    "sigs.k8s.io/controller-runtime/pkg/handler"
    "sigs.k8s.io/controller-runtime/pkg/log"
    "sigs.k8s.io/controller-runtime/pkg/manager"
    "sigs.k8s.io/controller-runtime/pkg/predicate"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"
    "sigs.k8s.io/controller-runtime/pkg/source"

    kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
    "github.com/clastix/kamaji/controllers/finalizers"
    "github.com/clastix/kamaji/controllers/soot/controllers"
    "github.com/clastix/kamaji/controllers/utils"
    "github.com/clastix/kamaji/internal/resources"
    "github.com/clastix/kamaji/internal/utilities"
)

type sootItem struct {
    triggers []chan event.GenericEvent
    cancelFn context.CancelFunc
}

type sootMap map[string]sootItem

type Manager struct {
    client  client.Client
    sootMap sootMap
    // sootManagerErrChan is the channel that is going to be used
    // when the soot manager cannot start due to any kind of problem.
    sootManagerErrChan chan event.GenericEvent

    MigrateCABundle         []byte
    MigrateServiceName      string
    MigrateServiceNamespace string
    AdminClient             client.Client
}

// retrieveTenantControlPlane is the function used to let an underlying controller of the soot manager
// retrieve its parent TenantControlPlane definition, required to understand which actions must be performed.
func (m *Manager) retrieveTenantControlPlane(ctx context.Context, request reconcile.Request) utils.TenantControlPlaneRetrievalFn {
    return func() (*kamajiv1alpha1.TenantControlPlane, error) {
        tcp := &kamajiv1alpha1.TenantControlPlane{}
        if err := m.client.Get(ctx, request.NamespacedName, tcp); err != nil {
            return nil, err
        }

        return tcp, nil
    }
}

// If the TenantControlPlane is deleted we have to free up memory by stopping the soot manager:
// this is made possible by retrieving the cancel function of the soot manager context to cancel it.
func (m *Manager) cleanup(ctx context.Context, req reconcile.Request, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) (err error) {
    if tenantControlPlane != nil && controllerutil.ContainsFinalizer(tenantControlPlane, finalizers.SootFinalizer) {
        defer func() {
            err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
                tcp, tcpErr := m.retrieveTenantControlPlane(ctx, req)()
                if tcpErr != nil {
                    return tcpErr
                }

                controllerutil.RemoveFinalizer(tcp, finalizers.SootFinalizer)

                return m.AdminClient.Update(ctx, tcp)
            })
        }()
    }

    tcpName := req.NamespacedName.String()

    v, ok := m.sootMap[tcpName]
    if !ok {
        return nil
    }

    v.cancelFn()
    delete(m.sootMap, tcpName)

    return nil
}

func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res reconcile.Result, err error) {
    // Retrieving the TenantControlPlane:
    // in case of deletion, we must be sure to properly remove the soot manager from memory.
    tcp := &kamajiv1alpha1.TenantControlPlane{}
    if err = m.client.Get(ctx, request.NamespacedName, tcp); err != nil {
        if errors.IsNotFound(err) {
            return reconcile.Result{}, m.cleanup(ctx, request, nil)
        }

        return reconcile.Result{}, err
    }

    // Handling finalizer if the TenantControlPlane is marked for deletion:
    // the clean-up function is already taking care to stop the manager, if this exists.
    if tcp.GetDeletionTimestamp() != nil {
        if controllerutil.ContainsFinalizer(tcp, finalizers.SootFinalizer) {
            return reconcile.Result{}, m.cleanup(ctx, request, tcp)
        }

        return reconcile.Result{}, nil
    }

    tcpStatus := *tcp.Status.Kubernetes.Version.Status

    // Triggering the reconciliation of the underlying controllers of
    // the soot manager if this is already registered.
    v, ok := m.sootMap[request.String()]
    if ok {
        switch {
        case tcpStatus == kamajiv1alpha1.VersionCARotating:
            // The TenantControlPlane CA has been rotated, which means the running manager
            // must be restarted to avoid certificate-signed-by-unknown-authority errors.
            return reconcile.Result{}, m.cleanup(ctx, request, tcp)
        case tcpStatus == kamajiv1alpha1.VersionNotReady:
            // The TenantControlPlane is in non-ready mode, or marked for deletion:
            // we don't want to pollute the logs with messages due to a broken connection.
            // Once the TCP is ready again, the event will be intercepted and the manager started back.
            return reconcile.Result{}, m.cleanup(ctx, request, tcp)
        default:
            for _, trigger := range v.triggers {
                trigger <- event.GenericEvent{Object: tcp}
            }
        }

        return reconcile.Result{}, nil
    }

    // No need to start a soot manager if the TenantControlPlane is not ready:
    // enqueuing back is not required since we're going to get that event once ready.
    if tcpStatus == kamajiv1alpha1.VersionNotReady || tcpStatus == kamajiv1alpha1.VersionCARotating {
        log.FromContext(ctx).Info("skipping start of the soot manager for a not ready instance")

        return reconcile.Result{}, nil
    }

    // Setting the finalizer for the soot manager:
    // upon deletion, the soot manager will be shut down prior to the Deployment, avoiding log pollution.
    if !controllerutil.ContainsFinalizer(tcp, finalizers.SootFinalizer) {
        _, finalizerErr := utilities.CreateOrUpdateWithConflict(ctx, m.AdminClient, tcp, func() error {
            controllerutil.AddFinalizer(tcp, finalizers.SootFinalizer)

            return nil
        })

        return reconcile.Result{Requeue: true}, finalizerErr
    }

    // Generating the manager and starting it:
    // in case of any error, reconciling the request to start it back from the beginning.
    tcpRest, err := utilities.GetRESTClientConfig(ctx, m.client, tcp)
    if err != nil {
        return reconcile.Result{}, err
    }

    tcpCtx, tcpCancelFn := context.WithCancel(ctx)
    defer func() {
        // If the reconciliation fails, we don't want to leave a dangling goroutine.
        if err != nil {
            tcpCancelFn()
        }
    }()

    mgr, err := controllerruntime.NewManager(tcpRest, controllerruntime.Options{
        Logger:             log.Log.WithName(fmt.Sprintf("soot_%s_%s", tcp.GetNamespace(), tcp.GetName())),
        Scheme:             m.client.Scheme(),
        MetricsBindAddress: "0",
        NewClient: func(cache cache.Cache, config *rest.Config, options client.Options, uncachedObjects ...client.Object) (client.Client, error) {
            return client.New(config, client.Options{
                Scheme: m.client.Scheme(),
            })
        },
    })
    if err != nil {
        return reconcile.Result{}, err
    }

    //
    // Register all the controllers of the soot here:
    //
    migrate := &controllers.Migrate{
        WebhookNamespace:          m.MigrateServiceNamespace,
        WebhookServiceName:        m.MigrateServiceName,
        WebhookCABundle:           m.MigrateCABundle,
        GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
    }
    if err = migrate.SetupWithManager(mgr); err != nil {
        return reconcile.Result{}, err
    }

    konnectivityAgent := &controllers.KonnectivityAgent{
        AdminClient:               m.AdminClient,
        GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
    }
    if err = konnectivityAgent.SetupWithManager(mgr); err != nil {
        return reconcile.Result{}, err
    }

    kubeProxy := &controllers.KubeProxy{
        AdminClient:               m.AdminClient,
        GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
    }
    if err = kubeProxy.SetupWithManager(mgr); err != nil {
        return reconcile.Result{}, err
    }

    coreDNS := &controllers.CoreDNS{
        AdminClient:               m.AdminClient,
        GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
    }
    if err = coreDNS.SetupWithManager(mgr); err != nil {
        return reconcile.Result{}, err
    }

    uploadKubeadmConfig := &controllers.KubeadmPhase{
        GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
        Phase: &resources.KubeadmPhase{
            Client: m.AdminClient,
            Phase:  resources.PhaseUploadConfigKubeadm,
        },
    }
    if err = uploadKubeadmConfig.SetupWithManager(mgr); err != nil {
        return reconcile.Result{}, err
    }

    uploadKubeletConfig := &controllers.KubeadmPhase{
        GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
        Phase: &resources.KubeadmPhase{
            Client: m.AdminClient,
            Phase:  resources.PhaseUploadConfigKubelet,
        },
    }
    if err = uploadKubeletConfig.SetupWithManager(mgr); err != nil {
        return reconcile.Result{}, err
    }

    bootstrapToken := &controllers.KubeadmPhase{
        GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
        Phase: &resources.KubeadmPhase{
            Client: m.AdminClient,
            Phase:  resources.PhaseBootstrapToken,
        },
    }
    if err = bootstrapToken.SetupWithManager(mgr); err != nil {
        return reconcile.Result{}, err
    }

    // Starting the manager
    go func() {
        if err = mgr.Start(tcpCtx); err != nil {
            log.FromContext(ctx).Error(err, "unable to start soot manager")
            // When the manager cannot start, we're enqueuing back the request to take advantage of the backoff factor
            // of the queue: this is a goroutine and cannot return an error since the manager is running on its own;
            // using the sootManagerErrChan channel we can trigger a reconciliation although the TCP hadn't any change.
            m.sootManagerErrChan <- event.GenericEvent{Object: tcp}
        }
    }()

    m.sootMap[request.NamespacedName.String()] = sootItem{
        triggers: []chan event.GenericEvent{
            migrate.TriggerChannel,
            konnectivityAgent.TriggerChannel,
            kubeProxy.TriggerChannel,
            coreDNS.TriggerChannel,
            uploadKubeadmConfig.TriggerChannel,
            uploadKubeletConfig.TriggerChannel,
            bootstrapToken.TriggerChannel,
        },
        cancelFn: tcpCancelFn,
    }

    return reconcile.Result{Requeue: true}, nil
}

func (m *Manager) SetupWithManager(mgr manager.Manager) error {
    m.client = mgr.GetClient()
    m.sootManagerErrChan = make(chan event.GenericEvent)
    m.sootMap = make(map[string]sootItem)

    return controllerruntime.NewControllerManagedBy(mgr).
        Watches(&source.Channel{Source: m.sootManagerErrChan}, &handler.EnqueueRequestForObject{}).
        For(&kamajiv1alpha1.TenantControlPlane{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
            obj := object.(*kamajiv1alpha1.TenantControlPlane) //nolint:forcetypeassert
            // status is required to understand if we have to start or stop the soot manager
            if obj.Status.Kubernetes.Version.Status == nil {
                return false
            }

            if *obj.Status.Kubernetes.Version.Status == kamajiv1alpha1.VersionProvisioning {
                return false
            }

            return true
        }))).
        Complete(m)
}


@@ -1,104 +0,0 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0

package controllers

import (
    "context"
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "net"
    "strconv"

    "github.com/pkg/errors"

    kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
    "github.com/clastix/kamaji/internal/datastore"
)

func (r *TenantControlPlaneReconciler) getStorageConnection(ctx context.Context, ds kamajiv1alpha1.DataStore) (datastore.Connection, error) {
    ca, err := ds.Spec.TLSConfig.CertificateAuthority.Certificate.GetContent(ctx, r.Client)
    if err != nil {
        return nil, err
    }

    crt, err := ds.Spec.TLSConfig.ClientCertificate.Certificate.GetContent(ctx, r.Client)
    if err != nil {
        return nil, err
    }

    key, err := ds.Spec.TLSConfig.ClientCertificate.PrivateKey.GetContent(ctx, r.Client)
    if err != nil {
        return nil, err
    }

    rootCAs := x509.NewCertPool()
    if ok := rootCAs.AppendCertsFromPEM(ca); !ok {
        return nil, fmt.Errorf("cannot create the root CA for the DB connector")
    }

    certificate, err := tls.X509KeyPair(crt, key)
    if err != nil {
        return nil, errors.Wrap(err, "cannot retrieve x.509 key pair from the Kine Secret")
    }

    var user, password string

    if auth := ds.Spec.BasicAuth; auth != nil {
        u, err := auth.Username.GetContent(ctx, r.Client)
        if err != nil {
            return nil, err
        }
        user = string(u)

        p, err := auth.Password.GetContent(ctx, r.Client)
        if err != nil {
            return nil, err
        }
        password = string(p)
    }

    eps := make([]datastore.ConnectionEndpoint, 0, len(ds.Spec.Endpoints))

    for _, ep := range ds.Spec.Endpoints {
        host, stringPort, err := net.SplitHostPort(ep)
        if err != nil {
            return nil, errors.Wrap(err, "cannot retrieve host-port pair from DataStore endpoints")
        }

        port, err := strconv.Atoi(stringPort)
        if err != nil {
            return nil, errors.Wrap(err, "cannot convert port from string for the given DataStore")
        }

        eps = append(eps, datastore.ConnectionEndpoint{
            Host: host,
            Port: port,
        })
    }

    cc := datastore.ConnectionConfig{
        User:      user,
        Password:  password,
        Endpoints: eps,
        TLSConfig: &tls.Config{
            RootCAs:      rootCAs,
            Certificates: []tls.Certificate{certificate},
        },
    }

    switch ds.Spec.Driver {
    case kamajiv1alpha1.KineMySQLDriver:
        cc.TLSConfig.ServerName = cc.Endpoints[0].Host

        return datastore.NewMySQLConnection(cc)
    case kamajiv1alpha1.KinePostgreSQLDriver:
        cc.TLSConfig.ServerName = cc.Endpoints[0].Host
        //nolint:contextcheck
        return datastore.NewPostgreSQLConnection(cc)
    case kamajiv1alpha1.EtcdDriver:
        return datastore.NewETCDConnection(cc)
    default:
        return nil, fmt.Errorf("%s is not a valid driver", ds.Spec.Driver)
    }
}
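The endpoint loop above splits each DataStore endpoint string into a host and a numeric port before building the connection config. That parsing step stands alone; a minimal sketch using the same `net.SplitHostPort` and `strconv.Atoi` calls (the `endpoint` type is illustrative, standing in for `datastore.ConnectionEndpoint`):

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

type endpoint struct {
	Host string
	Port int
}

// parseEndpoint splits "host:port" and converts the port to an int,
// returning an error for malformed input, as the DataStore loop does.
func parseEndpoint(ep string) (endpoint, error) {
	host, portStr, err := net.SplitHostPort(ep)
	if err != nil {
		return endpoint{}, fmt.Errorf("cannot retrieve host-port pair: %w", err)
	}

	port, err := strconv.Atoi(portStr)
	if err != nil {
		return endpoint{}, fmt.Errorf("cannot convert port from string: %w", err)
	}

	return endpoint{Host: host, Port: port}, nil
}

func main() {
	ep, err := parseEndpoint("datastore.example.com:2379")
	fmt.Println(ep, err)
}
```

`net.SplitHostPort` also handles bracketed IPv6 literals such as `[::1]:2379`, which a naive `strings.Split` on `:` would not.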


@@ -6,38 +6,52 @@ package controllers
import (
"context"
"fmt"
"strings"
"time"
"github.com/juju/mutex/v2"
"github.com/pkg/errors"
appsv1 "k8s.io/api/apps/v1"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
apimachineryerrors "k8s.io/apimachinery/pkg/api/errors"
k8stypes "k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/util/workqueue"
"k8s.io/utils/clock"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/controllers/finalizers"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/datastore"
kamajierrors "github.com/clastix/kamaji/internal/errors"
"github.com/clastix/kamaji/internal/resources"
)
const (
tenantControlPlaneFinalizer = "finalizer.kamaji.clastix.io"
)
// TenantControlPlaneReconciler reconciles a TenantControlPlane object.
type TenantControlPlaneReconciler struct {
client.Client
Scheme *runtime.Scheme
Config TenantControlPlaneReconcilerConfig
TriggerChan TenantControlPlaneChannel
Client client.Client
APIReader client.Reader
Config TenantControlPlaneReconcilerConfig
TriggerChan TenantControlPlaneChannel
KamajiNamespace string
KamajiServiceAccount string
KamajiService string
KamajiMigrateImage string
MaxConcurrentReconciles int
clock mutex.Clock
}
// TenantControlPlaneReconcilerConfig gives the necessary configuration for TenantControlPlaneReconciler.
@@ -55,25 +69,46 @@ type TenantControlPlaneReconcilerConfig struct {
//+kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=networking.k8s.io,resources=ingresses,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch,resources=jobs,verbs=get;list;watch;create;delete
func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := log.FromContext(ctx)
tenantControlPlane := &kamajiv1alpha1.TenantControlPlane{}
isTenantControlPlane, err := r.getTenantControlPlane(ctx, req.NamespacedName, tenantControlPlane)
tenantControlPlane, err := r.getTenantControlPlane(ctx, req.NamespacedName)()
if err != nil {
if apimachineryerrors.IsNotFound(err) {
log.Info("resource may have been deleted, skipping")
return ctrl.Result{}, nil
}
log.Error(err, "cannot retrieve the required instance")
return ctrl.Result{}, err
}
if !isTenantControlPlane {
return ctrl.Result{}, nil
releaser, err := mutex.Acquire(r.mutexSpec(tenantControlPlane))
if err != nil {
switch {
case errors.As(err, &mutex.ErrTimeout):
log.Info("acquire timed out, current process is blocked by another reconciliation")
return ctrl.Result{Requeue: true}, nil
case errors.As(err, &mutex.ErrCancelled):
log.Info("acquire cancelled")
return ctrl.Result{Requeue: true}, nil
default:
log.Error(err, "acquire failed")
return ctrl.Result{}, err
}
}
defer releaser.Release()
markedToBeDeleted := tenantControlPlane.GetDeletionTimestamp() != nil
hasFinalizer := controllerutil.ContainsFinalizer(tenantControlPlane, tenantControlPlaneFinalizer)
if markedToBeDeleted && !hasFinalizer {
if markedToBeDeleted && !controllerutil.ContainsFinalizer(tenantControlPlane, finalizers.DatastoreFinalizer) {
return ctrl.Result{}, nil
}
// Retrieving the DataStore to use for the current reconciliation
@@ -84,7 +119,7 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
return ctrl.Result{}, err
}
dsConnection, err := r.getStorageConnection(ctx, *ds)
dsConnection, err := datastore.NewStorageConnection(ctx, r.Client, *ds)
if err != nil {
log.Error(err, "cannot generate the DataStore connection for the given instance")
@@ -92,19 +127,18 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
}
defer dsConnection.Close()
if markedToBeDeleted {
if markedToBeDeleted && controllerutil.ContainsFinalizer(tenantControlPlane, finalizers.DatastoreFinalizer) {
log.Info("marked for deletion, performing clean-up")
groupDeleteableResourceBuilderConfiguration := GroupDeleteableResourceBuilderConfiguration{
groupDeletableResourceBuilderConfiguration := GroupDeletableResourceBuilderConfiguration{
client: r.Client,
log: log,
tcpReconcilerConfig: r.Config,
tenantControlPlane: *tenantControlPlane,
connection: dsConnection,
}
registeredDeletableResources := GetDeletableResources(groupDeleteableResourceBuilderConfiguration)
for _, resource := range registeredDeletableResources {
for _, resource := range GetDeletableResources(tenantControlPlane, groupDeletableResourceBuilderConfiguration) {
if err = resources.HandleDeletion(ctx, resource, tenantControlPlane); err != nil {
log.Error(err, "resource deletion failed", "resource", resource.GetName())
@@ -112,32 +146,22 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
}
}
if hasFinalizer {
log.Info("removing finalizer")
if err = r.RemoveFinalizer(ctx, tenantControlPlane); err != nil {
log.Error(err, "cannot remove the finalizer for the given resource")
return ctrl.Result{}, err
}
}
log.Info("resource deletion has been completed")
log.Info("resource deletions have been completed")
return ctrl.Result{}, nil
}
if !hasFinalizer {
return ctrl.Result{}, r.AddFinalizer(ctx, tenantControlPlane)
}
groupResourceBuilderConfiguration := GroupResourceBuilderConfiguration{
client: r.Client,
log: log,
tcpReconcilerConfig: r.Config,
tenantControlPlane: *tenantControlPlane,
DataStore: *ds,
Connection: dsConnection,
client: r.Client,
log: log,
tcpReconcilerConfig: r.Config,
tenantControlPlane: *tenantControlPlane,
Connection: dsConnection,
DataStore: *ds,
KamajiNamespace: r.KamajiNamespace,
KamajiServiceAccount: r.KamajiServiceAccount,
KamajiService: r.KamajiService,
KamajiMigrateImage: r.KamajiMigrateImage,
}
registeredResources := GetResources(groupResourceBuilderConfiguration)
@@ -159,7 +183,7 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
continue
}
if err := r.updateStatus(ctx, req.NamespacedName, resource); err != nil {
if err = utils.UpdateStatus(ctx, r.Client, tenantControlPlane, resource); err != nil {
log.Error(err, "update of the resource failed", "resource", resource.GetName())
return ctrl.Result{}, err
@@ -167,7 +191,11 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
log.Info(fmt.Sprintf("%s has been configured", resource.GetName()))
return ctrl.Result{}, nil
if result == resources.OperationResultEnqueueBack {
log.Info("requested enqueuing back", "resources", resource.GetName())
return ctrl.Result{Requeue: true}, nil
}
}
log.Info(fmt.Sprintf("%s has been reconciled", tenantControlPlane.GetName()))
@@ -175,8 +203,20 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
return ctrl.Result{}, nil
}
func (r *TenantControlPlaneReconciler) mutexSpec(obj client.Object) mutex.Spec {
return mutex.Spec{
Name: strings.ReplaceAll(fmt.Sprintf("kamaji%s", obj.GetUID()), "-", ""),
Clock: r.clock,
Delay: 10 * time.Millisecond,
Timeout: time.Second,
Cancel: nil,
}
}
// SetupWithManager sets up the controller with the Manager.
func (r *TenantControlPlaneReconciler) SetupWithManager(mgr ctrl.Manager) error {
r.clock = clock.RealClock{}
return ctrl.NewControllerManagedBy(mgr).
Watches(&source.Channel{Source: r.TriggerChan}, handler.Funcs{GenericFunc: func(genericEvent event.GenericEvent, limitingInterface workqueue.RateLimitingInterface) {
limitingInterface.AddRateLimited(ctrl.Request{
@@ -192,53 +232,55 @@ func (r *TenantControlPlaneReconciler) SetupWithManager(mgr ctrl.Manager) error
Owns(&appsv1.Deployment{}).
Owns(&corev1.Service{}).
Owns(&networkingv1.Ingress{}).
Watches(&source.Kind{Type: &batchv1.Job{}}, handler.EnqueueRequestsFromMapFunc(func(object client.Object) []reconcile.Request {
labels := object.GetLabels()
name, namespace := labels["tcp.kamaji.clastix.io/name"], labels["tcp.kamaji.clastix.io/namespace"]
return []reconcile.Request{
{
NamespacedName: k8stypes.NamespacedName{
Namespace: namespace,
Name: name,
},
},
}
}), builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
if object.GetNamespace() != r.KamajiNamespace {
return false
}
labels := object.GetLabels()
if labels == nil {
return false
}
v, ok := labels["kamaji.clastix.io/component"]
return ok && v == "migrate"
}))).
WithOptions(controller.Options{
MaxConcurrentReconciles: r.MaxConcurrentReconciles,
}).
Complete(r)
}
func (r *TenantControlPlaneReconciler) getTenantControlPlane(ctx context.Context, namespacedName k8stypes.NamespacedName, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) (bool, error) {
if err := r.Client.Get(ctx, namespacedName, tenantControlPlane); err != nil {
if !k8serrors.IsNotFound(err) {
return false, err
func (r *TenantControlPlaneReconciler) getTenantControlPlane(ctx context.Context, namespacedName k8stypes.NamespacedName) utils.TenantControlPlaneRetrievalFn {
return func() (*kamajiv1alpha1.TenantControlPlane, error) {
tcp := &kamajiv1alpha1.TenantControlPlane{}
if err := r.APIReader.Get(ctx, namespacedName, tcp); err != nil {
return nil, err
}
return false, nil
return tcp, nil
}
return true, nil
}
func (r *TenantControlPlaneReconciler) updateStatus(ctx context.Context, namespacedName k8stypes.NamespacedName, resource resources.Resource) error {
tenantControlPlane := &kamajiv1alpha1.TenantControlPlane{}
isTenantControlPlane, err := r.getTenantControlPlane(ctx, namespacedName, tenantControlPlane)
if err != nil {
return err
}
if !isTenantControlPlane {
return fmt.Errorf("error updating tenantControlPlane %s: not found", namespacedName.Name)
}
if err := resource.UpdateTenantControlPlaneStatus(ctx, tenantControlPlane); err != nil {
return err
}
if err := r.Status().Update(ctx, tenantControlPlane); err != nil {
return fmt.Errorf("error updating tenantControlPlane status: %w", err)
}
return nil
}
func (r *TenantControlPlaneReconciler) AddFinalizer(ctx context.Context, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) error {
controllerutil.AddFinalizer(tenantControlPlane, tenantControlPlaneFinalizer)
return r.Update(ctx, tenantControlPlane)
}
func (r *TenantControlPlaneReconciler) RemoveFinalizer(ctx context.Context, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) error {
controllerutil.RemoveFinalizer(tenantControlPlane, tenantControlPlaneFinalizer)
controllerutil.RemoveFinalizer(tenantControlPlane, finalizers.DatastoreFinalizer)
return r.Update(ctx, tenantControlPlane)
return r.Client.Update(ctx, tenantControlPlane)
}
// dataStore retrieves the override DataStore for the given Tenant Control Plane if specified,
@@ -249,10 +291,6 @@ func (r *TenantControlPlaneReconciler) dataStore(ctx context.Context, tenantCont
dataStoreName = r.Config.DefaultDataStoreName
}
if statusDataStore := tenantControlPlane.Status.Storage.DataStoreName; len(statusDataStore) > 0 && dataStoreName != statusDataStore {
return nil, fmt.Errorf("migration from a DataStore (current: %s) to another one (desired: %s) is not yet supported", statusDataStore, dataStoreName)
}
ds := &kamajiv1alpha1.DataStore{}
if err := r.Client.Get(ctx, k8stypes.NamespacedName{Name: dataStoreName}, ds); err != nil {
return nil, errors.Wrap(err, "cannot retrieve *kamajiv1alpha.DataStore object")


@@ -0,0 +1,10 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package utils
import (
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)
type TenantControlPlaneRetrievalFn func() (*kamajiv1alpha1.TenantControlPlane, error)


@@ -0,0 +1,38 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package utils
import (
"context"
"fmt"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/util/retry"
"sigs.k8s.io/controller-runtime/pkg/client"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/internal/resources"
)
func UpdateStatus(ctx context.Context, client client.Client, tcp *kamajiv1alpha1.TenantControlPlane, resource resources.Resource) error {
updateErr := retry.RetryOnConflict(retry.DefaultRetry, func() (err error) {
defer func() {
if err != nil {
_ = client.Get(ctx, types.NamespacedName{Name: tcp.Name, Namespace: tcp.Namespace}, tcp)
}
}()
if err = resource.UpdateTenantControlPlaneStatus(ctx, tcp); err != nil {
return fmt.Errorf("error applying TenantControlPlane status: %w", err)
}
if err = client.Status().Update(ctx, tcp); err != nil {
return fmt.Errorf("error updating tenantControlPlane status: %w", err)
}
return nil
})
return updateErr
}


@@ -15,7 +15,7 @@ export KAMAJI_NAMESPACE=kamaji-system
export TENANT_NAMESPACE=default
export TENANT_NAME=tenant-00
export TENANT_DOMAIN=$KAMAJI_REGION.cloudapp.azure.com
export TENANT_VERSION=v1.25.0
export TENANT_VERSION=v1.26.0
export TENANT_PORT=6443 # port used to expose the tenant api server
export TENANT_PROXY_PORT=8132 # port used to expose the konnectivity server
export TENANT_POD_CIDR=10.36.0.0/16


@@ -5,7 +5,7 @@ export KAMAJI_NAMESPACE=kamaji-system
export TENANT_NAMESPACE=default
export TENANT_NAME=tenant-00
export TENANT_DOMAIN=clastix.labs
export TENANT_VERSION=v1.25.0
export TENANT_VERSION=v1.26.0
export TENANT_PORT=6443 # port used to expose the tenant api server
export TENANT_PROXY_PORT=8132 # port used to expose the konnectivity server
export TENANT_POD_CIDR=10.36.0.0/16


@@ -11,10 +11,13 @@ prometheus-stack:
helm repo update
helm install prometheus-stack --create-namespace -n monitoring prometheus-community/kube-prometheus-stack
reqs: kind ingress-nginx etcd-cluster
reqs: kind ingress-nginx cert-manager
cert-manager:
@kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml
kamaji: reqs
@kubectl apply -f $(kind_path)/../../config/install.yaml
helm install kamaji --create-namespace -n kamaji-system $(kind_path)/../../charts/kamaji
destroy: kind/destroy etcd-certificates/cleanup


@@ -7,7 +7,7 @@ export DOCKER_IMAGE_NAME="kindest/node"
export DOCKER_NETWORK="kind"
# Variables
export KUBERNETES_VERSION=${1:-v1.23.5}
export KUBERNETES_VERSION=${1:-v1.23.4}
export KUBECONFIG="${KUBECONFIG:-/tmp/kubeconfig}"
if [ -z $2 ]


@@ -1,32 +1,34 @@
ROOT_DIR:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
NAME:=default
NAMESPACE:=mysql-system
mariadb: mariadb-certificates mariadb-secret mariadb-deployment
mariadb-certificates:
rm -rf $(ROOT_DIR)/certs && mkdir $(ROOT_DIR)/certs
cfssl gencert -initca $(ROOT_DIR)/ca-csr.json | cfssljson -bare $(ROOT_DIR)/certs/ca
@mv $(ROOT_DIR)/certs/ca.pem $(ROOT_DIR)/certs/ca.crt
@mv $(ROOT_DIR)/certs/ca-key.pem $(ROOT_DIR)/certs/ca.key
cfssl gencert -ca=$(ROOT_DIR)/certs/ca.crt -ca-key=$(ROOT_DIR)/certs/ca.key \
rm -rf $(ROOT_DIR)/certs/$(NAME) && mkdir -p $(ROOT_DIR)/certs/$(NAME)
cfssl gencert -initca $(ROOT_DIR)/ca-csr.json | cfssljson -bare $(ROOT_DIR)/certs/$(NAME)/ca
@mv $(ROOT_DIR)/certs/$(NAME)/ca.pem $(ROOT_DIR)/certs/$(NAME)/ca.crt
@mv $(ROOT_DIR)/certs/$(NAME)/ca-key.pem $(ROOT_DIR)/certs/$(NAME)/ca.key
@NAME=$(NAME) NAMESPACE=$(NAMESPACE) envsubst < server-csr.json > $(ROOT_DIR)/certs/$(NAME)/server-csr.json
cfssl gencert -ca=$(ROOT_DIR)/certs/$(NAME)/ca.crt -ca-key=$(ROOT_DIR)/certs/$(NAME)/ca.key \
-config=$(ROOT_DIR)/config.json -profile=server \
$(ROOT_DIR)/server-csr.json | cfssljson -bare $(ROOT_DIR)/certs/server
@mv $(ROOT_DIR)/certs/server.pem $(ROOT_DIR)/certs/server.crt
@mv $(ROOT_DIR)/certs/server-key.pem $(ROOT_DIR)/certs/server.key
chmod 644 $(ROOT_DIR)/certs/*
$(ROOT_DIR)/certs/$(NAME)/server-csr.json | cfssljson -bare $(ROOT_DIR)/certs/$(NAME)/server
@mv $(ROOT_DIR)/certs/$(NAME)/server.pem $(ROOT_DIR)/certs/$(NAME)/server.crt
@mv $(ROOT_DIR)/certs/$(NAME)/server-key.pem $(ROOT_DIR)/certs/$(NAME)/server.key
chmod 644 $(ROOT_DIR)/certs/$(NAME)/*
mariadb-secret:
@kubectl create namespace kamaji-system --dry-run=client -o yaml | kubectl apply -f -
@kubectl -n kamaji-system create secret generic mysql-config \
--from-file=$(ROOT_DIR)/certs/ca.crt --from-file=$(ROOT_DIR)/certs/ca.key \
--from-file=$(ROOT_DIR)/certs/server.key --from-file=$(ROOT_DIR)/certs/server.crt \
@kubectl create namespace $(NAMESPACE) --dry-run=client -o yaml | kubectl apply -f -
@kubectl -n $(NAMESPACE) create secret generic mysql-$(NAME)-config \
--from-file=$(ROOT_DIR)/certs/$(NAME)/ca.crt --from-file=$(ROOT_DIR)/certs/$(NAME)/ca.key \
--from-file=$(ROOT_DIR)/certs/$(NAME)/server.key --from-file=$(ROOT_DIR)/certs/$(NAME)/server.crt \
--from-file=$(ROOT_DIR)/mysql-ssl.cnf \
--from-literal=MYSQL_ROOT_PASSWORD=root \
--dry-run=client -o yaml | kubectl apply -f -
mariadb-deployment:
@kubectl -n kamaji-system apply -f $(ROOT_DIR)/mariadb.yaml
@NAME=$(NAME) envsubst < $(ROOT_DIR)/mariadb.yaml | kubectl -n $(NAMESPACE) apply -f -
mariadb-destroy:
@kubectl delete -n kamaji-system -f $(ROOT_DIR)/mariadb.yaml --ignore-not-found
@kubectl delete -n kamaji-system secret mysql-config --ignore-not-found
@kubectl delete -n kamaji-system secret kine-secret --ignore-not-found
@NAME=$(NAME) envsubst < $(ROOT_DIR)/mariadb.yaml | kubectl -n $(NAMESPACE) delete --ignore-not-found -f -
@kubectl delete -n $(NAMESPACE) secret mysql-$(NAME)-config --ignore-not-found


@@ -0,0 +1,14 @@
{
"CN": "bronze.mysql-system.svc.cluster.local",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"localhost",
"bronze",
"bronze.mysql-system.svc",
"bronze.mysql-system.svc.cluster.local"
]
}


@@ -1,14 +1,18 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: mariadb
namespace:
name: $NAME
labels:
db: mysql
instance: $NAME
---
apiVersion: v1
kind: Service
metadata:
name: mariadb
namespace:
name: $NAME
labels:
db: mysql
instance: $NAME
spec:
type: ClusterIP
ports:
@@ -17,34 +21,37 @@ spec:
protocol: TCP
targetPort: 3306
selector:
app: mariadb
db: mysql
instance: $NAME
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mariadb
name: mysql-$NAME
labels:
app: mariadb
db: mysql
instance: $NAME
namespace:
spec:
selector:
matchLabels:
app: mariadb
db: mysql
instance: $NAME
replicas: 1
template:
metadata:
name: mariadb
labels:
app: mariadb
db: mysql
instance: $NAME
spec:
serviceAccountName: mariadb
serviceAccountName: $NAME
volumes:
- name: certs
secret:
secretName: mysql-config
secretName: mysql-$NAME-config
- name: data
persistentVolumeClaim:
claimName: pvc-mariadb
claimName: mysql-$NAME-pvc
containers:
- name: mariadb
image: mariadb:10.7.4
@@ -60,14 +67,16 @@ spec:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-config
name: mysql-$NAME-config
key: MYSQL_ROOT_PASSWORD
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-mariadb
namespace:
name: mysql-$NAME-pvc
labels:
db: mysql
instance: $NAME
spec:
accessModes:
- ReadWriteOnce


@@ -1,5 +1,5 @@
{
"CN": "mariadb.kamaji-system.svc.cluster.local",
"CN": "$NAME.$NAMESPACE.svc.cluster.local",
"key": {
"algo": "rsa",
"size": 2048
@@ -7,13 +7,8 @@
"hosts": [
"127.0.0.1",
"localhost",
"mariadb",
"mariadb.kamaji-system",
"mariadb.kamaji-system.svc",
"mariadb.kamaji-system.svc.cluster.local",
"mysql",
"mysql.kamaji-system",
"mysql.kamaji-system.svc",
"mysql.kamaji-system.svc.cluster.local"
"$NAME",
"$NAME.$NAMESPACE.svc",
"$NAME.$NAMESPACE.svc.cluster.local"
]
}


@@ -1,15 +1,17 @@
ROOT_DIR:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
NAME:=default
NAMESPACE:=kamaji-system
postgresql: cnpg-setup cnpg-deploy postgresql-secret
cnpg-setup:
@kubectl apply -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/releases/cnpg-1.16.0.yaml
@kubectl apply -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/releases/cnpg-1.18.0.yaml
cnpg-deploy:
@kubectl -n cnpg-system rollout status deployment/cnpg-controller-manager
@kubectl create namespace kamaji-system --dry-run=client -o yaml | kubectl apply -f -
@kubectl -n kamaji-system apply -f postgresql.yaml
@while ! kubectl -n kamaji-system get secret postgresql-superuser > /dev/null 2>&1; do sleep 1; done
@kubectl create namespace $(NAMESPACE) --dry-run=client -o yaml | kubectl apply -f -
@NAME=$(NAME) envsubst < postgresql.yaml | kubectl -n $(NAMESPACE) apply -f -
@while ! kubectl -n $(NAMESPACE) get secret postgres-$(NAME)-superuser > /dev/null 2>&1; do sleep 1; done
CNPG = $(shell git rev-parse --show-toplevel)/bin/kubectl-cnpg
cnpg:
@@ -18,10 +20,10 @@ cnpg:
sh -s -- -b $(shell git rev-parse --show-toplevel)/bin
postgresql-secret: cnpg
@kubectl -n kamaji-system get secret postgres-root-cert > /dev/null 2>&1 || $(CNPG) -n kamaji-system certificate postgres-root-cert \
--cnpg-cluster postgresql \
--cnpg-user $$(kubectl -n kamaji-system get secret postgresql-superuser -o jsonpath='{.data.username}' | base64 -d)
@kubectl -n $(NAMESPACE) get secret postgres-$(NAME)-root-cert > /dev/null 2>&1 || $(CNPG) -n $(NAMESPACE) certificate postgres-$(NAME)-root-cert \
--cnpg-cluster postgres-$(NAME) \
--cnpg-user $$(kubectl -n $(NAMESPACE) get secret postgres-$(NAME)-superuser -o jsonpath='{.data.username}' | base64 -d)
postgresql-destroy:
@kubectl delete -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/releases/cnpg-1.16.0.yaml --ignore-not-found && \
kubectl -n kamaji-system delete secret postgres-root-cert --ignore-not-found
@NAME=$(NAME) envsubst < postgresql.yaml | kubectl -n $(NAMESPACE) delete -f -
@kubectl -n $(NAMESPACE) delete secret postgres-$(NAME)-root-cert --ignore-not-found


@@ -1,7 +1,7 @@
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: postgresql
name: postgres-$NAME
spec:
description: PostgreSQL cluster used by Kamaji along with kine
instances: 3


@@ -11,9 +11,9 @@ These are requirements of the design behind Kamaji:
Goals and scope may vary as the project evolves.
## Tenant Control Plane
What makes Kamaji special is that the Control Plane of a _“tenant cluster”_ is just one or more regular pods running in a namespace of the _“admin cluster”_ instead of a dedicated set of Virtual Machines. This solution makes running control planes at scale cheaper and easier to deploy and operate. The Tenant Control Plane components are packaged in the same way they are running in bare metal or virtual nodes. We leverage the `kubeadm` code to set up the control plane components as they were running on their own server. The unchanged images of upstream `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager` are used.
Kamaji is special because the Control Planes of the _“tenant cluster”_ are regular pods running in a namespace of the _“admin cluster”_ instead of a dedicated set of Virtual Machines. This solution makes running Control Planes at scale cheaper and easier to deploy and operate. The Tenant Control Plane components are packaged the same way they are on bare metal or virtual nodes. We leverage the `kubeadm` code to set up the control plane components as if they were running on their own servers. The unchanged images of upstream `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager` are used.
High Availability and rolling updates of the Tenant Control Plane pods are provided by a regular Deployment. Autoscaling based on the metrics is available. A Service is used to expose the Tenant Control Plane outside of the _“admin cluster”_. The `LoadBalancer` service type is used, `NodePort` and `ClusterIP` with an Ingress Controller are still viable options, depending on the case.
High Availability and rolling updates of the Tenant Control Plane pods are provided by a regular Deployment. Autoscaling based on the metrics is available. A Service is used to expose the Tenant Control Plane outside of the _“admin cluster”_. The `LoadBalancer` service type is used by default; `NodePort` and `ClusterIP` are other viable options, depending on the case.
Kamaji offers a [Custom Resource Definition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) to provide a declarative approach of managing a Tenant Control Plane. This *CRD* is called `TenantControlPlane`, or `tcp` in short.
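As a rough sketch, a minimal `TenantControlPlane` manifest could look like the following. The exact schema belongs to the `kamaji.clastix.io/v1alpha1` API; the field names below are illustrative assumptions and should be checked against the installed CRD:

```yaml
# Hypothetical minimal TenantControlPlane; field names are illustrative,
# verify them against the kamaji.clastix.io/v1alpha1 CRD before applying.
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-00
  namespace: default
spec:
  kubernetes:
    version: v1.26.0          # matches TENANT_VERSION in the quickstart env files
  controlPlane:
    deployment:
      replicas: 2             # HA via a regular Deployment
    service:
      serviceType: LoadBalancer
```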
@@ -25,9 +25,7 @@ And what about the tenant worker nodes? They are just _"worker nodes"_, i.e. reg
Our roadmap includes Cluster API support, as well as a Terraform provider, so that you can create _“tenant clusters”_ in a declarative way.
## Datastores
Putting the Tenant Control Plane in a pod is the easiest part. Also, we have to make sure each tenant cluster saves the state to be able to store and retrieve data. A dedicated `etcd` cluster for each tenant cluster doesn't scale well for a managed service because `etcd` data persistence can be cumbersome at scale, raising the operational effort to mitigate it. So we have to find an alternative keeping in mind our goal for a resilient and cost-optimized solution at the same time.
As we can deploy any Kubernetes cluster with an external `etcd` cluster, we explored this option for the tenant control planes. On the admin cluster, we can deploy a multi-tenant `etcd` datastore to save the state of multiple tenant clusters. Kamaji offers a Custom Resource Definition called `DataStore` to provide a declarative approach of managing Tenant datastores. With this solution, the resiliency is guaranteed by the usual `etcd` mechanism, and the pods' count remains under control, so it solves the main goal of resiliency and costs optimization. The trade-off here is that we have to operate an external datastore, in addition to `etcd` of the _“admin cluster”_ and manage the access to be sure that each _“tenant cluster”_ uses only its data.
Putting the Tenant Control Plane in a pod is the easiest part; we also have to make sure each tenant cluster can store and retrieve its state. As we can deploy a Kubernetes cluster with an external `etcd` cluster, we explored this option for the Tenant Control Planes. On the admin cluster, you can deploy one or more multi-tenant `etcd` datastores to save the state of multiple tenant clusters. Kamaji offers a Custom Resource Definition called `DataStore` to provide a declarative approach to managing multiple datastores. By sharing a datastore between multiple tenants, resiliency is still guaranteed while the pods' count remains under control, which solves the main goal of resiliency and cost optimization. The trade-off here is that you have to operate external datastores, in addition to the `etcd` of the _“admin cluster”_, and manage access to be sure that each _“tenant cluster”_ uses only its own data.
### Other storage drivers
Kamaji offers the option of using a more capable datastore than `etcd` to save the state of multiple tenants' clusters. Thanks to the native [kine](https://github.com/k3s-io/kine) integration, you can run _MySQL_ or _PostgreSQL_ compatible databases as datastore for _“tenant clusters”_.
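A `DataStore` pointing at one of these kine-backed drivers might be declared along these lines. This is a hedged sketch: the field names are assumptions and must be verified against the actual `kamaji.clastix.io/v1alpha1` schema:

```yaml
# Hypothetical DataStore for a kine-backed PostgreSQL; endpoint and field
# names are illustrative, check the installed CRD for the real schema.
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
  name: postgres-default
spec:
  driver: PostgreSQL
  endpoints:
    - postgres-default-rw.kamaji-system.svc:5432
```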
@@ -35,6 +33,11 @@ Kamaji offers the option of using a more capable datastore than `etcd` to save t
### Pooling
By default, Kamaji expects to persist all the _“tenant clusters”_ data in a single datastore that can be backed by different drivers. However, you can pick a different datastore for a specific set of _“tenant clusters”_ that have different resources assigned or a different tiering. Pooling multiple datastores is an option you can leverage for a very large set of _“tenant clusters”_, so you can distribute the load properly. As a future improvement, we have a _datastore scheduler_ feature in our roadmap so that Kamaji itself can automatically assign a _“tenant cluster”_ to the best datastore in the pool.
### Migration
To simplify Day-2 operations and reduce the operational burden, Kamaji provides the capability to live-migrate data from one datastore to another of the same driver, without manual, error-prone backup and restore operations.
> Currently, live data migration is only available between datastores having the same driver.
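The controller drives migrations through batch Jobs: per the `SetupWithManager` change earlier in this diff, it only enqueues Jobs in the Kamaji namespace carrying the `kamaji.clastix.io/component: migrate` label, and maps them back to the owning Tenant Control Plane via two name/namespace labels. A migration Job would therefore carry metadata along these lines (the Job name and label values here are illustrative; only the three label keys come from the controller code):

```yaml
# Illustrative Job metadata matching the predicate and map function in
# SetupWithManager; name and label values are hypothetical examples.
apiVersion: batch/v1
kind: Job
metadata:
  name: migrate-tenant-00          # hypothetical name
  namespace: kamaji-system         # must match r.KamajiNamespace to pass the predicate
  labels:
    kamaji.clastix.io/component: migrate
    tcp.kamaji.clastix.io/name: tenant-00
    tcp.kamaji.clastix.io/namespace: default
```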
## Konnectivity
In addition to the standard control plane containers, Kamaji creates an instance of [konnectivity-server](https://kubernetes.io/docs/concepts/architecture/control-plane-node-communication/) running as sidecar container in the `tcp` pod and exposed on port `8132` of the `tcp` service.
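Combined with the quickstart variables earlier in this diff (`TENANT_PORT=6443`, `TENANT_PROXY_PORT=8132`), the `tcp` Service ends up exposing both the API server and the konnectivity sidecar. A sketch of the relevant ports section (port numbers come from the quickstart env files; the Service name and port names are assumptions):

```yaml
# Illustrative tcp Service ports; 6443/8132 from TENANT_PORT and
# TENANT_PROXY_PORT, names and metadata are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: tenant-00
spec:
  type: LoadBalancer
  ports:
    - name: kube-apiserver
      port: 6443
      targetPort: 6443
    - name: konnectivity-server
      port: 8132
      targetPort: 8132
```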