Compare commits

...

251 Commits

Author SHA1 Message Date
Dario Tranchitella
aada5c29a2 chore(helm): releasing helm v0.11.3 2023-02-24 09:57:46 +01:00
Dario Tranchitella
cb4a493e28 chore(helm): bumping up to v0.2.1 2023-02-24 09:56:39 +01:00
Dario Tranchitella
f783aff3c0 chore(kustomize): bumping up to v0.2.1 2023-02-24 09:56:39 +01:00
Dario Tranchitella
c8bdaf0aa2 chore(makefile): bumping up to v0.2.1 2023-02-24 09:56:39 +01:00
Dario Tranchitella
d1c2fe020e feat: upgrading to kubernetes v1.26.1 2023-02-24 09:56:23 +01:00
Dario Tranchitella
5b93d7181f fix: avoiding secrets regeneration upon velero restore 2023-02-23 19:01:47 +01:00
Dario Tranchitella
1273d95340 feat(helm): using tolerations for jobs 2023-02-22 14:19:24 +01:00
Filippo Pinton
1e4c78b646 fix(helm): remove duplicate labels 2023-02-21 15:25:20 +01:00
Pietro Terrizzi
903cfc0bae docs(helm): added pvc customAnnotations 2023-02-15 18:07:14 +01:00
Pietro Terrizzi
7bd142bcb2 feat(helm): added customAnnotations to PVC 2023-02-15 18:07:14 +01:00
Pietro Terrizzi
153a43e6f2 chore: k8s.gcr.io is deprecated in favor of registry.k8s.io 2023-02-15 18:06:26 +01:00
Dario Tranchitella
2abaeb5586 docs: keeping labels consistent 2023-02-13 11:24:36 +01:00
Dario Tranchitella
a8a41951cb refactor!: keeping labels consistent
The label kamaji.clastix.io/soot is deprecated in favour of
kamaji.clastix.io/name, every external resource referring to this must
be aligned prior to updating to this version.
2023-02-13 11:24:36 +01:00
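Since the kamaji.clastix.io/soot label stops matching, any external tooling that selects tenant resources needs its selector updated before upgrading. A minimal sketch, assuming a controller-runtime client; listTenantPods and its parameters are illustrative helpers, not part of Kamaji's API:

```go
package external

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listTenantPods selects the pods of a tenant control plane by the new
// kamaji.clastix.io/name label; the deprecated kamaji.clastix.io/soot key
// no longer matches anything after this release.
func listTenantPods(ctx context.Context, c client.Client, namespace, tcpName string) (*corev1.PodList, error) {
	pods := &corev1.PodList{}
	if err := c.List(ctx, pods,
		client.InNamespace(namespace),
		client.MatchingLabels{"kamaji.clastix.io/name": tcpName},
	); err != nil {
		return nil, fmt.Errorf("listing tenant pods: %w", err)
	}
	return pods, nil
}
```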
Dario Tranchitella
a0485c338b refactor(checksum): using helper functions 2023-02-10 15:31:28 +01:00
mendrugory
89edc8bbf5 chore: no maintainer 2023-02-09 14:24:35 +01:00
Dario Tranchitella
43765769ec feat: v0.2.0 release 2023-02-06 22:34:33 +01:00
Dario Tranchitella
0016f121ed feat(helm): emptyDir with memory medium for flock performances 2023-02-06 22:12:50 +01:00
Dario Tranchitella
c3fb5373f6 fix(e2e): waiting for reconciliation of the TCP 2023-02-06 22:12:50 +01:00
Dario Tranchitella
670f10ad4e docs: documenting new flag max-concurrent-tcp-reconciles 2023-02-06 22:12:50 +01:00
Dario Tranchitella
4110b688c9 feat: configurable max concurrent tcp reconciles 2023-02-06 22:12:50 +01:00
Dario Tranchitella
830d86a38a feat: introducing enqueueback reconciliation status
Required for the changes introduced with 74f7157e8b
2023-02-06 22:12:50 +01:00
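An "enqueue back" status is, in controller-runtime terms, a requeue signal. A minimal sketch of the idiom, assuming controller-runtime; reconcileOutcome is an illustrative helper, not Kamaji's actual code:

```go
package controllers

import (
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

// reconcileOutcome sketches the idea: when a dependency is not ready yet,
// the object is handed back to the workqueue with a delay instead of
// failing hard, so reconciliation is retried later.
func reconcileOutcome(ready bool) (ctrl.Result, error) {
	if !ready {
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}
	return ctrl.Result{}, nil
}
```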
Dario Tranchitella
44d1f3fa7f refactor: updating local tcp instance to avoid 2nd retrieval 2023-02-06 22:12:50 +01:00
Dario Tranchitella
e23ae3c7f3 feat: automatically set gomaxprocs to match container cpu quota 2023-02-06 22:12:50 +01:00
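Matching GOMAXPROCS to the container CPU quota is commonly done with uber-go's automaxprocs module; that Kamaji uses this exact library is an assumption, but the effect the commit describes is the same. A minimal sketch:

```go
package main

import (
	"log"
	"runtime"

	"go.uber.org/automaxprocs/maxprocs" // assumed library; cgroup-aware GOMAXPROCS
)

func main() {
	// Align GOMAXPROCS with the container's CPU quota instead of the
	// node's physical core count, avoiding CFS throttling.
	undo, err := maxprocs.Set(maxprocs.Logger(log.Printf))
	defer undo()
	if err != nil {
		log.Printf("setting GOMAXPROCS: %v", err)
	}
	log.Printf("GOMAXPROCS=%d", runtime.GOMAXPROCS(0))
}
```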
bsctl
713b0754bb docs: update to latest features 2023-02-05 10:08:49 +01:00
Dario Tranchitella
da924b30ff docs: benchmarking kamaji on AWS 2023-02-05 09:09:02 +01:00
Dario Tranchitella
0f0d83130f chore(helm): ServiceMonitor support 2023-02-05 09:09:02 +01:00
Dario Tranchitella
634e808d2d chore(kustomize): ServiceMonitor support 2023-02-05 09:09:02 +01:00
bsctl
b99f224d32 fix(helm): handle basicAuth values for datastore 2023-02-05 09:07:20 +01:00
Dario Tranchitella
d02b5f427e test(e2e): kube-apiserver kubelet-preferred-address-types support 2023-01-22 14:56:47 +01:00
Dario Tranchitella
08b5bc05c3 docs: kube-apiserver kubelet-preferred-address-types support 2023-01-22 14:56:47 +01:00
Dario Tranchitella
4bd8e2d319 chore(helm): kube-apiserver kubelet-preferred-address-types support 2023-01-22 14:56:47 +01:00
Dario Tranchitella
a1f155fcab chore(kustomize): kube-apiserver kubelet-preferred-address-types support 2023-01-22 14:56:47 +01:00
Dario Tranchitella
743ea1343f feat(api): kube-apiserver kubelet-preferred-address-types support 2023-01-22 14:56:47 +01:00
Dario Tranchitella
41780bcb04 docs: tcp deployment strategy support 2023-01-17 10:01:21 +01:00
Dario Tranchitella
014297bb0f chore(helm): tcp deployment strategy support 2023-01-17 10:01:21 +01:00
Dario Tranchitella
20cfdd6931 chore(kustomize): tcp deployment strategy support 2023-01-17 10:01:21 +01:00
Dario Tranchitella
f03e250cf8 feat(api): deployment strategy support 2023-01-17 10:01:21 +01:00
Dario Tranchitella
2cdee08924 chore(helm): certificate authority rotation handling 2023-01-13 19:09:03 +01:00
Dario Tranchitella
6d27ca9e9e chore(kustomize): certificate authority rotation handling 2023-01-13 19:09:03 +01:00
Dario Tranchitella
2293e49e4b fix: certificate authority rotation handling 2023-01-13 19:09:03 +01:00
Dario Tranchitella
6b0f92baa3 docs: certificate authority rotation handling 2023-01-13 19:09:03 +01:00
Dario Tranchitella
8e94039962 feat(api)!: introducing ca rotating status 2023-01-13 19:09:03 +01:00
Dario Tranchitella
551df6df97 fix(kubeadm_phase): wrong string value representation 2023-01-13 19:09:03 +01:00
Massimiliano Giovagnoli
c905e16e75 chore(docs/guides): fix syntax on flux helmrelease
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2023-01-02 14:38:40 +01:00
Massimiliano Giovagnoli
e08792adc2 chore(docs/images): update flux diagram
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2023-01-02 14:38:40 +01:00
maxgio92
248b5082d0 docs(docs/content/guides/kamaji-gitops-flux.md): use third person
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
Co-authored-by: Dario Tranchitella <dario@tranchitella.eu>
2023-01-02 14:38:40 +01:00
Massimiliano Giovagnoli
5cebb05458 docs: add guide for managing tenant resources gitops way
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2023-01-02 14:38:40 +01:00
Dario Tranchitella
efbefba0b3 docs(api): aligning to latest changes 2022-12-22 11:57:29 +01:00
Dario Tranchitella
bc19203071 chore(gi): checking diff and docs alignment 2022-12-22 11:57:29 +01:00
Dario Tranchitella
c8b8dcc2d3 test(e2e): testing different datastores and migration 2022-12-22 11:57:29 +01:00
Dario Tranchitella
cf2721201d chore: adding samples and automating deployment of datastores 2022-12-22 11:57:29 +01:00
Dario Tranchitella
4aa77924f4 chore(helm): using webhooks for secrets instead of finalizers 2022-12-20 20:54:41 +01:00
Dario Tranchitella
a3c52e81f6 chore(kustomize): using webhooks for secrets instead of finalizers 2022-12-20 20:54:41 +01:00
Dario Tranchitella
7ed3c44401 refactor(datastore): using webhooks for secrets instead of finalizers 2022-12-20 20:54:41 +01:00
Dario Tranchitella
beebaf0364 fix(storage): wrong variable while assigning finalizers 2022-12-20 20:45:09 +01:00
Dario Tranchitella
b9cda29461 fix(migrate): allowing leases updates during migration 2022-12-20 20:45:09 +01:00
Dario Tranchitella
7db8a64bdd fix(etcd): using stored username for cert common name 2022-12-19 16:28:48 +01:00
Dario Tranchitella
c6abe03fd1 fix(soot): typo on params for service name and namespace 2022-12-19 10:44:39 +01:00
Dario Tranchitella
723fa1aea6 chore(helm): upgrading etcd to v3.5.6 2022-12-19 08:59:05 +01:00
Dario Tranchitella
7e0ec81ba2 deps: upgrading etcd to v3.5.6 2022-12-19 08:59:05 +01:00
Dario Tranchitella
7353bb5813 chore(gh): upgrading to go 1.19 2022-12-17 15:57:47 +01:00
Dario Tranchitella
96cedadf0a test(e2e): upgrading to ginkgo v2 2022-12-17 15:57:47 +01:00
Dario Tranchitella
76b603de1e chore(helm): upgrade to 1.26 2022-12-17 15:57:47 +01:00
Dario Tranchitella
7cff6b5850 chore(kustomize): upgrade to 1.26 2022-12-17 15:57:47 +01:00
Dario Tranchitella
6e6ea0189f refactor(k8s): upgrade to 1.26 2022-12-17 15:57:47 +01:00
Dario Tranchitella
aefdbc9481 deps(k8s): upgrade to 1.26 2022-12-17 15:57:47 +01:00
Dario Tranchitella
074279b3c2 chore(go): upgrading to 1.19 required for k8s 1.26 2022-12-17 15:57:47 +01:00
Dario Tranchitella
09891f4d71 chore(gh): running workflows on ubuntu-22.04 2022-12-17 15:46:53 +01:00
Dario Tranchitella
18c60461e5 refactor: conforming finalizers management 2022-12-16 22:44:42 +01:00
Dario Tranchitella
ceab662671 feat(soot): using finalizer for clean-up 2022-12-16 22:44:42 +01:00
Dario Tranchitella
d38098a57e fix(soot): ensure that manager is stopped upon tcp deletion 2022-12-16 22:44:42 +01:00
Dario Tranchitella
017a50b8f6 fix(soot): ensuring manager to restart upon tcp pod restart 2022-12-16 22:44:42 +01:00
Dario Tranchitella
b07062b4dd chore(kustomize): missing datastore finalizer rbac 2022-12-15 15:50:30 +01:00
Dario Tranchitella
b880cff8d7 fix: missing datastore finalizer rbac 2022-12-15 15:50:30 +01:00
Dario Tranchitella
3f7fa08871 refactor: removing unused scheme 2022-12-15 15:50:30 +01:00
Dario Tranchitella
8311f1fe1a fix: ensure default datastore exists before starting manager 2022-12-15 15:50:30 +01:00
Dario Tranchitella
77fff030bf chore(helm): support for runtime class 2022-12-14 21:24:01 +01:00
Dario Tranchitella
abada61930 chore(kustomize): support for runtime class 2022-12-14 21:24:01 +01:00
Dario Tranchitella
1eb1e0f17c feat: support for runtime class 2022-12-14 21:24:01 +01:00
Dario Tranchitella
e83c34776b refactor(soot): creating channel source during controller setup 2022-12-14 21:23:47 +01:00
Dario Tranchitella
3b902943f1 chore(helm): kubeadm phases are moved to soot manager 2022-12-14 21:23:47 +01:00
Dario Tranchitella
c5d62f3d82 chore(kustomize): kubeadm phases are moved to soot manager 2022-12-14 21:23:47 +01:00
Dario Tranchitella
938341a2e7 refactor(log): uniforming log for soot controllers 2022-12-14 21:23:47 +01:00
Dario Tranchitella
3ea721cf2b feat(kubeadm): moving phases to soot manager 2022-12-14 21:23:47 +01:00
Dario Tranchitella
5cbd085cf8 chore(helm): addons no more need checksum 2022-12-14 12:22:49 +01:00
Dario Tranchitella
e358fbe6bc chore(kustomize): addons no more need checksum 2022-12-14 12:22:49 +01:00
Dario Tranchitella
9d55e77902 refactor(api): no more need of checksum for addons 2022-12-14 12:22:49 +01:00
Dario Tranchitella
1e4640e8e6 feat(addons): implementation in the soot cluster 2022-12-14 12:22:49 +01:00
Dario Tranchitella
1b14922f55 refactor(kubeadm): preparing migration for addons to soot manager 2022-12-14 12:22:49 +01:00
Dario Tranchitella
11f800063f fix(konnectivity): typo in ca-cert cli flag 2022-12-14 12:22:49 +01:00
Dario Tranchitella
e11b459a3d fix(konnectivity): reconciliation failed and in loop 2022-12-14 12:22:49 +01:00
Dario Tranchitella
9c8de782f3 docs: datastore migration support 2022-12-14 10:17:30 +01:00
Dario Tranchitella
4c51eafc90 feat(konnectivity): reconciliation performed by soot manager 2022-12-12 16:22:36 +01:00
Dario Tranchitella
1a80fc5b28 fix(api): wrong konnectivity defaults 2022-12-12 16:22:36 +01:00
Dario Tranchitella
02052d5339 fix(helm): wrong konnectivity defaults 2022-12-12 16:22:36 +01:00
Dario Tranchitella
7e47e33b39 fix(kustomize): wrong konnectivity defaults 2022-12-12 16:22:36 +01:00
Dario Tranchitella
28c47d9d13 refactor: moving migrate webhook handling from tcp to soot manager 2022-12-12 16:22:36 +01:00
Dario Tranchitella
1ec257a729 feat: introducing soot controllers manager 2022-12-12 16:22:36 +01:00
Dario Tranchitella
68006b1102 fix(datastore): coalesce for storage configuration 2022-12-11 21:39:36 +01:00
Dario Tranchitella
cd109dcf06 fix: using slash prefix for etcd datastore 2022-12-11 21:39:36 +01:00
Dario Tranchitella
1138eb1dea fix: using the status storage schema for the etcd prefix 2022-12-09 11:54:23 +01:00
Dario Tranchitella
f4f914098c feat(migrate): enhancing job metadata 2022-12-08 14:33:20 +01:00
Dario Tranchitella
5e78b6392a feat(migrate): making timeout configurable 2022-12-08 14:33:20 +01:00
Dario Tranchitella
e25f95d7eb feat(migrate): making image configurable 2022-12-08 14:33:20 +01:00
Dario Tranchitella
7f49fc6125 refactor(konnectivity): removing default logging options
Verbosity and logtostderr can now be enforced through the extra args
struct member for both the server and the agent.
2022-12-08 14:23:31 +01:00
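Together with the decoupling commits below, this implies an API shape roughly like the following. The type and field names here are hypothetical, inferred from the commit messages rather than taken from Kamaji's published API:

```go
package v1alpha1

// Hypothetical sketch of the decoupled Konnectivity API implied by the
// commit messages; the real field names may differ.
type KonnectivitySpec struct {
	Server ServerSpec `json:"server"`
	Agent  AgentSpec  `json:"agent"`
}

type ServerSpec struct {
	// ExtraArgs lets users enforce flags such as --v=8 or --logtostderr=true,
	// replacing the previously hard-coded logging defaults.
	ExtraArgs []string `json:"extraArgs,omitempty"`
}

type AgentSpec struct {
	ExtraArgs []string `json:"extraArgs,omitempty"`
}
```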
Dario Tranchitella
8b9683802b fix: support for arguments without a value 2022-12-08 14:23:31 +01:00
Dario Tranchitella
cb5e35699e docs: support for konnectivity extra args 2022-12-08 14:23:31 +01:00
Dario Tranchitella
0d6246c098 chore(helm): support for konnectivity extra args 2022-12-08 14:23:31 +01:00
Dario Tranchitella
d8760fdc6e chore(kustomize): support for konnectivity extra args 2022-12-08 14:23:31 +01:00
Dario Tranchitella
c00df62ff7 feat(konnectivity)!: support for extra args 2022-12-08 14:23:31 +01:00
Dario Tranchitella
653a3933e8 chore(helm): decoupling agent and server struct 2022-12-08 14:23:31 +01:00
Dario Tranchitella
6775b2ae57 chore(kustomize): decoupling agent and server struct 2022-12-08 14:23:31 +01:00
Dario Tranchitella
5241fa64ed refactor(konnectivity)!: decoupling agent and server structs 2022-12-08 14:23:31 +01:00
Dario Tranchitella
723fef5336 feat(migrate): injecting webhook into tcp 2022-12-08 14:13:45 +01:00
Dario Tranchitella
8d1d8598c1 refactor: moving datastore migrate resource to its module 2022-12-08 14:13:45 +01:00
Dario Tranchitella
c96f58974b fix(helm): installing datastore upon completion 2022-12-04 22:12:37 +01:00
Dario Tranchitella
2d1daa8498 feat(datastore): validation webhook 2022-12-04 22:12:37 +01:00
Dario Tranchitella
fe948298d8 chore(helm): wrong crd validation markers 2022-12-04 22:12:37 +01:00
Dario Tranchitella
79942dda34 chore(kustomize): wrong crd validation markers 2022-12-04 22:12:37 +01:00
Dario Tranchitella
44919598ec fix(kubebuilder): wrong crd validation markers 2022-12-04 22:12:37 +01:00
Dario Tranchitella
2336d402c3 refactor: using custom validator and custom defaulter 2022-12-04 21:39:14 +01:00
Dario Tranchitella
79c59e55e5 feat: validation webhook to prevent DataStore migration to a different driver 2022-12-04 21:39:14 +01:00
Dario Tranchitella
95d0983faa chore(dockerfile): optimizing build 2022-12-03 12:04:04 +01:00
Dario Tranchitella
7e276e5ba1 chore(helm): support to datastore migration w/ the same driver 2022-12-03 12:04:04 +01:00
Dario Tranchitella
b2e646064f fix(helm): switching over webhook server service 2022-12-03 12:04:04 +01:00
Dario Tranchitella
3850ad9752 chore(kustomize): support to datastore migration w/ the same driver 2022-12-03 12:04:04 +01:00
Dario Tranchitella
9e899379f4 feat: support to datastore migration w/ the same driver 2022-12-03 12:04:04 +01:00
Dario Tranchitella
a260a92495 fix(psql): checking db and table ownership 2022-12-03 12:04:04 +01:00
Dario Tranchitella
cc4864ca9e feat: datastore migration drivers 2022-12-03 12:04:04 +01:00
Dario Tranchitella
ece1a4e7ee fix: avoiding inconsistency upon tcp retrieval and status update 2022-12-03 12:04:04 +01:00
Dario Tranchitella
eb2440ae62 refactor: abstracting datastore configuration retrieval 2022-12-03 12:04:04 +01:00
Dario Tranchitella
0c415707d7 fix(datastore): not deleting database content upon certificates change 2022-12-03 12:04:04 +01:00
Dario Tranchitella
7a6b0a8de3 fix(datastore): ensuring to update status upon any change 2022-12-03 12:04:04 +01:00
Dario Tranchitella
a31fbdc875 chore(makefile): allowing creation of multiple datastore instances 2022-12-03 12:04:04 +01:00
Dario Tranchitella
4ff0cdf28b docs: configuration for the manager command 2022-12-03 12:04:04 +01:00
Dario Tranchitella
ae573b137c chore(kustomize): removing rbac proxy and support for manager command 2022-12-03 12:04:04 +01:00
Dario Tranchitella
e81b3224c2 chore(helm): removing rbac proxy and support for manager command 2022-12-03 12:04:04 +01:00
Dario Tranchitella
4298bdd73e chore(dockerfile): manager command 2022-12-03 12:04:04 +01:00
Dario Tranchitella
15d0d57790 feat: refactoring for commands 2022-12-03 12:04:04 +01:00
Dario Tranchitella
c17a31ef82 fix: avoiding collision of datastore schemes 2022-11-29 18:25:52 +01:00
Dario Tranchitella
f0df1cfe6f fix: removing tcp data using prefix, and not range 2022-11-29 18:25:52 +01:00
Dario Tranchitella
1bcff90785 chore(kustomize): show datastore for each tcp 2022-11-27 18:57:38 +01:00
Dario Tranchitella
6c817a9ae2 chore(helm): show datastore for each tcp 2022-11-27 18:57:38 +01:00
Dario Tranchitella
5b9311f421 feat: show datastore for each tcp 2022-11-27 18:57:38 +01:00
Dario Tranchitella
0d607dfe5d refactor: adding finalizer upon datastore setup 2022-11-27 17:26:34 +01:00
Dario Tranchitella
11502bf359 refactor: retry on conflict for the status update 2022-11-27 17:26:34 +01:00
Dario Tranchitella
ff1c9fca16 chore(samples): updating to latest kubeadm supported version 2022-11-27 17:23:24 +01:00
Dario Tranchitella
adc4b7d98c chore(test): updating to latest kindest/node version 2022-11-27 17:23:24 +01:00
Dario Tranchitella
a96133f342 deps: upgrade to k8s 1.25.4 2022-11-27 17:23:24 +01:00
Dario Tranchitella
81fb429c83 test(e2e): validating tcp kubernetes version 2022-11-26 18:39:59 +01:00
Dario Tranchitella
190acc99b3 feat: tcp version validation upon create and update 2022-11-26 18:39:59 +01:00
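A sketch of what such a create/update version check can look like, assuming semantic-version comparison via golang.org/x/mod/semver; the helper and error messages are illustrative, not the actual webhook code:

```go
package v1alpha1

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// validateVersion sketches a create/update rule: the requested version must
// be well-formed and an update must not downgrade the tenant control plane.
// Note that x/mod/semver expects the leading "v" (e.g. "v1.25.4").
func validateVersion(oldVersion, newVersion string) error {
	if !semver.IsValid(newVersion) {
		return fmt.Errorf("%s is not a valid semantic version", newVersion)
	}
	if oldVersion != "" && semver.Compare(newVersion, oldVersion) < 0 {
		return fmt.Errorf("downgrading from %s to %s is not supported", oldVersion, newVersion)
	}
	return nil
}
```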
Dario Tranchitella
b0a059d305 docs: cert-manager dependency 2022-11-26 16:56:26 +01:00
Dario Tranchitella
bcc7d0ebbd chore(makefile): installing cert-manager for e2e 2022-11-26 16:56:26 +01:00
Dario Tranchitella
9dc0a9a168 chore(makefile): crds diverged between kustomize and helm 2022-11-26 16:56:26 +01:00
Dario Tranchitella
d312738581 chore(helm): support for cert-manager and webhooks 2022-11-26 16:56:26 +01:00
Dario Tranchitella
30bc8cc2bf feat!: support for cert-manager and webhooks 2022-11-26 16:56:26 +01:00
Dario Tranchitella
55d7f09a34 chore(kustomize): support for cert-manager and webhooks 2022-11-26 16:56:26 +01:00
Dario Tranchitella
2c892d79e4 fix(ci): missing metadata upon container images release 2022-11-26 16:56:26 +01:00
Dario Tranchitella
43f1a6b95b chore(makefile): installing required dependencies 2022-11-26 16:56:26 +01:00
Dario Tranchitella
78ef34c9d6 fix(docs): aligning to latest changes for the chart documentation 2022-11-19 11:07:37 +01:00
Matteo Ruina
16d8b2d701 fix(helm): support installation on EKS 2022-11-18 16:50:00 +01:00
Dario Tranchitella
68764be716 chore(helm): support installation using --wait option 2022-10-22 09:47:08 +02:00
Dario Tranchitella
b594b598b1 chore(helm)!: tcp pod advanced scheduling 2022-10-21 14:39:24 +02:00
Dario Tranchitella
c8ce212730 chore(kustomize): tcp pod advanced scheduling 2022-10-21 14:39:24 +02:00
Dario Tranchitella
714b173132 docs: tcp pod advanced scheduling 2022-10-21 14:39:24 +02:00
Dario Tranchitella
0217d579d6 feat: tcp pod advanced scheduling 2022-10-21 14:39:24 +02:00
Dario Tranchitella
c242f4ac58 api!: tcp pod advanced scheduling 2022-10-21 14:39:24 +02:00
Dario Tranchitella
d4d25a8a05 chore(makefile): golint recipe 2022-10-21 14:39:24 +02:00
maxgio92
cff7f7c4e5 Refactor documentation and provide a website (#173) 2022-10-20 09:57:54 +02:00
Dario Tranchitella
6c817fd7ab fix(helm): kubeversion constraint 2022-10-12 11:27:45 +02:00
Massimiliano Giovagnoli
d31ada4da6 docs: add link to env file for admin cluster setup
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2022-10-12 10:20:13 +02:00
Massimiliano Giovagnoli
ee01f721d2 docs: add link script for joining nodes setup
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2022-10-11 17:48:06 +02:00
bsctl
912e010363 docs: add cncf conformance logo 2022-10-06 10:18:18 +02:00
bsctl
e2b03ca873 docs: add cncf conformance logo 2022-10-06 10:18:18 +02:00
Adriano Pezzuto
dccf7bd540 chore(helm): update metadata to helm chart 2022-10-05 09:47:37 +02:00
bsctl
25a65a7496 fix(docs): add logo in svg format 2022-09-23 19:32:16 +02:00
Dario Tranchitella
1ff03246c6 chore(helm): bumping to v0.1.0 2022-09-19 11:43:51 +02:00
Dario Tranchitella
8335f645a5 chore(kustomize): bumping to v0.1.0 2022-09-19 11:43:51 +02:00
Dario Tranchitella
70a791be74 chore(makefile): bumping to v0.1.0 2022-09-19 11:43:51 +02:00
bsctl
b0293c23b5 fix(docs): minor improvement 2022-09-16 20:36:49 +02:00
bsctl
50bba9bb2e fix(docs): deploy tenant nodes on separate subnet 2022-09-16 20:36:49 +02:00
bsctl
f05f7eaf07 fix(docs): remove outdated manifests 2022-09-16 20:36:49 +02:00
bsctl
bfd34ef47e fix(docs): minor improvements 2022-09-16 20:36:49 +02:00
bsctl
b73c7a20ed fix(docs): update roadmap in readme 2022-09-16 20:36:49 +02:00
bsctl
004441e77e fix(docs): use default md style for api reference 2022-09-16 20:36:49 +02:00
bsctl
0f85b6c534 fix(docs): wrong links in readme 2022-09-16 20:36:49 +02:00
bsctl
b674738f0d fix(docs): pin always the kubeadm versions 2022-09-16 20:36:49 +02:00
bsctl
6dc3cd1876 fix(docs): set requirements on kubeadm version 2022-09-16 20:36:49 +02:00
bsctl
96a57fefa5 refactor(docs): track new features and improvements 2022-09-16 20:36:49 +02:00
Dario Tranchitella
87b6f75f66 chore(ci): check helm non committed changes 2022-09-14 11:23:11 +02:00
Dario Tranchitella
1b24806fa3 fix(helm): protocol is not required for external etcd endpoints 2022-09-14 11:23:11 +02:00
Dario Tranchitella
f32ba4a76b fix(makefile): missing namespaces for postgresql kine setup 2022-09-14 11:23:11 +02:00
Dario Tranchitella
19d91aa4d2 chore(log): silencing klog 2022-09-14 11:23:11 +02:00
Dario Tranchitella
a4e2ac24ac fix(kustomize): installing default datastore with proper endpoints 2022-09-14 11:23:11 +02:00
Dario Tranchitella
0dffd9ba46 fix(datastore): default as name for the common datastore 2022-09-14 11:23:11 +02:00
Dario Tranchitella
90b2ca1bab fix(konnectivity): clean-up upon toggling addon
The kube-apiserver container of the TCP Deployment is heavily customized
with extra settings for konnectivity: most of them weren't cleaned up
properly, and the function wasn't fully idempotent when toggling the
feature.

This fix addresses the situation and rearranges the code according to
the latest polish.
2022-09-12 09:38:36 +02:00
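The underlying fix is about idempotency: rather than patching konnectivity flags in and out of the kube-apiserver container, the desired argument list is rebuilt deterministically on every reconcile. A minimal sketch of that idea; the helper name is illustrative:

```go
package resources

import "strings"

// stripFlag returns args without any occurrence of the given flag, so that
// toggling a feature off on a later reconcile leaves no residue behind, and
// applying the same desired state twice yields the same container spec.
func stripFlag(args []string, flag string) []string {
	out := make([]string, 0, len(args))
	for _, a := range args {
		if a != flag && !strings.HasPrefix(a, flag+"=") {
			out = append(out, a)
		}
	}
	return out
}
```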
Dario Tranchitella
df8ca7c1d1 refactor: checksum for configmap and secret data 2022-09-12 09:38:36 +02:00
Dario Tranchitella
65519d4f22 refactor: using kamaji prefix for checksum annotation 2022-09-12 09:38:36 +02:00
Dario Tranchitella
e0fa8169f1 refactor: wrapping datastore errors 2022-09-12 09:38:36 +02:00
Dario Tranchitella
41eddc0462 refactor(crypto): eliminating bloated certs functions 2022-09-12 09:38:36 +02:00
Dario Tranchitella
1a9a8a1854 refactor: decoding kubeconfig with less bloated funcs 2022-09-12 09:38:36 +02:00
Dario Tranchitella
0c8a16d604 refactor(utils): encode to yaml uses the non deprecated serializer 2022-09-12 09:38:36 +02:00
Dario Tranchitella
b7adb314ad refactor: logging errors with stacktrace
By using the log facade and logging the error directly in the resource
handler, we get a more detailed overview of errors, along with other
metadata that makes it quicker to understand where the reconciliation
failed.
2022-09-12 09:38:36 +02:00
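The pattern described above, logging the error where it occurs with key/value metadata attached, looks roughly like this with logr and controller-runtime; the handler shape is illustrative:

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// reconcileResource sketches the logging pattern: the error is logged at the
// resource handler, enriched with metadata, before being returned to the
// manager, making it quicker to see where reconciliation failed.
func reconcileResource(ctx context.Context, name string, handle func() error) (ctrl.Result, error) {
	logger := log.FromContext(ctx).WithValues("resource", name)
	if err := handle(); err != nil {
		logger.Error(err, "reconciliation failed")
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}
```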
Dario Tranchitella
e55e6cfdd4 chore(golangci-lint): enabling interfacer and updating code 2022-09-12 09:38:36 +02:00
Dario Tranchitella
6388bf0a7f chore(golangci-lint): enabling used linters 2022-09-12 09:38:36 +02:00
Dario Tranchitella
e089f0ad9a chore: pointer.Int32Ptr is deprecated in favor of pointer.Int32 2022-09-12 09:38:36 +02:00
Dario Tranchitella
0b0bf09813 feat: seeding at startup 2022-09-12 09:38:36 +02:00
Dario Tranchitella
00ea4a562d refactor: moving cert functions to datastore resource 2022-09-12 09:38:36 +02:00
Dario Tranchitella
2a33844c68 refactor(utilities): decreasing bloating functions 2022-09-12 09:38:36 +02:00
Dario Tranchitella
606926ec9a refactor: go simple kubeconfig check 2022-09-12 09:38:36 +02:00
Dario Tranchitella
84b70b3b59 fix: check service-account certificate hash for reconciliation 2022-09-12 09:38:36 +02:00
Dario Tranchitella
4ca79ceb4c fix(helm)!: wrong path for scale spec path 2022-09-10 09:54:12 +02:00
Dario Tranchitella
8df8aa445a fix(kustomize)!: wrong path for scale spec path 2022-09-10 09:54:12 +02:00
Dario Tranchitella
8da916b5cd fix: wrong path for scale spec path 2022-09-10 09:54:12 +02:00
Dario Tranchitella
f15eeebe02 chore(gh): ensure to use go 1.18 for golangci-lint 2022-09-09 17:00:20 +02:00
Dario Tranchitella
7002d48ef9 fix(upgrade): minor release upgrades are allowed 2022-09-09 17:00:20 +02:00
Dario Tranchitella
79edd2606a refactor(kubeadm)!: updating code according to latest changes
Starting from this change, all the nodes trying to join a Kamaji TCP
must be initialized with kubeadm >= 1.25. This is not a hard prerequisite,
since a previous Kubernetes version can still be used by specifying it in
the ClusterConfiguration kubernetesVersion field.
2022-09-09 17:00:20 +02:00
Dario Tranchitella
650c20be2b fix(deps): upgrading kubeadm to 1.25.0 2022-09-09 17:00:20 +02:00
Dario Tranchitella
7862717772 refactor: using constants for front-proxy common name 2022-09-09 17:00:10 +02:00
Dario Tranchitella
08eed7b244 fix: --etcd-compaction-interval flag is required for TCP API Server 2022-09-09 17:00:10 +02:00
Dario Tranchitella
1a561758b6 fix: service account issuer must be kubernetes.default.svc 2022-09-09 09:11:43 +02:00
Dario Tranchitella
12f12832f7 fix(kube-apiserver): required flag requestheader-client-ca-file 2022-09-06 19:20:40 +02:00
Dario Tranchitella
b4d0f9b698 chore(helm): adding scale subresource 2022-09-06 16:31:42 +02:00
Dario Tranchitella
14624af093 chore(kustomize)!: adding scale subresource 2022-09-06 16:31:42 +02:00
Dario Tranchitella
52cdc90b48 feat: adding scale subresource 2022-09-06 16:31:42 +02:00
Dario Tranchitella
fbb6e4eec5 chore(helm)!: repository and version override for addons 2022-09-02 14:38:46 +02:00
Dario Tranchitella
880a29f543 chore(kustomize)!: repository and version override for addons 2022-09-02 14:38:46 +02:00
Dario Tranchitella
b0b4ef95c6 feat: repository and version override for addons 2022-09-02 14:38:46 +02:00
Dario Tranchitella
bd909d6567 refactor(docs): updating repository and tag for konnectivity addon 2022-08-31 23:36:58 +02:00
Dario Tranchitella
fcc10c95b2 chore(helm): updating repository and tag 2022-08-31 23:36:58 +02:00
Dario Tranchitella
7e912ed2e8 chore(kustomize): updating repository and tag 2022-08-31 23:36:58 +02:00
Dario Tranchitella
2374176faf refactor(konnectivity): updating repository and tag 2022-08-31 23:36:58 +02:00
Dario Tranchitella
aceeced53a chore(helm)!: support for topology spread constraints 2022-08-31 23:35:54 +02:00
Dario Tranchitella
53c9102ef3 chore(kustomize)!: support for topology spread constraints 2022-08-31 23:35:54 +02:00
Dario Tranchitella
15e1cf7d80 feat: support for topology spread constraints 2022-08-31 23:35:54 +02:00
Dario Tranchitella
f853f25195 refactor: adding further context to error reporting 2022-08-30 16:22:06 +02:00
Dario Tranchitella
5acdc4cc41 refactor(datastore): checking the ca private key for the etcd driver 2022-08-30 16:22:06 +02:00
Dario Tranchitella
360e8200cb chore(helm)!: support for tcp specific data store 2022-08-30 16:22:06 +02:00
Dario Tranchitella
b0c6972873 chore(kustomize)!: support for tcp specific data store 2022-08-30 16:22:06 +02:00
Dario Tranchitella
682006f8aa chore(dockerfile): support for tcp specific data store 2022-08-30 16:22:06 +02:00
Dario Tranchitella
d59f494a69 feat: support for tcp specific data store 2022-08-30 16:22:06 +02:00
Dario Tranchitella
7602d5d803 chore(helm)!: kube-proxy image aligned to tcp version and allowing override 2022-08-27 23:17:01 +02:00
Dario Tranchitella
4c04edbfe8 chore(kustomize)!: kube-proxy image aligned to tcp version and allowing override 2022-08-27 23:17:01 +02:00
Dario Tranchitella
cce4225e07 feat(addons): kube-proxy image aligned to tcp version and allowing override 2022-08-27 23:17:01 +02:00
Dario Tranchitella
10f0021780 chore(controller-gen): upgrading to 0.9.2 2022-08-27 23:17:01 +02:00
Dario Tranchitella
b99a685e2d chore: updating manifests to latest descriptions 2022-08-27 15:39:30 +02:00
Dario Tranchitella
a8de97e442 chore(gomod): upgrading dependencies to k8s 1.25 2022-08-27 15:39:30 +02:00
Dario Tranchitella
8273d7c7b4 chore(golangci-lint): updating to v1.49.0 2022-08-27 15:16:31 +02:00
Dario Tranchitella
a9ea894e32 chore(kustomize)!: storage homogeneity 2022-08-27 15:16:31 +02:00
Dario Tranchitella
ff780aaba6 feat(helm)!: storage homogeneity 2022-08-27 15:16:31 +02:00
Dario Tranchitella
1ddaeccc94 feat: storage homogeneity 2022-08-27 15:16:31 +02:00
245 changed files with 20640 additions and 13012 deletions

View File

@@ -9,28 +9,30 @@ on:
jobs:
golangci:
name: lint
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: '1.18'
go-version: '1.19'
check-latest: true
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v2.3.0
uses: golangci/golangci-lint-action@v3.2.0
with:
version: v1.45.2
version: v1.49.0
only-new-issues: false
args: --timeout 5m --config .golangci.yml
diff:
name: diff
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: '1.18'
go-version: '1.19'
check-latest: true
- run: make yaml-installation-file
- name: Checking if YAML installer file is not aligned
run: if [[ $(git diff | wc -l) -gt 0 ]]; then echo ">>> Untracked generated files have not been committed" && git --no-pager diff && exit 1; fi
@@ -38,3 +40,8 @@ jobs:
run: test -z "$(git ls-files --others --exclude-standard 2> /dev/null)"
- name: Checking if source code is not formatted
run: test -z "$(git diff 2> /dev/null)"
- run: make apidoc
- name: Checking if generated API documentation files are not aligned
run: if [[ $(git diff | wc -l) -gt 0 ]]; then echo ">>> Untracked generated files have not been committed" && git --no-pager diff && exit 1; fi
- name: Checking if generated API documentation generated untracked files
run: test -z "$(git ls-files --others --exclude-standard 2> /dev/null)"

View File

@@ -7,12 +7,29 @@ on:
jobs:
docker-ci:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Generate build-args
id: build-args
run: |
# Declare vars for internal use
VERSION=$(git describe --abbrev=0 --tags)
GIT_HEAD_COMMIT=$(git rev-parse --short HEAD)
GIT_TAG_COMMIT=$(git rev-parse --short $VERSION)
GIT_MODIFIED_1=$(git diff $GIT_HEAD_COMMIT $GIT_TAG_COMMIT --quiet && echo "" || echo ".dev")
GIT_MODIFIED_2=$(git diff --quiet && echo "" || echo ".dirty")
# Export to GH_ENV
echo "GIT_LAST_TAG=$VERSION" >> $GITHUB_ENV
echo "GIT_HEAD_COMMIT=$GIT_HEAD_COMMIT" >> $GITHUB_ENV
echo "GIT_TAG_COMMIT=$GIT_TAG_COMMIT" >> $GITHUB_ENV
echo "GIT_MODIFIED=$(echo "$GIT_MODIFIED_1""$GIT_MODIFIED_2")" >> $GITHUB_ENV
echo "GIT_REPO=$(git config --get remote.origin.url)" >> $GITHUB_ENV
echo "BUILD_DATE=$(git log -1 --format="%at" | xargs -I{} date -d @{} +%Y-%m-%dT%H:%M:%S)" >> $GITHUB_ENV
- name: Docker meta
id: meta
uses: docker/metadata-action@v3

View File

@@ -29,14 +29,15 @@ on:
jobs:
kind:
name: Kubernetes
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- uses: actions/setup-go@v2
- uses: actions/setup-go@v3
with:
go-version: '1.18'
go-version: '1.19'
check-latest: true
- run: |
sudo apt-get update
sudo apt-get install -y golang-cfssl

View File

@@ -8,8 +8,18 @@ on:
branches: [ "*" ]
jobs:
diff:
name: diff
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- run: make -C charts/kamaji docs
- name: Checking if Helm docs is not aligned
run: if [[ $(git diff | wc -l) -gt 0 ]]; then echo ">>> Untracked changes have not been committed" && git --no-pager diff && exit 1; fi
lint:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
- uses: azure/setup-helm@v1
@@ -19,7 +29,7 @@ jobs:
run: helm lint ./charts/kamaji
release:
if: startsWith(github.ref, 'refs/tags/helm-v')
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v2
- name: Publish Helm chart

.gitignore
View File

@@ -29,4 +29,5 @@ bin
**/*.key
**/*.pem
**/*.csr
**/server-csr.json
.DS_Store

View File

@@ -14,9 +14,6 @@ linters:
- wrapcheck
- gomnd
- scopelint
- golint
- interfacer
- maligned
- varnamelen
- testpackage
- tagliatelle
@@ -27,6 +24,10 @@ linters:
- exhaustivestruct
- wsl
- exhaustive
- nosprintfhostport
- nonamedreturns
- interfacebloat
- exhaustruct
- lll
- gosec
- gomoddirectives
@@ -34,6 +35,14 @@ linters:
- gochecknoinits
- funlen
- dupl
- maintidx
- cyclop
# deprecated linters
- deadcode
- golint
- interfacer
- structcheck
- varcheck
- nosnakecase
- ifshort
- maligned
enable-all: true

View File

@@ -1,13 +1,5 @@
# Build the manager binary
FROM golang:1.18 as builder
ARG TARGETARCH
ARG GIT_HEAD_COMMIT
ARG GIT_TAG_COMMIT
ARG GIT_LAST_TAG
ARG GIT_MODIFIED
ARG GIT_REPO
ARG BUILD_DATE
FROM golang:1.19 as builder
WORKDIR /workspace
# Copy the Go Modules manifests
@@ -19,21 +11,30 @@ RUN go mod download
# Copy the go source
COPY main.go main.go
COPY cmd/ cmd/
COPY api/ api/
COPY controllers/ controllers/
COPY internal/ internal/
COPY indexers/ indexers/
# Build
ARG TARGETARCH
ARG GIT_HEAD_COMMIT
ARG GIT_TAG_COMMIT
ARG GIT_LAST_TAG
ARG GIT_MODIFIED
ARG GIT_REPO
ARG BUILD_DATE
RUN CGO_ENABLED=0 GOOS=linux GOARCH=$TARGETARCH go build \
-ldflags "-X github.com/clastix/kamaji/internal.GitRepo=$GIT_REPO -X github.com/clastix/kamaji/internal.GitTag=$GIT_LAST_TAG -X github.com/clastix/kamaji/internal.GitCommit=$GIT_HEAD_COMMIT -X github.com/clastix/kamaji/internal.GitDirty=$GIT_MODIFIED -X github.com/clastix/kamaji/internal.BuildTime=$BUILD_DATE" \
-a -o manager main.go
-a -o kamaji main.go
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
COPY ./kamaji.yaml .
COPY --from=builder /workspace/kamaji .
USER 65532:65532
ENTRYPOINT ["/manager"]
ENTRYPOINT ["/kamaji"]

View File

@@ -3,7 +3,7 @@
# To re-generate a bundle for another specific version without changing the standard setup, you can:
# - use the VERSION as arg of the bundle target (e.g make bundle VERSION=0.0.2)
# - use environment variables to overwrite this value (e.g export VERSION=0.0.2)
VERSION ?= 0.0.1
VERSION ?= 0.2.1
# CHANNELS define the bundle channels used in the bundle.
# Add a new line here if you would like to change its default config. (E.g CHANNELS = "candidate,fast,stable")
@@ -36,9 +36,7 @@ IMAGE_TAG_BASE ?= clastix.io/operator
BUNDLE_IMG ?= $(IMAGE_TAG_BASE)-bundle:v$(VERSION)
# Image URL to use all building/pushing image targets
IMG ?= clastix/kamaji:latest
# Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
CRD_OPTIONS ?= "crd:trivialVersions=true,preserveUnknownFields=false"
IMG ?= clastix/kamaji:v$(VERSION)
# Get the currently used golang install path (in GOPATH/bin, unless GOBIN is set)
ifeq (,$(shell go env GOBIN))
@@ -79,7 +77,7 @@ helm: ## Download helm locally if necessary.
GINKGO = $(shell pwd)/bin/ginkgo
ginkgo: ## Download ginkgo locally if necessary.
$(call go-install-tool,$(GINKGO),github.com/onsi/ginkgo/ginkgo@v1.16.5)
$(call go-install-tool,$(GINKGO),github.com/onsi/ginkgo/v2/ginkgo@v2.6.0)
KIND = $(shell pwd)/bin/kind
kind: ## Download kind locally if necessary.
@@ -87,30 +85,69 @@ kind: ## Download kind locally if necessary.
CONTROLLER_GEN = $(shell pwd)/bin/controller-gen
controller-gen: ## Download controller-gen locally if necessary.
$(call go-install-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen@v0.6.1)
$(call go-install-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen@v0.9.2)
GOLANGCI_LINT = $(shell pwd)/bin/golangci-lint
golangci-lint: ## Download golangci-lint locally if necessary.
$(call go-install-tool,$(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint@v1.49.0)
KUSTOMIZE = $(shell pwd)/bin/kustomize
kustomize: ## Download kustomize locally if necessary.
$(call install-kustomize,$(KUSTOMIZE),3.8.7)
APIDOCS_GEN = $(shell pwd)/bin/crdoc
apidocs-gen: ## Download crdoc locally if necessary.
$(call go-install-tool,$(APIDOCS_GEN),fybrik.io/crdoc@latest)
##@ Development
manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cp config/crd/bases/kamaji.clastix.io_tenantcontrolplanes.yaml charts/kamaji/crds/tenantcontrolplane.yaml
cp config/crd/bases/kamaji.clastix.io_datastores.yaml charts/kamaji/crds/datastore.yaml
$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
generate: controller-gen ## Generate code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
$(CONTROLLER_GEN) object:headerFile="hack/boilerplate.go.txt" paths="./..."
golint: golangci-lint ## Linting the code according to the styling guide.
$(GOLANGCI_LINT) run -c .golangci.yml
test:
go test ./... -coverprofile cover.out
_datastore-mysql:
$(MAKE) NAME=$(NAME) -C deploy/kine/mysql mariadb
kubectl apply -f $(shell pwd)/config/samples/kamaji_v1alpha1_datastore_mysql_$(NAME).yaml
datastore-mysql:
$(MAKE) NAME=bronze _datastore-mysql
$(MAKE) NAME=silver _datastore-mysql
$(MAKE) NAME=gold _datastore-mysql
_datastore-postgres:
$(MAKE) NAME=$(NAME) NAMESPACE=postgres-system -C deploy/kine/postgresql postgresql
kubectl apply -f $(shell pwd)/config/samples/kamaji_v1alpha1_datastore_postgresql_$(NAME).yaml
datastore-postgres:
$(MAKE) NAME=bronze _datastore-postgres
$(MAKE) NAME=silver _datastore-postgres
$(MAKE) NAME=gold _datastore-postgres
_datastore-etcd:
$(HELM) upgrade --install etcd-$(NAME) clastix/kamaji-etcd --create-namespace -n etcd-system --set datastore.enabled=true
datastore-etcd: helm
$(HELM) repo add clastix https://clastix.github.io/charts
$(HELM) repo update
$(MAKE) NAME=bronze _datastore-etcd
$(MAKE) NAME=silver _datastore-etcd
$(MAKE) NAME=gold _datastore-etcd
datastores: datastore-mysql datastore-etcd datastore-postgres ## Install all Kamaji DataStores with multiple drivers, and different tiers.
##@ Build
# Get information about git current status
GIT_HEAD_COMMIT ?= $$(git rev-parse --short HEAD)
GIT_TAG_COMMIT ?= $$(git rev-parse --short $(VERSION))
GIT_TAG_COMMIT ?= $$(git rev-parse --short v$(VERSION))
GIT_MODIFIED_1 ?= $$(git diff $(GIT_HEAD_COMMIT) $(GIT_TAG_COMMIT) --quiet && echo "" || echo ".dev")
GIT_MODIFIED_2 ?= $$(git diff --quiet && echo "" || echo ".dirty")
GIT_MODIFIED ?= $$(echo "$(GIT_MODIFIED_1)$(GIT_MODIFIED_2)")
@@ -136,6 +173,15 @@ docker-push: ## Push docker image with the manager.
##@ Deployment
metallb:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
kubectl apply -f https://kind.sigs.k8s.io/examples/loadbalancer/metallb-config.yaml
echo ""
docker network inspect -f '{{.IPAM.Config}}' kind
cert-manager:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml
dev: generate manifests uninstall install rbac ## Full installation for development purposes
go fmt ./...
@@ -244,6 +290,12 @@ env:
##@ e2e
.PHONY: e2e
e2e: env load helm ginkgo ## Create a KinD cluster, install Kamaji on it and run the test suite.
e2e: env load helm ginkgo cert-manager ## Create a KinD cluster, install Kamaji on it and run the test suite.
$(HELM) upgrade --debug --install kamaji ./charts/kamaji --create-namespace --namespace kamaji-system --set "image.pullPolicy=Never"
$(MAKE) datastores
$(GINKGO) -v ./e2e
##@ Document
apidoc: apidocs-gen
$(APIDOCS_GEN) crdoc --resources config/crd/bases --output docs/content/reference/api.md --template docs/templates/reference-cr.tmpl

View File

@@ -16,12 +16,19 @@ resources:
kind: TenantControlPlane
path: github.com/clastix/kamaji/api/v1alpha1
version: v1alpha1
webhooks:
defaulting: true
validation: true
webhookVersion: v1
- api:
crdVersion: v1
namespaced: false
domain: clastix.io
group: kamaji
kind: DataStore
path: github.com/clastix/kamaji/api/v1alpha1
version: v1alpha1
webhooks:
defaulting: true
validation: true
webhookVersion: v1
version: "3"

View File

@@ -20,39 +20,14 @@ Global hyper-scalers are leading the Managed Kubernetes space, while other cloud
**Kamaji** aims to solve these pains by leveraging multi-tenancy and simplifying how to run multiple control planes on the same infrastructure with a fraction of the operational burden.
## How it works
Kamaji turns any Kubernetes cluster into an _“admin cluster”_ to orchestrate other Kubernetes clusters called _“tenant clusters”_. What makes Kamaji special is that Control Planes of _“tenant clusters”_ are just regular pods running in the _“admin cluster”_ instead of dedicated Virtual Machines. This solution makes running control planes at scale cheaper and easier to deploy and operate. View [Core Concepts](./docs/concepts.md) for a deeper understanding of principles behind Kamaji's design.
Kamaji turns any Kubernetes cluster into an _“admin cluster”_ to orchestrate other Kubernetes clusters called _“tenant clusters”_. Kamaji is special because the Control Planes of _“tenant clusters”_ are just regular pods instead of dedicated Virtual Machines. This solution makes running Control Planes at scale cheaper and easier to deploy and operate.
<p align="center">
<img src="assets/kamaji-light.png#gh-light-mode-only" />
</p>
<p align="center">
<img src="assets/kamaji-dark.png#gh-dark-mode-only" />
</p>
All the tenant clusters built with Kamaji are fully compliant CNCF Kubernetes clusters and are compatible with the standard Kubernetes toolchains everybody knows and loves.
<p align="center">
<img src="assets/screenshot.png" />
</p>
![Architecture](docs/content/images/kamaji-light.png#gh-light-mode-only)
![Architecture](docs/content/images/kamaji-dark.png#gh-dark-mode-only)
## Getting started
Please refer to the [Getting Started guide](./docs/getting-started-with-kamaji.md) to deploy a minimal setup of Kamaji on KinD.
> This project is still in the early development stage which means it's not ready for production as APIs, commands, flags, etc. are subject to change, but also that your feedback can still help to shape it. Please try it out and let us know what you like, dislike, what works, what doesn't, etc.
## Use cases
Kamaji project has been initially started as a solution for actual and common problems such as minimizing the Total Cost of Ownership while running Kubernetes at large scale. However, it can open a wider range of use cases.
Here are a few:
- **Managed Kubernetes:** enable companies to provide Cloud Native Infrastructure with ease by introducing a strong separation of concerns between management and workloads. Centralize clusters management, monitoring, and observability by leaving developers to focus on applications, increase productivity and reduce operational costs.
- **Kubernetes as a Service:** provide Kubernetes clusters in a self-service fashion by running management and workloads on different infrastructures with the option of Bring Your Own Device, BYOD.
- **Control Plane as a Service:** provide multiple Kubernetes control planes running on top of a single Kubernetes cluster. Tenants who use namespaces based isolation often still need access to cluster wide resources like Cluster Roles, Admission Webhooks, or Custom Resource Definitions.
- **Edge Computing:** distribute Kubernetes workloads across edge computing locations without having to manage multiple clusters across various providers. Centralize management of hundreds of control planes while leaving workloads to run isolated on their own dedicated infrastructure.
- **Cluster Simulation:** check new Kubernetes API or experimental flag or a new tool without impacting production operations. Kamaji will let you simulate such things in a safe and controlled environment.
- **Workloads Testing:** check the behaviour of your workloads on different and multiple versions of Kubernetes with ease by deploying multiple Control Planes in a single cluster.
Please refer to the [Getting Started guide](https://kamaji.clastix.io/getting-started/) to deploy a minimal setup of Kamaji on KinD.
## Features
@@ -65,7 +40,8 @@ Here are a few:
## Roadmap
- [ ] Benchmarking and stress-test
- [x] Benchmarking
- [ ] Stress-test
- [x] Support for dynamic address allocation on native Load Balancer
- [x] Zero Downtime Tenant Control Plane upgrade
- [x] `konnectivity` integration
@@ -74,37 +50,17 @@ Here are a few:
- [ ] Custom Prometheus metrics for monitoring and alerting
- [x] `kine` integration for MySQL as datastore
- [x] `kine` integration for PostgreSQL as datastore
- [ ] Deeper `kubeadm` integration
- [ ] Pooling of multiple `etcd` datastores
- [x] Pool of multiple datastores
- [x] Seamless migration between datastore with the same driver
- [ ] Automatic assigning of Tenant Control Plane to a datastore
- [ ] Autoscaling of Tenant Control Plane pods
## Documentation
Please, check the project's [documentation](./docs/) for getting started with Kamaji.
Please, check the project's [documentation](https://kamaji.clastix.io/) for getting started with Kamaji.
## Contributions
Kamaji is Open Source with Apache 2 license and any contribution is welcome.
## Community
Join the [Kubernetes Slack Workspace](https://slack.k8s.io/) and the [`#kamaji`](https://kubernetes.slack.com/archives/C03GLTTMWNN) channel to meet end-users and contributors.
## FAQs
Q. What does Kamaji means?
A. Kamaji is named as the character _Kamaji_ from the Japanese movie [_Spirited Away_](https://en.wikipedia.org/wiki/Spirited_Away).
Q. Is Kamaji another Kubernetes distribution?
A. No, Kamaji is a Kubernetes Operator you can install on top of any Kubernetes cluster to provide hundreds of managed Kubernetes clusters as a service. We tested Kamaji on vanilla Kubernetes 1.22+, KinD, and Azure AKS. We expect it to work smoothly on other Kubernetes distributions. The tenant clusters made with Kamaji are conformant CNCF Kubernetes clusters as we leverage on [`kubeadm`](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/).
Q. Is it safe to run Kubernetes control plane components in a pod instead of dedicated virtual machines?
A. Yes, the tenant control plane components are packaged in the same way they are running in bare metal or virtual nodes. We leverage the `kubeadm` code to set up the control plane components as they were running on their own server. The unchanged images of upstream `kube-apiserver`, `kube-scheduler`, and `kube-controller-manager` are used.
Q. You already provide a Kubernetes multi-tenancy solution with [Capsule](https://capsule.clastix.io). Why does Kamaji matter?
A. A multi-tenancy solution, like Capsule shares the Kubernetes control plane among all tenants keeping tenant namespaces isolated by policies. While the solution is the right choice by balancing between features and ease of usage, there are cases where a tenant user requires access to the control plane, for example, when a tenant requires to manage CRDs on his own. With Kamaji, you can provide cluster admin permissions to the tenant.
Q. Well you convinced me, how to get a try?
A. It is possible to get started with Kamaji on a laptop with [KinD](./docs/getting-started-with-kamaji.md) installed.

View File

@@ -30,7 +30,7 @@ func (in *ContentRef) GetContent(ctx context.Context, client client.Client) ([]b
return nil, err
}
v, ok := secret.Data[secretRef.KeyPath]
v, ok := secret.Data[string(secretRef.KeyPath)]
if !ok {
return nil, fmt.Errorf("secret %s does not have key %s", namespacedName.String(), secretRef.KeyPath)
}

View File

@@ -0,0 +1,57 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
"fmt"
"strings"
"github.com/go-logr/logr"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
)
//+kubebuilder:webhook:path=/validate--v1-secret,mutating=false,failurePolicy=ignore,sideEffects=None,groups="",resources=secrets,verbs=delete,versions=v1,name=vdatastoresecrets.kb.io,admissionReviewVersions=v1
type dataStoreSecretValidator struct {
log logr.Logger
client client.Client
}
func (d *dataStoreSecretValidator) ValidateCreate(context.Context, runtime.Object) error {
return nil
}
func (d *dataStoreSecretValidator) ValidateUpdate(context.Context, runtime.Object, runtime.Object) error {
return nil
}
func (d *dataStoreSecretValidator) ValidateDelete(ctx context.Context, obj runtime.Object) error {
secret := obj.(*corev1.Secret) //nolint:forcetypeassert
dsList := &DataStoreList{}
if err := d.client.List(ctx, dsList, client.MatchingFieldsSelector{Selector: fields.OneTermEqualSelector(DatastoreUsedSecretNamespacedNameKey, fmt.Sprintf("%s/%s", secret.GetNamespace(), secret.GetName()))}); err != nil {
return err
}
if len(dsList.Items) > 0 {
var res []string
for _, ds := range dsList.Items {
res = append(res, ds.GetName())
}
return fmt.Errorf("the Secret is used by the following kamajiv1alpha1.DataStores and cannot be deleted (%s)", strings.Join(res, ", "))
}
return nil
}
func (d *dataStoreSecretValidator) Default(context.Context, runtime.Object) error {
return nil
}

View File

@@ -8,7 +8,9 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
type Driver string //+kubebuilder:validation:Enum=etcd;MySQL;PostgreSQL
// +kubebuilder:validation:Enum=etcd;MySQL;PostgreSQL
type Driver string
var (
EtcdDriver Driver = "etcd"
@@ -16,13 +18,17 @@ var (
KinePostgreSQLDriver Driver = "PostgreSQL"
)
// +kubebuilder:validation:MinItems=1
type Endpoints []string
// DataStoreSpec defines the desired state of DataStore.
type DataStoreSpec struct {
// The driver to use to connect to the shared datastore.
Driver Driver `json:"driver"`
// List of the endpoints to connect to the shared datastore.
// No need for protocol, just bare IP/FQDN and port.
Endpoints []string `json:"endpoints"` //+kubebuilder:validation:MinLength=1
Endpoints Endpoints `json:"endpoints"`
// In case of authentication enabled for the given data store, specifies the username and password pair.
// This value is optional.
BasicAuth *BasicAuth `json:"basicAuth,omitempty"`
@@ -62,11 +68,14 @@ type ContentRef struct {
SecretRef *SecretReference `json:"secretReference,omitempty"`
}
// +kubebuilder:validation:MinLength=1
type secretReferKeyPath string
type SecretReference struct {
corev1.SecretReference `json:",inline"`
// Name of the key for the given Secret reference where the content is stored.
// This value is mandatory.
KeyPath string `json:"keyPath"`
KeyPath secretReferKeyPath `json:"keyPath"`
}
// DataStoreStatus defines the observed state of DataStore.

View File

@@ -0,0 +1,185 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
"fmt"
"github.com/go-logr/logr"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
)
//+kubebuilder:webhook:path=/mutate-kamaji-clastix-io-v1alpha1-datastore,mutating=true,failurePolicy=fail,sideEffects=None,groups=kamaji.clastix.io,resources=datastores,verbs=create;update,versions=v1alpha1,name=mdatastore.kb.io,admissionReviewVersions=v1
//+kubebuilder:webhook:path=/validate-kamaji-clastix-io-v1alpha1-datastore,mutating=false,failurePolicy=fail,sideEffects=None,groups=kamaji.clastix.io,resources=datastores,verbs=create;update;delete,versions=v1alpha1,name=vdatastore.kb.io,admissionReviewVersions=v1
func (in *DataStore) SetupWebhookWithManager(mgr ctrl.Manager) error {
secretValidator := &dataStoreSecretValidator{
log: mgr.GetLogger().WithName("datastore-secret-webhook"),
client: mgr.GetClient(),
}
if err := ctrl.NewWebhookManagedBy(mgr).For(&corev1.Secret{}).WithValidator(secretValidator).Complete(); err != nil {
return err
}
dsValidator := &dataStoreValidator{
log: mgr.GetLogger().WithName("datastore-webhook"),
client: mgr.GetClient(),
}
return ctrl.NewWebhookManagedBy(mgr).
For(in).
WithValidator(dsValidator).
WithDefaulter(dsValidator).
Complete()
}
type dataStoreValidator struct {
log logr.Logger
client client.Client
}
func (d *dataStoreValidator) ValidateCreate(ctx context.Context, obj runtime.Object) error {
ds, ok := obj.(*DataStore)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.DataStore")
}
if err := d.validate(ctx, ds); err != nil {
return err
}
return nil
}
func (d *dataStoreValidator) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) error {
old, ok := oldObj.(*DataStore)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.DataStore")
}
ds, ok := newObj.(*DataStore)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.DataStore")
}
d.log.Info("validate update", "name", ds.GetName())
if ds.Spec.Driver != old.Spec.Driver {
return fmt.Errorf("driver of a DataStore cannot be changed")
}
if err := d.validate(ctx, ds); err != nil {
return err
}
return nil
}
func (d *dataStoreValidator) ValidateDelete(ctx context.Context, obj runtime.Object) error {
ds, ok := obj.(*DataStore)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.DataStore")
}
tcpList := &TenantControlPlaneList{}
if err := d.client.List(ctx, tcpList, client.MatchingFieldsSelector{Selector: fields.OneTermEqualSelector(TenantControlPlaneUsedDataStoreKey, ds.GetName())}); err != nil {
return err
}
if len(tcpList.Items) > 0 {
return fmt.Errorf("the DataStore is used by multiple TenantControlPlanes and cannot be removed")
}
return nil
}
func (d *dataStoreValidator) Default(context.Context, runtime.Object) error {
return nil
}
func (d *dataStoreValidator) validate(ctx context.Context, ds *DataStore) error {
if ds.Spec.BasicAuth != nil {
if err := d.validateBasicAuth(ctx, ds); err != nil {
return err
}
}
if err := d.validateTLSConfig(ctx, ds); err != nil {
return err
}
return nil
}
func (d *dataStoreValidator) validateBasicAuth(ctx context.Context, ds *DataStore) error {
if err := d.validateContentReference(ctx, ds.Spec.BasicAuth.Password); err != nil {
return fmt.Errorf("basic-auth password is not valid, %w", err)
}
if err := d.validateContentReference(ctx, ds.Spec.BasicAuth.Username); err != nil {
return fmt.Errorf("basic-auth username is not valid, %w", err)
}
return nil
}
func (d *dataStoreValidator) validateTLSConfig(ctx context.Context, ds *DataStore) error {
if err := d.validateContentReference(ctx, ds.Spec.TLSConfig.CertificateAuthority.Certificate); err != nil {
return fmt.Errorf("CA certificate is not valid, %w", err)
}
if ds.Spec.Driver == EtcdDriver {
if ds.Spec.TLSConfig.CertificateAuthority.PrivateKey == nil {
return fmt.Errorf("CA private key is required when using the etcd driver")
}
}
if ds.Spec.TLSConfig.CertificateAuthority.PrivateKey != nil {
if err := d.validateContentReference(ctx, *ds.Spec.TLSConfig.CertificateAuthority.PrivateKey); err != nil {
return fmt.Errorf("CA private key is not valid, %w", err)
}
}
if err := d.validateContentReference(ctx, ds.Spec.TLSConfig.ClientCertificate.Certificate); err != nil {
return fmt.Errorf("client certificate is not valid, %w", err)
}
if err := d.validateContentReference(ctx, ds.Spec.TLSConfig.ClientCertificate.PrivateKey); err != nil {
return fmt.Errorf("client private key is not valid, %w", err)
}
return nil
}
func (d *dataStoreValidator) validateContentReference(ctx context.Context, ref ContentRef) error {
switch {
case len(ref.Content) > 0:
return nil
case ref.SecretRef == nil:
return fmt.Errorf("the Secret reference is mandatory when bare content is not specified")
case len(ref.SecretRef.SecretReference.Name) == 0:
return fmt.Errorf("the Secret reference name is mandatory")
case len(ref.SecretRef.SecretReference.Namespace) == 0:
return fmt.Errorf("the Secret reference namespace is mandatory")
}
if err := d.client.Get(ctx, types.NamespacedName{Name: ref.SecretRef.SecretReference.Name, Namespace: ref.SecretRef.SecretReference.Namespace}, &corev1.Secret{}); err != nil {
if errors.IsNotFound(err) {
return fmt.Errorf("secret %s/%s is not found", ref.SecretRef.SecretReference.Namespace, ref.SecretRef.SecretReference.Name)
}
return err
}
return nil
}

View File

@@ -2,8 +2,9 @@
// SPDX-License-Identifier: Apache-2.0
// Package v1alpha1 contains API Schema definitions for the kamaji v1alpha1 API group
//+kubebuilder:object:generate=true
//+groupName=kamaji.clastix.io
// +kubebuilder:object:generate=true
// +groupName=kamaji.clastix.io
//nolint
package v1alpha1
import (

View File

@@ -0,0 +1,68 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
"fmt"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
DatastoreUsedSecretNamespacedNameKey = "secretRef"
)
type DatastoreUsedSecret struct{}
func (d *DatastoreUsedSecret) SetupWithManager(ctx context.Context, mgr controllerruntime.Manager) error {
return mgr.GetFieldIndexer().IndexField(ctx, d.Object(), d.Field(), d.ExtractValue())
}
func (d *DatastoreUsedSecret) Object() client.Object {
return &DataStore{}
}
func (d *DatastoreUsedSecret) Field() string {
return DatastoreUsedSecretNamespacedNameKey
}
func (d *DatastoreUsedSecret) ExtractValue() client.IndexerFunc {
return func(object client.Object) (res []string) {
ds := object.(*DataStore) //nolint:forcetypeassert
if ds.Spec.BasicAuth != nil {
if ds.Spec.BasicAuth.Username.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.BasicAuth.Username.SecretRef))
}
if ds.Spec.BasicAuth.Password.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.BasicAuth.Password.SecretRef))
}
}
if ds.Spec.TLSConfig.CertificateAuthority.Certificate.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.CertificateAuthority.Certificate.SecretRef))
}
if ds.Spec.TLSConfig.CertificateAuthority.PrivateKey != nil && ds.Spec.TLSConfig.CertificateAuthority.PrivateKey.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.CertificateAuthority.PrivateKey.SecretRef))
}
if ds.Spec.TLSConfig.ClientCertificate.Certificate.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.ClientCertificate.Certificate.SecretRef))
}
if ds.Spec.TLSConfig.ClientCertificate.PrivateKey.SecretRef != nil {
res = append(res, d.namespacedName(*ds.Spec.TLSConfig.ClientCertificate.PrivateKey.SecretRef))
}
return res
}
}
func (d *DatastoreUsedSecret) namespacedName(ref SecretReference) string {
return fmt.Sprintf("%s/%s", ref.Namespace, ref.Name)
}
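As a usage note, the index above lets a controller resolve every DataStore referencing a given Secret without scanning the whole collection. A minimal sketch, assuming it lives in the same package (reusing its imports) and that the index has been registered with the manager; the helper name is hypothetical:
// Hypothetical helper: list every DataStore whose basic-auth or TLS material
// references the Secret namespace/name, via the field index registered above.
func dataStoresUsingSecret(ctx context.Context, c client.Client, namespace, name string) (DataStoreList, error) {
	var list DataStoreList
	err := c.List(ctx, &list, client.MatchingFields{
		DatastoreUsedSecretNamespacedNameKey: fmt.Sprintf("%s/%s", namespace, name),
	})
	return list, err
}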

View File

@@ -0,0 +1,37 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
TenantControlPlaneUsedDataStoreKey = "status.storage.dataStoreName"
)
type TenantControlPlaneStatusDataStore struct{}
func (t *TenantControlPlaneStatusDataStore) Object() client.Object {
return &TenantControlPlane{}
}
func (t *TenantControlPlaneStatusDataStore) Field() string {
return TenantControlPlaneUsedDataStoreKey
}
func (t *TenantControlPlaneStatusDataStore) ExtractValue() client.IndexerFunc {
return func(object client.Object) []string {
tcp := object.(*TenantControlPlane) //nolint:forcetypeassert
return []string{tcp.Status.Storage.DataStoreName}
}
}
func (t *TenantControlPlaneStatusDataStore) SetupWithManager(ctx context.Context, mgr controllerruntime.Manager) error {
return mgr.GetFieldIndexer().IndexField(ctx, t.Object(), t.Field(), t.ExtractValue())
}
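This second index is the natural input for re-enqueueing: given a changed DataStore, a controller can resolve every TenantControlPlane bound to it, and wrap the result in a handler.EnqueueRequestsFromMapFunc. A hedged sketch, same package, additionally importing types from apimachinery and reconcile from controller-runtime; the function name is hypothetical:
// Hypothetical resolver: map a DataStore name to reconcile requests for all
// TenantControlPlane resources whose status points at it, via the index above.
func tenantControlPlaneRequestsForDataStore(ctx context.Context, c client.Client, dataStoreName string) ([]reconcile.Request, error) {
	var list TenantControlPlaneList
	if err := c.List(ctx, &list, client.MatchingFields{
		TenantControlPlaneUsedDataStoreKey: dataStoreName,
	}); err != nil {
		return nil, err
	}
	requests := make([]reconcile.Request, 0, len(list.Items))
	for _, tcp := range list.Items {
		requests = append(requests, reconcile.Request{
			NamespacedName: types.NamespacedName{Name: tcp.Name, Namespace: tcp.Namespace},
		})
	}
	return requests, nil
}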

View File

@@ -1,17 +0,0 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func (in AddonStatus) GetChecksum() string {
return in.Checksum
}
func (in *AddonStatus) SetChecksum(checksum string) {
in.LastUpdate = metav1.Now()
in.Checksum = checksum
}

View File

@@ -8,8 +8,6 @@ import (
corev1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/clastix/kamaji/internal/etcd"
)
// APIServerCertificatesStatus defines the observed state of ETCD Certificate for API server.
@@ -57,41 +55,31 @@ type CertificatesStatus struct {
ETCD *ETCDCertificatesStatus `json:"etcd,omitempty"`
}
// ETCDStatus defines the observed state of ETCDStatus.
type ETCDStatus struct {
Role etcd.Role `json:"role,omitempty"`
User etcd.User `json:"user,omitempty"`
}
type SQLCertificateStatus struct {
type DataStoreCertificateStatus struct {
SecretName string `json:"secretName,omitempty"`
Checksum string `json:"checksum,omitempty"`
LastUpdate metav1.Time `json:"lastUpdate,omitempty"`
}
type SQLConfigStatus struct {
type DataStoreConfigStatus struct {
SecretName string `json:"secretName,omitempty"`
Checksum string `json:"checksum,omitempty"`
}
type SQLSetupStatus struct {
type DataStoreSetupStatus struct {
Schema string `json:"schema,omitempty"`
User string `json:"user,omitempty"`
LastUpdate metav1.Time `json:"lastUpdate,omitempty"`
Checksum string `json:"checksum,omitempty"`
}
type KineStatus struct {
Driver string `json:"driver,omitempty"`
Config SQLConfigStatus `json:"config,omitempty"`
Setup SQLSetupStatus `json:"setup,omitempty"`
Certificate SQLCertificateStatus `json:"certificate,omitempty"`
}
// StorageStatus defines the observed state of StorageStatus.
type StorageStatus struct {
ETCD *ETCDStatus `json:"etcd,omitempty"`
Kine *KineStatus `json:"kine,omitempty"`
Driver string `json:"driver,omitempty"`
DataStoreName string `json:"dataStoreName,omitempty"`
Config DataStoreConfigStatus `json:"config,omitempty"`
Setup DataStoreSetupStatus `json:"setup,omitempty"`
Certificate DataStoreCertificateStatus `json:"certificate,omitempty"`
}
// KubeconfigStatus contains information about the generated kubeconfig.
@@ -124,15 +112,12 @@ type KubeadmPhaseStatus struct {
// KubeadmPhasesStatus contains the status of the different kubeadm phases action.
type KubeadmPhasesStatus struct {
UploadConfigKubeadm KubeadmPhaseStatus `json:"uploadConfigKubeadm"`
UploadConfigKubelet KubeadmPhaseStatus `json:"uploadConfigKubelet"`
BootstrapToken KubeadmPhaseStatus `json:"bootstrapToken"`
BootstrapToken KubeadmPhaseStatus `json:"bootstrapToken"`
}
type ExternalKubernetesObjectStatus struct {
Name string `json:"name,omitempty"`
Namespace string `json:"namespace,omitempty"`
Checksum string `json:"checksum,omitempty"`
// Last time the k8s object was updated.
LastUpdate metav1.Time `json:"lastUpdate,omitempty"`
}
@@ -157,15 +142,13 @@ type KonnectivityConfigMap struct {
// AddonStatus defines the observed state of an Addon.
type AddonStatus struct {
Enabled bool `json:"enabled"`
Checksum string `json:"checksum,omitempty"`
LastUpdate metav1.Time `json:"lastUpdate,omitempty"`
}
// AddonsStatus defines the observed state of the different Addons.
type AddonsStatus struct {
CoreDNS AddonStatus `json:"coreDNS,omitempty"`
KubeProxy AddonStatus `json:"kubeProxy,omitempty"`
CoreDNS AddonStatus `json:"coreDNS,omitempty"`
KubeProxy AddonStatus `json:"kubeProxy,omitempty"`
Konnectivity KonnectivityStatus `json:"konnectivity,omitempty"`
}
@@ -200,12 +183,14 @@ type KubernetesStatus struct {
Ingress *KubernetesIngressStatus `json:"ingress,omitempty"`
}
// +kubebuilder:validation:Enum=Provisioning;Upgrading;Ready;NotReady
// +kubebuilder:validation:Enum=Provisioning;CertificateAuthorityRotating;Upgrading;Migrating;Ready;NotReady
type KubernetesVersionStatus string
var (
VersionProvisioning KubernetesVersionStatus = "Provisioning"
VersionCARotating KubernetesVersionStatus = "CertificateAuthorityRotating"
VersionUpgrading KubernetesVersionStatus = "Upgrading"
VersionMigrating KubernetesVersionStatus = "Migrating"
VersionReady KubernetesVersionStatus = "Ready"
VersionNotReady KubernetesVersionStatus = "NotReady"
)
@@ -221,6 +206,8 @@ type KubernetesVersion struct {
// KubernetesDeploymentStatus defines the status for the Tenant Control Plane Deployment in the management cluster.
type KubernetesDeploymentStatus struct {
appsv1.DeploymentStatus `json:",inline"`
// Selector is the label selector grouping the Tenant Control Plane Pods, used by the scale subresource.
Selector string `json:"selector"`
// The name of the Deployment for the given cluster.
Name string `json:"name"`
// The namespace which the Deployment for the given cluster is deployed.

View File

@@ -4,6 +4,7 @@
package v1alpha1
import (
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -32,7 +33,23 @@ type NetworkProfileSpec struct {
DNSServiceIPs []string `json:"dnsServiceIPs,omitempty"`
}
// +kubebuilder:validation:Enum=Hostname;InternalIP;ExternalIP;InternalDNS;ExternalDNS
type KubeletPreferredAddressType string
const (
NodeHostName KubeletPreferredAddressType = "Hostname"
NodeInternalIP KubeletPreferredAddressType = "InternalIP"
NodeExternalIP KubeletPreferredAddressType = "ExternalIP"
NodeInternalDNS KubeletPreferredAddressType = "InternalDNS"
NodeExternalDNS KubeletPreferredAddressType = "ExternalDNS"
)
type KubeletSpec struct {
// Ordered list of the preferred NodeAddressTypes to use for kubelet connections.
// Defaults to Hostname, InternalIP, ExternalIP.
// +kubebuilder:default={"Hostname","InternalIP","ExternalIP"}
// +kubebuilder:validation:MinItems=1
PreferredAddressTypes []KubeletPreferredAddressType `json:"preferredAddressTypes,omitempty"`
// CGroupFS defines the cgroup driver for Kubelet
// https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/
CGroupFS CGroupDriver `json:"cgroupfs,omitempty"`
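A hypothetical TenantControlPlane fragment exercising the new field; the spec.kubernetes.kubelet path matches the webhook's tcp.Spec.Kubernetes.Kubelet accessor below, and the values are illustrative only. The webhook rejects duplicated entries:
spec:
  kubernetes:
    kubelet:
      cgroupfs: systemd                   # illustrative cgroup driver value
      preferredAddressTypes:              # ordered; duplicates are rejected
        - InternalIP
        - ExternalIP
        - Hostname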
@@ -76,15 +93,52 @@ type IngressSpec struct {
Hostname string `json:"hostname,omitempty"`
}
// ComponentResourceRequirements describes the compute resource requirements.
type ComponentResourceRequirements struct {
// Limits describes the maximum amount of compute resources allowed.
// More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
Limits corev1.ResourceList `json:"limits,omitempty" protobuf:"bytes,1,rep,name=limits,casttype=ResourceList,castkey=ResourceName"`
// Requests describes the minimum amount of compute resources required.
// If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
// otherwise to an implementation-defined value.
// More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
Requests corev1.ResourceList `json:"requests,omitempty" protobuf:"bytes,2,rep,name=requests,casttype=ResourceList,castkey=ResourceName"`
}
type ControlPlaneComponentsResources struct {
APIServer *corev1.ResourceRequirements `json:"apiServer,omitempty"`
ControllerManager *corev1.ResourceRequirements `json:"controllerManager,omitempty"`
Scheduler *corev1.ResourceRequirements `json:"scheduler,omitempty"`
APIServer *ComponentResourceRequirements `json:"apiServer,omitempty"`
ControllerManager *ComponentResourceRequirements `json:"controllerManager,omitempty"`
Scheduler *ComponentResourceRequirements `json:"scheduler,omitempty"`
}
type DeploymentSpec struct {
// +kubebuilder:default=2
Replicas int32 `json:"replicas,omitempty"`
// NodeSelector is a selector which must be true for the pod to fit on a node.
// Selector which must match a node's labels for the pod to be scheduled on that node.
// More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used
// to run the Tenant Control Plane pod. If no RuntimeClass resource matches the named class, the pod will not be run.
// If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an
// empty definition that uses the default runtime handler.
// More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class
RuntimeClassName string `json:"runtimeClassName,omitempty"`
// Strategy describes how to replace existing pods with new ones for the given Tenant Control Plane.
// The default is RollingUpdate with maxUnavailable set to 0 and maxSurge to 100%, emulating a blue/green rollout.
// +kubebuilder:default={type:"RollingUpdate",rollingUpdate:{maxUnavailable:0,maxSurge:"100%"}}
Strategy appsv1.DeploymentStrategy `json:"strategy,omitempty"`
// If specified, the Tenant Control Plane pod's tolerations.
// More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
// If specified, the Tenant Control Plane pod's scheduling constraints.
// More info: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
Affinity *corev1.Affinity `json:"affinity,omitempty"`
// TopologySpreadConstraints describes how the Tenant Control Plane pods ought to spread across topology
// domains. Scheduler will schedule pods in a way which abides by the constraints.
// When the underlying LabelSelector is nil, the default Kamaji selector for the given Tenant Control Plane is used.
// All topologySpreadConstraints are ANDed.
TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
// Resources defines the amount of memory and CPU to allocate to each component of the Control Plane
// (kube-apiserver, controller-manager, and scheduler).
Resources *ControlPlaneComponentsResources `json:"resources,omitempty"`
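Put together, a hedged sketch of these DeploymentSpec knobs in a manifest; the spec.controlPlane.deployment path is confirmed by the scale-subresource marker later in this file, and the figures are arbitrary:
spec:
  controlPlane:
    deployment:
      replicas: 3
      runtimeClassName: gvisor            # hypothetical RuntimeClass
      tolerations:
        - key: dedicated
          value: tenant-control-planes
          effect: NoSchedule
      resources:
        apiServer:
          requests:
            cpu: 250m
            memory: 512Mi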
@@ -110,34 +164,72 @@ type ServiceSpec struct {
}
// AddonSpec defines the spec for every addon.
type AddonSpec struct{}
type AddonSpec struct {
ImageOverrideTrait `json:",inline"`
}
type ImageOverrideTrait struct {
// ImageRepository sets the container registry to pull images from.
// if not set, the default ImageRepository will be used instead.
ImageRepository string `json:"imageRepository,omitempty"`
// ImageTag allows specifying a tag for the image.
// If this value is set, kubeadm does not automatically change the version of the above components during upgrades.
ImageTag string `json:"imageTag,omitempty"`
}
// ExtraArgs allows adding additional arguments to the given component.
type ExtraArgs []string
type KonnectivityServerSpec struct {
// The port which the Konnectivity server is listening on.
Port int32 `json:"port"`
// Container image version of the Konnectivity server.
// +kubebuilder:default=v0.0.32
Version string `json:"version,omitempty"`
// Container image used by the Konnectivity server.
// +kubebuilder:default=registry.k8s.io/kas-network-proxy/proxy-server
Image string `json:"image,omitempty"`
// Resources define the amount of CPU and memory to allocate to the Konnectivity server.
Resources *ComponentResourceRequirements `json:"resources,omitempty"`
ExtraArgs ExtraArgs `json:"extraArgs,omitempty"`
}
type KonnectivityAgentSpec struct {
// AgentImage defines the container image for Konnectivity's agent.
// +kubebuilder:default=registry.k8s.io/kas-network-proxy/proxy-agent
Image string `json:"image,omitempty"`
// Version for Konnectivity agent.
// +kubebuilder:default=v0.0.32
Version string `json:"version,omitempty"`
ExtraArgs ExtraArgs `json:"extraArgs,omitempty"`
}
// KonnectivitySpec defines the spec for Konnectivity.
type KonnectivitySpec struct {
// Port of Konnectivity proxy server.
ProxyPort int32 `json:"proxyPort"`
// Version for Konnectivity server and agent.
// +kubebuilder:default=v0.0.31
Version string `json:"version,omitempty"`
// ServerImage defines the container image for Konnectivity's server.
// +kubebuilder:default=us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server
ServerImage string `json:"serverImage,omitempty"`
// AgentImage defines the container image for Konnectivity's agent.
// +kubebuilder:default=us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent
AgentImage string `json:"agentImage,omitempty"`
// Resources define the amount of CPU and memory to allocate to the Konnectivity server.
Resources *corev1.ResourceRequirements `json:"resources,omitempty"`
// +kubebuilder:default={version:"v0.0.32",image:"registry.k8s.io/kas-network-proxy/proxy-server",port:8132}
KonnectivityServerSpec KonnectivityServerSpec `json:"server,omitempty"`
// +kubebuilder:default={version:"v0.0.32",image:"registry.k8s.io/kas-network-proxy/proxy-agent"}
KonnectivityAgentSpec KonnectivityAgentSpec `json:"agent,omitempty"`
}
// AddonsSpec defines the enabled addons and their features.
type AddonsSpec struct {
CoreDNS *AddonSpec `json:"coreDNS,omitempty"`
// Enables the DNS addon in the Tenant Cluster.
// The registry and the tag are configurable; the image name is hard-coded to `coredns`.
CoreDNS *AddonSpec `json:"coreDNS,omitempty"`
// Enables the Konnectivity addon in the Tenant Cluster, required if the worker nodes are in a different network.
Konnectivity *KonnectivitySpec `json:"konnectivity,omitempty"`
KubeProxy *AddonSpec `json:"kubeProxy,omitempty"`
// Enables the kube-proxy addon in the Tenant Cluster.
// The registry and the tag are configurable; the image name is hard-coded to `kube-proxy`.
KubeProxy *AddonSpec `json:"kubeProxy,omitempty"`
}
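A sketch of the resulting addons stanza, combining ImageOverrideTrait with the new Konnectivity layout; it assumes the AddonsSpec is exposed at spec.addons (that field is not shown in this hunk), and the registry and versions are examples only:
spec:
  addons:
    coreDNS:
      imageRepository: registry.k8s.io    # the image name itself stays `coredns`
      imageTag: v1.10.1                   # example tag, pinned across upgrades
    kubeProxy: {}                         # enabled with default image settings
    konnectivity:
      server:
        port: 8132
        extraArgs: ["--v=4"]              # ExtraArgs is a bare string slice
      agent:
        version: v0.0.32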
// TenantControlPlaneSpec defines the desired state of TenantControlPlane.
type TenantControlPlaneSpec struct {
// DataStore allows specifying a DataStore that should be used to store the Kubernetes data for the given Tenant Control Plane.
// This parameter is optional and overrides the default DataStore used by the Kamaji Operator.
// Migration from one DataStore to another is not yet supported, and the reconciliation will be blocked.
DataStore string `json:"dataStore,omitempty"`
ControlPlane ControlPlane `json:"controlPlane"`
// Kubernetes specification for tenant control plane
Kubernetes KubernetesSpec `json:"kubernetes"`
@@ -149,11 +241,13 @@ type TenantControlPlaneSpec struct {
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:subresource:scale:specpath=.spec.controlPlane.deployment.replicas,statuspath=.status.kubernetesResources.deployment.replicas,selectorpath=.status.kubernetesResources.deployment.selector
// +kubebuilder:resource:shortName=tcp
// +kubebuilder:printcolumn:name="Version",type="string",JSONPath=".spec.kubernetes.version",description="Kubernetes version"
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.kubernetesResources.version.status",description="Kubernetes version"
// +kubebuilder:printcolumn:name="Control-Plane-Endpoint",type="string",JSONPath=".status.controlPlaneEndpoint",description="Tenant Control Plane Endpoint (API server)"
// +kubebuilder:printcolumn:name="Status",type="string",JSONPath=".status.kubernetesResources.version.status",description="Status"
// +kubebuilder:printcolumn:name="Control-Plane endpoint",type="string",JSONPath=".status.controlPlaneEndpoint",description="Tenant Control Plane Endpoint (API server)"
// +kubebuilder:printcolumn:name="Kubeconfig",type="string",JSONPath=".status.kubeconfig.admin.secretName",description="Secret which contains admin kubeconfig"
// +kubebuilder:printcolumn:name="Datastore",type="string",JSONPath=".status.storage.dataStoreName",description="DataStore in use"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp",description="Age"
// TenantControlPlane is the Schema for the tenantcontrolplanes API.

View File

@@ -0,0 +1,188 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
"fmt"
"strings"
"github.com/blang/semver"
"github.com/go-logr/logr"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/sets"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/clastix/kamaji/internal/upgrade"
)
//+kubebuilder:webhook:path=/mutate-kamaji-clastix-io-v1alpha1-tenantcontrolplane,mutating=true,failurePolicy=fail,sideEffects=None,groups=kamaji.clastix.io,resources=tenantcontrolplanes,verbs=create;update,versions=v1alpha1,name=mtenantcontrolplane.kb.io,admissionReviewVersions=v1
//+kubebuilder:webhook:path=/validate-kamaji-clastix-io-v1alpha1-tenantcontrolplane,mutating=false,failurePolicy=fail,sideEffects=None,groups=kamaji.clastix.io,resources=tenantcontrolplanes,verbs=create;update,versions=v1alpha1,name=vtenantcontrolplane.kb.io,admissionReviewVersions=v1
func (in *TenantControlPlane) SetupWebhookWithManager(mgr ctrl.Manager, datastore string) error {
validator := &tenantControlPlaneValidator{
client: mgr.GetClient(),
defaultDatastore: datastore,
log: mgr.GetLogger().WithName("tenantcontrolplane-webhook"),
}
return ctrl.NewWebhookManagedBy(mgr).
For(in).
WithValidator(validator).
WithDefaulter(validator).
Complete()
}
type tenantControlPlaneValidator struct {
client client.Client
defaultDatastore string
log logr.Logger
}
func (t *tenantControlPlaneValidator) Default(_ context.Context, obj runtime.Object) error {
tcp, ok := obj.(*TenantControlPlane)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.TenantControlPlane")
}
if len(tcp.Spec.DataStore) == 0 {
tcp.Spec.DataStore = t.defaultDatastore
}
return nil
}
func (t *tenantControlPlaneValidator) ValidateCreate(_ context.Context, obj runtime.Object) error {
tcp, ok := obj.(*TenantControlPlane)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.TenantControlPlane")
}
t.log.Info("validate create", "name", tcp.Name, "namespace", tcp.Namespace)
ver, err := semver.New(t.normalizeKubernetesVersion(tcp.Spec.Kubernetes.Version))
if err != nil {
return errors.Wrap(err, "unable to parse the desired Kubernetes version")
}
supportedVer, supportedErr := semver.Make(t.normalizeKubernetesVersion(upgrade.KubeadmVersion))
if supportedErr != nil {
return errors.Wrap(supportedErr, "unable to parse the Kamaji supported Kubernetes version")
}
if ver.GT(supportedVer) {
return fmt.Errorf("unable to create a TenantControlPlane with a Kubernetes version greater than the supported one, actually %s", supportedVer.String())
}
if err = t.validatePreferredKubeletAddressTypes(tcp.Spec.Kubernetes.Kubelet.PreferredAddressTypes); err != nil {
return err
}
return nil
}
func (t *tenantControlPlaneValidator) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) error {
old, ok := oldObj.(*TenantControlPlane)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.TenantControlPlane")
}
tcp, ok := newObj.(*TenantControlPlane)
if !ok {
return fmt.Errorf("expected *kamajiv1alpha1.TenantControlPlane")
}
t.log.Info("validate update", "name", tcp.Name, "namespace", tcp.Namespace)
if err := t.validateVersionUpdate(old, tcp); err != nil {
return err
}
if err := t.validateDataStore(ctx, old, tcp); err != nil {
return err
}
if err := t.validatePreferredKubeletAddressTypes(tcp.Spec.Kubernetes.Kubelet.PreferredAddressTypes); err != nil {
return err
}
return nil
}
func (t *tenantControlPlaneValidator) ValidateDelete(context.Context, runtime.Object) error {
return nil
}
func (t *tenantControlPlaneValidator) validatePreferredKubeletAddressTypes(addressTypes []KubeletPreferredAddressType) error {
s := sets.NewString()
for _, at := range addressTypes {
if s.Has(string(at)) {
return fmt.Errorf("preferred kubelet address types is stated multiple times: %s", at)
}
s.Insert(string(at))
}
return nil
}
func (t *tenantControlPlaneValidator) validateVersionUpdate(oldObj, newObj *TenantControlPlane) error {
oldVer, oldErr := semver.Make(t.normalizeKubernetesVersion(oldObj.Spec.Kubernetes.Version))
if oldErr != nil {
return errors.Wrap(oldErr, "unable to parse the previous Kubernetes version")
}
newVer, newErr := semver.New(t.normalizeKubernetesVersion(newObj.Spec.Kubernetes.Version))
if newErr != nil {
return errors.Wrap(newErr, "unable to parse the desired Kubernetes version")
}
supportedVer, supportedErr := semver.Make(t.normalizeKubernetesVersion(upgrade.KubeadmVersion))
if supportedErr != nil {
return errors.Wrap(supportedErr, "unable to parse the Kamaji supported Kubernetes version")
}
switch {
case newVer.GT(supportedVer):
return fmt.Errorf("unable to upgrade to a version greater than the supported one, actually %s", supportedVer.String())
case newVer.LT(oldVer):
return fmt.Errorf("unable to downgrade a TenantControlPlane from %s to %s", oldVer.String(), newVer.String())
case newVer.Minor-oldVer.Minor > 1:
return fmt.Errorf("unable to upgrade to a minor version in a non-sequential mode")
}
return nil
}
func (t *tenantControlPlaneValidator) validateDataStore(ctx context.Context, oldObj, tcp *TenantControlPlane) error {
if oldObj.Spec.DataStore == tcp.Spec.DataStore {
return nil
}
previousDatastore, desiredDatastore := &DataStore{}, &DataStore{}
if err := t.client.Get(ctx, types.NamespacedName{Name: oldObj.Spec.DataStore}, previousDatastore); err != nil {
return fmt.Errorf("unable to retrieve old DataStore for validation: %w", err)
}
if err := t.client.Get(ctx, types.NamespacedName{Name: tcp.Spec.DataStore}, desiredDatastore); err != nil {
return fmt.Errorf("unable to retrieve old DataStore for validation: %w", err)
}
if previousDatastore.Spec.Driver != desiredDatastore.Spec.Driver {
return fmt.Errorf("migration between different Datastore drivers is not supported")
}
return nil
}
func (t *tenantControlPlaneValidator) normalizeKubernetesVersion(input string) string {
return strings.TrimPrefix(input, "v")
}

View File

@@ -0,0 +1,123 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1alpha1
import (
"context"
"crypto/tls"
"fmt"
"net"
"path/filepath"
"testing"
"time"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
admissionv1beta1 "k8s.io/api/admission/v1beta1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/rest"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/envtest"
logf "sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
//+kubebuilder:scaffold:imports
)
// These tests use Ginkgo (BDD-style Go testing framework). Refer to
// http://onsi.github.io/ginkgo/ to learn more about Ginkgo.
var (
cfg *rest.Config
k8sClient client.Client
testEnv *envtest.Environment
ctx context.Context
cancel context.CancelFunc
)
func TestAPIs(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Webhook Suite")
}
var _ = BeforeSuite(func() {
logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))
ctx, cancel = context.WithCancel(context.TODO())
By("bootstrapping test environment")
testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "..", "config", "crd", "bases")},
ErrorIfCRDPathMissing: false,
WebhookInstallOptions: envtest.WebhookInstallOptions{
Paths: []string{filepath.Join("..", "..", "config", "webhook")},
},
}
var err error
// cfg is defined in this file globally.
cfg, err = testEnv.Start()
Expect(err).NotTo(HaveOccurred())
Expect(cfg).NotTo(BeNil())
scheme := runtime.NewScheme()
err = AddToScheme(scheme)
Expect(err).NotTo(HaveOccurred())
err = admissionv1beta1.AddToScheme(scheme)
Expect(err).NotTo(HaveOccurred())
//+kubebuilder:scaffold:scheme
k8sClient, err = client.New(cfg, client.Options{Scheme: scheme})
Expect(err).NotTo(HaveOccurred())
Expect(k8sClient).NotTo(BeNil())
// start webhook server using Manager
webhookInstallOptions := &testEnv.WebhookInstallOptions
mgr, err := ctrl.NewManager(cfg, ctrl.Options{
Scheme: scheme,
Host: webhookInstallOptions.LocalServingHost,
Port: webhookInstallOptions.LocalServingPort,
CertDir: webhookInstallOptions.LocalServingCertDir,
LeaderElection: false,
MetricsBindAddress: "0",
})
Expect(err).NotTo(HaveOccurred())
err = (&TenantControlPlane{}).SetupWebhookWithManager(mgr, "")
Expect(err).NotTo(HaveOccurred())
err = (&DataStore{}).SetupWebhookWithManager(mgr)
Expect(err).NotTo(HaveOccurred())
//+kubebuilder:scaffold:webhook
go func() {
defer GinkgoRecover()
err = mgr.Start(ctx)
Expect(err).NotTo(HaveOccurred())
}()
// wait for the webhook server to get ready
dialer := &net.Dialer{Timeout: time.Second}
addrPort := fmt.Sprintf("%s:%d", webhookInstallOptions.LocalServingHost, webhookInstallOptions.LocalServingPort)
Eventually(func() error {
conn, err := tls.DialWithDialer(dialer, "tcp", addrPort, &tls.Config{InsecureSkipVerify: true})
if err != nil {
return err
}
conn.Close()
return nil
}).Should(Succeed())
})
var _ = AfterSuite(func() {
cancel()
By("tearing down the test environment")
err := testEnv.Stop()
Expect(err).NotTo(HaveOccurred())
})

View File

@@ -10,7 +10,7 @@ package v1alpha1
import (
"k8s.io/api/core/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
@@ -61,6 +61,7 @@ func (in *AdditionalMetadata) DeepCopy() *AdditionalMetadata {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *AddonSpec) DeepCopyInto(out *AddonSpec) {
*out = *in
out.ImageOverrideTrait = in.ImageOverrideTrait
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AddonSpec.
@@ -253,6 +254,35 @@ func (in *ClientCertificate) DeepCopy() *ClientCertificate {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ComponentResourceRequirements) DeepCopyInto(out *ComponentResourceRequirements) {
*out = *in
if in.Limits != nil {
in, out := &in.Limits, &out.Limits
*out = make(v1.ResourceList, len(*in))
for key, val := range *in {
(*out)[key] = val.DeepCopy()
}
}
if in.Requests != nil {
in, out := &in.Requests, &out.Requests
*out = make(v1.ResourceList, len(*in))
for key, val := range *in {
(*out)[key] = val.DeepCopy()
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ComponentResourceRequirements.
func (in *ComponentResourceRequirements) DeepCopy() *ComponentResourceRequirements {
if in == nil {
return nil
}
out := new(ComponentResourceRequirements)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ContentRef) DeepCopyInto(out *ContentRef) {
*out = *in
@@ -305,17 +335,17 @@ func (in *ControlPlaneComponentsResources) DeepCopyInto(out *ControlPlaneCompone
*out = *in
if in.APIServer != nil {
in, out := &in.APIServer, &out.APIServer
*out = new(v1.ResourceRequirements)
*out = new(ComponentResourceRequirements)
(*in).DeepCopyInto(*out)
}
if in.ControllerManager != nil {
in, out := &in.ControllerManager, &out.ControllerManager
*out = new(v1.ResourceRequirements)
*out = new(ComponentResourceRequirements)
(*in).DeepCopyInto(*out)
}
if in.Scheduler != nil {
in, out := &in.Scheduler, &out.Scheduler
*out = new(v1.ResourceRequirements)
*out = new(ComponentResourceRequirements)
(*in).DeepCopyInto(*out)
}
}
@@ -392,6 +422,37 @@ func (in *DataStore) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DataStoreCertificateStatus) DeepCopyInto(out *DataStoreCertificateStatus) {
*out = *in
in.LastUpdate.DeepCopyInto(&out.LastUpdate)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataStoreCertificateStatus.
func (in *DataStoreCertificateStatus) DeepCopy() *DataStoreCertificateStatus {
if in == nil {
return nil
}
out := new(DataStoreCertificateStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DataStoreConfigStatus) DeepCopyInto(out *DataStoreConfigStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataStoreConfigStatus.
func (in *DataStoreConfigStatus) DeepCopy() *DataStoreConfigStatus {
if in == nil {
return nil
}
out := new(DataStoreConfigStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DataStoreList) DeepCopyInto(out *DataStoreList) {
*out = *in
@@ -424,12 +485,28 @@ func (in *DataStoreList) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DataStoreSetupStatus) DeepCopyInto(out *DataStoreSetupStatus) {
*out = *in
in.LastUpdate.DeepCopyInto(&out.LastUpdate)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new DataStoreSetupStatus.
func (in *DataStoreSetupStatus) DeepCopy() *DataStoreSetupStatus {
if in == nil {
return nil
}
out := new(DataStoreSetupStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DataStoreSpec) DeepCopyInto(out *DataStoreSpec) {
*out = *in
if in.Endpoints != nil {
in, out := &in.Endpoints, &out.Endpoints
*out = make([]string, len(*in))
*out = make(Endpoints, len(*in))
copy(*out, *in)
}
if in.BasicAuth != nil {
@@ -473,6 +550,32 @@ func (in *DataStoreStatus) DeepCopy() *DataStoreStatus {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DeploymentSpec) DeepCopyInto(out *DeploymentSpec) {
*out = *in
if in.NodeSelector != nil {
in, out := &in.NodeSelector, &out.NodeSelector
*out = make(map[string]string, len(*in))
for key, val := range *in {
(*out)[key] = val
}
}
if in.Tolerations != nil {
in, out := &in.Tolerations, &out.Tolerations
*out = make([]v1.Toleration, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Affinity != nil {
in, out := &in.Affinity, &out.Affinity
*out = new(v1.Affinity)
(*in).DeepCopyInto(*out)
}
if in.TopologySpreadConstraints != nil {
in, out := &in.TopologySpreadConstraints, &out.TopologySpreadConstraints
*out = make([]v1.TopologySpreadConstraint, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.Resources != nil {
in, out := &in.Resources, &out.Resources
*out = new(ControlPlaneComponentsResources)
@@ -530,20 +633,22 @@ func (in *ETCDCertificatesStatus) DeepCopy() *ETCDCertificatesStatus {
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ETCDStatus) DeepCopyInto(out *ETCDStatus) {
*out = *in
in.Role.DeepCopyInto(&out.Role)
in.User.DeepCopyInto(&out.User)
func (in Endpoints) DeepCopyInto(out *Endpoints) {
{
in := &in
*out = make(Endpoints, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ETCDStatus.
func (in *ETCDStatus) DeepCopy() *ETCDStatus {
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Endpoints.
func (in Endpoints) DeepCopy() Endpoints {
if in == nil {
return nil
}
out := new(ETCDStatus)
out := new(Endpoints)
in.DeepCopyInto(out)
return out
return *out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
@@ -562,6 +667,40 @@ func (in *ExternalKubernetesObjectStatus) DeepCopy() *ExternalKubernetesObjectSt
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in ExtraArgs) DeepCopyInto(out *ExtraArgs) {
{
in := &in
*out = make(ExtraArgs, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ExtraArgs.
func (in ExtraArgs) DeepCopy() ExtraArgs {
if in == nil {
return nil
}
out := new(ExtraArgs)
in.DeepCopyInto(out)
return *out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ImageOverrideTrait) DeepCopyInto(out *ImageOverrideTrait) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ImageOverrideTrait.
func (in *ImageOverrideTrait) DeepCopy() *ImageOverrideTrait {
if in == nil {
return nil
}
out := new(ImageOverrideTrait)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *IngressSpec) DeepCopyInto(out *IngressSpec) {
*out = *in
@@ -579,19 +718,21 @@ func (in *IngressSpec) DeepCopy() *IngressSpec {
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KineStatus) DeepCopyInto(out *KineStatus) {
func (in *KonnectivityAgentSpec) DeepCopyInto(out *KonnectivityAgentSpec) {
*out = *in
out.Config = in.Config
in.Setup.DeepCopyInto(&out.Setup)
in.Certificate.DeepCopyInto(&out.Certificate)
if in.ExtraArgs != nil {
in, out := &in.ExtraArgs, &out.ExtraArgs
*out = make(ExtraArgs, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KineStatus.
func (in *KineStatus) DeepCopy() *KineStatus {
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KonnectivityAgentSpec.
func (in *KonnectivityAgentSpec) DeepCopy() *KonnectivityAgentSpec {
if in == nil {
return nil
}
out := new(KineStatus)
out := new(KonnectivityAgentSpec)
in.DeepCopyInto(out)
return out
}
@@ -612,13 +753,35 @@ func (in *KonnectivityConfigMap) DeepCopy() *KonnectivityConfigMap {
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KonnectivitySpec) DeepCopyInto(out *KonnectivitySpec) {
func (in *KonnectivityServerSpec) DeepCopyInto(out *KonnectivityServerSpec) {
*out = *in
if in.Resources != nil {
in, out := &in.Resources, &out.Resources
*out = new(v1.ResourceRequirements)
*out = new(ComponentResourceRequirements)
(*in).DeepCopyInto(*out)
}
if in.ExtraArgs != nil {
in, out := &in.ExtraArgs, &out.ExtraArgs
*out = make(ExtraArgs, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KonnectivityServerSpec.
func (in *KonnectivityServerSpec) DeepCopy() *KonnectivityServerSpec {
if in == nil {
return nil
}
out := new(KonnectivityServerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KonnectivitySpec) DeepCopyInto(out *KonnectivitySpec) {
*out = *in
in.KonnectivityServerSpec.DeepCopyInto(&out.KonnectivityServerSpec)
in.KonnectivityAgentSpec.DeepCopyInto(&out.KonnectivityAgentSpec)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new KonnectivitySpec.
@@ -688,8 +851,6 @@ func (in *KubeadmPhaseStatus) DeepCopy() *KubeadmPhaseStatus {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *KubeadmPhasesStatus) DeepCopyInto(out *KubeadmPhasesStatus) {
*out = *in
in.UploadConfigKubeadm.DeepCopyInto(&out.UploadConfigKubeadm)
in.UploadConfigKubelet.DeepCopyInto(&out.UploadConfigKubelet)
in.BootstrapToken.DeepCopyInto(&out.BootstrapToken)
}
@@ -906,53 +1067,6 @@ func (in *PublicKeyPrivateKeyPairStatus) DeepCopy() *PublicKeyPrivateKeyPairStat
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SQLCertificateStatus) DeepCopyInto(out *SQLCertificateStatus) {
*out = *in
in.LastUpdate.DeepCopyInto(&out.LastUpdate)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SQLCertificateStatus.
func (in *SQLCertificateStatus) DeepCopy() *SQLCertificateStatus {
if in == nil {
return nil
}
out := new(SQLCertificateStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SQLConfigStatus) DeepCopyInto(out *SQLConfigStatus) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SQLConfigStatus.
func (in *SQLConfigStatus) DeepCopy() *SQLConfigStatus {
if in == nil {
return nil
}
out := new(SQLConfigStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SQLSetupStatus) DeepCopyInto(out *SQLSetupStatus) {
*out = *in
in.LastUpdate.DeepCopyInto(&out.LastUpdate)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SQLSetupStatus.
func (in *SQLSetupStatus) DeepCopy() *SQLSetupStatus {
if in == nil {
return nil
}
out := new(SQLSetupStatus)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SecretReference) DeepCopyInto(out *SecretReference) {
*out = *in
@@ -988,16 +1102,9 @@ func (in *ServiceSpec) DeepCopy() *ServiceSpec {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *StorageStatus) DeepCopyInto(out *StorageStatus) {
*out = *in
if in.ETCD != nil {
in, out := &in.ETCD, &out.ETCD
*out = new(ETCDStatus)
(*in).DeepCopyInto(*out)
}
if in.Kine != nil {
in, out := &in.Kine, &out.Kine
*out = new(KineStatus)
(*in).DeepCopyInto(*out)
}
out.Config = in.Config
in.Setup.DeepCopyInto(&out.Setup)
in.Certificate.DeepCopyInto(&out.Certificate)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StorageStatus.

assets/kamaji-logo.svg Normal file
View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" role="img" viewBox="11.85 8.10 202.80 187.55"><title>Kamaji</title><path d="M32.1 13.7c-2.4.9-6.3 3.5-8.6 5.8-7.7 7.7-7.5 5-7.5 82.5 0 77.4-.2 74.8 7.5 82.5 7.7 7.8 4.2 7.5 90 7.5s82.3.3 90-7.5c7.7-7.7 7.5-5.1 7.5-82.5s.2-74.8-7.5-82.5c-7.8-7.8-4.1-7.5-90.4-7.4-66.7 0-77.2.3-81 1.6zm160.5 9.9c1.9.9 4.4 3.1 5.7 4.8l2.2 3.1v141l-2.2 3.1c-4.8 6.7-1.1 6.4-84.8 6.4s-80 .3-84.8-6.4l-2.2-3.1v-141l2.2-3.1c4.8-6.6.8-6.4 84.6-6.4 68 0 76.3.2 79.3 1.6z"/><path d="M90.1 33.7c-5.1 2.5-7.3 6.7-6.8 13.1.3 4.1 1 5.9 3.3 8.4s2.5 3 .9 2.3c-2-.7-25.1-4.6-29-4.9-1.1 0-2 .5-2 1.4 0 1.1-1.2 1.5-4.9 1.5-6.7 0-6.8 1.9-.4 4 8.2 2.7 9 3.4 3.3 3.5-5.3 0-8.2 1.1-7.1 2.8.7 1.2-2.7 2.2-8.1 2.2-7 0-6.5 2.4 1.1 5.1l3.9 1.4-2.9.5c-4.3.8-3.2 2.3 2.8 4.1l5.3 1.5-5.2 2.7c-8.2 4.2-8.3 5.8-.4 6.1 5.6.2 7.3 1.1 4.2 2.1-2.3.7-2.8 3.1-.9 3.7.7.3-.5 2-2.8 4-5.6 5.3-4 6.4 6.2 4.5 4.4-.8 8.1-1.3 8.3-1.2.2.2-1.3 2.4-3.3 4.8-2 2.4-3.6 4.7-3.6 5.2 0 .4 1.4.5 3 .3 2.9-.4 4 .5 2 1.7-.5.3-1 1.3-1 2.2 0 1.6 2.2 1.5 6.5-.3 1.7-.7 1.6-.2-.9 3-5.4 7.2.7 6.5 13.6-1.4 2.7-1.7 5.1-3 5.4-3 .3 0-.9 2.1-2.7 4.6-4.5 6.6-2.5 7.9 3.7 2.3 4.6-4.3 4.7-4.3 3-1.2-1.9 3.8-2.1 5.6-.4 5.1.6-.2 7.1-7.1 14.3-15.4 7.2-8.2 13.7-14.9 14.5-14.9.8 0 7.3 6.7 14.6 15 7.2 8.2 13.7 15.1 14.3 15.3 1.6.5 1.4-1.4-.5-5-1.6-3.2-1.6-3.2 3.2 1 6 5.1 7.8 4 3.5-2.2-1.8-2.5-3-4.6-2.7-4.6.3 0 2.7 1.3 5.4 3 12.9 7.9 19 8.6 13.6 1.4-2.5-3.2-2.6-3.7-.9-3 5.9 2.5 7.7 1.7 5.6-2.3-.9-1.5-.6-1.7 2-1.3 3.8.6 3.7-.5-.7-5.7-2-2.3-3.5-4.4-3.2-4.6.2-.2 2.1 0 4.3.4 13.9 3 16.4 1.8 9.8-4.3-2.1-1.9-3.2-3.6-2.5-3.6 2 0 1.4-2.8-.9-3.5-3.2-1-1.3-2 4.2-2.1 7.9-.2 7.8-1.9-.4-6.1l-5.2-2.7 5.4-1.6c6.4-1.8 7.9-4 2.9-4.1h-3.3l3.9-1.5c7.3-2.6 8.4-5.4 2.2-5.4-5.1 0-9.6-1.1-9-2.2 1.1-1.7-1.8-2.8-7.1-2.8-5.7-.1-4.9-.8 3.3-3.5 6.4-2.1 6.3-4-.4-4-3.7 0-4.9-.4-4.9-1.5 0-.9-.9-1.4-2-1.4-3.9.3-27 4.2-29 4.9-1.6.7-1.4.2.9-2.3 3.7-4 4.7-11.3 2.2-16.1-4.8-9.2-18.8-9.3-23.8 0-4.4 8.3.2 18.4 9.5 20.5 3 .6 2.8.8-5.5 4l-8.8 3.3-8.7-3.3c-8.1-3.2-8.4-3.4-5.5-4.1 1.7-.3 4.3-1.5 5.7-2.7 13.1-10.3.6-30.4-14.4-23.1zm77.6 98.4c-3.6 2.1-.8 7.7 3.2 6.4 2.1-.6 3.5-3.1 2.5-4.6-1.1-1.8-4-2.7-5.7-1.8zm8.3 3.9c0 1.9.5 2.1 6.3 1.8 4.7-.2 6.2-.7 6.2-1.8s-1.5-1.6-6.2-1.8c-5.8-.3-6.3-.1-6.3 1.8zm-135.6.3c-.2.7-.3 7.4-.2 14.8l.3 13.4 3.3.3c3.1.3 3.2.2 3.2-3.4 0-2.5.7-4.6 2.1-6l2.1-2.3 5 6c3.9 4.7 5.6 5.9 7.8 5.9 1.6 0 3.1-.3 3.3-.8.3-.4-2.1-4-5.4-8.1-3.2-4-5.9-7.6-5.9-8 0-.4 2.5-3.1 5.5-6.1 3-3 5.5-5.8 5.5-6.2 0-.4-1.5-.8-3.3-.8-2.8 0-4.4 1-9.6 6.5-3.5 3.6-6.5 6.5-6.7 6.5-.2 0-.4-2.9-.4-6.5V135h-3c-1.7 0-3.3.6-3.6 1.3zm31.2 7c-1.1.8-1.5 1.9-1 3 .5 1.4 1.3 1.6 4 1.1 4.2-.8 8.4.2 8.4 2 0 .8-1.8 1.5-5.1 1.9-6 .7-8.9 2.9-8.9 6.6 0 3.2.8 4.4 3.7 6 2.9 1.5 5.2 1.4 8.6-.3 2.3-1.3 2.7-1.3 2.7 0 0 .9 1.1 1.4 3 1.4h3v-8.6c0-8.1-.1-8.7-2.9-11.5-2.5-2.5-3.7-2.9-8.3-2.9-3 0-6.2.6-7.2 1.3zm11.2 13.9c-.2 1.7-1.1 2.4-3.2 2.6-3.3.4-5.1-1-4.3-3.2.4-1.1 1.9-1.6 4.2-1.6 3.2 0 3.6.3 3.3 2.2zm13.4-4l.3 11.3h6l.5-7.8c.5-7.6 1.5-9.6 4.7-9.7 3 0 4.3 3.2 4.3 10.6v7.4h3c3 0 3 0 3-5.9 0-7.3 1.2-10.7 4.1-11.6 3.8-1.3 5.9 2.5 5.9 10.6v6.9h6v-9c0-8.3-.2-9.3-2.5-11.5-2.9-3-9.8-3.5-12.7-.8-1.7 1.5-1.9 1.5-3.6 0-2.2-2-9.2-2.3-11.1-.5-1.1 1-1.4 1-1.8 0-.3-.6-1.8-1.2-3.4-1.2h-3l.3 11.2zm45.4-9.9c-1.1.8-1.5 1.9-1 3 .5 1.4 1.3 1.6 4 1.1 4.2-.8 8.4.2 8.4 2 0 .8-1.8 1.5-5.1 1.9-6 .7-8.9 2.9-8.9 6.6 0 3.2.8 4.4 3.7 6 2.9 1.5 5.2 1.4 8.6-.3 2.3-1.3 2.7-1.3 2.7 0 0 .9 1.1 1.4 3 1.4h3v-8.6c0-8.1-.1-8.7-2.9-11.5-2.5-2.5-3.7-2.9-8.3-2.9-3 0-6.2.6-7.2 1.3zm11.2 13.9c-.2 1.7-1.1 2.4-3.2 2.6-3.3.4-5.1-1-4.3-3.2.4-1.1 1.9-1.6 4.2-1.6 3.2 0 3.6.3 
3.3 2.2zm13-2.5c-.3 12.8-.3 12.8-2.7 12.8-1.5 0-2.7.8-3.1 2-2 5.4 9.4 4.3 11.9-1.2.6-1.3 1.1-7.7 1.1-14.3v-12h-6.9l-.3 12.7zm13.4-1.5l.3 11.3h6v-22l-3.3-.3-3.3-.3.3 11.3z"/></svg>


View File

@@ -1,37 +1,24 @@
apiVersion: v2
name: kamaji
description: Kamaji is a tool aimed at building and operating a Managed Kubernetes Service with a fraction of the operational burden. With Kamaji, you can deploy and operate hundreds of Kubernetes clusters as a hyper-scaler.
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.2.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: 0.1.0
appVersion: v0.2.1
description: Kamaji is a tool aimed at building and operating a Managed Kubernetes
Service with a fraction of the operational burden. With Kamaji, you can deploy
and operate hundreds of Kubernetes clusters as a hyper-scaler.
home: https://github.com/clastix/kamaji
sources: ["https://github.com/clastix/kamaji"]
kubeVersion: ">=1.18"
icon: https://github.com/clastix/kamaji/raw/master/assets/kamaji-logo.png
kubeVersion: ">=1.21.0-0"
maintainers:
- email: iam@mendrugory.com
name: Gonzalo Gabriel Jiménez Fuentes
- email: dario@tranchitella.eu
name: Dario Tranchitella
- email: me@maxgio.it
name: Massimiliano Giovagnoli
- email: me@bsctl.io
name: Adriano Pezzuto
name: kamaji
sources:
- https://github.com/clastix/kamaji
type: application
version: 0.11.3
annotations:
catalog.cattle.io/certified: partner
catalog.cattle.io/release-name: kamaji
catalog.cattle.io/display-name: Kamaji - Managed Kubernetes Service

View File

@@ -1,6 +1,6 @@
# kamaji
![Version: 0.1.0](https://img.shields.io/badge/Version-0.1.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.1.0](https://img.shields.io/badge/AppVersion-0.1.0-informational?style=flat-square)
![Version: 0.11.2](https://img.shields.io/badge/Version-0.11.2-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: v0.2.1](https://img.shields.io/badge/AppVersion-v0.2.1-informational?style=flat-square)
Kamaji is a tool aimed at building and operating a Managed Kubernetes Service with a fraction of the operational burden. With Kamaji, you can deploy and operate hundreds of Kubernetes clusters as a hyper-scaler.
@@ -8,7 +8,6 @@ Kamaji is a tool aimed to build and operate a Managed Kubernetes Service with a
| Name | Email | Url |
| ---- | ------ | --- |
| Gonzalo Gabriel Jiménez Fuentes | <iam@mendrugory.com> | |
| Dario Tranchitella | <dario@tranchitella.eu> | |
| Massimiliano Giovagnoli | <me@maxgio.it> | |
| Adriano Pezzuto | <me@bsctl.io> | |
@@ -19,7 +18,7 @@ Kamaji is a tool aimed to build and operate a Managed Kubernetes Service with a
## Requirements
Kubernetes: `>=1.18`
Kubernetes: `>=1.21.0-0`
[Kamaji](https://github.com/clastix/kamaji) requires a [multi-tenant `etcd`](https://github.com/clastix/kamaji-internal/blob/master/deploy/getting-started-with-kamaji.md#setup-internal-multi-tenant-etcd) cluster.
This Helm Chart, starting from v0.1.1, provides the installation of an internal `etcd` in order to streamline local testing. If you'd like to use an externally managed etcd instance, you can specify the overrides and set the value `etcd.deploy=false`.
@@ -67,7 +66,6 @@ Here the values you can override:
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Kubernetes affinity rules to apply to Kamaji controller pods |
| configPath | string | `"./kamaji.yaml"` | Configuration file path alternative. (default "./kamaji.yaml") |
| datastore.basicAuth.passwordSecret.keyPath | string | `nil` | The Secret key where the data is stored. |
| datastore.basicAuth.passwordSecret.name | string | `nil` | The name of the Secret containing the password used to connect to the relational database. |
| datastore.basicAuth.passwordSecret.namespace | string | `nil` | The namespace of the Secret containing the password used to connect to the relational database. |
@@ -91,15 +89,16 @@ Here the values you can override:
| datastore.tlsConfig.clientCertificate.privateKey.namespace | string | `nil` | Namespace of the Secret containing the client certificate private key required to establish the mandatory SSL/TLS connection to the datastore. |
| etcd.compactionInterval | int | `0` | ETCD Compaction interval (e.g. "5m0s"). (default: "0" (disabled)) |
| etcd.deploy | bool | `true` | Install an etcd with enabled multi-tenancy along with Kamaji |
| etcd.image | object | `{"pullPolicy":"IfNotPresent","repository":"quay.io/coreos/etcd","tag":"v3.5.4"}` | Install specific etcd image |
| etcd.image | object | `{"pullPolicy":"IfNotPresent","repository":"quay.io/coreos/etcd","tag":"v3.5.6"}` | Install specific etcd image |
| etcd.livenessProbe | object | `{"failureThreshold":8,"httpGet":{"path":"/health?serializable=true","port":2381,"scheme":"HTTP"},"initialDelaySeconds":10,"periodSeconds":10,"timeoutSeconds":15}` | The livenessProbe for the etcd container |
| etcd.overrides.caSecret.name | string | `"etcd-certs"` | Name of the secret which contains CA's certificate and private key. (default: "etcd-certs") |
| etcd.overrides.caSecret.namespace | string | `"kamaji-system"` | Namespace of the secret which contains CA's certificate and private key. (default: "kamaji-system") |
| etcd.overrides.clientSecret.name | string | `"root-client-certs"` | Name of the secret which contains ETCD client certificates. (default: "root-client-certs") |
| etcd.overrides.clientSecret.namespace | string | `"kamaji-system"` | Name of the namespace where the secret which contains ETCD client certificates is. (default: "kamaji-system") |
| etcd.overrides.endpoints | object | `{"etcd-0":"https://etcd-0.etcd.kamaji-system.svc.cluster.local","etcd-1":"https://etcd-1.etcd.kamaji-system.svc.cluster.local","etcd-2":"https://etcd-2.etcd.kamaji-system.svc.cluster.local"}` | (map) Dictionary of the endpoints for the etcd cluster's members, key is the name of the etcd server. Don't define any port, inflected from .etcd.peerApiPort value. |
| etcd.overrides.endpoints | object | `{"etcd-0":"etcd-0.etcd.kamaji-system.svc.cluster.local","etcd-1":"etcd-1.etcd.kamaji-system.svc.cluster.local","etcd-2":"etcd-2.etcd.kamaji-system.svc.cluster.local"}` | (map) Dictionary of the endpoints for the etcd cluster's members, key is the name of the etcd server. Don't define the protocol (TLS is automatically inflected), or any port, inflected from .etcd.peerApiPort value. |
| etcd.peerApiPort | int | `2380` | The peer API port which servers are listening to. |
| etcd.persistence.accessModes[0] | string | `"ReadWriteOnce"` | |
| etcd.persistence.customAnnotations | object | `{}` | The custom annotations to add to the PVC |
| etcd.persistence.size | string | `"10Gi"` | |
| etcd.persistence.storageClass | string | `""` | |
| etcd.port | int | `2379` | The client request port. |
@@ -110,7 +109,7 @@ Here the values you can override:
| healthProbeBindAddress | string | `":8081"` | The address the probe endpoint binds to. (default ":8081") |
| image.pullPolicy | string | `"Always"` | |
| image.repository | string | `"clastix/kamaji"` | The container image of the Kamaji controller. |
| image.tag | string | `"latest"` | |
| image.tag | string | `nil` | Overrides the image tag whose default is the chart appVersion. |
| imagePullSecrets | list | `[]` | |
| livenessProbe | object | `{"httpGet":{"path":"/healthz","port":"healthcheck"},"initialDelaySeconds":15,"periodSeconds":20}` | The livenessProbe for the controller container |
| loggingDevel.enable | bool | `false` | (string) Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default false) |
@@ -126,11 +125,10 @@ Here the values you can override:
| resources.requests.cpu | string | `"100m"` | |
| resources.requests.memory | string | `"20Mi"` | |
| securityContext | object | `{"allowPrivilegeEscalation":false}` | The securityContext to apply to the Kamaji controller container only. It does not apply to the Kamaji RBAC proxy container. |
| service.port | int | `8443` | |
| service.type | string | `"ClusterIP"` | |
| serviceAccount.annotations | object | `{}` | |
| serviceAccount.create | bool | `true` | |
| serviceAccount.name | string | `"kamaji-controller-manager"` | |
| serviceMonitor.enabled | bool | `false` | Toggle the ServiceMonitor true if you have Prometheus Operator installed and configured |
| temporaryDirectoryPath | string | `"/tmp/kamaji"` | Directory which will be used to work with temporary files. (default "/tmp/kamaji") |
| tolerations | list | `[]` | Kubernetes node taints that the Kamaji controller pods would tolerate |
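As a worked example of the table above, a minimal hedged values.yaml pointing Kamaji at an externally managed etcd and enabling the new ServiceMonitor; the hostnames and Secret names are placeholders:
etcd:
  deploy: false                           # use an externally managed etcd
  overrides:
    endpoints:                            # bare hostnames; protocol and port are inflected
      etcd-0: etcd-0.etcd.example.svc.cluster.local
      etcd-1: etcd-1.etcd.example.svc.cluster.local
      etcd-2: etcd-2.etcd.example.svc.cluster.local
    caSecret:
      name: etcd-certs
      namespace: kamaji-system
serviceMonitor:
  enabled: true                           # requires the Prometheus Operator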

View File

@@ -0,0 +1,30 @@
# Kamaji - Managed Kubernetes Service
Kamaji is a tool aimed at building and operating a Managed Kubernetes Service with a fraction of the operational burden.
Useful links:
- [Kamaji Github repository](https://github.com/clastix/kamaji)
- [Kamaji Documentation](https://github.com/clastix/kamaji/docs/)
## Requirements
* Kubernetes v1.22+
* Helm v3
# Installation
To install the Chart with the release name `kamaji`:
helm upgrade --install kamaji clastix/kamaji --namespace kamaji-system --create-namespace
Show the status:
helm status kamaji -n kamaji-system
Upgrade the Chart:
helm upgrade kamaji -n kamaji-system clastix/kamaji
Uninstall the Chart:
helm uninstall kamaji -n kamaji-system

View File

@@ -1,11 +1,10 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.6.1
creationTimestamp: null
cert-manager.io/inject-ca-from: kamaji-system/kamaji-serving-cert
controller-gen.kubebuilder.io/version: v0.9.2
name: datastores.kamaji.clastix.io
spec:
group: kamaji.clastix.io
@@ -16,254 +15,225 @@ spec:
singular: datastore
scope: Cluster
versions:
- additionalPrinterColumns:
- description: Kamaji data store driver
jsonPath: .spec.driver
name: Driver
type: string
- description: Age
jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: DataStore is the Schema for the datastores API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DataStoreSpec defines the desired state of DataStore.
properties:
basicAuth:
description: If authentication is enabled for the given data
store, specifies the username and password pair. This value is optional.
properties:
password:
properties:
content:
description: Bare content of the file, base64 encoded. It
has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference
where the content is stored. This value is mandatory.
type: string
name:
description: Name is unique within a namespace to reference
a secret resource.
type: string
namespace:
description: Namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
type: object
username:
properties:
content:
description: Bare content of the file, base64 encoded. It
has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference
where the content is stored. This value is mandatory.
type: string
name:
description: Name is unique within a namespace to reference
a secret resource.
type: string
namespace:
description: Namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
type: object
required:
- password
- username
type: object
driver:
description: The driver to use to connect to the shared datastore.
type: string
endpoints:
description: List of the endpoints to connect to the shared datastore.
No need for protocol, just bare IP/FQDN and port.
items:
- additionalPrinterColumns:
- description: Kamaji data store driver
jsonPath: .spec.driver
name: Driver
type: string
- description: Age
jsonPath: .metadata.creationTimestamp
name: Age
type: date
name: v1alpha1
schema:
openAPIV3Schema:
description: DataStore is the Schema for the datastores API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: DataStoreSpec defines the desired state of DataStore.
properties:
basicAuth:
description: In case of authentication enabled for the given data store, specifies the username and password pair. This value is optional.
properties:
password:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
username:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- password
- username
type: object
driver:
description: The driver to use to connect to the shared datastore.
enum:
- etcd
- MySQL
- PostgreSQL
type: string
type: array
tlsConfig:
description: Defines the TLS/SSL configuration required to connect
to the data store in a secure way.
properties:
certificateAuthority:
description: Retrieve the Certificate Authority certificate and
private key, such as bare content of the file, or a SecretReference.
The key reference is required since etcd authentication is based
on certificates, and Kamaji is responsible in creating this.
properties:
certificate:
properties:
content:
description: Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
type: string
name:
description: Name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: Namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
type: object
privateKey:
properties:
content:
description: Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
type: string
name:
description: Name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: Namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
type: object
required:
- certificate
type: object
clientCertificate:
description: Specifies the SSL/TLS key and private key pair used
to connect to the data store.
properties:
certificate:
properties:
content:
description: Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
type: string
name:
description: Name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: Namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
type: object
privateKey:
properties:
content:
description: Bare content of the file, base64 encoded.
It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
type: string
name:
description: Name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: Namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
type: object
required:
- certificate
- privateKey
type: object
required:
- certificateAuthority
- clientCertificate
type: object
required:
- driver
- endpoints
- tlsConfig
type: object
status:
description: DataStoreStatus defines the observed state of DataStore.
properties:
usedBy:
description: List of the Tenant Control Planes, namespaced named,
using this data store.
items:
type: string
type: array
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
endpoints:
description: List of the endpoints to connect to the shared datastore. No need for protocol, just bare IP/FQDN and port.
items:
type: string
minItems: 1
type: array
tlsConfig:
description: Defines the TLS/SSL configuration required to connect to the data store in a secure way.
properties:
certificateAuthority:
description: Retrieve the Certificate Authority certificate and private key, such as bare content of the file, or a SecretReference. The key reference is required since etcd authentication is based on certificates, and Kamaji is responsible in creating this.
properties:
certificate:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
privateKey:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- certificate
type: object
clientCertificate:
description: Specifies the SSL/TLS key and private key pair used to connect to the data store.
properties:
certificate:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
privateKey:
properties:
content:
description: Bare content of the file, base64 encoded. It has precedence over the SecretReference value.
format: byte
type: string
secretReference:
properties:
keyPath:
description: Name of the key for the given Secret reference where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: name is unique within a namespace to reference a secret resource.
type: string
namespace:
description: namespace defines the space within which the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- certificate
- privateKey
type: object
required:
- certificateAuthority
- clientCertificate
type: object
required:
- driver
- endpoints
- tlsConfig
type: object
status:
description: DataStoreStatus defines the observed state of DataStore.
properties:
usedBy:
description: List of the Tenant Control Planes, namespaced named, using this data store.
items:
type: string
type: array
type: object
type: object
served: true
storage: true
subresources:
status: {}

File diff suppressed because it is too large.


@@ -46,9 +46,9 @@ app.kubernetes.io/managed-by: {{ .Release.Service }}
Selector labels
*/}}
{{- define "kamaji.selectorLabels" -}}
app.kubernetes.io/name: {{ include "kamaji.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/component: controller-manager
app.kubernetes.io/name: {{ default (include "kamaji.name" .) .name }}
app.kubernetes.io/instance: {{ default .Release.Name .instance }}
app.kubernetes.io/component: {{ default "controller-manager" .component }}
{{- end }}
{{/*
@@ -61,3 +61,31 @@ Create the name of the service account to use
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Create the name of the Service to use for webhooks
*/}}
{{- define "kamaji.webhookServiceName" -}}
{{- printf "%s-webhook-service" (include "kamaji.fullname" .) }}
{{- end }}
{{/*
Create the name of the Service to use for metrics
*/}}
{{- define "kamaji.metricsServiceName" -}}
{{- printf "%s-metrics-service" (include "kamaji.fullname" .) }}
{{- end }}
{{/*
Create the name of the cert-manager secret
*/}}
{{- define "kamaji.webhookSecretName" -}}
{{- printf "%s-webhook-server-cert" (include "kamaji.fullname" .) }}
{{- end }}
{{/*
Create the name of the cert-manager Certificate
*/}}
{{- define "kamaji.certificateName" -}}
{{- printf "%s-serving-cert" (include "kamaji.fullname" .) }}
{{- end }}
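As a usage sketch, the label helpers above are consumed by merging an override dict into the context before including them, a pattern that recurs in the Service and webhook templates later in this diff:

    {{- $data := . | mustMergeOverwrite (dict "component" "metrics") -}}
    {{- include "kamaji.labels" $data | nindent 4 }}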


@@ -99,7 +99,7 @@ Comma separated list of etcd endpoints, using the overrides in case of unmanaged
{{- $list := list -}}
{{- if .Values.etcd.deploy }}
{{- range $count := until 3 -}}
{{- $list = append $list (printf "https://%s-%d.%s.%s.svc.cluster.local:%d" "etcd" $count ( include "etcd.serviceName" . ) $.Release.Namespace (int $.Values.etcd.port) ) -}}
{{- $list = append $list (printf "%s-%d.%s.%s.svc.cluster.local:%d" "etcd" $count ( include "etcd.serviceName" . ) $.Release.Namespace (int $.Values.etcd.port) ) -}}
{{- end }}
{{- else if .Values.etcd.overrides.endpoints }}
{{- range $v := .Values.etcd.overrides.endpoints -}}
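Assuming the default in-cluster etcd (three replicas, service `etcd`, namespace `kamaji-system`, port 2379), the helper now renders a scheme-less comma-separated list such as:

    etcd-0.etcd.kamaji-system.svc.cluster.local:2379,etcd-1.etcd.kamaji-system.svc.cluster.local:2379,etcd-2.etcd.kamaji-system.svc.cluster.local:2379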


@@ -0,0 +1,16 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
labels:
{{- $data := . | mustMergeOverwrite (dict "component" "certificate") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: {{ include "kamaji.certificateName" . }}
namespace: {{ .Release.Namespace }}
spec:
dnsNames:
- {{ include "kamaji.webhookServiceName" . }}.{{ .Release.Namespace }}.svc
- {{ include "kamaji.webhookServiceName" . }}.{{ .Release.Namespace }}.svc.cluster.local
issuerRef:
kind: Issuer
name: kamaji-selfsigned-issuer
secretName: {{ include "kamaji.webhookSecretName" . }}


@@ -0,0 +1,10 @@
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
labels:
{{- $data := . | mustMergeOverwrite (dict "component" "issuer") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: kamaji-selfsigned-issuer
namespace: {{ .Release.Namespace }}
spec:
selfSigned: {}


@@ -28,18 +28,7 @@ spec:
serviceAccountName: {{ include "kamaji.serviceAccountName" . }}
containers:
- args:
- --secure-listen-address=0.0.0.0:8443
- --upstream=http://127.0.0.1:8080/
- --logtostderr=true
- --v=10
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
name: kube-rbac-proxy
ports:
- containerPort: 8443
name: https
protocol: TCP
- args:
- --config-file={{ .Values.configPath }}
- manager
- --health-probe-bind-address={{ .Values.healthProbeBindAddress }}
- --leader-elect
- --metrics-bind-address={{ .Values.metricsBindAddress }}
@@ -52,7 +41,16 @@ spec:
{{- toYaml . | nindent 8 }}
{{- end }}
command:
- /manager
- /kamaji
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- with .Values.livenessProbe }}
@@ -61,6 +59,12 @@ spec:
{{- end }}
name: manager
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
- containerPort: 8080
name: metrics
protocol: TCP
- containerPort: 8081
name: healthcheck
protocol: TCP
@@ -72,7 +76,21 @@ spec:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
volumeMounts:
- mountPath: /tmp
name: tmp
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
terminationGracePeriodSeconds: 10
volumes:
- name: tmp
emptyDir:
medium: Memory
- name: cert
secret:
defaultMode: 420
secretName: {{ include "kamaji.webhookSecretName" . }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}


@@ -2,6 +2,8 @@ apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: {{ include "datastore.fullname" . }}
annotations:
"helm.sh/hook": pre-install
labels:
{{- include "datastore.labels" . | nindent 4 }}
spec:
@@ -10,7 +12,12 @@ spec:
{{- include "datastore.endpoints" . | indent 4 }}
{{- if (and .Values.datastore.basicAuth.usernameSecret.name .Values.datastore.basicAuth.passwordSecret.name) }}
basicAuth:
{{- .Values.datastore.basicAuth | toYaml | nindent 4 }}
username:
secretReference:
{{- .Values.datastore.basicAuth.usernameSecret | toYaml | nindent 8 }}
password:
secretReference:
{{- .Values.datastore.basicAuth.passwordSecret | toYaml | nindent 8 }}
{{- end }}
tlsConfig:
certificateAuthority:
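With this change the chart consumes two Secret references instead of piping the whole `basicAuth` value through; a hedged sketch of the values shape the template now expects (Secret names are illustrative):

    datastore:
      basicAuth:
        usernameSecret:
          name: datastore-auth
          namespace: kamaji-system
          keyPath: username
        passwordSecret:
          name: datastore-auth
          namespace: kamaji-system
          keyPath: password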


@@ -6,6 +6,10 @@ metadata:
{{- include "etcd.labels" . | nindent 4 }}
name: {{ include "etcd.csrConfigMapName" . }}
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": "hook-succeeded,hook-failed"
data:
ca-csr.json: |-
{


@@ -28,4 +28,8 @@ spec:
- --ignore-not-found=true
- {{ include "etcd.caSecretName" . }}
- {{ include "etcd.clientSecretName" . }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@@ -18,35 +18,13 @@ spec:
serviceAccountName: {{ include "etcd.serviceAccountName" . }}
restartPolicy: Never
initContainers:
- name: cfssl
image: cfssl/cfssl:latest
command:
- bash
- -c
- |-
cfssl gencert -initca /csr/ca-csr.json | cfssljson -bare /certs/ca &&
mv /certs/ca.pem /certs/ca.crt && mv /certs/ca-key.pem /certs/ca.key &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=peer-authentication /csr/peer-csr.json | cfssljson -bare /certs/peer &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=peer-authentication /csr/server-csr.json | cfssljson -bare /certs/server &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=client-authentication /csr/root-client-csr.json | cfssljson -bare /certs/root-client
volumeMounts:
- mountPath: /certs
name: certs
- mountPath: /csr
name: csr
- name: kubectl
image: {{ printf "clastix/kubectl:%s" (include "etcd.jobsTagKubeVersion" .) }}
command:
- sh
- -c
- |-
kubectl --namespace={{ .Release.Namespace }} delete secret --ignore-not-found=true {{ include "etcd.caSecretName" . }} {{ include "etcd.clientSecretName" . }} &&
kubectl --namespace={{ .Release.Namespace }} create secret generic {{ include "etcd.caSecretName" . }} --from-file=/certs/ca.crt --from-file=/certs/ca.key --from-file=/certs/peer-key.pem --from-file=/certs/peer.pem --from-file=/certs/server-key.pem --from-file=/certs/server.pem &&
kubectl --namespace={{ .Release.Namespace }} create secret tls {{ include "etcd.clientSecretName" . }} --key=/certs/root-client-key.pem --cert=/certs/root-client.pem &&
kubectl --namespace={{ .Release.Namespace }} rollout status sts/etcd --timeout=300s
volumeMounts:
- mountPath: /certs
name: certs
containers:
- command:
- bash
@@ -82,10 +60,11 @@ spec:
- name: root-certs
secret:
secretName: {{ include "etcd.clientSecretName" . }}
optional: true
- name: csr
configMap:
name: {{ include "etcd.csrConfigMapName" . }}
- name: certs
emptyDir: {}
secret:
secretName: {{ include "etcd.caSecretName" . }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@@ -0,0 +1,64 @@
{{- if .Values.etcd.deploy }}
apiVersion: batch/v1
kind: Job
metadata:
labels:
{{- include "etcd.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": "hook-succeeded"
name: "{{ .Release.Name }}-etcd-certs"
namespace: {{ .Release.Namespace }}
spec:
template:
metadata:
name: "{{ .Release.Name }}"
spec:
serviceAccountName: {{ include "etcd.serviceAccountName" . }}
restartPolicy: Never
initContainers:
- name: cfssl
image: cfssl/cfssl:latest
command:
- bash
- -c
- |-
cfssl gencert -initca /csr/ca-csr.json | cfssljson -bare /certs/ca &&
mv /certs/ca.pem /certs/ca.crt && mv /certs/ca-key.pem /certs/ca.key &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=peer-authentication /csr/peer-csr.json | cfssljson -bare /certs/peer &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=peer-authentication /csr/server-csr.json | cfssljson -bare /certs/server &&
cfssl gencert -ca=/certs/ca.crt -ca-key=/certs/ca.key -config=/csr/config.json -profile=client-authentication /csr/root-client-csr.json | cfssljson -bare /certs/root-client
volumeMounts:
- mountPath: /certs
name: certs
- mountPath: /csr
name: csr
containers:
- name: kubectl
image: {{ printf "clastix/kubectl:%s" (include "etcd.jobsTagKubeVersion" .) }}
command:
- sh
- -c
- |-
kubectl --namespace={{ .Release.Namespace }} delete secret --ignore-not-found=true {{ include "etcd.caSecretName" . }} {{ include "etcd.clientSecretName" . }} &&
kubectl --namespace={{ .Release.Namespace }} create secret generic {{ include "etcd.caSecretName" . }} --from-file=/certs/ca.crt --from-file=/certs/ca.key --from-file=/certs/peer-key.pem --from-file=/certs/peer.pem --from-file=/certs/server-key.pem --from-file=/certs/server.pem &&
kubectl --namespace={{ .Release.Namespace }} create secret tls {{ include "etcd.clientSecretName" . }} --key=/certs/root-client-key.pem --cert=/certs/root-client.pem
volumeMounts:
- mountPath: /certs
name: certs
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
volumes:
- name: csr
configMap:
name: {{ include "etcd.csrConfigMapName" . }}
- name: certs
emptyDir: {}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@@ -5,6 +5,9 @@ metadata:
labels:
{{- include "etcd.labels" . | nindent 4 }}
name: etcd-gen-certs-role
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
@@ -38,6 +41,9 @@ metadata:
{{- include "etcd.labels" . | nindent 4 }}
name: etcd-gen-certs-rolebiding
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role


@@ -5,5 +5,8 @@ metadata:
labels:
{{- include "etcd.labels" . | nindent 4 }}
name: {{ include "etcd.serviceAccountName" . }}
annotations:
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-5"
namespace: {{ .Release.Namespace }}
{{- end }}


@@ -81,6 +81,10 @@ spec:
volumeClaimTemplates:
- metadata:
name: data
{{- with .Values.etcd.persistence.customAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
storageClassName: {{ .Values.etcd.persistence.storageClassName }}
accessModes:


@@ -0,0 +1,50 @@
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
annotations:
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "kamaji.certificateName" . }}
labels:
{{- $data := . | mustMergeOverwrite (dict "instance" "mutating-webhook-configuration") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: kamaji-mutating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-kamaji-clastix-io-v1alpha1-datastore
failurePolicy: Fail
name: mdatastore.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- datastores
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-kamaji-clastix-io-v1alpha1-tenantcontrolplane
failurePolicy: Fail
name: mtenantcontrolplane.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- tenantcontrolplanes
sideEffects: None


@@ -66,6 +66,16 @@ rules:
- patch
- update
- watch
- apiGroups:
- batch
resources:
- jobs
verbs:
- create
- delete
- get
- list
- watch
- apiGroups:
- ""
resources:
@@ -114,12 +124,6 @@ rules:
- patch
- update
- watch
- apiGroups:
- kamaji.clastix.io
resources:
- datastores/finalizers
verbs:
- update
- apiGroups:
- kamaji.clastix.io
resources:


@@ -1,16 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "kamaji.fullname" . }}
labels:
{{- include "kamaji.labels" . | nindent 4 }}
namespace: {{ .Release.Namespace }}
spec:
type: {{ .Values.service.type }}
ports:
- name: https
port: {{ .Values.service.port }}
protocol: TCP
targetPort: https
selector:
{{- include "kamaji.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
labels:
{{- $data := . | mustMergeOverwrite (dict "component" "metrics") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: {{ include "kamaji.metricsServiceName" . }}
namespace: {{ .Release.Namespace }}
spec:
ports:
- port: 8080
name: metrics
protocol: TCP
targetPort: metrics
selector:
{{- include "kamaji.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
labels:
{{- $data := . | mustMergeOverwrite (dict "component" "webhook" "instance" "webhook-service") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
spec:
ports:
- port: 443
protocol: TCP
name: webhook-server
targetPort: webhook-server
selector:
{{- include "kamaji.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,21 @@
{{- if and (.Capabilities.APIVersions.Has "monitoring.coreos.com/v1") .Values.serviceMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
{{- $data := . | mustMergeOverwrite (dict "component" "servicemonitor") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: {{ include "kamaji.fullname" . }}
namespace: {{ .Release.Namespace }}
spec:
endpoints:
- path: /metrics
port: metrics
scheme: http
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "kamaji.name" . }}
{{- end }}
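Because the template also guards on the `monitoring.coreos.com/v1` API being served, enabling the monitor after the Prometheus Operator CRDs are installed is a single values flip; for example:

    helm upgrade kamaji clastix/kamaji -n kamaji-system --reuse-values --set serviceMonitor.enabled=true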


@@ -0,0 +1,70 @@
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
annotations:
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "kamaji.certificateName" . }}
labels:
{{- $data := . | mustMergeOverwrite (dict "instance" "validating-webhook-configuration") -}}
{{- include "kamaji.labels" $data | nindent 4 }}
name: kamaji-validating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate--v1-secret
failurePolicy: Ignore
name: vdatastoresecrets.kb.io
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- DELETE
resources:
- secrets
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-kamaji-clastix-io-v1alpha1-datastore
failurePolicy: Fail
name: vdatastore.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
- DELETE
resources:
- datastores
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: {{ include "kamaji.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-kamaji-clastix-io-v1alpha1-tenantcontrolplane
failurePolicy: Fail
name: vtenantcontrolplane.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- tenantcontrolplanes
sideEffects: None


@@ -9,14 +9,16 @@ image:
# -- The container image of the Kamaji controller.
repository: clastix/kamaji
pullPolicy: Always
# Overrides the image tag whose default is the chart appVersion.
tag: latest
# -- Overrides the image tag whose default is the chart appVersion.
tag:
# -- A list of extra arguments to add to the kamaji controller default ones
extraArgs: []
# -- Configuration file path alternative. (default "./kamaji.yaml")
configPath: "./kamaji.yaml"
serviceMonitor:
# -- Toggle the ServiceMonitor true if you have Prometheus Operator installed and configured
enabled: false
etcd:
# -- Install an etcd with enabled multi-tenancy along with Kamaji
@@ -31,7 +33,7 @@ etcd:
# -- Install specific etcd image
image:
repository: quay.io/coreos/etcd
tag: "v3.5.4"
tag: "v3.5.6"
pullPolicy: IfNotPresent
# -- The livenessProbe for the etcd container
@@ -55,6 +57,9 @@ etcd:
storageClass: ""
accessModes:
- ReadWriteOnce
# -- The custom annotations to add to the PVC
customAnnotations: {}
# volumeType: local
overrides:
caSecret:
@@ -67,11 +72,11 @@ etcd:
name: root-client-certs
# -- Name of the namespace where the secret which contains ETCD client certificates is. (default: "kamaji-system")
namespace: kamaji-system
# -- (map) Dictionary of the endpoints for the etcd cluster's members, key is the name of the etcd server. Don't define any port, inflected from .etcd.peerApiPort value.
# -- (map) Dictionary of the endpoints for the etcd cluster's members, key is the name of the etcd server. Don't define the protocol (TLS is automatically inflected), or any port, inflected from .etcd.peerApiPort value.
endpoints:
etcd-0: https://etcd-0.etcd.kamaji-system.svc.cluster.local
etcd-1: https://etcd-1.etcd.kamaji-system.svc.cluster.local
etcd-2: https://etcd-2.etcd.kamaji-system.svc.cluster.local
etcd-0: etcd-0.etcd.kamaji-system.svc.cluster.local
etcd-1: etcd-1.etcd.kamaji-system.svc.cluster.local
etcd-2: etcd-2.etcd.kamaji-system.svc.cluster.local
# -- ETCD Compaction interval (e.g. "5m0s"). (default: "0" (disabled))
compactionInterval: 0
@@ -127,10 +132,6 @@ securityContext:
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 8443
resources:
limits:
cpu: 200m

cmd/manager/cmd.go (new file, 217 lines)

@@ -0,0 +1,217 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package manager
import (
"flag"
"fmt"
"io"
"os"
goRuntime "runtime"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/klog/v2"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
cmdutils "github.com/clastix/kamaji/cmd/utils"
"github.com/clastix/kamaji/controllers"
"github.com/clastix/kamaji/controllers/soot"
"github.com/clastix/kamaji/internal"
datastoreutils "github.com/clastix/kamaji/internal/datastore/utils"
"github.com/clastix/kamaji/internal/webhook"
)
func NewCmd(scheme *runtime.Scheme) *cobra.Command {
// CLI flags
var (
metricsBindAddress string
healthProbeBindAddress string
leaderElect bool
tmpDirectory string
kineImage string
datastore string
managerNamespace string
managerServiceAccountName string
managerServiceName string
webhookCABundle []byte
migrateJobImage string
maxConcurrentReconciles int
webhookCAPath string
)
ctx := ctrl.SetupSignalHandler()
cmd := &cobra.Command{
Use: "manager",
Short: "Start the Kamaji Kubernetes Operator",
SilenceErrors: false,
SilenceUsage: true,
PreRunE: func(cmd *cobra.Command, args []string) (err error) {
// Avoid polluting Kamaji's stdout with noisy details from the underlying klog implementation
klog.SetOutput(io.Discard)
klog.LogToStderr(false)
if err = cmdutils.CheckFlags(cmd.Flags(), []string{"kine-image", "datastore", "migrate-image", "tmp-directory", "pod-namespace", "webhook-service-name", "serviceaccount-name", "webhook-ca-path"}...); err != nil {
return err
}
if webhookCABundle, err = os.ReadFile(webhookCAPath); err != nil {
return fmt.Errorf("unable to read webhook CA: %w", err)
}
if err = datastoreutils.CheckExists(ctx, scheme, datastore); err != nil {
return err
}
return nil
},
RunE: func(cmd *cobra.Command, args []string) error {
setupLog := ctrl.Log.WithName("setup")
setupLog.Info(fmt.Sprintf("Kamaji version %s %s%s", internal.GitTag, internal.GitCommit, internal.GitDirty))
setupLog.Info(fmt.Sprintf("Build from: %s", internal.GitRepo))
setupLog.Info(fmt.Sprintf("Build date: %s", internal.BuildTime))
setupLog.Info(fmt.Sprintf("Go Version: %s", goRuntime.Version()))
setupLog.Info(fmt.Sprintf("Go OS/Arch: %s/%s", goRuntime.GOOS, goRuntime.GOARCH))
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
MetricsBindAddress: metricsBindAddress,
Port: 9443,
HealthProbeBindAddress: healthProbeBindAddress,
LeaderElection: leaderElect,
LeaderElectionNamespace: managerNamespace,
LeaderElectionID: "799b98bc.clastix.io",
})
if err != nil {
setupLog.Error(err, "unable to start manager")
return err
}
tcpChannel := make(controllers.TenantControlPlaneChannel)
if err = (&controllers.DataStore{TenantControlPlaneTrigger: tcpChannel}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "DataStore")
return err
}
reconciler := &controllers.TenantControlPlaneReconciler{
Client: mgr.GetClient(),
APIReader: mgr.GetAPIReader(),
Config: controllers.TenantControlPlaneReconcilerConfig{
DefaultDataStoreName: datastore,
KineContainerImage: kineImage,
TmpBaseDirectory: tmpDirectory,
},
TriggerChan: tcpChannel,
KamajiNamespace: managerNamespace,
KamajiServiceAccount: managerServiceAccountName,
KamajiService: managerServiceName,
KamajiMigrateImage: migrateJobImage,
MaxConcurrentReconciles: maxConcurrentReconciles,
}
if err = reconciler.SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "Namespace")
return err
}
if err = (&webhook.Freeze{}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to register webhook", "webhook", "Freeze")
return err
}
if err = (&kamajiv1alpha1.DatastoreUsedSecret{}).SetupWithManager(ctx, mgr); err != nil {
setupLog.Error(err, "unable to create indexer", "indexer", "DatastoreUsedSecret")
return err
}
if err = (&kamajiv1alpha1.TenantControlPlaneStatusDataStore{}).SetupWithManager(ctx, mgr); err != nil {
setupLog.Error(err, "unable to create indexer", "indexer", "TenantControlPlaneStatusDataStore")
return err
}
if err = (&kamajiv1alpha1.TenantControlPlane{}).SetupWebhookWithManager(mgr, datastore); err != nil {
setupLog.Error(err, "unable to create webhook", "webhook", "TenantControlPlane")
return err
}
if err = (&kamajiv1alpha1.DataStore{}).SetupWebhookWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create webhook", "webhook", "DataStore")
return err
}
if err = (&soot.Manager{
MigrateCABundle: webhookCABundle,
MigrateServiceName: managerServiceName,
MigrateServiceNamespace: managerNamespace,
AdminClient: mgr.GetClient(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to set up soot manager")
return err
}
if err = mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up health check")
return err
}
if err = mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
setupLog.Error(err, "unable to set up ready check")
return err
}
setupLog.Info("starting manager")
if err = mgr.Start(ctx); err != nil {
setupLog.Error(err, "problem running manager")
return err
}
return nil
},
}
// Setting zap logger
zapfs := flag.NewFlagSet("zap", flag.ExitOnError)
opts := zap.Options{
Development: true,
}
opts.BindFlags(zapfs)
cmd.Flags().AddGoFlagSet(zapfs)
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
// Setting CLI flags
cmd.Flags().StringVar(&metricsBindAddress, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
cmd.Flags().StringVar(&healthProbeBindAddress, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
cmd.Flags().BoolVar(&leaderElect, "leader-elect", true, "Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.")
cmd.Flags().StringVar(&tmpDirectory, "tmp-directory", "/tmp/kamaji", "Directory which will be used to work with temporary files.")
cmd.Flags().StringVar(&kineImage, "kine-image", "rancher/kine:v0.9.2-amd64", "Container image along with tag to use for the Kine sidecar container (used only if etcd-storage-type is set to one of kine strategies).")
cmd.Flags().StringVar(&datastore, "datastore", "etcd", "The default DataStore that should be used by Kamaji to setup the required storage.")
cmd.Flags().StringVar(&migrateJobImage, "migrate-image", fmt.Sprintf("clastix/kamaji:v%s", internal.GitTag), "Specify the container image to launch when a TenantControlPlane is migrated to a new datastore.")
cmd.Flags().IntVar(&maxConcurrentReconciles, "max-concurrent-tcp-reconciles", 1, "Specify the number of workers for the Tenant Control Plane controller (beware of CPU consumption)")
cmd.Flags().StringVar(&managerNamespace, "pod-namespace", os.Getenv("POD_NAMESPACE"), "The Kubernetes Namespace the Operator is running in, required for the TenantControlPlane migration jobs.")
cmd.Flags().StringVar(&managerServiceName, "webhook-service-name", "kamaji-webhook-service", "The Kamaji webhook server Service name which is used to get validation webhooks, required for the TenantControlPlane migration jobs.")
cmd.Flags().StringVar(&managerServiceAccountName, "serviceaccount-name", os.Getenv("SERVICE_ACCOUNT"), "The name of the Kubernetes ServiceAccount the Operator runs with, required for the TenantControlPlane migration jobs.")
cmd.Flags().StringVar(&webhookCAPath, "webhook-ca-path", "/tmp/k8s-webhook-server/serving-certs/ca.crt", "Path to the Manager webhook server CA, required for the TenantControlPlane migration jobs.")
cobra.OnInitialize(func() {
viper.AutomaticEnv()
})
return cmd
}
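Any of these flags can also be supplied through the chart's `extraArgs` value shown in the values.yaml diff above; a sketch:

    extraArgs:
      - --max-concurrent-tcp-reconciles=4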

cmd/migrate/cmd.go (new file, 119 lines)

@@ -0,0 +1,119 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package migrate
import (
"context"
"fmt"
"strings"
"time"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/internal/datastore"
)
func NewCmd(scheme *runtime.Scheme) *cobra.Command {
// CLI flags
var (
tenantControlPlane string
targetDataStore string
timeout time.Duration
)
cmd := &cobra.Command{
Use: "migrate",
Short: "Migrate the data of a TenantControlPlane to another compatible DataStore",
SilenceUsage: true,
RunE: func(cmd *cobra.Command, args []string) error {
ctx, cancelFn := context.WithTimeout(context.Background(), timeout)
defer cancelFn()
log := ctrl.Log
log.Info("generating the controller-runtime client")
client, err := ctrlclient.New(ctrl.GetConfigOrDie(), ctrlclient.Options{
Scheme: scheme,
})
if err != nil {
return err
}
parts := strings.Split(tenantControlPlane, string(types.Separator))
if len(parts) != 2 {
return fmt.Errorf("non well-formed namespaced name for the tenant control plane, expected <NAMESPACE>/NAME, fot %s", tenantControlPlane)
}
log.Info("retrieving the TenantControlPlane")
tcp := &kamajiv1alpha1.TenantControlPlane{}
if err = client.Get(ctx, types.NamespacedName{Namespace: parts[0], Name: parts[1]}, tcp); err != nil {
return err
}
log.Info("retrieving the TenantControlPlane used DataStore")
originDs := &kamajiv1alpha1.DataStore{}
if err = client.Get(ctx, types.NamespacedName{Name: tcp.Status.Storage.DataStoreName}, originDs); err != nil {
return err
}
log.Info("retrieving the target DataStore")
targetDs := &kamajiv1alpha1.DataStore{}
if err = client.Get(ctx, types.NamespacedName{Name: targetDataStore}, targetDs); err != nil {
return err
}
if tcp.Status.Storage.Driver != string(targetDs.Spec.Driver) {
return fmt.Errorf("migration between DataStore with different driver is not supported")
}
if tcp.Status.Storage.DataStoreName == targetDs.GetName() {
return fmt.Errorf("cannot migrate to the same DataStore")
}
log.Info("generating the origin storage connection")
originConnection, err := datastore.NewStorageConnection(ctx, client, *originDs)
if err != nil {
return err
}
defer originConnection.Close()
log.Info("generating the target storage connection")
targetConnection, err := datastore.NewStorageConnection(ctx, client, *targetDs)
if err != nil {
return err
}
defer targetConnection.Close()
// Start migrating from the old Datastore to the new one
log.Info("migration from origin to target started")
if err = originConnection.Migrate(ctx, *tcp, targetConnection); err != nil {
return fmt.Errorf("unable to migrate data from %s to %s: %w", originDs.GetName(), targetDs.GetName(), err)
}
log.Info("migration completed")
return nil
},
}
cmd.Flags().StringVar(&tenantControlPlane, "tenant-control-plane", "", "Namespaced-name of the TenantControlPlane that must be migrated (e.g.: default/test)")
cmd.Flags().StringVar(&targetDataStore, "target-datastore", "", "Name of the Datastore to which the TenantControlPlane will be migrated")
cmd.Flags().DurationVar(&timeout, "timeout", 5*time.Minute, "Amount of time for the context timeout")
_ = cmd.MarkFlagRequired("tenant-control-plane")
_ = cmd.MarkFlagRequired("target-datastore")
return cmd
}
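Putting the flags together, an invocation could look like this (a sketch; the names reuse the DataStore samples later in this diff):

    kamaji migrate --tenant-control-plane default/test --target-datastore postgresql-silver --timeout 10m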

cmd/root.go (new file, 31 lines)

@@ -0,0 +1,31 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package cmd
import (
"math/rand"
"time"
"github.com/spf13/cobra"
_ "go.uber.org/automaxprocs" // Automatically set `GOMAXPROCS` to match Linux container CPU quota.
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)
func NewCmd(scheme *runtime.Scheme) *cobra.Command {
return &cobra.Command{
Use: "kamaji",
Short: "Build and operate Kubernetes at scale with a fraction of operational burden.",
PersistentPreRun: func(cmd *cobra.Command, args []string) {
// Seeding is required to ensure non-reproducibility of the certificates generated by Kamaji.
rand.Seed(time.Now().UnixNano())
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(kamajiv1alpha1.AddToScheme(scheme))
},
}
}

cmd/utils/check_flags.go (new file, 22 lines)

@@ -0,0 +1,22 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package utils
import (
"fmt"
"github.com/spf13/pflag"
)
func CheckFlags(flags *pflag.FlagSet, args ...string) error {
for _, arg := range args {
v, _ := flags.GetString(arg)
if len(v) == 0 {
return fmt.Errorf("expecting a value for --%s arg", arg)
}
}
return nil
}


@@ -0,0 +1,39 @@
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More document can be found at https://docs.cert-manager.io
# WARNING: Targets CertManager v1.0. Check https://cert-manager.io/docs/installation/upgrading/ for breaking changes.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
labels:
app.kubernetes.io/name: issuer
app.kubernetes.io/instance: selfsigned-issuer
app.kubernetes.io/component: certificate
app.kubernetes.io/created-by: operator
app.kubernetes.io/part-of: operator
app.kubernetes.io/managed-by: kustomize
name: selfsigned-issuer
namespace: system
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
labels:
app.kubernetes.io/name: certificate
app.kubernetes.io/instance: serving-cert
app.kubernetes.io/component: certificate
app.kubernetes.io/created-by: operator
app.kubernetes.io/part-of: operator
app.kubernetes.io/managed-by: kustomize
name: serving-cert # this name should match the one appeared in kustomizeconfig.yaml
namespace: system
spec:
# $(SERVICE_NAME) and $(SERVICE_NAMESPACE) will be substituted by kustomize
dnsNames:
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc.cluster.local
issuerRef:
kind: Issuer
name: selfsigned-issuer
secretName: webhook-server-cert # this secret will not be prefixed, since it's not managed by kustomize


@@ -0,0 +1,5 @@
resources:
- certificate.yaml
configurations:
- kustomizeconfig.yaml


@@ -0,0 +1,16 @@
# This configuration teaches kustomize how to update name references and perform variable substitution
nameReference:
- kind: Issuer
group: cert-manager.io
fieldSpecs:
- kind: Certificate
group: cert-manager.io
path: spec/issuerRef/name
varReference:
- kind: Certificate
group: cert-manager.io
path: spec/commonName
- kind: Certificate
group: cert-manager.io
path: spec/dnsNames


@@ -1,10 +1,9 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.6.1
controller-gen.kubebuilder.io/version: v0.9.2
creationTimestamp: null
name: datastores.kamaji.clastix.io
spec:
@@ -61,18 +60,20 @@ spec:
keyPath:
description: Name of the key for the given Secret reference
where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: Name is unique within a namespace to reference
description: name is unique within a namespace to reference
a secret resource.
type: string
namespace:
description: Namespace defines the space within which
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
username:
properties:
@@ -86,18 +87,20 @@ spec:
keyPath:
description: Name of the key for the given Secret reference
where the content is stored. This value is mandatory.
minLength: 1
type: string
name:
description: Name is unique within a namespace to reference
description: name is unique within a namespace to reference
a secret resource.
type: string
namespace:
description: Namespace defines the space within which
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- password
@@ -105,12 +108,17 @@ spec:
type: object
driver:
description: The driver to use to connect to the shared datastore.
enum:
- etcd
- MySQL
- PostgreSQL
type: string
endpoints:
description: List of the endpoints to connect to the shared datastore.
No need for protocol, just bare IP/FQDN and port.
items:
type: string
minItems: 1
type: array
tlsConfig:
description: Defines the TLS/SSL configuration required to connect
@@ -135,18 +143,20 @@ spec:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
minLength: 1
type: string
name:
description: Name is unique within a namespace to
description: name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: Namespace defines the space within which
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
privateKey:
properties:
@@ -161,18 +171,20 @@ spec:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
minLength: 1
type: string
name:
description: Name is unique within a namespace to
description: name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: Namespace defines the space within which
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- certificate
@@ -194,18 +206,20 @@ spec:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
minLength: 1
type: string
name:
description: Name is unique within a namespace to
description: name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: Namespace defines the space within which
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
privateKey:
properties:
@@ -220,18 +234,20 @@ spec:
description: Name of the key for the given Secret
reference where the content is stored. This value
is mandatory.
minLength: 1
type: string
name:
description: Name is unique within a namespace to
description: name is unique within a namespace to
reference a secret resource.
type: string
namespace:
description: Namespace defines the space within which
description: namespace defines the space within which
the secret name must be unique.
type: string
required:
- keyPath
type: object
x-kubernetes-map-type: atomic
type: object
required:
- certificate
@@ -261,9 +277,3 @@ spec:
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

File diff suppressed because it is too large.


@@ -3,17 +3,15 @@
# It should be run by config/default
resources:
- bases/kamaji.clastix.io_tenantcontrolplanes.yaml
- bases/kamaji.clastix.io_datastores.yaml
#+kubebuilder:scaffold:crdkustomizeresource
patchesStrategicMerge:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
#- patches/webhook_in_clusters.yaml
- patches/webhook_in_clusters.yaml
#+kubebuilder:scaffold:crdkustomizewebhookpatch
# [CERTMANAGER] To enable cert-manager, uncomment all the sections with [CERTMANAGER] prefix.
# patches here are for enabling the CA injection for each CRD
#- patches/cainjection_in_clusters.yaml
- patches/cainjection_in_clusters.yaml
- patches/cainjection_in_datastores.yaml
#+kubebuilder:scaffold:crdkustomizecainjectionpatch
# the following config is for teaching kustomize how to do kustomization for CRDs.


@@ -17,59 +17,42 @@ bases:
- ../rbac
- ../manager
- ../samples
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
#- ../webhook
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
#- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
#- ../prometheus
- ../webhook
- ../certmanager
- ../prometheus
patchesStrategicMerge:
# Protect the /metrics endpoint by putting it behind auth.
# If you want your controller-manager to expose the /metrics
# endpoint w/o any authn/z, please comment the following line.
- manager_auth_proxy_patch.yaml
# Mount the controller config file for loading manager configurations
# through a ComponentConfig type
#- manager_config_patch.yaml
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
#- manager_webhook_patch.yaml
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.
# Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.
# 'CERTMANAGER' needs to be enabled to use ca injection
#- webhookcainjection_patch.yaml
- manager_webhook_patch.yaml
- webhookcainjection_patch.yaml
# the following config is for teaching kustomize how to do var substitution
vars:
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
#- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
# objref:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert # this name should match the one in certificate.yaml
# fieldref:
# fieldpath: metadata.namespace
#- name: CERTIFICATE_NAME
# objref:
# kind: Certificate
# group: cert-manager.io
# version: v1
# name: serving-cert # this name should match the one in certificate.yaml
#- name: SERVICE_NAMESPACE # namespace of the service
# objref:
# kind: Service
# version: v1
# name: webhook-service
# fieldref:
# fieldpath: metadata.namespace
#- name: SERVICE_NAME
# objref:
# kind: Service
# version: v1
# name: webhook-service
- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
objref:
kind: Certificate
group: cert-manager.io
version: v1
name: serving-cert # this name should match the one in certificate.yaml
fieldref:
fieldpath: metadata.namespace
- name: CERTIFICATE_NAME
objref:
kind: Certificate
group: cert-manager.io
version: v1
name: serving-cert # this name should match the one in certificate.yaml
- name: SERVICE_NAMESPACE # namespace of the service
objref:
kind: Service
version: v1
name: webhook-service
fieldref:
fieldpath: metadata.namespace
- name: SERVICE_NAME
objref:
kind: Service
version: v1
name: webhook-service


@@ -25,3 +25,4 @@ spec:
- "--health-probe-bind-address=:8081"
- "--metrics-bind-address=127.0.0.1:8080"
- "--leader-elect"
- "--datastore=kamaji-etcd"


@@ -0,0 +1,23 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
- name: manager
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
volumes:
- name: cert
secret:
defaultMode: 420
secretName: webhook-server-cert


@@ -0,0 +1,29 @@
# This patch adds annotations to the admission webhook configs; the
# variables $(CERTIFICATE_NAMESPACE) and $(CERTIFICATE_NAME) will be substituted by kustomize.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/name: mutatingwebhookconfiguration
app.kubernetes.io/instance: mutating-webhook-configuration
app.kubernetes.io/component: webhook
app.kubernetes.io/created-by: operator
app.kubernetes.io/part-of: operator
app.kubernetes.io/managed-by: kustomize
name: mutating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/name: validatingwebhookconfiguration
app.kubernetes.io/instance: validating-webhook-configuration
app.kubernetes.io/component: webhook
app.kubernetes.io/created-by: operator
app.kubernetes.io/part-of: operator
app.kubernetes.io/managed-by: kustomize
name: validating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)

File diff suppressed because it is too large.


@@ -13,4 +13,4 @@ kind: Kustomization
images:
- name: controller
newName: clastix/kamaji
newTag: latest
newTag: v0.2.1


@@ -26,12 +26,26 @@ spec:
runAsNonRoot: true
containers:
- command:
- /manager
- /kamaji
args:
- manager
- --leader-elect
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
image: controller:latest
imagePullPolicy: Always
name: manager
ports:
- containerPort: 8080
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
livenessProbe:


@@ -1,2 +1,5 @@
resources:
- monitor.yaml
configurations:
- kustomizeconfig.yaml


@@ -0,0 +1,4 @@
varReference:
- kind: ServiceMonitor
group: monitoring.coreos.com
path: spec/namespaceSelector/matchNames


@@ -1,4 +1,3 @@
# Prometheus Monitor Service (Metrics)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
@@ -10,11 +9,11 @@ metadata:
spec:
endpoints:
- path: /metrics
port: https
scheme: https
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
tlsConfig:
insecureSkipVerify: true
port: metrics
scheme: http
namespaceSelector:
matchNames:
- $(SERVICE_NAMESPACE)
selector:
matchLabels:
control-plane: controller-manager


@@ -1,4 +1,3 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
@@ -18,6 +17,16 @@ rules:
- patch
- update
- watch
- apiGroups:
- batch
resources:
- jobs
verbs:
- create
- delete
- get
- list
- watch
- apiGroups:
- ""
resources:


@@ -5,9 +5,9 @@ metadata:
spec:
driver: etcd
endpoints:
- etcd-0.etcd.kamaji-system.svc:2379
- etcd-1.etcd.kamaji-system.svc:2379
- etcd-2.etcd.kamaji-system.svc:2379
- etcd-0.etcd.kamaji-system.svc.cluster.local:2379
- etcd-1.etcd.kamaji-system.svc.cluster.local:2379
- etcd-2.etcd.kamaji-system.svc.cluster.local:2379
basicAuth: null
tlsConfig:
certificateAuthority:


@@ -0,0 +1,34 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: mysql-bronze
spec:
driver: MySQL
endpoints:
- bronze.mysql-system.svc:3306
basicAuth:
username:
content: cm9vdA==
password:
secretReference:
name: mysql-bronze-config
namespace: mysql-system
keyPath: MYSQL_ROOT_PASSWORD
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: mysql-bronze-config
namespace: mysql-system
keyPath: "ca.crt"
clientCertificate:
certificate:
secretReference:
name: mysql-bronze-config
namespace: mysql-system
keyPath: "server.crt"
privateKey:
secretReference:
name: mysql-bronze-config
namespace: mysql-system
keyPath: "server.key"


@@ -1,34 +1,34 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: mysql
name: mysql-gold
spec:
driver: MySQL
endpoints:
- mariadb.kamaji-system.svc:3306
- gold.mysql-system.svc:3306
basicAuth:
username:
content: cm9vdA==
password:
secretReference:
name: mysql-config
namespace: kamaji-system
name: mysql-gold-config
namespace: mysql-system
keyPath: MYSQL_ROOT_PASSWORD
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: mysql-config
namespace: kamaji-system
name: mysql-gold-config
namespace: mysql-system
keyPath: "ca.crt"
clientCertificate:
certificate:
secretReference:
name: mysql-config
namespace: kamaji-system
name: mysql-gold-config
namespace: mysql-system
keyPath: "server.crt"
privateKey:
secretReference:
name: mysql-config
namespace: kamaji-system
name: mysql-gold-config
namespace: mysql-system
keyPath: "server.key"


@@ -0,0 +1,34 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: mysql-silver
spec:
driver: MySQL
endpoints:
- silver.mysql-system.svc:3306
basicAuth:
username:
content: cm9vdA==
password:
secretReference:
name: mysql-silver-config
namespace: mysql-system
keyPath: MYSQL_ROOT_PASSWORD
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: mysql-silver-config
namespace: mysql-system
keyPath: "ca.crt"
clientCertificate:
certificate:
secretReference:
name: mysql-silver-config
namespace: mysql-system
keyPath: "server.crt"
privateKey:
secretReference:
name: mysql-silver-config
namespace: mysql-system
keyPath: "server.key"


@@ -0,0 +1,37 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: postgresql-bronze
spec:
driver: PostgreSQL
endpoints:
- postgres-bronze-rw.postgres-system.svc:5432
basicAuth:
username:
secretReference:
name: postgres-bronze-superuser
namespace: postgres-system
keyPath: username
password:
secretReference:
name: postgres-bronze-superuser
namespace: postgres-system
keyPath: password
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: postgres-bronze-ca
namespace: postgres-system
keyPath: ca.crt
clientCertificate:
certificate:
secretReference:
name: postgres-bronze-root-cert
namespace: postgres-system
keyPath: tls.crt
privateKey:
secretReference:
name: postgres-bronze-root-cert
namespace: postgres-system
keyPath: tls.key


@@ -1,37 +1,37 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: postgresql
name: postgresql-gold
spec:
driver: PostgreSQL
endpoints:
- postgresql-rw.kamaji-system.svc:5432
- postgres-gold-rw.postgres-system.svc:5432
basicAuth:
username:
secretReference:
name: postgresql-superuser
namespace: kamaji-system
name: postgres-gold-superuser
namespace: postgres-system
keyPath: username
password:
secretReference:
name: postgresql-superuser
namespace: kamaji-system
name: postgres-gold-superuser
namespace: postgres-system
keyPath: password
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: postgresql-ca
namespace: kamaji-system
name: postgres-gold-ca
namespace: postgres-system
keyPath: ca.crt
clientCertificate:
certificate:
secretReference:
name: postgres-root-cert
namespace: kamaji-system
name: postgres-gold-root-cert
namespace: postgres-system
keyPath: tls.crt
privateKey:
secretReference:
name: postgres-root-cert
namespace: kamaji-system
name: postgres-gold-root-cert
namespace: postgres-system
keyPath: tls.key


@@ -0,0 +1,37 @@
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
name: postgresql-silver
spec:
driver: PostgreSQL
endpoints:
- postgres-silver-rw.postgres-system.svc:5432
basicAuth:
username:
secretReference:
name: postgres-silver-superuser
namespace: postgres-system
keyPath: username
password:
secretReference:
name: postgres-silver-superuser
namespace: postgres-system
keyPath: password
tlsConfig:
certificateAuthority:
certificate:
secretReference:
name: postgres-silver-ca
namespace: postgres-system
keyPath: ca.crt
clientCertificate:
certificate:
secretReference:
name: postgres-silver-root-cert
namespace: postgres-system
keyPath: tls.crt
privateKey:
secretReference:
name: postgres-silver-root-cert
namespace: postgres-system
keyPath: tls.key


@@ -9,7 +9,7 @@ spec:
service:
serviceType: LoadBalancer
kubernetes:
version: "v1.23.1"
version: "v1.25.4"
kubelet:
cgroupfs: cgroupfs
admissionControllers:


@@ -0,0 +1,6 @@
resources:
- manifests.yaml
- service.yaml
configurations:
- kustomizeconfig.yaml


@@ -0,0 +1,25 @@
# the following config teaches kustomize where to look when substituting vars.
# It requires kustomize v2.1.0 or newer to work properly.
nameReference:
- kind: Service
version: v1
fieldSpecs:
- kind: MutatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/name
- kind: ValidatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/name
namespace:
- kind: MutatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/namespace
create: true
- kind: ValidatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/namespace
create: true
varReference:
- path: metadata/annotations


@@ -0,0 +1,114 @@
---
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
creationTimestamp: null
name: mutating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /mutate-kamaji-clastix-io-v1alpha1-datastore
failurePolicy: Fail
name: mdatastore.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- datastores
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /mutate-kamaji-clastix-io-v1alpha1-tenantcontrolplane
failurePolicy: Fail
name: mtenantcontrolplane.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- tenantcontrolplanes
sideEffects: None
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
creationTimestamp: null
name: validating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /validate--v1-secret
failurePolicy: Ignore
name: vdatastoresecrets.kb.io
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- DELETE
resources:
- secrets
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /validate-kamaji-clastix-io-v1alpha1-datastore
failurePolicy: Fail
name: vdatastore.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
- DELETE
resources:
- datastores
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /validate-kamaji-clastix-io-v1alpha1-tenantcontrolplane
failurePolicy: Fail
name: vtenantcontrolplane.kb.io
rules:
- apiGroups:
- kamaji.clastix.io
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- tenantcontrolplanes
sideEffects: None


@@ -0,0 +1,20 @@
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: service
app.kubernetes.io/instance: webhook-service
app.kubernetes.io/component: webhook
app.kubernetes.io/created-by: operator
app.kubernetes.io/part-of: operator
app.kubernetes.io/managed-by: kustomize
name: webhook-service
namespace: system
spec:
ports:
- port: 443
protocol: TCP
targetPort: 9443
selector:
control-plane: controller-manager


@@ -6,15 +6,20 @@ package controllers
import (
"context"
"github.com/pkg/errors"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/fields"
k8stypes "k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/client-go/util/workqueue"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)
@@ -25,39 +30,32 @@ type DataStore struct {
// if a Data Source is updated we have to be sure that the reconciliation of the certificates content
// for each Tenant Control Plane is put in place properly.
TenantControlPlaneTrigger TenantControlPlaneChannel
// ResourceName is the DataStore object that should be watched for changes.
ResourceName string
}
//+kubebuilder:rbac:groups=kamaji.clastix.io,resources=datastores,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=kamaji.clastix.io,resources=datastores/status,verbs=get;update;patch
func (r *DataStore) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
ds := kamajiv1alpha1.DataStore{}
if err := r.client.Get(ctx, request.NamespacedName, &ds); err != nil {
log := log.FromContext(ctx)
ds := &kamajiv1alpha1.DataStore{}
if err := r.client.Get(ctx, request.NamespacedName, ds); err != nil {
if k8serrors.IsNotFound(err) {
return reconcile.Result{}, nil
}
log.Error(err, "unable to retrieve the request")
return reconcile.Result{}, err
}
// A Data Source can trigger several Tenant Control Planes and requires a minimum validation:
// we have to ensure the data provided by the Data Source is valid and referencing an existing Secret object.
if _, err := ds.Spec.TLSConfig.CertificateAuthority.Certificate.GetContent(ctx, r.client); err != nil {
return reconcile.Result{}, errors.Wrap(err, "invalid Certificate Authority data")
}
if _, err := ds.Spec.TLSConfig.ClientCertificate.Certificate.GetContent(ctx, r.client); err != nil {
return reconcile.Result{}, errors.Wrap(err, "invalid Client Certificate data")
}
if _, err := ds.Spec.TLSConfig.ClientCertificate.PrivateKey.GetContent(ctx, r.client); err != nil {
return reconcile.Result{}, errors.Wrap(err, "invalid Client Certificate data")
}
tcpList := kamajiv1alpha1.TenantControlPlaneList{}
if err := r.client.List(ctx, &tcpList); err != nil {
if err := r.client.List(ctx, &tcpList, client.MatchingFieldsSelector{
Selector: fields.OneTermEqualSelector(kamajiv1alpha1.TenantControlPlaneUsedDataStoreKey, ds.GetName()),
}); err != nil {
log.Error(err, "cannot retrieve list of the Tenant Control Plane using the following instance")
return reconcile.Result{}, err
}
// Updating the status with the list of Tenant Control Plane using the following Data Source
@@ -68,7 +66,9 @@ func (r *DataStore) Reconcile(ctx context.Context, request reconcile.Request) (r
ds.Status.UsedBy = tcpSets.List()
if err := r.client.Status().Update(ctx, &ds); err != nil {
if err := r.client.Status().Update(ctx, ds); err != nil {
log.Error(err, "cannot update the status for the given instance")
return reconcile.Result{}, err
}
// Triggering the reconciliation of the Tenant Control Plane upon a Secret change
@@ -88,9 +88,31 @@ func (r *DataStore) InjectClient(client client.Client) error {
}
func (r *DataStore) SetupWithManager(mgr controllerruntime.Manager) error {
enqueueFn := func(tcp *kamajiv1alpha1.TenantControlPlane, limitingInterface workqueue.RateLimitingInterface) {
if dataStoreName := tcp.Status.Storage.DataStoreName; len(dataStoreName) > 0 {
limitingInterface.AddRateLimited(reconcile.Request{
NamespacedName: k8stypes.NamespacedName{
Name: dataStoreName,
},
})
}
}
//nolint:forcetypeassert
return controllerruntime.NewControllerManagedBy(mgr).
For(&kamajiv1alpha1.DataStore{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
return object.GetName() == r.ResourceName
}))).
For(&kamajiv1alpha1.DataStore{}, builder.WithPredicates(
predicate.ResourceVersionChangedPredicate{},
)).
Watches(&source.Kind{Type: &kamajiv1alpha1.TenantControlPlane{}}, handler.Funcs{
CreateFunc: func(createEvent event.CreateEvent, limitingInterface workqueue.RateLimitingInterface) {
enqueueFn(createEvent.Object.(*kamajiv1alpha1.TenantControlPlane), limitingInterface)
},
UpdateFunc: func(updateEvent event.UpdateEvent, limitingInterface workqueue.RateLimitingInterface) {
enqueueFn(updateEvent.ObjectOld.(*kamajiv1alpha1.TenantControlPlane), limitingInterface)
enqueueFn(updateEvent.ObjectNew.(*kamajiv1alpha1.TenantControlPlane), limitingInterface)
},
DeleteFunc: func(deleteEvent event.DeleteEvent, limitingInterface workqueue.RateLimitingInterface) {
enqueueFn(deleteEvent.Object.(*kamajiv1alpha1.TenantControlPlane), limitingInterface)
},
}).
Complete(r)
}
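A note on the List call above: filtering with MatchingFieldsSelector on kamajiv1alpha1.TenantControlPlaneUsedDataStoreKey only works if a matching field index has been registered against the manager's cache at startup. That registration is not visible in this compare view; the following is a minimal, hypothetical sketch of what it could look like (function name and call site are assumptions):

package main

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/manager"

	kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)

// indexDataStoreUsage registers a field index so that TenantControlPlane
// objects can be listed by the DataStore they are currently using.
func indexDataStoreUsage(ctx context.Context, mgr manager.Manager) error {
	return mgr.GetFieldIndexer().IndexField(ctx, &kamajiv1alpha1.TenantControlPlane{},
		kamajiv1alpha1.TenantControlPlaneUsedDataStoreKey,
		func(object client.Object) []string {
			tcp := object.(*kamajiv1alpha1.TenantControlPlane) //nolint:forcetypeassert
			// Index only instances that already report a datastore in their status.
			if name := tcp.Status.Storage.DataStoreName; len(name) > 0 {
				return []string{name}
			}
			return nil
		})
}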


@@ -0,0 +1,10 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package finalizers
const (
// DatastoreFinalizer carries a misleadingly generic name: it actually guards clean-up of the underlying datastore.
DatastoreFinalizer = "finalizer.kamaji.clastix.io"
SootFinalizer = "finalizer.kamaji.clastix.io/soot"
)
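For context, a hedged sketch of how these constants are typically consumed with controller-runtime's controllerutil helpers; the helper function below is illustrative and not part of this changeset:

package main

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
	"github.com/clastix/kamaji/controllers/finalizers"
)

// ensureDatastoreFinalizer adds the datastore finalizer when missing, so the
// TenantControlPlane cannot disappear before its datastore clean-up has run.
func ensureDatastoreFinalizer(ctx context.Context, c client.Client, tcp *kamajiv1alpha1.TenantControlPlane) error {
	if controllerutil.ContainsFinalizer(tcp, finalizers.DatastoreFinalizer) {
		return nil
	}
	controllerutil.AddFinalizer(tcp, finalizers.DatastoreFinalizer)
	return c.Update(ctx, tcp)
}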


@@ -10,83 +10,100 @@ import (
"github.com/google/uuid"
k8stypes "k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/controllers/finalizers"
"github.com/clastix/kamaji/internal/datastore"
"github.com/clastix/kamaji/internal/resources"
ds "github.com/clastix/kamaji/internal/resources/datastore"
"github.com/clastix/kamaji/internal/resources/konnectivity"
"github.com/clastix/kamaji/internal/sql"
)
type GroupResourceBuilderConfiguration struct {
client client.Client
log logr.Logger
tcpReconcilerConfig TenantControlPlaneReconcilerConfig
tenantControlPlane kamajiv1alpha1.TenantControlPlane
DBConnection sql.DBConnection
DataStore kamajiv1alpha1.DataStore
client client.Client
log logr.Logger
tcpReconcilerConfig TenantControlPlaneReconcilerConfig
tenantControlPlane kamajiv1alpha1.TenantControlPlane
Connection datastore.Connection
DataStore kamajiv1alpha1.DataStore
KamajiNamespace string
KamajiServiceAccount string
KamajiService string
KamajiMigrateImage string
}
type GroupDeleteableResourceBuilderConfiguration struct {
type GroupDeletableResourceBuilderConfiguration struct {
client client.Client
log logr.Logger
tcpReconcilerConfig TenantControlPlaneReconcilerConfig
tenantControlPlane kamajiv1alpha1.TenantControlPlane
DBConnection sql.DBConnection
connection datastore.Connection
}
// GetResources returns a list of resources that will be used to provide tenant control planes
// Currently there is only a default approach
// TODO: the idea of this function is to become a factory to return the group of resources according to the given configuration.
func GetResources(config GroupResourceBuilderConfiguration, dataStore kamajiv1alpha1.DataStore) []resources.Resource {
return getDefaultResources(config, dataStore)
func GetResources(config GroupResourceBuilderConfiguration) []resources.Resource {
return getDefaultResources(config)
}
// GetDeletableResources returns a list of resources that have to be deleted when tenant control planes are deleted
// Currently there is only a default approach
// TODO: the idea of this function is to become a factory to return the group of deleteable resources according to the given configuration.
func GetDeletableResources(config GroupDeleteableResourceBuilderConfiguration, dataStore kamajiv1alpha1.DataStore) []resources.DeleteableResource {
return getDefaultDeleteableResources(config, dataStore)
// TODO: the idea of this function is to become a factory to return the group of deletable resources according to the given configuration.
func GetDeletableResources(tcp *kamajiv1alpha1.TenantControlPlane, config GroupDeletableResourceBuilderConfiguration) []resources.DeletableResource {
var res []resources.DeletableResource
if controllerutil.ContainsFinalizer(tcp, finalizers.DatastoreFinalizer) {
res = append(res, &ds.Setup{
Client: config.client,
Connection: config.connection,
})
}
return res
}
func getDefaultResources(config GroupResourceBuilderConfiguration, dataStore kamajiv1alpha1.DataStore) []resources.Resource {
resources := append(getUpgradeResources(config.client, config.tenantControlPlane), getKubernetesServiceResources(config.client, config.tenantControlPlane)...)
resources = append(resources, getKubeadmConfigResources(config.client, getTmpDirectory(config.tcpReconcilerConfig.TmpBaseDirectory, config.tenantControlPlane), dataStore)...)
resources = append(resources, getKubernetesCertificatesResources(config.client, config.log, config.tcpReconcilerConfig, config.tenantControlPlane)...)
resources = append(resources, getKubeconfigResources(config.client, config.log, config.tcpReconcilerConfig, config.tenantControlPlane)...)
resources = append(resources, getKubernetesStorageResources(config.client, config.log, config.tcpReconcilerConfig, config.DBConnection, config.tenantControlPlane, dataStore)...)
resources = append(resources, getInternalKonnectivityResources(config.client, config.log, config.tcpReconcilerConfig, config.tenantControlPlane)...)
resources = append(resources, getKubernetesDeploymentResources(config.client, config.tcpReconcilerConfig, dataStore)...)
resources = append(resources, getKubernetesIngressResources(config.client, config.tenantControlPlane)...)
resources = append(resources, getKubeadmPhaseResources(config.client, config.log, config.tenantControlPlane)...)
resources = append(resources, getKubeadmAddonResources(config.client, config.log, config.tenantControlPlane)...)
resources = append(resources, getExternalKonnectivityResources(config.client, config.log, config.tcpReconcilerConfig, config.tenantControlPlane)...)
func getDefaultResources(config GroupResourceBuilderConfiguration) []resources.Resource {
resources := getDataStoreMigratingResources(config.client, config.KamajiNamespace, config.KamajiMigrateImage, config.KamajiServiceAccount, config.KamajiService)
resources = append(resources, getUpgradeResources(config.client)...)
resources = append(resources, getKubernetesServiceResources(config.client)...)
resources = append(resources, getKubeadmConfigResources(config.client, getTmpDirectory(config.tcpReconcilerConfig.TmpBaseDirectory, config.tenantControlPlane), config.DataStore)...)
resources = append(resources, getKubernetesCertificatesResources(config.client, config.tcpReconcilerConfig, config.tenantControlPlane)...)
resources = append(resources, getKubeconfigResources(config.client, config.tcpReconcilerConfig, config.tenantControlPlane)...)
resources = append(resources, getKubernetesStorageResources(config.client, config.Connection, config.DataStore)...)
resources = append(resources, getKonnectivityServerRequirementsResources(config.client)...)
resources = append(resources, getKubernetesDeploymentResources(config.client, config.tcpReconcilerConfig, config.DataStore)...)
resources = append(resources, getKonnectivityServerPatchResources(config.client)...)
resources = append(resources, getDataStoreMigratingCleanup(config.client, config.KamajiNamespace)...)
resources = append(resources, getKubernetesIngressResources(config.client)...)
return resources
}
func getDefaultDeleteableResources(config GroupDeleteableResourceBuilderConfiguration, dataStore kamajiv1alpha1.DataStore) []resources.DeleteableResource {
switch dataStore.Spec.Driver {
case kamajiv1alpha1.EtcdDriver:
return []resources.DeleteableResource{
&resources.ETCDSetupResource{
Client: config.client,
Log: config.log,
DataStore: dataStore,
},
}
case kamajiv1alpha1.KineMySQLDriver, kamajiv1alpha1.KinePostgreSQLDriver:
return []resources.DeleteableResource{
&resources.SQLSetup{
Client: config.client,
DBConnection: config.DBConnection,
},
}
default:
return []resources.DeleteableResource{}
func getDataStoreMigratingCleanup(c client.Client, kamajiNamespace string) []resources.Resource {
return []resources.Resource{
&ds.Migrate{
Client: c,
KamajiNamespace: kamajiNamespace,
ShouldCleanUp: true,
},
}
}
func getUpgradeResources(c client.Client, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
func getDataStoreMigratingResources(c client.Client, kamajiNamespace, migrateImage string, kamajiServiceAccount, kamajiService string) []resources.Resource {
return []resources.Resource{
&ds.Migrate{
Client: c,
MigrateImage: migrateImage,
KamajiNamespace: kamajiNamespace,
KamajiServiceAccount: kamajiServiceAccount,
KamajiServiceName: kamajiService,
},
}
}
func getUpgradeResources(c client.Client) []resources.Resource {
return []resources.Resource{
&resources.KubernetesUpgrade{
Client: c,
@@ -94,7 +111,7 @@ func getUpgradeResources(c client.Client, tenantControlPlane kamajiv1alpha1.Tena
}
}
func getKubernetesServiceResources(c client.Client, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
func getKubernetesServiceResources(c client.Client) []resources.Resource {
return []resources.Resource{
&resources.KubernetesServiceResource{
Client: c,
@@ -121,133 +138,88 @@ func getKubeadmConfigResources(c client.Client, tmpDirectory string, dataStore k
}
}
func getKubernetesCertificatesResources(c client.Client, log logr.Logger, tcpReconcilerConfig TenantControlPlaneReconcilerConfig, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
func getKubernetesCertificatesResources(c client.Client, tcpReconcilerConfig TenantControlPlaneReconcilerConfig, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
return []resources.Resource{
&resources.CACertificate{
Client: c,
Log: log,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
},
&resources.FrontProxyCACertificate{
Client: c,
Log: log,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
},
&resources.SACertificate{
Client: c,
Log: log,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
},
&resources.APIServerCertificate{
Client: c,
Log: log,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
},
&resources.APIServerKubeletClientCertificate{
Client: c,
Log: log,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
},
&resources.FrontProxyClientCertificate{
Client: c,
Log: log,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
},
}
}
func getKubeconfigResources(c client.Client, log logr.Logger, tcpReconcilerConfig TenantControlPlaneReconcilerConfig, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
func getKubeconfigResources(c client.Client, tcpReconcilerConfig TenantControlPlaneReconcilerConfig, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
return []resources.Resource{
&resources.KubeconfigResource{
Name: "admin-kubeconfig",
Client: c,
Log: log,
KubeConfigFileName: resources.AdminKubeConfigFileName,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
},
&resources.KubeconfigResource{
Name: "controller-manager-kubeconfig",
Client: c,
Log: log,
KubeConfigFileName: resources.ControllerManagerKubeConfigFileName,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
},
&resources.KubeconfigResource{
Name: "scheduler-kubeconfig",
Client: c,
Log: log,
KubeConfigFileName: resources.SchedulerKubeConfigFileName,
TmpDirectory: getTmpDirectory(tcpReconcilerConfig.TmpBaseDirectory, tenantControlPlane),
},
}
}
func getKubernetesStorageResources(c client.Client, log logr.Logger, tcpReconcilerConfig TenantControlPlaneReconcilerConfig, dbConnection sql.DBConnection, tenantControlPlane kamajiv1alpha1.TenantControlPlane, ds kamajiv1alpha1.DataStore) []resources.Resource {
switch ds.Spec.Driver {
case kamajiv1alpha1.EtcdDriver:
return []resources.Resource{
&resources.ETCDCACertificatesResource{
Name: "etcd-ca-certificates",
Client: c,
Log: log,
DataStore: ds,
},
&resources.ETCDCertificatesResource{
Name: "etcd-certificates",
Client: c,
Log: log,
},
&resources.ETCDSetupResource{
Client: c,
Log: log,
DataStore: ds,
},
}
case kamajiv1alpha1.KineMySQLDriver, kamajiv1alpha1.KinePostgreSQLDriver:
return []resources.Resource{
&resources.SQLStorageConfig{
Client: c,
Name: "sql-config",
Host: dbConnection.GetHost(),
Port: dbConnection.GetPort(),
Driver: dbConnection.Driver(),
},
&resources.SQLSetup{
Client: c,
DBConnection: dbConnection,
Driver: dbConnection.Driver(),
},
&resources.SQLCertificate{
Client: c,
DataStore: ds,
},
}
default:
return []resources.Resource{}
func getKubernetesStorageResources(c client.Client, dbConnection datastore.Connection, datastore kamajiv1alpha1.DataStore) []resources.Resource {
return []resources.Resource{
&ds.Config{
Client: c,
ConnString: dbConnection.GetConnectionString(),
DataStore: datastore,
},
&ds.Setup{
Client: c,
Connection: dbConnection,
DataStore: datastore,
},
&ds.Certificate{
Client: c,
DataStore: datastore,
},
}
}
func getKubernetesDeploymentResources(c client.Client, tcpReconcilerConfig TenantControlPlaneReconcilerConfig, dataStore kamajiv1alpha1.DataStore) []resources.Resource {
var endpoints []string
switch dataStore.Spec.Driver {
case kamajiv1alpha1.EtcdDriver:
endpoints = dataStore.Spec.Endpoints
default:
endpoints = []string{"127.0.0.1:2379"}
}
return []resources.Resource{
&resources.KubernetesDeploymentResource{
Client: c,
ETCDEndpoints: endpoints,
DataStoreDriver: dataStore.Spec.Driver,
DataStore: dataStore,
KineContainerImage: tcpReconcilerConfig.KineContainerImage,
},
}
}
func getKubernetesIngressResources(c client.Client, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
func getKubernetesIngressResources(c client.Client) []resources.Resource {
return []resources.Resource{
&resources.KubernetesIngressResource{
Client: c,
@@ -255,86 +227,26 @@ func getKubernetesIngressResources(c client.Client, tenantControlPlane kamajiv1a
}
}
func getKubeadmPhaseResources(c client.Client, log logr.Logger, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
func GetExternalKonnectivityResources(c client.Client) []resources.Resource {
return []resources.Resource{
&resources.KubeadmPhase{
Name: "upload-config-kubeadm",
Client: c,
Log: log,
Phase: resources.PhaseUploadConfigKubeadm,
},
&resources.KubeadmPhase{
Name: "upload-config-kubelet",
Client: c,
Log: log,
Phase: resources.PhaseUploadConfigKubelet,
},
&resources.KubeadmPhase{
Name: "bootstrap-token",
Client: c,
Log: log,
Phase: resources.PhaseBootstrapToken,
},
&konnectivity.Agent{Client: c},
&konnectivity.ServiceAccountResource{Client: c},
&konnectivity.ClusterRoleBindingResource{Client: c},
}
}
func getKubeadmAddonResources(c client.Client, log logr.Logger, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
func getKonnectivityServerRequirementsResources(c client.Client) []resources.Resource {
return []resources.Resource{
&resources.KubeadmAddonResource{
Name: "coredns",
Client: c,
Log: log,
KubeadmAddon: resources.AddonCoreDNS,
},
&resources.KubeadmAddonResource{
Name: "kubeproxy",
Client: c,
Log: log,
KubeadmAddon: resources.AddonKubeProxy,
},
&konnectivity.EgressSelectorConfigurationResource{Client: c},
&konnectivity.CertificateResource{Client: c},
&konnectivity.KubeconfigResource{Client: c},
}
}
func getExternalKonnectivityResources(c client.Client, log logr.Logger, tcpReconcilerConfig TenantControlPlaneReconcilerConfig, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
func getKonnectivityServerPatchResources(c client.Client) []resources.Resource {
return []resources.Resource{
&konnectivity.ServiceAccountResource{
Client: c,
Name: "konnectivity-sa",
},
&konnectivity.ClusterRoleBindingResource{
Client: c,
Name: "konnectivity-clusterrolebinding",
},
&konnectivity.KubernetesDeploymentResource{
Client: c,
Name: "konnectivity-deployment",
},
&konnectivity.ServiceResource{
Client: c,
Name: "konnectivity-service",
},
&konnectivity.Agent{
Client: c,
Name: "konnectivity-agent",
},
}
}
func getInternalKonnectivityResources(c client.Client, log logr.Logger, tcpReconcilerConfig TenantControlPlaneReconcilerConfig, tenantControlPlane kamajiv1alpha1.TenantControlPlane) []resources.Resource {
return []resources.Resource{
&konnectivity.EgressSelectorConfigurationResource{
Client: c,
Name: "konnectivity-egress-selector-configuration",
},
&konnectivity.CertificateResource{
Client: c,
Log: log,
Name: "konnectivity-certificate",
},
&konnectivity.KubeconfigResource{
Client: c,
Name: "konnectivity-kubeconfig",
},
&konnectivity.KubernetesDeploymentResource{Client: c},
&konnectivity.ServiceResource{Client: c},
}
}


@@ -0,0 +1,89 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"context"
"github.com/go-logr/logr"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/kubeadm"
"github.com/clastix/kamaji/internal/resources"
"github.com/clastix/kamaji/internal/resources/addons"
)
type CoreDNS struct {
logger logr.Logger
AdminClient client.Client
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
TriggerChannel chan event.GenericEvent
}
func (c *CoreDNS) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
tcp, err := c.GetTenantControlPlaneFunc()
if err != nil {
c.logger.Error(err, "cannot retrieve TenantControlPlane")
return reconcile.Result{}, err
}
c.logger.Info("start processing")
resource := &addons.CoreDNS{Client: c.AdminClient}
result, handlingErr := resources.Handle(ctx, resource, tcp)
if handlingErr != nil {
c.logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
return reconcile.Result{}, handlingErr
}
if result == controllerutil.OperationResultNone {
c.logger.Info("reconciliation completed")
return reconcile.Result{}, nil
}
if err = utils.UpdateStatus(ctx, c.AdminClient, tcp, resource); err != nil {
c.logger.Error(err, "update status failed", "resource", resource.GetName())
return reconcile.Result{}, err
}
c.logger.Info("reconciliation processed")
return reconcile.Result{}, nil
}
func (c *CoreDNS) SetupWithManager(mgr manager.Manager) error {
c.logger = mgr.GetLogger().WithName("coredns")
c.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).
For(&rbacv1.ClusterRoleBinding{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
return object.GetName() == kubeadm.CoreDNSClusterRoleBindingName
}))).
Watches(&source.Channel{Source: c.TriggerChannel}, &handler.EnqueueRequestForObject{}).
Owns(&rbacv1.ClusterRole{}).
Owns(&corev1.ServiceAccount{}).
Owns(&corev1.Service{}).
Owns(&corev1.ConfigMap{}).
Owns(&appsv1.Deployment{}).
Complete(c)
}


@@ -0,0 +1,112 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"context"
"github.com/go-logr/logr"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/api/rbac/v1"
"k8s.io/apimachinery/pkg/types"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
"github.com/clastix/kamaji/controllers"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/resources"
"github.com/clastix/kamaji/internal/resources/konnectivity"
)
type KonnectivityAgent struct {
logger logr.Logger
AdminClient client.Client
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
TriggerChannel chan event.GenericEvent
}
func (k *KonnectivityAgent) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
tcp, err := k.GetTenantControlPlaneFunc()
if err != nil {
k.logger.Error(err, "cannot retrieve TenantControlPlane")
return reconcile.Result{}, err
}
for _, resource := range controllers.GetExternalKonnectivityResources(k.AdminClient) {
k.logger.Info("start processing", "resource", resource.GetName())
result, handlingErr := resources.Handle(ctx, resource, tcp)
if handlingErr != nil {
k.logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
return reconcile.Result{}, handlingErr
}
if result == controllerutil.OperationResultNone {
k.logger.Info("resource processed", "resource", resource.GetName())
continue
}
if err = utils.UpdateStatus(ctx, k.AdminClient, tcp, resource); err != nil {
k.logger.Error(err, "update status failed", "resource", resource.GetName())
return reconcile.Result{}, err
}
}
k.logger.Info("reconciliation completed")
return reconcile.Result{}, nil
}
func (k *KonnectivityAgent) SetupWithManager(mgr manager.Manager) error {
k.logger = mgr.GetLogger().WithName("konnectivity_agent")
k.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).
For(&appsv1.DaemonSet{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
return object.GetName() == konnectivity.AgentName && object.GetNamespace() == konnectivity.AgentNamespace
}))).
Watches(&source.Kind{Type: &corev1.ServiceAccount{}}, handler.EnqueueRequestsFromMapFunc(func(object client.Object) []reconcile.Request {
if object.GetName() == konnectivity.AgentName && object.GetNamespace() == konnectivity.AgentNamespace {
return []reconcile.Request{
{
NamespacedName: types.NamespacedName{
Namespace: object.GetNamespace(),
Name: object.GetName(),
},
},
}
}
return nil
})).
Watches(&source.Kind{Type: &v1.ClusterRoleBinding{}}, handler.EnqueueRequestsFromMapFunc(func(object client.Object) []reconcile.Request {
if object.GetName() == konnectivity.CertCommonName {
return []reconcile.Request{
{
NamespacedName: types.NamespacedName{
Name: konnectivity.CertCommonName,
},
},
}
}
return nil
})).
Watches(&source.Channel{Source: k.TriggerChannel}, &handler.EnqueueRequestForObject{}).
Complete(k)
}


@@ -0,0 +1,72 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"context"
"github.com/go-logr/logr"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/resources"
)
type KubeadmPhase struct {
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
TriggerChannel chan event.GenericEvent
Phase resources.KubeadmPhaseResource
logger logr.Logger
}
func (k *KubeadmPhase) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
tcp, err := k.GetTenantControlPlaneFunc()
if err != nil {
return reconcile.Result{}, err
}
k.logger.Info("start processing")
result, handlingErr := resources.Handle(ctx, k.Phase, tcp)
if handlingErr != nil {
k.logger.Error(handlingErr, "resource process failed")
return reconcile.Result{}, handlingErr
}
if result == controllerutil.OperationResultNone {
k.logger.Info("reconciliation completed")
return reconcile.Result{}, nil
}
if err = utils.UpdateStatus(ctx, k.Phase.GetClient(), tcp, k.Phase); err != nil {
k.logger.Error(err, "update status failed")
return reconcile.Result{}, err
}
k.logger.Info("reconciliation processed")
return reconcile.Result{}, nil
}
func (k *KubeadmPhase) SetupWithManager(mgr manager.Manager) error {
k.logger = mgr.GetLogger().WithName(k.Phase.GetName())
k.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).
For(k.Phase.GetWatchedObject(), builder.WithPredicates(predicate.NewPredicateFuncs(k.Phase.GetPredicateFunc()))).
Watches(&source.Channel{Source: k.TriggerChannel}, &handler.EnqueueRequestForObject{}).
Complete(k)
}


@@ -0,0 +1,89 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"context"
"github.com/go-logr/logr"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/kubeadm"
"github.com/clastix/kamaji/internal/resources"
"github.com/clastix/kamaji/internal/resources/addons"
)
type KubeProxy struct {
AdminClient client.Client
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
TriggerChannel chan event.GenericEvent
logger logr.Logger
}
func (k *KubeProxy) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
tcp, err := k.GetTenantControlPlaneFunc()
if err != nil {
k.logger.Error(err, "cannot retrieve TenantControlPlane")
return reconcile.Result{}, err
}
k.logger.Info("start processing")
resource := &addons.KubeProxy{Client: k.AdminClient}
result, handlingErr := resources.Handle(ctx, resource, tcp)
if handlingErr != nil {
k.logger.Error(handlingErr, "resource process failed", "resource", resource.GetName())
return reconcile.Result{}, handlingErr
}
if result == controllerutil.OperationResultNone {
k.logger.Info("reconciliation completed")
return reconcile.Result{}, nil
}
if err = utils.UpdateStatus(ctx, k.AdminClient, tcp, resource); err != nil {
k.logger.Error(err, "update status failed")
return reconcile.Result{}, err
}
k.logger.Info("reconciliation processed")
return reconcile.Result{}, nil
}
func (k *KubeProxy) SetupWithManager(mgr manager.Manager) error {
k.logger = mgr.GetLogger().WithName("kube_proxy")
k.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).
For(&rbacv1.ClusterRoleBinding{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
return object.GetName() == kubeadm.KubeProxyClusterRoleBindingName
}))).
Watches(&source.Channel{Source: k.TriggerChannel}, &handler.EnqueueRequestForObject{}).
Owns(&corev1.ServiceAccount{}).
Owns(&rbacv1.Role{}).
Owns(&rbacv1.RoleBinding{}).
Owns(&corev1.ConfigMap{}).
Owns(&appsv1.DaemonSet{}).
Complete(k)
}


@@ -0,0 +1,200 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"context"
"fmt"
"github.com/go-logr/logr"
admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/utils/pointer"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
"github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/utilities"
)
type Migrate struct {
client client.Client
logger logr.Logger
GetTenantControlPlaneFunc utils.TenantControlPlaneRetrievalFn
WebhookNamespace string
WebhookServiceName string
WebhookCABundle []byte
TriggerChannel chan event.GenericEvent
}
func (m *Migrate) Reconcile(ctx context.Context, _ reconcile.Request) (reconcile.Result, error) {
tcp, err := m.GetTenantControlPlaneFunc()
if err != nil {
return reconcile.Result{}, err
}
// Cannot detect the status of the TenantControlPlane, enqueuing back
if tcp.Status.Kubernetes.Version.Status == nil {
return reconcile.Result{Requeue: true}, nil
}
switch *tcp.Status.Kubernetes.Version.Status {
case v1alpha1.VersionMigrating:
err = m.createOrUpdate(ctx)
case v1alpha1.VersionReady:
err = m.cleanup(ctx)
}
if err != nil {
m.logger.Error(err, "reconciliation failed")
return reconcile.Result{}, err
}
return reconcile.Result{}, nil
}
func (m *Migrate) cleanup(ctx context.Context) error {
if err := m.client.Delete(ctx, m.object()); err != nil {
if errors.IsNotFound(err) {
return nil
}
return fmt.Errorf("unable to clean-up ValidationWebhook required for migration: %w", err)
}
return nil
}
func (m *Migrate) createOrUpdate(ctx context.Context) error {
obj := m.object()
_, err := utilities.CreateOrUpdateWithConflict(ctx, m.client, obj, func() error {
obj.Webhooks = []admissionregistrationv1.ValidatingWebhook{
{
Name: "leases.migrate.kamaji.clastix.io",
ClientConfig: admissionregistrationv1.WebhookClientConfig{
URL: pointer.String(fmt.Sprintf("https://%s.%s.svc:443/migrate", m.WebhookServiceName, m.WebhookNamespace)),
CABundle: m.WebhookCABundle,
},
Rules: []admissionregistrationv1.RuleWithOperations{
{
Operations: []admissionregistrationv1.OperationType{
admissionregistrationv1.Create,
admissionregistrationv1.Delete,
},
Rule: admissionregistrationv1.Rule{
APIGroups: []string{"*"},
APIVersions: []string{"*"},
Resources: []string{"*"},
Scope: func(v admissionregistrationv1.ScopeType) *admissionregistrationv1.ScopeType {
return &v
}(admissionregistrationv1.NamespacedScope),
},
},
},
FailurePolicy: func(v admissionregistrationv1.FailurePolicyType) *admissionregistrationv1.FailurePolicyType {
return &v
}(admissionregistrationv1.Fail),
MatchPolicy: func(v admissionregistrationv1.MatchPolicyType) *admissionregistrationv1.MatchPolicyType {
return &v
}(admissionregistrationv1.Equivalent),
NamespaceSelector: &metav1.LabelSelector{
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "kubernetes.io/metadata.name",
Operator: metav1.LabelSelectorOpIn,
Values: []string{
"kube-node-lease",
},
},
},
},
SideEffects: func(v admissionregistrationv1.SideEffectClass) *admissionregistrationv1.SideEffectClass {
return &v
}(admissionregistrationv1.SideEffectClassNoneOnDryRun),
AdmissionReviewVersions: []string{"v1"},
},
{
Name: "catchall.migrate.kamaji.clastix.io",
ClientConfig: admissionregistrationv1.WebhookClientConfig{
URL: pointer.String(fmt.Sprintf("https://%s.%s.svc:443/migrate", m.WebhookServiceName, m.WebhookNamespace)),
CABundle: m.WebhookCABundle,
},
Rules: []admissionregistrationv1.RuleWithOperations{
{
Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.OperationAll},
Rule: admissionregistrationv1.Rule{
APIGroups: []string{"*"},
APIVersions: []string{"*"},
Resources: []string{"*"},
Scope: func(v admissionregistrationv1.ScopeType) *admissionregistrationv1.ScopeType {
return &v
}(admissionregistrationv1.AllScopes),
},
},
},
FailurePolicy: func(v admissionregistrationv1.FailurePolicyType) *admissionregistrationv1.FailurePolicyType {
return &v
}(admissionregistrationv1.Fail),
MatchPolicy: func(v admissionregistrationv1.MatchPolicyType) *admissionregistrationv1.MatchPolicyType {
return &v
}(admissionregistrationv1.Equivalent),
NamespaceSelector: &metav1.LabelSelector{
MatchExpressions: []metav1.LabelSelectorRequirement{
{
Key: "kubernetes.io/metadata.name",
Operator: metav1.LabelSelectorOpNotIn,
Values: []string{
"kube-system",
"kube-node-lease",
},
},
},
},
SideEffects: func(v admissionregistrationv1.SideEffectClass) *admissionregistrationv1.SideEffectClass {
return &v
}(admissionregistrationv1.SideEffectClassNoneOnDryRun),
TimeoutSeconds: nil,
AdmissionReviewVersions: []string{"v1"},
},
}
return nil
})
return err
}
func (m *Migrate) SetupWithManager(mgr manager.Manager) error {
m.client = mgr.GetClient()
m.logger = mgr.GetLogger().WithName("migrate")
m.TriggerChannel = make(chan event.GenericEvent)
return controllerruntime.NewControllerManagedBy(mgr).
For(&admissionregistrationv1.ValidatingWebhookConfiguration{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
vwc := m.object()
return object.GetName() == vwc.GetName()
}))).
Watches(&source.Channel{Source: m.TriggerChannel}, &handler.EnqueueRequestForObject{}).
Complete(m)
}
func (m *Migrate) object() *admissionregistrationv1.ValidatingWebhookConfiguration {
return &admissionregistrationv1.ValidatingWebhookConfiguration{
ObjectMeta: metav1.ObjectMeta{
Name: "kamaji-freeze",
},
}
}
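The webhook definition above repeats an inline-closure idiom, func(v T) *T { return &v }(value), to take the address of enum-like constants. A small generic helper, sketched below under the assumption of a Go 1.18+ toolchain (the ptr name is ours, not the repository's), would express the same thing more compactly; newer k8s.io/utils releases ship an equivalent ptr.To:

package main

// ptr returns a pointer to its argument, replacing per-type closures such as
// func(v admissionregistrationv1.ScopeType) *admissionregistrationv1.ScopeType.
func ptr[T any](v T) *T {
	return &v
}

With it, a field like FailurePolicy could read FailurePolicy: ptr(admissionregistrationv1.Fail).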

controllers/soot/manager.go (new file, 307 lines)

@@ -0,0 +1,307 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package soot
import (
"context"
"fmt"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/client-go/rest"
"k8s.io/client-go/util/retry"
controllerruntime "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/controllers/finalizers"
"github.com/clastix/kamaji/controllers/soot/controllers"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/resources"
"github.com/clastix/kamaji/internal/utilities"
)
type sootItem struct {
triggers []chan event.GenericEvent
cancelFn context.CancelFunc
}
type sootMap map[string]sootItem
type Manager struct {
client client.Client
sootMap sootMap
// sootManagerErrChan is the channel that is going to be used
// when the soot manager cannot start due to any kind of problem.
sootManagerErrChan chan event.GenericEvent
MigrateCABundle []byte
MigrateServiceName string
MigrateServiceNamespace string
AdminClient client.Client
}
// retrieveTenantControlPlane is the function used to let an underlying controller of the soot manager
// to retrieve its parent TenantControlPlane definition, required to understand which actions must be performed.
func (m *Manager) retrieveTenantControlPlane(ctx context.Context, request reconcile.Request) utils.TenantControlPlaneRetrievalFn {
return func() (*kamajiv1alpha1.TenantControlPlane, error) {
tcp := &kamajiv1alpha1.TenantControlPlane{}
if err := m.client.Get(ctx, request.NamespacedName, tcp); err != nil {
return nil, err
}
return tcp, nil
}
}
// If the TenantControlPlane is deleted we have to free up memory by stopping the soot manager:
// this is made possible by retrieving the cancel function of the soot manager context to cancel it.
func (m *Manager) cleanup(ctx context.Context, req reconcile.Request, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) (err error) {
if tenantControlPlane != nil && controllerutil.ContainsFinalizer(tenantControlPlane, finalizers.SootFinalizer) {
defer func() {
err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
tcp, tcpErr := m.retrieveTenantControlPlane(ctx, req)()
if tcpErr != nil {
return tcpErr
}
controllerutil.RemoveFinalizer(tcp, finalizers.SootFinalizer)
return m.AdminClient.Update(ctx, tcp)
})
}()
}
tcpName := req.NamespacedName.String()
v, ok := m.sootMap[tcpName]
if !ok {
return nil
}
v.cancelFn()
delete(m.sootMap, tcpName)
return nil
}
func (m *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res reconcile.Result, err error) {
// Retrieving the TenantControlPlane:
// in case of deletion, we must be sure to properly remove from the memory the soot manager.
tcp := &kamajiv1alpha1.TenantControlPlane{}
if err = m.client.Get(ctx, request.NamespacedName, tcp); err != nil {
if errors.IsNotFound(err) {
return reconcile.Result{}, m.cleanup(ctx, request, nil)
}
return reconcile.Result{}, err
}
// Handling finalizer if the TenantControlPlane is marked for deletion:
// the clean-up function is already taking care to stop the manager, if this exists.
if tcp.GetDeletionTimestamp() != nil {
if controllerutil.ContainsFinalizer(tcp, finalizers.SootFinalizer) {
return reconcile.Result{}, m.cleanup(ctx, request, tcp)
}
return reconcile.Result{}, nil
}
tcpStatus := *tcp.Status.Kubernetes.Version.Status
// Triggering the reconciliation of the underlying controllers of
// the soot manager if this is already registered.
v, ok := m.sootMap[request.String()]
if ok {
switch {
case tcpStatus == kamajiv1alpha1.VersionCARotating:
// The TenantControlPlane CA has been rotated, it means the running manager
// must be restarted to avoid certificate signed by unknown authority errors.
return reconcile.Result{}, m.cleanup(ctx, request, tcp)
case tcpStatus == kamajiv1alpha1.VersionNotReady:
// The TenantControlPlane is in non-ready mode, or marked for deletion:
// we don't want to pollute with messages due to broken connection.
// Once the TCP will be ready again, the event will be intercepted and the manager started back.
return reconcile.Result{}, m.cleanup(ctx, request, tcp)
default:
for _, trigger := range v.triggers {
trigger <- event.GenericEvent{Object: tcp}
}
}
return reconcile.Result{}, nil
}
// No need to start a soot manager if the TenantControlPlane is not ready:
// enqueuing back is not required since we're going to get that event once ready.
if tcpStatus == kamajiv1alpha1.VersionNotReady || tcpStatus == kamajiv1alpha1.VersionCARotating {
log.FromContext(ctx).Info("skipping start of the soot manager for a not ready instance")
return reconcile.Result{}, nil
}
// Setting the finalizer for the soot manager:
// upon deletion the soot manager will be shut down prior the Deployment, avoiding logs pollution.
if !controllerutil.ContainsFinalizer(tcp, finalizers.SootFinalizer) {
_, finalizerErr := utilities.CreateOrUpdateWithConflict(ctx, m.AdminClient, tcp, func() error {
controllerutil.AddFinalizer(tcp, finalizers.SootFinalizer)
return nil
})
return reconcile.Result{Requeue: true}, finalizerErr
}
// Generating the manager and starting it:
// in case of any error, reconciling the request to start it back from the beginning.
tcpRest, err := utilities.GetRESTClientConfig(ctx, m.client, tcp)
if err != nil {
return reconcile.Result{}, err
}
tcpCtx, tcpCancelFn := context.WithCancel(ctx)
defer func() {
// If the reconciliation fails, we don't need to get a potential dangling goroutine.
if err != nil {
tcpCancelFn()
}
}()
mgr, err := controllerruntime.NewManager(tcpRest, controllerruntime.Options{
Logger: log.Log.WithName(fmt.Sprintf("soot_%s_%s", tcp.GetNamespace(), tcp.GetName())),
Scheme: m.client.Scheme(),
MetricsBindAddress: "0",
NewClient: func(cache cache.Cache, config *rest.Config, options client.Options, uncachedObjects ...client.Object) (client.Client, error) {
return client.New(config, client.Options{
Scheme: m.client.Scheme(),
})
},
})
if err != nil {
return reconcile.Result{}, err
}
//
// Register all the controllers of the soot here:
//
migrate := &controllers.Migrate{
WebhookNamespace: m.MigrateServiceNamespace,
WebhookServiceName: m.MigrateServiceName,
WebhookCABundle: m.MigrateCABundle,
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
}
if err = migrate.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
}
konnectivityAgent := &controllers.KonnectivityAgent{
AdminClient: m.AdminClient,
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
}
if err = konnectivityAgent.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
}
kubeProxy := &controllers.KubeProxy{
AdminClient: m.AdminClient,
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
}
if err = kubeProxy.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
}
coreDNS := &controllers.CoreDNS{
AdminClient: m.AdminClient,
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
}
if err = coreDNS.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
}
uploadKubeadmConfig := &controllers.KubeadmPhase{
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
Phase: &resources.KubeadmPhase{
Client: m.AdminClient,
Phase: resources.PhaseUploadConfigKubeadm,
},
}
if err = uploadKubeadmConfig.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
}
uploadKubeletConfig := &controllers.KubeadmPhase{
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
Phase: &resources.KubeadmPhase{
Client: m.AdminClient,
Phase: resources.PhaseUploadConfigKubelet,
},
}
if err = uploadKubeletConfig.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
}
bootstrapToken := &controllers.KubeadmPhase{
GetTenantControlPlaneFunc: m.retrieveTenantControlPlane(tcpCtx, request),
Phase: &resources.KubeadmPhase{
Client: m.AdminClient,
Phase: resources.PhaseBootstrapToken,
},
}
if err = bootstrapToken.SetupWithManager(mgr); err != nil {
return reconcile.Result{}, err
}
// Starting the manager
go func() {
if err = mgr.Start(tcpCtx); err != nil {
log.FromContext(ctx).Error(err, "unable to start soot manager")
// When the manager cannot start, we enqueue the request back to take advantage of the queue's backoff factor.
// Since this is a goroutine and the manager is running on its own, no error can be returned from here:
// the sootManagerErrChan channel lets us trigger a reconciliation even though the TCP itself hasn't changed.
m.sootManagerErrChan <- event.GenericEvent{Object: tcp}
}
}()
m.sootMap[request.NamespacedName.String()] = sootItem{
triggers: []chan event.GenericEvent{
migrate.TriggerChannel,
konnectivityAgent.TriggerChannel,
kubeProxy.TriggerChannel,
coreDNS.TriggerChannel,
uploadKubeadmConfig.TriggerChannel,
uploadKubeletConfig.TriggerChannel,
bootstrapToken.TriggerChannel,
},
cancelFn: tcpCancelFn,
}
return reconcile.Result{Requeue: true}, nil
}
func (m *Manager) SetupWithManager(mgr manager.Manager) error {
m.client = mgr.GetClient()
m.sootManagerErrChan = make(chan event.GenericEvent)
m.sootMap = make(map[string]sootItem)
return controllerruntime.NewControllerManagedBy(mgr).
Watches(&source.Channel{Source: m.sootManagerErrChan}, &handler.EnqueueRequestForObject{}).
For(&kamajiv1alpha1.TenantControlPlane{}, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
obj := object.(*kamajiv1alpha1.TenantControlPlane) //nolint:forcetypeassert
// status is required to understand if we have to start or stop the soot manager
if obj.Status.Kubernetes.Version.Status == nil {
return false
}
if *obj.Status.Kubernetes.Version.Status == kamajiv1alpha1.VersionProvisioning {
return false
}
return true
}))).
Complete(m)
}
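To tie the pieces together, a hypothetical sketch of how this soot manager could be registered with the root controller-runtime manager; the variable names and values below are illustrative assumptions, not taken from this compare view:

package main

import (
	"os"

	"sigs.k8s.io/controller-runtime/pkg/manager"

	"github.com/clastix/kamaji/controllers/soot"
)

// registerSoot wires the soot manager into the root manager so that every
// ready TenantControlPlane gets its own set of in-cluster controllers.
func registerSoot(mgr manager.Manager, caBundle []byte) {
	sootManager := &soot.Manager{
		AdminClient:             mgr.GetClient(),
		MigrateCABundle:         caBundle,
		MigrateServiceName:      "kamaji-webhook-service", // assumed Service name
		MigrateServiceNamespace: "kamaji-system",          // assumed namespace
	}
	if err := sootManager.SetupWithManager(mgr); err != nil {
		os.Exit(1)
	}
}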


@@ -1,102 +0,0 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package controllers
import (
"context"
"crypto/tls"
"crypto/x509"
"fmt"
"net"
"strconv"
"github.com/pkg/errors"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/internal/sql"
)
func (r *TenantControlPlaneReconciler) getStorageConnection(ctx context.Context, ds kamajiv1alpha1.DataStore) (sql.DBConnection, error) {
var driver sql.Driver
var dbName string
// TODO: https://github.com/clastix/kamaji/issues/67
switch ds.Spec.Driver {
case kamajiv1alpha1.EtcdDriver:
return nil, nil
case kamajiv1alpha1.KineMySQLDriver:
driver = sql.MySQL
dbName = "mysql"
case kamajiv1alpha1.KinePostgreSQLDriver:
driver = sql.PostgreSQL
default:
return nil, nil
}
ca, err := ds.Spec.TLSConfig.CertificateAuthority.Certificate.GetContent(ctx, r.Client)
if err != nil {
return nil, err
}
crt, err := ds.Spec.TLSConfig.ClientCertificate.Certificate.GetContent(ctx, r.Client)
if err != nil {
return nil, err
}
key, err := ds.Spec.TLSConfig.ClientCertificate.PrivateKey.GetContent(ctx, r.Client)
if err != nil {
return nil, err
}
rootCAs := x509.NewCertPool()
if ok := rootCAs.AppendCertsFromPEM(ca); !ok {
return nil, fmt.Errorf("error create root CA for the DB connector")
}
certificate, err := tls.X509KeyPair(crt, key)
if err != nil {
return nil, errors.Wrap(err, "cannot retrieve x.509 key pair from the Kine Secret")
}
var user, password string
if auth := ds.Spec.BasicAuth; auth != nil {
u, err := auth.Username.GetContent(ctx, r.Client)
if err != nil {
return nil, err
}
user = string(u)
p, err := auth.Password.GetContent(ctx, r.Client)
if err != nil {
return nil, err
}
password = string(p)
}
host, stringPort, err := net.SplitHostPort(ds.Spec.Endpoints[0])
if err != nil {
return nil, errors.Wrap(err, "cannot retrieve host-port pair from DataStore endpoints")
}
port, err := strconv.Atoi(stringPort)
if err != nil {
return nil, errors.Wrap(err, "cannot convert port from string for the given DataStore")
}
return sql.GetDBConnection(
sql.ConnectionConfig{
SQLDriver: driver,
User: user,
Password: password,
Host: host,
Port: port,
DBName: dbName,
TLSConfig: &tls.Config{
ServerName: host,
RootCAs: rootCAs,
Certificates: []tls.Certificate{certificate},
},
},
)
}

@@ -6,136 +6,164 @@ package controllers
import (
"context"
"fmt"
"strings"
"time"
"github.com/juju/mutex/v2"
"github.com/pkg/errors"
appsv1 "k8s.io/api/apps/v1"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
apimachineryerrors "k8s.io/apimachinery/pkg/api/errors"
k8stypes "k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/util/workqueue"
"k8s.io/utils/clock"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/log"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/controllers/finalizers"
"github.com/clastix/kamaji/controllers/utils"
"github.com/clastix/kamaji/internal/datastore"
kamajierrors "github.com/clastix/kamaji/internal/errors"
"github.com/clastix/kamaji/internal/resources"
)
const (
finalizer = "finalizer.kamaji.clastix.io"
)
// TenantControlPlaneReconciler reconciles a TenantControlPlane object.
type TenantControlPlaneReconciler struct {
client.Client
Scheme *runtime.Scheme
Config TenantControlPlaneReconcilerConfig
TriggerChan TenantControlPlaneChannel
Client client.Client
APIReader client.Reader
Config TenantControlPlaneReconcilerConfig
TriggerChan TenantControlPlaneChannel
KamajiNamespace string
KamajiServiceAccount string
KamajiService string
KamajiMigrateImage string
MaxConcurrentReconciles int
clock mutex.Clock
}
// TenantControlPlaneReconcilerConfig gives the necessary configuration for TenantControlPlaneReconciler.
type TenantControlPlaneReconcilerConfig struct {
DataStoreName string
KineContainerImage string
TmpBaseDirectory string
DefaultDataStoreName string
KineContainerImage string
TmpBaseDirectory string
}
//+kubebuilder:rbac:groups=kamaji.clastix.io,resources=tenantcontrolplanes,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=kamaji.clastix.io,resources=tenantcontrolplanes/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=kamaji.clastix.io,resources=tenantcontrolplanes/finalizers,verbs=update
// +kubebuilder:rbac:groups=core,resources=secrets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=configmaps,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=networking.k8s.io,resources=ingresses,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=secrets,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=configmaps,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=networking.k8s.io,resources=ingresses,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch,resources=jobs,verbs=get;list;watch;create;delete
func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
log := log.FromContext(ctx)
tenantControlPlane := &kamajiv1alpha1.TenantControlPlane{}
isTenantControlPlane, err := r.getTenantControlPlane(ctx, req.NamespacedName, tenantControlPlane)
tenantControlPlane, err := r.getTenantControlPlane(ctx, req.NamespacedName)()
if err != nil {
if apimachineryerrors.IsNotFound(err) {
log.Info("resource may have been deleted, skipping")
return ctrl.Result{}, nil
}
log.Error(err, "cannot retrieve the required instance")
return ctrl.Result{}, err
}
if !isTenantControlPlane {
return ctrl.Result{}, nil
releaser, err := mutex.Acquire(r.mutexSpec(tenantControlPlane))
if err != nil {
switch {
case errors.As(err, &mutex.ErrTimeout):
log.Info("acquire timed out, current process is blocked by another reconciliation")
return ctrl.Result{Requeue: true}, nil
case errors.As(err, &mutex.ErrCancelled):
log.Info("acquire cancelled")
return ctrl.Result{Requeue: true}, nil
default:
log.Error(err, "acquire failed")
return ctrl.Result{}, err
}
}
defer releaser.Release()
markedToBeDeleted := tenantControlPlane.GetDeletionTimestamp() != nil
hasFinalizer := hasFinalizer(*tenantControlPlane)
if markedToBeDeleted && !hasFinalizer {
if markedToBeDeleted && !controllerutil.ContainsFinalizer(tenantControlPlane, finalizers.DatastoreFinalizer) {
return ctrl.Result{}, nil
}
ds := kamajiv1alpha1.DataStore{}
if err = r.Client.Get(ctx, k8stypes.NamespacedName{Name: r.Config.DataStoreName}, &ds); err != nil {
return ctrl.Result{}, errors.Wrap(err, "cannot retrieve kamajiv1alpha.DataStore object")
}
dbConnection, err := r.getStorageConnection(ctx, ds)
// Retrieving the DataStore to use for the current reconciliation
ds, err := r.dataStore(ctx, tenantControlPlane)
if err != nil {
log.Error(err, "cannot retrieve the DataStore for the given instance")
return ctrl.Result{}, err
}
defer func() {
// TODO: Currently, etcd is not accessed using this dbConnection. For that reason we need this check
// Check: https://github.com/clastix/kamaji/issues/67
if dbConnection != nil {
dbConnection.Close()
}
}()
if markedToBeDeleted {
dsConnection, err := datastore.NewStorageConnection(ctx, r.Client, *ds)
if err != nil {
log.Error(err, "cannot generate the DataStore connection for the given instance")
return ctrl.Result{}, err
}
defer dsConnection.Close()
if markedToBeDeleted && controllerutil.ContainsFinalizer(tenantControlPlane, finalizers.DatastoreFinalizer) {
log.Info("marked for deletion, performing clean-up")
groupDeleteableResourceBuilderConfiguration := GroupDeleteableResourceBuilderConfiguration{
groupDeletableResourceBuilderConfiguration := GroupDeletableResourceBuilderConfiguration{
client: r.Client,
log: log,
tcpReconcilerConfig: r.Config,
tenantControlPlane: *tenantControlPlane,
DBConnection: dbConnection,
connection: dsConnection,
}
registeredDeletableResources := GetDeletableResources(groupDeleteableResourceBuilderConfiguration, ds)
for _, resource := range registeredDeletableResources {
if err := resources.HandleDeletion(ctx, resource, tenantControlPlane); err != nil {
for _, resource := range GetDeletableResources(tenantControlPlane, groupDeletableResourceBuilderConfiguration) {
if err = resources.HandleDeletion(ctx, resource, tenantControlPlane); err != nil {
log.Error(err, "resource deletion failed", "resource", resource.GetName())
return ctrl.Result{}, err
}
}
if hasFinalizer {
log.Info("removing finalizer")
if err := r.RemoveFinalizer(ctx, tenantControlPlane); err != nil {
return ctrl.Result{}, err
}
}
log.Info("resource deletion has been completed")
log.Info("resource deletions have been completed")
return ctrl.Result{}, nil
}
if !hasFinalizer {
return ctrl.Result{}, r.AddFinalizer(ctx, tenantControlPlane)
}
groupResourceBuilderConfiguration := GroupResourceBuilderConfiguration{
client: r.Client,
log: log,
tcpReconcilerConfig: r.Config,
tenantControlPlane: *tenantControlPlane,
DataStore: ds,
DBConnection: dbConnection,
client: r.Client,
log: log,
tcpReconcilerConfig: r.Config,
tenantControlPlane: *tenantControlPlane,
Connection: dsConnection,
DataStore: *ds,
KamajiNamespace: r.KamajiNamespace,
KamajiServiceAccount: r.KamajiServiceAccount,
KamajiService: r.KamajiService,
KamajiMigrateImage: r.KamajiMigrateImage,
}
registeredResources := GetResources(groupResourceBuilderConfiguration, ds)
registeredResources := GetResources(groupResourceBuilderConfiguration)
for _, resource := range registeredResources {
result, err := resources.Handle(ctx, resource, tenantControlPlane)
@@ -146,6 +174,8 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
return ctrl.Result{Requeue: true}, nil
}
log.Error(err, "handling of resource failed", "resource", resource.GetName())
return ctrl.Result{}, err
}
@@ -153,13 +183,19 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
continue
}
if err := r.updateStatus(ctx, req.NamespacedName, resource); err != nil {
if err = utils.UpdateStatus(ctx, r.Client, tenantControlPlane, resource); err != nil {
log.Error(err, "update of the resource failed", "resource", resource.GetName())
return ctrl.Result{}, err
}
log.Info(fmt.Sprintf("%s has been configured", resource.GetName()))
return ctrl.Result{}, nil
if result == resources.OperationResultEnqueueBack {
log.Info("requested enqueuing back", "resources", resource.GetName())
return ctrl.Result{Requeue: true}, nil
}
}
log.Info(fmt.Sprintf("%s has been reconciled", tenantControlPlane.GetName()))
@@ -167,8 +203,20 @@ func (r *TenantControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.R
return ctrl.Result{}, nil
}
func (r *TenantControlPlaneReconciler) mutexSpec(obj client.Object) mutex.Spec {
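// NOTE: the object UID is prefixed and stripped of dashes so the lock name
// stays a compact, purely alphanumeric identifier, presumably to satisfy the
// naming constraints enforced by juju/mutex.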
return mutex.Spec{
Name: strings.ReplaceAll(fmt.Sprintf("kamaji%s", obj.GetUID()), "-", ""),
Clock: r.clock,
Delay: 10 * time.Millisecond,
Timeout: time.Second,
Cancel: nil,
}
}
// SetupWithManager sets up the controller with the Manager.
func (r *TenantControlPlaneReconciler) SetupWithManager(mgr ctrl.Manager) error {
r.clock = clock.RealClock{}
return ctrl.NewControllerManagedBy(mgr).
Watches(&source.Channel{Source: r.TriggerChan}, handler.Funcs{GenericFunc: func(genericEvent event.GenericEvent, limitingInterface workqueue.RateLimitingInterface) {
limitingInterface.AddRateLimited(ctrl.Request{
@@ -184,61 +232,69 @@ func (r *TenantControlPlaneReconciler) SetupWithManager(mgr ctrl.Manager) error
Owns(&appsv1.Deployment{}).
Owns(&corev1.Service{}).
Owns(&networkingv1.Ingress{}).
Watches(&source.Kind{Type: &batchv1.Job{}}, handler.EnqueueRequestsFromMapFunc(func(object client.Object) []reconcile.Request {
labels := object.GetLabels()
name, namespace := labels["tcp.kamaji.clastix.io/name"], labels["tcp.kamaji.clastix.io/namespace"]
return []reconcile.Request{
{
NamespacedName: k8stypes.NamespacedName{
Namespace: namespace,
Name: name,
},
},
}
}), builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
if object.GetNamespace() != r.KamajiNamespace {
return false
}
labels := object.GetLabels()
if labels == nil {
return false
}
v, ok := labels["kamaji.clastix.io/component"]
return ok && v == "migrate"
}))).
WithOptions(controller.Options{
MaxConcurrentReconciles: r.MaxConcurrentReconciles,
}).
Complete(r)
}
func (r *TenantControlPlaneReconciler) getTenantControlPlane(ctx context.Context, namespacedName k8stypes.NamespacedName, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) (bool, error) {
if err := r.Client.Get(ctx, namespacedName, tenantControlPlane); err != nil {
if !k8serrors.IsNotFound(err) {
return false, err
func (r *TenantControlPlaneReconciler) getTenantControlPlane(ctx context.Context, namespacedName k8stypes.NamespacedName) utils.TenantControlPlaneRetrievalFn {
return func() (*kamajiv1alpha1.TenantControlPlane, error) {
tcp := &kamajiv1alpha1.TenantControlPlane{}
if err := r.APIReader.Get(ctx, namespacedName, tcp); err != nil {
return nil, err
}
return false, nil
return tcp, nil
}
return true, nil
}
func (r *TenantControlPlaneReconciler) updateStatus(ctx context.Context, namespacedName k8stypes.NamespacedName, resource resources.Resource) error {
tenantControlPlane := &kamajiv1alpha1.TenantControlPlane{}
isTenantControlPlane, err := r.getTenantControlPlane(ctx, namespacedName, tenantControlPlane)
if err != nil {
return err
}
if !isTenantControlPlane {
return fmt.Errorf("error updating tenantControlPlane %s: not found", namespacedName.Name)
}
if err := resource.UpdateTenantControlPlaneStatus(ctx, tenantControlPlane); err != nil {
return err
}
if err := r.Status().Update(ctx, tenantControlPlane); err != nil {
return fmt.Errorf("error updating tenantControlPlane status: %w", err)
}
return nil
}
func hasFinalizer(tenantControlPlane kamajiv1alpha1.TenantControlPlane) bool {
for _, f := range tenantControlPlane.GetFinalizers() {
if f == finalizer {
return true
}
}
return false
}
func (r *TenantControlPlaneReconciler) AddFinalizer(ctx context.Context, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) error {
controllerutil.AddFinalizer(tenantControlPlane, finalizer)
return r.Update(ctx, tenantControlPlane)
}
func (r *TenantControlPlaneReconciler) RemoveFinalizer(ctx context.Context, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) error {
controllerutil.RemoveFinalizer(tenantControlPlane, finalizer)
controllerutil.RemoveFinalizer(tenantControlPlane, finalizers.DatastoreFinalizer)
return r.Update(ctx, tenantControlPlane)
return r.Client.Update(ctx, tenantControlPlane)
}
// dataStore retrieves the override DataStore for the given Tenant Control Plane if specified,
// otherwise fallback to the default one specified in the Kamaji setup.
func (r *TenantControlPlaneReconciler) dataStore(ctx context.Context, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) (*kamajiv1alpha1.DataStore, error) {
dataStoreName := tenantControlPlane.Spec.DataStore
if len(dataStoreName) == 0 {
dataStoreName = r.Config.DefaultDataStoreName
}
ds := &kamajiv1alpha1.DataStore{}
if err := r.Client.Get(ctx, k8stypes.NamespacedName{Name: dataStoreName}, ds); err != nil {
return nil, errors.Wrap(err, "cannot retrieve *kamajiv1alpha.DataStore object")
}
return ds, nil
}

@@ -0,0 +1,10 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package utils
import (
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)
type TenantControlPlaneRetrievalFn func() (*kamajiv1alpha1.TenantControlPlane, error)
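For context, a brief sketch of how such a lazy retrieval function can be constructed and consumed; `retrievalFor` is a hypothetical helper mirroring what the reconciler's `getTenantControlPlane` does with its uncached `APIReader`:

```go
package utils

import (
	"context"

	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"

	kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
)

// retrievalFor (hypothetical) defers the Get until the returned function is
// invoked, reading through the supplied reader (e.g. an uncached APIReader).
func retrievalFor(ctx context.Context, reader client.Reader, key types.NamespacedName) TenantControlPlaneRetrievalFn {
	return func() (*kamajiv1alpha1.TenantControlPlane, error) {
		tcp := &kamajiv1alpha1.TenantControlPlane{}
		if err := reader.Get(ctx, key, tcp); err != nil {
			return nil, err
		}
		return tcp, nil
	}
}
```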

@@ -0,0 +1,38 @@
// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package utils
import (
"context"
"fmt"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/util/retry"
"sigs.k8s.io/controller-runtime/pkg/client"
kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
"github.com/clastix/kamaji/internal/resources"
)
func UpdateStatus(ctx context.Context, client client.Client, tcp *kamajiv1alpha1.TenantControlPlane, resource resources.Resource) error {
updateErr := retry.RetryOnConflict(retry.DefaultRetry, func() (err error) {
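// On any failure, re-read the object so the next retry attempt works against
// a fresh resourceVersion instead of the stale in-memory copy.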
defer func() {
if err != nil {
_ = client.Get(ctx, types.NamespacedName{Name: tcp.Name, Namespace: tcp.Namespace}, tcp)
}
}()
if err = resource.UpdateTenantControlPlaneStatus(ctx, tcp); err != nil {
return fmt.Errorf("error applying TenantcontrolPlane status: %w", err)
}
if err = client.Status().Update(ctx, tcp); err != nil {
return fmt.Errorf("error updating tenantControlPlane status: %w", err)
}
return nil
})
return updateErr
}

@@ -1,16 +0,0 @@
include etcd/Makefile
deploy_path := $(patsubst %/,%,$(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
.DEFAULT_GOAL := kamaji
.PHONY: etcd-cluster
reqs: etcd-cluster
.PHONY: kamaji
kamaji: reqs
@kubectl apply -f $(deploy_path)/../../config/install.yaml
.PHONY: destroy
destroy: etcd-certificates/cleanup
@kubectl delete -f $(deploy_path)/../../config/install.yaml

@@ -1,26 +0,0 @@
# Deploy Kamaji
## Quickstart with KinD
```sh
make -C kind
```
## Multi-tenant etcd cluster
> This assumes you already have a running Kubernetes cluster and kubeconfig.
```sh
make -C etcd
```
## Multi-tenant cluster using Kine
`kine` is an `etcd` shim that allows using different datastores.
Kamaji currently supports the following backends:
- [MySQL](kine/mysql/README.md)
- [PostgreSQL](kine/postgresql/README.md)
> This assumes you already have a running Kubernetes cluster and kubeconfig.

@@ -1,680 +0,0 @@
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "vxlan"
# Configure the MTU to use for workload interfaces and tunnels.
# By default, MTU is auto-detected, and explicitly setting this field should not be required.
# You can override auto-detection by providing a non-zero value.
veth_mtu: "0"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"log_file_path": "/var/log/calico/cni/cni.log",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
},
{
"type": "bandwidth",
"capabilities": {"bandwidth": true}
}
]
}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: calico-node
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- watch
- list
- apiGroups:
- ""
resources:
- endpoints
- services
verbs:
- watch
- list
- get
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
- update
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
verbs:
- watch
- list
- apiGroups:
- ""
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- pods/status
verbs:
- patch
- apiGroups:
- crd.projectcalico.org
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- networksets
- clusterinformations
- hostendpoints
- blockaffinities
verbs:
- get
- list
- watch
- apiGroups:
- crd.projectcalico.org
resources:
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- crd.projectcalico.org
resources:
- bgpconfigurations
- bgppeers
verbs:
- create
- update
- apiGroups:
- crd.projectcalico.org
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
- apiGroups:
- crd.projectcalico.org
resources:
- ipamconfigs
verbs:
- get
- apiGroups:
- crd.projectcalico.org
resources:
- blockaffinities
verbs:
- watch
- apiGroups:
- apps
resources:
- daemonsets
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: calico-kube-controllers
rules:
- apiGroups:
- ""
resources:
- nodes
verbs:
- watch
- list
- get
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups:
- crd.projectcalico.org
resources:
- ippools
verbs:
- list
- apiGroups:
- crd.projectcalico.org
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
- watch
- apiGroups:
- crd.projectcalico.org
resources:
- hostendpoints
verbs:
- get
- list
- create
- update
- delete
- apiGroups:
- crd.projectcalico.org
resources:
- clusterinformations
verbs:
- get
- create
- update
- apiGroups:
- crd.projectcalico.org
resources:
- kubecontrollersconfigurations
verbs:
- get
- create
- update
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
spec:
nodeSelector:
kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container performs upgrade from host-local IPAM to calico-ipam.
# It can be deleted if this is a fresh installation, or if you have already
# upgraded to use calico-ipam.
- name: upgrade-ipam
image: docker.io/calico/cni:v3.20.0
command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
envFrom:
- configMapRef:
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
name: kubernetes-services-endpoint
optional: true
env:
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
volumeMounts:
- mountPath: /var/lib/cni/networks
name: host-local-net-dir
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
securityContext:
privileged: true
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: docker.io/calico/cni:v3.20.0
command: ["/opt/cni/bin/install"]
envFrom:
- configMapRef:
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
name: kubernetes-services-endpoint
optional: true
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
securityContext:
privileged: true
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: docker.io/calico/pod2daemon-flexvol:v3.20.0
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
securityContext:
privileged: true
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: docker.io/calico/node:v3.20.0
envFrom:
- configMapRef:
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
name: kubernetes-services-endpoint
optional: true
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Never"
# Enable or Disable VXLAN on the default IP pool.
- name: CALICO_IPV4POOL_VXLAN
value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Set MTU for the VXLAN tunnel device.
- name: FELIX_VXLANMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Set MTU for the Wireguard tunnel device.
- name: FELIX_WIREGUARDMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "10.36.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
exec:
command:
- /bin/calico-node
- -felix-live
#- -bird-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
timeoutSeconds: 10
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
#- -bird-ready
periodSeconds: 10
timeoutSeconds: 10
volumeMounts:
# For maintaining CNI plugin API credentials.
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
readOnly: false
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- name: policysync
mountPath: /var/run/nodeagent
# For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
# parent directory.
- name: sysfs
mountPath: /sys/fs/
# Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.
# If the host is known to mount that filesystem already then Bidirectional can be omitted.
mountPropagation: Bidirectional
- name: cni-log-dir
mountPath: /var/log/calico/cni
readOnly: true
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
- name: sysfs
hostPath:
path: /sys/fs/
type: DirectoryOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Used to access CNI logs.
- name: cni-log-dir
hostPath:
path: /var/log/calico/cni
# Mount in the directory for host-local IPAM allocations. This is
# used when upgrading from host-local to calico-ipam, and can be removed
# if not using the upgrade-ipam init container.
- name: host-local-net-dir
hostPath:
path: /var/lib/cni/networks
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
containers:
- name: calico-kube-controllers
image: docker.io/calico/kube-controllers:v3.20.0
resources:
env:
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: node
- name: DATASTORE_TYPE
value: kubernetes
livenessProbe:
exec:
command:
- /usr/bin/check-status
- -l
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
timeoutSeconds: 10
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r
periodSeconds: 10
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
# This manifest creates a Pod Disruption Budget for the controller, allowing the K8s Cluster Autoscaler to evict it
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff.