Compare commits

...

132 Commits

Author SHA1 Message Date
Dario Tranchitella
a36c7545db chore(helm): bumping up chart 2022-07-26 20:41:33 +02:00
Dario Tranchitella
f612ecea0c chore: bumping up to v0.1.2 release 2022-07-26 20:11:03 +02:00
Dario Tranchitella
098a74b565 refactor(capsuleconfiguration): allowing to skip tls reconciler 2022-07-26 17:48:58 +02:00
Dario Tranchitella
5a8a8ae77a feat(helm): support for cert-manager and externally managed tls secret 2022-07-26 17:48:58 +02:00
Dario Tranchitella
a8430f2e72 fix(helm): missing blank space in the notes 2022-07-26 17:48:58 +02:00
Dario Tranchitella
3afc470534 chore(e2e): triggering e2e also for pkg files 2022-07-22 19:29:27 +00:00
Dario Tranchitella
d84f0be76b fix: tenant owners cannot replace protected namespace labels or annotations 2022-07-22 19:29:27 +00:00
dependabot[bot]
3a174bf755 build(deps): bump moment from 2.29.2 to 2.29.4 in /docs
Bumps [moment](https://github.com/moment/moment) from 2.29.2 to 2.29.4.
- [Release notes](https://github.com/moment/moment/releases)
- [Changelog](https://github.com/moment/moment/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/moment/moment/compare/2.29.2...2.29.4)

---
updated-dependencies:
- dependency-name: moment
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-22 18:05:18 +00:00
Massimiliano Giovagnoli
90a2e9c742 docs(guides/flux2-capsule-gitops-multitenancy): add missing picture
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2022-07-22 09:44:20 +00:00
Massimiliano Giovagnoli
a091331070 docs(guides/flux2-capsule-gitops-multitenancy): strip down introductory content
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2022-07-22 09:44:20 +00:00
Massimiliano Giovagnoli
cb3439bd3d docs(guides/flux2-capsule-gitops-multitenancy): initial commit
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2022-07-22 09:44:20 +00:00
dependabot[bot]
1fd390b91e build(deps): bump terser from 4.8.0 to 4.8.1 in /docs
Bumps [terser](https://github.com/terser/terser) from 4.8.0 to 4.8.1.
- [Release notes](https://github.com/terser/terser/releases)
- [Changelog](https://github.com/terser/terser/blob/master/CHANGELOG.md)
- [Commits](https://github.com/terser/terser/commits)

---
updated-dependencies:
- dependency-name: terser
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-22 09:44:08 +00:00
Dario Tranchitella
80c83689f5 docs: documenting capsule-proxy metrics 2022-07-20 16:08:11 +00:00
Dario Tranchitella
da3d42801b chore(helm): releasing new helm chart (#605) 2022-07-18 08:49:33 +02:00
Adriano Pezzuto
9643885574 feat(config): move Grafana dashboard as Config Map 2022-07-18 08:42:32 +02:00
bsctl
ac3f2bbdd7 feat(helm): update manifests 2022-07-14 07:08:29 +00:00
bsctl
adb214f7f9 feat(helm): change values description 2022-07-14 07:08:29 +00:00
bsctl
ef26d0e6db feat(helm): remove scale down before uninstall 2022-07-14 07:08:29 +00:00
bsctl
3d6f29fa43 feat(helm): add DaemonSet deploy option 2022-07-14 07:08:29 +00:00
Dario Tranchitella
261876b59b docs: documenting new support for dynamic tenant owners clusterrole 2022-06-29 10:53:35 +00:00
Dario Tranchitella
ab750141c6 refactor: support for rfc 1123 for tenant owners cluster roles overrides 2022-06-29 10:53:35 +00:00
Oliver Bähler
e237249815 feat: improve chart documentation
Signed-off-by: Oliver Bähler <oliverbaehler@hotmail.com>
2022-06-29 08:35:43 +00:00
Dario Tranchitella
e15191c2a0 refactor: sentinel error for running in out of cluster mode 2022-06-29 08:31:21 +00:00
Dario Tranchitella
741db523e5 chore(gh): adding 1.24 to the e2e test matrix 2022-06-14 14:39:05 +00:00
Dario Tranchitella
7b3f850035 chore(gh): disabling fail fast for e2e 2022-06-13 09:52:58 +00:00
jandres - moscardo
72733415f0 fix(docs): helm example was wrong when customizing value 2022-06-10 13:48:49 +02:00
Oliver Bähler
cac2920827 feat: grant global patch privileges and add patch handler 2022-06-09 18:32:39 +00:00
Dario Tranchitella
e0b339d68a fix(tests): cleaning up protected tenant upon test success 2022-06-09 18:30:52 +00:00
Dario Tranchitella
4f55dd8db8 refactor: removing unrequired verb for clusterrole namespace deleter 2022-06-09 18:30:52 +00:00
Massimiliano Giovagnoli
fd738341ed docs: fix typos
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2022-06-09 12:29:10 +00:00
Maksim Fedotov
fce1658827 chore: remove unused CASecretNameAnnotation constant 2022-06-08 11:12:35 +00:00
Maksim Fedotov
93547c128f build(helm): revert bumping chart version 2022-06-08 11:12:35 +00:00
Maksim Fedotov
f1dc028649 feat: generate TLS certificates before starting controllers 2022-06-08 11:12:35 +00:00
Maksim Fedotov
37381184d2 build(helm): refactor capsule TLS certificates management 2022-06-08 11:12:35 +00:00
Maksim Fedotov
82b58d7d53 feat: refactor capsule TLS certificates management 2022-06-08 11:12:35 +00:00
tony
60e826dc83 docs: update tenant owner default cluster documentation 2022-06-06 06:48:18 +02:00
dependabot[bot]
6e8ddd102f build(deps): bump eventsource from 1.1.0 to 1.1.1 in /docs
Bumps [eventsource](https://github.com/EventSource/eventsource) from 1.1.0 to 1.1.1.
- [Release notes](https://github.com/EventSource/eventsource/releases)
- [Changelog](https://github.com/EventSource/eventsource/blob/master/HISTORY.md)
- [Commits](https://github.com/EventSource/eventsource/compare/v1.1.0...v1.1.1)

---
updated-dependencies:
- dependency-name: eventsource
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-06-03 13:10:21 +00:00
Dario Tranchitella
b64aaebc89 docs: referring to docker hub image 2022-05-31 12:38:38 +00:00
Dario Tranchitella
9a85631bb8 chore(yaml): using docker hub image 2022-05-31 12:38:38 +00:00
Dario Tranchitella
51ed42981f chore(helm): using docker hub image 2022-05-31 12:38:38 +00:00
Dario Tranchitella
cf313d415b chore(make): using docker hub image 2022-05-31 12:38:38 +00:00
Adriano Pezzuto
526a6053a5 docs: documenting charmed operator (#572)
* docs: documenting charmed operator
2022-05-27 21:20:35 +02:00
Dario Tranchitella
0dd13a96fc chore(yaml): aligning to v0.1.2-rc0 image 2022-05-24 15:50:48 +00:00
Dario Tranchitella
1c8a5d8f5a docs(proxy): documenting retrieval of a single namespace 2022-05-24 15:32:04 +00:00
song
b9fc50861b style: removing unused struct field 2022-05-24 15:31:24 +00:00
ptx96
29d29ccd4b feat(ci): added docker.io repository 2022-05-24 13:26:57 +00:00
Massimiliano Giovagnoli
f207546af0 docs(readme.md): add slack link
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2022-05-24 09:02:10 +00:00
Maksim Fedotov
deb0858fae build(helm): support cert-manager for generating tls and ca 2022-05-23 07:17:20 +00:00
Maksim Fedotov
1af56b736b feat: support cert-manager for generating tls and ca 2022-05-23 07:17:20 +00:00
Maksim Fedotov
3c9228d1aa fix: protectedHandler OnDelete get tenant using client 2022-05-18 18:06:10 +02:00
Maksim Fedotov
bf6760fbd0 docs: documenting protected tenants annotation 2022-05-18 18:06:10 +02:00
Maksim Fedotov
23564f8e40 feat: protected tenant annotation 2022-05-18 18:06:10 +02:00
Dario Tranchitella
a8b84c8cb3 fix: using sentinel error for non limited custom resource 2022-05-16 15:51:07 +00:00
Abhinandan Baheti
8c0c8c653d docs: documenting proxysetting crd use cases in capsule-proxy 2022-05-16 14:21:17 +00:00
Massimiliano Giovagnoli
ec89f5dd26 docs(readme.md): add links to community repo and governance doc
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2022-05-13 09:34:58 +00:00
Dario Tranchitella
68956a075a chore(ci): pinning golangci-lint version 2022-05-10 12:48:32 +00:00
Massimiliano Giovagnoli
c036feeefc docs(general/proxy): remove duplicated doc about nodes
Signed-off-by: Massimiliano Giovagnoli <me@maxgio.it>
2022-05-09 12:52:23 +00:00
Dario Tranchitella
9f6883d309 fix: formatting error message for service-related objects 2022-05-05 13:33:39 +00:00
Dario Tranchitella
e7227d24e9 build(helm): alignment with latest changes 2022-05-05 13:33:39 +00:00
Dario Tranchitella
f168137407 build(installer): alignment with latest changes 2022-05-05 13:33:39 +00:00
Dario Tranchitella
49e76f7f93 style: linters refactoring 2022-05-05 13:33:39 +00:00
Dario Tranchitella
9d69770888 style: fixing linters issues 2022-05-05 13:33:39 +00:00
Dario Tranchitella
f4ac85dfed refactor: using k8s client scheme 2022-05-05 13:33:39 +00:00
Dario Tranchitella
cb4289d45b refactor: using kubernetes tls secret key names 2022-05-05 13:33:39 +00:00
Dario Tranchitella
01197892a4 refactor: optimizing watchers predicates 2022-05-05 13:33:39 +00:00
Dario Tranchitella
345836630c refactor: avoiding using background context 2022-05-05 13:33:39 +00:00
dependabot[bot]
69a6394e59 build(deps): bump async from 2.6.3 to 2.6.4 in /docs
Bumps [async](https://github.com/caolan/async) from 2.6.3 to 2.6.4.
- [Release notes](https://github.com/caolan/async/releases)
- [Changelog](https://github.com/caolan/async/blob/v2.6.4/CHANGELOG.md)
- [Commits](https://github.com/caolan/async/compare/v2.6.3...v2.6.4)

---
updated-dependencies:
- dependency-name: async
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-05-02 09:51:18 +00:00
Dario Tranchitella
a3495cf614 chore: go 1.18 support 2022-04-14 15:21:49 +00:00
Dario Tranchitella
7662c3dc6a docs: aligning to dynamic tenant owner roles 2022-04-14 14:35:59 +00:00
Dario Tranchitella
137b0f083b test: aligning to new rolebindings sync policies 2022-04-14 14:35:59 +00:00
Dario Tranchitella
9fd18db5a5 feat: dynamic cluster roles for tenant owners 2022-04-14 14:35:59 +00:00
Dario Tranchitella
364adf7d9e style: using constant for rbac group 2022-04-14 14:35:59 +00:00
Dario Tranchitella
cb3ce372b9 fix: ensuring ca bundle replication upon helm upgrade 2022-04-14 14:10:32 +00:00
gernest
59d81c2002 chore(build): makefile for building local binary
This commit fixes the `make manager` command, which builds the local capsule
binary into `bin/manager`.
2022-04-12 10:12:33 +00:00
dependabot[bot]
85861ee5dc build(deps): bump moment from 2.29.1 to 2.29.2 in /docs
Bumps [moment](https://github.com/moment/moment) from 2.29.1 to 2.29.2.
- [Release notes](https://github.com/moment/moment/releases)
- [Changelog](https://github.com/moment/moment/blob/develop/CHANGELOG.md)
- [Commits](https://github.com/moment/moment/compare/2.29.1...2.29.2)

---
updated-dependencies:
- dependency-name: moment
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-04-09 12:29:34 +00:00
dependabot[bot]
ed88606031 build(deps): bump minimist from 1.2.5 to 1.2.6 in /docs
Bumps [minimist](https://github.com/substack/minimist) from 1.2.5 to 1.2.6.
- [Release notes](https://github.com/substack/minimist/releases)
- [Commits](https://github.com/substack/minimist/compare/1.2.5...1.2.6)

---
updated-dependencies:
- dependency-name: minimist
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-04-08 17:11:51 +00:00
Maksim Fedotov
afae361627 fix(helm): jobs in capsule helm chart should use the same tolerations as deployment 2022-04-07 08:16:03 +00:00
Dario Tranchitella
535ef7412c chore(ci): force use of go 1.16 2022-04-06 15:52:22 +00:00
Davide Imola
f373debf54 fix: fixing the helm chart 2022-03-31 13:02:25 +00:00
Davide Imola
569d803e95 fix: using configuration for mutating and validating webhooks 2022-03-31 13:02:25 +00:00
Davide Imola
7b3b0d6504 fix: using configuration for tls and ca secret names 2022-03-31 13:02:25 +00:00
Dario Tranchitella
0bfca6b60e fix(helm): avoiding overwriting secrets upon helm upgrade 2022-03-31 07:28:16 +00:00
gkarthiks
fdc1b3fe39 fix(docs): capsule-proxy chart url
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
2022-03-28 07:53:52 +00:00
Karthikeyan Govindaraj
f7bc2e24cc chore: description for limit ranges and update doc
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
2022-03-18 16:44:34 +00:00
Massimiliano Giovagnoli
d3021633cd Docs update (#530)
Signed-off-by: maxgio92 <me@maxgio.it>
2022-03-18 12:25:57 +01:00
dependabot[bot]
7fefe4f6de build(deps): bump url-parse from 1.5.7 to 1.5.10 in /docs
Bumps [url-parse](https://github.com/unshiftio/url-parse) from 1.5.7 to 1.5.10.
- [Release notes](https://github.com/unshiftio/url-parse/releases)
- [Commits](https://github.com/unshiftio/url-parse/compare/1.5.7...1.5.10)

---
updated-dependencies:
- dependency-name: url-parse
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-02-28 08:44:07 +00:00
dependabot[bot]
302bb19707 build(deps): bump prismjs from 1.25.0 to 1.27.0 in /docs
Bumps [prismjs](https://github.com/PrismJS/prism) from 1.25.0 to 1.27.0.
- [Release notes](https://github.com/PrismJS/prism/releases)
- [Changelog](https://github.com/PrismJS/prism/blob/master/CHANGELOG.md)
- [Commits](https://github.com/PrismJS/prism/compare/v1.25.0...v1.27.0)

---
updated-dependencies:
- dependency-name: prismjs
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-02-27 19:32:10 +00:00
dependabot[bot]
27a7792c31 build(deps): bump simple-get from 3.1.0 to 3.1.1 in /docs
Bumps [simple-get](https://github.com/feross/simple-get) from 3.1.0 to 3.1.1.
- [Release notes](https://github.com/feross/simple-get/releases)
- [Commits](https://github.com/feross/simple-get/compare/v3.1.0...v3.1.1)

---
updated-dependencies:
- dependency-name: simple-get
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-02-24 14:18:39 +00:00
Abhijeet Kasurde
1a60e83772 docs: misc typo fixes in various places
Fixed following spelling mistakes -

* upsteam -> upstream
* Caspule -> Capsule
* suceed -> succeed
* unsed -> unused

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
2022-02-24 14:18:00 +00:00
张连军
632268dd68 fix(docs): adding missing validatingwebhookconfiguration patch for nodes endpoint 2022-02-24 08:54:30 +00:00
dependabot[bot]
4e07de37c4 build(deps): bump url-parse from 1.5.3 to 1.5.7 in /docs
Bumps [url-parse](https://github.com/unshiftio/url-parse) from 1.5.3 to 1.5.7.
- [Release notes](https://github.com/unshiftio/url-parse/releases)
- [Commits](https://github.com/unshiftio/url-parse/compare/1.5.3...1.5.7)

---
updated-dependencies:
- dependency-name: url-parse
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-02-24 08:51:36 +00:00
Pandry
1d10bcab1e test(e2e): tenant regex forbidden namespace labels and annotations 2022-02-22 06:11:49 +00:00
Pandry
d4a5f3beca fix: validate regex patterns in annotations #510 2022-02-22 06:11:49 +00:00
Maksim Fedotov
cd56eab119 fix: object count resource quotas not working when using Tenant scope 2022-01-25 16:04:08 +00:00
dependabot[bot]
6cee5b73af build(deps-dev): bump postcss from 7.0.39 to 8.2.13 in /docs
Bumps [postcss](https://github.com/postcss/postcss) from 7.0.39 to 8.2.13.
- [Release notes](https://github.com/postcss/postcss/releases)
- [Changelog](https://github.com/postcss/postcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/postcss/postcss/compare/7.0.39...8.2.13)

---
updated-dependencies:
- dependency-name: postcss
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-01-22 18:30:07 +00:00
dependabot[bot]
8e7325aecb build(deps): bump nanoid from 3.1.29 to 3.2.0 in /docs
Bumps [nanoid](https://github.com/ai/nanoid) from 3.1.29 to 3.2.0.
- [Release notes](https://github.com/ai/nanoid/releases)
- [Changelog](https://github.com/ai/nanoid/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ai/nanoid/compare/3.1.29...3.2.0)

---
updated-dependencies:
- dependency-name: nanoid
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-01-22 18:24:00 +00:00
Adriano Pezzuto
be26783424 docs: clarify usage of serviceaccount as tenant owner (#503) 2022-01-20 21:52:49 +01:00
Tom OBrien
0b199f4136 fix: modify jobs.image.tag for eks
EKS sometimes has a '+' in the Kubernetes minor version,
which results in an invalid image tag for jobs
2022-01-18 16:26:24 +00:00
Dario Tranchitella
1bbaebbc90 build(installer): releasing to capsule v0.1.1 2022-01-11 09:35:29 +00:00
Dario Tranchitella
4b8d8b2a7c build(helm): aligning to capsule v0.1.1 2022-01-11 09:35:29 +00:00
Dario Tranchitella
3fb4c41daf docs: removing development environment setup for capsule-proxy 2022-01-11 08:21:16 +00:00
Dario Tranchitella
055791966a docs: aligning to capsule-proxy documentation 2022-01-11 08:21:16 +00:00
Dario Tranchitella
c9af9c18e4 chore(ci): e2e for kubernetes v1.23 2022-01-03 10:33:42 +00:00
Maksim Fedotov
fef381d2b4 feat(helm): add default conversion webhook configuration to tenant CRD 2021-12-30 08:31:13 +00:00
Max Fedotov
19aff8c882 fix: ignore NotFound error in ServiceLabelsReconciler (#494)
Co-authored-by: Maksim Fedotov <m_fedotov@wargaming.net>
2021-12-29 18:26:45 +02:00
Dario Tranchitella
8da7e22cb2 fix(docs): broken link for documentation static website 2021-12-29 16:07:37 +00:00
Dario Tranchitella
47c37a3d5d feat(docs): v1alpha1 to v1beta1 upgrade guide 2021-12-27 07:51:04 +00:00
Dario Tranchitella
677175b3ed fix(docs): referring to old capsule version 2021-12-27 07:51:04 +00:00
Dario Tranchitella
c95e3a2068 docs: restoring multi-tenancy benchmark results 2021-12-26 19:51:48 +00:00
Dario Tranchitella
0be3be4480 docs: limiting amount of resources deployed in a tenant 2021-12-23 11:39:34 +00:00
Dario Tranchitella
6ad434fcfb test(e2e): limiting amount of resources deployed in a tenant 2021-12-23 11:39:34 +00:00
Dario Tranchitella
e53911942d feat: limiting amount of resources deployed in a tenant 2021-12-23 11:39:34 +00:00
ptx96
a179645f26 feat(helm): find kubectl tag from server version 2021-12-22 09:33:27 +01:00
Dario Tranchitella
778fb4bcc2 fix: starting all controllers only when certificates are generated
This solves an issue when upgrading Capsule from <v0.1.0 to
>=v0.1.0: due to a resource reflector, many warnings were polluting the
reconciliation loop and causing unmarshaling errors.

Additionally, only the CA secret was checked before starting the
Operator, although the TLS certificate is also required for the webhooks,
along with the `/convert` endpoint used for CR version conversion.
2021-12-21 06:45:16 +00:00
slushysnowman
bc23324fe7 feat(helm): add imagePullSecrets to jobs
Co-authored-by: Tom OBrien <tom.obrien@ns.nl>
2021-12-21 06:43:03 +00:00
Dario Tranchitella
4a6fd49554 fix: yaml installer should use namespace selector for pods webhook (#484) 2021-12-19 00:01:16 +01:00
Adriano Pezzuto
d7baf18bf9 Refactoring of the documentation structure (#481)
* docs: structure refactoring

* build(yaml): alignment to latest release
2021-12-16 17:39:30 +01:00
Oliver Bähler
5c7804e1bf fix: add rolebinding validation against rfc-1123 dns for sa subjects
Signed-off-by: Oliver Bähler <oliverbaehler@hotmail.com>
2021-11-12 11:22:26 +01:00
Oliver Bähler
c4481f26f7 docs: additions to dev-guide
Signed-off-by: Oliver Bähler <oliverbaehler@hotmail.com>
2021-11-12 11:22:26 +01:00
Maksim Fedotov
ec715d2e8f fix: do not register tenant controller\webhook\indexer until CA is created 2021-11-06 16:34:22 +01:00
Luca Spezzano
0aeaf89cb7 fix(docs): broken links and style, deleted command code from MD file 2021-11-06 16:30:34 +01:00
Dario Tranchitella
3d31ddb4e3 docs: instructions on how to develop the docs website 2021-11-06 16:30:34 +01:00
Luca Spezzano
e83f344cdc feat(docs): removed meta robots and added meta og:url 2021-11-06 16:30:34 +01:00
Luca Spezzano
da83a8711a style(docs): added blockquote style 2021-11-06 16:30:34 +01:00
Luca Spezzano
43a944ace0 feat(docs): created 404 default page 2021-11-06 16:30:34 +01:00
Luca Spezzano
0acc2d2ef1 feat(docs): setup Gridsome for the website 2021-11-06 16:30:34 +01:00
Maxim Fedotov
14f9686bbb Forbidden node labels and annotations (#464)
* feat: forbidden node labels and annotations

* test(e2e): forbidden node labels and annotations

* build(kustomize): forbidden node labels and annotations

* build(helm): forbidden node labels and annotations

* build(installer): forbidden node labels and annotations

* chore(make): forbidden node labels and annotations

* docs: forbidden node labels and annotations

* test(e2e): forbidden node labels and annotations. Use EventuallyCreation func

* feat: forbidden node labels and annotations. Check kubernetes version

* test(e2e): forbidden node labels and annotations. Check kubernetes version

* docs: forbidden node labels and annotations. Version restrictions

* feat: forbidden node labels and annotations. Do not update deepcopy functions

* docs: forbidden node labels and annotations. Use blockquotes for notes

Co-authored-by: Maksim Fedotov <m_fedotov@wargaming.net>
2021-11-02 20:01:53 +03:00
Dario Tranchitella
6ba9826c51 chore(linters): no more need of duplicate check 2021-11-02 17:13:23 +01:00
Dario Tranchitella
bd58084ded docs!: container registry enforcement requires fqci 2021-11-02 17:13:23 +01:00
Dario Tranchitella
3a5e50886d test: fqci is required for container registry enforcement 2021-11-02 17:13:23 +01:00
Dario Tranchitella
e2768dad83 fix!: forcing to use fqci and container registries with no repositories 2021-11-02 17:13:23 +01:00
Vivek Singh
b97c23176d fix: duplicate release for helm chart
this commit removes the Helm release workflow trigger on `create`, which fired a duplicate event alongside `push`

fixes: #459
2021-11-02 17:13:10 +01:00
329 changed files with 38335 additions and 8373 deletions


@@ -24,7 +24,7 @@ jobs:
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v2.3.0
with:
version: latest
version: v1.45.2
only-new-issues: false
args: --timeout 2m --config .golangci.yml
diff:
@@ -36,7 +36,7 @@ jobs:
fetch-depth: 0
- uses: actions/setup-go@v2
with:
go-version: '^1.16'
go-version: '1.18'
- run: make installer
- name: Checking if YAML installer file is not aligned
run: if [[ $(git diff | wc -l) -gt 0 ]]; then echo ">>> Untracked generated files have not been committed" && git --no-pager diff && exit 1; fi


@@ -36,6 +36,7 @@ jobs:
with:
images: |
quay.io/${{ github.repository }}
docker.io/${{ github.repository }}
tags: |
type=semver,pattern={{raw}}
flavor: |
@@ -68,6 +69,13 @@ jobs:
username: ${{ github.repository_owner }}+github
password: ${{ secrets.BOT_QUAY_IO }}
- name: Login to docker.io Container Registry
uses: docker/login-action@v1
with:
registry: docker.io
username: ${{ secrets.USER_DOCKER_IO }}
password: ${{ secrets.BOT_DOCKER_IO }}
- name: Build and push
id: build-release
uses: docker/build-push-action@v2


@@ -7,6 +7,7 @@ on:
- '.github/workflows/e2e.yml'
- 'api/**'
- 'controllers/**'
- 'pkg/**'
- 'e2e/*'
- 'Dockerfile'
- 'go.*'
@@ -18,6 +19,7 @@ on:
- '.github/workflows/e2e.yml'
- 'api/**'
- 'controllers/**'
- 'pkg/**'
- 'e2e/*'
- 'Dockerfile'
- 'go.*'
@@ -28,8 +30,9 @@ jobs:
kind:
name: Kubernetes
strategy:
fail-fast: false
matrix:
k8s-version: ['v1.16.15', 'v1.17.11', 'v1.18.8', 'v1.19.4', 'v1.20.7', 'v1.21.2', 'v1.22.0']
k8s-version: ['v1.16.15', 'v1.17.11', 'v1.18.8', 'v1.19.4', 'v1.20.7', 'v1.21.2', 'v1.22.4', 'v1.23.6', 'v1.24.1']
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
@@ -37,21 +40,16 @@ jobs:
fetch-depth: 0
- uses: actions/setup-go@v2
with:
go-version: '^1.16'
go-version: '1.18'
- run: make manifests
- name: Checking if manifests are disaligned
run: test -z "$(git diff 2> /dev/null)"
- name: Checking if manifests generated untracked files
run: test -z "$(git ls-files --others --exclude-standard 2> /dev/null)"
- name: Installing Ginkgo
run: go get github.com/onsi/ginkgo/ginkgo
- uses: actions/setup-go@v2
with:
go-version: '^1.16'
- uses: engineerd/setup-kind@v0.5.0
with:
skipClusterCreation: true
version: v0.11.1
version: v0.14.0
- uses: azure/setup-helm@v1
with:
version: 3.3.4


@@ -6,9 +6,6 @@ on:
tags: [ "helm-v*" ]
pull_request:
branches: [ "*" ]
create:
branches: [ "*" ]
tags: [ "helm-v*" ]
jobs:
lint:

.gitignore

@@ -28,4 +28,5 @@ bin
**/*.crt
**/*.key
.DS_Store
*.tgz


@@ -1,51 +1,39 @@
linters-settings:
govet:
check-shadowing: true
golint:
min-confidence: 0
maligned:
suggest-new: true
goimports:
local-prefixes: github.com/clastix/capsule
dupl:
threshold: 100
goconst:
min-len: 2
min-occurrences: 2
cyclop:
max-complexity: 27
gocognit:
min-complexity: 50
gci:
sections:
- standard
- default
- prefix(github.com/clastix/capsule)
linters:
disable-all: true
enable:
- bodyclose
- deadcode
- depguard
- dogsled
- dupl
- errcheck
- goconst
- gocritic
- gofmt
- goimports
- golint
- goprintffuncname
- gosec
- gosimple
- govet
- ineffassign
- interfacer
- misspell
- nolintlint
- rowserrcheck
- scopelint
- staticcheck
- structcheck
- stylecheck
- typecheck
- unconvert
- unparam
- unused
- varcheck
- whitespace
enable-all: true
disable:
- funlen
- gochecknoinits
- lll
- exhaustivestruct
- maligned
- interfacer
- scopelint
- golint
- gochecknoglobals
- goerr113
- gomnd
- paralleltest
- ireturn
- testpackage
- varnamelen
- wrapcheck
issues:
exclude:


@@ -1,5 +1,5 @@
# Build the manager binary
FROM golang:1.16 as builder
FROM golang:1.18 as builder
ARG TARGETARCH
ARG GIT_HEAD_COMMIT


@@ -2,7 +2,7 @@
VERSION ?= $$(git describe --abbrev=0 --tags --match "v*")
# Default bundle image tag
BUNDLE_IMG ?= quay.io/clastix/capsule:$(VERSION)-bundle
BUNDLE_IMG ?= clastix/capsule:$(VERSION)-bundle
# Options for 'bundle-build'
ifneq ($(origin CHANNELS), undefined)
BUNDLE_CHANNELS := --channels=$(CHANNELS)
@@ -13,7 +13,7 @@ endif
BUNDLE_METADATA_OPTS ?= $(BUNDLE_CHANNELS) $(BUNDLE_DEFAULT_CHANNEL)
# Image URL to use all building/pushing image targets
IMG ?= quay.io/clastix/capsule:$(VERSION)
IMG ?= clastix/capsule:$(VERSION)
# Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
CRD_OPTIONS ?= "crd:preserveUnknownFields=false"
@@ -40,8 +40,8 @@ test: generate manifests
go test ./... -coverprofile cover.out
# Build manager binary
manager: generate fmt vet
go build -o bin/manager main.go
manager: generate golint
go build -o bin/manager
# Run against the configured Kubernetes cluster in ~/.kube/config
run: generate manifests
@@ -126,7 +126,8 @@ dev-setup:
{'op': 'replace', 'path': '/webhooks/4/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/pods\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/5/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/persistentvolumeclaims\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/6/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/services\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/7/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/tenants\",'caBundle':\"$${CA_BUNDLE}\"}}\
{'op': 'replace', 'path': '/webhooks/7/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/tenants\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/8/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/nodes\",'caBundle':\"$${CA_BUNDLE}\"}}\
]";
# Build the docker image
@@ -144,23 +145,33 @@ docker-push:
CONTROLLER_GEN = $(shell pwd)/bin/controller-gen
controller-gen: ## Download controller-gen locally if necessary.
$(call go-get-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen@v0.5.0)
$(call go-install-tool,$(CONTROLLER_GEN),sigs.k8s.io/controller-tools/cmd/controller-gen@v0.5.0)
GINKGO = $(shell pwd)/bin/ginkgo
ginkgo: ## Download ginkgo locally if necessary.
$(call go-install-tool,$(KUSTOMIZE),github.com/onsi/ginkgo/ginkgo@v1.16.5)
KUSTOMIZE = $(shell pwd)/bin/kustomize
kustomize: ## Download kustomize locally if necessary.
$(call go-get-tool,$(KUSTOMIZE),sigs.k8s.io/kustomize/kustomize/v3@v3.8.7)
$(call install-kustomize,$(KUSTOMIZE),3.8.7)
# go-get-tool will 'go get' any package $2 and install it to $1.
PROJECT_DIR := $(shell dirname $(abspath $(lastword $(MAKEFILE_LIST))))
define go-get-tool
define install-kustomize
@[ -f $(1) ] || { \
set -e ;\
TMP_DIR=$$(mktemp -d) ;\
cd $$TMP_DIR ;\
go mod init tmp ;\
echo "Downloading $(2)" ;\
GOBIN=$(PROJECT_DIR)/bin go get $(2) ;\
rm -rf $$TMP_DIR ;\
echo "Installing v$(2)" ;\
cd bin ;\
wget "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" ;\
bash ./install_kustomize.sh $(2) ;\
}
endef
# go-install-tool will 'go install' any package $2 and install it to $1.
PROJECT_DIR := $(shell dirname $(abspath $(lastword $(MAKEFILE_LIST))))
define go-install-tool
@[ -f $(1) ] || { \
set -e ;\
echo "Installing $(2)" ;\
GOBIN=$(PROJECT_DIR)/bin go install $(2) ;\
}
endef
@@ -186,7 +197,7 @@ golint:
# Running e2e tests in a KinD instance
.PHONY: e2e
e2e/%:
e2e/%: ginkgo
kind create cluster --name capsule --image=kindest/node:$*
make docker-build
kind load docker-image --nodes capsule-control-plane --name capsule $(IMG)
@@ -202,5 +213,5 @@ e2e/%:
--set 'manager.readinessProbe.failureThreshold=10' \
capsule \
./charts/capsule
ginkgo -v -tags e2e ./e2e
$(GINKGO) -v -tags e2e ./e2e
kind delete cluster --name capsule
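
The new `go-install-tool` macro pins tool binaries under the project-local `bin/` directory by pointing `GOBIN` there, and skips the install when the binary already exists. Below is a rough Go equivalent of what that shell fragment does, offered as an illustrative sketch only (the `installTool` helper is hypothetical, not part of the repository):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// installTool mirrors the go-install-tool macro: do nothing if the binary is
// already present, otherwise run `go install` with GOBIN pointed at the
// project-local bin directory.
func installTool(binPath, pkg string) error {
	if _, err := os.Stat(binPath); err == nil {
		return nil // already installed
	}
	fmt.Println("Installing", pkg)
	cmd := exec.Command("go", "install", pkg)
	cmd.Env = append(os.Environ(), "GOBIN="+filepath.Dir(binPath))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	bin, err := filepath.Abs("bin")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := installTool(filepath.Join(bin, "ginkgo"), "github.com/onsi/ginkgo/ginkgo@v1.16.5"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```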

README.md

@@ -5,6 +5,9 @@
<a href="https://github.com/clastix/capsule/releases">
<img src="https://img.shields.io/github/v/release/clastix/capsule"/>
</a>
<a href="https://charmhub.io/capsule-k8s">
<img src="https://charmhub.io/capsule-k8s/badge.svg"/>
</a>
</p>
<p align="center">
@@ -13,161 +16,80 @@
---
**Join the community** on the [#capsule](https://kubernetes.slack.com/archives/C03GETTJQRL) channel in the [Kubernetes Slack](https://slack.k8s.io/).
# Kubernetes multi-tenancy made easy
**Capsule** helps to implement a multi-tenancy and policy-based environment in your Kubernetes cluster. It is not intended to be yet another _PaaS_, instead, it has been designed as a micro-services-based ecosystem with the minimalist approach, leveraging only on upstream Kubernetes.
**Capsule** implements a multi-tenant and policy-based environment in your Kubernetes cluster. It is designed as a micro-services-based ecosystem with the minimalist approach, leveraging only on upstream Kubernetes.
# What's the problem with the current status?
Kubernetes introduces the _Namespace_ object type to create logical partitions of the cluster as isolated *slices*. However, implementing advanced multi-tenancy scenarios, it soon becomes complicated because of the flat structure of Kubernetes namespaces and the impossibility to share resources among namespaces belonging to the same tenant. To overcome this, cluster admins tend to provision a dedicated cluster for each groups of users, teams, or departments. As an organization grows, the number of clusters to manage and keep aligned becomes an operational nightmare, described as the well know phenomena of the _clusters sprawl_.
Kubernetes introduces the _Namespace_ object type to create logical partitions of the cluster as isolated *slices*. However, implementing advanced multi-tenancy scenarios, it soon becomes complicated because of the flat structure of Kubernetes namespaces and the impossibility to share resources among namespaces belonging to the same tenant. To overcome this, cluster admins tend to provision a dedicated cluster for each groups of users, teams, or departments. As an organization grows, the number of clusters to manage and keep aligned becomes an operational nightmare, described as the well known phenomena of the _clusters sprawl_.
# Entering Capsule
Capsule takes a different approach. In a single cluster, the Capsule Controller aggregates multiple namespaces in a lightweight abstraction called _Tenant_, basically a grouping of Kubernetes Namespaces. Within each tenant, users are free to create their namespaces and share all the assigned resources while the Capsule Policy Engine keeps the different tenants isolated from each other.
The _Network and Security Policies_, _Resource Quota_, _Limit Ranges_, _RBAC_, and other policies defined at the tenant level are automatically inherited by all the namespaces in the tenant. Then users are free to operate their tenants in autonomy, without the intervention of the cluster administrator. Take a look at following diagram:
Capsule takes a different approach. In a single cluster, the Capsule Controller aggregates multiple namespaces in a lightweight abstraction called _Tenant_, basically a grouping of Kubernetes Namespaces. Within each tenant, users are free to create their namespaces and share all the assigned resources.
<p align="center" style="padding: 60px 20px">
<img src="assets/capsule-operator.svg" />
</p>
On the other side, the Capsule Policy Engine keeps the different tenants isolated from each other. _Network and Security Policies_, _Resource Quota_, _Limit Ranges_, _RBAC_, and other policies defined at the tenant level are automatically inherited by all the namespaces in the tenant. Then users are free to operate their tenants in autonomy, without the intervention of the cluster administrator.
# Features
## Self-Service
Leave to developers the freedom to self-provision their cluster resources according to the assigned boundaries.
Leave developers the freedom to self-provision their cluster resources according to the assigned boundaries.
## Preventing Clusters Sprawl
Share a single cluster with multiple teams, groups of users, or departments by saving operational and management efforts.
## Governance
Leverage Kubernetes Admission Controllers to enforce the industry security best practices and meet legal requirements.
Leverage Kubernetes Admission Controllers to enforce the industry security best practices and meet policy requirements.
## Resources Control
Take control of the resources consumed by users while preventing them from overtaking.
## Native Experience
Provide multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.
## GitOps ready
Capsule is completely declarative and GitOps ready.
## Bring your own device (BYOD)
Assign to tenants a dedicated set of compute, storage, and network resources and avoid the noisy neighbors' effect.
# Common use cases for Capsule
Please, refer to the corresponding [section](./docs/operator/use-cases/overview.md) in the project documentation for a detailed list of common use cases that Capsule can address.
# Installation
Make sure you have access to a Kubernetes cluster as administrator.
There are two ways to install Capsule:
* Use the Helm Chart available [here](./charts/capsule/README.md)
* Use the [single YAML file installer](./config/install.yaml)
## Install with the single YAML file installer
Ensure you have `kubectl` installed in your `PATH`.
Clone this repository and move to the repo folder:
```
$ kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/install.yaml
```
It will install the Capsule controller in a dedicated namespace `capsule-system`.
## How to create Tenants
Use the scaffold [Tenant](config/samples/capsule_v1beta1_tenant.yaml) and simply apply as cluster admin.
```
$ kubectl apply -f config/samples/capsule_v1beta1_tenant.yaml
tenant.capsule.clastix.io/gas created
```
You can check the tenant just created as
```
$ kubectl get tenants
NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR AGE
gas Active 3 0 {"kubernetes.io/os":"linux"} 25s
```
## Tenant owners
Each tenant comes with a delegated user or group of users acting as the tenant admin. In the Capsule jargon, this is called the _Tenant Owner_. Other users can operate inside a tenant with different levels of permissions and authorizations assigned directly by the Tenant Owner.
Capsule does not care about the authentication strategy used in the cluster and all the Kubernetes methods of [authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/) are supported. The only requirement to use Capsule is to assign tenant users to the group defined by the `--capsule-user-group` option, which defaults to `capsule.clastix.io`.
Assignment to a group depends on the authentication strategy in your cluster.
For example, if you are using `capsule.clastix.io`, users authenticated through a _X.509_ certificate must have `capsule.clastix.io` as _Organization_: `-subj "/CN=${USER}/O=capsule.clastix.io"`
Users authenticated through an _OIDC token_ must have in their token:
```json
...
"users_groups": [
"capsule.clastix.io",
"other_group"
]
```
The [hack/create-user.sh](hack/create-user.sh) can help you set up a dummy `kubeconfig` for the `bob` user acting as owner of a tenant called `gas`
```bash
./hack/create-user.sh bob gas
...
certificatesigningrequest.certificates.k8s.io/bob-gas created
certificatesigningrequest.certificates.k8s.io/bob-gas approved
kubeconfig file is: bob-gas.kubeconfig
to use it as bob export KUBECONFIG=bob-gas.kubeconfig
```
## Working with Tenants
Log in to the Kubernetes cluster as `bob` tenant owner
```
$ export KUBECONFIG=bob-gas.kubeconfig
```
and create a couple of new namespaces
```
$ kubectl create namespace gas-production
$ kubectl create namespace gas-development
```
As user `bob` you can operate with fully admin permissions:
```
$ kubectl -n gas-development run nginx --image=docker.io/nginx
$ kubectl -n gas-development get pods
```
but limited to only your own namespaces:
```
$ kubectl -n kube-system get pods
Error from server (Forbidden): pods is forbidden:
User "bob" cannot list resource "pods" in API group "" in the namespace "kube-system"
```
# Removal
Similar to `deploy`, you can get rid of Capsule using the `remove` target.
```
$ make remove
```
# Documentation
Please, check the project [documentation](./docs/index.md) for more cool things you can do with Capsule.
# Contribution
Please, check the project [documentation](https://capsule.clastix.io) for the cool things you can do with Capsule.
# Contributions
Capsule is Open Source with Apache 2 license and any contribution is welcome.
Please refer to the corresponding docs:
- [contributing.md](./docs/contributing.md) for the general guide; and
- [dev-guide.md](./docs/dev-guide.md) for how to set up the development env to get started.
## Chart Development
The documentation for each chart is done with [helm-docs](https://github.com/norwoodj/helm-docs). This way we can ensure that values are consistent with the chart documentation.
We have a script on the repository which will execute the helm-docs docker container, so that you don't have to worry about downloading the binary etc. Simply execute the script (Bash compatible):
```
bash scripts/helm-docs.sh
```
## Community
Join the community, share and learn from it. You can find all the resources to how to contribute code and docs, connect with people in the [community repository](https://github.com/clastix/capsule-community).
# Governance
You can find how the Capsule project is governed [here](https://capsule.clastix.io/docs/contributing/governance).
# FAQ
- Q. How to pronounce Capsule?
A. It should be pronounced as `/ˈkæpsjuːl/`.


@@ -19,9 +19,12 @@ func (in *AllowedListSpec) ExactMatch(value string) (ok bool) {
sort.SliceStable(in.Exact, func(i, j int) bool {
return strings.ToLower(in.Exact[i]) < strings.ToLower(in.Exact[j])
})
i := sort.SearchStrings(in.Exact, value)
ok = i < len(in.Exact) && in.Exact[i] == value
}
return
}
@@ -29,5 +32,6 @@ func (in AllowedListSpec) RegexMatch(value string) (ok bool) {
if len(in.Regex) > 0 {
ok = regexp.MustCompile(in.Regex).MatchString(value)
}
return
}


@@ -15,6 +15,7 @@ func TestAllowedListSpec_ExactMatch(t *testing.T) {
True []string
False []string
}
for _, tc := range []tc{
{
[]string{"foo", "bar", "bizz", "buzz"},
@@ -35,9 +36,11 @@ func TestAllowedListSpec_ExactMatch(t *testing.T) {
a := AllowedListSpec{
Exact: tc.In,
}
for _, ok := range tc.True {
assert.True(t, a.ExactMatch(ok))
}
for _, ko := range tc.False {
assert.False(t, a.ExactMatch(ko))
}
@@ -50,6 +53,7 @@ func TestAllowedListSpec_RegexMatch(t *testing.T) {
True []string
False []string
}
for _, tc := range []tc{
{`first-\w+-pattern`, []string{"first-date-pattern", "first-year-pattern"}, []string{"broken", "first-year", "second-date-pattern"}},
{``, nil, []string{"any", "value"}},
@@ -57,9 +61,11 @@ func TestAllowedListSpec_RegexMatch(t *testing.T) {
a := AllowedListSpec{
Regex: tc.Regex,
}
for _, ok := range tc.True {
assert.True(t, a.RegexMatch(ok))
}
for _, ko := range tc.False {
assert.False(t, a.RegexMatch(ko))
}
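
Read together, the two hunks above pin down the list-matching semantics: `ExactMatch` sorts the allow-list and binary-searches it, while `RegexMatch` compiles the pattern on each call and matches only when a pattern is set. A minimal, self-contained sketch of that behavior (the trimmed-down `AllowedListSpec` below is an illustration, not the full API type):

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
	"strings"
)

// AllowedListSpec is a trimmed-down stand-in for the API type in the diff:
// an exact-match allow-list plus an optional regular expression.
type AllowedListSpec struct {
	Exact []string
	Regex string
}

// ExactMatch sorts the list case-insensitively and binary-searches it,
// mirroring the patched function.
func (in *AllowedListSpec) ExactMatch(value string) bool {
	if len(in.Exact) == 0 {
		return false
	}
	sort.SliceStable(in.Exact, func(i, j int) bool {
		return strings.ToLower(in.Exact[i]) < strings.ToLower(in.Exact[j])
	})
	i := sort.SearchStrings(in.Exact, value)
	return i < len(in.Exact) && in.Exact[i] == value
}

// RegexMatch compiles the pattern on every call and matches only when a
// pattern is actually set, as in the diff.
func (in AllowedListSpec) RegexMatch(value string) bool {
	return len(in.Regex) > 0 && regexp.MustCompile(in.Regex).MatchString(value)
}

func main() {
	a := AllowedListSpec{Exact: []string{"foo", "bar"}, Regex: `first-\w+-pattern`}
	fmt.Println(a.ExactMatch("foo"))                 // true
	fmt.Println(a.ExactMatch("bizz"))                // false
	fmt.Println(a.RegexMatch("first-date-pattern"))  // true
	fmt.Println(a.RegexMatch("second-date-pattern")) // false
}
```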


@@ -0,0 +1,12 @@
package v1alpha1
const (
ForbiddenNodeLabelsAnnotation = "capsule.clastix.io/forbidden-node-labels"
ForbiddenNodeLabelsRegexpAnnotation = "capsule.clastix.io/forbidden-node-labels-regexp"
ForbiddenNodeAnnotationsAnnotation = "capsule.clastix.io/forbidden-node-annotations"
ForbiddenNodeAnnotationsRegexpAnnotation = "capsule.clastix.io/forbidden-node-annotations-regexp"
TLSSecretNameAnnotation = "capsule.clastix.io/tls-secret-name"
MutatingWebhookConfigurationName = "capsule.clastix.io/mutating-webhook-configuration-name"
ValidatingWebhookConfigurationName = "capsule.clastix.io/validating-webhook-configuration-name"
EnableTLSConfigurationAnnotationName = "capsule.clastix.io/enable-tls-configuration"
)
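
These constants are the annotation keys introduced by the "forbidden node labels and annotations" work above. As a sketch of how the deny logic could consult them (assuming, per the related commits, that the keys are read as annotations from a Capsule configuration object; the `forbidden` helper and the sample values are hypothetical, not the actual webhook code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Annotation keys, copied verbatim from the constants above.
const (
	forbiddenLabelsKey       = "capsule.clastix.io/forbidden-node-labels"
	forbiddenLabelsRegexpKey = "capsule.clastix.io/forbidden-node-labels-regexp"
)

// forbidden reports whether a node label key is denied by the configuration
// annotations: either listed verbatim (comma-separated) or matching the regexp.
func forbidden(annotations map[string]string, label string) bool {
	for _, exact := range strings.Split(annotations[forbiddenLabelsKey], ",") {
		if strings.TrimSpace(exact) == label {
			return true
		}
	}
	if pattern := annotations[forbiddenLabelsRegexpKey]; pattern != "" {
		return regexp.MustCompile(pattern).MatchString(label)
	}
	return false
}

func main() {
	cfg := map[string]string{
		forbiddenLabelsKey:       "environment,team",
		forbiddenLabelsRegexpKey: `^company\.io/.*`,
	}
	fmt.Println(forbidden(cfg, "team"))              // true
	fmt.Println(forbidden(cfg, "company.io/zone"))   // true
	fmt.Println(forbidden(cfg, "topology.k8s.io/a")) // false
}
```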


@@ -7,7 +7,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// CapsuleConfigurationSpec defines the Capsule configuration
// CapsuleConfigurationSpec defines the Capsule configuration.
type CapsuleConfigurationSpec struct {
// Names of the groups for Capsule users.
// +kubebuilder:default={capsule.clastix.io}
@@ -23,7 +23,7 @@ type CapsuleConfigurationSpec struct {
// +kubebuilder:object:root=true
// +kubebuilder:resource:scope=Cluster
// CapsuleConfiguration is the Schema for the Capsule configuration API
// CapsuleConfiguration is the Schema for the Capsule configuration API.
type CapsuleConfiguration struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
@@ -33,7 +33,7 @@ type CapsuleConfiguration struct {
// +kubebuilder:object:root=true
// CapsuleConfigurationList contains a list of CapsuleConfiguration
// CapsuleConfigurationList contains a list of CapsuleConfiguration.
type CapsuleConfigurationList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`


@@ -49,13 +49,13 @@ const (
)
func (t *Tenant) convertV1Alpha1OwnerToV1Beta1() capsulev1beta1.OwnerListSpec {
var serviceKindToAnnotationMap = map[capsulev1beta1.ProxyServiceKind][]string{
serviceKindToAnnotationMap := map[capsulev1beta1.ProxyServiceKind][]string{
capsulev1beta1.NodesProxy: {enableNodeListingAnnotation, enableNodeUpdateAnnotation, enableNodeDeletionAnnotation},
capsulev1beta1.StorageClassesProxy: {enableStorageClassListingAnnotation, enableStorageClassUpdateAnnotation, enableStorageClassDeletionAnnotation},
capsulev1beta1.IngressClassesProxy: {enableIngressClassListingAnnotation, enableIngressClassUpdateAnnotation, enableIngressClassDeletionAnnotation},
capsulev1beta1.PriorityClassesProxy: {enablePriorityClassListingAnnotation, enablePriorityClassUpdateAnnotation, enablePriorityClassDeletionAnnotation},
}
var annotationToOperationMap = map[string]capsulev1beta1.ProxyOperation{
annotationToOperationMap := map[string]capsulev1beta1.ProxyOperation{
enableNodeListingAnnotation: capsulev1beta1.ListOperation,
enableNodeUpdateAnnotation: capsulev1beta1.UpdateOperation,
enableNodeDeletionAnnotation: capsulev1beta1.DeleteOperation,
@@ -69,14 +69,15 @@ func (t *Tenant) convertV1Alpha1OwnerToV1Beta1() capsulev1beta1.OwnerListSpec {
enablePriorityClassUpdateAnnotation: capsulev1beta1.UpdateOperation,
enablePriorityClassDeletionAnnotation: capsulev1beta1.DeleteOperation,
}
var annotationToOwnerKindMap = map[string]capsulev1beta1.OwnerKind{
annotationToOwnerKindMap := map[string]capsulev1beta1.OwnerKind{
ownerUsersAnnotation: capsulev1beta1.UserOwner,
ownerGroupsAnnotation: capsulev1beta1.GroupOwner,
ownerServiceAccountAnnotation: capsulev1beta1.ServiceAccountOwner,
}
annotations := t.GetAnnotations()
var operations = make(map[string]map[capsulev1beta1.ProxyServiceKind][]capsulev1beta1.ProxyOperation)
operations := make(map[string]map[capsulev1beta1.ProxyServiceKind][]capsulev1beta1.ProxyOperation)
for serviceKind, operationAnnotations := range serviceKindToAnnotationMap {
for _, operationAnnotation := range operationAnnotations {
@@ -86,6 +87,7 @@ func (t *Tenant) convertV1Alpha1OwnerToV1Beta1() capsulev1beta1.OwnerListSpec {
if _, exists := operations[owner]; !exists {
operations[owner] = make(map[capsulev1beta1.ProxyServiceKind][]capsulev1beta1.ProxyOperation)
}
operations[owner][serviceKind] = append(operations[owner][serviceKind], annotationToOperationMap[operationAnnotation])
}
}
@@ -94,7 +96,7 @@ func (t *Tenant) convertV1Alpha1OwnerToV1Beta1() capsulev1beta1.OwnerListSpec {
var owners capsulev1beta1.OwnerListSpec
var getProxySettingsForOwner = func(ownerName string) (settings []capsulev1beta1.ProxySettings) {
getProxySettingsForOwner := func(ownerName string) (settings []capsulev1beta1.ProxySettings) {
ownerOperations, ok := operations[ownerName]
if ok {
for k, v := range ownerOperations {
@@ -104,6 +106,7 @@ func (t *Tenant) convertV1Alpha1OwnerToV1Beta1() capsulev1beta1.OwnerListSpec {
})
}
}
return
}
@@ -129,8 +132,13 @@ func (t *Tenant) convertV1Alpha1OwnerToV1Beta1() capsulev1beta1.OwnerListSpec {
return owners
}
// nolint:gocognit,gocyclo,cyclop,maintidx
func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
dst := dstRaw.(*capsulev1beta1.Tenant)
dst, ok := dstRaw.(*capsulev1beta1.Tenant)
if !ok {
return fmt.Errorf("expected type *capsulev1beta1.Tenant, got %T", dst)
}
annotations := t.GetAnnotations()
// ObjectMeta
@@ -141,6 +149,7 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
if dst.Spec.NamespaceOptions == nil {
dst.Spec.NamespaceOptions = &capsulev1beta1.NamespaceOptions{}
}
dst.Spec.NamespaceOptions.Quota = t.Spec.NamespaceQuota
}
@@ -152,11 +161,13 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
if dst.Spec.NamespaceOptions == nil {
dst.Spec.NamespaceOptions = &capsulev1beta1.NamespaceOptions{}
}
dst.Spec.NamespaceOptions.AdditionalMetadata = &capsulev1beta1.AdditionalMetadataSpec{
Labels: t.Spec.NamespacesMetadata.AdditionalLabels,
Annotations: t.Spec.NamespacesMetadata.AdditionalAnnotations,
}
}
if t.Spec.ServicesMetadata != nil {
if dst.Spec.ServiceOptions == nil {
dst.Spec.ServiceOptions = &capsulev1beta1.ServiceOptions{
@@ -167,13 +178,15 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
}
}
}
if t.Spec.StorageClasses != nil {
dst.Spec.StorageClasses = &capsulev1beta1.AllowedListSpec{
Exact: t.Spec.StorageClasses.Exact,
Regex: t.Spec.StorageClasses.Regex,
}
}
if v, ok := t.Annotations[ingressHostnameCollisionScope]; ok {
if v, annotationOk := t.Annotations[ingressHostnameCollisionScope]; annotationOk {
switch v {
case string(capsulev1beta1.HostnameCollisionScopeCluster), string(capsulev1beta1.HostnameCollisionScopeTenant), string(capsulev1beta1.HostnameCollisionScopeNamespace):
dst.Spec.IngressOptions.HostnameCollisionScope = capsulev1beta1.HostnameCollisionScope(v)
@@ -181,38 +194,44 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
dst.Spec.IngressOptions.HostnameCollisionScope = capsulev1beta1.HostnameCollisionScopeDisabled
}
}
if t.Spec.IngressClasses != nil {
dst.Spec.IngressOptions.AllowedClasses = &capsulev1beta1.AllowedListSpec{
Exact: t.Spec.IngressClasses.Exact,
Regex: t.Spec.IngressClasses.Regex,
}
}
if t.Spec.IngressHostnames != nil {
dst.Spec.IngressOptions.AllowedHostnames = &capsulev1beta1.AllowedListSpec{
Exact: t.Spec.IngressHostnames.Exact,
Regex: t.Spec.IngressHostnames.Regex,
}
}
if t.Spec.ContainerRegistries != nil {
dst.Spec.ContainerRegistries = &capsulev1beta1.AllowedListSpec{
Exact: t.Spec.ContainerRegistries.Exact,
Regex: t.Spec.ContainerRegistries.Regex,
}
}
if len(t.Spec.NetworkPolicies) > 0 {
dst.Spec.NetworkPolicies = capsulev1beta1.NetworkPolicySpec{
Items: t.Spec.NetworkPolicies,
}
}
if len(t.Spec.LimitRanges) > 0 {
dst.Spec.LimitRanges = capsulev1beta1.LimitRangesSpec{
Items: t.Spec.LimitRanges,
}
}
if len(t.Spec.ResourceQuota) > 0 {
dst.Spec.ResourceQuota = capsulev1beta1.ResourceQuotaSpec{
Scope: func() capsulev1beta1.ResourceQuotaScope {
if v, ok := t.GetAnnotations()[resourceQuotaScopeAnnotation]; ok {
if v, annotationOk := t.GetAnnotations()[resourceQuotaScopeAnnotation]; annotationOk {
switch v {
case string(capsulev1beta1.ResourceQuotaScopeNamespace):
return capsulev1beta1.ResourceQuotaScopeNamespace
@@ -220,11 +239,13 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
return capsulev1beta1.ResourceQuotaScopeTenant
}
}
return capsulev1beta1.ResourceQuotaScopeTenant
}(),
Items: t.Spec.ResourceQuota,
}
}
if len(t.Spec.AdditionalRoleBindings) > 0 {
for _, rb := range t.Spec.AdditionalRoleBindings {
dst.Spec.AdditionalRoleBindings = append(dst.Spec.AdditionalRoleBindings, capsulev1beta1.AdditionalRoleBindingsSpec{
@@ -233,10 +254,12 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
})
}
}
if t.Spec.ExternalServiceIPs != nil {
if dst.Spec.ServiceOptions == nil {
dst.Spec.ServiceOptions = &capsulev1beta1.ServiceOptions{}
}
dst.Spec.ServiceOptions.ExternalServiceIPs = &capsulev1beta1.ExternalServiceIPsSpec{
Allowed: make([]capsulev1beta1.AllowedIP, len(t.Spec.ExternalServiceIPs.Allowed)),
}
@@ -256,10 +279,13 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
priorityClasses := capsulev1beta1.AllowedListSpec{}
priorityClassAllowed, ok := annotations[podPriorityAllowedAnnotation]
if ok {
priorityClasses.Exact = strings.Split(priorityClassAllowed, ",")
}
priorityClassesRegexp, ok := annotations[podPriorityAllowedRegexAnnotation]
if ok {
priorityClasses.Regex = priorityClassesRegexp
}
@@ -274,12 +300,15 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
if err != nil {
return errors.Wrap(err, fmt.Sprintf("unable to parse %s annotation on tenant %s", enableNodePortsAnnotation, t.GetName()))
}
if dst.Spec.ServiceOptions == nil {
dst.Spec.ServiceOptions = &capsulev1beta1.ServiceOptions{}
}
if dst.Spec.ServiceOptions.AllowedServices == nil {
dst.Spec.ServiceOptions.AllowedServices = &capsulev1beta1.AllowedServices{}
}
dst.Spec.ServiceOptions.AllowedServices.NodePort = pointer.BoolPtr(val)
}
@@ -289,12 +318,15 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
if err != nil {
return errors.Wrap(err, fmt.Sprintf("unable to parse %s annotation on tenant %s", enableExternalNameAnnotation, t.GetName()))
}
if dst.Spec.ServiceOptions == nil {
dst.Spec.ServiceOptions = &capsulev1beta1.ServiceOptions{}
}
if dst.Spec.ServiceOptions.AllowedServices == nil {
dst.Spec.ServiceOptions.AllowedServices = &capsulev1beta1.AllowedServices{}
}
dst.Spec.ServiceOptions.AllowedServices.ExternalName = pointer.BoolPtr(val)
}
@@ -304,21 +336,22 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
if err != nil {
return errors.Wrap(err, fmt.Sprintf("unable to parse %s annotation on tenant %s", enableLoadBalancerAnnotation, t.GetName()))
}
if dst.Spec.ServiceOptions == nil {
dst.Spec.ServiceOptions = &capsulev1beta1.ServiceOptions{}
}
if dst.Spec.ServiceOptions.AllowedServices == nil {
dst.Spec.ServiceOptions.AllowedServices = &capsulev1beta1.AllowedServices{}
}
dst.Spec.ServiceOptions.AllowedServices.LoadBalancer = pointer.BoolPtr(val)
}
// Status
dst.Status = capsulev1beta1.TenantStatus{
Size: t.Status.Size,
Namespaces: t.Status.Namespaces,
}
// Remove unneeded annotations
delete(dst.ObjectMeta.Annotations, podAllowedImagePullPolicyAnnotation)
delete(dst.ObjectMeta.Annotations, podPriorityAllowedAnnotation)
@@ -347,14 +380,15 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
return nil
}
// nolint:gocognit,gocyclo,cyclop
func (t *Tenant) convertV1Beta1OwnerToV1Alpha1(src *capsulev1beta1.Tenant) {
var ownersAnnotations = map[string][]string{
ownersAnnotations := map[string][]string{
ownerGroupsAnnotation: nil,
ownerUsersAnnotation: nil,
ownerServiceAccountAnnotation: nil,
}
var proxyAnnotations = map[string][]string{
proxyAnnotations := map[string][]string{
enableNodeListingAnnotation: nil,
enableNodeUpdateAnnotation: nil,
enableNodeDeletionAnnotation: nil,
@@ -382,6 +416,7 @@ func (t *Tenant) convertV1Beta1OwnerToV1Alpha1(src *capsulev1beta1.Tenant) {
ownersAnnotations[ownerServiceAccountAnnotation] = append(ownersAnnotations[ownerServiceAccountAnnotation], owner.Name)
}
}
for _, setting := range owner.ProxyOperations {
switch setting.Kind {
case capsulev1beta1.NodesProxy:
@@ -437,6 +472,7 @@ func (t *Tenant) convertV1Beta1OwnerToV1Alpha1(src *capsulev1beta1.Tenant) {
t.Annotations[k] = strings.Join(v, ",")
}
}
for k, v := range proxyAnnotations {
if len(v) > 0 {
t.Annotations[k] = strings.Join(v, ",")
@@ -444,8 +480,12 @@ func (t *Tenant) convertV1Beta1OwnerToV1Alpha1(src *capsulev1beta1.Tenant) {
}
}
// nolint:gocyclo,cyclop
func (t *Tenant) ConvertFrom(srcRaw conversion.Hub) error {
src := srcRaw.(*capsulev1beta1.Tenant)
src, ok := srcRaw.(*capsulev1beta1.Tenant)
if !ok {
return fmt.Errorf("expected *capsulev1beta1.Tenant, got %T", srcRaw)
}
// ObjectMeta
t.ObjectMeta = src.ObjectMeta
@@ -469,47 +509,57 @@ func (t *Tenant) ConvertFrom(srcRaw conversion.Hub) error {
AdditionalAnnotations: src.Spec.NamespaceOptions.AdditionalMetadata.Annotations,
}
}
if src.Spec.ServiceOptions != nil && src.Spec.ServiceOptions.AdditionalMetadata != nil {
t.Spec.ServicesMetadata = &AdditionalMetadataSpec{
AdditionalLabels: src.Spec.ServiceOptions.AdditionalMetadata.Labels,
AdditionalAnnotations: src.Spec.ServiceOptions.AdditionalMetadata.Annotations,
}
}
if src.Spec.StorageClasses != nil {
t.Spec.StorageClasses = &AllowedListSpec{
Exact: src.Spec.StorageClasses.Exact,
Regex: src.Spec.StorageClasses.Regex,
}
}
t.Annotations[ingressHostnameCollisionScope] = string(src.Spec.IngressOptions.HostnameCollisionScope)
if src.Spec.IngressOptions.AllowedClasses != nil {
t.Spec.IngressClasses = &AllowedListSpec{
Exact: src.Spec.IngressOptions.AllowedClasses.Exact,
Regex: src.Spec.IngressOptions.AllowedClasses.Regex,
}
}
if src.Spec.IngressOptions.AllowedHostnames != nil {
t.Spec.IngressHostnames = &AllowedListSpec{
Exact: src.Spec.IngressOptions.AllowedHostnames.Exact,
Regex: src.Spec.IngressOptions.AllowedHostnames.Regex,
}
}
if src.Spec.ContainerRegistries != nil {
t.Spec.ContainerRegistries = &AllowedListSpec{
Exact: src.Spec.ContainerRegistries.Exact,
Regex: src.Spec.ContainerRegistries.Regex,
}
}
if len(src.Spec.NetworkPolicies.Items) > 0 {
t.Spec.NetworkPolicies = src.Spec.NetworkPolicies.Items
}
if len(src.Spec.LimitRanges.Items) > 0 {
t.Spec.LimitRanges = src.Spec.LimitRanges.Items
}
if len(src.Spec.ResourceQuota.Items) > 0 {
t.Annotations[resourceQuotaScopeAnnotation] = string(src.Spec.ResourceQuota.Scope)
t.Spec.ResourceQuota = src.Spec.ResourceQuota.Items
}
if len(src.Spec.AdditionalRoleBindings) > 0 {
for _, rb := range src.Spec.AdditionalRoleBindings {
t.Spec.AdditionalRoleBindings = append(t.Spec.AdditionalRoleBindings, AdditionalRoleBindingsSpec{
@@ -518,6 +568,7 @@ func (t *Tenant) ConvertFrom(srcRaw conversion.Hub) error {
})
}
}
if src.Spec.ServiceOptions != nil && src.Spec.ServiceOptions.ExternalServiceIPs != nil {
t.Spec.ExternalServiceIPs = &ExternalServiceIPsSpec{
Allowed: make([]AllowedIP, len(src.Spec.ServiceOptions.ExternalServiceIPs.Allowed)),
@@ -527,11 +578,14 @@ func (t *Tenant) ConvertFrom(srcRaw conversion.Hub) error {
t.Spec.ExternalServiceIPs.Allowed[i] = AllowedIP(IP)
}
}
if len(src.Spec.ImagePullPolicies) != 0 {
var pullPolicies []string
for _, policy := range src.Spec.ImagePullPolicies {
pullPolicies = append(pullPolicies, string(policy))
}
t.Annotations[podAllowedImagePullPolicyAnnotation] = strings.Join(pullPolicies, ",")
}
@@ -539,15 +593,24 @@ func (t *Tenant) ConvertFrom(srcRaw conversion.Hub) error {
if len(src.Spec.PriorityClasses.Exact) != 0 {
t.Annotations[podPriorityAllowedAnnotation] = strings.Join(src.Spec.PriorityClasses.Exact, ",")
}
if src.Spec.PriorityClasses.Regex != "" {
t.Annotations[podPriorityAllowedRegexAnnotation] = src.Spec.PriorityClasses.Regex
}
}
if src.Spec.ServiceOptions != nil && src.Spec.ServiceOptions.AllowedServices != nil {
t.Annotations[enableNodePortsAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.NodePort)
t.Annotations[enableExternalNameAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.ExternalName)
t.Annotations[enableLoadBalancerAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.LoadBalancer)
if src.Spec.ServiceOptions.AllowedServices.NodePort != nil {
t.Annotations[enableNodePortsAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.NodePort)
}
if src.Spec.ServiceOptions.AllowedServices.ExternalName != nil {
t.Annotations[enableExternalNameAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.ExternalName)
}
if src.Spec.ServiceOptions.AllowedServices.LoadBalancer != nil {
t.Annotations[enableLoadBalancerAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.LoadBalancer)
}
}
// Status
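
The conversion hunks above replace unchecked type assertions on the `conversion.Hub` value with checked ones, so a mismatched type surfaces as an error rather than a runtime panic. A stripped-down sketch of the pattern (the local `Hub` interface and tenant types stand in for the controller-runtime and Capsule types):

```go
package main

import "fmt"

// Hub marks the storage version, standing in for controller-runtime's
// conversion.Hub interface.
type Hub interface{ Hub() }

// TenantV1Beta1 plays the hub (storage) version.
type TenantV1Beta1 struct{ Name string }

func (TenantV1Beta1) Hub() {}

// TenantV1Alpha1 plays the spoke version being converted.
type TenantV1Alpha1 struct{ Name string }

// ConvertTo uses a checked type assertion, as the patch does, so a mismatched
// Hub type becomes an error instead of a panic.
func (t *TenantV1Alpha1) ConvertTo(dstRaw Hub) error {
	dst, ok := dstRaw.(*TenantV1Beta1)
	if !ok {
		return fmt.Errorf("expected type *TenantV1Beta1, got %T", dstRaw)
	}
	dst.Name = t.Name
	return nil
}

func main() {
	src := &TenantV1Alpha1{Name: "alice"}
	dst := &TenantV1Beta1{}
	fmt.Println(src.ConvertTo(dst), dst.Name) // <nil> alice
}
```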


@@ -18,12 +18,14 @@ import (
capsulev1beta1 "github.com/clastix/capsule/api/v1beta1"
)
// nolint:maintidx
func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
var namespaceQuota int32 = 5
var nodeSelector = map[string]string{
nodeSelector := map[string]string{
"foo": "bar",
}
var v1alpha1AdditionalMetadataSpec = &AdditionalMetadataSpec{
v1alpha1AdditionalMetadataSpec := &AdditionalMetadataSpec{
AdditionalLabels: map[string]string{
"foo": "bar",
},
@@ -31,11 +33,11 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
"foo": "bar",
},
}
var v1alpha1AllowedListSpec = &AllowedListSpec{
v1alpha1AllowedListSpec := &AllowedListSpec{
Exact: []string{"foo", "bar"},
Regex: "^foo*",
}
var v1beta1AdditionalMetadataSpec = &capsulev1beta1.AdditionalMetadataSpec{
v1beta1AdditionalMetadataSpec := &capsulev1beta1.AdditionalMetadataSpec{
Labels: map[string]string{
"foo": "bar",
},
@@ -43,11 +45,11 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
"foo": "bar",
},
}
var v1beta1NamespaceOptions = &capsulev1beta1.NamespaceOptions{
v1beta1NamespaceOptions := &capsulev1beta1.NamespaceOptions{
Quota: &namespaceQuota,
AdditionalMetadata: v1beta1AdditionalMetadataSpec,
}
var v1beta1ServiceOptions = &capsulev1beta1.ServiceOptions{
v1beta1ServiceOptions := &capsulev1beta1.ServiceOptions{
AdditionalMetadata: v1beta1AdditionalMetadataSpec,
AllowedServices: &capsulev1beta1.AllowedServices{
NodePort: pointer.BoolPtr(false),
@@ -58,11 +60,11 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
Allowed: []capsulev1beta1.AllowedIP{"192.168.0.1"},
},
}
var v1beta1AllowedListSpec = &capsulev1beta1.AllowedListSpec{
v1beta1AllowedListSpec := &capsulev1beta1.AllowedListSpec{
Exact: []string{"foo", "bar"},
Regex: "^foo*",
}
var networkPolicies = []networkingv1.NetworkPolicySpec{
networkPolicies := []networkingv1.NetworkPolicySpec{
{
Ingress: []networkingv1.NetworkPolicyIngressRule{
{
@@ -87,7 +89,7 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
},
},
}
var limitRanges = []corev1.LimitRangeSpec{
limitRanges := []corev1.LimitRangeSpec{
{
Limits: []corev1.LimitRangeItem{
{
@@ -104,7 +106,7 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
},
},
}
var resourceQuotas = []corev1.ResourceQuotaSpec{
resourceQuotas := []corev1.ResourceQuotaSpec{
{
Hard: map[corev1.ResourceName]resource.Quantity{
corev1.ResourceLimitsCPU: resource.MustParse("8"),
@@ -118,7 +120,7 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
},
}
var v1beta1Tnt = capsulev1beta1.Tenant{
v1beta1Tnt := capsulev1beta1.Tenant{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
Name: "alice",
@@ -256,7 +258,7 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
Subjects: []rbacv1.Subject{
{
Kind: "Group",
APIGroup: "rbac.authorization.k8s.io",
APIGroup: rbacv1.GroupName,
Name: "system:authenticated",
},
},
@@ -274,7 +276,7 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
},
}
var v1alpha1Tnt = Tenant{
v1alpha1Tnt := Tenant{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
Name: "alice",
@@ -327,7 +329,7 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
Subjects: []rbacv1.Subject{
{
Kind: "Group",
APIGroup: "rbac.authorization.k8s.io",
APIGroup: rbacv1.GroupName,
Name: "system:authenticated",
},
},
@@ -347,10 +349,11 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
}
func TestConversionHub_ConvertTo(t *testing.T) {
var v1beta1ConvertedTnt = capsulev1beta1.Tenant{}
v1beta1ConvertedTnt := capsulev1beta1.Tenant{}
v1alpha1Tnt, v1beta1tnt := generateTenantsSpecs()
err := v1alpha1Tnt.ConvertTo(&v1beta1ConvertedTnt)
if assert.NoError(t, err) {
sort.Slice(v1beta1tnt.Spec.Owners, func(i, j int) bool {
return v1beta1tnt.Spec.Owners[i].Name < v1beta1tnt.Spec.Owners[j].Name
@@ -364,17 +367,20 @@ func TestConversionHub_ConvertTo(t *testing.T) {
return owner.ProxyOperations[i].Kind < owner.ProxyOperations[j].Kind
})
}
for _, owner := range v1beta1ConvertedTnt.Spec.Owners {
sort.Slice(owner.ProxyOperations, func(i, j int) bool {
return owner.ProxyOperations[i].Kind < owner.ProxyOperations[j].Kind
})
}
assert.Equal(t, v1beta1tnt, v1beta1ConvertedTnt)
}
}
func TestConversionHub_ConvertFrom(t *testing.T) {
var v1alpha1ConvertedTnt = Tenant{}
v1alpha1ConvertedTnt := Tenant{}
v1alpha1Tnt, v1beta1tnt := generateTenantsSpecs()
err := v1alpha1ConvertedTnt.ConvertFrom(&v1beta1tnt)

View File

@@ -12,10 +12,10 @@ import (
)
var (
// GroupVersion is group version used to register these objects
// GroupVersion is group version used to register these objects.
GroupVersion = schema.GroupVersion{Group: "capsule.clastix.io", Version: "v1alpha1"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
// SchemeBuilder is used to add go types to the GroupVersionKind scheme.
SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.

View File

@@ -3,7 +3,7 @@
package v1alpha1
// OwnerSpec defines tenant owner name and kind
// OwnerSpec defines tenant owner name and kind.
type OwnerSpec struct {
Name string `json:"name"`
Kind Kind `json:"kind"`

View File

@@ -13,6 +13,7 @@ func (t *Tenant) IsCordoned() bool {
if v, ok := t.Labels["capsule.clastix.io/cordon"]; ok && v == "enabled" {
return true
}
return false
}
@@ -21,16 +22,19 @@ func (t *Tenant) IsFull() bool {
if t.Spec.NamespaceQuota == nil {
return false
}
return len(t.Status.Namespaces) >= int(*t.Spec.NamespaceQuota)
}
func (t *Tenant) AssignNamespaces(namespaces []corev1.Namespace) {
var l []string
for _, ns := range namespaces {
if ns.Status.Phase == corev1.NamespaceActive {
l = append(l, ns.GetName())
}
}
sort.Strings(l)
t.Status.Namespaces = l

View File

@@ -27,5 +27,6 @@ func GetTypeLabel(t runtime.Object) (label string, err error) {
default:
err = fmt.Errorf("type %T is not mapped to any Capsule label", v)
}
return
}

View File

@@ -9,7 +9,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// TenantSpec defines the desired state of Tenant
// TenantSpec defines the desired state of Tenant.
type TenantSpec struct {
Owner OwnerSpec `json:"owner"`
@@ -29,7 +29,7 @@ type TenantSpec struct {
ExternalServiceIPs *ExternalServiceIPsSpec `json:"externalServiceIPs,omitempty"`
}
// TenantStatus defines the observed state of Tenant
// TenantStatus defines the observed state of Tenant.
type TenantStatus struct {
Size uint `json:"size"`
Namespaces []string `json:"namespaces,omitempty"`
@@ -45,7 +45,7 @@ type TenantStatus struct {
// +kubebuilder:printcolumn:name="Node selector",type="string",JSONPath=".spec.nodeSelector",description="Node Selector applied to Pods"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp",description="Age"
// Tenant is the Schema for the tenants API
// Tenant is the Schema for the tenants API.
type Tenant struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
@@ -56,7 +56,7 @@ type Tenant struct {
// +kubebuilder:object:root=true
// TenantList contains a list of Tenant
// TenantList contains a list of Tenant.
type TenantList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`

View File

@@ -1,3 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
// Copyright 2020-2021 Clastix Labs

View File

@@ -19,9 +19,12 @@ func (in *AllowedListSpec) ExactMatch(value string) (ok bool) {
sort.SliceStable(in.Exact, func(i, j int) bool {
return strings.ToLower(in.Exact[i]) < strings.ToLower(in.Exact[j])
})
i := sort.SearchStrings(in.Exact, value)
ok = i < len(in.Exact) && in.Exact[i] == value
}
return
}
@@ -29,5 +32,6 @@ func (in AllowedListSpec) RegexMatch(value string) (ok bool) {
if len(in.Regex) > 0 {
ok = regexp.MustCompile(in.Regex).MatchString(value)
}
return
}

View File

@@ -15,6 +15,7 @@ func TestAllowedListSpec_ExactMatch(t *testing.T) {
True []string
False []string
}
for _, tc := range []tc{
{
[]string{"foo", "bar", "bizz", "buzz"},
@@ -35,9 +36,11 @@ func TestAllowedListSpec_ExactMatch(t *testing.T) {
a := AllowedListSpec{
Exact: tc.In,
}
for _, ok := range tc.True {
assert.True(t, a.ExactMatch(ok))
}
for _, ko := range tc.False {
assert.False(t, a.ExactMatch(ko))
}
@@ -50,6 +53,7 @@ func TestAllowedListSpec_RegexMatch(t *testing.T) {
True []string
False []string
}
for _, tc := range []tc{
{`first-\w+-pattern`, []string{"first-date-pattern", "first-year-pattern"}, []string{"broken", "first-year", "second-date-pattern"}},
{``, nil, []string{"any", "value"}},
@@ -57,9 +61,11 @@ func TestAllowedListSpec_RegexMatch(t *testing.T) {
a := AllowedListSpec{
Regex: tc.Regex,
}
for _, ok := range tc.True {
assert.True(t, a.RegexMatch(ok))
}
for _, ko := range tc.False {
assert.False(t, a.RegexMatch(ko))
}

View File

@@ -0,0 +1,59 @@
// Copyright 2020-2021 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1beta1
import (
"fmt"
"strconv"
)
const (
ResourceQuotaAnnotationPrefix = "quota.resources.capsule.clastix.io"
ResourceUsedAnnotationPrefix = "used.resources.capsule.clastix.io"
)
func UsedAnnotationForResource(kindGroup string) string {
return fmt.Sprintf("%s/%s", ResourceUsedAnnotationPrefix, kindGroup)
}
func LimitAnnotationForResource(kindGroup string) string {
return fmt.Sprintf("%s/%s", ResourceQuotaAnnotationPrefix, kindGroup)
}
func GetUsedResourceFromTenant(tenant Tenant, kindGroup string) (int64, error) {
usedStr, ok := tenant.GetAnnotations()[UsedAnnotationForResource(kindGroup)]
if !ok {
usedStr = "0"
}
used, err := strconv.ParseInt(usedStr, 10, 64)
if err != nil {
return 0, fmt.Errorf("resource %s usage cannot be parsed, %w", kindGroup, err)
}
return used, nil
}
type NonLimitedResourceError struct {
kindGroup string
}
func NewNonLimitedResourceError(kindGroup string) *NonLimitedResourceError {
return &NonLimitedResourceError{kindGroup: kindGroup}
}
func (n NonLimitedResourceError) Error() string {
return fmt.Sprintf("resource %s is not limited for the current tenant", n.kindGroup)
}
func GetLimitResourceFromTenant(tenant Tenant, kindGroup string) (int64, error) {
limitStr, ok := tenant.GetAnnotations()[LimitAnnotationForResource(kindGroup)]
if !ok {
return 0, NewNonLimitedResourceError(kindGroup)
}
limit, err := strconv.ParseInt(limitStr, 10, 64)
if err != nil {
return 0, fmt.Errorf("resource %s limit cannot be parsed, %w", kindGroup, err)
}
return limit, nil
}
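For context, here is a minimal sketch (not part of the diff) of how these new helpers resolve the quota annotations on a Tenant; the `deployments.apps` kind-group value is an illustrative assumption:
```go
package main

import (
	"fmt"

	capsulev1beta1 "github.com/clastix/capsule/api/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	tenant := capsulev1beta1.Tenant{
		ObjectMeta: metav1.ObjectMeta{
			Annotations: map[string]string{
				// quota.resources.capsule.clastix.io/deployments.apps
				capsulev1beta1.LimitAnnotationForResource("deployments.apps"): "10",
				// used.resources.capsule.clastix.io/deployments.apps
				capsulev1beta1.UsedAnnotationForResource("deployments.apps"): "3",
			},
		},
	}

	// GetLimitResourceFromTenant returns a NonLimitedResourceError when the
	// limit annotation is missing for the given kind-group.
	limit, err := capsulev1beta1.GetLimitResourceFromTenant(tenant, "deployments.apps")
	if err != nil {
		fmt.Println(err)

		return
	}

	used, _ := capsulev1beta1.GetUsedResourceFromTenant(tenant, "deployments.apps")
	fmt.Printf("used %d of %d\n", used, limit) // used 3 of 10
}
```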

View File

@@ -11,5 +11,6 @@ func (t *Tenant) IsWildcardDenied() bool {
if v, ok := t.Annotations[denyWildcard]; ok && v == "true" {
return true
}
return false
}

View File

@@ -19,9 +19,12 @@ func (in *ForbiddenListSpec) ExactMatch(value string) (ok bool) {
sort.SliceStable(in.Exact, func(i, j int) bool {
return strings.ToLower(in.Exact[i]) < strings.ToLower(in.Exact[j])
})
i := sort.SearchStrings(in.Exact, value)
ok = i < len(in.Exact) && in.Exact[i] == value
}
return
}
@@ -29,5 +32,6 @@ func (in ForbiddenListSpec) RegexMatch(value string) (ok bool) {
if len(in.Regex) > 0 {
ok = regexp.MustCompile(in.Regex).MatchString(value)
}
return
}

View File

@@ -15,6 +15,7 @@ func TestForbiddenListSpec_ExactMatch(t *testing.T) {
True []string
False []string
}
for _, tc := range []tc{
{
[]string{"foo", "bar", "bizz", "buzz"},
@@ -35,9 +36,11 @@ func TestForbiddenListSpec_ExactMatch(t *testing.T) {
a := ForbiddenListSpec{
Exact: tc.In,
}
for _, ok := range tc.True {
assert.True(t, a.ExactMatch(ok))
}
for _, ko := range tc.False {
assert.False(t, a.ExactMatch(ko))
}
@@ -50,6 +53,7 @@ func TestForbiddenListSpec_RegexMatch(t *testing.T) {
True []string
False []string
}
for _, tc := range []tc{
{`first-\w+-pattern`, []string{"first-date-pattern", "first-year-pattern"}, []string{"broken", "first-year", "second-date-pattern"}},
{``, nil, []string{"any", "value"}},
@@ -57,9 +61,11 @@ func TestForbiddenListSpec_RegexMatch(t *testing.T) {
a := ForbiddenListSpec{
Regex: tc.Regex,
}
for _, ok := range tc.True {
assert.True(t, a.RegexMatch(ok))
}
for _, ko := range tc.False {
assert.False(t, a.RegexMatch(ko))
}

View File

@@ -12,10 +12,10 @@ import (
)
var (
// GroupVersion is group version used to register these objects
// GroupVersion is group version used to register these objects.
GroupVersion = schema.GroupVersion{Group: "capsule.clastix.io", Version: "v1beta1"}
// SchemeBuilder is used to add go types to the GroupVersionKind scheme
// SchemeBuilder is used to add go types to the GroupVersionKind scheme.
SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
// AddToScheme adds the types in this group-version to the given scheme.

View File

@@ -14,9 +14,11 @@ func (t *Tenant) hasForbiddenNamespaceLabelsAnnotations() bool {
if _, ok := t.Annotations[ForbiddenNamespaceLabelsAnnotation]; ok {
return true
}
if _, ok := t.Annotations[ForbiddenNamespaceLabelsRegexpAnnotation]; ok {
return true
}
return false
}
@@ -24,9 +26,11 @@ func (t *Tenant) hasForbiddenNamespaceAnnotationsAnnotations() bool {
if _, ok := t.Annotations[ForbiddenNamespaceAnnotationsAnnotation]; ok {
return true
}
if _, ok := t.Annotations[ForbiddenNamespaceAnnotationsRegexpAnnotation]; ok {
return true
}
return false
}
@@ -34,6 +38,7 @@ func (t *Tenant) ForbiddenUserNamespaceLabels() *ForbiddenListSpec {
if !t.hasForbiddenNamespaceLabelsAnnotations() {
return nil
}
return &ForbiddenListSpec{
Exact: strings.Split(t.Annotations[ForbiddenNamespaceLabelsAnnotation], ","),
Regex: t.Annotations[ForbiddenNamespaceLabelsRegexpAnnotation],
@@ -44,6 +49,7 @@ func (t *Tenant) ForbiddenUserNamespaceAnnotations() *ForbiddenListSpec {
if !t.hasForbiddenNamespaceAnnotationsAnnotations() {
return nil
}
return &ForbiddenListSpec{
Exact: strings.Split(t.Annotations[ForbiddenNamespaceAnnotationsAnnotation], ","),
Regex: t.Annotations[ForbiddenNamespaceAnnotationsRegexpAnnotation],

View File

@@ -15,6 +15,7 @@ func (o OwnerListSpec) FindOwner(name string, kind OwnerKind) (owner OwnerSpec)
if i < len(o) && o[i].Kind == kind && o[i].Name == name {
return o[i]
}
return
}
@@ -23,12 +24,15 @@ type ByKindAndName OwnerListSpec
func (b ByKindAndName) Len() int {
return len(b)
}
func (b ByKindAndName) Less(i, j int) bool {
if b[i].Kind.String() != b[j].Kind.String() {
return b[i].Kind.String() < b[j].Kind.String()
}
return b[i].Name < b[j].Name
}
func (b ByKindAndName) Swap(i, j int) {
b[i], b[j] = b[j], b[i]
}

View File

@@ -7,7 +7,7 @@ import (
)
func TestOwnerListSpec_FindOwner(t *testing.T) {
var bla = OwnerSpec{
bla := OwnerSpec{
Kind: UserOwner,
Name: "bla",
ProxyOperations: []ProxySettings{
@@ -17,7 +17,7 @@ func TestOwnerListSpec_FindOwner(t *testing.T) {
},
},
}
var bar = OwnerSpec{
bar := OwnerSpec{
Kind: GroupOwner,
Name: "bar",
ProxyOperations: []ProxySettings{
@@ -27,7 +27,7 @@ func TestOwnerListSpec_FindOwner(t *testing.T) {
},
},
}
var baz = OwnerSpec{
baz := OwnerSpec{
Kind: UserOwner,
Name: "baz",
ProxyOperations: []ProxySettings{
@@ -37,7 +37,7 @@ func TestOwnerListSpec_FindOwner(t *testing.T) {
},
},
}
var fim = OwnerSpec{
fim := OwnerSpec{
Kind: ServiceAccountOwner,
Name: "fim",
ProxyOperations: []ProxySettings{
@@ -47,7 +47,7 @@ func TestOwnerListSpec_FindOwner(t *testing.T) {
},
},
}
var bom = OwnerSpec{
bom := OwnerSpec{
Kind: GroupOwner,
Name: "bom",
ProxyOperations: []ProxySettings{
@@ -61,7 +61,7 @@ func TestOwnerListSpec_FindOwner(t *testing.T) {
},
},
}
var qip = OwnerSpec{
qip := OwnerSpec{
Kind: ServiceAccountOwner,
Name: "qip",
ProxyOperations: []ProxySettings{
@@ -71,7 +71,7 @@ func TestOwnerListSpec_FindOwner(t *testing.T) {
},
},
}
var owners = OwnerListSpec{bom, qip, bla, bar, baz, fim}
owners := OwnerListSpec{bom, qip, bla, bar, baz, fim}
assert.Equal(t, owners.FindOwner("bom", GroupOwner), bom)
assert.Equal(t, owners.FindOwner("qip", ServiceAccountOwner), qip)

api/v1beta1/owner_role.go Normal file
View File

@@ -0,0 +1,48 @@
// Copyright 2020-2021 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1beta1
import (
"fmt"
"strings"
)
const (
ClusterRoleNamesAnnotation = "clusterrolenames.capsule.clastix.io"
)
// GetRoles reads the annotations available in the Tenant specification and, if one matches the pattern
// clusterrolenames.capsule.clastix.io/${KIND}.${NAME}, returns the associated roles.
// Kubernetes annotation and label keys must respect the RFC 1123 rules for DNS names, which can be cumbersome in two cases:
// 1. identifying users by their email address
// 2. an annotation key whose overall length exceeds the 63-character limit
// For emails, the symbol @ can be replaced with the placeholder __AT__.
// For the latter, the index of the owner can be used instead to force the retrieval.
func (in OwnerSpec) GetRoles(tenant Tenant, index int) []string {
for key, value := range tenant.GetAnnotations() {
if !strings.HasPrefix(key, fmt.Sprintf("%s/", ClusterRoleNamesAnnotation)) {
continue
}
for symbol, replace := range in.convertMap() {
key = strings.ReplaceAll(key, symbol, replace)
}
nameBased := key == fmt.Sprintf("%s/%s.%s", ClusterRoleNamesAnnotation, strings.ToLower(in.Kind.String()), strings.ToLower(in.Name))
indexBased := key == fmt.Sprintf("%s/%d", ClusterRoleNamesAnnotation, index)
if nameBased || indexBased {
return strings.Split(value, ",")
}
}
return []string{"admin", "capsule-namespace-deleter"}
}
func (in OwnerSpec) convertMap() map[string]string {
return map[string]string{
"__AT__": "@",
}
}
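A hedged sketch (not part of the diff) of the annotation lookup described above; the owner and annotation values are illustrative assumptions:
```go
package main

import (
	"fmt"

	capsulev1beta1 "github.com/clastix/capsule/api/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	owner := capsulev1beta1.OwnerSpec{
		Kind: capsulev1beta1.UserOwner,
		Name: "alice@clastix.io",
	}

	tenant := capsulev1beta1.Tenant{
		ObjectMeta: metav1.ObjectMeta{
			Annotations: map[string]string{
				// __AT__ stands in for the @ that is not allowed in the key.
				"clusterrolenames.capsule.clastix.io/user.alice__AT__clastix.io": "view,custom-role",
			},
		},
	}

	// Name-based match: prints [view custom-role].
	fmt.Println(owner.GetRoles(tenant, 0))

	// With no matching annotation, the defaults are returned:
	// [admin capsule-namespace-deleter].
	fmt.Println(owner.GetRoles(capsulev1beta1.Tenant{}, 0))
}
```
The index-based form (`clusterrolenames.capsule.clastix.io/0`) matches on the owner's position in `spec.owners`, which keeps the key well under the 63-character limit.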

View File

@@ -5,6 +5,7 @@ package v1beta1
import (
"fmt"
"strings"
)
const (
@@ -18,12 +19,13 @@ const (
ForbiddenNamespaceLabelsRegexpAnnotation = "capsule.clastix.io/forbidden-namespace-labels-regexp"
ForbiddenNamespaceAnnotationsAnnotation = "capsule.clastix.io/forbidden-namespace-annotations"
ForbiddenNamespaceAnnotationsRegexpAnnotation = "capsule.clastix.io/forbidden-namespace-annotations-regexp"
ProtectedTenantAnnotation = "capsule.clastix.io/protected"
)
func UsedQuotaFor(resource fmt.Stringer) string {
return "quota.capsule.clastix.io/used-" + resource.String()
return "quota.capsule.clastix.io/used-" + strings.ReplaceAll(resource.String(), "/", "_")
}
func HardQuotaFor(resource fmt.Stringer) string {
return "quota.capsule.clastix.io/hard-" + resource.String()
return "quota.capsule.clastix.io/hard-" + strings.ReplaceAll(resource.String(), "/", "_")
}
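A brief sketch (not part of the diff) of the effect: extended resource names carry a `/`, which would produce an invalid annotation key, so the helpers now map it to `_`; the GPU resource name is an illustrative assumption:
```go
package main

import (
	"fmt"

	capsulev1beta1 "github.com/clastix/capsule/api/v1beta1"
	corev1 "k8s.io/api/core/v1"
)

func main() {
	gpu := corev1.ResourceName("requests.nvidia.com/gpu")

	// Prints: quota.capsule.clastix.io/used-requests.nvidia.com_gpu
	fmt.Println(capsulev1beta1.UsedQuotaFor(gpu))

	// Prints: quota.capsule.clastix.io/hard-requests.nvidia.com_gpu
	fmt.Println(capsulev1beta1.HardQuotaFor(gpu))
}
```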

View File

@@ -13,6 +13,7 @@ func (t *Tenant) IsCordoned() bool {
if v, ok := t.Labels["capsule.clastix.io/cordon"]; ok && v == "enabled" {
return true
}
return false
}
@@ -21,16 +22,19 @@ func (t *Tenant) IsFull() bool {
if t.Spec.NamespaceOptions == nil || t.Spec.NamespaceOptions.Quota == nil {
return false
}
return len(t.Status.Namespaces) >= int(*t.Spec.NamespaceOptions.Quota)
}
func (t *Tenant) AssignNamespaces(namespaces []corev1.Namespace) {
var l []string
for _, ns := range namespaces {
if ns.Status.Phase == corev1.NamespaceActive {
l = append(l, ns.GetName())
}
}
sort.Strings(l)
t.Status.Namespaces = l

View File

@@ -27,5 +27,6 @@ func GetTypeLabel(t runtime.Object) (label string, err error) {
default:
err = fmt.Errorf("type %T is not mapped to any Capsule label", v)
}
return
}

View File

@@ -11,7 +11,7 @@ const (
TenantStateCordoned tenantState = "Cordoned"
)
// Returns the observed state of the Tenant
// Returns the observed state of the Tenant.
type TenantStatus struct {
//+kubebuilder:default=Active
// The operational state of the Tenant. Possible values are "Active", "Cordoned".

View File

@@ -7,7 +7,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// TenantSpec defines the desired state of Tenant
// TenantSpec defines the desired state of Tenant.
type TenantSpec struct {
// Specifies the owners of the Tenant. Mandatory.
Owners OwnerListSpec `json:"owners"`
@@ -21,11 +21,11 @@ type TenantSpec struct {
IngressOptions IngressOptions `json:"ingressOptions,omitempty"`
// Specifies the trusted Image Registries assigned to the Tenant. Capsule assures that all Pods resources created in the Tenant can use only one of the allowed trusted registries. Optional.
ContainerRegistries *AllowedListSpec `json:"containerRegistries,omitempty"`
// Specifies the label to control the placement of pods on a given pool of worker nodes. All namesapces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
// Specifies the label to control the placement of pods on a given pool of worker nodes. All namespaces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// Specifies the NetworkPolicies assigned to the Tenant. The assigned NetworkPolicies are inherited by any namespace created in the Tenant. Optional.
NetworkPolicies NetworkPolicySpec `json:"networkPolicies,omitempty"`
// Specifies the NetworkPolicies assigned to the Tenant. The assigned NetworkPolicies are inherited by any namespace created in the Tenant. Optional.
// Specifies the resource min/max usage restrictions to the Tenant. The assigned values are inherited by any namespace created in the Tenant. Optional.
LimitRanges LimitRangesSpec `json:"limitRanges,omitempty"`
// Specifies a list of ResourceQuota resources assigned to the Tenant. The assigned values are inherited by any namespace created in the Tenant. The Capsule operator aggregates ResourceQuota at Tenant level, so that the hard quota is never crossed for the given Tenant. This permits the Tenant owner to consume resources in the Tenant regardless of the namespace. Optional.
ResourceQuota ResourceQuotaSpec `json:"resourceQuotas,omitempty"`
@@ -47,7 +47,7 @@ type TenantSpec struct {
// +kubebuilder:printcolumn:name="Node selector",type="string",JSONPath=".spec.nodeSelector",description="Node Selector applied to Pods"
// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp",description="Age"
// Tenant is the Schema for the tenants API
// Tenant is the Schema for the tenants API.
type Tenant struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
@@ -60,7 +60,7 @@ func (t *Tenant) Hub() {}
//+kubebuilder:object:root=true
// TenantList contains a list of Tenant
// TenantList contains a list of Tenant.
type TenantList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`

View File

@@ -1,3 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
// Copyright 2020-2021 Clastix Labs
@@ -268,6 +269,21 @@ func (in *NetworkPolicySpec) DeepCopy() *NetworkPolicySpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NonLimitedResourceError) DeepCopyInto(out *NonLimitedResourceError) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NonLimitedResourceError.
func (in *NonLimitedResourceError) DeepCopy() *NonLimitedResourceError {
if in == nil {
return nil
}
out := new(NonLimitedResourceError)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in OwnerListSpec) DeepCopyInto(out *OwnerListSpec) {
{

View File

@@ -21,3 +21,4 @@
.idea/
*.tmproj
.vscode/
README.md.gotmpl

View File

@@ -21,8 +21,8 @@ sources:
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.1.3
version: 0.1.11
# This is the version number of the application being deployed.
# This version number should be incremented each time you make changes to the application.
appVersion: 0.1.0
appVersion: 0.1.2

charts/capsule/Makefile Normal file
View File

@@ -0,0 +1,9 @@
docs: HELMDOCS_VERSION := v1.8.1
docs: docker
@docker run --rm -v "$$(pwd):/helm-docs" -u $$(id -u) jnorwood/helm-docs:$(HELMDOCS_VERSION)
docker:
@hash docker 2>/dev/null || {\
echo "You need docker" &&\
exit 1;\
}
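The `docs` target runs [helm-docs](https://github.com/norwoodj/helm-docs) against the chart, regenerating the parameter tables in the chart README from the `README.md.gotmpl` template added later in this diff.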

View File

@@ -24,23 +24,19 @@ The Capsule Operator Chart can be used to instantly deploy the Capsule Operator
$ helm repo add clastix https://clastix.github.io/charts
2. Create the Namespace:
2. Install the Chart:
$ kubectl create namespace capsule-system
$ helm install capsule clastix/capsule -n capsule-system --create-namespace
3. Install the Chart:
$ helm install capsule clastix/capsule -n capsule-system
4. Show the status:
3. Show the status:
$ helm status capsule -n capsule-system
5. Upgrade the Chart
4. Upgrade the Chart
$ helm upgrade capsule clastix/capsule -n capsule-system
6. Uninstall the Chart
5. Uninstall the Chart
$ helm uninstall capsule -n capsule-system
@@ -58,47 +54,101 @@ The values in your overrides file `myvalues.yaml` will override their counterpar
If you only need to make minor customizations, you can specify them on the command line by using the `--set` option. For example:
$ helm install capsule capsule-helm-chart --set force_tenant_prefix=false -n capsule-system
$ helm install capsule capsule-helm-chart --set manager.options.forceTenantPrefix=false -n capsule-system
Here are the values you can override:
Parameter | Description | Default
--- | --- | ---
`manager.hostNetwork` | Specifies if the container should be started in `hostNetwork` mode. | `false`
`manager.options.logLevel` | Set the log verbosity of the controller with a value from 1 to 10.| `4`
`manager.options.forceTenantPrefix` | Boolean, enforces the Tenant owner, during Namespace creation, to name it using the selected Tenant name as prefix, separated by a dash | `false`
`manager.options.capsuleUserGroups` | Override the Capsule user groups | `[capsule.clastix.io]`
`manager.options.protectedNamespaceRegex` | If specified, disallows creation of namespaces matching the passed regexp | `null`
`manager.image.repository` | Set the image repository of the controller. | `quay.io/clastix/capsule`
`manager.image.tag` | Overrides the image tag whose default is the chart. `appVersion` | `null`
`manager.image.pullPolicy` | Set the image pull policy. | `IfNotPresent`
`manager.livenessProbe` | Configure the liveness probe using Deployment probe spec | `GET :10080/healthz`
`manager.readinessProbe` | Configure the readiness probe using Deployment probe spec | `GET :10080/readyz`
`manager.resources.requests/cpu` | Set the CPU requests assigned to the controller. | `200m`
`manager.resources.requests/memory` | Set the memory requests assigned to the controller. | `128Mi`
`manager.resources.limits/cpu` | Set the CPU limits assigned to the controller. | `200m`
`manager.resources.limits/memory` | Set the memory limits assigned to the controller. | `128Mi`
`mutatingWebhooksTimeoutSeconds` | Timeout in seconds for mutating webhooks. | `30`
`validatingWebhooksTimeoutSeconds` | Timeout in seconds for validating webhooks. | `30`
`imagePullSecrets` | Configuration for `imagePullSecrets` so that you can use a private images registry. | `[]`
`serviceAccount.create` | Specifies whether a service account should be created. | `true`
`serviceAccount.annotations` | Annotations to add to the service account. | `{}`
`serviceAccount.name` | The name of the service account to use. If not set and `serviceAccount.create=true`, a name is generated using the fullname template | `capsule`
`podAnnotations` | Annotations to add to the Capsule pod. | `{}`
`priorityClassName` | Set the priority class name of the Capsule pod. | `null`
`nodeSelector` | Set the node selector for the Capsule pod. | `{}`
`tolerations` | Set list of tolerations for the Capsule pod. | `[]`
`replicaCount` | Set the replica count for Capsule pod. | `1`
`affinity` | Set affinity rules for the Capsule pod. | `{}`
`podSecurityPolicy.enabled` | Specify if a Pod Security Policy must be created. | `false`
`serviceMonitor.enabled` | Specifies if a service monitor must be created. | `false`
`serviceMonitor.labels` | Additional labels which will be added to service monitor. | `{}`
`serviceMonitor.annotations` | Additional annotations which will be added to service monitor. | `{}`
`serviceMonitor.matchLabels` | Additional matchLabels which will be added to service monitor. | `{}`
`serviceMonitor.serviceAccount.name` | Specifies service account name for metrics scrape. | `capsule`
`serviceMonitor.serviceAccount.namespace` | Specifies service account namespace for metrics scrape. | `capsule-system`
`customLabels` | Additional labels which will be added to all resources created by the Capsule Helm chart. | `{}`
`customAnnotations` | Additional annotations which will be added to all resources created by the Capsule Helm chart. | `{}`
### General Parameters
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| affinity | object | `{}` | Set affinity rules for the Capsule pod |
| certManager.generateCertificates | bool | `false` | Specifies whether capsule webhooks certificates should be generated using cert-manager |
| customAnnotations | object | `{}` | Additional annotations which will be added to all resources created by the Capsule Helm chart |
| customLabels | object | `{}` | Additional labels which will be added to all resources created by the Capsule Helm chart |
| jobs.image.pullPolicy | string | `"IfNotPresent"` | Set the image pull policy of the helm chart job |
| jobs.image.repository | string | `"quay.io/clastix/kubectl"` | Set the image repository of the helm chart job |
| jobs.image.tag | string | `""` | Set the image tag of the helm chart job |
| mutatingWebhooksTimeoutSeconds | int | `30` | Timeout in seconds for mutating webhooks |
| nodeSelector | object | `{}` | Set the node selector for the Capsule pod |
| podAnnotations | object | `{}` | Annotations to add to the capsule pod. |
| podSecurityPolicy.enabled | bool | `false` | Specify if a Pod Security Policy must be created |
| priorityClassName | string | `""` | Set the priority class name of the Capsule pod |
| replicaCount | int | `1` | Set the replica count for capsule pod |
| serviceAccount.annotations | object | `{}` | Annotations to add to the service account. |
| serviceAccount.create | bool | `true` | Specifies whether a service account should be created. |
| serviceAccount.name | string | `"capsule"` | The name of the service account to use. If not set and `serviceAccount.create=true`, a name is generated using the fullname template |
| tls.create | bool | `true` | When cert-manager is disabled, Capsule will generate the TLS certificate for webhook and CRDs conversion. |
| tls.enableController | bool | `true` | Start the Capsule controller that injects the CA into the mutating and validating webhooks, and into the CRD as well. |
| tls.name | string | `""` | Override name of the Capsule TLS Secret name when externally managed. |
| tolerations | list | `[]` | Set list of tolerations for the Capsule pod |
| validatingWebhooksTimeoutSeconds | int | `30` | Timeout in seconds for validating webhooks |
### Manager Parameters
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| manager.hostNetwork | bool | `false` | Specifies if the container should be started in hostNetwork mode. Required in some managed Kubernetes clusters (such as AWS EKS) with a custom CNI (such as Calico), because the AWS-managed control plane cannot reach the pods' IP CIDR, leaving admission webhooks unreachable |
| manager.image.pullPolicy | string | `"IfNotPresent"` | Set the image pull policy. |
| manager.image.repository | string | `"clastix/capsule"` | Set the image repository of the capsule. |
| manager.image.tag | string | `""` | Overrides the image tag whose default is the chart appVersion. |
| manager.imagePullSecrets | list | `[]` | Configuration for `imagePullSecrets` so that you can use a private images registry. |
| manager.kind | string | `"Deployment"` | Set the controller deployment mode as `Deployment` or `DaemonSet`. |
| manager.livenessProbe | object | `{"httpGet":{"path":"/healthz","port":10080}}` | Configure the liveness probe using Deployment probe spec |
| manager.options.capsuleUserGroups | list | `["capsule.clastix.io"]` | Override the Capsule user groups |
| manager.options.forceTenantPrefix | bool | `false` | Boolean, enforces the Tenant owner, during Namespace creation, to name it using the selected Tenant name as prefix, separated by a dash |
| manager.options.generateCertificates | bool | `true` | Specifies whether capsule webhooks certificates should be generated by capsule operator |
| manager.options.logLevel | string | `"4"` | Set the log verbosity of the capsule with a value from 1 to 10 |
| manager.options.protectedNamespaceRegex | string | `""` | If specified, disallows creation of namespaces matching the passed regexp |
| manager.readinessProbe | object | `{"httpGet":{"path":"/readyz","port":10080}}` | Configure the readiness probe using Deployment probe spec |
| manager.resources.limits.cpu | string | `"200m"` | |
| manager.resources.limits.memory | string | `"128Mi"` | |
| manager.resources.requests.cpu | string | `"200m"` | |
| manager.resources.requests.memory | string | `"128Mi"` | |
### ServiceMonitor Parameters
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| serviceMonitor.annotations | object | `{}` | Assign additional Annotations |
| serviceMonitor.enabled | bool | `false` | Enable ServiceMonitor |
| serviceMonitor.endpoint.interval | string | `"15s"` | Set the scrape interval for the endpoint of the serviceMonitor |
| serviceMonitor.endpoint.metricRelabelings | list | `[]` | Set metricRelabelings for the endpoint of the serviceMonitor |
| serviceMonitor.endpoint.relabelings | list | `[]` | Set relabelings for the endpoint of the serviceMonitor |
| serviceMonitor.endpoint.scrapeTimeout | string | `""` | Set the scrape timeout for the endpoint of the serviceMonitor |
| serviceMonitor.labels | object | `{}` | Assign additional labels according to Prometheus' serviceMonitorSelector matching labels |
| serviceMonitor.matchLabels | object | `{}` | Change matching labels |
| serviceMonitor.namespace | string | `""` | Install the ServiceMonitor into a different namespace, such as the monitoring stack's one (default: the release namespace) |
| serviceMonitor.serviceAccount.name | string | `"capsule"` | ServiceAccount for Metrics RBAC |
| serviceMonitor.serviceAccount.namespace | string | `"capsule-system"` | ServiceAccount Namespace for Metrics RBAC |
| serviceMonitor.targetLabels | list | `[]` | Set targetLabels for the serviceMonitor |
### Webhook Parameters
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| webhooks.cordoning.failurePolicy | string | `"Fail"` | |
| webhooks.cordoning.namespaceSelector.matchExpressions[0].key | string | `"capsule.clastix.io/tenant"` | |
| webhooks.cordoning.namespaceSelector.matchExpressions[0].operator | string | `"Exists"` | |
| webhooks.ingresses.failurePolicy | string | `"Fail"` | |
| webhooks.ingresses.namespaceSelector.matchExpressions[0].key | string | `"capsule.clastix.io/tenant"` | |
| webhooks.ingresses.namespaceSelector.matchExpressions[0].operator | string | `"Exists"` | |
| webhooks.namespaceOwnerReference.failurePolicy | string | `"Fail"` | |
| webhooks.namespaces.failurePolicy | string | `"Fail"` | |
| webhooks.networkpolicies.failurePolicy | string | `"Fail"` | |
| webhooks.networkpolicies.namespaceSelector.matchExpressions[0].key | string | `"capsule.clastix.io/tenant"` | |
| webhooks.networkpolicies.namespaceSelector.matchExpressions[0].operator | string | `"Exists"` | |
| webhooks.nodes.failurePolicy | string | `"Fail"` | |
| webhooks.persistentvolumeclaims.failurePolicy | string | `"Fail"` | |
| webhooks.persistentvolumeclaims.namespaceSelector.matchExpressions[0].key | string | `"capsule.clastix.io/tenant"` | |
| webhooks.persistentvolumeclaims.namespaceSelector.matchExpressions[0].operator | string | `"Exists"` | |
| webhooks.pods.failurePolicy | string | `"Fail"` | |
| webhooks.pods.namespaceSelector.matchExpressions[0].key | string | `"capsule.clastix.io/tenant"` | |
| webhooks.pods.namespaceSelector.matchExpressions[0].operator | string | `"Exists"` | |
| webhooks.services.failurePolicy | string | `"Fail"` | |
| webhooks.services.namespaceSelector.matchExpressions[0].key | string | `"capsule.clastix.io/tenant"` | |
| webhooks.services.namespaceSelector.matchExpressions[0].operator | string | `"Exists"` | |
| webhooks.tenants.failurePolicy | string | `"Fail"` | |
## Created resources
@@ -110,6 +160,7 @@ This Helm Chart creates the following Kubernetes resources in the release namesp
* CA Secret
* Certificate Secret
* Tenant Custom Resource Definition
* CapsuleConfiguration Custom Resource Definition
* MutatingWebHookConfiguration
* ValidatingWebHookConfiguration
* RBAC Cluster Roles
@@ -127,6 +178,34 @@ And optionally, depending on the values set:
Capsule, like many other add-ons, defines its own set of Custom Resource Definitions (CRDs). Helm 3 removed the old CRDs installation method in favour of a simpler approach: the chart now ships its CRDs in a special directory called `crds`. These CRDs are not templated, but are installed by default when running `helm install` for the chart. If the CRDs already exist (for example, because you already executed `helm install`), their installation is skipped with a warning. If you want to skip the CRDs installation, and not see the warning, you can pass the `--skip-crds` flag to the `helm install` command.
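For example, to install the chart without its bundled CRDs:
$ helm install capsule clastix/capsule -n capsule-system --skip-crds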
## Cert-Manager integration
You can enable the generation of certificates using `cert-manager` as follows.
```
helm upgrade --install capsule clastix/capsule --namespace capsule-system --create-namespace \
--set "certManager.generateCertificates=true" \
--set "tls.create=false" \
--set "tls.enableController=false"
```
With the usage of `tls.enableController=false` value, you're delegating the injection of the Validating and Mutating Webhooks' CA to `cert-manager`.
Since Helm 3 doesn't allow templating _CRDs_, you have to manually patch the Custom Resource Definition `tenants.capsule.clastix.io`, adding the proper annotation (YMMV).
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.5.0
cert-manager.io/inject-ca-from: capsule-system/capsule-webhook-cert
creationTimestamp: "2022-07-22T08:32:51Z"
generation: 45
name: tenants.capsule.clastix.io
resourceVersion: "9832"
uid: 61e287df-319b-476d-88d5-bdb8dc14d4a6
```
## More
See Capsule [use cases](https://github.com/clastix/capsule/blob/master/use_cases.md) for more information about how to use Capsule.
See Capsule [tutorial](https://github.com/clastix/capsule/blob/master/docs/content/general/tutorial.md) for more information about how to use Capsule.

View File

@@ -0,0 +1,160 @@
# Deploying the Capsule Operator
Use the Capsule Operator to easily implement, manage, and maintain multitenancy and access control in Kubernetes.
## Requirements
* [Helm 3](https://github.com/helm/helm/releases) is required when installing the Capsule Operator chart. Follow Helm's official [steps](https://helm.sh/docs/intro/install/) for installing Helm on your particular operating system.
* A Kubernetes cluster 1.16+ with the following [Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) enabled:
* PodNodeSelector
* LimitRanger
* ResourceQuota
* MutatingAdmissionWebhook
* ValidatingAdmissionWebhook
* A [`kubeconfig`](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file accessing the Kubernetes cluster with cluster admin permissions.
## Quick Start
The Capsule Operator Chart can be used to instantly deploy the Capsule Operator on your Kubernetes cluster.
1. Add this repository:
$ helm repo add clastix https://clastix.github.io/charts
2. Install the Chart:
$ helm install capsule clastix/capsule -n capsule-system --create-namespace
3. Show the status:
$ helm status capsule -n capsule-system
4. Upgrade the Chart
$ helm upgrade capsule clastix/capsule -n capsule-system
5. Uninstall the Chart
$ helm uninstall capsule -n capsule-system
## Customize the installation
There are two methods for specifying overrides of values during chart installation: `--values` and `--set`.
The `--values` option is the preferred method because it allows you to keep your overrides in a YAML file, rather than specifying them all on the command line. Create a copy of the YAML file `values.yaml` and add your overrides to it.
Specify your overrides file when you install the chart:
$ helm install capsule capsule-helm-chart --values myvalues.yaml -n capsule-system
The values in your overrides file `myvalues.yaml` will override their counterparts in the chart's `values.yaml` file. Any values in `values.yaml` that weren't overridden will keep their defaults.
If you only need to make minor customizations, you can specify them on the command line by using the `--set` option. For example:
$ helm install capsule capsule-helm-chart --set manager.options.forceTenantPrefix=false -n capsule-system
Here are the values you can override:
### General Parameters
| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if not (or (hasPrefix "manager" .Key) (hasPrefix "serviceMonitor" .Key) (hasPrefix "webhook" .Key) (hasPrefix "capsule-proxy" .Key) ) }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}
### Manager Parameters
| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "manager" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}
### ServiceMonitor Parameters
| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "serviceMonitor" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}
### Webhook Parameters
| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "webhook" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}
## Created resources
This Helm Chart creates the following Kubernetes resources in the release namespace:
* Capsule Namespace
* Capsule Operator Deployment
* Capsule Service
* CA Secret
* Certificate Secret
* Tenant Custom Resource Definition
* CapsuleConfiguration Custom Resource Definition
* MutatingWebHookConfiguration
* ValidatingWebHookConfiguration
* RBAC Cluster Roles
* Metrics Service
And optionally, depending on the values set:
* Capsule ServiceAccount
* Capsule Service Monitor
* PodSecurityPolicy
* RBAC ClusterRole and RoleBinding for pod security policy
* RBAC Role and Rolebinding for metrics scrape
## Notes on installing Custom Resource Definitions with Helm3
Capsule, like many other add-ons, defines its own set of Custom Resource Definitions (CRDs). Helm 3 removed the old CRDs installation method in favour of a simpler approach: the chart now ships its CRDs in a special directory called `crds`. These CRDs are not templated, but are installed by default when running `helm install` for the chart. If the CRDs already exist (for example, because you already executed `helm install`), their installation is skipped with a warning. If you want to skip the CRDs installation, and not see the warning, you can pass the `--skip-crds` flag to the `helm install` command.
## Cert-Manager integration
You can enable the generation of certificates using `cert-manager` as follows.
```
helm upgrade --install capsule clastix/capsule --namespace capsule-system --create-namespace \
--set "certManager.generateCertificates=true" \
--set "tls.create=false" \
--set "tls.enableController=false"
```
With the usage of `tls.enableController=false` value, you're delegating the injection of the Validating and Mutating Webhooks' CA to `cert-manager`.
Since Helm 3 doesn't allow templating _CRDs_, you have to manually patch the Custom Resource Definition `tenants.capsule.clastix.io`, adding the proper annotation (YMMV).
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.5.0
cert-manager.io/inject-ca-from: capsule-system/capsule-webhook-cert
creationTimestamp: "2022-07-22T08:32:51Z"
generation: 45
name: tenants.capsule.clastix.io
resourceVersion: "9832"
uid: 61e287df-319b-476d-88d5-bdb8dc14d4a6
```
## More
See Capsule [tutorial](https://github.com/clastix/capsule/blob/master/docs/content/general/tutorial.md) for more information about how to use Capsule.

View File

@@ -17,7 +17,7 @@ spec:
- name: v1alpha1
schema:
openAPIV3Schema:
description: CapsuleConfiguration is the Schema for the Capsule configuration API
description: CapsuleConfiguration is the Schema for the Capsule configuration API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
@@ -28,7 +28,7 @@ spec:
metadata:
type: object
spec:
description: CapsuleConfigurationSpec defines the Capsule configuration
description: CapsuleConfigurationSpec defines the Capsule configuration.
properties:
forceTenantPrefix:
default: false

View File

@@ -7,7 +7,17 @@ metadata:
name: tenants.capsule.clastix.io
spec:
conversion:
strategy: None
strategy: Webhook
webhook:
clientConfig:
service:
name: capsule-webhook-service
namespace: capsule-system
path: /convert
port: 443
conversionReviewVersions:
- v1alpha1
- v1beta1
group: capsule.clastix.io
names:
kind: Tenant
@@ -46,7 +56,7 @@ spec:
name: v1alpha1
schema:
openAPIV3Schema:
description: Tenant is the Schema for the tenants API
description: Tenant is the Schema for the tenants API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
@@ -57,7 +67,7 @@ spec:
metadata:
type: object
spec:
description: TenantSpec defines the desired state of Tenant
description: TenantSpec defines the desired state of Tenant.
properties:
additionalRoleBindings:
items:
@@ -475,7 +485,7 @@ spec:
type: string
type: object
owner:
description: OwnerSpec defines tenant owner name and kind
description: OwnerSpec defines tenant owner name and kind.
properties:
kind:
enum:
@@ -558,7 +568,7 @@ spec:
- owner
type: object
status:
description: TenantStatus defines the observed state of Tenant
description: TenantStatus defines the observed state of Tenant.
properties:
namespaces:
items:
@@ -598,7 +608,7 @@ spec:
name: v1beta1
schema:
openAPIV3Schema:
description: Tenant is the Schema for the tenants API
description: Tenant is the Schema for the tenants API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
@@ -609,7 +619,7 @@ spec:
metadata:
type: object
spec:
description: TenantSpec defines the desired state of Tenant
description: TenantSpec defines the desired state of Tenant.
properties:
additionalRoleBindings:
description: Specifies additional RoleBindings assigned to the Tenant. Capsule will ensure that all namespaces in the Tenant always contain the RoleBinding for the given ClusterRole. Optional.
@@ -697,7 +707,7 @@ spec:
type: string
type: object
limitRanges:
description: Specifies the NetworkPolicies assigned to the Tenant. The assigned NetworkPolicies are inherited by any namespace created in the Tenant. Optional.
description: Specifies the resource min/max usage restrictions to the Tenant. The assigned values are inherited by any namespace created in the Tenant. Optional.
properties:
items:
items:
@@ -1055,7 +1065,7 @@ spec:
nodeSelector:
additionalProperties:
type: string
description: Specifies the label to control the placement of pods on a given pool of worker nodes. All namesapces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
description: Specifies the label to control the placement of pods on a given pool of worker nodes. All namespaces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
type: object
owners:
description: Specifies the owners of the Tenant. Mandatory.
@@ -1224,7 +1234,7 @@ spec:
- owners
type: object
status:
description: Returns the observed state of the Tenant
description: Returns the observed state of the Tenant.
properties:
namespaces:
description: List of namespaces assigned to the Tenant.

View File

@@ -5,7 +5,7 @@
# Check the capsule logs
$ kubectl logs -f deployment/{{ template "capsule.fullname" . }}-controller-manager -c manager -n{{ .Release.Namespace }}
$ kubectl logs -f deployment/{{ template "capsule.fullname" . }}-controller-manager -c manager -n {{ .Release.Namespace }}
- Manage this chart:

View File

@@ -65,7 +65,6 @@ ServiceAccount annotations
{{- end }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
@@ -91,30 +90,38 @@ Create the proxy fully-qualified Docker image to use
{{- printf "%s:%s" .Values.proxy.image.repository .Values.proxy.image.tag -}}
{{- end }}
{{/*
Determine the Kubernetes version to use for jobsFullyQualifiedDockerImage tag
*/}}
{{- define "capsule.jobsTagKubeVersion" -}}
{{- if contains "-eks-" .Capabilities.KubeVersion.GitVersion }}
{{- print "v" .Capabilities.KubeVersion.Major "." (.Capabilities.KubeVersion.Minor | replace "+" "") -}}
{{- else }}
{{- print "v" .Capabilities.KubeVersion.Major "." .Capabilities.KubeVersion.Minor -}}
{{- end }}
{{- end }}
{{/*
Create the jobs fully-qualified Docker image to use
*/}}
{{- define "capsule.jobsFullyQualifiedDockerImage" -}}
{{- if .Values.jobs.image.tag }}
{{- printf "%s:%s" .Values.jobs.image.repository .Values.jobs.image.tag -}}
{{- else }}
{{- printf "%s:%s" .Values.jobs.image.repository (include "capsule.jobsTagKubeVersion" .) -}}
{{- end }}
{{- end }}
{{/*
Create the Capsule Deployment name to use
Create the Capsule controller name to use
*/}}
{{- define "capsule.deploymentName" -}}
{{- define "capsule.controllerName" -}}
{{- printf "%s-controller-manager" (include "capsule.fullname" .) -}}
{{- end }}
{{/*
Create the Capsule CA Secret name to use
*/}}
{{- define "capsule.secretCaName" -}}
{{- printf "%s-ca" (include "capsule.fullname" .) -}}
{{- end }}
{{/*
Create the Capsule TLS Secret name to use
*/}}
{{- define "capsule.secretTlsName" -}}
{{- printf "%s-tls" (include "capsule.fullname" .) -}}
{{ default ( printf "%s-tls" ( include "capsule.fullname" . ) ) .Values.tls.name }}
{{- end }}

View File

@@ -1,11 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
labels:
{{- include "capsule.labels" . | nindent 4 }}
{{- with .Values.customAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "capsule.secretCaName" . }}
data:

View File

@@ -0,0 +1,36 @@
{{- if .Values.certManager.generateCertificates }}
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{ include "capsule.fullname" . }}-webhook-selfsigned
labels:
{{- include "capsule.labels" . | nindent 4 }}
{{- with .Values.customAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ include "capsule.fullname" . }}-webhook-cert
labels:
{{- include "capsule.labels" . | nindent 4 }}
{{- with .Values.customAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
dnsNames:
- {{ include "capsule.fullname" . }}-webhook-service.{{ .Release.Namespace }}.svc
- {{ include "capsule.fullname" . }}-webhook-service.{{ .Release.Namespace }}.svc.cluster.local
issuerRef:
kind: Issuer
name: {{ include "capsule.fullname" . }}-webhook-selfsigned
secretName: {{ include "capsule.secretTlsName" . }}
subject:
organizations:
- clastix.io
{{- end }}

View File

@@ -1,3 +1,4 @@
{{- if or (not .Values.certManager.generateCertificates) (.Values.tls.create) }}
apiVersion: v1
kind: Secret
metadata:
@@ -8,4 +9,4 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "capsule.secretTlsName" . }}
data:
{{- end }}

View File

@@ -4,8 +4,12 @@ metadata:
name: default
labels:
{{- include "capsule.labels" . | nindent 4 }}
{{- with .Values.customAnnotations }}
annotations:
capsule.clastix.io/mutating-webhook-configuration-name: {{ include "capsule.fullname" . }}-mutating-webhook-configuration
capsule.clastix.io/tls-secret-name: {{ include "capsule.secretTlsName" . }}
capsule.clastix.io/validating-webhook-configuration-name: {{ include "capsule.fullname" . }}-validating-webhook-configuration
capsule.clastix.io/enable-tls-configuration: "{{ .Values.tls.enableController }}"
{{- with .Values.customAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
spec:

View File

@@ -0,0 +1,88 @@
{{- if eq .Values.manager.kind "DaemonSet" }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "capsule.controllerName" . }}
labels:
{{- include "capsule.labels" . | nindent 4 }}
{{- with .Values.customAnnotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
{{- include "capsule.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "capsule.labels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "capsule.serviceAccountName" . }}
{{- if .Values.manager.hostNetwork }}
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
{{- end }}
priorityClassName: {{ .Values.priorityClassName }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: cert
secret:
defaultMode: 420
secretName: {{ include "capsule.secretTlsName" . }}
containers:
- name: manager
command:
- /manager
args:
- --enable-leader-election
- --zap-log-level={{ default 4 .Values.manager.options.logLevel }}
- --configuration-name=default
image: {{ include "capsule.managerFullyQualifiedDockerImage" . }}
imagePullPolicy: {{ .Values.manager.image.pullPolicy }}
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: webhook-server
containerPort: 9443
protocol: TCP
- name: metrics
containerPort: 8080
protocol: TCP
livenessProbe:
{{- toYaml .Values.manager.livenessProbe | nindent 12}}
readinessProbe:
{{- toYaml .Values.manager.readinessProbe | nindent 12}}
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
resources:
{{- toYaml .Values.manager.resources | nindent 12 }}
securityContext:
allowPrivilegeEscalation: false
{{- end }}

View File

@@ -1,7 +1,8 @@
{{- if eq .Values.manager.kind "Deployment" }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "capsule.deploymentName" . }}
name: {{ include "capsule.controllerName" . }}
labels:
{{- include "capsule.labels" . | nindent 4 }}
{{- with .Values.customAnnotations }}
@@ -29,6 +30,7 @@ spec:
serviceAccountName: {{ include "capsule.serviceAccountName" . }}
{{- if .Values.manager.hostNetwork }}
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
{{- end }}
priorityClassName: {{ .Values.priorityClassName }}
{{- with .Values.nodeSelector }}
@@ -47,7 +49,7 @@ spec:
- name: cert
secret:
defaultMode: 420
secretName: {{ include "capsule.fullname" . }}-tls
secretName: {{ include "capsule.secretTlsName" . }}
containers:
- name: manager
command:
@@ -82,3 +84,4 @@ spec:
{{- toYaml .Values.manager.resources | nindent 12 }}
securityContext:
allowPrivilegeEscalation: false
{{- end }}

View File

@@ -4,8 +4,11 @@ metadata:
name: {{ include "capsule.fullname" . }}-mutating-webhook-configuration
labels:
{{- include "capsule.labels" . | nindent 4 }}
{{- with .Values.customAnnotations }}
annotations:
{{- if .Values.certManager.generateCertificates }}
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "capsule.fullname" . }}-webhook-cert
{{- end }}
{{- with .Values.customAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
webhooks:
@@ -13,7 +16,9 @@ webhooks:
- v1
- v1beta1
clientConfig:
{{- if not .Values.certManager.generateCertificates }}
caBundle: Cg==
{{- end }}
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: {{ .Release.Namespace }}
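
The `cert-manager.io/inject-ca-from` annotation is rendered only when `certManager.generateCertificates` is set; otherwise the placeholder `caBundle: Cg==` is kept for the Capsule controller to patch at runtime. One plausible values combination for the cert-manager path (a sketch, not the only valid mix) is:

certManager:
  # cert-manager issues the webhook certificate and injects the CA
  generateCertificates: true
tls:
  # skip the chart-generated TLS Secret
  create: false
  # Capsule's own CA/TLS injection controller is not needed in this mode
  enableController: false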


@@ -1,4 +1,5 @@
{{- $cmd := "while [ -z $$(kubectl -n $NAMESPACE get secret capsule-tls -o jsonpath='{.data.tls\\\\.crt}') ];" -}}
{{- if .Values.tls.create }}
{{- $cmd := printf "while [ -z $$(kubectl -n $NAMESPACE get secret %s -o jsonpath='{.data.tls\\\\.crt}') ];" (include "capsule.secretTlsName" .) -}}
{{- $cmd = printf "%s do echo 'waiting Capsule to be up and running...' && sleep 5;" $cmd -}}
{{- $cmd = printf "%s done" $cmd -}}
apiVersion: batch/v1
@@ -25,6 +26,14 @@ spec:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: Never
containers:
- name: post-install-job
@@ -36,4 +45,5 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
serviceAccountName: {{ include "capsule.serviceAccountName" . }}
serviceAccountName: {{ include "capsule.serviceAccountName" . }}
{{- end }}
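
Once the template variables are expanded, the `$cmd` built above renders to roughly the following wait loop (a sketch assuming the default Secret name `capsule-tls`):

# while [ -z $(kubectl -n $NAMESPACE get secret capsule-tls -o jsonpath='{.data.tls\.crt}') ];
#   do echo 'waiting for Capsule to be up and running...' && sleep 5;
# done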


@@ -1,5 +1,7 @@
{{- $cmd := printf "kubectl scale deployment -n $NAMESPACE %s --replicas 0 &&" (include "capsule.deploymentName" .) -}}
{{- $cmd = printf "%s kubectl delete secret -n $NAMESPACE %s %s --ignore-not-found &&" $cmd (include "capsule.secretTlsName" .) (include "capsule.secretCaName" .) -}}
{{- $cmd := ""}}
{{- if or (.Values.tls.create) (.Values.certManager.generateCertificates) }}
{{- $cmd = printf "%s kubectl delete secret -n $NAMESPACE %s --ignore-not-found &&" $cmd (include "capsule.secretTlsName" .) -}}
{{- end }}
{{- $cmd = printf "%s kubectl delete clusterroles.rbac.authorization.k8s.io capsule-namespace-deleter capsule-namespace-provisioner --ignore-not-found &&" $cmd -}}
{{- $cmd = printf "%s kubectl delete clusterrolebindings.rbac.authorization.k8s.io capsule-namespace-deleter capsule-namespace-provisioner --ignore-not-found" $cmd -}}
apiVersion: batch/v1
@@ -26,6 +28,14 @@ spec:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: Never
containers:
- name: pre-delete-job


@@ -15,17 +15,33 @@ metadata:
{{- end }}
spec:
endpoints:
- interval: 15s
{{- with .Values.serviceMonitor.endpoint }}
- interval: {{ .interval }}
port: metrics
path: /metrics
{{- with .scrapeTimeout }}
scrapeTimeout: {{ . }}
{{- end }}
{{- with .metricRelabelings }}
metricRelabelings: {{- toYaml . | nindent 6 }}
{{- end }}
{{- with .relabelings }}
relabelings: {{- toYaml . | nindent 6 }}
{{- end }}
{{- end }}
jobLabel: app.kubernetes.io/name
{{- with .Values.serviceMonitor.targetLabels }}
targetLabels: {{- toYaml . | nindent 4 }}
{{- end }}
selector:
matchLabels:
{{- include "capsule.labels" . | nindent 6 }}
{{- with .Values.serviceMonitor.matchLabels }}
{{- toYaml . | nindent 6 }}
{{- if .Values.serviceMonitor.matchLabels }}
{{- toYaml .Values.serviceMonitor.matchLabels | nindent 6 }}
{{- else }}
{{- include "capsule.labels" . | nindent 6 }}
{{- end }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
{{- end }}
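
The endpoint is now driven entirely by values; a sketch of the corresponding knobs (defaults are shown in the values.yaml later in this diff) could look like:

serviceMonitor:
  enabled: true
  endpoint:
    # scrape interval (chart default: 15s)
    interval: 30s
    # optional; omitted from the rendered endpoint when empty
    scrapeTimeout: 10s
    metricRelabelings: []
    relabelings: []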


@@ -4,8 +4,11 @@ metadata:
name: {{ include "capsule.fullname" . }}-validating-webhook-configuration
labels:
{{- include "capsule.labels" . | nindent 4 }}
{{- with .Values.customAnnotations }}
annotations:
{{- if .Values.certManager.generateCertificates }}
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "capsule.fullname" . }}-webhook-cert
{{- end }}
{{- with .Values.customAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
webhooks:
@@ -13,7 +16,9 @@ webhooks:
- v1
- v1beta1
clientConfig:
{{- if not .Values.certManager.generateCertificates }}
caBundle: Cg==
{{- end }}
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: {{ .Release.Namespace }}
@@ -43,7 +48,9 @@ webhooks:
- v1
- v1beta1
clientConfig:
{{- if not .Values.certManager.generateCertificates }}
caBundle: Cg==
{{- end }}
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: {{ .Release.Namespace }}
@@ -74,7 +81,9 @@ webhooks:
- v1
- v1beta1
clientConfig:
{{- if not .Values.certManager.generateCertificates }}
caBundle: Cg==
{{- end }}
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: {{ .Release.Namespace }}
@@ -103,7 +112,9 @@ webhooks:
- v1
- v1beta1
clientConfig:
{{- if not .Values.certManager.generateCertificates }}
caBundle: Cg==
{{- end }}
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: {{ .Release.Namespace }}
@@ -132,7 +143,9 @@ webhooks:
- v1
- v1beta1
clientConfig:
{{- if not .Values.certManager.generateCertificates }}
caBundle: Cg==
{{- end }}
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: {{ .Release.Namespace }}
@@ -160,7 +173,9 @@ webhooks:
- v1
- v1beta1
clientConfig:
{{- if not .Values.certManager.generateCertificates }}
caBundle: Cg==
{{- end }}
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: {{ .Release.Namespace }}
@@ -186,7 +201,9 @@ webhooks:
- v1
- v1beta1
clientConfig:
{{- if not .Values.certManager.generateCertificates }}
caBundle: Cg==
{{- end }}
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: {{ .Release.Namespace }}
@@ -215,7 +232,9 @@ webhooks:
- v1
- v1beta1
clientConfig:
{{- if not .Values.certManager.generateCertificates }}
caBundle: Cg==
{{- end }}
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: {{ .Release.Namespace }}
@@ -240,3 +259,31 @@ webhooks:
scope: '*'
sideEffects: None
timeoutSeconds: {{ .Values.validatingWebhooksTimeoutSeconds }}
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
{{- if not .Values.certManager.generateCertificates }}
caBundle: Cg==
{{- end }}
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: {{ .Release.Namespace }}
path: /nodes
port: 443
failurePolicy: {{ .Values.webhooks.nodes.failurePolicy }}
name: nodes.capsule.clastix.io
matchPolicy: Exact
namespaceSelector: {}
objectSelector: {}
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- UPDATE
resources:
- nodes
sideEffects: None
timeoutSeconds: {{ .Values.validatingWebhooksTimeoutSeconds }}
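The new `nodes.capsule.clastix.io` webhook reads its failure policy from values. A hypothetical override, e.g. to avoid blocking node updates while the webhook is being rolled out, would be:

webhooks:
  nodes:
    # chart default is Fail
    failurePolicy: Ignore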


@@ -2,29 +2,59 @@
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# Secret Options
tls:
# -- Start the Capsule controller that injects the CA into the mutating and validating webhooks, as well as into the CRD conversion webhook.
enableController: true
# -- When cert-manager is disabled, Capsule generates the TLS certificate for the webhooks and for CRD conversion.
create: true
# -- Override the name of the Capsule TLS Secret when it is externally managed.
name: ""
# Manager Options
manager:
# -- Set the controller deployment mode as `Deployment` or `DaemonSet`.
kind: Deployment
image:
repository: quay.io/clastix/capsule
# -- Set the image repository of the Capsule controller.
repository: clastix/capsule
# -- Set the image pull policy.
pullPolicy: IfNotPresent
# -- Overrides the image tag whose default is the chart appVersion.
tag: ''
# Specifies if the container should be started in hostNetwork mode.
# -- Configuration for `imagePullSecrets` so that you can use a private images registry.
imagePullSecrets: []
# -- Specifies if the container should be started in hostNetwork mode.
#
# Required in some managed Kubernetes clusters (such as AWS EKS) with a custom
# CNI (such as Calico), because the AWS-managed control plane cannot reach the
# pods' IP CIDR, leaving the admission webhooks unreachable
hostNetwork: false
# Additional Capsule options
# Additional Capsule Controller Options
options:
# -- Set the log verbosity of the Capsule controller, with a value from 1 to 10
logLevel: '4'
# -- Boolean; when true, forces Tenant owners, during Namespace creation, to name the Namespace using the Tenant name as a prefix, separated by a dash
forceTenantPrefix: false
# -- Override the Capsule user groups
capsuleUserGroups: ["capsule.clastix.io"]
# -- If specified, disallows creation of namespaces matching the passed regexp
protectedNamespaceRegex: ""
# -- Specifies whether the Capsule webhook certificates should be generated by the Capsule operator
generateCertificates: true
# -- Configure the liveness probe using Deployment probe spec
livenessProbe:
httpGet:
path: /healthz
port: 10080
# -- Configure the readiness probe using Deployment probe spec
readinessProbe:
httpGet:
path: /readyz
@@ -37,46 +67,63 @@ manager:
requests:
cpu: 200m
memory: 128Mi
jobs:
image:
repository: quay.io/clastix/kubectl
pullPolicy: IfNotPresent
tag: "v1.20.7"
imagePullSecrets: []
serviceAccount:
create: true
annotations: {}
name: "capsule"
# -- Annotations to add to the capsule pod.
podAnnotations: {}
# The following annotations guarantee scheduling for critical add-on pods
# podAnnotations:
# scheduler.alpha.kubernetes.io/critical-pod: ''
# -- Set the priority class name of the Capsule pod
priorityClassName: '' #system-cluster-critical
# -- Set the node selector for the Capsule pod
nodeSelector: {}
# node-role.kubernetes.io/master: ""
# -- Set list of tolerations for the Capsule pod
tolerations: []
#- key: CriticalAddonsOnly
# operator: Exists
#- effect: NoSchedule
# key: node-role.kubernetes.io/master
# -- Set the replica count for the Capsule pod
replicaCount: 1
# -- Set affinity rules for the Capsule pod
affinity: {}
podSecurityPolicy:
# -- Specify if a Pod Security Policy must be created
enabled: false
serviceMonitor:
enabled: false
# Install the ServiceMonitor into a different Namespace, such as the one of the monitoring stack (default: the release Namespace)
namespace: ''
# Assign additional labels according to Prometheus' serviceMonitorSelector matching labels
labels: {}
jobs:
image:
# -- Set the image repository of the Helm chart jobs
repository: quay.io/clastix/kubectl
# -- Set the image pull policy of the Helm chart jobs
pullPolicy: IfNotPresent
# -- Set the image tag of the Helm chart jobs
tag: ""
# ServiceAccount
serviceAccount:
# -- Specifies whether a service account should be created.
create: true
# -- Annotations to add to the service account.
annotations: {}
matchLabels: {}
serviceAccount:
name: capsule
namespace: capsule-system
# -- The name of the service account to use. If not set and `serviceAccount.create=true`, a name is generated using the fullname template
name: "capsule"
# Additional labels
certManager:
# -- Specifies whether the Capsule webhook certificates should be generated using cert-manager
generateCertificates: false
# -- Additional labels which will be added to all resources created by the Capsule Helm chart
customLabels: {}
# Additional annotations
# -- Additional annotations which will be added to all resources created by the Capsule Helm chart
customAnnotations: {}
# Webhooks configurations
@@ -123,5 +170,39 @@ webhooks:
matchExpressions:
- key: capsule.clastix.io/tenant
operator: Exists
nodes:
failurePolicy: Fail
# -- Timeout in seconds for mutating webhooks
mutatingWebhooksTimeoutSeconds: 30
# -- Timeout in seconds for validating webhooks
validatingWebhooksTimeoutSeconds: 30
# ServiceMonitor
serviceMonitor:
# -- Enable ServiceMonitor
enabled: false
# -- Install the ServiceMonitor into a different Namespace, such as the one of the monitoring stack (default: the release Namespace)
namespace: ''
# -- Assign additional labels according to Prometheus' serviceMonitorSelector matching labels
labels: {}
# -- Assign additional annotations
annotations: {}
# -- Change matching labels
matchLabels: {}
# -- Set targetLabels for the serviceMonitor
targetLabels: []
serviceAccount:
# -- ServiceAccount for Metrics RBAC
name: capsule
# -- ServiceAccount Namespace for Metrics RBAC
namespace: capsule-system
endpoint:
# -- Set the scrape interval for the endpoint of the serviceMonitor
interval: "15s"
# -- Set the scrape timeout for the endpoint of the serviceMonitor
scrapeTimeout: ""
# -- Set metricRelabelings for the endpoint of the serviceMonitor
metricRelabelings: []
# -- Set relabelings for the endpoint of the serviceMonitor
relabelings: []
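
Putting the new TLS options together: for an externally managed certificate, a values sketch (assuming a pre-existing Secret in the release namespace carrying the usual tls.crt/tls.key keys; the Secret name below is hypothetical) would be:

tls:
  # skip Capsule's CA/TLS injection controller
  enableController: false
  # don't generate a chart-managed Secret
  create: false
  # hypothetical, externally managed Secret
  name: my-capsule-tls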


@@ -19,7 +19,7 @@ spec:
- name: v1alpha1
schema:
openAPIV3Schema:
description: CapsuleConfiguration is the Schema for the Capsule configuration API
description: CapsuleConfiguration is the Schema for the Capsule configuration API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
@@ -30,7 +30,7 @@ spec:
metadata:
type: object
spec:
description: CapsuleConfigurationSpec defines the Capsule configuration
description: CapsuleConfigurationSpec defines the Capsule configuration.
properties:
forceTenantPrefix:
default: false


@@ -46,7 +46,7 @@ spec:
name: v1alpha1
schema:
openAPIV3Schema:
description: Tenant is the Schema for the tenants API
description: Tenant is the Schema for the tenants API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
@@ -57,7 +57,7 @@ spec:
metadata:
type: object
spec:
description: TenantSpec defines the desired state of Tenant
description: TenantSpec defines the desired state of Tenant.
properties:
additionalRoleBindings:
items:
@@ -475,7 +475,7 @@ spec:
type: string
type: object
owner:
description: OwnerSpec defines tenant owner name and kind
description: OwnerSpec defines tenant owner name and kind.
properties:
kind:
enum:
@@ -558,7 +558,7 @@ spec:
- owner
type: object
status:
description: TenantStatus defines the observed state of Tenant
description: TenantStatus defines the observed state of Tenant.
properties:
namespaces:
items:
@@ -598,7 +598,7 @@ spec:
name: v1beta1
schema:
openAPIV3Schema:
description: Tenant is the Schema for the tenants API
description: Tenant is the Schema for the tenants API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
@@ -609,7 +609,7 @@ spec:
metadata:
type: object
spec:
description: TenantSpec defines the desired state of Tenant
description: TenantSpec defines the desired state of Tenant.
properties:
additionalRoleBindings:
description: Specifies additional RoleBindings assigned to the Tenant. Capsule will ensure that all namespaces in the Tenant always contain the RoleBinding for the given ClusterRole. Optional.
@@ -697,7 +697,7 @@ spec:
type: string
type: object
limitRanges:
description: Specifies the NetworkPolicies assigned to the Tenant. The assigned NetworkPolicies are inherited by any namespace created in the Tenant. Optional.
description: Specifies the resource min/max usage restrictions to the Tenant. The assigned values are inherited by any namespace created in the Tenant. Optional.
properties:
items:
items:
@@ -1055,7 +1055,7 @@ spec:
nodeSelector:
additionalProperties:
type: string
description: Specifies the label to control the placement of pods on a given pool of worker nodes. All namesapces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
description: Specifies the label to control the placement of pods on a given pool of worker nodes. All namespaces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
type: object
owners:
description: Specifies the owners of the Tenant. Mandatory.
@@ -1224,7 +1224,7 @@ spec:
- owners
type: object
status:
description: Returns the observed state of the Tenant
description: Returns the observed state of the Tenant.
properties:
namespaces:
description: List of namespaces assigned to the Tenant.

File diff suppressed because it is too large.


@@ -0,0 +1,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
labels:
# label selector used by Grafana to load the dashboards from Config Maps
grafana_dashboard: "1"
name: capsule-grafana-dashboard


@@ -0,0 +1,8 @@
configMapGenerator:
- name: capsule-grafana-dashboard
files:
- dashboard.json
generatorOptions:
disableNameSuffixHash: true
patchesStrategicMerge:
- dashboard.yaml
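
Rendered with `kustomize build`, the generator and the strategic-merge patch above should produce roughly the following ConfigMap (dashboard body elided; `disableNameSuffixHash` keeps the name stable so the patch can match it):

apiVersion: v1
kind: ConfigMap
metadata:
  name: capsule-grafana-dashboard
  labels:
    # label selector Grafana uses to discover dashboards from ConfigMaps
    grafana_dashboard: "1"
data:
  dashboard.json: |
    ...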


@@ -24,7 +24,7 @@ spec:
- name: v1alpha1
schema:
openAPIV3Schema:
description: CapsuleConfiguration is the Schema for the Capsule configuration API
description: CapsuleConfiguration is the Schema for the Capsule configuration API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
@@ -35,7 +35,7 @@ spec:
metadata:
type: object
spec:
description: CapsuleConfigurationSpec defines the Capsule configuration
description: CapsuleConfigurationSpec defines the Capsule configuration.
properties:
forceTenantPrefix:
default: false
@@ -118,7 +118,7 @@ spec:
name: v1alpha1
schema:
openAPIV3Schema:
description: Tenant is the Schema for the tenants API
description: Tenant is the Schema for the tenants API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
@@ -129,7 +129,7 @@ spec:
metadata:
type: object
spec:
description: TenantSpec defines the desired state of Tenant
description: TenantSpec defines the desired state of Tenant.
properties:
additionalRoleBindings:
items:
@@ -547,7 +547,7 @@ spec:
type: string
type: object
owner:
description: OwnerSpec defines tenant owner name and kind
description: OwnerSpec defines tenant owner name and kind.
properties:
kind:
enum:
@@ -630,7 +630,7 @@ spec:
- owner
type: object
status:
description: TenantStatus defines the observed state of Tenant
description: TenantStatus defines the observed state of Tenant.
properties:
namespaces:
items:
@@ -670,7 +670,7 @@ spec:
name: v1beta1
schema:
openAPIV3Schema:
description: Tenant is the Schema for the tenants API
description: Tenant is the Schema for the tenants API.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
@@ -681,7 +681,7 @@ spec:
metadata:
type: object
spec:
description: TenantSpec defines the desired state of Tenant
description: TenantSpec defines the desired state of Tenant.
properties:
additionalRoleBindings:
description: Specifies additional RoleBindings assigned to the Tenant. Capsule will ensure that all namespaces in the Tenant always contain the RoleBinding for the given ClusterRole. Optional.
@@ -769,7 +769,7 @@ spec:
type: string
type: object
limitRanges:
description: Specifies the NetworkPolicies assigned to the Tenant. The assigned NetworkPolicies are inherited by any namespace created in the Tenant. Optional.
description: Specifies the resource min/max usage restrictions to the Tenant. The assigned values are inherited by any namespace created in the Tenant. Optional.
properties:
items:
items:
@@ -1127,7 +1127,7 @@ spec:
nodeSelector:
additionalProperties:
type: string
description: Specifies the label to control the placement of pods on a given pool of worker nodes. All namesapces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
description: Specifies the label to control the placement of pods on a given pool of worker nodes. All namespaces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
type: object
owners:
description: Specifies the owners of the Tenant. Mandatory.
@@ -1296,7 +1296,7 @@ spec:
- owners
type: object
status:
description: Returns the observed state of the Tenant
description: Returns the observed state of the Tenant.
properties:
namespaces:
description: List of namespaces assigned to the Tenant.
@@ -1411,7 +1411,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: quay.io/clastix/capsule:v0.1.1-rc0
image: clastix/capsule:v0.1.2
imagePullPolicy: IfNotPresent
name: manager
ports:
@@ -1582,6 +1582,29 @@ webhooks:
- networkpolicies
scope: Namespaced
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: capsule-webhook-service
namespace: capsule-system
path: /nodes
failurePolicy: Fail
name: nodes.capsule.clastix.io
namespaceSelector:
matchExpressions:
- key: capsule.clastix.io/tenant
operator: Exists
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- UPDATE
resources:
- nodes
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:


@@ -6,5 +6,5 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: controller
newName: quay.io/clastix/capsule
newTag: v0.1.1-rc0
newName: clastix/capsule
newTag: v0.1.2


@@ -118,6 +118,25 @@ webhooks:
resources:
- networkpolicies
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /nodes
failurePolicy: Fail
name: nodes.capsule.clastix.io
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- UPDATE
resources:
- nodes
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:


@@ -34,6 +34,12 @@
matchExpressions:
- key: capsule.clastix.io/tenant
operator: Exists
- op: add
path: /webhooks/7/namespaceSelector
value:
matchExpressions:
- key: capsule.clastix.io/tenant
operator: Exists
- op: add
path: /webhooks/0/rules/0/scope
value: Namespaced
@@ -43,12 +49,12 @@
- op: add
path: /webhooks/3/rules/0/scope
value: Namespaced
- op: add
path: /webhooks/4/rules/0/scope
value: Namespaced
- op: add
path: /webhooks/5/rules/0/scope
value: Namespaced
- op: add
path: /webhooks/6/rules/0/scope
value: Namespaced
- op: add
path: /webhooks/7/rules/0/scope
value: Namespaced


@@ -9,13 +9,11 @@ import (
"github.com/go-logr/logr"
"github.com/pkg/errors"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
capsulev1alpha1 "github.com/clastix/capsule/api/v1alpha1"
"github.com/clastix/capsule/controllers/utils"
"github.com/clastix/capsule/pkg/configuration"
)
@@ -24,44 +22,23 @@ type Manager struct {
Client client.Client
}
// InjectClient injects the Client interface, required by the Runnable interface
// InjectClient injects the Client interface, required by the Runnable interface.
func (c *Manager) InjectClient(client client.Client) error {
c.Client = client
return nil
}
func filterByName(objName, desired string) bool {
return objName == desired
}
func forOptionPerInstanceName(instanceName string) builder.ForOption {
return builder.WithPredicates(predicate.Funcs{
CreateFunc: func(event event.CreateEvent) bool {
return filterByName(event.Object.GetName(), instanceName)
},
DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
return filterByName(deleteEvent.Object.GetName(), instanceName)
},
UpdateFunc: func(updateEvent event.UpdateEvent) bool {
return filterByName(updateEvent.ObjectNew.GetName(), instanceName)
},
GenericFunc: func(genericEvent event.GenericEvent) bool {
return filterByName(genericEvent.Object.GetName(), instanceName)
},
})
}
func (c *Manager) SetupWithManager(mgr ctrl.Manager, configurationName string) error {
return ctrl.NewControllerManagedBy(mgr).
For(&capsulev1alpha1.CapsuleConfiguration{}, forOptionPerInstanceName(configurationName)).
For(&capsulev1alpha1.CapsuleConfiguration{}, utils.NamesMatchingPredicate(configurationName)).
Complete(c)
}
func (c *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res reconcile.Result, err error) {
c.Log.Info("CapsuleConfiguration reconciliation started", "request.name", request.Name)
cfg := configuration.NewCapsuleConfiguration(c.Client, request.Name)
cfg := configuration.NewCapsuleConfiguration(ctx, c.Client, request.Name)
// Validating the Capsule Configuration options
if _, err = cfg.ProtectedNamespaceRegexp(); err != nil {
panic(errors.Wrap(err, "Invalid configuration for protected Namespace regex"))


@@ -23,7 +23,7 @@ var (
{
APIGroups: []string{""},
Resources: []string{"namespaces"},
Verbs: []string{"create"},
Verbs: []string{"create", "patch"},
},
},
},
@@ -35,7 +35,7 @@ var (
{
APIGroups: []string{""},
Resources: []string{"namespaces"},
Verbs: []string{"delete", "patch"},
Verbs: []string{"delete"},
},
},
},
@@ -48,7 +48,7 @@ var (
RoleRef: rbacv1.RoleRef{
Kind: "ClusterRole",
Name: ProvisionerRoleName,
APIGroup: "rbac.authorization.k8s.io",
APIGroup: rbacv1.GroupName,
},
}
)


@@ -10,20 +10,19 @@ import (
"github.com/go-logr/logr"
"github.com/hashicorp/go-multierror"
rbacv1 "k8s.io/api/rbac/v1"
"k8s.io/apimachinery/pkg/api/errors"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/util/workqueue"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
capsulev1alpha1 "github.com/clastix/capsule/api/v1alpha1"
"github.com/clastix/capsule/controllers/utils"
"github.com/clastix/capsule/pkg/configuration"
)
@@ -33,65 +32,40 @@ type Manager struct {
Configuration configuration.Configuration
}
// InjectClient injects the Client interface, required by the Runnable interface
// InjectClient injects the Client interface, required by the Runnable interface.
func (r *Manager) InjectClient(c client.Client) error {
r.Client = c
return nil
}
func (r *Manager) filterByNames(name string) bool {
return name == ProvisionerRoleName || name == DeleterRoleName
}
func (r *Manager) SetupWithManager(ctx context.Context, mgr ctrl.Manager, configurationName string) (err error) {
namesPredicate := utils.NamesMatchingPredicate(ProvisionerRoleName, DeleterRoleName)
//nolint:dupl
func (r *Manager) SetupWithManager(mgr ctrl.Manager, configurationName string) (err error) {
crErr := ctrl.NewControllerManagedBy(mgr).
For(&rbacv1.ClusterRole{}, builder.WithPredicates(predicate.Funcs{
CreateFunc: func(event event.CreateEvent) bool {
return r.filterByNames(event.Object.GetName())
},
DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
return r.filterByNames(deleteEvent.Object.GetName())
},
UpdateFunc: func(updateEvent event.UpdateEvent) bool {
return r.filterByNames(updateEvent.ObjectNew.GetName())
},
GenericFunc: func(genericEvent event.GenericEvent) bool {
return r.filterByNames(genericEvent.Object.GetName())
},
})).
For(&rbacv1.ClusterRole{}, namesPredicate).
Complete(r)
if crErr != nil {
err = multierror.Append(err, crErr)
}
crbErr := ctrl.NewControllerManagedBy(mgr).
For(&rbacv1.ClusterRoleBinding{}, builder.WithPredicates(predicate.Funcs{
CreateFunc: func(event event.CreateEvent) bool {
return r.filterByNames(event.Object.GetName())
},
DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
return r.filterByNames(deleteEvent.Object.GetName())
},
UpdateFunc: func(updateEvent event.UpdateEvent) bool {
return r.filterByNames(updateEvent.ObjectNew.GetName())
},
GenericFunc: func(genericEvent event.GenericEvent) bool {
return r.filterByNames(genericEvent.Object.GetName())
},
})).
For(&rbacv1.ClusterRoleBinding{}, namesPredicate).
Watches(source.NewKindWithCache(&capsulev1alpha1.CapsuleConfiguration{}, mgr.GetCache()), handler.Funcs{
UpdateFunc: func(updateEvent event.UpdateEvent, limitingInterface workqueue.RateLimitingInterface) {
if updateEvent.ObjectNew.GetName() == configurationName {
if crbErr := r.EnsureClusterRoleBindings(); crbErr != nil {
if crbErr := r.EnsureClusterRoleBindings(ctx); crbErr != nil {
r.Log.Error(err, "cannot update ClusterRoleBinding upon CapsuleConfiguration update")
}
}
},
}).
Complete(r)
if crbErr != nil {
err = multierror.Append(err, crbErr)
}
return
}
@@ -100,18 +74,19 @@ func (r *Manager) SetupWithManager(mgr ctrl.Manager, configurationName string) (
func (r *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res reconcile.Result, err error) {
switch request.Name {
case ProvisionerRoleName:
if err = r.EnsureClusterRole(ProvisionerRoleName); err != nil {
if err = r.EnsureClusterRole(ctx, ProvisionerRoleName); err != nil {
r.Log.Error(err, "Reconciliation for ClusterRole failed", "ClusterRole", ProvisionerRoleName)
break
}
if err = r.EnsureClusterRoleBindings(); err != nil {
if err = r.EnsureClusterRoleBindings(ctx); err != nil {
r.Log.Error(err, "Reconciliation for ClusterRoleBindings failed")
break
}
case DeleterRoleName:
if err = r.EnsureClusterRole(DeleterRoleName); err != nil {
if err = r.EnsureClusterRole(ctx, DeleterRoleName); err != nil {
r.Log.Error(err, "Reconciliation for ClusterRole failed", "ClusterRole", DeleterRoleName)
}
}
@@ -119,14 +94,14 @@ func (r *Manager) Reconcile(ctx context.Context, request reconcile.Request) (res
return
}
func (r *Manager) EnsureClusterRoleBindings() (err error) {
func (r *Manager) EnsureClusterRoleBindings(ctx context.Context) (err error) {
crb := &rbacv1.ClusterRoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: ProvisionerRoleName,
},
}
_, err = controllerutil.CreateOrUpdate(context.TODO(), r.Client, crb, func() (err error) {
_, err = controllerutil.CreateOrUpdate(ctx, r.Client, crb, func() (err error) {
crb.RoleRef = provisionerClusterRoleBinding.RoleRef
crb.Subjects = []rbacv1.Subject{}
@@ -144,7 +119,7 @@ func (r *Manager) EnsureClusterRoleBindings() (err error) {
return
}
func (r *Manager) EnsureClusterRole(roleName string) (err error) {
func (r *Manager) EnsureClusterRole(ctx context.Context, roleName string) (err error) {
role, ok := clusterRoles[roleName]
if !ok {
return fmt.Errorf("clusterRole %s is not mapped", roleName)
@@ -156,8 +131,9 @@ func (r *Manager) EnsureClusterRole(roleName string) (err error) {
},
}
_, err = controllerutil.CreateOrUpdate(context.TODO(), r.Client, clusterRole, func() error {
_, err = controllerutil.CreateOrUpdate(ctx, r.Client, clusterRole, func() error {
clusterRole.Rules = role.Rules
return nil
})
@@ -170,8 +146,9 @@ func (r *Manager) EnsureClusterRole(roleName string) (err error) {
func (r *Manager) Start(ctx context.Context) error {
for roleName := range clusterRoles {
r.Log.Info("setting up ClusterRoles", "ClusterRole", roleName)
if err := r.EnsureClusterRole(roleName); err != nil {
if errors.IsAlreadyExists(err) {
if err := r.EnsureClusterRole(ctx, roleName); err != nil {
if apierrors.IsAlreadyExists(err) {
continue
}
@@ -180,8 +157,9 @@ func (r *Manager) Start(ctx context.Context) error {
}
r.Log.Info("setting up ClusterRoleBindings")
if err := r.EnsureClusterRoleBindings(); err != nil {
if errors.IsAlreadyExists(err) {
if err := r.EnsureClusterRoleBindings(ctx); err != nil {
if apierrors.IsAlreadyExists(err) {
return nil
}

View File

@@ -1,212 +0,0 @@
// Copyright 2020-2021 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package secret
import (
"bytes"
"context"
"errors"
"time"
"github.com/go-logr/logr"
"golang.org/x/sync/errgroup"
admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
corev1 "k8s.io/api/core/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/util/retry"
"k8s.io/utils/pointer"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"github.com/clastix/capsule/pkg/cert"
)
type CAReconciler struct {
client.Client
Log logr.Logger
Scheme *runtime.Scheme
Namespace string
}
func (r *CAReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&corev1.Secret{}, forOptionPerInstanceName(caSecretName)).
Complete(r)
}
// By default, Helm doesn't allow using templates in CRDs (https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#method-1-let-helm-do-it-for-you).
// To overcome this, we set the conversion strategy in the Helm chart to None, and then update it with the CA and namespace information.
func (r *CAReconciler) UpdateCustomResourceDefinition(caBundle []byte) error {
return retry.RetryOnConflict(retry.DefaultBackoff, func() (err error) {
crd := &apiextensionsv1.CustomResourceDefinition{}
err = r.Get(context.TODO(), types.NamespacedName{Name: "tenants.capsule.clastix.io"}, crd)
if err != nil {
r.Log.Error(err, "cannot retrieve CustomResourceDefinition")
return err
}
_, err = controllerutil.CreateOrUpdate(context.TODO(), r.Client, crd, func() error {
crd.Spec.Conversion = &apiextensionsv1.CustomResourceConversion{
Strategy: "Webhook",
Webhook: &apiextensionsv1.WebhookConversion{
ClientConfig: &apiextensionsv1.WebhookClientConfig{
Service: &apiextensionsv1.ServiceReference{
Namespace: r.Namespace,
Name: "capsule-webhook-service",
Path: pointer.StringPtr("/convert"),
Port: pointer.Int32Ptr(443),
},
CABundle: caBundle,
},
ConversionReviewVersions: []string{"v1alpha1", "v1beta1"},
},
}
return nil
})
return err
})
}
//nolint:dupl
func (r CAReconciler) UpdateValidatingWebhookConfiguration(caBundle []byte) error {
return retry.RetryOnConflict(retry.DefaultBackoff, func() (err error) {
vw := &admissionregistrationv1.ValidatingWebhookConfiguration{}
err = r.Get(context.TODO(), types.NamespacedName{Name: "capsule-validating-webhook-configuration"}, vw)
if err != nil {
r.Log.Error(err, "cannot retrieve ValidatingWebhookConfiguration")
return err
}
for i, w := range vw.Webhooks {
// Updating CABundle only in case of an internal service reference
if w.ClientConfig.Service != nil {
vw.Webhooks[i].ClientConfig.CABundle = caBundle
}
}
return r.Update(context.TODO(), vw, &client.UpdateOptions{})
})
}
//nolint:dupl
func (r CAReconciler) UpdateMutatingWebhookConfiguration(caBundle []byte) error {
return retry.RetryOnConflict(retry.DefaultBackoff, func() (err error) {
mw := &admissionregistrationv1.MutatingWebhookConfiguration{}
err = r.Get(context.TODO(), types.NamespacedName{Name: "capsule-mutating-webhook-configuration"}, mw)
if err != nil {
r.Log.Error(err, "cannot retrieve MutatingWebhookConfiguration")
return err
}
for i, w := range mw.Webhooks {
// Updating CABundle only in case of an internal service reference
if w.ClientConfig.Service != nil {
mw.Webhooks[i].ClientConfig.CABundle = caBundle
}
}
return r.Update(context.TODO(), mw, &client.UpdateOptions{})
})
}
func (r CAReconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl.Result, error) {
var err error
r.Log = r.Log.WithValues("Request.Namespace", request.Namespace, "Request.Name", request.Name)
r.Log.Info("Reconciling CA Secret")
// Fetch the CA instance
instance := &corev1.Secret{}
err = r.Client.Get(context.TODO(), request.NamespacedName, instance)
if err != nil {
// Error reading the object - requeue the request.
return reconcile.Result{}, err
}
var ca cert.CA
var rq time.Duration
ca, err = getCertificateAuthority(r.Client, r.Namespace)
if err != nil && errors.Is(err, MissingCaError{}) {
ca, err = cert.GenerateCertificateAuthority()
if err != nil {
return reconcile.Result{}, err
}
} else if err != nil {
return reconcile.Result{}, err
}
r.Log.Info("Handling CA Secret")
rq, err = ca.ExpiresIn(time.Now())
if err != nil {
r.Log.Info("CA is expired, cleaning to obtain a new one")
instance.Data = map[string][]byte{}
} else {
r.Log.Info("Updating CA secret with new PEM and RSA")
var crt *bytes.Buffer
var key *bytes.Buffer
crt, _ = ca.CACertificatePem()
key, _ = ca.CAPrivateKeyPem()
instance.Data = map[string][]byte{
certSecretKey: crt.Bytes(),
privateKeySecretKey: key.Bytes(),
}
group := new(errgroup.Group)
group.Go(func() error {
return r.UpdateMutatingWebhookConfiguration(crt.Bytes())
})
group.Go(func() error {
return r.UpdateValidatingWebhookConfiguration(crt.Bytes())
})
group.Go(func() error {
return r.UpdateCustomResourceDefinition(crt.Bytes())
})
if err = group.Wait(); err != nil {
return reconcile.Result{}, err
}
}
var res controllerutil.OperationResult
t := &corev1.Secret{ObjectMeta: instance.ObjectMeta}
res, err = controllerutil.CreateOrUpdate(context.TODO(), r.Client, t, func() error {
t.Data = instance.Data
return nil
})
if err != nil {
r.Log.Error(err, "cannot update Capsule TLS")
return reconcile.Result{}, err
}
if res == controllerutil.OperationResultUpdated {
r.Log.Info("Capsule CA has been updated, we need to trigger TLS update too")
tls := &corev1.Secret{}
err = r.Get(ctx, types.NamespacedName{
Namespace: r.Namespace,
Name: tlsSecretName,
}, tls)
if err != nil {
r.Log.Error(err, "Capsule TLS Secret missing")
}
err = retry.RetryOnConflict(retry.DefaultBackoff, func() error {
_, err = controllerutil.CreateOrUpdate(ctx, r.Client, tls, func() error {
tls.Data = map[string][]byte{}
return nil
})
return err
})
if err != nil {
r.Log.Error(err, "Cannot clean Capsule TLS Secret due to CA update")
return reconcile.Result{}, err
}
}
r.Log.Info("Reconciliation completed, processing back in " + rq.String())
return reconcile.Result{Requeue: true, RequeueAfter: rq}, nil
}


@@ -1,12 +0,0 @@
// Copyright 2020-2021 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package secret
const (
certSecretKey = "tls.crt"
privateKeySecretKey = "tls.key"
caSecretName = "capsule-ca"
tlsSecretName = "capsule-tls"
)


@@ -1,11 +0,0 @@
// Copyright 2020-2021 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package secret
type MissingCaError struct {
}
func (MissingCaError) Error() string {
return "CA has not been created yet, please generate a new"
}


@@ -1,62 +0,0 @@
// Copyright 2020-2021 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package secret
import (
"context"
"fmt"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"github.com/clastix/capsule/pkg/cert"
)
func getCertificateAuthority(client client.Client, namespace string) (ca cert.CA, err error) {
instance := &corev1.Secret{}
err = client.Get(context.TODO(), types.NamespacedName{
Namespace: namespace,
Name: caSecretName,
}, instance)
if err != nil {
return nil, fmt.Errorf("missing secret %s, cannot reconcile", caSecretName)
}
if instance.Data == nil {
return nil, MissingCaError{}
}
ca, err = cert.NewCertificateAuthorityFromBytes(instance.Data[certSecretKey], instance.Data[privateKeySecretKey])
if err != nil {
return
}
return
}
func forOptionPerInstanceName(instanceName string) builder.ForOption {
return builder.WithPredicates(predicate.Funcs{
CreateFunc: func(event event.CreateEvent) bool {
return filterByName(event.Object.GetName(), instanceName)
},
DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
return filterByName(deleteEvent.Object.GetName(), instanceName)
},
UpdateFunc: func(updateEvent event.UpdateEvent) bool {
return filterByName(updateEvent.ObjectNew.GetName(), instanceName)
},
GenericFunc: func(genericEvent event.GenericEvent) bool {
return filterByName(genericEvent.Object.GetName(), instanceName)
},
})
}
func filterByName(objName, desired string) bool {
return objName == desired
}


@@ -1,152 +0,0 @@
// Copyright 2020-2021 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package secret
import (
"bytes"
"context"
"crypto/x509"
"encoding/pem"
"fmt"
"os"
"time"
"github.com/go-logr/logr"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"github.com/clastix/capsule/pkg/cert"
)
type TLSReconciler struct {
client.Client
Log logr.Logger
Scheme *runtime.Scheme
Namespace string
}
func (r *TLSReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&corev1.Secret{}, forOptionPerInstanceName(tlsSecretName)).
Complete(r)
}
func (r TLSReconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl.Result, error) {
var err error
r.Log = r.Log.WithValues("Request.Namespace", request.Namespace, "Request.Name", request.Name)
r.Log.Info("Reconciling TLS Secret")
// Fetch the Secret instance
instance := &corev1.Secret{}
err = r.Get(ctx, request.NamespacedName, instance)
if err != nil {
// Error reading the object - requeue the request.
return reconcile.Result{}, err
}
var ca cert.CA
var rq time.Duration
ca, err = getCertificateAuthority(r.Client, r.Namespace)
if err != nil {
return reconcile.Result{}, err
}
var shouldCreate bool
for _, key := range []string{certSecretKey, privateKeySecretKey} {
if _, ok := instance.Data[key]; !ok {
shouldCreate = true
break
}
}
if shouldCreate {
r.Log.Info("Missing Capsule TLS certificate")
rq = 6 * 30 * 24 * time.Hour
opts := cert.NewCertOpts(time.Now().Add(rq), fmt.Sprintf("capsule-webhook-service.%s.svc", r.Namespace))
var crt, key *bytes.Buffer
crt, key, err = ca.GenerateCertificate(opts)
if err != nil {
r.Log.Error(err, "Cannot generate new TLS certificate")
return reconcile.Result{}, err
}
instance.Data = map[string][]byte{
certSecretKey: crt.Bytes(),
privateKeySecretKey: key.Bytes(),
}
} else {
var c *x509.Certificate
var b *pem.Block
b, _ = pem.Decode(instance.Data[certSecretKey])
c, err = x509.ParseCertificate(b.Bytes)
if err != nil {
r.Log.Error(err, "cannot parse Capsule TLS")
return reconcile.Result{}, err
}
rq = time.Until(c.NotAfter)
err = ca.ValidateCert(c)
if err != nil {
r.Log.Info("Capsule TLS is expired or invalid, cleaning to obtain a new one")
instance.Data = map[string][]byte{}
}
}
var res controllerutil.OperationResult
t := &corev1.Secret{ObjectMeta: instance.ObjectMeta}
res, err = controllerutil.CreateOrUpdate(ctx, r.Client, t, func() error {
t.Data = instance.Data
return nil
})
if err != nil {
r.Log.Error(err, "cannot update Capsule TLS")
return reconcile.Result{}, err
}
if instance.Name == tlsSecretName && res == controllerutil.OperationResultUpdated {
r.Log.Info("Capsule TLS certificates has been updated, Controller pods must be restarted to load new certificate")
hostname, _ := os.Hostname()
leaderPod := &corev1.Pod{}
if err = r.Client.Get(ctx, types.NamespacedName{Namespace: os.Getenv("NAMESPACE"), Name: hostname}, leaderPod); err != nil {
r.Log.Error(err, "cannot retrieve the leader Pod, probably running in out of the cluster mode")
return reconcile.Result{}, nil
}
podList := &corev1.PodList{}
if err = r.Client.List(ctx, podList, client.MatchingLabels(leaderPod.ObjectMeta.Labels)); err != nil {
r.Log.Error(err, "cannot retrieve list of Capsule pods requiring restart upon TLS update")
return reconcile.Result{}, nil
}
for _, p := range podList.Items {
nonLeaderPod := p
// Skipping this Pod, must be deleted at the end
if nonLeaderPod.GetName() == leaderPod.GetName() {
continue
}
if err = r.Client.Delete(ctx, &nonLeaderPod); err != nil {
r.Log.Error(err, "cannot delete the non-leader Pod due to TLS update")
}
}
if err = r.Client.Delete(ctx, leaderPod); err != nil {
r.Log.Error(err, "cannot delete the leader Pod due to TLS update")
}
}
r.Log.Info("Reconciliation completed, processing back in " + rq.String())
return reconcile.Result{Requeue: true, RequeueAfter: rq}, nil
}


@@ -8,15 +8,15 @@ import (
"fmt"
"github.com/go-logr/logr"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
apierr "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
@@ -27,34 +27,39 @@ type abstractServiceLabelsReconciler struct {
obj client.Object
client client.Client
log logr.Logger
scheme *runtime.Scheme
}
func (r *abstractServiceLabelsReconciler) InjectClient(c client.Client) error {
r.client = c
return nil
}
func (r *abstractServiceLabelsReconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl.Result, error) {
tenant, err := r.getTenant(ctx, request.NamespacedName, r.client)
if err != nil {
switch err.(type) {
case *NonTenantObject, *NoServicesMetadata:
if errors.As(err, &NonTenantObjectError{}) || errors.As(err, &NoServicesMetadataError{}) {
return reconcile.Result{}, nil
default:
r.log.Error(err, fmt.Sprintf("Cannot sync %t labels", r.obj))
return reconcile.Result{}, err
}
r.log.Error(err, fmt.Sprintf("Cannot sync %T %s/%s labels", r.obj, r.obj.GetNamespace(), r.obj.GetName()))
return reconcile.Result{}, err
}
err = r.client.Get(ctx, request.NamespacedName, r.obj)
if err != nil {
if apierr.IsNotFound(err) {
return reconcile.Result{}, nil
}
return reconcile.Result{}, err
}
_, err = controllerutil.CreateOrUpdate(ctx, r.client, r.obj, func() (err error) {
r.obj.SetLabels(r.sync(r.obj.GetLabels(), tenant.Spec.ServiceOptions.AdditionalMetadata.Labels))
r.obj.SetAnnotations(r.sync(r.obj.GetAnnotations(), tenant.Spec.ServiceOptions.AdditionalMetadata.Annotations))
return nil
})
@@ -97,32 +102,23 @@ func (r *abstractServiceLabelsReconciler) sync(available map[string]string, tena
}
}
}
return available
}
func (r *abstractServiceLabelsReconciler) forOptionPerInstanceName() builder.ForOption {
return builder.WithPredicates(predicate.Funcs{
CreateFunc: func(event event.CreateEvent) bool {
return r.IsNamespaceInTenant(event.Object.GetNamespace())
},
DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
return r.IsNamespaceInTenant(deleteEvent.Object.GetNamespace())
},
UpdateFunc: func(updateEvent event.UpdateEvent) bool {
return r.IsNamespaceInTenant(updateEvent.ObjectNew.GetNamespace())
},
GenericFunc: func(genericEvent event.GenericEvent) bool {
return r.IsNamespaceInTenant(genericEvent.Object.GetNamespace())
},
})
func (r *abstractServiceLabelsReconciler) forOptionPerInstanceName(ctx context.Context) builder.ForOption {
return builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
return r.IsNamespaceInTenant(ctx, object.GetNamespace())
}))
}
func (r *abstractServiceLabelsReconciler) IsNamespaceInTenant(namespace string) bool {
func (r *abstractServiceLabelsReconciler) IsNamespaceInTenant(ctx context.Context, namespace string) bool {
tl := &capsulev1beta1.TenantList{}
if err := r.client.List(context.Background(), tl, client.MatchingFieldsSelector{
if err := r.client.List(ctx, tl, client.MatchingFieldsSelector{
Selector: fields.OneTermEqualSelector(".status.namespaces", namespace),
}); err != nil {
return false
}
return len(tl.Items) > 0
}


@@ -4,6 +4,8 @@
package servicelabels
import (
"context"
"github.com/go-logr/logr"
corev1 "k8s.io/api/core/v1"
ctrl "sigs.k8s.io/controller-runtime"
@@ -15,14 +17,13 @@ type EndpointsLabelsReconciler struct {
Log logr.Logger
}
func (r *EndpointsLabelsReconciler) SetupWithManager(mgr ctrl.Manager) error {
func (r *EndpointsLabelsReconciler) SetupWithManager(ctx context.Context, mgr ctrl.Manager) error {
r.abstractServiceLabelsReconciler = abstractServiceLabelsReconciler{
obj: &corev1.Endpoints{},
scheme: mgr.GetScheme(),
log: r.Log,
obj: &corev1.Endpoints{},
log: r.Log,
}
return ctrl.NewControllerManagedBy(mgr).
For(r.abstractServiceLabelsReconciler.obj, r.abstractServiceLabelsReconciler.forOptionPerInstanceName()).
For(r.abstractServiceLabelsReconciler.obj, r.abstractServiceLabelsReconciler.forOptionPerInstanceName(ctx)).
Complete(r)
}


@@ -4,6 +4,8 @@
package servicelabels
import (
"context"
"github.com/go-logr/logr"
discoveryv1 "k8s.io/api/discovery/v1"
discoveryv1beta1 "k8s.io/api/discovery/v1beta1"
@@ -14,20 +16,19 @@ type EndpointSlicesLabelsReconciler struct {
abstractServiceLabelsReconciler
Log logr.Logger
VersionMinor int
VersionMajor int
VersionMinor uint
VersionMajor uint
}
func (r *EndpointSlicesLabelsReconciler) SetupWithManager(mgr ctrl.Manager) error {
r.scheme = mgr.GetScheme()
func (r *EndpointSlicesLabelsReconciler) SetupWithManager(ctx context.Context, mgr ctrl.Manager) error {
r.abstractServiceLabelsReconciler = abstractServiceLabelsReconciler{
scheme: mgr.GetScheme(),
log: r.Log,
log: r.Log,
}
switch {
case r.VersionMajor == 1 && r.VersionMinor <= 16:
r.Log.Info("Skipping controller setup, as EndpointSlices are not supported on current kubernetes version", "VersionMajor", r.VersionMajor, "VersionMinor", r.VersionMinor)
return nil
case r.VersionMajor == 1 && r.VersionMinor >= 21:
r.abstractServiceLabelsReconciler.obj = &discoveryv1.EndpointSlice{}
@@ -36,6 +37,6 @@ func (r *EndpointSlicesLabelsReconciler) SetupWithManager(mgr ctrl.Manager) erro
}
return ctrl.NewControllerManagedBy(mgr).
For(r.obj, r.abstractServiceLabelsReconciler.forOptionPerInstanceName()).
For(r.obj, r.abstractServiceLabelsReconciler.forOptionPerInstanceName(ctx)).
Complete(r)
}


@@ -5,26 +5,26 @@ package servicelabels
import "fmt"
type NonTenantObject struct {
type NonTenantObjectError struct {
objectName string
}
func NewNonTenantObject(objectName string) error {
return &NonTenantObject{objectName: objectName}
return &NonTenantObjectError{objectName: objectName}
}
func (n NonTenantObject) Error() string {
func (n NonTenantObjectError) Error() string {
return fmt.Sprintf("Skipping labels sync for %s as it doesn't belong to tenant", n.objectName)
}
type NoServicesMetadata struct {
type NoServicesMetadataError struct {
objectName string
}
func NewNoServicesMetadata(objectName string) error {
return &NoServicesMetadata{objectName: objectName}
return &NoServicesMetadataError{objectName: objectName}
}
func (n NoServicesMetadata) Error() string {
func (n NoServicesMetadataError) Error() string {
return fmt.Sprintf("Skipping labels sync for %s because no AdditionalLabels or AdditionalAnnotations presents in Tenant spec", n.objectName)
}


@@ -4,6 +4,8 @@
package servicelabels
import (
"context"
"github.com/go-logr/logr"
corev1 "k8s.io/api/core/v1"
ctrl "sigs.k8s.io/controller-runtime"
@@ -15,13 +17,13 @@ type ServicesLabelsReconciler struct {
Log logr.Logger
}
func (r *ServicesLabelsReconciler) SetupWithManager(mgr ctrl.Manager) error {
func (r *ServicesLabelsReconciler) SetupWithManager(ctx context.Context, mgr ctrl.Manager) error {
r.abstractServiceLabelsReconciler = abstractServiceLabelsReconciler{
obj: &corev1.Service{},
scheme: mgr.GetScheme(),
log: r.Log,
obj: &corev1.Service{},
log: r.Log,
}
return ctrl.NewControllerManagedBy(mgr).
For(r.abstractServiceLabelsReconciler.obj, r.abstractServiceLabelsReconciler.forOptionPerInstanceName()).
For(r.abstractServiceLabelsReconciler.obj, r.abstractServiceLabelsReconciler.forOptionPerInstanceName(ctx)).
Complete(r)
}


@@ -13,8 +13,9 @@ import (
capsulev1beta1 "github.com/clastix/capsule/api/v1beta1"
)
// nolint:dupl
// Ensuring all the LimitRanges are applied to each Namespace handled by the Tenant.
func (r *Manager) syncLimitRanges(tenant *capsulev1beta1.Tenant) error {
func (r *Manager) syncLimitRanges(ctx context.Context, tenant *capsulev1beta1.Tenant) error {
// getting requested LimitRange keys
keys := make([]string, 0, len(tenant.Spec.LimitRanges.Items))
@@ -28,26 +29,27 @@ func (r *Manager) syncLimitRanges(tenant *capsulev1beta1.Tenant) error {
namespace := ns
group.Go(func() error {
return r.syncLimitRange(tenant, namespace, keys)
return r.syncLimitRange(ctx, tenant, namespace, keys)
})
}
return group.Wait()
}
func (r *Manager) syncLimitRange(tenant *capsulev1beta1.Tenant, namespace string, keys []string) (err error) {
func (r *Manager) syncLimitRange(ctx context.Context, tenant *capsulev1beta1.Tenant, namespace string, keys []string) (err error) {
// getting LimitRange labels for the mutateFn
var tenantLabel, limitRangeLabel string
if tenantLabel, err = capsulev1beta1.GetTypeLabel(&capsulev1beta1.Tenant{}); err != nil {
return
}
if limitRangeLabel, err = capsulev1beta1.GetTypeLabel(&corev1.LimitRange{}); err != nil {
return
return err
}
if err = r.pruningResources(namespace, keys, &corev1.LimitRange{}); err != nil {
return
if limitRangeLabel, err = capsulev1beta1.GetTypeLabel(&corev1.LimitRange{}); err != nil {
return err
}
if err = r.pruningResources(ctx, namespace, keys, &corev1.LimitRange{}); err != nil {
return err
}
for i, spec := range tenant.Spec.LimitRanges.Items {
@@ -59,22 +61,24 @@ func (r *Manager) syncLimitRange(tenant *capsulev1beta1.Tenant, namespace string
}
var res controllerutil.OperationResult
res, err = controllerutil.CreateOrUpdate(context.TODO(), r.Client, target, func() (err error) {
res, err = controllerutil.CreateOrUpdate(ctx, r.Client, target, func() (err error) {
target.ObjectMeta.Labels = map[string]string{
tenantLabel: tenant.Name,
limitRangeLabel: strconv.Itoa(i),
}
target.Spec = spec
return controllerutil.SetControllerReference(tenant, target, r.Scheme)
return controllerutil.SetControllerReference(tenant, target, r.Client.Scheme())
})
r.emitEvent(tenant, target.GetNamespace(), res, fmt.Sprintf("Ensuring LimitRange %s", target.GetName()), err)
r.Log.Info("LimitRange sync result: "+string(res), "name", target.Name, "namespace", target.Namespace)
if err != nil {
return
return err
}
}
return
return nil
}


@@ -7,8 +7,8 @@ import (
corev1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
rbacv1 "k8s.io/api/rbac/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/retry"
ctrl "sigs.k8s.io/controller-runtime"
@@ -20,9 +20,9 @@ import (
type Manager struct {
client.Client
Log logr.Logger
Scheme *runtime.Scheme
Recorder record.EventRecorder
Log logr.Logger
Recorder record.EventRecorder
RESTConfig *rest.Config
}
func (r *Manager) SetupWithManager(mgr ctrl.Manager) error {
@@ -38,77 +38,96 @@ func (r *Manager) SetupWithManager(mgr ctrl.Manager) error {
func (r Manager) Reconcile(ctx context.Context, request ctrl.Request) (result ctrl.Result, err error) {
r.Log = r.Log.WithValues("Request.Name", request.Name)
// Fetch the Tenant instance
instance := &capsulev1beta1.Tenant{}
if err = r.Get(ctx, request.NamespacedName, instance); err != nil {
if errors.IsNotFound(err) {
if apierrors.IsNotFound(err) {
r.Log.Info("Request object not found, could have been deleted after reconcile request")
return reconcile.Result{}, nil
}
r.Log.Error(err, "Error reading the object")
return
}
// Ensuring the Tenant Status
if err = r.updateTenantStatus(instance); err != nil {
if err = r.updateTenantStatus(ctx, instance); err != nil {
r.Log.Error(err, "Cannot update Tenant status")
return
}
// Ensuring ResourceQuota
r.Log.Info("Ensuring limit resources count is updated")
if err = r.syncCustomResourceQuotaUsages(ctx, instance); err != nil {
r.Log.Error(err, "Cannot count limited resources")
return
}
// Ensuring all namespaces are collected
r.Log.Info("Ensuring all Namespaces are collected")
if err = r.collectNamespaces(instance); err != nil {
if err = r.collectNamespaces(ctx, instance); err != nil {
r.Log.Error(err, "Cannot collect Namespace resources")
return
}
// Ensuring Namespace metadata
r.Log.Info("Starting processing of Namespaces", "items", len(instance.Status.Namespaces))
if err = r.syncNamespaces(instance); err != nil {
if err = r.syncNamespaces(ctx, instance); err != nil {
r.Log.Error(err, "Cannot sync Namespace items")
return
}
// Ensuring NetworkPolicy resources
r.Log.Info("Starting processing of Network Policies")
if err = r.syncNetworkPolicies(instance); err != nil {
if err = r.syncNetworkPolicies(ctx, instance); err != nil {
r.Log.Error(err, "Cannot sync NetworkPolicy items")
return
}
// Ensuring LimitRange resources
r.Log.Info("Starting processing of Limit Ranges", "items", len(instance.Spec.LimitRanges.Items))
if err = r.syncLimitRanges(instance); err != nil {
if err = r.syncLimitRanges(ctx, instance); err != nil {
r.Log.Error(err, "Cannot sync LimitRange items")
return
}
// Ensuring ResourceQuota resources
r.Log.Info("Starting processing of Resource Quotas", "items", len(instance.Spec.ResourceQuota.Items))
if err = r.syncResourceQuotas(instance); err != nil {
if err = r.syncResourceQuotas(ctx, instance); err != nil {
r.Log.Error(err, "Cannot sync ResourceQuota items")
return
}
// Ensuring RoleBinding resources
r.Log.Info("Ensuring RoleBindings for Owners and Tenant")
if err = r.syncRoleBindings(ctx, instance); err != nil {
r.Log.Error(err, "Cannot sync RoleBindings items")
r.Log.Info("Ensuring additional RoleBindings for owner")
if err = r.syncAdditionalRoleBindings(instance); err != nil {
r.Log.Error(err, "Cannot sync additional RoleBindings items")
return
}
r.Log.Info("Ensuring RoleBinding for owner")
if err = r.ownerRoleBinding(instance); err != nil {
r.Log.Error(err, "Cannot sync owner RoleBinding")
return
}
// Ensuring Namespace count
r.Log.Info("Ensuring Namespace count")
if err = r.ensureNamespaceCount(instance); err != nil {
if err = r.ensureNamespaceCount(ctx, instance); err != nil {
r.Log.Error(err, "Cannot sync Namespace count")
return
}
r.Log.Info("Tenant reconciling completed")
return ctrl.Result{}, err
}
func (r *Manager) updateTenantStatus(tnt *capsulev1beta1.Tenant) error {
func (r *Manager) updateTenantStatus(ctx context.Context, tnt *capsulev1beta1.Tenant) error {
return retry.RetryOnConflict(retry.DefaultBackoff, func() (err error) {
if tnt.IsCordoned() {
tnt.Status.State = capsulev1beta1.TenantStateCordoned
@@ -116,6 +135,6 @@ func (r *Manager) updateTenantStatus(tnt *capsulev1beta1.Tenant) error {
tnt.Status.State = capsulev1beta1.TenantStateActive
}
return r.Client.Status().Update(context.Background(), tnt)
return r.Client.Status().Update(ctx, tnt)
})
}
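`updateTenantStatus` relies on `retry.RetryOnConflict`, which re-runs the mutation with fresh data whenever the API server rejects the write with a 409 Conflict. A hedged sketch of the same fetch-mutate-update shape against a generic object (the helper name is illustrative):

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// updatePodStatusMessage retries the status write on 409 Conflict, re-reading
// the object on each attempt so the update applies to the latest version.
func updatePodStatusMessage(ctx context.Context, c client.Client, key types.NamespacedName, msg string) error {
	return retry.RetryOnConflict(retry.DefaultBackoff, func() error {
		pod := &corev1.Pod{}
		if err := c.Get(ctx, key, pod); err != nil {
			return err
		}
		pod.Status.Message = msg
		return c.Status().Update(ctx, pod)
	})
}
```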

View File

@@ -20,37 +20,39 @@ import (
)
// Ensuring all the metadata (labels and annotations) is applied to each Namespace handled by the Tenant.
func (r *Manager) syncNamespaces(tenant *capsulev1beta1.Tenant) (err error) {
func (r *Manager) syncNamespaces(ctx context.Context, tenant *capsulev1beta1.Tenant) (err error) {
group := new(errgroup.Group)
for _, item := range tenant.Status.Namespaces {
namespace := item
group.Go(func() error {
return r.syncNamespaceMetadata(namespace, tenant)
return r.syncNamespaceMetadata(ctx, namespace, tenant)
})
}
if err = group.Wait(); err != nil {
r.Log.Error(err, "Cannot sync Namespaces")
err = fmt.Errorf("cannot sync Namespaces: %s", err.Error())
err = fmt.Errorf("cannot sync Namespaces: %w", err)
}
return
}
func (r *Manager) syncNamespaceMetadata(namespace string, tnt *capsulev1beta1.Tenant) (err error) {
// nolint:gocognit
func (r *Manager) syncNamespaceMetadata(ctx context.Context, namespace string, tnt *capsulev1beta1.Tenant) (err error) {
var res controllerutil.OperationResult
err = retry.RetryOnConflict(retry.DefaultBackoff, func() (conflictErr error) {
ns := &corev1.Namespace{}
if conflictErr = r.Client.Get(context.TODO(), types.NamespacedName{Name: namespace}, ns); err != nil {
if conflictErr = r.Client.Get(ctx, types.NamespacedName{Name: namespace}, ns); conflictErr != nil {
return
}
capsuleLabel, _ := capsulev1beta1.GetTypeLabel(&capsulev1beta1.Tenant{})
res, conflictErr = controllerutil.CreateOrUpdate(context.TODO(), r.Client, ns, func() error {
res, conflictErr = controllerutil.CreateOrUpdate(ctx, r.Client, ns, func() error {
annotations := make(map[string]string)
labels := map[string]string{
"name": namespace,
@@ -144,28 +146,28 @@ func (r *Manager) syncNamespaceMetadata(namespace string, tnt *capsulev1beta1.Te
r.emitEvent(tnt, namespace, res, "Ensuring Namespace metadata", err)
return
return err
}
func (r *Manager) ensureNamespaceCount(tenant *capsulev1beta1.Tenant) error {
func (r *Manager) ensureNamespaceCount(ctx context.Context, tenant *capsulev1beta1.Tenant) error {
return retry.RetryOnConflict(retry.DefaultBackoff, func() error {
tenant.Status.Size = uint(len(tenant.Status.Namespaces))
found := &capsulev1beta1.Tenant{}
if err := r.Client.Get(context.TODO(), types.NamespacedName{Name: tenant.GetName()}, found); err != nil {
if err := r.Client.Get(ctx, types.NamespacedName{Name: tenant.GetName()}, found); err != nil {
return err
}
found.Status.Size = tenant.Status.Size
return r.Client.Status().Update(context.TODO(), found, &client.UpdateOptions{})
return r.Client.Status().Update(ctx, found, &client.UpdateOptions{})
})
}
func (r *Manager) collectNamespaces(tenant *capsulev1beta1.Tenant) error {
func (r *Manager) collectNamespaces(ctx context.Context, tenant *capsulev1beta1.Tenant) error {
return retry.RetryOnConflict(retry.DefaultBackoff, func() (err error) {
list := &corev1.NamespaceList{}
err = r.Client.List(context.TODO(), list, client.MatchingFieldsSelector{
err = r.Client.List(ctx, list, client.MatchingFieldsSelector{
Selector: fields.OneTermEqualSelector(".metadata.ownerReferences[*].capsule", tenant.GetName()),
})
@@ -173,11 +175,12 @@ func (r *Manager) collectNamespaces(tenant *capsulev1beta1.Tenant) error {
return
}
_, err = controllerutil.CreateOrUpdate(context.TODO(), r.Client, tenant.DeepCopy(), func() error {
_, err = controllerutil.CreateOrUpdate(ctx, r.Client, tenant.DeepCopy(), func() error {
tenant.AssignNamespaces(list.Items)
return r.Client.Status().Update(context.TODO(), tenant, &client.UpdateOptions{})
return r.Client.Status().Update(ctx, tenant, &client.UpdateOptions{})
})
return
})
}
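The `MatchingFieldsSelector` on `.metadata.ownerReferences[*].capsule` only works against the cache if a matching index has been registered with the manager. A sketch of how such an index is typically registered; the extraction logic below is an assumption for illustration, not Capsule's actual indexer:

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// registerOwnerIndex indexes Namespaces by the name of their owning Tenant so
// that List calls can use a field selector on this virtual field.
func registerOwnerIndex(ctx context.Context, mgr ctrl.Manager) error {
	return mgr.GetFieldIndexer().IndexField(ctx, &corev1.Namespace{}, ".metadata.ownerReferences[*].capsule", func(obj client.Object) []string {
		var owners []string
		for _, ref := range obj.GetOwnerReferences() {
			if ref.Kind == "Tenant" { // assumption: match on the owning kind
				owners = append(owners, ref.Name)
			}
		}
		return owners
	})
}
```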

View File

@@ -13,8 +13,9 @@ import (
capsulev1beta1 "github.com/clastix/capsule/api/v1beta1"
)
// nolint:dupl
// Ensuring all the NetworkPolicies are applied to each Namespace handled by the Tenant.
func (r *Manager) syncNetworkPolicies(tenant *capsulev1beta1.Tenant) error {
func (r *Manager) syncNetworkPolicies(ctx context.Context, tenant *capsulev1beta1.Tenant) error {
// getting requested NetworkPolicy keys
keys := make([]string, 0, len(tenant.Spec.NetworkPolicies.Items))
@@ -28,26 +29,26 @@ func (r *Manager) syncNetworkPolicies(tenant *capsulev1beta1.Tenant) error {
namespace := ns
group.Go(func() error {
return r.syncNetworkPolicy(tenant, namespace, keys)
return r.syncNetworkPolicy(ctx, tenant, namespace, keys)
})
}
return group.Wait()
}
func (r *Manager) syncNetworkPolicy(tenant *capsulev1beta1.Tenant, namespace string, keys []string) (err error) {
if err = r.pruningResources(namespace, keys, &networkingv1.NetworkPolicy{}); err != nil {
return
func (r *Manager) syncNetworkPolicy(ctx context.Context, tenant *capsulev1beta1.Tenant, namespace string, keys []string) (err error) {
if err = r.pruningResources(ctx, namespace, keys, &networkingv1.NetworkPolicy{}); err != nil {
return err
}
// getting NetworkPolicy labels for the mutateFn
var tenantLabel, networkPolicyLabel string
if tenantLabel, err = capsulev1beta1.GetTypeLabel(&capsulev1beta1.Tenant{}); err != nil {
return
return err
}
if networkPolicyLabel, err = capsulev1beta1.GetTypeLabel(&networkingv1.NetworkPolicy{}); err != nil {
return
return err
}
for i, spec := range tenant.Spec.NetworkPolicies.Items {
@@ -59,14 +60,14 @@ func (r *Manager) syncNetworkPolicy(tenant *capsulev1beta1.Tenant, namespace str
}
var res controllerutil.OperationResult
res, err = controllerutil.CreateOrUpdate(context.TODO(), r.Client, target, func() (err error) {
res, err = controllerutil.CreateOrUpdate(ctx, r.Client, target, func() (err error) {
target.SetLabels(map[string]string{
tenantLabel: tenant.Name,
networkPolicyLabel: strconv.Itoa(i),
})
target.Spec = spec
return controllerutil.SetControllerReference(tenant, target, r.Scheme)
return controllerutil.SetControllerReference(tenant, target, r.Client.Scheme())
})
r.emitEvent(tenant, target.GetNamespace(), res, fmt.Sprintf("Ensuring NetworkPolicy %s", target.GetName()), err)
@@ -74,9 +75,9 @@ func (r *Manager) syncNetworkPolicy(tenant *capsulev1beta1.Tenant, namespace str
r.Log.Info("Network Policy sync result: "+string(res), "name", target.Name, "namespace", target.Namespace)
if err != nil {
return
return err
}
}
return
return nil
}
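Every per-namespace sync fans out through an `errgroup`, and the `namespace := ns` assignment re-binds the loop variable so each goroutine captures its own copy (required before Go 1.22). A minimal sketch of that fan-out shape:

```go
package example

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

// syncAll runs one worker per namespace and returns the first error, if any.
func syncAll(namespaces []string, sync func(string) error) error {
	group := new(errgroup.Group)
	for _, ns := range namespaces {
		namespace := ns // capture a fresh copy for the goroutine (pre-Go 1.22 idiom)
		group.Go(func() error {
			return sync(namespace)
		})
	}
	// Wait blocks until all goroutines finish and returns the first non-nil error.
	return group.Wait()
}

func exampleUsage() error {
	return syncAll([]string{"oil-dev", "oil-prod"}, func(ns string) error {
		fmt.Println("syncing", ns)
		return nil
	})
}
```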

View File

@@ -31,7 +31,8 @@ import (
// the mutateFn along with CreateOrUpdate so that no update is performed when the resources are identical.
//
// In case of Namespace-scoped Resource Budget, we're just replicating the resources across all registered Namespaces.
func (r *Manager) syncResourceQuotas(tenant *capsulev1beta1.Tenant) (err error) {
// nolint:gocognit
func (r *Manager) syncResourceQuotas(ctx context.Context, tenant *capsulev1beta1.Tenant) (err error) {
// getting ResourceQuota labels for the mutateFn
var tenantLabel, typeLabel string
@@ -42,7 +43,7 @@ func (r *Manager) syncResourceQuotas(tenant *capsulev1beta1.Tenant) (err error)
if typeLabel, err = capsulev1beta1.GetTypeLabel(&corev1.ResourceQuota{}); err != nil {
return err
}
// nolint:nestif
if tenant.Spec.ResourceQuota.Scope == capsulev1beta1.ResourceQuotaScopeTenant {
group := new(errgroup.Group)
@@ -67,8 +68,9 @@ func (r *Manager) syncResourceQuotas(tenant *capsulev1beta1.Tenant) (err error)
// These are required since Capsule is going to sum all the used quotas
// to compute the Tenant-wide one.
list := &corev1.ResourceQuotaList{}
if scopeErr = r.List(context.TODO(), list, &client.ListOptions{LabelSelector: labels.NewSelector().Add(*tntRequirement).Add(*indexRequirement)}); scopeErr != nil {
if scopeErr = r.List(ctx, list, &client.ListOptions{LabelSelector: labels.NewSelector().Add(*tntRequirement).Add(*indexRequirement)}); scopeErr != nil {
r.Log.Error(scopeErr, "Cannot list ResourceQuota", "tenantFilter", tntRequirement.String(), "indexFilter", indexRequirement.String())
return
}
// Iterating over all the options declared for the ResourceQuota,
@@ -116,11 +118,13 @@ func (r *Manager) syncResourceQuotas(tenant *capsulev1beta1.Tenant) (err error)
list.Items[item].Spec.Hard[name] = resourceQuota.Hard[name]
}
}
if scopeErr = r.resourceQuotasUpdate(name, quantity, resourceQuota.Hard[name], list.Items...); scopeErr != nil {
if scopeErr = r.resourceQuotasUpdate(ctx, name, quantity, resourceQuota.Hard[name], list.Items...); scopeErr != nil {
r.Log.Error(scopeErr, "cannot proceed with outer ResourceQuota")
return
}
}
return
})
}
@@ -142,14 +146,14 @@ func (r *Manager) syncResourceQuotas(tenant *capsulev1beta1.Tenant) (err error)
namespace := ns
group.Go(func() error {
return r.syncResourceQuota(tenant, namespace, keys)
return r.syncResourceQuota(ctx, tenant, namespace, keys)
})
}
return group.Wait()
}
func (r *Manager) syncResourceQuota(tenant *capsulev1beta1.Tenant, namespace string, keys []string) (err error) {
func (r *Manager) syncResourceQuota(ctx context.Context, tenant *capsulev1beta1.Tenant, namespace string, keys []string) (err error) {
// getting ResourceQuota labels for the mutateFn
var tenantLabel, typeLabel string
@@ -161,7 +165,7 @@ func (r *Manager) syncResourceQuota(tenant *capsulev1beta1.Tenant, namespace str
return err
}
// Pruning resource of non-requested resources
if err = r.pruningResources(namespace, keys, &corev1.ResourceQuota{}); err != nil {
if err = r.pruningResources(ctx, namespace, keys, &corev1.ResourceQuota{}); err != nil {
return err
}
@@ -174,8 +178,9 @@ func (r *Manager) syncResourceQuota(tenant *capsulev1beta1.Tenant, namespace str
}
var res controllerutil.OperationResult
err = retry.RetryOnConflict(retry.DefaultBackoff, func() (retryErr error) {
res, retryErr = controllerutil.CreateOrUpdate(context.TODO(), r.Client, target, func() (err error) {
res, retryErr = controllerutil.CreateOrUpdate(ctx, r.Client, target, func() (err error) {
target.SetLabels(map[string]string{
tenantLabel: tenant.Name,
typeLabel: strconv.Itoa(index),
@@ -187,7 +192,7 @@ func (r *Manager) syncResourceQuota(tenant *capsulev1beta1.Tenant, namespace str
target.Spec.Hard = resQuota.Hard
}
return controllerutil.SetControllerReference(tenant, target, r.Scheme)
return controllerutil.SetControllerReference(tenant, target, r.Client.Scheme())
})
return retryErr
@@ -208,7 +213,7 @@ func (r *Manager) syncResourceQuota(tenant *capsulev1beta1.Tenant, namespace str
// Serial ResourceQuota processing is expensive: using Go routines we can speed it up.
// In case of multiple errors, these are logged properly, returning a generic error since we have to requeue the
// reconciliation loop.
func (r *Manager) resourceQuotasUpdate(resourceName corev1.ResourceName, actual, limit resource.Quantity, list ...corev1.ResourceQuota) (err error) {
func (r *Manager) resourceQuotasUpdate(ctx context.Context, resourceName corev1.ResourceName, actual, limit resource.Quantity, list ...corev1.ResourceQuota) (err error) {
group := new(errgroup.Group)
for _, item := range list {
@@ -216,12 +221,12 @@ func (r *Manager) resourceQuotasUpdate(resourceName corev1.ResourceName, actual,
group.Go(func() (err error) {
found := &corev1.ResourceQuota{}
if err = r.Get(context.TODO(), types.NamespacedName{Namespace: rq.Namespace, Name: rq.Name}, found); err != nil {
if err = r.Get(ctx, types.NamespacedName{Namespace: rq.Namespace, Name: rq.Name}, found); err != nil {
return
}
return retry.RetryOnConflict(retry.DefaultBackoff, func() (retryErr error) {
_, retryErr = controllerutil.CreateOrUpdate(context.TODO(), r.Client, found, func() error {
_, retryErr = controllerutil.CreateOrUpdate(ctx, r.Client, found, func() error {
// Ensuring annotation map is there to avoid uninitialized map error and
// assigning the overall usage
if found.Annotations == nil {
@@ -232,6 +237,7 @@ func (r *Manager) resourceQuotasUpdate(resourceName corev1.ResourceName, actual,
found.Annotations[capsulev1beta1.HardQuotaFor(resourceName)] = limit.String()
// Updating the Resource according to the actual.Cmp result
found.Spec.Hard = rq.Spec.Hard
return nil
})
@@ -244,7 +250,7 @@ func (r *Manager) resourceQuotasUpdate(resourceName corev1.ResourceName, actual,
// We had an error and we mark the whole transaction as failed
// to process it another time according to the Tenant controller back-off factor.
r.Log.Error(err, "Cannot update outer ResourceQuotas", "resourceName", resourceName.String())
err = fmt.Errorf("update of outer ResourceQuota items has failed: %s", err.Error())
err = fmt.Errorf("update of outer ResourceQuota items has failed: %w", err)
}
return err
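For the Tenant-wide scope, the controller lists the per-Namespace `ResourceQuota` replicas by label and sums their usage before comparing it against the hard limit. A hedged sketch of just the aggregation step (the helper name is illustrative):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// tenantUsed sums the used quantity of a single resource name across all the
// ResourceQuota replicas belonging to a tenant.
func tenantUsed(name corev1.ResourceName, items []corev1.ResourceQuota) resource.Quantity {
	total := resource.NewQuantity(0, resource.DecimalSI)
	for _, rq := range items {
		if used, ok := rq.Status.Used[name]; ok {
			total.Add(used)
		}
	}
	return *total
}
```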

View File

@@ -0,0 +1,122 @@
package tenant
import (
"context"
"fmt"
"strings"
"golang.org/x/sync/errgroup"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/util/retry"
capsulev1beta1 "github.com/clastix/capsule/api/v1beta1"
)
func (r *Manager) syncCustomResourceQuotaUsages(ctx context.Context, tenant *capsulev1beta1.Tenant) error {
type resource struct {
kind string
group string
version string
}
// nolint:prealloc
var resourceList []resource
for k := range tenant.GetAnnotations() {
if !strings.HasPrefix(k, capsulev1beta1.ResourceQuotaAnnotationPrefix) {
continue
}
parts := strings.Split(k, "/")
if len(parts) != 2 {
r.Log.Info("non well-formed Resource Limit annotation", "key", k)
continue
}
parts = strings.Split(parts[1], "_")
if len(parts) != 2 {
r.Log.Info("non well-formed Resource Limit annotation, cannot retrieve version", "key", k)
continue
}
groupKindParts := strings.Split(parts[0], ".")
if len(groupKindParts) < 2 {
r.Log.Info("non well-formed Resource Limit annotation, cannot retrieve kind and group", "key", k)
continue
}
resourceList = append(resourceList, resource{
kind: groupKindParts[0],
group: strings.Join(groupKindParts[1:], "."),
version: parts[1],
})
}
errGroup := new(errgroup.Group)
usedMap := make(map[string]int)
defer func() {
for gvk, used := range usedMap {
err := retry.RetryOnConflict(retry.DefaultBackoff, func() (retryErr error) {
tnt := &capsulev1beta1.Tenant{}
if retryErr = r.Client.Get(ctx, types.NamespacedName{Name: tenant.GetName()}, tnt); retryErr != nil {
return
}
if tnt.GetAnnotations() == nil {
tnt.Annotations = make(map[string]string)
}
tnt.Annotations[capsulev1beta1.UsedAnnotationForResource(gvk)] = fmt.Sprintf("%d", used)
return r.Client.Update(ctx, tnt)
})
if err != nil {
r.Log.Error(err, "cannot update custom Resource Quota", "GVK", gvk)
}
}
}()
for _, item := range resourceList {
res := item
errGroup.Go(func() (scopeErr error) {
dynamicClient := dynamic.NewForConfigOrDie(r.RESTConfig)
for _, ns := range tenant.Status.Namespaces {
var list *unstructured.UnstructuredList
list, scopeErr = dynamicClient.Resource(schema.GroupVersionResource{Group: res.group, Version: res.version, Resource: res.kind}).List(ctx, metav1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.namespace==%s", ns),
})
if scopeErr != nil {
return scopeErr
}
key := fmt.Sprintf("%s.%s_%s", res.kind, res.group, res.version)
if _, ok := usedMap[key]; !ok {
usedMap[key] = 0
}
usedMap[key] += len(list.Items)
}
return
})
}
if err := errGroup.Wait(); err != nil {
return err
}
return nil
}
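The loop above turns annotation keys shaped like `<prefix>/<resource>.<group>_<version>` into a GroupVersionResource, then counts matching objects per namespace. A hedged sketch of the same parsing on a sample key; the prefix shown is an assumption for illustration, the real one being whatever `capsulev1beta1.ResourceQuotaAnnotationPrefix` resolves to:

```go
package example

import (
	"fmt"
	"strings"
)

// parseQuotaKey splits a hypothetical annotation key such as
// "quota.resources.capsule.clastix.io/deployments.apps_v1"
// into resource, group, and version, mirroring the controller's logic.
func parseQuotaKey(key string) (resource, group, version string, err error) {
	parts := strings.Split(key, "/")
	if len(parts) != 2 {
		return "", "", "", fmt.Errorf("malformed key %q", key)
	}
	nameAndVersion := strings.Split(parts[1], "_")
	if len(nameAndVersion) != 2 {
		return "", "", "", fmt.Errorf("missing version in %q", key)
	}
	groupKind := strings.Split(nameAndVersion[0], ".")
	if len(groupKind) < 2 {
		return "", "", "", fmt.Errorf("missing group in %q", key)
	}
	return groupKind[0], strings.Join(groupKind[1:], "."), nameAndVersion[1], nil
}

func exampleParse() {
	r, g, v, _ := parseQuotaKey("quota.resources.capsule.clastix.io/deployments.apps_v1")
	fmt.Println(r, g, v) // deployments apps v1
}
```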

View File

@@ -9,16 +9,43 @@ import (
"golang.org/x/sync/errgroup"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
capsulev1beta1 "github.com/clastix/capsule/api/v1beta1"
"github.com/clastix/capsule/controllers/rbac"
)
// Additional Role Bindings can be used in many ways: applying Pod Security Policies or giving
// access to CRDs or specific API groups.
func (r *Manager) syncAdditionalRoleBindings(tenant *capsulev1beta1.Tenant) (err error) {
// ownerClusterRoleBindings generates a Capsule AdditionalRoleBinding object for the Owner dynamic clusterrole in order
// to take advantage of the additional role binding feature.
func (r *Manager) ownerClusterRoleBindings(owner capsulev1beta1.OwnerSpec, clusterRole string) capsulev1beta1.AdditionalRoleBindingsSpec {
var subject rbacv1.Subject
if owner.Kind == "ServiceAccount" {
splitName := strings.Split(owner.Name, ":")
subject = rbacv1.Subject{
Kind: owner.Kind.String(),
Name: splitName[len(splitName)-1],
Namespace: splitName[len(splitName)-2],
}
} else {
subject = rbacv1.Subject{
APIGroup: rbacv1.GroupName,
Kind: owner.Kind.String(),
Name: owner.Name,
}
}
return capsulev1beta1.AdditionalRoleBindingsSpec{
ClusterRoleName: clusterRole,
Subjects: []rbacv1.Subject{
subject,
},
}
}
// Sync the dynamic Tenant Owner specific cluster-roles and additional Role Bindings, which can be used in many ways:
// applying Pod Security Policies or giving access to CRDs or specific API groups.
func (r *Manager) syncRoleBindings(ctx context.Context, tenant *capsulev1beta1.Tenant) (err error) {
// hashing the RoleBinding name due to DNS RFC-1123 applied to Kubernetes labels
hashFn := func(binding capsulev1beta1.AdditionalRoleBindingsSpec) string {
h := fnv.New64a()
@@ -32,7 +59,16 @@ func (r *Manager) syncAdditionalRoleBindings(tenant *capsulev1beta1.Tenant) (err
return fmt.Sprintf("%x", h.Sum64())
}
// getting requested Role Binding keys
var keys []string
keys := make([]string, 0, len(tenant.Spec.Owners))
// Generating hashes for the dynamic tenant owner cluster roles
for index, owner := range tenant.Spec.Owners {
for _, clusterRoleName := range owner.GetRoles(*tenant, index) {
cr := r.ownerClusterRoleBindings(owner, clusterRoleName)
keys = append(keys, hashFn(cr))
}
}
// Generating hash of additional role bindings
for _, i := range tenant.Spec.AdditionalRoleBindings {
keys = append(keys, hashFn(i))
}
@@ -43,14 +79,14 @@ func (r *Manager) syncAdditionalRoleBindings(tenant *capsulev1beta1.Tenant) (err
namespace := ns
group.Go(func() error {
return r.syncAdditionalRoleBinding(tenant, namespace, keys, hashFn)
return r.syncAdditionalRoleBinding(ctx, tenant, namespace, keys, hashFn)
})
}
return group.Wait()
}
func (r *Manager) syncAdditionalRoleBinding(tenant *capsulev1beta1.Tenant, ns string, keys []string, hashFn func(binding capsulev1beta1.AdditionalRoleBindingsSpec) string) (err error) {
func (r *Manager) syncAdditionalRoleBinding(ctx context.Context, tenant *capsulev1beta1.Tenant, ns string, keys []string, hashFn func(binding capsulev1beta1.AdditionalRoleBindingsSpec) string) (err error) {
var tenantLabel, roleBindingLabel string
if tenantLabel, err = capsulev1beta1.GetTypeLabel(&capsulev1beta1.Tenant{}); err != nil {
@@ -61,11 +97,21 @@ func (r *Manager) syncAdditionalRoleBinding(tenant *capsulev1beta1.Tenant, ns st
return
}
if err = r.pruningResources(ns, keys, &rbacv1.RoleBinding{}); err != nil {
if err = r.pruningResources(ctx, ns, keys, &rbacv1.RoleBinding{}); err != nil {
return
}
for i, roleBinding := range tenant.Spec.AdditionalRoleBindings {
var roleBindings []capsulev1beta1.AdditionalRoleBindingsSpec
for index, owner := range tenant.Spec.Owners {
for _, clusterRoleName := range owner.GetRoles(*tenant, index) {
roleBindings = append(roleBindings, r.ownerClusterRoleBindings(owner, clusterRoleName))
}
}
roleBindings = append(roleBindings, tenant.Spec.AdditionalRoleBindings...)
for i, roleBinding := range roleBindings {
roleBindingHashLabel := hashFn(roleBinding)
target := &rbacv1.RoleBinding{
@@ -76,27 +122,29 @@ func (r *Manager) syncAdditionalRoleBinding(tenant *capsulev1beta1.Tenant, ns st
}
var res controllerutil.OperationResult
res, err = controllerutil.CreateOrUpdate(context.TODO(), r.Client, target, func() error {
res, err = controllerutil.CreateOrUpdate(ctx, r.Client, target, func() error {
target.ObjectMeta.Labels = map[string]string{
tenantLabel: tenant.Name,
roleBindingLabel: roleBindingHashLabel,
}
target.RoleRef = rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
APIGroup: rbacv1.GroupName,
Kind: "ClusterRole",
Name: roleBinding.ClusterRoleName,
}
target.Subjects = roleBinding.Subjects
return controllerutil.SetControllerReference(tenant, target, r.Scheme)
return controllerutil.SetControllerReference(tenant, target, r.Client.Scheme())
})
r.emitEvent(tenant, target.GetNamespace(), res, fmt.Sprintf("Ensuring additional RoleBinding %s", target.GetName()), err)
r.emitEvent(tenant, target.GetNamespace(), res, fmt.Sprintf("Ensuring RoleBinding %s", target.GetName()), err)
if err != nil {
r.Log.Error(err, "Cannot sync Additional RoleBinding")
r.Log.Error(err, "Cannot sync RoleBinding")
}
r.Log.Info(fmt.Sprintf("Additional RoleBindings sync result: %s", string(res)), "name", target.Name, "namespace", target.Namespace)
r.Log.Info(fmt.Sprintf("RoleBinding sync result: %s", string(res)), "name", target.Name, "namespace", target.Namespace)
if err != nil {
return
}
@@ -104,76 +152,3 @@ func (r *Manager) syncAdditionalRoleBinding(tenant *capsulev1beta1.Tenant, ns st
return nil
}
// Each Tenant owner needs the admin Role attached to each Namespace, otherwise no actions on it can be performed.
// Since RBAC is based on deny all first, some specific actions like editing Capsule resources are going to be blocked
// via Dynamic Admission Webhooks.
// TODO(prometherion): we could create a capsule:admin role rather than hitting webhooks for each action
func (r *Manager) ownerRoleBinding(tenant *capsulev1beta1.Tenant) error {
// getting RoleBinding label for the mutateFn
var subjects []rbacv1.Subject
tl, err := capsulev1beta1.GetTypeLabel(&capsulev1beta1.Tenant{})
if err != nil {
return err
}
newLabels := map[string]string{tl: tenant.Name}
for _, owner := range tenant.Spec.Owners {
if owner.Kind == "ServiceAccount" {
splitName := strings.Split(owner.Name, ":")
subjects = append(subjects, rbacv1.Subject{
Kind: owner.Kind.String(),
Name: splitName[len(splitName)-1],
Namespace: splitName[len(splitName)-2],
})
} else {
subjects = append(subjects, rbacv1.Subject{
APIGroup: "rbac.authorization.k8s.io",
Kind: owner.Kind.String(),
Name: owner.Name,
})
}
}
list := make(map[types.NamespacedName]rbacv1.RoleRef)
for _, i := range tenant.Status.Namespaces {
list[types.NamespacedName{Namespace: i, Name: "namespace:admin"}] = rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
Kind: "ClusterRole",
Name: "admin",
}
list[types.NamespacedName{Namespace: i, Name: "namespace-deleter"}] = rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
Kind: "ClusterRole",
Name: rbac.DeleterRoleName,
}
}
for namespacedName, roleRef := range list {
target := &rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: namespacedName.Name,
Namespace: namespacedName.Namespace,
},
}
var res controllerutil.OperationResult
res, err = controllerutil.CreateOrUpdate(context.TODO(), r.Client, target, func() (err error) {
target.ObjectMeta.Labels = newLabels
target.Subjects = subjects
target.RoleRef = roleRef
return controllerutil.SetControllerReference(tenant, target, r.Scheme)
})
r.emitEvent(tenant, target.GetNamespace(), res, fmt.Sprintf("Ensuring Capsule RoleBinding %s", target.GetName()), err)
r.Log.Info("Role Binding sync result: "+string(res), "name", target.Name, "namespace", target.Namespace)
if err != nil {
return err
}
}
return nil
}
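For ServiceAccount owners, the subject is rebuilt by taking the last two `:`-separated segments of the fully qualified name as name and namespace, e.g. `system:serviceaccount:capsule-system:default`. A tiny sketch of that split:

```go
package example

import "strings"

// serviceAccountSubject extracts namespace and name from a fully qualified
// owner such as "system:serviceaccount:capsule-system:default".
func serviceAccountSubject(owner string) (namespace, name string) {
	parts := strings.Split(owner, ":")
	return parts[len(parts)-2], parts[len(parts)-1] // "capsule-system", "default"
}
```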

View File

@@ -16,8 +16,9 @@ import (
// pruningResources takes care of removing the no-longer-requested sub-resources, such as LimitRange, ResourceQuota, or
// NetworkPolicy, using the "exists" and "notin" LabelSelectors to perform an outer-join removal.
func (r *Manager) pruningResources(ns string, keys []string, obj client.Object) (err error) {
func (r *Manager) pruningResources(ctx context.Context, ns string, keys []string, obj client.Object) (err error) {
var capsuleLabel string
if capsuleLabel, err = capsulev1beta1.GetTypeLabel(obj); err != nil {
return
}
@@ -25,13 +26,16 @@ func (r *Manager) pruningResources(ns string, keys []string, obj client.Object)
selector := labels.NewSelector()
var exists *labels.Requirement
if exists, err = labels.NewRequirement(capsuleLabel, selection.Exists, []string{}); err != nil {
return
}
selector = selector.Add(*exists)
if len(keys) > 0 {
var notIn *labels.Requirement
if notIn, err = labels.NewRequirement(capsuleLabel, selection.NotIn, keys); err != nil {
return err
}
@@ -42,7 +46,7 @@ func (r *Manager) pruningResources(ns string, keys []string, obj client.Object)
r.Log.Info("Pruning objects with label selector " + selector.String())
return retry.RetryOnConflict(retry.DefaultBackoff, func() error {
return r.DeleteAllOf(context.TODO(), obj, &client.DeleteAllOfOptions{
return r.DeleteAllOf(ctx, obj, &client.DeleteAllOfOptions{
ListOptions: client.ListOptions{
LabelSelector: selector,
Namespace: ns,
@@ -53,7 +57,8 @@ func (r *Manager) pruningResources(ns string, keys []string, obj client.Object)
}
func (r *Manager) emitEvent(object runtime.Object, namespace string, res controllerutil.OperationResult, msg string, err error) {
var eventType = corev1.EventTypeNormal
eventType := corev1.EventTypeNormal
if err != nil {
eventType = corev1.EventTypeWarning
res = "Error"
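The pruning selector pairs an `exists` requirement on the Capsule type label with a `notin` over the still-requested keys, so anything labelled but no longer requested gets deleted in one call. A hedged sketch of building that selector and issuing the bulk delete (shown for LimitRanges; the function name is illustrative):

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/selection"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// pruneStale deletes all LimitRanges in ns that carry labelKey but whose value
// is not in keep: an "outer join" removal of no-longer-requested objects.
func pruneStale(ctx context.Context, c client.Client, ns, labelKey string, keep []string) error {
	selector := labels.NewSelector()

	exists, err := labels.NewRequirement(labelKey, selection.Exists, nil)
	if err != nil {
		return err
	}
	selector = selector.Add(*exists)

	if len(keep) > 0 {
		notIn, err := labels.NewRequirement(labelKey, selection.NotIn, keep)
		if err != nil {
			return err
		}
		selector = selector.Add(*notIn)
	}

	return c.DeleteAllOf(ctx, &corev1.LimitRange{},
		client.InNamespace(ns),
		client.MatchingLabelsSelector{Selector: selector})
}
```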

controllers/tls/errors.go Normal file
View File

@@ -0,0 +1,10 @@
// Copyright 2020-2021 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package tls
type RunningInOutOfClusterModeError struct{}
func (r RunningInOutOfClusterModeError) Error() string {
return "cannot retrieve the leader Pod, probably running in out of the cluster mode"
}
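The sentinel type lets callers branch on the out-of-cluster case with `errors.As` instead of string matching, which is exactly what the TLS reconciler does below when annotating the operator Pods. A minimal usage sketch (the type is repeated here for self-containment):

```go
package example

import (
	"errors"
	"fmt"
)

type RunningInOutOfClusterModeError struct{}

func (r RunningInOutOfClusterModeError) Error() string {
	return "cannot retrieve the leader Pod, probably running in out of the cluster mode"
}

func handle(err error) {
	// errors.As also unwraps wrapped errors, so callers can wrap freely.
	if errors.As(err, &RunningInOutOfClusterModeError{}) {
		fmt.Println("skipping in-cluster-only work")
		return
	}
	if err != nil {
		fmt.Println("real failure:", err)
	}
}
```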

controllers/tls/manager.go Normal file
View File

@@ -0,0 +1,337 @@
// Copyright 2020-2021 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package tls
import (
"context"
"fmt"
"os"
"time"
"github.com/go-logr/logr"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
corev1 "k8s.io/api/core/v1"
apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/util/retry"
"k8s.io/utils/pointer"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"sigs.k8s.io/controller-runtime/pkg/source"
"github.com/clastix/capsule/controllers/utils"
"github.com/clastix/capsule/pkg/cert"
"github.com/clastix/capsule/pkg/configuration"
)
const (
certificateExpirationThreshold = 3 * 24 * time.Hour
certificateValidity = 6 * 30 * 24 * time.Hour
PodUpdateAnnotationName = "capsule.clastix.io/updated"
)
type Reconciler struct {
client.Client
Log logr.Logger
Scheme *runtime.Scheme
Namespace string
Configuration configuration.Configuration
}
func (r *Reconciler) SetupWithManager(mgr ctrl.Manager) error {
enqueueFn := handler.EnqueueRequestsFromMapFunc(func(client.Object) []reconcile.Request {
return []reconcile.Request{
{
NamespacedName: types.NamespacedName{
Namespace: r.Namespace,
Name: r.Configuration.TLSSecretName(),
},
},
}
})
return ctrl.NewControllerManagedBy(mgr).
For(&corev1.Secret{}, utils.NamesMatchingPredicate(r.Configuration.TLSSecretName())).
Watches(source.NewKindWithCache(&admissionregistrationv1.ValidatingWebhookConfiguration{}, mgr.GetCache()), enqueueFn, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
return object.GetName() == r.Configuration.ValidatingWebhookConfigurationName()
}))).
Watches(source.NewKindWithCache(&admissionregistrationv1.MutatingWebhookConfiguration{}, mgr.GetCache()), enqueueFn, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
return object.GetName() == r.Configuration.MutatingWebhookConfigurationName()
}))).
Watches(source.NewKindWithCache(&apiextensionsv1.CustomResourceDefinition{}, mgr.GetCache()), enqueueFn, builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
return object.GetName() == r.Configuration.TenantCRDName()
}))).
Complete(r)
}
func (r Reconciler) ReconcileCertificates(ctx context.Context, certSecret *corev1.Secret) error {
if r.shouldUpdateCertificate(certSecret) {
r.Log.Info("Generating new TLS certificate")
ca, err := cert.GenerateCertificateAuthority()
if err != nil {
return err
}
opts := cert.NewCertOpts(time.Now().Add(certificateValidity), fmt.Sprintf("capsule-webhook-service.%s.svc", r.Namespace))
crt, key, err := ca.GenerateCertificate(opts)
if err != nil {
r.Log.Error(err, "Cannot generate new TLS certificate")
return err
}
caCrt, _ := ca.CACertificatePem()
certSecret.Data = map[string][]byte{
corev1.TLSCertKey: crt.Bytes(),
corev1.TLSPrivateKeyKey: key.Bytes(),
corev1.ServiceAccountRootCAKey: caCrt.Bytes(),
}
t := &corev1.Secret{ObjectMeta: certSecret.ObjectMeta}
_, err = controllerutil.CreateOrUpdate(ctx, r.Client, t, func() error {
t.Data = certSecret.Data
return nil
})
if err != nil {
r.Log.Error(err, "cannot update Capsule TLS")
return err
}
}
var caBundle []byte
var ok bool
if caBundle, ok = certSecret.Data[corev1.ServiceAccountRootCAKey]; !ok {
return fmt.Errorf("missing %s field in %s secret", corev1.ServiceAccountRootCAKey, r.Configuration.TLSSecretName())
}
r.Log.Info("Updating caBundle in webhooks and crd")
group := new(errgroup.Group)
group.Go(func() error {
return r.updateMutatingWebhookConfiguration(ctx, caBundle)
})
group.Go(func() error {
return r.updateValidatingWebhookConfiguration(ctx, caBundle)
})
group.Go(func() error {
return r.updateCustomResourceDefinition(ctx, caBundle)
})
operatorPods, err := r.getOperatorPods(ctx)
if err != nil {
if errors.As(err, &RunningInOutOfClusterModeError{}) {
r.Log.Info("skipping annotation of Pods for cert-manager", "error", err.Error())
return nil
}
return err
}
r.Log.Info("Updating capsule operator pods")
for _, pod := range operatorPods.Items {
p := pod
group.Go(func() error {
return r.updateOperatorPod(ctx, p)
})
}
if err := group.Wait(); err != nil {
return err
}
return nil
}
func (r Reconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl.Result, error) {
r.Log = r.Log.WithValues("Request.Namespace", request.Namespace, "Request.Name", request.Name)
certSecret := &corev1.Secret{}
if err := r.Client.Get(ctx, request.NamespacedName, certSecret); err != nil {
// Error reading the object - requeue the request.
return reconcile.Result{}, err
}
if err := r.ReconcileCertificates(ctx, certSecret); err != nil {
return reconcile.Result{}, err
}
certificate, err := cert.GetCertificateFromBytes(certSecret.Data[corev1.TLSCertKey])
if err != nil {
return reconcile.Result{}, err
}
now := time.Now()
requeueTime := certificate.NotAfter.Add(-(certificateExpirationThreshold - 1*time.Second))
rq := requeueTime.Sub(now)
r.Log.Info("Reconciliation completed, processing back in " + rq.String())
return reconcile.Result{Requeue: true, RequeueAfter: rq}, nil
}
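The requeue math schedules the next reconciliation one second after the certificate enters its expiration window: with a 72h threshold, a certificate expiring at time T is re-processed at T minus 72h plus 1s, so the rotation branch above fires. A tiny worked sketch, with the constant mirroring `certificateExpirationThreshold`:

```go
package example

import (
	"fmt"
	"time"
)

const expirationThreshold = 3 * 24 * time.Hour

// nextRequeue returns how long to wait before reconciling again so that the
// controller wakes up just inside the rotation window.
func nextRequeue(notAfter, now time.Time) time.Duration {
	requeueTime := notAfter.Add(-(expirationThreshold - 1*time.Second))
	return requeueTime.Sub(now)
}

func exampleRequeue() {
	notAfter := time.Date(2022, 12, 31, 0, 0, 0, 0, time.UTC)
	now := time.Date(2022, 10, 1, 0, 0, 0, 0, time.UTC)
	fmt.Println(nextRequeue(notAfter, now)) // 2112h0m1s: fires at 2022-12-28T00:00:01Z
}
```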
func (r Reconciler) shouldUpdateCertificate(secret *corev1.Secret) bool {
if _, ok := secret.Data[corev1.ServiceAccountRootCAKey]; !ok {
return true
}
certificate, key, err := cert.GetCertificateWithPrivateKeyFromBytes(secret.Data[corev1.TLSCertKey], secret.Data[corev1.TLSPrivateKeyKey])
if err != nil {
return true
}
if err := cert.ValidateCertificate(certificate, key, certificateExpirationThreshold); err != nil {
r.Log.Error(err, "failed to validate certificate, generating new one")
return true
}
r.Log.Info("Skipping TLS certificate generation as it is still valid")
return false
}
// By default, Helm doesn't allow the use of templates in CRDs (https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#method-1-let-helm-do-it-for-you).
// To overcome this, we set the conversion strategy in the Helm chart to None, and then update it with the CA and namespace information.
func (r *Reconciler) updateCustomResourceDefinition(ctx context.Context, caBundle []byte) error {
return retry.RetryOnConflict(retry.DefaultBackoff, func() (err error) {
crd := &apiextensionsv1.CustomResourceDefinition{}
err = r.Get(ctx, types.NamespacedName{Name: "tenants.capsule.clastix.io"}, crd)
if err != nil {
r.Log.Error(err, "cannot retrieve CustomResourceDefinition")
return err
}
_, err = controllerutil.CreateOrUpdate(ctx, r.Client, crd, func() error {
crd.Spec.Conversion = &apiextensionsv1.CustomResourceConversion{
Strategy: "Webhook",
Webhook: &apiextensionsv1.WebhookConversion{
ClientConfig: &apiextensionsv1.WebhookClientConfig{
Service: &apiextensionsv1.ServiceReference{
Namespace: r.Namespace,
Name: "capsule-webhook-service",
Path: pointer.StringPtr("/convert"),
Port: pointer.Int32Ptr(443),
},
CABundle: caBundle,
},
ConversionReviewVersions: []string{"v1alpha1", "v1beta1"},
},
}
return nil
})
return err
})
}
//nolint:dupl
func (r Reconciler) updateValidatingWebhookConfiguration(ctx context.Context, caBundle []byte) error {
return retry.RetryOnConflict(retry.DefaultBackoff, func() (err error) {
vw := &admissionregistrationv1.ValidatingWebhookConfiguration{}
err = r.Get(ctx, types.NamespacedName{Name: r.Configuration.ValidatingWebhookConfigurationName()}, vw)
if err != nil {
r.Log.Error(err, "cannot retrieve ValidatingWebhookConfiguration")
return err
}
for i, w := range vw.Webhooks {
// Updating CABundle only in case of an internal service reference
if w.ClientConfig.Service != nil {
vw.Webhooks[i].ClientConfig.CABundle = caBundle
}
}
return r.Update(ctx, vw, &client.UpdateOptions{})
})
}
//nolint:dupl
func (r Reconciler) updateMutatingWebhookConfiguration(ctx context.Context, caBundle []byte) error {
return retry.RetryOnConflict(retry.DefaultBackoff, func() (err error) {
mw := &admissionregistrationv1.MutatingWebhookConfiguration{}
err = r.Get(ctx, types.NamespacedName{Name: r.Configuration.MutatingWebhookConfigurationName()}, mw)
if err != nil {
r.Log.Error(err, "cannot retrieve MutatingWebhookConfiguration")
return err
}
for i, w := range mw.Webhooks {
// Updating CABundle only in case of an internal service reference
if w.ClientConfig.Service != nil {
mw.Webhooks[i].ClientConfig.CABundle = caBundle
}
}
return r.Update(ctx, mw, &client.UpdateOptions{})
})
}
func (r Reconciler) updateOperatorPod(ctx context.Context, pod corev1.Pod) error {
return retry.RetryOnConflict(retry.DefaultRetry, func() error {
// Need to get latest version of pod
p := &corev1.Pod{}
if err := r.Client.Get(ctx, types.NamespacedName{Namespace: pod.Namespace, Name: pod.Name}, p); err != nil && !apierrors.IsNotFound(err) {
r.Log.Error(err, "cannot get pod", "name", pod.Name, "namespace", pod.Namespace)
return err
}
if p.Annotations == nil {
p.Annotations = map[string]string{}
}
p.Annotations[PodUpdateAnnotationName] = time.Now().Format(time.RFC3339Nano)
if err := r.Client.Update(ctx, p, &client.UpdateOptions{}); err != nil {
r.Log.Error(err, "cannot update pod", "name", pod.Name, "namespace", pod.Namespace)
return err
}
return nil
})
}
func (r Reconciler) getOperatorPods(ctx context.Context) (*corev1.PodList, error) {
hostname, _ := os.Hostname()
leaderPod := &corev1.Pod{}
if err := r.Client.Get(ctx, types.NamespacedName{Namespace: os.Getenv("NAMESPACE"), Name: hostname}, leaderPod); err != nil {
return nil, RunningInOutOfClusterModeError{}
}
podList := &corev1.PodList{}
if err := r.Client.List(ctx, podList, client.MatchingLabels(leaderPod.ObjectMeta.Labels)); err != nil {
r.Log.Error(err, "cannot retrieve list of Capsule pods")
return nil, err
}
return podList, nil
}

View File

@@ -0,0 +1,19 @@
package utils
import (
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/predicate"
)
func NamesMatchingPredicate(names ...string) builder.Predicates {
return builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
for _, name := range names {
if object.GetName() == name {
return true
}
}
return false
}))
}
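A hedged usage sketch of the predicate helper, mirroring how the TLS reconciler above restricts its watch to a single Secret name (the Secret name here is an assumption, and the helper is repeated for self-containment):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/builder"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// namesMatchingPredicate filters watch events down to the listed object names.
func namesMatchingPredicate(names ...string) builder.Predicates {
	return builder.WithPredicates(predicate.NewPredicateFuncs(func(object client.Object) bool {
		for _, name := range names {
			if object.GetName() == name {
				return true
			}
		}
		return false
	}))
}

// setup wires a controller that only reconciles the named Secret.
func setup(mgr ctrl.Manager, r reconcile.Reconciler) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Secret{}, namesMatchingPredicate("capsule-tls")).
		Complete(r)
}
```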

docs/.gitignore vendored Normal file
View File

@@ -0,0 +1,8 @@
*.log
.cache
.DS_Store
src/.temp
node_modules
dist
.env
.env.*

docs/README.md Normal file
View File

@@ -0,0 +1,12 @@
# Capsule Documentation
1. Ensure you have [`yarn`](https://classic.yarnpkg.com/lang/en/docs/install/#debian-stable) installed in your path.
2. `yarn install`
## Local development
```shell
yarn develop
```
This will create a local webserver listening on `localhost:8080` with hot-reload of your local changes.

View File

Binary image files changed (two updated in place, two newly added); previews are not shown.
View File

@@ -1,25 +1,23 @@
# Capsule Development Guide
# Capsule Development
## Prerequisites
### Tools
Make sure you have these tools installed:
- [Go 1.16+](https://golang.org/dl/)
- [Go 1.18+](https://golang.org/dl/)
- [Operator SDK 1.7.2+](https://github.com/operator-framework/operator-sdk), or [Kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
- [KinD](https://github.com/kubernetes-sigs/kind) or [k3d](https://k3d.io/), with `kubectl`
- [ngrok](https://ngrok.com/) (if you want to run locally with remote Kubernetes)
- [golangci-lint](https://github.com/golangci/golangci-lint)
- OpenSSL
### Kubernetes Cluster
## Setup a Kubernetes Cluster
A lightweight Kubernetes cluster on your laptop can be very handy for Kubernetes-native development like Capsule.
#### By `k3d`
### By `k3d`
```sh
```shell
# Install K3d cli by brew in Mac, or your preferred way
$ brew install k3d
@@ -31,6 +29,9 @@ $ export LAPTOP_HOST_IP=192.168.10.101
# Refer to here for more options: https://k3d.io/v4.4.8/usage/commands/k3d_cluster_create/
$ k3d cluster create k3s-capsule --servers 1 --agents 1 --no-lb --k3s-server-arg --tls-san=${LAPTOP_HOST_IP}
# Get Kubeconfig
$ k3d kubeconfig get k3s-capsule > /tmp/k3s-capsule && export KUBECONFIG="/tmp/k3s-capsule"
# This will create a cluster with 1 server and 1 worker node
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
@@ -44,9 +45,9 @@ CONTAINER ID IMAGE COMMAND CREATED
753998879b28 rancher/k3s:v1.21.2-k3s1 "/bin/k3s server --t…" 53 seconds ago Up 51 seconds 0.0.0.0:49708->6443/tcp k3d-k3s-capsule-server-0
```
#### By `kind`
### By `kind`
```sh
```shell
# Install kind cli by brew in Mac, or your preferred way
$ brew install kind
@@ -92,20 +93,20 @@ CONTAINER ID IMAGE COMMAND CREATED
7d50f1633555 kindest/node:v1.21.1 "/usr/local/bin/entr…" About a minute ago Up About a minute kind-capsule-worker
```
## Fork & clone the repository
## Fork, build, and deploy Capsule
The `fork-clone-contribute-pr` flow is common for contributing to OSS projects like Kubernetes, Capsule.
The `fork-clone-contribute-pr` flow is common for contributing to OSS projects like Kubernetes and Capsule.
Let's assume you've forked it into your GitHub namespace, say `myuser`, and then you can clone it with Git protocol.
Do remember to change the `myuser` to yours.
```sh
```shell
$ git clone git@github.com:myuser/capsule.git && cd capsule
```
It's a good practice to add the upsteam as the remote too so we can easily fetch and merge the upstream to our fork:
It's a good practice to add the upstream as the remote too so we can easily fetch and merge the upstream to our fork:
```sh
```shell
$ git remote add upstream https://github.com/clastix/capsule.git
$ git remote -vv
origin git@github.com:myuser/capsule.git (fetch)
@@ -114,9 +115,9 @@ upstream https://github.com/clastix/capsule.git (fetch)
upstream https://github.com/clastix/capsule.git (push)
```
## Build & deploy Capsule
Build and deploy:
```sh
```shell
# Download the project dependencies
$ go mod download
@@ -124,12 +125,12 @@ $ go mod download
$ make docker-build
# Retrieve the built image version
$ export CAPSULE_IMAGE_VESION=`docker images --format '{{.Tag}}' quay.io/clastix/capsule`
$ export CAPSULE_IMAGE_VESION=`docker images --format '{{.Tag}}' clastix/capsule`
# If k3s, load the image into cluster by
$ k3d image import --cluster k3s-capsule capsule quay.io/clastix/capsule:${CAPSULE_IMAGE_VESION}
$ k3d image import --cluster k3s-capsule capsule clastix/capsule:${CAPSULE_IMAGE_VESION}
# If Kind, load the image into cluster by
$ kind load docker-image --name kind-capsule quay.io/clastix/capsule:${CAPSULE_IMAGE_VESION}
$ kind load docker-image --name kind-capsule clastix/capsule:${CAPSULE_IMAGE_VESION}
# deploy all the required manifests
# Note: 1) please retry if you saw errors; 2) if you want to clean it up first, run: make remove
@@ -153,6 +154,8 @@ spec:
owners:
- name: alice
kind: User
- name: system:serviceaccount:capsule-system:default
kind: ServiceAccount
EOF
# There shouldn't be any errors and you should see the newly created tenant
@@ -161,25 +164,30 @@ NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR AGE
oil Active 0 14s
```
If you want to test namespace creation or similar operations, make sure to use impersonation:
```sh
$ kubectl ... --as system:serviceaccount:capsule-system:default --as-group capsule.clastix.io
```
As of now, a complete Capsule environment has been set up in a `kind`- or `k3d`-powered cluster, and the `capsule-controller-manager` is running as a deployment serving as:
- The reconcilers for CRDs, and
- A series of webhooks
## Set up development env
## Setup the development environment
During development, we prefer that the code is running within our IDE locally, instead of running as the normal Pod(s) within the Kubernetes cluster.
Such a setup can be illustrated with the diagram below:
![Development Env](assets/dev-env.png)
![Development Env](./assets/dev-env.png)
To achieve that, there are some necessary steps we need to walk through, which have been made as a `make` target within our `Makefile`.
So the TL;DR answer is:
```sh
```shell
# If you haven't installed or run `make deploy` before, do it first
# Note: please retry if you saw errors
$ make deploy
@@ -189,14 +197,13 @@ $ make deploy
$ LAPTOP_HOST_IP="<YOUR_LAPTOP_IP>" make dev-setup
```
This is a very common setup for typical Kubernetes Operator development, so let's walk through the steps in more detail here.
1. Scaling down the deployed Pod(s) to 0
We need to scale the existing replicas of `capsule-controller-manager` to 0 to avoid reconciliation competition between the Pod(s) and the code running outside of the cluster, in our preferred IDE for example.
```sh
```shell
$ kubectl -n capsule-system scale deployment capsule-controller-manager --replicas=0
deployment.apps/capsule-controller-manager scaled
```
@@ -205,7 +212,7 @@ deployment.apps/capsule-controller-manager scaled
Running webhooks requires TLS, so we prepare a TLS key pair in our development environment to handle HTTPS requests.
```sh
```shell
# Prepare a simple OpenSSL config file
# Do remember to export LAPTOP_HOST_IP before running this command
$ cat > _tls.cnf <<EOF
@@ -246,7 +253,7 @@ By default, the webhooks will be registered with the services, which will route
We need to _delegate_ the controllers' and webhooks' services to the code running in our IDE by patching the `MutatingWebhookConfiguration` and `ValidatingWebhookConfiguration`.
```sh
```shell
# Export your laptop's IP with the 9443 port exposed by controllers/webhooks' services
$ export WEBHOOK_URL="https://${LAPTOP_HOST_IP}:9443"
@@ -266,14 +273,15 @@ $ kubectl get MutatingWebhookConfiguration capsule-mutating-webhook-configuratio
# Note: there is a list of validating webhook endpoints, not just one
$ kubectl patch ValidatingWebhookConfiguration capsule-validating-webhook-configuration \
--type='json' -p="[\
{'op': 'replace', 'path': '/webhooks/0/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/cordoning\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/1/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/ingresses\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/2/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/namespaces\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/3/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/networkpolicies\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/4/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/pods\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/5/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/persistentvolumeclaims\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/6/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/services\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/7/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/tenants\",'caBundle':\"${CA_BUNDLE}\"}}\
{'op': 'replace', 'path': '/webhooks/0/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/cordoning\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/1/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/ingresses\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/2/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/namespaces\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/3/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/networkpolicies\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/4/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/pods\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/5/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/persistentvolumeclaims\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/6/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/services\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/7/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/tenants\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/8/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/nodes\",'caBundle':\"${CA_BUNDLE}\"}}\
]"
# Verify it if you want
@@ -284,14 +292,14 @@ $ kubectl get ValidatingWebhookConfiguration capsule-validating-webhook-configur
Now we can run Capsule controllers with webhooks outside of the Kubernetes cluster:
```sh
```shell
$ export NAMESPACE=capsule-system && export TMPDIR=/tmp/
$ go run .
```
To verify that, we can open a new console and create a new Tenant:
```sh
```shell
$ kubectl apply -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
@@ -304,33 +312,9 @@ spec:
EOF
```
We should see output like:
```log
tenant.capsule.clastix.io/gas created
```
We should see output and logs in the `make run` console.
And could see logs in the `make run` console like:
```log
...
{"level":"info","ts":"2021-09-28T21:10:30.520+0800","logger":"controllers.Tenant","msg":"Ensuring all Namespaces are collected","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.527+0800","logger":"controllers.Tenant","msg":"Starting processing of Namespaces","Request.Name":"gas","items":0}
{"level":"info","ts":"2021-09-28T21:10:30.527+0800","logger":"controllers.Tenant","msg":"Ensuring additional RoleBindings for owner","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.527+0800","logger":"controllers.Tenant","msg":"Ensuring RoleBinding for owner","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.527+0800","logger":"controllers.Tenant","msg":"Ensuring Namespace count","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.533+0800","logger":"controllers.Tenant","msg":"Tenant reconciling completed","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.540+0800","logger":"controllers.Tenant","msg":"Ensuring all Namespaces are collected","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.547+0800","logger":"controllers.Tenant","msg":"Starting processing of Namespaces","Request.Name":"gas","items":0}
{"level":"info","ts":"2021-09-28T21:10:30.547+0800","logger":"controllers.Tenant","msg":"Ensuring additional RoleBindings for owner","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.547+0800","logger":"controllers.Tenant","msg":"Ensuring RoleBinding for owner","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.547+0800","logger":"controllers.Tenant","msg":"Ensuring Namespace count","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.554+0800","logger":"controllers.Tenant","msg":"Tenant reconciling completed","Request.Name":"gas"}
```
## Work in your preferred IDE
Now it's time to work through our familiar inner loop for development in our preferred IDE.
For example, if you're using [Visual Studio Code](https://code.visualstudio.com), this `launch.json` file can be a good start.
Now it's time to work through our familiar inner loop for development in our preferred IDE. For example, if you're using [Visual Studio Code](https://code.visualstudio.com), this `launch.json` file can be a good start.
```json
{
@@ -355,5 +339,3 @@ For example, if you're using [Visual Studio Code](https://code.visualstudio.com)
]
}
```
Please refer to [contributing.md](contributing.md) for more details while contributing.

View File

@@ -0,0 +1,22 @@
# Project Governance
This document lays out the guidelines under which the Capsule project will be governed.
The goal is to make sure that roles and responsibilities are well defined and to clarify how decisions are made.
## Roles
In the context of Capsule project, we consider the following roles:
* __Users__: everyone using Capsule, typically willing to provide feedback by proposing features and/or filing issues.
* __Contributors__: everyone contributing code, documentation, examples, tests, and participating in feature proposals as well as design discussions.
* __Maintainers__: responsible for engaging with and assisting contributors to iterate on their contributions until they reach acceptable quality. Maintainers decide whether a contribution is accepted into the project or rejected.
## Release Management
The release process will be governed by Maintainers.
## Roadmap Planning
Maintainers will share the roadmap and release versions as milestones on GitHub.

Some files were not shown because too many files have changed in this diff.