Compare commits

...

69 Commits

Author SHA1 Message Date
Dario Tranchitella
0bfca6b60e fix(helm): avoiding overwriting secrets upon helm upgrade 2022-03-31 07:28:16 +00:00
gkarthiks
fdc1b3fe39 fix(docs): capsule-proxy chart url
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
2022-03-28 07:53:52 +00:00
Karthikeyan Govindaraj
f7bc2e24cc chore: description for limit ranges and update doc
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
2022-03-18 16:44:34 +00:00
Massimiliano Giovagnoli
d3021633cd Docs update (#530)
Signed-off-by: maxgio92 <me@maxgio.it>
2022-03-18 12:25:57 +01:00
dependabot[bot]
7fefe4f6de build(deps): bump url-parse from 1.5.7 to 1.5.10 in /docs
Bumps [url-parse](https://github.com/unshiftio/url-parse) from 1.5.7 to 1.5.10.
- [Release notes](https://github.com/unshiftio/url-parse/releases)
- [Commits](https://github.com/unshiftio/url-parse/compare/1.5.7...1.5.10)

---
updated-dependencies:
- dependency-name: url-parse
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-02-28 08:44:07 +00:00
dependabot[bot]
302bb19707 build(deps): bump prismjs from 1.25.0 to 1.27.0 in /docs
Bumps [prismjs](https://github.com/PrismJS/prism) from 1.25.0 to 1.27.0.
- [Release notes](https://github.com/PrismJS/prism/releases)
- [Changelog](https://github.com/PrismJS/prism/blob/master/CHANGELOG.md)
- [Commits](https://github.com/PrismJS/prism/compare/v1.25.0...v1.27.0)

---
updated-dependencies:
- dependency-name: prismjs
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-02-27 19:32:10 +00:00
dependabot[bot]
27a7792c31 build(deps): bump simple-get from 3.1.0 to 3.1.1 in /docs
Bumps [simple-get](https://github.com/feross/simple-get) from 3.1.0 to 3.1.1.
- [Release notes](https://github.com/feross/simple-get/releases)
- [Commits](https://github.com/feross/simple-get/compare/v3.1.0...v3.1.1)

---
updated-dependencies:
- dependency-name: simple-get
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-02-24 14:18:39 +00:00
Abhijeet Kasurde
1a60e83772 docs: misc typo fixes in various places
Fixed the following spelling mistakes:

* upsteam -> upstream
* Caspule -> Capsule
* suceed -> succeed
* unsed -> unused

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
2022-02-24 14:18:00 +00:00
张连军
632268dd68 fix(docs): adding missing validatingwebhookconfiguration patch for nodes endpoint 2022-02-24 08:54:30 +00:00
dependabot[bot]
4e07de37c4 build(deps): bump url-parse from 1.5.3 to 1.5.7 in /docs
Bumps [url-parse](https://github.com/unshiftio/url-parse) from 1.5.3 to 1.5.7.
- [Release notes](https://github.com/unshiftio/url-parse/releases)
- [Commits](https://github.com/unshiftio/url-parse/compare/1.5.3...1.5.7)

---
updated-dependencies:
- dependency-name: url-parse
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-02-24 08:51:36 +00:00
Pandry
1d10bcab1e test(e2e): tenant regex forbidden namespace labels and annotations 2022-02-22 06:11:49 +00:00
Pandry
d4a5f3beca fix: validate regex patterns in annotations #510 2022-02-22 06:11:49 +00:00
Maksim Fedotov
cd56eab119 fix: object count resource quotas not working when using Tenant scope 2022-01-25 16:04:08 +00:00
dependabot[bot]
6cee5b73af build(deps-dev): bump postcss from 7.0.39 to 8.2.13 in /docs
Bumps [postcss](https://github.com/postcss/postcss) from 7.0.39 to 8.2.13.
- [Release notes](https://github.com/postcss/postcss/releases)
- [Changelog](https://github.com/postcss/postcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/postcss/postcss/compare/7.0.39...8.2.13)

---
updated-dependencies:
- dependency-name: postcss
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-01-22 18:30:07 +00:00
dependabot[bot]
8e7325aecb build(deps): bump nanoid from 3.1.29 to 3.2.0 in /docs
Bumps [nanoid](https://github.com/ai/nanoid) from 3.1.29 to 3.2.0.
- [Release notes](https://github.com/ai/nanoid/releases)
- [Changelog](https://github.com/ai/nanoid/blob/main/CHANGELOG.md)
- [Commits](https://github.com/ai/nanoid/compare/3.1.29...3.2.0)

---
updated-dependencies:
- dependency-name: nanoid
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-01-22 18:24:00 +00:00
Adriano Pezzuto
be26783424 docs: clarify usage of serviceaccount as tenant owner (#503) 2022-01-20 21:52:49 +01:00
Tom OBrien
0b199f4136 fix: modify jobs.image.tag for eks
EKS sometimes includes a '+' in the Kubernetes minor version,
which results in an invalid image tag for the jobs.
2022-01-18 16:26:24 +00:00
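The mechanics of the fix can be sketched in Go: EKS reports the minor version with a trailing '+' (for example Major "1", Minor "21+"), which is not a valid image-tag character, so it has to be stripped before composing the kubectl image tag for the jobs. This is only an illustration; the actual change is the Helm template helper further down in this diff.

```go
package main

import (
	"fmt"
	"strings"
)

// jobsImageTag sketches the idea behind the fix: drop the '+' that EKS appends
// to the minor version so the resulting tag is valid (e.g. "v1.21").
func jobsImageTag(major, minor string) string {
	return fmt.Sprintf("v%s.%s", major, strings.ReplaceAll(minor, "+", ""))
}

func main() {
	fmt.Println(jobsImageTag("1", "21+")) // v1.21
}
```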
Dario Tranchitella
1bbaebbc90 build(installer): releasing to capsule v0.1.1 2022-01-11 09:35:29 +00:00
Dario Tranchitella
4b8d8b2a7c build(helm): aligning to capsule v0.1.1 2022-01-11 09:35:29 +00:00
Dario Tranchitella
3fb4c41daf docs: removing development environment setup for capsule-proxy 2022-01-11 08:21:16 +00:00
Dario Tranchitella
055791966a docs: aligning to capsule-proxy documentation 2022-01-11 08:21:16 +00:00
Dario Tranchitella
c9af9c18e4 chore(ci): e2e for kubernetes v1.23 2022-01-03 10:33:42 +00:00
Maksim Fedotov
fef381d2b4 feat(helm): add default conversion webhook configuration to tenant CRD 2021-12-30 08:31:13 +00:00
Max Fedotov
19aff8c882 fix: ignore NotFound error in ServiceLabelsReconciler (#494)
Co-authored-by: Maksim Fedotov <m_fedotov@wargaming.net>
2021-12-29 18:26:45 +02:00
Dario Tranchitella
8da7e22cb2 fix(docs): broken link for documentation static website 2021-12-29 16:07:37 +00:00
Dario Tranchitella
47c37a3d5d feat(docs): v1alpha1 to v1beta1 upgrade guide 2021-12-27 07:51:04 +00:00
Dario Tranchitella
677175b3ed fix(docs): referring to old capsule version 2021-12-27 07:51:04 +00:00
Dario Tranchitella
c95e3a2068 docs: restoring multi-tenancy benchmark results 2021-12-26 19:51:48 +00:00
Dario Tranchitella
0be3be4480 docs: limiting amount of resources deployed in a tenant 2021-12-23 11:39:34 +00:00
Dario Tranchitella
6ad434fcfb test(e2e): limiting amount of resources deployed in a tenant 2021-12-23 11:39:34 +00:00
Dario Tranchitella
e53911942d feat: limiting amount of resources deployed in a tenant 2021-12-23 11:39:34 +00:00
ptx96
a179645f26 feat(helm): find kubectl tag from server version 2021-12-22 09:33:27 +01:00
Dario Tranchitella
778fb4bcc2 fix: starting all controllers only when certificates are generated
This solves an issue when upgrading Capsule from <v0.1.0 to
>=v0.1.0: due to a resource reflector, many warnings were polluting the
reconciliation loop and causing unmarshaling errors.

Additionally, only the CA Secret was checked before starting the
Operator, although the TLS certificate is also required by the webhooks,
along with the `/convert` endpoint used for the CR version conversion.
2021-12-21 06:45:16 +00:00
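A minimal sketch of the gate described above, assuming a controller-runtime client and the Secret names used elsewhere in this diff (`capsule-ca` and `capsule-tls`); it is illustrative only, not the operator's actual start-up code: the manager blocks until both Secrets exist and are populated, so the webhooks (including `/convert`) can serve TLS from the first request.

```go
package setup

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForCertificates polls until both the CA and TLS Secrets carry data,
// and only then lets the caller start controllers and webhook servers.
func waitForCertificates(ctx context.Context, c client.Client, namespace string) error {
	for {
		ready := true
		for _, name := range []string{"capsule-ca", "capsule-tls"} {
			secret := &corev1.Secret{}
			if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, secret); err != nil || len(secret.Data) == 0 {
				ready = false
				break
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
		}
	}
}
```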
slushysnowman
bc23324fe7 feat(helm): add imagePullSecrets to jobs
Co-authored-by: Tom OBrien <tom.obrien@ns.nl>
2021-12-21 06:43:03 +00:00
Dario Tranchitella
4a6fd49554 fix: yaml installer should use namespace selector for pods webhook (#484) 2021-12-19 00:01:16 +01:00
Adriano Pezzuto
d7baf18bf9 Refactoring of the documentation structure (#481)
* docs: structure refactoring

* build(yaml): alignment to latest release
2021-12-16 17:39:30 +01:00
Oliver Bähler
5c7804e1bf fix: add rolebinding validation against rfc-1123 dns for sa subjects
Signed-off-by: Oliver Bähler <oliverbaehler@hotmail.com>
2021-11-12 11:22:26 +01:00
Oliver Bähler
c4481f26f7 docs: additions to dev-guide
Signed-off-by: Oliver Bähler <oliverbaehler@hotmail.com>
2021-11-12 11:22:26 +01:00
Maksim Fedotov
ec715d2e8f fix: do not register tenant controller/webhook/indexer until CA is created 2021-11-06 16:34:22 +01:00
Luca Spezzano
0aeaf89cb7 fix(docs): broken links and style, deleted command code from MD file 2021-11-06 16:30:34 +01:00
Dario Tranchitella
3d31ddb4e3 docs: instructions on how to develop the docs website 2021-11-06 16:30:34 +01:00
Luca Spezzano
e83f344cdc feat(docs): removed meta robots and added meta og:url 2021-11-06 16:30:34 +01:00
Luca Spezzano
da83a8711a style(docs): added blockquote style 2021-11-06 16:30:34 +01:00
Luca Spezzano
43a944ace0 feat(docs): created 404 default page 2021-11-06 16:30:34 +01:00
Luca Spezzano
0acc2d2ef1 feat(docs): setup Gridsome for the website 2021-11-06 16:30:34 +01:00
Maxim Fedotov
14f9686bbb Forbidden node labels and annotations (#464)
* feat: forbidden node labels and annotations

* test(e2e): forbidden node labels and annotations

* build(kustomize): forbidden node labels and annotations

* build(helm): forbidden node labels and annotations

* build(installer): forbidden node labels and annotations

* chore(make): forbidden node labels and annotations

* docs: forbidden node labels and annotations

* test(e2e): forbidden node labels and annotations. Use EventuallyCreation func

* feat: forbidden node labels and annotations. Check kubernetes version

* test(e2e): forbidden node labels and annotations. Check kubernetes version

* docs: forbidden node labels and annotations. Version restrictions

* feat: forbidden node labels and annotations. Do not update deepcopy functions

* docs: forbidden node labels and annotations. Use blockquotes for notes

Co-authored-by: Maksim Fedotov <m_fedotov@wargaming.net>
2021-11-02 20:01:53 +03:00
Dario Tranchitella
6ba9826c51 chore(linters): no more need of duplicate check 2021-11-02 17:13:23 +01:00
Dario Tranchitella
bd58084ded docs!: container registry enforcement required fqci 2021-11-02 17:13:23 +01:00
Dario Tranchitella
3a5e50886d test: fqci is required for container registry enforcement 2021-11-02 17:13:23 +01:00
Dario Tranchitella
e2768dad83 fix!: forcing to use fqci and container registries with no repositories 2021-11-02 17:13:23 +01:00
Vivek Singh
b97c23176d fix: duplicate release for helm chart
This commit removes the Helm release workflow trigger on `create`, which fired duplicate events alongside `push`.

fixes: #459
2021-11-02 17:13:10 +01:00
Dario Tranchitella
fa8e805842 build(ci): triggering e2e also for nested files 2021-10-28 17:53:17 +02:00
Dario Tranchitella
8df66fc232 test: resources are no more pointers 2021-10-28 17:53:17 +02:00
Dario Tranchitella
c2218912eb fix: pointer doesn't trigger resources pruning 2021-10-28 17:53:17 +02:00
Tom OBrien
e361e2d424 fix: allowing regex underscore for container registry enforcement
While not best practice, underscores can be used in image references and so should be allowed.
2021-10-27 20:55:39 +02:00
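As a rough illustration of the change (the exact expression used by Capsule may differ), the character class of the enforcement pattern simply has to admit '_' so that image references containing underscores are not rejected:

```go
package main

import (
	"fmt"
	"regexp"
)

// imageRef is an illustrative pattern only: lowercase letters, digits, dots,
// slashes, colons, dashes — and the underscore this fix is about.
var imageRef = regexp.MustCompile(`^[a-z0-9._/:-]+$`)

func main() {
	fmt.Println(imageRef.MatchString("registry.example.com/my_team/nginx:1.21")) // true
}
```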
Dario Tranchitella
260b60d263 build(helm): bumping up to new Helm version 2021-10-24 17:04:58 +02:00
maxgio
e0d5e6feb2 Refactor helper script to create a Capsule user (#454)
* chore(hack/create-user.sh): let pick bash interpreter from path

The bash interpreter binary may be installed at a path other than /bin/bash.

Signed-off-by: maxgio92 <massimiliano.giovagnoli.1992@gmail.com>

* refactor(hack/create-user.sh): add helper function to apply dry

Add a helper function to check for command existence.

Signed-off-by: maxgio92 <massimiliano.giovagnoli.1992@gmail.com>
2021-10-22 20:55:52 +02:00
Adriano Pezzuto
0784dc7177 docs: add service account group to Capsule group (#450) 2021-10-15 14:57:55 +02:00
Vivek Kumar Singh
b17c6c4636 fix(helm): do not hardcode namespace for webhook configs 2021-10-07 16:14:22 +02:00
Bright Zheng
52cf597041 docs: use one patch for each webhook 2021-10-02 17:13:20 +02:00
Bright Zheng
b8dcded882 docs: add dev env diagram 2021-10-02 17:13:20 +02:00
Bright Zheng
6a175e9017 docs: explicitly add the contribution section 2021-10-02 17:13:20 +02:00
Bright Zheng
3c609f84db docs: tune the dev setup process 2021-10-02 17:13:20 +02:00
Bright Zheng
7c3a59c4e4 feat: ignore vscode 2021-10-02 17:13:20 +02:00
Bright Zheng
d3e3b8a881 docs: review and enhance dev guide 2021-09-30 21:26:31 +02:00
Bright Zheng
7a8148bd58 docs: add dev guide 2021-09-30 21:26:31 +02:00
Bright Zheng
405d3ac52d docs: move and refactor contributing.md 2021-09-30 21:26:31 +02:00
Bright Zheng
f92acf9a9d fix: correct the make run issue 2021-09-30 21:26:31 +02:00
Pietro Terrizzi
bbb7b850d6 fix: avoid CRD reinstall 2021-09-30 21:16:04 +02:00
186 changed files with 33197 additions and 4920 deletions

View File

@@ -5,8 +5,8 @@ on:
branches: [ "*" ]
paths:
- '.github/workflows/e2e.yml'
- 'api/*'
- 'controllers/*'
- 'api/**'
- 'controllers/**'
- 'e2e/*'
- 'Dockerfile'
- 'go.*'
@@ -16,8 +16,8 @@ on:
branches: [ "*" ]
paths:
- '.github/workflows/e2e.yml'
- 'api/*'
- 'controllers/*'
- 'api/**'
- 'controllers/**'
- 'e2e/*'
- 'Dockerfile'
- 'go.*'
@@ -29,7 +29,7 @@ jobs:
name: Kubernetes
strategy:
matrix:
k8s-version: ['v1.16.15', 'v1.17.11', 'v1.18.8', 'v1.19.4', 'v1.20.7', 'v1.21.2', 'v1.22.0']
k8s-version: ['v1.16.15', 'v1.17.11', 'v1.18.8', 'v1.19.4', 'v1.20.7', 'v1.21.2', 'v1.22.4', 'v1.23.0']
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2

View File

@@ -6,9 +6,6 @@ on:
tags: [ "helm-v*" ]
pull_request:
branches: [ "*" ]
create:
branches: [ "*" ]
tags: [ "helm-v*" ]
jobs:
lint:

1
.gitignore vendored
View File

@@ -22,6 +22,7 @@ bin
*.swp
*.swo
*~
.vscode
**/*.kubeconfig
**/*.crt

View File

@@ -45,7 +45,7 @@ manager: generate fmt vet
# Run against the configured Kubernetes cluster in ~/.kube/config
run: generate manifests
go run ./main.go
go run .
# Creates the single file to install Capsule without any external dependency
installer: manifests kustomize
@@ -78,6 +78,58 @@ manifests: controller-gen
generate: controller-gen
$(CONTROLLER_GEN) object:headerFile="hack/boilerplate.go.txt" paths="./..."
# Setup development env
# Usage:
# LAPTOP_HOST_IP=<YOUR_LAPTOP_IP> make dev-setup
# For example:
# LAPTOP_HOST_IP=192.168.10.101 make dev-setup
define TLS_CNF
[ req ]
default_bits = 4096
distinguished_name = req_distinguished_name
req_extensions = req_ext
[ req_distinguished_name ]
countryName = SG
stateOrProvinceName = SG
localityName = SG
organizationName = CAPSULE
commonName = CAPSULE
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
IP.1 = $(LAPTOP_HOST_IP)
endef
export TLS_CNF
dev-setup:
kubectl -n capsule-system scale deployment capsule-controller-manager --replicas=0
mkdir -p /tmp/k8s-webhook-server/serving-certs
echo "$${TLS_CNF}" > _tls.cnf
openssl req -newkey rsa:4096 -days 3650 -nodes -x509 \
-subj "/C=SG/ST=SG/L=SG/O=CAPSULE/CN=CAPSULE" \
-extensions req_ext \
-config _tls.cnf \
-keyout /tmp/k8s-webhook-server/serving-certs/tls.key \
-out /tmp/k8s-webhook-server/serving-certs/tls.crt
rm -f _tls.cnf
export WEBHOOK_URL="https://$${LAPTOP_HOST_IP}:9443"; \
export CA_BUNDLE=`openssl base64 -in /tmp/k8s-webhook-server/serving-certs/tls.crt | tr -d '\n'`; \
kubectl patch MutatingWebhookConfiguration capsule-mutating-webhook-configuration \
--type='json' -p="[\
{'op': 'replace', 'path': '/webhooks/0/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/mutate-v1-namespace-owner-reference\",'caBundle':\"$${CA_BUNDLE}\"}}\
]" && \
kubectl patch ValidatingWebhookConfiguration capsule-validating-webhook-configuration \
--type='json' -p="[\
{'op': 'replace', 'path': '/webhooks/0/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/cordoning\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/1/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/ingresses\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/2/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/namespaces\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/3/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/networkpolicies\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/4/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/pods\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/5/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/persistentvolumeclaims\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/6/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/services\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/7/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/tenants\",'caBundle':\"$${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/8/clientConfig', 'value':{'url':\"$${WEBHOOK_URL}/nodes\",'caBundle':\"$${CA_BUNDLE}\"}}\
]";
# Build the docker image
docker-build: test
docker build . -t ${IMG} --build-arg GIT_HEAD_COMMIT=$(GIT_HEAD_COMMIT) \

136
README.md
View File

@@ -14,161 +14,63 @@
---
# Kubernetes multi-tenancy made easy
**Capsule** helps to implement a multi-tenancy and policy-based environment in your Kubernetes cluster. It is not intended to be yet another _PaaS_, instead, it has been designed as a micro-services-based ecosystem with the minimalist approach, leveraging only on upstream Kubernetes.
**Capsule** implements a multi-tenant and policy-based environment in your Kubernetes cluster. It is designed as a micro-services-based ecosystem with the minimalist approach, leveraging only on upstream Kubernetes.
# What's the problem with the current status?
Kubernetes introduces the _Namespace_ object type to create logical partitions of the cluster as isolated *slices*. However, implementing advanced multi-tenancy scenarios, it soon becomes complicated because of the flat structure of Kubernetes namespaces and the impossibility to share resources among namespaces belonging to the same tenant. To overcome this, cluster admins tend to provision a dedicated cluster for each groups of users, teams, or departments. As an organization grows, the number of clusters to manage and keep aligned becomes an operational nightmare, described as the well know phenomena of the _clusters sprawl_.
# Entering Capsule
Capsule takes a different approach. In a single cluster, the Capsule Controller aggregates multiple namespaces in a lightweight abstraction called _Tenant_, basically a grouping of Kubernetes Namespaces. Within each tenant, users are free to create their namespaces and share all the assigned resources while the Capsule Policy Engine keeps the different tenants isolated from each other.
The _Network and Security Policies_, _Resource Quota_, _Limit Ranges_, _RBAC_, and other policies defined at the tenant level are automatically inherited by all the namespaces in the tenant. Then users are free to operate their tenants in autonomy, without the intervention of the cluster administrator. Take a look at following diagram:
Capsule takes a different approach. In a single cluster, the Capsule Controller aggregates multiple namespaces in a lightweight abstraction called _Tenant_, basically a grouping of Kubernetes Namespaces. Within each tenant, users are free to create their namespaces and share all the assigned resources.
<p align="center" style="padding: 60px 20px">
<img src="assets/capsule-operator.svg" />
</p>
On the other side, the Capsule Policy Engine keeps the different tenants isolated from each other. _Network and Security Policies_, _Resource Quota_, _Limit Ranges_, _RBAC_, and other policies defined at the tenant level are automatically inherited by all the namespaces in the tenant. Then users are free to operate their tenants in autonomy, without the intervention of the cluster administrator.
# Features
## Self-Service
Leave to developers the freedom to self-provision their cluster resources according to the assigned boundaries.
Leave developers the freedom to self-provision their cluster resources according to the assigned boundaries.
## Preventing Clusters Sprawl
Share a single cluster with multiple teams, groups of users, or departments by saving operational and management efforts.
## Governance
Leverage Kubernetes Admission Controllers to enforce the industry security best practices and meet legal requirements.
Leverage Kubernetes Admission Controllers to enforce the industry security best practices and meet policy requirements.
## Resources Control
Take control of the resources consumed by users while preventing them to overtake.
## Native Experience
Provide multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customized binaries.
## GitOps ready
Capsule is completely declarative and GitOps ready.
## Bring your own device (BYOD)
Assign to tenants a dedicated set of compute, storage, and network resources and avoid the noisy neighbors' effect.
# Common use cases for Capsule
Please, refer to the corresponding [section](./docs/operator/use-cases/overview.md) in the project documentation for a detailed list of common use cases that Capsule can address.
# Installation
Make sure you have access to a Kubernetes cluster as administrator.
There are two ways to install Capsule:
* Use the Helm Chart available [here](./charts/capsule/README.md)
* Use the [single YAML file installer](./config/install.yaml)
## Install with the single YAML file installer
Ensure you have `kubectl` installed in your `PATH`.
Clone this repository and move to the repo folder:
```
$ kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/install.yaml
```
It will install the Capsule controller in a dedicated namespace `capsule-system`.
## How to create Tenants
Use the scaffold [Tenant](config/samples/capsule_v1beta1_tenant.yaml) and simply apply as cluster admin.
```
$ kubectl apply -f config/samples/capsule_v1beta1_tenant.yaml
tenant.capsule.clastix.io/gas created
```
You can check the tenant just created as
```
$ kubectl get tenants
NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR AGE
gas Active 3 0 {"kubernetes.io/os":"linux"} 25s
```
## Tenant owners
Each tenant comes with a delegated user or group of users acting as the tenant admin. In the Capsule jargon, this is called the _Tenant Owner_. Other users can operate inside a tenant with different levels of permissions and authorizations assigned directly by the Tenant Owner.
Capsule does not care about the authentication strategy used in the cluster and all the Kubernetes methods of [authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/) are supported. The only requirement to use Capsule is to assign tenant users to the the group defined by `--capsule-user-group` option, which defaults to `capsule.clastix.io`.
Assignment to a group depends on the authentication strategy in your cluster.
For example, if you are using `capsule.clastix.io`, users authenticated through a _X.509_ certificate must have `capsule.clastix.io` as _Organization_: `-subj "/CN=${USER}/O=capsule.clastix.io"`
Users authenticated through an _OIDC token_ must have in their token:
```json
...
"users_groups": [
"capsule.clastix.io",
"other_group"
]
```
The [hack/create-user.sh](hack/create-user.sh) can help you set up a dummy `kubeconfig` for the `bob` user acting as owner of a tenant called `gas`
```bash
./hack/create-user.sh bob gas
...
certificatesigningrequest.certificates.k8s.io/bob-gas created
certificatesigningrequest.certificates.k8s.io/bob-gas approved
kubeconfig file is: bob-gas.kubeconfig
to use it as bob export KUBECONFIG=bob-gas.kubeconfig
```
## Working with Tenants
Log in to the Kubernetes cluster as `bob` tenant owner
```
$ export KUBECONFIG=bob-gas.kubeconfig
```
and create a couple of new namespaces
```
$ kubectl create namespace gas-production
$ kubectl create namespace gas-development
```
As user `bob` you can operate with fully admin permissions:
```
$ kubectl -n gas-development run nginx --image=docker.io/nginx
$ kubectl -n gas-development get pods
```
but limited to only your own namespaces:
```
$ kubectl -n kube-system get pods
Error from server (Forbidden): pods is forbidden:
User "bob" cannot list resource "pods" in API group "" in the namespace "kube-system"
```
# Documentation
Please, check the project [documentation](./docs/index.md) for more cool things you can do with Capsule.
# Removal
Similar to `deploy`, you can get rid of Capsule using the `remove` target.
Please, check the project [documentation](https://capsule.clastix.io) for the cool things you can do with Capsule.
```
$ make remove
```
# Contributions
Capsule is Open Source with Apache 2 license and any contribution is welcome.
# FAQ
- Q. How to pronounce Capsule?
A. It should be pronounced as `/ˈkæpsjuːl/`.
- Q. Can I contribute?
A. Absolutely! Capsule is Open Source with Apache 2 license and any contribution is welcome. Please refer to the corresponding [section](./docs/operator/contributing.md) in the documentation.
- Q. Is it production grade?
A. Although under frequent development and improvements, Capsule is ready to be used in production environments as currently, people are using it in public and private deployments. Check out the [release](https://github.com/clastix/capsule/releases) page for a detailed list of available versions.

View File

@@ -0,0 +1,8 @@
package v1alpha1
const (
ForbiddenNodeLabelsAnnotation = "capsule.clastix.io/forbidden-node-labels"
ForbiddenNodeLabelsRegexpAnnotation = "capsule.clastix.io/forbidden-node-labels-regexp"
ForbiddenNodeAnnotationsAnnotation = "capsule.clastix.io/forbidden-node-annotations"
ForbiddenNodeAnnotationsRegexpAnnotation = "capsule.clastix.io/forbidden-node-annotations-regexp"
)
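One possible reading of these annotation keys (assumed semantics, not taken from the operator's code): the plain variants carry a comma-separated deny-list of keys, the `-regexp` variants a single pattern, and a node label or annotation is rejected when it matches either. A minimal sketch:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// isForbidden reports whether a label/annotation key appears in the exact
// deny-list or matches the deny pattern; both inputs are assumed to come from
// the annotations defined above.
func isForbidden(key, denyList, denyPattern string) bool {
	for _, item := range strings.Split(denyList, ",") {
		if strings.TrimSpace(item) == key {
			return true
		}
	}
	if denyPattern == "" {
		return false
	}
	re, err := regexp.Compile(denyPattern)
	return err == nil && re.MatchString(key)
}

func main() {
	fmt.Println(isForbidden("node-role.kubernetes.io/master", "node-role.kubernetes.io/master,beta.kubernetes.io/arch", `^kops\.k8s\.io/`)) // true
}
```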

View File

@@ -200,17 +200,17 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
}
}
if len(t.Spec.NetworkPolicies) > 0 {
dst.Spec.NetworkPolicies = &capsulev1beta1.NetworkPolicySpec{
dst.Spec.NetworkPolicies = capsulev1beta1.NetworkPolicySpec{
Items: t.Spec.NetworkPolicies,
}
}
if len(t.Spec.LimitRanges) > 0 {
dst.Spec.LimitRanges = &capsulev1beta1.LimitRangesSpec{
dst.Spec.LimitRanges = capsulev1beta1.LimitRangesSpec{
Items: t.Spec.LimitRanges,
}
}
if len(t.Spec.ResourceQuota) > 0 {
dst.Spec.ResourceQuota = &capsulev1beta1.ResourceQuotaSpec{
dst.Spec.ResourceQuota = capsulev1beta1.ResourceQuotaSpec{
Scope: func() capsulev1beta1.ResourceQuotaScope {
if v, ok := t.GetAnnotations()[resourceQuotaScopeAnnotation]; ok {
switch v {
@@ -500,13 +500,13 @@ func (t *Tenant) ConvertFrom(srcRaw conversion.Hub) error {
Regex: src.Spec.ContainerRegistries.Regex,
}
}
if src.Spec.NetworkPolicies != nil {
if len(src.Spec.NetworkPolicies.Items) > 0 {
t.Spec.NetworkPolicies = src.Spec.NetworkPolicies.Items
}
if src.Spec.LimitRanges != nil {
if len(src.Spec.LimitRanges.Items) > 0 {
t.Spec.LimitRanges = src.Spec.LimitRanges.Items
}
if src.Spec.ResourceQuota != nil {
if len(src.Spec.ResourceQuota.Items) > 0 {
t.Annotations[resourceQuotaScopeAnnotation] = string(src.Spec.ResourceQuota.Scope)
t.Spec.ResourceQuota = src.Spec.ResourceQuota.Items
}
@@ -545,9 +545,15 @@ func (t *Tenant) ConvertFrom(srcRaw conversion.Hub) error {
}
if src.Spec.ServiceOptions != nil && src.Spec.ServiceOptions.AllowedServices != nil {
t.Annotations[enableNodePortsAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.NodePort)
t.Annotations[enableExternalNameAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.ExternalName)
t.Annotations[enableLoadBalancerAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.LoadBalancer)
if src.Spec.ServiceOptions.AllowedServices.NodePort != nil {
t.Annotations[enableNodePortsAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.NodePort)
}
if src.Spec.ServiceOptions.AllowedServices.ExternalName != nil {
t.Annotations[enableExternalNameAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.ExternalName)
}
if src.Spec.ServiceOptions.AllowedServices.LoadBalancer != nil {
t.Annotations[enableLoadBalancerAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.LoadBalancer)
}
}
// Status

View File

@@ -240,13 +240,13 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
},
ContainerRegistries: v1beta1AllowedListSpec,
NodeSelector: nodeSelector,
NetworkPolicies: &capsulev1beta1.NetworkPolicySpec{
NetworkPolicies: capsulev1beta1.NetworkPolicySpec{
Items: networkPolicies,
},
LimitRanges: &capsulev1beta1.LimitRangesSpec{
LimitRanges: capsulev1beta1.LimitRangesSpec{
Items: limitRanges,
},
ResourceQuota: &capsulev1beta1.ResourceQuotaSpec{
ResourceQuota: capsulev1beta1.ResourceQuotaSpec{
Scope: capsulev1beta1.ResourceQuotaScopeNamespace,
Items: resourceQuotas,
},

View File

@@ -1,3 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
// Copyright 2020-2021 Clastix Labs

View File

@@ -0,0 +1,47 @@
// Copyright 2020-2021 Clastix Labs
// SPDX-License-Identifier: Apache-2.0
package v1beta1
import (
"fmt"
"strconv"
)
const (
ResourceQuotaAnnotationPrefix = "quota.resources.capsule.clastix.io"
ResourceUsedAnnotationPrefix = "used.resources.capsule.clastix.io"
)
func UsedAnnotationForResource(kindGroup string) string {
return fmt.Sprintf("%s/%s", ResourceUsedAnnotationPrefix, kindGroup)
}
func LimitAnnotationForResource(kindGroup string) string {
return fmt.Sprintf("%s/%s", ResourceQuotaAnnotationPrefix, kindGroup)
}
func GetUsedResourceFromTenant(tenant Tenant, kindGroup string) (int64, error) {
usedStr, ok := tenant.GetAnnotations()[UsedAnnotationForResource(kindGroup)]
if !ok {
usedStr = "0"
}
used, _ := strconv.ParseInt(usedStr, 10, 10)
return used, nil
}
func GetLimitResourceFromTenant(tenant Tenant, kindGroup string) (int64, error) {
limitStr, ok := tenant.GetAnnotations()[LimitAnnotationForResource(kindGroup)]
if !ok {
return 0, fmt.Errorf("resource %s is not limited for the current tenant", kindGroup)
}
limit, err := strconv.ParseInt(limitStr, 10, 10)
if err != nil {
return 0, fmt.Errorf("resource %s limit cannot be parsed, %w", kindGroup, err)
}
return limit, nil
}
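A hypothetical caller of the two helpers above, assuming it sits in the same v1beta1 package: before admitting one more object of a given kind-group inside the Tenant, compare the used count against the configured limit (the webhook plumbing around it is omitted).

```go
package v1beta1

// canCreateOne is a sketch only: a missing (or unparsable) limit annotation is
// treated as "not restricted", mirroring how GetLimitResourceFromTenant signals
// the absence of a limit with an error.
func canCreateOne(tenant Tenant, kindGroup string) bool {
	limit, err := GetLimitResourceFromTenant(tenant, kindGroup)
	if err != nil {
		return true
	}
	used, _ := GetUsedResourceFromTenant(tenant, kindGroup)
	return used+1 <= limit
}
```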

View File

@@ -5,6 +5,7 @@ package v1beta1
import (
"fmt"
"strings"
)
const (
@@ -21,9 +22,9 @@ const (
)
func UsedQuotaFor(resource fmt.Stringer) string {
return "quota.capsule.clastix.io/used-" + resource.String()
return "quota.capsule.clastix.io/used-" + strings.ReplaceAll(resource.String(), "/", "_")
}
func HardQuotaFor(resource fmt.Stringer) string {
return "quota.capsule.clastix.io/hard-" + resource.String()
return "quota.capsule.clastix.io/hard-" + strings.ReplaceAll(resource.String(), "/", "_")
}
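For example, ResourceQuota object-count resource names such as `count/deployments.apps` contain a '/', while an annotation key may only use one '/' as the prefix separator; replacing it keeps the key valid. A tiny illustration:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// The embedded '/' in the resource name would make the annotation key invalid,
	// hence the replacement with '_'.
	fmt.Println("quota.capsule.clastix.io/used-" + strings.ReplaceAll("count/deployments.apps", "/", "_"))
	// quota.capsule.clastix.io/used-count_deployments.apps
}
```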

View File

@@ -21,14 +21,14 @@ type TenantSpec struct {
IngressOptions IngressOptions `json:"ingressOptions,omitempty"`
// Specifies the trusted Image Registries assigned to the Tenant. Capsule assures that all Pods resources created in the Tenant can use only one of the allowed trusted registries. Optional.
ContainerRegistries *AllowedListSpec `json:"containerRegistries,omitempty"`
// Specifies the label to control the placement of pods on a given pool of worker nodes. All namesapces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
// Specifies the label to control the placement of pods on a given pool of worker nodes. All namespaces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
NodeSelector map[string]string `json:"nodeSelector,omitempty"`
// Specifies the NetworkPolicies assigned to the Tenant. The assigned NetworkPolicies are inherited by any namespace created in the Tenant. Optional.
NetworkPolicies *NetworkPolicySpec `json:"networkPolicies,omitempty"`
// Specifies the NetworkPolicies assigned to the Tenant. The assigned NetworkPolicies are inherited by any namespace created in the Tenant. Optional.
LimitRanges *LimitRangesSpec `json:"limitRanges,omitempty"`
NetworkPolicies NetworkPolicySpec `json:"networkPolicies,omitempty"`
// Specifies the resource min/max usage restrictions to the Tenant. The assigned values are inherited by any namespace created in the Tenant. Optional.
LimitRanges LimitRangesSpec `json:"limitRanges,omitempty"`
// Specifies a list of ResourceQuota resources assigned to the Tenant. The assigned values are inherited by any namespace created in the Tenant. The Capsule operator aggregates ResourceQuota at Tenant level, so that the hard quota is never crossed for the given Tenant. This permits the Tenant owner to consume resources in the Tenant regardless of the namespace. Optional.
ResourceQuota *ResourceQuotaSpec `json:"resourceQuotas,omitempty"`
ResourceQuota ResourceQuotaSpec `json:"resourceQuotas,omitempty"`
// Specifies additional RoleBindings assigned to the Tenant. Capsule will ensure that all namespaces in the Tenant always contain the RoleBinding for the given ClusterRole. Optional.
AdditionalRoleBindings []AdditionalRoleBindingsSpec `json:"additionalRoleBindings,omitempty"`
// Specify the allowed values for the imagePullPolicies option in Pod resources. Capsule assures that all Pod resources created in the Tenant can use only one of the allowed policy. Optional.

View File

@@ -1,3 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
// Copyright 2020-2021 Clastix Labs
@@ -480,21 +481,9 @@ func (in *TenantSpec) DeepCopyInto(out *TenantSpec) {
(*out)[key] = val
}
}
if in.NetworkPolicies != nil {
in, out := &in.NetworkPolicies, &out.NetworkPolicies
*out = new(NetworkPolicySpec)
(*in).DeepCopyInto(*out)
}
if in.LimitRanges != nil {
in, out := &in.LimitRanges, &out.LimitRanges
*out = new(LimitRangesSpec)
(*in).DeepCopyInto(*out)
}
if in.ResourceQuota != nil {
in, out := &in.ResourceQuota, &out.ResourceQuota
*out = new(ResourceQuotaSpec)
(*in).DeepCopyInto(*out)
}
in.NetworkPolicies.DeepCopyInto(&out.NetworkPolicies)
in.LimitRanges.DeepCopyInto(&out.LimitRanges)
in.ResourceQuota.DeepCopyInto(&out.ResourceQuota)
if in.AdditionalRoleBindings != nil {
in, out := &in.AdditionalRoleBindings, &out.AdditionalRoleBindings
*out = make([]AdditionalRoleBindingsSpec, len(*in))

View File

@@ -21,8 +21,8 @@ sources:
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 0.1.2
version: 0.1.7
# This is the version number of the application being deployed.
# This version number should be incremented each time you make changes to the application.
appVersion: 0.1.0
appVersion: 0.1.1

View File

@@ -24,23 +24,19 @@ The Capsule Operator Chart can be used to instantly deploy the Capsule Operator
$ helm repo add clastix https://clastix.github.io/charts
2. Create the Namespace:
2. Install the Chart:
$ kubectl create namespace capsule-system
$ helm install capsule clastix/capsule -n capsule-system --create-namespace
3. Install the Chart:
$ helm install capsule clastix/capsule -n capsule-system
4. Show the status:
3. Show the status:
$ helm status capsule -n capsule-system
5. Upgrade the Chart
4. Upgrade the Chart
$ helm upgrade capsule clastix/capsule -n capsule-system
6. Uninstall the Chart
5. Uninstall the Chart
$ helm uninstall capsule -n capsule-system
@@ -80,6 +76,7 @@ Parameter | Description | Default
`manager.resources.limits/cpu` | Set the memory limits assigned to the controller. | `128Mi`
`mutatingWebhooksTimeoutSeconds` | Timeout in seconds for mutating webhooks. | `30`
`validatingWebhooksTimeoutSeconds` | Timeout in seconds for validating webhooks. | `30`
`webhooks` | Additional configuration for capsule webhooks. |
`imagePullSecrets` | Configuration for `imagePullSecrets` so that you can use a private images registry. | `[]`
`serviceAccount.create` | Specifies whether a service account should be created. | `true`
`serviceAccount.annotations` | Annotations to add to the service account. | `{}`
@@ -110,6 +107,7 @@ This Helm Chart creates the following Kubernetes resources in the release namesp
* CA Secret
* Certificate Secret
* Tenant Custom Resource Definition
* CapsuleConfiguration Custom Resource Definition
* MutatingWebHookConfiguration
* ValidatingWebHookConfiguration
* RBAC Cluster Roles
@@ -129,4 +127,4 @@ Capsule, as many other add-ons, defines its own set of Custom Resource Definitio
## More
See Capsule [use cases](https://github.com/clastix/capsule/blob/master/use_cases.md) for more information about how to use Capsule.
See Capsule [tutorial](https://github.com/clastix/capsule/blob/master/docs/content/general/tutorial.md) for more information about how to use Capsule.

View File

@@ -7,7 +7,17 @@ metadata:
name: tenants.capsule.clastix.io
spec:
conversion:
strategy: None
strategy: Webhook
webhook:
clientConfig:
service:
name: capsule-webhook-service
namespace: capsule-system
path: /convert
port: 443
conversionReviewVersions:
- v1alpha1
- v1beta1
group: capsule.clastix.io
names:
kind: Tenant

View File

@@ -91,11 +91,26 @@ Create the proxy fully-qualified Docker image to use
{{- printf "%s:%s" .Values.proxy.image.repository .Values.proxy.image.tag -}}
{{- end }}
{{/*
Determine the Kubernetes version to use for jobsFullyQualifiedDockerImage tag
*/}}
{{- define "capsule.jobsTagKubeVersion" -}}
{{- if contains "-eks-" .Capabilities.KubeVersion.GitVersion }}
{{- print "v" .Capabilities.KubeVersion.Major "." (.Capabilities.KubeVersion.Minor | replace "+" "") -}}
{{- else }}
{{- print "v" .Capabilities.KubeVersion.Major "." .Capabilities.KubeVersion.Minor -}}
{{- end }}
{{- end }}
{{/*
Create the jobs fully-qualified Docker image to use
*/}}
{{- define "capsule.jobsFullyQualifiedDockerImage" -}}
{{- if .Values.jobs.image.tag }}
{{- printf "%s:%s" .Values.jobs.image.repository .Values.jobs.image.tag -}}
{{- else }}
{{- printf "%s:%s" .Values.jobs.image.repository (include "capsule.jobsTagKubeVersion" .) -}}
{{- end }}
{{- end }}
{{/*

View File

@@ -8,4 +8,3 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "capsule.secretCaName" . }}
data:

View File

@@ -8,4 +8,3 @@ metadata:
{{- toYaml . | nindent 4 }}
{{- end }}
name: {{ include "capsule.secretTlsName" . }}
data:

View File

@@ -25,6 +25,10 @@ spec:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: Never
containers:
- name: post-install-job

View File

@@ -26,6 +26,10 @@ spec:
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
restartPolicy: Never
containers:
- name: pre-delete-job

View File

@@ -163,7 +163,7 @@ webhooks:
caBundle: Cg==
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: capsule-system
namespace: {{ .Release.Namespace }}
path: /persistentvolumeclaims
failurePolicy: {{ .Values.webhooks.persistentvolumeclaims.failurePolicy }}
name: pvc.capsule.clastix.io
@@ -240,3 +240,29 @@ webhooks:
scope: '*'
sideEffects: None
timeoutSeconds: {{ .Values.validatingWebhooksTimeoutSeconds }}
- admissionReviewVersions:
- v1
- v1beta1
clientConfig:
caBundle: Cg==
service:
name: {{ include "capsule.fullname" . }}-webhook-service
namespace: {{ .Release.Namespace }}
path: /nodes
port: 443
failurePolicy: {{ .Values.webhooks.nodes.failurePolicy }}
name: nodes.capsule.clastix.io
matchPolicy: Exact
namespaceSelector: {}
objectSelector: {}
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- UPDATE
resources:
- nodes
sideEffects: None
timeoutSeconds: {{ .Values.validatingWebhooksTimeoutSeconds }}

View File

@@ -41,7 +41,7 @@ jobs:
image:
repository: quay.io/clastix/kubectl
pullPolicy: IfNotPresent
tag: "v1.20.7"
tag: ""
imagePullSecrets: []
serviceAccount:
create: true
@@ -123,5 +123,7 @@ webhooks:
matchExpressions:
- key: capsule.clastix.io/tenant
operator: Exists
nodes:
failurePolicy: Fail
mutatingWebhooksTimeoutSeconds: 30
validatingWebhooksTimeoutSeconds: 30

View File

@@ -697,7 +697,7 @@ spec:
type: string
type: object
limitRanges:
description: Specifies the NetworkPolicies assigned to the Tenant. The assigned NetworkPolicies are inherited by any namespace created in the Tenant. Optional.
description: Specifies the resource min/max usage restrictions to the Tenant. The assigned values are inherited by any namespace created in the Tenant. Optional.
properties:
items:
items:
@@ -1055,7 +1055,7 @@ spec:
nodeSelector:
additionalProperties:
type: string
description: Specifies the label to control the placement of pods on a given pool of worker nodes. All namesapces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
description: Specifies the label to control the placement of pods on a given pool of worker nodes. All namespaces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
type: object
owners:
description: Specifies the owners of the Tenant. Mandatory.

View File

@@ -769,7 +769,7 @@ spec:
type: string
type: object
limitRanges:
description: Specifies the NetworkPolicies assigned to the Tenant. The assigned NetworkPolicies are inherited by any namespace created in the Tenant. Optional.
description: Specifies the resource min/max usage restrictions to the Tenant. The assigned values are inherited by any namespace created in the Tenant. Optional.
properties:
items:
items:
@@ -1127,7 +1127,7 @@ spec:
nodeSelector:
additionalProperties:
type: string
description: Specifies the label to control the placement of pods on a given pool of worker nodes. All namesapces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
description: Specifies the label to control the placement of pods on a given pool of worker nodes. All namespaces created within the Tenant will have the node selector annotation. This annotation tells the Kubernetes scheduler to place pods on the nodes having the selector label. Optional.
type: object
owners:
description: Specifies the owners of the Tenant. Mandatory.
@@ -1411,7 +1411,7 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: quay.io/clastix/capsule:v0.1.1-rc0
image: quay.io/clastix/capsule:v0.1.1
imagePullPolicy: IfNotPresent
name: manager
ports:
@@ -1582,6 +1582,29 @@ webhooks:
- networkpolicies
scope: Namespaced
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: capsule-webhook-service
namespace: capsule-system
path: /nodes
failurePolicy: Fail
name: nodes.capsule.clastix.io
namespaceSelector:
matchExpressions:
- key: capsule.clastix.io/tenant
operator: Exists
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- UPDATE
resources:
- nodes
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:

View File

@@ -7,4 +7,4 @@ kind: Kustomization
images:
- name: controller
newName: quay.io/clastix/capsule
newTag: v0.1.1-rc0
newTag: v0.1.1

View File

@@ -118,6 +118,25 @@ webhooks:
resources:
- networkpolicies
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: system
path: /nodes
failurePolicy: Fail
name: nodes.capsule.clastix.io
rules:
- apiGroups:
- ""
apiVersions:
- v1
operations:
- UPDATE
resources:
- nodes
sideEffects: None
- admissionReviewVersions:
- v1
clientConfig:

View File

@@ -34,6 +34,12 @@
matchExpressions:
- key: capsule.clastix.io/tenant
operator: Exists
- op: add
path: /webhooks/7/namespaceSelector
value:
matchExpressions:
- key: capsule.clastix.io/tenant
operator: Exists
- op: add
path: /webhooks/0/rules/0/scope
value: Namespaced
@@ -43,12 +49,12 @@
- op: add
path: /webhooks/3/rules/0/scope
value: Namespaced
- op: add
path: /webhooks/4/rules/0/scope
value: Namespaced
- op: add
path: /webhooks/5/rules/0/scope
value: Namespaced
- op: add
path: /webhooks/6/rules/0/scope
value: Namespaced
- op: add
path: /webhooks/7/rules/0/scope
value: Namespaced

View File

@@ -35,7 +35,7 @@ type CAReconciler struct {
func (r *CAReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&corev1.Secret{}, forOptionPerInstanceName(caSecretName)).
For(&corev1.Secret{}, forOptionPerInstanceName(CASecretName)).
Complete(r)
}
@@ -189,7 +189,7 @@ func (r CAReconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctrl
tls := &corev1.Secret{}
err = r.Get(ctx, types.NamespacedName{
Namespace: r.Namespace,
Name: tlsSecretName,
Name: TLSSecretName,
}, tls)
if err != nil {
r.Log.Error(err, "Capsule TLS Secret missing")

View File

@@ -7,6 +7,6 @@ const (
certSecretKey = "tls.crt"
privateKeySecretKey = "tls.key"
caSecretName = "capsule-ca"
tlsSecretName = "capsule-tls"
CASecretName = "capsule-ca"
TLSSecretName = "capsule-tls"
)

View File

@@ -22,10 +22,10 @@ func getCertificateAuthority(client client.Client, namespace string) (ca cert.CA
err = client.Get(context.TODO(), types.NamespacedName{
Namespace: namespace,
Name: caSecretName,
Name: CASecretName,
}, instance)
if err != nil {
return nil, fmt.Errorf("missing secret %s, cannot reconcile", caSecretName)
return nil, fmt.Errorf("missing secret %s, cannot reconcile", CASecretName)
}
if instance.Data == nil {

View File

@@ -33,7 +33,7 @@ type TLSReconciler struct {
func (r *TLSReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&corev1.Secret{}, forOptionPerInstanceName(tlsSecretName)).
For(&corev1.Secret{}, forOptionPerInstanceName(TLSSecretName)).
Complete(r)
}
@@ -112,7 +112,7 @@ func (r TLSReconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctr
return reconcile.Result{}, err
}
if instance.Name == tlsSecretName && res == controllerutil.OperationResultUpdated {
if instance.Name == TLSSecretName && res == controllerutil.OperationResultUpdated {
r.Log.Info("Capsule TLS certificates has been updated, Controller pods must be restarted to load new certificate")
hostname, _ := os.Hostname()

View File

@@ -9,6 +9,7 @@ import (
"github.com/go-logr/logr"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
@@ -49,6 +50,9 @@ func (r *abstractServiceLabelsReconciler) Reconcile(ctx context.Context, request
err = r.client.Get(ctx, request.NamespacedName, r.obj)
if err != nil {
if errors.IsNotFound(err) {
return reconcile.Result{}, nil
}
return reconcile.Result{}, err
}

View File

@@ -14,8 +14,8 @@ type EndpointSlicesLabelsReconciler struct {
abstractServiceLabelsReconciler
Log logr.Logger
VersionMinor int
VersionMajor int
VersionMinor uint
VersionMajor uint
}
func (r *EndpointSlicesLabelsReconciler) SetupWithManager(mgr ctrl.Manager) error {

View File

@@ -9,6 +9,7 @@ import (
rbacv1 "k8s.io/api/rbac/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/retry"
ctrl "sigs.k8s.io/controller-runtime"
@@ -20,9 +21,10 @@ import (
type Manager struct {
client.Client
Log logr.Logger
Scheme *runtime.Scheme
Recorder record.EventRecorder
Log logr.Logger
Scheme *runtime.Scheme
Recorder record.EventRecorder
RESTConfig *rest.Config
}
func (r *Manager) SetupWithManager(mgr ctrl.Manager) error {
@@ -55,6 +57,12 @@ func (r Manager) Reconcile(ctx context.Context, request ctrl.Request) (result ct
return
}
r.Log.Info("Ensuring limit resources count is updated")
if err = r.syncCustomResourceQuotaUsages(ctx, instance); err != nil {
r.Log.Error(err, "Cannot count limited resources")
return
}
// Ensuring all namespaces are collected
r.Log.Info("Ensuring all Namespaces are collected")
if err = r.collectNamespaces(instance); err != nil {
@@ -68,28 +76,22 @@ func (r Manager) Reconcile(ctx context.Context, request ctrl.Request) (result ct
return
}
if instance.Spec.NetworkPolicies != nil {
r.Log.Info("Starting processing of Network Policies", "items", len(instance.Spec.NetworkPolicies.Items))
if err = r.syncNetworkPolicies(instance); err != nil {
r.Log.Error(err, "Cannot sync NetworkPolicy items")
return
}
r.Log.Info("Starting processing of Network Policies")
if err = r.syncNetworkPolicies(instance); err != nil {
r.Log.Error(err, "Cannot sync NetworkPolicy items")
return
}
if instance.Spec.LimitRanges != nil {
r.Log.Info("Starting processing of Limit Ranges", "items", len(instance.Spec.LimitRanges.Items))
if err = r.syncLimitRanges(instance); err != nil {
r.Log.Error(err, "Cannot sync LimitRange items")
return
}
r.Log.Info("Starting processing of Limit Ranges", "items", len(instance.Spec.LimitRanges.Items))
if err = r.syncLimitRanges(instance); err != nil {
r.Log.Error(err, "Cannot sync LimitRange items")
return
}
if instance.Spec.ResourceQuota != nil {
r.Log.Info("Starting processing of Resource Quotas", "items", len(instance.Spec.ResourceQuota.Items))
if err = r.syncResourceQuotas(instance); err != nil {
r.Log.Error(err, "Cannot sync ResourceQuota items")
return
}
r.Log.Info("Starting processing of Resource Quotas", "items", len(instance.Spec.ResourceQuota.Items))
if err = r.syncResourceQuotas(instance); err != nil {
r.Log.Error(err, "Cannot sync ResourceQuota items")
return
}
r.Log.Info("Ensuring additional RoleBindings for owner")

View File

@@ -0,0 +1,122 @@
package tenant
import (
"context"
"fmt"
"strings"
"golang.org/x/sync/errgroup"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/util/retry"
capsulev1beta1 "github.com/clastix/capsule/api/v1beta1"
)
func (r *Manager) syncCustomResourceQuotaUsages(ctx context.Context, tenant *capsulev1beta1.Tenant) error {
type resource struct {
kind string
group string
version string
}
var resourceList []resource
for k := range tenant.GetAnnotations() {
if !strings.HasPrefix(k, capsulev1beta1.ResourceQuotaAnnotationPrefix) {
continue
}
parts := strings.Split(k, "/")
if len(parts) != 2 {
r.Log.Info("non well-formed Resource Limit annotation", "key", k)
continue
}
parts = strings.Split(parts[1], "_")
if len(parts) != 2 {
r.Log.Info("non well-formed Resource Limit annotation, cannot retrieve version", "key", k)
continue
}
groupKindParts := strings.Split(parts[0], ".")
if len(groupKindParts) < 2 {
r.Log.Info("non well-formed Resource Limit annotation, cannot retrieve kind and group", "key", k)
continue
}
resourceList = append(resourceList, resource{
kind: groupKindParts[0],
group: strings.Join(groupKindParts[1:], "."),
version: parts[1],
})
}
errGroup := new(errgroup.Group)
usedMap := make(map[string]int)
defer func() {
for gvk, used := range usedMap {
err := retry.RetryOnConflict(retry.DefaultBackoff, func() (retryErr error) {
tnt := &capsulev1beta1.Tenant{}
if retryErr = r.Client.Get(ctx, types.NamespacedName{Name: tenant.GetName()}, tnt); retryErr != nil {
return
}
if tnt.GetAnnotations() == nil {
tnt.Annotations = make(map[string]string)
}
tnt.Annotations[capsulev1beta1.UsedAnnotationForResource(gvk)] = fmt.Sprintf("%d", used)
return r.Client.Update(ctx, tnt)
})
if err != nil {
r.Log.Error(err, "cannot update custom Resource Quota", "GVK", gvk)
}
}
}()
for _, item := range resourceList {
res := item
errGroup.Go(func() (scopeErr error) {
dynamicClient := dynamic.NewForConfigOrDie(r.RESTConfig)
for _, ns := range tenant.Status.Namespaces {
var list *unstructured.UnstructuredList
list, scopeErr = dynamicClient.Resource(schema.GroupVersionResource{Group: res.group, Version: res.version, Resource: res.kind}).List(ctx, metav1.ListOptions{
FieldSelector: fmt.Sprintf("metadata.namespace==%s", ns),
})
if scopeErr != nil {
return scopeErr
}
key := fmt.Sprintf("%s.%s_%s", res.kind, res.group, res.version)
if _, ok := usedMap[key]; !ok {
usedMap[key] = 0
}
usedMap[key] += len(list.Items)
}
return
})
}
if err := errGroup.Wait(); err != nil {
return err
}
return nil
}
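Following the parsing above, the limit annotations are expected in the form `quota.resources.capsule.clastix.io/<resource>.<group>_<version>` (plural resource name, API group, then the version after the underscore), with the allowed object count as the value; the reconciler keeps the matching `used.resources.capsule.clastix.io/...` counterpart up to date. A small sketch of both keys for a hypothetical custom resource:

```go
package main

import "fmt"

func main() {
	// Hypothetical CRD: plural "foos", group "example.com", version "v1alpha1".
	resourceGroupVersion := "foos.example.com_v1alpha1"
	fmt.Printf("quota.resources.capsule.clastix.io/%s: \"3\"\n", resourceGroupVersion) // limit set by the cluster admin
	fmt.Printf("used.resources.capsule.clastix.io/%s: \"1\"\n", resourceGroupVersion)  // maintained by this reconciler
}
```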

8
docs/.gitignore vendored Normal file
View File

@@ -0,0 +1,8 @@
*.log
.cache
.DS_Store
src/.temp
node_modules
dist
.env
.env.*

12
docs/README.md Normal file
View File

@@ -0,0 +1,12 @@
# Capsule Documentation
1. Ensure to have [`yarn`](https://classic.yarnpkg.com/lang/en/docs/install/#debian-stable) installed in your path.
2. `yarn install`
## Local development
```shell
yarn develop
```
This will create a local webserver listening on `localhost:8080` with hot-reload of your local changes.

View File

(Binary image files changed: one existing image replaced, 29 KiB before and after; three new images added at 294 KiB, 283 KiB, and 111 KiB.)

View File

@@ -0,0 +1,341 @@
# Capsule Development
## Prerequisites
Make sure you have these tools installed:
- [Go 1.16+](https://golang.org/dl/)
- [Operator SDK 1.7.2+](https://github.com/operator-framework/operator-sdk), or [Kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
- [KinD](https://github.com/kubernetes-sigs/kind) or [k3d](https://k3d.io/), with `kubectl`
- [ngrok](https://ngrok.com/) (if you want to run locally with remote Kubernetes)
- [golangci-lint](https://github.com/golangci/golangci-lint)
- OpenSSL
## Setup a Kubernetes Cluster
A lightweight Kubernetes within your laptop can be very handy for Kubernetes-native development like Capsule.
### By `k3d`
```shell
# Install K3d cli by brew in Mac, or your preferred way
$ brew install k3d
# Export your laptop's IP, e.g. retrieving it by: ifconfig
# Do change this IP to yours
$ export LAPTOP_HOST_IP=192.168.10.101
# Spin up a bare minimum cluster
# Refer to here for more options: https://k3d.io/v4.4.8/usage/commands/k3d_cluster_create/
$ k3d cluster create k3s-capsule --servers 1 --agents 1 --no-lb --k3s-server-arg --tls-san=${LAPTOP_HOST_IP}
# Get Kubeconfig
$ k3d kubeconfig get k3s-capsule > /tmp/k3s-capsule && export KUBECONFIG="/tmp/k3s-capsule"
# This will create a cluster with 1 server and 1 worker node
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-k3s-capsule-server-0 Ready control-plane,master 2m13s v1.21.2+k3s1
k3d-k3s-capsule-agent-0 Ready <none> 2m3s v1.21.2+k3s1
# Or 2 Docker containers if you view it from Docker perspective
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5c26ad840c62 rancher/k3s:v1.21.2-k3s1 "/bin/k3s agent" 53 seconds ago Up 45 seconds k3d-k3s-capsule-agent-0
753998879b28 rancher/k3s:v1.21.2-k3s1 "/bin/k3s server --t…" 53 seconds ago Up 51 seconds 0.0.0.0:49708->6443/tcp k3d-k3s-capsule-server-0
```
### By `kind`
```shell
# # Install kind cli by brew in Mac, or your preferred way
$ brew install kind
# Prepare a kind config file with necessary customization
$ cat > kind.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
apiServerAddress: "0.0.0.0"
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: ClusterConfiguration
metadata:
name: config
apiServer:
certSANs:
- localhost
- 127.0.0.1
- kubernetes
- kubernetes.default.svc
- kubernetes.default.svc.cluster.local
- kind
- 0.0.0.0
- ${LAPTOP_HOST_IP}
- role: worker
EOF
# Spin up a bare minimum cluster with 1 master 1 worker node
$ kind create cluster --name kind-capsule --config kind.yaml
# This will create a cluster with 1 server and 1 worker node
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-capsule-control-plane Ready control-plane,master 84s v1.21.1
kind-capsule-worker Ready <none> 56s v1.21.1
# Or 2 Docker containers if you view it from Docker perspective
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b329fd3a838 kindest/node:v1.21.1 "/usr/local/bin/entr…" About a minute ago Up About a minute 0.0.0.0:54894->6443/tcp kind-capsule-control-plane
7d50f1633555 kindest/node:v1.21.1 "/usr/local/bin/entr…" About a minute ago Up About a minute kind-capsule-worker
```
## Fork, build, and deploy Capsule
The `fork-clone-contribute-pr` flow is common for contributing to OSS projects like Kubernetes and Capsule.
Let's assume you've forked it into your GitHub namespace, say `myuser`, and then you can clone it with Git protocol.
Do remember to change the `myuser` to yours.
```shell
$ git clone git@github.com:myuser/capsule.git && cd capsule
```
It's a good practice to add the upstream as the remote too so we can easily fetch and merge the upstream to our fork:
```shell
$ git remote add upstream https://github.com/clastix/capsule.git
$ git remote -vv
origin git@github.com:myuser/capsule.git (fetch)
origin git@github.com:myuser/capsule.git (push)
upstream https://github.com/clastix/capsule.git (fetch)
upstream https://github.com/clastix/capsule.git (push)
```
Build and deploy:
```shell
# Download the project dependencies
$ go mod download
# Build the Capsule image
$ make docker-build
# Retrieve the built image version
$ export CAPSULE_IMAGE_VESION=`docker images --format '{{.Tag}}' quay.io/clastix/capsule`
# If k3s, load the image into cluster by
$ k3d image import --cluster k3s-capsule capsule quay.io/clastix/capsule:${CAPSULE_IMAGE_VESION}
# If Kind, load the image into cluster by
$ kind load docker-image --name kind-capsule quay.io/clastix/capsule:${CAPSULE_IMAGE_VESION}
# Deploy all the required manifests
# Note: 1) please retry if you see errors; 2) if you want to clean it up first, run: make remove
$ make deploy
# Make sure the controller is running
$ kubectl get pod -n capsule-system
NAME READY STATUS RESTARTS AGE
capsule-controller-manager-5c6b8445cf-566dc 1/1 Running 0 23s
# Check the logs if needed
$ kubectl -n capsule-system logs --all-containers -l control-plane=controller-manager
# You may also deploy a Tenant to make sure everything works end to end
$ kubectl apply -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  - name: system:serviceaccount:capsule-system:default
    kind: ServiceAccount
EOF
# There shouldn't be any errors and you should see the newly created tenant
$ kubectl get tenants
NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR AGE
oil Active 0 14s
```
If you want to test namespace creation and similar operations, make sure to use impersonation:
```sh
$ kubectl ... --as system:serviceaccount:capsule-system:default --as-group capsule.clastix.io
```
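For example, a quick smoke test could be creating a namespace on behalf of the ServiceAccount owner defined in the Tenant above (the namespace name `oil-dev` is just an example):
```shell
$ kubectl create namespace oil-dev \
    --as system:serviceaccount:capsule-system:default \
    --as-group capsule.clastix.io
```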
As of now, a complete Capsule environment has been set up in a `kind`- or `k3d`-powered cluster, and the `capsule-controller-manager` is running as a Deployment serving as:
- The reconcilers for the CRDs, and
- A series of webhooks
## Setup the development environment
During development, we prefer that the code is running within our IDE locally, instead of running as the normal Pod(s) within the Kubernetes cluster.
Such a setup can be illustrated as in the diagram below:
![Development Env](./assets/dev-env.png)
To achieve that, there are some necessary steps we need to walk through, which have been scripted as `make` targets in our `Makefile`.
So the TL;DR answer is:
```shell
# If you haven't installed or run `make deploy` before, do it first
# Note: please retry if you saw errors
$ make deploy
# Retrieve your laptop's IP and execute `make dev-setup` to set up the dev env
# For example: LAPTOP_HOST_IP=192.168.10.101 make dev-setup
$ LAPTOP_HOST_IP="<YOUR_LAPTOP_IP>" make dev-setup
```
This is a very common setup for typical Kubernetes Operator development, so let's walk through it in more detail here.
1. Scaling down the deployed Pod(s) to 0
We need to scale the existing replicas of `capsule-controller-manager` to 0 to avoid reconciliation competition between the Pod(s) and the code running outside of the cluster, in our preferred IDE for example.
```shell
$ kubectl -n capsule-system scale deployment capsule-controller-manager --replicas=0
deployment.apps/capsule-controller-manager scaled
```
2. Preparing TLS certificate for the webhooks
Running webhooks requires TLS; we can prepare the TLS key pair in our development env to handle HTTPS requests.
```shell
# Prepare a simple OpenSSL config file
# Do remember to export LAPTOP_HOST_IP before running this command
$ cat > _tls.cnf <<EOF
[ req ]
default_bits = 4096
distinguished_name = req_distinguished_name
req_extensions = req_ext
[ req_distinguished_name ]
countryName = SG
stateOrProvinceName = SG
localityName = SG
organizationName = CAPSULE
commonName = CAPSULE
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
IP.1 = ${LAPTOP_HOST_IP}
EOF
# Create this dir to mimic the Pod mount point
$ mkdir -p /tmp/k8s-webhook-server/serving-certs
# Generate the TLS cert/key under /tmp/k8s-webhook-server/serving-certs
$ openssl req -newkey rsa:4096 -days 3650 -nodes -x509 \
-subj "/C=SG/ST=SG/L=SG/O=CAPSULE/CN=CAPSULE" \
-extensions req_ext \
-config _tls.cnf \
-keyout /tmp/k8s-webhook-server/serving-certs/tls.key \
-out /tmp/k8s-webhook-server/serving-certs/tls.crt
# Clean it up
$ rm -f _tls.cnf
```
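If you want to double-check the generated certificate, you can inspect its SAN entries; your laptop's IP should show up there (a quick, optional sanity check):
```shell
# Print the certificate and look for the Subject Alternative Name section
$ openssl x509 -in /tmp/k8s-webhook-server/serving-certs/tls.crt -noout -text | grep -A1 "Subject Alternative Name"
```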
3. Patching the Webhooks
By default, the webhooks will be registered with the Services, which will route to the Pods, inside the cluster.
We need to _delegate_ the controllers' and webhooks' services to the code running in our IDE by patching the `MutatingWebhookConfiguration` and `ValidatingWebhookConfiguration`.
```shell
# Export the webhook URL, pointing at your laptop's IP and the 9443 port used by the controllers/webhooks
$ export WEBHOOK_URL="https://${LAPTOP_HOST_IP}:9443"
# Export the cert we just generated as the CA bundle for webhook TLS
$ export CA_BUNDLE=`openssl base64 -in /tmp/k8s-webhook-server/serving-certs/tls.crt | tr -d '\n'`
# Patch the MutatingWebhookConfiguration webhook
$ kubectl patch MutatingWebhookConfiguration capsule-mutating-webhook-configuration \
--type='json' -p="[\
{'op': 'replace', 'path': '/webhooks/0/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/mutate-v1-namespace-owner-reference\",'caBundle':\"${CA_BUNDLE}\"}}\
]"
# Verify it if you want
$ kubectl get MutatingWebhookConfiguration capsule-mutating-webhook-configuration -o yaml
# Patch the ValidatingWebhookConfiguration webhooks
# Note: there is a list of validating webhook endpoints, not just one
$ kubectl patch ValidatingWebhookConfiguration capsule-validating-webhook-configuration \
--type='json' -p="[\
{'op': 'replace', 'path': '/webhooks/0/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/cordoning\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/1/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/ingresses\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/2/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/namespaces\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/3/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/networkpolicies\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/4/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/pods\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/5/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/persistentvolumeclaims\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/6/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/services\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/7/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/tenants\",'caBundle':\"${CA_BUNDLE}\"}},\
{'op': 'replace', 'path': '/webhooks/8/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/nodes\",'caBundle':\"${CA_BUNDLE}\"}}\
]"
# Verify it if you want
$ kubectl get ValidatingWebhookConfiguration capsule-validating-webhook-configuration -o yaml
```
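Later, when you are done with local development, a rough sketch of restoring the in-cluster setup is to re-apply the manifests and scale the controller back up:
```shell
# Re-apply the original manifests (the webhooks point back to the in-cluster Services)
$ make deploy
# Bring the in-cluster controller back
$ kubectl -n capsule-system scale deployment capsule-controller-manager --replicas=1
```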
## Run Capsule outside the cluster
Now we can run Capsule controllers with webhooks outside of the Kubernetes cluster:
```shell
$ export NAMESPACE=capsule-system && export TMPDIR=/tmp/
$ go run .
```
To verify that, we can open a new console and create a new Tenant:
```shell
$ kubectl apply -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  owners:
  - name: alice
    kind: User
EOF
```
We should see output and logs in the console where Capsule is running.
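You can also double-check that the new Tenant has been reconciled (the output below is illustrative):
```shell
$ kubectl get tenants
NAME   STATE    NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR   AGE
gas    Active                     0                                 10s
oil    Active                     0                                 5m
```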
Now it's time to work through our familiar inner loop for development in our preferred IDE. For example, if you're using [Visual Studio Code](https://code.visualstudio.com), this `launch.json` file can be a good start.
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch",
      "type": "go",
      "request": "launch",
      "mode": "auto",
      "program": "${workspaceFolder}",
      "args": [
        "--zap-encoder=console",
        "--zap-log-level=debug",
        "--configuration-name=capsule-default"
      ],
      "env": {
        "NAMESPACE": "capsule-system",
        "TMPDIR": "/tmp/"
      }
    }
  ]
}
```

View File

@@ -0,0 +1,22 @@
# Project Governance
This document lays out the guidelines under which the Capsule project will be governed.
The goal is to make sure that the roles and responsibilities are well defined and clarify how decisions are made.
## Roles
In the context of Capsule project, we consider the following roles:
* __Users__: everyone using Capsule, typically willing to provide feedback by proposing features and/or filing issues.
* __Contributors__: everyone contributing code, documentation, examples, tests, and participating in feature proposals as well as design discussions.
* __Maintainers__: responsible for engaging with and assisting contributors to iterate on their contributions until they reach acceptable quality. Maintainers can decide whether a contribution can be accepted into the project or rejected.
## Release Management
The release process will be governed by Maintainers.
## Roadmap Planning
Maintainers will share roadmap and release versions as milestones in GitHub.

View File

@@ -0,0 +1,111 @@
# Contributing Guidelines
Thank you for your interest in contributing to Capsule. Whether it's a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community.
Please read through this document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your bug report or contribution.
## Pull Requests
Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
1. You are working against the latest source on the *master* branch.
1. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
1. You open an issue to discuss any significant work: we would hate for your time to be wasted.
To send us a pull request, please:
1. Fork the repository.
1. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it
will be hard for us to focus on your change.
1. Ensure local tests pass.
1. Commit to your fork using clear commit messages.
1. Send us a pull request, answering any default questions in the pull request interface.
1. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
Make sure to keep Pull Requests small and functional to make them easier to review, understand, and look up in the commit history. This repository uses "Squash and Merge" to keep our history clean and make it easier to revert changes based on the PR.
Adding the appropriate documentation, unit tests and e2e tests as part of a feature is the responsibility of the
feature owner, whether it is done in the same Pull Request or not.
All Pull Requests must refer to an already open issue: this is the first step of any contribution and also keeps the maintainers informed about the issue you are addressing.
## Commits
The first line of a commit message should not exceed 50 characters.
A commit description is welcome to further explain the changes: just make sure
to leave a blank line after the subject line and wrap the description at a
maximum of 72 characters per line.
Please, split changes into several and documented small commits: this will help us to perform a better review. Commits must follow the Conventional Commits Specification, a lightweight convention on top of commit messages. It provides an easy set of rules for creating an explicit commit history; which makes it easier to write automated tools on top of. This convention dovetails with Semantic Versioning, by describing the features, fixes, and breaking changes made in commit messages. See [Conventional Commits Specification](https://www.conventionalcommits.org) to learn about Conventional Commits.
> In case of errors or the need to change previous commits,
> fix them by squashing to keep the changes atomic.
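A minimal sketch of a commit message following this convention (the scope, body, and issue number are purely illustrative) could be:
```
feat(tenant): propagate additional namespace metadata

Apply the additional labels and annotations defined in the Tenant spec
to every namespace created within the tenant.

Closes #123
```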
## Code convention
Capsule is written in Golang. The changes must follow the Pull Request method where a _GitHub Action_ will
check the code with `golangci-lint`, so ensure your changes respect the coding standard.
### golint
You can easily check them issuing the _Make_ recipe `golint`.
```
# make golint
golangci-lint run -c .golangci.yml
```
> Enabled linters and related options are defined in the [.golangci.yml file](https://github.com/clastix/capsule/blob/master/.golangci.yml)
### goimports
Also, the Go import statements must be sorted following the best practice:
```
<STANDARD LIBRARY>
<EXTERNAL PACKAGES>
<LOCAL PACKAGES>
```
To help you out you can use the _Make_ recipe `goimports`
```
# make goimports
goimports -w -l -local "github.com/clastix/capsule" .
```
## Finding contributions to work on
Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the
default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted'
and 'good first issue' issues is a great place to start.
## Design Docs
A contributor proposes a design with a PR on the repository to allow for revisions and discussions.
If a design needs to be discussed before formulating a document for it, make use of GitHub Discussions to
involve the community on the discussion.
## GitHub Issues
GitHub Issues are used to file bugs, work items, and feature requests with actionable items/issues.
When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
* A reproducible test case or series of steps
* The version of the code being used
* Any modifications you've made relevant to the bug
* Anything unusual about your environment or deployment
## Miscellanea
Please add a single trailing newline at the end of every file, as per the current coding style.
## Licensing
See the [LICENSE](https://github.com/clastix/capsule/blob/master/LICENSE) file for our project's licensing. We may ask you to confirm the licensing of your contribution.

View File

@@ -0,0 +1,3 @@
# Contributing
Guidelines for community contribution.

View File

@@ -1,7 +1,9 @@
# Getting started
Thanks for giving Capsule a try.
## Installation
Make sure you have access to a Kubernetes cluster as administrator.
There are two ways to install Capsule:
@@ -10,6 +12,7 @@ There are two ways to install Capsule:
* Use the [Capsule Helm Chart](https://github.com/clastix/capsule/blob/master/charts/capsule/README.md)
### Install with the single YAML file installer
Ensure you have `kubectl` installed in your `PATH`. Clone this repository and move to the repo folder:
```
@@ -19,9 +22,11 @@ $ kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/conf
It will install the Capsule controller in a dedicated namespace `capsule-system`.
### Install with Helm Chart
Please, refer to the instructions reported in the Capsule Helm Chart [README](https://github.com/clastix/capsule/blob/master/charts/capsule/README.md).
# Create your first Tenant
## Create your first Tenant
In Capsule, a _Tenant_ is an abstraction to group multiple namespaces in a single entity within a set of boundaries defined by the Cluster Administrator. The tenant is then assigned to a user or group of users who is called _Tenant Owner_.
Capsule defines a Tenant as Custom Resource with cluster scope.
@@ -49,7 +54,8 @@ NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR AGE
oil Active 0 10s
```
## Tenant owners
## Login as Tenant Owner
Each tenant comes with a delegated user or group of users acting as the tenant admin. In the Capsule jargon, this is called the _Tenant Owner_. Other users can operate inside a tenant with different levels of permissions and authorizations assigned directly by the Tenant Owner.
Capsule does not care about the authentication strategy used in the cluster and all the Kubernetes methods of [authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/) are supported. The only requirement to use Capsule is to assign tenant users to the group defined by `--capsule-user-group` option, which defaults to `capsule.clastix.io`.
@@ -68,7 +74,7 @@ Users authenticated through an _OIDC token_ must have in their token:
]
```
The [hack/create-user.sh](../../hack/create-user.sh) can help you set up a dummy `kubeconfig` for the `alice` user acting as owner of a tenant called `oil`
The [hack/create-user.sh](https://github.com/clastix/capsule/blob/master/hack/create-user.sh) can help you set up a dummy `kubeconfig` for the `alice` user acting as owner of a tenant called `oil`
```bash
./hack/create-user.sh alice oil
@@ -79,32 +85,36 @@ kubeconfig file is: alice-oil.kubeconfig
to use it as alice export KUBECONFIG=alice-oil.kubeconfig
```
Log as tenant owner
Login as tenant owner
```
$ export KUBECONFIG=alice-oil.kubeconfig
```
and create a couple of new namespaces
## Create namespaces
As tenant owner, you can create namespaces:
```
$ kubectl create namespace oil-production
$ kubectl create namespace oil-development
```
As user `alice` you can operate with fully admin permissions:
And operate with fully admin permissions:
```
$ kubectl -n oil-development run nginx --image=docker.io/nginx
$ kubectl -n oil-development get pods
```
but limited to only your namespaces:
## Limiting access
Tenant Owners have full administrative permissions limited to only the namespaces in the assigned tenant. They can create any namespaced resource in their namespaces but they do not have access to cluster resources or resources belonging to other tenants they do not own:
```
$ kubectl -n kube-system get pods
Error from server (Forbidden): pods is forbidden: User "alice" cannot list resource "pods" in API group "" in the namespace "kube-system"
Error from server (Forbidden): pods is forbidden:
User "alice" cannot list resource "pods" in API group "" in the namespace "kube-system"
```
# What's next
The Tenant Owners have full administrative permissions limited to only the namespaces in the assigned tenant. However, their permissions can be controlled by the Cluster Admin by setting rules and policies on the assigned tenant. See the [use cases](./use-cases/overview.md) page for getting more cool things you can do with Capsule.
See the [tutorial](/docs/general/tutorial) for getting more cool things you can do with Capsule.

View File

@@ -0,0 +1,2 @@
# Documentation
General documentation for Capsule Operator

docs/content/general/mtb.md (new file, 2284 lines)

File diff suppressed because it is too large

View File

@@ -1,8 +1,6 @@
# Capsule Proxy
Capsule Proxy is an add-on for [Capsule](https://github.com/clastix/capsule), the operator providing multi-tenancy in Kubernetes.
## The problem
Capsule Proxy is an add-on for the Capsule Operator addressing some RBAC issues when enabling multi-tenancy in Kubernetes, since users cannot list the cluster-scoped resources they own.
Kubernetes RBAC cannot list only the owned cluster-scoped resources since there are no ACL-filtered APIs. For example:
@@ -27,36 +25,83 @@ With **Capsule**, we took a different approach. As one of the key goals, we want
## How it works
This project is an add-on of the main [Capsule](https://github.com/clastix/capsule) operator, so make sure you have a working instance of Capsule before attempting to install it.
Use the `capsule-proxy` only if you want Tenant Owners to list their own Cluster-Scope resources.
The `capsule-proxy` implements a simple reverse proxy that intercepts only specific requests to the APIs server and Capsule does all the magic behind the scenes.
Current implementation filters the following requests:
* `api/v1/namespaces`
* `api/v1/nodes`
* `apis/storage.k8s.io/v1/storageclasses{/name}`
* `apis/networking.k8s.io/{v1,v1beta1}/ingressclasses{/name}`
* `api/scheduling.k8s.io/{v1}/priorityclasses{/name}`
* `/api/scheduling.k8s.io/{v1}/priorityclasses{/name}`
* `/api/v1/namespaces`
* `/api/v1/nodes{/name}`
* `/api/v1/pods?fieldSelector=spec.nodeName%3D{name}`
* `/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/{name}`
* `/apis/metrics.k8s.io/{v1beta1}/nodes{/name}`
* `/apis/networking.k8s.io/{v1,v1beta1}/ingressclasses{/name}`
* `/apis/storage.k8s.io/v1/storageclasses{/name}`
All other requestes are proxied transparently to the APIs server, so no side-effects are expected. We're planning to add new APIs in the future, so PRs are welcome!
All other requests are proxied transparently to the APIs server, so no side effects are expected. We're planning to add new APIs in the future, so [PRs are welcome](https://github.com/clastix/capsule-proxy)!
## Installation
Capsule Proxy is an optional add-on of the main Capsule Operator, so make sure you have a working instance of Capsule before attempting to install it.
Use the `capsule-proxy` only if you want Tenant Owners to list their own Cluster-Scope resources.
The `capsule-proxy` can be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client to the APIs server.
Optionally, it can be deployed as a sidecar container in the backend of a dashboard.
Running outside a Kubernetes cluster is also viable, although a valid `KUBECONFIG` file must be provided, using the environment variable `KUBECONFIG` or the default file in `$HOME/.kube/config`.
An Helm Chart is available [here](./charts/capsule-proxy/README.md).
A Helm Chart is available [here](https://github.com/clastix/capsule-proxy/blob/master/charts/capsule-proxy/README.md).
## Does it work with kubectl?
Depending on your environment, you can expose the `capsule-proxy` by:
Yes, it works by intercepting all the requests from the `kubectl` client directed to the APIs server. It works with both users who use the TLS certificate authentication and those who use OIDC.
- Ingress
- NodePort Service
- LoadBalancer Service
- HostPort
- HostNetwork
## How is RBAC put in place?
Here how it looks like when exposed through an Ingress Controller:
Each Tenant owner can have their capabilities managed pretty similar to a standard RBAC.
```
               +-----------+          +-----------+          +-----------+
kubectl ------>|:443       |--------->|:9001      |--------->|:6443      |
               +-----------+          +-----------+          +-----------+
               ingress-controller     capsule-proxy          kube-apiserver
```
## CLI flags
- `capsule-configuration-name`: name of the `CapsuleConfiguration` resource which is containing the [Capsule configurations](/docs/general/references/#capsule-configuration) (default: `default`)
- `capsule-user-group` (deprecated): old way to specify the user groups whose requests must be intercepted by the proxy
- `ignored-user-group`: names of the groups whose requests must be ignored and proxy-passed to the upstream server
- `listening-port`: HTTP port the proxy listens to (default: `9001`)
- `oidc-username-claim`: the OIDC field name used to identify the user (default: `preferred_username`), the proper value can be extracted from the Kubernetes API Server flags
- `enable-ssl`: enable binding on HTTPS for secure communication, allowing client-based certificates, also known as mutual TLS (default: `true`)
- `ssl-cert-path`: path to the TLS certificate, when TLS mode is enabled (default: `/opt/capsule-proxy/tls.crt`)
- `ssl-key-path`: path to the TLS certificate key, when TLS mode is enabled (default: `/opt/capsule-proxy/tls.key`)
- `rolebindings-resync-period`: resync period for RoleBinding resources reflector, lower values can help if you're facing [flaky etcd connection](https://github.com/clastix/capsule-proxy/issues/174) (default: `10h`)
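For instance, a local run of the proxy overriding some of these flags might look like the following sketch (all values are illustrative):
```shell
$ capsule-proxy \
    --capsule-configuration-name=default \
    --listening-port=9001 \
    --oidc-username-claim=preferred_username \
    --enable-ssl=true \
    --ssl-cert-path=/opt/capsule-proxy/tls.crt \
    --ssl-key-path=/opt/capsule-proxy/tls.key \
    --rolebindings-resync-period=10h
```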
## User Authentication
The `capsule-proxy` intercepts all the requests from the `kubectl` client directed to the APIs Server. Users authenticating with a TLS client certificate and key are able to talk to the APIs Server, since the proxy is able to forward client certificates to the Kubernetes APIs Server.
It is possible to protect the `capsule-proxy` using a certificate provided by Let's Encrypt. Keep in mind that, in this way, the TLS termination will be executed by the Ingress Controller, meaning that the authentication based on client certificates will be dropped and not forwarded to the upstream.
If your prerequisite is exposing `capsule-proxy` using an Ingress, you must rely on the token-based authentication, for example OIDC or Bearer tokens. Users providing tokens are always able to reach the APIs Server.
## Kubernetes dashboards integration
If you're using a client-only dashboard, for example [Lens](https://k8slens.dev/), the `capsule-proxy` can be used as with `kubectl` since this dashboard usually talks to the APIs server using just a `kubeconfig` file.
![Lens dashboard](../assets/proxy-lens.png)
For a web-based dashboard, like the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard), the `capsule-proxy` can be deployed as a sidecar container in the backend, following the well-known cloud-native _Ambassador Pattern_.
![Kubernetes dashboard](../assets/proxy-kubernetes-dashboard.png)
## Tenant Owner Authorization
Each Tenant owner can have their capabilities managed pretty similar to a standard Kubernetes RBAC.
```yaml
apiVersion: capsule.clastix.io/v1beta1
@@ -89,6 +134,7 @@ Each Resource kind can be granted with several verbs, such as:
### Namespaces
As tenant owner `alice`, you can use `kubectl` to create some namespaces:
```
$ kubectl --context alice-oidc@mycluster create namespace oil-production
$ kubectl --context alice-oidc@mycluster create namespace oil-development
@@ -107,6 +153,44 @@ oil-production Active 2m
### Nodes
The Capsule Proxy gives the owners the ability to access the nodes matching the `.spec.nodeSelector` in the Tenant manifest:
```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: Nodes
      operations:
      - List
  nodeSelector:
    kubernetes.io/hostname: capsule-gold-qwerty
```
```bash
$ kubectl --context alice-oidc@mycluster get nodes
NAME STATUS ROLES AGE VERSION
capsule-gold-qwerty Ready <none> 43h v1.19.1
```
> Warning: when no `nodeSelector` is specified, the tenant owner has access to all the nodes, according to the permissions listed in the `proxySettings` specs.
### Special routes for kubectl describe
When issuing a `kubectl describe node`, some other endpoints are put in place:
* `api/v1/pods?fieldSelector=spec.nodeName%3D{name}`
* `/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/{name}`
These are mandatory in order to retrieve the list of the Pods running on the required node, and to provide info about its lease status.
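For example, with the Tenant shown above, describing the allowed node should now work end to end:
```shell
$ kubectl --context alice-oidc@mycluster describe node capsule-gold-qwerty
```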
### Nodes
The Capsule Proxy gives the owners the ability to access the nodes matching the `.spec.nodeSelector` in the Tenant manifest:
```yaml
@@ -179,6 +263,19 @@ custom custom.tls/provisioner Delete WaitForFirstConsum
glusterfs rook.io/glusterfs Delete WaitForFirstConsumer false 54m
```
> The `name` label reflecting the resource name is mandatory, otherwise filtering of resources cannot be put in place
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    name: cephfs
  name: cephfs
provisioner: cephfs
```
### Ingress Classes
As with Storage Classes, Ingress Classes can also be enforced.
@@ -225,9 +322,26 @@ external-lb example.com/external IngressParameters.k8s.example.com/e
internal-lb example.com/internal IngressParameters.k8s.example.com/internal-lb 15m
```
> The `name` label reflecting the resource name is mandatory, otherwise filtering of resources cannot be put in place
```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    name: external-lb
  name: external-lb
spec:
  controller: example.com/ingress-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb
```
### Priority Classes
Allowed PriorityClasses assigned to a Tenant Owner can be enforced as follows.
Allowed PriorityClasses assigned to a Tenant Owner can be enforced as follows:
```yaml
apiVersion: capsule.clastix.io/v1beta1
@@ -239,12 +353,12 @@ spec:
- kind: User
name: alice
proxySettings:
- kind: IngressClasses
- kind: PriorityClasses
operations:
- List
priorityClasses:
allowed:
- best-effort
- custom
allowedRegex: "\\w+priority"
```
@@ -271,67 +385,36 @@ maxpriority 1000 false 18s
minpriority 1000 false 18s
```
### Storage/Ingress class and PriorityClass required label
For Storage Class, Ingress Class and Priority Class resources, the `name` label reflecting the resource name is mandatory, otherwise filtering of resources cannot be put in place.
> The `name` label reflecting the resource name is mandatory, otherwise filtering of resources cannot be put in place
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    name: my-storage-class
  name: my-storage-class
provisioner: org.tld/my-storage-class
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    name: external-lb
  name: external-lb
spec:
  controller: example.com/ingress-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  labels:
    name: best-effort
  name: best-effort
    name: custom
  name: custom
value: 1000
globalDefault: false
description: "Priority class for best-effort Tenants"
description: "Priority class for Tenants"
```
## Does it work with kubectl?
Yes, it works by intercepting all the requests from the `kubectl` client directed to the APIs server. It works with both users who use the TLS certificate authentication and those who use OIDC.
## HTTP support
Capsule proxy supports `https` and `http`, although the latter is not recommended; we understand it can be useful for some use cases (e.g. development, or working behind a TLS-terminating reverse proxy). Since the default behaviour is to work over `https`, we need to use the flag `--enable-ssl=false` if we really want to work over `http`.
As tenant owner `alice`, you are able to use `kubectl` to create some namespaces:
```
$ kubectl --context alice-oidc@mycluster create namespace oil-production
$ kubectl --context alice-oidc@mycluster create namespace oil-development
$ kubectl --context alice-oidc@mycluster create namespace gas-marketing
After having the `capsule-proxy` working under `http`, requests must provide authentication using an allowed Bearer Token.
For example:
```bash
$ TOKEN=<type your TOKEN>
$ curl -H "Authorization: Bearer $TOKEN" http://localhost:9001/api/v1/namespaces
```
and list only those namespaces:
```
$ kubectl --context alice-oidc@mycluster get namespaces
NAME STATUS AGE
gas-marketing Active 2m
oil-development Active 2m
oil-production Active 2m
```
> NOTE: `kubectl` will not work against a `http` server.
# What's next
Have fun with `capsule-proxy`:
## Contributing
* [Standalone Installation](./standalone.md)
* [Sidecar Installation](./sidecar.md)
* [OIDC Authentication](./oidc-auth.md)
* [Contributing](./contributing.md)
`capsule-proxy` is an open-source software released with Apache2 [license](https://github.com/clastix/capsule-proxy/blob/master/LICENSE).
Contributing guidelines are available [here](https://github.com/clastix/capsule-proxy/blob/master/CONTRIBUTING.md).

View File

@@ -1,19 +1,12 @@
# Reference
* [Custom Resource Definition](#customer-resource-definition)
* [Capsule Configuration](#capsule-configuration)
* [Capsule Permissions](#capsule-permissions)
* [Admission Controllers](#admission-controller)
* [Command Options](#command-options)
* [Created Resources](#created-resources)
Reference document for Capsule Operator configuration
## Custom Resource Definition
Capsule operator uses a Custom Resources Definition (CRD) for _Tenants_. In Capsule, Tenants are cluster wide resources. You need cluster level permissions to work with tenants.
Capsule operator uses a Custom Resources Definition (CRD) for _Tenants_. Tenants are cluster wide resources, so you need cluster level permissions to work with tenants. You can learn about tenant CRD by the `kubectl explain` command:
You can learn about tenant CRD by the `kubectl explain` command:
```command
```
kubectl explain tenant
KIND: Tenant
@@ -24,11 +17,15 @@ DESCRIPTION:
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info:
APIVersion defines the versioned schema of this representation of an object.
Servers should convert recognized schemas to the latest internal value,
and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info:
Kind is a string value representing the REST resource this object represents.
Servers may infer this from the endpoint the client submits requests to.
Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <Object>
@@ -44,7 +41,7 @@ FIELDS:
For Tenant spec:
```command
```
kubectl explain tenant.spec
KIND: Tenant
@@ -76,9 +73,8 @@ FIELDS:
IngressClass. Optional.
limitRanges <Object>
Specifies the NetworkPolicies assigned to the Tenant. The assigned
NetworkPolicies are inherited by any namespace created in the Tenant.
Optional.
Specifies the resource min/max usage restrictions to the Tenant. The assigned
values are inherited by any namespace created in the Tenant. Optional.
namespaceOptions <Object>
Specifies options for the Namespaces, such as additional metadata or
@@ -124,7 +120,7 @@ FIELDS:
and Tenant status:
```command
```
kubectl explain tenant.status
KIND: Tenant
VERSION: capsule.clastix.io/v1beta1
@@ -171,6 +167,7 @@ Upon installation using Kustomize or Helm, a `capsule-default` resource will be
The reference to this configuration is managed by the CLI flag `--configuration-name`.
## Capsule Permissions
In the current implementation, the Capsule operator requires cluster admin permissions to fully operate. Make sure you deploy Capsule having access to the default `cluster-admin` ClusterRole.
## Admission Controllers
@@ -213,6 +210,7 @@ Option | Description | Default
## Created Resources
Once installed, the Capsule operator creates the following resources in your cluster:
```

File diff suppressed because it is too large

View File

@@ -0,0 +1,2 @@
# Guides
Guides and tutorials on how to integrate Capsule in your Kubernetes environment.

View File

@@ -1,8 +1,6 @@
# Capsule on AWS EKS
This is an example of how to install an AWS EKS cluster and one user
managed by Capsule.
It is based on [Using IAM Groups to manage Kubernetes access](https://www.eksworkshop.com/beginner/091_iam-groups/intro/)
managed by Capsule. It is based on [Using IAM Groups to manage Kubernetes access](https://www.eksworkshop.com/beginner/091_iam-groups/intro/)
Create EKS cluster:
@@ -23,7 +21,7 @@ Create AWS User `alice` using CloudFormation, create AWS access files and
kubeconfig for such user:
```bash
cat > cf.yml << \EOF
cat > cf.yml << EOF
Parameters:
ClusterName:
Type: String
@@ -112,8 +110,6 @@ cat >> kubeconfig-alice.conf << EOF
EOF
```
----
Export "admin" kubeconfig to be able to install Capsule:
```bash
@@ -134,15 +130,11 @@ kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config
```
Based on the tenant configuration above the user `alice` should be able
to create namespace...
Switch to new terminal tab and try to create namespace as user `alice`:
to create namespace. Switch to a new terminal and try to create a namespace as user `alice`:
```bash
# Unset AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY if defined
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
kubectl create namespace test --kubeconfig="kubeconfig-alice.conf"
... do other commands allowed by Tenant configuration ...
```
```

View File

@@ -0,0 +1,3 @@
# Capsule on Azure Kubernetes Service
This reference implementation introduces the recommended starting (baseline) infrastructure architecture for implementing a multi-tenancy Azure AKS cluster using Capsule. See [CoAKS](https://github.com/clastix/coaks-baseline-architecture).

View File

@@ -1,5 +1,5 @@
# Capsule over Managed Kubernetes
Capsule Operator can be easily installed on a Managed Kubernetes Service. Since in these services, you do not have access to the Kubernetes APIs Server, you should check with your service provider following pre-requisites:
# Capsule on Managed Kubernetes
Capsule Operator can be easily installed on a Managed Kubernetes Service. Since you do not have access to the Kubernetes APIs Server, you should check with the provider of the service:
- the default `cluster-admin` ClusterRole is accessible
- the following Admission Webhooks are enabled on the APIs Server:
@@ -8,9 +8,3 @@ Capsule Operator can be easily installed on a Managed Kubernetes Service. Since
- ResourceQuota
- MutatingAdmissionWebhook
- ValidatingAdmissionWebhook
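As a quick sanity check for the first prerequisite, you can ask the API server whether your current identity is allowed to do anything on any resource (a simple sketch; the expected answer is `yes`):
```shell
# Verify that your user can act as cluster-admin on the managed cluster
$ kubectl auth can-i '*' '*' --all-namespaces
yes
```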
* [AWS EKS](./aws-eks.md)
* CoAKS - Capsule over Azure Kubernetes Service
* Google Cloud GKE
* IBM Cloud
* OVH

View File

@@ -1,8 +1,6 @@
# Monitoring Capsule
The Capsule dashboard allows you to track the health and performance of Capsule manager and tenants, with particular attention to resources saturation, server responses, and latencies.
## Requirements
The Capsule dashboard allows you to track the health and performance of Capsule manager and tenants, with particular attention to resources saturation, server responses, and latencies. Prometheus and Grafana are requirements for monitoring Capsule.
### Prometheus
@@ -18,8 +16,6 @@ Grafana is an open-source monitoring solution that offers a flexible way to gene
To quickly deploy this monitoring stack, consider installing the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator).
---
## Quick Start
The Capsule Helm [charts](https://github.com/clastix/capsule/tree/master/charts/capsule) allow you to automatically create Kubernetes minimum resources needed for the proper functioning of the dashboard:
@@ -48,31 +44,29 @@ Take a look at the Helm charts [README.md](https://github.com/clastix/capsule/bl
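As a sketch, enabling the bundled ServiceMonitor at install time might look like the following, assuming the chart exposes a `serviceMonitor.enabled` value:
```shell
$ helm upgrade --install capsule clastix/capsule \
    -n capsule-system --create-namespace \
    --set serviceMonitor.enabled=true
```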
Verify that the service monitor is working correctly through the Prometheus "targets" page:
![Prometheus Targets](../assets/prometheus_targets.png)
![Prometheus Targets](./assets/prometheus_targets.png)
### Deploy dashboard
Simply upload [dashboard.json](https://github.com/clastix/capsule/blob/master/config/grafana/dashboard.json) file to Grafana through _Create_ -> _Import_,
making sure to select the correct Prometheus data source:
![Grafana Import](../assets/upload_json.png)
![Grafana Import](./assets/upload_json.png)
## In-depth view
### Features
* [Manager controllers](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#manager-controllers)
* [Webhook error rate](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#webhook-error-rate)
* [Webhook latency](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#webhook-latency)
* [REST client latency](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#rest-client-latency)
* [REST client error rate](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#rest-client-error-rate)
* [Saturation](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#saturation)
* [Workqueue](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#workqueue)
---
* [Manager controllers](#manager-controllers)
* [Webhook error rate](#webhook-error-rate)
* [Webhook latency](#webhook-latency)
* [REST client latency](#rest-client-latency)
* [REST client error rate](#rest-client-error-rate)
* [Saturation](#saturation)
* [Workqueue](#workqueue)
#### Manager controllers
![Manager controllers](../assets/manager-controllers.png)
![Manager controllers](./assets/manager-controllers.png)
##### Description
@@ -92,7 +86,7 @@ This section provides information about the medium time delay between manager cl
#### Webhook error rate
![Webhook error rate](../assets/webhook-error-rate.png)
![Webhook error rate](./assets/webhook-error-rate.png)
##### Description
@@ -113,7 +107,7 @@ This section provides information about webhook requests response, mainly focusi
#### Webhook latency
![Webhook latency](../assets/webhook-latency.png)
![Webhook latency](./assets/webhook-latency.png)
##### Description
@@ -134,7 +128,7 @@ This section provides information about the medium time delay between webhook tr
#### REST client latency
![REST client latency](../assets/rest-client-latency.png)
![REST client latency](./assets/rest-client-latency.png)
##### Description
@@ -155,7 +149,7 @@ YMMV
#### REST client error rate
![REST client error rate](../assets/rest-client-error-rate.png)
![REST client error rate](./assets/rest-client-error-rate.png)
##### Description
@@ -163,7 +157,7 @@ This section provides information about client total rest requests response per
#### Saturation
![Saturation](../assets/saturation.png)
![Saturation](./assets/saturation.png)
##### Description
@@ -171,7 +165,7 @@ This section provides information about resources, giving a detailed picture of
#### Workqueue
![Workqueue](../assets/workqueue.png)
![Workqueue](./assets/workqueue.png)
##### Description

View File

@@ -1,7 +1,9 @@
# OIDC Authentication
The `capsule-proxy` works with `kubectl` users with a token-based authentication, e.g. OIDC or Bearer Token. In the following example, we'll use Keycloak as an OIDC server capable of providing JWT tokens.
Capsule does not care about the authentication strategy used in the cluster and all the Kubernetes methods of authentication are supported. The only requirement to use Capsule is to assign tenant users to the group defined by the `userGroups` option in the `CapsuleConfiguration`, which defaults to `capsule.clastix.io`.
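As a minimal sketch, assuming the `v1alpha1` `CapsuleConfiguration` API, adding an extra OIDC group to the allowed user groups could look like this (the `oidc:users` group name is just an example):
```shell
$ kubectl apply -f - <<EOF
apiVersion: capsule.clastix.io/v1alpha1
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  userGroups:
  - capsule.clastix.io
  - oidc:users
EOF
```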
### Configuring Keycloak
In the following guide, we'll use [Keycloak](https://www.keycloak.org/), an Open Source Identity and Access Management server capable of authenticating users via OIDC and releasing JWT tokens as proof of authentication.
## Configuring OIDC Server
Configure Keycloak as OIDC server:
- Add a realm called `caas`, or use any existing realm instead
@@ -67,7 +69,7 @@ The result will be like the following:
}
```
### Configuring Kubernetes API Server
## Configuring Kubernetes API Server
Configuring Kubernetes for OIDC Authentication requires adding several parameters to the API Server. Please, refer to the [documentation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) for details and examples. Most likely, your `kube-apiserver.yaml` manifest will look like the following:
```yaml
@@ -84,7 +86,7 @@ spec:
- --oidc-username-prefix=-
```
### Configuring kubectl
## Configuring kubectl
There are two options to use `kubectl` with OIDC:
- OIDC Authenticator
@@ -103,18 +105,20 @@ $ kubectl config set-credentials oidc \
--auth-provider-arg=extra-scopes=groups
```
To use the --token option:
To use the `--token` option:
```
$ kubectl config set-credentials oidc --token=${ID_TOKEN}
```
Point the kubectl to the URL where the `capsule-proxy` service is reachable:
Point the `kubectl` to the URL where the Kubernetes APIs Server is reachable:
```
$ kubectl config set-cluster mycluster \
--server=https://kube.clastix.io \
--server=https://kube.clastix.io:6443 \
--certificate-authority=~/.kube/ca.crt
```
> If your APIs Server is reachable through the `capsule-proxy`, make sure to use the URL of the `capsule-proxy`.
Create a new context for the OIDC authenticated users:
```
$ kubectl config set-context alice-oidc@mycluster \
@@ -129,26 +133,4 @@ $ kubectl --context alice-oidc@mycluster create namespace oil-development
$ kubectl --context alice-oidc@mycluster create namespace gas-marketing
```
and list only those namespaces:
```
$ kubectl --context alice-oidc@mycluster get namespaces
NAME STATUS AGE
gas-marketing Active 2m
oil-development Active 2m
oil-production Active 2m
```
When logged as cluster-admin power user you should be able to see all namespaces:
```
$ kubectl get namespaces
NAME STATUS AGE
default Active 78d
kube-node-lease Active 78d
kube-public Active 78d
kube-system Active 78d
gas-marketing Active 2m
oil-development Active 2m
oil-production Active 2m
```
_Nota Bene_: once your `ID_TOKEN` expires, the `kubectl` OIDC Authenticator will attempt to automatically refresh your `ID_TOKEN` using the `REFRESH_TOKEN`, the `OIDC_CLIENT_ID`, and the `OIDC_CLIENT_SECRET`, storing the new values for the `REFRESH_TOKEN` and `ID_TOKEN` in your `kubeconfig` file. In case the OIDC server uses a self-signed CA certificate, make sure to specify it with the `idp-certificate-authority` option in your `kubeconfig` file, otherwise you'll not be able to refresh the tokens. Once the `REFRESH_TOKEN` is expired, you will need to refresh tokens manually.
> _Warning_: once your `ID_TOKEN` expires, the `kubectl` OIDC Authenticator will attempt to automatically refresh your `ID_TOKEN` using the `REFRESH_TOKEN`. In case the OIDC server uses a self-signed CA certificate, make sure to specify it with the `idp-certificate-authority` option in your `kubeconfig` file, otherwise you'll not be able to refresh the tokens.

View File

@@ -0,0 +1,73 @@
# Upgrading Tenant resource from v1alpha1 to v1beta1 version
With [Capsule v0.1.0](https://github.com/clastix/capsule/releases/tag/v0.1.0), the Tenant custom resource has been bumped to `v1beta1` from `v1alpha1` with additional fields addressing the new features implemented so far.
This document aims to provide support and a guide on how to perform a clean upgrade to the latest API version in order to avoid service disruption and data loss.
## Backup your cluster
We strongly suggest performing a full backup of your Kubernetes cluster, such as storage and etcd.
Use your favorite tool according to your needs.
## Uninstall the old Capsule release
If you're using Helm as the package manager, all the Operator resources such as the Deployment, Service, Role Bindings, and so on must be deleted.
```
helm uninstall -n capsule-system capsule
```
Ensure that everything has been removed correctly, especially the Secret resources.
## Patch the Tenant custom resource definition
Helm doesn't manage the lifecycle of Custom Resource Definitions; additional details can be found [here](https://github.com/helm/community/blob/f9e06c16d89ccea1bea77c01a6a96ae3b309f823/architecture/crds.md).
This process must be executed manually as follows:
```
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/v0.1.0/config/crd/bases/capsule.clastix.io_tenants.yaml
```
> Please note the Capsule version in the said URL, your mileage may vary according to the desired upgrading version.
## Install the Capsule operator using Helm
Since the Tenant custom resource definition has been patched with new fields, we can install Capsule back using the provided Helm chart.
```
helm upgrade --install capsule clastix/capsule -n capsule-system --create-namespace
```
This will start the Operator that will perform several required actions, such as:
1. Generating a new CA
2. Generating new TLS certificates for the local webhook server
3. Patching the Validating and Mutating Webhook Configuration resources with the fresh new CA
4. Patching the Custom Resource Definition tenant conversion webhook CA
## Ensure the conversion webhook is working
Kubernetes Custom Resource definitions provide a conversion webhook that is used by an Operator to perform seamless conversion between resources with different versioning.
With the fresh new installation, Capsule patched all the required moving parts to ensure this conversion is put in place, using the latest version (currently, `v1beta1`) for presenting the Tenant resources.
You can check this behavior by issuing the following command:
```
$: kubectl get tenants.v1beta1.capsule.clastix.io
NAME NAMESPACE QUOTA NAMESPACE COUNT OWNER NAME OWNER KIND NODE SELECTOR AGE
oil 3 0 alice User {"kubernetes.io/os":"linux"} 3m43s
```
You should see all the previous Tenant resources converted in the new format and structure.
```
$: kubectl get tenants.v1beta1.capsule.clastix.io
NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR AGE
oil Active 3 0 {"kubernetes.io/os":"linux"} 3m38s
```
> Resources are still persisted in etcd using the `v1alpha1` specification and the conversion is executed on-the-fly thanks to the conversion webhook.
> If you'd like to decrease the pressure on Capsule due to the conversion webhook, we suggest performing a resource patching using the command `kubectl replace`:
> in this way, the API Server will update the etcd key with the specification according to the new versioning, allowing to skip the conversion.
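A minimal sketch of that re-persisting step for a single tenant (here the `oil` tenant from the example above) could be:
```shell
# Read the Tenant through the latest served version and write it back,
# so the API Server re-stores it using the new storage version
$ kubectl get tenant oil -o yaml | kubectl replace -f -
```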

View File

@@ -1,8 +1,8 @@
# Velero Backup Restoration
# Tenants Backup and Restore with Velero
Velero is a backup system that performs disaster recovery and migrates Kubernetes cluster resources and persistent volumes.
[Velero](https://velero.io) is a backup and restore solution that performs disaster recovery and migrates Kubernetes cluster resources and persistent volumes.
Using this in a Kubernetes cluster where Capsule is installed can lead to an incomplete restore of the cluster's Tenants. This is because Velero omits the `ownerReferences` section from the tenant's namespace manifests when backing them up.
Using Velero in a Kubernetes cluster where Capsule is installed can lead to an incomplete restore of the cluster's Tenants. This is because Velero omits the `ownerReferences` section from the tenant's namespace manifests when backing them up.
To avoid this problem you can use the script `velero-restore.sh` under the `hack/` folder.
@@ -20,8 +20,4 @@ Additionally, you can also specify a selected range of tenants to be restored:
./velero-restore.sh --tenant "gas oil" restore
```
In this way, only the tenants **gas** and **oil** will be restored.
# What's next
See how Bill, the cluster admin, can deny wildcard hostnames to a Tenant. [Deny Wildcard Hostnames](./deny-wildcard-hostnames.md)
In this way, only the tenants **gas** and **oil** will be restored.

docs/content/index.md (new file, 17 lines)
View File

@@ -0,0 +1,17 @@
# Capsule Overview
## Kubernetes multi-tenancy made easy
**Capsule** implements a multi-tenant and policy-based environment in your Kubernetes cluster. It is designed as a micro-services-based ecosystem with a minimalist approach, leveraging only upstream Kubernetes.
## What's the problem with the current status?
Kubernetes introduces the _Namespace_ object type to create logical partitions of the cluster as isolated *slices*. However, implementing advanced multi-tenancy scenarios soon becomes complicated because of the flat structure of Kubernetes namespaces and the impossibility of sharing resources among namespaces belonging to the same tenant. To overcome this, cluster admins tend to provision a dedicated cluster for each group of users, teams, or departments. As an organization grows, the number of clusters to manage and keep aligned becomes an operational nightmare, described as the well-known phenomenon of _cluster sprawl_.
## Entering Capsule
Capsule takes a different approach. In a single cluster, the Capsule Controller aggregates multiple namespaces in a lightweight abstraction called _Tenant_, basically a grouping of Kubernetes Namespaces. Within each tenant, users are free to create their namespaces and share all the assigned resources.
On the other side, the Capsule Policy Engine keeps the different tenants isolated from each other. _Network and Security Policies_, _Resource Quota_, _Limit Ranges_, _RBAC_, and other policies defined at the tenant level are automatically inherited by all the namespaces in the tenant. Then users are free to operate their tenants in autonomy, without the intervention of the cluster administrator.
![capsule-operator](./assets/capsule-operator.svg)

docs/gridsome.config.js (new file, 49 lines)
View File

@@ -0,0 +1,49 @@
// This is where project configuration and plugin options are located.
// Learn more: https://gridsome.org/docs/config
// Changes here require a server restart.
// To restart press CTRL + C in terminal and run `gridsome develop`
module.exports = {
siteName: 'Capsule Documentation',
titleTemplate: 'Capsule Documentation | %s',
siteDescription: 'Documentation of Capsule, multi-tenant Operator for Kubernetes',
icon: {
favicon: './src/assets/favicon.png',
},
plugins: [
{
use: "gridsome-plugin-tailwindcss",
options: {
tailwindConfig: './tailwind.config.js',
// presetEnvConfig: {},
// shouldImport: false,
// shouldTimeTravel: false
}
},
{
use: '@gridsome/source-filesystem',
options: {
baseDir: './content',
path: '**/*.md',
pathPrefix: '/docs',
typeName: 'MarkdownPage',
remark: {
externalLinksTarget: '_blank',
externalLinksRel: ['noopener', 'noreferrer'],
plugins: [
'@gridsome/remark-prismjs'
]
}
}
},
],
chainWebpack: config => {
const svgRule = config.module.rule('svg')
svgRule.uses.clear()
svgRule
.use('vue-svg-loader')
.loader('vue-svg-loader')
}
}

docs/gridsome.server.js (new file, 117 lines)
View File

@@ -0,0 +1,117 @@
// Server API makes it possible to hook into various parts of Gridsome
// on server-side and add custom data to the GraphQL data layer.
// Learn more: https://gridsome.org/docs/server-api/
// Changes here require a server restart.
// To restart press CTRL + C in terminal and run `gridsome develop`
module.exports = function (api) {
api.loadSource(actions => {
// Use the Data Store API here: https://gridsome.org/docs/data-store-api/
const sidebar = actions.addCollection({
typeName: 'Sidebar'
})
sidebar.addNode({
sections: [
{
items: [
{
label: 'Overview',
path: '/docs/'
}
]
},
{
title: 'Documentation',
items: [
{
label: 'Getting Started',
path: '/docs/general/getting-started'
},
{
label: 'Tutorial',
path: '/docs/general/tutorial'
},
{
label: 'References',
path: '/docs/general/references'
},
{
label: 'Multi-Tenant Benchmark',
path: '/docs/general/mtb'
},
{
label: 'Capsule Proxy',
path: '/docs/general/proxy'
},
{
label: 'Dashboard',
path: '/docs/general/lens'
},
]
},
{
title: 'Guides',
items: [
{
label: 'OIDC Authentication',
path: '/docs/guides/oidc-auth'
},
{
label: 'Monitoring Capsule',
path: '/docs/guides/monitoring'
},
{
label: 'Backup & Restore with Velero',
path: '/docs/guides/velero'
},
{
label: 'Upgrading Tenant version',
path: '/docs/guides/upgrading'
},
{
title: 'Managed Kubernetes',
subItems: [
{
label: 'Overview',
path: '/docs/guides/managed-kubernetes/overview'
},
{
label: 'EKS',
path: '/docs/guides/managed-kubernetes/aws-eks'
},
{
label: 'CoAKS',
path: '/docs/guides/managed-kubernetes/coaks'
},
]
}
]
},
{
title: 'Contributing',
items: [
{
label: 'Guidelines',
path: '/docs/contributing/guidelines'
},
{
label: 'Development',
path: '/docs/contributing/development'
},
{
label: 'Governance',
path: '/docs/contributing/governance'
}
]
}
]
})
})
api.createPages(({ createPage }) => {
// Use the Pages API here: https://gridsome.org/docs/pages-api/
})
}

View File

@@ -1,8 +0,0 @@
# Capsule Documentation
**Capsule** helps to implement a multi-tenancy and policy-based environment in your Kubernetes cluster. It has been designed as a micro-services based ecosystem with the minimalist approach, leveraging only on upstream Kubernetes.
Currently, the Capsule ecosystem comprises the following:
* [Capsule Operator](./operator/overview.md)
* [Capsule Proxy](./proxy/overview.md)
* [Capsule Lens extension](./lens-extension/overview.md)

View File

@@ -1,251 +0,0 @@
# How to contribute to Capsule
First, thanks for your interest in Capsule, any contribution is welcome!
The first step is to set up your local development environment as stated below:
## Setting up the development environment
The following dependencies are mandatory:
- [Go 1.16](https://golang.org/dl/)
- [OperatorSDK 1.7.2](https://github.com/operator-framework/operator-sdk)
- [Kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
- [KinD](https://github.com/kubernetes-sigs/kind)
- [ngrok](https://ngrok.com/) (if you want to run locally)
- [golangci-lint](https://github.com/golangci/golangci-lint)
### Installing Go dependencies
After cloning Capsule on any folder, access it and issue the following command
to ensure all dependencies are properly downloaded.
```
go mod download
```
### Installing Operator SDK
Some operations, like the Docker image build process or the code generation of
the CRD manifests, as well as the deep copy functions, require _Operator SDK_:
the binary has to be installed into your `PATH`.
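A possible installation sketch (the version and asset name are assumptions, check the Operator SDK releases page for your platform):
```bash
# Example only: version and asset name are assumptions
curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.7.2/operator-sdk_linux_amd64
chmod +x operator-sdk_linux_amd64
sudo mv operator-sdk_linux_amd64 /usr/local/bin/operator-sdk
```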
### Installing Kubebuilder
With the latest release of OperatorSDK there's a tighter integration with
Kubebuilder and its opinionated testing suite: make sure to download the latest
binaries available from the _Releases_ GitHub page and place them into the
`/usr/local/kubebuilder/bin` folder, ensuring this is also in your `PATH`.
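As a sketch (the version and archive layout are assumptions, check the Kubebuilder releases page):
```bash
# Example only: version and archive layout are assumptions
curl -L https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.1/kubebuilder_2.3.1_linux_amd64.tar.gz | tar -xz -C /tmp
sudo mv /tmp/kubebuilder_2.3.1_linux_amd64 /usr/local/kubebuilder
export PATH=$PATH:/usr/local/kubebuilder/bin
```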
### Installing KinD
Capsule can run on any certified Kubernetes installation; locally, the whole
development is performed on _KinD_, also known as
[Kubernetes in Docker](https://github.com/kubernetes-sigs/kind).
> N.B.: Docker is a hard requirement since KinD is based on it
According to your operating system and architecture, download the right binary
and place it on your `PATH`.
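For example (a sketch: the version and download URL are assumptions, see the KinD releases page):
```bash
# Example only: version and URL are assumptions
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```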
Once done, you're ready to bootstrap a fully functional Kubernetes cluster in a
matter of seconds.
```
# kind create cluster --name capsule
Creating cluster "capsule" ...
✓ Ensuring node image (kindest/node:v1.18.2) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-capsule"
You can now use your cluster with:
kubectl cluster-info --context kind-capsule
Thanks for using kind! 😊
```
The current `KUBECONFIG` will be populated with the `cluster-admin`
certificates and the context switched to the newly created Kubernetes cluster.
### Build the Docker image and push it to KinD
From the root path, issue the _make_ recipe:
```
# make docker-build
```
The image `quay.io/clastix/capsule:<tag>` will be available locally. The built image `<tag>` corresponds to the latest available [release](https://github.com/clastix/capsule/releases).
Push it to _KinD_ with the following command:
```
# kind load docker-image --nodes capsule-control-plane --name capsule quay.io/clastix/capsule:<tag>
```
### Deploy the Kubernetes manifests
With the current `kind-capsule` context enabled, deploy all the required
manifests by issuing the following command:
```
make deploy
```
This will automatically install all the required Kubernetes resources.
You can check if Capsule is running by tailing the logs:
```
# kubectl -n capsule-system logs --all-containers -f -l control-plane=controller-manager
```
Since Capsule is built using _OperatorSDK_, logging is handled by the zap
module: the log verbosity of the Capsule controller can be increased by passing
the `--zap-log-level` option with a value from `1` to `10` or one of the
[basic keywords](https://godoc.org/go.uber.org/zap/zapcore#Level), although
it is suggested to use the `--zap-devel` flag to also get stack traces.
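For instance, assuming you run the manager binary directly from the project root instead of going through `make run` (a sketch, the exact entry point may differ):
```bash
# Sketch only: enable development mode with stack traces and raise verbosity
go run . --zap-devel --zap-log-level=5
```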
> CA generation
>
> You could notice a restart of the Capsule pod upon installation, that's ok:
> Capsule is generating the CA and populating the Secret containing the TLS
> certificate to handle the webhooks, and the whole application needs to be
> reloaded to properly serve HTTPS requests.
### Run Capsule locally
Debugging remote applications is always a struggle, but Operators just need
access to the Kubernetes API Server.
#### Scaling down the remote Pod
First, ensure the Capsule pod is not running by scaling down the Deployment.
```
# kubectl -n capsule-system scale deployment capsule-controller-manager --replicas=0
deployment.apps/capsule-controller-manager scaled
```
> This is mandatory since Capsule uses Leader Election
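You can verify that no controller pod is left running before starting the local instance (the label selector is the same one used above to tail the logs):
```bash
# Should return no running pods once the Deployment is scaled to zero
kubectl -n capsule-system get pods -l control-plane=controller-manager
```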
#### Providing TLS certificate for webhooks
The next step is to replicate the same environment Capsule expects in the Pod,
which means creating a fake certificate to handle HTTPS requests.
``` bash
mkdir -p /tmp/k8s-webhook-server/serving-certs
kubectl -n capsule-system get secret capsule-tls -o jsonpath='{.data.tls\.crt}' | base64 -d > /tmp/k8s-webhook-server/serving-certs/tls.crt
kubectl -n capsule-system get secret capsule-tls -o jsonpath='{.data.tls\.key}' | base64 -d > /tmp/k8s-webhook-server/serving-certs/tls.key
```
> We're using the certificates generated upon the first installation of Capsule:
> it means the Secret will be populated at the first start-up.
> If you plan to run it locally from the very beginning, you will need to
> provide a self-signed certificate in the said directory.
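In that case, a minimal sketch with `openssl` (the subject and validity are placeholders):
```bash
# Generate a throwaway self-signed certificate for local webhook serving
mkdir -p /tmp/k8s-webhook-server/serving-certs
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout /tmp/k8s-webhook-server/serving-certs/tls.key \
  -out /tmp/k8s-webhook-server/serving-certs/tls.crt
```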
#### Starting NGROK
In another terminal, we need an `ngrok` session, which is mandatory to also
debug webhooks (YMMV).
```
# ngrok http https://localhost:9443
ngrok by @inconshreveable
Session Status online
Account Dario Tranchitella (Plan: Free)
Version 2.3.35
Region United States (us)
Web Interface http://127.0.0.1:4040
Forwarding http://cdb72b99348c.ngrok.io -> https://localhost:9443
Forwarding https://cdb72b99348c.ngrok.io -> https://localhost:9443
Connections ttl opn rt1 rt5 p50 p90
0 0 0.00 0.00 0.00 0.00
```
What we need is the _ngrok_ URL (in this case, `https://cdb72b99348c.ngrok.io`)
since we're going to use this URL as the `url` parameter for the
_Dynamic Admission Control Webhooks_.
#### Patching the MutatingWebhookConfiguration
Now it's time to patch the _MutatingWebhookConfiguration_ and the
_ValidatingWebhookConfiguration_ too, adding the said `ngrok` URL as the base for
each defined webhook, as follows:
```diff
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
name: capsule-mutating-webhook-configuration
webhooks:
- name: owner.namespace.capsule.clastix.io
failurePolicy: Fail
rules:
- apiGroups: [""]
apiVersions: ["v1"]
operations: ["CREATE"]
resources: ["namespaces"]
clientConfig:
+ url: https://cdb72b99348c.ngrok.io/mutate-v1-namespace-owner-reference
- caBundle:
- service:
- namespace: system
- name: capsule
- path: /mutate-v1-namespace-owner-reference
...
```
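If you prefer to script the change instead of editing the manifest by hand, a `kubectl patch` sketch follows (the webhook index and the ngrok URL are assumptions, adjust them to your configuration):
```bash
# Sketch only: point the first webhook's clientConfig at the ngrok URL
kubectl patch mutatingwebhookconfiguration capsule-mutating-webhook-configuration \
  --type=json \
  -p='[{"op": "replace", "path": "/webhooks/0/clientConfig", "value": {"url": "https://cdb72b99348c.ngrok.io/mutate-v1-namespace-owner-reference"}}]'
```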
#### Run Capsule
Finally, it's time to run Capsule locally using your preferred IDE (or not):
from the project root path, you can issue the following command.
```
make run
```
All the logs will start to flow to your standard output; feel free to attach
your debugger and set breakpoints as well!
## Code convention
Changes must be submitted as Pull Requests, where a _GitHub Action_ will run
`golangci-lint`, so ensure your changes respect the coding standard.
### golint
You can easily check them by issuing the _Make_ recipe `golint`.
```
# make golint
golangci-lint run -c .golangci.yml
```
> Enabled linters and related options are defined in the [.golangci.yml file](../../.golangci.yml)
### goimports
Also, the Go import statements must be sorted following the best practice:
```
<STANDARD LIBRARY>
<EXTERNAL PACKAGES>
<LOCAL PACKAGES>
```
To help you out you can use the _Make_ recipe `goimports`
```
# make goimports
goimports -w -l -local "github.com/clastix/capsule" .
```
### Commits
All Pull Requests must refer to an already open issue: this is the first step of a contribution, and it also informs the maintainers about the issue.
The commit's first line should not exceed 50 characters.
A commit description is welcome to further explain the changes: just make sure
to put a blank line after the subject, followed by an arbitrary number of lines
no longer than 72 characters, with at most one blank line between them.
Please, split changes into several small, documented commits: this will help us to perform a better review. Commits must follow the Conventional Commits Specification, a lightweight convention on top of commit messages. It provides an easy set of rules for creating an explicit commit history, which makes it easier to write automated tools on top of. This convention dovetails with Semantic Versioning, by describing the features, fixes, and breaking changes made in commit messages. See [Conventional Commits Specification](https://www.conventionalcommits.org) to learn about Conventional Commits.
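A hypothetical commit message following this convention (the scope and issue number are made up for illustration):
```
fix(webhook): validate empty node selector

Reject namespace creation when the owning Tenant defines a node
selector and the request does not match it.

Fixes #123
```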
> In case of errors or need of changes to previous commits,
> fix them squashing to make changes atomic.
### Miscellanea
Please, add a single newline at the end of every file, following the current coding style.

View File

@@ -1,77 +0,0 @@
# Allow self-service management of Network Policies
**Profile Applicability:** L2
**Type:** Behavioral
**Category:** Self-Service Operations
**Description:** Tenants should be able to perform self-service operations by creating their own network policies in their namespaces.
**Rationale:** Enables self-service management of network-policies.
**Audit:**
As cluster admin, create a tenant
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
networkPolicies:
items:
- ingress:
- from:
- namespaceSelector:
matchLabels:
capsule.clastix.io/tenant: oil
podSelector: {}
policyTypes:
- Egress
- Ingress
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, retrieve the networkpolicies resources in the tenant namespace
```bash
kubectl --kubeconfig alice get networkpolicies
NAME POD-SELECTOR AGE
capsule-oil-0 <none> 7m5s
```
As tenant owner, check for permissions to manage networkpolicies for each verb
```bash
kubectl --kubeconfig alice auth can-i get networkpolicies
kubectl --kubeconfig alice auth can-i create networkpolicies
kubectl --kubeconfig alice auth can-i update networkpolicies
kubectl --kubeconfig alice auth can-i patch networkpolicies
kubectl --kubeconfig alice auth can-i delete networkpolicies
kubectl --kubeconfig alice auth can-i deletecollection networkpolicies
```
Each command must return 'yes'
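Optionally, the same checks can be run in a single loop (a helper sketch, not part of the benchmark):
```bash
for verb in get create update patch delete deletecollection; do
  echo -n "$verb: "
  kubectl --kubeconfig alice auth can-i "$verb" networkpolicies
done
```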
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```

View File

@@ -1,58 +0,0 @@
# Allow self-service management of Role Bindings
**Profile Applicability:** L2
**Type:** Behavioral
**Category:** Self-Service Operations
**Description:** Tenants should be able to perform self-service operations by creating their rolebindings in their namespaces.
**Rationale:** Enables self-service management of roles.
**Audit:**
As cluster admin, create a tenant
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, check for permissions to manage rolebindings for each verb
```bash
kubectl --kubeconfig alice auth can-i get rolebindings
kubectl --kubeconfig alice auth can-i create rolebindings
kubectl --kubeconfig alice auth can-i update rolebindings
kubectl --kubeconfig alice auth can-i patch rolebindings
kubectl --kubeconfig alice auth can-i delete rolebindings
kubectl --kubeconfig alice auth can-i deletecollection rolebindings
```
Each command must return 'yes'
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```

View File

@@ -1,58 +0,0 @@
# Allow self-service management of Roles
**Profile Applicability:** L2
**Type:** Behavioral
**Category:** Self-Service Operations
**Description:** Tenants should be able to perform self-service operations by creating their own roles in their namespaces.
**Rationale:** Enables self-service management of roles.
**Audit:**
As cluster admin, create a tenant
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, check for permissions to manage roles for each verb
```bash
kubectl --kubeconfig alice auth can-i get roles
kubectl --kubeconfig alice auth can-i create roles
kubectl --kubeconfig alice auth can-i update roles
kubectl --kubeconfig alice auth can-i patch roles
kubectl --kubeconfig alice auth can-i delete roles
kubectl --kubeconfig alice auth can-i deletecollection roles
```
Each command must return 'yes'
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```

View File

@@ -1,113 +0,0 @@
# Block access to cluster resources
**Profile Applicability:** L1
**Type:** Configuration Check
**Category:** Control Plane Isolation
**Description:** Tenants should not be able to view, edit, create or delete cluster (non-namespaced) resources such as Node, ClusterRole, ClusterRoleBinding, etc.
**Rationale:** Access controls should be configured for tenants so that a tenant cannot list, create, modify or delete cluster resources
**Audit:**
As cluster admin, create a tenant
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
EOF
./create-user.sh alice oil
```
As cluster admin, run the following command to retrieve the list of non-namespaced resources
```bash
kubectl --kubeconfig cluster-admin api-resources --namespaced=false
```
For all non-namespaced resources, and each verb (get, list, create, update, patch, watch, delete, and deletecollection) issue the following command:
```bash
kubectl --kubeconfig alice auth can-i <verb> <resource>
```
Each command must return `no`
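A helper sketch to automate the check over all non-namespaced resources (the verb list mirrors the one above):
```bash
for resource in $(kubectl --kubeconfig cluster-admin api-resources --namespaced=false -o name); do
  for verb in get list create update patch watch delete deletecollection; do
    echo -n "$verb $resource: "
    kubectl --kubeconfig alice auth can-i "$verb" "$resource"
  done
done
```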
**Exception:**
The following requests should return `no`, but they do not:
```bash
kubectl --kubeconfig alice auth can-i create selfsubjectaccessreviews
yes
kubectl --kubeconfig alice auth can-i create selfsubjectrulesreviews
yes
kubectl --kubeconfig alice auth can-i create namespaces
yes
```
Any Kubernetes user can create `SelfSubjectAccessReview` and `SelfSubjectRulesReview` resources to check whether they can act, so the first two exceptions are not an issue.
```bash
kubectl --anyuser auth can-i --list
Resources Non-Resource URLs Resource Names Verbs
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
```
To enable namespace self-service provisioning, Capsule intentionally gives permissions to create namespaces to all users belonging to the Capsule group:
```bash
kubectl describe clusterrolebindings capsule-namespace-provisioner
Name: capsule-namespace-provisioner
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: capsule-namespace-provisioner
Subjects:
Kind Name Namespace
---- ---- ---------
Group capsule.clastix.io
kubectl describe clusterrole capsule-namespace-provisioner
Name: capsule-namespace-provisioner
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
namespaces [] [] [create]
```
Capsule controls self-service namespace creation by limiting the number of namespaces the user can create through the `tenant.spec.namespaceQuota` option.
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```

View File

@@ -1,155 +0,0 @@
# Block access to multitenant resources
**Profile Applicability:** L1
**Type:** Behavioral
**Category:** Tenant Isolation
**Description:** Each tenant namespace may contain resources set up by the cluster administrator for multi-tenancy, such as role bindings, and network policies. Tenants should not be allowed to modify the namespaced resources created by the cluster administrator for multi-tenancy. However, for some resources such as network policies, tenants can configure additional instances of the resource for their workloads.
**Rationale:** Tenants can escalate privileges and impact other tenants if they can delete or modify required multi-tenancy resources such as namespace resource quotas or default network policy.
**Audit:**
As cluster admin, create a tenant
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
networkPolicies:
items:
- podSelector: {}
policyTypes:
- Ingress
- Egress
- egress:
- to:
- namespaceSelector:
matchLabels:
capsule.clastix.io/tenant: oil
ingress:
- from:
- namespaceSelector:
matchLabels:
capsule.clastix.io/tenant: oil
podSelector: {}
policyTypes:
- Egress
- Ingress
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, retrieve the networkpolicies resources in the tenant namespace
```bash
kubectl --kubeconfig alice get networkpolicies
NAME POD-SELECTOR AGE
capsule-oil-0 <none> 7m5s
capsule-oil-1 <none> 7m5s
```
As tenant owner try to modify or delete one of the networkpolicies
```bash
kubectl --kubeconfig alice delete networkpolicies capsule-oil-0
```
You should receive an error message denying the edit/delete request
```bash
Error from server (Forbidden): networkpolicies.networking.k8s.io "capsule-oil-0" is forbidden:
User "oil" cannot delete resource "networkpolicies" in API group "networking.k8s.io" in the namespace "oil-production"
```
As tenant owner, you can create an additional networkpolicy inside the namespace
```yaml
kubectl create -f - << EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: hijacking
namespace: oil-production
spec:
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
podSelector: {}
policyTypes:
- Egress
EOF
```
However, due to the additive nature of network policies, the `DENY ALL` policy set by the cluster admin prevents hijacking.
As tenant owner list RBAC permissions set by Capsule
```bash
kubectl --kubeconfig alice get rolebindings
NAME ROLE AGE
namespace-deleter ClusterRole/capsule-namespace-deleter 11h
namespace:admin ClusterRole/admin 11h
```
As tenant owner, try to change/delete the rolebinding to escalate permissions
```bash
kubectl --kubeconfig alice edit/delete rolebinding namespace:admin
```
The rolebinding is immediately recreated by Capsule:
```
kubectl --kubeconfig alice get rolebindings
NAME ROLE AGE
namespace-deleter ClusterRole/capsule-namespace-deleter 11h
namespace:admin ClusterRole/admin 2s
```
However, the tenant owner can create and assign permissions inside the namespace she owns
```yaml
kubectl create -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
name: oil-robot:admin
namespace: oil-production
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: admin
subjects:
- kind: ServiceAccount
name: default
namespace: oil-production
EOF
```
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```

View File

@@ -1,97 +0,0 @@
# Block access to other tenant resources
**Profile Applicability:** L1
**Type:** Behavioral
**Category:** Tenant Isolation
**Description:** Each tenant has its own set of resources, such as namespaces, service accounts, secrets, pods, services, etc. Tenants should not be allowed to access each other's resources.
**Rationale:** A tenant's resources must not be accessible by other tenants.
**Audit:**
As cluster admin, create a couple of tenants
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
EOF
./create-user.sh alice oil
```
and
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: gas
spec:
owners:
- kind: User
name: joe
EOF
./create-user.sh joe gas
```
As `oil` tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As `gas` tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig joe create ns gas-production
kubectl --kubeconfig joe config set-context --current --namespace gas-production
```
As `oil` tenant owner, try to retrieve the resources in the `gas` tenant namespaces
```bash
kubectl --kubeconfig alice get serviceaccounts --namespace gas-production
```
You must receive an error message:
```
Error from server (Forbidden): serviceaccount is forbidden:
User "oil" cannot list resource "serviceaccounts" in API group "" in the namespace "gas-production"
```
As `gas` tenant owner, try to retrieve the resources in the `oil` tenant namespaces
```bash
kubectl --kubeconfig joe get serviceaccounts --namespace oil-production
```
You must receive an error message:
```
Error from server (Forbidden): serviceaccount is forbidden:
User "joe" cannot list resource "serviceaccounts" in API group "" in the namespace "oil-production"
```
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenants oil gas
```

View File

@@ -1,121 +0,0 @@
# Block add capabilities
**Profile Applicability:** L1
**Type:** Behavioral Check
**Category:** Control Plane Isolation
**Description:** Control Linux capabilities.
**Rationale:** Linux allows defining fine-grained permissions using capabilities. With Kubernetes, it is possible to add capabilities for pods that escalate the level of kernel access and allow other potentially dangerous behaviors.
**Audit:**
As cluster admin, define a `PodSecurityPolicy` with `allowedCapabilities` and map the policy to a tenant:
```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: tenant
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
# The default set of capabilities are implicitly allowed
# The empty set means that no additional capabilities may be added beyond the default set
allowedCapabilities: []
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
EOF
```
> Note: make sure `PodSecurityPolicy` Admission Control is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`
Then create a ClusterRole granting the use of the said PodSecurityPolicy
```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: tenant:psp
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
resourceNames: ['tenant']
verbs: ['use']
EOF
```
And assign it to the tenant
```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
namespace: oil-production
spec:
owners:
- kind: User
name: alice
additionalRoleBindings:
- clusterRoleName: tenant:psp
subjects:
- kind: "Group"
apiGroup: "rbac.authorization.k8s.io"
name: "system:authenticated"
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a pod and verify that new capabilities cannot be added in the tenant namespaces
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
name: pod-with-settime-cap
namespace:
labels:
spec:
containers:
- name: busybox
image: busybox:latest
command: ["/bin/sleep", "3600"]
securityContext:
capabilities:
add:
- SYS_TIME
EOF
```
You must have the pod blocked by PodSecurityPolicy.
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```

View File

@@ -1,69 +0,0 @@
# Block modification of resource quotas
**Profile Applicability:** L1
**Type:** Behavioral Check
**Category:** Tenant Isolation
**Description:** Tenants should not be able to modify the resource quotas defined in their namespaces
**Rationale:** Resource quotas must be configured for isolation and fairness between tenants. Tenants should not be able to modify existing resource quotas as they may exhaust cluster resources and impact other tenants.
**Audit:**
As cluster admin, create a tenant
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
resourceQuotas:
items:
- hard:
limits.cpu: "8"
limits.memory: 16Gi
requests.cpu: "8"
requests.memory: 16Gi
- hard:
pods: "10"
services: "50"
- hard:
requests.storage: 100Gi
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, check the permissions to modify/delete the quota in the tenant namespace:
```bash
kubectl --kubeconfig alice auth can-i create quota
kubectl --kubeconfig alice auth can-i update quota
kubectl --kubeconfig alice auth can-i patch quota
kubectl --kubeconfig alice auth can-i delete quota
kubectl --kubeconfig alice auth can-i deletecollection quota
```
Each command must return 'no'
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```

View File

@@ -1,107 +0,0 @@
# Block access to multitenant resources
**Profile Applicability:** L1
**Type:** Behavioral
**Category:** Tenant Isolation
**Description:** Block network traffic among namespaces from different tenants.
**Rationale:** Tenants cannot access services and pods in another tenant's namespaces.
**Audit:**
As cluster admin, create a couple of tenants
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
networkPolicies:
items:
- ingress:
- from:
- namespaceSelector:
matchLabels:
capsule.clastix.io/tenant: oil
podSelector: {}
policyTypes:
- Ingress
EOF
./create-user.sh alice oil
```
and
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: gas
spec:
owners:
- kind: User
name: joe
networkPolicies:
items:
- ingress:
- from:
- namespaceSelector:
matchLabels:
capsule.clastix.io/tenant: gas
podSelector: {}
policyTypes:
- Ingress
EOF
./create-user.sh joe gas
```
As `oil` tenant owner, run the following commands to create a namespace and resources in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
kubectl --kubeconfig alice run webserver --image nginx:latest
kubectl --kubeconfig alice expose pod webserver --port 80
```
As `gas` tenant owner, run the following commands to create a namespace and resources in the given tenant
```bash
kubectl --kubeconfig joe create ns gas-production
kubectl --kubeconfig joe config set-context --current --namespace gas-production
kubectl --kubeconfig joe run webserver --image nginx:latest
kubectl --kubeconfig joe expose pod webserver --port 80
```
As `oil` tenant owner, verify you can access the service in `oil` tenant namespace but not in the `gas` tenant namespace
```bash
kubectl --kubeconfig alice exec webserver -- curl http://webserver.oil-production.svc.cluster.local
kubectl --kubeconfig alice exec webserver -- curl http://webserver.gas-production.svc.cluster.local
```
Vice versa, as `gas` tenant owner, verify you can access the service in the `gas` tenant namespace but not in the `oil` tenant namespace
```bash
kubectl --kubeconfig joe exec webserver -- curl http://webserver.gas-production.svc.cluster.local
kubectl --kubeconfig joe exec webserver -- curl http://webserver.oil-production.svc.cluster.local
```
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenants oil gas
```

View File

@@ -1,115 +0,0 @@
# Block privilege escalation
**Profile Applicability:** L1
**Type:** Behavioral Check
**Category:** Control Plane Isolation
**Description:** Control container permissions.
**Rationale:** The security `allowPrivilegeEscalation` setting allows a process to gain more privileges from its parent process. Processes in tenant containers should not be allowed to gain additional privileges.
**Audit:**
As cluster admin, define a `PodSecurityPolicy` that sets `allowPrivilegeEscalation=false` and map the policy to a tenant:
```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: tenant
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
EOF
```
> Note: make sure `PodSecurityPolicy` Admission Control is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`
Then create a ClusterRole granting the use of the said PodSecurityPolicy
```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: tenant:psp
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
resourceNames: ['tenant']
verbs: ['use']
EOF
```
And assign it to the tenant
```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
additionalRoleBindings:
- clusterRoleName: tenant:psp
subjects:
- kind: "Group"
apiGroup: "rbac.authorization.k8s.io"
name: "system:authenticated"
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a pod or container that sets `allowPrivilegeEscalation=true` in its `securityContext`.
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
name: pod-priviliged-mode
namespace: oil-production
labels:
spec:
containers:
- name: busybox
image: busybox:latest
command: ["/bin/sleep", "3600"]
securityContext:
allowPrivilegeEscalation: true
EOF
```
You must have the pod blocked by `PodSecurityPolicy`.
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```

View File

@@ -1,116 +0,0 @@
# Block privileged containers
**Profile Applicability:** L1
**Type:** Behavioral Check
**Category:** Control Plane Isolation
**Description:** Control container permissions.
**Rationale:** By default a container is not allowed to access any devices on the host, but a “privileged” container can access all devices on the host. A process within a privileged container can also get unrestricted host access. Hence, tenants should not be allowed to run privileged containers.
**Audit:**
As cluster admin, define a `PodSecurityPolicy` that sets `privileged=false` and map the policy to a tenant:
```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: tenant
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
EOF
```
> Note: make sure `PodSecurityPolicy` Admission Control is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`
Then create a ClusterRole granting the use of the said PodSecurityPolicy
```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: tenant:psp
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
resourceNames: ['tenant']
verbs: ['use']
EOF
```
And assign it to the tenant
```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
namespace: oil-production
spec:
owners:
- kind: User
name: alice
additionalRoleBindings:
- clusterRoleName: tenant:psp
subjects:
- kind: "Group"
apiGroup: "rbac.authorization.k8s.io"
name: "system:authenticated"
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a pod or container that sets `privileged: true` in its `securityContext`.
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
name: pod-priviliged-mode
namespace:
labels:
spec:
containers:
- name: busybox
image: busybox:latest
command: ["/bin/sleep", "3600"]
securityContext:
privileged: true
EOF
```
You must have the pod blocked by `PodSecurityPolicy`.
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```

View File

@@ -1,40 +0,0 @@
# Block use of existing PVs
**Profile Applicability:** L1
**Type:** Configuration Check
**Category:** Data Isolation
**Description:** Prevent tenants from mounting existing Persistent Volumes.
**Rationale:** Tenants have to be assured that their Persistent Volumes cannot be reclaimed by other tenants.
**Audit:**
As cluster admin, create a tenant
```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
EOF
./create-user.sh alice oil
```
As tenant owner, check if you can access the persistent volumes
```bash
kubectl --kubeconfig alice auth can-i get persistentvolumes
kubectl --kubeconfig alice auth can-i list persistentvolumes
kubectl --kubeconfig alice auth can-i watch persistentvolumes
```
All the requests must return 'no'.

View File

@@ -1,115 +0,0 @@
# Block use of host IPC
**Profile Applicability:** L1
**Type:** Behavioral Check
**Category:** Host Isolation
**Description:** Tenants should not be allowed to share the host's inter-process communication (IPC) namespace.
**Rationale:** The `hostIPC` setting allows pods to share the host's inter-process communication (IPC) namespace allowing potential access to host processes or processes belonging to other tenants.
**Audit:**
As cluster admin, define a `PodSecurityPolicy` that restricts `hostIPC` usage and map the policy to a tenant:
```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: tenant
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
hostIPC: false
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
EOF
```
> Note: make sure `PodSecurityPolicy` Admission Control is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`
Then create a ClusterRole granting the use of the said PodSecurityPolicy
```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: tenant:psp
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
resourceNames: ['tenant']
verbs: ['use']
EOF
```
And assign it to the tenant
```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
namespace: oil-production
spec:
owners:
- kind: User
name: alice
additionalRoleBindings:
- clusterRoleName: tenant:psp
subjects:
- kind: "Group"
apiGroup: "rbac.authorization.k8s.io"
name: "system:authenticated"
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a pod mounting the host IPC namespace.
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
name: pod-with-host-ipc
namespace: oil-production
spec:
hostIPC: true
containers:
- name: busybox
image: busybox:latest
command: ["/bin/sleep", "3600"]
EOF
```
You must have the pod blocked by `PodSecurityPolicy`.
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```

View File

@@ -1,136 +0,0 @@
# Block use of host networking and ports
**Profile Applicability:** L1
**Type:** Behavioral Check
**Category:** Host Isolation
**Description:** Tenants should not be allowed to use host networking and host ports for their workloads.
**Rationale:** Using `hostPort` and `hostNetwork` allows tenant workloads to share the host networking stack, allowing potential snooping of network traffic across application pods.
**Audit:**
As cluster admin, define a `PodSecurityPolicy` that restricts `hostPort` and `hostNetwork` and map the policy to a tenant:
```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: tenant
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
hostNetwork: false
hostPorts: [] # empty means no allowed host ports
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
EOF
```
> Note: make sure `PodSecurityPolicy` Admission Control is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`
Then create a ClusterRole granting the use of the said PodSecurityPolicy
```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: tenant:psp
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
resourceNames: ['tenant']
verbs: ['use']
EOF
```
And assign it to the tenant
```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
namespace: oil-production
spec:
owners:
- kind: User
name: alice
additionalRoleBindings:
- clusterRoleName: tenant:psp
subjects:
- kind: "Group"
apiGroup: "rbac.authorization.k8s.io"
name: "system:authenticated"
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a pod using `hostNetwork`
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
name: pod-with-hostnetwork
namespace: oil-production
spec:
hostNetwork: true
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
EOF
```
As tenant owner, create a pod defining a container using `hostPort`
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
name: pod-with-hostport
namespace: oil-production
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
hostPort: 9090
EOF
```
In both the cases above, you must have the pod blocked by `PodSecurityPolicy`.
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```

View File

@@ -1,129 +0,0 @@
# Block use of host path volumes
**Profile Applicability:** L1
**Type:** Behavioral Check
**Category:** Host Protection
**Description:** Tenants should not be able to mount host volumes and directories.
**Rationale:** The use of host volumes and directories can be used to access shared data or escalate privileges and also creates a tight coupling between a tenant workload and a host.
**Audit:**
As cluster admin, define a `PodSecurityPolicy` that restricts `hostPath` volumes and map the policy to a tenant:
```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: tenant
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
volumes: # hostPath is not permitted
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
- 'persistentVolumeClaim'
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
EOF
```
> Note: make sure `PodSecurityPolicy` Admission Control is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`
Then create a ClusterRole granting the use of the said PodSecurityPolicy
```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: tenant:psp
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
resourceNames: ['tenant']
verbs: ['use']
EOF
```
And assign it to the tenant
```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
namespace: oil-production
spec:
owners:
- kind: User
name: alice
additionalRoleBindings:
- clusterRoleName: tenant:psp
subjects:
- kind: "Group"
apiGroup: "rbac.authorization.k8s.io"
name: "system:authenticated"
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a pod defining a volume of type `hostPath`.
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
name: pod-with-hostpath-volume
namespace: oil-production
spec:
containers:
- name: busybox
image: busybox:latest
command: ["/bin/sleep", "3600"]
volumeMounts:
- mountPath: /tmp
name: volume
volumes:
- name: volume
hostPath:
# directory location on host
path: /data
type: Directory
EOF
```
You must have the pod blocked by `PodSecurityPolicy`.
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```

View File

@@ -1,115 +0,0 @@
# Block use of host PID
**Profile Applicability:** L1
**Type:** Behavioral Check
**Category:** Host Isolation
**Description:** Tenants should not be allowed to share the host process ID (PID) namespace.
**Rationale:** The `hostPID` setting allows pods to share the host process ID namespace allowing potential privilege escalation. Tenant pods should not be allowed to share the host PID namespace.
**Audit:**
As cluster admin, define a `PodSecurityPolicy` that restricts `hostPID` usage and map the policy to a tenant:
```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: tenant
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
hostPID: false
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
EOF
```
> Note: make sure `PodSecurityPolicy` Admission Control is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`
Then create a ClusterRole granting the use of the said PodSecurityPolicy
```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: tenant:psp
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
resourceNames: ['tenant']
verbs: ['use']
EOF
```
And assign it to the tenant
```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
namespace: oil-production
spec:
owners:
- kind: User
name: alice
additionalRoleBindings:
- clusterRoleName: tenant:psp
subjects:
- kind: "Group"
apiGroup: "rbac.authorization.k8s.io"
name: "system:authenticated"
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a pod mounting the host PID namespace.
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
name: pod-with-host-pid
namespace: oil-production
spec:
hostPID: true
containers:
- name: busybox
image: busybox:latest
command: ["/bin/sleep", "3600"]
EOF
```
You must have the pod blocked by `PodSecurityPolicy`.
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```

View File

@@ -1,75 +0,0 @@
# Block use of NodePort services
**Profile Applicability:** L1
**Type:** Behavioral Check
**Category:** Host Isolation
**Description:** Tenants should not be able to create services of type NodePort.
**Rationale:** The `NodePort` service type configures host ports that cannot be secured using Kubernetes network policies and require upstream firewalls. Also, multiple tenants cannot use the same host port numbers.
**Audit:**
As cluster admin, create a tenant
```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
enableNodePorts: false
owners:
- kind: User
name: alice
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a service of type `NodePort` in the tenant namespace
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
namespace: oil-production
spec:
ports:
- protocol: TCP
port: 8080
targetPort: 80
selector:
run: nginx
type: NodePort
EOF
```
You must receive an error message denying the request:
```
Error from server (NodePort service types are forbidden for the tenant:
error when creating "STDIN": admission webhook "services.capsule.clastix.io" denied the request:
NodePort service types are forbidden for the tenant: please, reach out to the system administrators
```
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```

View File

@@ -1,66 +0,0 @@
# Configure namespace object limits
**Profile Applicability:** L1
**Type:** Configuration
**Category:** Fairness
**Description:** Namespace resource quotas should be used to allocate, track and limit the number of objects, of a particular type, that can be created within a namespace.
**Rationale:** Resource quotas must be configured for each tenant namespace, to guarantee isolation and fairness across tenants.
**Audit:**
As cluster admin, create a tenant
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
resourceQuotas:
items:
- hard:
pods: 100
services: 50
services.loadbalancers: 3
services.nodeports: 20
persistentvolumeclaims: 100
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, retrieve the configured quotas in the tenant namespace:
```bash
kubectl --kubeconfig alice get quota
NAME AGE REQUEST LIMIT
capsule-oil-0 23s persistentvolumeclaims: 0/100,
pods: 0/100, services: 0/50,
services.loadbalancers: 0/3,
services.nodeports: 0/20
```
Make sure that a quota is configured for API objects: `PersistentVolumeClaim`, `LoadBalancer`, `NodePort`, `Pods`, etc
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```

View File

@@ -1,65 +0,0 @@
# Configure namespace resource quotas
**Profile Applicability:** L1
**Type:** Configuration
**Category:** Fairness
**Description:** Namespace resource quotas should be used to allocate, track, and limit a tenant's use of shared resources.
**Rationale:** Resource quotas must be configured for each tenant namespace, to guarantee isolation and fairness across tenants.
**Audit:**
As cluster admin, create a tenant
```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
resourceQuotas:
items:
- hard:
limits.cpu: "8"
limits.memory: 16Gi
requests.cpu: "8"
requests.memory: 16Gi
- hard:
requests.storage: 100Gi
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, retrieve the configured quotas in the tenant namespace:
```bash
kubectl --kubeconfig alice get quota
NAME AGE REQUEST LIMIT
capsule-oil-0 24s requests.cpu: 0/8, requests.memory: 0/16Gi limits.cpu: 0/8, limits.memory: 0/16Gi
capsule-oil-1 24s requests.storage: 0/10Gi
```
Make sure that a quota is configured for CPU, memory, and storage resources.
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```

View File

@@ -1,71 +0,0 @@
# Require always imagePullPolicy
**Profile Applicability:** L1
**Type:** Configuration Check
**Category:** Data Isolation
**Description:** Set the image pull policy to Always for tenant workloads.
**Rationale:** Tenants have to be assured that their private images can only be used by those who have the credentials to pull them.
**Audit:**
As cluster admin, create a tenant
```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
imagePullPolicies:
- Always
owners:
- kind: User
name: alice
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a pod in the tenant namespace with `imagePullPolicy=IfNotPresent`
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
name: nginx
namespace: oil-production
spec:
containers:
- name: nginx
image: nginx:latest
imagePullPolicy: IfNotPresent
EOF
```
You must receive an error message denying the request:
```
Error from server
(ImagePullPolicy IfNotPresent for container nginx is forbidden, use one of the followings: Always): error when creating "STDIN": admission webhook "pods.capsule.clastix.io" denied the request:
ImagePullPolicy IfNotPresent for container nginx is forbidden, use one of the followings: Always
```
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```

View File

@@ -1,124 +0,0 @@
# Require PersistentVolumeClaim for storage
**Profile Applicability:** L1
**Type:** Behavioral Check
**Category:** na
**Description:** Tenants should not be able to use any volume type except `PersistentVolumeClaim`.
**Rationale:** In some scenarios, it would be required to disallow usage of any core volume types except PVCs.
**Audit:**
As cluster admin, define a `PodSecurityPolicy` allowing only `PersistentVolumeClaim` volumes and map the policy to a tenant:
```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: tenant
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
volumes:
- 'persistentVolumeClaim'
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
EOF
```
> Note: make sure `PodSecurityPolicy` Admission Control is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`
Then create a ClusterRole granting the use of the said PodSecurityPolicy
```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: tenant:psp
rules:
- apiGroups: ['policy']
resources: ['podsecuritypolicies']
resourceNames: ['tenant']
verbs: ['use']
EOF
```
And assign it to the tenant
```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
namespace: oil-production
spec:
owners:
- kind: User
name: alice
additionalRoleBindings:
- clusterRoleName: tenant:psp
subjects:
- kind: "Group"
apiGroup: "rbac.authorization.k8s.io"
name: "system:authenticated"
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a pod defining a volume of any core type except `PersistentVolumeClaim`. For example:
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
name: pod-with-hostpath-volume
namespace: oil-production
spec:
containers:
- name: busybox
image: busybox:latest
command: ["/bin/sleep", "3600"]
volumeMounts:
- mountPath: /tmp
name: volume
volumes:
- name: volume
hostPath:
# directory location on host
path: /data
type: Directory
EOF
```
You must have the pod blocked by `PodSecurityPolicy`.
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```

View File

@@ -1,87 +0,0 @@
# Require PV reclaim policy of delete
**Profile Applicability:** L1
**Type:** Configuration Check
**Category:** Data Isolation
**Description:** Force a tenant to use a Storage Class with `reclaimPolicy=Delete`.
**Rationale:** Tenants have to be assured that their Persistent Volumes cannot be reclaimed by other tenants.
**Audit:**
As cluster admin, create a Storage Class with `reclaimPolicy=Delete`
```yaml
kubectl create -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: delete-policy
reclaimPolicy: Delete
provisioner: clastix.io/nfs
EOF
```
As cluster admin, create a tenant and assign the above Storage Class
```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
name: oil
spec:
owners:
- kind: User
name: alice
storageClasses:
allowed:
- delete-policy
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a PersistentVolumeClaim in the tenant namespace that omits the Storage Class or uses any other Storage Class:
```yaml
kubectl --kubeconfig alice apply -f - << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc
namespace: oil-production
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 12Gi
EOF
```
You must receive an error message denying the request:
```
Error from server (A valid Storage Class must be used, one of the following (delete-policy)):
error when creating "STDIN": admission webhook "pvc.capsule.clastix.io" denied the request:
A valid Storage Class must be used, one of the following (delete-policy)
```
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete storageclass delete-policy
```

View File

@@ -1,119 +0,0 @@
# Require run as non-root user
**Profile Applicability:** L1
**Type:** Behavioral Check
**Category:** Control Plane Isolation
**Description:** Control container permissions.
**Rationale:** Processes in containers run as the root user (uid 0), by default. To prevent potential compromise of container hosts, specify a least-privileged user ID when building the container image and require that application containers run as non-root users.
**Audit:**
As cluster admin, define a `PodSecurityPolicy` with `runAsUser=MustRunAsNonRoot` and map the policy to a tenant:
```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: MustRunAsNonRoot
  seLinux:
    # Required field; no SELinux restrictions are enforced here.
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
EOF
```
> Note: make sure the `PodSecurityPolicy` Admission Control is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`
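If in doubt, one way to check is to inspect the API server flags; the snippet below assumes a kubeadm-style installation with the static pod manifest at the usual path:
```bash
# Look for PodSecurityPolicy in the list of enabled admission plugins
grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
```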
Then create a ClusterRole granting permission to use the above `PodSecurityPolicy`:
```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tenant:psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['tenant']
  verbs: ['use']
EOF
```
And assign it to the tenant
```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  additionalRoleBindings:
  - clusterRoleName: tenant:psp
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
EOF
./create-user.sh alice oil
```
As tenant owner, run the following command to create a namespace in the given tenant
```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```
As tenant owner, create a pod that does not set `runAsNonRoot` to `true` and does not specify a non-zero `runAsUser` in its `securityContext`:
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-run-as-root
  namespace: oil-production
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
EOF
```
The pod creation must be blocked by the `PodSecurityPolicy`.
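For comparison, a pod that explicitly declares a non-root security context should be admitted (a minimal sketch; the pod name and user ID are arbitrary):
```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-run-as-non-root
  namespace: oil-production
spec:
  securityContext:
    # Satisfy the MustRunAsNonRoot rule with an arbitrary unprivileged UID
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
EOF
```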
**Cleanup:**
As cluster admin, delete all the created resources
```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```


@@ -1,30 +0,0 @@
# Meet the multi-tenancy benchmark MTB
There is not yet a real standard for the multi-tenancy model in Kubernetes, although the [SIG multi-tenancy group](https://github.com/kubernetes-sigs/multi-tenancy) is working on it. SIG multi-tenancy drafted a generic validation schema applicable to multi-tenancy projects. The Multi-Tenancy Benchmarks [MTB](https://github.com/kubernetes-sigs/multi-tenancy/tree/master/benchmarks) are guidelines for the multi-tenant configuration of Kubernetes clusters. Capsule is an open source multi-tenancy operator, and we decided to meet the requirements of MTB.
> N.B. At the time of writing, the MTB is still in development and not ready for use. Strictly speaking, we do not claim official conformance to MTB, but only adherence to the multi-tenancy requirements and best practices it promotes.
|MTB Benchmark |MTB Profile|Capsule Version|Conformance|Notes |
|--------------|-----------|---------------|-----------|-------|
|[Block access to cluster resources](block-access-to-cluster-resources.md)|L1|v0.1.0|✓|---|
|[Block access to multitenant resources](block-access-to-multitenant-resources.md)|L1|v0.1.0|✓|---|
|[Block access to other tenant resources](block-access-to-other-tenant-resources.md)|L1|v0.1.0|✓|MTB draft|
|[Block add capabilities](block-add-capabilities.md)|L1|v0.1.0|✓|---|
|[Require always imagePullPolicy](require-always-imagepullpolicy.md)|L1|v0.1.0|✓|---|
|[Require run as non-root user](require-run-as-non-root-user.md)|L1|v0.1.0|✓|---|
|[Block privileged containers](block-privileged-containers.md)|L1|v0.1.0|✓|---|
|[Block privilege escalation](block-privilege-escalation.md)|L1|v0.1.0|✓|---|
|[Configure namespace resource quotas](configure-namespace-resource-quotas.md)|L1|v0.1.0|✓|---|
|[Block modification of resource quotas](block-modification-of-resource-quotas.md)|L1|v0.1.0|✓|---|
|[Configure namespace object limits](configure-namespace-object-limits.md)|L1|v0.1.0|✓|---|
|[Block use of host path volumes](block-use-of-host-path-volumes.md)|L1|v0.1.0|✓|---|
|[Block use of host networking and ports](block-use-of-host-networking-and-ports.md)|L1|v0.1.0|✓|---|
|[Block use of host PID](block-use-of-host-pid.md)|L1|v0.1.0|✓|---|
|[Block use of host IPC](block-use-of-host-ipc.md)|L1|v0.1.0|✓|---|
|[Block use of NodePort services](block-use-of-nodeport-services.md)|L1|v0.1.0|✓|---|
|[Require PersistentVolumeClaim for storage](require-persistentvolumeclaim-for-storage.md)|L1|v0.1.0|✓|MTB draft|
|[Require PV reclaim policy of delete](require-reclaim-policy-of-delete.md)|L1|v0.1.0|✓|MTB draft|
|[Block use of existing PVs](block-use-of-existing-persistent-volumes.md)|L1|v0.1.0|✓|MTB draft|
|[Block network access across tenant namespaces](block-network-access-across-tenant-namespaces.md)|L1|v0.1.0|✓|MTB draft|
|[Allow self-service management of Network Policies](allow-self-service-management-of-network-policies.md)|L2|v0.1.0|✓|---|
|[Allow self-service management of Roles](allow-self-service-management-of-roles.md)|L2|v0.1.0|✓|MTB draft|
|[Allow self-service management of Role Bindings](allow-self-service-management-of-rolebindings.md)|L2|v0.1.0|✓|MTB draft|
