mirror of
https://github.com/projectcapsule/capsule.git
synced 2026-05-02 07:26:36 +00:00
Improve documentation (#146)
* move docs in a separate folder
* review of readme and add faq
* rewrite use cases
* more use cases
* add new project logo
* minor improvements
42
docs/index.md
Normal file
@@ -0,0 +1,42 @@
# Capsule Documentation

**Capsule** helps you implement a multi-tenant and policy-based environment in your Kubernetes cluster. It has been designed as a micro-services-based ecosystem with a minimalist approach, leveraging only upstream Kubernetes.

Currently, the Capsule ecosystem comprises the following:

* [Capsule Operator](./operator/overview.md)
* [Capsule ns-filter](./ns-filter/overview.md)
* [Capsule Lens extension](lens-extension/overview.md) Coming soon!

## Documentation structure

```command
docs
├── index.md
├── lens-extension
│   └── overview.md
├── ns-filter
│   ├── overview.md
│   ├── sidecar.md
│   └── standalone.md
└── operator
    ├── contributing.md
    ├── getting-started.md
    ├── monitoring.md
    ├── overview.md
    ├── references.md
    └── use-cases
        ├── create-namespaces.md
        ├── custom-resources.md
        ├── images-registries.md
        ├── ingress-classes.md
        ├── ingress-hostnames.md
        ├── multiple-tenants.md
        ├── network-policies.md
        ├── nodes-pool.md
        ├── onboarding.md
        ├── overview.md
        ├── permissions.md
        ├── pod-security-policies.md
        ├── resources-quota-limits.md
        ├── storage-classes.md
        └── taint-namespaces.md
```
2
docs/lens-extension/overview.md
Normal file
@@ -0,0 +1,2 @@
# Capsule extension for Mirantis Lens

Coming soon.
43
docs/ns-filter/overview.md
Normal file
@@ -0,0 +1,43 @@
# Capsule ns-filter

This project is an add-on for the main Capsule Operator.

## The problem to solve

In Capsule, _Tenant Owners_ are not able to list their namespaces:

```
$ kubectl get namespaces
Error from server (Forbidden): namespaces is forbidden: User "alice" cannot list resource "namespaces" in API group "" at the cluster scope
```

The reason, as the error message reports, is that the RBAC _list_ action is available only at cluster scope and it is not granted to Tenant Owners. However, in Capsule, Tenant Owners are always permitted to operate on their own namespaces:

```
$ kubectl auth can-i [get|list|watch|delete] ns oil-production
yes
```
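
For context, the only way plain RBAC can grant the `list` verb on namespaces is cluster-wide. A hypothetical ClusterRole (the names below are illustrative, not shipped by Capsule) makes the all-or-nothing nature of the permission explicit:

```yaml
# Illustrative only: granting `list` on namespaces is all-or-nothing.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-lister        # hypothetical name
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["list"]               # cannot be restricted to "owned" namespaces
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: alice-namespace-lister  # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: namespace-lister
subjects:
- kind: User
  name: alice
```

Binding this role would let `alice` list namespaces, but it would expose every namespace in the cluster, which is exactly what multi-tenancy must avoid.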

Kubernetes RBAC lacks the ability to list only the owned namespaces since there are no ACL-filtered APIs. To overcome this problem, many Kubernetes distributions have introduced custom resources mirroring namespaces, called `Projects`, `Workspaces`, `Spaces`, or similar, supported by a custom set of ACL-filtered APIs. However, this radically changes the user experience of Kubernetes by introducing hard customizations that make it painful to move from one distribution to another.

**Capsule** takes a different approach. One of the key requirements is to keep the same user experience on all distributions of Kubernetes. With Capsule, users do not need to deal with custom resources to deploy their applications. They can use the basic tools they already learned and love, and it just works.

## How it works

Make sure you have a working instance of Capsule before attempting to use it. Use `capsule-ns-filter` if you want to list your namespaces through the `kubectl` command line or through a dashboard.

This project implements a simple reverse proxy intercepting the Kubernetes `api/v1/namespaces` endpoint in order to return only the namespaces assigned to the user, with Capsule doing all the magic behind the scenes. All other endpoints are proxied transparently to the Kubernetes APIs server using the same request, so no side-effects are expected.

The `capsule-ns-filter` can be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client to the `kube-apiserver`. It can also be deployed as a sidecar container in a dashboard backend.
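
The filtering step can be pictured as a transformation on the response body. Assuming `jq` is available (these docs already use it elsewhere), this sketch applies the same kind of selection the proxy performs on `api/v1/namespaces`; the namespace names and the `OWNED` list are made up for the example:

```shell
# Raw response as the kube-apiserver would return it, trimmed to the
# relevant fields (names are made up for the example).
RESPONSE='{"items":[
  {"metadata":{"name":"kube-system"}},
  {"metadata":{"name":"oil-production"}},
  {"metadata":{"name":"gas-marketing"}}]}'

# The namespaces Capsule resolves for the tenant owner, e.g. "alice".
OWNED='["oil-production","gas-marketing"]'

# Keep only the owned items, as the proxy does for api/v1/namespaces;
# every other endpoint would be forwarded untouched.
echo "${RESPONSE}" | jq --argjson owned "${OWNED}" \
  '.items |= map(select(.metadata.name as $n | $owned | index($n)))'
```

Running it keeps only `oil-production` and `gas-marketing`, dropping `kube-system`.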

### Does it work with kubectl?

Yes, it works by intercepting all the requests from the `kubectl` client directed to the APIs server. It works both with users authenticating through TLS certificates and with those using OIDC.

### Does it work with my preferred Kubernetes dashboard?

If you're using a client-only dashboard, for example [Mirantis Lens](https://k8slens.dev/), the `capsule-ns-filter` can be used as in the previous case, since these dashboards usually talk to the APIs server using just a `kubeconfig` file.

For web-based dashboards, like the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard), the `capsule-ns-filter` can be deployed as a sidecar container in the backend side of the dashboard, following the well-known cloud native _Ambassador Pattern_. In such cases, the `capsule-ns-filter` intercepts all the requests coming from the dashboard backend and proxies them to the Kubernetes APIs server.

## What's next

Have fun with `capsule-ns-filter`:

* [Standalone Installation](./standalone.md)
* [Sidecar Installation](./sidecar.md)
118
docs/ns-filter/sidecar.md
Normal file
@@ -0,0 +1,118 @@
# Sidecar Installation

The `capsule-ns-filter` can be deployed as a sidecar container for server-side Kubernetes dashboards. It will intercept all requests sent from the client side to the server side of the dashboard and it will proxy them to the Kubernetes APIs server.

```
                        capsule-ns-filter
                        +------------+
                        |:9001       +--------+
                        +-----^------+        |
                              |               v
         +-----------+   +----+-------+   +------------+
browser->|:443       |-->|:8443       |   |:6443       |
         +-----------+   +------------+   +------------+
       ingress-controller   dashboard     kube-apiserver
       (ssl-passthrough)    server-side
                            backend
```

The server-side backend of the dashboard must allow specifying the URL of the Kubernetes APIs server. For example, the [sidecar-setup.yaml](../deploy/sidecar-setup.yaml) manifest contains an example of deploying with the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard) and the ingress controller in ssl-passthrough mode.

Place the `capsule-ns-filter` in the pod with SSL enabled, i.e. `--enable-ssl=true`, passing valid certificate and key files in a secret.

```yaml
...
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: ns-filter
          image: quay.io/clastix/capsule-ns-filter
          imagePullPolicy: IfNotPresent
          command:
            - /capsule-ns-filter
            - --k8s-control-plane-url=https://kubernetes.default.svc
            - --capsule-user-group=capsule.clastix.io
            - --zap-log-level=5
            - --enable-ssl=true
            - --ssl-cert-path=/opt/certs/tls.crt
            - --ssl-key-path=/opt/certs/tls.key
          volumeMounts:
            - name: ns-filter-certs
              mountPath: /opt/certs
          ports:
            - containerPort: 9001
              name: http
              protocol: TCP
...
```

In the same pod, place the Kubernetes Dashboard in _"out-of-cluster"_ mode with `--apiserver-host=https://localhost:9001` to send all the requests to the `capsule-ns-filter` sidecar container:

```yaml
...
        - name: dashboard
          image: kubernetesui/dashboard:v2.0.4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=cmp-system
            - --tls-cert-file=tls.crt
            - --tls-key-file=tls.key
            - --apiserver-host=https://localhost:9001
            - --kubeconfig=/opt/.kube/config
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            - mountPath: /tmp
              name: tmp-volume
            - mountPath: /opt/.kube
              name: kubeconfig
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
...
```

Make sure you pass a valid `kubeconfig` file to the dashboard pointing to the `capsule-ns-filter` sidecar container instead of the `kube-apiserver` directly:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-dashboard-kubeconfig
  namespace: kubernetes-dashboard
data:
  config: |
    kind: Config
    apiVersion: v1
    clusters:
    - cluster:
        insecure-skip-tls-verify: true
        server: https://localhost:9001 # <- point to the capsule-ns-filter
      name: localhost
    contexts:
    - context:
        cluster: localhost
        user: kubernetes-admin # <- dashboard has cluster-admin permissions
      name: admin@localhost
    current-context: admin@localhost
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: REDACTED
        client-key-data: REDACTED
```

After starting the dashboard, log in as a Tenant Owner user, e.g. `alice`, according to the authentication method in use, and check that you can see only owned namespaces.

The `capsule-ns-filter` can also be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client, such as the `kubectl` command line tool, to the `kube-apiserver`. See [Standalone Installation](./standalone.md).
217
docs/ns-filter/standalone.md
Normal file
@@ -0,0 +1,217 @@
# Standalone Installation

The `capsule-ns-filter` can be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client to the `kube-apiserver`. Use this mode to provide access to client-side command line tools like `kubectl` or even client-side dashboards.

You can use an Ingress Controller to expose the `capsule-ns-filter` endpoint or, depending on your environment, you can expose it with either a `NodePort` or a `LoadBalancer` service. As alternatives, use `HostPort` or `HostNetwork` mode.

```
         +-----------+         +-----------+        +-----------+
kubectl->|:443       |-------->|:9001      |------->|:6443      |
         +-----------+         +-----------+        +-----------+
       ingress-controller   capsule-ns-filter    kube-apiserver
       (ssl-passthrough)
```

The [standalone-setup.yaml](../deploy/standalone-setup.yaml) manifest contains an example for deploying with an Ingress Controller in ssl-passthrough mode.

## Arguments

Arguments to be passed to the `capsule-ns-filter` proxy:

```
--listening-port          HTTP port the proxy listens to, default: 9001
--k8s-control-plane-url   Kubernetes control plane URL, default: https://kubernetes.default.svc
--capsule-user-group      The Capsule User Group, default: capsule.clastix.io
--zap-devel               Enable debug
--zap-log-level           Set log verbosity, from 1 to 10
--enable-ssl              Enable the bind on HTTPS for secure communication, default: false
--ssl-cert-path           Path to the TLS certificate, default: /opt/capsule-ns-filter/tls.crt
--ssl-key-path            Path to the TLS certificate key, default: /opt/capsule-ns-filter/tls.key
```

## TLS Client Authentication

Users relying on TLS client-based authentication with certificate and key are able to talk to `capsule-ns-filter`, since the current implementation of the reverse proxy forwards client certificates to the Kubernetes APIs server.

## OIDC Authentication

The `capsule-ns-filter` works with `kubectl` users relying on token-based authentication, e.g. OIDC or Bearer Token.

In the following example, we'll use an OIDC server, e.g. [Keycloak](https://www.keycloak.org/), capable of providing JWT tokens.

### Configuring Keycloak

Configure Keycloak as OIDC server:

- Add a realm called `caas`, or use any existing realm instead
- Add a group `capsule.clastix.io`
- Add a user `alice` assigned to the group `capsule.clastix.io`
- Add an OIDC client called `kubernetes`
- For the `kubernetes` client, create protocol mappers called `groups` and `audience`

If everything is done correctly, you should now be able to authenticate in Keycloak and see user groups in JWT tokens. Use the following snippet to authenticate in Keycloak as the `alice` user:

```
$ KEYCLOAK=sso.clastix.io
$ REALM=caas
$ OIDC_ISSUER=${KEYCLOAK}/auth/realms/${REALM}

$ curl -k -s https://${OIDC_ISSUER}/protocol/openid-connect/token \
    -d grant_type=password \
    -d response_type=id_token \
    -d scope=openid \
    -d client_id=${OIDC_CLIENT_ID} \
    -d client_secret=${OIDC_CLIENT_SECRET} \
    -d username=${USERNAME} \
    -d password=${PASSWORD} | jq
```

The result will include an `ACCESS_TOKEN`, a `REFRESH_TOKEN`, and an `ID_TOKEN`. The access-token can generally be disregarded for Kubernetes: it would be used if the identity provider managed roles and permissions for the users, but that is done in Kubernetes itself with RBAC. The id-token is short-lived while the refresh-token has a longer expiration. The refresh-token is used to fetch a new id-token when the id-token expires.

```json
{
  "access_token": "ACCESS_TOKEN",
  "refresh_token": "REFRESH_TOKEN",
  "id_token": "ID_TOKEN",
  "token_type": "bearer",
  "scope": "openid groups profile email"
  ...
}
```

To introspect the `ID_TOKEN` token run:

```
$ curl -k -s https://${OIDC_ISSUER}/protocol/openid-connect/introspect \
    -d token=${ID_TOKEN} \
    --user ${OIDC_CLIENT_ID}:${OIDC_CLIENT_SECRET} | jq
```

The result will be like the following:

```json
{
  ...
  "exp": 1601323086,
  "iat": 1601322186,
  "aud": "kubernetes",
  "typ": "ID",
  "azp": "kubernetes",
  "preferred_username": "alice",
  "email_verified": false,
  "acr": "1",
  "groups": [
    "capsule.clastix.io"
  ],
  "client_id": "kubernetes",
  "username": "alice",
  "active": true
  ...
}
```

### Configuring Kubernetes API Server

Configuring Kubernetes for OIDC Authentication requires adding several parameters to the API Server. Please refer to the [documentation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) for details and examples. Most likely, your `kube-apiserver.yaml` manifest will look like the following:

```yaml
...
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --oidc-issuer-url=https://${OIDC_ISSUER}
    - --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
    - --oidc-client-id=${OIDC_CLIENT_ID}
    - --oidc-username-claim=preferred_username
    - --oidc-groups-claim=groups
    - --oidc-username-prefix=-
```

### Configuring Capsule

Make sure to have a working instance of the Capsule Operator in your Kubernetes cluster before attempting to use `capsule-ns-filter`. Please refer to the Capsule [documentation](https://github.com/clastix/capsule) for details and examples.

You should have one or more tenants defined, e.g. `oil` and `gas`, assigned to the user `alice`.

As cluster admin, check the tenants are in place:

```
$ kubectl get tenants
NAME   NAMESPACE QUOTA   NAMESPACE COUNT   OWNER NAME   OWNER KIND   AGE
foo    3                 1                 joe          User         4d
gas    3                 0                 alice        User         1d
oil    9                 0                 alice        User         1d
```

### Configuring kubectl

There are two options to use `kubectl` with OIDC:

- OIDC Authenticator
- Use the `--token` option

To use the OIDC Authenticator, add an `oidc` user entry to your `kubeconfig` file:

```
$ kubectl config set-credentials oidc \
    --auth-provider=oidc \
    --auth-provider-arg=idp-issuer-url=https://${OIDC_ISSUER} \
    --auth-provider-arg=idp-certificate-authority=/path/to/ca.crt \
    --auth-provider-arg=client-id=${OIDC_CLIENT_ID} \
    --auth-provider-arg=client-secret=${OIDC_CLIENT_SECRET} \
    --auth-provider-arg=refresh-token=${REFRESH_TOKEN} \
    --auth-provider-arg=id-token=${ID_TOKEN} \
    --auth-provider-arg=extra-scopes=groups
```

To use the `--token` option:

```
$ kubectl config set-credentials oidc --token=${ID_TOKEN}
```

Point kubectl to the URL where the `capsule-ns-filter` service is reachable:

```
$ kubectl config set-cluster mycluster \
    --server=https://kube.clastix.io \
    --certificate-authority=~/.kube/ca.crt
```

Create a new context for the OIDC authenticated users:

```
$ kubectl config set-context alice-oidc@mycluster \
    --cluster=mycluster \
    --user=oidc
```

As user `alice`, you should be able to use `kubectl` to create some namespaces:

```
$ kubectl --context alice-oidc@mycluster create namespace oil-production
$ kubectl --context alice-oidc@mycluster create namespace oil-development
$ kubectl --context alice-oidc@mycluster create namespace gas-marketing
```

and list only those namespaces:

```
$ kubectl --context alice-oidc@mycluster get namespaces
NAME              STATUS   AGE
gas-marketing     Active   2m
oil-development   Active   2m
oil-production    Active   2m
```

When logged in as a cluster-admin power user, you should be able to see all namespaces:

```
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   78d
kube-node-lease   Active   78d
kube-public       Active   78d
kube-system       Active   78d
gas-marketing     Active   2m
oil-development   Active   2m
oil-production    Active   2m
```

_Nota Bene_: once your `ID_TOKEN` expires, the `kubectl` OIDC Authenticator will attempt to automatically refresh your `ID_TOKEN` using the `REFRESH_TOKEN`, the `OIDC_CLIENT_ID`, and the `OIDC_CLIENT_SECRET`, storing the new values for the `REFRESH_TOKEN` and `ID_TOKEN` in your `kubeconfig` file.

In case the OIDC server uses a self-signed CA certificate, make sure to specify it with the `idp-certificate-authority` option in your `kubeconfig` file, otherwise you'll not be able to refresh the tokens. Once the `REFRESH_TOKEN` has expired, you will need to refresh the tokens manually.
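
To see when a given `ID_TOKEN` will expire, you can decode its payload yourself: a JWT payload is just encoded JSON, so no secret is needed to read the `exp` claim. A sketch using a dummy token (a real Keycloak `ID_TOKEN` would be signed and base64url-encoded, but for ASCII-only claims plain base64 is enough for the illustration):

```shell
# Build a dummy token carrying the two claims we care about
# (illustrative only, not a real Keycloak token).
PAYLOAD=$(printf '%s' '{"preferred_username":"alice","exp":1601323086}' | base64 | tr -d '\n')
ID_TOKEN="header.${PAYLOAD}.signature"

# The payload is encoded JSON: decode the second dot-separated part.
echo "${ID_TOKEN}" | cut -d. -f2 | base64 -d | jq .

# Extract the expiry to know when the refresh will kick in.
echo "${ID_TOKEN}" | cut -d. -f2 | base64 -d | jq -r .exp
```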

## RBAC

The service account used for `capsule-ns-filter` needs to have `cluster-admin` permissions.

## Configuring client-only dashboards

If you're using a client-only dashboard, for example [Mirantis Lens](https://k8slens.dev/), the `capsule-ns-filter` can be used as in the previous `kubectl` example, since Lens just needs a `kubeconfig` file. Assuming you use a `kubeconfig` file containing a valid OIDC token released for the `alice` user, you can access the cluster with the Lens dashboard and see only the namespaces belonging to Alice's tenants.

For web-based dashboards, like the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard), the `capsule-ns-filter` can be installed as a sidecar container. See [Sidecar Installation](./sidecar.md).
246
docs/operator/contributing.md
Normal file
@@ -0,0 +1,246 @@
# How to contribute to Capsule

First, thanks for your interest in Capsule: any contribution is welcome!

The first step is to set up your local development environment as stated below.

## Setting up the development environment

The following dependencies are mandatory:

- [Go 1.13.8](https://golang.org/dl/)
- [OperatorSDK 1.9](https://github.com/operator-framework/operator-sdk)
- [Kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
- [KinD](https://github.com/kubernetes-sigs/kind)
- [ngrok](https://ngrok.com/) (if you want to run locally)
- [golangci-lint](https://github.com/golangci/golangci-lint)

### Installing Go dependencies

After cloning Capsule into any folder, enter it and issue the following command to ensure all dependencies are properly downloaded.

```
go mod download
```

### Installing Operator SDK

Some operations, like the Docker image build process or the code generation of the CRD manifests, as well as the deep copy functions, require _Operator SDK_: the binary has to be installed into your `PATH`.

### Installing Kubebuilder

With the latest release of OperatorSDK there's a tighter integration with Kubebuilder and its opinionated testing suite: ensure you download the latest binaries available from the _Releases_ GitHub page and place them into the `/usr/local/kubebuilder/bin` folder, ensuring this is also in your `PATH`.

### Installing KinD

Capsule can run on any certified Kubernetes installation; locally, the whole development is performed on _KinD_, also known as [Kubernetes in Docker](https://github.com/kubernetes-sigs/kind).

> N.B.: Docker is a hard requirement since KinD is based on it

According to your operating system and architecture, download the right binary and place it in your `PATH`.

Once done, you're ready to bootstrap, in a matter of seconds, a fully functional Kubernetes cluster.

```
# kind create cluster --name capsule
Creating cluster "capsule" ...
 ✓ Ensuring node image (kindest/node:v1.18.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-capsule"
You can now use your cluster with:

kubectl cluster-info --context kind-capsule

Thanks for using kind! 😊
```

The current `KUBECONFIG` will be populated with the `cluster-admin` certificates and the context changed to the newly created Kubernetes cluster.

### Build the Docker image and push it to KinD

From the root path, issue the _make_ recipe:

```
# make docker-build
```

The image `quay.io/clastix/capsule:<tag>` will be available locally, where `<tag>` is the latest available [release](https://github.com/clastix/capsule/releases).

Push it to _KinD_ with the following command:

```
# kind load docker-image --nodes capsule-control-plane --name capsule quay.io/clastix/capsule:<tag>
```

### Deploy the Kubernetes manifests

With the current `kind-capsule` context enabled, deploy all the required manifests issuing the following command:

```
make deploy
```

This will automatically install all the required Kubernetes resources.

You can check if Capsule is running by tailing the logs:

```
# kubectl -n capsule-system logs --all-containers -f -l control-plane=controller-manager
```

Since Capsule is built using _OperatorSDK_, logging is handled by the zap module: log verbosity of the Capsule controller can be increased by passing the `--zap-log-level` option with a value from `1` to `10` or one of the [basic keywords](https://godoc.org/go.uber.org/zap/zapcore#Level), although it is suggested to use the `--zap-devel` flag to also get stack traces.

> CA generation
>
> You could notice a restart of the Capsule pod upon installation; that's ok:
> Capsule is generating the CA and populating the Secret containing the TLS
> certificate to handle the webhooks, and the whole application needs to be
> reloaded to properly serve HTTPS requests.

### Run Capsule locally

Debugging remote applications is always a struggle, but Operators just need access to the Kubernetes API Server.

#### Scaling down the remote Pod

First, ensure the Capsule pod is not running by scaling down the Deployment.

```
# kubectl -n capsule-system scale deployment capsule-controller-manager --replicas=0
deployment.apps/capsule-controller-manager scaled
```

> This is mandatory since Capsule uses Leader Election

#### Providing TLS certificate for webhooks

The next step is to replicate the same environment Capsule expects in the Pod, which means creating a fake certificate to handle HTTPS requests.

```bash
mkdir -p /tmp/k8s-webhook-server/serving-certs
kubectl -n capsule-system get secret capsule-tls -o jsonpath='{.data.tls\.crt}' | base64 -d > /tmp/k8s-webhook-server/serving-certs/tls.crt
kubectl -n capsule-system get secret capsule-tls -o jsonpath='{.data.tls\.key}' | base64 -d > /tmp/k8s-webhook-server/serving-certs/tls.key
```

> We're using the certificates generated upon first installation of Capsule:
> it means the Secret will be populated at first start-up.
> If you plan to run it locally from the beginning, you will need
> to provide a self-signed certificate in the said directory.

#### Starting NGROK

In another session, we need a `ngrok` tunnel, mandatory to also debug webhooks (YMMV).

```
# ngrok http https://localhost:9443
ngrok by @inconshreveable

Session Status                online
Account                       Dario Tranchitella (Plan: Free)
Version                       2.3.35
Region                        United States (us)
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://cdb72b99348c.ngrok.io -> https://localhost:9443
Forwarding                    https://cdb72b99348c.ngrok.io -> https://localhost:9443

Connections                   ttl     opn     rt1     rt5     p50     p90
                              0       0       0.00    0.00    0.00    0.00
```

What we need is the _ngrok_ URL (in this case, `https://cdb72b99348c.ngrok.io`) since we're going to use this URL as the `url` parameter for the _Dynamic Admission Control Webhooks_.

#### Patching the MutatingWebhookConfiguration

Now it's time to patch the _MutatingWebhookConfiguration_ and the _ValidatingWebhookConfiguration_ too, adding the said `ngrok` URL as the base for each defined webhook, as follows:

```diff
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: capsule-mutating-webhook-configuration
webhooks:
- name: owner.namespace.capsule.clastix.io
  failurePolicy: Fail
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["namespaces"]
  clientConfig:
+   url: https://cdb72b99348c.ngrok.io/mutate-v1-namespace-owner-reference
-   caBundle:
-   service:
-     namespace: system
-     name: capsule
-     path: /mutate-v1-namespace-owner-reference
...
```

#### Run Capsule

Finally, it's time to run Capsule locally using your preferred IDE (or not): from the project root path, you can issue the following command.

```
make run
```

All the logs will start to flow to your standard output; feel free to attach your debugger to set breakpoints as well!

## Code convention

Changes must follow the Pull Request process, where a _GitHub Action_ will run `golangci-lint`, so ensure your changes respect the coding standard.

### golint

You can easily check by issuing the _Make_ recipe `golint`.

```
# make golint
golangci-lint run
```

### goimports

Also, the Go import statements must be sorted following the best practice:

```
<STANDARD LIBRARY>

<EXTERNAL PACKAGES>

<LOCAL PACKAGES>
```

To help you out, you can use the _Make_ recipe `goimports`:

```
# make goimports
goimports -w -l -local "github.com/clastix/capsule" .
```

### Commits

All Pull Requests must refer to an already open issue: this is the first phase of contributing, and it also informs maintainers about the issue.

The commit's first line should not exceed 50 columns.

A commit description is welcome to further explain the changes: just put a blank line after the first line, followed by an arbitrary number of lines of at most 72 characters, with at most one blank line between them.
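
As an illustration, a commit following these rules might look like this (the subject and body below are invented):

```
Add tenant node selector validation

Reject Tenant resources whose node selector refers to labels that
are not allowed by the cluster administrator, so tenant owners
cannot schedule workloads on reserved node pools.
```

The subject stays under 50 columns and the body lines under 72.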

Please split changes into several small, documented commits: this will help us perform a better review.

> In case of errors or of changes needed to previous commits,
> fix them by squashing, to keep changes atomic.
123
docs/operator/getting-started.md
Normal file
@@ -0,0 +1,123 @@
# Getting started

Thanks for giving Capsule a try.

## Installation

Make sure you have access to a Kubernetes cluster as administrator.

There are two ways to install Capsule:

* Use the Helm Chart available [here](https://github.com/clastix/capsule/tree/master/charts/capsule)
* Use [`kustomize`](https://github.com/kubernetes-sigs/kustomize)

### Install with kustomize

Ensure you have `kubectl` and `kustomize` installed in your `PATH`.

Clone this repository and move to the repo folder:

```
$ git clone https://github.com/clastix/capsule
$ cd capsule
$ make deploy
```

It will install the Capsule controller in a dedicated namespace `capsule-system`.

## Create your first Tenant

In Capsule, a _Tenant_ is an abstraction to group together multiple namespaces in a single entity, within a set of boundaries defined by the Cluster Administrator. The tenant is then assigned to a user or group of users, called the _Tenant Owner_.

Capsule defines a Tenant as a Custom Resource with cluster scope:

```yaml
cat <<EOF > oil_tenant.yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  namespaceQuota: 3
EOF
```

Apply as cluster admin:

```
$ kubectl apply -f oil_tenant.yaml
tenant.capsule.clastix.io/oil created
```

You can check the tenant just created as cluster admin:

```
$ kubectl get tenants
NAME   NAMESPACE QUOTA   NAMESPACE COUNT   OWNER NAME   OWNER KIND   NODE SELECTOR   AGE
oil    3                 0                 alice        User                         1m
```

## Tenant owners

Each tenant comes with a delegated user or group of users acting as the tenant admin. In the Capsule jargon, this is called the _Tenant Owner_. Other users can operate inside a tenant with different levels of permissions and authorizations, assigned directly by the Tenant Owner.

Capsule does not care about the authentication strategy used in the cluster, and all the Kubernetes methods of [authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/) are supported. The only requirement to use Capsule is to assign tenant users to the group defined by the `--capsule-user-group` option, which defaults to `capsule.clastix.io`.

Assignment to a group depends on the authentication strategy in your cluster.

For example, if you are using `capsule.clastix.io`, users authenticated through an _X.509_ certificate must have `capsule.clastix.io` as _Organization_: `-subj "/CN=${USER}/O=capsule.clastix.io"`
|
||||
|
||||
Users authenticated through an _OIDC token_ must have
|
||||
|
||||
```json
|
||||
...
|
||||
"users_groups": [
|
||||
"capsule.clastix.io",
|
||||
"other_group"
|
||||
]
|
||||
```
|
||||
|
||||
in their token.
|
||||
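To see what the X.509 requirement looks like in practice, here is a minimal sketch of generating a key and a CSR whose _Organization_ carries the Capsule group (the `alice` username and file names are illustrative):

```shell
# Generate a private key and a CSR for user "alice";
# the O= field places her in the capsule.clastix.io group
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -out alice.csr \
    -subj "/CN=alice/O=capsule.clastix.io"

# Inspect the subject of the CSR
openssl req -in alice.csr -noout -subject
```

The CSR would then be submitted for signing, e.g. through the Kubernetes `CertificateSigningRequest` API, which is what the `hack/create-user.sh` helper automates.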

The [hack/create-user.sh](hack/create-user.sh) script can help you set up a dummy `kubeconfig` for the `alice` user acting as owner of a tenant called `oil`:

```bash
./hack/create-user.sh alice oil
creating certs in TMPDIR /tmp/tmp.4CLgpuime3
Generating RSA private key, 2048 bit long modulus (2 primes)
............+++++
........................+++++
e is 65537 (0x010001)
certificatesigningrequest.certificates.k8s.io/alice-oil created
certificatesigningrequest.certificates.k8s.io/alice-oil approved
kubeconfig file is: alice-oil.kubeconfig
to use it as alice export KUBECONFIG=alice-oil.kubeconfig
```

Log in as the tenant owner

```
$ export KUBECONFIG=alice-oil.kubeconfig
```

and create a couple of new namespaces

```
$ kubectl create namespace oil-production
$ kubectl create namespace oil-development
```

As user `alice` you can operate with full admin permissions:

```
$ kubectl -n oil-development run nginx --image=docker.io/nginx
$ kubectl -n oil-development get pods
```

but limited to only your own namespaces:

```
$ kubectl -n kube-system get pods
Error from server (Forbidden): pods is forbidden: User "alice" cannot list resource "pods" in API group "" in the namespace "kube-system"
```

# What’s next
Tenant Owners have full administrative permissions limited to the namespaces in the assigned tenant. However, their permissions can be controlled by the Cluster Admin by setting rules and policies on the assigned tenant. See the [use cases](./use-cases/overview.md) page for more cool things you can do with Capsule.
2
docs/operator/monitoring.md
Normal file
@@ -0,0 +1,2 @@
# Monitoring Capsule
Coming soon.
36
docs/operator/overview.md
Normal file
@@ -0,0 +1,36 @@
# Kubernetes multi-tenancy made simple
**Capsule** helps to implement a multi-tenancy and policy-based environment in your Kubernetes cluster. It is not intended to be yet another _PaaS_; instead, it has been designed as a micro-services-based ecosystem with a minimalist approach, leveraging only upstream Kubernetes.

# What's the problem with the current status?
Kubernetes introduced the _Namespace_ resource to create logical partitions of the cluster as isolated *slices*. However, implementing advanced multi-tenancy scenarios soon becomes complicated because of the flat structure of Kubernetes namespaces. To overcome this, cluster admins tend to provision a dedicated cluster for each group of users, teams, or departments. As an organization grows, the number of clusters to manage and keep aligned becomes an operational nightmare, described as the well-known phenomenon of _clusters sprawl_.

**Capsule** takes a different approach. In a single cluster, it aggregates multiple namespaces in a lightweight abstraction called _Tenant_. Within each tenant, users are free to create their namespaces and share all the resources, while different tenants remain isolated from each other. The _Network and Security Policies_, _Resource Quota_, _Limit Ranges_, _RBAC_, and other policies defined at the tenant level are automatically inherited by all the namespaces in the tenant. Users are free to operate their tenants in autonomy, without the intervention of the cluster administrator.

### Self-Service
Leave developers the freedom to self-provision their cluster resources according to the assigned boundaries.

### Preventing Clusters Sprawl
Share a single cluster with multiple teams, groups of users, or departments, saving operational and management efforts.

### Governance
Leverage Kubernetes Admission Controllers to enforce the industry security best practices and meet legal requirements.

### Resources Control
Take control of the resources consumed by users while preventing them from overtaking.

### Native Experience
Provide multi-tenancy with a native Kubernetes experience without introducing additional management layers, plugins, or customised binaries.

### Bring your own device (BYOD)
Assign to tenants a dedicated set of compute, storage, and network resources and avoid the noisy neighbors' effect.

# Common use cases for Capsule
Please refer to the corresponding [section](./use-cases/overview.md) in the project documentation for a detailed list of common use cases that Capsule can address.

# What’s next
Have fun with Capsule:

* [Getting Started](./getting-started.md)
* [Use Cases](./use-cases/overview.md)
* [Contributing](./contributing.md)
* [References](./references.md)
697
docs/operator/references.md
Normal file
@@ -0,0 +1,697 @@
# Reference

* [Custom Resource Definition](#custom-resource-definition)
  * [Metadata](#metadata)
    * [name](#name)
  * [Spec](#spec)
    * [owner](#owner)
    * [nodeSelector](#nodeselector)
    * [namespaceQuota](#namespacequota)
    * [namespacesMetadata](#namespacesmetadata)
    * [servicesMetadata](#servicesmetadata)
    * [ingressClasses](#ingressclasses)
    * [ingressHostNames](#ingresshostnames)
    * [storageClasses](#storageclasses)
    * [containerRegistries](#containerregistries)
    * [additionalRoleBindings](#additionalrolebindings)
    * [resourceQuotas](#resourcequotas)
    * [limitRanges](#limitranges)
    * [networkPolicies](#networkpolicies)
    * [externalServiceIPs](#externalserviceips)
  * [Status](#status)
    * [size](#size)
    * [namespaces](#namespaces)
* [Role Based Access Control](#role-based-access-control)
* [Admission Controllers](#admission-controllers)
* [Command Options](#command-options)
* [Created Resources](#created-resources)

## Custom Resource Definition
The Capsule operator uses a single Custom Resource Definition (CRD) for _Tenants_. Please see the [Tenant Custom Resource Definition](https://github.com/clastix/capsule/blob/master/config/crd/bases/capsule.clastix.io_tenants.yaml). In Capsule, Tenants are cluster-wide resources, so you need cluster-level permissions to work with tenants.

### Metadata
#### name
Metadata `name` can contain any valid symbol from the regex: `[a-z0-9]([-a-z0-9]*[a-z0-9])?`.

### Spec
#### owner
The field `owner` is the only mandatory spec in a _Tenant_ manifest. It specifies the ownership of the tenant:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  owner: # required
    name: <name>
    kind: <User|Group>
```

The user and group names should be valid identities. Capsule does not care about the authentication strategy used in the cluster and all the Kubernetes methods of [Authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/) are supported. The only requirement to use Capsule is to assign tenant users to the group defined by the `--capsule-user-group` option, which defaults to `capsule.clastix.io`.

Assignment to a group depends on the used authentication strategy.

For example, if you are using `capsule.clastix.io`, users authenticated through an _X.509_ certificate must have `capsule.clastix.io` as _Organization_: `-subj "/CN=${USER}/O=capsule.clastix.io"`

Users authenticated through an _OIDC token_ must have

```json
...
"users_groups": [
  "capsule.clastix.io",
  "other_group"
]
```

in their token.

Permissions are controlled by RBAC.
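For example, ownership can also be granted to a whole group rather than a single user; a sketch, assuming an `oil-users` group exists in your identity provider:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    # every member of the oil-users group acts as tenant owner
    name: oil-users
    kind: Group
```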
#### nodeSelector
The field `nodeSelector` specifies the label to control the placement of pods on a given pool of worker nodes:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  nodeSelector:
    <key>: <value>
```

All namespaces created within the tenant will have the annotation:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: 'key=value'
```

This annotation tells the Kubernetes scheduler to place pods on the nodes having that label:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: sample
spec:
  nodeSelector:
    <key>: <value>
```

> NB:
> While Capsule just enforces the annotation `scheduler.alpha.kubernetes.io/node-selector` at namespace level,
> the `nodeSelector` field in the pod template is under the control of the default _PodNodeSelector_ admission plugin enabled
> on the Kubernetes API server using the flag `--enable-admission-plugins=PodNodeSelector`.

Please see the [Assigning Pods to Nodes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) documentation.

The tenant owner is not allowed to change or remove the annotation above from the namespace.
#### namespaceQuota
The field `namespaceQuota` specifies the maximum number of namespaces allowed for that tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  namespaceQuota: <quota>
```

Once the namespace quota assigned to the tenant has been reached, the tenant owner cannot create further namespaces.
#### namespacesMetadata
The field `namespacesMetadata` specifies additional labels and annotations the Capsule operator places on any _Namespace_ in the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  namespacesMetadata:
    additionalAnnotations:
      <annotations>
    additionalLabels:
      <key>: <value>
```

All namespaces in the tenant will have:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    <annotations>
  labels:
    <key>: <value>
```

The tenant owner is not allowed to change or remove such labels and annotations from the namespace.
#### servicesMetadata
The field `servicesMetadata` specifies additional labels and annotations the Capsule operator places on any _Service_ in the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  servicesMetadata:
    additionalAnnotations:
      <annotations>
    additionalLabels:
      <key>: <value>
```

All services in the tenant will have:

```yaml
kind: Service
apiVersion: v1
metadata:
  annotations:
    <annotations>
  labels:
    <key>: <value>
```

The tenant owner is not allowed to change or remove such labels and annotations from the _Service_.
#### ingressClasses
The field `ingressClasses` specifies the _IngressClasses_ assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  ingressClasses:
    allowed:
    - <class>
    allowedRegex: <regex>
```

Capsule ensures that all the _Ingress_ resources created in the tenant can use only one of the allowed _IngressClasses_.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name>
  namespace: <namespace>
  annotations:
    kubernetes.io/ingress.class: <class>
```

> NB: _Ingress_ resources are supported in both versions, `networking.k8s.io/v1beta1` and `networking.k8s.io/v1`.

Allowed _IngressClasses_ are reported into namespaces as annotations, so the tenant owner can check them:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    capsule.clastix.io/ingress-classes: <class>
    capsule.clastix.io/ingress-classes-regexp: <regex>
```

Any attempt by the tenant owner to use a disallowed _IngressClass_ will fail.
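As a concrete sketch, a tenant restricted to the `nginx` class plus any class whose name matches `^custom-.*$` (both values are illustrative) could be declared as:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  ingressClasses:
    # exact matches and a regex can be combined
    allowed:
    - nginx
    allowedRegex: ^custom-.*$
```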
#### ingressHostNames
The field `ingressHostNames` specifies the allowed hostnames in _Ingresses_ for the given tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  ingressHostNames:
    allowed:
    - <hostname>
    allowedRegex: <regex>
```

Capsule ensures that all _Ingress_ resources created in the tenant can use only one of the allowed hostnames.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name>
  namespace: <namespace>
spec:
  rules:
  - host: <hostname>
    http: {}
```

> NB: _Ingress_ resources are supported in both versions, `networking.k8s.io/v1beta1` and `networking.k8s.io/v1`.

Any attempt by the tenant owner to use a disallowed hostname will fail.
#### storageClasses
The field `storageClasses` specifies the _StorageClasses_ assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  storageClasses:
    allowed:
    - <class>
    allowedRegex: <regex>
```

Capsule ensures that all _PersistentVolumeClaim_ resources created in the tenant can use only one of the allowed _StorageClasses_.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <name>
  namespace: <namespace>
spec:
  storageClassName: <class>
```

Allowed _StorageClasses_ are reported into namespaces as annotations, so the tenant owner can check them:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    capsule.clastix.io/storage-classes: <class>
    capsule.clastix.io/storage-classes-regexp: <regex>
```

Any attempt by the tenant owner to use a disallowed _StorageClass_ will fail.
#### containerRegistries
The field `containerRegistries` specifies the trusted image registries assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  containerRegistries:
    allowed:
    - <registry>
    allowedRegex: <regex>
```

Capsule ensures that all _Pod_ resources created in the tenant can use only one of the allowed trusted registries.

Allowed registries are reported into namespaces as annotations, so the tenant owner can check them:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    capsule.clastix.io/allowed-registries-regexp: <regex>
    capsule.clastix.io/registries: <registry>
```

Any attempt by the tenant owner to use a disallowed registry will fail.

> NB:
> In the case of naked and official images hosted on Docker Hub, Capsule is going
> to retrieve the registry even if it's not explicit: a `busybox:latest` Pod
> running in a Tenant allowing `docker.io` will not be blocked, even if the image
> field is not explicit as `docker.io/busybox:latest`.
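A concrete sketch, allowing images from Docker Hub and from a hypothetical internal registry:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  containerRegistries:
    allowed:
    - docker.io
    # hypothetical internal registry, matched by regex
    allowedRegex: ^registry\.internal\.example\.com$
```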
#### additionalRoleBindings
The field `additionalRoleBindings` specifies additional _RoleBindings_ assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  additionalRoleBindings:
  - clusterRoleName: <ClusterRole>
    subjects:
    - kind: <Group|User|ServiceAccount>
      apiGroup: rbac.authorization.k8s.io
      name: <name>
```

Capsule will ensure that all namespaces in the tenant always contain the _RoleBinding_ for the given _ClusterRole_.
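For instance, the built-in [`view` user-facing role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) could be granted to a hypothetical `oil-auditors` group across every namespace of the tenant:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  additionalRoleBindings:
  - clusterRoleName: view   # built-in read-only ClusterRole
    subjects:
    - kind: Group
      apiGroup: rbac.authorization.k8s.io
      name: oil-auditors    # hypothetical auditors group
```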
#### resourceQuotas
The field `resourceQuotas` specifies a list of _ResourceQuota_ resources assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  resourceQuotas:
  - hard:
      limits.cpu: <hard_value>
      limits.memory: <hard_value>
      requests.cpu: <hard_value>
      requests.memory: <hard_value>
```

Please refer to the [ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) documentation for the details.

The assigned quotas are inherited by any namespace created in the tenant:

```yaml
kind: ResourceQuota
apiVersion: v1
metadata:
  name: compute
  namespace: <namespace>
  labels:
    capsule.clastix.io/resource-quota: "0"
    capsule.clastix.io/tenant: tenant
  annotations:
    # used resources in the tenant
    quota.capsule.clastix.io/used-limits.cpu: <tenant_used_value>
    quota.capsule.clastix.io/used-limits.memory: <tenant_used_value>
    quota.capsule.clastix.io/used-requests.cpu: <tenant_used_value>
    quota.capsule.clastix.io/used-requests.memory: <tenant_used_value>
    # hard quota for the tenant
    quota.capsule.clastix.io/hard-limits.cpu: <tenant_hard_value>
    quota.capsule.clastix.io/hard-limits.memory: <tenant_hard_value>
    quota.capsule.clastix.io/hard-requests.cpu: <tenant_hard_value>
    quota.capsule.clastix.io/hard-requests.memory: <tenant_hard_value>
spec:
  hard:
    limits.cpu: <hard_value>
    limits.memory: <hard_value>
    requests.cpu: <hard_value>
    requests.memory: <hard_value>
status:
  hard:
    limits.cpu: <namespace_hard_value>
    limits.memory: <namespace_hard_value>
    requests.cpu: <namespace_hard_value>
    requests.memory: <namespace_hard_value>
  used:
    limits.cpu: <namespace_used_value>
    limits.memory: <namespace_used_value>
    requests.cpu: <namespace_used_value>
    requests.memory: <namespace_used_value>
```

The Capsule operator aggregates _ResourceQuota_ at the tenant level, so that the hard quota is never exceeded for the given tenant. This permits the tenant owner to consume resources in the tenant regardless of the namespace.

The annotations

```yaml
quota.capsule.clastix.io/used-<resource>: <tenant_used_value>
quota.capsule.clastix.io/hard-<resource>: <tenant_hard_value>
```

are updated in real time by Capsule, according to the actual aggregated usage of resources in the tenant.

> NB:
> While Capsule controls quota at the tenant level, at the namespace level the quota enforcement
> is under the control of the default _ResourceQuota Admission Controller_ enabled on the
> Kubernetes API server using the flag `--enable-admission-plugins=ResourceQuota`.

The tenant owner is not allowed to change or remove the _ResourceQuota_ from the namespace.
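A sketch with illustrative values: the quota below caps the whole tenant, across all of its namespaces, at 8 CPUs and 16Gi of memory for both requests and limits:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  resourceQuotas:
  - hard:
      limits.cpu: "8"
      limits.memory: 16Gi
      requests.cpu: "8"
      requests.memory: 16Gi
```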
#### limitRanges
The field `limitRanges` specifies the _LimitRanges_ assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  limitRanges:
  - limits:
    - type: Pod
      max:
        cpu: <value>
        memory: <value>
      min:
        cpu: <value>
        memory: <value>
    - type: Container
      default:
        cpu: <value>
        memory: <value>
      defaultRequest:
        cpu: <value>
        memory: <value>
      max:
        cpu: <value>
        memory: <value>
      min:
        cpu: <value>
        memory: <value>
    - type: PersistentVolumeClaim
      max:
        storage: <value>
      min:
        storage: <value>
```

Please refer to the [LimitRange](https://kubernetes.io/docs/concepts/policy/limit-range/) documentation for the details.

The assigned _LimitRanges_ are inherited by any namespace created in the tenant:

```yaml
kind: LimitRange
apiVersion: v1
metadata:
  name: <name>
  namespace: <namespace>
spec:
  limits:
  - type: Pod
    max:
      cpu: <value>
      memory: <value>
    min:
      cpu: <value>
      memory: <value>
  - type: Container
    default:
      cpu: <value>
      memory: <value>
    defaultRequest:
      cpu: <value>
      memory: <value>
    max:
      cpu: <value>
      memory: <value>
    min:
      cpu: <value>
      memory: <value>
  - type: PersistentVolumeClaim
    max:
      storage: <value>
    min:
      storage: <value>
```

> NB:
> Limit ranges enforcement for a single pod, container, and persistent volume
> claim is done by the default _LimitRanger Admission Controller_ enabled on
> the Kubernetes API server using the flag
> `--enable-admission-plugins=LimitRanger`.

Since limit ranges apply to individual resources, there is no tenant-level aggregate to track.

The tenant owner is not allowed to change or remove _LimitRanges_ from the namespace.
#### networkPolicies
The field `networkPolicies` specifies the _NetworkPolicies_ assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  networkPolicies:
  - policyTypes:
    - Ingress
    - Egress
    egress:
    - to:
      - ipBlock:
          cidr: <value>
    ingress:
    - from:
      - namespaceSelector: {}
      - podSelector: {}
      - ipBlock:
          cidr: <value>
    podSelector: {}
```

Please refer to the [NetworkPolicies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) documentation for the details of a _NetworkPolicy_.

The assigned _NetworkPolicies_ are inherited by any namespace created in the tenant:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: <name>
  namespace: <namespace>
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector: {}
    - podSelector: {}
    - ipBlock:
        cidr: <value>
  egress:
  - to:
    - ipBlock:
        cidr: <value>
  policyTypes:
  - Ingress
  - Egress
```

The tenant owner can create, patch, and delete additional _NetworkPolicy_ resources to refine the assigned ones. However, the tenant owner cannot delete the _NetworkPolicies_ set at the tenant level.
#### externalServiceIPs
The field `externalServiceIPs` specifies the external IPs that can be used in _Services_ with type `ClusterIP`.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  externalServiceIPs:
    allowed:
    - <cidr>
```

Capsule will ensure that all _Services_ in the tenant can contain only the allowed external IPs. This mitigates the [_CVE-2020-8554_](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8554) vulnerability, where a potential attacker, able to create a _Service_ with type `ClusterIP` and set its `externalIPs` field, can intercept traffic to that IP. Only addresses within the allowed CIDR list can be set in the `externalIPs` field of a _Service_ with type `ClusterIP`.

To prevent users from setting the `externalIPs` field at all, use an empty allowed list:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  externalServiceIPs:
    allowed: []
```

> NB: Without this control, your cluster is exposed to the _CVE-2020-8554_ vulnerability.
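For illustration, with `192.168.100.0/24` in the tenant's allowed list, a _Service_ like the following sketch would pass validation, while any address outside that CIDR would be rejected (names and addresses are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: oil-production
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 80
  externalIPs:
  - 192.168.100.10   # must fall inside an allowed CIDR
```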
### Status
#### size
The status field `size` reports the number of namespaces belonging to the tenant. It is reported as `NAMESPACE COUNT` in the `kubectl` output:

```
$ kubectl get tnt
NAME     NAMESPACE QUOTA   NAMESPACE COUNT   OWNER NAME   OWNER KIND   NODE SELECTOR       AGE
cap      9                 1                 joe          User         {"pool":"cmp"}      5d4h
gas      6                 2                 alice        User         {"node":"worker"}   5d4h
oil      9                 3                 alice        User         {"pool":"cmp"}      5d4h
sample   9                 0                 alice        User         {"key":"value"}     29h
```

#### namespaces
The status field `namespaces` reports the list of all namespaces belonging to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  ...
status:
  namespaces:
  - oil-development
  - oil-production
  - oil-marketing
  size: 3
```
## Role Based Access Control
In the current implementation, the Capsule operator requires cluster admin permissions to fully operate.

## Admission Controllers
Capsule implements Kubernetes multi-tenancy capabilities using a minimum set of standard [Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) enabled on the Kubernetes API server.

Here is the list of required Admission Controllers you have to enable to get full support from Capsule:

* PodNodeSelector
* LimitRanger
* ResourceQuota
* MutatingAdmissionWebhook
* ValidatingAdmissionWebhook

In addition to the required controllers above, Capsule implements its own set through the [Dynamic Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) mechanism, providing callbacks to add further validation or resource patching.

To see the admission webhooks installed by Capsule:

```
$ kubectl get ValidatingWebhookConfiguration
NAME                                       WEBHOOKS   AGE
capsule-validating-webhook-configuration   8          2h

$ kubectl get MutatingWebhookConfiguration
NAME                                     WEBHOOKS   AGE
capsule-mutating-webhook-configuration   1          2h
```
## Command Options
The Capsule operator provides the following command options:

Option | Description | Default
--- | --- | ---
`--metrics-addr` | The address and port where `/metrics` are exposed. | `127.0.0.1:8080`
`--enable-leader-election` | Start a leader election client and gain leadership before executing the main loop. | `true`
`--force-tenant-prefix` | Force the tenant name as prefix for namespaces: `<tenant_name>-<namespace>`. | `false`
`--zap-log-level` | The log verbosity with a value from 1 to 10 or the basic keywords. | `4`
`--zap-devel` | The flag to get the stack traces for deep debugging. | `null`
`--capsule-user-group` | Override the Capsule group to which all tenant owners must belong. | `capsule.clastix.io`
`--protected-namespace-regex` | Disallow creation of namespaces matching the passed regexp. | `null`
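These options are passed as container arguments of the controller Deployment; a hypothetical fragment changing some defaults might look like:

```yaml
# Fragment of the capsule-controller-manager Deployment (illustrative;
# check your actual manifest for the container name and existing args)
spec:
  template:
    spec:
      containers:
      - name: manager
        args:
        - --enable-leader-election
        - --force-tenant-prefix=true
        - --protected-namespace-regex=^(default|kube-.*)$
```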
## Created Resources
Once installed, the Capsule operator creates the following resources in your cluster:

```
NAMESPACE        RESOURCE
                 customresourcedefinition.apiextensions.k8s.io/tenants.capsule.clastix.io
                 clusterrole.rbac.authorization.k8s.io/capsule-proxy-role
                 clusterrole.rbac.authorization.k8s.io/capsule-metrics-reader
                 mutatingwebhookconfiguration.admissionregistration.k8s.io/capsule-mutating-webhook-configuration
                 validatingwebhookconfiguration.admissionregistration.k8s.io/capsule-validating-webhook-configuration
capsule-system   clusterrolebinding.rbac.authorization.k8s.io/capsule-manager-rolebinding
capsule-system   clusterrolebinding.rbac.authorization.k8s.io/capsule-proxy-rolebinding
capsule-system   secret/capsule-ca
capsule-system   secret/capsule-tls
capsule-system   service/capsule-controller-manager-metrics-service
capsule-system   service/capsule-webhook-service
capsule-system   deployment.apps/capsule-controller-manager
```
99
docs/operator/use-cases/create-namespaces.md
Normal file
@@ -0,0 +1,99 @@
|
||||
# Create namespaces
|
||||
Alice can create a new namespace in her tenant, as simply:
|
||||
|
||||
```
|
||||
alice@caas# kubectl create ns oil-production
|
||||
```
|
||||
|
||||
> Note that Alice started the name of her namespace with an identifier of her
|
||||
> tenant: this is not a strict requirement but it is highly suggested because
|
||||
> it is likely that many different tenants would like to call their namespaces
|
||||
> as `production`, `test`, or `demo`, etc.
|
||||
>
|
||||
> The enforcement of this naming convention, however, is optional and can be controlled by the cluster administrator with the `--force-tenant-prefix` option as argument of the Capsule controller.
|
||||
|
||||
When Alice creates the namespace, the Capsule controller, listening for creation and deletion events assigns to Alice the following roles:
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: namespace:admin
|
||||
namespace: oil-production
|
||||
subjects:
|
||||
- kind: User
|
||||
name: alice
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: admin
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: namespace-deleter
|
||||
namespace: oil-production
|
||||
subjects:
|
||||
- kind: User
|
||||
name: alice
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: namespace-deleter
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
```
|
||||
|
||||
Alice is the admin of the namespaces:
|
||||
|
||||
```
|
||||
alice@caas# kubectl get rolebindings -n oil-production
|
||||
NAME ROLE AGE
|
||||
namespace:admin ClusterRole/admin 9m5s
|
||||
namespace-deleter   ClusterRole/namespace-deleter   9m5s
|
||||
```
|
||||
|
||||
These RoleBinding resources are automatically created by Capsule when Alice creates a namespace in the tenant.
|
||||
|
||||
Alice can deploy any resource in the namespace, according to the predefined
|
||||
[`admin` cluster role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles).
|
||||
|
||||
```
|
||||
alice@caas# kubectl -n oil-production run nginx --image=docker.io/nginx
alice@caas# kubectl -n oil-production get pods
|
||||
```
|
||||
|
||||
Alice can create additional namespaces, according to the `namespaceQuota` field of the tenant manifest:
|
||||
|
||||
```
|
||||
alice@caas# kubectl create ns oil-development
|
||||
alice@caas# kubectl create ns oil-test
|
||||
```
|
||||
|
||||
While Alice creates namespace resources, the Capsule controller updates the status of the tenant so Bill, the cluster admin, can check its status:
|
||||
|
||||
```
|
||||
bill@caas# kubectl describe tenant oil
|
||||
```
|
||||
|
||||
```yaml
|
||||
...
|
||||
status:
|
||||
  namespaces:
  - oil-development
  - oil-production
  - oil-test
|
||||
size: 3 # current namespace count
|
||||
...
|
||||
```
|
||||
|
||||
Once the namespace quota assigned to the tenant has been reached, Alice cannot create further namespaces:
|
||||
|
||||
```
|
||||
alice@caas# kubectl create ns oil-training
|
||||
Error from server (Cannot exceed Namespace quota: please, reach out the system administrators): admission webhook "quota.namespace.capsule.clastix.io" denied the request.
|
||||
```
|
||||
|
||||
The enforcement of the maximum number of Namespace resources per Tenant is handled by the Capsule controller via its Dynamic Admission Webhook capability.
|
||||
|
||||
# What’s next
|
||||
See how Alice, the tenant owner, can assign different user roles in the tenant. [Assign permissions](./permissions.md).
|
||||
91
docs/operator/use-cases/custom-resources.md
Normal file
@@ -0,0 +1,91 @@
|
||||
# Create Custom Resources
|
||||
The Capsule operator grants admin permissions to the tenant's users, limited to their namespaces. To achieve that, it assigns the ClusterRole [admin](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) to the tenant owner. This ClusterRole does not permit the installation of custom resources in the namespaces.
|
||||
|
||||
To allow the tenant owner to create Custom Resources in their namespaces, the cluster admin defines a proper ClusterRole. For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: argoproj-provisioner
|
||||
rules:
|
||||
- apiGroups:
|
||||
- argoproj.io
|
||||
resources:
|
||||
- applications
|
||||
- appprojects
|
||||
verbs:
|
||||
- create
|
||||
- get
|
||||
- list
|
||||
- watch
|
||||
- update
|
||||
- patch
|
||||
- delete
|
||||
```
|
||||
|
||||
Bill can assign this role to all namespaces in Alice's tenant by setting it in the tenant manifest:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
additionalRoleBindings:
|
||||
- clusterRoleName: 'argoproj-provisioner'
|
||||
subjects:
|
||||
- apiGroup: rbac.authorization.k8s.io
|
||||
kind: User
|
||||
name: alice
|
||||
- apiGroup: rbac.authorization.k8s.io
|
||||
kind: User
|
||||
name: joe
|
||||
```
|
||||
|
||||
or in case of Group type owners:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
    name: oil
    kind: Group
  additionalRoleBindings:
  - clusterRoleName: 'argoproj-provisioner'
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: oil
|
||||
```
|
||||
|
||||
With the given specification, Capsule will ensure that all Alice's namespaces will contain a _RoleBinding_ for the specified _Cluster Role_. For example, in the `oil-production` namespace, Alice will see:
|
||||
|
||||
```yaml
|
||||
kind: RoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: capsule-oil-argoproj-provisioner
|
||||
namespace: oil-production
|
||||
subjects:
|
||||
- kind: User
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
name: alice
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: argoproj-provisioner
|
||||
```
|
||||
|
||||
With the above example, Capsule allows the tenant owner to create namespaced custom resources.
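With the binding in place, Alice can create resources of the allowed kinds in her namespaces. For example, a hypothetical Argo CD `AppProject` (assuming the Argo CD CRDs have already been installed cluster-wide by Bill; the repository URL is illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: oil-apps
  namespace: oil-production
spec:
  description: Argo CD project for the oil tenant
  sourceRepos:
  # Illustrative repository URL
  - https://git.acme.com/oil/*
  destinations:
  - namespace: oil-production
    server: https://kubernetes.default.svc
```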
|
||||
|
||||
> Nota bene: a tenant owner with admin scope limited to their own namespaces does not have permission to create Custom Resource Definitions (CRDs), because this requires cluster admin permissions. Only Bill, the cluster admin, can create CRDs. This is a known limitation of any multi-tenancy environment based on a single Kubernetes cluster.
|
||||
|
||||
# What’s next
|
||||
See how Bill, the cluster admin, can set taints on Alice's namespaces. [Taint namespaces](./taint-namespaces.md).
|
||||
63
docs/operator/use-cases/images-registries.md
Normal file
@@ -0,0 +1,63 @@
|
||||
# Assign Trusted Images Registries
|
||||
Bill, the cluster admin, can set a strict policy on the applications running into Alice's tenant: he'd like to allow running just images hosted on a list of specific container registries.
|
||||
|
||||
The `containerRegistries` spec addresses this task: it provides hard enforcement using a list of allowed values, a regular expression, or a combination of both.
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
containerRegistries:
|
||||
allowed:
|
||||
- docker.io
|
||||
- quay.io
|
||||
allowedRegex: ''
|
||||
```
|
||||
|
||||
> In case of naked and official images hosted on Docker Hub, Capsule is going
> to infer the registry even if it's not explicit: a `busybox:latest` Pod
> running in a Tenant allowing `docker.io` will not be blocked, even if the image
> field is not explicit as `docker.io/busybox:latest`.
|
||||
|
||||
|
||||
Alternatively, use a valid regular expression for maximum flexibility:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
containerRegistries:
|
||||
allowed: []
|
||||
    allowedRegex: "internal.registry.\\w+.tld"
|
||||
```
|
||||
|
||||
A Pod running `internal.registry.foo.tld` as registry will be allowed, as well as `internal.registry.bar.tld`, since both match the regular expression.
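Conversely, a Pod pointing at a registry outside the allowed set is rejected by the validating webhook. A hypothetical session (the webhook name and exact error text may differ between Capsule versions):

```
alice@caas# kubectl -n oil-production run nginx --image=registry.evil.tld/nginx:latest
Error from server: admission webhook denied the request:
container nginx is using a registry which is not allowed for the current Tenant
```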
|
||||
|
||||
> You can also set a catch-all `.*` to allow any registry,
> which has the same effect as leaving `containerRegistries` unset.
|
||||
|
||||
As with Ingress and Storage Classes, the allowed registries can be inspected from the Tenant's namespaces:
|
||||
|
||||
```
|
||||
alice@caas# kubectl describe ns oil-production
|
||||
Name: oil-production
|
||||
Labels: capsule.clastix.io/tenant=oil
|
||||
Annotations:  capsule.clastix.io/allowed-registries: docker.io,quay.io
              capsule.clastix.io/allowed-registries-regexp: internal.registry.\w+.tld
|
||||
...
|
||||
```
|
||||
|
||||
# What’s next
|
||||
See how Bill, the cluster admin, can assign Pod Security Policies to Alice's tenant. [Assign Pod Security Policies](./pod-security-policies.md).
|
||||
|
||||
75
docs/operator/use-cases/ingress-classes.md
Normal file
@@ -0,0 +1,75 @@
|
||||
# Assign Ingress Classes
|
||||
An Ingress Controller is used in Kubernetes to publish services and applications outside of the cluster. An Ingress Controller can be provisioned to accept only Ingresses with a given Ingress Class.
|
||||
|
||||
Bill can assign a set of dedicated Ingress Classes to the `oil` tenant to force the applications in the `oil` tenant to be published only by the assigned Ingress Controller:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
ingressClasses:
|
||||
allowed:
|
||||
- oil
|
||||
...
|
||||
```
|
||||
|
||||
It is also possible to use regular expression for assigning Ingress Classes:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
ingressClasses:
|
||||
allowedRegex: "^oil-.*$"
|
||||
...
|
||||
```
|
||||
|
||||
The Capsule controller assures that all Ingresses created in the tenant can use only one of the valid Ingress Classes. Alice, as tenant owner, gets the list of valid Ingress Classes by checking any of her namespaces:
|
||||
|
||||
```
|
||||
alice@caas# kubectl describe ns oil-production
|
||||
Name: oil-production
|
||||
Labels: capsule.clastix.io/tenant=oil
|
||||
Annotations: capsule.clastix.io/ingress-classes: oil
|
||||
capsule.clastix.io/ingress-classes-regexp: ^oil-.*$
|
||||
...
|
||||
```
|
||||
|
||||
Alice creates an Ingress using a valid Ingress Class in the annotation:
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: nginx
|
||||
namespace: oil-production
|
||||
annotations:
|
||||
kubernetes.io/ingress.class: oil
|
||||
spec:
|
||||
rules:
|
||||
- host: web.oil-inc.com
|
||||
http:
|
||||
paths:
|
||||
      - backend:
          service:
            name: nginx
            port:
              number: 80
        path: /
        pathType: Prefix
|
||||
```
|
||||
|
||||
Any attempt by Alice to use an invalid Ingress Class, e.g. `default`, will fail.
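For example, a hypothetical session (the exact error text may vary between Capsule versions):

```
alice@caas# kubectl -n oil-production apply -f ingress.yaml
Error from server: admission webhook denied the request:
Ingress Class default is forbidden for the current Tenant
```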
|
||||
|
||||
> The effect of this policy is that the services created in the tenant will be published
|
||||
> only on the Ingress Controller designated by Bill to accept one of the allowed Ingress Classes.
|
||||
|
||||
# What’s next
|
||||
See how Bill, the cluster admin, can assign a set of dedicated ingress hostnames to Alice's tenant. [Assign Ingress Hostnames](./ingress-hostnames.md).
|
||||
61
docs/operator/use-cases/ingress-hostnames.md
Normal file
@@ -0,0 +1,61 @@
|
||||
# Assign Ingress Hostnames
|
||||
Bill can assign a set of dedicated ingress hostnames to the `oil` tenant in order to force the applications in the tenant to be published only using the given hostnames:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
ingressHostNames:
|
||||
allowed:
|
||||
    - "*.oil.acmecorp.com"
|
||||
...
|
||||
```
|
||||
|
||||
It is also possible to use a regular expression for assigning Ingress hostnames:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
ingressHostNames:
|
||||
    allowedRegex: "^.*\\.oil\\.acmecorp\\.com$"
|
||||
...
|
||||
```
|
||||
|
||||
The Capsule controller assures that all Ingresses created in the tenant can use only one of the valid hostnames.
|
||||
|
||||
Alice creates an Ingress using an allowed hostname:
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: nginx
|
||||
namespace: oil-production
|
||||
annotations:
|
||||
kubernetes.io/ingress.class: oil
|
||||
spec:
|
||||
rules:
|
||||
- host: web.oil.acmecorp.com
|
||||
http:
|
||||
paths:
|
||||
      - backend:
          service:
            name: nginx
            port:
              number: 80
        path: /
        pathType: Prefix
|
||||
```
|
||||
|
||||
Any attempt by Alice to use an invalid hostname, e.g. `web.gas.acmecorp.org`, will fail.
|
||||
|
||||
# What’s next
|
||||
See how Bill, the cluster admin, can assign a Storage Class to Alice's tenant. [Assign Storage Classes](./storage-classes.md).
|
||||
110
docs/operator/use-cases/multiple-tenants.md
Normal file
@@ -0,0 +1,110 @@
|
||||
# Assign multiple tenants to an owner
|
||||
In some scenarios, a single team may be responsible for multiple lines of business. For example, in our sample organization Acme Corp., Alice is responsible for both the Oil and Gas lines of business. In this case, Alice requires two different tenants, for example `oil` and `gas`, to keep things isolated.
|
||||
|
||||
By design, the Capsule operator does not permit a hierarchy of tenants: all tenants are at the same level. However, the ownership of multiple tenants can be assigned to the same user or group of users.
|
||||
|
||||
Bill, the cluster admin, creates multiple tenants having `alice` as owner:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
namespaceQuota: 3
|
||||
```
|
||||
|
||||
and
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: gas
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
namespaceQuota: 9
|
||||
```
|
||||
|
||||
So that
|
||||
|
||||
```
|
||||
bill@caas# kubectl get tenants
|
||||
NAME NAMESPACE QUOTA NAMESPACE COUNT OWNER NAME OWNER KIND NODE SELECTOR AGE
|
||||
oil 3 3 alice User 3h
|
||||
gas 9 0 alice User 1m
|
||||
```
|
||||
|
||||
Alternatively, the ownership can be assigned to a group called `oil-and-gas`:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: oil-and-gas
|
||||
kind: Group
|
||||
namespaceQuota: 3
|
||||
```
|
||||
|
||||
and
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: gas
|
||||
spec:
|
||||
owner:
|
||||
name: oil-and-gas
|
||||
kind: Group
|
||||
namespaceQuota: 9
|
||||
```
|
||||
|
||||
So that
|
||||
|
||||
```
|
||||
bill@caas# kubectl get tenants
|
||||
NAME NAMESPACE QUOTA NAMESPACE COUNT OWNER NAME OWNER KIND NODE SELECTOR AGE
|
||||
oil 3 3 oil-and-gas Group 3h
|
||||
gas 9 0 oil-and-gas Group 1m
|
||||
```
|
||||
|
||||
The two tenants remain isolated from each other in terms of resources assignment, e.g. _ResourceQuota_, _Nodes Pool_, _Storage Classes_ and _Ingress Classes_, and in terms of governance, e.g. _NetworkPolicies_, _PodSecurityPolicies_, _Trusted Registries_, etc.
|
||||
|
||||
|
||||
When Alice logs in to the CaaS platform, she has access to all namespaces belonging to both the `oil` and `gas` tenants:
|
||||
|
||||
```
|
||||
alice@caas# kubectl create ns oil-production
|
||||
alice@caas# kubectl create ns gas-production
|
||||
```
|
||||
|
||||
When the enforcement of the naming convention with the `--force-tenant-prefix` option is enabled, the namespaces are automatically assigned to the right tenant by Capsule because the operator does a lookup on the tenant names. If the `--force-tenant-prefix` option is not set, Alice needs to specify the tenant name as a label `capsule.clastix.io/tenant=<desired_tenant>` in the namespace manifest:
|
||||
|
||||
```
|
||||
cat <<EOF > gas-production-ns.yaml
|
||||
kind: Namespace
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: gas-production
|
||||
labels:
|
||||
capsule.clastix.io/tenant: gas
|
||||
EOF
|
||||
|
||||
kubectl create -f gas-production-ns.yaml
|
||||
```
|
||||
|
||||
> If not specified, Capsule will deny with the following message:
|
||||
>
|
||||
>`Unable to assign namespace to tenant. Please use capsule.clastix.io/tenant label when creating a namespace.`
|
||||
|
||||
# What’s next
|
||||
This ends our tour of Capsule use cases. As we improve Capsule, more use cases about multi-tenancy, policy admission control, and cluster governance will be covered in the future. Stay tuned!
|
||||
103
docs/operator/use-cases/network-policies.md
Normal file
@@ -0,0 +1,103 @@
|
||||
# Assign Network Policies
|
||||
Kubernetes network policies allow controlling network traffic between namespaces and between pods in the same namespace. Bill, the cluster admin, can enforce network traffic isolation between different tenants while leaving to Alice, the tenant owner, the freedom to set isolation between namespaces in the same tenant or even between pods in the same namespace.
|
||||
|
||||
To meet this requirement, Bill needs to define network policies that deny pods belonging to Alice's namespaces to access pods in namespaces belonging to other tenants, e.g. Bob's tenant `water`, or in system namespaces, e.g. `kube-system`.
|
||||
|
||||
Also, Bill can make sure pods belonging to a tenant namespace cannot access other network infrastructure like cluster nodes, load balancers, and virtual machines running other services.
|
||||
|
||||
Bill can set network policies in the tenant manifest, according to the requirements:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
networkPolicies:
|
||||
- policyTypes:
|
||||
- Ingress
|
||||
- Egress
|
||||
egress:
|
||||
- to:
|
||||
- ipBlock:
|
||||
cidr: 0.0.0.0/0
|
||||
except:
|
||||
- 192.168.0.0/16
|
||||
ingress:
|
||||
- from:
|
||||
- namespaceSelector:
|
||||
matchLabels:
|
||||
capsule.clastix.io/tenant: oil
|
||||
- podSelector: {}
|
||||
- ipBlock:
|
||||
cidr: 192.168.0.0/16
|
||||
podSelector: {}
|
||||
```
|
||||
|
||||
The Capsule controller, watching for namespace creation, creates the Network Policies for each namespace in the tenant.
|
||||
|
||||
Alice has access to these network policies:
|
||||
|
||||
```
|
||||
alice@caas# kubectl -n oil-production get networkpolicies
|
||||
NAME POD-SELECTOR AGE
|
||||
capsule-oil-0 <none> 42h
|
||||
```
|
||||
|
||||
Alice can create, patch, and delete additional network policies within her namespaces:
|
||||
|
||||
```
|
||||
alice@caas# kubectl -n oil-production auth can-i get networkpolicies
|
||||
yes
|
||||
|
||||
alice@caas# kubectl -n oil-production auth can-i delete networkpolicies
|
||||
yes
|
||||
|
||||
alice@caas# kubectl -n oil-production auth can-i patch networkpolicies
|
||||
yes
|
||||
```
|
||||
|
||||
For example, she can create a new network policy:
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: NetworkPolicy
|
||||
metadata:
|
||||
labels:
|
||||
name: production-network-policy
|
||||
namespace: oil-production
|
||||
spec:
|
||||
podSelector: {}
|
||||
policyTypes:
|
||||
- Ingress
|
||||
- Egress
|
||||
```
|
||||
|
||||
And check all the network policies:
|
||||
|
||||
```
|
||||
alice@caas# kubectl -n oil-production get networkpolicies
|
||||
NAME POD-SELECTOR AGE
|
||||
capsule-oil-0 <none> 42h
|
||||
production-network-policy <none> 3m
|
||||
```
|
||||
|
||||
and delete the network policies she created:
|
||||
|
||||
```
|
||||
alice@caas# kubectl -n oil-production delete networkpolicy production-network-policy
|
||||
```
|
||||
|
||||
|
||||
However, the Capsule controller prevents Alice from deleting the tenant-level network policy:
|
||||
|
||||
```
|
||||
alice@caas# kubectl -n oil-production delete networkpolicy capsule-oil-0
|
||||
Error from server (Capsule Network Policies cannot be deleted: please, reach out the system administrators): admission webhook "validating.network-policy.capsule.clastix.io" denied the request: Capsule Network Policies cannot be deleted: please, reach out the system administrators
|
||||
```
|
||||
|
||||
# What’s next
|
||||
See how Bill, the cluster admin, can assign trusted images registries to Alice's tenant. [Assign Trusted Images Registries](./images-registries.md).
|
||||
53
docs/operator/use-cases/nodes-pool.md
Normal file
@@ -0,0 +1,53 @@
|
||||
# Assign a nodes pool
|
||||
Bill, the cluster admin, can dedicate a pool of worker nodes to the `oil` tenant, to isolate the tenant applications from other noisy neighbors.
|
||||
|
||||
These nodes are labeled by Bill as `pool=oil`:
|
||||
|
||||
```
|
||||
bill@caas# kubectl get nodes --show-labels
|
||||
|
||||
NAME STATUS ROLES AGE VERSION LABELS
|
||||
...
|
||||
worker06.acme.com Ready worker 8d v1.18.2 pool=oil
|
||||
worker07.acme.com Ready worker 8d v1.18.2 pool=oil
|
||||
worker08.acme.com Ready worker 8d v1.18.2 pool=oil
|
||||
```
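Assuming standard `kubectl` labeling, Bill could have applied the labels with a command such as:

```
bill@caas# kubectl label node worker06.acme.com worker07.acme.com worker08.acme.com pool=oil
```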
|
||||
|
||||
The label `pool=oil` is defined as node selector in the tenant manifest:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
nodeSelector:
|
||||
pool: oil
|
||||
...
|
||||
```
|
||||
|
||||
The Capsule controller makes sure that any namespace created in the tenant has the annotation: `scheduler.alpha.kubernetes.io/node-selector: pool=oil`. This annotation tells the scheduler of Kubernetes to assign the node selector `pool=oil` to all the pods deployed in the tenant.
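For example, a namespace created in the `oil` tenant would carry an annotation like this (a sketch of what Capsule produces):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: oil-production
  labels:
    capsule.clastix.io/tenant: oil
  annotations:
    # Tells the PodNodeSelector admission plugin to pin pods to the tenant's pool
    scheduler.alpha.kubernetes.io/node-selector: pool=oil
```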
|
||||
|
||||
The effect is that all the pods deployed by Alice are placed only on the designated pool of nodes.
|
||||
|
||||
Any attempt by Alice to change the selector on the pods will result in the following error from
the `PodNodeSelector` Admission Controller plugin:
|
||||
|
||||
```
|
||||
Error from server (Forbidden): pods "busybox" is forbidden:
|
||||
pod node label selector conflicts with its namespace node label selector
|
||||
```
|
||||
|
||||
RBAC prevents Alice from changing the annotation on the namespace:
|
||||
|
||||
```
|
||||
alice@caas# kubectl auth can-i edit ns -n oil-production
|
||||
Warning: resource 'namespaces' is not namespace scoped
|
||||
no
|
||||
```
|
||||
|
||||
# What’s next
|
||||
See how Bill, the cluster admin, can assign an Ingress Class to Alice's tenant. [Assign Ingress Classes](./ingress-classes.md).
|
||||
96
docs/operator/use-cases/onboarding.md
Normal file
@@ -0,0 +1,96 @@
|
||||
# Onboard a new tenant
|
||||
Bill receives a new request from Acme Corp.'s CTO, asking to onboard a new tenant for Alice's organization. Bill creates Alice's identity `alice` in the Acme Corp. identity management system. Because Alice is a tenant owner, Bill needs to assign `alice` to the Capsule group defined by the `--capsule-user-group` option, which defaults to `capsule.clastix.io`.
|
||||
|
||||
To keep things simple, we assume that Bill just creates a client certificate for authentication using an X.509 Certificate Signing Request, so Alice's certificate has the subject `/CN=alice/O=capsule.clastix.io`.
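For example, Bill could generate Alice's key and CSR with `openssl` (a sketch; file names and key size are arbitrary):

```shell
# Generate a private key for Alice (file names are illustrative)
openssl genrsa -out alice.key 2048
# Create a CSR whose subject carries the Capsule group as the organization
openssl req -new -key alice.key -out alice.csr -subj "/CN=alice/O=capsule.clastix.io"
```

The CSR can then be signed with the cluster CA, e.g. through the `certificates.k8s.io` API.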
|
||||
|
||||
Bill creates a new tenant `oil` in the CaaS management portal according to the tenant's profile:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: alice
|
||||
kind: User
|
||||
namespaceQuota: 3
|
||||
```
|
||||
|
||||
Bill checks the new tenant is created and operational:
|
||||
|
||||
```
|
||||
bill@caas# kubectl get tenant oil
|
||||
NAME NAMESPACE QUOTA NAMESPACE COUNT OWNER NAME OWNER KIND NODE SELECTOR AGE
|
||||
oil    3                 0                 alice        User                      3m
|
||||
```
|
||||
|
||||
> Note that namespaces are not yet assigned to the new tenant.
|
||||
> The tenant owners are free to create their namespaces in a self-service fashion
|
||||
> and without any intervention from Bill.
|
||||
|
||||
Once the new tenant `oil` is in place, Bill sends the login credentials to Alice.
|
||||
|
||||
Alice can log in to the CaaS platform and check if she can create a namespace:
|
||||
|
||||
```
|
||||
alice@caas# kubectl auth can-i create namespaces
|
||||
Warning: resource 'namespaces' is not namespace scoped
|
||||
yes
|
||||
```
|
||||
|
||||
or even delete the namespace
|
||||
|
||||
```
|
||||
alice@caas# kubectl auth can-i delete ns -n oil-production
|
||||
Warning: resource 'namespaces' is not namespace scoped
|
||||
yes
|
||||
```
|
||||
|
||||
However, cluster resources are not accessible to Alice
|
||||
|
||||
```
|
||||
alice@caas# kubectl auth can-i get namespaces
|
||||
Warning: resource 'namespaces' is not namespace scoped
|
||||
no
|
||||
|
||||
alice@caas# kubectl auth can-i get nodes
|
||||
Warning: resource 'nodes' is not namespace scoped
|
||||
no
|
||||
|
||||
alice@caas# kubectl auth can-i get persistentvolumes
|
||||
Warning: resource 'persistentvolumes' is not namespace scoped
|
||||
no
|
||||
```
|
||||
|
||||
including the `Tenant` resources
|
||||
|
||||
```
|
||||
alice@caas# kubectl auth can-i get tenants
|
||||
Warning: resource 'tenants' is not namespace scoped
|
||||
no
|
||||
```
|
||||
|
||||
## Assign a group of users as tenant owner
|
||||
In the example above, Bill assigned the ownership of the `oil` tenant to the `alice` user. However, it is more likely that multiple users in Alice's organization need to administer the `oil` tenant. In such cases, Bill can assign the ownership of the `oil` tenant to a group of users instead of a single one.
|
||||
|
||||
Bill creates a new group account `oil` in the Acme Corp. identity management system and then he assigns Alice's identity `alice` to the `oil` group.
|
||||
|
||||
The tenant manifest is modified as in the following:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owner:
|
||||
name: oil
|
||||
kind: Group
|
||||
namespaceQuota: 3
|
||||
```
|
||||
|
||||
With the snippet above, any user belonging to the `oil` group will be an owner of the `oil` tenant with the same permissions as Alice.
|
||||
|
||||
# What’s next
|
||||
See how Alice, the tenant owner, creates new namespaces. [Create namespaces](./create-namespaces.md).
|
||||
43
docs/operator/use-cases/overview.md
Normal file
@@ -0,0 +1,43 @@
|
||||
# Use cases for Capsule
|
||||
Using Capsule, a cluster admin can implement complex multi-tenant scenarios for both public and private deployments. Here is a list of common scenarios addressed by Capsule.
|
||||
|
||||
# Container as a Service (CaaS)
|
||||
***Acme Corp***, our sample organization, built a Container as a Service platform (CaaS), based on Kubernetes, to serve multiple lines of business. Each line of business has its own team of engineers responsible for developing, deploying, and operating their digital products.
|
||||
|
||||
To simplify the usage of Capsule in this scenario, we'll work with the following actors:
|
||||
|
||||
* ***Bill***:
|
||||
he is the cluster administrator from the operations department of Acme Corp., in charge of administering and maintaining the CaaS platform.
|
||||
|
||||
* ***Alice***:
|
||||
she works as IT Project Leader at the Oil & Gas Business Units, two new lines of business at Acme Corp. Alice is responsible for all the strategic IT projects and leads a team with different backgrounds (developers, administrators, SRE engineers, etc.) organized in separate departments.
|
||||
|
||||
* ***Joe***:
|
||||
he works at Acme Corp, as a lead developer of a distributed team in Alice's organization.
|
||||
Joe is responsible for developing a mission-critical project in the Oil market.
|
||||
|
||||
* ***Bob***:
|
||||
he is the head of Engineering for the Water Business Unit, the main and historical line of business at Acme Corp. He is responsible for developing, deploying, and operating multiple digital products in production for a large set of customers.
|
||||
|
||||
Bill, at Acme Corp. can use Capsule to address any of the following scenarios:
|
||||
|
||||
* [Onboard a new tenant](./onboarding.md)
|
||||
* [Create namespaces](./create-namespaces.md)
|
||||
* [Assign permissions](./permissions.md)
|
||||
* [Enforce resources quota and limits](./resources-quota-limits.md)
|
||||
* [Assign a nodes pool](./nodes-pool.md)
|
||||
* [Assign Ingress Classes](./ingress-classes.md)
|
||||
* [Assign Ingress Hostnames](./ingress-hostnames.md)
|
||||
* [Assign Storage Classes](./storage-classes.md)
|
||||
* [Assign Network Policies](./network-policies.md)
|
||||
* [Assign Trusted Images Registries](./images-registries.md)
|
||||
* [Assign Pod Security Policies](./pod-security-policies.md)
|
||||
* [Create Custom Resources](./custom-resources.md)
|
||||
* [Taint namespaces](./taint-namespaces.md)
|
||||
* [Assign multiple tenants to an owner](./multiple-tenants.md)
|
||||
|
||||
> NB: as we improve Capsule, more use cases about multi-tenancy and cluster governance will be covered.
|
||||
|
||||
|
||||
# What’s next
|
||||
See how the cluster admin puts a new tenant onboard. [Onboard a new tenant](./onboarding.md).
|
||||
43
docs/operator/use-cases/permissions.md
Normal file
@@ -0,0 +1,43 @@
|
||||
# Assign permissions
|
||||
Alice acts as the tenant admin. Other users can operate inside the tenant with different levels of permissions and authorizations. Alice is responsible for creating additional roles and assigning these roles to other users to work in the same tenant.
|
||||
|
||||
One of the key design principles of Capsule is self-provisioning from the tenant owner's perspective. Alice, the tenant owner, does not need to interact with Bill, the cluster admin, to complete her day-by-day duties. On the other side, Bill does not have to deal with multiple requests coming from multiple tenant owners that would probably overwhelm him.
|
||||
|
||||
Capsule leaves Alice free to create RBAC roles at the namespace level, or to use the pre-defined cluster roles already available in Kubernetes, and assign them to other users in the tenant. Since Roles and RoleBindings are limited to a namespace scope, Alice can assign roles to the other users accessing the same tenant only after the namespace is created. This gives Alice the power to administer the tenant without the intervention of the cluster admin.
|
||||
|
||||
From the cluster admin perspective, the only action required of Bill is to provision the other identities, e.g. `joe`, in the Identity Management system of Acme Corp. This task can be done once, when onboarding the tenant, and the users accessing the tenant can be part of the tenant business profile.
|
||||
|
||||
Alice can create Roles and RoleBindings only in the namespaces she owns:
|
||||
|
||||
```
|
||||
alice@caas# kubectl auth can-i get roles -n oil-development
|
||||
yes
|
||||
|
||||
alice@caas# kubectl auth can-i get rolebindings -n oil-development
|
||||
yes
|
||||
|
||||
```
|
||||
|
||||
so she can assign the admin role for the namespace `oil-development` to Joe, another user accessing the tenant `oil`:
|
||||
|
||||
```yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
labels:
|
||||
name: oil-development:admin
|
||||
namespace: oil-development
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: admin
|
||||
subjects:
|
||||
- apiGroup: rbac.authorization.k8s.io
|
||||
kind: User
|
||||
name: joe
|
||||
```
|
||||
|
||||
Joe can now operate on the namespace `oil-development` as admin, but he has no access to the other namespaces, `oil-production` and `oil-test`, that are part of the same tenant.
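Bill, who can impersonate users, could verify the resulting permissions with `kubectl auth can-i --as` (a hypothetical session):

```
bill@caas# kubectl -n oil-development auth can-i create deployments --as joe
yes

bill@caas# kubectl -n oil-production auth can-i create deployments --as joe
no
```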
|
||||
|
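Because these bindings are namespaced, Alice can grant a different permission level in each namespace of the tenant. As a purely illustrative sketch (granting Joe the built-in read-only `view` cluster role in `oil-test` is a hypothetical choice, not part of the use case above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  # hypothetical binding name, following the naming style used above
  name: oil-test:view
  namespace: oil-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  # "view" is the standard Kubernetes read-only cluster role
  name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: joe
```

Applying such a binding would let Joe list and read resources in `oil-test` without being able to modify them.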

# What’s next
See how Bill, the cluster admin, sets resources quota and limits for Alice's tenant: [Enforce resources quota and limits](./resources-quota-limits.md).
# Assign Pod Security Policies
Bill, the cluster admin, can assign a dedicated Pod Security Policy (PSP) to Alice's tenant. This is likely to be a requirement in a multi-tenancy environment.

The cluster admin creates a PSP:
```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp:restricted
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  ...
```

Then he creates a _ClusterRole_ granting the use of this PSP:
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:restricted
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['psp:restricted']
  verbs: ['use']
```

Bill can assign this role to any namespace in Alice's tenant by setting it in the tenant manifest:
```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  additionalRoleBindings:
  - clusterRoleName: psp:restricted
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
  ...
```

With the given specification, Capsule will ensure that all Alice's namespaces will contain a _RoleBinding_ for the specified _Cluster Role_. For example, in the `oil-production` namespace, Alice will see:
```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: 'capsule-oil-psp:restricted'
  namespace: oil-production
  labels:
    capsule.clastix.io/role-binding: a10c4c8c48474963
    capsule.clastix.io/tenant: oil
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: 'system:authenticated'
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: 'psp:restricted'
```

With the above example, Capsule forbids any authenticated user in the `oil-production` namespace to run privileged pods or to perform privilege escalation, as declared by the Cluster Role `psp:restricted`.

# What’s next
See how Bill, the cluster admin, can assign Alice the permissions to create custom resources in her tenant: [Create Custom Resources](./custom-resources.md).
# Enforce resources quota and limits
With the help of Capsule, Bill, the cluster admin, can set and enforce resources quota and limits for Alice's tenant:
```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  resourceQuotas:
  - hard:
      limits.cpu: "8"
      limits.memory: 16Gi
      requests.cpu: "8"
      requests.memory: 16Gi
    scopes:
    - NotTerminating
  - hard:
      pods: "100"
      services: "50"
  - hard:
      requests.storage: 10Gi
  ...
```

The resources quotas above will be inherited by all the namespaces created by Alice. In our case, when Alice creates the namespace `oil-production`, Capsule creates three resource quotas:
```yaml
kind: ResourceQuota
apiVersion: v1
metadata:
  name: capsule-oil-0
  namespace: oil-production
  labels:
    tenant: oil
spec:
  hard:
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.cpu: "8"
    requests.memory: 16Gi
  scopes: ["NotTerminating"]
---
kind: ResourceQuota
apiVersion: v1
metadata:
  name: capsule-oil-1
  namespace: oil-production
  labels:
    tenant: oil
spec:
  hard:
    pods: "100"
    services: "50"
---
kind: ResourceQuota
apiVersion: v1
metadata:
  name: capsule-oil-2
  namespace: oil-production
  labels:
    tenant: oil
spec:
  hard:
    requests.storage: "10Gi"
```

Alice can create any resource according to the assigned quotas:
```
alice@caas# kubectl -n oil-production create deployment nginx --image=nginx:latest
```

To check the remaining resources in the `oil-production` namespace, she gets the ResourceQuota:
```
alice@caas# kubectl -n oil-production get resourcequota
NAME            AGE   REQUEST                                      LIMIT
capsule-oil-0   42h   requests.cpu: 1/8, requests.memory: 1/16Gi   limits.cpu: 1/8, limits.memory: 1/16Gi
capsule-oil-1   42h   pods: 1/100, services: 0/50
capsule-oil-2   42h   requests.storage: 0/10Gi
```

By inspecting the annotations in the ResourceQuota, Alice can see the used resources at the tenant level and the related hard quota:
```yaml
alice@caas# kubectl get resourcequotas capsule-oil-1 -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    quota.capsule.clastix.io/used-pods: "1"
    quota.capsule.clastix.io/hard-pods: "100"
...
```

At the tenant level, the Capsule controller watches the resource usage in each tenant namespace and adjusts these annotations as an aggregate across all the namespaces. When the aggregate usage reaches the hard quota, the native `ResourceQuota` Admission Controller in Kubernetes denies Alice's request.
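Since the hard quota is enforced tenant-wide, the controller keeps the same aggregate annotations in sync on the copy of the quota living in each namespace of the tenant. A hypothetical sketch of the sibling quota in the `oil-development` namespace (the namespace and the usage figures are illustrative, not taken from the session above):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: capsule-oil-1
  namespace: oil-development   # sibling namespace, assumed for illustration
  annotations:
    # tenant-wide aggregate kept in sync by the Capsule controller
    quota.capsule.clastix.io/used-pods: "1"
    quota.capsule.clastix.io/hard-pods: "100"
spec:
  hard:
    pods: "100"
```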

Bill, the cluster admin, can also set Limit Ranges for each namespace in Alice's tenant by defining limits in the tenant spec:
```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  limitRanges:
  - limits:
    - max:
        cpu: "1"
        memory: 1Gi
      min:
        cpu: 50m
        memory: 5Mi
      type: Pod
    - default:
        cpu: 200m
        memory: 100Mi
      defaultRequest:
        cpu: 100m
        memory: 10Mi
      max:
        cpu: "1"
        memory: 1Gi
      min:
        cpu: 50m
        memory: 5Mi
      type: Container
    - max:
        storage: 10Gi
      min:
        storage: 1Gi
      type: PersistentVolumeClaim
  ...
```

Limits will be inherited by all the namespaces created by Alice. In our case, when Alice creates the namespace `oil-production`, Capsule creates the following:
```yaml
kind: LimitRange
apiVersion: v1
metadata:
  name: capsule-oil-0
  namespace: oil-production
  labels:
    tenant: oil
spec:
  limits:
  - type: Pod
    min:
      cpu: "50m"
      memory: "5Mi"
    max:
      cpu: "1"
      memory: "1Gi"
  - type: Container
    defaultRequest:
      cpu: "100m"
      memory: "10Mi"
    default:
      cpu: "200m"
      memory: "100Mi"
    min:
      cpu: "50m"
      memory: "5Mi"
    max:
      cpu: "1"
      memory: "1Gi"
  - type: PersistentVolumeClaim
    min:
      storage: "1Gi"
    max:
      storage: "10Gi"
```
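To illustrate these bounds, a pod asking for more CPU than the `Pod`-level maximum of `1` would be rejected at creation time by the native LimitRange admission controller. This is a hypothetical manifest, not part of the use case:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-hog               # hypothetical pod name
  namespace: oil-production
spec:
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:
        cpu: "2"              # exceeds the Pod-level max of 1 CPU
        memory: 128Mi
      limits:
        cpu: "2"
        memory: 128Mi
```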

Alice can inspect Limit Ranges for her namespaces:
```
alice@caas# kubectl -n oil-production get limitranges
NAME            CREATED AT
capsule-oil-0   2020-07-20T18:41:15Z

alice@caas# kubectl -n oil-production describe limitranges capsule-oil-0
Name:       capsule-oil-0
Namespace:  oil-production
Type                   Resource  Min  Max   Default Request  Default Limit  Max Limit/Request Ratio
----                   --------  ---  ---   ---------------  -------------  -----------------------
Pod                    cpu       50m  1     -                -              -
Pod                    memory    5Mi  1Gi   -                -              -
Container              cpu       50m  1     100m             200m           -
Container              memory    5Mi  1Gi   10Mi             100Mi          -
PersistentVolumeClaim  storage   1Gi  10Gi  -                -              -
```

Since limit ranges apply to individual resources, there is no aggregate to count at the tenant level.

Although Alice has read access to resource quotas and limit ranges, she cannot change or delete them, according to the assigned RBAC profile:
```
alice@caas# kubectl -n oil-production auth can-i patch resourcequota
no - no RBAC policy matched

alice@caas# kubectl -n oil-production auth can-i patch limitranges
no - no RBAC policy matched
```

# What’s next
See how Bill, the cluster admin, can assign a pool of nodes to Alice's tenant: [Assign a nodes pool](./nodes-pool.md).
# Assign Storage Classes
Acme Corp. can provide persistent storage infrastructure to their tenants. Different types of storage requirements, with different levels of QoS, e.g. SSD versus HDD, are available for different tenants according to the tenant's profile. To meet these requirements, Bill, the cluster admin, can provision different Storage Classes and assign them to the tenant:
```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  storageClasses:
    allowed:
    - ceph-rbd
    - ceph-nfs
  ...
```

It is also possible to use a regular expression for assigning Storage Classes:
```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  storageClasses:
    allowedRegex: "^ceph-.*$"
  ...
```
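The two selectors could also be sketched together. Whether `allowed` and `allowedRegex` are evaluated jointly in a single tenant is an assumption here, suggested by the namespace annotations further down this page, which carry both:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  storageClasses:
    # assumption: both selectors may be declared on the same tenant
    allowed:
    - ceph-rbd
    - ceph-nfs
    allowedRegex: "^ceph-.*$"
  ...
```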

Alice, as tenant owner, gets the list of valid Storage Classes by checking any of her namespaces:
```
alice@caas# kubectl describe ns oil-production
Name:         oil-production
Labels:       capsule.clastix.io/tenant=oil
Annotations:  capsule.clastix.io/storage-classes: ceph-rbd,ceph-nfs
              capsule.clastix.io/storage-classes-regexp: ^ceph-.*$
...
```

The Capsule controller will ensure that all Persistent Volume Claims created by Alice use only one of the assigned Storage Classes. For example:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc
  namespace: oil-production
spec:
  storageClassName: ceph-rbd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 12Gi
```

Any attempt by Alice to use a Storage Class that is not allowed, e.g. `default`, will fail:
```
Error from server: error when creating persistent volume claim pvc:
admission webhook "pvc.capsule.clastix.io" denied the request:
Storage Class default is forbidden for the current Tenant
```

# What’s next
See how Bill, the cluster admin, can assign Network Policies to Alice's tenant: [Assign Network Policies](./network-policies.md).
# Taint namespaces
With Capsule, Bill can _"taint"_ the namespaces created by Alice with additional labels and/or annotations. There is no specific semantic assigned to these labels and annotations: they will just be applied to the namespaces in the tenant as Alice creates them. This can help the cluster admin to implement specific use cases, for example, backup as a service for namespaces in the tenant.

Bill assigns an additional label to the `oil` tenant to force the backup system to take care of Alice's namespaces:
```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  namespacesMetadata:
    additionalLabels:
      capsule.clastix.io/backup: "true"
```

or by annotations:
```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  namespacesMetadata:
    additionalAnnotations:
      capsule.clastix.io/do_stuff: backup
```
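Labels and annotations can also appear in the same `namespacesMetadata` stanza. This sketch simply merges the two examples above, assuming both fields are accepted together, which is consistent with the namespace shown next carrying both:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
    kind: User
  namespacesMetadata:
    # assumption: labels and annotations may be declared side by side
    additionalLabels:
      capsule.clastix.io/backup: "true"
    additionalAnnotations:
      capsule.clastix.io/do_stuff: backup
```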

When Alice creates a namespace, it will inherit the given label and/or annotation:
```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: oil-production
  labels:
    capsule.clastix.io/backup: "true"      # here the additional label
    capsule.clastix.io/tenant: oil
  annotations:
    capsule.clastix.io/do_stuff: backup    # here the additional annotation
```

# What’s next
See how Bill, the cluster admin, can assign multiple tenants to Alice: [Assign multiple tenants to an owner](./multiple-tenants.md).