feat(docs): setup Gridsome for the website
BIN  docs/content/assets/datasource.png             (Executable file, 4.5 KiB)
BIN  docs/content/assets/dev-env.png                (Normal file, 111 KiB)
BIN  docs/content/assets/manager-controllers.png    (Executable file, 28 KiB)
BIN  docs/content/assets/prometheus_targets.png     (Executable file, 30 KiB)
BIN  docs/content/assets/rest-client-error-rate.png (Executable file, 63 KiB)
BIN  docs/content/assets/rest-client-latency.png    (Executable file, 14 KiB)
BIN  docs/content/assets/saturation.png             (Executable file, 79 KiB)
BIN  docs/content/assets/upload_json.png            (Executable file, 22 KiB)
BIN  docs/content/assets/webhook-error-rate.png     (Executable file, 131 KiB)
BIN  docs/content/assets/webhook-latency.png        (Executable file, 55 KiB)
BIN  docs/content/assets/workqueue.png              (Executable file, 57 KiB)
docs/content/dev-guide.md (Normal file, 359 lines)
@@ -0,0 +1,359 @@

# Capsule Development Guide

## Prerequisites

### Tools

Make sure you have these tools installed:

- [Go 1.16+](https://golang.org/dl/)
- [Operator SDK 1.7.2+](https://github.com/operator-framework/operator-sdk), or [Kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
- [KinD](https://github.com/kubernetes-sigs/kind) or [k3d](https://k3d.io/), with `kubectl`
- [ngrok](https://ngrok.com/) (if you want to run locally against a remote Kubernetes cluster)
- [golangci-lint](https://github.com/golangci/golangci-lint)
- OpenSSL
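
Before going further, a quick sanity check that everything is on your `PATH` can save time (the exact versions printed will vary):

```shell
# Verify each tool is installed and reachable
go version
kubectl version --client
golangci-lint --version
openssl version

# At least one of the two local cluster tools is needed
k3d version || kind version
```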

### Kubernetes Cluster

A lightweight Kubernetes cluster on your laptop is very handy for Kubernetes-native development like Capsule's.

#### By `k3d`

```shell
# Install the k3d CLI with brew on macOS, or your preferred way
$ brew install k3d

# Export your laptop's IP, e.g. retrieving it by: ifconfig
# Do change this IP to yours
$ export LAPTOP_HOST_IP=192.168.10.101

# Spin up a bare minimum cluster
# Refer to https://k3d.io/v4.4.8/usage/commands/k3d_cluster_create/ for more options
$ k3d cluster create k3s-capsule --servers 1 --agents 1 --no-lb --k3s-server-arg --tls-san=${LAPTOP_HOST_IP}

# This will create a cluster with 1 server and 1 worker node
$ kubectl get nodes
NAME                       STATUS   ROLES                  AGE     VERSION
k3d-k3s-capsule-server-0   Ready    control-plane,master   2m13s   v1.21.2+k3s1
k3d-k3s-capsule-agent-0    Ready    <none>                 2m3s    v1.21.2+k3s1

# Or 2 Docker containers if you view it from the Docker perspective
$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS          PORTS                     NAMES
5c26ad840c62   rancher/k3s:v1.21.2-k3s1   "/bin/k3s agent"         53 seconds ago   Up 45 seconds                             k3d-k3s-capsule-agent-0
753998879b28   rancher/k3s:v1.21.2-k3s1   "/bin/k3s server --t…"   53 seconds ago   Up 51 seconds   0.0.0.0:49708->6443/tcp   k3d-k3s-capsule-server-0
```
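
The `--tls-san` flag above matters later: it adds your laptop's IP to the API server certificate, so a kubeconfig pointing at `https://${LAPTOP_HOST_IP}:<port>` will pass TLS verification. A quick connectivity check:

```shell
# Confirm the kubeconfig written by k3d can reach the new cluster
kubectl cluster-info
```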

#### By `kind`

```shell
# Install the kind CLI with brew on macOS, or your preferred way
$ brew install kind

# Prepare a kind config file with the necessary customization
# Do remember to export LAPTOP_HOST_IP before running this command
$ cat > kind.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    metadata:
      name: config
    apiServer:
      certSANs:
        - localhost
        - 127.0.0.1
        - kubernetes
        - kubernetes.default.svc
        - kubernetes.default.svc.cluster.local
        - kind
        - 0.0.0.0
        - ${LAPTOP_HOST_IP}
- role: worker
EOF

# Spin up a bare minimum cluster with 1 control-plane and 1 worker node
$ kind create cluster --name kind-capsule --config kind.yaml

# This will create a cluster with 1 server and 1 worker node
$ kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
kind-capsule-control-plane   Ready    control-plane,master   84s   v1.21.1
kind-capsule-worker          Ready    <none>                 56s   v1.21.1

# Or 2 Docker containers if you view it from the Docker perspective
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                     NAMES
7b329fd3a838   kindest/node:v1.21.1   "/usr/local/bin/entr…"   About a minute ago   Up About a minute   0.0.0.0:54894->6443/tcp   kind-capsule-control-plane
7d50f1633555   kindest/node:v1.21.1   "/usr/local/bin/entr…"   About a minute ago   Up About a minute                             kind-capsule-worker
```
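
Whichever flavor you chose, the cluster can be torn down with a single command when you are done:

```shell
# For the k3d flavor
k3d cluster delete k3s-capsule

# For the kind flavor
kind delete cluster --name kind-capsule
```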

## Fork & clone the repository

The `fork-clone-contribute-pr` flow is common for contributing to OSS projects like Kubernetes and Capsule.

Let's assume you've forked the repository into your GitHub namespace, say `myuser`; you can then clone it over the Git protocol.
Do remember to change `myuser` to your own username.

```shell
$ git clone git@github.com:myuser/capsule.git && cd capsule
```

It's good practice to add the upstream as a remote too, so we can easily fetch and merge upstream changes into our fork:

```shell
$ git remote add upstream https://github.com/clastix/capsule.git
$ git remote -vv
origin     git@github.com:myuser/capsule.git (fetch)
origin     git@github.com:myuser/capsule.git (push)
upstream   https://github.com/clastix/capsule.git (fetch)
upstream   https://github.com/clastix/capsule.git (push)
```
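
With `upstream` configured, keeping your fork current is straightforward (assuming you work off the `master` branch, as the URLs above suggest):

```shell
# Pull the latest upstream changes and replay your work on top of them
git fetch upstream
git rebase upstream/master

# Update your fork on GitHub
git push origin master
```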

## Build & deploy Capsule

```shell
# Download the project dependencies
$ go mod download

# Build the Capsule image
$ make docker-build

# Retrieve the built image version
$ export CAPSULE_IMAGE_VERSION=`docker images --format '{{.Tag}}' quay.io/clastix/capsule`

# If using k3d, load the image into the cluster by:
$ k3d image import --cluster k3s-capsule quay.io/clastix/capsule:${CAPSULE_IMAGE_VERSION}
# If using kind, load the image into the cluster by:
$ kind load docker-image --name kind-capsule quay.io/clastix/capsule:${CAPSULE_IMAGE_VERSION}

# Deploy all the required manifests
# Note: 1) retry if you see errors; 2) if you want to clean up first, run: make remove
$ make deploy

# Make sure the controller is running
$ kubectl get pod -n capsule-system
NAME                                          READY   STATUS    RESTARTS   AGE
capsule-controller-manager-5c6b8445cf-566dc   1/1     Running   0          23s

# Check the logs if needed
$ kubectl -n capsule-system logs --all-containers -l control-plane=controller-manager

# You may deploy a Tenant too, to make sure it works end to end
$ kubectl apply -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
EOF

# There shouldn't be any errors, and you should see the newly created tenant
$ kubectl get tenants
NAME   STATE    NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR   AGE
oil    Active                     0                                 14s
```
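
The `oil` Tenant is only a smoke test; once verified, it can be removed:

```shell
# Tenants are cluster-scoped resources, so no namespace flag is needed
kubectl delete tenant oil
```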

As of now, a complete Capsule environment has been set up in the `kind`- or `k3d`-powered cluster, and `capsule-controller-manager` is running as a Deployment serving as:

- the reconcilers for the CRDs, and
- a series of webhooks

## Set up the development environment

During development, we prefer to run the code locally in our IDE, instead of as the normal Pod(s) within the Kubernetes cluster.

Such a setup is illustrated in the diagram below:

![dev-env](assets/dev-env.png)

To achieve that, there are some necessary steps to walk through, which have been captured as a `make` target in our `Makefile`.

So the TL;DR answer is:

```shell
# If you haven't run `make deploy` before, do it first
# Note: retry if you see errors
$ make deploy

# Retrieve your laptop's IP and execute `make dev-setup` to set up the dev env
# For example: LAPTOP_HOST_IP=192.168.10.101 make dev-setup
$ LAPTOP_HOST_IP="<YOUR_LAPTOP_IP>" make dev-setup
```

This is a very common setup for typical Kubernetes Operator development, so it's worth walking through the steps in more detail here.

1. Scale down the deployed Pod(s) to 0

We need to scale the existing replicas of `capsule-controller-manager` to 0 to avoid reconciliation competition between the Pod(s) and the code running outside of the cluster, in our preferred IDE for example.

```shell
$ kubectl -n capsule-system scale deployment capsule-controller-manager --replicas=0
deployment.apps/capsule-controller-manager scaled
```

2. Prepare the TLS certificate for the webhooks

Running webhooks requires TLS, so we prepare a TLS key pair in our development environment to handle HTTPS requests.

```shell
# Prepare a simple OpenSSL config file
# Do remember to export LAPTOP_HOST_IP before running this command
$ cat > _tls.cnf <<EOF
[ req ]
default_bits        = 4096
distinguished_name  = req_distinguished_name
req_extensions      = req_ext
[ req_distinguished_name ]
countryName         = SG
stateOrProvinceName = SG
localityName        = SG
organizationName    = CAPSULE
commonName          = CAPSULE
[ req_ext ]
subjectAltName      = @alt_names
[alt_names]
IP.1                = ${LAPTOP_HOST_IP}
EOF

# Create this dir to mimic the Pod mount point
$ mkdir -p /tmp/k8s-webhook-server/serving-certs

# Generate the TLS cert/key under /tmp/k8s-webhook-server/serving-certs
$ openssl req -newkey rsa:4096 -days 3650 -nodes -x509 \
    -subj "/C=SG/ST=SG/L=SG/O=CAPSULE/CN=CAPSULE" \
    -extensions req_ext \
    -config _tls.cnf \
    -keyout /tmp/k8s-webhook-server/serving-certs/tls.key \
    -out /tmp/k8s-webhook-server/serving-certs/tls.crt

# Clean up
$ rm -f _tls.cnf
```
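
You can confirm the generated certificate carries your laptop's IP as a Subject Alternative Name, which is what the API server validates when calling the webhooks:

```shell
# Inspect the SAN section of the freshly generated certificate
openssl x509 -noout -text \
  -in /tmp/k8s-webhook-server/serving-certs/tls.crt \
  | grep -A1 'Subject Alternative Name'
```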

3. Patch the webhooks

By default, the webhooks are registered with the Services that route to the Pods inside the cluster.

We need to _delegate_ the controllers' and webhooks' services to the code running in our IDE by patching the `MutatingWebhookConfiguration` and `ValidatingWebhookConfiguration`.

```shell
# Export your laptop's IP with the 9443 port exposed by the controllers/webhooks' services
$ export WEBHOOK_URL="https://${LAPTOP_HOST_IP}:9443"

# Export the cert we just generated as the CA bundle for webhook TLS
$ export CA_BUNDLE=`openssl base64 -in /tmp/k8s-webhook-server/serving-certs/tls.crt | tr -d '\n'`

# Patch the MutatingWebhookConfiguration webhook
$ kubectl patch MutatingWebhookConfiguration capsule-mutating-webhook-configuration \
    --type='json' -p="[\
      {'op': 'replace', 'path': '/webhooks/0/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/mutate-v1-namespace-owner-reference\",'caBundle':\"${CA_BUNDLE}\"}}\
    ]"

# Verify it if you want
$ kubectl get MutatingWebhookConfiguration capsule-mutating-webhook-configuration -o yaml

# Patch the ValidatingWebhookConfiguration webhooks
# Note: there is a list of validating webhook endpoints, not just one
$ kubectl patch ValidatingWebhookConfiguration capsule-validating-webhook-configuration \
    --type='json' -p="[\
      {'op': 'replace', 'path': '/webhooks/0/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/cordoning\",'caBundle':\"${CA_BUNDLE}\"}},\
      {'op': 'replace', 'path': '/webhooks/1/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/ingresses\",'caBundle':\"${CA_BUNDLE}\"}},\
      {'op': 'replace', 'path': '/webhooks/2/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/namespaces\",'caBundle':\"${CA_BUNDLE}\"}},\
      {'op': 'replace', 'path': '/webhooks/3/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/networkpolicies\",'caBundle':\"${CA_BUNDLE}\"}},\
      {'op': 'replace', 'path': '/webhooks/4/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/pods\",'caBundle':\"${CA_BUNDLE}\"}},\
      {'op': 'replace', 'path': '/webhooks/5/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/persistentvolumeclaims\",'caBundle':\"${CA_BUNDLE}\"}},\
      {'op': 'replace', 'path': '/webhooks/6/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/services\",'caBundle':\"${CA_BUNDLE}\"}},\
      {'op': 'replace', 'path': '/webhooks/7/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/tenants\",'caBundle':\"${CA_BUNDLE}\"}}\
    ]"

# Verify it if you want
$ kubectl get ValidatingWebhookConfiguration capsule-validating-webhook-configuration -o yaml
```
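
Instead of dumping the full YAML, a `jsonpath` query gives a compact view of where each webhook now points:

```shell
# Each line pairs a webhook name with its patched URL on your laptop
kubectl get ValidatingWebhookConfiguration capsule-validating-webhook-configuration \
  -o jsonpath='{range .webhooks[*]}{.name}{" -> "}{.clientConfig.url}{"\n"}{end}'
```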

## Run Capsule outside the cluster

Now we can run the Capsule controllers, with webhooks, outside of the Kubernetes cluster:

```shell
$ export NAMESPACE=capsule-system && export TMPDIR=/tmp/
$ go run .
```

To verify that, we can open a new console and create a new Tenant:

```shell
$ kubectl apply -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  owners:
  - name: alice
    kind: User
EOF
```

We should see output like:

```log
tenant.capsule.clastix.io/gas created
```

And, in the console running Capsule, logs like:

```log
...
{"level":"info","ts":"2021-09-28T21:10:30.520+0800","logger":"controllers.Tenant","msg":"Ensuring all Namespaces are collected","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.527+0800","logger":"controllers.Tenant","msg":"Starting processing of Namespaces","Request.Name":"gas","items":0}
{"level":"info","ts":"2021-09-28T21:10:30.527+0800","logger":"controllers.Tenant","msg":"Ensuring additional RoleBindings for owner","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.527+0800","logger":"controllers.Tenant","msg":"Ensuring RoleBinding for owner","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.527+0800","logger":"controllers.Tenant","msg":"Ensuring Namespace count","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.533+0800","logger":"controllers.Tenant","msg":"Tenant reconciling completed","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.540+0800","logger":"controllers.Tenant","msg":"Ensuring all Namespaces are collected","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.547+0800","logger":"controllers.Tenant","msg":"Starting processing of Namespaces","Request.Name":"gas","items":0}
{"level":"info","ts":"2021-09-28T21:10:30.547+0800","logger":"controllers.Tenant","msg":"Ensuring additional RoleBindings for owner","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.547+0800","logger":"controllers.Tenant","msg":"Ensuring RoleBinding for owner","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.547+0800","logger":"controllers.Tenant","msg":"Ensuring Namespace count","Request.Name":"gas"}
{"level":"info","ts":"2021-09-28T21:10:30.554+0800","logger":"controllers.Tenant","msg":"Tenant reconciling completed","Request.Name":"gas"}
```

## Work in your preferred IDE

Now it's time to work through our familiar inner loop for development in our preferred IDE.

For example, if you're using [Visual Studio Code](https://code.visualstudio.com), this `launch.json` file can be a good start.

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch",
      "type": "go",
      "request": "launch",
      "mode": "auto",
      "program": "${workspaceFolder}",
      "args": [
        "--zap-encoder=console",
        "--zap-log-level=debug",
        "--configuration-name=capsule-default"
      ],
      "env": {
        "NAMESPACE": "capsule-system",
        "TMPDIR": "/tmp/"
      }
    }
  ]
}
```
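
If you prefer a terminal debugger, the same configuration translates to a [Delve](https://github.com/go-delve/delve) invocation; this is a sketch assuming `dlv` is installed:

```shell
# Same env vars and flags as the launch.json above
NAMESPACE=capsule-system TMPDIR=/tmp/ dlv debug . -- \
  --zap-encoder=console \
  --zap-log-level=debug \
  --configuration-name=capsule-default
```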

Please refer to [contributing](/docs/contributing) for more details on how to contribute.

docs/content/index.md (Normal file, 8 lines)
@@ -0,0 +1,8 @@

# Capsule Documentation

**Capsule** helps to implement a multi-tenancy and policy-based environment in your Kubernetes cluster. It has been designed as a microservices-based ecosystem with a minimalist approach, leveraging only upstream Kubernetes.

Currently, the Capsule ecosystem comprises the following:

* [Capsule Operator](/docs/operator/overview)
* [Capsule Proxy](/docs/proxy/overview)
* [Capsule Lens extension](/docs/lens-extension/overview)
docs/content/lens-extension/overview.md (Normal file, 11 lines)
@@ -0,0 +1,11 @@

# Capsule extension for Lens

With the Capsule extension for [Lens](https://github.com/lensapp/lens), a cluster administrator can easily manage, from a single pane of glass, all the resources of a Kubernetes cluster, including all the Tenants created through the Capsule Operator.

## Features

The Capsule extension for Lens provides these capabilities:

- List all tenants
- View tenant details and change them through the embedded Lens editor
- Check Resources Quota and Budget at both the tenant and namespace level

Please see the [README](https://github.com/clastix/capsule-lens-extension) for details about installing the Capsule Lens extension.
docs/content/operator/contributing.md (Normal file, 63 lines)
@@ -0,0 +1,63 @@

# How to contribute to Capsule

First, thanks for your interest in Capsule; any contribution is welcome!

## Development environment setup

The first step is to set up your local development environment.

Please follow the [Capsule Development Guide](/docs/dev-guide) for details.

## Code convention

Changes must follow the Pull Request method, where a _GitHub Action_ will run
`golangci-lint`, so ensure your changes respect the coding standard.

### golint

You can easily check them by issuing the _Make_ recipe `golint`.

```
# make golint
golangci-lint run -c .golangci.yml
```

> Enabled linters and related options are defined in the [.golangci.yml file](https://github.com/clastix/capsule/blob/master/.golangci.yml)

### goimports

Also, the Go import statements must be sorted following the best practice:

```
<STANDARD LIBRARY>

<EXTERNAL PACKAGES>

<LOCAL PACKAGES>
```

To help you out, you can use the _Make_ recipe `goimports`:

```
# make goimports
goimports -w -l -local "github.com/clastix/capsule" .
```
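
Running both recipes before pushing catches most review nits early:

```shell
# Rewrite import blocks first, then lint; stop at the first failure
make goimports && make golint
```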

### Commits

All Pull Requests must refer to an already open issue: this is the first phase of contributing, and it also informs the maintainers about the issue.

The commit's first line should not exceed 50 characters.

A commit description is welcome to explain the changes further: just put a blank line after the first line, then any number of lines at most 72 characters long, with at most one blank line between them.

Please split changes into several small, documented commits: this will help us perform a better review. Commits must follow the Conventional Commits Specification, a lightweight convention on top of commit messages. It provides an easy set of rules for creating an explicit commit history, which makes it easier to write automated tools on top of. This convention dovetails with Semantic Versioning by describing the features, fixes, and breaking changes made in commit messages. See the [Conventional Commits Specification](https://www.conventionalcommits.org) to learn more.

> In case of errors or changes needed to previous commits,
> fix them by squashing to keep changes atomic.
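
Put together, a commit following these rules might look like this (the message itself is a made-up example):

```shell
# 50-character subject in Conventional Commits form, then a body
# wrapped at 72 characters, separated by a blank line
git commit \
  -m "fix(webhook): reject unknown tenant owner kind" \
  -m "Fail at admission time when a Tenant owner kind is neither User nor
Group, instead of silently skipping reconciliation."
```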

### Miscellanea

Please end every file with a single newline, following the current coding style.

docs/content/operator/getting-started.md (Normal file, 110 lines)
@@ -0,0 +1,110 @@

# Getting started

Thanks for giving Capsule a try.

## Installation

Make sure you have administrator access to a Kubernetes cluster.

There are two ways to install Capsule:

* Use the [single YAML file installer](https://raw.githubusercontent.com/clastix/capsule/master/config/install.yaml)
* Use the [Capsule Helm Chart](https://github.com/clastix/capsule/blob/master/charts/capsule/README.md)

### Install with the single YAML file installer

Ensure you have `kubectl` installed in your `PATH`, then run:

```
$ kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/install.yaml
```

It will install the Capsule controller in a dedicated namespace, `capsule-system`.

### Install with the Helm Chart

Please refer to the instructions in the Capsule Helm Chart [README](https://github.com/clastix/capsule/blob/master/charts/capsule/README.md).

# Create your first Tenant

In Capsule, a _Tenant_ is an abstraction that groups multiple namespaces into a single entity, within a set of boundaries defined by the Cluster Administrator. The tenant is then assigned to a user or group of users, called the _Tenant Owner_.

Capsule defines a Tenant as a Custom Resource with cluster scope.

Create the tenant as cluster admin:

```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
EOF
```

You can check the tenant just created:

```
$ kubectl get tenants
NAME   STATE    NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR   AGE
oil    Active                     0                                 10s
```

## Tenant owners

Each tenant comes with a delegated user or group of users acting as the tenant admin. In the Capsule jargon, this is called the _Tenant Owner_. Other users can operate inside a tenant with different levels of permissions and authorizations, assigned directly by the Tenant Owner.

Capsule does not care about the authentication strategy used in the cluster, and all the Kubernetes methods of [authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/) are supported. The only requirement to use Capsule is to assign tenant users to the group defined by the `--capsule-user-group` option, which defaults to `capsule.clastix.io`.

Assignment to a group depends on the authentication strategy in your cluster.

For example, if you are using the default `capsule.clastix.io` group, users authenticated through an _X.509_ certificate must have `capsule.clastix.io` as their _Organization_: `-subj "/CN=${USER}/O=capsule.clastix.io"`

Users authenticated through an _OIDC token_ must have it in their token:

```json
...
"users_groups": [
  "capsule.clastix.io",
  "other_group"
]
```

The [hack/create-user.sh](https://github.com/clastix/capsule/blob/master/hack/create-user.sh) script can help you set up a dummy `kubeconfig` for the `alice` user acting as owner of a tenant called `oil`:

```bash
./hack/create-user.sh alice oil
...
certificatesigningrequest.certificates.k8s.io/alice-oil created
certificatesigningrequest.certificates.k8s.io/alice-oil approved
kubeconfig file is: alice-oil.kubeconfig
to use it as alice export KUBECONFIG=alice-oil.kubeconfig
```

Log in as the tenant owner:

```
$ export KUBECONFIG=alice-oil.kubeconfig
```

and create a couple of new namespaces:

```
$ kubectl create namespace oil-production
$ kubectl create namespace oil-development
```
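
Back in the cluster-admin kubeconfig, the tenant status should now account for both namespaces:

```shell
# Return to the admin context and check the tenant's namespace count
unset KUBECONFIG
kubectl get tenant oil
```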

As the user `alice`, you can operate with full admin permissions:

```
$ kubectl -n oil-development run nginx --image=docker.io/nginx
$ kubectl -n oil-development get pods
```

but limited to only your own namespaces:

```
$ kubectl -n kube-system get pods
Error from server (Forbidden): pods is forbidden: User "alice" cannot list resource "pods" in API group "" in the namespace "kube-system"
```

# What's next

Tenant Owners have full administrative permissions, limited to the namespaces in the assigned tenant. However, their permissions can be controlled by the Cluster Admin by setting rules and policies on the assigned tenant. See the [use cases](/docs/operator/use-cases/overview) page for more cool things you can do with Capsule.
docs/content/operator/managed-kubernetes/aws-eks.md (Normal file, 148 lines)
@@ -0,0 +1,148 @@

# Capsule on AWS EKS

This is an example of how to install an AWS EKS cluster and one user managed by Capsule.

It is based on [Using IAM Groups to manage Kubernetes access](https://www.eksworkshop.com/beginner/091_iam-groups/intro/)

Create the EKS cluster:

```bash
export AWS_DEFAULT_REGION="eu-west-1"
export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxx"

eksctl create cluster \
  --name=test-k8s \
  --managed \
  --node-type=t3.small \
  --node-volume-size=20 \
  --kubeconfig=kubeconfig.conf
```

Create the AWS user `alice` using CloudFormation, then create the AWS access files and kubeconfig for that user:

```bash
cat > cf.yml << \EOF
Parameters:
  ClusterName:
    Type: String
Resources:
  UserAlice:
    Type: AWS::IAM::User
    Properties:
      UserName: !Sub "alice-${ClusterName}"
      Policies:
        - PolicyName: !Sub "alice-${ClusterName}-policy"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Sid: AllowAssumeOrganizationAccountRole
                Effect: Allow
                Action: sts:AssumeRole
                Resource: !GetAtt RoleAlice.Arn
  AccessKeyAlice:
    Type: AWS::IAM::AccessKey
    Properties:
      UserName: !Ref UserAlice
  RoleAlice:
    Type: AWS::IAM::Role
    Properties:
      Description: !Sub "IAM role for the alice-${ClusterName} user"
      RoleName: !Sub "alice-${ClusterName}"
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
            Action: sts:AssumeRole
Outputs:
  RoleAliceArn:
    Description: The ARN of the Alice IAM Role
    Value: !GetAtt RoleAlice.Arn
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-RoleAliceArn"
  AccessKeyAlice:
    Description: The AccessKey for Alice user
    Value: !Ref AccessKeyAlice
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-AccessKeyAlice"
  SecretAccessKeyAlice:
    Description: The SecretAccessKey for Alice user
    Value: !GetAtt AccessKeyAlice.SecretAccessKey
    Export:
      Name:
        Fn::Sub: "${AWS::StackName}-SecretAccessKeyAlice"
EOF

eval aws cloudformation deploy --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=test-k8s" \
  --stack-name "test-k8s-users" --template-file cf.yml

AWS_CLOUDFORMATION_DETAILS=$(aws cloudformation describe-stacks --stack-name "test-k8s-users")
ALICE_ROLE_ARN=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"RoleAliceArn\") .OutputValue")
ALICE_USER_ACCESSKEY=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"AccessKeyAlice\") .OutputValue")
ALICE_USER_SECRETACCESSKEY=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"SecretAccessKeyAlice\") .OutputValue")

eksctl create iamidentitymapping --cluster="test-k8s" --arn="${ALICE_ROLE_ARN}" --username alice --group capsule.clastix.io

cat > aws_config << EOF
[profile alice]
role_arn=${ALICE_ROLE_ARN}
source_profile=alice
EOF

cat > aws_credentials << EOF
[alice]
aws_access_key_id=${ALICE_USER_ACCESSKEY}
aws_secret_access_key=${ALICE_USER_SECRETACCESSKEY}
EOF

eksctl utils write-kubeconfig --cluster=test-k8s --kubeconfig="kubeconfig-alice.conf"
cat >> kubeconfig-alice.conf << EOF
      - name: AWS_PROFILE
        value: alice
      - name: AWS_CONFIG_FILE
        value: aws_config
      - name: AWS_SHARED_CREDENTIALS_FILE
        value: aws_credentials
EOF
```
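
Before touching Kubernetes, it is worth confirming the `alice` profile can actually assume the role, using the config files written above:

```shell
# Prints the caller identity; for a role_arn profile this is the
# assumed-role ARN, not the plain IAM user
AWS_CONFIG_FILE=aws_config \
AWS_SHARED_CREDENTIALS_FILE=aws_credentials \
aws sts get-caller-identity --profile alice
```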

----

Export the "admin" kubeconfig to be able to install Capsule:

```bash
export KUBECONFIG=kubeconfig.conf
```

Install Capsule from the Helm chart:

```bash
helm repo add clastix https://clastix.github.io/charts
helm upgrade --install --version 0.0.19 --namespace capsule-system --create-namespace capsule clastix/capsule
```

Use the default Tenant example:

```bash
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/samples/capsule_v1beta1_tenant.yaml
```

Based on the tenant configuration above, the user `alice` should be able to create namespaces...

Switch to a new terminal tab and try to create a namespace as the user `alice`:

```bash
# Unset AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY if defined
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
kubectl create namespace test --kubeconfig="kubeconfig-alice.conf"

... do other commands allowed by Tenant configuration ...
```
docs/content/operator/managed-kubernetes/overview.md (Normal file, 16 lines)
@@ -0,0 +1,16 @@

# Capsule over Managed Kubernetes

Capsule Operator can be easily installed on a Managed Kubernetes Service. Since these services do not give you access to the Kubernetes API Server, you should check with your service provider that the following prerequisites are met:

- the default `cluster-admin` ClusterRole is accessible
- the following admission controllers are enabled on the API Server:
  - PodNodeSelector
  - LimitRanger
  - ResourceQuota
  - MutatingAdmissionWebhook
  - ValidatingAdmissionWebhook

* [AWS EKS](/docs/managed-kubernetes/aws-eks)
* CoAKS - Capsule over Azure Kubernetes Service
* Google Cloud GKE
* IBM Cloud
* OVH
181
docs/content/operator/monitoring.md
Normal file
@@ -0,0 +1,181 @@
# Monitoring Capsule

The Capsule dashboard allows you to track the health and performance of the Capsule manager and tenants, with particular attention to resource saturation, server responses, and latencies.

## Requirements

### Prometheus

Prometheus is an open-source monitoring system and time series database; it is based on a multi-dimensional data model and uses PromQL, a powerful query language, to leverage it.

- Minimum version: 1.0.0

### Grafana

Grafana is an open-source monitoring solution that offers a flexible way to generate visuals and configure dashboards.

- Minimum version: 7.5.5

To quickly deploy this monitoring stack, consider installing the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator).

---

## Quick Start

The Capsule Helm [charts](https://github.com/clastix/capsule/tree/master/charts/capsule) allow you to automatically create the minimum Kubernetes resources needed for the proper functioning of the dashboard:

* ServiceMonitor
* Role
* RoleBinding

N.B.: we assume that a ServiceAccount resource has already been created so it can easily interact with the Prometheus API.

### Helm install

During Capsule installation, set the `serviceMonitor` fields as follows:

```yaml
serviceMonitor:
  enabled: true
  [...]
  serviceAccount:
    name: <prometheus-sa>
    namespace: <prometheus-sa-namespace>
```

Take a look at the Helm charts [README.md](https://github.com/clastix/capsule/blob/master/charts/capsule/README.md#customize-the-installation) file for further customization.

### Check Service Monitor

Verify that the service monitor is working correctly through the Prometheus "targets" page:

![Prometheus targets](./assets/prometheus_targets.png)

### Deploy dashboard

Simply upload the [dashboard.json](https://github.com/clastix/capsule/blob/master/config/grafana/dashboard.json) file to Grafana through _Create_ -> _Import_, making sure to select the correct Prometheus data source:

![Prometheus data source](./assets/datasource.png)

## In-depth view

### Features

* [Manager controllers](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#manager-controllers)
* [Webhook error rate](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#webhook-error-rate)
* [Webhook latency](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#webhook-latency)
* [REST client latency](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#rest-client-latency)
* [REST client error rate](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#rest-client-error-rate)
* [Saturation](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#saturation)
* [Workqueue](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#workqueue)

---

#### Manager controllers

![Manager controllers](./assets/manager-controllers.png)

##### Description

This section provides information about the average time delay between manager client input, side effects, and new state determination (reconciliation).

##### Dependent variables and available values

* Controller name
  - capsuleconfiguration
  - clusterrole
  - clusterrolebinding
  - endpoints
  - endpointslice
  - secret
  - service
  - tenant

#### Webhook error rate

![Webhook error rate](./assets/webhook-error-rate.png)

##### Description

This section provides information about webhook request responses, mainly focusing on server-side errors.

##### Dependent variables and available values

* Webhook
  - cordoning
  - ingresses
  - namespace-owner-reference
  - namespaces
  - networkpolicies
  - persistentvolumeclaims
  - pods
  - services
  - tenants

#### Webhook latency

![Webhook latency](./assets/webhook-latency.png)

##### Description

This section provides information about the average time delay between webhook trigger, side effects, and data written on etcd.

##### Dependent variables and available values

* Webhook
  - cordoning
  - ingresses
  - namespace-owner-reference
  - namespaces
  - networkpolicies
  - persistentvolumeclaims
  - pods
  - services
  - tenants

#### REST client latency

![REST client latency](./assets/rest-client-latency.png)

##### Description

This section provides information about the average time delay of all the calls made by the controller to the API server.
Data display may depend on the REST client verb considered and on the available REST client URLs, so your mileage may vary.

##### Dependent variables and available values

* REST client URL
* REST client verb
  - GET
  - PUT
  - POST
  - PATCH
  - DELETE

#### REST client error rate

![REST client error rate](./assets/rest-client-error-rate.png)

##### Description

This section provides information about the total REST client request responses per unit time, grouped by returned status code.
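
This panel can also be reproduced outside Grafana. As a sketch, assuming the standard client-go metric `rest_client_requests_total` (labels `code` and `method`) is scraped from the Capsule manager, a Prometheus Operator recording rule for the 5xx error rate could look like this (the rule and resource names are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: capsule-rest-client        # hypothetical name
  namespace: capsule-system
spec:
  groups:
    - name: capsule.rest-client
      rules:
        # 5xx responses per second over the last 5 minutes, grouped by status code
        - record: capsule:rest_client_error_rate:5m
          expr: sum by (code) (rate(rest_client_requests_total{code=~"5.."}[5m]))
```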

#### Saturation

![Saturation](./assets/saturation.png)

##### Description

This section provides information about resources, giving a detailed picture of the system’s state and the amount of requested work per active controller.

#### Workqueue

![Workqueue](./assets/workqueue.png)

##### Description

This section provides information about "actions" in the queue, particularly:

- Workqueue latency: time to complete a series of actions in the queue;
- Workqueue rate: number of actions per unit time;
- Workqueue depth: number of pending actions waiting in the queue.
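
These panels build on the standard client-go workqueue metrics (`workqueue_depth`, `workqueue_adds_total`, `workqueue_queue_duration_seconds`). As a sketch, a Prometheus Operator alert on a persistently deep tenant controller queue could look like this (names and thresholds are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: capsule-workqueue          # hypothetical name
  namespace: capsule-system
spec:
  groups:
    - name: capsule.workqueue
      rules:
        - alert: CapsuleWorkqueueBacklog
          # pending actions waiting in the tenant controller queue
          expr: workqueue_depth{name="tenant"} > 10
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: The tenant controller workqueue is backing up
```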
@@ -0,0 +1,77 @@
# Allow self-service management of Network Policies

**Profile Applicability:** L2

**Type:** Behavioral

**Category:** Self-Service Operations

**Description:** Tenants should be able to perform self-service operations by creating their own network policies in their namespaces.

**Rationale:** Enables self-service management of network policies.

**Audit:**

As cluster admin, create a tenant

```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  networkPolicies:
    items:
    - ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              capsule.clastix.io/tenant: oil
      podSelector: {}
      policyTypes:
      - Egress
      - Ingress
EOF

./create-user.sh alice oil
```

As tenant owner, run the following command to create a namespace in the given tenant

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, retrieve the networkpolicies resources in the tenant namespace

```bash
kubectl --kubeconfig alice get networkpolicies
NAME            POD-SELECTOR   AGE
capsule-oil-0   <none>         7m5s
```

As tenant owner, check for permissions to manage networkpolicies for each verb

```bash
kubectl --kubeconfig alice auth can-i get networkpolicies
kubectl --kubeconfig alice auth can-i create networkpolicies
kubectl --kubeconfig alice auth can-i update networkpolicies
kubectl --kubeconfig alice auth can-i patch networkpolicies
kubectl --kubeconfig alice auth can-i delete networkpolicies
kubectl --kubeconfig alice auth can-i deletecollection networkpolicies
```

Each command must return `yes`.
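
Beyond the policies replicated by Capsule, the tenant owner can then add her own. A minimal sketch (the policy name and selectors are illustrative):

```yaml
kubectl --kubeconfig alice create -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: oil-production
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  policyTypes:
  - Ingress
EOF
```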

**Cleanup:**

As cluster admin, delete all the created resources

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```
@@ -0,0 +1,58 @@
# Allow self-service management of Role Bindings

**Profile Applicability:** L2

**Type:** Behavioral

**Category:** Self-Service Operations

**Description:** Tenants should be able to perform self-service operations by creating their own role bindings in their namespaces.

**Rationale:** Enables self-service management of role bindings.

**Audit:**

As cluster admin, create a tenant

```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
EOF

./create-user.sh alice oil
```

As tenant owner, run the following command to create a namespace in the given tenant

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, check for permissions to manage rolebindings for each verb

```bash
kubectl --kubeconfig alice auth can-i get rolebindings
kubectl --kubeconfig alice auth can-i create rolebindings
kubectl --kubeconfig alice auth can-i update rolebindings
kubectl --kubeconfig alice auth can-i patch rolebindings
kubectl --kubeconfig alice auth can-i delete rolebindings
kubectl --kubeconfig alice auth can-i deletecollection rolebindings
```

Each command must return `yes`.

**Cleanup:**

As cluster admin, delete all the created resources

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```
@@ -0,0 +1,58 @@
# Allow self-service management of Roles

**Profile Applicability:** L2

**Type:** Behavioral

**Category:** Self-Service Operations

**Description:** Tenants should be able to perform self-service operations by creating their own roles in their namespaces.

**Rationale:** Enables self-service management of roles.

**Audit:**

As cluster admin, create a tenant

```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
EOF

./create-user.sh alice oil
```

As tenant owner, run the following command to create a namespace in the given tenant

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, check for permissions to manage roles for each verb

```bash
kubectl --kubeconfig alice auth can-i get roles
kubectl --kubeconfig alice auth can-i create roles
kubectl --kubeconfig alice auth can-i update roles
kubectl --kubeconfig alice auth can-i patch roles
kubectl --kubeconfig alice auth can-i delete roles
kubectl --kubeconfig alice auth can-i deletecollection roles
```

Each command must return `yes`.

**Cleanup:**

As cluster admin, delete all the created resources

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```
113
docs/content/operator/mtb/block-access-to-cluster-resources.md
Normal file
@@ -0,0 +1,113 @@
# Block access to cluster resources

**Profile Applicability:** L1

**Type:** Configuration Check

**Category:** Control Plane Isolation

**Description:** Tenants should not be able to view, edit, create, or delete cluster-scoped (non-namespaced) resources such as Node, ClusterRole, ClusterRoleBinding, etc.

**Rationale:** Access controls should be configured for tenants so that a tenant cannot list, create, modify, or delete cluster resources.

**Audit:**

As cluster admin, create a tenant

```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
EOF

./create-user.sh alice oil
```

As cluster admin, run the following command to retrieve the list of non-namespaced resources

```bash
kubectl --kubeconfig cluster-admin api-resources --namespaced=false
```

For all non-namespaced resources, and each verb (get, list, create, update, patch, watch, delete, and deletecollection), issue the following command:

```bash
kubectl --kubeconfig alice auth can-i <verb> <resource>
```

Each command must return `no`.
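
Checking every combination by hand is tedious. The loop below is a sketch that prints one `can-i` check per resource and verb; the resource list here is a short illustrative sample — in practice, feed it from the `api-resources` output above.

```shell
# Sample of non-namespaced resources; in practice use:
#   kubectl --kubeconfig cluster-admin api-resources --namespaced=false -o name
resources="nodes clusterroles clusterrolebindings persistentvolumes"
verbs="get list create update patch watch delete deletecollection"
cmds=""
for resource in $resources; do
  for verb in $verbs; do
    cmds="${cmds}kubectl --kubeconfig alice auth can-i ${verb} ${resource}
"
  done
done
printf '%s' "$cmds"
```

Pipe the printed commands to `sh` to actually run them against the cluster.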

**Exception:**

It should, but it does not:

```bash
kubectl --kubeconfig alice auth can-i create selfsubjectaccessreviews
yes
kubectl --kubeconfig alice auth can-i create selfsubjectrulesreviews
yes
kubectl --kubeconfig alice auth can-i create namespaces
yes
```

Any Kubernetes user can create `SelfSubjectAccessReview` and `SelfSubjectRulesReview` objects to check whether they are allowed to act, so the first two exceptions are not an issue.

```bash
kubectl --anyuser auth can-i --list
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
                                                [/api/*]            []               [get]
                                                [/api]              []               [get]
                                                [/apis/*]           []               [get]
                                                [/apis]             []               [get]
                                                [/healthz]          []               [get]
                                                [/healthz]          []               [get]
                                                [/livez]            []               [get]
                                                [/livez]            []               [get]
                                                [/openapi/*]        []               [get]
                                                [/openapi]          []               [get]
                                                [/readyz]           []               [get]
                                                [/readyz]           []               [get]
                                                [/version/]         []               [get]
                                                [/version/]         []               [get]
                                                [/version]          []               [get]
                                                [/version]          []               [get]
```

To enable namespace self-service provisioning, Capsule intentionally gives permission to create namespaces to all users belonging to the Capsule group:

```bash
kubectl describe clusterrolebindings capsule-namespace-provisioner
Name:         capsule-namespace-provisioner
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  capsule-namespace-provisioner
Subjects:
  Kind   Name                Namespace
  ----   ----                ---------
  Group  capsule.clastix.io

kubectl describe clusterrole capsule-namespace-provisioner
Name:         capsule-namespace-provisioner
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources   Non-Resource URLs  Resource Names  Verbs
  ---------   -----------------  --------------  -----
  namespaces  []                 []              [create]
```

Capsule controls self-service namespace creation by limiting the number of namespaces a user can create via the `tenant.spec.namespaceQuota` option.
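
As a sketch of that option (the quota value is illustrative, and the exact field layout may differ between Capsule API versions):

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  namespaceQuota: 3   # at most three namespaces for this tenant
  owners:
  - kind: User
    name: alice
```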

**Cleanup:**

As cluster admin, delete all the created resources

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```
@@ -0,0 +1,155 @@
# Block access to multitenant resources

**Profile Applicability:** L1

**Type:** Behavioral

**Category:** Tenant Isolation

**Description:** Each tenant namespace may contain resources set up by the cluster administrator for multi-tenancy, such as role bindings and network policies. Tenants should not be allowed to modify the namespaced resources created by the cluster administrator for multi-tenancy. However, for some resources, such as network policies, tenants can configure additional instances of the resource for their workloads.

**Rationale:** Tenants can escalate privileges and impact other tenants if they can delete or modify required multi-tenancy resources, such as namespace resource quotas or the default network policy.

**Audit:**

As cluster admin, create a tenant

```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  networkPolicies:
    items:
    - podSelector: {}
      policyTypes:
      - Ingress
      - Egress
    - egress:
      - to:
        - namespaceSelector:
            matchLabels:
              capsule.clastix.io/tenant: oil
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              capsule.clastix.io/tenant: oil
      podSelector: {}
      policyTypes:
      - Egress
      - Ingress
EOF

./create-user.sh alice oil
```

As tenant owner, run the following command to create a namespace in the given tenant

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, retrieve the networkpolicies resources in the tenant namespace

```bash
kubectl --kubeconfig alice get networkpolicies
NAME            POD-SELECTOR   AGE
capsule-oil-0   <none>         7m5s
capsule-oil-1   <none>         7m5s
```

As tenant owner, try to modify or delete one of the networkpolicies

```bash
kubectl --kubeconfig alice delete networkpolicies capsule-oil-0
```

You should receive an error message denying the edit/delete request

```bash
Error from server (Forbidden): networkpolicies.networking.k8s.io "capsule-oil-0" is forbidden:
User "alice" cannot delete resource "networkpolicies" in API group "networking.k8s.io" in the namespace "oil-production"
```

As tenant owner, you can create an additional networkpolicy inside the namespace

```yaml
kubectl create -f - << EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hijacking
  namespace: oil-production
spec:
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
  podSelector: {}
  policyTypes:
  - Egress
EOF
```

However, due to the additive nature of network policies, the `DENY ALL` policy set by the cluster admin prevents hijacking.

As tenant owner, list the RBAC permissions set by Capsule

```bash
kubectl --kubeconfig alice get rolebindings
NAME                ROLE                                    AGE
namespace-deleter   ClusterRole/capsule-namespace-deleter   11h
namespace:admin     ClusterRole/admin                       11h
```

As tenant owner, try to change or delete the rolebinding in order to escalate permissions

```bash
kubectl --kubeconfig alice edit rolebinding namespace:admin
kubectl --kubeconfig alice delete rolebinding namespace:admin
```

The rolebinding is immediately recreated by Capsule:

```
kubectl --kubeconfig alice get rolebindings
NAME                ROLE                                    AGE
namespace-deleter   ClusterRole/capsule-namespace-deleter   11h
namespace:admin     ClusterRole/admin                       2s
```

However, the tenant owner can create and assign permissions inside the namespaces she owns

```yaml
kubectl create -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oil-robot:admin
  namespace: oil-production
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: oil-production
EOF
```

**Cleanup:**

As cluster admin, delete all the created resources

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```
@@ -0,0 +1,97 @@
# Block access to other tenant resources

**Profile Applicability:** L1

**Type:** Behavioral

**Category:** Tenant Isolation

**Description:** Each tenant has its own set of resources, such as namespaces, service accounts, secrets, pods, services, etc. Tenants should not be allowed to access each other's resources.

**Rationale:** A tenant's resources must not be accessible by other tenants.

**Audit:**

As cluster admin, create a couple of tenants

```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
EOF

./create-user.sh alice oil
```

and

```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  owners:
  - kind: User
    name: joe
EOF

./create-user.sh joe gas
```

As `oil` tenant owner, run the following command to create a namespace in the given tenant

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As `gas` tenant owner, run the following command to create a namespace in the given tenant

```bash
kubectl --kubeconfig joe create ns gas-production
kubectl --kubeconfig joe config set-context --current --namespace gas-production
```

As `oil` tenant owner, try to retrieve the resources in the `gas` tenant namespaces

```bash
kubectl --kubeconfig alice get serviceaccounts --namespace gas-production
```

You must receive an error message:

```
Error from server (Forbidden): serviceaccounts is forbidden:
User "alice" cannot list resource "serviceaccounts" in API group "" in the namespace "gas-production"
```

As `gas` tenant owner, try to retrieve the resources in the `oil` tenant namespaces

```bash
kubectl --kubeconfig joe get serviceaccounts --namespace oil-production
```

You must receive an error message:

```
Error from server (Forbidden): serviceaccounts is forbidden:
User "joe" cannot list resource "serviceaccounts" in API group "" in the namespace "oil-production"
```

**Cleanup:**

As cluster admin, delete all the created resources

```bash
kubectl --kubeconfig cluster-admin delete tenants oil gas
```
121
docs/content/operator/mtb/block-add-capabilities.md
Normal file
@@ -0,0 +1,121 @@
# Block add capabilities

**Profile Applicability:** L1

**Type:** Behavioral Check

**Category:** Control Plane Isolation

**Description:** Control Linux capabilities.

**Rationale:** Linux allows defining fine-grained permissions using capabilities. With Kubernetes, it is possible to add capabilities for pods that escalate the level of kernel access and allow other potentially dangerous behaviors.

**Audit:**

As cluster admin, define a `PodSecurityPolicy` with `allowedCapabilities` and map the policy to a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  # The default set of capabilities is implicitly allowed.
  # The empty set means that no additional capabilities may be added beyond the default set.
  allowedCapabilities: []
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
EOF
```

> Note: make sure the `PodSecurityPolicy` Admission Controller is enabled on the API Server: `--enable-admission-plugins=PodSecurityPolicy`

Then create a ClusterRole using or granting said policy

```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tenant:psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['tenant']
  verbs: ['use']
EOF
```

And assign it to the tenant

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  additionalRoleBindings:
  - clusterRoleName: tenant:psp
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
EOF

./create-user.sh alice oil
```

As tenant owner, run the following command to create a namespace in the given tenant

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a pod and see that new capabilities cannot be added in the tenant namespaces

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-settime-cap
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
    securityContext:
      capabilities:
        add:
        - SYS_TIME
EOF
```

The pod must be blocked by the PodSecurityPolicy.

**Cleanup:**

As cluster admin, delete all the created resources

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete podsecuritypolicy tenant
kubectl --kubeconfig cluster-admin delete clusterrole tenant:psp
```
@@ -0,0 +1,69 @@
# Block modification of resource quotas

**Profile Applicability:** L1

**Type:** Behavioral Check

**Category:** Tenant Isolation

**Description:** Tenants should not be able to modify the resource quotas defined in their namespaces.

**Rationale:** Resource quotas must be configured for isolation and fairness between tenants. Tenants should not be able to modify existing resource quotas, as they may exhaust cluster resources and impact other tenants.

**Audit:**

As cluster admin, create a tenant

```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  resourceQuotas:
    items:
    - hard:
        limits.cpu: "8"
        limits.memory: 16Gi
        requests.cpu: "8"
        requests.memory: 16Gi
    - hard:
        pods: "10"
        services: "50"
    - hard:
        requests.storage: 100Gi
EOF

./create-user.sh alice oil
```

As tenant owner, run the following command to create a namespace in the given tenant

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, check the permissions to modify/delete the quota in the tenant namespace:

```bash
kubectl --kubeconfig alice auth can-i create quota
kubectl --kubeconfig alice auth can-i update quota
kubectl --kubeconfig alice auth can-i patch quota
kubectl --kubeconfig alice auth can-i delete quota
kubectl --kubeconfig alice auth can-i deletecollection quota
```

Each command must return `no`.

**Cleanup:**

As cluster admin, delete all the created resources

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```
@@ -0,0 +1,107 @@
|
||||
# Block access to multitenant resources
|
||||
|
||||
**Profile Applicability:** L1
|
||||
|
||||
**Type:** Behavioral
|
||||
|
||||
**Category:** Tenant Isolation
|
||||
|
||||
**Description:** Block network traffic among namespaces from different tenants.
|
||||
|
||||
**Rationale:** Tenants cannot access services and pods in another tenant's namespaces.
|
||||
|
||||
**Audit:**
|
||||
|
||||
As cluster admin, create a couple of tenants
|
||||
|
||||
```yaml
|
||||
kubectl create -f - <<EOF
|
||||
apiVersion: capsule.clastix.io/v1beta1
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owners:
|
||||
- kind: User
|
||||
name: alice
|
||||
networkPolicies:
|
||||
    items:
    - ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              capsule.clastix.io/tenant: oil
      podSelector: {}
      policyTypes:
      - Ingress
EOF

./create-user.sh alice oil
```

and

```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  owners:
  - kind: User
    name: joe
  networkPolicies:
    items:
    - ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              capsule.clastix.io/tenant: gas
      podSelector: {}
      policyTypes:
      - Ingress
EOF

./create-user.sh joe gas
```

As `oil` tenant owner, run the following commands to create a namespace and resources in the given tenant:

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
kubectl --kubeconfig alice run webserver --image nginx:latest
kubectl --kubeconfig alice expose pod webserver --port 80
```

As `gas` tenant owner, run the following commands to create a namespace and resources in the given tenant:

```bash
kubectl --kubeconfig joe create ns gas-production
kubectl --kubeconfig joe config set-context --current --namespace gas-production
kubectl --kubeconfig joe run webserver --image nginx:latest
kubectl --kubeconfig joe expose pod webserver --port 80
```

As `oil` tenant owner, verify you can access the service in the `oil` tenant namespace but not in the `gas` tenant namespace:

```bash
kubectl --kubeconfig alice exec webserver -- curl http://webserver.oil-production.svc.cluster.local
kubectl --kubeconfig alice exec webserver -- curl http://webserver.gas-production.svc.cluster.local
```

Vice versa, as `gas` tenant owner, verify you can access the service in the `gas` tenant namespace but not in the `oil` tenant namespace:

```bash
kubectl --kubeconfig joe exec webserver -- curl http://webserver.gas-production.svc.cluster.local
kubectl --kubeconfig joe exec webserver -- curl http://webserver.oil-production.svc.cluster.local
```
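
Behind the scenes, Capsule replicates each `networkPolicies` item as a `NetworkPolicy` object into every namespace of the tenant. The replicated object looks roughly like the following sketch (the name and labels below are hypothetical; the actual ones are assigned and managed by Capsule):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: capsule-oil-0   # hypothetical name, assigned by Capsule
  namespace: oil-production
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          capsule.clastix.io/tenant: oil
  policyTypes:
  - Ingress
```

With `podSelector: {}`, the policy applies to all pods in the namespace, and only traffic from namespaces labelled with the tenant's own label is allowed in.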

**Cleanup:**

As cluster admin, delete all the created resources:

```bash
kubectl --kubeconfig cluster-admin delete tenants oil gas
```
115
docs/content/operator/mtb/block-privilege-escalation.md
Normal file
@@ -0,0 +1,115 @@

# Block privilege escalation

**Profile Applicability:** L1

**Type:** Behavioral Check

**Category:** Control Plane Isolation

**Description:** Control container permissions.

**Rationale:** The `allowPrivilegeEscalation` security setting allows a process to gain more privileges than its parent process. Processes in tenant containers should not be allowed to gain additional privileges.

**Audit:**

As cluster admin, define a `PodSecurityPolicy` that sets `allowPrivilegeEscalation=false` and map the policy to a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
EOF
```

> Note: make sure the `PodSecurityPolicy` admission plugin is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`
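
A quick way to check the API server flags is to grep its command line. The `has_psp` helper below is a hypothetical sketch, not part of Capsule or Kubernetes; the static pod manifest path assumes a kubeadm-based cluster:

```shell
# Sketch: check whether the PodSecurityPolicy admission plugin is enabled,
# given the kube-apiserver command line (or its static pod manifest content).
has_psp() {
  # Returns success if PodSecurityPolicy appears in --enable-admission-plugins.
  echo "$1" | grep -q 'enable-admission-plugins=[^ ]*PodSecurityPolicy'
}

# On a kubeadm-based cluster the flags live in the static pod manifest, e.g.:
#   has_psp "$(cat /etc/kubernetes/manifests/kube-apiserver.yaml)" && echo enabled
```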

Then create a ClusterRole granting the `use` verb on the policy:

```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tenant:psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['tenant']
  verbs: ['use']
EOF
```

And assign it to the tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  additionalRoleBindings:
  - clusterRoleName: tenant:psp
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
EOF

./create-user.sh alice oil
```

As tenant owner, run the following commands to create a namespace in the given tenant:

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a pod that sets `allowPrivilegeEscalation=true` in its `securityContext`:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-privileged-mode
  namespace: oil-production
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
    securityContext:
      allowPrivilegeEscalation: true
EOF
```

The pod must be blocked by the `PodSecurityPolicy`.

**Cleanup:**

As cluster admin, delete all the created resources:

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```
116
docs/content/operator/mtb/block-privileged-containers.md
Normal file
@@ -0,0 +1,116 @@

# Block privileged containers

**Profile Applicability:** L1

**Type:** Behavioral Check

**Category:** Control Plane Isolation

**Description:** Control container permissions.

**Rationale:** By default, a container is not allowed to access any devices on the host, but a "privileged" container can access all devices on the host. A process within a privileged container can also get unrestricted host access. Hence, tenants should not be allowed to run privileged containers.

**Audit:**

As cluster admin, define a `PodSecurityPolicy` that sets `privileged=false` and map the policy to a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
EOF
```

> Note: make sure the `PodSecurityPolicy` admission plugin is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`

Then create a ClusterRole granting the `use` verb on the policy:

```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tenant:psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['tenant']
  verbs: ['use']
EOF
```

And assign it to the tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  additionalRoleBindings:
  - clusterRoleName: tenant:psp
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
EOF

./create-user.sh alice oil
```

As tenant owner, run the following commands to create a namespace in the given tenant:

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a pod that sets `privileged: true` in its `securityContext`:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-privileged-mode
  namespace: oil-production
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
    securityContext:
      privileged: true
EOF
```

The pod must be blocked by the `PodSecurityPolicy`.

**Cleanup:**

As cluster admin, delete all the created resources:

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```
@@ -0,0 +1,40 @@

# Block use of existing PVs

**Profile Applicability:** L1

**Type:** Configuration Check

**Category:** Data Isolation

**Description:** Prevent tenants from mounting existing volumes.

**Rationale:** Tenants have to be assured that their Persistent Volumes cannot be reclaimed by other tenants.

**Audit:**

As cluster admin, create a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
EOF

./create-user.sh alice oil
```

As tenant owner, check if you can access the persistent volumes:

```bash
kubectl --kubeconfig alice auth can-i get persistentvolumes
kubectl --kubeconfig alice auth can-i list persistentvolumes
kubectl --kubeconfig alice auth can-i watch persistentvolumes
```

You must receive `no` for all the requests.
115
docs/content/operator/mtb/block-use-of-host-ipc.md
Normal file
@@ -0,0 +1,115 @@

# Block use of host IPC

**Profile Applicability:** L1

**Type:** Behavioral Check

**Category:** Host Isolation

**Description:** Tenants should not be allowed to share the host's inter-process communication (IPC) namespace.

**Rationale:** The `hostIPC` setting allows pods to share the host's inter-process communication (IPC) namespace, allowing potential access to host processes or processes belonging to other tenants.

**Audit:**

As cluster admin, define a `PodSecurityPolicy` that restricts `hostIPC` usage and map the policy to a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  hostIPC: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
EOF
```

> Note: make sure the `PodSecurityPolicy` admission plugin is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`

Then create a ClusterRole granting the `use` verb on the policy:

```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tenant:psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['tenant']
  verbs: ['use']
EOF
```

And assign it to the tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  additionalRoleBindings:
  - clusterRoleName: tenant:psp
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
EOF

./create-user.sh alice oil
```

As tenant owner, run the following commands to create a namespace in the given tenant:

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a pod mounting the host IPC namespace:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-host-ipc
  namespace: oil-production
spec:
  hostIPC: true
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
EOF
```

The pod must be blocked by the `PodSecurityPolicy`.

**Cleanup:**

As cluster admin, delete all the created resources:

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```
@@ -0,0 +1,136 @@

# Block use of host networking and ports

**Profile Applicability:** L1

**Type:** Behavioral Check

**Category:** Host Isolation

**Description:** Tenants should not be allowed to use host networking and host ports for their workloads.

**Rationale:** Using `hostPort` and `hostNetwork` allows tenant workloads to share the host networking stack, allowing potential snooping of network traffic across application pods.

**Audit:**

As cluster admin, define a `PodSecurityPolicy` that restricts `hostPort` and `hostNetwork` and map the policy to a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  hostNetwork: false
  hostPorts: [] # empty means no allowed host ports
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
EOF
```

> Note: make sure the `PodSecurityPolicy` admission plugin is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`

Then create a ClusterRole granting the `use` verb on the policy:

```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tenant:psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['tenant']
  verbs: ['use']
EOF
```

And assign it to the tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  additionalRoleBindings:
  - clusterRoleName: tenant:psp
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
EOF

./create-user.sh alice oil
```

As tenant owner, run the following commands to create a namespace in the given tenant:

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a pod using `hostNetwork`:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-hostnetwork
  namespace: oil-production
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
EOF
```

As tenant owner, create a pod defining a container using `hostPort`:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-hostport
  namespace: oil-production
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
      hostPort: 9090
EOF
```

In both cases, the pods must be blocked by the `PodSecurityPolicy`.

**Cleanup:**

As cluster admin, delete all the created resources:

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```
129
docs/content/operator/mtb/block-use-of-host-path-volumes.md
Normal file
@@ -0,0 +1,129 @@

# Block use of host path volumes

**Profile Applicability:** L1

**Type:** Behavioral Check

**Category:** Host Protection

**Description:** Tenants should not be able to mount host volumes and directories.

**Rationale:** Host volumes and directories can be used to access shared data or escalate privileges, and they also create a tight coupling between a tenant workload and a host.

**Audit:**

As cluster admin, define a `PodSecurityPolicy` that restricts `hostPath` volumes and map the policy to a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  volumes: # hostPath is not permitted
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  - 'persistentVolumeClaim'
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
EOF
```

> Note: make sure the `PodSecurityPolicy` admission plugin is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`

Then create a ClusterRole granting the `use` verb on the policy:

```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tenant:psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['tenant']
  verbs: ['use']
EOF
```

And assign it to the tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  additionalRoleBindings:
  - clusterRoleName: tenant:psp
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
EOF

./create-user.sh alice oil
```

As tenant owner, run the following commands to create a namespace in the given tenant:

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a pod defining a volume of type `hostPath`:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-hostpath-volume
  namespace: oil-production
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
    volumeMounts:
    - mountPath: /tmp
      name: volume
  volumes:
  - name: volume
    hostPath:
      # directory location on host
      path: /data
      type: Directory
EOF
```

The pod must be blocked by the `PodSecurityPolicy`.

**Cleanup:**

As cluster admin, delete all the created resources:

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```
115
docs/content/operator/mtb/block-use-of-host-pid.md
Normal file
@@ -0,0 +1,115 @@

# Block use of host PID

**Profile Applicability:** L1

**Type:** Behavioral Check

**Category:** Host Isolation

**Description:** Tenants should not be allowed to share the host process ID (PID) namespace.

**Rationale:** The `hostPID` setting allows pods to share the host process ID namespace, allowing potential privilege escalation. Tenant pods should not be allowed to share the host PID namespace.

**Audit:**

As cluster admin, define a `PodSecurityPolicy` that restricts `hostPID` usage and map the policy to a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  hostPID: false
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
EOF
```

> Note: make sure the `PodSecurityPolicy` admission plugin is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`

Then create a ClusterRole granting the `use` verb on the policy:

```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tenant:psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['tenant']
  verbs: ['use']
EOF
```

And assign it to the tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  additionalRoleBindings:
  - clusterRoleName: tenant:psp
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
EOF

./create-user.sh alice oil
```

As tenant owner, run the following commands to create a namespace in the given tenant:

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a pod mounting the host PID namespace:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-host-pid
  namespace: oil-production
spec:
  hostPID: true
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
EOF
```

The pod must be blocked by the `PodSecurityPolicy`.

**Cleanup:**

As cluster admin, delete all the created resources:

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```
75
docs/content/operator/mtb/block-use-of-nodeport-services.md
Normal file
@@ -0,0 +1,75 @@

# Block use of NodePort services

**Profile Applicability:** L1

**Type:** Behavioral Check

**Category:** Host Isolation

**Description:** Tenants should not be able to create services of type `NodePort`.

**Rationale:** The `NodePort` service type configures host ports that cannot be secured using Kubernetes network policies and requires upstream firewalls. Also, multiple tenants cannot use the same host port numbers.

**Audit:**

As cluster admin, create a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  enableNodePorts: false
  owners:
  - kind: User
    name: alice
EOF

./create-user.sh alice oil
```

As tenant owner, run the following commands to create a namespace in the given tenant:

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a service of type `NodePort` in the tenant namespace:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: oil-production
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
  selector:
    run: nginx
  type: NodePort
EOF
```

You must receive an error message denying the request:

```
Error from server (NodePort service types are forbidden for the tenant: please, reach out to the system administrators):
error when creating "STDIN": admission webhook "services.capsule.clastix.io" denied the request:
NodePort service types are forbidden for the tenant: please, reach out to the system administrators
```
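
Only the `NodePort` type is blocked: a `ClusterIP` service with the same ports and selector should be accepted. A minimal sketch:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: oil-production
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
  selector:
    run: nginx
EOF
```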

**Cleanup:**

As cluster admin, delete all the created resources:

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```
@@ -0,0 +1,66 @@

# Configure namespace object limits

**Profile Applicability:** L1

**Type:** Configuration

**Category:** Fairness

**Description:** Namespace resource quotas should be used to allocate, track, and limit the number of objects, of a particular type, that can be created within a namespace.

**Rationale:** Resource quotas must be configured for each tenant namespace, to guarantee isolation and fairness across tenants.

**Audit:**

As cluster admin, create a tenant:

```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  resourceQuotas:
    items:
    - hard:
        pods: 100
        services: 50
        services.loadbalancers: 3
        services.nodeports: 20
        persistentvolumeclaims: 100
EOF

./create-user.sh alice oil
```

As tenant owner, run the following commands to create a namespace in the given tenant:

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, retrieve the configured quotas in the tenant namespace:

```bash
kubectl --kubeconfig alice get quota

NAME            AGE   REQUEST                                                                                                              LIMIT
capsule-oil-0   23s   persistentvolumeclaims: 0/100, pods: 0/100, services: 0/50, services.loadbalancers: 0/3, services.nodeports: 0/20
```

Make sure that a quota is configured for API objects: `PersistentVolumeClaim`, `LoadBalancer`, `NodePort`, `Pods`, etc.
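
The `resourceQuotas` items above correspond to `ResourceQuota` objects that Capsule replicates into each tenant namespace. A sketch of the replicated object (the name below is hypothetical; the actual one is assigned and managed by Capsule, as in the `capsule-oil-0` output above):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: capsule-oil-0   # hypothetical name, assigned by Capsule
  namespace: oil-production
spec:
  hard:
    pods: "100"
    services: "50"
    services.loadbalancers: "3"
    services.nodeports: "20"
    persistentvolumeclaims: "100"
```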

**Cleanup:**

As cluster admin, delete all the created resources:

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```
@@ -0,0 +1,65 @@

# Configure namespace resource quotas

**Profile Applicability:** L1

**Type:** Configuration

**Category:** Fairness

**Description:** Namespace resource quotas should be used to allocate, track, and limit a tenant's use of shared resources.

**Rationale:** Resource quotas must be configured for each tenant namespace, to guarantee isolation and fairness across tenants.

**Audit:**

As cluster admin, create a tenant:

```yaml
kubectl create -f - <<EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  resourceQuotas:
    items:
    - hard:
        limits.cpu: "8"
        limits.memory: 16Gi
        requests.cpu: "8"
        requests.memory: 16Gi
    - hard:
        requests.storage: 100Gi
EOF

./create-user.sh alice oil
```

As tenant owner, run the following commands to create a namespace in the given tenant:

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, retrieve the configured quotas in the tenant namespace:

```bash
kubectl --kubeconfig alice get quota

NAME            AGE   REQUEST                                      LIMIT
capsule-oil-0   24s   requests.cpu: 0/8, requests.memory: 0/16Gi   limits.cpu: 0/8, limits.memory: 0/16Gi
capsule-oil-1   24s   requests.storage: 0/100Gi
```

Make sure that a quota is configured for CPU, memory, and storage resources.

**Cleanup:**

As cluster admin, delete all the created resources:

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```
71
docs/content/operator/mtb/require-always-imagepullpolicy.md
Normal file
@@ -0,0 +1,71 @@

# Require always imagePullPolicy

**Profile Applicability:** L1

**Type:** Configuration Check

**Category:** Data Isolation

**Description:** Set the image pull policy to `Always` for tenant workloads.

**Rationale:** Tenants have to be assured that their private images can only be used by those who have the credentials to pull them.

**Audit:**

As cluster admin, create a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  imagePullPolicies:
  - Always
  owners:
  - kind: User
    name: alice
EOF

./create-user.sh alice oil
```

As tenant owner, run the following commands to create a namespace in the given tenant:

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a pod in the tenant namespace with `imagePullPolicy=IfNotPresent`:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: oil-production
spec:
  containers:
  - name: nginx
    image: nginx:latest
    imagePullPolicy: IfNotPresent
EOF
```

You must receive an error message denying the request:

```
Error from server (ImagePullPolicy IfNotPresent for container nginx is forbidden, use one of the followings: Always):
error when creating "STDIN": admission webhook "pods.capsule.clastix.io" denied the request:
ImagePullPolicy IfNotPresent for container nginx is forbidden, use one of the followings: Always
```

**Cleanup:**

As cluster admin, delete all the created resources:

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
```
@@ -0,0 +1,124 @@
# Require PersistentVolumeClaim for storage

**Profile Applicability:** L1

**Type:** Behavioral Check

**Category:** na

**Description:** Tenants should not be able to use any volume type except `PersistentVolumeClaim`.

**Rationale:** In some scenarios, it is required to disallow usage of any core volume type except PVCs.

**Audit:**

As cluster admin, define a `PodSecurityPolicy` allowing only `PersistentVolumeClaim` volumes and map the policy to a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  volumes:
  - 'persistentVolumeClaim'
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
EOF
```

> Note: make sure the `PodSecurityPolicy` Admission Controller is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`

Then create a ClusterRole granting the use of the said policy

```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tenant:psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['tenant']
  verbs: ['use']
EOF
```

And assign it to the tenant

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  additionalRoleBindings:
  - clusterRoleName: tenant:psp
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
EOF

./create-user.sh alice oil
```

As tenant owner, run the following command to create a namespace in the given tenant

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a pod defining a volume of any core type except `PersistentVolumeClaim`. For example:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-hostpath-volume
  namespace: oil-production
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
    volumeMounts:
    - mountPath: /tmp
      name: volume
  volumes:
  - name: volume
    hostPath:
      # directory location on host
      path: /data
      type: Directory
EOF
```

The pod must be blocked by the `PodSecurityPolicy`.
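Conversely, a pod mounting only a `PersistentVolumeClaim` volume should pass the policy. A sketch, assuming a hypothetical claim named `my-claim` has already been created in the namespace by the tenant owner:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-pvc-volume
  namespace: oil-production
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
    volumeMounts:
    - mountPath: /tmp
      name: volume
  volumes:
  - name: volume
    persistentVolumeClaim:
      # hypothetical claim name, created beforehand
      claimName: my-claim
EOF
```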

**Cleanup:**
As cluster admin, delete all the created resources

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```

@@ -0,0 +1,87 @@
# Require PV reclaim policy of delete

**Profile Applicability:** L1

**Type:** Configuration Check

**Category:** Data Isolation

**Description:** Force a tenant to use a Storage Class with `reclaimPolicy=Delete`.

**Rationale:** Tenants have to be assured that their Persistent Volumes cannot be reclaimed by other tenants.

**Audit:**

As cluster admin, create a Storage Class with `reclaimPolicy=Delete`

```yaml
kubectl create -f - << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delete-policy
reclaimPolicy: Delete
provisioner: clastix.io/nfs
EOF
```

As cluster admin, create a tenant and assign the above Storage Class

```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  storageClasses:
    allowed:
    - delete-policy
EOF

./create-user.sh alice oil
```

As tenant owner, run the following command to create a namespace in the given tenant

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a `PersistentVolumeClaim` in the tenant namespace missing the Storage Class or using any other Storage Class:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc
  namespace: oil-production
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 12Gi
EOF
```

You must receive an error message denying the request:

```
Error from server (A valid Storage Class must be used, one of the following (delete-policy)): error when creating "STDIN": admission webhook "pvc.capsule.clastix.io" denied the request: A valid Storage Class must be used, one of the following (delete-policy)
```
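A claim that explicitly references the allowed Storage Class should instead be accepted. A minimal sketch:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc
  namespace: oil-production
spec:
  # delete-policy is the only Storage Class allowed by the tenant spec above
  storageClassName: delete-policy
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 12Gi
EOF
```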

**Cleanup:**
As cluster admin, delete all the created resources

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete storageclass delete-policy
```

119
docs/content/operator/mtb/require-run-as-non-root-user.md
Normal file
@@ -0,0 +1,119 @@
# Require run as non-root user

**Profile Applicability:** L1

**Type:** Behavioral Check

**Category:** Control Plane Isolation

**Description:** Control container permissions.

**Rationale:** By default, processes in containers run as the root user (uid 0). To prevent potential compromise of container hosts, specify a least-privileged user ID when building the container image and require that application containers run as non-root users.

**Audit:**

As cluster admin, define a `PodSecurityPolicy` with `runAsUser=MustRunAsNonRoot` and map the policy to a tenant:

```yaml
kubectl create -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: tenant
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: MustRunAsNonRoot
  supplementalGroups:
    rule: MustRunAs
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
    # Forbid adding the root group.
    - min: 1
      max: 65535
EOF
```

> Note: make sure the `PodSecurityPolicy` Admission Controller is enabled on the API server: `--enable-admission-plugins=PodSecurityPolicy`

Then create a ClusterRole granting the use of the said policy

```yaml
kubectl create -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tenant:psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['tenant']
  verbs: ['use']
EOF
```

And assign it to the tenant

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
  additionalRoleBindings:
  - clusterRoleName: tenant:psp
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
EOF

./create-user.sh alice oil
```

As tenant owner, run the following command to create a namespace in the given tenant

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, create a pod that neither sets `runAsNonRoot` to `true` in its `securityContext` nor specifies a non-zero `runAsUser`:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-run-as-root
  namespace: oil-production
spec:
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
EOF
```

The pod must be blocked by the `PodSecurityPolicy`.
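A pod declaring a non-root security context should instead be admitted. A sketch, where uid `1000` is an arbitrary non-root example:

```yaml
kubectl --kubeconfig alice apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-run-as-nonroot
  namespace: oil-production
spec:
  securityContext:
    runAsNonRoot: true
    # any non-zero uid satisfies MustRunAsNonRoot
    runAsUser: 1000
  containers:
  - name: busybox
    image: busybox:latest
    command: ["/bin/sleep", "3600"]
EOF
```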

**Cleanup:**
As cluster admin, delete all the created resources

```bash
kubectl --kubeconfig cluster-admin delete tenant oil
kubectl --kubeconfig cluster-admin delete PodSecurityPolicy tenant
kubectl --kubeconfig cluster-admin delete ClusterRole tenant:psp
```

30
docs/content/operator/mtb/sig-multitenancy-bench.md
Normal file
@@ -0,0 +1,30 @@
# Meet the multi-tenancy benchmark MTB
Currently, there is not yet a real standard for the multi-tenancy model in Kubernetes, although the [SIG multi-tenancy group](https://github.com/kubernetes-sigs/multi-tenancy) is working on one. SIG multi-tenancy drafted a generic validation schema applicable to generic multi-tenancy projects. Multi-Tenancy Benchmarks [MTB](https://github.com/kubernetes-sigs/multi-tenancy/tree/master/benchmarks) are guidelines for the multi-tenant configuration of Kubernetes clusters. Capsule is an open source multi-tenancy operator, and we decided to meet the requirements of MTB.

> N.B. At the time of writing, the MTB is in development and not ready for usage. Strictly speaking, we do not claim official conformance to MTB, but only adherence to the multi-tenancy requirements and best practices promoted by MTB.

|MTB Benchmark |MTB Profile|Capsule Version|Conformance|Notes  |
|--------------|-----------|---------------|-----------|-------|
|[Block access to cluster resources](/docs/operator/mtb/block-access-to-cluster-resources)|L1|v0.1.0|✓|---|
|[Block access to multitenant resources](/docs/operator/mtb/block-access-to-multitenant-resources)|L1|v0.1.0|✓|---|
|[Block access to other tenant resources](/docs/operator/mtb/block-access-to-other-tenant-resources)|L1|v0.1.0|✓|MTB draft|
|[Block add capabilities](/docs/operator/mtb/block-add-capabilities)|L1|v0.1.0|✓|---|
|[Require always imagePullPolicy](/docs/operator/mtb/require-always-imagepullpolicy)|L1|v0.1.0|✓|---|
|[Require run as non-root user](/docs/operator/mtb/require-run-as-non-root-user)|L1|v0.1.0|✓|---|
|[Block privileged containers](/docs/operator/mtb/block-privileged-containers)|L1|v0.1.0|✓|---|
|[Block privilege escalation](/docs/operator/mtb/block-privilege-escalation)|L1|v0.1.0|✓|---|
|[Configure namespace resource quotas](/docs/operator/mtb/configure-namespace-resource-quotas)|L1|v0.1.0|✓|---|
|[Block modification of resource quotas](/docs/operator/mtb/block-modification-of-resource-quotas)|L1|v0.1.0|✓|---|
|[Configure namespace object limits](/docs/operator/mtb/configure-namespace-object-limits)|L1|v0.1.0|✓|---|
|[Block use of host path volumes](/docs/operator/mtb/block-use-of-host-path-volumes)|L1|v0.1.0|✓|---|
|[Block use of host networking and ports](/docs/operator/mtb/block-use-of-host-networking-and-ports)|L1|v0.1.0|✓|---|
|[Block use of host PID](/docs/operator/mtb/block-use-of-host-pid)|L1|v0.1.0|✓|---|
|[Block use of host IPC](/docs/operator/mtb/block-use-of-host-ipc)|L1|v0.1.0|✓|---|
|[Block use of NodePort services](/docs/operator/mtb/block-use-of-nodeport-services)|L1|v0.1.0|✓|---|
|[Require PersistentVolumeClaim for storage](/docs/operator/mtb/require-persistentvolumeclaim-for-storage)|L1|v0.1.0|✓|MTB draft|
|[Require PV reclaim policy of delete](/docs/operator/mtb/require-reclaim-policy-of-delete)|L1|v0.1.0|✓|MTB draft|
|[Block use of existing PVs](/docs/operator/mtb/block-use-of-existing-persistent-volumes)|L1|v0.1.0|✓|MTB draft|
|[Block network access across tenant namespaces](/docs/operator/mtb/block-network-access-across-tenant-namespaces)|L1|v0.1.0|✓|MTB draft|
|[Allow self-service management of Network Policies](/docs/operator/mtb/allow-self-service-management-of-network-policies)|L2|v0.1.0|✓|---|
|[Allow self-service management of Roles](/docs/operator/mtb/allow-self-service-management-of-roles)|L2|v0.1.0|✓|MTB draft|
|[Allow self-service management of Role Bindings](/docs/operator/mtb/allow-self-service-management-of-rolebindings)|L2|v0.1.0|✓|MTB draft|
10
docs/content/operator/overview.md
Normal file
@@ -0,0 +1,10 @@
# Kubernetes Operator

* [Getting Started](/docs/operator/getting-started)
* [Use Cases](/docs/operator/use-cases/overview)
* [SIG Multi-tenancy benchmark](/docs/operator/mtb/sig-multitenancy-bench)
* [Run on Managed Kubernetes Services](/docs/operator/managed-kubernetes/overview)
* [Monitoring Capsule](/docs/operator/monitoring)
* [References](/docs/operator/references)
* [Contributing](/docs/operator/contributing)

235
docs/content/operator/references.md
Normal file
@@ -0,0 +1,235 @@
# Reference

* [Custom Resource Definition](/docs/operator/references/#custom-resource-definition)
* [Capsule Configuration](/docs/operator/references/#capsule-configuration)
* [Capsule Permissions](/docs/operator/references/#capsule-permissions)
* [Admission Controllers](/docs/operator/references/#admission-controllers)
* [Command Options](/docs/operator/references/#command-options)
* [Created Resources](/docs/operator/references/#created-resources)

## Custom Resource Definition

The Capsule operator uses a Custom Resource Definition (CRD) for _Tenants_. In Capsule, Tenants are cluster-wide resources, so you need cluster-level permissions to work with tenants.

You can learn about the Tenant CRD with the `kubectl explain` command:

```command
kubectl explain tenant

KIND:     Tenant
VERSION:  capsule.clastix.io/v1beta1

DESCRIPTION:
     Tenant is the Schema for the tenants API

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object>
     TenantSpec defines the desired state of Tenant

   status       <Object>
     Returns the observed state of the Tenant
```

For the Tenant spec:

```command
kubectl explain tenant.spec

KIND:     Tenant
VERSION:  capsule.clastix.io/v1beta1

RESOURCE: spec <Object>

DESCRIPTION:
     TenantSpec defines the desired state of Tenant

FIELDS:
   additionalRoleBindings       <[]Object>
     Specifies additional RoleBindings assigned to the Tenant. Capsule will
     ensure that all namespaces in the Tenant always contain the RoleBinding
     for the given ClusterRole. Optional.

   containerRegistries  <Object>
     Specifies the trusted Image Registries assigned to the Tenant. Capsule
     assures that all Pods resources created in the Tenant can use only one
     of the allowed trusted registries. Optional.

   imagePullPolicies    <[]string>
     Specify the allowed values for the imagePullPolicies option in Pod
     resources. Capsule assures that all Pod resources created in the Tenant
     can use only one of the allowed policy. Optional.

   ingressOptions       <Object>
     Specifies options for the Ingress resources, such as allowed hostnames
     and IngressClass. Optional.

   limitRanges  <Object>
     Specifies the LimitRange resources assigned to the Tenant. The assigned
     resources are inherited by any namespace created in the Tenant.
     Optional.

   namespaceOptions     <Object>
     Specifies options for the Namespaces, such as additional metadata or
     maximum number of namespaces allowed for that Tenant. Once the namespace
     quota assigned to the Tenant has been reached, the Tenant owner cannot
     create further namespaces. Optional.

   networkPolicies      <Object>
     Specifies the NetworkPolicies assigned to the Tenant. The assigned
     NetworkPolicies are inherited by any namespace created in the Tenant.
     Optional.

   nodeSelector <map[string]string>
     Specifies the label to control the placement of pods on a given pool of
     worker nodes. All namespaces created within the Tenant will have the
     node selector annotation. This annotation tells the Kubernetes scheduler
     to place pods on the nodes having the selector label. Optional.

   owners       <[]Object> -required-
     Specifies the owners of the Tenant. Mandatory.

   priorityClasses      <Object>
     Specifies the allowed priorityClasses assigned to the Tenant. Capsule
     assures that all pods created in the Tenant can use only one of the
     allowed priorityClasses. Optional.

   resourceQuotas       <Object>
     Specifies a list of ResourceQuota resources assigned to the Tenant. The
     assigned values are inherited by any namespace created in the Tenant.
     The Capsule operator aggregates ResourceQuota at Tenant level, so that
     the hard quota is never crossed for the given Tenant. This permits the
     Tenant owner to consume resources in the Tenant regardless of the
     namespace. Optional.

   serviceOptions       <Object>
     Specifies options for the Service, such as additional metadata or block
     of certain type of Services. Optional.

   storageClasses       <Object>
     Specifies the allowed StorageClasses assigned to the Tenant. Capsule
     assures that all PersistentVolumeClaim resources created in the Tenant
     can use only one of the allowed StorageClasses. Optional.
```

and the Tenant status:

```command
kubectl explain tenant.status

KIND:     Tenant
VERSION:  capsule.clastix.io/v1beta1

RESOURCE: status <Object>

DESCRIPTION:
     Returns the observed state of the Tenant

FIELDS:
   namespaces   <[]string>
     List of namespaces assigned to the Tenant.

   size <integer> -required-
     How many namespaces are assigned to the Tenant.

   state        <string> -required-
     The operational state of the Tenant. Possible values are "Active",
     "Cordoned".
```

## Capsule Configuration

The Capsule configuration is driven by a Custom Resource named `CapsuleConfiguration`.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  userGroups: ["capsule.clastix.io"]
  forceTenantPrefix: false
  protectedNamespaceRegex: ""
```

Option | Description | Default
--- | --- | ---
`.spec.forceTenantPrefix` | Force the tenant name as prefix for namespaces: `<tenant_name>-<namespace>`. | `false`
`.spec.userGroups` | Array of Capsule groups to which all tenant owners must belong. | `[capsule.clastix.io]`
`.spec.protectedNamespaceRegex` | Disallows creation of namespaces matching the passed regexp. | `null`

Upon installation using Kustomize or Helm, a `capsule-default` resource is created.
The reference to this configuration is managed by the CLI flag `--configuration-name`.

## Capsule Permissions
In the current implementation, the Capsule operator requires cluster admin permissions to fully operate. Make sure you deploy Capsule with access to the default `cluster-admin` ClusterRole.

## Admission Controllers

Capsule implements Kubernetes multi-tenancy capabilities using a minimal set of standard [Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) enabled on the Kubernetes API server.

Here is the list of required Admission Controllers you have to enable to get full support from Capsule:

* PodNodeSelector
* LimitRanger
* ResourceQuota
* MutatingAdmissionWebhook
* ValidatingAdmissionWebhook
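
As a sketch, the corresponding API server flag would look like the following (the exact set depends on your distribution, and some of these plugins may already be enabled by default):

```
--enable-admission-plugins=PodNodeSelector,LimitRanger,ResourceQuota,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
```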

In addition to the required controllers above, Capsule implements its own set through the [Dynamic Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) mechanism, providing callbacks to add further validation or resource patching.

To see the admission controllers installed by Capsule:

```
$ kubectl get ValidatingWebhookConfiguration
NAME                                       WEBHOOKS   AGE
capsule-validating-webhook-configuration   8          2h

$ kubectl get MutatingWebhookConfiguration
NAME                                     WEBHOOKS   AGE
capsule-mutating-webhook-configuration   1          2h
```

## Command Options

The Capsule operator provides the following command options:

Option | Description | Default
--- | --- | ---
`--metrics-addr` | The address and port where `/metrics` are exposed. | `127.0.0.1:8080`
`--enable-leader-election` | Start a leader election client and gain leadership before executing the main loop. | `true`
`--zap-log-level` | The log verbosity, with a value from 1 to 10 or the basic keywords. | `4`
`--zap-devel` | The flag to get the stack traces for deep debugging. | `null`
`--configuration-name` | The name of the `CapsuleConfiguration` resource to use; a default one is installed automatically. | `capsule-default`

## Created Resources
Once installed, the Capsule operator creates the following resources in your cluster:

```
NAMESPACE        RESOURCE
                 namespace/capsule-system
                 customresourcedefinition.apiextensions.k8s.io/tenants.capsule.clastix.io
                 customresourcedefinition.apiextensions.k8s.io/capsuleconfigurations.capsule.clastix.io
                 clusterrole.rbac.authorization.k8s.io/capsule-proxy-role
                 clusterrole.rbac.authorization.k8s.io/capsule-metrics-reader
                 capsuleconfiguration.capsule.clastix.io/capsule-default
                 mutatingwebhookconfiguration.admissionregistration.k8s.io/capsule-mutating-webhook-configuration
                 validatingwebhookconfiguration.admissionregistration.k8s.io/capsule-validating-webhook-configuration
capsule-system   clusterrolebinding.rbac.authorization.k8s.io/capsule-manager-rolebinding
capsule-system   clusterrolebinding.rbac.authorization.k8s.io/capsule-proxy-rolebinding
capsule-system   secret/capsule-ca
capsule-system   secret/capsule-tls
capsule-system   service/capsule-controller-manager-metrics-service
capsule-system   service/capsule-webhook-service
capsule-system   deployment.apps/capsule-controller-manager
```

52
docs/content/operator/use-cases/cordoning-tenant.md
Normal file
@@ -0,0 +1,52 @@
# Cordoning a Tenant

Bill needs to cordon a Tenant and its Namespaces for several reasons:

- Avoid accidental resource modification, including deletion, during a Production Freeze Window
- During a Kubernetes upgrade, to prevent any workload updates
- During incidents or outages
- During planned maintenance of a dedicated node pool in a BYOD scenario

When a Tenant is cordoned, the Tenant Owner, as well as any ServiceAccount living in the managed Namespaces, cannot perform any create, update, or delete action.

This is done by simply labeling the Tenant as follows:

```shell
kubectl label tenant oil capsule.clastix.io/cordon=enabled
tenant oil labeled
```

Any operation performed by Alice, the Tenant Owner, will be rejected:

```shell
$ kubectl --as alice --as-group capsule.clastix.io -n oil-dev create deployment nginx --image nginx
error: failed to create deployment: admission webhook "cordoning.tenant.capsule.clastix.io" denied the request: tenant oil is frozen: please, reach out to the system administrator

$ kubectl --as alice --as-group capsule.clastix.io -n oil-dev delete ingress,deployment,serviceaccount --all
error: failed to create deployment: admission webhook "cordoning.tenant.capsule.clastix.io" denied the request: tenant oil is frozen: please, reach out to the system administrator
```

Uncordoning can be done by removing the said label:

```shell
$ kubectl label tenant oil capsule.clastix.io/cordon-
tenant.capsule.clastix.io/oil labeled

$ kubectl --as alice --as-group capsule.clastix.io -n oil-dev create deployment nginx --image nginx
deployment.apps/nginx created
```

The cordoning status is also reported in the `state` of the tenant:

```shell
kubectl get tenants
NAME     STATE      NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR   AGE
bronze   Active                       2                                 3d13h
gold     Active                       2                                 3d13h
oil      Cordoned                     4                                 2d11h
silver   Active                       2                                 3d13h
```

# What’s next

See how Bill, the cluster admin, can prevent creating services with specific service types. [Disabling Service Types](/docs/operator/use-cases/service-type).
110
docs/content/operator/use-cases/create-namespaces.md
Normal file
@@ -0,0 +1,110 @@
# Create namespaces
Alice, once logged in with her credentials, can create a new namespace in her tenant by simply issuing:

```
kubectl create ns oil-production
```

Alice prefixed the namespace name with the name of the tenant: this is not a strict requirement, but it is highly suggested because it is likely that many different tenants would like to call their namespaces `production`, `test`, `demo`, etc.

The enforcement of this naming convention is optional and can be controlled by the cluster administrator with the `--force-tenant-prefix` option as an argument of the Capsule controller.
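
The same enforcement is also exposed as the `forceTenantPrefix` field of the cluster-wide `CapsuleConfiguration` resource. A sketch of turning it on, assuming the default configuration object created at install time:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  # reject namespaces not named <tenant_name>-<namespace>
  forceTenantPrefix: true
```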

When Alice creates the namespace, the Capsule controller, listening for creation and deletion events, assigns to Alice the following roles:

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace:admin
  namespace: oil-production
subjects:
- kind: User
  name: alice
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-deleter
  namespace: oil-production
subjects:
- kind: User
  name: alice
roleRef:
  kind: ClusterRole
  name: capsule-namespace-deleter
  apiGroup: rbac.authorization.k8s.io
```

So Alice is the admin of the namespace:

```
kubectl get rolebindings -n oil-production
NAME                ROLE                                    AGE
namespace:admin     ClusterRole/admin                       12s
namespace-deleter   ClusterRole/capsule-namespace-deleter   12s
```

These RoleBinding resources are automatically created by the Capsule controller when the tenant owner Alice creates a namespace in the tenant.

Alice can deploy any resource in the namespace, according to the predefined
[`admin` cluster role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles).

```
kubectl -n oil-production run nginx --image=docker.io/nginx
kubectl -n oil-production get pods
```

Bill, the cluster admin, can control how many namespaces Alice can create by setting a quota in the tenant manifest `spec.namespaceOptions.quota`

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  namespaceOptions:
    quota: 3
```

Alice can create additional namespaces according to the quota:

```
kubectl create ns oil-development
kubectl create ns oil-test
```

While Alice creates namespaces, the Capsule controller updates the status of the tenant so Bill, the cluster admin, can check it:

```
kubectl describe tenant oil
```

```yaml
...
status:
  Namespaces:
    oil-development
    oil-production
    oil-test
  size: 3  # current namespace count
...
```

Once the namespace quota assigned to the tenant has been reached, Alice cannot create further namespaces

```
kubectl create ns oil-training
Error from server (Cannot exceed Namespace quota: please, reach out to the system administrators): admission webhook "namespace.capsule.clastix.io" denied the request.
```
The enforcement of the maximum number of namespaces per Tenant is the responsibility of the Capsule controller via its Dynamic Admission Webhook capability.

# What’s next
See how Alice, the tenant owner, can assign different user roles in the tenant. [Assign permissions](/docs/operator/use-cases/permissions).
78
docs/content/operator/use-cases/custom-resources.md
Normal file
@@ -0,0 +1,78 @@
# Create Custom Resources

Capsule grants admin permissions to the tenant owners, limited to their namespaces. To achieve that, it assigns the ClusterRole [admin](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) to the tenant owner. This ClusterRole does not permit the installation of custom resources in the namespaces.

To allow the tenant owner to create Custom Resources in their namespaces, the cluster admin defines a proper ClusterRole. For example:

```yaml
kubectl apply -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argoproj-provisioner
rules:
- apiGroups:
  - argoproj.io
  resources:
  - applications
  - appprojects
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - patch
  - delete
EOF
```

Bill can assign this role to any namespace in Alice's tenant by setting it in the tenant manifest:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  - name: joe
    kind: User
  additionalRoleBindings:
  - clusterRoleName: 'argoproj-provisioner'
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: alice
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: joe
EOF
```

With the given specification, Capsule will ensure that all Alice's namespaces contain a _RoleBinding_ for the specified _ClusterRole_. For example, in the `oil-production` namespace, Alice will see:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: capsule-oil-argoproj-provisioner
  namespace: oil-production
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: alice
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argoproj-provisioner
```

With the above configuration, Capsule allows the tenant owner to create namespaced custom resources.
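For instance, assuming the Argo CD CRDs have already been installed at cluster level by Bill, Alice could now create an `AppProject` in one of her namespaces. A minimal, illustrative sketch (resource name is hypothetical):

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: oil-apps              # hypothetical project name
  namespace: oil-production
spec:
  description: Applications of the oil tenant
  sourceRepos:
  - '*'
EOF
```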
> Take note: a tenant owner having the admin scope on their namespaces only does not have permission to create Custom Resource Definitions (CRDs), because this requires cluster admin permissions. Only Bill, the cluster admin, can create CRDs. This is a known limitation of any multi-tenancy setup based on a single Kubernetes cluster.

# What’s next

See how Bill, the cluster admin, can set taints on Alice's namespaces. [Taint namespaces](/docs/operator/use-cases/taint-namespaces).
29
docs/content/operator/use-cases/deny-wildcard-hostnames.md
Normal file
@@ -0,0 +1,29 @@
# Deny Wildcard Hostnames

Bill, the cluster admin, can deny the use of wildcard hostnames.

Let's assume a big organization owning the domain `bigorg.com`, with two tenants, `gas` and `oil`.

As the tenant owner of `gas`, Alice could create an Ingress with a host like `- host: "*.bigorg.com"`. That can lead to big problems for the `oil` tenant, because Alice could deliberately capture traffic directed to hosts such as `oil.bigorg.com`.

To avoid this kind of problem, Bill can deny the use of wildcard hostnames in the following way:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
  annotations:
    capsule.clastix.io/deny-wildcard: "true"
spec:
  owners:
  - name: alice
    kind: User
EOF
```

Doing this, Alice, as the tenant owner of `gas`, will no longer be able to create Ingresses with wildcard hostnames such as `*.bigorg.com`.
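A sketch of an Ingress that would now be rejected by the webhook (resource and service names are illustrative):

```yaml
kubectl -n gas-production apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: catch-all
  namespace: gas-production
spec:
  rules:
  - host: "*.bigorg.com"   # wildcard host: denied by the deny-wildcard annotation
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```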
# What’s next

See how Bill, the cluster admin, can protect specific labels and annotations on Nodes from modifications by Tenant Owners. [Denying specific user-defined labels or annotations on Nodes](/docs/operator/use-cases/node-labels-and-annotations).
80
docs/content/operator/use-cases/hostname-collision.md
Normal file
@@ -0,0 +1,80 @@
# Control hostname collision in ingresses

In a multi-tenant environment, as more and more Ingresses are defined, there is a chance of hostname collisions leading to unpredictable behavior of the Ingress Controller. Bill, the cluster admin, can enforce hostname collision detection at different scope levels:

1. Cluster
2. Tenant
3. Namespace
4. Disabled (default)

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  - name: joe
    kind: User
  ingressOptions:
    hostnameCollisionScope: Tenant
EOF
```

When a tenant owner creates an Ingress resource, Capsule checks the hostnames in the new Ingress against all the hostnames already in use, within the defined scope.

For example, Alice, one of the tenant owners, creates an Ingress:

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: oil-production
spec:
  rules:
  - host: web.oil.acmecorp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```

Another user, Joe, creates an Ingress having the same hostname:

```yaml
kubectl -n oil-development apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: oil-development
spec:
  rules:
  - host: web.oil.acmecorp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```

When a collision is detected at the scope defined by `spec.ingressOptions.hostnameCollisionScope`, the creation of the Ingress resource is rejected by the Validation Webhook enforcing it. When `hostnameCollisionScope=Disabled`, no collision detection is performed at all.

# What’s next

See how Bill, the cluster admin, can assign a Storage Class to Alice's tenant. [Assign Storage Classes](/docs/operator/use-cases/storage-classes).
32
docs/content/operator/use-cases/images-pullpolicy.md
Normal file
@@ -0,0 +1,32 @@
# Enforcing Pod containers image PullPolicy

Bill is a cluster admin providing a Container as a Service platform using shared nodes.

Alice, a Tenant Owner, can start containers using private images: according to the Kubernetes architecture, the `kubelet` downloads the image layers into its local cache.

Bob, an attacker, could try to schedule a Pod on the same node where Alice is running her Pods backed by private images: by setting `imagePullPolicy=IfNotPresent`, he could start Pods from those images even without the required authentication, since the image is already cached on the node.

To prevent this kind of attack, Bill, the cluster admin, can force Alice, the tenant owner, to start her Pods using only the allowed values for `imagePullPolicy`, forcing the `kubelet` to check the authorization first:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  imagePullPolicies:
  - Always
EOF
```

Allowed values are: `Always`, `IfNotPresent`, `Never`.

Any attempt by Alice to use a disallowed `imagePullPolicy` value is denied by the Validation Webhook enforcing it.
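A sketch of a Pod that would be rejected under the tenant above (names and image are hypothetical):

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: backend              # illustrative name
  namespace: oil-production
spec:
  containers:
  - name: backend
    image: registry.internal/backend:latest   # hypothetical private image
    imagePullPolicy: IfNotPresent             # not in the allowed list: request denied
EOF
```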
# What’s next

See how Bill, the cluster admin, can assign trusted images registries to Alice's tenant. [Assign Trusted Images Registries](/docs/operator/use-cases/images-registries).
34
docs/content/operator/use-cases/images-registries.md
Normal file
@@ -0,0 +1,34 @@
# Assign Trusted Images Registries

Bill, the cluster admin, can set a strict policy on the applications running in Alice's tenant: he'd like to allow running only images hosted on a list of specific container registries.

The `containerRegistries` spec addresses this task and provides hard enforcement through a list of exactly allowed values and/or a regular expression:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  containerRegistries:
    allowed:
    - docker.io
    - quay.io
    allowedRegex: 'internal.registry.\w+.tld'
EOF
```

> In case of a Pod running non-FQCI (non fully qualified container image) containers, the container registry enforcement will disallow the execution.
> If you would like to run a `busybox:latest` container, which is commonly hosted on Docker Hub, the Tenant Owner has to specify its name explicitly, like `docker.io/library/busybox:latest`.

A Pod running `internal.registry.foo.tld/capsule:latest` will be allowed, as well as one from `internal.registry.bar.tld`, since both registries match the regular expression.

> A catch-all regex entry such as `.*` allows every registry, which gives the same result as leaving `containerRegistries` unset.

Any attempt by Alice to use a registry that is not allowed is denied by the Validation Webhook enforcing it.
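For example, a Pod pulling from a registry outside the allowed list would be rejected; a sketch (names and registry are hypothetical):

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: webapp                     # illustrative name
  namespace: oil-production
spec:
  containers:
  - name: webapp
    image: gcr.io/acme/webapp:1.0  # gcr.io is not in the allowed list: request denied
EOF
```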
# What’s next

See how Bill, the cluster admin, can assign Pod Security Policies to Alice's tenant. [Assign Pod Security Policies](/docs/operator/use-cases/pod-security-policies).
52
docs/content/operator/use-cases/ingress-classes.md
Normal file
@@ -0,0 +1,52 @@
# Assign Ingress Classes

An Ingress Controller is used in Kubernetes to publish services and applications outside of the cluster. An Ingress Controller can be provisioned to accept only Ingresses with a given Ingress Class.

Bill can assign a set of dedicated Ingress Classes to the `oil` tenant to force the applications in the `oil` tenant to be published only by the assigned Ingress Controller:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  ingressOptions:
    allowedClasses:
      allowed:
      - default
      allowedRegex: ^\w+-lb$
EOF
```

Capsule ensures that all Ingresses created in the tenant use only one of the valid Ingress Classes.

Alice can create an Ingress using only an allowed Ingress Class:

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: oil-production
  annotations:
    kubernetes.io/ingress.class: default
spec:
  rules:
  - host: oil.acmecorp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```

Any attempt by Alice to use an invalid Ingress Class, or to omit it, is denied by the Validation Webhook enforcing it.
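On the `networking.k8s.io/v1` API, the class can also be declared through the `spec.ingressClassName` field instead of the legacy annotation; recent Capsule versions validate this field as well. A sketch:

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: oil-production
spec:
  ingressClassName: default   # equivalent to the kubernetes.io/ingress.class annotation
  rules:
  - host: oil.acmecorp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```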
# What’s next

See how Bill, the cluster admin, can assign a set of dedicated ingress hostnames to Alice's tenant. [Assign Ingress Hostnames](/docs/operator/use-cases/ingress-hostnames).
53
docs/content/operator/use-cases/ingress-hostnames.md
Normal file
@@ -0,0 +1,53 @@
# Assign Ingress Hostnames

Bill can control the ingress hostnames in the `oil` tenant to force the applications to be published only using the given hostname or set of hostnames:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  ingressOptions:
    allowedHostnames:
      allowed:
      - oil.acmecorp.com
      allowedRegex: ^.*acmecorp.com$
EOF
```

The Capsule controller ensures that all Ingresses created in the tenant use only one of the valid hostnames.

Alice can create an Ingress using any allowed hostname:

```yaml
kubectl apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: oil-production
  annotations:
    kubernetes.io/ingress.class: oil
spec:
  rules:
  - host: web.oil.acmecorp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```

Any attempt by Alice to use an invalid hostname is denied by the Validation Webhook enforcing it.

# What’s next

See how Bill, the cluster admin, can control hostname collisions in Ingresses. [Control hostname collision in ingresses](/docs/operator/use-cases/hostname-collision).
94
docs/content/operator/use-cases/multiple-tenants.md
Normal file
@@ -0,0 +1,94 @@
# Assign multiple tenants to an owner

In some scenarios, a single team is likely responsible for multiple lines of business. For example, in our sample organization Acme Corp., Alice is responsible for both the Oil and Gas lines of business. It's likely that Alice requires two different tenants, for example `oil` and `gas`, to keep things isolated.

By design, the Capsule operator does not permit a hierarchy of tenants, since all tenants are at the same level. However, we can assign the ownership of multiple tenants to the same user or group of users.

Bill, the cluster admin, creates multiple tenants having `alice` as owner:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
EOF
```

and

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  owners:
  - name: alice
    kind: User
EOF
```

Alternatively, the ownership can be assigned to a group called `oil-and-gas`:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: oil-and-gas
    kind: Group
EOF
```

and

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  owners:
  - name: oil-and-gas
    kind: Group
EOF
```

The two tenants remain isolated from each other in terms of resource assignments, e.g. _ResourceQuota_, _Node Pools_, _Storage Classes_ and _Ingress Classes_, and in terms of governance, e.g. _NetworkPolicies_, _PodSecurityPolicies_, _Trusted Registries_, etc.

When Alice logs in, she has access to all namespaces belonging to both the `oil` and `gas` tenants:

```
kubectl create ns oil-production
kubectl create ns gas-production
```

When enforcement of the naming convention through the `--force-tenant-prefix` option is enabled, the namespaces are automatically assigned to the right tenant, because the operator does a lookup on the tenant names. If the `--force-tenant-prefix` option is not set, Alice needs to specify the tenant name as the label `capsule.clastix.io/tenant=<desired_tenant>` in the namespace manifest:

```yaml
kubectl apply -f - << EOF
kind: Namespace
apiVersion: v1
metadata:
  name: gas-production
  labels:
    capsule.clastix.io/tenant: gas
EOF
```

> If not specified, Capsule will deny the request with the following message:
> `Unable to assign namespace to tenant. Please use capsule.clastix.io/tenant label when creating a namespace.`

# What’s next

See how Bill, the cluster admin, can cordon all the Namespaces belonging to a Tenant. [Cordoning a Tenant](/docs/operator/use-cases/cordoning-tenant).
@@ -0,0 +1,28 @@
# Denying specific user-defined labels or annotations on Namespaces

By default, Capsule allows tenant owners to add and modify any label or annotation on their namespaces.

But there are some scenarios where tenant owners should not be able to add or modify specific labels or annotations, for example labels used in [Kubernetes network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) which are added by the cluster administrator.

Bill, the cluster admin, can deny Alice the ability to add specific labels and annotations on namespaces:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
  annotations:
    capsule.clastix.io/forbidden-namespace-labels: foo.acme.net,bar.acme.net
    capsule.clastix.io/forbidden-namespace-labels-regexp: .*.acme.net
    capsule.clastix.io/forbidden-namespace-annotations: foo.acme.net,bar.acme.net
    capsule.clastix.io/forbidden-namespace-annotations-regexp: .*.acme.net
spec:
  owners:
  - name: alice
    kind: User
EOF
```
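With these annotations in place, an attempt by Alice to set one of the forbidden keys on a namespace would be rejected by the webhook; a sketch (namespace name is illustrative):

```yaml
kubectl apply -f - << EOF
kind: Namespace
apiVersion: v1
metadata:
  name: oil-marketing      # illustrative namespace name
  labels:
    foo.acme.net: "true"   # forbidden label key: the request is denied
EOF
```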
# What’s next

Let's check out how to restore Tenants after a Velero backup. [Velero Backup Restoration](/docs/operator/use-cases/velero-backup-restoration).
102
docs/content/operator/use-cases/network-policies.md
Normal file
@@ -0,0 +1,102 @@
# Assign Network Policies

Kubernetes network policies control network traffic between namespaces and between pods in the same namespace. Bill, the cluster admin, can enforce network traffic isolation between different tenants while leaving to Alice, the tenant owner, the freedom to set isolation between namespaces in the same tenant or even between pods in the same namespace.

To meet this requirement, Bill needs to define network policies that prevent pods belonging to Alice's namespaces from accessing pods in namespaces belonging to other tenants, e.g. Bob's tenant `water`, or in system namespaces, e.g. `kube-system`.

Also, Bill can make sure pods belonging to a tenant namespace cannot access other network infrastructure like cluster nodes, load balancers, and virtual machines running other services.

Bill can set network policies in the tenant manifest, according to the requirements:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  networkPolicies:
    items:
    - policyTypes:
      - Ingress
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
            - 192.168.0.0/16
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              capsule.clastix.io/tenant: oil
        - podSelector: {}
        - ipBlock:
            cidr: 192.168.0.0/16
      podSelector: {}
EOF
```

The Capsule controller, watching for namespace creation, creates the network policies for each namespace in the tenant.

Alice has access to network policies:

```
kubectl -n oil-production get networkpolicies
NAME            POD-SELECTOR   AGE
capsule-oil-0   <none>         42h
```

Alice can create, patch, and delete additional network policies within her namespaces:

```
kubectl -n oil-production auth can-i get networkpolicies
yes

kubectl -n oil-production auth can-i delete networkpolicies
yes

kubectl -n oil-production auth can-i patch networkpolicies
yes
```

For example, she can create:

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: production-network-policy
  namespace: oil-production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF
```

Check all the network policies:

```
kubectl -n oil-production get networkpolicies
NAME                        POD-SELECTOR   AGE
capsule-oil-0               <none>         42h
production-network-policy   <none>         3m
```

And delete the namespaced network policies:

```
kubectl -n oil-production delete networkpolicy production-network-policy
```

Any attempt by Alice to delete the tenant network policy defined in the tenant manifest is denied by the Validation Webhook enforcing it.

# What’s next

See how Bill can enforce the Pod containers image pull policy to `Always` to avoid leaking private images when running on shared nodes. [Enforcing Pod containers image PullPolicy](/docs/operator/use-cases/images-pullpolicy).
@@ -0,0 +1,41 @@
# Denying specific user-defined labels or annotations on Nodes

When using Capsule together with [capsule-proxy](https://github.com/clastix/capsule-proxy), Bill can allow Tenant Owners to [modify Nodes](/docs/proxy/overview).

By default, tenant owners are then allowed to add and modify any label or annotation on their nodes.

But there are some scenarios where tenant owners should not be able to add or modify specific labels or annotations: some labels and annotations must be protected from modification, for example those set by cloud providers or autoscalers.

Bill, the cluster admin, can deny Tenant Owners the ability to add or modify specific labels and annotations on Nodes:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1alpha1
kind: CapsuleConfiguration
metadata:
  name: default
  annotations:
    capsule.clastix.io/forbidden-node-labels: foo.acme.net,bar.acme.net
    capsule.clastix.io/forbidden-node-labels-regexp: .*.acme.net
    capsule.clastix.io/forbidden-node-annotations: foo.acme.net,bar.acme.net
    capsule.clastix.io/forbidden-node-annotations-regexp: .*.acme.net
spec:
  userGroups:
  - capsule.clastix.io
  - system:serviceaccounts:default
EOF
```

> **Important note**
>
> Due to [CVE-2021-25735](https://github.com/kubernetes/kubernetes/issues/100096), this feature is only supported for Kubernetes versions older than:
> * v1.18.18
> * v1.19.10
> * v1.20.6
> * v1.21.0

# What’s next

This ends our tour of Capsule use cases. As we improve Capsule, more use cases about multi-tenancy, policy admission control, and cluster governance will be covered in the future.

Stay tuned!
64
docs/content/operator/use-cases/nodes-pool.md
Normal file
@@ -0,0 +1,64 @@
# Assign a Node Pool

Bill, the cluster admin, can dedicate a pool of worker nodes to the `oil` tenant, to isolate the tenant applications from other noisy neighbors.

These nodes are labeled by Bill as `pool=oil`:

```
kubectl get nodes --show-labels

NAME                      STATUS   ROLES    AGE   VERSION   LABELS
...
worker06.acme.com         Ready    worker   8d    v1.18.2   pool=oil
worker07.acme.com         Ready    worker   8d    v1.18.2   pool=oil
worker08.acme.com         Ready    worker   8d    v1.18.2   pool=oil
```

The label `pool=oil` is defined as a node selector in the tenant manifest:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  nodeSelector:
    pool: oil
    kubernetes.io/os: linux
EOF
```

The Capsule controller makes sure that any namespace created in the tenant has the annotation `scheduler.alpha.kubernetes.io/node-selector: pool=oil`. This annotation instructs the `PodNodeSelector` admission controller to add the node selector `pool=oil` to all the pods deployed in the tenant's namespaces. The effect is that all the pods deployed by Alice are placed only on the designated pool of nodes.
Multiple node selector labels can be defined as in the following snippet:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  nodeSelector:
    pool: oil
    kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    hardware: gpu
```

Any attempt by Alice to change the selector on the pods will result in an error from the `PodNodeSelector` admission controller plugin.

Also, RBAC prevents Alice from changing the annotation on the namespace:

```
kubectl auth can-i edit ns -n oil-production
no
```

# What’s next

See how Bill, the cluster admin, can assign an Ingress Class to Alice's tenant. [Assign Ingress Classes](/docs/operator/use-cases/ingress-classes).
51
docs/content/operator/use-cases/overview.md
Normal file
@@ -0,0 +1,51 @@
# Use cases for Capsule

Using Capsule, a cluster admin can implement complex multi-tenant scenarios for both public and private deployments. Here is a list of common scenarios addressed by Capsule.

# Container as a Service (CaaS)

***Acme Corp***, our sample organization, built a Container as a Service (CaaS) platform based on Kubernetes to serve multiple lines of business. Each line of business has its own team of engineers responsible for the development, deployment, and operation of their digital products.

To simplify the usage of Capsule in this scenario, we'll work with the following actors:

* ***Bill***:
  the cluster administrator from the operations department of Acme Corp., in charge of administering and maintaining the CaaS platform.

* ***Alice***:
  the IT Project Leader of the Oil & Gas Business Units, two new lines of business at Acme Corp. Alice is responsible for all the strategic IT projects in the two LOBs. She also leads a team with different job roles (developers, administrators, SRE engineers, etc.) working in separate departments.

* ***Joe***:
  a lead developer of a distributed team in Alice's organization at Acme Corp. Joe is responsible for developing a mission-critical project in the Oil market.

* ***Bob***:
  the head of Engineering for the Water Business Unit, the main and historical line of business at Acme Corp. He is responsible for the development, deployment, and operation of multiple digital products in production for a large set of customers.

Use Capsule to address any of the following scenarios:

* [Assign Tenant Ownership](/docs/operator/use-cases/tenant-ownership)
* [Create Namespaces](/docs/operator/use-cases/create-namespaces)
* [Assign Permissions](/docs/operator/use-cases/permissions)
* [Enforce Resources Quotas and Limits](/docs/operator/use-cases/resources-quota-limits)
* [Enforce Pod Priority Classes](/docs/operator/use-cases/pod-priority-classes)
* [Assign specific Node Pools](/docs/operator/use-cases/nodes-pool)
* [Assign Ingress Classes](/docs/operator/use-cases/ingress-classes)
* [Assign Ingress Hostnames](/docs/operator/use-cases/ingress-hostnames)
* [Control hostname collision in Ingresses](/docs/operator/use-cases/hostname-collision)
* [Assign Storage Classes](/docs/operator/use-cases/storage-classes)
* [Assign Network Policies](/docs/operator/use-cases/network-policies)
* [Enforce Containers image PullPolicy](/docs/operator/use-cases/images-pullpolicy)
* [Assign Trusted Images Registries](/docs/operator/use-cases/images-registries)
* [Assign Pod Security Policies](/docs/operator/use-cases/pod-security-policies)
* [Create Custom Resources](/docs/operator/use-cases/custom-resources)
* [Taint Namespaces](/docs/operator/use-cases/taint-namespaces)
* [Assign multiple Tenants](/docs/operator/use-cases/multiple-tenants)
* [Cordon Tenants](/docs/operator/use-cases/cordoning-tenant)
* [Disable Service Types](/docs/operator/use-cases/service-type)
* [Taint Services](/docs/operator/use-cases/taint-services)
* [Allow adding labels and annotations on namespaces](/docs/operator/use-cases/namespace-labels-and-annotations)
* [Velero Backup Restoration](/docs/operator/use-cases/velero-backup-restoration)
* [Deny Wildcard Hostnames](/docs/operator/use-cases/deny-wildcard-hostnames)
* [Denying specific user-defined labels or annotations on Nodes](/docs/operator/use-cases/deny-specific-user-defined-labels-or-annotations-on-nodes)

> NB: as we improve Capsule, more use cases about multi-tenancy and cluster governance will be covered.

# What’s next

Now let's see how the cluster admin onboards a new tenant. [Onboarding a new tenant](/docs/operator/use-cases/onboarding).
54
docs/content/operator/use-cases/permissions.md
Normal file
@@ -0,0 +1,54 @@
# Assign permissions

Alice acts as the tenant admin. Other users can operate inside the tenant with different levels of permissions and authorizations. Alice is responsible for creating additional roles and assigning them to the other users working in the same tenant.

One of the key design principles of Capsule is self-provisioning from the tenant owner's perspective. Alice, the tenant owner, does not need to interact with Bill, the cluster admin, to complete her day-to-day duties. On the other hand, Bill does not have to deal with a flood of requests coming from multiple tenant owners.

Capsule leaves Alice, and the other tenant owners, the freedom to create RBAC roles at the namespace level, or to use the pre-defined cluster roles already available in Kubernetes. Since Roles and RoleBindings are namespace-scoped, Alice can assign roles to the other users accessing the same tenant only after the namespace is created. This gives Alice the power to administer the tenant without the intervention of the cluster admin.

From the cluster admin's perspective, the only required action for Bill is to provision the other identities, e.g. `joe`, in the Identity Management system. This task can be done once, when onboarding the tenant, and the number of users accessing the tenant can be part of the tenant's business profile.

Alice can create Roles and RoleBindings only in the namespaces she owns:

```
kubectl auth can-i get roles -n oil-development
yes

kubectl auth can-i get rolebindings -n oil-development
yes
```

so she can assign the admin role for the namespace `oil-development` to Joe, another user accessing the tenant `oil`:

```yaml
kubectl --as alice --as-group capsule.clastix.io apply -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oil-development:admin
  namespace: oil-development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: joe
EOF
```

Joe can now operate on the namespace `oil-development` as admin, but he has no access to the other namespaces `oil-production` and `oil-test` that are part of the same tenant:

```
kubectl --as joe --as-group capsule.clastix.io auth can-i create pod -n oil-development
yes

kubectl --as joe --as-group capsule.clastix.io auth can-i create pod -n oil-production
no
```

> Please note that the user `joe`, in the example above, is not acting as tenant owner. He can only operate in the `oil-development` namespace as admin.
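Because Roles are namespaced, Alice could also define a finer-grained Role herself instead of binding the built-in `admin` ClusterRole; a minimal sketch (the role name and rules are illustrative):

```yaml
kubectl --as alice --as-group capsule.clastix.io apply -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-viewer
  namespace: oil-development
rules:
# Read-only access to Deployments in this namespace only.
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
EOF
```

She can then bind this Role to Joe with a RoleBinding in the same namespace, keeping his permissions narrower than the full admin profile.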

# What’s next

See how Bill, the cluster admin, sets resource quotas and limits for Alice's tenant: [Enforce resources quota and limits](/docs/operator/use-cases/resources-quota-limits).
35
docs/content/operator/use-cases/pod-priority-classes.md
Normal file
@@ -0,0 +1,35 @@
# Enforcing Pod Priority Classes

Pods can have priority. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower-priority Pods to make scheduling of the pending Pod possible. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/).

In a multi-tenant cluster, not all users can be trusted: a tenant owner could create Pods at the highest possible priorities, causing other Pods to be evicted or to never get scheduled.

To prevent misuse of Pod Priority Classes, Bill, the cluster admin, can enforce the allowed Pod Priority Classes at the tenant level:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  priorityClasses:
    allowed:
    - default
    allowedRegex: "^tier-.*$"
EOF
```

With this Tenant specification, Alice can create a Pod resource if `spec.priorityClassName` equals:

- `default`
- `tier-gold`, `tier-silver`, or `tier-bronze`, since these match the allowed regex.

If a Pod uses a non-allowed _Priority Class_, it will be rejected by the Validation Webhook enforcing it.
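For example, a Pod requesting one of the allowed classes passes validation; a sketch, assuming the `tier-gold` PriorityClass exists in the cluster and that Alice has already created the `oil-production` namespace:

```yaml
kubectl --as alice --as-group capsule.clastix.io apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: oil-production
spec:
  # Matches the tenant's allowedRegex "^tier-.*$"
  priorityClassName: tier-gold
  containers:
  - name: nginx
    image: nginx:latest
EOF
```

The same manifest with, say, `priorityClassName: system-cluster-critical` would be rejected by the webhook, since it matches neither the allowed list nor the regex.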

# What’s next

See how Bill, the cluster admin, can assign a pool of nodes to Alice's tenant: [Assign a nodes pool](/docs/operator/use-cases/nodes-pool).
83
docs/content/operator/use-cases/pod-security-policies.md
Normal file
@@ -0,0 +1,83 @@
# Assign Pod Security Policies

Bill, the cluster admin, can assign a dedicated Pod Security Policy (PSP) to Alice's tenant. This is likely to be a requirement in a multi-tenant environment.

The cluster admin creates a PSP:

```yaml
kubectl apply -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp:restricted
spec:
  privileged: false
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
  ...
EOF
```

Then he creates a _ClusterRole_ granting the use of the said PSP:

```yaml
kubectl apply -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp:restricted
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['psp:restricted']
  verbs: ['use']
EOF
```

Bill can assign this role to all namespaces in Alice's tenant by setting it in the tenant manifest:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  additionalRoleBindings:
  - clusterRoleName: psp:restricted
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
EOF
```

With the given specification, Capsule will ensure that all Alice's namespaces contain a _RoleBinding_ for the specified _Cluster Role_.

For example, in the `oil-production` namespace, Alice will see:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: 'capsule-oil-psp:restricted'
  namespace: oil-production
  labels:
    capsule.clastix.io/role-binding: a10c4c8c48474963
    capsule.clastix.io/tenant: oil
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: 'system:authenticated'
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: 'psp:restricted'
```

With the above example, Capsule forbids any authenticated user in the `oil-production` namespace to run privileged pods or to perform privilege escalation, as declared by the Cluster Role `psp:restricted`.
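Assuming the PodSecurityPolicy admission controller is enabled on the cluster, an attempt by Alice to run a privileged Pod in the tenant would then fail admission; a hypothetical example:

```yaml
kubectl --as alice --as-group capsule.clastix.io apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
  namespace: oil-production
spec:
  containers:
  - name: shell
    image: busybox:latest
    securityContext:
      # Forbidden by the restricted PSP assigned to the tenant.
      privileged: true
EOF
```

The request is denied because no PSP usable by Alice allows `privileged: true`.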

# What’s next

See how Bill, the cluster admin, can assign Alice the permissions to create custom resources in her tenant: [Create Custom Resources](/docs/operator/use-cases/custom-resources).
252
docs/content/operator/use-cases/resources-quota-limits.md
Normal file
@@ -0,0 +1,252 @@
# Enforce resources quota and limits

With the help of Capsule, Bill, the cluster admin, can set and enforce resource quotas and limits for Alice's tenant.

## Resources quota

Set resource quotas for each namespace in Alice's tenant by defining them in the tenant spec:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  namespaceOptions:
    quota: 3
  resourceQuotas:
    scope: Tenant
    items:
    - hard:
        limits.cpu: "8"
        limits.memory: 16Gi
        requests.cpu: "8"
        requests.memory: 16Gi
    - hard:
        pods: "10"
  limitRanges:
    items:
    - limits:
      - default:
          cpu: 500m
          memory: 512Mi
        defaultRequest:
          cpu: 100m
          memory: 10Mi
        type: Container
EOF
```

The resource quotas above will be inherited by all the namespaces created by Alice. In our case, when Alice creates the namespace `oil-production`, Capsule creates the following resource quotas:

```yaml
kind: ResourceQuota
apiVersion: v1
metadata:
  name: capsule-oil-0
  namespace: oil-production
  labels:
    tenant: oil
spec:
  hard:
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.cpu: "8"
    requests.memory: 16Gi
---
kind: ResourceQuota
apiVersion: v1
metadata:
  name: capsule-oil-1
  namespace: oil-production
  labels:
    tenant: oil
spec:
  hard:
    pods: "10"
```

Alice can create any resource according to the assigned quotas:

```
kubectl -n oil-production create deployment nginx --image nginx:latest --replicas 4
```

At the `oil-production` namespace level, Alice can see the used resources by inspecting the `status` of the ResourceQuota:

```yaml
kubectl -n oil-production get resourcequota capsule-oil-1 -o yaml
...
status:
  hard:
    pods: "10"
    services: "50"
  used:
    pods: "4"
```

At the tenant level, the behaviour is controlled by the `spec.resourceQuotas.scope` value:

* Tenant (default)
* Namespace

### Enforcement at tenant level

By setting enforcement at the tenant level, i.e. `spec.resourceQuotas.scope=Tenant`, Capsule aggregates resource usage for all namespaces in the tenant and adjusts all the `ResourceQuota` usages as an aggregate. In such a case, Alice can check the used resources at the tenant level by inspecting the `annotations` of the ResourceQuota object in any namespace of the tenant:

```yaml
kubectl -n oil-production get resourcequotas capsule-oil-1 -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    quota.capsule.clastix.io/used-pods: "4"
    quota.capsule.clastix.io/hard-pods: "10"
...
```

or

```yaml
kubectl -n oil-development get resourcequotas capsule-oil-1 -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    quota.capsule.clastix.io/used-pods: "4"
    quota.capsule.clastix.io/hard-pods: "10"
...
```

When the aggregate usage for all namespaces crosses the hard quota, the native `ResourceQuota` Admission Controller in Kubernetes denies Alice's requests to create resources exceeding the quota:

```
kubectl -n oil-development create deployment nginx --image nginx:latest --replicas 10
```

Alice cannot schedule more pods than admitted at the tenant aggregate level:

```
kubectl -n oil-development get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-55649fd747-6fzcx   1/1     Running   0          12s
nginx-55649fd747-7q6x6   1/1     Running   0          12s
nginx-55649fd747-86wr5   1/1     Running   0          12s
nginx-55649fd747-h6kbs   1/1     Running   0          12s
nginx-55649fd747-mlhlq   1/1     Running   0          12s
nginx-55649fd747-t48s5   1/1     Running   0          7s
```

and

```
kubectl -n oil-production get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-55649fd747-52fsq   1/1     Running   0          22m
nginx-55649fd747-9q8n5   1/1     Running   0          22m
nginx-55649fd747-r8vzr   1/1     Running   0          22m
nginx-55649fd747-tkv7m   1/1     Running   0          22m
```

### Enforcement at namespace level

By setting enforcement at the namespace level, i.e. `spec.resourceQuotas.scope=Namespace`, Capsule does not aggregate the resource usage and all enforcement is done at the namespace level.
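For instance, to let each namespace in the tenant consume its quota independently, Bill could set the scope accordingly; a minimal sketch reusing the `oil` tenant from above:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  resourceQuotas:
    # Each namespace gets its own independent budget.
    scope: Namespace
    items:
    - hard:
        pods: "10"
EOF
```

With this configuration, `oil-production` and `oil-development` can each run up to 10 pods, rather than sharing a single 10-pod budget across the whole tenant.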

## Pods and containers limits

Bill, the cluster admin, can also set Limit Ranges for each namespace in Alice's tenant by defining limits for pods and containers in the tenant spec:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  ...
  limitRanges:
    items:
    - limits:
      - type: Pod
        min:
          cpu: "50m"
          memory: "5Mi"
        max:
          cpu: "1"
          memory: "1Gi"
      - type: Container
        defaultRequest:
          cpu: "100m"
          memory: "10Mi"
        default:
          cpu: "200m"
          memory: "100Mi"
        min:
          cpu: "50m"
          memory: "5Mi"
        max:
          cpu: "1"
          memory: "1Gi"
      - type: PersistentVolumeClaim
        min:
          storage: "1Gi"
        max:
          storage: "10Gi"
```

Limits will be inherited by all the namespaces created by Alice. In our case, when Alice creates the namespace `oil-production`, Capsule creates the following:

```yaml
kind: LimitRange
apiVersion: v1
metadata:
  name: limits
  namespace: oil-production
  labels:
    tenant: oil
spec:
  limits:
  - type: Pod
    min:
      cpu: "50m"
      memory: "5Mi"
    max:
      cpu: "1"
      memory: "1Gi"
  - type: Container
    defaultRequest:
      cpu: "100m"
      memory: "10Mi"
    default:
      cpu: "200m"
      memory: "100Mi"
    min:
      cpu: "50m"
      memory: "5Mi"
    max:
      cpu: "1"
      memory: "1Gi"
  - type: PersistentVolumeClaim
    min:
      storage: "1Gi"
    max:
      storage: "10Gi"
```

> Note: since limit ranges apply to individual resources, there is no aggregate usage to count at the tenant level.

Alice doesn't have permission to change or delete these resources, according to the assigned RBAC profile:

```
kubectl -n oil-production auth can-i patch resourcequota
no
kubectl -n oil-production auth can-i delete resourcequota
no
kubectl -n oil-production auth can-i patch limitranges
no
kubectl -n oil-production auth can-i delete limitranges
no
```

# What’s next

See how Bill, the cluster admin, can enforce the Priority Class of Pods running in Alice's tenant namespaces: [Enforce Pod Priority Classes](/docs/operator/use-cases/pod-priority-classes)
71
docs/content/operator/use-cases/service-type.md
Normal file
@@ -0,0 +1,71 @@
# Disable Service Types

Bill, the cluster admin, can prevent the creation of services with specific service types.

## NodePort

In a _shared multi-tenant_ scenario, multiple _NodePort_ services can become cumbersome to manage. The reason is the potential overlap of needs between Tenant owners, since a _NodePort_ is opened on all nodes and, when using `hostNetwork=true`, is accessible to any _Pod_ regardless of any `NetworkPolicy`.

Bill, the cluster admin, can block the creation of services with the `NodePort` service type for a given tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  serviceOptions:
    allowedServices:
      nodePort: false
EOF
```

With the above configuration, any attempt by Alice to create a Service of type `NodePort` is denied by the Validation Webhook enforcing it. The default value is `true`.

## ExternalName

Services of type `ExternalName` have been found subject to many security issues. To prevent tenant owners from creating services of type `ExternalName`, the cluster admin can disallow them for a tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  serviceOptions:
    allowedServices:
      externalName: false
EOF
```

With the above configuration, any attempt by Alice to create a Service of type `ExternalName` is denied by the Validation Webhook enforcing it. The default value is `true`.

## LoadBalancer

Likewise, Services of type `LoadBalancer` could be blocked for various reasons. To prevent tenant owners from creating these kinds of services, the cluster admin can disallow them for a tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  serviceOptions:
    allowedServices:
      loadBalancer: false
EOF
```

With the above configuration, any attempt by Alice to create a Service of type `LoadBalancer` is denied by the Validation Webhook enforcing it. The default value is `true`.
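The three switches can also be combined in a single tenant spec; a sketch (the chosen values are illustrative):

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  serviceOptions:
    allowedServices:
      # Only ClusterIP and LoadBalancer services are allowed.
      nodePort: false
      externalName: false
      loadBalancer: true
EOF
```

Fields left unset keep their default of `true`, so Bill only needs to list the types he wants to disable.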

# What’s next

See how Bill, the cluster admin, can set taints on Alice's services: [Taint services](/docs/operator/use-cases/taint-services).
44
docs/content/operator/use-cases/storage-classes.md
Normal file
@@ -0,0 +1,44 @@
# Assign Storage Classes

Persistent storage infrastructure is provided to tenants. Different types of storage requirements, with different levels of QoS, e.g. SSD versus HDD, are available for different tenants according to the tenant's profile. To meet these requirements, Bill, the cluster admin, can provision different Storage Classes and assign them to the tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  storageClasses:
    allowed:
    - ceph-rbd
    - ceph-nfs
    allowedRegex: "^ceph-.*$"
EOF
```

Capsule ensures that all Persistent Volume Claims created by Alice use only one of the valid storage classes:

```yaml
kubectl apply -f - << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc
  namespace: oil-production
spec:
  storageClassName: ceph-rbd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 12Gi
EOF
```

Any attempt by Alice to use a non-allowed Storage Class, or to omit it, is denied by the Validation Webhook enforcing it.

# What’s next

See how Bill, the cluster admin, can assign Network Policies to Alice's tenant: [Assign Network Policies](/docs/operator/use-cases/network-policies).
28
docs/content/operator/use-cases/taint-namespaces.md
Normal file
@@ -0,0 +1,28 @@
# Taint namespaces

With Capsule, Bill can _"taint"_ the namespaces created by Alice with additional labels and/or annotations. There is no specific semantic assigned to these labels and annotations: they will simply be applied to the namespaces in the tenant as they are created by Alice. This can help the cluster admin to implement specific use cases, such as backup as a service for the namespaces in the tenant.

Bill assigns additional labels and annotations to all namespaces created in the `oil` tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  namespaceOptions:
    additionalMetadata:
      annotations:
        capsule.clastix.io/backup: "true"
      labels:
        capsule.clastix.io/tenant: oil
EOF
```

When Alice creates a namespace, it will inherit the given labels and/or annotations.
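Inspecting one of Alice's namespaces would then show the inherited metadata, roughly like the following (output trimmed; the values are the ones assumed from the tenant spec above):

```yaml
kubectl get namespace oil-production -o yaml
apiVersion: v1
kind: Namespace
metadata:
  name: oil-production
  annotations:
    capsule.clastix.io/backup: "true"
  labels:
    capsule.clastix.io/tenant: oil
...
```

A backup tool could then select namespaces to back up by the `capsule.clastix.io/backup` annotation, without any per-namespace configuration.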

# What’s next

See how Bill, the cluster admin, can assign multiple tenants to Alice: [Assign multiple tenants to an owner](/docs/operator/use-cases/multiple-tenants).
28
docs/content/operator/use-cases/taint-services.md
Normal file
@@ -0,0 +1,28 @@
# Taint services

With Capsule, Bill can _"taint"_ the services created by Alice with additional labels and/or annotations. There is no specific semantic assigned to these labels and annotations: they will simply be applied to the services in the tenant as they are created by Alice. This can help the cluster admin to implement specific use cases.

Bill assigns additional labels and annotations to all services created in the `oil` tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  serviceOptions:
    additionalMetadata:
      annotations:
        capsule.clastix.io/backup: "true"
      labels:
        capsule.clastix.io/tenant: oil
EOF
```

When Alice creates a service in a namespace, it will inherit the given labels and/or annotations.

# What’s next

See how Bill, the cluster admin, can protect specific labels and annotations on Namespaces from modifications by Alice: [Denying specific user-defined labels or annotations on Namespaces](/docs/operator/use-cases/namespace-labels-and-annotations).
161
docs/content/operator/use-cases/tenant-ownership.md
Normal file
@@ -0,0 +1,161 @@
# Tenant ownership

Bill, the cluster admin, receives a new request from Acme Corp.'s CTO asking for a new tenant to be onboarded, with the user Alice as the tenant owner. Bill creates Alice's identity `alice` in the Acme Corp. identity management system. Since Alice is a tenant owner, Bill needs to assign `alice` the Capsule user group defined by the `--capsule-user-group` option, which defaults to `capsule.clastix.io`.

To keep things simple, we assume that Bill just creates a client certificate for authentication using an X.509 Certificate Signing Request, so Alice's certificate has the subject `"/CN=alice/O=capsule.clastix.io"`.

Bill creates a new tenant `oil` in the CaaS management portal according to the tenant's profile:

```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
EOF
```

Bill checks if the new tenant is created and operational:

```
kubectl get tenant oil
NAME   STATE    NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR   AGE
oil    Active                     0                                 33m
```

> Note that namespaces are not yet assigned to the new tenant.
> The tenant owners are free to create their namespaces in a self-service fashion
> and without any intervention from Bill.

Once the new tenant `oil` is in place, Bill sends the login credentials to Alice.

Alice can log in using her credentials and check if she can create a namespace

```
kubectl auth can-i create namespaces
yes
```

or even delete a namespace

```
kubectl auth can-i delete ns -n oil-production
yes
```

However, cluster resources are not accessible to Alice

```
kubectl auth can-i get namespaces
no

kubectl auth can-i get nodes
no

kubectl auth can-i get persistentvolumes
no
```

including the `Tenant` resources

```
kubectl auth can-i get tenants
no
```

## Assign a group of users as tenant owner

In the example above, Bill assigned the ownership of the `oil` tenant to the `alice` user. If another user, e.g. Bob, needs to administer the `oil` tenant, Bill can assign the ownership of the `oil` tenant to that user too:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  - name: bob
    kind: User
EOF
```

However, it's more likely that Bill assigns the ownership of the `oil` tenant to a group of users instead of a single one. Bill creates a new group account `oil-users` in the Acme Corp. identity management system and then assigns the Alice and Bob identities to the `oil-users` group.

The tenant manifest is modified as follows:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: oil-users
    kind: Group
EOF
```

With the configuration above, any user belonging to the `oil-users` group will be an owner of the `oil` tenant with the same permissions as Alice. For example, Bob can log in with his credentials and issue

```
kubectl auth can-i create namespaces
yes
```

## Assign a robot account as tenant owner

As the GitOps methodology gains more and more adoption, it's increasingly likely that an application (Service Account) should act as Tenant Owner. In Capsule, a Tenant can also be owned by a Kubernetes _ServiceAccount_ identity.

The tenant manifest is modified as follows:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: oil-users
    kind: Group
  - name: system:serviceaccount:default:robot
    kind: ServiceAccount
EOF
```

Bill can create a Service Account called `robot`, for example, in the `default` namespace and leave it to act as Tenant Owner of the `oil` tenant

```
kubectl --as system:serviceaccount:default:robot --as-group capsule.clastix.io auth can-i create namespaces
yes
```

The service account has to be part of a Capsule user group, so Bill has to set in the `CapsuleConfiguration`

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  userGroups:
  - capsule.clastix.io
  - system:serviceaccounts:default
```

because, by default, each service account is a member of the following groups:

```
system:serviceaccounts
system:serviceaccounts:{service-account-namespace}
system:authenticated
```

# What’s next

See how a tenant owner creates new namespaces: [Create namespaces](/docs/operator/use-cases/create-namespaces).
27
docs/content/operator/use-cases/velero-backup-restoration.md
Normal file
@@ -0,0 +1,27 @@
# Velero Backup Restoration

Velero is a backup system that performs disaster recovery and migrates Kubernetes cluster resources and persistent volumes.

Using it in a Kubernetes cluster where Capsule is installed can lead to an incomplete restore of the cluster's Tenants. This is because Velero omits the `ownerReferences` section from the tenant's namespace manifests when backing them up.

To avoid this problem you can use the script `velero-restore.sh` under the `hack/` folder.

In case of a data loss, the right thing to do is to restore the cluster with **Velero** first. Once Velero has finished, you can proceed with the script to complete the restoration:

```bash
./velero-restore.sh --kubeconfig /path/to/your/kubeconfig restore
```

Running this command patches the tenants' namespace manifests that are currently `ownerReferences`-less. Once the command has finished its run, you have the cluster back.

Additionally, you can also specify a selected range of tenants to be restored:

```bash
./velero-restore.sh --tenant "gas oil" restore
```

In this way, only the tenants **gas** and **oil** will be restored.

# What's next

See how Bill, the cluster admin, can deny wildcard hostnames to a Tenant. [Deny Wildcard Hostnames](/docs/operator/use-cases/deny-wildcard-hostnames)
61
docs/content/proxy/contributing.md
Normal file
@@ -0,0 +1,61 @@
# How to contribute to Capsule Proxy

First, thanks for your interest in Capsule and Capsule Proxy: any contribution is welcome!

You should set up your development environment as follows:

- [Go 1.16](https://golang.org/dl/)
- [KinD](https://github.com/kubernetes-sigs/kind)

> Please refer to the general coding style rules for Capsule.

## Run locally for test and debug

This guide helps new contributors to locally debug the project in _out-of-cluster_ mode.

1. Run a KinD cluster and find the endpoint port of `kind-control-plane` using `docker ps`:

   ```bash
   ❯ docker ps
   CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                       NAMES
   88432e392adb   kindest/node:v1.20.2   "/usr/local/bin/entr…"   32 seconds ago   Up 28 seconds   127.0.0.1:64582->6443/tcp   kind-control-plane
   ```

2. Generate a TLS certificate and key for localhost; you can use [mkcert](https://github.com/FiloSottile/mkcert):

   ```bash
   > cd /tmp
   > mkcert localhost
   > ls
   localhost-key.pem localhost.pem
   ```

3. Run the proxy with the following options:

   ```bash
   go run main.go \
     --ssl-cert-path=/tmp/localhost.pem \
     --ssl-key-path=/tmp/localhost-key.pem \
     --enable-ssl=true \
     --kubeconfig=<YOUR KUBERNETES CONFIGURATION FILE>
   ```

4. Edit the `KUBECONFIG` file (you should make a copy and work on it) as follows:
   - find the section of your cluster
   - replace the server path with `https://127.0.0.1:9001`
   - replace the certificate-authority-data value with the content of your rootCA.pem file (if you use mkcert, you can get it with `cat "$(mkcert -CAROOT)/rootCA.pem"|base64|tr -d '\n'`)

5. Now you should be able to run kubectl using the proxy!

## Debug in a remote Kubernetes cluster

In some cases, you may need to debug the in-cluster mode, and [`delve`](https://github.com/go-delve/delve) plays a big role here:

1. build the Docker image with `delve` by issuing `make dlv-build`
2. with the resulting `quay.io/clastix/capsule-proxy:dlv` Docker image, publish it or load it into your [KinD](https://github.com/kubernetes-sigs/kind) instance (`kind load docker-image --name capsule --nodes capsule-control-plane quay.io/clastix/capsule-proxy:dlv`)
3. change the Deployment image using `kubectl edit` or `kubectl set image deployment/capsule-proxy capsule-proxy=quay.io/clastix/capsule-proxy:dlv`
4. wait for the image rollout (`kubectl -n capsule-system rollout status deployment/capsule-proxy`)
5. perform the port-forwarding with `kubectl -n capsule-system port-forward $(kubectl -n capsule-system get pods -l app.kubernetes.io/name=capsule-proxy --output name) 2345:2345`
6. connect using your `delve` options

> _Nota Bene_: the application could be killed by the Liveness Probe, since delve waits for the debugger connection before starting the application.
> Feel free to edit and remove the probes to avoid this kind of issue.
154
docs/content/proxy/oidc-auth.md
Normal file
@@ -0,0 +1,154 @@
|
||||
# OIDC Authentication
|
||||
The `capsule-proxy` works with `kubectl` users with a token-based authentication, e.g. OIDC or Bearer Token. In the following example, we'll use Keycloak as OIDC server capable to provides JWT tokens.
|
||||
|
||||
### Configuring Keycloak

Configure Keycloak as OIDC server:

- Add a realm called `caas`, or use any existing realm instead
- Add a group `capsule.clastix.io`
- Add a user `alice` assigned to the group `capsule.clastix.io`
- Add an OIDC client called `kubernetes`
- For the `kubernetes` client, create protocol mappers called `groups` and `audience`

If everything is done correctly, you should now be able to authenticate in Keycloak and see user groups in JWT tokens. Use the following snippet to authenticate in Keycloak as the `alice` user:

```
$ KEYCLOAK=sso.clastix.io
$ REALM=caas
$ OIDC_ISSUER=${KEYCLOAK}/auth/realms/${REALM}

$ curl -k -s https://${OIDC_ISSUER}/protocol/openid-connect/token \
    -d grant_type=password \
    -d response_type=id_token \
    -d scope=openid \
    -d client_id=${OIDC_CLIENT_ID} \
    -d client_secret=${OIDC_CLIENT_SECRET} \
    -d username=${USERNAME} \
    -d password=${PASSWORD} | jq
```
The result will include an `ACCESS_TOKEN`, a `REFRESH_TOKEN`, and an `ID_TOKEN`. The access token can generally be disregarded for Kubernetes: it would be used if the identity provider managed roles and permissions for the users, but that is done in Kubernetes itself with RBAC. The ID token is short-lived, while the refresh token has a longer expiration. The refresh token is used to fetch a new ID token when the ID token expires.

```json
{
  "access_token": "ACCESS_TOKEN",
  "refresh_token": "REFRESH_TOKEN",
  "id_token": "ID_TOKEN",
  "token_type": "bearer",
  "scope": "openid groups profile email"
}
```
To introspect the `ID_TOKEN`, run:

```
$ curl -k -s https://${OIDC_ISSUER}/protocol/openid-connect/introspect \
    -d token=${ID_TOKEN} \
    --user ${OIDC_CLIENT_ID}:${OIDC_CLIENT_SECRET} | jq
```

The result will be like the following:

```json
{
  "exp": 1601323086,
  "iat": 1601322186,
  "aud": "kubernetes",
  "typ": "ID",
  "azp": "kubernetes",
  "preferred_username": "alice",
  "email_verified": false,
  "acr": "1",
  "groups": [
    "capsule.clastix.io"
  ],
  "client_id": "kubernetes",
  "username": "alice",
  "active": true
}
```
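A JWT can also be inspected locally, without calling the introspection endpoint: the payload is just a base64url-encoded JSON segment. A minimal sketch in Python, using a hypothetical unsigned token (not a real Keycloak token), and skipping signature verification:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload (second segment) of a JWT without verifying it."""
    payload = token.split(".")[1]
    # Restore the base64 padding stripped by the JWT encoding
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a hypothetical token just to demonstrate the decoding
claims = {"preferred_username": "alice", "groups": ["capsule.clastix.io"]}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"eyJhbGciOiJSUzI1NiJ9.{body}.signature"

print(jwt_claims(token)["groups"])  # ['capsule.clastix.io']
```

Keep in mind this decodes only: the API Server, not the client, is responsible for verifying the token signature against the issuer.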

### Configuring Kubernetes API Server

Configuring Kubernetes for OIDC Authentication requires adding several parameters to the API Server. Please refer to the [documentation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) for details and examples. Most likely, your `kube-apiserver.yaml` manifest will look like the following:

```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --oidc-issuer-url=https://${OIDC_ISSUER}
    - --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
    - --oidc-client-id=${OIDC_CLIENT_ID}
    - --oidc-username-claim=preferred_username
    - --oidc-groups-claim=groups
    - --oidc-username-prefix=-
```

### Configuring kubectl

There are two options to use `kubectl` with OIDC:

- the OIDC Authenticator
- the `--token` option

To use the OIDC Authenticator, add an `oidc` user entry to your `kubeconfig` file:

```
$ kubectl config set-credentials oidc \
    --auth-provider=oidc \
    --auth-provider-arg=idp-issuer-url=https://${OIDC_ISSUER} \
    --auth-provider-arg=idp-certificate-authority=/path/to/ca.crt \
    --auth-provider-arg=client-id=${OIDC_CLIENT_ID} \
    --auth-provider-arg=client-secret=${OIDC_CLIENT_SECRET} \
    --auth-provider-arg=refresh-token=${REFRESH_TOKEN} \
    --auth-provider-arg=id-token=${ID_TOKEN} \
    --auth-provider-arg=extra-scopes=groups
```

To use the `--token` option:

```
$ kubectl config set-credentials oidc --token=${ID_TOKEN}
```

Point `kubectl` to the URL where the `capsule-proxy` service is reachable:

```
$ kubectl config set-cluster mycluster \
    --server=https://kube.clastix.io \
    --certificate-authority=~/.kube/ca.crt
```

Create a new context for the OIDC authenticated users:

```
$ kubectl config set-context alice-oidc@mycluster \
    --cluster=mycluster \
    --user=oidc
```

As user `alice`, you should be able to use `kubectl` to create some namespaces:

```
$ kubectl --context alice-oidc@mycluster create namespace oil-production
$ kubectl --context alice-oidc@mycluster create namespace oil-development
$ kubectl --context alice-oidc@mycluster create namespace gas-marketing
```

and list only those namespaces:

```
$ kubectl --context alice-oidc@mycluster get namespaces
NAME              STATUS   AGE
gas-marketing     Active   2m
oil-development   Active   2m
oil-production    Active   2m
```

When logged in as a cluster-admin power user, you should be able to see all namespaces:

```
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   78d
kube-node-lease   Active   78d
kube-public       Active   78d
kube-system       Active   78d
gas-marketing     Active   2m
oil-development   Active   2m
oil-production    Active   2m
```

_Nota Bene_: once your `ID_TOKEN` expires, the `kubectl` OIDC Authenticator will attempt to automatically refresh your `ID_TOKEN` using the `REFRESH_TOKEN`, the `OIDC_CLIENT_ID`, and the `OIDC_CLIENT_SECRET`, storing the new values of the `REFRESH_TOKEN` and `ID_TOKEN` in your `kubeconfig` file. If the OIDC server uses a self-signed CA certificate, make sure to specify it with the `idp-certificate-authority` option in your `kubeconfig` file, otherwise you won't be able to refresh the tokens. Once the `REFRESH_TOKEN` has expired, you will need to refresh the tokens manually.
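Whether a refresh is needed can be determined from the `exp` claim shown in the introspection output above, which is a Unix timestamp. A small sketch, with illustrative claim values:

```python
import time

def is_expired(claims: dict, skew: int = 30) -> bool:
    """Return True when the token's exp claim is in the past (with clock skew)."""
    return claims["exp"] <= time.time() + skew

# A timestamp from 2020 is long expired; one an hour ahead is still valid
print(is_expired({"exp": 1601323086}))          # True
print(is_expired({"exp": time.time() + 3600}))  # False
```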

337
docs/content/proxy/overview.md
Normal file
@@ -0,0 +1,337 @@

# Capsule Proxy

Capsule Proxy is an add-on for [Capsule](https://github.com/clastix/capsule), the operator providing multi-tenancy in Kubernetes.

## The problem

Kubernetes RBAC cannot list only the owned cluster-scoped resources, since there are no ACL-filtered APIs. For example:

```
$ kubectl get namespaces
Error from server (Forbidden): namespaces is forbidden:
User "alice" cannot list resource "namespaces" in API group "" at the cluster scope
```
However, the user can have permissions on some namespaces:

```
$ kubectl auth can-i [get|list|watch|delete] ns oil-production
yes
```

The reason, as the error message reports, is that the RBAC _list_ action is available only at the cluster scope and it is not granted to users without appropriate permissions.

To overcome this problem, many Kubernetes distributions introduced mirrored custom resources backed by a custom set of ACL-filtered APIs. However, this radically changes the user's experience of Kubernetes by introducing hard customizations that make it painful to move from one distribution to another.

With **Capsule**, we took a different approach. As one of the key goals, we want to keep the same user experience on all the distributions of Kubernetes. We want people to use the standard tools they already know and love, and it should just work.
## How it works

This project is an add-on of the main [Capsule](https://github.com/clastix/capsule) operator, so make sure you have a working instance of Capsule before attempting to install it.
Use the `capsule-proxy` only if you want Tenant Owners to list their own cluster-scoped resources.

The `capsule-proxy` implements a simple reverse proxy that intercepts only specific requests to the APIs server, and Capsule does all the magic behind the scenes.

The current implementation filters the following requests:

* `api/v1/namespaces`
* `api/v1/nodes`
* `apis/storage.k8s.io/v1/storageclasses{/name}`
* `apis/networking.k8s.io/{v1,v1beta1}/ingressclasses{/name}`
* `apis/scheduling.k8s.io/v1/priorityclasses{/name}`

All other requests are proxied transparently to the APIs server, so no side effects are expected. We're planning to add new APIs in the future, so PRs are welcome!
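The dispatch logic above amounts to matching the request path against a fixed set of templates. A rough sketch of such a matcher (the path templates mirror the list above; this is not the actual capsule-proxy code):

```python
import re

# Path templates from the list above, compiled to anchored regexes.
# "{/name}" means an optional trailing resource-name segment.
FILTERED = [
    r"^/api/v1/namespaces$",
    r"^/api/v1/nodes$",
    r"^/apis/storage\.k8s\.io/v1/storageclasses(/[^/]+)?$",
    r"^/apis/networking\.k8s\.io/(v1|v1beta1)/ingressclasses(/[^/]+)?$",
    r"^/apis/scheduling\.k8s\.io/v1/priorityclasses(/[^/]+)?$",
]

def is_filtered(path: str) -> bool:
    """True when capsule-proxy would apply tenant filtering to this request."""
    return any(re.match(pattern, path) for pattern in FILTERED)

print(is_filtered("/api/v1/namespaces"))                          # True
print(is_filtered("/apis/storage.k8s.io/v1/storageclasses/zol"))  # True
print(is_filtered("/api/v1/pods"))                                # False
```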

## Installation

The `capsule-proxy` can be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client to the APIs server.
Optionally, it can be deployed as a sidecar container in the backend of a dashboard.
Running outside a Kubernetes cluster is also viable, although a valid `KUBECONFIG` file must be provided, either via the `KUBECONFIG` environment variable or as the default file in `$HOME/.kube/config`.

A Helm Chart is available [here](https://github.com/clastix/capsule/blob/master/charts/capsule/README.md).

## Does it work with kubectl?

Yes, it works by intercepting all the requests from the `kubectl` client directed to the APIs server. It works both with users using TLS certificate authentication and with those using OIDC.

## How is RBAC put in place?

Each Tenant Owner can have their capabilities managed in a way pretty similar to standard RBAC:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: my-tenant
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: IngressClasses
      operations:
      - List
```

The proxy setting `kind` is an __enum__ accepting the supported resources:

- `Nodes`
- `StorageClasses`
- `IngressClasses`
- `PriorityClasses`

Each resource kind can be granted with several verbs, such as:

- `List`
- `Update`
- `Delete`

### Namespaces

As Tenant Owner `alice`, you can use `kubectl` to create some namespaces:

```
$ kubectl --context alice-oidc@mycluster create namespace oil-production
$ kubectl --context alice-oidc@mycluster create namespace oil-development
$ kubectl --context alice-oidc@mycluster create namespace gas-marketing
```

and list only those namespaces:

```
$ kubectl --context alice-oidc@mycluster get namespaces
NAME              STATUS   AGE
gas-marketing     Active   2m
oil-development   Active   2m
oil-production    Active   2m
```

### Nodes

The Capsule Proxy gives the owners the ability to access the nodes matching the `.spec.nodeSelector` in the Tenant manifest:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: Nodes
      operations:
      - List
  nodeSelector:
    kubernetes.io/hostname: capsule-gold-qwerty
```

```bash
$ kubectl --context alice-oidc@mycluster get nodes
NAME                  STATUS   ROLES    AGE   VERSION
capsule-gold-qwerty   Ready    <none>   43h   v1.19.1
```

> Warning: when no `nodeSelector` is specified, the Tenant Owners have access to all the nodes, according to the permissions listed in the `proxySettings` specs.

### Storage Classes

A Tenant may be limited to using a set of allowed Storage Class resources, as follows:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: StorageClasses
      operations:
      - List
  storageClasses:
    allowed:
    - custom
    allowedRegex: "\\w+fs"
```

In the Kubernetes cluster we could have more Storage Class resources, some of them forbidden and not usable by the Tenant Owner:

```bash
$ kubectl --context admin@mycluster get storageclasses
NAME                PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
cephfs              rook.io/cephfs           Delete          WaitForFirstConsumer   false                  21h
custom              custom.tls/provisioner   Delete          WaitForFirstConsumer   false                  43h
default(standard)   rancher.io/local-path    Delete          WaitForFirstConsumer   false                  43h
glusterfs           rook.io/glusterfs        Delete          WaitForFirstConsumer   false                  54m
zol                 zfs-on-linux/zfs         Delete          WaitForFirstConsumer   false                  54m
```

The expected output using `capsule-proxy` is the retrieval of the `custom` Storage Class as well as the other ones matching the regex `\w+fs`:

```bash
$ kubectl --context alice-oidc@mycluster get storageclasses
NAME        PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
cephfs      rook.io/cephfs           Delete          WaitForFirstConsumer   false                  21h
custom      custom.tls/provisioner   Delete          WaitForFirstConsumer   false                  43h
glusterfs   rook.io/glusterfs        Delete          WaitForFirstConsumer   false                  54m
```
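The `allowed` list plus `allowedRegex` selection above can be reasoned about with a few lines of code. A sketch, assuming the regex may match anywhere in the class name (the exact matching semantics are Capsule's, not reproduced here):

```python
import re

def allowed_classes(names, allowed=(), allowed_regex=None):
    """Keep names that are explicitly allowed or match the allowed regex."""
    pattern = re.compile(allowed_regex) if allowed_regex else None
    return [
        name for name in names
        if name in allowed or (pattern and pattern.search(name))
    ]

classes = ["cephfs", "custom", "default(standard)", "glusterfs", "zol"]
print(allowed_classes(classes, allowed=["custom"], allowed_regex=r"\w+fs"))
# ['cephfs', 'custom', 'glusterfs']
```

This reproduces the expected `kubectl` output above: `custom` is explicitly allowed, while `cephfs` and `glusterfs` come in through the regex.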

### Ingress Classes

As for Storage Classes, Ingress Classes can be enforced as well:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: IngressClasses
      operations:
      - List
  ingressOptions:
    allowedClasses:
      allowed:
      - custom
      allowedRegex: "\\w+-lb"
```

In the Kubernetes cluster we could have more Ingress Class resources, some of them forbidden and not usable by the Tenant Owner:

```bash
$ kubectl --context admin@mycluster get ingressclasses
NAME              CONTROLLER             PARAMETERS                                      AGE
custom            example.com/custom     IngressParameters.k8s.example.com/custom        24h
external-lb       example.com/external   IngressParameters.k8s.example.com/external-lb   2s
haproxy-ingress   haproxy.tech/ingress                                                   4d
internal-lb       example.com/internal   IngressParameters.k8s.example.com/internal-lb   15m
nginx             nginx.plus/ingress                                                     5d
```

The expected output using `capsule-proxy` is the retrieval of the `custom` Ingress Class as well as the other ones matching the regex `\w+-lb`:

```bash
$ kubectl --context alice-oidc@mycluster get ingressclasses
NAME          CONTROLLER             PARAMETERS                                      AGE
custom        example.com/custom     IngressParameters.k8s.example.com/custom        24h
external-lb   example.com/external   IngressParameters.k8s.example.com/external-lb   2s
internal-lb   example.com/internal   IngressParameters.k8s.example.com/internal-lb   15m
```

### Priority Classes

The allowed Priority Classes assigned to a Tenant Owner can be enforced as follows:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: PriorityClasses
      operations:
      - List
  priorityClasses:
    allowed:
    - best-effort
    allowedRegex: "\\w+priority"
```

In the Kubernetes cluster we could have more Priority Class resources, some of them forbidden and not usable by the Tenant Owner:

```bash
$ kubectl --context admin@mycluster get priorityclasses.scheduling.k8s.io
NAME                      VALUE        GLOBAL-DEFAULT   AGE
custom                    1000         false            18s
maxpriority               1000         false            18s
minpriority               1000         false            18s
nonallowed                1000         false            8m54s
system-cluster-critical   2000000000   false            3h40m
system-node-critical      2000001000   false            3h40m
```

The expected output using `capsule-proxy` is the retrieval of the `custom` Priority Class as well as the other ones matching the regex `\w+priority`:

```bash
$ kubectl --context alice-oidc@mycluster get priorityclasses.scheduling.k8s.io
NAME          VALUE   GLOBAL-DEFAULT   AGE
custom        1000    false            18s
maxpriority   1000    false            18s
minpriority   1000    false            18s
```
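One subtlety of an `allowedRegex` value like `\w+priority` is that it requires at least one leading word character, so a class named exactly `priority` would not be selected, while `maxpriority` would. This is quick to check (illustrative only; Capsule's own matching code is not reproduced here):

```python
import re

pattern = re.compile(r"\w+priority")

for name in ["maxpriority", "minpriority", "priority", "custom"]:
    print(name, bool(pattern.search(name)))
# maxpriority True
# minpriority True
# priority False
# custom False
```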

### Storage/Ingress Class and Priority Class required label

For Storage Class, Ingress Class, and Priority Class resources, the `name` label reflecting the resource name is mandatory, otherwise the filtering of resources cannot be put in place:

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    name: my-storage-class
  name: my-storage-class
provisioner: org.tld/my-storage-class
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    name: external-lb
  name: external-lb
spec:
  controller: example.com/ingress-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  labels:
    name: best-effort
  name: best-effort
value: 1000
globalDefault: false
description: "Priority class for best-effort Tenants"
```
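A manifest can be checked for this convention before applying it. A minimal sketch that validates the `name` label against `metadata.name` (an illustrative helper, not part of Capsule):

```python
def has_required_name_label(manifest: dict) -> bool:
    """True when metadata.labels.name matches metadata.name."""
    metadata = manifest.get("metadata", {})
    return metadata.get("labels", {}).get("name") == metadata.get("name")

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "my-storage-class", "labels": {"name": "my-storage-class"}},
    "provisioner": "org.tld/my-storage-class",
}
print(has_required_name_label(storage_class))  # True

unlabeled = {"metadata": {"name": "zol"}}
print(has_required_name_label(unlabeled))      # False
```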

## What’s next

Have fun with `capsule-proxy`:

* [Standalone Installation](/docs/proxy/standalone)
* [Sidecar Installation](/docs/proxy/sidecar)
* [OIDC Authentication](/docs/proxy/oidc-auth)
* [Contributing](/docs/proxy/contributing)
116
docs/content/proxy/sidecar.md
Normal file
@@ -0,0 +1,116 @@

# Sidecar Installation

The `capsule-proxy` can be deployed as a sidecar container for server-side Kubernetes dashboards. It intercepts all requests sent from the client side to the server side of the dashboard and proxies them to the Kubernetes APIs server.

```
            +-----------+      +-----------+      +-----------+      +-----------+
browser --->|:443       |----->|:8443      |----->|:9001      |----->|:6443      |
            +-----------+      +-----------+      +-----------+      +-----------+
        ingress-controller   dashboard backend   capsule-proxy      kube-apiserver
        (ssl-passthrough)
```

In order to use this pattern, the server-side backend of your dashboard must allow specifying the URL of the Kubernetes APIs server. For example, the following manifests contain an excerpt for deploying with the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard) and an Ingress Controller in ssl-passthrough mode.

Place the `capsule-proxy` in the pod with SSL enabled, i.e. `--enable-ssl=true`, passing a valid certificate and key files in a secret:

```yaml
...
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: ns-filter
        image: quay.io/clastix/capsule-proxy
        imagePullPolicy: IfNotPresent
        args:
        - --capsule-user-group=capsule.clastix.io
        - --zap-log-level=5
        - --enable-ssl=true
        - --ssl-cert-path=/opt/certs/tls.crt
        - --ssl-key-path=/opt/certs/tls.key
        volumeMounts:
        - name: ns-filter-certs
          mountPath: /opt/certs
        ports:
        - containerPort: 9001
          name: http
          protocol: TCP
...
```

In the same pod, place the Kubernetes Dashboard in _"out-of-cluster"_ mode with `--apiserver-host=https://localhost:9001` to send all the requests to the `capsule-proxy` sidecar container:

```yaml
...
      - name: dashboard
        image: kubernetesui/dashboard:v2.0.4
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --auto-generate-certificates
        - --namespace=cmp-system
        - --tls-cert-file=tls.crt
        - --tls-key-file=tls.key
        - --apiserver-host=https://localhost:9001
        - --kubeconfig=/opt/.kube/config
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        - mountPath: /opt/.kube
          name: kubeconfig
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
...
```

Make sure you pass the dashboard a valid `kubeconfig` file pointing to the `capsule-proxy` sidecar container instead of the `kube-apiserver` directly:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-dashboard-kubeconfig
  namespace: kubernetes-dashboard
data:
  config: |
    kind: Config
    apiVersion: v1
    clusters:
    - cluster:
        insecure-skip-tls-verify: true
        server: https://localhost:9001 # <- point to the capsule-proxy
      name: localhost
    contexts:
    - context:
        cluster: localhost
        user: kubernetes-admin # <- dashboard has cluster-admin permissions
      name: admin@localhost
    current-context: admin@localhost
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: REDACTED
        client-key-data: REDACTED
```

After starting the dashboard, log in as a Tenant Owner user, e.g. `alice`, according to the authentication method in use, and check that you can see only the owned namespaces.

The `capsule-proxy` can also be deployed in standalone mode, in order to be used with command line tools like `kubectl`. See [Standalone Installation](/docs/proxy/standalone).
66
docs/content/proxy/standalone.md
Normal file
@@ -0,0 +1,66 @@

# Standalone Installation

The `capsule-proxy` can be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client to the `kube-apiserver`. Use this mode to provide access to client-side command line tools like `kubectl`, or even to client-side dashboards.

You can use an Ingress Controller to expose the `capsule-proxy` endpoint in SSL passthrough or, depending on your environment, you can expose it with either a `NodePort` or a `LoadBalancer` service. As further alternatives, use `HostPort` or `HostNetwork` mode.

```
            +-----------+      +-----------+      +-----------+
kubectl --->|:443       |----->|:9001      |----->|:6443      |
            +-----------+      +-----------+      +-----------+
        ingress-controller    capsule-proxy      kube-apiserver
        (ssl-passthrough)
```

## Configure Capsule

Make sure you have a working instance of the Capsule Operator in your Kubernetes cluster before attempting to use `capsule-proxy`. Please refer to the Capsule Operator [documentation](/docs/operator/overview) for instructions.

You should also have one or more Tenants defined, e.g. `oil` and `gas`, assigned to the user `alice`.

As cluster admin, check the Tenants are in place:

```
$ kubectl get tenants
NAME   NAMESPACE QUOTA   NAMESPACE COUNT   OWNER NAME   OWNER KIND   AGE
foo    3                 1                 joe          User         4d
gas    3                 0                 alice        User         1d
oil    9                 0                 alice        User         1d
```

## Install Capsule Proxy

Create a secret in the target namespace containing the SSL certificate that `capsule-proxy` will use:

```
$ kubectl -n capsule-system create secret tls capsule-proxy --cert=tls.cert --key=tls.key
```

Then use the Helm Chart to install `capsule-proxy` in that namespace:

```bash
$ cat <<EOF | tee custom-values.yaml
options:
  enableSSL: true
ingress:
  enabled: true
  annotations:
    ingress.kubernetes.io/ssl-passthrough: 'true'
  hosts:
    - host: kube.clastix.io
      paths: [ "/" ]
EOF

$ helm install capsule-proxy clastix/capsule-proxy \
    --values custom-values.yaml \
    -n capsule-system
```

The `capsule-proxy` should be exposed with an Ingress in SSL passthrough mode and reachable at `https://kube.clastix.io`.

Users with TLS client certificate authentication are able to talk to `capsule-proxy`, since the current implementation of the reverse proxy is able to forward client certificates to the Kubernetes APIs server.

## RBAC Considerations

Currently, the service account used for `capsule-proxy` needs to have `cluster-admin` permissions.

## Configuring client-only dashboards

If you're using a client-only dashboard, for example [Lens](https://k8slens.dev/), the `capsule-proxy` can be used as in the previous `kubectl` example, since Lens just needs a `kubeconfig` file. Assuming you use a `kubeconfig` file containing a valid OIDC token released for the `alice` user, you can access the cluster with the Lens dashboard and see only the namespaces belonging to Alice's Tenants.

For web-based dashboards, like the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard), the `capsule-proxy` can be installed as a sidecar container. See [Sidecar Installation](/docs/proxy/sidecar).