feat(docs): setup Gridsome for the website

This commit is contained in:
Luca Spezzano
2021-11-02 18:35:41 +01:00
committed by Dario Tranchitella
parent 14f9686bbb
commit 0acc2d2ef1
111 changed files with 25054 additions and 149 deletions


@@ -0,0 +1,61 @@
# How to contribute to Capsule Proxy
First, thanks for your interest in Capsule and Capsule Proxy, any contribution is welcome!
You should set up your development environment as follows:
- [Go 1.16](https://golang.org/dl/)
- [KinD](https://github.com/kubernetes-sigs/kind)
> Please, refer to the general coding style rules for Capsule.
## Run locally for test and debug
This guide helps new contributors debug the project locally in _out of cluster_ mode.
1. Run a kind cluster and find the endpoint port of `kind-control-plane` using `docker ps`:
```bash
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
88432e392adb kindest/node:v1.20.2 "/usr/local/bin/entr…" 32 seconds ago Up 28 seconds 127.0.0.1:64582->6443/tcp kind-control-plane
```
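The mapped host port can also be extracted programmatically from the `PORTS` column; a minimal sketch assuming the format shown above (the sample value is hard-coded here, in practice you would capture it with `docker ps --format '{{.Ports}}'`):

```shell
# Sample PORTS value from `docker ps`; substitute the real output.
ports="127.0.0.1:64582->6443/tcp"
port="${ports##*:}"   # drop everything up to the last ':'  -> "64582->6443/tcp"
port="${port%%-*}"    # drop everything from the first '-'  -> "64582"
echo "$port"
```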
2. Generate a TLS certificate and key for localhost; you can use [mkcert](https://github.com/FiloSottile/mkcert):
```bash
> cd /tmp
> mkcert localhost
> ls
localhost-key.pem localhost.pem
```
3. Run the proxy with the following options:
```bash
go run main.go \
--ssl-cert-path=/tmp/localhost.pem \
--ssl-key-path=/tmp/localhost-key.pem \
--enable-ssl=true \
--kubeconfig=<YOUR KUBERNETES CONFIGURATION FILE>
```
4. Edit the `KUBECONFIG` file (you should make a copy and work on it) as follows:
- find the section of your cluster
- replace the server address with `https://127.0.0.1:9001`
- replace the `certificate-authority-data` value with the base64-encoded content of your rootCA.pem file (if you use mkcert, you can get it with `cat "$(mkcert -CAROOT)/rootCA.pem"|base64|tr -d '\n'`)
5. Now you should be able to run kubectl through the proxy!
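After editing, the cluster section of the copied kubeconfig might look like the following sketch (the cluster name and placeholder are illustrative):

```yaml
clusters:
- cluster:
    certificate-authority-data: <BASE64 OF rootCA.pem>
    server: https://127.0.0.1:9001
  name: kind-kind
```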
## Debug in a remote Kubernetes cluster
In some cases, you may need to debug the project in _in-cluster_ mode, and [`delve`](https://github.com/go-delve/delve) plays a big role here.
1. build the Docker image with `delve` issuing `make dlv-build`
2. with the `quay.io/clastix/capsule-proxy:dlv` produced Docker image, publish it or load it to your [KinD](https://github.com/kubernetes-sigs/kind) instance (`kind load docker-image --name capsule --nodes capsule-control-plane quay.io/clastix/capsule-proxy:dlv`)
3. change the Deployment image using `kubectl edit` or `kubectl set image deployment/capsule-proxy capsule-proxy=quay.io/clastix/capsule-proxy:dlv`
4. wait for the image rollout (`kubectl -n capsule-system rollout status deployment/capsule-proxy`)
5. perform the port-forwarding with `kubectl -n capsule-system port-forward $(kubectl -n capsule-system get pods -l app.kubernetes.io/name=capsule-proxy --output name) 2345:2345`
6. connect using your `delve` options
> _Nota Bene_: the application could be killed by the liveness probe, since `delve` waits for a debugger connection before starting the application.
> Feel free to edit or remove the probes to avoid this kind of issue.


@@ -0,0 +1,154 @@
# OIDC Authentication
The `capsule-proxy` works with `kubectl` for users with token-based authentication, e.g. OIDC or Bearer tokens. In the following example, we'll use Keycloak as an OIDC server capable of providing JWT tokens.
### Configuring Keycloak
Configure Keycloak as OIDC server:
- Add a realm called `caas`, or use any existing realm instead
- Add a group `capsule.clastix.io`
- Add a user `alice` assigned to group `capsule.clastix.io`
- Add an OIDC client called `kubernetes`
- For the `kubernetes` client, create protocol mappers called `groups` and `audience`
If everything is done correctly, you should now be able to authenticate against Keycloak and see user groups in JWT tokens. Use the following snippet to authenticate to Keycloak as the `alice` user:
```
$ KEYCLOAK=sso.clastix.io
$ REALM=caas
$ OIDC_ISSUER=${KEYCLOAK}/auth/realms/${REALM}
$ curl -k -s https://${OIDC_ISSUER}/protocol/openid-connect/token \
-d grant_type=password \
-d response_type=id_token \
-d scope=openid \
-d client_id=${OIDC_CLIENT_ID} \
-d client_secret=${OIDC_CLIENT_SECRET} \
-d username=${USERNAME} \
-d password=${PASSWORD} | jq
```
The result will include an `ACCESS_TOKEN`, a `REFRESH_TOKEN`, and an `ID_TOKEN`. The access token can generally be disregarded for Kubernetes: it would be used if the identity provider managed roles and permissions for the users, but that is done in Kubernetes itself with RBAC. The id token is short-lived, while the refresh token has a longer expiration; the refresh token is used to fetch a new id token when the id token expires.
```json
{
"access_token":"ACCESS_TOKEN",
"refresh_token":"REFRESH_TOKEN",
"id_token": "ID_TOKEN",
"token_type":"bearer",
"scope": "openid groups profile email"
}
```
To introspect the `ID_TOKEN`, run:
```
$ curl -k -s https://${OIDC_ISSUER}/protocol/openid-connect/introspect \
-d token=${ID_TOKEN} \
--user ${OIDC_CLIENT_ID}:${OIDC_CLIENT_SECRET} | jq
```
The result will look like the following:
```json
{
"exp": 1601323086,
"iat": 1601322186,
"aud": "kubernetes",
"typ": "ID",
"azp": "kubernetes",
"preferred_username": "alice",
"email_verified": false,
"acr": "1",
"groups": [
"capsule.clastix.io"
],
"client_id": "kubernetes",
"username": "alice",
"active": true
}
```
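As an alternative to the introspection endpoint, the claims of a JWT can be decoded locally: a JWT is three dot-separated segments, and the second one is the claim set. A sketch (a sample token is built here for illustration; substitute your real `ID_TOKEN`, and note that real tokens are base64url-encoded, so padding may need adjusting before decoding):

```shell
# Build a sample token; a real ID_TOKEN comes from the token endpoint above.
payload='{"preferred_username":"alice","groups":["capsule.clastix.io"]}'
ID_TOKEN="header.$(printf '%s' "$payload" | base64 | tr -d '\n').signature"
# Extract and decode the middle segment (the claims).
printf '%s' "$ID_TOKEN" | cut -d. -f2 | base64 -d
```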
### Configuring Kubernetes API Server
Configuring Kubernetes for OIDC Authentication requires adding several parameters to the API Server. Please, refer to the [documentation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) for details and examples. Most likely, your `kube-apiserver.yaml` manifest will look like the following:
```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --oidc-issuer-url=https://${OIDC_ISSUER}
    - --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
    - --oidc-client-id=${OIDC_CLIENT_ID}
    - --oidc-username-claim=preferred_username
    - --oidc-groups-claim=groups
    - --oidc-username-prefix=-
```
### Configuring kubectl
There are two options to use `kubectl` with OIDC:
- OIDC Authenticator
- Use the `--token` option
To use the OIDC Authenticator, add an `oidc` user entry to your `kubeconfig` file:
```
$ kubectl config set-credentials oidc \
--auth-provider=oidc \
--auth-provider-arg=idp-issuer-url=https://${OIDC_ISSUER} \
--auth-provider-arg=idp-certificate-authority=/path/to/ca.crt \
--auth-provider-arg=client-id=${OIDC_CLIENT_ID} \
--auth-provider-arg=client-secret=${OIDC_CLIENT_SECRET} \
--auth-provider-arg=refresh-token=${REFRESH_TOKEN} \
--auth-provider-arg=id-token=${ID_TOKEN} \
--auth-provider-arg=extra-scopes=groups
```
To use the `--token` option:
```
$ kubectl config set-credentials oidc --token=${ID_TOKEN}
```
Point `kubectl` to the URL where the `capsule-proxy` service is reachable:
```
$ kubectl config set-cluster mycluster \
--server=https://kube.clastix.io \
--certificate-authority=~/.kube/ca.crt
```
Create a new context for the OIDC authenticated users:
```
$ kubectl config set-context alice-oidc@mycluster \
--cluster=mycluster \
--user=oidc
```
As user `alice`, you should be able to use `kubectl` to create some namespaces:
```
$ kubectl --context alice-oidc@mycluster create namespace oil-production
$ kubectl --context alice-oidc@mycluster create namespace oil-development
$ kubectl --context alice-oidc@mycluster create namespace gas-marketing
```
and list only those namespaces:
```
$ kubectl --context alice-oidc@mycluster get namespaces
NAME STATUS AGE
gas-marketing Active 2m
oil-development Active 2m
oil-production Active 2m
```
When logged in as a cluster-admin user, you should be able to see all namespaces:
```
$ kubectl get namespaces
NAME STATUS AGE
default Active 78d
kube-node-lease Active 78d
kube-public Active 78d
kube-system Active 78d
gas-marketing Active 2m
oil-development Active 2m
oil-production Active 2m
```
_Nota Bene_: once your `ID_TOKEN` expires, the `kubectl` OIDC Authenticator will attempt to automatically refresh your `ID_TOKEN` using the `REFRESH_TOKEN`, the `OIDC_CLIENT_ID`, and the `OIDC_CLIENT_SECRET`, storing the new values for the `REFRESH_TOKEN` and `ID_TOKEN` in your `kubeconfig` file. If the OIDC server uses a self-signed CA certificate, make sure to specify it with the `idp-certificate-authority` option in your `kubeconfig` file, otherwise you'll not be able to refresh the tokens. Once the `REFRESH_TOKEN` has expired, you will need to refresh the tokens manually.
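While the `REFRESH_TOKEN` is still valid, a new token pair can also be requested manually with the standard OIDC `refresh_token` grant; a sketch using the endpoint and variables from the Keycloak example above (all values are placeholders):

```shell
curl -k -s "https://${OIDC_ISSUER}/protocol/openid-connect/token" \
  -d grant_type=refresh_token \
  -d refresh_token="${REFRESH_TOKEN}" \
  -d client_id="${OIDC_CLIENT_ID}" \
  -d client_secret="${OIDC_CLIENT_SECRET}" | jq
```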


@@ -0,0 +1,337 @@
# Capsule Proxy
Capsule Proxy is an add-on for [Capsule](https://github.com/clastix/capsule), the operator providing multi-tenancy in Kubernetes.
## The problem
Kubernetes RBAC cannot list only the owned cluster-scoped resources since there are no ACL-filtered APIs. For example:
```
$ kubectl get namespaces
Error from server (Forbidden): namespaces is forbidden:
User "alice" cannot list resource "namespaces" in API group "" at the cluster scope
```
However, the user can have permissions on some namespaces
```
$ kubectl auth can-i [get|list|watch|delete] ns oil-production
yes
```
The reason, as the error message reports, is that the RBAC _list_ action is available only at the cluster scope and is not granted to users without appropriate permissions.
To overcome this problem, many Kubernetes distributions introduced mirrored custom resources backed by a custom set of ACL-filtered APIs. However, this radically changes the user experience of Kubernetes by introducing hard customizations that make it painful to move from one distribution to another.
With **Capsule**, we took a different approach. As one of the key goals, we want to keep the same user experience across all Kubernetes distributions. We want people to use the standard tools they already know and love, and it should just work.
## How it works
This project is an add-on to the main [Capsule](https://github.com/clastix/capsule) operator, so make sure you have a working instance of Capsule before attempting to install it.
Use the `capsule-proxy` only if you want Tenant Owners to list their own cluster-scoped resources.
The `capsule-proxy` implements a simple reverse proxy that intercepts only specific requests to the APIs server and Capsule does all the magic behind the scenes.
Current implementation filters the following requests:
* `api/v1/namespaces`
* `api/v1/nodes`
* `apis/storage.k8s.io/v1/storageclasses{/name}`
* `apis/networking.k8s.io/{v1,v1beta1}/ingressclasses{/name}`
* `apis/scheduling.k8s.io/{v1}/priorityclasses{/name}`
All other requests are proxied transparently to the APIs server, so no side-effects are expected. We're planning to add new APIs in the future, so PRs are welcome!
## Installation
The `capsule-proxy` can be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client to the APIs server.
Optionally, it can be deployed as a sidecar container in the backend of a dashboard.
Running outside a Kubernetes cluster is also viable, although a valid `KUBECONFIG` file must be provided, either via the `KUBECONFIG` environment variable or the default file in `$HOME/.kube/config`.
A Helm Chart is available [here](https://github.com/clastix/capsule/blob/master/charts/capsule/README.md).
## Does it work with kubectl?
Yes, it works by intercepting all the requests from the `kubectl` client directed to the APIs server. It works with both users who use the TLS certificate authentication and those who use OIDC.
## How RBAC is put in place?
Each Tenant owner's capabilities can be managed much like standard RBAC:
```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: my-tenant
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: IngressClasses
      operations:
      - List
```
The proxy setting `kind` is an __enum__ accepting the supported resources:
- `Nodes`
- `StorageClasses`
- `IngressClasses`
- `PriorityClasses`
Each resource kind can be granted several verbs, such as:
- `List`
- `Update`
- `Delete`
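As a sketch, a hypothetical tenant granting all three verbs on Nodes to its owner could look like this:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: my-tenant
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: Nodes
      operations:
      - List
      - Update
      - Delete
```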
### Namespaces
As tenant owner `alice`, you can use `kubectl` to create some namespaces:
```
$ kubectl --context alice-oidc@mycluster create namespace oil-production
$ kubectl --context alice-oidc@mycluster create namespace oil-development
$ kubectl --context alice-oidc@mycluster create namespace gas-marketing
```
and list only those namespaces:
```
$ kubectl --context alice-oidc@mycluster get namespaces
NAME STATUS AGE
gas-marketing Active 2m
oil-development Active 2m
oil-production Active 2m
```
### Nodes
The Capsule Proxy gives the owners the ability to access the nodes matching the `.spec.nodeSelector` in the Tenant manifest:
```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: Nodes
      operations:
      - List
  nodeSelector:
    kubernetes.io/hostname: capsule-gold-qwerty
```
```bash
$ kubectl --context alice-oidc@mycluster get nodes
NAME STATUS ROLES AGE VERSION
capsule-gold-qwerty Ready <none> 43h v1.19.1
```
> Warning: when no `nodeSelector` is specified, the tenant owner has access to all the nodes, according to the permissions listed in the `proxySettings` spec.
### Storage Classes
A Tenant may be limited to a set of allowed Storage Class resources, as follows.
```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: StorageClasses
      operations:
      - List
  storageClasses:
    allowed:
    - custom
    allowedRegex: "\\w+fs"
```
In the Kubernetes cluster we could have more Storage Class resources, some of them forbidden and not usable by the Tenant owner.
```bash
$ kubectl --context admin@mycluster get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
cephfs rook.io/cephfs Delete WaitForFirstConsumer false 21h
custom custom.tls/provisioner Delete WaitForFirstConsumer false 43h
default(standard) rancher.io/local-path Delete WaitForFirstConsumer false 43h
glusterfs rook.io/glusterfs Delete WaitForFirstConsumer false 54m
zol zfs-on-linux/zfs Delete WaitForFirstConsumer false 54m
```
The expected output using `capsule-proxy` is the retrieval of the `custom` Storage Class as well as the other ones matching the regex `\w+fs`.
```bash
$ kubectl --context alice-oidc@mycluster get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
cephfs rook.io/cephfs Delete WaitForFirstConsumer false 21h
custom custom.tls/provisioner Delete WaitForFirstConsumer false 43h
glusterfs rook.io/glusterfs Delete WaitForFirstConsumer false 54m
```
### Ingress Classes
As with Storage Classes, Ingress Classes can also be enforced.
```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: IngressClasses
      operations:
      - List
  ingressOptions:
    allowedClasses:
      allowed:
      - custom
      allowedRegex: "\\w+-lb"
```
In the Kubernetes cluster we could have more Ingress Class resources, some of them forbidden and not usable by the Tenant owner.
```bash
$ kubectl --context admin@mycluster get ingressclasses
NAME CONTROLLER PARAMETERS AGE
custom example.com/custom IngressParameters.k8s.example.com/custom 24h
external-lb example.com/external IngressParameters.k8s.example.com/external-lb 2s
haproxy-ingress haproxy.tech/ingress 4d
internal-lb       example.com/internal   IngressParameters.k8s.example.com/internal-lb   15m
nginx nginx.plus/ingress 5d
```
The expected output using `capsule-proxy` is the retrieval of the `custom` Ingress Class as well as the other ones matching the regex `\w+-lb`.
```bash
$ kubectl --context alice-oidc@mycluster get ingressclasses
NAME CONTROLLER PARAMETERS AGE
custom example.com/custom IngressParameters.k8s.example.com/custom 24h
external-lb example.com/external IngressParameters.k8s.example.com/external-lb 2s
internal-lb example.com/internal IngressParameters.k8s.example.com/internal-lb 15m
```
### Priority Classes
Allowed PriorityClasses assigned to a Tenant Owner can be enforced as follows.
```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - kind: User
    name: alice
    proxySettings:
    - kind: PriorityClasses
      operations:
      - List
  priorityClasses:
    allowed:
    - custom
    allowedRegex: "\\w+priority"
```
In the Kubernetes cluster we could have more PriorityClass resources, some of them forbidden and not usable by the Tenant owner.
```bash
$ kubectl --context admin@mycluster get priorityclasses.scheduling.k8s.io
NAME VALUE GLOBAL-DEFAULT AGE
custom 1000 false 18s
maxpriority 1000 false 18s
minpriority 1000 false 18s
nonallowed 1000 false 8m54s
system-cluster-critical 2000000000 false 3h40m
system-node-critical 2000001000 false 3h40m
```
The expected output using `capsule-proxy` is the retrieval of the `custom` PriorityClass as well as the other ones matching the regex `\w+priority`.
```bash
$ kubectl --context alice-oidc@mycluster get priorityclasses.scheduling.k8s.io
NAME VALUE GLOBAL-DEFAULT AGE
custom 1000 false 18s
maxpriority 1000 false 18s
minpriority 1000 false 18s
```
### Storage/Ingress class and PriorityClass required label
For Storage Class, Ingress Class, and Priority Class resources, the `name` label reflecting the resource name is mandatory; otherwise, the filtering of resources cannot be put in place.
```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    name: my-storage-class
  name: my-storage-class
provisioner: org.tld/my-storage-class
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    name: external-lb
  name: external-lb
spec:
  controller: example.com/ingress-controller
  parameters:
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  labels:
    name: best-effort
  name: best-effort
value: 1000
globalDefault: false
description: "Priority class for best-effort Tenants"
```
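If an existing resource is missing the required label, it can be added in place with `kubectl label`; a sketch using the resource names from the manifests above:

```shell
kubectl label storageclass my-storage-class name=my-storage-class
kubectl label ingressclass external-lb name=external-lb
kubectl label priorityclass best-effort name=best-effort
```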
# What's next
Have fun with `capsule-proxy`:
* [Standalone Installation](/docs/proxy/standalone)
* [Sidecar Installation](/docs/proxy/sidecar)
* [OIDC Authentication](/docs/proxy/oidc-auth)
* [Contributing](/docs/proxy/contributing)


@@ -0,0 +1,116 @@
# Sidecar Installation
The `capsule-proxy` can be deployed as a sidecar container for server-side Kubernetes dashboards. It intercepts all requests sent from the client side to the server side of the dashboard and proxies them to the Kubernetes APIs server.
```
capsule-proxy
+------------+ +------------+
|:9001 +------->|:6443 |
+------------+ +------------+
+-----------+ | | kube-apiserver
browser +------>+:443 +-------->+:8443 |
+-----------+ +------------+
ingress-controller dashboard backend
(ssl-passthrough)
```
In order to use this pattern, the server-side backend of your dashboard must allow specifying the URL of the Kubernetes APIs server. For example, the following manifest contains an excerpt for deploying the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard) with the Ingress Controller in ssl-passthrough mode.
Place the `capsule-proxy` in the pod in SSL mode, i.e. `--enable-ssl=true`, passing valid certificate and key files in a secret.
```yaml
...
template:
  metadata:
    labels:
      k8s-app: kubernetes-dashboard
  spec:
    containers:
    - name: ns-filter
      image: quay.io/clastix/capsule-proxy
      imagePullPolicy: IfNotPresent
      args:
      - --capsule-user-group=capsule.clastix.io
      - --zap-log-level=5
      - --enable-ssl=true
      - --ssl-cert-path=/opt/certs/tls.crt
      - --ssl-key-path=/opt/certs/tls.key
      volumeMounts:
      - name: ns-filter-certs
        mountPath: /opt/certs
      ports:
      - containerPort: 9001
        name: http
        protocol: TCP
...
```
In the same pod, place the Kubernetes Dashboard in _"out-of-cluster"_ mode with `--apiserver-host=https://localhost:9001` to send all the requests to the `capsule-proxy` sidecar container:
```yaml
...
    - name: dashboard
      image: kubernetesui/dashboard:v2.0.4
      imagePullPolicy: Always
      ports:
      - containerPort: 8443
        protocol: TCP
      args:
      - --auto-generate-certificates
      - --namespace=cmp-system
      - --tls-cert-file=tls.crt
      - --tls-key-file=tls.key
      - --apiserver-host=https://localhost:9001
      - --kubeconfig=/opt/.kube/config
      volumeMounts:
      - name: kubernetes-dashboard-certs
        mountPath: /certs
      - mountPath: /tmp
        name: tmp-volume
      - mountPath: /opt/.kube
        name: kubeconfig
      livenessProbe:
        httpGet:
          scheme: HTTPS
          path: /
          port: 8443
        initialDelaySeconds: 30
        timeoutSeconds: 30
...
```
Make sure you pass a valid `kubeconfig` file to the dashboard pointing to the `capsule-proxy` sidecar container instead of the `kube-apiserver` directly:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubernetes-dashboard-kubeconfig
  namespace: kubernetes-dashboard
data:
  config: |
    kind: Config
    apiVersion: v1
    clusters:
    - cluster:
        insecure-skip-tls-verify: true
        server: https://localhost:9001 # <- point to the capsule-proxy
      name: localhost
    contexts:
    - context:
        cluster: localhost
        user: kubernetes-admin # <- dashboard has cluster-admin permissions
      name: admin@localhost
    current-context: admin@localhost
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: REDACTED
        client-key-data: REDACTED
```
After starting the dashboard, log in as a Tenant Owner user, e.g. `alice`, according to the authentication method used, and check that you can see only the owned namespaces.
The `capsule-proxy` can also be deployed in standalone mode, to be used with command-line tools like `kubectl`. See [Standalone Installation](/docs/proxy/standalone).


@@ -0,0 +1,66 @@
# Standalone Installation
The `capsule-proxy` can be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client to the `kube-apiserver`. Use this mode to provide access to client-side command-line tools like `kubectl` or even client-side dashboards.
You can use an Ingress Controller to expose the `capsule-proxy` endpoint in SSL passthrough or, depending on your environment, expose it with either a `NodePort` or a `LoadBalancer` service. As further alternatives, use `HostPort` or `HostNetwork` mode.
```
+-----------+ +-----------+ +-----------+
kubectl ------>|:443 |--------->|:9001 |-------->|:6443 |
+-----------+ +-----------+ +-----------+
ingress-controller capsule-proxy kube-apiserver
(ssl-passthrough)
```
## Configure Capsule
Make sure you have a working instance of the Capsule Operator in your Kubernetes cluster before attempting to use `capsule-proxy`. Please, refer to the Capsule Operator [documentation](/docs/operator/overview) for instructions.
You should also have one or more tenants defined, e.g. `oil` and `gas`, assigned to the user `alice`.
As cluster admin, check the tenants exist:
```
$ kubectl get tenants
NAME NAMESPACE QUOTA NAMESPACE COUNT OWNER NAME OWNER KIND AGE
foo 3 1 joe User 4d
gas 3 0 alice User 1d
oil 9 0 alice User 1d
```
## Install Capsule Proxy
Create a secret in the target namespace containing the SSL certificate which `capsule-proxy` will use.
```
$ kubectl -n capsule-system create secret tls capsule-proxy --cert=tls.cert --key=tls.key
```
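If you don't have a certificate at hand, a self-signed pair suitable for testing can be generated with `openssl` (the CN below is an example hostname):

```shell
# Generate a self-signed certificate and key, valid for one year.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.cert \
  -days 365 -subj "/CN=kube.clastix.io"
```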
Then use the Helm Chart to install `capsule-proxy` in that namespace:
```bash
$ cat <<EOF | tee custom-values.yaml
options:
  enableSSL: true
ingress:
  enabled: true
  annotations:
    ingress.kubernetes.io/ssl-passthrough: 'true'
  hosts:
  - host: kube.clastix.io
    paths: [ "/" ]
EOF
$ helm install capsule-proxy clastix/capsule-proxy \
  --values custom-values.yaml \
  -n capsule-system
```
The `capsule-proxy` should be exposed with an Ingress in SSL passthrough mode and reachable at `https://kube.clastix.io`.
Users authenticating with TLS client certificates and keys are able to talk to `capsule-proxy`, since the current implementation of the reverse proxy forwards client certificates to the Kubernetes APIs server.
## RBAC Considerations
Currently, the service account used for `capsule-proxy` needs to have `cluster-admin` permissions.
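The Helm Chart takes care of creating the required RBAC; as a sketch, such a binding might look like the following (the service account name and namespace are assumptions based on the chart defaults):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: capsule-proxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: capsule-proxy
  namespace: capsule-system
```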
## Configuring client-only dashboards
If you're using a client-only dashboard, for example [Lens](https://k8slens.dev/), the `capsule-proxy` can be used as in the previous `kubectl` example, since Lens just needs a `kubeconfig` file. Assuming you use a `kubeconfig` file containing a valid OIDC token released for the `alice` user, you can access the cluster with the Lens dashboard and see only the namespaces belonging to Alice's tenants.
For web based dashboards, like the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard), the `capsule-proxy` can be installed as sidecar container. See [Sidecar Installation](/docs/proxy/sidecar).