chore(repo): pre-commit fixes (#1431)
* chore: add golint to pre-commit
* chore: move legacy docs
* chore: ran pre-commit
* chore: fix goreleaser regexps

Signed-off-by: Oliver Bähler <oliverbaehler@hotmail.com>

2  .github/ISSUE_TEMPLATE/bug_report.md  (vendored)

@@ -9,7 +9,7 @@ assignees: ''
<!--
Thanks for taking time reporting a Capsule bug!
-->

# Bug description

2  .github/ISSUE_TEMPLATE/feature_request.md  (vendored)

@@ -32,4 +32,4 @@ How would the new interaction with Capsule look like? E.g.
Feel free to add a diagram if that helps explain things.

# Expected behavior
A clear and concise description of what you expect to happen.

6  .github/configs/lintconf.yaml  (vendored)

@@ -6,6 +6,12 @@ ignore:
rules:
  truthy:
    level: warning
    allowed-values:
      - "true"
      - "false"
      - "on"
      - "off"
    check-keys: false
  braces:
    min-spaces-inside: 0

@@ -63,9 +63,9 @@ release:

[Review the Major Changes section first before upgrading to a new version](https://artifacthub.io/packages/helm/projectcapsule/capsule/{{ .Version }}#major-changes)

**Kubernetes compatibility**

> [!IMPORTANT]
> **Kubernetes compatibility**
>
> Note that the Capsule project offers support only for the latest minor version of Kubernetes.
> Backwards compatibility with older versions of Kubernetes and OpenShift is [offered by vendors](https://projectcapsule.dev/support/).
>

@@ -93,26 +93,27 @@ changelog:
      - Merge branch
  groups:
    # https://github.com/conventional-changelog/commitlint/tree/master/%40commitlint/config-conventional
    - title: '🛠 Dependency updates'
      regexp: '^.*?(feat|fix)\(deps\)!?:.+$'
      order: 300
    - title: '✨ New Features'
      regexp: '^.*?feat(\([[:word:]]+\))??!?:.+$'
      order: 100
    - title: '🐛 Bug fixes'
      regexp: '^.*?fix(\([[:word:]]+\))??!?:.+$'
      order: 200
    - title: '📖 Documentation updates'
      regexp: ^.*?docs(\([[:word:]]+\))??!?:.+$
      order: 400
    - title: '🛡️ Security updates'
      regexp: ^.*?(sec)(\([[:word:]]+\))??!?:.+$
      order: 500
    - title: '🚀 Build process updates'
      regexp: ^.*?(build|ci)(\([[:word:]]+\))??!?:.+$
      order: 600
    - title: '📦 Other work'
      order: 9999
    - title: '🛠 Dependency updates'
      regexp: '^fix\(deps\):|^feat\(deps\):'
      order: 300
    - title: '✨ New Features'
      regexp: '^feat(\([^)]*\))?:'
      order: 100
    - title: '🐛 Bug fixes'
      regexp: '^fix(\([^)]*\))?:'
      order: 200
    - title: '📖 Documentation updates'
      regexp: '^docs(\([^)]*\))?:'
      order: 400
    - title: '🛡️ Security updates'
      regexp: '^sec(\([^)]*\))?:'
      order: 500
    - title: '🚀 Build process updates'
      regexp: '^(build|ci)(\([^)]*\))?:'
      order: 600
    - title: '📦 Other work'
      regexp: '^chore(\([^)]*\))?:|^chore:'
      order: 9999
  sboms:
    - artifacts: archive
  signs:

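
A quick way to sanity-check changelog patterns like the anchored ones in the second block is to run a few sample commit subjects through `grep -E`. This is a minimal local sketch; the subjects below are made up for illustration and are not taken from the repository history:

```shell
# Rough local check of the anchored changelog patterns against sample commit
# subjects (the subjects are illustrative only).
printf '%s\n' \
  'feat(api): add tenant quota status' \
  'fix(deps): bump controller-runtime' \
  'docs: clarify upgrade notes' \
  'chore(repo): pre-commit fixes' > /tmp/subjects.txt

grep -E '^feat(\([^)]*\))?:' /tmp/subjects.txt            # New Features
grep -E '^fix\(deps\):|^feat\(deps\):' /tmp/subjects.txt  # Dependency updates
grep -E '^chore(\([^)]*\))?:|^chore:' /tmp/subjects.txt   # Other work
```
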
@@ -9,7 +9,6 @@ repos:
    rev: v5.0.0
    hooks:
      - id: check-executables-have-shebangs
      - id: check-yaml
      - id: double-quote-string-fixer
      - id: end-of-file-fixer
      - id: trailing-whitespace

@@ -35,24 +34,8 @@ repos:
        entry: make helm-lint
        language: system
        files: ^charts/
      # Currently too slow smw
      # - id: golangci-lint
      #   name: Execute golangci-lint
      #   entry: make golint
      #   language: system
      #   files: \.go$
      # - repo: https://github.com/tekwizely/pre-commit-golang
      #   rev: v1.0.0-rc.1
      #   hooks:
      #     - id: go-vet
      #     - id: go-vet-mod
      #     - id: go-vet-pkg
      #     - id: go-vet-repo-mod
      #     - id: go-vet-repo-pkg
      #     - id: go-revive
      #     - id: go-revive-mod
      #     - id: go-revive-repo-mod
      #     - id: go-sec-mod
      #     - id: go-sec-pkg
      #     - id: go-sec-repo-mod
      #     - id: go-sec-repo-pkg
      - id: golangci-lint
        name: Execute golangci-lint
        entry: make golint
        language: system
        files: \.go$

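
With the commented-out alternatives replaced by a single system hook, the lint can be exercised locally through pre-commit itself. A minimal sketch, assuming `pre-commit` and `golangci-lint` are installed (the hook simply shells out to `make golint`):

```shell
# Sketch: run the configured hooks locally.
pre-commit install                          # register the git hook once
pre-commit run --all-files                  # every configured hook
pre-commit run golangci-lint --all-files    # only the new Go lint hook
```
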
@@ -2,6 +2,8 @@

This is a list of companies that have adopted Capsule, feel free to open a Pull-Request to get yours listed.

[See all on the website](https://projectcapsule.dev/adopters/)

## Adopters list (alphabetically)

### [Bedag Informatik AG](https://www.bedag.ch/)

@@ -7,4 +7,4 @@ See the [Releases](https://github.com/projectcapsule/capsule/releases)

## Helm Chart

For the helm chart, a dedicated changelog is created based on the chart's annotations ([See](./DEVELOPMENT.md#helm-changelog)).

@@ -45,7 +45,7 @@ Prereleases are marked as `-rc.x` (release candidate) and may refere to any type

The pull request title is checked according to the described [semantics](#semantics) (pull requests don't require a scope). However pull requests are currently not used to generate the changelog. Check if your pull requests body meets the following criteria:

- reference a previously opened issue: https://docs.github.com/en/github/writing-on-github/autolinked-references-and-urls#issues-and-pull-requests
- splitting changes into several and documented small commits
- limit the git subject to 50 characters and write as the continuation of the
  sentence "If applied, this commit will ..."

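
As a purely illustrative example of a commit that satisfies these criteria (subject under 50 characters, phrased as the continuation of "If applied, this commit will ...", body referencing an issue):

```shell
# Illustrative only: the subject stays under 50 characters and reads as
# "If applied, this commit will deny pods from untrusted registries".
git commit -m 'fix(webhook): deny pods from untrusted registries' \
           -m 'Refs #<issue-number>'
```
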
@@ -104,7 +104,7 @@ To reorganise your commits, do the following (or use your way of doing it):

1. Pull upstream changes

   ```bash
   git remote add upstream git@github.com:projectcapsule/capsule.git
   git pull upstream main
   ```

@@ -186,4 +186,3 @@ The following types are allowed for commits and pull requests:

* `fix`: bug fixes
* `test`: test related changes
* `sec`: security related changes

@@ -23,10 +23,10 @@ Capsule maintainers must follow these guidelines when consuming third-party packages:

When adding a new third-party package to Capsule, maintainers must follow these steps:

1. Evaluate the need for the package. Is it necessary for the functionality of Capsule?
2. Research the package. Is it well-maintained? Does it have a good reputation?
3. Choose a version of the package. Use the latest version whenever possible.
4. Pin the package to the specific version in the Capsule codebase.
5. Update the Capsule documentation to reflect the new dependency.

## Archive/Deprecation

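
In Go terms, pinning usually happens through the module files; a minimal sketch with a placeholder package path and version:

```shell
# Sketch: pin a new dependency to an explicit version (module path and version
# are placeholders) and record it in go.mod / go.sum.
go get github.com/example/somepkg@v1.2.3
go mod tidy
git diff go.mod go.sum   # review the pinned version before committing
```
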
@@ -60,7 +60,7 @@ To achieve that, there are some necessary steps we need to walk through, which have been made as a `make` target within our `Makefile`.

So the TL;DR answer is:

**Make sure a *KinD* cluster is running on your laptop, and then run `make dev-setup` to setup the dev environment.**. This is not done in the `make dev-setup` setup.

```bash
# If you haven't installed or run `make deploy` before, do it first
```

@@ -222,12 +222,12 @@ time="2023-10-23T13:45:08Z" level=info msg="Found Chart directories [charts/capsule]"

```
time="2023-10-23T13:45:08Z" level=info msg="Generating README Documentation for chart /helm-docs/charts/capsule"
```

This will update the documentation for the chart in the `README.md` file.

### Helm Changelog

The `version` of the chart does not require a bump, since it's driven by our release process. The `appVersion` of the chart is the version of the Capsule project. This is the version that should be bumped when a new Capsule version is released. This will be done by the maintainers.

To create the proper changelog for the helm chart, all changes which affect the helm chart must be documented as chart annotation. See all the available [chart annotations](https://artifacthub.io/docs/topics/annotations/helm/).

This annotation can be provided using two different formats: using a plain list of strings with the description of the change or using a list of objects with some extra structured information (see example below). Please feel free to use the one that better suits your needs. The UI experience will be slightly different depending on the choice. When using the list of objects option the valid supported kinds are `added`, `changed`, `deprecated`, `removed`, `fixed` and `security`.

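
A hedged sketch of what the structured variant of that annotation could look like in the chart's `Chart.yaml` (the descriptions are placeholders, not real changes):

```shell
# Printed here for illustration; in practice this lives under `annotations:`
# in charts/capsule/Chart.yaml.
cat <<'EOF'
annotations:
  artifacthub.io/changes: |
    - kind: added
      description: Example of a newly added chart value
    - kind: fixed
      description: Example of a chart template fix
EOF
```
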
@@ -77,7 +77,7 @@ Maintainers who are selected will be granted the necessary GitHub rights.
Maintainers may resign at any time if they feel that they will not be able to
continue fulfilling their project duties.

Maintainers may also be removed after being inactive, failure to fulfill their
Maintainer responsibilities, violating the Code of Conduct, or other reasons.
A Maintainer may be removed at any time by a 2/3 vote of the remaining maintainers.

@@ -88,7 +88,7 @@ and can be rapidly returned to Maintainer status if their availability changes.
## Meetings

Time zones permitting, Maintainers are expected to participate in the public
developer meeting and/or public discussions.

Maintainers will also have closed meetings in order to discuss security reports
or Code of Conduct violations. Such meetings should be scheduled by any

@@ -110,7 +110,7 @@ violations by community members will be discussed and resolved in private Maintainer

The Maintainers will appoint a Security Response Team to handle security reports.
This committee may simply consist of the Maintainer Council themselves. If this
responsibility is delegated, the Maintainers will appoint a team of at least two
contributors to handle it. The Maintainers will review who is assigned to this
at least once a year.

@@ -119,15 +119,15 @@ holes and breaches according to the [security policy](TODO:Link to security.md).

## Voting

While most business in Capsule Project is conducted by "[lazy consensus](https://community.apache.org/committers/lazyConsensus.html)",
periodically the Maintainers may need to vote on specific actions or changes.
Any Maintainer may demand a vote be taken.

Most votes require a simple majority of all Maintainers to succeed, except where
otherwise noted. Two-thirds majority votes mean at least two-thirds of all
existing maintainers.

## Modifying this Charter

Changes to this Governance and its supporting documents may be approved by
a 2/3 vote of the Maintainers.

@@ -10,4 +10,4 @@ The current Maintainers Group for the [TODO: Projectname] Project consists of:

This list must be kept in sync with the [CNCF Project Maintainers list](https://github.com/cncf/foundation/blob/master/project-maintainers.csv).

See [the project Governance](GOVERNANCE.md) for how maintainers are selected and replaced.

@@ -1,3 +1,3 @@
# Roadmap

future features and fixes are planned with [release milestones on GitHub](https://github.com/projectcapsule/capsule/milestones?direction=asc&sort=due_date&state=open). You can influence the roadmap by opening issues or joining our community meetings.

@@ -81,7 +81,7 @@ Capsule was accepted as a CNCF sandbox project in December 2022.
It's the Operator which provides all the multi-tenant capabilities offered by Capsule.
It's made of two internal components, such as the webhooks server (known as _policy engine_), and the _tenant controller_.

**Capsule Tenant Controller**

The controller is responsible for managing the tenants by reconciling the required objects at the Namespace level, such as _Network Policy_, _LimitRange_, _ResourceQuota_, _Role Binding_, as well as labelling the Namespace objects belonging to a Tenant according to their desired metadata.
It is responsible for binding Namespaces to the selected Tenant, and managing their lifecycle.

@@ -90,10 +90,10 @@ Furthermore, the manager can replicate objects thanks to the **Tenant Resource**

The replicated resources are dynamically created, and replicated by Capsule itself, as well as preserving the deletion of these objects by the Tenant owner.

**Capsule Tenant Controller (Policy Engine)**

Policies are defined on a Tenant basis: therefore the policy engine is enforcing these policies on the tenants's Namespaces and their children's resources.
The Policy Engine is currently not a dedicated component, but a part of the Capsule Tenant Controller.

The webhook server, also known as the policy engine, interpolates the Tenant rules and takes full advantage of the dynamic admission controllers offered by Kubernetes itself (such as `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration`).
Thanks to the _policy engine_ the cluster administrators can enforce specific rules such as preventing _Pod_ objects from untrusted registries to run or preventing the creation of _PersistentVolumeClaim_ resources using a non-allowed _StorageClass_, etc.

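
Because the policy engine is delivered through dynamic admission webhooks, its footprint can be inspected directly with `kubectl`. A sketch; the object names below assume a default Helm installation of Capsule:

```shell
# Sketch: list the admission webhook configurations backing the policy engine.
kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep -i capsule
kubectl get validatingwebhookconfiguration capsule-validating-webhook-configuration -o yaml
```
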
@@ -152,7 +152,7 @@ This is a further abstraction from having cluster defaults (eg. default `StorageClass`)

**General**

* **Control Plane**: Capsule can't mimic for each tenant a feeling of a dedicated control plane.

* **Custom Resource Definitions**: Capsule doesn't want to provide virtual cluster capabilities and it's sticking to the native Kubernetes user experience and design; rather, its focus is to provide a governance solution by focusing on resource optimization and security lockdown.

@@ -11,4 +11,4 @@ spec:
      {{- include "capsule.webhooks.service" (dict "path" "/convert" "ctx" $) | nindent 8 }}
    conversionReviewVersions:
      - v1beta1
      - v1beta2

@@ -11,4 +11,4 @@ spec:
      {{- include "capsule.webhooks.service" (dict "path" "/convert" "ctx" $) | nindent 8 }}
    conversionReviewVersions:
      - v1beta1
      - v1beta2

@@ -154,5 +154,3 @@ Capsule Webhook endpoint CA Bundle
caBundle: {{ $.Values.webhooks.service.caBundle -}}
{{- end -}}
{{- end -}}

@@ -28,7 +28,7 @@ spec:
    - {{ include "capsule.fullname" . }}-webhook-service.{{ .Release.Namespace }}.svc
    - {{ include "capsule.fullname" . }}-webhook-service.{{ .Release.Namespace }}.svc.cluster.local
    {{- range .Values.certManager.additionalSANS }}
    - {{ toYaml . }}
    {{- end }}
  issuerRef:
    kind: Issuer

@@ -26,4 +26,3 @@ spec:
    {{- toYaml . | nindent 4 }}
  {{- end }}
{{- end }}

@@ -13,5 +13,3 @@ crd-install-hook
{{- define "capsule.crds.regexReplace" -}}
{{- printf "%s" ($ | base | trimSuffix ".yaml" | regexReplaceAll "[_.]" "-") -}}
{{- end }}

@@ -53,4 +53,4 @@ data:
{{- end }}
{{ end }}
{{- end }}
{{- end }}
{{- end }}

@@ -47,7 +47,7 @@ spec:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $Values.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $Values.priorityClassName }}

@@ -56,7 +56,7 @@ spec:
      {{- with $Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- end }}
      serviceAccountName: {{ include "capsule.crds.name" . }}
      containers:
        - name: crds-hook

@@ -98,4 +98,4 @@ spec:
            path: {{ $path | base }}
          {{- end }}
        {{- end }}
{{- end }}
{{- end }}

@@ -49,4 +49,4 @@ subjects:
  - kind: ServiceAccount
    name: {{ include "capsule.crds.name" . }}
    namespace: {{ .Release.Namespace | quote }}
{{- end }}
{{- end }}

@@ -11,4 +11,4 @@ metadata:
  labels:
    app.kubernetes.io/component: {{ include "capsule.crds.component" . | quote }}
    {{- include "capsule.labels" . | nindent 4 }}
{{- end }}
{{- end }}

@@ -31,7 +31,7 @@ webhooks:
      - pods
    scope: "Namespaced"
  namespaceSelector:
    {{- toYaml .namespaceSelector | nindent 4}}
  sideEffects: None
  timeoutSeconds: {{ $.Values.webhooks.mutatingWebhooksTimeoutSeconds }}
{{- end }}

@@ -53,11 +53,11 @@ webhooks:
      - persistentvolumeclaims
    scope: "Namespaced"
  namespaceSelector:
    {{- toYaml .namespaceSelector | nindent 4}}
  sideEffects: None
  timeoutSeconds: {{ $.Values.webhooks.mutatingWebhooksTimeoutSeconds }}
{{- end }}
{{- with .Values.webhooks.hooks.defaults.ingress }}
- admissionReviewVersions:
    - v1
  clientConfig:

@@ -81,7 +81,7 @@ webhooks:
  sideEffects: None
  timeoutSeconds: {{ $.Values.webhooks.mutatingWebhooksTimeoutSeconds }}
{{- end }}
{{- with .Values.webhooks.hooks.namespaceOwnerReference }}
- admissionReviewVersions:
    - v1
    - v1beta1

@@ -9,4 +9,3 @@
{{- define "capsule.post-install.component" -}}
post-install-hook
{{- end }}

@@ -44,7 +44,7 @@ spec:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $Values.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $Values.priorityClassName }}

@@ -59,7 +59,7 @@ spec:
        - name: post-install
          image: {{ include "capsule.jobsFullyQualifiedDockerImage" . }}
          imagePullPolicy: {{ $Values.image.pullPolicy }}
          command:
            - "sh"
            - "-c"
            - |

@@ -81,4 +81,4 @@ spec:
            {{- toYaml . | nindent 10 }}
          {{- end }}
{{- end }}
{{- end }}
{{- end }}

@@ -41,4 +41,4 @@ subjects:
    name: {{ include "capsule.post-install.name" . }}
    namespace: {{ .Release.Namespace | quote }}
{{- end }}
{{- end }}
{{- end }}

@@ -12,4 +12,4 @@ metadata:
    app.kubernetes.io/component: {{ include "capsule.post-install.component" . | quote }}
    {{- include "capsule.labels" . | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}

@@ -12,4 +12,3 @@
{{- define "capsule.pre-delete.component" -}}
pre-delete-hook
{{- end }}

@@ -44,7 +44,7 @@ spec:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $Values.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $Values.priorityClassName }}

@@ -82,4 +82,4 @@ spec:
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}
{{- end }}
{{- end }}

@@ -87,4 +87,4 @@ subjects:
  - kind: ServiceAccount
    name: {{ include "capsule.pre-delete.name" . }}
    namespace: {{ .Release.Namespace | quote }}
{{- end }}
{{- end }}

@@ -11,4 +11,4 @@ metadata:
  labels:
    app.kubernetes.io/component: {{ include "capsule.pre-delete.component" . | quote }}
    {{- include "capsule.labels" . | nindent 4 }}
{{- end }}
{{- end }}

@@ -29,7 +29,7 @@ spec:
      {{- with .relabelings }}
      relabelings: {{- toYaml . | nindent 6 }}
      {{- end }}
    {{- end }}
  {{- end }}
  jobLabel: app.kubernetes.io/name
  {{- with .Values.serviceMonitor.targetLabels }}
  targetLabels: {{- toYaml . | nindent 4 }}

@@ -46,4 +46,3 @@ spec:
      - {{ .Release.Namespace }}
{{- end }}
{{- end }}

@@ -274,4 +274,4 @@ webhooks:
  sideEffects: None
  timeoutSeconds: {{ $.Values.webhooks.validatingWebhooksTimeoutSeconds }}
{{- end }}
{{- end }}
{{- end }}

@@ -16,5 +16,5 @@ const Configuration = {
  helpUrl:
    'https://github.com/projectcapsule/capsule/blob/main/CONTRIBUTING.md#commits',
};

module.exports = Configuration;

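
The same configuration can be exercised locally against the most recent commit. A sketch assuming Node.js is available and that the file above is the repository's commitlint configuration (the config file name is an assumption):

```shell
# Sketch: lint the last commit message against the configuration above.
npx --yes @commitlint/cli --config commitlint.config.js --from HEAD~1 --to HEAD
```
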
8  docs/.gitignore  (vendored)

@@ -1,8 +0,0 @@
*.log
.cache
.DS_Store
src/.temp
node_modules
dist
.env
.env.*

@@ -1,12 +0,0 @@
# Capsule Documentation

1. Ensure to have [`yarn`](https://classic.yarnpkg.com/lang/en/docs/install/#debian-stable) installed in your path.
2. `yarn install`

## Local development

```shell
yarn develop
```

This will create a local webserver listening on `localhost:8080` with hot-reload of your local changes.

(Four images under docs/ removed: 29 KiB, 294 KiB, 283 KiB, 111 KiB)

@@ -1,347 +0,0 @@
# Capsule Development

## Prerequisites

Make sure you have these tools installed:

- [Go 1.19+](https://golang.org/dl/)
- [Operator SDK 1.7.2+](https://github.com/operator-framework/operator-sdk), or [Kubebuilder](https://github.com/kubernetes-sigs/kubebuilder)
- [KinD](https://github.com/kubernetes-sigs/kind) or [k3d](https://k3d.io/), with `kubectl`
- [ngrok](https://ngrok.com/) (if you want to run locally with remote Kubernetes)
- [golangci-lint](https://github.com/golangci/golangci-lint)
- OpenSSL

## Setup a Kubernetes Cluster

A lightweight Kubernetes within your laptop can be very handy for Kubernetes-native development like Capsule.

### By `k3d`

```shell
# Install K3d cli by brew in Mac, or your preferred way
$ brew install k3d

# Export your laptop's IP, e.g. retrieving it by: ifconfig
# Do change this IP to yours
$ export LAPTOP_HOST_IP=192.168.10.101

# Spin up a bare minimum cluster
# Refer to here for more options: https://k3d.io/v4.4.8/usage/commands/k3d_cluster_create/
$ k3d cluster create k3s-capsule --servers 1 --agents 1 --no-lb --k3s-server-arg --tls-san=${LAPTOP_HOST_IP}

# Get Kubeconfig
$ k3d kubeconfig get k3s-capsule > /tmp/k3s-capsule && export KUBECONFIG="/tmp/k3s-capsule"

# This will create a cluster with 1 server and 1 worker node
$ kubectl get nodes
NAME                       STATUS   ROLES                  AGE     VERSION
k3d-k3s-capsule-server-0   Ready    control-plane,master   2m13s   v1.21.2+k3s1
k3d-k3s-capsule-agent-0    Ready    <none>                 2m3s    v1.21.2+k3s1

# Or 2 Docker containers if you view it from Docker perspective
$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS          PORTS                     NAMES
5c26ad840c62   rancher/k3s:v1.21.2-k3s1   "/bin/k3s agent"         53 seconds ago   Up 45 seconds                             k3d-k3s-capsule-agent-0
753998879b28   rancher/k3s:v1.21.2-k3s1   "/bin/k3s server --t…"   53 seconds ago   Up 51 seconds   0.0.0.0:49708->6443/tcp   k3d-k3s-capsule-server-0
```

### By `kind`

```shell
# Install kind cli by brew in Mac, or your preferred way
$ brew install kind

# Prepare a kind config file with necessary customization
$ cat > kind.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "0.0.0.0"
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: ClusterConfiguration
        metadata:
          name: config
        apiServer:
          certSANs:
            - localhost
            - 127.0.0.1
            - kubernetes
            - kubernetes.default.svc
            - kubernetes.default.svc.cluster.local
            - kind
            - 0.0.0.0
            - ${LAPTOP_HOST_IP}
  - role: worker
EOF

# Spin up a bare minimum cluster with 1 master 1 worker node
$ kind create cluster --name kind-capsule --config kind.yaml

# This will create a cluster with 1 server and 1 worker node
$ kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
kind-capsule-control-plane   Ready    control-plane,master   84s   v1.21.1
kind-capsule-worker          Ready    <none>                 56s   v1.21.1

# Or 2 Docker containers if you view it from Docker perspective
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED              STATUS              PORTS                     NAMES
7b329fd3a838   kindest/node:v1.21.1   "/usr/local/bin/entr…"   About a minute ago   Up About a minute   0.0.0.0:54894->6443/tcp   kind-capsule-control-plane
7d50f1633555   kindest/node:v1.21.1   "/usr/local/bin/entr…"   About a minute ago   Up About a minute                             kind-capsule-worker
```

## Fork, build, and deploy Capsule

The `fork-clone-contribute-pr` flow is common for contributing to OSS projects like Kubernetes and Capsule.

Let's assume you've forked it into your GitHub namespace, say `myuser`, and then you can clone it with Git protocol.
Do remember to change the `myuser` to yours.

```shell
$ git clone git@github.com:myuser/capsule.git && cd capsule
```

It's a good practice to add the upstream as the remote too so we can easily fetch and merge the upstream to our fork:

```shell
$ git remote add upstream https://github.com/projectcapsule/capsule.git
$ git remote -vv
origin    git@github.com:myuser/capsule.git (fetch)
origin    git@github.com:myuser/capsule.git (push)
upstream  https://github.com/projectcapsule/capsule.git (fetch)
upstream  https://github.com/projectcapsule/capsule.git (push)
```

Pull all tags

```
$ git fetch --all && git pull upstream
```

Build and deploy:

```shell
# Download the project dependencies
$ go mod download

# Build the Capsule image
$ make docker-build

# Retrieve the built image version
$ export CAPSULE_IMAGE_VESION=`docker images --format '{{.Tag}}' clastix/capsule`

# If k3s, load the image into cluster by
$ k3d image import --cluster k3s-capsule capsule clastix/capsule:${CAPSULE_IMAGE_VESION}
# If Kind, load the image into cluster by
$ kind load docker-image --name kind-capsule clastix/capsule:${CAPSULE_IMAGE_VESION}

# deploy all the required manifests
# Note: 1) please retry if you saw errors; 2) if you want to clean it up first, run: make remove
$ make deploy

# Make sure the controller is running
$ kubectl get pod -n capsule-system
NAME                                          READY   STATUS    RESTARTS   AGE
capsule-controller-manager-5c6b8445cf-566dc   1/1     Running   0          23s

# Check the logs if needed
$ kubectl -n capsule-system logs --all-containers -l control-plane=controller-manager

# You may have a try to deploy a Tenant too to make sure it works end to end
$ kubectl apply -f - <<EOF
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  - name: system:serviceaccount:capsule-system:default
    kind: ServiceAccount
EOF

# There shouldn't be any errors and you should see the newly created tenant
$ kubectl get tenants
NAME   STATE    NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR   AGE
oil    Active                     0                                 14s
```

If you want to test namespace creation or such stuff, make sure to use impersonation:

```sh
$ kubectl ... --as system:serviceaccount:capsule-system:default --as-group capsule.clastix.io
```

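
For instance, creating a namespace on behalf of the ServiceAccount owner declared in the `oil` tenant above could look like this sketch (the namespace name is only illustrative):

```shell
# Sketch: impersonate the ServiceAccount owner of the `oil` tenant defined above.
kubectl create namespace oil-dev \
  --as system:serviceaccount:capsule-system:default \
  --as-group capsule.clastix.io
```
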
As of now, a complete Capsule environment has been set up in `kind`- or `k3d`-powered cluster, and the `capsule-controller-manager` is running as a deployment serving as:

- The reconcilers for CRDs and;
- A series of webhooks

## Setup the development environment

During development, we prefer that the code is running within our IDE locally, instead of running as the normal Pod(s) within the Kubernetes cluster.

Such a setup can be illustrated as below diagram:



To achieve that, there are some necessary steps we need to walk through, which have been made as a `make` target within our `Makefile`.

So the TL;DR answer is:

```shell
# If you haven't installed or run `make deploy` before, do it first
# Note: please retry if you saw errors
$ make deploy

# To retrieve your laptop's IP and execute `make dev-setup` to setup dev env
# For example: LAPTOP_HOST_IP=192.168.10.101 make dev-setup
$ LAPTOP_HOST_IP="<YOUR_LAPTOP_IP>" make dev-setup
```

This is a very common setup for typical Kubernetes Operator development so we'd better walk them through with more details here.

1. Scaling down the deployed Pod(s) to 0

   We need to scale the existing replicas of `capsule-controller-manager` to 0 to avoid reconciliation competition between the Pod(s) and the code running outside of the cluster, in our preferred IDE for example.

   ```shell
   $ kubectl -n capsule-system scale deployment capsule-controller-manager --replicas=0
   deployment.apps/capsule-controller-manager scaled
   ```

2. Preparing TLS certificate for the webhooks

   Running webhooks requires TLS, we can prepare the TLS key pair in our development env to handle HTTPS requests.

   ```shell
   # Prepare a simple OpenSSL config file
   # Do remember to export LAPTOP_HOST_IP before running this command
   $ cat > _tls.cnf <<EOF
   [ req ]
   default_bits       = 4096
   distinguished_name = req_distinguished_name
   req_extensions     = req_ext
   [ req_distinguished_name ]
   countryName         = SG
   stateOrProvinceName = SG
   localityName        = SG
   organizationName    = CAPSULE
   commonName          = CAPSULE
   [ req_ext ]
   subjectAltName = @alt_names
   [alt_names]
   IP.1 = ${LAPTOP_HOST_IP}
   EOF

   # Create this dir to mimic the Pod mount point
   $ mkdir -p /tmp/k8s-webhook-server/serving-certs

   # Generate the TLS cert/key under /tmp/k8s-webhook-server/serving-certs
   $ openssl req -newkey rsa:4096 -days 3650 -nodes -x509 \
       -subj "/C=SG/ST=SG/L=SG/O=CAPSULE/CN=CAPSULE" \
       -extensions req_ext \
       -config _tls.cnf \
       -keyout /tmp/k8s-webhook-server/serving-certs/tls.key \
       -out /tmp/k8s-webhook-server/serving-certs/tls.crt

   # Clean it up
   $ rm -f _tls.cnf
   ```

3. Patching the Webhooks

   By default, the webhooks will be registered with the services, which will route to the Pods, inside the cluster.

   We need to _delegate_ the controllers' and webhooks' services to the code running in our IDE by patching the `MutatingWebhookConfiguration` and `ValidatingWebhookConfiguration`.

   ```shell
   # Export your laptop's IP with the 9443 port exposed by controllers/webhooks' services
   $ export WEBHOOK_URL="https://${LAPTOP_HOST_IP}:9443"

   # Export the cert we just generated as the CA bundle for webhook TLS
   $ export CA_BUNDLE=`openssl base64 -in /tmp/k8s-webhook-server/serving-certs/tls.crt | tr -d '\n'`

   # Patch the MutatingWebhookConfiguration webhook
   $ kubectl patch MutatingWebhookConfiguration capsule-mutating-webhook-configuration \
       --type='json' -p="[\
       {'op': 'replace', 'path': '/webhooks/0/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/mutate-v1-namespace-owner-reference\",'caBundle':\"${CA_BUNDLE}\"}}\
       ]"

   # Verify it if you want
   $ kubectl get MutatingWebhookConfiguration capsule-mutating-webhook-configuration -o yaml

   # Patch the ValidatingWebhookConfiguration webhooks
   # Note: there is a list of validating webhook endpoints, not just one
   $ kubectl patch ValidatingWebhookConfiguration capsule-validating-webhook-configuration \
       --type='json' -p="[\
       {'op': 'replace', 'path': '/webhooks/0/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/cordoning\",'caBundle':\"${CA_BUNDLE}\"}},\
       {'op': 'replace', 'path': '/webhooks/1/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/ingresses\",'caBundle':\"${CA_BUNDLE}\"}},\
       {'op': 'replace', 'path': '/webhooks/2/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/namespaces\",'caBundle':\"${CA_BUNDLE}\"}},\
       {'op': 'replace', 'path': '/webhooks/3/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/networkpolicies\",'caBundle':\"${CA_BUNDLE}\"}},\
       {'op': 'replace', 'path': '/webhooks/4/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/pods\",'caBundle':\"${CA_BUNDLE}\"}},\
       {'op': 'replace', 'path': '/webhooks/5/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/persistentvolumeclaims\",'caBundle':\"${CA_BUNDLE}\"}},\
       {'op': 'replace', 'path': '/webhooks/6/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/services\",'caBundle':\"${CA_BUNDLE}\"}},\
       {'op': 'replace', 'path': '/webhooks/7/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/tenants\",'caBundle':\"${CA_BUNDLE}\"}},\
       {'op': 'replace', 'path': '/webhooks/8/clientConfig', 'value':{'url':\"${WEBHOOK_URL}/nodes\",'caBundle':\"${CA_BUNDLE}\"}}\
       ]"

   # Verify it if you want
   $ kubectl get ValidatingWebhookConfiguration capsule-validating-webhook-configuration -o yaml
   ```

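
Once the controller is running locally (next section), a quick hedged check that the patched endpoint really reaches your laptop is to probe it yourself:

```shell
# Sketch: probe the locally served webhook endpoint (self-signed cert, hence -k).
# A running local controller should answer with an HTTP status code rather than
# a connection error.
curl -k -s -o /dev/null -w '%{http_code}\n' "https://${LAPTOP_HOST_IP}:9443/namespaces"
```
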
## Run Capsule outside the cluster

Now we can run Capsule controllers with webhooks outside of the Kubernetes cluster:

```shell
$ export NAMESPACE=capsule-system && export TMPDIR=/tmp/
$ go run .
```

To verify that, we can open a new console and create a new Tenant:

```shell
$ kubectl apply -f - <<EOF
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: gas
spec:
  owners:
  - name: alice
    kind: User
EOF
```

We should see output and logs in the `make run` console.

Now it's time to work through our familiar inner loop for development in our preferred IDE. For example, if you're using [Visual Studio Code](https://code.visualstudio.com), this `launch.json` file can be a good start.

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch",
      "type": "go",
      "request": "launch",
      "mode": "auto",
      "program": "${workspaceFolder}",
      "args": [
        "--zap-encoder=console",
        "--zap-log-level=debug",
        "--configuration-name=capsule-default"
      ],
      "env": {
        "NAMESPACE": "capsule-system",
        "TMPDIR": "/tmp/"
      }
    }
  ]
}
```

@@ -1,24 +0,0 @@
# Project Governance

This document lays out the guidelines under which the Capsule project will be governed.
The goal is to make sure that the roles and responsibilities are well defined and clarify how decisions are made.

## Roles

In the context of Capsule project, we consider the following roles:

* __Users__: everyone using Capsule, typically willing to provide feedback by proposing features and/or filing issues.

* __Contributors__: everyone contributing code, documentation, examples, tests, and participating in feature proposals as well as design discussions.

* __Maintainers__: are responsible for engaging with and assisting contributors to iterate on the contributions until it reaches acceptable quality. Maintainers can decide whether the contributions can be accepted into the project or rejected.

## Release Management

The release process will be governed by Maintainers.

Please, refer to the [maintainers file](https://github.com/projectcapsule/capsule/blob/master/.github/maintainers.yaml) available in the source code.

## Roadmap Planning

Maintainers will share roadmap and release versions as milestones in GitHub.

@@ -1,111 +0,0 @@
# Contributing Guidelines

Thank you for your interest in contributing to Capsule. Whether it's a bug report, new feature, correction, or additional documentation, we greatly value feedback and contributions from our community.

Please read through this document before submitting any issues or pull requests to ensure we have all the necessary information to effectively respond to your bug report or contribution.

## Pull Requests

Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:

1. You are working against the latest source on the *master* branch.
1. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
1. You open an issue to discuss any significant work: we would hate for your time to be wasted.

To send us a pull request, please:

1. Fork the repository.
1. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it
   will be hard for us to focus on your change.
1. Ensure local tests pass.
1. Commit to your fork using clear commit messages.
1. Send us a pull request, answering any default questions in the pull request interface.
1. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.

GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).

Make sure to keep Pull Requests small and functional to make them easier to review, understand, and look up in commit history. This repository uses "Squash and Commit" to keep our history clean and make it easier to revert changes based on PR.

Adding the appropriate documentation, unit tests and e2e tests as part of a feature is the responsibility of the
feature owner, whether it is done in the same Pull Request or not.

All the Pull Requests must refer to an already open issue: this is the first phase to contribute also for informing maintainers about the issue.

## Commits

Commit's first line should not exceed 50 columns.

A commit description is welcomed to explain more the changes: just ensure
to put a blank line and an arbitrary number of maximum 72 characters long
lines, at most one blank line between them.

Please, split changes into several and documented small commits: this will help us to perform a better review. Commits must follow the Conventional Commits Specification, a lightweight convention on top of commit messages. It provides an easy set of rules for creating an explicit commit history; which makes it easier to write automated tools on top of. This convention dovetails with Semantic Versioning, by describing the features, fixes, and breaking changes made in commit messages. See [Conventional Commits Specification](https://www.conventionalcommits.org) to learn about Conventional Commits.

> In case of errors or need of changes to previous commits,
> fix them squashing to make changes atomic.

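
For example, squashing follow-up fixes into their original commit before pushing could look like this sketch (the commit range and branch name are illustrative):

```shell
# Sketch: fold fix-up commits into the original one before opening/updating a PR
# (HEAD~3 and the branch name are illustrative).
git rebase -i HEAD~3                                  # mark extra commits as "fixup"/"squash"
git push --force-with-lease origin my-feature-branch  # update the already-pushed branch
```
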
## Code convention

Capsule is written in Golang. The changes must follow the Pull Request method where a _GitHub Action_ will
check the `golangci-lint`, so ensure your changes respect the coding standard.

### golint

You can easily check them issuing the _Make_ recipe `golint`.

```
# make golint
golangci-lint run -c .golangci.yml
```

> Enabled linters and related options are defined in the [.golanci.yml file](https://github.com/projectcapsule/capsule/blob/master/.golangci.yml)

### goimports

Also, the Go import statements must be sorted following the best practice:

```
<STANDARD LIBRARY>

<EXTERNAL PACKAGES>

<LOCAL PACKAGES>
```

To help you out you can use the _Make_ recipe `goimports`

```
# make goimports
goimports -w -l -local "github.com/projectcapsule/capsule" .
```

## Finding contributions to work on

Looking at the existing issues is a great way to find something to contribute on. As our projects, by default, use the
default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted'
and 'good first issue' issues are a great place to start.

## Design Docs

A contributor proposes a design with a PR on the repository to allow for revisions and discussions.
If a design needs to be discussed before formulating a document for it, make use of GitHub Discussions to
involve the community on the discussion.

## GitHub Issues

GitHub Issues are used to file bugs, work items, and feature requests with actionable items/issues.

When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:

* A reproducible test case or series of steps
* The version of the code being used
* Any modifications you've made relevant to the bug
* Anything unusual about your environment or deployment

## Miscellanea

Please, add a new single line at end of any file as the current coding style.

## Licensing

See the [LICENSE](https://github.com/projectcapsule/capsule/blob/master/LICENSE) file for our project's licensing. We can ask you to confirm the licensing of your contribution.

@@ -1,3 +0,0 @@
# Contributing

Guidelines for community contribution.

@@ -1,34 +0,0 @@
# Release Process

The Capsule release process is constrained to _GitHub Releases_, following the git tag semantic versioning.

## Semantic versioning convention

Capsule is taking advantage of the [Semantic Versioning](https://semver.org/), although with some rules about the patch, the minor and the major bump versions.

- `patch` (e.g.: 0.1.0 to 0.1.1):
  a patch bumping occurs when some bugs are fixed, and no Kubernetes CRDs API changes are introduced.
  The patch can contain also new features not yet promoted to a specific Kubernetes CRDs API type.
  A patch may be used also to address CVE patches.
- `minor` (e.g.: 0.1.0 to 0.2.0):
  a minor bumping occurs when a new CRDs API object is introduced, or rather, when some CRDs schemes are updated.
  The minor bump is used to inform the Capsule adopters to manually update the Capsule CRDs, since Helm, the suggested tool for the release lifecycle management, is not able to automatically update the objects.
  Upon every minor release, on the GitHub Release page, a list of API updates is described, and a link to the [upgrade guide](https://capsule.clastix.io/docs/guides/upgrading) is provided.
- `major` (e.g.: 0.1.0 to 1.0.0):
  a major bump occurs when a breaking change, such as backward incompatible changes is introduced.

## Container hosting

All the Capsule container images are publicly hosted on [CLASTIX](https://clastix.io) [Docker Hub repository](https://hub.docker.com/r/clastix/capsule).

The Capsule container image is built upon a git tag (issued thanks to the _GitHub Release_ feature) starting with the prefix `v` (e.g.: `v1.0.1`).
This will trigger a _GitHub Action_ which builds a multi-arch container image, then pushes it to the container registry.

> The `latest` tag is not available to avoid moving git commit SHA reference.

## Helm Chart hosting

The suggested installation tool is [Helm](https://helm.sh), and the Capsule chart is hosted in the [GitHub repository](https://github.com/projectcapsule/capsule/tree/master/charts/capsule).
For each Helm Chart release, a git tag with the prefix `helm-v` will be issued to help developers to address the corresponding commit.

The built Helm Charts are then automatically pushed upon tag release to the [CLASTIX Helm repository](https://clastix.github.io/charts).

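
Put together, the automation is driven purely by tags. A hedged sketch with placeholder version numbers (maintainers normally do this through the GitHub Release UI):

```shell
# Sketch: the tag prefixes that drive the release automation (versions are placeholders).
git tag v0.4.2 && git push origin v0.4.2              # application release: multi-arch image build
git tag helm-v0.4.2 && git push origin helm-v0.4.2    # Helm chart release: chart publish
```
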
@@ -1,224 +0,0 @@
ACL-filtered APIs Apache2 Authenticator BYOD CLASTIX CLI CRD CRDs CRs CTO CVE CVE-2021-25735 CaaS
CapsuleConfiguration CapsuleConfigurationSpec ClusterIP ClusterRole ClusterRoles ConfigMap Dependant Env ExternalName
GitOps GitOps-ready GitOps-way GlobalTenantResource GlobalTenantResourceSpec GlobalTenantResourceStatus Golang Grafana
HTTPS HostNetwork HostPort Hostname Hostnames IPBlock IPC IPs IngressClass IngressClasses JSON JWT Keycloak Kubebuilder
Kubeconfig Kubernetes Kubernetes-native Kustomization Kustomization. Kustomizations Kustomize LimitRangeItem
LimitRangeSpec LimitRanger LoadBalance LoadBalancer MTB MTB. Miscellanea MutatingAdmissionWebhook
MutatingWebhookConfiguration Namespace Namespace-as-a-Service Namespace-level NamespaceSelector Namespaced-scope
Namespaces NetworkPolicies NetworkPolicy NetworkPolicyEgressRule NetworkPolicyIngressRule NetworkPolicyPeer
NetworkPolicyPort NetworkPolicySpec NetworkPolicySpec's NodePort NodeSelector OAuth OIDC OSS OpenSSL OwnerSpec PID PRs
PV PVCs PVs PersistentVolume PersistentVolumeClaim PodNodeSelector PodSecurityPolicies PodSecurityPolicy PriorityClass
PriorityClasses PromQL ProxySetting PullPolicy QoS RBAC README Reconciler Reconciler's ResourceQuota ResourceQuotaSpec
Roadmap RoleBinding RuntimeClass RuntimeClasses SDK SHA SRE SSD ScopeSelectorOperator ServiceAccount ServiceAccounts
ServiceMonitor StorageClass StorageClasses TLS TLS-terminated TenantResource TenantResourceSpec TenantResourceStatus
TenantSpec TenantStatus UI Uncordoning ValidatingAdmissionWebhook ValidatingWebhookConfiguration Velero Viceversa WG
Webhook Webhooks Workqueue YAML additively backend balancers behaviour capsuleconfiguration clusterrole
clusterrolebinding config cross-Namespace cross-namespace customizations datasource deletecollection e2e eg eg.
endpointslice enum enums env etcd fastly favourite flux2-capsule-multi-tenancy flux2-multi-tenancy
gitops-reconciler-kubeconfig goimports golangci-lint golint hostname hostnames imagePullPolicy init-time ipBlock k3d
keycloak kubeconfig kubectl kubernetes kustomize labelled latencies lifecycle linters linux lockdown microservice
multitenant naas namespace namespace-owner-reference namespaceSelector namespaced namespaces neighbour networkpolicies
networkpolicy ngrok no-naas non-namespaced oauth2-proxy onboarded persistentvolumeclaims podSelector prepended
priorityClasses radiuses reconcilers repo resync roadmap rolebinding rolebindings scopeSelector serviceaccount stateful
uid unsetting v1alpha1 v2 webhook webhooks wontfix Quickstart FluxCD addon kustomize-controller

@@ -1,107 +0,0 @@
# Getting started

Thanks for giving Capsule a try.

## Installation

Make sure you have access to a Kubernetes cluster as administrator.

You can use the [Capsule Helm Chart](https://github.com/projectcapsule/capsule/blob/master/charts/capsule/README.md) to install Capsule.

### Install with Helm Chart

Please, refer to the instructions reported in the Capsule Helm Chart [README](https://github.com/projectcapsule/capsule/blob/master/charts/capsule/README.md).

## Create your first Tenant

In Capsule, a _Tenant_ is an abstraction to group multiple namespaces in a single entity within a set of boundaries defined by the Cluster Administrator. The tenant is then assigned to a user or group of users who is called _Tenant Owner_.

Capsule defines a Tenant as Custom Resource with cluster scope.

Create the tenant as cluster admin:

```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
EOF
```

You can check the tenant just created

```
$ kubectl get tenants
NAME   STATE    NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR   AGE
oil    Active                     0                                 10s
```

## Login as Tenant Owner

Each tenant comes with a delegated user or group of users acting as the tenant admin. In the Capsule jargon, this is called the _Tenant Owner_. Other users can operate inside a tenant with different levels of permissions and authorizations assigned directly by the Tenant Owner.

Capsule does not care about the authentication strategy used in the cluster and all the Kubernetes methods of [authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/) are supported. The only requirement to use Capsule is to assign tenant users to the group defined by `--capsule-user-group` option, which defaults to `capsule.clastix.io`.

Assignment to a group depends on the authentication strategy in your cluster.

For example, if you are using `capsule.clastix.io`, users authenticated through a _X.509_ certificate must have `capsule.clastix.io` as _Organization_: `-subj "/CN=${USER}/O=capsule.clastix.io"`

Users authenticated through an _OIDC token_ must have in their token:

```json
...
"users_groups": [
  "capsule.clastix.io",
  "other_group"
]
```

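
For the X.509 path, generating a key and CSR that carries the required group as Organization could look like the following sketch (signing by the cluster CA and the kubeconfig wiring are left out):

```shell
# Sketch: key + CSR for user "alice" with the capsule.clastix.io group as O=
# (the CSR still has to be signed by the cluster CA and wired into a kubeconfig).
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -out alice.csr -subj "/CN=alice/O=capsule.clastix.io"
```
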
The [hack/create-user.sh](https://github.com/projectcapsule/capsule/blob/master/hack/create-user.sh) can help you set up a dummy `kubeconfig` for the `alice` user acting as owner of a tenant called `oil`
|
||||
|
||||
```bash
|
||||
./hack/create-user.sh alice oil
|
||||
...
|
||||
certificatesigningrequest.certificates.k8s.io/alice-oil created
|
||||
certificatesigningrequest.certificates.k8s.io/alice-oil approved
|
||||
kubeconfig file is: alice-oil.kubeconfig
|
||||
to use it as alice export KUBECONFIG=alice-oil.kubeconfig
|
||||
```
|
||||
|
||||
Login as tenant owner
|
||||
|
||||
```
|
||||
$ export KUBECONFIG=alice-oil.kubeconfig
|
||||
```
|
||||
|
||||
## Create namespaces
|
||||
|
||||
As tenant owner, you can create namespaces:
|
||||
|
||||
```
|
||||
$ kubectl create namespace oil-production
|
||||
$ kubectl create namespace oil-development
|
||||
```
|
||||
|
||||
And operate with fully admin permissions:
|
||||
|
||||
```
|
||||
$ kubectl -n oil-development run nginx --image=docker.io/nginx
|
||||
$ kubectl -n oil-development get pods
|
||||
```
|
||||
|
||||
## Limiting access
|
||||
|
||||
Tenant Owners have full administrative permissions limited to only the namespaces in the assigned tenant. They can create any namespaced resource in their namespaces but they do not have access to cluster resources or resources belonging to other tenants they do not own:
|
||||
|
||||
```
|
||||
$ kubectl -n kube-system get pods
|
||||
Error from server (Forbidden): pods is forbidden:
|
||||
User "alice" cannot list resource "pods" in API group "" in the namespace "kube-system"
|
||||
```
|
||||
|
||||
See the [tutorial](/docs/general/tutorial) for more things you can do with Capsule.
|
||||
|
||||
# Documentation
|
||||
General documentation for Capsule Operator
|
||||
|
||||
# Capsule extension for Lens
|
||||
With Capsule extension for [Lens](https://github.com/lensapp/lens), a cluster administrator can easily manage from a single pane of glass all resources of a Kubernetes cluster, including all the Tenants created through the Capsule Operator.
|
||||
|
||||
## Features
|
||||
Capsule extension for Lens provides these capabilities:
|
||||
|
||||
- List all tenants
|
||||
- See tenant details and change through the embedded Lens editor
|
||||
- Check Resources Quota and Budget at both the tenant and namespace level
|
||||
|
||||
Please, see the [README](https://github.com/clastix/capsule-lens-extension) for details about the installation of the Capsule Lens Extension.
|
||||
|
||||
# Capsule Proxy
|
||||
|
||||
Capsule Proxy is an add-on for Capsule Operator addressing some RBAC issues when enabling multi-tenancy in Kubernetes since users cannot list the owned cluster-scoped resources.
|
||||
|
||||
Kubernetes RBAC cannot limit a _list_ to only the owned cluster-scoped resources, since there are no ACL-filtered APIs. For example:
|
||||
|
||||
```
|
||||
$ kubectl get namespaces
|
||||
Error from server (Forbidden): namespaces is forbidden:
|
||||
User "alice" cannot list resource "namespaces" in API group "" at the cluster scope
|
||||
```
|
||||
|
||||
However, the user can have permissions on individual namespaces:
|
||||
|
||||
```
|
||||
$ kubectl auth can-i [get|list|watch|delete] ns oil-production
|
||||
yes
|
||||
```
|
||||
|
||||
The reason, as the error message reports, is that the RBAC _list_ action is available only at the cluster scope and is not granted to users without the appropriate permissions.
|
||||
|
||||
To overcome this problem, many Kubernetes distributions introduced mirrored custom resources backed by a custom set of ACL-filtered APIs. However, this radically changes the user's experience of Kubernetes by introducing hard customizations that make it painful to move from one distribution to another.
|
||||
|
||||
With **Capsule**, we took a different approach. As one of the key goals, we want to keep the same user experience on all the distributions of Kubernetes. We want people to use the standard tools they already know and love and it should just work.
|
||||
|
||||
## How it works
|
||||
|
||||
The `capsule-proxy` implements a simple reverse proxy that intercepts only specific requests to the API server, while Capsule does all the magic behind the scenes.
|
||||
|
||||
The current implementation filters the following requests:
|
||||
|
||||
* `/api/scheduling.k8s.io/{v1}/priorityclasses{/name}`
|
||||
* `/api/v1/namespaces{/name}`
|
||||
* `/api/v1/nodes{/name}`
|
||||
* `/api/v1/pods?fieldSelector=spec.nodeName%3D{name}`
|
||||
* `/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/{name}`
|
||||
* `/apis/metrics.k8s.io/{v1beta1}/nodes{/name}`
|
||||
* `/apis/networking.k8s.io/{v1,v1beta1}/ingressclasses{/name}`
|
||||
* `/apis/storage.k8s.io/v1/storageclasses{/name}`
|
||||
* `/apis/node.k8s.io/v1/runtimeclasses{/name}`
|
||||
* `/api/v1/persistentvolumes{/name}`
|
||||
|
||||
All other requests are proxy-passed transparently to the API server, so no side effects are expected.
|
||||
We're planning to add new APIs in the future, so [PRs are welcome](https://github.com/clastix/capsule-proxy)!
|
||||
|
||||
## Installation
|
||||
|
||||
Capsule Proxy is an optional add-on of the main Capsule Operator, so make sure you have a working instance of Capsule before attempting to install it.
|
||||
Use the `capsule-proxy` only if you want Tenant Owners to list their cluster-scoped resources.
|
||||
|
||||
The `capsule-proxy` can be deployed in standalone mode, e.g. running as a pod bridging any Kubernetes client to the API server.
|
||||
Optionally, it can be deployed as a sidecar container in the backend of a dashboard.
|
||||
|
||||
Running outside a Kubernetes cluster is also viable, although a valid `KUBECONFIG` file must be provided, using the environment variable `KUBECONFIG` or the default file in `$HOME/.kube/config`.
|
||||
|
||||
A Helm Chart is available [here](https://github.com/clastix/capsule-proxy/blob/master/charts/capsule-proxy/README.md).
|
||||
|
||||
Depending on your environment, you can expose the `capsule-proxy` by:
|
||||
|
||||
- Ingress
|
||||
- NodePort Service
|
||||
- LoadBalancer Service
|
||||
- HostPort
|
||||
- HostNetwork
|
||||
|
||||
Here is how it looks when exposed through an Ingress Controller:
|
||||
|
||||
```
|
||||
+-----------+ +-----------+ +-----------+
|
||||
kubectl ------>|:443 |--------->|:9001 |-------->|:6443 |
|
||||
+-----------+ +-----------+ +-----------+
|
||||
ingress-controller capsule-proxy kube-apiserver
|
||||
```
|
||||
|
||||
## CLI flags
|
||||
|
||||
- `capsule-configuration-name`: name of the `CapsuleConfiguration` resource which is containing the [Capsule configurations](/docs/general/references/#capsule-configuration) (default: `default`)
|
||||
- `capsule-user-group` (deprecated): the old way to specify the user groups whose requests must be intercepted by the proxy
|
||||
- `ignored-user-group`: names of the groups whose requests must be ignored and proxy-passed to the upstream server
|
||||
- `listening-port`: HTTP port the proxy listens to (default: `9001`)
|
||||
- `oidc-username-claim`: the OIDC field name used to identify the user (default: `preferred_username`), the proper value can be extracted from the Kubernetes API Server flags
|
||||
- `enable-ssl`: bind on HTTPS for secure communication, allowing client certificate authentication, also known as mutual TLS (default: `true`)
|
||||
- `ssl-cert-path`: path to the TLS certificate, when TLS mode is enabled (default: `/opt/capsule-proxy/tls.crt`)
|
||||
- `ssl-key-path`: path to the TLS certificate key, when TLS mode is enabled (default: `/opt/capsule-proxy/tls.key`)
|
||||
- `rolebindings-resync-period`: resync period for RoleBinding resources reflector, lower values can help if you're facing [flaky etcd connection](https://github.com/clastix/capsule-proxy/issues/174) (default: `10h`)
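
As an illustration, a hypothetical invocation combining some of these flags (the binary name and certificate paths are assumptions, not a prescribed deployment):

```bash
capsule-proxy \
  --capsule-configuration-name=default \
  --listening-port=9001 \
  --enable-ssl=true \
  --ssl-cert-path=/opt/capsule-proxy/tls.crt \
  --ssl-key-path=/opt/capsule-proxy/tls.key \
  --oidc-username-claim=preferred_username
```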
|
||||
|
||||
## User Authentication
|
||||
|
||||
The `capsule-proxy` intercepts all the requests from the `kubectl` client directed to the API server. Users relying on TLS client-based authentication with a certificate and key can still talk to the API server, since the proxy forwards client certificates to the Kubernetes API server.
|
||||
|
||||
It is possible to protect the `capsule-proxy` using a certificate provided by Let's Encrypt. Keep in mind that, in this way, the TLS termination will be executed by the Ingress Controller, meaning that client certificate authentication is dropped and not forwarded to the upstream.
|
||||
|
||||
If your prerequisite is exposing `capsule-proxy` using an Ingress, you must rely on the token-based authentication, for example, OIDC or Bearer tokens. Users providing tokens are always able to reach the APIs Server.
|
||||
|
||||
## Kubernetes dashboards integration
|
||||
|
||||
If you're using a client-only dashboard, for example [Lens](https://k8slens.dev/), the `capsule-proxy` can be used as with `kubectl` since this dashboard usually talks to the APIs server using just a `kubeconfig` file.
|
||||
|
||||

|
||||
|
||||
For a web-based dashboard, like the [Kubernetes Dashboard](https://github.com/kubernetes/dashboard), the `capsule-proxy` can be deployed as a sidecar container in the backend, following the well-known cloud-native _Ambassador Pattern_.
|
||||
|
||||

|
||||
|
||||
## Tenant Owner Authorization
|
||||
|
||||
Each Tenant Owner can have their capabilities managed in a way very similar to standard Kubernetes RBAC.
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: my-tenant
|
||||
spec:
|
||||
owners:
|
||||
- kind: User
|
||||
name: alice
|
||||
proxySettings:
|
||||
- kind: IngressClasses
|
||||
operations:
|
||||
- List
|
||||
```
|
||||
|
||||
The proxy setting `kind` is an __enum__ accepting the supported resources:
|
||||
|
||||
- `Nodes`
|
||||
- `StorageClasses`
|
||||
- `IngressClasses`
|
||||
- `PriorityClasses`
|
||||
- `RuntimeClasses`
|
||||
- `PersistentVolumes`
|
||||
|
||||
Each resource kind can be granted several verbs:
|
||||
|
||||
- `List`
|
||||
- `Update`
|
||||
- `Delete`
|
||||
|
||||
## Cluster-scoped resources selection strategy precedence
|
||||
|
||||
Starting from [Capsule v0.2.0](https://github.com/projectcapsule/capsule/releases/tag/v0.2.0), selection of cluster-scoped resources based on labels has been introduced.
|
||||
|
||||
Due to the limitations of the Kubernetes API Server, which does not support `OR` label selectors, the Capsule core team decided to give precedence to the label selector over the exact and regex match.
|
||||
|
||||
In upcoming releases, Capsule is going to deprecate selection based on exact names and regex, moving entirely to the label-matching approach of Kubernetes itself.
|
||||
|
||||
### Namespaces
|
||||
|
||||
As tenant owner `alice`, you can use `kubectl` to create some namespaces:
|
||||
|
||||
```
|
||||
$ kubectl --context alice-oidc@mycluster create namespace oil-production
|
||||
$ kubectl --context alice-oidc@mycluster create namespace oil-development
|
||||
$ kubectl --context alice-oidc@mycluster create namespace gas-marketing
|
||||
```
|
||||
|
||||
and list only those namespaces:
|
||||
|
||||
```
|
||||
$ kubectl --context alice-oidc@mycluster get namespaces
|
||||
NAME STATUS AGE
|
||||
gas-marketing Active 2m
|
||||
oil-development Active 2m
|
||||
oil-production Active 2m
|
||||
```
|
||||
|
||||
Capsule Proxy supports applying a Namespace configuration using the `apply` command, as follows.
|
||||
|
||||
```
|
||||
$: cat <<EOF | kubectl apply -f -
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: solar-development
|
||||
EOF
|
||||
|
||||
namespace/solar-development unchanged
|
||||
# or, in case of non-existing Namespace:
|
||||
namespace/solar-development created
|
||||
```
|
||||
|
||||
### Nodes
|
||||
|
||||
The Capsule Proxy gives the owners the ability to access the nodes matching the `.spec.nodeSelector` in the Tenant manifest:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owners:
|
||||
- kind: User
|
||||
name: alice
|
||||
proxySettings:
|
||||
- kind: Nodes
|
||||
operations:
|
||||
- List
|
||||
nodeSelector:
|
||||
kubernetes.io/hostname: capsule-gold-qwerty
|
||||
```
|
||||
|
||||
```bash
|
||||
$ kubectl --context alice-oidc@mycluster get nodes
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
capsule-gold-qwerty Ready <none> 43h v1.19.1
|
||||
```
|
||||
|
||||
> Warning: when no `nodeSelector` is specified, the tenant owner has access to all the nodes, according to the permissions listed in the `proxySettings` specs.
|
||||
|
||||
### Special routes for kubectl describe
|
||||
|
||||
When issuing a `kubectl describe node`, some other endpoints are put in place:
|
||||
|
||||
* `api/v1/pods?fieldSelector=spec.nodeName%3D{name}`
|
||||
* `/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/{name}`
|
||||
|
||||
These are mandatory to retrieve the list of the running Pods on the required node and provide info about its lease status.
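
For example, a tenant owner granted the `List` operation on `Nodes` (as in the example above) can describe one of the allowed nodes:

```bash
$ kubectl --context alice-oidc@mycluster describe node capsule-gold-qwerty
```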
|
||||
|
||||
### Storage Classes
|
||||
|
||||
A Tenant may be limited to use a set of allowed Storage Class resources, as follows.
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owners:
|
||||
- kind: User
|
||||
name: alice
|
||||
proxySettings:
|
||||
- kind: StorageClasses
|
||||
operations:
|
||||
- List
|
||||
storageClasses:
|
||||
allowed:
|
||||
- custom
|
||||
allowedRegex: "\\w+fs"
|
||||
```
|
||||
|
||||
In the Kubernetes cluster we could have more Storage Class resources, some of them forbidden and non-usable by the Tenant owner.
|
||||
|
||||
```bash
|
||||
$ kubectl --context admin@mycluster get storageclasses
|
||||
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
|
||||
cephfs rook.io/cephfs Delete WaitForFirstConsumer false 21h
|
||||
custom custom.tls/provisioner Delete WaitForFirstConsumer false 43h
|
||||
default(standard) rancher.io/local-path Delete WaitForFirstConsumer false 43h
|
||||
glusterfs rook.io/glusterfs Delete WaitForFirstConsumer false 54m
|
||||
zol zfs-on-linux/zfs Delete WaitForFirstConsumer false 54m
|
||||
```
|
||||
|
||||
The expected output using `capsule-proxy` is the retrieval of the `custom` Storage Class as well as the other ones matching the regex `\w+fs`.
|
||||
|
||||
```bash
|
||||
$ kubectl --context alice-oidc@mycluster get storageclasses
|
||||
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
|
||||
cephfs rook.io/cephfs Delete WaitForFirstConsumer false 21h
|
||||
custom custom.tls/provisioner Delete WaitForFirstConsumer false 43h
|
||||
glusterfs rook.io/glusterfs Delete WaitForFirstConsumer false 54m
|
||||
```
|
||||
|
||||
> The `name` label reflecting the resource name is mandatory, otherwise filtering of resources cannot be put in place
|
||||
|
||||
```yaml
|
||||
apiVersion: storage.k8s.io/v1
|
||||
kind: StorageClass
|
||||
metadata:
|
||||
labels:
|
||||
name: cephfs
|
||||
name: cephfs
|
||||
provisioner: cephfs
|
||||
```
|
||||
|
||||
### Ingress Classes
|
||||
|
||||
As for Storage Class, also Ingress Class can be enforced.
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owners:
|
||||
- kind: User
|
||||
name: alice
|
||||
proxySettings:
|
||||
- kind: IngressClasses
|
||||
operations:
|
||||
- List
|
||||
ingressOptions:
|
||||
allowedClasses:
|
||||
allowed:
|
||||
- custom
|
||||
allowedRegex: "\\w+-lb"
|
||||
```
|
||||
|
||||
In the Kubernetes cluster, we could have more Ingress Class resources, some of them forbidden and non-usable by the Tenant owner.
|
||||
|
||||
```bash
|
||||
$ kubectl --context admin@mycluster get ingressclasses
|
||||
NAME CONTROLLER PARAMETERS AGE
|
||||
custom example.com/custom IngressParameters.k8s.example.com/custom 24h
|
||||
external-lb example.com/external IngressParameters.k8s.example.com/external-lb 2s
|
||||
haproxy-ingress haproxy.tech/ingress 4d
|
||||
internal-lb example.com/internal IngressParameters.k8s.example.com/external-lb 15m
|
||||
nginx nginx.plus/ingress 5d
|
||||
```
|
||||
|
||||
The expected output using `capsule-proxy` is the retrieval of the `custom` Ingress Class as well as the other ones matching the regex `\w+-lb`.
|
||||
|
||||
```bash
|
||||
$ kubectl --context alice-oidc@mycluster get ingressclasses
|
||||
NAME CONTROLLER PARAMETERS AGE
|
||||
custom example.com/custom IngressParameters.k8s.example.com/custom 24h
|
||||
external-lb example.com/external IngressParameters.k8s.example.com/external-lb 2s
|
||||
internal-lb example.com/internal IngressParameters.k8s.example.com/internal-lb 15m
|
||||
```
|
||||
|
||||
> The `name` label reflecting the resource name is mandatory, otherwise filtering of resources cannot be put in place
|
||||
|
||||
```yaml
|
||||
apiVersion: networking.k8s.io/v1
|
||||
kind: IngressClass
|
||||
metadata:
|
||||
labels:
|
||||
name: external-lb
|
||||
name: external-lb
|
||||
spec:
|
||||
controller: example.com/ingress-controller
|
||||
parameters:
|
||||
apiGroup: k8s.example.com
|
||||
kind: IngressParameters
|
||||
name: external-lb
|
||||
```
|
||||
|
||||
### Priority Classes
|
||||
|
||||
Allowed PriorityClasses assigned to a Tenant Owner can be enforced as follows:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owners:
|
||||
- kind: User
|
||||
name: alice
|
||||
proxySettings:
|
||||
- kind: PriorityClasses
|
||||
operations:
|
||||
- List
|
||||
priorityClasses:
|
||||
allowed:
|
||||
- custom
|
||||
allowedRegex: "\\w+priority"
|
||||
```
|
||||
|
||||
In the Kubernetes cluster we could have more PriorityClasses resources, some of them forbidden and non-usable by the Tenant owner.
|
||||
|
||||
```bash
|
||||
$ kubectl --context admin@mycluster get priorityclasses.scheduling.k8s.io
|
||||
NAME VALUE GLOBAL-DEFAULT AGE
|
||||
custom 1000 false 18s
|
||||
maxpriority 1000 false 18s
|
||||
minpriority 1000 false 18s
|
||||
nonallowed 1000 false 8m54s
|
||||
system-cluster-critical 2000000000 false 3h40m
|
||||
system-node-critical 2000001000 false 3h40m
|
||||
```
|
||||
|
||||
The expected output using `capsule-proxy` is the retrieval of the `custom` PriorityClass as well as the other ones matching the regex `\w+priority`.
|
||||
|
||||
```bash
|
||||
$ kubectl --context alice-oidc@mycluster get priorityclasses.scheduling.k8s.io
|
||||
NAME VALUE GLOBAL-DEFAULT AGE
|
||||
custom 1000 false 18s
|
||||
maxpriority 1000 false 18s
|
||||
minpriority 1000 false 18s
|
||||
```
|
||||
|
||||
> The `name` label reflecting the resource name is mandatory, otherwise filtering of resources cannot be put in place
|
||||
|
||||
```yaml
|
||||
apiVersion: scheduling.k8s.io/v1
|
||||
kind: PriorityClass
|
||||
metadata:
|
||||
labels:
|
||||
name: custom
|
||||
name: custom
|
||||
value: 1000
|
||||
globalDefault: false
|
||||
description: "Priority class for Tenants"
|
||||
```
|
||||
|
||||
### Runtime Classes
|
||||
|
||||
Allowed RuntimeClasses assigned to a Tenant Owner can be enforced as follows:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owners:
|
||||
- kind: User
|
||||
name: alice
|
||||
proxySettings:
|
||||
- kind: RuntimeClasses
|
||||
operations:
|
||||
- List
|
||||
runtimeClasses:
|
||||
matchExpressions:
|
||||
- key: capsule.clastix.io/qos
|
||||
operator: Exists
|
||||
values:
|
||||
- bronze
|
||||
- silver
|
||||
```
|
||||
|
||||
In the Kubernetes cluster we could have more RuntimeClasses resources, some of them forbidden and non-usable by the Tenant owner.
|
||||
|
||||
```bash
|
||||
$ kubectl --context admin@mycluster get runtimeclasses.node.k8s.io --show-labels
|
||||
NAME HANDLER AGE LABELS
|
||||
bronze bronze 21h capsule.clastix.io/qos=bronze
|
||||
default myconfiguration 21h <none>
|
||||
gold gold 21h capsule.clastix.io/qos=gold
|
||||
silver silver 21h capsule.clastix.io/qos=silver
|
||||
```
|
||||
|
||||
The expected output using `capsule-proxy` is the retrieval of the `bronze` and `silver` ones.
|
||||
|
||||
```bash
|
||||
$ kubectl --context alice-oidc@mycluster get runtimeclasses.node.k8s.io
|
||||
NAME HANDLER AGE
|
||||
bronze bronze 21h
|
||||
silver silver 21h
|
||||
```
|
||||
|
||||
> `RuntimeClass` is one of the latest implementations in Capsule Proxy and adheres to the new selection strategy based on label selectors, rather than the exact match and regex ones.
|
||||
>
|
||||
> The latter ones are going to be deprecated in the upcoming releases of Capsule.
|
||||
|
||||
### Persistent Volumes
|
||||
|
||||
A Tenant can request persistent volumes through the `PersistentVolumeClaim` API, and get a volume from it.
|
||||
|
||||
Starting from release v0.2.0, all the `PersistentVolumes` are labelled with the Capsule label that is used by the Capsule Proxy to allow the retrieval.
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: PersistentVolume
|
||||
metadata:
|
||||
annotations:
|
||||
finalizers:
|
||||
- kubernetes.io/pv-protection
|
||||
labels:
|
||||
capsule.clastix.io/tenant: oil
|
||||
name: data-01
|
||||
spec:
|
||||
accessModes:
|
||||
- ReadWriteOnce
|
||||
capacity:
|
||||
storage: 10Gi
|
||||
hostPath:
|
||||
path: /mnt/data
|
||||
type: ""
|
||||
persistentVolumeReclaimPolicy: Retain
|
||||
storageClassName: manual
|
||||
volumeMode: Filesystem
|
||||
```
|
||||
|
||||
> Please, notice the label `capsule.clastix.io/tenant` matching the Tenant name.
|
||||
|
||||
With that said, a multi-tenant cluster can be made of several volumes, each one for different tenants.
|
||||
|
||||
```bash
|
||||
$ kubectl --context admin@mycluster get persistentvolumes --show-labels
|
||||
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
|
||||
data-01 10Gi RWO Retain Available manual 17h capsule.clastix.io/tenant=oil
|
||||
data-02 10Gi RWO Retain Available manual 17h capsule.clastix.io/tenant=gas
|
||||
|
||||
```
|
||||
|
||||
For the `oil` Tenant, Alice has the required permission to list Volumes.
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owners:
|
||||
- kind: User
|
||||
name: alice
|
||||
proxySettings:
|
||||
- kind: PersistentVolumes
|
||||
operations:
|
||||
- List
|
||||
```
|
||||
|
||||
The expected output using `capsule-proxy` is the retrieval of the PVs used currently, or in the past, by the PVCs in their Tenants.
|
||||
|
||||
```bash
|
||||
$ kubectl --context alice-oidc@mycluster get persistentvolumes
|
||||
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
|
||||
data-01 10Gi RWO Retain Available manual 17h
|
||||
```
|
||||
|
||||
### ProxySetting Use Case
|
||||
Consider a scenario where a cluster admin creates a tenant and assigns ownership of the tenant to a user, the so-called Tenant Owner. Afterwards, the Tenant Owner would in turn like to provide access to their cluster-scoped resources to a set of users (e.g. non-owners or tenant users), groups and service accounts who do not require tenant-owner-level permissions.
|
||||
|
||||
A Tenant Owner can provide access to the following cluster-scoped resources to their tenant users, groups and service accounts by creating a `ProxySetting` resource:
|
||||
- `Nodes`
|
||||
- `StorageClasses`
|
||||
- `IngressClasses`
|
||||
- `PriorityClasses`
|
||||
- `RuntimeClasses`
|
||||
- `PersistentVolumes`
|
||||
|
||||
Each resource kind can be granted the following verbs:
|
||||
- `List`
|
||||
- `Update`
|
||||
- `Delete`
|
||||
|
||||
These tenant users, groups and service accounts have less privileged access than Tenant Owners.
|
||||
|
||||
As the Tenant Owner `alice`, you can create a `ProxySetting` resource to allow `bob` to list nodes, storage classes, ingress classes and priority classes:
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: ProxySetting
|
||||
metadata:
|
||||
name: sre-readers
|
||||
namespace: solar-production
|
||||
spec:
|
||||
subjects:
|
||||
- name: bob
|
||||
kind: User
|
||||
proxySettings:
|
||||
- kind: Nodes
|
||||
operations:
|
||||
- List
|
||||
- kind: StorageClasses
|
||||
operations:
|
||||
- List
|
||||
- kind: IngressClasses
|
||||
operations:
|
||||
- List
|
||||
- kind: PriorityClasses
|
||||
operations:
|
||||
- List
|
||||
```
|
||||
As the Tenant User `bob`, you can list nodes, storage classes, ingress classes and priority classes:
|
||||
|
||||
```bash
|
||||
$ kubectl auth can-i --context bob-oidc@mycluster get nodes
|
||||
yes
|
||||
$ kubectl auth can-i --context bob-oidc@mycluster get storageclasses
|
||||
yes
|
||||
$ kubectl auth can-i --context bob-oidc@mycluster get ingressclasses
|
||||
yes
|
||||
$ kubectl auth can-i --context bob-oidc@mycluster get priorityclasses
|
||||
yes
|
||||
```
|
||||
## HTTP support
|
||||
Capsule Proxy supports both `https` and `http`. The latter is not recommended, although it can be useful in some use cases (e.g. development, or running behind a TLS-terminating reverse proxy). As the default behaviour is to work with `https`, you need to pass the flag `--enable-ssl=false` to work over `http`.
|
||||
|
||||
Once `capsule-proxy` is working over `http`, requests must authenticate using an allowed Bearer Token.
|
||||
|
||||
For example:
|
||||
|
||||
```bash
|
||||
$ TOKEN=<type your TOKEN>
|
||||
$ curl -H "Authorization: Bearer $TOKEN" http://localhost:9001/api/v1/namespaces
|
||||
```
|
||||
|
||||
> NOTE: `kubectl` will not work against an `http` server.
|
||||
|
||||
## Metrics
|
||||
|
||||
Starting from the v0.3.0 release, Capsule Proxy exposes Prometheus metrics available at `http://0.0.0.0:8080/metrics`.
|
||||
|
||||
The offered metrics are related to the internal `controller-manager` code base, such as work queue and REST client requests, and the Go runtime ones.
|
||||
|
||||
Along with these, metrics `capsule_proxy_response_time_seconds` and `capsule_proxy_requests_total` have been introduced and are specific to the Capsule Proxy code-base and functionalities.
|
||||
|
||||
`capsule_proxy_response_time_seconds` offers a bucket representation of the HTTP request duration.
|
||||
The available labels for this metric are the following:
|
||||
- `path`: the HTTP path of every single request that Capsule Proxy passes to the upstream
|
||||
|
||||
`capsule_proxy_requests_total` counts the global requests that Capsule Proxy is passing to the upstream with the following labels.
|
||||
- `path`: the HTTP path of every single request that Capsule Proxy passes to the upstream
|
||||
- `status`: the HTTP status code of the request
|
||||
|
||||
> Example output of the metrics:
|
||||
> ```
|
||||
> # HELP capsule_proxy_requests_total Number of requests
|
||||
> # TYPE capsule_proxy_requests_total counter
|
||||
> capsule_proxy_requests_total{path="/api/v1/namespaces",status="403"} 1
|
||||
> # HELP capsule_proxy_response_time_seconds Duration of capsule proxy requests.
|
||||
> # TYPE capsule_proxy_response_time_seconds histogram
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="0.005"} 0
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="0.01"} 0
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="0.025"} 0
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="0.05"} 0
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="0.1"} 0
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="0.25"} 0
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="0.5"} 0
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="1"} 0
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="2.5"} 1
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="5"} 1
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="10"} 1
|
||||
> capsule_proxy_response_time_seconds_bucket{path="/api/v1/namespaces",le="+Inf"} 1
|
||||
> capsule_proxy_response_time_seconds_sum{path="/api/v1/namespaces"} 2.206192787
|
||||
> capsule_proxy_response_time_seconds_count{path="/api/v1/namespaces"} 1
|
||||
> ```
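
These metrics can be consumed with standard PromQL; for instance, a sketch of a 95th-percentile latency query per path over the exposed histogram:

```
histogram_quantile(0.95,
  sum(rate(capsule_proxy_response_time_seconds_bucket[5m])) by (le, path))
```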
|
||||
|
||||
## Contributing
|
||||
|
||||
`capsule-proxy` is open-source software released with Apache2 [license](https://github.com/clastix/capsule-proxy/blob/master/LICENSE).
|
||||
|
||||
Contributing guidelines are available [here](https://github.com/clastix/capsule-proxy/blob/master/CONTRIBUTING.md).
|
||||
|
||||
# Reference
|
||||
|
||||
Reference document for Capsule Operator configuration
|
||||
|
||||
## Custom Resource Definition
|
||||
|
||||
Capsule operator uses a Custom Resources Definition (CRD) for _Tenants_.
|
||||
Tenants are cluster-wide resources, so you need cluster-level permissions to work with tenants.
|
||||
You can learn about tenant CRDs in the following [section](./crds-apis)
|
||||
|
||||
## Capsule Configuration
|
||||
|
||||
The Capsule configuration is driven by a Custom Resource named `CapsuleConfiguration`.
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: CapsuleConfiguration
|
||||
metadata:
|
||||
name: default
|
||||
annotations:
|
||||
capsule.clastix.io/ca-secret-name: "capsule-ca"
|
||||
capsule.clastix.io/tls-secret-name: "capsule-tls"
|
||||
capsule.clastix.io/mutating-webhook-configuration-name: "capsule-mutating-webhook-configuration"
|
||||
capsule.clastix.io/validating-webhook-configuration-name: "capsule-validating-webhook-configuration"
|
||||
spec:
|
||||
userGroups: ["capsule.clastix.io"]
|
||||
forceTenantPrefix: false
|
||||
protectedNamespaceRegex: ""
|
||||
```
|
||||
|
||||
Option | Description | Default
--- | --- | ---
`.spec.forceTenantPrefix` | Force the tenant name as prefix for namespaces: `<tenant_name>-<namespace>`. | `false`
`.spec.userGroups` | Array of Capsule groups to which all tenant owners must belong. | `[capsule.clastix.io]`
`.spec.protectedNamespaceRegex` | Disallows creation of namespaces matching the passed regexp. | `null`
`.metadata.annotations.capsule.clastix.io/ca-secret-name` | Set the Capsule Certificate Authority secret name. | `capsule-ca`
`.metadata.annotations.capsule.clastix.io/tls-secret-name` | Set the Capsule TLS secret name. | `capsule-tls`
`.metadata.annotations.capsule.clastix.io/mutating-webhook-configuration-name` | Set the MutatingWebhookConfiguration name. | `capsule-mutating-webhook-configuration`
`.metadata.annotations.capsule.clastix.io/validating-webhook-configuration-name` | Set the ValidatingWebhookConfiguration name. | `capsule-validating-webhook-configuration`
|
||||
|
||||
Upon installation using Kustomize or Helm, a `capsule-default` resource will be created.
|
||||
The reference to this configuration is managed by the CLI flag `--configuration-name`.
|
||||
|
||||
## Capsule Permissions
|
||||
|
||||
In the current implementation, the Capsule operator requires cluster admin permissions to fully operate. Make sure you deploy Capsule having access to the default `cluster-admin` ClusterRole.
|
||||
|
||||
## Admission Controllers
|
||||
|
||||
Capsule implements Kubernetes multi-tenancy capabilities using a minimum set of standard [Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) enabled on the Kubernetes APIs server.
|
||||
|
||||
Here is the list of required Admission Controllers you have to enable to get full support from Capsule:
|
||||
|
||||
* PodNodeSelector
|
||||
* LimitRanger
|
||||
* ResourceQuota
|
||||
* MutatingAdmissionWebhook
|
||||
* ValidatingAdmissionWebhook
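
Where you control the API server configuration, these can be enabled through the `--enable-admission-plugins` flag of `kube-apiserver`; a sketch (several of them are already enabled by default on recent Kubernetes versions):

```
kube-apiserver \
  --enable-admission-plugins=PodNodeSelector,LimitRanger,ResourceQuota,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
```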
|
||||
|
||||
In addition to the required controllers above, Capsule implements its own set through the [Dynamic Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) mechanism, providing callbacks to add further validation or resource patching.
|
||||
|
||||
To see Admission Controls installed by Capsule:
|
||||
|
||||
```
|
||||
$ kubectl get ValidatingWebhookConfiguration
|
||||
NAME WEBHOOKS AGE
|
||||
capsule-validating-webhook-configuration 8 2h
|
||||
|
||||
$ kubectl get MutatingWebhookConfiguration
|
||||
NAME WEBHOOKS AGE
|
||||
capsule-mutating-webhook-configuration 1 2h
|
||||
```
|
||||
|
||||
## Command Options
|
||||
|
||||
The Capsule operator provides the following command options:
|
||||
|
||||
Option | Description | Default
--- | --- | ---
`--metrics-addr` | The address and port where `/metrics` are exposed. | `127.0.0.1:8080`
`--enable-leader-election` | Start a leader election client and gain leadership before executing the main loop. | `true`
`--zap-log-level` | The log verbosity, either a value from 1 to 10 or one of the basic keywords. | `4`
`--zap-devel` | Enable stack traces for deep debugging. | `null`
`--configuration-name` | The name of the CapsuleConfiguration resource to use; a default one is installed automatically. | `capsule-default`
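
These options are passed as container arguments of the `capsule-controller-manager` Deployment; a sketch of how they might appear (the container name and values are illustrative):

```yaml
spec:
  template:
    spec:
      containers:
        - name: manager
          args:
            - --zap-log-level=4
            - --configuration-name=capsule-default
            - --enable-leader-election
```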
|
||||
|
||||
|
||||
## Created Resources
|
||||
|
||||
Once installed, the Capsule operator creates the following resources in your cluster:
|
||||
|
||||
```
|
||||
NAMESPACE RESOURCE
|
||||
namespace/capsule-system
|
||||
customresourcedefinition.apiextensions.k8s.io/tenants.capsule.clastix.io
|
||||
customresourcedefinition.apiextensions.k8s.io/capsuleconfigurations.capsule.clastix.io
|
||||
clusterrole.rbac.authorization.k8s.io/capsule-proxy-role
|
||||
clusterrole.rbac.authorization.k8s.io/capsule-metrics-reader
|
||||
capsuleconfiguration.capsule.clastix.io/capsule-default
|
||||
mutatingwebhookconfiguration.admissionregistration.k8s.io/capsule-mutating-webhook-configuration
|
||||
validatingwebhookconfiguration.admissionregistration.k8s.io/capsule-validating-webhook-configuration
|
||||
capsule-system clusterrolebinding.rbac.authorization.k8s.io/capsule-manager-rolebinding
|
||||
capsule-system clusterrolebinding.rbac.authorization.k8s.io/capsule-proxy-rolebinding
|
||||
capsule-system secret/capsule-ca
|
||||
capsule-system secret/capsule-tls
|
||||
capsule-system service/capsule-controller-manager-metrics-service
|
||||
capsule-system service/capsule-webhook-service
|
||||
capsule-system deployment.apps/capsule-controller-manager
|
||||
```
|
||||
|
|
||||
# Install Capsule on Charmed Kubernetes distribution
|
||||
|
||||
[Canonical Charmed Kubernetes](https://github.com/charmed-kubernetes) is a Kubernetes distribution coming with out-of-the-box tools that support deployments and operational management and make microservice development easier. Combined with Capsule, Charmed Kubernetes allows users to further reduce the operational overhead of Kubernetes setup and management.
|
||||
|
||||
The Charm package for Capsule is available to Charmed Kubernetes users via [Charmhub.io](https://charmhub.io/capsule-k8s).
|
||||
|
||||
# Multi-tenancy the GitOps way
|
||||
|
||||
This document will guide you to manage Tenant resources the GitOps way with Flux configured with the [multi-tenancy lockdown](https://fluxcd.io/docs/installation/#multi-tenancy-lockdown).
|
||||
|
||||
The proposed approach consists in making Flux reconcile Tenant resources as Tenant Owners, while still providing Namespace-as-a-Service to Tenants.
|
||||
|
||||
This means that Tenants can operate and declare multiple Namespaces in their own Git repositories while not escaping the policies enforced by Capsule.
|
||||
|
||||
## Quickstart
|
||||
|
||||
### Install
|
||||
|
||||
In order to make it work you can install the FluxCD addon via Helm:
|
||||
|
||||
```shell
|
||||
helm install -n capsule-system capsule-addon-fluxcd \
|
||||
oci://ghcr.io/projectcapsule/charts/capsule-addon-fluxcd
|
||||
```
|
||||
|
||||
### Configure Tenants
|
||||
|
||||
> The audience for this part is the **platform administrator** user persona.
|
||||
|
||||
In order to make the Flux controllers reconcile Tenant resources impersonating a Tenant Owner, a Tenant Owner defined as a `ServiceAccount` is required.
|
||||
|
||||
To be recognized by the addon that will automate the required configurations, the `ServiceAccount` needs the `capsule.addon.fluxcd/enabled=true` annotation.
|
||||
|
||||
Assuming a configured *oil* `Tenant`, the following Tenant Owner `ServiceAccount` must be declared:
|
||||
|
||||
```yml
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: oil-system
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: gitops-reconciler
|
||||
namespace: oil-system
|
||||
annotations:
|
||||
capsule.addon.fluxcd/enabled: "true"
|
||||
```
|
||||
|
||||
Then set it as a valid *oil* `Tenant` owner, and make Capsule recognize its `Group`:
|
||||
|
||||
```yml
|
||||
---
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
additionalRoleBindings:
|
||||
- clusterRoleName: cluster-admin
|
||||
subjects:
|
||||
- name: gitops-reconciler
|
||||
kind: ServiceAccount
|
||||
namespace: oil-system
|
||||
owners:
|
||||
- name: system:serviceaccount:oil-system:gitops-reconciler
|
||||
kind: ServiceAccount
|
||||
---
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: CapsuleConfiguration
|
||||
metadata:
|
||||
name: default
|
||||
spec:
|
||||
userGroups:
|
||||
- capsule.clastix.io
|
||||
- system:serviceaccounts:oil-system
|
||||
```
|
||||
|
||||
The addon will automate:
|
||||
* RBAC configuration for the `Tenant` owner `ServiceAccount`
|
||||
* `Tenant` owner `ServiceAccount` token generation
|
||||
* `Tenant` owner `kubeconfig` needed to send Flux reconciliation requests through the Capsule proxy
|
||||
* `Tenant` `kubeconfig` distribution across all Tenant `Namespace`s.
|
||||
|
||||
The last automation is needed so that the `kubeconfig` can be set on `Kustomization`s/`HelmRelease`s across all `Tenant`'s `Namespace`s.
|
||||
|
||||
More details on this are available in the deep-dive section.
|
||||
|
||||
### How to use
|
||||
|
||||
> The audience for this part is the **platform administrator** user persona.
|
||||
|
||||
Consider a `Tenant` named *oil* that has a dedicated Git repository that contains oil's configurations.
|
||||
|
||||
You as a platform administrator want to provide to the *oil* `Tenant` a Namespace-as-a-Service with a GitOps experience, allowing the tenant to version the configurations in a Git repository.
|
||||
|
||||
You, as Tenant owner, can configure Flux [reconciliation](https://fluxcd.io/flux/concepts/#reconciliation) resources to be applied as Tenant owner:
|
||||
|
||||
```yml
|
||||
---
|
||||
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
|
||||
kind: Kustomization
|
||||
metadata:
|
||||
name: oil-apps
|
||||
namespace: oil-system
|
||||
spec:
|
||||
serviceAccountName: gitops-reconciler
|
||||
kubeConfig:
|
||||
secretRef:
|
||||
name: gitops-reconciler-kubeconfig
|
||||
key: kubeconfig
|
||||
sourceRef:
|
||||
kind: GitRepository
|
||||
name: oil
|
||||
---
|
||||
apiVersion: source.toolkit.fluxcd.io/v1beta2
|
||||
kind: GitRepository
|
||||
metadata:
|
||||
name: oil
|
||||
namespace: oil-system
|
||||
spec:
|
||||
url: https://github.com/oil/oil-apps
|
||||
```
|
||||
|
||||
Let's analyze the setup field by field:
|
||||
- the `GitRepository` and the `Kustomization` are in a Tenant system `Namespace`
|
||||
- the `Kustomization` refers to a `ServiceAccount` to be impersonated when reconciling the resources the `Kustomization` refers to: this ServiceAccount is an *oil* **Tenant owner**
|
||||
- the `Kustomization` refers also to a `kubeConfig` to be used when reconciling the resources the `Kustomization` refers to: this is needed to make requests through the **Capsule proxy** in order to operate on cluster-wide resources as a Tenant
|
||||
|
||||
The *oil* tenant can also declare new `Namespace`s thanks to the segregation provided by Capsule.
|
||||
|
||||
> Note: explicitly setting the service account name can be avoided when it is configured as the default Service Account name at Flux's [kustomize-controller level](https://fluxcd.io/flux/installation/configuration/multitenancy/#how-to-configure-flux-multi-tenancy) via the `default-service-account` flag.
|
||||
|
||||
More information is available in the [addon repository](https://github.com/projectcapsule/capsule-addon-fluxcd).
|
||||
|
||||
## Deep dive
|
||||
|
||||
### Flux and multi-tenancy
|
||||
|
||||
Flux v2 released a [set of features](https://fluxcd.io/blog/2022/05/may-2022-security-announcement/#whats-next-for-flux) that further increased security for multi-tenancy scenarios.
|
||||
|
||||
These features enable you to:
|
||||
- disable cross-Namespace references to Source CRs from Reconciliation CRs and Notification CRs. This way tenants, in particular, can't access resources outside their space. This can be achieved with the `--no-cross-namespace-refs=true` option of the kustomize, helm, notification, image-reflector and image-automation controllers.
|
||||
- set a default `ServiceAccount` impersonation for Reconciliation CRs. This is supposed to be an unprivileged SA that reconciles just the tenant's desired state. It is enforced when no identity is otherwise specified explicitly in the Reconciliation CR spec. This can be configured with the `--default-service-account=<name>` option of the helm and kustomize controllers.
|
||||
|
||||
> For this responsibility we identify a Tenant GitOps Reconciler identity, which is a ServiceAccount and it's also the tenant owner (more on tenants and owners later on, with Capsule).
|
||||
|
||||
- disallow remote bases for Kustomizations. Actually, this is not strictly required, but it decreases the risk of referencing Kustomizations which aren't part of the controlled GitOps pipelines. In a multi-tenant scenario this is important too. They can be disabled with `--no-remote-bases=true` option of the kustomize controller.
|
||||
|
||||
Where required, to ensure privileged Reconciliation resources have the needed privileges to be reconciled, we can explicitly set a privileged `ServiceAccount`.
|
||||
|
||||
In any case, it is required that the `ServiceAccount` is in the same `Namespace` as the `Kustomization`, so unprivileged spaces should not have privileged `ServiceAccount`s available.
|
||||
|
||||
For example, for the root `Kustomization`:
|
||||
|
||||
```yaml
|
||||
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
|
||||
kind: Kustomization
|
||||
metadata:
|
||||
name: flux-system
|
||||
namespace: flux-system
|
||||
spec:
|
||||
serviceAccountName: kustomize-controller # It has cluster-admin permissions
|
||||
path: ./clusters/staging
|
||||
sourceRef:
|
||||
kind: GitRepository
|
||||
name: flux-system
|
||||
```
|
||||
|
||||
For example, the cluster admin is supposed to apply this Kustomization during the cluster bootstrap, which will also reconcile Flux itself.
|
||||
All the remaining Reconciliation resources can be children of this Kustomization.
|
||||
|
||||

|
||||
|
||||
### Namespace-as-a-Service
|
||||
|
||||
Tenants could have their own set of Namespaces to operate on, but these should be prepared by higher-level roles, like platform admins: the declarations would be part of the platform space.

They would be responsible for tenant administration, and each change (e.g. a new tenant Namespace) would be a request that passes through approval.
|
||||
|
||||

|
||||
|
||||
What if we would like to give tenants the ability to also manage their own space the GitOps way? Enter Capsule.
|
||||
|
||||

|
||||
|
||||
## Manual setup
|
||||
|
||||
> Legend:
|
||||
> - Privileged space: group of Namespaces which are not part of any Tenant.
|
||||
> - Privileged identity: identity that won't pass through Capsule tenant access control.
|
||||
> - Unprivileged space: group of Namespaces which are part of a Tenant.
|
||||
> - Unprivileged identity: identity that would pass through Capsule tenant access control.
|
||||
> - Tenant GitOps Reconciler: a machine Tenant Owner expected to reconcile Tenant desired state.
|
||||
|
||||
### Capsule
|
||||
|
||||
Capsule provides a `Tenant` Custom Resource and the ability to set its owners through `spec.owners` as references to:
|
||||
- `User`
|
||||
- `Group`
|
||||
- `ServiceAccount`
|
||||
|
||||
#### Tenant and Tenant Owner
|
||||
|
||||
Since we would like to let a machine reconcile the Tenant's state, we'll need a `ServiceAccount` as a Tenant Owner:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: gitops-reconciler
|
||||
namespace: my-tenant
|
||||
---
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: my-tenant
|
||||
spec:
|
||||
owners:
|
||||
- name: system:serviceaccount:my-tenant:gitops-reconciler # the Tenant GitOps Reconciler
|
||||
```
|
||||
|
||||
From now on, we'll refer to it as the **Tenant GitOps Reconciler**.
|
||||
|
||||
#### Tenant Groups
|
||||
|
||||
We also need to state that Capsule should enforce tenant access control for requests coming from tenants, and we can do that by specifying one of the `Group`s bound by default by Kubernetes to the Tenant GitOps Reconciler `ServiceAccount` in the `CapsuleConfiguration`:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: CapsuleConfiguration
|
||||
metadata:
|
||||
name: default
|
||||
spec:
|
||||
userGroups:
|
||||
- system:serviceaccounts:my-tenant
|
||||
```
|
||||
|
||||
Other privileged requests, e.g. for reconciliation coming from the Flux privileged `ServiceAccount`s like `flux-system/kustomize-controller` will bypass Capsule.
|
||||
|
||||
### Flux
|
||||
|
||||
Flux lets you specify the identity with which Reconciliation resources are reconciled, through:
|
||||
- `ServiceAccount` impersonation
|
||||
- `kubeconfig`
|
||||
|
||||
#### ServiceAccount
|
||||
|
||||
As by default Flux reconciles those resources with Flux `cluster-admin` Service Accounts, we set at controller-level the **default `ServiceAccount` impersonation** to the unprivileged **Tenant GitOps Reconciler**:
|
||||
|
||||
```yaml
|
||||
apiVersion: kustomize.config.k8s.io/v1beta1
|
||||
kind: Kustomization
|
||||
resources:
|
||||
- flux-controllers.yaml
|
||||
patches:
|
||||
- patch: |
|
||||
- op: add
|
||||
path: /spec/template/spec/containers/0/args/0
|
||||
value: --default-service-account=gitops-reconciler # the Tenant GitOps Reconciler
|
||||
target:
|
||||
kind: Deployment
|
||||
name: "(kustomize-controller|helm-controller)"
|
||||
```
|
||||
|
||||
This way, tenants can't make Flux apply their Reconciliation resources with Flux's privileged Service Accounts by simply omitting `spec.serviceAccountName` on them.
|
||||
|
||||
At the same time at resource-level in privileged space we still can specify a privileged ServiceAccount, and its reconciliation requests won't pass through Capsule validation:
|
||||
|
||||
```yaml
|
||||
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
|
||||
kind: Kustomization
|
||||
metadata:
|
||||
name: flux-system
|
||||
namespace: flux-system
|
||||
spec:
|
||||
serviceAccountName: kustomize-controller
|
||||
path: ./clusters/staging
|
||||
sourceRef:
|
||||
kind: GitRepository
|
||||
name: flux-system
|
||||
```
|
||||
|
||||
#### Kubeconfig
|
||||
|
||||
We also need to specify, on the Tenant's Reconciliation resources, the `Secret` containing a **`kubeconfig`** configured to use the **Capsule Proxy** as the API server, in order to give the Tenant GitOps Reconciler the ability to list cluster-level resources.

The `kubeconfig` also carries the Tenant GitOps Reconciler SA token as the authentication token.
|
||||
|
||||
For example:
|
||||
|
||||
```yaml
|
||||
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
|
||||
kind: Kustomization
|
||||
metadata:
|
||||
name: my-app
|
||||
namespace: my-tenant
|
||||
spec:
|
||||
kubeConfig:
|
||||
secretRef:
|
||||
name: gitops-reconciler-kubeconfig
|
||||
key: kubeconfig
|
||||
sourceRef:
|
||||
kind: GitRepository
|
||||
name: my-tenant
|
||||
path: ./staging
|
||||
```
|
||||
|
||||
> We'll see how to prepare the related `Secret` (i.e. *gitops-reconciler-kubeconfig*) later on.
|
||||
|
||||
Each request made with this kubeconfig will be done impersonating the user of the default impersonation SA, which is the same identity as the token specified in the kubeconfig.

To dig deeper into this, please go to [#Insights](#insights).
|
||||
|
||||
## The recipe
|
||||
|
||||
### How to setup Tenants GitOps-ready
|
||||
|
||||
Given that [Capsule](https://github.com/projectcapsule/capsule) and [Capsule Proxy](https://github.com/clastix/capsule-proxy) are installed, and [Flux v2](https://github.com/fluxcd/flux2) is configured with the [multi-tenancy lockdown](https://fluxcd.io/docs/installation/#multi-tenancy-lockdown) features, as in the patch below:
|
||||
|
||||
```yaml
|
||||
apiVersion: kustomize.config.k8s.io/v1beta1
|
||||
kind: Kustomization
|
||||
resources:
|
||||
- flux-components.yaml
|
||||
patches:
|
||||
- patch: |
|
||||
- op: add
|
||||
path: /spec/template/spec/containers/0/args/0
|
||||
value: --no-cross-namespace-refs=true
|
||||
target:
|
||||
kind: Deployment
|
||||
name: "(kustomize-controller|helm-controller|notification-controller|image-reflector-controller|image-automation-controller)"
|
||||
- patch: |
|
||||
- op: add
|
||||
path: /spec/template/spec/containers/0/args/-
|
||||
value: --no-remote-bases=true
|
||||
target:
|
||||
kind: Deployment
|
||||
name: "kustomize-controller"
|
||||
- patch: |
|
||||
- op: add
|
||||
path: /spec/template/spec/containers/0/args/0
|
||||
value: --default-service-account=gitops-reconciler # The Tenant GitOps Reconciler
|
||||
target:
|
||||
kind: Deployment
|
||||
name: "(kustomize-controller|helm-controller)"
|
||||
- patch: |
|
||||
- op: add
|
||||
path: /spec/serviceAccountName
|
||||
value: kustomize-controller
|
||||
target:
|
||||
kind: Kustomization
|
||||
name: "flux-system"
|
||||
```
|
||||
|
||||
this is the required set of resources to setup a Tenant:
|
||||
- `Namespace`: the Tenant GitOps Reconciler "home". This is not part of the Tenant to avoid a chicken & egg problem:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
name: my-tenant
|
||||
```
|
||||
- `ServiceAccount` of the Tenant GitOps Reconciler, in the above `Namespace`:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: gitops-reconciler
|
||||
namespace: my-tenant
|
||||
```
|
||||
- `Tenant` resource with the above Tenant GitOps Reconciler's SA as Tenant Owner, with:
|
||||
- Additional binding to the *cluster-admin* `ClusterRole` for the Tenant's `Namespace`s and the `Namespace` of the Tenant GitOps Reconciler's `ServiceAccount`.
|
||||
By default Capsule binds only `admin` ClusterRole, which has no privileges over Custom Resources, but *cluster-admin* has. This is needed to operate on Flux CRs:
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: my-tenant
|
||||
spec:
|
||||
additionalRoleBindings:
|
||||
- clusterRoleName: cluster-admin
|
||||
subjects:
|
||||
- name: gitops-reconciler
|
||||
kind: ServiceAccount
|
||||
namespace: my-tenant
|
||||
owners:
|
||||
- name: system:serviceaccount:my-tenant:gitops-reconciler
|
||||
kind: ServiceAccount
|
||||
```
|
||||
- Additional binding to the *cluster-admin* `ClusterRole` for the home `Namespace` of the Tenant GitOps Reconciler's `ServiceAccount`, so that the Tenant GitOps Reconciler can create Flux CRs in the tenant home Namespace and use the Reconciliation resource's `spec.targetNamespace` to place resources into the `Tenant`'s `Namespace`s:
|
||||
```yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: gitops-reconciler
|
||||
namespace: my-tenant
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: cluster-admin
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: gitops-reconciler
|
||||
namespace: my-tenant
|
||||
```
|
||||
- Additional `Group` in the `CapsuleConfiguration` to make Tenant GitOps Reconciler requests pass through Capsule admission (group `system:serviceaccounts:<tenant-gitops-reconciler-home-namespace>`):
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1alpha1
|
||||
kind: CapsuleConfiguration
|
||||
metadata:
|
||||
name: default
|
||||
spec:
|
||||
userGroups:
|
||||
- system:serviceaccounts:my-tenant
|
||||
```
|
||||
- Additional `ClusterRole` with related `ClusterRoleBinding` that allows the Tenant GitOps Reconciler to impersonate its own `User` (e.g. `system:serviceaccount:my-tenant:gitops-reconciler`):
|
||||
```yaml
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRole
|
||||
metadata:
|
||||
name: my-tenant-gitops-reconciler-impersonator
|
||||
rules:
|
||||
- apiGroups: [""]
|
||||
resources: ["users"]
|
||||
verbs: ["impersonate"]
|
||||
resourceNames: ["system:serviceaccount:my-tenant:gitops-reconciler"]
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: my-tenant-gitops-reconciler-impersonate
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: my-tenant-gitops-reconciler-impersonator
|
||||
subjects:
|
||||
- name: gitops-reconciler
|
||||
kind: ServiceAccount
|
||||
namespace: my-tenant
|
||||
```
|
||||
- `Secret` with `kubeconfig` for the Tenant GitOps Reconciler with Capsule Proxy as `kubeconfig.server` and the SA token as `kubeconfig.token`.
|
||||
> This is supported only with Service Account static tokens.
|
||||
- Flux Source and Reconciliation resources that refer to the Tenant's desired state. This typically points to a specific path inside a dedicated Git repository, where the tenant's root configuration resides:
|
||||
```yaml
|
||||
apiVersion: source.toolkit.fluxcd.io/v1beta2
|
||||
kind: GitRepository
|
||||
metadata:
|
||||
name: my-tenant
|
||||
namespace: my-tenant
|
||||
spec:
|
||||
url: https://github.com/my-tenant/all.git # Git repository URL
|
||||
ref:
|
||||
branch: main # Git reference
|
||||
---
|
||||
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
|
||||
kind: Kustomization
|
||||
metadata:
|
||||
name: my-tenant
|
||||
namespace: my-tenant
|
||||
spec:
|
||||
kubeConfig:
|
||||
secretRef:
|
||||
name: gitops-reconciler-kubeconfig
|
||||
key: kubeconfig
|
||||
sourceRef:
|
||||
kind: GitRepository
|
||||
name: my-tenant
|
||||
path: config # Path to config from GitRepository Source
|
||||
```
|
||||
This `Kustomization` can in turn refer to further `Kustomization` resources creating a tenant configuration hierarchy.
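
  As a sketch, such a child `Kustomization` could use `spec.targetNamespace` to place the reconciled resources into one of the Tenant's `Namespace`s (names and paths are illustrative):

  ```yaml
  apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
  kind: Kustomization
  metadata:
    name: my-tenant-app
    namespace: my-tenant
  spec:
    targetNamespace: my-tenant-apps   # a Namespace owned by the Tenant
    kubeConfig:
      secretRef:
        name: gitops-reconciler-kubeconfig
        key: kubeconfig
    sourceRef:
      kind: GitRepository
      name: my-tenant
    path: config/apps
  ```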
|
||||
|
||||
#### Generate the Capsule Proxy kubeconfig Secret
|
||||
|
||||
You need to create a `Secret` in the Tenant GitOps Reconciler home `Namespace`, containing the `kubeconfig` that specifies:
|
||||
- `server`: Capsule Proxy `Service` URL with related CA certificate for TLS
|
||||
- `token`: the token of the `Tenant` GitOps Reconciler
|
||||
|
||||
With required privileges over the target `Namespace` to create `Secret`, you can generate it with the `proxy-kubeconfig-generator` utility:
|
||||
|
||||
```sh
|
||||
$ go install github.com/maxgio92/proxy-kubeconfig-generator@latest
|
||||
$ proxy-kubeconfig-generator \
|
||||
--kubeconfig-secret-key kubeconfig \
|
||||
--namespace my-tenant \
|
||||
--server 'https://capsule-proxy.capsule-system.svc:9001' \
|
||||
--server-tls-secret-namespace capsule-system \
|
||||
--server-tls-secret-name capsule-proxy \
|
||||
--serviceaccount gitops-reconciler
|
||||
```
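
The resulting `Secret` contains a standard `kubeconfig` whose cluster points at the Capsule Proxy and whose user carries the ServiceAccount token; roughly (a sketch, values are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: capsule-proxy
    cluster:
      server: https://capsule-proxy.capsule-system.svc:9001
      certificate-authority-data: <capsule-proxy CA, base64-encoded>
users:
  - name: gitops-reconciler
    user:
      token: <ServiceAccount static token>
contexts:
  - name: capsule-proxy
    context:
      cluster: capsule-proxy
      user: gitops-reconciler
current-context: capsule-proxy
```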
|
||||
|
||||
### How a Tenant can declare its state
|
||||
|
||||
Considering the example above, a Tenant `my-tenant` could place further Reconciliation resources in its own repository (i.e. `https://github.com/my-tenant/all`), on branch `main` at path `/config`, like:
|
||||
|
||||
```yaml
|
||||
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
|
||||
kind: Kustomization
|
||||
metadata:
|
||||
name: my-apps
|
||||
namespace: my-tenant
|
||||
spec:
|
||||
kubeConfig:
|
||||
secretRef:
|
||||
name: gitops-reconciler-kubeconfig
|
||||
key: kubeconfig
|
||||
sourceRef:
|
||||
kind: GitRepository
|
||||
name: my-tenant
|
||||
path: config/apps
|
||||
```
|
||||
|
||||
which refer to the same Source but a different path (i.e. `config/apps`) that could contain the tenant's application manifests.
|
||||
|
||||
The same is valid for `HelmRelease`s, which instead refer to a `HelmRepository` Source.
|
||||
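As a sketch, assuming a chart repository published by the tenant (names and URL are placeholders), the pair could look like:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: my-charts
  namespace: my-tenant
spec:
  interval: 10m
  url: https://my-tenant.github.io/charts # Helm repository URL (placeholder)
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-app
  namespace: my-tenant
spec:
  interval: 10m
  chart:
    spec:
      chart: my-app
      sourceRef:
        kind: HelmRepository
        name: my-charts
  kubeConfig:
    secretRef:
      name: gitops-reconciler-kubeconfig
      key: kubeconfig
```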
|
||||
The reconciliation requests will pass through Capsule Proxy as the Tenant GitOps Reconciler, with impersonation. Then, as the identity group of the requests matches the Capsule groups, they will be validated by Capsule, and finally RBAC will provide boundaries to the Tenant GitOps Reconciler privileges.
|
||||
|
||||
> If `spec.kubeConfig` is not specified, the Flux privileged `ServiceAccount` will impersonate the default unprivileged Tenant GitOps Reconciler `ServiceAccount`, as configured with the `--default-service-account` option of the kustomize and helm controllers, but list requests on cluster-level resources like `Namespace`s will fail.
|
||||
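For reference, that flag is usually set by patching the Flux controllers in the `flux-system` Kustomization, similar to the lockdown described in the Flux multi-tenancy documentation. A sketch, assuming the Tenant GitOps Reconciler Service Account used throughout this guide:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  # Make the kustomize and helm controllers impersonate the unprivileged
  # Service Account by default when spec.serviceAccountName is not set
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --default-service-account=gitops-reconciler
    target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller)"
```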
|
||||
## Full setup
|
||||
|
||||
To have a glimpse on a full setup you can follow the [flux2-capsule-multi-tenancy](https://github.com/clastix/flux2-capsule-multi-tenancy.git) repository.
|
||||
For simplicity, the system and tenants' declarations are in the same repository, but on dedicated git branches.
|
||||
|
||||
It's a fork of [flux2-multi-tenancy](https://github.com/fluxcd/flux2-multi-tenancy.git), but with the Capsule integration described above.
|
||||
|
||||
## Insights
|
||||
|
||||
### Why ServiceAccount that impersonates its own User
|
||||
|
||||
As stated just above, you might be wondering why a user would make a request impersonating itself (i.e. the Tenant GitOps Reconciler ServiceAccount User).
|
||||
|
||||
This is because we need to make tenant reconciliation requests through Capsule Proxy, and we want to protect against the risk of privilege escalation through bypassing impersonation.
|
||||
|
||||
### Threats
|
||||
|
||||
##### Bypass unprivileged impersonation
|
||||
|
||||
The reason why we can't make impersonation optional is that each tenant is allowed to specify neither the kubeconfig nor the impersonation SA for the Reconciliation resource, and in any case that kubeconfig could contain arbitrary privileged credentials; Flux would otherwise use the privileged ServiceAccount to reconcile tenant resources.
|
||||
|
||||
That way, a tenant would be able to manage the cluster the GitOps way as if they were a cluster admin.
|
||||
|
||||
Furthermore, let's see whether there are other vulnerabilities we are able to protect against.
|
||||
|
||||
##### Impersonate privileged SA
|
||||
|
||||
Then, what if a tenant tries to escalate by using one of the Flux controllers privileged `ServiceAccount`s?
|
||||
|
||||
As `spec.serviceAccountName` of a Reconciliation resource cannot reference Service Accounts across namespaces, tenants are able to let Flux apply their own resources only with ServiceAccounts that reside in their own Namespaces. That is, the Namespace of the ServiceAccount and the Namespace of the Reconciliation resource must match.
|
||||
|
||||
Nor could a tenant create the Reconciliation resource in a Namespace where a privileged ServiceAccount is present (like flux-system), as the Namespace has to be owned by the Tenant: Capsule would block those Reconciliation resource creation requests.
|
||||
|
||||
##### Create and impersonate privileged SA
|
||||
|
||||
Then, what if a tenant tries to escalate by creating a privileged `ServiceAccount` inside one of their own `Namespace`s?
|
||||
|
||||
A tenant could create a `ServiceAccount` in an owned `Namespace`, but they can bind a ClusterRole neither at cluster level nor at a non-owned Namespace level, as that wouldn't be permitted by Capsule admission controllers.
|
||||
|
||||
Now let's go on with the practical part.
|
||||
|
||||
##### Change ownership of privileged Namespaces (e.g. flux-system)
|
||||
|
||||
A tenant could try to use a privileged `ServiceAccount` by changing ownership of a privileged Namespace, so that they could create a Reconciliation resource there and use the privileged SA.
|
||||
This is not permitted, as a tenant can't patch Namespaces that they have not created: Capsule request validation would not pass.
|
||||
|
||||
For other protections against threats in this multi-tenancy scenario please see the Capsule [Multi-Tenancy Benchmark](/docs/general/mtb).
|
||||
|
||||
## References
|
||||
- https://fluxcd.io/docs/installation/#multi-tenancy-lockdown
|
||||
- https://fluxcd.io/blog/2022/05/may-2022-security-announcement/
|
||||
- https://github.com/clastix/capsule-proxy/issues/218
|
||||
- https://github.com/projectcapsule/capsule/issues/528
|
||||
- https://github.com/clastix/flux2-capsule-multi-tenancy
|
||||
- https://github.com/fluxcd/flux2-multi-tenancy
|
||||
- https://fluxcd.io/docs/guides/repository-structure/
|
||||
@@ -1,2 +0,0 @@
|
||||
# Guides
|
||||
Guides and tutorials on how to integrate Capsule in your Kubernetes environment.
|
||||
@@ -1,145 +0,0 @@
|
||||
# Kubernetes Dashboard
|
||||
|
||||
This guide describes how to integrate the [Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/) and [Capsule Proxy](https://capsule.clastix.io/docs/general/proxy/) with OIDC authorization.
|
||||
|
||||
In this guide, we will use [Keycloak](https://www.keycloak.org) as the Identity Provider.
|
||||
|
||||

|
||||
|
||||
## Configuring oauth2-proxy
|
||||
|
||||
To enable the proxy authorization from the Kubernetes dashboard to Keycloak, we need to use an OAuth proxy.
|
||||
In this article, we will use [oauth2-proxy](https://oauth2-proxy.github.io/oauth2-proxy/) and install it as a pod in the Kubernetes Dashboard namespace.
|
||||
Alternatively, we can install `oauth2-proxy` in a different namespace or use it as a sidecar container in the Kubernetes Dashboard deployment.
|
||||
|
||||
Prepare the values for oauth2-proxy:
|
||||
```bash
|
||||
cat > values-oauth2-proxy.yaml <<EOF
|
||||
config:
|
||||
clientID: "${OIDC_CLIENT_ID}"
|
||||
clientSecret: ${OIDC_CLIENT_SECRET}
|
||||
|
||||
extraArgs:
|
||||
provider: "keycloak-oidc"
|
||||
redirect-url: "https://${DASHBOARD_URL}/oauth2/callback"
|
||||
oidc-issuer-url: "https://${KEYCLOAK_URL}/auth/realms/${OIDC_CLIENT_ID}"
|
||||
pass-access-token: true
|
||||
set-authorization-header: true
|
||||
pass-user-headers: true
|
||||
|
||||
ingress:
|
||||
enabled: true
|
||||
path: "/oauth2"
|
||||
hosts:
|
||||
- ${DASHBOARD_URL}
|
||||
tls:
|
||||
- hosts:
|
||||
- ${DASHBOARD_URL}
|
||||
EOF
|
||||
```
|
||||
|
||||
> Values used for the config:
|
||||
>
|
||||
> - **OIDC_CLIENT_ID**: the Keycloak client ID (name) used by the Kubernetes API Server for authorization
|
||||
> - **OIDC_CLIENT_SECRET**: secret for the client (`OIDC_CLIENT_ID`). You can see it from the Keycloak UI -> Clients -> `OIDC_CLIENT_ID` -> Credentials
|
||||
> - **DASHBOARD_URL**: the Kubernetes Dashboard URL
|
||||
> - **KEYCLOAK_URL**: the Keycloak URL
|
||||
|
||||
More information about the `keycloak-oidc` provider can be found on the [oauth2-proxy documentation](https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/oauth_provider/#keycloak-oidc-auth-provider).
|
||||
|
||||
We're ready to install the `oauth2-proxy`:
|
||||
|
||||
```bash
|
||||
helm repo add oauth2-proxy https://oauth2-proxy.github.io/manifests
|
||||
helm install oauth2-proxy oauth2-proxy/oauth2-proxy -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-oauth2-proxy.yaml
|
||||
```
|
||||
|
||||
## Configuring Keycloak
|
||||
|
||||
The Kubernetes cluster must be configured with a valid OIDC provider: for this guide, we take for granted that Keycloak is used; if you need more info, please follow the [OIDC Authentication](/docs/guides/oidc-auth) section.
|
||||
|
||||
In such a scenario, your `kube-apiserver.yaml` manifest should contain the following content:
|
||||
```yaml
|
||||
spec:
|
||||
containers:
|
||||
- command:
|
||||
- kube-apiserver
|
||||
...
|
||||
- --oidc-issuer-url=https://${OIDC_ISSUER}
|
||||
- --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
|
||||
- --oidc-client-id=${OIDC_CLIENT_ID}
|
||||
- --oidc-username-claim=preferred_username
|
||||
- --oidc-groups-claim=groups
|
||||
- --oidc-username-prefix=-
|
||||
```
|
||||
|
||||
Where `${OIDC_CLIENT_ID}` refers to the client ID for which all tokens must be issued.
|
||||
|
||||
For this client we need:
|
||||
1. Check `Valid Redirect URIs`: in the `oauth2-proxy` configuration we set `redirect-url: "https://${DASHBOARD_URL}/oauth2/callback"`, so this path needs to be added to the `Valid Redirect URIs`
|
||||
2. Create a mapper with Mapper Type 'Group Membership' and Token Claim Name 'groups'.
|
||||
3. Create a mapper with Mapper Type 'Audience' and Included Client Audience and Included Custom Audience set to your client name (`OIDC_CLIENT_ID`).
|
||||
|
||||
## Configuring Kubernetes Dashboard
|
||||
|
||||
If your Capsule Proxy uses HTTPS and the CA certificate is not the Kubernetes CA, you need to add a secret with the CA for the Capsule Proxy URL.
|
||||
```bash
|
||||
cat > ca.crt << EOF
|
||||
-----BEGIN CERTIFICATE-----
|
||||
...
|
||||
...
|
||||
...
|
||||
-----END CERTIFICATE-----
|
||||
EOF
|
||||
|
||||
kubectl create secret generic certificate --from-file=ca.crt=ca.crt -n ${KUBERNETES_DASHBOARD_NAMESPACE}
|
||||
```
|
||||
|
||||
Prepare the values for the Kubernetes Dashboard:
|
||||
```bash
|
||||
cat > values-kubernetes-dashboard.yaml <<EOF
|
||||
extraVolumes:
|
||||
- name: token-ca
|
||||
projected:
|
||||
sources:
|
||||
- serviceAccountToken:
|
||||
expirationSeconds: 86400
|
||||
path: token
|
||||
- secret:
|
||||
name: certificate
|
||||
items:
|
||||
- key: ca.crt
|
||||
path: ca.crt
|
||||
extraVolumeMounts:
|
||||
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
|
||||
name: token-ca
|
||||
|
||||
ingress:
|
||||
enabled: true
|
||||
annotations:
|
||||
nginx.ingress.kubernetes.io/auth-signin: https://${DASHBOARD_URL}/oauth2/start?rd=$escaped_request_uri
|
||||
nginx.ingress.kubernetes.io/auth-url: https://${DASHBOARD_URL}/oauth2/auth
|
||||
nginx.ingress.kubernetes.io/auth-response-headers: "authorization"
|
||||
hosts:
|
||||
- ${DASHBOARD_URL}
|
||||
tls:
|
||||
- hosts:
|
||||
- ${DASHBOARD_URL}
|
||||
|
||||
extraEnv:
|
||||
- name: KUBERNETES_SERVICE_HOST
|
||||
value: '${CAPSULE_PROXY_URL}'
|
||||
- name: KUBERNETES_SERVICE_PORT
|
||||
value: '${CAPSULE_PROXY_PORT}'
|
||||
EOF
|
||||
```
|
||||
|
||||
To add the Certificate Authority for the Capsule Proxy URL, we use the volume `token-ca` to mount the `ca.crt` file.
|
||||
Additionally, we set the environment variables `KUBERNETES_SERVICE_HOST` and `KUBERNETES_SERVICE_PORT` to route requests to the Capsule Proxy.
|
||||
|
||||
Now you can install the Kubernetes Dashboard:
|
||||
|
||||
```bash
|
||||
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
|
||||
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard -n ${KUBERNETES_DASHBOARD_NAMESPACE} -f values-kubernetes-dashboard.yaml
|
||||
```
|
||||
@@ -1,140 +0,0 @@
|
||||
# Capsule on AWS EKS
|
||||
This is an example of how to install an AWS EKS cluster and one user
|
||||
managed by Capsule. It is based on [Using IAM Groups to manage Kubernetes access](https://www.eksworkshop.com/beginner/091_iam-groups/intro/).
|
||||
|
||||
Create EKS cluster:
|
||||
|
||||
```bash
|
||||
export AWS_DEFAULT_REGION="eu-west-1"
|
||||
export AWS_ACCESS_KEY_ID="xxxxx"
|
||||
export AWS_SECRET_ACCESS_KEY="xxxxx"
|
||||
|
||||
eksctl create cluster \
|
||||
--name=test-k8s \
|
||||
--managed \
|
||||
--node-type=t3.small \
|
||||
--node-volume-size=20 \
|
||||
--kubeconfig=kubeconfig.conf
|
||||
```
|
||||
|
||||
Create AWS User `alice` using CloudFormation, create AWS access files and
|
||||
kubeconfig for such user:
|
||||
|
||||
```bash
|
||||
cat > cf.yml << EOF
|
||||
Parameters:
|
||||
ClusterName:
|
||||
Type: String
|
||||
Resources:
|
||||
UserAlice:
|
||||
Type: AWS::IAM::User
|
||||
Properties:
|
||||
UserName: !Sub "alice-${ClusterName}"
|
||||
Policies:
|
||||
- PolicyName: !Sub "alice-${ClusterName}-policy"
|
||||
PolicyDocument:
|
||||
Version: "2012-10-17"
|
||||
Statement:
|
||||
- Sid: AllowAssumeOrganizationAccountRole
|
||||
Effect: Allow
|
||||
Action: sts:AssumeRole
|
||||
Resource: !GetAtt RoleAlice.Arn
|
||||
AccessKeyAlice:
|
||||
Type: AWS::IAM::AccessKey
|
||||
Properties:
|
||||
UserName: !Ref UserAlice
|
||||
RoleAlice:
|
||||
Type: AWS::IAM::Role
|
||||
Properties:
|
||||
Description: !Sub "IAM role for the alice-${ClusterName} user"
|
||||
RoleName: !Sub "alice-${ClusterName}"
|
||||
AssumeRolePolicyDocument:
|
||||
Version: 2012-10-17
|
||||
Statement:
|
||||
- Effect: Allow
|
||||
Principal:
|
||||
AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
|
||||
Action: sts:AssumeRole
|
||||
Outputs:
|
||||
RoleAliceArn:
|
||||
Description: The ARN of the Alice IAM Role
|
||||
Value: !GetAtt RoleAlice.Arn
|
||||
Export:
|
||||
Name:
|
||||
Fn::Sub: "${AWS::StackName}-RoleAliceArn"
|
||||
AccessKeyAlice:
|
||||
Description: The AccessKey for Alice user
|
||||
Value: !Ref AccessKeyAlice
|
||||
Export:
|
||||
Name:
|
||||
Fn::Sub: "${AWS::StackName}-AccessKeyAlice"
|
||||
SecretAccessKeyAlice:
|
||||
Description: The SecretAccessKey for Alice user
|
||||
Value: !GetAtt AccessKeyAlice.SecretAccessKey
|
||||
Export:
|
||||
Name:
|
||||
Fn::Sub: "${AWS::StackName}-SecretAccessKeyAlice"
|
||||
EOF
|
||||
|
||||
eval aws cloudformation deploy --capabilities CAPABILITY_NAMED_IAM \
|
||||
--parameter-overrides "ClusterName=test-k8s" \
|
||||
--stack-name "test-k8s-users" --template-file cf.yml
|
||||
|
||||
AWS_CLOUDFORMATION_DETAILS=$(aws cloudformation describe-stacks --stack-name "test-k8s-users")
|
||||
ALICE_ROLE_ARN=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"RoleAliceArn\") .OutputValue")
|
||||
ALICE_USER_ACCESSKEY=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"AccessKeyAlice\") .OutputValue")
|
||||
ALICE_USER_SECRETACCESSKEY=$(echo "${AWS_CLOUDFORMATION_DETAILS}" | jq -r ".Stacks[0].Outputs[] | select(.OutputKey==\"SecretAccessKeyAlice\") .OutputValue")
|
||||
|
||||
eksctl create iamidentitymapping --cluster="test-k8s" --arn="${ALICE_ROLE_ARN}" --username alice --group capsule.clastix.io
|
||||
|
||||
cat > aws_config << EOF
|
||||
[profile alice]
|
||||
role_arn=${ALICE_ROLE_ARN}
|
||||
source_profile=alice
|
||||
EOF
|
||||
|
||||
cat > aws_credentials << EOF
|
||||
[alice]
|
||||
aws_access_key_id=${ALICE_USER_ACCESSKEY}
|
||||
aws_secret_access_key=${ALICE_USER_SECRETACCESSKEY}
|
||||
EOF
|
||||
|
||||
eksctl utils write-kubeconfig --cluster=test-k8s --kubeconfig="kubeconfig-alice.conf"
|
||||
cat >> kubeconfig-alice.conf << EOF
|
||||
- name: AWS_PROFILE
|
||||
value: alice
|
||||
- name: AWS_CONFIG_FILE
|
||||
value: aws_config
|
||||
- name: AWS_SHARED_CREDENTIALS_FILE
|
||||
value: aws_credentials
|
||||
EOF
|
||||
```
|
||||
|
||||
Export "admin" kubeconfig to be able to install Capsule:
|
||||
|
||||
```bash
|
||||
export KUBECONFIG=kubeconfig.conf
|
||||
```
|
||||
|
||||
Install capsule from helm chart:
|
||||
|
||||
```bash
|
||||
helm repo add clastix https://clastix.github.io/charts
|
||||
helm upgrade --install --version 0.0.19 --namespace capsule-system --create-namespace capsule clastix/capsule
|
||||
```
|
||||
|
||||
Use the default Tenant example:
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/samples/capsule_v1beta1_tenant.yaml
|
||||
```
|
||||
|
||||
Based on the tenant configuration above the user `alice` should be able
|
||||
to create namespaces. Switch to a new terminal and try to create a namespace as user `alice`:
|
||||
|
||||
```bash
|
||||
# Unset AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY if defined
|
||||
unset AWS_ACCESS_KEY_ID
|
||||
unset AWS_SECRET_ACCESS_KEY
|
||||
kubectl create namespace test --kubeconfig="kubeconfig-alice.conf"
|
||||
```
|
||||
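As the cluster admin, you can then verify that the new namespace has been assigned to the tenant. A hedged check, assuming the sample manifest creates a Tenant named `oil` owned by `alice` (adjust the name if yours differs):

```bash
export KUBECONFIG=kubeconfig.conf
# Namespaces labelled by Capsule as belonging to the tenant
kubectl get namespaces -l capsule.clastix.io/tenant=oil
# Tenant status, including the list of owned namespaces
kubectl get tenant oil -o jsonpath='{.status}'
```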
@@ -1,3 +0,0 @@
|
||||
# Capsule on Azure Kubernetes Service
|
||||
|
||||
This reference implementation introduces the recommended starting (baseline) infrastructure architecture for implementing a multi-tenancy Azure AKS cluster using Capsule. See [CoAKS](https://github.com/clastix/coaks-baseline-architecture).
|
||||
@@ -1,10 +0,0 @@
|
||||
# Capsule on Managed Kubernetes
|
||||
Capsule Operator can be easily installed on a Managed Kubernetes Service. Since you do not have access to the Kubernetes API Server, you should check with the provider of the service that:
|
||||
|
||||
- the default `cluster-admin` ClusterRole is accessible
|
||||
- the following Admission Webhooks are enabled on the APIs Server:
|
||||
- PodNodeSelector
|
||||
- LimitRanger
|
||||
- ResourceQuota
|
||||
- MutatingAdmissionWebhook
|
||||
- ValidatingAdmissionWebhook
|
||||
@@ -1,181 +0,0 @@
|
||||
# Monitoring Capsule
|
||||
|
||||
The Capsule dashboard allows you to track the health and performance of Capsule manager and tenants, with particular attention to resources saturation, server responses, and latencies. Prometheus and Grafana are requirements for monitoring Capsule.
|
||||
|
||||
### Prometheus
|
||||
|
||||
Prometheus is an open-source monitoring system and time series database; it is based on a multi-dimensional data model and uses PromQL, a powerful query language, to leverage it.
|
||||
|
||||
- Minimum version: 1.0.0
|
||||
|
||||
### Grafana
|
||||
|
||||
Grafana is an open-source monitoring solution that offers a flexible way to generate visuals and configure dashboards.
|
||||
|
||||
- Minimum version: 7.5.5
|
||||
|
||||
To quickly deploy this monitoring stack, consider installing the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator).
|
||||
|
||||
## Quick Start
|
||||
|
||||
The Capsule Helm [charts](https://github.com/projectcapsule/capsule/tree/master/charts/capsule) allow you to automatically create the minimum Kubernetes resources needed for the proper functioning of the dashboard:
|
||||
|
||||
* ServiceMonitor
|
||||
* Role
|
||||
* RoleBinding
|
||||
|
||||
N.B: we assume that a ServiceAccount resource has already been created so it can easily interact with the Prometheus API.
|
||||
|
||||
### Helm install
|
||||
|
||||
During Capsule installation, set the `serviceMonitor` fields as follows:
|
||||
|
||||
```yaml
|
||||
serviceMonitor:
|
||||
enabled: true
|
||||
[...]
|
||||
serviceAccount:
|
||||
name: <prometheus-sa>
|
||||
namespace: <prometheus-sa-namespace>
|
||||
```
|
||||
Take a look at the Helm charts [README.md](https://github.com/projectcapsule/capsule/blob/master/charts/capsule/README.md#customize-the-installation) file for further customization.
|
||||
|
||||
### Check Service Monitor
|
||||
|
||||
Verify that the service monitor is working correctly through the Prometheus "targets" page:
|
||||
|
||||

|
||||
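If you prefer the CLI, a quick way to reach that page is a port-forward to the Prometheus service; names below are assumptions that depend on how the Prometheus Operator was installed:

```
# Forward the Prometheus web UI locally (service and namespace names may differ)
kubectl -n monitoring port-forward svc/prometheus-operated 9090
# then browse http://localhost:9090/targets and look for the Capsule ServiceMonitor target
```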
|
||||
### Deploy dashboard
|
||||
A dashboard for Grafana is provided as [dashboard.json](https://github.com/projectcapsule/capsule/blob/master/config/grafana/dashboard.json).
|
||||
|
||||
Render the dashboard as a ConfigMap with `kustomize` and apply it in the namespace where Grafana is installed, making sure to select the correct Prometheus datasource:
|
||||
|
||||
```
|
||||
kubectl -n <grafana-namespace> apply -k config/grafana
|
||||
```
|
||||
|
||||
Alternatively, manually upload the dashboard in JSON format to Grafana through _Create -> Import_:
|
||||
|
||||

|
||||
|
||||
## In-depth view
|
||||
|
||||
### Features
|
||||
* [Manager controllers](#manager-controllers)
|
||||
* [Webhook error rate](#webhook-error-rate)
|
||||
* [Webhook latency](#webhook-latency)
|
||||
* [REST client latency](#rest-client-latency)
|
||||
* [REST client error rate](#rest-client-error-rate)
|
||||
* [Saturation](#saturation)
|
||||
* [Workqueue](#workqueue)
|
||||
|
||||
#### Manager controllers
|
||||
|
||||

|
||||
|
||||
##### Description
|
||||
|
||||
This section provides information about the medium time delay between manager client input, side effects, and new state determination (reconciliation).
|
||||
|
||||
##### Dependent variables and available values
|
||||
|
||||
* Controller name
|
||||
- capsuleconfiguration
|
||||
- clusterrole
|
||||
- clusterrolebinding
|
||||
- endpoints
|
||||
- endpointslice
|
||||
- secret
|
||||
- service
|
||||
- tenant
|
||||
|
||||
#### Webhook error rate
|
||||
|
||||

|
||||
|
||||
##### Description
|
||||
|
||||
This section provides information about webhook requests response, mainly focusing on server-side errors research.
|
||||
|
||||
##### Dependent variables and available values
|
||||
|
||||
* Webhook
|
||||
- cordoning
|
||||
- ingresses
|
||||
- namespace-owner-reference
|
||||
- namespaces
|
||||
- networkpolicies
|
||||
- persistentvolumeclaims
|
||||
- pods
|
||||
- services
|
||||
- tenants
|
||||
|
||||
#### Webhook latency
|
||||
|
||||

|
||||
|
||||
##### Description
|
||||
|
||||
This section provides information about the medium time delay between webhook trigger, side effects, and data written on etcd.
|
||||
|
||||
##### Dependent variables and available values
|
||||
|
||||
* Webhook
|
||||
- cordoning
|
||||
- ingresses
|
||||
- namespace-owner-reference
|
||||
- namespaces
|
||||
- networkpolicies
|
||||
- persistentvolumeclaims
|
||||
- pods
|
||||
- services
|
||||
- tenants
|
||||
|
||||
#### REST client latency
|
||||
|
||||

|
||||
|
||||
##### Description
|
||||
|
||||
This section provides information about the medium time delay between all the calls done by the controller and the API server.
|
||||
Data display may depend on the REST client verb considered and on available REST client URLs.
|
||||
|
||||
YMMV
|
||||
|
||||
##### Dependent variables and available values
|
||||
|
||||
* REST client URL
|
||||
* REST client verb
|
||||
- GET
|
||||
- PUT
|
||||
- POST
|
||||
- PATCH
|
||||
- DELETE
|
||||
|
||||
#### REST client error rate
|
||||
|
||||

|
||||
|
||||
##### Description
|
||||
|
||||
This section provides information about the total REST client requests per unit time, grouped by returned status code.
|
||||
|
||||
#### Saturation
|
||||
|
||||

|
||||
|
||||
##### Description
|
||||
|
||||
This section provides information about resources, giving a detailed picture of the system’s state and the amount of requested work per active controller.
|
||||
|
||||
#### Workqueue
|
||||
|
||||

|
||||
|
||||
##### Description
|
||||
|
||||
This section provides information about "actions" in the queue, particularly:
|
||||
- Workqueue latency: time to complete a series of actions in the queue;
|
||||
- Workqueue rate: number of actions per unit time;
|
||||
- Workqueue depth: number of pending actions waiting in the queue.
|
||||
@@ -1,136 +0,0 @@
|
||||
# OIDC Authentication
|
||||
Capsule does not care about the authentication strategy used in the cluster and all the Kubernetes methods of authentication are supported. The only requirement to use Capsule is to assign tenant users to the group defined by `userGroups` option in the `CapsuleConfiguration`, which defaults to `capsule.clastix.io`.
|
||||
|
||||
In the following guide, we'll use [Keycloak](https://www.keycloak.org/), an Open Source Identity and Access Management server capable of authenticating users via OIDC and releasing JWT tokens as proof of authentication.
|
||||
|
||||
## Configuring OIDC Server
|
||||
Configure Keycloak as OIDC server:
|
||||
|
||||
- Add a realm called `caas`, or use any existing realm instead
|
||||
- Add a group `capsule.clastix.io`
|
||||
- Add a user `alice` assigned to group `capsule.clastix.io`
|
||||
- Add an OIDC client called `kubernetes`
|
||||
- For the `kubernetes` client, create protocol mappers called `groups` and `audience`
|
||||
|
||||
If everything is done correctly, now you should be able to authenticate in Keycloak and see user groups in JWT tokens. Use the following snippet to authenticate in Keycloak as `alice` user:
|
||||
|
||||
```
|
||||
$ KEYCLOAK=sso.clastix.io
|
||||
$ REALM=caas
|
||||
$ OIDC_ISSUER=${KEYCLOAK}/auth/realms/${REALM}
|
||||
|
||||
$ curl -k -s https://${OIDC_ISSUER}/protocol/openid-connect/token \
|
||||
-d grant_type=password \
|
||||
-d response_type=id_token \
|
||||
-d scope=openid \
|
||||
-d client_id=${OIDC_CLIENT_ID} \
|
||||
-d client_secret=${OIDC_CLIENT_SECRET} \
|
||||
-d username=${USERNAME} \
|
||||
-d password=${PASSWORD} | jq
|
||||
```
|
||||
|
||||
The result will include an `ACCESS_TOKEN`, a `REFRESH_TOKEN`, and an `ID_TOKEN`. The access-token can generally be disregarded for Kubernetes. It would be used if the identity provider was managing roles and permissions for the users but that is done in Kubernetes itself with RBAC. The id-token is short lived while the refresh-token has longer expiration. The refresh-token is used to fetch a new id-token when the id-token expires.
|
||||
|
||||
```json
|
||||
{
|
||||
"access_token":"ACCESS_TOKEN",
|
||||
"refresh_token":"REFRESH_TOKEN",
|
||||
"id_token": "ID_TOKEN",
|
||||
"token_type":"bearer",
|
||||
"scope": "openid groups profile email"
|
||||
}
|
||||
```
|
||||
|
||||
To introspect the `ID_TOKEN` token run:
|
||||
```
|
||||
$ curl -k -s https://${OIDC_ISSUER}/protocol/openid-connect/introspect \
|
||||
-d token=${ID_TOKEN} \
|
||||
--user ${OIDC_CLIENT_ID}:${OIDC_CLIENT_SECRET} | jq
|
||||
```
|
||||
|
||||
The result will be like the following:
|
||||
|
||||
```json
|
||||
{
|
||||
"exp": 1601323086,
|
||||
"iat": 1601322186,
|
||||
"aud": "kubernetes",
|
||||
"typ": "ID",
|
||||
"azp": "kubernetes",
|
||||
"preferred_username": "alice",
|
||||
"email_verified": false,
|
||||
"acr": "1",
|
||||
"groups": [
|
||||
"capsule.clastix.io"
|
||||
],
|
||||
"client_id": "kubernetes",
|
||||
"username": "alice",
|
||||
"active": true
|
||||
}
|
||||
```
|
||||
|
||||
## Configuring Kubernetes API Server
|
||||
Configuring Kubernetes for OIDC Authentication requires adding several parameters to the API Server. Please refer to the [documentation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) for details and examples. Most likely, your `kube-apiserver.yaml` manifest will look like the following:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
containers:
|
||||
- command:
|
||||
- kube-apiserver
|
||||
...
|
||||
- --oidc-issuer-url=https://${OIDC_ISSUER}
|
||||
- --oidc-ca-file=/etc/kubernetes/oidc/ca.crt
|
||||
- --oidc-client-id=${OIDC_CLIENT_ID}
|
||||
- --oidc-username-claim=preferred_username
|
||||
- --oidc-groups-claim=groups
|
||||
- --oidc-username-prefix=-
|
||||
```
|
||||
|
||||
## Configuring kubectl
|
||||
There are two options to use `kubectl` with OIDC:
|
||||
|
||||
- OIDC Authenticator
|
||||
- Use the `--token` option
|
||||
|
||||
To use the OIDC Authenticator, add an `oidc` user entry to your `kubeconfig` file:
|
||||
```
|
||||
$ kubectl config set-credentials oidc \
|
||||
--auth-provider=oidc \
|
||||
--auth-provider-arg=idp-issuer-url=https://${OIDC_ISSUER} \
|
||||
--auth-provider-arg=idp-certificate-authority=/path/to/ca.crt \
|
||||
--auth-provider-arg=client-id=${OIDC_CLIENT_ID} \
|
||||
--auth-provider-arg=client-secret=${OIDC_CLIENT_SECRET} \
|
||||
--auth-provider-arg=refresh-token=${REFRESH_TOKEN} \
|
||||
--auth-provider-arg=id-token=${ID_TOKEN} \
|
||||
--auth-provider-arg=extra-scopes=groups
|
||||
```
|
||||
|
||||
To use the `--token` option:
|
||||
```
|
||||
$ kubectl config set-credentials oidc --token=${ID_TOKEN}
|
||||
```
|
||||
|
||||
Point `kubectl` to the URL where the Kubernetes API Server is reachable:
|
||||
```
|
||||
$ kubectl config set-cluster mycluster \
|
||||
--server=https://kube.clastix.io:6443 \
|
||||
--certificate-authority=~/.kube/ca.crt
|
||||
```
|
||||
|
||||
> If your APIs Server is reachable through the `capsule-proxy`, make sure to use the URL of the `capsule-proxy`.
|
||||
|
||||
Create a new context for the OIDC authenticated users:
|
||||
```
|
||||
$ kubectl config set-context alice-oidc@mycluster \
|
||||
--cluster=mycluster \
|
||||
--user=oidc
|
||||
```
|
||||
|
||||
As user `alice`, you should be able to use `kubectl` to create some namespaces:
|
||||
```
|
||||
$ kubectl --context alice-oidc@mycluster create namespace oil-production
|
||||
$ kubectl --context alice-oidc@mycluster create namespace oil-development
|
||||
$ kubectl --context alice-oidc@mycluster create namespace gas-marketing
|
||||
```
|
||||
|
||||
> _Warning_: once your `ID_TOKEN` expires, the `kubectl` OIDC Authenticator will automatically attempt to refresh your `ID_TOKEN` using the `REFRESH_TOKEN`. In case the OIDC provider uses a self-signed CA certificate, make sure to specify it with the `idp-certificate-authority` option in your `kubeconfig` file, otherwise you won't be able to refresh the tokens.
|
||||
@@ -1,258 +0,0 @@
|
||||
# Pod Security
|
||||
In Kubernetes, by default, workloads run with administrative access, which might be acceptable if there is only a single application running in the cluster or a single user accessing it. This is seldom the case, and you'll consequently suffer a noisy neighbour effect along with a large security blast radius.
|
||||
|
||||
Many of these concerns were addressed initially by [PodSecurityPolicies](https://kubernetes.io/docs/concepts/security/pod-security-policy) which have been present in the Kubernetes APIs since the very early days.
|
||||
|
||||
The Pod Security Policies are deprecated in Kubernetes 1.21 and removed entirely in 1.25. As a replacement, the [Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/) and [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) have been introduced. Capsule supports the new standard for tenants under its control, as well as the older approach.
|
||||
|
||||
## Pod Security Policies
|
||||
As stated in the documentation, *"PodSecurityPolicies enable fine-grained authorization of pod creation and updates. A Pod Security Policy is a cluster-level resource that controls security sensitive aspects of the pod specification. The `PodSecurityPolicy` objects define a set of conditions that a pod must run with in order to be accepted into the system, as well as defaults for the related fields."*
|
||||
|
||||
Using the [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy), the cluster admin can impose limits on pod creation, for example the types of volume that can be consumed, the Linux user that the process runs as in order to avoid running things as root, and more. From a multi-tenancy point of view, the cluster admin has to control how users run pods in their tenants, with a different level of permission on a per-tenant basis.
|
||||
|
||||
Assume the Kubernetes cluster has been configured with [Pod Security Policy Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podsecuritypolicy) enabled in the APIs server: `--enable-admission-plugins=PodSecurityPolicy`
|
||||
|
||||
The cluster admin creates a `PodSecurityPolicy`:
|
||||
|
||||
```yaml
|
||||
kubectl apply -f - << EOF
|
||||
apiVersion: policy/v1beta1
|
||||
kind: PodSecurityPolicy
|
||||
metadata:
|
||||
name: psp:restricted
|
||||
spec:
|
||||
privileged: false
|
||||
# Required to prevent escalations to root.
|
||||
allowPrivilegeEscalation: false
|
||||
EOF
|
||||
```
|
||||
|
||||
Then create a _ClusterRole_ granting the use of the said PodSecurityPolicy:
|
||||
|
||||
```yaml
|
||||
kubectl apply -f - << EOF
|
||||
kind: ClusterRole
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: psp:restricted
|
||||
rules:
|
||||
- apiGroups: ['policy']
|
||||
resources: ['podsecuritypolicies']
|
||||
resourceNames: ['psp:restricted']
|
||||
verbs: ['use']
|
||||
EOF
|
||||
```
|
||||
|
||||
The cluster admin can assign this role to all namespaces in a tenant by setting the tenant manifest:
|
||||
|
||||
```yaml
|
||||
kubectl apply -f - << EOF
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owners:
|
||||
- name: alice
|
||||
kind: User
|
||||
additionalRoleBindings:
|
||||
- clusterRoleName: psp:restricted
|
||||
subjects:
|
||||
- kind: "Group"
|
||||
apiGroup: "rbac.authorization.k8s.io"
|
||||
name: "system:authenticated"
|
||||
EOF
|
||||
```
|
||||
|
||||
With the given specification, Capsule will ensure that all tenant namespaces will contain a _RoleBinding_ for the specified _Cluster Role_:
|
||||
|
||||
```yaml
|
||||
kind: RoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: 'capsule-oil-psp:restricted'
|
||||
namespace: oil-production
|
||||
labels:
|
||||
capsule.clastix.io/tenant: oil
|
||||
subjects:
|
||||
- kind: Group
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
name: 'system:authenticated'
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: 'psp:restricted'
|
||||
```
|
||||
|
||||
Thanks to the RoleBinding that Capsule creates, the tenant owner is forbidden to run privileged pods in the `oil-production` namespace and to perform privilege escalation, as declared by the above Cluster Role `psp:restricted`.
|
||||
|
||||
As tenant owner, create a namespace:
|
||||
|
||||
```
|
||||
kubectl --kubeconfig alice-oil.kubeconfig create ns oil-production
|
||||
```
|
||||
|
||||
and create a pod with privileged permissions:
|
||||
|
||||
```yaml
|
||||
kubectl --kubeconfig alice-oil.kubeconfig apply -f - << EOF
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
namespace: oil-production
|
||||
spec:
|
||||
containers:
|
||||
- image: nginx
|
||||
name: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
securityContext:
|
||||
privileged: true
|
||||
EOF
|
||||
```
|
||||
|
||||
Since the assigned `PodSecurityPolicy` explicitly disallows privileged containers, the tenant owner will see her request rejected by the Pod Security Policy Admission Controller.
|
||||
|
||||
## Pod Security Standards
|
||||
One of the issues with Pod Security Policies is that it is difficult to apply restrictive permissions on a granular level, increasing security risk. Also the Pod Security Policies get applied when the request is submitted and there is no way of applying them to pods that are already running. For these, and other reasons, the Kubernetes community decided to deprecate the Pod Security Policies.
|
||||
|
||||
As the Pod Security Policies get deprecated and removed, the [Pod Security Standards](https://kubernetes.io/docs/concepts/security/pod-security-standards/) are used in their place. They define three different policies to broadly cover the security spectrum. These policies are cumulative and range from highly-permissive to highly-restrictive:
|
||||
|
||||
- **Privileged**: unrestricted policy, providing the widest possible level of permissions.
|
||||
- **Baseline**: minimally restrictive policy which prevents known privilege escalations.
|
||||
- **Restricted**: heavily restricted policy, following current Pod hardening best practices.
|
||||
|
||||
Kubernetes provides a built-in [Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#podsecurity) to enforce the Pod Security Standards at either:
|
||||
|
||||
1. cluster level which applies a standard configuration to all namespaces in a cluster
|
||||
2. namespace level, one namespace at a time
|
||||
|
||||
For the first case, the cluster admin has to configure the Admission Controller and pass the configuration to the `kube-apiserver` by means of the `--admission-control-config-file` extra argument, for example:
|
||||
|
||||
```yaml
|
||||
apiVersion: apiserver.config.k8s.io/v1
|
||||
kind: AdmissionConfiguration
|
||||
plugins:
|
||||
- name: PodSecurity
|
||||
configuration:
|
||||
apiVersion: pod-security.admission.config.k8s.io/v1beta1
|
||||
kind: PodSecurityConfiguration
|
||||
defaults:
|
||||
enforce: "baseline"
|
||||
enforce-version: "latest"
|
||||
warn: "restricted"
|
||||
warn-version: "latest"
|
||||
audit: "restricted"
|
||||
audit-version: "latest"
|
||||
exemptions:
|
||||
usernames: []
|
||||
runtimeClasses: []
|
||||
namespaces: [kube-system]
|
||||
```
|
||||
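The configuration file must be referenced by that flag and be visible inside the API server Pod. A hedged excerpt of the `kube-apiserver.yaml` static Pod manifest (the path is an example):

```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --admission-control-config-file=/etc/kubernetes/admission/admission-configuration.yaml
```

Remember to also mount the directory containing the file (for instance with a `hostPath` volume) so the path above exists inside the Pod.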
|
||||
For the second case, the cluster admin can just assign labels to the specific namespace where they want to enforce the policy, since the Pod Security Admission Controller is enabled by default starting from Kubernetes 1.23:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
labels:
|
||||
pod-security.kubernetes.io/enforce: baseline
|
||||
pod-security.kubernetes.io/warn: restricted
|
||||
pod-security.kubernetes.io/audit: restricted
|
||||
name: development
|
||||
```
|
||||
|
||||
## Pod Security Standards with Capsule
|
||||
According to the regular Kubernetes segregation model, the cluster admin has to operate either at cluster level or at namespace level. Since Capsule introduces a further segregation level (the _Tenant_ abstraction), the cluster admin can implement Pod Security Standards at tenant level by simply forcing specific labels on all the namespaces created in the tenant.
|
||||
|
||||
As cluster admin, create a tenant with additional labels:
|
||||
|
||||
```yaml
|
||||
kubectl apply -f - << EOF
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
namespaceOptions:
|
||||
additionalMetadata:
|
||||
labels:
|
||||
pod-security.kubernetes.io/enforce: baseline
|
||||
pod-security.kubernetes.io/audit: restricted
|
||||
pod-security.kubernetes.io/warn: restricted
|
||||
owners:
|
||||
- kind: User
|
||||
name: alice
|
||||
EOF
|
||||
```
|
||||
|
||||
All namespaces created by the tenant owner will inherit the Pod Security labels:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Namespace
|
||||
metadata:
|
||||
labels:
|
||||
capsule.clastix.io/tenant: oil
|
||||
kubernetes.io/metadata.name: oil-development
|
||||
pod-security.kubernetes.io/enforce: baseline
|
||||
pod-security.kubernetes.io/warn: restricted
|
||||
pod-security.kubernetes.io/audit: restricted
|
||||
name: oil-development
|
||||
ownerReferences:
|
||||
- apiVersion: capsule.clastix.io/v1beta2
|
||||
blockOwnerDeletion: true
|
||||
controller: true
|
||||
kind: Tenant
|
||||
name: oil
|
||||
```
|
||||
|
||||
and the regular Pod Security Admission Controller does the magic:
|
||||
|
||||
```yaml
|
||||
kubectl --kubeconfig alice-oil.kubeconfig apply -f - << EOF
|
||||
apiVersion: v1
|
||||
kind: Pod
|
||||
metadata:
|
||||
name: nginx
|
||||
namespace: oil-production
|
||||
spec:
|
||||
containers:
|
||||
- image: nginx
|
||||
name: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
securityContext:
|
||||
privileged: true
|
||||
EOF
|
||||
```
|
||||
|
||||
The request gets denied:
|
||||
|
||||
```
|
||||
Error from server (Forbidden): error when creating "STDIN":
|
||||
pods "nginx" is forbidden: violates PodSecurity "baseline:latest": privileged
|
||||
(container "nginx" must not set securityContext.privileged=true)
|
||||
```
|
||||
|
||||
If the tenant owner tries to change or delete the above labels, Capsule will reconcile them to the original tenant manifest set by the cluster admin.
|
||||
|
||||
As an additional security measure, the cluster admin can also prevent the tenant owner from making improper use of the above labels:
|
||||
|
||||
```
|
||||
kubectl annotate tenant oil \
|
||||
capsule.clastix.io/forbidden-namespace-labels-regexp="pod-security.kubernetes.io\/(enforce|warn|audit)"
|
||||
```
|
||||
|
||||
In that case, the tenant owner gets denied if she tries to use the labels:
|
||||
|
||||
```
|
||||
kubectl --kubeconfig alice-oil.kubeconfig label ns oil-production \
|
||||
pod-security.kubernetes.io/enforce=restricted \
|
||||
--overwrite
|
||||
|
||||
Error from server (Label pod-security.kubernetes.io/audit is forbidden for namespaces in the current Tenant ...
|
||||
```
|
||||
@@ -1,128 +0,0 @@
|
||||
# Capsule Proxy and Rancher Projects
|
||||
|
||||
This guide explains how to setup the integration between Capsule Proxy and Rancher Projects.
|
||||
|
||||
It then explains how, for the tenant user, access to Kubernetes cluster-wide resources is transparent.
|
||||
|
||||
## Rancher Shell and Capsule
|
||||
|
||||
In order to integrate the Rancher Shell with Capsule, the Kubernetes API requests made from the shell need to be routed via Capsule Proxy.
|
||||
|
||||
The [capsule-rancher-addon](https://github.com/clastix/capsule-addon-rancher/tree/master/charts/capsule-rancher-addon) allows the integration transparently.
|
||||
|
||||
### Install the Capsule addon
|
||||
|
||||
Add the Clastix Helm repository `https://clastix.github.io/charts`.
|
||||
|
||||
After updating the cache with Clastix's Helm repository, a Helm chart named `capsule-rancher-addon` is available.
|
||||
|
||||
Install it, paying attention to the following Helm values (a combined install sketch follows this list):
|
||||
|
||||
* `proxy.caSecretKey`: the `Secret` key that contains the CA certificate used to sign the Capsule Proxy TLS certificate (it should be `"ca.crt"` when Capsule Proxy has been configured with certificates generated with Cert Manager).
|
||||
* `proxy.servicePort`: the port configured for the Capsule Proxy Kubernetes `Service` (`443` in this setup).
|
||||
* `proxy.serviceURL`: the name of the Capsule Proxy `Service` (by default `"capsule-proxy.capsule-system.svc"` when installed in the *capsule-system* `Namespace`).
|
||||
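Putting the values together, a hedged install command might look like the following (release name and namespace are assumptions):

```shell
helm repo add clastix https://clastix.github.io/charts
helm repo update
helm upgrade --install capsule-rancher-addon clastix/capsule-rancher-addon \
  --namespace capsule-system \
  --set proxy.caSecretKey="ca.crt" \
  --set proxy.servicePort=443 \
  --set proxy.serviceURL="capsule-proxy.capsule-system.svc"
```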
|
||||
## Rancher Cluster Agent
|
||||
|
||||
In both CLI and dashboard use cases, the [Cluster Agent](https://ranchermanager.docs.rancher.com/v2.5/how-to-guides/new-user-guides/kubernetes-clusters-in-rancher-setup/launch-kubernetes-with-rancher/about-rancher-agents) is responsible for the two-way communication between Rancher and the downstream cluster.
|
||||
|
||||
In a standard setup, the Cluster Agent communicates with the API server. In this setup it will communicate with Capsule Proxy to ensure filtering of cluster-scope resources for Tenants.
|
||||
|
||||
The Cluster Agent accepts as arguments:
|
||||
- `KUBERNETES_SERVICE_HOST` environment variable
|
||||
- `KUBERNETES_SERVICE_PORT` environment variable
|
||||
|
||||
which will be set, at cluster import-time, to the values of the Capsule Proxy `Service`. For example:
|
||||
- `KUBERNETES_SERVICE_HOST=capsule-proxy.capsule-system.svc`
|
||||
- (optional) `KUBERNETES_SERVICE_PORT=9001`. You can skip it by installing Capsule Proxy with Helm value `service.port=443`.
|
||||
|
||||
The expected CA is the one whose certificate is inside the `kube-root-ca.crt` `ConfigMap` in the same `Namespace` as the Cluster Agent (*cattle-system*).
|
||||
|
||||
## Capsule Proxy
|
||||
|
||||
Capsule Proxy needs to provide an x509 certificate whose root CA is trusted by the Cluster Agent.
|
||||
The goal can be achieved either by using the Kubernetes CA to sign its certificate, or by using a dedicated root CA.
|
||||
|
||||
### With the Kubernetes root CA
|
||||
|
||||
> Note: this can be achieved when the Kubernetes root CA keypair is accessible. For example, it is likely to be possible with an on-premise setup, but not with managed Kubernetes services.
|
||||
|
||||
With this approach, Cert Manager will sign certificates with the Kubernetes root CA, which needs to be provided as a `Secret`.
|
||||
|
||||
```shell
|
||||
kubectl create secret tls -n capsule-system kubernetes-ca-key-pair --cert=/path/to/ca.crt --key=/path/to/ca.key
|
||||
```
|
||||
|
||||
When installing Capsule Proxy with the Helm chart, you need to specify that Capsule Proxy `Certificate`s are generated by Cert Manager with an external `ClusterIssuer`:
|
||||
- `certManager.externalCA.enabled=true`
|
||||
- `certManager.externalCA.secretName=kubernetes-ca-key-pair`
|
||||
- `certManager.generateCertificates=true`
|
||||
|
||||
and disable the job for generating the certificates without Cert Manager (a combined install sketch follows this list):
|
||||
- `options.generateCertificates=false`
|
||||
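A hedged sketch of the resulting install command, combining the values above with the `service.port=443` suggestion from the Cluster Agent section (release name and namespace are assumptions):

```shell
helm upgrade --install capsule-proxy clastix/capsule-proxy \
  --namespace capsule-system \
  --set certManager.generateCertificates=true \
  --set certManager.externalCA.enabled=true \
  --set certManager.externalCA.secretName=kubernetes-ca-key-pair \
  --set options.generateCertificates=false \
  --set service.port=443
```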
|
||||
### Enable tenant users access cluster resources
|
||||
|
||||
In order to allow tenant users to list cluster-scope resources, like `Node`s, Tenants need to be configured with proper `proxySettings`, for example:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owners:
|
||||
- kind: User
|
||||
name: alice
|
||||
proxySettings:
|
||||
- kind: Nodes
|
||||
operations:
|
||||
- List
|
||||
[...]
|
||||
```
|
||||
|
||||
Also, in order to assign or filter nodes per Tenant, nodes need labels so that they can be selected:
|
||||
|
||||
```shell
|
||||
kubectl label node worker-01 capsule.clastix.io/tenant=oil
|
||||
```
|
||||
|
||||
and a node selector at Tenant level:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
nodeSelector:
|
||||
capsule.clastix.io/tenant: oil
|
||||
[...]
|
||||
```
|
||||
|
||||
The final manifest is:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: Tenant
|
||||
metadata:
|
||||
name: oil
|
||||
spec:
|
||||
owners:
|
||||
- kind: User
|
||||
name: alice
|
||||
proxySettings:
|
||||
- kind: Nodes
|
||||
operations:
|
||||
- List
|
||||
nodeSelector:
|
||||
capsule.clastix.io/tenant: oil
|
||||
```
|
||||
|
||||
The same applies for:
|
||||
- `Nodes`
|
||||
- `StorageClasses`
|
||||
- `IngressClasses`
|
||||
- `PriorityClasses`
|
||||
|
||||
More on this in the [official documentation](https://capsule.clastix.io/docs/general/proxy#tenant-owner-authorization).
|
||||
@@ -1,207 +0,0 @@
|
||||
# Capsule and Rancher Projects
|
||||
|
||||
This guide explains how to setup the integration between Capsule and Rancher Projects.
|
||||
|
||||
It then explains how, for the tenant user, access to Kubernetes resources is transparent.
|
||||
|
||||
## Manually
|
||||
|
||||
## Pre-requisites
|
||||
|
||||
- An authentication provider in Rancher, e.g. an OIDC identity provider
|
||||
- A *Tenant Member* `Cluster Role` in Rancher
|
||||
|
||||
### Configure an identity provider for Kubernetes
|
||||
|
||||
You can follow [this general guide](https://capsule.clastix.io/docs/guides/oidc-auth) to configure an OIDC authentication for Kubernetes.
|
||||
|
||||
For a Keycloak-specific setup, you can check [this resource list](./oidc-keycloak.md).
|
||||
|
||||
#### Known issues
|
||||
|
||||
##### Keycloak new URLs without `/auth` makes Rancher crash
|
||||
|
||||
- [rancher/rancher#38480](https://github.com/rancher/rancher/issues/38480)
|
||||
- [rancher/rancher#38683](https://github.com/rancher/rancher/issues/38683)
|
||||
|
||||
### Create the Tenant Member Cluster Role
|
||||
|
||||
A custom Rancher `Cluster Role` is needed to allow Tenant users to read cluster-scope resources, since Rancher doesn't provide a built-in Cluster Role with this tailored set of privileges.
|
||||
|
||||
When logged-in to the Rancher UI as administrator, from the Users & Authentication page, create a Cluster Role named *Tenant Member* with the following privileges:
|
||||
- `get`, `list`, `watch` operations over `IngressClasses` resources.
|
||||
- `get`, `list`, `watch` operations over `StorageClasses` resources.
|
||||
- `get`, `list`, `watch` operations over `PriorityClasses` resources.
|
||||
- `get`, `list`, `watch` operations over `Nodes` resources.
|
||||
- `get`, `list`, `watch` operations over `RuntimeClasses` resources.
|
||||
|
||||
## Configuration (administration)
|
||||
|
||||
### Tenant onboarding
|
||||
|
||||
When onboarding tenants, the administrator needs to create the following, in order to bind the `Project` with the `Tenant`:
|
||||
|
||||
- In Rancher, create a `Project`.
|
||||
- In the target Kubernetes cluster, create a `Tenant`, with the following specification:
|
||||
```yaml
|
||||
kind: Tenant
|
||||
...
|
||||
spec:
|
||||
namespaceOptions:
|
||||
additionalMetadata:
|
||||
annotations:
|
||||
field.cattle.io/projectId: ${CLUSTER_ID}:${PROJECT_ID}
|
||||
labels:
|
||||
field.cattle.io/projectId: ${PROJECT_ID}
|
||||
```
|
||||
where `$CLUSTER_ID` and `$PROJECT_ID` can be retrieved, assuming a valid `$CLUSTER_NAME`, as:
|
||||
|
||||
```shell
|
||||
CLUSTER_NAME=foo
|
||||
CLUSTER_ID=$(kubectl get cluster -n fleet-default ${CLUSTER_NAME} -o jsonpath='{.status.clusterName}')
|
||||
PROJECT_IDS=$(kubectl get projects -n $CLUSTER_ID -o jsonpath="{.items[*].metadata.name}")
|
||||
for project_id in $PROJECT_IDS; do echo "${project_id}"; done
|
||||
```
|
||||
|
||||
More on declarative `Project`s [here](https://github.com/rancher/rancher/issues/35631).
|
||||
- In the identity provider, create a user with [correct OIDC claim](https://capsule.clastix.io/docs/guides/oidc-auth) of the Tenant.
|
||||
- In Rancher, add the new user to the `Project` with the *Read-only* `Role`.
|
||||
- In Rancher, add the new user to the `Cluster` with the *Tenant Member* `Cluster Role`.
|
||||
|
||||
#### Create the Tenant Member Project Role
|
||||
|
||||
A custom `Project Role` is needed to grant Tenant users a minimum set of privileges to create and delete `Namespace`s.
|
||||
|
||||
Create a Project Role named *Tenant Member* that inherits the privileges from the following Roles:
|
||||
- *read-only*
|
||||
- *create-ns*
|
||||
|
||||
|
||||
### Usage
|
||||
|
||||
When the configuration administrative tasks have been completed, the tenant users are ready to use the Kubernetes cluster transparently.
|
||||
|
||||
For example, they can create Namespaces in a self-service mode, which would otherwise be impossible with the sole use of Rancher Projects.
|
||||
|
||||
#### Namespace creation
|
||||
|
||||
From the tenant user perspective, both the CLI and the UI are valid interfaces to communicate with.
|
||||
|
||||
#### From CLI
|
||||
|
||||
- The tenant logs in to the OIDC provider via `kubectl`
|
||||
- The tenant creates a Namespace, as a valid OIDC-discoverable user.
|
||||
|
||||
the `Namespace` is now part of both the Tenant and the Project.
|
||||
|
||||
> As administrator, you can verify with:
|
||||
>
|
||||
> ```shell
|
||||
> kubectl get tenant ${TENANT_NAME} -o jsonpath='{.status}'
|
||||
> kubectl get namespace -l field.cattle.io/projectId=${PROJECT_ID}
|
||||
> ```
|
||||
|
||||
#### From UI
|
||||
|
||||
- The tenant logs in to Rancher, with a valid OIDC-discoverable user (in a valid Tenant group).
|
||||
- The tenant user creates a valid Namespace
|
||||
|
||||
the `Namespace` is now part of both the Tenant and the Project.
|
||||
|
||||
> As administrator, you can verify with:
|
||||
>
|
||||
> ```shell
|
||||
> kubectl get tenant ${TENANT_NAME} -o jsonpath='{.status}'
|
||||
> kubectl get namespace -l field.cattle.io/projectId=${PROJECT_ID}
|
||||
> ```
|
||||
|
||||
### Additional administration
|
||||
|
||||
#### Project monitoring
|
||||
|
||||
Before proceeding, it is recommended to read the official Rancher documentation about [Project Monitors](https://ranchermanager.docs.rancher.com/v2.6/how-to-guides/advanced-user-guides/monitoring-alerting-guides/prometheus-federator-guides/project-monitors).
|
||||
|
||||
In summary, the setup is composed of a cluster-level Prometheus and Prometheus Federator, through which the single Project-level Prometheus instances federate.
|
||||
|
||||
#### Network isolation
|
||||
|
||||
Before proceeding, it is recommended to read the official Capsule documentation about [`NetworkPolicy` at `Tenant`-level](https://capsule.clastix.io/docs/general/tutorial/#assign-network-policies).
|
||||
|
||||
##### Network isolation and Project Monitor
|
||||
|
||||
As Rancher's Project Monitor deploys the Prometheus stack in a `Namespace` that is part of **neither** the `Project` **nor** the `Tenant` `Namespace`s, it is important to apply the label selectors in the `NetworkPolicy` `ingress` rules to the `Namespace` created by Project Monitor.
|
||||
|
||||
That Project monitoring `Namespace` will be named `cattle-project-<PROJECT_ID>-monitoring`.
|
||||
|
||||
For example, if the `NetworkPolicy` is configured to allow all ingress traffic from `Namespace`s with label `capsule.clastix.io/tenant=foo`, this label has to be applied to the Project monitoring `Namespace` too.
|
||||
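For instance, for the *oil* `Tenant` used below, the label could be applied as follows (a sketch; the Project ID is the one retrieved during tenant onboarding):

```shell
kubectl label namespace cattle-project-${PROJECT_ID}-monitoring capsule.clastix.io/tenant=oil
```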
|
||||
Then, a `NetworkPolicy` can be applied at `Tenant` level with Capsule `GlobalTenantResource`s. For example, a minimal policy can be applied for the *oil* `Tenant`:
|
||||
|
||||
```yaml
|
||||
apiVersion: capsule.clastix.io/v1beta2
|
||||
kind: GlobalTenantResource
|
||||
metadata:
|
||||
name: oil-networkpolicies
|
||||
spec:
|
||||
tenantSelector:
|
||||
matchLabels:
|
||||
capsule.clastix.io/tenant: oil
|
||||
resyncPeriod: 360s
|
||||
pruningOnDelete: true
|
||||
resources:
|
||||
- namespaceSelector:
|
||||
matchLabels:
|
||||
capsule.clastix.io/tenant: oil
|
||||
rawItems:
|
||||
- apiVersion: networking.k8s.io/v1
|
||||
kind: NetworkPolicy
|
||||
metadata:
|
||||
name: oil-minimal
|
||||
spec:
|
||||
podSelector: {}
|
||||
policyTypes:
|
||||
- Ingress
|
||||
- Egress
|
||||
ingress:
|
||||
# Intra-Tenant
|
||||
- from:
|
||||
- namespaceSelector:
|
||||
matchLabels:
|
||||
capsule.clastix.io/tenant: oil
|
||||
# Rancher Project Monitor stack
|
||||
- from:
|
||||
- namespaceSelector:
|
||||
matchLabels:
|
||||
role: monitoring
|
||||
# Kubernetes nodes
|
||||
- from:
|
||||
- ipBlock:
|
||||
cidr: 192.168.1.0/24
|
||||
egress:
|
||||
# Kubernetes DNS server
|
||||
- to:
|
||||
- namespaceSelector: {}
|
||||
podSelector:
|
||||
matchLabels:
|
||||
k8s-app: kube-dns
|
||||
ports:
|
||||
- port: 53
|
||||
protocol: UDP
|
||||
# Intra-Tenant
|
||||
- to:
|
||||
- namespaceSelector:
|
||||
matchLabels:
|
||||
capsule.clastix.io/tenant: oil
|
||||
# Kubernetes API server
|
||||
- to:
|
||||
- ipBlock:
|
||||
cidr: 10.43.0.1/32
|
||||
ports:
|
||||
- port: 443
|
||||
```
|
||||
|
||||
## Cluster-wide resources and Rancher Shell interface
|
||||
|
||||
For using the Rancher Shell and cluster-wide resources as tenant user, please follow [this guide](./capsule-proxy-rancher.md).
|
||||
|
||||
|
||||
@@ -1,27 +0,0 @@
|
||||
# Introduction
|
||||
|
||||
The integration between Rancher and Capsule aims to provide a multi-tenant Kubernetes service to users, enabling:
|
||||
- a self-service approach
|
||||
- access to cluster-wide resources
|
||||
|
||||
to end-users.
|
||||
|
||||
Tenant users will have the ability to access Kubernetes resources through:
|
||||
- Rancher UI
|
||||
- Rancher Shell
|
||||
- Kubernetes CLI
|
||||
|
||||
On the other side, administrators need to manage the Kubernetes clusters through Rancher.
|
||||
|
||||
Rancher provides a feature called **Projects** to segregate resources inside a common domain.
|
||||
At the same time, Projects don't provide a way to segregate Kubernetes cluster-scope resources.
|
||||
|
||||
Capsule, as a project born to create a framework for multi-tenant platforms, integrates with Rancher Projects, enhancing the experience with **Tenants**.
|
||||
|
||||
Capsule allows tenant isolation and resource control in a declarative way, while enabling a self-service experience for tenants.
|
||||
With Capsule Proxy users can also access cluster-wide resources, as configured by administrators at `Tenant` custom resource-level.
|
||||
|
||||
You can read in detail how the integration works and how to configure it, in the following guides.
|
||||
- [How to integrate Rancher Projects with Capsule Tenants](./capsule-proxy-rancher.md)
|
||||
- [How to enable cluster-wide resources and Rancher shell access](./capsule-proxy-rancher.md).
|
||||
|
||||
@@ -1,40 +0,0 @@
|
||||
# Configure OIDC authentication with Keycloak
|
||||
|
||||
## Pre-requisites
|
||||
|
||||
- Keycloak realm for Rancher
|
||||
- Rancher OIDC authentication provider
|
||||
|
||||
## Keycloak realm for Rancher
|
||||
|
||||
These instructions are specific to a setup made with Keycloak as an OIDC identity provider.
|
||||
|
||||
### Mappers
|
||||
|
||||
- Add to userinfo Group Membership type, claim name `groups`
|
||||
- Add to userinfo Audience type, claim name `client audience`
|
||||
- Add to userinfo, full group path, Group Membership type, claim name `full_group_path`
|
||||
|
||||
More on this on the [official guide](https://capsule.clastix.io/docs/guides/oidc-auth/#configuring-oidc-server).
|
||||
|
||||
## Rancher OIDC authentication provider
|
||||
|
||||
Configure an OIDC authentication provider, with a Client, issuer, and return URLs specific to the Keycloak setup.
|
||||
|
||||
> Use old and Rancher-standard paths with `/auth` subpath (see issues below).
|
||||
>
|
||||
> Add custom paths, remove `/auth` subpath in return and issuer URLs.
|
||||
|
||||
## Configuration
|
||||
|
||||
### Configure Tenant users
|
||||
|
||||
1. In Rancher, configure OIDC authentication with Keycloak to use [with Rancher](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/authentication-permissions-and-global-configuration/authentication-config/configure-keycloak-oidc).
|
||||
1. In Keycloak, create a Group in the Rancher Realm: *capsule.clastix.io*.
|
||||
1. In Keycloak, create a User in the Rancher Realm, as a member of the *capsule.clastix.io* Group.
|
||||
1. In the Kubernetes target cluster, update the `CapsuleConfiguration` by adding the `"keycloakoidc_group://capsule.clastix.io"` Kubernetes `Group` (a sketch follows this list).
|
||||
1. Log in to Rancher via Keycloak with the new user.
|
||||
1. In Rancher, as an administrator, assign the user a custom role with `get` permission on Cluster.
|
||||
1. In Rancher, as an administrator, add the Rancher user ID of the just-logged-in user as Owner of a `Tenant`.
|
||||
1. (Optional) Configure `proxySettings` for the `Tenant` to enable tenant users to access cluster-wide resources.
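A minimal sketch of the `CapsuleConfiguration` change mentioned in step 4, assuming the `userGroups` field of the current API version (adapt it to your existing configuration):

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  userGroups:
    - capsule.clastix.io
    - keycloakoidc_group://capsule.clastix.io
```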
|
||||
|
||||
@@ -1,242 +0,0 @@
|
||||
# Capsule upgrading guide
|
||||
|
||||
List of Tenant API changes:
|
||||
|
||||
- [Capsule v0.1.0](https://github.com/projectcapsule/capsule/releases/tag/v0.1.0) bump to `v1beta1` from `v1alpha1`.
|
||||
- [Capsule v0.2.0](https://github.com/projectcapsule/capsule/releases/tag/v0.2.0) bump to `v1beta2` from `v1beta1`, deprecating `v1alpha1`.
|
||||
- [Capsule v0.3.0](https://github.com/projectcapsule/capsule/releases/tag/v0.3.0) adds enums, previously missing, required by [Capsule Proxy](https://github.com/clastix/capsule-proxy).
|
||||
|
||||
This document aims to provide support and a guide on how to perform a clean upgrade to the latest API version in order to avoid service disruption and data loss.
|
||||
|
||||
Helm is assumed as the installation method; your mileage may vary when using the `kustomize` manifests.
|
||||
|
||||
## Considerations
|
||||
|
||||
We strongly suggest performing a full backup of your Kubernetes cluster, including storage and etcd.
|
||||
Use your favourite tool according to your needs.
|
||||
|
||||
# Upgrading from v0.2.x to v0.3.x
|
||||
|
||||
A minor bump has been requested due to some missing enums in the Tenant resource.
|
||||
|
||||
## Scale down the Capsule controller
|
||||
|
||||
Using `kubectl` or Helm, scale down the Capsule controller manager: this is required to prevent the old Capsule version from processing objects that aren't yet installed as a CRD.
|
||||
|
||||
```
|
||||
# Re-apply the currently installed chart version (check it with `helm list -n capsule-system`)
helm upgrade -n capsule-system capsule clastix/capsule --version <installed-chart-version> --reuse-values --set "replicaCount=0"
|
||||
```
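Alternatively, a plain `kubectl` sketch, assuming the Deployment created by the chart is named `capsule-controller-manager`:

```
kubectl -n capsule-system scale deployment capsule-controller-manager --replicas=0
```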
|
||||
|
||||
## Patch the Tenant custom resource definition
|
||||
|
||||
Unfortunately, Helm doesn't manage the lifecycle of Custom Resource Definitions; additional details can be found [here](https://github.com/helm/community/blob/f9e06c16d89ccea1bea77c01a6a96ae3b309f823/architecture/crds.md).
|
||||
|
||||
This process must be executed manually as follows:
|
||||
|
||||
```
|
||||
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/v0.3.0/charts/capsule/crds/tenant-crd.yaml
|
||||
```
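Optionally, a quick sanity check (not part of the original procedure) that the patched CRD now serves the expected API versions:

```
kubectl get crd tenants.capsule.clastix.io -o jsonpath='{.spec.versions[*].name}'
```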
|
||||
|
||||
## Update your Capsule Helm chart
|
||||
|
||||
Make sure to update the Capsule Helm repository to fetch the latest changes.
|
||||
|
||||
```
|
||||
helm repo update
|
||||
```
|
||||
|
||||
The latest Chart must be used: at the current time, >=0.4.0 is expected for Capsule >=v0.3.0. You can fetch the full list of available charts with the following command:
|
||||
|
||||
```
|
||||
helm search repo -l clastix/capsule
|
||||
```
|
||||
|
||||
Since the Tenant custom resource definition has been patched with new fields, we can install Capsule back using the provided Helm chart.
|
||||
|
||||
```
|
||||
helm upgrade --install capsule clastix/capsule -n capsule-system --create-namespace --version 0.4.0
|
||||
```
|
||||
|
||||
This will start the Operator with the latest changes and perform the required sync operations, such as:
|
||||
|
||||
1. Ensuring the CA is still valid
|
||||
2. Ensuring a TLS certificate is valid for the local webhook server
|
||||
3. If not using the cert-manager integration, patching the Validating and Mutating Webhook Configuration resources with the Capsule CA
|
||||
4. If not using the cert-manager integration, patching the Capsule's Custom Resource Definitions conversion webhook fields with the Capsule CA
|
||||
|
||||
# Upgrading from v0.1.3 to v0.2.x
|
||||
|
||||
## Scale down the Capsule controller
|
||||
|
||||
Using `kubectl` or Helm, scale down the Capsule controller manager: this is required to prevent the old Capsule version from processing objects that aren't yet installed as a CRD.
|
||||
|
||||
```
|
||||
# Re-apply the currently installed chart version (check it with `helm list -n capsule-system`)
helm upgrade -n capsule-system capsule clastix/capsule --version <installed-chart-version> --reuse-values --set "replicaCount=0"
|
||||
```
|
||||
|
||||
> Ensure that all the Pods have been removed correctly.
|
||||
|
||||
## Manually migrate the `CapsuleConfiguration` to the latest API version
|
||||
|
||||
With the v0.2.x release of Capsule and the new features introduced, the `CapsuleConfiguration` resource offers a new API version, bumped to `v1beta1` from `v1alpha1`.
|
||||
|
||||
Essentially, the `CapsuleConfiguration` stores configuration flags that allow Capsule to be configured on the fly without requiring the operator to reload.
|
||||
This resource is read at operator init time, when the conversion webhook offered by Capsule is not yet ready to serve any request.
|
||||
|
||||
Migrating from v0.1.3 to v0.2.x requires a manual conversion of your `CapsuleConfiguration` according to the latest version (currently, `v1beta2`).
|
||||
You can find further information about it in the `CRDs APIs` section.
|
||||
|
||||
The deletion of the `CapsuleConfiguration` resource is required, along with the update of the related CRD.
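Before deleting it, you may want to keep a copy of the current object so its settings can be re-expressed against the new version (a precaution, not part of the original guide):

```
kubectl get capsuleconfiguration default -o yaml > capsuleconfiguration-v1alpha1.yaml
```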
|
||||
|
||||
```
|
||||
kubectl delete capsuleconfiguration default
|
||||
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/v0.2.1/charts/capsule/crds/capsuleconfiguration-crd.yaml
|
||||
```
|
||||
|
||||
During the Helm upgrade, a new `CapsuleConfiguration` will be created: please refer to the Helm Chart values to pick your desired settings.
|
||||
|
||||
## Patch the Tenant custom resource definition
|
||||
|
||||
Unfortunately, Helm doesn't manage the lifecycle of Custom Resource Definitions; additional details can be found [here](https://github.com/helm/community/blob/f9e06c16d89ccea1bea77c01a6a96ae3b309f823/architecture/crds.md).
|
||||
|
||||
This process must be executed manually as follows:
|
||||
|
||||
```
|
||||
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/v0.2.1/charts/capsule/crds/globaltenantresources-crd.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/v0.2.1/charts/capsule/crds/tenant-crd.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/v0.2.1/charts/capsule/crds/tenantresources-crd.yaml
|
||||
```
|
||||
|
||||
> We're assuming that Capsule is installed in the `capsule-system` Namespace.
|
||||
> You can change the Namespace according to your needs, e.g.:
|
||||
>
|
||||
> ```bash
|
||||
> CUSTOM_NS="tenancy-operations"
|
||||
>
|
||||
> for CR in capsuleconfigurations.capsule.clastix.io globaltenantresources.capsule.clastix.io tenantresources.capsule.clastix.io tenants.capsule.clastix.io; do
|
||||
>   kubectl patch crd "${CR}" --type='json' -p="[{\"op\": \"replace\", \"path\": \"/spec/conversion/webhook/clientConfig/service/namespace\", \"value\": \"${CUSTOM_NS}\"}]"
|
||||
> done
|
||||
> ```
|
||||
|
||||
## Update your Capsule Helm chart
|
||||
|
||||
Make sure to update the Capsule Helm repository to fetch the latest changes.
|
||||
|
||||
```
|
||||
helm repo update
|
||||
```
|
||||
|
||||
The latest Chart must be used: at the current time, >0.3.0 is expected for Capsule >v0.2.0. You can fetch the full list of available charts with the following command:
|
||||
|
||||
```
|
||||
helm search repo -l clastix/capsule
|
||||
```
|
||||
|
||||
Since the Tenant custom resource definition has been patched with new fields, we can install Capsule back using the provided Helm chart.
|
||||
|
||||
```
|
||||
helm upgrade --install capsule clastix/capsule -n capsule-system --create-namespace --version 0.3.0
|
||||
```
|
||||
|
||||
This will start the Operator with the latest changes and perform the required sync operations, such as:
|
||||
|
||||
1. Ensuring the CA is still valid
|
||||
2. Ensuring a TLS certificate is valid for the local webhook server
|
||||
3. If not using the cert-manager integration, patching the Validating and Mutating Webhook Configuration resources with the Capsule CA
|
||||
4. If not using the cert-manager integration, patching the Capsule's Custom Resource Definitions conversion webhook fields with the Capsule CA
|
||||
|
||||
## Ensure the conversion webhook is working
|
||||
|
||||
Kubernetes Custom Resource Definitions provide a conversion webhook that an Operator can use to perform a seamless conversion between resources with different versions.
|
||||
|
||||
With the fresh installation, Capsule patches all the required moving parts to ensure this conversion is in place and uses the latest version (currently, `v1beta2`) to present the Tenant resources.
|
||||
|
||||
You can check this behaviour by issuing the following command:
|
||||
|
||||
```
|
||||
$: kubectl get tenants.v1beta2.capsule.clastix.io
|
||||
NAME NAMESPACE QUOTA NAMESPACE COUNT OWNER NAME OWNER KIND NODE SELECTOR AGE
|
||||
oil 3 0 alice User {"kubernetes.io/os":"linux"} 3m43s
|
||||
```
|
||||
|
||||
You should see all the previous Tenant resources converted into the new format and structure.
|
||||
|
||||
```
|
||||
$: kubectl get tenants.v1beta2.capsule.clastix.io
|
||||
NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR AGE
|
||||
oil Active 3 0 {"kubernetes.io/os":"linux"} 3m38s
|
||||
```
|
||||
|
||||
> Resources are still persisted in etcd using the previous Tenant version (`v1beta1`) and the conversion is executed on-the-fly thanks to the conversion webhook.
|
||||
> If you'd like to decrease the pressure on Capsule due to the conversion webhook, we suggest performing a resource patching using the command `kubectl replace`:
|
||||
> in this way, the API Server will update the etcd key with the specification according to the new versioning, allowing the conversion to be skipped.
|
||||
>
|
||||
> The `kubectl replace` command must be triggered when the Capsule webhook is up and running to allow the conversion between versions.
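>
> As a minimal sketch of that approach, reading the Tenants at the new version and writing them back unchanged:
>
> ```
> kubectl get tenants.v1beta2.capsule.clastix.io -o yaml > tenants-v1beta2.yaml
> kubectl replace -f tenants-v1beta2.yaml
> ```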
|
||||
|
||||
# Upgrading from < v0.1.0 up to v0.1.3
|
||||
|
||||
## Uninstall the old Capsule release
|
||||
|
||||
If you're using Helm as a package manager, all the Operator resources such as Deployment, Service, Role Binding, etc. must be deleted.
|
||||
|
||||
```
|
||||
helm uninstall -n capsule-system capsule
|
||||
```
|
||||
|
||||
Ensure that everything has been removed correctly, especially the Secret resources.
|
||||
|
||||
## Patch the Tenant custom resource definition
|
||||
|
||||
Unfortunately, Helm doesn't manage the lifecycle of Custom Resource Definitions; additional details can be found [here](https://github.com/helm/community/blob/f9e06c16d89ccea1bea77c01a6a96ae3b309f823/architecture/crds.md).
|
||||
|
||||
This process must be executed manually as follows:
|
||||
|
||||
```
|
||||
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/v0.1.0/charts/capsule/crds/tenant-crd.yaml
|
||||
```
|
||||
|
||||
> Please note the Capsule version in the URL above; your mileage may vary according to the version you're upgrading to.
|
||||
|
||||
## Install the Capsule operator using Helm
|
||||
|
||||
Since the Tenant custom resource definition has been patched with new fields, we can install Capsule back using the provided Helm chart.
|
||||
|
||||
```
|
||||
helm upgrade --install capsule clastix/capsule -n capsule-system --create-namespace --version=DESIRED_VERSION
|
||||
```
|
||||
|
||||
> Please note the `DESIRED_VERSION`: you have to pick the Helm chart version according to the Capsule version you'd like to upgrade to.
|
||||
>
|
||||
> You can retrieve it by browsing the GitHub source code, picking the Capsule tag as the ref, and inspecting the `Chart.yaml` file in the `charts/capsule` folder.
|
||||
|
||||
This will start the operator, which will perform several required actions, such as:
|
||||
|
||||
1. Generating a new CA
|
||||
2. Generating new TLS certificates for the local webhook server
|
||||
3. Patching the Validating and Mutating Webhook Configuration resources with the fresh new CA
|
||||
4. Patching the Custom Resource Definition tenant conversion webhook CA
|
||||
|
||||
## Ensure the conversion webhook is working
|
||||
|
||||
Kubernetes Custom Resource Definitions provide a conversion webhook that an Operator can use to perform a seamless conversion between resources with different versions.
|
||||
|
||||
With the fresh installation, Capsule patched all the required moving parts to ensure this conversion is in place, using the latest version (currently, `v1beta1`) to present the Tenant resources.
|
||||
|
||||
You can check this behaviour by issuing the following command:
|
||||
|
||||
```
|
||||
$: kubectl get tenants.v1beta1.capsule.clastix.io
|
||||
NAME NAMESPACE QUOTA NAMESPACE COUNT OWNER NAME OWNER KIND NODE SELECTOR AGE
|
||||
oil 3 0 alice User {"kubernetes.io/os":"linux"} 3m43s
|
||||
```
|
||||
|
||||
You should see all the previous Tenant resources converted into the new format and structure.
|
||||
|
||||
```
|
||||
$: kubectl get tenants.v1beta1.capsule.clastix.io
|
||||
NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR AGE
|
||||
oil Active 3 0 {"kubernetes.io/os":"linux"} 3m38s
|
||||
```
|
||||
|
||||
> Resources are still persisted in etcd using the v1alpha1 specification and the conversion is executed on-the-fly thanks to the conversion webhook.
|
||||
> If you'd like to decrease the pressure on Capsule due to the conversion webhook, we suggest performing a resource patching using the command `kubectl replace`: in this way, the API Server will update the etcd key with the specification according to the new versioning, allowing the conversion to be skipped.
|
||||
@@ -1,125 +0,0 @@
|
||||
# Tenants Backup and Restore with Velero
|
||||
|
||||
[Velero](https://velero.io) is a backup and restore solution that performs data protection, disaster recovery, and migrates Kubernetes clusters from on-premises to the Cloud or between different Clouds.
|
||||
|
||||
When it comes to backup and restore in Kubernetes, we have two main requirements:
|
||||
|
||||
- Configurations backup
|
||||
- Data backup
|
||||
|
||||
The first requirement aims to back up all the resources stored in the `etcd` database, for example: `namespaces`, `pods`, `services`, `deployments`, etc. The second is about how to back up stateful application data stored in volumes.
|
||||
|
||||
The main limitation of Velero is multi-tenancy: currently, Velero does not support multi-tenancy, meaning it can only be used by admin users and so cannot be provided "as a service" to users. This means that the cluster admin needs to take care of users' backups.
|
||||
|
||||
Assuming you have multiple tenants managed by Capsule, for example `oil` and `gas`, as cluster admin you can take care of scheduling backups for:
|
||||
|
||||
- Tenant cluster resources
|
||||
- Namespaces belonging to each tenant
|
||||
|
||||
## Create backup of a tenant
|
||||
Create a backup of the tenant `oil`. It consists of two different backups:
|
||||
|
||||
- backup of the tenant resource
|
||||
- backup of all the resources belonging to the tenant
|
||||
|
||||
To back up the `oil` tenant selectively, label the tenant:
|
||||
|
||||
```
|
||||
kubectl label tenant oil capsule.clastix.io/tenant=oil
|
||||
```
|
||||
|
||||
and create the backup:
|
||||
|
||||
```
|
||||
velero create backup oil-tenant \
|
||||
--include-cluster-resources=true \
|
||||
--include-resources=tenants.capsule.clastix.io \
|
||||
--selector capsule.clastix.io/tenant=oil
|
||||
```
|
||||
|
||||
resulting in the following Velero object:
|
||||
|
||||
```yaml
|
||||
apiVersion: velero.io/v1
|
||||
kind: Backup
|
||||
metadata:
|
||||
name: oil-tenant
|
||||
spec:
|
||||
defaultVolumesToRestic: false
|
||||
hooks: {}
|
||||
includeClusterResources: true
|
||||
includedNamespaces:
|
||||
- '*'
|
||||
includedResources:
|
||||
- tenants.capsule.clastix.io
|
||||
labelSelector:
|
||||
matchLabels:
|
||||
capsule.clastix.io/tenant: oil
|
||||
metadata: {}
|
||||
storageLocation: default
|
||||
ttl: 720h0m0s
|
||||
```
|
||||
|
||||
Create a backup of all the resources belonging to the `oil` tenant namespaces:
|
||||
|
||||
```
|
||||
velero create backup oil-namespaces \
|
||||
--include-cluster-resources=false \
|
||||
--include-namespaces oil-production,oil-development,oil-marketing
|
||||
```
|
||||
|
||||
resulting in the following Velero object:
|
||||
|
||||
```yaml
|
||||
apiVersion: velero.io/v1
|
||||
kind: Backup
|
||||
metadata:
|
||||
name: oil-namespaces
|
||||
spec:
|
||||
defaultVolumesToRestic: false
|
||||
hooks: {}
|
||||
includeClusterResources: false
|
||||
includedNamespaces:
|
||||
- oil-production
|
||||
- oil-development
|
||||
- oil-marketing
|
||||
metadata: {}
|
||||
storageLocation: default
|
||||
ttl: 720h0m0s
|
||||
```
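The namespace list in the command above is hard-coded; since Capsule labels every namespace of a tenant with `capsule.clastix.io/tenant`, a sketch for building the list dynamically could be:

```
NAMESPACES=$(kubectl get namespaces -l capsule.clastix.io/tenant=oil -o jsonpath='{.items[*].metadata.name}' | tr ' ' ',')

velero create backup oil-namespaces \
  --include-cluster-resources=false \
  --include-namespaces "${NAMESPACES}"
```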
|
||||
|
||||
> Velero requires an Object Storage backend in which to store backups; you should take care of this requirement before using Velero.
|
||||
|
||||
## Restore a tenant from the backup
|
||||
To recover the tenant after a disaster, or to migrate it to another cluster, create a restore from the previous backups:
|
||||
|
||||
```
|
||||
velero create restore --from-backup oil-tenant
|
||||
velero create restore --from-backup oil-namespaces
|
||||
```
|
||||
|
||||
Using Velero to restore a Capsule tenant can lead to an incomplete recovery of the tenant, because the namespaces restored with Velero do not have the `OwnerReference` field used to bind them to the tenant. For this reason, the restored namespaces are not bound to the tenant:
|
||||
|
||||
```
|
||||
kubectl get tnt
|
||||
NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR AGE
|
||||
gas active 9 5 {"pool":"gas"} 34m
|
||||
solar active 9 8 {"pool":"solar"} 33m
|
||||
oil active 9 0 # <<< {"pool":"oil"} 54m
|
||||
```
|
||||
|
||||
To avoid this problem, you can use the script `velero-restore.sh` located under the `hack/` folder:
|
||||
|
||||
```
|
||||
./velero-restore.sh --kubeconfig /path/to/your/kubeconfig --tenant "oil" restore
|
||||
```
|
||||
|
||||
Running this command, we patch the tenant's namespace manifests, which currently lack `ownerReferences`. Once the command has finished, you get the tenant back:
|
||||
|
||||
```
|
||||
kubectl get tnt
|
||||
NAME STATE NAMESPACE QUOTA NAMESPACE COUNT NODE SELECTOR AGE
|
||||
gas active 9 5 {"pool":"gas"} 44m
|
||||
solar active 9 8 {"pool":"solar"} 43m
|
||||
oil active 9 3 # <<< {"pool":"oil"} 12s
|
||||
```
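For reference, a rough sketch of the kind of patch the script applies to each restored namespace; the tenant UID lookup and the API version shown here are assumptions, and the script itself remains the supported path:

```
TENANT="oil"
TENANT_UID=$(kubectl get tenant "${TENANT}" -o jsonpath='{.metadata.uid}')

for NS in $(kubectl get namespaces -l capsule.clastix.io/tenant="${TENANT}" -o jsonpath='{.items[*].metadata.name}'); do
  # Re-attach the namespace to the Tenant by restoring the missing ownerReference
  kubectl patch namespace "${NS}" --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"capsule.clastix.io/v1beta2\",\"kind\":\"Tenant\",\"name\":\"${TENANT}\",\"uid\":\"${TENANT_UID}\"}]}}"
done
```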
|
||||
@@ -1,17 +0,0 @@
|
||||
# Capsule Overview
|
||||
|
||||
## Kubernetes multi-tenancy made easy
|
||||
**Capsule** implements a multi-tenant and policy-based environment in your Kubernetes cluster. It is designed as a micro-services-based ecosystem with a minimalist approach, leveraging only upstream Kubernetes.
|
||||
|
||||
## What's the problem with the current status?
|
||||
|
||||
Kubernetes introduces the _Namespace_ object type to create logical partitions of the cluster as isolated *slices*. However, implementing advanced multi-tenancy scenarios soon becomes complicated because of the flat structure of Kubernetes namespaces and the impossibility of sharing resources among namespaces belonging to the same tenant. To overcome this, cluster admins tend to provision a dedicated cluster for each group of users, teams, or departments. As an organization grows, the number of clusters to manage and keep aligned becomes an operational nightmare, described as the well-known phenomenon of _cluster sprawl_.
|
||||
|
||||
## Entering Capsule
|
||||
|
||||
Capsule takes a different approach. In a single cluster, the Capsule Controller aggregates multiple namespaces into a lightweight abstraction called _Tenant_, basically a grouping of Kubernetes Namespaces. Within each tenant, users are free to create their namespaces and share all the assigned resources.
|
||||
|
||||
On the other hand, the Capsule Policy Engine keeps the different tenants isolated from each other. _Network and Security Policies_, _Resource Quotas_, _Limit Ranges_, _RBAC_, and other policies defined at the tenant level are automatically inherited by all the namespaces in the tenant. Users are then free to operate their tenants in autonomy, without the intervention of the cluster administrator.
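As a minimal sketch of the abstraction (field names taken from the current `v1beta2` API; they vary across older versions), a Tenant owned by a single user could look like:

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: oil
spec:
  owners:
    - name: alice
      kind: User
```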
|
||||
|
||||
|
||||

|
||||
@@ -1,57 +0,0 @@
|
||||
// This is where project configuration and plugin options are located.
|
||||
// Learn more: https://gridsome.org/docs/config
|
||||
|
||||
// Changes here require a server restart.
|
||||
// To restart press CTRL + C in terminal and run `gridsome develop`
|
||||
|
||||
module.exports = {
|
||||
siteName: 'Capsule Documentation',
|
||||
titleTemplate: 'Capsule Documentation | %s',
|
||||
siteDescription: 'Documentation of Capsule, multi-tenant Operator for Kubernetes',
|
||||
icon: {
|
||||
favicon: './src/assets/favicon.png',
|
||||
},
|
||||
plugins: [
|
||||
{
|
||||
use: 'gridsome-plugin-gtag',
|
||||
options: {
|
||||
config: {
|
||||
id: 'G-ZL1M3TWPY2',
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
use: "gridsome-plugin-tailwindcss",
|
||||
|
||||
options: {
|
||||
tailwindConfig: './tailwind.config.js',
|
||||
// presetEnvConfig: {},
|
||||
// shouldImport: false,
|
||||
// shouldTimeTravel: false
|
||||
}
|
||||
},
|
||||
{
|
||||
use: '@gridsome/source-filesystem',
|
||||
options: {
|
||||
baseDir: './content',
|
||||
path: '**/*.md',
|
||||
pathPrefix: '/docs',
|
||||
typeName: 'MarkdownPage',
|
||||
remark: {
|
||||
externalLinksTarget: '_blank',
|
||||
externalLinksRel: ['noopener', 'noreferrer'],
|
||||
plugins: [
|
||||
'@gridsome/remark-prismjs'
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
],
|
||||
chainWebpack: config => {
|
||||
const svgRule = config.module.rule('svg')
|
||||
svgRule.uses.clear()
|
||||
svgRule
|
||||
.use('vue-svg-loader')
|
||||
.loader('vue-svg-loader')
|
||||
}
|
||||
}
|
||||
@@ -1,161 +0,0 @@
|
||||
// Server API makes it possible to hook into various parts of Gridsome
|
||||
// on server-side and add custom data to the GraphQL data layer.
|
||||
// Learn more: https://gridsome.org/docs/server-api/
|
||||
|
||||
// Changes here require a server restart.
|
||||
// To restart press CTRL + C in terminal and run `gridsome develop`
|
||||
|
||||
module.exports = function (api) {
|
||||
api.loadSource(actions => {
|
||||
// Use the Data Store API here: https://gridsome.org/docs/data-store-api/
|
||||
const sidebar = actions.addCollection({
|
||||
typeName: 'Sidebar'
|
||||
})
|
||||
|
||||
sidebar.addNode({
|
||||
sections: [
|
||||
{
|
||||
items: [
|
||||
{
|
||||
label: 'Overview',
|
||||
path: '/docs/'
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
title: 'Documentation',
|
||||
items: [
|
||||
{
|
||||
label: 'Getting Started',
|
||||
path: '/docs/general/getting-started'
|
||||
},
|
||||
{
|
||||
label: 'Tutorial',
|
||||
path: '/docs/general/tutorial'
|
||||
},
|
||||
{
|
||||
label: 'References',
|
||||
path: '/docs/general/references'
|
||||
},
|
||||
{
|
||||
label: 'CRDs APIs',
|
||||
path: '/docs/general/crds-apis'
|
||||
},
|
||||
{
|
||||
label: 'Multi-Tenant Benchmark',
|
||||
path: '/docs/general/mtb'
|
||||
},
|
||||
{
|
||||
label: 'Capsule Proxy',
|
||||
path: '/docs/general/proxy'
|
||||
},
|
||||
{
|
||||
label: 'Dashboard',
|
||||
path: '/docs/general/lens'
|
||||
},
|
||||
]
|
||||
},
|
||||
{
|
||||
title: 'Guides',
|
||||
items: [
|
||||
{
|
||||
label: 'OIDC Authentication',
|
||||
path: '/docs/guides/oidc-auth'
|
||||
},
|
||||
{
|
||||
label: 'Monitoring Capsule',
|
||||
path: '/docs/guides/monitoring'
|
||||
},
|
||||
{
|
||||
label: 'Kubernetes Dashboard',
|
||||
path: '/docs/guides/kubernetes-dashboard'
|
||||
},
|
||||
{
|
||||
label: 'Backup & Restore with Velero',
|
||||
path: '/docs/guides/velero'
|
||||
},
|
||||
{
|
||||
label: 'Upgrading Capsule',
|
||||
path: '/docs/guides/upgrading'
|
||||
},
|
||||
{
|
||||
label: 'Multi-tenant GitOps with Flux',
|
||||
path: '/docs/guides/flux2-capsule'
|
||||
},
|
||||
{
|
||||
label: 'Install on Charmed Kubernetes',
|
||||
path: '/docs/guides/charmed'
|
||||
},
|
||||
{
|
||||
label: 'Control Pod Security',
|
||||
path: '/docs/guides/pod-security'
|
||||
},
|
||||
{
|
||||
title: 'Tenants and Rancher Projects',
|
||||
subItems: [
|
||||
{
|
||||
label: 'Overview',
|
||||
path: '/docs/guides/rancher-projects/introduction'
|
||||
},
|
||||
{
|
||||
label: 'Tenants and Projects',
|
||||
path: '/docs/guides/rancher-projects/capsule-rancher'
|
||||
},
|
||||
{
|
||||
label: 'Rancher Shell and cluster-wide resources',
|
||||
path: '/docs/guides/rancher-projects/capsule-proxy-rancher'
|
||||
},
|
||||
{
|
||||
label: 'OIDC authentication with Capsule, Rancher and Keycloak',
|
||||
path: '/docs/guides/rancher-projects/oidc-keycloak'
|
||||
},
|
||||
]
|
||||
},
|
||||
{
|
||||
title: 'Managed Kubernetes',
|
||||
subItems: [
|
||||
{
|
||||
label: 'Overview',
|
||||
path: '/docs/guides/managed-kubernetes/overview'
|
||||
},
|
||||
{
|
||||
label: 'EKS',
|
||||
path: '/docs/guides/managed-kubernetes/aws-eks'
|
||||
},
|
||||
{
|
||||
label: 'CoAKS',
|
||||
path: '/docs/guides/managed-kubernetes/coaks'
|
||||
},
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
title: 'Contributing',
|
||||
items: [
|
||||
{
|
||||
label: 'Guidelines',
|
||||
path: '/docs/contributing/guidelines'
|
||||
},
|
||||
{
|
||||
label: 'Development',
|
||||
path: '/docs/contributing/development'
|
||||
},
|
||||
{
|
||||
label: 'Governance',
|
||||
path: '/docs/contributing/governance'
|
||||
},
|
||||
{
|
||||
label: 'Release process',
|
||||
path: '/docs/contributing/release'
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
})
|
||||
})
|
||||
|
||||
api.createPages(({ createPage }) => {
|
||||
// Use the Pages API here: https://gridsome.org/docs/pages-api/
|
||||
})
|
||||
}
|
||||
14778
docs/package-lock.json
generated
@@ -1,29 +0,0 @@
|
||||
{
|
||||
"name": "doc-capsule",
|
||||
"private": true,
|
||||
"scripts": {
|
||||
"develop": "gridsome develop",
|
||||
"explore": "gridsome explore",
|
||||
"build": "gridsome build"
|
||||
},
|
||||
"dependencies": {
|
||||
"@gridsome/remark-prismjs": "^0.5.0",
|
||||
"@gridsome/source-filesystem": "^0.6.2",
|
||||
"@gridsome/transformer-remark": "^0.6.4",
|
||||
"fuse.js": "^6.4.6",
|
||||
"gridsome": "^0.7.0",
|
||||
"gridsome-plugin-gtag": "^0.1.10"
|
||||
},
|
||||
"devDependencies": {
|
||||
"autoprefixer": "^9.8.8",
|
||||
"gridsome-plugin-tailwindcss": "^4.1.1",
|
||||
"postcss": "^8.4.31",
|
||||
"postcss-import": "^14.0.2",
|
||||
"postcss-preset-env": "^6.7.0",
|
||||
"prism-themes": "^1.9.0",
|
||||
"sass": "^1.42.1",
|
||||
"sass-loader": "^10.1.1",
|
||||
"tailwindcss": "npm:@tailwindcss/postcss7-compat@^2.2.17",
|
||||
"vue-svg-loader": "^0.16.0"
|
||||
}
|
||||
}
|
||||
|
Before Width: | Height: | Size: 1.9 KiB |
@@ -1 +0,0 @@
|
||||
<svg fill="none" stroke="currentColor" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 10"><path d="M15 1.2l-7 7-7-7" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/></svg>
|
||||
|
Before Width: | Height: | Size: 192 B |
|
Before Width: | Height: | Size: 6.0 KiB |
@@ -1 +0,0 @@
|
||||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-github"><path d="M9 19c-5 1.5-5-2.5-7-3m14 6v-3.87a3.37 3.37 0 0 0-.94-2.61c3.14-.35 6.44-1.54 6.44-7A5.44 5.44 0 0 0 20 4.77 5.07 5.07 0 0 0 19.91 1S18.73.65 16 2.48a13.38 13.38 0 0 0-7 0C6.27.65 5.09 1 5.09 1A5.07 5.07 0 0 0 5 4.77a5.44 5.44 0 0 0-1.5 3.78c0 5.42 3.3 6.61 6.44 7A3.37 3.37 0 0 0 9 18.13V22"></path></svg>
|
||||
|
Before Width: | Height: | Size: 504 B |
@@ -1 +0,0 @@
|
||||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-linkedin"><path d="M16 8a6 6 0 0 1 6 6v7h-4v-7a2 2 0 0 0-2-2 2 2 0 0 0-2 2v7h-4v-7a6 6 0 0 1 6-6z"></path><rect x="2" y="9" width="4" height="12"></rect><circle cx="4" cy="4" r="2"></circle></svg>
|
||||
|
Before Width: | Height: | Size: 377 B |
@@ -1 +0,0 @@
|
||||
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-search"><circle cx="11" cy="11" r="8"></circle><line x1="21" y1="21" x2="16.65" y2="16.65"></line></svg>
|
||||
|
Before Width: | Height: | Size: 308 B |
@@ -1 +0,0 @@
|
||||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-slack"><path d="M14.5 10c-.83 0-1.5-.67-1.5-1.5v-5c0-.83.67-1.5 1.5-1.5s1.5.67 1.5 1.5v5c0 .83-.67 1.5-1.5 1.5z"></path><path d="M20.5 10H19V8.5c0-.83.67-1.5 1.5-1.5s1.5.67 1.5 1.5-.67 1.5-1.5 1.5z"></path><path d="M9.5 14c.83 0 1.5.67 1.5 1.5v5c0 .83-.67 1.5-1.5 1.5S8 21.33 8 20.5v-5c0-.83.67-1.5 1.5-1.5z"></path><path d="M3.5 14H5v1.5c0 .83-.67 1.5-1.5 1.5S2 16.33 2 15.5 2.67 14 3.5 14z"></path><path d="M14 14.5c0-.83.67-1.5 1.5-1.5h5c.83 0 1.5.67 1.5 1.5s-.67 1.5-1.5 1.5h-5c-.83 0-1.5-.67-1.5-1.5z"></path><path d="M15.5 19H14v1.5c0 .83.67 1.5 1.5 1.5s1.5-.67 1.5-1.5-.67-1.5-1.5-1.5z"></path><path d="M10 9.5C10 8.67 9.33 8 8.5 8h-5C2.67 8 2 8.67 2 9.5S2.67 11 3.5 11h5c.83 0 1.5-.67 1.5-1.5z"></path><path d="M8.5 5H10V3.5C10 2.67 9.33 2 8.5 2S7 2.67 7 3.5 7.67 5 8.5 5z"></path></svg>
|
||||
|
Before Width: | Height: | Size: 976 B |
@@ -1 +0,0 @@
|
||||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="feather feather-twitter"><path d="M23 3a10.9 10.9 0 0 1-3.14 1.53 4.48 4.48 0 0 0-7.86 3v1A10.66 10.66 0 0 1 3 4s-4 9 5 13a11.64 11.64 0 0 1-7 2c9 5 20 0 20-11.5a4.5 4.5 0 0 0-.08-.83A7.72 7.72 0 0 0 23 3z"></path></svg>
|
||||
|
Before Width: | Height: | Size: 385 B |