Compare commits
47 Commits
v0.1.0-rc6 ... helm-v0.1.
| Author | SHA1 | Date |
|---|---|---|
| | 7a66e8ea93 | |
| | b5eb03ea76 | |
| | 681b514516 | |
| | b28b98a7bc | |
| | f6bf0ca446 | |
| | 1081bad7cb | |
| | 79372c7332 | |
| | 4e8faaf845 | |
| | d1b008972c | |
| | a14c7609df | |
| | 03456c0b54 | |
| | ddfe2219a0 | |
| | 6b68363a46 | |
| | 357834c5b9 | |
| | 085d9f6503 | |
| | 196e3c910d | |
| | 0039c91c23 | |
| | 26965a5ea2 | |
| | 422b6598ba | |
| | 61e6ab4088 | |
| | 94c6a64fcb | |
| | 75ebb571e4 | |
| | 8f3b3eac29 | |
| | 7979c256d9 | |
| | bdafbcf90a | |
| | d0530bbbe3 | |
| | 1035afc7fe | |
| | 67046c5b54 | |
| | 564c4db81a | |
| | 30c3ab078d | |
| | e9b803b9cd | |
| | cb8e504832 | |
| | 713867d916 | |
| | 23e55c685c | |
| | 6393541818 | |
| | c140ab076e | |
| | 6b629777b7 | |
| | 5554ed5f32 | |
| | 00ef9a2f67 | |
| | 46c2f0e997 | |
| | 0c0a90a934 | |
| | 9d65013a22 | |
| | 60ab33337d | |
| | 225d671301 | |
| | 7538926bae | |
| | 0de0eca72a | |
| | d5a702ceae | |
.github/workflows/ci.yml (9 changed lines)

```diff
@@ -7,6 +7,15 @@ on:
     branches: [ "*" ]
 
 jobs:
+  commit_lint:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+        with:
+          fetch-depth: 0
+      - uses: wagoid/commitlint-github-action@v2
+        with:
+          firstParent: true
   golangci:
     name: lint
     runs-on: ubuntu-latest
```
.github/workflows/e2e.yml (18 changed lines)

```diff
@@ -3,8 +3,26 @@ name: e2e
 on:
   push:
     branches: [ "*" ]
+    paths:
+      - '.github/workflows/e2e.yml'
+      - 'api/*'
+      - 'controllers/*'
+      - 'e2e/*'
+      - 'Dockerfile'
+      - 'go.*'
+      - 'main.go'
+      - 'Makefile'
   pull_request:
     branches: [ "*" ]
+    paths:
+      - '.github/workflows/e2e.yml'
+      - 'api/*'
+      - 'controllers/*'
+      - 'e2e/*'
+      - 'Dockerfile'
+      - 'go.*'
+      - 'main.go'
+      - 'Makefile'
 
 jobs:
   kind:
```
.github/workflows/helm.yml (4 changed lines)

```diff
@@ -3,8 +3,12 @@ name: Helm Chart
 on:
   push:
     branches: [ "*" ]
+    tags: [ "helm-v*" ]
   pull_request:
     branches: [ "*" ]
+  create:
+    branches: [ "*" ]
+    tags: [ "helm-v*" ]
 
 jobs:
   lint:
```
Makefile (2 changed lines)

```diff
@@ -1,5 +1,5 @@
 # Current Operator version
-VERSION ?= $$(git describe --abbrev=0 --tags)
+VERSION ?= $$(git describe --abbrev=0 --tags --match "v*")
 
 # Default bundle image tag
 BUNDLE_IMG ?= quay.io/clastix/capsule:$(VERSION)-bundle
```
README.md (66 changed lines)

````diff
@@ -13,7 +13,7 @@
 
 ---
 
-# Kubernetes multi-tenancy made simple
+# Kubernetes multi-tenancy made easy
 **Capsule** helps to implement a multi-tenancy and policy-based environment in your Kubernetes cluster. It is not intended to be yet another _PaaS_, instead, it has been designed as a micro-services-based ecosystem with the minimalist approach, leveraging only on upstream Kubernetes.
 
 # What's the problem with the current status?
@@ -71,36 +71,24 @@ Clone this repository and move to the repo folder:
 
 ```
 $ kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/install.yaml
 namespace/capsule-system created
 customresourcedefinition.apiextensions.k8s.io/capsuleconfigurations.capsule.clastix.io created
 customresourcedefinition.apiextensions.k8s.io/tenants.capsule.clastix.io created
 clusterrolebinding.rbac.authorization.k8s.io/capsule-manager-rolebinding created
 secret/capsule-ca created
 secret/capsule-tls created
 service/capsule-controller-manager-metrics-service created
 service/capsule-webhook-service created
 deployment.apps/capsule-controller-manager created
 capsuleconfiguration.capsule.clastix.io/capsule-default created
 mutatingwebhookconfiguration.admissionregistration.k8s.io/capsule-mutating-webhook-configuration created
 validatingwebhookconfiguration.admissionregistration.k8s.io/capsule-validating-webhook-configuration created
 ```
 
 It will install the Capsule controller in a dedicated namespace `capsule-system`.
 
 ## How to create Tenants
-Use the scaffold [Tenant](config/samples/capsule_v1alpha1_tenant.yaml) and simply apply as cluster admin.
+Use the scaffold [Tenant](config/samples/capsule_v1beta1_tenant.yaml) and simply apply as cluster admin.
 
 ```
-$ kubectl apply -f config/samples/capsule_v1alpha1_tenant.yaml
-tenant.capsule.clastix.io/oil created
+$ kubectl apply -f config/samples/capsule_v1beta1_tenant.yaml
+tenant.capsule.clastix.io/gas created
 ```
 
 You can check the tenant just created as
 
 ```
 $ kubectl get tenants
-NAME   NAMESPACE QUOTA   NAMESPACE COUNT   OWNER NAME   OWNER KIND   NODE SELECTOR   AGE
-oil    3                 0                 alice        User                         1m
+NAME   STATE    NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR                  AGE
+gas    Active   3                 0                 {"kubernetes.io/os":"linux"}   25s
 ```
 
 ## Tenant owners
@@ -112,52 +100,46 @@ Assignment to a group depends on the authentication strategy in your cluster.
 
 For example, if you are using `capsule.clastix.io`, users authenticated through a _X.509_ certificate must have `capsule.clastix.io` as _Organization_: `-subj "/CN=${USER}/O=capsule.clastix.io"`
 
-Users authenticated through an _OIDC token_ must have
+Users authenticated through an _OIDC token_ must have in their token:
 
 ```json
 ...
 "users_groups": [
-  "capsule.clastix.io",
-  "other_group"
+    "capsule.clastix.io",
+    "other_group"
 ]
 ```
 
-in their token.
-
-The [hack/create-user.sh](hack/create-user.sh) can help you set up a dummy `kubeconfig` for the `alice` user acting as owner of a tenant called `oil`
+The [hack/create-user.sh](hack/create-user.sh) can help you set up a dummy `kubeconfig` for the `bob` user acting as owner of a tenant called `gas`
 
 ```bash
-./hack/create-user.sh alice oil
-creating certs in TMPDIR /tmp/tmp.4CLgpuime3
-Generating RSA private key, 2048 bit long modulus (2 primes)
-............+++++
-........................+++++
-e is 65537 (0x010001)
-certificatesigningrequest.certificates.k8s.io/alice-oil created
-certificatesigningrequest.certificates.k8s.io/alice-oil approved
-kubeconfig file is: alice-oil.kubeconfig
-to use it as alice export KUBECONFIG=alice-oil.kubeconfig
+./hack/create-user.sh bob gas
+...
+certificatesigningrequest.certificates.k8s.io/bob-gas created
+certificatesigningrequest.certificates.k8s.io/bob-gas approved
+kubeconfig file is: bob-gas.kubeconfig
+to use it as bob export KUBECONFIG=bob-gas.kubeconfig
 ```
 
 ## Working with Tenants
-Log in to the Kubernetes cluster as `alice` tenant owner
+Log in to the Kubernetes cluster as `bob` tenant owner
 
 ```
-$ export KUBECONFIG=alice-oil.kubeconfig
+$ export KUBECONFIG=bob-gas.kubeconfig
 ```
 
 and create a couple of new namespaces
 
 ```
-$ kubectl create namespace oil-production
-$ kubectl create namespace oil-development
+$ kubectl create namespace gas-production
+$ kubectl create namespace gas-development
 ```
 
-As user `alice` you can operate with fully admin permissions:
+As user `bob` you can operate with fully admin permissions:
 
 ```
-$ kubectl -n oil-development run nginx --image=docker.io/nginx
-$ kubectl -n oil-development get pods
+$ kubectl -n gas-development run nginx --image=docker.io/nginx
+$ kubectl -n gas-development get pods
 ```
 
 but limited to only your own namespaces:
@@ -165,7 +147,7 @@ but limited to only your own namespaces:
 ```
 $ kubectl -n kube-system get pods
 Error from server (Forbidden): pods is forbidden:
-User "alice" cannot list resource "pods" in API group "" in the namespace "kube-system"
+User "bob" cannot list resource "pods" in API group "" in the namespace "kube-system"
 ```
 
 # Documentation
````
```diff
@@ -26,6 +26,7 @@ const (
 
 	enableNodePortsAnnotation    = "capsule.clastix.io/enable-node-ports"
 	enableExternalNameAnnotation = "capsule.clastix.io/enable-external-name"
+	enableLoadBalancerAnnotation = "capsule.clastix.io/enable-loadbalancer-service"
 
 	ownerGroupsAnnotation = "owners.capsule.clastix.io/group"
 	ownerUsersAnnotation  = "owners.capsule.clastix.io/user"
@@ -297,6 +298,21 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
 		dst.Spec.ServiceOptions.AllowedServices.ExternalName = pointer.BoolPtr(val)
 	}
 
+	loadBalancerService, ok := annotations[enableLoadBalancerAnnotation]
+	if ok {
+		val, err := strconv.ParseBool(loadBalancerService)
+		if err != nil {
+			return errors.Wrap(err, fmt.Sprintf("unable to parse %s annotation on tenant %s", enableLoadBalancerAnnotation, t.GetName()))
+		}
+		if dst.Spec.ServiceOptions == nil {
+			dst.Spec.ServiceOptions = &capsulev1beta1.ServiceOptions{}
+		}
+		if dst.Spec.ServiceOptions.AllowedServices == nil {
+			dst.Spec.ServiceOptions.AllowedServices = &capsulev1beta1.AllowedServices{}
+		}
+		dst.Spec.ServiceOptions.AllowedServices.LoadBalancer = pointer.BoolPtr(val)
+	}
+
 	// Status
 	dst.Status = capsulev1beta1.TenantStatus{
 		Size: t.Status.Size,
@@ -309,6 +325,7 @@ func (t *Tenant) ConvertTo(dstRaw conversion.Hub) error {
 	delete(dst.ObjectMeta.Annotations, podPriorityAllowedRegexAnnotation)
 	delete(dst.ObjectMeta.Annotations, enableNodePortsAnnotation)
 	delete(dst.ObjectMeta.Annotations, enableExternalNameAnnotation)
+	delete(dst.ObjectMeta.Annotations, enableLoadBalancerAnnotation)
 	delete(dst.ObjectMeta.Annotations, ownerGroupsAnnotation)
 	delete(dst.ObjectMeta.Annotations, ownerUsersAnnotation)
 	delete(dst.ObjectMeta.Annotations, ownerServiceAccountAnnotation)
@@ -530,6 +547,7 @@ func (t *Tenant) ConvertFrom(srcRaw conversion.Hub) error {
 	if src.Spec.ServiceOptions != nil && src.Spec.ServiceOptions.AllowedServices != nil {
 		t.Annotations[enableNodePortsAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.NodePort)
 		t.Annotations[enableExternalNameAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.ExternalName)
+		t.Annotations[enableLoadBalancerAnnotation] = strconv.FormatBool(*src.Spec.ServiceOptions.AllowedServices.LoadBalancer)
 	}
 
 	// Status
```
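The `ConvertTo` hunk above repeats one pattern per annotation: look the key up, `strconv.ParseBool` the value, and wrap any parse error with the annotation key for context. A minimal standalone sketch of that pattern follows; it is an assumption-laden illustration, not the project's code: it uses only the stdlib (`fmt.Errorf` with `%w` instead of `pkg/errors.Wrap`) and a bare map instead of the Tenant type, and `parseBoolAnnotation` is a hypothetical helper name.

```go
package main

import (
	"fmt"
	"strconv"
)

// parseBoolAnnotation sketches the conversion pattern: look the annotation
// up, parse it as a boolean, and wrap parse failures with the key for
// context. ok reports whether the annotation was present at all.
func parseBoolAnnotation(annotations map[string]string, key string) (val bool, ok bool, err error) {
	raw, ok := annotations[key]
	if !ok {
		return false, false, nil
	}
	val, err = strconv.ParseBool(raw)
	if err != nil {
		return false, true, fmt.Errorf("unable to parse %s annotation: %w", key, err)
	}
	return val, true, nil
}

func main() {
	ann := map[string]string{"capsule.clastix.io/enable-loadbalancer-service": "false"}
	v, ok, err := parseBoolAnnotation(ann, "capsule.clastix.io/enable-loadbalancer-service")
	fmt.Println(v, ok, err) // false true <nil>
}
```

Note that `strconv.ParseBool` also accepts `"1"`, `"t"`, `"TRUE"`, and friends, so annotation values do not have to be the literal strings `"true"`/`"false"`.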
```diff
@@ -52,6 +52,7 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
 				AllowedServices: &capsulev1beta1.AllowedServices{
 					NodePort:     pointer.BoolPtr(false),
 					ExternalName: pointer.BoolPtr(false),
+					LoadBalancer: pointer.BoolPtr(false),
 				},
 				ExternalServiceIPs: &capsulev1beta1.ExternalServiceIPsSpec{
 					Allowed: []capsulev1beta1.AllowedIP{"192.168.0.1"},
@@ -285,6 +286,7 @@ func generateTenantsSpecs() (Tenant, capsulev1beta1.Tenant) {
 			podAllowedImagePullPolicyAnnotation: "Always,IfNotPresent",
 			enableExternalNameAnnotation:        "false",
 			enableNodePortsAnnotation:           "false",
+			enableLoadBalancerAnnotation:        "false",
 			podPriorityAllowedAnnotation:        "default",
 			podPriorityAllowedRegexAnnotation:   "^tier-.*$",
 			ownerGroupsAnnotation:               "owner-foo,owner-bar",
```
```diff
@@ -1,6 +1,6 @@
 // Copyright 2020-2021 Clastix Labs
 // SPDX-License-Identifier: Apache-2.0
-
+//nolint:dupl
 package v1beta1
 
 import (
```
```diff
@@ -1,6 +1,6 @@
 // Copyright 2020-2021 Clastix Labs
 // SPDX-License-Identifier: Apache-2.0
-
+//nolint:dupl
 package v1beta1
 
 import (
```
api/v1beta1/deny_wildcard.go (new file, 15 lines)

```diff
@@ -0,0 +1,15 @@
+// Copyright 2020-2021 Clastix Labs
+// SPDX-License-Identifier: Apache-2.0
+
+package v1beta1
+
+const (
+	denyWildcard = "capsule.clastix.io/deny-wildcard"
+)
+
+func (t *Tenant) IsWildcardDenied() bool {
+	if v, ok := t.Annotations[denyWildcard]; ok && v == "true" {
+		return true
+	}
+	return false
+}
```
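Note that `IsWildcardDenied` above only treats the literal lowercase string `"true"` as a denial; `"True"`, `"1"`, or a missing annotation all fall through to `false`. A standalone sketch that mirrors this logic on a bare annotation map (an illustration under that assumption, not the project's code, which hangs the method off the Tenant type):

```go
package main

import "fmt"

// denyWildcard mirrors the annotation key introduced in deny_wildcard.go.
const denyWildcard = "capsule.clastix.io/deny-wildcard"

// isWildcardDenied reproduces the helper's logic: only the exact value
// "true" denies wildcards; any other value, or a missing key, does not.
func isWildcardDenied(annotations map[string]string) bool {
	v, ok := annotations[denyWildcard]
	return ok && v == "true"
}

func main() {
	fmt.Println(isWildcardDenied(map[string]string{denyWildcard: "true"})) // true
	fmt.Println(isWildcardDenied(map[string]string{denyWildcard: "True"})) // false
	fmt.Println(isWildcardDenied(nil))                                     // false
}
```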
api/v1beta1/forbidden_list.go (new file, 33 lines)

```diff
@@ -0,0 +1,33 @@
+// Copyright 2020-2021 Clastix Labs
+// SPDX-License-Identifier: Apache-2.0
+//nolint:dupl
+package v1beta1
+
+import (
+	"regexp"
+	"sort"
+	"strings"
+)
+
+type ForbiddenListSpec struct {
+	Exact []string `json:"denied,omitempty"`
+	Regex string   `json:"deniedRegex,omitempty"`
+}
+
+func (in *ForbiddenListSpec) ExactMatch(value string) (ok bool) {
+	if len(in.Exact) > 0 {
+		sort.SliceStable(in.Exact, func(i, j int) bool {
+			return strings.ToLower(in.Exact[i]) < strings.ToLower(in.Exact[j])
+		})
+		i := sort.SearchStrings(in.Exact, value)
+		ok = i < len(in.Exact) && in.Exact[i] == value
+	}
+	return
+}
+
+func (in ForbiddenListSpec) RegexMatch(value string) (ok bool) {
+	if len(in.Regex) > 0 {
+		ok = regexp.MustCompile(in.Regex).MatchString(value)
+	}
+	return
+}
```
api/v1beta1/forbidden_list_test.go (new file, 67 lines)

```diff
@@ -0,0 +1,67 @@
+// Copyright 2020-2021 Clastix Labs
+// SPDX-License-Identifier: Apache-2.0
+//nolint:dupl
+package v1beta1
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+)
+
+func TestForbiddenListSpec_ExactMatch(t *testing.T) {
+	type tc struct {
+		In    []string
+		True  []string
+		False []string
+	}
+	for _, tc := range []tc{
+		{
+			[]string{"foo", "bar", "bizz", "buzz"},
+			[]string{"foo", "bar", "bizz", "buzz"},
+			[]string{"bing", "bong"},
+		},
+		{
+			[]string{"one", "two", "three"},
+			[]string{"one", "two", "three"},
+			[]string{"a", "b", "c"},
+		},
+		{
+			nil,
+			nil,
+			[]string{"any", "value"},
+		},
+	} {
+		a := ForbiddenListSpec{
+			Exact: tc.In,
+		}
+		for _, ok := range tc.True {
+			assert.True(t, a.ExactMatch(ok))
+		}
+		for _, ko := range tc.False {
+			assert.False(t, a.ExactMatch(ko))
+		}
+	}
+}
+
+func TestForbiddenListSpec_RegexMatch(t *testing.T) {
+	type tc struct {
+		Regex string
+		True  []string
+		False []string
+	}
+	for _, tc := range []tc{
+		{`first-\w+-pattern`, []string{"first-date-pattern", "first-year-pattern"}, []string{"broken", "first-year", "second-date-pattern"}},
+		{``, nil, []string{"any", "value"}},
+	} {
+		a := ForbiddenListSpec{
+			Regex: tc.Regex,
+		}
+		for _, ok := range tc.True {
+			assert.True(t, a.RegexMatch(ok))
+		}
+		for _, ko := range tc.False {
+			assert.False(t, a.RegexMatch(ko))
+		}
+	}
+}
```
```diff
@@ -1,5 +1,7 @@
 package v1beta1
 
+import "strings"
+
 type NamespaceOptions struct {
 	//+kubebuilder:validation:Minimum=1
 	// Specifies the maximum number of namespaces allowed for that Tenant. Once the namespace quota assigned to the Tenant has been reached, the Tenant owner cannot create further namespaces. Optional.
@@ -7,3 +9,43 @@ type NamespaceOptions struct {
 	// Specifies additional labels and annotations the Capsule operator places on any Namespace resource in the Tenant. Optional.
 	AdditionalMetadata *AdditionalMetadataSpec `json:"additionalMetadata,omitempty"`
 }
+
+func (t *Tenant) hasForbiddenNamespaceLabelsAnnotations() bool {
+	if _, ok := t.Annotations[ForbiddenNamespaceLabelsAnnotation]; ok {
+		return true
+	}
+	if _, ok := t.Annotations[ForbiddenNamespaceLabelsRegexpAnnotation]; ok {
+		return true
+	}
+	return false
+}
+
+func (t *Tenant) hasForbiddenNamespaceAnnotationsAnnotations() bool {
+	if _, ok := t.Annotations[ForbiddenNamespaceAnnotationsAnnotation]; ok {
+		return true
+	}
+	if _, ok := t.Annotations[ForbiddenNamespaceAnnotationsRegexpAnnotation]; ok {
+		return true
+	}
+	return false
+}
+
+func (t *Tenant) ForbiddenUserNamespaceLabels() *ForbiddenListSpec {
+	if !t.hasForbiddenNamespaceLabelsAnnotations() {
+		return nil
+	}
+	return &ForbiddenListSpec{
+		Exact: strings.Split(t.Annotations[ForbiddenNamespaceLabelsAnnotation], ","),
+		Regex: t.Annotations[ForbiddenNamespaceLabelsRegexpAnnotation],
+	}
+}
+
+func (t *Tenant) ForbiddenUserNamespaceAnnotations() *ForbiddenListSpec {
+	if !t.hasForbiddenNamespaceAnnotationsAnnotations() {
+		return nil
+	}
+	return &ForbiddenListSpec{
+		Exact: strings.Split(t.Annotations[ForbiddenNamespaceAnnotationsAnnotation], ","),
+		Regex: t.Annotations[ForbiddenNamespaceAnnotationsRegexpAnnotation],
+	}
+}
```
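The accessors above turn two tenant annotations (a comma-separated exact list and a regex) into a `ForbiddenListSpec` that a validating webhook can then check namespace metadata against. A minimal standalone sketch of that consumption side follows; it is an assumption: plain structs and a hypothetical `checkNamespaceLabels` helper stand in for the real Tenant type and webhook plumbing.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// forbiddenList mirrors ForbiddenListSpec: an exact deny list plus a regex.
type forbiddenList struct {
	exact []string
	regex string
}

// matches reports whether a label key is denied, checking the exact list
// first and the regex second.
func (f forbiddenList) matches(key string) bool {
	for _, e := range f.exact {
		if e == key {
			return true
		}
	}
	return f.regex != "" && regexp.MustCompile(f.regex).MatchString(key)
}

// checkNamespaceLabels is a hypothetical validation step: it rejects a
// namespace whose labels contain any forbidden key.
func checkNamespaceLabels(labels map[string]string, f forbiddenList) error {
	for k := range labels {
		if f.matches(k) {
			return fmt.Errorf("label %q is forbidden for tenant owners", k)
		}
	}
	return nil
}

func main() {
	// Annotation values arrive comma-separated, as in the Tenant accessors.
	f := forbiddenList{
		exact: strings.Split("team,env", ","),
		regex: `^kubernetes\.io/`,
	}
	fmt.Println(checkNamespaceLabels(map[string]string{"app": "web"}, f)) // <nil>
	fmt.Println(checkNamespaceLabels(map[string]string{"env": "prod"}, f) != nil)
}
```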
```diff
@@ -10,4 +10,7 @@ type AllowedServices struct {
 	//+kubebuilder:default=true
 	// Specifies if ExternalName service type resources are allowed for the Tenant. Default is true. Optional.
 	ExternalName *bool `json:"externalName,omitempty"`
+	//+kubebuilder:default=true
+	// Specifies if LoadBalancer service type resources are allowed for the Tenant. Default is true. Optional.
+	LoadBalancer *bool `json:"loadBalancer,omitempty"`
 }
```
```diff
@@ -8,12 +8,16 @@ import (
 )
 
 const (
-	AvailableIngressClassesAnnotation       = "capsule.clastix.io/ingress-classes"
-	AvailableIngressClassesRegexpAnnotation = "capsule.clastix.io/ingress-classes-regexp"
-	AvailableStorageClassesAnnotation       = "capsule.clastix.io/storage-classes"
-	AvailableStorageClassesRegexpAnnotation = "capsule.clastix.io/storage-classes-regexp"
-	AllowedRegistriesAnnotation             = "capsule.clastix.io/allowed-registries"
-	AllowedRegistriesRegexpAnnotation       = "capsule.clastix.io/allowed-registries-regexp"
+	AvailableIngressClassesAnnotation             = "capsule.clastix.io/ingress-classes"
+	AvailableIngressClassesRegexpAnnotation       = "capsule.clastix.io/ingress-classes-regexp"
+	AvailableStorageClassesAnnotation             = "capsule.clastix.io/storage-classes"
+	AvailableStorageClassesRegexpAnnotation       = "capsule.clastix.io/storage-classes-regexp"
+	AllowedRegistriesAnnotation                   = "capsule.clastix.io/allowed-registries"
+	AllowedRegistriesRegexpAnnotation             = "capsule.clastix.io/allowed-registries-regexp"
+	ForbiddenNamespaceLabelsAnnotation            = "capsule.clastix.io/forbidden-namespace-labels"
+	ForbiddenNamespaceLabelsRegexpAnnotation      = "capsule.clastix.io/forbidden-namespace-labels-regexp"
+	ForbiddenNamespaceAnnotationsAnnotation       = "capsule.clastix.io/forbidden-namespace-annotations"
+	ForbiddenNamespaceAnnotationsRegexpAnnotation = "capsule.clastix.io/forbidden-namespace-annotations-regexp"
 )
 
 func UsedQuotaFor(resource fmt.Stringer) string {
```
```diff
@@ -33,7 +33,7 @@ type TenantSpec struct {
 	AdditionalRoleBindings []AdditionalRoleBindingsSpec `json:"additionalRoleBindings,omitempty"`
 	// Specify the allowed values for the imagePullPolicies option in Pod resources. Capsule assures that all Pod resources created in the Tenant can use only one of the allowed policy. Optional.
 	ImagePullPolicies []ImagePullPolicySpec `json:"imagePullPolicies,omitempty"`
-	// Specifies the allowed IngressClasses assigned to the Tenant. Capsule assures that all Ingress resources created in the Tenant can use only one of the allowed IngressClasses. Optional.
+	// Specifies the allowed priorityClasses assigned to the Tenant. Capsule assures that all Pods resources created in the Tenant can use only one of the allowed PriorityClasses. Optional.
 	PriorityClasses *AllowedListSpec `json:"priorityClasses,omitempty"`
 }
```
```diff
@@ -96,6 +96,11 @@ func (in *AllowedServices) DeepCopyInto(out *AllowedServices) {
 		*out = new(bool)
 		**out = **in
 	}
+	if in.LoadBalancer != nil {
+		in, out := &in.LoadBalancer, &out.LoadBalancer
+		*out = new(bool)
+		**out = **in
+	}
 }
 
 // DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AllowedServices.
@@ -149,6 +154,26 @@ func (in *ExternalServiceIPsSpec) DeepCopy() *ExternalServiceIPsSpec {
 	return out
 }
 
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *ForbiddenListSpec) DeepCopyInto(out *ForbiddenListSpec) {
+	*out = *in
+	if in.Exact != nil {
+		in, out := &in.Exact, &out.Exact
+		*out = make([]string, len(*in))
+		copy(*out, *in)
+	}
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ForbiddenListSpec.
+func (in *ForbiddenListSpec) DeepCopy() *ForbiddenListSpec {
+	if in == nil {
+		return nil
+	}
+	out := new(ForbiddenListSpec)
+	in.DeepCopyInto(out)
+	return out
+}
+
 // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
 func (in *IngressOptions) DeepCopyInto(out *IngressOptions) {
 	*out = *in
```
```diff
@@ -21,8 +21,8 @@ sources:
 
 # This is the chart version. This version number should be incremented each time you make changes
 # to the chart and its templates, including the app version.
-version: 0.0.19
+version: 0.1.1
 
 # This is the version number of the application being deployed.
 # This version number should be incremented each time you make changes to the application.
-appVersion: 0.0.5
+appVersion: 0.1.0
```
```diff
@@ -1,6 +1,6 @@
 # Deploying the Capsule Operator
 
-Use the Capsule Operator for easily implementing, managing, and maintaining mutitenancy and access control in Kubernetes.
+Use the Capsule Operator for easily implementing, managing, and maintaining multitenancy and access control in Kubernetes.
 
 ## Requirements
 
@@ -67,7 +67,7 @@ Parameter | Description | Default
 `manager.hostNetwork` | Specifies if the container should be started in `hostNetwork` mode. | `false`
 `manager.options.logLevel` | Set the log verbosity of the controller with a value from 1 to 10.| `4`
 `manager.options.forceTenantPrefix` | Boolean, enforces the Tenant owner, during Namespace creation, to name it using the selected Tenant name as prefix, separated by a dash | `false`
-`manager.options.capsuleUserGroup` | Override the Capsule user group | `capsule.clastix.io`
+`manager.options.capsuleUserGroups` | Override the Capsule user groups | `[capsule.clastix.io]`
 `manager.options.protectedNamespaceRegex` | If specified, disallows creation of namespaces matching the passed regexp | `null`
 `manager.image.repository` | Set the image repository of the controller. | `quay.io/clastix/capsule`
 `manager.image.tag` | Overrides the image tag whose default is the chart. `appVersion` | `null`
@@ -91,19 +91,24 @@ Parameter | Description | Default
 `replicaCount` | Set the replica count for Capsule pod. | `1`
 `affinity` | Set affinity rules for the Capsule pod. | `{}`
 `podSecurityPolicy.enabled` | Specify if a Pod Security Policy must be created. | `false`
-`serviceMonitor.enabled` | Specify if a Service Monitor must be created. | `false`
-`serviceMonitor.serviceAccount.name` | Specify Service Account name for metrics scrape. | `capsule`
-`serviceMonitor.serviceAccount.namespace` | Specify Service Account namespace for metrics scrape. | `capsule-system`
+`serviceMonitor.enabled` | Specifies if a service monitor must be created. | `false`
+`serviceMonitor.labels` | Additional labels which will be added to service monitor. | `{}`
+`serviceMonitor.annotations` | Additional annotations which will be added to service monitor. | `{}`
+`serviceMonitor.matchLabels` | Additional matchLabels which will be added to service monitor. | `{}`
+`serviceMonitor.serviceAccount.name` | Specifies service account name for metrics scrape. | `capsule`
+`serviceMonitor.serviceAccount.namespace` | Specifies service account namespace for metrics scrape. | `capsule-system`
+`customLabels` | Additional labels which will be added to all resources created by Capsule helm chart. | `{}`
+`customAnnotations` | Additional annotations which will be added to all resources created by Capsule helm chart. | `{}`
 
 ## Created resources
 
-This Helm Chart cretes the following Kubernetes resources in the release namespace:
+This Helm Chart creates the following Kubernetes resources in the release namespace:
 
 * Capsule Namespace
 * Capsule Operator Deployment
 * Capsule Service
 * CA Secret
-* Certfificate Secret
+* Certificate Secret
 * Tenant Custom Resource Definition
 * MutatingWebHookConfiguration
 * ValidatingWebHookConfiguration
```
```diff
@@ -1101,7 +1101,7 @@ spec:
               type: object
             type: array
           priorityClasses:
-            description: Specifies the allowed IngressClasses assigned to the Tenant. Capsule assures that all Ingress resources created in the Tenant can use only one of the allowed IngressClasses. Optional.
+            description: Specifies the allowed priorityClasses assigned to the Tenant. Capsule assures that all Pods resources created in the Tenant can use only one of the allowed PriorityClasses. Optional.
             properties:
               allowed:
                 items:
@@ -1189,6 +1189,10 @@ spec:
                 default: true
                 description: Specifies if ExternalName service type resources are allowed for the Tenant. Default is true. Optional.
                 type: boolean
+              loadBalancer:
+                default: true
+                description: Specifies if LoadBalancer service type resources are allowed for the Tenant. Default is true. Optional.
+                type: boolean
               nodePort:
                 default: true
                 description: Specifies if NodePort service type resources are allowed for the Tenant. Default is true. Optional.
```
```diff
@@ -40,6 +40,9 @@ helm.sh/chart: {{ include "capsule.chart" . }}
 app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
 {{- end }}
 app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{- if .Values.customLabels }}
+{{ toYaml .Values.customLabels }}
+{{- end }}
 {{- end }}
 
 {{/*
@@ -50,6 +53,19 @@ app.kubernetes.io/name: {{ include "capsule.name" . }}
 app.kubernetes.io/instance: {{ .Release.Name }}
 {{- end }}
 
+{{/*
+ServiceAccount annotations
+*/}}
+{{- define "capsule.serviceAccountAnnotations" -}}
+{{- if .Values.serviceAccount.annotations }}
+{{- toYaml .Values.serviceAccount.annotations }}
+{{- end }}
+{{- if .Values.customAnnotations }}
+{{ toYaml .Values.customAnnotations }}
+{{- end }}
+{{- end }}
+
 
 {{/*
 Create the name of the service account to use
 */}}
```
```diff
@@ -3,5 +3,9 @@ kind: Secret
 metadata:
   labels:
     {{- include "capsule.labels" . | nindent 4 }}
+  {{- with .Values.customAnnotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
   name: {{ include "capsule.secretCaName" . }}
 data:
```
```diff
@@ -3,5 +3,9 @@ kind: Secret
 metadata:
   labels:
     {{- include "capsule.labels" . | nindent 4 }}
+  {{- with .Values.customAnnotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
   name: {{ include "capsule.secretTlsName" . }}
 data:
```
```diff
@@ -2,6 +2,12 @@ apiVersion: capsule.clastix.io/v1alpha1
 kind: CapsuleConfiguration
 metadata:
   name: default
+  labels:
+    {{- include "capsule.labels" . | nindent 4 }}
+  {{- with .Values.customAnnotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 spec:
   forceTenantPrefix: {{ .Values.manager.options.forceTenantPrefix }}
   userGroups:
```
```diff
@@ -4,6 +4,10 @@ metadata:
   name: {{ include "capsule.deploymentName" . }}
   labels:
     {{- include "capsule.labels" . | nindent 4 }}
+  {{- with .Values.customAnnotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 spec:
   replicas: {{ .Values.replicaCount }}
   selector:
@@ -11,12 +15,12 @@ spec:
     {{- include "capsule.selectorLabels" . | nindent 6 }}
   template:
     metadata:
-    {{- with .Values.podAnnotations }}
+      {{- with .Values.podAnnotations }}
       annotations:
         {{- toYaml . | nindent 8 }}
-    {{- end }}
+      {{- end }}
       labels:
         {{- include "capsule.selectorLabels" . | nindent 8 }}
+        {{- include "capsule.labels" . | nindent 8 }}
     spec:
       {{- with .Values.imagePullSecrets }}
       imagePullSecrets:
```
```diff
@@ -4,9 +4,13 @@ kind: Role
 metadata:
   labels:
     {{- include "capsule.labels" . | nindent 4 }}
-  {{- if .Values.serviceMonitor.labels }}
+    {{- if .Values.serviceMonitor.labels }}
     {{- toYaml .Values.serviceMonitor.labels | nindent 4 }}
-  {{- end }}
+    {{- end }}
+  {{- with .Values.customAnnotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
   name: {{ include "capsule.fullname" . }}-metrics-role
   namespace: {{ .Values.serviceMonitor.namespace | default .Release.Namespace }}
 rules:
```
```diff
@@ -4,6 +4,10 @@ metadata:
   name: {{ include "capsule.fullname" . }}-controller-manager-metrics-service
   labels:
     {{- include "capsule.labels" . | nindent 4 }}
+  {{- with .Values.customAnnotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 spec:
   ports:
     - port: 8080
```
```diff
@@ -4,6 +4,10 @@ metadata:
   name: {{ include "capsule.fullname" . }}-mutating-webhook-configuration
   labels:
     {{- include "capsule.labels" . | nindent 4 }}
+  {{- with .Values.customAnnotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 webhooks:
   - admissionReviewVersions:
       - v1
@@ -15,7 +19,7 @@ webhooks:
         namespace: {{ .Release.Namespace }}
         path: /namespace-owner-reference
         port: 443
-    failurePolicy: Fail
+    failurePolicy: {{ .Values.webhooks.namespaceOwnerReference.failurePolicy }}
     matchPolicy: Equivalent
     name: owner.namespace.capsule.clastix.io
     namespaceSelector: {}
@@ -28,6 +32,7 @@ webhooks:
           - v1
         operations:
           - CREATE
+          - UPDATE
         resources:
           - namespaces
         scope: '*'
```
```diff
@@ -5,6 +5,10 @@ metadata:
   name: {{ include "capsule.fullname" . }}
   labels:
     {{- include "capsule.labels" . | nindent 4 }}
+  {{- with .Values.customAnnotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
 spec:
   fsGroup:
     rule: RunAsAny
```
@@ -6,16 +6,16 @@ kind: Job
metadata:
  name: "{{ .Release.Name }}-waiting-certs"
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    {{- include "capsule.labels" . | nindent 4 }}
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
    {{- with .Values.customAnnotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  template:
    metadata:

@@ -7,16 +7,16 @@ kind: Job
metadata:
  name: "{{ .Release.Name }}-rbac-cleaner"
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    {{- include "capsule.labels" . | nindent 4 }}
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
    {{- with .Values.customAnnotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  template:
    metadata:

@@ -4,6 +4,10 @@ metadata:
  name: {{ include "capsule.fullname" . }}-proxy-role
  labels:
    {{- include "capsule.labels" . | nindent 4 }}
  {{- with .Values.customAnnotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
rules:
- apiGroups:
  - authentication.k8s.io
@@ -24,6 +28,10 @@ metadata:
  name: {{ include "capsule.fullname" . }}-metrics-reader
  labels:
    {{- include "capsule.labels" . | nindent 4 }}
  {{- with .Values.customAnnotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
rules:
- nonResourceURLs:
  - /metrics
@@ -36,6 +44,10 @@ metadata:
  name: {{ include "capsule.fullname" . }}-proxy-rolebinding
  labels:
    {{- include "capsule.labels" . | nindent 4 }}
  {{- with .Values.customAnnotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
@@ -51,6 +63,10 @@ metadata:
  name: {{ include "capsule.fullname" . }}-manager-rolebinding
  labels:
    {{- include "capsule.labels" . | nindent 4 }}
  {{- with .Values.customAnnotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole

@@ -5,8 +5,8 @@ metadata:
  name: {{ include "capsule.serviceAccountName" . }}
  labels:
    {{- include "capsule.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  {{- if or (.Values.serviceAccount.annotations) (.Values.customAnnotations) }}
  annotations:
    {{- toYaml . | nindent 4 }}
    {{- include "capsule.serviceAccountAnnotations" . | nindent 4 }}
  {{- end }}
  {{- end }}

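The `capsule.serviceAccountAnnotations` helper referenced above lives in the chart's `_helpers.tpl`. A minimal sketch of what such a helper could look like, assuming it simply concatenates the two annotation maps (this body is hypothetical, not the chart's actual definition):

```yaml
{{- define "capsule.serviceAccountAnnotations" -}}
{{- with .Values.serviceAccount.annotations }}
{{ toYaml . }}
{{- end }}
{{- with .Values.customAnnotations }}
{{ toYaml . }}
{{- end }}
{{- end }}
```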
@@ -6,9 +6,13 @@ metadata:
  namespace: {{ .Values.serviceMonitor.namespace | default .Release.Namespace }}
  labels:
    {{- include "capsule.labels" . | nindent 4 }}
    {{- if .Values.serviceMonitor.labels }}
    {{- toYaml .Values.serviceMonitor.labels | nindent 4 }}
    {{- with .Values.serviceMonitor.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
  {{- with .Values.serviceMonitor.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  endpoints:
  - interval: 15s
@@ -16,7 +20,11 @@ spec:
    path: /metrics
  jobLabel: app.kubernetes.io/name
  selector:
    matchLabels: {{ include "capsule.labels" . | nindent 6 }}
    matchLabels:
      {{- include "capsule.labels" . | nindent 6 }}
      {{- with .Values.serviceMonitor.matchLabels }}
      {{- toYaml . | nindent 6 }}
      {{- end }}
  namespaceSelector:
    matchNames:
    - {{ .Release.Namespace }}

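Putting the ServiceMonitor knobs together, a hedged example of the values these templates consume (the namespace and label values are illustrative and depend on your Prometheus installation):

```yaml
serviceMonitor:
  enabled: true
  # deploy the ServiceMonitor next to the monitoring stack instead of the release namespace
  namespace: monitoring
  # must match Prometheus' serviceMonitorSelector
  labels:
    release: kube-prometheus-stack
  annotations: {}
  # extra selector labels, merged with the capsule.labels include
  matchLabels: {}
```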
@@ -4,6 +4,10 @@ metadata:
  name: {{ include "capsule.fullname" . }}-validating-webhook-configuration
  labels:
    {{- include "capsule.labels" . | nindent 4 }}
  {{- with .Values.customAnnotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
webhooks:
- admissionReviewVersions:
  - v1
@@ -15,13 +19,11 @@ webhooks:
      namespace: {{ .Release.Namespace }}
      path: /cordoning
      port: 443
  failurePolicy: Fail
  failurePolicy: {{ .Values.webhooks.cordoning.failurePolicy }}
  matchPolicy: Equivalent
  name: cordoning.tenant.capsule.clastix.io
  namespaceSelector:
    matchExpressions:
    - key: capsule.clastix.io/tenant
      operator: Exists
    {{- toYaml .Values.webhooks.cordoning.namespaceSelector | nindent 4}}
  objectSelector: {}
  rules:
  - apiGroups:
@@ -47,10 +49,11 @@ webhooks:
      namespace: {{ .Release.Namespace }}
      path: /ingresses
      port: 443
  failurePolicy: Fail
  failurePolicy: {{ .Values.webhooks.ingresses.failurePolicy }}
  matchPolicy: Equivalent
  name: ingress.capsule.clastix.io
  namespaceSelector:
    {{- toYaml .Values.webhooks.ingresses.namespaceSelector | nindent 4}}
    matchExpressions:
    - key: capsule.clastix.io/tenant
      operator: Exists
@@ -80,7 +83,7 @@ webhooks:
      namespace: {{ .Release.Namespace }}
      path: /namespaces
      port: 443
  failurePolicy: Fail
  failurePolicy: {{ .Values.webhooks.namespaces.failurePolicy }}
  matchPolicy: Equivalent
  name: namespaces.capsule.clastix.io
  namespaceSelector: {}
@@ -109,13 +112,11 @@ webhooks:
      namespace: {{ .Release.Namespace }}
      path: /networkpolicies
      port: 443
  failurePolicy: Fail
  failurePolicy: {{ .Values.webhooks.networkpolicies.failurePolicy }}
  matchPolicy: Equivalent
  name: networkpolicies.capsule.clastix.io
  namespaceSelector:
    matchExpressions:
    - key: capsule.clastix.io/tenant
      operator: Exists
    {{- toYaml .Values.webhooks.networkpolicies.namespaceSelector | nindent 4}}
  objectSelector: {}
  rules:
  - apiGroups:
@@ -140,13 +141,11 @@ webhooks:
      namespace: {{ .Release.Namespace }}
      path: /pods
      port: 443
  failurePolicy: Fail
  failurePolicy: {{ .Values.webhooks.pods.failurePolicy }}
  matchPolicy: Exact
  name: pods.capsule.clastix.io
  namespaceSelector:
    matchExpressions:
    - key: capsule.clastix.io/tenant
      operator: Exists
    {{- toYaml .Values.webhooks.pods.namespaceSelector | nindent 4}}
  objectSelector: {}
  rules:
  - apiGroups:
@@ -169,12 +168,10 @@ webhooks:
      name: {{ include "capsule.fullname" . }}-webhook-service
      namespace: capsule-system
      path: /persistentvolumeclaims
  failurePolicy: Fail
  failurePolicy: {{ .Values.webhooks.persistentvolumeclaims.failurePolicy }}
  name: pvc.capsule.clastix.io
  namespaceSelector:
    matchExpressions:
    - key: capsule.clastix.io/tenant
      operator: Exists
    {{- toYaml .Values.webhooks.persistentvolumeclaims.namespaceSelector | nindent 4}}
  objectSelector: {}
  rules:
  - apiGroups:
@@ -198,13 +195,11 @@ webhooks:
      namespace: {{ .Release.Namespace }}
      path: /services
      port: 443
  failurePolicy: Fail
  failurePolicy: {{ .Values.webhooks.services.failurePolicy }}
  matchPolicy: Exact
  name: services.capsule.clastix.io
  namespaceSelector:
    matchExpressions:
    - key: capsule.clastix.io/tenant
      operator: Exists
    {{- toYaml .Values.webhooks.services.namespaceSelector | nindent 4}}
  objectSelector: {}
  rules:
  - apiGroups:
@@ -229,7 +224,7 @@ webhooks:
      namespace: {{ .Release.Namespace }}
      path: /tenants
      port: 443
  failurePolicy: Fail
  failurePolicy: {{ .Values.webhooks.tenants.failurePolicy }}
  matchPolicy: Exact
  name: tenants.capsule.clastix.io
  namespaceSelector: {}

@@ -4,6 +4,10 @@ metadata:
  name: {{ include "capsule.fullname" . }}-webhook-service
  labels:
    {{- include "capsule.labels" . | nindent 4 }}
  {{- with .Values.customAnnotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  ports:
  - port: 443

@@ -42,8 +42,6 @@ jobs:
    repository: quay.io/clastix/kubectl
    pullPolicy: IfNotPresent
    tag: "v1.20.7"
mutatingWebhooksTimeoutSeconds: 30
validatingWebhooksTimeoutSeconds: 30
imagePullSecrets: []
serviceAccount:
  create: true
@@ -66,7 +64,7 @@ podSecurityPolicy:
serviceMonitor:
  enabled: false
  # Install the ServiceMonitor into a different Namespace, as the monitoring stack one (default: the release one)
  namespace:
  namespace: ''
  # Assign additional labels according to Prometheus' serviceMonitorSelector matching labels
  labels: {}
  annotations: {}
@@ -74,3 +72,56 @@ serviceMonitor:
  serviceAccount:
    name: capsule
    namespace: capsule-system

# Additional labels
customLabels: {}

# Additional annotations
customAnnotations: {}

# Webhooks configurations
webhooks:
  namespaceOwnerReference:
    failurePolicy: Fail
  cordoning:
    failurePolicy: Fail
    namespaceSelector:
      matchExpressions:
      - key: capsule.clastix.io/tenant
        operator: Exists
  ingresses:
    failurePolicy: Fail
    namespaceSelector:
      matchExpressions:
      - key: capsule.clastix.io/tenant
        operator: Exists
  namespaces:
    failurePolicy: Fail
  networkpolicies:
    failurePolicy: Fail
    namespaceSelector:
      matchExpressions:
      - key: capsule.clastix.io/tenant
        operator: Exists
  pods:
    failurePolicy: Fail
    namespaceSelector:
      matchExpressions:
      - key: capsule.clastix.io/tenant
        operator: Exists
  persistentvolumeclaims:
    failurePolicy: Fail
    namespaceSelector:
      matchExpressions:
      - key: capsule.clastix.io/tenant
        operator: Exists
  tenants:
    failurePolicy: Fail
  services:
    failurePolicy: Fail
    namespaceSelector:
      matchExpressions:
      - key: capsule.clastix.io/tenant
        operator: Exists

mutatingWebhooksTimeoutSeconds: 30
validatingWebhooksTimeoutSeconds: 30

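As a usage sketch, the new values can be overridden per installation; `Ignore` lets requests through when the webhook endpoint is unreachable, while `Fail` rejects them. File name and annotation values below are illustrative:

```yaml
# my-values.yaml
customAnnotations:
  company.com/team: platform
webhooks:
  pods:
    failurePolicy: Ignore   # do not block Pod creation if Capsule is unreachable
  tenants:
    failurePolicy: Fail     # never admit an unvalidated Tenant
```

Applied with something like `helm upgrade --install capsule clastix/capsule -n capsule-system -f my-values.yaml`.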
@@ -1101,7 +1101,7 @@ spec:
                type: object
              type: array
            priorityClasses:
              description: Specifies the allowed IngressClasses assigned to the Tenant. Capsule assures that all Ingress resources created in the Tenant can use only one of the allowed IngressClasses. Optional.
              description: Specifies the allowed priorityClasses assigned to the Tenant. Capsule assures that all Pods resources created in the Tenant can use only one of the allowed PriorityClasses. Optional.
              properties:
                allowed:
                  items:
@@ -1189,6 +1189,10 @@ spec:
                default: true
                description: Specifies if ExternalName service type resources are allowed for the Tenant. Default is true. Optional.
                type: boolean
              loadBalancer:
                default: true
                description: Specifies if LoadBalancer service type resources are allowed for the Tenant. Default is true. Optional.
                type: boolean
              nodePort:
                default: true
                description: Specifies if NodePort service type resources are allowed for the Tenant. Default is true. Optional.

@@ -1173,7 +1173,7 @@ spec:
                type: object
              type: array
            priorityClasses:
              description: Specifies the allowed IngressClasses assigned to the Tenant. Capsule assures that all Ingress resources created in the Tenant can use only one of the allowed IngressClasses. Optional.
              description: Specifies the allowed priorityClasses assigned to the Tenant. Capsule assures that all Pods resources created in the Tenant can use only one of the allowed PriorityClasses. Optional.
              properties:
                allowed:
                  items:
@@ -1261,6 +1261,10 @@ spec:
                default: true
                description: Specifies if ExternalName service type resources are allowed for the Tenant. Default is true. Optional.
                type: boolean
              loadBalancer:
                default: true
                description: Specifies if LoadBalancer service type resources are allowed for the Tenant. Default is true. Optional.
                type: boolean
              nodePort:
                default: true
                description: Specifies if NodePort service type resources are allowed for the Tenant. Default is true. Optional.
@@ -1407,7 +1411,7 @@ spec:
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
        image: quay.io/clastix/capsule:v0.1.0-rc5
        image: quay.io/clastix/capsule:v0.1.1-rc0
        imagePullPolicy: IfNotPresent
        name: manager
        ports:
@@ -1468,6 +1472,7 @@ webhooks:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - namespaces
    sideEffects: None

@@ -7,4 +7,4 @@ kind: Kustomization
images:
- name: controller
  newName: quay.io/clastix/capsule
  newTag: v0.1.0-rc5
  newTag: v0.1.1-rc0

@@ -22,6 +22,7 @@ webhooks:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - namespaces
    sideEffects: None

@@ -35,7 +35,7 @@ var (
		{
			APIGroups: []string{""},
			Resources: []string{"namespaces"},
			Verbs:     []string{"delete"},
			Verbs:     []string{"delete", "patch"},
		},
	},
},

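The RBAC rule change above regenerates the manager ClusterRole; the added `patch` verb corresponds to a rule of roughly this shape in the generated manifest:

```yaml
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["delete", "patch"]
```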
@@ -9,12 +9,13 @@ import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"syscall"
	"os"
	"time"

	"github.com/go-logr/logr"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
@@ -112,8 +113,38 @@ func (r TLSReconciler) Reconcile(ctx context.Context, request ctrl.Request) (ctr
	}

	if instance.Name == tlsSecretName && res == controllerutil.OperationResultUpdated {
		r.Log.Info("Capsule TLS certificates has been updated, we need to restart the Controller")
		_ = syscall.Kill(syscall.Getpid(), syscall.SIGINT)
		r.Log.Info("Capsule TLS certificates has been updated, Controller pods must be restarted to load new certificate")

		hostname, _ := os.Hostname()
		leaderPod := &corev1.Pod{}
		if err = r.Client.Get(ctx, types.NamespacedName{Namespace: os.Getenv("NAMESPACE"), Name: hostname}, leaderPod); err != nil {
			r.Log.Error(err, "cannot retrieve the leader Pod, probably running in out of the cluster mode")

			return reconcile.Result{}, nil
		}

		podList := &corev1.PodList{}
		if err = r.Client.List(ctx, podList, client.MatchingLabels(leaderPod.ObjectMeta.Labels)); err != nil {
			r.Log.Error(err, "cannot retrieve list of Capsule pods requiring restart upon TLS update")

			return reconcile.Result{}, nil
		}

		for _, p := range podList.Items {
			nonLeaderPod := p
			// Skipping this Pod, must be deleted at the end
			if nonLeaderPod.GetName() == leaderPod.GetName() {
				continue
			}

			if err = r.Client.Delete(ctx, &nonLeaderPod); err != nil {
				r.Log.Error(err, "cannot delete the non-leader Pod due to TLS update")
			}
		}

		if err = r.Client.Delete(ctx, leaderPod); err != nil {
			r.Log.Error(err, "cannot delete the leader Pod due to TLS update")
		}
	}

	r.Log.Info("Reconciliation completed, processing back in " + rq.String())

@@ -1,3 +1,6 @@
// Copyright 2020-2021 Clastix Labs
// SPDX-License-Identifier: Apache-2.0

package tenant

import (
@@ -49,6 +52,10 @@ func (r *Manager) syncNamespaceMetadata(namespace string, tnt *capsulev1beta1.Te

	res, conflictErr = controllerutil.CreateOrUpdate(context.TODO(), r.Client, ns, func() error {
		annotations := make(map[string]string)
		labels := map[string]string{
			"name":       namespace,
			capsuleLabel: tnt.GetName(),
		}

		if tnt.Spec.NamespaceOptions != nil && tnt.Spec.NamespaceOptions.AdditionalMetadata != nil {
			for k, v := range tnt.Spec.NamespaceOptions.AdditionalMetadata.Annotations {
@@ -56,6 +63,12 @@ func (r *Manager) syncNamespaceMetadata(namespace string, tnt *capsulev1beta1.Te
			}
		}

		if tnt.Spec.NamespaceOptions != nil && tnt.Spec.NamespaceOptions.AdditionalMetadata != nil {
			for k, v := range tnt.Spec.NamespaceOptions.AdditionalMetadata.Labels {
				labels[k] = v
			}
		}

		if tnt.Spec.NodeSelector != nil {
			var selector []string
			for k, v := range tnt.Spec.NodeSelector {
@@ -91,20 +104,37 @@ func (r *Manager) syncNamespaceMetadata(namespace string, tnt *capsulev1beta1.Te
			}
		}

		ns.SetAnnotations(annotations)

		newLabels := map[string]string{
			"name":       namespace,
			capsuleLabel: tnt.GetName(),
		if value, ok := tnt.Annotations[capsulev1beta1.ForbiddenNamespaceLabelsAnnotation]; ok {
			annotations[capsulev1beta1.ForbiddenNamespaceLabelsAnnotation] = value
		}

		if tnt.Spec.NamespaceOptions != nil && tnt.Spec.NamespaceOptions.AdditionalMetadata != nil {
			for k, v := range tnt.Spec.NamespaceOptions.AdditionalMetadata.Labels {
				newLabels[k] = v
		if value, ok := tnt.Annotations[capsulev1beta1.ForbiddenNamespaceLabelsRegexpAnnotation]; ok {
			annotations[capsulev1beta1.ForbiddenNamespaceLabelsRegexpAnnotation] = value
		}

		if value, ok := tnt.Annotations[capsulev1beta1.ForbiddenNamespaceAnnotationsAnnotation]; ok {
			annotations[capsulev1beta1.ForbiddenNamespaceAnnotationsAnnotation] = value
		}

		if value, ok := tnt.Annotations[capsulev1beta1.ForbiddenNamespaceAnnotationsRegexpAnnotation]; ok {
			annotations[capsulev1beta1.ForbiddenNamespaceAnnotationsRegexpAnnotation] = value
		}

		if ns.Annotations == nil {
			ns.SetAnnotations(annotations)
		} else {
			for k, v := range annotations {
				ns.Annotations[k] = v
			}
		}

		ns.SetLabels(newLabels)
		if ns.Labels == nil {
			ns.SetLabels(labels)
		} else {
			for k, v := range labels {
				ns.Labels[k] = v
			}
		}

		return nil
	})

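The label and annotation handling above merges the computed maps into the existing Namespace metadata instead of replacing it, so keys set out-of-band survive reconciliation. A minimal, self-contained sketch of that merge semantics (function and variable names are illustrative, not the controller's API):

```go
package main

import "fmt"

// mergeInto copies every entry of src into dst, keeping any key that
// already exists in dst but is absent from src. A nil dst is replaced
// by src, mirroring the ns.Labels == nil branch in the controller.
func mergeInto(dst, src map[string]string) map[string]string {
	if dst == nil {
		return src
	}
	for k, v := range src {
		dst[k] = v
	}
	return dst
}

func main() {
	// existing metadata set out-of-band, desired metadata computed from the Tenant
	existing := map[string]string{"team": "oil", "env": "dev"}
	desired := map[string]string{"name": "oil-dev", "capsule.clastix.io/tenant": "oil"}
	merged := mergeInto(existing, desired)
	fmt.Println(len(merged), merged["env"], merged["capsule.clastix.io/tenant"])
}
```

Desired keys win on conflict, while unrelated existing keys (`team`, `env`) are preserved.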
BIN  docs/assets/datasource.png (new executable file, 4.5 KiB)
BIN  docs/assets/manager-controllers.png (new executable file, 28 KiB)
BIN  docs/assets/prometheus_targets.png (new executable file, 30 KiB)
BIN  docs/assets/rest-client-error-rate.png (new executable file, 63 KiB)
BIN  docs/assets/rest-client-latency.png (new executable file, 14 KiB)
BIN  docs/assets/saturation.png (new executable file, 79 KiB)
BIN  docs/assets/upload_json.png (new executable file, 22 KiB)
BIN  docs/assets/webhook-error-rate.png (new executable file, 131 KiB)
BIN  docs/assets/webhook-latency.png (new executable file, 55 KiB)
BIN  docs/assets/workqueue.png (new executable file, 57 KiB)
@@ -5,40 +5,4 @@ Currently, the Capsule ecosystem comprises the following:

* [Capsule Operator](./operator/overview.md)
* [Capsule Proxy](./proxy/overview.md)
* [Capsule Lens extension](lens-extension/overview.md) Coming soon!

## Documents structure
```command
docs
├── index.md
├── lens-extension
│   └── overview.md
├── proxy
│   ├── overview.md
│   ├── sidecar.md
│   └── standalone.md
└── operator
    ├── contributing.md
    ├── getting-started.md
    ├── monitoring.md
    ├── overview.md
    ├── references.md
    └── use-cases
        ├── create-namespaces.md
        ├── custom-resources.md
        ├── images-registries.md
        ├── ingress-classes.md
        ├── ingress-hostnames.md
        ├── multiple-tenants.md
        ├── network-policies.md
        ├── node-ports.md
        ├── nodes-pool.md
        ├── onboarding.md
        ├── overview.md
        ├── permissions.md
        ├── pod-priority-class.md
        ├── pod-security-policies.md
        ├── resources-quota-limits.md
        ├── storage-classes.md
        └── taint-namespaces.md
```
* [Capsule Lens extension](./lens-extension/overview.md)

@@ -1,2 +1,11 @@
# Capsule extension for Mirantis Lens
Coming soon.
# Capsule extension for Lens
With the Capsule extension for [Lens](https://github.com/lensapp/lens), a cluster administrator can easily manage, from a single pane of glass, all resources of a Kubernetes cluster, including all the Tenants created through the Capsule Operator.

## Features
Capsule extension for Lens provides these capabilities:

- List all tenants
- See tenant details and change them through the embedded Lens editor
- Check Resources Quota and Budget at both the tenant and namespace level

Please, see the [README](https://github.com/clastix/capsule-lens-extension) for details about the installation of the Capsule Lens extension.

@@ -27,7 +27,7 @@ the CRDs manifests, as well the deep copy functions, require _Operator SDK_:
the binary has to be installed into your `PATH`.

### Installing Kubebuilder
With the latest release of OperatorSDK there's a more tightly integration with
With the latest release of OperatorSDK there's a tighter integration with
Kubebuilder and its opinionated testing suite: ensure to download the latest
binaries available from the _Releases_ GitHub page and place them into the
`/usr/local/kubebuilder/bin` folder, ensuring this is also in your `PATH`.
@@ -97,7 +97,7 @@ You can check if Capsule is running tailing the logs:
```

Since Capsule is built using _OperatorSDK_, logging is handled by the zap
module: log verbosity of the Capsule controller can be increased by passing
module: log verbosity of the Capsule controller can be increased passing
the `--zap-log-level` option with a value from `1` to `10` or the
[basic keywords](https://godoc.org/go.uber.org/zap/zapcore#Level) although
it is suggested to use the `--zap-devel` flag to get also stack traces.
@@ -124,7 +124,7 @@ deployment.apps/capsule-controller-manager scaled
> This is mandatory since Capsule uses Leader Election

#### Providing TLS certificate for webhooks
Next step is to replicate the same environment Capsule is expecting in the Pod,
The next step is to replicate the same environment Capsule is expecting in the Pod,
it means creating a fake certificate to handle HTTP requests.

``` bash
@@ -133,8 +133,8 @@ kubectl -n capsule-system get secret capsule-tls -o jsonpath='{.data.tls\.crt}'
kubectl -n capsule-system get secret capsule-tls -o jsonpath='{.data.tls\.key}' | base64 -d > /tmp/k8s-webhook-server/serving-certs/tls.key
```

> We're using the certificates generate upon first installation of Capsule:
> it means the Secret will be populated at first start-up.
> We're using the certificates generated upon the first installation of Capsule:
> it means the Secret will be populated at the first start-up.
> If you plan to run it locally since the beginning, it means you will require
> to provide a self-signed certificate in the said directory.

@@ -242,8 +242,10 @@ A commit description is welcomed to explain more the changes: just ensure
to put a blank line and an arbitrary number of maximum 72 characters long
lines, at most one blank line between them.

Please, split changes into several and documented small commits: this will help
us to perform a better review.
Please, split changes into several and documented small commits: this will help us to perform a better review. Commits must follow the Conventional Commits Specification, a lightweight convention on top of commit messages. It provides an easy set of rules for creating an explicit commit history, which makes it easier to write automated tools on top of. This convention dovetails with Semantic Versioning, by describing the features, fixes, and breaking changes made in commit messages. See the [Conventional Commits Specification](https://www.conventionalcommits.org) to learn about Conventional Commits.

> In case of errors or need of changes to previous commits,
> fix them squashing to make changes atomic.

### Miscellanea
Please, add a new single line at end of any file as the current coding style.

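A hedged example of a commit message following the convention required above (type, scope, and body are illustrative):

```
feat(helm): render customAnnotations on all chart resources

Allow cluster-wide annotation conventions to be applied to every
resource generated by the chart.
```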
@@ -6,54 +6,47 @@ Make sure you have access to a Kubernetes cluster as administrator.

There are two ways to install Capsule:

* Use the Helm Chart available [here](https://github.com/clastix/capsule/tree/master/charts/capsule)
* Use [`kustomize`](https://github.com/kubernetes-sigs/kustomize)
* Use the [single YAML file installer](https://raw.githubusercontent.com/clastix/capsule/master/config/install.yaml)
* Use the [Capsule Helm Chart](https://github.com/clastix/capsule/blob/master/charts/capsule/README.md)

### Install with kustomize
Ensure you have `kubectl` and `kustomize` installed in your `PATH`.

Clone this repository and move to the repo folder:
### Install with the single YAML file installer
Ensure you have `kubectl` installed in your `PATH`. Clone this repository and move to the repo folder:

```
$ git clone https://github.com/clastix/capsule
$ cd capsule
$ make deploy
$ kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/install.yaml
```

It will install the Capsule controller in a dedicated namespace `capsule-system`.

# Create your first Tenant
In Capsule, a _Tenant_ is an abstraction to group togheter multiple namespaces in a single entity within a set of bundaries defined by the Cluster Administrator. The tenant is then assigned to a user or group of users who is called _Tenant Owner_.
### Install with Helm Chart
Please, refer to the instructions reported in the Capsule Helm Chart [README](https://github.com/clastix/capsule/blob/master/charts/capsule/README.md).

Capsule defines a Tenant as Custom Resource with cluster scope:
# Create your first Tenant
In Capsule, a _Tenant_ is an abstraction to group multiple namespaces in a single entity within a set of boundaries defined by the Cluster Administrator. The tenant is then assigned to a user or group of users who is called _Tenant Owner_.

Capsule defines a Tenant as Custom Resource with cluster scope.

Create the tenant as cluster admin:

```yaml
cat <<EOF > oil_tenant.yaml
apiVersion: capsule.clastix.io/v1alpha1
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owner:
    name: alice
  owners:
  - name: alice
    kind: User
  namespaceQuota: 3
EOF
```

Apply as cluster admin:

```
$ kubectl apply -f oil_tenant.yaml
tenant.capsule.clastix.io/oil created
```

You can check the tenant just created as cluster admin
You can check the tenant just created

```
$ kubectl get tenants
NAME   NAMESPACE QUOTA   NAMESPACE COUNT   OWNER NAME   OWNER KIND   NODE SELECTOR   AGE
oil    3                 0                 alice        User                         1m
NAME   STATE    NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR   AGE
oil    Active                     0                                 10s
```

## Tenant owners
@@ -65,27 +58,21 @@ Assignment to a group depends on the authentication strategy in your cluster.

For example, if you are using `capsule.clastix.io`, users authenticated through a _X.509_ certificate must have `capsule.clastix.io` as _Organization_: `-subj "/CN=${USER}/O=capsule.clastix.io"`

Users authenticated through an _OIDC token_ must have
Users authenticated through an _OIDC token_ must have in their token:

```json
...
"users_groups": [
"capsule.clastix.io",
"other_group"
  "capsule.clastix.io",
  "other_group"
]
```

in their token.

The [hack/create-user.sh](../../hack/create-user.sh) can help you set up a dummy `kubeconfig` for the `alice` user acting as owner of a tenant called `oil`

```bash
./hack/create-user.sh alice oil
creating certs in TMPDIR /tmp/tmp.4CLgpuime3
Generating RSA private key, 2048 bit long modulus (2 primes)
............+++++
........................+++++
e is 65537 (0x010001)
...
certificatesigningrequest.certificates.k8s.io/alice-oil created
certificatesigningrequest.certificates.k8s.io/alice-oil approved
kubeconfig file is: alice-oil.kubeconfig
@@ -112,7 +99,7 @@ $ kubectl -n oil-development run nginx --image=docker.io/nginx
$ kubectl -n oil-development get pods
```

but limited to only your own namespaces:
but limited to only your namespaces:

```
$ kubectl -n kube-system get pods
@@ -120,4 +107,4 @@ Error from server (Forbidden): pods is forbidden: User "alice" cannot list resou
```

# What’s next
The Tenant Owners have full administrative permissions limited to only the namespaces in the assigned tenant. However, their permissions can be controlled by the Cluster Admin by setting rules and policies on the assigned tenant. See the [use cases](./use-cases/overview.md) page for more cool things you can do with Capsule.
The Tenant Owners have full administrative permissions limited to only the namespaces in the assigned tenant. However, their permissions can be controlled by the Cluster Admin by setting rules and policies on the assigned tenant. See the [use cases](./use-cases/overview.md) page for more cool things you can do with Capsule.


# Capsule on AWS EKS

This is an example of how to install an AWS EKS cluster and one user managed by Capsule.

It is based on [Using IAM Groups to manage Kubernetes access](https://www.eksworkshop.com/beginner/091_iam-groups/intro/).

----

Export the "admin" kubeconfig to be able to install Capsule:

```bash
export KUBECONFIG=kubeconfig.conf
```

Use the default Tenant example:

```bash
kubectl apply -f https://raw.githubusercontent.com/clastix/capsule/master/config/samples/capsule_v1beta1_tenant.yaml
```

Based on the tenant configuration above, the user `alice` should be able

docs/operator/managed-kubernetes/overview.md

# Capsule over Managed Kubernetes

Capsule Operator can be easily installed on a Managed Kubernetes Service. Since these services do not give you access to the Kubernetes API Server, you should check with your service provider that the following prerequisites are met:

- the default `cluster-admin` ClusterRole is accessible
- the following Admission Webhooks are enabled on the API Server:
  - PodNodeSelector
  - LimitRanger
  - ResourceQuota
  - MutatingAdmissionWebhook
  - ValidatingAdmissionWebhook

* [AWS EKS](./aws-eks.md)
* CoAKS - Capsule over Azure Kubernetes Service
* Google Cloud GKE
* IBM Cloud
* OVH

# Monitoring Capsule

The Capsule dashboard allows you to track the health and performance of the Capsule manager and tenants, with particular attention to resource saturation, server responses, and latencies.

## Requirements

### Prometheus

Prometheus is an open-source monitoring system and time series database; it is based on a multi-dimensional data model and uses PromQL, a powerful query language, to leverage it.

- Minimum version: 1.0.0

### Grafana

Grafana is an open-source monitoring solution that offers a flexible way to generate visuals and configure dashboards.

- Minimum version: 7.5.5

To quickly deploy this monitoring stack, consider installing the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator).

---

## Quick Start

The Capsule Helm [charts](https://github.com/clastix/capsule/tree/master/charts/capsule) allow you to automatically create the minimum Kubernetes resources needed for the proper functioning of the dashboard:

* ServiceMonitor
* Role
* RoleBinding

N.B.: we assume that a ServiceAccount resource has already been created so it can easily interact with the Prometheus API.
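
For reference, the ServiceMonitor created by the chart might look like the following. This is only an illustrative sketch: the label selector and port name are assumptions and may differ from what the chart actually renders, so check the generated manifests.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: capsule-controller-manager   # assumed name
  namespace: capsule-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: capsule   # assumed label, check the chart output
  endpoints:
    - port: metrics                     # assumed name of the port exposing /metrics
      interval: 30s
```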

### Helm install

During Capsule installation, set the `serviceMonitor` fields as follows:

```yaml
serviceMonitor:
  enabled: true
  [...]
  serviceAccount:
    name: <prometheus-sa>
    namespace: <prometheus-sa-namespace>
```

Take a look at the Helm charts [README.md](https://github.com/clastix/capsule/blob/master/charts/capsule/README.md#customize-the-installation) file for further customization.

### Check Service Monitor

Verify that the service monitor is working correctly through the Prometheus "targets" page:



### Deploy dashboard

Simply upload the [dashboard.json](https://github.com/clastix/capsule/blob/master/config/grafana/dashboard.json) file to Grafana through _Create_ -> _Import_, making sure to select the correct Prometheus data source:



## In-depth view

### Features

* [Manager controllers](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#manager-controllers)
* [Webhook error rate](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#webhook-error-rate)
* [Webhook latency](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#webhook-latency)
* [REST client latency](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#rest-client-latency)
* [REST client error rate](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#rest-client-error-rate)
* [Saturation](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#saturation)
* [Workqueue](https://github.com/clastix/capsule/blob/master/docs/operator/monitoring.md#workqueue)

---
#### Manager controllers



##### Description

This section provides information about the average time delay between manager client input, side effects, and new state determination (reconciliation).

##### Dependent variables and available values

* Controller name
  - capsuleconfiguration
  - clusterrole
  - clusterrolebinding
  - endpoints
  - endpointslice
  - secret
  - service
  - tenant

#### Webhook error rate



##### Description

This section provides information about webhook request responses, mainly focusing on server-side errors.

##### Dependent variables and available values

* Webhook
  - cordoning
  - ingresses
  - namespace-owner-reference
  - namespaces
  - networkpolicies
  - persistentvolumeclaims
  - pods
  - services
  - tenants

#### Webhook latency



##### Description

This section provides information about the average time delay between a webhook trigger, its side effects, and the data written on etcd.

##### Dependent variables and available values

* Webhook
  - cordoning
  - ingresses
  - namespace-owner-reference
  - namespaces
  - networkpolicies
  - persistentvolumeclaims
  - pods
  - services
  - tenants

#### REST client latency



##### Description

This section provides information about the average time delay between all the calls made by the controller and the API server. Data display may depend on the REST client verb considered and on the available REST client URLs, so your mileage may vary.

##### Dependent variables and available values

* REST client URL
* REST client verb
  - GET
  - PUT
  - POST
  - PATCH
  - DELETE

#### REST client error rate



##### Description

This section provides information about the total number of REST client request responses per unit of time, grouped by returned status code.

#### Saturation



##### Description

This section provides information about resources, giving a detailed picture of the system’s state and the amount of requested work per active controller.

#### Workqueue



##### Description

This section provides information about "actions" in the queue, particularly:

- Workqueue latency: time to complete a series of actions in the queue;
- Workqueue rate: number of actions per unit of time;
- Workqueue depth: number of pending actions waiting in the queue.

**Category:** Self-Service Operations

**Description:** Tenants should be able to perform self-service operations by creating their own network policies in their namespaces.

**Rationale:** Enables self-service management of network policies.
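
For instance, a tenant owner could create an additional NetworkPolicy like the following in one of their namespaces. This is an illustrative sketch only: the policy name, namespace, and pod labels are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend        # hypothetical policy name
  namespace: oil-production   # hypothetical tenant namespace
spec:
  podSelector:
    matchLabels:
      app: backend            # hypothetical pod label
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only allow traffic from frontend pods
```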

```
NAME            POD-SELECTOR   AGE
capsule-oil-0   <none>         7m5s
```

As tenant owner, check for permissions to manage NetworkPolicies for each verb:

```bash
kubectl --kubeconfig alice auth can-i get networkpolicies
```

**Category:** Self-Service Operations

**Description:** Tenants should be able to perform self-service operations by creating their own rolebindings in their namespaces.

**Rationale:** Enables self-service management of roles.

**Category:** Self-Service Operations

**Description:** Tenants should be able to perform self-service operations by creating their own roles in their namespaces.

**Rationale:** Enables self-service management of roles.
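
As an illustration, a tenant owner might create a Role such as the following in one of their namespaces. This is only a sketch; the role name and namespace are hypothetical.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # hypothetical role name
  namespace: oil-production   # hypothetical tenant namespace
rules:
  - apiGroups: [""]           # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```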

```bash
kubectl --kubeconfig alice create ns oil-production
kubectl --kubeconfig alice config set-context --current --namespace oil-production
```

As tenant owner, check for permissions to manage roles for each verb:

```bash
kubectl --kubeconfig alice auth can-i get roles
```

**Category:** Control Plane Isolation

**Description:** Tenants should not be able to view, edit, create, or delete cluster (non-namespaced) resources such as Node, ClusterRole, ClusterRoleBinding, etc.

**Rationale:** Access controls should be configured for tenants so that a tenant cannot list, create, modify, or delete cluster resources.

```bash
kubectl --kubeconfig alice auth can-i create namespaces
yes
```

Any Kubernetes user can create `SelfSubjectAccessReview` and `SelfSubjectRulesReview` resources to check whether they can perform an action, so the first two exceptions are not an issue:

```bash
kubectl --anyuser auth can-i --list
...
selfsubjectrulesreviews.authorization.k8s.io   []           []
                                               [/version]   []   [get]
```

To enable namespace self-service provisioning, Capsule intentionally gives permission to create namespaces to all users belonging to the Capsule group:

```bash
kubectl describe clusterrolebindings capsule-namespace-provisioner
```

**Category:** Tenant Isolation

**Description:** Each tenant namespace may contain resources set up by the cluster administrator for multi-tenancy, such as role bindings and network policies. Tenants should not be allowed to modify the namespaced resources created by the cluster administrator for multi-tenancy. However, for some resources, such as network policies, tenants can configure additional instances of the resource for their workloads.

**Rationale:** Tenants can escalate privileges and impact other tenants if they can delete or modify required multi-tenancy resources, such as namespace resource quotas or the default network policy.

**Audit:**

However, due to the additive nature of NetworkPolicies, the `DENY ALL` policy set by the cluster admin prevents the hijacking.

As tenant owner, list the RBAC permissions set by Capsule:

```
namespace-deleter   ClusterRole/capsule-namespace-deleter   11h
namespace:admin     ClusterRole/admin                       11h
```

As tenant owner, try to change/delete the rolebinding to escalate permissions:

```bash
kubectl --kubeconfig alice edit/delete rolebinding namespace:admin
```

The rolebinding is immediately recreated by Capsule:

```
kubectl --kubeconfig alice get rolebindings
namespace-deleter   ClusterRole/capsule-namespace-deleter   11h
namespace:admin     ClusterRole/admin                       2s
```

However, the tenant owner can create and assign permissions inside the namespace she owns:

```bash
kubectl create -f - << EOF
```

**Category:** Tenant Isolation

**Description:** Each tenant has its own set of resources, such as namespaces, service accounts, secrets, pods, services, etc. Tenants should not be allowed to access each other's resources.

**Rationale:** A tenant's resources must not be accessible by other tenants.

As the `oil` tenant owner, try to retrieve the resources in the `gas` tenant namespaces:

```bash
kubectl --kubeconfig alice get serviceaccounts --namespace gas-production
```

You must receive an error message:

```
Error from server (Forbidden): serviceaccounts is forbidden:
User "oil" cannot list resource "serviceaccounts" in API group "" in the namespace "gas-production"
```

As the `gas` tenant owner, try to retrieve the resources in the `oil` tenant namespaces:

```bash
kubectl --kubeconfig joe get serviceaccounts --namespace oil-production
```

You must receive an error message:

```
Error from server (Forbidden): serviceaccounts is forbidden:
User "joe" cannot list resource "serviceaccounts" in API group "" in the namespace "oil-production"
```

**Description:** Control container permissions.

**Rationale:** The security `allowPrivilegeEscalation` setting allows a process to gain more privileges from its parent process. Processes in tenant containers should not be allowed to gain additional privileges.

**Audit:**

**Description:** Prevent a tenant from mounting existing volumes.

**Rationale:** Tenants have to be assured that their Persistent Volumes cannot be reclaimed by other tenants.

**Audit:**

**Description:** Tenants should not be able to mount host volumes and directories.

**Rationale:** Host volumes and directories can be used to access shared data or escalate privileges, and they also create a tight coupling between a tenant workload and a host.

**Audit:**

**Description:** Force a tenant to use a Storage Class with `reclaimPolicy=Delete`.

**Rationale:** Tenants have to be assured that their Persistent Volumes cannot be reclaimed by other tenants.

**Audit:**

**Description:** Control container permissions.

**Rationale:** Processes in containers run as the root user (uid 0) by default. To prevent potential compromise of container hosts, specify a least-privileged user ID when building the container image and require that application containers run as non-root users.

**Audit:**

# Meet the Multi-Tenancy Benchmark (MTB)

Actually, there is not yet a real standard for the multi-tenancy model in Kubernetes, although the [SIG multi-tenancy group](https://github.com/kubernetes-sigs/multi-tenancy) is working on that. SIG multi-tenancy drafted a generic validation schema applicable to generic multi-tenancy projects. The Multi-Tenancy Benchmarks ([MTB](https://github.com/kubernetes-sigs/multi-tenancy/tree/master/benchmarks)) are guidelines for the multi-tenant configuration of Kubernetes clusters. Capsule is an open source multi-tenancy operator and we decided to meet the requirements of MTB.

> N.B. At the time of writing, the MTB is in development and not ready for usage. Strictly speaking, we do not claim official conformance to MTB, but rather adherence to the multi-tenancy requirements and best practices promoted by MTB.

|MTB Benchmark |MTB Profile|Capsule Version|Conformance|Notes |
|--------------|-----------|---------------|-----------|-------|

# Kubernetes Operator

**Capsule** helps to implement a multi-tenancy and policy-based environment in your Kubernetes cluster. Have fun with Capsule:

* [Getting Started](./getting-started.md)
* [Use Cases](./use-cases/overview.md)
* [SIG Multi-tenancy benchmark](./mtb/sig-multitenancy-bench.md)
* [Run on Managed Kubernetes Services](./managed-kubernetes/overview.md)
* [Monitoring Capsule](./monitoring.md)
* [Contributing](./contributing.md)
* [References](./references.md)

# Reference

* [Custom Resource Definition](#custom-resource-definition)
  * [Metadata](#metadata)
    * [name](#name)
  * [Spec](#spec)
    * [owner](#owner)
    * [nodeSelector](#nodeselector)
    * [namespaceQuota](#namespacequota)
    * [namespacesMetadata](#namespacesmetadata)
    * [servicesMetadata](#servicesmetadata)
    * [ingressClasses](#ingressclasses)
    * [ingressHostnames](#ingresshostnames)
    * [storageClasses](#storageclasses)
    * [containerRegistries](#containerregistries)
    * [additionalRoleBindings](#additionalrolebindings)
    * [resourceQuotas](#resourcequotas)
    * [limitRanges](#limitranges)
    * [networkPolicies](#networkpolicies)
    * [externalServiceIPs](#externalserviceips)
  * [Status](#status)
    * [size](#size)
    * [namespaces](#namespaces)
* [Role Based Access Control](#role-based-access-control)
* [Capsule Configuration](#capsule-configuration)
* [Capsule Permissions](#capsule-permissions)
* [Admission Controllers](#admission-controllers)
* [Command Options](#command-options)
* [Created Resources](#created-resources)

## Custom Resource Definition

Capsule operator uses a Custom Resource Definition (CRD) for _Tenants_. Please see the [Tenant Custom Resource Definition](https://github.com/clastix/capsule/blob/master/config/crd/bases/capsule.clastix.io_tenants.yaml). In Capsule, Tenants are cluster-wide resources, so you need cluster-level permissions to work with them.

### Metadata

#### name

The metadata `name` can contain any valid symbol matching the regex `[a-z0-9]([-a-z0-9]*[a-z0-9])?`.

### Spec

#### owner

The field `owner` is the only mandatory spec in a _Tenant_ manifest. It specifies the ownership of the tenant.
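
A minimal Tenant manifest might look like the following. This is only a sketch based on the `v1beta1` schema described by `kubectl explain`; the tenant and user names are hypothetical.

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil              # hypothetical tenant name
spec:
  owners:
    - name: alice        # hypothetical owner identity
      kind: User         # or Group / ServiceAccount
```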

You can learn about the tenant CRD with the `kubectl explain` command:

```command
kubectl explain tenant

KIND:     Tenant
VERSION:  capsule.clastix.io/v1beta1

DESCRIPTION:
     Tenant is the Schema for the tenants API

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object>
     TenantSpec defines the desired state of Tenant

   status       <Object>
     Returns the observed state of the Tenant
```
For the Tenant spec:

```command
kubectl explain tenant.spec

KIND:     Tenant
VERSION:  capsule.clastix.io/v1beta1

RESOURCE: spec <Object>

DESCRIPTION:
     TenantSpec defines the desired state of Tenant

FIELDS:
   additionalRoleBindings       <[]Object>
     Specifies additional RoleBindings assigned to the Tenant. Capsule will
     ensure that all namespaces in the Tenant always contain the RoleBinding
     for the given ClusterRole. Optional.

   containerRegistries  <Object>
     Specifies the trusted Image Registries assigned to the Tenant. Capsule
     assures that all Pods resources created in the Tenant can use only one
     of the allowed trusted registries. Optional.

   imagePullPolicies    <[]string>
     Specifies the allowed values for the imagePullPolicies option in Pod
     resources. Capsule assures that all Pod resources created in the Tenant
     can use only one of the allowed policies. Optional.

   ingressOptions       <Object>
     Specifies options for the Ingress resources, such as allowed hostnames
     and IngressClass. Optional.

   limitRanges  <Object>
     Specifies the LimitRanges assigned to the Tenant. The assigned
     LimitRange resources are inherited by any namespace created in the
     Tenant. Optional.

   namespaceOptions     <Object>
     Specifies options for the Namespaces, such as additional metadata or
     maximum number of namespaces allowed for that Tenant. Once the namespace
     quota assigned to the Tenant has been reached, the Tenant owner cannot
     create further namespaces. Optional.

   networkPolicies      <Object>
     Specifies the NetworkPolicies assigned to the Tenant. The assigned
     NetworkPolicies are inherited by any namespace created in the Tenant.
     Optional.

   nodeSelector <map[string]string>
     Specifies the label to control the placement of pods on a given pool of
     worker nodes. All namespaces created within the Tenant will have the
     node selector annotation. This annotation tells the Kubernetes scheduler
     to place pods on the nodes having the selector label. Optional.

   owners       <[]Object> -required-
     Specifies the owners of the Tenant. Mandatory.

   priorityClasses      <Object>
     Specifies the allowed priorityClasses assigned to the Tenant. Capsule
     assures that all pods created in the Tenant can use only one of the
     allowed priorityClasses. Optional.

   resourceQuotas       <Object>
     Specifies a list of ResourceQuota resources assigned to the Tenant. The
     assigned values are inherited by any namespace created in the Tenant.
     The Capsule operator aggregates ResourceQuota at Tenant level, so that
     the hard quota is never crossed for the given Tenant. This permits the
     Tenant owner to consume resources in the Tenant regardless of the
     namespace. Optional.

   serviceOptions       <Object>
     Specifies options for the Service, such as additional metadata or
     blocking of certain types of Services. Optional.

   storageClasses       <Object>
     Specifies the allowed StorageClasses assigned to the Tenant. Capsule
     assures that all PersistentVolumeClaim resources created in the Tenant
     can use only one of the allowed StorageClasses. Optional.
```
and the Tenant status:

```command
kubectl explain tenant.status

KIND:     Tenant
VERSION:  capsule.clastix.io/v1beta1

RESOURCE: status <Object>

DESCRIPTION:
     Returns the observed state of the Tenant

FIELDS:
   namespaces   <[]string>
     List of namespaces assigned to the Tenant.

   size <integer> -required-
     How many namespaces are assigned to the Tenant.

   state        <string> -required-
     The operational state of the Tenant. Possible values are "Active",
     "Cordoned".
```
## Capsule Configuration

The Capsule configuration can be piloted by a Custom Resource named `CapsuleConfiguration`:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  userGroups: ["capsule.clastix.io"]
  forceTenantPrefix: false
  protectedNamespaceRegex: ""
```

Option | Description | Default
--- | --- | ---
`.spec.forceTenantPrefix` | Force the tenant name as prefix for namespaces: `<tenant_name>-<namespace>`. | `false`
`.spec.userGroups` | Array of Capsule groups to which all tenant owners must belong. | `[capsule.clastix.io]`
`.spec.protectedNamespaceRegex` | Disallows creation of namespaces matching the passed regexp. | `null`

Upon installation using Kustomize or Helm, a `capsule-default` resource will be created. The reference to this configuration is managed by the CLI flag `--configuration-name`.

The user and group names should be valid identities. Capsule does not care about the authentication strategy used in the cluster and all the Kubernetes methods of [Authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/) are supported. The only requirement to use Capsule is to assign tenant users to the group defined by the `userGroups` option, which defaults to `capsule.clastix.io`. Assignment to a group depends on the authentication strategy in use.
For example, if you are using `capsule.clastix.io`, users authenticated through an _X.509_ certificate must have `capsule.clastix.io` as _Organization_: `-subj "/CN=${USER}/O=capsule.clastix.io"`

Users authenticated through an _OIDC token_ must have the group in their token:

```json
...
"users_groups": [
  "capsule.clastix.io",
  "other_group"
]
```

Permissions are controlled by RBAC.
#### nodeSelector

The field `nodeSelector` specifies the label to control the placement of pods on a given pool of worker nodes:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  nodeSelector:
    <key>: <value>
```

All namespaces created within the tenant will have the annotation:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: 'key=value'
```

This annotation tells the Kubernetes scheduler to place pods on the nodes having that label:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: sample
spec:
  nodeSelector:
    <key>: <value>
```

> NB:
> While Capsule just enforces the annotation `scheduler.alpha.kubernetes.io/node-selector` at namespace level,
> the `nodeSelector` field in the pod template is under the control of the default _PodNodeSelector_ admission plugin, enabled
> on the Kubernetes API server using the flag `--enable-admission-plugins=PodNodeSelector`.

Please see the [Assigning Pods to Nodes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) documentation.

The tenant owner is not allowed to change or remove the annotation above from the namespace.
#### namespaceQuota

Field `namespaceQuota` specifies the maximum number of namespaces allowed for that tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  namespaceQuota: <quota>
```

Once the namespace quota assigned to the tenant has been reached, the tenant owner cannot create further namespaces.

#### namespacesMetadata

Field `namespacesMetadata` specifies additional labels and annotations the Capsule operator places on any _Namespace_ in the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  namespacesMetadata:
    additionalAnnotations:
      <annotations>
    additionalLabels:
      <key>: <value>
```

All namespaces in the tenant will have:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    <annotations>
  labels:
    <key>: <value>
```

The tenant owner is not allowed to change or remove such labels and annotations from the namespace.

#### servicesMetadata

Field `servicesMetadata` specifies additional labels and annotations the Capsule operator places on any _Service_ in the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  servicesMetadata:
    additionalAnnotations:
      <annotations>
    additionalLabels:
      <key>: <value>
```

All services in the tenant will have:

```yaml
kind: Service
apiVersion: v1
metadata:
  annotations:
    <annotations>
  labels:
    <key>: <value>
```

The tenant owner is not allowed to change or remove such labels and annotations from the _Service_.

#### ingressClasses

Field `ingressClasses` specifies the _IngressClasses_ assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  ingressClasses:
    allowed:
    - <class>
    allowedRegex: <regex>
```

Capsule assures that all the _Ingress_ resources created in the tenant can use only one of the allowed _IngressClasses_.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name>
  namespace:
  annotations:
    kubernetes.io/ingress.class: <class>
```

> NB: _Ingress_ resources are supported in both versions, `networking.k8s.io/v1beta1` and `networking.k8s.io/v1`.

Allowed _IngressClasses_ are reported into namespaces as annotations, so the tenant owner can check them:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    capsule.clastix.io/ingress-classes: <class>
    capsule.clastix.io/ingress-classes-regexp: <regex>
```

Any attempt by the tenant owner to use a disallowed _IngressClass_ will fail.

#### ingressHostnames

Field `ingressHostnames` specifies the allowed hostnames in _Ingresses_ for the given tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  ingressHostnames:
    allowed:
    - <hostname>
    allowedRegex: <regex>
```

Capsule assures that all _Ingress_ resources created in the tenant can use only one of the allowed hostnames.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name>
  namespace:
spec:
  rules:
  - host: <hostname>
    http: {}
```

> NB: _Ingress_ resources are supported in both versions, `networking.k8s.io/v1beta1` and `networking.k8s.io/v1`.

Any attempt by the tenant owner to use a disallowed hostname will fail.

#### storageClasses

Field `storageClasses` specifies the _StorageClasses_ assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  storageClasses:
    allowed:
    - <class>
    allowedRegex: <regex>
```

Capsule assures that all _PersistentVolumeClaim_ resources created in the tenant can use only one of the allowed _StorageClasses_.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <name>
  namespace:
spec:
  storageClassName: <class>
```

Allowed _StorageClasses_ are reported into namespaces as annotations, so the tenant owner can check them:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    capsule.clastix.io/storage-classes: <class>
    capsule.clastix.io/storage-classes-regexp: <regex>
```

Any attempt by the tenant owner to use a disallowed _StorageClass_ will fail.

#### containerRegistries

Field `containerRegistries` specifies the trusted image registries assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  containerRegistries:
    allowed:
    - <registry>
    allowedRegex: <regex>
```

Capsule assures that all _Pod_ resources created in the tenant can use only one of the allowed trusted registries.

Allowed registries are reported into namespaces as annotations, so the tenant owner can check them:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    capsule.clastix.io/allowed-registries-regexp: <regex>
    capsule.clastix.io/registries: <registry>
```

Any attempt by the tenant owner to use a disallowed registry will fail.

> NB:
> In case of naked and official images hosted on Docker Hub, Capsule is going
> to retrieve the registry even if it's not explicit: a `busybox:latest` Pod
> running in a Tenant allowing `docker.io` will not be blocked, even if the image
> field is not explicit as `docker.io/busybox:latest`.

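As an illustrative sketch (the image and names are assumptions, not from the original), a compliant Pod in a tenant allowing `docker.io` would look like:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: sample
spec:
  containers:
  - name: sample
    # the registry prefix must match one of the allowed registries of the tenant
    image: docker.io/nginx:latest
```
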
#### additionalRoleBindings

Field `additionalRoleBindings` specifies additional _RoleBindings_ assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  additionalRoleBindings:
  - clusterRoleName: <ClusterRole>
    subjects:
    - kind: <Group|User|ServiceAccount>
      apiGroup: rbac.authorization.k8s.io
      name: <name>
```

Capsule will ensure that all namespaces in the tenant always contain the _RoleBinding_ for the given _ClusterRole_.

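As a sketch (the binding name is illustrative; the other fields mirror the tenant specification above), each namespace in the tenant would then contain a _RoleBinding_ along these lines:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <name>
  namespace: <tenant_namespace>
subjects:
- kind: <Group|User|ServiceAccount>
  apiGroup: rbac.authorization.k8s.io
  name: <name>
roleRef:
  kind: ClusterRole
  name: <ClusterRole>
  apiGroup: rbac.authorization.k8s.io
```
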
#### resourceQuotas

Field `resourceQuotas` specifies a list of _ResourceQuota_ resources assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  resourceQuotas:
  - hard:
      limits.cpu: <hard_value>
      limits.memory: <hard_value>
      requests.cpu: <hard_value>
      requests.memory: <hard_value>
```

Please refer to the [ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) documentation.

The assigned quotas are inherited by any namespace created in the tenant:

```yaml
kind: ResourceQuota
apiVersion: v1
metadata:
  name: compute
  namespace:
  labels:
    capsule.clastix.io/resource-quota: "0"
    capsule.clastix.io/tenant: tenant
  annotations:
    # used resources in the tenant
    quota.capsule.clastix.io/used-limits.cpu: <tenant_used_value>
    quota.capsule.clastix.io/used-limits.memory: <tenant_used_value>
    quota.capsule.clastix.io/used-requests.cpu: <tenant_used_value>
    quota.capsule.clastix.io/used-requests.memory: <tenant_used_value>
    # hard quota for the tenant
    quota.capsule.clastix.io/hard-limits.cpu: <tenant_hard_value>
    quota.capsule.clastix.io/hard-limits.memory: <tenant_hard_value>
    quota.capsule.clastix.io/hard-requests.cpu: <tenant_hard_value>
    quota.capsule.clastix.io/hard-requests.memory: <tenant_hard_value>
spec:
  hard:
    limits.cpu: <hard_value>
    limits.memory: <hard_value>
    requests.cpu: <hard_value>
    requests.memory: <hard_value>
status:
  hard:
    limits.cpu: <namespace_hard_value>
    limits.memory: <namespace_hard_value>
    requests.cpu: <namespace_hard_value>
    requests.memory: <namespace_hard_value>
  used:
    limits.cpu: <namespace_used_value>
    limits.memory: <namespace_used_value>
    requests.cpu: <namespace_used_value>
    requests.memory: <namespace_used_value>
```

The Capsule operator aggregates _ResourceQuota_ at tenant level, so that the hard quota is never crossed for the given tenant. This permits the tenant owner to consume resources in the tenant regardless of the namespace.

The annotations

```yaml
quota.capsule.clastix.io/used-<resource>: <tenant_used_value>
quota.capsule.clastix.io/hard-<resource>: <tenant_hard_value>
```

are updated in real time by Capsule, according to the actual aggregated usage of resources in the tenant.

> NB:
> While Capsule controls quota at tenant level, at namespace level the quota enforcement
> is under the control of the default _ResourceQuota Admission Controller_ enabled on the
> Kubernetes API server using the flag `--enable-admission-plugins=ResourceQuota`.

The tenant owner is not allowed to change or remove the _ResourceQuota_ from the namespace.

#### limitRanges

Field `limitRanges` specifies the _LimitRanges_ assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  limitRanges:
  - limits:
    - type: Pod
      max:
        cpu: <value>
        memory: <value>
      min:
        cpu: <value>
        memory: <value>
    - type: Container
      default:
        cpu: <value>
        memory: <value>
      defaultRequest:
        cpu: <value>
        memory: <value>
      max:
        cpu: <value>
        memory: <value>
      min:
        cpu: <value>
        memory: <value>
    - type: PersistentVolumeClaim
      max:
        storage: <value>
      min:
        storage: <value>
```

Please refer to the [LimitRange](https://kubernetes.io/docs/concepts/policy/limit-range/) documentation.

The assigned _LimitRanges_ are inherited by any namespace created in the tenant:

```yaml
kind: LimitRange
apiVersion: v1
metadata:
  name: <name>
  namespace:
spec:
  limits:
  - type: Pod
    max:
      cpu: <value>
      memory: <value>
    min:
      cpu: <value>
      memory: <value>
  - type: Container
    default:
      cpu: <value>
      memory: <value>
    defaultRequest:
      cpu: <value>
      memory: <value>
    max:
      cpu: <value>
      memory: <value>
    min:
      cpu: <value>
      memory: <value>
  - type: PersistentVolumeClaim
    max:
      storage: <value>
    min:
      storage: <value>
```

> NB:
> Limit ranges enforcement for a single pod, container, and persistent volume
> claim is done by the default _LimitRanger Admission Controller_ enabled on
> the Kubernetes API server using the flag
> `--enable-admission-plugins=LimitRanger`.

Since limit ranges apply to individual resources, there is no aggregate to count at the tenant level.

The tenant owner is not allowed to change or remove _LimitRanges_ from the namespace.

#### networkPolicies

Field `networkPolicies` specifies the _NetworkPolicies_ assigned to the tenant.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  networkPolicies:
  - policyTypes:
    - Ingress
    - Egress
    egress:
    - to:
      - ipBlock:
          cidr: <value>
    ingress:
    - from:
      - namespaceSelector: {}
      - podSelector: {}
      - ipBlock:
          cidr: <value>
    podSelector: {}
```

Please refer to the [NetworkPolicies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) documentation.

The assigned _NetworkPolicies_ are inherited by any namespace created in the tenant.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: <name>
  namespace:
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector: {}
    - podSelector: {}
    - ipBlock:
        cidr: <value>
  egress:
  - to:
    - ipBlock:
        cidr: <value>
  policyTypes:
  - Ingress
  - Egress
```

The tenant owner can create, patch, and delete additional _NetworkPolicies_ to refine the assigned ones. However, the tenant owner cannot delete the _NetworkPolicies_ set at tenant level.

#### externalServiceIPs

Field `externalServiceIPs` specifies the external IPs that can be used in _Services_ with type `ClusterIP`.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  externalServiceIPs:
    allowed:
    - <cidr>
```

Capsule will ensure that all _Services_ in the tenant contain only allowed external IPs. This mitigates the [_CVE-2020-8554_] vulnerability, where a potential attacker able to create a _Service_ of type `ClusterIP` and set its `externalIPs` field can intercept traffic to those IPs. Only IPs within the allowed CIDRs list can be set in the `externalIPs` field of a _Service_ of type `ClusterIP`.

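For example (a hypothetical _Service_, assuming `192.168.1.0/24` is in the allowed list), only external IPs falling within the allowed CIDRs pass validation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample
spec:
  type: ClusterIP
  ports:
  - port: 80
  externalIPs:
  # must fall within one of the allowed CIDRs, otherwise the request is denied
  - 192.168.1.100
```
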
To prevent users from setting the `externalIPs` field at all, use an empty allowed list:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  externalServiceIPs:
    allowed: []
```

> NB: Without this control, the cluster is exposed to the [_CVE-2020-8554_] vulnerability.

### Status

#### size

Status field `size` reports the number of namespaces belonging to the tenant. It is reported as `NAMESPACE COUNT` in the `kubectl` output:

```
$ kubectl get tnt
NAME     NAMESPACE QUOTA   NAMESPACE COUNT   OWNER NAME   OWNER KIND   NODE SELECTOR       AGE
cap      9                 1                 joe          User         {"pool":"cmp"}      5d4h
gas      6                 2                 alice        User         {"node":"worker"}   5d4h
oil      9                 3                 alice        User         {"pool":"cmp"}      5d4h
sample   9                 0                 alice        User         {"key":"value"}     29h
```

#### namespaces

Status field `namespaces` reports the list of all namespaces belonging to the tenant.

```yaml
...
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: tenant
spec:
  ...
status:
  namespaces:
    oil-development
    oil-production
    oil-marketing
  size: 3
```

## Capsule Permissions

In the current implementation, the Capsule operator requires cluster admin permissions to fully operate. Make sure you deploy Capsule having access to the default `cluster-admin` ClusterRole.

## Admission Controllers

Capsule implements Kubernetes multi-tenancy capabilities using a minimum set of standard [Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) enabled on the Kubernetes API server.

Here is the list of required Admission Controllers you have to enable to get full support from Capsule:

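As a sketch of what enabling them looks like on the API server (the plugin set below is an assumption, inferred from the plugins referenced elsewhere in this document plus the standard webhook controllers, not the document's own list):

```
kube-apiserver ... \
  --enable-admission-plugins=PodNodeSelector,LimitRanger,ResourceQuota,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
```
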
## Command Options

The Capsule operator provides the following command options:

Option | Description | Default
--- | --- | ---
`--enable-leader-election` | Start a leader election client and gain leadership before executing the main loop. | `true`
`--zap-log-level` | The log verbosity, with a value from 1 to 10 or the basic keywords. | `4`
`--zap-devel` | The flag to get the stack traces for deep debugging. | `null`
`--configuration-name` | The Capsule Configuration CRD name; a default one is installed automatically. | `capsule-default`

## Capsule Configuration

The Capsule configuration is driven by a Custom Resource Definition named `CapsuleConfiguration`.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  userGroups: ["capsule.clastix.io"]
  forceTenantPrefix: false
  protectedNamespaceRegex: ""
```

Option | Description | Default
--- | --- | ---
`.spec.forceTenantPrefix` | Force the tenant name as prefix for namespaces: `<tenant_name>-<namespace>`. | `false`
`.spec.userGroups` | Array of Capsule groups to which all tenant owners must belong. | `[capsule.clastix.io]`
`.spec.protectedNamespaceRegex` | Disallows creation of namespaces matching the passed regexp. | `null`

Upon installation using Kustomize or Helm, a `capsule-default` resource will be created.
The reference to this configuration is managed by the CLI flag `--configuration-name`.

## Created Resources

Once installed, the Capsule operator creates the following resources in your cluster:

```
NAMESPACE        RESOURCE
                 namespace/capsule-system
                 customresourcedefinition.apiextensions.k8s.io/tenants.capsule.clastix.io
                 customresourcedefinition.apiextensions.k8s.io/capsuleconfigurations.capsule.clastix.io
                 clusterrole.rbac.authorization.k8s.io/capsule-proxy-role
                 clusterrole.rbac.authorization.k8s.io/capsule-metrics-reader
                 capsuleconfiguration.capsule.clastix.io/capsule-default
                 mutatingwebhookconfiguration.admissionregistration.k8s.io/capsule-mutating-webhook-configuration
                 validatingwebhookconfiguration.admissionregistration.k8s.io/capsule-validating-webhook-configuration
capsule-system   clusterrolebinding.rbac.authorization.k8s.io/capsule-manager-rolebinding
capsule-system   secret/capsule-tls
capsule-system   service/capsule-controller-manager-metrics-service
capsule-system   service/capsule-webhook-service
capsule-system   deployment.apps/capsule-controller-manager
```

Bill needs to cordon a Tenant and its Namespaces for several reasons:

- Avoid accidental resource modification(s), including deletion, during a Production Freeze Window
- During the Kubernetes upgrade, to prevent any workload updates
- During incidents or outages
- During planned maintenance of a dedicated nodes pool in a BYOD scenario

Once cordoned, the Tenant Owner and the related Service Accounts living in the managed Namespaces cannot perform any create, update, or delete action.

This is possible by just labeling the Tenant as follows:

```shell
kubectl label tenant oil capsule.clastix.io/cordon=enabled
tenant oil labeled
```

Any operation performed by Alice, the Tenant Owner, will be rejected.

```shell
$ kubectl --as alice --as-group capsule.clastix.io -n oil-dev create deployment nginx --image nginx
error: failed to create deployment: admission webhook "cordoning.tenant.capsule.clastix.io" denied the request: tenant oil is frozen: please, reach out to the system administrator

$ kubectl --as alice --as-group capsule.clastix.io -n oil-dev delete ingress,deployment,serviceaccount --all
error: failed to create deployment: admission webhook "cordoning.tenant.capsule.clastix.io" denied the request: tenant oil is frozen: please, reach out to the system administrator
```

Uncordoning can be done by removing the said label:

```shell
$ kubectl label tenant oil capsule.clastix.io/cordon-
$ kubectl --as alice --as-group capsule.clastix.io -n oil-dev create deployment nginx --image nginx
deployment.apps/nginx created
```

The cordoning status is also reported in the `state` of the tenant:

```shell
kubectl get tenants
NAME     STATE      NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR   AGE
bronze   Active                       2                                 3d13h
gold     Active                       2                                 3d13h
oil      Cordoned                     4                                 2d11h
silver   Active                       2                                 3d13h
```

# What’s next

See how Bill, the cluster admin, can prevent creating services with specific service types. [Disabling Service Types](./service-type.md).

# Create namespaces

Alice, once logged in with her credentials, can create a new namespace in her tenant by simply issuing:

```
kubectl create ns oil-production
```

Alice prefixed the namespace name with the name of the tenant: this is not a strict requirement, but it is highly suggested because it is likely that many different tenants would like to call their namespaces `production`, `test`, `demo`, etc.

The enforcement of this naming convention is optional and can be controlled by the cluster administrator with the `--force-tenant-prefix` option as an argument of the Capsule controller.

When Alice creates the namespace, the Capsule controller, listening for creation and deletion events, assigns to Alice the following roles:

```yaml
subjects:
- name: alice
roleRef:
  kind: ClusterRole
  name: capsule-namespace-deleter
  apiGroup: rbac.authorization.k8s.io
```

So Alice is the admin of the namespaces:

```
kubectl get rolebindings -n oil-development
NAME                ROLE                                    AGE
namespace:admin     ClusterRole/admin                       12s
namespace-deleter   ClusterRole/capsule-namespace-deleter   12s
```

The said RoleBinding resources are automatically created by the Capsule controller when the tenant owner Alice creates a namespace in the tenant.

Alice can deploy any resource in the namespace, according to the predefined
[`admin` cluster role](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles).

```
kubectl -n oil-development run nginx --image=docker.io/nginx
kubectl -n oil-development get pods
```

Bill, the cluster admin, can control how many namespaces Alice creates by setting a quota in the tenant manifest `spec.namespaceOptions.quota`:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  namespaceOptions:
    quota: 3
```

Alice can create additional namespaces according to the quota:

```
kubectl create ns oil-development
kubectl create ns oil-test
```

While Alice creates namespaces, the Capsule controller updates the status of the tenant so Bill, the cluster admin, can check the status:

```
kubectl describe tenant oil
```

```yaml
...
status:
  namespaces:
    oil-development
    oil-production
    oil-test
```

Once the namespace quota assigned to the tenant has been reached, Alice cannot create further namespaces:

```
kubectl create ns oil-training
Error from server (Cannot exceed Namespace quota: please, reach out to the system administrators): admission webhook "namespace.capsule.clastix.io" denied the request.
```

The enforcement of the maximum number of namespaces per Tenant is the responsibility of the Capsule controller via its Dynamic Admission Webhook capability.

# What’s next

See how Alice, the tenant owner, can assign different user roles in the tenant. [Assign permissions](./permissions.md).

# Create Custom Resources

Capsule grants admin permissions to the tenant owners, but only limited to their namespaces. To achieve that, it assigns the ClusterRole [admin](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) to the tenant owner. This ClusterRole does not permit the installation of custom resources in the namespaces.

To allow the tenant owner to create Custom Resources in their namespaces, the cluster admin defines a proper ClusterRole. For example:

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
...
rules:
...
  - update
  - patch
  - delete
EOF
```

Bill can assign this role to any namespace in Alice's tenant by setting it in the tenant manifest:

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  - name: joe
    kind: User
  additionalRoleBindings:
  - clusterRoleName: 'argoproj-provisioner'
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: joe
EOF
```

With the given specification, Capsule will ensure that all Alice's namespaces will contain a _RoleBinding_ for the specified _Cluster Role_. For example, in the `oil-production` namespace, Alice will see:
|
||||
@@ -88,4 +75,4 @@ With the above example, Capsule is leaving the tenant owner to create namespaced
|
||||
> Take Note: a tenant owner having the admin scope on its namespaces only, does not have the permission to create Custom Resources Definitions (CRDs) because this requires a cluster admin permission level. Only Bill, the cluster admin, can create CRDs. This is a known limitation of any multi-tenancy environment based on a single Kubernetes cluster.
|
||||
|
||||
# What’s next
|
||||
See how Bill, the cluster admin, can set taints on the Alice's namespaces. [Taint namespaces](./taint-namespaces.md).
|
||||
See how Bill, the cluster admin, can set taints on Alice's namespaces. [Taint namespaces](./taint-namespaces.md).
|
||||
|

80 docs/operator/use-cases/hostname-collision.md Normal file
@@ -0,0 +1,80 @@
# Control hostname collision in ingresses
In a multi-tenant environment, as more and more Ingresses are defined, there is a chance of hostname collisions, leading to unpredictable behavior of the Ingress Controller. Bill, the cluster admin, can enforce hostname collision detection at different scope levels:

1. Cluster
2. Tenant
3. Namespace
4. Disabled (default)

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  - name: joe
    kind: User
  ingressOptions:
    hostnameCollisionScope: Tenant
EOF
```

When a tenant owner creates an Ingress resource, Capsule checks the hostname of the new Ingress against all the hostnames already in use, depending on the defined scope.

For example, Alice, one of the tenant owners, creates an Ingress:

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: oil-production
spec:
  rules:
  - host: web.oil.acmecorp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```

Another user, Joe, creates an Ingress having the same hostname:

```yaml
kubectl -n oil-development apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: oil-development
spec:
  rules:
  - host: web.oil.acmecorp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```

When a collision is detected at the scope defined by `spec.ingressOptions.hostnameCollisionScope`, the creation of the Ingress resource is rejected by the Validation Webhook enforcing it. When `hostnameCollisionScope=Disabled`, no collision detection is made at all.
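The Tenant-scope check can be pictured as a simple membership test; a toy sketch (not Capsule's actual code; the hostnames are the ones from the examples in this page):

```shell
# Reject a hostname that any Ingress in the tenant has already claimed
claimed_hostnames="web.oil.acmecorp.com"   # from Alice's Ingress in oil-production
requested="web.oil.acmecorp.com"           # Joe's Ingress in oil-development
if printf '%s\n' $claimed_hostnames | grep -qx "$requested"; then
  echo "denied: hostname already in use within the tenant"
else
  echo "allowed"
fi
```

With `hostnameCollisionScope: Namespace`, the same bookkeeping would be kept per namespace instead of per tenant, so Joe's request would pass.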

# What’s next
See how Bill, the cluster admin, can assign a Storage Class to Alice's tenant. [Assign Storage Classes](./storage-classes.md).
@@ -4,30 +4,28 @@ Bill is a cluster admin providing a Container as a Service platform using shared

Alice, a Tenant Owner, can start containers using private images: according to the Kubernetes architecture, the `kubelet` downloads the image layers into its local cache.

Bob, an attacker, could try to schedule a Pod on the same node where Alice is running her Pods backed by private images: they could start new Pods using `ImagePullPolicy=IfNotPresent` and be able to start them, even without the required authentication, since the image is cached on the node.

To avoid this kind of attack, Bill, the cluster admin, can force Alice, the tenant owner, to start her Pods using only the allowed values for `ImagePullPolicy`, enforcing the `kubelet` to check the authorization first.

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  imagePullPolicies:
  - Always
EOF
```

Allowed values are: `Always`, `IfNotPresent`, `Never`.
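More than one policy can be listed at once; a sketch of the relevant fragment (a hypothetical combination, mirroring the allowed values above):

```yaml
spec:
  imagePullPolicies:
  - Always
  - IfNotPresent
```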

Any attempt by Alice to use a disallowed `imagePullPolicies` value is denied by the Validation Webhook enforcing it.

# What’s next
@@ -1,62 +1,36 @@
# Assign Trusted Images Registries
Bill, the cluster admin, can set a strict policy on the applications running in Alice's tenant: he'd like to allow running only images hosted on a list of specific container registries.

The `containerRegistries` spec addresses this task, providing hard enforcement through a combination of a list of allowed values and a regular expression.

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  containerRegistries:
    allowed:
    - docker.io
    - quay.io
    allowedRegex: 'internal.registry.\\w.tld'
EOF
```

> In case of `non-FQDI` (non fully qualified Docker image) and official images hosted on Docker Hub,
> Capsule is going to retrieve the registry even if it's not explicit: a `busybox:latest` Pod
> running on a Tenant allowing `docker.io` will not be blocked, even if the image
> field is not explicit as `docker.io/busybox:latest`.

A Pod running `internal.registry.foo.tld` as the registry will be allowed, as well as `internal.registry.bar.tld`, since these match the regular expression.
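The matching can be reproduced locally; a sketch using `grep -E` as a stand-in for Capsule's Go regexp engine (the pattern is an illustrative variant written as `internal.registry.\w+.tld`, with `\w` expanded to a POSIX character class since ERE lacks `\w`):

```shell
# Unanchored substring match, like the tenant's allowedRegex
regex='internal.registry.[A-Za-z0-9_]+.tld'
for registry in internal.registry.foo.tld internal.registry.bar.tld docker.example.com; do
  if printf '%s\n' "$registry" | grep -Eq "$regex"; then
    echo "$registry: allowed"
  else
    echo "$registry: denied"
  fi
done
```

The two `internal.registry.*` names print `allowed`; `docker.example.com` prints `denied` because it never matches the pattern.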

> A catch-all regex entry such as `.*` allows every kind of registry, which is the same result as leaving `containerRegistries` unset.

As with Ingress and Storage Classes, the allowed registries can be inspected from the Tenant's namespaces:

```
alice@caas# kubectl describe ns oil-production
Name:         oil-production
Labels:       capsule.clastix.io/tenant=oil
Annotations:  capsule.clastix.io/allowed-registries: docker.io
              capsule.clastix.io/allowed-registries-regexp: ^registry\.internal\.\w+$
...
```

Any attempt by Alice to use a non-allowed `containerRegistries` value is denied by the Validation Webhook enforcing it.

# What’s next
See how Bill, the cluster admin, can assign Pod Security Policies to Alice's tenant. [Assign Pod Security Policies](./pod-security-policies.md).

@@ -4,72 +4,49 @@ An Ingress Controller is used in Kubernetes to publish services and applications
Bill can assign a set of dedicated Ingress Classes to the `oil` tenant to force its applications to be published only by the assigned Ingress Controller:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  ingressOptions:
    allowedClasses:
      allowed:
      - default
      allowedRegex: ^\w+-lb$
EOF
```
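How an `allowedRegex` like `^\w+-lb$` behaves can be sketched with `grep -E` (a stand-in for Capsule's Go regexp engine; `\w` is written as a POSIX character class, and the class names below are illustrative):

```shell
regex='^[A-Za-z0-9_]+-lb$'
for class in default external-lb internal-lb oil; do
  if printf '%s\n' "$class" | grep -Eq "$regex"; then
    echo "$class: matches allowedRegex"
  else
    echo "$class: no match"
  fi
done
```

Note that `default` does not match the regex, but it is still permitted through the explicit `allowed` list in the tenant manifest.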

Capsule assures that all Ingresses created in the tenant can use only one of the valid Ingress Classes.

Alice can create an Ingress using only an allowed Ingress Class:

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: oil-production
  annotations:
    kubernetes.io/ingress.class: default
spec:
  rules:
  - host: oil.acmecorp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```

Any attempt by Alice to use a non-valid Ingress Class, or to omit it, is denied by the Validation Webhook enforcing it.

# What’s next
See how Bill, the cluster admin, can assign a set of dedicated ingress hostnames to Alice's tenant. [Assign Ingress Hostnames](./ingress-hostnames.md).

@@ -1,42 +1,30 @@
# Assign Ingress Hostnames
Bill can control the ingress hostnames in the `oil` tenant to force the applications to be published only using the given hostname or set of hostnames:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  ingressOptions:
    allowedHostnames:
      allowed:
      - oil.acmecorp.com
      allowedRegex: ^.*acmecorp.com$
EOF
```

The Capsule controller assures that all Ingresses created in the tenant can use only one of the valid hostnames.

Alice can create an Ingress using any allowed hostname:

```yaml
kubectl apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
@@ -46,20 +34,20 @@ metadata:
    kubernetes.io/ingress.class: oil
spec:
  rules:
  - host: web.oil.acmecorp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```

Any attempt by Alice to use a non-valid hostname is denied by the Validation Webhook enforcing it.

# What’s next
See how Bill, the cluster admin, can control the hostname collision in Ingresses. [Control hostname collision in ingresses](./hostname-collision.md).

@@ -1,96 +1,82 @@
# Assign multiple tenants to an owner
In some scenarios, a single team is likely responsible for multiple lines of business. For example, in our sample organization Acme Corp., Alice is responsible for both the Oil and Gas lines of business. It's likely that Alice requires two different tenants, for example, `oil` and `gas`, to keep things isolated.

By design, the Capsule operator does not permit a hierarchy of tenants, since all tenants are at the same level. However, we can assign the ownership of multiple tenants to the same user or group of users.

Bill, the cluster admin, creates multiple tenants having `alice` as owner:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  namespaceQuota: 3
EOF
```

and

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  owners:
  - name: alice
    kind: User
  namespaceQuota: 9
EOF
```

So that:

```
bill@caas# kubectl get tenants
NAME   NAMESPACE QUOTA   NAMESPACE COUNT   OWNER NAME   OWNER KIND   NODE SELECTOR   AGE
oil    3                 3                 alice        User                         3h
gas    9                 0                 alice        User                         1m
```

Alternatively, the ownership can be assigned to a group called `oil-and-gas`:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: oil-and-gas
    kind: Group
  namespaceQuota: 3
EOF
```

and

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: gas
spec:
  owners:
  - name: oil-and-gas
    kind: Group
  namespaceQuota: 9
EOF
```

The two tenants remain isolated from each other in terms of resource assignments, e.g. _ResourceQuota_, _Nodes Pool_, _Storage Classes_ and _Ingress Classes_, and in terms of governance, e.g. _NetworkPolicies_, _PodSecurityPolicies_, _Trusted Registries_, etc.

When Alice logs in, she has access to all namespaces belonging to both the `oil` and `gas` tenants:

```
kubectl create ns oil-production
kubectl create ns gas-production
```

When the enforcement of the naming convention with the `--force-tenant-prefix` option is enabled, the namespaces are automatically assigned to the right tenant by Capsule because the operator does a lookup on the tenant names. If the `--force-tenant-prefix` option is not set, Alice needs to specify the tenant name as the label `capsule.clastix.io/tenant=<desired_tenant>` in the namespace manifest:

```yaml
kubectl apply -f - << EOF
kind: Namespace
apiVersion: v1
metadata:
@@ -98,12 +84,9 @@ metadata:
  labels:
    capsule.clastix.io/tenant: gas
EOF
```

> If not specified, Capsule will deny the request with the following message:
>
> `Unable to assign namespace to tenant. Please use capsule.clastix.io/tenant label when creating a namespace.`

# What’s next

30 docs/operator/use-cases/namespace-labels-and-annotations.md Normal file
@@ -0,0 +1,30 @@
# Denying user-defined labels or annotations

By default, Capsule allows tenant owners to add and modify any label or annotation on their namespaces.

But there are some scenarios when tenant owners should not be able to add or modify specific labels or annotations (for example, labels used in [Kubernetes network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) which are added by the cluster administrator).

Bill, the cluster admin, can deny Alice the ability to add specific labels and annotations on namespaces:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
  annotations:
    capsule.clastix.io/forbidden-namespace-labels: foo.acme.net, bar.acme.net
    capsule.clastix.io/forbidden-namespace-labels-regexp: .*.acme.net
    capsule.clastix.io/forbidden-namespace-annotations: foo.acme.net, bar.acme.net
    capsule.clastix.io/forbidden-namespace-annotations-regexp: .*.acme.net
spec:
  owners:
  - name: alice
    kind: User
EOF
```

# What’s next
This ends our tour of Capsule use cases. As we improve Capsule, more use cases about multi-tenancy, policy admission control, and cluster governance will be covered in the future.

Stay tuned!

@@ -1,48 +1,51 @@
# Assign Network Policies
Kubernetes network policies control network traffic between namespaces and between pods in the same namespace. Bill, the cluster admin, can enforce network traffic isolation between different tenants while leaving to Alice, the tenant owner, the freedom to set isolation between namespaces in the same tenant or even between pods in the same namespace.

To meet this requirement, Bill needs to define network policies that deny pods belonging to Alice's namespaces access to pods in namespaces belonging to other tenants, e.g. Bob's tenant `water`, or in system namespaces, e.g. `kube-system`.

Also, Bill can make sure pods belonging to a tenant namespace cannot access other network infrastructures like cluster nodes, load balancers, and virtual machines running other services.

Bill can set network policies in the tenant manifest, according to the requirements:

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  networkPolicies:
    items:
    - policyTypes:
      - Ingress
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
            - 192.168.0.0/16
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              capsule.clastix.io/tenant: oil
        - podSelector: {}
        - ipBlock:
            cidr: 192.168.0.0/16
      podSelector: {}
EOF
```

The Capsule controller, watching for namespace creation, creates the Network Policies for each namespace in the tenant.

Alice has access to the network policies:

```
kubectl -n oil-production get networkpolicies
NAME            POD-SELECTOR   AGE
capsule-oil-0   <none>         42h
```
@@ -50,19 +53,20 @@ capsule-oil-0   <none>         42h

Alice can create, patch, and delete additional network policies within her namespaces:

```
kubectl -n oil-production auth can-i get networkpolicies
yes

kubectl -n oil-production auth can-i delete networkpolicies
yes

kubectl -n oil-production auth can-i patch networkpolicies
yes
```

For example, she can create:

```yaml
kubectl -n oil-production apply -f - << EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
@@ -74,12 +78,13 @@ spec:
  policyTypes:
  - Ingress
  - Egress
EOF
```

Check all the network policies:

```
kubectl -n oil-production get networkpolicies
NAME                        POD-SELECTOR   AGE
capsule-oil-0               <none>         42h
production-network-policy   <none>         3m
```
@@ -88,17 +93,10 @@ production-network-policy   <none>   3m

And delete the namespace network policies:

```
kubectl -n oil-production delete networkpolicy production-network-policy
```

Any attempt by Alice to delete the tenant network policy defined in the tenant manifest is denied by the Validation Webhook enforcing it.

# What’s next
See how Bill can enforce the Pod containers image pull policy to `Always` to avoid leaking of private images when running on shared nodes. [Enforcing Pod containers image PullPolicy](./images-pullpolicy.md)

@@ -1,24 +0,0 @@
# Disabling NodePort Services per Tenant

When dealing with a _shared multi-tenant_ scenario, _NodePort_ services can become cumbersome to manage.

The reason is related to the overlapping needs of the Tenant owners, since a _NodePort_ is going to be open on all nodes and, when using `hostNetwork=true`, accessible to any _Pod_ despite any specific `NetworkPolicy`.

By default, Capsule doesn't block the creation of `NodePort` services.

Although this behavior is not yet manageable using a CRD key, if you need to prevent a Tenant from creating `NodePort` Services, the annotation `capsule.clastix.io/enable-node-ports` can be used as follows.

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
  annotations:
    capsule.clastix.io/enable-node-ports: "false"
spec:
  owner:
    kind: User
    name: alice
```

With the said configuration, any Namespace owned by the Tenant will not be able to get a Service of type `NodePort`, since the creation will be denied by the validation webhook.

@@ -4,7 +4,7 @@ Bill, the cluster admin, can dedicate a pool of worker nodes to the `oil` tenant

These nodes are labeled by Bill as `pool=oil`:

```
kubectl get nodes --show-labels
NAME                STATUS   ROLES    AGE   VERSION   LABELS
...
```
@@ -16,36 +16,47 @@ worker08.acme.com   Ready   worker   8d   v1.18.2   pool=oil

The label `pool=oil` is defined as a node selector in the tenant manifest:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  nodeSelector:
    pool: oil
    kubernetes.io/os: linux
EOF
```

The Capsule controller makes sure that any namespace created in the tenant has the annotation `scheduler.alpha.kubernetes.io/node-selector: pool=oil`. This annotation tells the Kubernetes scheduler to assign the node selector `pool=oil` to all the pods deployed in the tenant. The effect is that all the pods deployed by Alice are placed only on the designated pool of nodes.

Multiple node selector labels can be defined as in the following snippet:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  nodeSelector:
    pool: oil
    kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    hardware: gpu
```

Any attempt by Alice to change the selector on the pods will result in an error from the `PodNodeSelector` Admission Controller plugin.
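For instance, a Pod like the following (a hypothetical manifest; the `pool: gas` value is illustrative) carries a node selector conflicting with the namespace's `pool=oil` selector and would be refused:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: oil-production
spec:
  nodeSelector:
    pool: gas   # conflicts with the namespace annotation pool=oil
  containers:
  - name: busybox
    image: busybox
```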

Also, RBAC prevents Alice from changing the annotation on the namespace:

```
kubectl auth can-i edit ns -n oil-production
no
```

# Onboard a new tenant

Bill, the cluster admin, receives a new request from Acme Corp.'s CTO asking for a new tenant to be onboarded, with Alice as the tenant owner. Bill then creates Alice's identity `alice` in the Acme Corp. identity management system. Since Alice is a tenant owner, Bill needs to assign `alice` the Capsule group defined by the `--capsule-user-group` option, which defaults to `capsule.clastix.io`.

To keep things simple, we assume that Bill just creates a client certificate for authentication using an X.509 Certificate Signing Request, so Alice's certificate has `"/CN=alice/O=capsule.clastix.io"`.
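As a sketch (assuming plain CSR-based client certificate authentication against the cluster CA), a key and CSR carrying that subject can be generated with OpenSSL:

```shell
# Generate Alice's private key and a CSR whose subject puts the user name
# in CN and the Capsule user group in the Organization (O) field
openssl req -new -newkey rsa:2048 -nodes \
  -keyout alice.key -out alice.csr \
  -subj "/CN=alice/O=capsule.clastix.io"

# Verify the subject before submitting the CSR for signing
openssl req -in alice.csr -noout -subject
```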
Bill creates a new tenant `oil` in the CaaS management portal according to the tenant's profile:

```yaml
kubectl create -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
EOF
```

Bill checks if the new tenant is created and operational:

```
kubectl get tenant oil
NAME   STATE    NAMESPACE QUOTA   NAMESPACE COUNT   NODE SELECTOR   AGE
oil    Active                     0                                 33m
```

> Note that namespaces are not yet assigned to the new tenant.
Once the new tenant `oil` is in place, Bill sends the login credentials to Alice.

Alice can log in using her credentials and check if she can create a namespace

```
kubectl auth can-i create namespaces
yes
```

or even delete the namespace

```
kubectl auth can-i delete ns -n oil-production
yes
```

However, cluster resources are not accessible to Alice

```
kubectl auth can-i get namespaces
no

kubectl auth can-i get nodes
no

kubectl auth can-i get persistentvolumes
no
```

including the `Tenant` resources

```
kubectl auth can-i get tenants
no
```
## Assign a group of users as tenant owner

In the example above, Bill assigned the ownership of the `oil` tenant to the `alice` user. If another user, e.g. Bob, needs to administer the `oil` tenant, Bill can assign the ownership of the `oil` tenant to that user too:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  - name: bob
    kind: User
EOF
```

However, it's more likely that Bill assigns the ownership of the `oil` tenant to a group of users instead of a single one. Bill creates a new group account `oil-users` in the Acme Corp. identity management system and then assigns the Alice and Bob identities to the `oil-users` group.

The tenant manifest is modified as follows:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: oil-users
    kind: Group
EOF
```

With the configuration above, any user belonging to the `oil-users` group will be an owner of the `oil` tenant with the same permissions as Alice. For example, Bob can log in with his credentials and issue

```
kubectl auth can-i create namespaces
yes
```
## Assign a robot account as tenant owner

As the GitOps methodology is gaining more and more adoption everywhere, it's increasingly likely that an application (Service Account) should act as tenant owner. In Capsule, a Tenant can also be owned by a Kubernetes _ServiceAccount_ identity.

The tenant manifest is modified as follows:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: oil-users
    kind: Group
  - name: system:serviceaccount:default:robot
    kind: ServiceAccount
EOF
```

Bill can create a Service Account called `robot`, for example, in the `default` namespace and let it act as tenant owner of the `oil` tenant

```
kubectl --as system:serviceaccount:default:robot --as-group capsule.clastix.io auth can-i create namespaces
yes
```
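The `robot` Service Account itself is an ordinary Kubernetes object; a minimal manifest matching the names used above would be:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: robot
  namespace: default
```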
# What’s next

See how a tenant owner creates new namespaces. [Create namespaces](./create-namespaces.md).
# Use cases for Capsule

Using Capsule, a cluster admin can implement complex multi-tenant scenarios for both public and private deployments. Here is a list of common scenarios addressed by Capsule.

# Container as a Service (CaaS)

***Acme Corp***, our sample organization, built a Container as a Service (CaaS) platform based on Kubernetes to serve multiple lines of business. Each line of business has its own team of engineers that are responsible for the development, deployment, and operation of their digital products.

To simplify the usage of Capsule in this scenario, we'll work with the following actors:

* ***Bill***:
he is the cluster administrator from the operations department of Acme Corp. and is in charge of administering and maintaining the CaaS platform.

* ***Alice***:
she works as the IT Project Leader in the Oil & Gas Business Units, two new lines of business at Acme Corp. Alice is responsible for all the strategic IT projects in the two LOBs. She is also responsible for a team made of different job responsibilities (developers, administrators, SRE engineers, etc.) working in separate departments.

* ***Joe***:
he works at Acme Corp. as a lead developer of a distributed team in Alice's organization. Joe is responsible for developing a mission-critical project in the Oil market.

* ***Bob***:
he is the head of Engineering for the Water Business Unit, the main and historical line of business at Acme Corp. He is responsible for the development, deployment, and operation of multiple digital products in production for a large set of customers.

Bill, at Acme Corp., can use Capsule to address any of the following scenarios:

* [Onboard Tenants](./onboarding.md)
* [Create Namespaces](./create-namespaces.md)
* [Assign Permissions](./permissions.md)
* [Enforce Resources Quotas and Limits](./resources-quota-limits.md)
* [Enforce Pod Priority Classes](./pod-priority-classes.md)
* [Assign specific Node Pools](./nodes-pool.md)
* [Assign Ingress Classes](./ingress-classes.md)
* [Assign Ingress Hostnames](./ingress-hostnames.md)
* [Control hostname collision in Ingresses](./hostname-collision.md)
* [Assign Storage Classes](./storage-classes.md)
* [Disable NodePort Services](./node-ports.md)
* [Assign Network Policies](./network-policies.md)
* [Enforce Containers image PullPolicy](./images-pullpolicy.md)
* [Assign Trusted Images Registries](./images-registries.md)
* [Assign Pod Security Policies](./pod-security-policies.md)
* [Create Custom Resources](./custom-resources.md)
* [Taint Namespaces](./taint-namespaces.md)
* [Assign multiple Tenants](./multiple-tenants.md)
* [Cordon Tenants](./cordoning-tenant.md)
* [Disable Service Types](./service-type.md)
* [Taint Services](./taint-services.md)
* [Allow adding labels and annotations on namespaces](./namespace-labels-and-annotations.md)
* [Velero Backup Restoration](./velero-backup-restoration.md)

> NB: as we improve Capsule, more use cases about multi-tenancy and cluster governance will be covered.

# What’s next

Now let's see how the cluster admin onboards a new tenant. [Onboarding a new tenant](./onboarding.md).
# Assign permissions

Alice acts as the tenant admin. Other users can operate inside the tenant with different levels of permissions and authorizations. Alice is responsible for creating additional roles and assigning these roles to other users working in the same tenant.

One of the key design principles of Capsule is self-provisioning management from the tenant owner's perspective. Alice, the tenant owner, does not need to interact with Bill, the cluster admin, to complete her day-by-day duties. On the other side, Bill does not have to deal with multiple requests coming from multiple tenant owners that would probably overwhelm him.

Capsule leaves Alice, and the other tenant owners, the freedom to create RBAC roles at the namespace level, or to use the pre-defined cluster roles already available in Kubernetes. Since roles and rolebindings are limited to a namespace scope, Alice can assign the roles to the other users accessing the same tenant only after the namespace is created. This gives Alice the power to administer the tenant without the intervention of the cluster admin.

From the cluster admin perspective, the only required action for Bill is to provide the other identities, e.g. `joe`, in the Identity Management system. This task can be done once, when onboarding the tenant, and the number of users accessing the tenant can be part of the tenant business profile.

Alice can create Roles and RoleBindings only in the namespaces she owns

```
kubectl auth can-i get roles -n oil-development
yes

kubectl auth can-i get rolebindings -n oil-development
yes
```

so she can assign the role of namespace `oil-development` admin to Joe, another user accessing the tenant `oil`

```yaml
kubectl --as alice --as-group capsule.clastix.io apply -f - << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
...
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: joe
EOF
```
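The RoleBinding manifest above is truncated in this diff. A complete manifest granting Joe the Kubernetes built-in `admin` cluster role in the `oil-development` namespace could look like the following sketch (the binding name is illustrative, not from the original docs):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  # Illustrative name for the binding
  name: oil-development-admin
  namespace: oil-development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: joe
```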
Joe now can operate on the namespace `oil-development` as admin, but he has no access to the other namespaces `oil-production` and `oil-test` that are part of the same tenant:

```
kubectl --as joe --as-group capsule.clastix.io auth can-i create pod -n oil-development
yes

kubectl --as joe --as-group capsule.clastix.io auth can-i create pod -n oil-production
no
```

> Please note that the user `joe`, in the example above, is not acting as tenant owner. He can just operate in the `oil-development` namespace as admin.

# What’s next
See how Bill, the cluster admin, sets resources quota and limits for Alice's tenant. [Enforce resources quota and limits](./resources-quota-limits.md).
@@ -1,37 +0,0 @@
# Enforcing Pod Priority Classes

> Pods can have priority. Priority indicates the importance of a Pod relative to other Pods.
> If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible.
>
> [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/)

In a multi-tenant cluster where not all users are trusted, a tenant owner could create Pods at the highest possible priorities, causing other Pods to be evicted or not get scheduled.

At the current state, Capsule doesn't yet have a CRD key to handle the enforced [Priority Class](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass).

Enforcement is feasible using the Tenant's annotations field, as follows:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
  annotations:
    priorityclass.capsule.clastix.io/allowed: default
    priorityclass.capsule.clastix.io/allowed-regex: "^tier-.*$"
spec:
  owner:
    kind: User
    name: alice
```

With the said Tenant specification, Alice can create a Pod resource if `spec.priorityClassName` equals:

- `default`, as mentioned in the annotation `priorityclass.capsule.clastix.io/allowed`
- `tier-gold`, `tier-silver`, or `tier-bronze`, since these match the regex declared in the annotation `priorityclass.capsule.clastix.io/allowed-regex`

If a Pod is going to use a non-allowed _Priority Class_, it will be rejected by the Validation Webhook enforcing it.

# What’s next

See how Bill, the cluster admin, can assign a pool of nodes to Alice's tenant. [Assign a nodes pool](./nodes-pool.md).

docs/operator/use-cases/pod-priority-classes.md (new file, 35 lines)
@@ -0,0 +1,35 @@
# Enforcing Pod Priority Classes

Pods can have priority. Priority indicates the importance of a Pod relative to other Pods. If a Pod cannot be scheduled, the scheduler tries to preempt (evict) lower priority Pods to make scheduling of the pending Pod possible. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/).

In a multi-tenant cluster, not all users can be trusted: a tenant owner could create Pods at the highest possible priorities, causing other Pods to be evicted or not get scheduled.

To prevent misuse of Pod Priority Classes, Bill, the cluster admin, can enforce the allowed Pod Priority Classes at tenant level:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  priorityClasses:
    allowed:
    - default
    allowedRegex: "^tier-.*$"
EOF
```

With the said Tenant specification, Alice can create a Pod resource if `spec.priorityClassName` equals:

- `default`
- `tier-gold`, `tier-silver`, or `tier-bronze`, since these match the allowed regex.

If a Pod is going to use a non-allowed _Priority Class_, it will be rejected by the Validation Webhook enforcing it.
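For instance, a pod requesting one of the allowed classes passes validation, while any other class name is denied (a hypothetical manifest for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: oil-production
spec:
  # "tier-gold" matches the allowedRegex ^tier-.*$ declared in the tenant;
  # a class such as "system-cluster-critical" would be rejected instead
  priorityClassName: tier-gold
  containers:
  - name: nginx
    image: nginx
```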
# What’s next

See how Bill, the cluster admin, can assign a pool of nodes to Alice's tenant. [Assign a nodes pool](./nodes-pool.md).
# Assign Pod Security Policies

Bill, the cluster admin, can assign a dedicated Pod Security Policy (PSP) to Alice's tenant. This is likely to be a requirement in a multi-tenancy environment.

The cluster admin creates a PSP:

```yaml
kubectl apply -f - << EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
...
spec:
  # Required to prevent escalations to root.
  allowPrivilegeEscalation: false
...
EOF
```

Then he creates a _ClusterRole_ granting use of the said PSP

```yaml
kubectl apply -f - << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
...
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['psp:restricted']
  verbs: ['use']
EOF
```

Bill can assign this role to all namespaces in Alice's tenant by setting it in the tenant manifest:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  additionalRoleBindings:
  - clusterRoleName: psp:privileged
    subjects:
    - kind: "Group"
      apiGroup: "rbac.authorization.k8s.io"
      name: "system:authenticated"
...
EOF
```

With the given specification, Capsule will ensure that all Alice's namespaces will contain a _RoleBinding_ for the specified _Cluster Role_.

For example, in the `oil-production` namespace, Alice will see:

```yaml
kind: RoleBinding
...
```

With the above example, Capsule is forbidding any authenticated user in the `oil-production` namespace to run privileged pods and to perform privilege escalation, as declared by the Cluster Role `psp:privileged`.

# What’s next
See how Bill, the cluster admin, can assign to Alice the permissions to create custom resources in her tenant. [Create Custom Resources](./custom-resources.md).
# Enforce resources quota and limits

With the help of Capsule, Bill, the cluster admin, can set and enforce resources quota and limits for Alice's tenant.

## Resources quota

Set resources quota for each namespace in Alice's tenant by defining them in the tenant spec:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  namespaceOptions:
    quota: 3
  resourceQuotas:
    scope: Tenant
    items:
    - hard:
        limits.cpu: "8"
        limits.memory: 16Gi
        requests.cpu: "8"
        requests.memory: 16Gi
    - hard:
        pods: "10"
  limitRanges:
    items:
    - limits:
      - default:
          cpu: 500m
          memory: 512Mi
        defaultRequest:
          cpu: 100m
          memory: 10Mi
        type: Container
EOF
```

The resource quotas above will be inherited by all the namespaces created by Alice. In our case, when Alice creates the namespace `oil-production`, Capsule creates the following resource quotas:

```yaml
kind: ResourceQuota
apiVersion: v1
metadata:
  name: capsule-oil-0
  namespace: oil-production
  labels:
    tenant: oil
spec:
  hard:
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.cpu: "8"
    requests.memory: 16Gi
---
kind: ResourceQuota
apiVersion: v1
metadata:
  name: capsule-oil-1
  namespace: oil-production
  labels:
    tenant: oil
spec:
  hard:
    pods: "10"
```
Alice can create any resource according to the assigned quotas:

```
kubectl -n oil-production create deployment nginx --image nginx:latest --replicas 4
```

To check the remaining resources in the `oil-production` namespace, she gets the ResourceQuota:

```
kubectl -n oil-production get resourcequota
NAME            AGE   REQUEST                                      LIMIT
capsule-oil-0   42h   requests.cpu: 1/8, requests.memory: 1/16Gi   limits.cpu: 1/8, limits.memory: 1/16Gi
capsule-oil-1   42h   pods: 1/10
```

At the namespace `oil-production` level, Alice can see the used resources by inspecting the `status` in the ResourceQuota:

```yaml
kubectl -n oil-production get resourcequota capsule-oil-1 -o yaml
...
status:
  hard:
    pods: "10"
  used:
    pods: "4"
```
At tenant level, the behaviour is controlled by the `spec.resourceQuotas.scope` value:

* `Tenant` (default)
* `Namespace`

### Enforcement at tenant level

By setting enforcement at tenant level, i.e. `spec.resourceQuotas.scope=Tenant`, Capsule aggregates resources usage for all namespaces in the tenant and adjusts all the `ResourceQuota` usage as an aggregate. In such a case, Alice can check the used resources at the tenant level by inspecting the `annotations` in the ResourceQuota object of any namespace in the tenant:

```yaml
kubectl -n oil-production get resourcequotas capsule-oil-1 -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    quota.capsule.clastix.io/used-pods: "4"
    quota.capsule.clastix.io/hard-pods: "10"
...
```

or

```yaml
kubectl -n oil-development get resourcequotas capsule-oil-1 -o yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  annotations:
    quota.capsule.clastix.io/used-pods: "4"
    quota.capsule.clastix.io/hard-pods: "10"
...
```

When the aggregate usage for all namespaces crosses the hard quota, the native `ResourceQuota` Admission Controller in Kubernetes denies Alice's request to create resources exceeding the quota:

```
kubectl -n oil-development create deployment nginx --image nginx:latest --replicas 10
```

Alice cannot schedule more pods than admitted at the tenant aggregate level.

```
kubectl -n oil-development get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-55649fd747-6fzcx   1/1     Running   0          12s
nginx-55649fd747-7q6x6   1/1     Running   0          12s
nginx-55649fd747-86wr5   1/1     Running   0          12s
nginx-55649fd747-h6kbs   1/1     Running   0          12s
nginx-55649fd747-mlhlq   1/1     Running   0          12s
nginx-55649fd747-t48s5   1/1     Running   0          7s
```

and

```
kubectl -n oil-production get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-55649fd747-52fsq   1/1     Running   0          22m
nginx-55649fd747-9q8n5   1/1     Running   0          22m
nginx-55649fd747-r8vzr   1/1     Running   0          22m
nginx-55649fd747-tkv7m   1/1     Running   0          22m
```
### Enforcement at namespace level

By setting enforcement at the namespace level, i.e. `spec.resourceQuotas.scope=Namespace`, Capsule does not aggregate the resources usage, and all enforcement is done at the namespace level.
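A tenant enforcing quotas per namespace rather than as a tenant-wide aggregate just switches the scope; a sketch based on the spec shown earlier:

```yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  resourceQuotas:
    # Each namespace gets its own independent pods budget
    scope: Namespace
    items:
    - hard:
        pods: "10"
```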
## Pods and containers limits

Bill, the cluster admin, can also set Limit Ranges for each namespace in Alice's tenant by defining limits for pods and containers in the tenant spec:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  limitRanges:
    items:
    - limits:
      - type: Pod
        min:
          cpu: "50m"
          memory: "5Mi"
        max:
          cpu: "1"
          memory: "1Gi"
      - type: Container
        defaultRequest:
          cpu: "100m"
          memory: "10Mi"
        default:
          cpu: "200m"
          memory: "100Mi"
        min:
          cpu: "50m"
          memory: "5Mi"
        max:
          cpu: "1"
          memory: "1Gi"
      - type: PersistentVolumeClaim
        min:
          storage: "1Gi"
        max:
          storage: "10Gi"
EOF
```
Limits will be inherited by all the namespaces created by Alice. In our case, when Alice creates the namespace `oil-production`, Capsule creates the following:
|
||||
@@ -178,37 +232,21 @@ spec:
|
||||
storage: "10Gi"
|
||||
```

Alice can inspect Limit Ranges for her namespaces:

```
kubectl -n oil-production get limitranges
NAME            CREATED AT
capsule-oil-0   2020-07-20T18:41:15Z

kubectl -n oil-production describe limitranges capsule-oil-0
Name:       capsule-oil-0
Namespace:  oil-production
Type                   Resource  Min  Max   Default Request  Default Limit  Max Limit/Request Ratio
----                   --------  ---  ---   ---------------  -------------  -----------------------
Pod                    cpu       50m  1     -                -              -
Pod                    memory    5Mi  1Gi   -                -              -
Container              cpu       50m  1     100m             200m           -
Container              memory    5Mi  1Gi   10Mi             100Mi          -
PersistentVolumeClaim  storage   1Gi  10Gi  -                -              -
```

> Note: being the limit range specific to single resources, there is no aggregate to count.

Having access to resource quotas and limits, Alice still doesn't have permission to change or delete them according to the assigned RBAC profile:

```
kubectl -n oil-production auth can-i patch resourcequota
no
kubectl -n oil-production auth can-i delete resourcequota
no
kubectl -n oil-production auth can-i patch limitranges
no
kubectl -n oil-production auth can-i delete limitranges
no
```

# What’s next

See how Bill, the cluster admin, can enforce the PriorityClass of Pods running in Alice's tenant namespaces. [Enforce Pod Priority Classes](./pod-priority-classes.md)

docs/operator/use-cases/service-type.md (new file, 71 lines)

# Disable Service Types

Bill, the cluster admin, can prevent the creation of services with specific service types.

## NodePort

When dealing with a _shared multi-tenant_ scenario, multiple _NodePort_ services can become cumbersome to manage. The reason is the overlapping needs of the tenant owners, since a _NodePort_ is open on all nodes and, when using `hostNetwork=true`, accessible to any _Pod_ despite any specific `NetworkPolicy`.

Bill, the cluster admin, can block the creation of services with the `NodePort` service type for a given tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  serviceOptions:
    allowedServices:
      nodePort: false
EOF
```

With the above configuration, any attempt of Alice to create a Service of type `NodePort` is denied by the Validation Webhook. The default value is `true`.
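
The effect of the `allowedServices` settings can be sketched as a simple allow-map check (illustrative only; the real enforcement happens in Capsule's validating webhook):

```python
# Mirrors spec.serviceOptions.allowedServices for the "oil" tenant above.
ALLOWED_SERVICES = {"nodePort": False}

# Map of Kubernetes Service types to the corresponding allow-list key.
TYPE_KEYS = {"NodePort": "nodePort", "ExternalName": "externalName",
             "LoadBalancer": "loadBalancer"}

def admit_service(service_type, allowed=ALLOWED_SERVICES):
    # Types without a key (e.g. ClusterIP) are always admitted;
    # gated types default to allowed (true) when the field is unset.
    key = TYPE_KEYS.get(service_type)
    return key is None or allowed.get(key, True)

print(admit_service("NodePort"))   # False: denied for the oil tenant
print(admit_service("ClusterIP"))  # True: never gated
```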
## ExternalName

Services of type `ExternalName` have been found to be subject to several security issues. To prevent tenant owners from creating services of type `ExternalName`, the cluster admin can disable them for a tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  serviceOptions:
    allowedServices:
      externalName: false
EOF
```

With the above configuration, any attempt of Alice to create a Service of type `ExternalName` is denied by the Validation Webhook. The default value is `true`.

## LoadBalancer

Similarly, Services of type `LoadBalancer` could be blocked for various reasons. To prevent tenant owners from creating these kinds of services, the cluster admin can disable them for a tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  serviceOptions:
    allowedServices:
      loadBalancer: false
EOF
```

With the above configuration, any attempt of Alice to create a Service of type `LoadBalancer` is denied by the Validation Webhook. The default value is `true`.

# What’s next

See how Bill, the cluster admin, can set taints on Alice's services. [Taint services](./taint-services.md).

# Assign Storage Classes

Persistent storage infrastructure is provided to tenants. Different types of storage requirements, with different levels of QoS, e.g. SSD versus HDD, are available for different tenants according to the tenant's profile. To meet these requirements, Bill, the cluster admin, can provision different Storage Classes and assign them to the tenant, either explicitly or with a regular expression:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  storageClasses:
    allowed:
    - ceph-rbd
    - ceph-nfs
    allowedRegex: "^ceph-.*$"
EOF
```
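
The combination of an explicit allow-list and `allowedRegex` can be sketched as follows (illustrative only, not Capsule's actual code):

```python
import re

def storage_class_allowed(name, allowed=(), allowed_regex=None):
    # A class is valid if it is explicitly listed...
    if name in allowed:
        return True
    # ...or if it matches the allowed regular expression.
    return bool(allowed_regex and re.match(allowed_regex, name))

ALLOWED = ["ceph-rbd", "ceph-nfs"]
REGEX = r"^ceph-.*$"

print(storage_class_allowed("ceph-rbd", ALLOWED, REGEX))  # True
print(storage_class_allowed("default", ALLOWED, REGEX))   # False
```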
Alice, as tenant owner, gets the list of valid Storage Classes by checking any of her namespaces:

```
kubectl describe ns oil-production
Name:         oil-production
Labels:       capsule.clastix.io/tenant=oil
Annotations:  capsule.clastix.io/storage-classes: ceph-rbd,ceph-nfs
              capsule.clastix.io/storage-classes-regexp: ^ceph-.*$
```

Capsule assures that all Persistent Volume Claims created by Alice will use only one of the valid storage classes:

```yaml
kubectl apply -f - << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc
  ...
spec:
  ...
  resources:
    requests:
      storage: 12Gi
EOF
```

Any attempt of Alice to use a non-valid Storage Class, e.g. `default`, or to omit it entirely, is denied by the Validation Webhook:

```
Error from server: error when creating persistent volume claim pvc:
admission webhook "pvc.capsule.clastix.io" denied the request:
Storage Class default is forbidden for the current Tenant
```

# What’s next

See how Bill, the cluster admin, can assign Network Policies to Alice's tenant. [Assign Network Policies](./network-policies.md).

# Taint namespaces

With Capsule, Bill can _"taint"_ the namespaces created by Alice with additional labels and/or annotations. There is no specific semantic assigned to these labels and annotations: they will just be assigned to the namespaces in the tenant as they are created by Alice. This can help the cluster admin to implement specific use cases, for example, backup as a service for namespaces in the tenant.

Bill assigns additional labels and annotations to all namespaces created in the `oil` tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  namespaceOptions:
    additionalMetadata:
      annotations:
        capsule.clastix.io/backup: "true"
      labels:
        capsule.clastix.io/tenant: oil
EOF
```
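
The merge of `additionalMetadata` onto a freshly created namespace can be sketched like this (illustrative only; keys already set on the namespace are assumed to win over tenant-level defaults):

```python
def apply_additional_metadata(namespace, additional):
    # Copy tenant-level labels/annotations onto the namespace manifest,
    # without overwriting keys the namespace already sets.
    for field in ("labels", "annotations"):
        merged = dict(additional.get(field, {}))
        merged.update(namespace.setdefault("metadata", {}).get(field, {}))
        namespace["metadata"][field] = merged
    return namespace

ns = {"metadata": {"name": "oil-production"}}
extra = {"annotations": {"capsule.clastix.io/backup": "true"},
         "labels": {"capsule.clastix.io/tenant": "oil"}}
print(apply_additional_metadata(ns, extra)["metadata"]["labels"])
# {'capsule.clastix.io/tenant': 'oil'}
```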
When Alice creates a namespace, it will inherit the given labels and/or annotations:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: oil-production
  labels:
    capsule.clastix.io/tenant: oil    # the additional label
  annotations:
    capsule.clastix.io/backup: "true" # the additional annotation
```

# What’s next

See how Bill, the cluster admin, can assign multiple tenants to Alice. [Assign multiple tenants to an owner](./multiple-tenants.md).
docs/operator/use-cases/taint-services.md (new file, 28 lines)

# Taint services

With Capsule, Bill can _"taint"_ the services created by Alice with additional labels and/or annotations. There is no specific semantic assigned to these labels and annotations: they will just be assigned to the services in the tenant as they are created by Alice. This can help the cluster admin to implement specific use cases.

Bill assigns additional labels and annotations to all services created in the `oil` tenant:

```yaml
kubectl apply -f - << EOF
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: oil
spec:
  owners:
  - name: alice
    kind: User
  serviceOptions:
    additionalMetadata:
      annotations:
        capsule.clastix.io/backup: "true"
      labels:
        capsule.clastix.io/tenant: oil
EOF
```

When Alice creates a service in a namespace, it will inherit the given labels and/or annotations.

# What’s next

See how Bill, the cluster admin, can allow Alice to use specific labels or annotations. [Allow adding labels and annotations on namespaces](./namespace-labels-and-annotations.md).