Compare commits

...

12 Commits

Author SHA1 Message Date
jpgouin
a2f5fd7592 Merge pull request #328 from takushi-35/ehemeral-docs
[DOC] fix: Incorrect creation method when using Ephemeral Storage in advanced-usage.md
2025-04-11 14:27:40 +02:00
takushi-35
c8df86b83b fix: Correcting incorrect procedures in advanced-usage.md 2025-04-10 13:27:00 +09:00
Hussein Galal
d41d2b8c31 Fix update bug in ensureObject (#325)
* Fix update bug in ensureObjects

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix log msg

Co-authored-by: Enrico Candino <enrico.candino@gmail.com>

* Fix import

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
Co-authored-by: Enrico Candino <enrico.candino@gmail.com>
2025-04-09 17:25:48 +02:00
Enrico Candino
7cb2399b89 Update and fix to k3kcli for new ClusterSet integration (#321)
* added clusterset flag to cluster creation and displayname to clusterset creation

* updated cli docs
2025-04-04 13:22:55 +02:00
Enrico Candino
90568f24b1 Added ClusterSet as singleton (#316)
* added ClusterSet as singleton

* fix tests
2025-04-03 16:26:25 +02:00
Hussein Galal
0843a9e313 Initial support for ResourceQuotas in clustersets (#308)
* Add ResourceQuota to clusterset

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Generate docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add a defualt limitRange for ClusterSets

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix linting

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add test for clusterset limitRange

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add server and worker limits

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make charts

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add default limits and fixes to resourcesquota

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make build-crds

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make build-crds

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make spec as pointer

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* delete default limit

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Update tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Update tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* return on delete in limitrange

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-04-03 12:30:48 +02:00
Enrico Candino
b58578788c Add clusterset commands (#319)
* added clusterset create command, small refactor with appcontext

* added clusterset delete

* updated docs
2025-04-03 11:07:32 +02:00
Enrico Candino
c4cc1e69cd requeue if server not ready (#318) 2025-04-03 10:45:18 +02:00
Enrico Candino
bd947c0fcb Create dedicated namespace for new clusters (#314)
* create dedicated namespace for new clusters

* porcelain test

* use --exit-code instead of test and shell for escaping issue

* update go.mod
2025-03-26 14:53:41 +01:00
Hussein Galal
b0b61f8d8e Fix delete cli (#281)
* Fix delete cli

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* make lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update docs

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix delete cli

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* check if object has a controller reference before removing

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* move the update to the if condition

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* move the update to the if condition

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-03-24 13:44:00 +02:00
Enrico Candino
3281d54c6c fix typo (#300) 2025-03-24 10:48:42 +01:00
Hussein Galal
853b0a7e05 Chart update for 0.3.1 (#309)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-03-21 01:59:53 +02:00
28 changed files with 1206 additions and 504 deletions

View File

@@ -89,9 +89,8 @@ lint: ## Find any linting issues in the project
validate: build-crds docs ## Validate the project checking for any dependency or doc mismatch
$(GINKGO) unfocus
go mod tidy
-	git --no-pager diff go.mod go.sum
-	test -z "$(shell git status --porcelain)"
+	git status --porcelain
+	git --no-pager diff --exit-code
.PHONY: install
install: ## Install K3k with Helm on the targeted Kubernetes cluster

View File

@@ -2,5 +2,5 @@ apiVersion: v2
name: k3k
description: A Helm chart for K3K
type: application
-version: 0.3.1-r1
-appVersion: v0.3.1-rc1
+version: 0.3.1-r2
+appVersion: v0.3.1

View File

@@ -94,29 +94,6 @@ spec:
x-kubernetes-validations:
- message: clusterDNS is immutable
rule: self == oldSelf
clusterLimit:
description: Limit defines resource limits for server/agent nodes.
properties:
serverLimit:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: ServerLimit specifies resource limits for server
nodes.
type: object
workerLimit:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: WorkerLimit specifies resource limits for agent nodes.
type: object
type: object
expose:
description: |-
Expose specifies options for exposing the API server.
@@ -225,6 +202,15 @@ spec:
items:
type: string
type: array
serverLimit:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: ServerLimit specifies resource limits for server nodes.
type: object
servers:
default: 1
description: |-
@@ -271,6 +257,15 @@ spec:
It should follow the K3s versioning convention (e.g., v1.28.2-k3s1).
If not specified, the Kubernetes version of the host node will be used.
type: string
workerLimit:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: WorkerLimit specifies resource limits for agent nodes.
type: object
type: object
status:
description: Status reflects the observed state of the Cluster.

View File

@@ -14,7 +14,14 @@ spec:
singular: clusterset
scope: Namespaced
versions:
-  - name: v1alpha1
+  - additionalPrinterColumns:
+      - jsonPath: .spec.displayName
+        name: Display Name
+        type: string
+      - jsonPath: .metadata.creationTimestamp
+        name: Age
+        type: date
+    name: v1alpha1
schema:
openAPIV3Schema:
description: |-
@@ -42,10 +49,10 @@ spec:
default: {}
description: Spec defines the desired state of the ClusterSet.
properties:
-      allowedNodeTypes:
+      allowedModeTypes:
         default:
           - shared
-        description: AllowedNodeTypes specifies the allowed cluster provisioning
+        description: AllowedModeTypes specifies the allowed cluster provisioning
          modes. Defaults to [shared].
items:
description: ClusterMode is the possible provisioning mode of a
@@ -59,30 +66,6 @@ spec:
x-kubernetes-validations:
- message: mode is immutable
rule: self == oldSelf
defaultLimits:
description: DefaultLimits specifies the default resource limits for
servers/agents when a cluster in the set doesn't provide any.
properties:
serverLimit:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: ServerLimit specifies resource limits for server
nodes.
type: object
workerLimit:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: WorkerLimit specifies resource limits for agent nodes.
type: object
type: object
defaultNodeSelector:
additionalProperties:
type: string
@@ -97,15 +80,84 @@ spec:
description: DisableNetworkPolicy indicates whether to disable the
creation of a default network policy for cluster isolation.
type: boolean
maxLimits:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: MaxLimits specifies the maximum resource limits that
apply to all clusters (server + agent) in the set.
displayName:
description: DisplayName is the human-readable name for the set.
type: string
limit:
description: |-
Limit specifies the LimitRange that will be applied to all pods within the ClusterSet
to set defaults and constraints (min/max)
properties:
limits:
description: Limits is the list of LimitRangeItem objects that
are enforced.
items:
description: LimitRangeItem defines a min/max usage limit for
any resource that matches on kind.
properties:
default:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Default resource requirement limit value by
resource name if resource limit is omitted.
type: object
defaultRequest:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: DefaultRequest is the default resource requirement
request value by resource name if resource request is
omitted.
type: object
max:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Max usage constraints on this kind by resource
name.
type: object
maxLimitRequestRatio:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: MaxLimitRequestRatio if specified, the named
resource must have a request and limit that are both non-zero
where limit divided by request is less than or equal to
the enumerated value; this represents the max burst for
the named resource.
type: object
min:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: Min usage constraints on this kind by resource
name.
type: object
type:
description: Type of resource that this limit applies to.
type: string
required:
- type
type: object
type: array
required:
- limits
type: object
podSecurityAdmissionLevel:
description: PodSecurityAdmissionLevel specifies the pod security
@@ -115,6 +167,70 @@ spec:
- baseline
- restricted
type: string
quota:
description: Quota specifies the resource limits for clusters within
a clusterset.
properties:
hard:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: |-
hard is the set of desired hard limits for each named resource.
More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/
type: object
scopeSelector:
description: |-
scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota
but expressed using ScopeSelectorOperator in combination with possible values.
For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched.
properties:
matchExpressions:
description: A list of scope selector requirements by scope
of the resources.
items:
description: |-
A scoped-resource selector requirement is a selector that contains values, a scope name, and an operator
that relates the scope name and values.
properties:
operator:
description: |-
Represents a scope's relationship to a set of values.
Valid operators are In, NotIn, Exists, DoesNotExist.
type: string
scopeName:
description: The name of the scope that the selector
applies to.
type: string
values:
description: |-
An array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty.
This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- operator
- scopeName
type: object
type: array
type: object
x-kubernetes-map-type: atomic
scopes:
description: |-
A collection of filters that must match each object tracked by a quota.
If not specified, the quota matches all objects.
items:
description: A ResourceQuotaScope defines a filter that must
match each object tracked by a quota
type: string
type: array
type: object
type: object
status:
description: Status reflects the observed state of the ClusterSet.
@@ -206,6 +322,9 @@ spec:
required:
- spec
type: object
x-kubernetes-validations:
- message: Name must match 'default'
rule: self.metadata.name == "default"
served: true
storage: true
subresources:

View File

@@ -4,13 +4,13 @@ import (
"github.com/urfave/cli/v2"
)
-func NewClusterCommand() *cli.Command {
+func NewClusterCmd(appCtx *AppContext) *cli.Command {
return &cli.Command{
Name: "cluster",
Usage: "cluster command",
Subcommands: []*cli.Command{
-			NewClusterCreateCmd(),
-			NewClusterDeleteCmd(),
+			NewClusterCreateCmd(appCtx),
+			NewClusterDeleteCmd(appCtx),
},
}
}

View File

@@ -3,9 +3,9 @@ package cmds
import (
"context"
"errors"
"fmt"
"net/url"
"os"
"path/filepath"
"slices"
"strings"
"time"
@@ -17,12 +17,11 @@ import (
v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/tools/clientcmd"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
"k8s.io/client-go/util/retry"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
)
type CreateConfig struct {
@@ -38,9 +37,10 @@ type CreateConfig struct {
version string
mode string
kubeconfigServerHost string
clusterset string
}
-func NewClusterCreateCmd() *cli.Command {
+func NewClusterCreateCmd(appCtx *AppContext) *cli.Command {
createConfig := &CreateConfig{}
createFlags := NewCreateFlags(createConfig)
@@ -48,15 +48,16 @@ func NewClusterCreateCmd() *cli.Command {
Name: "create",
Usage: "Create new cluster",
UsageText: "k3kcli cluster create [command options] NAME",
Action: createAction(createConfig),
Flags: append(CommonFlags, createFlags...),
Action: createAction(appCtx, createConfig),
Flags: WithCommonFlags(appCtx, createFlags...),
HideHelpCommand: true,
}
}
-func createAction(config *CreateConfig) cli.ActionFunc {
+func createAction(appCtx *AppContext, config *CreateConfig) cli.ActionFunc {
return func(clx *cli.Context) error {
ctx := context.Background()
client := appCtx.Client
if clx.NArg() != 1 {
return cli.ShowSubcommandHelp(clx)
@@ -67,16 +68,38 @@ func createAction(config *CreateConfig) cli.ActionFunc {
return errors.New("invalid cluster name")
}
-		restConfig, err := loadRESTConfig()
-		if err != nil {
+		namespace := appCtx.Namespace(name)
+
+		// if clusterset is set, use the namespace of the clusterset
+		if config.clusterset != "" {
+			namespace = appCtx.Namespace(config.clusterset)
+		}
+
+		if err := createNamespace(ctx, client, namespace); err != nil {
 			return err
 		}

-		ctrlClient, err := client.New(restConfig, client.Options{
-			Scheme: Scheme,
-		})
-		if err != nil {
-			return err
-		}
+		// if clusterset is set, create the cluster set
+		if config.clusterset != "" {
+			namespace = appCtx.Namespace(config.clusterset)
+			clusterSet := &v1alpha1.ClusterSet{}
+			if err := client.Get(ctx, types.NamespacedName{Name: "default", Namespace: namespace}, clusterSet); err != nil {
+				if !apierrors.IsNotFound(err) {
+					return err
+				}
+
+				clusterSet, err = createClusterSet(ctx, client, namespace, v1alpha1.ClusterMode(config.mode), config.clusterset)
+				if err != nil {
+					return err
+				}
+			}
+
+			logrus.Infof("ClusterSet in namespace [%s] available", namespace)
+
+			if !slices.Contains(clusterSet.Spec.AllowedModeTypes, v1alpha1.ClusterMode(config.mode)) {
+				return fmt.Errorf("invalid '%s' Cluster mode. ClusterSet only allows %v", config.mode, clusterSet.Spec.AllowedModeTypes)
+			}
+		}
if strings.Contains(config.version, "+") {
@@ -86,25 +109,25 @@ func createAction(config *CreateConfig) cli.ActionFunc {
}
if config.token != "" {
logrus.Infof("Creating cluster token secret")
logrus.Info("Creating cluster token secret")
obj := k3kcluster.TokenSecretObj(config.token, name, Namespace())
obj := k3kcluster.TokenSecretObj(config.token, name, namespace)
if err := ctrlClient.Create(ctx, &obj); err != nil {
if err := client.Create(ctx, &obj); err != nil {
return err
}
}
logrus.Infof("Creating a new cluster [%s]", name)
logrus.Infof("Creating cluster [%s] in namespace [%s]", name, namespace)
cluster := newCluster(name, Namespace(), config)
cluster := newCluster(name, namespace, config)
cluster.Spec.Expose = &v1alpha1.ExposeConfig{
NodePort: &v1alpha1.NodePortConfig{},
}
// add Host IP address as an extra TLS-SAN to expose the k3k cluster
-		url, err := url.Parse(restConfig.Host)
+		url, err := url.Parse(appCtx.RestConfig.Host)
if err != nil {
return err
}
@@ -116,7 +139,7 @@ func createAction(config *CreateConfig) cli.ActionFunc {
cluster.Spec.TLSSANs = []string{host[0]}
-		if err := ctrlClient.Create(ctx, cluster); err != nil {
+		if err := client.Create(ctx, cluster); err != nil {
if apierrors.IsAlreadyExists(err) {
logrus.Infof("Cluster [%s] already exists", name)
} else {
@@ -140,29 +163,13 @@ func createAction(config *CreateConfig) cli.ActionFunc {
var kubeconfig *clientcmdapi.Config
if err := retry.OnError(availableBackoff, apierrors.IsNotFound, func() error {
-			kubeconfig, err = cfg.Extract(ctx, ctrlClient, cluster, host[0])
+			kubeconfig, err = cfg.Extract(ctx, client, cluster, host[0])
return err
}); err != nil {
return err
}
-		pwd, err := os.Getwd()
-		if err != nil {
-			return err
-		}
-
-		logrus.Infof(`You can start using the cluster with:
-
-	export KUBECONFIG=%s
-	kubectl cluster-info
-	`, filepath.Join(pwd, cluster.Name+"-kubeconfig.yaml"))
-
-		kubeconfigData, err := clientcmd.Write(*kubeconfig)
-		if err != nil {
-			return err
-		}
-
-		return os.WriteFile(cluster.Name+"-kubeconfig.yaml", kubeconfigData, 0644)
+		return writeKubeconfigFile(cluster, kubeconfig)
}
}

View File

@@ -94,5 +94,10 @@ func NewCreateFlags(config *CreateConfig) []cli.Flag {
Usage: "override the kubeconfig server host",
Destination: &config.kubeconfigServerHost,
},
&cli.StringFlag{
Name: "clusterset",
Usage: "The clusterset to create the cluster in",
Destination: &config.clusterset,
},
}
}

View File

@@ -6,55 +6,112 @@ import (
"github.com/rancher/k3k/pkg/apis/k3k.io/v1alpha1"
k3kcluster "github.com/rancher/k3k/pkg/controller/cluster"
"github.com/rancher/k3k/pkg/controller/cluster/agent"
"github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
"k8s.io/apimachinery/pkg/types"
ctrlclient "sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)
-func NewClusterDeleteCmd() *cli.Command {
+var keepData bool
+
+func NewClusterDeleteCmd(appCtx *AppContext) *cli.Command {
return &cli.Command{
Name: "delete",
Usage: "Delete an existing cluster",
UsageText: "k3kcli cluster delete [command options] NAME",
Action: delete,
Flags: CommonFlags,
Name: "delete",
Usage: "Delete an existing cluster",
UsageText: "k3kcli cluster delete [command options] NAME",
Action: delete(appCtx),
Flags: WithCommonFlags(appCtx, &cli.BoolFlag{
Name: "keep-data",
Usage: "keeps persistence volumes created for the cluster after deletion",
Destination: &keepData,
}),
HideHelpCommand: true,
}
}
-func delete(clx *cli.Context) error {
-	ctx := context.Background()
+func delete(appCtx *AppContext) cli.ActionFunc {
+	return func(clx *cli.Context) error {
+		ctx := context.Background()
+		client := appCtx.Client
-	if clx.NArg() != 1 {
-		return cli.ShowSubcommandHelp(clx)
+		if clx.NArg() != 1 {
+			return cli.ShowSubcommandHelp(clx)
}
name := clx.Args().First()
if name == k3kcluster.ClusterInvalidName {
return errors.New("invalid cluster name")
}
namespace := appCtx.Namespace(name)
logrus.Infof("Deleting [%s] cluster in namespace [%s]", name, namespace)
cluster := v1alpha1.Cluster{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
}
// keep bootstrap secrets and tokens if --keep-data flag is passed
if keepData {
// skip removing tokenSecret
if err := RemoveOwnerReferenceFromSecret(ctx, k3kcluster.TokenSecretName(cluster.Name), client, cluster); err != nil {
return err
}
// skip removing webhook secret
if err := RemoveOwnerReferenceFromSecret(ctx, agent.WebhookSecretName(cluster.Name), client, cluster); err != nil {
return err
}
} else {
matchingLabels := ctrlclient.MatchingLabels(map[string]string{"cluster": cluster.Name, "role": "server"})
listOpts := ctrlclient.ListOptions{Namespace: cluster.Namespace}
matchingLabels.ApplyToList(&listOpts)
deleteOpts := &ctrlclient.DeleteAllOfOptions{ListOptions: listOpts}
if err := client.DeleteAllOf(ctx, &v1.PersistentVolumeClaim{}, deleteOpts); err != nil {
return ctrlclient.IgnoreNotFound(err)
}
}
if err := client.Delete(ctx, &cluster); err != nil {
return ctrlclient.IgnoreNotFound(err)
}
return nil
}
-	name := clx.Args().First()
-
-	if name == k3kcluster.ClusterInvalidName {
-		return errors.New("invalid cluster name")
-	}
-
-	restConfig, err := loadRESTConfig()
-	if err != nil {
-		return err
-	}
-
-	ctrlClient, err := client.New(restConfig, client.Options{
-		Scheme: Scheme,
-	})
-	if err != nil {
-		return err
-	}
-
-	logrus.Infof("deleting [%s] cluster", name)
-
-	cluster := v1alpha1.Cluster{
-		ObjectMeta: metav1.ObjectMeta{
-			Name:      name,
-			Namespace: Namespace(),
-		},
-	}
-
-	return ctrlClient.Delete(ctx, &cluster)
-}
func RemoveOwnerReferenceFromSecret(ctx context.Context, name string, cl ctrlclient.Client, cluster v1alpha1.Cluster) error {
var secret v1.Secret
key := types.NamespacedName{
Name: name,
Namespace: cluster.Namespace,
}
if err := cl.Get(ctx, key, &secret); err != nil {
if apierrors.IsNotFound(err) {
logrus.Warnf("%s secret is not found", name)
return nil
}
return err
}
if controllerutil.HasControllerReference(&secret) {
if err := controllerutil.RemoveOwnerReference(&cluster, &secret, cl.Scheme()); err != nil {
return err
}
return cl.Update(ctx, &secret)
}
return nil
}

cli/cmds/clusterset.go (new file)
View File

@@ -0,0 +1,16 @@
package cmds
import (
"github.com/urfave/cli/v2"
)
func NewClusterSetCmd(appCtx *AppContext) *cli.Command {
return &cli.Command{
Name: "clusterset",
Usage: "clusterset command",
Subcommands: []*cli.Command{
NewClusterSetCreateCmd(appCtx),
NewClusterSetDeleteCmd(appCtx),
},
}
}

View File

@@ -0,0 +1,138 @@
package cmds
import (
"context"
"errors"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1alpha1"
k3kcluster "github.com/rancher/k3k/pkg/controller/cluster"
"github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
)
type ClusterSetCreateConfig struct {
mode string
displayName string
}
func NewClusterSetCreateCmd(appCtx *AppContext) *cli.Command {
config := &ClusterSetCreateConfig{}
createFlags := []cli.Flag{
&cli.StringFlag{
Name: "mode",
Usage: "The allowed mode type of the clusterset",
Destination: &config.mode,
Value: "shared",
Action: func(ctx *cli.Context, value string) error {
switch value {
case string(v1alpha1.VirtualClusterMode), string(v1alpha1.SharedClusterMode):
return nil
default:
return errors.New(`mode should be one of "shared" or "virtual"`)
}
},
},
&cli.StringFlag{
Name: "display-name",
Usage: "The display name of the clusterset",
Destination: &config.displayName,
},
}
return &cli.Command{
Name: "create",
Usage: "Create new clusterset",
UsageText: "k3kcli clusterset create [command options] NAME",
Action: clusterSetCreateAction(appCtx, config),
Flags: WithCommonFlags(appCtx, createFlags...),
HideHelpCommand: true,
}
}
func clusterSetCreateAction(appCtx *AppContext, config *ClusterSetCreateConfig) cli.ActionFunc {
return func(clx *cli.Context) error {
ctx := context.Background()
client := appCtx.Client
if clx.NArg() != 1 {
return cli.ShowSubcommandHelp(clx)
}
name := clx.Args().First()
if name == k3kcluster.ClusterInvalidName {
return errors.New("invalid cluster name")
}
displayName := config.displayName
if displayName == "" {
displayName = name
}
// if both display name and namespace are set the name is ignored
if config.displayName != "" && appCtx.namespace != "" {
logrus.Warnf("Ignoring name [%s] because display name and namespace are set", name)
}
namespace := appCtx.Namespace(name)
if err := createNamespace(ctx, client, namespace); err != nil {
return err
}
_, err := createClusterSet(ctx, client, namespace, v1alpha1.ClusterMode(config.mode), displayName)
return err
}
}
func createNamespace(ctx context.Context, client client.Client, name string) error {
ns := &v1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
if err := client.Get(ctx, types.NamespacedName{Name: name}, ns); err != nil {
if !apierrors.IsNotFound(err) {
return err
}
logrus.Infof(`Creating namespace [%s]`, name)
if err := client.Create(ctx, ns); err != nil {
return err
}
}
return nil
}
func createClusterSet(ctx context.Context, client client.Client, namespace string, mode v1alpha1.ClusterMode, displayName string) (*v1alpha1.ClusterSet, error) {
logrus.Infof("Creating clusterset in namespace [%s]", namespace)
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
TypeMeta: metav1.TypeMeta{
Kind: "ClusterSet",
APIVersion: "k3k.io/v1alpha1",
},
Spec: v1alpha1.ClusterSetSpec{
AllowedModeTypes: []v1alpha1.ClusterMode{mode},
DisplayName: displayName,
},
}
if err := client.Create(ctx, clusterSet); err != nil {
if apierrors.IsAlreadyExists(err) {
logrus.Infof("ClusterSet in namespace [%s] already exists", namespace)
} else {
return nil, err
}
}
return clusterSet, nil
}

View File

@@ -0,0 +1,61 @@
package cmds
import (
"context"
"errors"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1alpha1"
k3kcluster "github.com/rancher/k3k/pkg/controller/cluster"
"github.com/sirupsen/logrus"
"github.com/urfave/cli/v2"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func NewClusterSetDeleteCmd(appCtx *AppContext) *cli.Command {
return &cli.Command{
Name: "delete",
Usage: "Delete an existing clusterset",
UsageText: "k3kcli clusterset delete [command options] NAME",
Action: clusterSetDeleteAction(appCtx),
Flags: WithCommonFlags(appCtx),
HideHelpCommand: true,
}
}
func clusterSetDeleteAction(appCtx *AppContext) cli.ActionFunc {
return func(clx *cli.Context) error {
ctx := context.Background()
client := appCtx.Client
if clx.NArg() != 1 {
return cli.ShowSubcommandHelp(clx)
}
name := clx.Args().First()
if name == k3kcluster.ClusterInvalidName {
return errors.New("invalid cluster name")
}
namespace := appCtx.Namespace(name)
logrus.Infof("Deleting clusterset in namespace [%s]", namespace)
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
}
if err := client.Delete(ctx, clusterSet); err != nil {
if apierrors.IsNotFound(err) {
logrus.Warnf("ClusterSet not found in namespace [%s]", namespace)
} else {
return err
}
}
return nil
}
}

View File

@@ -20,7 +20,6 @@ import (
"k8s.io/client-go/tools/clientcmd"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
"k8s.io/client-go/util/retry"
"sigs.k8s.io/controller-runtime/pkg/client"
)
var (
@@ -73,86 +72,88 @@ var (
}
)
var subcommands = []*cli.Command{
{
func NewKubeconfigCmd(appCtx *AppContext) *cli.Command {
return &cli.Command{
Name: "kubeconfig",
Usage: "Manage kubeconfig for clusters",
Subcommands: []*cli.Command{
NewKubeconfigGenerateCmd(appCtx),
},
}
}
func NewKubeconfigGenerateCmd(appCtx *AppContext) *cli.Command {
return &cli.Command{
Name: "generate",
Usage: "Generate kubeconfig for clusters",
SkipFlagParsing: false,
Action: generate,
Flags: append(CommonFlags, generateKubeconfigFlags...),
},
}
func NewKubeconfigCommand() *cli.Command {
return &cli.Command{
Name: "kubeconfig",
Usage: "Manage kubeconfig for clusters",
Subcommands: subcommands,
Action: generate(appCtx),
Flags: WithCommonFlags(appCtx, generateKubeconfigFlags...),
}
}
func generate(clx *cli.Context) error {
restConfig, err := loadRESTConfig()
if err != nil {
return err
}
func generate(appCtx *AppContext) cli.ActionFunc {
return func(clx *cli.Context) error {
ctx := context.Background()
client := appCtx.Client
ctrlClient, err := client.New(restConfig, client.Options{
Scheme: Scheme,
})
if err != nil {
return err
}
clusterKey := types.NamespacedName{
Name: name,
Namespace: appCtx.Namespace(name),
}
clusterKey := types.NamespacedName{
Name: name,
Namespace: Namespace(),
}
var cluster v1alpha1.Cluster
var cluster v1alpha1.Cluster
ctx := context.Background()
if err := ctrlClient.Get(ctx, clusterKey, &cluster); err != nil {
return err
}
url, err := url.Parse(restConfig.Host)
if err != nil {
return err
}
host := strings.Split(url.Host, ":")
if kubeconfigServerHost != "" {
host = []string{kubeconfigServerHost}
if err := altNames.Set(kubeconfigServerHost); err != nil {
if err := client.Get(ctx, clusterKey, &cluster); err != nil {
return err
}
url, err := url.Parse(appCtx.RestConfig.Host)
if err != nil {
return err
}
host := strings.Split(url.Host, ":")
if kubeconfigServerHost != "" {
host = []string{kubeconfigServerHost}
if err := altNames.Set(kubeconfigServerHost); err != nil {
return err
}
}
certAltNames := certs.AddSANs(altNames.Value())
orgs := org.Value()
if orgs == nil {
orgs = []string{user.SystemPrivilegedGroup}
}
cfg := kubeconfig.KubeConfig{
CN: cn,
ORG: orgs,
ExpiryDate: time.Hour * 24 * time.Duration(expirationDays),
AltNames: certAltNames,
}
logrus.Infof("waiting for cluster to be available..")
var kubeconfig *clientcmdapi.Config
if err := retry.OnError(controller.Backoff, apierrors.IsNotFound, func() error {
kubeconfig, err = cfg.Extract(ctx, client, &cluster, host[0])
return err
}); err != nil {
return err
}
return writeKubeconfigFile(&cluster, kubeconfig)
}
}
certAltNames := certs.AddSANs(altNames.Value())
orgs := org.Value()
if orgs == nil {
orgs = []string{user.SystemPrivilegedGroup}
}
cfg := kubeconfig.KubeConfig{
CN: cn,
ORG: orgs,
ExpiryDate: time.Hour * 24 * time.Duration(expirationDays),
AltNames: certAltNames,
}
logrus.Infof("waiting for cluster to be available..")
var kubeconfig *clientcmdapi.Config
if err := retry.OnError(controller.Backoff, apierrors.IsNotFound, func() error {
kubeconfig, err = cfg.Extract(ctx, ctrlClient, &cluster, host[0])
return err
}); err != nil {
return err
func writeKubeconfigFile(cluster *v1alpha1.Cluster, kubeconfig *clientcmdapi.Config) error {
if configName == "" {
configName = cluster.Namespace + "-" + cluster.Name + "-kubeconfig.yaml"
}
pwd, err := os.Getwd()
@@ -160,11 +161,7 @@ func generate(clx *cli.Context) error {
return err
}
if configName == "" {
configName = cluster.Name + "-kubeconfig.yaml"
}
logrus.Infof(`You can start using the cluster with:
logrus.Infof(`You can start using the cluster with:
export KUBECONFIG=%s
kubectl cluster-info

View File

@@ -11,57 +11,49 @@ import (
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
defaultNamespace = "default"
)
type AppContext struct {
RestConfig *rest.Config
Client client.Client
var (
Scheme = runtime.NewScheme()
debug bool
// Global flags
Debug bool
Kubeconfig string
namespace string
CommonFlags = []cli.Flag{
&cli.StringFlag{
Name: "kubeconfig",
Usage: "kubeconfig path",
Destination: &Kubeconfig,
DefaultText: "$HOME/.kube/config or $KUBECONFIG if set",
},
&cli.StringFlag{
Name: "namespace",
Usage: "namespace to create the k3k cluster in",
Destination: &namespace,
},
}
)
func init() {
_ = clientgoscheme.AddToScheme(Scheme)
_ = v1alpha1.AddToScheme(Scheme)
}
func NewApp() *cli.App {
appCtx := &AppContext{}
app := cli.NewApp()
app.Name = "k3kcli"
app.Usage = "CLI for K3K"
app.Flags = []cli.Flag{
&cli.BoolFlag{
Name: "debug",
Usage: "Turn on debug logs",
Destination: &debug,
EnvVars: []string{"K3K_DEBUG"},
},
}
app.Flags = WithCommonFlags(appCtx)
app.Before = func(clx *cli.Context) error {
-		if debug {
+		if appCtx.Debug {
logrus.SetLevel(logrus.DebugLevel)
}
restConfig, err := loadRESTConfig(appCtx.Kubeconfig)
if err != nil {
return err
}
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
_ = v1alpha1.AddToScheme(scheme)
ctrlClient, err := client.New(restConfig, client.Options{Scheme: scheme})
if err != nil {
return err
}
appCtx.RestConfig = restConfig
appCtx.Client = ctrlClient
return nil
}
@@ -71,30 +63,55 @@ func NewApp() *cli.App {
}
app.Commands = []*cli.Command{
-		NewClusterCommand(),
-		NewKubeconfigCommand(),
+		NewClusterCmd(appCtx),
+		NewClusterSetCmd(appCtx),
+		NewKubeconfigCmd(appCtx),
}
return app
}
-func Namespace() string {
-	if namespace == "" {
-		return defaultNamespace
+func (ctx *AppContext) Namespace(name string) string {
+	if ctx.namespace != "" {
+		return ctx.namespace
 	}

-	return namespace
+	return "k3k-" + name
}
-func loadRESTConfig() (*rest.Config, error) {
+func loadRESTConfig(kubeconfig string) (*rest.Config, error) {
loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
configOverrides := &clientcmd.ConfigOverrides{}
if Kubeconfig != "" {
loadingRules.ExplicitPath = Kubeconfig
if kubeconfig != "" {
loadingRules.ExplicitPath = kubeconfig
}
kubeConfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, configOverrides)
return kubeConfig.ClientConfig()
}
func WithCommonFlags(appCtx *AppContext, flags ...cli.Flag) []cli.Flag {
commonFlags := []cli.Flag{
&cli.BoolFlag{
Name: "debug",
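For orientation, a minimal Go sketch (not part of this compare) of a Cluster that uses the relocated limit fields; the type and field names follow the v1alpha1 changes in this diff, while the namespace and quantities are placeholder values:

```go
package main

import (
	"fmt"

	"github.com/rancher/k3k/pkg/apis/k3k.io/v1alpha1"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// serverLimit/workerLimit are now plain ResourceLists directly under
	// spec, replacing the removed spec.clusterLimit wrapper object.
	cluster := &v1alpha1.Cluster{
		ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "k3k-example"},
		Spec: v1alpha1.ClusterSpec{
			ServerLimit: v1.ResourceList{
				v1.ResourceCPU:    resource.MustParse("1"),
				v1.ResourceMemory: resource.MustParse("1Gi"),
			},
			WorkerLimit: v1.ResourceList{
				v1.ResourceCPU:    resource.MustParse("500m"),
				v1.ResourceMemory: resource.MustParse("512Mi"),
			},
		},
	}

	fmt.Printf("%s limits: %v\n", cluster.Name, cluster.Spec.ServerLimit)
}
```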
Usage: "Turn on debug logs",
Destination: &appCtx.Debug,
EnvVars: []string{"K3K_DEBUG"},
},
&cli.StringFlag{
Name: "kubeconfig",
Usage: "kubeconfig path",
Destination: &appCtx.Kubeconfig,
DefaultText: "$HOME/.kube/config or $KUBECONFIG if set",
},
&cli.StringFlag{
Name: "namespace",
Usage: "namespace to create the k3k cluster in",
Destination: &appCtx.namespace,
},
}
return append(commonFlags, flags...)
}

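To make the CLI refactor above concrete, a hypothetical sketch of how a new subcommand would plug into the AppContext/WithCommonFlags pattern; NewExampleCmd and the "example" command are invented for illustration and do not exist in the repository:

```go
package cmds

import (
	"context"

	"github.com/urfave/cli/v2"
	"k8s.io/apimachinery/pkg/types"

	"github.com/rancher/k3k/pkg/apis/k3k.io/v1alpha1"
)

// NewExampleCmd (hypothetical) shows the pattern: flags bind to the shared
// AppContext, and app.Before has already populated appCtx.Client and
// appCtx.RestConfig by the time the action runs.
func NewExampleCmd(appCtx *AppContext) *cli.Command {
	return &cli.Command{
		Name:  "example",
		Usage: "hypothetical command illustrating the AppContext pattern",
		Flags: WithCommonFlags(appCtx),
		Action: func(clx *cli.Context) error {
			name := clx.Args().First()

			// Namespace() falls back to "k3k-<name>" unless --namespace was set.
			key := types.NamespacedName{Name: name, Namespace: appCtx.Namespace(name)}

			var cluster v1alpha1.Cluster

			return appCtx.Client.Get(context.Background(), key, &cluster)
		},
	}
}
```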
View File

@@ -122,7 +122,7 @@ You can check the [k3kcli documentation](./cli/cli-docs.md) for the full specs.
* Ephemeral Storage:
```bash
-k3kcli cluster create my-cluster --persistence-type ephemeral
+k3kcli cluster create --persistence-type ephemeral my-cluster
```
*Important Notes:*

View File

@@ -8,6 +8,8 @@ k3kcli
```
[--debug]
[--kubeconfig]=[value]
[--namespace]=[value]
```
**Usage**:
@@ -20,6 +22,10 @@ k3kcli [GLOBAL OPTIONS] command [COMMAND OPTIONS] [ARGUMENTS...]
**--debug**: Turn on debug logs
**--kubeconfig**="": kubeconfig path (default: $HOME/.kube/config or $KUBECONFIG if set)
**--namespace**="": namespace to create the k3k cluster in
# COMMANDS
@@ -39,6 +45,10 @@ Create new cluster
**--cluster-cidr**="": cluster CIDR
**--clusterset**="": The clusterset to create the cluster in
**--debug**: Turn on debug logs
**--kubeconfig**="": kubeconfig path (default: $HOME/.kube/config or $KUBECONFIG if set)
**--kubeconfig-server**="": override the kubeconfig server host
@@ -67,6 +77,42 @@ Delete an existing cluster
>k3kcli cluster delete [command options] NAME
**--debug**: Turn on debug logs
**--keep-data**: keeps persistence volumes created for the cluster after deletion
**--kubeconfig**="": kubeconfig path (default: $HOME/.kube/config or $KUBECONFIG if set)
**--namespace**="": namespace to create the k3k cluster in
## clusterset
clusterset command
### create
Create new clusterset
>k3kcli clusterset create [command options] NAME
**--debug**: Turn on debug logs
**--display-name**="": The display name of the clusterset
**--kubeconfig**="": kubeconfig path (default: $HOME/.kube/config or $KUBECONFIG if set)
**--mode**="": The allowed mode type of the clusterset (default: "shared")
**--namespace**="": namespace to create the k3k cluster in
### delete
Delete an existing clusterset
>k3kcli clusterset delete [command options] NAME
**--debug**: Turn on debug logs
**--kubeconfig**="": kubeconfig path (default: $HOME/.kube/config or $KUBECONFIG if set)
**--namespace**="": namespace to create the k3k cluster in
@@ -85,6 +131,8 @@ Generate kubeconfig for clusters
**--config-name**="": the name of the generated kubeconfig file
**--debug**: Turn on debug logs
**--expiration-days**="": Expiration date of the certificates used for the kubeconfig (default: 356)
**--kubeconfig**="": kubeconfig path (default: $HOME/.kube/config or $KUBECONFIG if set)

View File

@@ -51,23 +51,6 @@ _Appears in:_
| `spec` _[ClusterSpec](#clusterspec)_ | Spec defines the desired state of the Cluster. | \{ \} | |
#### ClusterLimit
ClusterLimit defines resource limits for server and agent nodes.
_Appears in:_
- [ClusterSpec](#clusterspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `serverLimit` _[ResourceList](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcelist-v1-core)_ | ServerLimit specifies resource limits for server nodes. | | |
| `workerLimit` _[ResourceList](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcelist-v1-core)_ | WorkerLimit specifies resource limits for agent nodes. | | |
#### ClusterList
@@ -124,12 +107,13 @@ _Appears in:_
| `expose` _[ExposeConfig](#exposeconfig)_ | Expose specifies options for exposing the API server.<br />By default, it's only exposed as a ClusterIP. | | |
| `nodeSelector` _object (keys:string, values:string)_ | NodeSelector specifies node labels to constrain where server/agent pods are scheduled.<br />In "shared" mode, this also applies to workloads. | | |
| `priorityClass` _string_ | PriorityClass specifies the priorityClassName for server/agent pods.<br />In "shared" mode, this also applies to workloads. | | |
| `clusterLimit` _[ClusterLimit](#clusterlimit)_ | Limit defines resource limits for server/agent nodes. | | |
| `tokenSecretRef` _[SecretReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#secretreference-v1-core)_ | TokenSecretRef is a Secret reference containing the token used by worker nodes to join the cluster.<br />The Secret must have a "token" field in its data. | | |
| `tlsSANs` _string array_ | TLSSANs specifies subject alternative names for the K3s server certificate. | | |
| `serverArgs` _string array_ | ServerArgs specifies ordered key-value pairs for K3s server pods.<br />Example: ["--tls-san=example.com"] | | |
| `agentArgs` _string array_ | AgentArgs specifies ordered key-value pairs for K3s agent pods.<br />Example: ["--node-name=my-agent-node"] | | |
| `addons` _[Addon](#addon) array_ | Addons specifies secrets containing raw YAML to deploy on cluster startup. | | |
| `serverLimit` _[ResourceList](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcelist-v1-core)_ | ServerLimit specifies resource limits for server nodes. | | |
| `workerLimit` _[ResourceList](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcelist-v1-core)_ | WorkerLimit specifies resource limits for agent nodes. | | |

View File

@@ -4,7 +4,7 @@ metadata:
name: clusterset-example
# spec:
# disableNetworkPolicy: false
-# allowedNodeTypes:
+# allowedModeTypes:
# - "shared"
# - "virtual"
# podSecurityAdmissionLevel: "baseline"

go.mod
View File

@@ -33,6 +33,7 @@ require (
k8s.io/client-go v0.29.11
k8s.io/component-base v0.29.11
k8s.io/component-helpers v0.29.11
k8s.io/kubectl v0.29.11
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738
sigs.k8s.io/controller-runtime v0.17.5
)
@@ -207,7 +208,6 @@ require (
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kms v0.29.11 // indirect
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect
k8s.io/kubectl v0.29.11 // indirect
oras.land/oras-go v1.2.5 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.0 // indirect
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect

View File

@@ -40,6 +40,7 @@ import (
"k8s.io/client-go/transport/spdy"
compbasemetrics "k8s.io/component-base/metrics"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/manager"
)
@@ -398,6 +399,11 @@ func (p *Provider) createPod(ctx context.Context, pod *corev1.Pod) error {
"virtual_namespace", pod.Namespace, "virtual_name", pod.Name,
)
// set ownerReference to the cluster object
if err := controllerutil.SetControllerReference(&cluster, tPod, p.HostClient.Scheme()); err != nil {
return err
}
return p.HostClient.Create(ctx, tPod)
}

View File

@@ -114,11 +114,6 @@ type ClusterSpec struct {
// +optional
PriorityClass string `json:"priorityClass,omitempty"`
// Limit defines resource limits for server/agent nodes.
//
// +optional
Limit *ClusterLimit `json:"clusterLimit,omitempty"`
// TokenSecretRef is a Secret reference containing the token used by worker nodes to join the cluster.
// The Secret must have a "token" field in its data.
//
@@ -146,6 +141,16 @@ type ClusterSpec struct {
//
// +optional
Addons []Addon `json:"addons,omitempty"`
// ServerLimit specifies resource limits for server nodes.
//
// +optional
ServerLimit v1.ResourceList `json:"serverLimit,omitempty"`
// WorkerLimit specifies resource limits for agent nodes.
//
// +optional
WorkerLimit v1.ResourceList `json:"workerLimit,omitempty"`
}
// ClusterMode is the possible provisioning mode of a Cluster.
@@ -175,15 +180,6 @@ const (
DynamicPersistenceMode = PersistenceMode("dynamic")
)
// ClusterLimit defines resource limits for server and agent nodes.
type ClusterLimit struct {
// ServerLimit specifies resource limits for server nodes.
ServerLimit v1.ResourceList `json:"serverLimit,omitempty"`
// WorkerLimit specifies resource limits for agent nodes.
WorkerLimit v1.ResourceList `json:"workerLimit,omitempty"`
}
// Addon specifies a Secret containing YAML to be deployed on cluster startup.
type Addon struct {
// SecretNamespace is the namespace of the Secret.
@@ -317,6 +313,9 @@ type ClusterList struct {
// +kubebuilder:storageversion
// +kubebuilder:subresource:status
// +kubebuilder:object:root=true
// +kubebuilder:validation:XValidation:rule="self.metadata.name == \"default\"",message="Name must match 'default'"
// +kubebuilder:printcolumn:JSONPath=".spec.displayName",name=Display Name,type=string
// +kubebuilder:printcolumn:JSONPath=".metadata.creationTimestamp",name=Age,type=date
// ClusterSet represents a group of virtual Kubernetes clusters managed by k3k.
// It allows defining common configurations and constraints for the clusters within the set.
@@ -338,10 +337,21 @@ type ClusterSet struct {
// ClusterSetSpec defines the desired state of a ClusterSet.
type ClusterSetSpec struct {
-	// DefaultLimits specifies the default resource limits for servers/agents when a cluster in the set doesn't provide any.
+	// DisplayName is the human-readable name for the set.
 	//
 	// +optional
-	DefaultLimits *ClusterLimit `json:"defaultLimits,omitempty"`
+	DisplayName string `json:"displayName,omitempty"`
// Quota specifies the resource limits for clusters within a clusterset.
//
// +optional
Quota *v1.ResourceQuotaSpec `json:"quota,omitempty"`
// Limit specifies the LimitRange that will be applied to all pods within the ClusterSet
// to set defaults and constraints (min/max)
//
// +optional
Limit *v1.LimitRangeSpec `json:"limit,omitempty"`
// DefaultNodeSelector specifies the node selector that applies to all clusters (server + agent) in the set.
//
@@ -353,18 +363,13 @@ type ClusterSetSpec struct {
// +optional
DefaultPriorityClass string `json:"defaultPriorityClass,omitempty"`
-	// MaxLimits specifies the maximum resource limits that apply to all clusters (server + agent) in the set.
-	//
-	// +optional
-	MaxLimits v1.ResourceList `json:"maxLimits,omitempty"`
-
-	// AllowedNodeTypes specifies the allowed cluster provisioning modes. Defaults to [shared].
+	// AllowedModeTypes specifies the allowed cluster provisioning modes. Defaults to [shared].
//
// +kubebuilder:default={shared}
// +kubebuilder:validation:XValidation:message="mode is immutable",rule="self == oldSelf"
// +kubebuilder:validation:MinItems=1
// +optional
-	AllowedNodeTypes []ClusterMode `json:"allowedNodeTypes,omitempty"`
+	AllowedModeTypes []ClusterMode `json:"allowedModeTypes,omitempty"`
// DisableNetworkPolicy indicates whether to disable the creation of a default network policy for cluster isolation.
//

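A minimal sketch, again not from the diff, of a ClusterSet built against the reworked spec; the names mirror the types above, the quota and limit values are placeholders, and the comments reflect the reconcileQuota/reconcileLimit behavior added in this compare:

```go
package main

import (
	"fmt"

	"github.com/rancher/k3k/pkg/apis/k3k.io/v1alpha1"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	set := &v1alpha1.ClusterSet{
		// The new CEL rule on the CRD only admits the name "default",
		// making the ClusterSet a per-namespace singleton.
		ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: "k3k-team-a"},
		Spec: v1alpha1.ClusterSetSpec{
			DisplayName:      "Team A",
			AllowedModeTypes: []v1alpha1.ClusterMode{v1alpha1.SharedClusterMode},
			// Quota is reconciled into a namespace ResourceQuota.
			Quota: &v1.ResourceQuotaSpec{
				Hard: v1.ResourceList{
					v1.ResourceCPU:    resource.MustParse("8"),
					v1.ResourceMemory: resource.MustParse("16Gi"),
				},
			},
			// Limit is reconciled into a LimitRange for pods in the set.
			Limit: &v1.LimitRangeSpec{
				Limits: []v1.LimitRangeItem{{
					Type:    v1.LimitTypeContainer,
					Default: v1.ResourceList{v1.ResourceCPU: resource.MustParse("500m")},
				}},
			},
		},
	}

	fmt.Println(set.Spec.DisplayName)
}
```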
View File

@@ -55,36 +55,6 @@ func (in *Cluster) DeepCopyObject() runtime.Object {
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterLimit) DeepCopyInto(out *ClusterLimit) {
*out = *in
if in.ServerLimit != nil {
in, out := &in.ServerLimit, &out.ServerLimit
*out = make(v1.ResourceList, len(*in))
for key, val := range *in {
(*out)[key] = val.DeepCopy()
}
}
if in.WorkerLimit != nil {
in, out := &in.WorkerLimit, &out.WorkerLimit
*out = make(v1.ResourceList, len(*in))
for key, val := range *in {
(*out)[key] = val.DeepCopy()
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterLimit.
func (in *ClusterLimit) DeepCopy() *ClusterLimit {
if in == nil {
return nil
}
out := new(ClusterLimit)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterList) DeepCopyInto(out *ClusterList) {
*out = *in
@@ -182,16 +152,14 @@ func (in *ClusterSetList) DeepCopyObject() runtime.Object {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterSetSpec) DeepCopyInto(out *ClusterSetSpec) {
*out = *in
-	if in.MaxLimits != nil {
-		in, out := &in.MaxLimits, &out.MaxLimits
-		*out = make(v1.ResourceList, len(*in))
-		for key, val := range *in {
-			(*out)[key] = val.DeepCopy()
-		}
+	if in.Quota != nil {
+		in, out := &in.Quota, &out.Quota
+		*out = new(v1.ResourceQuotaSpec)
+		(*in).DeepCopyInto(*out)
 	}
-	if in.DefaultLimits != nil {
-		in, out := &in.DefaultLimits, &out.DefaultLimits
-		*out = new(ClusterLimit)
+	if in.Limit != nil {
+		in, out := &in.Limit, &out.Limit
+		*out = new(v1.LimitRangeSpec)
 		(*in).DeepCopyInto(*out)
 	}
if in.DefaultNodeSelector != nil {
@@ -201,8 +169,8 @@ func (in *ClusterSetSpec) DeepCopyInto(out *ClusterSetSpec) {
(*out)[key] = val
}
}
-	if in.AllowedNodeTypes != nil {
-		in, out := &in.AllowedNodeTypes, &out.AllowedNodeTypes
+	if in.AllowedModeTypes != nil {
+		in, out := &in.AllowedModeTypes, &out.AllowedModeTypes
*out = make([]ClusterMode, len(*in))
copy(*out, *in)
}
@@ -260,6 +228,12 @@ func (in *ClusterSpec) DeepCopyInto(out *ClusterSpec) {
*out = new(int32)
**out = **in
}
in.Persistence.DeepCopyInto(&out.Persistence)
if in.Expose != nil {
in, out := &in.Expose, &out.Expose
*out = new(ExposeConfig)
(*in).DeepCopyInto(*out)
}
if in.NodeSelector != nil {
in, out := &in.NodeSelector, &out.NodeSelector
*out = make(map[string]string, len(*in))
@@ -267,16 +241,16 @@ func (in *ClusterSpec) DeepCopyInto(out *ClusterSpec) {
(*out)[key] = val
}
}
if in.Limit != nil {
in, out := &in.Limit, &out.Limit
*out = new(ClusterLimit)
(*in).DeepCopyInto(*out)
}
if in.TokenSecretRef != nil {
in, out := &in.TokenSecretRef, &out.TokenSecretRef
*out = new(v1.SecretReference)
**out = **in
}
if in.TLSSANs != nil {
in, out := &in.TLSSANs, &out.TLSSANs
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.ServerArgs != nil {
in, out := &in.ServerArgs, &out.ServerArgs
*out = make([]string, len(*in))
@@ -287,21 +261,24 @@ func (in *ClusterSpec) DeepCopyInto(out *ClusterSpec) {
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.TLSSANs != nil {
in, out := &in.TLSSANs, &out.TLSSANs
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Addons != nil {
in, out := &in.Addons, &out.Addons
*out = make([]Addon, len(*in))
copy(*out, *in)
}
in.Persistence.DeepCopyInto(&out.Persistence)
if in.Expose != nil {
in, out := &in.Expose, &out.Expose
*out = new(ExposeConfig)
(*in).DeepCopyInto(*out)
if in.ServerLimit != nil {
in, out := &in.ServerLimit, &out.ServerLimit
*out = make(v1.ResourceList, len(*in))
for key, val := range *in {
(*out)[key] = val.DeepCopy()
}
}
if in.WorkerLimit != nil {
in, out := &in.WorkerLimit, &out.WorkerLimit
*out = make(v1.ResourceList, len(*in))
for key, val := range *in {
(*out)[key] = val.DeepCopy()
}
}
return
}

View File

@@ -6,6 +6,7 @@ import (
"github.com/rancher/k3k/pkg/apis/k3k.io/v1alpha1"
"github.com/rancher/k3k/pkg/controller"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
ctrlruntimeclient "sigs.k8s.io/controller-runtime/pkg/client"
@@ -41,14 +42,21 @@ func configSecretName(clusterName string) string {
 func ensureObject(ctx context.Context, cfg *Config, obj ctrlruntimeclient.Object) error {
 	log := ctrl.LoggerFrom(ctx)

-	result, err := controllerutil.CreateOrUpdate(ctx, cfg.client, obj, func() error {
-		return controllerutil.SetControllerReference(cfg.cluster, obj, cfg.scheme)
-	})
+	key := ctrlruntimeclient.ObjectKeyFromObject(obj)
+	log.Info(fmt.Sprintf("ensuring %T", obj), "key", key)

-	if result != controllerutil.OperationResultNone {
-		key := ctrlruntimeclient.ObjectKeyFromObject(obj)
-		log.Info(fmt.Sprintf("ensuring %T", obj), "key", key, "result", result)
+	if err := controllerutil.SetControllerReference(cfg.cluster, obj, cfg.scheme); err != nil {
+		return err
 	}

-	return err
+	if err := cfg.client.Create(ctx, obj); err != nil {
+		if apierrors.IsAlreadyExists(err) {
+			return cfg.client.Update(ctx, obj)
+		}
+
+		return err
+	}
+
+	return nil
 }

View File

@@ -228,5 +228,12 @@ func (v *VirtualAgent) podSpec(image, name string, args []string, affinitySelect
},
}
// specify resource limits if specified for the workers.
if v.cluster.Spec.WorkerLimit != nil {
podSpec.Containers[0].Resources = v1.ResourceRequirements{
Limits: v.cluster.Spec.WorkerLimit,
}
}
return podSpec
}

View File

@@ -123,6 +123,11 @@ func (c *ClusterReconciler) Reconcile(ctx context.Context, req reconcile.Request
// if there was an error during the reconciliation, return
if reconcilerErr != nil {
if errors.Is(reconcilerErr, bootstrap.ErrServerNotReady) {
log.Info("server not ready, requeueing")
return reconcile.Result{RequeueAfter: time.Second * 10}, nil
}
return reconcile.Result{}, reconcilerErr
}

View File

@@ -8,6 +8,7 @@ import (
"errors"
"fmt"
"net/http"
"syscall"
"time"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1alpha1"
@@ -17,6 +18,8 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
)
var ErrServerNotReady = errors.New("server not ready")
type ControlRuntimeBootstrap struct {
ServerCA content `json:"serverCA"`
ServerCAKey content `json:"serverCAKey"`
@@ -68,6 +71,10 @@ func requestBootstrap(token, serverIP string) (*ControlRuntimeBootstrap, error)
resp, err := client.Do(req)
if err != nil {
if errors.Is(err, syscall.ECONNREFUSED) {
return nil, ErrServerNotReady
}
return nil, err
}
defer resp.Body.Close()

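A side note on why this check works: Go's http.Client returns the socket error wrapped (in *url.Error and *net.OpError), and errors.Is unwraps that chain to match the sentinel. A self-contained sketch, assuming nothing is listening on the dialed port:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"syscall"
)

func main() {
	// Dial a port with no listener; the refused connection surfaces as a
	// wrapped ECONNREFUSED that errors.Is can still match through the chain.
	_, err := http.Get("http://127.0.0.1:1")
	fmt.Println(errors.Is(err, syscall.ECONNREFUSED)) // true on most systems
}
```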
View File

@@ -46,11 +46,6 @@ func New(cluster *v1alpha1.Cluster, client client.Client, token, mode string) *S
}
func (s *Server) podSpec(image, name string, persistent bool, startupCmd string) v1.PodSpec {
var limit v1.ResourceList
if s.cluster.Spec.Limit != nil && s.cluster.Spec.Limit.ServerLimit != nil {
limit = s.cluster.Spec.Limit.ServerLimit
}
podSpec := v1.PodSpec{
NodeSelector: s.cluster.Spec.NodeSelector,
PriorityClassName: s.cluster.Spec.PriorityClass,
@@ -118,9 +113,6 @@ func (s *Server) podSpec(image, name string, persistent bool, startupCmd string)
{
Name: name,
Image: image,
Resources: v1.ResourceRequirements{
Limits: limit,
},
Env: []v1.EnvVar{
{
Name: "POD_NAME",
@@ -220,6 +212,13 @@ func (s *Server) podSpec(image, name string, persistent bool, startupCmd string)
}
}
// specify resource limits if specified for the servers.
if s.cluster.Spec.ServerLimit != nil {
podSpec.Containers[0].Resources = v1.ResourceRequirements{
Limits: s.cluster.Spec.ServerLimit,
}
}
return podSpec
}

View File

@@ -16,7 +16,6 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder"
"sigs.k8s.io/controller-runtime/pkg/client"
ctrlruntimeclient "sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller"
"sigs.k8s.io/controller-runtime/pkg/event"
"sigs.k8s.io/controller-runtime/pkg/handler"
@@ -32,7 +31,7 @@ const (
)
type ClusterSetReconciler struct {
-	Client ctrlruntimeclient.Client
+	Client client.Client
Scheme *runtime.Scheme
ClusterCIDR string
}
@@ -49,6 +48,7 @@ func Add(ctx context.Context, mgr manager.Manager, clusterCIDR string) error {
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.ClusterSet{}).
Owns(&networkingv1.NetworkPolicy{}).
Owns(&v1.ResourceQuota{}).
WithOptions(controller.Options{
MaxConcurrentReconciles: maxConcurrentReconciles,
}).
@@ -59,54 +59,33 @@ func Add(ctx context.Context, mgr manager.Manager, clusterCIDR string) error {
).
Watches(
&v1alpha1.Cluster{},
-			handler.EnqueueRequestsFromMapFunc(sameNamespaceEventHandler(reconciler)),
+			handler.EnqueueRequestsFromMapFunc(namespaceEventHandler(reconciler)),
).
Complete(&reconciler)
}
// namespaceEventHandler will enqueue reconciling requests for all the ClusterSets in the changed namespace
// namespaceEventHandler will enqueue a reconcile request for the ClusterSet in the given namespace
func namespaceEventHandler(reconciler ClusterSetReconciler) handler.MapFunc {
return func(ctx context.Context, obj client.Object) []reconcile.Request {
var (
requests []reconcile.Request
set v1alpha1.ClusterSetList
)
// if the object is a Namespace, use the name as the namespace
namespace := obj.GetName()
_ = reconciler.Client.List(ctx, &set, client.InNamespace(obj.GetName()))
for _, clusterSet := range set.Items {
requests = append(requests, reconcile.Request{
NamespacedName: types.NamespacedName{
Name: clusterSet.Name,
Namespace: obj.GetName(),
},
})
// if the object is a namespaced resource, use the namespace
if obj.GetNamespace() != "" {
namespace = obj.GetNamespace()
}
return requests
}
}
// sameNamespaceEventHandler will enqueue reconciling requests for all the ClusterSets in the changed namespace
func sameNamespaceEventHandler(reconciler ClusterSetReconciler) handler.MapFunc {
return func(ctx context.Context, obj client.Object) []reconcile.Request {
var (
requests []reconcile.Request
set v1alpha1.ClusterSetList
)
_ = reconciler.Client.List(ctx, &set, client.InNamespace(obj.GetNamespace()))
for _, clusterSet := range set.Items {
requests = append(requests, reconcile.Request{
NamespacedName: types.NamespacedName{
Name: clusterSet.Name,
Namespace: obj.GetNamespace(),
},
})
key := types.NamespacedName{
Name: "default",
Namespace: namespace,
}
return requests
var clusterSet v1alpha1.ClusterSet
if err := reconciler.Client.Get(ctx, key, &clusterSet); err != nil {
return nil
}
return []reconcile.Request{{NamespacedName: key}}
}
}
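
The rewrite collapses the two old map functions into one: whatever object fires the watch, the handler resolves a single target namespace and enqueues only the singleton "default" ClusterSet there, or nothing when that ClusterSet does not exist. A small, hedged sketch of just the namespace-resolution rule, with illustrative objects:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// targetNamespace mirrors the resolution rule above: a cluster-scoped
// Namespace contributes its name, while any namespaced object
// contributes its namespace.
func targetNamespace(obj client.Object) string {
	if ns := obj.GetNamespace(); ns != "" {
		return ns
	}

	return obj.GetName()
}

func main() {
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "tenant-a"}}
	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "cfg", Namespace: "tenant-a"}}

	// both events resolve to "tenant-a", so both would enqueue the
	// tenant-a/default ClusterSet if it exists
	fmt.Println(targetNamespace(ns), targetNamespace(cm))
}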
@@ -131,42 +110,56 @@ func (c *ClusterSetReconciler) Reconcile(ctx context.Context, req reconcile.Requ
return reconcile.Result{}, client.IgnoreNotFound(err)
}
if err := c.reconcileNetworkPolicy(ctx, &clusterSet); err != nil {
return reconcile.Result{}, err
orig := clusterSet.DeepCopy()
reconcilerErr := c.reconcileClusterSet(ctx, &clusterSet)
// update Status if needed
if !reflect.DeepEqual(orig.Status, clusterSet.Status) {
if err := c.Client.Status().Update(ctx, &clusterSet); err != nil {
return reconcile.Result{}, err
}
}
if err := c.reconcileNamespacePodSecurityLabels(ctx, &clusterSet); err != nil {
return reconcile.Result{}, err
// if there was an error during the reconciliation, return
if reconcilerErr != nil {
return reconcile.Result{}, reconcilerErr
}
if err := c.reconcileClusters(ctx, &clusterSet); err != nil {
return reconcile.Result{}, err
// update ClusterSet if needed
if !reflect.DeepEqual(orig.Spec, clusterSet.Spec) {
if err := c.Client.Update(ctx, &clusterSet); err != nil {
return reconcile.Result{}, err
}
}
// TODO: Add resource quota for clustersets
// if clusterSet.Spec.MaxLimits != nil {
// quota := v1.ResourceQuota{
// ObjectMeta: metav1.ObjectMeta{
// Name: "clusterset-quota",
// Namespace: clusterSet.Namespace,
// OwnerReferences: []metav1.OwnerReference{
// {
// UID: clusterSet.UID,
// Name: clusterSet.Name,
// APIVersion: clusterSet.APIVersion,
// Kind: clusterSet.Kind,
// },
// },
// },
// }
// quota.Spec.Hard = clusterSet.Spec.MaxLimits
// if err := c.Client.Create(ctx, &quota); err != nil {
// return reconcile.Result{}, fmt.Errorf("unable to create resource quota from cluster set: %w", err)
// }
// }
return reconcile.Result{}, nil
}
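
Reconcile now snapshots the ClusterSet up front, delegates all work to reconcileClusterSet, and writes back only what actually changed: status first (even when reconciliation errored), then the spec. A minimal sketch of that compare-then-write pattern in isolation, using a corev1.Namespace as a hedged stand-in for the ClusterSet type:

package sketch

import (
	"context"
	"reflect"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// reconcileAndSave applies mutate, then issues API writes only for the
// pieces that differ from the snapshot, avoiding no-op updates that
// would re-trigger the watch.
func reconcileAndSave(ctx context.Context, c client.Client, ns *corev1.Namespace, mutate func(*corev1.Namespace) error) error {
	orig := ns.DeepCopy()
	mutateErr := mutate(ns)

	// write status first, even when the mutation failed, so observed
	// state is not lost
	if !reflect.DeepEqual(orig.Status, ns.Status) {
		if err := c.Status().Update(ctx, ns); err != nil {
			return err
		}
	}

	// an error from the mutation wins over the spec write, mirroring
	// the order used in Reconcile above
	if mutateErr != nil {
		return mutateErr
	}

	if !reflect.DeepEqual(orig.Spec, ns.Spec) {
		return c.Update(ctx, ns)
	}

	return nil
}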
func (c *ClusterSetReconciler) reconcileClusterSet(ctx context.Context, clusterSet *v1alpha1.ClusterSet) error {
if err := c.reconcileNetworkPolicy(ctx, clusterSet); err != nil {
return err
}
if err := c.reconcileNamespacePodSecurityLabels(ctx, clusterSet); err != nil {
return err
}
if err := c.reconcileLimit(ctx, clusterSet); err != nil {
return err
}
if err := c.reconcileQuota(ctx, clusterSet); err != nil {
return err
}
if err := c.reconcileClusters(ctx, clusterSet); err != nil {
return err
}
return nil
}
func (c *ClusterSetReconciler) reconcileNetworkPolicy(ctx context.Context, clusterSet *v1alpha1.ClusterSet) error {
log := ctrl.LoggerFrom(ctx)
log.Info("reconciling NetworkPolicy")
@@ -315,7 +308,7 @@ func (c *ClusterSetReconciler) reconcileClusters(ctx context.Context, clusterSet
log.Info("reconciling Clusters")
var clusters v1alpha1.ClusterList
if err := c.Client.List(ctx, &clusters, ctrlruntimeclient.InNamespace(clusterSet.Namespace)); err != nil {
if err := c.Client.List(ctx, &clusters, client.InNamespace(clusterSet.Namespace)); err != nil {
return err
}
@@ -340,3 +333,98 @@ func (c *ClusterSetReconciler) reconcileClusters(ctx context.Context, clusterSet
return err
}
func (c *ClusterSetReconciler) reconcileQuota(ctx context.Context, clusterSet *v1alpha1.ClusterSet) error {
if clusterSet.Spec.Quota == nil {
// check if the ResourceQuota object exists and delete it.
var toDeleteResourceQuota v1.ResourceQuota
key := types.NamespacedName{
Name: k3kcontroller.SafeConcatNameWithPrefix(clusterSet.Name),
Namespace: clusterSet.Namespace,
}
if err := c.Client.Get(ctx, key, &toDeleteResourceQuota); err != nil {
return client.IgnoreNotFound(err)
}
return c.Client.Delete(ctx, &toDeleteResourceQuota)
}
// create or update the ResourceQuota
resourceQuota := resourceQuota(clusterSet)
if err := ctrl.SetControllerReference(clusterSet, &resourceQuota, c.Scheme); err != nil {
return err
}
if err := c.Client.Create(ctx, &resourceQuota); err != nil {
if apierrors.IsAlreadyExists(err) {
return c.Client.Update(ctx, &resourceQuota)
}
// propagate errors other than AlreadyExists instead of dropping them
return err
}
return nil
}
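
The Create-then-Update fallback above also appears in reconcileLimit below. A hedged sketch of the same flow factored into a generic helper; the ensure name is illustrative, not an existing function in this package:

package sketch

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ensure tries to create obj, falls back to an update when it already
// exists, and surfaces every other error to the caller.
func ensure(ctx context.Context, c client.Client, obj client.Object) error {
	err := c.Create(ctx, obj)
	if err == nil {
		return nil
	}

	if apierrors.IsAlreadyExists(err) {
		return c.Update(ctx, obj)
	}

	return err
}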
func resourceQuota(clusterSet *v1alpha1.ClusterSet) v1.ResourceQuota {
return v1.ResourceQuota{
ObjectMeta: metav1.ObjectMeta{
Name: k3kcontroller.SafeConcatNameWithPrefix(clusterSet.Name),
Namespace: clusterSet.Namespace,
},
TypeMeta: metav1.TypeMeta{
Kind: "ResourceQuota",
APIVersion: "v1",
},
Spec: *clusterSet.Spec.Quota,
}
}
func (c *ClusterSetReconciler) reconcileLimit(ctx context.Context, clusterSet *v1alpha1.ClusterSet) error {
log := ctrl.LoggerFrom(ctx)
log.Info("Reconciling ClusterSet Limit")
// delete the LimitRange if spec.limit isn't specified.
if clusterSet.Spec.Limit == nil {
var toDeleteLimitRange v1.LimitRange
key := types.NamespacedName{
Name: k3kcontroller.SafeConcatNameWithPrefix(clusterSet.Name),
Namespace: clusterSet.Namespace,
}
if err := c.Client.Get(ctx, key, &toDeleteLimitRange); err != nil {
return client.IgnoreNotFound(err)
}
return c.Client.Delete(ctx, &toDeleteLimitRange)
}
limitRange := limitRange(clusterSet)
if err := ctrl.SetControllerReference(clusterSet, &limitRange, c.Scheme); err != nil {
return err
}
if err := c.Client.Create(ctx, &limitRange); err != nil {
if apierrors.IsAlreadyExists(err) {
return c.Client.Update(ctx, &limitRange)
}
// propagate errors other than AlreadyExists instead of dropping them
return err
}
return nil
}
func limitRange(clusterSet *v1alpha1.ClusterSet) v1.LimitRange {
return v1.LimitRange{
ObjectMeta: metav1.ObjectMeta{
Name: k3kcontroller.SafeConcatNameWithPrefix(clusterSet.Name),
Namespace: clusterSet.Namespace,
},
TypeMeta: metav1.TypeMeta{
Kind: "LimitRange",
APIVersion: "v1",
},
Spec: *clusterSet.Spec.Limit,
}
}
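
Both builders copy the corresponding ClusterSet spec field verbatim into a namespaced object whose name derives from the ClusterSet name. A self-contained sketch of what a ClusterSet with both Quota and Limit would yield; the k3k-default name is an assumed stand-in for the real SafeConcatNameWithPrefix output:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// assumed output of SafeConcatNameWithPrefix("default"); the exact
	// prefix is an implementation detail of the controller package
	name, namespace := "k3k-default", "tenant-a"

	quota := v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: v1.ResourceQuotaSpec{
			// copied verbatim from the ClusterSet .spec.quota
			Hard: v1.ResourceList{
				v1.ResourceCPU:    resource.MustParse("800m"),
				v1.ResourceMemory: resource.MustParse("1Gi"),
			},
		},
	}

	limits := v1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: v1.LimitRangeSpec{
			// copied verbatim from the ClusterSet .spec.limit
			Limits: []v1.LimitRangeItem{{
				Type:           v1.LimitTypeContainer,
				DefaultRequest: v1.ResourceList{v1.ResourceCPU: resource.MustParse("500m")},
			}},
		},
	}

	fmt.Println(quota.Name, limits.Name)
}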

View File

@@ -8,11 +8,11 @@ import (
"github.com/rancher/k3k/pkg/apis/k3k.io/v1alpha1"
k3kcontroller "github.com/rancher/k3k/pkg/controller"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/utils/ptr"
@@ -29,34 +29,62 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
)
BeforeEach(func() {
createdNS := &corev1.Namespace{ObjectMeta: v1.ObjectMeta{GenerateName: "ns-"}}
createdNS := &v1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "ns-"}}
err := k8sClient.Create(context.Background(), createdNS)
Expect(err).To(Not(HaveOccurred()))
namespace = createdNS.Name
})
When("created with a default spec", func() {
It("should have only the 'shared' allowedNodeTypes", func() {
It("should have only the 'shared' allowedModeTypes", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
}
err := k8sClient.Create(ctx, clusterSet)
Expect(err).To(Not(HaveOccurred()))
allowedModeTypes := clusterSet.Spec.AllowedNodeTypes
allowedModeTypes := clusterSet.Spec.AllowedModeTypes
Expect(allowedModeTypes).To(HaveLen(1))
Expect(allowedModeTypes).To(ContainElement(v1alpha1.SharedClusterMode))
})
It("should not be able to create a cluster with a non 'default' name", func() {
err := k8sClient.Create(ctx, &v1alpha1.ClusterSet{
ObjectMeta: metav1.ObjectMeta{
Name: "another-name",
Namespace: namespace,
},
})
Expect(err).To(HaveOccurred())
})
It("should not be able to create two ClusterSets in the same namespace", func() {
err := k8sClient.Create(ctx, &v1alpha1.ClusterSet{
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
})
Expect(err).To(Not(HaveOccurred()))
err = k8sClient.Create(ctx, &v1alpha1.ClusterSet{
ObjectMeta: metav1.ObjectMeta{
Name: "default-2",
Namespace: namespace,
},
})
Expect(err).To(HaveOccurred())
})
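// Taken together, the two cases above pin down the singleton contract:
// a ClusterSet must be named "default", and since names are unique per
// namespace this also caps each namespace at one ClusterSet. Restated
// as a plain rule (illustrative only; the actual enforcement is
// API-side validation, which this compare does not show):
//
//	func validateSingleton(name string) error {
//		if name != "default" {
//			return fmt.Errorf("ClusterSet must be named %q, got %q", "default", name)
//		}
//		return nil
//	}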
It("should create a NetworkPolicy", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
}
@@ -118,9 +146,9 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
When("created with DisableNetworkPolicy", func() {
It("should not create a NetworkPolicy if true", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
DisableNetworkPolicy: true,
@@ -147,9 +175,9 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
It("should delete the NetworkPolicy if changed to false", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
}
@@ -191,9 +219,9 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
It("should recreate the NetworkPolicy if deleted", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
}
@@ -242,12 +270,12 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
When("created specifying the mode", func() {
It("should have the 'virtual' mode if specified", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
AllowedNodeTypes: []v1alpha1.ClusterMode{
AllowedModeTypes: []v1alpha1.ClusterMode{
v1alpha1.VirtualClusterMode,
},
},
@@ -256,19 +284,19 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
err := k8sClient.Create(ctx, clusterSet)
Expect(err).To(Not(HaveOccurred()))
allowedModeTypes := clusterSet.Spec.AllowedNodeTypes
allowedModeTypes := clusterSet.Spec.AllowedModeTypes
Expect(allowedModeTypes).To(HaveLen(1))
Expect(allowedModeTypes).To(ContainElement(v1alpha1.VirtualClusterMode))
})
It("should have both modes if specified", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
AllowedNodeTypes: []v1alpha1.ClusterMode{
AllowedModeTypes: []v1alpha1.ClusterMode{
v1alpha1.SharedClusterMode,
v1alpha1.VirtualClusterMode,
},
@@ -278,7 +306,7 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
err := k8sClient.Create(ctx, clusterSet)
Expect(err).To(Not(HaveOccurred()))
allowedModeTypes := clusterSet.Spec.AllowedNodeTypes
allowedModeTypes := clusterSet.Spec.AllowedModeTypes
Expect(allowedModeTypes).To(HaveLen(2))
Expect(allowedModeTypes).To(ContainElements(
v1alpha1.SharedClusterMode,
@@ -288,12 +316,12 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
It("should fail for a non-existing mode", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
AllowedNodeTypes: []v1alpha1.ClusterMode{
AllowedModeTypes: []v1alpha1.ClusterMode{
v1alpha1.SharedClusterMode,
v1alpha1.VirtualClusterMode,
v1alpha1.ClusterMode("non-existing"),
@@ -315,9 +343,9 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
)
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
PodSecurityAdmissionLevel: &privileged,
@@ -327,7 +355,7 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
err := k8sClient.Create(ctx, clusterSet)
Expect(err).To(Not(HaveOccurred()))
var ns corev1.Namespace
var ns v1.Namespace
// Check privileged
@@ -418,9 +446,9 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
privileged := v1alpha1.PrivilegedPodSecurityAdmissionLevel
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
PodSecurityAdmissionLevel: &privileged,
@@ -430,7 +458,7 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
err := k8sClient.Create(ctx, clusterSet)
Expect(err).To(Not(HaveOccurred()))
var ns corev1.Namespace
var ns v1.Namespace
// wait a bit for the namespace to be updated
Eventually(func() bool {
@@ -469,9 +497,9 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
When("a cluster in the same namespace is present", func() {
It("should update it if needed", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
DefaultPriorityClass: "foobar",
@@ -482,7 +510,7 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
Expect(err).To(Not(HaveOccurred()))
cluster := &v1alpha1.Cluster{
ObjectMeta: v1.ObjectMeta{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "cluster-",
Namespace: namespace,
},
@@ -510,9 +538,9 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
It("should update the nodeSelector", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
DefaultNodeSelector: map[string]string{"label-1": "value-1"},
@@ -523,7 +551,7 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
Expect(err).To(Not(HaveOccurred()))
cluster := &v1alpha1.Cluster{
ObjectMeta: v1.ObjectMeta{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "cluster-",
Namespace: namespace,
},
@@ -551,9 +579,9 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
It("should update the nodeSelector if changed", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
DefaultNodeSelector: map[string]string{"label-1": "value-1"},
@@ -564,7 +592,7 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
Expect(err).To(Not(HaveOccurred()))
cluster := &v1alpha1.Cluster{
ObjectMeta: v1.ObjectMeta{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "cluster-",
Namespace: namespace,
},
@@ -622,9 +650,9 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
When("a cluster in a different namespace is present", func() {
It("should not be update", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: v1.ObjectMeta{
GenerateName: "clusterset-",
Namespace: namespace,
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
DefaultPriorityClass: "foobar",
@@ -634,12 +662,12 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
err := k8sClient.Create(ctx, clusterSet)
Expect(err).To(Not(HaveOccurred()))
namespace2 := &corev1.Namespace{ObjectMeta: v1.ObjectMeta{GenerateName: "ns-"}}
namespace2 := &v1.Namespace{ObjectMeta: metav1.ObjectMeta{GenerateName: "ns-"}}
err = k8sClient.Create(ctx, namespace2)
Expect(err).To(Not(HaveOccurred()))
cluster := &v1alpha1.Cluster{
ObjectMeta: v1.ObjectMeta{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "cluster-",
Namespace: namespace2.Name,
},
@@ -666,5 +694,134 @@ var _ = Describe("ClusterSet Controller", Label("controller"), Label("ClusterSet
Should(BeTrue())
})
})
When("created with ResourceQuota", func() {
It("should create resourceQuota if Quota is enabled", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
Quota: &v1.ResourceQuotaSpec{
Hard: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("800m"),
v1.ResourceMemory: resource.MustParse("1Gi"),
},
},
},
}
err := k8sClient.Create(ctx, clusterSet)
Expect(err).To(Not(HaveOccurred()))
var resourceQuota v1.ResourceQuota
Eventually(func() error {
key := types.NamespacedName{
Name: k3kcontroller.SafeConcatNameWithPrefix(clusterSet.Name),
Namespace: namespace,
}
return k8sClient.Get(ctx, key, &resourceQuota)
}).
WithTimeout(time.Second * 10).
WithPolling(time.Second).
Should(BeNil())
Expect(resourceQuota.Spec.Hard.Cpu().String()).To(BeEquivalentTo("800m"))
Expect(resourceQuota.Spec.Hard.Memory().String()).To(BeEquivalentTo("1Gi"))
})
It("should delete the ResourceQuota if Quota is deleted", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
Quota: &v1.ResourceQuotaSpec{
Hard: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("800m"),
v1.ResourceMemory: resource.MustParse("1Gi"),
},
},
},
}
err := k8sClient.Create(ctx, clusterSet)
Expect(err).To(Not(HaveOccurred()))
var resourceQuota v1.ResourceQuota
Eventually(func() error {
key := types.NamespacedName{
Name: k3kcontroller.SafeConcatNameWithPrefix(clusterSet.Name),
Namespace: namespace,
}
return k8sClient.Get(ctx, key, &resourceQuota)
}).
WithTimeout(time.Minute).
WithPolling(time.Second).
Should(BeNil())
clusterSet.Spec.Quota = nil
err = k8sClient.Update(ctx, clusterSet)
Expect(err).To(Not(HaveOccurred()))
// wait a bit for the ResourceQuota to be deleted
Eventually(func() bool {
key := types.NamespacedName{
Name: k3kcontroller.SafeConcatNameWithPrefix(clusterSet.Name),
Namespace: namespace,
}
err := k8sClient.Get(ctx, key, &resourceQuota)
return apierrors.IsNotFound(err)
}).
WithTimeout(time.Second * 10).
WithPolling(time.Second).
Should(BeTrue())
})
It("should create resourceQuota if Quota is enabled", func() {
clusterSet := &v1alpha1.ClusterSet{
ObjectMeta: metav1.ObjectMeta{
Name: "default",
Namespace: namespace,
},
Spec: v1alpha1.ClusterSetSpec{
Limit: &v1.LimitRangeSpec{
Limits: []v1.LimitRangeItem{
{
Type: v1.LimitTypeContainer,
DefaultRequest: v1.ResourceList{
v1.ResourceCPU: resource.MustParse("500m"),
},
},
},
},
},
}
err := k8sClient.Create(ctx, clusterSet)
Expect(err).To(Not(HaveOccurred()))
var limitRange v1.LimitRange
Eventually(func() error {
key := types.NamespacedName{
Name: k3kcontroller.SafeConcatNameWithPrefix(clusterSet.Name),
Namespace: namespace,
}
return k8sClient.Get(ctx, key, &limitRange)
}).
WithTimeout(time.Minute).
WithPolling(time.Second).
Should(BeNil())
// make sure the default LimitRange has the default request values.
Expect(limitRange.Spec.Limits).ShouldNot(BeEmpty())
cpu := limitRange.Spec.Limits[0].DefaultRequest.Cpu().String()
Expect(cpu).To(BeEquivalentTo("500m"))
})
})
})
})