Files
kamaji/internal/resources/kubeadm_phases.go
Matteo Ruina a9c2c0de89 fix(style): migrate from deprecated github.com/pkg/errors package (#1071)
* refactor: migrate error packages from pkg/errors to stdlib

Replace github.com/pkg/errors with Go standard library error handling
in foundation error packages:

- internal/datastore/errors: errors.Wrap -> fmt.Errorf with %w
- internal/errors: errors.As -> stdlib errors.As
- controllers/soot/controllers/errors: errors.New -> stdlib errors.New

Part 1 of 4 in the pkg/errors migration.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: migrate datastore package from pkg/errors to stdlib

Replace github.com/pkg/errors with Go standard library error handling
in the datastore layer:

- connection.go: errors.Wrap -> fmt.Errorf with %w
- datastore.go: errors.Wrap -> fmt.Errorf with %w
- etcd.go: goerrors alias removed, use stdlib errors.As
- nats.go: errors.Wrap/Is/New -> stdlib equivalents
- postgresql.go: goerrors.Wrap -> fmt.Errorf with %w

Part 2 of 4 in the pkg/errors migration.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: migrate internal packages from pkg/errors to stdlib (partial)

Replace github.com/pkg/errors with Go standard library error handling
in internal packages:

- internal/builders/controlplane: errors.Wrap -> fmt.Errorf
- internal/crypto: errors.Wrap -> fmt.Errorf
- internal/kubeadm: errors.Wrap/Wrapf -> fmt.Errorf
- internal/upgrade: errors.Wrap -> fmt.Errorf
- internal/webhook: errors.Wrap -> fmt.Errorf

Part 3 of 4 in the pkg/errors migration.

Remaining files: internal/resources/*.go (8 files, 42 occurrences)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(resources): migrate from pkg/errors to stdlib

Replace github.com/pkg/errors with Go standard library:
- errors.Wrap(err, msg) → fmt.Errorf("msg: %w", err)
- errors.New(msg) → errors.New(msg) (stdlib)

Files migrated:
- internal/resources/kubeadm_phases.go
- internal/resources/kubeadm_upgrade.go
- internal/resources/kubeadm_utils.go
- internal/resources/datastore/datastore_multitenancy.go
- internal/resources/datastore/datastore_setup.go
- internal/resources/datastore/datastore_storage_config.go
- internal/resources/addons/coredns.go
- internal/resources/addons/kube_proxy.go

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(controllers): migrate from pkg/errors to stdlib

Replace github.com/pkg/errors with Go standard library:
- errors.Wrap(err, msg) → fmt.Errorf("msg: %w", err)
- errors.New(msg) → errors.New(msg) (stdlib)
- errors.Is/As → errors.Is/As (stdlib)

Files migrated:
- controllers/datastore_controller.go
- controllers/kubeconfiggenerator_controller.go
- controllers/tenantcontrolplane_controller.go
- controllers/telemetry_controller.go
- controllers/certificate_lifecycle_controller.go

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(soot): migrate from pkg/errors to stdlib

Replace github.com/pkg/errors with Go standard library:
- errors.Is() now uses stdlib errors.Is()

Files migrated:
- controllers/soot/controllers/kubeproxy.go
- controllers/soot/controllers/migrate.go
- controllers/soot/controllers/coredns.go
- controllers/soot/controllers/konnectivity.go
- controllers/soot/controllers/kubeadm_phase.go

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(api,cmd): migrate from pkg/errors to stdlib

Replace github.com/pkg/errors with Go standard library:
- errors.Wrap(err, msg) → fmt.Errorf("msg: %w", err)

Files migrated:
- api/v1alpha1/tenantcontrolplane_funcs.go
- cmd/utils/k8s_version.go

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* chore: run go mod tidy after pkg/errors migration

The github.com/pkg/errors package moved from direct to indirect
dependency. It remains as an indirect dependency because other
packages in the dependency tree still use it.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(datastore): use errors.Is for sentinel error comparison

The stdlib errors.As expects a pointer to a concrete error type, not
a pointer to an error value. For comparing against sentinel errors
like rpctypes.ErrGRPCUserNotFound, errors.Is should be used instead.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: resolve golangci-lint errors

- Fix GCI import formatting (remove extra blank lines between groups)
- Use errors.Is instead of errors.As for mutex sentinel errors

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(errors): use proper variable declarations for errors.As

The errors.As function requires a pointer to an assignable variable,
not a pointer to a composite literal. The previous pattern
`errors.As(err, &SomeError{})` creates a pointer to a temporary value
which errors.As cannot reliably use for assignment.

This fix declares proper variables for each error type and passes
their addresses to errors.As, ensuring correct error chain matching.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(datastore/etcd): use rpctypes.Error() for gRPC error comparison

The etcd gRPC status errors (ErrGRPCUserNotFound, ErrGRPCRoleNotFound)
cannot be compared directly using errors.Is() because they are wrapped
in gRPC status errors during transmission.

The etcd rpctypes package provides:
- ErrGRPC* constants: server-side gRPC status errors
- Err* constants (without GRPC prefix): client-side comparable errors
- Error() function: converts gRPC errors to comparable EtcdError values

The correct pattern is to use rpctypes.Error(err) to normalize the
received error, then compare against client-side error constants
like rpctypes.ErrUserNotFound.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-05 06:11:14 +01:00

267 lines
8.4 KiB
Go

// Copyright 2022 Clastix Labs
// SPDX-License-Identifier: Apache-2.0

package resources

import (
	"context"
	"fmt"
	"os"
	"strings"
	"time"

	jsonpatchv5 "github.com/evanphx/json-patch/v5"
	"github.com/prometheus/client_golang/prometheus"
	corev1 "k8s.io/api/core/v1"
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	clientset "k8s.io/client-go/kubernetes"
	bootstrapapi "k8s.io/cluster-bootstrap/token/api"
	kubeadmconstants "k8s.io/kubernetes/cmd/kubeadm/app/constants"
	"k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
	"sigs.k8s.io/controller-runtime/pkg/log"

	kamajiv1alpha1 "github.com/clastix/kamaji/api/v1alpha1"
	"github.com/clastix/kamaji/internal/kubeadm"
)
type kubeadmPhase int

const (
	PhaseUploadConfigKubeadm kubeadmPhase = iota
	PhaseUploadConfigKubelet
	PhaseBootstrapToken
	PhaseClusterAdminRBAC
)

func (d kubeadmPhase) String() string {
	return [...]string{"PhaseUploadConfigKubeadm", "PhaseUploadConfigKubelet", "PhaseBootstrapToken", "PhaseClusterAdminRBAC"}[d]
}

type KubeadmPhase struct {
	Client client.Client
	Phase  kubeadmPhase

	checksum string
}

func (r *KubeadmPhase) GetHistogram() prometheus.Histogram {
	switch r.Phase {
	case PhaseUploadConfigKubeadm:
		kubeadmphaseUploadConfigKubeadmCollector = LazyLoadHistogramFromResource(kubeadmphaseUploadConfigKubeadmCollector, r)

		return kubeadmphaseUploadConfigKubeadmCollector
	case PhaseUploadConfigKubelet:
		kubeadmphaseUploadConfigKubeletCollector = LazyLoadHistogramFromResource(kubeadmphaseUploadConfigKubeletCollector, r)

		return kubeadmphaseUploadConfigKubeletCollector
	case PhaseBootstrapToken:
		kubeadmphaseBootstrapTokenCollector = LazyLoadHistogramFromResource(kubeadmphaseBootstrapTokenCollector, r)

		return kubeadmphaseBootstrapTokenCollector
	case PhaseClusterAdminRBAC:
		kubeadmphaseClusterAdminRBACCollector = LazyLoadHistogramFromResource(kubeadmphaseClusterAdminRBACCollector, r)

		return kubeadmphaseClusterAdminRBACCollector
	default:
		panic("should not happen")
	}
}
func (r *KubeadmPhase) GetWatchedObject() client.Object {
	switch r.Phase {
	case PhaseUploadConfigKubeadm:
		return &corev1.ConfigMap{}
	case PhaseUploadConfigKubelet:
		return &corev1.ConfigMap{}
	case PhaseBootstrapToken:
		return &corev1.ConfigMap{}
	case PhaseClusterAdminRBAC:
		return &rbacv1.ClusterRoleBinding{}
	default:
		panic("shouldn't happen")
	}
}

func (r *KubeadmPhase) GetPredicateFunc() func(obj client.Object) bool {
	switch r.Phase {
	case PhaseUploadConfigKubeadm:
		return func(obj client.Object) bool {
			return obj.GetName() == kubeadmconstants.KubeadmConfigConfigMap && obj.GetNamespace() == metav1.NamespaceSystem
		}
	case PhaseUploadConfigKubelet:
		return func(obj client.Object) bool {
			return obj.GetName() == kubeadmconstants.KubeletBaseConfigurationConfigMap && obj.GetNamespace() == metav1.NamespaceSystem
		}
	case PhaseBootstrapToken:
		return func(obj client.Object) bool {
			cm := obj.(*corev1.ConfigMap) //nolint:forcetypeassert

			return cm.Name == bootstrapapi.ConfigMapClusterInfo && cm.GetNamespace() == metav1.NamespacePublic
		}
	case PhaseClusterAdminRBAC:
		return func(obj client.Object) bool {
			cr := obj.(*rbacv1.ClusterRoleBinding) //nolint:forcetypeassert

			return strings.HasPrefix(cr.Name, "kubeadm:")
		}
	default:
		panic("shouldn't happen")
	}
}
func (r *KubeadmPhase) isStatusEqual(tenantControlPlane *kamajiv1alpha1.TenantControlPlane) bool {
	i, err := r.GetStatus(tenantControlPlane)
	if err != nil {
		return true
	}

	status, ok := i.(*kamajiv1alpha1.KubeadmPhaseStatus)
	if !ok {
		return true
	}

	return status.Checksum == r.checksum
}

func (r *KubeadmPhase) SetKubeadmConfigChecksum(checksum string) {
	r.checksum = checksum
}

func (r *KubeadmPhase) ShouldStatusBeUpdated(_ context.Context, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) bool {
	return !r.isStatusEqual(tenantControlPlane)
}

func (r *KubeadmPhase) ShouldCleanup(*kamajiv1alpha1.TenantControlPlane) bool {
	return false
}

func (r *KubeadmPhase) CleanUp(context.Context, *kamajiv1alpha1.TenantControlPlane) (bool, error) {
	return false, nil
}

func (r *KubeadmPhase) Define(context.Context, *kamajiv1alpha1.TenantControlPlane) error {
	return nil
}
func (r *KubeadmPhase) GetKubeadmFunction(ctx context.Context, tcp *kamajiv1alpha1.TenantControlPlane) (func(clientset.Interface, *kubeadm.Configuration) ([]byte, error), error) {
	switch r.Phase {
	case PhaseUploadConfigKubeadm:
		return kubeadm.UploadKubeadmConfig, nil
	case PhaseUploadConfigKubelet:
		return func(c clientset.Interface, configuration *kubeadm.Configuration) ([]byte, error) {
			var patch jsonpatchv5.Patch

			if len(tcp.Spec.Kubernetes.Kubelet.ConfigurationJSONPatches) > 0 {
				jsonP, patchErr := tcp.Spec.Kubernetes.Kubelet.ConfigurationJSONPatches.ToJSON()
				if patchErr != nil {
					return nil, fmt.Errorf("cannot encode JSON Patches to JSON: %w", patchErr)
				}

				if patch, patchErr = jsonpatchv5.DecodePatch(jsonP); patchErr != nil {
					return nil, fmt.Errorf("cannot decode JSON Patches: %w", patchErr)
				}
			}

			return kubeadm.UploadKubeletConfig(c, configuration, patch)
		}, nil
	case PhaseBootstrapToken:
		return func(client clientset.Interface, config *kubeadm.Configuration) ([]byte, error) {
			config.InitConfiguration.BootstrapTokens = nil

			return nil, kubeadm.BootstrapToken(client, config)
		}, nil
	case PhaseClusterAdminRBAC:
		return func(c clientset.Interface, configuration *kubeadm.Configuration) ([]byte, error) {
			tmp, err := os.MkdirTemp("", string(tcp.UID))
			if err != nil {
				return nil, err
			}
			// The temporary directory is populated with kubeconfig files and
			// nested certificate directories: RemoveAll is required, since a
			// bare os.Remove would fail on a non-empty directory and leak it.
			defer func() { _ = os.RemoveAll(tmp) }()

			var caSecret corev1.Secret
			if err = r.Client.Get(ctx, types.NamespacedName{Name: tcp.Status.Certificates.CA.SecretName, Namespace: tcp.Namespace}, &caSecret); err != nil {
				return nil, err
			}

			crtKeyPair := kubeadm.CertificatePrivateKeyPair{
				Certificate: caSecret.Data[kubeadmconstants.CACertName],
				PrivateKey:  caSecret.Data[kubeadmconstants.CAKeyName],
			}

			for _, i := range []string{AdminKubeConfigFileName, SuperAdminKubeConfigFileName} {
				configuration.InitConfiguration.CertificatesDir, _ = os.MkdirTemp(tmp, "")

				// Generate the kubeconfig for the file currently iterated,
				// rather than always the super-admin one.
				kubeconfigValue, err := kubeadm.CreateKubeconfig(i, crtKeyPair, configuration)
				if err != nil {
					return nil, err
				}

				_ = os.WriteFile(fmt.Sprintf("%s/%s", tmp, i), kubeconfigValue, os.ModePerm)
			}

			if _, err = kubeconfig.EnsureAdminClusterRoleBinding(tmp, func(_ context.Context, _ clientset.Interface, _ clientset.Interface, duration time.Duration, duration2 time.Duration) (clientset.Interface, error) {
				return kubeconfig.EnsureAdminClusterRoleBindingImpl(ctx, c, c, duration, duration2)
			}); err != nil {
				return nil, err
			}

			return nil, nil
		}, nil
	default:
		return nil, fmt.Errorf("no available functionality for phase %s", r.Phase)
	}
}
func (r *KubeadmPhase) GetClient() client.Client {
	return r.Client
}

func (r *KubeadmPhase) GetTmpDirectory() string {
	return ""
}

func (r *KubeadmPhase) GetName() string {
	return r.Phase.String()
}

func (r *KubeadmPhase) UpdateTenantControlPlaneStatus(ctx context.Context, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) error {
	logger := log.FromContext(ctx, "resource", r.GetName(), "phase", r.Phase.String())

	status, err := r.GetStatus(tenantControlPlane)
	if err != nil {
		logger.Error(err, "unable to update the status")

		return err
	}

	if status != nil {
		status.SetChecksum(r.checksum)
	}

	return nil
}

func (r *KubeadmPhase) GetStatus(tenantControlPlane *kamajiv1alpha1.TenantControlPlane) (kamajiv1alpha1.KubeadmConfigChecksumDependant, error) {
	switch r.Phase {
	case PhaseUploadConfigKubeadm, PhaseUploadConfigKubelet, PhaseClusterAdminRBAC:
		return nil, nil //nolint:nilnil
	case PhaseBootstrapToken:
		return &tenantControlPlane.Status.KubeadmPhase.BootstrapToken, nil
	default:
		return nil, fmt.Errorf("%s is not a right kubeadm phase", r.Phase)
	}
}

func (r *KubeadmPhase) CreateOrUpdate(ctx context.Context, tenantControlPlane *kamajiv1alpha1.TenantControlPlane) (controllerutil.OperationResult, error) {
	logger := log.FromContext(ctx, "resource", r.GetName(), "phase", r.Phase.String())

	if r.Phase == PhaseBootstrapToken {
		return KubeadmBootstrap(ctx, r, logger, tenantControlPlane)
	}

	return KubeadmPhaseCreate(ctx, r, logger, tenantControlPlane)
}