Compare commits

...

37 Commits
v1.0.1 ... main

Author SHA1 Message Date
Enrico Candino
f341f7f5e8 Bump Charts to 1.0.2-rc1 (#652) 2026-01-29 09:55:31 +01:00
renovate-rancher[bot]
ca50a6b231 Update registry.suse.com/bci/bci-base Docker tag to v15.7 (#651)
* Update registry.suse.com/bci/bci-base Docker tag to v15.7

* move k3k controller image to `registry.suse.com/bci/bci-base:15.7`

---------

Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
Co-authored-by: Enrico Candino <enrico.candino@suse.com>
2026-01-28 12:35:28 +01:00
Enrico Candino
004e177ac1 Bump kubernetes dependencies (v1.33) (#647)
* bump kubernetes to v0.33.7

* updated kubernetes api versions

* bump tests

* fix k3s version

* fix test

* centralize k8s version

* remove focus

* revert GetPodCondition, GetContainerStatus and pin of k8s.io/controller-manager
2026-01-27 22:28:56 +01:00
Enrico Candino
0164c785ab Show correct allocatable resources when a Policy is applied (#638)
* wip

* wip

* wip

* fix lint and tests

* fixed bugs for missing resources

* cleanup and refactor

* removed coreClient from configureNode

* added comments to distribute algorithm
2026-01-27 15:56:37 +01:00
Hussein Galal
c1b7da4c72 SecretMounts feature and private registries (#570)
* Add SecretMounts field

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2026-01-26 21:47:40 +02:00
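As a sketch of what the new field enables (all names and paths below are illustrative; the field schema is taken from the clusters.k3k.io CRD diff further down), a Cluster that mounts a private-registries secret into every k3k pod might look like:

```yaml
# Hypothetical Cluster manifest exercising the new secretMounts field.
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: example             # illustrative name
  namespace: example-ns     # illustrative namespace
spec:
  secretMounts:
    - secretName: my-registries     # existing secret in the pod's namespace
      mountPath: /etc/rancher/k3s   # where the contents appear in server/agent pods
      role: all                     # 'server', 'agent', or 'all'
      optional: false               # volume setup errors if the secret is missing
      items:                        # project only the listed keys
        - key: registries.yaml
          path: registries.yaml     # relative path under mountPath
```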
renovate-rancher[bot]
ff0b03af02 Update Kubernetes dependencies to v1.32.10 [SECURITY] (#626)
* Update Kubernetes dependencies to v1.32.10 [SECURITY]

* bump k8s.io/kubelet

---------

Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
Co-authored-by: Enrico Candino <enrico.candino@suse.com>
2026-01-26 17:24:14 +01:00
Enrico Candino
62a76a8202 Bump testcontainers-go (v0.40.0), containerd (v1.7.30) and x/crypto (v0.45.0) (#640)
* bump testcontainers to v0.40.0

* bump containerd and x/crypto
2026-01-26 16:37:05 +01:00
Enrico Candino
9e841cc75c Update helm.sh/helm/v3 to v3.18.5 (#641)
* bump helm to v3.17.4

* removed unneeded replace

* bump helm to v3.18.5
2026-01-26 15:38:56 +01:00
renovate-rancher[bot]
bc79a2e6a9 Update module github.com/sirupsen/logrus to v1.9.4 (#631)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-26 13:45:23 +01:00
renovate-rancher[bot]
3681614a3e Update dependency golangci/golangci-lint to v2.8.0 (#635)
* Update dependency golangci/golangci-lint to v2.8.0

* bump golangci-lint version in github action

---------

Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
Co-authored-by: Enrico Candino <enrico.candino@suse.com>
2026-01-26 13:30:26 +01:00
renovate-rancher[bot]
f04d88bd3f Update github/codeql-action digest to 38e701f (#634)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-26 10:27:20 +01:00
renovate-rancher[bot]
4b293cef42 Update module go.uber.org/zap to v1.27.1 (#633)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-26 10:19:02 +01:00
renovate-rancher[bot]
1e0aa0ad37 Update module github.com/spf13/cobra to v1.10.2 (#632)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-26 10:18:36 +01:00
renovate-rancher[bot]
e28fa84ae7 Update module github.com/go-logr/logr to v1.4.3 (#629)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-26 10:18:01 +01:00
renovate-rancher[bot]
511be5aa4e Pin dependencies (#628)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-22 15:24:57 +01:00
renovate-rancher[bot]
cd6c962bcf Migrate config .github/renovate.json (#627)
Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
2026-01-22 15:06:16 +01:00
Kevin McDermott
c0418267c9 Merge pull request #623 from bigkevmcd/resource-quantity
Use resource.Quantity instead of a string for storageRequestSize in the Cluster definition.
2026-01-22 13:13:06 +00:00
Kevin McDermott
eaa20c16e7 Make the storageRequestSize immutable.
It can't be changed in the StatefulSet and modifying the value causes an
error.
2026-01-22 08:27:20 +00:00
jpgouin
0cea0c9e14 Only reconcile the server resource on the StatefulSet Controller (fix #618) 2026-01-21 16:53:52 +01:00
Kevin McDermott
d12f3ea757 Fix lint issues and failing test.
golangci-lint was complaining about duplicate imports of corev1 and their
ordering in the files.
2026-01-21 14:50:30 +00:00
Kevin McDermott
9ea81c861b Use resource.Quantity for storageRequestSize
Previously the resource.Quantity was stored as string which allowed
invalid values to be created.

This performs validation on the strings using the standard K8s resource
mechanism.
2026-01-21 14:50:28 +00:00
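Under the updated clusters.k3k.io schema (shown in the CRD diff below), storageRequestSize is an int-or-string quantity with a validation pattern, defaults to 2G, and is immutable via the CEL rule `self == oldSelf`. A minimal, illustrative persistence fragment:

```yaml
# Example values only; the schema itself comes from the CRD diff below.
spec:
  persistence:
    type: dynamic
    storageClassName: local-path   # example storage class
    storageRequestSize: 3G         # parsed as a resource.Quantity;
                                   # a value like "3gb" is now rejected by the API server
```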
Enrico Candino
20c5441030 Bump to Go 1.25 (#620)
* bump to Go 1.25

* add go toolchain
2026-01-21 15:21:34 +01:00
renovate-rancher[bot]
a3a4c931a0 Add initial Renovate configuration (#621)
* Add initial Renovate configuration

* add permission

* fix multiple runs

---------

Co-authored-by: renovate-rancher[bot] <119870437+renovate-rancher[bot]@users.noreply.github.com>
Co-authored-by: Enrico Candino <enrico.candino@suse.com>
2026-01-21 15:04:51 +01:00
Hussein Galal
fcc7191ab3 CLI cluster update (#595)
* CLI cluster update

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2026-01-20 14:00:24 +02:00
jpgouin
ff6862e511 fix virtual pod NodeSelector #572 (#616) 2026-01-20 11:33:42 +01:00
Peter Matseykanets
20305e03b7 Add a dedicated Validate GitHub Actions workflow (#614) 2026-01-19 10:00:12 -05:00
Enrico Candino
5f42eafd2a Dev doc update (#611)
* update development.md

* fix tests

* fix cli test
2026-01-16 14:56:43 +01:00
Enrico Candino
ccc3d1651c Fixed resource allocation fetching the stats from the node where the virtual-kubelet is running on (#610)
Removed random node selection during Pod creation.
2026-01-16 13:23:18 +01:00
Enrico Candino
0185998aa0 Bump Charts to 1.0.2-rc1 (#609) 2026-01-15 14:12:52 +01:00
Guilherme Macedo
af5d33cfb8 Add FOSSA scanning workflow (#606)
Signed-off-by: Guilherme Macedo <guilherme@gmacedo.com>
2026-01-14 19:14:07 +01:00
Enrico Candino
f0d9b08b24 Added AsciiDoc K3k CRDs docs automation (#600)
* added asciidoc conversion for crds doc

* check versions

* use crd-ref-docs for generated asciidoc documentation

* add custom templates

* cleanup templates

* update references, rename docs file

* revert to found available pandoc version
2026-01-09 15:50:36 +01:00
Hussein Galal
a871917aec Refactor startup command to wait for node IP changes (#598)
* Patch node ip when server pod restarts

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* wsl

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Refactor startup command and adding safe mode

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add date/time logging to the startup script

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2026-01-09 16:29:47 +02:00
Enrico Candino
c16eae99c7 Added AsciiDoc k3kcli automation (#597)
* adding scripts for asciidoc cli generation

* fix small typos to align to existing docs

* added pandoc

* pandoc check
2026-01-07 11:16:40 +01:00
Enrico Candino
fc6bcedc5f fix for missing label update in creation, added tests (#592) 2025-12-16 11:34:50 +01:00
Hussein Galal
0086d5aa4a Attach creation of Pseudo PV to the PVC instead of the pods (#577)
* Change creation of pseudo PV to PVC instead of pods

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Ignore not found pvs when deleting the pvc

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fixes

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Avoid returning when the PV already exists

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2025-12-15 17:27:30 +02:00
Enrico Candino
c9bb1bcf46 Fixed CreatePod and UpdatePod in Virtual Kubelet for Downward API (#573)
* wip

* fix lint

* ephemeral containers

* remove unused

* add retry

* volumes refactor

* added configmap and secret keyRef translation

* set debug logger to virtual kubelet logger

* added tests

* fix lint, removed unused func

* added test file locally
2025-12-10 13:47:34 +01:00
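The "keyRef translation" here appears to cover the standard Kubernetes env sources that the virtual kubelet has to rewrite when syncing a virtual pod to the host cluster; an illustrative fragment of such a pod spec (all names made up):

```yaml
# Standard configMapKeyRef/secretKeyRef env sources of the kind translated here.
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials   # secret in the virtual cluster
        key: password
  - name: LOG_LEVEL
    valueFrom:
      configMapKeyRef:
        name: app-config       # configmap in the virtual cluster
        key: log-level
```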
Enrico Candino
6d5dd8564f Bump Charts to 1.0.1 (#588) 2025-12-09 12:33:21 +01:00
107 changed files with 6400 additions and 1398 deletions

.github/renovate.json (new file)

@@ -0,0 +1,10 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"github>rancher/renovate-config#release"
],
"baseBranchPatterns": [
"main"
],
"prHourlyLimit": 2
}

View File

@@ -2,9 +2,9 @@ name: Build
on:
push:
branches:
- main
branches: [main]
pull_request:
types: [opened, synchronize, reopened]
permissions:
contents: read
@@ -19,18 +19,18 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v6
uses: goreleaser/goreleaser-action@e435ccd777264be153ace6237001ef4d979d3a7a # v6
with:
distribution: goreleaser
version: v2
@@ -40,7 +40,7 @@ jobs:
REGISTRY: ""
- name: Run Trivy vulnerability scanner (k3kcli)
uses: aquasecurity/trivy-action@0.28.0
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # 0.33.1
with:
ignore-unfixed: true
severity: 'MEDIUM,HIGH,CRITICAL'
@@ -50,13 +50,13 @@ jobs:
output: 'trivy-results-k3kcli.sarif'
- name: Upload Trivy scan results to GitHub Security tab (k3kcli)
uses: github/codeql-action/upload-sarif@v3
uses: github/codeql-action/upload-sarif@38e701f46e33fb233075bf4238cb1e5d68e429e4 # v3
with:
sarif_file: trivy-results-k3kcli.sarif
category: k3kcli
- name: Run Trivy vulnerability scanner (k3k)
uses: aquasecurity/trivy-action@0.28.0
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # 0.33.1
with:
ignore-unfixed: true
severity: 'MEDIUM,HIGH,CRITICAL'
@@ -66,13 +66,13 @@ jobs:
output: 'trivy-results-k3k.sarif'
- name: Upload Trivy scan results to GitHub Security tab (k3k)
uses: github/codeql-action/upload-sarif@v3
uses: github/codeql-action/upload-sarif@38e701f46e33fb233075bf4238cb1e5d68e429e4 # v3
with:
sarif_file: trivy-results-k3k.sarif
category: k3k
- name: Run Trivy vulnerability scanner (k3k-kubelet)
uses: aquasecurity/trivy-action@0.28.0
uses: aquasecurity/trivy-action@b6643a29fecd7f34b3597bc6acb0a98b03d33ff8 # 0.33.1
with:
ignore-unfixed: true
severity: 'MEDIUM,HIGH,CRITICAL'
@@ -82,7 +82,7 @@ jobs:
output: 'trivy-results-k3k-kubelet.sarif'
- name: Upload Trivy scan results to GitHub Security tab (k3k-kubelet)
uses: github/codeql-action/upload-sarif@v3
uses: github/codeql-action/upload-sarif@38e701f46e33fb233075bf4238cb1e5d68e429e4 # v3
with:
sarif_file: trivy-results-k3k-kubelet.sarif
category: k3k-kubelet

View File

@@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
@@ -21,12 +21,12 @@ jobs:
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Install Helm
uses: azure/setup-helm@v4
uses: azure/setup-helm@1a275c3b69536ee54be43f2070a358922e12c8d4 # v4
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
- name: Run chart-releaser
uses: helm/chart-releaser-action@v1.6.0
uses: helm/chart-releaser-action@cae68fefc6b5f367a0275617c9f83181ba54714f # v1.7.0
with:
config: .cr.yaml
env:

.github/workflows/fossa.yml (new file)

@@ -0,0 +1,34 @@
name: FOSSA Scanning
on:
push:
branches: ["main", "release/**"]
workflow_dispatch:
permissions:
contents: read
id-token: write
jobs:
fossa-scanning:
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6
# The FOSSA token is shared between all repos in Rancher's GH org. It can be
# used directly and there is no need to request specific access to EIO.
- name: Read FOSSA token
uses: rancher-eio/read-vault-secrets@main
with:
secrets: |
secret/data/github/org/rancher/fossa/push token | FOSSA_API_KEY_PUSH_ONLY
- name: FOSSA scan
uses: fossas/fossa-action@main
with:
api-key: ${{ env.FOSSA_API_KEY_PUSH_ONLY }}
# Only runs the scan and does not provide any results back to the
# pipeline.
run-tests: false

View File

@@ -24,7 +24,7 @@ jobs:
run: echo "::error::Missing tag from input" && exit 1
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Check if release is draft
run: |

View File

@@ -21,7 +21,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
@@ -31,12 +31,12 @@ jobs:
run: git checkout ${{ inputs.commit }}
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
uses: docker/setup-qemu-action@c7c53464625b32c7a7e944ae62b3e17d2b600130 # v3
- name: "Read secrets"
uses: rancher-eio/read-vault-secrets@main
@@ -55,7 +55,7 @@ jobs:
echo "DOCKER_PASSWORD=${{ github.token }}" >> $GITHUB_ENV
- name: Login to container registry
uses: docker/login-action@v3
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ env.DOCKER_USERNAME }}
@@ -78,7 +78,7 @@ jobs:
echo "CURRENT_TAG=${CURRENT_TAG}" >> "$GITHUB_OUTPUT"
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v6
uses: goreleaser/goreleaser-action@e435ccd777264be153ace6237001ef4d979d3a7a # v6
with:
distribution: goreleaser
version: v2

.github/workflows/renovate-vault.yml (new file)

@@ -0,0 +1,63 @@
name: Renovate
on:
workflow_dispatch:
inputs:
logLevel:
description: "Override default log level"
required: false
default: info
type: choice
options:
- info
- debug
overrideSchedule:
description: "Override all schedules"
required: false
default: "false"
type: choice
options:
- "false"
- "true"
configMigration:
description: "Toggle PRs for config migration"
required: false
default: "true"
type: choice
options:
- "false"
- "true"
renovateConfig:
description: "Define a custom renovate config file"
required: false
default: ".github/renovate.json"
type: string
minimumReleaseAge:
description: "Override minimumReleaseAge for a one-time run (e.g., '0 days' to disable delay)"
required: false
default: "null"
type: string
extendsPreset:
description: "Override renovate extends preset (default: 'github>rancher/renovate-config#release')."
required: false
default: "github>rancher/renovate-config#release"
type: string
schedule:
- cron: '30 4,6 * * 1-5'
permissions:
contents: read
id-token: write
jobs:
call-workflow:
uses: rancher/renovate-config/.github/workflows/renovate-vault.yml@release
with:
configMigration: ${{ inputs.configMigration || 'true' }}
logLevel: ${{ inputs.logLevel || 'info' }}
overrideSchedule: ${{ github.event.inputs.overrideSchedule == 'true' && '{''schedule'':null}' || '' }}
renovateConfig: ${{ inputs.renovateConfig || '.github/renovate.json' }}
minimumReleaseAge: ${{ inputs.minimumReleaseAge || 'null' }}
extendsPreset: ${{ inputs.extendsPreset || 'github>rancher/renovate-config#release' }}
secrets:
override-token: "${{ secrets.RENOVATE_FORK_GH_TOKEN || '' }}"

View File

@@ -21,17 +21,17 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/setup-go@v5
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Install helm
uses: azure/setup-helm@v4.3.0
uses: azure/setup-helm@1a275c3b69536ee54be43f2070a358922e12c8d4 # v4.3.1
- name: Install hydrophone
run: go install sigs.k8s.io/hydrophone@latest
@@ -131,7 +131,7 @@ jobs:
--output-dir /tmp
- name: Archive conformance logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: conformance-${{ matrix.type }}-logs

View File

@@ -21,17 +21,17 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/setup-go@v5
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
- name: Install helm
uses: azure/setup-helm@v4.3.0
uses: azure/setup-helm@1a275c3b69536ee54be43f2070a358922e12c8d4 # v4.3.1
- name: Install hydrophone
run: go install sigs.k8s.io/hydrophone@latest
@@ -39,7 +39,7 @@ jobs:
- name: Install k3s
env:
KUBECONFIG: /etc/rancher/k3s/k3s.yaml
K3S_HOST_VERSION: v1.32.1+k3s1
K3S_HOST_VERSION: v1.33.7+k3s1
run: |
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=${K3S_HOST_VERSION} INSTALL_K3S_EXEC="--write-kubeconfig-mode=777" sh -s -
@@ -104,21 +104,21 @@ jobs:
kubectl logs -n k3k-system -l "app.kubernetes.io/name=k3k" --tail=-1 > /tmp/k3k.log
- name: Archive K3s logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: k3s-${{ matrix.type }}-logs
path: /tmp/k3s.log
- name: Archive K3k logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: k3k-${{ matrix.type }}-logs
path: /tmp/k3k.log
- name: Archive conformance logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: conformance-${{ matrix.type }}-logs

View File

@@ -2,38 +2,26 @@ name: Tests E2E
on:
push:
branches: [main]
pull_request:
types: [opened, synchronize, reopened]
workflow_dispatch:
permissions:
contents: read
jobs:
validate:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
- name: Validate
run: make validate
tests-e2e:
runs-on: ubuntu-latest
needs: validate
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/setup-go@v5
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
@@ -76,44 +64,43 @@ jobs:
run: go tool covdata textfmt -i=${GOCOVERDIR} -o ${GOCOVERDIR}/cover.out
- name: Upload coverage reports to Codecov (controller)
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ${GOCOVERDIR}/cover.out
flags: controller
- name: Upload coverage reports to Codecov (e2e)
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./cover.out
flags: e2e
- name: Archive k3s logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: e2e-k3s-logs
path: /tmp/k3s.log
- name: Archive k3k logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: e2e-k3k-logs
path: /tmp/k3k.log
tests-e2e-slow:
runs-on: ubuntu-latest
needs: validate
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/setup-go@v5
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
@@ -156,28 +143,28 @@ jobs:
run: go tool covdata textfmt -i=${GOCOVERDIR} -o ${GOCOVERDIR}/cover.out
- name: Upload coverage reports to Codecov (controller)
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ${GOCOVERDIR}/cover.out
flags: controller
- name: Upload coverage reports to Codecov (e2e)
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./cover.out
flags: e2e
- name: Archive k3s logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: e2e-k3s-logs
path: /tmp/k3s.log
- name: Archive k3k logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: e2e-k3k-logs

View File

@@ -2,53 +2,23 @@ name: Tests
on:
push:
branches: [main]
pull_request:
types: [opened, synchronize, reopened]
workflow_dispatch:
permissions:
contents: read
jobs:
lint:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
- name: golangci-lint
uses: golangci/golangci-lint-action@v8
with:
args: --timeout=5m
version: v2.3.0
validate:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
- name: Validate
run: make validate
tests:
runs-on: ubuntu-latest
needs: validate
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- uses: actions/setup-go@v5
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
@@ -56,7 +26,7 @@ jobs:
run: make test-unit
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./cover.out
@@ -64,16 +34,15 @@ jobs:
tests-cli:
runs-on: ubuntu-latest
needs: validate
steps:
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/setup-go@v5
- uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
@@ -109,21 +78,21 @@ jobs:
run: go tool covdata textfmt -i=${{ github.workspace }}/covdata -o ${{ github.workspace }}/covdata/cover.out
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ${{ github.workspace }}/covdata/cover.out
flags: cli
- name: Archive k3s logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: cli-k3s-logs
path: /tmp/k3s.log
- name: Archive k3k logs
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: cli-k3k-logs

.github/workflows/validate.yml (new file)

@@ -0,0 +1,41 @@
name: Validate
on:
push:
branches: [main]
pull_request:
types: [opened, synchronize, reopened]
workflow_dispatch:
permissions:
contents: read
jobs:
validate:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4
- name: Set up Go
uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
with:
go-version-file: go.mod
cache: true
- name: Install Pandoc
run: sudo apt-get install pandoc
- name: Run linters
uses: golangci/golangci-lint-action@1e7e51e771db61008b38414a730f564565cf7c20 # v9.2.0
with:
version: v2.8.0
args: -v
only-new-issues: true
skip-cache: false
- name: Run formatters
run: golangci-lint -v fmt ./...
- name: Validate
run: make validate

View File

@@ -5,16 +5,17 @@ VERSION ?= $(shell git describe --tags --always --dirty --match="v[0-9]*")
## Dependencies
GOLANGCI_LINT_VERSION := v2.3.0
GOLANGCI_LINT_VERSION := v2.8.0
GINKGO_VERSION ?= v2.21.0
GINKGO_FLAGS ?= -v -r --coverprofile=cover.out --coverpkg=./...
ENVTEST_VERSION ?= v0.0.0-20250505003155-b6c5897febe5
ENVTEST_K8S_VERSION := 1.31.0
CRD_REF_DOCS_VER ?= v0.1.0
CRD_REF_DOCS_VER ?= v0.2.0
GOLANGCI_LINT ?= go run github.com/golangci/golangci-lint/v2/cmd/golangci-lint@$(GOLANGCI_LINT_VERSION)
GINKGO ?= go run github.com/onsi/ginkgo/v2/ginkgo@$(GINKGO_VERSION)
CRD_REF_DOCS := go run github.com/elastic/crd-ref-docs@$(CRD_REF_DOCS_VER)
PANDOC := $(shell which pandoc 2> /dev/null)
ENVTEST ?= go run sigs.k8s.io/controller-runtime/tools/setup-envtest@$(ENVTEST_VERSION)
ENVTEST_DIR ?= $(shell pwd)/.envtest
@@ -83,26 +84,44 @@ generate: ## Generate the CRDs specs
go generate ./...
.PHONY: docs
docs: ## Build the CRDs and CLI docs
docs: docs-crds docs-cli ## Build the CRDs and CLI docs
.PHONY: docs-crds
docs-crds: ## Build the CRDs docs
$(CRD_REF_DOCS) --config=./docs/crds/config.yaml \
--renderer=markdown \
--source-path=./pkg/apis/k3k.io/v1beta1 \
--output-path=./docs/crds/crd-docs.md
@go run ./docs/cli/genclidoc.go
--output-path=./docs/crds/crds.md
$(CRD_REF_DOCS) --config=./docs/crds/config.yaml \
--renderer=asciidoctor \
--templates-dir=./docs/crds/templates/asciidoctor \
--source-path=./pkg/apis/k3k.io/v1beta1 \
--output-path=./docs/crds/crds.adoc
.PHONY: docs-cli
docs-cli: ## Build the CLI docs
ifeq (, $(PANDOC))
$(error "pandoc not found in PATH.")
endif
@./scripts/generate-cli-docs
.PHONY: lint
lint: ## Find any linting issues in the project
$(GOLANGCI_LINT) run --timeout=5m
.PHONY: fmt
fmt: ## Find any linting issues in the project
fmt: ## Format source files in the project
ifndef CI
$(GOLANGCI_LINT) fmt ./...
endif
.PHONY: validate
validate: generate docs fmt ## Validate the project checking for any dependency or doc mismatch
$(GINKGO) unfocus
go mod tidy
git status --porcelain
go mod verify
git status --porcelain
git --no-pager diff --exit-code
.PHONY: install

View File

@@ -67,7 +67,7 @@ To install it, simply download the latest available version for your architectur
For example, you can download the Linux amd64 version with:
```
wget -qO k3kcli https://github.com/rancher/k3k/releases/download/v1.0.0/k3kcli-linux-amd64 && \
wget -qO k3kcli https://github.com/rancher/k3k/releases/download/v1.0.1/k3kcli-linux-amd64 && \
chmod +x k3kcli && \
sudo mv k3kcli /usr/local/bin
```
@@ -75,7 +75,7 @@ wget -qO k3kcli https://github.com/rancher/k3k/releases/download/v1.0.0/k3kcli-l
You should now be able to run:
```bash
-> % k3kcli --version
k3kcli version v1.0.0
k3kcli version v1.0.1
```

View File

@@ -2,5 +2,5 @@ apiVersion: v2
name: k3k
description: A Helm chart for K3K
type: application
version: 1.0.1-rc2
appVersion: v1.0.1-rc2
version: 1.0.2-rc2
appVersion: v1.0.2-rc2

View File

@@ -4,7 +4,7 @@ kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
controller-gen.kubebuilder.io/version: v0.16.0
controller-gen.kubebuilder.io/version: v0.20.0
name: clusters.k3k.io
spec:
group: k3k.io
@@ -241,10 +241,9 @@ spec:
properties:
secretName:
description: |-
SecretName specifies the name of an existing secret to use.
The controller expects specific keys inside based on the credential type:
- For TLS pairs (e.g., ServerCA): 'tls.crt' and 'tls.key'.
- For ServiceAccountTokenKey: 'tls.key'.
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
@@ -255,10 +254,9 @@ spec:
properties:
secretName:
description: |-
SecretName specifies the name of an existing secret to use.
The controller expects specific keys inside based on the credential type:
- For TLS pairs (e.g., ServerCA): 'tls.crt' and 'tls.key'.
- For ServiceAccountTokenKey: 'tls.key'.
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
@@ -269,10 +267,9 @@ spec:
properties:
secretName:
description: |-
SecretName specifies the name of an existing secret to use.
The controller expects specific keys inside based on the credential type:
- For TLS pairs (e.g., ServerCA): 'tls.crt' and 'tls.key'.
- For ServiceAccountTokenKey: 'tls.key'.
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
@@ -283,10 +280,9 @@ spec:
properties:
secretName:
description: |-
SecretName specifies the name of an existing secret to use.
The controller expects specific keys inside based on the credential type:
- For TLS pairs (e.g., ServerCA): 'tls.crt' and 'tls.key'.
- For ServiceAccountTokenKey: 'tls.key'.
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
@@ -296,10 +292,9 @@ spec:
properties:
secretName:
description: |-
SecretName specifies the name of an existing secret to use.
The controller expects specific keys inside based on the credential type:
- For TLS pairs (e.g., ServerCA): 'tls.crt' and 'tls.key'.
- For ServiceAccountTokenKey: 'tls.key'.
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
@@ -310,10 +305,9 @@ spec:
properties:
secretName:
description: |-
SecretName specifies the name of an existing secret to use.
The controller expects specific keys inside based on the credential type:
- For TLS pairs (e.g., ServerCA): 'tls.crt' and 'tls.key'.
- For ServiceAccountTokenKey: 'tls.key'.
The secret must contain specific keys based on the credential type:
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
- For the ServiceAccountToken signing key: `tls.key`.
type: string
required:
- secretName
@@ -434,11 +428,18 @@ spec:
This field is only relevant in "dynamic" mode.
type: string
storageRequestSize:
anyOf:
- type: integer
- type: string
default: 2G
description: |-
StorageRequestSize is the requested size for the PVC.
This field is only relevant in "dynamic" mode.
type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
x-kubernetes-validations:
- message: storageRequestSize is immutable
rule: self == oldSelf
type:
default: dynamic
description: Type specifies the persistence mode.
@@ -449,6 +450,95 @@ spec:
PriorityClass specifies the priorityClassName for server/agent pods.
In "shared" mode, this also applies to workloads.
type: string
secretMounts:
description: |-
SecretMounts specifies a list of secrets to mount into server and agent pods.
Each entry defines a secret and its mount path within the pods.
items:
description: |-
SecretMount defines a secret to be mounted into server or agent pods,
allowing for custom configurations, certificates, or other sensitive data.
properties:
defaultMode:
description: |-
defaultMode is Optional: mode bits used to set permissions on created files by default.
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
YAML accepts both octal and decimal values, JSON requires decimal values
for mode bits. Defaults to 0644.
Directories within the path are not affected by this setting.
This might be in conflict with other options that affect the file
mode, like fsGroup, and the result can be other mode bits set.
format: int32
type: integer
items:
description: |-
items If unspecified, each key-value pair in the Data field of the referenced
Secret will be projected into the volume as a file whose name is the
key and content is the value. If specified, the listed keys will be
projected into the specified paths, and unlisted keys will not be
present. If a key is specified which is not present in the Secret,
the volume setup will error unless it is marked optional. Paths must be
relative and may not contain the '..' path or start with '..'.
items:
description: Maps a string key to a path within a volume.
properties:
key:
description: key is the key to project.
type: string
mode:
description: |-
mode is Optional: mode bits used to set permissions on this file.
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits.
If not specified, the volume defaultMode will be used.
This might be in conflict with other options that affect the file
mode, like fsGroup, and the result can be other mode bits set.
format: int32
type: integer
path:
description: |-
path is the relative path of the file to map the key to.
May not be an absolute path.
May not contain the path element '..'.
May not start with the string '..'.
type: string
required:
- key
- path
type: object
type: array
x-kubernetes-list-type: atomic
mountPath:
description: |-
MountPath is the path within server and agent pods where the
secret contents will be mounted.
type: string
optional:
description: optional field specify whether the Secret or its
keys must be defined
type: boolean
role:
description: |-
Role is the type of the k3k pod that will be used to mount the secret.
This can be 'server', 'agent', or 'all' (for both).
enum:
- server
- agent
- all
type: string
secretName:
description: |-
secretName is the name of the secret in the pod's namespace to use.
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret
type: string
subPath:
description: |-
SubPath is an optional path within the secret to mount instead of the root.
When specified, only the specified key from the secret will be mounted as a file
at MountPath, keeping the parent directory writable.
type: string
type: object
type: array
serverArgs:
description: |-
ServerArgs specifies ordered key-value pairs for K3s server pods.

View File

@@ -4,7 +4,7 @@ kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
controller-gen.kubebuilder.io/version: v0.16.0
controller-gen.kubebuilder.io/version: v0.20.0
name: virtualclusterpolicies.k3k.io
spec:
group: k3k.io

View File

@@ -23,9 +23,11 @@ rules:
resources:
- "nodes"
- "nodes/proxy"
- "namespaces"
verbs:
- "get"
- "list"
- "watch"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1

View File

@@ -7,11 +7,12 @@ import (
func NewClusterCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "cluster",
Short: "cluster command",
Short: "K3k cluster command.",
}
cmd.AddCommand(
NewClusterCreateCmd(appCtx),
NewClusterUpdateCmd(appCtx),
NewClusterDeleteCmd(appCtx),
NewClusterListCmd(appCtx),
)

View File

@@ -13,12 +13,13 @@ import (
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/util/retry"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
v1 "k8s.io/api/core/v1"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
@@ -58,7 +59,7 @@ func NewClusterCreateCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "create",
Short: "Create new cluster",
Short: "Create a new cluster.",
Example: "k3kcli cluster create [command options] NAME",
PreRunE: func(cmd *cobra.Command, args []string) error {
return validateCreateConfig(createConfig)
@@ -117,7 +118,10 @@ func createAction(appCtx *AppContext, config *CreateConfig) func(cmd *cobra.Comm
logrus.Infof("Creating cluster '%s' in namespace '%s'", name, namespace)
cluster := newCluster(name, namespace, config)
cluster, err := newCluster(name, namespace, config)
if err != nil {
return err
}
cluster.Spec.Expose = &v1beta1.ExposeConfig{
NodePort: &v1beta1.NodePortConfig{},
@@ -148,9 +152,9 @@ func createAction(appCtx *AppContext, config *CreateConfig) func(cmd *cobra.Comm
return fmt.Errorf("failed to wait for cluster to be reconciled: %w", err)
}
clusterDetails, err := printClusterDetails(cluster)
clusterDetails, err := getClusterDetails(cluster)
if err != nil {
return fmt.Errorf("failed to print cluster details: %w", err)
return fmt.Errorf("failed to get cluster details: %w", err)
}
logrus.Info(clusterDetails)
@@ -185,7 +189,17 @@ func createAction(appCtx *AppContext, config *CreateConfig) func(cmd *cobra.Comm
}
}
func newCluster(name, namespace string, config *CreateConfig) *v1beta1.Cluster {
func newCluster(name, namespace string, config *CreateConfig) (*v1beta1.Cluster, error) {
var storageRequestSize *resource.Quantity
if config.storageRequestSize != "" {
parsed, err := resource.ParseQuantity(config.storageRequestSize)
if err != nil {
return nil, err
}
storageRequestSize = ptr.To(parsed)
}
cluster := &v1beta1.Cluster{
ObjectMeta: metav1.ObjectMeta{
Name: name,
@@ -211,7 +225,7 @@ func newCluster(name, namespace string, config *CreateConfig) *v1beta1.Cluster {
Persistence: v1beta1.PersistenceConfig{
Type: v1beta1.PersistenceMode(config.persistenceType),
StorageClassName: ptr.To(config.storageClassName),
StorageRequestSize: config.storageRequestSize,
StorageRequestSize: storageRequestSize,
},
MirrorHostNodes: config.mirrorHostNodes,
},
@@ -221,7 +235,7 @@ func newCluster(name, namespace string, config *CreateConfig) *v1beta1.Cluster {
}
if config.token != "" {
cluster.Spec.TokenSecretRef = &v1.SecretReference{
cluster.Spec.TokenSecretRef = &corev1.SecretReference{
Name: k3kcluster.TokenSecretName(name),
Namespace: namespace,
}
@@ -253,11 +267,11 @@ func newCluster(name, namespace string, config *CreateConfig) *v1beta1.Cluster {
}
}
return cluster
return cluster, nil
}
func env(envSlice []string) []v1.EnvVar {
var envVars []v1.EnvVar
func env(envSlice []string) []corev1.EnvVar {
var envVars []corev1.EnvVar
for _, env := range envSlice {
keyValue := strings.Split(env, "=")
@@ -265,7 +279,7 @@ func env(envSlice []string) []v1.EnvVar {
logrus.Fatalf("incorrect value for environment variable %s", env)
}
envVars = append(envVars, v1.EnvVar{
envVars = append(envVars, corev1.EnvVar{
Name: keyValue[0],
Value: keyValue[1],
})
@@ -352,8 +366,8 @@ func CreateCustomCertsSecrets(ctx context.Context, name, namespace, customCertsP
return nil
}
func caCertSecret(certName, clusterName, clusterNamespace string, cert, key []byte) *v1.Secret {
return &v1.Secret{
func caCertSecret(certName, clusterName, clusterNamespace string, cert, key []byte) *corev1.Secret {
return &corev1.Secret{
TypeMeta: metav1.TypeMeta{
Kind: "Secret",
APIVersion: "v1",
@@ -362,10 +376,10 @@ func caCertSecret(certName, clusterName, clusterNamespace string, cert, key []by
Name: controller.SafeConcatNameWithPrefix(clusterName, certName),
Namespace: clusterNamespace,
},
Type: v1.SecretTypeTLS,
Type: corev1.SecretTypeTLS,
Data: map[string][]byte{
v1.TLSCertKey: cert,
v1.TLSPrivateKeyKey: key,
corev1.TLSCertKey: cert,
corev1.TLSPrivateKeyKey: key,
},
}
}
@@ -399,9 +413,13 @@ const clusterDetailsTemplate = `Cluster details:
Persistence:
Type: {{.Persistence.Type}}{{ if .Persistence.StorageClassName }}
StorageClass: {{ .Persistence.StorageClassName }}{{ end }}{{ if .Persistence.StorageRequestSize }}
Size: {{ .Persistence.StorageRequestSize }}{{ end }}`
Size: {{ .Persistence.StorageRequestSize }}{{ end }}{{ if .Labels }}
Labels: {{ range $key, $value := .Labels }}
{{$key}}: {{$value}}{{ end }}{{ end }}{{ if .Annotations }}
Annotations: {{ range $key, $value := .Annotations }}
{{$key}}: {{$value}}{{ end }}{{ end }}`
func printClusterDetails(cluster *v1beta1.Cluster) (string, error) {
func getClusterDetails(cluster *v1beta1.Cluster) (string, error) {
type templateData struct {
Mode v1beta1.ClusterMode
Servers int32
@@ -413,6 +431,8 @@ func printClusterDetails(cluster *v1beta1.Cluster) (string, error) {
StorageClassName string
StorageRequestSize string
}
Labels map[string]string
Annotations map[string]string
}
data := templateData{
@@ -421,11 +441,16 @@ func printClusterDetails(cluster *v1beta1.Cluster) (string, error) {
Agents: ptr.Deref(cluster.Spec.Agents, 0),
Version: cluster.Spec.Version,
HostVersion: cluster.Status.HostVersion,
Annotations: cluster.Annotations,
Labels: cluster.Labels,
}
data.Persistence.Type = cluster.Spec.Persistence.Type
data.Persistence.StorageClassName = ptr.Deref(cluster.Spec.Persistence.StorageClassName, "")
data.Persistence.StorageRequestSize = cluster.Spec.Persistence.StorageRequestSize
if srs := cluster.Spec.Persistence.StorageRequestSize; srs != nil {
data.Persistence.StorageRequestSize = srs.String()
}
tmpl, err := template.New("clusterDetails").Parse(clusterDetailsTemplate)
if err != nil {

View File

@@ -4,6 +4,7 @@ import (
"testing"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/utils/ptr"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
@@ -66,7 +67,7 @@ func Test_printClusterDetails(t *testing.T) {
Persistence: v1beta1.PersistenceConfig{
Type: v1beta1.DynamicPersistenceMode,
StorageClassName: ptr.To("local-path"),
StorageRequestSize: "3gb",
StorageRequestSize: ptr.To(resource.MustParse("3G")),
},
},
Status: v1beta1.ClusterStatus{
@@ -81,13 +82,13 @@ func Test_printClusterDetails(t *testing.T) {
Persistence:
Type: dynamic
StorageClass: local-path
Size: 3gb`,
Size: 3G`,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
clusterDetails, err := printClusterDetails(tt.cluster)
clusterDetails, err := getClusterDetails(tt.cluster)
assert.NoError(t, err)
assert.Equal(t, tt.want, clusterDetails)
})

View File

@@ -24,7 +24,7 @@ var keepData bool
func NewClusterDeleteCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "delete",
Short: "Delete an existing cluster",
Short: "Delete an existing cluster.",
Example: "k3kcli cluster delete [command options] NAME",
RunE: delete(appCtx),
Args: cobra.ExactArgs(1),

View File

@@ -16,7 +16,7 @@ import (
func NewClusterListCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "list",
Short: "List all the existing cluster",
Short: "List all existing clusters.",
Example: "k3kcli cluster list [command options]",
RunE: list(appCtx),
Args: cobra.NoArgs,

cli/cmds/cluster_update.go (new file)

@@ -0,0 +1,198 @@
package cmds
import (
"bufio"
"errors"
"fmt"
"os"
"strings"
"github.com/blang/semver/v4"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"k8s.io/utils/ptr"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
k3kcluster "github.com/rancher/k3k/pkg/controller/cluster"
)
type UpdateConfig struct {
servers int32
agents int32
labels []string
annotations []string
version string
noConfirm bool
}
func NewClusterUpdateCmd(appCtx *AppContext) *cobra.Command {
updateConfig := &UpdateConfig{}
cmd := &cobra.Command{
Use: "update",
Short: "Update existing cluster",
Example: "k3kcli cluster update [command options] NAME",
RunE: updateAction(appCtx, updateConfig),
Args: cobra.ExactArgs(1),
}
CobraFlagNamespace(appCtx, cmd.Flags())
updateFlags(cmd, updateConfig)
return cmd
}
func updateFlags(cmd *cobra.Command, cfg *UpdateConfig) {
cmd.Flags().Int32Var(&cfg.servers, "servers", 1, "number of servers")
cmd.Flags().Int32Var(&cfg.agents, "agents", 0, "number of agents")
cmd.Flags().StringArrayVar(&cfg.labels, "labels", []string{}, "Labels to add to the cluster object (e.g. key=value)")
cmd.Flags().StringArrayVar(&cfg.annotations, "annotations", []string{}, "Annotations to add to the cluster object (e.g. key=value)")
cmd.Flags().StringVar(&cfg.version, "version", "", "k3s version")
cmd.Flags().BoolVarP(&cfg.noConfirm, "no-confirm", "y", false, "Skip interactive approval before applying update")
}
func updateAction(appCtx *AppContext, config *UpdateConfig) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
ctx := cmd.Context()
client := appCtx.Client
name := args[0]
if name == k3kcluster.ClusterInvalidName {
return errors.New("invalid cluster name")
}
namespace := appCtx.Namespace(name)
var virtualCluster v1beta1.Cluster
clusterKey := types.NamespacedName{Name: name, Namespace: appCtx.namespace}
if err := appCtx.Client.Get(ctx, clusterKey, &virtualCluster); err != nil {
if apierrors.IsNotFound(err) {
return fmt.Errorf("cluster %s not found in namespace %s", name, appCtx.namespace)
}
return fmt.Errorf("failed to fetch cluster: %w", err)
}
var changes []change
if cmd.Flags().Changed("version") && config.version != virtualCluster.Spec.Version {
currentVersion := virtualCluster.Spec.Version
if currentVersion == "" {
currentVersion = virtualCluster.Status.HostVersion
}
currentVersionSemver, err := semver.ParseTolerant(currentVersion)
if err != nil {
return fmt.Errorf("failed to parse current cluster version %w", err)
}
newVersionSemver, err := semver.ParseTolerant(config.version)
if err != nil {
return fmt.Errorf("failed to parse new cluster version %w", err)
}
if newVersionSemver.LT(currentVersionSemver) {
return fmt.Errorf("downgrading cluster version is not supported")
}
changes = append(changes, change{"Version", currentVersion, config.version})
virtualCluster.Spec.Version = config.version
}
if cmd.Flags().Changed("servers") {
var oldServers int32
if virtualCluster.Spec.Servers != nil {
oldServers = *virtualCluster.Spec.Servers
}
if oldServers != config.servers {
changes = append(changes, change{"Servers", fmt.Sprintf("%d", oldServers), fmt.Sprintf("%d", config.servers)})
virtualCluster.Spec.Servers = ptr.To(config.servers)
}
}
if cmd.Flags().Changed("agents") {
var oldAgents int32
if virtualCluster.Spec.Agents != nil {
oldAgents = *virtualCluster.Spec.Agents
}
if oldAgents != config.agents {
changes = append(changes, change{"Agents", fmt.Sprintf("%d", oldAgents), fmt.Sprintf("%d", config.agents)})
virtualCluster.Spec.Agents = ptr.To(config.agents)
}
}
var labelChanges []change
if cmd.Flags().Changed("labels") {
oldLabels := labels.Merge(nil, virtualCluster.Labels)
virtualCluster.Labels = labels.Merge(virtualCluster.Labels, parseKeyValuePairs(config.labels, "label"))
labelChanges = diffMaps(oldLabels, virtualCluster.Labels)
}
var annotationChanges []change
if cmd.Flags().Changed("annotations") {
oldAnnotations := labels.Merge(nil, virtualCluster.Annotations)
virtualCluster.Annotations = labels.Merge(virtualCluster.Annotations, parseKeyValuePairs(config.annotations, "annotation"))
annotationChanges = diffMaps(oldAnnotations, virtualCluster.Annotations)
}
if len(changes) == 0 && len(labelChanges) == 0 && len(annotationChanges) == 0 {
logrus.Info("No changes detected, skipping update")
return nil
}
logrus.Infof("Updating cluster '%s' in namespace '%s'", name, namespace)
printDiff(changes)
printMapDiff("Labels", labelChanges)
printMapDiff("Annotations", annotationChanges)
if !config.noConfirm {
if !confirmClusterUpdate(&virtualCluster) {
return nil
}
}
if err := client.Update(ctx, &virtualCluster); err != nil {
return err
}
logrus.Info("Cluster updated successfully")
return nil
}
}
func confirmClusterUpdate(cluster *v1beta1.Cluster) bool {
clusterDetails, err := getClusterDetails(cluster)
if err != nil {
logrus.Fatalf("unable to get cluster details: %v", err)
}
fmt.Printf("\nNew %s\n", clusterDetails)
fmt.Printf("\nDo you want to update the cluster? [y/N]: ")
scanner := bufio.NewScanner(os.Stdin)
if !scanner.Scan() {
if err := scanner.Err(); err != nil {
logrus.Errorf("Error reading input: %v", err)
}
return false
}
fmt.Printf("\n")
return strings.ToLower(strings.TrimSpace(scanner.Text())) == "y"
}

cli/cmds/diff_printer.go (new file)

@@ -0,0 +1,53 @@
package cmds
import "fmt"
type change struct {
field string
oldValue string
newValue string
}
func printDiff(changes []change) {
for _, c := range changes {
if c.oldValue == c.newValue {
continue
}
fmt.Printf("%s: %s -> %s\n", c.field, c.oldValue, c.newValue)
}
}
func printMapDiff(title string, changes []change) {
if len(changes) == 0 {
return
}
fmt.Printf("%s:\n", title)
for _, c := range changes {
switch c.oldValue {
case "":
fmt.Printf(" %s=%s (new)\n", c.field, c.newValue)
default:
fmt.Printf(" %s=%s -> %s=%s\n", c.field, c.oldValue, c.field, c.newValue)
}
}
}
func diffMaps(oldMap, newMap map[string]string) []change {
var changes []change
// Check for new and changed keys
for k, newVal := range newMap {
if oldVal, exists := oldMap[k]; exists {
if oldVal != newVal {
changes = append(changes, change{k, oldVal, newVal})
}
} else {
changes = append(changes, change{k, "", newVal})
}
}
return changes
}

View File

@@ -37,7 +37,7 @@ type GenerateKubeconfigConfig struct {
func NewKubeconfigCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "kubeconfig",
Short: "Manage kubeconfig for clusters",
Short: "Manage kubeconfig for clusters.",
}
cmd.AddCommand(
@@ -52,7 +52,7 @@ func NewKubeconfigGenerateCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "generate",
Short: "Generate kubeconfig for clusters",
Short: "Generate kubeconfig for clusters.",
RunE: generate(appCtx, cfg),
Args: cobra.NoArgs,
}

View File

@@ -7,7 +7,7 @@ import (
func NewPolicyCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "policy",
Short: "policy command",
Short: "K3k policy command.",
}
cmd.AddCommand(

View File

@@ -30,7 +30,7 @@ func NewPolicyCreateCmd(appCtx *AppContext) *cobra.Command {
cmd := &cobra.Command{
Use: "create",
Short: "Create new policy",
Short: "Create a new policy.",
Example: "k3kcli policy create [command options] NAME",
PreRunE: func(cmd *cobra.Command, args []string) error {
switch config.mode {
@@ -150,6 +150,8 @@ func bindPolicyToNamespaces(ctx context.Context, client client.Client, config *V
// no old policy, safe to update
if oldPolicy == "" {
ns.Labels[policy.PolicyNameLabelKey] = policyName
if err := client.Update(ctx, &ns); err != nil {
errs = append(errs, err)
} else {

View File

@@ -14,7 +14,7 @@ import (
func NewPolicyDeleteCmd(appCtx *AppContext) *cobra.Command {
return &cobra.Command{
Use: "delete",
Short: "Delete an existing policy",
Short: "Delete an existing policy.",
Example: "k3kcli policy delete [command options] NAME",
RunE: policyDeleteAction(appCtx),
Args: cobra.ExactArgs(1),

View File

@@ -15,7 +15,7 @@ import (
func NewPolicyListCmd(appCtx *AppContext) *cobra.Command {
return &cobra.Command{
Use: "list",
Short: "List all the existing policies",
Short: "List all existing policies.",
Example: "k3kcli policy list [command options]",
RunE: policyList(appCtx),
Args: cobra.NoArgs,

View File

@@ -36,7 +36,7 @@ func NewRootCmd() *cobra.Command {
rootCmd := &cobra.Command{
SilenceUsage: true,
Use: "k3kcli",
Short: "CLI for K3K",
Short: "CLI for K3K.",
Version: buildinfo.Version,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
InitializeConfig(cmd)

View File

@@ -4,7 +4,7 @@ This document provides advanced usage information for k3k, including detailed us
## Customizing the Cluster Resource
The `Cluster` resource provides a variety of fields for customizing the behavior of your virtual clusters. You can check the [CRD documentation](./crds/crd-docs.md) for the full specs.
The `Cluster` resource provides a variety of fields for customizing the behavior of your virtual clusters. You can check the [CRD documentation](./crds/crds.md) for the full specs.
**Note:** Most of these customization options can also be configured using the `k3kcli` tool. Refer to the [k3kcli](./cli/k3kcli.md) documentation for more details.
@@ -115,7 +115,7 @@ The `serverArgs` field allows you to specify additional arguments to be passed t
## Using the cli
You can check the [k3kcli documentation](./cli/cli-docs.md) for the full specs.
You can check the [k3kcli documentation](./cli/k3kcli.md) for the full specs.
### No storage provider:

View File

@@ -104,7 +104,7 @@ Common use cases for administrators leveraging VirtualClusterPolicy include:
The K3k controller actively monitors VirtualClusterPolicy resources and the corresponding Namespace bindings. When a VCP is applied or updated, the controller ensures that the defined configurations are enforced on the relevant virtual clusters and their associated resources within the targeted Namespaces.
For a deep dive into what VirtualClusterPolicy can do, along with more examples, check out the [VirtualClusterPolicy Concepts](./virtualclusterpolicy.md) page. For a full list of all the spec fields, see the [API Reference for VirtualClusterPolicy](./crds/crd-docs.md#virtualclusterpolicy).
For a deep dive into what VirtualClusterPolicy can do, along with more examples, check out the [VirtualClusterPolicy Concepts](./virtualclusterpolicy.md) page. For a full list of all the spec fields, see the [API Reference for VirtualClusterPolicy](./crds/crds.md#virtualclusterpolicy).
## Comparison and Trade-offs

docs/cli/convert.lua (new file)

@@ -0,0 +1,25 @@
local deleting_see_also = false
function Header(el)
-- If we hit "SEE ALSO", start deleting and remove the header itself
if pandoc.utils.stringify(el):upper() == "SEE ALSO" then
deleting_see_also = true
return {}
end
-- If we hit any other header, stop deleting
deleting_see_also = false
return el
end
function BulletList(el)
if deleting_see_also then
return {} -- Deletes the list of links
end
return el
end
function CodeBlock(el)
-- Forces the ---- separator
local content = "----\n" .. el.text .. "\n----\n\n"
return pandoc.RawBlock('asciidoc', content)
end

docs/cli/k3kcli.adoc (new file)

@@ -0,0 +1,317 @@
== k3kcli
CLI for K3K.
=== Options
----
--debug Turn on debug logs
-h, --help help for k3kcli
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli cluster
K3k cluster command.
=== Options
----
-h, --help help for cluster
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli cluster create
Create a new cluster.
----
k3kcli cluster create [flags]
----
=== Examples
----
k3kcli cluster create [command options] NAME
----
=== Options
----
--agent-args strings agents extra arguments
--agent-envs strings agents extra Envs
--agents int number of agents
--annotations stringArray Annotations to add to the cluster object (e.g. key=value)
--cluster-cidr string cluster CIDR
--custom-certs string The path for custom certificate directory
-h, --help help for create
--kubeconfig-server string override the kubeconfig server host
--labels stringArray Labels to add to the cluster object (e.g. key=value)
--mirror-host-nodes Mirror Host Cluster Nodes
--mode string k3k mode type (shared, virtual) (default "shared")
-n, --namespace string namespace of the k3k cluster
--persistence-type string persistence mode for the nodes (dynamic, ephemeral) (default "dynamic")
--policy string The policy to create the cluster in
--server-args strings servers extra arguments
--server-envs strings servers extra Envs
--servers int number of servers (default 1)
--service-cidr string service CIDR
--storage-class-name string storage class name for dynamic persistence type
--storage-request-size string storage size for dynamic persistence type
--timeout duration The timeout for waiting for the cluster to become ready (e.g., 10s, 5m, 1h). (default 3m0s)
--token string token of the cluster
--version string k3s version
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli cluster delete
Delete an existing cluster.
----
k3kcli cluster delete [flags]
----
=== Examples
----
k3kcli cluster delete [command options] NAME
----
=== Options
----
-h, --help help for delete
--keep-data keeps persistence volumes created for the cluster after deletion
-n, --namespace string namespace of the k3k cluster
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli cluster list
List all existing clusters.
----
k3kcli cluster list [flags]
----
=== Examples
----
k3kcli cluster list [command options]
----
=== Options
----
-h, --help help for list
-n, --namespace string namespace of the k3k cluster
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli cluster update
Update an existing cluster.
----
k3kcli cluster update [flags]
----
=== Examples
----
k3kcli cluster update [command options] NAME
----
=== Options
----
--agents int32 number of agents
--annotations stringArray Annotations to add to the cluster object (e.g. key=value)
-h, --help help for update
--labels stringArray Labels to add to the cluster object (e.g. key=value)
-n, --namespace string namespace of the k3k cluster
-y, --no-confirm Skip interactive approval before applying update
--servers int32 number of servers (default 1)
--version string k3s version
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli kubeconfig
Manage kubeconfig for clusters.
=== Options
----
-h, --help help for kubeconfig
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli kubeconfig generate
Generate kubeconfig for clusters.
----
k3kcli kubeconfig generate [flags]
----
=== Options
----
--altNames strings altNames of the generated certificates for the kubeconfig
--cn string Common name (CN) of the generated certificates for the kubeconfig (default "system:admin")
--config-name string the name of the generated kubeconfig file
--expiration-days int Expiration date of the certificates used for the kubeconfig (default 365)
-h, --help help for generate
--kubeconfig-server string override the kubeconfig server host
--name string cluster name
-n, --namespace string namespace of the k3k cluster
--org strings Organization name (ORG) of the generated certificates for the kubeconfig
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
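For example (the cluster name and output file name are illustrative):
----
k3kcli kubeconfig generate --name demo --namespace k3k-demo --config-name demo-kubeconfig.yaml
----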
== k3kcli policy
K3k policy command.
=== Options
----
-h, --help help for policy
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli policy create
Create a new policy.
----
k3kcli policy create [flags]
----
=== Examples
----
k3kcli policy create [command options] NAME
----
=== Options
----
--annotations stringArray Annotations to add to the policy object (e.g. key=value)
-h, --help help for create
--labels stringArray Labels to add to the policy object (e.g. key=value)
--mode string The allowed mode type of the policy (default "shared")
--namespace strings The namespaces where to bind the policy
--overwrite Overwrite namespace binding of existing policy
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
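For example, to create a policy bound to two namespaces (names are illustrative):
----
k3kcli policy create --mode shared --namespace team-a --namespace team-b demo-policy
----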
== k3kcli policy delete
Delete an existing policy.
----
k3kcli policy delete [flags]
----
=== Examples
----
k3kcli policy delete [command options] NAME
----
=== Options
----
-h, --help help for delete
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----
== k3kcli policy list
List all existing policies.
----
k3kcli policy list [flags]
----
=== Examples
----
k3kcli policy list [command options]
----
=== Options
----
-h, --help help for list
----
=== Options inherited from parent commands
----
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
----

View File

@@ -1,6 +1,6 @@
## k3kcli
CLI for K3K
CLI for K3K.
### Options
@@ -12,7 +12,7 @@ CLI for K3K
### SEE ALSO
* [k3kcli cluster](k3kcli_cluster.md) - cluster command
* [k3kcli kubeconfig](k3kcli_kubeconfig.md) - Manage kubeconfig for clusters
* [k3kcli policy](k3kcli_policy.md) - policy command
* [k3kcli cluster](k3kcli_cluster.md) - K3k cluster command.
* [k3kcli kubeconfig](k3kcli_kubeconfig.md) - Manage kubeconfig for clusters.
* [k3kcli policy](k3kcli_policy.md) - K3k policy command.

View File

@@ -1,6 +1,6 @@
## k3kcli cluster
cluster command
K3k cluster command.
### Options
@@ -17,8 +17,9 @@ cluster command
### SEE ALSO
* [k3kcli](k3kcli.md) - CLI for K3K
* [k3kcli cluster create](k3kcli_cluster_create.md) - Create new cluster
* [k3kcli cluster delete](k3kcli_cluster_delete.md) - Delete an existing cluster
* [k3kcli cluster list](k3kcli_cluster_list.md) - List all the existing cluster
* [k3kcli](k3kcli.md) - CLI for K3K.
* [k3kcli cluster create](k3kcli_cluster_create.md) - Create a new cluster.
* [k3kcli cluster delete](k3kcli_cluster_delete.md) - Delete an existing cluster.
* [k3kcli cluster list](k3kcli_cluster_list.md) - List all existing clusters.
* [k3kcli cluster update](k3kcli_cluster_update.md) - Update an existing cluster.

View File

@@ -1,6 +1,6 @@
## k3kcli cluster create
Create new cluster
Create a new cluster.
```
k3kcli cluster create [flags]
@@ -49,5 +49,5 @@ k3kcli cluster create [command options] NAME
### SEE ALSO
* [k3kcli cluster](k3kcli_cluster.md) - cluster command
* [k3kcli cluster](k3kcli_cluster.md) - K3k cluster command.

View File

@@ -1,6 +1,6 @@
## k3kcli cluster delete
Delete an existing cluster
Delete an existing cluster.
```
k3kcli cluster delete [flags]
@@ -29,5 +29,5 @@ k3kcli cluster delete [command options] NAME
### SEE ALSO
* [k3kcli cluster](k3kcli_cluster.md) - cluster command
* [k3kcli cluster](k3kcli_cluster.md) - K3k cluster command.

View File

@@ -1,6 +1,6 @@
## k3kcli cluster list
List all the existing cluster
List all existing clusters.
```
k3kcli cluster list [flags]
@@ -28,5 +28,5 @@ k3kcli cluster list [command options]
### SEE ALSO
* [k3kcli cluster](k3kcli_cluster.md) - cluster command
* [k3kcli cluster](k3kcli_cluster.md) - K3k cluster command.

View File

@@ -0,0 +1,38 @@
## k3kcli cluster update
Update an existing cluster.
```
k3kcli cluster update [flags]
```
### Examples
```
k3kcli cluster update [command options] NAME
```
### Options
```
--agents int32 number of agents
--annotations stringArray Annotations to add to the cluster object (e.g. key=value)
-h, --help help for update
--labels stringArray Labels to add to the cluster object (e.g. key=value)
-n, --namespace string namespace of the k3k cluster
-y, --no-confirm Skip interactive approval before applying update
--servers int32 number of servers (default 1)
--version string k3s version
```
### Options inherited from parent commands
```
--debug Turn on debug logs
--kubeconfig string kubeconfig path ($HOME/.kube/config or $KUBECONFIG if set)
```
### SEE ALSO
* [k3kcli cluster](k3kcli_cluster.md) - K3k cluster command.

View File

@@ -1,6 +1,6 @@
## k3kcli kubeconfig
Manage kubeconfig for clusters
Manage kubeconfig for clusters.
### Options
@@ -17,6 +17,6 @@ Manage kubeconfig for clusters
### SEE ALSO
* [k3kcli](k3kcli.md) - CLI for K3K
* [k3kcli kubeconfig generate](k3kcli_kubeconfig_generate.md) - Generate kubeconfig for clusters
* [k3kcli](k3kcli.md) - CLI for K3K.
* [k3kcli kubeconfig generate](k3kcli_kubeconfig_generate.md) - Generate kubeconfig for clusters.

View File

@@ -1,6 +1,6 @@
## k3kcli kubeconfig generate
Generate kubeconfig for clusters
Generate kubeconfig for clusters.
```
k3kcli kubeconfig generate [flags]
@@ -29,5 +29,5 @@ k3kcli kubeconfig generate [flags]
### SEE ALSO
* [k3kcli kubeconfig](k3kcli_kubeconfig.md) - Manage kubeconfig for clusters
* [k3kcli kubeconfig](k3kcli_kubeconfig.md) - Manage kubeconfig for clusters.

View File

@@ -1,6 +1,6 @@
## k3kcli policy
policy command
K3k policy command.
### Options
@@ -17,8 +17,8 @@ policy command
### SEE ALSO
* [k3kcli](k3kcli.md) - CLI for K3K
* [k3kcli policy create](k3kcli_policy_create.md) - Create new policy
* [k3kcli policy delete](k3kcli_policy_delete.md) - Delete an existing policy
* [k3kcli policy list](k3kcli_policy_list.md) - List all the existing policies
* [k3kcli](k3kcli.md) - CLI for K3K.
* [k3kcli policy create](k3kcli_policy_create.md) - Create a new policy.
* [k3kcli policy delete](k3kcli_policy_delete.md) - Delete an existing policy.
* [k3kcli policy list](k3kcli_policy_list.md) - List all existing policies.

View File

@@ -1,6 +1,6 @@
## k3kcli policy create
Create new policy
Create a new policy.
```
k3kcli policy create [flags]
@@ -32,5 +32,5 @@ k3kcli policy create [command options] NAME
### SEE ALSO
* [k3kcli policy](k3kcli_policy.md) - policy command
* [k3kcli policy](k3kcli_policy.md) - K3k policy command.

View File

@@ -1,6 +1,6 @@
## k3kcli policy delete
Delete an existing policy
Delete an existing policy.
```
k3kcli policy delete [flags]
@@ -27,5 +27,5 @@ k3kcli policy delete [command options] NAME
### SEE ALSO
* [k3kcli policy](k3kcli_policy.md) - policy command
* [k3kcli policy](k3kcli_policy.md) - K3k policy command.

View File

@@ -1,6 +1,6 @@
## k3kcli policy list
List all the existing policies
List all existing policies.
```
k3kcli policy list [flags]
@@ -27,5 +27,5 @@ k3kcli policy list [command options]
### SEE ALSO
* [k3kcli policy](k3kcli_policy.md) - policy command
* [k3kcli policy](k3kcli_policy.md) - K3k policy command.

View File

@@ -1,8 +1,8 @@
processor:
# RE2 regular expressions describing type fields that should be excluded from the generated documentation.
ignoreFields:
- "status$"
- "TypeMeta$"
- "status$"
- "TypeMeta$"
render:
# Version of Kubernetes to use when generating links to Kubernetes API documentation.

docs/crds/crds.adoc Normal file
View File

@@ -0,0 +1,691 @@
[id="k3k-api-reference"]
= API Reference
:revdate: "2006-01-02"
:page-revdate: {revdate}
:anchor_prefix: k8s-api
== Packages
- xref:{anchor_prefix}-k3k-io-v1beta1[$$k3k.io/v1beta1$$]
[id="{anchor_prefix}-k3k-io-v1beta1"]
== k3k.io/v1beta1
=== Resource Types
- xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-cluster[$$Cluster$$]
- xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterlist[$$ClusterList$$]
- xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicy[$$VirtualClusterPolicy$$]
- xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicylist[$$VirtualClusterPolicyList$$]
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-addon"]
=== Addon
Addon specifies a Secret containing YAML to be deployed on cluster startup.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`secretNamespace`* __string__ | SecretNamespace is the namespace of the Secret. + | |
| *`secretRef`* __string__ | SecretRef is the name of the Secret. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-cluster"]
=== Cluster
Cluster defines a virtual Kubernetes cluster managed by k3k.
It specifies the desired state of a virtual cluster, including version, node configuration, and networking.
k3k uses this to provision and manage these virtual clusters.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterlist[$$ClusterList$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`apiVersion`* __string__ | `k3k.io/v1beta1` | |
| *`kind`* __string__ | `Cluster` | |
| *`metadata`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#objectmeta-v1-meta[$$ObjectMeta$$]__ | Refer to Kubernetes API documentation for fields of `metadata`.
| |
| *`spec`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]__ | Spec defines the desired state of the Cluster. + | { } |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterlist"]
=== ClusterList
ClusterList is a list of Cluster resources.
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`apiVersion`* __string__ | `k3k.io/v1beta1` | |
| *`kind`* __string__ | `ClusterList` | |
| *`metadata`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#listmeta-v1-meta[$$ListMeta$$]__ | Refer to Kubernetes API documentation for fields of `metadata`.
| |
| *`items`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-cluster[$$Cluster$$] array__ | | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clustermode"]
=== ClusterMode
_Underlying type:_ _string_
ClusterMode is the possible provisioning mode of a Cluster.
_Validation:_
- Enum: [shared virtual]
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicyspec[$$VirtualClusterPolicySpec$$]
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterphase"]
=== ClusterPhase
_Underlying type:_ _string_
ClusterPhase is a high-level summary of the cluster's current lifecycle state.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterstatus[$$ClusterStatus$$]
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec"]
=== ClusterSpec
ClusterSpec defines the desired state of a virtual Kubernetes cluster.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-cluster[$$Cluster$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`version`* __string__ | Version is the K3s version to use for the virtual nodes. +
It should follow the K3s versioning convention (e.g., v1.28.2-k3s1). +
If not specified, the Kubernetes version of the host node will be used. + | |
| *`mode`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clustermode[$$ClusterMode$$]__ | Mode specifies the cluster provisioning mode: "shared" or "virtual". +
Defaults to "shared". This field is immutable. + | shared | Enum: [shared virtual] +
| *`servers`* __integer__ | Servers specifies the number of K3s pods to run in server (control plane) mode. +
Must be at least 1. Defaults to 1. + | 1 |
| *`agents`* __integer__ | Agents specifies the number of K3s pods to run in agent (worker) mode. +
Must be 0 or greater. Defaults to 0. +
This field is ignored in "shared" mode. + | 0 |
| *`clusterCIDR`* __string__ | ClusterCIDR is the CIDR range for pod IPs. +
Defaults to 10.42.0.0/16 in shared mode and 10.52.0.0/16 in virtual mode. +
This field is immutable. + | |
| *`serviceCIDR`* __string__ | ServiceCIDR is the CIDR range for service IPs. +
Defaults to 10.43.0.0/16 in shared mode and 10.53.0.0/16 in virtual mode. +
This field is immutable. + | |
| *`clusterDNS`* __string__ | ClusterDNS is the IP address for the CoreDNS service. +
Must be within the ServiceCIDR range. Defaults to 10.43.0.10. +
This field is immutable. + | |
| *`persistence`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistenceconfig[$$PersistenceConfig$$]__ | Persistence specifies options for persisting etcd data. +
Defaults to dynamic persistence, which uses a PersistentVolumeClaim to provide data persistence. +
A default StorageClass is required for dynamic persistence. + | |
| *`expose`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-exposeconfig[$$ExposeConfig$$]__ | Expose specifies options for exposing the API server. +
By default, it's only exposed as a ClusterIP. + | |
| *`nodeSelector`* __object (keys:string, values:string)__ | NodeSelector specifies node labels to constrain where server/agent pods are scheduled. +
In "shared" mode, this also applies to workloads. + | |
| *`priorityClass`* __string__ | PriorityClass specifies the priorityClassName for server/agent pods. +
In "shared" mode, this also applies to workloads. + | |
| *`tokenSecretRef`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#secretreference-v1-core[$$SecretReference$$]__ | TokenSecretRef is a Secret reference containing the token used by worker nodes to join the cluster. +
The Secret must have a "token" field in its data. + | |
| *`tlsSANs`* __string array__ | TLSSANs specifies subject alternative names for the K3s server certificate. + | |
| *`serverArgs`* __string array__ | ServerArgs specifies ordered key-value pairs for K3s server pods. +
Example: ["--tls-san=example.com"] + | |
| *`agentArgs`* __string array__ | AgentArgs specifies ordered key-value pairs for K3s agent pods. +
Example: ["--node-name=my-agent-node"] + | |
| *`serverEnvs`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#envvar-v1-core[$$EnvVar$$] array__ | ServerEnvs specifies a list of environment variables to set in the server pod. + | |
| *`agentEnvs`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#envvar-v1-core[$$EnvVar$$] array__ | AgentEnvs specifies a list of environment variables to set in the agent pod. + | |
| *`addons`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-addon[$$Addon$$] array__ | Addons specifies secrets containing raw YAML to deploy on cluster startup. + | |
| *`serverLimit`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcelist-v1-core[$$ResourceList$$]__ | ServerLimit specifies resource limits for server nodes. + | |
| *`workerLimit`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcelist-v1-core[$$ResourceList$$]__ | WorkerLimit specifies resource limits for agent nodes. + | |
| *`mirrorHostNodes`* __boolean__ | MirrorHostNodes controls whether node objects from the host cluster +
are mirrored into the virtual cluster. + | |
| *`customCAs`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-customcas[$$CustomCAs$$]__ | CustomCAs specifies the cert/key pairs for custom CA certificates. + | |
| *`sync`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]__ | Sync specifies the resource types that will be synced from the virtual cluster to the host cluster. + | { } |
| *`secretMounts`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-secretmount[$$SecretMount$$] array__ | SecretMounts specifies a list of secrets to mount into server and agent pods. +
Each entry defines a secret and its mount path within the pods. + | |
|===
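As a sketch of how these fields fit together, a minimal Cluster manifest might look like the following (all values are illustrative; only the field names and documented defaults come from the table above):
----
kubectl apply -f - <<'EOF'
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: demo
  namespace: k3k-demo
spec:
  mode: shared            # immutable; "shared" (default) or "virtual"
  servers: 1              # number of K3s server pods
  persistence:
    type: dynamic         # requires a default StorageClass
    storageRequestSize: 2G
EOF
----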
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-configmapsyncconfig"]
=== ConfigMapSyncConfig
ConfigMapSyncConfig specifies the sync options for ConfigMaps.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | true |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies the set of labels of the resources that will be synced. If empty, +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource"]
=== CredentialSource
CredentialSource defines where to get a credential from.
It can represent either a TLS key pair or a single private key.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsources[$$CredentialSources$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`secretName`* __string__ | The secret must contain specific keys based on the credential type: +
- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`. +
- For the ServiceAccountToken signing key: `tls.key`. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsources"]
=== CredentialSources
CredentialSources lists all the required credentials, including both
TLS key pairs and single signing keys.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-customcas[$$CustomCAs$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`serverCA`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | ServerCA specifies the server-ca cert/key pair. + | |
| *`clientCA`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | ClientCA specifies the client-ca cert/key pair. + | |
| *`requestHeaderCA`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | RequestHeaderCA specifies the request-header-ca cert/key pair. + | |
| *`etcdServerCA`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | ETCDServerCA specifies the etcd-server-ca cert/key pair. + | |
| *`etcdPeerCA`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | ETCDPeerCA specifies the etcd-peer-ca cert/key pair. + | |
| *`serviceAccountToken`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsource[$$CredentialSource$$]__ | ServiceAccountToken specifies the service-account-token key. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-customcas"]
=== CustomCAs
CustomCAs specifies the cert/key pairs for custom CA certificates.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled toggles this feature on or off. + | true |
| *`sources`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-credentialsources[$$CredentialSources$$]__ | Sources defines the sources for all required custom CA certificates. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-exposeconfig"]
=== ExposeConfig
ExposeConfig specifies options for exposing the API server.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`ingress`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-ingressconfig[$$IngressConfig$$]__ | Ingress specifies options for exposing the API server through an Ingress. + | |
| *`loadBalancer`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-loadbalancerconfig[$$LoadBalancerConfig$$]__ | LoadBalancer specifies options for exposing the API server through a LoadBalancer service. + | |
| *`nodePort`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-nodeportconfig[$$NodePortConfig$$]__ | NodePort specifies options for exposing the API server through NodePort. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-ingressconfig"]
=== IngressConfig
IngressConfig specifies options for exposing the API server through an Ingress.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-exposeconfig[$$ExposeConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`annotations`* __object (keys:string, values:string)__ | Annotations specifies annotations to add to the Ingress. + | |
| *`ingressClassName`* __string__ | IngressClassName specifies the IngressClass to use for the Ingress. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-ingresssyncconfig"]
=== IngressSyncConfig
IngressSyncConfig specifies the sync options for Ingresses.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | false |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies the set of labels of the resources that will be synced. If empty, +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-loadbalancerconfig"]
=== LoadBalancerConfig
LoadBalancerConfig specifies options for exposing the API server through a LoadBalancer service.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-exposeconfig[$$ExposeConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`serverPort`* __integer__ | ServerPort is the port on which the K3s server is exposed when type is LoadBalancer. +
If not specified, the default https 443 port will be allocated. +
If 0 or negative, the port will not be exposed. + | |
| *`etcdPort`* __integer__ | ETCDPort is the port on which the ETCD service is exposed when type is LoadBalancer. +
If not specified, the default etcd 2379 port will be allocated. +
If 0 or negative, the port will not be exposed. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-nodeportconfig"]
=== NodePortConfig
NodePortConfig specifies options for exposing the API server through NodePort.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-exposeconfig[$$ExposeConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`serverPort`* __integer__ | ServerPort is the port on each node on which the K3s server is exposed when type is NodePort. +
If not specified, a random port between 30000-32767 will be allocated. +
If out of range, the port will not be exposed. + | |
| *`etcdPort`* __integer__ | ETCDPort is the port on each node on which the ETCD service is exposed when type is NodePort. +
If not specified, a random port between 30000-32767 will be allocated. +
If out of range, the port will not be exposed. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistenceconfig"]
=== PersistenceConfig
PersistenceConfig specifies options for persisting etcd data.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`type`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistencemode[$$PersistenceMode$$]__ | Type specifies the persistence mode. + | dynamic |
| *`storageClassName`* __string__ | StorageClassName is the name of the StorageClass to use for the PVC. +
This field is only relevant in "dynamic" mode. + | |
| *`storageRequestSize`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#quantity-resource-api[$$Quantity$$]__ | StorageRequestSize is the requested size for the PVC. +
This field is only relevant in "dynamic" mode. + | 2G |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistencemode"]
=== PersistenceMode
_Underlying type:_ _string_
PersistenceMode is the storage mode of a Cluster.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistenceconfig[$$PersistenceConfig$$]
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistentvolumeclaimsyncconfig"]
=== PersistentVolumeClaimSyncConfig
PersistentVolumeClaimSyncConfig specifies the sync options for PersistentVolumeClaims.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | true |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies the set of labels of the resources that will be synced. If empty, +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-podsecurityadmissionlevel"]
=== PodSecurityAdmissionLevel
_Underlying type:_ _string_
PodSecurityAdmissionLevel is the policy level applied to the pods in the namespace.
_Validation:_
- Enum: [privileged baseline restricted]
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicyspec[$$VirtualClusterPolicySpec$$]
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-priorityclasssyncconfig"]
=== PriorityClassSyncConfig
PriorityClassSyncConfig specifies the sync options for PriorityClasses.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | false |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies the set of labels of the resources that will be synced. If empty, +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-secretmount"]
=== SecretMount
SecretMount defines a secret to be mounted into server or agent pods,
allowing for custom configurations, certificates, or other sensitive data.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`secretName`* __string__ | secretName is the name of the secret in the pod's namespace to use. +
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret + | |
| *`items`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#keytopath-v1-core[$$KeyToPath$$] array__ | items If unspecified, each key-value pair in the Data field of the referenced +
Secret will be projected into the volume as a file whose name is the +
key and content is the value. If specified, the listed keys will be +
projected into the specified paths, and unlisted keys will not be +
present. If a key is specified which is not present in the Secret, +
the volume setup will error unless it is marked optional. Paths must be +
relative and may not contain the '..' path or start with '..'. + | |
| *`defaultMode`* __integer__ | defaultMode is Optional: mode bits used to set permissions on created files by default. +
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. +
YAML accepts both octal and decimal values, JSON requires decimal values +
for mode bits. Defaults to 0644. +
Directories within the path are not affected by this setting. +
This might be in conflict with other options that affect the file +
mode, like fsGroup, and the result can be other mode bits set. + | |
| *`optional`* __boolean__ | optional specifies whether the Secret or its keys must be defined + | |
| *`mountPath`* __string__ | MountPath is the path within server and agent pods where the +
secret contents will be mounted. + | |
| *`subPath`* __string__ | SubPath is an optional path within the secret to mount instead of the root. +
When specified, only the specified key from the secret will be mounted as a file +
at MountPath, keeping the parent directory writable. + | |
| *`role`* __string__ | Role is the type of the k3k pod that will be used to mount the secret. +
This can be 'server', 'agent', or 'all' (for both). + | | Enum: [server agent all] +
|===
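For instance, a Cluster could mount an existing Secret into both server and agent pods as follows (the Secret name and mount path are assumptions for illustration):
----
kubectl apply -f - <<'EOF'
apiVersion: k3k.io/v1beta1
kind: Cluster
metadata:
  name: demo
  namespace: k3k-demo
spec:
  secretMounts:
    - secretName: registries-config   # assumed to already exist in k3k-demo
      mountPath: /etc/rancher/k3s     # path inside the server/agent pods
      role: all                       # mount into both servers and agents
EOF
----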
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-secretsyncconfig"]
=== SecretSyncConfig
SecretSyncConfig specifies the sync options for Secrets.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | true |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies the set of labels of the resources that will be synced. If empty, +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-servicesyncconfig"]
=== ServiceSyncConfig
ServiceSyncConfig specifies the sync options for Services.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`enabled`* __boolean__ | Enabled is an on/off switch for syncing resources. + | true |
| *`selector`* __object (keys:string, values:string)__ | Selector specifies the set of labels of the resources that will be synced. If empty, +
then all resources of the given type will be synced. + | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig"]
=== SyncConfig
SyncConfig defines the resource types that will be synced from the virtual cluster to the host cluster.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clusterspec[$$ClusterSpec$$]
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicyspec[$$VirtualClusterPolicySpec$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`services`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-servicesyncconfig[$$ServiceSyncConfig$$]__ | Services resources sync configuration. + | { enabled:true } |
| *`configMaps`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-configmapsyncconfig[$$ConfigMapSyncConfig$$]__ | ConfigMaps resources sync configuration. + | { enabled:true } |
| *`secrets`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-secretsyncconfig[$$SecretSyncConfig$$]__ | Secrets resources sync configuration. + | { enabled:true } |
| *`ingresses`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-ingresssyncconfig[$$IngressSyncConfig$$]__ | Ingresses resources sync configuration. + | { enabled:false } |
| *`persistentVolumeClaims`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-persistentvolumeclaimsyncconfig[$$PersistentVolumeClaimSyncConfig$$]__ | PersistentVolumeClaims resources sync configuration. + | { enabled:true } |
| *`priorityClasses`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-priorityclasssyncconfig[$$PriorityClassSyncConfig$$]__ | PriorityClasses resources sync configuration. + | { enabled:false } |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicy"]
=== VirtualClusterPolicy
VirtualClusterPolicy allows defining common configurations and constraints
for clusters in the namespaces bound to the policy.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicylist[$$VirtualClusterPolicyList$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`apiVersion`* __string__ | `k3k.io/v1beta1` | |
| *`kind`* __string__ | `VirtualClusterPolicy` | |
| *`metadata`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#objectmeta-v1-meta[$$ObjectMeta$$]__ | Refer to Kubernetes API documentation for fields of `metadata`.
| |
| *`spec`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicyspec[$$VirtualClusterPolicySpec$$]__ | Spec defines the desired state of the VirtualClusterPolicy. + | { } |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicylist"]
=== VirtualClusterPolicyList
VirtualClusterPolicyList is a list of VirtualClusterPolicy resources.
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`apiVersion`* __string__ | `k3k.io/v1beta1` | |
| *`kind`* __string__ | `VirtualClusterPolicyList` | |
| *`metadata`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#listmeta-v1-meta[$$ListMeta$$]__ | Refer to Kubernetes API documentation for fields of `metadata`.
| |
| *`items`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicy[$$VirtualClusterPolicy$$] array__ | | |
|===
[id="{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicyspec"]
=== VirtualClusterPolicySpec
VirtualClusterPolicySpec defines the desired state of a VirtualClusterPolicy.
_Appears In:_
* xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-virtualclusterpolicy[$$VirtualClusterPolicy$$]
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
| *`quota`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#resourcequotaspec-v1-core[$$ResourceQuotaSpec$$]__ | Quota specifies the resource limits for clusters governed by the policy. + | |
| *`limit`* __link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#limitrangespec-v1-core[$$LimitRangeSpec$$]__ | Limit specifies the LimitRange that will be applied to all pods within the VirtualClusterPolicy +
to set defaults and constraints (min/max). + | |
| *`defaultNodeSelector`* __object (keys:string, values:string)__ | DefaultNodeSelector specifies the node selector that applies to all clusters (server + agent) in the target Namespace. + | |
| *`defaultPriorityClass`* __string__ | DefaultPriorityClass specifies the priorityClassName applied to all pods of all clusters in the target Namespace. + | |
| *`allowedMode`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-clustermode[$$ClusterMode$$]__ | AllowedMode specifies the allowed cluster provisioning mode. Defaults to "shared". + | shared | Enum: [shared virtual] +
| *`disableNetworkPolicy`* __boolean__ | DisableNetworkPolicy indicates whether to disable the creation of a default network policy for cluster isolation. + | |
| *`podSecurityAdmissionLevel`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-podsecurityadmissionlevel[$$PodSecurityAdmissionLevel$$]__ | PodSecurityAdmissionLevel specifies the pod security admission level applied to the pods in the namespace. + | | Enum: [privileged baseline restricted] +
| *`sync`* __xref:{anchor_prefix}-github-com-rancher-k3k-pkg-apis-k3k-io-v1beta1-syncconfig[$$SyncConfig$$]__ | Sync specifies the resource types that will be synced from the virtual cluster to the host cluster. + | { } |
|===
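Tying these fields together, a small VirtualClusterPolicy might look like the sketch below (the quota values are illustrative):
----
kubectl apply -f - <<'EOF'
apiVersion: k3k.io/v1beta1
kind: VirtualClusterPolicy
metadata:
  name: demo-policy
spec:
  allowedMode: shared                  # clusters in bound namespaces must use "shared" mode
  podSecurityAdmissionLevel: baseline  # PSA level applied to the bound namespaces
  quota:
    hard:
      cpu: "4"
      memory: 8Gi
EOF
----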

View File

@@ -135,6 +135,7 @@ _Appears in:_
| `mirrorHostNodes` _boolean_ | MirrorHostNodes controls whether node objects from the host cluster<br />are mirrored into the virtual cluster. | | |
| `customCAs` _[CustomCAs](#customcas)_ | CustomCAs specifies the cert/key pairs for custom CA certificates. | | |
| `sync` _[SyncConfig](#syncconfig)_ | Sync specifies the resource types that will be synced from the virtual cluster to the host cluster. | \{ \} | |
| `secretMounts` _[SecretMount](#secretmount) array_ | SecretMounts specifies a list of secrets to mount into server and agent pods.<br />Each entry defines a secret and its mount path within the pods. | | |
@@ -170,7 +171,7 @@ _Appears in:_
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `secretName` _string_ | SecretName specifies the name of an existing secret to use.<br />The controller expects specific keys inside based on the credential type:<br />- For TLS pairs (e.g., ServerCA): 'tls.crt' and 'tls.key'.<br />- For ServiceAccountTokenKey: 'tls.key'. | | |
| `secretName` _string_ | The secret must contain specific keys based on the credential type:<br />- For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.<br />- For the ServiceAccountToken signing key: `tls.key`. | | |
#### CredentialSources
@@ -313,7 +314,7 @@ _Appears in:_
| --- | --- | --- | --- |
| `type` _[PersistenceMode](#persistencemode)_ | Type specifies the persistence mode. | dynamic | |
| `storageClassName` _string_ | StorageClassName is the name of the StorageClass to use for the PVC.<br />This field is only relevant in "dynamic" mode. | | |
| `storageRequestSize` _string_ | StorageRequestSize is the requested size for the PVC.<br />This field is only relevant in "dynamic" mode. | 2G | |
| `storageRequestSize` _[Quantity](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#quantity-resource-api)_ | StorageRequestSize is the requested size for the PVC.<br />This field is only relevant in "dynamic" mode. | 2G | |
#### PersistenceMode
@@ -377,6 +378,29 @@ _Appears in:_
| `selector` _object (keys:string, values:string)_ | Selector specifies the set of labels of the resources that will be synced. If empty,<br />then all resources of the given type will be synced. | | |
#### SecretMount
SecretMount defines a secret to be mounted into server or agent pods,
allowing for custom configurations, certificates, or other sensitive data.
_Appears in:_
- [ClusterSpec](#clusterspec)
| Field | Description | Default | Validation |
| --- | --- | --- | --- |
| `secretName` _string_ | secretName is the name of the secret in the pod's namespace to use.<br />More info: https://kubernetes.io/docs/concepts/storage/volumes#secret | | |
| `items` _[KeyToPath](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#keytopath-v1-core) array_ | items If unspecified, each key-value pair in the Data field of the referenced<br />Secret will be projected into the volume as a file whose name is the<br />key and content is the value. If specified, the listed keys will be<br />projected into the specified paths, and unlisted keys will not be<br />present. If a key is specified which is not present in the Secret,<br />the volume setup will error unless it is marked optional. Paths must be<br />relative and may not contain the '..' path or start with '..'. | | |
| `defaultMode` _integer_ | defaultMode is Optional: mode bits used to set permissions on created files by default.<br />Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.<br />YAML accepts both octal and decimal values, JSON requires decimal values<br />for mode bits. Defaults to 0644.<br />Directories within the path are not affected by this setting.<br />This might be in conflict with other options that affect the file<br />mode, like fsGroup, and the result can be other mode bits set. | | |
| `optional` _boolean_ | optional specifies whether the Secret or its keys must be defined | | |
| `mountPath` _string_ | MountPath is the path within server and agent pods where the<br />secret contents will be mounted. | | |
| `subPath` _string_ | SubPath is an optional path within the secret to mount instead of the root.<br />When specified, only the specified key from the secret will be mounted as a file<br />at MountPath, keeping the parent directory writable. | | |
| `role` _string_ | Role is the type of the k3k pod that will be used to mount the secret.<br />This can be 'server', 'agent', or 'all' (for both). | | Enum: [server agent all] <br /> |
#### SecretSyncConfig

View File

@@ -0,0 +1,19 @@
{{- define "gvDetails" -}}
{{- $gv := . -}}
[id="{{ asciidocGroupVersionID $gv | asciidocRenderAnchorID }}"]
== {{ $gv.GroupVersionString }}
{{ $gv.Doc }}
{{- if $gv.Kinds }}
=== Resource Types
{{- range $gv.SortedKinds }}
- {{ $gv.TypeForKind . | asciidocRenderTypeLink }}
{{- end }}
{{ end }}
{{ range $gv.SortedTypes }}
{{ template "type" . }}
{{ end }}
{{- end -}}

View File

@@ -0,0 +1,19 @@
{{- define "gvList" -}}
{{- $groupVersions := . -}}
[id="k3k-api-reference"]
= API Reference
:revdate: "2006-01-02"
:page-revdate: {revdate}
:anchor_prefix: k8s-api
== Packages
{{- range $groupVersions }}
- {{ asciidocRenderGVLink . }}
{{- end }}
{{ range $groupVersions }}
{{ template "gvDetails" . }}
{{ end }}
{{- end -}}

View File

@@ -0,0 +1,43 @@
{{- define "type" -}}
{{- $type := . -}}
{{- if asciidocShouldRenderType $type -}}
[id="{{ asciidocTypeID $type | asciidocRenderAnchorID }}"]
=== {{ $type.Name }}
{{ if $type.IsAlias }}_Underlying type:_ _{{ asciidocRenderTypeLink $type.UnderlyingType }}_{{ end }}
{{ $type.Doc }}
{{ if $type.Validation -}}
_Validation:_
{{- range $type.Validation }}
- {{ . }}
{{- end }}
{{- end }}
{{ if $type.References -}}
_Appears In:_
{{ range $type.SortedReferences }}
* {{ asciidocRenderTypeLink . }}
{{- end }}
{{- end }}
{{ if $type.Members -}}
[cols="25a,55a,10a,10a", options="header"]
|===
| Field | Description | Default | Validation
{{ if $type.GVK -}}
| *`apiVersion`* __string__ | `{{ $type.GVK.Group }}/{{ $type.GVK.Version }}` | |
| *`kind`* __string__ | `{{ $type.GVK.Kind }}` | |
{{ end -}}
{{ range $type.Members -}}
| *`{{ .Name }}`* __{{ asciidocRenderType .Type }}__ | {{ template "type_members" . }} | {{ .Default }} | {{ range .Validation -}} {{ asciidocRenderValidation . }} +
{{ end }}
{{ end -}}
|===
{{ end -}}
{{- end -}}
{{- end -}}

View File

@@ -0,0 +1,8 @@
{{- define "type_members" -}}
{{- $field := . -}}
{{- if eq $field.Name "metadata" -}}
Refer to Kubernetes API documentation for fields of `metadata`.
{{ else -}}
{{ asciidocRenderFieldDoc $field.Doc }}
{{- end -}}
{{- end -}}

View File

@@ -11,6 +11,18 @@ To start developing K3k you will need:
- A running Kubernetes cluster
> [!IMPORTANT]
>
> Virtual clusters in shared mode need to have a configured storage provider, unless the `--persistence-type ephemeral` flag is used.
>
> To install the [`local-path-provisioner`](https://github.com/rancher/local-path-provisioner) and set it as the default storage class you can run:
>
> ```
> kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.34/deploy/local-path-storage.yaml
> kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
> ```
### TLDR
```shell
@@ -43,9 +55,13 @@ To see all the available Make commands you can run `make help`, i.e:
test-controller Run the controller tests (pkg/controller)
test-kubelet-controller Run the controller tests (pkg/controller)
test-e2e Run the e2e tests
test-cli Run the cli tests
generate Generate the CRDs specs
docs Build the CRDs and CLI docs
docs-crds Build the CRDs docs
docs-cli Build the CLI docs
lint Find any linting issues in the project
fmt Format source files in the project
validate Validate the project checking for any dependency or doc mismatch
install Install K3k with Helm on the targeted Kubernetes cluster
help Show this help.
@@ -80,7 +96,20 @@ Once you have your images available you can install K3k with the `make install`
## Tests
To run the tests you can just run `make test`, or one of the other available "sub-tests" targets (`test-unit`, `test-controller`, `test-e2e`).
To run the tests you can just run `make test`, or one of the other available "sub-tests" targets (`test-unit`, `test-controller`, `test-e2e`, `test-cli`).
When running the tests, the namespaces used are cleaned up. If you want to keep them for debugging, you can set the `KEEP_NAMESPACES` environment variable, e.g.:
```
KEEP_NAMESPACES=true make test-e2e
```
The e2e and cli tests run against the cluster configured in your KUBECONFIG environment variable. Running the tests with the `K3K_DOCKER_INSTALL` environment variable set will use `testcontainers` instead:
```
K3K_DOCKER_INSTALL=true make test-e2e
```
We use [Ginkgo](https://onsi.github.io/ginkgo/), and [`envtest`](https://book.kubebuilder.io/reference/envtest) for testing the controllers.
@@ -153,3 +182,7 @@ Last thing to do is to get the kubeconfig to connect to the virtual cluster we'v
```bash
k3kcli kubeconfig generate --name mycluster --namespace k3k-mycluster --kubeconfig-server localhost:30001
```
> [!IMPORTANT]
> Because of a technical limitation, it is not possible to create virtual clusters in `virtual` mode with K3d or any other dockerized environment (Kind, Minikube)

View File

@@ -3,8 +3,8 @@
This guide walks through the various ways to create and manage virtual clusters in K3K. We'll cover common use cases using both the **Custom Resource Definitions (CRDs)** and the **K3K CLI**, so you can choose the method that fits your workflow.
> 📘 For full reference:
> - [CRD Reference Documentation](../crds/crd-docs.md)
> - [CLI Reference Documentation](../cli/cli-docs.md)
> - [CRD Reference Documentation](../crds/crds.md)
> - [CLI Reference Documentation](../cli/k3kcli.md)
> - [Full example](../advanced-usage.md)
> [!NOTE]

View File

@@ -143,5 +143,5 @@ spec:
## Further Reading
* For a complete reference of all `VirtualClusterPolicy` spec fields, see the [API Reference for VirtualClusterPolicy](./crds/crd-docs.md#virtualclusterpolicy).
* For a complete reference of all `VirtualClusterPolicy` spec fields, see the [API Reference for VirtualClusterPolicy](./crds/crds.md#virtualclusterpolicy).
* To understand how VCPs fit into the overall K3k system, see the [Architecture](./architecture.md) document.

go.mod
View File

@@ -1,55 +1,50 @@
module github.com/rancher/k3k
go 1.24.10
go 1.25
replace (
github.com/google/cel-go => github.com/google/cel-go v0.20.1
github.com/prometheus/client_golang => github.com/prometheus/client_golang v1.16.0
github.com/prometheus/client_model => github.com/prometheus/client_model v0.6.1
github.com/prometheus/common => github.com/prometheus/common v0.64.0
golang.org/x/term => golang.org/x/term v0.15.0
)
toolchain go1.25.6
require (
github.com/go-logr/logr v1.4.2
github.com/blang/semver/v4 v4.0.0
github.com/go-logr/logr v1.4.3
github.com/go-logr/zapr v1.3.0
github.com/google/go-cmp v0.7.0
github.com/onsi/ginkgo/v2 v2.21.0
github.com/onsi/gomega v1.36.0
github.com/rancher/dynamiclistener v1.27.5
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.10.1
github.com/sirupsen/logrus v1.9.4
github.com/spf13/cobra v1.10.2
github.com/spf13/pflag v1.0.10
github.com/spf13/viper v1.21.0
github.com/stretchr/testify v1.11.1
github.com/testcontainers/testcontainers-go v0.35.0
github.com/testcontainers/testcontainers-go/modules/k3s v0.35.0
github.com/testcontainers/testcontainers-go v0.40.0
github.com/testcontainers/testcontainers-go/modules/k3s v0.40.0
github.com/virtual-kubelet/virtual-kubelet v1.11.1-0.20250530103808-c9f64e872803
go.etcd.io/etcd/api/v3 v3.5.16
go.etcd.io/etcd/client/v3 v3.5.16
go.uber.org/zap v1.27.0
go.etcd.io/etcd/api/v3 v3.5.21
go.etcd.io/etcd/client/v3 v3.5.21
go.uber.org/zap v1.27.1
gopkg.in/yaml.v2 v2.4.0
helm.sh/helm/v3 v3.14.4
k8s.io/api v0.31.13
k8s.io/apiextensions-apiserver v0.31.13
k8s.io/apimachinery v0.31.13
k8s.io/apiserver v0.31.13
k8s.io/cli-runtime v0.31.13
k8s.io/client-go v0.31.13
k8s.io/component-base v0.31.13
k8s.io/component-helpers v0.31.13
k8s.io/kubectl v0.31.13
k8s.io/kubelet v0.31.13
k8s.io/kubernetes v1.31.13
helm.sh/helm/v3 v3.18.5
k8s.io/api v0.33.7
k8s.io/apiextensions-apiserver v0.33.7
k8s.io/apimachinery v0.33.7
k8s.io/apiserver v0.33.7
k8s.io/cli-runtime v0.33.7
k8s.io/client-go v0.33.7
k8s.io/component-base v0.33.7
k8s.io/component-helpers v0.33.7
k8s.io/kubectl v0.33.7
k8s.io/kubelet v0.33.7
k8s.io/kubernetes v1.33.7
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738
sigs.k8s.io/controller-runtime v0.19.4
)
require (
dario.cat/mergo v1.0.1 // indirect
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
github.com/BurntSushi/toml v1.4.0 // indirect
cel.dev/expr v0.19.1 // indirect
dario.cat/mergo v1.0.2 // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/BurntSushi/toml v1.5.0 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver/v3 v3.3.0 // indirect
@@ -60,30 +55,27 @@ require (
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
github.com/containerd/containerd v1.7.24 // indirect
github.com/containerd/errdefs v0.3.0 // indirect
github.com/containerd/containerd v1.7.30 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/platforms v0.2.1 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cpuguy83/dockercfg v0.3.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.6 // indirect
github.com/cyphar/filepath-securejoin v0.3.6 // indirect
github.com/cyphar/filepath-securejoin v0.5.1 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/cli v25.0.1+incompatible // indirect
github.com/docker/distribution v2.8.3+incompatible // indirect
github.com/docker/docker v27.1.1+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/docker v28.5.1+incompatible // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/ebitengine/purego v0.8.4 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/evanphx/json-patch v5.9.0+incompatible // indirect
github.com/evanphx/json-patch v5.9.11+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/fatih/color v1.13.0 // indirect
@@ -104,33 +96,32 @@ require (
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/cel-go v0.22.0 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/cel-go v0.23.2 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gorilla/mux v1.8.1 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/gosuri/uitable v0.0.4 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.24.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/huandu/xstrings v1.5.0 // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmoiron/sqlx v1.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/magiconair/properties v1.8.10 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-isatty v0.0.17 // indirect
@@ -139,14 +130,13 @@ require (
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/locker v1.0.1 // indirect
github.com/moby/go-archive v0.1.0 // indirect
github.com/moby/patternmatcher v0.6.0 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/sys/mountinfo v0.7.2 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/sys/user v0.3.0 // indirect
github.com/moby/sys/sequential v0.6.0 // indirect
github.com/moby/sys/user v0.4.0 // indirect
github.com/moby/sys/userns v0.1.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/moby/term v0.5.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
@@ -154,21 +144,21 @@ require (
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/prometheus/client_golang v1.20.5 // indirect
github.com/prometheus/client_golang v1.22.0 // indirect
github.com/prometheus/client_model v0.6.2
github.com/prometheus/common v0.64.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/rubenv/sql-migrate v1.7.1 // indirect
github.com/rubenv/sql-migrate v1.8.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sagikazarmark/locafero v0.11.0 // indirect
github.com/shirou/gopsutil/v3 v3.23.12 // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/santhosh-tekuri/jsonschema/v6 v6.0.2 // indirect
github.com/shirou/gopsutil/v4 v4.25.6 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 // indirect
github.com/spf13/afero v1.15.0 // indirect
@@ -178,52 +168,52 @@ require (
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.16 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.21 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 // indirect
go.opentelemetry.io/otel v1.33.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0 // indirect
go.opentelemetry.io/otel/metric v1.33.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.58.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/otel/sdk v1.33.0 // indirect
go.opentelemetry.io/otel/trace v1.33.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.opentelemetry.io/proto/otlp v1.4.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.yaml.in/yaml/v2 v2.4.2 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.40.0 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/net v0.42.0 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sys v0.34.0 // indirect
golang.org/x/term v0.33.0 // indirect
golang.org/x/text v0.28.0 // indirect
golang.org/x/time v0.9.0 // indirect
golang.org/x/tools v0.35.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/text v0.31.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.org/x/tools v0.38.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8 // indirect
google.golang.org/grpc v1.67.3 // indirect
google.golang.org/grpc v1.68.1 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/controller-manager v0.33.7 // indirect
k8s.io/klog/v2 v2.130.1
k8s.io/kms v0.31.13 // indirect
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect
oras.land/oras-go v1.2.5 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.0 // indirect
k8s.io/kms v0.33.7 // indirect
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff // indirect
oras.land/oras-go/v2 v2.6.0 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.2 // indirect
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect
sigs.k8s.io/kustomize/api v0.18.0 // indirect
sigs.k8s.io/kustomize/kyaml v0.18.1 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.3 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
sigs.k8s.io/kustomize/api v0.19.0 // indirect
sigs.k8s.io/kustomize/kyaml v0.19.0 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 // indirect
sigs.k8s.io/yaml v1.5.0 // indirect
)

go.sum (545 changed lines)

@@ -1,17 +1,17 @@
cel.dev/expr v0.19.1 h1:NciYrtDRIR0lNCnH1LFJegdjspNx9fI59O7TWcua/W4=
cel.dev/expr v0.19.1/go.mod h1:MrpN08Q+lEBs+bGYdLxxHkZoUSsCp0nSKTs0nTymJgw=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go/compute/metadata v0.2.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
dario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s=
dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 h1:bvDV9vkmnHYOMsOr4WLk+Vo07yKIzd94sVoIqshQ4bU=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 h1:L/gRVlceqvL25UVaW/CKtUDjefjrs0SPonmDGUVOYP0=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8afgbRMd7mFxO99hRNu+6tazq8nFF9lIwo9JFroBk=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v1.4.0 h1:kuoIxZQy2WRRk1pttg9asf+WVv6tWQuBNVmK8+nqPr0=
github.com/BurntSushi/toml v1.4.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/DATA-DOG/go-sqlmock v1.5.2 h1:OcvFkGmslmlZibjAjaHm3L//6LiuBgolP7OputlJIzU=
github.com/DATA-DOG/go-sqlmock v1.5.2/go.mod h1:88MAG/4G7SMwSE3CeA0ZKzrT5CiOU3OJ+JlNzwDqpNU=
github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ=
@@ -26,14 +26,8 @@ github.com/Masterminds/squirrel v1.5.4 h1:uUcX/aBc8O7Fg9kaISIUsHXdKuqehiXAMQTYX8
github.com/Masterminds/squirrel v1.5.4/go.mod h1:NNaOrjSoIDfDA40n7sr2tPNZRfjzjA400rg+riTZj10=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/Microsoft/hcsshim v0.11.7 h1:vl/nj3Bar/CvJSYo7gIQPyRWc9f3c6IeSNavBTSZNZQ=
github.com/Microsoft/hcsshim v0.11.7/go.mod h1:MV8xMfmECjl5HdO7U/3/hFVnkmSBjAjmA09d4bExKcU=
github.com/NYTimes/gziphandler v1.1.1 h1:ZUDjpQae29j0ryrS0u/B8HZfJBtBQHjqw2rQ2cqUQ3I=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d h1:UrqY+r/OJnIp5u0s1SbQ8dVfLCZJsnvazdBP5hS4iRs=
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/alecthomas/kingpin/v2 v2.4.0/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/antlr4-go/antlr/v4 v4.13.0 h1:lxCg3LAv+EUK6t1i0y1V6/SLeUi0eKEKdhQAlS8TVTI=
github.com/antlr4-go/antlr/v4 v4.13.0/go.mod h1:pfChB/xh/Unjila75QW7+VU4TSnWnnk9UTnmpPaOR2g=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
@@ -48,30 +42,21 @@ github.com/bombsimon/logrusr/v3 v3.1.0 h1:zORbLM943D+hDMGgyjMhSAz/iDz86ZV72qaak/
github.com/bombsimon/logrusr/v3 v3.1.0/go.mod h1:PksPPgSFEL2I52pla2glgCyyd2OqOHAnFF5E+g8Ixco=
github.com/bshuster-repo/logrus-logstash-hook v1.0.0 h1:e+C0SB5R1pu//O4MQ3f9cFuPGoOVeF2fE4Og9otCc70=
github.com/bshuster-repo/logrus-logstash-hook v1.0.0/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd h1:rFt+Y/IK1aEZkEHchZRSq9OQbsSzIT/OrI8YFFmRIng=
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b h1:otBG+dV+YK+Soembjv71DPz3uX/V/6MMlSyD9JBQ6kQ=
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 h1:nvj0OLI3YqYXer/kZD8Ri1aaunCxIEsOst1BVJswV0o=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chai2010/gettext-go v1.0.2 h1:1Lwwip6Q2QGsAdl/ZKPCwTe9fe0CjlUbqj5bFNSjIRk=
github.com/chai2010/gettext-go v1.0.2/go.mod h1:y+wnP2cHYaVj19NZhYKAwEMH2CI1gNHeQQ+5AjwawxA=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/containerd/cgroups v1.1.0 h1:v8rEWFl6EoqHB+swVNjVoCJE8o3jX7e8nqBGPLaDFBM=
github.com/containerd/cgroups v1.1.0/go.mod h1:6ppBcbh/NOOUU+dMKrykgaBnK9lCIBxHqJDGwsa1mIw=
github.com/containerd/containerd v1.7.24 h1:zxszGrGjrra1yYJW/6rhm9cJ1ZQ8rkKBR48brqsa7nA=
github.com/containerd/containerd v1.7.24/go.mod h1:7QUzfURqZWCZV7RLNEn1XjUCQLEf0bkaK4GjUaZehxw=
github.com/containerd/continuity v0.4.2 h1:v3y/4Yz5jwnvqPKJJ+7Wf93fyWoCB3F5EclWG023MDM=
github.com/containerd/continuity v0.4.2/go.mod h1:F6PTNCKepoxEaXLQp3wDAjygEnImnZ/7o4JzpodfroQ=
github.com/containerd/errdefs v0.3.0 h1:FSZgGOeK4yuT/+DnF07/Olde/q4KBoMsaamhXxIMDp4=
github.com/containerd/errdefs v0.3.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/containerd v1.7.30 h1:/2vezDpLDVGGmkUXmlNPLCCNKHJ5BbC5tJB5JNzQhqE=
github.com/containerd/containerd v1.7.30/go.mod h1:fek494vwJClULlTpExsmOyKCMUAbuVjlFsJQc4/j44M=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=
github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/containerd/platforms v0.2.1 h1:zvwtM3rz2YHPQsF2CHYM8+KtB5dvhISiXh5ZpSBQv6A=
@@ -87,44 +72,44 @@ github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6N
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.18 h1:n56/Zwd5o6whRC5PMGretI4IdRLlmBXYNjScPaBgsbY=
github.com/creack/pty v1.1.18/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
github.com/cyphar/filepath-securejoin v0.3.6 h1:4d9N5ykBnSp5Xn2JkhocYDkOpURL/18CYMpo6xB9uWM=
github.com/cyphar/filepath-securejoin v0.3.6/go.mod h1:Sdj7gXlvMcPZsbhwhQ33GguGLDGQL7h7bg04C/+u9jI=
github.com/cyphar/filepath-securejoin v0.5.1 h1:eYgfMq5yryL4fbWfkLpFFy2ukSELzaJOTaUTuh+oF48=
github.com/cyphar/filepath-securejoin v0.5.1/go.mod h1:Sdj7gXlvMcPZsbhwhQ33GguGLDGQL7h7bg04C/+u9jI=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/distribution/distribution/v3 v3.0.0-20221208165359-362910506bc2 h1:aBfCb7iqHmDEIp6fBvC/hQUddQfg+3qdYjwzaiP9Hnc=
github.com/distribution/distribution/v3 v3.0.0-20221208165359-362910506bc2/go.mod h1:WHNsWjnIn2V1LYOrME7e8KxSeKunYHsxEm4am0BUtcI=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/distribution/distribution/v3 v3.0.0 h1:q4R8wemdRQDClzoNNStftB2ZAfqOiN6UX90KJc4HjyM=
github.com/distribution/distribution/v3 v3.0.0/go.mod h1:tRNuFoZsUdyRVegq8xGNeds4KLjwLCRin/tTo6i1DhU=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/cli v25.0.1+incompatible h1:mFpqnrS6Hsm3v1k7Wa/BO23oz0k121MTbTO1lpcGSkU=
github.com/docker/cli v25.0.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=
github.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v27.1.1+incompatible h1:hO/M4MtV36kzKldqnA37IWhebRA+LnqqcqDja6kVaKY=
github.com/docker/docker v27.1.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.7.0 h1:xtCHsjxogADNZcdv1pKUHXryefjlVRqWqIhk/uXJp0A=
github.com/docker/docker-credential-helpers v0.7.0/go.mod h1:rETQfLdHNT3foU5kuNkFR1R1V12OJRRO5lzt2D1b5X0=
github.com/docker/go-connections v0.5.0 h1:USnMq7hx7gwdVZq1L49hLXaFtUdTADjXGp+uj1Br63c=
github.com/docker/go-connections v0.5.0/go.mod h1:ov60Kzw0kKElRwhNs9UlUHAE/F9Fe6GLaXnqyDdmEXc=
github.com/dlclark/regexp2 v1.11.0 h1:G/nrcoOa7ZXlpoa/91N3X7mM3r8eIlMBBJZvsz/mxKI=
github.com/dlclark/regexp2 v1.11.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
github.com/docker/docker v28.5.1+incompatible h1:Bm8DchhSD2J6PsFzxC35TZo4TLGR2PdW/E69rU45NhM=
github.com/docker/docker v28.5.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.8.2 h1:bX3YxiGzFP5sOXWc3bTPEXdEaZSeVMrFgOr3T+zrFAo=
github.com/docker/docker-credential-helpers v0.8.2/go.mod h1:P3ci7E3lwkZg6XiHdRKft1KckHiO9a2rNtyFbZ/ry9M=
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-metrics v0.0.1 h1:AgB/0SvBxihN0X8OR4SjsblXkbMvalQ8cjmtKQ2rQV8=
github.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1 h1:ZClxb8laGDf5arXfYcAtECDFgAgHklGI8CxgjHnXKJ4=
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/ebitengine/purego v0.8.4 h1:CF7LEKg5FFOsASUj0+QwaXf8Ht6TlFxg09+S9wz0omw=
github.com/ebitengine/purego v0.8.4/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls=
github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v5.9.11+incompatible h1:ixHHqfcGvxhWkniF1tWxBHA0yb4Z+d1UQi45df52xW8=
github.com/evanphx/json-patch v5.9.11+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.9.0 h1:kcBlZQbplgElYIlo/n1hJbls2z/1awpXxpRi0/FOJfg=
github.com/evanphx/json-patch/v5 v5.9.0/go.mod h1:VNkHZ/282BpEyt/tObQO8s5CMPmYYq14uClGH4abBuQ=
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f h1:Wl78ApPPB2Wvf/TIe2xdyJxTlb6obmF18d8QdkxNDu4=
@@ -133,8 +118,8 @@ github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/foxcpp/go-mockdns v1.0.0 h1:7jBqxd3WDWwi/6WhDvacvH1XsN3rOLXyHM1uhvIx6FI=
github.com/foxcpp/go-mockdns v1.0.0/go.mod h1:lgRN6+KxQBawyIghpnl5CezHFGS9VLzvtVlwxvzXTQ4=
github.com/foxcpp/go-mockdns v1.1.0 h1:jI0rD8M0wuYAxL7r/ynTrCQQq0BVqfB99Vgk7DlmewI=
github.com/foxcpp/go-mockdns v1.1.0/go.mod h1:IhLeSFGed3mJIAXPH2aiRQB+kqz7oqu8ld2qVbOu7Wk=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
@@ -146,8 +131,8 @@ github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3Bop
github.com/go-gorp/gorp/v3 v3.1.0 h1:ItKF/Vbuj31dmV4jxA1qblpSwkl9g1typ24xoe70IGs=
github.com/go-gorp/gorp/v3 v3.1.0/go.mod h1:dLEjIyyRNiXvNZ8PSmzpt1GsWAUK8kjVhEpjH8TixEw=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-logr/zapr v1.3.0 h1:XGdV8XW8zdwFiwOA2Dryh1gj2KRQyOOoNmBy4EplIcQ=
@@ -173,15 +158,14 @@ github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJA
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
@@ -190,30 +174,22 @@ github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:W
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/gomodule/redigo v1.8.2 h1:H5XSIre1MB5NbPYFp+i1NBbb5qN1W8Y8YAQoAYbkm8k=
github.com/gomodule/redigo v1.8.2/go.mod h1:P9dn9mFrCBvWhGE1wpxx6fgq7BAeLBk+UUUzlpkBYO0=
github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
github.com/google/cel-go v0.20.1 h1:nDx9r8S3L4pE61eDdt8igGj8rf5kjYR3ILxWIpWNi84=
github.com/google/cel-go v0.20.1/go.mod h1:kWcIzTsPX0zmQ+H3TirHstLLf9ep5QTsZBN9u4dOYLg=
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/cel-go v0.23.2 h1:UdEe3CvQh3Nv+E/j9r1Y//WO0K0cSyD7/y0bzyLIMI4=
github.com/google/cel-go v0.23.2/go.mod h1:52Pb6QsDbC5kvgxvZhiL9QX1oZEkcUF/ZqaPx1J5Wwo=
github.com/google/gnostic-models v0.6.9 h1:MU/8wDLif2qCXZmzncUQ/BOfxWfthHi63KqpoNbWqVw=
github.com/google/gnostic-models v0.6.9/go.mod h1:CiWsm0s6BSQd1hRn8/QmxqB6BesYcbSZxsz9b0KuDBw=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -226,12 +202,12 @@ github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/handlers v1.5.1 h1:9lRY6j8DEeeBT10CvO9hGW0gmky0BprnvDI5vfhUHH4=
github.com/gorilla/handlers v1.5.1/go.mod h1:t8XrUpc4KVXb7HGyJ4/cEnwQiaxrX/hz1Zv/4g96P1Q=
github.com/gorilla/handlers v1.5.2 h1:cLTUSsNkgcwhgRqvCNmdbRWG0A3N4F+M2nWKdScwyEE=
github.com/gorilla/handlers v1.5.2/go.mod h1:dX+xVpaxdSw+q0Qek8SSsl3dfMk3jNddUkMzo0GtH0w=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/gosuri/uitable v0.0.4 h1:IG2xLKRvErL3uhY6e1BylFzG+aJiwQviDDTfOKeKTpY=
github.com/gosuri/uitable v0.0.4/go.mod h1:tKR86bXuXPZazfOTG1FIzvjIdXzd0mo4Vtn16vt0PJo=
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 h1:+ngKgrYPPJrOjhax5N+uePQ0Fh1Z7PheYoUI/0nzkPA=
@@ -242,35 +218,33 @@ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 h1:Ovs26xHkKqVztRpIrF/92Bcuy
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 h1:bkypFPDjIYGfCYD5mRBvpqxfYX1YCS1PXdKYWi8FsN0=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0/go.mod h1:P+Lt/0by1T8bfcF3z737NnSbmxQAppXMRziHUxPOC8k=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.24.0 h1:TmHmbvxPmaegwhDubVz0lICL0J5Ka2vwTzhoePEXsGE=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.24.0/go.mod h1:qztMSjm835F2bXf+5HKAPIS5qsmQDqZna/PgVt4rWtI=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/golang-lru/arc/v2 v2.0.5 h1:l2zaLDubNhW4XO3LnliVj0GXO3+/CGNJAg1dcN2Fpfw=
github.com/hashicorp/golang-lru/arc/v2 v2.0.5/go.mod h1:ny6zBSQZi2JxIeYcv7kt2sH2PXJtirBN7RDhRpxPkxU=
github.com/hashicorp/golang-lru/v2 v2.0.5 h1:wW7h1TG88eUIJ2i69gaE3uNVtEPIagzhGvHgwfx2Vm4=
github.com/hashicorp/golang-lru/v2 v2.0.5/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/huandu/xstrings v1.5.0 h1:2ag3IFq9ZDANvthTwTiqSSZLjDc+BedvHPAp5tJy2TI=
github.com/huandu/xstrings v1.5.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/imdario/mergo v0.3.13 h1:lFzP57bqS/wsqKssCGmtLAb8A0wKjLGrve2q3PPVcBk=
github.com/imdario/mergo v0.3.13/go.mod h1:4lJ1jqUDcsbIECGy0RUJAXNIhg+6ocWgb1ALK2O4oXg=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jmoiron/sqlx v1.4.0 h1:1PLqN7S1UYp5t4SrVVnt4nUVNemrDAtxlulVe+Qgm3o=
github.com/jmoiron/sqlx v1.4.0/go.mod h1:ZrZ7UsYB/weZdl2Bxg6jCRO9c3YHl8r3ahlKmRT4JLY=
github.com/jonboulle/clockwork v0.2.2 h1:UOGuzwb1PwsrDAObMuhUnj0p5ULPj8V/xJ7Kx9qUBdQ=
github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
github.com/jonboulle/clockwork v0.4.0 h1:p4Cf1aMWXnXAUh8lVfewRBx1zaTSYKrKMF2g3ST4RZ4=
github.com/jonboulle/clockwork v0.4.0/go.mod h1:xgRqUGwRcjKCO1vbZUEtSLrqKoPSsUpK7fnezOII0kc=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@@ -278,6 +252,8 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 h1:SOEGU9fKiNWd/HOJuq6+3iTQz8KNCLtVX6idSoTLdUw=
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0/go.mod h1:dXGbAdH5GtBTC4WfIxhKZfyBF/HBFgRZSWwZ9g/He9o=
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 h1:P6pPBnrTSX3DEVR4fDembhRWSsG5rVo6hYhAB/ADZrk=
@@ -288,8 +264,8 @@ github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de h1:9TO3cAIGXtEhn
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/magiconair/properties v1.8.7 h1:IeQXZAiQcpL9mgcAe1Nu6cX9LLw6ExEHKjN0VQdvPDY=
github.com/magiconair/properties v1.8.7/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE=
github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
@@ -304,9 +280,8 @@ github.com/mattn/go-runewidth v0.0.9 h1:Lm995f3rfxdpd6TSmuVCHVb/QhupuXlYr8sCI/Qd
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
github.com/mattn/go-sqlite3 v1.14.22 h1:2gZY6PC6kBnID23Tichd1K+Z0oS6nE/XwU+Vz/5o4kU=
github.com/mattn/go-sqlite3 v1.14.22/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/miekg/dns v1.1.25 h1:dFwPR6SfLtrSwgDcIq2bcU/gVutB4sNApq2HBdqcakg=
github.com/miekg/dns v1.1.25/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
github.com/miekg/dns v1.1.57 h1:Jzi7ApEIzwEPLHWRcafCN9LZSBbqQpxjt/wpgvg7wcM=
github.com/miekg/dns v1.1.57/go.mod h1:uqRjCRUuEAA6qsOiJvDd+CFo/vW+y5WR6SNmHE55hZk=
github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
@@ -315,22 +290,22 @@ github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zx
github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg=
github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/go-archive v0.1.0 h1:Kk/5rdW/g+H8NHdJW2gsXyZ7UnzvJNOy6VKJqueWdcQ=
github.com/moby/go-archive v0.1.0/go.mod h1:G9B+YoujNohJmrIYFBpSd54GTUB4lt9S+xVQvsJyFuo=
github.com/moby/patternmatcher v0.6.0 h1:GmP9lR19aU5GqSSFko+5pRqHi+Ohk1O69aFiKkVGiPk=
github.com/moby/patternmatcher v0.6.0/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
github.com/moby/spdystream v0.5.0 h1:7r0J1Si3QO/kjRitvSLVVFUjxMEb/YLj6S9FF62JBCU=
github.com/moby/spdystream v0.5.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI=
github.com/moby/sys/mountinfo v0.7.2 h1:1shs6aH5s4o5H2zQLn796ADW1wMrIwHsyJ2v9KouLrg=
github.com/moby/sys/mountinfo v0.7.2/go.mod h1:1YOa8w8Ih7uW0wALDUgT1dTTSBrZ+HiBLGws92L2RU4=
github.com/moby/sys/sequential v0.5.0 h1:OPvI35Lzn9K04PBbCLW0g4LcFAJgHsvXsRyewg5lXtc=
github.com/moby/sys/sequential v0.5.0/go.mod h1:tH2cOOs5V9MlPiXcQzRC+eEyab644PWKGRYaaV5ZZlo=
github.com/moby/sys/user v0.3.0 h1:9ni5DlcW5an3SvRSx4MouotOygvzaXbaSrc/wGDFWPo=
github.com/moby/sys/user v0.3.0/go.mod h1:bG+tYYYJgaMtRKgEmuueC0hJEAZWwtIbZTB+85uoHjs=
github.com/moby/sys/atomicwriter v0.1.0 h1:kw5D/EqkBwsBFi0ss9v1VG3wIkVhzGvLklJ+w3A14Sw=
github.com/moby/sys/atomicwriter v0.1.0/go.mod h1:Ul8oqv2ZMNHOceF643P6FKPXeCmYtlQMvpizfsSoaWs=
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
github.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko=
github.com/moby/sys/user v0.4.0 h1:jhcMKit7SA80hivmFJcbB1vqmw//wU61Zdui2eQXuMs=
github.com/moby/sys/user v0.4.0/go.mod h1:bG+tYYYJgaMtRKgEmuueC0hJEAZWwtIbZTB+85uoHjs=
github.com/moby/sys/userns v0.1.0 h1:tVLXkFOxVu9A64/yh59slHVv9ahO9UIev4JZusOLG/g=
github.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28=
github.com/moby/term v0.5.0 h1:xt8Q1nalod/v7BqbG21f8mQPqH+xAaC9C3N3wfWbVP0=
github.com/moby/term v0.5.0/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y=
github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -342,7 +317,6 @@ github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/onsi/ginkgo/v2 v2.21.0 h1:7rg/4f3rB88pb5obDgNZrNHrQ4e6WpjonchcpuBRnZM=
@@ -351,15 +325,14 @@ github.com/onsi/gomega v1.36.0 h1:Pb12RlruUtj4XUuPUqeEWc6j5DkVVVA49Uf6YLfC95Y=
github.com/onsi/gomega v1.36.0/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.0 h1:8SG7/vwALn54lVB/0yZ/MMwhFrPYtpEHQb2IpWsCzug=
github.com/opencontainers/image-spec v1.1.0/go.mod h1:W4s4sFTMaBeK1BQLXbG4AdM2szdn85PY75RI83NrTrM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/phayes/freeport v0.0.0-20220201140144-74d24b5ae9f5 h1:Ii+DKncOVM8Cu1Hc+ETb5K+23HdAMvESYE3ZJ5b5cMI=
github.com/phayes/freeport v0.0.0-20220201140144-74d24b5ae9f5/go.mod h1:iIss55rKnNBTvrwdmkUpLnDpZoAHvWaiq5+iMmen4AE=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -369,39 +342,41 @@ github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/poy/onpar v1.1.2 h1:QaNrNiZx0+Nar5dLgTVp5mXkyoVFIbepjyEoGSnhbAY=
github.com/poy/onpar v1.1.2/go.mod h1:6X8FLNoxyr9kkmnlqpK6LSoiOtrO6MICtWwEuWkLjzg=
github.com/prometheus/client_golang v1.16.0 h1:yk/hx9hDbrGHovbci4BY+pRMfSuuat626eFsHb7tmT8=
github.com/prometheus/client_golang v1.16.0/go.mod h1:Zsulrv/L9oM40tJ7T815tM89lFEugiJ9HzIqaAx4LKc=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.64.0 h1:pdZeA+g617P7oGv1CzdTzyeShxAGrTBsolKNOLQPGO4=
github.com/prometheus/common v0.64.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/prometheus/procfs v0.10.1/go.mod h1:nwNm2aOCAYw8uTR/9bWRREkZFxAUcWzPHWJq+XBB/FM=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/rancher/dynamiclistener v1.27.5 h1:FA/s9vbQzGz1Au3BuFvdbBfBBUmHGXGR3xoliwR4qfY=
github.com/rancher/dynamiclistener v1.27.5/go.mod h1:VqBaJNi+bZmre0+gi+2Jb6jbn7ovHzRueW+M7QhVKsk=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/redis/go-redis/extra/rediscmd/v9 v9.0.5 h1:EaDatTxkdHG+U3Bk4EUr+DZ7fOGwTfezUiUJMaIcaho=
github.com/redis/go-redis/extra/rediscmd/v9 v9.0.5/go.mod h1:fyalQWdtzDBECAQFBJuQe5bzQ02jGd5Qcbgb97Flm7U=
github.com/redis/go-redis/extra/redisotel/v9 v9.0.5 h1:EfpWLLCyXw8PSM2/XNJLjI3Pb27yVE+gIAfeqp8LUCc=
github.com/redis/go-redis/extra/redisotel/v9 v9.0.5/go.mod h1:WZjPDy7VNzn77AAfnAfVjZNvfJTYfPetfZk5yoSTLaQ=
github.com/redis/go-redis/v9 v9.7.3 h1:YpPyAayJV+XErNsatSElgRZZVCwXX9QzkKYNvO7x0wM=
github.com/redis/go-redis/v9 v9.7.3/go.mod h1:bGUrSggJ9X9GUmZpZNEOQKaANxSGgOEBRltRTZHSvrA=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/rubenv/sql-migrate v1.7.1 h1:f/o0WgfO/GqNuVg+6801K/KW3WdDSupzSjDYODmiUq4=
github.com/rubenv/sql-migrate v1.7.1/go.mod h1:Ob2Psprc0/3ggbM6wCzyYVFFuc6FyZrb2AS+ezLDFb4=
github.com/rubenv/sql-migrate v1.8.0 h1:dXnYiJk9k3wetp7GfQbKJcPHjVJL6YK19tKj8t2Ns0o=
github.com/rubenv/sql-migrate v1.8.0/go.mod h1:F2bGFBwCU+pnmbtNYDeKvSuvL6lBVtXDXUUv5t+u1qw=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sagikazarmark/locafero v0.11.0 h1:1iurJgmM9G3PA/I+wWYIOw/5SyBtxapeHDcg+AAIFXc=
github.com/sagikazarmark/locafero v0.11.0/go.mod h1:nVIGvgyzw595SUSUE6tvCp3YYTeHs15MvlmU87WwIik=
github.com/santhosh-tekuri/jsonschema/v6 v6.0.2 h1:KRzFb2m7YtdldCEkzs6KqmJw4nqEVZGK7IN2kJkjTuQ=
github.com/santhosh-tekuri/jsonschema/v6 v6.0.2/go.mod h1:JXeL+ps8p7/KNMjDQk3TCwPpBy0wYklyWTfbkIzdIFU=
github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ=
github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/shirou/gopsutil/v3 v3.23.12 h1:z90NtUkp3bMtmICZKpC4+WaknU1eXtp5vtbQ11DgpE4=
github.com/shirou/gopsutil/v3 v3.23.12/go.mod h1:1FrWgea594Jp7qmjHUUPlJDTPgcsb9mGnXDxavtikzM=
github.com/shoenig/go-m1cpu v0.1.6 h1:nxdKQNcEB6vzgA2E2bvzKIYRuNj7XNJ4S/aRSwKzFtM=
github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
github.com/shoenig/test v0.6.4 h1:kVTaSd7WLz5WZ2IaoM0RSzRsUD+m8wRR+5qvntpn4LU=
github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
github.com/shirou/gopsutil/v4 v4.25.6 h1:kLysI2JsKorfaFPcYmcJqbzROzsBWEOAtw6A7dIfqXs=
github.com/shirou/gopsutil/v4 v4.25.6/go.mod h1:PfybzyydfZcN+JMMjkF6Zb8Mq1A/VcogFFg7hj50W9c=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=
github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=
github.com/soheilhy/cmux v0.1.5 h1:jjzc5WVemNEDTLwv9tlmemhC73tI08BNOIGwBOo10Js=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 h1:+jumHNA0Wrelhe64i8F6HNlS8pkoyMv5sreGx2Ry5Rw=
@@ -410,8 +385,8 @@ github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
github.com/spf13/cast v1.10.0/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo=
github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
@@ -426,23 +401,19 @@ github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/testcontainers/testcontainers-go v0.35.0 h1:uADsZpTKFAtp8SLK+hMwSaa+X+JiERHtd4sQAFmXeMo=
github.com/testcontainers/testcontainers-go v0.35.0/go.mod h1:oEVBj5zrfJTrgjwONs1SsRbnBtH9OKl+IGl3UMcr2B4=
github.com/testcontainers/testcontainers-go/modules/k3s v0.35.0 h1:zEfdO1Dz7sA2jNpf1PVCOI6FND1t/mDpaeDCguaLRXw=
github.com/testcontainers/testcontainers-go/modules/k3s v0.35.0/go.mod h1:YWc+Yph4EvIXHsjRAwPezJEvQGoOFP1AEbfhrYrylAM=
github.com/testcontainers/testcontainers-go v0.40.0 h1:pSdJYLOVgLE8YdUY2FHQ1Fxu+aMnb6JfVz1mxk7OeMU=
github.com/testcontainers/testcontainers-go v0.40.0/go.mod h1:FSXV5KQtX2HAMlm7U3APNyLkkap35zNLxukw9oBi/MY=
github.com/testcontainers/testcontainers-go/modules/k3s v0.40.0 h1:3w6SjtIp/+FdpjWJCyPqaGWknG2iU6MacEWA7hl0IqQ=
github.com/testcontainers/testcontainers-go/modules/k3s v0.40.0/go.mod h1:1xJwmfO2g+XKox9LiJXKGCm1vWp7LozX+78UjXVRbF0=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
@@ -453,87 +424,95 @@ github.com/virtual-kubelet/virtual-kubelet v1.11.1-0.20250530103808-c9f64e872803
github.com/virtual-kubelet/virtual-kubelet v1.11.1-0.20250530103808-c9f64e872803/go.mod h1:SHfH2bqArcMTBh/JejdbtsyZwmYYqkpJnABOyipjT54=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74=
github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5Qo6v2eYzo7kUS51QINcR5jNpbZS8=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xiang90/probing v0.0.0-20221125231312-a49e3df8f510 h1:S2dVYn90KE98chqDkyE9Z4N61UnQd+KOfgp5Iu53llk=
github.com/xiang90/probing v0.0.0-20221125231312-a49e3df8f510/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v1.2.0 h1:HzHnuAF1plUN2zGlAFHbSQP2qJ0ZAD3XF5XD7OesXRQ=
github.com/xlab/treeprint v1.2.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yusufpapurcu/wmi v1.2.3 h1:E1ctvB7uKFMOJw3fdOW32DwGE9I7t++CRUEMKvFoFiw=
github.com/yusufpapurcu/wmi v1.2.3/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43 h1:+lm10QQTNSBd8DVTNGHx7o/IKu9HYDvLMffDhbyLccI=
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50 h1:hlE8//ciYMztlGpl/VA+Zm1AcTPHYkHJPbHqE6WJUXE=
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA=
github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f h1:ERexzlUfuTvpE74urLSbIQW0Z/6hF9t8U4NsJLaioAY=
github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg=
go.etcd.io/bbolt v1.3.10 h1:+BqfJTcCzTItrop8mq/lbzL8wSGtj94UO/3U31shqG0=
go.etcd.io/bbolt v1.3.10/go.mod h1:bK3UQLPJZly7IlNmV7uVHJDxfe5aK9Ll93e/74Y9oEQ=
go.etcd.io/etcd/api/v3 v3.5.16 h1:WvmyJVbjWqK4R1E+B12RRHz3bRGy9XVfh++MgbN+6n0=
go.etcd.io/etcd/api/v3 v3.5.16/go.mod h1:1P4SlIP/VwkDmGo3OlOD7faPeP8KDIFhqvciH5EfN28=
go.etcd.io/etcd/client/pkg/v3 v3.5.16 h1:ZgY48uH6UvB+/7R9Yf4x574uCO3jIx0TRDyetSfId3Q=
go.etcd.io/etcd/client/pkg/v3 v3.5.16/go.mod h1:V8acl8pcEK0Y2g19YlOV9m9ssUe6MgiDSobSoaBAM0E=
go.etcd.io/etcd/client/v2 v2.305.13 h1:RWfV1SX5jTU0lbCvpVQe3iPQeAHETWdOTb6pxhd77C8=
go.etcd.io/etcd/client/v2 v2.305.13/go.mod h1:iQnL7fepbiomdXMb3om1rHq96htNNGv2sJkEcZGDRRg=
go.etcd.io/etcd/client/v3 v3.5.16 h1:sSmVYOAHeC9doqi0gv7v86oY/BTld0SEFGaxsU9eRhE=
go.etcd.io/etcd/client/v3 v3.5.16/go.mod h1:X+rExSGkyqxvu276cr2OwPLBaeqFu1cIl4vmRjAD/50=
go.etcd.io/etcd/pkg/v3 v3.5.13 h1:st9bDWNsKkBNpP4PR1MvM/9NqUPfvYZx/YXegsYEH8M=
go.etcd.io/etcd/pkg/v3 v3.5.13/go.mod h1:N+4PLrp7agI/Viy+dUYpX7iRtSPvKq+w8Y14d1vX+m0=
go.etcd.io/etcd/raft/v3 v3.5.13 h1:7r/NKAOups1YnKcfro2RvGGo2PTuizF/xh26Z2CTAzA=
go.etcd.io/etcd/raft/v3 v3.5.13/go.mod h1:uUFibGLn2Ksm2URMxN1fICGhk8Wu96EfDQyuLhAcAmw=
go.etcd.io/etcd/server/v3 v3.5.13 h1:V6KG+yMfMSqWt+lGnhFpP5z5dRUj1BDRJ5k1fQ9DFok=
go.etcd.io/etcd/server/v3 v3.5.13/go.mod h1:K/8nbsGupHqmr5MkgaZpLlH1QdX1pcNQLAkODy44XcQ=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.etcd.io/bbolt v1.3.11 h1:yGEzV1wPz2yVCLsD8ZAiGHhHVlczyC9d1rP43/VCRJ0=
go.etcd.io/bbolt v1.3.11/go.mod h1:dksAq7YMXoljX0xu6VF5DMZGbhYYoLUalEiSySYAS4I=
go.etcd.io/etcd/api/v3 v3.5.21 h1:A6O2/JDb3tvHhiIz3xf9nJ7REHvtEFJJ3veW3FbCnS8=
go.etcd.io/etcd/api/v3 v3.5.21/go.mod h1:c3aH5wcvXv/9dqIw2Y810LDXJfhSYdHQ0vxmP3CCHVY=
go.etcd.io/etcd/client/pkg/v3 v3.5.21 h1:lPBu71Y7osQmzlflM9OfeIV2JlmpBjqBNlLtcoBqUTc=
go.etcd.io/etcd/client/pkg/v3 v3.5.21/go.mod h1:BgqT/IXPjK9NkeSDjbzwsHySX3yIle2+ndz28nVsjUs=
go.etcd.io/etcd/client/v2 v2.305.21 h1:eLiFfexc2mE+pTLz9WwnoEsX5JTTpLCYVivKkmVXIRA=
go.etcd.io/etcd/client/v2 v2.305.21/go.mod h1:OKkn4hlYNf43hpjEM3Ke3aRdUkhSl8xjKjSf8eCq2J8=
go.etcd.io/etcd/client/v3 v3.5.21 h1:T6b1Ow6fNjOLOtM0xSoKNQt1ASPCLWrF9XMHcH9pEyY=
go.etcd.io/etcd/client/v3 v3.5.21/go.mod h1:mFYy67IOqmbRf/kRUvsHixzo3iG+1OF2W2+jVIQRAnU=
go.etcd.io/etcd/pkg/v3 v3.5.21 h1:jUItxeKyrDuVuWhdh0HtjUANwyuzcb7/FAeUfABmQsk=
go.etcd.io/etcd/pkg/v3 v3.5.21/go.mod h1:wpZx8Egv1g4y+N7JAsqi2zoUiBIUWznLjqJbylDjWgU=
go.etcd.io/etcd/raft/v3 v3.5.21 h1:dOmE0mT55dIUsX77TKBLq+RgyumsQuYeiRQnW/ylugk=
go.etcd.io/etcd/raft/v3 v3.5.21/go.mod h1:fmcuY5R2SNkklU4+fKVBQi2biVp5vafMrWUEj4TJ4Cs=
go.etcd.io/etcd/server/v3 v3.5.21 h1:9w0/k12majtgarGmlMVuhwXRI2ob3/d1Ik3X5TKo0yU=
go.etcd.io/etcd/server/v3 v3.5.21/go.mod h1:G1mOzdwuzKT1VRL7SqRchli/qcFrtLBTAQ4lV20sXXo=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0 h1:r6I7RJCN86bpD/FQwedZ0vSixDpwuWREjW9oRMsmqDc=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0/go.mod h1:B9yO6b04uB80CzjedvewuqDhxJxi11s7/GtiGa8bAjI=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 h1:TT4fX+nBOA/+LUkobKGW1ydGcn+G3vRw9+g5HwCphpk=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0/go.mod h1:L7UH0GbB0p47T4Rri3uHjbpCFYrVrwc1I25QhNPiGK8=
go.opentelemetry.io/otel v1.33.0 h1:/FerN9bax5LoK51X/sI0SVYrjSE0/yUL7DpxW4K3FWw=
go.opentelemetry.io/otel v1.33.0/go.mod h1:SUUkR6csvUQl+yjReHu5uM3EtVV7MBm5FHKRlNx4I8I=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 h1:3Q/xZUyC1BBkualc9ROb4G8qkH90LXEIICcs5zv1OYY=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0/go.mod h1:s75jGIWA9OfCMzF0xr+ZgfrB5FEbbV7UuYo32ahUiFI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0 h1:qFffATk0X+HD+f1Z8lswGiOQYKHRlzfmdJm0wEaVrFA=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.27.0/go.mod h1:MOiCmryaYtc+V0Ei+Tx9o5S1ZjA7kzLucuVuyzBZloQ=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0 h1:IeMeyr1aBvBiPVYihXIaeIZba6b8E1bYp7lbdxK8CQg=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0/go.mod h1:oVdCUtjq9MK9BlS7TtucsQwUcXcymNiEDjgDD2jMtZU=
go.opentelemetry.io/otel/metric v1.33.0 h1:r+JOocAyeRVXD8lZpjdQjzMadVZp2M4WmQ+5WtEnklQ=
go.opentelemetry.io/otel/metric v1.33.0/go.mod h1:L9+Fyctbp6HFTddIxClbQkjtubW6O9QS3Ann/M82u6M=
go.opentelemetry.io/contrib/bridges/prometheus v0.57.0 h1:UW0+QyeyBVhn+COBec3nGhfnFe5lwB0ic1JBVjzhk0w=
go.opentelemetry.io/contrib/bridges/prometheus v0.57.0/go.mod h1:ppciCHRLsyCio54qbzQv0E4Jyth/fLWDTJYfvWpcSVk=
go.opentelemetry.io/contrib/exporters/autoexport v0.57.0 h1:jmTVJ86dP60C01K3slFQa2NQ/Aoi7zA+wy7vMOKD9H4=
go.opentelemetry.io/contrib/exporters/autoexport v0.57.0/go.mod h1:EJBheUMttD/lABFyLXhce47Wr6DPWYReCzaZiXadH7g=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.58.0 h1:PS8wXpbyaDJQ2VDHHncMe9Vct0Zn1fEjpsjrLxGJoSc=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.58.0/go.mod h1:HDBUsEjOuRC0EzKZ1bSaRGZWUBAzo+MhAcUUORSr4D0=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 h1:yd02MEjBdJkG3uabWP9apV+OuWRIXGDuJEUJbOHmCFU=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0/go.mod h1:umTcuxiv1n/s/S6/c2AT/g2CQ7u5C59sHDNmfSwgz7Q=
go.opentelemetry.io/otel v1.35.0 h1:xKWKPxrxB6OtMCbmMY021CqC45J+3Onta9MqjhnusiQ=
go.opentelemetry.io/otel v1.35.0/go.mod h1:UEqy8Zp11hpkUrL73gSlELM0DupHoiq72dR+Zqel/+Y=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.8.0 h1:WzNab7hOOLzdDF/EoWCt4glhrbMPVMOO5JYTmpz36Ls=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.8.0/go.mod h1:hKvJwTzJdp90Vh7p6q/9PAOd55dI6WA6sWj62a/JvSs=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.8.0 h1:S+LdBGiQXtJdowoJoQPEtI52syEP/JYBUpjO49EQhV8=
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.8.0/go.mod h1:5KXybFvPGds3QinJWQT7pmXf+TN5YIa7CNYObWRkj50=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.32.0 h1:j7ZSD+5yn+lo3sGV69nW04rRR0jhYnBwjuX3r0HvnK0=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.32.0/go.mod h1:WXbYJTUaZXAbYd8lbgGuvih0yuCfOFC5RJoYnoLcGz8=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.32.0 h1:t/Qur3vKSkUCcDVaSumWF2PKHt85pc7fRvFuoVT8qFU=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.32.0/go.mod h1:Rl61tySSdcOJWoEgYZVtmnKdA0GeKrSqkHC1t+91CH8=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0 h1:Vh5HayB/0HHfOQA7Ctx69E/Y/DcQSMPpKANYVMQ7fBA=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.33.0/go.mod h1:cpgtDBaqD/6ok/UG0jT15/uKjAY8mRA53diogHBg3UI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0 h1:5pojmb1U1AogINhN3SurB+zm/nIcusopeBNp42f45QM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.33.0/go.mod h1:57gTHJSE5S1tqg+EKsLPlTWhpHMsWlVmer+LA926XiA=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.32.0 h1:cMyu9O88joYEaI47CnQkxO1XZdpoTF9fEnW2duIddhw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.32.0/go.mod h1:6Am3rn7P9TVVeXYG+wtcGE7IE1tsQ+bP3AuWcKt/gOI=
go.opentelemetry.io/otel/exporters/prometheus v0.54.0 h1:rFwzp68QMgtzu9PgP3jm9XaMICI6TsofWWPcBDKwlsU=
go.opentelemetry.io/otel/exporters/prometheus v0.54.0/go.mod h1:QyjcV9qDP6VeK5qPyKETvNjmaaEc7+gqjh4SS0ZYzDU=
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.8.0 h1:CHXNXwfKWfzS65yrlB2PVds1IBZcdsX8Vepy9of0iRU=
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.8.0/go.mod h1:zKU4zUgKiaRxrdovSS2amdM5gOc59slmo/zJwGX+YBg=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.32.0 h1:SZmDnHcgp3zwlPBS2JX2urGYe/jBKEIT6ZedHRUyCz8=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.32.0/go.mod h1:fdWW0HtZJ7+jNpTKUR0GpMEDP69nR8YBJQxNiVCE3jk=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.32.0 h1:cC2yDI3IQd0Udsux7Qmq8ToKAx1XCilTQECZ0KDZyTw=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.32.0/go.mod h1:2PD5Ex6z8CFzDbTdOlwyNIUywRr1DN0ospafJM1wJ+s=
go.opentelemetry.io/otel/log v0.8.0 h1:egZ8vV5atrUWUbnSsHn6vB8R21G2wrKqNiDt3iWertk=
go.opentelemetry.io/otel/log v0.8.0/go.mod h1:M9qvDdUTRCopJcGRKg57+JSQ9LgLBrwwfC32epk5NX8=
go.opentelemetry.io/otel/metric v1.35.0 h1:0znxYu2SNyuMSQT4Y9WDWej0VpcsxkuklLa4/siN90M=
go.opentelemetry.io/otel/metric v1.35.0/go.mod h1:nKVFgxBZ2fReX6IlyW28MgZojkoAkJGaE8CpgeAU3oE=
go.opentelemetry.io/otel/sdk v1.33.0 h1:iax7M131HuAm9QkZotNHEfstof92xM+N8sr3uHXc2IM=
go.opentelemetry.io/otel/sdk v1.33.0/go.mod h1:A1Q5oi7/9XaMlIWzPSxLRWOI8nG3FnzHJNbiENQuihM=
go.opentelemetry.io/otel/trace v1.33.0 h1:cCJuF7LRjUFso9LPnEAHJDB2pqzp+hbO8eu1qqW2d/s=
go.opentelemetry.io/otel/trace v1.33.0/go.mod h1:uIcdVUZMpTAmz0tI1z04GoVSezK37CbGV4fr1f2nBck=
go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
go.opentelemetry.io/otel/sdk/log v0.8.0 h1:zg7GUYXqxk1jnGF/dTdLPrK06xJdrXgqgFLnI4Crxvs=
go.opentelemetry.io/otel/sdk/log v0.8.0/go.mod h1:50iXr0UVwQrYS45KbruFrEt4LvAdCaWWgIrsN3ZQggo=
go.opentelemetry.io/otel/sdk/metric v1.32.0 h1:rZvFnvmvawYb0alrYkjraqJq0Z4ZUJAiyYCU9snn1CU=
go.opentelemetry.io/otel/sdk/metric v1.32.0/go.mod h1:PWeZlq0zt9YkYAp3gjKZ0eicRYvOh1Gd+X99x6GHpCQ=
go.opentelemetry.io/otel/trace v1.35.0 h1:dPpEfJu1sDIqruz7BHFG3c7528f6ddfSWfFDVt/xgMs=
go.opentelemetry.io/otel/trace v1.35.0/go.mod h1:WUk7DtFp1Aw2MkvqGdwiXYDZZNvA/1J8o6xRXLrIkyc=
go.opentelemetry.io/proto/otlp v1.4.0 h1:TA9WRvW6zMwP+Ssb6fLoUIuirti1gGbP28GcKG1jgeg=
go.opentelemetry.io/proto/otlp v1.4.0/go.mod h1:PPBWZIP98o2ElSqI35IHfu7hIhSwvc5N38Jw8pXuGFY=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI=
go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw=
golang.org/x/crypto v0.40.0 h1:r4x+VvoG5Fm+eJcxMaY8CQM7Lb0l1lsmjGBQ6s8BfKM=
golang.org/x/crypto v0.40.0/go.mod h1:Qr1vMER5WyS2dfPHAlsOj01wgLbsyWtFn/aY+5+ZdxY=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
@@ -542,51 +521,29 @@ golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvx
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/net v0.42.0 h1:jzkYrhi3YQWD6MLBJcsklgQsoAcw89EcZbJw8Z614hs=
golang.org/x/net v0.42.0/go.mod h1:FF1RA5d3u7nAYA4z2TkclSCKh68eSXtiFwcWQpPXdt8=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.5.0/go.mod h1:9/XBHVqLaWO3/BRHs5jbpYCnOZVjj5V0ndyaAM7KB4I=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.2.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -594,42 +551,22 @@ golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.34.0 h1:H5Y5sJ2L2JRdyv7ROF1he/lPdvFsd0mJHFw2ThKHxLA=
golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.15.0 h1:y/Oo/a/q3IXu26lQgl04j/gjuBDOBlx7X6Om1j2CPW4=
golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/time v0.9.0 h1:EsRrnYcQiGH+5FfbgvV4AP7qEZstoyrHB0DzarOQ4ZY=
golang.org/x/time v0.9.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
@@ -638,12 +575,8 @@ golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBn
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.35.0 h1:mBffYraMEf7aa0sB+NuKnuCy8qI/9Bughn8dC2Gu5r0=
golang.org/x/tools v0.35.0/go.mod h1:NKdj5HkL/73byiZSJjqJgKn3ep7KjFkBOkR/Hps3VPw=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -652,12 +585,11 @@ gomodules.xyz/jsonpatch/v2 v2.4.0 h1:Ci3iUJyx9UeRx7CeFN8ARgGbkESwJK+KB9lLcWxY/Zw
gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20231211222908-989df2bf70f3 h1:1hfbdAfFbkmpg41000wDVqr7jUpK/Yo+LPnIxxGzmkg=
google.golang.org/genproto v0.0.0-20231211222908-989df2bf70f3/go.mod h1:5RBcpGRxr25RbDzY5w+dmaqpSEvl8Gwl1x2CICf60ic=
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 h1:KAeGQVN3M9nD0/bQXnr/ClcEMJ968gUXJQ9pwfSynuQ=
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80/go.mod h1:cc8bqMqtv9gMOr0zHg2Vzff5ULhhL2IXP4sbcn32Dro=
google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576 h1:CkkIfIt50+lT6NHAVoRYEyAvQGFM7xEwXUUywFvEb3Q=
google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576/go.mod h1:1R3kvZ1dtP3+4p4d3G8uJ8rFk/fWlScl38vanWACI08=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241223144023-3abc09e42ca8 h1:TqExAhdPaB60Ux47Cn0oLV07rGnxZzIsaRhQaqS666A=
@@ -667,8 +599,8 @@ google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyac
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.67.3 h1:OgPcDAFKHnH8X3O4WcO4XUc8GRDeKsKReqbQtiCj7N8=
google.golang.org/grpc v1.67.3/go.mod h1:YGaHCc6Oap+FzBJTZLBzkGSYt/cvGPFTPxkn7QfSU8s=
google.golang.org/grpc v1.68.1 h1:oI5oTa11+ng8r8XMMN7jAOmWfPZWbYpCFaMUTACxkM0=
google.golang.org/grpc v1.68.1/go.mod h1:+q1XYFJjShcqn0QZHvCyeR4CXPA+llXIeUIfIe00waw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -678,11 +610,6 @@ google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -694,64 +621,68 @@ gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools/v3 v3.5.1 h1:EENdUnS3pdur5nybKYIh2Vfgc8IUNBjxDPSjtiJcOzU=
gotest.tools/v3 v3.5.1/go.mod h1:isy3WKz7GK6uNw/sbHzfKBLvlvXwUyV06n6brMxxopU=
helm.sh/helm/v3 v3.14.4 h1:6FSpEfqyDalHq3kUr4gOMThhgY55kXUEjdQoyODYnrM=
helm.sh/helm/v3 v3.14.4/go.mod h1:Tje7LL4gprZpuBNTbG34d1Xn5NmRT3OWfBRwpOSer9I=
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
helm.sh/helm/v3 v3.18.5 h1:Cc3Z5vd6kDrZq9wO9KxKLNEickiTho6/H/dBNRVSos4=
helm.sh/helm/v3 v3.18.5/go.mod h1:L/dXDR2r539oPlFP1PJqKAC1CUgqHJDLkxKpDGrWnyg=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/api v0.31.13 h1:sco9Cq2pY4Ysv9qZiWzcR97MmA/35nwYQ/VCTzOcWmc=
k8s.io/api v0.31.13/go.mod h1:4D8Ry8RqqLDemNLwGYC6v5wOy51N7hitr4WQ6oSWfLY=
k8s.io/apiextensions-apiserver v0.31.13 h1:8xtWKVpV/YbYX0UX2k6w+cgxfxKhX0UNGuo/VXAdg8g=
k8s.io/apiextensions-apiserver v0.31.13/go.mod h1:zxpMLWXBxnJqKUIruJ+ulP+Xlfe5lPZPxq1z0cLwA2U=
k8s.io/apimachinery v0.31.13 h1:rkG0EiBkBkEzURo/8dKGx/oBF202Z2LqHuSD8Cm3bG4=
k8s.io/apimachinery v0.31.13/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo=
k8s.io/apiserver v0.31.13 h1:Ke9/X2m3vHSgsminpAbUxULDNMbvAfjrRX73Gqx6CZc=
k8s.io/apiserver v0.31.13/go.mod h1:5nBPhL2g7am/CS+/OI5A6+olEbo0C7tQ8QNDODLd+WY=
k8s.io/cli-runtime v0.31.13 h1:oz37PuIe4JyUDfTf8JKcZye1obyYAwF146gRpcj+AR8=
k8s.io/cli-runtime v0.31.13/go.mod h1:x6QI7U97fvrplKgd3JEvCpoZKR9AorjvDjBEr1GZG+g=
k8s.io/client-go v0.31.13 h1:Q0LG51uFbzNd9fzIj5ilA0Sm1wUholHvDaNwVKzqdCA=
k8s.io/client-go v0.31.13/go.mod h1:UB4yTzQeRAv+vULOKp2jdqA5LSwV55bvc3RQ5tM48LM=
k8s.io/component-base v0.31.13 h1:/uVLq7yHk9azReqeCFAZSr/8NXydzpz7yDZ6p/yiwBQ=
k8s.io/component-base v0.31.13/go.mod h1:uMXtKNyDqeNdZYL6SRCr9wB6FutL9pOlQmkK2dRVAKQ=
k8s.io/component-helpers v0.31.13 h1:Yy7j+Va7u6v0DXaKqMEOfIcq5pFnvzFcSGM58/lskeA=
k8s.io/component-helpers v0.31.13/go.mod h1:nXTLwkwCjXcrPG62D0IYiKuKi6JkFM2mBe2myrOUeug=
k8s.io/api v0.33.7 h1:Koh06KurzmXwCwe/DOaIiM1A8vEXTZ6B1tTDnmLLfxw=
k8s.io/api v0.33.7/go.mod h1:pu6qwFzTj0ijPbNYAbMgLFDEWgLFu2VUB6PVvQNtswc=
k8s.io/apiextensions-apiserver v0.33.7 h1:1J3CO3vsa645qKuhN8vdB1x3If5vuyH3uAWtLXZKkuQ=
k8s.io/apiextensions-apiserver v0.33.7/go.mod h1:WVsg48xGoaWz9vAREcbjfJqxFd1tpOcZoFutFBVC4DI=
k8s.io/apimachinery v0.33.7 h1:f1kF3V+Stdr+2IGB8QhrfZ6J9JkXF6e1gWX2wKP5slU=
k8s.io/apimachinery v0.33.7/go.mod h1:BHW0YOu7n22fFv/JkYOEfkUYNRN0fj0BlvMFWA7b+SM=
k8s.io/apiserver v0.33.7 h1:A+3bpgxp9PUy8SEqVCrq5BoFxwUujYYwkrTXpv621cU=
k8s.io/apiserver v0.33.7/go.mod h1:d7/iHfHmI7WF+z+xuMi+O1osC1lHv6irtPua/7yVPto=
k8s.io/cli-runtime v0.33.7 h1:WeWuUlmE8qB0g2vq1wTr5vYaVO2745VJZ3aP/U95OF0=
k8s.io/cli-runtime v0.33.7/go.mod h1:9QK4Lcj/nm2vM61pRLinzXbNsxvOZ0XC7dVGoMhm85I=
k8s.io/client-go v0.33.7 h1:sEcU4syZnbwaiGDctJE6G/IKsuays3wjEWGuyrD7M8c=
k8s.io/client-go v0.33.7/go.mod h1:0MEM10zY5dGdc3FdkyNCTKXiTr8P+2Vj65njzvE0Vhw=
k8s.io/component-base v0.33.7 h1:r3xd2l2lngeiOrQhpnD7CYtgbbrTDBnO3qyDUUfwTXw=
k8s.io/component-base v0.33.7/go.mod h1:3v7hH1NvNLID9BUBAR/FqM9StQ/Sa4yBDxEzE1yvGFg=
k8s.io/component-helpers v0.33.7 h1:m23GzIX36RHfKbumTQig8eobBMK7JG0iSekRGEFa1bs=
k8s.io/component-helpers v0.33.7/go.mod h1:RZw7qlcJdIYnoN7KPKGsVSaSgFPKM7xN2IcAdMX0uZ0=
k8s.io/controller-manager v0.33.7 h1:AATKJiqBhRc7IMH7KAWdem2xa2VRduAcVArdp5D04A8=
k8s.io/controller-manager v0.33.7/go.mod h1:IUJSup98WchXED3L/29z+WJAz2sEi10TC5s0obajkT0=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kms v0.31.13 h1:pJCG79BqdCmGetUsETwKfq+OE/D3M1DdqH14EKQl0lI=
k8s.io/kms v0.31.13/go.mod h1:OZKwl1fan3n3N5FFxnW5C4V3ygrah/3YXeJWS3O6+94=
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f h1:GA7//TjRY9yWGy1poLzYYJJ4JRdzg3+O6e8I+e+8T5Y=
k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f/go.mod h1:R/HEjbvWI0qdfb8viZUeVZm0X6IZnxAydC7YU42CMw4=
k8s.io/kubectl v0.31.13 h1:VcSyzFsZ7Fi991FzK80hy+9clUIhChbnQg2L6eZRQzA=
k8s.io/kubectl v0.31.13/go.mod h1:IxUKvsKrvqEL7NcaBCQCVDLzcYghu8b9yMiYKx8nYho=
k8s.io/kubelet v0.31.13 h1:wN9NXmj9DRFTMph1EhAtdQ6+UfEHKV3B7XMKcJr122c=
k8s.io/kubelet v0.31.13/go.mod h1:DxEqJViO7GE5dZXvEJGsP5HORNTSj9MhMQi1JDirCQs=
k8s.io/kubernetes v1.31.13 h1:c/YugS3TqG6YQMNRclrcWVabgIuqyap++lM5AuCtD5M=
k8s.io/kubernetes v1.31.13/go.mod h1:9xmT2buyTYj8TRKwRae7FcuY8k5+xlxv7VivvO0KKfs=
k8s.io/kms v0.33.7 h1:ckQz1NkobzSVXaiDi064exY+G5lUseiwsq8m/bXTcPo=
k8s.io/kms v0.33.7/go.mod h1:C1I8mjFFBNzfUZXYt9FZVJ8MJl7ynFbGgZFbBzkBJ3E=
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff h1:/usPimJzUKKu+m+TE36gUyGcf03XZEP0ZIKgKj35LS4=
k8s.io/kube-openapi v0.0.0-20250318190949-c8a335a9a2ff/go.mod h1:5jIi+8yX4RIb8wk3XwBo5Pq2ccx4FP10ohkbSKCZoK8=
k8s.io/kubectl v0.33.7 h1:qsiKBslMDfcSkvBsEpLKju7n9ZyCFcUkZ8lAq+jexVA=
k8s.io/kubectl v0.33.7/go.mod h1:4gHw8yanjdPbUGgEO0c9UVvLvOOY1UJ2la8T7Aq7EPc=
k8s.io/kubelet v0.33.7 h1:huNa5PQjUpFskjD2Q9Q+96Hk2+nzkY+T6EiB5k6p5sY=
k8s.io/kubelet v0.33.7/go.mod h1:QHPXSFQ4zeU2cvlxE3LliKcU0Mvy7ZcDTzbAPzonscc=
k8s.io/kubernetes v1.33.7 h1:Qhp1gwCPSOqt3du6A0uTGrrTcZDtShdSCIR5IZag16Y=
k8s.io/kubernetes v1.33.7/go.mod h1:eJiHC143tnNSvmDkCRwGNKA80yXqBvYC3U8L/i67nAY=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro=
k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
oras.land/oras-go v1.2.5 h1:XpYuAwAb0DfQsunIyMfeET92emK8km3W4yEzZvUbsTo=
oras.land/oras-go v1.2.5/go.mod h1:PuAwRShRZCsZb7g8Ar3jKKQR/2A/qN+pkYxIOd/FAoo=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.0 h1:CPT0ExVicCzcpeN4baWEV2ko2Z/AsiZgEdwgcfwLgMo=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.0/go.mod h1:Ve9uj1L+deCXFrPOk1LpFXqTg7LCFzFso6PA48q/XZw=
oras.land/oras-go/v2 v2.6.0 h1:X4ELRsiGkrbeox69+9tzTu492FMUu7zJQW6eJU+I2oc=
oras.land/oras-go/v2 v2.6.0/go.mod h1:magiQDfG6H1O9APp+rOsvCPcW1GD2MM7vgnKY0Y+u1o=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.2 h1:jpcvIRr3GLoUoEKRkHKSmGjxb6lWwrBlJsXc+eUYQHM=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.31.2/go.mod h1:Ve9uj1L+deCXFrPOk1LpFXqTg7LCFzFso6PA48q/XZw=
sigs.k8s.io/controller-runtime v0.19.4 h1:SUmheabttt0nx8uJtoII4oIP27BVVvAKFvdvGFwV/Qo=
sigs.k8s.io/controller-runtime v0.19.4/go.mod h1:iRmWllt8IlaLjvTTDLhRBXIEtkCK6hwVBJJsYS9Ajf4=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8=
sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo=
sigs.k8s.io/kustomize/api v0.18.0 h1:hTzp67k+3NEVInwz5BHyzc9rGxIauoXferXyjv5lWPo=
sigs.k8s.io/kustomize/api v0.18.0/go.mod h1:f8isXnX+8b+SGLHQ6yO4JG1rdkZlvhaCf/uZbLVMb0U=
sigs.k8s.io/kustomize/kyaml v0.18.1 h1:WvBo56Wzw3fjS+7vBjN6TeivvpbW9GmRaWZ9CIVmt4E=
sigs.k8s.io/kustomize/kyaml v0.18.1/go.mod h1:C3L2BFVU1jgcddNBE1TxuVLgS46TjObMwW5FT9FcjYo=
sigs.k8s.io/structured-merge-diff/v4 v4.4.3 h1:sCP7Vv3xx/CWIuTPVN38lUPx0uw0lcLfzaiDa8Ja01A=
sigs.k8s.io/structured-merge-diff/v4 v4.4.3/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4=
sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E=
sigs.k8s.io/kustomize/api v0.19.0 h1:F+2HB2mU1MSiR9Hp1NEgoU2q9ItNOaBJl0I4Dlus5SQ=
sigs.k8s.io/kustomize/api v0.19.0/go.mod h1:/BbwnivGVcBh1r+8m3tH1VNxJmHSk1PzP5fkP6lbL1o=
sigs.k8s.io/kustomize/kyaml v0.19.0 h1:RFge5qsO1uHhwJsu3ipV7RNolC7Uozc0jUBC/61XSlA=
sigs.k8s.io/kustomize/kyaml v0.19.0/go.mod h1:FeKD5jEOH+FbZPpqUghBP8mrLjJ3+zD3/rf9NNu1cwY=
sigs.k8s.io/randfill v0.0.0-20250304075658-069ef1bbf016/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0 h1:IUA9nvMmnKWcj5jl84xn+T5MnlZKThmUW1TdblaLVAc=
sigs.k8s.io/structured-merge-diff/v4 v4.6.0/go.mod h1:dDy58f92j70zLsuZVuUX5Wp9vtxXpaZnkPGWeqDfCps=
sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY=
sigs.k8s.io/yaml v1.5.0 h1:M10b2U7aEUY6hRtU870n2VTPgR5RZiL/I6Lcc2F4NUQ=
sigs.k8s.io/yaml v1.5.0/go.mod h1:wZs27Rbxoai4C0f8/9urLZtZtF3avA3gKvGyPdDqTO4=

View File

@@ -5,6 +5,7 @@ import (
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"k8s.io/component-helpers/storage/volume"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
@@ -12,6 +13,7 @@ import (
v1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
ctrl "sigs.k8s.io/controller-runtime"
ctrlruntimeclient "sigs.k8s.io/controller-runtime/pkg/client"
@@ -22,6 +24,7 @@ import (
const (
pvcControllerName = "pvc-syncer-controller"
pvcFinalizerName = "pvc.k3k.io/finalizer"
pseudoPVLabel = "pod.k3k.io/pseudoPV"
)
type PVCReconciler struct {
@@ -105,6 +108,12 @@ func (r *PVCReconciler) Reconcile(ctx context.Context, req reconcile.Request) (r
if err := r.HostClient.Delete(ctx, syncedPVC); err != nil && !apierrors.IsNotFound(err) {
return reconcile.Result{}, err
}
// delete the synced virtual PV
if err := r.VirtualClient.Delete(ctx, newPersistentVolume(&virtPVC)); err != nil && !apierrors.IsNotFound(err) {
return reconcile.Result{}, err
}
// remove the finalizer after cleaning up the synced pvc
if controllerutil.RemoveFinalizer(&virtPVC, pvcFinalizerName) {
if err := r.VirtualClient.Update(ctx, &virtPVC); err != nil {
@@ -127,7 +136,13 @@ func (r *PVCReconciler) Reconcile(ctx context.Context, req reconcile.Request) (r
// note that we don't need to update the PVC on the host cluster; we only sync the PVC so that it
// can be handled by the host cluster.
return reconcile.Result{}, ctrlruntimeclient.IgnoreAlreadyExists(r.HostClient.Create(ctx, syncedPVC))
if err := r.HostClient.Create(ctx, syncedPVC); err != nil && !apierrors.IsAlreadyExists(err) {
return reconcile.Result{}, err
}
// Create a virtual PV to bind the existing PVC in the virtual cluster - needed for scheduling of
// the consumer pods
return reconcile.Result{}, r.createVirtualPersistentVolume(ctx, virtPVC)
}
func (r *PVCReconciler) pvc(obj *v1.PersistentVolumeClaim) *v1.PersistentVolumeClaim {
@@ -136,3 +151,82 @@ func (r *PVCReconciler) pvc(obj *v1.PersistentVolumeClaim) *v1.PersistentVolumeC
return hostPVC
}
func (r *PVCReconciler) createVirtualPersistentVolume(ctx context.Context, pvc v1.PersistentVolumeClaim) error {
log := ctrl.LoggerFrom(ctx)
log.V(1).Info("Creating virtual PersistentVolume")
pv := newPersistentVolume(&pvc)
if err := r.VirtualClient.Create(ctx, pv); err != nil && !apierrors.IsAlreadyExists(err) {
return err
}
orig := pv.DeepCopy()
pv.Status = v1.PersistentVolumeStatus{
Phase: v1.VolumeBound,
}
if err := r.VirtualClient.Status().Patch(ctx, pv, ctrlruntimeclient.MergeFrom(orig)); err != nil {
return err
}
log.V(1).Info("Patch the status of PersistentVolumeClaim to Bound")
pvcPatch := pvc.DeepCopy()
if pvcPatch.Annotations == nil {
pvcPatch.Annotations = make(map[string]string)
}
pvcPatch.Annotations[volume.AnnBoundByController] = "yes"
pvcPatch.Annotations[volume.AnnBindCompleted] = "yes"
pvcPatch.Status.Phase = v1.ClaimBound
pvcPatch.Status.AccessModes = pvcPatch.Spec.AccessModes
return r.VirtualClient.Status().Update(ctx, pvcPatch)
}
func newPersistentVolume(obj *v1.PersistentVolumeClaim) *v1.PersistentVolume {
var storageClass string
if obj.Spec.StorageClassName != nil {
storageClass = *obj.Spec.StorageClassName
}
return &v1.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: obj.Name,
Labels: map[string]string{
pseudoPVLabel: "true",
},
Annotations: map[string]string{
volume.AnnBoundByController: "true",
volume.AnnDynamicallyProvisioned: "k3k-kubelet",
},
},
TypeMeta: metav1.TypeMeta{
Kind: "PersistentVolume",
APIVersion: "v1",
},
Spec: v1.PersistentVolumeSpec{
PersistentVolumeSource: v1.PersistentVolumeSource{
FlexVolume: &v1.FlexPersistentVolumeSource{
Driver: "pseudopv",
},
},
StorageClassName: storageClass,
VolumeMode: obj.Spec.VolumeMode,
PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimDelete,
AccessModes: obj.Spec.AccessModes,
Capacity: obj.Spec.Resources.Requests,
ClaimRef: &v1.ObjectReference{
APIVersion: obj.APIVersion,
UID: obj.UID,
ResourceVersion: obj.ResourceVersion,
Kind: obj.Kind,
Namespace: obj.Namespace,
Name: obj.Name,
},
},
}
}
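A quick illustrative sketch of what this constructor yields (not part of the change; it assumes the same package as newPersistentVolume, and the claim values and StorageClass are made up):

package syncer // assumed package name, matching the pod controller below

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// pseudoPVSketch is illustrative only; the claim values are hypothetical.
func pseudoPVSketch() {
	sc := "local-path" // hypothetical StorageClass
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "data", Namespace: "default"},
		Spec: v1.PersistentVolumeClaimSpec{
			StorageClassName: &sc,
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			Resources: v1.VolumeResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}

	pv := newPersistentVolume(pvc)

	// The placeholder FlexVolume driver satisfies the virtual scheduler
	// without provisioning anything on the host.
	fmt.Println(pv.Spec.PersistentVolumeSource.FlexVolume.Driver) // pseudopv

	capacity := pv.Spec.Capacity[v1.ResourceStorage]
	fmt.Println(capacity.String()) // 1Gi
}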

View File

@@ -55,7 +55,7 @@ var PVCTests = func() {
Expect(err).NotTo(HaveOccurred())
})
It("creates a pvc on the host cluster", func() {
It("creates a pvc on the host cluster and virtual pv in virtual cluster", func() {
ctx := context.Background()
pvc := &v1.PersistentVolumeClaim{
@@ -100,5 +100,11 @@ var PVCTests = func() {
Expect(*hostPVC.Spec.StorageClassName).To(Equal("test-sc"))
GinkgoWriter.Printf("labels: %v\n", hostPVC.Labels)
var virtualPV v1.PersistentVolume
key := client.ObjectKey{Name: pvc.Name}
err = virtTestEnv.k8sClient.Get(ctx, key, &virtualPV)
Expect(err).NotTo(HaveOccurred())
})
}

View File

@@ -1,215 +0,0 @@
package syncer
import (
"context"
"k8s.io/apimachinery/pkg/types"
"k8s.io/component-helpers/storage/volume"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
ctrl "sigs.k8s.io/controller-runtime"
ctrlruntimeclient "sigs.k8s.io/controller-runtime/pkg/client"
"github.com/rancher/k3k/k3k-kubelet/translate"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)
const (
podControllerName = "pod-pvc-controller"
pseudoPVLabel = "pod.k3k.io/pseudoPV"
)
type PodReconciler struct {
*SyncerContext
}
// AddPodPVCController adds the pod PVC controller to the k3k-kubelet
func AddPodPVCController(ctx context.Context, virtMgr, hostMgr manager.Manager, clusterName, clusterNamespace string) error {
// initialize a new Reconciler
reconciler := PodReconciler{
SyncerContext: &SyncerContext{
ClusterName: clusterName,
ClusterNamespace: clusterNamespace,
VirtualClient: virtMgr.GetClient(),
HostClient: hostMgr.GetClient(),
Translator: translate.ToHostTranslator{},
},
}
name := reconciler.Translator.TranslateName(clusterNamespace, podControllerName)
return ctrl.NewControllerManagedBy(virtMgr).
Named(name).
For(&v1.Pod{}).
WithEventFilter(predicate.NewPredicateFuncs(reconciler.filterResources)).
Complete(&reconciler)
}
func (r *PodReconciler) filterResources(object ctrlruntimeclient.Object) bool {
var cluster v1beta1.Cluster
ctx := context.Background()
if err := r.HostClient.Get(ctx, types.NamespacedName{Name: r.ClusterName, Namespace: r.ClusterNamespace}, &cluster); err != nil {
return false
}
// check for pvc config
syncConfig := cluster.Spec.Sync.PersistentVolumeClaims
// If PVC syncing is disabled, only process deletions to allow for cleanup.
return syncConfig.Enabled || object.GetDeletionTimestamp() != nil
}
func (r *PodReconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
log := ctrl.LoggerFrom(ctx).WithValues("cluster", r.ClusterName, "clusterNamespace", r.ClusterNamespace)
ctx = ctrl.LoggerInto(ctx, log)
var (
virtPod v1.Pod
cluster v1beta1.Cluster
)
if err := r.HostClient.Get(ctx, types.NamespacedName{Name: r.ClusterName, Namespace: r.ClusterNamespace}, &cluster); err != nil {
return reconcile.Result{}, err
}
if err := r.VirtualClient.Get(ctx, req.NamespacedName, &virtPod); err != nil {
return reconcile.Result{}, ctrlruntimeclient.IgnoreNotFound(err)
}
// reconcile pods with pvcs
for _, vol := range virtPod.Spec.Volumes {
if vol.PersistentVolumeClaim != nil {
log.Info("Handling pod with pvc")
if err := r.reconcilePodWithPVC(ctx, &virtPod, vol.PersistentVolumeClaim); err != nil {
return reconcile.Result{}, err
}
}
}
return reconcile.Result{}, nil
}
// reconcilePodWithPVC creates a fake PV for each PVC of a pod so that the pod can be scheduled on the virtual-kubelet
// and then created on the host; the PV is not synced to the host cluster.
func (r *PodReconciler) reconcilePodWithPVC(ctx context.Context, pod *v1.Pod, pvcSource *v1.PersistentVolumeClaimVolumeSource) error {
log := ctrl.LoggerFrom(ctx).WithValues("PersistentVolumeClaim", pvcSource.ClaimName)
ctx = ctrl.LoggerInto(ctx, log)
var pvc v1.PersistentVolumeClaim
key := types.NamespacedName{
Name: pvcSource.ClaimName,
Namespace: pod.Namespace,
}
if err := r.VirtualClient.Get(ctx, key, &pvc); err != nil {
return ctrlruntimeclient.IgnoreNotFound(err)
}
pv := r.pseudoPV(&pvc)
if pod.DeletionTimestamp != nil {
return r.handlePodDeletion(ctx, pv)
}
log.Info("Creating pseudo Persistent Volume")
if err := r.VirtualClient.Create(ctx, pv); err != nil {
return ctrlruntimeclient.IgnoreAlreadyExists(err)
}
orig := pv.DeepCopy()
pv.Status = v1.PersistentVolumeStatus{
Phase: v1.VolumeBound,
}
if err := r.VirtualClient.Status().Patch(ctx, pv, ctrlruntimeclient.MergeFrom(orig)); err != nil {
return err
}
log.Info("Patch the status of PersistentVolumeClaim to Bound")
pvcPatch := pvc.DeepCopy()
if pvcPatch.Annotations == nil {
pvcPatch.Annotations = make(map[string]string)
}
pvcPatch.Annotations[volume.AnnBoundByController] = "yes"
pvcPatch.Annotations[volume.AnnBindCompleted] = "yes"
pvcPatch.Status.Phase = v1.ClaimBound
pvcPatch.Status.AccessModes = pvcPatch.Spec.AccessModes
return r.VirtualClient.Status().Update(ctx, pvcPatch)
}
func (r *PodReconciler) pseudoPV(obj *v1.PersistentVolumeClaim) *v1.PersistentVolume {
var storageClass string
if obj.Spec.StorageClassName != nil {
storageClass = *obj.Spec.StorageClassName
}
return &v1.PersistentVolume{
ObjectMeta: metav1.ObjectMeta{
Name: obj.Name,
Labels: map[string]string{
pseudoPVLabel: "true",
},
Annotations: map[string]string{
volume.AnnBoundByController: "true",
volume.AnnDynamicallyProvisioned: "k3k-kubelet",
},
},
TypeMeta: metav1.TypeMeta{
Kind: "PersistentVolume",
APIVersion: "v1",
},
Spec: v1.PersistentVolumeSpec{
PersistentVolumeSource: v1.PersistentVolumeSource{
FlexVolume: &v1.FlexPersistentVolumeSource{
Driver: "pseudopv",
},
},
StorageClassName: storageClass,
VolumeMode: obj.Spec.VolumeMode,
PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimDelete,
AccessModes: obj.Spec.AccessModes,
Capacity: obj.Spec.Resources.Requests,
ClaimRef: &v1.ObjectReference{
APIVersion: obj.APIVersion,
UID: obj.UID,
ResourceVersion: obj.ResourceVersion,
Kind: obj.Kind,
Namespace: obj.Namespace,
Name: obj.Name,
},
},
}
}
func (r *PodReconciler) handlePodDeletion(ctx context.Context, pv *v1.PersistentVolume) error {
var currentPV v1.PersistentVolume
if err := r.VirtualClient.Get(ctx, ctrlruntimeclient.ObjectKeyFromObject(pv), &currentPV); err != nil {
return ctrlruntimeclient.IgnoreNotFound(err)
}
pvPatch := currentPV.DeepCopy()
pvPatch.Spec.ClaimRef = nil
pvPatch.Status.Phase = v1.VolumeReleased
controllerutil.RemoveFinalizer(pvPatch, "kubernetes.io/pv-protection")
if err := r.VirtualClient.Status().Update(ctx, pvPatch); err != nil {
return err
}
return ctrlruntimeclient.IgnoreNotFound(r.VirtualClient.Delete(ctx, &currentPV))
}

View File

@@ -242,7 +242,7 @@ func (k *kubelet) start(ctx context.Context) {
// run the node async so that we can wait for it to be ready in another call
go func() {
klog.SetLogger(k.logger)
klog.SetLogger(k.logger.V(1))
ctx = log.WithLogger(ctx, klogv2.New(nil))
if err := k.node.Run(ctx); err != nil {
@@ -265,12 +265,23 @@ func (k *kubelet) start(ctx context.Context) {
func (k *kubelet) newProviderFunc(cfg config) nodeutil.NewProviderFunc {
return func(pc nodeutil.ProviderConfig) (nodeutil.Provider, node.NodeProvider, error) {
utilProvider, err := provider.New(*k.hostConfig, k.hostMgr, k.virtualMgr, k.logger, cfg.ClusterNamespace, cfg.ClusterName, cfg.ServerIP, k.dnsIP)
utilProvider, err := provider.New(*k.hostConfig, k.hostMgr, k.virtualMgr, k.logger, cfg.ClusterNamespace, cfg.ClusterName, cfg.ServerIP, k.dnsIP, cfg.AgentHostname)
if err != nil {
return nil, nil, errors.New("unable to make nodeutil provider: " + err.Error())
}
provider.ConfigureNode(k.logger, pc.Node, cfg.AgentHostname, k.port, k.agentIP, utilProvider.CoreClient, utilProvider.VirtualClient, k.virtualCluster, cfg.Version, cfg.MirrorHostNodes)
provider.ConfigureNode(
k.logger,
pc.Node,
cfg.AgentHostname,
k.port,
k.agentIP,
utilProvider.HostClient,
utilProvider.VirtualClient,
k.virtualCluster,
cfg.Version,
cfg.MirrorHostNodes,
)
return utilProvider, &provider.Node{}, nil
}
@@ -458,12 +469,6 @@ func addControllers(ctx context.Context, hostMgr, virtualMgr manager.Manager, c
return errors.New("failed to add pvc syncer controller: " + err.Error())
}
logger.Info("adding pod pvc controller")
if err := syncer.AddPodPVCController(ctx, virtualMgr, hostMgr, c.ClusterName, c.ClusterNamespace); err != nil {
return errors.New("failed to add pod pvc controller: " + err.Error())
}
logger.Info("adding priorityclass controller")
if err := syncer.AddPriorityClassSyncer(ctx, virtualMgr, hostMgr, c.ClusterName, c.ClusterNamespace); err != nil {

View File

@@ -38,6 +38,7 @@ func main() {
logger = zapr.NewLogger(log.New(debug, logFormat))
ctrlruntimelog.SetLogger(logger)
return nil
},
RunE: run,

View File

@@ -2,25 +2,23 @@ package provider
import (
"context"
"time"
"github.com/go-logr/logr"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
typedv1 "k8s.io/client-go/kubernetes/typed/core/v1"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)
func ConfigureNode(logger logr.Logger, node *corev1.Node, hostname string, servicePort int, ip string, coreClient typedv1.CoreV1Interface, virtualClient client.Client, virtualCluster v1beta1.Cluster, version string, mirrorHostNodes bool) {
func ConfigureNode(logger logr.Logger, node *corev1.Node, hostname string, servicePort int, ip string, hostClient client.Client, virtualClient client.Client, virtualCluster v1beta1.Cluster, version string, mirrorHostNodes bool) {
ctx := context.Background()
if mirrorHostNodes {
hostNode, err := coreClient.Nodes().Get(ctx, node.Name, metav1.GetOptions{})
if err != nil {
var hostNode corev1.Node
if err := hostClient.Get(ctx, types.NamespacedName{Name: node.Name}, &hostNode); err != nil {
logger.Error(err, "error getting host node for mirroring", err)
}
@@ -50,16 +48,7 @@ func ConfigureNode(logger logr.Logger, node *corev1.Node, hostname string, servi
// configure versions
node.Status.NodeInfo.KubeletVersion = version
updateNodeCapacityInterval := 10 * time.Second
ticker := time.NewTicker(updateNodeCapacityInterval)
go func() {
for range ticker.C {
if err := updateNodeCapacity(ctx, coreClient, virtualClient, node.Name, virtualCluster.Spec.NodeSelector); err != nil {
logger.Error(err, "error updating node capacity")
}
}
}()
startNodeCapacityUpdater(ctx, logger, hostClient, virtualClient, virtualCluster, node.Name)
}
}
@@ -108,73 +97,3 @@ func nodeConditions() []corev1.NodeCondition {
},
}
}
// updateNodeCapacity will update the virtual node capacity (and the allocatable field) with the sum of all the resources in the host nodes.
// If nodeLabels are specified, only the matching nodes will be considered.
func updateNodeCapacity(ctx context.Context, coreClient typedv1.CoreV1Interface, virtualClient client.Client, virtualNodeName string, nodeLabels map[string]string) error {
capacity, allocatable, err := getResourcesFromNodes(ctx, coreClient, nodeLabels)
if err != nil {
return err
}
var virtualNode corev1.Node
if err := virtualClient.Get(ctx, types.NamespacedName{Name: virtualNodeName}, &virtualNode); err != nil {
return err
}
virtualNode.Status.Capacity = capacity
virtualNode.Status.Allocatable = allocatable
return virtualClient.Status().Update(ctx, &virtualNode)
}
// getResourcesFromNodes will return the summed resource capacity of the host nodes, along with the allocatable resources.
// If node labels are specified, only the matching nodes will be considered.
func getResourcesFromNodes(ctx context.Context, coreClient typedv1.CoreV1Interface, nodeLabels map[string]string) (corev1.ResourceList, corev1.ResourceList, error) {
listOpts := metav1.ListOptions{}
if nodeLabels != nil {
labelSelector := metav1.LabelSelector{MatchLabels: nodeLabels}
listOpts.LabelSelector = labels.Set(labelSelector.MatchLabels).String()
}
nodeList, err := coreClient.Nodes().List(ctx, listOpts)
if err != nil {
return nil, nil, err
}
// sum all
virtualCapacityResources := corev1.ResourceList{}
virtualAvailableResources := corev1.ResourceList{}
for _, node := range nodeList.Items {
// check if the node is Ready
for _, condition := range node.Status.Conditions {
if condition.Type != corev1.NodeReady {
continue
}
// if the node is not Ready then we can skip it
if condition.Status != corev1.ConditionTrue {
break
}
}
// add all the available metrics to the virtual node
for resourceName, resourceQuantity := range node.Status.Capacity {
virtualResource := virtualCapacityResources[resourceName]
(&virtualResource).Add(resourceQuantity)
virtualCapacityResources[resourceName] = virtualResource
}
for resourceName, resourceQuantity := range node.Status.Allocatable {
virtualResource := virtualAvailableResources[resourceName]
(&virtualResource).Add(resourceQuantity)
virtualAvailableResources[resourceName] = virtualResource
}
}
return virtualCapacityResources, virtualAvailableResources, nil
}

View File

@@ -0,0 +1,200 @@
package provider
import (
"context"
"sort"
"time"
"github.com/go-logr/logr"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/types"
"sigs.k8s.io/controller-runtime/pkg/client"
corev1 "k8s.io/api/core/v1"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)
const (
// UpdateNodeCapacityInterval is the interval at which the node capacity is updated.
UpdateNodeCapacityInterval = 10 * time.Second
)
// milliScaleResources is the set of resource names that are divided at milli precision (e.g., CPU).
// It determines whether MilliValue() is used for calculations instead of the standard Value().
var milliScaleResources = map[corev1.ResourceName]struct{}{
corev1.ResourceCPU: {},
corev1.ResourceMemory: {},
corev1.ResourceStorage: {},
corev1.ResourceEphemeralStorage: {},
corev1.ResourceRequestsCPU: {},
corev1.ResourceRequestsMemory: {},
corev1.ResourceRequestsStorage: {},
corev1.ResourceRequestsEphemeralStorage: {},
corev1.ResourceLimitsCPU: {},
corev1.ResourceLimitsMemory: {},
corev1.ResourceLimitsEphemeralStorage: {},
}
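For reference, a minimal standalone sketch (not part of the change) of how the two accessors differ for a milli-scaled resource versus a countable one; all values are illustrative:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	cpu := resource.MustParse("2")   // milli-scaled: 2 cores == 2000 milli-units
	pods := resource.MustParse("11") // countable: no meaningful milli scale

	// CPU divides at milli precision: 2000m across 3 nodes -> 667m/667m/666m.
	fmt.Println(cpu.MilliValue()) // 2000

	// Pods divide with Value(): 11 across 3 nodes -> 4/4/3.
	fmt.Println(pods.Value()) // 11

	// Rebuilding a per-node quantity from a milli value keeps the format readable.
	perNode := resource.NewMilliQuantity(667, cpu.Format)
	fmt.Println(perNode.String()) // 667m
}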
// startNodeCapacityUpdater starts a goroutine that periodically updates the capacity
// of the virtual node based on host node capacity and any applied ResourceQuotas.
func startNodeCapacityUpdater(ctx context.Context, logger logr.Logger, hostClient client.Client, virtualClient client.Client, virtualCluster v1beta1.Cluster, virtualNodeName string) {
go func() {
ticker := time.NewTicker(UpdateNodeCapacityInterval)
defer ticker.Stop()
for {
select {
case <-ticker.C:
updateNodeCapacity(ctx, logger, hostClient, virtualClient, virtualCluster, virtualNodeName)
case <-ctx.Done():
logger.Info("Stopping node capacity updates for node", "node", virtualNodeName)
return
}
}
}()
}
// updateNodeCapacity updates the virtual node capacity (and the allocatable field) from the host node's
// allocatable resources. If ResourceQuotas exist in the cluster namespace, the most restrictive
// values are distributed evenly across the virtual nodes instead.
func updateNodeCapacity(ctx context.Context, logger logr.Logger, hostClient client.Client, virtualClient client.Client, virtualCluster v1beta1.Cluster, virtualNodeName string) {
// by default we get the resources of the same Node where the kubelet is running
var node corev1.Node
if err := hostClient.Get(ctx, types.NamespacedName{Name: virtualNodeName}, &node); err != nil {
logger.Error(err, "error getting virtual node for updating node capacity")
return
}
allocatable := node.Status.Allocatable.DeepCopy()
// we need to check if the virtual cluster resources are "limited" through ResourceQuotas.
// If so, we will use the minimum resources.
var quotas corev1.ResourceQuotaList
if err := hostClient.List(ctx, &quotas, &client.ListOptions{Namespace: virtualCluster.Namespace}); err != nil {
logger.Error(err, "error getting namespace for updating node capacity")
}
if len(quotas.Items) > 0 {
resourceLists := []corev1.ResourceList{allocatable}
for _, q := range quotas.Items {
resourceLists = append(resourceLists, q.Status.Hard)
}
mergedResourceLists := mergeResourceLists(resourceLists...)
m, err := distributeQuotas(ctx, logger, virtualClient, mergedResourceLists)
if err != nil {
logger.Error(err, "error distributing policy quota")
}
allocatable = m[virtualNodeName]
}
var virtualNode corev1.Node
if err := virtualClient.Get(ctx, types.NamespacedName{Name: virtualNodeName}, &virtualNode); err != nil {
logger.Error(err, "error getting virtual node for updating node capacity")
return
}
virtualNode.Status.Capacity = allocatable
virtualNode.Status.Allocatable = allocatable
if err := virtualClient.Status().Update(ctx, &virtualNode); err != nil {
logger.Error(err, "error updating node capacity")
}
}
// mergeResourceLists takes multiple resource lists and returns a single list that represents
// the most restrictive set of resources. For each resource name, it selects the minimum
// quantity found across all the provided lists.
func mergeResourceLists(resourceLists ...corev1.ResourceList) corev1.ResourceList {
merged := corev1.ResourceList{}
for _, resourceList := range resourceLists {
for resName, qty := range resourceList {
existingQty, found := merged[resName]
// If it's the first time we see it OR the new one is smaller -> Update
if !found || qty.Cmp(existingQty) < 0 {
merged[resName] = qty.DeepCopy()
}
}
}
return merged
}
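A short test-style sketch (not part of the change; assumes the same package as mergeResourceLists above) showing the minimum-wins semantics: the smaller quantity wins per resource, and keys are unioned:

package provider

import (
	"testing"

	"k8s.io/apimachinery/pkg/api/resource"

	corev1 "k8s.io/api/core/v1"
)

// Sketch only: merging a node's allocatable with a tighter quota keeps the
// smaller quantity per resource and the union of all keys.
func TestMergeResourceListsSketch(t *testing.T) {
	allocatable := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("8"),
		corev1.ResourceMemory: resource.MustParse("32Gi"),
	}
	quota := corev1.ResourceList{
		corev1.ResourceCPU:  resource.MustParse("2"),   // tighter than allocatable
		corev1.ResourcePods: resource.MustParse("100"), // present only in the quota
	}

	merged := mergeResourceLists(allocatable, quota)

	cpu := merged[corev1.ResourceCPU]
	if cpu.Cmp(resource.MustParse("2")) != 0 {
		t.Fatalf("expected cpu=2 (minimum wins), got %s", cpu.String())
	}

	podsQty := merged[corev1.ResourcePods]
	if podsQty.Cmp(resource.MustParse("100")) != 0 {
		t.Fatalf("expected pods=100 (union of keys), got %s", podsQty.String())
	}
}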
// distributeQuotas divides the total resource quotas evenly among all active virtual nodes.
// This ensures that each virtual node reports a fair share of the available resources,
// preventing the scheduler from overloading a single node.
//
// The algorithm iterates over each resource, divides it as evenly as possible among the
// sorted virtual nodes, and distributes any remainder to the first few nodes to ensure
// all resources are allocated. Sorting the nodes by name guarantees a deterministic
// distribution.
func distributeQuotas(ctx context.Context, logger logr.Logger, virtualClient client.Client, quotas corev1.ResourceList) (map[string]corev1.ResourceList, error) {
// List all virtual nodes to distribute the quota stably.
var virtualNodeList corev1.NodeList
if err := virtualClient.List(ctx, &virtualNodeList); err != nil {
logger.Error(err, "error listing virtual nodes for stable capacity distribution, falling back to full quota")
return nil, err
}
// If there are no virtual nodes, there's nothing to distribute.
numNodes := int64(len(virtualNodeList.Items))
if numNodes == 0 {
logger.Info("error listing virtual nodes for stable capacity distribution, falling back to full quota")
return nil, nil
}
// Sort nodes by name for a deterministic distribution of resources.
sort.Slice(virtualNodeList.Items, func(i, j int) bool {
return virtualNodeList.Items[i].Name < virtualNodeList.Items[j].Name
})
// Initialize the resource map for each virtual node.
resourceMap := make(map[string]corev1.ResourceList)
for _, virtualNode := range virtualNodeList.Items {
resourceMap[virtualNode.Name] = corev1.ResourceList{}
}
// Distribute each resource type from the policy's hard quota
for resourceName, totalQuantity := range quotas {
// Use MilliValue for precise division, especially for resources like CPU,
// which are often expressed in milli-units. Otherwise, use the standard Value().
var totalValue int64
if _, found := milliScaleResources[resourceName]; found {
totalValue = totalQuantity.MilliValue()
} else {
totalValue = totalQuantity.Value()
}
// Calculate the base quantity of the resource to be allocated per node,
// and the remainder that needs to be distributed among the nodes.
//
// For example, if totalValue is 2000 (e.g., 2 CPU) and there are 3 nodes:
// - quantityPerNode would be 666 (2000 / 3)
// - remainder would be 2 (2000 % 3)
// The first two nodes would get 667 (666 + 1), and the last one would get 666.
quantityPerNode := totalValue / numNodes
remainder := totalValue % numNodes
// Iterate through the sorted virtual nodes to distribute the resource.
for _, virtualNode := range virtualNodeList.Items {
nodeQuantity := quantityPerNode
if remainder > 0 {
nodeQuantity++
remainder--
}
if _, found := milliScaleResources[resourceName]; found {
resourceMap[virtualNode.Name][resourceName] = *resource.NewMilliQuantity(nodeQuantity, totalQuantity.Format)
} else {
resourceMap[virtualNode.Name][resourceName] = *resource.NewQuantity(nodeQuantity, totalQuantity.Format)
}
}
}
return resourceMap, nil
}
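
As a standalone sketch of the remainder arithmetic described above (node count and quantities are invented; resource.DecimalSI is just one possible format):

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    total := resource.MustParse("2") // 2 CPU == 2000m
    numNodes := int64(3)

    quantityPerNode := total.MilliValue() / numNodes // 666
    remainder := total.MilliValue() % numNodes       // 2

    for i := int64(1); i <= numNodes; i++ {
        q := quantityPerNode
        if remainder > 0 { // the first `remainder` nodes absorb the leftover
            q++
            remainder--
        }
        fmt.Printf("node-%d: %s\n", i, resource.NewMilliQuantity(q, resource.DecimalSI))
    }
    // node-1: 667m, node-2: 667m, node-3: 666m
}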

View File

@@ -0,0 +1,147 @@
package provider
import (
"context"
"testing"
"github.com/go-logr/zapr"
"github.com/stretchr/testify/assert"
"go.uber.org/zap"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func Test_distributeQuotas(t *testing.T) {
scheme := runtime.NewScheme()
err := corev1.AddToScheme(scheme)
assert.NoError(t, err)
node1 := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node-1"}}
node2 := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node-2"}}
node3 := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node-3"}}
tests := []struct {
name string
virtualNodes []client.Object
quotas corev1.ResourceList
want map[string]corev1.ResourceList
wantErr bool
}{
{
name: "no virtual nodes",
virtualNodes: []client.Object{},
quotas: corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("2"),
},
want: map[string]corev1.ResourceList{},
wantErr: false,
},
{
name: "no quotas",
virtualNodes: []client.Object{node1, node2},
quotas: corev1.ResourceList{},
want: map[string]corev1.ResourceList{
"node-1": {},
"node-2": {},
},
wantErr: false,
},
{
name: "even distribution of cpu and memory",
virtualNodes: []client.Object{node1, node2},
quotas: corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("2"),
corev1.ResourceMemory: resource.MustParse("4Gi"),
},
want: map[string]corev1.ResourceList{
"node-1": {
corev1.ResourceCPU: resource.MustParse("1"),
corev1.ResourceMemory: resource.MustParse("2Gi"),
},
"node-2": {
corev1.ResourceCPU: resource.MustParse("1"),
corev1.ResourceMemory: resource.MustParse("2Gi"),
},
},
wantErr: false,
},
{
name: "uneven distribution with remainder",
virtualNodes: []client.Object{node1, node2, node3},
quotas: corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("2"), // 2000m / 3 = 666m with 2m remainder
},
want: map[string]corev1.ResourceList{
"node-1": {corev1.ResourceCPU: resource.MustParse("667m")},
"node-2": {corev1.ResourceCPU: resource.MustParse("667m")},
"node-3": {corev1.ResourceCPU: resource.MustParse("666m")},
},
wantErr: false,
},
{
name: "distribution of number resources",
virtualNodes: []client.Object{node1, node2, node3},
quotas: corev1.ResourceList{
corev1.ResourceCPU: resource.MustParse("2"),
corev1.ResourcePods: resource.MustParse("11"),
corev1.ResourceSecrets: resource.MustParse("9"),
"custom": resource.MustParse("8"),
},
want: map[string]corev1.ResourceList{
"node-1": {
corev1.ResourceCPU: resource.MustParse("667m"),
corev1.ResourcePods: resource.MustParse("4"),
corev1.ResourceSecrets: resource.MustParse("3"),
"custom": resource.MustParse("3"),
},
"node-2": {
corev1.ResourceCPU: resource.MustParse("667m"),
corev1.ResourcePods: resource.MustParse("4"),
corev1.ResourceSecrets: resource.MustParse("3"),
"custom": resource.MustParse("3"),
},
"node-3": {
corev1.ResourceCPU: resource.MustParse("666m"),
corev1.ResourcePods: resource.MustParse("3"),
corev1.ResourceSecrets: resource.MustParse("3"),
"custom": resource.MustParse("2"),
},
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
fakeClient := fake.NewClientBuilder().WithScheme(scheme).WithObjects(tt.virtualNodes...).Build()
logger := zapr.NewLogger(zap.NewNop())
got, gotErr := distributeQuotas(context.Background(), logger, fakeClient, tt.quotas)
if tt.wantErr {
assert.Error(t, gotErr)
} else {
assert.NoError(t, gotErr)
}
assert.Equal(t, len(tt.want), len(got), "Number of nodes in result should match")
for nodeName, expectedResources := range tt.want {
actualResources, ok := got[nodeName]
assert.True(t, ok, "Node %s not found in result", nodeName)
assert.Equal(t, len(expectedResources), len(actualResources), "Number of resources for node %s should match", nodeName)
for resName, expectedQty := range expectedResources {
actualQty, ok := actualResources[resName]
assert.True(t, ok, "Resource %s not found for node %s", resName, nodeName)
assert.True(t, expectedQty.Equal(actualQty), "Resource %s for node %s did not match. want: %s, got: %s", resName, nodeName, expectedQty.String(), actualQty.String())
}
}
})
}
}

View File

@@ -6,7 +6,6 @@ import (
"errors"
"fmt"
"io"
"maps"
"net/http"
"strconv"
"strings"
@@ -36,7 +35,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
cv1 "k8s.io/client-go/kubernetes/typed/core/v1"
compbasemetrics "k8s.io/component-base/metrics"
stats "k8s.io/kubelet/pkg/apis/stats/v1alpha1"
v1alpha1stats "k8s.io/kubelet/pkg/apis/stats/v1alpha1"
"github.com/rancher/k3k/k3k-kubelet/controller/webhook"
"github.com/rancher/k3k/k3k-kubelet/provider/collectors"
@@ -61,12 +60,13 @@ type Provider struct {
ClusterName string
serverIP string
dnsIP string
agentHostname string
logger logr.Logger
}
var ErrRetryTimeout = errors.New("provider timed out")
func New(hostConfig rest.Config, hostMgr, virtualMgr manager.Manager, logger logr.Logger, namespace, name, serverIP, dnsIP string) (*Provider, error) {
func New(hostConfig rest.Config, hostMgr, virtualMgr manager.Manager, logger logr.Logger, namespace, name, serverIP, dnsIP, agentHostname string) (*Provider, error) {
coreClient, err := cv1.NewForConfig(&hostConfig)
if err != nil {
return nil, err
@@ -89,6 +89,7 @@ func New(hostConfig rest.Config, hostMgr, virtualMgr manager.Manager, logger log
logger: logger.WithValues("cluster", name),
serverIP: serverIP,
dnsIP: dnsIP,
agentHostname: agentHostname,
}
return &p, nil
@@ -226,47 +227,35 @@ func (p *Provider) AttachToContainer(ctx context.Context, namespace, name, conta
}
// GetStatsSummary gets the stats for the node, including running pods
func (p *Provider) GetStatsSummary(ctx context.Context) (*stats.Summary, error) {
func (p *Provider) GetStatsSummary(ctx context.Context) (*v1alpha1stats.Summary, error) {
p.logger.V(1).Info("GetStatsSummary")
nodeList := &corev1.NodeList{}
if err := p.CoreClient.RESTClient().Get().Resource("nodes").Do(ctx).Into(nodeList); err != nil {
node, err := p.CoreClient.Nodes().Get(ctx, p.agentHostname, metav1.GetOptions{})
if err != nil {
p.logger.Error(err, "Unable to get nodes of cluster")
return nil, err
}
// fetch the stats from all the nodes
var (
nodeStats stats.NodeStats
allPodsStats []stats.PodStats
)
for _, n := range nodeList.Items {
res, err := p.CoreClient.RESTClient().
Get().
Resource("nodes").
Name(n.Name).
SubResource("proxy").
Suffix("stats/summary").
DoRaw(ctx)
if err != nil {
p.logger.Error(err, "Unable to get stats/summary from cluster node", "node", n.Name)
return nil, err
}
stats := &stats.Summary{}
if err := json.Unmarshal(res, stats); err != nil {
p.logger.Error(err, "Error unmarshaling stats/summary from cluster node", "node", n.Name)
return nil, err
}
// TODO: we should probably calculate somehow the node stats from the different nodes of the host
// or reflect different nodes from the virtual kubelet.
// For the moment let's just pick one random node stats.
nodeStats = stats.Node
allPodsStats = append(allPodsStats, stats.Pods...)
res, err := p.CoreClient.RESTClient().
Get().
Resource("nodes").
Name(node.Name).
SubResource("proxy").
Suffix("stats/summary").
DoRaw(ctx)
if err != nil {
p.logger.Error(err, "Unable to get stats/summary from cluster node", "node", node.Name)
return nil, err
}
var statsSummary v1alpha1stats.Summary
if err := json.Unmarshal(res, &statsSummary); err != nil {
p.logger.Error(err, "Error unmarshaling stats/summary from cluster node", "node", node.Name)
return nil, err
}
nodeStats := statsSummary.Node
pods, err := p.GetPods(ctx)
if err != nil {
p.logger.Error(err, "Error getting pods from cluster for stats")
@@ -280,12 +269,12 @@ func (p *Provider) GetStatsSummary(ctx context.Context) (*stats.Summary, error)
podsNameMap[hostPodName] = pod
}
filteredStats := &stats.Summary{
filteredStats := &v1alpha1stats.Summary{
Node: nodeStats,
Pods: make([]stats.PodStats, 0),
Pods: make([]v1alpha1stats.PodStats, 0),
}
for _, podStat := range allPodsStats {
for _, podStat := range statsSummary.Pods {
// skip pods that are not in the cluster namespace
if podStat.PodRef.Namespace != p.ClusterNamespace {
continue
@@ -293,7 +282,7 @@ func (p *Provider) GetStatsSummary(ctx context.Context) (*stats.Summary, error)
// rewrite the PodReference to match the data of the virtual cluster
if pod, found := podsNameMap[podStat.PodRef.Name]; found {
podStat.PodRef = stats.PodReference{
podStat.PodRef = v1alpha1stats.PodReference{
Name: pod.Name,
Namespace: pod.Namespace,
UID: string(pod.UID),
@@ -373,24 +362,10 @@ func (p *Provider) CreatePod(ctx context.Context, pod *corev1.Pod) error {
return p.withRetry(ctx, p.createPod, pod)
}
// createPod takes a Kubernetes Pod and deploys it within the provider.
func (p *Provider) createPod(ctx context.Context, pod *corev1.Pod) error {
logger := p.logger.WithValues("namespace", pod.Namespace, "name", pod.Name)
logger.V(1).Info("CreatePod")
// fieldPath envs are not being translated correctly using the virtual kubelet pod controller
// as a workaround we will try to fetch the pod from the virtual cluster and copy over the envSource
var sourcePod corev1.Pod
if err := p.VirtualClient.Get(ctx, types.NamespacedName{Name: pod.Name, Namespace: pod.Namespace}, &sourcePod); err != nil {
logger.Error(err, "Error getting Pod from Virtual Cluster")
return err
}
tPod := sourcePod.DeepCopy()
p.Translator.TranslateTo(tPod)
logger = p.logger.WithValues("pod", tPod.Name)
// get Cluster definition
clusterKey := types.NamespacedName{
Namespace: p.ClusterNamespace,
@@ -398,71 +373,88 @@ func (p *Provider) createPod(ctx context.Context, pod *corev1.Pod) error {
}
var cluster v1beta1.Cluster
if err := p.HostClient.Get(ctx, clusterKey, &cluster); err != nil {
logger.Error(err, "Error getting Virtual Cluster definition")
return err
}
// these values shouldn't be set on create
tPod.UID = ""
tPod.ResourceVersion = ""
// get Pod from Virtual Cluster
key := types.NamespacedName{
Name: pod.Name,
Namespace: pod.Namespace,
}
// the node was scheduled on the virtual kubelet, but leaving it this way will make it pending indefinitely
tPod.Spec.NodeName = ""
var virtualPod corev1.Pod
if err := p.VirtualClient.Get(ctx, key, &virtualPod); err != nil {
logger.Error(err, "Error getting Pod from Virtual Cluster")
return err
}
tPod.Spec.NodeSelector = cluster.Spec.NodeSelector
// Copy the virtual Pod and use it as a baseline for the hostPod
// do some basic translation and clear some values (UID, ResourceVersion, ...)
hostPod := virtualPod.DeepCopy()
p.Translator.TranslateTo(hostPod)
logger = logger.WithValues("pod", hostPod.Name)
// Schedule the host pod on the same host node as the virtual kubelet
hostPod.Spec.NodeName = p.agentHostname
hostPod.Spec.NodeSelector = cluster.Spec.NodeSelector
// setting the hostname for the pod if it's not set
if pod.Spec.Hostname == "" {
tPod.Spec.Hostname = k3kcontroller.SafeConcatName(pod.Name)
if virtualPod.Spec.Hostname == "" {
hostPod.Spec.Hostname = k3kcontroller.SafeConcatName(virtualPod.Name)
}
// if the priorityClass for the virtual cluster is set then override the provided value
// Note: the core-dns and local-path-provisioner pods are scheduled by k3s with the
// 'system-cluster-critical' and 'system-node-critical' default priority classes.
if !strings.HasPrefix(tPod.Spec.PriorityClassName, "system-") {
if tPod.Spec.PriorityClassName != "" {
tPriorityClassName := p.Translator.TranslateName("", tPod.Spec.PriorityClassName)
tPod.Spec.PriorityClassName = tPriorityClassName
if !strings.HasPrefix(hostPod.Spec.PriorityClassName, "system-") {
if hostPod.Spec.PriorityClassName != "" {
tPriorityClassName := p.Translator.TranslateName("", hostPod.Spec.PriorityClassName)
hostPod.Spec.PriorityClassName = tPriorityClassName
}
if cluster.Spec.PriorityClass != "" {
tPod.Spec.PriorityClassName = cluster.Spec.PriorityClass
tPod.Spec.Priority = nil
hostPod.Spec.PriorityClassName = cluster.Spec.PriorityClass
hostPod.Spec.Priority = nil
}
}
p.configurePodEnvs(hostPod, &virtualPod)
// fieldpath annotations
if err := p.configureFieldPathEnv(&sourcePod, tPod); err != nil {
if err := p.configureFieldPathEnv(&virtualPod, hostPod); err != nil {
logger.Error(err, "Unable to fetch fieldpath annotations for pod")
return err
}
// volumes will often refer to resources in the virtual cluster
// but instead need to refer to the synced host cluster version
p.transformVolumes(pod.Namespace, tPod.Spec.Volumes)
p.transformVolumes(pod.Namespace, hostPod.Spec.Volumes)
// sync the serviceaccount token to the host cluster
if err := p.transformTokens(ctx, pod, tPod); err != nil {
if err := p.transformTokens(ctx, &virtualPod, hostPod); err != nil {
logger.Error(err, "Unable to transform tokens for pod")
return err
}
for i, imagePullSecret := range tPod.Spec.ImagePullSecrets {
tPod.Spec.ImagePullSecrets[i].Name = p.Translator.TranslateName(pod.Namespace, imagePullSecret.Name)
for i, imagePullSecret := range hostPod.Spec.ImagePullSecrets {
hostPod.Spec.ImagePullSecrets[i].Name = p.Translator.TranslateName(virtualPod.Namespace, imagePullSecret.Name)
}
// inject networking information into the pod, including the virtual cluster controlplane endpoint
configureNetworking(tPod, pod.Name, pod.Namespace, p.serverIP, p.dnsIP)
configureNetworking(hostPod, virtualPod.Name, virtualPod.Namespace, p.serverIP, p.dnsIP)
// set ownerReference to the cluster object
if err := controllerutil.SetControllerReference(&cluster, tPod, p.HostClient.Scheme()); err != nil {
if err := controllerutil.SetControllerReference(&cluster, hostPod, p.HostClient.Scheme()); err != nil {
logger.Error(err, "Unable to set owner reference for pod")
return err
}
if err := p.HostClient.Create(ctx, tPod); err != nil {
if err := p.HostClient.Create(ctx, hostPod); err != nil {
logger.Error(err, "Error creating pod on host cluster")
return err
}
@@ -475,7 +467,7 @@ func (p *Provider) createPod(ctx context.Context, pod *corev1.Pod) error {
// withRetry retries the passed function at the given interval until the timeout elapses
func (p *Provider) withRetry(ctx context.Context, f func(context.Context, *corev1.Pod) error, pod *corev1.Pod) error {
const (
interval = 2 * time.Second
interval = time.Second
timeout = 10 * time.Second
)
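
The function body is elided between these hunks. A minimal sketch of a poll-based shape, assuming the standard k8s.io/apimachinery/pkg/util/wait helper (the real implementation may differ):

package provider

import (
    "context"
    "errors"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/wait"
)

var errRetrySketchTimeout = errors.New("provider timed out")

// withRetrySketch calls f every interval until it succeeds or the timeout elapses.
func withRetrySketch(ctx context.Context, f func(context.Context, *corev1.Pod) error, pod *corev1.Pod) error {
    const (
        interval = time.Second
        timeout  = 10 * time.Second
    )

    var lastErr error

    err := wait.PollUntilContextTimeout(ctx, interval, timeout, true, func(ctx context.Context) (bool, error) {
        if lastErr = f(ctx, pod); lastErr != nil {
            return false, nil // swallow the error and retry on the next tick
        }
        return true, nil
    })
    if err != nil {
        return fmt.Errorf("%w: %v", errRetrySketchTimeout, lastErr)
    }

    return nil
}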
@@ -499,36 +491,43 @@ func (p *Provider) withRetry(ctx context.Context, f func(context.Context, *corev
return nil
}
// transformVolumes changes the volumes to the representation in the host cluster. Will return an error
// if one/more volumes couldn't be transformed
// transformVolumes changes the volumes to the representation in the host cluster
func (p *Provider) transformVolumes(podNamespace string, volumes []corev1.Volume) {
for _, volume := range volumes {
for i := range volumes {
volume := &volumes[i]
// Skip volumes related to Kube API access
if strings.HasPrefix(volume.Name, kubeAPIAccessPrefix) {
continue
}
// note: downward api volumes are handled below by rewriting their fieldRef paths
if volume.ConfigMap != nil {
switch {
case volume.ConfigMap != nil:
volume.ConfigMap.Name = p.Translator.TranslateName(podNamespace, volume.ConfigMap.Name)
} else if volume.Secret != nil {
case volume.Secret != nil:
volume.Secret.SecretName = p.Translator.TranslateName(podNamespace, volume.Secret.SecretName)
} else if volume.Projected != nil {
case volume.PersistentVolumeClaim != nil:
volume.PersistentVolumeClaim.ClaimName = p.Translator.TranslateName(podNamespace, volume.PersistentVolumeClaim.ClaimName)
case volume.Projected != nil:
for _, source := range volume.Projected.Sources {
if source.ConfigMap != nil {
switch {
case source.ConfigMap != nil:
source.ConfigMap.Name = p.Translator.TranslateName(podNamespace, source.ConfigMap.Name)
} else if source.Secret != nil {
case source.Secret != nil:
source.Secret.Name = p.Translator.TranslateName(podNamespace, source.Secret.Name)
}
}
} else if volume.PersistentVolumeClaim != nil {
volume.PersistentVolumeClaim.ClaimName = p.Translator.TranslateName(podNamespace, volume.PersistentVolumeClaim.ClaimName)
} else if volume.DownwardAPI != nil {
case volume.DownwardAPI != nil:
for _, downwardAPI := range volume.DownwardAPI.Items {
if downwardAPI.FieldRef != nil {
if downwardAPI.FieldRef.FieldPath == translate.MetadataNameField {
switch downwardAPI.FieldRef.FieldPath {
case translate.MetadataNameField:
downwardAPI.FieldRef.FieldPath = fmt.Sprintf("metadata.annotations['%s']", translate.ResourceNameAnnotation)
}
if downwardAPI.FieldRef.FieldPath == translate.MetadataNamespaceField {
case translate.MetadataNamespaceField:
downwardAPI.FieldRef.FieldPath = fmt.Sprintf("metadata.annotations['%s']", translate.ResourceNamespaceAnnotation)
}
}
@@ -543,85 +542,81 @@ func (p *Provider) UpdatePod(ctx context.Context, pod *corev1.Pod) error {
}
func (p *Provider) updatePod(ctx context.Context, pod *corev1.Pod) error {
// Once scheduled a Pod cannot update other fields than the image of the containers, initcontainers and a few others
// See: https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement
hostPodName := p.Translator.TranslateName(pod.Namespace, pod.Name)
logger := p.logger.WithValues("namespace", pod.Namespace, "name", pod.Name, "pod", hostPodName)
logger.V(1).Info("UpdatePod")
// Once scheduled a Pod cannot update other fields than the image of the containers, initcontainers and a few others
// See: https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement
//
// Host Pod update
//
// Update Pod in the virtual cluster
var currentVirtualPod corev1.Pod
if err := p.VirtualClient.Get(ctx, client.ObjectKeyFromObject(pod), &currentVirtualPod); err != nil {
logger.Error(err, "Unable to get pod to update from virtual cluster")
return err
}
hostNamespaceName := types.NamespacedName{
hostKey := types.NamespacedName{
Namespace: p.ClusterNamespace,
Name: hostPodName,
}
var currentHostPod corev1.Pod
if err := p.HostClient.Get(ctx, hostNamespaceName, &currentHostPod); err != nil {
var hostPod corev1.Pod
if err := p.HostClient.Get(ctx, hostKey, &hostPod); err != nil {
logger.Error(err, "Unable to get Pod to update from host cluster")
return err
}
// Handle ephemeral containers
if !cmp.Equal(currentHostPod.Spec.EphemeralContainers, pod.Spec.EphemeralContainers) {
logger.V(1).Info("Updating ephemeral containers")
updatePod(&hostPod, pod)
currentHostPod.Spec.EphemeralContainers = pod.Spec.EphemeralContainers
if _, err := p.CoreClient.Pods(p.ClusterNamespace).UpdateEphemeralContainers(ctx, currentHostPod.Name, &currentHostPod, metav1.UpdateOptions{}); err != nil {
logger.Error(err, "error when updating ephemeral containers")
return err
}
return nil
}
// fieldpath annotations
if err := p.configureFieldPathEnv(&currentVirtualPod, &currentHostPod); err != nil {
logger.Error(err, "Unable to fetch fieldpath annotations for Pod")
if err := p.HostClient.Update(ctx, &hostPod); err != nil {
logger.Error(err, "Unable to update Pod in host cluster")
return err
}
currentVirtualPod.Spec.Containers = updateContainerImages(currentVirtualPod.Spec.Containers, pod.Spec.Containers)
currentVirtualPod.Spec.InitContainers = updateContainerImages(currentVirtualPod.Spec.InitContainers, pod.Spec.InitContainers)
// Ephemeral containers update (subresource)
if !cmp.Equal(hostPod.Spec.EphemeralContainers, pod.Spec.EphemeralContainers) {
logger.V(1).Info("Updating ephemeral containers in host pod")
currentVirtualPod.Spec.ActiveDeadlineSeconds = pod.Spec.ActiveDeadlineSeconds
currentVirtualPod.Spec.Tolerations = pod.Spec.Tolerations
hostPod.Spec.EphemeralContainers = pod.Spec.EphemeralContainers
// in the virtual cluster we can also update the labels and annotations
currentVirtualPod.Annotations = pod.Annotations
currentVirtualPod.Labels = pod.Labels
if _, err := p.CoreClient.Pods(p.ClusterNamespace).UpdateEphemeralContainers(ctx, hostPod.Name, &hostPod, metav1.UpdateOptions{}); err != nil {
logger.Error(err, "Error when updating ephemeral containers in host pod")
return err
}
}
if err := p.VirtualClient.Update(ctx, &currentVirtualPod); err != nil {
logger.Info("Pod updated in host cluster")
//
// Virtual Pod update
//
key := types.NamespacedName{
Name: pod.Name,
Namespace: pod.Namespace,
}
var virtualPod corev1.Pod
if err := p.VirtualClient.Get(ctx, key, &virtualPod); err != nil {
logger.Error(err, "Unable to get pod to update from virtual cluster")
return err
}
updatePod(&virtualPod, pod)
if err := p.VirtualClient.Update(ctx, &virtualPod); err != nil {
logger.Error(err, "Unable to update Pod in virtual cluster")
return err
}
// Update Pod in the host cluster
currentHostPod.Spec.Containers = updateContainerImages(currentHostPod.Spec.Containers, pod.Spec.Containers)
currentHostPod.Spec.InitContainers = updateContainerImages(currentHostPod.Spec.InitContainers, pod.Spec.InitContainers)
// Ephemeral containers update (subresource)
if !cmp.Equal(virtualPod.Spec.EphemeralContainers, pod.Spec.EphemeralContainers) {
logger.V(1).Info("Updating ephemeral containers in virtual pod")
// update ActiveDeadlineSeconds and Tolerations
currentHostPod.Spec.ActiveDeadlineSeconds = pod.Spec.ActiveDeadlineSeconds
currentHostPod.Spec.Tolerations = pod.Spec.Tolerations
virtualPod.Spec.EphemeralContainers = pod.Spec.EphemeralContainers
// in the virtual cluster we can also update the labels and annotations
maps.Copy(currentHostPod.Annotations, pod.Annotations)
maps.Copy(currentHostPod.Labels, pod.Labels)
if err := p.HostClient.Update(ctx, &currentHostPod); err != nil {
logger.Error(err, "Unable to update Pod in host cluster")
return err
if _, err := p.CoreClient.Pods(p.ClusterNamespace).UpdateEphemeralContainers(ctx, virtualPod.Name, &virtualPod, metav1.UpdateOptions{}); err != nil {
logger.Error(err, "Error when updating ephemeral containers in virtual pod")
return err
}
}
logger.Info("Pod updated in virtual and host cluster")
@@ -629,21 +624,28 @@ func (p *Provider) updatePod(ctx context.Context, pod *corev1.Pod) error {
return nil
}
func updatePod(dst, src *corev1.Pod) {
updateContainerImages(dst.Spec.Containers, src.Spec.Containers)
updateContainerImages(dst.Spec.InitContainers, src.Spec.InitContainers)
dst.Spec.ActiveDeadlineSeconds = src.Spec.ActiveDeadlineSeconds
dst.Spec.Tolerations = src.Spec.Tolerations
dst.Annotations = src.Annotations
dst.Labels = src.Labels
}
// updateContainerImages updates the image of each container in dst with the image of the src container of the same name
func updateContainerImages(original, updated []corev1.Container) []corev1.Container {
newImages := make(map[string]string)
func updateContainerImages(dst, src []corev1.Container) {
images := make(map[string]string)
for _, c := range updated {
newImages[c.Name] = c.Image
for _, container := range src {
images[container.Name] = container.Image
}
for i, c := range original {
if updatedImage, found := newImages[c.Name]; found {
original[i].Image = updatedImage
}
for i, container := range dst {
if image, found := images[container.Name]; found {
dst[i].Image = image
}
}
return original
}
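
A small usage illustration of the image-sync helper, assuming the surrounding provider package (container names and tags are invented; with the lookup guard above, containers absent from src keep their image):

// Sketch, same package as updateContainerImages.
func exampleImageSync() {
    dst := []corev1.Container{
        {Name: "app", Image: "app:v1"},
        {Name: "sidecar", Image: "sidecar:v1"},
    }
    src := []corev1.Container{
        {Name: "app", Image: "app:v2"}, // only "app" was updated
    }

    updateContainerImages(dst, src)

    // dst[0].Image == "app:v2"
    // dst[1].Image == "sidecar:v1" (no match in src, left untouched)
}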
// DeletePod executes deletePod with retry
@@ -854,28 +856,91 @@ func mergeEnvVars(orig, updated []corev1.EnvVar) []corev1.EnvVar {
return orig
}
func (p *Provider) configurePodEnvs(hostPod, virtualPod *corev1.Pod) {
for i := range hostPod.Spec.Containers {
hostPod.Spec.Containers[i].Env = p.configureEnv(virtualPod, virtualPod.Spec.Containers[i].Env)
hostPod.Spec.Containers[i].EnvFrom = p.configureEnvFrom(virtualPod, virtualPod.Spec.Containers[i].EnvFrom)
}
for i := range hostPod.Spec.InitContainers {
hostPod.Spec.InitContainers[i].Env = p.configureEnv(virtualPod, virtualPod.Spec.InitContainers[i].Env)
hostPod.Spec.InitContainers[i].EnvFrom = p.configureEnvFrom(virtualPod, virtualPod.Spec.InitContainers[i].EnvFrom)
}
for i := range hostPod.Spec.EphemeralContainers {
hostPod.Spec.EphemeralContainers[i].Env = p.configureEnv(virtualPod, virtualPod.Spec.EphemeralContainers[i].Env)
hostPod.Spec.EphemeralContainers[i].EnvFrom = p.configureEnvFrom(virtualPod, virtualPod.Spec.EphemeralContainers[i].EnvFrom)
}
}
func (p *Provider) configureEnv(virtualPod *corev1.Pod, envs []corev1.EnvVar) []corev1.EnvVar {
resultingEnvVars := make([]corev1.EnvVar, 0, len(envs))
for _, envVar := range envs {
resultingEnvVar := envVar
if envVar.ValueFrom != nil {
from := envVar.ValueFrom
switch {
case from.FieldRef != nil:
fieldRef := from.FieldRef
// for name and namespace we need to hardcode the virtual cluster values, and clear the FieldRef
switch fieldRef.FieldPath {
case "metadata.name":
resultingEnvVar.Value = virtualPod.Name
resultingEnvVar.ValueFrom = nil
case "metadata.namespace":
resultingEnvVar.Value = virtualPod.Namespace
resultingEnvVar.ValueFrom = nil
}
case from.ConfigMapKeyRef != nil:
resultingEnvVar.ValueFrom.ConfigMapKeyRef.Name = p.Translator.TranslateName(virtualPod.Namespace, resultingEnvVar.ValueFrom.ConfigMapKeyRef.Name)
case from.SecretKeyRef != nil:
resultingEnvVar.ValueFrom.SecretKeyRef.Name = p.Translator.TranslateName(virtualPod.Namespace, resultingEnvVar.ValueFrom.SecretKeyRef.Name)
}
}
resultingEnvVars = append(resultingEnvVars, resultingEnvVar)
}
return resultingEnvVars
}
func (p *Provider) configureEnvFrom(virtualPod *corev1.Pod, envs []corev1.EnvFromSource) []corev1.EnvFromSource {
resultingEnvVars := make([]corev1.EnvFromSource, 0, len(envs))
for _, envVar := range envs {
resultingEnvVar := envVar
if envVar.ConfigMapRef != nil {
resultingEnvVar.ConfigMapRef.Name = p.Translator.TranslateName(virtualPod.Namespace, envVar.ConfigMapRef.Name)
}
if envVar.SecretRef != nil {
resultingEnvVar.SecretRef.Name = p.Translator.TranslateName(virtualPod.Namespace, envVar.SecretRef.Name)
}
resultingEnvVars = append(resultingEnvVars, resultingEnvVar)
}
return resultingEnvVars
}
// configureFieldPathEnv retrieves all annotations created by the pod mutating webhook
// to assign env fieldpaths to pods. It also makes sure to change the metadata.name and
// metadata.namespace fieldpaths to the assigned annotations.
func (p *Provider) configureFieldPathEnv(pod, tPod *corev1.Pod) error {
for _, container := range pod.Spec.EphemeralContainers {
addFieldPathAnnotationToEnv(container.Env)
}
// override metadata.name and metadata.namespace with pod annotations
for _, container := range pod.Spec.InitContainers {
addFieldPathAnnotationToEnv(container.Env)
}
for _, container := range pod.Spec.Containers {
addFieldPathAnnotationToEnv(container.Env)
}
for name, value := range pod.Annotations {
if strings.Contains(name, webhook.FieldpathField) {
containerIndex, envName, err := webhook.ParseFieldPathAnnotationKey(name)
if err != nil {
return err
}
// re-adding these envs to the pod
tPod.Spec.Containers[containerIndex].Env = append(tPod.Spec.Containers[containerIndex].Env, corev1.EnvVar{
Name: envName,
@@ -885,6 +950,7 @@ func (p *Provider) configureFieldPathEnv(pod, tPod *corev1.Pod) error {
},
},
})
// removing the annotation from the pod
delete(tPod.Annotations, name)
}
@@ -892,22 +958,3 @@ func (p *Provider) configureFieldPathEnv(pod, tPod *corev1.Pod) error {
return nil
}
func addFieldPathAnnotationToEnv(envVars []corev1.EnvVar) {
for j, envVar := range envVars {
if envVar.ValueFrom == nil || envVar.ValueFrom.FieldRef == nil {
continue
}
fieldPath := envVar.ValueFrom.FieldRef.FieldPath
if fieldPath == translate.MetadataNameField {
envVar.ValueFrom.FieldRef.FieldPath = fmt.Sprintf("metadata.annotations['%s']", translate.ResourceNameAnnotation)
envVars[j] = envVar
}
if fieldPath == translate.MetadataNamespaceField {
envVar.ValueFrom.FieldRef.FieldPath = fmt.Sprintf("metadata.annotations['%s']", translate.ResourceNamespaceAnnotation)
envVars[j] = envVar
}
}
}
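
For illustration, how the rewrite plays out on a single env var, assuming the surrounding provider package (the concrete values of translate.MetadataNameField, presumably "metadata.name", and translate.ResourceNameAnnotation live in the translate package and are not shown in this diff):

// Sketch, same package as addFieldPathAnnotationToEnv.
func exampleFieldPathRewrite() {
    envs := []corev1.EnvVar{
        {
            Name: "POD_NAME",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{
                    FieldPath: translate.MetadataNameField,
                },
            },
        },
    }

    addFieldPathAnnotationToEnv(envs)

    // The fieldPath now points at the annotation carrying the original
    // (virtual) name on the translated host pod:
    //   metadata.annotations['<translate.ResourceNameAnnotation>']
    fmt.Println(envs[0].ValueFrom.FieldRef.FieldPath)
}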

View File

@@ -4,7 +4,12 @@ import (
"reflect"
"testing"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/rancher/k3k/k3k-kubelet/translate"
)
func Test_mergeEnvVars(t *testing.T) {
@@ -68,3 +73,230 @@ func Test_mergeEnvVars(t *testing.T) {
})
}
}
func Test_configureEnv(t *testing.T) {
virtualPod := &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "my-pod",
Namespace: "my-namespace",
},
}
tests := []struct {
name string
virtualPod *corev1.Pod
envs []corev1.EnvVar
want []corev1.EnvVar
}{
{
name: "empty envs",
virtualPod: virtualPod,
envs: []corev1.EnvVar{},
want: []corev1.EnvVar{},
},
{
name: "simple env var",
virtualPod: virtualPod,
envs: []corev1.EnvVar{
{Name: "MY_VAR", Value: "my-value"},
},
want: []corev1.EnvVar{
{Name: "MY_VAR", Value: "my-value"},
},
},
{
name: "metadata.name field ref",
virtualPod: virtualPod,
envs: []corev1.EnvVar{
{
Name: "POD_NAME",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.name",
},
},
},
},
want: []corev1.EnvVar{
{Name: "POD_NAME", Value: "my-pod"},
},
},
{
name: "metadata.namespace field ref",
virtualPod: virtualPod,
envs: []corev1.EnvVar{
{
Name: "POD_NAMESPACE",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.namespace",
},
},
},
},
want: []corev1.EnvVar{
{Name: "POD_NAMESPACE", Value: "my-namespace"},
},
},
{
name: "other field ref",
virtualPod: virtualPod,
envs: []corev1.EnvVar{
{
Name: "NODE_NAME",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "spec.nodeName",
},
},
},
},
want: []corev1.EnvVar{
{
Name: "NODE_NAME",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "spec.nodeName",
},
},
},
},
},
{
name: "secret key ref",
virtualPod: virtualPod,
envs: []corev1.EnvVar{
{
Name: "SECRET_VAR",
ValueFrom: &corev1.EnvVarSource{
SecretKeyRef: &corev1.SecretKeySelector{
LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
Key: "my-key",
},
},
},
},
want: []corev1.EnvVar{
{
Name: "SECRET_VAR",
ValueFrom: &corev1.EnvVarSource{
SecretKeyRef: &corev1.SecretKeySelector{
LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret-my-namespace-c-test-6d792d7365637265742b6d792d6-887db"},
Key: "my-key",
},
},
},
},
},
{
name: "configmap key ref",
virtualPod: virtualPod,
envs: []corev1.EnvVar{
{
Name: "CONFIG_VAR",
ValueFrom: &corev1.EnvVarSource{
ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
Key: "my-key",
},
},
},
},
want: []corev1.EnvVar{
{
Name: "CONFIG_VAR",
ValueFrom: &corev1.EnvVarSource{
ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap-my-namespace-c-test-6d792d636f6e6669676d6170-301f6"},
Key: "my-key",
},
},
},
},
},
{
name: "resource field ref",
virtualPod: virtualPod,
envs: []corev1.EnvVar{
{
Name: "CPU_LIMIT",
ValueFrom: &corev1.EnvVarSource{
ResourceFieldRef: &corev1.ResourceFieldSelector{
ContainerName: "my-container",
Resource: "limits.cpu",
},
},
},
},
want: []corev1.EnvVar{
{
Name: "CPU_LIMIT",
ValueFrom: &corev1.EnvVarSource{
ResourceFieldRef: &corev1.ResourceFieldSelector{
ContainerName: "my-container",
Resource: "limits.cpu",
},
},
},
},
},
{
name: "mixed env vars",
virtualPod: virtualPod,
envs: []corev1.EnvVar{
{Name: "MY_VAR", Value: "my-value"},
{
Name: "POD_NAME",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.name",
},
},
},
{
Name: "POD_NAMESPACE",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "metadata.namespace",
},
},
},
{
Name: "NODE_NAME",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "spec.nodeName",
},
},
},
},
want: []corev1.EnvVar{
{Name: "MY_VAR", Value: "my-value"},
{Name: "POD_NAME", Value: "my-pod"},
{Name: "POD_NAMESPACE", Value: "my-namespace"},
{
Name: "NODE_NAME",
ValueFrom: &corev1.EnvVarSource{
FieldRef: &corev1.ObjectFieldSelector{
FieldPath: "spec.nodeName",
},
},
},
},
},
}
p := Provider{
Translator: translate.ToHostTranslator{
ClusterName: "c-test",
ClusterNamespace: "ns-test",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := p.configureEnv(tt.virtualPod, tt.envs)
assert.Equal(t, tt.want, got)
})
}
}

View File

@@ -23,7 +23,8 @@ const (
// transformTokens copies the serviceaccount tokens used by the pod's serviceaccount to a secret on the host cluster and mounts it
// to look like the serviceaccount token
func (p *Provider) transformTokens(ctx context.Context, pod, tPod *corev1.Pod) error {
p.logger.Info("transforming token", "pod", pod.Name, "namespace", pod.Namespace, "serviceAccountName", pod.Spec.ServiceAccountName)
logger := p.logger.WithValues("namespace", pod.Namespace, "name", pod.Name, "serviceAccountName", pod.Spec.ServiceAccountName)
logger.V(1).Info("Transforming token")
// skip this process if the kube-api-access is already removed from the pod
// this is needed in case users already add their own custom tokens, as in rancher imported clusters

View File

@@ -57,6 +57,7 @@ func main() {
},
PersistentPreRun: func(cmd *cobra.Command, args []string) {
cmds.InitializeConfig(cmd)
logger = zapr.NewLogger(log.New(debug, logFormat))
},
RunE: run,

View File

@@ -1,4 +1,4 @@
FROM alpine
FROM registry.suse.com/bci/bci-base:15.7
ARG BIN_K3K=bin/k3k
ARG BIN_K3KCLI=bin/k3kcli

View File

@@ -1,5 +1,5 @@
# TODO: switch this to BCI-micro or scratch. Left as base for now to make debugging easier
FROM registry.suse.com/bci/bci-base:15.6
FROM registry.suse.com/bci/bci-base:15.7
ARG BIN_K3K_KUBELET=bin/k3k-kubelet

View File

@@ -1,7 +1,9 @@
package v1beta1
import (
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -123,7 +125,7 @@ type ClusterSpec struct {
// The Secret must have a "token" field in its data.
//
// +optional
TokenSecretRef *v1.SecretReference `json:"tokenSecretRef,omitempty"`
TokenSecretRef *corev1.SecretReference `json:"tokenSecretRef,omitempty"`
// TLSSANs specifies subject alternative names for the K3s server certificate.
//
@@ -145,12 +147,12 @@ type ClusterSpec struct {
// ServerEnvs specifies list of environment variables to set in the server pod.
//
// +optional
ServerEnvs []v1.EnvVar `json:"serverEnvs,omitempty"`
ServerEnvs []corev1.EnvVar `json:"serverEnvs,omitempty"`
// AgentEnvs specifies list of environment variables to set in the agent pod.
//
// +optional
AgentEnvs []v1.EnvVar `json:"agentEnvs,omitempty"`
AgentEnvs []corev1.EnvVar `json:"agentEnvs,omitempty"`
// Addons specifies secrets containing raw YAML to deploy on cluster startup.
//
@@ -160,12 +162,12 @@ type ClusterSpec struct {
// ServerLimit specifies resource limits for server nodes.
//
// +optional
ServerLimit v1.ResourceList `json:"serverLimit,omitempty"`
ServerLimit corev1.ResourceList `json:"serverLimit,omitempty"`
// WorkerLimit specifies resource limits for agent nodes.
//
// +optional
WorkerLimit v1.ResourceList `json:"workerLimit,omitempty"`
WorkerLimit corev1.ResourceList `json:"workerLimit,omitempty"`
// MirrorHostNodes controls whether node objects from the host cluster
// are mirrored into the virtual cluster.
@@ -183,6 +185,36 @@ type ClusterSpec struct {
// +kubebuilder:default={}
// +optional
Sync *SyncConfig `json:"sync,omitempty"`
// SecretMounts specifies a list of secrets to mount into server and agent pods.
// Each entry defines a secret and its mount path within the pods.
//
// +optional
SecretMounts []SecretMount `json:"secretMounts,omitempty"`
}
// SecretMount defines a secret to be mounted into server or agent pods,
// allowing for custom configurations, certificates, or other sensitive data.
type SecretMount struct {
// Embeds SecretName, Items, DefaultMode, and Optional
corev1.SecretVolumeSource `json:",inline"`
// MountPath is the path within server and agent pods where the
// secret contents will be mounted.
//
// +optional
MountPath string `json:"mountPath,omitempty"`
// SubPath is an optional path within the secret to mount instead of the root.
// When specified, only the specified key from the secret will be mounted as a file
// at MountPath, keeping the parent directory writable.
//
// +optional
SubPath string `json:"subPath,omitempty"`
// Role is the type of k3k pod into which the secret will be mounted.
// This can be 'server', 'agent', or 'all' (for both).
//
// +optional
// +kubebuilder:validation:Enum=server;agent;all
Role string `json:"role,omitempty"`
}
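
A hedged sketch of how a Cluster spec might populate the new field (the secret name is invented; the registries.yaml path mirrors the SubPath doc comment above):

package main

import (
    corev1 "k8s.io/api/core/v1"

    "github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)

func exampleSecretMounts() v1beta1.ClusterSpec {
    return v1beta1.ClusterSpec{
        SecretMounts: []v1beta1.SecretMount{
            {
                SecretVolumeSource: corev1.SecretVolumeSource{
                    SecretName: "my-registries", // hypothetical secret with a registries.yaml key
                },
                MountPath: "/etc/rancher/k3s/registries.yaml",
                SubPath:   "registries.yaml",
                Role:      "all", // mount into both server and agent pods
            },
        },
    }
}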
// SyncConfig will contain the resources that should be synced from virtual cluster to host cluster.
@@ -362,8 +394,9 @@ type PersistenceConfig struct {
// This field is only relevant in "dynamic" mode.
//
// +kubebuilder:default="2G"
// +kubebuilder:validation:XValidation:message="storageRequestSize is immutable",rule="self == oldSelf"
// +optional
StorageRequestSize string `json:"storageRequestSize,omitempty"`
StorageRequestSize *resource.Quantity `json:"storageRequestSize,omitempty"`
}
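
Since the field is now a *resource.Quantity rather than a string, callers construct it roughly like this (a sketch, not code from this diff; it assumes the resource and v1beta1 imports, and the Type and DynamicPersistenceMode identifiers that appear in the tests later in this compare):

func examplePersistence() v1beta1.PersistenceConfig {
    size := resource.MustParse("2G") // matches the kubebuilder default above
    return v1beta1.PersistenceConfig{
        Type:               v1beta1.DynamicPersistenceMode,
        StorageRequestSize: &size,
    }
}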
// ExposeConfig specifies options for exposing the API server.
@@ -467,10 +500,9 @@ type CredentialSources struct {
// CredentialSource defines where to get a credential from.
// It can represent either a TLS key pair or a single private key.
type CredentialSource struct {
// SecretName specifies the name of an existing secret to use.
// The controller expects specific keys inside based on the credential type:
// - For TLS pairs (e.g., ServerCA): 'tls.crt' and 'tls.key'.
// - For ServiceAccountTokenKey: 'tls.key'.
// The secret must contain specific keys based on the credential type:
// - For TLS certificate pairs (e.g., ServerCA): `tls.crt` and `tls.key`.
// - For the ServiceAccountToken signing key: `tls.key`.
SecretName string `json:"secretName"`
}
@@ -582,13 +614,13 @@ type VirtualClusterPolicySpec struct {
// Quota specifies the resource limits for clusters within a clusterpolicy.
//
// +optional
Quota *v1.ResourceQuotaSpec `json:"quota,omitempty"`
Quota *corev1.ResourceQuotaSpec `json:"quota,omitempty"`
// Limit specifies the LimitRange that will be applied to all pods within the VirtualClusterPolicy
// to set defaults and constraints (min/max)
//
// +optional
Limit *v1.LimitRangeSpec `json:"limit,omitempty"`
Limit *corev1.LimitRangeSpec `json:"limit,omitempty"`
// DefaultNodeSelector specifies the node selector that applies to all clusters (server + agent) in the target Namespace.
//

View File

@@ -173,6 +173,13 @@ func (in *ClusterSpec) DeepCopyInto(out *ClusterSpec) {
*out = new(SyncConfig)
(*in).DeepCopyInto(*out)
}
if in.SecretMounts != nil {
in, out := &in.SecretMounts, &out.SecretMounts
*out = make([]SecretMount, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterSpec.
@@ -418,6 +425,11 @@ func (in *PersistenceConfig) DeepCopyInto(out *PersistenceConfig) {
*out = new(string)
**out = **in
}
if in.StorageRequestSize != nil {
in, out := &in.StorageRequestSize, &out.StorageRequestSize
x := (*in).DeepCopy()
*out = &x
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PersistenceConfig.
@@ -474,6 +486,22 @@ func (in *PriorityClassSyncConfig) DeepCopy() *PriorityClassSyncConfig {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SecretMount) DeepCopyInto(out *SecretMount) {
*out = *in
in.SecretVolumeSource.DeepCopyInto(&out.SecretVolumeSource)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SecretMount.
func (in *SecretMount) DeepCopy() *SecretMount {
if in == nil {
return nil
}
out := new(SecretMount)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SecretSyncConfig) DeepCopyInto(out *SecretSyncConfig) {
*out = *in

View File

@@ -386,6 +386,11 @@ func (s *SharedAgent) role(ctx context.Context) error {
Resources: []string{"events"},
Verbs: []string{"create"},
},
{
APIGroups: []string{""},
Resources: []string{"resourcequotas"},
Verbs: []string{"get", "watch", "list"},
},
{
APIGroups: []string{"networking.k8s.io"},
Resources: []string{"ingresses"},

View File

@@ -13,6 +13,7 @@ import (
ctrlruntimeclient "sigs.k8s.io/controller-runtime/pkg/client"
"github.com/rancher/k3k/pkg/controller"
"github.com/rancher/k3k/pkg/controller/cluster/mounts"
)
const (
@@ -98,6 +99,15 @@ func (v *VirtualAgent) deployment(ctx context.Context) error {
"mode": "virtual",
},
}
podSpec := v.podSpec(image, name)
if len(v.cluster.Spec.SecretMounts) > 0 {
vols, volMounts := mounts.BuildSecretsMountsVolumes(v.cluster.Spec.SecretMounts, "agent")
podSpec.Volumes = append(podSpec.Volumes, vols...)
podSpec.Containers[0].VolumeMounts = append(podSpec.Containers[0].VolumeMounts, volMounts...)
}
deployment := &apps.Deployment{
TypeMeta: metav1.TypeMeta{
@@ -116,7 +126,7 @@ func (v *VirtualAgent) deployment(ctx context.Context) error {
ObjectMeta: metav1.ObjectMeta{
Labels: selector.MatchLabels,
},
Spec: v.podSpec(image, name, v.cluster.Spec.AgentArgs, &selector),
Spec: podSpec,
},
},
}
@@ -124,12 +134,14 @@ func (v *VirtualAgent) deployment(ctx context.Context) error {
return v.ensureObject(ctx, deployment)
}
func (v *VirtualAgent) podSpec(image, name string, args []string, affinitySelector *metav1.LabelSelector) v1.PodSpec {
func (v *VirtualAgent) podSpec(image, name string) v1.PodSpec {
var limit v1.ResourceList
args := v.cluster.Spec.AgentArgs
args = append([]string{"agent", "--config", "/opt/rancher/k3s/config.yaml"}, args...)
podSpec := v1.PodSpec{
NodeSelector: v.cluster.Spec.NodeSelector,
Volumes: []v1.Volume{
{
Name: "config",

View File

@@ -23,7 +23,7 @@ func newVirtualClient(ctx context.Context, hostClient ctrlruntimeclient.Client,
}
if err := hostClient.Get(ctx, kubeconfigSecretName, &clusterKubeConfig); err != nil {
return nil, fmt.Errorf("failed to get kubeconfig secret: %w", err)
return nil, err
}
restConfig, err := clientcmd.RESTConfigFromKubeConfig(clusterKubeConfig.Data["kubeconfig.yaml"])

View File

@@ -43,7 +43,6 @@ import (
)
const (
namePrefix = "k3k"
clusterController = "k3k-cluster-controller"
clusterFinalizerName = "cluster.k3k.io/finalizer"
ClusterInvalidName = "system"

View File

@@ -2,11 +2,14 @@ package cluster_test
import (
"context"
"strings"
"time"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/utils/ptr"
"sigs.k8s.io/controller-runtime/pkg/client"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
networkingv1 "k8s.io/api/networking/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -66,7 +69,7 @@ var _ = Describe("Cluster Controller", Label("controller"), Label("Cluster"), fu
Expect(cluster.Spec.Sync.PriorityClasses.Enabled).To(BeFalse())
Expect(cluster.Spec.Persistence.Type).To(Equal(v1beta1.DynamicPersistenceMode))
Expect(cluster.Spec.Persistence.StorageRequestSize).To(Equal("2G"))
Expect(cluster.Spec.Persistence.StorageRequestSize.Equal(resource.MustParse("2G"))).To(BeTrue())
Expect(cluster.Status.Phase).To(Equal(v1beta1.ClusterUnknown))
@@ -294,6 +297,164 @@ var _ = Describe("Cluster Controller", Label("controller"), Label("Cluster"), fu
Expect(err).To(HaveOccurred())
})
})
When("adding addons to the cluster", func() {
It("will create a statefulset with the correct addon volumes and volume mounts", func() {
// Create the addon secret first
addonSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "test-addon",
Namespace: namespace,
},
Data: map[string][]byte{
"manifest.yaml": []byte("apiVersion: v1\nkind: ConfigMap\nmetadata:\n name: test-cm\n"),
},
}
Expect(k8sClient.Create(ctx, addonSecret)).To(Succeed())
// Create the cluster with an addon referencing the secret
cluster := &v1beta1.Cluster{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "cluster-",
Namespace: namespace,
},
Spec: v1beta1.ClusterSpec{
Addons: []v1beta1.Addon{
{
SecretRef: "test-addon",
},
},
},
}
Expect(k8sClient.Create(ctx, cluster)).To(Succeed())
// Wait for the statefulset to be created and verify volumes/mounts
var statefulSet appsv1.StatefulSet
statefulSetName := k3kcontroller.SafeConcatNameWithPrefix(cluster.Name, "server")
Eventually(func() error {
return k8sClient.Get(ctx, client.ObjectKey{
Name: statefulSetName,
Namespace: cluster.Namespace,
}, &statefulSet)
}).
WithTimeout(time.Second * 30).
WithPolling(time.Second).
Should(Succeed())
// Verify the addon volume exists
var addonVolume *corev1.Volume
for i := range statefulSet.Spec.Template.Spec.Volumes {
v := &statefulSet.Spec.Template.Spec.Volumes[i]
if v.Name == "addon-test-addon" {
addonVolume = v
break
}
}
Expect(addonVolume).NotTo(BeNil(), "addon volume should exist")
Expect(addonVolume.VolumeSource.Secret).NotTo(BeNil())
Expect(addonVolume.VolumeSource.Secret.SecretName).To(Equal("test-addon"))
// Verify the addon volume mount exists in the first container
containers := statefulSet.Spec.Template.Spec.Containers
Expect(containers).NotTo(BeEmpty())
var addonMount *corev1.VolumeMount
for i := range containers[0].VolumeMounts {
m := &containers[0].VolumeMounts[i]
if m.Name == "addon-test-addon" {
addonMount = m
break
}
}
Expect(addonMount).NotTo(BeNil(), "addon volume mount should exist")
Expect(addonMount.MountPath).To(Equal("/var/lib/rancher/k3s/server/manifests/test-addon"))
Expect(addonMount.ReadOnly).To(BeTrue())
})
It("will create volumes for multiple addons in the correct order", func() {
// Create multiple addon secrets
addonSecret1 := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "addon-one",
Namespace: namespace,
},
Data: map[string][]byte{
"manifest.yaml": []byte("apiVersion: v1\nkind: ConfigMap\nmetadata:\n name: cm-one\n"),
},
}
addonSecret2 := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "addon-two",
Namespace: namespace,
},
Data: map[string][]byte{
"manifest.yaml": []byte("apiVersion: v1\nkind: ConfigMap\nmetadata:\n name: cm-two\n"),
},
}
Expect(k8sClient.Create(ctx, addonSecret1)).To(Succeed())
Expect(k8sClient.Create(ctx, addonSecret2)).To(Succeed())
// Create the cluster with multiple addons in specific order
cluster := &v1beta1.Cluster{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "cluster-",
Namespace: namespace,
},
Spec: v1beta1.ClusterSpec{
Addons: []v1beta1.Addon{
{SecretRef: "addon-one"},
{SecretRef: "addon-two"},
},
},
}
Expect(k8sClient.Create(ctx, cluster)).To(Succeed())
// Wait for the statefulset to be created
var statefulSet appsv1.StatefulSet
statefulSetName := k3kcontroller.SafeConcatNameWithPrefix(cluster.Name, "server")
Eventually(func() error {
return k8sClient.Get(ctx, client.ObjectKey{
Name: statefulSetName,
Namespace: cluster.Namespace,
}, &statefulSet)
}).
WithTimeout(time.Second * 30).
WithPolling(time.Second).
Should(Succeed())
// Verify both addon volumes exist and are in the correct order
volumes := statefulSet.Spec.Template.Spec.Volumes
// Extract only addon volumes (those starting with "addon-")
var addonVolumes []corev1.Volume
for _, v := range volumes {
if strings.HasPrefix(v.Name, "addon-") {
addonVolumes = append(addonVolumes, v)
}
}
Expect(addonVolumes).To(HaveLen(2))
Expect(addonVolumes[0].Name).To(Equal("addon-addon-one"))
Expect(addonVolumes[1].Name).To(Equal("addon-addon-two"))
// Verify both addon volume mounts exist and are in the correct order
containers := statefulSet.Spec.Template.Spec.Containers
Expect(containers).NotTo(BeEmpty())
// Extract only addon mounts (those starting with "addon-")
var addonMounts []corev1.VolumeMount
for _, m := range containers[0].VolumeMounts {
if strings.HasPrefix(m.Name, "addon-") {
addonMounts = append(addonMounts, m)
}
}
Expect(addonMounts).To(HaveLen(2))
Expect(addonMounts[0].Name).To(Equal("addon-addon-one"))
Expect(addonMounts[0].MountPath).To(Equal("/var/lib/rancher/k3s/server/manifests/addon-one"))
Expect(addonMounts[1].Name).To(Equal("addon-addon-two"))
Expect(addonMounts[1].MountPath).To(Equal("/var/lib/rancher/k3s/server/manifests/addon-two"))
})
})
})
})
})

View File

@@ -0,0 +1,60 @@
package mounts
import (
v1 "k8s.io/api/core/v1"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)
func BuildSecretsMountsVolumes(secretMounts []v1beta1.SecretMount, role string) ([]v1.Volume, []v1.VolumeMount) {
var (
vols []v1.Volume
volMounts []v1.VolumeMount
)
for _, secretMount := range secretMounts {
if secretMount.SecretName == "" || secretMount.MountPath == "" {
continue
}
if secretMount.Role == role || secretMount.Role == "" || secretMount.Role == "all" {
vol, volMount := buildSecretMountVolume(secretMount)
vols = append(vols, vol)
volMounts = append(volMounts, volMount)
}
}
return vols, volMounts
}
func buildSecretMountVolume(secretMount v1beta1.SecretMount) (v1.Volume, v1.VolumeMount) {
projectedVolSources := []v1.VolumeProjection{
{
Secret: &v1.SecretProjection{
LocalObjectReference: v1.LocalObjectReference{
Name: secretMount.SecretName,
},
Items: secretMount.Items,
Optional: secretMount.Optional,
},
},
}
vol := v1.Volume{
Name: secretMount.SecretName,
VolumeSource: v1.VolumeSource{
Projected: &v1.ProjectedVolumeSource{
Sources: projectedVolSources,
},
},
}
volMount := v1.VolumeMount{
Name: secretMount.SecretName,
MountPath: secretMount.MountPath,
SubPath: secretMount.SubPath,
}
return vol, volMount
}
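
A quick usage sketch of the helper (secret name and mount path are invented):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"

    "github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
    "github.com/rancher/k3k/pkg/controller/cluster/mounts"
)

func main() {
    secretMounts := []v1beta1.SecretMount{
        {
            SecretVolumeSource: corev1.SecretVolumeSource{SecretName: "registry-config"},
            MountPath:          "/etc/rancher/k3s/registries.yaml",
            SubPath:            "registries.yaml",
            Role:               "all",
        },
    }

    // Only mounts whose Role is "server", "all", or empty are returned for a server pod.
    vols, volMounts := mounts.BuildSecretsMountsVolumes(secretMounts, "server")
    fmt.Println(len(vols), len(volMounts)) // 1 1
}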

View File

@@ -0,0 +1,523 @@
package mounts
import (
"testing"
"github.com/stretchr/testify/assert"
v1 "k8s.io/api/core/v1"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
)
func Test_BuildSecretMountsVolume(t *testing.T) {
type args struct {
secretMounts []v1beta1.SecretMount
role string
}
type expectedVolumes struct {
vols []v1.Volume
volMounts []v1.VolumeMount
}
tests := []struct {
name string
args args
expectedData expectedVolumes
}{
{
name: "empty secret mounts",
args: args{
secretMounts: []v1beta1.SecretMount{},
role: "server",
},
expectedData: expectedVolumes{
vols: nil,
volMounts: nil,
},
},
{
name: "nil secret mounts",
args: args{
secretMounts: nil,
role: "server",
},
expectedData: expectedVolumes{
vols: nil,
volMounts: nil,
},
},
{
name: "single secret mount with no role specified defaults to all",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-1",
},
MountPath: "/mount-dir-1",
},
},
role: "server",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("secret-1", nil),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("secret-1", "/mount-dir-1", ""),
},
},
},
{
name: "multiple secrets mounts with no role specified defaults to all",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-1",
},
MountPath: "/mount-dir-1",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-2",
},
MountPath: "/mount-dir-2",
},
},
role: "server",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("secret-1", nil),
expectedVolume("secret-2", nil),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("secret-1", "/mount-dir-1", ""),
expectedVolumeMount("secret-2", "/mount-dir-2", ""),
},
},
},
{
name: "single secret mount with items",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-1",
Items: []v1.KeyToPath{
{
Key: "key-1",
Path: "path-1",
},
},
},
MountPath: "/mount-dir-1",
},
},
role: "server",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("secret-1", []v1.KeyToPath{{Key: "key-1", Path: "path-1"}}),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("secret-1", "/mount-dir-1", ""),
},
},
},
{
name: "multiple secret mounts with items",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-1",
Items: []v1.KeyToPath{
{
Key: "key-1",
Path: "path-1",
},
},
},
MountPath: "/mount-dir-1",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-2",
Items: []v1.KeyToPath{
{
Key: "key-2",
Path: "path-2",
},
},
},
MountPath: "/mount-dir-2",
},
},
role: "server",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("secret-1", []v1.KeyToPath{{Key: "key-1", Path: "path-1"}}),
expectedVolume("secret-2", []v1.KeyToPath{{Key: "key-2", Path: "path-2"}}),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("secret-1", "/mount-dir-1", ""),
expectedVolumeMount("secret-2", "/mount-dir-2", ""),
},
},
},
{
name: "user will specify the order",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "z-secret",
},
MountPath: "/z",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "a-secret",
},
MountPath: "/a",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "m-secret",
},
MountPath: "/m",
},
},
role: "server",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("z-secret", nil),
expectedVolume("a-secret", nil),
expectedVolume("m-secret", nil),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("z-secret", "/z", ""),
expectedVolumeMount("a-secret", "/a", ""),
expectedVolumeMount("m-secret", "/m", ""),
},
},
},
{
name: "skip entries with empty secret name",
args: args{
secretMounts: []v1beta1.SecretMount{
{
MountPath: "/mount-dir-1",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-2",
},
MountPath: "/mount-dir-2",
},
},
role: "server",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("secret-2", nil),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("secret-2", "/mount-dir-2", ""),
},
},
},
{
name: "skip entries with empty mount path",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-1",
},
MountPath: "",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-2",
},
MountPath: "/mount-dir-2",
},
},
role: "server",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("secret-2", nil),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("secret-2", "/mount-dir-2", ""),
},
},
},
{
name: "secret mount with subPath",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-1",
},
MountPath: "/etc/rancher/k3s/registries.yaml",
SubPath: "registries.yaml",
},
},
role: "server",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("secret-1", nil),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("secret-1", "/etc/rancher/k3s/registries.yaml", "registries.yaml"),
},
},
},
// Role-based filtering tests
{
name: "role server includes only server and all roles",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "server-secret",
},
MountPath: "/server",
Role: "server",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "agent-secret",
},
MountPath: "/agent",
Role: "agent",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "all-secret",
},
MountPath: "/all",
Role: "all",
},
},
role: "server",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("server-secret", nil),
expectedVolume("all-secret", nil),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("server-secret", "/server", ""),
expectedVolumeMount("all-secret", "/all", ""),
},
},
},
{
name: "role agent includes only agent and all roles",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "server-secret",
},
MountPath: "/server",
Role: "server",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "agent-secret",
},
MountPath: "/agent",
Role: "agent",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "all-secret",
},
MountPath: "/all",
Role: "all",
},
},
role: "agent",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("agent-secret", nil),
expectedVolume("all-secret", nil),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("agent-secret", "/agent", ""),
expectedVolumeMount("all-secret", "/all", ""),
},
},
},
{
name: "empty role in secret mount defaults to all",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "no-role-secret",
},
MountPath: "/no-role",
Role: "",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "server-secret",
},
MountPath: "/server",
Role: "server",
},
},
role: "agent",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("no-role-secret", nil),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("no-role-secret", "/no-role", ""),
},
},
},
{
name: "mixed roles with server filter",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "registry-config",
},
MountPath: "/etc/rancher/k3s/registries.yaml",
SubPath: "registries.yaml",
Role: "all",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "server-config",
},
MountPath: "/etc/server",
Role: "server",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "agent-config",
},
MountPath: "/etc/agent",
Role: "agent",
},
},
role: "server",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("registry-config", nil),
expectedVolume("server-config", nil),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("registry-config", "/etc/rancher/k3s/registries.yaml", "registries.yaml"),
expectedVolumeMount("server-config", "/etc/server", ""),
},
},
},
{
name: "all secrets have role all",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-1",
},
MountPath: "/secret-1",
Role: "all",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "secret-2",
},
MountPath: "/secret-2",
Role: "all",
},
},
role: "server",
},
expectedData: expectedVolumes{
vols: []v1.Volume{
expectedVolume("secret-1", nil),
expectedVolume("secret-2", nil),
},
volMounts: []v1.VolumeMount{
expectedVolumeMount("secret-1", "/secret-1", ""),
expectedVolumeMount("secret-2", "/secret-2", ""),
},
},
},
{
name: "no secrets match agent role",
args: args{
secretMounts: []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "server-only",
},
MountPath: "/server-only",
Role: "server",
},
},
role: "agent",
},
expectedData: expectedVolumes{
vols: nil,
volMounts: nil,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
vols, volMounts := BuildSecretsMountsVolumes(tt.args.secretMounts, tt.args.role)
assert.Equal(t, tt.expectedData.vols, vols)
assert.Equal(t, tt.expectedData.volMounts, volMounts)
})
}
}
func expectedVolume(name string, items []v1.KeyToPath) v1.Volume {
return v1.Volume{
Name: name,
VolumeSource: v1.VolumeSource{
Projected: &v1.ProjectedVolumeSource{
Sources: []v1.VolumeProjection{
{Secret: &v1.SecretProjection{
LocalObjectReference: v1.LocalObjectReference{
Name: name,
},
Items: items,
}},
},
},
},
}
}
func expectedVolumeMount(name, mountPath, subPath string) v1.VolumeMount {
return v1.VolumeMount{
Name: name,
MountPath: mountPath,
SubPath: subPath,
}
}
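Taken together, the cases above pin down the contract of BuildSecretsMountsVolumes: entries with an empty secret name or mount path are skipped, an empty Role behaves like "all", only entries whose role matches the requested role (or "all") survive, and each surviving entry yields one projected volume plus one mount, in input order. A minimal sketch consistent with this table — an illustration reconstructed from the tests, not the actual implementation in pkg/controller/cluster/mounts — could look like:

// Sketch only, reconstructed from the test expectations above; v1 is
// k8s.io/api/core/v1 and v1beta1 is the k3k API package already imported.
func BuildSecretsMountsVolumes(secretMounts []v1beta1.SecretMount, role string) ([]v1.Volume, []v1.VolumeMount) {
	var (
		vols      []v1.Volume
		volMounts []v1.VolumeMount
	)

	for _, sm := range secretMounts {
		// skip malformed entries
		if sm.SecretName == "" || sm.MountPath == "" {
			continue
		}

		// an empty role defaults to "all"; otherwise it must match
		r := string(sm.Role)
		if r == "" {
			r = "all"
		}

		if r != "all" && r != role {
			continue
		}

		vols = append(vols, v1.Volume{
			Name: sm.SecretName,
			VolumeSource: v1.VolumeSource{
				Projected: &v1.ProjectedVolumeSource{
					Sources: []v1.VolumeProjection{{
						Secret: &v1.SecretProjection{
							LocalObjectReference: v1.LocalObjectReference{Name: sm.SecretName},
							Items:                sm.Items,
						},
					}},
				},
			},
		})

		volMounts = append(volMounts, v1.VolumeMount{
			Name:      sm.SecretName,
			MountPath: sm.MountPath,
			SubPath:   sm.SubPath,
		})
	}

	return vols, volMounts
}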

View File

@@ -81,7 +81,7 @@ func serverOptions(cluster *v1beta1.Cluster, token string) string {
}
if cluster.Spec.Mode != agent.VirtualNodeMode {
opts = opts + "disable-agent: true\negress-selector-mode: disabled\ndisable:\n- servicelb\n- traefik\n- metrics-server\n- local-storage"
opts = opts + "disable-agent: true\negress-selector-mode: disabled\ndisable:\n- servicelb\n- traefik\n- metrics-server\n- local-storage\n"
}
// TODO: Add extra args to the options

View File

@@ -8,7 +8,6 @@ import (
"strings"
"text/template"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/utils/ptr"
@@ -22,13 +21,13 @@ import (
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
"github.com/rancher/k3k/pkg/controller"
"github.com/rancher/k3k/pkg/controller/cluster/agent"
"github.com/rancher/k3k/pkg/controller/cluster/mounts"
)
const (
k3kSystemNamespace = "k3k-system"
serverName = "server"
configName = "server-config"
initConfigName = "init-server-config"
serverName = "server"
configName = "server-config"
initConfigName = "init-server-config"
)
// Server
@@ -280,67 +279,31 @@ func (s *Server) StatefulServer(ctx context.Context) (*apps.StatefulSet, error)
volumeMounts []v1.VolumeMount
)
for _, addon := range s.cluster.Spec.Addons {
namespace := k3kSystemNamespace
if addon.SecretNamespace != "" {
namespace = addon.SecretNamespace
}
nn := types.NamespacedName{
Name: addon.SecretRef,
Namespace: namespace,
}
var addons v1.Secret
if err := s.client.Get(ctx, nn, &addons); err != nil {
return nil, err
}
clusterAddons := v1.Secret{
TypeMeta: metav1.TypeMeta{
Kind: "Secret",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: addons.Name,
Namespace: s.cluster.Namespace,
},
Data: make(map[string][]byte, len(addons.Data)),
}
for k, v := range addons.Data {
clusterAddons.Data[k] = v
}
if err := s.client.Create(ctx, &clusterAddons); err != nil {
return nil, err
}
name := "varlibrancherk3smanifests" + addon.SecretRef
volume := v1.Volume{
Name: name,
VolumeSource: v1.VolumeSource{
Secret: &v1.SecretVolumeSource{
SecretName: addon.SecretRef,
},
},
}
volumes = append(volumes, volume)
volumeMount := v1.VolumeMount{
Name: name,
MountPath: "/var/lib/rancher/k3s/server/manifests/" + addon.SecretRef,
// changes to this part of the filesystem shouldn't be done manually. The secret should be updated instead.
ReadOnly: true,
}
volumeMounts = append(volumeMounts, volumeMount)
}
if s.cluster.Spec.CustomCAs != nil && s.cluster.Spec.CustomCAs.Enabled {
vols, mounts, err := s.loadCACertBundle(ctx)
if len(s.cluster.Spec.Addons) > 0 {
addonsVols, addonsMounts, err := s.buildAddonsVolumes(ctx)
if err != nil {
return nil, err
}
volumes = append(volumes, addonsVols...)
volumeMounts = append(volumeMounts, addonsMounts...)
}
if s.cluster.Spec.CustomCAs != nil && s.cluster.Spec.CustomCAs.Enabled {
vols, mounts, err := s.buildCABundleVolumes(ctx)
if err != nil {
return nil, err
}
volumes = append(volumes, vols...)
volumeMounts = append(volumeMounts, mounts...)
}
if len(s.cluster.Spec.SecretMounts) > 0 {
vols, mounts := mounts.BuildSecretsMountsVolumes(s.cluster.Spec.SecretMounts, "server")
volumes = append(volumes, vols...)
volumeMounts = append(volumeMounts, mounts...)
@@ -406,7 +369,7 @@ func (s *Server) setupDynamicPersistence() v1.PersistentVolumeClaim {
StorageClassName: s.cluster.Spec.Persistence.StorageClassName,
Resources: v1.VolumeResourceRequirements{
Requests: v1.ResourceList{
"storage": resource.MustParse(s.cluster.Spec.Persistence.StorageRequestSize),
"storage": *s.cluster.Spec.Persistence.StorageRequestSize,
},
},
},
@@ -416,9 +379,11 @@ func (s *Server) setupDynamicPersistence() v1.PersistentVolumeClaim {
func (s *Server) setupStartCommand() (string, error) {
var output bytes.Buffer
tmpl := singleServerTemplate
tmpl := StartupCommand
mode := "single"
if *s.cluster.Spec.Servers > 1 {
tmpl = HAServerTemplate
mode = "ha"
}
tmplCmd, err := template.New("").Parse(tmpl)
@@ -430,6 +395,8 @@ func (s *Server) setupStartCommand() (string, error) {
"ETCD_DIR": "/var/lib/rancher/k3s/server/db/etcd",
"INIT_CONFIG": "/opt/rancher/k3s/init/config.yaml",
"SERVER_CONFIG": "/opt/rancher/k3s/server/config.yaml",
"CLUSTER_MODE": mode,
"K3K_MODE": string(s.cluster.Spec.Mode),
"EXTRA_ARGS": strings.Join(s.cluster.Spec.ServerArgs, " "),
}); err != nil {
return "", err
@@ -438,7 +405,7 @@ func (s *Server) setupStartCommand() (string, error) {
return output.String(), nil
}
func (s *Server) loadCACertBundle(ctx context.Context) ([]v1.Volume, []v1.VolumeMount, error) {
func (s *Server) buildCABundleVolumes(ctx context.Context) ([]v1.Volume, []v1.VolumeMount, error) {
if s.cluster.Spec.CustomCAs == nil {
return nil, nil, fmt.Errorf("customCAs not found")
}
@@ -530,6 +497,71 @@ func (s *Server) mountCACert(volumeName, certName, secretName string, subPathMou
return volume, mounts
}
func (s *Server) buildAddonsVolumes(ctx context.Context) ([]v1.Volume, []v1.VolumeMount, error) {
var (
volumes []v1.Volume
mounts []v1.VolumeMount
)
for _, addon := range s.cluster.Spec.Addons {
namespace := s.cluster.Namespace
if addon.SecretNamespace != "" {
namespace = addon.SecretNamespace
}
nn := types.NamespacedName{
Name: addon.SecretRef,
Namespace: namespace,
}
var addons v1.Secret
if err := s.client.Get(ctx, nn, &addons); err != nil {
return nil, nil, err
}
// skip copying the addon secret when it already lives in the same namespace as the cluster
if namespace != s.cluster.Namespace {
clusterAddons := v1.Secret{
TypeMeta: metav1.TypeMeta{
Kind: "Secret",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: addons.Name,
Namespace: s.cluster.Namespace,
},
Data: addons.Data,
}
if _, err := controllerutil.CreateOrUpdate(ctx, s.client, &clusterAddons, func() error {
return controllerutil.SetOwnerReference(s.cluster, &clusterAddons, s.client.Scheme())
}); err != nil {
return nil, nil, err
}
}
name := "addon-" + addon.SecretRef
volume := v1.Volume{
Name: name,
VolumeSource: v1.VolumeSource{
Secret: &v1.SecretVolumeSource{
SecretName: addon.SecretRef,
},
},
}
volumes = append(volumes, volume)
volumeMount := v1.VolumeMount{
Name: name,
MountPath: "/var/lib/rancher/k3s/server/manifests/" + addon.SecretRef,
ReadOnly: true,
}
mounts = append(mounts, volumeMount)
}
return volumes, mounts, nil
}
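// Illustration with hypothetical values: an addon referencing a secret
// "k3s-addons" in namespace "shared-addons" is copied into the cluster's
// namespace (with an owner reference back to the Cluster) and mounted
// read-only where k3s picks up manifests:
//
//	cluster.Spec.Addons = []v1beta1.Addon{{
//		SecretNamespace: "shared-addons",
//		SecretRef:       "k3s-addons",
//	}}
//
// resulting mount path: /var/lib/rancher/k3s/server/manifests/k3s-addons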
func sortedKeys(keyMap map[string]string) []string {
keys := make([]string, 0, len(keyMap))

View File

@@ -1,16 +1,102 @@
package server
var singleServerTemplate string = `
if [ -d "{{.ETCD_DIR}}" ]; then
# if the directory exists then it means it's not an initial run
/bin/k3s server --cluster-reset --config {{.INIT_CONFIG}} {{.EXTRA_ARGS}} 2>&1 | tee /var/log/k3s.log
fi
rm -f /var/lib/rancher/k3s/server/db/reset-flag
/bin/k3s server --config {{.INIT_CONFIG}} {{.EXTRA_ARGS}} 2>&1 | tee /var/log/k3s.log`
var StartupCommand string = `
info()
{
echo "[INFO] [$(date +"%c")]" "$@"
}
fatal()
{
echo "[FATAL] [$(date +"%c")] " "$@" >&2
exit 1
}
# safe mode function to reset node IP after pod restarts
safe_mode() {
CURRENT_IP=""
if [ -f /var/lib/rancher/k3s/k3k-node-ip ]; then
CURRENT_IP=$(cat /var/lib/rancher/k3s/k3k-node-ip)
fi
if [ -z "$CURRENT_IP" ] || [ "$CURRENT_IP" = "$POD_IP" ] || [ {{.K3K_MODE}} != "virtual" ]; then
return
fi
# skip safe mode when the node is starting for the first time (no existing etcd data)
if [ -d "{{.ETCD_DIR}}" ]; then
info "Starting K3s in Safe Mode (Network Policy Disabled) to patch Node IP from ${CURRENT_IP} to ${POD_IP}"
/bin/k3s server --disable-network-policy --config $1 {{.EXTRA_ARGS}} > /dev/null 2>&1 &
PID=$!
# Start the loop to wait for the nodeIP to change
info "Waiting for Node IP to update to ${POD_IP}."
count=0
until kubectl get nodes -o wide 2>/dev/null | grep -q "${POD_IP}"; do
if ! kill -0 $PID 2>/dev/null; then
fatal "safe Mode K3s process died unexpectedly!"
fi
sleep 2
count=$((count+1))
if [ $count -gt 60 ]; then
fatal "timed out waiting for node to change IP from $CURRENT_IP to $POD_IP"
fi
done
info "Node IP is set to ${POD_IP} successfully. Stopping Safe Mode process..."
kill $PID
wait $PID 2>/dev/null || true
fi
}
start_single_node() {
info "Starting single node setup..."
# check for existing data in the single-server setup; if found we must perform a cluster reset
if [ -d "{{.ETCD_DIR}}" ]; then
info "Existing data found in single node setup. Performing cluster-reset to ensure quorum..."
if ! /bin/k3s server --cluster-reset --config {{.INIT_CONFIG}} {{.EXTRA_ARGS}} > /dev/null 2>&1; then
fatal "cluster reset failed!"
fi
info "Cluster reset complete. Removing Reset flag file."
rm -f /var/lib/rancher/k3s/server/db/reset-flag
fi
# entering safe mode to ensure correct NodeIP
safe_mode {{.INIT_CONFIG}}
info "Adding pod IP file."
echo $POD_IP > /var/lib/rancher/k3s/k3k-node-ip
var HAServerTemplate string = `
if [ ${POD_NAME: -1} == 0 ] && [ ! -d "{{.ETCD_DIR}}" ]; then
/bin/k3s server --config {{.INIT_CONFIG}} {{.EXTRA_ARGS}} 2>&1 | tee /var/log/k3s.log
else
/bin/k3s server --config {{.SERVER_CONFIG}} {{.EXTRA_ARGS}} 2>&1 | tee /var/log/k3s.log
fi`
}
start_ha_node() {
info "Starting pod $POD_NAME in HA node setup"
if [ ${POD_NAME: -1} == 0 ] && [ ! -d "{{.ETCD_DIR}}" ]; then
info "Adding pod IP file."
echo $POD_IP > /var/lib/rancher/k3s/k3k-node-ip
/bin/k3s server --config {{.INIT_CONFIG}} {{.EXTRA_ARGS}} 2>&1 | tee /var/log/k3s.log
else
safe_mode {{.SERVER_CONFIG}}
info "Adding pod IP file."
echo $POD_IP > /var/lib/rancher/k3s/k3k-node-ip
/bin/k3s server --config {{.SERVER_CONFIG}} {{.EXTRA_ARGS}} 2>&1 | tee /var/log/k3s.log
fi
}
case "{{.CLUSTER_MODE}}" in
"ha")
start_ha_node
;;
"single"|*)
start_single_node
;;
esac`
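For context, this template is rendered by setupStartCommand (shown earlier) via text/template. A minimal sketch of that rendering, assuming only the bytes and text/template imports and the same variable names wired up above:

func renderStartupCommand(mode, k3kMode, extraArgs string) (string, error) {
	tmpl, err := template.New("").Parse(StartupCommand)
	if err != nil {
		return "", err
	}

	var out bytes.Buffer

	// CLUSTER_MODE selects the case branch at the end of the script;
	// K3K_MODE gates safe_mode, which only runs for "virtual" clusters.
	err = tmpl.Execute(&out, map[string]string{
		"ETCD_DIR":      "/var/lib/rancher/k3s/server/db/etcd",
		"INIT_CONFIG":   "/opt/rancher/k3s/init/config.yaml",
		"SERVER_CONFIG": "/opt/rancher/k3s/server/config.yaml",
		"CLUSTER_MODE":  mode,      // "single" or "ha"
		"K3K_MODE":      k3kMode,   // e.g. "virtual"
		"EXTRA_ARGS":    extraArgs, // joined Spec.ServerArgs
	})

	return out.String(), err
}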

View File

@@ -54,6 +54,7 @@ func AddStatefulSetController(ctx context.Context, mgr manager.Manager, maxConcu
return ctrl.NewControllerManagedBy(mgr).
For(&apps.StatefulSet{}).
WithEventFilter(newClusterPredicate()).
Owns(&v1.Pod{}).
Named(statefulsetController).
WithOptions(controller.Options{MaxConcurrentReconciles: maxConcurrentReconciles}).
@@ -120,7 +121,7 @@ func (p *StatefulSetReconciler) handleServerPod(ctx context.Context, cluster v1b
if pod.DeletionTimestamp.IsZero() {
if controllerutil.AddFinalizer(pod, etcdPodFinalizerName) {
log.V(1).Info("Server Pod is being deleted. Removing finalizer", "pod", pod.Name, "namespace", pod.Namespace)
log.V(1).Info("Server Pod is being created. Adding finalizer", "pod", pod.Name, "namespace", pod.Namespace)
return p.Client.Update(ctx, pod)
}

View File

@@ -165,6 +165,7 @@ func nodeEventHandler(r *VirtualClusterPolicyReconciler) handler.Funcs {
if oldNode.Spec.PodCIDR != newNode.Spec.PodCIDR {
podCIDRChanged = true
}
if !reflect.DeepEqual(oldNode.Spec.PodCIDRs, newNode.Spec.PodCIDRs) {
podCIDRChanged = true
}

View File

@@ -27,6 +27,7 @@ func newEncoder(format string) zapcore.Encoder {
encCfg.EncodeTime = zapcore.ISO8601TimeEncoder
var encoder zapcore.Encoder
if format == "text" {
encCfg.EncodeLevel = zapcore.CapitalColorLevelEncoder
encoder = zapcore.NewConsoleEncoder(encCfg)

View File

@@ -3,7 +3,7 @@
set -eou pipefail
CONTROLLER_TOOLS_VERSION=v0.16.0
CONTROLLER_TOOLS_VERSION=v0.20.0
# This will return non-zero until all of our objects in ./pkg/apis can generate valid crds.
# allowDangerousTypes is needed for structs that use floats

scripts/generate-cli-docs Executable file
View File

@@ -0,0 +1,43 @@
#!/bin/bash
set -eou pipefail
DOCS_DIR=./docs/cli
# Generate the raw markdown files
go run $DOCS_DIR/genclidoc.go
echo "Converting Markdown documentation to asciidoc"
pandoc --from markdown --to asciidoc --lua-filter=$DOCS_DIR/convert.lua $DOCS_DIR/k3kcli.md > $DOCS_DIR/k3kcli.adoc
echo "" >> $DOCS_DIR/k3kcli.adoc
# We use an explicit list of files to keep a stable ordering
SUBCOMMAND_FILES=(
"$DOCS_DIR/k3kcli_cluster.md"
"$DOCS_DIR/k3kcli_cluster_create.md"
"$DOCS_DIR/k3kcli_cluster_delete.md"
"$DOCS_DIR/k3kcli_cluster_list.md"
"$DOCS_DIR/k3kcli_cluster_update.md"
"$DOCS_DIR/k3kcli_kubeconfig.md"
"$DOCS_DIR/k3kcli_kubeconfig_generate.md"
"$DOCS_DIR/k3kcli_policy.md"
"$DOCS_DIR/k3kcli_policy_create.md"
"$DOCS_DIR/k3kcli_policy_delete.md"
"$DOCS_DIR/k3kcli_policy_list.md"
)
# Check for newly added doc files
EXPECTED_COUNT=${#SUBCOMMAND_FILES[@]}
ACTUAL_COUNT=$(ls -1 $DOCS_DIR/k3kcli_*.md | wc -l)
if [ "$EXPECTED_COUNT" -ne "$ACTUAL_COUNT" ]; then
echo "ERROR: File count mismatch!"
echo "Expected $EXPECTED_COUNT files in the array, but found $ACTUAL_COUNT on disk."
echo "Please update the SUBCOMMAND_FILES array in $0"
exit 1
fi
pandoc --from markdown --to asciidoc --lua-filter=$DOCS_DIR/convert.lua "${SUBCOMMAND_FILES[@]}">> $DOCS_DIR/k3kcli.adoc
echo "Asciidoc documentation generated at $DOCS_DIR/k3kcli.adoc"

View File

@@ -6,16 +6,30 @@ import (
"os/exec"
"time"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/rand"
v1 "k8s.io/api/core/v1"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
"github.com/rancher/k3k/pkg/controller/policy"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
func K3kcli(args ...string) (string, string, error) {
return runCmd("k3kcli", args...)
}
func Kubectl(args ...string) (string, string, error) {
return runCmd("kubectl", args...)
}
func runCmd(cmdName string, args ...string) (string, string, error) {
stdout, stderr := &bytes.Buffer{}, &bytes.Buffer{}
cmd := exec.CommandContext(context.Background(), "k3kcli", args...)
cmd := exec.CommandContext(context.Background(), cmdName, args...)
cmd.Stdout = stdout
cmd.Stderr = stderr
@@ -28,7 +42,7 @@ var _ = When("using the k3kcli", Label("cli"), func() {
It("can get the version", func() {
stdout, _, err := K3kcli("--version")
Expect(err).To(Not(HaveOccurred()))
Expect(stdout).To(ContainSubstring("k3kcli version v"))
Expect(stdout).To(ContainSubstring("k3kcli version "))
})
When("trying the cluster commands", func() {
@@ -40,13 +54,14 @@ var _ = When("using the k3kcli", Label("cli"), func() {
)
clusterName := "cluster-" + rand.String(5)
clusterNamespace := "k3k-" + clusterName
namespace := NewNamespace()
clusterNamespace := namespace.Name
DeferCleanup(func() {
DeleteNamespaces(clusterNamespace)
DeleteNamespaces(namespace.Name)
})
_, stderr, err = K3kcli("cluster", "create", clusterName)
_, stderr, err = K3kcli("cluster", "create", "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("You can start using the cluster"))
@@ -55,7 +70,7 @@ var _ = When("using the k3kcli", Label("cli"), func() {
Expect(stderr).To(BeEmpty())
Expect(stdout).To(ContainSubstring(clusterNamespace))
_, stderr, err = K3kcli("cluster", "delete", clusterName)
_, stderr, err = K3kcli("cluster", "delete", "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring(`Deleting '%s' cluster in namespace '%s'`, clusterName, clusterNamespace))
@@ -77,13 +92,14 @@ var _ = When("using the k3kcli", Label("cli"), func() {
)
clusterName := "cluster-" + rand.String(5)
clusterNamespace := "k3k-" + clusterName
namespace := NewNamespace()
clusterNamespace := namespace.Name
DeferCleanup(func() {
DeleteNamespaces(clusterNamespace)
})
_, stderr, err = K3kcli("cluster", "create", "--version", "v1.33.6-k3s1", clusterName)
_, stderr, err = K3kcli("cluster", "create", "--version", "v1.33.5-k3s1", clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("You can start using the cluster"))
})
@@ -115,8 +131,251 @@ var _ = When("using the k3kcli", Label("cli"), func() {
stdout, stderr, err = K3kcli("policy", "list")
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stdout).To(BeEmpty())
Expect(stderr).To(BeEmpty())
Expect(stdout).To(Not(ContainSubstring(policyName)))
})
It("can bound a policy to a namespace", func() {
var (
stdout string
stderr string
err error
)
namespace := NewNamespace()
namespaceName := namespace.Name
DeferCleanup(func() {
DeleteNamespaces(namespaceName)
})
By("Creating a policy and binding to a namespace")
policy1Name := "policy-" + rand.String(5)
_, stderr, err = K3kcli("policy", "create", "--namespace", namespaceName, policy1Name)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring(`Creating policy '%s'`, policy1Name))
DeferCleanup(func() {
stdout, stderr, err = K3kcli("policy", "delete", policy1Name)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stdout).To(BeEmpty())
Expect(stderr).To(ContainSubstring(`Policy '%s' deleted`, policy1Name))
})
var ns v1.Namespace
err = k8sClient.Get(context.Background(), types.NamespacedName{Name: namespaceName}, &ns)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(ns.Name).To(Equal(namespaceName))
Expect(ns.Labels).To(HaveKeyWithValue(policy.PolicyNameLabelKey, policy1Name))
By("Creating another policy and binding to the same namespace without the --overwrite flag")
policy2Name := "policy-" + rand.String(5)
stdout, stderr, err = K3kcli("policy", "create", "--namespace", namespaceName, policy2Name)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring(`Creating policy '%s'`, policy2Name))
DeferCleanup(func() {
stdout, stderr, err = K3kcli("policy", "delete", policy2Name)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stdout).To(BeEmpty())
Expect(stderr).To(ContainSubstring(`Policy '%s' deleted`, policy2Name))
})
err = k8sClient.Get(context.Background(), types.NamespacedName{Name: namespaceName}, &ns)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(ns.Name).To(Equal(namespaceName))
Expect(ns.Labels).To(HaveKeyWithValue(policy.PolicyNameLabelKey, policy1Name))
By("Forcing the other policy binding with the overwrite flag")
stdout, stderr, err = K3kcli("policy", "create", "--namespace", namespaceName, "--overwrite", policy2Name)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring(`Creating policy '%s'`, policy2Name))
err = k8sClient.Get(context.Background(), types.NamespacedName{Name: namespaceName}, &ns)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(ns.Name).To(Equal(namespaceName))
Expect(ns.Labels).To(HaveKeyWithValue(policy.PolicyNameLabelKey, policy2Name))
})
})
When("trying the cluster update commands", func() {
It("can update a cluster's server count", func() {
var (
stderr string
err error
)
clusterName := "cluster-" + rand.String(5)
namespace := NewNamespace()
clusterNamespace := namespace.Name
DeferCleanup(func() {
DeleteNamespaces(clusterNamespace)
})
// Create the cluster first
_, stderr, err = K3kcli("cluster", "create", "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("You can start using the cluster"))
// Update the cluster server count
_, stderr, err = K3kcli("cluster", "update", "-y", "--servers", "2", "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("Updating cluster"))
// Verify the cluster state was actually updated
var cluster v1beta1.Cluster
err = k8sClient.Get(context.Background(), types.NamespacedName{Name: clusterName, Namespace: clusterNamespace}, &cluster)
Expect(err).To(Not(HaveOccurred()))
Expect(cluster.Spec.Servers).To(Not(BeNil()))
Expect(*cluster.Spec.Servers).To(Equal(int32(2)))
})
It("can update a cluster's version", func() {
var (
stderr string
err error
)
clusterName := "cluster-" + rand.String(5)
namespace := NewNamespace()
clusterNamespace := namespace.Name
DeferCleanup(func() {
DeleteNamespaces(clusterNamespace)
})
// Create the cluster with initial version
_, stderr, err = K3kcli("cluster", "create", "--version", "v1.33.0-k3s1", "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("You can start using the cluster"))
// Update the cluster version
_, stderr, err = K3kcli("cluster", "update", "-y", "--version", k3sVersion, "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("Updating cluster"))
// Verify the cluster state was actually updated
var cluster v1beta1.Cluster
err = k8sClient.Get(context.Background(), types.NamespacedName{Name: clusterName, Namespace: clusterNamespace}, &cluster)
Expect(err).To(Not(HaveOccurred()))
Expect(cluster.Spec.Version).To(Equal(k3sVersion))
})
It("fails to downgrade cluster version", func() {
var (
stderr string
err error
)
clusterName := "cluster-" + rand.String(5)
namespace := NewNamespace()
clusterNamespace := namespace.Name
DeferCleanup(func() {
DeleteNamespaces(clusterNamespace)
})
// Create the cluster with a version
_, stderr, err = K3kcli("cluster", "create", "--version", k3sVersion, "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("You can start using the cluster"))
// Attempt to downgrade should fail
_, stderr, err = K3kcli("cluster", "update", "-y", "--version", "v1.33.0-k3s1", "--namespace", clusterNamespace, clusterName)
Expect(err).To(HaveOccurred())
Expect(stderr).To(ContainSubstring("downgrading cluster version is not supported"))
// Verify the cluster version was NOT changed
var cluster v1beta1.Cluster
err = k8sClient.Get(context.Background(), types.NamespacedName{Name: clusterName, Namespace: clusterNamespace}, &cluster)
Expect(err).To(Not(HaveOccurred()))
Expect(cluster.Spec.Version).To(Equal(k3sVersion))
})
It("fails to update a non-existent cluster", func() {
var (
stderr string
err error
)
// Attempt to update a cluster that doesn't exist
_, stderr, err = K3kcli("cluster", "update", "-y", "--servers", "2", "non-existent-cluster")
Expect(err).To(HaveOccurred())
Expect(stderr).To(ContainSubstring("failed to fetch cluster"))
})
It("can update a cluster's labels", func() {
var (
stderr string
err error
)
clusterName := "cluster-" + rand.String(5)
namespace := NewNamespace()
clusterNamespace := namespace.Name
DeferCleanup(func() {
DeleteNamespaces(clusterNamespace)
})
// Create the cluster first
_, stderr, err = K3kcli("cluster", "create", "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("You can start using the cluster"))
// Update the cluster with labels
_, stderr, err = K3kcli("cluster", "update", "-y", "--labels", "env=test", "--labels", "team=dev", "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("Updating cluster"))
// Verify the cluster labels were actually updated
var cluster v1beta1.Cluster
err = k8sClient.Get(context.Background(), types.NamespacedName{Name: clusterName, Namespace: clusterNamespace}, &cluster)
Expect(err).To(Not(HaveOccurred()))
Expect(cluster.Labels).To(HaveKeyWithValue("env", "test"))
Expect(cluster.Labels).To(HaveKeyWithValue("team", "dev"))
})
It("can update a cluster's annotations", func() {
var (
stderr string
err error
)
clusterName := "cluster-" + rand.String(5)
namespace := NewNamespace()
clusterNamespace := namespace.Name
DeferCleanup(func() {
DeleteNamespaces(clusterNamespace)
})
// Create the cluster first
_, stderr, err = K3kcli("cluster", "create", "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("You can start using the cluster"))
// Update the cluster with annotations
_, stderr, err = K3kcli("cluster", "update", "-y", "--annotations", "description=test-cluster", "--annotations", "owner=qa-team", "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("Updating cluster"))
// Verify the cluster annotations were actually updated
var cluster v1beta1.Cluster
err = k8sClient.Get(context.Background(), types.NamespacedName{Name: clusterName, Namespace: clusterNamespace}, &cluster)
Expect(err).To(Not(HaveOccurred()))
Expect(cluster.Annotations).To(HaveKeyWithValue("description", "test-cluster"))
Expect(cluster.Annotations).To(HaveKeyWithValue("owner", "qa-team"))
})
})
@@ -128,21 +387,22 @@ var _ = When("using the k3kcli", Label("cli"), func() {
)
clusterName := "cluster-" + rand.String(5)
clusterNamespace := "k3k-" + clusterName
namespace := NewNamespace()
clusterNamespace := namespace.Name
DeferCleanup(func() {
DeleteNamespaces(clusterNamespace)
})
_, stderr, err = K3kcli("cluster", "create", clusterName)
_, stderr, err = K3kcli("cluster", "create", "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("You can start using the cluster"))
_, stderr, err = K3kcli("kubeconfig", "generate", "--name", clusterName)
_, stderr, err = K3kcli("kubeconfig", "generate", "--namespace", clusterNamespace, "--name", clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring("You can start using the cluster"))
_, stderr, err = K3kcli("cluster", "delete", clusterName)
_, stderr, err = K3kcli("cluster", "delete", "--namespace", clusterNamespace, clusterName)
Expect(err).To(Not(HaveOccurred()), string(stderr))
Expect(stderr).To(ContainSubstring(`Deleting '%s' cluster in namespace '%s'`, clusterName, clusterNamespace))
})

View File

@@ -0,0 +1,235 @@
package k3k_test
import (
"context"
"os"
"time"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
const (
addonsTestsLabel = "addons"
addonsSecretName = "k3s-addons"
secretMountManifestMountPath = "/var/lib/rancher/k3s/server/manifests/nginx.yaml"
addonManifestMountPath = "/var/lib/rancher/k3s/server/manifests/k3s-addons/nginx.yaml"
)
var _ = When("a cluster with secretMounts configuration is used to load addons", Label("e2e"), Label(addonsTestsLabel), func() {
var virtualCluster *VirtualCluster
BeforeEach(func() {
ctx := context.Background()
namespace := NewNamespace()
// Create the addon secret
err := createAddonSecret(ctx, namespace.Name)
Expect(err).ToNot(HaveOccurred())
DeferCleanup(func() {
DeleteNamespaces(namespace.Name)
})
cluster := NewCluster(namespace.Name)
cluster.Spec.SecretMounts = []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: addonsSecretName,
},
MountPath: secretMountManifestMountPath,
SubPath: "nginx.yaml",
},
}
CreateCluster(cluster)
virtualClient, restConfig := NewVirtualK8sClientAndConfig(cluster)
virtualCluster = &VirtualCluster{
Cluster: cluster,
RestConfig: restConfig,
Client: virtualClient,
}
})
It("will load the addon manifest in server pod", func() {
ctx := context.Background()
serverPods := listServerPods(ctx, virtualCluster)
Expect(len(serverPods)).To(Equal(1))
serverPod := serverPods[0]
addonContent, err := readFileWithinPod(ctx, k8s, restcfg, serverPod.Name, serverPod.Namespace, secretMountManifestMountPath)
Expect(err).To(Not(HaveOccurred()))
addonTestFile, err := os.ReadFile("testdata/addons/nginx.yaml")
Expect(err).To(Not(HaveOccurred()))
Expect(addonContent).To(Equal(addonTestFile))
})
It("will deploy the addon pod in the virtual cluster", func() {
ctx := context.Background()
Eventually(func(g Gomega) {
nginxPod, err := virtualCluster.Client.CoreV1().Pods("default").Get(ctx, "nginx-addon", metav1.GetOptions{})
g.Expect(err).To(Not(HaveOccurred()))
g.Expect(nginxPod.Status.Phase).To(Equal(v1.PodRunning))
}).
WithTimeout(time.Minute * 3).
WithPolling(time.Second * 5).
Should(Succeed())
})
})
var _ = When("a cluster with addon configuration is used with addons secret in the same namespace", Label("e2e"), Label(addonsTestsLabel), func() {
var virtualCluster *VirtualCluster
BeforeEach(func() {
ctx := context.Background()
namespace := NewNamespace()
// Create the addon secret
err := createAddonSecret(ctx, namespace.Name)
Expect(err).ToNot(HaveOccurred())
DeferCleanup(func() {
DeleteNamespaces(namespace.Name)
})
cluster := NewCluster(namespace.Name)
cluster.Spec.Addons = []v1beta1.Addon{
{
SecretNamespace: namespace.Name,
SecretRef: addonsSecretName,
},
}
CreateCluster(cluster)
virtualClient, restConfig := NewVirtualK8sClientAndConfig(cluster)
virtualCluster = &VirtualCluster{
Cluster: cluster,
RestConfig: restConfig,
Client: virtualClient,
}
})
It("will load the addon manifest in the server pod and deploys the pod", func() {
ctx := context.Background()
serverPods := listServerPods(ctx, virtualCluster)
Expect(len(serverPods)).To(Equal(1))
serverPod := serverPods[0]
addonContent, err := readFileWithinPod(ctx, k8s, restcfg, serverPod.Name, serverPod.Namespace, addonManifestMountPath)
Expect(err).To(Not(HaveOccurred()))
addonTestFile, err := os.ReadFile("testdata/addons/nginx.yaml")
Expect(err).To(Not(HaveOccurred()))
Expect(addonContent).To(Equal(addonTestFile))
Eventually(func(g Gomega) {
nginxPod, err := virtualCluster.Client.CoreV1().Pods("default").Get(ctx, "nginx-addon", metav1.GetOptions{})
g.Expect(err).To(Not(HaveOccurred()))
g.Expect(nginxPod.Status.Phase).To(Equal(v1.PodRunning))
}).
WithTimeout(time.Minute * 3).
WithPolling(time.Second * 5).
Should(Succeed())
})
})
var _ = When("a cluster with addon configuration is used with addons secret in the different namespace", Label("e2e"), Label(addonsTestsLabel), func() {
var virtualCluster *VirtualCluster
BeforeEach(func() {
ctx := context.Background()
namespace := NewNamespace()
secretNamespace := NewNamespace()
// Create the addon secret
err := createAddonSecret(ctx, secretNamespace.Name)
Expect(err).ToNot(HaveOccurred())
DeferCleanup(func() {
DeleteNamespaces(namespace.Name, secretNamespace.Name)
})
cluster := NewCluster(namespace.Name)
cluster.Spec.Addons = []v1beta1.Addon{
{
SecretNamespace: secretNamespace.Name,
SecretRef: addonsSecretName,
},
}
CreateCluster(cluster)
virtualClient, restConfig := NewVirtualK8sClientAndConfig(cluster)
virtualCluster = &VirtualCluster{
Cluster: cluster,
RestConfig: restConfig,
Client: virtualClient,
}
})
It("will load the addon manifest in server pod and deploys the pod", func() {
ctx := context.Background()
serverPods := listServerPods(ctx, virtualCluster)
Expect(len(serverPods)).To(Equal(1))
serverPod := serverPods[0]
addonContent, err := readFileWithinPod(ctx, k8s, restcfg, serverPod.Name, serverPod.Namespace, addonManifestMountPath)
Expect(err).To(Not(HaveOccurred()))
addonTestFile, err := os.ReadFile("testdata/addons/nginx.yaml")
Expect(err).To(Not(HaveOccurred()))
Expect(addonContent).To(Equal(addonTestFile))
Eventually(func(g Gomega) {
nginxPod, err := virtualCluster.Client.CoreV1().Pods("default").Get(ctx, "nginx-addon", metav1.GetOptions{})
g.Expect(err).To(Not(HaveOccurred()))
g.Expect(nginxPod.Status.Phase).To(Equal(v1.PodRunning))
}).
WithTimeout(time.Minute * 3).
WithPolling(time.Second * 5).
Should(Succeed())
})
})
func createAddonSecret(ctx context.Context, namespace string) error {
addonContent, err := os.ReadFile("testdata/addons/nginx.yaml")
if err != nil {
return err
}
secret := &v1.Secret{
TypeMeta: metav1.TypeMeta{
Kind: "Secret",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: addonsSecretName,
Namespace: namespace,
},
Data: map[string][]byte{
"nginx.yaml": addonContent,
},
}
return k8sClient.Create(ctx, secret)
}

View File

@@ -5,8 +5,6 @@ import (
"os"
"strings"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
. "github.com/onsi/ginkgo/v2"
@@ -98,12 +96,10 @@ var _ = When("a cluster with custom certificates is installed with individual ce
It("will load the custom certs in the server pod", func() {
ctx := context.Background()
labelSelector := "cluster=" + virtualCluster.Cluster.Name + ",role=server"
serverPods, err := k8s.CoreV1().Pods(virtualCluster.Cluster.Namespace).List(ctx, v1.ListOptions{LabelSelector: labelSelector})
Expect(err).To(Not(HaveOccurred()))
serverPods := listServerPods(ctx, virtualCluster)
Expect(len(serverPods.Items)).To(Equal(1))
serverPod := serverPods.Items[0]
Expect(len(serverPods)).To(Equal(1))
serverPod := serverPods[0]
// check server-ca.crt
serverCACrtPath := "/var/lib/rancher/k3s/server/tls/server-ca.crt"

View File

@@ -59,12 +59,10 @@ var _ = When("an ephemeral cluster is installed", Label(e2eTestLabel), Label(per
_, err := virtualCluster.Client.ServerVersion()
Expect(err).To(Not(HaveOccurred()))
labelSelector := "cluster=" + virtualCluster.Cluster.Name + ",role=server"
serverPods, err := k8s.CoreV1().Pods(virtualCluster.Cluster.Namespace).List(ctx, v1.ListOptions{LabelSelector: labelSelector})
Expect(err).To(Not(HaveOccurred()))
serverPods := listServerPods(ctx, virtualCluster)
Expect(len(serverPods.Items)).To(Equal(1))
serverPod := serverPods.Items[0]
Expect(len(serverPods)).To(Equal(1))
serverPod := serverPods[0]
GinkgoWriter.Printf("deleting pod %s/%s\n", serverPod.Namespace, serverPod.Name)
@@ -75,10 +73,9 @@ var _ = When("an ephemeral cluster is installed", Label(e2eTestLabel), Label(per
// check that the server pods restarted
Eventually(func() any {
serverPods, err = k8s.CoreV1().Pods(virtualCluster.Cluster.Namespace).List(ctx, v1.ListOptions{LabelSelector: labelSelector})
Expect(err).To(Not(HaveOccurred()))
Expect(len(serverPods.Items)).To(Equal(1))
return serverPods.Items[0].DeletionTimestamp
serverPods := listServerPods(ctx, virtualCluster)
Expect(len(serverPods)).To(Equal(1))
return serverPods[0].DeletionTimestamp
}).
WithTimeout(time.Minute).
WithPolling(time.Second * 5).

View File

@@ -2,8 +2,12 @@ package k3k_test
import (
"context"
"fmt"
"os"
"os/exec"
"time"
"k8s.io/client-go/kubernetes"
"k8s.io/kubernetes/pkg/api/v1/pod"
v1 "k8s.io/api/core/v1"
@@ -15,162 +19,263 @@ import (
. "github.com/onsi/gomega"
)
var _ = When("a cluster creates a pod with an invalid configuration", Label(e2eTestLabel), func() {
var (
virtualCluster *VirtualCluster
virtualPod *v1.Pod
)
var _ = Context("In a shared cluster", Label(e2eTestLabel), Ordered, func() {
var virtualCluster *VirtualCluster
BeforeEach(func() {
BeforeAll(func() {
virtualCluster = NewVirtualCluster()
DeferCleanup(func() {
DeleteNamespaces(virtualCluster.Cluster.Namespace)
})
})
p := &v1.Pod{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "nginx-",
Namespace: "default",
Labels: map[string]string{
"name": "var-expansion-test",
When("creating a Pod with an invalid configuration", func() {
var virtualPod *v1.Pod
BeforeEach(func() {
p := &v1.Pod{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "nginx-",
Namespace: "default",
Labels: map[string]string{
"name": "var-expansion-test",
},
Annotations: map[string]string{
"notmysubpath": "mypath",
},
},
Annotations: map[string]string{
"notmysubpath": "mypath",
},
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{
Name: "nginx",
Image: "nginx",
Env: []v1.EnvVar{
{
Name: "POD_NAME",
Value: "nginx",
},
{
Name: "ANNOTATION",
ValueFrom: &v1.EnvVarSource{
FieldRef: &v1.ObjectFieldSelector{
FieldPath: "metadata.annotations['mysubpath']",
Spec: v1.PodSpec{
Containers: []v1.Container{
{
Name: "nginx",
Image: "nginx",
Env: []v1.EnvVar{
{
Name: "POD_NAME",
ValueFrom: &v1.EnvVarSource{
FieldRef: &v1.ObjectFieldSelector{
FieldPath: "metadata.name",
},
},
},
{
Name: "ANNOTATION",
ValueFrom: &v1.EnvVarSource{
FieldRef: &v1.ObjectFieldSelector{
FieldPath: "metadata.annotations['mysubpath']",
},
},
},
},
},
VolumeMounts: []v1.VolumeMount{
{
Name: "workdir",
MountPath: "/volume_mount",
},
{
Name: "workdir",
MountPath: "/subpath_mount",
SubPathExpr: "$(ANNOTATION)/$(POD_NAME)",
VolumeMounts: []v1.VolumeMount{
{
Name: "workdir",
MountPath: "/volume_mount",
},
{
Name: "workdir",
MountPath: "/subpath_mount",
SubPathExpr: "$(ANNOTATION)/$(POD_NAME)",
},
},
},
},
Volumes: []v1.Volume{{
Name: "workdir",
VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
}},
},
Volumes: []v1.Volume{{
Name: "workdir",
VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
}},
},
}
ctx := context.Background()
var err error
virtualPod, err = virtualCluster.Client.CoreV1().Pods(p.Namespace).Create(ctx, p, metav1.CreateOptions{})
Expect(err).To(Not(HaveOccurred()))
})
It("should be in Pending status with the CreateContainerConfigError until we fix the annotation", func() {
ctx := context.Background()
By("Checking the container status of the Pod in the Virtual Cluster")
Eventually(func(g Gomega) {
pod, err := virtualCluster.Client.CoreV1().Pods(virtualPod.Namespace).Get(ctx, virtualPod.Name, metav1.GetOptions{})
g.Expect(err).NotTo(HaveOccurred())
g.Expect(pod.Status.Phase).To(Equal(v1.PodPending))
envVars := pod.Spec.Containers[0].Env
g.Expect(envVars).NotTo(BeEmpty())
var found bool
for _, envVar := range envVars {
if envVar.Name == "POD_NAME" {
found = true
g.Expect(envVars[0].ValueFrom).NotTo(BeNil())
g.Expect(envVars[0].ValueFrom.FieldRef).NotTo(BeNil())
g.Expect(envVars[0].ValueFrom.FieldRef.FieldPath).To(Equal("metadata.name"))
break
}
}
g.Expect(found).To(BeTrue())
containerStatuses := pod.Status.ContainerStatuses
g.Expect(containerStatuses).To(HaveLen(1))
waitingState := containerStatuses[0].State.Waiting
g.Expect(waitingState).NotTo(BeNil())
g.Expect(waitingState.Reason).To(Equal("CreateContainerConfigError"))
}).
WithPolling(time.Second).
WithTimeout(time.Minute).
Should(Succeed())
By("Checking the container status of the Pod in the Host Cluster")
Eventually(func(g Gomega) {
translator := translate.NewHostTranslator(virtualCluster.Cluster)
hostPodName := translator.NamespacedName(virtualPod)
pod, err := k8s.CoreV1().Pods(hostPodName.Namespace).Get(ctx, hostPodName.Name, metav1.GetOptions{})
g.Expect(err).NotTo(HaveOccurred())
g.Expect(pod.Status.Phase).To(Equal(v1.PodPending))
envVars := pod.Spec.Containers[0].Env
g.Expect(envVars).NotTo(BeEmpty())
var found bool
for _, envVar := range envVars {
if envVar.Name == "POD_NAME" {
found = true
g.Expect(envVar.ValueFrom).To(BeNil())
g.Expect(envVar.Value).To(Equal(virtualPod.Name))
break
}
}
g.Expect(found).To(BeTrue())
containerStatuses := pod.Status.ContainerStatuses
g.Expect(containerStatuses).To(HaveLen(1))
waitingState := containerStatuses[0].State.Waiting
g.Expect(waitingState).NotTo(BeNil())
g.Expect(waitingState.Reason).To(Equal("CreateContainerConfigError"))
}).
WithPolling(time.Second).
WithTimeout(time.Minute).
Should(Succeed())
By("Fixing the annotation")
var err error
virtualPod, err = virtualCluster.Client.CoreV1().Pods(virtualPod.Namespace).Get(ctx, virtualPod.Name, metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
virtualPod.Annotations["mysubpath"] = virtualPod.Annotations["notmysubpath"]
delete(virtualPod.Annotations, "notmysubpath")
virtualPod, err = virtualCluster.Client.CoreV1().Pods(virtualPod.Namespace).Update(ctx, virtualPod, metav1.UpdateOptions{})
Expect(err).NotTo(HaveOccurred())
By("Checking the status of the Pod in the Virtual Cluster")
Eventually(func(g Gomega) {
vPod, err := virtualCluster.Client.CoreV1().Pods(virtualPod.Namespace).Get(ctx, virtualPod.Name, metav1.GetOptions{})
g.Expect(err).NotTo(HaveOccurred())
_, cond := pod.GetPodCondition(&vPod.Status, v1.PodReady)
g.Expect(cond).NotTo(BeNil())
g.Expect(cond.Status).To(BeEquivalentTo(metav1.ConditionTrue))
}).
WithPolling(time.Second).
WithTimeout(time.Minute).
Should(Succeed())
By("Checking the status of the Pod in the Host Cluster")
Eventually(func(g Gomega) {
translator := translate.NewHostTranslator(virtualCluster.Cluster)
hostPodName := translator.NamespacedName(virtualPod)
hPod, err := k8s.CoreV1().Pods(hostPodName.Namespace).Get(ctx, hostPodName.Name, metav1.GetOptions{})
g.Expect(err).NotTo(HaveOccurred())
_, cond := pod.GetPodCondition(&hPod.Status, v1.PodReady)
g.Expect(cond).NotTo(BeNil())
g.Expect(cond.Status).To(BeEquivalentTo(metav1.ConditionTrue))
}).
WithPolling(time.Second).
WithTimeout(time.Minute).
Should(Succeed())
})
})
When("installing the nginx-ingress controller", func() {
BeforeAll(func() {
By("installing the nginx-ingress controller")
Expect(os.WriteFile("vk-kubeconfig.yaml", virtualCluster.Kubeconfig, 0o644)).To(Succeed())
ingressNginx := "testdata/resources/ingress-nginx-v1.14.1.yaml"
cmd := exec.Command("kubectl", "apply", "--kubeconfig", "vk-kubeconfig.yaml", "-f", ingressNginx)
output, err := cmd.CombinedOutput()
fmt.Println("#### output", "\n", string(output))
Expect(err).NotTo(HaveOccurred(), string(output))
})
expectJobToSucceed := func(client *kubernetes.Clientset, namespace string, label string) {
GinkgoHelper()
Eventually(func(g Gomega) {
listOpts := metav1.ListOptions{LabelSelector: "job-name=" + label}
pods, err := client.CoreV1().Pods(namespace).List(context.Background(), listOpts)
g.Expect(err).NotTo(HaveOccurred())
g.Expect(pods.Items).NotTo(BeEmpty())
g.Expect(pods.Items[0].Status.Phase).To(Equal(v1.PodSucceeded))
}).
WithPolling(time.Second).
WithTimeout(2 * time.Minute).
Should(Succeed())
}
ctx := context.Background()
var err error
It("should complete the ingress-nginx-admission-create Job in the host cluster", func() {
expectJobToSucceed(k8s, virtualCluster.Cluster.Namespace, "ingress-nginx-admission-create")
})
virtualPod, err = virtualCluster.Client.CoreV1().Pods(p.Namespace).Create(ctx, p, metav1.CreateOptions{})
Expect(err).To(Not(HaveOccurred()))
})
It("should complete the ingress-nginx-admission-create Job in the virtual cluster", func() {
expectJobToSucceed(virtualCluster.Client, "ingress-nginx", "ingress-nginx-admission-create")
})
It("should be in Pending status with the CreateContainerConfigError until we fix the annotation", func() {
ctx := context.Background()
It("should complete the ingress-nginx-admission-patch Job in the host cluster", func() {
expectJobToSucceed(k8s, virtualCluster.Cluster.Namespace, "ingress-nginx-admission-patch")
})
By("Checking the container status of the Pod in the Virtual Cluster")
It("should complete the ingress-nginx-admission-patch Job in the virtual cluster", func() {
expectJobToSucceed(virtualCluster.Client, "ingress-nginx", "ingress-nginx-admission-patch")
})
Eventually(func(g Gomega) {
pod, err := virtualCluster.Client.CoreV1().Pods(virtualPod.Namespace).Get(ctx, virtualPod.Name, metav1.GetOptions{})
g.Expect(err).NotTo(HaveOccurred())
It("should run the ingress-nginx controller", func() {
Eventually(func(g Gomega) {
ctx := context.Background()
deployment, err := virtualCluster.Client.AppsV1().Deployments("ingress-nginx").Get(ctx, "ingress-nginx-controller", metav1.GetOptions{})
g.Expect(err).NotTo(HaveOccurred())
g.Expect(pod.Status.Phase).To(Equal(v1.PodPending))
desiredReplicas := *deployment.Spec.Replicas
containerStatuses := pod.Status.ContainerStatuses
g.Expect(containerStatuses).To(HaveLen(1))
waitingState := containerStatuses[0].State.Waiting
g.Expect(waitingState).NotTo(BeNil())
g.Expect(waitingState.Reason).To(Equal("CreateContainerConfigError"))
}).
WithPolling(time.Second).
WithTimeout(time.Minute).
Should(Succeed())
By("Checking the container status of the Pod in the Host Cluster")
Eventually(func(g Gomega) {
translator := translate.NewHostTranslator(virtualCluster.Cluster)
hostPodName := translator.NamespacedName(virtualPod)
pod, err := k8s.CoreV1().Pods(hostPodName.Namespace).Get(ctx, hostPodName.Name, metav1.GetOptions{})
g.Expect(err).NotTo(HaveOccurred())
g.Expect(pod.Status.Phase).To(Equal(v1.PodPending))
containerStatuses := pod.Status.ContainerStatuses
g.Expect(containerStatuses).To(HaveLen(1))
waitingState := containerStatuses[0].State.Waiting
g.Expect(waitingState).NotTo(BeNil())
g.Expect(waitingState.Reason).To(Equal("CreateContainerConfigError"))
}).
WithPolling(time.Second).
WithTimeout(time.Minute).
Should(Succeed())
By("Fixing the annotation")
var err error
virtualPod, err = virtualCluster.Client.CoreV1().Pods(virtualPod.Namespace).Get(ctx, virtualPod.Name, metav1.GetOptions{})
Expect(err).NotTo(HaveOccurred())
virtualPod.Annotations["mysubpath"] = virtualPod.Annotations["notmysubpath"]
delete(virtualPod.Annotations, "notmysubpath")
virtualPod, err = virtualCluster.Client.CoreV1().Pods(virtualPod.Namespace).Update(ctx, virtualPod, metav1.UpdateOptions{})
Expect(err).NotTo(HaveOccurred())
By("Checking the status of the Pod in the Virtual Cluster")
Eventually(func(g Gomega) {
vPod, err := virtualCluster.Client.CoreV1().Pods(virtualPod.Namespace).Get(ctx, virtualPod.Name, metav1.GetOptions{})
g.Expect(err).NotTo(HaveOccurred())
_, cond := pod.GetPodCondition(&vPod.Status, v1.PodReady)
g.Expect(cond).NotTo(BeNil())
g.Expect(cond.Status).To(BeEquivalentTo(metav1.ConditionTrue))
}).
WithPolling(time.Second).
WithTimeout(time.Minute).
Should(Succeed())
By("Checking the status of the Pod in the Host Cluster")
Eventually(func(g Gomega) {
translator := translate.NewHostTranslator(virtualCluster.Cluster)
hostPodName := translator.NamespacedName(virtualPod)
hPod, err := k8s.CoreV1().Pods(hostPodName.Namespace).Get(ctx, hostPodName.Name, metav1.GetOptions{})
g.Expect(err).NotTo(HaveOccurred())
_, cond := pod.GetPodCondition(&hPod.Status, v1.PodReady)
g.Expect(cond).NotTo(BeNil())
g.Expect(cond.Status).To(BeEquivalentTo(metav1.ConditionTrue))
}).
WithPolling(time.Second).
WithTimeout(time.Minute).
Should(Succeed())
status := deployment.Status
g.Expect(status.ObservedGeneration).To(BeNumerically(">=", deployment.Generation))
g.Expect(status.UpdatedReplicas).To(BeNumerically("==", desiredReplicas))
g.Expect(status.AvailableReplicas).To(BeNumerically("==", desiredReplicas))
})
})
})
})

View File

@@ -0,0 +1,160 @@
package k3k_test
import (
"context"
"os"
"time"
"k8s.io/kubernetes/pkg/api/v1/pod"
"sigs.k8s.io/controller-runtime/pkg/client"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/rancher/k3k/pkg/apis/k3k.io/v1beta1"
"github.com/rancher/k3k/pkg/controller/policy"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = When("a cluster with private registry configuration is used", Label("e2e"), Label(registryTestsLabel), func() {
var virtualCluster *VirtualCluster
BeforeEach(func() {
ctx := context.Background()
vcp := &v1beta1.VirtualClusterPolicy{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "policy-",
},
Spec: v1beta1.VirtualClusterPolicySpec{
AllowedMode: v1beta1.VirtualClusterMode,
DisableNetworkPolicy: true,
},
}
Expect(k8sClient.Create(ctx, vcp)).To(Succeed())
namespace := NewNamespace()
err := k8sClient.Get(ctx, client.ObjectKeyFromObject(namespace), namespace)
Expect(err).To(Not(HaveOccurred()))
namespace.Labels = map[string]string{
policy.PolicyNameLabelKey: vcp.Name,
}
Expect(k8sClient.Update(ctx, namespace)).To(Succeed())
DeferCleanup(func() {
DeleteNamespaces(namespace.Name)
Expect(k8sClient.Delete(ctx, vcp)).To(Succeed())
})
err = privateRegistry(ctx, namespace.Name)
Expect(err).ToNot(HaveOccurred())
cluster := NewCluster(namespace.Name)
// configure the cluster with the private registry secrets using SecretMounts
// Using subPath allows mounting individual files while keeping parent directories writable
cluster.Spec.SecretMounts = []v1beta1.SecretMount{
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "k3s-registry-config",
},
MountPath: "/etc/rancher/k3s/registries.yaml",
SubPath: "registries.yaml",
},
{
SecretVolumeSource: v1.SecretVolumeSource{
SecretName: "private-registry-ca-cert",
},
MountPath: "/etc/rancher/k3s/tls/ca.crt",
SubPath: "tls.crt",
},
}
cluster.Spec.Mode = v1beta1.VirtualClusterMode
// airgap the k3k-server pod
err = buildRegistryNetPolicy(ctx, cluster.Namespace)
Expect(err).ToNot(HaveOccurred())
CreateCluster(cluster)
client, restConfig := NewVirtualK8sClientAndConfig(cluster)
virtualCluster = &VirtualCluster{
Cluster: cluster,
RestConfig: restConfig,
Client: client,
}
})
It("will be load the registries.yaml and crts in server pod", func() {
ctx := context.Background()
serverPods := listServerPods(ctx, virtualCluster)
Expect(len(serverPods)).To(Equal(1))
serverPod := serverPods[0]
// check registries.yaml
registriesConfigPath := "/etc/rancher/k3s/registries.yaml"
registriesConfig, err := readFileWithinPod(ctx, k8s, restcfg, serverPod.Name, serverPod.Namespace, registriesConfigPath)
Expect(err).To(Not(HaveOccurred()))
registriesConfigTestFile, err := os.ReadFile("testdata/registry/registries.yaml")
Expect(err).To(Not(HaveOccurred()))
Expect(registriesConfig).To(Equal(registriesConfigTestFile))
// check ca.crt
CACrtPath := "/etc/rancher/k3s/tls/ca.crt"
CACrt, err := readFileWithinPod(ctx, k8s, restcfg, serverPod.Name, serverPod.Namespace, CACrtPath)
Expect(err).To(Not(HaveOccurred()))
CACrtTestFile, err := os.ReadFile("testdata/registry/certs/ca.crt")
Expect(err).To(Not(HaveOccurred()))
Expect(CACrt).To(Equal(CACrtTestFile))
})
It("will only pull images from mirrored docker.io registry", func() {
ctx := context.Background()
// make sure that any pod using the docker.io mirror works
virtualCluster.NewNginxPod("")
// creating a pod with an image that uses any registry other than docker.io should fail
// for example public.ecr.aws/docker/library/alpine:latest
alpinePod := &v1.Pod{
ObjectMeta: metav1.ObjectMeta{
GenerateName: "alpine-",
Namespace: "default",
},
Spec: v1.PodSpec{
Containers: []v1.Container{{
Name: "alpine",
Image: "public.ecr.aws/docker/library/alpine:latest",
}},
},
}
By("Creating Alpine Pod and making sure its failing to start")
var err error
alpinePod, err = virtualCluster.Client.CoreV1().Pods(alpinePod.Namespace).Create(ctx, alpinePod, metav1.CreateOptions{})
Expect(err).To(Not(HaveOccurred()))
// check that the alpine Pod is failing to pull the image
Eventually(func(g Gomega) {
alpinePod, err = virtualCluster.Client.CoreV1().Pods(alpinePod.Namespace).Get(ctx, alpinePod.Name, metav1.GetOptions{})
g.Expect(err).To(Not(HaveOccurred()))
status, _ := pod.GetContainerStatus(alpinePod.Status.ContainerStatuses, "alpine")
state := status.State.Waiting
g.Expect(state).NotTo(BeNil())
g.Expect(state.Reason).To(BeEquivalentTo("ImagePullBackOff"))
}).
WithTimeout(time.Minute).
WithPolling(time.Second).
Should(Succeed())
})
})
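The testdata registries.yaml itself is not shown here. A file of the shape this test depends on — docker.io mirrored through a private endpoint whose CA is the certificate mounted above — would follow the standard k3s registries.yaml layout (the registry host and endpoint below are hypothetical):

mirrors:
  docker.io:
    endpoint:
      - "https://registry.internal.example:5000"   # hypothetical mirror endpoint
configs:
  "registry.internal.example:5000":
    tls:
      ca_file: /etc/rancher/k3s/tls/ca.crt   # the CA mounted via SecretMounts above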

View File

@@ -127,6 +127,11 @@ var _ = When("a shared mode cluster update its envs", Label(e2eTestLabel), Label
serverPods := listServerPods(ctx, virtualCluster)
g.Expect(len(serverPods)).To(Equal(1))
serverPod := serverPods[0]
_, cond := pod.GetPodCondition(&serverPod.Status, v1.PodReady)
g.Expect(cond).NotTo(BeNil())
g.Expect(cond.Status).To(BeEquivalentTo(metav1.ConditionTrue))
serverEnv1, ok := getEnv(&serverPods[0], "TEST_SERVER_ENV_1")
g.Expect(ok).To(BeTrue())
g.Expect(serverEnv1).To(Equal("upgraded"))
@@ -145,6 +150,11 @@ var _ = When("a shared mode cluster update its envs", Label(e2eTestLabel), Label
aPods := listAgentPods(ctx, virtualCluster)
g.Expect(aPods).To(HaveLen(len(nodes.Items)))
agentPod := aPods[0]
_, cond = pod.GetPodCondition(&agentPod.Status, v1.PodReady)
g.Expect(cond).NotTo(BeNil())
g.Expect(cond.Status).To(BeEquivalentTo(metav1.ConditionTrue))
agentEnv1, ok := getEnv(&aPods[0], "TEST_AGENT_ENV_1")
g.Expect(ok).To(BeTrue())
g.Expect(agentEnv1).To(Equal("upgraded"))
@@ -213,6 +223,11 @@ var _ = When("a shared mode cluster update its server args", Label(e2eTestLabel)
sPods := listServerPods(ctx, virtualCluster)
g.Expect(len(sPods)).To(Equal(1))
serverPod := sPods[0]
_, cond := pod.GetPodCondition(&serverPod.Status, v1.PodReady)
g.Expect(cond).NotTo(BeNil())
g.Expect(cond.Status).To(BeEquivalentTo(metav1.ConditionTrue))
g.Expect(isArgFound(&sPods[0], "--node-label=test_server=upgraded")).To(BeTrue())
}).
WithPolling(time.Second * 2).
@@ -330,6 +345,11 @@ var _ = When("a virtual mode cluster update its envs", Label(e2eTestLabel), Labe
serverPods := listServerPods(ctx, virtualCluster)
g.Expect(len(serverPods)).To(Equal(1))
serverPod := serverPods[0]
_, cond := pod.GetPodCondition(&serverPod.Status, v1.PodReady)
g.Expect(cond).NotTo(BeNil())
g.Expect(cond.Status).To(BeEquivalentTo(metav1.ConditionTrue))
serverEnv1, ok := getEnv(&serverPods[0], "TEST_SERVER_ENV_1")
g.Expect(ok).To(BeTrue())
g.Expect(serverEnv1).To(Equal("upgraded"))
@@ -345,6 +365,11 @@ var _ = When("a virtual mode cluster update its envs", Label(e2eTestLabel), Labe
aPods := listAgentPods(ctx, virtualCluster)
g.Expect(len(aPods)).To(Equal(1))
agentPod := aPods[0]
_, cond = pod.GetPodCondition(&agentPod.Status, v1.PodReady)
g.Expect(cond).NotTo(BeNil())
g.Expect(cond.Status).To(BeEquivalentTo(metav1.ConditionTrue))
agentEnv1, ok := getEnv(&aPods[0], "TEST_AGENT_ENV_1")
g.Expect(ok).To(BeTrue())
g.Expect(agentEnv1).To(Equal("upgraded"))
@@ -416,6 +441,11 @@ var _ = When("a virtual mode cluster update its server args", Label(e2eTestLabel
sPods := listServerPods(ctx, virtualCluster)
g.Expect(len(sPods)).To(Equal(1))
serverPod := sPods[0]
_, cond := pod.GetPodCondition(&serverPod.Status, v1.PodReady)
g.Expect(cond).NotTo(BeNil())
g.Expect(cond.Status).To(BeEquivalentTo(metav1.ConditionTrue))
g.Expect(isArgFound(&sPods[0], "--node-label=test_server=upgraded")).To(BeTrue())
}).
WithPolling(time.Second * 2).
@@ -429,6 +459,7 @@ var _ = When("a shared mode cluster update its version", Label(e2eTestLabel), La
virtualCluster *VirtualCluster
nginxPod *v1.Pod
)
BeforeEach(func() {
ctx := context.Background()
namespace := NewNamespace()
@@ -439,8 +470,8 @@ var _ = When("a shared mode cluster update its version", Label(e2eTestLabel), La
cluster := NewCluster(namespace.Name)
// Add initial version
cluster.Spec.Version = "v1.31.13-k3s1"
// Add initial old version
cluster.Spec.Version = "v1.33.0-k3s1"
// need to enable persistence for this
cluster.Spec.Persistence = v1beta1.PersistenceConfig{
@@ -477,7 +508,7 @@ var _ = When("a shared mode cluster update its version", Label(e2eTestLabel), La
Expect(err).NotTo(HaveOccurred())
// update cluster version
cluster.Spec.Version = "v1.32.8-k3s1"
cluster.Spec.Version = k3sVersion
err = k8sClient.Update(ctx, &cluster)
Expect(err).NotTo(HaveOccurred())
@@ -515,6 +546,7 @@ var _ = When("a virtual mode cluster update its version", Label(e2eTestLabel), L
virtualCluster *VirtualCluster
nginxPod *v1.Pod
)
BeforeEach(func() {
ctx := context.Background()
namespace := NewNamespace()
@@ -525,8 +557,8 @@ var _ = When("a virtual mode cluster update its version", Label(e2eTestLabel), L
cluster := NewCluster(namespace.Name)
// Add initial version
cluster.Spec.Version = "v1.31.13-k3s1"
// Add initial old version
cluster.Spec.Version = "v1.33.0-k3s1"
cluster.Spec.Mode = v1beta1.VirtualClusterMode
cluster.Spec.Agents = ptr.To[int32](1)
@@ -559,6 +591,7 @@ var _ = When("a virtual mode cluster update its version", Label(e2eTestLabel), L
nginxPod, _ = virtualCluster.NewNginxPod("")
})
It("will update server version when version spec is updated", func() {
var cluster v1beta1.Cluster
ctx := context.Background()
@@ -567,7 +600,7 @@ var _ = When("a virtual mode cluster update its version", Label(e2eTestLabel), L
Expect(err).NotTo(HaveOccurred())
// update cluster version
cluster.Spec.Version = "v1.32.8-k3s1"
cluster.Spec.Version = k3sVersion
err = k8sClient.Update(ctx, &cluster)
Expect(err).NotTo(HaveOccurred())


@@ -36,6 +36,7 @@ type VirtualCluster struct {
Cluster *v1beta1.Cluster
RestConfig *rest.Config
Client *kubernetes.Clientset
Kubeconfig []byte
}
func NewVirtualCluster() *VirtualCluster { // By default, create an ephemeral cluster
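The new Kubeconfig field keeps the raw bytes generated while building the client, so tests can hand the exact same credentials to out-of-process tooling. A hypothetical usage sketch (WriteKubeconfig and the file layout are illustrative, not part of this diff), assuming os and path/filepath are imported:

// WriteKubeconfig persists the captured kubeconfig so external tools,
// such as a CLI under test, can be pointed at the virtual cluster.
func (c *VirtualCluster) WriteKubeconfig(dir string) (string, error) {
	path := filepath.Join(dir, c.Cluster.Name+"-kubeconfig.yaml")
	if err := os.WriteFile(path, c.Kubeconfig, 0o600); err != nil {
		return "", err
	}
	return path, nil
}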
@@ -54,7 +55,7 @@ func NewVirtualClusterWithType(persistenceType v1beta1.PersistenceMode) *Virtual
CreateCluster(cluster)
-client, restConfig := NewVirtualK8sClientAndConfig(cluster)
+client, restConfig, kubeconfig := NewVirtualK8sClientAndKubeconfig(cluster)
By(fmt.Sprintf("Created virtual cluster %s/%s", cluster.Namespace, cluster.Name))
@@ -62,6 +63,7 @@ func NewVirtualClusterWithType(persistenceType v1beta1.PersistenceMode) *Virtual
Cluster: cluster,
RestConfig: restConfig,
Client: client,
Kubeconfig: kubeconfig,
}
}
@@ -140,9 +142,6 @@ func NewCluster(namespace string) *v1beta1.Cluster {
Persistence: v1beta1.PersistenceConfig{
Type: v1beta1.EphemeralPersistenceMode,
},
-ServerArgs: []string{
-	"--disable-network-policy",
-},
},
}
}
@@ -234,6 +233,7 @@ func NewVirtualK8sClientAndConfig(cluster *v1beta1.Cluster) (*kubernetes.Clients
kubeletAltName := fmt.Sprintf("k3k-%s-kubelet", cluster.Name)
vKubeconfig.AltNames = certs.AddSANs([]string{hostIP, kubeletAltName})
config, err = vKubeconfig.Generate(ctx, k8sClient, cluster, hostIP, 0)
return err
}).
WithTimeout(time.Minute * 2).
@@ -251,6 +251,40 @@ func NewVirtualK8sClientAndConfig(cluster *v1beta1.Cluster) (*kubernetes.Clients
return virtualK8sClient, restcfg
}
// NewVirtualK8sClientAndKubeconfig returns a Kubernetes Clientset, a rest.Config, and the serialized kubeconfig for the virtual cluster
func NewVirtualK8sClientAndKubeconfig(cluster *v1beta1.Cluster) (*kubernetes.Clientset, *rest.Config, []byte) {
GinkgoHelper()
var (
err error
config *clientcmdapi.Config
)
ctx := context.Background()
Eventually(func() error {
vKubeconfig := kubeconfig.New()
kubeletAltName := fmt.Sprintf("k3k-%s-kubelet", cluster.Name)
vKubeconfig.AltNames = certs.AddSANs([]string{hostIP, kubeletAltName})
config, err = vKubeconfig.Generate(ctx, k8sClient, cluster, hostIP, 0)
return err
}).
WithTimeout(time.Minute * 2).
WithPolling(time.Second * 5).
Should(BeNil())
configData, err := clientcmd.Write(*config)
Expect(err).To(Not(HaveOccurred()))
restcfg, err := clientcmd.RESTConfigFromKubeConfig(configData)
Expect(err).To(Not(HaveOccurred()))
virtualK8sClient, err := kubernetes.NewForConfig(restcfg)
Expect(err).To(Not(HaveOccurred()))
return virtualK8sClient, restcfg, configData
}
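Unlike NewVirtualK8sClientAndConfig, this variant also returns the serialized kubeconfig it already produced for RESTConfigFromKubeConfig, so callers get the bytes without a second Generate round trip. A hedged usage sketch (variable names are illustrative):

client, restcfg, kubeconfig := NewVirtualK8sClientAndKubeconfig(cluster)
_ = restcfg // also stored on VirtualCluster.RestConfig by NewVirtualClusterWithType
Expect(kubeconfig).NotTo(BeEmpty())
version, err := client.Discovery().ServerVersion()
Expect(err).To(Not(HaveOccurred()))
GinkgoWriter.Printf("virtual cluster is running %s\n", version.GitVersion)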
func (c *VirtualCluster) NewNginxPod(namespace string) (*v1.Pod, string) {
GinkgoHelper()
@@ -365,26 +399,25 @@ func (c *VirtualCluster) ExecCmd(pod *v1.Pod, command string) (string, string, e
func restartServerPod(ctx context.Context, virtualCluster *VirtualCluster) {
GinkgoHelper()
labelSelector := "cluster=" + virtualCluster.Cluster.Name + ",role=server"
serverPods, err := k8s.CoreV1().Pods(virtualCluster.Cluster.Namespace).List(ctx, metav1.ListOptions{LabelSelector: labelSelector})
Expect(err).To(Not(HaveOccurred()))
serverPods := listServerPods(ctx, virtualCluster)
Expect(len(serverPods.Items)).To(Equal(1))
serverPod := serverPods.Items[0]
Expect(len(serverPods)).To(Equal(1))
serverPod := serverPods[0]
GinkgoWriter.Printf("deleting pod %s/%s\n", serverPod.Namespace, serverPod.Name)
err = k8s.CoreV1().Pods(virtualCluster.Cluster.Namespace).Delete(ctx, serverPod.Name, metav1.DeleteOptions{})
err := k8s.CoreV1().Pods(virtualCluster.Cluster.Namespace).Delete(ctx, serverPod.Name, metav1.DeleteOptions{})
Expect(err).To(Not(HaveOccurred()))
By("Deleting server pod")
// check that the server pods restarted
Eventually(func() any {
serverPods, err = k8s.CoreV1().Pods(virtualCluster.Cluster.Namespace).List(ctx, metav1.ListOptions{LabelSelector: labelSelector})
Expect(err).To(Not(HaveOccurred()))
Expect(len(serverPods.Items)).To(Equal(1))
return serverPods.Items[0].DeletionTimestamp
serverPods := listServerPods(ctx, virtualCluster)
Expect(len(serverPods)).To(Equal(1))
return serverPods[0].DeletionTimestamp
}).WithTimeout(60 * time.Second).WithPolling(time.Second * 5).Should(BeNil())
}
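restartServerPod now delegates its pod queries to listServerPods, which this hunk references but does not show. A plausible sketch, reusing the cluster/role label selector the removed lines built inline:

// listServerPods returns the server pods belonging to the given virtual cluster.
func listServerPods(ctx context.Context, virtualCluster *VirtualCluster) []v1.Pod {
	GinkgoHelper()
	labelSelector := "cluster=" + virtualCluster.Cluster.Name + ",role=server"
	podList, err := k8s.CoreV1().Pods(virtualCluster.Cluster.Namespace).List(ctx, metav1.ListOptions{LabelSelector: labelSelector})
	Expect(err).To(Not(HaveOccurred()))
	return podList.Items
}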

tests/testdata/addons/nginx.yaml vendored Normal file

@@ -0,0 +1,9 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx-addon
namespace: default
spec:
containers:
- name: nginx
image: nginx:latest
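The pod name nginx-addon suggests this manifest is deployed as a bootstrap addon rather than created by test code; how the suite consumes it is not shown here. If a test did need to apply it directly, a hedged sketch using sigs.k8s.io/yaml (the path and client are assumptions):

data, err := os.ReadFile("tests/testdata/addons/nginx.yaml")
Expect(err).To(Not(HaveOccurred()))
var nginx v1.Pod
Expect(yaml.Unmarshal(data, &nginx)).To(Succeed())
_, err = virtualCluster.Client.CoreV1().Pods(nginx.Namespace).Create(ctx, &nginx, metav1.CreateOptions{})
Expect(err).To(Not(HaveOccurred()))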

tests/testdata/registry/certs/ca.crt vendored Normal file

@@ -0,0 +1,33 @@
-----BEGIN CERTIFICATE-----
MIIFuzCCA6OgAwIBAgIUI2MIXZDFl1X+tVkpbeYv78eGB5swDQYJKoZIhvcNAQEL
BQAwbTELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBUxvY2FsMQ4wDAYDVQQHDAVMb2Nh
bDESMBAGA1UECgwJUHJpdmF0ZUNBMQwwCgYDVQQLDANEZXYxHDAaBgNVBAMME1By
aXZhdGUtUmVnaXN0cnktQ0EwHhcNMjUxMTI2MTIxMDIyWhcNMzUxMTI0MTIxMDIy
WjBtMQswCQYDVQQGEwJVUzEOMAwGA1UECAwFTG9jYWwxDjAMBgNVBAcMBUxvY2Fs
MRIwEAYDVQQKDAlQcml2YXRlQ0ExDDAKBgNVBAsMA0RldjEcMBoGA1UEAwwTUHJp
dmF0ZS1SZWdpc3RyeS1DQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIB
AIwwuetZ0bpDmYkdP1HjQpPVjatiQHyedyulKBfPo9/uVE+HVoPe3Ku4QQgSmhKs
hzYE0g7CoM/Kr1EdCYS0RkJhLp+Mfdqwcn2+QXH0PHqdJU3ocutmxRB0xObXyXBv
2Nf+0yKNY36E096wgcJKIl8eNrONmFbPmNl7PjsIvnilQBUSUbfecxUjDcmbZxXq
dSSntMPN/twPi9FzZGMGhZzHD0+Wf+9V0dfF1L1P7rFtmdEtPbBEH6wZFpKl8qzH
O3aXxznLbRSiVZ8PP4zdEKXGcXU8bWZ0NeGPOfgQhsBZvN2auV+esb1McK+7ZzOK
P5dPuzmlCCa8F7+P8wYgHm3W5dCkwsRqvu+ht4Z2sw/Gj1Uot7hunW2jO1PWoegs
W4kGD2EFHqmYU8j3RzxppY0YvZxlSVMiQDHE11ZKHrvu0p7EGsZnAqntAH73jlEg
kt7ZhAS+AMs+evIGAFL5C6dZc8C99uiXiANausrBlVQUk068kR7W2mVy8omet2mQ
DCkmnLJnrxgq5BwPKKvqMDs6dl+idYirxiX33dhildTrkX07DIOwy0nBTdBz8O5s
PIhIA5maVZt8g8w+YpNLEJSMI7NJ1YlgIJ3P9JBOk47hlNp3gw7WlkUxW5XXS1jU
qUWUDHi8grjL9WxSbENXIY4aGIu5ag/xqgQzKGkWcr1nAgMBAAGjUzBRMB0GA1Ud
DgQWBBQYh4H6KFRZXYi9/pzeZUTgxlWQ3jAfBgNVHSMEGDAWgBQYh4H6KFRZXYi9
/pzeZUTgxlWQ3jAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4ICAQBt
IGJhMJsvDFXWO5pRMjA633k4zquyYBfSt1lTmH5S91SWW8sAickNVvnpBvmsG5lg
GAA1kdF9b7+yrcWGK3TldQobZVpU46MBoiqWPY8tHqbM/0x+UwHHPTwbNa5JLjxT
5ECsZa5wdeACiLt0dFv4CJ8GIcGK7k8QaoxvolEPcxbEpas7bJF9cHNZH3TEwhIb
kC6Q+4+Y2r0pxDW/2uqFpL2RSzk4kJDH3K2y3ywnkSsM6QTDXlaBKijU5bXvrq7v
ZIAX75w+dSHtj4oCWFm/jgVyS+KrAhWbjwxwRMLku0/603OQCAv7c1oEbLh58fLb
zfOOPGkpDvNP5rTwacIyW0P0/GJpmwFjaJYRJgmmCwM7S0qhhJfuVp2d7oZvkkRT
vRpNqpR813ge6T3pnXpdBA9NofTiIsxJ18CGDaHBvDBR+MAsPjmCskIsf9T3fGY/
fAzuYd8qgE2jIuyZzBLIwARq+zKSzBwgLbfnYRprbp62Qi0OQRYOqxgfP0grw29S
k++Wtfd1zr8OCX/8CkC1P85ipzUoF3G7W1Cadn4aesfxvHPKQlRlpHnt7RImwJMZ
k5QP/2igzjjGwvlGb97Br25dgjiJmQlY3IjqNpzt8uIrCK65vQSwpioq7coJWlTZ
feU/+JrEL8lsa4jkMMeYnW4IJTRqq4Aqoe+e+vz8bQ==
-----END CERTIFICATE-----
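This vendored CA presumably anchors trust for the test registry's TLS endpoint. A minimal sketch, assuming the path below and Go's standard library, of loading it into a cert pool; the suite may wire it up differently (for example via a registries configuration):

pem, err := os.ReadFile("tests/testdata/registry/certs/ca.crt")
Expect(err).To(Not(HaveOccurred()))
pool := x509.NewCertPool()
Expect(pool.AppendCertsFromPEM(pem)).To(BeTrue())
// an HTTP client that trusts only the private registry CA
httpClient := &http.Client{
	Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
}
_ = httpClient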

Some files were not shown because too many files have changed in this diff.