chore: Cleanup a lot of code

TheiLLeniumStudios
2026-01-15 00:21:56 +01:00
parent 6ac8f5d5d8
commit 082b7cc4c4
38 changed files with 413 additions and 848 deletions

View File

@@ -1,523 +1,123 @@
# Reloader E2E Tests
End-to-end tests that verify Reloader works correctly in a real Kubernetes cluster. Tests create workloads, modify their referenced ConfigMaps/Secrets/SecretProviderClasses, and verify that Reloader triggers the appropriate rolling updates.
## Table of Contents
- [Quick Start](#quick-start)
- [Prerequisites](#prerequisites)
- [Running Tests](#running-tests)
- [Test Coverage](#test-coverage)
- [Workload Types](#workload-types)
- [Resource Types](#resource-types)
- [Reload Strategies](#reload-strategies)
- [Reference Methods](#reference-methods)
- [Annotations](#annotations)
- [CLI Flags](#cli-flags)
- [Test Organization](#test-organization)
- [Debugging](#debugging)
- [Writing Tests](#writing-tests)
---
End-to-end tests verifying Reloader functionality in a real Kubernetes cluster.
## Quick Start
```bash
# One-time setup: create Kind cluster and install dependencies
make e2e-setup
# Run all e2e tests
make e2e
# Cleanup when done
make e2e-cleanup
make e2e-setup # Create Kind cluster, install Argo/CSI/Vault
make e2e # Build image, run tests
make e2e-cleanup # Teardown
```
---
## Prerequisites
| Requirement | Version | Purpose |
|------------|---------|---------|
| Go | 1.25+ | Test execution |
| Docker/Podman | Latest | Image building |
| [Kind](https://kind.sigs.k8s.io/) | 0.20+ | Local Kubernetes cluster |
| kubectl | Latest | Cluster interaction |
| Helm | 3.x | Reloader deployment |
### Optional Dependencies
| Component | Purpose | Auto-installed by |
|-----------|---------|-------------------|
| [Argo Rollouts](https://argoproj.github.io/rollouts/) | Argo Rollout tests | `make e2e-setup` |
| [CSI Secrets Store Driver](https://secrets-store-csi-driver.sigs.k8s.io/) | SecretProviderClass tests | `make e2e-setup` |
| [Vault](https://www.vaultproject.io/) | CSI provider backend | `make e2e-setup` |
| OpenShift | DeploymentConfig tests | Requires OpenShift cluster |
---
- Go 1.25+
- Docker or Podman
- [Kind](https://kind.sigs.k8s.io/) 0.20+
- kubectl
- Helm 3.x
## Running Tests
### Make Targets
| Target | Description |
|--------|-------------|
| `make e2e-setup` | Create Kind cluster and install all dependencies (Argo, CSI, Vault) |
| `make e2e` | Build image, load to Kind, run all tests |
| `make e2e-cleanup` | Remove test resources and delete Kind cluster |
| `make e2e-ci` | Full CI pipeline: setup → test → cleanup |
### Common Workflows
```bash
# Development workflow
make e2e-setup # Once at the start
make e2e # Run tests (repeat as needed)
make e2e # ...iterate...
make e2e-cleanup # When done
# Run all tests
make e2e
# CI workflow
make e2e-ci # Does everything
# Test specific image
SKIP_BUILD=true RELOADER_IMAGE=ghcr.io/stakater/reloader:v1.2.0 make e2e
```
### Running Specific Tests
```bash
# Run a specific test suite
# Run specific suite
go tool ginkgo -v ./test/e2e/core/...
go tool ginkgo -v ./test/e2e/annotations/...
go tool ginkgo -v ./test/e2e/csi/...
# Run tests matching a pattern
go tool ginkgo -v --focus="should reload when ConfigMap" ./test/e2e/...
# Run by pattern
go tool ginkgo -v --focus="ConfigMap" ./test/e2e/...
# Run tests with specific labels
# Run by label
go tool ginkgo -v --label-filter="csi" ./test/e2e/...
go tool ginkgo -v --label-filter="!argo && !openshift" ./test/e2e/...
# Run all tests, continue on failure
go tool ginkgo --keep-going -v ./test/e2e/...
# Test a specific image
SKIP_BUILD=true RELOADER_IMAGE=ghcr.io/stakater/reloader:v1.2.0 make e2e
```
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `RELOADER_IMAGE` | Image to test | `ghcr.io/stakater/reloader:test` |
| `SKIP_BUILD` | Skip image build | `false` |
| `KIND_CLUSTER` | Kind cluster name | `reloader-e2e` |
| `KUBECONFIG` | Kubernetes config path | `~/.kube/config` |
| `E2E_TIMEOUT` | Test timeout | `45m` |
| Variable | Default | Description |
|----------|---------|-------------|
| `RELOADER_IMAGE` | `ghcr.io/stakater/reloader:test` | Image to test |
| `SKIP_BUILD` | `false` | Skip image build |
| `KIND_CLUSTER` | `reloader-e2e` | Kind cluster name |
| `E2E_TIMEOUT` | `45m` | Test timeout |
---
## Test Coverage
### Workload Types
| Workload | Annotations | EnvVars | CSI | Special Handling |
|----------|-------------|---------|-----|------------------|
| Deployment | ✅ | ✅ | ✅ | Standard rolling update |
| DaemonSet | ✅ | ✅ | ✅ | Standard rolling update |
| StatefulSet | ✅ | ✅ | ✅ | Standard rolling update |
| CronJob | ✅ | ❌ | ❌ | Updates job template |
| Job | ✅ | ❌ | ❌ | Recreates job |
| Argo Rollout | ✅ | ✅ | ❌ | Supports restart strategy |
| DeploymentConfig | ✅ | ✅ | ❌ | OpenShift only |
### Resource Types
#### ConfigMaps & Secrets
Standard Kubernetes resources that trigger reloads when their data changes.
**Tested Scenarios:**
- Data changes trigger reload
- Label-only changes do NOT trigger reload
- Annotation-only changes do NOT trigger reload
- Multiple resources in single annotation (comma-separated)
- Regex patterns for resource names
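As an illustration of the comma-separated form, a workload can watch several ConfigMaps through a single annotation (resource names here are made up):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    configmap.reloader.stakater.com/reload: "app-config,feature-flags"
```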
#### SecretProviderClass (CSI)
CSI Secrets Store Driver integration for external secret providers (Vault, Azure, AWS, etc.).
**Tested Scenarios:**
- SecretProviderClassPodStatus changes trigger reload
- Label-only changes on SPCPS do NOT trigger reload
- Auto-detection with `secretproviderclass.reloader.stakater.com/auto: "true"`
- Exclude specific SPCs from auto-reload
- Init containers with CSI volumes
- Multiple CSI volumes per workload
### Reload Strategies
#### Annotations Strategy (Default)
Adds or updates the `reloader.stakater.com/last-reloaded-from` annotation on the pod template.
```yaml
spec:
template:
metadata:
annotations:
reloader.stakater.com/last-reloaded-from: "my-configmap"
```
#### EnvVars Strategy
Adds a `STAKATER_<RESOURCE>_<TYPE>` environment variable to containers.
```yaml
spec:
template:
spec:
containers:
- env:
- name: STAKATER_MY_CONFIGMAP_CONFIGMAP
value: "<sha256-hash>"
```
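The placeholder value above is a SHA-256 digest of the resource's contents. As a rough sketch only (the exact input Reloader hashes is an assumption here), producing such a digest looks like:

```shell
# Hash some ConfigMap-like data; the 64-char hex digest is the kind of value
# that lands in the STAKATER_* env var (the input format is an assumption)
data='key=updated'
hash=$(printf '%s' "$data" | sha256sum | cut -d' ' -f1)
echo "$hash"
```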
### Reference Methods
All methods are tested for Deployment, DaemonSet, and StatefulSet:
| Method | Description | ConfigMap | Secret | CSI |
|--------|-------------|-----------|--------|-----|
| `envFrom` | All keys as env vars | ✅ | ✅ | - |
| `valueFrom.configMapKeyRef` | Single key as env var | ✅ | - | - |
| `valueFrom.secretKeyRef` | Single key as env var | - | ✅ | - |
| Volume mount | Mount as files | ✅ | ✅ | ✅ |
| Projected volume | Combined sources | ✅ | ✅ | - |
| Init container (envFrom) | Init container env | ✅ | ✅ | - |
| Init container (volume) | Init container mount | ✅ | ✅ | ✅ |
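Two of the most common methods side by side, `envFrom` and a volume mount of the same ConfigMap (names are illustrative):

```yaml
spec:
  template:
    spec:
      containers:
        - name: app
          envFrom:
            - configMapRef:
                name: app-config   # all keys exposed as env vars
          volumeMounts:
            - name: config-vol
              mountPath: /etc/config   # keys mounted as files
      volumes:
        - name: config-vol
          configMap:
            name: app-config
```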
### Annotations
#### Reload Triggers
| Annotation | Description |
|------------|-------------|
| `configmap.reloader.stakater.com/reload` | Reload on specific ConfigMap(s) change |
| `secret.reloader.stakater.com/reload` | Reload on specific Secret(s) change |
| `secretproviderclass.reloader.stakater.com/reload` | Reload on specific SPC(s) change |
#### Auto-Detection
| Annotation | Description |
|------------|-------------|
| `reloader.stakater.com/auto: "true"` | Auto-detect all mounted resources |
| `configmap.reloader.stakater.com/auto: "true"` | Auto-detect ConfigMaps only |
| `secret.reloader.stakater.com/auto: "true"` | Auto-detect Secrets only |
| `secretproviderclass.reloader.stakater.com/auto: "true"` | Auto-detect SPCs only |
#### Exclusions
| Annotation | Description |
|------------|-------------|
| `configmaps.exclude.reloader.stakater.com/reload` | Exclude ConfigMaps from auto |
| `secrets.exclude.reloader.stakater.com/reload` | Exclude Secrets from auto |
| `secretproviderclasses.exclude.reloader.stakater.com/reload` | Exclude SPCs from auto |
| `reloader.stakater.com/ignore: "true"` | On resource: prevents any reload |
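Auto-detection and exclusion are commonly combined, for example (ConfigMap name is illustrative):

```yaml
metadata:
  annotations:
    reloader.stakater.com/auto: "true"
    configmaps.exclude.reloader.stakater.com/reload: "noisy-config"
```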
#### Search & Match
| Annotation | Target | Description |
|------------|--------|-------------|
| `reloader.stakater.com/search: "true"` | Workload | Watch for matching resources |
| `reloader.stakater.com/match: "true"` | Resource | Trigger watchers on change |
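The two annotations pair up across resources; a minimal sketch (names illustrative):

```yaml
# On the workload:
metadata:
  annotations:
    reloader.stakater.com/search: "true"
---
# On the ConfigMap it mounts:
metadata:
  annotations:
    reloader.stakater.com/match: "true"
```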
#### Other
| Annotation | Description |
|------------|-------------|
| `reloader.stakater.com/pause-period` | Pause deployment after reload |
### CLI Flags
Tests verify these Reloader command-line flags:
| Flag | Description |
|------|-------------|
| `--namespaces-to-ignore` | Skip specified namespaces |
| `--namespace-selector` | Only watch namespaces with matching labels |
| `--watch-globally` | Watch all namespaces vs own namespace only |
| `--resource-label-selector` | Only watch resources with matching labels |
| `--ignore-secrets` | Ignore all Secret changes |
| `--ignore-configmaps` | Ignore all ConfigMap changes |
| `--ignore-cronjobs` | Skip CronJob workloads |
| `--ignore-jobs` | Skip Job workloads |
| `--reload-on-create` | Trigger reload on resource creation |
| `--reload-on-delete` | Trigger reload on resource deletion |
| `--auto-reload-all` | Auto-reload all workloads without annotations |
| `--enable-csi-integration` | Enable SecretProviderClass support |
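Flags like these are passed as container args to the Reloader deployment; a minimal illustrative fragment (the chosen values are assumptions, not defaults):

```yaml
spec:
  containers:
    - name: reloader
      args:
        - "--namespaces-to-ignore=kube-system"
        - "--reload-on-create=true"
        - "--enable-csi-integration=true"
```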
---
## Test Organization
## Test Structure
```
test/e2e/
├── core/ # Core workload tests
│ ├── core_suite_test.go
│ └── workloads_test.go # All workload types, both strategies
├── annotations/ # Annotation behavior tests
│   ├── annotations_suite_test.go
│ ├── auto_reload_test.go # Auto-detection variations
│ ├── combination_test.go # Multiple annotations together
│ ├── exclude_test.go # Exclude annotations
│ ├── pause_period_test.go # Pause after reload
│ ├── resource_ignore_test.go # Ignore annotation on resources
│ └── search_match_test.go # Search/match pattern
├── flags/ # CLI flag tests
│ ├── flags_suite_test.go
│ ├── auto_reload_all_test.go
│ ├── ignore_resources_test.go
│ ├── ignored_workloads_test.go
│ ├── namespace_ignore_test.go
│ ├── namespace_selector_test.go
│ ├── reload_on_create_test.go
│ ├── reload_on_delete_test.go
│ ├── resource_selector_test.go
│ └── watch_globally_test.go
├── advanced/ # Advanced scenarios
│ ├── advanced_suite_test.go
│ ├── job_reload_test.go # Job recreation
│ ├── multi_container_test.go # Multiple containers
│ ├── pod_annotations_test.go # Pod template annotations
│ └── regex_test.go # Regex patterns
├── csi/ # CSI SecretProviderClass tests
│ ├── csi_suite_test.go
│ └── csi_test.go # SPC-specific scenarios
├── argo/ # Argo Rollouts (requires installation)
│ ├── argo_suite_test.go
│ └── rollout_test.go
└── utils/ # Shared test utilities
├── annotations.go # Annotation builders
├── constants.go # Test constants
├── csi.go # CSI client and helpers
├── resources.go # Resource creation helpers
├── testenv.go # Test environment setup
├── wait.go # Wait/polling utilities
├── workload_adapter.go # Workload abstraction interface
├── workload_deployment.go # Deployment adapter
├── workload_daemonset.go # DaemonSet adapter
├── workload_statefulset.go # StatefulSet adapter
├── workload_cronjob.go # CronJob adapter
├── workload_job.go # Job adapter
├── workload_argo.go # Argo Rollout adapter
└── workload_openshift.go # DeploymentConfig adapter
├── core/ # Core reload functionality
├── annotations/ # Annotation behaviors (auto, exclude, search/match)
├── flags/ # CLI flag behaviors
├── advanced/ # Jobs, multi-container, regex patterns
├── csi/ # SecretProviderClass integration
├── argo/ # Argo Rollouts (requires CRDs)
└── utils/ # Shared test utilities and workload adapters
```
---
### Labels
## Debugging
### View Test Output
```bash
# Verbose output
go tool ginkgo -v ./test/e2e/core/...
# Focus on specific test
go tool ginkgo -v --focus="should reload when ConfigMap" ./test/e2e/...
# Show all spec names
go tool ginkgo -v --dry-run ./test/e2e/...
```
### Check Reloader Logs
```bash
# Find Reloader pod
kubectl get pods -A | grep reloader
# View logs
kubectl logs -n <namespace> -l app.kubernetes.io/name=reloader --tail=100 -f
# Check events
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
```
### Inspect Test Resources
```bash
# List test namespaces
kubectl get ns | grep reloader
# Check workloads in test namespace
kubectl get deploy,ds,sts,cronjob,job -n <test-namespace>
# Check ConfigMaps/Secrets
kubectl get cm,secret -n <test-namespace>
# Check CSI resources
kubectl get secretproviderclass,secretproviderclasspodstatus -n <test-namespace>
```
### Common Issues
| Issue | Cause | Solution |
|-------|-------|----------|
| Tests timeout | Reloader not running | Check pod status and logs |
| CSI tests skipped | CSI driver not installed | Run `make e2e-setup` |
| Argo tests skipped | Argo Rollouts not installed | Run `make e2e-setup` |
| OpenShift tests skipped | Not an OpenShift cluster | Expected on Kind |
| "resource not found" | Missing CRDs | Install required components |
| Duplicate volume names | Test bug | Check CSI volume naming |
---
| Label | Description |
|-------|-------------|
| `csi` | Requires CSI driver and Vault |
| `argo` | Requires Argo Rollouts CRDs |
| `openshift` | Requires OpenShift cluster |
## Writing Tests
### Using the Workload Adapter Pattern
Test the same behavior across multiple workload types:
Use the workload adapter pattern for cross-workload tests:
```go
DescribeTable("should reload when ConfigMap changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil {
Skip(fmt.Sprintf("%s not available", workloadType))
}
DescribeTable("should reload when ConfigMap changes", func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil {
Skip(fmt.Sprintf("%s not available", workloadType))
}
// Create ConfigMap
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "initial"}, nil)
Expect(err).NotTo(HaveOccurred())
// Create resources
_, err := utils.CreateConfigMap(ctx, kubeClient, ns, cmName, map[string]string{"key": "v1"}, nil)
Expect(err).NotTo(HaveOccurred())
// Create workload via adapter
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
ConfigMapName: configMapName,
UseConfigMapEnvFrom: true,
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName),
})
Expect(err).NotTo(HaveOccurred())
err = adapter.Create(ctx, ns, name, utils.WorkloadConfig{
ConfigMapName: cmName,
UseConfigMapEnvFrom: true,
Annotations: utils.BuildConfigMapReloadAnnotation(cmName),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, ns, name) })
// Wait for ready
err = adapter.WaitReady(ctx, testNamespace, workloadName, utils.WorkloadReadyTimeout)
Expect(err).NotTo(HaveOccurred())
// Wait ready
Expect(adapter.WaitReady(ctx, ns, name, utils.WorkloadReadyTimeout)).To(Succeed())
// Update ConfigMap
err = utils.UpdateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "updated"})
Expect(err).NotTo(HaveOccurred())
// Trigger reload
Expect(utils.UpdateConfigMap(ctx, kubeClient, ns, cmName, map[string]string{"key": "v2"})).To(Succeed())
// Verify reload
reloaded, err := adapter.WaitReloaded(ctx, testNamespace, workloadName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue())
},
// Verify
reloaded, err := adapter.WaitReloaded(ctx, ns, name, utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue())
},
Entry("Deployment", utils.WorkloadDeployment),
Entry("DaemonSet", utils.WorkloadDaemonSet),
Entry("StatefulSet", utils.WorkloadStatefulSet),
Entry("ArgoRollout", Label("argo"), utils.WorkloadArgoRollout),
)
```
### Direct Resource Creation
## Debugging
For Deployment-specific tests:
```go
It("should reload with custom setup", func() {
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "value"}, nil)
Expect(err).NotTo(HaveOccurred())
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithConfigMapEnvFrom(configMapName),
utils.WithAnnotations(utils.BuildAutoTrueAnnotation()),
)
Expect(err).NotTo(HaveOccurred())
// ... test logic ...
})
```
### CSI Tests
```go
It("should reload when SecretProviderClassPodStatus changes", func() {
if !utils.IsCSIDriverInstalled(ctx, csiClient) {
Skip("CSI driver not installed")
}
// Create SPC
_, err := utils.CreateSecretProviderClass(ctx, csiClient, testNamespace, spcName, nil)
Expect(err).NotTo(HaveOccurred())
// Create SPCPS
_, err = utils.CreateSecretProviderClassPodStatus(ctx, csiClient, testNamespace, spcpsName, spcName,
utils.NewSPCPSObjects("secret1", "v1"))
Expect(err).NotTo(HaveOccurred())
// Create Deployment with CSI volume
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithCSIVolume(spcName),
utils.WithAnnotations(utils.BuildSecretProviderClassReloadAnnotation(spcName)),
)
Expect(err).NotTo(HaveOccurred())
// Update SPCPS
err = utils.UpdateSecretProviderClassPodStatus(ctx, csiClient, testNamespace, spcpsName,
utils.NewSPCPSObjects("secret1", "v2"))
Expect(err).NotTo(HaveOccurred())
// Verify reload using adapter
adapter := utils.NewDeploymentAdapter(kubeClient)
reloaded, err := adapter.WaitReloaded(ctx, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue())
})
```
### Negative Tests
Verify that something does NOT trigger a reload:
```go
It("should NOT reload when only labels change", func() {
// Setup...
adapter := utils.NewDeploymentAdapter(kubeClient)
// Make a change that shouldn't trigger reload
err = utils.UpdateConfigMapLabels(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"new-label": "value"})
Expect(err).NotTo(HaveOccurred())
// Wait briefly, then verify NO reload
time.Sleep(utils.NegativeTestWait)
reloaded, err := adapter.WaitReloaded(ctx, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ShortTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeFalse(), "Should NOT have reloaded")
})
```
### Test Labels
Use labels to categorize tests:
```go
Entry("Deployment", Label("csi"), utils.WorkloadDeployment),
Entry("with OpenShift", Label("openshift"), utils.WorkloadDeploymentConfig),
Entry("with Argo", Label("argo"), utils.WorkloadArgoRollout),
```
Run by label:
```bash
go tool ginkgo --label-filter="csi" ./test/e2e/...
go tool ginkgo --label-filter="!openshift && !argo" ./test/e2e/...
# Reloader logs
kubectl logs -n <namespace> -l app.kubernetes.io/name=reloader -f
# Test resources
kubectl get deploy,ds,sts,cm,secret -n <test-namespace>
# CSI resources
kubectl get secretproviderclass,secretproviderclasspodstatus -A
```

View File

@@ -184,11 +184,9 @@ var _ = Describe("Job Workload Recreation Tests", func() {
Context("Job with SecretProviderClass reference", Label("csi"), func() {
BeforeEach(func() {
// Skip if CSI driver not installed
if !utils.IsCSIDriverInstalled(ctx, csiClient) {
Skip("CSI secrets store driver not installed - skipping CSI test")
}
// Skip if Vault CSI provider not installed
if !utils.IsVaultProviderInstalled(ctx, kubeClient) {
Skip("Vault CSI provider not installed - skipping CSI test")
}
@@ -209,6 +207,7 @@ var _ = Describe("Job Workload Recreation Tests", func() {
By("Creating a Job with CSI volume and SPC reload annotation")
job, err := utils.CreateJob(ctx, kubeClient, testNamespace, jobName,
utils.WithJobCommand("sleep 300"),
utils.WithJobCSIVolume(spcName),
utils.WithJobAnnotations(utils.BuildSecretProviderClassReloadAnnotation(spcName)))
Expect(err).NotTo(HaveOccurred())

View File

@@ -43,7 +43,6 @@ var _ = BeforeSuite(func() {
registry = utils.NewAdapterRegistry(kubeClient)
// Register optional adapters if CRDs are installed
if utils.IsArgoRolloutsInstalled(ctx, testEnv.RolloutsClient) {
GinkgoWriter.Println("Argo Rollouts detected, registering ArgoRolloutAdapter")
registry.RegisterAdapter(utils.NewArgoRolloutAdapter(testEnv.RolloutsClient))
@@ -60,7 +59,7 @@ var _ = BeforeSuite(func() {
deployValues := map[string]string{
"reloader.reloadStrategy": "annotations",
"reloader.watchGlobally": "false", // Only watch own namespace to prevent cross-talk between test suites
"reloader.watchGlobally": "false",
}
if utils.IsCSIDriverInstalled(ctx, csiClient) {

View File

@@ -130,36 +130,7 @@ var _ = Describe("Auto Reload Annotation Tests", func() {
})
})
Context("with reloader.stakater.com/auto=false annotation", func() {
It("should NOT reload Deployment when ConfigMap changes", func() {
By("Creating a ConfigMap")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "initial"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with auto=false annotation")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithConfigMapEnvFrom(configMapName),
utils.WithAnnotations(utils.BuildAutoFalseAnnotation()),
)
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
err = adapter.WaitReady(ctx, testNamespace, deploymentName, utils.WorkloadReadyTimeout)
Expect(err).NotTo(HaveOccurred())
By("Updating the ConfigMap data")
err = utils.UpdateConfigMap(ctx, kubeClient, testNamespace, configMapName, map[string]string{"key": "updated"})
Expect(err).NotTo(HaveOccurred())
By("Verifying Deployment is NOT reloaded (negative test)")
time.Sleep(utils.NegativeTestWait)
reloaded, err := adapter.WaitReloaded(ctx, testNamespace, deploymentName,
utils.AnnotationLastReloadedFrom, utils.ShortTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeFalse(), "Deployment with auto=false should NOT have been reloaded")
})
})
// Note: auto=false test is now in core/workloads_test.go as a DescribeTable for all workload types
Context("with configmap.reloader.stakater.com/auto=true annotation", func() {
It("should reload Deployment only when ConfigMap changes, not Secret", func() {

View File

@@ -48,10 +48,10 @@ var _ = Describe("Combination Annotation Tests", func() {
By("Creating a Deployment with auto=true AND explicit reload annotation for extra ConfigMap")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithConfigMapEnvFrom(configMapName), // auto-detected
utils.WithConfigMapEnvFrom(configMapName),
utils.WithAnnotations(utils.MergeAnnotations(
utils.BuildAutoTrueAnnotation(),
utils.BuildConfigMapReloadAnnotation(configMapName2), // explicitly listed
utils.BuildConfigMapReloadAnnotation(configMapName2),
)),
)
Expect(err).NotTo(HaveOccurred())
@@ -82,10 +82,10 @@ var _ = Describe("Combination Annotation Tests", func() {
By("Creating a Deployment with auto=true AND explicit reload annotation for extra ConfigMap")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithConfigMapEnvFrom(configMapName), // auto-detected
utils.WithConfigMapEnvFrom(configMapName),
utils.WithAnnotations(utils.MergeAnnotations(
utils.BuildAutoTrueAnnotation(),
utils.BuildConfigMapReloadAnnotation(configMapName2), // explicitly listed
utils.BuildConfigMapReloadAnnotation(configMapName2),
)),
)
Expect(err).NotTo(HaveOccurred())
@@ -116,10 +116,10 @@ var _ = Describe("Combination Annotation Tests", func() {
By("Creating a Deployment with auto=true AND explicit reload annotation for extra Secret")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithSecretEnvFrom(secretName), // auto-detected
utils.WithSecretEnvFrom(secretName),
utils.WithAnnotations(utils.MergeAnnotations(
utils.BuildAutoTrueAnnotation(),
utils.BuildSecretReloadAnnotation(secretName2), // explicitly listed
utils.BuildSecretReloadAnnotation(secretName2),
)),
)
Expect(err).NotTo(HaveOccurred())
@@ -153,10 +153,10 @@ var _ = Describe("Combination Annotation Tests", func() {
By("Creating a Deployment with auto=true AND exclude for second ConfigMap")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithConfigMapEnvFrom(configMapName),
utils.WithConfigMapEnvFrom(configMapName2), // also mounted, but excluded
utils.WithConfigMapEnvFrom(configMapName2),
utils.WithAnnotations(utils.MergeAnnotations(
utils.BuildAutoTrueAnnotation(),
utils.BuildConfigMapExcludeAnnotation(configMapName2), // exclude this one
utils.BuildConfigMapExcludeAnnotation(configMapName2),
)),
)
Expect(err).NotTo(HaveOccurred())

View File

@@ -357,7 +357,7 @@ var _ = Describe("Exclude Annotation Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Finding the SPCPS for non-excluded SPC")
// We need to find SPCPS for the non-excluded SPC (spcName2)
spcpsName2, err := utils.FindSPCPSForSPC(ctx, csiClient, testNamespace, spcName2, 30*time.Second)
Expect(err).NotTo(HaveOccurred())

View File

@@ -98,9 +98,7 @@ var _ = Describe("Search and Match Annotation Tests", func() {
By("Creating a Deployment WITHOUT search annotation (only standard annotation)")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName,
utils.WithConfigMapEnvFrom(configMapName),
// Note: No search or reload annotation - deployment won't be affected by match
)
utils.WithConfigMapEnvFrom(configMapName))
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be ready")
@@ -141,7 +139,6 @@ var _ = Describe("Search and Match Annotation Tests", func() {
By("Creating second Deployment WITHOUT search annotation")
_, err = utils.CreateDeployment(ctx, kubeClient, testNamespace, deploymentName2,
utils.WithConfigMapEnvFrom(configMapName),
// No search annotation
)
Expect(err).NotTo(HaveOccurred())

View File

@@ -29,8 +29,6 @@ var _ = Describe("Argo Rollout Strategy Tests", func() {
_ = utils.DeleteConfigMap(ctx, kubeClient, testNamespace, configMapName)
})
// Argo Rollouts have a special "restart" strategy that sets spec.restartAt field
// instead of using pod template annotations. This is unique to Argo Rollouts.
Context("Rollout strategy annotation", func() {
It("should use default rollout strategy (annotation-based reload)", func() {
By("Creating a ConfigMap")
@@ -67,7 +65,6 @@ var _ = Describe("Argo Rollout Strategy Tests", func() {
Expect(err).NotTo(HaveOccurred())
By("Creating an Argo Rollout with restart strategy annotation")
// Note: auto annotation goes on pod template, rollout-strategy goes on object metadata
_, err = utils.CreateRollout(ctx, rolloutsClient, testNamespace, rolloutName,
utils.WithRolloutConfigMapEnvFrom(configMapName),
utils.WithRolloutAnnotations(utils.BuildAutoTrueAnnotation()),

View File

@@ -59,7 +59,7 @@ var _ = BeforeSuite(func() {
deployValues := map[string]string{
"reloader.reloadStrategy": "annotations",
"reloader.watchGlobally": "false", // Only watch own namespace to prevent cross-talk between test suites
"reloader.watchGlobally": "false",
}
if utils.IsArgoRolloutsInstalled(ctx, testEnv.RolloutsClient) {

View File

@@ -607,214 +607,260 @@ var _ = Describe("Workload Reload Tests", func() {
_ = standardWorkloads
// ============================================================
// EDGE CASE TESTS (Deployment-specific)
// EDGE CASE TESTS
// These tests verify edge cases that should work across all workload types.
// ============================================================
Context("Edge Cases", func() {
It("should reload deployment with multiple ConfigMaps when any one changes", func() {
configMapName2 := utils.RandName("cm2")
defer func() { _ = utils.DeleteConfigMap(ctx, kubeClient, testNamespace, configMapName2) }()
adapter := registry.Get(utils.WorkloadDeployment)
Expect(adapter).NotTo(BeNil())
By("Creating two ConfigMaps")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key1": "value1"}, nil)
Expect(err).NotTo(HaveOccurred())
_, err = utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName2,
map[string]string{"key2": "value2"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment referencing both ConfigMaps")
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
ConfigMapName: configMapName,
UseConfigMapEnvFrom: true,
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName, configMapName2),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
By("Waiting for Deployment to be ready")
err = adapter.WaitReady(ctx, testNamespace, workloadName, utils.WorkloadReadyTimeout)
Expect(err).NotTo(HaveOccurred())
By("Updating the second ConfigMap")
err = utils.UpdateConfigMap(ctx, kubeClient, testNamespace, configMapName2, map[string]string{"key2": "updated-value2"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be reloaded")
reloaded, err := adapter.WaitReloaded(ctx, testNamespace, workloadName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Deployment should have been reloaded when second ConfigMap changed")
})
It("should reload deployment with multiple Secrets when any one changes", func() {
secretName2 := utils.RandName("secret2")
defer func() { _ = utils.DeleteSecret(ctx, kubeClient, testNamespace, secretName2) }()
adapter := registry.Get(utils.WorkloadDeployment)
Expect(adapter).NotTo(BeNil())
By("Creating two Secrets")
_, err := utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName,
map[string]string{"key1": "value1"}, nil)
Expect(err).NotTo(HaveOccurred())
_, err = utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName2,
map[string]string{"key2": "value2"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment referencing both Secrets")
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
SecretName: secretName,
UseSecretEnvFrom: true,
Annotations: utils.BuildSecretReloadAnnotation(secretName, secretName2),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
By("Waiting for Deployment to be ready")
err = adapter.WaitReady(ctx, testNamespace, workloadName, utils.WorkloadReadyTimeout)
Expect(err).NotTo(HaveOccurred())
By("Updating the second Secret")
err = utils.UpdateSecretFromStrings(ctx, kubeClient, testNamespace, secretName2, map[string]string{"key2": "updated-value2"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for Deployment to be reloaded")
reloaded, err := adapter.WaitReloaded(ctx, testNamespace, workloadName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "Deployment should have been reloaded when second Secret changed")
})
DescribeTable("should reload with multiple ConfigMaps when any one changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
configMapName2 := utils.RandName("cm2")
DeferCleanup(func() { _ = utils.DeleteConfigMap(ctx, kubeClient, testNamespace, configMapName2) })
By("Creating two ConfigMaps")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key1": "value1"}, nil)
Expect(err).NotTo(HaveOccurred())
_, err = utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName2,
map[string]string{"key2": "value2"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating workload referencing both ConfigMaps")
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
ConfigMapName: configMapName,
UseConfigMapEnvFrom: true,
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName, configMapName2),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
By("Waiting for workload to be ready")
err = adapter.WaitReady(ctx, testNamespace, workloadName, utils.WorkloadReadyTimeout)
Expect(err).NotTo(HaveOccurred())
By("Updating the second ConfigMap")
err = utils.UpdateConfigMap(ctx, kubeClient, testNamespace, configMapName2, map[string]string{"key2": "updated-value2"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for workload to be reloaded")
reloaded, err := adapter.WaitReloaded(ctx, testNamespace, workloadName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "%s should reload when second ConfigMap changes", workloadType)
},
Entry("Deployment", utils.WorkloadDeployment),
Entry("DaemonSet", utils.WorkloadDaemonSet),
Entry("StatefulSet", utils.WorkloadStatefulSet),
Entry("ArgoRollout", Label("argo"), utils.WorkloadArgoRollout),
Entry("DeploymentConfig", Label("openshift"), utils.WorkloadDeploymentConfig),
)
DescribeTable("should reload with multiple Secrets when any one changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
secretName2 := utils.RandName("secret2")
DeferCleanup(func() { _ = utils.DeleteSecret(ctx, kubeClient, testNamespace, secretName2) })
By("Creating two Secrets")
_, err := utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName,
map[string]string{"key1": "value1"}, nil)
Expect(err).NotTo(HaveOccurred())
_, err = utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName2,
map[string]string{"key2": "value2"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating workload referencing both Secrets")
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
SecretName: secretName,
UseSecretEnvFrom: true,
Annotations: utils.BuildSecretReloadAnnotation(secretName, secretName2),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
By("Waiting for workload to be ready")
err = adapter.WaitReady(ctx, testNamespace, workloadName, utils.WorkloadReadyTimeout)
Expect(err).NotTo(HaveOccurred())
By("Updating the second Secret")
err = utils.UpdateSecretFromStrings(ctx, kubeClient, testNamespace, secretName2, map[string]string{"key2": "updated-value2"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for workload to be reloaded")
reloaded, err := adapter.WaitReloaded(ctx, testNamespace, workloadName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "%s should reload when second Secret changes", workloadType)
},
Entry("Deployment", utils.WorkloadDeployment),
Entry("DaemonSet", utils.WorkloadDaemonSet),
Entry("StatefulSet", utils.WorkloadStatefulSet),
Entry("ArgoRollout", Label("argo"), utils.WorkloadArgoRollout),
Entry("DeploymentConfig", Label("openshift"), utils.WorkloadDeploymentConfig),
)
DescribeTable("should reload multiple times for sequential ConfigMap updates",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a ConfigMap")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "v1"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating workload with ConfigMap reference annotation")
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
ConfigMapName: configMapName,
UseConfigMapEnvFrom: true,
Annotations: utils.BuildConfigMapReloadAnnotation(configMapName),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
By("Waiting for workload to be ready")
err = adapter.WaitReady(ctx, testNamespace, workloadName, utils.WorkloadReadyTimeout)
Expect(err).NotTo(HaveOccurred())
By("First update to ConfigMap")
err = utils.UpdateConfigMap(ctx, kubeClient, testNamespace, configMapName, map[string]string{"key": "v2"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for first reload")
reloaded, err := adapter.WaitReloaded(ctx, testNamespace, workloadName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue())
By("Getting first reload annotation value")
firstReloadValue, err := adapter.GetPodTemplateAnnotation(ctx, testNamespace, workloadName,
utils.AnnotationLastReloadedFrom)
Expect(err).NotTo(HaveOccurred())
By("Second update to ConfigMap")
err = utils.UpdateConfigMap(ctx, kubeClient, testNamespace, configMapName, map[string]string{"key": "v3"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for second reload with different annotation value")
Eventually(func() string {
val, _ := adapter.GetPodTemplateAnnotation(ctx, testNamespace, workloadName,
utils.AnnotationLastReloadedFrom)
return val
}, utils.ReloadTimeout, utils.DefaultInterval).ShouldNot(Equal(firstReloadValue),
"Reload annotation should change after second update")
},
Entry("Deployment", utils.WorkloadDeployment),
Entry("DaemonSet", utils.WorkloadDaemonSet),
Entry("StatefulSet", utils.WorkloadStatefulSet),
Entry("ArgoRollout", Label("argo"), utils.WorkloadArgoRollout),
Entry("DeploymentConfig", Label("openshift"), utils.WorkloadDeploymentConfig),
)
DescribeTable("should reload when either ConfigMap or Secret changes",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a ConfigMap and Secret")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"config": "initial"}, nil)
Expect(err).NotTo(HaveOccurred())
_, err = utils.CreateSecretFromStrings(ctx, kubeClient, testNamespace, secretName,
map[string]string{"secret": "initial"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating workload referencing both")
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
ConfigMapName: configMapName,
SecretName: secretName,
UseConfigMapEnvFrom: true,
UseSecretEnvFrom: true,
Annotations: utils.MergeAnnotations(
utils.BuildConfigMapReloadAnnotation(configMapName),
utils.BuildSecretReloadAnnotation(secretName),
),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
By("Waiting for workload to be ready")
err = adapter.WaitReady(ctx, testNamespace, workloadName, utils.WorkloadReadyTimeout)
Expect(err).NotTo(HaveOccurred())
By("Updating the Secret")
err = utils.UpdateSecretFromStrings(ctx, kubeClient, testNamespace, secretName, map[string]string{"secret": "updated"})
Expect(err).NotTo(HaveOccurred())
By("Waiting for workload to be reloaded")
reloaded, err := adapter.WaitReloaded(ctx, testNamespace, workloadName,
utils.AnnotationLastReloadedFrom, utils.ReloadTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeTrue(), "%s should reload when Secret changes", workloadType)
},
Entry("Deployment", utils.WorkloadDeployment),
Entry("DaemonSet", utils.WorkloadDaemonSet),
Entry("StatefulSet", utils.WorkloadStatefulSet),
Entry("ArgoRollout", Label("argo"), utils.WorkloadArgoRollout),
Entry("DeploymentConfig", Label("openshift"), utils.WorkloadDeploymentConfig),
)
DescribeTable("should NOT reload with auto=false annotation",
func(workloadType utils.WorkloadType) {
adapter := registry.Get(workloadType)
if adapter == nil {
Skip(fmt.Sprintf("%s adapter not available (CRD not installed)", workloadType))
}
By("Creating a ConfigMap")
_, err := utils.CreateConfigMap(ctx, kubeClient, testNamespace, configMapName,
map[string]string{"key": "initial"}, nil)
Expect(err).NotTo(HaveOccurred())
By("Creating workload with auto=false annotation")
err = adapter.Create(ctx, testNamespace, workloadName, utils.WorkloadConfig{
ConfigMapName: configMapName,
UseConfigMapEnvFrom: true,
Annotations: utils.BuildAutoFalseAnnotation(),
})
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() { _ = adapter.Delete(ctx, testNamespace, workloadName) })
By("Waiting for workload to be ready")
err = adapter.WaitReady(ctx, testNamespace, workloadName, utils.WorkloadReadyTimeout)
Expect(err).NotTo(HaveOccurred())
By("Updating the ConfigMap data")
err = utils.UpdateConfigMap(ctx, kubeClient, testNamespace, configMapName, map[string]string{"key": "updated"})
Expect(err).NotTo(HaveOccurred())
By("Verifying workload is NOT reloaded (auto=false)")
time.Sleep(utils.NegativeTestWait)
reloaded, err := adapter.WaitReloaded(ctx, testNamespace, workloadName,
utils.AnnotationLastReloadedFrom, utils.ShortTimeout)
Expect(err).NotTo(HaveOccurred())
Expect(reloaded).To(BeFalse(), "%s with auto=false should NOT be reloaded", workloadType)
},
Entry("Deployment", utils.WorkloadDeployment),
Entry("DaemonSet", utils.WorkloadDaemonSet),
Entry("StatefulSet", utils.WorkloadStatefulSet),
Entry("ArgoRollout", Label("argo"), utils.WorkloadArgoRollout),
Entry("DeploymentConfig", Label("openshift"), utils.WorkloadDeploymentConfig),
)
})
// ============================================================

View File

@@ -32,30 +32,25 @@ var _ = BeforeSuite(func() {
var err error
ctx, cancel = context.WithCancel(context.Background())
// Setup test environment
testEnv, err = utils.SetupTestEnvironment(ctx, "reloader-csi-test")
Expect(err).NotTo(HaveOccurred(), "Failed to setup test environment")
// Export for use in tests
kubeClient = testEnv.KubeClient
csiClient = testEnv.CSIClient
restConfig = testEnv.RestConfig
testNamespace = testEnv.Namespace
// Skip entire suite if CSI driver not installed
if !utils.IsCSIDriverInstalled(ctx, csiClient) {
Skip("CSI secrets store driver not installed - skipping CSI suite")
}
// Skip entire suite if Vault CSI provider not installed
if !utils.IsVaultProviderInstalled(ctx, kubeClient) {
Skip("Vault CSI provider not installed - skipping CSI suite")
}
// Deploy Reloader with annotations strategy and CSI integration enabled
err = testEnv.DeployAndWait(map[string]string{
"reloader.reloadStrategy": "annotations",
"reloader.watchGlobally": "false", // Only watch own namespace to prevent cross-talk between test suites
"reloader.enableCSIIntegration": "true",
})
Expect(err).NotTo(HaveOccurred(), "Failed to deploy Reloader")

View File

@@ -23,7 +23,6 @@ var _ = Describe("CSI SecretProviderClass Tests", Label("csi"), func() {
deploymentName = utils.RandName("deploy")
configMapName = utils.RandName("cm")
spcName = utils.RandName("spc")
// Each test gets its own Vault secret path to avoid conflicts
vaultSecretPath = fmt.Sprintf("secret/%s", utils.RandName("test"))
adapter = utils.NewDeploymentAdapter(kubeClient)
})
@@ -32,7 +31,6 @@ var _ = Describe("CSI SecretProviderClass Tests", Label("csi"), func() {
_ = utils.DeleteDeployment(ctx, kubeClient, testNamespace, deploymentName)
_ = utils.DeleteConfigMap(ctx, kubeClient, testNamespace, configMapName)
_ = utils.DeleteSecretProviderClass(ctx, csiClient, testNamespace, spcName)
// Clean up Vault secret
_ = utils.DeleteVaultSecret(ctx, kubeClient, restConfig, vaultSecretPath)
})
@@ -80,7 +78,6 @@ var _ = Describe("CSI SecretProviderClass Tests", Label("csi"), func() {
Expect(err).NotTo(HaveOccurred())
By("Waiting for CSI driver to sync the new secret version")
// CSI rotation poll interval is 10s, wait up to 30s for sync
err = utils.WaitForSPCPSVersionChange(ctx, csiClient, testNamespace, spcpsName, initialVersion, 10*time.Second)
Expect(err).NotTo(HaveOccurred())
GinkgoWriter.Println("CSI driver synced new secret version")

View File

@@ -27,16 +27,11 @@ var _ = BeforeSuite(func() {
var err error
ctx = context.Background()
// Setup test environment (but don't deploy Reloader - tests do that with specific flags)
testEnv, err = utils.SetupTestEnvironment(ctx, "reloader-flags")
Expect(err).NotTo(HaveOccurred(), "Failed to setup test environment")
// Export for use in tests
kubeClient = testEnv.KubeClient
testNamespace = testEnv.Namespace
// Note: Unlike other suites, we don't deploy Reloader here.
// Each test deploys with specific flag configurations.
})
var _ = AfterSuite(func() {
@@ -51,7 +46,6 @@ var _ = AfterSuite(func() {
// deployReloaderWithFlags deploys Reloader with the specified Helm value overrides.
// This is a convenience function for tests that need to deploy with specific flags.
func deployReloaderWithFlags(values map[string]string) error {
// Always include annotations strategy
if values == nil {
values = make(map[string]string)
}

View File

@@ -34,11 +34,9 @@ var _ = Describe("Ignore Resources Flag Tests", func() {
Context("with ignoreSecrets=true flag", func() {
BeforeEach(func() {
// Create test namespace
err := utils.CreateNamespace(ctx, kubeClient, ignoreNS)
Expect(err).NotTo(HaveOccurred())
// Deploy Reloader with ignoreSecrets flag
err = deployReloaderWithFlags(map[string]string{
"reloader.ignoreSecrets": "true",
})
@@ -113,11 +111,9 @@ var _ = Describe("Ignore Resources Flag Tests", func() {
Context("with ignoreConfigMaps=true flag", func() {
BeforeEach(func() {
// Create test namespace
err := utils.CreateNamespace(ctx, kubeClient, ignoreNS)
Expect(err).NotTo(HaveOccurred())
// Deploy Reloader with ignoreConfigMaps flag
err = deployReloaderWithFlags(map[string]string{
"reloader.ignoreConfigMaps": "true",
})

View File

@@ -33,11 +33,9 @@ var _ = Describe("Ignored Workloads Flag Tests", func() {
Context("with ignoreCronJobs=true flag", func() {
BeforeEach(func() {
// Create test namespace
err := utils.CreateNamespace(ctx, kubeClient, ignoreNS)
Expect(err).NotTo(HaveOccurred())
// Deploy Reloader with ignoreCronJobs flag
err = deployReloaderWithFlags(map[string]string{
"reloader.ignoreCronJobs": "true",
})
@@ -113,11 +111,9 @@ var _ = Describe("Ignored Workloads Flag Tests", func() {
Context("with both ignoreCronJobs=true and ignoreJobs=true flags", func() {
BeforeEach(func() {
// Create test namespace
err := utils.CreateNamespace(ctx, kubeClient, ignoreNS)
Expect(err).NotTo(HaveOccurred())
// Deploy Reloader with both ignore flags
err = deployReloaderWithFlags(map[string]string{
"reloader.ignoreCronJobs": "true",
"reloader.ignoreJobs": "true",

View File

@@ -31,11 +31,9 @@ var _ = Describe("Reload On Create Flag Tests", func() {
Context("with reloadOnCreate=true flag", func() {
BeforeEach(func() {
// Create test namespace
err := utils.CreateNamespace(ctx, kubeClient, createNamespace)
Expect(err).NotTo(HaveOccurred())
// Deploy Reloader with reloadOnCreate flag
err = deployReloaderWithFlags(map[string]string{
"reloader.reloadOnCreate": "true",
})
@@ -102,11 +100,9 @@ var _ = Describe("Reload On Create Flag Tests", func() {
Context("with reloadOnCreate=false (default)", func() {
BeforeEach(func() {
// Create test namespace
err := utils.CreateNamespace(ctx, kubeClient, createNamespace)
Expect(err).NotTo(HaveOccurred())
// Deploy Reloader without reloadOnCreate flag (default is false)
err = deployReloaderWithFlags(map[string]string{})
Expect(err).NotTo(HaveOccurred())

View File

@@ -31,11 +31,9 @@ var _ = Describe("Reload On Delete Flag Tests", func() {
Context("with reloadOnDelete=true flag", func() {
BeforeEach(func() {
// Create test namespace
err := utils.CreateNamespace(ctx, kubeClient, deleteNamespace)
Expect(err).NotTo(HaveOccurred())
// Deploy Reloader with reloadOnDelete flag
err = deployReloaderWithFlags(map[string]string{
"reloader.reloadOnDelete": "true",
})
@@ -109,11 +107,9 @@ var _ = Describe("Reload On Delete Flag Tests", func() {
Context("with reloadOnDelete=false (default)", func() {
BeforeEach(func() {
// Create test namespace
err := utils.CreateNamespace(ctx, kubeClient, deleteNamespace)
Expect(err).NotTo(HaveOccurred())
// Deploy Reloader without reloadOnDelete flag (default is false)
err = deployReloaderWithFlags(map[string]string{})
Expect(err).NotTo(HaveOccurred())

View File

@@ -34,11 +34,9 @@ var _ = Describe("Resource Label Selector Flag Tests", func() {
Context("with resourceLabelSelector flag", func() {
BeforeEach(func() {
// Create test namespace
err := utils.CreateNamespace(ctx, kubeClient, resourceNS)
Expect(err).NotTo(HaveOccurred())
// Deploy Reloader with resourceLabelSelector flag
err = deployReloaderWithFlags(map[string]string{
"reloader.resourceLabelSelector": "reload=true",
})
@@ -57,7 +55,7 @@ var _ = Describe("Resource Label Selector Flag Tests", func() {
By("Creating a ConfigMap with matching label")
_, err := utils.CreateConfigMapWithLabels(ctx, kubeClient, resourceNS, matchingCM,
map[string]string{"key": "initial"},
map[string]string{"reload": "true"}, nil) // no annotations
Expect(err).NotTo(HaveOccurred())
By("Creating a Deployment with auto annotation")

View File

@@ -25,7 +25,6 @@ var _ = Describe("Watch Globally Flag Tests", func() {
})
AfterEach(func() {
// Clean up resources in both namespaces
_ = utils.DeleteDeployment(ctx, kubeClient, testNamespace, deploymentName)
_ = utils.DeleteConfigMap(ctx, kubeClient, testNamespace, configMapName)
_ = utils.DeleteDeployment(ctx, kubeClient, otherNS, deploymentName)
@@ -34,12 +33,9 @@ var _ = Describe("Watch Globally Flag Tests", func() {
Context("with watchGlobally=false flag", func() {
BeforeEach(func() {
// Create the other namespace for testing cross-namespace behavior
err := utils.CreateNamespace(ctx, kubeClient, otherNS)
Expect(err).NotTo(HaveOccurred())
// Deploy Reloader with watchGlobally=false
// This makes Reloader only watch resources in its own namespace (testNamespace)
err = deployReloaderWithFlags(map[string]string{
"reloader.watchGlobally": "false",
})
@@ -118,11 +114,9 @@ var _ = Describe("Watch Globally Flag Tests", func() {
BeforeEach(func() {
globalNS = "global-" + utils.RandName("ns")
// Create test namespace
err := utils.CreateNamespace(ctx, kubeClient, globalNS)
Expect(err).NotTo(HaveOccurred())
// Deploy Reloader with watchGlobally=true (default)
err = deployReloaderWithFlags(map[string]string{
"reloader.watchGlobally": "true",
})

View File

@@ -101,7 +101,7 @@ var (
return c.Spec.JobTemplate.Spec.Template.Spec.Containers
}
CronJobExists StatusAccessor[*batchv1.CronJob] = func(c *batchv1.CronJob) bool {
return true // Just existence check
}
)

View File

@@ -260,8 +260,6 @@ func TestJoinNames(t *testing.T) {
}
func TestAnnotationConstants(t *testing.T) {
// Verify annotation constants have expected values
// This ensures we don't accidentally change the annotation keys
tests := []struct {
name string
constant string
@@ -293,7 +291,6 @@ func TestAnnotationConstants(t *testing.T) {
}
func TestAnnotationValues(t *testing.T) {
// Verify annotation value constants
if AnnotationValueTrue != "true" {
t.Errorf("AnnotationValueTrue = %q, want \"true\"", AnnotationValueTrue)
}

View File

@@ -99,7 +99,6 @@ func CreateSecretProviderClass(ctx context.Context, client csiclient.Interface,
*csiv1.SecretProviderClass, error,
) {
if params == nil {
// Default Vault-compatible parameters for testing
params = map[string]string{
"vaultAddress": VaultAddress,
"roleName": VaultRole,
@@ -133,8 +132,6 @@ func CreateSecretProviderClass(ctx context.Context, client csiclient.Interface,
func CreateSecretProviderClassWithSecret(ctx context.Context, client csiclient.Interface, namespace, name, secretPath, secretKey string) (
*csiv1.SecretProviderClass, error,
) {
// Convert KV v1 style path to KV v2 data path
// "secret/foo" -> "secret/data/foo"
kvV2Path := secretPath
if strings.HasPrefix(secretPath, "secret/") && !strings.HasPrefix(secretPath, "secret/data/") {
kvV2Path = strings.Replace(secretPath, "secret/", "secret/data/", 1)
@@ -199,8 +196,6 @@ func CreateVaultSecret(ctx context.Context, kubeClient kubernetes.Interface, res
// secretPath should be like "secret/test" (without "data" prefix - it's added automatically).
// data is a map of key-value pairs to store in the secret.
func UpdateVaultSecret(ctx context.Context, kubeClient kubernetes.Interface, restConfig *rest.Config, secretPath string, data map[string]string) error {
// Build the vault kv put command
// Format: vault kv put secret/path key1=value1 key2=value2
args := []string{"kv", "put", secretPath}
for k, v := range data {
args = append(args, fmt.Sprintf("%s=%s", k, v))
@@ -217,7 +212,6 @@ func UpdateVaultSecret(ctx context.Context, kubeClient kubernetes.Interface, res
func DeleteVaultSecret(ctx context.Context, kubeClient kubernetes.Interface, restConfig *rest.Config, secretPath string) error {
args := []string{"kv", "metadata", "delete", secretPath}
if err := execInVaultPod(ctx, kubeClient, restConfig, args); err != nil {
// Ignore not found errors
if strings.Contains(err.Error(), "No value found") {
return nil
}
@@ -281,7 +275,6 @@ func WaitForSPCPSVersionChange(ctx context.Context, client csiclient.Interface,
func FindSPCPSForDeployment(ctx context.Context, csiClient csiclient.Interface, kubeClient kubernetes.Interface, namespace, deploymentName string, timeout time.Duration) (
string, error,
) {
// Get pods for the deployment
pods, err := kubeClient.CoreV1().Pods(namespace).List(
ctx, metav1.ListOptions{
LabelSelector: fmt.Sprintf("app=%s", deploymentName),
@@ -300,7 +293,6 @@ func FindSPCPSForDeployment(ctx context.Context, csiClient csiclient.Interface,
return csiClient.SecretsstoreV1().SecretProviderClassPodStatuses(namespace).Watch(ctx, opts)
}
// Watch all SPCPS (empty name) and find one that matches any pod
spcps, err := WatchUntil(ctx, watchFunc, "", SPCPSForPods(podNames), timeout)
if errors.Is(err, ErrWatchTimeout) {
return "", fmt.Errorf("timeout finding SecretProviderClassPodStatus for deployment %s/%s", namespace, deploymentName)
@@ -318,7 +310,6 @@ func FindSPCPSForSPC(ctx context.Context, csiClient csiclient.Interface, namespa
return csiClient.SecretsstoreV1().SecretProviderClassPodStatuses(namespace).Watch(ctx, opts)
}
// Watch all SPCPS (empty name) and find one that matches the SPC
spcps, err := WatchUntil(ctx, watchFunc, "", SPCPSForSPC(spcName), timeout)
if errors.Is(err, ErrWatchTimeout) {
return "", fmt.Errorf("timeout finding SecretProviderClassPodStatus for SPC %s/%s", namespace, spcName)
@@ -339,7 +330,6 @@ func GetSPCPSVersion(ctx context.Context, client csiclient.Interface, namespace,
if len(spcps.Status.Objects) == 0 {
return "", nil
}
// Return concatenated versions for all objects to detect any change
var versions []string
for _, obj := range spcps.Status.Objects {
versions = append(versions, obj.Version)

View File

@@ -59,8 +59,6 @@ func DeployReloader(opts DeployOptions) error {
opts.Image = GetTestImage()
}
// Clean up any existing cluster-scoped resources before deploying
// This prevents "already exists" errors when a previous test didn't clean up properly
cleanupClusterResources(opts.ReleaseName)
chartPath := filepath.Join(projectDir, DefaultHelmChartPath)
@@ -70,7 +68,7 @@ func DeployReloader(opts DeployOptions) error {
chartPath,
"--namespace", opts.Namespace,
"--create-namespace",
"--reset-values", // Important: reset values to ensure clean state between tests
"--set", fmt.Sprintf("image.repository=%s", GetImageRepository(opts.Image)),
"--set", fmt.Sprintf("image.tag=%s", GetImageTag(opts.Image)),
"--set", "image.pullPolicy=IfNotPresent",
@@ -78,7 +76,6 @@ func DeployReloader(opts DeployOptions) error {
"--timeout", opts.Timeout,
}
// Add custom values
for key, value := range opts.Values {
args = append(args, "--set", fmt.Sprintf("%s=%s", key, value))
}
@@ -100,15 +97,12 @@ func UndeployReloader(namespace, releaseName string) error {
releaseName = DefaultHelmReleaseName
}
// Use --wait to ensure Helm waits for resources to be deleted
cmd := exec.Command("helm", "uninstall", releaseName, "--namespace", namespace, "--ignore-not-found", "--wait")
output, err := Run(cmd)
if err != nil {
return fmt.Errorf("helm uninstall failed: %s: %w", output, err)
}
// Clean up cluster-scoped resources that Helm might not delete
// Use --wait to ensure resources are fully deleted before returning
clusterResources := []struct {
kind string
name string
@@ -119,11 +113,9 @@ func UndeployReloader(namespace, releaseName string) error {
for _, res := range clusterResources {
cmd := exec.Command("kubectl", "delete", res.kind, res.name, "--ignore-not-found", "--wait=true")
_, _ = Run(cmd) // Ignore errors - resource may not exist
}
// Additional wait to ensure controller is fully stopped and resources are cleaned up
// This prevents race conditions when the next test tries to deploy immediately
waitForReloaderGone(namespace, releaseName)
return nil
@@ -133,7 +125,6 @@ func UndeployReloader(namespace, releaseName string) error {
func waitForReloaderGone(namespace, releaseName string) {
deploymentName := ReloaderDeploymentName(releaseName)
// Poll until deployment is gone (max 30 seconds)
for i := 0; i < 30; i++ {
cmd := exec.Command("kubectl", "get", "deployment", deploymentName, "-n", namespace, "--ignore-not-found", "-o", "name")
output, _ := Run(cmd)
@@ -164,7 +155,6 @@ func cleanupClusterResources(releaseName string) {
_, _ = Run(cmd)
}
// Small wait to ensure API server has processed the deletions
time.Sleep(500 * time.Millisecond)
}
@@ -184,7 +174,6 @@ func GetImageRepository(image string) string {
return image[:i]
}
if image[i] == '/' {
// No tag found, return as-is
break
}
}
@@ -200,7 +189,6 @@ func GetImageTag(image string) string {
return image[i+1:]
}
if image[i] == '/' {
// No tag found
break
}
}

View File

@@ -28,7 +28,7 @@ func TestGetImageRepository(t *testing.T) {
{
name: "image with digest (not fully supported)",
image: "nginx@sha256:abc123",
expected: "nginx@sha256", // Note: digest handling is limited
},
{
name: "simple image name",

View File

@@ -1,27 +0,0 @@
package utils
import (
"fmt"
"os"
"os/exec"
)
// GetKindClusterName returns the Kind cluster name from the KIND_CLUSTER environment variable,
// or "kind" as the default.
func GetKindClusterName() string {
if cluster := os.Getenv("KIND_CLUSTER"); cluster != "" {
return cluster
}
return "kind"
}
// LoadImageToKindCluster loads a Docker image into the Kind cluster using the default cluster name.
func LoadImageToKindCluster(image string) error {
cmd := exec.Command("kind", "load", "docker-image", image, "--name", GetKindClusterName())
output, err := Run(cmd)
if err != nil {
return fmt.Errorf("failed to load image %s to Kind cluster: %w\nOutput: %s",
image, err, output)
}
return nil
}

View File

@@ -196,7 +196,6 @@ func AddInitContainerWithVolumes(spec *corev1.PodSpec, cmName, secretName string
// ApplyWorkloadConfig applies all WorkloadConfig settings to a PodTemplateSpec.
// This includes both pod template annotations and pod spec configuration.
func ApplyWorkloadConfig(template *corev1.PodTemplateSpec, cfg WorkloadConfig) {
// Apply pod template annotations
if len(cfg.PodTemplateAnnotations) > 0 {
if template.Annotations == nil {
template.Annotations = make(map[string]string)
@@ -206,7 +205,6 @@ func ApplyWorkloadConfig(template *corev1.PodTemplateSpec, cfg WorkloadConfig) {
}
}
// Apply pod spec configuration
spec := &template.Spec
if cfg.UseConfigMapEnvFrom && cfg.ConfigMapName != "" {
AddEnvFromSource(spec, 0, cfg.ConfigMapName, false)

View File

@@ -21,13 +21,11 @@ func TestRandSeq(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
result := RandSeq(tt.length)
-			// Verify length
if len(result) != tt.length {
t.Errorf("RandSeq(%d) returned string of length %d, want %d",
tt.length, len(result), tt.length)
}
-			// Verify only lowercase letters
if tt.length > 0 {
matched, _ := regexp.MatchString("^[a-z]+$", result)
if !matched {
@@ -39,8 +37,6 @@ func TestRandSeq(t *testing.T) {
}
func TestRandSeqRandomness(t *testing.T) {
-	// Generate multiple sequences and verify they're different
-	// (with very high probability)
const iterations = 10
const length = 20
@@ -48,13 +44,11 @@ func TestRandSeqRandomness(t *testing.T) {
for i := 0; i < iterations; i++ {
s := RandSeq(length)
if seen[s] {
-			// This is extremely unlikely with 20 chars (26^20 possibilities)
t.Errorf("RandSeq generated duplicate: %q", s)
}
seen[s] = true
}
-	// Verify we got 10 unique strings
if len(seen) != iterations {
t.Errorf("Expected %d unique strings, got %d", iterations, len(seen))
}
@@ -76,20 +70,17 @@ func TestRandName(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
result := RandName(tt.prefix)
-			// Verify format: prefix-xxxxx
expectedPrefix := tt.prefix + "-"
if len(result) <= len(expectedPrefix) {
t.Errorf("RandName(%q) = %q, too short", tt.prefix, result)
return
}
-			// Check prefix
if result[:len(expectedPrefix)] != expectedPrefix {
t.Errorf("RandName(%q) = %q, doesn't start with %q",
tt.prefix, result, expectedPrefix)
}
-			// Check random suffix is 5 lowercase letters
suffix := result[len(expectedPrefix):]
if len(suffix) != 5 {
t.Errorf("RandName(%q) suffix length = %d, want 5", tt.prefix, len(suffix))
@@ -105,7 +96,6 @@ func TestRandName(t *testing.T) {
}
func TestRandNameUniqueness(t *testing.T) {
-	// Generate multiple names with same prefix and verify uniqueness
const prefix = "test"
const iterations = 100
@@ -120,9 +110,6 @@ func TestRandNameUniqueness(t *testing.T) {
}
func TestRandNameKubernetesCompatibility(t *testing.T) {
-	// Verify generated names are valid Kubernetes resource names
-	// Must match: [a-z0-9]([-a-z0-9]*[a-z0-9])?
prefixes := []string{"deploy", "cm", "secret", "test-app", "my-resource"}
k8sNamePattern := regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

View File

@@ -781,7 +781,7 @@ func baseCronJobResource(namespace, name string) *batchv1.CronJob {
Namespace: namespace,
},
Spec: batchv1.CronJobSpec{
-			Schedule: "* * * * *", // Every minute
+			Schedule: "* * * * *",
JobTemplate: batchv1.JobTemplateSpec{
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
@@ -898,6 +898,13 @@ func WithJobSecretKeyRef(secretName, key, envVarName string) JobOption {
}
}
+// WithJobCommand sets the command for the Job's container.
+func WithJobCommand(command string) JobOption {
+	return func(j *batchv1.Job) {
+		j.Spec.Template.Spec.Containers[0].Command = []string{"sh", "-c", command}
+	}
+}
// WithJobCSIVolume adds a CSI volume referencing a SecretProviderClass to a Job.
func WithJobCSIVolume(spcName string) JobOption {
return func(j *batchv1.Job) {

View File

@@ -46,7 +46,7 @@ func TestMergeAnnotations(t *testing.T) {
"key1": "value1",
"key2": "value2",
"key3": "value3",
-				"shared": "third", // Last map wins
+				"shared": "third",
},
},
{
@@ -118,13 +118,11 @@ func TestMergeAnnotations(t *testing.T) {
}
func TestMergeAnnotationsDoesNotModifyInput(t *testing.T) {
-	// Ensure MergeAnnotations doesn't modify the input maps
map1 := map[string]string{"key1": "value1"}
map2 := map[string]string{"key2": "value2"}
_ = MergeAnnotations(map1, map2)
-	// Verify original maps are unchanged
if len(map1) != 1 || map1["key1"] != "value1" {
t.Errorf("map1 was modified: %v", map1)
}
@@ -134,14 +132,11 @@ func TestMergeAnnotationsDoesNotModifyInput(t *testing.T) {
}
func TestMergeAnnotationsReturnsNewMap(t *testing.T) {
-	// Ensure MergeAnnotations returns a new map, not a reference to an input
input := map[string]string{"key1": "value1"}
result := MergeAnnotations(input)
-	// Modify the result
result["key2"] = "value2"
-	// Verify original is unchanged
if _, exists := input["key2"]; exists {
t.Error("modifying result affected input map - should return a new map")
}

View File

@@ -13,7 +13,7 @@ import (
// Timeout constants for watch operations.
const (
-	DefaultInterval = 1 * time.Second // Polling interval (legacy, will be removed)
+	DefaultInterval = 1 * time.Second // Polling interval
ShortTimeout = 5 * time.Second // Quick checks
NegativeTestWait = 3 * time.Second // Wait before checking negative conditions
WorkloadReadyTimeout = 60 * time.Second // Workload readiness timeout (buffer for CI)
@@ -65,7 +65,6 @@ func WatchUntil[T runtime.Object](ctx context.Context, watchFunc WatchFunc, name
if done {
return result, err
}
-		// Watch disconnected, retry after brief pause
select {
case <-ctx.Done():
return zero, ErrWatchTimeout
@@ -85,7 +84,7 @@ func watchOnce[T runtime.Object](
watcher, err := watchFunc(ctx, opts)
if err != nil {
-		return zero, false, nil // Retry
+		return zero, false, nil
}
defer watcher.Stop()
@@ -95,7 +94,7 @@ func watchOnce[T runtime.Object](
return zero, true, ErrWatchTimeout
case event, ok := <-watcher.ResultChan():
if !ok {
-			return zero, false, nil // Watch closed, retry
+			return zero, false, nil
}
switch event.Type {
@@ -108,10 +107,9 @@ func watchOnce[T runtime.Object](
return obj, true, nil
}
case watch.Deleted:
-			// Resource deleted, keep watching for recreation
continue
case watch.Error:
-			return zero, false, nil // Retry on error
+			return zero, false, nil
}
}
}

View File

@@ -82,6 +82,10 @@ type WorkloadAdapter interface {
// RequiresSpecialHandling returns true for workloads that need special handling.
// For example, CronJob triggers a new job instead of rolling restart.
RequiresSpecialHandling() bool
+	// GetPodTemplateAnnotation returns the value of a pod template annotation.
+	// This is useful for tests that need to compare annotation values before/after updates.
+	GetPodTemplateAnnotation(ctx context.Context, namespace, name, annotationKey string) (string, error)
}
// Pausable is implemented by workloads that support pause/unpause.

View File

@@ -92,6 +92,15 @@ func (a *ArgoRolloutAdapter) RequiresSpecialHandling() bool {
return false
}
+// GetPodTemplateAnnotation returns the value of a pod template annotation.
+func (a *ArgoRolloutAdapter) GetPodTemplateAnnotation(ctx context.Context, namespace, name, annotationKey string) (string, error) {
+	rollout, err := a.rolloutsClient.ArgoprojV1alpha1().Rollouts(namespace).Get(ctx, name, metav1.GetOptions{})
+	if err != nil {
+		return "", err
+	}
+	return rollout.Spec.Template.Annotations[annotationKey], nil
+}
// baseRollout returns a minimal Rollout template.
func baseRollout(name string) *rolloutv1alpha1.Rollout {
return &rolloutv1alpha1.Rollout{
@@ -128,7 +137,6 @@ func baseRollout(name string) *rolloutv1alpha1.Rollout {
func buildRolloutOptions(cfg WorkloadConfig) []RolloutOption {
return []RolloutOption{
func(r *rolloutv1alpha1.Rollout) {
-			// Set annotations on Rollout level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if r.Annotations == nil {
r.Annotations = make(map[string]string)

View File

@@ -79,11 +79,19 @@ func (a *CronJobAdapter) WaitForTriggeredJob(ctx context.Context, namespace, cro
return HandleWatchResult(err)
}
+// GetPodTemplateAnnotation returns the value of a pod template annotation.
+func (a *CronJobAdapter) GetPodTemplateAnnotation(ctx context.Context, namespace, name, annotationKey string) (string, error) {
+	cj, err := a.client.BatchV1().CronJobs(namespace).Get(ctx, name, metav1.GetOptions{})
+	if err != nil {
+		return "", err
+	}
+	return cj.Spec.JobTemplate.Spec.Template.Annotations[annotationKey], nil
+}
// buildCronJobOptions converts WorkloadConfig to CronJobOption slice.
func buildCronJobOptions(cfg WorkloadConfig) []CronJobOption {
return []CronJobOption{
func(cj *batchv1.CronJob) {
// Set annotations on CronJob level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if cj.Annotations == nil {
cj.Annotations = make(map[string]string)
@@ -92,7 +100,6 @@ func buildCronJobOptions(cfg WorkloadConfig) []CronJobOption {
cj.Annotations[k] = v
}
}
-			// CronJob has nested JobTemplate
ApplyWorkloadConfig(&cj.Spec.JobTemplate.Spec.Template, cfg)
},
}

View File

@@ -74,11 +74,19 @@ func (a *DaemonSetAdapter) RequiresSpecialHandling() bool {
return false
}
+// GetPodTemplateAnnotation returns the value of a pod template annotation.
+func (a *DaemonSetAdapter) GetPodTemplateAnnotation(ctx context.Context, namespace, name, annotationKey string) (string, error) {
+	ds, err := a.client.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
+	if err != nil {
+		return "", err
+	}
+	return ds.Spec.Template.Annotations[annotationKey], nil
+}
// buildDaemonSetOptions converts WorkloadConfig to DaemonSetOption slice.
func buildDaemonSetOptions(cfg WorkloadConfig) []DaemonSetOption {
return []DaemonSetOption{
func(ds *appsv1.DaemonSet) {
// Set annotations on DaemonSet level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if ds.Annotations == nil {
ds.Annotations = make(map[string]string)

View File

@@ -92,11 +92,19 @@ func (a *DeploymentAdapter) RequiresSpecialHandling() bool {
return false
}
+// GetPodTemplateAnnotation returns the value of a pod template annotation.
+func (a *DeploymentAdapter) GetPodTemplateAnnotation(ctx context.Context, namespace, name, annotationKey string) (string, error) {
+	deploy, err := a.client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
+	if err != nil {
+		return "", err
+	}
+	return deploy.Spec.Template.Annotations[annotationKey], nil
+}
// buildDeploymentOptions converts WorkloadConfig to DeploymentOption slice.
func buildDeploymentOptions(cfg WorkloadConfig) []DeploymentOption {
return []DeploymentOption{
func(d *appsv1.Deployment) {
// Set annotations on deployment level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if d.Annotations == nil {
d.Annotations = make(map[string]string)

View File

@@ -13,7 +13,6 @@ import (
)
// JobAdapter implements WorkloadAdapter for Kubernetes Jobs.
-// Note: Jobs are handled specially by Reloader - they are recreated rather than updated.
type JobAdapter struct {
client kubernetes.Interface
}
@@ -94,11 +93,19 @@ func (a *JobAdapter) GetOriginalUID(ctx context.Context, namespace, name string)
return string(job.UID), nil
}
+// GetPodTemplateAnnotation returns the value of a pod template annotation.
+func (a *JobAdapter) GetPodTemplateAnnotation(ctx context.Context, namespace, name, annotationKey string) (string, error) {
+	job, err := a.client.BatchV1().Jobs(namespace).Get(ctx, name, metav1.GetOptions{})
+	if err != nil {
+		return "", err
+	}
+	return job.Spec.Template.Annotations[annotationKey], nil
+}
// buildJobOptions converts WorkloadConfig to JobOption slice.
func buildJobOptions(cfg WorkloadConfig) []JobOption {
return []JobOption{
func(job *batchv1.Job) {
// Set annotations on Job level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if job.Annotations == nil {
job.Annotations = make(map[string]string)

View File

@@ -84,6 +84,18 @@ func (a *DeploymentConfigAdapter) RequiresSpecialHandling() bool {
return false
}
+// GetPodTemplateAnnotation returns the value of a pod template annotation.
+func (a *DeploymentConfigAdapter) GetPodTemplateAnnotation(ctx context.Context, namespace, name, annotationKey string) (string, error) {
+	dc, err := a.openshiftClient.AppsV1().DeploymentConfigs(namespace).Get(ctx, name, metav1.GetOptions{})
+	if err != nil {
+		return "", err
+	}
+	if dc.Spec.Template == nil {
+		return "", nil
+	}
+	return dc.Spec.Template.Annotations[annotationKey], nil
+}
// baseDeploymentConfig returns a minimal DeploymentConfig template.
func baseDeploymentConfig(name string) *openshiftappsv1.DeploymentConfig {
return &openshiftappsv1.DeploymentConfig{
@@ -114,7 +126,6 @@ func baseDeploymentConfig(name string) *openshiftappsv1.DeploymentConfig {
func buildDeploymentConfigOptions(cfg WorkloadConfig) []DeploymentConfigOption {
return []DeploymentConfigOption{
func(dc *openshiftappsv1.DeploymentConfig) {
-			// Set annotations on DeploymentConfig level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if dc.Annotations == nil {
dc.Annotations = make(map[string]string)

View File

@@ -74,11 +74,19 @@ func (a *StatefulSetAdapter) RequiresSpecialHandling() bool {
return false
}
+// GetPodTemplateAnnotation returns the value of a pod template annotation.
+func (a *StatefulSetAdapter) GetPodTemplateAnnotation(ctx context.Context, namespace, name, annotationKey string) (string, error) {
+	sts, err := a.client.AppsV1().StatefulSets(namespace).Get(ctx, name, metav1.GetOptions{})
+	if err != nil {
+		return "", err
+	}
+	return sts.Spec.Template.Annotations[annotationKey], nil
+}
// buildStatefulSetOptions converts WorkloadConfig to StatefulSetOption slice.
func buildStatefulSetOptions(cfg WorkloadConfig) []StatefulSetOption {
return []StatefulSetOption{
func(sts *appsv1.StatefulSet) {
// Set annotations on StatefulSet level (where Reloader checks them)
if len(cfg.Annotations) > 0 {
if sts.Annotations == nil {
sts.Annotations = make(map[string]string)